I'm highly skeptical about this paper just because the resulting images are in color. How the hell would the model even infer that from the input data?
It is an overfitted model that uses WiFi data as hints for generation:
"We consider a WiFi sensing system designed to monitor indoor environments by capturing human activity through wireless signals. The system consists of a WiFi access point, a WiFi terminal, and an RGB camera that is available only during the training phase. This setup enables the collection of paired channel state information (CSI) and image data, which are used to train an image generation model"
That's just a diffusion model (Stable Diffusion 1.5) with a custom encoder that uses CSI measurements as input. So apparently the answer is it's all hallucinated.
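To make "custom encoder" concrete, here's roughly what conditioning SD 1.5 on CSI instead of text could look like. This is a sketch under my own assumptions (subcarrier count, antenna count, token count, and architecture are all invented), not the paper's implementation:

```python
# Hypothetical CSI encoder producing embeddings shaped like the CLIP text
# embeddings (batch, 77, 768) that the Stable Diffusion 1.5 UNet
# cross-attends to. All dimensions and names are guesses.
import torch
import torch.nn as nn

class CSIEncoder(nn.Module):
    def __init__(self, n_subcarriers=64, n_antennas=3, n_tokens=77, dim=768):
        super().__init__()
        in_dim = n_subcarriers * n_antennas * 2    # amplitude + phase per step
        self.n_tokens = n_tokens
        self.proj = nn.Sequential(
            nn.Linear(in_dim, dim), nn.GELU(), nn.Linear(dim, dim),
        )
        # Learned query tokens that stand in for the 77 CLIP text tokens.
        self.tokens = nn.Parameter(torch.zeros(1, n_tokens, dim))
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4,
        )

    def forward(self, csi):                        # csi: (B, T, in_dim)
        x = self.proj(csi)                         # (B, T, 768)
        x = torch.cat([self.tokens.expand(len(x), -1, -1), x], dim=1)
        return self.attn(x)[:, : self.n_tokens]    # (B, 77, 768)

# These embeddings would be fed to the SD 1.5 UNet exactly where text
# embeddings normally go, e.g.:
#   noise_pred = unet(latents, t, encoder_hidden_states=csi_encoder(csi)).sample
```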
Right, but it's hallucinating the right colours, which to me feels like some data is leaking somewhere. There's no way WiFi sees colours.
Different materials and dyes have different dialectical properties. These examples are probably confabulation but I'm sure it's possible in principle.
Assuming you mean dielectric, but I do like the idea that different colors are different arguments in conflict with each other.
Well, perhaps it can; a 2.4 GHz antenna is just a very red lightbulb. Maybe material absorption correlates, though it would be a long shot?
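For scale (my own back-of-envelope numbers, nothing from the paper), "very red" is doing a lot of work here:

```python
# How far into the red is 2.4 GHz? Rough comparison with visible red light.
c = 299_792_458                          # speed of light, m/s
wifi_wavelength = c / 2.4e9              # ~12.5 cm
red_wavelength = 650e-9                  # ~650 nm, deep red
print(wifi_wavelength / red_wavelength)  # ~192,000x longer than red light
```

So anything WiFi "sees" is at wavelengths five orders of magnitude away from where dyes and pigments do their absorbing.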
You can't even pick colour out of infra-red-illuminated night time photography. There's no way you can pick colour out of WiFi-illuminated photography.
There would be some correlation between an object's visual color and its spectrum at other EM frequencies, since many objects share the same dyes or pigments. But it seems unlikely to be reliable across a range of objects, materials, and dyes, because there is no universal RGB dye or pigment set we all rely on. You can make the same red color many different ways, and each material will have different spectral "colors" outside the visual range. Even something as simple as black plastic can be completely transparent in other bands, the way the PS3's was to infrared. Structural colors would probably be impossible to discern, though I don't think many household objects have structural colors unless you've got a stuffed bird or fish on the wall.
If it sees the shape of a fire extinguisher, the diffusion model will "know" it should be red. But that's not all that's going on here. Hair color etc. seems impossible to guess, right? To be fair, I haven't actually read the paper, so maybe they explain this.
downvoted until you read the paper
This is largely guesswork, but I think what's happening is this: the training set contains images of a small number of rooms, taken from specific camera angles with only that one individual standing in them, along with the associated WiFi signal data. The model then learns to predict the posture of the individual given the WiFi signal data, outputting the prediction as a colour image. Since the background doesn't vary across images, the model learns to predict it consistently, accurate colors and all.
The interesting part of the whole setup is that the wifi signal seems to contain the information required to predict the posture of the individual to a reasonably high degree of accuracy, which is actually pretty cool.
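If that guess is right, training would look like standard latent-diffusion fine-tuning with a CSI window standing in for the text prompt. A rough sketch in diffusers-style code; every name here (csi_encoder, vae, unet, scheduler) is a stand-in I've made up, not the paper's code:

```python
# Hypothetical training step: latent-diffusion training with paired
# (camera frame, CSI window) batches instead of (image, caption) pairs.
import torch
import torch.nn.functional as F

def training_step(batch, csi_encoder, vae, unet, scheduler):
    images, csi = batch["image"], batch["csi"]   # paired frame + CSI window
    with torch.no_grad():
        # 0.18215 is the usual SD 1.x VAE latent scaling factor.
        latents = vae.encode(images).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (len(latents),), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    cond = csi_encoder(csi)                      # (B, 77, 768) pseudo-"prompt"
    pred = unet(noisy, t, encoder_hidden_states=cond).sample
    # With a fixed room and camera, the background is trivially memorized;
    # only the person's pose has to be read out of the CSI.
    return F.mse_loss(pred, noise)
```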
The model was trained on images of that particular room, from that particular angle. It can only generate images of that particular room.