> The method takes advantage of normal network communication between connected devices and the router. These devices regularly send feedback signals within the network, known as beamforming feedback information (BFI), which are transmitted without encryption and can be read by anyone within range.

> By collecting this data, images of people can be generated from multiple perspectives, allowing individuals to be identified. Once the machine learning model has been trained, the identification process takes only a few seconds.

> In a study with 197 participants, the team could infer the identity of persons with almost 100% accuracy – independently of the perspective or their gait.

So what's the resolution of these images, and what's visible/invisible to them? Does it pick up your clothes? Your flesh? Or mostly your bones?

What happens is that a large body of water (pun intended) absorbs and reflects wifi signals as it moves through the room. For this you need to generate traffic and measure, for instance, the RSSI or CSI (roughly, signal strength and channel state) of the packets. If you increase the sampling frequency you can detect smaller movements, such as arms moving vs. a whole body, or exclude pets by reducing sensitivity. It works well for detecting presence and movement in a defined space, but ideally requires you to cross the path between two mains-powered devices, such as light bulbs or wifi mesh points. Walking past a cafe doesn't seem too likely to trigger it.
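The presence-detection idea above can be sketched in a few lines: a moving body perturbs signal strength, so RSSI variance over a window rises above the empty-room baseline. Everything here is synthetic and illustrative (the threshold, window size, and fading model are my assumptions, not any real product's parameters):

```python
# Hypothetical sketch of RSSI-based presence detection: flag presence when
# signal-strength fluctuation exceeds the empty-room noise baseline.
import numpy as np

rng = np.random.default_rng(0)

def presence_detected(rssi_window, baseline_std=0.5, factor=3.0):
    """True when RSSI variation exceeds `factor` times the empty-room baseline."""
    return np.std(rssi_window) > factor * baseline_std

# Empty room: RSSI hovers around -60 dBm with small receiver noise.
empty = -60 + rng.normal(0, 0.4, 200)

# Person walking: multipath fading adds slow, large swings on top of the noise.
t = np.linspace(0, 10, 200)
occupied = -60 + rng.normal(0, 0.4, 200) + 4 * np.sin(2 * np.pi * 0.7 * t)

print(presence_detected(empty))     # False
print(presence_detected(occupied))  # True
```

Raising `factor` is the "reduce sensitivity" knob mentioned above: a cat perturbs the channel less than a person, so a higher threshold filters it out.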

If you want to do advanced sensing, trying to identify a person, I would postulate you need to saturate a space with high frequency wifi traffic, ideally placed mesh points, and let the algo train on identifying people first by a certain signature (combination of size/weight, movement/gait, breath / chest movements).
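The identification-by-signature step could look like a nearest-match over per-person feature vectors. This is a toy sketch under my own assumptions: the feature names (apparent size, gait cadence, breathing rate) and all numbers are made up, and a real system would learn richer features from CSI/BFI rather than hand-pick three scalars:

```python
# Hypothetical sketch: match a sensed "signature" (size, gait, breathing)
# against enrolled profiles with a nearest-centroid rule. Synthetic data only.
import numpy as np

# Enrolled signatures: [apparent size (m), gait cadence (Hz), breaths/min]
profiles = {
    "alice": np.array([1.65, 1.9, 14.0]),
    "bob":   np.array([1.85, 1.6, 12.0]),
}

def identify(signature, profiles):
    """Return the enrolled name whose profile is closest in feature space."""
    return min(profiles, key=lambda name: np.linalg.norm(profiles[name] - signature))

# A noisy new measurement that should resolve to "bob".
measurement = np.array([1.83, 1.65, 12.5])
print(identify(measurement, profiles))  # bob
```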

Source: I worked on such technologies while at Signify (variants of this power Philips/Wiz "SpaceSense" feature).

More here: https://www.theverge.com/2022/9/16/23355255/signify-wiz-spac...

You are confusing it with the earlier methods. This one is similar but not the same: it doesn't use RSSI or CSI, and it is passive.

This approach relies solely on the "unencrypted parts of legitimate traffic". The attacker does not need to send any packets or "generate" their own traffic; they simply "listen" to the natural communication between an access point and its clients.

BFI is much more complex than simple signal strength. RSSI is an aggregation of information that the researchers describe as "not robust" for fine-grained tasks. In contrast, BFI is a high-resolution, compressed representation of signal characteristics. This rich data allows the system to distinguish between 197 different individuals with 99.5% accuracy, something impossible with basic RSSI.
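A toy illustration of the "aggregation" point: two clearly different per-subcarrier channel responses (the kind of detail CSI/BFI preserve) can collapse to the exact same single RSSI-style total power. The numbers below are contrived to make the collision obvious:

```python
# Why aggregate signal strength is "not robust": different channel shapes,
# identical total power.
import numpy as np

flat  = np.full(64, 1.0)                          # flat response, 64 subcarriers
faded = np.concatenate([np.zeros(32),             # deep fade on half the band,
                        np.full(32, 2.0 ** 0.5)]) # compensated on the other half

def rssi_db(h):
    """Single aggregate power value in dB, analogous to RSSI."""
    return 10 * np.log10(np.sum(h ** 2))

print(round(rssi_db(flat), 6) == round(rssi_db(faded), 6))  # True: same "RSSI"
print(np.allclose(flat, faded))                             # False: different channels
```

Anything that only sees the aggregate value cannot tell these two environments apart; a per-subcarrier representation trivially can.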

While older CSI methods often focused on walking directly between a specific transmitter and receiver (Line-of-Sight), BFI allows a single malicious node to capture "every perspective" between the router and all its legitimate clients.

Also, CSI extraction requires specialized hardware and custom firmware; this approach doesn't — just a wifi module in monitor mode.

Resolution and positional accuracy are very poor. It’s more like ‘an approximate bag of water detector’.

Gait analysis is complete fiction. Especially with a non-visual approach like this.

Given the number of gait analysis publications over several decades using varying techniques, can you recommend a good review article disproving all of them?

Given the number of publications about curing <pick your uncured disease> over several decades using varying techniques, can you recommend a good review article disproving all of them?

Answer: no need. If those techniques could have cured it, it would be cured by now. And it is not.

My point being that many publications saying "towards X" may mean that we are making some progress towards X, but they don't mean at all that X is possible.

I don’t think anyone has ever tried to publish something disproving all of the gait-analysis claims; that would be an odd sort of thing. But I have not seen anything that we could call productized and reliable. It’s relatively easy to publish theoretical papers, much harder to show it working reliably in the wild.

If you can do that you can infer when someone is home or away.

From the paper linked by jbotz

> The results for CSI can also be found in Figure 3. We find that we can identify individuals based on their normal walking style using CSI with high accuracy, here 82.4% ± 0.62.

If you're a person of interest you could be monitored, your walking pattern internalized by the model, then tracked through buildings. That's my intuition about the practical applications, and the level of detail involved.

They tested correlation between different perspectives (same scene and AP even) later in the paper and achieved an accuracy of 0%. Not to discount other methods being able to achieve that.

> So what's the resolution of these images, and what's visible/invisible to them?

The researchers never claimed to generate "images," that's editorializing by this publication. The pipeline just generates a confidence value for correlating one capture from the same sensor setup with another.

[Sidenote: did ACM really go "Open Access" but gate PDF download behind the paid tier? Or is the download link just very well hidden in their crappy PDF viewer?]

It's at least possible to record heart rate with wifi, so that suggests a broad variety of biometrics can be recorded.
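The core idea behind wifi vital-sign sensing is that chest motion modulates the channel periodically, so the dominant frequency of a channel-amplitude trace reveals the breathing or heart rate. A hedged sketch on a purely synthetic trace (the sample rate, noise level, and 72 bpm signal are all my assumptions, not figures from the linked paper):

```python
# Recover a simulated heartbeat frequency from a noisy channel-amplitude trace
# via FFT peak-picking in a plausible heart-rate band.
import numpy as np

fs = 50.0                        # samples per second
t = np.arange(0, 30, 1 / fs)     # 30-second capture
rate_hz = 1.2                    # simulated heartbeat: 72 bpm
trace = (0.05 * np.sin(2 * np.pi * rate_hz * t)
         + np.random.default_rng(1).normal(0, 0.01, t.size))

spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Restrict peak-picking to a plausible heart-rate band (0.8-3 Hz, 48-180 bpm).
band = (freqs > 0.8) & (freqs < 3.0)
estimate_bpm = round(freqs[band][np.argmax(spectrum[band])] * 60)
print(estimate_bpm)  # 72
```

The same band-limited peak-picking with a 0.1-0.5 Hz band would target breathing instead; real systems face the much harder problem of separating these tiny periodic components from everything else moving in the room.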

https://arxiv.org/abs/2510.24744