I’m not sure where the novelty is in this research. It’s basically reporting something that has been universally known for decades.
By next year encryption will be vastly more prevalent across geostationary satellite links, and it will be entirely due to this research (the actual mechanism being “everyone who ‘knew’ this internally now being empowered to fix it, rather than uselessly ‘know’ it, because now it’s public and newsworthy and embarrassing.”)
I’ll let other people comment on the actual novel elements of the research, because those exist too. But I want to point out that some huge portion of the value of public security research is really “intellectual garbage pickup”: calling out bad technical debt that “everyone knew about” and turning it into actionable security upgrades. Security research is a good part of the reason it’s mostly safe to browse the web on public Internet connections, when it wasn’t a decade ago.
PS As someone who is very cynical about security deployment, even I thought cellular network backhauls would all be encrypted as a matter of course by now, at least in the US.
What makes this paper so unique, compared with the dozens of others that have preceded it (and attracted coverage in the tech media) over the past 20 years, that it is going to drive such rapid change?
Black Hat, DEF CON etc seem to have a presentation just about every year that can be summarised as “DVB-S is fair game if you have a few hundred bucks and a quiet afternoon.”
Here’s a decent history of the state of play up to 2009. The authors recognised even back then that this was already well-covered ground.
http://archive.hack.lu/2009/Playing%20with%20SAT%201.2%20-%2...
And more of the same from 2020.
https://media.defcon.org/DEF%20CON%2028/DEF%20CON%20Safe%20M...
Then you’ve got coverage of the cool applications of this property of VSAT hops, such as the Russian intelligence services using it as a malware exfiltration vector.
https://media.kaspersky.com/pdf/SatTurla_Solution_Paper.pdf
I know the authors, so I’m aware of the intense amount of responsible disclosure they’ve been doing, and the very substantial behind-the-scenes impact this is having. So maybe the difference is the scope of this: the number of protocols and vendors they were able to detect and attack in one go, and also the way they’re approaching disclosure. In other words, they’re being much more systematic in every aspect of the research, disclosure and re-analysis. I expect some people won’t take it seriously, and there will be more papers in the future.
I mean another way to put this is: maybe there’s a problem if you can say “there’s loads of previous work” and yet massive and systemic problems still exist. Where that problem is (holistic nature of the research or the disclosure process) is probably something you could drill down into. But you’ve basically admitted the previous research didn’t do the job, so all we’re doing is haggling about the price.
>But you’ve basically admitted the previous research didn’t do the job, so all we’re doing is haggling about the price.
it's a poor metric -- research doesn't exist to drive policy, but it does aid in decision making.
There are policies around the world that make no damn sense from even a basic scientific understanding, with little to no research done.
If some research is done, a policy maker is pointed at it, and the only response is a shoulder-shrug you don't shit-can the research and do it over -- you appoint vocal political types to campaign on the existing research.
I’ve been involved in research that led to major changes in TLS deployments across the Internet, so I can tell you that (1) research absolutely can and should be structured to drive security improvements! You’re crazy if you think that isn’t an important goal. And (2) the way you structure, identify and disclose findings matters a lot when considering how well you achieve this goal.
Just to give an example of effective change-driving work: I would argue that persistent efforts via tools like Shodan and Censys have done a huge amount to clean up the Internet, at least as compared to one-off research efforts followed by “appoint activists to do the rest.” The reason is that companies respond to persistent measurement campaigns in a way that they don’t respond to one-off PR dings.
Most of the research you cite is pretty obscure and you’d have to search for it. Most of it didn’t get a lot of follow-up. When some of the firms with unencrypted backhauls were contacted by the current researchers, they didn’t even know that their backhauls were unencrypted. Finding and communicating this stuff, then following up on it relentlessly is the difference between “we knew and nobody did anything” and “it got fixed.”
Also don’t think for a second the vocal political types can do this work without constant communication from researchers who are willing to continue this work over a period of years.
The paper seems to highlight that the novelty is in their general parser that worked across 39 different GEO satellites, and that it works with a couple hundred dollars of consumer grade equipment. From the paper:
"Our technical contributions include:
(1) We introduce a new method to self-align a motorized dish to improve signal quality. Specifically, we could receive IP traffic from 14.3% of all global Ku-band satellites from a single location with high signal quality and low error rate.
(2) We developed a general GEO traffic parser that can blindly decode IP packets from seven different protocol stacks that we observed in our scans. Five of these stacks have never been reported in any public research we are aware of."
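For a sense of what “blindly decoding IP packets” from a demodulated stream entails, here’s a minimal sketch (my own illustration, not the authors’ parser): scan raw bytes for plausible IPv4 headers using the version nibble, a sane header length, and a validating header checksum.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum over the 16-bit words of an IPv4 header.
    Returns 0 when the header's embedded checksum is correct."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def find_ipv4_packets(stream: bytes) -> list:
    """Scan an arbitrary byte stream for offsets of plausible IPv4 packets.

    Heuristics: version nibble == 4, header length between 20 and 60
    bytes, total length fits in the remaining buffer, and the header
    checksum validates. This is only the last step -- real captures
    need the link-layer framing stripped first."""
    offsets = []
    i = 0
    while i <= len(stream) - 20:
        first = stream[i]
        version, ihl = first >> 4, (first & 0x0F) * 4
        if version == 4 and 20 <= ihl <= 60 and i + ihl <= len(stream):
            total_len = struct.unpack("!H", stream[i + 2:i + 4])[0]
            if total_len >= ihl and i + total_len <= len(stream):
                if ipv4_checksum(stream[i:i + ihl]) == 0:
                    offsets.append(i)
                    i += total_len
                    continue
        i += 1
    return offsets
```

A real parser also has to undo the link-layer encapsulation of each of those seven protocol stacks first (MPE, GSE, proprietary framings, etc.), which is presumably where most of the hard work in the paper lies.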
Universally known to whoever wanted to intercept that traffic.
Maybe, and hopefully, it wasn’t known to the staff of those networks (the current staff could be maintaining what somebody else set up), since some of those companies fixed the problem when contacted by the researchers.
For sure not known to me and a lot of other people. I believed that everything in digital streams was encrypted. OK, those ATM connections are probably tech from the 90s, but surely they had upgrades, in part because of regulations. Privacy, security, nothing?
It's an interesting problem. The reality is that at any decently-sized business, people don't really know their networks. Their assumptions are sane, but often simply incorrect. I've heard a lot of people say things like "well the traffic is not going externally, so it's fine to leave it unencrypted." It's a bold, and almost always unchecked, assumption.
It doesn't help that practising even reasonable security comes at such a cost that many orgs find reasons not to do it. We've spent decades creating systems that are difficult to secure at every level, hand-waving the risk away, and now it's a wobbly Jenga tower of systems.
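One cheap way to make the "it's internal, so it's fine unencrypted" assumption checkable is to actually peek at the first bytes of each flow you capture. A hypothetical first-pass heuristic (mine, not from the article): TLS sessions open with a handshake record, content type 0x16 followed by a 0x03,0x0X version.

```python
def looks_like_tls(first_bytes: bytes) -> bool:
    """Cheap check for whether a flow is actually encrypted: a TLS
    session opens with a handshake record -- content type 0x16 followed
    by a 0x03,0x0X protocol version. Plaintext protocols (HTTP, SMTP,
    home-grown backhaul framing...) fail this immediately."""
    return (len(first_bytes) >= 3
            and first_bytes[0] == 0x16
            and first_bytes[1] == 0x03
            and first_bytes[2] <= 0x04)
```

Obviously this can be fooled in both directions, but run over a capture of "internal-only" traffic it surfaces the unencrypted flows people assumed didn't exist.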
This is a major issue I have with cybersecurity articles. They're often quite clever and interesting, but the real companies I've worked for can barely implement SSO, MFA, software updates, pay for logging, write worthwhile detections, etc. The basics are quite well understood, but no one seems to acknowledge that hardly anyone can actually manage the basics.
My experience as well. My background is enterprise development, mostly what would be classed as the M in SME (Small-Medium Enterprise) with forays into the big E, and all of them fell down on even basic security in so many ways.
Example: at the largest place I worked (5000 staff, 200 in Dev/QA) I found out by accident that the outsourced devs were using personal laptops when, in a sprint meeting, I asked where someone was and got back "His work machine died, he's nipped home to get his personal laptop".
That company constantly raved about how good its security posture was...
I spoke to my opposite number on the IT/platform team and his response was "yeah we know that happens, I've been trying to get them to ban it/make it impossible for a while".
Even when the assumptions are correct, you’re depending on people doing their jobs correctly.
Over the years, I’ve found shockingly bad failures, usually in areas of internal networks where there is ambiguity as to which internal org is responsible. In old companies with both data centers and cloud, there are often pretty bad gaps.
Define “known.” To those of us who have only heard rumors, it’s good validation.
In this case, well-publicised in research presented at major conferences, and in associated media reporting over multiple decades.
It’s also true that things can get better over the decades after problems are first reported, so it’s useful information here showing that these problems are not yet fixed.
Correct, and this is why HTTPS (and encryption over the network in general) has become so popular. Traffic being interceptable is a property of cable as well: it's not hard to intercept, you just find a tap, plug in a cable and observe. It's not even obviously illegal, and there are many legitimate reasons to plug a cable into a public tap, so there are plenty of possible alibis.
Unlike wired traffic, though, a satellite downlink is blasted over a huge patch of ground, possibly as large as 1/3 of the Earth's surface.
You could be getting listened to from anywhere.
Right. But that's ok. We do not depend on the L1 signal being private, we assume it is public.
There is a weird assumption throughout the comments on this post that satellite hops are somehow different in this respect. They are not.
I mean, the article is literally about people squirting unencrypted traffic over the bent pipe.
People do this.
You can't assume that the people you pay to handle your traffic are doing it properly. You or I know not to do this, but it looks like we are not running large phone companies.
Well, according to TFA, this has gotten three satellite providers to fix it so far.
Unless you’re buying some sort of end-to-end managed WAN service, there is nothing for the satellite provider to “fix”. It’s the customer’s responsibility.
The spacecraft carries a bunch of linear transponders. RF goes in on one frequency, and comes out on another frequency.
The satellite operator leases you space on a transponder with a specified frequency, bandwidth and power limit, that they monitor very closely for compliance to ensure that you aren’t getting more than you paid for and that you aren’t causing interference.
Everything else about the signal from layer 1 upwards, i.e. the modulation, symbol rate, FEC etc., is largely for the customer to choose, though some operators like Intelsat used to be a bit more prescriptive in this respect. None are looking at your protocol stack from L2 upwards.
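Since the transponder is a bent pipe, the only transformation it applies is a fixed local-oscillator frequency shift. A toy sketch of that, using an illustrative 2300 MHz Ku-band offset (actual offsets vary by satellite and band, so treat the number as an assumption):

```python
def downlink_freq_mhz(uplink_mhz: float, lo_offset_mhz: float = 2300.0) -> float:
    """A linear ("bent pipe") transponder re-radiates whatever it hears,
    shifted down by a fixed local-oscillator offset. Modulation, symbol
    rate, FEC and everything above layer 1 pass through untouched.
    The 2300 MHz default is a common Ku-band value, assumed here purely
    for illustration."""
    return uplink_mhz - lo_offset_mhz
```

So a customer carrier uplinked at 14250 MHz would reappear on the downlink at 11950 MHz, with every bit of its (possibly unencrypted) payload intact.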