> This will be a machine for automatically generating suspects.
According to proponents, this is untrue. The intent is that looking into the database will still require a warrant, and will thus require the suspect to already have been identified.
I'm no expert, but that sounds reasonably similar to how we treat other investigative means.
At the same time, proponents have said that the whole idea of the database is to detect people with suspicious behavior.
Also, this is still nothing like getting a warrant for a wiretap: any suspicion will reveal YEARS of private information about you to the investigators. Furthermore, knowing that this can be used to identify suspects will surely have an effect on people's behavior.
They propose to include health records! What if you like to read about bomb making out of curiosity, have a relative who is in jail for violence, and you start seeing a psychiatrist? How many boxes have to be ticked before a flag is raised, and how is that going to affect what you tell the psychiatrist about how you really feel?
I also don't trust the police not to make mistakes or behave unethically, so I'm not comfortable with this. Denmark is not a very corrupt country, but we still see misuse of power. Just recently it was revealed how a police handler explicitly instructed an informant to lie in court and frame someone else, just so the handler could keep his source. Are these the kind of people who should have access to my search history and health data? No fucking thanks.
> How many boxes have to be ticked before a flag is raised
If the proponents are right, infinitely many. The information will never "raise a flag", since looking at it would require the flag to already have been raised (in the form of a warrant).
> and how is that going to affect what you tell the psychiatrist about how you really feel?
I think psychiatrists are already required to report you if they believe you're a danger to others.
> but we still see misuse of power.
This concern I sympathise with more, but I also have to imagine that this information bank could make it easier to investigate and convict this sort of misuse of power.
> If the proponents are right, an infinite amount. The information will never "raise a flag" since looking at it would require the flag to already have been raised (in the form of a warrant).
From the main critical opponent Justitia which consists of law professionals:
https://justitia-int.org/wp-content/uploads/2025/03/Justitia...
"Samtidig lægger lovforslaget op til, at PET vil kunne træne maskinlæringsmodeller til at genkende mønstre i disse data. En sådan udvikling øger overvågningstrykket markant"
Translation: "At the same time, the bill proposes that PET will be able to train machine learning models to recognize patterns in this data. Such a development significantly increases surveillance pressure"
> I think psychiatrists are already required to report you if they believe you're a danger to others.
That is not my point. A psychiatrist will not report you just if they think you are schizophrenic or a psychopath. However, how will a machine learning model categorize you if it knows this information AND all your social media posts AND any other things that may be attributed to you, such as your browsing history showing that you are interested in how to make TATP? Add to this that there is no way to ensure data quality and that collected data in the database may be incorrectly attributed to you, e.g. other people posting incriminating stuff on your social media profile.
> This concern I sympathise with more, but I also have to imagine that this information bank could make it easier to investigate and convict this sort of misuse of power.
The people misusing the power will also be the people who know exactly what to do to not end up putting a trail of evidence in the database.
> From the main critical opponent Justitia which consists of law professionals:
You're moving into some pretty specialized territory here. I'm not a lawyer, and I suspect you aren't either. We're quite frankly not equipped to have this discussion. I'll muddy up the picture a little for you to make that point clear.
It's true that Justitia wrote that in their opinion about the proposal. An opinion the relevant authority actually asked for and then incorporated into the proposal. What you're looking at there is part of the process of defining a law, not a critique of a finished law. In their comments on the responses, Justitsministeriet (the relevant authority in this case) writes[1]:
"Justitsministeriet finder det dog afgørende, at dette sker på en måde, hvor de nuværende regler i PET-loven ikke lempes i de tilfælde, hvor PET’s behandling af oplysningerne i et datasæt får en mere målrettet karakter"
Translated: "However, the Ministry of Justice finds it crucial that this happens in a way where the current rules in the PET Act are not relaxed in cases where PET's processing of the information in a dataset takes on a more targeted character"
Let me be clear. I don't intend to make a point for or against that law. I'm quite frankly not qualified to make that assessment. I don't understand most of what they write, nor do I care to. I read stuff like "it may have a chilling effect on freedom of speech" and think "well that's sort of the point. If you were going to write something about how you'd like to bomb a school, I'd like you to not write that", which is obviously missing the point of the discussion, but they're also not talking to me.
In cases like this I prefer to fall back on my trust in the process. I didn't vote for Peter Humlegaard, I'm much more anti-capitalist than that, but I also have no reason to believe that he's some Hitler-esque proto-fascist. PET is calling for more tools, and two independent experts helped our authorities draft a law that looks roughly like something Norway and Great Britain have. That seems reasonable to me. I'm sure they'll land this in a somewhat reasonable way, and then I'm sure we can change it if it turns out it sucks.
[1]: https://www.ft.dk/samling/20241/lovforslag/L218/bilag/1/3009... (page 11)
How do you prevent misuse, or a shift to "let's just start looking into this without a warrant until this popular issue (i.e. immigrants, the USA, Russia, religious tensions, ethnic tensions) is solved", when the next political crisis hits?
You don't. Democracy has to be able to make those decisions to be legitimate.
See recent article [1] about a municipality (?) violating its own law and state law to share surveillance data (license plates) with almost 300 agencies.
[1]: https://news.ycombinator.com/item?id=44747091
Once you have collected the data it won’t be uncollected, to paraphrase Pink Floyd, when the right one walks out of the door.
The same could be said of the entire state, or any hierarchical organization of people.
If we were truly, terminally, afraid of "the wrong one" we couldn't build anything.
Recorded information will always carry the temptation and tendency to be misused. It might happen slowly, but over time they would find more and more reasons to get a warrant, and at some point some hapless judge will just hand them out like daily business.
Experience shows that humans cannot be trusted to remain vigilant forever.
There is no reason to believe it wouldn't eventually be used to generate leads as opposed to needing a warrant to sift through.
Again, I'm no expert, but I do believe the law would be what would stop you. It could be poorly written, but then we should just rewrite it.
I don't quite understand your position. The intelligence community has shown time and again that they are happy to be innovative (and secretive) with interpretations of the law that enable them access to vast swaths of U.S. persons data without a warrant.
This is recent history, too: the NSA interpreted the addition of the word "relevant" in Section 215 of the Patriot Act to mean "indefinite bulk collection of records on every U.S. citizen".
Where do you get your confidence from? The confidence that there will be robust public debate before any expansion in the exploitation of data already collected on a country's citizens?
Do you believe this sort of bulk seizure and screening of the data of a country's citizens to be limited to the U.S.?
Police, in many countries, have already been found to violate the laws protecting surveillance systems that already exist.
If a warrant doesn't stop them today, why do you think it will tomorrow?
I don't believe in "police" as a transnational group. I don't believe that the actions of police in some other country carries any information about the culture of police in mine.
If police use these systems outside of their intended and legally mandated forms, that must be dealt with. We do need effective police, though. We do that with robust auditing infrastructure for police queries against the database, possibly even with a mandatory log of queries as part of discovery.
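To make the "mandatory log of queries" idea concrete, here's a minimal hypothetical sketch (not anything from the actual bill): an append-only audit log where every database lookup records who asked, what was queried, and under which warrant, with each entry hash-chained to the previous one so that editing or deleting a record breaks verification. All names and fields here are illustrative assumptions.

```python
import hashlib
import json
import time

class QueryAuditLog:
    """Hypothetical append-only, hash-chained log of database queries."""

    def __init__(self):
        self.entries = []

    def record(self, officer_id, query, warrant_id):
        # Each entry embeds the hash of the previous entry, forming a chain.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "officer_id": officer_id,
            "query": query,
            "warrant_id": warrant_id,   # a query without a warrant id is itself auditable
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        # Recompute the whole chain; any edited or removed entry
        # changes a hash and breaks the link to its successor.
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = QueryAuditLog()
log.record("officer-17", "lookup: health_records for subject X", "warrant-2025-0042")
assert log.verify()
log.entries[0]["warrant_id"] = "warrant-FAKE"  # tampering is detected
assert not log.verify()
```

The point of the sketch is only that tamper-evidence is cheap to build: an auditor (or a defense lawyer during discovery) can verify the chain without trusting the people who wrote to it.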
I don't have to "think" it will stop them, I can utilize the levers of democracy to check them.
The obvious question there is... Has it ever happened in yours?
I'd be surprised if you couldn't find some instances, but I'm also confident that those cases were dealt with by procedural enhancements.
Just recently we had a case where an employee was caught snooping in some address and family data. The person was fired, reported to the police for investigation, and the relevant employer is now looking at their processes to make sure it doesn't happen again. Along with that, everybody directly affected has been notified. That seems like a reasonable response to me.
I'm much more concerned with all the times we don't find out. We need strong checks on access to this data, which is fortunately also a legal requirement. I generally trust that the relevant authorities are keeping track of that.
Importantly, what I hope you're seeing from this reply is a trust in the institutions of my government. I trust that the processes are being followed, and that the processes are built in such a way that they check each other.
That doesn't seem like a reasonable response to me.
An employee was caught criminally stalking their family, and using the force of the government to do so.
Rather than being prosecuted, like happens outside the force, they were fired and let go to continue living their life - likely to be rehired in another police force if the pattern plays out as it regularly does.
That this can happen without large alarm bells means that the checks on access are not effective, because it is not a once-in-a-lifetime event.
I do see your trust. But I also see you yourself producing evidence suggesting such trust is unfounded.
> An employee was caught criminally stalking their family, and using the force of the government to do so.
So far he has not been caught criminally doing anything, because the system that found a breach of process is not the system that determines criminality. Right now he has violated an internal process and been fired for that dereliction of duty.
> Rather than being prosecuted
He is very likely ALSO going to be prosecuted, since the system that found the violation of the process also determined that such a violation is possibly illegal and activated the police. He is being investigated, and if they can prove anything criminal, he'll get convicted for that.
Obviously the bar for proving criminality is higher than the bar for dismissal.
> That this can happen without large alarm bells, means that the checks on access are not effective
This is exactly the debate that is happening right now because of this case. I'll end by quoting a professor that commented on this case recently:
"This should make us prioritize investing in security, investing in describing our processes and ways of working, such that you can find outliers. Maybe instead of investing in AI, which is fun to have but doesn't actually solve any of the serious problems"