Worth pointing out that this is part of a much larger encroachment on user privacy, and not just in the US: https://community.qbix.com/t/increasing-state-of-surveillanc...

I wish that article wasn't so extremely AI-written

What would that improve in this case?

It would be more concise, and the analysis section at the end would be more useful. I still read it; I just hate reading articles online knowing I could have run a ChatGPT deep research to the same effect.

Can you tell me what you would cut in this article, specifically, that would make it meaningfully more concise?

The point isn't that you can't run the deep research. Everyone now has more capabilities, and if you want to waste time and tokens you can do it. The point is someone has done the work compiling these, and made it available once, for everyone to read. Think "caching". It has the exact amount of information needed to show the details of every attack. There is a lot. Sadly making it "concise" will remove information -- there is that much.

I do usually make edits to an article after I get it from an AI, as an editor would do when a writer submits something. I hate having AI shibboleths like "It's not X. It's Y". So I make it more humanized. But at the end of the day, the article does what it's supposed to do: make people aware of things in one place, rather than have to research it themselves every time.

Why not just write it yourself? We can all have ChatGPT regurgitate the same information. You're supposed to add value; editorializing isn't enough.

Just like I don't want to look at AI art or listen to AI music, I don't want to read AI written blogslop.

The web is now full of shit. What a waste.

Writing it myself would mean doing the research myself. How would I do that? ChatGPT can do it faster at scale. Then the summaries are short enough that cutting any particular part wouldn't make sense. I could re-word it, I guess.

Why don't you write all your assembly code yourself? Why do you use a compiler? Why do you generate images, when you can draw them yourself? You're supposed to add value.

I don't think preparing a list of all the threats, editing it and publishing it for others is a "waste". I'm not publishing random stuff, this is important and in line with what I want people to know.

Some people on HN downvote any criticism of AI, other people complain that things are written by AI. If you're such big fans of AI being used more and more, then accept the consequences!

How can you be certain that the ChatGPT "research" you cite is a faithful representation of facts? How do you know that OpenAI/Anthropic/Google haven't introduced RLHF to subtly steer model output on specific topics to align with their political/economic interests?

I'm seeing increasing numbers of people credulously citing ChatGPT/Claude/Gemini output as ground-truth fact. Many more are increasingly lulled into a false sense of security by the citations models append (to the point of neglecting even a bare-minimum skim of the cited sources, much less critically evaluating/contextualizing the nature of the sources themselves). My fear is that most people are blissfully ignorant of the new paradigms of propaganda that AI could enable; most of us here wouldn't be taken in by the "slop" image-gen deepfakes (right now), but can you say the same about a couple of citations taken out of context?

We already know how trivial it is to win over a sizeable chunk of society by introducing red herrings, misrepresenting statistical data, etc. -- oil companies perfected that art, and now as a result a huge number of voters in the US believe that climate change (doesn't exist|isn't man-made|is unavoidable). And that effort was "fully manual" and carried out without the aid of extensive psychological profiling at the individual level via an ad-surveillance complex. Today, society is almost completely defenseless against the extreme granularity/subtlety of manipulation that ownership of frontier AI models enables, especially when it's armed with even a fraction of the torrent of personal data that's being collected on each of us every day.

The people doing the downvoting are different people from the whiners. I'm one of the whiners.

That's kinda fair, like it's still useful to prepare a list, but it's also like if you didn't go research your information yourself why would I start from a position of charitability when I read it? When I research something with LLMs, I know to double-check everything myself before I use it as a basis for my thought or repeat it to other people. Knowing an article is AI written forces me to doubt every sentence. Or maybe it's worse, I have to assume nobody cared about the sentence. The old format was a guarantee that someone gave enough shits to put the article together. Relevance comes implicitly bundled in each sentence. It's like someone talking to you in public in that there's often a reason to pay attention.

It's not as though that person is going to say something correct, or ethical, but I've had a lifetime of dealing with human kinds of wrongness. When stuff is wrong, I'll know it's wrong because the article is slanted or wrong because the author was lazy etc., which will let me discount it selectively and still get value from it when, e.g., a slanted author contradicts themselves. Reading an LLM article I have no clue whether the person who put it up even read the whole thing, so when I read sentences, I have no guarantee that the sentence communicates something worth paying attention to. I dislike that ambiguity and would prefer to guarantee that the text is slop by asking a bot myself. Then I know its worth upfront. I'd be fine with it if these sites included a direct statement in bold at the top: HEY, THIS IS AI SLOP; IF YOU DON'T WANT THAT, LEAVE. Then I know exactly how to parse it.

You might like my new startup, then: https://safebots.ai

I spent way too much time on actually building this — with Claude and double checking everything — so an article I publish can be OK to push out. We aren’t building a bridge for thousands of cars here, it’s an article.

A lot of things are automated and 95% of the time they are correct. The key is knowing whether the last mile is worth fixing, if the consequences are minor.

I read through your presentation but I still feel pretty confused about what your startup does. Could you explain it?

The purpose is to try to catch a sliver of all that fun money flying around in the current VC climate.

I wanna give him a shot at explaining it

Shimman is wrong. The goal is much bigger, and almost the opposite of what he thinks. It's trying to solve the problem of people chasing "slivers" of money and selling out, which happened in Web2 and Web3: https://safebots.ai/singularity.pdf

What the startup does is provide a verifiably trusted, zero-configuration, turnkey environment that businesses can move their data into and run AI workloads on, without worrying about their data being stolen or about some agents doing unpredictable things. The environment is locked down, with no SSH access. It's an appliance, with over-the-air M-of-N updates. Think more "Tesla car" and less "OpenClaw". That's the foundation.
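The M-of-N update idea mentioned above can be sketched roughly like this. This is a hypothetical illustration, not safebots.ai's actual implementation; real over-the-air update systems typically use asymmetric threshold signatures (as in TUF-style designs), and the HMAC shared-key signatures here are a stand-in for brevity:

```python
import hmac
import hashlib

# Hypothetical trust setup: N = 5 signers, of whom M = 3 must approve an update.
TRUSTED_KEYS = {f"signer{i}": f"secret{i}".encode() for i in range(5)}
THRESHOLD = 3

def sign(key, payload):
    """One signer's approval of an update payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_update(payload, signatures, keys=TRUSTED_KEYS, m=THRESHOLD):
    """Accept an over-the-air update only if at least M of the N
    trusted signers produced a valid signature over this payload."""
    valid = sum(
        1
        for signer, sig in signatures.items()
        if signer in keys
        and hmac.compare_digest(sig, sign(keys[signer], payload))
    )
    return valid >= m

update = b"firmware-v2.bin"
sigs = {s: sign(TRUSTED_KEYS[s], update) for s in ["signer0", "signer1", "signer2"]}
print(verify_update(update, sigs))  # True: 3 of 5 signers approved
```

The point of the M-of-N structure is that no single compromised signing key can push a malicious update to the appliance.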

That environment then builds everything around a graph database for people, organizations, and even code. We have Grokers that can statically ingest a codebase once and then expose the resulting graph as a far better "RAG" backend than cosine similarity over a Pinecone vector database.
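A minimal sketch of the graph-over-vectors retrieval idea: the Groker internals aren't public, so the names and structure here are assumptions, with Python's `ast` module standing in for a real static analyzer. Instead of ranking text chunks by embedding similarity, retrieval follows structural edges (here, call edges) out from a symbol:

```python
import ast

# Toy "codebase" to ingest statically.
SOURCE = '''
def parse(data):
    return validate(clean(data))

def clean(data):
    return data.strip()

def validate(data):
    if not data:
        raise ValueError("empty")
    return data
'''

def build_call_graph(source):
    """Ingest a codebase once: map each defined function to the
    defined functions it calls (a tiny stand-in for a full code graph)."""
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    graph = {name: set() for name in defined}
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id in defined):
                    graph[fn.name].add(node.func.id)
    return graph

def retrieve_context(graph, symbol, depth=2):
    """Graph-based retrieval: instead of cosine similarity over chunks,
    walk edges from the symbol to collect structurally related code."""
    seen, frontier = {symbol}, {symbol}
    for _ in range(depth):
        frontier = {callee for f in frontier for callee in graph.get(f, ())} - seen
        seen |= frontier
    return seen

graph = build_call_graph(SOURCE)
print(sorted(retrieve_context(graph, "parse")))  # → ['clean', 'parse', 'validate']
```

The retrieved set is exactly the code that `parse` depends on, regardless of whether those functions happen to share vocabulary with the query, which is the usual failure mode of pure embedding similarity.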

At its most basic level: agents can't be trusted. We want predictable Workflows, not agents. Done properly, they can do 99% of everything agents can, and the remaining 1% is the dangerous part: https://safebots.ai/agents.html

It's a lot of innovations at once, including:

Collaborative Bots that are safer than agents.

Workflows and tools that can read, reason and propose actions.

Policies that must be satisfied before actions can be taken.

Logging of everything. Verifiable security and audits for SOC 2 compliance, etc.

Everything is configurable and designed for serious businesses, not a grandma who finished a Chinese course on how to install OpenClaw on her terminal and not get pwned
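The "tools propose actions / policies must be satisfied / everything is logged" loop in the list above could look roughly like this. All names here are assumptions for illustration; this is not the actual safebots.ai API:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    params: dict

@dataclass
class Workflow:
    """Deterministic pipeline: tools may *propose* actions, but nothing
    executes until every registered policy signs off, and every decision
    is appended to an audit log."""
    policies: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def submit(self, action, execute):
        for policy in self.policies:
            ok, reason = policy(action)
            if not ok:
                self.audit_log.append(("DENIED", action.name, reason))
                return None
        self.audit_log.append(("ALLOWED", action.name, ""))
        return execute(action)

# Hypothetical policy: destructive actions are never auto-executed.
def no_deletes(action):
    if action.name.startswith("delete"):
        return False, "destructive actions require human approval"
    return True, ""

wf = Workflow(policies=[no_deletes])
result = wf.submit(Action("read_report", {"id": 7}), execute=lambda a: f"ran {a.name}")
wf.submit(Action("delete_table", {"table": "users"}), execute=lambda a: "boom")
print(result)        # ran read_report
print(wf.audit_log)  # one ALLOWED entry, one DENIED entry
```

The contrast with a free-running agent is that the dangerous 1% never reaches `execute` at all; it is stopped at the policy gate and recorded, rather than relying on the model to behave.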

> Writing it myself would mean doing the research myself. How would I do that?

This is why you should write things yourself. There is no way an AI would write something so insane in response to that question. Since I can now read your true understanding of the world, I know not to waste my time on your AI slop. I have no reason to believe you fact-checked the "research" done using AI if you can't even understand how the research should have been done in the first place. You want to waste the time of others but aren't even willing to sacrifice a bit yourself.

You're free to write it using AI, but I'm free not to read it. The fact that it's written by AI is a strong signal that the references can't be trusted anyways.

No one is forcing you to read it!

It would be written by a human.

From https://news.ycombinator.com/newsguidelines.html:

> Don't post generated comments or AI-edited comments. HN is for conversation between humans.

It's poorly structured. I think a better split between technical vs. social measures, and how they interact, would result in a much better article. It also doesn't seem to even mention DPI or the Great Firewall of China as prior art.