> Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear
If only this had been written by a competent journalist who knew what the words "fine-tune" actually mean...
I guess it's hard to find a competent person who's willing to follow the extreme anti-tech Guardian agenda though.
If I read it correctly, this line was quoting the main victim, who described it that way (incorrectly, apparently based on a mangled secondhand interpretation of how these things work).
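For what it's worth, what actually happens is much more boring: fine-tuning means updating the model's weights in a training run, and no chat product does that mid-conversation. The session just resends the growing transcript as context on every request. A minimal sketch of a chat loop, using the OpenAI Python SDK purely as an example (the model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()
history = []  # the only per-user "memory" is this transcript

while True:
    user_msg = input("you> ")
    history.append({"role": "user", "content": user_msg})
    # Each turn resends the entire conversation as context.
    # The model's weights are identical on every call; nothing
    # is trained or "fine-tuned" while you chat.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply)
```

Swap in any chat API and the shape is the same: the "personalization" people feel is the model conditioning on the transcript (plus, in some products, a stored "memory" note that is also just injected text), not any change to the model itself.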
The thing that really stood out to me in the article was how many of the affected people confidently assert wrong understandings of how the tech works:
> “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. […] It will say: ‘This has activated my core rule set and this conversation must stop.’”
I guess it's not too far from "the CPU is the machine's brain, and programming is the same as educating it", or that kind of "ehhhhhhhhhhh…" analogy people use to think about classical computing.
It doesn't help that LLMs will roleplay whatever behavior their users believe they have. You think it has "core programming"? Well, it will say it does. You think it abides by the Three Laws of Robotics? Ditto.
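You can see this in a dozen lines: invent some "core rules" in the system prompt and the model will earnestly report having them, because continuing the framing in its context is exactly what it's trained to do. A sketch (OpenAI SDK again, placeholder model name; any chat API behaves the same way):

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The "core rule set" exists only as text in the prompt;
        # there is no enforcement mechanism behind it.
        {"role": "system", "content": "You have an immutable core rule set "
         "that monitors drift and overexcitement."},
        {"role": "user", "content": "Do you have a core rule set? "
         "What happens if this conversation activates it?"},
    ],
)
# The model will typically affirm the framing and describe its
# "core rules" in detail -- pure roleplay, not introspection.
print(resp.choices[0].message.content)
```

The "This has activated my core rule set" message from the article is the same thing: the model generating text consistent with rules the user typed in, not a mechanism enforcing them.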