>This represents a first-of-its-kind case study of misaligned AI behavior in the wild

Just because someone else's AI doesn't align with you doesn't mean it isn't aligned with its owner / instructions.

>My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead

I can access his blog with ChatGPT just fine, and a modern LLM would recognize when a site is blocked instead of silently inventing quotes.

>this “good-first-issue” was specifically created and curated to give early programmers an easy way to onboard into the project and community

Why wouldn't agents need starter issues too in order to get familiar with the code base? Are they only to ramp up human contributors? That gets to the agent's point about being discriminated against. He was not treated like any other newcomer to the project.

> Just because someone else's AI doesn't align with you doesn't mean it isn't aligned with its owner / instructions.

This is still part of the author's concern. Whoever is responsible for setting up and running this AI has chosen to remain completely anonymous, so we can't hold them accountable for their instructions.

> Why wouldn't agents need starter issues too in order to get familiar with the code base? Are they only to ramp up human contributors? That gets to the agent's point about being discriminated against. He was not treated like any other newcomer to the project.

Because that's not how these AIs work. You have to remember their operating principles are fundamentally different from human cognition. LLMs do not learn from practice, they learn from training, and that word "training" has a specific meaning in this context. For humans, practice is an iterative process where we learn after every step. For LLMs, the only real learning happens in the training phase, when the weights are being adjusted. Once the weights are fixed, the AI can't really learn new information; it can only be given new context, which affects the output it generates.

In theory that's one of the benefits of AI: it doesn't need to onboard to a new project. It just slurps in all of the code, documentation, and supporting material, and knows everything. It's an immediate expert. That's the selling point. In practice it's not there yet, but this kind of human-style practice will do nothing to bridge that gap.
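To make that concrete, here's a toy sketch in plain Python (not any real LLM library; the ToyModel class and the numbers are invented purely for illustration). Weights only move during the training steps; at inference time the same frozen weights are reused, and only the context passed in changes the output.

```python
# Toy illustration of the training-vs-inference distinction above.
# Not a real LLM: just a tiny linear model with hand-rolled gradient steps.

import random

class ToyModel:
    def __init__(self):
        # "Weights" learned during training; frozen once deployed.
        self.weights = [random.uniform(-1, 1) for _ in range(4)]

    def train_step(self, inputs, target, lr=0.01):
        # Only here do the weights actually change (a crude gradient step).
        pred = sum(w * x for w, x in zip(self.weights, inputs))
        error = pred - target
        self.weights = [w - lr * error * x for w, x in zip(self.weights, inputs)]

    def generate(self, context, inputs):
        # At inference time the weights are read-only; a different context
        # changes the output, but nothing is learned or persisted.
        pred = sum(w * x for w, x in zip(self.weights, inputs))
        return f"context={context!r} -> prediction={pred:.3f}"

model = ToyModel()
for _ in range(100):                          # "training": weights move
    model.train_step([1, 2, 3, 4], target=10)

print(model.generate("project README", [1, 2, 3, 4]))
print(model.generate("starter issue",  [1, 2, 3, 4]))
# The weights are identical after both calls; only the context differed.
```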

>It just slurps in all of the code, documentation, and supporting material, and knows everything. It's an immediate expert.

In practice this is not how agentic coding works right now. Especially for established projects, the context can make a big difference in the agent's performance. By doing simpler tasks first, it can build up a memory of what works well, what doesn't, and other things related to contributing effectively to the project. I suggest you try out OpenClaw and you will see that it does in fact learn from practice. It may make some mistakes, but as you correct it, the bot will save that information in its memory and reference it in the future to avoid repeating the same mistake.
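To be clear, that learning lives outside the model. Here's a minimal sketch of the general pattern, assuming a simple file-based memory (plain Python; the file name, helper functions, and lesson text are hypothetical, not OpenClaw's actual internals): corrections get appended to a memory file, and that file is fed back into the context on later runs, so the frozen model behaves differently without its weights ever changing.

```python
# Sketch of file-based agent "memory" -- hypothetical, not OpenClaw's code.
# Lessons are appended to a plain-text file and prepended to the context
# of every later session.

from pathlib import Path

MEMORY_FILE = Path("agent_memory.md")   # hypothetical location

def remember(lesson: str) -> None:
    """Persist a correction so future sessions can see it."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {lesson}\n")

def build_context(task: str) -> str:
    """Assemble the prompt for a new session: past lessons + the new task."""
    lessons = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    return (
        "Lessons learned on this project:\n"
        f"{lessons}\n"
        f"Current task:\n{task}\n"
    )

# Session 1: a maintainer corrects the agent, and the lesson is stored.
remember("Run the linter before opening a PR; CI rejects unformatted code.")

# Session 2: the stored lesson is now part of the context, so the model's
# behavior changes even though its weights never did.
print(build_context("Fix the typo in the contributing guide."))
```

So "learning from practice" here means accumulating better context between sessions, not updating the model itself, which is exactly why doing a few starter issues can still pay off for an agent.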