An alternative is that OpenAI is quickly being locked out of sources of human interaction by competition; one way to "fix" that is to build your own meadow for data cows.

xAI isn't allowing people to use the Twitter feed to train AI

Google is keeping its properties for Gemini

Microsoft, which presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.

So you plant a meadow of tasty human interaction morsels to get humans to sit around and munch on them while you hook up your milking machine to their data teats and start sucking data.

The assumption that you can just build a successful social network as an aside because you need access to data seems wildly optimistic. Next will be Netflix announcing it's working on AGI because show writers haven't been very imaginative lately, and it needs fresh content to keep subscribers.

'Wildly optimistic' is a very fitting categorization for Altman/OpenAI.

Netflix will almost certainly try to push a vizslop or at least AI-written show at some point to see how bad the backlash is.

There’s a guy who does these weird alien talking head / vox pop videos on YouTube. They’re pretty funny. I can see the potential.

Yes, but this misses the point made by the comment above. Using AI in your workflow ≠ attempting to build AGI.

If you have AGI, what do you do with it in your workflow? Or is the question what does it do with you?

I came across a quote in a forum which was part of a discussion around why corporate messaging and pandering has gotten so crazy lately. One comment stuck out as especially interesting, and I'll quote it in full below:

---

C suites are usually made up of really out-of-touch and weird workaholics, because that is what it takes to make it to the C suite of a large company. They buy DSS (decision support service / software) from vendors, usually marketing groups, that basically tell them what is in and what isn't. Many marketing companies are now getting that data from twitter and reddit, and portraying it as the broad social trend. This is a problem because twitter and reddit are both extremely curated and censored, and the tone of the conversation there is really artificial, and can lead to really bad conclusions.

---

This is only somewhat related, but if OpenAI did actually succeed in building their own successful social media platform (doubtful) they would be basing a lot of their model on whatever subset of people wanted to be part of the OpenAI social media platform. The opportunity for both mundane and malicious bias in models there seems huge.

Somewhat related, apparently a lot of English spellings were standardized by the invention of the printing press. This isn't surprising; it was one of the first technologies to really democratize written materials, and so it had a very outsized power to set standards. LLMs feel like they could be a bit like this, particularly if everyone continues with their current trends of intentionally building reliance on them into their products / companies / workflows. As a real life example, someone at work realized you could ask co-pilot to rate the professionalism of your communication during a meeting. This seems quite chilling, since you're not really rating your professionalism, but measuring yourself against whatever weird bell curve exists in co-pilot.

I'm absolutely baffled that LLMs are seeing broad adoption, and absolutely baffled that people are intentionally adopting and integrating them into their lives. I'm in my early 40s now. I'm not sure if I can get out of the tech field at this point, but I'm seriously thinking about options.

I lowkey believe that the twitterification of journalists is probably one of the worst things to happen to the country in the last 25 years.

The social harm inflicted by journalists thinking "Damn, I can just go on twitter to find out what is going on and how people feel about it!" is hard to overstate.

I'm 40 too and used to laugh at all those programmers who were switching to woodworking for a living. Now that vibe coding is being advertised everywhere and will most likely be bought into by the CEOs, and given the trend of stealing open-source software without attribution while making fun of people who are proud of their knowledge and craft, I'm starting to think that all those future woodworkers may not be wrong.

Whatever happens, it will be the end of programming as I know it, and it cannot end well.

> They buy DSS (decision support service / software) from vendors, usually marketing groups

What are these platforms, exactly? I've heard about them but have never come across them.

I would just like to appreciate your imagery and wordplay here, it’s spot on and I think should be our standard for conceptualizing this corporate behavior.

They also have a contract with Reddit to train on user data (a common go-to source for finding non-spam search results). Unsure how many other official agreements they have vs just scraping.

Good heavens, I'd think that if anything could turn an AI model into a misanthrope, it would be this.

One distinctive quality I've observed in OpenAI's models (at least in the cheapest tiers of 3, 4, and o3) is their human-like face-saving when confronted with things they've answered incorrectly.

Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame rather than admit direct fault, even when it's an inarguable factual error about conceptually non-heated things like API methods.

It's an annoying behavior of their models and in complete contrast to say Anthropic's Claude which ime will immediately and directly admit to things it had responded incorrectly about when the user mentions it (perhaps too eagerly).

I have wondered if this is something it's learned from training on places like Reddit, or if OpenAI deliberately taught it or instructed it via system prompts to seem more infallible, or if models like Claude were deliberately made to reduce that tendency.

> It's an annoying behavior of their models and in complete contrast to say Anthropic's Claude which ime will immediately and directly admit to things it had responded incorrectly about when the user mentions it

I don't know what's better here. ChatGPT did have a tendency to reply with things like "Oh, I'm sorry, you are right that x is wrong because of y. Instead of x, you should do x"

> Rather than directly admit fault they'll regularly respond in subtle (moreso o3) to not so subtle roundabout ways that deflect blame rather than admit direct fault

Human-level AI is closer than I'd realised... at this rate it'll have a seat in the senate by 2030.

They are already passing ChatGPT-written laws

https://hellgatenyc.com/andrew-cuomo-chatgpt-housing-plan/

I cannot think of a worse future for AI than parroting Reddit comments.

the upside of reddit data is you have updoots and downdoots, so you can positively and negatively train your AI model on what people would typically upvote, and train against what they might downvote

Now, that's the upside; the downside is you end up with an AI catering to the typical redditor. Since many claims there are formed on the basis of "confident, sounds reasonable, speaks with authority, gets angry when people disagree", hallucinations happen. Rather, we'd want something like "produces evidence-based claims with unbiased data sources".
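To make the upvote/downvote idea concrete, here's a toy sketch (entirely hypothetical, not anything OpenAI has described): map each post's vote tally to a signed per-example weight, so upvoted text gets reinforced and downvoted text gets trained against. The function name, scaling constant, and sample data are all made up for illustration.

```python
import math

def vote_weight(upvotes: int, downvotes: int) -> float:
    """Map a post's vote tally to a signed training weight in (-1, 1).

    Positive weight -> reinforce (treat as a 'good' example);
    negative weight -> penalize (train against it).
    tanh squashes runaway scores so one viral post can't dominate,
    and /100.0 is an arbitrary scale chosen for this sketch.
    """
    return math.tanh((upvotes - downvotes) / 100.0)

# Hypothetical corpus: (text, upvotes, downvotes)
corpus = [
    ("confident but wrong hot take", 850, 30),
    ("sourced, evidence-based answer", 40, 5),
    ("spam", 2, 400),
]

weighted = [(text, vote_weight(up, down)) for text, up, down in corpus]
```

Note the failure mode the comment describes falls straight out of this scheme: the viral "confident but wrong" post gets a far larger positive weight than the carefully sourced answer, so the model is rewarded for sounding authoritative rather than for being right.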

They also have a syndication agreement with The Guardian [0]. I lost all my respect for The Guardian after seeing this.

[0]: https://www.theguardian.com/gnm-press-office/2025/feb/14/gua...

Why shouldn’t they license their content? It’s theirs and it’s a non-profit that needs revenue.

It's not that they shouldn't license their content. I'm not a fan of OpenAI and their "fair use" of things, to be honest.

But this is the opposite of fair use. They're licensing the content, which means they're paying for it in some fashion, not just scraping it and calling it fair use.

If you don't like the fair use of open information, I would expect you to be cheering this rather than losing respect for those involved.

Why? I can't see any issues with this at all, whatsoever.

Everybody is free to have their own opinions. I don't like how AI companies, and mostly OpenAI, "fair-use" the whole internet, so there's that.

Again, like others have contested with you, how is this The Guardian's fault to have issue with? They convinced ClosedAI to give them money in a licensing deal to use their content as training data without having it scraped for free.

Your sense of injustice or whatevs you want to call it is aimed in the opposite direction.

This isn't even fair-use defence, they're paying to use it expressly for this purpose.

Mmmm nice try hooking me up.

Instead, I’m just going to hang out here in this hacker meadow and on FOSS social networks where something like that would never happen!

Why oh why did I take the red pill? Plug me back in.

Please, Mr Smith, please plug me back into those sweet sweet udder-suckling attention extractors.

This metaphor nails it. In this day and age, the way around walled gardens is to build your own walled garden apparently.

> Microsoft, who presumably could let OpenAI use it's data fields appears (publicly at least) to be in a love/hate relationship with OpenAI these days.

sama probably would like to take Satya's seat for what he no doubt sees as unblocking the path to utopia. The slight problem is he's becoming a bit lonely in that thinking.

I've always wondered whether such sociopaths actually believe they are doing something for noble reasons, or whether they consciously use it as a trick in their quest for power.

Must consider the illusory truth effect

Sociopathy isn't enough, you have to mix in some megalomania and that's the output

If this were their plan, they’d be discounting that some of their users would be controlled by their own AI.

My guess is that they’re trying other things to diversify themselves and/or to try to keep investors interested. Whether or not it works is irrelevant as long as they can convince others it will increase their usage.

But don't they have ChatGPT, the fifth or whatever most popular website on the planet? And deals with Reddit. Sure, that can't touch the treasure trove Google is sitting on; xAI sure won't give them access, and GitHub could perhaps sell their data (but that's a maybe).