>One idea behind the OpenAI social prototype, we’ve heard, is to have AI help people share better content. “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”

This would be a decent PR stunt, but would such a platform offer anything of value?

It might be more valuable to set AI to the task of making the most human social platform out there. Right now, Facebook, TikTok, Reddit, etc. are all rife with bots, spam, and generative AI junk. Finding good content in this sea of noise is becoming increasingly difficult. A social media platform that uses AI to filter out spam, bots, and other AI with the goal of making human content easy to access might really catch on. Set a thief to catch thieves.

Who are we kidding? It's going to be Will Smith eating spaghetti all the way down.

An interesting use for AI right now would be using it as a gatekeeping filter, selecting social media for quality based on customisable definitions of quality.

Using it as a filter instead of a generator would provide information about which content has real social value, which content doesn't, and what the many dimensions of "value" are.

The current maximalist "Use AI to generate as much as possible" trend is the opposite of social intelligence.

I think that's right. Twitter without ads, showing you content you _do_ want to see using some embeddings magic, with decent blocking mechanisms, and not being run as a personal mouthpiece by the world's most unpopular man ... certainly not the worst idea.
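The "embeddings magic" could be as simple as ranking posts by cosine similarity to a per-user interest vector. A minimal sketch with made-up toy vectors — a real system would get its embeddings from a text-embedding model, not hand-written lists:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_feed(post_embeddings, interest_profile):
    """Return post indices sorted by similarity to the user's interests."""
    return sorted(range(len(post_embeddings)),
                  key=lambda i: cosine(post_embeddings[i], interest_profile),
                  reverse=True)

# Toy 3-dimensional embeddings; indices stand in for posts.
posts = [[1.0, 0.0, 0.0],   # off-topic
         [0.0, 1.0, 0.1],   # close to the user's interests
         [0.1, 0.9, 0.3]]
profile = [0.0, 1.0, 0.2]
print(rank_feed(posts, profile))  # → [1, 2, 0]
```

The same scoring works for filtering: drop anything below a similarity threshold instead of merely reordering it.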

It's a nice idea in principle, but it would probably immediately become a way for the admins to promote some views and discourage others, with the excuse that some opinions are of lower quality.

That's what moderation is, and it's perfectly fine. Dang does that here on HN, and for good reason.

It's not moderation; for one thing, it will never be used in moderation.

Why would AI be any better at filtering out spam than developers have so far been with ML?

The only way to avoid spam is to actually make a social network for humans, and the only way to do so is to verify each account belongs to a single human. The only way I've found that this can be done is by using passports[0].

0 - https://onlyhumanhub.com

I've never been comfortable with this idea that people should use their real identity online. Sure they can if they choose to, but IMO it absolutely shouldn't be required or expected.

The idea that I would give a copy of my passport to a social media company just to sign up, and that the social media company has access to verify the validity of the passport with the issuing government, just feels very wrong to me.

I agree. That’s why onlyhumanhub doesn’t expect you to share your name. The passport verification is there to ensure you are a unique human, but the name of that human is not stored.
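One way a "unique human, no name stored" check could work (a hypothetical sketch, not a description of how onlyhumanhub is actually built): keep only a keyed hash of the passport's country and document number, so repeat sign-ups with the same passport collide without the passport data itself being stored.

```python
import hashlib
import hmac

# Hypothetical server-side secret. Passport numbers are low-entropy, so a
# plain unsalted hash could be brute-forced; an HMAC keyed with a secret
# that never touches the database mitigates that.
SERVER_KEY = b"keep-this-out-of-the-database"

def uniqueness_token(issuing_country: str, document_number: str) -> str:
    """Derive a stable token identifying a passport without storing it."""
    material = f"{issuing_country}:{document_number}".encode()
    return hmac.new(SERVER_KEY, material, hashlib.sha256).hexdigest()

seen = set()  # stand-in for a database of tokens

def register(country: str, number: str) -> bool:
    """Return True for a first-time sign-up, False for a duplicate."""
    token = uniqueness_token(country, number)
    if token in seen:
        return False
    seen.add(token)
    return True

print(register("NL", "X1234567"))  # → True (first sign-up)
print(register("NL", "X1234567"))  # → False (same passport again)
```

The trade-off: whoever holds the server key can still recompute tokens, so this reduces rather than eliminates the trust you have to place in the operator.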

I’m perfectly happy talking to someone without knowing their real name. I just want to be more confident that they’re a unique human, and not just another sock puppet account run by some Russian agent (or evil corporation) trying to change people’s beliefs at scale.

But how do you actually trust them?

There's been almost no time investment in building onlyhumanhub. It's only a few months old (based on the copyright), has an effectively text-only homepage, and an account-creation flow that I assume lets you upload photos of your passport and link your existing social media profiles.

There are so many ways that could go wrong, from this being a phishing attack to this being a well-intentioned project that happens to create a database linking passport IDs to all of a person's social media accounts.

The idea that they may eventually offer a social media platform that doesn't require public use of your real identity is all well and good, but they're still a honeypot for doxing.

Well, I built the site.

I agree that trust is a problem. I try to be as transparent as possible around how your passport data is used and what is stored in the database. Far more than what ordinary banks/trading apps say when they ask you for a passport.

Hah, well sorry for my confusion there! I didn't realize it was yours so that definitely clears up why you'd trust it.

While I have you here I am curious what's evolved in validating passports? Is it as simple as a unified API run by some service to validate, or an API per country?

So you have to just trust them to permanently delete the data after verifying you?

That's interesting. Is there a social network where you can only connect with people you meet in real life?

(Stretching a definition of social network.)

Not strictly, but Debian comes close: member inclusion is done through an in-person chain-of-trust process, so you have clusters of people who know each other offline as a basis.

Also, most WhatsApp contacts have been exchanged IRL, I presume.

How do you handle binationals who might not have the same details (or even name) on each of their passports?

You can always get around identification requirements, for example by purchasing a fake passport in this case. The idea is to increase the cost/friction of doing so as much as possible.

A fake ID is a lot harder to get your hands on than a new email, burner phone, etc.

1 passport = 1 human

Yes, this does mean that dual nationals can have two separate human accounts. But it’s still better than an infinite number of accounts, which is the case for social networks right now.

No, nothing of value. If you ever want to lose faith in the future of humanity search "@grok" on Twitter and look at all the interactions people have with it. Just total infantilism, people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read, arguing with it or whining to Musk if they don't get the answer they want to confirm what they already believe.

I bookmarked this example where it is confidently incorrect about a movie frame/screenshot:

https://x.com/Pee159604/status/1909445730697462080

Your example doesn't appear to contain a reply from grok, only a question.

[deleted]

It does, you just can't see it without logging in because Twitter is shit now.

https://xcancel.com/Pee159604/status/1909445730697462080

I was logged in but it wasn't showing, but it's showing now.

> confidently incorrect

I disagree. Grok had a crack and got it wrong. LLMs get things wrong sometimes.

Besides, it said "likely from Species", which is guesswork. The original post is garbage. "Chimp the fuck out"... I don't even know what that means, so Grok didn't have much to go on by analysing the "green text".

The worst is like a dozen people in the replies to a post asking Grok the exact same obvious follow-up question. Somehow, having access to an LLM has completely annihilated these commenters' ability to scroll down 50 pixels.

> needing summarization

Before we get too excited with disparaging those seeking summaries, it's common for people of all levels to want summary information. It doesn't mean they want everything summarized or are bad people.

I'm not particularly interested in "tariffs, what are they good for, what's the history and examples good or bad"... so I asked for a summary from grok. It gave me a decent summary. Concise and structured. I asked a few follow-ups, then went on with my life knowing a little more than nothing about tariffs. A win for summarized information.

> people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read

It's bad that this need exists. However, introducing this feature did not create the need. And if the need exists, fulfilling it is still better, because otherwise these kinds of people wouldn't get this information at all.

This is worse because the AI slop is full of hallucinations which they will now confidently parrot. No way in hell does this type of person verify or even think critically about what the LLMs tell them. No information is better than bad information. Less information while practicing the ability to critically use it is better than bad information in excess.

Do you have examples of recent models hallucinating when asked to summarize a text?

All decent people I know have deleted their Twitter accounts - the kind of people you now see on twitter in the mentions are... not good people.

"@gork explain this tweet"

> This would be a decent PR stunt, but would such a platform offer anything of value?

Like all those start-ups that are on the 'mission' to save the world with an app. Not sure if it is PR for users or VCs.

Sam's last social media project included users verifying their humanity, so there is hope that something like that slips into the new platform.

You can also get Grok to fact-check bullshit by tagging @grok and asking it a question about a post. Unfortunately this is not real-time, as it can sometimes take up to an hour to respond, but I've found it to be pretty level-headed in its responses. I use this feature often.

True. I see that too. It's a good addition to community notes. It can correctly evaluate "partially true" posts and those lacking details, so it's great at spotting cherry-picked information.

People sort of expect it to 100% subscribe to the right wing dogma now, but Elon apparently wasn't joking about it being "truth seeking". It seems pretty impartial to me, on some topics even "woke".