Sam Altman has made statements in the past that I found encouraging, about short-form video like TikTok being the best current example of misaligned AI. While this release references policies to combat "Doomscrolling and RL-sloptimization", it's curious that OpenAI would devote resources to building a social app based on AI-generated short-form video, which seems to be a core problem in our world. IMO you can't tweak the TikTok/YouTube shorts format and make it a societal good all of a sudden, especially with exclusively AI content. This is a disturbing development for Altman's leadership, and it sort of explains what happened in 2023 when they tried to remove him: he says one thing and does the opposite.

> IMO you can't tweak the TikTok/YouTube shorts format and make it a societal good all of a sudden, especially with exclusively AI content.

I agree. At best, short videos can be entertainment that destroys your attention span. Anything more is impossible. Even if there were no bad actors producing the content, you can't condense valuable information into this format.

I'm optimistic about the Sora app! My hope is that it becomes much more whimsical and fun than TikTok because everyone on the app knows that all content is fake. Hopefully that means less rage-bait and more creative content, like OG YouTube. Nobody's going to get their news from Sora because it's literally 100% fake.

> it becomes much more whimsical and fun than TikTok because everyone on the app knows that all content is fake.

Sounds about as plausible as "ironically taking heroin".

> Nobody's going to get their news from Sora because it's literally 100% fake.

I'm with Neal Stephenson ("Fall", in this case) on this prediction, although I really hope I'm wrong.

That said… does anyone have an invite code?

Would also love an invite code if anyone has one.

> much more whimsical and fun than TikTok

In TikTok's early years (right after it stopped being musical.ly), everyone told me it was actually fun and whimsical: all about collaboration, amateur comedy sketches, dances, lip-syncs, and people posting reactions to each other, all lighthearted, and that social media was finally fun again!

Why would it be more like OG YouTube, when the content they demoed very closely resembles YouTube Shorts? The key difference is that OG YouTube was long-form.

> Hopefully that means less rage-bait

I have seen what people generate with AI, and I do not have good news for you.

Sam Altman is a businessman. His job is to say whatever assuages his market, and that includes gaslighting you when you're disgusted by AI.

If you never expected Altman to be the figurehead of principled philosophy, none of this should surprise you. Of course the startup-accelerator alum is going to project misaligned expectations in the hopes of building a multi-trillion-dollar company. The shareholders love that shit; Altman is applying the same lessons he learned at Worldcoin to a more successful business.

There was never any question why Altman was removed, in my mind. OpenAI outgrew its need for grifters, but the grifter hadn't yet outgrown his need for OpenAI.

> His job is to say whatever assuages his market

I understand the cynicism but this is in fact not the job of a businessman. We shouldn't perpetuate the pathological meme that it is.

So the job of a businessman is not to increase shareholder value?

Nope. A CEO can't essentially steal from shareholders, but otherwise they have extremely broad latitude in how they engage in business.

There is no legal or moral imperative to make antisocial, unethical, or short term decisions that "maximize shareholder value."

This is something that morally weak people tell themselves (and others) to justify the depravity they're willing to sink to in order to satiate their greed.

The concept doesn't even make sense: different shareholders have different priorities and time horizons. A businessperson has no way to know what it objectively means to maximize their returns. They must make a subjective determination, and they have extremely broad latitude to do that.

If I run an AI business, then people using more AI means more business. If no one uses my AI, then I go out of business.

Increasing shareholder value can be done, in the broadest sense, by just increasing business.

If I fund my own business, I can control growth and _choose_ ethics over profits, in the hope that stunting growth is acceptable if my customers value ethics too, and that whomever I someday pass my company to shares these values.

If I take capital investment, I now have a contractual agreement to provide returns on that investment. Yes, failure to adhere can result in lawsuits or legal penalties. Or I can be fired/voted out for failing to bring high enough returns. I now _cannot_ choose ethics over profits, due to the conflict of interest of self-preservation.

So you are correct: there is no legal or moral contract to behave unethically, but there is instead a strong systemic and self-preserving incentive to do so.

I think we almost agree here, but you make it sound as if the exec can simply stand up and do the right thing. I argue the exec will simply be pushed aside for another.

This is what people refer to when they talk about the binds that hold modern-day mega-corps.

If you yourself are an exec, I personally think you can understand these truths, work with them as best you can, and still be a good human being. But there are lines that should not be crossed just to keep a job.

It is a collective issue we need to solve, one that of course starts with each individual seeing the true situation with kindness and compassion.

You’re just saying there are incentives for unethical behavior? Yeah, obviously.

They don’t need to be excused by “well that’s their obligation.” It’s not! Actually, a person’s obligation is to act morally even when there are incentives otherwise, which is approximately all the time for nearly every person.

This is something children learn (lest they be excluded from their society) yet Very Smart People in the upper echelons of the business world conveniently forget.

> If I take capital investment, I now have a contractual agreement to provide returns on that investment. Yes, failure to adhere can result in lawsuits or legal penalties.

This is not true. If you've signed a contract that says anything like this, consider getting a real lawyer.

To be clear, I'm not disgusted by AI in general. I'm disgusted by short-form video and by AI/ML in service of dopamine reward-loop hacking.