What is their angle with this?
Surely SamA doesn’t actually think that they’ll more than 20x their compute in a few years? I’m sure the researchers there would love to do more research, with more compute, faster, but 20+x growth is not a practical expectation.
Is the goal here to create a mad rush to build data centers, which should decrease their costs with more supply? Do they just want governments to step in and help somehow? Is it part of marketing/hype? Is this trying to project confidence to investors about future revenue expectations?
> Surely SamA doesn’t actually think that they’ll more than 20x their compute in a few years?
If their goal is to train, say, a 100T model on the whole YouTube dataset, they will need 20,000x more compute. And that would be my goal if I were him.
Why 20,000x more compute? I thought they were at approximately 1T parameters with current compute?
Edit: looked it up. 10k+ times more for training compute. Sheesh. Get the Dyson sphere ready lol.
Mainly because the global video data corpus is >100,000x larger than the global text corpus, so you would need to train much larger models for much longer than current LLMs (rough arithmetic sketched below).
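For anyone wanting to sanity-check the "10k+ times more" figure, here is a rough back-of-envelope sketch. It assumes the common C ≈ 6·N·D approximation for dense-transformer training FLOPs, and the parameter and token counts are illustrative guesses, not known OpenAI numbers.

    # Back-of-envelope only: C ~ 6 * N * D for dense-transformer training FLOPs.
    # The baseline numbers are illustrative guesses, not known OpenAI figures.
    def train_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    baseline  = train_flops(params=1e12, tokens=15e12)      # ~1T-param model on ~15T text tokens
    video_run = train_flops(params=100e12, tokens=1500e12)  # 100T params on ~100x more (video-scale) data

    print(f"baseline : {baseline:.1e} FLOPs")
    print(f"video run: {video_run:.1e} FLOPs")
    print(f"ratio    : {video_run / baseline:,.0f}x")       # 100x params * 100x tokens = 10,000x

The exact baseline matters much less than the two multipliers: the ratio is just the parameter scale-up times the data scale-up.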
https://www.youtube.com/watch?v=Dy6Dw9rOAFQ
Couple of decades away from a Dyson Sphere? Recalibrating my trust in this man’s statements asap
Now I'm more on the side of him being delusional.
"At this point I think I know more about manufacturing than anyone currently alive on Earth."
It's like that dumbass at your work who thinks that, solely because he landed a job in his early 20s that pays him more than his parents ever made combined, he can school everyone on every topic imaginable, from nutrition to religion.
He and Elon make way more than that dumbass, so their egos get inflated even more.
I don't especially like Tucker Carlson, but I think the more screen time we give these people with an open mic, the better: everyone gets first-hand experience of how detached from reality they are.
It is better to remain silent and be thought a fool than to open your mouth and remove all doubt
Thanks for the example.
The pleasure was all mine.
> I don't especially like Tucker Carlson, but I think the more screen time we give these people with an open mic, the better: everyone gets first-hand experience of how detached from reality they are.
Idk, seems like we’re in a pretty shit situation politically right now because a lot of delusional men were given open microphone access.
As much as it would be nice to believe we lived in a world where people can discern truth from illusion, that doesn’t seem to be the case.
And, given that, it seems like the wisest course of action would be to come up with some means of forcing mass media to adhere to fact and remove from public discourse anyone who refuses to acknowledge or espouse it, lest they bend reality in their interest. But instead we decided to invent machines for manufacturing the most sophisticated lies ever seen and to disseminate them to everyone.
It’s not going to end well.
> As much as it would be nice to believe we lived in a world where people can discern truth from illusion, that doesn’t seem to be the case.
> And, given that, it seems like the wisest course of action would be to come up with some means of forcing mass media to adhere to fact and remove from public discourse anyone who refuses to acknowledge or espouse it...
How is it a wise course of action for people to force people to adhere to fact if we don't live in a world where people can discern truth? Don't you see the contradiction?
I read the parent as very subtle satire. They are following the desire to suppress disagreeable speech to its logical conclusion.
Absolutely right, and it's ubiquitous across organizations too.
I've never met an executive I respect. They're all absolute experts at appearing competent.
I mean, they're selected for it so that's not surprising
I guess the surprising part is that appearing competent is more important to shareholders than being competent
Actually checking whether someone is competent requires actual work, though. Work is for lesser people. Shareholders just know whether a person is competent; that's why they have so much money, right?
I can never tell if these guys have come to genuinely love the smell of their own farts or if they're just constantly in sales mode. Like maybe all those hours in meetings with investors and shareholders or whatever have gotten them stuck, like your mom used to warn you about when you'd make faces at your little brother.
When your job is to constantly be making the pitch for your company, and you live in a world where every conversation you have can be news before the end of the day, the mask can never come off.
If they know it won't bring in revenue, they can't get out of "sales mode", because when the runway runs out they get left out. It's like musical chairs with one chair left: if you don't think you can grab the chair, you want to keep the game going. And you're filthy rich as long as the game keeps going.
Have you ever talked to someone who’s been in sales for a long time when they’re not at work?
Some of them (almost) never turn it off. It’s unfortunate because it makes them seem completely disingenuous.
It’s almost like we’re trying to boil the planet.
That would be awesome.
The AI bubble bursts the moment he stumbles in raising that money.
If they want to survive, they need to outrun Google and offer a differentiated service. As of now, it's not clear that OpenAI will have a reason to exist in the near future alongside Anthropic and Google.
They're likely betting on either training a model so big they can't be ignored, or possibly focusing more on B2B, which means lots of compute to resell.
If their plan was to go toe to toe with Google as a foundation model/inference provider they would 100% be getting ground to dust, that's not a winnable fight. There's a reason they've pivoted to product and retained Jony Ive.
Anthropic gets a TON of hate on social media, their models are fragile, their infra is poorly managed, they're 100% going to end up in Jeff's pocket. OpenAI is a survivor.
Perceptually, it helps to take scrutiny off the current spend? It isn't a bubble if you can just scoff at $100 billion and say, "That's pocket change, this will actually require 10 quadrillion dollars!!"
He needs 11 figures of cash injected as soon as possible. The people who can give it want a big return. Given the current losses, the only way to make the math right is to lie outrageously about what’s possible.
He's been kicking this can for years now. Looking forward to the day he's forced to stop.
I believe it's because of RL: you are no longer limited by training data, since you generate it on the fly during learning, so benchmark-driven learning could scale with compute (toy sketch below).
They also seem to assume that everyone will use AI from them in the future, especially with the new "Pulse" combined with ads. Scaling this will need much more compute.
Is this reasonable? I'm not convinced, but I believe this is their reasoning.
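A minimal toy sketch of the "RL generates its own training data" point, with the big caveat that this is nothing like OpenAI's actual pipeline; benchmark_reward and run_rl_loop are made-up names for illustration.

    import random

    def benchmark_reward(guess: float, target: float = 0.7) -> float:
        """Stand-in for a verifiable benchmark: closer guesses score higher."""
        return -abs(guess - target)

    def run_rl_loop(steps: int, seed: int = 0) -> float:
        rng = random.Random(seed)
        mean = 0.0  # the "policy": one parameter, nudged toward whatever scores well
        for _ in range(steps):
            # Training data is generated on the fly by sampling the current policy...
            samples = [mean + rng.gauss(0, 0.1) for _ in range(16)]
            # ...and scored against the benchmark; there is no fixed dataset anywhere.
            rewards = [benchmark_reward(s) for s in samples]
            # Crude hill-climbing update: move to the best-scoring sample.
            mean = samples[rewards.index(max(rewards))]
        return mean

    # More loop steps (i.e. more compute) -> more self-generated data -> a better policy.
    for steps in (10, 100, 1000):
        print(steps, "steps ->", round(run_rl_loop(steps), 3))

The only input that grows here is the number of loop iterations, which is the "limited by compute, not by data" argument in miniature.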
My guess:
Altman figures AI will be a big deal and will be constrained by available compute.
If he locks in all the available compute and the related financing before the competition does, then he's locked in as the #1 AI company.
I'm not sure 20x or 5x or 40x matters, nor revenue expectations, so much as being ahead of the competition.
Sounds like the Commodore playbook from the '80s.
I think they could probably 'use' 10x. There are rumours that one of the reasons they're not shipping the new Jony Ive device is that they haven't got the compute for it. If you need 10x, it's probably better to ask for 20x and have a glut than to ask for 10x and have a shortage.
The phrase you are looking for is “commodifying the periphery.” As adjacent bottlenecks open up, the bottleneck you control becomes more valuable.
Pascal’s wager applied to tech cycles. The fervent adherence to the hype is akin to religious zealotry in many ways.
In the early days of Bitcoin, we would argue about security models and laugh about Bitcoin mining taking some significant percentage of the global power supply. It's been hovering around 1% for a while now, despite new supply falling off.
I wouldn't put bets on what the outer limits of AI are going to be. However, it's a huge productivity boost across a wide range of workflows, and models are still making large gains as they become more sophisticated.
If I had Sam Altman's access to other people's capital, I would be making large bets that it will keep growing.
> Surely SamA doesn’t actually think that they’ll more than 20x their compute in a few years?
He does, or at least he believes that if it's plausible, they should attempt it.
We live in odd times. It sort of reminds me of Feb 2020: all you really needed to know was the Rt, and the rest was just math (toy numbers below). Who knows if it'll matter or pencil out in a decade, but it's completely reasonable at these growth rates, with the iron laws of scaling known to keep holding.
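A tiny illustration of the "rest was just math" part, with made-up numbers: a fixed Rt compounds per generation interval the same way a fixed yearly compute-growth multiple compounds per year.

    # Illustrative numbers only: fixed R_t compounding per generation interval.
    r_t, generation_days = 2.5, 5
    for days in (10, 20, 30):
        print(days, "days ->", round(r_t ** (days / generation_days), 1), "x")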
> What is their angle with this?
My pet theory: Sam makes more money when OpenAI spends than when OpenAI earns.