Everyone is fighting for their view of the world right now, and for those of us who don't have any investment in the success of one of the models, it feels like pure speculation at this point.

I don't think there's a human on this planet who can even predict the state of the industry in 3 years. In my entire time in the industry, I have always felt like I had a good line of sight three years out. Even when the iPhone came on the scene, it felt like a generational increase rather than a revolution.

We just have no idea. We don't know the extent of how it can improve. We don't know if we are still on exponential improvement or the end of the S curve. We don't know what investment is going to be like. We don't know if there is autonomy in the future. We don't know if it's going to look more like the advancement of autonomous vehicles where everyone thought we were just a year or two away from full autonomy - or at least people bought the hype cycle.

And anyone who says they know has something to sell you.

It’s exciting, isn’t it? I’ve been programming for a quarter century now and this is the first time in 20 years that the future of tech is exciting again and the world is software’s oyster.

Can’t wait for AI 2.0 and ads :(

Exciting is one word for it. The dot com crash was also "exciting", I suppose.

I see it as a chance for the capital class to sell everyone shovels and build railroads that will further cement their power and influence, all the while insisting software and art are more democratic than ever, and all the while using the same tools to build surveillance infrastructure that will make any dissent impossible.

So yeah, exciting is one word you could choose.

Well, the railroads aren't the seat of power that they used to be; progress ebbs and flows in mysterious ways. Even their arguable replacement, air travel, is hardly a paradigm of economic dominance.

Everybody is now excited about the top end: more parameters, larger context windows. Justly so, but I think what happens at the low end, in both software and hardware, will matter more in the long term. Software has obviously started (cf. DeepSeek). As for hardware, Google's Edge TPU ASIC is six years old, presumably has 4096 ALUs, and uses a few watts of power to perform 4 TOPS. Several years from now, I would hope it's a reachable goal for those few watts to do as many TOPS as, say, an H20 (400 or so).
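A quick back-of-envelope sketch of what that hope implies. The 2 W figure below is my own assumption standing in for "a few watts", and the 400 TOPS target is just the "H20 (400 or so)" number above, so treat it as illustrative arithmetic, not a spec comparison:

    # Back-of-envelope only: how big a jump in TOPS per watt the hope above implies.
    # Assumptions: ~2 W power budget ("a few watts"), 4 TOPS today, 400 TOPS target.
    edge_tops = 4.0        # claimed Edge-TPU-class throughput today
    edge_watts = 2.0       # assumed power budget
    target_tops = 400.0    # hoped-for throughput in the same power budget

    today = edge_tops / edge_watts       # ~2 TOPS/W
    needed = target_tops / edge_watts    # ~200 TOPS/W
    print(f"~{needed / today:.0f}x improvement in TOPS/W needed")  # prints ~100x

In other words, the hope amounts to roughly two orders of magnitude of improvement in efficiency per watt.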

Once the models stabilize, I could see a low-power inference peripheral: a huge TPU matrix fed by an embedded Flash EPROM holding the model parameters. This could go into anything from white goods to computers. Imagine a thermometer that you speak symptoms to and ask for a diagnosis; is it a good idea? I don't know... but is it possible? I could see that it is.

> Well the railroads aren't the seat of power that they used to be

You don't think the Carnegies, Vanderbilts, Goulds, etc. still have outsized influence and wealth?

> and ads

Ha, this is so true. All the major LLMs have a surprisingly ad-free experience right now, likely because they’re all aggressively competing for customers. But as soon as one of them realises the money that can be made from ads, and how that will pay for daily operating costs after the investors give up investing, the game is up.

They're not ad-free. Ads are integrated into the content of the responses. For example, ChatGPT biases towards itself as a provider of services even when asked not to. This behavior is easy to see and will probably be replicated for paying clients.

Yes, for sure companies are already paying to have their products added to the training sets of popular LLMs.

Source?

Meta is an advertising company.

Google is an advertising company.

Microsoft, in fits of jealousy and veiled rage, wishes it were an advertising company.

OpenAI is not profitable yet despite charging high rates.

If they aren't capitalizing on ad infusion yet, their shareholders should rightfully be getting angry, right? Money left on the table.

Simply because you and I may not be able to buy ads does not mean they aren't already there.

Don't forget the pg classic on the topic: https://paulgraham.com/submarine.html

That’s not a source, that’s speculation

Yes it's obvious this has to happen. AIs tend to promote certain products a lot more than others. If a training set costs billions to process, the incentive to sneak your product into the set... Maybe give it a few additional training rounds of that... It's like an ad which keeps coming back over and over for the life of the model whenever someone asks a related question.

> for sure companies are already paying for having their products added to the training set of popular LLMs.

Is there a source for such claims? I am also interested to know.

A Mastodon post I saw recently hit on how all the "Agentic" stuff is a rerun of the "Semantic Web" era, including its mistakes, such as drastically overusing the word "agent" and expecting companies to build APIs with free or cheap access that bypass their ads and dark-pattern-enhanced (er, "revenue-generating") websites.

After another MCP discussion today got into "Neo4j is the best database for almost every MCP agent, because…" I realized I may have "Semantic Web PTSD" and finally have a name for some of the less-than-excited gut reactions to "Agentic" I've been having for a while now. (Beyond the terrible gut reaction to both the "word" "agentic" itself and to naming a key protocol after the villain from Tron.)

Is it though? I feel like what ML can achieve is amazing, but, call me a pessimist, I'm bracing for the influx of elaborate scams, propaganda, deep fakes, etc. that will manage to drive a wedge even further into our society than social networks did. As for programming, my skills are not good enough to not risk atrophy if I use code completion. I do ask Claude for feedback though. I've always been a tech enthusiast, but this is the first time I want to build a cabin in the woods and live off-grid. The tech itself is amazing, but I'm not looking forward to all the slop we're going to be flooded with.

Oh absolutely. If the history of the internet, social media, and smartphones is anything to go by, those negative consequences are coming like a freight train and this will all end in tears in a decade or two. Or even sooner, because that cycle seems to be accelerating too - education probably being the prime example.

But like the OP said, we can’t even predict what’s going to happen even three years out so I’ve just resigned myself to “going with the flow” and enjoying the ride as much as I can. If the negative consequences are coming for us, might as well get as much benefit as we can now while we’re all still wide eyed and bushy tailed.

Yeah, I think a lot of wars have come about from a change in communication technology (e.g., radio, film), and I can imagine AI might cause a lot of chaos until we learn how to deal with AI-generated video, audio, text, and images. Imagine if Goebbels or the architects of the Rwandan genocide had had access to AI.

As exciting as nuclear armageddon, sure.

One of the big reasons I'm rooting for AI is that it's a valid alternative to search engines, and they've baked a paid model into the revenue stream from the very beginning. I'm more than happy to pay a premium for the ad-free version.

If exciting is worrying that I may lose my job, then yes I’m excited.

Yeah it’s just a matter of time. Marketing executives already realize they aren’t able to control the message as well. If you get product recommendations through AI, you don’t see the company’s website and marketing copy, and don’t go through their various funnels. The company… can’t do marketing. How do you get the message you want into the training data?

While I despise ads, it’s an interesting problem. Even on my personal site, I can’t really control how people learn about me.

> I don't think there's a human on this planet who can even predict the state of the industry in 3 years.

I predict a 1999-dotcom-boom style crash of the AI hype within 3 years, slower incremental improvements, and increased rent seeking (ads, sponsored responses, etc.) and price jacking.

That crash was due to laying out too much capital in the form of fiber that wasn’t being used. The difference today is that every GPU added is immediately maxed out, and if it’s not, it’s because the power plant for the area is maxed out.

And the GPUs are being maxed out because money seeking a high return is being thrown at LLMs in the hope that they turn into AGI, because there's no other market segment promising such high returns... and being believed.

I suspect the bubble is self-sustaining, because everyone making investment decisions is confident that they won't be blamed if (IMHO, when) the bubble collapses, because "everyone was investing in AI" while simultaneously, even those who might be skeptical are terrified of missing the boat if LLMs really do live up to their hype, or at least convinced they can ride the bubble and find a greater fool before it pops.

Just to echo another comment, I remember seeing the iPhone for the first time (2 maybe?). It was amazing. Mind-blowingly good. It was revolutionary in every sense of the word. I'd heard the hype, but if anything it undersold the reality.

You saw this thing and knew the future had changed.

It's very easy to forget Android just shamelessly ripped it off wholesale. The early version of Android looked nothing like the complete iPhone clone they ended up with.

And I have an android phone right now. But it only exists because of the revolutionary iPhone.

Credit where credit's due.

And right now AI feels like just before the iPhone. Back then we had touch screens, we had new battery tech, we had mini-apps on the internet, and we had iPods showing that small-form-factor MP3 players worked. But they didn't work together coherently.

And then the iPhone came along.

Like we're waiting for the iPhone of AI to bring the tech together into an actually usable state.

I half-suspect the Rabbit R1 and the Humane AI Pin were onto something that could've been the "iPhone of AI", but because they botched it so completely, no one will want to invest (time, money, effort, resources) into exploring that path for several years.

This isn’t your main point, but the iPhone absolutely was the biggest revolution since the Internet. The world before it is wholly different to the world after it. AI looks to have a similar impact, but just like the iPhone it’ll be a few years before everyone realizes the world has changed.

There was mobile internet, touchscreens, “large” screens, and downloadable mobile apps before the iPhone. I could listen to music and watch movies on my phone before the iPhone. I’ve been continuously online since 2001. AFAIK it didn’t have a single major feature that didn’t exist before, except its design. It really had a terrific design for its time.

It was a step forward, but it was incremental. The Internet was also incremental, in every sense. Just because the general populace didn’t hear about it until then didn’t mean that it was that “revolutionary”. Yes, they crossed a line which made them useful, something that people want. Sometimes mainly because of marketing. But still incremental.

This whole modern neural network saga started around 2011. Every step since then has been incremental. Just because most people didn’t hear about it until 2022 doesn’t mean that LLMs suddenly appeared out of nothing. They still need to improve, for example so that code quality doesn’t plummet the moment programmers start using them. It was, and will be, an incremental process.

The iPhone opened up a whole new world of opportunities which were very clear from the start.

No one, except Steve Ballmer, would describe it as a potential fad or question how good it can actually get before Apple goes bankrupt from all the investment into this new tech.

I like this new stuff we get now, but the iPhone felt like a clear win, without a potential societal collapse as a downside.

Mobile phone addiction is our generation's smoking. We just don't realize it yet.

Mobile phones & social media, tobacco, opium, gin... it seems like every century or so there's an epidemic of "this readily available thing creates addictive stimulation" and a lot of people get lost to it until society wises up about that particular thing. And then a generation or three later, the pattern repeats.

Good point.

Pretty sure it's known. But, just like smoking, it's tolerated. Can't make the line go down.

I was there and it was a very common view, though perhaps not a majority view, that the iPhone was a flash in the pan. There were lots of people committed to the idea that only physical keyboards could work for mobile. Touch interfaces were viewed highly skeptically. And it wasn't just the Microsoft or Palm people saying it, it was large chunks of their customers. The initial goal of the iPhone was 1%, yes, 1% share of the phone market! And many thought that was impossible for Apple and their strange new device.

Where are we now? Around the 3GS era?

By then, no one was saying that anymore.

The AI scepticism is still going strong.

don't worry, human, that will be corrected soon enough. You should learn to welcome your new AI overlords. Ask your regular handler LLM for advice on how best to do that.

Sent from my iMPC

> No one, except Steve Ballmer, would describe it as a potential fad or question how good it can actually get before Apple goes bankrupt from all the investment into this new tech.

Countless pundits, and many heads of companies like Motorola and Nokia, said exactly what you say "nobody except Ballmer" would say.

> The world before it is wholly different to the world after it.

Mostly for the worse. Mental health crisis, depression, loneliness epidemic, antisocial tendencies, attention spans destroyed, total 24/7 surveillance, hard dependency on a couple of mobile OS vendors...

No it won’t, it already happened. Just look at how school has been disrupted with everyone using these things.

People were lifting their ChatGPT prompts on their devices during graduation.

[deleted]

It wasn’t on day one though.

We can make some educated guesses and extrapolate some trends. But you are right that most of the people currently claiming to know what's going to happen in 3 years were fast asleep when ChatGPT launched, which was almost 3 years ago. And I include myself in that group. Most in the industry did not see that coming. Not even a little bit.

At this point we have half the industry being overly pessimistic and the other half being unreasonably optimistic. The truth would presumably be somewhere in the middle, but I don't think that's a very sound position to take either.

The reason is that I think we're actually dealing with a severe imagination deficit in society. That always happens around big technological changes, and this definitely looks and feels like such a thing. Ten years from now it might all seem obvious in retrospect. But right now we have the optimist camp predicting what boils down to the automotive equivalent of "faster horses" (AGI, I, Robot, self-driving cars, and all the rest). It's going to be this wonderful utopia where no one works and everything runs by itself. I'm not a big believer in that, and I don't think that's how economies work.

And we have a bunch of pessimists predicting that it's all going to end in tears. Dystopia, everybody is going to be unemployed, and a lot of other Luddite nonsense.

The optimists basically lack imagination so they just reach for what science fiction told them is going to happen (i.e. rely on other people's science fiction). And then the pessimists basically are stuck imagining the worst always happens and failing to imagine that there might be things that actually do work.

It's fairly easy to predict/bet that both sides are probably imagining things wrong, just like people did three years ago. Including myself. So I'm not making a prediction here. But I'm kind of curious to see how the next few years will unfold. Lots of amazing stuff in the past three. I'll have some more of that, please.

> But you are right that most of the people currently claiming to know what's going to happen in 3 years were fast asleep when chat gpt launched, which is almost 3 years ago. And I include myself in that group. Most in the industry did not see that coming. Not even a little bit.

The researchers responsible for LLMs didn't see them coming either. The architecture is quite basic; if they had known it would be this successful, they'd have done it way sooner as a PoC, even when the hardware wasn't quite up to it.

I’m not sure it was so much folks “not seeing it coming” with ChatGPT, only that anyone with anything beyond surface-level understanding of ML workloads and data science wouldn’t have thought to attempt to package such an immense, expensive, and potentially powerful product as a “chat bot”, and certainly nobody would have thought it would be a good idea to then heavily market it as some sci-fi notion of “artificial intelligence”, and proceed to gaslight the entire planet with what it might eventually be capable of. Only because, with the exception of Sam Altman, such a person would have been laughed out of a career. The only real surprise with LLMs seems to be that unscrupulous people have managed to raise so much goddamn money lying about its potential, that it’s got a real chance of crashing the global economy once everyone realizes it’s never actually gonna replace any meaningful jobs.

I agree with you. The downfall will not be caused by the AI working/not working. It will be caused by all the damn lies all these money hungry snakes are shouting.

It's understandable why it feels this way.

The most important thing to do is play and build small things with all of them as they take turns being in the lead or running the fastest sideways in this horse race.

What's new to someone might be old to most etc. What's critical to the individual is knowing what's new to them vs what they know is out there.

I am not sure, after 2 years of this, whether there will be a single clear-cut winner, as much as they will all contribute best practices to something more emergent.

Web apps, when they first began, also didn't have much structure, frameworks, etc., and things evolved from just building a lot and trying to build better.

Agreed.

The trick I've always used in these circumstances is the cynical approach: assume nothing changes. If it does change, adopt late rather than burn all your time, money, and energy on churn and experimentation.

In the last 35 years of doing this, I've seen perhaps 10% of technology actually stick around for more than a few years. I'll adopt at maturity and discard when it's thoroughly obsolete.

We're in fintech, and no technology so far has fundamentally changed what business we do, or even how it's done, for a long time, even if we pretend it does. A lot of the changes have just been a cost naively written off through arbitrary justification, or keeping up with trends. 99% of what we do is CRUD, shit reports, and batch processing, just like it was when it ran on S/390.

Even fewer things have had an ROI or a real customer benefit. Then again we have actual customers not investors.

Why do we need to know what happens?

Just use the tools we have at our disposal and create cool, profitable, or meaningful stuff.

It has never been easier than it is today.

Prioritization. If Microsoft is going to release something in a month, it's likely not worth working on to the same extent.

I'm not talking about building AI tools.

Build regular programs that are useful and will not get replaced by AI.

You can't time the market.

Nice to see some balanced skepticism rather than the constant barrage of prognostication and bold opinions stated as fact. This kind of take doesn’t drive engagement though.

I really like this. It feels like the only inevitability is change. Change, to some extent, not yet known.

You clearly are selling something, too: doubt. (Note your appeal to your own authority.)

Don't kid yourself. Skepticism is not neutrality. These days throwing shade is a growth industry. There's money to be made shorting just like there is going long. Neither is the objective, disinterested position, although skepticism always enjoys the appearance of prudence, at least to the ignorant.

Anyone who says they're not trying to sell you something lacks self-awareness about what they themselves have been sold.