Google has been doing more R&D and internal deployment of AI and less trying to sell it as a product. IMHO that difference in focus matters a lot. I used to think their early work on self-driving cars was primarily to support Street View in their maps.

There was a point in time when basically every well known AI researcher worked at Google. They have been at the forefront of AI research and investing heavily for longer than anybody.

It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

But they are in full gear now that there is real competition, and it’ll be cool to see what they release over the next few years.

I also think Sergey Brin's presence has made a difference here.

[deleted]

Please, Google was terrible about using the tech they had long before Sundar, back when Brin was in charge.

Google Reader is a simple example: Google had by far the most popular RSS reader, and they just threw it away. A single intern could have kept the whole thing running, and Google has literal billions, but they couldn't see the value in it.

I mean, it's not like being able to see what a good portion of America is reading every day could have any value for an AI company, right?

Google has always been terrible about turning tech into (viable, maintained) products.

Is there an equivalent to Godwin's law wrt threads about Google and Google Reader?

See also: any programming thread and Rust.

I'm convinced my last groan will be reading a thread about Google paper clipping the world, and someone will be moaning about Google Reader.

“A more elegant weapon of a civilised age.”

I never get the moaning about killing Reader. It was never about popularity or user experience.

Reader had to be killed because it [was seen as] a suboptimal ad monetization engine. Page views were superior.

Was Google going to support minimizing ads in any way?

How is this relevant? At best it's tangentially related, and low effort.

Can you not vibe code it back into existence yet?

Took a while, but I got to the Google Reader post. Self-host tt-rss; it's much better.

Ex-googler: I doubt it, but am curious for rationale (i know there was a round of PR re: him “coming back to help with AI.” but just between you and me, the word on him internally, over years and multiple projects, was having him around caused chaos b/c he was a tourist flitting between teams, just spitting out ideas, but now you have unclear direction and multiple teams hearing the same “you should” and doing it)

That makes sense. A "secret shopper" might be a better way to avoid that but wouldn't give him the strokes of being the god in the room.

Oh ffs, we have an external investor who behaves like that. Literally set us back a year on pet nonsense projects and ideas.

What'd he say

> It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

I always thought they deliberately tried to contain the genie in the bottle as long as they could

Their unreleased LaMDA[1] famously caused one of their own engineers to have a public crashout in 2022, before ChatGPT dropped. Pre-ChatGPT they also showed it off in their research blog[2] and showed it doing very ChatGPT-like things and they alluded to 'risks,' but those were primarily around it using naughty language or spreading misinformation.

I think they were worried that releasing a product like ChatGPT only had downside risks for them, because it might mess up their money-printing operation over in advertising by doing slurs and swears. Those sweet summer children: little did they know they could run an operation with a sieg-heiling CEO who uses LLMs to manufacture and distribute CSAM worldwide, and it wouldn't make above-the-fold news.

[1] https://en.wikipedia.org/wiki/LaMDA#Sentience_claims

[2] https://research.google/blog/lamda-towards-safe-grounded-and...

"Attention Is All You Need" was written by Googlers, IIRC.

It has always felt to me that the LLM chatbots were a surprise to Google, not LLMs, or machine learning in general.

Not true at all. I interacted with Meena[1] while I was there, and the publication was almost three years before the release of ChatGPT. It was an unsettling experience, felt very science fiction.

[1]: https://research.google/blog/towards-a-conversational-agent-...

ChatGPT really innovated on making the chat not say racist things that the press could report on. Other efforts before this failed for that reason.

The surprise was not that they existed: there were chatbots at Google way before ChatGPT. What surprised them was the demand, despite all the problems the chatbots have. The big problem with LLMs wasn't that they couldn't do anything, but how to turn them into products that made good money. Even people at OpenAI were surprised by what happened.

In many ways, turning tech into products that are useful, good, and don't make life hell is a more interesting issue of our times than the core research itself. We probably want to avoid the value-capturing platform problem, as otherwise we'll end up seeing governments using ham-fisted tools to punish winners in ways that aren't helpful either.

The uptake forced the bigger companies to act. Same with image diffusion models: no corporate lawyer would let a big company release a product that allowed the customer to create any image. But when Stable Diffusion et al. started to grow like they did, there was a specific cost to not acting, and it was high enough to change boardroom decisions.

Well, I must say ChatGPT felt much more stable than Meena when I first tried it. But, as you said, it was a few years before ChatGPT was publicly announced :)

Google and OpenAI are both taking very big gambles with AI, with an eye towards 2036 not 2026. As are many others, but them in particular.

It'll be interesting to see which pays off and which becomes Quibi