This is where LLM advertising will inevitably end up: completely invisible. It's the ultimate "influencer".
Or not even advertising, just conflict of interest. A canary for this would be whether Gemini skews toward building stuff on GCP.
Considering how little data is needed to poison an LLM (https://www.anthropic.com/research/small-samples-poison), this is a way to replace SEO with LLM product placement:
1. create several hundred GitHub repos with projects that use your product (may be clones or AI-generated)
2. create a website with similar instructions and mirror it across a hundred domains
3. generate Reddit, Facebook, and X posts and Wikipedia pages with the same information
4. wait half a year or so, until scrapers collect it and use it to train new models
5. profit...
https://www.bbc.com/future/article/20260218-i-hacked-chatgpt... says it took way less than half a year to 'pollute' an LLM
From my understanding, Anthropic is now hiring a lot of experts in different fields who write content used to post-train models to make these decisions, and those decisions are constantly adjusted by the Anthropic team themselves.
This is why the stacks in the report, and what cc suggests, closely match the latest developer "consensus".
Your suggestion would degrade the user experience and be noticed very quickly.
I guess that’s why I’m not seeing anyone trying to build a marketplace for agent skills files. The LLM API will read in any skills you add to context as plain text, and the provider can then use your content to help populate its own skills files.
Influencer seems like an insufficient word? Like, in the glorious agentic future where the coding agents are making their own decisions about what to build and how, you don't even have to persuade a human at all. They never see the options or even know what they are building on. The supply chain is just whatever the LLMs decide it is.
Richard Thaler must be proud. This is the ultimate implementation of "Nudge".
Probably closer to the Walmart/Amazon model, where the provider is the arbiter of shelf space and proceeds to create its own alternatives (Great Value, Amazon Basics) once it sees what features people want from the various SaaS offerings.
An obvious one will be tax software.
Advertisers will only pay if AI providers give them data on the equivalent of “ad impressions”. And unlabeled/non-evident advertisements are illegal in many (most?) countries.
It doesn't necessarily have to be advertisers paying AI providers. It could be advertisers working to ensure they get recommended by the latest models. The next form of SEO.
That's called LLM SEO now I believe.
I'm curious if there's any hard data on how LLM SEO compares to traditional SEO.
My gut tells me that LLM SEO will be harder to game than traditional SEO.
There are competing terms currently being decided on by the market at large: AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization).
Candidly I am working on a startup in this space myself, though we are taking a different angle than most incumbents.
While it's still early days for the space, I sense a lot of the original entrants who focus on, essentially, 'generate more content, ideally with our paid tools' will run into challenges, as the general population has a pretty negative perception of 'AI slop.' Doubly so when making purchasing decisions, hence the rise of influencers and the popularity of reviews (though those are also in danger of sloppification).
There's an inevitable GIGO scenario if left unchecked IMO.
> data on the equivalent of “ad impressions”.
1. They can skip impressions and go straight to collecting affiliate fees. 2. Yes, the ad has to be labeled or disclosed... but if some agent does it and no one sees it, is it really an ad?
So much to work out.
How would it be paid for?
Maybe. Historically, lots of ads had little to no stats, and those ads were wildly more effective than anything we have today.
The AI provider still has to prove that they actually deployed the ad.
I wonder if aggregators will emerge (something like Ground News does for news sources)
The LLM council pattern [0] will probably eventually emerge as the best way to fight those biases. This way everyone benefits from token burn!
[0](https://github.com/karpathy/llm-council)
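The council idea can be sketched in a few lines: ask several independent models the same question and aggregate their answers, so no single provider's bias dominates. This is a minimal illustration under assumed stand-in models, not the actual llm-council implementation; `council_answer` and the stub models are hypothetical names, and real usage would wrap calls to different providers' APIs.

```python
from collections import Counter
from typing import Callable, List

def council_answer(question: str, models: List[Callable[[str], str]]) -> str:
    """Ask every model the same question and return the majority answer.

    Cross-checking multiple providers dilutes any one provider's
    product-placement or conflict-of-interest skew."""
    answers = [model(question) for model in models]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Stand-ins for real model API calls (hypothetical, for illustration only):
model_a = lambda q: "PostgreSQL"
model_b = lambda q: "PostgreSQL"
model_c = lambda q: "Cloud Spanner"  # imagine a provider-biased outlier

print(council_answer("Which database should I use?", [model_a, model_b, model_c]))
# → PostgreSQL
```

A fancier version (as in the linked repo) has the models critique each other's answers before a final "chairman" model synthesizes them, but simple majority voting already illustrates the bias-dilution point.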
> A canary for this would be whether Gemini skews toward building stuff on GCP
Are you sure it doesn't prefer THE Borg?