I think that's a fear I have about AI for programming (and I use these tools). So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces. Though we can build commercial products quickly and easily, no one writes code for difficult problem spaces, so no one builds up expertise in important subdomains for a generation. Then what will AI be trained on in, say, 20-30 years? Old code? Its own AI-generated code from vibe-coded projects? How will AI do new things well if it was trained on what people wrote previously and no one writes novel code themselves? AI seems pretty dependent on having a corpus of human-made code, so, for example, I'm not sure it will be able to learn to write highly optimized code for some new ISA in the future.

> Then what will AI be trained on in, say, 20-30 years? Old code? Its own AI-generated code from vibe-coded projects?

I've seen variations of this question since the first few weeks/months after the release of ChatGPT, and I haven't seen an answer to it from leading figures in the AI coding space. What's the general answer or point of view on this?

The general answer is what they’re already doing: ignoring the facts and riding the wave.

Is it hard to imagine that things will just stay the same for 20-30 years or longer? Here is an example of the B programming language from 1969, over 50 years ago:

  printn(n,b) {
    extrn putchar;
    auto a;

    if(a=n/b)          /* assignment, not test for equality */
      printn(a, b);    /* recursive */
    putchar(n%b + '0');
  }

You'd think we'd have a much better way of expressing the details of software, 50 years later? But here we are, still using ASCII text, separated by curly braces.
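
For comparison, here is essentially the same routine in C, as a quick sketch (with a small main added so it runs). Apart from the headers and declared types, the way we express the idea is unchanged:

  #include <stdio.h>

  /* the same routine in C, half a century later */
  void printn(int n, int b) {
      int a;

      if ((a = n / b))       /* assignment, not test for equality */
          printn(a, b);      /* recursive */
      putchar(n % b + '0');
  }

  int main(void) {
      printn(1969, 10);      /* prints "1969" */
      putchar('\n');
      return 0;
  }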

I observed this myself at least 10 years ago. I was reflecting on what I had done in the roughly 30 years I had been programming at that point, and how little had fundamentally changed. We still programmed by sitting at a keyboard, entering text on a screen, running a compiler, and so on. Some languages and methodologies had their moments in the sun and then faded, and the internet made sharing code and accessing documentation and examples much easier, but the experience of programming had changed little since the 1980s.

> So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces.

I don't think we need to wait a generation, either. This was probably part of their personality already, but a group of developers at my job seems to have just given up on thinking hard and working through difficult problems. It's insane to witness.

Exactly. Prose, code, visual arts, etc. AI material drowns out human material. AI tools disincentivize understanding, skill development, and novelty ("outside the training distribution"). Intellectual property is no longer protected: what you publish becomes de facto anonymous common property.

Long-term, this will do enormous damage to society and our species.

The solution is to declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.

LLMs are highly susceptible to poisoning attacks. This is their "Achilles' heel". See: https://www.anthropic.com/research/small-samples-poison

We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
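
To make the cost asymmetry concrete, here is a minimal sketch (the identifiers are made up for illustration, not from any actual tool): a generator that wraps a planted off-by-one in an otherwise plausible function. Emitting it costs a few string operations; noticing the bug costs actual review.

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Sketch of the "cheap to generate" half: print a plausible-looking
     C function whose loop bound (i <= n) reads one element past the
     end of the array.  Nothing in the output advertises the bug. */
  int main(void) {
      const char *names[] = { "sum_buf", "acc_vals", "total_of" };
      srand((unsigned)time(NULL));

      printf("int %s(const int *v, int n) {\n", names[rand() % 3]);
      printf("    int t = 0;\n");
      printf("    for (int i = 0; i <= n; i++)\n");
      printf("        t += v[i];\n");
      printf("    return t;\n");
      printf("}\n");
      return 0;
  }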

There is strong, widespread support for this hostile posture toward AI. For example, see: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...

Join us. The war has begun.

This will happen regardless. LLMs are already ingesting their own output. At the point where AI output becomes the majority of internet content, interesting things will happen. Presumably the AI companies will put lots of effort into finding good training data, and ironically that will probably be easier for code than anything else, since there are compilers and linters to lean on.
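
A minimal sketch of that kind of filter, assuming only a cc on the PATH (this is an illustration, not how any lab actually does it): keep a candidate training file only if it passes a syntax-only compile.

  #include <stdio.h>
  #include <stdlib.h>

  /* keep a candidate training file only if cc accepts its syntax;
     a real pipeline would sandbox this and layer linters, tests,
     and deduplication on top */
  static int compiles(const char *path) {
      char cmd[512];
      snprintf(cmd, sizeof cmd,
               "cc -fsyntax-only \"%s\" >/dev/null 2>&1", path);
      return system(cmd) == 0;
  }

  int main(int argc, char **argv) {
      for (int i = 1; i < argc; i++)
          if (compiles(argv[i]))
              printf("keep: %s\n", argv[i]);
      return 0;
  }

Note, though, that a filter like this would pass the off-by-one generator's output above without complaint: compiling is a cheap necessary condition, not a sufficient one.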

I've thought about this and wondered if this current moment is actually peak AI usefulness: the signal-to-noise ratio is high, but once training data becomes polluted with its own slop, things could start getting worse, not better.

I was wondering if anyone was doing this after reading about LLMs scraping every single commit on git repos.

Nice. I hope you are generating realistic commits and they truly cannot distinguish poison from food.

Refresh this link 20 times to examine the poison: https://rnsaffn.com/poison2/

The cost of detecting/filtering the poison is many orders of magnitude higher than the cost of generating it.

AI will be trained on the code it wrote, our feedback on that code, and the final clean(?) working(?) result after that feedback.