I love this so much! It got me thinking about the future we're heading towards, which took me down a rabbit hole.

As agents become the dominant code writers, the top concerns for a "working class" programming language will become reducing errors and improving clarity. I think that will lead to languages becoming more explicit and less fun for humans to write, but great for producing code that has a clear intent and can be easily modified without breaking. Rust in its rawest form, with lifetimes and all the rigmarole, will IMO top the charts.

The big question that I still ponder: will languages like Hoot have a place in the professional world? Or will they be relegated to hobbyists who still hand-type code for the love of the craft? It could be the difference between a kitchen-garden hobby and modern farming…

I have been wondering what an AI-first programming language might look like, and my closest guess is something like Scheme/Lisp. Maybe they get more popular in the long run.
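Part of the appeal is that Scheme's entire surface syntax is s-expressions: atoms and parenthesized lists, nothing else. A complete reader fits in a few lines, which also means there are very few ways for a generator to produce a syntax error. A minimal sketch in Python (illustrative only, not any particular Scheme implementation):

```python
# Minimal s-expression reader: Scheme's whole surface grammar is
# "atoms and parenthesized lists", so parsing is trivial and the
# output is already a plain data structure (code as data).

def tokenize(src: str) -> list[str]:
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens: list[str]):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # consume the closing ")"
        return form
    try:
        return int(tok)
    except ValueError:
        return tok  # treat anything non-numeric as a symbol

ast = read(tokenize("(define (square x) (* x x))"))
print(ast)  # → ['define', ['square', 'x'], ['*', 'x', 'x']]
```

Because the parsed form is an ordinary nested list, a tool (or an agent) can inspect and rewrite programs with list operations alone.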

Smalltalk offers several excellent features for LLM agents:

- Very small methods that function as standalone compilation units, enabling extremely fast compilation.

- Built-in, fast, and effective code browsing capabilities (e.g., listing senders, implementors, and instance variable users...). This makes it easy for the agent to extract only the required context from the system.

- Powerful runtime reflectivity and easily accessible debugging capabilities.

- A simple grammar with a more natural, language-like feel compared to Lisp.

- Natural sandboxing
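To make the code-browsing point concrete: a Smalltalk image can answer "who implements this method?" and "who sends this message?" directly, which is exactly the kind of narrow context an agent wants. A hypothetical approximation of those two queries over a Python codebase, using the stdlib `ast` module (the sample classes and function names are invented for illustration):

```python
# Sketch of the "implementors" and "senders" queries a Smalltalk
# browser provides, approximated with static analysis in Python.
import ast

SOURCE = """
class Account:
    def deposit(self, amount):
        self.balance += amount

class Logger:
    def deposit(self, amount):
        print("deposit", amount)

def transfer(account, amount):
    account.deposit(amount)
"""

tree = ast.parse(SOURCE)

def implementors(selector: str) -> list[str]:
    # Classes that define a method with this name.
    return [cls.name
            for cls in ast.walk(tree) if isinstance(cls, ast.ClassDef)
            for m in cls.body
            if isinstance(m, ast.FunctionDef) and m.name == selector]

def senders(selector: str) -> list[str]:
    # Functions whose body contains a call of the form x.selector(...).
    found = []
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Attribute)
                        and node.func.attr == selector):
                    found.append(fn.name)
                    break
    return found

print(implementors("deposit"))  # → ['Account', 'Logger']
print(senders("deposit"))       # → ['transfer']
```

In Smalltalk these queries come built in and run against the live image rather than static text, so the results reflect the actual running system.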

If someone wants to try it out, both Glamorous Toolkit and plain Pharo have tooling that allows integration of both local and remote LLM services.

Some links to start off with:

https://gtoolkit.com/

https://github.com/feenkcom/gt4llm

https://pharo.org/

https://omarabedelkader.github.io/ChatPharo/

Edit: I suppose the next step would be to teach an LLM about "moldable exceptions", https://arxiv.org/pdf/2409.00465 (PDF), have it create its own debuggers.

I think the bitter lesson has an answer to that question. The best AI language is whichever one has the largest corpus of high-quality training data. Perhaps new language designers will come up with new ways to create large, high-quality corpora in the future, but for the foreseeable future it looks like the big incumbents have an unassailable advantage.

Perhaps the opposite: a language small enough that its entirety can easily be stuffed in context.

I'm working on what I hope is an AI-first language now, but I'm taking the opposite approach: something like Swift/Dart/TypeScript with plenty of high-level constructs that compactly describe intent.

I'm focusing on very high-quality feedback from the compiler, and on sandboxing via WASM to be able to safely iterate without human intervention (which Hoot has as well).
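The loop being described, generate code, run it in isolation, feed the diagnostics back, retry, can be sketched in a few lines. This is a hypothetical illustration: `ask_llm` is a placeholder, and a real system would run the candidate in a WASM runtime rather than a subprocess for stronger isolation:

```python
# Hypothetical sketch of an iterate-without-a-human loop: execute a
# candidate program in a separate process, capture its diagnostics,
# and hand them back to the generator for the next attempt.
import subprocess, sys, tempfile

def run_sandboxed(source: str, timeout: float = 5.0) -> tuple[bool, str]:
    """Run source in a fresh interpreter; return (ok, combined output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=timeout)
    return proc.returncode == 0, proc.stdout + proc.stderr

def iterate(prompt: str, ask_llm, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        source = ask_llm(prompt, feedback)
        ok, output = run_sandboxed(source)
        if ok:
            return source
        feedback = output  # diagnostics drive the next attempt
    raise RuntimeError("no working program within budget")
```

The quality of `feedback` is the whole game here, which is why a compiler that produces precise, actionable diagnostics matters so much more for agents than for humans with a debugger open.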

LLMs are mainly trained on English natural-language text, so you'll want a language that looks as much as possible like English. COBOL is it, then.