> “We’re heading into a new age of AI-assisted coding, and right now, it’s difficult to predict how that will play out. But if I had to place a bet, I would say that in the long run, AIs are more likely to generate high-quality code in a language like Gleam. Gleam makes it quick and easy for AIs to check their code, get instant feedback, and iterate. That should be an advantage compared to languages that are slow to build, have cryptic error messages, and can’t catch mistakes at build-time.”

Interesting point, and one I haven't seen before. It's almost like arguing that AI will work best with things it can learn quickly, rather than things that have lots of examples.

I feel like now that LLMs are getting better, the quality of the examples matters more than the quantity.

Garbage in, garbage out. If you confuse it with a lot of junior-level code and a language that constantly changes best practices, the output might not be great.

On the other hand, if you have a language that was carefully designed from the start, avoids making breaking changes, has great first-party documentation, and has a unified code style everyone adheres to, the LLM will have an easier time.

The latter also happens to be better for humans. Honestly, the best bet is to make a good language for humans. Generative AI is still evolving rapidly, so there's no point in designing the language around its current weaknesses.

If the main win of starting over with a new language is that you don't have a giant glut of legacy example code and documentation targeting no-longer-best-practice patterns, maybe there's a solution where you take an established modern language like Rust or Go and feed the LLM a more curated set of material from which to learn.

Like instead of "the entire internet", here's a few hundred best-practice projects, some known up-to-date documentation/tutorials, and a whitelist of third-party modules that you're allowed to consider using.

It feels like it should be true that a referentially transparent, type-safe language would be the 'right' language for AI coding: since each code block is stateless, you should be able to decompose problems in parallel and test them all the way down.

If you have good enough tooling (LSP + MCP), I'd expect that "the LLM can learn quickly" and "the LLM has lots of examples" would converge towards being the same thing. At the very least, it could generate many potential examples, put them all through the deterministic tooling to get many "true" examples, and then learn from those.
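The generate-then-filter loop described above can be sketched in a few lines. This is a toy illustration, not real training infrastructure: Python's built-in `compile()` stands in for the deterministic checker (in practice it would be something like `gleam build` or an LSP diagnostics pass), and the candidate snippets and the `passes_tooling` helper are hypothetical names invented for the example.

```python
# Toy sketch: filter LLM-generated candidates through deterministic
# tooling, keeping only the ones that pass as "true" training examples.

def passes_tooling(source: str) -> bool:
    """Return True if the candidate snippet at least compiles.

    compile() is a stand-in for a real build/type-check step.
    """
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

# Hypothetical model outputs: some well-formed, some broken.
candidates = [
    "def add(a, b):\n    return a + b\n",   # compiles
    "def add(a, b) return a + b\n",         # syntax error: missing colon
    "total = sum(range(10))\n",             # compiles
]

# Only verified candidates survive; these become the curated examples.
verified_examples = [c for c in candidates if passes_tooling(c)]
print(len(verified_examples))  # 2 of the 3 candidates pass the check
```

The faster and more reliable the checker is, the cheaper each iteration of this loop becomes, which is the claimed advantage for a language with quick builds and strong compile-time checks.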

> […] rather than things that have lots of examples.

Well, one glaring issue with the assumption that the quality of LLM output depends mostly on a large volume of examples online would be Sturgeon's law.

I can’t say where AI will end up but I firmly believe it will pick winners and losers in the next generations of programming languages. Not always for the better.

Any language that is difficult for an AI to understand will have to earn its popularity some other way, such as needing far less boilerplate code for AIs to write in the first place. We may finally start designing better APIs. Or lean into it and make much worse ones that necessitate AI. Look especially to an AI company to create a free razor and sell you the blades.

Beautiful. I’ve taken a few cracks at learning Gleam, but I found I quickly get stuck in abstraction hell—building types on types in types without coding any behavior. I would probably have more success learning Erlang first, just to get a handle on those functional patterns the BEAM was built for. I should take another crack at it.

Just FYI: unlike in many pure FP languages, building types on types is generally not a pattern you use in Erlang (or Elixir), and it's largely considered an anti-pattern in both communities.

You might not get the "handle" you're looking for?

For what it’s worth, I don’t think there’s much about Gleam’s design that is specific to “the functional patterns the BEAM was built for.” If you’re getting stuck in abstraction hell, consider asking the community for advice on what would be more idiomatic.

Amazing to hear success stories of Gleam in production! Running on the BEAM really feels like a superpower.

For Gleam/Erlang, is there an easy way to package up an executable you can distribute without also shipping Erlang?

I can't speak to Gleam, but for Elixir I just used Burrito to create a single executable: https://github.com/burrito-elixir/burrito I think it works for just Erlang too.

I haven't used it, but from the docs, I don't see why this wouldn't work for any language that compiles to beam files. You might need to adjust the build setup a bit.

Personally, I think I'd prefer something that worked without unpacking, but I don't actually need something like this, so my preferences aren't super important :D

Yes, I've created single-file Gleam executables by compiling to JavaScript and then using Node's experimental SEA (single executable application) feature. As a bonus, typically I've found the JavaScript targets to run a good deal faster for number-crunching tasks.

How big is a hello world executable in that case?

Hefty. The process is effectively just injecting all of the JS into the Node interpreter executable, so it's the size of the interpreter plus whatever you stuff inside. It's close to 50MB.

Oof, well that’s not ideal.

No, the VM needs to be installed on the machine, similar to C#, Java, Python, etc.

There have been some projects for creating self-extracting executable archives for the VM, and some projects for compiling BEAM programs to native code, but nothing has become well established yet.

I think C# has been able to compile the VM + your DLL into a single binary that doesn't extract for a while now. There's currently ongoing work for Java to do this.

You can compile to JavaScript as well.

What is the current state of using processes in Gleam? When I've looked, the language tour doesn't even get to anything about processes, messaging, or OTP. I found it at https://hexdocs.pm/gleam_otp/0.1.1/index.html by searching Google, but it seems like it's almost an afterthought.

I'm curious if I've had the wrong impression of Gleam. My assumption was that it was bringing static types to the BEAM's processes and OTP, but it seems like it's mainly a statically typed language that just happens to be on the BEAM and that it isn't necessarily looking to solve the "static type the messages" in Erlang and Elixir. Is that correct?

I'm not saying either way is bad or good. I'm just trying to get a sense of the language's origins and where it's going compared to Elixir and its gradual typing story. For example, if I know and like F#, Elixir, and Rust, what is the selling point of Gleam?

Note that you linked to the 0.1.1 version of the gleam_otp documentation. The latest version resides at https://hexdocs.pm/gleam_otp/index.html and both gleam_erlang and gleam_otp have hit 1.0.0 already. It doesn't contain every feature yet (like dynamic supervisors) but it's usable (and I've rolled my own dynamic supervisor in the meantime).

The reason it's not in the language tour is that it's not part of the language itself. There's no async-specific syntax or feature in the language; it all depends on the target, since Gleam can compile to Erlang or JavaScript. If you compile to Erlang, you can use gleam_erlang and gleam_otp to leverage OTP. If you compile to JavaScript, you use gleam_javascript and have to work with promises. It's definitely not just an afterthought, and gleam_otp recently had a big 1.0 update.