The title doesn't do justice to the content.
I really liked the paragraph about LLMs being "alien intelligence":
> Many engineers I know fall into 2 camps, either the camp that find the new class of LLMs intelligent, groundbreaking and shockingly good. In the other camp are engineers that think of all LLM generated content as “the emperor’s new clothes”, the code they generate is “naked”, fundamentally flawed and poison.
> I like to think of the new systems as neither. I like to think about the new class of intelligence as “Alien Intelligence”. It is both shockingly good and shockingly terrible at the exact same time.
> Framing LLMs as “Super competent interns” or some other type of human analogy is incorrect. These systems are aliens and the sooner we accept this the sooner we will be able to navigate the complexity that injecting alien intelligence into our engineering process leads to.
It's an analogy I find compelling. The way they produce code and the way you have to interact with them really does feel "alien", and once you start humanizing them you bring emotions into the interaction, which doesn't help.
I mean, I do get emotional and frustrated even when good old deterministic programs misbehave and there's some bug to find and squash or work around, but LLM interactions can take that to a whole new level. So, we need to remember they are "alien".
I’m reminded of Dijkstra: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”
These new submarines are a lot closer to human swimming than the old ones were, but they’re still very different.
Some movements expected alien intelligence to arrive in the early 2020s. They might have been on the mark after all ;)
Isn't the intelligence of every other person alien to us? The article ends with a call to "protect our own engineering brands", but how is that communicated? I found this [https://meta.discourse.org/t/contributing-to-discourse-devel...], which seems woefully inadequate. In practice, conventions are communicated through existing code. Are human contributors capable of grasping an "engineering brand" by working on a few PRs?
> Isn't the intelligence of every other person alien to us?
If we agree that we are all human and assume that other humans are conscious in the same way we are, I think we can extrapolate that there is a generic "human intelligence" concept, even if it's pretty hard to nail down and even if there are several definitions of human intelligence out there.
As for the other part of the comment, I'm not too familiar with Discourse's open-source approach, but I'd guess those rules are there mainly for employees; since they develop in the open and in public, they make the rules public as well.
My point was that AI-produced code is not so foreign that no human could produce it, and no two humans produce the same style of code anyway. So I'm not sure exactly what the idea of an "engineering brand" is meant to protect.
This is why, at a fundamental level, the concept of AGI doesn't make a lot of sense. You can't measure machine intelligence by comparing it to a human's. That doesn't mean machines can't be intelligent, but rather that the measuring stick cannot be an abstracted human being. It can only be an accumulation of specific tasks.