> That is the entire point, right? Us having to specify things that we would never specify when talking to a human.

Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.

But this will only happen after the last programmer has died and no-one will remember programming languages, compilers, etc. The LLM orbiting in space will essentially just call GCC to execute the 'prompt' and spend the rest of the time pondering its existence ;p

You joke, but this is the very problem I always run into vibe coding anything more complex than mashing multiple example tutorials together. I always try to shorthand things, and end up going around in circles until I specify what I want very cleanly, in what amounts to pseudocode. Which means I've basically written what I want in Python.
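For concreteness, here's a sketch of the kind of prompt-that-is-really-pseudocode I mean; the task, directory, and column names are hypothetical:

    # The "prompt" I end up writing, give or take:
    #   for each .csv file in the uploads directory:
    #       parse it, skipping bad rows
    #       coerce the 'amount' column to a number
    #       collect the good rows, log the bad ones
    # ...at which point I might as well have written the Python myself:
    import csv
    import logging
    from pathlib import Path

    results = []
    for path in Path("uploads").glob("*.csv"):
        with path.open(newline="") as f:
            for row in csv.DictReader(f):
                try:
                    row["amount"] = float(row["amount"])
                    results.append(row)
                except (KeyError, ValueError):
                    logging.warning("bad row in %s: %r", path.name, row)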

This can still be a really big win, because of the other things that tend to be boilerplate around the core logic, but it's certainly not the panacea that everyone who is largely incapable of being precise with language thinks it is.

You could probably make a pretty good short story out of that scenario, sort of in the same category as Asimov's "The Feeling of Power".

The Asimov story is on the Internet Archive here [1]. It looks like it comes from a class handout or something like that, and has an introductory paragraph added which I'd recommend skipping.

There is no space between the end of that added paragraph and the first paragraph of the story, so what looks like the first paragraph of the story is really the second. Just skip down to that, and then go up 4 lines to the line that starts "Jehan Shuman was used to dealing with the men in authority [...]". That's where the story starts.

[1] https://ia800806.us.archive.org/20/items/TheFeelingOfPower/T...

Thanks, I enjoyed reading that! The story that lay at the back of my mind when making the comment was "A Canticle for Leibowitz" [1]. A similar theme and from a similar era.

The story I have half a mind to write is about a future we envision already being around us, just a whole lot messier. Something along the lines of this xkcd [2].

[1] https://en.wikipedia.org/wiki/A_Canticle_for_Leibowitz

[2] https://xkcd.com/538/

This is going into my training courses at work. Thanks!

> Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.

On the foolishness of "natural language programming". https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

    Since the early days of automatic computing we have had people that have felt it as a shortcoming that programming required the care and accuracy that is characteristic for the use of any formal symbolism. They blamed the mechanical slave for its strict obedience with which it carried out its given instructions, even if a moment's thought would have revealed that those instructions contained an obvious mistake. "But a moment is a long time, and thought is a painful process." (A.E.Houseman). They eagerly hoped and waited for more sensible machinery that would refuse to embark on such nonsensical activities as a trivial clerical error evoked at the time.
(and it continues for many more paragraphs)

https://news.ycombinator.com/item?id=8222017 2014 - 154 comments

https://news.ycombinator.com/item?id=35968148 2023 - 65 comments

https://news.ycombinator.com/item?id=43564386 2025 - 277 comments

A structured language without ambiguity is not, in general, how people think or express themselves. In order for a model to be good at interfacing with humans, it needs to adapt to our quirks.

Convincing all of human history and psychology to reorganize itself in order to better serve AI cannot possibly be a real solution.

Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc.
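A rough sketch of what that might look like, assuming an OpenAI-style function-calling schema; the get_vehicle_status name and its fields are invented for illustration:

    # Hypothetical tool the model could call instead of guessing about
    # the car's state. The shape follows OpenAI-style function calling.
    vehicle_status_tool = {
        "type": "function",
        "function": {
            "name": "get_vehicle_status",
            "description": "Return current readings from the car.",
            "parameters": {
                "type": "object",
                "properties": {
                    "fields": {
                        "type": "array",
                        "items": {
                            "type": "string",
                            "enum": ["location", "ignition", "fuel_level",
                                     "battery_level", "needs_wash"],
                        },
                        "description": "Which readings to fetch.",
                    },
                },
                "required": ["fields"],
            },
        },
    }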

> in order to better serve AI

That wasn't the point at all. The idea is about rediscovering what always worked to make a computer useful, and not even using the fuzzy AI logic.

Yep, humans have had a remedy for the problem of ambiguity in language for tens of thousands of years, or there never could have been an agricultural revolution giving birth to civilization in the first place.

Effective collaboration relies on iterating over clarifications until ambiguity is acceptably resolved.

That beats spending orders of magnitude more effort moving forward on bad assumptions from insufficient communication, and starting over from scratch every time you run into the results of a misunderstanding.

Most AI models still seem deep into the wrong end of that spectrum.

> Convincing all of human history and psychology to reorganize itself in order to better serve AI cannot possibly be a real solution.

I think there's a substantial subset of tech companies and honestly tech people who disagree. Not openly, but in the sense of 'the purpose of a system is what it does'.

I agree but it feels like a type-of-mind thing. Some people gravitate toward clean determinism but others toward chaotic and messy. The former requires meticulous linear thinking and the latter uses the brain’s Bayesian inference.

Writing code is very much “you get what you write” but AI is like “maintain a probabilistic mental model of the possible output”. My brain honestly prefers the latter (in general) but I feel a lot of engineers I’ve met seem to stray towards clean determinism.

I think it's very likely that machine intelligence will influence human language. It already is influencing the grammar and patterns we use.

I think such influence will be extremely minimal, like confined to dozens of new nouns and verbs, but no real change in grammar, etc.

Interactions in natural language between your average person and a computer are much less frequent than the interactions between that same person and their dog. Humans also speak in natural language to their dogs: they simplify their speech and use extreme intonation and emphasis in a way we never do with each other. Yet, despite our having lived with dogs for 10,000+ years, they have not significantly affected our language (other than giving us new words).

EDIT: just found out HN annoyingly transforms U+202F (NARROW NO-BREAK SPACE), the ISO 80000-1 preferred way to type thousand separator

> I think such influence will be extremely minimal.

AI will accelerate “natural” change in language like anything else.

And as AI changes our environment (mentally, socially, and inevitably physically) we will change and change our language.

But what will be interesting is the rise of agent-to-agent communication via human languages. As that kind of communication shows up in training sets, it will be a powerful eigenvector of change we can’t predict, other than that it will follow whatever communication is most efficient for the agents, and we are likely to pick up on those changes as we would from any other source of change.

> Convincing all of human history and psychology to reorganize itself in order to better serve AI cannot possibly be a real solution.

I'm on the spectrum and I definitely prefer structured interaction with various computer systems to messy human interaction :) There are people not on the spectrum who are able to understand my way of thinking (and vice versa) and we get along perfectly well.

Every human has their own quirks and the capacity to learn how to interact with others. AI is just another entity that stresses this capacity.

Speak for yourself. I feel comfortable expressing myself in code or pseudo code and it’s my preferred way to prompt an LLM or write my .md files. And it works very effectively.

> Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc.

So no abstract reasoning.

Prompting is definitely a skill, similar to "googling" in the mid 00's.

You see people complaining about LLM ability, and then you see their prompt, and it's the 2006 equivalent of googling "I need to know where I can go for getting the fastest service for car washes in Toronto that does wheel washing too"

Ironically, the phrase that was a bad 2006 google query is a decent enough LLM prompt, and the good 2006 google query (keywords only) would be a bad LLM prompt.

That’s not true at all. I get plenty of perfect responses from few-word prompts, often containing typos.

This isn’t always the case and depends on what you need.

How customized are your system prompts (i.e. the static preferences you set at the app level)?

And do you perhaps also have memory enabled on the LLMs you are thinking of?

Communication is definitely a skill, and most people suck at it in general. Frequently, poor communication is a direct result of the fact that we don't ourselves know what we want. We dream of a genie that frees us not only from having to communicate well, but from having to think properly. Because thinking is hard and often inconvenient. But LLMs aren't going to entirely free us from the fact that if garbage goes in, garbage will come out.

"Communication usually fails, except by accident." —Osmo A. Wiio [1]

[1] https://en.wikipedia.org/wiki/Wiio%27s_laws

I’ve been looking for tooling that would evaluate my prompt and give feedback on how to improve it. I can get somewhere with custom system prompts (“before responding ensure…”), but it seems like someone is probably already working on this? Ideally it would run outside the actual thread to keep the context clean. There are some options popping up on Google, but I'm curious if anyone has a first-hand anecdote to share?
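The sort of thing I have in mind, as a minimal sketch; this assumes the OpenAI Python client, and the model choice and critic instructions are just placeholders:

    # Out-of-band prompt critic: the evaluation happens in its own
    # request, so the main conversation's context stays clean.
    from openai import OpenAI

    client = OpenAI()

    CRITIC_INSTRUCTIONS = (
        "You review prompts before they are sent to an LLM. Point out "
        "ambiguity, missing context, and unstated assumptions, then "
        "suggest a rewritten prompt. Do not answer the prompt itself."
    )

    def critique_prompt(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": CRITIC_INSTRUCTIONS},
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content

    print(critique_prompt("make the dashboard load faster"))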

The Lojban language already exists and allows for eliminating ambiguity. It's obviously not practical for general use, though.

https://en.wikipedia.org/wiki/Lojban

Lojban is syntactically unambiguous. Semantically it's still just as vague as any natural language.

How about...

https://en.wikipedia.org/wiki/Ithkuil

> Ithkuil is an experimental constructed language created by John Quijada. It is designed to express more profound levels of human cognition briefly yet overtly and clearly, particularly about human categorization. It is a cross between an a priori philosophical and a logical language. It tries to minimize the vagueness and semantic ambiguity in natural human languages. Ithkuil is notable for its grammatical complexity and extensive phoneme inventory, the latter being simplified in an upcoming redesign.

> ...

> Meaningful phrases or sentences can usually be expressed in Ithkuil with fewer linguistic units than natural languages. For example, the two-word Ithkuil sentence "Tram-mļöi hhâsmařpţuktôx" can be translated into English as "On the contrary, I think it may turn out that this rugged mountain range trails off at some point."

Half as Interesting - How the World's Most Complicated Language Works https://youtu.be/x_x_PQ85_0k (length 6:28)

It reminds me of the difficulty of getting information on or off a blockchain. Yes, you’ve created this perfect logical world. But, getting in or out will transform you in unknown ways. It doesn’t make our world perfect.

> But this will only happen after the last programmer has died and no-one will remember programming languages, compilers, etc.

If we're 'lucky' there will still be some 'priests' around like in the Foundation novels. They don't understand how anything works either, but can keep things running by following the required rituals.

Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity

So, back to COBOL? :)

> So, back to COBOL? :)

Well, more like a structured _querying_ language.

So, back to Prolog? :)

> structured language that eliminates ambiguity

That has been tried for almost half a century in the form of Cyc[1] and never accomplished much.

The proper solution here is to provide the LLM with more context, context that will likely be collected automatically by wearable devices, screen captures and similar pervasive technology in the not so distant future.

These kinds of quick trick questions are exactly the thing humans fail at too, if you just ask them out of the blue without context.

[1] https://en.wikipedia.org/wiki/Cyc

> Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.

We've truly gone full circle here, except now our programming languages have a random chance for an operator to do the opposite of what the operator does at all other times!

One might think that a structured language is really desirable, but in fact one of the biggest mechanisms behind intelligence is stupidity. Let me explain: if you only innovate by piecing together the lego pieces you already have, you'll be locked into predictable patterns and will plateau at some point. In order to break out of this, we all know, there needs to be an element of randomness. This element needs to be capable of going in the at-the-moment-ostensibly wrong direction, so as to escape the plateau of mediocrity.

In LLM sampling this is accomplished by turning up the temperature (sketched below; simulated annealing uses the same trick for optimization). There are however many other layers that do this. Fallible memory, i.e. misremembering facts, is one thing. Failing to recognize patterns is another. Linguistic ambiguity is yet another, and that is a really big one (cf. the Sapir–Whorf hypothesis). It's really important to retain those methods of stupidity in order to be able to achieve true intelligence. There can be no intelligence without stupidity.
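As a concrete picture of that knob, a minimal sketch of temperature-scaled sampling (NumPy assumed):

    # Higher temperature flattens the distribution, so low-probability
    # ("ostensibly wrong") choices get sampled more often.
    import numpy as np

    def sample(logits, temperature=1.0, rng=np.random.default_rng()):
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [4.0, 1.0, 0.5]
    print([int(sample(logits, t)) for t in (0.2, 1.0, 5.0)])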

I believe this is the principle that makes biology such a superior technology.

> structured language that eliminates ambiguity... CODE! Wait....

>> Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.

Like a programming language? But that's the whole point of LLMs, that you can give instructions to a computer using natural language, not a formal language. That's what makes those systems "AI", right? Because you can talk to them and they seem to understand what you're saying, and then reply to you and you can understand what they're saying without any special training. It's AI! Like the Star Trek[1] computer!

The truth of course is that as soon as you want to do something more complicated than a friendly chat, you find it gets harder and harder to communicate exactly what you want. Maybe that's because of the ambiguity of natural language, maybe it's because "you're prompting it wrong", maybe it's because the LLM doesn't really understand anything at all and is just a stochastic parrot. Whatever the reason, at that point you find yourself wishing for a less ambiguous way of communicating, maybe a formal language with a full spec and a compiler, and some command line flags and debug tokens etc... and at that point it's not a wonderful AI anymore but a Good, Old-Fashioned Computer that only does what you want if you can find exactly the right way to say it. Like asking a Genie to make your wishes come true.

______________

[1] TNG duh.

> Like a programming language?

Does the next paragraph not make that clear?
