You would be surprised, however, at how much detail humans also need to understand each other. We often want AI to just "understand" us in ways that many people would not initially have understood us either, without extra communication.
People poorly specifying problems and having bad models of what the other party can know (and then being surprised by the outcome) is certainly a more general albeit mostly separate issue.
This issue is the main reason why a big percentage of jobs in the world exist. I don't have hard numbers, but my intuition is that about 30% of all jobs are mainly "understand what side A wants and communicate this to side B, so that they understand". Or another perspective: almost all jobs that are called "knowledge work" are like this. Software development is mainly this: side A is humans, side B is the computer. The main goal of AI seems to be to get into this space and make a lot of people superfluous, which also (partly) explains why everyone is pouring this amount of money into AI.
Developers are - on average - terrible at this. If they weren't, TPMs, Product Managers, CTOs, none of them would need to exist.
It's not specific to software, it's the entire world of business. Most knowledge work is translation from one domain/perspective to another. Not even just knowledge work, actually. I've been reading some works by Adler[0] recently, and he makes a strong case for "meaning" only making sense to humans, with each human having a completely different and isolated "meaning" for even the simplest of things, like a piece of stone. If there is difference and nuance to be found when it comes to a rock, what hope have we got when it comes to deep philosophy or the design of complex machines and software?
LLMs are not very good at this right now, but if they became a lot better at it, they would a) become more useful and b) the work done to get them there would tell us a lot about human communication.
[0] https://en.wikipedia.org/wiki/Alfred_Adler
> Developers are - on average - terrible at this. If they weren't, TPMs, Product Managers, CTOs, none of them would need to exist.
This is not really true, in fact products become worse the farther away from the problem a developer is kept.
Best products I worked with and on (early in my career, before getting digested by big tech) had developers working closely with the users of the software. The worst were things like banking software for branches, where developers were kept as far as possible from the actual domain (and decision making) and driven with endless sterile spec documents.
Yet IDEs are some of the worst things in the world. From Emacs to Eclipse to Xcode, they are almost all bad - yet they are written by devs for devs.
Unfortunately, they are written by IDE-devs for non-IDE-devs.
I disagree, I feel (experienced) developers are excellent at this.
It's always about translating between our own domain and the customer's, and with every other new project there's a new domain to get up to speed on, in enough detail to understand what to build. What other professions do that?
That's why I'm somewhat scared of AIs - they know like 80% of the domain knowledge in any domain.
I think developers are usually terrible at it only because they are way too isolated from the user.
If they had the chance to take the time to have a good talk with the actual users it would be different.
The typical job of a CTO is nowhere near "finding out what business needs and translate that into pieces of software". The CTO's job is to maintain an at least remotely coherent tech stack in the grand scheme of things, to develop the technological vision of a company, to anticipate larger shifts in the global tech world and project those onto the locally used stack, constantly distilling that into the next steps to take with the local stack in order to remain competitive in the long run. And of course to communicate all of that to the developers, to set guardrails for the less experienced, to allow and even foster experimentation and improvements by the more experienced.
The typical job of a Product Manager is also not to directly perform this mapping, although the PM is much closer to that activity. PMs mostly need to enforce coherence across an entire product with regard to the ways of mapping business needs to software features that are being developed by individual developers. They still usually involve developers to do the actual mapping, and don't really do it themselves. But the Product Manager must "manage" this process, hence the name, because without anyone coordinating the work of multiple developers, those will quickly construct mappings that may work and make sense individually, but won't fit together into a coherent product.
Developers are indeed the people responsible for finding out what business actually wants (which is usually not equal to what they say they want) and mapping that onto a technical model that can be implemented as a piece of software - or multiple pieces, if we talk about distributed systems. Sometimes they get some help from business analysts, a role very similar to a developer that puts more weight on the business side of things and less on the coding side - but in a lot of team constellations they're also single-handedly responsible for the entire process.

Good developers excel at this task and find solutions that really solve the problem at hand (even if they don't exactly follow the requirements or have to fill in gaps), fit well into an existing solution (even if that means bending some requirements again, or changing parts of the solution), are maintainable in the long run, and maximize the chance of being extendable in the future when the requirements change. Bad developers just churn out some code that might satisfy some tests, and may even roughly do what someone else specified, but fails to be maintainable, impacts other parts of the system negatively, and often fails to actually solve the problem, because what business described they needed turned out to once again not be what they actually needed. The problem is that most of these negatives don't show their effects immediately, but only weeks, months or even years later.
LLMs currently are on the level of a bad developer. They can churn out code, but not much more. They fail at the more complex parts of the job, basically all the parts that make "software engineering" an engineering discipline and not just a code-generation endeavour, because those parts require adversarial thinking, which is what separates experts from everyone else. The following article was quite an eye-opener for me on this particular topic: https://www.latent.space/p/adversarial-reasoning - I highly suggest anyone working with LLMs read it.
This is why we fed it the whole internet and every library as training data...
By now it should know this stuff.
Future models know it now, assuming they suck in Mastodon and/or Hacker News.
Although I don't think they actually "know" it. This particular trick question will be in the bank, just like the seahorse emoji or how many Rs there are in "strawberry". Did they start reasoning and generalising better, or did the publishing of the "trick" and the discourse around it paper over the gap?
I wonder if in the future we will trade these AI tells like 0days, keeping them secret so they don't get patched out at the next model update.
The answer can be “both”.
They won’t get this specific question wrong again; but also they generalise, once they have sufficient examples. Patching out a single failure doesn’t do it. Patch out ten equivalent ones, and the eleventh doesn’t happen.
Yeah, the interpolation works if there are enough close examples around it. The problem is that the dimensionality of the space you are trying to interpolate in is so incomprehensibly big that even after training on all of the internet, you are always going to have stuff that just doesn't have samples close by.
Even I don’t “know” how many “R”s there are in “strawberry”. I don’t keep that information in my brain. What I do keep is the spelling of the word “strawberry” and the skill of being able to count so that I can derive the answer to that question anytime I need.
Right. The equivalent here, for this problem, would be something like asking for context. And the LLM response should've been:
"Well, you need your car to be at the car wash in order to wash it, right?"
For many words I can't say the number of each letter; I only have an abstract memory of how they look, so when I write, say, "strawbery", I just realize it looks odd and correct it.
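The "store the spelling, derive the count on demand" idea described a couple of comments up can be sketched in a few lines of Python (a toy illustration with a made-up helper name, not a claim about how LLMs or human memory actually work):

```python
# Toy sketch: keep only the spelling, derive letter counts when asked.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

The point is just that the count is computed from the stored spelling each time, rather than memorized as a separate fact.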
Right. But, unlike AI, we are usually aware when we're lacking context and inquire before giving an answer.
Wouldn't that be nice. I've been party and witness to enough misunderstandings to know that this is far from universally true, even for people like me who are more primed than average to spot missing context.
I never said it's universally true.
TIL my wife may be AI!
> You would be surprised, however, at how much detail humans also need to understand each other.
But in this given case, the context can be inferred. Why would I ask whether I should walk or drive to the car wash if my car is already at the car wash?
But also, why would you ask whether you should walk or drive if the car is at home? Either way the answer is obvious, and there is no way to interpret it except as a trick question. Of course, the parsimonious assumption is that the car is at home, so assuming that the car is at the car wash is a questionable choice to say the least (otherwise there would be two cars in the situation, which the question doesn't mention).
But you're ascribing understanding to the LLM, which is not what it's doing. If the LLM understood you, it would realise it's a trick question and, assuming it was British, reply with "You'd drive it because how else would you get it to the car wash you absolute tit."
Even the higher-level reasoning models, while answering the question correctly, don't grasp the larger context: that the question is obviously a trick question. They still answer earnestly. Granted, it is a tool that is doing what you want (answering a question), but let's not ascribe more understanding than what is clearly observed - and than what we know about how LLMs work.
> They still answer earnestly.
Gemini at least is putting some snark into its response:
“Unless you've mastered the art of carrying a 4,000-pound vehicle over your shoulder, you should definitely drive. While 150 feet is a very short walk, it's a bit difficult to wash a car that isn't actually at the car wash!”
Marketing plan comes to mind for labs: find AI tells, fix them, & astroturf on socials that only _your_ frontier model really understands the world
I think a good rule of thumb is to default to assuming a question is asked in good faith (i.e. it's not a trick question). That goes for human beings and chat/AI models.
In fact, it's particularly true for AI models because the question could have been generated by some kind of automated process. e.g. I write my schedule out and then ask the model to plan my day. The "go 50 metres to car wash" bit might just be a step in my day.
> I think a good rule of thumb is to default to assuming a question is asked in good faith (i.e. it's not a trick question).
Sure, as a default this is fine. But when things don't make sense, the first thing you do is toss those default assumptions (and probably we have some internal ranking of which ones to toss first).
The normal human response to this question would not be to take it as a genuine question. For most of us, this quickly trips into "this is a trick question".
Rule of thumb for who, humans or chatbots? For a human, who has their own wants and values, I think it makes perfect sense to wonder what on earth made the interlocutor ask that.
Rule of thumb for everyone (i.e. both). If I ask you a question, start by assuming I want the answer to the question as stated unless there is a good reason for you to think it's not meant literally. If you have a lot more context (e.g. you know I frequently ask you trick or rhetorical questions or this is a chit-chat scenario) then maybe you can do something differently.
I think being curious about the motivations behind a question is fine but it only really matters if it's going to affect your answer.
Certainly when dealing with technical problem solving I often find myself asking extremely simple questions and it often wastes time when people don't answer directly, instead answering some completely different other question or demanding explanations why I'm asking for certain information when I'm just trying to help them.
> Rule of thumb for everyone (i.e. both).
That's never been how humans work. Going back to the specific example: the question is so nonsensical on its face that the only logical conclusion is that the asker is taking the piss out of you.
> Certainly when dealing with technical problem solving I often find myself asking extremely simple questions and it often wastes time when people don't answer directly
Context and the nature of the questions matters.
> demanding explanations why I'm asking for certain information when I'm just trying to help them.
Interestingly, they're giving you information with this. The person you're asking doesn't understand the link between your question and the help you're trying to offer. This is manifesting as a belief that you're wasting their time and they're reacting as such. Serious point: invest in communication skills to help draw the line between their needs and how your questions will help you meet them.
Sure, in a context in which you're solving a technical problem for me, it's fair that I shouldn't worry too much about why you're asking - unless, for instance, I'm trying to learn to solve the question myself next time.
Which sounds like a very common, very understandable reason to think about motivations.
So even in that situation, it isn't simple.
This probably sucks for people who aren't good at theory-of-mind reasoning. But perhaps surprisingly, that isn't the case for chatbots. They can be creepily good at it, provided they have the context - they just aren't instruction-tuned to ask short clarifying questions in response to a question, which humans do, and which would solve most of these gotchas.
Therefore the correct response would be to inquire back to clarify the question being asked.
Given that an estimated 70% of human communication is non-verbal, it's not so surprising though.
Does that stat predate the modern digital age by a number of years?
I regularly tell new people at work to be extremely careful when making requests through the service desk — manned entirely by humans — because the experience is akin to making a wish from an evil genie.
You will get exactly what you asked for, not what you wanted… probably. (Random occurrences are always a possibility.)
E.g.: I may ask someone to submit a ticket to “extend my account expiry”.
They’ll submit: “Unlock Jiggawatts’ account”
The service desk will reset my password (and neglect to tell me), leaving my expired account locked out in multiple orthogonal ways.
That’s on a good day.
Last week they created Jiggawatts2.
The AIs have got to be better than this, surely!
I suspect they already are.
People are testing them with trick questions while the human examiner is on edge, aware of and looking for the twist.
Meanwhile ordinary people struggle with concepts like "forward my email verbatim instead of creatively rephrasing it to what you incorrectly thought it must have really meant."
There's a lot of overlap between the smartest bears and the dumbest humans. However, we would want our tools to be more useful than the dumbest humans...