You're still operating with layers of lexical abstraction and indirection: models full of dated syntactic and semantic concepts about software that waste cycles.
Ultimately those are useless layers of state that inevitably complicate the process toward whatever goal you set out to test for.
In chip-design land we're focused on streamlining the stack down to drawing geometry. Drawing will be faster when the machine doesn't also lose cycles to state management built from decades of programmer opinions.
When the only decisions are to extend or delete a bit of geometry, we will eliminate more (still not all) hallucinations and false positives than we do by trying to organize syntax that carries subtly different importance for everyone (misunderstanding fosters hallucination).
Most software out there is developer tools and frameworks; they exist to do a job.
Most users just want something like an automated Blender that handles 80% of an ask ("look like a word processor", "look like a video game"), that they can then customize, and that has a "play" mode that switches out of edit mode. That's the future machine and model we intend to ship. Fonts are just geometric coordinates. Memory matrices and pixels are just geometric coordinates. The system state is just geometric coordinates[1].
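To make "fonts are just geometric coordinates" concrete: a TrueType-style glyph outline is nothing but (x, y) control points on quadratic Bézier curves, and getting it on screen is sampling that geometry down to the pixel grid. A minimal sketch, with made-up coordinates rather than data from any real font:

    # Sketch of a glyph outline as pure geometry: TrueType-style outlines are
    # quadratic Bezier segments defined by (x, y) control points. Coordinates
    # below are invented for illustration. Sampling the curve is only a sketch;
    # real rasterizers also handle fill rules, hinting, and antialiasing.

    from typing import List, Tuple

    Point = Tuple[float, float]

    def quad_bezier(p0: Point, p1: Point, p2: Point, t: float) -> Point:
        """Evaluate one quadratic Bezier segment at parameter t in [0, 1]."""
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        return (x, y)

    # One outline segment of a hypothetical glyph: start, off-curve control, end.
    segment = ((0.0, 0.0), (50.0, 120.0), (100.0, 0.0))

    # "Rendering" here is just sampling geometry toward the pixel grid.
    samples: List[Point] = [quad_bezier(*segment, t=i / 10) for i in range(11)]
    print(samples)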
Text-driven software engineering modeled on 1960s-1970s job routines, layering indirection over the mathematical state of the machine, is not high tech in 2025 and beyond. If programmers were car people they would all insist a Model T is the only real car.
Cue the Upton Sinclair quote about never getting a man to understand something when his salary depends on his not understanding it.
Intelligence gave rise to language; language does not give rise to intelligence. Memorization, and the vain sense of accomplishment that follows it, is all there is to language.
[1]https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...
I'm not sure I follow this entirely, but if the assertion is that "everything is math" then yeah, I totally agree. Where I think language comes in is as the medium best situated to assign objects to locations in vector space; we get to borrow hundreds of millions of encoded relationships. How can you plot MAN against FATHER against GRAPEFRUIT using math alone, without circumnavigating the whole of human experience?
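For what it's worth, once the coordinates exist the plotting itself is simple geometry; the hard part, as you say, is that the coordinates come from human language use. A toy sketch with invented 4-dimensional vectors (real embedding models such as word2vec or GloVe learn hundreds of dimensions from corpus statistics):

    # Toy sketch of "language assigns objects to locations in vector space":
    # words get coordinates, and relatedness falls out of geometry (cosine
    # similarity). These vectors are invented for illustration only.

    import math

    embeddings = {
        "man":        [0.8, 0.1, 0.3, 0.0],
        "father":     [0.7, 0.2, 0.6, 0.1],
        "grapefruit": [0.0, 0.9, 0.1, 0.7],
    }

    def cosine(a, b):
        """Cosine similarity: the angle between two coordinate vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    print(cosine(embeddings["man"], embeddings["father"]))      # high: related concepts
    print(cosine(embeddings["man"], embeddings["grapefruit"]))  # low: unrelated concepts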
When I write for an unknown audience, unable to know in advance which terms they rely on, I tend to circumlocute to build emotional subtext. They might only get some percentage of it, but the terms may be familiar enough to act as middleware to the rest.
The words "man", "father", and "grapefruit" aren't essential to the existence of man, father, or grapefruit. All three existed before language.
What you mean by "human experience" is "the bird song my culture uses to describe shared space". Leave meaning to be debated in meatspace; include the current geometry of the language in the model and just make it mutable.
The machine can just focus on rendering geometry to the pixel limit of the hardware using electrical theory; it doesn't need to care internally whether it's text with meaning. It's only represented that way on the screen anyway. Compress the required information down to a geometric representation and don't anthropomorphize machine-state manipulation.