Humans also regularly drop hard requirements you specify, and similarly require review. Nevertheless, we manage to increase the reliability of human output through processes and reviews, and most of the methods we use for harnesses are taken from our experience reducing reliability issues in humans, who are notoriously difficult to get to deliver reliably.
The primary way to increase reliability is to automate. Instead of humans producing some output manually, humans produce machines which produce that output.
I've seen a disturbing trend where a process that could've been a script or a requirement that could've been enforced deterministically is in fact "automated" through a set of instructions for an LLM.
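As a concrete sketch of that point (the ticket-ID rule and regex here are hypothetical, purely for illustration), a requirement like "every commit message must reference a ticket" can be enforced with a deterministic check instead of an instruction in an LLM prompt:

```python
import re

# Deterministic check: every commit message must reference a ticket ID
# like "ABC-123". Unlike an instruction buried in an LLM prompt, this
# rule is never "forgotten". (The ticket format is a made-up example,
# not any real project's convention.)
TICKET_RE = re.compile(r"\b[A-Z]{2,}-\d+\b")

def check_commit_message(message: str) -> bool:
    """Return True if the message references at least one ticket ID."""
    return bool(TICKET_RE.search(message))

# A pre-commit hook or CI step can then reject bad messages outright:
for msg in ["Fix login timeout (ABC-123)", "fix stuff"]:
    status = "ok" if check_commit_message(msg) else "REJECT"
    print(f"{status}: {msg}")
```

Twenty lines of script like this either passes or fails, every time; the equivalent sentence in a prompt only probably gets applied.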
Sure, when that is possible. However, there are lots of processes we don't know how to automate in a deterministic way. Hence the vast investment in building organisations of people with mechanisms to make people's output more reliable through structure, reviews, and so on.
Large parts of human civilization rest on our ability to make something unreliable less unreliable through organisational structure and processes.
We resolve that through liability, penalties, trust, responsibility, review and oversight.
At the end of the day, if I am spending $X on automation, I want to be able to sleep at night knowing my factory will not build a WMD or delete itself.
If it's simply a tool that is a multiplier for experts, then do I really need it? How much does it actually make my processes more efficient, faster, or more capable of earning revenue?
There is a LOT that is forgiven when tech is new - but at some point the shiny newness falls off and it is compared to alternatives.
Liability, penalties, trust, and responsibility are means we use to try to influence the application of the processes that do affect reliability. They do not affect reliability directly, and they can be applied just as much to a team using AI as to one that does not.
Review and oversight do address reliability directly, which is why we use them in processes to improve the reliability of mechanical processes as well, and why they are core elements of AI harnesses.
> If it's simply a tool that is a multiplier for experts, then do I really need it? How much does it actually make my processes more efficient, faster, or more capable of earning revenue?
You can ask the same thing about all the supporting staff around the experts in your team.
> There is a LOT that is forgiven when tech is new - but at some point the shiny newness falls off and it is compared to alternatives.
Only teams without mature processes are not doing that for AI today.
Most of the deployments of AI I work on are the outcome of comparing it to alternatives, and are often part of initiatives to increase the reliability of human teams just as much as raw productivity, because the two are often one and the same.
Underrated comment.
So many applications of LLMs don't even start with a deterministic foundation when using a non-deterministic LLM, and then people wonder why it's not working.
it's strange to see software engineers using skills (i.e. human descriptions of small scripts) instead of scripting things directly. often there have been CLIs / tools / libraries doing what a skill does for many years. maybe it's a culture issue: people who enjoy automation / devops / predictability will naturally help themselves, but other people just want to "delegate" and be done without trying.
Because certain aspects (both are error-prone) are similar and comparable. The notion that two entities need to be close in ability for it to be possible to compare them is nonsense.
You make the point for me: we managed to put men on the moon despite humans being enormously unreliable and error-prone, because we built systems around them that allowed us to harness the good bits and reduce the failures to acceptable levels.
We are - I am anyway - using our lessons from building reliable systems from unreliable elements to raise the reliability of outputs of LLMs the same way.
> We are - I am anyway - using our lessons from building reliable systems from unreliable elements to raise the reliability of outputs of LLMs the same way.
:) :) :) I could tell immediately you are somehow invested in the "success" of the LLM. So $600B and five years later, can you tell me how far you guys got? The Apollo programme cost a tiny fraction of that and started putting people on the moon some ~10 years later. Would you say you are on the way to accomplishing something similar in the next five years?
Calm down. They were comparing a very specific and narrow aspect of both. Not totally equivalent maybe, but that doesn't justify a tantrum.
I am incredibly calm. I just wonder at the idiots who think they should compare the magnificently efficient human brain to the shitslop machines.