Because when a project is done in 10 minutes by an LLM, it will be abandoned in a week.
When a person does it intentionally and spends a month or two, they are far more likely to support it, since they created the project with some intention in the first place.
With LLMs this is not the case.
Why are you entitled to ongoing support of a free tool?
How long are you entitled to such support?
What does “support” mean to you, exactly?
If the tool works for you already, why do you need support for it?
A bug from slop could cost $10K
So could a bug introduced by a human being. What's the difference?
Accountability is the difference.
An LLM is just an agent. The principal is held accountable. There’s nothing really all that novel here from a liability perspective.
That was my point exactly. I just didn’t write it as precisely as you.
Then I don’t understand. My point was that it doesn’t matter whether the machine or the human actually wrote the code; liability for any injury ultimately remains with the human that put the agent to work. Similarly, if a developer at a company wrote code that injured you, and she wrote that code at the direction of the company, you don’t sue the developer, you sue the company.
How exactly do end users hold AWS devs / AWS LLMs accountable?
The human
How much would a bug from a human cost?
I’d be willing to bet the classes of bugs introduced would be different for humans vs LLMs. You’d probably see fewer low-level bugs (such as off-by-one errors), but more cases where the business logic or other higher-level concerns are wrong.
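For what it’s worth, here is a toy Python sketch of the two classes; the function names and the discount rule are invented for illustration, not taken from any real codebase:

```python
def sum_prices_off_by_one(prices):
    # Low-level bug: the off-by-one in range() silently skips the last item.
    total = 0
    for i in range(len(prices) - 1):  # should be range(len(prices))
        total += prices[i]
    return total


def apply_discount_wrong_rule(total, is_member):
    # Higher-level bug: the code runs fine, but the business rule is inverted.
    # Members were supposed to get 10% off; instead everyone else gets it.
    if is_member:
        return total        # forgot to apply the member discount
    return total * 0.9      # and gave it to non-members instead
```

The first kind tends to be caught by tests or a careful review of the loop; the second passes any test that encodes the same misunderstanding of the requirement.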