I do enjoy this sort of speculative fiction that imagines the future consequences of something that is in its early stages, like AI is right now. There are some interesting ideas in here about where the work will shift.
However, I do wonder if it is a bit too hung up on the current state of the technology and the issues we are facing today. For example, the idea that the AI-coded tools won't be able to handle (or even detect) that upstream data has changed format or methodology. Why wouldn't this be something that AI just learns to deal with? There is nothing inherent in the problem that is impossible for a computer to handle, and no reason to think AIs can't learn to code defensively for this sort of thing. Even if it requires active monitoring and remediation, surely even today's AIs could be programmed to watch for these sorts of changes and modify the existing code to match them when they occur. In the future, this will likely be even easier.
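To make the "code defensively" point concrete, here's a minimal sketch of the kind of check an AI-generated pipeline could include: validate the upstream record against an expected schema and surface drift instead of silently producing bad output. The field names and types are hypothetical, just for illustration.

```python
# Hypothetical expected schema for an upstream data feed.
EXPECTED_SCHEMA = {"station_id": str, "timestamp": str, "rainfall_mm": float}

def detect_schema_drift(record: dict) -> list[str]:
    """Return human-readable warnings describing how a record deviates
    from the expected schema (missing fields, type changes, new fields)."""
    warnings = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            warnings.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            warnings.append(
                f"type change: {field} is {type(record[field]).__name__}, "
                f"expected {expected_type.__name__}"
            )
    for field in record:
        if field not in EXPECTED_SCHEMA:
            warnings.append(f"new upstream field: {field}")
    return warnings

# Simulate an upstream provider switching rainfall to a string with units:
drifted = {"station_id": "A1", "timestamp": "2031-05-01", "rainfall_mm": "3.2mm"}
print(detect_schema_drift(drifted))
# → ['type change: rainfall_mm is str, expected float']
```

An agent that runs a check like this on each batch can flag the drift, and then (as suggested above) be prompted to patch the parsing code to match the new format.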
The same thing is true of the 'orchestration' job. People have already begun to solve this problem with the idea of a 'supervisor' agent that designs the overall system and delegates tasks to the sub-systems. The supervisor agent can create and enforce the contracts between the various sub-systems. There is no reason to think this won't get even better.
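As a rough sketch of what "enforce the contracts" might mean mechanically: the supervisor registers each sub-agent along with the output it is contractually required to produce, and rejects any result that breaks the contract. The agent names and contract shape here are hypothetical; real orchestration frameworks would add retries, logging, and richer schemas.

```python
from typing import Callable

class ContractViolation(Exception):
    """Raised when a sub-agent's output breaks its declared contract."""

class Supervisor:
    def __init__(self):
        self.agents: dict[str, Callable[[dict], dict]] = {}
        self.contracts: dict[str, set[str]] = {}  # keys each agent must produce

    def register(self, name: str, agent: Callable[[dict], dict],
                 must_produce: set[str]) -> None:
        self.agents[name] = agent
        self.contracts[name] = must_produce

    def run(self, name: str, payload: dict) -> dict:
        """Run a sub-agent and verify its output satisfies the contract."""
        result = self.agents[name](payload)
        missing = self.contracts[name] - result.keys()
        if missing:
            raise ContractViolation(
                f"{name} broke its contract: missing {sorted(missing)}")
        return result

# Hypothetical sub-agent that cleans sensor data:
def cleaner(payload: dict) -> dict:
    return {"rows": payload["rows"], "dropped": 0}

sup = Supervisor()
sup.register("cleaner", cleaner, must_produce={"rows", "dropped"})
print(sup.run("cleaner", {"rows": [1, 2, 3]}))
# → {'rows': [1, 2, 3], 'dropped': 0}
```

If a sub-agent is regenerated and stops emitting a required field, the supervisor catches it at the boundary rather than letting the bad output flow downstream.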
We are SO early in this AI journey that I don't think we can yet fully understand what is simply impossible for an AI to ever accomplish and what we just haven't figured out yet.
Yeah, in the real world, Tom is already an OpenClaw instance...
Funny I actually saw this tweet this morning about an Openclaw instance getting too advanced for the users to know how to control and fix: https://x.com/jspeiser/status/2033880731202547784?s=46&t=sAq...
> Funny I actually saw this tweet this morning about an Openclaw instance getting too advanced for the users to know how to control and fix: https://x.com/jspeiser/status/2033880731202547784?s=46&t=sAq...
I feel like this ultimately boils down to something similar to nocode vs code debates that you mention. (Is openclaw having these flowcharts similar to nocode territory?)
At some point, code is more efficient at doing this. Maybe people will then have this code itself generated by AI, but then once again you are one hallucination away from a security nightmare, or doesn't it just become an OpenClaw-type thing again?
But even after that, the question ultimately boils down to responsibility. AI can't bear responsibility, and there are projects that need responsibility, because that is how things stay secure.
I think the conclusion from this is that we need developers in the loop for the responsibility and checks, even if AI-generated code stays prevalent. We are already seeing some developers go ahead and call them slop janitors, in the sense that they will remove the slop from the codebase.
I do believe the underlying reason is responsibility. We need someone to be accountable for the code, someone who understands it well enough to prevent things from going south. For almost anything production-related (not just basic tinkering), that kind of security is necessary.
Yeah, responsibility and accountability are also areas I'd like to explore. I'm mostly digging through this artifact I created with Claude to look at first-order and second-order effects, and then "traffic jams" in the "good science fiction doesn't predict the car, it predicts the traffic jam" sense, and what kinds of roles might pop up to solve those issues: https://claude.ai/public/artifacts/39e718fa-bc4b-4f45-a3d5-5...
I've mostly been digging through my own version of that, trying to find things I find interesting and seeing what kinds of stories we can build about what a day in one of those jobs might look like.
>>There is no reason to think AIs can't learn how to code defensively for this sort of thing.
For the exact same reason that two departments in a company, with absolutely no technical barrier to talking to each other and exchanging data, still haven't done so in 20 years because of <whatever>.
The idea that farmers will just buy "AI" as a blob that is meant to do one thing, and that these blobs will never interact with each other because they weren't designed to (as in: John Deere really doesn't want their AI blob to talk to the AI blob made by someone else, even if there is literally no technical reason it shouldn't be possible), seems like the most likely way things will go. It's how we've been operating for a long time, and AI won't change it.
> The supervisor agent can create and enforce the contracts between the various sub-systems.
Or you can ask the agent to do this after each round. Or before a deploy. They are great at performing analysis.