“If it takes longer to explain to the system all the things you want to do and all the details of what you want to do, then all you have is just programming by another name”
If it's taking you that long to direct the AI, then you're either throwing too small a problem at it, too big a problem at it, or not directing its attention properly.
In a way, your prompts should feel like writing user documentation:
> Refactor all of the decode functions in `decoder.rs` to return the number of bytes decoded in addition to the decoded values they already return. While refactoring, follow these principles:
>
> * Use established best practices and choose the most idiomatic approaches.
> * Avoid using `clone` or `unwrap`.
> * Update the unit tests to account for the extra return values, and make sure they actually check them.
>
> When you're finished, run `clippy` and fix any issues it finds. Then run `rustfmt` on the crate.
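To make that kind of change concrete, here's a minimal sketch of one decode function before and after such a refactor. The thread never shows `decoder.rs`, so `decode_u32`, `DecodeError`, and the little-endian framing are all assumptions for illustration:

```rust
#[derive(Debug, PartialEq)]
enum DecodeError {
    UnexpectedEof,
}

// Before (hypothetical): the function returned only the decoded value.
// fn decode_u32(input: &[u8]) -> Result<u32, DecodeError>

// After: the decoded value comes back alongside the number of bytes
// consumed, with no clone or unwrap, per the prompt's constraints.
fn decode_u32(input: &[u8]) -> Result<(u32, usize), DecodeError> {
    let bytes: [u8; 4] = input
        .get(..4)
        .ok_or(DecodeError::UnexpectedEof)?
        .try_into()
        .map_err(|_| DecodeError::UnexpectedEof)?;
    Ok((u32::from_le_bytes(bytes), 4))
}

#[cfg(test)]
mod tests {
    use super::*;

    // The updated test asserts on the extra return value, as the prompt asks.
    #[test]
    fn decode_u32_reports_bytes_consumed() {
        assert_eq!(decode_u32(&[1, 0, 0, 0, 99]), Ok((1, 4)));
        assert_eq!(decode_u32(&[1, 0]), Err(DecodeError::UnexpectedEof));
    }
}
```

A prompt like the one above spells out this shape of change once, then lets the model apply it uniformly across every decode function in the file.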
I feel you've done a decent job of disguising "you're not approaching it with the right mindset".
It's more "you need to learn how a tool really works in order to use it most effectively".
Talking to a chatbot like you'd talk to a human is a quick way to be disappointed. Chatbots don't work that way, despite sometimes sounding like they do.
But don't forget, "we don't know how it really works".
More like "it is impossible to infer anything useful from what we do know about how they work".
But the end result is the same.
My read was along the lines of "you're holding it wrong".
And then, for some bizarre reason, sometimes it doesn't really work: the AI adds a bunch of random shit, and you can feel your rage bubbling up in real time as you have to re-prompt it.
I've yet to have that happen. But then again, so far I've only used it for Rust, and it's hard for the AI to maintain hallucinations with such a strict compiler.
You're holding it wrong, magic robot edition.