> High level - having a discussion with the LLM about different approaches and the tradeoffs between each
I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(n log n)", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?
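And "spotting the bullshit" isn't free. Checking even that one claim means sitting down and measuring it yourself. Here's a minimal sketch of what that looks like -- the duplicate-detection task is made up purely for illustration -- timing a naive O(n^2) scan against an O(n log n) sort-based approach:

```python
import random
import time

def has_duplicate_quadratic(xs):
    # Compare every pair: O(n^2) comparisons.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_sorted(xs):
    # Sort first, then scan adjacent elements: O(n log n) overall.
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

for n in (1_000, 2_000, 4_000):
    data = random.sample(range(10 * n), n)  # all-distinct input: worst case for both
    for fn in (has_duplicate_quadratic, has_duplicate_sorted):
        start = time.perf_counter()
        fn(data)
        print(f"n={n:>5}  {fn.__name__}: {time.perf_counter() - start:.4f}s")
```

Doubling n roughly quadruples the quadratic version's runtime while the sorted one barely moves. That's the shape of the verification -- and notice it's you doing the work, not the AI.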
> Once all this is set up it often spits out something that compiles and works first try
I'm sorry, but I'm *extremely* doubtful that it actually works in any real sense. The fact that you even use "compiles and works first try" as some sort of metric for the code it's producing shows how easily it could slip in awful braindead bugs without you ever knowing. You run it and it appears to work!? The way to know whether something works -- not first try, but every try -- is to understand every character in the code. If that is your standard -- and it must be -- then isn't the AI just slowing you down?
I don't code for a living, and I'm probably worse than a fresh grad would be, but I use:
"Please don't generate or rewrite code, I just want to discuss the general approach."
Because I don't know any design patterns or idiomatic approaches, being able to discuss them is amazing.
Though the quality and consistency of the responses are another thing... :)
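For what it's worth, you can pin that instruction so you don't retype it every session. A minimal sketch, assuming the openai Python SDK (v1+) -- the model name and the sample question are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # Standing instruction that keeps the model in discussion mode
        {"role": "system",
         "content": "Please don't generate or rewrite code, I just want to discuss the general approach."},
        # Placeholder question -- swap in whatever you're actually weighing up
        {"role": "user",
         "content": "I need to dedupe records across two large CSV files. What approaches should I consider?"},
    ],
)
print(response.choices[0].message.content)
```

The same idea works in ChatGPT's custom instructions or any chat UI with a persistent system prompt.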
> I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(n log n)", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?
Being confidently incorrect is not a unique characteristic of AIs; plenty of humans do it too. Being able to spot the bullshit is a core part of the job. If you can't spot the bullshit from an AI, I wouldn't trust you to spot the bullshit from a coworker.
But if I have a coworker who bullshits 30% of the time, I get them off my project, because they too are just slowing everything down.
It can list tradeoffs and approaches you might have forgotten. That's the big use case for me.