That is why you check your results. If you know what the end outcome should be, it doesn't matter if the model hallucinates. Even if it does, it has probably already done 90% of the work, which leaves you less to finish yourself.
This only works for classes of problems where checking the answer is easier than doing the calculation: things like making a visualization, writing simple functions, and so on. For those, it’s definitely easier to use an LLM.
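To illustrate the asymmetry being described, here is a hypothetical sketch: suppose an LLM produced a small run-length-encoding helper (the function and its name are invented for this example). Stating the expected outcomes and spot-checking them takes seconds, far less effort than writing the function from scratch.

```python
# Hypothetical LLM-written helper (invented for illustration).
def rle_encode(s: str) -> list[tuple[str, int]]:
    """Run-length encode a string, e.g. 'aab' -> [('a', 2), ('b', 1)]."""
    runs: list[tuple[str, int]] = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            # Extend the current run of identical characters.
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            # Start a new run.
            runs.append((ch, 1))
    return runs

# Spot checks: the expected outcome is easy to state up front.
assert rle_encode("") == []
assert rle_encode("aaabcc") == [("a", 3), ("b", 1), ("c", 2)]
# Round-trip property: decoding the runs must reproduce the input.
assert "".join(ch * n for ch, n in rle_encode("mississippi")) == "mississippi"
```

The round-trip check is the key move: it verifies the output without re-deriving it by hand.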
But a lot of software isn’t like that. Subtle bugs can creep in along the way, so verifying the code is at least as hard as writing it in the first place, and likely harder, since for most people reading code is harder than writing it.
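A hypothetical illustration of the kind of subtle bug that survives casual review (the function and test values are invented for this example): the leap-year helper below passes the obvious spot checks, yet it omits the century rule, so verifying it properly requires knowing the full specification, not just eyeballing the code.

```python
# Hypothetical buggy helper (invented for illustration).
def is_leap_year(year: int) -> bool:
    # Looks plausible and passes common cases, but omits the century rule:
    # years divisible by 100 are not leap years unless divisible by 400.
    return year % 4 == 0

# These casual spot checks all pass...
assert is_leap_year(2020)
assert not is_leap_year(2021)
assert is_leap_year(2024)
# ...yet the function is wrong: it reports 1900 as a leap year,
# even though 1900 was not one (divisible by 100 but not by 400).
assert is_leap_year(1900)  # the bug, confirmed rather than caught
```

The point: a reviewer who only checks the easy cases signs off on broken code, which is why verification here is as expensive as writing it correctly in the first place.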
Exactly, thank you.