They have an Excel sheet next to it - they can test it against that. Plus they can ask questions if something seems off and have it explain the code.
I'm not sure being able to verify that it's vaguely correct really solves the issue. Consider how many edge cases inhabit a "30 sheet, mind-numbingly complicated" Excel document. Verifying equivalence sounds nontrivial, to put it mildly.
They don't care. This is clearly someone looking to score points and impress with the AI magic trick.
The best part is that they can say the AI will get some stuff wrong, they knew that, and it's not their fault when it breaks. Or more likely, it'll break in subtle ways, nobody will ever notice and the consequences won't be traced back to this. YOLO!
Consider how many edge cases the original sheet itself misses. Exact equivalence probably shouldn't be the top priority here.
Equivalence here would definitely be the worst test, except for all the alternatives.
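For what it's worth, "test it against that" can be fairly mechanical: export the sheet's inputs and outputs to CSV and diff them against the ported code row by row. A minimal sketch in Python, assuming the ported logic is a function like monthly_payment and the export is excel_export.csv (both hypothetical names, not from the original post):

    # Hypothetical spot-check: compare a ported calculation against rows
    # exported from the original spreadsheet. Names (monthly_payment,
    # "excel_export.csv", the column headers) are placeholders.
    import csv
    import math

    def monthly_payment(principal, annual_rate, months):
        # Standard amortization formula; stand-in for whatever the sheet computes.
        r = annual_rate / 12
        if r == 0:
            return principal / months
        return principal * r / (1 - (1 + r) ** -months)

    mismatches = 0
    with open("excel_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            expected = float(row["payment"])
            got = monthly_payment(float(row["principal"]),
                                  float(row["rate"]),
                                  int(row["months"]))
            if not math.isclose(got, expected, rel_tol=1e-9, abs_tol=1e-6):
                mismatches += 1
                print(f"row {row}: expected {expected}, got {got}")

    print(f"{mismatches} mismatching rows")

It won't prove equivalence, but it catches the gross breakage cheaply, which is more than most of the alternatives do.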
> They have an excel sheet next to it - they can test it against that.
It used to be that we'd fix the copy-paste bugs in the Excel sheet when we converted it to a proper model. Good to know that we'll now preserve them forever.
You would be surprised at the volume of money made by businesses supported by Excel.
Yes. I suspect there are thousands of Excel files that "process" >$1bn/yr out there.
Allow me to introduce you to ACH. It is truly fascinating.