Amazing comment.
As a non-hardware guy, my read was, "well, duh, for a 20-year practitioner dealing with the intricacies of a specific FPGA series, all this makes tons of sense."
It only makes sense to me because I tried to implement a RISC-V core on these Gowin FPGAs, banged into the limitations, and can distill them down. A junior engineer looks at this post-AI, shrugs, and says "I'm done."
The AI doesn't flag "Hey, my adder sucks. Move to a better FPGA architecture." A junior engineer pre-AI would have to bang on this for a while, get frustrated at the critical paths, and eventually ask for help. At which point we would both look at this, identify that the adder was a 32-bit ripple carry, both have a "WTF?!" moment, and switch FPGA families.
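To make the ripple-carry problem concrete, here's a toy Python model (not the actual FPGA netlist, just an illustration): each full adder's carry-out feeds the next bit's carry-in, so the worst-case logic depth grows linearly with the adder width, and that carry chain is exactly the critical path that kills Fmax.

```python
def ripple_carry_add(a: int, b: int, width: int = 32):
    """Model a ripple-carry adder bit by bit.

    Returns (sum, chain_length), where chain_length counts how many
    full-adder stages the carry actually propagated through -- a rough
    stand-in for the critical-path depth of this input pair.
    """
    carry = 0
    result = 0
    chain = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        s = ai ^ bi ^ carry                       # sum bit of this full adder
        carry = (ai & bi) | (carry & (ai ^ bi))   # carry-out feeds next stage
        if carry:
            chain += 1  # this stage's carry must settle before the next bit can
        result |= s << i
    return result & ((1 << width) - 1), chain

# Worst case: 0xFFFFFFFF + 1 ripples the carry through all 32 stages.
val, depth = ripple_carry_add(0xFFFFFFFF, 1)
print(val, depth)  # 0 32
```

A carry-lookahead or carry-select structure (or an FPGA family with fast dedicated carry logic) cuts that depth from O(width) to something far shallower, which is the whole point of switching families.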
The AI also doesn't flag how close to the margin you are. To my eye, almost all the Fmax gains look like PnR (place and route) noise. The DIV/REM obviously isn't, and the replay predictor looks real. To top it off, the branch predictor wins look anomalously low.
This is what a bunch of us are yelling about with AI. AI gets you a thing. AI gets you no insight into that thing. And because the juniors will use the AI, they will never learn the insight.
Side note: The granularity of the CM/MHz numbers looks a bit suspicious. Why are there identical entries?