Ugh, quick, everyone start panic-buying FPGAs now.
The largest FPGAs have on the order of tens of millions of logic cells/elements. They're not even remotely big enough to emulate these designs, except to validate small parts of them at a time. And unlike memory chips or GPUs, companies don't need millions of them to scale infrastructure.
(The chips also cost tens of thousands of dollars each.)
They also aren't power-friendly.
Deep Differentiable Logic Gate Networks
I see you and I raise approximate logic synthesis [1] [2].
[1] https://www.sciencedirect.com/science/article/pii/S138376212...
[2] https://arxiv.org/abs/2506.22772
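To make the idea concrete, here is a toy sketch of what approximate logic synthesis does (not the algorithms from the papers above): given a target function, search a space of cheaper circuits and keep the cheapest one whose error rate stays within a chosen bound. The target (3-input majority) and the error bound are illustrative choices.

```python
from itertools import product

def majority(a, b, c):
    """Target function: 1 iff at least two of the three inputs are 1."""
    return int(a + b + c >= 2)

INPUTS = list(product([0, 1], repeat=3))

def gate(idx, x, y):
    """One of the 16 two-input Boolean functions, indexed by truth table.

    Bit (2*x + y) of idx is the gate's output for inputs (x, y).
    """
    return (idx >> (2 * x + y)) & 1

def best_approximation(target, max_error):
    """Find the lowest-error single 2-input gate approximating `target`."""
    best = None
    for i, j in [(0, 1), (0, 2), (1, 2)]:   # which pair of inputs to wire up
        for idx in range(16):               # which 2-input gate to use
            errors = sum(
                gate(idx, bits[i], bits[j]) != target(*bits)
                for bits in INPUTS
            )
            rate = errors / len(INPUTS)
            if rate <= max_error and (best is None or rate < best[0]):
                best = (rate, idx, (i, j))
    return best

rate, idx, pair = best_approximation(majority, max_error=0.25)
print(f"gate {idx} on inputs {pair}, error rate {rate:.0%}")
```

Any single 2-input gate can only approximate 3-input majority, but several (e.g. AND or OR of two inputs) get within a 25% error rate; real approximate synthesis tools play this trade-off over much larger circuits and error metrics.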
You can synthesize a logic circuit of whatever complexity it takes to hit a target accuracy.
Deep differentiable logic networks, in my experience, do not scale well to larger (more-input) logic elements, and one still has to apply logic optimization and synthesis afterwards. So why not synthesize one's own approximate circuit to the accuracy one desires?
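For readers unfamiliar with the technique being compared against: a minimal sketch of the relaxation used in differentiable logic gate networks, assuming the standard formulation (each gate is a softmax mixture over the 16 two-input Boolean functions, each relaxed to its expected output under independent Bernoulli inputs; after training, the gate is hardened to its highest-weighted function). The logits below are made up for illustration.

```python
import math

def soft_gate(w, a, b):
    """Differentiable forward pass for real-valued inputs a, b in [0, 1]."""
    # Softmax over the 16 gate logits.
    exps = [math.exp(wi) for wi in w]
    z = sum(exps)
    probs = [e / z for e in exps]
    out = 0.0
    for idx in range(16):
        # Expected output of gate `idx` (bit (2x+y) of idx is its output
        # for inputs (x, y)) when inputs are Bernoulli(a), Bernoulli(b).
        exp_val = sum(
            ((idx >> (2 * x + y)) & 1)
            * (a if x else 1 - a) * (b if y else 1 - b)
            for x in (0, 1) for y in (0, 1)
        )
        out += probs[idx] * exp_val
    return out

def hard_gate(w, a, b):
    """After training, discretize: keep only the highest-weighted gate."""
    idx = max(range(16), key=lambda i: w[i])
    return (idx >> (2 * a + b)) & 1

# Logits strongly favoring gate 8 (AND: output 1 only for inputs 1, 1).
w = [0.0] * 16
w[8] = 10.0
print(soft_gate(w, 0.9, 0.8))  # close to 0.72 = 0.9 * 0.8
print(hard_gate(w, 1, 1))      # 1
```

The scaling complaint above follows directly from this construction: a k-input gate has 2^(2^k) Boolean functions to mix over, so the softmax grows doubly exponentially in fan-in.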
Is this a thing?
I gave a short talk about compiling PyTorch to Verilog at Latte '22. Back then we were just looking at a simple dot product operation, but the approach could theoretically scale up to whole models.
https://capra.cs.cornell.edu/latte22/paper/2.pdf
https://www.youtube.com/watch?v=QxwZpYfD60g
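For a flavor of what such a lowering produces (this is an illustrative sketch, not the compiler from the LATTE '22 talk): a fixed-size integer dot product can be emitted as a single combinational Verilog module, with one multiply per element pair and an adder tree left to the synthesizer.

```python
def dot_product_verilog(n, width=8):
    """Emit a Verilog module computing sum(a[i] * b[i]) combinationally.

    The output is widened by 2*W bits per product plus log2(n) bits of
    headroom for the accumulation.
    """
    lines = [f"module dot_product #(parameter W = {width}) ("]
    for i in range(n):
        lines.append(f"  input  [W-1:0] a{i}, b{i},")
    headroom = max(n - 1, 0).bit_length()
    lines.append(f"  output [2*W+{headroom}-1:0] out")
    lines.append(");")
    terms = " + ".join(f"a{i} * b{i}" for i in range(n))
    lines.append(f"  assign out = {terms};")
    lines.append("endmodule")
    return "\n".join(lines)

print(dot_product_verilog(4))
```

Scaling this to whole models is where it gets hard: weights must be baked in or streamed, and the fully combinational form above has to be pipelined and time-multiplexed to fit real FPGA resource budgets.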