The fact that they show a comparison with an FPGA is a red flag, because large-scale generative AI is exactly where FPGAs are weakest.
FPGAs are superior in every respect for models of up to a few megabytes in size, and they scale all the way down to zero. If they were going for generative AI, they wouldn't even have bothered with FPGAs, because only the highest-end FPGAs with HBM are even viable there, and even those come with dedicated AI accelerators.
One thing that seems pretty clear from the papers and technical information is that the product is not really aimed at the approach used by mainstream AI models in the first place. There, random numbers are far from the bottleneck: sampling from a distribution is generally done either by picking a random starting point and having a neural net move towards the 'closest' point, or by having a neural net spit out a simplified distribution for part of the result and picking randomly from that. In either case the neural net computation itself is completely deterministic and takes up the bulk of the compute time.
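To make that concrete, here's a generic toy sketch of the second approach (nothing from their paper; the sizes and names are made up): the per-token randomness is a single categorical draw, while the deterministic forward pass dominates the compute.

    import numpy as np

    # Toy illustration only; model, sizes and names are made up.
    rng = np.random.default_rng(0)

    # Stand-in for a large deterministic network: one big matmul per step.
    HIDDEN, VOCAB = 1024, 32_000
    W_out = rng.standard_normal((HIDDEN, VOCAB), dtype=np.float32)

    def forward(hidden_state):
        """Deterministic and compute-heavy: produces logits over the vocabulary."""
        return hidden_state @ W_out  # ~33M multiply-adds even in this toy case

    def sample_token(logits, temperature=1.0):
        """The only stochastic step: a single categorical draw from a softmax."""
        z = (logits.astype(np.float64) - logits.max()) / temperature
        probs = np.exp(z)
        probs /= probs.sum()
        return int(rng.choice(VOCAB, p=probs))  # one random number per token

    hidden = rng.standard_normal(HIDDEN, dtype=np.float32)
    print("sampled token id:", sample_token(forward(hidden)))

One random draw versus tens of millions of deterministic multiply-adds per token, so a faster source of randomness buys you essentially nothing in that regime.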
The stuff they talk about in the paper is mainly about things that were in vogue when AI was still called Machine Learning, where you're essentially trying to construct and sample from very complicated distributions in order to represent your problem in a Bayesian way, i.e. to set up a situation where you can calculate 'what's the most probable answer given this problem'. In that approach it's often useful to have a relatively small 'model' but to feed random numbers predicated on it back into itself, so that you can sample from a distribution which would otherwise be essentially intractable to sample from. This kind of thing was very successful for some tasks, but AFAIK those tasks would generally be considered quite small today (and I don't know how many of them have since been taken over by neural nets anyway).
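The kind of thing I mean is classic MCMC-style sampling. A textbook Metropolis-Hastings sketch (purely illustrative, nothing to do with their hardware specifically) looks like this:

    import numpy as np

    # Textbook Metropolis-Hastings; purely illustrative, not their method.
    rng = np.random.default_rng(1)

    def log_density(x):
        """Unnormalised log-density of a deliberately awkward 2-D target
        (a 'banana'-shaped distribution that is hard to sample directly)."""
        return -0.5 * (x[0] ** 2 / 10.0 + (x[1] - 0.1 * x[0] ** 2) ** 2)

    def metropolis_hastings(n_steps=50_000, step_size=1.0):
        samples = np.empty((n_steps, 2))
        x = np.zeros(2)
        logp = log_density(x)
        for i in range(n_steps):
            # Two sources of randomness per step: the proposal and the accept test.
            proposal = x + step_size * rng.standard_normal(2)
            logp_new = log_density(proposal)
            if np.log(rng.random()) < logp_new - logp:  # accept/reject
                x, logp = proposal, logp_new
            samples[i] = x
        return samples

    chain = metropolis_hastings()
    print("posterior mean estimate:", chain[len(chain) // 2:].mean(axis=0))

Here the 'model' (the log-density) is tiny, but every step consumes fresh random numbers and you typically need a very long chain, which is the regime where a fast source of randomness actually matters.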
This is why I say it looks very niche, and it feels like the website is just trying to ride the AI hype train by association with the term.