Have you talked to anyone about where this flat out will not work? Obviously it will work in simple cases but someone with good language understanding will probably be able to point out cases where it just won't. I didn't read your blog so apologies if this is covered. How does this compiler fit into your company business plan?
Our primary use case is cross-platform AI inference (unsurprising), and for that use case we're already in production at everyone from startups to larger co's.
It's kind of funny: our compiler currently doesn't support classes, but we support many kinds of AI models (vision, text generation, TTS). This is mainly because math, tensor, and AI libraries are almost always written with a functional paradigm.
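To make that concrete (not code from their compiler, just a hypothetical sketch of what "functional paradigm" means here): tensor code is usually a pipeline of pure functions over plain arrays, so a toy model needs no classes at all. All names below (`relu`, `dense`, `forward`) are made up for illustration.

```python
import numpy as np

def relu(x):
    # Elementwise max(0, x): stateless, no classes involved.
    return np.maximum(x, 0.0)

def dense(x, w, b):
    # A fully connected layer as a pure function of its inputs.
    return x @ w + b

def forward(x, params):
    # The whole "model" is just function composition over arrays.
    h = relu(dense(x, params["w1"], params["b1"]))
    return dense(h, params["w2"], params["b2"])

params = {
    "w1": np.ones((2, 3)), "b1": np.zeros(3),
    "w2": np.ones((3, 1)), "b2": np.zeros(1),
}
y = forward(np.array([[1.0, -1.0]]), params)  # shape (1, 1)
```

A compiler targeting code shaped like this only has to handle functions, arrays, and dicts of parameters, which is a much smaller surface than full class semantics.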
Business plan is simple: we charge per endpoint that downloads and executes the compiled binary. In the AI world, this removes a large multiplier in cost structure (paying per token). Beyond that, we help co's find, eval, deploy, and optimize models (more enterprise-y).
I understood some of it. Sounds reasonable if your market is already running a limited subset of the language, but I guess there is a lot of custom bullshit you actually wind up maintaining.
Yup that's true. We do benefit from massive efficiencies though, thanks to LLM codegen.