I'm founding a company that is building an AOT compiler for Python (Python -> C++ -> object code). It works by propagating type information through a Python function; that propagation is seeded by the type hints on the function being compiled:
https://blog.codingconfessions.com/i/174257095/lowering-to-c...
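To make that concrete, here's a minimal sketch of the kind of input such a compiler consumes (a hypothetical function, not one from the post): the hints on the signature seed the types, and the types of everything inside the body are inferred from them.

    def dot(a: list[float], b: list[float]) -> float:
        # The float hints on 'a', 'b', and the return value seed
        # propagation: 'total' and 'x * y' are inferred as float,
        # so the whole body can be lowered to typed C++.
        total = 0.0
        for x, y in zip(a, b):
            total += x * y
        return total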
This sounds even worse than Modular/Mojo. They made their language look terrible by trying to make it look like Python, only to effectively admit that source compatibility will not really work any time soon. Is there any reason to believe that a different take on the same problem with stricter source compatibility will work out better?
Have you talked to anyone about where this flat out will not work? Obviously it will work in simple cases, but someone with a good understanding of the language will probably be able to point out cases where it just won't. I didn't read your blog, so apologies if this is covered. How does this compiler fit into your company's business plan?
Our primary use case is cross-platform AI inference (unsurprising), and for that use case we're already in production at companies ranging from startups to larger co's.
It's kind of funny: our compiler currently doesn't support classes, yet we support many kinds of AI models (vision, text generation, TTS). This is mainly because math, tensor, and AI libraries are almost always written in a functional style.
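For illustration, a rough sketch of what that functional style looks like (hypothetical code, not our actual model definitions): a forward pass is just typed functions over arrays, no classes required.

    import numpy as np

    def relu(x: np.ndarray) -> np.ndarray:
        return np.maximum(x, 0.0)

    def dense(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
        return x @ w + b

    def mlp(x: np.ndarray, w1: np.ndarray, b1: np.ndarray,
            w2: np.ndarray, b2: np.ndarray) -> np.ndarray:
        # A two-layer MLP as plain functions: data in, data out.
        h = relu(dense(x, w1, b1))
        return dense(h, w2, b2)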
The business plan is simple: we charge per endpoint that downloads and executes the compiled binary. In the AI world, this removes a large multiplier from the cost structure (paying per token). Beyond that, we help co's find, eval, deploy, and optimize models (more enterprise-y).
I understood some of it. Sounds reasonable if your market is already running a limited subset of the language, but I guess there is a lot of custom bullshit you actually wind up maintaining.
Yup, that's true. We do benefit from massive efficiencies thanks to LLM codegen, though.