Hi there! Thank you for the glowing review! I'm the cofounder of Krea and I'm glad you liked Sangwu's blog post. The team is reading it.

You'll probably get a lot of replies saying this model is just a fine-tune, along with a potential disregard for LoRAs, as if we didn't know about them. In reality, we have thousands of them running on our platform. Sadly, there's only so much a LoRA or a fine-tune can do before you run into issues that can't be solved without more advanced techniques, such as curated post-training runs (including reinforcement learning-based methods like Diffusion-PPO[1]) or even large-scale pre-training.

-

[1]: https://diffusion-ppo.github.io