I'm a developer (not a 3D artist) who's been frustrated with current AI text-to-3D tools — most produce messy, monolithic meshes that are unusable without hours of cleanup.
So I built NativeBlend, a side project aimed at generating editable 3D assets that actually fit into a real workflow.
Key features:
- Semantic Part Segmentation: Outputs separate, meaningful components (e.g., wheels, doors), not just a single mesh blob.
- Native Blender Output: Generates clean, structured .blend files with proper hierarchies, editable PBR materials, and decent UVs — no FBX/GLB cleanup required (rough sketch of the target structure below).
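For anyone wondering what "clean, structured .blend" means in practice, here's a minimal sketch of the kind of hierarchy and material setup the output aims for, written against Blender's bpy API. The function and part names are just illustrative placeholders, not the actual pipeline code:

```python
import bpy

def build_asset(part_meshes, asset_name="Car"):
    """Assemble named semantic parts into a parented hierarchy with PBR materials.

    part_meshes: dict mapping a part name (e.g. "wheel_FL", "door_L")
    to a bpy mesh datablock. Names here are placeholders.
    """
    # One collection per asset keeps the outliner tidy
    coll = bpy.data.collections.new(asset_name)
    bpy.context.scene.collection.children.link(coll)

    # Empty object as the asset root so the whole thing moves as one unit
    root = bpy.data.objects.new(asset_name, None)
    coll.objects.link(root)

    for part_name, mesh in part_meshes.items():
        obj = bpy.data.objects.new(part_name, mesh)
        obj.parent = root
        coll.objects.link(obj)

        # Editable Principled BSDF material per part, not baked-in shading
        mat = bpy.data.materials.new(f"{asset_name}_{part_name}")
        mat.use_nodes = True
        bsdf = mat.node_tree.nodes["Principled BSDF"]
        bsdf.inputs["Roughness"].default_value = 0.5
        obj.data.materials.append(mat)

    return root

# Save as a native .blend instead of exporting FBX/GLB:
# bpy.ops.wm.save_as_mainfile(filepath="/tmp/asset.blend")
```

The point is that each semantic part arrives as its own named, parented object with its own editable material, so you can swap a wheel or retexture a door without touching the rest of the mesh.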
The goal is to give devs a usable starting point for game assets without the usual AI slop. I have a working demo and would love feedback: Does this solve a real need, or am I just scratching my own itch?
Thanks for taking a look!
This is interesting. How does the semantic segmentation work? Do you generate a 3D model and then separate it, or are the parts separated from the initial generation?
I think since your target is Blender, this would work better as a Blender add-on, so I can generate directly into my scene. Then you could publish it on the Blender marketplaces.
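Even a thin wrapper would go a long way; roughly something like this skeleton (operator and property names are just placeholders, not anything from your project):

```python
import bpy

bl_info = {
    "name": "NativeBlend Generator",
    "blender": (3, 0, 0),
    "category": "Object",
}

class NATIVEBLEND_OT_generate(bpy.types.Operator):
    """Generate an asset from a text prompt directly into the current scene"""
    bl_idname = "nativeblend.generate"
    bl_label = "Generate Asset from Prompt"

    prompt: bpy.props.StringProperty(name="Prompt", default="")

    def execute(self, context):
        # Placeholder: this is where the add-on would call your backend
        # and link the returned objects into context.scene.collection.
        self.report({'INFO'}, f"Would generate: {self.prompt}")
        return {'FINISHED'}

    def invoke(self, context, event):
        # Pop up a small dialog asking for the prompt
        return context.window_manager.invoke_props_dialog(self)

def register():
    bpy.utils.register_class(NATIVEBLEND_OT_generate)

def unregister():
    bpy.utils.unregister_class(NATIVEBLEND_OT_generate)
```

That way the generation lands straight in my open scene instead of me importing a separate .blend file each time.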