I've been iterating on some 3D models for various wacky garage projects I have. It's fun. I've often wished I could click on an arbitrary spot and say "add an eye bolt here" or some such.

Of course learning proper CAD software is probably the right thing here, but having Claude write Python scripts that generate HTML files referencing three.js for a 3D view has gotten me surprisingly far. If something could take my pointer click and reverse whatever coordinate transforms sit between the source code and my screen, so the model sees my click in the same coordinate system it's writing Python in, that would be pretty slick.
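Something like three.js's Raycaster might be exactly that reverse transform. A rough sketch, assuming the generated page has the usual scene, camera, and renderer objects in scope (those names are mine, not from my actual script) and that the Python side places geometry directly in world coordinates:

    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();

    renderer.domElement.addEventListener('click', (event) => {
      const rect = renderer.domElement.getBoundingClientRect();
      // Convert the click to normalized device coordinates (-1..+1).
      pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
      pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

      // Undo the camera/projection transforms and intersect the model.
      raycaster.setFromCamera(pointer, camera);
      const hits = raycaster.intersectObjects(scene.children, true);
      if (hits.length > 0) {
        // hits[0].point comes back in world coordinates, i.e. the same numbers
        // the Python script wrote, so it can be pasted into the next prompt.
        console.log('clicked', hits[0].point, 'on', hits[0].object.name);
      }
    });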

Hah, I've been thinking the same thing. I recently prototyped a 2D paint app to validate the idea using Chrome's Prompt API: https://arjo129.github.io/apps/voice_paint.html Honestly, what'd be epic is if this could be made to work with an XR headset. Imagine using the headset to capture the piece you're working on and just saying, "hey, can you drill some holes over here?"

That would be pretty excellent. We need a layer-agnostic notion of "here".

Until then I've just had it list every surface in a legend, each colored differently, so I can say "three inches down from the top of pole six, and rotate it so the hoop part of the bolt faces northwest."
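The colored legend is just a loop in the generated page. Roughly like this sketch, where the parts array and the legend element are placeholder names I'm making up here, not anything from the real script:

    const palette = [0xe41a1c, 0x377eb8, 0x4daf4a, 0x984ea3, 0xff7f00, 0xffff33];
    const legend = document.getElementById('legend'); // assumed <ul> in the generated page

    parts.forEach((mesh, i) => {
      // Give each surface its own color so it can be named unambiguously in a prompt.
      const color = palette[i % palette.length];
      mesh.material = new THREE.MeshStandardMaterial({ color });

      const item = document.createElement('li');
      item.textContent = `${i + 1}: ${mesh.name}`;
      item.style.color = '#' + color.toString(16).padStart(6, '0');
      legend.appendChild(item);
    });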