Not that I'm scared of this update, but I'd probably be alright with pausing LLM development today, at least in regard to producing code.

I don't want an LLM to write all my code, regardless of whether it works; I like writing code. What these models are capable of at the moment is perfect for my needs, and I'd be 100% okay if they didn't improve at all going forward.

Edit: also, I don't see how an LLM-controlled system can ever replace a deterministic system for critical applications.

I have trouble with this too. I'm working on a small side project, and while I love ironing out implementation details myself, it's tough to ignore the fact that Claude/GPT-4o can create entire working files for me on demand.

It's still enjoyable working at a higher architectural level and discussing the implementation before actually generating any code, though.

I don't mind using it to make inline edits, or more global edits across files at my discretion and according to my instructions. It definitely saves tons of time and allows me to be more creative, but I don't want it to make decisions on its own any more than it already does.

I tried using the Composer feature on Cursor.sh; that's exactly the type of LLM tool I do not want.

In a conventional critical system you use 3 CPUs and vote on the result. With an LLM you can do 1000-shot majority voting. It also seems like approaches such as entropix might reduce hallucinations.
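The voting idea above can be sketched simply: sample the model many times and take the most common answer. A minimal sketch, assuming answers have already been collected from repeated calls at nonzero temperature (the sample values here are made up for illustration):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer among independent samples,
    along with the fraction of samples that agreed with it."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(answers)

# Hypothetical samples from repeated LLM calls to the same prompt.
samples = ["42", "42", "41", "42", "43"]
answer, agreement = majority_vote(samples)
print(answer, agreement)  # → 42 0.6
```

The agreement fraction is useful in a safety context: a low value signals that the model's outputs are inconsistent and the answer should not be trusted.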

I don't think this snapshot image/vision model approach is going to be the best solution. I think the CLI is a much better interface for LLMs.