Things like these (Google also banned me from Antigravity for briefly using an agent) and the massive quality swings made me cancel all 3 subs last week and resort to my local Qwen 3.6 only. Open models are already great and only getting better, and I really enjoy the privacy and consistency of a model I run myself.

I don't think anyone is questioning all the benefits of using local LLMs. Those are readily apparent.

I just don't believe for an instant that they're anywhere near the same ballpark, capability-wise, as running Opus or similar. My time is my most valuable resource. Opus would need to be SIGNIFICANTLY more costly and unstable before I'd start entertaining local models for day-to-day development.

Perhaps whatever work you're doing makes this trade-off more sensible, but I struggle to see how that could be true. I'm averse to running Sonnet on a large share of software engineering problems - let alone Qwen.

What kind of work are you applying Opus and other LLMs to? I'm quite curious to understand how other people are using these tools.

At the moment neither Opus nor any open weights models seem to be capable of doing complex work, and for less complex work the additional cost of Opus hasn't been worthwhile. This is for reasonably math-heavy computer vision applications.

What LLMs have been useful for is identifying forgotten code that will be affected when planning a change, reviewing changes, and looking up docs/recipes for simple tasks. But Opus doesn't seem necessary for a lot of that.

Not the one you were asking, but…

I have been using Opus (in Zed) to find the “in between” bugs. Bugs that kinda live in the space between microservices, or between backend and frontend.

It takes a bit of preparation to get good results, but it can usually find the source of bugs in 1-2 hours (200k-300k context) that would take me a week to track down.

I create a folder, then open up git worktrees in subfolders for every repo I think might be involved. I also create an empty report.md file. Then I give it a prompt that starts with “I need you to debug an issue”, followed by instructions for how to run tests in each repo, followed by @-mentions of any specific files or folders I think are relevant (with a quick description of what they are), then the bug description. After that I tell it to debug the issue, make no code changes, and write its findings to the report.md file.
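A minimal sketch of assembling that prompt. The repo names, file mentions, and the `make test` command are placeholders — swap in whatever each repo actually uses:

```python
def debug_prompt(repos, files, bug_description):
    """Build the debugging prompt described above.
    `repos` is a list of worktree subfolder names (hypothetical here),
    `files` is a list of (path, short description) pairs to @-mention."""
    lines = ["I need you to debug an issue."]
    for repo in repos:
        # Assumes each repo runs its tests via `make test`; adjust per repo.
        lines.append(f"To run tests in {repo}: cd {repo} && make test")
    for path, description in files:
        lines.append(f"@{path} - {description}")
    lines.append(bug_description)
    lines.append("Debug the issue, make no code changes, and write your "
                 "findings to report.md.")
    return "\n".join(lines)
```

The "make no code changes" instruction at the end is doing real work: it keeps the agent in read-only investigation mode so the report is the only output you have to review.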

This works incredibly well.

My current job has me overseeing a few teams of engineers working on ~10+ y/o legacy software systems that have not been especially well maintained. As an example, one team had a completely broken CI pipeline due to numerous flaky tests. They had configured the CI pipeline to rerun tests multiple times, and still the master branch had something like a 40% pass rate. Super ugly, but the suite took ~40 minutes to run and they were demoralized enough to not want to investigate it anymore.

I came in, set Claude up, gave it read access to CI artifacts, had it build out some tooling to monitor the rolling pass/fail rate over the last 30 days, and let it loose. It identifies the worst offending flaky tests, forms hypotheses on whether it's a testing issue or a production issue, then tries to divide-and-conquer until it gets minimal reproduction steps. If it's not able to create deterministic reproduction then it'll make a best guess at fixing the issue and grind away at test re-runs all night until it can try to figure out if it fixed the issue with statistical confidence instead.
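The "statistical confidence" step can be sketched with a simple exact binomial test. This is an illustrative guess at the approach, not the actual tooling:

```python
from math import comb

def pass_rate(results):
    """Fraction of passing runs; `results` is a list of booleans."""
    return sum(results) / len(results)

def p_value_improved(before, after):
    """One-sided exact binomial test: the probability of seeing at least
    this many passes in the post-fix runs if the true pass rate were
    still the pre-fix rate. A small value suggests the fix helped."""
    p0 = pass_rate(before)
    n, k = len(after), sum(after)
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i)
               for i in range(k, n + 1))

# Example: ~40% pass rate before, 90/100 passes after the candidate fix.
before = [True] * 40 + [False] * 60
after = [True] * 90 + [False] * 10
print(f"p = {p_value_improved(before, after):.2e}")
```

Grinding out overnight re-runs maps naturally onto this: each rerun is another sample in `after`, and the p-value tells you when you've accumulated enough evidence that the fix actually moved the rate.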

It's not perfect. I have to throw away some of the bad solutions, but it shaved 20 minutes off their pipeline and improved the pass rate by 35% in a handful of weeks. Very minimal oversight on my part - I just let it run while I'm asleep and review PR proposals during the day between meetings.

We have an initiative to make an entire web application significantly more accessible in response to some government mandates. Tight deadline, tons of grunt work, repetitive patterns, some small nuances on edge cases. The team was able to create a set of skills for doing the conversion logic, slowly build up and address all the edge cases, and are now able to work several orders of magnitude more quickly in modernizing the app.

A team had punted repeatedly on updating Jest to the latest version because it inherently came with a breaking change to JSDOM which made some properties unable to be spied upon. Took like 20 minutes to have Claude one-shot the entire conversion when they'd ignored it for months because it just felt too finicky prior to agents. In general, everything to do with testing infrastructure is easy to push forward with confidence.

Uhm, we have an active interview pipeline where we give a take-home technical assessment. After we got a few submissions and manually evaluated them, I fed in our analyses and grading rubric and had it generate assessments for incoming candidates following the rubric. After checking a few pretty carefully, it became clear that it was good enough to trust - the take-home wasn't groundbreaking, and the problem space was well enough understood to identify obvious issues if there were any.

I was given a small team of semi-technical people who were being used to fetch numbers from DBs for product/marketing/sales and perform light data analysis on them. A lot of their day-to-day was just paper-pushing SQL queries into Excel spreadsheets and then transforming them into PowerPoints with key takeaways. They didn't have any experience writing code.

I had Claude build a gamified playground for them: a VSCode dev container, a SQLite DB full of synthetic data emulating what they'd encounter IRL, and a Jupyter notebook filled with questions they'd need to answer by writing code to interrogate the database and form insights. In a couple of weeks they were comfortable writing basic Python scripts with the help of Claude, and they're now off automating all their paper-pushing workflows with deterministic scripts. When they're done, we're going to move them to higher-value work: sleuthing through our data and proactively surfacing insights to propose to Product, rather than just reactively fetching data and building reports.
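A tiny example of the kind of notebook exercise this describes — the table and numbers are made up, but the shape (synthetic SQLite data, answer a question with a query) is the point:

```python
import sqlite3

# In-memory stand-in for the synthetic training database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EU", 120.0), ("EU", 80.0), ("US", 200.0)],
)

# Exercise: total order value per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 200.0), ('US', 200.0)]
```

The same query-then-script pattern is what replaces the manual SQL-to-Excel-to-PowerPoint loop once people are comfortable with it.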

I was asked to quickly build a prototype for some basic AI functionality we thought we might want to add to one of the products. I was able to go from "I have no idea what I should build" to "here's a prototype we can put in front of clients to see if the idea has any merit" in about 14 hours. Riffing with Claude from product idea to functional/technical specs and an implementation plan, the full working prototype was essentially a one-shot, followed by a tight iteration loop for a couple of hours with me guiding it on personal aesthetic choices to give it enough final polish. Obviously I wouldn't ship this code to production, but it's really nice not having any sunk-cost bias when demoing a prototype. If customers don't like it? Great - I lost one day, and half that time I was multitasking while Claude implemented specs. Even better: I had Claude write a script to extract all the conversations I had with it and include those in the prototype repo. Then I filmed a quick demo video of my process, shared it with the engineers, and they're able to review my Claude conversations to get inspiration for how to modify their own agentic coding strategies.

Those were super fascinating and inspiring to read! Thank you

DeepSeek is close to SOTA today, as are Kimi and GLM. Yes, they'll be slow and high-latency on ordinary hardware, but let's be real: no one reasonable is running Opus or GPT on a 24/7 basis either. Local AI rewards slow, around-the-clock inference over fast responses.

I think you'd be surprised; I find that the harness is what makes the real difference. I also prefer to be in the loop, actively guiding and reviewing. Local models are definitely much less autonomous as of today, so if you need to churn out code at speed they're probably not for you.

I've played with them plenty and they're not even close as far as speed or intelligence. It's like comparing a bike to an MRAP.

What harness would you recommend for the open-weight models?

Opencode has been the best one for me so far.

Having tried local agents just two weeks ago, the parent poster is correct: they don't come anywhere near frontier models, despite what the benchmarks state. I haven't tried Qwen 3.6 yet, but the version before it frequently got stuck even on moderately complex problems.

Same experience for me. I think people need to start providing context for the type of work they're doing when repeating the local model hype. Maybe they're working on a cookie-cutter React app and it does the job fine.

I work mostly in Go and Rust. The experience can be wildly different based on model, quantization, and harness. The fact that it didn't work for you doesn't mean everyone is hyping local models; people are getting real work done.

This feels like a symptom of the definition of "real work" changing right in front of us. Some people still use AI like a copilot, cleaning up code here and there, maybe writing functions. And at the right scale, this is genuinely still real work.

Others, especially startups or indie hackers, use AI like it were their end-all be-all assistant. "Hey Jeeves, go add Apple Sign In, Google Sign In to our signup pages. Also, investigate why we're not utilizing cached inputs on our AI APIs correctly. And add Maestro flows for every screen in our app. Btw check out posthog, supabase, and Stripe - is our new agent changing engagement or trial->paid conversion rates?"

And 3 hours later, you have all these done. But only if you use the right multi trillion param models.

Go and Rust aren't really a type of work, though. What tasks are you throwing at them?

If you know what you're doing and prompt it correctly, local models are great. If you're just vibe coding and relying on the LLM to fill in all the gaps for you and basically build the software for you, yeah you need SOTA to deal with that.

But, you know,

Yet.

For now we infer through few weights, lossily; but then in full precision. Now I represent in part; but then shall I represent as fully as the data was sampled.

1 CorinthAIns 13:12

How much VRAM do you need to achieve decent performance?

I have a 64GB M1 Ultra dedicated to llama.cpp. I get 40 tok/s on a fresh session, decreasing slowly to about 25 tok/s at around 50% of the 256K context, then down to 20 tok/s or less beyond that, but I rarely let it go much higher and hand off instead. This is with Qwen 36B A3B at Q8 without KV quantization. It's not super fast but perfectly usable for me.
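Rough arithmetic for why that fits in 64GB — a sketch assuming ~1 byte per weight for 8-bit quantization, and ignoring the KV cache, which at an unquantized 256K context adds a substantial amount on top:

```python
# Back-of-envelope memory estimate for an 8-bit-quantized model.
params = 36e9            # ~36B parameters (per the model name above)
bytes_per_weight = 1.0   # Q8 is roughly 8 bits/weight, ignoring small overheads
weights_gb = params * bytes_per_weight / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # ~36 GB for weights alone
```

That leaves the remaining unified memory for the KV cache and the OS, which is why skipping KV quantization is viable here but would not be on a 32GB machine.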

This is the future.

Spent the better part of a week trying to integrate local models into my LazyVim workflow. I've tried both Avante and CodeCompanion and have yet to find any configuration that remotely works. Either it goes into an endless loop, the project directory gets filled with garbage, or it can't find the file to apply changes to despite having just read it. Not sure if it's a Qwen problem, a plugin problem, or Ollama.

I suggest having opencode drive the model. I also use Neovim, and these days I mostly just keep a tmux pane side by side. But opencode does support ACP mode, which you can use with CodeCompanion and the like.