I see this happening, too.
We know that a lack of control over their environment makes animals, including humans, depressed.
So much of the software we use takes that control away. It's their way, their branding, their ads, their app. You're a guest on your own device.
It's no wonder everyone hates technology.
A world with software that is malleable, personal, and cheap - this could do a lot of good. Real ownership.
The nerds could always make a home with their Linux desktop. Now everyone can. It'll change the equation.
I'm quite optimistic for this future.
I'm presently in the process of building (read: directing claude/codex to build) my own AI agent from the ground up, and it's been an absolute blast.
Building it exactly to my design specs, giving it only the tool calls I need, owning all the data it stores about me for RAG, integrating it to the exact services/pipelines I care about... It's nothing short of invigorating to have this degree of control over something so powerful.
In a couple of days work, I have a discord bot that's about as useful as chatgpt, using open models, running on a VPS I manage, for less than $20/mo (including inference). And I have full control over what capabilities I add to it in the future. Truly wild.
> It's nothing short of invigorating to have this degree of control over something so powerful
Is this really that different to programming? (Maybe you haven't programmed before?)
Fair point.
> It's nothing short of invigorating to have this degree of control over something so powerful
I'm a SWE w/ >10 years, and you're right, this part has always been invigorating.
I suppose what's "new" here is the drastically reduced amount of cognitive energy I need to build complex projects in my spare time. As someone who was originally drawn to software because of how much it lowered the barrier to entry of birthing an idea into existence (compared to hardware), I am genuinely thrilled to see said barrier lowered so much further.
Sharing my own anecdotal experience:
My current day job is leading development of a React Native mobile app in TypeScript with a backend PaaS, and the bulk of my working memory is filled up by information in that domain. Given this is currently what pays the bills, it's hard to justify devoting all that much of my brain to deep-diving into other technologies or stacks merely for fun or to satisfy my curiosity.
But today, despite those limitations, I find myself having built a bespoke AI agent written from scratch in Go, using a janky beta AI Inference API with weird bugs and sub-par documentation, on a VPS sandbox with a custom Tmux & Neovim config I can "mosh" into from anywhere using finely-tuned Tailscale access rules.
I have enough experience and high-level knowledge that it's pretty easy for me to develop a clear idea of what exactly I want to build from a tooling/architecture standpoint, but prior to Claude, Codex, etc., the "how" of building it tended to be a big stumbling block. I'd excitedly start building, only to run into the random barriers of "my laptop has an ancient version of Go from the last project I abandoned" or "neovim is having trouble starting the lsp/linter/formatter" and eventually go "ugh, not worth it" and give up.
Frankly, as my career progressed and the increasingly complex problems at work left me with vanishingly little brain-space for passion projects, I was beginning to feel this crushing sense of apathy & borderline despair. I felt I'd never be able to make good on my younger self's desire to bring these exciting ideas of mine into existence. I even got to the point where I convinced myself it was "my fault" because I lacked the mettle to stomach the challenges of day-to-day software development.
Now I can just decide "Hmm.. I want a lightweight agent in a portable binary. Makes sense to use Go." or "this beta API offers super cheap inference, so it's worth dealing with some jank" and then let an LLM work out all the details and do all the troubleshooting for me. Feels like a complete 180 from where I was even just a year or two ago.
At the risk of sounding hyperbolic, I don't think it's overstating things to say that the advent of "agentic engineering" has saved my career.
What models and inference provider?
I'm using kimi-k2-instruct as the primary model and building out tool calls that use gpt-oss-120b to allow it to opt-in to reasoning capabilities.
Using Vultr for the VPS hosting, as well as their inference product which AFAIK is by far the cheapest option for hosting models of this class ($10/mo for 50M tokens, and $0.20/M tokens after that). They also offer Vector Storage as part of their inference subscription, which makes it very convenient to get inference + durable memory & RAG with a single API key.
Their inference product is currently in beta, so not sure whether the price will stay this low for the long haul.
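For anyone budgeting, the quoted rates pencil out like this. `monthlyCost` is just my own helper, and the numbers are the beta prices mentioned above, so treat them as likely to change:

```go
package main

import "fmt"

// Back-of-the-envelope cost model for the beta pricing quoted above:
// $10/mo base including 50M tokens, then $0.20 per extra million.
func monthlyCost(tokensMillions float64) float64 {
	const (
		base        = 10.0 // USD per month
		includedM   = 50.0 // millions of tokens included
		overageRate = 0.20 // USD per extra million tokens
	)
	if tokensMillions <= includedM {
		return base
	}
	return base + (tokensMillions-includedM)*overageRate
}

func main() {
	fmt.Println(monthlyCost(30))  // inside the included 50M: flat $10
	fmt.Println(monthlyCost(100)) // 50M of overage: $10 + 50*$0.20 = $20
}
```

So even a fairly chatty personal agent fits comfortably under $20/mo, inference included.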
You can definitely get gpt-oss-120b for much less than $0.20/M on OpenRouter (cheapest is currently 3.9c/M in, 14c/M out). Kimi K2 is an order of magnitude larger and more expensive, though.
What other models do they offer? The web page is very light on details
Oh dang I had no idea that gpt-oss-120b was that cheap these days.
And yeah, given Vultr inference is in beta, their docs ain't great. In addition to kimi-k2-instruct and gpt-oss-120b, they currently offer:
deepseek-r1-distill-llama-70b
deepseek-r1-distill-qwen-32b
qwen2.5-coder-32b-instruct
Best way to get accurate up-to-date info on supported models is via their api: https://api.vultrinference.com/#tag/Models/operation/list-mo...
K2 is the only one of the five that supports tool calling. In my testing, it seems like all five support RAG, but K2 loses knowledge of its registered tools when you access it through the RAG endpoint, forcing you to pick one capability or the other (I have a ticket open for this).
Also, the R1-distill models are annoying to use because reasoning tokens are included in the output wrapped in <think> tags instead of being parsed into the "reasoning_content" field on responses. Also also, gpt-oss-120b has a "reasoning" field instead of "reasoning_content" like the R1 models.
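If it helps anyone, here's roughly how I paper over those differences in Go. A sketch only: the `message` struct fields are based on the response shapes described above, and `splitReasoning` is my own helper:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// thinkRe matches the inline <think>...</think> block that the R1-distill
// models emit at the start of their output.
var thinkRe = regexp.MustCompile(`(?s)<think>(.*?)</think>`)

// message holds the relevant response fields; each provider/model populates
// a different one, as described above.
type message struct {
	Content          string `json:"content"`
	Reasoning        string `json:"reasoning"`         // gpt-oss-120b
	ReasoningContent string `json:"reasoning_content"` // R1-style
}

// splitReasoning normalizes the three shapes into (reasoning, answer).
func splitReasoning(m message) (reasoning, answer string) {
	if m.ReasoningContent != "" {
		return m.ReasoningContent, m.Content
	}
	if m.Reasoning != "" {
		return m.Reasoning, m.Content
	}
	// R1-distill case: reasoning arrives inline, wrapped in <think> tags.
	if match := thinkRe.FindStringSubmatch(m.Content); match != nil {
		answer = strings.TrimSpace(thinkRe.ReplaceAllString(m.Content, ""))
		return strings.TrimSpace(match[1]), answer
	}
	return "", m.Content
}

func main() {
	r, a := splitReasoning(message{Content: "<think>2+2 is 4</think>The answer is 4."})
	fmt.Printf("reasoning=%q answer=%q\n", r, a)
}
```

Once everything funnels through one helper like this, the rest of the agent doesn't have to care which model produced the response.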
in PI?
Whatever you want.
> The nerds could always make a home with their linux desktop. Now everyone can. It'll change the equation.
Problem is, to be able to do what you're describing, you still need the source code and permission to modify it. So you'll need to switch to the FOSS tools the nerds are using.
That's a feature, not a bug.
It means normies will finally see value in open source beyond just being free. They'll choose it over closed source alternatives.
This, too, makes a brighter future.
Obligatory post: open source != free software.
There is OSS you are not allowed to modify etc.
There is source-available software one is not permitted to distribute after modification. But what source-available software prevents the user from modifying the source for their own use?
We’re off to a great start then with Anthropic banning users who use alternative clients with their Claude subscription.
I'm actually relieved they're doing it now because it's going to be a forcing function for the local LLM ecosystem. Same thing with their "distillation attack" smear piece -- the more of a spotlight gets put on true alternatives + competition to the 900 lb gorillas, the better for all users of LLMs.
I really hope so. I moved to Codex, only to get my account flagged and my requests downgraded to 5.2 because of some "safety" thing. Now OpenAI demands I hand my ID over to Persona, the incredibly dodgy US surveillance company Discord just parted ways with, to get back what I paid for.
This timeline sucks, I don't want to live in a future where Anthropic and OpenAI are the arbiters of what we can and cannot do.
It definitely does suck. I had the same feelings about a year ago, and the unpleasantness has definitely increased. But glass half full: back then we didn't have Kimi K2.5, GLM5, Qwen3.5, MiniMax 2.5, Step Flash 3.5, etc available, and the Cambrian explosion is only continuing (DeepSeek V4 should be out pretty soon too).
The real moment of relief for me was the first time, about 12 months ago, that I used DeepSeek R1 to do a large task I would've otherwise needed Claude/OpenAI for -- and it just did it, not just decently, but with less slop than Claude/OpenAI. Ever since, I've kept eyeing local models and parallel-testing them for workloads I'd otherwise use commercial frontier models for. It's never a perfect 1:1 replacement, but I've gotten close enough that I no longer feel that paranoia about my AI workloads not being something I can own and control. True, I do have to sacrifice some capability, but the tradeoff is I get something that lives on my metal, never leaks data or IP, doesn't change behavior or get worse under my feet, doesn't rate-limit me, and can be fine-tuned and customized. It's all led me to believe that the market competition is very much functioning and the cat is out of the bag, for the benefit of all of us as users.
100%
That's just because corporations got greedy and made their apps suck.
Strip away the ads and the data harvesting, add back the power features, and we'll be happy again. I'm more willing than ever to pay a one-time fee for good software. I've started donating to all the free apps I use on a regular basis.
I don't want to own my own slop. That doesn't help me. Use your AI tools to build out the software if you want, but make sure it does a good job. Don't make me fiddle with nondeterministic flavor-of-the-month AI agents.
> That's just because corporations got greedy and made their apps suck.
It is true for me with Linux. I code for a living and I can't change anything because I can't even build most software -- the usual configure/make/make install runs into tons of compiler errors most of the time.
Loss of control is an issue. I'm curious if AI tools will change that though.
I think there's room for both visions. Big Tech is generating more toxic sludge than ever, and yeah sure this is because they're greedy, but more precisely the root cause is how they lobbied Washington and our elected officials agreed to all kinds of pro-corporate, anti-human legislation. Like destroying our right to repair, like criminalizing "circumvention" measures in devices we own, like insane life-destroying penalties for copyright infringement, like looking the other way when Big Tech broke anti-trust laws, etc.
The Big Tech slop can only be fixed in one way, and actually it's really predictable and will work - we need to fix the laws so that they put the rights and flourishing of human beings first, not the rights and flourishing of Big Tech. We need to fix enforcement because there are so many times that these companies just break the law and they get convicted but they get off with a slap on the wrist. We need to legislate a dismantling of barriers to new entrants in the sectors they dominate. Competition for the consumer dollar is the only thing that can force them to be more honest. They need to see that their customers are leaving for something better, otherwise they'll never improve.
But our elected officials have crafted laws and an enforcement system which make 'something better' impossible (or at least highly uneconomical).
Parallel to this if open source projects can develop software which is easier for the user to change via a PR, they totally should. We can and should have the best of both worlds. We should have the big companies producing better "boxed" software. Plus we should have more flexibility to build, tweak and run whatever we want.
And then they will take away your right to boot whatever you want. For national security reasons and the children, of course.
Very good points; I agree, and would add: interoperability is the key to bringing back competition and opening the ecosystem again.
And being able to fire employees for profit gain when they already make a profit; that's illegal in other countries.
What you're describing is the expected and correct outcome inside a profit-oriented, capitalist system. So the only way I see out of this situation would be changing policy to a more socialist one, which doesn't seem to be so popular among the tech elite, who often think they deserve their financial status because of the 'value' they provide, without specifying what that value is (or its second-order consequences). Whether that's abusing a monopolistic market position they lucked into, making apps as addictive as possible, or building drones that drop bombs on newborns in hospitals.
I think we're after the same goal but have a different view of mechanism.
Regulation enforcement against the anti-market behaviors would bring a lot of good.
Putting too much power in any centralized authority - company or government - seems to lead to oppression and unhealthy culture.
Fair markets are the neatest trick we have. They put the freedom of choice in the hands of the individual and allow organic collaboration.
The framing should not be government vs company. But distributed vs centralized power. For both governance and commerce.
The entire world right now suffers from too much centralized power. That comes in the form of both corporate and government. Power tends to consolidate until the bureaucracy of the approach becomes too inefficient and collapses under its own weight. That process is painful, and it's not something I enjoy living through.
If you look through that lens, it has explanatory power for the problems of both the EU countries and the US.
I'm not arguing for state capitalism. I consider the "company vs. government" framing fundamentally flawed. I see it as "a few in power vs. everyone gets exactly one vote".
I want things in society organized in a way that gives everyone agency, not just those adjacent to capital.
If a company employs me to extract value from my work, I want a vote in how that company operates. Not just one vote every four years in the hopes that policy will shift to benefit workers more over a few decades.
I want to be able to say no to doing a job without the existential threat of never getting another job offer, so I can base my decisions on my values, not my fear of not being able to pay next month's rent.
Capitalism goes against that, because it puts profit hoarding and parasitic value extraction from the working class at the center of attention. It's an inhumane ideology at its core, and only ever even slightly successful in creating wealth because of all the socialist mechanisms wrapped around it to hold it together.
In essence: I want to abolish centralized power and class hierarchies.