Either the models are good and this sort of platform gets swept away, or they aren’t, and this sort of platform gets swept away.

Either the business makes a profit before it gets swept away, or it doesn't. This should be your goal: make money before your business dies. If you do that, you succeeded. Businesses are always ephemeral.

The most interesting thing about everyone trying to position themselves as AI experts is the futility of it: the technology explicitly promises that tomorrow's models will be better than today's, which means the skill investment is deflationary. The best time to learn anything is tomorrow, when a better model will be better at doing the same work, because you won't need to be (conversely, if you're not good at debugging and reverse engineering now...)

> the best time to learn anything is tomorrow, when a better model will be better at doing the same work

Doesn’t that presume no value is being delivered by current models?

I can understand applying this logic to building a startup that solves today’s AI shortcomings… but value delivered today is still valuable even if it becomes more effective tomorrow.

I think it also presumes that the skills of today won't be helpful in making you better, faster, stronger at knowing what to learn tomorrow. Skateboarding ain't snowboarding but I guarantee the experience helps.


That’s true for “tips and tricks” knowledge like “which model is best today” or “tell the model you’ll get fired if the answer is wrong to increase accuracy” that pops up on Twitter/X. It’s fleeting, makes people feel like “experts”, and doesn’t age well.

On the other hand, deeply understanding how models work and where they fall short, how to set up, organize, and maintain context, and which tools and workflows support that tends to last much longer. When something like the “Ralph loop” blows up on social media (and dies just as fast), the interesting question is: what problem was it trying to solve, and how did it do it differently from alternatives? Thinking through those problems is like training a muscle, and that muscle stays useful even as the underlying technology evolves.

It does seem like things are moving very quickly, even at a deeper level than what you're describing. Less than a year ago LangChain, model fine-tuning, and RAG were the cutting edge and the “thing to do”.

Now, with models improving, context sizes getting bigger, and commercial offerings maturing, I hardly hear about them.

> what problem was it trying to solve, and how did it do it differently from alternatives?

Sounds to me like accidental complexity. Isn't the essential problem just to write good code for the computer to do its task?

There's an issue if you're (general you) more focused on fixing the tool than on the primary problem, especially when you don't know if the tool is even suitable.

I'm pretty much just rawdogging Claude Code and opencode and I haven't bothered setting up skills or MCPs except for one that talks to Jira and Confluence. I just don't see the point when I'm perfectly capable of writing a detailed prompt with all my expectations.

The problem is that so many of these things are AI instructing AI, and my trust rating for vibe-coded tools is zero. It's become a point of pride for the human to be taken out of the loop, and the one thing that isn't recorded is the transcript that produced the slop.

I mean, you have the creator of openclaw saying he doesn't read code at all, he just generates it. That is not software engineering or development, it's brogrammer trash.

I think the rationale is that with the right tools you can move much faster than just rawdogging Claude, without burning everything to the ground. If you haven't bothered setting up extra tools you may still be faster/better than the old you, but not better than the you that could be. I'm not preaching, that's just the idea.

> That is not software engineering or development, it's brogrammer trash.

Yes, but it's working. I'm still reading the code and calling out specific issues to Claude, but it's less and less.

You nailed it. That's exactly how I feel. Wake me up when the dust settles, and I'll do a deep dive and learn all the ins and outs. The churn is just too exhausting.

You might wake up in a whole different biome, Rip Van Winkle.

I don't get the pressure. I don't know about you, but my job for a long time has been continually learning new systems. I don't get how so many of my peers fall into this head trip where they think they are gonna get left behind by what amounts to anticipated new features from some SaaS one day.

How do you hold both that the technology is so revolutionary because of its productivity gains, and at the same time that it's so esoteric you'd better be on top of everything all the time?

This stuff is all like a weird toy compared to other things I have taken the time to learn in my career. The sense of expertise people claim comes off to me like a guy who knows the Taco Bell secret menu, or the best set of coupons to use at Target. It's the opposite of intimidating!

I'm not scared that my skills will be obsolete, I'm scared employers will think they are. The labor market was already irrational enough as it was.


I may just be a "doomer", but my current take is we have maybe 3-5 years of decent compensation left to "extract" from our profession. Being an AI "expert" will likely extend that range slightly, but at the cost of being one of the "traitors" that helps build your own replacement (but it will happen with or without you).

> the technology explicitly promises that tomorrow's models will be better than today's, which means the skill investment is deflationary

This is just wrong. A) It doesn’t promise improvement. B) Even if it does improve, that doesn’t say anything about skill investment. Maybe its improvements amplify human skill, just as they have so far.

I have a reading list of a bunch of papers I didn't get through over the past 2 years. It's crazy how many papers on that list just aren't talked about anymore.

I kinda regret going through the SELU paper back in the late 2010s lol.

This is very well put. I think this platform can be useful, but I doubt it can be as big as they think it will be. At the end of the day it’s just storing some info with your data. I guess they are trying to be the next GitHub (and clearly have the experience :)). I doubt that success can be replicated today with this idea, even with $60 mil to burn.

But think of all the investor dollars between now and then!

They know, hence: forget what it does, it was created by the ex-CEO of another commonly used thingy!