With AI, we're cargo-culting understanding. We're reproducing the surface of having understood something, but we're robbing ourselves of the time and effort it takes to truly do it.

i've been telling my coworker this. the only use case he can conjure up for AI is simply "im going to give claude snowflake cortex, our integration code, all our documentation, jira tickets and its gonna make everything so much better. we'll be able to ask him anything and get the answer" and he's just lost the plot because there wasn't much of a plot. sci-fi has infused him with how great it would be to have something to answer any question he had. he's hung up on this possibility of having his own tony stark jarvis at his disposal; in his head this is going to be the thing that speeds him up.

i'd say it's been a huge distraction for him, and the obsession with using LLMs for Big Wikiz hasn't yielded anything near what he thought the tech was for. on a few occasions now he's learned the hard way how imperfect the technology is.

between that and everyone's grand visions for agentic workflows, i've mostly just receded into being one of the few who is still regularly delivering stuff. i'm using AI to speed my delivery up quite a bit, i'm just not wasting my time taking it on some big grand adventure. the irony is that a lot of people pushed back on companies who wanted to implement chat bots, and now they spend most of their credits/tokens making their own chat bots by collecting six trillion .md files and adding skill files.

my real takeaway is this: i've come to reason that there is some sort of loss of actual, real institutional knowledge when we attempt to take shortcuts to growing the breadth of our own knowledge. i don't mean "hey claude give me some examples of how companies typically design x to solve for y" or "golang is new to me, what are the benefits of a compiled language versus something that requires a runtime".

no, i'm talking about these kinds of questions:

"/somePersonalBigWikiProjectInvokedBySkill.md claude review our current tooling and infrastructure, how can we 5x our deployment speed, then search the web for <some SaaS company> and put a proposal together to get it implemented at the organization and include a 5 year cost benefit analysis and ... "

i look around and it feels like everyone is nerfing themselves. that latter question? people are just sending claude proposals left and right. my eyes have completely glazed over. is it really that hard to do some digging yourself? we're already ceding the ability to just go grab an architect or senior engineer and ask him what he thinks about how <some SaaS company> will fit with the broader suite of technologies and visions on the horizon. we're just skipping the pieces where we do a little discovery together and work together on an outcome. we're walking away with a surface-level understanding of many things.

this clearly has visible impacts on how we engage with each other. there's something there that i'm noticing and don't have the words for. it's mostly that people are less able to explain what they're talking about when pressed for deeper details, but also everyone's behavior is now different, because AI sort of... makes them feel like they have definitive answers/strategies, and they're no longer willing to have their ideas challenged. they no longer see that as a learning experience, a chance to learn from someone with wisdom who is already a walking wikipedia on something. it's the perfect technology for people who hate when someone with way more experience than them says "maybe not a good idea, and here's why"

i've met some interesting people who are just... walking encyclopedias on some or many domains. incredibly smart people who have so much knowledge and wisdom and so many years of experience not just with tech but with people and failures and successes. i don't doubt for a second that the human brain is capable of holding an unbelievable index of information in a natural way that marries well with decision making processes that come from experience. i'm not sure what gap people are trying to close building themselves some proverbial great library here, but i would encourage people to just sit back and trust that their brain is still one of the greatest technologies at their disposal.

I feel the exact same way, it helps speed up development a lot (and eliminates a lot of really annoying grunt work). But I see people I work with doing shit with it that doesn't make any sense, e.g. writing 50k lines of code for a "compiler" when it's really just an interpreter under the hood. Like they never take the time to understand the domain more deeply, they just use claude to sling some shit that barely works

> i'm not sure what gap people are trying to close building themselves some proverbial great library here, but i would encourage people to just sit back and trust that their brain is still one of the greatest technologies at their disposal.

Culturally I think this is going to fuck things up significantly. If I take the time to read all of the latest papers in the LLM space, I'm damn well not going to summarize it or document what I've learned for anyone. (Maybe this is why there are not many high quality books aggregating all of this information in all the latest papers, all of the advancements, etc. All the people doing this work would rather (smartly) milk the cash cow and maintain the information asymmetry.)

Or think about open source: this will kill it for people trying to make money off a product while keeping it open source, because someone could spin up a competitor overnight.

AI is going to make the information easier to acquire for cheap. But it's going to absolutely destroy the incentive structure and trust required to have an open exchange of information. It was already bad enough because the industry is not incentivized to produce quality literature for educational purposes like academia is. But after this, it'll be a complete shit show

>i look around and it feels like everyone is nerfing themselves

>this clearly has visible impacts on how we engage with each other

> there's something there that I'm noticing and don't have the words for.

Welcome to ASI takeoff!

> im going to give claude snowflake cortex, our integration code, all our documentation, jira tickets and its gonna make everything so much better. we'll be able to ask him anything and get the answer

This is actually a good idea because it's a very cheap way to build your own industrial-strength search engine. We've forgotten how cool search engines are because Google's is so shit now.

(Although you don't need Claude, you can self-host this with minimal effort now.)
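to make the "self-host it" point concrete, here's a minimal sketch of search over your own docs: bag-of-words cosine similarity, stdlib only. the doc names and contents are made-up placeholders, and a real setup would swap this scoring for proper embeddings, but the shape is the same:

```python
# minimal local search over your own docs: bag-of-words cosine similarity.
# stdlib only; doc names/contents are made-up placeholders, not real files.
import math
import re
from collections import Counter

docs = {
    "deploy.md": "how we deploy services, rollback steps, and CI config",
    "oncall.md": "paging rotation, escalation policy, incident response",
    "arch.md": "service boundaries, data flow, and integration notes",
}

def _vec(text):
    # crude tokenizer: lowercase alphanumeric runs, counted per document
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# precompute one vector per document
_index = {name: _vec(body) for name, body in docs.items()}

def search(query, k=2):
    """return the k doc names most similar to the query, best first."""
    q = _vec(query)
    ranked = sorted(_index, key=lambda n: _cosine(q, _index[n]), reverse=True)
    return [n for n in ranked[:k] if _cosine(q, _index[n]) > 0]
```

point being: the "index your wiki and ask it questions" part is a retrieval problem that's been solved for decades, and you can run it entirely on your own box.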


>> With AI, we‘re cargo-culting understanding

We're cargo-culting "the manager view". Like the critique of Game of Thrones you can read on Bret Devereaux's blog, about it being written from an elite's point of view: it's utopian and sounds good ... for the elites, the people who benefit from the hard work they never have to do themselves. But like any elite bubble wildly disconnected from reality, this one will fall badly. Maybe French Revolution badly, when the answer to the masses of unemployed "displaced" by AI screaming "we can't get a piece of bread to eat" is "let them eat cake instead".

AI can do things on its own, without you understanding them, yes.

But if you are trying to understand something well, there is no better tool for helping you than AI.

I think that AI can sometimes help a lot. But I think doing it correctly is a tightrope and one misstep can easily have terrible results.

The first issue is a result from reinforcement learning which tells you that you really want to be doing a large fraction of your learning on-policy when possible.

It's true of RL agents, but I think it's actually a universal learning result that applies to humans too. Sure, you could ask AI to solve a difficult math problem step by step, and it can expose you to tricks you had no idea about and the general method for solving such a problem.

But there is something about the work that you produced without external influence (the on-policy episode) that is sort of irreplaceably important.

The second is the speed and conciseness of the information AI presents to you. It seems like a superpower, but there are two problems I have with it.

A) It's too fast. Unless you are artificially slowing yourself down by reading like one sentence per minute, there is something about how quickly everything you want gets presented to you that has a strong in-one-ear-out-the-other effect. You need to slow down. You need to appreciate the details.

B) It's also often too concise. There is something about doing the research yourself that lets you stumble upon something new that you might not have thought was helpful. Lots of times I've found amazing nuggets on missteps and tangents.

There are more issues as well, but these are the two I'm most concerned about. You need to be cognizant of the work that isn't being done when you use AI to do research. And imo it's deeply problematic for young students who have literally never done the hard work of trying to answer questions themselves, because they might not realize the problem.

> But if you are trying to understand something well, there is no better tool for helping you than AI

Could not disagree more.

The best way to understand something deeply is to practice it. AI is anti-practice. It's like trying to learn something by following a YouTube video step by step: there's an outcome and it feels productive, but it's not going to stick in your head at all. It's not practice.

I would say a better analogy is using Google… you can use it as a tool to seek information and deepen your understanding. But it requires your brain to be engaged and to be putting that stream of knowledge into practice.

you can use AI to get a faster explanation of what's happening in a big codebase; in my experience it makes the timelines on developing features much shorter

am I losing out on something by not having to spend hours clicking through redundant parts of a large codebase to get a concrete answer on something? doesn't feel like it