I second this. This* is the matter against which we form understanding. This here is the work at hand: our own notes, discussions we have with people, the silent walk where our brain kinda processes errors and ideas .. it's always been like this since I was a kid, playing with construction toys. I never wanted somebody else to play while I wait to evaluate whether the result fits my desires. Desires that often come from playing.

Outsourcing this to an LLM is similar to an airplane stall .. I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem", but I have no more incentive to think, create, or solve anything.

Still blows my mind how differently people approach some fields. I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.

I'll push back against this a little bit. I find any type of deliberative thinking to be a forcing function. I've recently been experimenting with writing very detailed specifications and prompts for an LLM to process. I find that as I go through the details, thoughts will occur to me. Things I hadn't thought about in the design will come to me. This is very much the same phenomenon as when I was writing the code by hand. I don't think this is a binary either/or. There are many ways to have a forcing function.

I think it's analogous to writing and refining an outline for a paper. If you keep going, you eventually end up with an outline where you can concatenate what are basically sentences to form paragraphs. This is sort of where you are now: if you spec well, you'll get decent results.

I agree, I felt this a bit. The LLM can be a modeling peer in a way. But the phase where it goes to validate / implement is also key to my brain. I need to feel the details.

My think/create/solve focus is on making my agentic coding environment produce high quality code with the least cost. Seems like a technical challenge worth playing with.

It probably helps that I have 40 years of experience with producing code the old ways, including using punch cards in middle school and learning BASIC on a computer with no persistent storage when I was ten.

I think I've done enough time in the trenches and deserve to play with coding agents without shame.

> I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.

people seem to have an inability to predict second and third order effects

the first order effect is "I can sip a latte while the bot does my job for me"... well, great I suppose, while it lasts

but the second order effect is: unless you're in the top 10%, you will now lose your job, permanently

and the third order effect is the economy collapses as it is built on consumer spending

Alternatively, another second order effect is that you can't sip the latte anymore because you're orchestrating 8 bots doing the work and you're back to 80%-100% time saturation.

The previous second order effect is more likely. For every one person orchestrating 8 bots, 7 others are no longer needed.

So far in my career I have always had more requests coming in than implementations going out. If I can go 3 or 10 times faster, then I will still have plenty of work. Especially for the slew of ideas that never even get put in front of a dev because they're considered too low-value to build, or the ideas so far-fetched they were never considered feasible. I am not worried work will dry up.

What I believe is going to be interesting is what happens when non-engineers adopt building with agentic AI. Maybe 70 or 80% of their needs will be met without anyone else directly involved. My suspicion is that it will just create more work: making those generated apps work in a trustworthy manner, giving the agents more access to build context and make decisions, turning those one-off generated apps into something maintainable, etc.

This 100%. We want MOAR!!!

Or, there is just a lot more software written as the costs drop. I think most people work with software not tailored enough for their situation..

>Or, there is just a lot more software written as the costs drop.

Yeah, no, that's wishful thinking. The companies will just opt for higher margins.

Well, but the barrier to entry for new companies not doing that has dropped, and customers might slowly get used to paying less for tailor made.

Exactly this. Even if right now you, bottom-level wage-earning grunt, get to lighten your workload, sit back, and enjoy the latte, it's only a fleeting moment until the capital class tightens the screws.

Most people will get laid off and made redundant and those who remain are going to have to run faster than ever to produce wealth for the capital owners.

Yea, I don’t think that will be the case. Spreadsheets simplified the work of junior finance people who did all the work by hand before. But more people work in finance now than before.

Actually, for me it was the opposite: before, I wasn't able to play around and experiment in my free time that much, because with a day job I didn't have enough energy left to actualize my thoughts and ideas.

Now, since the bottleneck of moving the fingers to write code has gone down, I actually started to enjoy doing side projects. The mental stress from writing code has gone down drastically with Claude Code, and I feel the urge to create more nowadays!

you have a point.. i'm still confused about how this will affect jobs, markets

in a way a personal project is different from a job duty; here you're exploring, with little or no deadline.. at work, if I feel the LLM is doing everything and I don't really master it, I risk my job and my skills rot.

I wonder over the long term how programmers are going to maintain the proficiency to read and edit the code that the LLM produces.

There were always many mediocre engineers around, some of them even with fancy titles like "Senior," "Principal", and CTO.

We have always survived it, so we can probably also survive mediocre coders not reading the code the LLM generates for them, because they could never see those problems in their handwritten code either.

Honestly it’s not that hard. I already coded less and less as part of my job as I got more senior and just didn’t have time, but it was still easy to do code reviews, fix bugs, or sit down and whip out a thousand lines in a power session. Once you learn it, it doesn’t take much practice to maintain it. A lot of traditional coding is very inefficient. With AI it’s like we’re moving from combustion cars to EVs: the energy efficiency is night and day for doing the same thing.

That said, the next generation may struggle, but they’ll find their way.

Personally I planned to allocate weekly challenges to stay sharp.

It’s going to be extremely difficult if PR and code reviews do not prune unnecessary functions. From what I’m experiencing now, there’s a lot of additional code that gets generated.

"That's the neat part—you don't." Eventually the workflow will be to use the LLM to interpret the LLM-generated codebase.

I don’t read or edit the code my claude code agent produces. That’s its job now. My job is to organize the process and get things done.

In this case, why can’t other agents just automate your job completely? They are capable of that. What do you bring to the process by still doing the organization manually?

I still have to tell it what to do, and often how to do it. I manage its external memory and guidelines, and review implementation plans. I’m still heavily involved in software design and test coverage.

AI is not capable yet of automating my job completely – I anticipate this will happen within two years, maybe even this year (I’m an ML researcher).

Do you mean, from your perspective, within 2 years humans won’t be able to bring anything of value to the equation in management and control ?

No, I mean that my job in its current form – as an ML researcher with a phd and 15 years of experience - will be completely automated within two years.

Is the progress of LLMs moving up abstraction layers inevitable as they gather more data from each layer? First we fed LLMs raw text and code, and now they are gathering our interactions with the LLM regarding generated code. It seems like you could then use those interactions to make an LLM that is good at prompting and fixing another LLM's generated code. Then it's on to the next abstraction layer.

What you described makes sense, and it's just one of the things to try. There are lots of other research directions: online learning, more efficient learning, better loss/reward functions, better world models from training on Youtube/VR simulations/robots acting in real world, better imitation learning, curriculum learning, etc. There will undoubtedly be architectural improvements, hardware improvements, longer context windows, insights from neuroscience, etc. There is still so much to research. And there are more AI researchers now than ever. Plus current AI models already make us (AI researchers) so much more productive. But even if absolutely no further progress is made in AI research, and foundational model development stops today, there's so much improvement to be made in the tooling around the models: agentic frameworks, external memory management, better online search, better user interactions, etc. The whole LLM field is barely 5 years old.

If you want a machine (or in fact another human) to do something for you, there are two tasks you cannot delegate to them:

a) Specify what you want them to do.

b) Check if the result meets your expectations.

Does your current job include neither a nor b?

A/B happen at different abstraction levels. My abstraction level will be automated. My manager’s level will probably last another year or so.

So your assumption is that it will ultimately be the users of software themselves who will throw some every day language at an AI and it will reliably generate something that meets those users' intuitive expectations?

Yes, it will be at least as reliable as an average software engineer at an average company (probably more reliable than that), or at least as reliable as a self-driving car where a user says get me to this address, and the car does it better (statistically) than an average human driver.

I think this could work for some tasks but not for others.

We didn't invent formal languages to give commands to computers. We invented them as a tool for thinking and communicating things that are hard to express in natural language.

I doubt that we will stop thinking and I doubt that it will ever be efficient to specify tasks purely in terms of natural language.

One of my first jobs as a software engineer was for a bank (~30 years ago). This bank manager wasn't a man of many words. He just handed us an Excel sheet as a specification for what he wanted us to implement.

My job right now is to translate natural English statements from my bosses/colleagues into natural English instructions for Claude. Yes, it takes skill and experience to do this effectively. But I don't see any reasons Gemini 4, Opus 5 or GPT-6 won't be able to do this just as well as I do.

What are you going to do for work in 2 years?

I have enough savings for a few years, so I might just move to a lower COL area, and wait it out. Hopefully after the initial chaos period things will improve.

For someone in your position with your experience, it’s quite depressing that your job is going to be automated. I feel quite anxious when I see young generations in my country who say themselves that they are lazy about learning new things. The next generation will be useless to capitalist societies, in the sense that they won’t be able to bring value through administrative or white-collar work. I hope some areas of the industry will move slowly toward AI.


simonw alert!!!

> I see people at work who are drooling about being able to have code made for them

These people just drool at being able to have work done for them to begin with. Are you sure it is just "code"?

> I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.

In my circles I see some overlap with the people who are like "Done! Let's move on" and don't worry about production bugs, etc. "We'll fix it later."

I've always stressed out about introducing bugs and want to avoid firefighting (even in orgs where that's the way to get noticed).

Leaning too much on coding tools and agents feels too sketchy to someone like me right now (maybe always, tbh).

>I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.

+100 for this.

Everything you have said here is completely true, except for "not in that group": the cost-benefit analysis clearly favors letting these tools rip, even despite the drawbacks.

Maybe.

But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt. It kind of strikes me as similar to the hubris of calling the Titanic "unsinkable." It's an untested claim with potentially disastrous consequences.

> But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt.

It's not just likely, it's guaranteed to happen if you're not keeping an eye on it. So much so that it's really reinforced my existing prejudice toward typed and compiled languages, which reduce some of the checking you need to do.

Using an agent with a dynamic language feels very YOLO to me. I guess you can somewhat compensate with reams of tests, though (which raises the question: is the dynamic language still saving you time?).

Companies aren't evaluating on "keeping an eye on technical debt", but they ARE directly evaluating on whether you use AI tools.

Meanwhile they are hollowing out work forces based on those metrics.

If we make doing the right thing career limiting this all gets rather messy rather quickly.

> If we make doing the right thing career limiting this all gets rather messy rather quickly.

This has already happened. The gold rush brogrammers have taken over.

Careers are over. Company loyalty is a relic. Now it's a matter of adapting quickly to earn enough to survive.

Tests make me faster. Dynamic or not feels irrelevant when I consider how much slower I’d be without the fast feedback loop of tests.

You can (and probably should) still write tests, but there's an entire class of errors you know can't happen, so you need far fewer tests, focusing mostly on business logic.

Static type checking is even faster than running the code. It doesn't catch everything, but if finding a type error in a fast test is good, then finding it before running any tests seems like it would be even better.
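To make that trade-off concrete, here's a minimal Python sketch (the function, the clamping rule, and the mypy comment are illustrative assumptions, not anyone's production setup):

```python
def apply_discount(price: float, rate: float) -> float:
    """Business logic worth testing: the discount rate is clamped to [0, 1]."""
    rate = max(0.0, min(1.0, rate))
    return price * (1.0 - rate)

# A static checker such as mypy flags this call before anything runs:
#   apply_discount("100", 0.1)  # error: argument 1 has incompatible type "str"

# Tests can then focus on behavior rather than type plumbing:
assert apply_discount(100.0, 0.25) == 75.0
assert apply_discount(100.0, 1.5) == 0.0  # out-of-range rate clamped to 1
```

The whole-type-of-error class (wrong argument kinds) is caught without executing anything; the remaining tests only have to cover the clamping behavior.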

I can provide evidence for your claim. The technical debt can easily snowball if the review process is not stringent enough to keep out unnecessary functions.

Oh I'm well aware of this. I admitted defeat in a way.. I can't compete. I'm just at a loss, and unless LLMs stall and break for some reason (AI bubble, enshittification..) I don't see a future for me in "software" in a few years.

Somehow I appreciate this type of attitude more than the one that reflects total denial of the current trajectory. Fervent denial and AI trash-talking has maybe been the single most dominant sentiment on HN over the last year, albeit interspersed with a fair amount of amazement at our new toys.

But it is sad if good programmers lose sight of the opportunities the future will bring (future as in the next few decades). If anything, software expertise is likely to be one of the most sought-after skills - only a slightly different kind of skill than churning out LOCs on a keyboard faster than the next person: people who can harness the LLMs, design prompts at the right abstraction level, verify the code produced, understand when someone has injected malware, etc. These skills will be extremely valuable in the short to medium term AFAICS.

But ultimately we will obviously become obsolete if nothing (really) catastrophic happens, but when that happens then likely all human labor will be obsolete too, and society will need to be organized differently than exchanging labor for money for means of sustenance.

If the world comes to that it will be absolutely catastrophic, and it’s a failure of grappling with the implications that many of the executives of AI companies think you can paper over the social upheaval with some UBI. There will be no controlling what happens, and you don’t even need to believe in some malicious autonomous AI to see that.

I get crazy over the 'engineers are not paid to write LOC' line; nobody is sad because they don't have to type anymore. My two issues are that it levels the delivery game (for the average web app, anybody can now output something acceptable), and that it doesn't help me conceptualize solutions better, so I revert to letting it produce stuff that is not malleable enough.

I wonder about who "anybody can now output something acceptable" will hit most - engineers or software entrepreneurs.

Any implementation moat around rapid prototyping, and any fundraising moat around hiring a team of 10 to knock out your first few versions, seems gone now. Trying to sell MVP-tier software is real hard when a bunch of your potential customers will just think "thanks for the idea, I'll just make my own."

The crunch for engineers, on the other hand, seems like that even if engineers are needed to "orchestrate the agents" and manage everything, there could be a feature-velocity barrier for the software that you can still sell (either internally or externally). Changing stuff more rapidly can quickly hit a point of limited ROI if users can't adjust, or are slowed by constant tooling/workflow churn. So at some point (for the first time in many engineers' career, probably) you'll probably see product say "ok even though we built everything we want to test, we can't roll it all out at once!". But maybe what is learned from starting to roll those things out will necessitate more changes continually that will need some level of staffing still. Or maybe cheaper code just means ever-more-specialized workflows instead of pushing users to one-size-fits-all tooling.

In both of those cases the biggest challenge seems to be "how do you keep it from toppling down over time" which has been the biggest unsolved problem in consumer software development for decades. There's a prominent crowd right now saying "the agents will just manage it by continuing to hack on everything new until all the old stuff is stable too" but I'm not sure that's entirely realistic. Maybe the valuable engineering skills will be putting in the right guardrails to make sure that behavioral verification of the code is a tractable problem. Or maybe the agents will do that too. But right now, like you say, I haven't found particularly good results in conceptualizing better solutions from the current tools.

> your potential customers will just think "thanks for the idea, I'll just make my own."

yeah, and i'm surprised nobody talks about this much. prompting is not that hard, and some non software people are smart enough to absorb the necessary details (especially since the llm can tutor them on the way) and then let the loop produce the MVP.

> Or maybe cheaper code just means ever-more-specialized workflows instead of pushing users to one-size-fits-all tooling.

Interesting thought

The future is either a language model trained on AI code bloat and the ways to optimize the bloat away,

OR,

something like Mercor, currently getting paid really well by Meta, OpenAI, Anthropic and Gemini to pay very smart humans really well to proofread language model outputs.

Yep, it's a rather depressing realization, isn't it. Oh well, life moves on I suppose.

I think we realistically have a few years of runway left though. Adoption is always slow outside of the far right of the bell curve.

I'm sorry if I pulled everybody down .. but it's been many months since Gemini and Claude became solid tools, and I regularly have this strong gut feeling. I tried reevaluating my perception of my work, goals, value .. but I keep going back to nope.

After a multi-decade career that spanned what is rapidly seeming like the golden age of software development, I have two emotions: first gratefulness; second a mixture of resignation, maudlin reflection, and bitterness that I am fighting hard to resist.

As someone who’s always wanted to “get home and code something on my own”, I do have a glimmer of hope that I wonder if others share. I’ve worked extensively with Claude and there’s no question I am now a high velocity “builder” and my broad experience has some value here. I am sad that I won’t be able to deeply look at all the code I am producing, but I am making sure the LLM and I structure things so that I could eventually dig in to modules if needed (unlikely to happen I suppose).

Anyway, my hope/question: if I embrace my new role as fast system builder and am creative in producing systems that solve real problems "first", is there a path to making that a career (i.e., 4 friends and I cranking out real production software that's filling a real niche)? There must be some way for this to succeed. I am not yet buying the "everything will be instantly copyable and so any solution is instantly commodity" argument. If that's true, then there is no hope. I am still in shape, though, so going pro in pickleball is always an option, ha ha.

Unfortunately you aren't a high velocity builder. The velocity curve has now shifted and everyone having Claude blast out loc after loc is now a high velocity builder. And when everyone is a high velocity builder...nobody is.

“And when everyone’s super, no one will be”.

Fair point, but my hope is that the creativity involved in deciding what to build, with the choice informed by engineering experience (the project/value will not be obvious to everyone) will allow differentiation.

"creativity involved in deciding what to build, with the choice informed by engineering experience (the project/value will not be obvious to everyone) will allow differentiation."

How? Anyone upon seeing your digital product can just prompt the same thing in no time. If you can prompt it, I can prompt it and so can a million other people.

Nobody whether an individual or business holds any uniqueness or advantage to themselves. All careers and skill sets are leveled and worthless. Implementation skills are worthless. Creativity is worthless.

The only valuable thing is data.

Agree on data value, but as mentioned above, I am not yet buying the "everything will be instantly copyable and so any solution is instantly commodity" argument … a CRUD web app, sure; something with significant back-end complexity or a multi-service, systems-level solution, not so much. Perhaps optimistic, admittedly. Cheers.

I hear you. And maybe you're right. Maybe I'm deluding myself, but: when I look at my skilled colleagues who vibecode, I can't understand how this is sustainable. They're smart people, but they've clearly turned off. They can't answer non-trivial questions about the details of the stuff they (vibe-)delivered without asking the LLM that wrote it. Whoever uses the code downstream isn't gonna stand (or pay!) for this long-term! And the skills of the (vibe-)authors will rapidly disappear.

Maybe I'm just as naive as those who said that photographs lack the soul of paintings. But I'm not 100% convinced we're done for yet, if what you're actually selling is thinking, reasoning and understanding.

The difference with a purely still photograph is that code is a functional encoding of an intention. Code from an LLM could be perfect and still not encode the perfect intention of the product. I’ve seen that on many occasions. Many people don’t understand what code really is about and think they now have a printer toy and we don’t have to use pencils. That’s not at all the same thing. Code is intention, logic, specific use case all at once. With a non-deterministic system and vague prompting there will be misinterpreted intentions from the LLM, because the model makes decisions to move forward. The problem is the scale of it; we’re not talking about 1000 LOC. In a month you can generate millions of LOC, in a year hundreds of millions.

Some will have to crash and burn their company before they realize that having no human at all in the loop is nonsense. Let them touch the fire and make up their minds, I guess.

> Code is intention, logic, specific use case all at once. With a non-deterministic system and vague prompting there will be misinterpreted intentions from the LLM, because the model makes decisions to move forward. The problem is the scale of it; we’re not talking about 1000 LOC. In a month you can generate millions of LOC, in a year hundreds of millions.

People are also non-deterministic. When I delegate work to a team of five or six mid-level developers, or God forbid outsourced developers, I’m going to have to check and review their work too.

It’s been over a decade since my vision/responsibility could be carried out by just my own two hands and be done on time within 40 hours a week – until LLMs.

People are indeed not deterministic. But they are accountable. In the legal sense, of course, but more importantly, in an interpersonal sense.

Perhaps outsourcing is a good analogy. But in that case I'd call it outsourcing without accountability. LLMs feel more like an infinite chain of outsourcing.

As a former tech lead and now staff consultant who leads cloud implementations plus app dev, I am ultimately responsible for making sure that projects are done on time, on budget, and meet requirements. Neither my manager nor the customer would allow me to say it’s one of my team members’ fault that something wasn’t done correctly, any more than I could say “don’t blame me, blame Codex.”

I’ve said repeatedly over the past couple of days that if a web component was done by someone else, it might as well have been created by Claude, I haven’t done web development in a decade. If something isn’t right or I need modifications I’m going to either have to Slack the web developer or type a message to Claude.

Of course people are non-deterministic. But usually we expect machines to be. That’s why we trust them blindly and don’t check their calculations. We review people’s work all the time, though. Here, people will stop reviewing the machine’s LLM code as it becomes a kind of source of truth, like in other areas. That’s my point: reviewing code takes time, and even more time when no human wrote it. It’s a dangerous path to stop reviews out of trust in the machine, now that the machine is just kind of like humans: non-deterministic.

No one who has any knowledge or who has ever used an LLM expects determinism.

And there are no computer professionals who haven’t heard about hallucinations.

Reviewing whether the code meets requirements through manual and automated tests - and that’s all I cared about when I had a team of 8 under me - is the same regardless. I wasn’t checking whether John used a for loop or while loop in between my customer meetings and meetings with the CTO. I definitely wasn’t checking the SOQL (not a typo) of the Salesforce consultants we hired. I was testing inputs and outputs and UX.

Having a team of 8 people producing code is manageable. Having an AI with 8 agents writing code all day long is not the same volume: it can generate more code in a day than one person can review in a week. What you’re saying is that product teams will prompt what they want to a framework, the framework will take care of spec analysis, development, reviews, and compliance with the spec, and product teams with QA will make sure the delivery is functionally correct. No humans need to make sure of anything code-related. What we don’t know yet is: will AI still produce solid code through the years, given that it’s all statistical analysis, with the volume of millions of LOC, refactoring needed, data migrations, etc.? What will happen?

For context, I just started using coding agents - Codex CLI and Claude Code - in October. Once I saw that you had to be billed by usage, I wasn’t going to use my own money for it when it’s for a company.

Two things changed: Codex CLI now lets you use it with your $20 a month subscription (and I have never run into quota issues with it), and my employer signed up for the enterprise version of Claude, so we each have an $800 a month allowance.

My argument though is “why should I care about the code?” for the most part. If I were outsourcing a project or delegating it to a team lead, I would be asking high level architectural, security and scalability questions.

AI generated the code, AI maintains the code. I am concerned about abstractions and architecture.

You shouldn’t have to maintain or refactor “millions of lines of code”; if your code is well modularized with clean interfaces, making a change for $x7 may mean making a change for $x1…$x6. But you still should be working locally in one module at a time. You should do the same for the benefit of human coders. Heck, my little 5-week project has three independently deployable repos in a root folder. My root Agents file just has a summary of how all three relate via a clean interface.

In the project I am working on now, besides “does it meet the requirements”, I care about security, scalability, concurrency, user experience for the end user, user experience for the operations folks when they need to make config changes, and user experience for any developers who have to make changes long after I’m off this project. I haven’t looked at a single line of code - besides the CloudFormation templates. But I can answer any architectural question about any of it. The architecture and abstractions were designed by me and dictated to the agents

On this particular project, on the coding level, there is absolutely nothing that application code like this can do that could be insecure except hypothetically embed AWS credentials into the code. But it can’t do that either since it doesn’t have access to it [1].

In this case security posture comes from the architecture - S3 block public access, well scoped IAM roles, not running “in a VPC”. Things I am checking in the infrastructure as code and I was very specific about.

The user experience has to come from design and checking manually.

I mentioned earlier that my first stab at it scaled poorly. This was caused by my design, and I suspected it would be beforehand. But building the first version was so fast because of AI tools that I felt no pain in going with my more architecturally complicated plan B and throwing the first version away. I wouldn’t have known that by looking at the code. The code was fine; it was the underlying AWS service. I could only know that by throwing 100K documents at it instead of 1000.

I designed a concurrent locking mechanism that had a subtle flaw. Throwing the code into ChatGPT into thinking mode, it immediately found it. I might have been better off just to tell the coding agents “design a locking mechanism for $x” instead of detailing it.
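For illustration only, a lease-based lock of the kind where such subtle flaws tend to hide might look like this (a hypothetical Python sketch, not the actual design from that project; `LeaseLock` and its parameters are invented for the example):

```python
import threading
import time
from typing import Optional

class LeaseLock:
    """Lock with an expiring lease, so a crashed holder can't block others forever."""

    def __init__(self, lease_seconds: float = 5.0) -> None:
        self._guard = threading.Lock()
        self._holder: Optional[str] = None
        self._expires_at = 0.0
        self._lease = lease_seconds

    def acquire(self, owner: str) -> bool:
        with self._guard:
            now = time.monotonic()
            # The classic subtle flaw is omitting the expiry check below,
            # which deadlocks every waiter if a holder crashes mid-lease.
            if self._holder is None or now >= self._expires_at:
                self._holder = owner
                self._expires_at = now + self._lease
                return True
            return False

    def release(self, owner: str) -> None:
        with self._guard:
            if self._holder == owner:
                self._holder = None
```

A reviewer (human or LLM) has to reason about the interleavings here, not just the syntax, which is exactly the kind of check a thinking-mode pass can help with.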

Even maintainability was helped because I knew I, or anyone else who touched it, was probably going to be using an LLM. From the get-go I threw the initial contract, the discovery session transcripts, the design diagrams, the review of the design diagrams, and my project plan and breakdown into ChatGPT and told it to render a detailed markdown file of everything - that was the beginning of my AGENTS.md file.

I asked both Codex and Claude to log everything I was doing and my decisions into separate markdown files.

Any new developer could come into my repo, fire up Claude and it wouldn’t just know what was coded, it would have full context of the project from the initial contract through to the delivery

[1] Code running on AWS never explicitly has to worry about AWS credentials; the SDKs can find the information by themselves, using the credentials of the IAM role attached to the EC2 instance, Lambda, Docker container, etc.

Even locally you should be getting temporary credentials that are assigned to environment variables that the SDK retrieved automatically.

There are so many types of requirements though. Security is one, performance is another. No one has cared about while/for for a long time.

Okay - and the person ultimately leading the team is still responsible for it, whether you are delegating to more junior developers or to AI. You’re still reviewing someone else’s code based on your specs.

I have this nagging feeling I’m more and more skimming text, not just what the LLMs output, but all type of texts. I’m afraid people will get too lazy to read, when the LLM is almost always right. Maybe it’s a silly thought. I hope!

This is my fear too.

People will say "oh, it's the same as when the printing press came, people were afraid we'd get lazy from not copying text by hand", or any of a myriad of other innovations that made our lives easier. I think this time it's different though, because we're talking about offloading the very essence of humanity – thinking. Sure, getting too lazy to walk after cars became widespread was detrimental to our health, but if we get too lazy to think, what are we?

there are some YouTube videos about the topic, be it high school pupils addicted to LLMs or adults losing skills, and not just devs; society is starting to see strange effects

Can you provide links to these videos?

This one is in French (hope you don't mind): https://youtu.be/4xq6bVbS-Pw?t=534 mentions the issues for students and other cognitive issues.

I feel the same. And I expect even a lot of the early adopters and AI enthusiasts are going to find themselves as the short end of the stick sooner than later.

"Oops I automated myself out a job".

I've already seen this play out. The lazies on our floor were all crazy about AI because they could finally work less and finish their tasks. Until they realized that they were visibly replaceable now. The motto in team chats is now "we'll lie about the productivity gains to management, just say 10% but with lots of caretaking".

Yup. The majority of this website is going to find out they were grossly overpaid for a long time.

Imagine everyone who is in less technical or skilled domains.

I can't help but resist this line of thinking as a result. If the end is nigh for us, it's nigh for everyone else too. Imagine the droves of less technical workers in the workforce who will be unseated before software engineers. I don't think it is tenable for every worker in the first world to become replaced by a computer. If an attempt at this were to occur, those smart unemployed people would be a real pain in the ass for the oligarchs.

I feel the same.

Frankly, I am not sure there is a place in the world at all for me in ten years.

I think the future might just be a big enough garden to keep me fed while I wait for lack of healthcare access to put me out of my misery.

I am glad I am not younger.

So why haven't you been fired already?

.......

Gemini has only been deployed in the corp this year, but the expectations are now higher (doubled). I'll report back by the end of the year..

> the cost-benefit analysis clearly favors letting these tools rip

Does it? I have yet to see any evidence that they are a net win in terms of productivity. It seems to just be a feeling that it's more efficient.