A software engineer with an LLM is still infinitely more powerful than a commoner with an LLM. The engineer can debug, guide, change approaches, and give very specific instructions if they know what needs to be done.

The commoner can only hammer the prompt repeatedly with "this doesn't work can you fix it".

So yes, our jobs are changing rapidly, but the profession doesn't strike me as becoming obsolete any time soon.

I listened to a segment on the radio where a college teacher told their class that it was okay to use AI to assist you during a test, provided you:

1. Declare in advance that AI is being used.

2. Provide verbatim the question-and-answer session.

3. Explain why the answer given by the AI is a good answer.

Part of the grade will include grading steps 1, 2, and 3.

Fair enough.

It’s better than nothing, but the problem is that students will figure out they can feed step 2 right back to the AI, logged in via another session, to get step 3.

This is actually a great way to foster the learning spirit in the age of AI. Even if the student uses AI to arrive at an answer, they will still need to, at the very least, ask the AI for an explanation that will teach them how it arrived at the solution.

No, this is not the way we want learning to work. It's the same reason students are banned from using calculators until they have mastered the foundational thinking.

That's a fair point, but AI can do much more than just provide you with an answer like a calculator.

AI can explain the underlying process of manual computation and help you learn it. You can ask it questions when you're confused, and it will keep explaining no matter how off the topic you go.

We don't consider tutoring bad for learning - quite the contrary, we tutor slower students to help them catch up, and advanced students to help them fulfill their potential.

If we use AI as if it were an automated, tireless tutor, it may change learning for the better. Not that learning was anywhere near great before.

You're assuming the students are reading any of this. They're not, they're just copy/pasting it.

Well, you can lead the horse to water, but you can't make him drink.

If you assume all students are lazy assholes who want to cheat the system, then I doubt there's anything that would help them learn.

Also, so much of the LLM's answer is fluff, when it's not outright wrong.

There is research that shows that banning calculators impedes the learning of maths. It is certainly not obvious to me that calculators will have a negative effect - I certainly always allowed my kids to use them.

LLMs are trickier and use needs to be restricted to stop cheating, just as my kids had restrictions on what calculators they could use in some exams. That does not mean they are all bad or even net bad if used correctly.

> There is research that shows that banning calculators impedes the learning of maths.

Please share what you know. My search found a heap of opinions and just one study, which showed that use of calculators made children less able to calculate by themselves; it said nothing about the ability to learn and understand math in general.

> There is research that shows that banning calculators impedes the learning of maths.

I've seen oodles of research concluding the opposite at the primary level (grades 1-5, say). If the research you mention exists, it must be very well hidden :-/

There were 79 studies used in this meta-analysis, so it cannot be that well hidden: https://psycnet.apa.org/record/1987-11739-001

> There were 79 studies used in this meta-analysis, so it cannot be that well hidden: https://psycnet.apa.org/record/1987-11739-001

From the first page of that study:

> Do calculators threaten basic skills? The answer consistently seemed to be no, provided those basic skills have first been developed with paper and pencil.

So, yeah, there are no studies I have found that support any assertion along the lines of:

>>> There is research that shows that banning calculators impedes the learning of maths.

If you actually find any, we still have to consider that the meta-study you posted is already 74 studies ahead in confirming that you are wrong.

Best would be for you to find 75 studies that confirm your hypothesis. Unfortunately, even though I read studies all the time, at one point had full-text access to studies via an institutional license, and spent almost all of my after-hours time between 2009 and 2011 reading papers on primary/foundational education, I have not seen even one that supports your assertion.

I have read well over a hundred papers on the subject, and did not find one. I am skeptical that you will find any.

Calculators don't show you the steps. AI can.

Symbolic computation is a thing. How do you think Wolfram Alpha worked for the 20 years before AI?
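The point about symbolic computation can be made concrete with a toy example (my own sketch, not anything from Wolfram Alpha itself): a computer algebra system treats expressions as data and rewrites them with exact rules, step by step, with no statistical guessing involved.

```python
# Toy symbolic differentiator: expressions are numbers, the string "x",
# or tuples ("+" | "*", left, right). Each rule is exact, so every
# "step" the system shows is guaranteed correct by construction.

def diff(e):
    """Differentiate expression e with respect to x."""
    if isinstance(e, (int, float)):
        return 0                                            # d/dx c = 0
    if e == "x":
        return 1                                            # d/dx x = 1
    op, a, b = e
    if op == "+":
        return ("+", diff(a), diff(b))                      # sum rule
    if op == "*":
        return ("+", ("*", diff(a), b), ("*", a, diff(b)))  # product rule
    raise ValueError(f"unknown operator {op!r}")

def evaluate(e, x):
    """Evaluate expression e at a concrete value of x."""
    if isinstance(e, (int, float)):
        return e
    if e == "x":
        return x
    op, a, b = e
    if op == "+":
        return evaluate(a, x) + evaluate(b, x)
    return evaluate(a, x) * evaluate(b, x)

# d/dx (x*x + 3*x) = 2x + 3, which is 7 at x = 2.
expr = ("+", ("*", "x", "x"), ("*", 3, "x"))
print(evaluate(diff(expr), 2))  # 7
```

Real systems like Wolfram Alpha or SymPy are vastly richer, but the mechanism is the same kind of deterministic rewriting, which is why they can show trustworthy intermediate steps where an LLM may hallucinate them.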

[deleted]

And it’s making that up as well.

Yeah; it gets steps 1-3 right, 4-6 obviously wrong, and then 7-9 subtly wrong such that a student, who needs it step by step while learning, can't tell.

That's roughly what we did as well. Use anything you want, but in the end you have to be able to explain the process and the projects are harder than before.

If we can do more now in a shorter time then let's teach people to get proficient at it, not arbitrarily limit them in ways they won't be when doing their job later.

Props to the teacher for putting in the work to thoughtfully grade an AI transcript! As I typed that, I wondered whether a lazy teacher might then use AI to grade the student's AI transcript.

I think it's a bit like the Dunning-Kruger effect. You need to know what you're even asking for and how to ask for it. And you need to know how to evaluate if you've got it.

This actually reminds me so strongly of the Pakleds from Star Trek TNG. They knew they wanted to be strong and fast, but the best they could do was say, "make us strong." They had no ability to evaluate that their AI (sorry, Geordi) was giving them something that looked strong but simply wasn't.

Oh wow, this is a great reference/image/metaphor for "software engineers" who misuse these tools: "the great Pakledification" of software.

Yep, I've seen a couple of folks pretending to be junior PMs, thinking they can replace developers entirely. The problem is, they can't write a spec. They can define a feature at a very high level, on a good day. They resort to asking one AI to write them a spec that they feed to another.

It's slop all the way down.

People have tried that with everything from COBOL to low-code. It's even succeeded in some problem domains (e.g. the things people code with spreadsheet formulas), but there is no general solution that replaces programmers entirely.

A "commoner"... Could you possibly be more full of yourself?

That was literally the opposite of my intention. Maybe the choice of word wasn't perfect, but basically, I was trying to highlight that domain expertise is still valuable in the specific scenario of software engineering.

The same could be said about any other job: if you put me up against a construction worker and give us both expensive power tools, he will still do a better job than me, because I have no experience in that domain.

Agree totally.