>they get enough idioms in C++ slightly incorrect

this is part of why I stay in Python when doing AI-assisted programming; there's so much training material out there for Python, and I _generally_ don't care if it's slightly off-idiom: it's still probably fine.

Yeah, I was thumbs-down on AI-assisted programming at first: when I tested it out by adding things to my existing C and C++ projects, its suggestions were... kind of wild. Then, a few months later, I gave it another chance while writing some Python and was impressed. Finally, I used it on a new-from-blank-text-file Rust project and was pretty much blown away.

The best results I have ever seen were with obscure languages with very strong type safety. A researcher at a sibling org to my own told me to try it with the Lean language, and it gave basically flawless suggestions.

I'm guessing this is because the only training material was blogs from uber-nerdy CS researchers on a language where "mistakes" are basically impossible to write, rather than a bunch of people flailing on forums with hello-world-ish questions and segfaulting examples.

As someone who doesn't generally program, I found it pretty good at getting an init.lua set up for nvim with a bunch of plugins and some functions that would have taken me ages to write by hand. That said, it still took a day or two of working with it and troubleshooting everything, and while it's been reliable so far, I worry that it's not exactly idiomatic. I don't know enough to really say.

What it's really good at is taking my description of something and pointing me in the right direction to do my own research.

(Two things helped me get decent code. One was to describe the problem and the desired solution, followed by a "Does that make sense?"; this seems to get it to restate the problem in its own words and produce better solutions. The other was to copy the output into a fresh session and ask for a description of what the code does and what improvements could be made.)

The downside of this nvim solution is the same as pasting big blobs of AI code into a repo, or pasting big vim configs you find online into your vimrc: you can't explain the pasted code.

When you need something fast for whatever reason, sure, but later, when you want to tweak or add something, you'll have to finally sit down and learn basically the whole thing, or at least a major part of it, anyway. IMO it's better to do that from the start, but that's not always practical.

When I've used AI to write shell scripts, it used a lot of syntax I couldn't understand, so I took the time to ask it to walk me through the parts I didn't get. That took longer than blindly pasting what it generated, but still less time than it would have taken to learn to write my own script via search. With search, a lot of time is spent guessing the right search term; with chat, assuming it generated a reasonable answer (I know: a big assumption!), my follow-up questions can directly reference aspects of the generated code.

Having something explained to me has never helped me retain the information. That only happens when I spend the time actually figuring things out myself.

Not saying it's a better way, but I started with vim by copying someone's config (on GitHub), removing all the extraneous stuff, then slowly familiarizing myself with the rest. After that it was a matter of reading the docs whenever I wanted to change some configuration. I believe the first part is faster than dealing with an LLM, especially with unfamiliar software.

I agree with this approach generally, but I needed to use some Lua plugins to do something specific fairly quickly, and didn't feel like messing around with it for weeks on end to get it just right.

My data science friend tells me it's really good at writing bad pandas code because it's seen so much bad pandas code.
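To give a sense of what "bad but working" pandas usually means (the DataFrame and column names below are made up for illustration), it's typically row-by-row iteration where a single vectorized expression would do:

```python
import pandas as pd

# Hypothetical data; the column names are just for illustration.
df = pd.DataFrame({"price": [10.0, 12.5, 8.0], "quantity": [3, 1, 7]})

# The "bad but working" style LLMs often emit: looping over rows
# with iterrows() and building up a Python list.
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["quantity"])
df["total"] = totals

# The idiomatic version: one vectorized expression, no explicit loop.
df["total"] = df["price"] * df["quantity"]
```

Both produce the same result, which is exactly why the loopy version survives on forums and in training data: it runs fine, it's just slow and verbose at scale.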

At the end of the day, it depends on where you are in the hierarchy. Having it write code that's bad but works for a hobby project in React is one thing; I'm having a lot of fun with that. Having it write bad code for me professionally is another thing entirely. Either way, there's no going back to before ChatGPT, just like there's no going back to before Stack Overflow or Google. Or the Internet.

Wouldn't AI be worse at Rust than at C++ given the amount of code available in the respective languages?

Maybe this is a case where more training data isn't better. There is probably a lot of bad/old C++ out there in addition to new/modern C++, compared to Rust, which is relatively all modern.

Yes, I think that's it. There is a lot of horrible C++ code out there, especially on Stack Overflow, where "this compiled for me" sometimes ends up as the accepted answer. There are also a lot of ways to use C++ poorly or incorrectly without even knowing it.