So speaking of "mastery":
I wanted to know how to clone a single folder in a Git repository. Having done this before, I knew that there was some incantation I needed to make to the git CLI to do it, but I couldn't remember what it was.
I'm very anti-AI for a number of reasons, but I've been trying to use it here and there to give it the benefit of the doubt and avoid becoming a _complete_ dinosaur. (I was very anti-vim ages ago when I learned emacs; I spent two weeks with vim and never looked back. I apply this philosophy to almost everything as a result.)
I asked Qwen3-235B (reasoning) via Kagi Assistant how I could do this. It gave me back a long block of text telling me to do exactly the thing I didn't want to do: mkdir a directory, clone the whole repository into it, move the folder I wanted up to the top, and delete everything else.
When I asked it whether this was possible without creating the directory, it incorrectly told me that it was not. It used RAG-retrieved content in its chain of thought, for what that's worth.
It took me only 30 seconds or so to find the answer I wanted on StackOverflow. It was the second most popular answer in the thread. (git clone --filter=tree:0, then git sparse-checkout set --no-cone $FOLDER, found here: https://stackoverflow.com/a/52269934)
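For anyone else who lands here wanting the full recipe, here's roughly what that approach looks like end to end (the repo URL, folder, and branch name below are placeholders; the real details are in the linked answer):

```
# Clone without downloading file contents or checking anything out
git clone --filter=tree:0 --no-checkout https://github.com/example/repo.git
cd repo

# Restrict the working tree to just the folder you want
git sparse-checkout set --no-cone path/to/folder

# Check out the branch; only the objects needed for that folder get fetched
git checkout main   # or whichever branch you need
```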
I nudged the Assistant a smidge more by asking it if there was a subcommand I could use instead. It then suggested "sparse-checkout init", which, according to the man page for that subcommand, is deprecated in favor of "set". (I went to the man page to understand what the "cone" method was and stumbled on that tidbit.)
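For anyone else wondering about "cone": as I read the man page, cone mode (the default in recent Git versions) takes whole directory names, while --no-cone takes .gitignore-style patterns. Something like:

```
# Cone mode: arguments are directories to keep
git sparse-checkout set docs src/widgets

# Non-cone mode: arguments are .gitignore-style patterns
git sparse-checkout set --no-cone 'docs/*.md'
```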
THIS is the thing that disappoints me so much about LLMs being heralded as the next generation of search. Search engines give you many, many sources to guide you to the correct answer if you're willing to do the work. LLM services tell you what the "answer" is, even if it's wrong. You get potential misinformation back while also turning your brain off and learning less; a classic lose-lose.
ChatGPT, Gemini, and Claude all point me to a plethora of sources for most of my questions that I can click through and read. They're also pretty good at both basic and weird git issues for me. Not perfect, but pretty good.
Also, part of the workflow of using AI is accepting that your initial prompts might not get the right answer. It's important to scan the answer like you did, use intuition to know "this isn't right", and then try again. Just like we learned how to type good search queries, we'll also learn how to write good prompts. Sands will shift frequently at first, with prompt strategies that worked well yesterday requiring a different approach tomorrow, but eventually it will stabilize, just as search query strategies did.
_I_ know that the answer the prompt produced isn't quite right, but only because I have enough experience with the Git CLI to recognize that.
Someone who doesn't use the Git CLI at all and is relying on an LLM to do it will not know that. There's also no reason for them to search beyond the LLM or use the LLM to go deeper because the answer is "good enough."
That's the point I'm trying to make. You don't know what you don't know.
Trying different paths that might go down dead ends is part of the learning process. LLMs short-circuit that. This is fine if you think that learning isn't valuable in (in this case) software development. I think it is.
<soapbox>
More specifically, I think that this will, in the long term, create a pyramidal economy in which engineers like you and me who learned "the old way" reap most of the rewards while everyone else coming into the industry fights for scraps.
I suppose this is fine if you think that this is just the natural order of things. I do not.
Tech is one of the only career paths, if not the only one, that can give almost anyone a very high quality of life (at least in the US) without gatekeeping people behind the school they attended (i.e., essentially, being born into the right social stratum), years of additional education, and even more years of grinding the way law, medicine, and consulting do.
I'm very saddened to see this going away while those of us in the old guard cheer its destruction (because our jobs will probably be safe regardless).
</soapbox>
I also disagree with the claim that the LLM gives you a "plethora" of sources. The Assistant session I used cited three [^0]. A regular search on the same topic gave me more than 15 [^1].
Yes, the 15 that search gives me are all over the quality map, but I have much more information at my disposal to find the answer I'm looking for. It also doesn't purport to be "the answer," the way LLMs tend to do.
[^0] https://kagi.com/assistant/839b0239-e240-4fcb-b5da-c5f819a0f...
[^1] https://kagi.com/search?q=git+clone+single+folder