The replacement of reading books and writing letters with television/video streaming and phone/texting has done far more to depress average human cognitive skills than LLM code and text generation could ever hope to achieve.

As for code generation, there are some intriguing workflows out there. E.g. start by having the LLM generate a git repo for your project, and use the repo's commit DAG as the guide for further code generation. Embed the code files into a vector database, differentiating between static and dynamic files, and re-embed on commits. Feed the AI the same data structures compilers use - ASTs, call graphs, dependency graphs - alongside the git repo's DAG. If it's proprietary code development, run an open-source LLM locally to avoid leaking code at the embedding and generation stages. Then run an 'overnight build' using your local LLM for code generation. Come in the next day, review and test all the code the LLM generated. At the end of the day, commit the changes that look good, rinse and repeat.
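The "re-embed on commits" step can be sketched minimally: track a content hash per file and only re-embed what actually changed, so static files are skipped. This is an illustrative sketch, not any particular tool's API - `fake_embed` is a placeholder for a real local embedding model (e.g. one served via sentence-transformers), and the file dict stands in for walking a real git working tree.

```python
import hashlib

def fake_embed(text):
    # Placeholder embedding: a real setup would call a locally hosted
    # model; a truncated hash just stands in for a vector here.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

class CodeIndex:
    """Re-embed only files whose content changed since the last commit."""

    def __init__(self):
        self.hashes = {}   # path -> content hash
        self.vectors = {}  # path -> embedding

    def update(self, files):
        # files: dict of path -> file content, e.g. from the latest commit
        changed = []
        for path, text in files.items():
            digest = hashlib.sha256(text.encode()).hexdigest()
            if self.hashes.get(path) != digest:  # unchanged files skipped
                self.hashes[path] = digest
                self.vectors[path] = fake_embed(text)
                changed.append(path)
        return changed

index = CodeIndex()
index.update({"main.py": "print('v1')", "util.py": "PI = 3.14159"})
# Second commit: only main.py changed, so only it gets re-embedded.
print(index.update({"main.py": "print('v2')", "util.py": "PI = 3.14159"}))
```

In practice you would hang this off a post-commit hook, diff against the previous commit to get the changed paths, and push the fresh vectors into the database before the next generation pass.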

The key here is that you are actively involved at every stage - and the same strategies work for any non-coding writing task. E.g. structure your prompts carefully, like well-designed short essays. Read the LLM output with an eye out for errors and inconsistencies, copy and paste those into the next prompt, and demand a critical review. Once the context window gets too big, boil it all down into a vector database for future reference, generate a new summary prompt from that database, rinse and repeat.
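The "boil it down" step above can also be sketched in a few lines: once the running transcript blows past a budget, collapse each exchange into a one-line summary and build a fresh prompt from those. Everything here is a stand-in - `summarize` just takes the first sentence, where a real setup would embed the exchanges into a vector database and have the LLM itself do the condensing.

```python
def summarize(exchange):
    # Placeholder summarizer: keep the first sentence. A real pipeline
    # would retrieve from a vector DB and condense with the LLM.
    return exchange.split(". ")[0].strip()

def compact_prompt(history, budget_chars=2000):
    """Return the raw transcript if it fits, else a summarized prompt."""
    transcript = "\n".join(history)
    if len(transcript) <= budget_chars:
        return transcript
    bullets = [f"- {summarize(h)}" for h in history]
    return "Summary of prior discussion:\n" + "\n".join(bullets)

history = [
    "User asked for a parser refactor. LLM produced a draft with two bugs.",
    "User pasted the bugs back and demanded a critical review. LLM fixed one.",
    "User re-tested. The second bug turned out to be a stale import.",
]
print(compact_prompt(history, budget_chars=80))
```

The point of the sketch is the control flow, not the summarizer: you decide when the context has grown stale, and the new summary prompt becomes the seed for the next round.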

I'd suggest thinking of yourself as the conductor of an orchestra - but one who knows the capabilities of every instrument, so you're actually thinking harder and working harder than you were before you had access to these AI tools. That, at least, will keep your cognitive skills in tune.

P.S. I tend to get much better critical analysis from the LLM if I start all chats with:

> "Preliminary instructions: do not engage in flattery or enthusiastic validation or praise of any kind. Do not offer suggestions for improvement at the end of output. If this is clear, respond with 'yes, this is clear'."

Don't let the LLM give you roses and lead you down the garden path; instead, think of it as a politely-adversarial low-trust relationship.

> "instead think of it as a politely-adversarial low-trust relationship."

Lol... this sounds terrible.