It's tough. We run botsbench.com, which tracks AI progress on a top CTF, and I gave a talk at CCC a few months ago on our own results doing AI speedruns, so I think about this a lot.
In the trainings we give (AI agents for security, and a graph masterclass), we ended up leaning into it. For example, we ship with a skills bundle. There are upsides: less code-forward participants can go further and appreciate that, and there's less of a gap between high-level concepts and successful hands-on work. But at the same time, manual work builds a lot of intuition and knowledge that gets missed in auto modes.
Will this bring back the age of LAN parties, where the LAN is disconnected from the internet, and mobile connectivity is blocked?
I think that ship has sailed as well --
botsbench.com shows Sonnet 4.5+ with a Claude Code harness does pretty well, and Sonnet roughly tracks the edge of what self-hosted models can do on the upper tier of affordable GPUs, e.g. running 1-2 DGX Sparks and waiting ~6 months for open-source models to catch up a bit