It comes down to how you use it: whether you're just getting an answer and moving on, or getting an answer and then deepening your understanding of why it's correct.

I was building a little roguelike-ish sort of game for myself to test my understanding of Raylib. I used as few external resources as possible beyond the Raylib cheatsheet for function references, and initially that included avoiding AI.

I ran into my first issue when trying to determine line of sight. My naive approach was to trace a line across the grid and tag cells as visible until the line hit a solid object, but this produced very inconsistent sight. I tried a number of things on my own before realizing I needed to do some research.
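
A minimal sketch of that kind of naive float-stepping approach (not the exact code; `is_solid` is a hypothetical stand-in for the game's map query):

```c
#include <math.h>
#include <stdbool.h>

// Hypothetical map query standing in for the game's actual grid.
bool is_solid(int x, int y);

// Naive line of sight: step along the line in floating point and round
// each sample to a grid cell. Rounding decides which cell each sample
// lands in, so cells on the line can be skipped and the result can even
// differ between A -> B and B -> A.
bool naive_line_of_sight(int x0, int y0, int x1, int y1)
{
    float dx = (float)(x1 - x0), dy = (float)(y1 - y0);
    float len = sqrtf(dx * dx + dy * dy);
    if (len == 0.0f) return true;          // same cell

    float stepx = dx / len, stepy = dy / len;
    float fx = (float)x0, fy = (float)y0;

    for (float t = 1.0f; t < len; t += 1.0f) {
        fx += stepx;
        fy += stepy;
        int cx = (int)roundf(fx), cy = (int)roundf(fy);
        if (cx == x1 && cy == y1) break;   // don't test the target itself
        if (is_solid(cx, cy))
            return false;                  // blocked before the target
    }
    return true;
}
```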

All of the search results I found used raycasting, but I wanted to see if my original idea had merit and didn't want to go that route. Finally, I gave up on searching and gave Copilot a function to fill in, and it used Bresenham's line algorithm. It was exactly what I was looking for, and it also taught me why my approach didn't work consistently: there's a small margin of error when tracing a line across a grid, and Bresenham's error term accounts for it.
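
A minimal sketch of Bresenham-based line of sight (not Copilot's exact output; same hypothetical `is_solid` query as above). The integer error term is exactly the bookkeeping the float version lacks:

```c
#include <stdbool.h>
#include <stdlib.h>

bool is_solid(int x, int y);  // same hypothetical map query as above

// Bresenham's line algorithm: walk the cells of the line using an
// integer error term instead of float rounding, so the same endpoints
// always yield the same cells, with none skipped.
// Returns true if (x1, y1) is visible from (x0, y0).
bool bresenham_line_of_sight(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  // accumulated deviation from the true line

    for (;;) {
        if (x0 == x1 && y0 == y1)
            return true;                        // reached the target
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
        // A solid cell blocks sight, but the target cell itself stays
        // visible so walls can still be seen.
        if (is_solid(x0, y0) && !(x0 == x1 && y0 == y1))
            return false;
    }
}
```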

Most people, however, won't take an interest in why the AI's answer works. So while it can be a great learning tool, it can definitely be used in a brainless sort of way.

This reminds me of my experience using computer-assisted mathematical proof systems, where the computer's proof search pointed me at the Cantor–Schröder–Bernstein theorem, giving me a great deal of insight into the problem I was trying to solve.
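
For the curious, the theorem is available in Lean's Mathlib, for instance, where a proof search like `exact?` can surface it. A minimal sketch, assuming the current Mathlib lemma name `Function.Embedding.schroeder_bernstein` and import path:

```lean
-- Sketch only: the lemma name and import path are my best recollection
-- of current Mathlib and may have moved between versions.
import Mathlib.SetTheory.Cardinal.SchroederBernstein

example {α β : Type*} {f : α → β} {g : β → α}
    (hf : Function.Injective f) (hg : Function.Injective g) :
    ∃ h : α → β, Function.Bijective h :=
  Function.Embedding.schroeder_bernstein hf hg
```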

That system, of course, doesn't rely on generative AI at all: all contributions to the system are appropriately attributed, etc. I wonder if a similar system could be designed for software?

Now imagine how much better

- the code

- your understanding

would have been if you had skipped Copilot, described your problem, and asked for algorithmic help instead.

Now imagine that he's interested in finishing his game, not the intricacies of raycasting algorithms.

Idk, depends on the situation. Is he a student trying to show stuff on a resume? Is he a professional trying to sell a product? Is he a researcher trying to report findings? A startup trying to land a pitch?

The value isn't objective and very much depends on end goals. People seem to trot out "make games, not engines" without realizing that engine programmers still exist.

It was just a small personal test of skill with no purpose or stakes. It wasn't even really made with the intent of becoming a real game, just a slice of something that resembled a game, to see how far I could get without help. Then, once I got as far as I could, I'd research and see how I could do better.