Graph Theory and AI have been forever linked because it turns out the human brain is ridiculously good at finding a path through a graph that is somewhere in the neighborhood of 97% optimal, and we can do it in minutes while a computer would take weeks or until the heat death of the universe to do it better.

It's vexatious how good we are at it, and it's exactly the sort of problem that Science likes. We know it's true, but we can't reproduce it outside of the test subject(s). So it's a constant siren song to try to figure out how the fuck we do that and write a program that does it faster or more reliably.

Traveling Salesman was the last hard problem I picked up solely to stretch my brain and I probably understand the relationship between TSP and linear programming about as well as a Seahawks fan understands what it is to be a quarterback. I can see the bits and enjoy the results but fuck that looks intimidating.

standard algorithms for single-source/single-dest pathfinding (Dijkstra with a binary heap, A*) scale log-linearly in the size of the graph, so compared to other combinatorial optimisation problems, optimal pathfinding is incredibly easy for computers & scales pretty well to industrial-sized problems. computers can also do optimal pathfinding for problems that humans would not be able to solve easily (because the graphs don't easily embed in 2d or 3d, say, so we can't bring our vision systems to bear)
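e.g. here's a textbook Dijkstra sketch in python (toy graph and names are mine, just to show where the log factor in the scaling comes from -- the binary heap):

```python
import heapq

def dijkstra(graph, source, dest):
    """Shortest path cost from source to dest.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    With a binary heap this runs in O((V + E) log V) -- the
    log-linear scaling mentioned above.
    """
    dist = {source: 0}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node == dest:
            return d
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")  # dest unreachable

# tiny example: going a -> b -> d beats the direct a -> d edge
g = {
    "a": [("b", 1), ("d", 10)],
    "b": [("d", 2), ("c", 5)],
    "c": [("d", 1)],
    "d": [],
}
print(dijkstra(g, "a", "d"))  # -> 3
```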

other combinatorial optimisation problems - like the traveling salesman you mention - are much harder than pathfinding to solve optimally or even approximately
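to make "much harder" concrete: exact TSP by brute force checks (n-1)! tours, so even a toy sketch like this one (distance matrix made up for illustration) stops being feasible somewhere around a dozen cities:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP: try every tour starting/ending at city 0.

    dist is a symmetric distance matrix. This enumerates (n-1)!
    permutations, which is hopeless beyond small n -- contrast
    with pathfinding's near-linear scaling.
    """
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

# 4 cities: only 3! = 6 tours to check
d = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(tsp_brute_force(d))  # -> 23
```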

>while a computer would take weeks or until the heat death of the universe to do it better.

I don't buy this. Approximation algorithms are an entire field of CS; if you're OK with an approximate solution, computers can get one quickly as well.
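For example, a throwaway nearest-neighbor + 2-opt sketch (my own toy version, not anything tuned) finds a decent tour on random points in well under a second, and 2-opt is known to land within a few percent of optimal on typical Euclidean instances:

```python
import math
import random

def tour_length(pts, tour):
    """Total length of the closed tour through pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts):
    """Greedy construction: always hop to the closest unvisited city."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    """Local search: keep reversing segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(pts, candidate) < tour_length(pts, tour):
                    tour, improved = candidate, True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
t0 = nearest_neighbor(pts)
t1 = two_opt(pts, t0)
print(tour_length(pts, t0), "->", tour_length(pts, t1))
```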