I’ve rarely, if ever, seen a model fully explain a mathematical answer beyond simple geometry and algebra to what I’d call an adequate level. It gets the right answer more often than it can explain why that answer is correct. For example, it will find the minimizer in an optimization problem, but it can’t justify why that value is minimal among all possibilities.
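To make the distinction concrete, here is a hypothetical example (not from any particular model transcript) of the gap between exhibiting a minimum and proving one. Minimize $f(x) = x + \tfrac{1}{x}$ for $x > 0$. A model might correctly report the minimum $f(1) = 2$, but a full explanation requires an argument that rules out every other candidate, e.g. via AM–GM:

$$
x + \frac{1}{x} \;\geq\; 2\sqrt{x \cdot \frac{1}{x}} \;=\; 2 \quad \text{for all } x > 0,
$$

with equality exactly when $x = \tfrac{1}{x}$, i.e. $x = 1$. Stating "$x = 1$ gives $2$" is the easy half; the inequality over *all* $x > 0$ is the part I rarely see spelled out.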