Yes, they do care about it, and unlike many AI researchers they've bothered to learn something about philosophy of mind. They point out that "the philosophical question of machine consciousness is complex and contested, and different theories of consciousness would interpret our findings very differently. Some philosophical frameworks place great importance on introspection as a component of consciousness, while others don't." That is one reason they're careful to note that these experiments don't resolve the issue.
They go further on their model welfare page: "There's no scientific consensus on whether current or future AI systems could be conscious, or could have experiences that deserve consideration. There's no scientific consensus on how to even approach these questions or make progress on them."