This might be of at least some value for augmenting LLM training? I spent a lot of time in the 1980s and early 1990s using symbolic AI techniques: conceptual dependency, NLP, expert systems, etc. While two large, well-funded expert system projects I worked on (paid for by DARPA and PacBell) worked well, symbolic AI was mostly brittle and required what seemed like an infinite amount of human labor.
LLMs are such a huge improvement that the only real practical use I see for projects like CauseNet, the defunct OpenCyc project, etc. might be as a little extra training data.