Thanks! We describe this as a knowledge graph because that's how we think about the structure in the data, and it's where we want to go.
Right now, we've focused on normalizing several key entities (e.g. organizations, including parent/subsidiary relations; technologies; people; and job functions), capturing the relations between them as well as additional useful metadata like location and industry.
On the backend, this is currently implemented as structured relational tables for query performance and simplicity (e.g. count all teams mentioning PyTorch in job posts, rolling up across parent/subsidiary relationships, sorted by the biggest organizations descending).
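To make that concrete, a rollup query like that might look roughly like this (table and column names are invented for illustration, not our actual schema):

    -- Hypothetical schema: organizations(id, name, parent_id),
    -- job_posts(id, org_id), job_post_technologies(job_post_id, technology)
    WITH RECURSIVE org_roots AS (
        -- map every org to its ultimate parent
        SELECT id, id AS root_id
        FROM organizations
        WHERE parent_id IS NULL
      UNION ALL
        SELECT o.id, r.root_id
        FROM organizations o
        JOIN org_roots r ON o.parent_id = r.id
    )
    SELECT org.name, COUNT(*) AS pytorch_mentions
    FROM job_post_technologies jpt
    JOIN job_posts jp       ON jp.id = jpt.job_post_id
    JOIN org_roots r        ON r.id = jp.org_id
    JOIN organizations org  ON org.id = r.root_id
    WHERE jpt.technology = 'pytorch'
    GROUP BY org.name
    ORDER BY pytorch_mentions DESC;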
Future direction here is TBD as we expand the sources we cover and the types of queries that can be computed across them.
There have been a lot of attempts at building high-quality public knowledge graphs that haven't hit escape velocity.
We're focusing on a structured, commercially relevant subset of the problem as a starting point, to generate the critical mass of usage and funding that will enable us to build the bigger vision: a highly structured, up-to-date, and trusted repository of facts about the world that is easy to browse, query, and integrate programmatically into all the relevant workflows (including grounding LLMs).
Appreciate you sharing the vision. Having worked in this space for a while, IMO the biggest challenges for a public-facing graph are 1. entity linking from an NL query to graph queries, or in your case relational queries (think multiple similarly named teams in Microsoft), and 2. relevance of results for more complex queries. I like your approach of having a dropdown of filter tags, which eliminates 1, but that will be harder to scale in a graph of everything.
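To illustrate 1: naive name matching alone hands you a pile of candidates to disambiguate. A toy sketch, assuming Postgres with the pg_trgm extension and a made-up schema:

    -- Invented schema: teams(id, name, org_id), organizations(id, name)
    -- similarity() comes from the pg_trgm extension
    SELECT t.id, t.name, o.name AS org
    FROM teams t
    JOIN organizations o ON o.id = t.org_id
    WHERE similarity(t.name, 'AI Platform') > 0.4
    ORDER BY similarity(t.name, 'AI Platform') DESC;
    -- likely returns several near-identical "AI Platform" teams under
    -- Microsoft; picking the one the user actually meant is the hard part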