As I understand it, this is a dataset of claimed causation. It should contain vaccines->autism, not because it's true, but because someone, in public, claimed that it was.
So, by design, it's pretty useless for finding new, true causes. But maybe it's useful for something else, such as teaching a model what a causal claim is in a deeper sense? Or mapping out causal claims which are related somehow? Or conflicting? Either way, it's about humans, not about ontological truth.
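To make the "claims, not truths" framing concrete, here's a rough sketch of how I'd imagine representing such a dataset. The schema, field names, and example sources are all my own guesses, not anything from the actual dataset; the point is just that each edge records who claimed what, and that conflicting claims about the same pair are a feature, not a bug.

```python
# Sketch only: model the data as claims *about* causation, not causal facts.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class CausalClaim:
    cause: str
    effect: str
    asserted: bool   # True = "X causes Y", False = "X does not cause Y"
    source: str      # who made the claim, nothing about whether it's true

claims = [
    CausalClaim("vaccines", "autism", True,  "a retracted 1998 paper"),
    CausalClaim("vaccines", "autism", False, "a later meta-analysis"),
    CausalClaim("influenza virus", "influenza", True, "a textbook"),  # definitional, not really causal
]

def conflicting_pairs(claims):
    """Group claims by (cause, effect) and keep pairs where sources disagree."""
    by_edge = defaultdict(list)
    for c in claims:
        by_edge[(c.cause, c.effect)].append(c)
    return {edge: group for edge, group in by_edge.items()
            if len({c.asserted for c in group}) > 1}

for (cause, effect), group in conflicting_pairs(claims).items():
    print(f"conflict on {cause} -> {effect}:")
    for c in group:
        print(f"  {c.source} {'claims' if c.asserted else 'denies'} it")
```

Something like this would be useless as a causal model of the world, but it could be a reasonable map of what people argue about.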
Also, it seems to mistake some definitions for causes.
A coronavirus isn't "claimed" to cause SARS. Rather, SARS is the name given to the disease caused by a certain coronavirus. Or, alternatively, SARS-CoV-1 is the name given to the virus that causes SARS. Whichever way you want to see it.
For a more obvious example, saying "influenza virus causes influenza" is a tautology, not a causal relationship. If the influenza virus didn't cause influenza, there would be no such thing as an influenza virus.
Yes, I agree there are a lot of definitions or descriptions masquerading as explanations, especially in medicine and psychology. I think insurance may have a lot to do with that. If you just describe a bunch of symptoms, insurance won't know whether to cover them or not. But if you authoritatively name that symptom set "BWZK syndrome" or something, and then quietly switch to treating "BWZK syndrome" as a thing in itself, the unknown cause of the symptoms, then insurance has something it can deal with.
But this description->explanation thing, whatever the reason, is just another error people make. It's not that different from errors like "vaccines cause autism". Any dataset collecting causal claims people make is going to contain a lot of nonsense.