Damned if you do, damned if you don’t. I think Wikipedia would do well to publish a set of public, high-level rules for an open model to follow. The model would write each article from all publicly available information, which would let articles feature every perspective on an issue and avoid lying by omission. Articles would then be overviews of a topic rather than appearing biased toward a particular set of talking points and coverage. A summary is far more approachable and benefits people who want to learn about a topic broadly rather than those seeking confirmation. I think the end result would be that people were equally happy (or unhappy) with Wikipedia, because the rules would be applied to every article equally; it would be the place to go when users don’t know what to prompt, while apps like Grok/ChatGPT are the resources people use when they already have a question prepared. I agree with Jimmy’s view that Wikipedia is not a place to adjudicate disagreements.

> Wikipedia would do really well for itself if it instead created a set of public high level rules for an open model to follow

This is literally every LLM that quotes Wikipedia.

The value in Wikipedia is that it’s curated. A model is the opposite of that.

As for the topic at hand, it seems nobody agrees on what genocide means anymore, few are willing to accept that there is legitimate disagreement, and everyone has a unique definition they’re loudly committed to, all of which makes the entire debate self-obsessed.

I don’t think curation is the answer. If Wikipedia were based on rules, and if fundamental articles were dependencies of more complex downstream articles, I think people would have more respect for the site. Curation invites unintentional omission of information, which people may suspect is intentional. If a Wikipedia model first defined the rules for a genocide article, and then screened events suspected to be genocides against that article, a more uniform interpretation of genocide across the entire site would be possible. I think the goal for Wikipedia is to avoid inconsistency: to cover every viewpoint on a topic with rationale, and to do so truthfully with supporting references.
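To make the "fundamental articles as dependencies" idea concrete: it amounts to treating articles as a directed acyclic graph and reviewing them in topological order, so a definitional article is settled before any article that depends on it. This is a toy sketch of that idea, not any real Wikipedia mechanism; the article names and the `review_order` helper are invented for illustration.

```python
from collections import deque

# Hypothetical dependency graph: each article lists the articles it
# depends on. A definitional article has no dependencies; case studies
# depend on it. (Names are invented for illustration.)
deps = {
    "Genocide (definition)": [],
    "Case study A": ["Genocide (definition)"],
    "Case study B": ["Genocide (definition)", "Case study A"],
}

def review_order(deps):
    """Kahn's algorithm: emit each article only after all of its
    dependencies have been emitted."""
    indegree = {article: len(d) for article, d in deps.items()}
    dependents = {article: [] for article in deps}
    for article, ds in deps.items():
        for d in ds:
            dependents[d].append(article)
    queue = deque(a for a, n in indegree.items() if n == 0)
    order = []
    while queue:
        article = queue.popleft()
        order.append(article)
        for nxt in dependents[article]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order

print(review_order(deps))
```

The point is only that such an ordering forces the definitional debate to happen once, upstream, instead of being re-litigated in every downstream article.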

An issue not brought up is that LLMs are not deterministic enough to follow rules; it would be nice if we had a perfect robot that could do all of this, so that we could then determine rules for it to follow. But it took only prompt tampering for Grok to start talking about MechaHitler, and I'm fairly sure that wasn't entirely planned. Inconsistency is almost to be expected from LLMs.
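A toy illustration of one source of that inconsistency, assuming nothing about any particular model: even with identical input, temperature sampling over the same next-token distribution can pick different tokens on different runs, while greedy decoding is deterministic. The logits and token names here are invented.

```python
import math
import random

# Invented next-token logits for a single decoding step.
logits = {"yes": 2.0, "no": 1.8, "maybe": 1.5}

def sample(logits, temperature, rng):
    """Softmax-with-temperature sampling over a token -> logit dict."""
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

# Greedy decoding: deterministic, always the highest-logit token ("yes").
greedy = max(logits, key=logits.get)

# Temperature sampling: the same logits yield different tokens across seeds.
samples = {sample(logits, 1.0, random.Random(seed)) for seed in range(50)}
print(greedy, samples)
```

So a "rules for the model" scheme has to contend with the decoding strategy itself, not just the prompt.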

> if Wikipedia were based on rules and if fundamental articles were dependencies of more complex downstream articles I think people would have more respect for the site

These structured sources of truth have been tried. They don’t work. Natural language allows for ambiguity where necessary in a way code does not.

> If a Wikipedia model first defined rules for a genocide article

It would be worthless, and futile. When the world’s governments can’t agree on what genocide is, do you think a unilateral editorial decision at Wikipedia will settle it?

> the goal for Wikipedia is to avoid inconsistency

It’s a goal, but certainly not the goal. Truth isn’t a mathematical schema, particularly when it comes to social constructs like genocide.

I don’t think you’re entertaining the idea sufficiently, considering you’ve stated that it’s a worthless and futile idea. I think it’s a worthwhile and valuable one: rules-derived articles with logical dependencies could hold a mirror to our own biases. I think truth should be logically derived, and I don’t want people to be hostile to the outcomes, since we’re approaching a future in which technology will be able to do this.

> don’t think you’re entertaining the idea sufficiently considering you’ve stated that it’s a worthless and futile idea

It’s worthless and futile for this problem.

It could be useful, but as a complement to Wikipedia, and not for adjudicating something like the definition of genocide.

> should be logically derived

Not really an option for social constructs, which rely on consensus more than logical consistency. You could build an LLM that logically derives an answer from a definition, but that’s a semantic punt with extra steps (unless the LLM commands armed forces).
