If this had happened in the US or Europe, it would be an interesting story. In South Africa it's just par for the course, and the quality of the work might not have been any better had it been written by the people currently staffing Home Affairs.

Currently it's happening in reverse in Denmark. People submit complaints to the municipalities about various things, and increasingly those complaints are written with the help of AI (~20%). These cases take up a ton of time because they are so difficult to process, referencing rules and regulations that don't exist, mixed in with some that do. These AI-written complaints are typically far more complex, and ten times the length of a human-written one.

Add to that the insult that these two officials have no doubt been suspended on full pay and benefits while a year-long investigation takes place at great expense to the taxpayer. After which they are moved to a different government department as “punishment”.

> Moving forward, the department will also design and implement AI checks and declarations as part of its internal approval processes

Read it first?

Nice seeing an article from SA here :) unfortunately, this surprises none of us.

Something from my country. Surprised they didn't get a promotion.

Why does this page want to know my precise location?

This is the tip of the iceberg. For example, the South African government included AI hallucinations in drafting its own AI policy: https://mybroadband.co.za/news/ai/644001-south-african-exper... . Imagine the AI slop in other documents, including classified ones, financial calculations, etc.

I would be totally unsurprised if corrupt politicians in developing countries started using AI extensively for basic governance.

What will be interesting is seeing who does a better job: the corrupt politician by themselves, or the AI they outsource their job to.

Wait until you find out about IBM and Fanta!

Yes, I'm aware. But the point is not just that some businesses survived being Nazi-friendly - Hugo Boss is another example. The point is it's more unusual that a propaganda outlet whose entire purpose was to promote an evil regime survives that regime.

It would seem that a newspaper being pro-regime is more of a survival strategy than anything else. Plenty of papers survived the Nazi era and exist today. Nothing unique to be seen here.

The entire world sells products to encourage you to do your work with AI assistance.

But god forbid that there should be any evidence of that in your... work. You'll be suspended or fired.

Holy god, it looks like someone used AI and was a bit sloppy in their editing!!!! YOU'RE FIRED!

Maybe someday when there's been enough such reports people will shrug like they do about security breaches now.

I don't know if it's "evidence of AI" so much as "evidence of laziness causing extreme public embarrassment".

Every good AI policy is basically:

1. You may use <supported LLM with enterprise data agreement>

2. You are still responsible for the quality of your output; customer-facing embarrassment is your fault and will not be attributed to the technology.

In this case, the LLM was used to generate a reference table.

> “It seems that these references were generated and attached to the document after the fact, as they are not cited in the body of the text.”

It's just a retrospective justification for content they had already written. That's not lazy editing; it implies a complete lack of research, while fraudulently trying to imply the research was done.
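
A minimal sketch of the kind of automated check that would flag this, assuming bracketed numeric citations like [1] (my assumption; the article doesn't say what citation format the report used):

    import re

    def uncited_references(body: str, references: list[str]) -> list[str]:
        # Collect citation numbers like [1], [2] that appear in the body text.
        cited = {int(n) for n in re.findall(r"\[(\d+)\]", body)}
        # A numbered reference that is never cited in the body is suspect:
        # it may have been bolted on after the fact.
        return [ref for i, ref in enumerate(references, start=1)
                if i not in cited]

If every entry in the reference table comes back from a check like this, you have exactly the "attached after the fact" pattern the department describes.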

These suspensions send the appropriate message. This isn't the same thing as poorly reviewed marketing copy, hallucinations in government policy papers are unacceptable.

These people are employed to serve the public and are paid by public funds. This is a socially critical job which affects people's entire lives, and in South Africa possibly their personal safety. This isn't just another corporation that needs to make the line go up.

The wording of the article suggests that large parts of the documents were false and should have been caught by review, for which these two director-level people were responsible. This seems to be more than editing that was just "a bit sloppy".

I suggest if you were an immigrant whose citizenship application was denied based on an AI hallucination, forcing you to uproot and move your family out of the country against your will, you would not appreciate that and would take a different view.

Wow, way to argue maliciously.

The only reason any AI usage is rejected in this scenario is due to errors.

Human error is one thing, but if a human uses AI and does not verify its output and then publishes it as some sort of authoritative work, you are pushing deep past ethical issues and often into legal issues.

Government word is law, so government employees posting bad information from AI when it's their job to post good information is practically a crime in and of itself.

Yes, humans can also publish information by mistake, but there's a massive difference between a human getting some numbers wrong vs. AI completely inventing citations.

My megacorp recently published their first AI usage policy: more or less, go nuts using AI, but you will be 100% held accountable for reviewing the output to be acceptable, with consequences up to and including termination.

> Maybe someday when there's been enough such reports people will shrug like they do about security breaches now.

Yes, there's a real danger that this becomes a wholesale downward shift for society. We stop objecting to errors and mediocrity because they've become so normalized.

God forbid people actually have to do work and fact-check the hallucination machines!

You're correct - whether you keep your job depends on how well you conceal that you used AI.

I don't think most people care if you used AI or not, as long as it's correct. AI or no AI, incorrect and false stuff makes people tired of you.

People who are paying even a slight bit of attention understand and anticipate the correlation between AI and slop/hallucination. There's a reason those terms have emerged. And there aren't corresponding terms for AI success/quality.

Yes, but at the same time: "God forbid managers and executives actually permit people enough time to do the work and fact-check the hallucination machines." Especially in contexts where they are also mandating that staff find ways to use the hallucination machines.

Much like industrial accidents, some portion of blame has to go to the system, rather than any individual.