jonathanlydall 5 days ago [-]
As a South African, it is well known here (well, amongst the educated) that the vast majority of management level and up government positions are awarded based on who you know rather than what you know.
These people who don't actually do their jobs properly very rarely get dismissed; at worst, and only after a public debacle, they tend to be shuffled off to some other position in a different department that they're equally unqualified for.
This is just one of the many, many, many symptoms of this general problem.
testing22321 5 days ago [-]
I worked for a large North American telco.
The new Director of core network held an all-hands to introduce themselves and lay out some stuff. During the welcome they said, seriously, in front of ~500 IT/Engineering people, “wait, what is IPv6?”
Happens here all the time too.
forshaper 5 days ago [-]
Worked at an ISP. An executive had trouble understanding the difference between WiFi and internet.
mothballed 5 days ago [-]
[flagged]
jaapz 5 days ago [-]
You are asking why people aren't flocking to a town literally founded so white separatists could live in segregation?
mothballed 5 days ago [-]
People flock to America even though it's extremely racist compared to many of the alternatives. Racism isn't the only factor a lot of people are weighing; they have to pick from a lot of suboptimal options in a country with high crime and rolling blackouts, and play whatever cards they have. It does seem like they found a model that is working, on other qualities, better than many other parts of South Africa. I wouldn't write them off just because they're ethnically segregated.
I'd also point out that I used "private towns like", so it could refer to some other, perhaps less racist, private town; I'm just not sure what that is. If you are Afrikaner, it looks like an underutilized option. It would be nice, though, if there were less segregated options as well.
solumunus 4 days ago [-]
> I wouldn't write them off just because they're ethnically segregated.
Amazing.
mothballed 4 days ago [-]
Are you against Indian Reservations or the Hawaiian Homelands? If you find yourself stopping to say "but muh false equivalence" or "but that's different", then realize you're not writing them off just because they're ethnically segregated; you just don't believe the Afrikaners residing there have met the bar at which you'll accept that community.
That's OK. I view segregation as a bad thing as well and, all else equal, would reject a place with it, but it's important to note the difference.
Full disclosure: I'm not Afrikaner, so I'm one of the ones who would be segregated out of their community. I don't have much of a dog in this fight, but I'm surprised more Afrikaners haven't taken advantage of it.
silver_silver 4 days ago [-]
My friend that is what the whites were doing on a national level before the election of 1994. You seem to still be catching up to what they realised 32 years ago.
Semaphor 5 days ago [-]
There are a few non-whites in SA who wouldn't be quite welcome. But I guess that's just *muh racism*.
lgleason 5 days ago [-]
If this happened in the US or Europe it would be an interesting story. In South Africa, this is just par for the course and the quality of the work may not have been any better had it been written by the current people staffing home affairs.
mrweasel 5 days ago [-]
Currently it's happening in reverse in Denmark. People submit complaints to the municipalities about various things, and increasingly those complaints are written with the help of AI (~20%). These cases take up a ton of time, because they are so difficult to process, referencing rules and regulations that don't exist, mixed in with some that do. These AI-written complaints are typically far more complex, and ten times the pages of a human-written one.
abyssin 5 days ago [-]
I'm tempted to say it's fair game, since complexity has often had the advantageous side effect, for governing bodies, of making legitimate complaints impossible for normal citizens to voice.
stroebs 5 days ago [-]
Add the insult that these two officials have no doubt been suspended on full pay and benefits while a year-long investigation takes place at great expense to the taxpayer. After which they are moved to a different government department as “punishment”.
miningape 5 days ago [-]
Nah, don't worry, I'm sure Cyril will set up a commission for this.
rubenvanwyk 5 days ago [-]
Nice seeing an article from SA here :) Unfortunately, this surprises none of us.
orbital-decay 5 days ago [-]
Why does this page want to know my precise location?
dee_s101 5 days ago [-]
This is the tip of the iceberg. For example, the South African government included AI hallucinations in drafting its own AI policy: https://mybroadband.co.za/news/ai/644001-south-african-exper... . Imagine the AI slop in other documents, including those that are classified, financial calculations, etc.
overfeed 5 days ago [-]
You should have read the rest of the article - that incident was mentioned too.
hoektoe 5 days ago [-]
Something from my country. Surprised they didn't get a promotion.
aussieguy1234 5 days ago [-]
It would be totally unsurprising if corrupt politicians in developing countries start using AI extensively for basic governance.
What will be interesting is to see who does a better job: the corrupt politician by themselves, or the AI they outsource their job to.
scuff3d 5 days ago [-]
"Hey use our thing! It's totally going to replace all humans! It's so awesome it can do your job for you!
...
BTW, it hallucinates... Like, all the time... And if you don't fact-check our product, you're responsible. Not us! We stole all your data to train it, we're making billions and billions of dollars off that theft, but you own all the liability"
Ozzie-D 5 days ago [-]
[flagged]
antonvs 5 days ago [-]
[flagged]
antonymoose 5 days ago [-]
Wait until you find out about IBM and Fanta!
antonvs 5 days ago [-]
Yes, I'm aware. But the point is not just that some businesses survived being Nazi-friendly - Hugo Boss is another example. The point is that it's more unusual for a propaganda outlet, whose entire purpose was to promote an evil regime, to survive that regime.
antonymoose 5 days ago [-]
It would seem that a newspaper being pro-regime was more of a survival strategy than anything else. Plenty of papers survived the Nazi era and exist today. Nothing unique to be seen here.
quantified 6 days ago [-]
[flagged]
nelox 5 days ago [-]
> Moving forward, the department will also design and implement AI checks and declarations as part of its internal approval processes
Read it first?
wewewedxfgdf 5 days ago [-]
The entire world sells products to encourage you to do your work with AI assistance.
But god forbid that there should be any evidence of that in your... work. You'll be suspended or fired.
Holy god, it looks like someone used AI and was a bit sloppy in their editing!!!! YOU'RE FIRED!
Maybe someday when there's been enough such reports people will shrug like they do about security breaches now.
protocolture 5 days ago [-]
I don't know if it's "evidence of AI" so much as "evidence of laziness causing extreme public embarrassment".
Every good AI policy is basically:
1. You may use <supported LLM with enterprise data agreement>
2. You are still responsible for the quality of your output; customer-facing embarrassment is your fault and will not be attributed to the technology.
In this case, the LLM was used to generate a reference table.
>“It seems that these references were generated and attached to the document after the fact, as they are not cited in the body of the text.”
Like, it's just a retrospective justification for the content they had written. It's not lazy editing; it implies a complete lack of research, while fraudulently trying to imply the research was completed.
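The "not cited in the body" tell is mechanically checkable. Here's a minimal sketch of that idea, assuming numeric [n] citation markers and an ordered reference list (a simplification I'm inventing for illustration; real policy documents use varied citation styles, and nothing here is from the article):

```python
import re

def find_orphan_references(body: str, references: list[str]) -> list[str]:
    """Return reference-list entries that are never cited in the body text.

    Assumes citations appear as numeric markers like [1], [2] and that
    the reference list is ordered to match those numbers.
    """
    # Collect every [n] marker that actually appears in the body.
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", body)}
    # Any reference whose position never appears as a marker is suspect.
    return [ref for i, ref in enumerate(references, start=1) if i not in cited]

body = "Policy X is effective [1]. Costs remain stable."
refs = [
    "Smith 2020, Migration Policy Review",          # cited as [1]
    "Jones 2021, Border Administration Quarterly",  # never cited
]
print(find_orphan_references(body, refs))  # flags the second reference
```

A document where most of the reference list comes back as "orphans" is a strong hint the list was bolted on after the fact, exactly as the quoted passage describes.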
root_axis 5 days ago [-]
These suspensions send the appropriate message. This isn't the same thing as poorly reviewed marketing copy, hallucinations in government policy papers are unacceptable.
add-sub-mul-div 5 days ago [-]
> Maybe someday when there's been enough such reports people will shrug like they do about security breaches now.
Yes, it's a real danger that it becomes a whole shift downward for society. We stop objecting to errors and mediocrity because they've become so normalized.
suprjami 5 days ago [-]
These people are employed to serve the public and are paid by public funds. This is a socially critical job which affects people's entire lives, and in South Africa possibly their personal safety. This isn't just another corporation that needs to make the line go up.
The wording of the article suggests that large parts of the documents were false and should have been caught by review, for which these two director-level people were responsible. This seems to be more than just editing which was "a bit sloppy".
I suggest if you were an immigrant whose citizenship application was denied based on an AI hallucination, forcing you to uproot and move your family out of the country against your will, you would not appreciate that and would take a different view.
delfinom 5 days ago [-]
Wow, way to argue maliciously.
The only reason any AI usage is rejected in this scenario is due to errors.
Human error is one thing, but if a human uses AI and does not verify its output and then publishes it as some sort of authoritative work, you are pushing deep past ethical issues and often into legal issues.
The government's word is law, so government employees publishing bad information from AI when it's their job to publish good information is practically a crime in and of itself.
Yes, humans can also publish information by mistake, but there's a massive difference between a human getting some numbers wrong vs. AI completely inventing citations.
My megacorp recently published their first AI usage policy; more or less, go nuts using AI, but you will 100% be held accountable for reviewing the output to be acceptable, with consequences up to and including termination.
jonathanlydall 5 days ago [-]
This is just one more facet of the general incompetence of South African government employees; this one hit headlines due to the novelty of having AI as a component of said incompetence.
cwnyth 5 days ago [-]
God forbid people actually have to do work and fact-check the hallucination machines!
wewewedxfgdf 5 days ago [-]
You're correct - whether you keep your job depends on how well you conceal that you used AI.
embedding-shape 5 days ago [-]
I don't think most people care if you used AI or not, as long as it's correct. AI or no AI, incorrect and false stuff makes people tired of you.
add-sub-mul-div 5 days ago [-]
People who are paying even a slight bit of attention understand and anticipate the correlation between AI and slop/hallucination. There's a reason those terms have emerged. And there aren't corresponding terms for AI success/quality.
benjiro3000 5 days ago [-]
[dead]
Terr_ 5 days ago [-]
Yes, but at the same time: "God forbid managers and executives actually permit people enough time to do the work and fact-check the hallucination machines." Especially in contexts where they are also mandating that staff find ways to use the hallucination machines.
Much like industrial accidents, some portion of blame has to go to the system, rather than any individual.
cwnyth 5 days ago [-]
That's not relevant to this particular case, but sure, it's hard to disagree with that general statement.