If a company wants to stay in business, the legal risk of AI bots firing people is probably not worth the cost savings. Until that changes, I don't think there's much to discuss, but that may not be long given the way things are going.
I don't think AI bots will make unilateral decisions about firing, the same way a line manager typically won't. E.g. a manager will typically make a recommendation to a director and/or HR, who will then review and approve the final action. A smart director should investigate themselves and might reassign the IC to another manager, or in rare cases fire the manager instead (e.g. if they suspect retaliation).
Don't know about HR AI-bots firing workers. But just last week there was a news segment about HR AI-bots doing the first-level interviewing via video calls. Which is intriguing, since there have been some reports previously about job applicants using AI to enhance their interviews, especially for remote jobs. Could lead to an AI-vs-AI showdown.
Of course, I take such reports with a grain of salt, because I often wonder whether such news items are self-serving product promotions in disguise.
They don't have to admit that the decision was made by AI. One low-wage worker in HR can be the spokesperson who sells it as his decision.
So the thing about people who enforce laws is, they're not _completely_ stupid. Dodgy companies will pretty much always attempt to hide the fact that they're breaking the law; nothing new there.
No law specifically requires more than a coin flip, but firing people because some rando in HR "decided", without any justification or documentation of cause, can be risky.