AI is all the rage, but even as companies have rushed to adopt it, laws have been slow to adapt. When it comes to hiring laws and worker protections, regulation moves at a crawl. So, what can companies do to avoid being caught in the crossfire between innovation and humanity?
Read what some of the top publications have to say about this.
The tech industry and the world have hailed AI as the ultimate timesaver, helping companies save time and improve productivity. Many companies jumped right onto the bandwagon, only to face public backlash and even legal repercussions. The recruiting industry is one of them, and as Jasmine Williams discusses in her article for VidCruiter, it has begun facing regulations around hiring and AI.
So, how is AI used in hiring? Many companies use it to screen candidates, schedule interviews, communicate with candidates, and even conduct interviews. The thing is, using algorithms isn't a new phenomenon in this industry: applicant tracking systems have been used for decades to streamline processes. But with the advent of AI, the ethical concerns being raised have brought these tools under legal scrutiny.
“AI technology has the potential to streamline recruiting and boost efficiency.”
So, is it ethical to use AI in hiring decisions? As with anything, there are many risks to keep in mind, as one cannot blindly trust a machine to make decisions. AI risks are a particular sore point, as AI and large language models (LLMs) are at risk of “violating existing regulations in new and often unexpected ways, infringing on established rights, and having a variety of negative social impacts.”
A company planning to use AI in recruitment must weigh several considerations. These are as follows:
Jennifer Lada, D'Andre Chapman, and Laura Askinazi wrote for Holland & Knight about the trials and tribulations AI hiring regulations have faced in the last few months. The political climate has shifted to the point that federal rules have been rolled back and reinstated, with local lawmakers stepping in to give businesses guidance on compliance.
“Though federal guidance on AI use in the workplace appears to have been revoked (or removed from agency websites), employers are still required to comply with current federal, state, and local laws when implementing AI.”
So, if you're looking for AI hiring regulations in the US, look to your state and local governments. Several states have enacted their own responses to AI in employment to protect job seekers from discrimination. These are as follows:
Now, this doesn't mean that Title VII of the Civil Rights Act of 1964 has been revoked. The act prohibits intentional and unintentional employment discrimination based on race, color, religion, sex, and national origin. Employers, as always, must comply with it; if they're using AI, they must ensure it doesn't run into the ethical issues listed in the previous article. In other words, AI may be a time-saver, but it needs supervision and careful attention to avoid violating federal, state, and local protections.
Workday Inc. is currently facing a lawsuit over the AI in its recruitment software, which could set a precedent for companies being found liable for AI-driven decisions. It could also open the door for developers to be held accountable for their creations.
Alonzo Martinez writes for Forbes all about the new Californian laws regulating artificial intelligence and automated decision-making systems (ADS) in the workplace. California is known as a maverick, constantly adapting to new technologies to protect its inhabitants. Whether other states will use this legislation to regulate AI is up in the air, but for the moment, here’s what it entails for the Golden State.
“The rules don’t necessarily create new prohibitions, but they frame existing anti-discrimination protections in the context of emerging technologies.”
Defining ADS is essential to this regulation: it is a computational process that makes a decision or facilitates human decision-making regarding an employment benefit, including systems that use AI, machine learning, algorithms, statistics, or similar data processing techniques. This means that resume screening, applicant ranking, targeted job advertising, facial-expression analysis, and third-party data evaluation all fall under the regulation.
With this in mind, any agent or employment agency acting on behalf of an employer falls under California's Fair Employment and Housing Act (FEHA). This means that third-party vendors handling recruitment, promotion, pay decisions, or other personnel functions could be held liable under the regulation.
Another point is that employers cannot inquire into criminal records until after making a conditional offer of employment, reinforcing California's Fair Chance Act. Employers must also keep records for at least four years. Though the regulation doesn't mandate anti-bias testing, having records can help companies defend themselves against discrimination claims.
So, all in all, what should employers do?
Lawmakers are cracking down on AI and ADS technology in hiring because documented practices have violated long-standing laws meant to ensure equity and equality in the process. With federal guidance in flux, local and state authorities have taken matters into their own hands. This means companies must check the regulations of each state where they operate to avoid liability for unlawful activity.