AI and Recruiting

AI can significantly lighten the workload of many people throughout an organization, but caution is needed. All processes in an organization need to be checked or verified, and AI is no different. Examples include the following: 

  • When evaluating resumes, AI output must be checked for stereotyping. A study by the British firm Rippl found that AI generated stereotyped images (most often by gender) for a variety of occupations, with only one (teacher) showing both male and female images. The same study showed that race and age also played a role in the machine’s decision-making. 

  • AI is subject to “hallucination”: generating responses that are grammatically correct but factually wrong, such as instructing people to eat poisonous mushrooms. 

To avoid what could be costly ramifications, governance of any AI program being used should include the following: 

  • Constant monitoring to ensure the system is not producing inaccurate – or even dangerous – results; 

  • Written policies and procedures, just as with all other important business activities, to ensure that the program doesn’t go off “half-cocked” and jeopardize the organization legally or financially; 

  • Assignment of an individual with authority to correct any bias or other errors found in the AI system to protect the company’s accountability; and 

  • Checking with legal counsel to analyze potential risks, review contracts/agreements, and otherwise ensure that the organization is protected. 

The American Bar Association has issued a formal opinion on how attorneys should approach clients’ use of generative AI tools. According to the ABA, attorneys need to be able to address the following: 

  • Competence: having at least some knowledge of the purpose of the system(s) involved; 

  • Confidentiality: protecting client information; 

  • Communication: both with and by the client regarding the use of an AI system; 

  • Meritorious claims and candor toward the tribunal: honesty in court and not pursuing frivolous claims; and 

  • Supervisory responsibilities: ensuring clear policies are provided to all members of a client’s organization who need to know. 

  • (The ABA also provides guidance about fees, including not charging clients for time spent learning about AI.) 

Finally, a federal court in California has declined to fully dismiss a lawsuit against software vendor Workday, brought over allegedly discriminatory results from its AI recruiting software. Adding weight to the case, the EEOC has filed an amicus brief on behalf of the plaintiff, on the basis that discrimination is discrimination, regardless of the underlying cause. The court held that Workday, through its software, acted as an agent of the employer as defined by the relevant legislation. Conversely, the allegation that Workday was acting as an employment agency was dismissed. A separate motion by Workday to dismiss a disparate impact claim was also denied. 

The plaintiff, an African American male, had allegedly applied to over 100 job postings, many of which were processed through the Workday system. Despite a solid educational and professional background, he was denied employment at every one of them, hence the lawsuit, which is ongoing. 

VBS can assist you with your policies.  
