Companies should consider the likelihood of bias in AI hiring tools

  • 65% of small business owners and HR leaders now use AI, with 53% planning to increase investment.
  • AI enhances hiring through automation, candidate targeting, and predictive analytics.
  • Legal experts urge bias audits, human oversight, and compliance with evolving state laws.

The adoption of artificial intelligence (AI) tools in the human resources (HR) sphere continues to grow. A recent Paychex survey found that 65% of small business owners and HR leaders use AI at work, with 61% using it daily and the majority (53%) planning to invest in AI for HR functions this year.

“When it comes to HR, one of the areas that has seen the biggest transformation is in talent acquisition,” said Alison Stevens, senior director of HR Solutions at Paychex. “The newest tools are fast, powerful, and adapt to hiring managers’ needs, which is critical in today’s labor market.”

When it comes to talent acquisition specifically, AI has many benefits, such as streamlining hiring by automating repetitive tasks and optimizing processes, Stevens said.

“It can also assist with writing compelling job descriptions, targeting the right candidates through smart algorithms, screening resumes efficiently, and even scheduling interviews,” she said. “AI tools can also analyze data to predict the best times and platforms to post jobs, ensuring you attract top talent quickly and effectively.”

AI can also drive efficiencies in HR processes, saving HR professionals precious time and allowing them to focus on more strategic work, Stevens said.

“One of the areas we are uniquely focused on at Paychex is how to deliver actionable guidance and answers to our clients who have fundamental HR questions,” she said. “In fact, we’re finalizing a Gen-AI tool for our HR Business Partners (HRBPs). It’s a proprietary intelligence engine built by Paychex data scientists and developed based on extensive Paychex data of frequently asked HR questions from hundreds of thousands of client interactions and the expertise of Paychex HRBPs.”

Alison Stevens (left) and Jim Koenig

While Stevens believes AI is one of the most groundbreaking technological developments of our time and, when used correctly, can help businesses everywhere find the right talent, there are important considerations to keep in mind.

“Using AI in the recruiting process could potentially introduce bias based on the data sets they are trained on,” said Stevens, who notes it’s crucial to audit data regularly and ensure that candidates are notified when AI is used in a company’s hiring processes.

She also points out that any content generated by AI using external sources should be reviewed, audited and, when considered sensitive, secured to protect privacy.

“When using AI to generate content to support your recruiting process, be sure to confirm relevancy specific to your business’s needs and aligned with your culture and brand,” Stevens said. “Most importantly, AI cannot replace human evaluation to ensure candidates meet certain qualifications requiring empathy and leadership competencies to name a few.”

Jim Koenig is an attorney who is a partner and co-leader of the Privacy + Cyber Practice at Troutman Pepper Locke LLP, a national law firm with attorneys located in 23 U.S. cities, including Rochester.

“HR is really one of the first places that companies can use and achieve benefits from AI, machine learning, and other related technologies right away,” said Koenig. He points out, though, that it’s a good idea to reach out to counsel before using or adding AI applications, to gain an understanding of the data and jurisdictions involved and to help pinpoint legal obligations and required practices.

There are several key principles that companies should consider with the use of AI in hiring — including bias and discrimination — that are now being baked into various existing and emerging AI or AI-adjacent laws and guidelines.

“For example, the EEOC [U.S. Equal Employment Opportunity Commission] has guidance from 2022 that makes sure that individuals who fall under the Americans with Disabilities Act don’t get negatively impacted with the use of AI tools,” Koenig said.

He also encourages companies to be familiar with Colorado’s AI Act — which regulates high-risk AI systems; the Illinois AI Video Interview Act, which in part requires employers to notify candidates about AI use in video interviews; and the Utah AI Policy Act, which imposes transparency obligations on companies using Generative AI.

While Koenig says it’s always possible that there will be a federal movement around AI regulation along the lines of the European Union’s General Data Protection Regulation and AI Act, he considers it unlikely for now, which is why so many states and municipalities have taken up the responsibility of passing laws to protect their residents from bias and discrimination in hiring.

“New York City was the first to actually have an AI law for hiring and the use of tools to make sure you prevent bias,” said Koenig, pointing to New York City’s Local Law 144, which, in part, prohibits employers from using AI hiring tools unless they’ve undergone a bias audit within the past year and requires that candidates be notified about AI use in the hiring process.

Koenig, who co-authored Troutman Pepper’s 2024 paper “AI and HR: Navigating Legal Challenges in Recruiting and Hiring,” shares the following best practices from the piece for companies to follow before implementing AI in their hiring processes:

  • Develop data maps and inventories.
  • Create and complete an AI Data Protection Impact Assessment (DPIA) or other impact assessments before adopting and implementing any AI hiring tool.
  • Form an AI governance committee/structure.
  • Update candidate notifications about AI use in hiring.
  • Ensure human oversight in final hiring decisions; avoid fully automated decisions.
  • Provide options for candidates to opt out of AI-driven assessments and request explanations for AI-driven hiring decisions.
  • Review vendor contracts to limit or eliminate their independent rights to benchmark or train their algorithms with your data.
  • Regularly train HR staff on AI tool use and potential bias issues.
  • Conduct bias audits of AI hiring tools (a simple illustration of the underlying selection-rate math appears after this list).
  • Develop clear data retention and deletion policies for candidate information, whether candidates are evaluated by AI or by traditional means.
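
Several of these steps, most directly the bias audits, come down to a straightforward selection-rate comparison: how often candidates from each demographic category clear an AI screening step, and how each category’s rate compares with the most-selected category’s rate. The Python sketch below illustrates that impact-ratio calculation, the metric at the center of audits under New York City’s Local Law 144. The sample data and the 0.8 flag threshold (the EEOC’s informal “four-fifths” rule of thumb) are illustrative assumptions only; a real audit would be scoped with counsel and an independent auditor.

# Minimal sketch of an impact-ratio check, the kind of selection-rate
# comparison at the center of AI hiring bias audits. Sample data and the
# 0.8 flag threshold (EEOC "four-fifths" rule of thumb) are illustrative
# assumptions, not legal advice or a complete audit.

from collections import defaultdict

# Each record: (demographic category, whether the AI tool selected the candidate)
screening_results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def impact_ratios(results, flag_threshold=0.8):
    """Compute per-category selection rates and impact ratios.

    The impact ratio is each category's selection rate divided by the
    selection rate of the most-selected category; ratios below the
    threshold are flagged for human review.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for category, was_selected in results:
        total[category] += 1
        selected[category] += int(was_selected)

    rates = {c: selected[c] / total[c] for c in total}
    best_rate = max(rates.values())
    report = {}
    for category, rate in rates.items():
        ratio = rate / best_rate if best_rate else 0.0
        report[category] = {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(ratio, 3),
            "flagged": ratio < flag_threshold,
        }
    return report

if __name__ == "__main__":
    for category, stats in impact_ratios(screening_results).items():
        print(category, stats)

On the sample data, group_b is selected at a rate of 0.25 versus 0.75 for group_a, giving an impact ratio of roughly 0.33, well below the 0.8 threshold. That is exactly the kind of result that should trigger human review of the tool and its training data rather than continued automated use.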
