
AI in HR: Biased but Beneficial

By Ben Opp, SPHR, HR Hotline Advisor
Published September 19, 2023

Did you catch last month’s conversation about artificial intelligence between our own Mary Lynn Fayoumi and Kathi Enderes on our Straight from the Source webinar series? Here are a few of their fantastic insights:

  • If you’re worried about artificial intelligence taking your job, remember that AI has been around since the 1950s. Humans haven’t been replaced yet, and won’t be anytime soon. But even if AI isn’t coming for your job, someone skilled in using AI probably is! Generative AI can boost productivity by automating repetitive work, spurring creativity, and enabling higher-level work to get done faster. Many jobs will need to be redesigned to incorporate the efficiency gains of AI use.
  • Generative AI systems like ChatGPT should not be treated as reliable sources of information – their factual knowledge is incomplete, and they can’t be trusted to interpret and apply that knowledge accurately to real-life situations. Anything ChatGPT says needs to be thoroughly vetted and fact-checked. HR practitioners shouldn’t rely on ChatGPT to analyze employee leave requests, answer compliance questions, or provide legal advice. (That’s what the HR Hotline is for!)
  • ChatGPT is best used today not as a competitor to humans, but as a collaborative partner – an enhancement or augmentation of the user’s own creativity. You might use it to create a starting point for job descriptions, help you brainstorm a catchy name for your new video series, or summarize a set of written comments from an employee opinion survey. Even in these applications, though, the output must be vetted by a human!

Mary Lynn and Kathi also discussed the discriminatory potential of the large models upon which generative AI tools are built. Consider an experiment conducted by filmmaker and disability advocate Jeremy Andrew Davis: Jeremy asked MidJourney, an AI tool that generates images from text prompts, to repeatedly render images of “an autistic person.” All 148 of the resulting images depicted a young white male appearing pale, somber, and sickly – zero diversity in gender, race, or emotional affect.

A rousing LinkedIn debate ensued, centering on whether the AI itself is biased (racist, ageist, ableist, etc.) or is simply drawing on the biased depictions of autism found within its vast training dataset, and on who, if anyone, is responsible for addressing these biases. Regardless, it’s clear that generative AI tools can and will produce biased content, and users must take responsibility for mitigating that bias. Writing prompts that intentionally prioritize diversity – for example, specifying a range of ages, genders, and ethnicities – is one way to push the system’s output closer to societal reality.

Keep in mind that New York City has already passed legislation (Local Law 144) requiring bias audits for AI tools used in employment decisions, signaling that similar requirements may be coming in other jurisdictions as well.

Looking to dig deeper into applications of AI in HR practice? You can watch the recording of Kathi and Mary Lynn’s discussion, or, better yet, join us on November 2nd at our annual Employment Law Conference to hear more from Ben Eubanks, the world’s leading expert on AI applications in human resources.

Are you using AI tools in your HR practice today? Join the discussion and share your experiences in the All Members Community on the HR Exchange!