AI can help HR professionals create job descriptions, assess candidate resumes, generate interview questions, personalize employee onboarding material, and more.
But now that we’ve had time to play with it, some are starting to ask:
How much AI is too much AI?
As the real perks and pitfalls of AI in HR functions come to light, we’re exploring some of the most common ways people pros are using it and sharing the latest best practices for AI on the job.
Where AI shines
AI works best as a tool, not a crutch. And today, 38% of HR professionals are leveraging it to increase their efficiency. How, you ask?
Here are a few of the HR processes AI can help you perform:
- Creating job descriptions
- Highlighting candidate skills
- Matching candidates to job requirements
- Engaging candidates via chatbots
- Drafting interview questions
- Personalizing onboarding material
- Crunching employee performance data
- Generating employee engagement insights
- Generating conversation starters for performance reviews
- Suggesting training programs based on employee skills
But no matter how you’re using AI, you need to understand its limitations before you can harness its full potential. Let’s take a closer look at some of the risks.
Where AI misses
As with any “bleeding edge” technology, AI doesn’t just free up time and solve problems – it also creates them.
“We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft…By failing to understand and react to these threats, any individual or organization runs the risk of falling foul of one of the greatest of all threats posed by AI – failing to exploit its opportunities and by doing so, being left behind by more forward-thinking competitors,” technology advisor Bernard Marr writes for Forbes.
Here are a few of the top risks employers need to be aware of:
- Bias in applicant screening algorithms
- Failing to identify "disparate treatment" and "disparate impact"
- Potential for private data being used for model training
- Risk of amplifying existing biases in company data
So is AI really more trouble than it's worth?
Not necessarily. With a clear set of best practices, you can confidently and responsibly experiment with AI tools until you find the use cases that work for you.
5 best practices for the responsible use of AI
AI is changing the future of work as we know it. But for now, the majority of risks around AI in HR are related to talent acquisition and recruitment.
Let’s take a closer look at when to use AI in HR and when to hold off.
1. Know its limits
While AI can help lighten the load, it’s not a silver bullet. For most recruiters, the best use case for AI technology is pairing it with automation to save time and reduce busywork.
“We see the potential for AI to be a virtual partner across our workstreams and stack of people systems that leverage automated workflows to facilitate new hire communications, scheduling and tasks…AI and automation help us to improve upon the efficiency and regularity of these practices…by doing some of the legwork analysis for us,” Elise Jason, Vice President of People at Strive Health, tells SHRM.
By enlisting the help of AI to automate manual administrative tasks like sourcing, sorting and prescreening applicants in real time, you can spend more time getting to know the candidates most qualified for the role.
HR pro tip: Using AI responsibly also means addressing any employee experience concerns around it. Despite a barrage of headlines about AI stealing jobs, experts predict that AI’s impact on global jobs will be neutral.
2. Prioritize data privacy
As an employer, it’s your responsibility to ensure complete transparency regarding candidate data usage and retention. Many free AI models offer no assurances, meaning your input data could be used for model training.
This has posed a problem for companies of all sizes, even major multinational ones like Samsung. In May 2023, the electronics giant banned ChatGPT after learning that employees had been feeding sensitive documents into the platform.
To protect your company data, HR departments can:
- Address data privacy concerns with employees before onboarding any new tool
- Consider using a paid tool instead of a free one
- Build your own AI, training it on your proprietary data with language models you control
“AI systems, by their nature, process vast amounts of data, including sensitive personal and organizational information. This makes them attractive targets for cyberattacks. Potential security risks include data breaches, unauthorized access to AI models, and manipulating AI algorithms to produce biased or incorrect outputs,” says Metin Kortak, Chief Information Security Officer at Rhymetec.
At the end of the day, it’s up to employers and HR managers to cultivate an environment where humans and AI work together to deliver more value to candidates, customers and internal stakeholders – without putting privacy at risk.
3. Always keep a human behind the wheel
Ever asked ChatGPT a question and gotten a totally off-the-wall answer in response?
One of the biggest risks of using AI is hallucinations. These are instances where an AI produces or includes information that is inaccurate, fabricated, or not grounded in its training data. And it’s a top concern for some 77% of HR leaders using AI-driven business applications.
AI learns and develops language patterns from its training data, generating new content based on those frameworks. If there are gaps or flaws in the training data, the AI will manufacture an answer that feels right based on probability and analogy — rather than fact.
That’s why the human touch is critical. Always take time to verify the accuracy of any content generated by AI before sending it out into the world. And whatever you do, always, always, always apply human oversight.
Your ‘supervised AI’ strategy could involve:
- Designing a review protocol for AI-generated content
- Educating employees on both the benefits of AI and the risks
- Creating an AI task force to train models and monitor output
“Technology has no moral compass,” explains Jacqueline Woods, Chief Marketing Officer at Teradata. “It is critical that when we use extraordinary tools, we take extraordinary measures to make sure we are protecting the people these tools touch.”
Whether those people are your customers or job candidates, protect them by evaluating every AI output with a critical eye.
4. Pay attention to local and international laws
Like the technology itself, AI regulations are constantly evolving.
Currently, about half of US states have AI laws under consideration. In New York City, for example, employers now have to conduct annual bias audits of the AI-powered tools used in their hiring process.
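If you’re wondering what a bias audit actually looks at, the core idea is usually selection rates and impact ratios across demographic groups. Below is a minimal sketch in Python using invented groups and outcomes; it illustrates the common “four-fifths” rule of thumb, not the specific methodology any particular law or auditor requires.

```python
# Minimal sketch of an adverse-impact check (the "four-fifths" rule of thumb)
# on hypothetical screening results. Group names and pass/fail data are
# invented for illustration; a formal bias audit uses real outcomes and
# follows whatever methodology your jurisdiction requires.
from collections import Counter

# (group, advanced_by_ai_screen) pairs -- hypothetical data
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

applicants = Counter(group for group, _ in outcomes)               # applicants per group
advanced = Counter(group for group, passed in outcomes if passed)  # advanced per group

# Selection rate = share of each group's applicants the AI screen advanced
rates = {group: advanced[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "review" if impact_ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```

A group whose impact ratio dips below 0.8 isn’t automatically evidence of unlawful bias, but it’s a signal that the screening step deserves a closer human look.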
As requirements change, stay on top of all local, state, and federal laws to help ensure compliance in your strategy and AI solutions.
“Companies should make efforts to engage with regulators and government bodies in discussions around AI regulation and legislation. As the technology matures, industry bodies and organizations such as trade unions will be involved in drafting and implementing codes of practice, regulations, and standards,” Bernard Marr writes for Forbes.
Here are some good rules of thumb to follow:
- Do ensure you’re lawfully collecting and processing candidates’ personal data with AI.
- Do inform individuals about how and when you’re using AI in your recruitment process.
- Don’t use AI without human oversight when screening candidates.
- Don’t use AI without regular bias audits.
When in doubt, always check with a legal professional to ensure you're using AI safely and ethically.
5. Train employees to use AI effectively
Widespread adoption requires new AI capabilities and a plan to address the widening skills gap. To go from theory to practice in your HR team, start with a page out of AI career coach Travis Lindemoen’s book.
“A few years back, I was working with a client in manufacturing,” Travis recalls. “The challenge was, how do we equip those assembly-line workers with new skills so they could thrive alongside the robots?”
Travis and his team solved the problem by implementing AI-focused reskilling and upskilling programs. They joined forces with a local tech college to teach robot programming and data analysis to their existing employees.
He and the team also created internal training programs to help workers optimize time-consuming tasks and immediately start seeing the benefits of AI.
Here are some additional ways to upskill your employees in AI:
- Invest in remote-friendly training sessions. Think webinars, online certification programs, machine learning bootcamps, and more.
- Teach employees how to write a strong prompt to see how easy it can be to get the results they’re looking for.
- Cultivate a growth mindset so employees can see how AI learning can support their career paths.
“Those who participated got hands-on experience and certifications that made them much more competitive in the job market, even if their role on the line shifted,” explains Travis.
This two-pronged approach, which combines AI learning with career development, encourages employees to collaborate with technology instead of working against it.
Artificial intelligence, real success in human resources
When it comes to new technology like AI, the only constant is change.
But if you stay on top of evolving regulations and dedicate time to training your employees, you can make generative AI a natural part of both your own HR workflows and the company culture at large.
Looking for an applicant tracking system that uses AI responsibly? Don’t miss Breezy’s Candidate Match Score. With easy and adjustable applicant ratings, hiring teams can prioritize who to move forward, faster. It’s applicant screening without the “AI ick.”
Streamline your hiring process and reduce your time to hire. Get your free 14-day trial today!