
Sometimes the biggest lessons come from the most basic mistakes. Just ask McDonald's, whose AI-powered hiring platform recently exposed millions of job applicants' personal data, thanks to a password that would make any IT expert cringe: "123456".
The scariest part? While everyone's racing to add AI to their hiring process, basic security measures are getting left in the dust. And candidates and employers are left paying the price.
When "quick and easy" becomes "oops and yikes"
On paper, McDonald's McHire platform looked like a dream come true. An AI chatbot named Olivia would screen thousands of candidates, collect their information, and run personality tests, all without human intervention.
But there was just one problem: the system was about as secure as leaving your house key under the welcome mat.
Security researchers discovered they could access millions of applicant records containing names, email addresses, phone numbers and complete chat histories through basic security oversights, including:
- Default admin passwords that hadn't been changed since 2019
- No multi-factor authentication (MFA)
- Exposed backend systems
- Poor access controls
To be fair, the mistake had more to do with human error than artificial intelligence.
Hacked podcast hosts Scott Francis Winder and Jordan Bloeman compared the McHire setup to lining up multiple layers of Swiss cheese. In theory, nothing gets through, but when humans enter the picture, even the smallest hole can lead to a massive breach.
Their best guess? The breach stemmed from a developer account set up to give dev teams quick access without the headache of MFA.
"It just never got cleaned up. You'd be surprised at how many major SaaS web apps have major bypass credentials for developers to use," say the Hacked hosts.
The real cost of cutting corners
For employers, the incident serves as a stark reminder that AI tools are only as secure as their weakest link.
While no sensitive financial data was exposed, the breach put millions of job seekers at risk of targeted phishing scams. And we mean millions: researchers found over 64 million chat records containing the personal details of applicants who thought they were having private conversations with an AI recruiter.
The timing couldn't be worse:
- Job seekers are already facing a surge in recruitment scams, with the FTC reporting $220 million in losses in the first half of 2024 alone.
- The average cost of a GDPR fine in 2024 was €2.8 million, up 30% from the previous year.
- The global average cost of a data breach has reached $4.4 million.
Imagine handing scammers a goldmine of verified job seekers' contact info, complete with proof they're actively looking for work at McDonald's. The phishing potential is frightening.
For example, picture a desperate job seeker getting an email that says: "Thanks for applying to McDonald's! We loved your chat with Olivia. Click here to complete your application..."
Cybercriminals are masters at exploiting these exact scenarios. As an employer, it's on you to make sure that doesn't happen.
What does this mean for your hiring process?
You're probably thinking, "Great, another security horror story. I'm not an IT pro, so what can I actually do about it?"
An estimated 61% of HR leaders are planning or deploying AI in their talent processes, and that number's only going up. Whether you're using an AI sidekick for hiring or sticking with traditional ATS tools, here are some practical tips for keeping candidate data secure.
Go back to basics (yes, really)
Think of security basics like the locks on your front door: they're not fancy, but they're your first line of defense. And they work: companies with strong security awareness were 30% less likely to experience a data breach.
Here's your checklist:
- Create passwords that would make a hacker cry (sorry, "password123" won't cut it, and yes, it's still one of the most commonly used passwords)
- Turn on that two-factor authentication. Yes, it's annoying. No, it's not optional anymore. It stops 99.9% of automated attacks.
- Do regular "who's who" checks on system access (because Jake from accounting who left 6 months ago shouldn't still have login credentials)
- Keep an eye out for weird activity (like logins at 3 AM from overseas. Just ask MGM Resorts about their social engineering breach.)
The key is making these security basics part of the routine, not just a one-time fix.
Set up monthly security reviews, create clear processes for offboarding employees, and make sure everyone knows the importance of strong passwords and 2FA. A little prevention goes a long way in keeping your candidates' data safe.
Don't treat AI like a security guard
Just because you've got AI in your corner doesn't mean you can kick back and relax. In fact, AI-powered recruitment tools need even more security scrutiny.
Your AI security checklist:
- Use tech to boost your security game, not replace it (AI can spot patterns, but it can't replace human oversight).
- Schedule regular security check-ups. Monthly at minimum.
- Document everything (because we all know memory is unreliable).
While AI can process massive amounts of data and spot patterns humans might miss, it can also make seriously costly mistakes if left unchecked.
Treat candidate data like gold
Your applicants are trusting you big time. Every resume, every email, every chat log represents someone's career hopes and personal details.
These tips will help you keep their data protected:
- Only ask for what you actually need
- Encrypt everything
- Write down your data rules and stick to them
Not sure where to start? Create a data collection audit. If you can't justify why you need a piece of information for the hiring decision, don't collect it.
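For teams with technical help on hand, that audit can even be automated: keep a written list of fields you can justify, then flag anything your forms collect beyond it. Here's a minimal sketch in Python; the field names and the "justified" list are hypothetical examples, not a standard:

```python
# Fields with a documented hiring justification (illustrative only)
JUSTIFIED_FIELDS = {"name", "email", "resume", "work_authorization"}

def audit_fields(collected):
    """Return collected fields that have no documented justification."""
    return sorted(set(collected) - JUSTIFIED_FIELDS)

# Fields a hypothetical application form currently collects
collected = ["name", "email", "resume", "date_of_birth", "home_address"]

# Anything printed here is a candidate for removal from the form
print(audit_fields(collected))
```

Run a check like this whenever a form changes, and data minimization becomes a habit instead of a yearly scramble.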
Know when to use (and not use!) AI
The McDonald's fiasco isn't just about passwords. It's about knowing AI's place in your hiring process.
Here's your quick-reference guide:
AI is great for:
- Initial resume screening (with human oversight)
- Scheduling interviews
- Basic candidate communications
- Generating interview questions
- Data analysis and reporting
Keep humans in charge of:
- Final candidate selection
- Sensitive data handling
- Bias monitoring
- Candidate communications
- Strategic hiring decisions
Create clear guidelines for when your team should and shouldn't use AI. For example, sensitive candidate data should never be fed into public AI tools, even if it seems more efficient.
How Breezy keeps the bad guys out (while letting the good ones in)
We get it. All this security talk can feel overwhelming. That's why we've built enterprise-grade security right into Breezy's DNA. Here's how Breezy helps protect your candidates' data:
Built-in GDPR compliance (because nobody likes surprise fines)
Remember those multi-million dollar GDPR penalties we mentioned? Yeah, we help you avoid those:
- Automated data retention controls
- Candidate consent tracking
- Easy data subject access requests
- Privacy notice management
- Regular compliance updates
Security that works while you sleep
No more lying awake at night wondering if your candidate data is safe. Breezy gives you:
- Multi-factor authentication standard
- Role-based access controls
- Automated security updates
- Activity logging and monitoring
- Secure data encryption
AI done right
Don't take chances with AI:
- No public AI models for sensitive data
- Secure, supervised screening tools
- Clear data handling procedures
- Regular security audits
The best part? All these features work automatically in the background while you focus on what matters most: finding and hiring great people.
Smarter, safer hiring in the AI age
AI recruitment tools aren't the villain in this story. But the McHire incident is a wake-up call: employers need to be smart about security.
Remember, you're not just processing applications. You're safeguarding people's dreams and personal details. Treat that responsibility with the respect it deserves.
Want to make sure your hiring process is both efficient AND secure? Learn how Breezy's enterprise-grade security keeps your candidates' data safe.