I was reading an article about automated recruiting and had one of those little chill moments where something obvious suddenly seemed a lot more worrying. The piece described how hiring systems that use language models often end up shortlisting candidates whose resumes were written by AI — and, even more strangely, those systems seemed to prefer applications written by the very same type of model doing the reviewing. That story is a neat (and alarming) example of AI hiring bias in the wild.
A quick story from the newsroom
A reporter ran a controlled test: submit a mix of human-crafted and AI-crafted resumes to automated screening tools. The results weren’t subtle. Resumes written by AI were shortlisted at higher rates, and the biggest boost went to resumes generated by models similar to the one doing the screening. In plain terms: when machines are both writing and judging applications, they tend to favor their own style.
Why AI hiring bias happens
There are several reasons this sort of favoritism can emerge. First, language models are trained on massive datasets that encode particular styles, formats, and buzzwords. When an application mirrors those patterns, an automated ranker might interpret it as a better fit. Second, the signal the model learns to reward might be purely stylistic — clarity, keyword density, or phrasing — rather than substantive job fit. Third, when the reviewer and the writer share architecture or training data, there’s an implicit “same-voice” advantage.
Same-model advantage and AI hiring bias
Imagine two applicants with identical skills. One rewrites their resume using a popular LLM; the other hires a human editor. The LLM-polished resume might use phrasing and structures that the screening model recognizes and rates highly. This isn’t necessarily intentional discrimination — it’s pattern-matching behaving in a way that benefits the text that looks most like its own training distribution.
How automated recruiting amplifies small skews
Automated recruiting doesn’t just evaluate; it amplifies. A small, systematic preference (for a certain phrase, layout, or verb choice) can cascade through hiring funnels. If hundreds of applicants are nudged through the shortlist by the same stylistic quirks, hiring managers end up making final decisions from a pool already shaped by those quirks. That means teams may unwittingly narrow the diversity of backgrounds in their pipeline, believing the shortlist is simply the most qualified group when, in reality, stylistic alignment played a big role.
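To make that compounding concrete, here is a toy simulation. The numbers (pool size, a 5% style bump, how many applicants advance at each stage) are invented for illustration, not drawn from any real funnel, but they show how a small per-stage preference snowballs into a heavily skewed final pool.

```python
# Toy simulation of how a small stylistic score bump compounds across
# shortlisting stages. All numbers here are invented for illustration.
import random

random.seed(0)

NUM_APPLICANTS = 1000
STYLE_BUMP = 0.05               # assumed small advantage for AI-styled resumes
FUNNEL_STAGES = [300, 60, 12]   # how many applicants advance at each stage

# Half the pool uses AI-styled phrasing; underlying quality is identical.
applicants = [
    {"ai_styled": i % 2 == 0, "quality": random.random()}
    for i in range(NUM_APPLICANTS)
]

pool = applicants
for stage, keep in enumerate(FUNNEL_STAGES, start=1):
    # The screener's score is quality plus a small bump for the favored style.
    ranked = sorted(
        pool,
        key=lambda a: a["quality"] + (STYLE_BUMP if a["ai_styled"] else 0.0),
        reverse=True,
    )
    pool = ranked[:keep]
    share = sum(a["ai_styled"] for a in pool) / len(pool)
    print(f"Stage {stage}: {len(pool)} advance, {share:.0%} AI-styled")
```

In a typical run, the AI-styled share of the pool ends up well above 50% by the final stage, even though underlying quality is identical by construction.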
How to audit for AI hiring bias
Fixing this starts with measurement. If your hiring process uses automated screening, treat the system like any other tool: audit it. Run blind tests by submitting human and AI-generated resumes, anonymize applications, and track which features correlate with advancement. Look for non-substantive signals — formatting, sentence length, particular buzzwords — that might be overweighted.
- Run A/B tests with different resume styles (a minimal audit sketch follows this list).
- Measure shortlisting rates across candidate demographics and writing sources.
- Introduce human-in-the-loop checks at critical decision points.
- Use multiple, diverse models for screening rather than a single monolithic reviewer.
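To make the first two checks concrete, here is a minimal audit sketch in Python. The `screen` callable is a placeholder for whatever screening tool or vendor API you actually use, and the two-proportion z-test is just one simple way to ask whether the gap between the two groups is larger than sampling noise would explain.

```python
# Minimal audit sketch: compare shortlisting rates for human-written and
# AI-written resumes. `screen` is a placeholder for your actual screening tool.
from math import sqrt
from typing import Callable

def shortlist_rate(resumes: list[str], screen: Callable[[str], bool]) -> float:
    """Fraction of resumes the screener shortlists."""
    decisions = [screen(resume) for resume in resumes]
    return sum(decisions) / len(decisions)

def audit_style_gap(
    human_written: list[str],
    ai_written: list[str],
    screen: Callable[[str], bool],
) -> None:
    """Report both rates and a two-proportion z-statistic for the gap."""
    p1 = shortlist_rate(human_written, screen)
    p2 = shortlist_rate(ai_written, screen)
    n1, n2 = len(human_written), len(ai_written)
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se if se else 0.0
    print(f"Human-written shortlist rate: {p1:.1%}")
    print(f"AI-written shortlist rate:    {p2:.1%}")
    print(f"z-statistic for the gap:      {z:.2f}")
```

A large positive z-statistic suggests the screener favors AI-written text more than chance would explain, which is exactly the signal worth escalating to a human review of the pipeline.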
Practical tweaks that help
There are small operational changes that can meaningfully reduce the same-voice effect. For example, strip resumes of formatting and keywords before automated scoring, or convert content into standard plain text. Use competency-based assessments that test skills directly rather than relying heavily on CV phrasing. And when you do use AI to assist, require transparency: did the applicant use generative tools? If so, what was the nature of that assistance?
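As a rough sketch of what that plain-text pass might look like, the snippet below strips markup, decorative bullets, and extra whitespace, and drops a few example buzzwords before scoring. The buzzword list and the specific regexes are illustrative assumptions, not a canonical cleaning recipe.

```python
# Rough sketch of a plain-text pass before automated scoring. The regexes and
# buzzword list are illustrative assumptions, not a canonical cleaning recipe.
import re

BUZZWORDS = {"synergy", "results-driven", "dynamic", "leverage"}  # example terms

def normalize_resume(raw: str) -> str:
    text = re.sub(r"<[^>]+>", " ", raw)        # drop HTML or other markup tags
    text = re.sub(r"[•▪◦·*|]", " ", text)      # drop decorative bullets and rules
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace and line breaks
    words = [w for w in text.split() if w.lower().strip(".,;:") not in BUZZWORDS]
    return " ".join(words)

print(normalize_resume("<b>Results-driven</b> engineer • Built APIs"))
# -> "engineer Built APIs"
```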
What companies can do right now
If you’re running recruiting software or relying on vendors, ask these questions: What data was the screening model trained on? Has the vendor tested for stylistic or model-based favoritism? Can you opt for ensemble scoring (multiple reviewers) to reduce single-model biases? Demand audit logs and be ready to push back on “black box” scoring that cannot be inspected.
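Ensemble scoring can be as simple as averaging the scores of several independent screeners and flagging cases where they disagree strongly. Here is a hedged sketch; the `scorers` callables and the disagreement threshold are placeholders for whatever models and tolerance you choose.

```python
# Hedged sketch of ensemble scoring: combine several screeners' scores and flag
# strong disagreement for human review. The scorers and threshold are placeholders.
from statistics import mean, pstdev
from typing import Callable

def ensemble_score(
    resume_text: str,
    scorers: list[Callable[[str], float]],
    disagreement_threshold: float = 0.2,
) -> tuple[float, bool]:
    """Return the averaged score and whether a human should take a second look."""
    scores = [score(resume_text) for score in scorers]
    needs_human_review = pstdev(scores) > disagreement_threshold
    return mean(scores), needs_human_review
```

Routing high-disagreement candidates to a person also gives you a natural place to build the human-in-the-loop check mentioned earlier into the scoring step itself.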
From a policy perspective, organizations should set standards for fairness, transparency, and accountability around AI in hiring. That might include human review quotas, logging of decisions, and clear appeals processes for candidates who suspect automated bias.
How candidates can respond
If you’re applying for jobs, awareness is power. If you use AI to polish a resume, consider doing a plain-text pass to ensure substance isn’t lost beneath shiny phrasing. Keep a portfolio or concrete project list that demonstrates skills beyond elegant wording. And if an application process says it uses AI screening, feel free to ask how it evaluates applicants; good employers appreciate informed candidates.
Transparency from both sides — employers disclosing tools, candidates disclosing assistance — could create a healthier dynamic where style doesn’t drown out substance.
Final thoughts
Seeing how automated systems can lean toward AI-crafted text is a useful reminder: machine judgments reflect the data and incentives we give them. The result isn’t always malicious, but it can still distort outcomes in meaningful ways. Whether you’re building hiring tools, buying them, or applying through them, small changes — audits, transparency, and diverse evaluation signals — can prevent a style-based echo chamber where the system ends up rewarding itself.
Q&A
Q: Can using AI to write my resume harm my chances?
A: It can help and it can hurt. AI can make your resume clearer, but if the hiring process favors a particular style, AI-written content might perform better or worse depending on the screening model. Focus on clear, truthful representations of your skills and consider including concrete links or examples.
Q: How can companies prove their hiring AI is fair?
A: Companies can publish fairness audits, allow third-party testing, maintain decision logs, and implement human oversight. Demonstrating that multiple evaluation signals (not just stylistic matches) inform decisions is a strong way to show commitment to equitable hiring.