Every few years, recruitment discovers a new silver bullet. First it was employer branding. Then assessments. Then automation. Now it's AI.
Each wave arrives with the same promise: fewer mistakes, faster decisions, better hires. And each wave quietly inherits the same unresolved problem from the one before it. The problem isn't capability. It's memory.
Organisations are trying to build intelligent hiring systems on top of data that was never designed to carry judgment in the first place.
Hiring has never failed because people couldn't decide. It fails because they have to decide too much, too fast, too often. In the last few years, that pressure has intensified dramatically. Application volumes have surged while hiring teams have contracted. Recruiters now operate in conditions that reward speed over reflection, throughput over explanation. Under those conditions, the hiring process still functions, but it changes character.
Decisions become compressed. Nuance disappears. Context is sacrificed for momentum, not because recruiters stop caring, but because the system stops allowing care to show up.
What Gets Stored Isn’t What Was Decided
Most ATS data looks precise. Clean timestamps. Neatly categorised outcomes. Reason codes that suggest clarity. But what's recorded is not the decision; it's the artifact of the decision.
The internal debate. The "this could work with coaching." The "strong candidate, wrong timing." The "technically capable but risky for this manager." None of that survives the click.
When organisations later feed this data into AI systems, they assume they are teaching judgment. In reality, they are teaching compression. The AI doesn't learn how people think. It learns what people do when they don't have time to think.
This helps explain a pattern that keeps showing up in research and in the market.
AI tools often perform well in controlled environments and poorly in lived ones. Studies comparing AI-driven hiring assessments to traditional structured methods have found weaker predictive power when outcomes are measured over time (Frontiers in Psychology, 2025). At the same time, practitioners report that AI frequently misinterprets intent or context, requiring human correction rather than eliminating work (InformationWeek, 2025).
Even operationally, the promise of speed hasn't materialised. Despite increasing automation, most organisations reported longer time-to-hire in 2024, not shorter.
This isn't because AI is immature; it's because AI is extremely faithful to its inputs.
AI Is Not Objective…It Is Loyal
AI does not introduce bias or judgment on its own. It inherits them.
If historical hiring data reflects rushed filtering, defensive decision-making, and checkbox compliance, AI will replicate that faithfully and at scale. This is why many organisations are surprised when AI tools seem to reduce diversity or reinforce conventional profiles. The system isn't making new choices; it's repeating old ones more efficiently. Calling this an "AI problem" misses the point. It's a data lineage problem.
Most leadership discussions focus on whether AI can be trusted. That is not the right question. The real question is:
What conditions produced the data we’re asking AI to learn from?
Data created under pressure carries pressure forward. Data created without reflection cannot teach reflection. Data created for compliance cannot generate insight. Until that is addressed, no amount of explainability or model tuning will change outcomes.
What Better Data Actually Requires
Improving recruitment AI doesn't start with technology; it starts with capacity.
When recruiters have manageable workloads, they explain decisions. When they explain decisions, reasoning becomes visible. When reasoning is visible, learning becomes possible, for humans and machines alike.
It also requires closing the loop. Hiring data without performance feedback is just guesswork frozen in time. Without outcome correction, AI has no way to distinguish signal from habit. These are organisational design choices, not technical ones.
Why This Is Ultimately a Leadership Issue
AI exposes the systems it's built on. If hiring is treated as a volume-processing function, AI will optimise for volume. If judgment is treated as optional, AI will remove it entirely.
The uncomfortable truth is that many organisations don't actually want better hiring; they want faster resolution of uncertainty. AI can do that, but it cannot create judgment where none was captured. The future of recruitment AI will not be decided by better algorithms. It will be decided by whether organisations are willing to slow down just enough for judgment to exist, and to be recorded.
Thanks for coming to my TED Talk.