Yesterday, a friend sent me an Instagram reel.
It showed a candidate in a live interview using an AI tool to generate answers in real time.
Knowing we’re building HireAce, he asked for my thoughts.
His perspective as a hiring manager in the public sector was simple:
“That’s cheating.”
He was concerned about where this is heading. If candidates can get live AI support during interviews, what happens to fairness? To integrity? To trust?
Naturally, he wanted to know what HireAce is doing to mitigate this.
It’s a fair question, but I think it’s the wrong starting point.
AI is already in the interview room
Tools like “the one named after a bird” (I don’t want to promote them here) and other real-time candidate interview assistants are designed to listen to the live conversation and generate polished, well-structured answers for the candidate on the spot.
The result looks impressive on the surface. But the deeper question isn’t whether AI can generate good answers.
It’s whether, as interviewers, we’re measuring a candidate’s own thinking or a tool’s output.
Because those are very different signals.
Why “AI detection” alone isn’t the solution
There’s a growing instinct to fight this with detection.
The problem with this approach? Trying to block AI in interviews is like trying to ban calculators instead of redesigning the maths exam.
Instead of asking:
“How do we catch candidates using AI?”
We should ask:
“Are we measuring something that AI can easily fake?”
What you can measure: Behavioural Authenticity Signals
This is where the conversation becomes more interesting and more constructive.
Modern voice infrastructure like Deepgram (which powers real-time transcription inside platforms like HireAce) does more than convert speech to text.
It provides word-level timestamps and confidence scores alongside the transcript.
From this, you can build behavioural indicators such as:
Response Latency Patterns (is there a consistent delay before every complex answer?)
Natural Speech Markers (does the candidate think out loud, self-correct or use hesitation markers, or are answers consistently polished and structurally perfect?)
Cadence & Conversational Rhythm (is speech varied and natural, or evenly paced, as if being read?)
None of these proves AI use on its own, of course, but together they create something powerful:
An authenticity confidence profile. Not surveillance or accusation, just insight.
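To make this concrete, here is a minimal sketch of how indicators like these could be derived from word-level timestamps. The transcript shape, field names and markers are illustrative assumptions, not HireAce’s actual implementation:

```python
# Sketch: deriving simple authenticity signals from word-level timestamps.
# Assumes a Deepgram-style transcript: a list of words, each with "start"
# and "end" times in seconds. Field names and thresholds are illustrative.

from statistics import pstdev

HESITATION_MARKERS = {"um", "uh", "hmm", "erm", "like"}

def response_latency(question_end, words):
    """Delay between the end of the question and the first spoken word."""
    return words[0]["start"] - question_end if words else None

def hesitation_rate(words):
    """Fraction of words that are natural thinking-out-loud markers."""
    if not words:
        return 0.0
    hits = sum(1 for w in words if w["word"].lower() in HESITATION_MARKERS)
    return hits / len(words)

def cadence_variability(words):
    """Spread of inter-word gaps; near-zero suggests evenly paced, read speech."""
    gaps = [b["start"] - a["end"] for a, b in zip(words, words[1:])]
    return pstdev(gaps) if len(gaps) > 1 else 0.0

# Example: a suspiciously even, marker-free answer after a long pause
answer = [
    {"word": "Our", "start": 4.0, "end": 4.2},
    {"word": "architecture", "start": 4.4, "end": 4.9},
    {"word": "uses", "start": 5.1, "end": 5.3},
    {"word": "event", "start": 5.5, "end": 5.8},
    {"word": "sourcing", "start": 6.0, "end": 6.4},
]
print(response_latency(question_end=0.5, words=answer))  # 3.5 seconds
print(hesitation_rate(answer))                           # 0.0
print(cadence_variability(answer))                       # near zero
```

No single number here is conclusive; the point is that combining a consistent pre-answer delay, an absence of hesitation markers, and metronomic pacing gives you an evidence-based profile rather than an accusation.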
The smarter strategy: Make AI assistance ineffective
Here’s the shift I shared with my friend.
Instead of obsessing over detection, redesign the interview. Introduce follow-up questions that build on the candidate’s previous answer, unexpected shifts of scenario, and requests to reason about trade-offs in real time. This kind of dynamic, contextual back-and-forth is exactly what AI tools struggle with.
When interviews become dynamic conversations instead of rehearsed Q&A sessions, AI-fed answers unravel quickly. Importantly, you’re not policing AI use; you’re measuring capability.
Here’s a bigger question: Is it even cheating?
This is where it gets uncomfortable.
In most roles today, AI is already part of daily work. So perhaps the future question isn’t: “Can you perform without AI?”
But: “Can you use AI with judgment?”
There’s a difference between being spoon-fed an answer and demonstrating capability augmented by AI.
Forward-thinking hiring teams will need to decide which side of that line they’re on.
What this means for HireAce
When my friend asked what we’re doing to mitigate this technology, my honest answer was:
We’re not building surveillance software. We’re building structured, AI-assisted interviews to support the interviewer, designed for an AI-enabled world.
That means structured interviews that probe judgment rather than recall, behavioural signals used as insight rather than accusation, and tools that support the interviewer instead of surveilling the candidate.
AI isn’t going away, so the question is whether hiring evolves with it or fights a losing battle against it.
My final thought: the future of hiring is Integrity by Design
The AI interview arms race is real. But the winners won’t be the companies trying to block AI at the door. They’ll be redesigning how capability is assessed.
If you’re thinking about how your organisation navigates this shift, we’re building HireAce with exactly this in mind.
Join the waitlist: https://hireace.ai/waitlist
The future of hiring isn’t anti-AI. It’s pro-human, in an intelligent way.