For the better part of a decade, the take-home assignment was the closest thing engineering hiring had to a reliable signal. Give a candidate a realistic problem, a few days, and no one watching. What came back told you something real — how they structured the code, what tradeoffs they made, whether they could write a sensible README without prompting.
That signal is gone. Not degraded. Gone.
The same shift that made AI indispensable for working engineers has made it a cheat code for job seekers. When every candidate can produce clean, well-commented, architecturally coherent code by prompting Claude or GPT-4o, the output of a take-home tells you nothing about who wrote it or how they think.
The problem isn't that candidates are using AI
It's tempting to frame this as a cheating problem, but that framing is wrong — and it leads to the wrong solutions. Senior engineers at the best companies use AI constantly. The ability to leverage these tools effectively is increasingly the job. Banning AI from assessments doesn't filter for senior engineers; it filters for engineers willing to pretend they don't use AI.
The real problem is that the take-home was always measuring output, and AI has made output cheap. What you actually want to measure — reasoning, architectural instinct, the ability to identify when AI is wrong — is a layer beneath the output, and it was never visible in the submission.
How hiring teams are responding — and why most responses are making it worse
We've seen three common reactions to this problem, all of which create new issues without solving the underlying one.
1. Making the problem harder
The instinct to escalate complexity — add more requirements, tighten the time window, introduce deliberate edge cases — doesn't work because it doesn't change what's being measured. Harder problems still produce AI-assisted submissions. They just add friction for candidates and increase the review burden for your team. The best candidates, who have other options, are also the most likely to drop out.
2. Adding a follow-up interview to "verify" the work
The code walkthrough — asking a candidate to explain their take-home submission in detail — has become standard. It's better than nothing, but it's also become an arms race. Candidates increasingly prompt their AI tools to generate explanations and walkthroughs alongside the code. A practiced candidate can narrate a codebase they didn't write, convincingly enough to pass a 30-minute review.
3. Moving to live coding
Live coding under observation eliminates the AI-assistance vector but introduces a different distortion: performance anxiety. Research consistently shows that whiteboard-style coding tests measure a candidate's ability to code under observation more than their ability to code. You lose access to how they work in their natural environment — which is, increasingly, with AI as a collaborator.
What senior engineering actually looks like — and how to measure it
Senior engineers are distinguished not by what they produce, but by how they reason while producing it. Three patterns show up consistently in strong candidates, and weak ones — even well-coached ones — struggle to fake them:
- Architectural override. They push back on AI suggestions. They see when a generated pattern is technically correct but wrong for the specific context — the wrong abstraction, the wrong tradeoff, the wrong assumption about scale. They override, and they know why.
- Prompt efficiency. Strong engineers don't spam AI with broad requests and accept the first output. They write precise, targeted prompts because they already have a clear mental model of what they need. The gap between their prompt and the accepted output is small.
- Reasoning under constraint. When AI produces something that almost works, senior engineers identify exactly what's wrong and correct it specifically — rather than regenerating until something looks right. The correction pattern reveals the mental model.
None of these signals appear in a submitted codebase. They exist in the process of creation — in the session between developer and AI that produces the output. That session is what needs to be observed.
What this means for how you source and screen
The structural implication is uncomfortable: the most important signal in engineering hiring is now invisible to the current process. Teams that don't adapt are already making systematic errors — filtering out good candidates whose AI-collaborative process produces "messy" submissions, and passing through candidates whose AI tool produced polished code they can barely explain.
Take-home tests aren't going away overnight, and they're not useless for all roles. But for senior engineering positions, where the cost of a wrong hire is high and the signal collapse is most acute, relying on submission quality as the primary filter is now a liability. The teams adapting earliest are the ones that will have access to the senior talent that everyone else is failing to identify.