In 2023, a New York attorney named Steven Schwartz submitted a legal brief that cited six case precedents. The writing was fluent. The arguments were structured. The citations looked authoritative. Every single case was fabricated by ChatGPT. The brief had passed through Schwartz's review, his colleague's review, and was filed with the court, because at every stage, the output looked like competent legal work.
This is not a story about one careless lawyer. It is a story about what happens when AI severs the link between effort and output, between expertise and presentation, between knowing and appearing to know.
Every recruiter reading this has seen a version of the same phenomenon, not in a courtroom, but in an inbox. The cover letter that is articulate, well-structured, and suspiciously generic. The work sample that is impressive but somehow lacks the rough edges of genuine thinking. The candidate who interviews fluently about AI concepts but stumbles when asked to walk through a specific decision. The surface is flawless. The question is what lies beneath it.
The cost of polish just dropped to zero
For most of professional history, producing polished work required skill. Writing a clear analysis required analytical thinking. Preparing a structured presentation required understanding the material. Crafting a persuasive argument required knowing the domain well enough to anticipate counterarguments. The quality of the output was an imperfect but meaningful proxy for the capability of the person who produced it.
AI has disrupted this relationship. A candidate with no understanding of financial modeling can produce a convincing financial analysis. A consultant who has never read a regulatory document can generate a compliance summary that looks authoritative. A marketing professional with no data analysis background can produce a report full of charts and insights.
The output looks the same. The underlying capability is fundamentally different.
79% of technology workers admit to pretending to know more about AI than they actually do (Pluralsight, 2025). Among executives, the figure rises to 91%. This is not ordinary imposter syndrome. This is a rational response to an environment where the appearance of AI competence is rewarded and the tools to create that appearance are universally available.
The competence illusion in practice
The competence illusion operates at every level of professional interaction, and it is far more subtle than fabricated legal citations.
In hiring, it looks like a perfect application. 86% of hiring managers say AI makes it too easy to exaggerate skills on resumes (Express/Harris Poll, February 2026). But exaggeration is only the most obvious form. The deeper issue is that AI allows candidates to produce work samples, interview answers, and assessments that reflect the AI's competence, not their own. A candidate who cannot independently evaluate a market analysis can still submit one that reads as though they can. The output is real. The capability behind it is not.
In the workplace, it looks like high performance. A professor at IE Business School described the phenomenon in his classroom: students submitted work that read like graduate-level analysis, but when asked basic questions about their methodology or reasoning in oral examinations, many could not respond adequately. They had produced sophisticated output without developing the thinking that should underpin it. The same dynamic plays out in organizations every day. An employee who generates a polished deck using AI appears productive. A colleague who spends twice as long producing a more modest deliverable, but who genuinely understands the implications of what they have written, appears less productive by comparison.
In decision-making, it looks like confidence. 38% of business executives have made decisions based on hallucinated AI output (Deloitte, 2024). They did not do this carelessly. They did it because the output appeared credible, because AI-generated content carries the linguistic markers of competence: clear structure, confident assertions, professional tone. We have evolved to associate fluent language with reliable thinking. AI exploits that association perfectly, producing prose that sounds authoritative whether or not it is accurate.
See the gap for yourself
Take the free Aptivum Snapshot (10 questions, 8 minutes) and find out where you actually stand on AI readiness.
Why this is different from past technology shifts
Every major technology shift has changed what competence looks like. Calculators changed what it meant to be good at mathematics. Spreadsheets changed what it meant to be good at financial analysis. Search engines changed what it meant to be knowledgeable. In each case, the shift was ultimately productive. It freed people from mechanical tasks and let them focus on higher-order thinking.
AI is different in a critical way. Previous tools automated the production of correct answers. A calculator does not hallucinate. A spreadsheet does not fabricate data. A search engine returns real documents, even if they need to be evaluated. These tools reduced effort but did not create a gap between what the output looked like and what the output actually was. The output of a calculator is reliable whether or not the person using it understands the mathematics behind it.
AI, by contrast, produces output that may or may not be accurate, and that requires the person using it to evaluate the output's reliability. The tool has automated the appearance of competence without automating competence itself. This creates a new category of risk that previous technology shifts did not: the risk that the person responsible for the output cannot assess whether it is correct.
This distinction matters enormously for hiring. When you evaluate a candidate who used a calculator to produce correct financial projections, you are evaluating their ability to set up the right calculations. When you evaluate a candidate who used AI to produce a convincing market analysis, you are evaluating their ability to produce plausible-looking output, which tells you nothing about whether the analysis is sound or whether the candidate would recognize it if it were not.
Rodney Brooks, former director of MIT's Computer Science and Artificial Intelligence Laboratory, has warned that we are easily seduced by language, because we have always associated fluent language with intelligence. AI exploits this heuristic at industrial scale. It produces text that sounds like it was written by someone who understands the subject, and in many cases, the person who prompted it cannot tell whether the content is accurate because they lack the domain knowledge to evaluate it.
This is the specific mechanism by which AI creates the competence illusion: it lowers the bar for producing work that looks competent to approximately zero, while leaving the bar for evaluating that work exactly where it has always been. The production gap has closed. The evaluation gap has not.
Professor Kiron Ravindran at IE Business School drew a striking parallel from Norwegian marine biology: when overfishing removed the experienced herring that knew the ancestral migratory routes, the younger generation invented new paths that led to colder, inhospitable waters. Centuries of accumulated navigational wisdom vanished in a single generation. The parallel to knowledge work is uncomfortable: when AI shortcuts the developmental process that builds professional judgment (the years of struggling with bad first drafts, catching your own errors, learning from failed analyses), it risks producing professionals who can navigate fluently in favorable conditions but have no foundation to draw on when the AI points them toward inhospitable waters.
The social dynamics that make it worse
The competence illusion is reinforced by social dynamics that discourage transparency about AI use and its limitations.
Research published in Harvard Business Review (August 2025) found that when evaluators believed an engineer had used AI to produce a piece of code, they rated that engineer's competence 9% lower, even though the code being evaluated was identical across all conditions. The penalty was more severe for women and older workers. Disclosing AI use carries a measurable professional cost. The rational response is to hide it.
Simultaneously, three in four workers say they are expected to use AI at work, whether officially or unofficially (Howdy.com/HR Dive, 2025). One in five feel pressured to use it in situations they are unsure about. And 16% admit to pretending to use AI at work, performing "AI theater" to appear productive and modern (Newsweek, 2025).
The result is a workplace where AI use is simultaneously expected and penalized. Employees face pressure to use AI to produce more output faster, while also facing a competence penalty for admitting they used AI. The logical outcome is exactly what the data shows: people use AI, do not disclose it, do not discuss its limitations, and produce output that looks competent without anyone in the review chain evaluating whether it actually is.
For hiring, this creates a compounding problem. Candidates use AI to produce impressive applications. Hiring managers cannot tell which outputs reflect genuine capability. Once hired, employees continue to use AI in ways that are invisible to their managers. The competence illusion persists, and it compounds over time as AI-dependent employees advance into roles where independent judgment is increasingly critical.
The organizational cost is not hypothetical. When employees who cannot evaluate AI output are promoted into decision-making roles, they bring the illusion with them, and they make higher-stakes decisions with the same uncritical reliance on AI that got them through their earlier work. The 38% of executives who have made decisions based on hallucinated AI content are not an anomaly. They are the predictable outcome of an environment where AI-assisted competence was never distinguished from actual competence at any point in the career pipeline. The illusion compounds because no one along the way had the tools, or the incentive, to test what was underneath the polished surface.
What this means for hiring
If the cost of appearing competent is zero, then appearance is no longer a signal of competence. Recruiters and hiring managers need to internalize this principle fully, because it invalidates most of the evaluation methods currently in use.
Resumes? AI-polished to perfection. Interviews? AI-coached and rehearsed. Work samples? AI-generated or AI-enhanced. Self-assessments? Systematically inflated by AI use itself (Aalto University, 2026). References? Vouching for output quality that was, unknowably, AI-enhanced.
What cannot be faked is the behavior itself: what a person does when confronted with AI-generated content that contains a subtle error, when asked to decide whether a situation warrants AI involvement, when handling data that should not enter a public AI tool. These behaviors are only visible in assessment methods designed to elicit them: scenario-based evaluations that put the candidate in realistic situations where AI judgment, not AI fluency, determines the outcome.
Aptivum's five-pillar assessment framework was designed specifically for this problem. It does not test whether a candidate can use AI tools (that is table stakes). It tests whether a candidate can evaluate AI output critically, handle ethical dilemmas involving AI use, exercise judgment about when to trust and when to override AI recommendations, and collaborate with AI in ways that add value rather than amplify risk. For a detailed look at what each of these dimensions measures, see the five dimensions of AI readiness. These are the dimensions that separate genuine competence from the competence illusion, and they are the dimensions that no amount of AI-assisted preparation can fake, because they require the candidate to demonstrate the behavior in real time.
The gap between appearance and reality is not a marginal hiring risk. It is, as we documented in our analysis of the growing distance between claimed and actual AI skills, a structural market condition. And structural conditions require structural solutions: not better detection of the surface, but better measurement of what lies beneath it.
For specific behavioral patterns that indicate poor AI judgment in candidates, see red flags in AI readiness: 5 patterns that predict poor AI judgment.
Test the gap between appearance and reality. Take the free Aptivum Snapshot: eight minutes, five dimensions. Find out where you actually stand.