Roger Miles warns of vanishing common sense as over-reliance on artificial intelligence grinds us down into a population of gullible mugs.
Hurrah, artificial intelligence (AI) has arrived: let’s join HM Government’s celebrations as it revolutionises workplaces. Or maybe we should hold the champagne for a minute because, like any new technology, AI brings some unwelcome issues. My particular concern is AI’s overwhelming impact on a person’s capacity to feel confident about even the routine information it provides. I call this cognitive shock: a disruptive force that undermines the foundations of our capacity to make sense of what is in front of us. Policymakers – misdirected by tech sector lobbyists promising a miracle of economic growth – simply haven’t grasped the impact of cognitive shock on an unsuspecting human race.
The Homo sapiens brain evolved about 100,000 years ago to help us survive amid limited information and direct social contact. It enables us to assess trustworthiness instinctively through contextual cues: familiarity, tone, alignment with expectations. AI doesn’t care about any of that; it happily disrupts those cues while producing its own fluent, highly plausible outputs at scale.
By offloading to AI, we’re not just delegating task-based thinking. Repeated at scale, delegation to AI erodes collective human competence. We saw an early example of such a habituation risk in a related intelligent technology: the satnav installed in our cars twenty years ago. Who now remembers (or even knows) how to read a roadmap? Habits of easy outsourcing lead unused skills to atrophy; we forget how to think for ourselves, only much later realising what competences we’ve lost.
The category error by policymakers is to assume that AI’s vast information-sorting capacity (its “compute”) will somehow refine human judgement. Behavioural science has found that the opposite is true. AI produces outputs at a rate that overwhelms our capacity to verify them, disorientating our intuitive sense of what is credible, and hence, what is true. Rather than push back, our collective (and intuitive) human response to this cognitive shock is to retreat into coping – via cynicism, tribalism, or by disengaging. For institutions of governance, this is dire: public trust has always grounded itself through some agreement on shared reality. How will institutions keep faith with the public after AI has cut loose those anchors of verifiable truth? Governments and other institutions face rising suspicion, falling confidence, and public retreat into the comfort of identity silos.
Hacking the people
Into this cognitive landscape has stepped a terrifying, and so far highly successful, new group of opportunists: AI-enabled social engineers. These fraudsters target weak spots in human judgement rather than mechanical security. Essentially, they’re old-fashioned but highly skilled con artists, jet-propelled by the resources AI now provides. Social engineering (see box, Yesterday’s hacker vs today’s social engineer) has a boundless capacity for personally targeted, industrially scaled persuasion based on AI “deepfake” spoofing of identity. Our human monkey-brains simply aren’t equipped to defend us against this. The real losses are piling up: we’ve recently seen major firms unwittingly transfer funds to organised crime after deepfake calls from cloned “CEOs”, and employees flattered by deepfake “recruiters” into disclosing sensitive information.
Meanwhile most organisations, and policymakers, continue to make the category error of treating this as an “online security” risk, requiring better-specified IT firewalls. Wrong. This is not a technology risk; it’s entirely human and cultural. As intelligence people say, the “attack surface” is now any organisation’s cultural weak points: in practice, disaffected staff. Is there any organisation without a few disaffected staff in its ranks, or even a lot of them? The only effective response is to reinforce human critical thinking skills.
Yesterday’s hacker
Target: information-holding machines and network infrastructure
Method: brute force (multiple attempts), malware
Skillset: technical intrusion, security code-breaking
How to defend: firewalls, access controls, encryption
Today’s social engineer
Target: people and relationships
Method: persuasion, impersonation, narrative
Skillset: well-read in behavioural science; people-watching; improvisation; patience
How to defend: culture check to identify disaffected people / teams; encourage critical thinking; make verifying a stronger norm
As with the undermining of commerce, so too with democracy. Any healthy democratic system relies on a commonly shared reality, or at least, general agreement on what social goods and social harms look like. Whilst AI maybe – just maybe – doesn’t wilfully set out to distort shared reality, it’s terrifyingly good at mass-producing doubt.
Policymakers thus need to move beyond the lazy assumption that warning citizens to “be vigilant” will somehow protect us all from a threat to the very nature of what it means to be human. Public policy would do well to return to a focus on recognising public harms, designing preventative measures, and holding the producers of harms accountable. Luckily, there’s one enduringly positive legacy of how evolution has set up our cognitive processes. As organisms built literally to embody the sentient gathering of experiences, humans have a cluster of capabilities that AI does not, and probably can never, replicate. We’re good at contextual judgement (aka “reading the room”), at empathy, moral reasoning, curiosity, and collaborative sense-making. These faculties constantly remind us what it means to be human and are also our best defences against the cognitive shock of a synthetic information environment.
Although AI can create synthetic reality and talk to us with human-sounding fluency, its intuition and critical-thinking skills remain basic. This points to a grand, generational opportunity for policymakers: to mobilise a public mindset that answers AI’s content hollowness with human-centric initiatives – raising collective critical thinking, ‘inoculating’ public awareness, and rebalancing responsibility for trust.
Collective critical thinking as public infrastructure: Nudge us all to “verify everything” as a boring-but-normal routine. In an era of plausible deepfakes and artificial ‘friends’, voice and video aren’t self-evidently true. Educate children (and adults) to be sceptical in a disciplined way – not as nihilism but as civic competence.
Inoculation, not shame: Make the harms of truth decay and cognitive shock plainly visible, to restore human resilience. Directly reward citizens who report harms, to help overcome the misplaced stigma that reporting is “trouble-making”. Warning citizens to “be vigilant” deflects responsibility away from system designers, regulators and law enforcers. Better to build collective public goods by sharing risk intelligence across sectors and throughout society.
Trust architecture: Fast-track ‘accountability-by-design’ for developers, with new standards for provenance, auditability, and identity in synthetic media.
Democracy cannot survive a collapse in shared sense-making: a world where voters shrug and say “Who knows what’s real anymore?” Policymakers’ task now is to ensure that our institutions can outrun, and contain, a synapse-killing monster – and preferably carry a supportive public with them. Rather than unquestioningly promoting artificial intelligence as an economic driver, could policy people look instead to preserving human judgement, which underpins all good governance?
