Patricia Gestoso counsels that our appraisal of artificial intelligence should be guided by the motivations of its chief advocates.

Twenty years ago, I was a trainer at a software company. I taught customers about methods such as genetic algorithms and clustering, which fall under the umbrella of artificial intelligence (AI). However, at the time, nobody called them AI because we were still recovering from a second “AI winter,” a period when people were disenchanted with the progress made in the field.

Today, everything appears to be inextricably linked to AI — from the future of work to attaining our sustainable development goals — but is that true or wishful thinking?

There are four common mythical threads in the history of artificial intelligence systems: the belief that human capabilities can be artificially recreated, the centrality of language to convey humanity, the search for the defining moment when that creation will happen, and the belief that science — especially data and logic — is unbiased.

The idea of creating “thinking machines” is very old. There is some evidence of beliefs in human-like artificial life in Ancient Greece, Rome, and Egypt. Jewish folklore and even alchemists referred to the artificial creation of life. Throughout history, this quest has been exploited in automatons and digital interfaces to mislead and even deceive the public, politicians, and scientists (see box: A history of lies and half-truths).

A history of lies and half-truths
Many claim that humans tend to suspend disbelief when it comes to “thinking machines.” A good example is the Mechanical Turk, an automaton chess player created in the 18th century. It was a life-sized model of a human upper body, dressed in Ottoman attire and wearing a turban, mounted atop a cabinet that concealed a human chess master who operated the Turk covertly. The ruse defeated politicians such as Napoleon Bonaparte and Benjamin Franklin, who apparently never questioned how the machine worked.

In the 1960s, Joseph Weizenbaum created ELIZA, the first AI chatbot, which mimicked human conversational patterns inspired by psychotherapy. He was surprised that, from the users’ perspectives, ELIZA could maintain the illusion of understanding by repeating small fragments of the users’ inputs. This inspired the term Eliza effect, the attribution of human thought processes and emotions to an AI system.

Another persistent myth peddled by tech bros and politicians is that AI automates the “drudge work” performed by humans. The reality is that AI systems such as autonomous driving rely heavily on very low-paid annotators, largely based in the Global South. The same goes for social media moderators, who review content flagged by AI tools, including depictions of rape, beheadings, and child abuse, often without even minimal mental health support and under very strict NDAs.

Moreover, AIwashing — the practice of masking human work as AI — has become widespread. Amazon’s “Just Walk Out” checkout technology and the autonomous-vehicle company Waymo have secretly relied on contractors in India and the Philippines to verify and troubleshoot the underlying AI algorithms.

Future mirages

There is an undeniable gap between the valuations, investment, and revenue in the AI sector, further complicated by the circular nature of the financing among some tech firms. Some of the gap will likely be bridged by taxpayers through government innovation stimulus measures that camouflage bailouts. Still, whilst this exhibits all the hallmarks of a bubble, it will not burst in 2026. Speculative bubbles are sustained by narratives until they finally burst.

For example, although the promise of achieving AGI by 2027 was abandoned after key AI optimists – including Nobel Prize winners – acknowledged that large language models were not the magic bullet, two other terms have already replaced LLMs in the superintelligence race. The first, world model, is based on the old concept of infusing AI with physics and spatial properties. The other buzzword promising AGI is continual learning, a model that, like humans, can continually acquire new knowledge and handle the real world’s unpredictability.

Additionally, we can expect governments and Big Tech to amplify narratives of AI as the ultimate weapon for achieving geopolitical supremacy and ensuring citizens’ protection, reinforced by wars, political and social unrest, and massive increases in defence budgets. As a result, companies such as Palantir, which positions itself as the operating system for governments, will continue to win lucrative deals with “intelligence” and law enforcement agencies around the world. Defence firms like Anduril, a drone and autonomous weapons startup that doubled its valuation to $30 billion last year, will flourish. The proliferation of companies such as Clearview and Flock Safety, which rely on AI to deliver solutions for public safety and security, will continue. Even the UK has now announced plans for an AI centre for policing.

However, behind those promises of enhanced security, AI plays a key role in increasing the vulnerability of digital systems. Nearly 60% of employees admit to concealing how they use artificial intelligence at work. This means data breaches are set to increase substantially, now accelerated by the use of AI chatbots and agents, which have been shown to compromise personal data at scale and help automate a significant portion of cyberattacks. Moreover, the use of coding agents will make applications easier to break because the code will be more predictable.

Strictly speaking, the field of artificial intelligence was founded at a workshop at Dartmouth College in 1956 with the aim of finding “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” The term was chosen among several for marketing reasons — it was thought that it would help attract interest and make it easier to secure funding for the research. And it worked.

In the 1960s, massive investment was directed to the field with the promise that AI approaches named after human parts and capabilities, such as neural networks, reinforcement learning, and machine learning, would deliver applications with “common sense” and robots that would “reason” about their own actions. Since then, enthusiasm for AI has fluctuated across decades, but the anthropomorphisation of this technology has remained constant. Today, we can see it in chatbot interfaces that claim to think, reason, and hallucinate, and in AI jargon such as vibe coding, chain-of-thought, and AI agents.

Despite some progress, determining the onset of human-like intelligence in AI systems has remained elusive. In 1950, Alan Turing proposed a test in which a machine passes if a human cannot distinguish machine-generated output from human-written output. For many, this has been sufficient to demonstrate that large language models (LLMs), such as those used by interfaces like ChatGPT, are indeed thinking machines.

Around the same time that the Turing test was proposed, the concept of technological singularity emerged, a hypothetical future point at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilisation. That concept has evolved over the years, generating visions ranging from existential threat, in which non-human intelligence destroys humanity, to utopian futures in which AI enables humans to focus on self-actualisation.

In addition to language and the ability to mimic human capabilities, proponents of AI superiority over humans have touted the supposed objectivity of machines and mathematics as a magic weapon (see Box ‘He said, she said’). The reality is that statistics and mathematics, at the root of artificial intelligence development, have historically been used against women and minoritised populations. Examples include the pseudosciences of phrenology (inferring personality traits from skull morphology) and physiognomy (assessing a person’s character from their outer appearance), as well as IQ tests, all of which were developed to provide a scientific veneer for discrimination against certain groups.

Currently, Artificial General Intelligence (AGI) – the idea that AI systems can fully match human capabilities – has become the ideal vehicle for obtaining massive funding with minimal accountability. It promises everything — solving cancer, “fixing” sustainability — but it doesn’t commit to anything specific, on the basis that synthetic superintelligence will know how.

Paradoxically, even those who have been vocal about the existential risk posed by AGI, such as Elon Musk and Dario Amodei of Anthropic, are heavily involved in funding, leading, and promoting the idea of superintelligence. What unites the utopian and dystopian AGI factions is the belief that only techno-oligarchs can steer AI in the “right” direction.

This brings us to the personality myth constructed around those who lead commercial AI products. They seek to convince us that they are best suited to make decisions about this purportedly groundbreaking technology (see Box – He said, she said), despite lacking training in ethics, medicine, or sociology, and some not even having a computing degree (e.g., Bill Gates, Mark Zuckerberg, Sam Altman). Moreover, we are expected to believe that their products will eradicate poverty and yield revolutionary scientific discoveries, whilst in reality they appear to be primarily geared towards grabbing and monetising our personal data.

He said, she said

Past legends

“Step one, solve intelligence; step two, use it to solve everything else.” Demis Hassabis, Nobel laureate for contributions to AI, on Google DeepMind’s mission

“People should stop training radiologists now, it’s just completely obvious that within five years, deep learning [a branch of AI] is going to do better than radiologists.” Geoffrey Hinton, “Godfather of AI”, Nobel Prize in Physics (2016)

Present myths

“OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.” Sam Altman, OpenAI CEO (2025)

Future ambitions

“We must switch gears. We must speed up AI adoption across the board. And this is why we are presenting an Apply AI Strategy. It is based on a simple yet transformative principle: AI first.” Ursula von der Leyen, President of the European Commission (2025)

Another fairy tale is that AGI is a one-off opportunity we cannot miss. As a result, we are asked to suspend critical thinking about present harms and malfunctions. For example, hallucinations in LLMs are framed as growing pains when in reality they are the natural outcome of a statistical calculation. As a consequence, the biases and harms against women and minoritised populations — such as biased recruiting algorithms and deepfake porn — are presented as temporary and a price worth paying for the promised future riches.

We are also told that all AI progress will halt if we dare to regulate it, as if we should forget that one of the oldest technologies, fire, has created jobs and fostered innovation precisely because it has been regulated.

As a result, Big Tech has taken matters into its own hands and successfully lobbied its way through AI regulation across many countries — the US, UK, EU, and Brazil — stopping, weakening, and even rolling back digital rights. And it goes further. Several pro-AI super PACs backed by key figures in artificial intelligence companies and VCs have already pledged to spend hundreds of millions of dollars to influence the 2026 US midterm elections. Lately, they have also benefited from countries’ eagerness to bring their citizens up to speed on AI, using training as a covert advertisement for their products.

In the past couple of years, executives have been under pressure to deliver the promised productivity and innovation gains from AI. We heard politicians praise AI’s transformative power and CEOs tell staff that AI is coming for their jobs or that they must either embrace AI or leave. Yet studies have shown that 95% of organisations report no ROI from generative AI.

The reality is that while Big Tech and investors are selling superintelligent and autonomous AI, we currently have only narrow AI, which performs one or a few tasks at most. Someone must pay for the failure to fulfil the AI dream of exponential returns and unlimited growth and, if possible, disguise it as a win.

For example, last year, tech companies attributed layoffs to AI adoption without offering proof. We should expect the trend to spread to other sectors; that is, redundancies arising from poor performance and challenging business conditions will be portrayed as casualties of “AI adoption”. Additionally, managers and staff will face mounting pressure to justify why AI cannot be used before requesting additional headcount.

We are also seeing a shift in accountability for AI success from leaders to staff. Increasingly, organisations are implementing KPIs to drive the adoption of chatbots and other AI tools among workers, and to monitor who’s using the LLM licences they purchased. In summary, AI is moving from a learning opportunity to a mandatory requirement.

Finally, in a defensive move, leaders of AI powerhouses such as Sam Altman and Satya Nadella have stated that if AI is not providing value, it is because employees are not using it correctly. Hence, it is not surprising that OpenAI and Microsoft are aggressively offering paid customers free or low-cost training. And they are not alone: the UK government recently blamed the AI skills gap on workers’ lack of confidence and enthusiasm.

There is another way.

In the book AI Needs You, Verity Harding highlights how we have successfully addressed complex ethical questions surrounding new technologies. The regulation of assisted reproductive technologies illustrates how society makes better decisions when diverse perspectives are involved. For example, the effort was spearheaded by a multi-stakeholder committee chaired by a philosopher rather than a doctor. Additionally, its recommendations were published in 1984, six years after the first human IVF birth, debunking the myth of technological inevitability, which claims that it is too late to curb AI.

It is also paramount that we embrace the power of collective action. Activism has been instrumental in blocking and renegotiating billions of dollars’ worth of data centre projects worldwide. Campaigners significantly contributed to the recent criminalisation of the creation of non-consensual AI-generated intimate imagery (deepfake porn) in the UK, and they forced the city of Rotterdam to suspend its biased welfare fraud algorithm.

Finally, the best way to resist the siren songs of superintelligent AI spin doctors is to rely on our critical thinking about the present benefits, harms, and limitations of artificial intelligence and remember those gurus’ ultimate goals: power, money, self-aggrandisement, or a combination of these. As Maya Angelou said, “When people show you who they are, believe them.”

Patricia Gestoso

Patricia is a scientific services leader and a diversity and inclusion tech evangelist. Throughout her career as global head of scientific support, training, and services, Patricia has worked with Fortune …
