Cold turnkey

Our addiction to plug-in-and-play convenience is the line that sustains Big Tech. Patrick Leavy tells The Mint that a crash diet of ethical alternatives is the only way to kick the tech habit.

Patrick Leavy is a fervent critic of Big Tech and a co-founder of a nonprofit group, Rebel Tech Alliance, which is urging people to leave the dominant platforms behind in favour of ethical alternatives. It’s not his day job: he works in finance, building forecasting models, studying how businesses make money and how incentives shape outcomes. But in 2016, when Brexit and Donald Trump’s first presidential victory forced many people to ask what was happening to public life, Leavy began to campaign. He was trying to understand how apparently implausible political outcomes had become possible in societies saturated with digital media.

What he found was not simply a story about bad content, toxic debate or even misinformation. Beneath all of that, he argues, sits a toxic business model.

‘The problem at its heart is the business model of big tech: the attention economy, aka surveillance capitalism,’ he says. Reading Shoshana Zuboff’s work on surveillance capitalism helped crystallise the point. What companies such as Google and Facebook had built was not just a better advertising machine. It was a way of harvesting human behaviour at scale, translating it into data, and then monetising that data through increasingly precise forms of prediction and influence.


In Leavy’s telling, the key phrase is ‘micro-targeting’. It begins with the apparently harmless promise of relevance: more useful adverts, more tailored content, more convenient services, in exchange for having your actions monitored. Many users are happy enough with that trade-off. But the dark side, he says, lies in the second word.

‘Targeting is doing a lot of heavy lifting there,’ he says. ‘You can target misinformation at people, you can target hate at people, you can target stalking at people, you can target weapons at people, you can target ads. It’s all targeting.’

That is where the political stakes emerge. Cambridge Analytica was not, for him, an isolated scandal but a glimpse of a deeper architecture. Billions of people were being profiled, sorted and nudged through systems they barely understood. Opinions could be influenced not by open argument in a public square, but by individualised streams of information designed to provoke, manipulate or inflame. Humans, Leavy says bluntly, are more malleable than we like to imagine.

He came to see this as more than a consumer issue. It was not just about unsettling ads or spending too much time on Facebook, although the deliberately addictive design of such products is a problem. At a societal level, he believes, it is a threat to democracy itself. ‘How can you have democracy when you don’t have individual privacy?’ he asks. Once privacy collapses, and once micro-targeting operates at scale, elections can be swayed, people can be radicalised, and public debate can be distorted before citizens even realise what is happening. The attention economy, fuelled by recommendation algorithms and driven by micro-targeting, is the basis for a surveillance state. The interaction between tech firms and politics is complex and goes all the way back to the Clinton administration. These days all countries’ security services want access to the giant data trove that the attention economy provides. They call it ‘predictive policing’.

But the large tech firms were not merely influential through lobbying politicians; they were becoming astonishingly powerful in their own right. Their profits, built on vast advertising revenues and relatively low marginal costs, gave them the money to lobby governments, shape regulation and entrench their dominance. More recently, Leavy says, some of the people at the top of these firms have become increasingly explicit about their political ambitions and ideological projects. The danger is not only concentrated market power, but concentrated cultural and political power too. All these forces converge in the AI boom. It is the same pattern repeating: provide something fun and useful to the masses at low or no cost, without revealing that you are actually building a mechanism for information control at staggering scale.

So why set up Rebel Tech Alliance? Why imagine anything could be done?

After years of looking at the issue, Leavy concluded that the platforms cannot be fixed. ‘They just do not care,’ he says. There is, in his view, ‘no nice way’ to use products whose business model is incompatible with privacy and democracy. That means the only serious response is exit. Not reform from within, but departure. But Big Tech products are ubiquitous, so where do you go?

That was the gap Rebel Tech Alliance set out to fill. Plenty of people could describe the problem. Fewer were offering a practical route out. The organisation’s role is to map the alternatives: different email providers, different messaging apps, different browsers, different tools. The claim is not that switching is effortless, but that it is more feasible than many people think. Compared with a decade ago, Leavy argues, the alternatives now work very well.


His own journey away from Big Tech started early. After a technically minded friend explained how Google builds detailed profiles of users, Leavy stopped using Google Search altogether. Facebook followed. Over time he disentangled his personal life from the major platforms and now argues that, at least for individuals, full exit is possible. Business use is more complicated, but for personal communication and everyday digital life, he insists it can be done. He can report that life without algorithm-driven, billionaire-controlled products is less addictive, better for mental health, free of ads and free of the constant bombardment of disinformation.

The network effect

The hardest barrier is not usually technical. It is social. Network effects lock people in because everyone else is already there. Leaving feels like giving something up. Leavy’s answer is to start small and local: family first, then friends, then wider circles. He describes winding down Facebook over several years, warning contacts well in advance, gathering phone numbers and email addresses, and slowly shifting habits. ‘It requires effort,’ he says, but that is precisely the point. Convenience is what made the system so dominant in the first place.

He is particularly sharp on the mythology of ‘free’. Alternative tools, he says, tend to rely on more honest models. Some charge directly for the service. Others are free and open source, sustained by donations, community labour or nonprofit structures. Signal, the messaging app he recommends over WhatsApp, is one example: open source, charity-run and supported by users who choose to pay because they believe the product should exist.

WhatsApp, by contrast, illustrates the broader deception. Yes, it offers end-to-end encryption for message content. But that does not mean privacy in any meaningful sense. Metadata — who contacts whom, when, where, how often, on what device — remains hugely valuable. In Leavy’s view, that metadata helps sustain the ‘social graph’, a constantly updated picture of relationships and behaviour that Meta can feed into its wider systems of profiling and targeting. The message itself may be hidden, but the surrounding behavioural trail remains a goldmine in advertising dollars. This data can then be accessed by anyone posing as an advertiser via the real-time bidding (RTB) system, and that can include governments, scammers, stalkers and malicious actors of any kind. RTB is essentially the biggest data breach of all time, and it fuels the surveillance ad industry. Beyond the data leakage, users of WhatsApp need to be aware that Meta holds the keys to unlock their message content; they are simply trusting it not to do so. It was recently reported that it does.

AI

From there, the conversation inevitably turns to AI. Leavy sees large language models as an intensification of the same pattern. Users reveal far more to a chatbot than they ever would in a social media post: fears, secrets, health worries, ambitions, vulnerabilities. For that reason, he warns strongly against divulging personal information to big tech AI systems. If people truly want private machine assistance, he argues, they should use local models running on their own devices rather than feeding intimate material into cloud systems owned by giant firms.


He is no AI absolutist. For business tasks, summarisation and certain forms of automation, he sees real value. But he rejects the fantasy that these systems ‘know’ us in any human sense. They are useful tools, not companions, and treating them as friends is, in his view, both mistaken and dangerous.

He is also deeply sceptical about the economics. Drawing on financial analysis he follows closely, Leavy believes the AI boom is structurally unstable. The leading firms, he argues, are heavily loss-making, subscriptions are subsidised, and the true cost of inference remains hidden from users. At some point, he says, those costs will have to surface. If they do, today’s seemingly affordable services may become prohibitively expensive.

That could produce an industry shakeout. Yet he is under no illusion that collapse alone will solve the problem. If private markets fail to sustain the current AI frenzy, companies may seek rescue through state contracts, especially in defence, surveillance and public sector data systems. That prospect alarms him at least as much as the existing consumer platforms.

Which brings him back to first principles. Rebel Tech Alliance is not trying to lobby governments directly. Others already do that work. Its message is simpler, more stubborn and perhaps more radical: your protest can begin at home. Switch the tools. Change the defaults. Stop feeding data into the surveillance machine.

Leavy calls it the ‘Big Tech Walkout’. After years of research, he has come to a stark conclusion: these systems will not reform themselves, and convenience is no excuse for passivity. The only meaningful answer, he says, is to switch to ethical alternatives.

Find the full, step-by-step Big Tech Walkout programme here, which maps out how to swap your Big Tech products for ethical, non-surveillance ones.

Find the Rebel Tech Alliance main website here.

For an excellent explanation of the themes in this article, watch the documentary Molly vs The Machines, which has recently been released in the UK on Channel4.com.

Patrick Leavy

Patrick co-founded the Rebel Tech Alliance as a non-profit in January 2025 after years researching the problems with Big Tech. He’s proud to have helped many thousands of people since …

