Patricia Gestoso tells how the Global North exploits poverty and weak laws in the South to accelerate its digital transformation.
The hype around idyllic tech workplaces that originated in Silicon Valley with tales of great pay, free food and Ping-Pong tables reaches a whole new level when we talk about artificial intelligence (AI). Tech companies that want to remain competitive court data scientists and expert AI developers with six-figure salaries and perks ranging from unlimited holidays, on-site gyms and nap pods to subsidised egg-freezing and IVF treatments. I am a director at a software company that develops AI applications, so I have seen it first hand.
But I also spent 12 years in Venezuela, so I know that AI workers there have very different stories to tell from their counterparts in the Global North. And this North-South disparity in working conditions is repeated across the world, amplified to the point where a large portion of the South’s AI workers are gig workers on subsistence rates.
Take, for instance, the self-driving car industry. It seeks to replace people at the wheel with algorithms that mimic human pattern recognition – yet it relies on intensive human labour.
Self-driving car algorithms need millions of high-quality labelled images, produced by annotators – workers who assess and identify every element in each image. And the industry wants these annotated images at the lowest possible cost. Enter: annotators in the Global South.
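To make the task concrete, here is a minimal, hypothetical sketch in Python of what a single annotated frame might look like; the field names and label classes are invented for illustration and do not correspond to any real platform’s schema.

```python
# Hypothetical sketch of one annotated driving-scene frame.
# Field names and label classes are invented for illustration and
# do not correspond to any real annotation platform's schema.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    label: str    # e.g. "pedestrian", "traffic_light", "cyclist"
    x: int        # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

# An annotator must find and box every element like this in each image,
# often for a few cents per completed frame.
frame_annotations = [
    BoundingBox("pedestrian", x=412, y=198, width=38, height=104),
    BoundingBox("traffic_light", x=623, y=55, width=14, height=36),
    BoundingBox("cyclist", x=150, y=210, width=60, height=95),
]

print(f"{len(frame_annotations)} objects labelled in this frame")
```

Multiply records like this across millions of frames and you get a sense of the sheer volume of piecework the industry demands.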
Annotators in Venezuela are paid an average of 90 cents an hour, with some paid as little as 11 cents an hour. The situation is similar for their counterparts in North Africa.
The injustice lies not only in low pay but also in working conditions. Workers are under constant pressure because the data-labelling platforms have quota systems that remove annotators from projects if they fail to meet task-completion targets. The algorithms keep annotators bidding for new gigs day and night, because high-paying tasks may last only seconds on their screens before disappearing.
And annotators are not the only tech workers in the Global South making it possible for the Global North to reap the benefits of AI.
The impact of fake news on elections and conflicts has put pressure on Big Tech bosses to moderate social media content better. Their customary response has been to offer reassurances that they are working on improving the AI tools that parse content on their platforms.
And we hear frequently that AI algorithms can be deployed to remove the stream of depictions of violence and other disturbing content on the internet and social media. But algorithms can only do so much – platforms need human moderators to review content flagged by AI tools. So where do those people live and how much are they paid?
Kenya is the headquarters of Facebook’s content moderation operation for sub-Saharan Africa. Its workers are paid as little as $1.50 an hour to watch deeply disturbing content back-to-back, without the benefit of any “wellness” breaks or the right to unionise. Moreover, they have a 50-second target to decide whether content should be taken down or not. Consistently taking longer to make the call leads to dismissal.
Still, moderation is not applied equally around the world. As the Mozilla Internet Health Report 2022 says: “although 90% of Facebook’s users live outside the US, only 13% of moderation hours were allocated to labelling and deleting misinformation in other countries in 2020.” And 11 of the 12 countries with the largest national Facebook audiences are part of the Global South. This is in line with prioritising user engagement over users’ safety.
As well as taking advantage of lax protection of human rights and health to secure cheap labour, tech companies exploit the weak data privacy laws of the Global South to trial their AI products on people there.
Invasive AI applications are tested in Africa, taking advantage of the need for cash across the continent coupled with weak restrictions on data privacy. Examples include apps specialised in money lending – so-called lendtechs. They use questionable methods, such as collecting micro-behavioural data points to determine the creditworthiness of users in the region.
Such data points include the number of selfies, installed games and videos stored on a phone, typing and scrolling speed, and SMS data – all fed into proprietary, undisclosed algorithms to build a credit score. Lack of regulation enables lenders to exploit the borrowers’ phone contacts, calling their family and friends to prompt loan repayment. Reports suggest that loan apps have plunged many Kenyans into deep debt and pushed some into divorce or suicide.
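To illustrate what such scoring can look like, here is a deliberately toy sketch in Python. The actual lendtech algorithms are proprietary and undisclosed, so the features and weights below are entirely invented.

```python
# Illustrative sketch only: a toy linear scoring rule over the kinds of
# micro-behavioural signals described above. The real lendtech algorithms
# are proprietary and undisclosed; these features and weights are invented.

def toy_credit_score(selfies: int, games_installed: int,
                     typing_speed_cps: float, sms_count: int) -> float:
    # Invented weights, for illustration only.
    return (0.02 * selfies
            - 0.05 * games_installed
            + 0.50 * typing_speed_cps
            + 0.001 * sms_count)

# Everyday phone usage becomes input to an opaque score that can decide
# whether a borrower gets a loan.
print(round(toy_credit_score(selfies=87, games_installed=12,
                             typing_speed_cps=4.2, sms_count=1500), 2))
```

The point is not the arithmetic but the opacity: the borrower can neither see nor contest the rule that prices their loan.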
The human rights project NotMy.ai has mapped 20 AI schemes led by Latin American governments that are likely to stigmatise and criminalise the most vulnerable people. Some of the applications – like predictive policing – have already been banned in parts of the US and Europe. Numerous such initiatives are linked to Global North software companies.
Among the projects, two are especially creepy. First, the rollout of a tech application across Argentina, Brazil, Colombia and Chile that promises to forecast the likelihood of teenage pregnancy based on data such as age, ethnicity, country of origin, disability, and whether the subject’s home has hot water in the bathroom. Second, a Minority Report-inspired model deployed in Chile to predict a person’s lifetime likelihood of a criminal career, correlated with age, gender, registered weapons and family members with a criminal record – a model that produces false positives 37% of the time.
We in the Global North might assume the Global South has only a marginal involvement in the use and development of AI. In reality, the exploitation of the Global South is crucial for the Global North to harness the benefits of AI – and even to manufacture AI hardware (see box: Mining disaster).
While AI is naturally associated with the virtual world, it is rooted in material objects: data centres, servers, smartphones and laptops. And these objects depend on materials that must be taken from the earth, with attendant risks to workers’ health, local communities and the planet.
For example, cobalt is a critical component in every lithium-ion rechargeable battery used in mobile phones, laptops and electric cars. The Democratic Republic of Congo provides 60% of the world’s cobalt supply, and according to UNICEF estimates some 40,000 children work in its mines. They are paid $1-2 for working up to 12 hours a day while inhaling toxic cobalt dust.
Unfortunately, the Global North’s apathy towards tackling child labour in the cobalt supply chain means that electronics and car companies get away with maximising profit at the expense of human rights and miners’ health.
And one of the driest places on Earth, the Atacama Desert in Chile, holds more than 40% of the world’s supply of lithium. Extracting it requires enormous quantities of water – some 2,500 litres for each kilo of the metal. As a result, freshwater is less accessible to local communities, affecting farming and pastoral activities as well as harming the delicate ecosystem.
The South provides cheap labour, natural resources, and poorly regulated access to populations on whom tech firms can test new algorithms and resell failed applications.
The North-South chasm in digital economies was summed up elegantly by novelist William Gibson, who coined the term ‘cyberspace’ in his 1984 novel Neuromancer. “The future is already here,” he observed in a line quoted by The Economist in 2003; “it’s just not evenly distributed.”
In truth, the exploitation and harm that go with the development of AI demonstrate that it is not just the future that is with us out of time, but also the inhumanity of the colonial past.
Image: Max Gruber / Better Images of AI / Clickworker Abyss / CC-BY 4.0