Tania Duarte explains why, in post-Brexit UK, the scrutiny of ethics in advanced tech cannot be left to companies or governments.

The lifeline thrown to us by online communication during the pandemic has made us all keenly aware of the technology that underpins our lives. We have also seen the depth of the social inequality in our societies. And there has been a growing, but still limited, realisation of the dangerous connection between the two.

In the UK last year, A-level students marched through the streets with banners, chanting “Your algorithm doesn’t know me”, after an artificial intelligence (AI) system deployed by the government to replace cancelled exams under-predicted the performance of students from underprivileged backgrounds. Then the Home Office was successfully challenged over its use of an algorithm to sift visa applications, which was said to entrench unfair and racist decision-making in the visa system – “speedy boarding for white people”.

Against this background, the UK still has crucial decisions to make on AI strategy. It is now outside the EU, which, in April 2021, became the first political system in the world to define an all-encompassing legal framework on AI. Its proposals aim to guarantee “the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU”.

Box 1: An overview of the proposed EU AI regulation
The proposed regulation takes a risk-based approach, under which systems posing an “unacceptable risk” are banned. These include systems that manipulate behaviour in a manner likely to cause physical or psychological harm, systems that exploit children, “social scoring” systems such as those seen in China, and “real-time” remote biometric identification systems (except under strict conditions).

High-risk systems include AI in machinery, medical devices and vehicles; AI for managing critical infrastructure; biometric identification and categorisation of people; systems that inform recruitment decisions or evaluate credit scores; and systems that would have an adverse impact on EU fundamental rights. Such systems will have various requirements to meet, covering the data used, record-keeping, oversight, transparency, accuracy and security.

Limited-risk systems, such as chatbots, will have specific transparency obligations: for example, making users aware that they are interacting with a machine.

The EU proposals (see box 1) are momentous in that they harmonise some very complex issues and put the first legal stake in the ground. Nonetheless, some feel they do not go far enough: the restrictions on the use of biometric identification, for example, are viewed in some quarters as falling short.

We and AI seeks to increase public awareness and understanding of the risks and rewards of AI. Its policy advisor, Dr Claudine Tinsman, points out that although Article 5 of the regulation prohibits the use of remote biometric identification in publicly accessible spaces, the prohibition applies only to “real-time” systems used for law enforcement purposes. It does not address the use of “non-live” facial recognition for policing: non-live applications can still be used to identify people, and can be put to other uses by public authorities or private actors.

Other exceptions mean that the regulation does not ban real-time facial recognition used to identify people suspected of entering or living in a Member State under irregular circumstances, such as migrants and refugees. Tinsman notes that over 60 non-governmental organisations called on the European Commission to prohibit the use of AI systems to automatically identify gender and sexual orientation, but no such ban was included in the draft regulation. Nor was there a ban on the use of biometric technology for purposes other than recognition, such as emotion sensing (see box 2).

Box 2: Emotion-sensing AI
Recently there have been claims that the Chinese government is using Uyghurs as test subjects for unvalidated AI emotion-sensing programmes. These are used like lie detectors, trained to detect and diagnose minute changes in skin pores and facial movements.

AI systems that analyse vibrations in people’s heads to determine mental and emotional states have been used at events such as the World Cup and the Olympic Games. Similar systems have been used to detect deception at US and EU borders, despite concerns about the lack of credible evidence of their accuracy. Under the proposed EU regulation, such emotion-detection systems are deemed high-risk but are not banned.

Tinsman concludes that although the regulation is a step in the right direction, the loopholes are worrying: “Facial recognition systems, in particular, are biased against minorities, so leaving the door open for their use on a large scale could disproportionately affect minorities.”

Indeed, there has already been scrutiny of facial recognition used by public authorities in the UK: South Wales Police was successfully challenged, on human rights grounds, over its unlawful use of the technology. Several organisations have been mobilising to safeguard rights. At We and AI, our approach has been to increase AI literacy and visibility so that rights can be understood (see box 3).

And now there is a wait to see how the UK government will respond to the roadmap of recommendations delivered to the Office for AI in January 2021. The AI Council’s recommendations aim to deliver an increase in UK gross domestic product by 2030 while also benefiting the environment and people from “all walks of life”. But balancing AI ethics and innovation is not easy. So what is the government likely to take from the EU proposals?

Minesh Tanna, managing associate and AI lead at law firm Simmons & Simmons, explains that the UK is unlikely to adopt the EU regulation in its entirety, given how strict it is and given the UK’s need to attract investment. A compromise is more likely, such as voluntary conformity assessments for UK AI companies and adoption of the same prohibitions.

“This might be a desirable path for the UK because it would be seen to be regulating AI in some way, rather than leaving it open for ‘ethics dumping’,” says Tanna. “The UK might also want to use its flexibility to regulate in a more tailored, nuanced way (an advantage of Brexit), while at the same time probably having a less onerous regulatory regime than the EU, so as to encourage investment,” he adds.

Charles Radclyffe, an AI ethics, technology governance and ESG specialist and partner at EthicsGrade, cautions that the UK needs to ensure there is sufficient capital investment in the UK AI ecosystem, and that, as companies create capital value, that value should not be syphoned off to the US or China but should remain in the UK.

“The UK is a little like Pluto. We’re still in orbit of the sun (the EU) but on the face of it we have lost all of the benefits of planetary status. What we need to do is create the conditions and highlight sufficiently the uniqueness of the UK’s AI ecosystem. We need to attract inward investment that will otherwise get sucked towards the gravity pull of Brussels.”

Box 3: The need to increase AI literacy and visibility
At the beginning of 2020, I set up a non-profit organisation aimed at increasing awareness of the risks of unchecked AI systems in the UK, and of the rewards that could be unlocked by having a greater range of people involved in decision-making about them. This is necessary for many reasons, and in the EU there is funded support to increase AI literacy levels among the general population.

Greater literacy and more accessible information unlock the potential to spot violations of existing equality and privacy laws. With them comes the power to ask questions and to influence debates over the inherent trade-offs of these technologies. Information enables people to make consumer and behavioural decisions about the use of technology, or to make more considered decisions when defining or designing technology in the workplace. And with a greater understanding of what can be achieved with AI and data, a greater diversity of people can see the value of learning tech skills and enter the AI and data workforce, helping to close a significant skills gap.

Knowing where issues around AI are being addressed gives us the ability to participate in the direction of policy making. It fuels the democratic process at this critical inflection point in the future of technology-governed society.

Radclyffe also notes that the EU AI regulation is likely to create a “Brussels effect”, as with the EU’s General Data Protection Regulation, with which large corporations comply even outside its territorial limits, and which has been copied by regulators across a wider geography.

Sebastien A. Krier, We and AI advisor and senior technology policy researcher at the Stanford University Cyber Policy Centre, agrees: businesses in the UK will need to comply with the European regulations if they want to sell their products in the EU. He notes, however, that although the UK attracts a lot of investment and is home to many prominent AI research labs compared with other European countries, it has been quite slow to react beyond the now-defunct industrial strategy.

Krier adds that the proposed regulation must still go through the European Parliament and the Council, which can take a few years, and that various provisions are likely to be tweaked as harmonised standards develop in parallel. This will give the UK time to respond and catch up.

What is certain is that the pace of change required is daunting in the face of the tech world’s agile innovation. This is especially true if we are not just to ensure that AI systems are visible, used responsibly and held to account, but also to see them applied in ways that are beneficial and equitable for everyone – used to heal rather than deepen divides.

It is clear that the challenge cannot be left solely in the hands of technology companies and states, neither of which is a neutral player. It must be taken on by society as a whole and by the individuals within it (see box 4).

Box 4: Empowering participation in AI use and governance
Not-for-profit organisation We and AI offers a free course that demystifies AI and explains how to get involved in AI decision-making through a wide variety of methods. We are also currently testing a resource on the intersection of AI and race, the first of a range of open-source resources designed to make information relevant and accessible to marginalised communities. It covers key areas such as education, employment, justice, climate, money, health and social care.

We are seeking partners, volunteers and ambassadors to support our efforts. We need to work together to ensure that the UK’s new position outside the EU doesn’t leave us behind – in either AI innovation or trustworthiness. Our team of volunteers and experts from a wide range of backgrounds aims to empower everyone to play their part. Contact us at hello@weandai.org or visit https://weandai.org/ to find out more.

 

Tania Duarte

Tania is Co-Founder of We and AI, a UK non-profit focused on making technology equitable and beneficial for all. She is on the Founding Editorial Board of the Springer Nature AI and …

