Artificial intelligence rarely gets it wrong – it’s the culture that programmed it that’s messed up, says Madhavi Venkatesan.

I have become accustomed to my smartphone alerting me each morning to the time and distance to my usual destination. But last weekend, when my routine changed, my smartphone knew that I planned to cross state lines even before I had entered a new destination address into my GPS system.

A few years ago, a friend who is a consultant in Silicon Valley noted to me that our smart technology was smarter than we understood. In fact, our pocket artificial intelligence (AI) communicates with our other electronic devices, sharing information across Bluetooth or Wi-Fi networks. (Have you noticed how the passcodes stored on your home computer are available on your smartphone, even without syncing?) He pointed out that the combination of learning algorithms and around-the-clock processing means that AI is constantly gathering information, which it then processes according to its programming.

On the surface this may appear benign. But once you consider how many of our daily decisions are not conscious, AI has a tremendous opportunity to influence action or inaction. Taking this further, given the imperfections of our society, the benefits of AI for some may come at the cost of persistent marginalisation for others. AI is dependent on human programming, and its learning is dependent on the quality of its data. Its objectivity is a fiction.

Computer algorithms are the outcome of programming by a human being. Unchecked, this can allow implicit bias and other subjective criteria to introduce prejudice into AI decision-making. With machine learning, a subset of AI, the data used to “teach” AI can foster racial and gender profiling, as well as other hidden and normalised biases. If a society has a history of discriminatory practices, AI will provide a rear-view outcome, limiting equity-based progress with seeming legitimacy.


PredPol, a machine-learning AI system resulting from a research collaboration between the Los Angeles Police Department and UCLA, uses historical data to predict future patterns of crime. In practice, PredPol justifies over-policing poor and historically criminalised neighbourhoods, and the product has come under scrutiny for seemingly legitimising the racialised history of policing given the perceived neutrality of machine learning. Though the issues with PredPol (soon to be Geolitica) have been documented and written about since 2017, the company is poised to participate in the estimated $20bn to $60bn US AI market in policing.
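To see why historical data can entrench rather than merely describe, consider a deliberately simplified simulation of the feedback loop critics describe. This is a minimal sketch, not PredPol’s actual algorithm; the neighbourhood labels, arrest counts, and rates are invented for illustration.

```python
# Minimal sketch (not PredPol's algorithm): predicting from historical
# arrest records can create a feedback loop. All numbers are invented.
import random

random.seed(0)

recorded = {"A": 120, "B": 40}        # neighbourhood A was historically over-policed
true_rate = {"A": 0.05, "B": 0.05}    # the real rate of observable offences is identical
patrols_per_day = 10

for day in range(365):
    total = sum(recorded.values())
    share = {hood: recorded[hood] / total for hood in recorded}
    for hood, frac in share.items():
        # Patrols are allocated in proportion to past recorded arrests...
        patrols = round(patrols_per_day * frac)
        # ...and more patrols mean more offences are observed and recorded,
        # even though the underlying rate is the same in both neighbourhoods.
        recorded[hood] += sum(random.random() < true_rate[hood]
                              for _ in range(patrols))

print(recorded)  # the historical gap widens instead of closing
```

The point of the toy model is only this: when yesterday’s records decide where to look today, the system keeps finding crime where it has always looked.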

Artificial ignorance
In 2015 Jacky Alciné, a 22-year-old software engineer living in Brooklyn, posted photos of a friend to Google Photos. Google Photos’ AI placed more than 80 photos of Alciné’s black friend under the category “gorilla”. Alciné posted a screenshot on Twitter. “Google Photos, y’all fucked up,” he wrote. “My friend is not a gorilla.” This issue is not isolated. Earlier this year, Uber’s use of Microsoft facial recognition resulted in the termination of some of its own employees, because the technology was unable to recognise and verify non-white faces.

If a particular occupation has historically under-represented women, limited data on women may generate negative expectations or even eliminate resumes before the hiring process has begun. This gender stereotyping may reaffirm gender bias, setting back decades of awareness, grassroots efforts, and activism. In 2018, Amazon acknowledged that its AI recruiting model favoured male over female job candidates. Why? The programme had been trained on ten years of data that reflected a higher proportion of male candidates. The data supported the conclusion that men were preferable, as the sketch below illustrates.
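A minimal sketch of the mechanism, assuming a toy word-scoring rule rather than Amazon’s actual model, and entirely synthetic data: when past hiring decisions skew male, even an innocuous word such as “women’s” ends up carrying a penalty.

```python
# Minimal sketch (not Amazon's system): a model trained on historically
# skewed hiring decisions learns to reproduce that skew. Data is synthetic.

# Ten years of past decisions: resumes with "men's" terms were hired far
# more often, purely because more male candidates were in the pool.
history = [("men's chess club captain", 1)] * 80 + \
          [("men's chess club captain", 0)] * 20 + \
          [("women's chess club captain", 1)] * 5 + \
          [("women's chess club captain", 0)] * 40

# Naive scoring rule: how often did each word appear on a hired resume?
word_stats = {}
for text, hired in history:
    for word in text.split():
        hits, total = word_stats.get(word, (0, 0))
        word_stats[word] = (hits + hired, total + 1)

def score(resume):
    """Average hire rate of the resume's known words."""
    rates = []
    for word in resume.split():
        if word in word_stats:
            hits, total = word_stats[word]
            rates.append(hits / total)
    return sum(rates) / len(rates) if rates else 0.0

print(round(score("men's chess club captain"), 2))    # ~0.64
print(round(score("women's chess club captain"), 2))  # ~0.47, "women's" is penalised
```

Nothing in the rule mentions gender; the penalty is inherited entirely from the historical labels.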

With respect to health care, black people have historically underused medical services because of both discrimination and exploitation. A 2019 study in Science found that AI reinforces these historical patterns, recommending less follow-up, including diagnostic tests and evaluation, for black patients than for white patients. A 2016 study revealed that racial profiling of pain thresholds, a carry-over belief from slavery, resulted in lower AI-based pain medication recommendations for black patients compared with their white counterparts.

Concerns related to AI bias go beyond machine learning on historical data to include physical appearance, in terms of perceptions of attractiveness and standard appearance. Adverse outcomes have already been documented: facial recognition systems created by Google, Microsoft, and Amazon have been noted as misidentifying people of colour at a rate of 70% or more (see box).

In a recent interview, Kate Crawford, professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research, offered another perspective on AI bias: power dynamics. “Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? What we see time and again, from facial recognition to tracking and surveillance in workplaces, is these systems are empowering already powerful institutions – corporations, militaries, and police.”

The known and emerging shortcomings of AI need to be addressed through data that embodies the aspirational goals of social equity and equality, routine evaluation of AI to ensure that use matches intention, and the inclusion of diverse perspectives in AI’s development and evaluation. The irony of AI is that its use makes our social biases obvious and, in doing so, provides an opportunity to promote morality in decision-making by acknowledging bias and taking corrective action. However, not all aspects of AI are easy to assess, and some may become apparent only over time, when change at the societal level may already have manifested.
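What might “routine evaluation” look like in practice? One hedged sketch, assuming a simple demographic-parity check, an illustrative threshold, and synthetic decision logs, is a periodic audit of a system’s selection rates by group:

```python
# Illustrative audit only: compares selection rates across groups and flags
# the model when the gap exceeds a chosen threshold. Groups, threshold, and
# data are assumptions for the sake of the example.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def audit(decisions, max_gap=0.1):
    """Flag the system for review if group selection rates diverge too far."""
    rates = selection_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Synthetic log of one review period's decisions.
log = [("group_a", True)] * 60 + [("group_a", False)] * 40 + \
      [("group_b", True)] * 35 + [("group_b", False)] * 65

print(audit(log))
# ({'group_a': 0.6, 'group_b': 0.35}, 0.25, False) -> fails the parity check
```

A check like this does not settle whether a system is fair, but running it routinely at least makes the disparity visible and reviewable rather than invisible.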

An emerging issue may be the asymmetry between humans and machines with respect to trust, empathy, and responsibility. As AI is deployed in an intermediary role, interfacing directly between humans and specific actions, it is also designed to learn from its experience.

A recent research study revealed that humans are less likely to maintain politeness and trust in communications with AI. Ophelia Deroy, professor of cognitive science and philosophy at Ludwig Maximilian University in Germany, noted recently in an interview with the New York Times: “We are creatures of habit. So, what guarantees that the behaviour that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not colour and contaminate the rest of your behaviour when you interact with another human?”


“If people treat them badly, they’re programmed to learn from what they experience,” she said of AI agents. “An AI that was put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever.”

With so many vendors and no standardisation, AI has the potential to create significant social harm. By simply reflecting the implicit bias and social norms of behaviour that have been embedded within society, unchecked AI can legitimise historical inequity. In the US these biases and norms are largely invisible to those identified as part of the dominant racial grouping and/or those who are fortunate enough to be above a certain socio-economic threshold. As a result, regulation of AI will be challenging, as it will need those who can ignore the pitfalls of the technology to intervene on behalf of those whose vulnerability limits their objection.

As Timnit Gebru, a former AI researcher at Google, noted: “I’m not worried about machines taking over the world. I’m worried about groupthink, insularity, and arrogance in the AI community… The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

Perhaps the most challenging aspect of promoting ethical AI will be the companies that are leading the integration of the technology. Google, Microsoft, and Amazon have significant market presence and lobbying strength. Yet the collective power of customers and investors is even greater.

NOTE: AI was used in the evaluation of this story. The reading level targeted was 9th grade; the actual reading level, as measured by AI, was 12th grade.

Madhavi Venkatesan

Madhavi Venkatesan is a faculty member in the Department of Economics, Northeastern University. She has published three economics textbooks under the series A Framework for Sustainable Practices. In 2019, her fourth …
