
Who is really responsible for ethical AI?


As regulators and ethics-focused experts continue to put pressure on Big Tech over how it trains its AI models, Hana Anandira asked researchers in the field whether there is a way to look at ethical AI beyond the policymaking narrative.

This article was published for GSMA's in-house news outlet.

*

In January, TIME revealed Microsoft-backed OpenAI had outsourced work to Kenya in late 2021, with workers moderating internet data as a fundamental part of developing the safety system for generative AI sensation ChatGPT.

According to documents seen by the news outlet, moderators earned around $2 per hour to label texts detailing injuries, sexual abuse and self-harm. These workers were also asked to collect images, some reportedly illegal under US law, in a separate project for OpenAI’s image generator DALL-E.

In a statement to TIME, OpenAI explained it took the well-being of its contractors “very seriously” and that programmes offering support were available through the outsourcing company, which believed its employees had not requested support “through the right channels”.

The work involved was so traumatic that the outsourcing company cut its contract with the AI powerhouse short, a recent Wall Street Journal article indicated. Meanwhile, a growing body of research continues to reveal how dependent big technology companies are on workers in the global south to carry out this work as part of a mission to make AI safe.

Surveys conducted over the years have also revealed that general-purpose AI deployed in biometrics, policing and housing systems has already caused gender and racial discrimination.

As ChatGPT took off in earnest, the recent dismissal of Microsoft’s responsible AI team raised eyebrows and questions about whether ethical concerns are actually a priority in the multibillion-dollar AI economy.

That is not to say the technology sector as a whole is not taking the risks around generative AI seriously.

Major industry figures did indeed call for a pause in the technology’s development until a robust AI act is in place. However, researchers speaking to Mobile World Live (MWL) believe the public should look a little further, beyond policymaking.

Abid Adonis, a researcher at the Oxford Internet Institute, argues the task of ensuring ethical AI needs to be expanded.

“Now, we only see two powers: regulators and big tech, but we also have civil society and scholars. And it’s important to hear what marginalised groups say about this because it’s missing from the discussion.”

False AI
This view resonates with Dr Alison Powell, associate professor in Media and Communications at the London School of Economics and Political Science and director of the JustAI network at the Ada Lovelace Institute.

Powell told MWL the emphasis on artificial general intelligence, which industry heavyweights claim could eclipse humans’ cognitive abilities and therefore dominate job markets, is already in itself harmful.

“It’s harmful because it focuses on an imagined world rather than the actual world we live in.”

This is particularly reflected in large language models (LLMs) built on internet data. Powell pointed out that while many languages are spoken in the actual world, English is largely dominant on the internet.

“In the world, there are many ways that people experience things, express ourselves and work together. Not all of these are present online.”

Powell further warned about the hype around AI’s decision-making abilities and suggested the technology’s powers do not take into account social responsibilities.

This rings true when considering that generative AI posterchild ChatGPT falsely accused law professor Jonathan Turley of assaulting a student and made up a story about the death of Alexander Hanff, a privacy technologist who helped craft GDPR.

Other examples include data-filtering practices in GPT-3, which used a classification system to automatically discard obscene and inappropriate material.

Further flaws in LLMs were highlighted in a recent report by The Washington Post, which stated tech companies had grown secretive about the data they feed their AI, including content from websites that could be deemed discriminatory.

This backed up a study from 2021, which found generative AI has the potential to amplify privileged views, pointing to GPT-2’s training data extracted from Reddit, Twitter and Wikipedia, all of which have predominantly male users.

Cultural machine
Powell stressed the need to understand the social contexts in which technology is more likely to cause harm before considering how to make it more ethical.

“AIs are institutional machines, they’re social machines and they’re cultural machines,” she argued.

“If we’re walking away from saying, ‘How do we do this technically, in the gears?’ then we produce that double bind. But if we take a step back, then we notice all of these systems are institutional systems. Thinking about making systems work along the lines of justice and inclusion is about not how the machines work, but how institutions work.”

Adonis added that a nuanced public discussion on ethical technology will continue to play a strong role in future innovation and policymaking.

“If we build strong, fundamental discourses in many places on something we know will have detrimental effects to society, it will permeate into stakeholders and state actors. They will know what to do, and civil society will know what to do.”

“I believe discourse and paradigm will shape the corridors of innovation.”

For Powell, AI governance means enforcing existing laws, particularly those relating to data protection, anti-discrimination and human rights “that apply to the institutional settings in which you put AI”.

“I would continue to advocate for thinking about institutional settings employing AI, rather than thinking about it as an object of regulation itself,” she added.

The editorial views expressed in this article are solely those of the author and do not necessarily reflect the views of the GSMA, its Members or Associate Members.