Many HR departments use new technological tools to screen job candidates © Getty

The threat of artificial intelligence is not that robots are like us. The problem, according to scientists, is their inhumanity: we cannot make them care about justice or equality. 

And while talk of AI and robotics may conjure up images of human-like machines such as HAL 9000 or Wall-E, that is a category error, says Joanna Bryson, a computer science professor at the University of Bath: the technology has simply “become more human-looking”.

“There’s no way we can make AI care about the sanctions of human justice,” she says. The humans who designed the technology must ultimately take responsibility for it. That means regulation must catch up to hold programmers to account for their creations. 

If human-like AI robots were to gain legal standing of their own, companies could look to place the blame on them when things go wrong. Those taking action against machines would face a legal quagmire and a defendant for whom fines or jail time is meaningless.

Calls for robot rights have not gained widespread public support. The closest example is Sophia, which was granted citizenship by Saudi Arabia in 2017. Sophia has served mostly as a publicity vehicle for the Saudi government, however, and is unlikely to perform duties that could carry any legal consequences. It is telling that no other robot has gained citizenship since.

Fears incubated in popular culture are not entirely misguided, however: killer robots (officially called lethal autonomous weapon systems) are just one of many risks. A more pedestrian threat comes from automated hiring, where applicants are judged by AI that has learnt from historical data sets. 

“Discrimination that comes out of systems trained on data from people . . . reflects the behaviour of people [who previously carried out the job],” explains Yoshua Bengio, a professor in the University of Montreal’s department of computer science. 
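The mechanism is easy to demonstrate. The sketch below, written in Python on entirely synthetic data with illustrative variable names (none of it drawn from any real hiring system), fits a simple model to historical decisions that favoured one group, then shows the model reproducing that preference for two otherwise identical candidates.

```python
# Synthetic illustration: a model trained on biased historical hiring
# decisions learns to reproduce the bias. All names and numbers here are
# assumptions for the sketch, not data from any real system.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally qualified groups; "group" stands in for a protected attribute.
group = rng.integers(0, 2, n)          # 0 or 1
skill = rng.normal(0, 1, n)            # true qualification, same distribution

# Historical decisions: past recruiters favoured group 0 regardless of skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n) > 0.5).astype(float)

# Fit a plain logistic regression to the biased labels by gradient descent.
X = np.column_stack([skill, group]).astype(float)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted hire probability
    w -= 0.5 * (X.T @ (p - hired) / n)   # gradient of the log loss
    b -= 0.5 * (p - hired).mean()

# Score two candidates who differ only by group membership.
for g in (0, 1):
    p = 1 / (1 + np.exp(-(np.array([0.0, g]) @ w + b)))
    print(f"same skill, group {g}: predicted hire probability {p:.2f}")
```

Running this prints a noticeably higher hire probability for group 0, even though both candidates have identical skill: the model has learnt the recruiters’ behaviour, not the job’s requirements.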

In response to these concerns, ethical frameworks for AI are being written around the world. Last year, Prof Bengio co-authored the Montreal Declaration for Responsible AI, which named 10 pillars that AI development should not undermine, including equity, democratic participation and sustainable development. The UN, OECD and Council of Europe have all formulated their own guides.

Prof Bryson also helped draft a set of principles for robotics in 2011, built on the idea that humans, not robots, are always the responsible agents.

Yet the tension between safeguarding citizens and fostering innovation can pull policymakers in opposite directions. “You can mitigate [ethical] issues,” argues Prof Bengio. “But it comes at a cost [for companies and researchers]”. 

By 2018, 26 countries had created national AI strategies. Though many mention ethics, these are often little more than general declarations that rights should be preserved.

As with data and privacy regulation, the EU is pressing ahead with rulemaking for AI. The guidelines published by the European Commission in April, drawn up by a group of high-level experts, are built around the idea of “Trustworthy AI”.

They provide clear ethical principles and a checklist to be used when developing AI systems. The principles will now be tested by companies and other stakeholders in a pilot project to start in the summer of 2019.

The EU’s regulatory preparedness contrasts with the countries which are leading in AI research. “The US was on the path to really forward-thinking AI national policy under the Obama administration. Now, we’re not,” says Mark Latonero, a fellow at the USC Annenberg Center on Communication Leadership & Policy. 

China’s AI strategy has just two passing references to ethics. But the country is not alone: ethics remains a fundamentally international problem. “AI will have the tendency to scale very quickly without really any regards to national borders,” says Mr Latonero. 

Democracies may balk at handicapping their AI industry’s growth if geopolitical rivals do not follow suit, particularly when national security or business advantage could be at risk. And co-operation between friendly states with shared values is a big step away from enforcing ethical codes on a superpower like China.

Prof Bengio suggests the pressure of the international order could be an effective means of establishing a global ethical code. “Just like with climate change, we have to stigmatise countries which don’t want to play by the rules necessary for the benefit of the whole planet.”

He believes public pressure could convince even authoritarian states that the technology needs ethical guidelines. “If you’re China’s leadership, you don’t want to have a massive reaction against your policies from your own people.”

Yet he adds: “For that to happen, the Chinese have to be part of the discussion.” 
