“This year we are going to put our trust in teachers rather than algorithms,” Gavin Williamson, the British education secretary, announced on Wednesday.

The government is eager to avoid a repeat of last year’s fiasco over exams in England’s schools. With tests cancelled because of the disruption caused by coronavirus, Ofqual, the exams regulator for England, created an algorithm meant to stop grade inflation by standardising teachers’ assessed grades.

For thousands of students, the algorithm was a disaster, downgrading a large share of results and sparking a political firestorm. Facing protests at which students chanted “f*** the algorithm”, Ofqual caved. The algorithm was scrapped, students received teacher-assessed grades and Ofqual’s chief regulator, Sally Collier, resigned.

The furore over the British exam system is emblematic of one of the pandemic’s less visible but most consequential trends.

Faced with a stark health emergency, governments have sharply accelerated their efforts to introduce more automated forms of decision making. Covid-19 has provided an impetus and a rationale for authorities to try out new systems, often without adequate debate, while offering surveillance technology companies opportunities to pitch their products as tools for the common good.

Students protest in London last summer against the downgrading of A-level results. The government is eager to avoid a repeat of the exams fiasco © AFP via Getty Images

But it has also prompted a sharp backlash. While the Ofqual fiasco was the most high-profile algorithmic incident, activists and lawyers have scored several victories against such systems across Europe and the US over the past year, in fields ranging from policing to welfare.

Even with these victories, however, activists believe there will be more disputes in the coming years.

“The public have never been asked about this new way of decision making,” says Martha Dark, co-founder of Foxglove, a digital rights organisation which threatened legal action over the algorithm before Ofqual scrapped it. “That’s storing away potential massive political problems.”

It’s not all ‘Black Mirror’

While algorithms are often associated with the social media sorcery of TikTok or Facebook, in practice the term covers a wide range of systems, from complex neural networks to much simpler tools. “You have this idea of Black Mirror sort of stuff,” says Jonathan McCully, legal adviser at the Amsterdam-based Digital Freedom Fund, citing the dystopian Netflix series. “Most of the time they’re simple actuarial tools.” The fund supports litigation to protect digital rights in Europe.

Because of their ubiquity, their usage requires careful consideration, says Fabio Chiusi, project manager at Berlin-based non-profit AlgorithmWatch. “We’re not advocating for a return to the Middle Ages,” he says, “but we need to ask if these systems are making our society better, not just whether [they are] making things more efficient.”

The case of the Ofqual algorithm foregrounds how difficult that calculation can be, says Carly Kind, director of the Ada Lovelace Institute, a UK research institute which studies the impact of AI and data on society. “They thought a lot about fairness and justice but they took one particular conception — that the algorithm should be optimised to promote fairness across different school years so [previous] students would not be unfairly disadvantaged.”

This approach meant that the system downgraded 40 per cent of A-level results from teachers’ predictions, sparking mass anger. Worse still, the results appeared to disadvantage children from poorer backgrounds and were seen to penalise outliers who had outperformed their schools’ historical results.
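The mechanics can be illustrated with a deliberately simplified, hypothetical sketch — it is not Ofqual’s actual model: if each school’s results are forced to follow that school’s historical grade distribution, cohort-level statistics stay stable from year to year, but an exceptional student at a historically average school is pulled down regardless of what their teacher predicted.

```python
# Hypothetical illustration of cohort-level standardisation, NOT Ofqual's model.
# Each school's new grades are forced to match the school's historical grade
# distribution, which keeps year-on-year averages stable but can override
# individual teacher predictions.

def standardise(teacher_grades, historical_grades):
    """Reassign this year's cohort the school's historical grade distribution.

    teacher_grades    -- list of (student, predicted_grade) pairs from teachers
    historical_grades -- grades the same school awarded in previous years
    """
    # Rank this year's students by teacher-predicted grade, best first.
    ranked = [s for s, _ in sorted(teacher_grades, key=lambda x: x[1], reverse=True)]
    # Resample the school's historical grades, best first, to the cohort size.
    template = sorted(historical_grades, reverse=True)
    n = len(ranked)
    resampled = [template[int(i * len(template) / n)] for i in range(n)]
    # The i-th ranked student receives the i-th historical grade.
    return dict(zip(ranked, resampled))


# A historically average school: a student predicted an A* (6) is capped at
# the best grade the school has produced before (4, i.e. a B).
teachers = [("Amy", 6), ("Ben", 4), ("Cho", 3), ("Dan", 2)]   # 6 = A*, 1 = E
history = [4, 3, 3, 2, 2, 1, 1, 1]                            # past cohorts
print(standardise(teachers, history))
# {'Amy': 4, 'Ben': 3, 'Cho': 2, 'Dan': 1} -- Amy's predicted A* disappears
```

Whatever the statistical case for holding cohorts to historical baselines, it is precisely this capping of individual outliers that drove the backlash.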

“It was the first time doing this work we’ve seen people shouting on the streets about algorithms,” says Ms Dark. “And it was the first time so many members of the public saw power being exercised [by an algorithm].”

Gavin Williamson: ‘This year we are going to put our trust in teachers rather than algorithms’ © AFP via Getty Images

But Ofqual’s U-turn was not Foxglove’s first victory over an algorithm that August. A matter of days earlier, the UK’s Home Office dropped an opaque algorithm used to sort visa applications, after the digital rights group and the Joint Council for the Welfare of Immigrants launched a legal challenge. In its legal submission, Foxglove said that the system’s secret list of “suspect nationalities” amounted to “speedy boarding for white people”.

In the US, activists have also scored victories against the misuse of one of the most controversial algorithmic technologies, facial recognition technology (FRT), which has long been criticised over racial bias, accuracy and its effects on privacy. In June, the American Civil Liberties Union of Michigan filed an administrative complaint after Robert Williams, a black Michigan resident, was wrongfully arrested on the basis of a faulty FRT match.

“The dangers of artificial intelligence and FRT more specifically . . . have been on the radar of many people for a long time,” says Nicole Ozer, technology and civil liberties director at the ACLU of California. “What we’ve seen in the last two years is a pushback against [inefficient, dangerous] AI.”

Cities in states such as Massachusetts passed laws banning the use of FRT by authorities, following steps taken by cities such as Oakland and San Francisco in 2019. Ms Ozer says that the ACLU of California had also been successful in campaigning against the passage of Assembly Bill 2261, which she said would have ultimately greenlit facial recognition in California.

There were also significant victories for activists in Europe last year, most notably the case of System Risk Indication (SyRI), a Dutch automated tool used for detecting welfare fraud. In February, the district court of The Hague ruled it was unlawful, because of the privacy implications of using large amounts of data from Dutch public authorities.

Among the data points which SyRI could use to assess whether an individual was likely to commit benefit fraud were education status, housing situation and whether they had completed a civic integration programme, a step required for a residence permit.
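SyRI’s actual scoring logic was never made public, but a purely hypothetical sketch of a rule-based risk indicator built on data points like those above might look as follows (the field names, weights and threshold are invented for illustration):

```python
# Purely hypothetical sketch of a rule-based welfare-fraud risk indicator.
# SyRI's real logic was never disclosed; field names and weights are invented.

from dataclasses import dataclass

@dataclass
class CitizenRecord:
    education_level: int          # e.g. 0 = none recorded, 3 = higher education
    housing_flagged: bool         # housing situation flagged by an authority
    completed_integration: bool   # civic integration programme completed

def risk_score(record: CitizenRecord) -> float:
    """Combine a few administrative data points into a fraud 'risk' score."""
    score = 0.0
    if record.education_level < 2:
        score += 0.3
    if record.housing_flagged:
        score += 0.4
    if not record.completed_integration:
        score += 0.3
    return score

def flag_for_investigation(record: CitizenRecord, threshold: float = 0.6) -> bool:
    # Anyone above the threshold is passed to investigators: the system
    # starts from suspicion rather than from trust.
    return risk_score(record) >= threshold

print(flag_for_investigation(CitizenRecord(1, True, False)))   # True
print(flag_for_investigation(CitizenRecord(3, False, True)))   # False
```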

“Like many of these systems, SyRI started with a very negative approach: the assumption that probably, people are committing fraud,” says Jelle Klaas, litigation director and lawyer at the Netherlands’ Committee of Jurists for Human Rights, which brought the case.

While the success was heartening, Mr Klaas warns the court battle was not the end of the struggle between activists and authorities. “I do think we’ve set a precedent, but we do see a lot of other algorithms being deployed,” he says. “I think the main problems with automated decision making are still to come.”

Passengers use their biometric passports at an ePassport gate equipped with a facial recognition system at the British border of the Eurostar at the Gare du Nord in Paris in 2017 © Philippe Lopez/AFP via Getty Images

AI arms race

Even before the pandemic, automated decision making and algorithmic tools had been proliferating for years, says Petra Molnar, associate director of the Refugee Law Lab at York University’s Centre for Refugee Studies.

“From a geopolitical perspective countries are engaged in a kind of arms race, where they’re trying to push to the forefront of innovation when it comes to algorithmic and automated decision making,” she says.

But the unprecedented health emergency has supercharged this trend, with authorities turning to experimental systems which they claim will improve the efficiency of their operations.

“We’ve really seen the pandemic used as a tech experiment on people’s rights and freedoms,” warns Ella Jakubowska, policy and campaigns officer at European Digital Rights, an advocacy group. “It’s completely treating our public spaces, faces and bodies as something to be explored and experimented with.”

A French police officer uses video surveillance at a school in Nice, France © AFP via Getty Images

Among the surveillance measures she points to were trials of cameras that could detect whether passengers were wearing masks at Châtelet-Les-Halles, one of the busiest metro stations in Paris. The trials ended abruptly in June after CNIL, the French data protection authority, said they contravened the EU’s General Data Protection Regulation (GDPR).

“I feel that in France, we are a bad example of how to use biometric surveillance,” admits Martin Drago, legal expert at French digital rights advocacy group La Quadrature du Net, who echoes Ms Molnar’s concerns over an “arms race”. “When we tell the police administration about the dangers of [abuse], they tell us if we continue our fight, then France will lose the [AI] race against China and the US.”

La Quadrature du Net racked up a number of successes last year. In February, it won the first case in France against the use of FRT, concerning systems controlling access to two secondary schools in the south of the country. “CNIL said to [the regional administration] it had not demonstrated why FRT was a necessity and why it was more than a human could do,” says Mr Drago.

But algorithmic surveillance remains widespread in France. Most concerning to activists is the Traitement des Antécédents Judiciaires, a criminal records database of 8m faces which the police can use for facial recognition. “The file is about everyone who has been in an investigation,” says Mr Drago, including people who were acquitted as well as those convicted of crimes. “It’s pretty bleak [that] millions of people can be subject to FRT.”

A camera for facial recognition on a bus in Cannes during the outbreak of the coronavirus in France last April © Reuters

Mr Drago is concerned about potential new provisions for data processing, part of the controversial French security law which has sparked protests. The articles would legalise the real-time transmission of data from drones and police body cameras to command centres, opening up even greater potential for facial recognition and other image analysis.

The law was adopted by the Assemblée Nationale, the French parliament’s lower house, last November. It now has to pass the upper house, with a plenary session due around March. If there is disagreement between the chambers on the exact wording of the text, the Assemblée Nationale has the final word, says Mr Drago.

“When [proponents of algorithmic systems] talk about what they’re doing now, China is always used as the scapegoat,” says Mr Drago. “They say ‘we’ll never be like China’. But we already have the TAJ files, we have Parafe [facial recognition] gates in some airports and train stations — it’s very strange.”

Domen Savic, chief executive of Slovenian non-profit group Drzavljan D, says new systems adopted during the pandemic are unlikely to vanish with it. “Post-pandemic, we’ll have to go back, analyse what was done because of coronavirus and say whether it’s OK, [but] that will be hard because they’re being implemented at a level [where] you can’t just unplug them. More and more technology is being implemented, with no off switch.”

UK Information Commissioner Elizabeth Denham has criticised data broking © Christopher Thomond/Guardian/Eyevine

Private sector opportunity

For surveillance tech providers and related industries, the pandemic has also provided an opportunity to sell their products as tools for managing public health — for example, rebranding cameras designed to detect weapons as thermal scanners that could spot potential infections. “We saw it post 9/11 and again during Covid-19 — companies see people are fearful and they see it as an opportunity to make money,” says Ms Ozer. “They rush in with snake oil, saying ‘Here’s what’s going to keep you safe’ even when the science doesn’t add up.”

In France, Mr Drago says coronavirus has been treated as an opportunity for surveillance companies to optimise their systems ahead of the 2024 Olympic Games in Paris. “You have a lot of French companies . . . that want to make a law to facilitate the experimentation of biometric surveillance [to allow for] a showcase of biometric systems in France for the Games.”

In the UK, data brokers have offered to help local authorities use their vast troves of personal and public information for tasks such as identifying individuals struggling in the aftermath of the crisis or those who are at high risk of breaking self-isolation.

In a statement last October, UK Information Commissioner Elizabeth Denham lambasted data broking as a sector “where information appears to be traded widely, without consideration for transparency, giving millions of adults in the UK little or no choice or control over their personal data”.

“Are we happy for decisions for our lives to be made with the assistance of data brokers, relying on the premise of mass surveillance and data collection?” asks Silkie Carlo, director of UK non-profit Big Brother Watch. “The pandemic has led to a wave of this kind of thing being normalised and seen as acceptable because of the exceptionalism of the circumstances.”

The Mexican border. Activists are concerned about experiments with border technology © Guillermo Arias/AFP via Getty Images

Activists are especially concerned about experiments with technology targeting immigrants and refugees. “Along the Mexico border, for example, there’s a lot of ‘smart border’ technology there,” says Ms Molnar. “Also the Mediterranean and Aegean seas . . . are becoming kind of a testing ground for a lot of the technology that then gets rolled out later.”

These deployments reflect the confluence of public health and populist politics. “Refugees and migrants and people who cross borders have for a long time been tied to these tropes of bringing disease and illness . . . and therefore they must be surveilled and tracked and controlled,” she says.

Robert Julian-Borchak Williams, who was arrested based on a faulty facial recognition match, at home in Michigan last year © New York Times/Redux/Eyevine

Technologies deployed along borders range from military-grade drones to social media analysis that predicts population movements, as well as “AI lie detectors” which claim to monitor facial expressions for signs of dishonesty. Airports in Hungary, Latvia and Greece have trialled such lie detectors at border checkpoints.

“I don’t know how these systems deal with issues of cross-cultural communication, and the fact that people often don’t testify or tell their stories in a linear way,” she says. “It’s incredibly disturbing that we’re using technology like this without appropriate oversight and accountability.”

Ms Molnar says there have been recent advances in accountability. “We’re starting to talk a little bit more about some of these structural ways [in] which we could conceptualise what algorithmic oversight look[s] like,” she says. “But we have a long way to go before we are at a point where we’ve been thinking about all the different ramifications that this kind of decision making can have on people’s rights.”

Copyright The Financial Times Limited 2024. All rights reserved.