Face masks won’t go gently in heavily surveilled Hong Kong

By Caroline Bottger

Thousands of protesters in Hong Kong marched last weekend in defiance of a face mask ban imposed by the city's chief executive, Carrie Lam. Protesters have used face masks to protect themselves both from tear gas and from identification by the Hong Kong police and their employers. Hong Kong has an extensive CCTV network, and some Hong Kong-based firms have cautioned employees against voicing support for or participating in the protests, for fear of angering China, their biggest market.

China, which has sovereignty over Hong Kong, is at the vanguard of facial recognition technology. Since 2016, it has invested billions of dollars in developing the technology for use by the state and by private companies, including banks and smartphone makers. On Oct. 7, the U.S. government added 28 Chinese organizations to a trade blacklist, barring them from buying U.S. technology without government approval, over their alleged roles in human rights violations against Uighur Muslims in Xinjiang. Among them are China's AI unicorns SenseTime, Megvii and Yitu.

Elsewhere in the world, facial recognition technology is gaining traction, but secretive use by private companies isn't winning many public advocates. A British court recently backed its use by police in Wales, ruling that live deployment on members of the public does not violate privacy rights. On Oct. 4, The Guardian reported that London's Metropolitan police had passed images of seven people to a private facial recognition company under a secret agreement dating to 2016, apparently with no oversight from either the mayor's office or the police force itself. "The private sector uses of facial recognition need a lot of attention because there is less regulation and governance here," said Pete Fussey, a criminologist at the University of Essex who specializes in digital surveillance.

There is greater trust in the technology across the pond: 56 percent of Americans trust the police to use facial recognition responsibly. But this varies greatly by race: about 60 percent of white respondents said they trust law enforcement with the technology, compared with only 43 percent of black respondents.

An even higher share, 73 percent, believes that facial recognition can accurately identify people, a confidence some experts find troubling. The technology has led to the misidentification of innocent people, and with national governments slow to act, advocacy groups are stepping in. In April, AI researchers sent an open letter to Amazon urging it to stop selling its facial recognition software to police, citing the software's higher error rates for darker-skinned and female faces. "Flawed facial analysis technologies are reinforcing human biases," wrote Morgan Klaus Scheuerman, a PhD student at the University of Colorado Boulder. San Francisco banned government use of the technology in May, and more U.S. cities are considering similar measures.

Photo by Kamil Feczko on Unsplash
