‘Deep fake’ video can ruin reputations. Can life logs prevent that?

By Benjamin Powers

Deep fakes, AI-generated audio and video manipulations that create believable footage of events that never happened, have been around for years. One such manipulated video of President Obama, for example, was posted to YouTube more than a year ago. Motherboard broke the story of an AI-manipulated video of Wonder Woman actress Gal Gadot in a compromising position last December. But academics are warning that a tipping point for the technology is just two years away, and political parties are already using it to spread fake images of world leaders.

If it’s difficult to grasp the challenges such technology might pose, just consider how much controversy was inspired by an altered video of CNN reporter Jim Acosta ostensibly making contact with a White House intern as she tried to take his microphone. The doctored video, shared by conspiracy site Infowars, didn’t rely on AI, but it still produced a week-long debate among readers about its legitimacy. A deep fake wouldn’t just have sped up the video; it could have made it seem as if Acosta were shouting at the president at the same time, for example.

Already, governments and corporations are grappling with how to protect people from this new digital development, which, depending on how you look at it, may amount to a dystopian menace. The Electronic Frontier Foundation has written about how existing laws and regulations can be applied to counter deep fakes. Companies have sprouted up around image authentication. Even Facebook has been crafting a variety of practices to address them.

A recent paper published on the Social Science Research Network goes a step further, predicting “the development of a profitable new service: immutable life logs or authentication trails that make it possible for a victim of a deep fake to produce a certified alibi credibly proving that he or she did not do or say the thing depicted.”

It’s not far-fetched to think that companies would emerge to help us protect against the growing sophistication of technology such as deep fakes. These companies would be able to authenticate where we were, what we were doing, and what we were saying at any given point in time. While we often think of deep fakes as being targeted at celebrities or politicians, ordinary citizens could be targeted as well. A high-profile CEO might suffer significant reputational damage if a video emerged “showing” them using racial slurs, for example. Even banal behavior, like a teacher disparaging their students, might ruin a career. And consider the threat posed by deep fake revenge porn videos. Given that the average person might not even be aware this technology exists, they would have no idea how to fight it.
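The paper doesn’t describe how such a service would work, but the property it calls “immutable” is a familiar one: each log entry can commit to the hash of the previous entry, so that rewriting any past record breaks every hash that follows. The sketch below is a minimal, hypothetical illustration of that idea in Python; the LifeLog class and its fields are invented for illustration and are not drawn from the paper or any existing product.

    import hashlib
    import json
    import time

    class LifeLog:
        """Hash-chained log: every entry commits to the previous entry's hash,
        so altering any past record invalidates all later hashes."""

        def __init__(self):
            self.entries = []

        def append(self, location, activity):
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            record = {
                "timestamp": time.time(),  # a real service would use a trusted timestamp
                "location": location,
                "activity": activity,
                "prev_hash": prev_hash,
            }
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            self.entries.append(record)

        def verify(self):
            """Recompute each hash and check the chain links; True if untampered."""
            prev_hash = "0" * 64
            for entry in self.entries:
                body = {k: v for k, v in entry.items() if k != "hash"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                    return False
                prev_hash = entry["hash"]
            return True

    log = LifeLog()
    log.append("home", "reading")
    log.append("office", "meeting")
    print(log.verify())                    # True
    log.entries[0]["activity"] = "edited"  # tampering with history
    print(log.verify())                    # False

A real service would also need to anchor those hashes somewhere the user cannot rewrite, such as a third-party timestamping authority or a public ledger, which is precisely the arrangement that hands a detailed record of someone’s movements to an outside party.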

“You can likely imagine a scenario in which owners of facial recognition technology might try to leverage it to individuals after a major catastrophic event in which someone’s identity was deepfaked to negative ends,” said Britt Paris, a researcher at Data & Society who is compiling the Global Media Manipulation Casebook. The Casebook is a research platform that documents the tools, tactics, and techniques of groups who seek to game sociotechnical systems.

But there are inherent risks in allowing counter-fake companies to compile such a detailed log of our personal behavior. Not only would you likely not know how this data is being used after it’s gathered; there is also the question of how securely such a company would keep it. As I wrote previously about Amazon Rekognition, the engineer demonstrating it was excited not just about its capacity to recognize up to 15 faces simultaneously in a crowd, but also about its potential for “predictive tracking when paired with geolocation technology, letting you know where certain people were likely to be, based on their past movements.”

In a scenario where companies countering deep fakes were to emerge, they likely wouldn’t need to predict where you were going to be. They’d know, because you’d have given them this information and would be giving it to them constantly. Paris believes that action to combat deep fakes has to start with the platforms where they are disseminated.

“If we were to meaningfully hold platforms accountable for disseminating harmful disinformation, it would help,” says Paris. “The argument technology companies give as their excuse that they cannot combat disinformation or other nefarious uses is that they cannot detect this stuff ‘at scale.’ Perhaps then, it stands to reason, that the problem is with their unchecked scale. One thing they could do would be to think of ways to scale down so that they can meaningfully and effectively monitor content, or to train more human moderators to detect this stuff and treat these employees well.”

As deep fake technologies develop further, and confidence in journalism and other forms of public information erodes, it’s important to understand the drive toward solutions such as immutable life logs, and to think them through to the third and fourth degree to make sure the implications are something we’re comfortable with. While disinformation will continue to exist, giving up your personal privacy, even if the stakes are high, is not the solution.
