Guest Column | July 15, 2020

Canaries In A Coal Mine: Can Better Data Integrity Thwart A Looming Deepfake Threat?

By Kip Wolf, Tunnell Consulting, @KipWolf


Imagine checking your email in the very near future and finding a message forwarded by an associate outside your organization that says, “I had no idea that [your key executive] supported [the current sensitive social issue].” You follow the hyperlink in the email, and it opens a video of that executive seemingly speaking with passion about the issue and about how they and your brand support it. Consider how the faked video would damage reputations. Imagine the consequences if it was created intentionally by a disgruntled employee, a competitor, or an activist who chose your company at random to detract, influence, or incite. How could it erode public confidence in your brand? How could it negatively impact business or public health? How could this happen, and what can we do to prevent it?

What Is Deepfake?

If you think this is purely science fiction, think again. The technology exists and the source content is readily available to facilitate such activities, and the incentives to carry them out are multiplied by: (1) increased idle time during stay-at-home and quarantine conditions and (2) increased social unrest and restlessness as the public looks for someone to blame for its current melancholy and economic troubles.

If you are not familiar with the term deepfake, it is defined by Merriam-Webster as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” The etymology combines “deep,” as in deep learning (“machine learning using artificial neural networks with multiple layers of algorithms”), with “fake,” meaning “not true, real, or genuine” (as in “counterfeit” or “sham”). Deepfakes are a recent construct; Dirk Kanngiesser notes that “deepfake activity was mostly limited to the artificial intelligence (AI) research community until late 2017.”1

Deepfakes started as entertaining face-swaps and obvious spoof videos, but they have since evolved beyond their comedic origins toward more sinister motives. Some deepfakes are humorous and obviously faked, like overlaying the faces of Jeff Bezos and Elon Musk on video from Star Trek’s pilot episode “The Cage.”2 Now, however, “pornographic content accounts for 96 percent of deepfake content online, according to a report by the company Deeptrace, which is developing tools to unmask fake content.” These sinister deepfakes and fabricated videos have become real threats to public reputations, from celebrities to political figures. The United States judicial system is unprepared to deal with the risk and reality of deepfake evidence and “needs to learn how to combat the threat.”3

Risks And Recommendations

The next evolution of deepfake security risk threatens economic and political power. Deepfake videos are likely to influence politics, elections, and business. “Sharon Nelson of the technology firm Sensei Enterprises spoke about the evidentiary challenges facing lawyers in an era where it can be problematic figuring out what is real and what is not,” says Matt Reynolds. Further, deepfakes “threaten the rule of law, because people no longer know what the truth is.”3

While we can’t prevent deepfakes, we can improve our security posture. Before we can mitigate a risk, we must first identify and understand it. Limit the amount of your personal photo and video content online. While it is impossible to eliminate such content entirely, you can likely reduce the number of photos and videos of you and your firm. And remember that privacy is a fallacy. There is no such thing; there are only barriers to content. Nothing is private anymore. In a professional setting, expectations of privacy are largely irrelevant, as the content is often discoverable during legal action. Assume that if your content is online, it is accessible. Period.

Here are some things you can do to help protect yourself and your company from deepfakes, using the data integrity principles of ALCOA (attributable, legible, contemporaneous, original, and accurate) both to prevent content misuse and to support forensic analysis in the event of faked photos or videos. The recommendations below are framed in the ALCOA context; some are primary contributors to deepfake risk mitigation and others lesser ones.

  • Attributable: Use metadata to attribute your content so it is more easily cataloged for security and so fakes may be identified by missing metadata.
  • Legible: (Applicable to physical storage conditions) Ensure that security logs, physical labels, and the like are clearly legible to prevent mistakes, misinterpretation, and misappropriation of content.
  • Contemporaneous: Like Original, check metadata. Confirm date/time stamps and geotagging to ensure they were contemporaneously recorded.
  • Original: Like Attributable, check metadata. Confirm checksums, date/time stamps, and geotagging to ensure originality.
  • Accurate: Avoid errors when saving files and in choosing where your content is stored (e.g., accidentally saving to the wrong cloud). Be careful where you upload. Identify and change default storage locations to secure locations for private and professional content.
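To make the Attributable, Contemporaneous, and Original checks above concrete, here is a minimal sketch (not from the article) of how a media file's integrity record might be captured and later re-verified. The function names, the attribution tag, and the use of file modification time as the recorded timestamp are all illustrative assumptions, not an established tool or standard.

```python
import hashlib
import os
from datetime import datetime, timezone


def fingerprint(path: str) -> dict:
    """Build a minimal integrity record for a media file: a SHA-256
    checksum (Original), a recording timestamp (Contemporaneous), and
    a source tag (Attributable)."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large video files are not read into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "path": path,
        "sha256": sha256.hexdigest(),
        # File modification time stands in for a capture timestamp here.
        "recorded_utc": datetime.fromtimestamp(
            os.path.getmtime(path), tz=timezone.utc
        ).isoformat(),
        "source": "corporate-media-library",  # hypothetical attribution tag
    }


def verify(path: str, record: dict) -> bool:
    """Re-hash the file and compare against the stored record; a mismatch
    flags possible tampering or a fabricated copy."""
    return fingerprint(path)["sha256"] == record["sha256"]
```

In practice, records like these would be generated when content is published and checked during any forensic review, so that a circulating video missing a matching checksum and metadata trail can be treated as suspect.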

Consider how you and your company can improve understanding of deepfakes and their potential impact. This isn’t just a passing fad but a new norm. The threats may evolve quickly from simply embarrassing videos to making false claims, counterfeit marketing, or brand misrepresentation. These threats could have seriously negative impacts on personal and company reputations, shareholder value, or patient health.

The bottom line is that we can no longer trust anything we see on a screen. “This risk is no longer just hypothetical: there are early examples of deepfakes influencing politics in the real world. Experts warn that these incidents are canaries in a coal mine.”4 The biggest impacts are still to come. We must all do our part to limit the risk.

References:

  1. Kanngiesser, Dirk. 2018. “Toxic Data: How ‘Deepfakes’ Threaten Cybersecurity.” Dark Reading. December 27, 2018. https://www.darkreading.com/application-security/toxic-data-how-deepfakes-threaten-cybersecurity-/a/d-id/1333538.
  2. “The Fakening - Jeff Bezos and Elon Musk Star Trek Deepfake.” The Fakening. February 20, 2020. https://fakening.com/2020/02/20/jeff-bezos-and-elon-musk-star-trek-deepfake/.
  3. Reynolds, Matt. 2020. “The Judicial System Needs to Learn How to Combat the Threat of ‘Deepfake’ Evidence.” ABA Journal. February 28, 2020. https://www.abajournal.com/news/article/aba-techshow-experts-warn-of-deepfake-threats-to-justice-system.
  4. Toews, Rob. 2020. “Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared.” Forbes. May 25, 2020. https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/.

About The Author:

Kip Wolf is a principal at Tunnell Consulting, where he leads the data integrity practice. He has more than 25 years of experience as a management consultant, during which he has also temporarily held various leadership positions at some of the world’s top life sciences companies. Wolf temporarily worked inside Wyeth pre-Pfizer merger and inside Merck post-Schering merger. In both cases he led business process management (BPM) groups — in Wyeth’s manufacturing division and in Merck’s R&D division. At Tunnell, he uses his product development program management experience to improve the probability of successful regulatory filing and product launch. He is an information scientist who consults, teaches, speaks, and publishes on topics of data integrity and quality systems (see www.kipwolf.com). Wolf can be reached at Kip.Wolf@tunnellconsulting.com.