
Seeing Is Disbelieving

The growing presence and negative impact of deepfakes in digital content

In 2014, researchers began laying the groundwork for what is now “synthetic media” – better known as “deepfakes.” These are images or videos of people that look real but are entirely artificial, in every sense of the word.

Leveraging artificial intelligence, they developed "generative adversarial networks" (GANs), which pit two neural networks against each other: a generator creates fake, synthetic content while a discriminator tries to tell it apart from the real thing. As the two compete, each learns more and more, feeding off those learnings to continuously improve the images and make them more realistic.
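To make that adversarial loop concrete, here is a minimal sketch (my own illustration under assumed details, not the original researchers' code) of a toy GAN in PyTorch. Instead of faces, the generator learns to imitate a simple one-dimensional bell curve while the discriminator tries to catch its fakes; every name and parameter here is just a placeholder for the idea.

```python
# Minimal, illustrative GAN sketch; assumes PyTorch is installed.
# A generator learns to imitate samples from a 1-D Gaussian while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

REAL_MEAN, REAL_STD = 4.0, 1.25   # the "real" data distribution to imitate
NOISE_DIM, BATCH = 8, 128

generator = nn.Sequential(nn.Linear(NOISE_DIM, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = torch.randn(BATCH, 1) * REAL_STD + REAL_MEAN
    fake = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (bce(discriminator(real), torch.ones(BATCH, 1))
              + bce(discriminator(fake), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator call its fakes real.
    fake = generator(torch.randn(BATCH, NOISE_DIM))
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean (~4.0).
print(generator(torch.randn(1000, NOISE_DIM)).mean().item())
```

Swap the bell curve for millions of face photos and scale both networks up by orders of magnitude, and you have the basic recipe behind those unnervingly realistic fake faces.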

To this day, you can still view AI-generated "human beings" at thispersondoesnotexist.com, and it's really kind of scary.

Image caption (September 17, 2019): A collage of hyperrealistic AI-generated human faces, created by a generative adversarial network developed by Nvidia researchers.

GAN You Tell the Difference?

Why would a content strategist (like me) want to talk about fake people like this? Well, because it's my job to try to create great content that reaches actual humans with helpful, honest information and meaningful messages, including images of real people, places and things. This tech is contrary to everything I try to do on a daily basis. It's the stuff that almost makes me want to go back to the days of T-squares and typesetting. Almost, but no, thank you. That would be barbaric.

While there are seemingly good (albeit superficial) things that could come out of this technology (we’ll get to those in a minute), obvious, unethical and negative misuses abound. Not surprisingly, the U.S. military, intelligence community and law enforcement agencies have been worried about this for years. From public trust to political manipulation to outright cybercrime, deepfakes could have serious implications and, as usual, technology advances at the speed of 1s and 0s, while government definitely does not.

For what it's worth, Texas and California prohibit political deepfakes leading up to an election (why would any state allow this?), and New York forbids any use of a celebrity's or performer's synthetic likeness without their consent for 40 years after their death. But with such sparse regulations in place, the FBI warned in early 2021 that Chinese and Russian entities were creating deepfake "journalists" and "media celebrities" to spread anti-American messages across social media.

Platforms See the Problem

Although Facebook banned deepfakes in January 2020, according to BusinessInsider.com there are still loopholes that have been criticized by Congress, such as the acceptable use of "satirical" deepfakes.

Around the same time, Wired featured an article pointing out that although Facebook policies were put in place to ban certain types of deepfakes (nudity, violence, hate speech), there appeared to be no immediate solution to the vast amount of doctored and/or intentionally mislabeled photos and videos on the platform. Roughly a month after the 2020 Facebook ban, Twitter also came out with a similar policy, stating it would "ban deepfakes and other manipulated media" that could cause "serious harm."

Today, Facebook/Meta proudly tells us that, since 2016, they’ve been building a team of 35,000 people who “work on safety and security” to “reduce the spread of misinformation and provide more transparency and control around political ads.”

Great. But the relatively limited scope of political theater, and the potential impact of fraudulent impersonation there, pale in comparison to the fields of marketing and entertainment. The most prevalent use of this technology to date is in the porn industry, where ambitious "filmmakers" are deepfaking female celebrity faces into their online videos in order to make a buck.

“How is all of this even legal?” you ask. Because most deepfakes – just like art and literature and TikTok videos – are considered a form of free speech.

Using the Power for Good

So, what good, if any, might come out of synthetic media? The answer lies in human nature. There are good humans, after all. And because technology isn't inherently "bad" or "good" on its own, in the right hands it has the potential to be helpful. (Although all the positive examples of deepfakes I've found to date are, ironically, pretty shallow.)

For instance, the AI used for creating deepfakes could be used to create digital avatars of actors, thus saving studios both time and money if, let’s say, that actor is off in the Bahamas when they desperately need that one last scene. Or perhaps an actor does a spot for a global company and, because of synthetic media, only needs to show up for a single shoot, because their avatar can be manipulated to speak other languages.

The greater goal, some say, is to create the opportunity for anyone to create their own major motion picture without the exorbitant expense of equipment, studio time – or even actors. In fact, an actor’s avatar could even be “employed” posthumously – through some sort of weird tribute or trust fund contribution.

Speaking of Trust

Personally, I don't see the good outweighing the bad here, but only time will tell. The larger question is: Once everyone knows that deepfakes exist, how will anyone be able to trust anything they see? Any political ad on social media, any video capturing a celebrity breakdown, any personal endorsement for any brand is now immediately suspect.

Current attempts at legislation to curb deepfaking/synthetic media are anything but cohesive. Independent of legislation, social platforms, online publishers and every other reputable medium will need to set stricter rules and filters. It will take a much stronger stance to set meaningful ground rules that still protect free speech while putting legal handcuffs on those who practice deception and wish us harm.

Cam Campbell

Yes, that's really her name and, no, her parents aren't cruel; she married into it. After 30 years navigating the traditional and digital advertising space, Cam joined ModernImpact to become our Chief Content Strategist. But she's not just about creating content; she's focused on metric-driven marketing messages with underlying, clever wit (when applicable). Working from our Denver office, she's a master at Slack and video conferencing, working so closely with her MI colleagues that she can smell the creativity streaming all the way from the Minneapolis office. Join Cam on her ever-evolving adventure through Marketingland (and occasionally Mothertown and INeedSomeWineville).

