As COVID-19 lockdowns continue to restrict in-person production, advertisers are increasingly turning to digital technologies to produce new creative assets. Recently, there has been growing interest in using “deepfake” technologies to repurpose archival footage. A “deepfake” is a video or audio recording that has been digitally manipulated in a way that is difficult or impossible for viewers or listeners to detect, resulting in a piece of media that appears authentic.

A recent example is a commercial featuring a digitally altered depiction of SportsCenter anchor Kenny Mayne, currently 60. In the spot, a much younger Mr. Mayne appears in SportsCenter footage from 1998, seemingly speaking prophetically about the year 2020. This was accomplished by layering new video of Mr. Mayne’s mouth onto footage of his 38-year-old face.

“Deepfake” technology was already on advertisers’ radars before the pandemic. Last year, David Beckham appeared to speak nine languages through the use of “deepfake” technologies in an ad for Malaria No More, a U.K. charity encouraging people to join the fight against malaria. And in January, Doritos launched a campaign with the app Sway, which uses AI technology to let users create a video that appears to show them performing Lil Nas X’s dance moves from Doritos’ Super Bowl commercial.

“Deepfakes” Can Present A Valuable Personalization Tool for Brands

Studies show that a consumer with a more personalized experience is more likely to complete a purchase. AI company Tangent.ai is seeking to capitalize on this with an algorithm designed to help consumers determine what products will look like on them. For example, a consumer may change a model’s lipstick, hair color, ethnicity or race.

Zao, an app released in China last fall, shows how advertisers may take “deepfake” personalization a step further. Zao allows users to “star” in their favorite movies by seamlessly superimposing their faces onto actors’ bodies in well-known movie scenes. Applied to advertising, this technology can create the ultimate targeted advertisement by placing the consumer in the ad itself. Instead of hiring an expensive celebrity to appear in a commercial, advertisers can film anyone and use “deepfake” technology to replace that person with the consumer in the final ad. A common advertising goal is to get consumers to visualize themselves using a product; “deepfake” technology makes that easier than ever, because consumers will be able to literally see themselves using it.

Potential for Abuse

With uncertainty around how long lockdowns will continue to prevent in-person production, the use of digital technologies like “deepfakes” is likely to become increasingly prevalent. This technology is not without its issues, however. The use of “deepfake” technology raises ethical issues around disinformation and consent, and poses real risks of misuse and fraud, especially in politics and government. This may leave consumers with competing feelings when viewing a “deepfake” ad: a mixture of wonder at the technology’s capabilities and concern about misuse and fraud. In many cases, it is difficult to distinguish between ads featuring actual people and those that have been digitally altered, which may leave viewers feeling as though they’ve been “tricked.”

Advertisers should therefore be cautious in employing “deepfake” technology and be transparent about the manipulation, making it clear to viewers that what they are viewing is not real. And advertisers should of course always obtain consent from those appearing in their “deepfake”-powered ads.

With Great Power Comes Great Responsibility

As “deepfake” technology continues to advance, some experts say that we will soon be unable to tell the difference between genuine footage and footage that has been digitally altered. Algorithms currently in development aim to help viewers tell the difference, and the big tech companies are getting involved — Google recently released a database of thousands of “deepfake” videos to public institutions to assist them with training systems that detect altered media. Some experts have floated the idea of a digital watermarking system as a way of ensuring people are not defrauded by “deepfakes”. Blockchain has also been considered as a potential solution, as it can serve as a ledger that authenticates the source of a media asset and records any time the original is altered.
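The ledger idea above can be illustrated with a minimal sketch. This is an illustrative toy, not a description of any real product or blockchain: the names (`ProvenanceLedger`, `fingerprint`) are hypothetical, and a production system would add cryptographic signatures and distributed storage. The core mechanism, though, is just this: a content hash acts as a fingerprint of the media, each ledger entry commits to the previous one, and any alteration of the asset produces a fingerprint that no longer matches the registered one.

```python
import hashlib
import json

def fingerprint(media_bytes: bytes) -> str:
    """Content hash that changes if the media is altered in any way."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceLedger:
    """Append-only ledger: each entry commits to the previous entry's
    hash, so tampering with the recorded history is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, media_bytes: bytes, note: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "media_hash": fingerprint(media_bytes),
            "note": note,
            "prev_hash": prev_hash,
        }
        # Hash the entry itself so later entries can chain to it.
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self, media_bytes: bytes) -> bool:
        """True only if this exact media matches the most recent entry."""
        if not self.entries:
            return False
        return self.entries[-1]["media_hash"] == fingerprint(media_bytes)

ledger = ProvenanceLedger()
original = b"original footage bytes"
ledger.record(original, "source footage registered")
print(ledger.verify(original))                    # True: untouched asset
print(ledger.verify(b"digitally altered bytes"))  # False: altered asset
```

Even this toy version shows why the approach appeals to authenticity advocates: verifying an asset requires no judgment call about whether footage “looks real,” only a hash comparison against the registered original.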

As always, the law is struggling to keep pace with technology. Because “deepfakes” are still an emerging technology, legal safeguards are currently scarce, but that is likely to change: Maine lawmakers, for example, are considering a law that would prohibit the use of “deepfake” technology in political advertising.

In the end, while the use of “deepfake” technology in advertising is not without risk, this revolutionary technology could create myriad valuable opportunities for marketers if used responsibly.