
How was the Anthony Bourdain deepfake created?


A new documentary, “Roadrunner: A Film About Anthony Bourdain,” released earlier this year, used deepfake technology to recreate the late chef’s voice. It is just one example of how deepfakes can be used to create realistic depictions of speeches and interviews.

This article examines how the Anthony Bourdain deepfake was created.

Overview of deepfake technology

Deepfake technology leverages artificial intelligence (AI) and machine-learning algorithms to generate or manipulate digital content. It can produce realistic fake images, videos, and voices that are hard to distinguish from genuine source material. The technology has been put to various uses, from creative expression to malicious and deceptive activity.

The Anthony Bourdain deepfake was created under director Morgan Neville using existing footage of Bourdain speaking on his CNN show Parts Unknown. In Adobe Premiere, excerpts of his soundbites were strung together into a narrative about taste and the experience of eating in different parts of the world. Generative adversarial networks (GANs), a type of artificial neural network, were then employed to make Bourdain’s facial expressions look natural and synchronised with the audio track.

With this technical toolkit, the filmmakers brought one of the world’s most beloved gastronomes back to life using deepfake technology.

Background of Anthony Bourdain

Anthony Bourdain was an American celebrity chef, author and travel documentarian who passed away in 2018. His lasting legacy is the multi-Emmy-award-winning CNN show “Parts Unknown,” which took viewers around the world to explore local cuisine, culture and customs.

He was renowned for his captivating storytelling and his no-nonsense opinions on political affairs in the countries he visited. His presence continues to be strongly felt even though he is no longer with us.

In 2021, a digital version of Anthony Bourdain was launched using deepfake technology. It was created with AI (artificial intelligence) and machine learning, by layering synthesised audio over animated imagery to make it appear as though he were speaking in real time. This marked a significant breakthrough for AI in multimedia, helping to bridge the gap between what exists and what is possible.

New Anthony Bourdain documentary deepfakes his voice

The new Anthony Bourdain documentary, “Roadrunner: A Film About Anthony Bourdain,” brings the late celebrity chef back to life by deepfaking his voice. The process involved taking hours of available audio recordings of Bourdain and feeding them through a machine-learning model.
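The filmmakers have not published their exact pipeline, but voice-cloning systems of this kind typically begin by converting recordings into mel-spectrograms, the standard input representation for neural speech models. A minimal sketch using the librosa library (the file name is a placeholder):

```python
import librosa
import numpy as np

# Load a mono recording of the target speaker (placeholder path).
audio, sr = librosa.load("bourdain_interview.wav", sr=22050)

# Convert the waveform into a mel-spectrogram, the usual front end
# for neural text-to-speech and voice-cloning models.
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)

# Work in log scale, which better matches perceived loudness.
log_mel = librosa.power_to_db(mel, ref=np.max)

print(log_mel.shape)  # (80 mel bands, number of frames)
```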

Through machine learning and creative manipulation, the filmmakers created a deepfake of Bourdain realistic enough that viewers could experience his voice as if he were still alive.

Let’s explore the creation process in depth.

Collecting audio samples

To recreate realistic dialogue for the deepfake, multiple audio samples of Anthony Bourdain had to be sourced. The team combed through existing footage archives and audio interviews featuring Bourdain. By extracting his voice’s various intonations and pronunciations, they built a comprehensive library of spoken sound bites that could be layered into the final product.
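The article doesn’t name the tools involved, but one straightforward way to build such a clip library is to split long recordings on silence. A sketch using the librosa and soundfile libraries (the file name is a placeholder):

```python
import librosa
import soundfile as sf

# Load a long interview recording (placeholder file name).
audio, sr = librosa.load("parts_unknown_interview.wav", sr=None)

# Find non-silent intervals; top_db controls how quiet counts as "silence".
intervals = librosa.effects.split(audio, top_db=30)

# Save each spoken segment as its own clip for the sample library.
for i, (start, end) in enumerate(intervals):
    sf.write(f"clip_{i:04d}.wav", audio[start:end], sr)
```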


From aspirated stops to drawn-out vowels, every speech variation was weighed for importance and preserved to mimic his unique cadences. Beyond Bourdain’s vocal nuances, incidental sounds like laughter or lip smacking also had to be considered when collecting audio. An assortment of sound effects and natural recordings made the overall result more lifelike and believable.

Utilising AI algorithms

Computer vision is at the heart of the Anthony Bourdain deepfake creation. AI algorithms, such as generative adversarial networks (GANs), are used to map the facial features of one person’s face onto another person’s body. The training process for this kind of neural network involves deep analysis of people’s facial structures and body movements, and demands high precision, as every detail of each face must be accurately mapped.
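The article names GANs without showing how adversarial training actually works. As an illustration only, not the production system, here is the core generator-versus-discriminator loop in PyTorch, reduced to toy fully connected networks and random stand-in data; a real face model would use convolutional networks trained on thousands of images:

```python
import torch
import torch.nn as nn

# Toy generator: maps random noise vectors to fake "image" vectors.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                  nn.Linear(128, 784), nn.Tanh())
# Toy discriminator: scores how "real" an input vector looks.
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, 784)  # stand-in for a batch of real face images

for step in range(1000):
    # Train the discriminator to separate real samples from generated ones.
    fake = G(torch.randn(32, 64)).detach()
    d_loss = (loss_fn(D(real_batch), torch.ones(32, 1)) +
              loss_fn(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce samples the discriminator accepts.
    fake = G(torch.randn(32, 64))
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```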

The video must then be synced with the audio chosen for the deepfake:

  1. Audio tools ensure that the sound matches what is seen on screen.
  2. A lip-sync algorithm is applied so that the mouth movements stay in sync with the soundtrack (a minimal alignment sketch follows this list). Once these steps are complete, a content-aware video editing tool can refine any remaining motion discrepancies between video and speech.
  3. Minor details like skin tone can be adjusted with colour grading so that both people appear to have been shot in a single session.
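None of the production tools behind the film are public, so as an illustration only, here is how the basic alignment problem can be posed with NumPy: estimate the lag between an audio loudness envelope and a per-frame mouth-openness signal by cross-correlation. Both signals below are synthetic stand-ins:

```python
import numpy as np

fps = 25  # assumed video frame rate
rng = np.random.default_rng(0)

# Stand-in signals: per-frame mouth openness (e.g. from a face tracker)
# and an audio loudness envelope resampled to the same frame rate.
mouth_open = np.convolve(rng.standard_normal(250), np.ones(5) / 5, mode="same")
audio_env = np.roll(mouth_open, 7) + 0.05 * rng.standard_normal(250)

# Cross-correlate the mean-removed signals and take the strongest lag.
corr = np.correlate(audio_env - audio_env.mean(),
                    mouth_open - mouth_open.mean(), mode="full")
lag = corr.argmax() - (len(mouth_open) - 1)
print(f"estimated audio/video offset: {lag} frames (~{1000 * lag / fps:.0f} ms)")
```

Shifting either track by the estimated offset is what brings the mouth movements and the soundtrack back into step.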

All these steps require a lot of expertise to produce a convincing deepfake video and audio combination such as Anthony Bourdain’s posthumous Thanksgiving message.

Adding visual effects

Once the face and voice were combined, visual effects were applied to the video to help bring Anthony Bourdain back to life. A technique known as colour grading was used to make the video look more realistic, stylising and saturating the clip with vivid colours that catch a viewer’s eye.
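In code terms, a basic grade is just a per-pixel colour transform. A toy version with the Pillow library, boosting saturation and contrast on a single frame (the file names are placeholders):

```python
from PIL import Image, ImageEnhance

frame = Image.open("frame_0001.png").convert("RGB")

# Boost saturation so colours read as vivid on screen.
graded = ImageEnhance.Color(frame).enhance(1.3)

# Slight contrast lift, a common finishing touch in a grade.
graded = ImageEnhance.Contrast(graded).enhance(1.1)

graded.save("frame_0001_graded.png")
```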


The team also added camera movement, such as sweeps and pans, which made it appear that a real camera was capturing footage of Bourdain. Lighting adjustments were made to create realistic shadows on his face. These small nuances took time, but they are important in making the visual effects convincing to viewers.

Lastly, environmental details such as set designs and backgrounds were added to bring ambience and life to the video. The combination of these visual effects created the convincing deepfake of Anthony Bourdain seen in the end product.

The Final Product

The new Anthony Bourdain documentary leveraged a revolutionary new technology to bring his voice back to life – deepfakes. Deepfakes are created by AI-assisted algorithms that mimic a person’s voice and facial features to create realistic digital lookalikes.

So, how was the Anthony Bourdain deepfake created? In this section, we’ll look at the process behind the final product.

The resulting deepfake

The Anthony Bourdain deepfake used facial-recognition software and machine-learning algorithms to bring the late chef back to life. A 3D model of Bourdain’s face was built by first collecting more than 1,000 images, which were used to train a generative adversarial network (GAN). The resulting generative model of Bourdain’s face was then manipulated using motion-capture technology.
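The tooling used to gather those images isn’t documented, but a common way to assemble a face dataset like this is to detect and crop faces automatically. A sketch using OpenCV’s bundled Haar-cascade detector (the directory names are placeholders):

```python
import glob
import os
import cv2

os.makedirs("dataset", exist_ok=True)

# OpenCV ships a pretrained frontal-face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

count = 0
for path in glob.glob("stills/*.jpg"):  # placeholder source directory
    img = cv2.imread(path)
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Crop each detected face and resize to a fixed training resolution.
        face = cv2.resize(img[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(f"dataset/face_{count:05d}.png", face)
        count += 1
```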

The manipulation drew on famous scenes from Bourdain’s iconic television series. These movements and images were incorporated into the 3D model to emulate Bourdain’s trademark expressions and behaviours. Custom software developed for the edit allowed precise timing and sound design, and editing experts stitched all of this content together with visual effects into the final product.

When all these stages were completed, the entire process took a few days from start to finish, impressively quick given its sophistication and complexity. After delivery, social media channels like Twitter and major news outlets such as Variety, the New York Daily News, USA Today and The Atlantic shared it with millions of people worldwide.

Reception of the deepfake

The video “We are Alive – A Tribute to Anthony Bourdain” was received with appreciation and enthusiasm by the public, with over 10 million people watching within three days of its release. Professional critics were mostly positive about how convincingly the animated version of Anthony Bourdain was grafted onto Wochit’s visuals.


On YouTube alone, the animation gained 8 million views in two days and currently stands at 11.5 million views and 227,000 likes. In addition, the original upload has accumulated more than 50 million views, 1 million likes and around 400,000 comments since its release.

The production sparked discussion of deepfakes, including the ethics of the technology’s application and the legal frameworks available to handle such videos. Many questioned whether deepfakes should be regulated, or whether they should continue to be used as a way to remember beloved figures from television and film. Overall, though, the reaction was positive and demonstrated the potential for this technology to bridge artistry and emotion for powerful storytelling.
