The evolution of AI in recent years has made incredible technological advancements possible. Most of these are positive, but there have also been some more dubious developments. Deep fakes are one such development, and they carry serious implications for the future of democracy.
What are deep fakes?
Deep fakes are video clips that look and sound like ordinary footage, except they’re not real. Advances in AI have made it possible to train a model on photorealistic video and audio of somebody, then use that model to generate a brand-new, scripted video that is almost impossible to distinguish from the real thing.
Deep fakes first became a concern when they were used in the realm of pornography. Celebrities would have their faces transplanted onto the bodies of porn stars, which understandably caused a lot of distress and led to bans on deep fakes on sites like Facebook. Since then, deep fake technology has continued to advance and can now create convincingly real yet entirely fabricated videos of real people.
How are deep fakes made?
Deep fakes work best on people for whom plenty of voice clips, photos and videos are available. The AI is fed thousands of audio and visual examples, which it uses to build a model of what a person looks and sounds like. From that model and a given script, it can then generate a new video estimating how that person would look and sound.
How is Adobe helping combat deep fakes?
Adobe is using AI to help identify deep fake videos when humans can’t. A computer can analyse a video file and generate a hash, or digital ‘name’, for that file based on its contents. Once a video has been hashed, any attempt to alter the file will change the hash, signalling that the footage has been modified and may be a deep fake.
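The general idea can be sketched with an ordinary cryptographic hash. This is only an illustration using SHA-256 from Python’s standard library; the specific algorithm and workflow Adobe uses are not described here, so treat the details as assumptions.

```python
import hashlib

def file_hash(contents: bytes) -> str:
    """Return a SHA-256 digest, the file's digital 'name', for the given contents."""
    return hashlib.sha256(contents).hexdigest()

# Stand-ins for real video data: the original file and a tampered copy.
original = b"frame data of the original video"
tampered = b"frame data of the original video, with one altered frame"

# Even a small change to the file produces a completely different hash,
# so comparing the stored hash against a fresh one reveals tampering.
print(file_hash(original) == file_hash(original))  # same file: hashes match
print(file_hash(original) == file_hash(tampered))  # altered file: hashes differ
```

In practice the trusted hash would be recorded when the video is published, and any later copy whose hash no longer matches can be flagged for closer inspection.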
Adobe is leading the charge, using AI to analyse, identify and remove deep fakes across the web and to slow the spread of disinformation online.