What Is a Deepfake, and How Does It Work?

Computers have steadily gotten better at replicating reality. Modern cinema, for instance, relies heavily on computer-generated scenery, sets, and actors in place of the real locations and props that were once common, and these sequences are often difficult to distinguish from reality. Deepfake technology has recently been in the news. Deepfakes, the most recent evolution in computer imagery, are created when artificial intelligence is trained to replace one person's likeness in recorded video with another's.

What is a Deepfake?

Deepfake technology can seamlessly insert anyone in the world into a video or photograph in which they never actually appeared. Such capabilities have existed for decades; this is how the late actor Paul Walker was brought back for the seventh installment of the Fast & Furious franchise. But it used to take an entire studio of experts a year to produce those effects. Now, deepfake technologies, built on automated computer-graphics and machine-learning systems, can generate convincing images and video far faster.

However, there is a great deal of confusion around the term "deepfake," and computer-vision and graphics researchers dislike it. It has become a catchall label for everything from cutting-edge AI-generated video to any image that might be fake.

Much of what is called a deepfake is not: the contentious "crickets" video of the U.S. Democratic primary debate published by the campaign of former presidential contender Michael Bloomberg, for instance, was produced with standard video-editing techniques. No deepfake technology was involved.

How do Deepfakes work?

Although the ability to automatically swap faces and create credible, realistic-looking synthetic video has some interesting benign applications (in film and video games, for example), it is a dangerous technology with disturbing implications. One of the earliest real-world applications of deepfakes was the creation of synthetic pornography.

In 2017, a Reddit user going by "deepfakes" created a forum for pornographic videos that featured face-swapped actors. Since then, deepfake porn (especially revenge porn) has repeatedly made headlines, badly damaging the reputations of celebrities and other prominent people. According to a survey by Deeptrace, 96% of the deepfake videos found online in 2019 were pornographic.

Deepfake video has been used in politics as well. In 2018, for instance, a Belgian political party published a video of Donald Trump delivering a speech urging Belgium to withdraw from the Paris climate agreement. Trump never gave that speech; it was fabricated. Nor was it the first time deepfakes were used to create deceptive videos, and tech-savvy political experts are bracing for a coming wave of fake news built on convincingly realistic deepfakes.

Not all deepfake videos constitute a threat to democracy. There is an abundance of deepfakes made for fun and satire, such as clips that answer questions like "What would Nicolas Cage look like in Raiders of the Lost Ark?"

Who Created Deepfakes?

The most impressive examples of deepfakes tend to come from university labs and the startups they spawn: a widely reported video of soccer star David Beckham speaking fluently in nine languages, only one of which he actually speaks, was built on code developed at the Technical University of Munich in Germany.

And researchers at the Massachusetts Institute of Technology have published an eerie video of former U.S. President Richard Nixon delivering the contingency speech prepared for the nation in case Apollo 11 had failed.

However, those are not the deepfakes that governments and academics are worried about. As non-consensual pornographic deepfakes and other problematic uses have shown, deepfakes do not need to be lab-grade or high-tech to damage the social fabric.

Indeed, the term "deepfake" comes from the genre's prototypical example, created in 2017 by a Reddit user called "deepfakes," who used Google's open-source deep-learning framework, TensorFlow, to replace porn performers' faces with those of actors. Most of the DIY deepfakes found in the wild today are descended from that initial code, and while some may be fun thought experiments, none are convincing.
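The core recipe behind that early code, and behind most of the DIY face-swap tools descended from it, is an autoencoder with one shared encoder and a separate decoder per identity: the encoder learns the pose and expression information common to both faces, each decoder learns to render one specific person, and the swap consists of encoding a frame of person A and decoding it with person B's decoder. The sketch below is a minimal, illustrative PyTorch version of that structure; the layer sizes are arbitrary, training is omitted, and it is not the original repository code.

```python
# Toy sketch of the shared-encoder / per-identity-decoder autoencoder idea
# behind the original DIY face-swap code. Layer sizes are illustrative and
# training is omitted; this is not the actual repository code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into a latent code shared by both identities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared latent code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (omitted) reconstructs person A through decoder_a and person B
# through decoder_b, both via the shared encoder. At swap time, a frame of
# person A pushed through decoder_b yields A's pose and expression rendered
# with B's face.
face_a = torch.rand(1, 3, 64, 64)            # stand-in for a cropped video frame
swapped = decoder_b(encoder(face_a))
print(swapped.shape)                         # torch.Size([1, 3, 64, 64])
```

Getting anything convincing out of this structure requires thousands of aligned face crops of both people and long training runs, which is part of why most DIY results still look off.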

Then why is everyone so anxious? Because technology is continually advancing; according to Hany Farid, a digital-forensics expert at the University of California, Berkeley, that is simply how things work. The research community cannot agree on when DIY methods will become sophisticated enough to pose a real threat; estimates range widely, from two to ten years. But experts predict that eventually anyone with a smartphone will be able to produce convincing deepfakes of anyone else.

Are Deepfakes only Videos?

Deepfakes are not confined to video alone. Deepfake audio is a rapidly growing field with a vast array of applications.

Using deep learning algorithms and just a few hours (or, in some cases, minutes) of audio of the person whose voice is being cloned, it is now possible to create realistic audio deepfakes. Once a model of a voice is created, that person can be made to say anything, such as when fake audio of a CEO was used to commit fraud last year.
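As an illustration of how low the barrier has become, the sketch below clones a voice from a short reference clip and makes it read arbitrary text. It assumes the open-source Coqui TTS package and its XTTS multilingual model; the file names are placeholders, and exact model identifiers and arguments may differ between library versions.

```python
# Hedged sketch of voice cloning, assuming the open-source Coqui TTS
# package (pip install TTS) and its XTTS multilingual model. A short
# reference clip of the target voice ("reference_speaker.wav", a
# placeholder name) is enough to have the cloned voice speak any text.
# Model names and argument details may vary between library versions.
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize speech in the cloned voice and write it to a file.
tts.tts_to_file(
    text="This sentence was never actually spoken by the person you hear.",
    speaker_wav="reference_speaker.wav",  # short sample of the voice to clone
    language="en",
    file_path="cloned_voice.wav",         # generated deepfake audio
)
```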

Deepfake audio has medical applications in the form of voice replacement, as well as uses in computer-game design: developers can now let in-game characters say anything in real time rather than relying on a limited set of lines recorded before the game was released.

How can we stop deepfakes?

In the past year, several pieces of U.S. legislation addressing deepfakes have gone into effect. States are introducing bills to ban deepfake pornography and prevent its use during elections. Texas, Virginia, and California have criminalized deepfake porn, and in December the president signed the first federal law addressing deepfakes as part of the National Defense Authorization Act. However, these new rules only help if the perpetrator lives in one of those jurisdictions.

China and South Korea are the only countries outside the United States taking specific measures to outlaw deepfake fraud. In the United Kingdom, the Law Commission is reviewing existing revenge-porn legislation to address the various ways deepfakes are produced. The European Union, however, does not appear to view this as a pressing concern relative to other forms of online disinformation.

Even though the United States is ahead, there is little evidence that the legislation being proposed is enforceable or has the right emphasis.

And even though several research labs have devised novel methods for identifying and detecting manipulated videos, such as embedding watermarks or using a blockchain, it is not easy to build deepfake detectors that cannot be quickly exploited to create more convincing deepfakes.
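To make the provenance idea behind watermarking and blockchain registries concrete, the sketch below records a cryptographic fingerprint of a clip when it is published and checks a copy found later against it; any editing or re-encoding breaks the match. It is a simplified stand-in for real provenance systems, and the file names are hypothetical.

```python
# Simplified illustration of content provenance: hash a video when it is
# published, store the hash in a trusted registry (a real system might use
# a blockchain or a signed manifest), and re-hash copies found later to
# see whether the file has been altered. File names are hypothetical.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

registered = fingerprint("press_briefing_original.mp4")   # stored at publication time
candidate = fingerprint("press_briefing_downloaded.mp4")  # copy circulating online

print("matches the registered original" if candidate == registered
      else "altered or re-encoded; provenance cannot be confirmed")
```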

Still, technology companies are attempting. Facebook engaged researchers from Berkeley, Oxford, and other institutions to assist with developing a deepfake detector and enforcing its new prohibition. In addition to implementing significant policy changes, Twitter intends to tag any deepfakes that are not removed. YouTube reiterated in February that it would not permit deepfake videos linked to the 2020 U.S. census, the 2020 U.S. election, or voting procedures.

But what about deepfakes outside these walled gardens? Reality Defender and Deeptrace are two tools designed to protect you from deepfakes. Deeptrace offers an API that works as a hybrid antivirus/spam filter, prescreening incoming media and redirecting obvious manipulations to a quarantine zone, much as Gmail diverts spam before it reaches the inbox. Reality Defender, a technology under development by AI Foundation, likewise aims to tag and quarantine manipulated images and videos before they can do harm. Henry Ajder, head of research at Deeptrace, argues that placing the burden of media authentication on the individual is grossly unfair.

