Deepfake poses threat of visual, written forgery with AI
Deepfake technology can alter a video to replace the face of the person who actually appears in it.

Artificial intelligence will be a huge part of our lives in the future and can be used for the greater good. The technology has a dark side, however, raising a troubling question: Can artificial intelligence be used for forgery?

One of the biggest ethical issues surrounding artificial intelligence (AI) is fakery. By fakery, we mean that the technology can create fake virtual characters and fabricate fake videos. One such AI method, known as deepfake, has produced videos that imitate U.S. President Donald Trump's face and voice and make him appear to say things he never said, posing a wide range of threats.

The biggest danger posed by AI-powered fakery is that it invites fraud. In one example, fraudsters used AI-based software to imitate the voice of the CEO of a German-based company and have a large payment deposited into their accounts. In September, thieves stole $1.4 million by deceiving their victims this way.

Meanwhile, voice-imitation crime has risen 350% since 2013. Research shows that one out of every 638 phone calls uses synthetically generated audio. Moreover, applications that gradually train such fake voices sell for $30 to $50, so they are neither difficult nor expensive to obtain.

Shamir Allibhai, CEO of the video verification company Amber, pointed to the extent of this danger, telling U.S. media that deepfake technology could be used as a pretext to launch a first strike and wage war, and could serve as a weapon to fuel international conflicts. "Countries can even produce evidence using deepfake videos to be the first to attack and enter a war. Or a third country can prepare such videos and provoke a conflict between two countries it is hostile to," Allibhai said.

Combating deepfake

Google is trying to combat deepfakes, removing 9 million videos from YouTube in the second quarter of this year, while AI-backed detection software has identified 87% of such fake videos. Facebook and Microsoft launched the Deepfake Detection Challenge, backed by $10 million in awards, with the same goal. Social media platforms are under pressure from politicians and ethics experts to protect people from deepfakes and to prevent the harm they can do. To that end, AI will now regularly be deployed against deepfakes, themselves a product of artificial intelligence.
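A minimal sketch, in PyTorch, of the kind of frame-level binary classifier such detection software builds on; the architecture, input size and names here are illustrative assumptions, not the systems mentioned above.

```python
# Illustrative frame-level deepfake classifier (a toy sketch, not any
# vendor's actual detector): score a cropped video frame as real or fake.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        # x: batch of RGB face crops, shape (N, 3, H, W)
        return self.head(self.features(x))  # logit: higher means "fake"

model = FrameClassifier()
frame = torch.randn(1, 3, 128, 128)  # stand-in for one cropped frame
prob_fake = torch.sigmoid(model(frame)).item()
print(f"probability fake: {prob_fake:.2f}")  # untrained, so around 0.5
```

In practice a model like this would be trained on large labeled sets of real and manipulated faces, which is the kind of work efforts like the Deepfake Detection Challenge aim to encourage.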

According to a study by the cybersecurity company Deeptrace, the number of deepfake videos online jumped 84%, from 7,964 in December 2018 to 14,678 just nine months later. That most of these videos are adult content meant for internet users over the age of 18 adds another scary dimension.

Another example cementing this fear recently created a stir among internet users. The "Bosstown Dynamics" video, a nod to Boston Dynamics produced by the studio Corridor Digital, showed how deeply the unreal virtual characters Hollywood has created since the early 2000s are engraved in people's minds. In the video, a humanoid robot, supposedly being trained in shooting, grows frustrated after being pushed to the ground by its handlers and spreads terror by turning its weapon on the well-built men around it, who look like soldiers or commandos, mimicking a dystopian movie scene.

The video, shared on social media platforms, was so convincing, and people's fear of AI robots so great, that millions of people thought it was real. Taking its cue from how far robotics has come, Corridor Digital played on the flexibility of Boston Dynamics robots, their ability to get back up after being pushed to the ground, and the prospect of them wielding weapons, in effect fooling people with realistic fiction.

Forgery in writing

Deepfakes can deceive people not only visually but also in writing. OpenAI, co-founded by Elon Musk to work on safe artificial general intelligence, has developed systems that can write meaningful text. Gartner has cited AI-generated text, a field spearheaded by OpenAI, as one of the most popular research areas of the next few years. OpenAI declined to release the full version of its GPT-2 model, announced earlier this year, to prevent it from being used for malicious purposes.
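As a minimal sketch of what such machine-written text looks like in practice, the snippet below samples a continuation from the small GPT-2 checkpoint OpenAI did release publicly, loaded through the Hugging Face transformers library; the prompt and sampling settings are illustrative assumptions, not OpenAI's own setup.

```python
# A minimal sketch: sampling text from the publicly released small GPT-2
# checkpoint via Hugging Face transformers (illustrative settings, not
# OpenAI's original tooling).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Artificial intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation varied but still coherent.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even this off-the-shelf checkpoint produces fluent paragraphs, which is precisely why researchers worry about mass-produced fake text.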

Over the summer, two graduate students, one of them Turkish, built a text generator similar to OpenAI's. Aaron Gökaslan, 23, and Vanya Cohen, 24, who pulled this off without huge investment behind them, announced that they had developed the technology for about $50,000.

"What we uncover shows how much we should be careful about security. A great responsibility falls on researchers to prevent people from being abused," Cohen told Wired.

AI deciphers ancient Greek writings

AI has also made inroads into archeology. This technology's ability to decode text came into play in Pythia, an ancient-text restoration model. Developed by the University of Oxford and Google's DeepMind, Pythia uses deep learning to restore a text in full by predicting its missing parts. Trained on more than 35,000 ancient Greek inscriptions on stone and ceramic, written between 1,500 and 2,600 years ago, Pythia has so far filled in the gaps in nearly 3,000 damaged ancient texts. Its error rate, meanwhile, was about 27 percentage points lower than that of human epigraphers.
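As a toy illustration of the gap-filling task Pythia performs, the snippet below asks a modern masked language model, BERT loaded through the Hugging Face transformers library, to predict a missing word in an English sentence. The model, library and example sentence are assumptions chosen for illustration; Pythia itself is a character-level model trained on ancient Greek inscriptions.

```python
# Toy gap-filling with a masked language model (BERT via Hugging Face
# transformers). NOT Pythia; this only shows the idea of predicting
# the missing parts of a damaged text.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# A "damaged" sentence with one missing word, marked by [MASK].
damaged = "the citizens dedicated this [MASK] to the goddess athena."
for guess in fill(damaged, top_k=3):
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```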