Sunday Special
By Cmde BR Prakash, VSM (Retd)*
In an era where social media content zips across the internet at the speed of light, ideas, words and phrases are generated at a staggering pace, leaving the uninitiated confounded. I am therefore not sure how many of us have heard the new additions to the English lexicon – deepfakes and shallowfakes – let alone understand what they mean.
Most of us are familiar with the deliberate spread of misinformation or partisan content on social media, coined "fake news", which is influencing political and social landscapes across the globe. Fake news devoid of realistic images or videos does not create the intended impact; as the old English adage says, "a picture is worth a thousand words". Until recently, a professional video editor with a team and sophisticated video editing tools was essential to alter or recreate images. Techniques such as morphing using Photoshop are passé, as they are easily discernible even to casual scrutiny. This limited the spread of authentic-looking fake images. However, in the last two years, technological advances and the ubiquitous internet have enabled the average John Doe to swiftly create and disseminate high-quality tampered videos, aided by ever-growing social media.
"Shallowfakes" have been around for a while and refer to the numerous videos (tens of thousands) circulated with malicious intent worldwide, which are not crafted with sophisticated artificial intelligence (AI) but are often simply recycled, staged or re-edited and relabelled content uploaded on the net. It may be as simple as mislabelling content to discredit someone or spread false information. Shallowfakes have been used in recent times with devastating effectiveness, as shown by the recent example of the doctored video of US House Speaker Nancy Pelosi that went viral and was retweeted by President Donald Trump.
It was only in 2017 that the world woke up to “deepfakes” when it was used as a moniker by a Reddit user. Deepfake is a portmanteau of “deep learning” and “fake”. It started with an anonymous Reddit user posting digitally altered pornographic videos using machine-learning technology. While the user was banned by Reddit, it spurred a wave of copycats on other platforms. Experts believe there are now about 10,000 deepfake videos circulating online, and the number is growing.
The use of this machine learning technique was hitherto mostly limited to the AI research community in universities, and it was only when the Reddit user started using Generative Adversarial Networks (GANs) that the common man became aware of its application to video and images. He was building GANs using TensorFlow, Google's free open-source machine learning software, to superimpose celebrities' faces on the bodies of women in pornographic movies.
For those to whom this is all Greek (essentially anybody without a PhD in AI), a GAN is a machine learning technique invented by a graduate student, Ian Goodfellow, in 2014 as a method to algorithmically generate new data out of existing data sets. It is a class of machine learning systems in which two neural networks contest with each other in a game (in the sense of game theory, often but not always a zero-sum game). Given a training set, the technique learns to generate new data with the same statistics as the training set. Photo images are readily available online; in 2017, it was estimated that 1,000 selfies are uploaded to Instagram every second. The more pictures of the target, the more realistic the deepfake. GANs can also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
In plain English, in deepfake technology GANs are trained to replicate patterns, such as the face of a person, gradually improving the realism of the synthetically generated faces. It works like a cat-and-mouse game between two neural networks. One network, called "the generator", produces the fake video based on training data (real images), while the other network, "the discriminator", tries to distinguish the real images from the fake video. This iterative process continues until the generator is able to fool the discriminator into thinking the footage is real.
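The adversarial game described above can be sketched in a few lines of Python. This is a deliberately minimal toy, not a deepfake pipeline: the "generator" here is just a Gaussian with two learnable parameters, and the "discriminator" is a one-parameter logistic classifier over 1-D numbers rather than images. All names and numbers are illustrative assumptions, but the alternating maximise/fool updates are the same idea that drives real GAN training.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: a Gaussian around 4.0.
real_mean, real_std = 4.0, 0.5

def discriminator(x, w, b):
    # Logistic classifier: estimated probability that x is a real sample.
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

# Generator: transforms random noise z into a sample x = g_mu + g_sigma * z.
g_mu, g_sigma = 0.0, 1.0
w, b = 0.1, 0.0
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(64)
    fake = g_mu + g_sigma * z
    real = rng.normal(real_mean, real_std, 64)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = discriminator(real, w, b)
    d_fake = discriminator(fake, w, b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator.
    d_fake = discriminator(fake, w, b)
    g_mu += lr * np.mean((1 - d_fake) * w)        # chain rule: dx/dmu = 1
    g_sigma += lr * np.mean((1 - d_fake) * w * z)  # chain rule: dx/dsigma = z

# The generator's mean drifts from 0 toward the real data's mean.
print(f"learned generator mean: {g_mu:.2f}")
```

In a real deepfake system both players are deep convolutional networks and the samples are face images, but the loop structure – one gradient step for the discriminator, one for the generator, repeated until the discriminator is fooled – is the same.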
This was taken to the next logical level when the creator of the videos released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratised the power of GANs: anyone with access to the internet and pictures of a person's face could generate their own deepfake. The quality of the videos produced by the software beats current state-of-the-art face recognition systems based on the VGG and FaceNet neural networks, with false acceptance rates (on high-quality versions) of 85.62% and 95.00% respectively.
In May 2019, researchers at the Samsung AI Center in Moscow developed a way to create "living portraits" from a very small dataset – as few as one photograph, in some of their models. This, in a way, is a leap beyond what even deepfakes and other algorithms using generative adversarial networks can accomplish. Instead of teaching the algorithm to paste one face onto another using a catalogue of expressions from one person, they use the facial features that are common across most humans to puppeteer a new face. The details of the process are described in the paper "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models".
While altering pornographic content does not affect the public at large, in the future deepfakes could be exploited by purveyors of "fake news" to create digital wildfires. There is always the danger of our networked information environment interacting in toxic ways with our cognitive biases, and deepfakes will exacerbate this problem significantly. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs and, in so doing, push ideologically opposed online communities deeper into their own subjective realities.
Countering the growing stream of fake news, misinformation and deepfakes is tough, as it is hard to keep pace with the technology. Relying on forensic detection alone to combat deepfakes is becoming less viable, given the rate at which machine learning techniques can circumvent it. Work is in progress on forensic technology to identify digital forgeries, with new detection methods to counteract the spread of deepfakes. One approach being explored identifies the subtle changes of colour that occur in the face as blood is pumped in and out; the signal is so minute that deepfake-generating software is unable to reproduce it – at least for now.
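The blood-flow idea can be illustrated with a short numpy sketch. Real detection systems work on face video; here, the "frames" are synthetic 8x8 patches whose average brightness carries a tiny periodic pulse buried in noise (the frame data, the 0.7–4 Hz heart-rate band and all numbers below are assumptions for illustration, not any particular product's method). Averaging each frame and taking a Fourier transform recovers the hidden pulse frequency – a signal a genuine face would show and a synthetic one typically would not.

```python
import numpy as np

fps = 30.0
t = np.arange(0, 10, 1 / fps)        # 10 seconds of "video" at 30 fps
heart_rate_hz = 1.2                  # synthetic pulse, ~72 beats per minute

# Synthetic face frames: brightness carries a faint periodic component,
# mimicking the colour change as blood is pumped in and out of the skin.
pulse = 0.5 * np.sin(2 * np.pi * heart_rate_hz * t)
rng = np.random.default_rng(1)
frames = 120 + pulse[:, None, None] + rng.normal(0, 2.0, (t.size, 8, 8))

# Step 1: collapse each frame's face region to one sample per frame.
signal = frames.mean(axis=(1, 2))
signal -= signal.mean()

# Step 2: find the dominant frequency in a plausible heart-rate band.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)  # roughly 42-240 beats per minute
dominant = freqs[band][np.argmax(spectrum[band])]

print(f"dominant pulse frequency: {dominant:.2f} Hz")
```

The per-pixel noise here is four times larger than the pulse, yet averaging over the patch and over ten seconds of frames makes the heartbeat stand out clearly in the spectrum – which is why the signal is recoverable by forensics while remaining too subtle for current generators to fake.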
So there is hope that, in the future, deepfakes could be countered using the same AI that helped create them.
*The writer retired as Commodore from the Indian Navy in 2017. He is an alumnus of TS Rajendra, specialised in Missile and Gunnery, and served as Surface to Air Missile Officer and Gunnery Officer on a number of Indian Naval ships. He commanded INS Vidyut and INS Ganga, and was the commissioning CO of INS Sardar Patel.