Detecting ‘deepfake’ videos in the blink of an eye

NOTE: This is a bit outside our usual posts, but with this being another election year and the Trump administration and GOP Congress doing nothing to protect the voting public from manipulation by foreign adversaries, we thought this post could be helpful.

It’s Barack Obama – or is it?

The following article by Siwei Lyu, Associate Professor of Computer Science and Director of the Computer Vision and Machine Learning Lab at the University at Albany, State University of New York, was posted on The Conversation website on August 29, 2018:

A new form of misinformation is poised to spread through online communities as the 2018 midterm election campaigns heat up. Called “deepfakes” after the pseudonymous online account that popularized the technique – which may have chosen its name because the process uses a technical method called “deep learning” – these fake videos look very realistic.

So far, people have used deepfake videos in pornography and satire to make it appear that famous people are doing things they wouldn’t normally. But it’s almost certain deepfakes will appear during the campaign season, purporting to depict candidates saying things or going places the real candidate wouldn’t.

Because these techniques are so new, people are having trouble telling the difference between real videos and deepfake videos. My work, with my colleague Ming-Ching Chang and our Ph.D. student Yuezun Li, has found a way to reliably tell real videos from deepfake videos. It’s not a permanent solution, because technology will improve. But it’s a start, and offers hope that computers will be able to help people tell truth from fiction.
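The excerpt stops short of the method's details, but the title points at the cue Lyu's team exploited: early deepfake faces rarely blinked, because the still photos used to train the generators almost never show closed eyes. The sketch below is a minimal illustration of that idea, not the authors' actual detector (they trained a neural network). It screens a clip using the classical eye-aspect-ratio (EAR) blink heuristic of Soukupová and Čech with dlib's 68-point facial-landmark model; the file names and thresholds here are illustrative assumptions, not published values.

```python
# Minimal sketch: flag videos whose subject blinks abnormally rarely.
# Assumes dlib's 68-point landmark model file is available locally;
# EAR_THRESHOLD and MIN_BLINKS_PER_MIN are illustrative assumptions.
import cv2
import dlib
from scipy.spatial import distance as dist

DETECTOR = dlib.get_frontal_face_detector()
PREDICTOR = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
LEFT_EYE = range(42, 48)   # dlib landmark indices for the left eye
RIGHT_EYE = range(36, 42)  # dlib landmark indices for the right eye
EAR_THRESHOLD = 0.21       # eye treated as closed below this ratio (assumed)
MIN_BLINKS_PER_MIN = 4.0   # humans blink ~15-20x/min; far less is suspicious

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply on eye closure
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def blink_rate(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    blinks, closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = DETECTOR(gray)
        if not faces:
            continue
        shape = PREDICTOR(gray, faces[0])
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
               eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
        if ear < EAR_THRESHOLD:
            closed = True
        elif closed:
            blinks += 1      # eye reopened: one completed blink
            closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

if __name__ == "__main__":
    rate = blink_rate("clip.mp4")  # hypothetical input file
    verdict = "suspicious (possible deepfake)" if rate < MIN_BLINKS_PER_MIN else "plausible"
    print(f"blink rate: {rate:.1f}/min -> {verdict}")
```

Even this crude screen forces a fake to exhibit a sustained, natural blink rate to pass, which is also why the authors caution it is not a permanent solution: generators trained on footage that includes blinking will eventually defeat it.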

View the complete article here.