UNICRI Centre for Artificial Intelligence and Robotics
Problem: Advances in video manipulation technology and the programmatic generation of video are giving rise to a world in which it is increasingly difficult to distinguish the fake from the real. Highly realistic “fake” videos, which combine or superimpose images and video using machine learning techniques, can be created with relative ease and limited resources. So-called ‘deepfakes’, created with generative adversarial networks (GANs), are one type of video manipulation that has garnered considerable media attention in recent months. Other manipulation techniques also exist, including the Face2Face algorithm, FaceApp and Lyrebird, which can respectively swap faces, add smiles to faces and impersonate voices. UC Berkeley has also demonstrated an algorithm that transfers movements from a source person to a target person. Although the technology certainly has enormous positive potential, its advent may equally fuel the spread of misinformation; be used to tarnish an individual’s reputation or discredit their name; undermine trust in public authorities; manipulate footage of political figures to incite violence or hatred; or even call into question the validity of image and video evidence presented in court. The destabilising and dangerous effects of the ‘fake news’ era are already evident throughout society. Left unchecked, this technology is likely only to further amplify the dynamics of this era.
Outcome: Hackers are challenged to create tools for the detection of manipulated video that can support law enforcement, security agencies, courts, the media and others in readily verifying the authenticity of images and video.
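By way of illustration only, one family of detection heuristics looks for statistical artefacts that generative models leave behind, such as unusual energy in the high-frequency band of an image's Fourier spectrum. The sketch below is a minimal, untuned example of that idea using NumPy; the cutoff radius and the synthetic test images are assumptions for demonstration, not a production detector, and a real tool would train a classifier on labelled authentic and manipulated frames.

```python
import numpy as np

def high_freq_energy_ratio(gray, cutoff=0.5):
    """Fraction of spectral power lying beyond `cutoff` of the Nyquist radius.

    Heuristic only: upsampling layers in generative models often leave
    periodic artefacts in the high-frequency band; `cutoff=0.5` is an
    illustrative assumption, not a tuned threshold.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    r_max = min(cy, cx)
    high = power[radius > cutoff * r_max].sum()
    return high / power.sum()

# Synthetic demo frames (assumed, for illustration): a smooth image has
# almost all power near DC; broadband noise spreads power across the band.
rng = np.random.default_rng(0)
smooth = np.outer(np.hanning(64), np.hanning(64))
noisy = rng.standard_normal((64, 64))
print(high_freq_energy_ratio(noisy) > high_freq_energy_ratio(smooth))  # True
```

In a full pipeline, a measure like this would be computed per frame (e.g. on faces cropped from the video) and combined with other cues, since a single spectral statistic is easy to evade.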
Datasets: Coming soon
Relevant links and resources: Coming soon