
UCL Jill Dando Institute of Security and Crime Science


Future Crime opportunities arising from Artificial Intelligence (AI)

17 January 2019

Research Summary

Long-awaited, AI has arrived, delivered by advances in: Machine Learning, to build algorithms from data; Deep Learning, to do it in a brain-like way; and computing hardware, to do it fast and cheaply.

AI has criminal potential. Examples include:

  • Identity Forgery: AI methods can generate speech in a target's voice from a sample of their speech, and couple it with synthesized video of them speaking. A senior citizen could be tricked into making financial transfers over a Skype video call by an apparently trusted party.
  • AI Snooping: Phones, PCs, TVs and Home Hubs provide the sensors for audio snooping inside homes. Speech Recognition can sift the resulting data for exploitable fragments (e.g. passwords, bank details, or admissions of affairs).
  • Driverless Weapons: The driverless truck is close to the ideal urban attack robot for terrorists. GPS guidance could bring it to its target, and Machine Vision could target pedestrians.

On the flip side, AI has potential for crime prevention. The most developed area is Machine Perception, used in, for example, vehicle tracking, person recognition, and X-ray threat detection. However, all Deep-Learnt vision systems studied so far can be fooled by an adversary who has prior access to the software: not by hacking it, but by using AI methods to find its hidden weaknesses, namely minute adversarial perturbations of the input that tip the system into giving the wrong output. Understanding whether a particular security-critical system is vulnerable, and addressing the weakness (for example, by ensuring the software is not physically present in purchasable security scanners, but instead runs from a remote server that an adversary cannot access), can guard against a lurking problem.

This project seeks to examine the future crime potential of AI, and to provide a basic taxonomy graded on scales of criminal profit, public harm, victim harm, effort, difficulty, and technology readiness.
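The adversarial-perturbation idea above can be illustrated with a minimal sketch. The model below is a toy linear classifier standing in for a deep vision system, and the attack is the standard Fast Gradient Sign Method; all weights, inputs, and the perturbation budget are invented for illustration, not taken from any system mentioned in this summary.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# (A stand-in for a deep vision model; the weights are illustrative.)
w = np.array([1.0, -2.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method for a logistic model.

    For logistic loss, the gradient of the loss with respect to the
    input is -(y - p) * w, where p = sigmoid(w.x + b). The attack
    nudges every input component by +/- eps in the direction that
    increases the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = -(y - p) * w              # d(loss)/dx
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.1])             # correctly classified as class 1
x_adv = fgsm(x, y=1, eps=0.2)        # small, bounded perturbation

print(predict(x))                    # prints 1
print(predict(x_adv))                # prints 0: same input, minutely shifted, wrong output
print(np.max(np.abs(x_adv - x)))     # perturbation is bounded by eps
```

Note that crafting the perturbation requires the gradient, and hence access to the model, which is exactly why keeping the software off purchasable scanners and behind a remote server raises the bar for an attacker.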

Lead investigator(s):

Dr Lewis Griffin 

Research Assistant(s):

Jerone Andrews  

Thomas Tanay 

For information about this project contact Dr Lewis Griffin: L.Griffin@cs.ucl.ac.uk