From targeted phishing campaigns to new methods of stalking, there are plenty of ways artificial intelligence could be used to cause harm if it fell into the wrong hands. A team of researchers set out to rank the potential criminal applications of AI over the next 15 years, starting with the ones we should worry about most. At the top of the list of most serious threats? Deepfakes.
By using fake audio and video to impersonate another person, the technology can cause various types of harm, the researchers said. The threats range from discrediting public figures in order to sway public opinion, to extorting funds by impersonating someone’s child or relative on a video call.
The ranking was put together after scientists from University College London (UCL) compiled a list of 20 AI-enabled crimes drawn from academic papers, news reports and popular culture, then gathered a few dozen experts to discuss the severity of each threat during a two-day seminar.
The participants were asked to rank the list in order of concern, based on four criteria: the harm a crime could cause, the potential for criminal profit or gain, how easily it could be carried out, and how difficult it would be to stop.
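To make that methodology concrete, here is a minimal, purely illustrative sketch of how expert scores might be aggregated across those four criteria. The UCL study does not publish code, and the names, weights and numbers below are hypothetical assumptions, not the researchers’ actual method.

```python
# Hypothetical sketch (not from the UCL study): one way to aggregate
# expert scores for each AI-enabled crime across the four criteria.
# All threat names and scores here are illustrative assumptions.

CRITERIA = ("harm", "criminal_gain", "ease_of_execution", "difficulty_to_stop")

def severity(scores: dict[str, float]) -> float:
    """Average a threat's 1-5 expert scores over the four criteria."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Made-up scores for two of the twenty threats.
threats = {
    "deepfakes": {"harm": 4.8, "criminal_gain": 4.2,
                  "ease_of_execution": 4.5, "difficulty_to_stop": 4.7},
    "driverless_vehicle_attacks": {"harm": 4.9, "criminal_gain": 2.5,
                                   "ease_of_execution": 3.0,
                                   "difficulty_to_stop": 3.5},
}

# Sort threats from most to least severe by their mean score.
ranking = sorted(threats, key=lambda t: severity(threats[t]), reverse=True)
print(ranking)  # ['deepfakes', 'driverless_vehicle_attacks']
```

In the actual workshop the ordering emerged from expert discussion rather than a numeric formula; the simple averaging above is only meant to show how the four criteria jointly determine a threat’s rank.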
Although deepfakes might in principle sound less worrying than, say, killer robots, the technology can cause a great deal of harm very easily, and it is hard to detect and stop. The experts therefore concluded that, relative to other AI-enabled crimes, deepfakes are the most serious threat.
There are already examples of fake content undermining democracy in some countries: in the US, for example, a doctored video of House Speaker Nancy Pelosi in which she appeared inebriated picked up more than 2.5 million views on Facebook last year.
The UCL researchers said that as deepfakes get more sophisticated and credible, they will only get harder to defeat. While some algorithms are already successfully identifying deepfakes online, there are many uncontrolled routes for modified material to spread. Eventually, warned the researchers, this will lead to widespread distrust of audio and visual content.
Five other applications of AI also landed in the “highly worrying” category. With autonomous cars just around the corner, driverless vehicles were identified as a realistic delivery mechanism for explosives, or even as weapons of terror in their own right. Equally achievable is the use of AI to author fake news: the technology already exists, the report stressed, and the societal impact of propaganda shouldn’t be underestimated.