4 Things Everyone Should Fear About Artificial Intelligence and the Future

February 21, 2018, 10:23 PM UTC

Advances in artificial intelligence have the potential to supercharge medical research and improve disease detection, but they could also amplify the actions of bad actors.

That’s according to a report released this week by a team of academics and researchers from Oxford University, Cambridge University, Stanford University, the Electronic Frontier Foundation, artificial intelligence research group OpenAI, and other institutions.

The report’s authors aren’t concerned with sci-fi doomsday scenarios like robots taking over the world, as in Terminator, but with more practical problems. Criminals, for instance, could use machine learning technologies to further automate hacking attempts, putting more pressure on already beleaguered corporate security officers to keep their computer systems safe.

The goal of the report is not to dissuade companies, researchers, or the public from AI, but to highlight the most realistic concerns so people can better prepare for, and possibly prevent, future cyber attacks or other AI-related problems. Among other recommendations, the authors urge policymakers to work with researchers on addressing potential AI issues and call on technologists involved in AI to consider a code of ethics.

Here are some interesting takeaways:

1. Phishing scams could get even worse

Phishing scams, in which criminals send seemingly legitimate emails bundled with malicious links, could become even more prevalent and effective thanks to AI. The report outlines a scenario in which people’s online information and behaviors, presumably scraped from social networks like Twitter and Facebook, could be used to automatically generate custom emails that entice them to click. These emails, malicious websites, or links could be sent from fake accounts that mimic the writing style of people’s friends so they appear genuine.

2. Hackers will start using AI, just like financial firms

If banks and credit card firms adopt machine learning to improve their services, so too will hackers. For instance, the report said that criminals could use AI techniques to automate tasks like payment processing, presumably helping them collect ransoms more quickly.

Criminals could also create chatbots to communicate with the victims of ransomware attacks, in which criminals hold people’s computers hostage until they receive payment. By using software that can talk or chat with people, hackers could conceivably target more victims at once without having to personally communicate with each one to demand payment.

3. Fake news and propaganda are only going to get worse

If you thought the spread of misleading news on social networks like Facebook was bad now, get ready for the future. Advances in AI have enabled researchers to create realistic audio and video of political figures designed to look and talk like their real-life counterparts. For instance, AI researchers at the University of Washington recently created a video of former President Barack Obama giving a speech that looks incredibly realistic but is actually fake.

You can see where this is going. The report’s authors suggest that people could create “fake news reports” with fabricated video and audio. These fake news reports could show “state leaders seeming to make inflammatory comments they never actually made.”

The authors also suggest that bad actors could use AI to create “automated, hyper-personalized disinformation campaigns,” in which “Individuals are targeted in swing districts with personalized messages in order to affect their voting behavior.”

4. AI could make weapons more destructive

Advances in AI could enable people, even a “single person,” to cause widespread violence, the report said. Because open-source technologies like facial-detection algorithms and drone-navigation software are widely available, the authors are concerned that criminals could repurpose them for nefarious ends. Think self-flying drones that can detect a person’s face below them and then carry out an attack.

What’s also concerning is that there has been little regulation or technical research on defensive techniques to combat the “global proliferation of weaponizable robots.”

From the report:

While defenses against attacks via robots (especially aerial drones) are being developed, there are few obstacles at present to a moderately talented attacker taking advantage of the rapid proliferation of hardware, software, and skills to cause large amounts of physical harm through the direct use of AI or the subversion of AI-enabled systems.
