Artificial Intelligence Is About to Make Ransomware Hack Attacks Even Scarier

June 21, 2019, 3:50 PM UTC

A year ago, network security specialists spotted a worrying new trend: hackers began unleashing ransomware attacks on really big targets—America’s cities. Atlanta, Baltimore, and Greenville, N.C., would later grind to a halt after devastating computer outages disrupted everything from the collection of parking tickets to the sale of new homes.

The next big thing that keeps computer scientist Adam Kujawa up at night? Ransomware powered by artificial intelligence, a development that could give exploits such as RobbinHood and WannaCry a potent new makeover to evade cyber defenses, burrow into computer networks, and wreak mayhem.

In recent years, artificial intelligence and machine learning have been a godsend to IT security professionals, enabling them to detect malware sooner—even the moment it enters the wild—keeping networks more secure and corporate assets safer. But the same technologies that are supercharging network defenses could become a powerfully destructive counter-threat in the wrong hands, experts warn.
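For a concrete, if simplified, picture of what that machine-learning defense looks like, the sketch below trains a toy classifier on static file features. Everything here, from the feature names to the numbers, is an invented illustration rather than any vendor's actual pipeline.

```python
# A toy sketch of ML-based malware detection (illustrative only: the
# features, values, and labels below are invented, and real products use
# far richer static and behavioral signals plus much larger datasets).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static features extracted from executables:
# [file entropy, import count, section count, has_packer_signature]
X = np.array([
    [7.8,  12,  9, 1],   # high entropy, few imports: looks packed/suspicious
    [5.1, 210,  5, 0],   # profile of an ordinary application
    [7.5,   8, 11, 1],
    [4.9, 180,  6, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = malicious, 0 = benign

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score a brand-new sample "the moment it enters the wild"
sample = np.array([[7.7, 10, 10, 1]])
print("malicious probability:", clf.predict_proba(sample)[0][1])
```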

“The whole industry is moving towards A.I. for protection. At the same time, we see a lot of open-source and community development of A.I. platforms that are more than likely going to be used by cyber criminals,” says Kujawa, who spent five years dissecting malware for the U.S. Navy and is now director of Malwarebytes Labs in Santa Clara, Calif. The age of A.I.-powered malware is a matter of when, not if, he added.

“By the end of next year, we’re very likely to see something,” he said, when asked to predict the likelihood of A.I.-fueled exploits in the wild. Just in time for the 2020 U.S. presidential election, in other words.

IT security chiefs are never reluctant to talk about new threats on the horizon. And that’s especially true now, as the world transitions to computer systems automated by A.I., machine learning, and neural networks. Malwarebytes this week published a new report on A.I.-based security risks, hoping to separate the likely threats from, as its authors put it, the science fiction as A.I. goes mainstream.

One of the things the report warns about is deepfakes, an emerging threat in which A.I. is used to put words in the mouths of people in videos. The danger is especially acute if bad guys start using deepfakes to target average workers. “Imagine getting a video call from your boss telling you she needs you to wire cash to an account for a business trip that the company will later reimburse,” the authors write. “DeepFakes could be used in incredibly convincing spear phishing attacks that users would be hard-pressed to identify as false.”

Another threat is the use of A.I. and machine learning to concoct elaborate social engineering schemes that deceive individuals into divulging confidential or personal information. Here’s what that might look like: a savvy cyber criminal designs an A.I. system to comb through social media services looking for soft targets at a particular company or large public organization. By scraping data from, say, LinkedIn, Twitter, and Instagram, a detailed profile of a manager or an unwitting assistant begins to take shape. Once the profile is built, the victim can be targeted with highly effective spear-phishing emails.
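To see why that scraping step is so mechanical, consider the sketch below, which merges records from different public profiles into one composite target profile. The networks named are real, but every field, name, and record is made up for illustration.

```python
# Illustrative sketch of the profile-building step described above. The
# records stand in for data scraped from public profiles; the field names
# and values are invented.
from collections import defaultdict

scraped_records = [
    {"source": "LinkedIn",  "name": "J. Doe", "title": "Executive Assistant",
     "employer": "ExampleCorp", "reports_to": "CFO"},
    {"source": "Twitter",   "name": "J. Doe", "interests": ["travel", "marathons"]},
    {"source": "Instagram", "name": "J. Doe", "recent_post": "Off to the Lisbon offsite!"},
]

def build_profile(records):
    """Merge per-source records into one composite profile of a target."""
    profile = defaultdict(list)
    for record in records:
        for key, value in record.items():
            if key != "source":
                profile[key].append(value)
    return dict(profile)

profile = build_profile(scraped_records)
# A spear-phishing lure could now reference the target's employer, boss,
# and travel plans: exactly the personalization that makes it convincing.
print(profile)
```

The point is not the code’s sophistication but its scale: the same loop runs unchanged over thousands of scraped profiles.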

As Kujawa notes, “The biggest weakness is always the end user.” With A.I. tools, such attempts to find soft targets can be scaled up to identify thousands of potential victims across the corporate world in one shot.

Bot armies also take on a new dimension in a fully A.I. world. Researchers have already demonstrated that machine learning can handily defeat the CAPTCHA security protocols that protect computer servers from certain kinds of malicious bot attacks. Hackers could use this vulnerability to build wide-reaching bot armies, Kujawa says, to push even more convincing spam and fake news to more people.
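As a rough illustration of that research, the sketch below defines the kind of small convolutional network that has been used to classify segmented characters from simple text CAPTCHAs. The architecture, the 36-class setup, and the random stand-in tensors are assumptions for demonstration; assembling a real labeled CAPTCHA dataset is the hard part.

```python
# Toy sketch of a CAPTCHA-character classifier: a small convolutional net
# that maps a cropped character image to one of 36 classes (A-Z, 0-9).
# The random tensors below stand in for a real labeled dataset.
import torch
import torch.nn as nn

class CaptchaCharNet(nn.Module):
    def __init__(self, num_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = CaptchaCharNet()
images = torch.randn(8, 1, 32, 32)    # batch of segmented character crops
labels = torch.randint(0, 36, (8,))   # stand-in ground-truth labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                       # gradients for one training step
print("training loss on toy batch:", loss.item())
```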

If there is a silver lining, it’s that white-hat A.I. researchers so far have the jump on cyber criminals. They’ve been dissecting A.I. security-risk scenarios for several years, searching for remedies to attacks that have yet to be unleashed on a city or company. But that time advantage is slipping, security pros admit. Governments, too, seem to be getting the message. In February, President Donald Trump signed an executive order to promote A.I. research and development ensuring “that technical standards minimize vulnerability to attacks from malicious actors.”
