Last summer at the Black Hat cybersecurity conference, automated systems were pitted against one another, trying to find weaknesses in each other’s code and exploit them.

“This is a great example of how easily machines can find and exploit new vulnerabilities, something we’ll likely see increase and become more sophisticated over time,” said David Gibson, vice president of strategy and market development at Varonis.

His company hasn’t seen any examples of hackers leveraging artificial intelligence technology or machine learning, but nobody adopts new technologies faster than the sin and hacking industries, he said.

“So it’s safe to assume that hackers are already using AI for their evil purposes,” he said.

“Software is readily available at little or no cost, and machine learning tutorials are just as easy to obtain,” he said.

Take, for example, image recognition.

It was once considered a key focus of artificial intelligence research. Today, tools such as optical character recognition are so widely available and commonly used that they’re not even considered to be artificial intelligence anymore, said Shuman Ghosemajumder, CTO at Shape Security.

“People don’t see them as having the same type of magic as it has before,” he said. “Artificial intelligence is always what’s coming in the future, as opposed to what we have right now.”

According to a recent report, cyber-criminals are already using machine learning to target victims for Business Email Compromise scams, which have been escalating since early 2015.

“What artificial intelligence does is it lets them automate the tailoring of content to the victim,” said Steve Grobman, chief technology officer at Intel Security, which produced the report. “Another key area where bad actors are able to use AI is in classification problems. AI is very good at classifying things into categories.”

For example, the hackers can automate the process of finding the most likely victims.
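To make the classification idea concrete, here is a minimal, hypothetical sketch using Python and scikit-learn. The categories, training snippets, and test string are invented for illustration; this is a textbook text classifier, not a reconstruction of any actual criminal tooling.

```python
# Toy sketch of "classifying things into categories" with off-the-shelf tools.
# All labels and strings below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short text snippets labeled by category.
snippets = [
    "quarterly invoice approval and wire transfers",
    "vendor payment schedule and accounts payable",
    "team offsite photos from last weekend",
    "lunch menu for the cafeteria this week",
]
labels = ["finance", "finance", "other", "other"]

# Turn the text into word counts and fit a simple Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(snippets, labels)

# Sort a new snippet into one of the learned categories.
print(model.predict(["please process the attached wire transfer request"]))
# expected on this toy data: ['finance']
```

The point is not the sophistication of the model but how little effort it takes: a few dozen lines of freely available software can automate a sorting task that would otherwise need human attention.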

The technology can also be used to help attackers stay hidden inside corporate networks, and to find vulnerable assets.

Identifying specific cases where AI or machine learning is used can be tricky, however.

“The criminals aren’t too open about explaining exactly what their methodology is,” he said. And he isn’t aware of hard evidence, such as computers running machine learning models that were confiscated by law enforcement.

“But we’ve seen indicators that this sort of work is happening,” he said. “There are clear indications that bad actors are starting to move in this direction.”

Sneaky malware and fake domains

Security providers are increasingly using machine learning to tell good software from bad, and good domains from bad.
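As a rough sketch of what that defender-side approach can look like, a simple classifier can be trained on lexical features of domain names. The features, example domains, and labels below are toy assumptions for illustration, not a production detection model.

```python
# Simplified sketch: score domains with a classifier trained on lexical features.
# Feature set, training domains, and labels are toy assumptions, not real data.
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def features(domain: str) -> list[float]:
    name = domain.split(".")[0]
    counts = Counter(name)
    # Shannon entropy of the characters in the left-most label.
    entropy = -sum((c / len(name)) * math.log2(c / len(name)) for c in counts.values())
    digit_ratio = sum(ch.isdigit() for ch in name) / len(name)
    return [len(name), entropy, digit_ratio]

benign = ["google.com", "wikipedia.org", "github.com", "weather.com"]
suspect = ["xk2v9qwzt1.com", "qpz8mntr4v.net", "a1b2c3d4e5f6.org", "zzqv7hw3k9.info"]

X = [features(d) for d in benign + suspect]
y = [0] * len(benign) + [1] * len(suspect)  # 0 = benign, 1 = suspicious

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features("nytimes.com"), features("p9qx2vz7r1.com")]))
# expected on this toy data: [0 1]
```

Real products use far richer features and training sets, but the shape of the problem is the same: learn what distinguishes the good from the bad, then apply that model at scale.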

Now, there are signs that the bad guys are using machine learning themselves to figure out what patterns the defending systems are looking for, said Evan Wright, principal data scientist at Anomali.

“They’ll test a lot of good software and bad software through anti-virus, and see the patterns in what the [antivirus] engines spot,” he said.

Similarly, security systems look for patterns in domain generation algorithms, so that they can better spot malicious domains.

“They try to model what the good guys are doing, and have their machine learning model generate exceptions to those rules,” he said.
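For context, a domain generation algorithm in its most basic textbook form is just a seeded pseudo-random generator, as in this deliberately simplistic, hypothetical sketch. It is not any real malware family’s algorithm.

```python
# Toy illustration of a domain generation algorithm (DGA): a seeded pseudo-random
# generator that anyone holding the seed can reproduce. Educational sketch only.
import random
import string

def generate_domains(seed: int, count: int = 5, length: int = 10) -> list[str]:
    rng = random.Random(seed)
    domains = []
    for _ in range(count):
        label = "".join(rng.choice(string.ascii_lowercase) for _ in range(length))
        domains.append(label + ".com")
    return domains

# Malware and any defender who has recovered the seed generate the same list;
# the random-looking labels are exactly the pattern detection models key on.
print(generate_domains(seed=20170101))
```

Real-world variants often assemble labels from word lists rather than random letters, precisely to blunt the length and randomness cues that detection models rely on, which is the kind of adjustment Wright is describing.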

Again, there’s little hard evidence that this is actually happening.

“We’ve seen intentional design in the domain generation algorithms to make it harder to detect it,” he said. “But they could have done that in a few different ways. It could be experiential. They tried a few different ways, and this worked.”

Or they could have been particularly intuitive, he said, or hired people who previously worked for the security firms.

One indicator that an attack is coming from a machine, and not a clever — or corrupt — human being, is the scale of the attack. Take, for example, a common scam in which fake dating accounts are created in order to lure victims to prostitution services.

The clever part isn’t so much the automated conversation that the bot has with the victim, but the way that the profiles are created in the first place.

“It needs to create a profile dynamically, with a very attractive picture from Facebook and an attractive occupation, like flight attendant or school teacher,” said Omri Iluz, CEO and co-founder of PerimeterX.

Each profile is unique, yet appealing, he said.

“We know that it’s not just automation because it’s really hard,” he said. “We ruled out manual processes just by sheer volume. And we also don’t think they’re rolling out millions of profiles and doing natural selection because it would be identified by the dating platform. These are very smart pieces of software.”

Scalpers do something similar when they automatically buy tickets to resell at a profit.

“They need to pick the item that they know will get them a high value on the secondary market,” he said. “And they can’t do it manually because there’s no time. And it can’t be a numbers game because they can’t simply buy all the inventories because then they’ll be losing money. There’s intelligence behind it.”

The profits from these activities more than pay for the research and development, he said.

“When we look at the revenues these fraudsters generated, it’s bigger than many real companies,” he said. “And they don’t need to kill anyone, or do something risky like deal drugs.”

Getting ready for the Turing Test

In limited, specific applications, computers are already passing the Turing Test — the classic thought experiment in which humans try to decide whether they’re talking to another human, or to a machine.

The best defense against these kinds of attacks, said Intel’s Grobman, is a focus on fundamentals.

“Most companies are still struggling with even moderate attack scenarios,” he said. “Right now, the most important thing that companies can do is ensure they have a strong technical infrastructure and continue practicing simulations and red team attacks.”
