Artificial Intelligence: The Hackers’ New Weapon

Artificial and human hands reaching toward each other (Image: Geralt / Pixabay)

Malware is getting smarter—literally. Threat actors are using artificial intelligence to create cyberthreats capable of learning and acting as autonomous agents that can change their tactics as required to complete their mission.

What is Artificial Intelligence?

What is intelligence? We all have it and we know what we mean when we say the word. But trying to define intelligence is very difficult. Psychologists don’t subscribe to a single understanding of what intelligence is. There are different schools of thought. Is intelligence one single factor, or is it a collection of different abilities? Must it include some element of self-awareness? If we find it so hard to define intelligence, how can we even know when we have created an artificial intelligence (AI)?

Putting philosophy and psychology to one side, computer scientists have to take a pragmatic approach. Broadly speaking, a program or system is behaving intelligently if:

  • It is aware of its environment
  • It has a problem to solve or a task to perform
  • It can determine which steps to take to yield the greatest chance of success
  • It can learn new knowledge and skills
  • It can learn or deduce new behavior to overcome new problems

So a reasonable layman’s definition of AI could be the combination of knowledge, reasoning, learning, and deductive powers that the AI system can bring to bear on some problem or task.

We’re seeing more and more products appear that are labeled as smart. They purport to use AI to achieve their purpose. Smart assistants such as the Amazon Echo, the Google Nest, and the Apple HomePod are small devices that listen to the conversation in a room. When they hear their trigger word, such as “Alexa” or “Hey, Google,” they prick up their digital ears and listen to the command you speak. That command is sent to the cloud to be parsed, and the instructions on how to fulfill your request are sent back to the device. The actual AI is a huge system sitting somewhere in the manufacturers’ infrastructure. The device in your room is just the way you access it.

Self-driving cars are already in use, with each vehicle constantly monitoring its position, direction, speed, and situation with respect to the complex dynamic of the traffic flow around it. To navigate to its destination safely and efficiently, it must process thousands of measurements per second about acceleration, braking, and other road users, and decide what to do next.

We’ve come a long way from the chess game in 1956 when, for the first time, a computer—MANIAC 1—was victorious against a human. Sadly, the threat actors are using these advances in AI to upgrade existing cyberthreats and to create new ones.

AI and Cyberattacks

AI methods and techniques like machine learning are being used by threat actors and by the publishers of defensive cyber products alike. Until recently, the battle between cyber security and cyberattacks was a human-against-human one. The threat actors may have used software such as malware and zero-day exploits, but lined up on the side of the good guys were firewall gateway protection suites, network endpoint protection suites, intrusion detection systems, and much more.

For example, for some time now there have been polymorphic viruses that modify the portion of themselves that they duplicate and attach to infected files. Endpoint protection suites (EPPS) traditionally look for specific sequences of bytes in files—the so-called signature of the virus. By changing the make-up of the payload it attaches to each infected file, the virus hides its signature and attempts to avoid detection.
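To make the idea of a signature concrete, here is a toy sketch of how a purely signature-based scanner works. The virus names and byte patterns are invented for illustration; real products use huge, curated signature databases.

```python
# Toy signature-based scanner. The patterns below are invented for illustration.
SIGNATURES = {
    "ExampleVirus.A": bytes.fromhex("deadbeef4f6c6421"),
    "ExampleVirus.B": bytes.fromhex("cafebabe00112233"),
}

def scan_file(path):
    """Return the names of any known signatures found inside the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# A polymorphic virus beats this check by making sure the bytes it attaches
# to each newly infected file never match any stored pattern.
```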

Anti-virus software and EPPS had to evolve to incorporate new tests that could detect camouflaged infections. They adopted techniques such as heuristic scanning and behavioral analysis. Typically, they isolate the suspect file in a sandbox and let it execute so that the EPPS can safely examine the actions it attempts to perform.
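The heuristic side of that evolution can be pictured as a simple scoring exercise. The behaviors and weights below are made up for the example; real EPPS engines track far richer telemetry.

```python
# Invented behaviours and weights; a real engine tracks far richer telemetry.
SUSPICIOUS_BEHAVIOURS = {
    "writes_to_system_directory": 3,
    "modifies_startup_entries": 4,
    "disables_security_service": 5,
    "opens_many_outbound_connections": 2,
}

def heuristic_score(observed):
    """Sum the weights of the suspicious behaviours seen in the sandbox."""
    return sum(SUSPICIOUS_BEHAVIOURS.get(b, 0) for b in observed)

seen = ["writes_to_system_directory", "disables_security_service"]
verdict = "quarantine" if heuristic_score(seen) >= 5 else "allow"
print(verdict)  # quarantine
```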

This back-and-forth was typical of the arms race between the threat actors’ developers and the EPPS developers. Human minds were behind the software on both sides, and both sets of software could only act autonomously in a limited fashion. Scalability was a problem for both sides. The bottleneck was the human element in the development chain.

Now, the very same AI technologies that power smart devices and autonomous cars can be utilized to create intelligent viruses that morph faster than the EPPS providers can keep up with. AI can write compelling phishing emails and conduct convincing conversations via Twitter. It can also intelligently probe the entire perimeter of an organization, seeking out vulnerabilities that it can exploit. Machine learning techniques allow software to get better at fulfilling its purpose because it can learn from experience.

Suitably skilled humans can do all of this, of course. But having software that is capable of doing it as well or better than people, faster, and in many different places at once overcomes the scalability issue for the threat actors.

Chatbots and Social Engineering

Social engineering is a method of eliciting misplaced trust from the victim, which the threat actor can then leverage to their advantage. Social engineering can be performed face to face, over the phone, by email, or via some other text-based messaging.


Even as far back as 2016, developers were showcasing AI-driven software using machine learning techniques to conduct simple conversations with humans on Twitter. The tweets contained faux-malicious links to simulate real social engineering attacks. The developers could count the number of victims they fooled into clicking the links.

The software—dubbed SNAP_R—was able to look at the use of Twitter by the victim and determine the period of the day when the victim would be most likely to engage with other tweets. It could also determine topics the victim would likely respond to. Over the course of the test, SNAP_R sent 819 tweets. It fooled and “caught” 275 victims.
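SNAP_R’s internals aren’t reproduced here, but the timing analysis it performed boils down to something like the following sketch, using hypothetical tweet timestamps.

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamps of a target's recent tweets.
timestamps = [
    "2016-07-12T08:14:00", "2016-07-12T08:40:00",
    "2016-07-12T21:05:00", "2016-07-13T08:22:00",
]

# Count tweets per hour of the day to estimate when the target is most active.
tweets_per_hour = Counter(datetime.fromisoformat(t).hour for t in timestamps)
peak_hour, count = tweets_per_hour.most_common(1)[0]
print(f"Target is most active around {peak_hour:02d}:00 ({count} tweets)")
```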

Phishing Emails

Using techniques similar to those social media attacks, AI phishing systems can review the genuine emails sent out by entities such as PayPal, Facebook, and Netflix and learn how to create convincing forgeries that look and sound like the real thing.

Tweets and texts are associated with poor grammar, and we’re more forgiving when reading these short-form messages. A phishing email, especially one impersonating corporate email, must be worded carefully so that it strikes the right tone, and it must be grammatically perfect. AI systems can now generate emails that are indistinguishable from those authored by humans.

They can even undertake the email conversation required to carry out a spear phishing attack, answering the victim’s questions as the scam progresses.

Intelligent Malware

Polymorphic malware was an evolution in malware. It marked the point where malware could change encryption keys, file names, and other identifiable characteristics to attempt to escape detection. Using AI techniques, malware and other attack software can now make such changes intelligently, unpredictably, and based on the environment of the computer or network they are infecting.

AI and machine learning are being used to empower malware to make intelligent on-the-fly decisions about the type of evasive techniques it must use to avoid detection. It can identify the defensive measures being ranged against it and, using a database of the techniques each defensive product can use, it will adapt its behavior and strategy to remain undetected.
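In the abstract, that decision logic is little more than a lookup keyed on whichever defensive product the malware believes it is facing. Every product name and tactic below is invented; this is a thought experiment, not a recipe.

```python
# Purely illustrative: invented product names mapped to invented tactics.
EVASION_PLAYBOOK = {
    "ExampleGuard EPP": "delay_execution",
    "ExampleSentry IDS": "throttle_traffic",
}

def choose_tactics(detected_defenses):
    """Pick one invented tactic per defense spotted, defaulting to 'lie_low'."""
    return [EVASION_PLAYBOOK.get(d, "lie_low") for d in detected_defenses]

print(choose_tactics(["ExampleGuard EPP", "UnknownProduct"]))
# ['delay_execution', 'lie_low']
```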

Of course, the same technologies are available to the developers of the defensive software and systems that are fighting against the malware. AI defensive systems can learn how users and bona fide software behave, perform, and interact on the network. Anything that does not fit the expected pattern is isolated and flagged as a threat.

Vulnerability Detection

Penetration scans and vulnerability scans are used to identify vulnerabilities in the network perimeter of an organization, and in the software, appliances, and firmware inside the network. AI-enhanced scans could find vulnerabilities much faster than before, giving the threat actors an advantage.

Traditional scans find vulnerabilities by checking the versions and revisions of firmware, software, operating systems, and other system components, then looking up the known vulnerabilities for each revision. AI-enhanced scans are expected soon to be able to predict the vulnerabilities that are likely to exist in firmware and software—based on the manufacturers’ track records, the appliances themselves, and the roles those appliances fulfill—and to check for their presence. And over time, they will get better and better at doing this.
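For contrast, the traditional lookup step can be sketched in a few lines. The components and CVE entries here are placeholders; real scanners query maintained feeds such as the NVD.

```python
# Placeholder entries; real scanners query maintained vulnerability feeds.
KNOWN_VULNERABILITIES = {
    ("exampled", "2.4.1"): ["CVE-0000-1111 (remote code execution)"],
    ("examplefw", "1.0.3"): ["CVE-0000-2222 (authentication bypass)"],
}

def check_inventory(inventory):
    """Return the known issues for every (component, version) pair we recognise."""
    return {
        (component, version): KNOWN_VULNERABILITIES[(component, version)]
        for component, version in inventory
        if (component, version) in KNOWN_VULNERABILITIES
    }

print(check_inventory([("exampled", "2.4.1"), ("otherlib", "3.2.0")]))
# {('exampled', '2.4.1'): ['CVE-0000-1111 (remote code execution)']}
```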

The upshot is that AI-enhanced penetration and vulnerability scans will be able to identify zero-day threats: threats that can be deployed by threat actors safe in the knowledge that no defenses yet exist for the exploit being used.

AI in Defensive Measures

AI and its benefits are available for attackers and defenders alike, of course. AI is being introduced in several areas already to counter the rise of AI in cyberattacks.

Modeling User Dynamics

AI-driven systems can monitor the behavior of the network’s users. The timing of their logging in and out, the IP addresses they use when they connect remotely, and the systems, software, APIs, and other processes they interact with are all examined, characterized, and recorded. All the data garnered on each user is fed into a model of that user, which in turn feeds a model of the dynamics of all users and of the entire network.

Strange behavior, suspicious traffic, or unusual interactions can be spotted, isolated, and flagged as suspicious. Even if a threat actor is using the login credentials of a genuine user, their unauthorized actions will be detected.
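A stripped-down version of that check might look like this, with an invented user profile standing in for what a real system would learn continuously.

```python
# Invented profile; a real system would learn and update this continuously.
user_profile = {
    "usual_hours": range(8, 19),                       # 08:00-18:59
    "usual_networks": ("203.0.113.", "198.51.100."),   # documentation prefixes
}

def is_suspicious(login_hour, source_ip):
    in_hours = login_hour in user_profile["usual_hours"]
    known_net = source_ip.startswith(user_profile["usual_networks"])
    return not (in_hours and known_net)

print(is_suspicious(3, "192.0.2.44"))    # True: 03:00 login from an unfamiliar network
print(is_suspicious(10, "203.0.113.7"))  # False: fits the learned profile
```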

Endpoint Protection Suites

EPPS tools with AI capabilities can learn how your normal, non-malicious software operates. Behavior outside of the expected norms is considered hostile. To do this, the EPPS must constantly gather information about the good software and the normal operation of the computer. This is used to feed the machine learning element of the AI. Unusual network connections are also flagged as suspicious and treated as hostile until proven otherwise.

Most malware will send information back to its command and control (C2) server and receive instructions in return. AI-enabled malware will be able to reduce this dependency on instructions from HQ because it can determine its own best next step. But some information about the victim’s network will still be sent to the C2 server, such as the encryption key used in a ransomware attack and the decryption key required to unlock the captive data. This type of unusual remote connection can be detected by the AI-enabled EPPS.
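Spotting that connection can be sketched as a comparison against the destinations a host normally talks to. The hostnames below are placeholders.

```python
# Placeholder destinations; a real EPPS learns this baseline over time.
learned_destinations = {"updates.example-vendor.com", "mail.example.org"}

def flag_connections(observed):
    """Return any outbound destination the host has never talked to before."""
    return [dest for dest in observed if dest not in learned_destinations]

observed_today = ["updates.example-vendor.com", "203.0.113.99:4444"]
for dest in flag_connections(observed_today):
    print(f"Unusual outbound connection: {dest} (possible C2 traffic)")
```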

Intrusion Detection and Intrusion Prevention Systems

Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are complementary technologies. An IPS tries to prevent unauthorized access to a network, while an IDS looks for suspicious or known-malicious patterns of network traffic; if it finds any, unauthorized access is happening right now. Together, they fit nicely into a layered cyber security arsenal.

However, they depend on having information about a threat before it can be detected and blocked or flagged. They share the same problem that EPPS products have: they are susceptible to zero-day threats that have never been seen in the field before. If a threat has never been seen, the EPPS, IPS, and IDS systems cannot have their signature databases and rules updated to look for, accurately detect, and deal with it.

AI-enabled IPS and IDS will be able to make smart deductions about suspicious access attempts and patterns of network traffic that may indicate malicious intent. Instead of treating every unknown event as the action of a threat actor—and raising a lot of false positives—they will be able to consider the characteristics of the event they’ve detected and make an intelligent assessment of the situation.
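Instead of a binary signature match, think of each event receiving a score built from several weak signals. The weights and thresholds below are invented for the example.

```python
# Invented weights and thresholds, purely to illustrate scoring over exact matching.
def suspicion_score(event):
    score = 0
    if event.get("port") in {23, 445, 3389}:      # commonly abused services
        score += 2
    if event.get("bytes_out", 0) > 50_000_000:    # unusually large outbound transfer
        score += 3
    if event.get("hour") in range(1, 5):          # activity during quiet hours
        score += 1
    if event.get("failed_logins", 0) > 10:        # brute-force pattern
        score += 3
    return score

event = {"port": 3389, "bytes_out": 80_000_000, "hour": 3, "failed_logins": 0}
print("investigate" if suspicion_score(event) >= 5 else "log and move on")
```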

Scanning Emails

If threat actors can use AI-driven systems to generate convincing phishing emails that are very hard to distinguish from the genuine article, it’s only fair that the email scanning providers can use AI to improve the accuracy of their phishing detection software.

AI-enabled email scanners can review the technical aspects of an email much faster and more easily than humans can. Using information from the email header, they can determine whether the email has been spoofed and sent from a faked email address. They can also check the genuine destination behind links and shortened URLs.
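One of those header checks is easy to picture: compare the domain in the visible From: line with the domain in the Return-Path. The message below is fabricated, and a mismatch is only a clue, not proof of spoofing.

```python
from email import message_from_string

# Fabricated message for illustration only.
raw = """From: "Example Support" <support@example.com>
Return-Path: <bounce@suspicious-host.example.net>
Subject: Your account needs attention

Click the link to verify your details.
"""

msg = message_from_string(raw)

def domain_of(address):
    """Pull the domain out of a header value such as 'Name <user@host>'."""
    return address.split("@")[-1].rstrip(">").strip().lower()

from_domain = domain_of(msg["From"])
return_domain = domain_of(msg["Return-Path"])
if from_domain != return_domain:
    print(f"Possible spoof: From says {from_domain}, Return-Path says {return_domain}")
```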

The latest systems can analyze the turn of phrase, voice, and tone that are used in the body of the email. Do these make grammatical sense? Do they match the usual tone and vocabulary of genuine emails from that organization?

Over time they will become used to the written style of the personnel within your organization, too. This allows the email scanner to spot spear phishing attacks that attempt to convince the victim they’ve received instructions to pay a bill or make a money transfer from a senior manager within the same organization.

A Changing Threat Landscape

The old threats won’t vanish overnight. They’ll still be around for a long time. Not all sources of malware are sophisticated enough to roll AI into their software. But the coming wave of top-tier malware will be smarter than ever before, with the capability to become even smarter as time goes by.

Thankfully, the providers of the software and hardware defenses are keeping pace. They are incorporating AI and machine learning into their products. They will compete directly with the new AI threats, and will be even better at detecting and neutralizing the old ones.