AI cyberattacks will be almost impossible for humans to stop

As cyberattacks become more refined, they will start mimicking our online traits. This will lead to a battle of the machines

As early as 2018, we can expect to see truly autonomous weaponised artificial intelligence that delivers its blows slowly, stealthily and virtually without trace. And 2018 will be the year of the machine-on-machine attack.

There is much debate about the possible future of autonomous AI on the battlefield. Once released, these systems are not controlled. They do not wait for orders from base. They learn quickly from their environments and make their own decisions, often while deep inside enemy territory.

However, autonomous AIs are already starting to be deployed on another type of battlefield: digital networks. Today, cyber-attackers are using AI technologies that help them not only to infiltrate an IT infrastructure, but also to stay on that network for months, perhaps years, without being noticed.

In 2018, we can expect these algorithmic presences to use their intelligence to learn about their environments and blend in with the daily commotion of network activity. The drivers of these automated attacks may have a defined target – the blueprint designs of a new type of jet engine, say – or may act opportunistically, striking wherever the chance for money- or mischief-making presents itself. As they sustain their presence, their inside knowledge of the network and its users deepens, and they build up control over data and entire systems.
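
To make "blending in" concrete, here is a minimal sketch of the kind of hourly traffic baseline such a presence would need to stay inside – and that a defender can model in turn. Everything here (the fields, the numbers, the threshold) is an invented illustration, not any real product's method.

```python
# Hypothetical sketch: modelling the "daily commotion" of a network as a
# per-hour-of-day traffic baseline. Data, fields and thresholds are
# invented for illustration.
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(events):
    """events: (hour_of_day, byte_count) pairs from past traffic.
    Returns per-hour (mean, stdev) of observed byte counts."""
    by_hour = defaultdict(list)
    for hour, nbytes in events:
        by_hour[hour].append(nbytes)
    return {h: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for h, v in by_hour.items()}

def is_anomalous(baseline, hour, nbytes, k=3.0):
    """Flag traffic more than k standard deviations above the hourly norm.
    An intruder that stays inside this envelope 'blends in'."""
    mu, sigma = baseline.get(hour, (0.0, 0.0))
    return nbytes > mu + k * max(sigma, 1.0)

# Toy history: quiet nights, busy working hours, mild day-to-day jitter.
history = []
for day in range(7):
    for hour in range(24):
        base = 5_000 if hour < 8 or hour > 20 else 80_000
        history.append((hour, base + (day % 3) * (base // 16)))

baseline = build_baseline(history)
print(is_anomalous(baseline, 3, 60_000))   # True: a big 3am upload stands out
print(is_anomalous(baseline, 14, 85_000))  # False: lost in afternoon commotion
```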

Like HIV, which is so pernicious because it uses the body's own defences to replicate itself, these new machine intelligences will target the very defences deployed against them. They will learn how the firewall works, which analytics models are used to detect attacks and the times of day the security team is in the office. They will then adapt to avoid and weaken those defences. All the while, they will use their growing strength to spread, creating inroads for compromise and contaminating devices with brutal efficiency.
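
As a toy illustration of the last of those – learning when the security team is around – consider the sketch below. It is deliberately simple and entirely hypothetical; the point is that the same analysis a defender can run on their own logs to see what they give away, a patient observer can run on activity it has watched.

```python
# Hypothetical sketch: inferring "quiet hours" from a log of activity
# timestamps. Log format and contents are invented for illustration.
from collections import Counter
from datetime import datetime

def quiet_hours(timestamps, n=3):
    """timestamps: ISO-8601 strings of security-console activity.
    Returns the n hours of the day with the least observed activity."""
    counts = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    # Hours with no observations at all rank as the quietest.
    ranked = sorted(range(24), key=lambda h: counts.get(h, 0))
    return ranked[:n]

# Toy log: analysts active between 09:00 and 18:00 on weekdays.
log = [f"2018-01-{day:02d}T{hour:02d}:15:00"
       for day in range(1, 6) for hour in range(9, 18)]
print(quiet_hours(log))  # [0, 1, 2] - the overnight window
```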

AI will also attack us by impersonating people. We already have AI assistants that do our scheduling, email on our behalf and ask us what we'd like to order for lunch. But what happens if your AI assistant gets taken over by a malicious attacker? Or, indeed, what happens when weaponised AI is refined enough to convincingly impersonate a real person you trust?

A stealthy, long-term AI presence on your network will have ample time to learn your writing style and how it shifts depending on whom you email; to map your contact base; and to distinguish your professional relationships from your personal ones by the language you use and the key themes in your conversations.

For example, you email your partner five times a day, particularly in the morning and afternoon. They sign their emails "X". Your football team emails weekly with details for Saturday's five-a-side games. They sign emails "Be there!". This is fodder for AI.
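
Here is a rough sketch of how such per-contact "fodder" might be distilled from a mailbox. The message format, field names and sign-off heuristic are invented for illustration; a real system would learn far richer signals.

```python
# Hypothetical sketch: distilling per-contact writing profiles from a
# mailbox. Message format and the sign-off heuristic (last non-empty
# line of the body) are invented for illustration.
from collections import defaultdict

def profile_contacts(messages):
    """messages: dicts with 'to', 'hour' and 'body' keys.
    Returns, per contact: message count, observed send hours, sign-offs."""
    profiles = defaultdict(lambda: {"count": 0, "hours": [], "signoffs": set()})
    for msg in messages:
        p = profiles[msg["to"]]
        p["count"] += 1
        p["hours"].append(msg["hour"])
        lines = [line for line in msg["body"].splitlines() if line.strip()]
        if lines:
            p["signoffs"].add(lines[-1].strip())
    return dict(profiles)

# Toy mailbox mirroring the example above.
mailbox = [
    {"to": "partner", "hour": 8,  "body": "Running late.\n\nX"},
    {"to": "partner", "hour": 17, "body": "Pasta tonight?\n\nX"},
    {"to": "five-a-side", "hour": 12, "body": "Saturday, 10am kickoff.\n\nBe there!"},
]
print(profile_contacts(mailbox))
```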

As for what we should do about these malicious AIs: they will be too clever and stealthy to combat with anything other than other AIs. This is one arena in which we'll have to give up control, not take it back.

This article was originally published by WIRED UK