Artificial intelligence – or “AI” for short – aims to replicate human thinking in as automated a way as possible. AI is also playing an increasingly important role in cyber security: it is being used both as an offensive weapon and to defend against cyberattacks. Which side will win in the end?
It is a nightmare for every cyber security expert: cyberattacks supported by artificial intelligence. Phishing emails that rely on social engineering and precisely analyse user behaviour would be many times more damaging with AI, which could help compose messages no longer distinguishable from those of real senders. Attacks would be intelligently automated, and malware campaigns would run faster and more effectively. Most threatening of all: with every failed attempt, the attacker learns from his mistakes and improves his techniques for the next attack.
Yet the same technology that gives cyber security experts headaches also offers the chance to strengthen their own defences against cyberattacks and to identify attackers more reliably. AI will therefore be both a curse and a blessing.
AI as a weapon of attack
Cybercriminals are increasingly using artificial intelligence as a weapon. With the help of penetration techniques, behavioural analysis and behavioural mimicry, AI can carry out attacks far faster, in a more coordinated manner and more efficiently – and against thousands of targets at the same time.
AI seeks vulnerabilities
Cyber attackers use AI that automatically scans a large number of interfaces in the victim’s IT environment for vulnerabilities. When a hit occurs, the AI can distinguish whether an attack on the vulnerability could cripple the system or whether it could serve as a gateway for malicious code.
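As a minimal sketch of this idea – with invented hosts, invented banners and a partly hypothetical vulnerability mapping – an automated scanner might match service versions against a database of known weaknesses:

```python
# Hypothetical mapping of service versions to known weaknesses.
# (Apache 2.4.49 path traversal is real; the OpenSSH entry is invented.)
KNOWN_VULNERABLE = {
    "OpenSSH 7.2": "remote code execution",
    "Apache 2.4.49": "path traversal",
}

def triage(banners):
    """Return (host, port, impact) for every banner with a known weakness."""
    hits = []
    for host, port, banner in banners:
        for version, impact in KNOWN_VULNERABLE.items():
            if version in banner:
                hits.append((host, port, impact))
    return hits

# Invented scan results for three internal hosts.
scan_results = [
    ("10.0.0.5", 22, "SSH-2.0-OpenSSH 7.2"),
    ("10.0.0.7", 80, "Apache 2.4.51 (Unix)"),
    ("10.0.0.9", 80, "Apache 2.4.49 (Unix)"),
]
print(triage(scan_results))
```

Only two of the three hosts are flagged; the patched Apache 2.4.51 host passes the check.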
Hackers are already offering AI-based systems on the darknet as “AI-as-a-Service”: ready-made IT solutions for criminal hackers without deep knowledge of artificial intelligence. This lowers the entry barrier for smaller hacker gangs as well.
AI-based systems already exist that can automatically guess passwords through machine learning. In addition, new threats to AI-protected IT networks are emerging.
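To illustrate the principle, here is a toy sketch of statistical password guessing: a character-level Markov model is “trained” on a handful of invented leaked passwords and greedily builds the most likely candidate. Real tools learn from millions of passwords, but the idea is the same:

```python
from collections import defaultdict

# Toy "training data": a hypothetical password leak.
leaked = ["password1", "password123", "passw0rd", "letmein1"]

# Count character transitions; ^ marks the start of a password, $ the end.
counts = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip("^" + pw, pw + "$"):
        counts[a][b] += 1

def most_likely(length=8):
    """Greedily follow the most frequent transitions to build one candidate."""
    out, cur = "", "^"
    while len(out) < length:
        # Break frequency ties deterministically by preferring the later character.
        nxt = max(counts[cur], key=lambda c: (counts[cur][c], c))
        if nxt == "$":
            break
        out += nxt
        cur = nxt
    return out

print(most_likely())
```

A real attack would sample many candidates in order of probability rather than emitting a single greedy guess.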
Most often, cybercriminals use AI in connection with malware sent via email. The malware can use AI to imitate user behaviour even more convincingly: intelligent text generators can produce messages of such high semantic quality that recipients find it very difficult to distinguish them from genuine mails.
Self-learning phishing attacks
Until now, adapting a phishing email to a sender’s writing style required human insight and background knowledge. With the help of AI systems, information available online can be extracted in a more targeted way to tailor websites, links or emails to the victim of an attack. AI systems learn from past mistakes and successes and improve their tactics with each attack.
AI as a shield
AI will play a major role in cyber security for threat detection and defence against cyberattacks. Learning algorithms are expected to recognise the behavioural patterns of attackers and their programs and take targeted action against them.
Time-saving pattern recognition
AI applications are particularly strong at recognising and comparing patterns, quickly filtering the essentials out of large amounts of data. This pattern recognition makes it easier to detect hidden channels through which data is being siphoned off – and far faster than human analysts could.
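A minimal sketch of such pattern recognition, using invented traffic figures: a host whose outbound volume deviates sharply from the learned baseline is flagged as a possible exfiltration channel:

```python
import statistics

# Invented baseline: normal daily outbound traffic (MB) learned over a week.
baseline = [120, 135, 110, 128, 122, 131, 118]
# Invented observations for today, per workstation.
observed = {"ws-01": 125, "ws-02": 980, "ws-03": 117}

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def suspicious(hosts, threshold=3.0):
    """Flag hosts more than `threshold` standard deviations above the mean."""
    return [h for h, mb in hosts.items() if (mb - mean) / stdev > threshold]

print(suspicious(observed))
```

Here only ws-02 is flagged; a production system would of course learn far richer features than a single daily byte count.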
Identifying spam mails
Traditional filtering methods that identify and classify spam emails using statistical models, blacklists or database solutions have reached their limits. AI solutions can help to identify and learn the complex patterns and structures of spam emails.
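As an illustration, a tiny naive-Bayes classifier – trained here on four invented messages – shows how learned word statistics can replace a static blacklist:

```python
import math
from collections import Counter

# Invented toy training data.
spam = ["win free prize now", "free money claim prize"]
ham = ["meeting agenda attached", "lunch tomorrow at noon"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.split())

spam_c, ham_c = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_c.values()), sum(ham_c.values())
vocab = set(spam_c) | set(ham_c)

def score(text, counts, total):
    # Laplace smoothing so unseen words don't zero out the probability.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.split())

def is_spam(text):
    return score(text, spam_c, spam_total) > score(text, ham_c, ham_total)

print(is_spam("claim your free prize"))   # True
print(is_spam("agenda for the meeting"))  # False
```

Because the word statistics are learned from examples, the filter adapts as new training mail arrives – something a fixed blacklist cannot do.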
Authenticating authorised users
Passive, continuous authentication is a future field for AI algorithms. Sensor data from accelerometers or gyroscopes is collected and evaluated while the device is in use. In this way, AI can prevent unauthorised use of the device.
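A simplified sketch of the idea, with invented sensor features: motion statistics recorded during use are compared with the enrolled owner’s profile, and a large deviation locks the session:

```python
import math

# Invented enrolled profile: e.g. mean acceleration, tap pressure, swipe speed.
enrolled = (0.8, 1.2, 0.3)

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_owner(sample, threshold=0.5):
    """Accept the session only if the sample is close to the enrolled profile."""
    return distance(sample, enrolled) < threshold

print(is_owner((0.78, 1.25, 0.31)))  # close to the profile
print(is_owner((1.9, 0.4, 0.9)))     # unfamiliar behaviour
```

A real system would learn the profile and the threshold from data instead of hard-coding them, and would update continuously as the owner’s habits drift.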
Detecting new malware
Conventional malware detection is mostly based on checking the signatures of files and programs. When a new form of malware appears, an AI system can compare it with previous forms in its database and decide whether it should be automatically warded off. In the future, AI could develop to recognise ransomware, for example, before it encrypts data.
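The conventional signature check described above can be sketched in a few lines – the hash database here contains a single invented entry:

```python
import hashlib

# Invented signature database: SHA-256 hashes of known malicious files.
signature_db = {hashlib.sha256(b"malicious payload").hexdigest()}

def is_known_malware(file_bytes):
    """Exact-match lookup: hash the file and check the signature database."""
    return hashlib.sha256(file_bytes).hexdigest() in signature_db

print(is_known_malware(b"malicious payload"))   # known signature
print(is_known_malware(b"harmless document"))   # not in the database
```

The weakness is obvious: changing a single byte of the malware produces a new hash and evades the lookup – which is exactly where learned similarity measures promise to go beyond exact matching.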
Spying on attackers via algorithms
Hackers almost always use infiltrated programs or commands. Artificial intelligence could learn which programs a malicious code opens, which files it overwrites or deletes, and which data it uploads or downloads. Based on such patterns, the trained AI algorithm can then watch for suspicious activity on users’ computers.
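A toy sketch of this behaviour-based approach: in practice the patterns would be learned by a model, but here a hand-written list of suspicious (action, target) pairs stands in for them:

```python
# Invented stand-in for learned behaviour patterns.
SUSPICIOUS_PATTERNS = [
    ("overwrite", "system32"),    # tampering with system files
    ("upload", "credentials"),    # exfiltrating secrets
    ("delete", "shadow_copy"),    # typical ransomware preparation
]

def score_activity(events):
    """Count observed events matching a suspicious (action, target) pattern."""
    return sum(1 for action, target in events
               for p_action, p_target in SUSPICIOUS_PATTERNS
               if action == p_action and p_target in target)

# Invented activity log from a monitored process.
events = [
    ("read", "report.docx"),
    ("delete", "C:/shadow_copy/0001"),
    ("upload", "browser_credentials.db"),
]
print(score_activity(events))  # two of three events match a pattern
```

A learned model would additionally weight patterns by how strongly they correlate with past attacks, rather than counting them equally.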
Deciphering the identity of attackers
AI algorithms could also soon uncover the identity of attackers, because programmers leave individual traces in their code – among other things, in the style of the comments they add to their program lines. Learning algorithms can extract these traces and thus assign the code to an author.
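A toy sketch of such code stylometry, with invented authors and deliberately simple features (comment density and average comment length): new code is attributed to the nearest stored profile:

```python
def features(code):
    """Extract (comment density, average comment length) from source text."""
    lines = code.splitlines()
    comments = [l.split("#", 1)[1] for l in lines if "#" in l]
    density = len(comments) / max(len(lines), 1)
    avg_len = sum(map(len, comments)) / max(len(comments), 1)
    return (density, avg_len)

# Invented author profiles built from previously attributed code.
profiles = {
    "author_a": (0.5, 40.0),  # comments often and verbosely
    "author_b": (0.1, 8.0),   # rare, terse comments
}

def attribute(code):
    """Assign code to the author whose profile is closest in feature space."""
    f = features(code)
    return min(profiles, key=lambda a: sum((x - y) ** 2
                                           for x, y in zip(f, profiles[a])))

sample = "x = 1  # initialise the counter used by the main processing loop\ny = 2\n"
print(attribute(sample))
```

Research systems use far richer features (identifier naming, indentation habits, syntax-tree statistics), but the nearest-profile principle is the same.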
A purely AI-based system can never be prepared
Cyber security should not be left exclusively to artificial intelligence. Only a team of humans and machines can succeed in the fight against cyberattacks, because the threat situation changes almost daily. New attack methods, new vulnerabilities and repeated human error create a complex mix of eventualities for which a purely AI-based system can never be prepared. That is why you should put your trust in our expertise.
Contact us so that we can jointly develop a cyber security concept for your company.
Tel: 030 95 999 8080