
Fortifying online defenses: SVALINN’s innovative approach to combatting AI cyber threats


The emergence of Artificial Intelligence has given rise to many new applications and software programs, allowing online users to complete tasks faster and more efficiently. This development has also fuelled a rise in AI-enabled cybercrime that exploits the complexities and vulnerabilities of this emerging technology. Building robust defense infrastructures against these threats will become vital as our society transitions into the digital age.

Future forward: Luxembourg’s leap into Artificial Intelligence (AI) is an FNR feature series highlighting Luxembourg’s top AI researchers, showcasing their findings and demonstrating the practical applications of AI research and its impact on society.

Dr Maxime Cordy is a research scientist at the University of Luxembourg’s Interdisciplinary Centre for Security, Reliability, and Trust (SnT). One of his main research interests is exploring how artificial intelligence impacts the domain of cybersecurity. Maxime has dedicated his research to developing reliable and responsible applications that protect online users against AI cyber threats, malware, and other virtual antagonists. 

AI is an accelerator for the good and the bad. It has enabled the rapid development of new and innovative technologies. But it has also opened the door to more threats and dangers that prey upon the vulnerabilities and capabilities of AI systems.
Dr Maxime Cordy Research scientist at the University of Luxembourg's Interdisciplinary Centre for Security, Reliability, and Trust (SnT)

SVALINN – protection against AI cyber threats

Maxime and his team have developed SVALINN, a software application that uses an unconventional mechanism to protect online users from AI cyber threats. SVALINN introduces a proprietary technology that obfuscates online items such as images, videos, and tabular data, making it harder for malicious actors to exploit AI and machine-learning systems. Before delving deeper into SVALINN’s practical properties, it is worth clarifying the two most prevalent types of cyberattacks on AI: (1) evasion attacks and (2) poison attacks.

Evasion attacks

Evasion attacks exploit AI’s reliance on machine learning. Specifically, evasion attacks introduce subtle perturbations, or “noise”, into the inputs an AI system processes, tricking machine-learning models into misclassifying them. In practical terms, this means evasion attacks can “obfuscate” malware files or fraudulent transactions, making them undetectable even to sophisticated AI systems.
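The mechanism can be sketched in a few lines. The following toy example uses made-up weights and a hypothetical linear fraud detector (not any real system mentioned in this article) to show how a tiny, sign-of-gradient perturbation flips a model’s decision:

```python
import numpy as np

# Toy linear "fraud detector": a positive score means the input is flagged
# as malicious. Weights and inputs are illustrative, made-up values.
w = np.array([1.0, -1.0])   # hypothetical model weights
x = np.array([0.6, 0.5])    # a malicious sample, correctly flagged

def predict(w, x):
    """Return 1 (malicious) if the linear score is positive, else 0 (benign)."""
    return int(w @ x > 0)

# Fast-gradient-sign-style evasion: for a logistic loss on a class-1 example,
# the loss gradient w.r.t. x points along -w, so the attacker nudges each
# feature by a small step eps in the direction that lowers the score.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(w, x))      # 1: the original sample is flagged
print(predict(w, x_adv))  # 0: the perturbed sample evades detection
```

The perturbation changes each feature by at most 0.1, yet the classifier’s decision flips; real evasion attacks apply the same idea in much higher dimensions, where the change is imperceptible.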

Poison attacks

Poison attacks directly inject faulty or “poisonous” data into an AI model’s training data. This process corrupts the model, so it no longer recognizes the faulty data as such. Poison attacks manipulate the machine-learning process by influencing and modifying the operation of the AI model. Let’s illustrate a poison attack with a real-life scenario: imagine a hypothetical bank customer named Mark attempting a fraudulent transaction. Ordinarily, the bank’s internal system would flag this transaction as suspicious and prevent it from going through. However, if the AI system has been tainted with corrupted data, as in a poison attack, it may fail to recognize the transaction as fraudulent, allowing it to proceed unchecked.
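Mark’s scenario can be sketched with a deliberately simple, made-up detector (nearest-centroid on one feature) to show how mislabelled training data shifts what the model learns; this is a generic illustration of label-flipping poisoning, not the bank’s or SVALINN’s actual system:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Return the mean feature value of each class (0 = legitimate, 1 = fraud)."""
    return {c: X[y == c].mean() for c in (0, 1)}

def classify(centroids, x):
    """Assign x to the class whose centroid is closest."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# Clean training data: legitimate transactions near 0, fraudulent ones near 4.
X_clean = np.array([0.0, 0.2, -0.2, 4.0, 4.1, 3.9])
y_clean = np.array([0,   0,    0,   1,   1,   1])

mark = 3.0  # Mark's fraudulent transaction
print(classify(nearest_centroid_fit(X_clean, y_clean), mark))  # 1: flagged

# Poisoning: the attacker slips in transactions that resemble Mark's but are
# labelled "legitimate", dragging the legitimate-class centroid toward 3.
X_poison = np.concatenate([X_clean, np.full(7, 3.0)])
y_poison = np.concatenate([y_clean, np.zeros(7, dtype=int)])

print(classify(nearest_centroid_fit(X_poison, y_poison), mark))  # 0: missed
```

After poisoning, the “legitimate” centroid moves from 0.0 to 2.1, closer to Mark’s transaction than the “fraud” centroid at 4.0, so the corrupted model waves the fraud through.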

Introducing “noise” to make AI systems unexploitable

“SVALINN takes a completely new approach in AI and cybersecurity. It uses the mechanisms of common AI cyber threats to its advantage and renders online data unexploitable by foreign entities.”

How does SVALINN enter the picture? And how does it protect online users against these types of cyberattacks? SVALINN “fools” malicious cyber agents by mirroring the attack mechanisms of evasion attacks. Taking the example of deepfakes (i.e., a video of a person in which their appearance has been digitally altered, typically to deceive the viewer), SVALINN conducts an evasion attack itself and changes the pixels in an undetectable manner, making it impossible for the deepfake system to generate any sensible or coherent image. SVALINN extends its protection to other areas of AI cybersecurity, such as watermarking, where it creates proof of authenticity and verification of origin for online data.
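Since SVALINN’s technology is proprietary, the following is only a generic sketch of the underlying idea: a stand-in feature extractor (a fixed random linear map) plays the role of the encoder a deepfake system uses to read an image, and a crafted, tiny perturbation distorts its features far more than random noise of the same size would:

```python
import numpy as np

# Stand-in "encoder": 64 pixels -> 8 features. All quantities are
# hypothetical; this illustrates protective noise in general, not SVALINN.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 64))
x = rng.uniform(0.0, 1.0, 64)   # a "pixel" vector standing in for an image

eps = 0.01  # per-pixel budget: small enough to be visually imperceptible

# Crafted noise: push the first encoder feature as hard as possible, using
# the same sign-of-gradient trick an evasion attack uses (here, defensively).
delta_crafted = eps * np.sign(M[0])
# Baseline: random noise with the same per-pixel magnitude.
delta_random = eps * np.sign(rng.standard_normal(64))

shift_crafted = abs(M[0] @ delta_crafted)  # = eps * ||M[0]||_1, the maximum
shift_random = abs(M[0] @ delta_random)

# Equal-sized pixel changes, but the crafted noise distorts the feature
# the downstream system relies on far more than random noise does.
print(shift_crafted > shift_random)
```

The point of the sketch is the asymmetry: the image looks unchanged to a human (every pixel moves by at most 0.01), yet the encoder’s representation is maximally disturbed, which is what makes the data unexploitable by the downstream AI system.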

Keeping up with the evolution of AI cyber threats

Maxime emphasizes that cyber-attacks will increase in the coming years as more AI systems and services are integrated into people’s personal lives and work spheres. Cyber-attacks such as deepfakes will become more subtle and complex, underlining the significance of continuously updating our knowledge of AI-related cyberattacks.

One of the strengths of AI is its ability to mimic human communication. Bad actors will continue to use this to their advantage and create cyber-attacks that will become harder and harder to detect.
Dr Maxime Cordy Research scientist at the University of Luxembourg's Interdisciplinary Centre for Security, Reliability, and Trust (SnT)

AI cyber threats are rapidly evolving, encompassing sophisticated techniques such as audio and video manipulation, identity theft, and financial fraud. Maxime underlines the importance of academia and the private sector in continuously monitoring the development of cyber threats and updating AI systems’ defense mechanisms accordingly.

SVALINN continues to adapt to the evolving cybersecurity landscape, extending its protective services to online mobile applications, and increasing public awareness of cyber threats.

Written by John Petit

John Petit is a communication consultant, holding a PhD in the field. His expertise lies in exploring the intersection of technology and society, with a particular focus on Artificial Intelligence (AI) and its impact on our daily lives and broader societal norms. John combines his academic knowledge with practical experience to engage in and facilitate meaningful discussions about the role AI will play in shaping our future.