Cybercriminals are arming up - now AI will be used to stop them
03.06.2025

By Peter Witten and Line Horndal Hjørne, AAU Communication and Public Affairs
Photo: Colourbox
You’ve probably received a text message or an email trying to trick you into clicking a risky link or visiting a fake website. In an instant, your phone or computer is infected with malicious software, your passwords are stolen, and you might even be scammed out of large sums of money.
Cybercrime is rapidly advancing and becoming harder to detect. Hackers have long been using AI, and digital threats are becoming increasingly sophisticated.
Now, the research project AI:DEFENCE from Aalborg University aims to develop new methods to detect and stop advanced cyberattacks, while also making AI technology more secure.
AI:DEFENCE is led by two AAU researchers: Professor Johannes Bjerva from the Department of Computer Science and Assistant Professor Qiongxiu (‘Jane’) Li from the Department of Electronic Systems.
Johannes Bjerva works with multilingual NLP and security in large language models, while Qiongxiu (‘Jane’) Li specializes in cybersecurity, privacy protection, and machine learning.
The project thus combines expertise in cybersecurity and natural language processing (NLP).
NLP is about understanding and working with human language, such as when we interact with a chatbot like ChatGPT or use Google Translate.
By analyzing language - for example in an email - AI:DEFENCE will be able to detect signs of fraud that humans or regular software might miss. With AI’s help, detecting scams should become easier.
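To make the idea concrete, here is a minimal, hypothetical sketch of the kind of linguistic signals such a system looks for in an email. Real systems like the ones AI:DEFENCE plans to build would use trained language models rather than hand-written rules; the cue list, weights, and threshold below are invented for illustration only.

```python
import re

# Hypothetical phishing cues and weights (not from AI:DEFENCE).
PHISHING_CUES = [
    (r"\burgent(ly)?\b", 2),                   # pressure language
    (r"\bverify your (account|password)\b", 3),# credential bait
    (r"\bclick (here|the link)\b", 2),         # call-to-action lure
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),    # link to a raw IP address
]

def phishing_score(text: str) -> int:
    """Sum the weights of all cues found in the text (case-insensitive)."""
    lower = text.lower()
    return sum(weight for pattern, weight in PHISHING_CUES
               if re.search(pattern, lower))

def looks_suspicious(text: str, threshold: int = 4) -> bool:
    """Flag the text if enough cues stack up."""
    return phishing_score(text) >= threshold
```

A language model generalizes far beyond fixed patterns like these, but the principle is the same: scoring a message on how strongly its wording resembles known fraud.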
The researchers behind AI:DEFENCE will use advanced techniques such as:
Encrypted AI, ensuring decisions are made securely and privately.
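One building block behind "encrypted AI" is computing on data without any single party seeing it. The sketch below illustrates additive secret sharing over a finite ring: each of two servers holds a random-looking share of a value, neither learns the value itself, yet sums can be computed on the shares and then reconstructed. This is a standard cryptographic primitive shown for illustration, not AI:DEFENCE's actual implementation.

```python
import secrets

RING = 2**32  # all arithmetic is done modulo this ring size

def share(value: int) -> tuple[int, int]:
    """Split a value into two additive shares; each alone reveals nothing."""
    r = secrets.randbelow(RING)
    return r, (value - r) % RING

def reconstruct(s0: int, s1: int) -> int:
    """Combine both shares to recover the original value."""
    return (s0 + s1) % RING

# Each server adds its shares locally; reconstructing the resulting
# shares yields the true sum, without either input ever being exposed.
a0, a1 = share(12)
b0, b1 = share(30)
total = reconstruct((a0 + b0) % RING, (a1 + b1) % RING)  # 42
```

Secure AI inference combines primitives like this (and homomorphic encryption) so a model can classify, say, an email without the server reading it in the clear.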
Examples of the digital threats AI:DEFENCE will target with AI:
By analyzing language in real time, these threats can be detected before they cause harm.
The tools developed by AI:DEFENCE will be used for things like chatbots that can’t be hijacked or manipulated, and for critical infrastructure like hospitals and energy grids, where preventing data leaks is crucial.
The systems from AI:DEFENCE will be available as server software or APIs that can easily integrate with existing security systems to upgrade digital defenses.
The tools will be open source, so both small businesses and large organizations worldwide can use them.
Large-scale cybercrime using AI is not just science fiction. The large British design and engineering firm Arup knows this all too well.
According to CNN, Arup reported a major fraud case to Hong Kong police in 2024. The scam was carried out using “fake voices and images.”
An employee in Hong Kong was tricked into joining a video meeting he believed was with the company’s CFO and other staff. According to police, the employee was deceived because the other participants looked and sounded like colleagues he knew. But they were AI-generated deepfakes.
CNN reports the employee was tricked into transferring 200 million Hong Kong dollars to the scammers - equivalent to nearly 170 million Danish kroner.
FACTS ABOUT AI:X LABS
Aalborg University’s AI:X Labs aim to promote AI research and deliver sustainable solutions. The goal is to develop new AI talent, strengthen interdisciplinary research, and build a strong international reputation for AI research. The five new AI:X Labs are:
AI:DEFENCE: A safer digital society through secure AI and LLM-based cybersecurity.
AI:EcoNet: Predicting species interactions in a changing world using AI.
AI:MIND: Improving youth mental health through conversational AI.
AI:Xpertise: Integrating AI into expert work.
AI:Cybernetics: Effective human-robot collaboration through AI.