Applied AI for Cybersecurity
Module Overview
This module introduces final-year undergraduate students to the practical use of artificial intelligence in cybersecurity. It focuses on how AI and machine learning can support real security tasks such as intrusion detection, phishing detection, malware analysis, threat monitoring, alert triage, and security analytics. The module also examines the risks of using AI in cyber systems, including adversarial attacks against machine learning models, data poisoning, model misuse, and security weaknesses in large language model applications.
The module is designed as an applied and critical course rather than a purely theoretical machine learning course. Students will learn how to frame cybersecurity problems as data-driven tasks, prepare and evaluate cyber datasets, apply suitable AI techniques, interpret results, and assess whether an AI-based solution is trustworthy enough for real deployment.
The module is suitable for Level 6 students with prior knowledge of basic cybersecurity, computer networking, and introductory programming.
Module Aims
The module aims to:
- develop students’ understanding of how AI is used in modern cybersecurity practice;
- build practical skills in preparing, analysing, and modelling cybersecurity data;
- enable students to apply and evaluate AI techniques for realistic cyber defence tasks;
- introduce the security, privacy, and reliability risks of AI-enabled systems;
- strengthen students’ critical judgement on the responsible and trustworthy deployment of AI in cybersecurity contexts.
Learning Outcomes
By the end of the module, students should be able to:
- Explain the main ways in which AI techniques are applied to cybersecurity problems.
- Prepare and analyse cybersecurity datasets using appropriate preprocessing and exploratory methods.
- Apply and compare suitable AI or machine learning methods for a defined cybersecurity task.
- Critically evaluate the effectiveness and operational limitations of AI-based cybersecurity systems.
- Explain and assess key attacks against AI systems, including adversarial machine learning and large language model risks.
- Communicate technical findings and deployment recommendations clearly in written and practical forms.
Indicative Prior Knowledge
Students are expected to have:
- basic knowledge of computer networks and cybersecurity concepts;
- introductory Python programming skills;
- familiarity with common data structures and basic statistics;
- willingness to engage with practical lab activities and short technical readings.
Teaching Strategy
The module is delivered over five teaching weeks and combines:
- lectures for concepts, methods, and case studies;
- lab sessions for hands-on experimentation and evaluation;
- seminar-style discussion for critical analysis of tools, results, and risks;
- directed independent study using readings, notebooks, and structured tasks.
The teaching approach emphasises applied understanding, evidence-based evaluation, and professional judgement rather than algorithm memorisation alone.
Assessment Strategy
A coursework-led assessment pattern is recommended for this module.
Assessment 1 — Lab Portfolio and Reflective Notes (40%)
Students complete practical weekly tasks and submit a short portfolio containing code outputs, observations, evaluation results, and reflective comments. The emphasis is on sound workflow, interpretation, and awareness of limitations.
Assessment 2 — Applied Case Study Report (60%)
Students produce an individual report in which they analyse a cybersecurity problem, apply or evaluate an AI-enabled solution, discuss results, identify risks, and make justified recommendations for deployment or improvement.
Weekly Structure
The module is organised around five themes:
Week 1 — Foundations of Applied AI for Cybersecurity
Introduction to AI in cyber defence, types of cyber data, common security use cases, and limitations of AI-based approaches.
Week 2 — Data, Features, and Classical Machine Learning for Security Analytics
Feature engineering, dataset quality, supervised and unsupervised learning, and practical model evaluation for security tasks.
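The kind of workflow covered in Week 2 can be sketched as follows. This is an illustrative example only, not a prescribed lab solution: it uses scikit-learn's synthetic data generator in place of a real labelled cyber dataset, and the class imbalance, feature count, and model choice are assumptions made for the sketch.

```python
# Illustrative sketch: training and evaluating a supervised classifier
# on synthetic data standing in for labelled network-flow features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split


def evaluate_baseline(random_state=0):
    """Train a baseline classifier and report precision/recall on held-out data."""
    # Synthetic stand-in for extracted features (e.g. packet counts, durations).
    # weights makes the positive "attack" class rare, as security data often is.
    X, y = make_classification(n_samples=1000, n_features=10,
                               weights=[0.9, 0.1], random_state=random_state)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, test_size=0.3, random_state=random_state)
    model = RandomForestClassifier(random_state=random_state).fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # Precision and recall matter more than accuracy on imbalanced security data.
    return {"precision": precision_score(y_test, y_pred, zero_division=0),
            "recall": recall_score(y_test, y_pred, zero_division=0)}


print(evaluate_baseline())
```

The split, stratification, and choice of precision/recall over plain accuracy reflect the evaluation practices the week focuses on for imbalanced security tasks.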
Week 3 — Deep Learning and Generative AI in Cybersecurity
Deep learning for cyber data, natural language processing for threat intelligence, and the use of generative AI and LLMs in security workflows.
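As a small taste of the text-analysis side of Week 3, the sketch below classifies short messages as phishing-like or benign using TF-IDF features and a Naive Bayes model. The four-message corpus and its labels are invented purely for illustration; real coursework would use a proper labelled email or URL dataset.

```python
# Illustrative sketch: text features for phishing-style message triage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; labels: 1 = phishing-like, 0 = benign.
messages = [
    "urgent verify your account password now",
    "click this link to claim your prize",
    "meeting agenda attached for tomorrow",
    "quarterly report draft for review",
]
labels = [1, 1, 0, 0]

# Pipeline: turn raw text into TF-IDF vectors, then fit a Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(messages, labels)
print(clf.predict(["verify your password to claim prize"]))
```

The same pipeline pattern extends to larger corpora and to deep or LLM-based text models discussed later in the week.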
Week 4 — Attacking and Defending AI Systems
Adversarial machine learning, poisoning, evasion, inference attacks, prompt injection, and defensive design strategies.
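To make the poisoning idea from Week 4 concrete, the sketch below simulates a label-flipping poisoning attack: a fraction of training labels is flipped before fitting, and the model is then scored on clean test data. The flip rate, synthetic data, and logistic-regression model are assumptions for illustration, not a specific attack from the literature.

```python
# Illustrative sketch: label-flipping data poisoning against a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def flip_labels(y, fraction, rng):
    """Return a copy of the binary labels y with a given fraction flipped (0 <-> 1)."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned


def accuracy_under_poisoning(fraction, seed=0):
    """Train on (possibly poisoned) labels; score on clean held-out data."""
    rng = np.random.default_rng(seed)
    X, y = make_classification(n_samples=1000, n_features=8, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    y_tr = flip_labels(y_tr, fraction, rng)  # the attacker's tampering step
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model.score(X_te, y_te)


clean = accuracy_under_poisoning(0.0)
poisoned = accuracy_under_poisoning(0.4)
print(clean, poisoned)
```

Comparing the clean and poisoned scores gives a simple, measurable entry point to the week's broader discussion of poisoning, evasion, and defensive design.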
Week 5 — Trustworthy Deployment, Governance, and Capstone Case Study
Robustness, explainability, privacy, ethics, governance, and integrated analysis of an AI-enabled cybersecurity deployment scenario.
Indicative Weekly Learning Pattern
Each week may include:
- Lecture: 2 hours
- Lab / Workshop: 2 hours
- Seminar / Discussion: 1 hour
- Independent Study: directed reading and practical follow-up
Indicative Reading Themes
Students will engage with material in the following areas:
- AI and machine learning for intrusion detection and anomaly detection;
- AI-assisted phishing, malware, and threat analysis;
- deep learning for security analytics;
- trustworthy and explainable AI;
- adversarial machine learning;
- security risks of large language model applications;
- governance, ethics, and operational assurance in AI-enabled security systems.
A separate references page with detailed weekly readings and recommended sources may be provided on the course website.
Software and Practical Environment
Indicative tools for practical work may include:
- Python
- Jupyter Notebook
- pandas
- scikit-learn
- matplotlib
- Wireshark or similar tools for working with sample packet capture data
- selected open security datasets
- optional LLM interfaces or simulated prompt-based workflows for safe classroom exercises
Professional and Academic Skills Developed
This module supports the development of:
- technical problem framing;
- data analysis and feature interpretation;
- critical comparison of alternative methods;
- evidence-based decision making;
- awareness of operational and ethical risk;
- concise technical reporting;
- practical use of AI tools in a security context.
Employability Relevance
The module prepares students for roles and pathways related to:
- security operations and SOC analysis;
- cyber threat analysis;
- security analytics and anomaly detection;
- AI-assisted security engineering;
- responsible deployment of data-driven cyber defence tools;
- further study in AI, cybersecurity, or applied data science.
Academic Integrity and Responsible Use of AI
Students are expected to use AI tools responsibly and transparently. Any use of AI-assisted coding, text generation, or analytical support in assessed work must comply with university regulations and module guidance. Students must remain accountable for the correctness, originality, and ethical integrity of submitted work.
Summary
Applied AI for Cybersecurity is intended to give students a realistic, current, and critical understanding of the role of AI in cyber defence. The module balances practical experimentation with reflective judgement, helping students understand not only what AI can do in cybersecurity, but also where it fails, how it can be attacked, and how it should be evaluated before deployment.