Assessment
Applied AI for Cybersecurity
This page outlines the assessment structure for the module.
The assessment design reflects the applied and critical nature of the course. Students are expected not only to use AI techniques, but also to evaluate them responsibly, interpret results carefully, and make defensible deployment judgements.
Assessment Philosophy
This module is better assessed through coursework than through a traditional closed-book exam.
That is because the key learning goals of the module include:
- problem framing;
- practical analysis;
- evaluation of evidence;
- interpretation of limitations;
- professional judgement;
- written communication.
These are better demonstrated through applied work than through memory-based testing.
Recommended Assessment Pattern
Assessment 1 — Lab Portfolio and Reflective Notes (40%)
Format
A portfolio built from the practical work carried out across the five weeks.
Indicative contents
- selected notebook outputs;
- brief technical summaries;
- metric interpretation;
- reflections on model limitations;
- observations on reliability, risk, or deployment suitability.
Suggested length
Equivalent to approximately:
- 5 short weekly entries, or
- 1 compiled portfolio document with supporting appendices.
Purpose
This assessment rewards steady engagement with the practical side of the module and encourages students to interpret rather than merely execute.
Assessment 2 — Applied Case Study Report (60%)
Format
An individual report analysing an AI-enabled cybersecurity problem, system, or deployment scenario.
Expected structure
Students should typically cover:
- the cybersecurity problem and context;
- the role proposed for AI;
- the available data and its limitations;
- the modelling or workflow approach;
- evaluation criteria and interpretation of results;
- AI-specific risks such as drift, adversarial weakness, hallucination, or privacy issues;
- governance, monitoring, and deployment recommendations.
Suggested length
A moderate report length is recommended, for example:
- 2,500 to 3,500 words, or an equivalent department-approved format.
Purpose
This assessment tests the student’s ability to integrate technical analysis with critical judgement.
Alternative Assessment Pattern
If a presentation component is desired, the following alternative can also work:
- Lab Portfolio: 30%
- Applied Case Study Report: 50%
- Short Presentation or Poster: 20%
This version is useful if you want stronger emphasis on professional communication and employability.
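For instructors checking how the two weighting patterns combine into a final module mark, the arithmetic can be sketched as follows. This is illustrative only: the component marks are hypothetical, and `module_mark` is not part of any real marking system.

```python
# Illustrative only: combining component marks under the two weighting
# patterns described above. All marks below are hypothetical.

def module_mark(components):
    """Weighted average of (mark, weight) pairs; weights must sum to 1.0."""
    total_weight = sum(w for _, w in components)
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(mark * w for mark, w in components)

# Recommended pattern: portfolio 40%, case study report 60%
recommended = module_mark([(62, 0.40), (68, 0.60)])              # ≈ 65.6

# Alternative pattern: portfolio 30%, report 50%, presentation 20%
alternative = module_mark([(62, 0.30), (68, 0.50), (70, 0.20)])  # ≈ 66.6
```

Note that under either pattern the case study report dominates the final mark, which matches its role as the main test of integrated judgement.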
Assessment Brief Guidance
Assessment 1 — Portfolio Guidance
Students should not treat the portfolio as a raw dump of notebooks. A stronger portfolio will:
- select the most relevant outputs;
- explain what was done and why;
- interpret the meaning of the results;
- comment on weaknesses or failure modes;
- reflect on what the exercise reveals about AI in cybersecurity.
A good portfolio entry may include:

- brief context;
- method used;
- result summary;
- interpretation;
- limitation;
- reflection.
Assessment 2 — Case Study Guidance
Students may choose from instructor-provided scenarios or, with approval, a self-selected topic.
Suitable example topics
- phishing detection workflow;
- malicious URL classification system;
- AI-assisted SOC alert triage;
- anomaly detection for network traffic;
- LLM-assisted incident summarisation;
- secure deployment review of an AI-enabled cyber tool.
Important expectation
The report should not be a generic literature summary. It should demonstrate:
- structured analysis;
- evidence-based comparison;
- risk awareness;
- clear recommendation.
Suggested Marking Criteria
Assessment 1 — Lab Portfolio
| Criterion | Indicative focus |
|---|---|
| Technical completion | Has the student completed the required practical work correctly and coherently? |
| Quality of analysis | Are the methods, outputs, and comparisons interpreted sensibly? |
| Understanding of evaluation | Does the student use and discuss metrics appropriately? |
| Reflection and critical judgement | Does the student recognise limitations, risks, and realistic implications? |
| Clarity of communication | Is the work clearly presented and easy to follow? |
Assessment 2 — Applied Case Study Report
| Criterion | Indicative focus |
|---|---|
| Problem framing | Is the cybersecurity problem clearly defined and well understood? |
| Use of evidence | Are claims supported by readings, data, results, or sound technical reasoning? |
| Technical analysis | Is the AI approach explained and evaluated appropriately? |
| Risk and limitation analysis | Are weaknesses, threats, and deployment issues discussed meaningfully? |
| Recommendation quality | Is the final judgement well justified and professionally credible? |
| Structure and communication | Is the report clear, coherent, and well organised? |
Indicative Grade Characteristics
High-performing work
Typically shows:
- strong technical understanding;
- correct and careful evaluation;
- clear problem framing;
- thoughtful interpretation of limitations;
- mature deployment judgement;
- strong structure and writing.
Mid-range work
Typically shows:
- reasonable technical understanding;
- some correct analysis;
- partial awareness of limitations;
- adequate but underdeveloped recommendations.
Weak work
Typically shows:
- superficial model use;
- reliance on accuracy as the sole metric;
- weak or absent critical reflection;
- poor understanding of dataset limitations;
- vague or unsupported conclusions.
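The "accuracy alone" pitfall named above is worth making concrete, because it is the single most common weakness in student evaluation work. The sketch below uses a hypothetical, deliberately trivial detector on an imbalanced dataset; no real model or library is implied.

```python
# Illustrative only: why accuracy alone misleads on imbalanced cyber data.
# A "detector" that labels every event benign on a dataset with 1% attacks
# still scores 99% accuracy while catching no attacks at all.

labels = [1] * 10 + [0] * 990   # 10 attacks among 1,000 events
predictions = [0] * 1000        # hypothetical model: everything "benign"

# Accuracy: fraction of events labelled correctly.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Recall: fraction of actual attacks the model caught.
true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_positives / sum(labels)

print(accuracy)  # 0.99
print(recall)    # 0.0
```

Work that reports the 0.99 without the 0.0 is exactly what the "weak work" descriptor captures; stronger work discusses precision, recall, and the operational cost of each error type.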
Assessment Mapping to Learning Outcomes
| Learning Outcome | Portfolio | Case Study Report |
|---|---|---|
| Explain AI applications in cybersecurity | Partial | Strong |
| Prepare and analyse cyber datasets | Strong | Partial |
| Apply and compare suitable AI methods | Strong | Strong |
| Critically evaluate effectiveness and limitations | Strong | Strong |
| Explain and assess attacks against AI systems | Partial | Strong |
| Communicate technical findings clearly | Strong | Strong |
Academic Integrity
Students must comply with university regulations on academic integrity and responsible AI use.
Key expectations
- all submitted work must be the student’s own intellectual work;
- any external assistance, including AI tools, must be acknowledged where required by university policy;
- students remain responsible for the correctness, originality, and integrity of their submissions;
- fabricated references, invented outputs, or unverified AI-generated claims are unacceptable.
Good practice
If AI tools are permitted for limited support, students should be encouraged to state briefly:
- what tool was used;
- how it was used;
- how they checked the result.
Submission Guidance
A simple and consistent submission process is recommended.
For the portfolio
Students may submit one of the following:
- one compiled PDF report with appendix screenshots;
- one zipped notebook package;
- one notebook plus a short reflective summary.
For the case study report
Students should submit:
- a single report document in the required format;
- references in the required citation style;
- appendices only where useful and proportionate.
Feedback Strategy
Good feedback in this module should comment on:
- the student’s technical decisions;
- interpretation of evidence;
- awareness of risk;
- realism of recommendations;
- communication quality.
Feedback should aim to help students improve both their technical reasoning and their professional judgement.
Suggested Timing
A sensible structure for a five-week block is:
- Weeks 1–5: lab activities and portfolio development
- End of Week 5 or shortly after: portfolio submission
- One to two weeks later: case study report submission
This allows students to use the full module content when writing their final analysis.
Summary
The assessment design of this module is intended to reward:
- practical engagement;
- careful evaluation;
- critical reasoning;
- responsible use of AI;
- professional communication.
In other words, students are assessed not only on whether they can build or use AI techniques, but on whether they can judge their value and trustworthiness in cybersecurity practice.