AI-Driven Techniques for Software Vulnerability Detection: Integrating Technical Innovation with Legal and Ethical Standards
Summary
As AI continues to advance, its application in cybersecurity offers unprecedented opportunities to detect and prevent software vulnerabilities. However, these advances bring significant challenges, particularly in meeting stringent legal standards for data privacy, transparency, and accountability. Regulatory frameworks such as the GDPR, the CCPA, and the EU AI Act demand that AI-driven security tools be not only effective but also transparent, privacy-preserving, and legally compliant. This project aims to bridge the gap between cutting-edge AI security techniques and the regulatory landscape by developing robust, explainable, and compliant AI systems for software vulnerability mitigation. Key areas of focus include enhancing adversarial robustness to defend against sophisticated cyber threats, implementing eXplainable AI (XAI) to fulfil legal transparency requirements, and integrating privacy-preserving methods to protect sensitive data. This PhD research will contribute both technically and legally to the field, offering a framework for compliant AI-driven security solutions that support privacy and accountability in software vulnerability detection.
Aim
To design, implement, and evaluate AI-driven techniques for software vulnerability detection that are both robust and compliant with legal and ethical standards. This research will address technical aspects of security, including adversarial robustness and explainability, while navigating the legal complexities surrounding data privacy, transparency, and accountability, especially within high-stakes sectors such as finance, healthcare, and critical infrastructure.
Research Objectives
- Develop and Test AI Models: Build robust AI-based models for software vulnerability detection, emphasizing accuracy and regulatory compliance.
- Adversarial Defence Mechanisms and Legal Implications: Implement defence strategies against adversarial attacks, such as adversarial training, and analyse legal implications around failure cases, particularly where AI models operate autonomously or with limited human oversight.
- Explainability and Legal Compliance: Incorporate XAI techniques to meet legal standards for transparency and “right to explanation” in AI-driven security applications. Evaluate whether explainability impacts legal compliance positively by improving accountability and auditability.
- Privacy-Preserving Techniques: Integrate privacy-preserving methods like federated learning to enable decentralized, compliant vulnerability detection. Assess whether these techniques align with data protection laws around data sovereignty, minimization, and secure cross-border data transfer.
- Accountability Framework for AI-Driven Software Security: Develop a framework outlining accountability and liability in cases where AI-driven software security systems fail, providing a structure that aligns with existing laws and ethical guidelines while anticipating future regulatory needs.
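By way of illustration only, the adversarial-training defence mentioned in the objectives can be sketched in a few lines. The toy logistic-regression "detector", the FGSM-style perturbation, and the feature vectors below are hypothetical assumptions for this sketch, not part of the proposed system.

```python
# Minimal adversarial-training sketch (hypothetical, illustrative only):
# a toy logistic-regression "detector" is trained on clean inputs plus
# FGSM-style perturbed copies, hardening it against small perturbations.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method: nudge each feature in the direction
    that most increases the logistic loss for the true label y."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]  # dLoss/dx for logistic loss
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

def train(data, eps=0.1, lr=0.5, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            # train on the clean sample AND its adversarial counterpart
            for xi in (x, fgsm_perturb(x, w, b, y, eps)):
                p = sigmoid(sum(wi * v for wi, v in zip(w, xi)) + b)
                w = [wi - lr * (p - y) * v for wi, v in zip(w, xi)]
                b -= lr * (p - y)
    return w, b

# Toy "vulnerable vs. safe" feature vectors (invented data).
data = [([1.0, 0.9], 1), ([0.9, 1.0], 1), ([-1.0, -0.8], 0), ([-0.9, -1.1], 0)]
w, b = train(data)
# The hardened model should still classify a perturbed input correctly.
x_adv = fgsm_perturb([1.0, 0.9], w, b, 1, 0.1)
print(sigmoid(sum(wi * v for wi, v in zip(w, x_adv)) + b) > 0.5)
```

In a real evaluation the same idea would be applied to learned vulnerability-detection models rather than a two-feature toy, but the training loop structure (clean batch plus adversarial batch) is the same.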
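The transparency objective can likewise be illustrated with a minimal, hypothetical explanation routine: for a linear scoring model, per-feature contributions (weight × value) can be logged with every decision to support audit trails and "right to explanation" requests. The weights and feature names below are invented for illustration.

```python
# Hypothetical per-decision explanation for audit purposes: with a linear
# detector, each feature's contribution to the score is weight * value,
# which can be ranked and logged alongside every flagged finding.
def explain(w, b, x, feature_names):
    contributions = {name: wi * xi
                     for name, wi, xi in zip(feature_names, w, x)}
    score = sum(contributions.values()) + b
    # rank features by how strongly they pushed the decision
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Illustrative weights and feature names; all values are assumptions.
w, b = [2.0, -0.5, 1.5], -1.0
features = ["unsanitized_input_calls", "code_churn", "taint_flow_depth"]
score, ranked = explain(w, b, [1.0, 0.2, 0.8], features)
print(ranked[0][0])  # the feature that dominated this decision
```

For non-linear models the same interface could be filled by model-agnostic attribution methods (e.g. SHAP- or LIME-style explainers); the point of the sketch is that each automated decision carries a machine-readable justification.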
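Finally, the privacy-preserving objective can be sketched with a minimal federated-averaging (FedAvg-style) loop: each client fits a model parameter on its private samples, and only parameters, never raw data, leave the client. The one-parameter model, client split, and data below are hypothetical.

```python
# Minimal federated-averaging sketch (hypothetical): clients refine a
# shared parameter on private data; the server averages the results,
# weighted by each client's sample count. Raw data never leaves a client.
def local_update(global_w, samples, lr=0.1, steps=50):
    w = global_w
    for _ in range(steps):
        # gradient of mean-squared error toward the local data mean
        grad = sum(w - s for s in samples) / len(samples)
        w -= lr * grad
    return w

def fed_avg(clients, rounds=5):
    w = 0.0
    for _ in range(rounds):
        local = [local_update(w, data) for data in clients]
        # weight each client's model by its number of samples
        total = sum(len(d) for d in clients)
        w = sum(lw * len(d) for lw, d in zip(local, clients)) / total
    return w

clients = [[1.0, 1.2, 0.8], [2.0, 2.2], [0.5]]  # private per-client data
w = fed_avg(clients)
print(round(w, 2))
```

A compliance analysis would then ask whether such parameter exchange satisfies data-minimisation and cross-border transfer rules, since model updates can still leak information without additional safeguards such as secure aggregation or differential privacy.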
Potential Candidate Profile
A good academic background in Cybersecurity, Computer Science, Artificial Intelligence, or a related discipline, with a Bachelor's Honours degree in Computer Science or equivalent experience. Proficiency in programming languages such as Python and experience in developing AI-based models are expected; experience in secure software development and in the privacy and legal aspects of AI is a plus. The candidate should demonstrate an interest in high-quality research and a commitment to advancing cybersecurity and privacy.
Related Work
- Senanayake, J., Kalutarage, H., Petrovski, A., Piras, L. and Al-Kadri, M.O., 2024. Defendroid: Real-time Android code vulnerability detection via blockchain federated neural network with XAI. Journal of Information Security and Applications, 82, p.103741.
- Senanayake, J., Kalutarage, H., Al-Kadri, M.O., Petrovski, A. and Piras, L., 2023. Android source code vulnerability detection: a systematic literature review. ACM Computing Surveys, 55(9), pp.1-37.
- Rajapaksha, S., Senanayake, J., Kalutarage, H. and Al-Kadri, M.O., 2022, December. AI-powered vulnerability detection for secure source code development. In International Conference on Information Technology and Communications Security (pp. 275-288). Cham: Springer Nature Switzerland.