Gen AI Model Management in Financial Services: Explainability, Transparency, and Lifecycle Monitoring
Explore risks and controls for Generative AI deployed in financial services. Investigate explainability, fairness, and lifecycle monitoring of LLMs in collaboration with a leading UK financial institution. Contribute to safer, more transparent AI-driven finance.
Project Description
Project Overview
The financial services industry is rapidly adopting Generative AI (GenAI), particularly large language models (LLMs), to improve efficiency and decision-making. This project aims to explore risks, controls, and management principles for GenAI deployment. Topics include explainability techniques, confidence scoring, bias mitigation, stochastic behaviour, input sensitivity, prompt engineering, and future-focused research on agentic AI and domain-specific small language models.
What You Will Do
The student will evaluate interpretability methods such as SHAP (SHapley Additive exPlanations), ICE (individual conditional expectation) plots, LIME (Local Interpretable Model-agnostic Explanations), and counterfactual explanations to support transparency. They will analyse the confidence of model outputs, assess bias and fairness, examine output variability and robustness, and investigate prompt engineering strategies. The work will be conducted in collaboration with Nationwide Building Society’s data science teams, providing access to real-world data and opportunities for applied impact.
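The advert does not prescribe a particular implementation, but the Shapley-value idea underlying SHAP can be illustrated with a minimal, dependency-free sketch: each feature's attribution is its average marginal contribution across all coalitions of the other features, with absent features replaced by a baseline value. The model, inputs, and baseline below are hypothetical toy values, not part of the project.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley feature attributions for model f at point x.

    Features outside a coalition are set to their baseline value,
    mirroring the replacement scheme used by SHAP-style explainers.
    Exponential in the number of features, so only viable for toy cases.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "score" model, so attributions can be checked by hand:
# for a linear model, phi_i = w_i * (x_i - baseline_i).
model = lambda z: 2.0 * z[0] + 1.0 * z[1] - 0.5 * z[2]
print(shapley_values(model, x=[1.0, 2.0, 4.0], baseline=[0.0, 0.0, 0.0]))
# → [2.0, 2.0, -2.0]
```

A useful sanity check is the efficiency property: the attributions sum to `f(x) - f(baseline)` (here 2.0), which holds for any model, not just linear ones.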
Expected Outcomes
The research will develop methodologies to ensure the safe, sound, and compliant use of GenAI in financial services. Deliverables include improved explainability frameworks, bias mitigation tools, dynamic prompt optimisation methods, and comparative evaluations of LLMs against traditional machine learning models. The collaboration aims to produce impactful insights that contribute to both academia and industry.
Why This Matters
As financial institutions integrate AI deeply into critical processes, ensuring these models are transparent, fair, and well-managed is essential for maintaining trust, meeting regulatory demands, and safeguarding customer interests. This research addresses urgent challenges in AI governance in finance, advancing both knowledge and practice.
Supervisor Profile
Dr JW Gillard supervises research on mathematical and statistical methods applied to AI and data science, with particular emphasis on model interpretability and risk management in practical domains such as finance. His expertise aligns with advancing the explainability and robustness of AI systems in regulated industries.