firsttime
Image
Capturing the challenges of AI adoption across financial services
Description *

Artificial intelligence (AI) is emerging as a transformational force within the Financial Services (FS) industry, offering a wide range of opportunities alongside complex challenges. From reshaping customer engagement to driving operational efficiency, AI is fast becoming a cornerstone of the sector’s digital evolution. However, as with all innovation, it brings with it critical considerations around data quality, regulation, and organisational readiness. 



In November 2024, LASR set out to explore the most pressing barriers to AI adoption within FS, with particular focus on security-related concerns. 4 key challenge areas emerged: 

  • Regulation – The need to adopt AI responsibly in line with compliance and evolving regulatory frameworks. 
  • Expertise – The requirement for specialist knowledge of AI applications specific to FS. 
  • Return on Investment (ROI) – Balancing financial costs with measurable efficiency and business gains. 
  • Technology Integration – The challenge of embedding AI into existing legacy systems. 


To better understand these themes, LASR hosted an in-person workshop with 30 participants across 15 organisations, including representatives from the LASR partnership, financial institutions such as Lloyds Banking Group, Barclays, and Starling Bank, and SMEs including Advai, SymphonyAI, and Lexverify. 


This blog presents a summary of the insights shared during that session. 


Understanding the financial services AI adoption landscape 

What is the opportunity for financial services? 

AI gives FS organisations the ability to move beyond one-size-fits-all offerings, particularly for customer segments such as mass affluent. Its potential for human-centric interaction enables better real-time understanding of customer needs and helps ensure recommendations are both beneficial for the client and operationally feasible. Internally, AI can drive productivity by automating fraud detection, improving recruitment processes, streamlining debugging, enhancing traditional cyber security activities, and supporting the development of proprietary tools. This translates into scalable efficiency gains across entire technology estates. 


What is holding adoption back? 

Despite its promise, AI adoption in FS is being held back by a number of interconnected barriers. Poor data quality, inadequate access controls, and legacy infrastructure all complicate system integration. The existing skills gap is significant, with AI expertise often confined to silos and leadership teams lacking the technical insight needed for strategic direction. Regulatory uncertainty, further compounded by a lack of global consistency and the rapid pace of AI development, adds a heavy compliance burden. Proving ROI is difficult, especially when value lies in risk mitigation rather than revenue generation. Meanwhile, security concerns around open-source models and lack of explainability continue to prompt caution. 


Challenge Deep Dives 

Regulation 

AI regulation within FS must strike a delicate balance between enabling innovation and ensuring robust oversight. Clear and consistent frameworks can inspire confidence and drive responsible adoption. However, the absence of AI-specific regulation leaves room for innovation but also introduces uncertainty. Multinational firms must navigate varying regional rules, while many regulators lack the technical expertise to respond effectively to the technology’s pace. Solutions include developing AI sandboxes for experimentation, forming cross-sector regulatory alliances, and embedding technical specialists in policy teams. 


Expertise 

There is a critical shortage of talent at the intersection of AI, cyber security and financial services. This shortage contributes to organisational resistance, underuse of technology, and difficulty aligning AI with strategic goals. Boards in particular often lack the depth to lead informed AI discussions. To bridge this gap, AI should become a baseline skill for all employees, with training tailored by role. Ethical hacking, prompt design and red teaming exercises can enhance understanding of AI risks, while multi-disciplinary teams and expert advisory panels can help align initiatives with long-term objectives. Structured, low-risk experimentation and shared learning platforms will also build confidence and foster industry-wide capability. 


Return on Investment 

Determining ROI for AI remains challenging due to inconsistent metrics, evolving benchmarks, and a focus on risk reduction over revenue gains. In many cases, lessons from AI projects, particularly failures, are not shared and limit broader learning. Competitive pressures further discourage transparency. To navigate this, firms should begin with small-scale pilots to validate impact and use dynamic tools that adjust ROI metrics as projects evolve. Scenario planning and predictive modelling powered by AI itself can help leaders anticipate returns across different stages of adoption. Creating shared repositories of anonymised case studies can support standardisation and reduce duplication of effort across the sector. 


Technology Integration 

Embedding AI into FS systems is not without its complications. Integration is hindered by legacy infrastructure, data silos, opaque vendor models, and limited frameworks for managing AI vulnerabilities, with open-source models introducing new security risks. However, there are clear opportunities: dynamic governance platforms that provide real-time monitoring and compliance automation, gamified employee training on security and risk, and federated learning approaches that maintain data privacy while training models. Encouraging vendor transparency, creating benchmarking tools, and proactively researching emerging risks will help ensure safer, more effective integration. 


What does this mean for AI security? 

AI adoption within legacy banks remains at an early stage, with many institutions currently implementing isolated tools such as Microsoft Co Pilot to drive modest small-scale organisational efficiency gains. While this limited scope leaves little room for exploring advanced AI security tooling at present, it offers a valuable opportunity to embed robust AI security principles from the outset, drawing on lessons from other industry sectors. 


As a next step, LASR is keen to explore how smaller challenger banks and fintechs are adopting AI, with a view to identifying more advanced use cases and evaluating the specific security considerations that may arise. 


Future Ways of Working 

The successful adoption of AI in FS won’t rely on technology alone. A collaborative, system-wide approach involving government, academia and industry is essential. By aligning AI strategies with real-world challenges and sharing anonymised data and experiences, the sector can foster resilience and responsible innovation. Establishing shared forums, creating co-development pathways with startups, and enabling secure spaces for open discussion will support inclusive, scalable AI adoption. Raising the baseline of AI understanding across the sector, supporting upskilling, and embedding a culture of experimentation and transparency will be key to building the future of financial services. 



Stuart Nelson, Innovation Associate at Plexal