Securing AI's future: Lessons from the Singapore AI Cyber Security Readiness workshop

As artificial intelligence becomes central to digital transformation across sectors, securing it is no longer optional - it's critical. AI systems now power decisions in healthcare, finance, and law enforcement, introducing unique risks that traditional cyber security frameworks were not designed to manage.



To tackle these challenges, LASR partners Plexal and Oxford University's Global Cyber Security Capacity Centre (GCSCC) co-hosted the ‘Singapore AI Cyber Security Readiness Workshop’ in March 2025, bringing together experts and policymakers from across Southeast Asia to explore best practices and readiness frameworks. 



AI adoption is outpacing oversight 

AI deployment is accelerating at unprecedented rates. Workshop participants noted that processes that once took weeks now take minutes, with some organisations integrating AI even before hiring specialised staff. This efficiency comes with significant blind spots. 



The workshop surfaced a common challenge seen globally, even among more advanced ecosystems like Singapore’s: AI systems are often deployed rapidly without full consideration of their lifecycle risks. Participants reflected on how issues like bias, adversarial manipulation, and data integrity can sometimes be underexplored in the drive to innovate. As one university participant aptly asked: “How do you know the data training the model is clean?” 



The lifecycle security challenge 

The CSA's AI security guidelines provided a framework for evaluating challenges across the AI lifecycle. Workshop participants identified key pain points: 

  • Most critical: Raising awareness and AI literacy across organisations 
  • Most misunderstood: Operations and maintenance costs and complexity 
  • Most expensive: Securing AI supply chains, especially with third-party models 
  • Most overlooked: End-of-life management, where decommissioning protocols remain unclear 


While secure-by-design is ideal, real-world development - particularly in startups - often sacrifices security for speed. Without proper incentives or enforcement, security becomes merely a cost centre rather than a fundamental design principle. 


Market barriers slow security innovation 

Despite growing demand, the AI security market faces significant structural barriers. Misalignment between academic research and commercial applications, fragmented standards, and complex procurement cycles all impede progress. 



Small and medium enterprises face particular difficulties establishing trust without recognised assurance frameworks. Technology investment has disproportionately favoured generative AI, leaving other high-risk ML applications, such as those in manufacturing or supply chain optimisation, with inadequate security solutions. 



The workshop highlighted how AI's inherent opacity further complicates security efforts, making it difficult to build shared understanding across sectors with varying risk profiles. 



Building global readiness 

The workshop also focused on the National AI Cyber Security Readiness Metric, designed to help countries benchmark their preparedness. Already tested in Mongolia, Cyprus, and now Singapore, the metric identifies gaps across regulation, awareness, incident response capabilities, and workforce capacity. 



Participants emphasised that while national approaches must adapt to regional contexts, they must also support global interoperability. ASEAN's regional framework was identified as increasingly vital for coordinated action on AI risks throughout Southeast Asia. 



The path forward: security in AI's DNA 

The Singapore workshop confirmed that AI security isn't a single challenge but a complex ecosystem requiring coordinated responses across policy, design, education, and international cooperation. 



As AI systems grow more powerful and pervasive, embedding security throughout their lifecycle becomes essential. Through better standards, shared metrics, and inclusive dialogues like this workshop, the global AI cyber security community is shaping a collective response to emerging threats. 



The question isn't whether we can secure AI's future - it's whether we can move quickly enough to keep pace with its rapid evolution. 



This workshop builds on the World Economic Forum's 2024 report: "Artificial Intelligence and Cyber Security: Balancing Risks and Rewards", developed with the GCSCC. The report outlines a 7-step approach to AI cyber risk management that influenced the workshop's discussions and continues to shape international policy.