
Software supply chains now face the threat of pre-trained AI models carrying hidden backdoors, trojans lying dormant until triggered, and AI tools capable of exfiltrating sensitive data. Complex dependency chains mean that a single compromised component can jeopardise entire systems.
We are calling on solution providers and innovators to join us in tackling these challenges head-on by building the frameworks and tools that will ensure AI technologies remain safe, trustworthy, and resilient across every layer of our critical infrastructure.
To have confidence in AI systems, we need a foundation of trust underpinned by AI security: protecting and safeguarding AI systems, models, infrastructure, and data from cyber threats and malicious attacks.
This LASR opportunity call is delivered by Plexal and sponsored by Cisco. It is an outcome-focused initiative bringing together industry and LASR to address near-term challenges through collaborative problem solving. It will enable SMEs to pursue innovative technology ideas and demonstrate feasibility while leveraging external expertise, capabilities and resources.
Over the course of eight weeks, successful applicants will develop new AI security capabilities. The opportunity call provides the funding, mentorship, and access to tooling required to validate and test these capabilities.
Why join?
Companies will gain access to:
- Funding to support capability development and demonstration
- Support to develop and demonstrate new capabilities
- Mentoring from Cisco, as well as technical guidance from the LASR Project Team
- A test harness to validate and demonstrate their proposed capability
- Free desk space in the LASR Hub in London
- Access to LASR events and final demo day presenting to Cisco and LASR
Opportunity areas
We have identified key opportunity areas within AI security and are looking for innovators interested in developing ground-breaking capabilities that align with challenges within these opportunity spaces:
Opportunity Area 1: Secure deployment and monitoring of AI at the edge for CNI
This opportunity space addresses the unique security challenges of AI models deployed at the edge: on IoT sensors, embedded devices, autonomous systems, and operational technology equipment within CNI environments. Edge AI can enable greater efficiency and speed, which can be vital to CNI business success and safety measures. However, unlike centralised cloud-based AI, edge deployments often embed machine learning models directly into hardware with constrained resources, making them vulnerable to physical access, firmware exploitation, and model extraction attacks while operating in unpatched, resource-limited environments. The challenge is to ensure that AI models deployed on edge devices remain secure throughout their operational lifetime, can be effectively patched and monitored despite bandwidth and power constraints, and are protected against both remote and physical attacks. This is particularly critical for CNI applications where edge AI enables real-time decision-making in sensors, cameras, autonomous drones, industrial control systems, and distributed monitoring equipment.
What are the human-driven attack vectors?
Misconfigurations, weak access controls, and insider threats can expose models or data, while unvetted deployment of pre-trained models carrying trojans or malicious instructions can be exploited without sophisticated technical attacks. Human error in patching, updating, or monitoring edge devices further amplifies these vulnerabilities.
What could a proposed project scope look like?
- Create secure deployment pipelines for AI models on low-power edge devices (e.g. sensors, drones, buoys) with automated patching and rollback mechanisms.
- Simulate data poisoning attacks and monitor model behaviour to assess whether real-time anomaly detection and model drift alerts are effective on edge devices in operational environments or a test bed (a minimal drift-check sketch follows this list).
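As an illustration of the drift-alert idea in the second bullet, a minimal sketch is shown below. It uses only NumPy so it could run on constrained edge hardware, and it compares a rolling window of model output scores against a reference distribution captured at deployment time. The class name, window size, threshold, and score stream are all hypothetical; a real project would choose statistics appropriate to its model and telemetry budget.

```python
import numpy as np
from collections import deque

class EdgeDriftMonitor:
    """Illustrative drift alert for an edge-deployed model (hypothetical sketch).

    Compares the mean of a rolling window of model output scores against a
    reference distribution captured at deployment time, using a simple z-score
    on the window mean. Window size and threshold are hypothetical defaults.
    """

    def __init__(self, reference_scores, window_size=200, z_threshold=4.0):
        self.ref_mean = float(np.mean(reference_scores))
        self.ref_std = float(np.std(reference_scores)) or 1e-6  # avoid divide-by-zero
        self.window = deque(maxlen=window_size)
        self.z_threshold = z_threshold

    def observe(self, score):
        """Record one model output score; return True if drift is suspected."""
        self.window.append(float(score))
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet
        window_mean = np.mean(self.window)
        # Standard error of the window mean under the reference distribution
        standard_error = self.ref_std / np.sqrt(len(self.window))
        z = abs(window_mean - self.ref_mean) / standard_error
        return z > self.z_threshold

# Hypothetical usage: healthy scores cluster around 0.8; the simulated
# poisoning attack pushes them down, which should trigger the alert.
monitor = EdgeDriftMonitor(reference_scores=np.random.normal(0.8, 0.05, 1000))
for score in np.random.normal(0.6, 0.05, 300):
    if monitor.observe(score):
        print("Drift alert: raise an alarm or trigger rollback")
        break
```

In an operational deployment the alert would feed the patching and rollback mechanisms described in the first bullet rather than simply printing a message.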
Opportunity Area 2: The origin and trustworthiness of AI components
This opportunity space focuses on establishing verifiable provenance for the AI components used in CNI environments: training datasets, pre-trained models, fine-tuning data, memory, and inference APIs. The challenge is analogous to establishing a "chain of custody" for AI artifacts, ensuring that every component can be traced to its source, validated for integrity, and assessed for geopolitical or supply chain risks. The goal is to create standardised frameworks that enable CNI operators to answer: Where did this model come from? Who trained it? What data was used? Has it been modified?
What are the human-driven attack vectors?
Poisoned datasets (training and fine-tuning), pre-trained models with hidden backdoors, unauthorised access to memory during training or inference, and insider tampering and misuse of privileged access.
What could a proposed project scope look like?
We feel that the development of AI Bill of Materials (AI-BOM) standards is already well supported and does not need LASR's intervention; the following are therefore potential focus areas:
- Build mechanisms that ensure agents can only access data they are authorised to access.
- Design cryptographic signing and verification protocols for model weights and training datasets, enabling tamper-evident distribution (a minimal signing sketch follows this list). We are interested in exploring whether we can learn from external mechanisms, for example in autonomous vehicles, where automotive secure boot and code signing mechanisms (used to verify ECU firmware integrity) could be applied to AI model weights, ensuring that only authenticated models execute in CNI environments.
- Formal cybersecurity methods to assert with certainty that model B was derived from model A. This may include certificates of trust or a chain of custody.
- Prototype automated provenance auditing tools that scan AI systems for undeclared dependencies, unauthorised modifications, or high-risk sourcing patterns.
- Simulate supply chain compromise scenarios (e.g., backdoored foundation models) and test detection efficacy in CNI deployment contexts.
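As a rough illustration of the tamper-evident distribution idea above (a sketch, not a prescribed design), the example below signs a model weights file with Ed25519 using the `cryptography` package. The file name and key handling are hypothetical; a production scheme would also need key management, revocation, and binding of metadata such as model name and version, along the lines of existing code-signing practice.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_model(weights_path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Produce a detached Ed25519 signature over the raw weights file."""
    with open(weights_path, "rb") as f:
        return private_key.sign(f.read())

def verify_model(weights_path: str, signature: bytes, public_key) -> bool:
    """Return True only if the weights file matches the publisher's signature."""
    with open(weights_path, "rb") as f:
        data = f.read()
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: the model publisher signs; the CNI operator verifies
# before allowing the runtime to load the weights.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = sign_model("model_weights.bin", private_key)          # publisher side
assert verify_model("model_weights.bin", signature, public_key)   # deployment side
```

For multi-gigabyte weights, a real implementation would sign a streamed digest rather than reading the whole file into memory, and the verification step could be enforced by the model loader itself, mirroring how secure boot refuses to execute unsigned ECU firmware.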
Opportunity Area 3: The integrity of AI models throughout their lifecycle
This opportunity space addresses the challenge of ensuring that AI models remain trustworthy from development through deployment and in-life operational use within CNI environments. Unlike traditional software, AI models can degrade, drift, or be subtly manipulated in ways that evade conventional security controls. The challenge is to build verification pipelines that continuously validate model behaviour, detect integrity violations, and enforce security policies specific to safety-critical infrastructure.
The focus is on continuous verification, least-privilege access, and automated security checks throughout the AI delivery and operational pipeline, including model testing, behavioural verification, and runtime integrity monitoring tailored to CNI operational requirements.
What are the human-driven attack vectors?
Attack vectors include backdoored or trojaned pre-trained models, malicious dependencies in composition frameworks, insider tampering, compromised update or deployment processes, and exploitation of weak access controls or verification gaps. Unintended or maliciously induced model drift can take place over time.
What could a proposed project scope look like?
- Develop behavioural verification frameworks that test AI models against CNI-specific safety requirements (e.g., fail-safe behaviours in grid management or water treatment); a minimal sketch of such a check follows this list
- Design secure model versioning and rollback mechanisms that enable rapid response to compromised or degraded models
- Implement continuous model monitoring that detects behavioural drift, out-of-distribution inputs, or anomalous outputs in operational environments
- Prototype zero-trust architectures for AI deployment in OT networks, enforcing strict isolation and access controls for model execution
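To make the behavioural-verification idea in the first bullet concrete, the sketch below expresses CNI-specific safety requirements as executable checks run against every candidate model version before promotion. The model interface, input ranges, and fail-safe rules are all hypothetical; a real framework would draw them from the operator's safety case.

```python
import numpy as np

def candidate_model(sensor_readings: np.ndarray) -> str:
    """Stand-in for the model (plus safety wrapper) under test; returns a control action."""
    # Hypothetical wrapper logic: refuse to actuate on missing or out-of-range input.
    if np.any(np.isnan(sensor_readings)) or np.any(sensor_readings > 1.0):
        return "HOLD"
    return "OPEN_VALVE" if sensor_readings.mean() > 0.7 else "HOLD"

def check_fail_safe_on_invalid_input(model) -> bool:
    """Requirement: missing or out-of-range sensor data must never actuate."""
    invalid_inputs = [
        np.array([np.nan, 0.5, 0.6]),   # missing reading
        np.array([9.9, 9.9, 9.9]),      # beyond physical sensor range
    ]
    return all(model(x) == "HOLD" for x in invalid_inputs)

def check_stability_near_threshold(model) -> bool:
    """Requirement: tiny input perturbations must not flip the chosen action."""
    base = np.array([0.69, 0.69, 0.69])
    return model(base) == model(base + 1e-3)

if __name__ == "__main__":
    checks = [check_fail_safe_on_invalid_input, check_stability_near_threshold]
    results = {check.__name__: check(candidate_model) for check in checks}
    print(results)  # a pipeline gate would block promotion if any check is False
```

The same checks could be re-run continuously in operation, so that behavioural drift away from the verified baseline is caught by the monitoring described in the third bullet.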
Timeline
- Monday 1st December 2025: Applications open
- Wednesday 17th December 2025, 12:00PM GMT (noon): Applications close
- January 2026: Successful applicants notified
- Monday 23rd February 2026: Programme commences
- Friday 17th April 2026: End of programme
Key dates
Participating companies can work out of the LASR Hub in Stratford, London as often as they would like. There are key mandatory in-person days (location TBC):
- Week 1: Monday 23rd February 2026
- Mid-point: a review and opportunity to showcase progress, around 17th March 2026
- End of programme: Friday 17th April 2026
Entry requirements
- UK headquarters
- Dedicated to active participation
- Relevant project idea and suggested use of funding
- Open to potential collaboration with industry or LASR
- Willing to attend in-person programme activities within the UK
Apply to LASR connect


