
Opportunity Call


Supported by Cisco

There is a strong demand signal for the UK to adopt AI systems, but as these systems become more sophisticated and embedded across digital sectors, they also become more vulnerable to malicious attacks and unintended misuse. To have confidence in AI systems we need a foundation of trust underpinned by AI security: protecting AI systems, models, infrastructure, and data from cyber threats and malicious attacks.

The opportunity call is delivered by Plexal with sponsorship from Cisco. It is an outcome-focused initiative that brings together the ecosystem, industry and LASR to address high-priority challenges through collaborative problem solving. It enables SMEs to pursue innovative technology ideas and demonstrate feasibility, drawing on the programme's expertise, capabilities and resources to generate practical, impactful solutions.

Over the course of ten weeks, successful applicants will develop new AI security capabilities. The opportunity call provides the support required to build, validate and test those capabilities through funding, mentorship, and access to tooling.

Why join?

Enhance AI Security Innovation

Foster the development of new products to address pressing AI security challenges

Accelerate Market Readiness

Reduce time-to-market for AI security solutions with funding as well as technical mentorship

Promote Education and Awareness

Raise awareness of AI security risks and best practices

Companies will gain access to:

  • Funding to support capability development and demonstration 

  • Mentoring from Cisco, as well as technical guidance from the LASR Project Team 

  • A test harness to validate and demonstrate their proposed capability

  • Free desk-space in the LASR Hub

Opportunity areas

We have identified key opportunity areas within AI security and are looking for innovators interested in developing ground-breaking capabilities that align with challenges in these opportunity areas:

Protecting the agent ecosystem and discovery infrastructure

This opportunity area centres on securing the infrastructure that allows AI agents to register, discover, and communicate with each other across networks and organisational boundaries. Put another way, we are looking to secure the "DNS and HTTP of agents" - the foundational services that connect diverse agents - making them resilient against spoofing, tampering, and abuse.

The goal is to ensure that agents can find each other and interact in a trusted manner, preventing unauthorised agents from infiltrating the ecosystem. Drawing from Cisco's Internet of Agents initiative, we are looking for solutions which ensure that only legitimate, authenticated agents participate and that all interactions are cryptographically protected. 

What are the human-driven attack vectors?

In addition to structural challenges, adversarial threats, such as memory poisoning and agent impersonation, demand explicit countermeasures. Protecting this "DNS and HTTP of agents" requires both cryptographic trust and robust abuse resistance. 

What could a proposed project scope look like?

• Creating simulations of well-known attacks, such as memory poisoning and remote code execution within agent directories

• Using simulations to design and prototype a signed agent registration and discovery protocol that shows efficacy against attacks (see the sketch after this list)

• Designing features such as an agent trust scoring mechanism

• Exploring how security mechanisms could interoperate with open agent orchestration stacks
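
To make the signed-registration idea concrete, here is a minimal sketch in Python using the cryptography package's Ed25519 primitives. It is illustrative only: the AgentDirectory class, the record fields and the canonical-serialisation choice are assumptions made for this example, not part of Cisco's Internet of Agents or any existing agent standard.

```python
# Minimal sketch: a directory that only accepts agent registrations signed
# by the registering agent's Ed25519 key. Illustrative assumptions only.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def canonical(record: dict) -> bytes:
    """Serialise a record deterministically so signatures are stable."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()


class AgentDirectory:
    def __init__(self) -> None:
        self._agents: dict[str, tuple[dict, Ed25519PublicKey]] = {}

    def register(self, record: dict, signature: bytes,
                 public_key: Ed25519PublicKey) -> None:
        # verify() raises InvalidSignature if the record was not signed by
        # the presented key, blocking simple spoofed or tampered entries.
        public_key.verify(signature, canonical(record))
        self._agents[record["agent_id"]] = (record, public_key)

    def discover(self, agent_id: str) -> dict:
        record, _ = self._agents[agent_id]
        return record


# Usage: an agent signs its own registration record.
key = Ed25519PrivateKey.generate()
record = {"agent_id": "agent-42", "endpoint": "https://example.org/a2a",
          "capabilities": ["summarise"], "registered_at": int(time.time())}
directory = AgentDirectory()
directory.register(record, key.sign(canonical(record)), key.public_key())

# A tampered record is rejected because the signature no longer matches.
tampered = dict(record, endpoint="https://attacker.example")
try:
    directory.register(tampered, key.sign(canonical(record)), key.public_key())
except InvalidSignature:
    print("rejected spoofed registration")
```

A real protocol would also need key distribution, revocation and replay protection, all of which a proposal in this area could explore.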

Securing confidential compute and RAG architectures for Agentic AI

This opportunity area addresses the secure integration of enterprise data into autonomous agent reasoning, focusing on two key enablers: Retrieval-Augmented Generation (RAG) architectures and confidential computing environments.

The challenge is to ensure that agents only retrieve what they are allowed to, that the retrieved knowledge is accurate and not compromised, and that all processing of sensitive data occurs in a secure and isolated manner. 

What are the human-driven attack vectors?

Memory poisoning, intent-breaking, and context manipulation pose growing risks, especially when agent reasoning depends on retrieved or embedded information.

What could a proposed project scope look like?

• Developing techniques to detect and prevent attackers from injecting false or misleading information into vector databases or memory stores

• Building mechanisms that ensure agents can only access data they are authorised to access

• Creating test environments where multiple agents interact with a shared knowledge base to identify scenarios where confidential or unnecessary data is unintentionally exposed

• Developing tools that can trace where data came from, verify that it hasn't been tampered with, and log how it's used by the agent

• Implementing robust logs showing what agents retrieved, from where, and when (a minimal sketch follows this list)
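
As a hypothetical illustration of the authorisation and audit-logging ideas above, the sketch below wraps an in-memory document store with a per-document ACL check and an append-only retrieval log. The AuditedStore class, document labels and log fields are assumptions made for this example, not an existing RAG API.

```python
# Minimal sketch: ACL-checked retrieval with an append-only audit log for a
# RAG-style document store. All names and fields are illustrative assumptions.
import time
from typing import Iterable


class AuditedStore:
    def __init__(self, docs: dict[str, dict]) -> None:
        self._docs = docs                # doc_id -> {"text": ..., "allowed": {...}}
        self.audit_log: list[dict] = []  # append-only record of every attempt

    def retrieve(self, agent_id: str, doc_ids: Iterable[str]) -> list[str]:
        results = []
        for doc_id in doc_ids:
            allowed = agent_id in self._docs[doc_id]["allowed"]
            # Log who asked, for which document, when, and the outcome -
            # denied attempts are often the most useful signal.
            self.audit_log.append({"agent": agent_id, "doc": doc_id,
                                   "time": time.time(), "granted": allowed})
            if allowed:
                results.append(self._docs[doc_id]["text"])
        return results


store = AuditedStore({
    "hr-001": {"text": "salary bands ...", "allowed": {"hr-agent"}},
    "kb-001": {"text": "public FAQ ...", "allowed": {"hr-agent", "support-agent"}},
})
print(store.retrieve("support-agent", ["hr-001", "kb-001"]))  # only the FAQ
print(store.audit_log)  # both attempts recorded, one denied
```

Provenance tracing could extend the same log with content hashes so that tampering with retrieved documents becomes detectable.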

Security tooling for agent infrastructure

This opportunity area focuses on how we can secure the platforms and pipelines where agents are delivered, deployed, and orchestrated. This is not about the agent's behaviour itself; it is about applying and extending DevSecOps and cloud security principles to the "agent-specific" infrastructure that traditional security tools don't yet fully cover. This includes securing areas such as shared agent services (e.g. message brokers, memory stores), deployment pipelines and CI/CD for agent code or prompts, authentication and access layers for agents, and agent orchestration systems (such as workflow managers or agent controllers).

The challenge is to build tooling that protects the agent platform itself from compromise, ensuring that only secure, verified agent components run, and that agents operate with least privilege within hardened environments.

What are the human-driven attack vectors?

This challenge targets the underlying agent platforms and pipelines. In addition to infrastructure risks, we must account for tool misuse, unexpected code execution, and other vectors from threat models that exploit agent privilege and connectivity. This is critical to ensuring that agents act securely in complex environments.

What could a proposed project scope look like? 

• Creating simulations of real-world agent-specific threats such as tool misuse, memory poisoning, and RCE attacks on shared agent infrastructure

• Exploring requirements for secure build/test pipelines for agent logic, including prompt and tool verification stages

• Exploring how runtime monitoring systems could be built to detect anomalous tool/AI invocation patterns across agents (see the sketch after this list)
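
As a hypothetical starting point for that last idea, the sketch below flags two simple anomalies in tool invocations: calls to tools outside an agent's declared set (a least-privilege violation), and bursts that exceed a rate threshold. The ToolCallMonitor class, thresholds and tool names are assumptions for illustration; a real system would learn baselines from production traces.

```python
# Minimal sketch: runtime monitoring of agent tool invocations. Alerts on
# tools outside an agent's declared set and on call bursts within a minute.
import time
from collections import defaultdict, deque


class ToolCallMonitor:
    def __init__(self, allowed_tools: dict[str, set[str]],
                 max_calls_per_minute: int = 30) -> None:
        self.allowed_tools = allowed_tools
        self.max_calls = max_calls_per_minute
        self._recent: dict[str, deque] = defaultdict(deque)

    def observe(self, agent_id: str, tool: str) -> list[str]:
        """Record one invocation and return any alerts it raises."""
        alerts = []
        now = time.time()
        window = self._recent[agent_id]
        window.append(now)
        # Keep only the last 60 seconds of invocations for this agent.
        while window and now - window[0] > 60:
            window.popleft()

        # Least privilege: alert on tools outside the agent's declared set.
        if tool not in self.allowed_tools.get(agent_id, set()):
            alerts.append(f"{agent_id}: unexpected tool '{tool}'")
        # Burst detection: alert on loops or automated abuse.
        if len(window) > self.max_calls:
            alerts.append(f"{agent_id}: {len(window)} calls/min exceeds limit")
        return alerts


monitor = ToolCallMonitor({"billing-agent": {"lookup_invoice", "send_email"}})
print(monitor.observe("billing-agent", "lookup_invoice"))  # []
print(monitor.observe("billing-agent", "shell_exec"))      # unexpected tool alert
```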

Timeline

  • Wednesday 18th June to Wednesday 9th July

    Applications open

  • Wednesday 30th July

    Successful applicants notified

  • Wednesday 20th August

    Programme commences

  • Around 30th September

    Midpoint showcase at LASR event

  • Around 29th October

    End of programme

Key dates

Participating companies can work out of the LASR Hub in Stratford, London as often as they would like. There are key mandatory in-person days (location TBC):

  • Week 1: Wednesday 20th August 
  • Mid-point: A review and opportunity to showcase progress around 30th September
  • End of programme: Wednesday 29th October

Entry requirements

  • UK headquarters 
  • Dedicated to active participation 
  • Relevant project idea and suggested use of funding
  • Open to potential collaboration with industry or HMG
  • Willing to attend in-person programme activities within the UK


Need more help?
Contact us
Contact us