What are they?

Supply chain vulnerabilities are security weaknesses that come from the third-party components and services used to build, run, or extend an LLM solution. This includes things like: 

  • The model you use (hosted or downloaded) 
  • Datasets used for training 
  • Libraries/frameworks 
  • Hosting platforms and APIs 
  • Plugins, connectors, and tools that extend what the LLM can do 

LLM systems often depend on more external components than traditional software, which increases the number of places something can go wrong. If any part of the supply chain is compromised, out of date, poorly configured, or simply untrustworthy, it can impact the confidentiality (data leaks), integrity (tampered behaviour), or availability (outages/cost spikes) of the LLM application and the services it connects to. 

 

Who is at risk?

Individuals 

  • People using AI apps that rely on third-party add-ons or extensions may unknowingly install something risky (or follow risky setup instructions), which can lead to account compromise or device infection. 
  • Users of popular AI assistants can be impacted if the assistant’s plugin ecosystem includes weak or malicious components, because those components may handle data or perform actions on the user’s behalf.  

Businesses and Organisations 

  • Any organisation that uses third-party LLM services, integrates LLMs into products, or enables plugins/connectors is exposed to supply chain risk across models, data, tools, and platforms. 
  • Organisations using agent plugins are at higher risk because those components can interact with local machines, credentials, or business systems.  

 

How attacks work

Example 1: A malicious plugin 

A user installs a plugin that claims to add a helpful feature, but it contains harmful instructions or prompts the user to run unsafe installers. This can lead to malware, credential theft, or unauthorised access. 

Example 2: A compromised third-party dependency 

An organisation builds an LLM-enabled app using open-source libraries, containers, or connectors. If one dependency is compromised, or is outdated and vulnerable, attackers may exploit that weakness to access the system or its data. This is the same pattern as a traditional software supply-chain attack, now inside an AI pipeline. 

Example 3: Risk inherited from a hosted AI provider 

A business uses a third-party hosted LLM and connects it to internal data sources. If the provider’s security controls, monitoring, patching, or access control are weak, the organisation may inherit those weaknesses. 

 

Controls

People  

  • Teach users and staff to be cautious of AI plugins and to avoid running installers or scripts from untrusted sources, even if they appear inside official marketplaces.  
  • Encourage staff to treat AI tools like any other IT system: if a tool handles work data, it needs the same care as email, cloud storage, or business applications. 

Process 

  • Approved list approach: limit which LLM tools, plugins, and connectors are allowed, and who can enable them. This reduces the chance of users installing risky add-ons.  
  • Supplier due diligence: when procuring LLM services or platforms, ask how they manage:  
    • Patching and vulnerability handling, 
    • Plugin/connector review, 
    • Access control and logging, 
    • Incident reporting and response.  
  • Asset inventory: track what models, datasets, connectors, and plugins you rely on (including versions).  
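
To make the inventory idea concrete, here is a minimal sketch in Python. The component names, kinds, and versions are invented for illustration; in practice an inventory would usually live in an asset-management or SBOM tool rather than ad-hoc code:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One third-party component in the AI supply chain."""
    name: str
    kind: str     # e.g. "model", "dataset", "plugin", "library"
    version: str
    source: str   # where it was obtained from

# Hypothetical inventory entries for illustration only.
inventory = [
    Component("example-7b-instruct", "model", "1.2", "example-model-hub"),
    Component("calendar-connector", "plugin", "0.4.1", "internal"),
]

def outdated(components, latest_versions):
    """Flag components whose recorded version lags a known-latest map."""
    return [c.name for c in components
            if latest_versions.get(c.name, c.version) != c.version]
```

Even a simple record like this answers the key question during an incident: "do we use the affected component, and at which version?"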

Tech 

  • Least privilege for integrations: ensure connectors/plugins only have the minimum access they need (e.g. read-only access where possible). This reduces impact if a component is compromised.  
  • Secure configuration and monitoring: enable logging for plugin installs, connector authorisations, model changes, and unusual behaviour so you can detect and investigate issues quickly.  
  • Integrity and provenance checks: obtain models, datasets, and packages from trusted sources, verify their origin where possible (e.g. checksums or signatures published by the provider), and avoid unknown or unmaintained components. 
  • Segmentation and containment: run AI tools and agent runtimes with restricted permissions and isolate tokens/keys so one compromise does not expose everything.
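
The checksum check mentioned above can be sketched in a few lines of Python. The payload bytes and digest here are placeholders; in practice the expected digest comes from the provider's published release notes or download page:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """SHA-256 hex digest of raw bytes, e.g. a downloaded model file."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """True only if the bytes match the digest published by the trusted source."""
    return sha256_digest(data) == expected_digest.lower()

# Placeholder for downloaded bytes; real checks stream large files in chunks.
payload = b"...model weights..."
published = sha256_digest(payload)  # in practice, copied from the provider
assert verify_artifact(payload, published)
assert not verify_artifact(payload, "0" * 64)
```

A mismatch does not tell you what went wrong (tampering, corruption, or the wrong file), but it does tell you not to load the artifact until you find out.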