Pieter van Noordennen

What AI Security Can Learn from Software Supply Chain Security

Calls are coming for AI to be regulated by the government. But what will that actually look like when it comes to AI in Cybersecurity? Recent developments in Software Supply Chain Security can help guide the path.


There remain a lot of unanswered questions in the nascent field of AI Security. Can prompt injection be stopped? How (not "if") will AI be used to invent new attack vectors? Will AI itself be capable of protecting us from its own attempts to compromise systems?

One thing we do know: The government is destined to get involved and begin regulating AI at some point, and the cybersecurity experts in regulatory bodies will do their best to write effective policy for AI security.

In January 2023, the National Institute of Standards and Technology (NIST) released their first AI Risk Management Framework and, in late March, launched the Trustworthy and Responsible AI Resource Center, which is responsible for implementing it.

Just last month, the Biden Administration’s National Artificial Intelligence Advisory Committee (NAIAC) published their 89-page report on their findings over the past year. Their conclusion: “AI as a technology … requires immediate, significant, and sustained government attention.”

We’ve been witnessing the evolution of this process in another recently developed field of cybersecurity: Software Supply Chain Security. In the wake of high-profile breaches like SolarWinds and Log4j, the Biden Administration and several branches of the military have put a clear focus on improving the nation’s software supply chains through tools like Software Bills of Materials (SBOMs). However, despite several forward-thinking leaders in the bureaucracy (e.g., NIST) and military (e.g., the Iron Bank), measurable progress has been hard-won.

Here are a few things we can learn about the likely rollout of AI Security regulations from the recent and ongoing rollout of Software Supply Chain Security standards.

Military and Homeland Security Use-Cases Will Drive Action

While OpenAI CEO Sam Altman famously went to Capitol Hill to ask Congress to regulate AI, both Congress and presidential administrations are often loath to directly and overtly rein in industry. Even if they have the appetite to do so, it would take significant time, technical acumen, and political capital, all of which seem in too short supply to make meaningful change.

The cyber-defense military-industrial complex, however, has technical experts aplenty, an adversary who is most certainly deploying AI to their advantage today, and the requisite budgets to make meaningful change. With cyber-defense need comes vendors, and with vendors come guidelines.

In software supply chain, the push towards SBOMs came from a need to know what open-source libraries and packages were being used in the software being sold to government, with a particular interest in what was being fed into our defensive systems. The Biden Administration’s Executive Order requiring SBOMs of software vendors selling to the Federal Government has had a massive impact on the awareness and adoption of software supply chain issues.
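At its core, an SBOM is just a machine-readable inventory of what went into a piece of software. A minimal sketch of reading one, using a toy CycloneDX-style JSON fragment (the component names and versions here are illustrative, not taken from any real product):

```python
import json

# A toy SBOM fragment in CycloneDX-style JSON (components are illustrative).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.13.0"}
  ]
}
"""

def list_components(raw: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component listed in the SBOM."""
    sbom = json.loads(raw)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

for name, version in list_components(sbom_json):
    print(f"{name}=={version}")
```

That inventory of names and versions is exactly what a government buyer wants to see before the software reaches a defensive system.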

Standards (and their adoption) will trail industry innovation

“The wheels of Government turn slowly” is too easy a pot-shot to take at our modern cyber-defense industry. Many of the top experts in cybersecurity work in the administration, and even if they feel there’s too much red tape at times, the U.S. has a strong track record relative to the threats we face every day.

But, as any “AI Expert” podcaster will tell you, this new branch of technology is moving at an unprecedented pace. New models, approaches, and demos seem to come out in an endless stream on social media, and companies trip over themselves to showcase their new AI-driven feature sets.

At the same time, the NIST AI RMF makes no mention of ChatGPT, Midjourney, or other specific tools, and has only a single mention of “large language models.” Perhaps this is a purposeful choice by its authors to keep the guidance high-level and broadly applicable in the face of rapid innovation. But it’s hard not to see this choice as itself a capitulation to the fact that anything saved as a PDF will by definition be behind the times.

In software supply chain security, we see a similar choice: a focus on the adoption of SBOMs. Any security expert worth their salt will tell you that SBOMs are necessary but not sufficient for a robust supply chain security posture. They help tremendously and are a great starting point, and requiring them is an easy, binary ask. Yet even with this “keep it simple” approach, enough open questions remain (whether SBOMs are required of SaaS vendors, for instance) to slow adoption and prevent step-change innovation.

One or many high-profile attacks will catalyze a movement

While software supply chain security has been around as long as open source software, it took two catastrophic vulnerability events, SolarWinds in December 2020 and Log4j in December 2021, to get CISOs and CTOs motivated enough to mobilize.

And, thankfully, they have. Leading industry group the Linux Foundation tasked the Open Source Security Foundation (OpenSSF) with taking the lead on supply chain issues, and we’ve seen a groundswell of conference talks, new tools, and new guidance from thought leaders in government and industry alike.

No such event has happened for AI. Yet. While there’s a lot of talk of a singularity-driven nuclear winter, actual breaches will likely be more pedestrian. (And if they’re not, well, I guess we won’t be here to do retrospectives on them anyway.) The silver lining, barring human annihilation, is that, as with Log4j and SolarWinds, we will learn a tremendous amount from the response to any highly visible breach, more than we can right now by guessing at how attackers will use AI.

Accuracy will be the key driver of progress — and it will take time

The saying in cybersecurity is “99% is still insecure.” Compare that with the near-constant hallucinations common in today’s crop of LLMs, and you have a situation where many serious cybersecurity practitioners are simply sitting out this latest round of innovation until the results get more reliable.

Technological accuracy is a notoriously persnickety thing to achieve, with the last 10 percent of the journey often taking as long as, if not longer than, the first 90. Even with SBOM generation becoming rote, thanks to many excellent open-source and free tools capable of producing them, questions remain about their accuracy and how to use them effectively. Those questions can breed complacency in overloaded organizations, resulting in a “do nothing” outcome.
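To make the accuracy concern concrete: in an incident like Log4j, the typical use of an SBOM is a simple lookup of the inventory against a list of affected packages and versions. A hedged sketch of that check (the inventory and the vulnerable-version set below are illustrative, with real version ranges simplified to an explicit set):

```python
# Sketch: match an SBOM-derived inventory against known-vulnerable packages.
# Both data structures are illustrative stand-ins for real SBOM/advisory data.

inventory = {
    "log4j-core": "2.14.1",
    "commons-text": "1.9",
}

# A simplified stand-in for the Log4Shell (CVE-2021-44228) advisory:
# an explicit set of affected versions rather than a real range check.
vulnerable = {
    "log4j-core": {"2.13.3", "2.14.0", "2.14.1"},
}

def find_hits(inv: dict, vuln: dict) -> list[tuple[str, str]]:
    """Return (package, version) pairs whose version is in the vulnerable set."""
    return [
        (pkg, ver)
        for pkg, ver in inv.items()
        if ver in vuln.get(pkg, set())
    ]

print(find_hits(inventory, vulnerable))  # → [('log4j-core', '2.14.1')]
```

The catch is that the check is only as good as the inventory: if an inaccurate SBOM omits a transitively bundled copy of log4j-core, this lookup returns a clean bill of health, which is exactly the accuracy gap that breeds complacency.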

But need drives innovation, and we’re seeing many forward-thinking companies push the envelope on improving SBOMs specifically and software supply chain features in general. In the AI space, new and evolving models for quality assurance, accuracy, and product development are helping to hone the tools so they can be applied to security.

Getting Generative AI into a place where it is both secure and can help with security will be a long road, and government is sure to play a role. The balance will come with crafting helpful policies without slowing down the pace of innovation in industry.

Progress, not perfection.