How Palantir Mastered In-Toto (Software Supply Chain Security Series, #4)

Editor’s Note: This blog post is the fourth in a series that shares insights from our journey to enhance our software supply chain security story at Palantir. This post details how Palantir adopted in-toto to secure our software supply chain, the challenges we faced, and the lessons we learned along the way.

Background

Early in 2022, the Palantir InfoSec team was attempting to address a critical challenge in our software delivery pipeline: how do we validate that the software we’re deploying is authentically ours, remains untampered, and has successfully passed through all required software development lifecycle (SDLC) controls? 

Our research into existing paradigms and frameworks revealed two key insights: first, no clean, mature standard existed for this problem; and second, in-toto emerged as a promising conceptual framework for achieving our security objectives. 

At the time, in-toto lacked a stable release, and no existing implementations had been deployed at the scale or complexity we required. With no battle-tested solutions available, we decided to venture into the unknown, like Frodo and Sam setting out from the Shire, and build something that would work for us. This blog post chronicles that journey: the challenges we encountered, the lessons we learned, and how we continue to refine and evolve our implementation.

Understanding Our Scale and Complexity

To appreciate the scale at which we operate, it’s essential to understand Palantir’s deployment model. While we maintain consistent frameworks, tooling, and build pipelines across our infrastructure, there is still a high degree of heterogeneity in how we build software. We therefore couldn’t build just one tool to generate or verify attestations, nor could we simply leverage the existing tools. We needed to build tooling to support multiple build systems, including Gradle and Godel ecosystems. The solution would need to support diverse artifact types (e.g., SLS tar.gz packages, helm charts, Docker containers, and frontend asset bundles), and would need to validate artifacts across vastly different deployment environments (i.e., on-premise installations, our custom Kubernetes distribution — Rubix — and more traditional VM-based compute, which we call PCloud). 

Beyond scale challenges, we encountered a high degree of ambiguity in the nuances of the in-toto framework and reference implementations. For example, when a Git repository serves as a source or input for a functionary, how should Git LFS files be handled? Should we hash the actual file content or the Git reference pointer? Also, how do we ensure that when attesting to a Git repository’s state during a build, the repository hasn’t been inadvertently modified before attestation generation occurs?

These ambiguities meant that we had to make opinionated implementation decisions. 

Designing Our Supply Chain Security

Defining the Layout and Functionaries: In-toto requires a layout file that defines each functionary (step) in the supply chain, the expected inputs and outputs for each step, the keys that each functionary uses for attestation signing, and the validation rules for the complete attestation chain and final artifacts. While conceptually straightforward, there are some implementation complexities that may not be immediately apparent. 
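
As a purely illustrative sketch (simplified relative to the real in-toto layout schema, and not our production layout), the layout reduces to a small set of steps, each with expected inputs and outputs, authorized signing keys, and a signature threshold:

```go
package layout

// A simplified, illustrative model of an in-toto layout. The real in-toto
// schema has more fields (expiry, inspections, parameter substitution, etc.),
// and our production layouts are generated and signed out-of-band.
type Layout struct {
	Steps []Step            // ordered functionaries in the supply chain
	Keys  map[string]string // key ID -> PEM-encoded public key trusted for signing
}

type Step struct {
	Name              string   // e.g. "release" or "build"
	ExpectedMaterials []string // artifact rules for the step's inputs
	ExpectedProducts  []string // artifact rules for the step's outputs
	AuthorizedKeyIDs  []string // functionary keys allowed to sign this step's attestation
	Threshold         int      // how many valid signatures are required
}

// Example: a two-step layout mirroring the release -> build chain described below.
// The rule strings follow in-toto's artifact-rule style but are illustrative.
var Example = Layout{
	Steps: []Step{
		{
			Name:             "release",
			ExpectedProducts: []string{"ALLOW *"},
			AuthorizedKeyIDs: []string{"autorelease-key"},
			Threshold:        1,
		},
		{
			Name:              "build",
			ExpectedMaterials: []string{"MATCH * WITH PRODUCTS FROM release"},
			ExpectedProducts:  []string{"ALLOW *"},
			AuthorizedKeyIDs:  []string{"build-key"},
			Threshold:         1,
		},
	},
}
```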

Signing and Trust Distribution: Our first major challenge involved layout file signing and trust distribution to verifiers. We balanced security with operational feasibility through a controlled signing ceremony in which select individuals sign layout files using GPG keys generated and stored on YubiKeys. This signing occurs as part of our standard PR process, with signatures validated before PR approval. Our next challenge was setting a trust distribution strategy. We ultimately decided to hardcode trusted keys into the verifier and rely on our policy bot implementation to enforce that key modifications require approval from one of the authorized signers and that all commits affecting key files include valid signatures. While we plan to replace this approach with TUF (The Update Framework) in the near future, several implementation roadblocks led us to prioritize tool building and rollout for generation and verification.

Build and Release Steps: Determining our functionaries was straightforward. There are two basic steps to building a software artifact: tagging or releasing the source control and building that source to generate the artifact. As described in our previous blog post on securing source controls, we use Autorelease to perform all release tagging of our source control, so naturally it serves as the functionary for the release step and generates the attestation of the source control for each tagged release. Our release attestations initially included every file in the source control repository. In practice, this turned out to be a major headache and caused a high degree of fragility and false positive verification failures. We’ll detail those lessons and what we changed later. 

The build step presented our first technical challenge. We needed to generate attestations using both our Gradle and Godel tooling, as well as for the variety of artifact types that we produce and deploy. These build attestations would capture the state of the source control repository as their input and then the final artifacts as their output. Ultimately, we were able to do this for the variety of build ecosystems and artifacts, and we continue to maintain and update these tools internally. We hope to be able to release these to the OSS community eventually. 

Verification Process: With our layout defined, we only needed to determine what verification would look like, and it’s pretty straightforward given the small set of attestations: the output of the release should match the Git repository state, the repository state should match the build input, and the final software artifact to be installed should match the built output.
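
Conceptually, that verification reduces to a handful of digest comparisons over the attestations. A minimal sketch, with illustrative field names rather than our actual attestation schema:

```go
package verify

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
)

// Attestation is an illustrative stand-in for a signed attestation whose
// signature has already been checked against the keys in the layout.
type Attestation struct {
	Inputs  map[string]string // name -> sha256 digest of materials
	Outputs map[string]string // name -> sha256 digest of products
}

// VerifyChain checks the links described above: every release output must
// match the corresponding build input, and the build output must match the
// artifact we are about to install.
func VerifyChain(release, build Attestation, artifact []byte, artifactName string) error {
	for name, digest := range release.Outputs {
		if build.Inputs[name] != digest {
			return errors.New("build input does not match release output: " + name)
		}
	}
	sum := sha256.Sum256(artifact)
	if build.Outputs[artifactName] != hex.EncodeToString(sum[:]) {
		return errors.New("final artifact does not match build output")
	}
	return nil
}
```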

With these pieces in place, we had both the structure of our layout files and process for generating and verifying attestations. 

Storage & Distribution of Attestations

Distribution within an enterprise is not written about much within the in-toto framework, and for good reason. Most of the available implementations are geared towards OSS, where ecosystems are transparent and the primary concern is storage. This is why a transparency log such as Rekor is a great choice for OSS: it turns that transparency into a security benefit.

However, in an enterprise this is a much more thorny issue. Building a series of monitors for a transparency log, along with all the infrastructure and tooling to interface with the log, quickly becomes a heavy long-term commitment. Additionally, being able to query the log using a variety of inputs is critical due to the different ways downstream verifiers receive artifacts, something that current transparency log implementations for in-toto don’t support well. 

As we described in the second installment of this blog series, we store all software artifacts in Artifactory where we use convention-based searches to locate the full path of an artifact. We decided to leverage this existing model for our attestations and store them using the same pathing convention in a separate in-toto release repository within Artifactory. 

During the release, Autorelease publishes its attestation to Artifactory under a separate repo to which only it has publish access; this repo is not meant for client consumption. During the build step, once the build attestation is generated and signed, our Gradle or Godel task retrieves the corresponding release attestation, bundles it with the build attestation and layout file in a tar.gz, and publishes that to Artifactory. 
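
A rough sketch of that bundling step, assuming the layout and attestations are already on disk (our actual Gradle and Godel tasks handle signing and Artifactory retrieval separately):

```go
package bundle

import (
	"archive/tar"
	"compress/gzip"
	"os"
	"path/filepath"
)

// WriteBundle packs the layout, release attestation, and build attestation
// into a single self-contained tar.gz so verifiers have no remote dependencies.
func WriteBundle(out string, files ...string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()

	gz := gzip.NewWriter(f)
	defer gz.Close()
	tw := tar.NewWriter(gz)
	defer tw.Close()

	for _, path := range files {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		hdr := &tar.Header{Name: filepath.Base(path), Mode: 0o644, Size: int64(len(data))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(data); err != nil {
			return err
		}
	}
	return nil
}

// Example (file names are illustrative):
// WriteBundle("in-toto-bundle.tar.gz", "root.layout", "release.attestation", "build.attestation")
```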

Each release bundle is fully encapsulated and not reliant on any remote dependencies such as a layout file or key. This creates a more robust implementation for verification clients in offline scenarios, which is a critical segment of our customer deployments.

The major downside of this trust model is that the layout file signing key is hardcoded, but we have additional layers to defend against tampering while we work on our TUF implementation. Eventually we’ll move layout file serving to the TUF model as well, which will further protect against downgrades and similar metadata manipulation attacks.

Verification in Apollo

Apollo Background

While a full explanation of Palantir Apollo is beyond the scope of this blog, it’s important to understand how powerful it is as a software distribution, monitoring, configuration, and security policy engine. Apollo is at the core of how we distribute and operate our software at scale across our many customer environments. 

At the end of a release build, we publish all bytes of our software to Artifactory. We also publish metadata about the release to Apollo, which serves as a catalog of all available software that can be installed in a customer environment. Each release goes through an adjudication process to determine if the release has passed relevant security and quality checks. Each environment can configure which software from the catalog they want installed, as well as whether they want releases with more or less “soak time” — the time spent in various testing environments. Releases must demonstrate they are stable and free of errors and performance issues before being elevated to a status of ‘stable.’

Service installations and upgrades are performed by an agent within the environment that receives instructions from Apollo for what software to install, update, or remove. This agent ultimately performs the pull and installation of software. 

In-Toto Verification in Apollo

One of the first ways we perform in-toto verification is by hooking the service that catalogs all available software for install within Apollo. When a new release is published from our build environment to Apollo, an in-toto verification service receives a webhook call to verify the attestations and the corresponding artifact in Artifactory. If the release passes this initial verification, it can eventually be promoted to a stable release and installed across the widest range of environments. However, the artifact in Artifactory could still be tampered with, so we continue to monitor and re-verify active releases. 
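
As a hedged illustration of the shape of that hook (the endpoint, payload fields, and verifyRelease helper below are hypothetical, not Apollo’s actual API):

```go
package hook

import (
	"encoding/json"
	"log"
	"net/http"
)

// releaseEvent is a hypothetical webhook payload identifying the newly
// cataloged release and where its artifact and attestation bundle live.
type releaseEvent struct {
	Product     string `json:"product"`
	Version     string `json:"version"`
	ArtifactURI string `json:"artifactUri"`
	BundleURI   string `json:"bundleUri"`
}

// verifyRelease is a placeholder: in practice it would fetch the artifact and
// attestation bundle from Artifactory, run in-toto verification, and report
// the result back so the release can (or cannot) be promoted toward stable.
func verifyRelease(ev releaseEvent) error { return nil }

func handleRelease(w http.ResponseWriter, r *http.Request) {
	var ev releaseEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	// Verification runs asynchronously; the webhook only acknowledges receipt.
	go func() {
		if err := verifyRelease(ev); err != nil {
			log.Printf("verification failed for %s %s: %v", ev.Product, ev.Version, err)
		}
	}()
	w.WriteHeader(http.StatusAccepted)
}
```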

This initial verification largely serves as an early warning system and is far from a complete solution. There are many avenues by which our software could be tampered with in this setup, including during the download, in between re-checks by the Apollo verifier, or during the many intermediate hops our software must sometimes make before landing in a customer’s environment. To that end, we also employ in-toto verification at the agent level: the agent that communicates with Apollo about what to install, and that ultimately performs the install, must also perform in-toto verification.

We made the decision to implement in-toto verification at the Apollo level for several reasons. First, it was a single, more straightforward place to perform verification where we could quickly iterate and debug as we rolled out attestations and verification. Second, during the rollout of attestation generation and prior to entering a blocking mode for release promotion to stable, we needed a way to flag to service owners that their service wasn’t compliant and needed to generate attestations. As mentioned earlier, we have an array of build ecosystems, tooling, and artifact types, so ensuring we had sufficient coverage of all cases prior to enforcement was key to preventing customer-impacting outages. If a key service wasn’t generating attestations yet — because they hadn’t updated their build tooling, there was an issue with verification, or there was a bug in attestation generation — the failure would only surface as blocked installs across all of our customers after the release had reached appropriate soak and testing times. That process can take days to weeks, and waiting that long only to discover a cascading set of install failures would have been catastrophic. Instead, by performing verification at the time of release and flagging failed verifications immediately, we were able to respond nimbly to bugs in our in-toto implementation as well as create more urgency for service owners to perform any required updates to their build tooling.

Lastly, there is at least some security value in performing verification at the Apollo level because we’re able to ensure that software entering our Apollo catalog is verifiably produced by Palantir. This has, on several occasions, served as a mitigating control for other security issues that were identified and remediated in our software supply chain. It also allows us to validate that what is in Artifactory hasn’t been tampered with by continually verifying active releases stored there, providing an early warning system for any potential tampering. 

Transitions to In-Toto V1

In the fall of 2024, we began updating our implementation to align with in-toto V1 and to adopt the new attestation framework. We defined our own custom source control release attestation and used Google’s SLSA build attestation for our build step.

As previously mentioned, we gained a lot of value from this transition for our source control attestation. We started using the HEAD commit SHA and the TAG SHA to represent the state of the repository, which meant we no longer hashed every file in the git repo, and during verification, no longer needed to perform the complex diff of those hashes between the release and the build attestations. We were also able to embed significantly more metadata about the release with the new attestation format. 

We also gained significant value from including additional fields in the build attestation. 

Lessons Learned

We learned many lessons during this journey and continue to improve on our implementation. Much of what we learned has helped us improve the security of our deployment model, better understand how to roll out a security control that sits directly in the critical path of deploying software, and determine what effective metrics look like for such a control. 

We initially spent a lot of time trying to build to the exact spec of in-toto, but we ran into several issues during that v0 build-out. Those issues caused us to become highly opinionated on certain practical details of the spec and also helped us identify when we should differ from the official spec entirely. It’s important to note that our initial implementation was developed prior to the V1 release of the in-toto spec and its union with Google’s SLSA attestations. One of the first issues we encountered was a slew of seemingly false positive verification failures. In these cases, an artifact would fail verification because the output of the release step (the state of the git repository) would differ from the state of the repository cloned during the build. The key detail here is that we included every individual file in the git repo in our attestations. We discovered these false positives were due to a number of related reasons, including:

  • Off-roading builds that modified the git repo prior to our in-toto task being able to attest to the repository state. 

  • Differences in how the release and build attestations interpreted Git LFS files. The release attestation took the SHA of the Git LFS pointer file, while the build tooling had the actual file present on disk. We fixed this issue by parsing the Git LFS pointer in the release tooling, as sketched below.
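
To make the LFS fix concrete, here is a minimal sketch of recovering the content OID from a Git LFS pointer file rather than hashing the pointer itself (illustrative only, not our release tooling):

```go
package lfs

import (
	"bufio"
	"errors"
	"strings"
)

// PointerOID extracts the content digest (e.g. "sha256:4d7a2146...") from a
// Git LFS pointer file, whose format is a short set of "key value" lines:
//
//	version https://git-lfs.github.com/spec/v1
//	oid sha256:4d7a2146...
//	size 12345
func PointerOID(pointer string) (string, error) {
	sc := bufio.NewScanner(strings.NewReader(pointer))
	for sc.Scan() {
		if oid, ok := strings.CutPrefix(sc.Text(), "oid "); ok {
			return oid, nil
		}
	}
	return "", errors.New("not a Git LFS pointer: no oid line found")
}
```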

Additionally, the compute to hash every file from every release, twice, was very expensive, as was performing the comparison between the two repo states during verification. 

When in-toto v1 was released along with support for the new attestation formats, we decided we needed to make a change to mitigate these issues. Ultimately, to decrease the flakiness caused by off-roading builds, reduce expensive attestation generation for large mono-repos, and reduce overall verification time, we moved to using the SHA of the HEAD commit from the release tag, the commit signature, and the tag signature as the new basis for the output fields of our release attestation and the input fields of our build attestation. This allows downstream verifiers to validate that the commits and tags were in fact generated by our tooling, based on trust of those signing keys, and to quickly validate that the repo states are equivalent. P99 verification times during spikes in releases (times when hundreds of releases all occur at once) went from approximately an hour and a half to less than fifteen minutes. Attestation size also went down from roughly 50 MB gzipped for some large mono-repos to on the order of kilobytes.
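
A rough sketch of collecting that compact repository state with plain git commands (what we actually record and how we serialize it are internal details; the commit object’s own signature can be retrieved similarly):

```go
package release

import (
	"os/exec"
	"strings"
)

// repoState captures the compact repository identity we now attest to,
// replacing the per-file hash listing described above.
type repoState struct {
	HeadSHA string // commit the release tag points at
	TagSHA  string // SHA of the annotated tag object itself
	TagBody string // raw tag object, which includes the tag's signature
}

func gitOut(dir string, args ...string) (string, error) {
	cmd := exec.Command("git", args...)
	cmd.Dir = dir
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func captureRepoState(dir, tag string) (repoState, error) {
	// Dereference the tag to the commit it points at.
	head, err := gitOut(dir, "rev-parse", tag+"^{commit}")
	if err != nil {
		return repoState{}, err
	}
	tagSHA, err := gitOut(dir, "rev-parse", tag)
	if err != nil {
		return repoState{}, err
	}
	// The raw annotated tag object embeds the signature that downstream
	// verifiers can check against our trusted signing keys.
	body, err := gitOut(dir, "cat-file", "tag", tag)
	if err != nil {
		return repoState{}, err
	}
	return repoState{HeadSHA: head, TagSHA: tagSHA, TagBody: body}, nil
}
```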

Another lesson we learned around attestation generation is that, in a sufficiently large enterprise, a single tool is very unlikely to cover all of your repositories and builds. We were able to insert our attestation generation code into the majority of builds initially. However, there were more than a few cases where we had to integrate into one-off build tooling for particular mono-repos or legacy code bases. This created significant delays in our ability to enforce verification. The issue was that until every product was generating valid attestations, we couldn’t block products that didn’t have them yet. If we had, we would’ve ended up blocking all releases for critical services for days, weeks, or, in at least a couple of cases, months. These outcomes are obviously unacceptable.

Enforcement Strategies

Our enforcement model initially was to verify attestations and, if that verification failed, block the product from installing. However, if the service did not have any attestations, we would skip verification and allow the install. If we had attempted to block services lacking attestations from the start, we would have blocked huge subsets of our products from installing for months. The reason was that, as we rolled out attestation generation, we couldn’t cover all build tooling at once and had to wait for the owning development teams to integrate attestation generation into their build tooling. Gaining full coverage took over 12 months due to lack of developer resourcing, build tooling complexity, and other factors. While we waited for generation coverage, we still wanted to perform in-toto verification, so the soft enforcement model helped us continue to iterate on the implementation, collect metrics, and gain a limited measure of security value. This type of delay is probably very familiar to other large-enterprise security engineers: introducing security tooling into builds and achieving 100% coverage is an extremely difficult problem.

Iterative Tightening of Enforcement 

It was important to slowly tighten our enforcement of verification. Initially, it was impossible to enforce that verification passed because inevitably some subset of products hadn’t started generating attestations yet. Using the soft enforcement policy allowed us to begin verifying attestations while allowing products that hadn’t yet generated an attestation to continue to be used. Eventually, as we stabilized attestation generation and eliminated many of the bugs and edge cases, we wanted to tighten enforcement further. We decided to still allow products that weren’t generating attestations to be installed, but for any product that had previously published an attestation, a new version lacking an attestation would now fail verification. This meant existing verifiable products couldn’t regress and helped solidify attestation generation as a required build operation.
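
The ratchet itself is a small piece of policy logic. A hedged sketch of how such a check could look (the names and structure are hypothetical):

```go
package policy

// Decision captures the three outcomes of our soft-enforcement model.
type Decision int

const (
	Allow           Decision = iota // attestation present and verified
	AllowUnverified                 // product has never published attestations yet
	Block                           // verification failed, or a previously attesting product regressed
)

// evaluate implements the iterative tightening described above: products that
// have published attestations before must keep doing so; products that never
// have are (for now) allowed through unverified.
func evaluate(hasAttestation, verified, previouslyAttested bool) Decision {
	switch {
	case hasAttestation && verified:
		return Allow
	case hasAttestation && !verified:
		return Block
	case previouslyAttested:
		return Block // regression: a new version dropped its attestation
	default:
		return AllowUnverified
	}
}
```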

Exemptions and Break Glass Procedures

One of the ways we ensured a smooth rollout and minimal operational disruption was creating a live-reloadable ignore/skip list option in the configuration of our verifier. In cases where a product had previously been publishing attestations and there was suddenly an issue publishing new attestations for any reason (e.g., build issue, Gradle plugin incompatibility, bug in attestation generation, new build instruction that dirtied the repo), we could add the product to the skip list; it would then be given a special label in Apollo signifying that it couldn’t be validated but that installs were manually allowed. This is obviously suboptimal, but when dealing with ~10,000 products, ~100,000 product releases, and millions of installs, it’s critical to be able to unblock in certain circumstances to maintain operational continuity. At any given time, our skip list contained fewer than 20 products.
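
Purely to illustrate the mechanism (the real configuration lives inside our verifier and goes through normal change control), a live-reloadable skip list can be as simple as a file of product identifiers that the verifier re-reads on an interval:

```go
package skiplist

import (
	"os"
	"strings"
	"sync"
	"time"
)

// SkipList is a hypothetical live-reloadable list of products whose
// verification is temporarily skipped (and labeled as such in Apollo).
type SkipList struct {
	mu       sync.RWMutex
	products map[string]bool
}

func (s *SkipList) Skipped(product string) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.products[product]
}

// WatchFile re-reads a newline-delimited file of product names on an interval,
// so operators can unblock an install without redeploying the verifier.
func (s *SkipList) WatchFile(path string, every time.Duration) {
	for {
		if data, err := os.ReadFile(path); err == nil {
			next := map[string]bool{}
			for _, line := range strings.Split(string(data), "\n") {
				if line = strings.TrimSpace(line); line != "" {
					next[line] = true
				}
			}
			s.mu.Lock()
			s.products = next
			s.mu.Unlock()
		}
		time.Sleep(every)
	}
}
```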

Along the same principle of maintaining operational continuity, we also implemented a break glass operation. This was used only when a release had failed verification but we still needed to enable its promotion to STABLE in order to install it on a customer stack. Normally we would correct the false positive issue and then release a new version of the product, resulting in a passing verification. However, in some cases, releases urgently needed to be installed and we didn’t have time to correct the false positive cause. For example, there may have been a release that fixed an urgent security issue or bug but was now being blocked from install fleet-wide. As stated above, these cases were due to bugs or other false positive conditions. For these cases, we allowed an AppSec engineer to manually label the particular release as installable despite having failed verification. The benefit of this approach is that it quickly unblocks a release through a transparent process gated by role-based access controls in Apollo, meaning a product owner wouldn’t be able to use the break glass to bypass verification without involving an AppSec engineer.

Verification at Install Time 

We chose to use Apollo as an initial verification point, but we also implemented a verifier at the customer environment level where services were actually being installed. The service responsible for interacting with Apollo to determine what to install, pulling the product artifacts, and then actually deploying them to their install location also performed in-toto verification. Unfortunately, that service was no longer responsible for installing many services as our product architecture shifted to our K8s-based infrastructure known as Rubix. Additionally, this move resulted in an increase in the types of artifacts we deployed. Previously, we installed Palantir’s own artifact format known as SLS: tar.gz archives with a specific structure. The move to Rubix introduced containers, K8s apps, and Helm charts. This proliferation created additional install paths, new services responsible for installing, and new artifacts to generate attestations for. Moreover, because Rubix was actively evolving, it was difficult to pin down where in the architecture we would even perform in-toto verification.

Rubix has now stabilized, and we continue to iterate on an in-toto verification architecture that is scalable and effectively covers the full scope of artifact installs that occur in our environments. 

Future plans

Static Analysis Attestations

An additional attestation type we plan to generate is for static analysis on the release tag. We use CodeQL to perform static analysis on all PRs and all commits to the main branch. When the scan runs on a build triggered by a release tag, the job will generate an attestation that can be linked to the release using the tag and commit SHAs and their signatures. With these, we can prove that all releases have undergone required static code analysis prior to deployment. 
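
A hedged sketch of how such an attestation could be tied to the release, using the general in-toto statement shape (the predicate type and fields are hypothetical, not a published CodeQL schema):

```go
package staticanalysis

// Statement mirrors the general in-toto statement shape: subjects identified
// by digest plus a typed predicate. Field names below are illustrative.
type Statement struct {
	Type          string        `json:"_type"`
	Subject       []Subject     `json:"subject"`
	PredicateType string        `json:"predicateType"`
	Predicate     ScanPredicate `json:"predicate"`
}

type Subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"`
}

// ScanPredicate is a hypothetical predicate recording that CodeQL ran against
// the tagged commit, so the scan can be tied to the same HEAD and tag SHAs
// used by the release and build attestations.
type ScanPredicate struct {
	Tool       string `json:"tool"`       // e.g. "codeql"
	CommitSHA  string `json:"commitSha"`  // matches the release attestation's HEAD SHA
	TagSHA     string `json:"tagSha"`     // matches the release tag object SHA
	ResultsURI string `json:"resultsUri"` // where the scan results are stored
	Passed     bool   `json:"passed"`
}
```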

Dependency Vulnerability Scanning 

As described in our previous blog post on continued vulnerability management, we scan all releases deployed to production for third-party library vulnerabilities. If a version of a product has a vulnerability that is beyond the fix SLA, we initiate a recall in Apollo, which forces an upgrade to a newer version if available. 

Currently, we enforce these scans via Apollo by requiring that the scans have occurred before allowing promotion to STABLE, similar to its enforcement of in-toto. However, we often have to move artifacts between Apollo instances, and there’s no verifiable way to prove an artifact has undergone required checks or met other requirements. Having the scanner generate an attestation for each scan would provide a transferable piece of evidence that could be independently verified by any Apollo hub.

ITAR Attestation

Building off the concept of using attestations to create a deployment policy engine, we can utilize metadata in the attestations to delineate special artifacts like ITAR artifacts. Currently, we have separate source control, build, and artifact storage environments to separate and signify which artifacts are ITAR. If we either generate an additional ITAR attestation or place an ITAR field in each attestation type (e.g., release and build), Apollo and environment-based verifiers would be able to validate whether an artifact is deployable to a particular environment.

At the Apollo level, we can also note whether an artifact is compliant in how it was built. If an artifact has one attestation with an ITAR flag set to true and another with it set to false, we can be sure that there was some sort of cross-contamination between build steps. This concept can be taken further by knowing which products are supposed to be ITAR and ensuring each step was performed by the corresponding ITAR-segmented environment. This set of transferable evidence is a harder control for ensuring ITAR compliance and tracking ITAR artifacts across environments.
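
A minimal sketch of that cross-contamination check (the per-step ITAR flag is an assumption for illustration):

```go
package itar

import "errors"

// checkITARConsistency ensures every attestation in a release chain agrees on
// the (hypothetical) ITAR flag and that the agreed value matches what the
// product is registered as. Any mismatch implies cross-contamination between
// ITAR and non-ITAR build environments.
func checkITARConsistency(flags map[string]bool, expectITAR bool) error {
	for step, isITAR := range flags { // e.g. {"release": true, "build": true}
		if isITAR != expectITAR {
			return errors.New("ITAR flag mismatch in step: " + step)
		}
	}
	return nil
}
```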

Approved Deployment Attestations

This is a third possible example of combining in-toto attestations and a policy engine to control deployment of artifacts. In many compliance-heavy environments, one or more human users must sign off on the deployment of new versions or new software. When the software goes through multiple hops, it’s difficult to reliably and provably transfer the metadata of that approval. An attestation of that approval provides a verifiable way to enforce this policy and tie it to a particular artifact. Regardless of how many hops or downstream environments the artifact may go through, an attestation provides a verifiable way to show it was approved at each necessary gate. 

There are many more ways to use these attestations in conjunction with a verifier policy engine. The key takeaway is that the attestations provide a verifiable unit of compliance such that environments can cryptographically validate that the software they’re installing has met the compliance requirements expected. 

Verification Attestations

Now you may be wondering, with all these hops in our deployment model and the problem of air-gapped environments, will we transfer all attestations to end environments? How will we ensure they know the trusted keys? How much additional compute, network bandwidth, and storage are we using to do all of this? 

While we don’t currently have a critical need to solve these problems, we do have a proposal internally to address this if any of these problems becomes a blocker. Google’s SLSA proposed the idea of a verification attestation — an attestation that signifies something else did a full verification based on its requirements, and this artifact passed. This has a couple of nice properties to highlight:

  • Fewer keys to trust. Instead of needing to trust each key in the supply chain or be able to access an online root key, we can use a set of static keys that are used by the single verifier. 

  • Key trust management becomes less painful for disconnected or rarely updated verifiers. Having a single set of valid signing keys which the verifier slowly rotates through over years allows an offline environment to pre-seed the key list and not worry about updating it unless there’s a catastrophic compromise. 

  • Addition or subtraction of supply chain steps, policies, and types of attestations doesn’t break downstream verifiers. If a new step is added to the supply chain, a downstream verifier doesn’t have to know anything about it; only that the upstream verifier it trusts regarded the artifact as valid. 

These benefits greatly simplify verification of artifacts that are multiple hops from the original development and build environment. Instead of needing to encode and transfer the verification logic and key trust for the entire supply chain, we can distill it down to a much simpler subset of expected keys and steps. There is a downside here in that we’re transferring a great deal of trust to the upstream verifier. In the case of offline or rarely updated environments, this risk is acceptable because the alternative implementation would be a much more fragile and error-prone system. Blocking key updates for software that typically requires huge updates at infrequent intervals would be massively painful.
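
As a rough illustration of the idea (simplified, and not the exact schema of SLSA’s Verification Summary Attestation):

```go
package vsa

import "time"

// Summary is a simplified stand-in for a verification attestation: an upstream
// verifier asserts that a given artifact passed its full policy, so downstream
// verifiers only need to trust the verifier's key rather than every key in the
// supply chain.
type Summary struct {
	VerifierID   string            // identity of the verifier (e.g. the Apollo verification service)
	ArtifactName string            // the artifact that was verified
	Digest       map[string]string // e.g. {"sha256": "..."}
	PolicyURI    string            // which policy/layout version was evaluated
	Result       string            // "PASSED" or "FAILED"
	VerifiedAt   time.Time         // when the verification occurred
}
```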

Source Control Security Attestation Fields

Another interesting concept we can implement is a policy gate for releases using attestations. In our release attestation, we are able to have the releasing agent indicate the state of the repository configuration, including:

  • Required status checks

  • Protected branch configuration

  • Required pre-receive hooks

  • Unaddressed security alerts (e.g., dependency vulnerabilities or static code analysis findings)

Within Palantir, we have strict controls implemented at the source repository configuration level, including requiring policy bot as a status check, requiring our custom commit signing status check, having no admin users configured (only maintainers), using pre-receive hooks that block user actions like tagging, and requiring automated configuration management of the repository by our octocorrect tool.

We currently enforce many of these controls via our custom secret injector tool, which blocks the build from acquiring secrets such as in-toto signing and Artifactory publish credentials. However, we could also enforce them at the in-toto verification layer by storing the repository configuration state at release time in the release attestation and requiring certain values in order to verify the final artifact. This would gate software from being deployed via Apollo if it hadn’t met our security bar due to configuration drift or intentional bypassing of security requirements.
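
A hedged sketch of what gating on those recorded fields could look like at verification time (the field names and required checks are assumptions for illustration):

```go
package repopolicy

import "errors"

// RepoConfig is a hypothetical snapshot of repository security configuration
// captured by the releasing agent and embedded in the release attestation.
type RepoConfig struct {
	RequiredStatusChecks []string
	ProtectedBranches    bool
	PreReceiveHooks      []string
	OpenSecurityAlerts   int
}

// gate refuses to verify a release whose repository drifted from the baseline,
// e.g. policy bot removed as a required check or unresolved security alerts.
func gate(cfg RepoConfig) error {
	required := map[string]bool{}
	for _, c := range cfg.RequiredStatusChecks {
		required[c] = true
	}
	switch {
	case !required["policy-bot"] || !required["commit-signing"]:
		return errors.New("required status checks missing at release time")
	case !cfg.ProtectedBranches:
		return errors.New("protected branch configuration disabled")
	case cfg.OpenSecurityAlerts > 0:
		return errors.New("unaddressed security alerts at release time")
	}
	return nil
}
```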

Remaining Challenges

Creating a fully end-to-end secure supply chain implementation using in-toto and attestations is difficult. Inevitably, there are practical blockers and problems to solve, such as heterogeneous language and build ecosystems, varying environment constraints, multi-hop deployment pipelines, high availability requirements, low bug and false positive rates, scaling rollout, and key management.

We have made compromises in our implementation in order to expedite rollout and gain valuable experience operating in-toto at scale. We are actively working to shore up these areas, but they require considerable engineering to do well. 

Key Management and Distribution

The keys that sign the layout file are the most critical keys. The compromise of these keys, or the ability to alter the keys a verifier trusts, compromises the whole system. We began by using offline keys stored on hardware security modules and hardcoding trust of those keys into the verifiers. This approach is far from ideal. While the keys themselves are stored securely, there are major weaknesses in the trust and distribution of the associated public keys. Namely, we rely on change control and monitoring of the hardcoded keys to prevent a compromise.

We recently created an Apollo service that can distribute keys to different Apollo verifiers as needed, instead of hardcoding the keys. Although this eases development and key distribution, there is still room for improvement in detection of, and recovery from, key compromise. We plan to implement TUF in this service to further protect these key components. 

Gaps in Verification Coverage

Due to technical complexity in certain deployment environments, we do not have 100% coverage at the lowest level of deployment. Our Apollo verification casts the widest net for verifying products but leaves a clear, exploitable section between Apollo and the environment. Additionally, we are still working on attestation generation coverage for Helm charts and containers within parts of our build infrastructure. 

Because of updates to our attestation format, we have needed to update all of our attestation generation tooling and verifiers. This is a complex undertaking and requires multiple teams to operate from a shared specification to achieve full coverage. We have recently achieved 100% coverage for ITAR artifacts, and we have fully migrated to in-toto v1 in the part of our infrastructure that uses Gradle as a build system, but gaps remain (e.g., in the Godel part of the infrastructure).

As for verification coverage across all environments, we are optimistic that we’ll be able to achieve full coverage by the end of 2025. Due to some architectural changes, as well as improvements in our internal library support, we believe verifier implementation will become considerably easier. 

Conclusion

Supply chain integrity, as well as software artifact authenticity and integrity, are difficult problems to solve. While in-toto provides a framework that offers these security properties, there is still significant complexity that must be addressed and practical details that need to be accounted for. We still have a lot of security value to derive from in-toto and believe our plans for expanding it will greatly improve the security of our software and our customers.
