Trustible Raises $4.6M Seed Series Financing Round
Plus Anthropic’s National Security push, vibe coding, and AI policy developments
Hi there! This is a proud week for Trustible! Yesterday, we announced our new funding round which we’re excited to share with our Substack audience. This capital will allow Trustible to continue to drive product innovation, grow its team, and scale its partnerships.
Don’t worry, we also go in-depth into other AI governance-specific topics here! In today’s edition (5-6 minute read):
Trustible Raises $4.6M Seed Series Financing Round
AI for National Security and Specialized Industries
The Vibes of Vibe Coding
A Brief Update on Global AI Policy Developments
Trustible Raises $4.6M Seed Series Financing Round
This has been a big week for all of us at Trustible! Yesterday, we announced our $4.6 million Series Seed round, led by Lookout Ventures, with participation from Eric Schmidt and many others. You can read more about the news here.
When our co-founders Gerald and Andrew started Trustible two years ago, they understood clearly that society is at a pivotal moment of change, disruption, and opportunity driven by artificial intelligence. Since then, our team has seen how AI is reshaping everything—from employment and public services, to everyday consumer products and enterprise software applications.
But trust in AI – and in the organizations deploying it – will ultimately shape whether we see the expected benefits from these systems.
Trust remains a critical gap for AI adoption. A recent KPMG study surveyed over 48,000 people across 47 countries and found that only 46% of people globally are willing to trust AI systems, while over 70% believe regulation is needed.
When governance is done right – with trust at the center – it becomes a powerful accelerant of AI adoption. Trustible’s customers—38% of whom are Fortune 500 companies, 62% publicly traded, and over 87% operating globally—have proven that AI governance can go hand-in-hand with rapid deployment.
We’re also proud to center that growth right here in the D.C. area. In fact, over 80% of this round comes from VCs and angel investors based in the Washington, D.C. area. D.C. is the nation’s third-best city for tech, with 1,000+ startups, a 24% rise in tech jobs since 2019, and some of the highest average tech salaries in the world. Our region’s strengths translate directly into an opportunity for Trustible: we can build at the intersection of policy and tech, proving that the next generation of responsible-AI leaders will come from a place that understands that governance and innovation are two sides of the same coin.
Key Takeaway: With this new funding, we’re doubling down on growing our team locally, scaling our customer footprint, and providing new and innovative capabilities that enable enterprises to solve the most challenging problems in AI governance. Stay tuned, we’re just getting started!
AI for National Security and Specialized Industries
Source: Anthropic
Last week, Anthropic announced a new set of models dedicated to US national security use, along with the appointment of Richard Fontaine, the leader of a prominent national security think tank, to their oversight board. This isn’t Anthropic’s first foray into the national security sector: they previously announced a partnership with Palantir to bring their previous generation of models to the intelligence community. Why the need for a specialized set of models? On the legal side, the public versions of Claude, including those hosted on AWS, are bound by an acceptable usage policy that prohibits use for criminal justice or law enforcement purposes; presumably, neither the Palantir deal nor the new national security models are bound by those terms. On the technical side, Claude ships with many default alignment guardrails, and a lot of Anthropic’s research has focused on how to ensure models are built with strong ethical principles. It is unclear to what extent these guardrails may have been altered or removed for national security use. Both the legal and technical aspects were also cited in the recent decision by the Department of Homeland Security to retire commercial AI models in favor of dedicated national security ones.
We do not know enough about the intended uses of these models to debate their ethics here, nor will we dig into potential questions about Anthropic conducting this work. Instead, we see this as the continuation of a trend: different domains need specialized models so that guardrails can be properly aligned to the appropriate end user. For example, Anthropic’s usage policy prohibits using their models for any political campaigns and specifies certain high-risk uses that require mandatory protections. However, there are legitimate potential uses for generative AI in political activity, and in high-risk domains, that do not fit within those restrictions. While mass-market commercial models may need heavy guardrails, partly to limit liability for the provider, we expect to see more niche models released that remove certain guardrails or restrictions. Use of these less restricted models will likely require end users to be highly literate in appropriate AI use and the associated risks, and ensuring that they are properly qualified is a challenge unto itself.
Key Takeaway: We’ll likely see an evolution of models for specific domains with fewer built-in guardrails, but restricted to appropriately trained personnel as is starting to happen in the national security space.
The Vibes of Vibe Coding
“Vibe Coding” started as a joke about handing a task to an LLM in natural language and accepting generated code that mostly (probably?) works, while the developer “vibes” (i.e., relaxes). However, it quickly became emblematic of a new trend in software development made possible by powerful new LLMs (e.g., Claude 4). Coding copilots such as Cursor and GitHub Copilot have been popular for several years, primarily used to generate well-defined functions and auto-complete sections of code. In contrast, “vibe coding” is a more hands-off approach in which the user does not understand or verify the generated code line-by-line. AI providers are investing in this approach: major LLM providers are now acquiring or creating IDEs (i.e., developer apps) that make it accessible, while no-code tools like Lovable enable thousands of apps to be built daily. Google and Microsoft have recently reported that over 30% of their new code is AI-generated, though it is unclear how much of that is “vibe-coded” versus created through a more controlled process.
Vibe coding can lower the barrier to entry for a non-technical audience and can allow developers to create an initial application in hours instead of days. However, it is not without risks. The primary risk is harmful code generation or the introduction of security vulnerabilities into production systems. An analysis of Lovable apps found that 10% allowed anyone to access sensitive data, like financial information and secret API keys. Other studies suggest that LLMs have significant weaknesses both in detecting insecure code and in avoiding generating it. To address this risk, AI literacy training is necessary so that users understand which applications are appropriate for vibe coding (e.g., not applications that handle sensitive data). Traditional security mitigations, like static vulnerability scanning tools, can help as well, as sketched below. Beyond security, the use of vibe coding within larger production systems remains experimental, with no consensus on best practices: some recommend focusing on hand-writing tests, while others focus on AI-driven code reviews. In general, strict version control practices that make it easy to track and revert AI changes will be critical to support this process.
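To make the static-scanning idea concrete, here is a minimal sketch in Python of the kind of pre-merge check a team might run over AI-generated code. The regex patterns and the scan_generated_code helper are illustrative assumptions, not a recommended ruleset; in practice, teams typically lean on dedicated scanners such as Bandit, Semgrep, or gitleaks.

```python
import re

# Minimal sketch: flag a few obviously risky patterns in AI-generated code
# before it is merged. The patterns are illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = {
    "hardcoded API key or secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "SQL built via string interpolation": re.compile(
        r"(?i)execute\(\s*f?['\"].*(select|insert|update|delete)"
    ),
}

def scan_generated_code(source: str) -> list[str]:
    """Return human-readable findings for a chunk of generated code."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    # Example of "vibe-coded" output with a hardcoded key and unsafe SQL.
    snippet = (
        'API_KEY = "sk_live_abcdefghijklmnop1234"\n'
        'cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")'
    )
    for finding in scan_generated_code(snippet):
        print(finding)
```

A check like this is only a first pass: it complements, rather than replaces, human review and the version control discipline described above.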
A Brief Update on Global AI Policy Developments
The last few weeks have seen a flurry of AI policy activity across the globe. Here is our quick synopsis of the major AI policy developments:
U.S. Federal Government. The proposed federal AI moratorium may get removed from the Republicans’ reconciliation bill, even as Senate Republicans amended the language to tie the moratorium to state broadband funding. The Senate Parliamentarian will likely find that the provision violates Senate procedures for what is allowed in a reconciliation bill, a point some Senate Republicans have publicly conceded.
U.S. States: Texas. On May 30, 2025, the Texas state legislature passed the Texas Responsible AI Governance Act, which now awaits Governor Greg Abbott’s signature or veto. The bill imposes specific obligations on how the state government can develop or deploy AI systems, creates an AI regulatory sandbox program, and establishes a non-regulatory AI oversight council.
European Union. Lawmakers in the EU are reportedly considering delaying enforcement for certain provisions of the EU AI Act. The potential delay would be limited to provisions where technical guidance or standards are required, but not yet developed.
Japan. On May 28, 2025, the Japanese Parliament (known as the “Diet”) passed an AI law that takes a light-touch regulatory approach. The law relies on cooperation between businesses and the government on AI governance issues, rather than imposing compliance penalties. Companies remain subject to existing laws, and sanctions related to AI could still be imposed under those existing regimes.
Standards Bodies. Several standards organizations also published new AI-related standards.
Our Take: The recent AI policy developments are a mixed bag when it comes to regulation versus innovation. While there has been a general pivot away from AI regulation in order to encourage AI innovation, some of the latest activity underscores that important work is still happening to build and demonstrate trust within the AI ecosystem.
—
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team