Trustible and Databricks team up to operationalize the DAGF
Plus why the AI moratorium (RIP) would have backfired, why AI slop is making human-generated content a premium, and the inflection point in the AI copyright debate
Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! The AI news cycle never slows down, not even over a holiday weekend for those of us here in the U.S. Between last week's failure of the proposed AI regulatory moratorium, separate lawsuits involving Anthropic, Meta, and Midjourney being both filed and resolved on the topic of AI copyright infringement, and the European Commission signaling it is full steam ahead on implementing the EU AI Act, we're watching history being made in real time.
In today’s edition (5-6 minute read):
1. Trustible and Databricks team up to operationalize the DAGF
Last week, our partner Databricks introduced their AI Governance Framework (DAGF v1.0), a structured, practical approach to governing AI adoption across the enterprise. The DAGF acknowledges what many organizations are already discovering: AI governance is not simply a technical exercise. It's about aligning people, processes, policies, and platforms to ensure that AI systems are trustworthy, compliant, and scalable.
The Databricks AI Governance Framework marks a pivotal step in helping organizations balance innovation with responsible deployment. But success depends on operationalizing it effectively across people, processes, and technology.
Yesterday, we announced that Trustible is proud to serve as the official Technology Implementation Partner of the Databricks AI Governance Framework and a key contributor alongside leading organizations such as Capital One, Meta, Netflix, Grammarly, and others. DAGF offers a practical, flexible framework designed to help enterprises embed AI governance into day-to-day operations, regardless of where they are in their AI maturity journey. You can read more about how we’ve interpreted the DAGF in the Trustible platform in our whitepaper here.
Key Takeaway: Starting this week, Trustible customers will be able to align their AI governance efforts directly to the framework through a dedicated DAGF module within the Trustible platform, embedding AI governance into the fabric of their AI strategy so they can build, deploy, procure, and scale with confidence. This is the first of many partnerships to come with AI deployers, infrastructure providers, and ecosystem partners to ensure enterprises of all sizes and shapes have access to ready-to-deploy governance solutions that adapt as quickly as the market does.
2. Why the AI Moratorium would have backfired
While we now know the ultimate fate of the proposed State AI Legislative Moratorium that was included in the ‘One Big Beautiful Bill’ budget (it was removed by the Senate in a 99-1 vote), the idea is likely to stick around, and similar proposals may appear in the future. We supported its removal for a variety of reasons, but our biggest argument against it was that it would have backfired. Specifically, we think banning AI regulations in the absence of any federal clarity would have hurt the very startups and innovative environment that its proponents were trying to protect. Here’s a brief outline of why:
For a deeper dive, we have a more detailed post here.
Our Take: The idea that all regulation is bad for business is a little simplistic, and can often serve as a form of regulatory capture. A balanced federal framework would be the best approach for everyone involved, but pre-empting state rules before that framework has even been discussed would hurt all but the biggest AI companies.
3. Policy Round-Up
Here is our quick synopsis of the major AI policy developments:
4. Why AI Slop Matters
With recent improvements in the quality and cost-effectiveness of AI-generated content, it seems impossible to escape 'AI slop': low-quality generated content used mainly to drive online engagement. We see it on our social media feeds, in the content we read, and increasingly in our professional work. The public is becoming more aware of it as well, with recent news stories covering its impact on events like the most recent season of Squid Game and even the Sean Combs trial. The topic was covered in depth by John Oliver in a recent 'Last Week Tonight' episode. How big of an issue is 'AI slop', though? Is it simply the new spam, something to be ignored that will eventually fade into the background, or are there major governance implications? Here's a brief overview of why 'AI slop' may be relevant to organizations using AI.
Wasteful Spend - Even before the current 'Age of AI', a conspiracy theory known as the 'Dead Internet Theory' postulated that the majority of online content and interactions were driven by automated bots. The challenge: many organizations spend massive amounts of money on ads to drive 'engagement', or derive market insights from it, and engagement driven by bots and slop makes both far less valuable. Big Tech platforms unfortunately have an incentive to 'boost' engagement artificially, even though it yields poor ROI for advertisers.
Degraded Reputation - Many organizations differentiate themselves based on the quality of the services they provide. Consider every top-tier law firm, editorial publication, or consulting business that charges high rates for access to top thinkers. The problem: what if those 'top thinkers' are using the same AI as everyone else? The temptation to use AI systems may win out, even as studies show that the diversity of AI-generated content is actually quite low and that overuse can degrade our own cognitive abilities over time. Organizations that differentiate on quality will face a constant fight to keep too much slop from eroding their reputation.
Key Takeaway: While AI-generated content is quick and easy to create, there may be a persistent bias against it, and the internet will likely become overwhelmed by it. This could create a premium market for authentic human-generated content, though sustaining that quality and output at scale could prove difficult. Enterprises that rely on AI to generate content, especially as part of their marketing strategy, should expect that lack of authenticity to ultimately reduce the effectiveness of those strategies. It's an important reminder that while AI is a transformative technology, it is a tool, not a replacement for human content creation.
5. The Great AI Copyright Conundrum
Over the span of three days, rulings in two major cases on AI and copyright law were handed down. On June 23, 2025, a judge ruled in favor of Anthropic after a group of authors sued the company for copyright infringement, alleging that it trained Claude on their protected works without permission. The judge found that, while Anthropic may have broken the law when it trained Claude on millions of pirated books, training on books that were legally purchased did not violate copyright law. The judge reasoned that training the model on the books was fair use because Claude's outputs generated new text that was "quintessentially transformative" of the original material.
Two days later, a judge found in favor of Meta after a separate group of authors sued the company for using their copyrighted books to train Llama. The judge found that Llama’s outputs did not cause sufficient market harm to the authors because Llama was not able to generate “any meaningful portion” of the authors’ books that would threaten the books’ market value. Moreover, the authors did not present meaningful evidence that Meta’s use of their books diluted their value. The judge seemingly left open the door to further litigation on this issue by noting that the ruling only impacted “the rights of [the] 13 authors—not the countless others whose works Meta used to train its models … this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”
The big tech companies are heralding these rulings as a win; however, they do not resolve the broader issues around how AI model providers use protected works to train their models. Companies should also take note that the rulings do not shield them from liability for infringing on someone's intellectual property (IP) rights. Not all models are created equal, and it is important to understand whether the underlying model(s) for a company's AI products or services have guardrails in place to avoid violating IP laws. Companies should understand how a model handles IP in its training data. For instance, Trustible Model Ratings identify when a model has policies around IP in its training data. Moreover, groups like Creative Commons are working on frameworks that help balance IP rights with the need for high-quality datasets to train AI systems.
Our Take: Big tech won the battle but is far from winning the war when it comes to how AI uses IP. Policymakers are thinking about how best to balance IP law with AI innovation, but do not expect an answer in the foreseeable future. In the meantime, companies should take steps to ensure that training data is properly licensed where appropriate, select models that have IP safeguards in place, and review outputs to avoid using potentially infringing material.
—
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team