AI Browsers Emerge to Redefine the State of Browsing
Plus: Meta charts a new course, AI models face a legal Ship of Theseus problem, and a global policy roundup
Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! We’re awaiting the Trump administration’s three expected AI Executive Orders to drop today, including a possible Moratorium 2.0 push impacting state-level AI regulatory autonomy. We saw this approach play out earlier this month in Congress, and we don’t believe it serves anyone’s best interests. Stay tuned on the Trustible blog for updates as we learn more about what this means for AI adoption.
In the meantime, in today’s edition (5-6 minute read):
Building A Moat: Agentic AI Web Browsers
Several prominent AI companies, including Perplexity and OpenAI, have announced their own web browsers. Perplexity’s browser, Comet, has recently been in preview, and the first few positive reviews (TechRadar, Mashable, FastCompany) highlight how integrating AI directly with a browser’s capabilities enables agentic workflows, summarizing and surfacing key information on websites, and conducting deep-research tasks with the advantage of being logged into systems. While AI features integrated directly into a web browser could offer a more seamless experience than separate applications like ChatGPT, it’s also worth considering the additional risks and potential motives behind these initiatives. Aside from competing with Google, as Perplexity’s CEO seeks to do, here are our theories on why a web browser could be massively beneficial to big AI companies:
Disclosure: These are hypotheses about how a browser could be exploited and the risks involved. Trustible has not reviewed the systems in question, nor their terms of service, to know whether these practices would be allowed.
Our Take: AI web browsers could become a powerful tool for personal productivity and reduce some of the friction and clunkiness of using a separate platform. However, a browser could also be the perfect tool for solving a number of current and emerging challenges faced by major AI companies, and it could expose people to massive privacy risks. The winner of the AI browser wars could emerge with an insurmountable advantage in its ability to collect and evaluate content.
AI Round Up ‘Round the World
Here is our quick synopsis of the major AI policy developments:
Industry. Delta announced that it would increase the share of ticket prices influenced by AI from about 3% to 20%. Lawmakers and privacy groups have voiced concerns over the initiative, characterizing it as invasive and predatory.
AI Model Fine Tuning’s Ship of Theseus Problem
On July 18, 2025, the European Commission released guidelines to help clarify obligations for general purpose AI (GPAI) model providers and complement the recently published Code of Practice. While the focus has primarily been on obligations for frontier model providers, there are concerns about how model fine-tuning may shift those obligations from the original model provider to the modifying organization.
Think of this through the lens of the Ship of Theseus thought experiment: if a wooden ship has each of its planks and nails replaced piece by piece over time, is the entirely replaced ship still the same ship? The European Commission’s guidelines raise a parallel real-world conundrum for AI model providers and fine-tuners.
The latest Commission guidance clarifies that a downstream organization is considered the model provider when the training compute used to modify the model is “greater than a third of the training compute of the original model.” This threshold was chosen, instead of a fixed number of FLOPs, because the amount of compute needed to significantly modify a model is relative to its size. In contrast to previous texts, this document provides specific methods for calculating compute and supplies placeholder values when the compute of the original model is not disclosed (as is the case for a majority of systems).
Many common applications are unlikely to reach the specified threshold. For instance, Llama-4 Maverick would need roughly 5.5 trillion words of fine-tuning data to meet the modification threshold, while common guidance recommends fine-tuning with a much smaller data set, typically suggesting starting with several thousand examples (a rough sketch of this arithmetic follows below). Organizations whose applications require substantial modification, however, may have a harder time determining whether they cross the threshold. The Act’s annex emphasizes that almost all forms of compute used should be tallied, with some enumerated exceptions; however, not all compute can be easily ascertained. For example, reinforcement learning is often used to instill “helpful” and “harmless” behavior in models, and these processes can consume large amounts of compute but have no direct formula for estimating it.
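For a sense of scale, here is a minimal back-of-the-envelope sketch. It assumes the widely used 6 × N × D FLOPs approximation for training compute, publicly reported Llama-4 Maverick figures (roughly 17B active parameters and ~22T pre-training tokens), about 0.75 words per token, and a single fine-tuning pass; none of these numbers come from the Commission’s guidance itself.

```python
# Back-of-the-envelope check of the one-third compute threshold.
# All figures are assumptions: the standard 6 * N * D FLOPs approximation,
# reported Llama-4 Maverick numbers (17B active params, ~22T pre-training
# tokens), ~0.75 words per token, and one fine-tuning epoch.
ACTIVE_PARAMS = 17e9        # active parameters per token (mixture-of-experts)
PRETRAIN_TOKENS = 22e12     # reported pre-training token count

pretrain_flops = 6 * ACTIVE_PARAMS * PRETRAIN_TOKENS  # ~2.2e24 FLOPs
threshold_flops = pretrain_flops / 3                  # "greater than a third"

# Tokens of fine-tuning data needed to spend that much compute at the same
# FLOPs-per-token rate (one pass over the data):
threshold_tokens = threshold_flops / (6 * ACTIVE_PARAMS)  # ~7.3e12 tokens
threshold_words = threshold_tokens * 0.75                 # ~5.5e12 words

print(f"~{threshold_tokens:.1e} tokens (~{threshold_words:.1e} words) "
      "of fine-tuning data to cross the threshold")
```

Swapping in the Commission’s placeholder values for undisclosed original compute changes only the constants; typical fine-tuning runs of a few thousand examples remain many orders of magnitude below the line.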
Our Take: Many organizations that modify an existing GPAI model may not need to worry about becoming general purpose model providers themselves, because their fine-tuning falls well below the compute threshold. However, the Act misses the mark on how smaller changes can substantially impact a model, especially when techniques like parameter-efficient fine-tuning can modify models with far less compute (see the sketch below). The Act’s threshold may ultimately punish organizations that fine-tune at scale without making significant changes to the model, while sparing organizations whose smaller modifications significantly affect alignment and alter model behavior.
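To illustrate why parameter-efficient techniques sit so far below the threshold, here is a minimal LoRA sketch using Hugging Face’s peft library. The model name, target modules, and hyperparameters are illustrative placeholders, not recommendations, and actual compute will vary by setup.

```python
# A minimal LoRA sketch with Hugging Face's peft library. The model name,
# target modules, and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")  # placeholder model

config = LoraConfig(
    r=8,                                  # low-rank dimension keeps adapters tiny
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adapt only the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)

# Typically well under 1% of parameters are trainable, so fine-tuning compute
# is a small fraction of the original training run.
model.print_trainable_parameters()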
Meta Charting a New Course
Meta has dominated several AI news cycles over the past few weeks, with major headlines about $100+ million signing bonuses for top AI researchers and a purchase/investment in Scale AI that made Scale’s CEO the new head of Meta’s AI division. Hidden amongst these market moves, however, are two core stories that may impact organizations using Meta’s AI technologies. It’s worth looking into these key events and understanding what they mean for technology and policy professionals.
Our Take: Meta clearly has big ambitions to become a foundational layer alongside the likes of OpenAI and Anthropic, and its investments in talent and M&A make that clearer than ever. Meta is distinct in the landscape as the provider of the best performing Western open source LLM on the market, something it has, until recently, highlighted as its standout differentiator. Many organizations have indicated they want to build on top of Llama directly, or fine-tune it to create their own AI intellectual property, and to host and run it entirely on-prem for stronger privacy and cyber protections. Llama is also one of the most common models used in academic research, as its smaller versions can run on an average desktop computer without specialized AI hardware. But Meta’s recent actions have created uncertainty and unease about the apparent pivot, which may harm the company in both the short and long term. It’s unclear whether a closed ecosystem will be compliant in the EU, whether Llama will continue to be developed as a platform to build on, or whether Meta’s systems will retain the improved security and reliability that come with being a heavily researched open system.
FAccT Finding: AI Takeaways from ACM FAccT 2025
In June, while policymakers debated the finer points of the US AI Moratorium and the EU AI Act Code of Practice, researchers from around the world (including our Director of Machine Learning, Anastassia Kornilova) gathered at the ACM FAccT Conference to discuss the finer points of building responsible AI systems. What we learned reinforced the importance of involving affected populations in AI system development and testing, and the breadth and depth of testing necessary to build safer AI systems. At the same time, we heard that gaps persist between the richness of this research and how policy is crafted at both the organizational and government levels.
You can read more about our takeaways on our blog.
—
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team