AI Browsers Emerge to Redefine the State of Browsing

Plus: Meta charting a new course, AI models’ legal Ship of Theseus problem, and a global policy roundup

Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! We’re awaiting the Trump administration’s three expected AI Executive Orders to drop today, including a possible Moratorium 2.0 push impacting state-level AI regulatory autonomy. We saw this approach play out earlier this month in Congress, and don’t believe it serves anyone’s best interests. Stay tuned on the Trustible blog for updates as we learn more about what this means for AI adoption.

In the meantime, in today’s edition (5-6 minute read):

  1. Building A Moat: Agentic AI Web Browsers
  2. Global & U.S. Policy Roundup
  3. AI Model Fine-Tuning’s Ship of Theseus Problem
  4. Meta Analysis
  5. FAccT Finding: AI Takeaways from ACM FAccT 2025


Building A Moat: Agentic AI Web Browsers


Several prominent AI companies, including Perplexity and OpenAI, have announced the creation of their own web browsers. Perplexity’s browser, Comet, has recently been in preview, and the first few positive reviews (TechRadar, Mashable, FastCompany) highlight how integrating AI directly with a browser’s capabilities enables agentic workflows, summarizing and highlighting key information on websites, and deep-research style tasks with the advantage of being logged into systems. While AI features integrated directly into a web browser could offer a more seamless experience than a separate application like ChatGPT, it’s also worth considering some of the additional risks, and the potential motives behind these initiatives. Aside from competing with Google, as Perplexity’s CEO seeks to do, here are our theories on why a web browser could be massively beneficial to big AI companies:

  • Bypassing Anti-AI Scraping - Many websites have rolled out tools to prevent scraping by AI bots, and Cloudflare recently made deploying them much easier. However, anti-scraping systems are designed to let humans view the content, which means the page is still fully rendered in the user’s browser. If the browser itself has access to that rendered HTML and text, it can capture and share copies of it for training, bypassing anti-scraping defenses and sidestepping robots.txt entirely (see the sketch below). This could give a huge competitive advantage to the browser creator, while creating massive privacy risks.
  • Building Popularity Metrics - Search Engine Optimization (SEO) has long been a major focus for marketing teams everywhere, as search result rankings can make or break entire companies. Now, however, organizations are thinking about ‘Answer Engine Optimization’: ensuring that your content comes recommended by AI. The era of prominent ‘back-links’ may be over, but there is still a lot of value in knowing which websites others find the most useful or popular. Browsers that collect and track top websites and other activity could deliver this information at scale, improving the quality of AI responses.
  • Collecting Human-generated Content - As the internet becomes flooded with more AI-generated content, it will become increasingly difficult to find ‘human written’ content to fuel further gains in model quality. This is partly because AI models trained on content they themselves generate seem to suffer from ‘model collapse’. However, if a browser can track what a user writes, and whether they used AI or wrote it from scratch, it could distinguish human-generated content from AI-generated content and then further train on the former.

Disclosure: These are potential ways a browser could be exploited, and hypotheses about risks. Trustible has not reviewed the systems in question, nor their terms of service, to know whether these practices would be allowed.
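
To illustrate how little stands between a rendering browser and a training-ready copy of a page, here is a minimal, hypothetical sketch using the open-source Playwright automation library: once a page has been rendered for a human, the full post-JavaScript text is one call away. This is a generic illustration of what any browser-level software could do, not an observed behavior of Comet or any other product.

```python
# Hypothetical sketch: capturing the fully rendered text of a page,
# i.e. the same content an AI browser would already hold after display.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def capture_rendered_text(url: str) -> str:
    """Load a page the way a real browser does and return its rendered text."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")   # wait for JS-driven content
        text = page.inner_text("body")             # post-render text, not raw HTML
        browser.close()
    return text

if __name__ == "__main__":
    print(capture_rendered_text("https://example.com")[:500])
```

The point is not that this code is novel; it is that an AI browser sits at exactly this capture point by default, on every page a logged-in user visits, which is why the privacy stakes differ from those of a standalone chatbot.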

Our Take: AI web browsers could become a powerful tool for personal productivity and reduce some of the friction and clunkiness of using a separate platform. However, a browser could also be the perfect tool for solving a number of current and emerging challenges faced by major AI companies, and could expose people to massive privacy risks. The winner of the AI browser wars could emerge with an insurmountable advantage in their ability to collect and evaluate content.


AI Round Up ‘Round the World


Here is our quick synopsis of the major AI policy developments:

  • U.S. Federal Government. The Trump Administration is expected to release its AI Action Plan, which was mandated under the President’s AI Executive Order, alongside three new AI-related Executive Orders. The Trump Administration is also continuing to invest in AI infrastructure, recently announcing that Georgia Tech would receive $20 million to build a new supercomputer that will use AI for scientific research. Meanwhile, a bipartisan bill was introduced in the Senate to protect certain types of data from being used to train AI models.
  • U.S. States. AI-related policy developments at the state level include:
    • California. The California Judicial Council adopted a rule requiring courts to develop policies for the use of generative AI by judges and court employees. The new policies must be in place by September 1, 2025.
    • New York. Mayor Eric Adams emphasized that he would leverage more AI technologies to improve city services if he were re-elected to a second term.
    • Pennsylvania. Tech and energy companies plan to invest over $90 billion in Pennsylvania as part of a broader effort to turn the state into an AI hub. The investments are primarily aimed at securing new energy sources for AI infrastructure.
  • Argentina. Netflix recently announced that it used generative AI for the first time in one of its TV shows, the Argentinian sci-fi series El Eternauta (The Eternaut). Using AI remains a contentious issue within the entertainment industry due to concerns over job cuts.
  • Asia. AI-related policy developments in Asia include:
    • China. Beijing hosted the China International Supply Chain Expo, which featured over 650 companies from 60 countries. Nvidia CEO Jensen Huang praised China during the expo as a “catalyst for global progress” because of its open source AI. The comments came one day after Nvidia announced it would resume sales of its H20 chips to China. While the U.S. has generally been concerned about chip exports to China, the Trump Administration was less worried because the H20 chips are Nvidia’s “fourth best” AI chips.
    • Kazakhstan. In a bid to boost its AI industry, Kazakhstan launched Central Asia’s most powerful supercomputer. While the current government has expressed an interest in bolstering AI investments, the inability to retain appropriate talent may hinder the country’s ambitions.
  • EU. Most of the major providers are agreeing to sign the EU’s Code of Practice for general purpose AI models, with Microsoft likely to sign and Anthropic announcing it would sign. Meta continues to be an outlier as the only model provider to publicly state it will not sign the Code of Practice.
  • Middle East. The U.S. launched an initiative with Israel to build a strategic tech hub that focuses on AI and quantum development in an effort to counter influences from China, Iran, and Russia. The partnership will likely expand to include other Gulf and Central Asian nations. The announcement comes as a billion-dollar AI chips deal between Nvidia and the UAE was placed on hold over concerns that the technology may end up in China.
  • North America. AI-related policy developments in North America outside of the U.S. include:
    • Canada. Canadian AI company Cohere is reportedly lobbying Canadian government officials in an effort to have Canada influence AI policy for the G7, as Canada currently holds the group’s presidency.
    • Mexico. Voice actors in Mexico are demanding that the Mexican government enact regulations that would prohibit voice cloning without consent.

Industry. Delta announced that it would increase the share of its ticket prices that are influenced by AI from about 3% to 20%. Lawmakers and privacy groups have voiced concerns over the initiative, characterizing it as invasive and predatory.


AI Model Fine-Tuning’s Ship of Theseus Problem


On July 18, 2025, the European Commission released guidelines to help clarify obligations for general purpose AI model providers and complement the recently published Code of Practice. While the focus has primarily been on obligations for frontier model providers, there are concerns about how model fine-tuning may pass those obligations from the model providers to the modifying organization. 

Think of this through the lens of the Ship of Theseus thought experiment: if a wooden ship has each of its planks and nails replaced piece by piece over time, is the fully replaced ship still the same ship? The European Commission’s guidelines raise a parallel real-world conundrum for AI model providers and fine-tuners.

The latest Commission guidance clarifies that a downstream organization is considered to be the model provider when the training compute used to modify the model is “greater than a third of the training compute of the original model.” This threshold was chosen, instead of a fixed number of FLOPs, because the amount of compute needed to significantly modify a model is relative to its size. In contrast to previous texts, this document provides specific methods for calculating compute and provides placeholder values when compute for the original model is not disclosed (as is the case for a majority of systems).

Many common applications are unlikely to reach the specified threshold. For instance, Llama-4 Maverick would need roughly 5.5 trillion words of fine-tuning data to meet the modification threshold, while common guidance recommends fine-tuning with a much smaller data set, typically starting with several thousand examples. Organizations whose applications do require substantial modification, however, may have a harder time determining whether they reach the threshold. The Act’s annex emphasizes that almost all forms of compute used should be tallied, with some enumerated exceptions, yet not all compute can be easily ascertained. For example, the reinforcement learning processes often used to instill “helpful” and “harmless” behavior in models consume large amounts of compute but have no direct formula for estimating it.
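
As a rough illustration of how the one-third test works, here is a minimal sketch. It assumes the common rule-of-thumb estimate of about 6 FLOPs per parameter per training token; the parameter and token counts are illustrative placeholders, not Meta’s disclosed figures or the Commission’s official placeholder values.

```python
# Illustrative sketch of the "greater than one third of the original model's
# training compute" test from the Commission guidance. All numbers are placeholders.

def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

# Hypothetical original model: 17B (active) parameters, 15T training tokens
original = training_flops(params=17e9, tokens=15e12)

# Hypothetical fine-tune: 50,000 examples of ~500 tokens each
fine_tune = training_flops(params=17e9, tokens=50_000 * 500)

threshold = original / 3
print(f"Original training compute : {original:.2e} FLOPs")
print(f"Fine-tuning compute       : {fine_tune:.2e} FLOPs")
print(f"Threshold (1/3 original)  : {threshold:.2e} FLOPs")
print(f"Downstream org becomes provider? {fine_tune > threshold}")
```

Under these placeholder numbers the fine-tune lands many orders of magnitude below the threshold, consistent with the point above that typical fine-tuning workloads are unlikely to trigger provider obligations.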

Our Take: Many organizations that modify an existing GPAI model may not need to be concerned with becoming general purpose model providers themselves. However, the Act misses the mark on how smaller changes can substantially impact a model, especially when techniques like parameter-efficient fine-tuning can modify models with far less compute. The threshold may ultimately penalize organizations that fine-tune at scale without making significant changes to the model, while sparing organizations whose smaller modifications significantly affect alignment and alter model behavior.


Meta Analysis


Meta has dominated several AI news cycles over the past few weeks with major headlines of $100+ million signing bonuses for top AI researchers, and a purchase/investment in Scale AI that made Scale’s CEO the new head of Meta’s AI division. Hidden amongst these market moves, however, are two core stories that may impact organizations using Meta’s AI technologies. It’s worth looking into these key events and understanding what they mean for technology and policy professionals.

  • Meta refuses to sign EU AI Act Code of Practice - As of the time of writing, Meta is the only major AI lab to confirm that it won’t sign the newly released EU AI Act General Purpose AI Code of Practice. Competitors Anthropic, OpenAI, and Mistral have confirmed they will sign, and others like Microsoft are considered likely to do so. The Code outlines copyright, transparency, and safety testing protocols for frontier models, and is technically voluntary until full AI Act enforcement begins in August of 2026.
  • Key Takeaway: Meta has had repeated frustrations with the EU over digital regulations, and could simply withdraw its AI systems from the EU or prohibit their use there altogether. However, Meta has the leading Western open-weight models, which makes avoiding the EU, or enforcing bans, particularly hard. Ironically, when Apple previously withheld its AI systems from the EU over regulatory concerns, it still ran into regulatory trouble, as the move was considered an unfair use of market power under the DMA.
  • Meta considers going closed source - Meta’s new AI head, Alexandr Wang, is reportedly considering a major strategy pivot away from the open-weight Llama models toward closed, proprietary models. Releasing open source (or open-weight) models has been a major issue in AI policy discussions, as such models can enable innovation but can never be ‘reined in’ or controlled once released.
  • Key Takeaway: While a closed model could give Meta more flexibility to compete, there may be national security aspects to consider. Most Chinese LLMs are also open weight, including Alibaba’s Qwen 3 and DeepSeek’s R1. While Llama is seen as less capable than these for now, halting its development would remove the leading US-centric open-weight model, which is still preferred for a range of applications. Chinese open models must meet testing requirements set by Chinese authorities and align with CCP viewpoints, making them unsuitable for a wide range of purposes in the US. We think it’s unlikely that Meta will pull back on its Llama models, especially if it wants the Trump Administration’s help pushing back on heavy EU digital regulations and avoiding current antitrust scrutiny.

Our Take: Meta clearly has big ambitions to become a foundational layer alongside the likes of OpenAI and Anthropic, and its investments in talent and M&A make that clearer than ever. Meta is distinct in the landscape as the provider of the best-performing Western open source LLM on the market, something that until recently Meta highlighted as its standout differentiator. Many organizations have indicated they want to build on top of Llama directly, or fine-tune it to create their own AI intellectual property, and to host and run it entirely themselves on-prem, offering stronger privacy and cyber protections. It’s also one of the most common models used in academia for research, as smaller versions of Llama can run on an average desktop computer without specialized AI hardware. But Meta’s recent actions have created uncertainty and unease about the apparent pivot, which may harm the company in both the short and long term. It’s unclear whether a closed ecosystem will be compliant in the EU, whether Llama will continue to be developed for those building on it, or whether Meta’s systems will retain the improved security and reliability that come with being a heavily researched open system.


FAccT Finding: AI Takeaways from ACM FAccT 2025


In June, while policy makers debated the finer points of the US AI Moratorium and the EU AI Act Code of Practice, researchers from around the world (including our Director of Machine Learning, Anastassia Kornilova) gathered at the ACM FAccT Conference to discuss the finer points of building responsible AI systems. What we learned reinforced the importance of involving affected populations in AI system development and testing, and the breadth and depth of testing necessary to build safer AI systems. At the same time, we heard that gaps persist between the richness of this research and how policy is crafted at both the organizational and government levels.

You can read more about our takeaways on our blog.

As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai

AI Responsibly, 

- Trustible Team
