Gen AI for Business Edition # 66: New Beginnings
Six months ago, we made a quiet promise to ourselves: to build something that made AI usable, not just impressive.
This week, that promise became real: we officially launched YOUnifiedAI.
Coming out of stealth feels like stepping into the sun after working in a darkroom: everything’s exposed now, but it’s also where growth begins.
New beginnings are like that.
Whether you're founding a company, learning a new skill, or starting over after something hard, there's a shared vulnerability and a shared power in that first step forward.
That’s why this newsletter is all about fresh starts.
The stories we’ve included this week show how AI, creativity, and resilience are intersecting in ways that reset what’s possible.
Follow our journey at YOUnifiedAI AI on LinkedIn, and thank you, truly, for being part of this new chapter.
Other :) New Beginnings Inside This Issue:
Elon Musk’s xAI quietly joins the Pentagon’s AI accelerator, signaling deeper military ambitions beyond chatbots.
Meta goes all-in on artificial superintelligence, backed by custom silicon and “hundreds of billions” in AI investment.
Researchers are embedding hidden prompts in academic papers to trick AI peer reviewers into giving only positive feedback.
AI-generated music is exploding, one synthetic band made $34K on Spotify in a month, triggering industry-wide alarm.
AI agents are behaving like junior employees with root access, raising major concerns for enterprise security and access controls.
56% of retailers increased GenAI investment this year, using AI for customer service, hiring, and backend operations.
One in five new Steam games now uses generative AI, prompting backlash from players concerned about creative dilution.
MIT Tech Review shows how you can now run an LLM on your own laptop, giving you privacy, control, and a front-row seat to the future.
Here is to the new beginning!
Eugina
Models
Anthropic stumbles with silent throttling but surges forward with Claude Code’s 5.5x revenue jump, enterprise dashboards, and a finance-focused launch; Mistral and Google push into deeper agentic territory, while Moonshot’s Kimi K2 undercuts rivals in coding. OpenAI embraces new beginnings with ChatGPT Agents, a biosafety bug bounty, multi-cloud expansion, and a formal EU Code of Practice commitment. The U.S. government backs generative AI with $200M contracts to OpenAI, xAI, Anthropic, and Google, despite safety concerns. Meta bets big on superintelligence with titanic data centers and billions in capital, aiming to outpace the field. Meanwhile, Amazon debuts its coding assistant Kiro, Mira Murati’s Thinking Machines gears up for an open-source launch, Apple reveals private AI architecture, and researchers call for transparency in how AI thinks, marking a wave of fresh starts across the ecosystem.
Anthropic
Anthropic tightens usage limits for Claude Code — without telling users | TechCrunch Without warning users, Anthropic has tightened usage limits on Claude Code, its $200/month Max plan now throttles heavy users with vague “usage limit reached” messages, causing project delays and confusion. Despite promising 20x the access of Pro tiers, Anthropic’s flexible and opaque quota system means even paying users can’t predict restrictions. The company confirmed the disruptions but offered no timeline or explanation, frustrating developers who rely on Claude Code’s unique capabilities and have no clear alternatives. Simultaneous API overloads and ongoing status page issues are further eroding trust in the platform’s reliability.
Claude Code revenue jumps 5.5x as Anthropic launches analytics dashboard | VentureBeat Anthropic has launched a new analytics dashboard for its Claude Code AI assistant, offering enterprise teams detailed usage metrics such as lines of code accepted, suggestion acceptance rates, spend per user, and developer activity over time, responding to growing demand for ROI visibility on AI coding tools. Since releasing its Claude 4 models in May, Claude Code’s run-rate revenue has surged 5.5x, and its user base has grown 300%, with companies like Figma, Intercom, and Rakuten onboard. Designed for AI enablement teams and enterprise-scale deployments, Claude Code differentiates itself with agentic capabilities that go beyond code completion, enabling multi-file coordination and workflow customization. The dashboard, featuring role-based access controls and metadata tracking, supports a broader trend toward autonomous agents in software development, where understanding impact at scale is becoming just as critical as the tools themselves.
Amazon-backed Anthropic rolls out Claude AI for financial services Anthropic has launched its Claude Financial Analysis Solution, a tailored version of Claude for Enterprise designed specifically for financial professionals to support investment decision-making, market analysis, and research. The offering includes Claude 4 models, Claude Code, and enterprise-grade features like expanded usage limits and implementation support. Integrated with data sources like Box, PitchBook, S&P Global, Snowflake, and Databricks, Claude now offers real-time access to financial information. Available via AWS Marketplace, with Google Cloud support coming soon, the launch signals Anthropic’s push into vertical-specific enterprise AI, positioning Claude as a high-accuracy, high-reasoning tool for the financial services sector.
Mistral
Le Chat dives deep. | Mistral AI Mistral AI has launched a suite of major upgrades to its assistant, Le Chat, including a preview of Deep Research. This structured, source-backed research mode turns complex queries into readable, reference-rich reports. Other updates include voice input via Voxtral, multilingual reasoning with Magistral, Projects for organizing chats and tools by topic, and advanced image editing through a Black Forest Labs partnership. These features aim to make Le Chat a deeper, more contextual AI companion for tasks ranging from market analysis and scientific research to personal planning and visual content creation—all now accessible via web and mobile.
My take: Mistral’s latest Le Chat updates: Deep Research, voice input, multilingual reasoning, and image editing, are impressive, but they arrive months after similar features launched from competitors like OpenAI, Anthropic, and Google. While the enhancements position Le Chat as a more comprehensive assistant, the rollout suggests Mistral is playing catch-up in an already fast-moving landscape. Without a clear differentiator beyond openness and European alignment, matching feature sets may not be enough to gain ground against more established AI players.
Voxtral | Mistral AI Mistral AI has launched Voxtral, a new open-source family of speech understanding models available in 24B and 3B parameter sizes, designed to combine state-of-the-art transcription accuracy with deep semantic understanding, multilingual fluency, and real-time deployment capabilities. Released under the Apache 2.0 license, Voxtral offers advanced features such as long-form context handling (up to 40 minutes), built-in Q&A and summarization, voice-activated function calling, and strong multilingual support across major global languages. Benchmarks show Voxtral surpasses OpenAI Whisper and ElevenLabs Scribe in both short- and long-form transcription and outperforms Whisper in every FLEURS language task. The models also retain strong text comprehension via the Mistral Small 3.1 backbone and are optimized for cost-efficiency, with Voxtral Mini Transcribe available via API for $0.001/minute. Mistral is offering private deployment, fine-tuning, and integration support for enterprises, signaling an aggressive move into production-ready voice AI for regulated sectors and domain-specific applications.
More advanced AI capabilities are coming to Search Google Search is adding advanced features for U.S.-based Google AI Pro and AI Ultra subscribers, including access to the Gemini 2.5 Pro model and a new Deep Search capability within AI Mode. Gemini 2.5 Pro enhances complex query handling, particularly in math, reasoning, and coding, by providing richer, more intelligent responses. Deep Search leverages the model to perform hundreds of background searches and compiles a fully cited, detailed report, saving users time on high-effort research tasks such as job preparation, financial analysis, or major purchases. Additionally, Google is introducing a new agentic feature that allows Search to call local businesses, like pet groomers or dry cleaners, on the user’s behalf to gather pricing and availability details. This feature, available to all U.S. Search users (with higher usage limits for subscribers), reflects Google’s continued shift toward embedding AI agents directly into everyday workflows while maintaining business opt-in control via their Business Profile settings.
OpenAI
OpenAI says it will use Google's cloud for ChatGPT OpenAI has announced it will begin using Google Cloud Platform to support ChatGPT and its API in select countries, expanding beyond its previous reliance on Microsoft Azure. The move reflects OpenAI’s growing need for computing capacity amid surging demand and comes as Microsoft shifts from being its exclusive cloud provider to offering right of first refusal on new capacity needs. OpenAI will now operate across infrastructure from Microsoft, Google, CoreWeave, and Oracle, with Google’s cloud supporting operations in the U.S., Japan, the Netherlands, Norway, and the U.K. The agreement marks a win for Google Cloud, which also hosts Anthropic, and further underscores the escalating competition among tech giants to support AI workloads at scale.
My take: OpenAI’s decision to diversify its cloud infrastructure by adding Google Cloud, alongside Microsoft, CoreWeave, and Oracle, is a technically sound move, particularly from a data privacy and regulatory compliance standpoint. By distributing workloads across multiple providers and geographies, including operating in the Netherlands, Norway, and the U.K., OpenAI can better address data localization requirements and reduce risk exposure under GDPR and other region-specific laws. This multi-cloud strategy also enhances fault tolerance, reduces dependency on a single vendor, and aligns with growing enterprise and government expectations around cloud sovereignty and AI governance, especially critical as regulatory scrutiny intensifies across Europe.
The EU Code of Practice and future of AI in Europe | OpenAI OpenAI announced its intention to sign the EU’s Code of Practice for General Purpose AI, aligning with the upcoming EU AI Act’s compliance framework—pending formal approval by the AI Board. In parallel, OpenAI is launching its "OpenAI for Countries – European Rollout" to accelerate AI infrastructure, education, and public-private partnerships across the continent. The initiative includes joining bids for the EU's AI Gigafactories, collaborating with governments on national AI startup funds, and expanding ChatGPT Edu across schools, starting with Estonia. OpenAI emphasized that European users represent one of its largest global customer bases, spanning API developers, enterprises, and educators. The Code of Practice and the EU AI Continent Action Plan aim to drive responsible AI growth, ensure data residency, and support local innovation. OpenAI is advocating for simplified regulatory pathways to support small AI startups and has committed to responsible AI deployment by referencing its Preparedness Framework, Safety Evaluation Hub, and transparency tools like System Cards and red-teaming protocols. With these efforts, OpenAI positions itself as a core enabler of Europe's AI strategy, focused on delivering productivity gains, building sovereign infrastructure, and empowering local developers and governments to shape AI use aligned with European values.
Introduction to ChatGPT agent In this newly released demo, OpenAI introduces its unified ChatGPT Agent—a powerful, action-oriented evolution of the Operator feature. The presentation, led by Sam Altman alongside Casey Chu, Isa Fulford, Yash Kumar, and Zhiqing Sun, showcases how the Agent can automate multi-step digital tasks and seamlessly integrate with external applications. The model demonstrates booking a meeting, reading calendars, sending invites, and placing lunch orders, illustrating real-time tool use beyond mere Q&A. Designed for Pro, Plus, and Team users, the Agent emphasizes advanced security, maintaining context across sessions to avoid repetitive prompts. This marks a significant shift towards agentic AI workflows, where intelligent assistants perform complex, contextual tasks with minimal user intervention, ushering in a new era of productivity automation.
Agent bio bug bounty | OpenAI OpenAI has launched a biosecurity-focused bug bounty program aimed at testing the safety of its ChatGPT agent model against universal jailbreaks that could bypass biological and chemical safeguards. The program, opening to vetted applicants on July 17, 2025, challenges researchers to craft a single jailbreak prompt that elicits successful responses to all ten bio/chem safety challenge questions from a clean chat instance. A $25,000 prize will go to the first team achieving a true universal jailbreak, while $10,000 will be awarded for solving all ten prompts through multiple jailbreaks; partial rewards may also be granted. Eligible participants must apply with a brief track record and 150-word test plan by July 29 and agree to strict non-disclosure terms. This initiative reflects OpenAI’s continued investment in red-teaming frontier models for biosafety, highlighting concerns around misuse of AI in life sciences and the importance of preemptively hardening models before broader deployment.
Grok
Defense Department to begin using Grok, Musk’s controversial AI model The U.S. Department of Defense has awarded Elon Musk’s xAI a contract worth up to $200 million to deploy its AI chatbot, Grok, in government operations under a new initiative called “Grok for Government.” This move places Grok alongside systems from OpenAI, Google, and Anthropic as part of a broader federal strategy to integrate generative AI into defense, taxation, air traffic control, and other public sector functions. The announcement comes shortly after Grok faced criticism for generating antisemitic content, including references to “MechaHitler,” prompting concerns about its readiness for sensitive applications. While xAI issued a fix, the episode has fueled debate about the tension between accelerating AI deployment and ensuring model safety. Despite these risks, the Biden administration continues to push for AI adoption across federal agencies, emphasizing innovation and competition. Grok’s inclusion signals a growing acceptance of Musk’s model in high-stakes environments, even as critics warn that rapid rollouts without strong oversight could compromise security, transparency, and public trust. See more in the Other section.
Meta
For our superintelligence effort, I'm focused on building the most elite and talent-dense team in the industry. We're also going to invest hundreds of billions of dollars into compute to build superintelligence. We have the capital from our business to do this. SemiAnalysis just reported that Meta is on track to be the first lab to bring a 1GW+ supercluster online. 💪 We're actually building several multi-GW clusters. We're calling the first one Prometheus and it's coming online in '26. We're also building Hyperion, which will be able to scale up to 5GW over several years. We're building multiple more titan clusters as well. Just one of these covers a significant part of the footprint of Manhattan. Meta Superintelligence Labs will have industry-leading levels of compute and by far the greatest compute per researcher. I'm looking forward to working with the top researchers to advance the frontier! | Mark Zuckerberg | Facebook Meta CEO Mark Zuckerberg announced that the company will invest hundreds of billions of dollars to develop superintelligent AI, primarily through the creation of vast data center clusters. The initiative, led by the newly formed Meta Superintelligence Labs, will begin with the launch of the Prometheus facility in 2026, followed by Hyperion—an enormous site expected to scale up to 5 gigawatts of computing power. These "titan clusters" will be among the largest AI infrastructure projects globally, rivaling entire city blocks in size. Backed by Meta’s $165 billion in annual revenue, the company has already increased its 2025 capital expenditure guidance to between $64 billion and $72 billion and committed $14.3 billion to partners like Scale AI. To accelerate construction, Meta is even using temporary tent structures to house servers while permanent buildings are completed. While investors welcomed the announcement, boosting Meta’s stock by 1%, analysts note that the long-term payoff from these AI infrastructure investments, especially in terms of leading model performance, remains uncertain.
Other
Research leaders urge tech industry to monitor AI's 'thoughts' | TechCrunch A coalition of top AI researchers from OpenAI, Google DeepMind, Anthropic, and others is urging the tech industry to prioritize research into monitoring "chains-of-thought" (CoTs), the step-by-step reasoning trails AI models use to solve problems. Their new position paper argues that CoT monitoring may be one of the few effective tools to interpret and align advanced AI systems, but warns that the window for transparency could close if not studied now. With signatories including Geoffrey Hinton, Ilya Sutskever, and Shane Legg, the paper signals rare industry-wide agreement on the need to preserve and enhance visibility into how AI agents think as they grow more powerful.
Anthropic, Google, OpenAI and xAI granted up to $200 million for AI work from Defense Department The U.S. Department of Defense announced up to $200 million in new contract awards for Anthropic, Google, OpenAI, and Elon Musk’s xAI to develop mission-specific AI agents aimed at enhancing national security capabilities. Managed by the Chief Digital and Artificial Intelligence Office, these contracts are part of a broader effort to accelerate AI adoption within the military. Notably, xAI introduced its “Grok for Government” product suite, now accessible via the GSA schedule, while OpenAI followed its previous $200 million DoD contract with a federal-focused launch of “OpenAI for Government.” The move underscores growing institutional interest in operationalizing generative AI, despite previous controversies, including Grok’s spread of harmful content.
Thinking Machines Lab will launch its first AI product soon with 'a significant open source component' Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, has confirmed it will launch its first AI product in the coming months, featuring a significant open-source component aimed at startups and researchers developing custom models. The announcement follows a record-setting $2 billion seed round that valued the company at $12 billion, led by Andreessen Horowitz with participation from NVIDIA, Cisco, AMD, and others. While product specifics remain under wraps, Murati said the multimodal AI system will support natural human interaction through conversation and vision, and the company will also release new scientific insights on frontier models. With two-thirds of the team composed of ex-OpenAI staff and cloud infrastructure provided by Google, Thinking Machines positions itself as a serious contender in open AI development, just as OpenAI continues to delay its own open-source offerings.
Amazon jumps into AI vibe coding with preview of Kiro Amazon Web Services has launched a preview of Kiro, its AI-driven “vibe coding” platform designed to assist developers by generating software code, system diagrams, and task lists from natural language prompts. Unlike traditional AI coding tools, Kiro emphasizes upfront design and documentation, helping users define requirements and architecture before writing code. The platform, which currently supports English and uses models from Amazon-backed Anthropic, will eventually offer both free and premium tiers, with paying users’ content excluded from model training. Kiro enters a competitive space alongside tools like Google’s Gemini-integrated Windsurf and Microsoft’s agent-powered Visual Studio Code, signaling AWS’s deeper push into AI-assisted software development. Frequently Asked Questions - Kiro
My take: We gave Mistral a hard time for entering late, so why not Amazon? The difference is intent. Mistral joined an already crowded LLM arena trying to stand out on model performance alone. Amazon, with Kiro, isn’t chasing model dominance, it’s embedding AI into the developer workflow. Specs, diagrams, and task lists aren’t flashy, but they solve real pain points. It’s not about being first, it’s about being useful. Whether that’s enough this late in the game is still a fair question.
Apple Intelligence Foundation Language Models Tech Report 2025 Apple released technical details of its multilingual, multimodal foundation models powering Apple Intelligence. The on-device model (~3B parameters) is optimized for Apple silicon using KV-cache sharing and 2-bit quantization-aware training, while the scalable server model employs a novel Parallel-Track Mixture-of-Experts (PT-MoE) transformer combining sparse MoE computation, global-local attention, and track parallelism. Both models support image understanding and tool use, are trained on responsibly sourced data, and outperform comparable open baselines in benchmarks. Apple also introduced a Swift-based developer framework enabling guided generation and fine-tuning with LoRA adapters. Privacy and responsible AI remain central, backed by Private Cloud Compute and content filtering.
A unified ontological and explainable framework for decoding AI risks from news data | Scientific Reports Researchers from the Technical University of Munich and MIT introduced a unified ontological and explainable framework to systematically analyze AI risks reported in news data. By recoding 496 publicly available AI incident reports with a new multi-scale ontology model—categorizing risks by event type, harm severity, technology characteristics, and lifecycle stage—they created a structured dataset to uncover patterns in AI-related harms such as psychological, physical, economic, privacy, and equal rights violations. Analysis revealed that 73.79% of risk events occur during the post-deployment operation phase, with top contributors including poor oversight (55.85%), outdated training data (54.64%), and opaque AI models (42.34%). Major tech firms like Meta, Google, and OpenAI accounted for nearly 50% of named risk incidents, with bias, privacy breaches, and discrimination cited as leading concerns. Explainable machine learning using XGBoost and SHAP showed that specific lifecycle stages and harm types could accurately predict privacy and equality violations. While the study confirms the importance of practitioner responsibility and lifecycle monitoring, it also highlights the limitations of current oversight and the need for globally coordinated AI governance. Despite a relatively small dataset, the framework provides a scalable foundation for regulators and developers to anticipate and mitigate AI risks with transparency and precision.
New Video-Generating AI Trained 100 Percent on Public Domain Films AI video startup Moonvalley has released Marey, a video-generating model trained entirely on public domain films, offering an ethical alternative to AI systems that rely on copyrighted material. Unlike most generative video tools—which draw criticism for scraping proprietary content—Moonvalley claims its “clean model” avoids legal and ethical gray areas, earning praise from VFX veteran Ed Ulbrich (Top Gun: Maverick, Titanic), who recently joined the company. The platform, now publicly available on a credit-based system, supports 3D-aware video synthesis. Its approach mirrors recent efforts in the language model space, where researchers trained competitive LLMs using only openly licensed data. While Moonvalley’s public domain claim still awaits independent verification, it challenges the narrative that AI development must rely on copyright violations, suggesting that responsible data sourcing is both viable and artistically promising.
Alibaba-backed Moonshot releases new Kimi AI model that beats ChatGPT, Claude in coding — and it costs less Alibaba-backed Moonshot AI has released Kimi K2, a new open-source large language model (LLM) optimized for coding, positioning it as a cheaper and potentially more powerful alternative to OpenAI’s GPT-4.1 and Anthropic’s Claude Opus 4. According to industry benchmarks, Kimi K2 outperformed both rivals in coding tasks while charging just $0.15 per million input tokens and $2.50 per million output tokens—significantly undercutting Claude’s $15 and $75 pricing and GPT-4.1’s $2 and $8, respectively. Available for free via app and browser, the model’s open-access terms only require attribution in high-traffic or high-revenue products. Initial reviews have praised its production readiness despite some hallucination issues, with analysts noting the model’s strengths in code generation but cautioning about limited tool integration. The release coincides with OpenAI's indefinite delay of its own open-source model due to safety concerns. Moonshot’s parallel Kimi-Researcher model also drew attention, matching Google’s Gemini Deep Research in performance and earning praise for agentic AI capabilities, highlighting China’s growing role in the global AI arms race. Download here: Kimi K2: Open Agentic Intelligence
News
Amazon, Zoho, and the broader AI community marked a wave of new beginnings this week, as Amazon launched AgentCore to help enterprises deploy secure, production-ready AI agents at scale, while Zoho unveiled its homegrown Zia LLMs and 40+ domain-specific agents to power its entire app suite with privacy-first intelligence. On AI Appreciation Day, industry leaders reflected on the shift from experimentation to real-world deployment, with agentic AI taking center stage and a renewed focus on governance, trust, and business value.
Enabling customers to deliver production-ready AI agents at scale | Artificial Intelligence Amazon introduced a sweeping strategy to bring agentic AI into production at enterprise scale via its AgentCore suite, enabling developers to build secure, context-aware, and highly integrated AI agents. AgentCore includes secure runtime environments, observability tools, memory systems, identity management, and API transformation capabilities, supporting frameworks like CrewAI and models inside or outside Bedrock. Amazon also launched S3 Vectors for 90% cheaper vector storage and real-time performance, and expanded Nova model customization (with support for PEFT, SFT, DPO, PPO, CPT) to fine-tune agents for domain-specific actions. Pre-built agent solutions are now available in AWS Marketplace, including Nova Act for browser automation and Kiro for AI-assisted software development, as AWS aims to streamline agent deployment across industries from finance to healthcare.
Zoho makes big AI move with launch of Zia LLM, pack of AI agents | Constellation Research Inc. Zoho has launched a proprietary suite of AI tools centered on Zia LLM, a multilingual large language model family trained entirely in India on Nvidia infrastructure. Zia includes models with 1.3B, 2.6B, and 7B parameters optimized for structured data extraction, summarization, RAG, and code generation. Supporting its privacy-first, cost-sensitive B2B approach, Zoho avoids consumer data training and retains all customer data on its servers. Complementing Zia LLM are 40 prebuilt Zia Agents, a prompt-based no-code builder (Zia Agent Studio), and a model context protocol (MCP) server to enable agent interoperability across Zoho’s 55-app suite. The agentic strategy is designed to support key business roles such as sales, customer support, finance, and recruiting, with plans for an Agent2Agent protocol for cross-platform collaboration. By building its own LLM and agent infrastructure in-house, Zoho seeks long-term platform control, cost efficiency, and deeper model optimization, marking a strategic shift to self-reliance in enterprise AI. General availability for all products is expected by the end of 2025.
AI Appreciation Day: Industry Weighs in on Celebration AI Appreciation Day, founded in 2021 as a marketing effort for a German film, has since evolved into an informal industry celebration reflecting on AI's impact. Despite its origins, the 2025 observance comes at a critical juncture, with leaders across sectors recognizing AI’s transformative role. Industry voices emphasize a shift from experimental to production use cases, the rise of agentic and generative AI, and the urgent need for responsible deployment. Executives highlight AI's role in augmenting human talent, reshaping cybersecurity, enabling predictive analytics, and exposing gaps in trust and infrastructure. Many also stress the importance of governance, employee education, and connecting AI to real business outcomes—underscoring that appreciation must go hand-in-hand with accountability.
Regulatory
This week’s AI headlines read like the dawn of a new geopolitical season. The OECD’s revised AI principles lay a common foundation, like resetting the compass before a long journey, guiding 47 nations toward interoperable, human-centered AI governance. Nvidia’s green light to resume chip sales to China marks a tentative spring thaw in tech trade tensions, reopening pathways once frozen by regulation. And in a show of force, President Trump unveiled a $90B investment wave at the Energy and Innovation Summit, planting the seeds of American AI and infrastructure dominance. From global standards to silicon diplomacy, these moves don’t just signal change: they set the stage for a new era of strategic alignment, innovation, and competition.
OECD AI Principles overview The OECD’s updated AI Principles, revised in May 2024, continue to serve as a global framework for developing trustworthy and human-centric AI systems, with widespread adoption by 47 countries and key institutions like the EU, UN, and U.S. government. These principles emphasize values-based priorities, such as inclusive growth, human rights, transparency, robustness, and accountability, while offering practical policy guidance for R&D investment, international cooperation, workforce readiness, and interoperable AI governance. Crucially, the OECD’s standardized AI system definition and lifecycle model now underpin major legislative and regulatory efforts worldwide, supporting alignment across jurisdictions and strengthening global AI risk management frameworks.
Nvidia to resume sales of highly desired AI computer chips to China Nvidia has secured U.S. government approval to resume sales of its H20 AI chips to China, reversing an earlier Trump administration export restriction. CEO Jensen Huang confirmed the decision during a trip to Beijing, highlighting China’s critical role in AI innovation and Nvidia’s global business. The H20 chip—less powerful than Nvidia’s top models—was designed to comply with U.S. export rules, aiming to balance national security with economic competition. The approval follows lobbying efforts and is reportedly linked to a broader trade deal involving rare earth exports. AMD is also set to resume chip exports. Despite bipartisan concerns over national security, the move marks a thaw in tech trade tensions between the U.S. and China.
President Trump Solidifies U.S. Position as Leader in AI – The White House At the inaugural Energy and Innovation Summit held at Carnegie Mellon University on July 15, 2025, President Donald Trump announced over $90 billion in AI and energy investments aimed at reinforcing the U.S. as the global leader in artificial intelligence. The announcement featured major commitments, including Google’s $25 billion for AI-related data centers and infrastructure, Blackstone’s $25 billion in data centers and natural gas facilities, and CoreWeave’s $6 billion for data center expansion. These investments, unveiled alongside industry executives and lawmakers, signal a strategic push to accelerate innovation, bolster national infrastructure, and create jobs, underscoring the Trump Administration’s focus on American technological and energy dominance.
Regional Updates
From Bristol to Istanbul, this week marked a global turning of the AI calendar, as countries flipped the switch on new strategies, systems, and sovereign ambitions. The UK powered up its £225M Isambard-AI supercomputer—like striking a match in the dark—aiming to transform public health, farming, and safety. Perplexity planted its flag in India, betting on partnerships and scale over profit, while New Zealand took a more laissez-faire route, ushering in economic optimism but leaving oversight at the door. In Japan, Rakuten’s GENIAC-backed LLM project signals a deeper investment in culturally attuned, memory-rich models. And in Turkey, the launch of T3AI felt like a national coming-of-age moment, as local volunteers and institutions united to create the country’s first homegrown AI. Each of these moves signals more than progress: they’re the first steps into a new geopolitical season defined not just by competition, but by identity, purpose, and the will to shape AI on one’s own terms.
UK switches on AI supercomputer that will help spot sick cows and skin cancer | Artificial intelligence (AI) | The Guardian The UK has activated its £225 million Isambard-AI supercomputer in Bristol, a public facility equipped with 5,400 Nvidia superchips capable of running 100,000 times faster than a standard laptop. Designed to advance AI-powered breakthroughs across healthcare, agriculture, and public safety, the system will support models detecting skin cancer bias, predicting livestock illness, and forecasting human movement for safety applications. Despite being the UK’s largest publicly known compute resource, Isambard ranks 11th globally, underscoring the international race for AI infrastructure. Backed by nuclear-powered electricity at a cost of nearly £1 million per month, the initiative reflects Britain’s £2 billion push for AI sovereignty while raising ethical questions around surveillance, data access, and algorithmic decision-making in sensitive areas like law enforcement and public health.
Perplexity sees India as a shortcut in its race against OpenAI | TechCrunch Perplexity is aggressively expanding into India as a strategic counter to OpenAI’s U.S. dominance, partnering with Bharti Airtel to offer 360 million subscribers a free year of Perplexity Pro, a $200 value, while locking out rival telcos (similar to AT&T iPhone move in 2007). The move is part of a broader global telco strategy and follows integration with Paytm, India’s top fintech app. Perplexity saw 600% year-over-year growth in Indian downloads (2.8M in Q2) and a 640% spike in monthly active users, though it still trails ChatGPT in absolute numbers. Despite impressive user growth, monetization remains a challenge: Perplexity generated no notable Indian revenue in Q2, compared to ChatGPT’s $9M. Still, with limited local AI search competition and a growing tech-savvy population, Perplexity views India as a high-leverage bet to scale users, brand equity, and eventually, revenue.
NZ's new AI strategy is long on 'economic opportunity' but short on managing ethical and social risk | RNZ News New Zealand’s newly launched National AI Strategy focuses heavily on economic opportunity, signaling a pro-business, light-touch regulatory approach, but offers limited measures to address ethical, social, and indigenous risks. While it encourages AI adoption and includes guidance on bias, accuracy, and oversight, the recommendations are entirely voluntary and lack enforceable safeguards. The government has not committed new funding for AI capacity building, despite under-resourced universities and ineligible funding for humanities-based AI ethics research. This positions New Zealand among the most relaxed global regulators, alongside Japan and Singapore, in contrast to the EU’s risk-tiered AI Act. Critics warn that without oversight, AI systems, especially those imported or fine-tuned from global models, could exacerbate bias, harm Māori communities, and undermine public trust, already low at third-to-last among 47 nations.
Rakuten gets involved in Japanese gen AI initiative Rakuten has been selected for Phase 3 of Japan’s government-backed Generative AI Accelerator Challenge (GENIAC), supported by METI and NEDO, to develop an open-weight Japanese large language model (LLM) using a Mixture of Experts (MoE) architecture. The model will focus on expanding memory capacity to handle longer prompts and maintain context across interactions, addressing key limitations in existing generative models. The project aims to support personalized AI tools across Rakuten’s ecosystem, with R&D starting in August 2025. Rakuten’s initiative emphasizes long-term memory, personalization, and language optimization. This builds on its previous releases like Rakuten AI 2.0 and its business-focused generative AI solution, Rakuten AI for Business, which prioritizes Japanese language compliance, secure environments, and user-controlled data privacy.
Beta version of 1st Turkish large language model T3AI launched | Daily Sabah Turkey has launched its first large language model, T3AI, in beta via the Teknofest social platform, marking a milestone in the country’s AI development efforts. Created collaboratively by the Turkish Technology Team (T3) Foundation and defense firm Baykar, the open-source model was built with support from 1,792 volunteers across 67 provinces. T3AI emphasizes ethical AI principles and supports multilingual interaction, though it is optimized for Turkish and Turkic languages. Project partners include the Turkish Ministry of National Education, Microsoft, TRT, SETA, the Turkish Academy of Sciences, and Anadolu Agency. The initiative aims to boost AI awareness, integrate AI into national digital services, and develop a skilled local AI workforce. During the beta phase, users could tag T3AI in posts for interaction, with feedback used to refine the model. The beta concluded on July 13, with a public message promising improved future iterations.
Investments
Lovable becomes a unicorn with $200M Series A just 8 months after launch | TechCrunch Swedish startup Lovable has raised a $200 million Series A at a $1.8 billion valuation just eight months after launch, making it Europe’s latest AI unicorn. The “vibe coding” platform, which allows users to build apps and websites via natural language, has surpassed 2.3 million active users and reached $75 million in ARR from 180,000 paying subscribers. Its user base skews heavily toward non-technical creators using the tool for rapid prototyping, though the company aims to support production-grade applications as well. Backed by Accel and a roster of high-profile angels, Lovable has already facilitated 10 million projects and counts Klarna and HubSpot as enterprise customers, all with a lean team of just 45 employees.
Cognition, maker of the AI coding agent Devin, acquires Windsurf | TechCrunch Cognition, creator of the AI coding agent Devin, has acquired Windsurf’s IP, product, and remaining staff just days after Google hired away its leadership team in a $2.4B reverse-acquihire. Windsurf, which had grown to $82M ARR with over 350 enterprise customers, was reportedly valued as high as $100M ARR earlier this year. The acquisition gives Cognition not only a robust AI-powered IDE, but also renews Windsurf’s access to Anthropic’s Claude models, which had been cut off amid OpenAI acquisition rumors. Unlike Google’s deal, which excluded many employees from financial benefit, Cognition waived vesting cliffs and ensured 100% of Windsurf staff shared in the transaction. This consolidation positions Cognition to compete directly with AI coding giants OpenAI, Anthropic, and Cursor by offering both IDE and autonomous agent capabilities, a full-stack approach in the fast-evolving AI developer tooling market.
AI bubble is worse than the dot-com crash that erased trillions, economist warns — overvaluations could lead to catastrophic consequences Torsten Sløk, chief economist at Apollo Global Management, warns that the current AI market is even more overvalued than the dot-com era, suggesting a potentially larger financial crash ahead. He points to skyrocketing valuations of companies like Microsoft, Google, Meta, Amazon, and OpenAI as being disconnected from their actual earnings potential. Sløk argues that AI-driven gains in the stock market have been heavily concentrated in a handful of firms, mirroring dot-com dynamics but at a much larger scale—with higher capital inflows, such as Nvidia’s $500B “AI Factory” push, Amazon’s potential $8B investment in Anthropic, and Meta’s rumored $100M signing bonuses. While he doesn't predict a timeline, Sløk suggests that the eventual downturn could trigger mass layoffs, rapid consolidation, and a reset in expectations. Like the internet post-2000 crash, AI would likely survive as a foundational technology, but inflated investor hype may collapse, especially for speculative startups without sustainable revenue.
Research
Accenture research finds Gen AI becoming top source for travel discovery According to Accenture’s 2025 Consumer Pulse survey, generative AI has officially overtaken social media and OTAs as the top discovery tool for frequent AI users in travel, with 80% of surveyed travelers already using Gen AI across airlines, hotels, and platforms. The report highlights a major shift: 93% of these users trust Gen AI to support purchasing decisions, 78% are open to fully autonomous AI agents for planning trips, and 57% want assistants that span brands and services. The impact is emotional as well as practical: travelers are 1.3x more engaged and 1.7x more willing to pay higher prices for brands offering personalized, emotionally resonant experiences.
5 ways generative AI projects fail | CIO Dive More than half of enterprise generative AI projects fail due to a combination of misaligned use cases, overestimating the technology’s readiness, poor change management, underinvestment in employee training, and a lack of responsible AI practices, according to new Gartner research. CIOs are often pressured to act quickly but fall short when they don’t link AI efforts to business value, don’t equip their teams with necessary literacy programs, or neglect clear frameworks for implementation and risk management. Gartner recommends that companies focus on technically feasible, high-impact use cases, validate vendor tools rigorously, prioritize user feedback, and invest in transparency and skills development. Failure to align AI with process change and responsible governance, such as bias mitigation and model lifecycle management, can lead to stalled adoption or operational risk.
AI firms ‘unprepared’ for dangers of building human-level systems, report warns | Artificial intelligence (AI) | The Guardian A new report from the Future of Life Institute (FLI) warns that leading AI companies remain “fundamentally unprepared” for the risks associated with building artificial general intelligence (AGI), with none scoring higher than a D in existential safety planning. The report evaluated seven major developers—Google DeepMind, OpenAI, Anthropic, Meta, xAI, and Chinese firms Zhipu AI and DeepSeek, across areas such as current harms and long-term safety. Anthropic received the highest overall grade with a C+, while OpenAI and DeepMind followed with C and C-. The report argues that despite public claims of near-term AGI development, no firm has presented credible, actionable plans to prevent loss of control or manage catastrophic risks. A second nonprofit, SaferAI, echoed these findings, calling current risk management practices "unacceptable." MIT professor and FLI co-founder Max Tegmark compared the situation to launching a nuclear power plant with no meltdown plan, emphasizing the lack of preparedness amid accelerating AI capabilities.
Concerns
Even as AI opens doors to new beginnings, this week reminded us that unchecked growth comes with shadows. Researchers are quietly embedding prompts to trick AI peer reviewers into glowing praise, exposing a fragile fault line in the integrity of scientific publishing. In the music world, synthetic bands are going viral, fast-tracking fame and revenue while threatening to drown out human artists in algorithmic noise. Courtrooms offer little clarity, as rulings tilt toward AI platforms but leave content creators navigating a legal fog. In the workplace, agents are acting like employees with root access, unseen but everywhere, while “shadow AI” creeps in through unmanaged tools and personal devices, turning innovation into an unmanaged risk. These stories are a reminder: every sunrise in AI brings longer shadows unless governance, transparency, and security rise with it.
Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews | Artificial intelligence (AI) | The Guardian A growing number of computer science preprints on platforms like arXiv have been found to contain hidden prompts, often in white text, targeted at AI-based peer reviewers, instructing them to give only positive feedback or ignore negatives. Investigations by Nikkei and Nature uncovered at least 18 such instances across 14 institutions in eight countries, including the U.S. and Japan. The tactic appears to stem from a viral post by an Nvidia researcher suggesting LLM prompts as a safeguard against harsh automated reviews. While human reviewers would overlook these invisible prompts, the rise of AI-assisted peer review introduces a new vector for manipulation, raising concerns about the integrity of the scientific review process as reliance on LLMs increases.
My take: I use AI to peer review my newsletters and articles, and I welcome critical feedback, because that’s how you improve. Inserting hidden prompts to force only positive reviews undermines the integrity of both the writing and the review process. Large language models are valuable for catching inconsistencies, surfacing counterarguments, and highlighting unclear logic. If we start training them to ignore flaws, we lose one of their most important functions: objective critique. In research and publishing, especially, the goal isn’t flattery for me: it’s accuracy, clarity, and intellectual rigor.
What Gen AI court rulings mean for content owners and creators - Ad Age Two recent Gen AI court decisions have favored AI platforms, slightly easing legal anxieties around their use—but legal expert Brian Heidelberger argues that significant human involvement in Gen AI content creation is still essential to address lingering copyright risks. While the rulings provide some clarity, they don’t eliminate the ambiguity around ownership, input sourcing, or how AI-generated outputs may affect brand authenticity. Heidelberger advises creators and brand owners to remain cautious, ensuring legal oversight of both inputs and outputs to avoid potential infringement and preserve credibility.
AI-generated music is going viral. Should the music industry worry? AI-generated bands like The Velvet Sundown are going viral with over 1 million monthly Spotify listeners and $34K in estimated streaming revenue in just 30 days, raising concerns across the music industry over copyright, authenticity, and artist compensation. Platforms like Suno and Udio now enable anyone to create high-quality, full-length AI songs with minimal inputs, accelerating the rise of synthetic artists like Aventhis. Major record labels have filed lawsuits against these platforms, while companies like Deezer report that 18% of new uploads are fully AI-generated. Industry leaders warn that this rapid proliferation could overwhelm streaming platforms, complicate copyright protections, and marginalize human creators, despite efforts to integrate AI responsibly into education and production.
AI Agents Act Like Employees With Root Access—Here's How to Regain Control A report from The Hacker News warns that generative AI systems—especially enterprise AI agents—are increasingly behaving like junior employees with root access, posing major identity and access management risks. As companies embed AI in sensitive systems like finance, code repositories, and email, the traditional security model of one-time login or basic MFA is proving inadequate. Organizations often overlook the fact that every LLM interface or integration becomes a new identity edge, opening doors to credential-based attacks, misconfigured permissions, and insecure devices. Whether enterprises build custom agents or buy SaaS AI tools, attackers can exploit over-permissioned bots and personal device access gaps. To mitigate these risks, experts recommend phishing-resistant MFA, continuous device trust enforcement via EDR/MDM/ZTNA, and real-time RBAC. Beyond Identity, the article’s contributor, promotes its IAM solution as a way to enforce device-aware, passwordless security, ensuring AI agents only act within authorized bounds. The piece underscores that securing AI is no longer about just the model; it’s about securing the people and devices interacting with it.
Shadow AI: How to Mitigate the Hidden Risks of Generative AI at Work A July 2025 report sponsored by Zscaler outlines the growing risks of “Shadow AI,” where employees use generative AI tools like ChatGPT on personal devices or unsanctioned platforms, inadvertently exposing sensitive company data. A 2023 incident involving a multinational electronics firm entering proprietary source code into ChatGPT highlights how public AI models can absorb confidential inputs into future training data. Many organizations have tried to block GenAI access entirely, but this strategy often backfires by reducing visibility and driving AI use underground. Zscaler recommends a strategic, multi-layered approach: first, gain visibility into AI usage patterns across the organization; second, implement context-aware governance policies instead of blanket bans; third, apply real-time data loss prevention (DLP) systems to stop sensitive uploads; and fourth, educate employees on AI risks and responsible use. The report stresses that balancing innovation and security is essential; organizations that enable safe, productive AI adoption will maintain both competitive advantage and resilience in an increasingly AI-integrated workplace.
Case Studies
Across industries, AI is ushering in fresh starts, from Florida farms to French tarmacs. Farmers in hurricane-prone regions may soon swap guesswork for real-time damage insights via a USDA-funded AI chat tool. Retailers are embracing AI agents, accelerating hiring and shopping experiences, while fashion designers blend ChatGPT and DALL·E to generate next-season looks with machine-assisted flair. Marketers are moving past fear to reimagine strategy with AI at Cannes, and Netflix is redefining production pipelines with faster, AI-enhanced VFX. Video ads are undergoing their own reboot as 86% of advertisers turn to GenAI to scale creativity. In gaming, AI-generated titles now make up 1 in 5 new Steam releases, sparking backlash but signaling an irreversible shift. Meanwhile, private equity sees a future in AI-native B2B distribution, betting on algorithms over legacy relationships. Air France-KLM’s new GenAI factory symbolizes an operational rebirth, while in healthcare, patient use of chatbots is prompting calls for fresh ethical safeguards.
Farming
Artificial intelligence may soon give Florida farmers access to crop damage data during a hurricane | WUSF University of Florida researchers are piloting an AI-powered chat tool to help farmers assess crop damage during hurricanes, offering near-real-time insights using satellite imagery and natural language queries like “What areas are flooded in my field?” The prototype—funded by a $297K USDA grant—will be tested at farms in Collier and Hardee counties and could roll out statewide by next hurricane season. Unlike costly drone surveys, this chat-based tool is accessible to non-experts and aims to speed up disaster response and support agriculture extension agents with trend analysis and localized damage assessments.
Retail
Retail accelerates investments in generative AI According to Capgemini, 56% of retail organizations have increased their generative AI investments since last year, making retail one of the top five industries adopting AI agents and multiagent systems, with 18% already deploying them. The report highlights how generative and agentic AI are being used for complementary tasks—front-end customer service vs. back-end operations. Retailers like Walmart and Amazon are integrating assistants like Sparky and Alexa+ to automate shopping experiences, while H&M’s AI HR agent reduced time-to-hire by 43% and attrition by 25%. Despite the momentum, consumer trust in AI remains mixed, with concerns about data privacy and over-automation flagged in a recent KPMG study.
Fashion
Generative AI models streamline fashion design with new text and image creation Researchers from Pusan National University demonstrated how generative AI can assist fashion designers by combining ChatGPT’s trend analysis with DALL·E 3’s image generation to visualize future menswear collections. The study used ChatGPT-3.5 and 4 to analyze historical data and predict Fall/Winter 2024 fashion trends, which were translated into structured design codes—such as silhouettes, materials, and embellishments. These were used to create 35 detailed prompts for DALL·E 3, resulting in 105 generated images. The AI accurately implemented the prompts 67.6% of the time, especially when descriptive adjectives were used. While some outputs closely resembled real runway styles, limitations included a bias toward ready-to-wear fashion and difficulty representing concepts like gender fluidity. The study emphasized the importance of prompt engineering by fashion experts and concluded that with further refinement, generative AI could enhance design efficiency and democratize access to fashion trend visualization.
Marketing
How CMOs can help their teams adopt AI to drive creativity, strategy and measurable business outcomes - Ad Age At Cannes Lions, a private roundtable hosted by Ad Age and Infosys Aster revealed that while 73% of global CMOs report adopting AI across functions, only 52% are realizing tangible business value. Marketing leaders discussed AI's role in accelerating tasks, like document editing, summarizing data, and even conflict de-escalation, while emphasizing the continued need for human critical thinking, creativity, and empathy. CMOs are rethinking how to foster team-wide AI adoption: providing function-specific training, addressing job security fears, and using AI to empower rather than replace talent. Senior executives noted that openness to AI often correlates with employee tenure and mindset, and that marketers who embrace AI now will gain a decisive edge, especially as the industry looks toward an era of artificial superintelligence (ASI). Concerns remain around IP and creator rights as AI-generated content scales.
Entertainment
Netflix’s Ted Sarandos Says AI Will Make Movies and TV “Better, Not Just Cheaper” Netflix co-CEO Ted Sarandos says AI is enhancing, not replacing, creativity in film and TV, pointing to real-world examples like El Eternauta, an Argentine sci-fi series that used AI tools for VFX. He emphasized that generative AI is speeding up production tasks such as pre-visualization and shot planning, and enabling high-quality effects with smaller budgets and global teams. One VFX scene showing a collapsing building was completed 10x faster using AI-powered tools. Sarandos stressed that creators, audiences, and Netflix were all thrilled with the results, signaling AI’s growing role as a force multiplier in storytelling, not just a cost-cutting measure.
Nearly 90% of Advertisers will Use Gen AI to Build Video Ads, According to IAB's 2025 Video Ad Spend & Strategy Full Report According to IAB’s 2025 Digital Video Ad Spend & Strategy Report, 86% of advertisers are now using or planning to use generative AI to create video ads, signaling a major shift in how campaigns are produced and scaled. GenAI is especially empowering small and mid-sized brands to generate high-quality, customized video ads at low cost and without large creative teams, accelerating adoption faster than among larger advertisers. Buyers expect GenAI-generated ads to account for 40% of total ad volume by 2026. Use cases include audience-specific versions (42%), visual style alterations (38%), and contextual adaptations (36%). Meanwhile, expectations for connected TV (CTV) are also evolving, with advertisers projecting that 47% of CTV inventory will be biddable this year, up from 34% in 2024, and 74% of buyers building in-house teams to manage self-serve CTV buying. The report highlights a broader trend toward automation, outcome-based KPIs like store visits and sales, and the democratization of video advertising driven by GenAI innovation.
Gaming
1 in 5 new video games on Steam now uses generative AI, report says A new report by Totally Human Media reveals that nearly 1 in 5 video games released on Steam in 2025 disclosed using generative AI, representing a 700% increase since 2024. Of the 7,818 games on Steam that now use AI, many incorporate it in limited ways—such as AI-generated paintings in the hit title My Summer Car, which has sold 2.5 million units. Steam began requiring AI usage disclosures in January 2024, enabling data collection at scale. While AI adoption is rising, community backlash is strong: Reddit users frequently add AI-developed games to their ignore lists, expressing concerns over artistic integrity and the perceived devaluation of human creativity. Some users allow for limited use of AI in non-core assets, like UI elements, but overall sentiment remains wary. The surge highlights broader industry pressure to integrate AI, even amid growing pushback from players.
B2B Distribution
AI For Private Equity — The Future Of B2B Distribution AI is transforming B2B distribution from a legacy, relationship-based model into a scalable, intelligence-driven industry, and private equity is taking notice. Traditional challenges, like highly specialized product specs, deep domain expertise, and fragmented customer bases, are precisely where transformer models excel, making AI a strategic advantage, not just an efficiency tool. Tools like fine-tuned LLMs can now replicate expert judgment (e.g., material tolerances or compliance requirements) and automate sales interactions once managed exclusively by veteran reps. Companies like Endeveor, Blue Ridge, and Kanava are already injecting AI into operations from order intake to CRM. The real opportunity lies in shifting B2B sales from people-driven to algorithmically preferred systems, creating a new investment thesis for PE and venture capital in AI-native distribution platforms.
Airline Industry
Air France-KLM builds cloud-based gen AI 'factory' to drive business transformation Air France-KLM has launched a cloud-based generative AI factory in partnership with Accenture and Google Cloud to accelerate digital transformation across its operations. Hosted on Google Cloud, the factory enables the airline group to efficiently test, manage, and deploy generative AI and machine learning models tailored to internal use cases, already delivering over 35% faster development cycles. Applications include engineering diagnostics, customer service automation, and ground operations optimization, with tools like private AI assistants and retrieval-augmented generation (RAG) models enhancing internal workflows. The initiative emphasizes upskilling employees through dedicated “GenAI Days,” fostering in-house AI solution development with measurable business impact. The collaboration builds on earlier efforts to modernize Air France-KLM’s digital core and reflects a broader strategic shift to make AI integral to operational agility, customer experience, and long-term resilience.
Healthcare
https://academic.oup.com/rheumap/advance-article/doi/10.1093/rap/rkaf083/8198077?login=false A cross-sectional study published in Rheumatology Advances in Practice surveyed 270 individuals with rheumatic diseases to understand their use and perception of large language model (LLM) chatbots like ChatGPT. Of the respondents, 44% reported using LLM chatbots in general, and 15% used them specifically for health-related purposes, primarily for general information rather than personalized medical guidance. The study found that younger age and liberal political views were statistically associated with higher adoption, while factors such as gender, education, income, ethnicity, and language spoken had no significant correlation. Conducted across online platforms and rheumatology clinics in Edmonton, the survey used logistic regression to evaluate sociodemographic predictors. The findings underscore the growing role of AI in patient behavior and highlight the need for urgent regulatory and ethical frameworks to ensure accuracy, safety, and equity in health-related chatbot usage, especially within vulnerable populations like those managing chronic conditions.
Learning Center
AI is democratizing itself, opening new doors for experimentation and autonomy. Running LLMs locally, on laptops or even smartphones, signals a fresh era of AI independence, empowering users to take control of their tools without relying on the cloud. Prompt Learning introduces a new way to fine-tune AI with natural language feedback, allowing prompts to self-correct and evolve with just a few words, an elegant restart for model training. And for small businesses, accessible GenAI tools like Claude, ElevenLabs, and Replit are creating a launchpad for innovation, proving that even the smallest teams can harness AI to begin anew.
Learning
How to run an LLM on your laptop | MIT Technology Review Running a large language model (LLM) on your own device is now feasible and increasingly popular, thanks to advances in model optimization and hardware efficiency. MIT Technology Review’s guide highlights how open-weight LLMs can be downloaded and operated locally on laptops or even smartphones, offering greater privacy, control, and resilience compared to cloud-based systems. Tools like Ollama and LM Studio make it easier for non-technical users to access models from Hugging Face, with performance typically scaling based on RAM (roughly 1 GB per billion parameters). Privacy advocates and ethicists argue local models reduce data exposure and resist the centralized control of major AI providers like OpenAI and Google, which train their systems on user interactions. While smaller models are less powerful and more prone to hallucination, they offer transparency, consistency, and an opportunity to better understand AI behavior, valuable trade-offs for users seeking autonomy and experimentation.
Prompting
Exploring Prompt Learning: Using English Feedback to Optimize LLM Systems | Towards Data Science Prompt Learning is emerging as a new approach for optimizing LLM behavior using natural language feedback instead of numeric scores or gradient-based updates. Inspired by NVIDIA’s Voyager and building on reinforcement learning concepts, Prompt Learning treats English-language critiques from evaluations or human annotators as direct inputs to revise system prompts. This method allows dynamic, ongoing refinement of AI behavior, editing specific instruction sections, managing conflicting rules, and adapting to evolving requirements, all within the prompt context. Unlike traditional prompt optimization, which relies on offline batch scoring and can’t incorporate nuanced feedback, Prompt Learning enables real-time “self-healing” of prompts with as little as a single example, significantly reducing data needs and compute costs. In benchmarks, it achieved high rule adherence (up to 100%) with only a few iterative loops, showing promise for maintaining production-grade LLM agents with evolving objectives.
Tools and Resources
Top Generative AI Tools for Small Business | CO- by US Chamber of Commerce A recent piece from CO— by the U.S. Chamber of Commerce outlines five generative AI tools tailored for small businesses, including CapCut for video editing, Claude for content and research, ElevenLabs for voice synthesis, Humantic AI for buyer profiling, and Replit for app development. The article emphasizes that 38% of SMBs already use AI and highlights the importance of choosing tools that align with business needs, integrate smoothly with workflows, and deliver measurable ROI. Experts advise starting with a specific problem, piloting tools before full deployment, and ensuring human oversight remains central for accuracy and effectiveness.
If you enjoyed this newsletter, please comment and share. If you would like to discuss a partnership, or invite me to speak at your company or event, please DM me.
AI Chat Automations
1dLove the theme of new beginnings; as agents start acting like employees, give them narrow permissions and audit trails so teams can move from repetitive ops to creative, high value work. Micro experiment this week: route new leads to an AI agent that sends a WhatsApp reply within 60 seconds and a one click booking flow, measure booking conversion and show rate — aim for ~15% more bookings and ~25% fewer no shows while turning on consent safe GA4 attribution to track real CAC 🙂
AI UX Strategist I Human Experience Architect | Product & Design Leader | Startup Advisor | Founder, UX4AI
1wCongratulations on your launch 🚀
President & Group Executive Committee Member | Board Member | Purpose-Driven Global Business Executive ➣ Inspiring diverse, global teams to deliver innovation, transformation, profitability & growth | MIT EMBA
1wCongratulations Eugina!!! It's been incredible watching your journey and seeing your tenacity and drive!!!
Cheers to new beginnings and using AI to make the most of them 🥂
Empowering Women in AI Globally | Co-Founder & Co-CEO, WOMEN x AI | Board Member & Investor | Fractional CMO for B2C Software Startups
1wNew beginning have so much power! I am thrilled to see YOUnifiedAI launched and can’t wait to see what’s next!