Trump's $500B AI Announcement - what does it mean for AI in the US?
On Tuesday, President Trump announced a new joint venture among OpenAI, Oracle, and SoftBank that would invest up to $500B in infrastructure tied to artificial intelligence (AI). The new entity, Stargate, will start building out data centers and (equally importantly) the electricity generation needed to power them, pushing the AI industry forward. Oracle's Larry Ellison mentioned that 10 data centers are already under construction, with initial use cases focused on digital health records and improved disease treatment.
There has been significant investment in AI already - roughly $50B in venture capital (VC) funding in 2023, with a comparable amount expected for 2024 once all the final transactions are counted up and summarized. That's roughly $100B already invested in AI by VCs. On top of that, now add the $500B in commitments for Stargate. Elon Musk (xAI), Google (Gemini), Microsoft (Copilot, and an investor in OpenAI), and Meta are all putting large chip stacks of capital on AI, conscious that its impact will be broad, far reaching, and will take years to fully play out.
Products are being upgraded at a frantic pace. OpenAI is trying to stay ahead in both revenue ($200M in 2022 to $3.7B in 2024) and the mind share consumers and investors have around generative AI (solutions that replace Google's PageRank-sorted lists of search results with a more contextual answer that delivers many of the details in a single integrated result). Microsoft is upgrading Copilot while hedging with a 49% profit share in OpenAI (in exchange for the billions it has invested). So there are products delivering value to enterprises and consumers with value propositions that make sense today, and others that are emerging or will emerge soon as the products keep evolving. Many of these early AI companies and products are generating revenue (or in some cases strategic value), and consumer awareness is increasing rapidly as more people figure out what AI can do for them and their organizations. Usage is going up, value is going up, and that value is clearly being delivered across multiple products and millions of users.
So why the need for $500 billion in investment commitments, and what is that going to do? Well, remember that in historical terms we are still very early in the AI space. It was just over two years ago that OpenAI rolled out the version of ChatGPT that lit the world on fire, and just like that, decades of slow ... slow ... slow ... no, seriously, slow ... progress became, all at once, one of the biggest overnight success stories the world has seen. Suddenly Sam Altman was Bill Gates, or Steve Jobs, or maybe Mark Zuckerberg. Who knows, but suddenly Sam was a big deal - way bigger than when he was running Y Combinator, which suddenly seemed like yesterday's news. Sam had the world by the tail, but the world is now very much aware of Power Law (where tech segments tend to become winner-take-all games once the space is defined and the rules of engagement identified, and roughly 70% of the profits for the segment go to the dominant player - see Microsoft, Cisco, Oracle, Facebook, and many others).
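For readers who want to see what that kind of concentration looks like in numbers, here is a tiny, purely illustrative Python sketch. The $100B profit pool and the four-follower split are made-up assumptions; only the ~70% leader share comes from the Power Law framing above.

```python
# Illustrative sketch of a winner-take-most ("Power Law") profit split.
# All numbers are hypothetical: a $100B segment profit pool, a ~70% leader
# share (per the framing above), and the remainder split evenly among followers.

def segment_profit_split(total_profit_b: float, leader_share: float = 0.70,
                         followers: int = 4) -> dict:
    """Return a hypothetical profit allocation (in $B) for one tech segment."""
    leader = total_profit_b * leader_share
    rest = (total_profit_b - leader) / followers
    return {"leader": leader, **{f"follower_{i + 1}": rest for i in range(followers)}}

if __name__ == "__main__":
    print(segment_profit_split(100.0))
    # {'leader': 70.0, 'follower_1': 7.5, 'follower_2': 7.5, 'follower_3': 7.5, 'follower_4': 7.5}
```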
And then there is this wrinkle. For a couple of decades, the core underpinnings of the internet infrastructure got built - hello Cisco, Broadcom, Juniper, Akamai, Oracle, and many others - building out the networking and OS/middleware layers so that the application layer could emerge, because suddenly people had enough bandwidth to the home to do some cool stuff, led by the classic early adopters: gaming and 18+ videos (as always, games and adult videos lead the way because they can monetize new technologies that enable higher resolution and faster graphics sooner and better than other industries - same as it ever was). So the app kingmakers emerged as the perfect application-layer players: e-commerce (Amazon and its long, slow beat-down of eBay and many others); search (which proved a much better discovery experience than browsing because of the speed of finding the correct answer via the 10 little blue links, defeating all of the portals and, with PageRank, all of the other search engines that hadn't thought far enough ahead to build something that could compete with it - whoops); streaming video (in which Netflix outfoxed Blockbuster and dozens of others); and social networking (in which Facebook arrived late and opened up a can of whoop-ass on MySpace, Friendster, and every other attempted competitor by being one of the fastest followers and ensuring a small, curated experience for each university rollout before coming off campus to become the horizontal solution that would rule the world - with more than a little success after passing on Jerry and David's little Yahoo! portal's $1B - err, $800M - acquisition offer).
Ok, enough memory lane stuff (sorry, but it's relevant context, you'll see). In the internet era, there was a ton invested in all of the companies mentioned above, and the amount and timing of that investment impacted the outcomes and results a lot. If you were too early in applications, you were Pets.com, with the worst marketing campaign ever (remember the Sock Puppet? Of course you do.) before truly achieving product-market fit. And we all know that without product-market fit, the quality of the marketing campaign often doesn't even matter, because the product can't scale - it never gets a beachhead from which to cross the chasm and win a new segment that only it can define. But before the plumbing was built and the internet had grown a user base large enough to resemble a metropolitan area more than a small village (thanks to the original killer app - email! No really, look it up.), the app layer couldn't happen at the scale needed.
Ok, this time really enough history (this article stuff is great - I can just keep going and going - I've got all day, people!). The point is that without big-time investment in infrastructure build-out and hyper-growth user acquisition for the internet, the app layers couldn't happen - certainly not at scale. On the other hand, once the infrastructure and a user base were established, apps were poised for hockey-stick growth, which more than happened in many segments. Why does that matter for AI? Well, for the first time since roughly the mid-to-late 90s, hardware is back in vogue. Not that it ever lost all its luster or stopped making lots of money - but for a while the software side of the world had all the cool kids. Marc Andreessen (yep, that Marc) coined "software is eating the world" and everybody agreed and wanted to help it eat faster. Solid strategy actually - for both startup founders and investors like a16z.
The one obvious exception was smartphones - both Apple iPhones and Google Android phones established themselves as competitors to laptops from both a mind-share and spend perspective, particularly as phones got bigger screens, more memory, and better cameras. But even smartphones proved Andreessen right, because the app store and the software layer ate up literally hundreds of standalone devices and turned them into mobile apps, leveraging built-in capabilities like GPS to remove the need for dedicated hardware that had seemed so valuable just a few years back. Mapping devices like TomTom (remember them?) - gone, replaced by map apps. Yardage finders for golfers - gone, replaced by similar functionality in apps. Digital cameras (come on Walt, don't bury the lede) and regular cameras - gone, gone, gone, and so much faster than any of us envisioned. And so many more - relegated to history and Wikipedia pages as apps got better, faster, stronger, and easier to download and use. So smartphones turned out to be the exception that proves the rule, because of all the software they enabled to keep eating the world.
From about 1999 to 2020, outside of a few areas like mobile phones, hardware advancement took a back seat; building applications that delivered new and better tools for consumers and enterprises became much of the game in technology. But then in the late 2010s, the growth of AI functionality started to demand some pretty big improvements in hardware to support the massive compute and storage infrastructure needed to do all that "AI stuff": data (you have to collect it, store it, and process it into training data sets), training (did I mention massive data sets?), plumbing elements like security, inference (the reasoning part of AI - how it thinks), model development (large or small language models), and deployment. And as each layer of the AI infrastructure got incrementally better, the whole stack moved forward slightly but consistently, pushing the boundaries and the hardware needs forward again. It's the same way that adding memory to PCs, and giving apps more memory and storage to work with, pushed the need for x86 from the 80286 (286 in PC-buyer parlance) to the 80386 (386), then the 80486 (486), and then ... wait for it ... the 80586 (Pentium - wait, what? I know, a little AMD v. Intel litigation in the San Jose courthouse where I clerked in the summer of 1991 resulted in the hand slap to Intel that forced a new naming convention - stuff happens, best laid plans).
In each case, the PC buyer needed faster chips and more memory to handle new application capabilities like fancy Excel pivot tables and game graphics like Doom and Myst (both awesome - in very different ways). There was some (but not much) incentive to treat memory carefully and be efficient (remember mallocdebug? Of course you do, if you wrote any code back then ...), so applications kept getting more - let's just say it without fat-shaming software - bloated and fat! But that worked out fine for chip, memory, and PC makers, because the average upgrade cycle for many users was about three years. More chip speed and more memory meant more powerful apps and better user experiences. That fed the Wintel (Microsoft Windows + Intel chips) beast very well for a few decades. Billions of dollars were spent on faster PCs and new and better app versions - the virtuous cycle of tech upgrades, with flywheels on both the hardware and software sides!
But then chip speed and memory became less important, because over time the speed of the connection into the house and office mattered more, and that is where investment, innovation, and capital started to flow. Gradually, the speed of your computer's chip mattered less than the speed of the pipe into your house, and investment flows switched to match that reality. From an AI perspective, today we are at the 8086 (the precursor to the 286) - maybe. And remember, it was the 386 that really got the PC revolution moving in earnest. So we're early, and a lot of infrastructure still has to be built. Most importantly, remember when I mentioned the incentives to be careful about memory and storage usage - to be a good citizen and clean up after yourself - if you were an independent software vendor (ISV) working inside the Windows or Mac OS environments? Now think about AI - the incentives are aligned very much the other way. If you were a memory-efficient coder as a PC ISV, you ended up with an app that worked better, ran faster, and created a better user experience for the ISV, the computer manufacturer, the OS manufacturer, and the end user.
In AI, the opposite is true: you can often deliver a better user experience only by increasing the compute needed to run multiple layers of queries and inference. The initial ChatGPT tool (and others like it) took your discovery experience and moved it from 10 blue links to a context-based search result with one "ideal answer." Over time, as generative AI tools (like ChatGPT) have evolved into tools like Gemini Deep Research, multi-layered query structures have emerged. When I tested Deep Research a couple of weeks back, I used the same query structure I often use for ChatGPT. In short, I asked it to summarize what global energy demand is forecast to be now and how it is expected to grow over the next 20-30 years, and then to add an AI-specific demand component based on current forecasts for AI energy needs. With ChatGPT, you get the contextual answer to the query. With Deep Research, you get an interim step of "if I understand your query correctly, here is the research I think should be done to answer the question(s) your query suggests, and here are the steps I will use to track down and assemble a complete research answer for you." Then you press "yes" and send it on its way, and five minutes later you get an amazing Gemini document that can be sent to Google Docs with one click. Here's a link to that Google Doc -
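To make the structural difference concrete, here is a rough Python sketch of the two query patterns described above. Everything in it is a placeholder I made up for illustration - the StubLLM class, the function names, and the plan-approval step are not Gemini's or OpenAI's actual APIs - it just shows why a multi-layered "deep research" flow burns many model calls where the single-shot flow burns one.

```python
# Purely illustrative sketch contrasting a single-shot generative answer with
# a multi-layered "deep research" flow. Nothing here calls a real ChatGPT or
# Gemini API; the LLM interface below is a made-up placeholder.

class StubLLM:
    """Placeholder model: echoes prompts so the control flow can be run as-is."""
    def complete(self, prompt: str) -> str:
        return f"[model output for: {prompt[:60]}...]"

def single_shot_answer(llm, query: str) -> str:
    # One pass: the model returns a single contextual answer (the ChatGPT-style flow).
    return llm.complete(query)

def deep_research_answer(llm, query: str, approve_plan) -> str:
    # Step 1: the model restates the question as a step-by-step research plan.
    plan = llm.complete(f"Propose a step-by-step research plan for: {query}")
    if not approve_plan(plan):           # the "press yes" confirmation step
        return single_shot_answer(llm, query)
    # Step 2: each plan step becomes its own query - more layers, more compute.
    steps = [s for s in plan.splitlines() if s.strip()]
    findings = [llm.complete(f"Research step: {s}") for s in steps]
    # Step 3: a final synthesis pass assembles the findings into one report.
    return llm.complete("Synthesize these findings into a report:\n" + "\n".join(findings))

if __name__ == "__main__":
    llm = StubLLM()
    report = deep_research_answer(
        llm,
        "global energy demand forecast for the next 20-30 years, plus an AI-specific component",
        approve_plan=lambda plan: True,
    )
    print(report)
```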
That's a much better search result. My estimate is that assembling this content myself (and I've done a fair bit of research and analysis over the years ...) would take 2-3 hours (and if anything, I'm erring on the low side and overestimating my compilation capabilities). So I get a much better search result than I could have gotten just two years ago - but that result takes a lot more compute and other resources. It's almost the opposite of the ISV situation: developing the multi-layered query obviously ups the ante significantly from a resource perspective, but it also delivers a tremendously improved search or discovery result. Being memory-efficient as an ISV helped the app work better and improved the user experience. In AI, using more memory and compute delivers an exponentially better result.
That's why the $500 billion is so important. We are two years into AI, and it's obvious that every component of AI infrastructure is going to need tens of billions in investment, because all of it is going to attempt to do the same thing Gemini just did for me compared to the ChatGPT of just a few months ago (and even today, really). Remember Power Law: roughly 70% of the profits accrue to the segment winner when Power Law applies. Everyone in the game knows that - which is why Google, Microsoft, Facebook, OpenAI, and xAI are spending and raising tens to hundreds of billions of dollars. Nobody wants to lose the space, everyone wants to win it, and it's becoming an arms race where capital is the ultimate arms enabler (insert joke about the RISC-based Arm chip company here). When Ellison, Masa, and Sam commit to $500B, it means the space will have access to capital. That means everything that needs to happen on the infrastructure side will now happen faster - ideally a lot faster. Recall that the entire AI space had roughly $50B in VC funding in 2023 and likely a similar amount in 2024, while the large public tech companies are investing additional capital on top of that. But even if you credit VCs with $100B invested and take another $100B in public company commitments, the $500B is a 2.5x lift (talk about shot - shot - chaser) for the total space. Yeah, gonna have to call that raising all the boats. It is also a clear signal to capital allocators that this space is on fire, needs capital, and that ROI is likely to follow - the track records of those three guys at the podium with Trump speak for themselves. They're great capital allocators and risk takers, and they're saying let's make a big bet to get AI moving by investing and making great AI innovation happen in the US.
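For the arithmetic-inclined, the 2.5x lift works out like this, using the round numbers in the text (the ~$100B of public tech company commitments is the same rough assumption used above):

```python
# Back-of-the-envelope capital math using the round numbers in the text.
vc_funding_b = 50 + 50            # ~$50B in 2023 + ~$50B expected for 2024
public_tech_b = 100               # assumed large public tech company commitments
existing_b = vc_funding_b + public_tech_b

stargate_b = 500                  # Stargate commitment
lift = stargate_b / existing_b    # how much the new commitment scales the existing base

print(f"Existing base: ${existing_b}B, Stargate: ${stargate_b}B, lift: {lift:.1f}x")
# Existing base: $200B, Stargate: $500B, lift: 2.5x
```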
I think the $500B also serves notice to the rest of the world that the US intends to retain a position at or near the top of the pile in AI. You can argue the US is already at or near the top of the heap among AI development regions. But it is hard to argue that injecting another $500B will not increase US innovation and competitiveness in AI. That is a good thing from a global competitiveness perspective - China is investing huge sums in AI as well, and it plans on being at or near the top of the same heap. I've seen a lot of coverage noting that much of this work was already underway, and that is true. But putting three of the smartest tech leaders at the podium and announcing $500B in capital is a clear sign to the world that the US plans on being first or second (it will be a battle with China, for sure) among AI innovators - and that if this is an arms race based on capital, folks had better get their checkbooks out if they want to compete with the US and China for AI talent and innovation. From my perspective, there's a lot of upside in signaling to the rest of the world that the US is playing in the AI space and has big ambitions, and the downside appears very limited. In my opinion, the $500B does not quite turn the AI race into a China-versus-US-only competition, but it does thin the herd of other countries or regions that had ambitions of staying near the front of the pack in AI innovation. It puts the US in a very strong position relative to the rest of the world.
(I'll have more to say later about the impact on Ag and AgTech - need to give that some more thought. For now, I'm pretty pumped about the $500B announcement!)