An AI model to unlock the Sun's secrets
Welcome back to the Circuit Breaker, where you can find the best recaps on the latest innovations in AI, quantum computing, semiconductors, and more, from across IBM Research and beyond.
Week of August 19 - August 22
Introducing Surya, an AI model for the Sun
This week, IBM, NASA, and eight other partners open-sourced Surya, a heliophysics foundation model that can be applied to all sorts of scientific questions about our host star, including predicting solar flares, the solar wind, and the Sun’s magnetism.
Why do we need an AI model for the Sun? Two reasons: AI can make mountains of satellite data accessible in new ways. It can also potentially give us more advance warning of impending solar storms.
When NASA’s Solar Dynamics Observatory (SDO) launched into space, scientists didn’t really have the tools to analyze the satellite’s daily haul of 1.5 terabytes of data. Now, 15 years later, they do. IBM, NASA, and collaborators used nine years of raw SDO imagery to create an AI model, Surya, that can not only predict space weather but also provide new insights into the Sun itself.
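For a rough sense of scale (a back-of-envelope estimate of ours, not a figure from the announcement), nine years of that daily haul adds up to several petabytes of raw imagery:

```python
# Back-of-envelope only: estimate the raw data volume behind the training set.
# The 1.5 TB/day figure comes from the article; everything else is simple arithmetic.
terabytes_per_day = 1.5
years_of_imagery = 9
days = years_of_imagery * 365

total_terabytes = terabytes_per_day * days
print(f"~{total_terabytes:,.0f} TB, i.e. roughly {total_terabytes / 1000:.1f} PB of raw SDO imagery")
# -> ~4,928 TB, i.e. roughly 4.9 PB
```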
Space weather hits Earth hardest during the Sun’s active phase, which peaks every decade or so. This phase is marked by solar flares, solar wind, and jets of plasma that hurl charged particles toward Earth. If those particles interact with Earth’s magnetic field lines, the energy can endanger astronauts in space and disrupt satellites, power grids, and communications. Researchers hope that Surya and a related set of benchmarks can help us see solar storms coming sooner.
"We want to give Earth the longest lead time possible,” said Andrés Muñoz-Jaramillo, a solar physicist at the SouthWest Research Institute and a lead scientist on the project.
Why Surya? It’s Sanskrit for Sun, and it follows IBM’s Prithvi (Sanskrit for Earth) family of AI models for weather and climate prediction on Earth, among other applications.
How was Surya trained? Instead of learning to reconstruct partially blacked-out images during training, as Prithvi was, Surya was given sequences of images and asked to predict what SDO would see an hour into the future. In the process, the model inferred from the raw data knowledge essential for skillful forecasting, including the Sun’s geometry, magnetic structure, and unusual rotation.
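To make that training objective concrete, here is a minimal, illustrative sketch of next-frame pretraining in PyTorch. The toy model, tensor shapes, and random data are hypothetical stand-ins, not Surya’s actual architecture or data pipeline; only the objective (predict the frame one time step ahead and minimize reconstruction error) reflects the description above.

```python
# Illustrative sketch only: a toy next-frame forecasting objective.
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    """Toy stand-in for a vision model that maps a short sequence of
    solar images to a predicted image one time step (e.g., one hour) ahead."""
    def __init__(self, channels: int = 8):
        super().__init__()
        # Collapse the time dimension with a 3D conv, then project back
        # to a single predicted frame.
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=(3, 3, 3), padding=(0, 1, 1)),
            nn.GELU(),
            nn.Conv3d(32, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, channels, time=3, height, width)
        return self.net(frames).squeeze(2)  # -> (batch, channels, height, width)

model = TinyForecaster()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Fake batch: three consecutive frames in, the frame one hour later as the target.
past_frames = torch.randn(2, 8, 3, 64, 64)
future_frame = torch.randn(2, 8, 64, 64)

prediction = model(past_frames)
loss = loss_fn(prediction, future_frame)
loss.backward()
optimizer.step()
```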
Were there any surprises? Yes! One of the Sun’s quirks is that it spins faster at its equator than at its poles. To do any kind of forecasting, you have to correct for this differential rotation. Researchers at first tried to encode that knowledge directly into the model, but they got better results when they let the model be. “It’s remarkable that the model performed better when we let it figure out the rotation on its own,” said Johannes Schmude, an AI researcher who led the project on the IBM side.
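For context on why that correction matters, below is a small illustrative snippet of the classic empirical law often used to describe the Sun’s latitude-dependent spin. The coefficients are commonly cited approximate values, not numbers taken from the Surya work.

```python
# Illustrative only: the empirical differential-rotation law
# omega(lat) = A + B*sin^2(lat) + C*sin^4(lat), with approximate,
# commonly cited sidereal coefficients in degrees per day.
import math

A, B, C = 14.713, -2.396, -1.787  # approximate values, deg/day

def rotation_rate_deg_per_day(latitude_deg: float) -> float:
    s = math.sin(math.radians(latitude_deg))
    return A + B * s**2 + C * s**4

print(rotation_rate_deg_per_day(0))   # equator: ~14.7 deg/day (roughly a 25-day period)
print(rotation_rate_deg_per_day(60))  # high latitude: ~11.9 deg/day, noticeably slower
```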
Game. Set. Match Chat.
IBM and the United States Tennis Association (USTA) are serving up a new level of fan engagement for the 2025 US Open. From real-time match insights to interactive assistants, this year’s AI-powered digital experience is designed to keep tennis fans close to the action. These new ways to enjoy the matches were built with IBM watsonx Orchestrate and the Granite family of LLMs born out of IBM Research. Here’s what’s new:
💬 Match Chat is an interactive AI assistant that's ready to answer fan questions during all 254 singles matches. Whether you’re looking for stats, head-to-head records, or even help pronouncing a player’s name, all you have to do is ask.
📊 SlamTracker is back again this year, and it’s enhanced with live “Likelihood to Win” probabilities, updating in near real-time based on player stats, expert analysis, and match momentum.
📝 Key Points is built with watsonx and generates instant TL;DR summaries of articles and match data, so fans never miss the story behind the score.
How a decades-old classical computing technique can illuminate the inner workings of AI
Despite the massive leaps we've taken into this new era of AI, our ability to interrogate the inner workings of large models remains surprisingly ad hoc. A new paper in Nature Machine Intelligence builds on decades-old complexity theory, proposing a way to bring rigor and transparency to assessing the computational complexity of large AI systems.
Mid-20th-century innovations, from Turing machines to the birth of computational complexity theory, gave us formal tools to define computability and categorize problems by their inherent difficulty.
IBM Quantum platform updates from Jay Gambetta
What is shadow AI?
Sports fans are turning to AI
Making it easier for anyone to deploy agents
Take the IBM Quantum annual feedback survey
Highlighting new publications from IBM researchers that we liked the sound of:
If you liked this, please consider following IBM Research on LinkedIn. And if you want to go even deeper, subscribe to our monthly newsletter, Future Forward, for the latest news on breakthroughs in AI, quantum computing, and hybrid cloud.