New approaches to improving AI
Welcome back to the Circuit Breaker, where you can find the best recaps on the latest innovations in AI, quantum computing, semiconductors, and more, from across IBM Research and beyond.
Stay tuned for new features and content in the coming weeks as we look to bring you newsletters that will give you an even deeper understanding of what's next in computing.
Week of June 23 - 27
Putting mountains of sensor data to better use
IBM’s new lightweight foundation model, TSPulse, goes beyond time series forecasting to help enterprises analyze observational data from a variety of different angles.
🧮 What can TSPulse do? It can pick out anomalies in a historical dataset, fill in missing values, classify data into categories, and tease out similar-looking patterns — with greater accuracy than models 10 to 100 times larger, when measured on leading benchmarks in the field. TSPulse is meant to complement IBM’s popular forecasting model, TinyTimeMixers.
📈 Why not just focus on forecasting? Observational data contains information that can be valuable for tasks other than predicting what’s next. For enterprises, this could include detecting signs of equipment or network failures by picking out subtle anomalies in a continuous stream of sensor or network data.
🦾 What makes TSPulse so powerful? Its efficient, hybrid architecture makes it easy to fine-tune and serve on an ordinary laptop — no GPUs required. TSPulse was also trained to extract frequency as well as temporal information from a time series, and to analyze data from both a bird’s eye and worm’s eye view. “The model learns high-level and low-level features together,” said Vijay Ekambaram, an IBM researcher who focuses on time-series analysis. “Combined with the frequency information that’s already been integrated, this is when the magic happens.”
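To make the idea of pairing temporal and frequency views concrete, here is a minimal sketch in plain Python (a toy illustration, not TSPulse's actual feature extractor): a time-domain summary alone says little about a periodic signal, while a naive discrete Fourier transform recovers its dominant cycle.

```python
import math

def temporal_features(series):
    """Simple time-domain summary: mean and peak-to-peak range."""
    return {"mean": sum(series) / len(series),
            "range": max(series) - min(series)}

def dominant_frequency(series):
    """Naive DFT: return the frequency bin with the largest magnitude
    (ignoring the DC component). Illustrative only — real models use FFTs."""
    n = len(series)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(series[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(series[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin

# A sine wave completing exactly 4 cycles over 64 samples: its mean is
# (nearly) zero, so the time-domain view misses the structure, but the
# frequency view finds the 4-cycle periodicity.
signal = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
print(dominant_frequency(signal))  # 4
```

A model that sees both views gets the periodicity "for free" rather than having to infer it from raw samples.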
🪄 Magic from two vantage points. Through a range of masked reconstruction tasks during training, TSPulse learns to recognize both the global meaning of a time series and intriguing local patterns. TSPulse pivots between these two perspectives based on the task: it picks the summary view for classification tasks or when searching for recurring patterns, and it chooses the detailed view for imputing missing values or identifying subtle anomalies.
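The masked-reconstruction objective can be sketched in miniature (this is a toy illustration of the training idea, not TSPulse's architecture): hide some values in a series, ask a stand-in "model" to fill the gaps, and score it against the hidden originals. Here the stand-in is simple linear interpolation.

```python
def mask_series(series, masked_idx):
    """Replace values at masked_idx with None, keeping the originals
    as reconstruction targets."""
    masked = [None if i in masked_idx else v for i, v in enumerate(series)]
    targets = {i: series[i] for i in masked_idx}
    return masked, targets

def interpolate(masked):
    """Toy reconstructor: linear interpolation between the nearest
    observed neighbors of each gap (assumes gaps are interior)."""
    out = list(masked)
    for i, v in enumerate(out):
        if v is None:
            lo = next(j for j in range(i - 1, -1, -1) if masked[j] is not None)
            hi = next(j for j in range(i + 1, len(masked)) if masked[j] is not None)
            w = (i - lo) / (hi - lo)
            out[i] = masked[lo] * (1 - w) + masked[hi] * w
    return out

def reconstruction_error(series, masked_idx):
    """Mean squared error on the hidden values — the training signal."""
    masked, targets = mask_series(series, masked_idx)
    recon = interpolate(masked)
    return sum((recon[i] - t) ** 2 for i, t in targets.items()) / len(targets)

# A smooth ramp is easy to reconstruct from its unmasked neighbors.
ramp = [float(t) for t in range(10)]
print(reconstruction_error(ramp, {3, 6}))  # 0.0 — linear data, perfect fill
```

A trained model plays the role of the interpolator, but learns far richer fill-in strategies; minimizing this kind of error across many masking patterns is what forces it to absorb both global and local structure.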
🌐 Why it matters: People have traditionally mined historical data for insights using statistical models. But foundation models pre-trained on raw time-series data are quickly catching up. TSPulse is part of a new breed of lean, high-performing AI models for time-series analysis. It outperformed both statistical models and much larger deep-learning models on several leading benchmarks, including the TSB-AD benchmark for anomaly detection.
Embracing failed experiments to improve chemistry models
The road to scientific discovery is paved with failures, and IBM Research is using these failures to make chemical language models more accurate. When developing a language model to predict the outcomes of chemical reactions, a team led by IBM Research scientist Mara Graziani found that fine-tuning with a mix of failed and successful experiments led to greater accuracy than fine-tuning with successful experiments alone.
These findings, published in Science Advances, add to a growing body of scientific literature that emphasizes the importance of publishing the results of failed experiments.
⚗️ Learning the language of chemistry: “You can think of chemistry as having a grammar and syntactic rules,” Graziani said. And just as a language model can be trained to hold conversations based on those rules, it can also be trained to understand the rules of chemistry and perform operations based on them.
🗑️ From trash to treasure: Learning from mistakes can be as informative as learning from successes. After all, scientific failures aren't random. They're based on informed hypotheses that contain important background knowledge about the field. Just as one can master a new language by learning from their linguistic errors, a language model can learn from two different types of failed chemistry experiments: those that yielded an unexpected but chemically relevant product, and those that included no significant reactions.
🧪 Positive reinforcement: Building upon a transformer backbone, researchers trained a language model on chemical reactions extracted from United States Patent and Trademark Office (USPTO) patents, and they fine-tuned it with two different datasets — one containing negative data, and one without. They then crafted reward functions to support the use of reinforcement learning from human feedback (RLHF), an approach not commonly used in chemistry. Careful encoding was required for the model to make use of the negative examples, and eventually they cracked the code with a fine-tuned model that performed significantly better than one tuned only on successful experiments.
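To make the reward-shaping idea concrete, here is a toy sketch of how a reward function might distinguish the two failure types described above. The reward values, SMILES strings, and function names are illustrative assumptions, not the paper's actual reward design.

```python
def reaction_reward(predicted, expected, plausible_products):
    """Toy reward for a predicted reaction product (illustrative only).

    predicted/expected are product strings (e.g. SMILES);
    plausible_products is a set of chemically relevant by-products
    for this reaction. Values are assumptions, not the paper's.
    """
    if predicted == expected:
        return 1.0          # success: the intended product
    if predicted in plausible_products:
        return 0.2          # failure type 1: unexpected but relevant product
    if predicted == "no-reaction":
        return -0.5         # failure type 2: no significant reaction
    return -1.0             # chemically irrelevant output

# Hypothetical example: an esterification expected to yield ethyl acetate.
expected = "CCOC(C)=O"
plausible = {"CC(=O)O"}     # e.g. unreacted acetic acid recovered
print(reaction_reward("CCOC(C)=O", expected, plausible))    # 1.0
print(reaction_reward("no-reaction", expected, plausible))  # -0.5
```

The key design point is that the two failure modes earn different rewards, so the fine-tuned model learns not just what works but *how* things fail — the extra signal the negative data provides.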
For IBM’s Dmitry Krotov, AI is all about physics
When John Hopfield won the Nobel Prize in physics last fall for his work in AI, many people were puzzled. Not Dmitry Krotov.
As one of Hopfield's close collaborators, the IBM researcher has spent the months since the Nobels were announced explaining to the world how Hopfield networks paved the way for the deep neural networks in use today. At Princeton, Hopfield and Krotov invented something called dense associative memory, which lifted the memory storage limits of those early Hopfield networks, opening them to practical applications. Krotov is now carrying on Hopfield's ideas by building computational models to improve artificial intelligence, and even to understand the underpinnings of intelligence itself.
Associative memory may never displace transformers as the backbone of generative AI, but it could provide ideas for making AI more interpretable to us humans.
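For a flavor of how associative memory works, here is a minimal classical Hopfield retrieval in plain Python: store patterns in Hebbian weights, then recover a stored pattern from a corrupted cue. Dense associative memory replaces this pairwise energy with higher-order interaction terms, which is what lifts the storage limit; this sketch shows only the classical case.

```python
def hopfield_recall(patterns, probe, steps=5):
    """Classical Hopfield retrieval: Hebbian weights, asynchronous
    sign updates. Patterns and probe are lists of +1/-1 spins."""
    n = len(probe)
    # Hebbian weight matrix with zero diagonal: w[i][j] = sum over
    # stored patterns of p[i] * p[j]
    w = [[sum(p[i] * p[j] for p in patterns) if i != j else 0
          for j in range(n)] for i in range(n)]
    s = list(probe)
    for _ in range(steps):
        for i in range(n):
            field = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s

stored = [[1, 1, -1, -1, 1, -1],
          [-1, 1, 1, -1, -1, 1]]
noisy = [1, 1, -1, -1, -1, -1]   # first pattern with one bit flipped
print(hopfield_recall(stored, noisy))  # recovers [1, 1, -1, -1, 1, -1]
```

The network "falls" into the nearest stored memory, much like recalling a whole song from a few notes — the energy-landscape picture that connects this line of AI back to physics.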
What does this all have to do with physics? Krotov and others explain here.
The first deployment of an IBM Quantum System Two outside the United States
Hear from Jerry M. Chow on IBM's path towards quantum advantage with Bloomberg Technology
Where is IBM headed? Take a closer look at the 2025 IBM Quantum Roadmap to see what's next 🗺️
How IBM is helping to make our digital landscape safer 🔒
Breaking through the noise: How IBM and Kipu Quantum are making breakthroughs that matter
Research Roundup:
Highlighting new publications from IBM researchers that we liked the sound of:
If you liked this, please consider following IBM Research on LinkedIn. And if you want to go even deeper, subscribe to our monthly newsletter, Future Forward, for more on the latest breakthroughs in AI, quantum computing, and hybrid cloud.