Congratulations to IBM researchers Gregor Seiler and Ghazi Sarwat Syed for each earning prestigious Starting Grants from the European Research Council: https://ibm.co/6048ByP0i

🏅 Based at our labs in Zurich, Syed's work stands at the forefront of analog in-memory computing (IMC). His project, INFUSED (Inferencing Fast and Slow With Ultra-Scaled Phase-Change Memory Devices), introduces a new framework for evaluating machine learning models, mimicking the brain's reconfigurable fast-and-slow synaptic dynamics to ultimately make AI models more efficient and sustainable.

🏅 Also based in Zurich, Seiler earned a grant to build on his work in lattice-based cryptography and its application in making future encryption systems safer. Specifically, his work aims to enable practical, remote authentication use cases for safely sharing digital ID information in a post-quantum world.

More details on their cutting-edge work and its future implications in the link above. ⬆️
IBM Research
Research Services
Yorktown Heights, New York · 90,371 followers
Inventing what's next in science and technology.
About us
IBM Research is a group of researchers, scientists, technologists, designers, and thinkers inventing what's next in computing. We're relentlessly curious about all the ways that computing can change the world. We're obsessed with advancing the state of the art in AI, hybrid cloud, and quantum computing. We're discovering new materials for the next generation of computer chips; we're building bias-free AI that can take the burden out of business decisions; we're designing a hybrid-cloud platform that essentially operates as the world's computer. We're moving quantum computing from a theoretical concept to machines that will redefine industries. The problems the world is facing today require us to work faster than ever before. We want to catalyze scientific progress by scaling the technologies we're working on and deploying them with partners across every industry and field of study. Our goal is to be the engine of change for IBM, our partners, and the world at large.
- Website: http://www.research.ibm.com/
- Industry: Research Services
- Company size: 10,001+ employees
- Headquarters: Yorktown Heights, New York
Updates
IBM Research reposted this
Granite Docling by IBM is the #3 trending model on Hugging Face! This is a multimodal Image-Text-to-Text model engineered for efficient document conversion. It preserves the core features of Docling while maintaining seamless integration with DoclingDocuments to ensure full compatibility. License: Apache 2.0

It builds upon the IDEFICS3 architecture, but introduces two key modifications: it replaces the vision encoder with siglip2-base-patch16-512 and substitutes the language model with a Granite 165M LLM. Try out our Granite-Docling-258M demo today.

Granite-Docling-258M is fully integrated into the Docling pipelines, carrying over existing features while introducing a number of powerful new ones, including:
🔢 Enhanced Equation Recognition: more accurate detection and formatting of mathematical formulas
🧩 Flexible Inference Modes: choose between full-page inference and bbox-guided region inference
🧘 Improved Stability: tends to avoid infinite loops more effectively
🧮 Enhanced Inline Equations: better inline math recognition
🧾 Document Element QA: answer questions about a document's structure, such as the presence and order of document elements
🌍 Japanese, Arabic, and Chinese support (experimental)

HF page: https://lnkd.in/eivfEXXb

Congrats Arvind Krishna Bill Higgins & team!
IBM Research reposted this
Look at #BeeAI framework from The Linux Foundation and IBM Research in action! Congratulations team.
Excited to share that our team Hive.ai won first place at the IBM TechXchange Pre-conference watsonx Hackathon! 🎉 With my amazing teammates Paul Barbaste and Louise Caignaert, we created Hive.ai, an agentic AI system that helps cities cut their climate impact and build resilience.

Hive.ai enables cities to:
- Aggregate and analyze climate and sustainability data
- Identify vulnerabilities and local trends
- Generate tailored, actionable plans aligned with the SDGs
- Deliver clear resilience reports with priorities, budgets, and timelines

Also, a special thank you to Olivier Oullier (Inclusive Brains), Julien FLOCH, and the whole Wavestone team for sharing their expertise and guidance along the way. And hats off to Jennifer Judge, Raluca Negrea, Radu BORCEA, and the IBM team for putting together such an inspiring hackathon and giving us the space to push our ideas further. Can't wait to see you all in Orlando! 🌍✈️🏙️

[Wavestone AI Team] Julien FLOCH Florence NOIZET Yuri Höfte Stephan Mir Ghislain de Pierrefeu Alexia Bros Jonathan Gérardin Johann Chazelle Tom Wiltberger Clément Peponnet Nolwenn Delord Federico Garcia Velez Juan Pablo Duque Escobar #IBM #TechXChange2025 #Hackathon #Wavestone #WatsonX #TechforGood
IBM Research reposted this
Efficiency is the new frontier of AI. Thrilled to share that the Causal LLM Routing paper https://lnkd.in/erPWniT7 with my incredible collaborators Asterios Tsiourvas & Georgia Perakis has been accepted to NeurIPS 2025! 🥳 🥂

Routing is all about efficiency: sending the right query to the right compute (save the big guns for when it really counts). Our approach brings causal decision-making into the mix by learning directly from deployment logs instead of costly experiments.

This work highlights the power of combining advanced AI (causal ML) with the mathematical elegance of operations research ("smart predict-and-optimize" formulations that provably recover the optimal routing policy at convergence).

As GenAI moves from flashy demos to real deployment, AI efficiency will take center stage (even OpenAI, with all its deep pockets, built a cost-aware router into GPT-5!). With OR's strengths in optimization, efficiency, and rigor, I'm confident we can unlock more value while pushing AI toward greater performance and sustainability! Go #OR! Go IBM Research! 🫶
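The core intuition of cost-aware routing can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's causal method: the per-query quality estimates, the `route` function, and the cost-tradeoff parameter `lam` are all hypothetical stand-ins for what a learned policy would provide.

```python
# Illustrative cost-aware LLM routing sketch (NOT the paper's implementation).
# quality_* are hypothetical per-query quality estimates, e.g. learned from
# deployment logs; lam trades off quality against inference cost.

def route(quality_small: float, quality_large: float,
          cost_small: float, cost_large: float,
          lam: float = 1.0) -> str:
    """Pick the model maximizing estimated quality minus lam * cost."""
    util_small = quality_small - lam * cost_small
    util_large = quality_large - lam * cost_large
    return "small" if util_small >= util_large else "large"

# Easy query: the small model is nearly as good, so routing saves compute.
print(route(quality_small=0.90, quality_large=0.92,
            cost_small=0.1, cost_large=1.0))            # small
# Hard query: the quality gap justifies the larger model's cost.
print(route(quality_small=0.55, quality_large=0.95,
            cost_small=0.1, cost_large=1.0, lam=0.2))   # large
```

The paper's contribution is learning such a policy causally from observational logs rather than hand-tuning it; the sketch only shows the decision rule a router ultimately applies.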
IBM Research reposted this
Analog In-Memory Computing (AIMC) promises to unlock completely new use cases of AI by serving huge models (think hundreds of billions of parameters) in a tiny footprint, thanks to the enormous capacity and density of non-volatile memory. In a previous paper, we even demonstrated a clear path to achieving this by combining 3D AIMC with Mixture of Experts (see https://lnkd.in/eMmCiF5Z).

All of this sounds amazing, but there is a catch: performing matrix-vector multiplications directly inside non-volatile memory devices is a noisy process. As a result, deploying off-the-shelf LLMs on AIMC hardware yields poor accuracy. This is similar to quantizing the weights of LLMs to exploit low-precision kernels on modern GPUs, except that the noise is non-deterministic at runtime.

Previous studies have introduced methods to successfully "robustify" neural networks to the kind of noise found in AIMC hardware. However, these studies have mostly been limited to CNNs, LSTMs, and similar models with <100M parameters.

In our paper titled "Analog Foundation Models", we introduce a new class of foundation models that are robust to the noise present in AIMC. Thanks to noise-agnostic, hardware-aware training using AIHWKIT-Lightning (https://lnkd.in/e2FaXgEJ), these models are robust to many different instances of hardware noise and can even be quantized much more easily using simple round-to-nearest quantization.

Most importantly, we show for the first time that LLMs can be robustified to achieve *iso-4-bit performance*, meaning that we can now compete with LLMs whose weights were quantized to 4 bits. This is a huge milestone in addressing one of the most fundamental limitations of AIMC, and it hopefully paves the way to making this technology a reality!
The paper has now been accepted to #NeurIPS2025 🎉 👉 Link to the preprint: https://lnkd.in/ex4PtjT3

Huge thanks to all my collaborators: Iason Chalas, Giovanni Acampa, An Chen, Omobayode Fagbohungbe, Sidney Tsai, Kaoutar El Maghraoui, Manuel Le Gallo, Abbas Rahimi, Abu Sebastian. Special thanks to my students Iason Chalas and Giovanni Acampa, who put endless hours into this project. Also, special thanks go out to my supervisors Abu Sebastian from IBM Research Zurich and Martin Vechev from ETH Zurich, who have made my Ph.D. a very pleasant experience.

This has been a collaboration with contributions from IBM Research in Zurich, Almaden, and Yorktown, and I am really proud to have pulled off this global team effort. Vijay Narayanan Jeff Burns Alessandro Curioni Mukesh Khare
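The noise the post describes, and the kind of noise injection that hardware-aware training simulates, can be illustrated with a toy stdlib sketch. This is NOT AIHWKIT-Lightning or the paper's training code: the function name, the multiplicative-Gaussian noise model, and the noise level are simplified assumptions for illustration.

```python
import random

def noisy_matvec(weights, x, rel_noise=0.05, rng=None):
    """Matrix-vector product with multiplicative Gaussian weight noise,
    loosely mimicking non-deterministic analog conductance variations.
    (Toy illustration only; real AIMC noise models are richer.)"""
    rng = rng or random.Random(0)
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            # Each read of a weight is perturbed, so results vary at runtime.
            acc += w * (1.0 + rng.gauss(0.0, rel_noise)) * xi
        out.append(acc)
    return out

identity = [[1.0, 0.0], [0.0, 1.0]]
print(noisy_matvec(identity, [2.0, 3.0], rel_noise=0.0))  # exact: [2.0, 3.0]
print(noisy_matvec(identity, [2.0, 3.0], rel_noise=0.05))  # perturbed result
```

Hardware-aware training injects perturbations like this into the forward pass so the model learns weights whose predictions are stable under many noise realizations.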
IBM Research reposted this
I had such a good time teaching and building AI Agents with engineering students at Massachusetts Institute of Technology, Northeastern University, and Boston University 🤖 Everyone walked away with a fully functional agent they can use as the foundation for their experimentation and hackathons. 🚀 I taught memory strategies, how tool calling actually works, RAG, and how to guide and enforce certain agent behavior using BeeAI's Requirement Agent. Thanks to everyone who came and for the awesome engagement throughout! 🙏 If we haven't made it to your university yet, reach out and let me know where you'd like us to host next! 🎓📍
Congratulations to the FlowState team! Despite being over 20x smaller than the top-ranked model (and more than 10x smaller than the next two), FlowState achieves the second- and third-best zero-shot performance on the Hugging Face GIFT-Eval Time Series Forecasting Leaderboard: https://ibm.co/6047BySIN

FlowState, the first time-scale-adjustable Time Series Foundation Model (TSFM) open-sourced by IBM Research, combines a State Space Model (SSM) encoder with a Functional Basis Decoder (FBD) to seamlessly adapt to any sampling rate. In short: by adjusting the time scale during training and inference, FlowState learns patterns that generalize across all scales, enabling efficient use of training data and robust inference even at unseen time scales. This innovation sets a new benchmark in zero-shot time series forecasting.
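The key idea behind a functional basis decoder, producing a continuous-time forecast that can be sampled at any rate, can be sketched with the standard library. This is a toy illustration, not FlowState's architecture: the sinusoid basis, the `fbd_forecast` name, and the fixed period are assumptions chosen only to show why a continuous decoder is sampling-rate-agnostic.

```python
import math

def fbd_forecast(coeffs, horizon_seconds, sampling_rate_hz, period=60.0):
    """Evaluate a continuous-time forecast (here, a weighted sinusoid basis)
    at an arbitrary sampling rate. Because the decoder is a function of
    continuous time, the same learned coefficients serve any rate.
    (Toy sketch; FlowState's real basis and encoder are different.)"""
    n = int(horizon_seconds * sampling_rate_hz)
    times = [i / sampling_rate_hz for i in range(n)]
    return [sum(c * math.sin(2 * math.pi * (k + 1) * t / period)
                for k, c in enumerate(coeffs))
            for t in times]

# The same coefficients, queried at 1 Hz and 2 Hz, describe one forecast:
coarse = fbd_forecast([1.0], horizon_seconds=60, sampling_rate_hz=1.0)
fine = fbd_forecast([1.0], horizon_seconds=60, sampling_rate_hz=2.0)
# fine[30] and coarse[15] both evaluate the forecast at t = 15 s.
```

Sampling one continuous function at different rates, rather than predicting a fixed-length discrete vector, is what lets a model of this kind handle unseen time scales at inference.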
This week in Research:

🏅 IBM's Granite Guardian 8B claims the third spot on the LLM-AggreFact leaderboard, a fact-checking benchmark created by researchers at The University of Texas at Austin. It's the first guardrail model to not only detect harmful inputs and outputs, but also suggest a fix by snapping on an IBM-created "harm correction" low-rank adapter.

🇮🇳 IBM and BharatGen announce a new partnership to help drive the adoption of AI in India. Together, they'll use the multimodal AI and LLMs that BharatGen is working on to build the tools needed to serve Indian language speakers.

🐝 BeeAI Framework, our open-source platform for building production-ready AI agents in both Python and TypeScript, releases "RequirementAgent", a new feature to help developers define rules for their agents' behavior without the need for extensive code orchestration.

Read, share, and subscribe for more news from IBM Research. ⤵️
IBM Research reposted this
BharatGen is India's own large language model (LLM) that addresses the unique needs of our diverse and growing nation. It is being developed under the Department of Science and Technology, Government of India, National Mission on Interdisciplinary Cyber-Physical Systems (#NMICPS).

BharatGen is engaging IBM as a partner to collaborate on innovations, scalable architectures, and applications that will power transformative use cases across sectors such as agriculture, banking, education, and beyond. The strategic collaboration will bring together experts from IBM and the BharatGen consortium to co-develop application solutions that are truly made for Bharat.

Dr. Amit Singhee, Director, IBM Research; Mr. Sandip Patel, MD, IBM India & South Asia; Dr. Jaikrishnan Hari, Senior Manager, Business Development, IBM Research; Mr. Ramesh Karwani, Head, Technology Policy, IBM India; Prof. Ganesh Ramakrishnan, Principal Investigator, BharatGen & Professor, CSE, Indian Institute of Technology, Bombay; Prof. Aditya Maheshwari, IIM Indore & BharatGen Consortium Member; Dr. Ekta Kapoor, Head, FFT Division, Department of Science and Technology, Government of India; and other officials attended the MoU exchange event.

This partnership is yet another step in making India a global leader in responsible and inclusive AI, building solutions that empower every sector, every community, and every citizen. India DST Dr Jitendra Singh Narendra Modi
IBM Research reposted this
India's languages and cultures are among its greatest strengths, and we are proud to partner with BharatGen to ensure AI is shaped by this vibrancy. Together, BharatGen and IBM will accelerate AI adoption across the country by combining IBM's depth in AI innovation with BharatGen's sovereign multimodal and large language models designed for India's linguistic and cultural landscape. Our vision is to create inclusive, India-first AI solutions that can transform business, government, and society. Together we will unlock opportunities, foster innovation, and power growth for India's digital future. Read more here: https://ibm.co/6049ByoAD #IBMISA