July 18, 2023
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
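For readers who want to build on the released checkpoints, the sketch below shows one way to load a Llama 2-Chat model and generate a reply. It assumes access through the Hugging Face transformers library and the meta-llama/Llama-2-7b-chat-hf checkpoint identifier; these are assumptions about distribution and tooling, not details prescribed by the paper itself.

```python
# Minimal sketch: load a Llama 2-Chat checkpoint and generate one reply.
# The model id and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed Hub id; access is gated
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2-Chat expects its dialogue prompt format ([INST] ... [/INST]).
prompt = "[INST] Explain what a fine-tuned chat model is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```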
Authors
Louis Martin
Kevin Stone
Peter Albert
Amjad Almahairi
Yasmine Babaei
Nikolay Bashlykov
Soumya Batra
Dan Bikel
Lukas Blecher
Moya Chen
Guillem Cucurull
David Esiobu
Jude Fernandes
Jeremy Fu
Wenyin Fu
Brian Fuller
Cynthia Gao
Vedanuj Goswami
Naman Goyal
Anthony Hartshorn
Saghar Hosseini
Rui Hou
Marcin Kardas
Viktor Kerkez
Isabel Kloumann
Artem Korenev
Punit Singh Koura
Marie-Anne Lachaux
Thibaut Lavril
Jenya Lee
Diana Liskovich
Yinghai Lu
Xavier Martinet
Todor Mihaylov
Igor Molybog
Yixin Nie
Andrew Poulton
Jeremy Reizenstein
Kalyan Saladi
Alan Schelten
Ruan Silva
Ranjan Subramanian
Xiaoqing Ellen Tan
Binh Tang
Ross Taylor
Jian Xiang Kuan
Puxin Xu
Zheng Yan
Iliyan Zarov
Yuchen Zhang
Melanie Kambadur
Sharan Narang
Aurelien Rodriguez
Robert Stojnic
Thomas Scialom
Publisher
arXiv