CONVERSATIONAL AI

NLP

Llama 2: Open Foundation and Fine-Tuned Chat Models

July 18, 2023

Abstract

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
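To make "optimized for dialogue use cases" concrete, here is a minimal inference sketch, assuming the Hugging Face transformers checkpoints (the meta-llama/Llama-2-7b-chat-hf model id) and the [INST]/<<SYS>> turn format described in the paper; the checkpoint choice, prompt, and generation settings are illustrative, not part of this publication.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: access has been granted to the gated
# meta-llama/Llama-2-7b-chat-hf repository on Hugging Face,
# and torch plus accelerate are installed.
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",          # let accelerate place the weights
)

# Llama 2-Chat wraps each user turn in [INST] ... [/INST]; an optional
# <<SYS>> block inside the first turn carries the system prompt.
prompt = (
    "[INST] <<SYS>>\nYou are a helpful and honest assistant.\n<</SYS>>\n\n"
    "Summarize what fine-tuning a language model means. [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))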

Download the Paper

AUTHORS

Hugo Touvron

Louis Martin

Kevin Stone

Peter Albert

Amjad Almahairi

Yasmine Babaei

Nikolay Bashlykov

Soumya Batra

Praj Bhargava

Shruti Bhosale

Dan Bikel

Lukas Blecher

Cristian Canton Ferrer

Moya Chen

Guillem Cucurull

David Esiobu

Jude Fernandes

Jeremy Fu

Wenyin Fu

Brian Fuller

Cynthia Gao

Vedanuj Goswami

Naman Goyal

Anthony Hartshorn

Saghar Hosseini

Rui Hou

Hakan Inan

Marcin Kardas

Viktor Kerkez

Madian Khabsa

Isabel Kloumann

Artem Korenev

Punit Singh Koura

Marie-Anne Lachaux

Thibaut Lavril

Jenya Lee

Diana Liskovich

Yinghai Lu

Yuning Mao

Xavier Martinet

Todor Mihaylov

Pushkar Mishra

Igor Molybog

Yixin Nie

Andrew Poulton

Jeremy Reizenstein

Rashi Rungta

Kalyan Saladi

Alan Schelten

Ruan Silva

Eric Michael Smith

Ranjan Subramanian

Xiaoqing Ellen Tan

Binh Tang

Ross Taylor

Adina Williams

Andrew Kuan

Puxin Xu

Zheng Yan

Iliyan Zarov

Yuchen Zhang

Angela Fan

Melanie Kambadur

Sharan Narang

Aurelien Rodriguez

Robert Stojnic

Sergey Edunov

Thomas Scialom

Publisher

arXiv

Related Publications

September 24, 2025

RESEARCH

NLP

CWM: An Open-Weights LLM for Research on Code Generation with World Models

Jade Copet, Quentin Carbonneaux, Gal Cohen, Jonas Gehring, Jacob Kahn, Jannik Kossen, Felix Kreuk, Emily McMilin, Michel Meyer, Yuxiang Wei, David Zhang, Kunhao Zheng, Jordi Armengol Estape, Pedram Bashiri, Maximilian Beck, Pierre Chambon, Abhishek Charnalia, Chris Cummins, Juliette Decugis, Zacharias Fisches, François Fleuret, Fabian Gloeckle, Alex Gu, Michael Hassid, Daniel Haziza, Badr Youbi Idrissi, Christian Keller, Rahul Kindi, Hugh Leather, Gallil Maimon, Aram Markosyan, Francisco Massa, Pierre-Emmanuel Mazaré, Vegard Mella, Naila Murray, Keyur Muzumdar, Peter O'Hearn, Matteo Pagliardini, Dmitrii Pedchenko, Tal Remez, Volker Seeker, Marco Selvi, Oren Sultan, Sida Wang, Luca Wehrstedt, Ori Yoran, Lingming Zhang, Taco Cohen, Yossi Adi, Gabriel Synnaeve

September 24, 2025

CONVERSATIONAL AI

REINFORCEMENT LEARNING

Compute as Teacher: Turning Inference Compute Into Reference-Free Supervision

Dulhan Jayalath, Shashwat Goel, Thomas Simon Foster, Parag Jain, Suchin Gururangan, Cheng Zhang, Anirudh Goyal, Alan Schelten

September 24, 2025

RESEARCH

NLP

Code World Model Preparedness Report

Daniel Song, Peter Ney, Cristina Menghini, Faizan Ahmad, Aidan Boyd, Nathaniel Li, Ziwen Han, Jean-Christophe Testud, Saisuke Okabayashi, Maeve Ryan, Jinpeng Miao, Hamza Kwisaba, Felix Binder, Spencer Whitman, Jim Gust, Esteban Arcaute, Dhaval Kapil, Jacob Kahn, Ayaz Minhas, Tristan Goodman, Lauren Deason, Alexander Vaughan, Shengjia Zhao, Summer Yue

September 23, 2025

RESEARCH

NLP

MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interactions

Zilin Xiao, Qi Ma, Mengting Gu, Jason Chen, Xintao Chen, Vicente Ordonez, Vijai Mohan
