How to fine-tune a
Large Language Model
Durgesh Gupta
Lack of etiquette and manners is a huge turn off.
KnolX Etiquettes
 Punctuality
Join the session 5 minutes prior to the session start time. We start on
time and conclude on time!
 Feedback
Make sure to submit constructive feedback for all sessions, as it is very
helpful for the presenter.
 Silent Mode
Keep your mobile devices in silent mode, feel free to move out of session
in case you need to attend an urgent call.
 Avoid Disturbance
Avoid unwanted chit chat during the session.
1. What is Fine-tuning
2. Pre-trained Model Vs Fine-tuned Model
3. What is Pre-training?
4. Limitation of pre-trained base models
5. Advantage of fine-tuning your own LLM
6. What is Instruction fine-tuning
7. Data Preparation
8. Approach to fine-tuning
9. PEFT: Parameter Efficient fine-tuning
10. Error Analysis
11. Sample Training Code
01
What is Fine-tuning?
 Fine-tuning is adjusting a model's parameters to make it suitable for
performing a specific task.
 We can fine-tune a pre-trained model, in simple words, train it to
perform a specific task such as sentiment analysis, text generation,
or finding document similarity.
 What does fine-tuning do for the model?
− Gets the model to learn the data, rather than just have access to it.
− Steers the model toward more consistent outputs.
− Reduces hallucinations.
− Customizes the model to a specific use case.
02
Pre-trained Model Vs Fine-tuned Model
Pre-trained Model
 No data needed to get started
 Smaller upfront cost
 No technical/training knowledge required
 Connect data through retrieval (RAG)
 Fits more generic data
 Hallucinations
 RAG can miss or retrieve incorrect data
Fine-tuned Model
 Domain-specific data required
 Involves upfront compute cost
 Needs technical expertise
 Can use RAG too (more secure)
 More high-quality, domain-specific data
 Learns new information
 Can correct incorrect information
Note: Lower ongoing cost afterwards if the fine-tuned model is smaller
03
Training a Model to learn text generation
 Training LLMs from scratch is known as pre-training.
 It is a technique in which a large language model is trained on a vast amount of
unlabeled text.
 Using self-supervised learning, the model tries to predict the next word
with the help of the preceding words.
 In pre-training, the model learns to predict the next word in the text.
 Example: I am a data scientist.
− The model can create its own labeled data from this sentence, like:
Text | Label
I | am
I am | a
I am a | data
I am a data | scientist
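The labeling scheme above can be sketched in a few lines of Python; `next_word_pairs` is a hypothetical helper name, not something from the slides:

```python
def next_word_pairs(sentence):
    # Build (text, label) pairs for next-word prediction from one sentence.
    words = sentence.rstrip(".").split()
    return [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

pairs = next_word_pairs("I am a data scientist.")
# pairs[0] is ("I", "am"), pairs[-1] is ("I am a data", "scientist")
```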
04
Limitation of Pre-trained Model
 Contextual Understanding: Difficulty differentiating context.
 Generating Misinformation: May generate incorrect or misleading
information.
 Lack of Creativity: "Creativity" is limited to mimicking patterns.
 Hallucination: Generates text that is erroneous, nonsensical, or
detached from reality.
05
Benefit of fine-tuning your own LLM
 Performance
− Less Hallucination
− Increase Consistency
− Reduce unwanted information
 Privacy
− On Prem
− Prevent Leakage
− No breaches
 Reliability
− Control Uptime
− Lower Latency
− Increased Transparency
− Greater Control
Impact of fine-tuning on the model
 Behavior Change
− Learning to respond more consistently
− Learning to focus, e.g., moderation
− Teasing out capability, e.g., better at conversation
 Gain Knowledge
− Increasing knowledge of new specific concepts.
− Correcting old incorrect information
06
What is instruction fine-tuning?
 Instruction fine-tuning is a specialized technique to tailor large language
models to perform specific tasks based on explicit instructions.
 It refers to further training LLMs on a dataset of instruction-output
pairs in a supervised fashion, which bridges the gap between the
next-word prediction objective of LLMs and the users' objective of
having LLMs adhere to human instructions.
 Teaches the model to behave more like a chatbot.
 Provides a better user interface for model interaction.
− Increased AI adoption, from thousands of researchers to millions of
people.
 Can access the model's pre-existing knowledge.
Instruction-following datasets
 Some existing data is ready as-is online:
− FAQs
− Customer support conversations
− Slack messages
07
Data Selection Criteria
Better:
 Higher quality
 Diversity
 Real
 More
Worse:
 Lower quality
 Homogeneity
 Generated
 Less
Steps to prepare your data
1. Collect instruction-response pairs
2. Concatenate pairs (add prompt template, if required)
3. Tokenization: Pad, Truncate
4. Split into train/test
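The four preparation steps above can be sketched as follows. The prompt template, whitespace tokenizer, and parameter values are illustrative assumptions; real pipelines use a learned subword tokenizer:

```python
import random

# Hypothetical prompt template, assumed for this sketch.
PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def prepare(pairs, max_len=32, pad_token="<pad>", test_frac=0.2, seed=0):
    # Steps 2-4: concatenate with a template, tokenize, pad/truncate, split.
    examples = []
    for instruction, response in pairs:
        text = PROMPT_TEMPLATE.format(instruction=instruction, response=response)
        tokens = text.split()                            # toy whitespace tokenizer
        tokens = tokens[:max_len]                        # truncate
        tokens += [pad_token] * (max_len - len(tokens))  # pad
        examples.append(tokens)
    random.Random(seed).shuffle(examples)
    n_test = max(1, int(len(examples) * test_frac))
    return examples[n_test:], examples[:n_test]          # train, test
```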
Tokenization
 Tokenization is the process of splitting text into individual units,
typically words or subwords.
 This step is crucial for the model to understand the structure of
the text.
 In languages like English, tokenization is relatively
straightforward, as words are typically separated by spaces.
Tokenization (Encoding)
Input:     This is an input text.
Tokens:    [CLS] This is an input text . [SEP]
Token IDs: 101 2023 2003 1037 7953 2058 1012 102
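A toy encoder in the spirit of the example above. The vocabulary and IDs here are made up for illustration; real tokenizers (e.g., WordPiece, BPE) learn a subword vocabulary, and the 101/102 special-token IDs follow BERT's convention:

```python
def encode(text, vocab, cls_id=101, sep_id=102, unk_id=0):
    # Wrap the token sequence in [CLS]/[SEP] and map each token to its ID.
    tokens = text.lower().rstrip(".").split()
    return [cls_id] + [vocab.get(t, unk_id) for t in tokens] + [sep_id]

# Toy vocabulary, invented for this sketch.
vocab = {"this": 1, "is": 2, "an": 3, "input": 4, "text": 5}
ids = encode("This is an input text.", vocab)  # [101, 1, 2, 3, 4, 5, 102]
```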
08
Approach To Fine-tune an LLM
The steps for fine-tuning a Large Language Model are:
 Figure out the task.
 Collect data related to the task: input/output pairs.
 Generate data, if required.
 Fine-tune a small model, e.g., 50M-1B parameters.
 Vary the amount of data you give your model.
 Evaluate the model's performance.
 Collect more data to improve.
 Increase task complexity.
 Increase the model size for performance.
Fine-tuning Lifecycle
09
PEFT: Parameter Efficient Fine Tuning
 PEFT stands for Parameter-Efficient Fine-tuning.
 ML models are essentially complex mathematical functions with
numerous coefficients, or weights.
 These coefficients determine the model's behavior and make it
capable of learning from data.
 During training, we adjust these coefficients to minimize
errors and make accurate predictions.
 LLMs can have billions of parameters, and changing all of them
during training can be computationally expensive and memory-
intensive.
 PEFT, as a subset of fine-tuning, takes parameter efficiency seriously.
 Instead of altering all of the model's coefficients, PEFT updates only
a subset of them.
 This significantly reduces the computational and memory
requirements.
PEFT: Parameter Efficient Fine Tuning
 LoRA (Low-Rank Adaptation):
− A technique that exploits the fact that some weights have more
significant impact than others. In LoRA, the large weight-update
matrix is factorized into two smaller low-rank matrices.
− This reduces the number of coefficients that need adjustment, making the
fine-tuning process more efficient.
 QLoRA (Quantization + Low-Rank Adaptation):
− Quantization involves converting high-precision floating-point coefficients
into lower-precision representations, such as 4-bit integers.
− Quantization reduces the memory footprint by lowering the precision of
these coefficients.
− For instance, a 32-bit floating-point number can be represented as a
4-bit integer within a specific range. This conversion significantly shrinks
the memory footprint.
LoRA and QLoRA for Coefficient Selection
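The factorization idea can be shown numerically. This is a minimal NumPy sketch, not the `peft` library API; the layer sizes and rank below are arbitrary assumptions:

```python
import numpy as np

# Hypothetical layer sizes; real LLM layers are much larger.
d_out, d_in, rank = 512, 512, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pre-trained weights

# LoRA: learn a low-rank update B @ A instead of a full d_out x d_in matrix.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))              # B starts at zero, so the update starts at zero

def forward(x):
    # Effective weights are W + B @ A; only A and B are trained.
    return (W + B @ A) @ x

full_params = d_out * d_in               # 262,144 coefficients to train fully
lora_params = rank * (d_in + d_out)      # only 8,192 with LoRA
```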
10
Evaluating Generative AI Models
Evaluating generative models is notoriously difficult!
 Human Evaluation: Evaluation by human experts is the most reliable.
 Test Data: Good test data is crucial
− High quality
− Accurate
− Generalizes
− Not seen in the training data
 Elo Rankings
− Rank the top LLMs based on their Elo scores.
− The Elo scores are computed from the results of A/B tests, in which the
LLMs are pitted against each other in a series of head-to-head "games".
− The ranking system employed is based on the Elo rating system.
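For reference, the Elo update used in such A/B rankings looks like this (the K-factor of 32 is a common but arbitrary choice):

```python
def expected_score(r_a, r_b):
    # Probability that model A beats model B under the Elo model.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    # score_a is 1 if A wins, 0 if A loses, 0.5 for a tie.
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))
```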
Error Analysis
• Understand the base model's behaviour before fine-tuning
• Categorize errors, then iterate on the data to fix these problems.

Category | Example with Problem | Example Fixed
Misspelling | Your kidney is healthy, but you lever is sick, get your lever examined | Your kidney is healthy, but your liver is sick
Too Long | Diabetes is less likely when you eat a healthy diet makes diabetes less likely, making ... | Diabetes is less likely when you eat a healthy diet
Repetitive | Medical LLMs can save healthcare workers time and money and time and money and time and money. | Medical LLMs can save healthcare workers time and money
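Spotting categories like "Repetitive" can be partly automated. A sketch of one such heuristic, an n-gram repetition score (the choice of n and any cutoff are assumptions, not from the slides):

```python
def repetition_score(text, n=3):
    # Fraction of duplicated word n-grams; high values flag repetitive outputs.
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    return 1 - len(set(ngrams)) / len(ngrams)

bad = "Medical LLMs can save healthcare workers time and money and time and money and time and money."
good = "Medical LLMs can save healthcare workers time and money."
```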
11
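The sample-training-code slide itself is not included in this export. As an illustrative stand-in only, here is a toy next-token "fine-tuning" loop in pure NumPy on the deck's example sentence; a real run would use a framework such as PyTorch with a pre-trained checkpoint:

```python
import numpy as np

# Toy setup: a bigram "model" whose weight matrix we fine-tune by gradient descent.
vocab = ["<pad>", "I", "am", "a", "data", "scientist"]
tok = {w: i for i, w in enumerate(vocab)}
pairs = [("I", "am"), ("I am", "a"), ("I am a", "data"), ("I am a data", "scientist")]

V = len(vocab)
rng = np.random.default_rng(0)
W = rng.standard_normal((V, V)) * 0.1  # column x holds next-token logits for context x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5
for epoch in range(300):
    for text, label in pairs:
        x, y = tok[text.split()[-1]], tok[label]  # condition on the last token only
        p = softmax(W[:, x])
        grad = p.copy()
        grad[y] -= 1.0                 # gradient of cross-entropy w.r.t. the logits
        W[:, x] -= lr * grad

def predict(word):
    return vocab[int(np.argmax(W[:, tok[word]]))]
```

After training, `predict` reproduces the sentence token by token, which is all this toy is meant to show.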