Large Language Models
and How to Tame Them.
Using ChatGPT in academic settings.
by David Döring
AI.Analytics
h2.de/ZAKKI
Poll
●
Please answer the questions.
●
What do you think of when you hear ChatGPT?
●
Have you used ChatGPT (or related tools) for studying or
academics?
●
How good were the results, in your opinion?
●
Please use the Q&A tab to collect upcoming questions.
●
My Link
Outline
●
What is this about?
●
Technobabble
●
Application Recommendations
●
Risks and Side Effects
●
Discussion and Questions
LLM – Large Language Model
●
AI models that produce natural language
●
AI models are purely statistical functions
●
LLMs use billions of “parameters”, hence “Large”
●
LLMs require more computing power than a single computer can handle
●
LLMs are usually cloud services
ChatGPT creates human-like dialog
by predicting weighted word tokens.
Try this out: https://platform.openai.com/tokenizer
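For a quick offline look at the same idea, a minimal sketch using OpenAI's open-source tiktoken library (an assumption; the slide itself only points to the web tokenizer):

```python
# A minimal sketch of how ChatGPT-style models split text into tokens,
# using OpenAI's open-source tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4 models

text = "Large Language Models and How to Tame Them."
token_ids = enc.encode(text)

print(len(token_ids), "tokens:", token_ids)
# Decoding each id shows the word pieces the model actually predicts.
print([enc.decode([t]) for t in token_ids])
```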
Everyone knows „Chat“, but
What does „GPT“ stand for?
Generative Pretrained Transformer
???
GPT: Generative
It generates text, quite similar to autocomplete.
… just a bit more complicated.
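To make the autocomplete comparison concrete, here is a toy sketch; the probability table is invented purely for illustration and has nothing to do with the real model's weights:

```python
# Toy illustration of "autocomplete, just bigger": repeatedly pick the most
# likely next word given the words so far. The probability table is invented
# for illustration; a real LLM computes it with billions of parameters.
next_word_probs = {
    ("I", "am"):      {"a": 0.6, "the": 0.3, "happy": 0.1},
    ("am", "a"):      {"student": 0.7, "teacher": 0.2, "robot": 0.1},
    ("a", "student"): {"[END]": 0.9, "of": 0.1},
}

tokens = ["I", "am"]
while tokens[-1] != "[END]":
    context = (tokens[-2], tokens[-1])                   # look at the last two words
    candidates = next_word_probs[context]
    tokens.append(max(candidates, key=candidates.get))   # greedy: take the most likely word

print(" ".join(tokens))  # I am a student [END]
```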
GPT: Pretrained
●
Pretrained on a huge amount of text
●
Fine-tuned for chat
●
Improved through a rating system
●
Adaptable to new applications
Fine-Tuning
OpenAI employees created or gathered many example dialogs
to „teach“ human help-desk communication.
Please explain the transformer architecture.
Yes, I could.
Could you explain this in simple language?
The transformer architecture is a deep learning model that has gained significant
popularity and success in various natural language processing (NLP) tasks. It was
introduced in the paper "Attention is All You Need" by Vaswani et al., published in 2017.
The core idea behind the transformer is the use of self-attention mechanisms to weigh
the importance of different words in a sentence, enabling the model to capture
relationships and dependencies regardless of their positions within the input sequence.
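As a rough illustration of what one such example dialog could look like as training data, a hedged sketch in the style of OpenAI's publicly documented chat fine-tuning format; the data actually used for ChatGPT is not public:

```python
# Hypothetical training example in the style of OpenAI's documented chat
# fine-tuning format (JSON Lines, one dialog per line). The actual internal
# data used to fine-tune ChatGPT is not public.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Could you explain the transformer architecture in simple language?"},
        {"role": "assistant", "content": "Sure! A transformer reads the whole input at once and uses "
                                         "attention to decide which words matter most for each prediction."},
    ]
}

print(json.dumps(example, ensure_ascii=False))
```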
Reinforcement Learning
Please explain the transformer architecture.
1: ... 2: ... 3: ...
4: ... 5: ... 6: ...
Which answer do humans prefer?
Employees and users rank and rate answers.
GPT: Transformer
„Attention is All You Need“ – 2017 Google Brain/Research
Attention layer: f(x, y) = (M × y) × x
Self-attention: f(x) = (M × x) × x
Causal self-attention
Image sources:
https://www.tensorflow.org/text/tutorials/transformer
https://arxiv.org/abs/1706.03762
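The formulas above are heavily simplified. As an illustration of the underlying mechanism, a minimal single-head self-attention sketch in NumPy; the learned projection matrices of the real architecture are omitted, so this is an assumption-laden toy, not the paper's full layer:

```python
# Minimal single-head self-attention sketch in NumPy (learned projection
# matrices and multi-head logic omitted for clarity).
import numpy as np

def self_attention(x, causal=True):
    """x: (sequence_length, model_dim) matrix of token embeddings."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)              # how much each token "attends" to each other token
    if causal:
        # Causal mask: a token may only look at itself and earlier tokens.
        mask = np.triu(np.ones_like(scores), k=1).astype(bool)
        scores[mask] = -np.inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ x                               # weighted mix of token vectors

x = np.random.randn(5, 8)                            # 5 tokens, 8-dimensional embeddings
print(self_attention(x).shape)                       # (5, 8)
```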
GPT: Transformer
Translation example:
Input: [START] Ich bin Student [END]
“Shifted” output (what the decoder has seen so far): [START] I am a student
Output (what it predicts): I am a student [END]
It writes a new sentence while preserving the content, and signals when it is done.
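A tiny sketch that restates the slide's example as next-token training pairs; the pairing below is the standard "shifted target" reading of such diagrams, stated here as an assumption rather than a detail taken from the slide:

```python
# The slide's translation example as data: the decoder input is the target
# sentence shifted right by one position, and the training target at each
# position is the next token.
target        = ["I", "am", "a", "student", "[END]"]
decoder_input = ["[START]"] + target[:-1]     # the "shifted" output from the slide

for seen, predict in zip(decoder_input, target):
    print(f"after '{seen}' the model should predict '{predict}'")
```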
GPT – What does it do?
●
handles text as tokens
●
predicts the most likely next token
●
requires context
●
generates text that looks and feels correct
GPT – What does it NOT do?
●
Search the internet for correct answers.
●
Fact-Check
●
obtain up-to-date information
●
Actually perform given tasks like
– „calculate“
– „research“
– „imagine“
How can we use it anyway?
That doesn’t sound useful for academics. So...
GPT as a product is an API
…
API: Application Programming Interface
Products built on the API include Bing Chat and Microsoft Copilot.
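A minimal sketch of calling that API directly from Python, assuming the official openai client library; method names and response handling differ between library versions, so treat this as an outline rather than a definitive implementation:

```python
# Minimal sketch of using the GPT API directly (pip install openai).
# Assumes an OPENAI_API_KEY environment variable; the exact call style
# differs between versions of the openai client library.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a tutor for university students."},
        {"role": "user", "content": "Explain the transformer architecture in two sentences."},
    ],
)

print(response["choices"][0]["message"]["content"])
```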
Use the correct tool for the job!
●
ChatGPT on chat.openai.com is likely not the
tool you need
●
Many free tools use GPT-3.5
●
Many paid tools use GPT-4
●
Use tools that can reference sources.
●
Prefer tools that protect your data rights.
What can we use LLMs for?
●
Getting started / beating procrastination
●
Research and summaries
●
Gathering ideas
●
Programming & formatting
●
Improvement
●
Text to presentation
●
Review
●
Roleplay (e.g. exam situation)
●
Translation
What are we allowed to use?
●
The use of AI-generated text in your work is permitted.
●
Use the official Declaration of Originality from H2 (EN; DE).
●
Mark AI-generated or enhanced content.
●
Reference it with "supported by …”.
●
Include a section “overview of resources used”.
●
Describe how you used these tools, including date and settings.
●
Review AI-generated content.
●
Mistakes made by using LLMs are your mistakes!
Known Problems
●
Bias and stereotypes
●
Lack of education / loss of basic skills
●
„Hallucinations“: confidently incorrect
●
Traceability unclear
●
Quotability complicated
●
Data protection
●
Inequality
●
Financial burden
●
Lack of transparency
●
Rights and regulations unclear
Countermeasures:
●
Add more of your own context.
●
Ask for reasoning. Fact-check.
●
Use tools with added context.
●
Start quick, but spend more time.
●
Check the FAQs of apps.
●
Spend money for better results.
●
Support open source.
Better Results
Repeat Stuff
●
GPT-3.5 context size: 4,097 tokens
●
“Short memory”
●
Important knowledge needs to be repeated
Improve your Prompt
●
Start with known good prompts
●
GitHub “Awesome ChatGPT Prompts”
●
Roleplay: start by describing its role (see the sketch after this list)
Add Knowledge and Context
●
Provide information from citable sources
●
Use tools that automate that
●
Perplexity.AI, ChatPDF.com, Bing Chat, Bard …
Work Interactively
●
Repeat and refine requests
●
Use tools that allow reworking and editing
●
Notion.so, Microsoft Office 365, …
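As referenced in the list above, a small sketch of how a role-describing prompt and repeated context could be assembled before sending a request; the message format follows the common chat-API convention and the wording is purely illustrative:

```python
# Sketch of assembling a role-play prompt: describe the role first, repeat the
# important context, then ask the actual question. Format follows the common
# chat-API convention of a list of role/content messages.
role_description = (
    "You are a strict examiner for an undergraduate machine learning exam. "
    "Ask one question at a time and grade my answer before continuing."
)
context_to_repeat = "The exam covers transformers, attention, and tokenization."

messages = [
    {"role": "system", "content": role_description},
    {"role": "user", "content": context_to_repeat + " Please start the mock exam."},
]

# Work interactively: after each answer, append it and the model's reply,
# and repeat the key context so it stays inside the limited context window.
messages.append({"role": "user", "content": "Remember: " + context_to_repeat + " Next question, please."})

for m in messages:
    print(f"{m['role']}: {m['content']}")
```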
Links and more
https://t1p.de/l8qrn
