A brief history of NLP
- Welcome to Introduction to Transformers for Natural Language Processing: using BERT, GPT, and more to solve modern NLP tasks. My name is Sinan Ozdemir. I'm currently the founder and CTO of a company called Sheba. I was formerly a lecturer of business analytics and computer science at Johns Hopkins, where I received my master's in theoretical mathematics. I am an entrepreneur in the San Francisco Bay Area and the author of several textbooks, including "Principles of Data Science" and, most recently, the "Feature Engineering Bookcamp." So let's dive right into lesson one, attention and language models. Section 1.1, a brief history of natural language processing. Before we dive into how to use transformers and transformer-based models to perform natural language processing tasks, it would behoove us to look back over the last several decades at the advancements in natural language processing and what has led us to this point, to this renaissance of natural language processing…