From the course: Introduction to Transformer Models for NLP

Summary

- Thank you for watching "Introduction to Transformers for NLP." These lessons started with an overview of the history of modern NLP and language models, covered how attention mechanisms changed the way people thought about sequence modeling, and showed how transfer learning theory made large, pre-trained models possible and useful. We then got down to business by working with BERT for natural language understanding. We saw how BERT learns about language through extensive pre-training, and how we can fine-tune our own versions of BERT for our own NLP tasks. We switched gears to see how the GPT family of models learns to read and write free text, and how we can fine-tune our own GPT versions to write in new styles. After that, we saw complex applications of both GPT and BERT through semantic search and multitask learning. We then saw the power of the end-to-end transformer with T5, saw how we can fine-tune T5 for our own needs, and how easy it could…
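
As a quick refresher on the BERT fine-tuning workflow the summary mentions, here is a minimal sketch using the Hugging Face transformers and datasets libraries. The IMDB dataset, model checkpoint, subset sizes, and hyperparameters are placeholder assumptions for illustration, not material taken from the course.

```python
# Minimal sketch: fine-tuning a pre-trained BERT for binary text classification.
# Dataset, checkpoint, and hyperparameters are placeholders, not course content.
from transformers import (
    BertTokenizerFast,
    BertForSequenceClassification,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Load a small sentiment dataset as a stand-in for "our own NLP task".
dataset = load_dataset("imdb")

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncate/pad each review into BERT's fixed-length input format.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=256
    )

encoded = dataset.map(tokenize, batched=True)

# Pre-trained BERT body with a freshly initialized 2-label classification head.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="bert-finetuned",      # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    # Small subsets so the sketch runs quickly; use the full splits in practice.
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=encoded["test"].select(range(500)),
)

trainer.train()
```

The same pattern (swap the tokenizer, model class, and dataset) extends to the GPT and T5 fine-tuning covered in the later lessons.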
