From the course: Hands-On Introduction to Transformers for Computer Vision


Fine-tuning in practice: Adapting ViTs to new tasks

- [Instructor] Hey, everyone. Welcome to chapter five, video two, Fine-Tuning in Practice: Adapting Vision Transformers to New Tasks. Now it's time to try out transfer learning for ourselves. We're going to take the pre-trained model from before and adapt it to a new task. There's a coding example that runs alongside these slides, so feel free to have it open if it helps bring more context in. But here we'll talk at a high level about how transfer learning works. So, a quick definitional nuance: transfer learning versus fine-tuning. The terms are often used interchangeably, and if you said one and meant the other, it wouldn't be the biggest deal in the world. However, transfer learning keeps the original pre-trained weights frozen and trains only the classifier head, whereas fine-tuning updates all of the model's weights after pre-training. Like I mentioned, it's a small, nuanced difference, and you shouldn't use…