From the course: Microsoft Azure Data Scientist Associate (DP-100) Cert Prep: 4 Implement Responsible Machine Learning
Configure compute for a batch deployment - Azure Tutorial
- [Instructor] When you're deploying models via Azure ML Studio, your main options are batch and real-time endpoints. There's also the ability to deploy directly as a web service, which essentially abstracts away some of the complexity of packaging a model. When you're doing batch inference, you're typically running it periodically. So let's say once a night you would go through and do credit card scoring. In the case of real-time, you would run the endpoint 24/7: you'd deploy an endpoint and clients would continuously send requests to it. So really there are two key options, plus a peripheral option that packages the entire service. Let's take a look at how this works. If we go over to Azure ML Studio and dive into the model prediction interface, we see three choices: deploy to real-time, deploy to batch, and deploy to service. Note that these options are only available for models based on supported frameworks, like scikit-learn or PyTorch. If we select Deploy to Batch, it asks for some options; you configure them, and it creates the service. We can also look at an existing endpoint, because this is where they show up: registered real-time endpoints appear under the real-time endpoints tab, and registered batch endpoints appear under the batch endpoints tab. You can see I created one earlier, and it can be run periodically if I create a job against it; it's designed to run on a schedule. The other type of endpoint, the real-time endpoint, you would set up so that it runs constantly and receives API requests from clients. So in a nutshell, there are a couple of key ways to deploy in Azure ML Studio, and there's also an advanced option that lets you take a framework model and deploy it directly into production.
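To make the "configure it" step concrete, here is a minimal sketch of an Azure ML v2 batch deployment definition, which is where the compute for a batch endpoint gets configured. The endpoint name (`credit-scoring-batch`), model name (`credit-model`), and compute cluster name (`cpu-cluster`) are all hypothetical placeholders; you would substitute resources that actually exist in your workspace.

```yaml
# Sketch of a batch deployment YAML (Azure ML v2 schema).
# All resource names below are illustrative assumptions.
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: nightly-scoring
endpoint_name: credit-scoring-batch   # hypothetical batch endpoint
model: azureml:credit-model@latest    # hypothetical registered model
compute: azureml:cpu-cluster          # hypothetical AmlCompute cluster
resources:
  instance_count: 2                   # nodes to use for the scoring job
max_concurrency_per_instance: 4       # parallel scoring processes per node
mini_batch_size: 10                   # files passed to each scoring call
output_action: append_row
output_file_name: predictions.csv
```

A deployment like this is typically created with the Azure CLI, for example `az ml batch-deployment create --file deployment.yml`, and then invoked on a schedule (the "once a night" credit card scoring scenario) by submitting jobs against the endpoint.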
Contents
- Configure compute for a batch deployment (2m 11s)
- Deploy a model to a batch endpoint (4m 2s)
- Test a real-time deployed service (4m 23s)
- Apply Machine Learning Operations (MLOps) practices (4m 32s)
- Trigger an Azure Machine Learning pipeline, including from Azure DevOps or GitHub (2m 36s)
- Conclusion (1m 6s)