Notes about Azure ML, Part 2 - Computation Options

December 28, 2021
machine-learning azure ml Computation

This post briefly discusses two of the computation options available in Azure Machine Learning: compute instances and compute clusters.

A Compute Instance in Azure Machine Learning is a cloud-based workstation where all the necessary frameworks, tools and libraries are installed and configured, making it easy to run CPU- or GPU-based machine learning experiments and to manage Azure ML resources. An instance is created by selecting one of the VM sizes available in Azure. A number of additional advanced configuration settings are available during creation, such as the ability to schedule when the instance runs and whether it can be accessed via SSH. Once a Compute Instance is created, unless a schedule is defined, it is up to the user to start and stop it, so it is advisable to monitor this aspect carefully to limit the overall cost of the experiment.
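As a rough sketch, a Compute Instance can be provisioned with the Azure ML Python SDK (v1). This assumes `azureml-core` is installed and a workspace `config.json` is available locally; the instance name and VM size below are placeholders, not values from this post.

```python
# Sketch: provisioning a Compute Instance with the Azure ML SDK v1.
# Assumes azureml-core is installed and a config.json for the workspace
# exists locally; the instance name and VM size are placeholders.
from azureml.core import Workspace
from azureml.core.compute import ComputeInstance, ComputeTarget

ws = Workspace.from_config()  # loads subscription/workspace details

instance_config = ComputeInstance.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",   # any VM size available in the region
    ssh_public_access=False,     # set to True to allow SSH access
)
instance = ComputeTarget.create(ws, "my-instance", instance_config)
instance.wait_for_completion(show_output=True)

# We pay while the instance is running, so stop it when not in use:
instance.stop(wait_for_completion=True)
```

The explicit `stop()` call at the end reflects the point above: without a schedule, switching the instance off is the user's responsibility.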

[Figure: Azure Machine Learning Compute Instance]

It is possible to access an Azure ML Compute Instance using several methods, namely:

[Figure: Azure Machine Learning Compute Instance access methods]

For production-grade model training, an Azure Machine Learning Compute Target is used. Compute targets are multi-node, scalable compute resources on which we can execute a training script or host a service deployment, making it possible to use parallel processing for such computations. Each node is created with a user-specified hardware configuration.

A critical parameter when creating a compute target is whether the cluster is dedicated or low priority. Low-priority clusters are allocated only when spare capacity is available, so experiments deployed on them can take some time to commence; they are generally used for development and testing. They are, however, substantially cheaper than dedicated clusters.
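The dedicated/low-priority choice surfaces as the `vm_priority` argument when provisioning a cluster with the SDK v1. A minimal sketch, again assuming `azureml-core` and a local workspace config; the cluster name, VM size and node counts are placeholders:

```python
# Sketch: creating a low-priority compute cluster with the Azure ML SDK v1.
# Assumes azureml-core is installed and a workspace config.json is present.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

cluster_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    min_nodes=0,                  # scale to zero when idle to limit cost
    max_nodes=4,
    vm_priority="lowpriority",    # or "dedicated" for guaranteed capacity
)
cluster = ComputeTarget.create(ws, "cpu-cluster", cluster_config)
cluster.wait_for_completion(show_output=True)
```

Setting `min_nodes=0` lets the cluster deallocate completely between runs, which, combined with low priority, keeps development costs down.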

[Figure: Azure Machine Learning Compute Target]

Compute Clusters are required when implementing Automated Machine Learning Experiments.

There are two additional computation options available in Azure Machine Learning:

Inference Clusters create a Docker container that hosts the model and the resources needed to use it. This container is then deployed to a compute target that serves the ML model.
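In the SDK v1, inference clusters are backed by Azure Kubernetes Service and created via `AksCompute`. A minimal sketch under the same assumptions (workspace config available; the cluster name is a placeholder):

```python
# Sketch: creating an AKS inference cluster with the Azure ML SDK v1.
# Assumes azureml-core is installed and a workspace config.json is present.
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

ws = Workspace.from_config()

# Default configuration; properties such as agent_count and vm_size
# can be passed to provisioning_configuration() if needed.
aks_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(ws, "inference-aks", aks_config)
aks_target.wait_for_completion(show_output=True)
```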

Attached Computes make it possible to attach Azure Databricks, Data Lake Analytics, HDInsight or an existing VM as a compute for the workspace; these resources are not managed by Azure Machine Learning.
