Notes about Azure ML, Part 2 - Computation Options

Tue December 28, 2021
machine-learning azure ml Computation

This post briefly discusses two of the computation options available in Azure Machine Learning: compute instances and compute clusters.

A Compute Instance in Azure Machine Learning is a cloud-based workstation on which all the necessary frameworks, tools and libraries are installed and configured, making it easy to run CPU- or GPU-based machine learning experiments and to manage Azure ML resources. We create an instance by selecting one of the VM sizes available in Azure. A number of advanced configuration settings are available during creation, such as the ability to schedule when the instance runs and whether it can be accessed via SSH. Once a Compute Instance is created, unless a schedule is defined, it is up to the user to switch the instance on and off, so it is advisable to monitor this carefully to limit the overall cost of the experiment.

Azure Machine Learning Compute Instance
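
As a rough sketch, a Compute Instance can also be provisioned from code using the Azure ML Python SDK (v1, `azureml-core`); the instance name and VM size below are placeholder values chosen for illustration, not a recommendation.

```python
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, ComputeInstance

# Connect to the workspace described by the local config.json
ws = Workspace.from_config()

# Provision a Compute Instance; the VM size is an arbitrary example
ci_config = ComputeInstance.provisioning_configuration(
    vm_size='STANDARD_DS3_V2',
    ssh_public_access=False        # no SSH access in this example
)

instance = ComputeTarget.create(ws, 'demo-instance', ci_config)
instance.wait_for_completion(show_output=True)

# The instance bills while it is running, so stop it when it is not needed
instance.stop(wait_for_completion=True)
```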

It is possible to access an Azure ML Compute Instance in several ways, shown below:

Azure Machine Learning Compute Instance Access

For production-grade model training, an Azure Machine Learning Compute Target is used. Compute targets are multi-node, scalable compute resources on which we can execute our training scripts or host our service deployments, making parallel processing possible. Each node can be created with a user-specified hardware configuration.

A critical choice when creating a compute target is whether the cluster is dedicated or low priority. Low-priority clusters are allocated only when the underlying resources are available, so experiments deployed on them can take some time to commence; they are generally used for development and testing. They are, however, substantially cheaper than dedicated clusters.

Azure Machine Learning Compute Target
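
The sketch below, again using the Python SDK (v1), shows how such a cluster might be created; the cluster name, VM size and node counts are illustrative, and `vm_priority` is where the dedicated versus low-priority choice is made.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

ws = Workspace.from_config()
cluster_name = 'demo-cluster'       # placeholder name

try:
    # Reuse the cluster if it already exists in the workspace
    cluster = ComputeTarget(workspace=ws, name=cluster_name)
except ComputeTargetException:
    config = AmlCompute.provisioning_configuration(
        vm_size='STANDARD_DS3_V2',  # hardware configuration of each node
        vm_priority='lowpriority',  # or 'dedicated'
        min_nodes=0,                # scale down to zero when idle
        max_nodes=4                 # upper bound for parallel runs
    )
    cluster = ComputeTarget.create(ws, cluster_name, config)
    cluster.wait_for_completion(show_output=True)
```

Setting `min_nodes` to 0 lets the cluster scale down completely when no experiment is running, which also helps keep costs under control.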

Compute Clusters are required when implementing Automated Machine Learning Experiments.

There are two additional computation options available in Azure Machine Learning:

Inference Clusters create a Docker container that hosts a trained model together with the resources needed to use it. This container is then deployed to a compute target that serves the model.
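
As an example, an AKS-based Inference Cluster could be provisioned roughly as follows (Python SDK v1); the names, VM size and node count are placeholders, and `DEV_TEST` relaxes the sizing requirements for experimentation.

```python
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

ws = Workspace.from_config()

# Provision a small AKS cluster intended for development and testing
aks_config = AksCompute.provisioning_configuration(
    vm_size='STANDARD_DS3_V2',
    agent_count=3,
    cluster_purpose=AksCompute.ClusterPurpose.DEV_TEST
)

aks_target = ComputeTarget.create(ws, 'demo-aks', aks_config)
aks_target.wait_for_completion(show_output=True)
```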

Attached Computes make it possible to attach Databricks, Data Lake Analytics, HDInsight or an existing VM as a compute resource for your workspace; these resources are not managed by Azure Machine Learning.
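
A sketch of attaching an existing Databricks workspace as a compute target (Python SDK v1) is shown below; the resource group, workspace name and access token are placeholders to be replaced with your own values.

```python
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, DatabricksCompute

ws = Workspace.from_config()

# Attach an existing Databricks workspace; Azure ML does not manage it
attach_config = DatabricksCompute.attach_configuration(
    resource_group='my-resource-group',        # placeholder
    workspace_name='my-databricks-workspace',  # placeholder
    access_token='<databricks-access-token>'   # Databricks personal access token
)

databricks_compute = ComputeTarget.attach(ws, 'demo-databricks', attach_config)
databricks_compute.wait_for_completion(show_output=True)
```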



