Notes about Azure ML, Part 6 - Experiment Creation

Sun March 13, 2022
machine-learning azure ml experiment

In a previous post, we saw how to create an ML workspace and provision a compute resource in Azure using the AzureML SDK; now we will see how to execute an experiment in the ML environment using the same SDK.

We will start by executing a simple experiment that will print a message. The steps required to run this trivial experiment are:

- Create the script that constitutes the experiment.
- Create a ScriptRunConfig that points to the script, its source directory, and the compute target.
- Submit the configuration through an Experiment instance and examine the results in the portal.

The Experiment

An essential task in this process is to create the script to execute the experiment. In our case, the following procedure is observed for all experiments:

- The script is placed in a directory of its own (Experiment_1 in this example).
- The script file is named after the experiment (experiment_1.py).
- The directory and script name are then referenced by the ScriptRunConfig that submits the run.

In our simple example, the script to execute the experiment is as follows:

print('Experiment Executed!')

Script Execution

The script to execute our trivial experiment on a compute target is created as follows:


from azureml.core import Workspace, Experiment, ScriptRunConfig
import os
import constants  # local module holding TARGET_NAME and EXPERIMENT_NAME

def run_experiment():

    # Load the workspace from the config.json stored in the .azureml folder
    config_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), '.azureml')
    ws = Workspace.from_config(path=config_path)

    # Point the run configuration to the experiment script and the compute target
    config = ScriptRunConfig(
        source_directory=os.path.join(os.path.dirname(os.path.realpath(__file__)), 'Experiment_1'),
        script='experiment_1.py',
        compute_target=constants.TARGET_NAME)

    # Submit the script for execution; submit() returns a Run instance
    experiment = Experiment(ws, constants.EXPERIMENT_NAME)
    run = experiment.submit(config)

    # Print the portal URL where the run can be monitored
    aml_run = run.get_portal_url()
    print(aml_run)

if __name__ == '__main__':
    run_experiment()

It is important to note that:

The submit method of the Experiment class returns a Run instance. This instance contains the information necessary to access the experiment results, including the URL of the portal where the results can be viewed.

Upon execution, the script is packaged into a Docker container and executed on the compute target. We can read the output of the script in the experiment log.
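
Beyond printing the portal URL, the Run instance can be used to follow the run directly from the submitting script; a minimal sketch, where run is the instance returned by experiment.submit above:

run.wait_for_completion(show_output=True)   # block until the run finishes, streaming the experiment log
print(run.get_status())                     # final status, e.g. 'Completed' or 'Failed'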

Environment

The example above is elementary and of little use, but it shows the basic steps to execute a script on a compute target. To run more useful experiments, we will need a richer environment that includes the libraries and code necessary to execute the experiment. Environments are stored and tracked in the AzureML workspace, and a newly created workspace already contains curated environments typically used in ML projects.
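
For instance, a minimal sketch of inspecting what the workspace already provides (ws is the Workspace instance used in the scripts above; 'AzureML-Minimal' is just an example of a curated environment name):

from azureml.core import Environment

envs = Environment.list(workspace=ws)   # dict mapping environment name -> Environment
for name in envs:
    print(name)

curated_env = Environment.get(workspace=ws, name='AzureML-Minimal')  # fetch one by name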

We can also create environments specific to our project through an Environment instance. Two interesting methods in the Environment class are from_conda_specification and from_pip_requirements.

from_conda_specification builds an environment from a conda specification YAML file, such as:

name: experiment-env
channels:
  - defaults
  - pytorch
dependencies:
  - python=3.8.10
  - pytorch=1.10.1
  - torchvision=0.11.2
  - numpy=1.19.4

from_pip_requirements builds an environment from a pip requirements.txt file, such as:

python==3.8.10
torch==1.10.1
torchvision==0.11.2
numpy==1.19.4

To execute an experiment that requires our environment, we must provide an Environment instance to the ScriptRunConfig instance. This is done as follows:

from azureml.core import Workspace, Experiment, ScriptRunConfig, Environment
import os
import constants  # local module holding TARGET_NAME and EXPERIMENT_NAME

def run_experiment():

    # Load the workspace from the config.json stored in the .azureml folder
    config_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), '.azureml')
    ws = Workspace.from_config(path=config_path)

    config = ScriptRunConfig(
        source_directory=os.path.join(os.path.dirname(os.path.realpath(__file__)), 'Experiment_1'),
        script='experiment_1.py',
        compute_target=constants.TARGET_NAME)

    # Alternative: build the environment from a conda specification file
    # env = Environment.from_conda_specification(
    #     name='env-2',
    #     file_path=os.path.join(os.path.dirname(os.path.realpath(__file__)), 'run_experiment_2.yml')
    # )

    # Build the environment from a pip requirements file and attach it to the run
    env = Environment.from_pip_requirements(
        name='env-2',
        file_path=os.path.join(os.path.dirname(os.path.realpath(__file__)), 'requirements.txt')
    )
    config.run_config.environment = env

    experiment = Experiment(ws, constants.EXPERIMENT_NAME)
    run = experiment.submit(config)

    # Print the portal URL where the run can be monitored
    aml_run = run.get_portal_url()
    print(aml_run)

if __name__ == '__main__':
    run_experiment()

Note that the code above is similar to the previous example, but we now build an Environment instance and attach it to the run configuration of the ScriptRunConfig.
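
As an aside, ScriptRunConfig also accepts an environment argument directly, so the attachment can be expressed in one step; a sketch using the same env and constants as above:

config = ScriptRunConfig(
    source_directory=os.path.join(os.path.dirname(os.path.realpath(__file__)), 'Experiment_1'),
    script='experiment_1.py',
    compute_target=constants.TARGET_NAME,
    environment=env)   # equivalent to setting config.run_config.environment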

Execution

Once submitted, the portal URL printed by our script directs the user to the experiment portal, where we can access the experiment results. This screen links to the environment that we specified and shows the status of the experiment run and the name of the script being executed.

[Screenshot: experiment run details in the portal]

Other tabs provide additional information about the experiment run. In the snapshot tab, we can see the script that was executed.

[Screenshot: the Snapshot tab showing the executed script]

Logs are also available in the portal, in the Outputs+Logs tab. These provide helpful information about the execution of the experiment, especially when the experiment fails. We can also see the output of our experiment script.

[Screenshot: the Outputs + Logs tab with the script output]
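
If we prefer not to go through the portal, the Run instance can fetch the same logs programmatically; a small sketch, assuming run is the instance returned by experiment.submit:

details = run.get_details_with_logs()   # run details together with the log file contents
log_files = run.get_all_logs()          # download all log files and return their local paths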

Going one level up, we can see information about the various execution runs of the experiment, their status, compute target, and the time taken to execute them.

[Screenshot: the list of experiment runs]
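
The same run history can be retrieved with the SDK; a brief sketch using the Experiment instance created earlier:

for past_run in experiment.get_runs():   # yields the runs, most recent first
    print(past_run.id, past_run.get_status())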

Conclusion

In this post, we have seen the steps required to execute an ML experiment in AzureML. The steps required are:

- Create the script that constitutes the experiment.
- Create a ScriptRunConfig that specifies the script, its source directory, the compute target, and, if needed, the environment.
- Submit the configuration through an Experiment instance.
- Monitor the results, logs, and run history through the portal or the Run instance.