Weights & Biases
Logging and visualizing experiments with Weights & Biases.
When would you want to use it?
You should use the Weights & Biases Experiment Tracker:
if you have already been using Weights & Biases to track experiment results for your project and would like to continue doing so as you incorporate MLOps workflows and best practices in your project through ZenML.
if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g. models, metrics, datasets).
if you would like to connect ZenML to Weights & Biases to share the artifacts and metrics logged by your pipelines with your team, organization, or external stakeholders.
How do you deploy it?
The Weights & Biases Experiment Tracker flavor is provided by the Weights & Biases ZenML integration. You need to install it on your local machine to be able to register a Weights & Biases Experiment Tracker and add it to your stack:
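As a sketch, the integration can be installed through the ZenML CLI (the `-y` flag skips the confirmation prompt):

```shell
# Install the Weights & Biases (wandb) integration into the local
# ZenML environment.
zenml integration install wandb -y
```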
Authentication Methods
You need to configure the following credentials for authentication to the Weights & Biases platform:
api_key: Mandatory API key token of your Weights & Biases account.
project_name: The name of the project where you're sending the new run. If the project is not specified, the run is put in an "Uncategorized" project.
entity: An entity is a username or team name where you're sending runs. This entity must exist before you can send runs there, so make sure to create your account or team in the UI before starting to log runs. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
This option configures the credentials for the Weights & Biases platform directly as stack component attributes.
This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration.
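A minimal sketch of this option, assuming the tracker is registered under the placeholder name wandb_experiment_tracker; the entity, project name, and API key values are placeholders you must substitute:

```shell
# Register the experiment tracker with the credentials passed directly
# as (insecure, plainly visible) stack component attributes.
zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \
    --entity=<ENTITY> --project_name=<PROJECT_NAME> --api_key=<WANDB_API_KEY>

# Add it to a stack and activate that stack.
zenml stack register custom_stack -e wandb_experiment_tracker ... --set
```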
How do you use it?
To log information from a ZenML pipeline step using the Weights & Biases Experiment Tracker component in the active stack, you need to enable the experiment tracker for that step by passing its name to the @step
decorator. Then use Weights & Biases logging or auto-logging capabilities as you would normally do, e.g.:
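A minimal sketch, assuming a tracker registered under the placeholder name wandb_experiment_tracker and a recent ZenML version (older versions import the decorator from zenml.steps instead):

```python
import wandb
from zenml import step

# Pass the registered experiment tracker's name to the decorator so
# ZenML starts a Weights & Biases run for this step.
@step(experiment_tracker="wandb_experiment_tracker")  # placeholder name
def train_model() -> float:
    accuracy = 0.92  # stand-in for a real training metric
    # Log to the active Weights & Biases run managed by ZenML.
    wandb.log({"accuracy": accuracy})
    return accuracy
```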
Weights & Biases UI
Weights & Biases comes with a web-based UI that you can use to find further details about your tracked experiments.
Every ZenML step that uses Weights & Biases should create a separate experiment run which you can inspect in the Weights & Biases UI:
You can find the URL of the Weights & Biases experiment linked to a specific ZenML run via the metadata of the step in which the experiment tracker was used:
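As a sketch, assuming a recent ZenML client API; the pipeline and step names are placeholders, and the metadata key under which the tracker stores the URL is assumed to be "experiment_tracker_url":

```python
from zenml.client import Client

# Fetch the most recent run of the pipeline and look up the step that
# used the experiment tracker.
last_run = Client().get_pipeline("<PIPELINE_NAME>").last_run
trainer_step = last_run.steps["<STEP_NAME>"]

# The Weights & Biases run URL is stored in the step's metadata.
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)
```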
Alternatively, you can see an overview of all experiment runs at https://wandb.ai/{ENTITY_NAME}/{PROJECT_NAME}/runs/.
Each Weights & Biases experiment run is named {pipeline_run_name}_{step_name}
(e.g. wandb_example_pipeline-25_Apr_22-20_06_33_535737_tf_evaluator
) and is tagged with both pipeline_name
and pipeline_run_name
, which you can use to group and filter experiment runs.
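The naming convention can be illustrated with a short, self-contained snippet (the names are taken from the example above):

```python
# Compose a Weights & Biases run name from a ZenML pipeline run name
# and step name, following the {pipeline_run_name}_{step_name} scheme.
pipeline_run_name = "wandb_example_pipeline-25_Apr_22-20_06_33_535737"
step_name = "tf_evaluator"

wandb_run_name = f"{pipeline_run_name}_{step_name}"
print(wandb_run_name)
# → wandb_example_pipeline-25_Apr_22-20_06_33_535737_tf_evaluator
```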
Additional configuration
With the above in place, all the data, metrics, and results produced within the step are logged automatically; no further action is required. If you need finer control, the Weights & Biases experiment tracker also accepts additional settings (e.g. extra tags for your runs) at the step level.
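As a sketch, assuming the WandbExperimentTrackerSettings class and import path of recent ZenML versions; the tracker name and tag value are placeholders:

```python
from zenml import step
from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import (
    WandbExperimentTrackerSettings,
)

# Extra tags attached to every run created by this step
# (the tag value is an illustrative placeholder).
wandb_settings = WandbExperimentTrackerSettings(tags=["zenml_example"])

@step(
    experiment_tracker="wandb_experiment_tracker",  # placeholder name
    settings={"experiment_tracker.wandb": wandb_settings},
)
def my_step() -> None:
    ...
```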