Runs
Understand the building blocks of your experiments
Runs are the core unit of your work in Strikes. They provide the context for all your data collection and represent a complete execution session. Think of runs as the “experiment” or “session” for your code.
Creating Runs
The most common way to create a run is using the context manager syntax:
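For example, a minimal sketch, assuming the SDK is imported under the `dn` alias used elsewhere on this page (e.g. `import dreadnode as dn`):

```python
import dreadnode as dn  # assumed package name/alias, matching the dn.configure() reference below

# Entering the block starts the run; exiting it ends the run.
with dn.run():
    ...  # anything you log here is associated with this run
```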
The run automatically starts when you enter the `with` block and ends when you exit it. All data logged within the block is associated with this run.
Run Names
You can provide a name for your run to make it easier to identify:
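A sketch assuming `dn.run()` accepts a name; the name itself is illustrative:

```python
with dn.run(name="baseline-prompt-eval"):  # illustrative name
    ...
```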
If you don’t provide a name, Strikes will generate one for you automatically using a combination of random words and numbers:
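For example (the generated name shown in the comment is purely illustrative):

```python
with dn.run():
    # Strikes assigns a generated name, e.g. something like "clever-otter-17"
    # (illustrative only; the exact format may differ).
    ...
```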
Run Tags
Tags help you categorize and filter runs. You can add tags when creating a run:
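A sketch assuming `dn.run()` accepts a `tags` list of strings; the tag values are illustrative:

```python
with dn.run(
    name="prompt-injection-sweep",
    tags=["llm", "red-team", "v2"],  # illustrative tags; assumes a tags argument
):
    ...
```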
Tags make it easy to find related runs when exporting data, and tag filtering will soon be available in the UI as well. In the meantime, tagging runs consistently is a good habit to build.
Setting the Project
Runs are always associated with a project. You can specify which project a run belongs to:
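A sketch assuming `dn.run()` accepts a `project` argument; the names are illustrative:

```python
with dn.run(
    name="jailbreak-eval",
    project="agent-evals",  # illustrative project name; assumes a project argument
):
    ...
```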
If you don’t specify a project, the run will use the default project configured in `dn.configure()`, or be placed in a project named “Default”.
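For example, a sketch assuming `dn.configure()` accepts a `project` argument as described above:

```python
# Set a default project once at startup (assumes configure() accepts a project argument)...
dn.configure(project="agent-evals")

# ...and later runs without an explicit project are placed in it.
with dn.run(name="follow-up-eval"):
    ...
```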
Run Attributes
You can add arbitrary attributes to a run for additional metadata:
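A hypothetical sketch: the attribute names, and the way they are attached (extra keyword arguments on `dn.run()` here), are assumptions, so check the SDK reference for the exact mechanism:

```python
with dn.run(
    name="gpt4o-harness-eval",
    # Illustrative attributes; how attributes are passed may differ in your SDK version.
    model="gpt-4o",
    dataset="harmbench-v1",
    commit="a1b2c3d",
):
    ...
```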
These attributes are stored with the run and can be used for filtering and organization when you perform data exports.
Executing Runs
You can execute multiple runs independently of one another, either sequentially or in parallel.
Multiple Independent Runs
You can create multiple independent runs in sequence:
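For example (a sketch; the sweep values are placeholders):

```python
import dreadnode as dn  # assumed package name/alias, as above

learning_rates = [1e-3, 1e-4, 1e-5]  # illustrative sweep values

# Each iteration opens and closes its own run, so every experiment stays separate.
for lr in learning_rates:
    with dn.run(name=f"lr-sweep-{lr}"):
        ...  # your training or evaluation code for this value goes here
```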
Each run is completely separate with its own data and lifecycle.
Parallel Runs
For more efficient experimentation, you can run multiple experiments in parallel:
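One possible pattern, using Python’s `concurrent.futures`; it assumes `dn.run()` can be used concurrently from multiple threads, and the model names are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

import dreadnode as dn  # assumed package name/alias, as above

def evaluate(model_name: str) -> None:
    # Each worker opens its own run, so results stay isolated per experiment.
    # Assumes dn.run() is safe to call from multiple threads.
    with dn.run(name=f"eval-{model_name}", tags=["parallel-sweep"]):
        ...  # your evaluation code for model_name goes here

models = ["model-a", "model-b", "model-c"]  # illustrative model names
with ThreadPoolExecutor(max_workers=len(models)) as pool:
    # list() forces iteration so any exception raised in a worker surfaces here.
    list(pool.map(evaluate, models))
```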
This pattern is particularly useful for hyperparameter searches or evaluating multiple models.
Error Handling
If an exception is raised inside a run, Strikes automatically captures and logs it and marks the run as failed. You can also handle errors explicitly:
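A sketch of both behaviours (the exception type and the failing step are placeholders):

```python
import dreadnode as dn  # assumed package name/alias, as above

with dn.run(name="fragile-experiment"):
    try:
        ...  # a step that may raise, e.g. a flaky network call
    except TimeoutError:
        # Handle recoverable errors yourself if the run should still succeed.
        pass

# If an exception escapes the with block instead, the run is recorded as failed
# and the exception propagates to your code as usual.
```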
Best Practices
- Use meaningful names: Give your runs descriptive names that indicate their purpose.
- Use parameters: Parameters are a great way to filter and compare runs later, so log them consistently (see the sketch after this list).
- Create separate runs for separate experiments: Don’t try to jam multiple experiments into a single run—you can create multiple runs inside your code.
- Use projects for organization: Group related runs into projects.
- Create comparison runs: When testing different approaches, ensure parameters and metrics are consistent to enable meaningful comparison.
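As referenced in the parameters bullet above, a minimal sketch; it assumes a `dn.log_params()`-style helper exists (the exact helper name and signature are assumptions, so consult the logging docs):

```python
import dreadnode as dn  # assumed package name/alias, as above

with dn.run(name="temperature-comparison"):
    # Logging the knobs you vary makes runs easy to filter and compare later.
    # The helper name and signature here are assumptions; check the SDK reference.
    dn.log_params(model="gpt-4o", temperature=0.7, max_tokens=512)
    ...
```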