Exporting Data
How to get your data out of Strikes
The UI is a great place to begin analyzing your run data, monitoring execution, and troubleshooting issues. Exporting data is the next step for deeper analysis, dataset creation, and even model training. The SDK makes it easy to export complete projects or individual runs.
The following data items are available to you in the dreadnode SDK:
- Runs: Collect all runs under a project, or individually by ID
- Tasks: Pull all tasks within a run, including their arguments, outputs, and associated scores
- Trace: Get a full OpenTelemetry trace for a specific run including all tasks and associated data
You can also export dataframes for analysis in the following perspectives:
- Export Runs: Get all of your runs with their parameters, metrics, and metadata
- Export Metrics: Focus on the metrics data from your runs
- Export Parameters: Analyze how parameters affect your metrics
- Export Timeseries: Get time-based data for your metrics
All exports are available in multiple formats and can be filtered to view the precise data you need.
Basic Usage
Here’s a quick example of using the Dreadnode API to export data from your Strikes projects:
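The helper names below (`dn.configure()`, `dn.api()`, `export_runs`) are shown as an illustrative sketch; check your installed SDK version for the exact client methods and signatures.

```python
import dreadnode as dn

# Authenticate -- configure() picks up your API token from the environment.
dn.configure()

# Illustrative accessor: obtain an API client for the platform.
client = dn.api()

# Export every run in a project as a pandas DataFrame.
runs_df = client.export_runs(project="my-project")
print(runs_df.head())

# Persist the export for later analysis.
runs_df.to_parquet("my-project-runs.parquet")
```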
Export Types
Export Runs
Export all run data including parameters, tags, and aggregated metrics.
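A minimal sketch, assuming an `export_runs` helper on the API client (the exact method name and arguments may differ in your SDK version):

```python
# `client` is the API client from the Basic Usage example above.
runs_df = client.export_runs(
    project="my-project",
    filter="status == 'completed'",  # optional filter expression (see Filtering Data)
    aggregations=["avg", "max"],     # how per-run metrics are aggregated
)

# Columns follow the prefix convention described below.
param_cols = [c for c in runs_df.columns if c.startswith("param_")]
metric_cols = [c for c in runs_df.columns if c.startswith("metric_")]
print(runs_df[param_cols + metric_cols].head())
```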
The resulting DataFrame contains:
- Run metadata (ID, name, start time, duration, status)
- Parameters (prefixed with `param_`)
- Tags (prefixed with `tag_`)
- Aggregated metrics (prefixed with `metric_`)
Export Metrics
Focus on the metrics data with detailed information about each metric point.
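A sketch along the same lines, assuming an `export_metrics` helper (the method name, arguments, and metric names are illustrative):

```python
# One row per metric point, with run metadata and param_ columns alongside.
metrics_df = client.export_metrics(
    project="my-project",
    metrics=["accuracy", "loss"],        # illustrative metric names
    aggregations=["avg", "max", "last"],
)
print(metrics_df.head())
```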
The resulting DataFrame contains:
- Run metadata (ID, start time, duration, status)
- Metric information (name, step, timestamp, value)
- Aggregated values (based on selected aggregations)
- Parameters (prefixed with `param_`)
Export Parameters
Analyze how different parameter values affect your metrics.
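A sketch assuming an `export_parameters` helper; the method, arguments, and parameter names are illustrative:

```python
# One row per (parameter, value) pair, with aggregated metrics for the
# runs that used that value.
params_df = client.export_parameters(
    project="my-project",
    parameters=["learning_rate", "batch_size"],  # illustrative parameter names
    metrics=["accuracy"],
    aggregations=["avg"],
)
print(params_df.head())
```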
The resulting DataFrame shows how different parameter values influence your metrics, with:
- Parameter name and value
- Run count for each parameter value
- Aggregated metric values
Export Timeseries
Get time-based data for your metrics, with options for time representation.
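A sketch assuming an `export_timeseries` helper (names are illustrative); the `time_axis` and `aggregations` options are described below:

```python
# One row per metric point over time, suitable for plotting learning curves.
ts_df = client.export_timeseries(
    project="my-project",
    metrics=["loss"],        # illustrative metric name
    time_axis="relative",    # "wall", "relative", or "step"
    aggregations=["max"],    # running aggregations
)
print(ts_df.head())
```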
The timeseries export provides metric values over time, with:
- Run metadata (ID, name)
- Metric name and value at each point
- Time representation (based on selected time_axis)
- Running aggregations (if aggregations are specified)
- Parameters (prefixed with `param_`)
Filtering Data
All export functions support filtering to narrow down the results. The filter expression is a string that follows a simple query language:
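For illustration, a few filter expressions in a pandas-`query`-like style; the operators and field references your SDK actually supports may differ, so treat these as assumptions:

```python
# Only completed runs.
completed_df = client.export_runs(
    project="my-project",
    filter="status == 'completed'",
)

# Runs matching both a metric threshold and a parameter value
# (the column references and operators shown here are illustrative).
strong_df = client.export_runs(
    project="my-project",
    filter="metric_accuracy > 0.9 and param_model == 'gpt-4'",
)
```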
Available Aggregations
The following aggregation functions are available for metrics:
- `avg`: Average value
- `median`: Median value
- `min`: Minimum value
- `max`: Maximum value
- `sum`: Sum of values
- `first`: First value
- `last`: Last value
- `count`: Number of values
- `std`: Standard deviation
- `var`: Variance
For timeseries exports, the following aggregations are available:
- `max`: Running maximum value
- `min`: Running minimum value
- `sum`: Running sum of values
- `count`: Running count of values
Time Axis Options
When exporting timeseries data, you can specify how time should be represented:
- `wall`: Actual timestamp (datetime)
- `relative`: Seconds since the run started (float)
- `step`: Step number (integer)
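Combining these options, a brief illustrative example of a step-indexed export with a running aggregation (helper name remains an assumption):

```python
# Step-indexed values with a running maximum per metric.
steps_df = client.export_timeseries(
    project="my-project",
    metrics=["accuracy"],   # illustrative metric name
    time_axis="step",
    aggregations=["max"],
)
```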
Pulling Run, Trace, and Task Information
While exporting DataFrames is powerful for analysis, the Dreadnode SDK also lets you programmatically access detailed information about runs, traces, and tasks as structured objects.
Listing Runs and Metadata
You can list all runs in a project and inspect their metadata:
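A sketch assuming a `list_runs` helper that returns run objects; the attribute names are illustrative:

```python
runs = client.list_runs(project="my-project")

for run in runs:
    # Inspect a run object to see the exact fields your SDK version exposes.
    print(run.id, run.name, run.status, run.duration)
```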
Gathering Run Traces
A trace provides a complete record of all tasks and spans executed during a run, including timing, parent/child relationships, and metadata.
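A minimal sketch, assuming a `get_run_trace` helper and the span field names shown; adjust to your SDK version:

```python
# Take a run from the listing example above.
run_id = runs[0].id

# Fetch the trace as a nested tree of spans/tasks.
trace = client.get_run_trace(run_id, format="tree")

def print_span(span, depth=0):
    # Field names (name, duration, children) are illustrative.
    print("  " * depth + f"{span.name} ({span.duration:.3f}s)")
    for child in span.children:
        print_span(child, depth + 1)

for root in trace:
    print_span(root)
```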
- Each trace span or task includes timing, parent/child relationships, and any associated metrics or errors.
- Use `format="tree"` to get a nested structure reflecting the execution hierarchy.
You can also pull just the tasks for a run, including their arguments (inputs), outputs, and any metrics or scores.
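A sketch assuming a `get_run_tasks` helper (names are illustrative):

```python
tasks = client.get_run_tasks(run_id)

for task in tasks:
    # Attribute names are illustrative -- inspect a task object for the
    # exact fields (arguments, output, scores, timing, ...).
    print(task.name, task.status, task.scores)
```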
- Each task object contains its input arguments, output, status, and timing.
- This is useful for reconstructing the full execution flow and understanding how data moves through your system.
Viewing Historical Data and Task Inputs/Outputs
You can use the above methods to build a complete picture of how your code executed, what data was processed, and what results were produced. For example, to view all inputs and outputs for every task in a run:
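Continuing the sketch above (the helper and attribute names remain assumptions):

```python
for task in client.get_run_tasks(run_id):
    print(f"Task: {task.name} ({task.status})")
    for name, value in task.arguments.items():  # inputs, keyed by argument name
        print(f"  in  {name} = {value!r}")
    print(f"  out {task.output!r}")
```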
This is especially useful for debugging, auditing, or building custom visualizations of your workflow.
Example Workflows
Compare Performance Across Experiments
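A sketch of one way to compare experiments: group runs by a parameter and aggregate a metric. The specific column names (`param_model`, `metric_accuracy`) are illustrative and follow the prefix convention above:

```python
runs_df = client.export_runs(project="my-project")

comparison = (
    runs_df.groupby("param_model")["metric_accuracy"]
    .agg(["mean", "max", "count"])
    .sort_values("mean", ascending=False)
)
print(comparison)
```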
Analyze Learning Curves
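A sketch using the timeseries export to plot a metric per run over training steps; the metric name and exported column names are assumptions:

```python
import matplotlib.pyplot as plt

ts_df = client.export_timeseries(
    project="my-project",
    metrics=["loss"],
    time_axis="step",
)

# Column names (run_id, step, value) are assumptions -- check the exported frame.
for rid, group in ts_df.groupby("run_id"):
    plt.plot(group["step"], group["value"], label=str(rid)[:8])

plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()
```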
Working with Traces
You can export trace information for debugging and performance analysis:
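For example, to rank the slowest tasks in a run (the helper and field names are illustrative):

```python
# run_id obtained as in the earlier trace example.
trace = client.get_run_trace(run_id)  # assuming a flat list of spans by default

slowest = sorted(trace, key=lambda span: span.duration, reverse=True)[:10]
for span in slowest:
    print(f"{span.duration:>8.3f}s  {span.name}")
```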
Custom Exports
For more complex analyses, you can combine different exports:
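For instance, joining run-level parameters onto per-step metric values (all column names here are assumptions about the exported frames):

```python
runs_df = client.export_runs(project="my-project")
ts_df = client.export_timeseries(project="my-project", metrics=["loss"], time_axis="step")

combined = ts_df.merge(
    runs_df[["run_id", "param_model", "param_learning_rate"]],
    on="run_id",
    how="left",
)

# Example: mean loss at the final recorded step, per model.
final_step = combined[combined["step"] == combined["step"].max()]
print(final_step.groupby("param_model")["value"].mean())
```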