Algorithms are usually evaluated with quantitative metrics: KPIs, objective figures of merit, etc. To make sure QA-Board's web UI displays them:
- `run()` should return a dict of metrics:
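A minimal sketch (the metric names and values below are placeholders, not part of QA-Board's API):

```python
def run(ctx):
    # ... run your algorithm, write outputs under ctx.obj['output_directory'] ...
    # The dict you return is what QA-Board stores as this run's metrics.
    return {
        "is_failed": False,  # placeholder: flag runs that should count as failed
        "loss": 0.04,        # placeholder: your real KPI values go here
    }
```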
Alternatively, you can write your metrics as JSON to `ctx.obj['output_directory'] / 'metrics.json'`.
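For example (a sketch reusing the placeholder metrics from above; it assumes `ctx.obj['output_directory']` is a `pathlib.Path`, as the `/` operator suggests):

```python
import json

def run(ctx):
    # ... run your algorithm ...
    metrics = {"is_failed": False, "loss": 0.04}  # placeholder metrics
    # Write the metrics where QA-Board expects to find them.
    with (ctx.obj['output_directory'] / 'metrics.json').open('w') as f:
        json.dump(metrics, f, indent=2)
```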
- Describe your metrics in `qa/metrics.yaml`. Here is an example:
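A sketch of what it could look like (the exact schema is an assumption here; field names such as `label`, `short_label`, `smaller_is_better`, and `target` follow QA-Board's examples, so double-check them against the QA-Board documentation):

```yaml
loss:
  label: Loss function      # full name shown in the UI
  short_label: loss         # compact name for tables and plots
  smaller_is_better: true   # tells QA-Board how to rank versions
  target: 0.01              # optional objective to compare against
```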
If all goes well, you get:
- Tables to compare KPIs per input across versions
- Metrics integrated into the visualizations
- Evolution of the metrics over time, per branch
In the future, we plan to remove the need to define metrics ahead of time.