Computing quantitative metrics

Algorithms are usually evaluated using KPIs / Objective Figures of Merit / metrics. To make sure QA-Board's web UI displays them:

  1. `run()` should return a dict of metrics:

```python
def run():
    # --snip--
    return {
        "loss": loss,
    }
```
Alternatively, you can write your metrics as JSON to `ctx.obj['output_directory'] / "metrics.json"`.
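For instance, a minimal sketch of writing that file yourself, assuming `ctx.obj['output_directory']` is a `pathlib.Path` (the `write_metrics` helper is hypothetical, not part of QA-Board's API):

```python
import json
from pathlib import Path

def write_metrics(output_directory: Path, metrics: dict) -> Path:
    """Write metrics as JSON where QA-Board expects to find them."""
    metrics_file = output_directory / "metrics.json"
    metrics_file.write_text(json.dumps(metrics))
    return metrics_file
```

You would call it from `run()` with something like `write_metrics(ctx.obj['output_directory'], {"loss": loss})`.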

  2. Describe your metrics in `qa/metrics.yaml`. Here is an example:

qa/metrics.yaml (location configured in qaboard.yaml under outputs.metrics)

```yaml
loss: # the fields below are all optional
  label: Loss function    # human-readable name
  short_label: Loss       # some parts of the UI are better with short labels...
  smaller_is_better: true # default: true
  target: 0.01            # plots in the UI will compare KPIs versus a target if given
  # when displaying results in the UI, you often want to change units
  # suffix: ''   # e.g. "%"...
  # scale: 1     # e.g. 100 to convert [0-1] to percents...
  # by default we try to show 3 significant digits, but you can change it with
  # precision: 3
```
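To illustrate how `scale`, `suffix`, and `precision` combine when a value is displayed (a sketch of the idea, not QA-Board's actual rendering code; `format_metric` is a hypothetical helper):

```python
def format_metric(value: float, scale: float = 1, suffix: str = "", precision: int = 3) -> str:
    """Scale the raw value, keep `precision` significant digits, append the suffix."""
    scaled = value * scale
    return f"{scaled:.{precision}g}{suffix}"

# A raw loss of 0.01234 shown as a percentage:
# format_metric(0.01234, scale=100, suffix="%") -> "1.23%"
```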

If it all goes well you get:

  • tables to compare KPIs per-input across versions,
  • summaries,
  • metrics integrated in the visualizations,
  • and evolution over time per branch.


In the future, we plan to remove the requirement to define metrics ahead of time.