
Running your code

QA-Board works as a CLI wrapper for your code. By default, to get you started, it runs the commands you provide as extra arguments:

qa run --input path/to/your/input.file 'echo "{input_path} => {output_dir}"'
#=> prints "/database/path/to/your/input.file => output/dir"

qa --share run --input path/to/your/input.file 'echo "{input_path} => {output_dir}"'
#=> prints a URL to view logs in the web interface
(Screenshot: first results in the QA-Board web interface)
tip

--share should really be the default behaviour; we plan to make it so. To enable it on your side:

# .bashrc or other shell config
alias qa="qa --share"

# you can also use an environment variable
export QATOOLS_SHARE=true

--share'd results are available in the web UI and their data is stored under /mnt/qaboard. To change the location, edit storage in qaboard.yaml. If you don't use --share, results are saved under output/.

Wrapping your code

How does it work? When you install QA-Board with pip, you get the qa executable. qa reads qaboard.yaml and imports the Python file specified by project.entrypoint. It then calls your entrypoint's run() function with information about the current run: input, configuration, where outputs should be saved, etc.

Take a look at the default run() in qa/main.py. You should change it to run your own code. In most cases that means locating and executing a binary, or importing and running Python code.

tip

Many users want to separate algorithm runs from post-processing. To make this flow easier, you can optionally implement postprocess(). You then get both qa run and qa postprocess.
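
As a rough sketch, a postprocess() hook could look like the following. The signature shown is an assumption; check the default qa/main.py shipped with your QA-Board version for the exact one:

def postprocess(runtime_metrics, context):
    # Signature assumed; verify against your QA-Board version.
    # Parse whatever run() wrote under context.output_dir and derive extra metrics.
    metrics = {**runtime_metrics}
    # ... read logs / output files here ...
    metrics['is_failed'] = False
    return metrics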

What should your wrapper do?

The main assumption is that your code reads an input and writes its outputs to a directory.

The run() function receives a context object whose properties tell you what to run, how to run it, and where outputs are expected to be saved.

important

Below are the most common ways users wrap their code. Identify what works for you and continue to the next page!


Use-case #1: Running Python code

qa/main.py
from pathlib import Path

def run(context):
    metrics = your_code(
        input=context.input_path,
        output=context.output_dir,
        # The next page shows how to supply configurations via context.params
        params={"hard-coded": "values"},
    )
    # you can return a dict of metrics
    metrics['is_failed'] = False
    return metrics

Use-case #2: Running an executable

QA-Board assumes you already built your code.

qa/main.py
import subprocess

def run(context):
    command = [
        'build/executable',
        "--input", str(context.input_path),
        "--output", str(context.output_dir),
        # if you call e.g. "qa run -i some_input --forwarded args", you can do:
        *context.forwarded_args,
    ]
    process = subprocess.run(
        command,
        capture_output=True,
        text=True,  # decode stdout/stderr as text
    )
    print(process.stdout)
    print(process.stderr)
    return {"is_failed": process.returncode != 0}
tip

Instead of returning your metrics, in some cases it's more convenient to write them as JSON to context.output_dir/metrics.json.
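
For instance, a minimal sketch, assuming context.output_dir is a pathlib.Path as in the examples above (the metric names below are made up):

import json

def run(context):
    # ... run your code ...
    metrics = {"is_failed": False, "psnr_db": 42.0}  # hypothetical metrics
    with (context.output_dir / "metrics.json").open("w") as f:
        json.dump(metrics, f)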

Use-case #3: Importing existing results (Advanced)

It is sometimes useful to compare results against reference implementations or benchmarks. Let's say the benchmark results live alongside images in your database, like so:

database
├── images
│   ├── A.jpg
│   └── B.jpg
└── standard-benchmark
    └── images
        ├── A
        │   └── output.jpg
        └── B
            └── output.jpg

qa/main.py
import shutil

def run(context):
    if context.type == 'benchmark':
        # The next page shows how to provide configurations/parameters to the run
        benchmark = context.params['benchmark']
        # Find the benchmark results (rel_input_path is the input path relative to the database)
        benchmark_outputs = context.database / benchmark / context.rel_input_path.parent / context.rel_input_path.stem
        # Either copy the result image only...
        shutil.copy(str(benchmark_outputs / 'output.jpg'), str(context.output_dir))
        # ...or copy the whole directory
        shutil.copytree(
            str(benchmark_outputs),
            str(context.output_dir),
            dirs_exist_ok=True,  # python>=3.8, otherwise just call `cp -R` to do it yourself...
        )
    # Otherwise run your code, which creates *output.jpg*

To actually import the results, create a batch (more info later) for the benchmark. You can then run qa batch import-standard-benchmark with:

qa/batches.yaml
import-standard-benchmark:
  type: benchmark
  configurations:
    - benchmark: standard-benchmark
  inputs:
    - images

Now you can make comparisons!

tip

From the QA-Board web application, you can set the benchmark as a "milestone" to compare your results against it in one click.

Useful context properties (Reference)

What
  database: path to the database
  rel_input_path: the input's path, relative to the database
  input_path: $database / $rel_input_path
  type: the input type
  input_metadata: if relevant, the input's metadata (more info later)

How
  configs: many algorithms need some notion of "incremental/delta/cascading configs"; an array of strings, dicts, or whatever you decide how to interpret (see the sketch after this list)
  params: dict with all the above configs of type dict, deep-merged. It's often all you need!
  platform: usually the host (linux/windows), but can be overwritten as part of your custom logic
  forwarded_args: extra CLI flags, usually used for debugging. Also available in params.
  configurations (advanced): like configs, an array of strings or dicts, but without the extra tuning parameters
  extra_parameters (advanced): when doing tuning via QA-Board or forwarding extra CLI arguments, a dict of key:values

Where
  output_dir: where your code should save its outputs
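
For example, here is a minimal sketch of how a wrapper might read these properties (the configuration values, parameter names, and defaults below are hypothetical):

def run(context):
    # context.configs could be e.g. ["base", {"threshold": 10}, {"debug": True}]
    # context.params is then the deep-merge of the dict entries:
    #   {"threshold": 10, "debug": True}
    threshold = context.params.get("threshold", 5)    # hypothetical parameter and default
    debug = context.params.get("debug", False)
    output_file = context.output_dir / "result.txt"   # save outputs under output_dir
    ...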
tip

You can use context.obj to store arbitrary data.

Accessing the QA-Board configuration

Work in Progress

A full reference for from qaboard.config import ... will arrive in the docs!

from qaboard.config import project, config, ...
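
In the meantime, here is a hedged sketch of what this can look like; the attribute names and types are assumptions, so check qaboard/config.py in your installed version:

# Sketch only: names and types may differ between QA-Board versions.
from qaboard.config import config   # assumed: the parsed qaboard.yaml
from qaboard.config import project  # assumed: project-related settings

# e.g. read a custom key you added to qaboard.yaml (hypothetical key)
my_settings = config.get('my_project_settings', {})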