Evaluator

The Evaluator takes two datasets, one of which is treated as the ground truth, and compares them using different metrics.

The calculated metrics are accurate: the same ground truth and detections combination was evaluated using the COCO API, and the results were identical.
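
Such a cross-check can be reproduced with `pycocotools`, the reference COCO API implementation. A minimal sketch (the file paths are placeholders, and the detections file must be in COCO results format, which is what `loadRes` expects):

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Load the ground truth annotations and the detections (COCO results format).
    coco_gt = COCO('/opt/datasets/weights/annotations/instances_train2017.json')
    coco_dt = coco_gt.loadRes('/path/to/detections.json')

    # Run the reference COCO evaluation for bounding boxes.
    coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()  # prints the AP/AR summary to compare against Detection Metrics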

The computed metrics take into account all the detections and area ranges on an image. Moreover, AP (average precision) is computed using 101 recall thresholds from 0.0 to 1.0 with a step of 0.01, and mAP is computed using 10 IoU thresholds from 0.5 to 0.95 with a step of 0.05. These configurations are identical to the ones used in the COCO API, and so are the results generated by Detection Metrics.
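
To illustrate the sampling scheme, here is a minimal Python sketch of COCO-style 101-point AP interpolation for one class at a single IoU threshold. It mirrors the COCO API's scheme, not Detection Metrics' internal code:

    import numpy as np

    RECALL_THRS = np.linspace(0.0, 1.0, 101)  # 0.00, 0.01, ..., 1.00
    IOU_THRS = np.linspace(0.5, 0.95, 10)     # 0.50, 0.55, ..., 0.95

    def average_precision(recalls, precisions):
        """COCO-style AP: sample precision at 101 fixed recall points.

        `recalls` must be sorted ascending, as produced by sweeping a
        ranked detection list at a single IoU threshold.
        """
        # Make precision monotonically non-increasing, as the COCO API does.
        precisions = np.maximum.accumulate(precisions[::-1])[::-1]
        # For each recall threshold, take the precision at the first point
        # reaching that recall; unreached thresholds contribute 0.
        idx = np.searchsorted(recalls, RECALL_THRS, side='left')
        reached = idx < len(precisions)
        sampled = np.where(reached, precisions[np.minimum(idx, len(precisions) - 1)], 0.0)
        return sampled.mean()

    # mAP is then the mean of this AP over the 10 IoU thresholds (and over classes).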

For more details, please visit COCO Detection Evaluation or the COCO API.

Command line use example

An example config file would be:

    outputPath: /opt/datasets/output/results/
    inputPathGT: /opt/datasets/weights/annotations/instances_train2017.json
    inputPathDetection: /opt/output/test/annotations/instances_train.json
    readerImplementationGT: COCO
    readerImplementationDetection: COCO
    readerNames: /opt/datasets/names/coco.names
    iouType: bbox

Available options for `iouType` are `bbox` or `segm` (the `bbox` case is sketched below). With this config file in place, change directory to `Tools/Evaluator` inside the build directory and run

    ./evaluator -c appConfig.yml

This will output the results as a .csv file in the output folder.
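
For reference, the `bbox` setting matches detections to ground truth using the standard intersection over union of axis-aligned boxes. A minimal sketch, assuming COCO-style `(x, y, width, height)` boxes:

    def bbox_iou(a, b):
        """Intersection over union of two (x, y, width, height) boxes."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        # Intersection rectangle (empty if the boxes do not overlap).
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    print(bbox_iou((0, 0, 10, 10), (5, 5, 10, 10)))  # 25 / 175 ~= 0.143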

GUI use video example

To use the Evaluator functionality, the configuration file needs an `inferencesPath` value, so the config file could be as follows:

    datasetPath: /opt/datasets/
    evaluationsPath: /opt/datasets/eval
    weightsPath: /opt/datasets/weights
    netCfgPath: /opt/datasets/cfg
    namesPath: /opt/datasets/names
    inferencesPath: /opt/datasets/

The video below demonstrates the Evaluator tool of Detection Metrics evaluating detector-generated results for the COCO val2017 dataset. After evaluation, a summary of results is printed, containing both the COCO mAP (mean average precision) metric and the Pascal VOC metric. More detailed results are written to a .csv file named Evaluation Results.csv, which contains class-wise and overall results for the given dataset.
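
The .csv can be inspected with any spreadsheet tool or a few lines of Python; a minimal sketch (the exact column names depend on the Detection Metrics version):

    import csv

    # Print the per-class and overall rows written by the Evaluator.
    with open('Evaluation Results.csv', newline='') as f:
        for row in csv.reader(f):
            print(row)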