detectionmetrics.utils package

Submodules

detectionmetrics.utils.conversion module

detectionmetrics.utils.conversion.get_ontology_conversion_lut(old_ontology: dict, new_ontology: dict, ontology_translation: dict | None = None, ignored_classes: List[str] | None = []) → ndarray

Build a LUT that links old ontology and new ontology indices. If class names don’t match between the provided ontologies, the user must provide an ontology translation dictionary whose keys are the old class names and whose values are the corresponding new class names

Parameters:
  • old_ontology (dict) – Origin ontology definition

  • new_ontology (dict) – Target ontology definition

  • ontology_translation (Optional[dict], optional) – Ontology translation dictionary, defaults to None

  • ignored_classes (Optional[List[str]], optional) – Classes to ignore from the old ontology, defaults to []

Returns:

numpy array associating old and new ontology indices

Return type:

np.ndarray
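
A minimal usage sketch, assuming a hypothetical ontology format in which each class name maps to a dict with an idx key (the exact schema is defined by the dataset modules, so treat it as an assumption):

    import numpy as np

    from detectionmetrics.utils.conversion import get_ontology_conversion_lut

    # Hypothetical ontologies: class name -> definition with an 'idx' key (assumed schema)
    old_ontology = {"road": {"idx": 0}, "car": {"idx": 1}, "person": {"idx": 2}}
    new_ontology = {"drivable": {"idx": 0}, "vehicle": {"idx": 1}, "human": {"idx": 2}}

    # Class names differ, so a translation dict (old name -> new name) is required
    translation = {"road": "drivable", "car": "vehicle", "person": "human"}

    lut = get_ontology_conversion_lut(old_ontology, new_ontology, translation)

    # Remap a raw label array from old to new ontology indices via NumPy indexing
    old_label = np.array([[0, 1], [2, 0]])
    new_label = lut[old_label]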

detectionmetrics.utils.conversion.hex_to_rgb(hex: str) → Tuple[int, ...]

Convert HEX color code to sRGB

Parameters:

hex (str) – HEX color code

Returns:

sRGB color value

Return type:

Tuple[int, …]
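
A quick usage example (whether the leading ‘#’ must be included or omitted is an implementation detail not specified here):

    from detectionmetrics.utils.conversion import hex_to_rgb

    rgb = hex_to_rgb("#ff8800")  # -> (255, 136, 0)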

detectionmetrics.utils.conversion.label_to_rgb(label: Image, ontology: dict) → Image

Convert an image with raw label indices to RGB mask

Parameters:
  • label (Image.Image) – Raw label indices as PIL image

  • ontology (dict) – Ontology definition

Returns:

RGB mask

Return type:

Image.Image
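
A usage sketch, again assuming the hypothetical ontology schema above, extended with a per-class rgb key (also an assumption); the filename is illustrative:

    from PIL import Image

    from detectionmetrics.utils.conversion import label_to_rgb

    # Hypothetical ontology with per-class RGB colors (assumed 'rgb' key)
    ontology = {
        "road": {"idx": 0, "rgb": [128, 64, 128]},
        "car": {"idx": 1, "rgb": [0, 0, 142]},
    }

    label = Image.open("label.png")  # single-channel image of raw class indices
    rgb_mask = label_to_rgb(label, ontology)
    rgb_mask.save("label_rgb.png")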

detectionmetrics.utils.conversion.ontology_to_rgb_lut(ontology: dict) → ndarray

Given an ontology definition, build a LUT that links indices and RGB values

Parameters:

ontology (dict) – Ontology definition

Returns:

numpy array containing RGB values per index

Return type:

np.ndarray
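
With the same assumed schema, the resulting LUT can colorize label arrays directly through NumPy indexing:

    import numpy as np

    from detectionmetrics.utils.conversion import ontology_to_rgb_lut

    ontology = {
        "road": {"idx": 0, "rgb": [128, 64, 128]},
        "car": {"idx": 1, "rgb": [0, 0, 142]},
    }

    lut = ontology_to_rgb_lut(ontology)  # one RGB row per class index
    label = np.array([[0, 1], [1, 0]])
    rgb = lut[label]  # (2, 2, 3) array of RGB values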

detectionmetrics.utils.io module

detectionmetrics.utils.io.extract_wildcard_matches(pattern: str) → list

Given a pattern with wildcards, extract the matches

Parameters:

pattern (str) – ‘Globable’ pattern with wildcards

Returns:

Matches found in the pattern

Return type:

list
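
A plausible usage sketch; the exact return contents (matched paths vs. the substrings captured by each wildcard) are an assumption here:

    from detectionmetrics.utils.io import extract_wildcard_matches

    # e.g. with files data/seq_00/image.png and data/seq_01/image.png on disk
    matches = extract_wildcard_matches("data/seq_*/image.png")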

detectionmetrics.utils.io.get_image_mode(fname: str) → str

Given an image file, retrieve its color mode using PIL

Parameters:

fname (str) – Input image

Returns:

PIL color image mode

Return type:

str
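
For example (filename illustrative):

    from detectionmetrics.utils.io import get_image_mode

    mode = get_image_mode("label.png")  # e.g. 'L' (single channel) or 'RGB'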

detectionmetrics.utils.io.read_json(fname: str) → dict

Read a JSON file

Parameters:

fname (str) – JSON filename

Returns:

Dictionary containing JSON file data

Return type:

dict

detectionmetrics.utils.io.read_txt(fname: str) → List[str]

Read a .txt file line by line

Parameters:

fname (str) – .txt filename

Returns:

List of lines found in the .txt file

Return type:

List[str]

detectionmetrics.utils.io.read_yaml(fname: str) → dict

Read a YAML file

Parameters:

fname (str) – YAML filename

Returns:

Dictionary containing YAML file data

Return type:

dict

detectionmetrics.utils.io.write_json(fname: str, data: dict)

Write a dictionary to a properly indented JSON file

Parameters:
  • fname (str) – Target JSON filename

  • data (dict) – Dictionary containing data to be dumped as a JSON file
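
A round-trip sketch using the read/write helpers above (filename and contents are illustrative):

    from detectionmetrics.utils.io import read_json, write_json

    config = {"model": "deeplabv3", "n_classes": 19}
    write_json("config.json", config)  # written with indentation

    assert read_json("config.json") == config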

detectionmetrics.utils.metrics module

class detectionmetrics.utils.metrics.MetricsFactory(n_classes: int)

Bases: object

‘Factory’ class to accumulate results and compute metrics

Parameters:

n_classes (int) – Number of classes to evaluate

METRIC_NAMES = ['tp', 'fp', 'fn', 'tn', 'precision', 'recall', 'accuracy', 'f1_score', 'iou']

get_accuracy(per_class: bool = True) → ndarray | float

Accuracy = (TP + TN) / (TP + FP + FN + TN)

Parameters:

per_class (bool, optional) – Return per class accuracy, defaults to True

Returns:

Accuracy value(s)

Return type:

np.ndarray | float

get_averaged_metric(metric_name: str, method: str, weights: ndarray | None = None) → float

Get average metric value

Parameters:
  • metric_name (str) – Name of the metric to compute

  • method (str) – Method to use for averaging (‘macro’, ‘micro’ or ‘weighted’)

  • weights (Optional[np.ndarray], optional) – Weights for weighted averaging, defaults to None

Returns:

Average metric value

Return type:

float
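
The averaging methods presumably follow the usual conventions: ‘macro’ is the unweighted mean over classes, ‘weighted’ weights each class (e.g. by frequency), and ‘micro’ aggregates the raw counts before computing the metric. A sketch of the first two, assuming per-class values are available:

    import numpy as np

    per_class_iou = np.array([0.9, 0.5, 0.7])
    weights = np.array([1000, 100, 10])  # e.g. per-class pixel counts

    macro = per_class_iou.mean()                           # unweighted mean
    weighted = np.average(per_class_iou, weights=weights)  # frequency-weighted mean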

get_confusion_matrix() → ndarray

Get confusion matrix

Returns:

Confusion matrix

Return type:

np.ndarray

get_f1_score(per_class: bool = True) → ndarray | float

F1-score = 2 * (Precision * Recall) / (Precision + Recall)

Parameters:

per_class (bool, optional) – Return per class F1 score, defaults to True

Returns:

F1 score value(s)

Return type:

np.ndarray | float

get_fn(per_class: bool = True) → ndarray | int

False Negatives

Parameters:

per_class (bool, optional) – Return per class FN, defaults to True

Returns:

False Negatives

Return type:

np.ndarray | int

get_fp(per_class: bool = True) → ndarray | int

False Positives

Parameters:

per_class (bool, optional) – Return per class FP, defaults to True

Returns:

False Positives

Return type:

np.ndarray | int

get_iou(per_class: bool = True) → ndarray | float

IoU = TP / (TP + FP + FN)

Parameters:

per_class (bool, optional) – Return per class IoU, defaults to True

Returns:

IoU value(s)

Return type:

np.ndarray | float

get_metric_names() → list[str]

Get available metric names

Returns:

List of available metric names

Return type:

list[str]

get_metric_per_name(metric_name: str, per_class: bool = True) → ndarray | float | int

Get metric value by name

Parameters:
  • metric_name (str) – Name of the metric to compute

  • per_class (bool, optional) – Return per class metric, defaults to True

Returns:

Metric value

Return type:

np.ndarray | float | int

get_precision(per_class: bool = True) → ndarray | float

Precision = TP / (TP + FP)

Parameters:

per_class (bool, optional) – Return per class precision, defaults to True

Returns:

Precision value(s)

Return type:

np.ndarray | float

get_recall(per_class: bool = True) → ndarray | float

Recall = TP / (TP + FN)

Parameters:

per_class (bool, optional) – Return per class recall, defaults to True

Returns:

Recall value(s)

Return type:

np.ndarray | float

get_tn(per_class: bool = True) → ndarray | int

True Negatives

Parameters:

per_class (bool, optional) – Return per class TN, defaults to True

Returns:

True Negatives

Return type:

np.ndarray | int

get_tp(per_class: bool = True) → ndarray | int

True Positives

Parameters:

per_class (bool, optional) – Return per class TP, defaults to True

Returns:

True Positives

Return type:

np.ndarray | int

update(pred: ndarray, gt: ndarray, valid_mask: ndarray | None = None)

Accumulate results for a new batch

Parameters:
  • pred (np.ndarray) – Array containing prediction

  • gt (np.ndarray) – Array containing ground truth

  • valid_mask (Optional[np.ndarray], optional) – Binary mask where False elements will be ignored, defaults to None

detectionmetrics.utils.metrics.get_metrics_dataframe(metrics_factory: MetricsFactory, ontology: dict) → DataFrame

Build a DataFrame with all metrics (global and per-class), plus the confusion matrix

Parameters:
  • metrics_factory (MetricsFactory) – Properly updated MetricsFactory object

  • ontology (dict) – Ontology dictionary

Returns:

DataFrame with all metrics

Return type:

pd.DataFrame
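
An end-to-end sketch of the typical workflow. The shapes accepted by update() (flat arrays vs. label images) and the ontology schema (class name → dict with an idx key) are assumptions here:

    import numpy as np

    from detectionmetrics.utils.metrics import MetricsFactory, get_metrics_dataframe

    mf = MetricsFactory(n_classes=3)

    # Accumulate one batch of predictions against ground truth
    pred = np.array([0, 1, 2, 1, 0])
    gt = np.array([0, 1, 1, 1, 0])
    mf.update(pred, gt)

    iou_per_class = mf.get_iou()                   # per-class IoU
    miou = mf.get_averaged_metric("iou", "macro")  # mean IoU over classes
    cm = mf.get_confusion_matrix()
    tp = mf.get_metric_per_name("tp")              # same as mf.get_tp()

    # Hypothetical ontology: class name -> definition with an 'idx' key
    ontology = {"a": {"idx": 0}, "b": {"idx": 1}, "c": {"idx": 2}}
    df = get_metrics_dataframe(mf, ontology)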

Module contents