detectionmetrics.utils package

Submodules

detectionmetrics.utils.conversion module

detectionmetrics.utils.conversion.get_ontology_conversion_lut(old_ontology: dict, new_ontology: dict, ontology_translation: dict | None = None, ignored_classes: List[str] | None = []) ndarray

Build a LUT that links old ontology and new ontology indices. If class names don’t match between the provided ontologies, the user must provide an ontology translation dictionary with old and new class names as keys and values, respectively

Parameters:
  • old_ontology (dict) – Origin ontology definition

  • new_ontology (dict) – Target ontology definition

  • ontology_translation (Optional[dict], optional) – Ontology translation dictionary, defaults to None

  • ignored_classes (Optional[List[str]], optional) – Classes to ignore from the old ontology, defaults to []

Returns:

numpy array associating old and new ontology indices

Return type:

np.ndarray
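
Example (a minimal usage sketch; the ontology dictionary layout shown here, class name mapped to an index and an RGB color, is an assumption and not taken from this reference):

    from detectionmetrics.utils import conversion

    # Assumed ontology layout: class name -> {"idx": index, "rgb": color}
    old_ontology = {
        "road":   {"idx": 0, "rgb": [128, 64, 128]},
        "car":    {"idx": 1, "rgb": [0, 0, 142]},
        "person": {"idx": 2, "rgb": [220, 20, 60]},
    }
    new_ontology = {
        "drivable": {"idx": 0, "rgb": [128, 64, 128]},
        "vehicle":  {"idx": 1, "rgb": [0, 0, 142]},
        "human":    {"idx": 2, "rgb": [220, 20, 60]},
    }

    # Class names differ, so a translation dict (old name -> new name) is needed
    ontology_translation = {"road": "drivable", "car": "vehicle", "person": "human"}

    lut = conversion.get_ontology_conversion_lut(
        old_ontology, new_ontology, ontology_translation
    )
    # lut[old_index] -> new_index, so it can be applied directly to a label array:
    # new_label = lut[old_label]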

detectionmetrics.utils.conversion.hex_to_rgb(hex: str) Tuple[int, ...]

Convert HEX color code to sRGB

Parameters:

hex (str) – HEX color code

Returns:

sRGB color value

Return type:

Tuple[int, …]
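
Example (whether the leading ‘#’ is required or accepted is not specified here, so treat the input format as an assumption):

    from detectionmetrics.utils import conversion

    rgb = conversion.hex_to_rgb("#ff8000")
    print(rgb)  # expected: (255, 128, 0)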

detectionmetrics.utils.conversion.label_to_rgb(label: Image, ontology: dict) Image

Convert an image with raw label indices to RGB mask

Parameters:
  • label (Image.Image) – Raw label indices as PIL image

  • ontology (dict) – Ontology definition

Returns:

RGB mask

Return type:

Image.Image
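
Example (a sketch for visualizing a raw index mask; the ontology layout is the same assumption as in the conversion LUT example above):

    import numpy as np
    from PIL import Image
    from detectionmetrics.utils import conversion

    # Assumed ontology layout: class name -> {"idx": index, "rgb": color}
    ontology = {
        "background": {"idx": 0, "rgb": [0, 0, 0]},
        "road":       {"idx": 1, "rgb": [128, 64, 128]},
    }

    # Raw label: one class index per pixel, stored as a single-channel PIL image
    label = Image.fromarray(np.random.randint(0, 2, (64, 64), dtype=np.uint8))

    rgb_mask = conversion.label_to_rgb(label, ontology)
    rgb_mask.save("mask_rgb.png")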

detectionmetrics.utils.conversion.ontology_to_rgb_lut(ontology: dict) ndarray

Given an ontology definition, build a LUT that links indices and RGB values

Parameters:

ontology (dict) – Ontology definition

Returns:

numpy array containing RGB values per index

Return type:

np.ndarray
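
Example (a sketch of colorizing labels via NumPy fancy indexing; the LUT shape and the ontology layout are assumptions):

    import numpy as np
    from detectionmetrics.utils import conversion

    ontology = {
        "background": {"idx": 0, "rgb": [0, 0, 0]},
        "road":       {"idx": 1, "rgb": [128, 64, 128]},
    }

    lut = conversion.ontology_to_rgb_lut(ontology)  # assumed shape: (n_classes, 3)
    label = np.random.randint(0, 2, (64, 64))
    rgb = lut[label]  # (64, 64, 3) array of per-pixel RGB values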

detectionmetrics.utils.io module

detectionmetrics.utils.io.extract_wildcard_matches(pattern: str) list

Given a pattern with wildcards, extract the matches

Parameters:

pattern (str) – ‘Globable’ pattern with wildcards

Returns:

Matches found in the pattern

Return type:

list
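
Example (the pattern is a placeholder and the per-match structure of the returned list is not specified here, so the result is only printed):

    from detectionmetrics.utils import io

    # Wildcard pattern pointing at, e.g., per-sequence label folders
    matches = io.extract_wildcard_matches("dataset/*/labels")
    print(matches)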

detectionmetrics.utils.io.get_image_mode(fname: str) str

Given an image file, retrieve its color mode using PIL

Parameters:

fname (str) – Input image filename

Returns:

PIL color image mode

Return type:

str
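
Example (the filename is a placeholder):

    from detectionmetrics.utils import io

    mode = io.get_image_mode("sample.png")
    print(mode)  # e.g. "RGB" for a color image or "L" for a single-channel label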

detectionmetrics.utils.io.read_json(fname: str) dict

Read a JSON file

Parameters:

fname (str) – JSON filename

Returns:

Dictionary containing JSON file data

Return type:

dict

detectionmetrics.utils.io.read_txt(fname: str) List[str]

Read a .txt file line by line

Parameters:

fname (str) – .txt filename

Returns:

List of lines found in the .txt file

Return type:

List[str]

detectionmetrics.utils.io.read_yaml(fname: str) dict

Read a YAML file

Parameters:

fname (str) – YAML filename

Returns:

Dictionary containing YAML file data

Return type:

dict

detectionmetrics.utils.io.write_json(fname: str, data: dict)

Write a properly indented JSON file

Parameters:
  • fname (str) – Target JSON filename

  • data (dict) – Dictionary containing data to be dumped as a JSON file
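
Example (a minimal round trip using the file helpers above; all filenames are placeholders):

    from detectionmetrics.utils import io

    config = io.read_yaml("config.yaml")        # dict with the YAML contents
    ontology = io.read_json("ontology.json")    # dict with the JSON contents
    split = io.read_txt("train_split.txt")      # list of lines

    io.write_json("ontology_copy.json", ontology)  # indented JSON dump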

detectionmetrics.utils.metrics module

class detectionmetrics.utils.metrics.ConfusionMatrix(n_classes: int)

Bases: object

Class to compute and store the confusion matrix, as well as related metrics (e.g., accuracy, precision, and recall)

Parameters:

n_classes (int) – Number of classes to evaluate

compute() ndarray

Get confusion matrix

Returns:

confusion matrix

Return type:

np.ndarray

get_accuracy() Tuple[ndarray, float]

Compute accuracy from the confusion matrix as:

\[\text{Accuracy} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}(y_i = \hat{y}_i)\]

Returns:

per class accuracy, and global accuracy

Return type:

Tuple[np.ndarray, float]

update(pred: ndarray, gt: ndarray, valid_mask: ndarray | None = None) ndarray

Update the confusion matrix with new predictions and ground truth

Parameters:
  • pred (np.ndarray) – Array containing prediction

  • gt (np.ndarray) – Array containing ground truth

  • valid_mask (Optional[np.ndarray], optional) – Binary mask where False elements will be ignored, defaults to None

Returns:

Updated confusion matrix

Return type:

np.ndarray
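
Example (a usage sketch assuming pred and gt hold integer class indices per pixel; the exact expected array layout is not stated above):

    import numpy as np
    from detectionmetrics.utils.metrics import ConfusionMatrix

    n_classes = 3
    cm = ConfusionMatrix(n_classes)

    pred = np.random.randint(0, n_classes, (2, 64, 64))  # predicted class indices
    gt = np.random.randint(0, n_classes, (2, 64, 64))    # ground-truth class indices
    valid = gt != 2                                       # e.g. ignore pixels of class 2

    cm.update(pred, gt, valid_mask=valid)

    matrix = cm.compute()                         # (n_classes, n_classes) count matrix
    per_class_acc, global_acc = cm.get_accuracy()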

class detectionmetrics.utils.metrics.IoU(n_classes: int)

Bases: Metric

Compute Intersection over Union (IoU). IoU per sample and class is accumulated and then the average per class is computed.

Parameters:

n_classes (int) – Number of classes to evaluate

compute() ndarray

Get IoU (per class and mIoU)

Returns:

per class IoU, and mean IoU

Return type:

Tuple[np.ndarray, float]

update(pred: ndarray, gt: ndarray, valid_mask: ndarray | None = None)

Accumulate IoU values for a new set of samples

Parameters:
  • pred (np.ndarray) – one-hot encoded prediction array (batch, class, width, height)

  • gt (np.ndarray) – one-hot encoded ground truth array (batch, class, width, height)

  • valid_mask (Optional[np.ndarray], optional) – Binary mask where False elements will be ignored, defaults to None
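
Example (a sketch using one-hot inputs shaped (batch, class, width, height) as described above; the unpacking order of compute() follows the documented per-class IoU and mean IoU):

    import numpy as np
    from detectionmetrics.utils.metrics import IoU

    n_classes = 3
    iou = IoU(n_classes)

    # Build one-hot tensors of shape (batch, class, width, height) for illustration
    labels_pred = np.random.randint(0, n_classes, (2, 64, 64))
    labels_gt = np.random.randint(0, n_classes, (2, 64, 64))
    pred = np.eye(n_classes, dtype=bool)[labels_pred].transpose(0, 3, 1, 2)
    gt = np.eye(n_classes, dtype=bool)[labels_gt].transpose(0, 3, 1, 2)

    iou.update(pred, gt)
    per_class_iou, miou = iou.compute()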

class detectionmetrics.utils.metrics.Metric(n_classes: int)

Bases: ABC

Abstract class for metrics

Parameters:

n_classes (int) – Number of classes to evaluate

abstract compute() ndarray

Get final values

Returns:

Array containing final values

Return type:

np.ndarray

abstract update(pred: ndarray, gt: ndarray, valid_mask: ndarray | None = None)

Accumulate results for a new batch

Parameters:
  • pred (np.ndarray) – Array containing prediction

  • gt (np.ndarray) – Array containing ground truth

  • valid_mask (Optional[np.ndarray], optional) – Binary mask where False elements will be ignored, defaults to None
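
Example (a sketch of how a custom metric could subclass Metric; only the abstract interface comes from this reference, while the metric itself and its internal bookkeeping are hypothetical):

    import numpy as np
    from detectionmetrics.utils.metrics import Metric

    class PixelErrorRate(Metric):
        """Hypothetical metric: fraction of misclassified elements."""

        def __init__(self, n_classes: int):
            super().__init__(n_classes)
            self.errors = 0
            self.total = 0

        def update(self, pred, gt, valid_mask=None):
            if valid_mask is not None:
                pred, gt = pred[valid_mask], gt[valid_mask]
            self.errors += int(np.sum(pred != gt))
            self.total += pred.size

        def compute(self) -> np.ndarray:
            return np.array(self.errors / max(self.total, 1))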

Module contents