detectionmetrics.models package
Submodules
detectionmetrics.models.model module
detectionmetrics.models.onnx module
- class detectionmetrics.models.onnx.OnnxImageSegmentationModel(model, model_type, ontology_fname, model_cfg, model_fname)
Bases:
ImageSegmentationModel
detectionmetrics.models.tensorflow module
- class detectionmetrics.models.tensorflow.ImageSegmentationTensorflowDataset(dataset: ImageSegmentationDataset, resize: Tuple[int, int] | None = None, crop: Tuple[int, int] | None = None, batch_size: int = 1, splits: List[str] = ['test'], lut_ontology: dict | None = None, normalization: dict | None = None, keep_aspect: bool = False)
Bases:
object
Dataset for image segmentation Tensorflow models
- Parameters:
dataset (ImageSegmentationDataset) – Image segmentation dataset
resize (Optional[Tuple[int, int]], optional) – Target size for resizing images, defaults to None
crop (Optional[Tuple[int, int]], optional) – Target size for center cropping images, defaults to None
batch_size (int, optional) – Batch size, defaults to 1
splits (List[str], optional) – Splits to be used from the dataset, defaults to [“test”]
lut_ontology (dict, optional) – LUT to transform label classes, defaults to None
normalization (dict, optional) – Parameters for normalizing input images, defaults to None
keep_aspect (bool, optional) – Whether to keep aspect ratio when resizing images. If true, resize to match smaller sides size and crop center. Defaults to False
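A minimal construction sketch follows; it assumes an ImageSegmentationDataset instance named dataset has already been built elsewhere, and the resize and batch size values are arbitrary examples.

    from detectionmetrics.models.tensorflow import ImageSegmentationTensorflowDataset

    # `dataset` is assumed to be an already-built ImageSegmentationDataset
    tf_dataset = ImageSegmentationTensorflowDataset(
        dataset,
        resize=(512, 512),   # arbitrary example target size
        batch_size=4,
        splits=["test"],
        keep_aspect=True,    # resize to match the smaller side, then center crop
    )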
- load_data(idx: str, images_fnames: List[str], labels_fnames: List[str]) Tuple[tensorflow.Tensor, tensorflow.Tensor]
Function for loading data for each dataset sample
- Parameters:
idx (str) – Sample index
images_fnames (List[str]) – List containing all image filenames
labels_fnames (List[str]) – List containing all corresponding label filenames
- Returns:
Image and label tensor pairs
- Return type:
Tuple[tf.Tensor, tf.Tensor]
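A hedged call sketch; the filenames below are hypothetical placeholders that would normally come from the wrapped dataset.

    # Hypothetical sample index and filenames
    image_tensor, label_tensor = tf_dataset.load_data(
        "0",                                 # sample index as a string
        images_fnames=["images/0001.png"],
        labels_fnames=["labels/0001.png"],
    )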
- read_image(fname: str, label=False) tensorflow.Tensor
Read a single image or label
- Parameters:
fname (str) – Input image or label filename
label (bool, optional) – Whether the input data is a label or not, defaults to False
- Returns:
Tensorflow tensor containing read image or label
- Return type:
tf.Tensor
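A minimal sketch; the filenames are hypothetical placeholders.

    # Read an image and its corresponding label as TensorFlow tensors
    image_tensor = tf_dataset.read_image("images/0001.png")
    label_tensor = tf_dataset.read_image("labels/0001.png", label=True)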
- class detectionmetrics.models.tensorflow.TensorflowImageSegmentationModel(model: str | tensorflow.Module | tensorflow.keras.Model, model_cfg: str, ontology_fname: str)
Bases:
ImageSegmentationModel
Image segmentation model for Tensorflow framework
- Parameters:
model (Union[str, tf.Module, tf.keras.Model]) – Either the filename of a Tensorflow model in SavedModel format or the model already loaded as an arbitrary Tensorflow or Keras model.
model_cfg (str) – JSON file containing model configuration
ontology_fname (str) – JSON file containing model output ontology
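A minimal construction sketch, assuming a SavedModel directory and the two JSON files exist at the hypothetical paths shown.

    from detectionmetrics.models.tensorflow import TensorflowImageSegmentationModel

    model = TensorflowImageSegmentationModel(
        model="saved_model/",            # SavedModel directory or an already-loaded tf/Keras model
        model_cfg="model_cfg.json",      # JSON with model configuration
        ontology_fname="ontology.json",  # JSON with model output ontology
    )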
- eval(dataset: ImageSegmentationDataset, split: str | List[str] = 'test', ontology_translation: str | None = None, predictions_outdir: str | None = None, results_per_sample: bool = False) DataFrame
Perform evaluation for an image segmentation dataset
- Parameters:
dataset (ImageSegmentationDataset) – Image segmentation dataset for which the evaluation will be performed
split (str | List[str], optional) – Split to be used from the dataset, defaults to “test”
ontology_translation (str, optional) – JSON file containing translation between dataset and model output ontologies
predictions_outdir (Optional[str], optional) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
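A usage sketch, assuming dataset is an ImageSegmentationDataset and that the translation file and output directory below are hypothetical paths; omit ontology_translation when the dataset and model ontologies already match.

    results = model.eval(
        dataset,
        split="test",
        ontology_translation="ontology_translation.json",  # optional, hypothetical path
        predictions_outdir="predictions/",                  # optional, hypothetical directory
        results_per_sample=True,                            # requires predictions_outdir
    )
    print(results)  # pd.DataFrame with the evaluation results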
- get_computational_cost(image_size: Tuple[int] | None = None, runs: int = 30, warm_up_runs: int = 5) dict
Get different metrics related to the computational cost of the model
- Parameters:
image_size (Tuple[int], optional) – Image size used for inference, defaults to None
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
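A call sketch; the image_size value is an assumed example, since the expected tuple layout is not spelled out here.

    cost = model.get_computational_cost(
        image_size=(512, 512),  # assumed example value
        runs=30,
        warm_up_runs=5,
    )
    print(cost)  # dict with computational cost metrics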
- inference(image: Image) Image
Perform inference for a single image
- Parameters:
image (Image.Image) – PIL image
- Returns:
Segmentation result as PIL image
- Return type:
Image.Image
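A minimal sketch with a hypothetical input image path.

    from PIL import Image

    image = Image.open("sample.png").convert("RGB")  # hypothetical input image
    segmentation = model.inference(image)            # segmentation result as a PIL image
    segmentation.save("sample_segmentation.png")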
- detectionmetrics.models.tensorflow.crop_center(image: tensorflow.Tensor, width: int, height: int) tensorflow.Tensor
Crop tensorflow image center to target size
- Parameters:
image (tf.Tensor) – Input image tensor
width (int) – Target width for cropping
height (int) – Target height for cropping
- Returns:
Cropped image tensor
- Return type:
tf.Tensor
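A sketch assuming an HWC-shaped image tensor; the exact expected layout is not documented here.

    import tensorflow as tf

    from detectionmetrics.models.tensorflow import crop_center

    image = tf.zeros((480, 640, 3))                      # assumed HWC layout
    cropped = crop_center(image, width=256, height=256)
    print(cropped.shape)                                 # center 256x256 region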
- detectionmetrics.models.tensorflow.get_computational_cost(model: tensorflow.Module, dummy_input: tensorflow.Tensor, model_fname: str | None = None, runs: int = 30, warm_up_runs: int = 5) dict
Get different metrics related to the computational cost of the model
- Parameters:
model (tf.Module) – Loaded TensorFlow SavedModel
dummy_input (tf.Tensor) – Dummy input data for the model
model_fname (Optional[str], optional) – Model filename used to measure model size, defaults to None
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
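A sketch assuming an NHWC dummy input and a hypothetical SavedModel path.

    import tensorflow as tf

    from detectionmetrics.models.tensorflow import get_computational_cost

    tf_model = tf.saved_model.load("saved_model/")     # hypothetical SavedModel path
    dummy_input = tf.random.uniform((1, 512, 512, 3))  # assumed NHWC input shape
    cost = get_computational_cost(
        tf_model,
        dummy_input,
        model_fname="saved_model/",  # used only to measure model size on disk
        runs=30,
        warm_up_runs=5,
    )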
- detectionmetrics.models.tensorflow.resize_image(image: tensorflow.Tensor, method: str, width: int | None = None, height: int | None = None, closest_divisor: int = 16) tensorflow.Tensor
Resize tensorflow image to target size. If only one dimension is provided, the aspect ratio is preserved.
- Parameters:
image (tf.Tensor) – Input image tensor
method (str) – Resizing method (e.g. bilinear, nearest)
width (Optional[int], optional) – Target width for resizing
height (Optional[int], optional) – Target height for resizing
closest_divisor (int, optional) – Closest divisor for the target size, defaults to 16. Only applies to the dimension not provided.
- Returns:
Resized image tensor
- Return type:
tf.Tensor
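A sketch showing aspect-preserving resizing when only one dimension is given; an HWC layout is assumed.

    import tensorflow as tf

    from detectionmetrics.models.tensorflow import resize_image

    image = tf.zeros((480, 640, 3))  # assumed HWC layout
    # Only the height is given: the width follows from the aspect ratio and is
    # snapped to the closest multiple of `closest_divisor` (16 by default)
    resized = resize_image(image, method="bilinear", height=256)
    print(resized.shape)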