perceptionmetrics.models package¶
Subpackages¶
- perceptionmetrics.models.utils package
Submodules¶
perceptionmetrics.models.detection module¶
- class perceptionmetrics.models.detection.DetectionModel(model, model_type, model_cfg, ontology_fname, model_fname=None)[source]¶
Bases: PerceptionModel
Parent detection model class
- Parameters:
model (Any) – Detection model object
model_type (str) – Model type (e.g. scripted, compiled, etc.)
model_cfg (str) – JSON file containing model configuration
ontology_fname (str) – JSON file containing model output ontology
model_fname (Optional[str], optional) – Model file or directory, defaults to None
- abstract eval(dataset, split='test', ontology_translation=None, predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for a detection dataset
- Parameters:
dataset (ImageDetectionDataset) – Detection dataset for which evaluation will be performed
split (Union[str, List[str]]) – Split(s) to use, defaults to “test”
ontology_translation (Optional[str]) – JSON file containing translation between dataset and model output ontologies
predictions_outdir (Optional[str]) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation metrics
- Return type:
pd.DataFrame
- class perceptionmetrics.models.detection.ImageDetectionModel(model, model_type, model_cfg, ontology_fname, model_fname=None)[source]¶
Bases: DetectionModel
Parent image detection model class
- Parameters:
model (Any) – Detection model object
model_type (str) – Model type (e.g. scripted, compiled, etc.)
model_cfg (str) – JSON file containing model configuration (e.g. image size or normalization parameters)
ontology_fname (str) – JSON file containing model output ontology
model_fname (Optional[str], optional) – Model file or directory, defaults to None
- abstract eval(dataset, split='test', ontology_translation=None, predictions_outdir=None, results_per_sample=False)[source]¶
Evaluate the image detection model
- Parameters:
dataset (ImageDetectionDataset) – Image detection dataset for which the evaluation will be performed
split (Union[str, List[str]]) – Split(s) to use, defaults to “test”
ontology_translation (Optional[str]) – JSON file containing translation between dataset and model output ontologies
predictions_outdir (Optional[str]) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation metrics
- Return type:
pd.DataFrame
- class perceptionmetrics.models.detection.LiDARDetectionModel(model, model_type, model_cfg, ontology_fname, model_fname=None)[source]¶
Bases: DetectionModel
Parent LiDAR detection model class
- Parameters:
model (Any) – Detection model object
model_type (str) – Model type (e.g. scripted, compiled, etc.)
model_cfg (str) – JSON file with model configuration
ontology_fname (str) – JSON file containing model output ontology
model_fname (Optional[str], optional) – Model file or directory, defaults to None
- abstract eval(dataset, split='test', ontology_translation=None, predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for a LiDAR detection dataset
- Parameters:
dataset (LiDARDetectionDataset) – LiDAR detection dataset for which the evaluation will be performed
split (Union[str, List[str]]) – Split or splits to be used from the dataset, defaults to “test”
ontology_translation (Optional[str]) – JSON file containing translation between dataset and model output ontologies
predictions_outdir (Optional[str]) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation metrics
- Return type:
pd.DataFrame
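The three classes above are abstract; concrete backends such as the PyTorch classes further down this page implement eval. Below is a minimal sketch of a custom backend, assuming only the documented constructor and eval signatures (the model object, file names, and metric computation are hypothetical placeholders):

```python
import pandas as pd

from perceptionmetrics.models.detection import ImageDetectionModel


class MyImageDetectionModel(ImageDetectionModel):
    """Hypothetical backend implementing the abstract eval() interface."""

    def eval(self, dataset, split="test", ontology_translation=None,
             predictions_outdir=None, results_per_sample=False):
        rows = []
        # ... run inference over `dataset` and accumulate metric rows here ...
        return pd.DataFrame(rows)  # eval() must return a DataFrame of metrics


model = MyImageDetectionModel(
    model=...,                       # loaded detection model object (Any)
    model_type="scripted",
    model_cfg="model_cfg.json",      # JSON model configuration
    ontology_fname="ontology.json",  # JSON output ontology
)
```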
perceptionmetrics.models.onnx module¶
- class perceptionmetrics.models.onnx.OnnxImageSegmentationModel(model, model_type, ontology_fname, model_cfg, model_fname)[source]¶
Bases: ImageSegmentationModel
perceptionmetrics.models.perception module¶
- class perceptionmetrics.models.perception.PerceptionModel(model, model_type, model_cfg, ontology_fname, model_fname=None)[source]¶
Bases: ABC
Base class for all vision perception models (e.g., segmentation, detection).
- Parameters:
model (Any) – Model object
model_type (str) – Model type (e.g. scripted, compiled, etc.)
model_cfg (str) – JSON file containing model configuration
ontology_fname (str) – JSON file containing model output ontology
model_fname (Optional[str], optional) – Model file or directory, defaults to None
- abstract eval(*args, **kwargs)[source]¶
Evaluate the model on the given dataset.
- Return type:
DataFrame
- abstract get_computational_cost(runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of the model
- Parameters:
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
- get_lut_ontology(dataset_ontology, ontology_translation=None)[source]¶
Build the ontology lookup table (left empty if the model and dataset ontologies match)
- Parameters:
dataset_ontology (dict) – Image or LiDAR dataset ontology
ontology_translation (Optional[str], optional) – JSON file containing translation between model and dataset ontologies, defaults to None
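A minimal usage sketch for get_lut_ontology, assuming model is any instantiated PerceptionModel subclass and that the dataset ontology is available as a dict (the exact JSON schema is defined by the package and not shown here); file names are hypothetical:

```python
import json

model = ...  # any instantiated PerceptionModel subclass

# Hypothetical: load the dataset ontology from its JSON file.
with open("dataset_ontology.json") as f:
    dataset_ontology = json.load(f)

# The LUT is left empty when model and dataset ontologies already match;
# otherwise the translation JSON maps between the two ontologies.
lut = model.get_lut_ontology(
    dataset_ontology,
    ontology_translation="translation.json",  # hypothetical path
)
```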
perceptionmetrics.models.segmentation module¶
- class perceptionmetrics.models.segmentation.ImageSegmentationModel(model, model_type, model_cfg, ontology_fname, model_fname=None)[source]¶
Bases: SegmentationModel
Parent image segmentation model class
- Parameters:
model (Any) – Image segmentation model object
model_type (str) – Model type (e.g. scripted, compiled, etc.)
model_cfg (str) – JSON file containing model configuration (e.g. image size or normalization parameters)
ontology_fname (str) – JSON file containing model output ontology
model_fname (Optional[str], optional) – Model file or directory
- abstract eval(dataset, split='test', ontology_translation=None, translations_direction='dataset_to_model', predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for an image segmentation dataset
- Parameters:
dataset (ImageSegmentationDataset) – Image segmentation dataset for which the evaluation will be performed
split (Union[str, List[str]], optional) – Split or splits to be used from the dataset, defaults to “test”
ontology_translation (str, optional) – JSON file containing translation between dataset and model output ontologies
translations_direction (str, optional) – Direction of the ontology translation. Either “dataset_to_model” or “model_to_dataset”, defaults to “dataset_to_model”
predictions_outdir (Optional[str], optional) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
- abstract get_computational_cost(image_size=None, runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of the model
- Parameters:
image_size (Tuple[int], optional) – Image size used for inference
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
- abstract predict(image, return_sample=False)[source]¶
Perform prediction for a single image
- Parameters:
image (Image.Image) – PIL image
return_sample (bool, optional) – Whether to return the sample data along with predictions, defaults to False
- Returns:
Segmentation result as a PIL image or a tuple with the segmentation result and the input sample tensor
- Return type:
Union[Image.Image, Tuple[Image.Image, Any]]
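A minimal sketch of the predict() interface above, using the concrete TorchImageSegmentationModel documented later on this page; file names are hypothetical:

```python
from PIL import Image

from perceptionmetrics.models.torch_segmentation import TorchImageSegmentationModel

model = TorchImageSegmentationModel(
    model="segmenter.pt",            # hypothetical TorchScript model file
    model_cfg="model_cfg.json",
    ontology_fname="ontology.json",
)

image = Image.open("frame_000123.png")  # hypothetical input image
mask = model.predict(image)             # segmentation result as a PIL image
mask, sample = model.predict(image, return_sample=True)  # also returns the input tensor
```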
- class perceptionmetrics.models.segmentation.LiDARSegmentationModel(model, model_type, model_cfg, ontology_fname, model_fname=None)[source]¶
Bases: SegmentationModel
Parent LiDAR segmentation model class
- Parameters:
model (Any) – LiDAR segmentation model object
model_type (str) – Model type (e.g. scripted, compiled, etc.)
model_cfg (str) – JSON file containing model configuration (e.g. sampling method, input format, etc.)
ontology_fname (str) – JSON file containing model output ontology
model_fname (Optional[str], optional) – Model file or directory
- abstract eval(dataset, split='test', ontology_translation=None, translations_direction='dataset_to_model', predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for a LiDAR segmentation dataset
- Parameters:
dataset (LiDARSegmentationDataset) – LiDAR segmentation dataset for which the evaluation will be performed
split (Union[str, List[str]], optional) – Split or splits to be used from the dataset, defaults to “test”
ontology_translation (str, optional) – JSON file containing translation between dataset and model output ontologies
translations_direction (str, optional) – Direction of the ontology translation. Either “dataset_to_model” or “model_to_dataset”, defaults to “dataset_to_model”
predictions_outdir (Optional[str], optional) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
- abstract get_computational_cost(runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of the model
- Parameters:
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
- abstract predict(points_fname, has_intensity=True, return_sample=False)[source]¶
Perform prediction for a single point cloud
- Parameters:
points_fname (str) – Point cloud in SemanticKITTI .bin format
has_intensity (bool, optional) – Whether the point cloud has intensity values, defaults to True
return_sample (bool, optional) – Whether to return the sample data along with predictions, defaults to False
- Returns:
Segmentation result as a numpy array or a tuple with the segmentation result and the input sample data
- Return type:
Union[np.ndarray, Tuple[np.ndarray, Any]]
- class perceptionmetrics.models.segmentation.SegmentationModel(model, model_type, model_cfg, ontology_fname, model_fname=None)[source]¶
Bases: PerceptionModel
Parent segmentation model class
- Parameters:
model (Any) – Segmentation model object
model_type (str) – Model type (e.g. scripted, compiled, etc.)
model_cfg (str) – JSON file containing model configuration
ontology_fname (str) – JSON file containing model output ontology
model_fname (Optional[str], optional) – Model file or directory, defaults to None
- abstract eval(dataset, split='test', ontology_translation=None, translations_direction='dataset_to_model', predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for a segmentation dataset
- Parameters:
dataset (ImageSegmentationDataset) – Segmentation dataset for which the evaluation will be performed
split (Union[str, List[str]], optional) – Split or splits to be used from the dataset, defaults to “test”
ontology_translation (str, optional) – JSON file containing translation between dataset and model output ontologies
translations_direction (str, optional) – Direction of the ontology translation. Either “dataset_to_model” or “model_to_dataset”, defaults to “dataset_to_model”
predictions_outdir (Optional[str], optional) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
perceptionmetrics.models.tf_segmentation module¶
- class perceptionmetrics.models.tf_segmentation.ImageSegmentationTensorflowDataset(dataset, resize=None, crop=None, batch_size=1, splits=['test'], lut_ontology=None, normalization=None, keep_aspect=False)[source]¶
Bases: object
Dataset for image segmentation TensorFlow models
- Parameters:
dataset (ImageSegmentationDataset) – Image segmentation dataset
resize (Optional[Tuple[int, int]], optional) – Target size for resizing images, defaults to None
crop (Optional[Tuple[int, int]], optional) – Target size for center cropping images, defaults to None
batch_size (int, optional) – Batch size, defaults to 1
splits (List[str], optional) – Splits to be used from the dataset, defaults to [“test”]
lut_ontology (dict, optional) – LUT to transform label classes, defaults to None
normalization (dict, optional) – Parameters for normalizing input images, defaults to None
keep_aspect (bool, optional) – Whether to keep the aspect ratio when resizing images. If True, resize so that the smaller side matches the target size, then crop the center. Defaults to False
- load_data(idx, images_fnames, labels_fnames)[source]¶
Function for loading data for each dataset sample
- Parameters:
idx (str) – Sample index
images_fnames (List[str]) – List containing all image filenames
labels_fnames (List[str]) – List containing all corresponding label filenames
- Returns:
Image and label tensor pairs
- Return type:
Tuple[tf.Tensor, tf.Tensor]
- class perceptionmetrics.models.tf_segmentation.TensorflowImageSegmentationModel(model, model_cfg, ontology_fname)[source]¶
Bases: ImageSegmentationModel
Image segmentation model for the TensorFlow framework
- Parameters:
model (Union[str, tf.Module, tf.keras.Model]) – Either the path to a TensorFlow model in SavedModel format or an already loaded TensorFlow or Keras model.
model_cfg (str) – JSON file containing model configuration
ontology_fname (str) – JSON file containing model output ontology
- eval(dataset, split='test', ontology_translation=None, translations_direction='dataset_to_model', predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for an image segmentation dataset
- Parameters:
dataset (ImageSegmentationDataset) – Image segmentation dataset for which the evaluation will be performed
split (Union[str, List[str]], optional) – Split or splits to be used from the dataset, defaults to “test”
ontology_translation (str, optional) – JSON file containing translation between dataset and model output ontologies
translations_direction (str, optional) – Direction of the ontology translation. Either “dataset_to_model” or “model_to_dataset”, defaults to “dataset_to_model”
predictions_outdir (Optional[str], optional) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
- get_computational_cost(image_size=None, runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of the model
- Parameters:
image_size (Tuple[int], optional) – Image size used for inference
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
- inference(tensor_in)[source]¶
Perform inference for a tensor
- Parameters:
tensor_in (tf.Tensor) – Input image tensor
- Returns:
Segmentation result as tensor
- Return type:
tf.Tensor
- predict(image, return_sample=False)[source]¶
Perform prediction for a single image
- Parameters:
image (Image.Image) – PIL image
return_sample (bool, optional) – Whether to return the sample data along with predictions, defaults to False
- Returns:
Segmentation result as a PIL image or a tuple with the segmentation result and the input sample tensor
- Return type:
Union[Image.Image, Tuple[Image.Image, tf.Tensor]]
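A minimal end-to-end sketch for TensorflowImageSegmentationModel, assuming an already constructed ImageSegmentationDataset; file and directory names are hypothetical:

```python
from perceptionmetrics.models.tf_segmentation import TensorflowImageSegmentationModel

model = TensorflowImageSegmentationModel(
    model="saved_model/",            # hypothetical SavedModel directory
    model_cfg="model_cfg.json",
    ontology_fname="ontology.json",
)

dataset = ...  # an ImageSegmentationDataset instance

results = model.eval(
    dataset,
    split="test",
    ontology_translation="translation.json",  # optional, hypothetical path
    translations_direction="dataset_to_model",
)
print(results)  # pd.DataFrame with evaluation results

cost = model.get_computational_cost(image_size=(512, 512))  # dict of cost metrics
```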
- perceptionmetrics.models.tf_segmentation.crop_center(image, width, height)[source]¶
Crop the center of a TensorFlow image to the target size
- Parameters:
image (tf.Tensor) – Input image tensor
width (int) – Target width for cropping
height (int) – Target height for cropping
- Returns:
Cropped image tensor
- Return type:
tf.Tensor
- perceptionmetrics.models.tf_segmentation.resize_image(image, method, width=None, height=None, closest_divisor=16)[source]¶
Resize a TensorFlow image to the target size. If only one dimension is provided, the aspect ratio is preserved.
- Parameters:
image (tf.Tensor) – Input image tensor
method (str) – Resizing method (e.g. bilinear, nearest)
width (Optional[int], optional) – Target width for resizing
height (Optional[int], optional) – Target height for resizing
closest_divisor (int, optional) – Closest divisor for the target size, defaults to 16. Only applies to the dimension not provided.
- Returns:
Resized image tensor
- Return type:
tf.Tensor
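A short sketch combining resize_image and crop_center, mirroring the keep-aspect pipeline described for ImageSegmentationTensorflowDataset above (resize the smaller side, then center-crop); the input tensor is a dummy:

```python
import tensorflow as tf

from perceptionmetrics.models.tf_segmentation import crop_center, resize_image

image = tf.zeros([720, 1280, 3])  # dummy HxWxC image tensor

# Only height is given, so the width is derived from the aspect ratio and
# adjusted according to closest_divisor (default 16).
resized = resize_image(image, method="bilinear", height=512)
cropped = crop_center(resized, width=512, height=512)
```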
perceptionmetrics.models.torch_detection module¶
- class perceptionmetrics.models.torch_detection.ImageDetectionTorchDataset(dataset, transform, splits=['test'])[source]¶
Bases: Dataset
Dataset for image detection PyTorch models
- Parameters:
dataset (ImageDetectionDataset) – Image detection dataset
transform (transforms.Compose) – Transformation to be applied to images
splits (List[str], optional) – Splits to be used from the dataset, defaults to [“test”]
- class perceptionmetrics.models.torch_detection.TorchImageDetectionModel(model, model_cfg, ontology_fname, device=None)[source]¶
Bases: ImageDetectionModel
- Parameters:
model (str | torch.nn.Module)
model_cfg (str)
ontology_fname (str)
device (torch.device)
- eval(dataset, split='test', ontology_translation=None, predictions_outdir=None, results_per_sample=False, progress_callback=None, metrics_callback=None)[source]¶
Evaluate model over a detection dataset and compute metrics
- Parameters:
dataset (ImageDetectionDataset) – Image detection dataset
split (Union[str, List[str]]) – Dataset split(s) to evaluate
ontology_translation (Optional[str]) – Optional translation for class mapping
predictions_outdir (Optional[str]) – Directory to save predictions, if desired
results_per_sample (bool) – Store per-sample metrics
progress_callback (Optional[Callable[[int, int], None]]) – Optional callback function for progress updates in Streamlit UI
metrics_callback (Optional[Callable[[pd.DataFrame, int, int], None]]) – Optional callback function for intermediate metrics updates in Streamlit UI
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
- get_computational_cost(image_size, runs=30, warm_up_runs=5)[source]¶
Get computational cost metrics like inference time
- Parameters:
image_size (Tuple[int]) – Size of input image (H, W)
runs (int) – Number of repeated runs to average over
warm_up_runs (int) – Warm-up runs before timing
- Returns:
Dictionary with computational cost details
- Return type:
dict
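A minimal usage sketch for TorchImageDetectionModel.eval(), including the optional progress callback; the dataset, model file, and output directory are hypothetical:

```python
from perceptionmetrics.models.torch_detection import TorchImageDetectionModel


def on_progress(done, total):
    # E.g. drive a Streamlit progress bar.
    print(f"{done}/{total} samples evaluated")


model = TorchImageDetectionModel(
    model="detector.pt",             # hypothetical model file or torch.nn.Module
    model_cfg="model_cfg.json",
    ontology_fname="ontology.json",
)

dataset = ...  # an ImageDetectionDataset instance

results = model.eval(
    dataset,
    split=["val", "test"],           # a single split or a list of splits
    predictions_outdir="preds/",     # required when results_per_sample=True
    results_per_sample=True,
    progress_callback=on_progress,
)
```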
- perceptionmetrics.models.torch_detection.data_to_device(data, device)[source]¶
Move detection input or target data (dict or list of dicts) to the specified device.
- Parameters:
data (Union[Dict[str, torch.Tensor], List[Dict[str, torch.Tensor]]]) – Detection data (a single dict or list of dicts with tensor values)
device (torch.device) – Device to move data to
- Returns:
Data with all tensors moved to the target device
- Return type:
Union[Dict[str, torch.Tensor], List[Dict[str, torch.Tensor]]]
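A short sketch of data_to_device with torchvision-style detection targets (a list of dicts holding tensors):

```python
import torch

from perceptionmetrics.models.torch_detection import data_to_device

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Detection targets as a list of dicts with tensor values.
targets = [
    {"boxes": torch.tensor([[10.0, 20.0, 50.0, 80.0]]),
     "labels": torch.tensor([1])},
]
targets = data_to_device(targets, device)  # all tensors now live on `device`
```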
- perceptionmetrics.models.torch_detection.get_computational_cost(model, dummy_input, model_fname=None, runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of a model.
- Parameters:
model (Any) – TorchScript or PyTorch model (segmentation, detection, etc.)
dummy_input (Union[torch.Tensor, tuple, list]) – Dummy input data (Tensor, Tuple, or List of Dicts for detection)
model_fname (Optional[str]) – Optional path to model file for size estimation
runs (int) – Number of timed runs
warm_up_runs (int) – Warm-up iterations before timing
- Returns:
DataFrame with size, inference time, parameter count, etc.
- Return type:
pd.DataFrame
perceptionmetrics.models.torch_segmentation module¶
- class perceptionmetrics.models.torch_segmentation.CustomResize(width=None, height=None, interpolation=torchvision.transforms.v2.functional.InterpolationMode.BILINEAR, closest_divisor=16)[source]¶
Bases: Module
Custom resize transformation for PyTorch. If only one dimension is provided, the aspect ratio is preserved.
- Parameters:
width (Optional[int], optional) – Target width for resizing
height (Optional[int], optional) – Target height for resizing
interpolation (F.InterpolationMode, defaults to F.InterpolationMode.BILINEAR) – Interpolation mode for resizing (e.g. NEAREST, BILINEAR)
closest_divisor (int, optional) – Closest divisor for the target size, defaults to 16. Only applies to the dimension not provided.
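A minimal sketch of CustomResize inside a torchvision transform pipeline, assuming it composes with the standard transforms; when only the height is given, the width follows the aspect ratio and is adjusted via closest_divisor:

```python
from torchvision import transforms

from perceptionmetrics.models.torch_segmentation import CustomResize

transform = transforms.Compose([
    CustomResize(height=512),  # width derived from aspect ratio, divisor-adjusted
    transforms.ToTensor(),
])
```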
- class perceptionmetrics.models.torch_segmentation.ImageSegmentationTorchDataset(dataset, transform, target_transform, splits=['test'])[source]¶
Bases: Dataset
Dataset for image segmentation PyTorch models
- Parameters:
dataset (ImageSegmentationDataset) – Image segmentation dataset
transform (transforms.Compose) – Transformation to be applied to images
target_transform (transforms.Compose) – Transformation to be applied to labels
splits (List[str], optional) – Splits to be used from the dataset, defaults to [“test”]
- class perceptionmetrics.models.torch_segmentation.LiDARSegmentationTorchDataset(dataset, model_cfg, get_sample, splits=['test'])[source]¶
Bases: Dataset
Dataset for LiDAR segmentation PyTorch (Open3D-ML) models
- Parameters:
dataset (LiDARSegmentationDataset) – LiDAR segmentation dataset
model_cfg (dict) – Dictionary containing model configuration
get_sample (callable) – Function for loading sample data
splits (List[str], optional) – Splits to be used from the dataset, defaults to [“test”]
- class perceptionmetrics.models.torch_segmentation.TorchImageSegmentationModel(model, model_cfg, ontology_fname)[source]¶
Bases: ImageSegmentationModel
- Parameters:
model (str | torch.nn.Module)
model_cfg (str)
ontology_fname (str)
- eval(dataset, split='test', ontology_translation=None, predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for an image segmentation dataset
- Parameters:
dataset (ImageSegmentationDataset) – Image segmentation dataset for which the evaluation will be performed
split (Union[str, List[str]], optional) – Split or splits to be used from the dataset, defaults to “test”
ontology_translation (str, optional) – JSON file containing translation between dataset and model output ontologies
predictions_outdir (Optional[str], optional) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
- get_computational_cost(image_size, runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of the model
- Parameters:
image_size (Tuple[int]) – Image size used for inference
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
- inference(tensor_in)[source]¶
Perform inference for a tensor
- Parameters:
tensor_in (torch.Tensor) – Input image tensor
- Returns:
Segmentation result as tensor
- Return type:
torch.Tensor
- predict(image, return_sample=False)[source]¶
Perform prediction for a single image
- Parameters:
image (Image.Image) – PIL image
return_sample (bool, optional) – Whether to return the sample data along with predictions, defaults to False
- Returns:
Segmentation result as a PIL image or a tuple with the segmentation result and the input sample tensor
- Return type:
Union[Image.Image, Tuple[Image.Image, torch.Tensor]]
- class perceptionmetrics.models.torch_segmentation.TorchLiDARSegmentationModel(model, model_cfg, ontology_fname)[source]¶
Bases: LiDARSegmentationModel
- Parameters:
model (str | torch.nn.Module)
model_cfg (str)
ontology_fname (str)
- eval(dataset, split='test', ontology_translation=None, translation_direction='dataset_to_model', predictions_outdir=None, results_per_sample=False)[source]¶
Perform evaluation for a LiDAR segmentation dataset
- Parameters:
dataset (LiDARSegmentationDataset) – LiDAR segmentation dataset for which the evaluation will be performed
split (Union[str, List[str]], optional) – Split or splits to be used from the dataset, defaults to “test”
ontology_translation (Optional[str], optional) – JSON file containing translation between dataset and model output ontologies
translation_direction (str, optional) – Direction of the ontology translation, either ‘dataset_to_model’ or ‘model_to_dataset’, defaults to “dataset_to_model”
predictions_outdir (Optional[str], optional) – Directory to save predictions per sample, defaults to None. If None, predictions are not saved.
results_per_sample (bool, optional) – Whether to store results per sample or not, defaults to False. If True, predictions_outdir must be provided.
- Returns:
DataFrame containing evaluation results
- Return type:
pd.DataFrame
- get_computational_cost(point_cloud_range=(-50, -50, -5, 50, 50, 5), num_points=100000, has_intensity=False, runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of the model
- Parameters:
point_cloud_range (Tuple[int, int, int, int, int, int], optional) – Point cloud range (meters), defaults to (-50, -50, -5, 50, 50, 5)
num_points (int, optional) – Number of points in the point cloud, defaults to 100000
has_intensity (bool, optional) – Whether the point cloud has intensity values, defaults to False
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
Dictionary containing computational cost information
- Return type:
dict
- inference(sample, model, model_cfg, measure_processing_time=False)[source]¶
Perform inference for a sample
- Parameters:
sample (dict) – Sample data
model (torch.nn.Module) – PyTorch model
model_cfg (dict) – Dictionary containing model configuration
measure_processing_time (bool, optional) – Whether to measure processing time, defaults to False
- Returns:
Tuple of (predictions, labels, names) and a processing-time dictionary (if measured)
- Return type:
Tuple[Tuple[torch.Tensor, Optional[torch.Tensor], List[str]], Optional[dict]]
- predict(points_fname, has_intensity=True, return_sample=False, ignore_index=None)[source]¶
Perform prediction for a single point cloud
- Parameters:
points_fname (str) – Point cloud in SemanticKITTI .bin format
has_intensity (bool, optional) – Whether the point cloud has intensity values, defaults to True
return_sample (bool, optional) – Whether to return the sample data along with predictions, defaults to False
ignore_index (Optional[List[int]], optional) – List of class indices to ignore during prediction, defaults to None
- Returns:
Segmentation result as a numpy array or a tuple with the segmentation result and the input sample data
- Return type:
Union[np.ndarray, Tuple[np.ndarray, Any]]
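A minimal sketch of TorchLiDARSegmentationModel.predict() on a SemanticKITTI-style scan; file names and the ignored class index are hypothetical:

```python
from perceptionmetrics.models.torch_segmentation import TorchLiDARSegmentationModel

model = TorchLiDARSegmentationModel(
    model="lidar_segmenter.pt",      # hypothetical model file or torch.nn.Module
    model_cfg="model_cfg.json",
    ontology_fname="ontology.json",
)

# Point cloud in SemanticKITTI .bin format (float32 x, y, z, intensity per point).
labels = model.predict(
    "sequences/08/velodyne/000000.bin",
    has_intensity=True,
    ignore_index=[0],                # hypothetical: skip the "unlabeled" class
)
# `labels` is a np.ndarray holding the per-point segmentation result.
```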
- perceptionmetrics.models.torch_segmentation.get_computational_cost(model, dummy_input, model_fname=None, runs=30, warm_up_runs=5)[source]¶
Get different metrics related to the computational cost of the model
- Parameters:
model (Any) – Either a TorchScript model or an arbitrary PyTorch module
dummy_input (torch.Tensor) – Dummy input data for the model
model_fname (Optional[str], optional) – Model filename used to measure model size, defaults to None
runs (int, optional) – Number of runs to measure inference time, defaults to 30
warm_up_runs (int, optional) – Number of warm-up runs, defaults to 5
- Returns:
DataFrame containing computational cost information
- Return type:
pd.DataFrame
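A minimal usage sketch for the module-level get_computational_cost helper; the model file is hypothetical and the dummy input assumes an NCHW image tensor:

```python
import torch

from perceptionmetrics.models.torch_segmentation import get_computational_cost

net = torch.jit.load("segmenter.pt")   # hypothetical TorchScript model
dummy = torch.randn(1, 3, 512, 512)    # dummy NCHW input tensor

costs = get_computational_cost(
    net,
    dummy,
    model_fname="segmenter.pt",        # used to estimate the model size on disk
    runs=30,
    warm_up_runs=5,
)
print(costs)  # pd.DataFrame with size, inference time, parameter count, etc.
```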