perceptionmetrics.models.utils package

Submodules

perceptionmetrics.models.utils.lsk3dnet module

perceptionmetrics.models.utils.lsk3dnet.collate_fn(samples)[source]

Collate function for batching samples.

Parameters:

samples (List[dict]) – list of sample dictionaries

Returns:

collated batch dictionary

Return type:

dict
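The role of a collate function like this can be illustrated with a minimal sketch: variable-length point arrays from several samples are concatenated and tagged with a batch index, while per-sample metadata is collected into lists. The key names and data layout below are assumptions for illustration, not the library's actual batch format.

```python
# Hypothetical sketch of batching point-cloud samples. Key names
# ("points", "batch_idx", "names") are illustrative only.
def collate_sketch(samples):
    batch = {"points": [], "batch_idx": [], "names": []}
    for i, s in enumerate(samples):
        pts = s["points"]                          # list of (x, y, z) tuples here
        batch["points"].extend(pts)
        batch["batch_idx"].extend([i] * len(pts))  # which sample each point came from
        batch["names"].append(s.get("name"))
    return batch

samples = [
    {"points": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], "name": "a"},
    {"points": [(2.0, 1.0, 0.5)], "name": "b"},
]
batch = collate_sketch(samples)
```

Keeping an explicit batch index is what lets models with variable-length inputs recover sample boundaries after concatenation.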

perceptionmetrics.models.utils.lsk3dnet.compute_normals_range(current_vertex, proj_H=64, proj_W=900, extrapolate=True, blur_type='gaussian')[source]

Compute normals for each point using a range image-based method.
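The core idea of range image-based normal estimation can be sketched independently of the library: for a pixel in the range image, the surface normal is approximated by the normalized cross product of the vectors to its right and lower neighbours. This is a generic illustration, not the function's actual implementation.

```python
# Illustrative normal estimation from range-image neighbours: the normal at a
# pixel is the cross product of the edge vectors to adjacent vertices.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal_from_neighbours(v, v_right, v_down):
    u = tuple(r - c for r, c in zip(v_right, v))   # edge toward right neighbour
    w = tuple(d - c for d, c in zip(v_down, v))    # edge toward lower neighbour
    n = cross(u, w)
    norm = sum(x * x for x in n) ** 0.5 or 1.0     # guard against degenerate edges
    return tuple(x / norm for x in n)

# Three points on the z = 0 plane -> normal along the z axis.
n = normal_from_neighbours((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```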

perceptionmetrics.models.utils.lsk3dnet.get_sample(points_fname, model_cfg, label_fname=None, name=None, idx=None, has_intensity=True, measure_processing_time=False)[source]

Get sample data for LSK3DNet models.

Parameters:
  • points_fname (str) – filename of the point cloud

  • model_cfg (dict) – model configuration

  • label_fname (Optional[str], optional) – filename of the semantic label, defaults to None

  • name (Optional[str], optional) – sample name, defaults to None

  • idx (Optional[int], optional) – sample numerical index, defaults to None

  • has_intensity (bool, optional) – whether the point cloud has intensity values, defaults to True

  • measure_processing_time (bool, optional) – whether to measure processing time, defaults to False

Returns:

sample data dictionary and processing time dictionary (if measured)

Return type:

Tuple[dict, Optional[dict]]

perceptionmetrics.models.utils.lsk3dnet.inference(sample, model, model_cfg, ignore_index=None, measure_processing_time=False)[source]

Perform inference on a sample using an LSK3DNet model.

Parameters:
  • sample (dict) – sample data dictionary

  • model (torch.nn.Module) – LSK3DNet model

  • model_cfg (dict) – model configuration

  • ignore_index (Optional[List[int]], optional) – list of class indices to ignore during inference, defaults to None

  • measure_processing_time (bool, optional) – whether to measure processing time, defaults to False

Returns:

tuple of (predictions, labels, names) and processing time dictionary (if measured)

Return type:

Tuple[Tuple[torch.Tensor, Optional[torch.Tensor], List[str]], Optional[dict]]

perceptionmetrics.models.utils.lsk3dnet.range_projection(current_vertex, fov_up=3.0, fov_down=-25.0, proj_H=64, proj_W=900)[source]

Project a point cloud into a spherical projection (range image).
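The standard spherical projection for LiDAR scans maps each point's azimuth and elevation to pixel coordinates in a `proj_H` × `proj_W` range image. The sketch below uses the parameter names from the signature above, but the body is an illustration of the general technique, not this function's implementation.

```python
import math

# Sketch of a spherical (range-image) projection for a single LiDAR point.
# fov_up / fov_down are in degrees, matching the defaults documented above.
def project_point(x, y, z, fov_up=3.0, fov_down=-25.0, proj_H=64, proj_W=900):
    fov_up_r = math.radians(fov_up)
    fov_down_r = math.radians(fov_down)
    fov = abs(fov_up_r) + abs(fov_down_r)              # total vertical field of view
    r = math.sqrt(x * x + y * y + z * z)               # range (depth)
    yaw = math.atan2(y, x)
    pitch = math.asin(z / r)
    u = 0.5 * (1.0 - yaw / math.pi) * proj_W           # horizontal pixel coordinate
    v = (1.0 - (pitch + abs(fov_down_r)) / fov) * proj_H  # vertical pixel coordinate
    u = min(proj_W - 1, max(0, int(u)))                # clamp to image bounds
    v = min(proj_H - 1, max(0, int(v)))
    return u, v, r

# A point straight ahead on the sensor's horizontal plane lands mid-row.
u, v, r = project_point(10.0, 0.0, 0.0)
```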

perceptionmetrics.models.utils.mmdet3d module

perceptionmetrics.models.utils.mmdet3d.get_sample(points_fname, model_cfg, label_fname=None, name=None, idx=None, has_intensity=True, measure_processing_time=False)[source]

Get sample data for mmdetection3d models.

Parameters:
  • points_fname (str) – filename of the point cloud

  • model_cfg (dict) – model configuration

  • label_fname (Optional[str], optional) – filename of the semantic label, defaults to None

  • name (Optional[str], optional) – sample name, defaults to None

  • idx (Optional[int], optional) – sample numerical index, defaults to None

  • has_intensity (bool, optional) – whether the point cloud has intensity values, defaults to True

  • measure_processing_time (bool, optional) – whether to measure processing time, defaults to False

Returns:

sample data and optionally processing time

Return type:

Tuple[dict, Optional[dict]]

perceptionmetrics.models.utils.mmdet3d.inference(sample, model, model_cfg, ignore_index=None, measure_processing_time=False)[source]

Perform inference on a sample using an mmdetection3d model.

Parameters:
  • sample (dict) – sample data dictionary

  • model (torch.nn.Module) – mmdetection3d model

  • model_cfg (dict) – model configuration

  • ignore_index (Optional[List[int]], optional) – list of class indices to ignore during inference, defaults to None

  • measure_processing_time (bool, optional) – whether to measure processing time, defaults to False

Returns:

predictions, labels (if available), sample names and optionally processing time

Return type:

Tuple[Tuple[torch.Tensor, Optional[torch.Tensor], Optional[List[str]]], Optional[dict]]

perceptionmetrics.models.utils.sphereformer module

perceptionmetrics.models.utils.sphereformer.collate_fn(samples)[source]

Collate function for batching samples.

Parameters:

samples (List[dict]) – list of sample dictionaries

Returns:

collated batch dictionary

Return type:

dict

perceptionmetrics.models.utils.sphereformer.get_sample(points_fname, model_cfg, label_fname=None, name=None, idx=None, has_intensity=True, measure_processing_time=False)[source]

Get sample data for SphereFormer models.

Parameters:
  • points_fname (str) – filename of the point cloud

  • model_cfg (dict) – model configuration

  • label_fname (Optional[str], optional) – filename of the semantic label, defaults to None

  • name (Optional[str], optional) – sample name, defaults to None

  • idx (Optional[int], optional) – sample numerical index, defaults to None

  • has_intensity (bool, optional) – whether the point cloud has intensity values, defaults to True

  • measure_processing_time (bool, optional) – whether to measure processing time, defaults to False

Returns:

sample data dictionary and processing time dictionary (if measured)

Return type:

Tuple[dict, Optional[dict]]

perceptionmetrics.models.utils.sphereformer.inference(sample, model, model_cfg, ignore_index=None, measure_processing_time=False)[source]

Perform inference on a sample using a SphereFormer model.

Parameters:
  • sample (dict) – sample data dictionary

  • model (torch.nn.Module) – SphereFormer model

  • model_cfg (dict) – model configuration

  • ignore_index (Optional[List[int]], optional) – list of class indices to ignore during inference, defaults to None

  • measure_processing_time (bool, optional) – whether to measure processing time, defaults to False

Returns:

tuple of (predictions, labels, names) and processing time dictionary (if measured)

Return type:

Tuple[Tuple[torch.Tensor, Optional[torch.Tensor], List[str]], Optional[dict]]
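Across the lsk3dnet, mmdet3d, and sphereformer helpers the calling pattern is the same: build a sample with get_sample, then pass it to inference. The sketch below shows only that control flow; the function bodies are stubs whose signatures and return shapes mirror the documentation above, not the library's implementations.

```python
# Control-flow sketch of the shared get_sample -> inference pattern.
# Bodies are illustrative stubs; only signatures/return shapes are documented.
def get_sample(points_fname, model_cfg, label_fname=None, name=None,
               idx=None, has_intensity=True, measure_processing_time=False):
    sample = {"points": points_fname, "name": name or points_fname, "idx": idx}
    times = {"load": 0.0} if measure_processing_time else None
    return sample, times

def inference(sample, model, model_cfg, ignore_index=None,
              measure_processing_time=False):
    preds = model(sample["points"])   # stand-in for the real forward pass
    labels = None                     # populated when a label file was provided
    names = [sample["name"]]
    times = {"inference": 0.0} if measure_processing_time else None
    return (preds, labels, names), times

sample, _ = get_sample("000000.bin", model_cfg={}, name="000000")
(preds, labels, names), _ = inference(sample, model=lambda pts: pts, model_cfg={})
```

The optional second element of each return value carries timing information only when measure_processing_time is True, which is why both call sites discard it here.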

perceptionmetrics.models.utils.torchvision module

perceptionmetrics.models.utils.torchvision.postprocess_detection(output, confidence_threshold=0.5)[source]

Post-process torchvision model output.

Parameters:
  • output (dict) – Dictionary with keys ‘boxes’, ‘labels’, and ‘scores’.

  • confidence_threshold (float) – Confidence threshold to filter boxes.

Returns:

Dictionary with keys ‘boxes’, ‘labels’, and ‘scores’.

Return type:

dict
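The filtering step this function performs can be sketched with plain lists standing in for the tensors the real function operates on: detections whose score falls below the threshold are dropped from all three fields in lockstep. This is an illustration of the idea, not the library's code.

```python
# Illustrative confidence filtering: keep only detections whose score clears
# the threshold, slicing boxes/labels/scores consistently.
def filter_by_confidence(output, confidence_threshold=0.5):
    keep = [i for i, s in enumerate(output["scores"]) if s >= confidence_threshold]
    return {k: [output[k][i] for i in keep] for k in ("boxes", "labels", "scores")}

out = filter_by_confidence({
    "boxes": [[0, 0, 10, 10], [5, 5, 20, 20]],
    "labels": [1, 2],
    "scores": [0.9, 0.3],
})
```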

perceptionmetrics.models.utils.yolo module

perceptionmetrics.models.utils.yolo.postprocess_detection(output, confidence_threshold=0.25, nms_threshold=0.45)[source]

Post-process YOLO model output.

Parameters:
  • output (torch.Tensor) – Tensor of shape [num_classes + 4, num_anchors] containing bounding box predictions and class logits.

  • confidence_threshold (float) – Confidence threshold to filter boxes.

  • nms_threshold (float) – IoU threshold for Non-Maximum Suppression (NMS).

Returns:

Dictionary with keys ‘boxes’, ‘labels’, and ‘scores’.

Return type:

dict
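The NMS stage can be sketched in pure Python with boxes as [x1, y1, x2, y2] lists: boxes are visited in descending score order, and each is kept only if its IoU with every already-kept box stays below the threshold. The real function additionally decodes the [num_classes + 4, num_anchors] tensor and applies the confidence threshold first; this sketch covers only the suppression step.

```python
# Minimal greedy non-maximum suppression over axis-aligned boxes.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, nms_threshold=0.45):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # keep box i only if it does not overlap too much with any kept box
        if all(iou(boxes[i], boxes[j]) <= nms_threshold for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping boxes: only the higher-scoring one survives.
kept = nms([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]],
           [0.9, 0.8, 0.7])
```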

Module contents