sapien.sensor package

Submodules

sapien.sensor.activelight module

class sapien.sensor.activelight.ActiveLightSensor(sensor_name: str, renderer: sapien.core.pysapien.KuafuRenderer, scene: sapien.core.pysapien.Scene, sensor_type: Optional[str] = 'fakesense_j415', rgb_resolution: Optional[Tuple[int, int]] = None, ir_resolution: Optional[Tuple[int, int]] = None, rgb_intrinsic: Optional[numpy.ndarray] = None, ir_intrinsic: Optional[numpy.ndarray] = None, trans_pose_l: Optional[sapien.core.pysapien.Pose] = None, trans_pose_r: Optional[sapien.core.pysapien.Pose] = None, light_pattern: Optional[str] = None, max_depth: float = 8.0, min_depth: float = 0.3, ir_ambient_strength: float = 0.002)[source]

Bases: sapien.sensor.sensor_base.SensorEntity

clear_cache()[source]
get_depth()[source]
get_ir()[source]
get_pointcloud(frame='camera', with_rgb=False)[source]
get_pose()[source]
get_rgb()[source]
set_pose(pose: sapien.core.pysapien.Pose)[source]
take_picture()[source]

Note: one scene.update_render call is expected before each take_picture call.
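A minimal usage sketch for the capture loop (the scene/renderer setup is assumed and omitted; the function name and the comments on array contents are ours, not taken from this page):

```python
def capture_rgbd(sensor, scene):
    """Capture one RGB-D frame from an ActiveLightSensor.

    Per the note above, scene.update_render() must be called once
    before each take_picture().
    """
    scene.update_render()
    sensor.take_picture()
    rgb = sensor.get_rgb()        # color image
    depth = sensor.get_depth()    # simulated active-light depth map
    cloud = sensor.get_pointcloud(frame="camera", with_rgb=True)
    return rgb, depth, cloud
```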

sapien.sensor.depth_processor module

sapien.sensor.depth_processor.calc_depth_and_pointcloud(disparity: numpy.ndarray, mask: numpy.ndarray, q: numpy.ndarray, no_pointcloud: bool = False) → Tuple[numpy.ndarray, open3d.geometry.PointCloud][source]

Calculate depth and pointcloud.

Parameters
  • disparity – Disparity

  • mask – Valid mask

  • q – Perspective transformation matrix

  • no_pointcloud – If True, skip the pointcloud computation

Return depth

Depth

Return pointcloud

Pointcloud
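The reprojection this function performs can be illustrated with plain NumPy. The sketch below assumes the cv2.reprojectImageTo3D convention for q (which the rectification function in this module produces); the function name is ours:

```python
import numpy as np

def depth_from_disparity(disparity, mask, q):
    """Recover depth from disparity with a 4x4 perspective matrix q,
    following the cv2.reprojectImageTo3D convention:
    [X, Y, Z, W]^T = q @ [x, y, disparity, 1]^T, depth = Z / W."""
    h, w = disparity.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([xs, ys, disparity, np.ones_like(disparity)], axis=-1)
    homog = pix @ q.T                      # apply q to every homogeneous pixel
    depth = homog[..., 2] / homog[..., 3]  # Z / W
    return np.where(mask, depth, 0.0)
```

For an ideal rectified pair, q encodes focal length f and baseline b so that depth reduces to the familiar f·b / disparity.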

sapien.sensor.depth_processor.calc_disparity(imgl: numpy.ndarray, imgr: numpy.ndarray, method: str, *, ndisp: int = 96, min_disp: int = 0, lr_consistency: bool = True, use_census: bool = True, census_wsize: int = 7) → numpy.ndarray[source]

Calculate disparity given a rectified image pair.

Parameters
  • imgl – Left image

  • imgr – Right image

  • method – Matching method (SGBM or BM)

  • ndisp – Maximum disparity

  • min_disp – Minimum disparity

  • lr_consistency – Use Left-Right Consistency (LRC) check

  • use_census – Apply the census transform to both images before matching

  • census_wsize – Window size for the census transform

Returns

disparity
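The "BM" idea can be sketched as windowed SSD matching in NumPy. This is a toy version for intuition only; the actual implementation is expected to use OpenCV's far more elaborate matchers:

```python
import numpy as np

def _boxsum(x, r):
    """Sum of x over each (2r+1) x (2r+1) window, via an integral image."""
    k = 2 * r + 1
    xp = np.pad(x, r)
    integral = np.pad(np.cumsum(np.cumsum(xp, 0), 1), ((1, 0), (1, 0)))
    return (integral[k:, k:] - integral[:-k, k:]
            - integral[k:, :-k] + integral[:-k, :-k])

def naive_block_match(imgl, imgr, ndisp=16, wsize=5):
    """Toy SSD block matcher: for each left pixel, pick the disparity d
    minimizing the window-aggregated squared difference against the
    right image shifted by d."""
    h, w = imgl.shape
    cost = np.empty((ndisp, h, w))
    for d in range(ndisp):
        diff = np.full((h, w), 1e9)      # penalize out-of-image matches
        diff[:, d:] = (imgl[:, d:].astype(float)
                       - imgr[:, :w - d].astype(float)) ** 2
        cost[d] = _boxsum(diff, wsize // 2)
    return np.argmin(cost, axis=0)
```

On a synthetic pair where the right image is the left shifted by 3 pixels, the matcher recovers disparity 3 away from the image border.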

sapien.sensor.depth_processor.calc_main_depth_from_left_right_ir(ir_l: numpy.ndarray, ir_r: numpy.ndarray, rt_l: numpy.ndarray, rt_r: numpy.ndarray, rt_main: numpy.ndarray, k_l: numpy.ndarray, k_r: numpy.ndarray, k_main: numpy.ndarray, method: str = 'SGBM', ndisp: int = 96, use_noise: bool = True, use_census: bool = True, lr_consistency: bool = False, register_depth: bool = True, register_blur_ksize: int = 5, main_cam_size=(1920, 1080), census_wsize=7, **kwargs) → numpy.ndarray[source]

Calculate depth for rgb camera from left right ir images.

Parameters
  • ir_l – left ir image

  • ir_r – right ir image

  • rt_l – left extrinsic matrix

  • rt_r – right extrinsic matrix

  • rt_main – rgb extrinsic matrix

  • k_l – left intrinsic matrix

  • k_r – right intrinsic matrix

  • k_main – rgb intrinsic matrix

  • method – method for depth calculation (SGBM or BM)

  • ndisp – maximum disparity

  • use_noise – whether to simulate ir noise before processing

  • use_census – whether to apply the census transform before matching

  • lr_consistency – whether to use left-right consistency check

  • register_depth – whether to register the computed depth to the rgb camera frame

  • register_blur_ksize – kernel size of the blur applied during depth registration

  • main_cam_size – resolution (width, height) of the rgb camera

  • census_wsize – window size for the census transform

Return depth

calculated depth
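The registration step (register_depth) amounts to reprojecting each valid depth pixel from the left IR camera into the main RGB camera. A NumPy sketch, assuming 4x4 world-to-camera extrinsics and 3x3 intrinsics (the conventions and function name are ours; the real implementation additionally blurs and handles occlusions):

```python
import numpy as np

def register_depth_to_main(depth, k_src, k_main, rt_src, rt_main, main_size):
    """Reproject a depth map from a source camera (e.g. left IR) into the
    main (rgb) camera.  No z-buffering: later points overwrite earlier
    ones if they land on the same pixel."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # back-project valid pixels to 3D points in the source camera frame
    x = (xs[valid] - k_src[0, 2]) * z / k_src[0, 0]
    y = (ys[valid] - k_src[1, 2]) * z / k_src[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)])          # (4, N)
    # source camera -> world -> main camera
    pts_main = rt_main @ np.linalg.inv(rt_src) @ pts
    # project into the main image plane
    uvw = k_main @ pts_main[:3]
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    mw, mh = main_size
    out = np.zeros((mh, mw))
    ok = (u >= 0) & (u < mw) & (v >= 0) & (v < mh)
    out[v[ok], u[ok]] = pts_main[2][ok]
    return out
```

With identical cameras (same intrinsics, identity extrinsics) the map reproduces itself, which is a convenient sanity check.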

sapien.sensor.depth_processor.calc_rectified_stereo_pair(imgl: numpy.ndarray, imgr: numpy.ndarray, kl: numpy.ndarray, kr: numpy.ndarray, rt: numpy.ndarray, distortl: Optional[numpy.ndarray] = None, distortr: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray][source]

Rectify an image pair with given camera parameters.

Parameters
  • imgl – Left image

  • imgr – Right image

  • kl – Left intrinsic matrix

  • kr – Right intrinsic matrix

  • rt – Extrinsic matrix (left to right)

  • distortl – Left distortion coefficients

  • distortr – Right distortion coefficients

Return imgl_rect

Rectified left image

Return imgr_rect

Rectified right image

Return q

Perspective transformation matrix (for cv2.reprojectImageTo3D)

sapien.sensor.depth_processor.depth_post_processing(depth: numpy.ndarray, ksize: int = 5) → numpy.ndarray[source]
sapien.sensor.depth_processor.get_census(img, wsize=7)[source]
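The census transform behind get_census (and the use_census options above) is standard: each pixel becomes a bit-string of comparisons against its window neighbors, so matching with Hamming distance is robust to the brightness differences typical of IR stereo pairs. A sketch of our own (not sapien's implementation):

```python
import numpy as np

def census_transform(img, wsize=7):
    """Census transform: one bit per neighbor in a wsize x wsize window,
    set when the neighbor is strictly smaller than the center pixel.
    wsize=7 gives 48 bits, which fits in uint64."""
    h, w = img.shape
    r = wsize // 2
    pad = np.pad(img, r, mode="edge")
    code = np.zeros((h, w), dtype=np.uint64)
    for dy in range(wsize):
        for dx in range(wsize):
            if dy == r and dx == r:
                continue  # skip the center pixel itself
            neighbor = pad[dy:dy + h, dx:dx + w]
            code = (code << np.uint64(1)) | (neighbor < img).astype(np.uint64)
    return code
```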
sapien.sensor.depth_processor.pad_lr(img: numpy.ndarray, ndisp: int) → numpy.ndarray[source]
sapien.sensor.depth_processor.sim_ir_noise(img: numpy.ndarray, scale: float = 0.0, blur_ksize: int = 0, blur_ksigma: float = 0.03, speckle_shape: float = 398.12, speckle_scale: float = 0.00254, gaussian_mu: float = -0.231, gaussian_sigma: float = 0.83, seed: int = 0) → numpy.ndarray[source]

Simulate IR camera noise.

Noise model from Landau et al., "Simulating Kinect Infrared and Depth Images".

TODO: IR density model

Parameters
  • img – Input IR image

  • scale – Scale for downsampling & applying gaussian blur

  • blur_ksize – Kernel size for gaussian blur

  • blur_ksigma – Kernel sigma for gaussian blur

  • speckle_shape – Shape parameter for speckle noise (Gamma distribution)

  • speckle_scale – Scale parameter for speckle noise (Gamma distribution)

  • gaussian_mu – Mu for the additive gaussian noise

  • gaussian_sigma – Sigma for the additive gaussian noise

  • seed – Random seed for numpy

Returns

Processed IR image
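Under the parameter names above, the Landau-style model combines multiplicative speckle (Gamma-distributed, with mean ≈ speckle_shape × speckle_scale ≈ 1) with a slightly negatively biased additive Gaussian. A NumPy sketch under those assumptions (sapien's actual pipeline also applies the blur controlled by scale/blur_ksize/blur_ksigma, omitted here):

```python
import numpy as np

def add_ir_noise(img, speckle_shape=398.12, speckle_scale=0.00254,
                 gaussian_mu=-0.231, gaussian_sigma=0.83, seed=0):
    """Multiplicative Gamma speckle plus additive Gaussian noise.
    With these defaults the speckle has mean ~1.01, so intensities
    are perturbed but not systematically rescaled."""
    rng = np.random.default_rng(seed)
    img = img.astype(np.float64)
    speckle = rng.gamma(speckle_shape, speckle_scale, size=img.shape)
    noisy = img * speckle + rng.normal(gaussian_mu, gaussian_sigma, size=img.shape)
    return np.clip(noisy, 0.0, 255.0)
```

Because the generator is seeded, repeated calls with the same seed are bit-identical, which keeps simulated captures reproducible.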

sapien.sensor.depth_processor.unpad_lr(img: numpy.ndarray, ndisp: int) → numpy.ndarray[source]

sapien.sensor.sensor_base module

class sapien.sensor.sensor_base.SensorEntity[source]

Bases: object

Module contents