Reducing Point Cloud Resolution on Ensenso S10

General Information

  • Product: Ensenso S10
  • Serial Number: 245229
  • Ensenso SDK Version: 4.1.1023
  • Operating System: Windows

Problem / Question

Hello everyone,

I’m currently integrating an Ensenso S10 camera with a real-time motion-controlled robotic arm. I’ve noticed that each trigger generates a rather dense point cloud, which leads to processing delays. This latency is impacting the responsiveness of the system.

I’m looking for suggestions or resources on how to reduce the resolution—that is, decrease the number of points per unit area—to help lower the processing time. Has anyone encountered a similar issue? Are there any configuration settings, calibration techniques, or post-processing methods (e.g., downsampling or filtering strategies) that you recommend to optimize performance?

Any advice, shared experiences, or documentation pointers would be greatly appreciated.

Thank you in advance for your support!

Hi Prasan,

There are parameters to control downsampling of the point map; they are just not exposed in NxView. You can edit the values using NxTreeEdit, and NxView will save the changes and show the effect live. The default for S-series cameras is 4 in both the X and Y direction. The maximum is 8, so you can reduce the point count by a further factor of 4 (a factor of 2 in each direction). If you need the point map even smaller, you would currently have to do that yourself; feel free to ask about the details if required.
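If you do end up shrinking the point map yourself, a minimal sketch of one option is plain stride slicing, assuming the point map is available as an organized NumPy array of shape (H, W, 3) (e.g. via get_point_map() in the Python API; the helper name below is my own):

```python
import numpy as np

def shrink_point_map(point_map, step=2):
    """Keep every `step`-th point in X and Y of an organized (H, W, 3) point map."""
    return point_map[::step, ::step]

# Synthetic stand-in for a point map; a real one would come from the camera.
pm = np.random.rand(480, 640, 3).astype(np.float32)
small = shrink_point_map(pm, step=2)  # shape (240, 320, 3)
```

Stride slicing is a view, so it costs no copy; for noise reduction you would instead average neighborhoods (e.g. a voxel-grid filter), which is more work but smooths the data.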

If you do not use the entire field of view, you might also want to shrink the area of interest to further reduce point cloud size and enable UseDisparityMapAreaOfInterest to reduce the capture time. These can both be set using NxView.

You might want to check out our optimization guide for more hints, although not all of them apply to S-series cameras.

Regards,
Raphael

So the solution you are suggesting (both downsampling and the area of interest) is to be configured in the NxView app, right?

Are there any Python code samples available for doing this programmatically with the Ensenso SDK? Please share if any such work has already been done.

Thank you.

I was only highlighting NxView because, from our experience, most users use NxView to evaluate their settings. You can, of course, set the parameters using our Python API, e.g.:

with NxLib(), StructuredLightCamera(serial) as camera:
    camera[ITM_PARAMETERS][ITM_DISPARITY_MAP][ITM_DOWNSAMPLE][0] = 8

This is working well for downsampling of the 3D point cloud. Thank you.

Regarding AOI: I’m trying to set an Area of Interest (AOI) to limit the region for 3D point cloud generation and thereby reduce processing time. However, I’ve encountered errors when attempting to configure the AOI under the path /Cameras/245229/Parameters/DisparityMap/AreaOfInterest.

I have attempted setting the AOI as a dictionary.
camera["Parameters"]["DisparityMap"]["AreaOfInterest"] = {"Left": 0, "Top": 0, "Width": 640, "Height": 480}
Error: NxLibException: NxLibItemTypeNotCompatible

I have also attempted setting individual sub-parameters.
aoi = camera["Parameters"]["DisparityMap"]["AreaOfInterest"]
aoi["Left"] = 0
Error: NxLibException: NxLibItemInexistent for /Cameras/245229/Parameters/DisparityMap/AreaOfInterest/Left

Could you please assist with the following:

  • Correct Parameter Path: Where should I set the AOI for a structured light camera to limit the region used for point cloud computation?
  • Expected Format: What is the correct format for the AOI parameter (e.g., a string like “0,0,640,480”, a list like [0, 0, 640, 480], or another type)?
  • Additional Requirements: Are there any prerequisite steps or parameters (e.g., enabling a specific setting) required before setting the AOI?
  • Sample Code: If any sample code is available, please share it.

Thank you.

@RSC

I wanted to follow up on this since there was no response from your side.

The dictionary approach does not work because setting an NxLibItem from a dictionary is currently not supported. Your second attempt was almost correct, but the NxLib does not specify the AOI in terms of {Left, Top, Width, Height}; it uses two corner points, {LeftTop, RightBottom}, both inclusive. So one correct approach would be:

aoi = camera[ITM_PARAMETERS][ITM_DISPARITY_MAP][ITM_AREA_OF_INTEREST]
aoi[ITM_LEFT_TOP][0] = 0
aoi[ITM_LEFT_TOP][1] = 0
aoi[ITM_RIGHT_BOTTOM][0] = 639
aoi[ITM_RIGHT_BOTTOM][1] = 479
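If you prefer to think in width/height terms, a small pure-Python helper (the name is my own, not part of the SDK) can convert a {Left, Top, Width, Height} rectangle into the inclusive corner points the NxLib expects:

```python
def aoi_corners(left, top, width, height):
    """Convert a {Left, Top, Width, Height} rectangle into the inclusive
    {LeftTop, RightBottom} corner points used by the NxLib AOI."""
    return (left, top), (left + width - 1, top + height - 1)

left_top, right_bottom = aoi_corners(0, 0, 640, 480)
# left_top == (0, 0), right_bottom == (639, 479)
```

Note the -1: because both corners are inclusive, a 640x480 region ends at pixel 639/479, matching the values set above.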