Texturing point clouds: calibration procedure for a monocular camera not supported by the EnsensoSDK

General Information

  • Product: N/X series
  • Serial Number: n/a
  • Ensenso SDK Version: n/a
  • Operating System: Linux / Windows

Problem / Question

For texturing point clouds, the calibration wizard can be used to calibrate a monocular camera to the 3D data of the Ensenso. But the wizard only supports IDS color cameras.
If a customer needs to use an unsupported camera and cannot use the wizard, they have to program this calibration themselves.
Do you have any literature on this topic that explains the process?

Thanks!
Bart

Hi Bart,

In principle we try to support GigE Vision-based cameras in general as monocular devices. Only the stereo cameras are restricted to IDS cameras; monocular devices are not vendor-locked in any way. It may well be that some manufacturer-specific implementation details break the functionality with the EnsensoSDK, but that would then be an issue for us to fix.

What’s the device you want to calibrate with?

Best regards, Rainer

Thanks Rainer for the clarification.

Is only GigE Vision supported, or would USB3 Vision work as well?

Best,
Bart

Hi Bart,

USB3 Vision requires a different transport layer, which the NxLib does not include at the moment; it only supports GigE Vision and uEye. USB sensors could work via the uEye API, because in that case the transport layer is provided by the uEye driver.

Kind regards,
Daniel

Hi,

Is there any literature you’d recommend if the SDK cannot be used for the monocular-to-3D calibration process?
Or should I refer to the OpenCV docs: OpenCV: Camera Calibration and 3D Reconstruction?

Thanks,
Bart

Hi Bart,

Yes, OpenCV is probably the first option to consider. It is the most widely used and most complete package.

Thanks Daniel,

I have encountered the same problem.

Can you provide us with some literature or a solution?

If you really cannot use a camera that is supported by the NxLib, you have to implement the detection of calibration targets, the calibration procedure, and the use of the resulting calibration yourself. The details depend on what exactly you want to achieve, so I am afraid I cannot help much in general.

For OpenCV you can find a lot of generic information, e.g. these tutorials in their documentation.
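
To give you a starting point, here is a minimal sketch of the intrinsic calibration step with OpenCV in Python, following their tutorials. A chessboard target is assumed; the pattern size, square size, and image path are placeholders you would have to adapt:

```python
import glob

import cv2
import numpy as np

pattern_size = (9, 6)   # inner corners of the chessboard (assumption, adapt to your target)
square_size = 0.025     # square edge length in meters (assumption)

# 3D corner coordinates in the target's own coordinate system (z = 0 plane).
target_points = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
target_points[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
target_points *= square_size

object_points, image_points = [], []
image_size = None
for filename in glob.glob("calibration_images/*.png"):  # placeholder path
    image = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(image, pattern_size)
    if not found:
        continue
    # Refine the detected corner positions to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        image, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    object_points.append(target_points)
    image_points.append(corners)
    image_size = image.shape[::-1]  # (width, height)

rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)
print("RMS reprojection error [px]:", rms)
```

The RMS reprojection error is a quick sanity check; values well below one pixel usually indicate a usable calibration.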

For integrating with our stereo cameras, you have to fetch the images from the NxLib and use them in OpenCV. There is a choice you can make here:

  • Use the rectified images. The pixels of these images correspond to the pixels in the disparity and point map, so association with the 3D data is straightforward. These images are already undistorted and don’t need to be calibrated themselves; use their camera matrix from the NxLib instead. You only have to calibrate the pose offset to the other camera (see the sketch after this list).
  • Use the raw images and treat them like the raw images of any other camera. These images cannot be associated with the 3D data directly.
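
To make the first option more concrete, here is a rough Python/OpenCV sketch, continuing the calibration example above. The NxLib calls are omitted: rectified (the rectified left image), point_map (the point map aligned with it, in meters, with NaNs at invalid pixels), and mono_image (a color image from the monocular camera) are assumed to have been fetched already, and camera_matrix / dist_coeffs to come from a separate intrinsic calibration of the monocular camera. The chessboard target is again an assumption:

```python
import cv2
import numpy as np

pattern_size = (9, 6)  # inner chessboard corners (assumption)

def detect_corners(image):
    found, corners = cv2.findChessboardCorners(image, pattern_size)
    assert found, "calibration target not detected"
    return corners.reshape(-1, 2)

stereo_corners = detect_corners(rectified)  # target seen by the stereo camera
mono_corners = detect_corners(mono_image)   # same target seen by the monocular camera

# Look up the 3D position of each corner in the point map (nearest pixel).
# A robust implementation would interpolate between neighboring pixels.
idx = np.round(stereo_corners).astype(int)
corners_3d = point_map[idx[:, 1], idx[:, 0], :].astype(np.float32)

# Discard corners without valid 3D data, then estimate the pose of the
# stereo camera's coordinate system relative to the monocular camera.
valid = ~np.isnan(corners_3d).any(axis=1)
ok, rvec, tvec = cv2.solvePnP(
    corners_3d[valid], mono_corners[valid].astype(np.float32),
    camera_matrix, dist_coeffs)

# Texturing: project all valid 3D points into the monocular image and
# sample a color for each of them (nearest pixel, no occlusion handling).
points = point_map.reshape(-1, 3)
points = points[~np.isnan(points).any(axis=1)].astype(np.float32)
pixels, _ = cv2.projectPoints(points, rvec, tvec, camera_matrix, dist_coeffs)
pixels = pixels.reshape(-1, 2)
u = np.clip(np.round(pixels[:, 0]).astype(int), 0, mono_image.shape[1] - 1)
v = np.clip(np.round(pixels[:, 1]).astype(int), 0, mono_image.shape[0] - 1)
colors = mono_image[v, u]  # per-point texture colors
```

The rvec/tvec pair is the fixed transformation between the two cameras; once it has been estimated, texturing any new capture only requires the projection step at the end.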

Feel free to post if you have any questions regarding the details.