What is the official recommended pipeline for colored point clouds in C++? Hello,
Based on the process described in the linked guide Aligning Images With 3D Data — Guides - Ensenso SDK 4.3.905 Documentation, I want to generate colored 3D depth maps in the coordinate system of the built-in color camera. I tried the approach using the RenderPointMap command, but the result is not as expected: the quality of the 3D data is worse than with a normal acquisition, and no color information is matched with the 3D data. Also, the resolution is smaller than the color camera resolution.
Please tell me the recommended approach for this. I would like to get a merged colored point cloud in PCD format.
You probably used the default mode of RenderPointMap, which performs a telecentric projection that does not involve the color sensor. To project into the perspective of the color sensor, you have to specify the serial number of the color sensor as the Camera parameter.
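To illustrate, here is a minimal sketch of that call using the NxLib C++ interface. The serial number is a placeholder for your color sensor's serial, and the exact result node names may vary with your SDK version, so please cross-check with the RenderPointMap command reference:

```cpp
#include "nxLib.h" // Ensenso NxLib C++ interface

// Sketch: render the point map into the perspective of the color sensor.
// "1234-Color" is a placeholder; use your color sensor's serial number.
void renderIntoColorCameraPerspective()
{
    NxLibCommand render(cmdRenderPointMap);
    render.parameters()[itmCamera] = "1234-Color"; // color sensor serial
    render.execute();

    // Results: the rendered point map and the pixel-aligned texture image.
    NxLibItem root;
    NxLibItem pointMap = root[itmImages][itmRenderPointMap];
    NxLibItem texture  = root[itmImages][itmRenderPointMapTexture];
}
```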
You shouldn’t use the color camera’s raw image, but its rectified image or the texture image from RenderPointMap. Both are the same in this case.
The point map and the texture image from RenderPointMap are pixel-aligned. You can index both of them with the same pixel position to get the color that belongs to a certain point.
Your code uses the x and y coordinates of the 3D point as an index into the color image. This is not correct: these coordinates are in millimeters and part of the point cloud itself, not pixel indices.
You don’t need the “Computing texture (projecting color image into stereo view)” part if you want to use RenderPointMap. You can choose either one of them depending on the perspective you want the final data in.
Images from the NxLib are RGB, not BGR as your code assumes.
I hope this helps. If you fix these things your program should run correctly.