Hi Braitmaier_PR,
I suspect that implicitly executing the command “RenderPointMap” (by requesting the image “RenderPointMap” in grab_data-items) will not be suitable, because no command parameters are provided and thus the telecentric view of the point map is computed.
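For reference, here is a minimal C++ sketch against the NxLib API that executes “RenderPointMap” explicitly with a viewpoint camera, so the 3D data is rendered into the perspective of the color camera instead of the default telecentric view. The serial number is a placeholder, and depending on your SDK version the rendered images may be returned in the command’s Result node rather than the global Images node.

```cpp
#include <string>
#include "nxLib.h" // Ensenso NxLib C++ API

int main()
{
    nxLibInitialize(true);

    // Placeholder serial of the color camera -- replace with your own.
    std::string colorSerial = "ColorCamSerial";

    // ... open the cameras, capture, and compute the point map here ...

    // Explicit execution of RenderPointMap: the "Camera" parameter selects
    // the viewpoint camera, so the 3D data is rendered into the pixel grid
    // of the color camera instead of the default telecentric view.
    NxLibCommand render(cmdRenderPointMap);
    render.parameters()[itmCamera] = colorSerial;
    render.execute();

    nxLibFinalize();
    return 0;
}
```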
I’ve uploaded an HDevelop sample that runs with two file cameras recorded with an Ensenso C57, and I’ll upload all the data it needs here. The sample shows the relevant differences to your code. It contains some lines that you may want to delete, since the script was made for training purposes.
The uploaded data contains one file camera for the stereo camera, one file camera for the color camera, a parameter file for each camera, and the script. The script asks you to navigate to the 3D file camera and its parameter file (in that order), and then to the 2D file camera and its parameter file. The file cameras are the *.zip files.
Kind regards
Ute
ColoredPointMap_FileCam.hdev (99.5 KB)
C57_Metal_FV16_calib.json (18.2 KB)
C57_Metal_FV16.zip (97.7 MB)
C57_Metal_Color_FV16.zip (3.4 MB)
C57_Metal_Color_calib.json (4.1 KB)
Edit: some info on the concept for getting color information for the 3D data.
After you’ve executed “RenderPointMap”, your point cloud still has no color!
Think of the command as kind of “rearranging” the 3D data *, from the pixel grid of the rectified left stereo camera to the pixel grid of the additional color camera. You then have two “image” data containers that share the same pixel grid. To get “colored points”, you can add the color information to the coordinate information pixel by pixel, as in the sketch below.
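To make the pixelwise merge concrete, here is a hedged C++ sketch, again against the NxLib API. It assumes an SDK version where the rendered images are available under the global Images node as “RenderPointMap” (XYZ floats) and “RenderPointMapTexture” (the color values on the same pixel grid); newer SDK versions return these images in the command’s Result node instead. Invalid pixels carry NaN coordinates and are skipped.

```cpp
#include <cmath>
#include <vector>
#include "nxLib.h" // Ensenso NxLib C++ API

struct ColoredPoint { float x, y, z; unsigned char r, g, b; };

// Merge the rendered point map with the rendered texture, pixel by pixel.
// Assumes RenderPointMap has already been executed (see the sketch above).
std::vector<ColoredPoint> collectColoredPoints()
{
    NxLibItem images = NxLibItem()[itmImages]; // global Images node (older SDKs)

    // XYZ coordinates, 3 floats per pixel, NaN where no 3D data exists.
    int width = 0, height = 0;
    double timestamp = 0;
    images[itmRenderPointMap].getBinaryDataInfo(&width, &height, 0, 0, 0, 0);
    std::vector<float> xyz;
    images[itmRenderPointMap].getBinaryData(xyz, &timestamp);

    // Color values on the very same pixel grid (channel count may be 3 or 4).
    int texChannels = 0;
    images[itmRenderPointMapTexture].getBinaryDataInfo(0, 0, &texChannels, 0, 0, 0);
    std::vector<unsigned char> tex;
    images[itmRenderPointMapTexture].getBinaryData(tex, &timestamp);

    // Pixelwise: attach the color information to the coordinate information.
    std::vector<ColoredPoint> cloud;
    for (int i = 0; i < width * height; ++i) {
        float z = xyz[3 * i + 2];
        if (std::isnan(z)) continue; // skip pixels without 3D data
        cloud.push_back({ xyz[3 * i], xyz[3 * i + 1], z,
                          tex[texChannels * i], tex[texChannels * i + 1],
                          tex[texChannels * i + 2] });
    }
    return cloud;
}
```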
If you don’t need the 3D data in the “image” format and a PLY file is enough, you can save the colored 3D mesh directly from NxView (File / Save / 3D Mesh). You can find some more information plus a PDF regarding color + 3D in this topic.
* In fact, the command “RenderPointMap” does more than rearrange the 3D data, but for understanding the command, that’s the most important part.