Currently, from the API and SDK it's possible to compute normals from the initial Point Map. However, once RenderPointMap is called to align the colours from an RGB camera with the Point Map (e.g. with an Ensenso C), the original normals are no longer aligned to the rendered Point Map and might not have the same resolution.
To get a coloured cloud with normals, it would therefore be useful to be able to call something equivalent to a RenderNormals command, giving a complete set of data from the point of view of the RGB camera. Perhaps it could even be a parameter that can be enabled on RenderPointMap to include normals if they've previously been computed.
I hope this makes sense but if there’s any further clarification required, please don’t hesitate to let me know.
Thanks,
Seb
Hi,
this feature is indeed missing at the moment. We already have it on our radar, but I cannot promise any time frame yet.
Instead of transforming the normals during rendering, we would probably extend ComputeNormals to re-compute normals on the rendered image. But I guess that would also meet your requirements.
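Conceptually, re-computing normals on the rendered image boils down to something like the following. This is only a rough numpy sketch of the idea, not our actual implementation, and it assumes you have already fetched the rendered point map as an H x W x 3 float array in the color camera's frame, with NaN marking invalid pixels:

```python
import numpy as np

def normals_from_point_map(point_map):
    """Approximate per-pixel normals for an H x W x 3 point map (NaN = invalid)."""
    # Central differences between neighboring 3D points give two vectors that
    # lie roughly on the surface; their cross product is the surface normal.
    dx = np.full_like(point_map, np.nan)
    dy = np.full_like(point_map, np.nan)
    dx[:, 1:-1] = point_map[:, 2:] - point_map[:, :-2]
    dy[1:-1, :] = point_map[2:, :] - point_map[:-2, :]

    with np.errstate(invalid="ignore", divide="ignore"):
        normals = np.cross(dx, dy)
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)
        # Orient the normals towards the camera (assumed at the origin).
        flip = np.sum(normals * point_map, axis=2, keepdims=True) > 0
    return np.where(flip, -normals, normals)
```

An extended ComputeNormals would of course do this properly inside the SDK; the sketch is only meant to show that the data needed for it is already available after RenderPointMap.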
Hi Daniel,
Thanks for the quick response. Yes, being able to re-execute ComputeNormals on the rendered PointMap would indeed achieve the same thing for us.
I did actually try to set the binary data of Images/PointMap to the rendered PointMap in an attempt to call ComputeNormals again on that new data. That obviously didn't work, because the data is protected in the PointMap node, but that is just to say that calling ComputeNormals a second time is absolutely acceptable.
Hi Daniel, some time ago I discussed a workaround with one of you. Can you confirm that it's possible to do the following?
Let's say we have an Ensenso C57 with a workspace calibration:
- Use ComputePointMap and ComputeNormals to get the PointMap and the Normals.
- For each point (pixel) in the PointMap, get the corresponding normal and remember it. (The corresponding normal has the same pixel coordinates in the Normals image as its point in the PointMap.)
- Transform the point coordinates into the coordinate system of the color camera: use the inverse of the workspace transformation, stored in the stereo camera's "Link" node, multiplied with the 2D/3D transformation that is stored in the color camera.
- Apply the camera matrix of the rectified color camera to compute the row/col value where the object point is located in the rectified color image. (This is the same position where the 3D data of the point is located in RenderPointMap.)
- Copy the values for the normal to that row/col position. (A rough sketch of these steps follows below.)
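For reference, here is a rough numpy sketch of those steps. The variable names are placeholders rather than SDK identifiers; I'm assuming the PointMap, the Normals, the combined transformation into the color camera (inverse workspace Link multiplied with the color camera's 2D/3D calibration) and the rectified color camera matrix have already been read from the tree as arrays:

```python
import numpy as np

def project_normals_to_color(points, normals, T_color_from_points, K_rect, color_shape):
    """Copy the normals of an H x W PointMap into the rectified color image.

    points, normals      : H x W x 3 arrays from ComputePointMap / ComputeNormals
    T_color_from_points  : 4 x 4 transform from PointMap coordinates into the color
                           camera frame (inverse workspace Link combined with the
                           color camera's 2D/3D calibration, as described above)
    K_rect               : 3 x 3 camera matrix of the rectified color camera
    color_shape          : (rows, cols) of the rectified color image
    """
    rows, cols = color_shape
    normal_map = np.full((rows, cols, 3), np.nan)

    valid = np.isfinite(points).all(axis=2)
    p = points[valid]                       # N x 3 points
    n = normals[valid]                      # N x 3 normals

    # Transform the points into the color camera's coordinate system.
    R, t = T_color_from_points[:3, :3], T_color_from_points[:3, 3]
    p_cam = p @ R.T + t
    n_cam = n @ R.T        # normals are directions: rotate only, no translation

    # Keep only points in front of the color camera.
    in_front = p_cam[:, 2] > 0
    p_cam, n_cam = p_cam[in_front], n_cam[in_front]

    # Project with the rectified camera matrix to get pixel coordinates.
    uvw = p_cam @ K_rect.T
    c = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)   # column (x direction)
    r = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)   # row (y direction)

    # Copy the normal to that row/col; points outside the image are dropped.
    # Shadowing (several points landing on the same pixel) is not handled here.
    ok = (r >= 0) & (r < rows) & (c >= 0) & (c < cols)
    normal_map[r[ok], c[ok]] = n_cam[ok]
    return normal_map
```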
One additional question:
This procedure should work together with rendering the PointMap on the CPU. Is it possible to use this workaround if RenderPointMap is computed using OpenGL?
This strategy results in data in the correct coordinate system, but it does not take into account points shadowing each other. There could be multiple points in the original point map mapping to the same pixel of the color camera, and then you have to decide which of them you want to take.
So it is possible, but once you add all of that functionality you have basically reinvented the software rendering mode of RenderPointMap.
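If you want to resolve the shadowing yourself, one straightforward option is a small z-buffer: per color pixel, keep only the point that lies closest to the color camera. A minimal sketch, where the depth values would be the z coordinates of the points after transforming them into the color camera frame (e.g. p_cam[:, 2] from the sketch above):

```python
import numpy as np

def zbuffer_select(r, c, depth, values, shape):
    """Per color pixel, keep the value of the projected point with the smallest depth.

    r, c   : integer row/column of each projected point (1-D arrays)
    depth  : distance of each point along the color camera's optical axis
    values : per-point data to keep, e.g. the rotated normals (N x 3)
    """
    rows, cols = shape
    out = np.full((rows, cols) + values.shape[1:], np.nan)
    zbuf = np.full((rows, cols), np.inf)
    for i in range(len(depth)):
        # Only overwrite a pixel if this point is closer than what is stored.
        if depth[i] < zbuf[r[i], c[i]]:
            zbuf[r[i], c[i]] = depth[i]
            out[r[i], c[i]] = values[i]
    return out
```

Which is, again, essentially what the software rendering mode of RenderPointMap already does for you.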