The mapping from the disparity map to the point map is given by applying the reprojection matrix, followed by transforming the points using the camera link (if the camera has one).
This computation is exactly what the ComputePointMap command does.
There is no fixed number of disparities that corresponds to 100 mm, because disparity is not linear in depth: depth is proportional to 1/disparity, so a fixed depth interval maps to different disparity intervals at different distances. You either have to color the disparities in fixed intervals (making the coloring non-linear in depth) or convert using the reprojection matrix above.
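To make that concrete, here is a minimal NumPy sketch of the reprojection step (the function name and array layout are illustrative, not NxLib API): each pixel (x, y) with disparity d is treated as the homogeneous vector [x, y, d, 1], multiplied by the 4x4 reprojection matrix Q, and dehomogenized.

```python
import numpy as np

def disparity_to_point_map(disparity, Q):
    """Reproject a disparity map (h x w) to a point map using the 4x4
    reprojection matrix Q. This is the core of what ComputePointMap
    does, before the optional camera-link transform."""
    h, w = disparity.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous vector [x, y, d, 1] for every pixel.
    vec = np.stack([xs, ys, disparity, np.ones_like(disparity)], axis=-1)
    pts = vec @ Q.T                     # -> [X, Y, Z, W] per pixel
    return pts[..., :3] / pts[..., 3:]  # dehomogenize to (X, Y, Z)
```

For a standard stereo reprojection matrix, the dehomogenized Z comes out proportional to 1/d, which is exactly why equal depth steps do not correspond to equal disparity steps.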
Converting to a point map results in a large amount of data,
so I thought that if we could keep it as a disparity map,
the processing would be lighter,
but that seems difficult.
So, does NxView also base its coloring on
the data produced by running ComputePointMap()?
Thanks for your support, and
with kind regards,
K.N.
Yes, NxView always shows the point map and uses z values to color it.
You can also do the coloring based on disparity values if you don’t mind the non-linearity. Just pick a repetition length in disparities instead of mm. Older versions of NxView also did that.
The most efficient way to do the coloring based on the depth is probably to render the disparity map (which is smaller) and apply the reprojection matrix inline in a shader before coloring. This is what the RenderView command does.
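As a CPU-side illustration of that approach (the shader itself is not shown; the function name and the grayscale mapping are my own), this sketch computes the dehomogenized z for each disparity pixel and assigns a value that repeats every `repetition_mm` of depth:

```python
import numpy as np

def depth_stripe_colors(disparity, Q, repetition_mm=100.0):
    """Color each disparity-map pixel by its depth, with the pattern
    repeating every `repetition_mm` millimeters (assuming Q is scaled
    to millimeters). A shader would do the same per fragment."""
    h, w = disparity.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    vec = np.stack([xs, ys, disparity, np.ones_like(disparity)], axis=-1)
    pts = vec @ Q.T
    z = pts[..., 2] / pts[..., 3]       # dehomogenized depth
    # Fractional position inside the current repetition interval,
    # in [0, 1); feed this into any color gradient (grayscale here).
    # Invalid disparities should be masked out beforehand.
    return np.mod(z, repetition_mm) / repetition_mm
```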
Thank you for your answer.
When using a point map and its Z values,
how do I find the pixel in the raw image that corresponds
to the (x, y) position whose color is determined by the Z value?
(Sorry for the basic question.)
I knew that the offset between the DisparityMap and the rectified images
is DisparityMapOffset, but I didn't know that the offset between the PointMap
and the rectified images is also DisparityMapOffset.
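If I understood correctly, the correspondence is then simply the following (a minimal sketch; the function name and the additive sign convention are my assumptions, and going all the way back to the unrectified raw image would additionally require undoing the rectification):

```python
def point_map_to_rectified_pixel(row, col, offset_x, offset_y):
    """Map an index (row, col) in the point map, which shares its
    pixel grid with the disparity map, to the corresponding pixel
    in the rectified image. offset_x/offset_y are read from the
    camera's DisparityMapOffset."""
    return (col + offset_x, row + offset_y)
```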
I was able to understand a lot of things.
Thank you very much.
K.N.