Getting a depth map in Python

General Information

  • Product: N10-804-18
  • Serial Number: 130060
  • Ensenso SDK Version: 4.0.1502
  • Operating System: Windows

Problem / Question

When I use NxView, I get a decent depth map. However, when I switch to Python, my disparity map and point cloud are mostly filled with invalid values: most elements of the disparity map are set to -32768 (the sentinel for invalid pixels). What could be causing this? The save_stereo_image script works correctly, but when I try to generate the disparity map, it produces incorrect results.
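
For reference, this is roughly the pipeline I use to get the disparity map (an illustrative sketch following the nxlib Python examples, not my exact script):

import numpy as np
from nxlib import NxLib, StereoCamera
from nxlib.constants import *

with NxLib(), StereoCamera.from_serial("130060") as camera:
    camera.capture()
    camera.rectify()
    camera.compute_disparity_map()

    # Raw disparity map as a numpy array; invalid pixels carry the
    # sentinel value -32768, which is what most of my pixels contain.
    disparity = camera.get_node()[ITM_IMAGES][ITM_DISPARITY_MAP].get_binary_data()
    print(np.count_nonzero(disparity == -32768), "of", disparity.size, "values are invalid")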

Hello Warre,

When you use the camera in Python, do you apply the same settings that you use in NxView? This transfer does not happen automatically. The manual contains a guide for loading camera settings, which explains how to save the settings from NxView and apply them in your application. It does not cover Python, but the principle is the same as in C++:

from nxlib import NxLib, NxLibItem, StereoCamera
from nxlib.constants import *

# Serial number of the camera to open.
serial = "130060"

# Load the parameter file saved from NxView.
with open("params.json") as f:
    jsonstr = f.read()

# Initialize NxLib and open the camera.
with NxLib(), StereoCamera.from_serial(serial) as camera:
    # Store the parameters in a temporary node for inspection.
    tmp = NxLibItem().make_unique_item()
    tmp.set_json(jsonstr)

    # Load only the parameter subtree from the file.
    if tmp[ITM_PARAMETERS].exists():
        camera.get_node()[ITM_PARAMETERS].set_json(tmp[ITM_PARAMETERS].as_json(), True)
    else:
        camera.get_node()[ITM_PARAMETERS].set_json(tmp.as_json(), True)

    # Clean up the temporary node.
    tmp.erase()

If that does not help, please let me know.

Regards,
Raphael


Hi Raphael,

Thank you for your response. I have implemented the settings from NxView in my application, but unfortunately, the issue persists.

Attached are two point clouds for reference: one generated using NxView (showing the surface), and the other created using the Ensenso Python example script ‘nx_watch_point_cloud’ (displaying some edges and unusual lines).
[Screenshot: NxView point cloud]
[Screenshot: Python point cloud]

Then let’s go figure out why.

Assuming everything works correctly, did you check whether your surface might actually be the tiny blue shape in the lower left? Depending on the settings, the NxLib can sometimes generate ghost points along these long lines. They can be quite far away and overwhelm the actual data if the 3D view is not scaled properly, which the example probably does not do. You can usually get rid of these ghost points with more aggressive filtering.
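
As a quick check on the application side (independent of the NxLib post-processing filters), you can also drop the invalid pixels and anything far outside your working range before plotting, so that the 3D view is scaled to the real data. This is only a numpy sketch; the point_map variable and the distance limits are assumptions for illustration:

import numpy as np

# point_map is the (height, width, 3) array returned by
# camera.get_node()[ITM_IMAGES][ITM_POINT_MAP].get_binary_data().
# Invalid pixels are NaN; drop them so stray ghost points do not
# dictate the scaling of the 3D view.
points = point_map.reshape(-1, 3)
points = points[~np.isnan(points).any(axis=1)]

# Optionally clip to a plausible working range (values are in millimeters;
# adjust the limits to your setup).
points = points[(points[:, 2] > 100) & (points[:, 2] < 2000)]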

If your surface is really not there, we have to look for it somewhere further up the chain. Please excuse me if it seems like you have answered some of these questions already; I am repeating them to make sure I understand you correctly.

  1. Does the camera produce good data with the default settings, i.e. when you open it in NxView with the Load Cached Settings checkbox unchecked?

If you need to modify any parameters:

  2. Did you save the modified parameter set to a file, either via the button in the NxView parameter window or via NxTreeEdit?
  3. Did you insert the code to load the parameters into the nx_watch_point_cloud.py example, so that it actually uses your parameters?
  4. Or did you just hardcode the changed settings in the example?
  5. Can you add a call to nxlib.open_tcp_port and check with NxTreeEdit that the parameters are set correctly? (See the sketch after this list.)
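
A minimal sketch of that debugging hook, assuming the function is exposed as nxlib.api.open_tcp_port (24000 is just an example port number):

from nxlib import api

# Open a TCP port so NxTreeEdit can connect to this running Python
# process and inspect the live parameter tree (example port number;
# depending on the SDK version the call may also be available
# directly on the nxlib module).
api.open_tcp_port(24000)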

If the parameters are loaded correctly:

  6. Use NxTreeEdit to check the disparity map and the point map in the running nx_watch_point_cloud.py example. They should be similar to the maps you see in NxView. (See the sketch after this list.)

  7. When you save the raw images from NxView by pressing Ctrl-S or via File > Save > Raw Images, then create a file camera from the saved images and load it into the nx_watch_point_cloud.py example, does that produce results similar to NxView?
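
To compare numbers rather than eyeballing the 3D view, you can also print the fraction of valid pixels in the point map of the running example. A small sketch; camera is the opened camera object and the maps have already been computed:

import numpy as np
from nxlib.constants import ITM_IMAGES, ITM_POINT_MAP

# Fraction of valid 3D points in the current point map; compare this
# with what NxView shows for the same scene and parameter set.
point_map = camera.get_node()[ITM_IMAGES][ITM_POINT_MAP].get_binary_data()
valid = ~np.isnan(point_map[..., 2])
print(f"{100 * valid.mean():.1f}% of the point map pixels are valid")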

If that does not work, can you please attach your parameter file, the file camera and, if you modified it, your nx_watch_point_cloud.py to your reply?


Thank you for the guidance. I followed each step, and here are my answers to your questions:

  1. Yes, the camera produces good data with the default settings in NxView.
  2. Yes, I saved the modified parameter set to a file, as instructed.
  3. Yes, I inserted the code to load these parameters.
  4. No, I did not hardcode the changes; they are loaded from the saved file.
  5. I added nxlib.open_tcp_port and verified via NxTreeEdit that the parameters are set correctly.
  6. The disparity and point maps are not similar; I will attach images of both for clarity.
  7. When saving images in NxView and loading them in nx_watch_point_cloud.py, the Disparity Map now matches for the first time, and the Point Map matches too, but with the problem you described before: the ghost points in those long lines.

I am also attaching the requested files and confirm that I’m using the original nx_watch_point_cloud.py. Given that the parameters do match, do you have any idea what might be causing this issue with both the Disparity Map and the Point Map?
EnsensoProblem.zip (392.8 KB)

Hi Raphael,

I’ve tried many approaches to solve this issue, but unfortunately without success. The disparity maps I’m seeing in Python resemble those from NxView, but only the ones captured with the projector turned off. In Python I set the projector to True when using the camera, and I can physically confirm that the projector is indeed on when set to True. However, it seems as if the capture() or compute_disparity_map() functions within the SDK internally assume the projector is off.
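
For completeness, this is roughly how I switch the projector on before capturing (an illustrative sketch, not my exact code; Parameters/Capture/Projector is the node I set, and the set_t() setter is assumed here):

from nxlib.constants import ITM_PARAMETERS, ITM_CAPTURE, ITM_PROJECTOR

# Enable the pattern projector, then capture and match.
camera.get_node()[ITM_PARAMETERS][ITM_CAPTURE][ITM_PROJECTOR].set_t(True)
camera.capture()
camera.compute_disparity_map()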

Additionally, I’m curious why I get a correct disparity map when I save a file camera and then read it back into my script, as described in step 7 of your previous response.

Thanks in advance!

I am not sure what you mean by this. The NxLib command ComputeDisparityMap only operates on the image pixels and does not care about the nominal projector status. The Capture command ensures the projector is switched on or off, depending on the setting.

One possible source of error that might be related to the projector is that on the N10 the exposure can only be changed with a delay of one frame: if you change the exposure now, the next frame will still use the previous exposure. Even though you have loaded different settings, the Python example only captures one frame, so it will always use the default exposure, which might not be ideal. Do the images in Python all look alright? You can insert a second capture() call to ensure the example uses the exposure from your settings, as sketched below.
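
Something like this (the second capture() simply discards the frame that was still taken with the old exposure):

# The N10 applies a new exposure with a delay of one frame, so capture
# twice and only use the second image for matching.
camera.capture()  # still uses the previous/default exposure
camera.capture()  # now uses the exposure from your loaded settings
camera.rectify()
camera.compute_disparity_map()
camera.compute_point_map()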

If this does not work, can you send me a copy of your bad point map? You can right-click the PointMap node in NxTreeEdit and select Save binary data as…, and possibly also send a file camera from the Python example. To obtain one, you can execute the SaveFileCamera command via the /Execute/Default node in NxTreeEdit.
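
If it is easier for you, you can also dump the point map directly from Python with numpy instead of going through NxTreeEdit (just an alternative; point_map is the array obtained via get_binary_data()):

import numpy as np

# Save the point map so it can be attached to the reply and inspected offline.
np.save("bad_point_map.npy", point_map)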