I am attempting to extract the color image from the camera using the code provided below. However, I have encountered an issue with the front light behavior. Specifically:
During the first capture, the front light is turned off, resulting in a dark image.
During the second capture, the front light is turned on, and the image appears correctly illuminated.
The same code is used for both captures, so I am unsure why this inconsistency occurs.
My workflow is as follows:
1. I load a JSON configuration file (configuration_C57.json) using NxLibItem::setJson().
2. I execute cmdCapture to capture the first image.
3. I execute another cmdCapture to capture the second image.
4. I extract the raw texture image using getBinaryData() and save it as a BMP file (a simplified sketch of this flow follows below).
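For reference, here is a stripped-down sketch of that flow. It is not the attached code itself: the serial number, the node that setJson() is applied to, and the file names are placeholders, and the image is saved via cmdSaveImage here for brevity instead of the getBinaryData()-based BMP export my real code uses.

#include "nxLib.h"
#include <fstream>
#include <sstream>
#include <string>

int main()
{
    nxLibInitialize(true);

    std::string serial = "xxxxxx"; // placeholder serial of the C57
    NxLibItem camera = NxLibItem()[itmCameras][itmBySerialNo][serial];

    // Open the camera.
    NxLibCommand open(cmdOpen);
    open.parameters()[itmCameras] = serial;
    open.execute();

    // Load the saved settings into the camera's parameter tree.
    std::ifstream file("configuration_C57.json");
    std::stringstream settings;
    settings << file.rdbuf();
    camera[itmParameters].setJson(settings.str(), true); // only writable nodes

    // First capture -> rawColor.bmp (comes out dark).
    NxLibCommand capture(cmdCapture);
    capture.parameters()[itmCameras] = serial;
    capture.execute();

    // Second capture -> rawColor2.bmp (correctly illuminated).
    capture.execute();

    // Save the raw texture image (cmdSaveImage used here for brevity).
    NxLibCommand save(cmdSaveImage);
    save.parameters()[itmNode] = camera[itmImages][itmRawTexture][itmLeft].path;
    save.parameters()[itmFilename] = "rawColor2.bmp";
    save.execute();

    nxLibFinalize();
    return 0;
}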
Observations
The first image (rawColor.bmp) is dark due to the front light being off.
The second image (rawColor2.bmp) is correctly illuminated as the front light is on.
I have attached the following files to help investigate the issue:
The two captured images (rawColor.bmp and rawColor2.bmp).
The JSON configuration file (configuration_C57.json).
Could you help me identify why the front light behaves differently between the two captures? Is there something missing in my configuration or initialization process?
The reason why the first texture image is darker is the automatic adjustment of the texture node's exposure and gain values. Both default to 1, which is too low for most scenes; the automatic adjustment raises them during the following captures.
I wrote the following Python script to debug your problem:
#! /usr/bin/env python3
# -*- coding: utf8 -*-

import argparse

import nxlib.api as api
from nxlib.constants import *
from nxlib import NxLibCommand
from nxlib import NxLibItem

parser = argparse.ArgumentParser()
parser.add_argument("serial", type=str, help="the serial of the stereo camera to open")
args = parser.parse_args()


def get_camera_node(serial):
    # Get the root of the tree.
    root = NxLibItem()
    # From here on we can use the [] operator to walk the tree.
    cameras = root[ITM_CAMERAS][ITM_BY_SERIAL_NO]
    for i in range(cameras.count()):
        found = cameras[i].name() == serial
        if found:
            return cameras[i]


def open():
    with NxLibCommand(CMD_OPEN) as cmd:
        cmd.parameters()[ITM_CAMERAS] = args.serial
        cmd.execute()


def capture():
    with NxLibCommand(CMD_CAPTURE) as cmd:
        cmd.parameters()[ITM_CAMERAS] = args.serial
        cmd.execute()


def compute_disparity_map():
    NxLibCommand(CMD_COMPUTE_DISPARITY_MAP).execute()


def compute_point_map():
    NxLibCommand(CMD_COMPUTE_POINT_MAP).execute()


def save_texture(camera, filename):
    with NxLibCommand(CMD_SAVE_IMAGE) as cmd:
        cmd.parameters()[ITM_NODE] = camera[ITM_IMAGES][ITM_RAW_TEXTURE][ITM_LEFT].path
        cmd.parameters()[ITM_FILENAME] = filename
        cmd.execute()


def print_parameters(camera):
    print()
    print(f"texture exposure: {camera[ITM_PARAMETERS][ITM_CAPTURE][ITM_TEXTURE][ITM_EXPOSURE][ITM_VALUE].as_double()}")
    print(f"texture gain: {camera[ITM_PARAMETERS][ITM_CAPTURE][ITM_TEXTURE][ITM_GAIN][ITM_VALUE].as_double()}")


def main():
    print("Initializing API")
    api.initialize()
    api.open_tcp_port()

    print(f"Opening camera {args.serial}")
    open()

    camera = get_camera_node(args.serial)
    camera[ITM_PARAMETERS][ITM_CAPTURE][ITM_TEXTURE][ITM_ENABLED] = True

    auto_adjustment = True
    if not auto_adjustment:
        camera[ITM_PARAMETERS][ITM_CAPTURE][ITM_TEXTURE][ITM_EXPOSURE][ITM_AUTOMATIC] = False
        camera[ITM_PARAMETERS][ITM_CAPTURE][ITM_TEXTURE][ITM_EXPOSURE][ITM_VALUE] = 5
        camera[ITM_PARAMETERS][ITM_CAPTURE][ITM_TEXTURE][ITM_GAIN][ITM_AUTOMATIC] = False
        camera[ITM_PARAMETERS][ITM_CAPTURE][ITM_TEXTURE][ITM_GAIN][ITM_VALUE] = 9

    print_parameters(camera)

    capture()
    compute_disparity_map()
    compute_point_map()
    save_texture(camera, "texture_1.png")
    print_parameters(camera)

    capture()
    compute_disparity_map()
    compute_point_map()
    save_texture(camera, "texture_2.png")
    print_parameters(camera)

    capture()
    compute_disparity_map()
    compute_point_map()
    save_texture(camera, "texture_3.png")


if __name__ == "__main__":
    main()
You can either skip the first frames, during which the auto adjustment is still converging, or find suitable settings for your scene and load the corresponding JSON settings, as you already do in your provided code.
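If you prefer to stay in C++, those two options could look roughly like the sketch below. The serial number, the number of warm-up captures, and the fixed exposure/gain values (taken from the debug script above) are only example placeholders.

#include "nxLib.h"
#include <string>

static void captureOnce(const std::string& serial)
{
    NxLibCommand capture(cmdCapture);
    capture.parameters()[itmCameras] = serial;
    capture.execute();
}

int main()
{
    nxLibInitialize(true);

    std::string serial = "xxxxxx"; // placeholder
    NxLibCommand open(cmdOpen);
    open.parameters()[itmCameras] = serial;
    open.execute();

    NxLibItem texture = NxLibItem()[itmCameras][itmBySerialNo][serial]
                                   [itmParameters][itmCapture][itmTexture];

    // Option 1: discard a few warm-up frames so the automatic exposure/gain
    // adjustment can converge before the texture image is used.
    for (int i = 0; i < 3; ++i) captureOnce(serial);

    // Option 2: switch off the automatic adjustment and set values that suit
    // your scene (or load them from your JSON settings file instead).
    texture[itmExposure][itmAutomatic] = false;
    texture[itmExposure][itmValue] = 5.0; // example value
    texture[itmGain][itmAutomatic] = false;
    texture[itmGain][itmValue] = 9.0;     // example value

    // With fixed values, the very first capture is already correctly exposed.
    captureOnce(serial);

    nxLibFinalize();
    return 0;
}

With fixed exposure and gain, the brightness of the texture image no longer depends on how many captures have already been taken.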
Thank you very much for the detailed explanation and for taking the time to write the script to help me with this issue. It’s very useful to understand how the automatic adjustment of exposure and gain affects the first captures.