Consider a variable signal S. For a single image, its minimum value is 1 (one code value) and its maximum is limited by the bit depth, i.e., the largest value the data format can store. An 8-bit grayscale image, with a maximum value of 255, has a theoretical dynamic range of 48 dB; a 10-bit image has 60 dB; a 20-bit image has 120 dB.
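The dB figures above follow from the standard definition of dynamic range as the ratio of the largest to the smallest resolvable signal, expressed as 20·log10(max/min). A minimal sketch (the function name is my own; the formula is the standard one, taking the minimum signal as 1 count):

```python
import math

def dynamic_range_db(bit_depth: int) -> float:
    """Theoretical dynamic range of an n-bit linear code in dB:
    20 * log10(max_code / min_code), with min_code taken as 1 count."""
    max_code = 2 ** bit_depth - 1
    return 20 * math.log10(max_code)

for bits in (8, 10, 20):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
```

Running this reproduces the figures in the text: roughly 48 dB for 8-bit, 60 dB for 10-bit, and 120 dB for 20-bit data.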
In fact, dynamic range is a universal concept: any signal or variable S can have a dynamic range defined for it. Image sensors have a dynamic range, and so do displays, projectors, printers, and so on. We can even speak of a person's dynamic range: someone who can endure hardship when conditions are tough, enjoy life when conditions are good, and adapt to whatever comes is a person with high dynamic range.
The dynamic range of an image and the dynamic range of the scene are usually not the same.
For a scene, the signal S is not an image grayscale value but the luminance of the light coming from the scene: the maximum of S is the luminance of the brightest part of the scene, and the minimum is the luminance of the darkest part. This is related to, but not equal to, the image's dynamic range. Moreover, image post-processing usually compresses linear sensor data into a nonlinear output, which further widens the gap between image code values and scene dynamic range.
If scene luminance is plotted on the horizontal axis and the image sensor's output on the vertical axis, the sensor captures a certain luminance range of the scene and maps it onto its own output range.
The actual dynamic range of an image sensor is usually lower than that of the scene: the sensor can only capture the portion of scene luminance that falls within the red box on the horizontal axis. The position of the red box must be dynamically adjusted to follow changes in scene brightness, and this is the job of the auto-exposure (AE) module in the imaging pipeline.
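Moving the red box amounts to changing exposure so that the scene's current brightness lands inside the sensor's capture range. A minimal sketch of one AE update step, assuming a simple damped proportional controller (real AE modules use histograms, metering zones, and more; all names and constants here are illustrative):

```python
def update_exposure(exposure_ms, frame_mean, target=45.9, damping=0.5,
                    exp_min=0.01, exp_max=33.0):
    """One auto-exposure step: nudge exposure time so the frame's mean
    brightness moves toward the target (here ~18% of an 8-bit range).
    Hypothetical simplified controller for illustration only."""
    if frame_mean <= 0:
        # Completely dark frame: double exposure, up to the sensor limit.
        return min(exposure_ms * 2, exp_max)
    ratio = target / frame_mean
    # Damped multiplicative update, clamped to the sensor's exposure limits.
    new_exposure = exposure_ms * (ratio ** damping)
    return max(exp_min, min(exp_max, new_exposure))
```

A frame that meters too bright shortens the next exposure, sliding the red box toward higher scene luminance; a dark frame does the opposite.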
The first technique is to dynamically change pixel sensitivity to expand dynamic range, making the sensor's mapping of scene luminance nonlinear. As luminance increases, pixel sensitivity gradually decreases: the response changes from a linear function of luminance to a piecewise one. Charge accumulation is divided into three stages: at low luminance the sensitivity is high (the black charge segment); at medium luminance the sensitivity is medium (the blue segment); at the highest luminance the sensitivity is lowest. As the plot shows, the pixel's well capacity (the vertical axis) does not increase, but the range of scene luminance that can be mapped (the horizontal axis) grows significantly, achieving the goal of a higher dynamic range.
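The three-stage response described above can be sketched as a piecewise-linear "knee" curve: each segment has a lower slope (sensitivity) than the last, so a much wider luminance span fits into the same well capacity. The knee positions and slopes below are illustrative numbers, not values from any real sensor:

```python
def knee_response(luminance,
                  segments=((100, 1.0), (400, 0.25), (2000, 0.05)),
                  full_well=255.0):
    """Piecewise-linear pixel response: each (limit, slope) pair is one
    sensitivity stage. Slopes drop at each knee, so scene luminance up to
    2000 units maps into a well capacity of only 255 counts.
    Illustrative parameters only."""
    output = 0.0
    prev_limit = 0.0
    for limit, slope in segments:
        span = min(luminance, limit) - prev_limit
        if span <= 0:
            break
        output += span * slope  # charge accumulated in this stage
        prev_limit = limit
    return min(output, full_well)
```

With these numbers, a linear 255-count pixel would clip at luminance 255, while the knee curve keeps responding up to 2000: the vertical axis is unchanged, the horizontal axis is stretched, exactly as the coordinate graph shows.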
Onsemi's automotive image sensor product line launched a 0.3-megapixel variable-sensitivity sensor based on this technology at an early stage. The technology's biggest weakness is that it alters the pixel's response, turning a linear characteristic into a nonlinear one. The shape of the resulting knee curve is sensitive to voltage, temperature, and exposure time, so consistency is poor and the achievable dynamic range extension is limited; such sensors were only barely usable for large-pixel monochrome imaging. This type of technology has now largely been phased out of the market.
The second high dynamic range technique is time-division multiple exposure, the technology used by today's mainstream automotive image sensors. The method is to vary the sensor's exposure time over several consecutive exposures, producing multiple frames, and then select suitable pixels from each frame to merge into one output frame. By changing the exposure time, the sensor effectively has a built-in auto-exposure function: it samples different brightness ranges of the scene separately, obtaining multiple red boxes whose dynamic ranges are then stitched together. The advantages of this technique are that pixel well capacity does not need to increase, only data bandwidth; that each exposure duration can be controlled very precisely, so the merged image has good linear brightness characteristics; and that dynamic range extension is easy, with 140 dB achievable using time-division technology alone.
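The stitching step can be sketched per pixel: where the long exposure is near saturation, substitute the short exposure rescaled by the exposure-time ratio, which restores a linear measure of scene brightness. This is a minimal two-frame version under assumed parameters; real sensors typically merge three or four exposures with smooth blending near the saturation threshold:

```python
def merge_exposures(long_px, short_px, exposure_ratio=16,
                    sat_threshold=4000, max_code=4095):
    """Merge one pixel from a long and a short exposure (12-bit codes).
    Hypothetical simplified two-frame merge: below the saturation threshold
    the long exposure is trusted as-is; above it, the short exposure is
    scaled by the exposure-time ratio to recover linear scene brightness."""
    if long_px < sat_threshold:
        return float(long_px)
    # Long exposure clipped: the short frame saw 1/exposure_ratio the light,
    # so multiplying by the ratio puts it on the long frame's linear scale.
    return float(short_px) * exposure_ratio
```

Because the output scale extends to max_code * exposure_ratio, the merged frame needs more bits than either input, which is the bandwidth cost mentioned above.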
Time-division multiple exposure has one problem that is hard to overcome: the successive exposures are staggered in time. When the scene contains fast-moving objects or rapid lighting changes, such as LED flicker, the merged image exhibits motion artifacts and color noise. ADAS perception algorithms need to be trained specifically to cope with this kind of noise.