The core of a typical image sensor is a CCD (charge-coupled device) or a standard CMOS (complementary metal-oxide semiconductor) cell. CCD and CMOS sensors share similar characteristics and are both widely used in commercial cameras; however, most modern sensors use CMOS cells, mainly for manufacturing reasons. Sensors and optics are often integrated to make wafer-level cameras for fields such as biology or microscopy, as shown in Figure 1.
Figure 1: Common arrangement of image sensors with integrated optics and color filters
Image sensors are designed to meet the specific goals of different applications, and therefore offer different levels of sensitivity and quality; consult the manufacturers' data sheets to become familiar with particular sensors. For example, the size and material composition of each photodiode element must be optimized for a given semiconductor manufacturing process in order to achieve the best trade-off between silicon die area and dynamic response for light intensity and color detection.
For computer vision, sampling theory is significant, for example the Nyquist frequency applied to the pixel coverage of the target scene. The sensor resolution and the optics together must provide enough resolution per pixel to image the features of interest, so it follows that a feature of interest should be sampled (imaged) at twice the size of the smallest pixels that matter to that feature. Of course, 2x oversampling is only a minimum target for imaging accuracy; in practice, features of a single pixel width are not easy to resolve reliably.
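As a rough illustration of the 2x sampling criterion, the short Python sketch below estimates how many pixels span a feature of a given physical size, assuming the optics spread a known field of view evenly across the sensor width; the parameter values are purely illustrative.

```python
# Minimal sketch: check whether a feature of interest is sampled at or above
# the 2x (Nyquist) criterion discussed above. All parameter values are
# illustrative assumptions, not taken from any particular camera.

def pixels_per_feature(feature_size_mm, field_of_view_mm, sensor_pixels):
    """Return how many pixels span one feature of the given physical size,
    assuming the optics map the field of view evenly onto the sensor width."""
    mm_per_pixel = field_of_view_mm / sensor_pixels
    return feature_size_mm / mm_per_pixel

# Example: a 0.5 mm feature imaged over a 100 mm field of view
# onto a sensor that is 1920 pixels wide.
samples = pixels_per_feature(feature_size_mm=0.5,
                             field_of_view_mm=100.0,
                             sensor_pixels=1920)
print(f"feature spans {samples:.1f} pixels")
print("meets 2x Nyquist criterion:", samples >= 2.0)
```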
To achieve the best results for a given application, the camera system should be calibrated to determine the sensor noise and the dynamic range of the pixel bit depth under different lighting and distance conditions. Appropriate sensor processing methods should be developed to deal with noise and the nonlinear response of each color channel, to detect and correct pixel defects, and to model geometric distortion. A simple calibration can be devised using a test pattern with gradations from fine to coarse in gray scale, color, and feature pixel size; the results will reveal how the sensor behaves.
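The sketch below shows one way such a test-pattern calibration could be scripted: given a captured image and assumed patch rectangles from a gray-step chart, it reports the mean level and noise per patch and per color channel. The patch coordinates and the synthetic stand-in image are assumptions for illustration only.

```python
import numpy as np

# Sketch of a simple calibration measurement: given a captured image of a
# gray-step test chart and the (row, col, height, width) of each patch,
# report the mean level and noise (standard deviation) per patch and per
# color channel. Patch coordinates are assumed to be known in advance.

def measure_patches(image, patches):
    """image: HxWx3 float array; patches: list of (r, c, h, w) rectangles."""
    results = []
    for (r, c, h, w) in patches:
        region = image[r:r + h, c:c + w, :].reshape(-1, 3)
        results.append((region.mean(axis=0), region.std(axis=0)))
    return results

# Synthetic example standing in for a real capture.
rng = np.random.default_rng(0)
test_image = rng.normal(loc=0.5, scale=0.01, size=(480, 640, 3))
for mean, std in measure_patches(test_image, [(100, 100, 50, 50), (100, 300, 50, 50)]):
    print("mean per channel:", mean, "std per channel:", std)
```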
1. Sensor material
Silicon image sensors are the most widely used, although other materials are also employed; for example, gallium (Ga) is used in industrial and military applications to cover longer infrared wavelengths than silicon can image. Image sensor resolution varies widely between cameras, from single-pixel phototransistor cameras, through one-dimensional line-scan arrays used in industrial applications, to the two-dimensional rectangular arrays in common cameras, all the way to spherical arrays for high-resolution imaging. (Sensor and camera configurations are covered at the end of this chapter.)
Common imaging sensors are manufactured using CCD, CMOS, BSI, and Foveon methods. Silicon image sensors have a nonlinear spectral response curve: they sense the near-infrared part of the spectrum well, but sense the blue, violet, and near-ultraviolet parts poorly (as shown in Figure 2).
Figure 2: Typical spectral response of several silicon photodiodes. Note that the photodiodes have high sensitivity in the near-infrared range around 900 nanometers, but nonlinear sensitivity across the visible range of 400 to 700 nanometers. Because of this standard silicon response, removing the IR filter from a camera increases near-infrared sensitivity. (Spectral data image used with permission of OSI Optoelectronics.)
Note that the silicon spectral response carries over into the raw data when it is read out and discretized into digital pixels. Sensor manufacturers compensate for this in their designs; nevertheless, the color response of the sensor should still be considered when calibrating the camera system for an application and designing the sensor processing pipeline.
2. Sensor photodiode element
The key attribute of an image sensor is the size of the photodiode, or element. A sensor element with a small photodiode captures fewer photons than one with a large photodiode. If the element size is smaller than the wavelengths of visible light to be captured (such as blue light at about 400 nanometers), other problems must be overcome in the sensor design to correct the image color. Sensor manufacturers spend a great deal of effort designing and optimizing element sizes to ensure that all colors are imaged equally well (as shown in Figure 3). In the extreme, small sensor elements may be more susceptible to noise, because fewer photons are accumulated and sensor readout noise is relatively larger. If the photodiode element is too large, the die size and cost of the silicon increase with no real advantage. Commercial sensor devices typically have element sizes of around 1 square micron or larger; the size varies between manufacturers, with compromises made to meet particular needs.
Figure 3: Wavelength assignments of the primary colors. Note that the primary color regions overlap, and that green serves as a good monochrome proxy for all colors.
3. Sensor configuration: Mosaic, Foveon and BSI
Figure 4 shows different on-chip configurations for multispectral sensor design, including mosaic and stacked methods. In the mosaic method, color filters are arranged in a mosaic pattern over the individual elements. The Foveon stacked method relies on the depth of penetration of each color wavelength into the semiconductor material: each color penetrates the silicon to a different depth, so each element images all the colors. The full element size is therefore available to every color, and there is no need to devote separate elements to each color.
Figure 4: (Left) The Foveon method of stacking RGB elements: each element senses all RGB colors, absorbing different wavelengths at different depths. (Right) Standard mosaic elements: an RGB filter is placed on top of each photodiode, and each filter passes only specific wavelengths through to its photodiode.
The back-side illuminated (BSI) sensor structure rearranges the sensor wiring on the die so that each element has a larger area exposed to light and can gather more photons.
The placement of the sensor elements also affects the color response. For example, Figure 5 shows various arrangements of primary color (R, G, B) sensors together with white sensors, where a white sensor (W) has a clear, achromatic color filter. The chosen arrangement allows for a range of pixel processing options; for example, when processing pixel information, selected pixels from different configurations of neighboring elements can be combined to optimize either the color response or the spatial color resolution. In fact, some applications simply use the raw sensor data and perform custom processing to enhance resolution or to construct other color mixtures.
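As a simple example of combining neighboring elements, the sketch below performs 2x2 binning, averaging blocks of raw samples to trade spatial resolution for an improved signal-to-noise ratio; it is a generic illustration, not any vendor's actual readout scheme.

```python
import numpy as np

# Sketch of 2x2 pixel binning: neighboring elements are averaged to trade
# spatial resolution for a better signal-to-noise ratio. This is one simple
# instance of combining neighboring elements, as described above.

def bin_2x2(raw):
    """raw: HxW array with even H and W; returns an (H/2)x(W/2) binned array."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.arange(16, dtype=float).reshape(4, 4)
print(bin_2x2(raw))  # 2x2 result, each value the mean of a 2x2 block
```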
Figure 5: Several different mosaic configurations of element colors, including white, primary RGB colors, and secondary CYM elements. Each configuration provides a different way to optimize color or spatial resolution in sensor processing (image from the book "Building Intelligent Systems", used by permission of Intel Press).
The overall size of the sensor also determines the size of the lens: generally, a larger lens admits more light, so for photography applications larger sensors are better suited to digital cameras. In addition, the aspect ratio of the elements laid out on the die determines the pixel geometry; for example, aspect ratios of 4:3 and 3:2 are used for digital cameras and 35mm film, respectively. The details of the sensor configuration are worth understanding so that the best sensor processing pipeline and image preprocessing can be designed.
4. Dynamic range and noise
Currently, state-of-the-art sensors provide at least 8 bits per color cell, and usually 12 to 14 bits. Sensor elements need both area and time to gather photons, so smaller elements must be designed carefully to avoid problems. Noise can come from the optics, the color filters, the sensor elements, the gain and A/D converters, post-processing, or the compression method. Sensor readout noise also reduces the effective resolution, since each pixel cell is read out of the sensor and passed to an A/D converter that assembles rows and columns of digital pixels. The better the sensor, the less noise it produces and the more effective bits of resolution it yields. The work of Ibenthal is a good reference on noise reduction.
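One basic way to characterize sensor noise is to capture a stack of repeated frames under fixed conditions and measure the temporal variation per pixel. The sketch below shows this idea with a synthetic frame stack standing in for real captures.

```python
import numpy as np

# Sketch of a basic noise measurement: capture several frames of the same
# (ideally dark or uniformly lit) scene and compute the temporal standard
# deviation per pixel. The frame stack below is synthetic; in practice it
# would come from the camera under controlled lighting.

def temporal_noise(frames):
    """frames: NxHxW array of repeated captures; returns per-pixel std and its mean."""
    per_pixel_std = frames.std(axis=0)
    return per_pixel_std, per_pixel_std.mean()

rng = np.random.default_rng(1)
frames = 128 + rng.normal(0, 2.0, size=(32, 480, 640))   # synthetic 32-frame stack
_, mean_noise = temporal_noise(frames)
print(f"estimated read noise: {mean_noise:.2f} DN")
```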
In addition, photon absorption differs for each color, and blue can be problematic, since it is the most difficult color for small sensor elements to image. Some manufacturers attempt to build a simple per-color gamma-curve correction into the sensor, but this approach is not recommended. For applications that require accurate color, consider a colorimetric device model and color management, or even characterize the nonlinearity of each color channel and build a set of simple corrective lookup-table (LUT) transforms.
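The sketch below shows what such a per-channel LUT correction might look like for an 8-bit sensor: each channel gets its own 256-entry table. The gamma values used to build the tables are placeholders; real tables would be derived from measurements of the sensor's actual response.

```python
import numpy as np

# Sketch of per-channel lookup-table (LUT) correction for an 8-bit sensor:
# each channel gets its own 256-entry table mapping raw codes to corrected
# codes. The gamma values here are placeholders; real tables would come from
# calibration measurements of the sensor's actual response.

def build_gamma_lut(gamma, bits=8):
    levels = 2 ** bits
    x = np.linspace(0.0, 1.0, levels)
    return np.clip(np.round((x ** gamma) * (levels - 1)), 0, levels - 1).astype(np.uint8)

def apply_luts(image, luts):
    """image: HxWx3 uint8; luts: list of three 256-entry arrays (R, G, B)."""
    out = np.empty_like(image)
    for ch in range(3):
        out[..., ch] = luts[ch][image[..., ch]]
    return out

luts = [build_gamma_lut(g) for g in (0.45, 0.50, 0.55)]  # assumed per-channel curves
img = np.full((2, 2, 3), 128, dtype=np.uint8)
print(apply_luts(img, luts)[0, 0])
```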
5. Sensor processing
Sensor processing is used to demosaicate and gather pixels from the sensor array, and it is also used to correct perceptual imperfections. In this section we will discuss the basics of sensor processing.
Each imaging system usually contains a dedicated sensor processor, including a fast HW sensor interface, optimized very long instruction word (VLIW) and single instruction, multiple data (SIMD) instructions, and fixed-function hardware blocks to handle the massively parallel pixel-processing workload. Usually the sensor processing is transparent and automatic, configured by the manufacturer of the imaging system, so all images from the sensor are processed in the same way. There are also ways to obtain raw data, which allows the sensor processing to be customized for the application, as in digital photography.
6. Demosaicing
Depending on the sensor element configuration (as shown in Figure 5), various demosaicing algorithms can be used to generate the final RGB pixels from the raw sensor data. Losson and Yang, and Li et al., give two very good surveys of the various methods and their challenges.
One of the main challenges of demosaicing is pixel interpolation, which combines the color channels of neighboring cells into a single pixel. Given the geometry of the sensor element arrangement and the aspect ratio of the cell layout, this is a significant problem. A related issue is the weighting of the color cells, that is, how much each color should contribute to each RGB pixel. Because the spatial element resolution of a mosaic sensor is greater than the resolution of the final combined RGB pixels, some applications require the raw sensor data in order to exploit all of the available accuracy and resolution, or to perform processing that either enhances the effective pixel resolution or achieves more spatially accurate color processing and demosaicing.
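To make the interpolation idea concrete, the sketch below implements a basic bilinear demosaic for an assumed RGGB Bayer layout, filling each missing color sample from the average of its nearest same-color neighbors. Production pipelines use more sophisticated, edge-aware algorithms; this is only the simplest instance.

```python
import numpy as np
from scipy.ndimage import convolve

# Sketch of bilinear demosaicing for an RGGB Bayer mosaic: each missing color
# sample is filled from the average of its nearest same-color neighbors.

def demosaic_bilinear_rggb(raw):
    """raw: HxW float array holding an RGGB mosaic; returns HxWx3 RGB."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Bilinear interpolation kernels for sparsely placed samples.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    rgb = np.empty((h, w, 3), float)
    rgb[..., 0] = convolve(raw * r_mask, k_rb, mode='mirror')
    rgb[..., 1] = convolve(raw * g_mask, k_g, mode='mirror')
    rgb[..., 2] = convolve(raw * b_mask, k_rb, mode='mirror')
    return rgb

raw = np.random.default_rng(2).random((8, 8))   # stand-in for raw sensor data
print(demosaic_bilinear_rggb(raw).shape)        # (8, 8, 3)
```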
7. Correction of bad pixels
Like LCD monitors, sensors may have bad pixels. A vendor can calibrate the sensor at the factory and provide a map of known defects, supplying the coordinates of the bad pixels to be corrected to the camera module or driver. In some cases, adaptive defect correction methods on the sensor monitor neighboring pixels to find defects, and then correct a range of defect types, such as single-pixel defects, column or row defects, and 2x2 or 3x3 block defects. A camera driver can also provide adaptive defect analysis to find defects in real time, and special compensation controls may be offered in the camera's setup menu.
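A minimal sketch of static defect correction is shown below: pixels listed in a hypothetical factory defect map are replaced by the median of their valid neighbors.

```python
import numpy as np

# Sketch of static defect correction: replace pixels listed in a factory
# defect map with the median of their valid neighbors. The defect list here
# is an illustrative placeholder, not a real sensor's map.

def correct_defects(raw, defect_coords):
    """raw: HxW array; defect_coords: iterable of (row, col) bad-pixel positions."""
    out = raw.astype(float).copy()
    bad = set(map(tuple, defect_coords))
    h, w = raw.shape
    for (r, c) in bad:
        neighbors = [out[rr, cc]
                     for rr in range(max(0, r - 1), min(h, r + 2))
                     for cc in range(max(0, c - 1), min(w, c + 2))
                     if (rr, cc) != (r, c) and (rr, cc) not in bad]
        if neighbors:
            out[r, c] = np.median(neighbors)
    return out

raw = np.full((5, 5), 100.0); raw[2, 2] = 4095.0        # one stuck-high pixel
print(correct_defects(raw, [(2, 2)])[2, 2])             # -> 100.0
```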
8. Color and lighting correction
Color correction is necessary to balance the overall color accuracy and the white balance. As shown in Figure 2, silicon sensors are usually quite sensitive to red and green, but less sensitive to blue; therefore, understanding and calibrating the sensor is fundamental to obtaining the most accurate color.
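As one simple illustration of white balancing, the sketch below applies gray-world gains, scaling each channel so the channel means match; this is a generic strategy, not the correction any particular sensor vendor ships.

```python
import numpy as np

# Sketch of gray-world white balancing: scale each channel so the channel
# means match, compensating for the sensor's uneven color response.

def gray_world_white_balance(image):
    """image: HxWx3 float RGB in [0, 1]; returns a gain-corrected copy."""
    means = image.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # per-channel gains toward a neutral gray
    return np.clip(image * gains, 0.0, 1.0)

rng = np.random.default_rng(3)
img = rng.random((4, 4, 3)) * np.array([1.0, 0.9, 0.6])  # simulate a weak blue response
balanced = gray_world_white_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0))               # channel means now roughly equal
```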
The processors of most image sensors include a geometric processor for vignette correction; vignetting makes the image appear darker toward the edges. The correction is based on a geometric distortion function, with a programmable illumination gain that increases toward the edges, and it must be calibrated at the factory to match the optics' actual vignetting pattern.
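The sketch below illustrates the idea of a programmable gain that rises toward the edges, using an assumed quadratic falloff model; a real correction would be fitted to flat-field captures at the factory.

```python
import numpy as np

# Sketch of vignette correction: apply a radial gain that increases toward
# the image edges. The quadratic model and its strength parameter are
# assumptions for illustration only.

def vignette_gain_map(height, width, strength=0.4):
    """Gain map that is 1.0 at the center and rises toward the corners."""
    y, x = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r2 = ((y - cy) ** 2 + (x - cx) ** 2) / (cy ** 2 + cx ** 2)
    return 1.0 + strength * r2

def correct_vignette(image, strength=0.4):
    gain = vignette_gain_map(*image.shape[:2], strength)
    return image * gain[..., None] if image.ndim == 3 else image * gain

img = np.ones((480, 640))
corrected = correct_vignette(img)
print(corrected[0, 0], corrected[240, 320])   # corner gain vs. center gain
```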
9. Geometric correction
The lens may introduce geometric aberrations or warp toward the edges, producing a radially distorted image. To deal with lens distortion, most imaging systems have a dedicated sensor processor with a hardware-accelerated digital warp unit, similar to a texture sampler in a GPU. The geometric corrections for the optics are calibrated and programmed at the factory.
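The sketch below shows the inverse-warp idea with a one-parameter radial distortion model, sampling the distorted input for each corrected output pixel; the coefficient k1 is a placeholder that would normally come from factory or checkerboard calibration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sketch of radial lens-distortion correction using a one-parameter model:
# for each output pixel, sample the distorted input at radius r * (1 + k1*r^2).

def undistort_radial(image, k1):
    """image: HxW array; k1: radial distortion coefficient (assumed known)."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Normalized coordinates relative to the image center.
    yn, xn = (y - cy) / cy, (x - cx) / cx
    scale = 1.0 + k1 * (xn ** 2 + yn ** 2)
    src_y, src_x = yn * scale * cy + cy, xn * scale * cx + cx
    return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')

img = np.random.default_rng(4).random((240, 320))   # stand-in for a captured frame
print(undistort_radial(img, k1=0.1).shape)          # (240, 320)
```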