CN113963115B - High dynamic range laser 3D scanning method based on single frame image - Google Patents
- Publication number: CN113963115B (application CN202111260769.3A)
- Authority: CN (China)
- Prior art keywords: laser, image, sub, positioning, dynamic range
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/20 — Image enhancement or restoration using local operators
- G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
- G06T2207/10028 — Range image; Depth image; 3D point clouds (image acquisition modality)
- G06T2207/20024 — Filtering details (special algorithmic details)
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Color Television Image Signal Generators (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
A high dynamic range laser three-dimensional scanning method based on a single frame image. During each scan, an image sensor with a pixel-level filter array captures a single-frame laser stripe image, which is decomposed into several sub-images; a laser stripe positioning algorithm is applied to each sub-image in parallel and an optimal center coordinate is synthesized from the results; three-dimensional point cloud reconstruction is then performed with the optimal stripe center coordinates, realizing high dynamic range laser three-dimensional scanning. The pixel-level filter array in the image sensor attenuates the incident light at several ratios in parallel, trading spatial resolution for time efficiency, so that high dynamic range laser three-dimensional scanning is achieved from a single frame image. The method markedly reduces the time needed to acquire laser stripe images, lowers the required camera acquisition frame rate, and enables faster laser three-dimensional scanning.
Description
Technical Field
The invention relates to a high dynamic range imaging method for laser three-dimensional scanning, and belongs to the field of three-dimensional imaging and measurement.
Background
Optical three-dimensional measurement is widely used in industrial manufacturing, surveying and mapping, medicine, cultural heritage and many other fields thanks to its non-contact nature, high precision and ease of operation. Laser three-dimensional scanning, a typical optical three-dimensional measurement technique, provides moderate measurement accuracy with a simple, low-cost optical configuration and has become widespread in recent years.
A classical laser three-dimensional scanning system consists of a camera, a laser source and a mechanical scanning device. The laser source projects a laser line onto the object; the plane containing the laser line intersects the object surface and produces a laser stripe, an image of which is captured by the camera. The shape of the stripe is modulated by the surface profile, so the image coordinates of the stripe can be extracted and the three-dimensional shape of the surface reconstructed from the triangulation geometry between the laser source and the camera.
This working principle shows that the performance of three-dimensional measurement depends largely on the detection and center positioning of the laser stripe. When a stripe center extraction algorithm is designed or selected, the main considerations are positioning accuracy and robustness to noise, under the implicit assumption that the stripe image has good contrast and signal-to-noise ratio. Laser three-dimensional scanning is an actively illuminated optical three-dimensional imaging technique: the brightness of the laser stripe depends on the intensity of the projected laser line and on variations of the object's surface reflectivity, and because the laser is monochromatic, surface texture and color also change the stripe brightness significantly. If the system acquires the stripe image with a standard imaging method, over- or under-exposure occurs easily and degrades stripe detection and positioning. Compared with standard imaging, high dynamic range imaging responds more accurately to drastic changes of light intensity within the field of view. Therefore, when objects have widely varying surface reflectivity or colorful surface textures, a laser three-dimensional scanning system needs high dynamic range imaging to acquire the stripe images.
The high dynamic range imaging technique most commonly used in laser three-dimensional scanning systems is the multiple exposure method inherited from computer vision. Its principle and algorithm are simple, but several frames must be acquired for every high dynamic range image, so time efficiency is low. Moreover, such methods in principle require the object to remain stationary while the frames are acquired, which is detrimental to continuous laser scanning.
Disclosure of Invention
The invention aims to provide a high dynamic range laser three-dimensional scanning method based on a single frame image that increases neither the image acquisition time nor the complexity of the equipment. The pixel-level filter array in the image sensor attenuates the incident light at several ratios in parallel, and spatial resolution is traded for time efficiency, so that high dynamic range laser three-dimensional scanning is achieved from a single frame image.
The invention relates to a high dynamic range laser three-dimensional scanning method based on a single frame image, which comprises the following steps:
During each scan, an image sensor with a pixel-level filter array acquires a single-frame laser stripe image, which is decomposed into a plurality of sub-images; a laser stripe positioning algorithm is applied to each sub-image in parallel and an optimal center coordinate is synthesized from the results; three-dimensional point cloud reconstruction is then performed with the optimal center coordinates of the laser stripe, realizing high dynamic range laser three-dimensional scanning.
The transmittance of the pixel-level filter array varies periodically across adjacent pixels; the specific distribution can be customized, including but not limited to 2×2 and 3×3 periodic structures.
The pixel-level filter array may be a Bayer filter in a commercial color image sensor.
The decomposition splits the single-frame laser stripe image into a plurality of sub-images of different brightness; laser stripe positioning is applied directly to each sub-image and an optimal coordinate is selected from the positioning results, rather than first synthesizing the sub-images into one high dynamic range image and then positioning the stripe.
The selection of the optimal center coordinate relies on positioning parameters, related to image brightness, contrast and signal-to-noise ratio, that are output by the positioning algorithm; by analyzing whether a parameter correlates positively or negatively with positioning accuracy, the optimal center coordinate can be determined as the one with the maximum (or minimum) parameter value.
The laser stripe positioning algorithm works as follows: first, the single-frame input image is decomposed into sub-images of different average brightness according to the periodic distribution of the filter array; then the same laser stripe positioning algorithm performs center positioning on each sub-image, with any known positioning algorithm (such as the extremum method, the gray centroid method or the Hessian-matrix structure analysis method) selected according to the requirements on speed and accuracy; the optimal laser stripe center coordinate is selected from the positioning results of the sub-images by analyzing the positive or negative correlation between the positioning parameters and the positioning accuracy; finally, the coordinate offsets introduced by the sub-image decomposition are compensated when the optimal coordinates are synthesized, and the high dynamic range optimal center coordinates of the whole laser stripe are output.
Since laser stripe positioning always starts from the gray-value distribution in the neighborhood of the stripe, most positioning algorithms can simultaneously output positioning parameters related to image brightness, contrast and signal-to-noise ratio. Performing three-dimensional point cloud reconstruction with the optimal stripe center coordinates then realizes, in essence, high dynamic range laser three-dimensional scanning.
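For illustration only (not part of the claimed method), the sketch below shows a minimal gray centroid positioner for one image column that also returns such a brightness-related positioning parameter; the function name, the summed-gray-value parameter and the threshold values are assumptions, not prescribed by the invention.

```python
import numpy as np

def centroid_position(column, threshold=10, saturation=250):
    """Gray centroid localization of the laser stripe in one image column.

    Returns (center_row, parameter): the sub-pixel stripe center and a
    brightness-related positioning parameter (here: the summed gray value
    of valid, non-saturated pixels). Returns (None, 0.0) if no usable
    stripe signal is found in this column.
    """
    column = column.astype(np.float64)
    valid = (column > threshold) & (column < saturation)
    if not np.any(valid):
        return None, 0.0
    rows = np.nonzero(valid)[0]
    weights = column[rows]
    center = float(np.sum(rows * weights) / np.sum(weights))
    parameter = float(np.sum(weights))  # larger value -> brighter stripe, usually more reliable
    return center, parameter
```

A parameter of this kind correlates positively with positioning accuracy as long as the stripe is not saturated, which is why saturated pixels are excluded in this sketch.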
The invention has the following characteristics:
(1) Compared with the traditional multi-exposure high dynamic range laser scanning method, the method markedly reduces the time required to acquire laser stripe images, lowers the required camera acquisition frame rate, and enables faster laser three-dimensional scanning;
(2) The high dynamic range image acquisition is based on a pixel-level filter array whose transmittance varies periodically; this periodic transmittance pattern can be customized, allowing a better balance between dynamic range and resolution;
(3) Since laser light has good monochromaticity, the color filter array of a commercial color camera can be used directly as a periodic filter array with three different transmittances (see the embodiment); using a commercial color camera directly adds no hardware cost;
(4) After the sub-images are decomposed, the laser stripe center is positioned directly, without synthesizing a high dynamic range stripe image, so the algorithm executes more efficiently.
Drawings
Fig. 1 is a conceptual diagram of a pixel-level filter array in accordance with the present invention.
FIG. 2 is a flowchart of the high dynamic range laser stripe centering algorithm of the present invention.
Fig. 3 is a schematic diagram of the pixel-level filter array corresponding to a Bayer filter (laser wavelength 520 nm) according to an embodiment of the present invention.
Fig. 4 shows an original image acquired by a color image sensor and its decomposed sub-images (laser wavelength 520 nm) according to an embodiment of the present invention; wherein: (a) the single-frame original image; (b) a partial enlargement, in which the brightness attenuation effect of the Bayer filter can be seen; (c) the sub-image I_r of the red channel; (d) the sub-image I_g of the green channel; (e) the sub-image I_b of the blue channel.
Detailed Description
The gray value of a pixel in a laser stripe image acquired by the camera is proportional to the quantum efficiency q_e of the image sensor and the reflectivity r(x, y) of the object surface:
I(x, y) ∝ q_e · r(x, y),
When the surface reflectivity is relatively constant, a laser stripe image with an appropriate gray level can be obtained by setting the camera parameters reasonably. For objects with rich texture or reflective surfaces, however, the reflectivity may change drastically across regions or viewpoints, so the laser stripe appears too dark or too bright (saturated) in the image, which introduces larger positioning errors.
Laser stripe image acquisition in the present invention uses an image sensor with a pixel-level filter array. Existing image sensors are digital and record the acquired image with the pixel as the smallest unit, so an independent filter can be designed for each pixel. By designing a pixel-level filter array whose transmittance varies periodically, different pixels of the image sensor obtain different effective quantum efficiencies, and laser stripe images of different brightness can be separated from a single frame and fed to the subsequent high dynamic range laser stripe positioning algorithm.
A typical pixel-level filter array is shown in Fig. 1, with a transmittance period of 2 × 2 pixels. With dark cells representing filters of lower transmittance and light cells filters of higher transmittance, the quantum efficiencies of the four pixels in one period satisfy
q_1 < q_2 < q_3 < q_4,
and the corresponding image brightness therefore satisfies
I_1 < I_2 < I_3 < I_4.
This means that four sub-images with different average brightness can be separated from a single frame.
The design concept of the present invention can be used to design any pixel-level filter array with periodic transmittance changes, not just the example of fig. 1. By increasing the number of pixels in the period, more sub-images with different average brightness can be separated from the single frame image, at the cost of further reduced resolution of each sub-image.
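A minimal NumPy sketch of the sub-image decomposition for an arbitrary p × q transmittance period is given below; the function name and the synthetic test frame are illustrative assumptions, not part of the invention.

```python
import numpy as np

def decompose(image, period=(2, 2)):
    """Split a single mosaic frame into period[0] * period[1] sub-images.

    Sub-image (i, j) gathers every pixel located at position (i, j) inside
    the periodic filter tile. Each sub-image has 1/p x 1/q of the original
    resolution, and its coordinate offset relative to sub-image (0, 0),
    expressed in sub-image pixels, is (i / p, j / q); this offset must be
    compensated later when the optimal coordinates are synthesized.
    """
    p, q = period
    subs, offsets = [], []
    for i in range(p):
        for j in range(q):
            subs.append(image[i::p, j::q])
            offsets.append((i / p, j / q))
    return subs, offsets

# Usage sketch on a synthetic frame (assumed size, not measured data):
frame = np.random.randint(0, 256, (1024, 1280), dtype=np.uint8)
subs, offsets = decompose(frame, period=(2, 2))  # four 512 x 640 sub-images
```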
In laser three-dimensional scanning systems, high dynamic range imaging serves the accuracy and reliability of laser stripe positioning. In practice it is not necessary to synthesize a high dynamic range image of the stripe: it suffices to position the stripe in each sub-image and then select the coordinate with the best accuracy and reliability. Based on this consideration, the overall flow of the high dynamic range laser stripe center positioning algorithm of the invention is shown in Fig. 2.
First, the single-frame input image is decomposed into sub-images of different average brightness according to the periodic distribution of the filter array. Then the same laser stripe positioning algorithm performs center positioning on each sub-image; any existing positioning algorithm can be selected flexibly according to the requirements on speed and accuracy. Because laser stripe positioning always starts from the gray-value distribution in the neighborhood of the stripe, most positioning algorithms can simultaneously output positioning parameters related to image brightness, contrast and signal-to-noise ratio.
By analyzing the positive or negative correlation between the positioning parameters and the positioning accuracy, the optimal laser stripe center coordinates can be selected from the positioning results of the sub-images. The coordinate offsets introduced by the sub-image decomposition are then compensated when the optimal coordinates are synthesized, and the high dynamic range optimal center coordinates of the whole laser stripe are output; a sketch of this selection and compensation step is given below. Performing three-dimensional point cloud reconstruction with these optimal center coordinates then realizes, in essence, high dynamic range laser three-dimensional scanning.
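Building on the hypothetical decompose and centroid_position helpers sketched earlier, a per-column version of this selection and offset compensation might look as follows; the largest-parameter rule is one possible choice, and the mapping back to the full-resolution grid assumes the simple subsampling of the decomposition sketch above.

```python
def hdr_stripe_centers(sub_images, offsets, period=(2, 2)):
    """Select, per column, the most reliable stripe center among all sub-images.

    Every sub-image is positioned independently with the same algorithm;
    the candidate with the largest positioning parameter wins, and the
    decomposition offset of the winning sub-image is compensated so that
    the returned (row, col) coordinates lie on the full-resolution grid.
    """
    p, q = period
    n_cols = sub_images[0].shape[1]
    centers = []
    for col in range(n_cols):
        best, best_param = None, -1.0
        for sub, (dy, dx) in zip(sub_images, offsets):
            center, param = centroid_position(sub[:, col])
            if center is not None and param > best_param:
                best_param = param
                best = ((center + dy) * p, (col + dx) * q)
        centers.append(best)  # None where no stripe was detected in any sub-image
    return centers
```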
An embodiment of the invention is described below, taking a commercial color camera as an example. The advantage of using a commercial color camera is that it adds no hardware cost; the disadvantage is that the filter transmittance cannot be customized. Following the invention, a pixel-level filter array with periodically varying transmittance can also be designed and a corresponding camera manufactured, giving a better balance between dynamic range and resolution.
Typical image sensors such as complementary metal oxide semiconductor (CMOS) chips have no color-sensing capability of their own, so a plain CMOS chip can only act as a monochrome image sensor. To capture color images with a single chip, commercial color cameras commonly use a Bayer filter to separate the colors. The Bayer filter is a color filter array in which red, green and blue filters are arranged on a square grid, with each filter unit exactly matching the size of one pixel of the CMOS chip. Filters of different colors thus form different color channels, and red, green and blue are recorded in different channels (pixels). Filters of different colors have different transmittance at a given wavelength, so the color channels have different quantum efficiencies at the same wavelength. For example, a typical color CMOS chip has a blue channel sensitive in the 400–500 nm range, a green channel sensitive in the 480–600 nm range, and a red channel sensitive in the 580–750 nm range.
Laser light has good monochromaticity, so the transmittance of laser light of any given wavelength inevitably differs between the three color channels. Take the green semiconductor laser commonly used in laser three-dimensional scanning systems, with a wavelength of 520 nm, as an example. When a color image sensor with a Bayer filter collects the laser stripe image, the green channel has the highest transmittance, the blue channel the next highest, and the red channel the lowest. For a 520 nm laser, the red, green and blue filters of the Bayer filter therefore act as three filters of different transmittance, as shown in Fig. 3. Four sub-images with three different average brightness levels can then be separated from a single frame, where the two sub-images of the green channel have the same average brightness and
I_r < I_b < I_g,
as shown in Fig. 4.
The overall algorithm flow is shown in Fig. 2. A laser stripe positioning algorithm is applied independently to each sub-image to compute the stripe coordinates, so four coordinates for any point on the stripe are obtained from the different sub-images. These four coordinates have different reliability and accuracy, depending on the brightness level of the stripe at that point. Because the dynamic range of each sub-image is relatively low, in the red-channel sub-image I_r the stripe brightness may be appropriate where the surface reflectivity is high but too low where the reflectivity is low; in the green-channel sub-image I_g the stripe brightness may be appropriate where the reflectivity is low but too high where the reflectivity is high. In practice it is difficult to formulate a general criterion for whether the brightness level is appropriate. We therefore use the characteristic value output by the positioning algorithm to evaluate the quality of each coordinate: theoretically, the larger its absolute value, the higher the reliability and accuracy of the associated coordinate. The optimal coordinate is thus selected by comparing the four characteristic values. Finally, the coordinate offsets of the different sub-images must be compensated: according to the filter arrangement of the Bayer filter, if the red-channel sub-image is taken as the reference, the offsets of the two green sub-images and the blue sub-image are (0, 0.5), (0.5, 0) and (0.5, 0.5), respectively. A sketch of this channel splitting and offset bookkeeping is given below.
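The sketch below shows how a raw frame from an RGGB Bayer sensor (an assumed tile layout; actual layouts differ between sensors and must be checked) could be split into the red, two green and blue sub-images together with the offsets used above; the dictionary keys are illustrative.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a raw RGGB Bayer frame into four sub-images plus their offsets.

    Assumed tile layout of the top-left 2 x 2 block:  R G
                                                      G B
    Offsets are given relative to the red sub-image, in sub-image pixels,
    matching (0, 0.5), (0.5, 0) and (0.5, 0.5) for G1, G2 and B.
    """
    return {
        "r":  (raw[0::2, 0::2], (0.0, 0.0)),
        "g1": (raw[0::2, 1::2], (0.0, 0.5)),
        "g2": (raw[1::2, 0::2], (0.5, 0.0)),
        "b":  (raw[1::2, 1::2], (0.5, 0.5)),
    }

# For a 520 nm laser the expected ordering of average brightness is I_r < I_b < I_g.
```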
Three-dimensional point cloud reconstruction is performed with the optimal center coordinates of the laser stripe, realizing high dynamic range laser three-dimensional scanning. The reconstruction adopts the common line-plane intersection model, and the related model parameters are obtained by system calibration based on the classical pinhole camera model; a minimal sketch of this intersection is given after this paragraph.
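The line-plane (ray-plane) intersection itself is standard; below is a minimal sketch under a pinhole model, where the intrinsic matrix K and the laser-plane parameters (normal n and offset d, both in camera coordinates) are assumed to come from the system calibration, and the numeric values are made up for illustration.

```python
import numpy as np

def reconstruct_point(u, v, K, plane_n, plane_d):
    """Back-project pixel (u, v) and intersect the viewing ray with the laser plane.

    K       : 3 x 3 pinhole intrinsic matrix (from calibration)
    plane_n : laser plane normal in camera coordinates, with n . X + d = 0
    plane_d : laser plane offset
    Returns the 3D point in camera coordinates.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction of the viewing ray
    t = -plane_d / float(plane_n @ ray)             # ray parameter at the intersection
    return t * ray

# Usage sketch with made-up calibration values:
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
n, d = np.array([0.0, -0.7, 0.3]), -150.0
point_3d = reconstruct_point(700.2, 455.8, K, n, d)
```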
Claims (1)
1. A high dynamic range laser three-dimensional scanning method based on a single frame image, characterized in that: during each scan, an image sensor with a pixel-level filter array acquires a single-frame laser stripe image, which is decomposed into a plurality of sub-images; a laser stripe positioning algorithm is applied to each sub-image in parallel and an optimal center coordinate is synthesized; three-dimensional point cloud reconstruction is performed with the optimal center coordinates of the laser stripe, realizing high dynamic range laser three-dimensional scanning;
the decomposition splits the single-frame laser stripe image into a plurality of sub-images of different brightness, laser stripe positioning is applied directly to each sub-image and an optimal coordinate is selected from the positioning results, rather than first synthesizing the sub-images into one high dynamic range image and then positioning the laser stripe;
the selection of the optimal center coordinate depends on positioning parameters, related to image brightness, contrast and signal-to-noise ratio, that are output by the positioning algorithm; the optimal center coordinate is determined by analyzing the positive or negative correlation between the positioning parameters and the positioning accuracy, i.e. according to the maximum or minimum parameter value;
the laser stripe positioning algorithm is as follows: first, the single-frame input image is decomposed into sub-images of different average brightness according to the periodic distribution of the filter array; then the same laser stripe positioning algorithm performs center positioning on each sub-image, with a known positioning algorithm selected according to the requirements on speed and accuracy; the optimal laser stripe center coordinate is selected from the positioning results of the plurality of sub-images by analyzing the positive or negative correlation between the positioning parameters and the positioning accuracy; the coordinate offsets introduced during the sub-image decomposition are compensated when the optimal coordinates are synthesized, and finally the high dynamic range optimal center coordinates of the whole laser stripe are output;
the transmittance of the pixel-level filter array varies periodically across adjacent pixels and comprises 2×2 and 3×3 periodic structures;
the pixel-level filter array is a Bayer filter in a color image sensor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111260769.3A CN113963115B (en) | 2021-10-28 | 2021-10-28 | High dynamic range laser 3D scanning method based on single frame image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111260769.3A CN113963115B (en) | 2021-10-28 | 2021-10-28 | High dynamic range laser 3D scanning method based on single frame image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113963115A CN113963115A (en) | 2022-01-21 |
CN113963115B (en) | 2024-11-15
Family
ID=79467875
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111260769.3A CN113963115B (en) Active | High dynamic range laser 3D scanning method based on single frame image | 2021-10-28 | 2021-10-28
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113963115B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116664742B (en) * | 2023-07-24 | 2023-10-27 | 泉州华中科技大学智能制造研究院 | HDR high dynamic range processing method and device for laser light bar imaging |
CN118608592B (en) * | 2024-08-07 | 2024-11-15 | 武汉工程大学 | Line structure light center line extraction method based on light channel exposure self-adaption |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389639A (en) * | 2018-07-16 | 2019-02-26 | 中国铁道科学研究院集团有限公司基础设施检测研究所 | Rail profile laser stripe center extraction method and device under dynamic environment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108151671B (en) * | 2016-12-05 | 2019-10-25 | 先临三维科技股份有限公司 | A kind of 3 D digital imaging sensor, 3 D scanning system and its scan method |
CN106981054B (en) * | 2017-03-27 | 2019-12-24 | 联想(北京)有限公司 | Image processing method and electronic equipment |
CN108132017B (en) * | 2018-01-12 | 2020-03-24 | 中国计量大学 | Planar weld joint feature point extraction method based on laser vision system |
CN109920007B (en) * | 2019-01-26 | 2023-04-07 | 中国海洋大学 | Three-dimensional imaging device and method based on multispectral photometric stereo and laser scanning |
CN113324478A (en) * | 2021-06-11 | 2021-08-31 | 重庆理工大学 | Center extraction method of line structured light and three-dimensional measurement method of forge piece |
- 2021-10-28: CN application CN202111260769.3A filed (patent CN113963115B, en), status Active
Non-Patent Citations (1)
Title |
---|
High dynamic range imaging for fringe projection profilometry with single-shot raw data of the color camera; Yongkai Yin et al.; Optics and Lasers in Engineering; 2016-09-02; see pages 138-144 *
Also Published As
Publication number | Publication date |
---|---|
CN113963115A (en) | 2022-01-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||