CN118347590A - Ultraviolet-visible band coded aperture spectrum imaging system and reconstruction method thereof - Google Patents
Ultraviolet-visible band coded aperture spectrum imaging system and reconstruction method thereof
- Publication number
- CN118347590A (application CN202410431634.6A)
- Authority
- CN
- China
- Prior art keywords
- feature matrix
- dmd
- light path
- image
- spectrum
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/2823—Imaging spectrometer
- G01J2003/283—Investigating the spectrum computer-interfaced
- G01J2003/2836—Programming unit, i.e. source and date processing
- G01J2003/284—Spectral construction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an ultraviolet-visible band coded aperture spectral imaging system and a reconstruction method thereof. The light beam emitted by the light source is reflected onto the DMD device through a front imaging light path, where a mask image is generated; after reflection by the DMD device the light enters a rear coupling light path, an image signal is collected by a detector, and finally the image signal is transmitted to a spectrum restoration data processing system for spectral reconstruction. The reconstruction method comprises assembling the optical imaging system, using the system to collect mask images and two-dimensional snapshots of detection targets to construct an image training data set, inputting the image training data set into a deep learning network for training, and reconstructing spectral images with the trained deep learning network. The invention realizes spectral measurement at high spatial resolution and thus provides more accurate analysis results, can perform panoramic spectral imaging of a target scene in a short time, and improves imaging efficiency and real-time performance.
Description
Technical Field
The invention belongs to the technical field of spectral imaging, and particularly relates to an ultraviolet-visible band coded aperture spectral imaging system and a reconstruction method thereof.
Background
Spectral imaging technology can simultaneously acquire the spectral information of the radiation from an object and the two-dimensional spatial information of the detection target, thereby constructing a data cube that integrates spectral and spatial information and clearly displays the characteristics of the detection target. The technology is widely applied in fields such as medical imaging, agriculture, environmental detection, food safety, mineral exploration and military reconnaissance.
The traditional spectral imaging technology uses a push-broom spectral imaging system, but such a system requires a mechanical transmission device and is prone to mechanical failure, vibration error and similar problems, which reduce detection precision and measurement speed. Furthermore, scanning a large number of signal channels with a push-broom system reduces the measurement time available per channel within a limited total measurement time, resulting in a reduced luminous flux for each single channel. Since the luminous flux of the system is proportional to the signal-to-noise ratio while the detector circuit noise remains constant, a decrease in luminous flux leads to a decrease in signal-to-noise ratio. Therefore, a push-broom spectral imaging system cannot simultaneously achieve a high signal-to-noise ratio and a high detection speed.
Disclosure of Invention
In order to solve the problems existing in the background art, the invention aims to provide a DMD-based ultraviolet-visible band coded aperture spectral imaging system and a reconstruction method thereof. The system effectively improves the signal-to-noise ratio while performing hyperspectral imaging, and solves problems of the prior art such as complex optical systems, high manufacturing cost, long imaging time and limited use scenarios.
The technical scheme adopted by the invention is as follows:
1. An ultraviolet-visible band coded aperture spectral imaging system:
The system comprises a light source, a front-end imaging light path, a DMD device, an upper computer display control terminal, a rear-end coupling light path, a detector, a spectrum restoration data processing system and a system black box; the DMD device, the rear coupling light path, the detector and the spectrum restoration data processing system are all arranged in a system black box, a light-passing hole is formed in the top of the system black box, a light source is arranged at the position of the light-passing hole of the system black box, and the light path direction of the light source is axially parallel to the light-passing hole; the concave reflector in the front imaging light path is fixed on the light path of the light beam emitted by the light source, the DMD equipment is connected with the display control terminal of the upper computer, the detector is arranged between the rear coupling light path and the spectrum restoration data processing system, and the detector is used for collecting image signals and transmitting the image signals to the spectrum restoration data processing system for spectrum reconstruction.
The DMD device is mainly formed by connecting a DMD coding aperture and a DMD drive control board, wherein the DMD coding aperture is positioned at the inclined rear of a front imaging light path, the DMD coding aperture is arranged at an angle of 45 degrees with the light path direction of a light source, the DMD drive control board is connected with an upper computer display control terminal, and the upper computer display control terminal controls the working mode of the DMD coding aperture through the DMD drive control board so as to control mask images generated in the DMD coding aperture;
The front imaging light path mainly comprises two concave reflectors, a convex reflector and three plane reflectors; the light beam emitted by the light source is reflected by the first concave mirror, the convex mirror, the second concave mirror and the first plane mirror in sequence and then is incident on the DMD coding aperture, a mask image is generated on the DMD coding aperture, and the light beam reflected by the DMD coding aperture is reflected by the second plane mirror and the third plane mirror in sequence and then is incident on the rear coupling light path.
The rear coupling light path adopts a refraction type structure and mainly comprises two convex lenses and a positive meniscus lens, light beams emitted by the front imaging light path sequentially pass through the two convex lenses and the positive meniscus lens and are acquired by the detector to obtain a two-dimensional snapshot, and the spectrum restoration data processing system processes the two-dimensional snapshot to reconstruct a hyperspectral image;
The rear coupling light path is fixed at the inclined rear of the DMD coding aperture and is parallel to the light path direction of the light source; the detector is positioned right behind the rear coupling light path.
The light source adopts a halogen lamp light source; the wall surface of the system black box adopts oxidation blackening treatment, and the system black box is used for reducing interference of stray light on the system.
2. A spectrum reconstruction method based on deep learning comprises the following steps:
step S1, firstly, assembling an optical imaging system according to the trend of an optical path, and testing each element in the optical imaging system to ensure the normal operation of the system;
S2, starting a spectrum imaging system, irradiating a light source on a detection target, enabling a DMD coding aperture and a detector to respectively acquire a mask image and a two-dimensional snapshot of the detection target, simultaneously acquiring a hyperspectral image of the detection target, taking the mask image, the two-dimensional snapshot and the corresponding hyperspectral image as a group of image data sets, and constructing an image training data set by utilizing a plurality of groups of image data sets;
Step S3: constructing a deep learning network in the spectrum restoration data processing system, inputting the image training data set in the step S2 into the constructed deep learning network for training, and acquiring the trained deep learning network after training is completed;
step S4: and (3) performing spectrum image reconstruction by using the trained deep learning network: inputting the two-dimensional snapshot of the target to be detected into a trained deep learning network, and outputting a reconstructed hyperspectral image.
The training method of the deep learning network specifically comprises the following steps:
Firstly, inputting a two-dimensional snapshot into a deep learning network, and outputting a reconstructed spectrum image of a detection target; taking the hyperspectral image as a truth value tag of the deep learning network, and training the deep learning network according to the output reconstructed spectral image and the truth value tag to obtain a trained deep learning network;
The deep learning network is mainly formed by sequentially connecting a preprocessing module, an encoder and a decoder; the two-dimensional snapshot is input into a preprocessing module, and an initial feature matrix containing 32 channels is output; then, extracting the characteristic information of the two-dimensional snapshot by an encoder to obtain five scale characteristic matrixes; finally, the five scale feature matrices are all input into a decoder, and the reconstructed hyperspectral image is output through the decoder.
The method for extracting the characteristic information of the two-dimensional snapshot by using the encoder comprises the following steps:
Firstly, the initial feature matrix is sequentially subjected to a 3×3 convolution and a 2×2 maximum pooling operation to obtain a first scale feature matrix d1;
Then, the first scale feature matrix d1 is processed by a first dense back projection module to obtain a first dense feature matrix r1, and the first dense feature matrix r1 is sequentially subjected to a 3×3 convolution and a 2×2 maximum pooling operation to obtain a second scale feature matrix d2;
Then, the second scale feature matrix d2 is processed by a second dense back projection module to obtain a second dense feature matrix r2, and the second dense feature matrix r2 is sequentially subjected to a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a third scale feature matrix d3;
Then, the third scale feature matrix d3 is processed by a third dense back projection module to obtain a third dense feature matrix r3, and the third dense feature matrix r3 is sequentially subjected to a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a fourth scale feature matrix d4;
Finally, the fourth scale feature matrix d4 is processed by a fourth dense back projection module to obtain a fourth dense feature matrix r4, and the fourth dense feature matrix r4 is sequentially subjected to a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a fifth scale feature matrix d5;
The topology of the decoder is as follows:
The decoder comprises five sub-modules, a 1×1 convolution layer and a Sigmoid activation function;
The five scale feature matrices output by the encoder are respectively input into the five sub-modules, the output ends of the five sub-modules are connected to the input end of the 1×1 convolution layer, and the output of the 1×1 convolution layer is processed by the Sigmoid activation function to obtain the output of the deep learning network;
The first to third sub-modules are mainly formed by sequentially connecting a spatial residual attention block, a spectral attention block, a residual block and a deconvolution layer; the fourth and fifth sub-modules are mainly formed by sequentially connecting a residual block and a deconvolution layer.
The spatial residual attention block mainly comprises convolution operations and a residual connection. The scale feature matrix input into the spatial residual attention block is sequentially processed by two 3×3 convolution layers, one 1×1 convolution layer and a Sigmoid function to obtain a spatial attention map; the spatial attention map is then multiplied by the initially input scale feature matrix, and the product is added to the input scale feature matrix to obtain the output feature t_S ∈ R^(H×W×C) of the spatial residual attention block.
The dense back projection module in the encoder is used for enhancing the characteristics and fusing the characteristics, and the input-output mapping relation expression of the dense back projection module is as follows:
r_n = down^(n−i)(Δr_i) + d_n
Δr_i = up^(n−i)(d_n) − r_i
r_0 = d_0
Wherein r_n represents the output of the n-th dense back projection module; d_n denotes the input feature vector of the current scale; down^(n−i) represents the downsampling operation in the dense back projection module, where the superscript indicates the number of downsampling steps; up^(n−i) denotes the upsampling operation in the dense back projection module, where the superscript denotes the number of upsampling steps; Δr_i denotes a difference feature, and its subscript i denotes the ordinal number of the difference feature.
The invention adopts coded aperture imaging technology to construct a DMD-based ultraviolet-visible band spectral imaging system. The system has a high coding speed and high reliability, combines a Hadamard coding matrix, and detects hyperspectral image data based on a multi-channel superposition multiplexing principle. This technology can effectively improve the detection signal-to-noise ratio of the system and overcome the problem of imaging quality degradation caused by weak signals. The system is particularly applied to the acquisition of fabric hyperspectral images and the detection of color differences. The system encodes the space using the DMD, thereby realizing spectral measurement at high spatial resolution and providing more accurate analysis results. Meanwhile, the system has a high imaging speed, panoramic spectral imaging of a target scene can be carried out in a short time, and imaging efficiency and real-time performance are improved. In addition, the unique design of the system and the application of DMD technology simplify the design of the spectral imaging system, reduce the required optical elements and mechanical components, and lower the cost and complexity of the system. A large number of simulation experiments and real-system experiments show that the reconstruction algorithm provided by the invention is superior to other advanced methods and has excellent reconstruction quality and operation efficiency.
The beneficial effects of the invention are as follows:
1. The invention provides an ultraviolet-visible band coded aperture spectral imaging system based on a spatial light modulator (DMD) and a reconstruction method thereof, which realize spectral measurement at high spatial resolution and thereby provide more accurate analysis results.
2. The invention has a rapid imaging speed, can perform panoramic spectral imaging of the target scene in a short time, and improves imaging efficiency and real-time performance.
3. The invention simplifies the design of the spectral imaging system, reduces the required optical elements and mechanical parts, and lowers the cost and complexity of the system.
4. The reconstruction algorithm provided by the invention is superior to other advanced methods and has excellent reconstruction quality and reconstruction efficiency.
Drawings
FIG. 1 is a block diagram of an ultraviolet-visible band coded aperture spectral imaging system provided by the invention, illustrating a system architecture;
FIG. 2 is a schematic diagram of an ultraviolet-visible band coded aperture spectrum imaging system, illustrating the positional relationship among a light source, a front imaging light path, a DMD coded aperture, a rear coupling light path and a detector;
FIG. 3 is a flow chart of the detection method of the system;
FIG. 4 is a flow chart of a reconstruction algorithm provided by the present invention;
FIG. 5 is a topological structure diagram of an encoder;
FIG. 6 is a topological structure diagram of a decoder;
in the figure: 1-a light source; 2-a front imaging light path; 3-DMD coded aperture; 4-DMD drive control board; 5-an upper computer display control terminal; 6-a post-coupling optical path; 7-a detector; 8-a spectral restoration data processing system; 9-a system black box.
Detailed Description
The invention will now be described in detail with reference to specific examples which will assist those skilled in the art in further understanding the invention, but which are not intended to be limiting in any way.
As shown in figs. 1-2, the system comprises a light source 1, a front-end imaging light path 2, a DMD device, an upper computer display control terminal 5, a rear-end coupling light path 6, a detector 7, a spectrum restoration data processing system 8 and a system black box 9; the DMD device, the rear coupling light path 6, the detector 7 and the spectrum restoration data processing system 8 are all arranged inside the system black box 9, a light passing hole is formed in the top of the system black box 9, the light source 1 is arranged at the position of the light passing hole of the system black box 9, and the light path direction of the light source 1 is axially parallel to the light passing hole; the concave reflecting mirror in the front imaging light path 2 is fixed on the light path of the light beam emitted by the light source 1, the DMD device is connected with the upper computer display control terminal 5, the detector 7 is arranged between the rear coupling light path 6 and the spectrum restoration data processing system 8, and the detector 7 is used for collecting image signals and transmitting the image signals to the spectrum restoration data processing system 8 for spectrum reconstruction.
In particular, the backward direction of the spectral imaging system is the forward direction of the light beam of the light source 1, and the front imaging light path 2 is located behind the light source 1.
The DMD equipment mainly comprises a DMD coding aperture 3 and a DMD drive control board 4 which are connected, namely the DMD coding aperture 3 and the DMD drive control board 4 are integrated, the DMD coding aperture 3 is positioned at the obliquely rear side of the front imaging light path 2, the DMD coding aperture 3 and the light path direction of the light source 1 are arranged at an angle of 45 degrees, the DMD drive control board 4 is connected with an upper computer display control terminal 5, and the upper computer display control terminal 5 controls the working mode of the DMD coding aperture 3 through the DMD drive control board 4 so as to control mask images generated in the DMD coding aperture 3;
The front imaging light path 2 is of an Offner concentric off-axis three-mirror structure and mainly comprises two concave reflectors, a convex reflector and three plane reflectors; the light beam emitted by the light source 1 is reflected by the first concave mirror, the convex mirror, the second concave mirror and the first plane mirror in sequence and then is incident on the DMD coded aperture 3, a mask image is generated on the DMD coded aperture 3, and the light beam reflected by the DMD coded aperture 3 is reflected by the second plane mirror and the third plane mirror in sequence and then is incident on the rear coupling light path 6.
The rear coupling light path 6 adopts a refraction type structure and mainly comprises two convex lenses and a positive meniscus lens, light beams emitted by the front imaging light path 2 sequentially pass through the two convex lenses and the positive meniscus lens, then are acquired by the detector 7 to obtain a two-dimensional snapshot, and the spectrum restoration data processing system 8 processes the two-dimensional snapshot to reconstruct a hyperspectral image;
the rear coupling light path 6 is fixed at the inclined rear of the DMD coded aperture 3, and the rear coupling light path 6 is parallel to the light path direction of the light source 1; the detector 7 is located directly behind the rear coupling beam path 6.
The light source 1 adopts a halogen lamp light source; the wall surface of the system black box 9 is subjected to oxidation blackening treatment, and the system black box 9 is used for reducing interference of stray light on a system.
The DMD coded aperture 3 adopts a 0.7 XGA DMD from TI; the detector 7 is a 2/3″ CMOS camera from SONY with a pixel size of 2.74 μm × 2.74 μm. The upper computer display control terminal 5 controls the operation and alignment of the DMD coded aperture 3 through the DMD drive control board 4.
As shown in fig. 1, an embodiment of the present invention provides a DMD-based ultraviolet-visible band coded aperture spectral imaging system, which comprises a light source 1 fixed on the upper left side of the system black box, a front imaging light path 2 for focusing and adjusting light, a DMD drive control board 4 for controlling and driving the DMD, a rear coupling light path 6 for collecting and transmitting light, and a detector 7 for imaging; the DMD coded aperture 3 is used for realizing high-precision all-solid-state coding. A signal can be sent to the DMD drive control board 4 through the upper computer display control terminal 5 to control the DMD coded aperture 3. The images and data acquired by the detector 7 can be reconstructed by the spectral restoration data processing system 8 into the final desired hyperspectral image.
Wherein the light source 1 is required to be fixed at the upper left of the black box of the system and leveled, the position of the light source 1 is adjusted so that the light rays emitted by the light source 1 can pass through the light through holes and vertically strike on the concave reflecting mirror of the front imaging light path 2 (shown in fig. 2). The light source 1 is selected as a halogen lamp, the wavelength of which ranges from 300 to 1000nm, the intensity of which can be adjusted by adjusting the electronic dimmer. Meanwhile, a fan is provided for the light source to ensure stability and reliability of the light source when used for a long time.
The front imaging light path 2 adopts an Offner concentric off-axis three-mirror structure, and the structure is completely symmetrical, so that aberration such as spherical aberration and coma aberration can be effectively compensated, no color distortion exists, and the wave band range is wide. The optical path comprises two concave mirrors, a convex mirror and three planar mirrors (as shown in fig. 2). The 2 concave reflectors are completely overlapped, so that the 2 concave reflectors can be integrally processed during design and processing, and the assembly and adjustment are simpler. The light emitted by the object point is firstly reflected to the surface of the convex reflector through the first concave reflector, then reflected to the surface of the second concave reflector through the convex reflector, imaged at the first plane mirror, and finally can be subjected to subsequent processing through the plane mirror. The optical elements in the system are all spherical elements, and the system is easy to process and test.
Wherein the DMD coded aperture 3 serves as an optical encoder, and spatial encoding of the light can be achieved by adjusting the tilt angles of the micromirrors. The DMD device is placed in the system at a 45-degree offset, which facilitates the design of the optical path and the mechanical structure and reduces systematic errors. The DMD drive control board 4 realizes data interaction between the drive board and the DMD coded aperture 3, so that bitmap display on the DMD is completed and a bitmap is displayed in the DMD coded aperture 3. The upper computer display control terminal 5 is mainly used for sending control signals to the DMD drive control board 4; the terminal communicates with the DMD drive control board and transmits data through the UDP protocol, thereby realizing DMD instruction operations and other functions. Meanwhile, four coding modes, namely full-pixel scanning, single-pixel scanning, multi-pixel scanning and Hadamard coding, have been developed to meet the various performance requirements of the coded aperture spectral imaging system.
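For illustration only, the following Python sketch shows how bitmaps for two of these coding modes (full-pixel and single-pixel scanning) could be generated as mask patterns; the array sizes and function names are assumptions for the sketch and are not taken from the patent.

```python
import numpy as np

# Hypothetical mask generators for two of the DMD coding modes mentioned above.
def full_pixel_mask(h, w):
    """Full-pixel scanning: every micromirror is switched on in a single frame."""
    return np.ones((h, w), dtype=np.uint8)

def single_pixel_masks(h, w):
    """Single-pixel scanning: one micromirror is switched on per frame, scanned over the array."""
    for y in range(h):
        for x in range(w):
            mask = np.zeros((h, w), dtype=np.uint8)
            mask[y, x] = 1
            yield mask

# Example: a 4x4 illustrative array yields one full-field mask and 16 single-pixel masks.
masks = [full_pixel_mask(4, 4)] + list(single_pixel_masks(4, 4))
print(len(masks))   # 17
```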
Wherein the rear coupling light path 6 adopts a structure of a combination of a biconvex lens and a positive meniscus lens. In order to ensure that the rear coupling light path has good transmittance at 300-1000nm, the materials of the two lenses are ultraviolet fused quartz. The positive meniscus lens is thicker in the middle than at the edges, which causes light to converge, and can be used with a double lens to reduce the focal length and thus increase the numerical aperture.
Wherein the detector 7 is a 2/3″ CMOS camera from SONY, which receives the image secondarily imaged by the coupling light path and sends it to the subsequent data processing system for processing. The spectral restoration data processing system 8 is mainly used for processing the acquired image data. The data processing system can set the resolution and exposure target value of the detector 7, and supports white balance setting, color adjustment, dark-field correction, reconstruction of hyperspectral images, etc. Meanwhile, the acquired images can be cropped and stitched.
In order to avoid the influence of external factors and the change of the internal structure of the system on the detection result, when all optical elements are processed, the processing technology adopts anodic oxidation blackening treatment to absorb stray light generated by unexpected light paths as much as possible, and a light-passing hole and a corresponding light shield are arranged to reduce the stray light entering the system. Meanwhile, a stable and reliable element fixing structure is designed for each optical element, so that the relative position relation between each element is ensured to be accurate.
Since the system acquires only two-dimensional snapshot measurements while the final hyperspectral image is required, the hyperspectral image needs to be reconstructed from the acquired snapshot measurements. In order to solve this highly ill-posed problem, a reasonable hyperspectral compressed snapshot reconstruction algorithm is provided. The algorithm shown in fig. 4 mainly comprises: snapshot measurement and mask extraction, snapshot preprocessing, extraction of snapshot spatial and spectral feature information, snapshot feature information enhancement, feature information extraction and fusion, hyperspectral image reconstruction, and hyperspectral image data transmission. The algorithm directly learns the mapping from snapshot measurements to hyperspectral images in an end-to-end manner, and through its network structure and high-performance modules it can capture more spatial and spectral feature information from the two-dimensional snapshot measurements to help the reconstruction. The algorithm is still capable of maintaining a high-quality reconstruction when the hardware mask of the hyperspectral compressed snapshot imaging system changes. The reconstruction algorithm exhibits better reconstruction quality and reconstruction speed than other reconstruction algorithms and adapts to different masks.
The specific embodiment of the spectral reconstruction using the system of the present invention is as follows, as shown in fig. 3 and 4:
step S1, firstly, assembling an optical imaging system according to the trend of an optical path, and testing each element in the optical imaging system to ensure the normal operation of the system;
the step S1 specifically comprises the following steps:
S1.1, before a system is started, a light source 1, a front-end imaging light path 2, DMD equipment, an upper computer display control terminal 5, a rear-end coupling light path 6, a detector 7 and a spectrum restoration data processing system 8 are respectively tested, so that each element and software can work normally;
S1.2, integrally assembling and adjusting the system by utilizing a laser, ensuring that all elements of the system are at ideal positions before working, sequentially adjusting the positions of all elements according to the trend of an optical path during adjustment, firstly keeping the central heights of all elements consistent, then adjusting the horizontal positions of all elements, and finishing the system assembling and adjusting when the laser point is at the central positions of all the elements of the system;
s1.3, starting a system, carrying out necessary calibration and configuration, adjusting the brightness of a light source, selecting a working mode and setting data acquisition parameters;
s1.4, performing functional test and performance verification, and checking whether the imaging, detecting and data processing functions of the system are normal or not:
S1.4.1 Imaging detection of the system: a halogen lamp is adopted as the illumination light source, and imaging detection of the DMD device is performed for a 128×128-pixel full screen and a 1×1024-pixel single line. When the DMD device images the central area of the measured image well and without obvious distortion, the driving logic of the DMD device is correct and the designed optical system works effectively.
S1.4.2 Resolution test of the system: the spatial resolution determines the acquisition capability and detection accuracy of the coded aperture spectral imaging system for two-dimensional spatial information. During the test, the integration time of the detector 7 is set to 300 ms, a USAF1951 resolution plate is used, and imaging measurement is performed on the fifth element of group 0 of the resolution plate.
S1.4.3 Hadamard transform imaging test of the system: first, Hadamard coding is performed with the DMD micromirrors flipped in 8×8-pixel blocks, using a 15-order cyclic S matrix whose first row is '000100110101111'. The detection area is 30×35 pixels; the 15-order S matrix is filled 15 times over 3×5-pixel rectangular blocks, 70 (30/3 × 35/5) rectangular blocks are filled in sequence, and 15×70 = 1050 spectral signals are measured, thus completing the Hadamard coding of the detection area (a minimal sketch of this S-matrix construction is given after this list). During the test, the integration time of the detector 7 is set to 100 ms, and imaging measurement is carried out on the encoded fifth element of group 0 of the USAF1951 plate.
S1.4.4, performing mercury lamp spatial distribution test on the system: imaging and actually measuring the two-dimensional space distribution of five characteristic spectral lines of 365.0nm, 404.7nm, 434.7nm, 546.1nm and 577.0nm, verifying the distribution condition of a target in the two-dimensional space, and ensuring reasonable distribution.
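As referenced in S1.4.3 above, a minimal numpy sketch of the 15-order cyclic S-matrix construction and the resulting measurement count might look as follows; the direction of the cyclic shift is an assumption.

```python
import numpy as np

# 15-order cyclic S matrix: each row is a cyclic shift of the first row stated in S1.4.3.
first_row = np.array([int(c) for c in "000100110101111"], dtype=np.uint8)
S = np.stack([np.roll(first_row, -k) for k in range(15)])        # shape (15, 15)

# The 30x35-pixel detection area is tiled with 3x5-pixel rectangular blocks.
area_h, area_w = 30, 35
block_h, block_w = 3, 5
n_blocks = (area_h // block_h) * (area_w // block_w)             # 10 * 7 = 70 blocks
n_measurements = S.shape[0] * n_blocks                           # 15 * 70 = 1050 spectral signals

print(S.shape, n_blocks, n_measurements)                         # (15, 15) 70 1050
```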
S2, starting a spectrum imaging system, irradiating a light source 1 on a detection target, enabling a DMD coding aperture 3 and a detector 7 to respectively acquire a mask image and a two-dimensional snapshot of the detection target, simultaneously acquiring a hyperspectral image of the detection target by using a hyperspectral camera, taking the mask image, the two-dimensional snapshot and the corresponding hyperspectral image as a group of image data sets, and constructing an image training data set by using a plurality of groups of image data sets;
Step S3: constructing a deep learning network for outputting the reconstructed hyperspectral image in a spectrum restoration data processing system 8, inputting the image training data set in the step S2 into the constructed deep learning network for training, and acquiring the trained deep learning network after training is completed;
step S4: and (3) performing spectrum image reconstruction by using the trained deep learning network: inputting the two-dimensional snapshot of the target to be detected into a trained deep learning network, and outputting a reconstructed hyperspectral image.
The reconstructed hyperspectral image is transmitted and stored to a spectrum restoration data processing system 8.
The training method of the deep learning network specifically comprises the following steps:
Firstly, inputting a two-dimensional snapshot into a deep learning network, and outputting a reconstructed spectrum image of a detection target; taking the hyperspectral image as a truth value tag of the deep learning network, and training the deep learning network according to the output reconstructed spectral image and the truth value tag to obtain a trained deep learning network;
The deep learning network is mainly formed by sequentially connecting a preprocessing module, an encoder and a decoder; inputting the two-dimensional snapshot into a preprocessing module, and outputting an initial feature matrix containing 32 channels; then, extracting the characteristic information of the two-dimensional snapshot by an encoder to obtain five scale feature matrices; finally, the five scale feature matrices are all input into a decoder, and the reconstructed hyperspectral image is output through the decoder. In specific implementation, the preprocessing module comprises a 1×1 convolution layer, and the two-dimensional snapshot generates an initial feature matrix comprising 32 channels after a 1×1 convolution operation; both the encoder and decoder include five scales.
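A minimal PyTorch training sketch for steps S2-S3 is given below, assuming snapshot/hyperspectral pairs collected as described in step S2; the placeholder network, the band count of 31, and the loss and optimizer choices are illustrative assumptions rather than details specified by the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the step-S2 dataset: (two-dimensional snapshot, hyperspectral truth-value label) pairs.
pairs = TensorDataset(torch.rand(8, 1, 64, 64), torch.rand(8, 31, 64, 64))
train_loader = DataLoader(pairs, batch_size=4)

# Placeholder network: 1x1-conv preprocessing to 32 channels followed by a toy head to 31 bands;
# the encoder/decoder modules sketched further below would replace the toy head in practice.
net = nn.Sequential(nn.Conv2d(1, 32, kernel_size=1),
                    nn.Conv2d(32, 31, kernel_size=1),
                    nn.Sigmoid())
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for snapshot, hsi_gt in train_loader:        # snapshot: (B,1,H,W); hsi_gt: (B,31,H,W)
    recon = net(snapshot)                    # reconstructed spectral image
    loss = loss_fn(recon, hsi_gt)            # hyperspectral image used as the truth-value label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time (step S4), only the trained network and the two-dimensional snapshot of the target to be detected are needed to output the reconstructed hyperspectral image.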
The manner of extracting the feature information of the two-dimensional snapshot by using the encoder is as shown in fig. 5:
Firstly, the initial feature matrix is sequentially subjected to a 3×3 convolution and a 2×2 maximum pooling operation to obtain a first scale feature matrix d1;
Then, the first scale feature matrix d1 is processed by a first dense back projection module to obtain a first dense feature matrix r1, and the first dense feature matrix r1 is sequentially subjected to a 3×3 convolution and a 2×2 maximum pooling operation to obtain a second scale feature matrix d2;
Then, the second scale feature matrix d2 is processed by a second dense back projection module to obtain a second dense feature matrix r2, and the second dense feature matrix r2 is sequentially subjected to a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a third scale feature matrix d3;
Then, the third scale feature matrix d3 is processed by a third dense back projection module to obtain a third dense feature matrix r3, and the third dense feature matrix r3 is sequentially subjected to a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a fourth scale feature matrix d4;
Finally, the fourth scale feature matrix d4 is processed by a fourth dense back projection module to obtain a fourth dense feature matrix r4, and the fourth dense feature matrix r4 is sequentially subjected to a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a fifth scale feature matrix d5;
The residual block in the encoder uses Res2Net, which represents multi-scale features at a granular level and increases the receptive field of each network layer, thereby improving network performance.
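A schematic PyTorch sketch of this five-scale encoder follows; the DenseBackProjection and Res2NetBlock classes here are simplified single-input stand-ins (the full multi-scale dense back projection computation is sketched after the formulas further below), and the channel count of 32 matches the initial feature matrix described above.

```python
import torch
import torch.nn as nn

class DenseBackProjection(nn.Module):
    """Simplified stand-in for a dense back projection module (residual 3x3 convolution only)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
    def forward(self, d):
        return d + self.body(d)

class Res2NetBlock(nn.Module):
    """Simplified stand-in for a Res2Net residual block."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class Encoder(nn.Module):
    """Produces the five scale feature matrices d1..d5 from the 32-channel initial feature matrix."""
    def __init__(self, ch=32):
        super().__init__()
        self.pool = nn.MaxPool2d(2)                                    # 2x2 maximum pooling
        self.conv = nn.ModuleList([nn.Conv2d(ch, ch, 3, padding=1) for _ in range(5)])
        self.dbp = nn.ModuleList([DenseBackProjection(ch) for _ in range(4)])
        self.res2 = nn.ModuleList([Res2NetBlock(ch) for _ in range(3)])
    def forward(self, f0):
        d = [self.pool(self.conv[0](f0))]                              # d1: 3x3 conv + max pooling
        for n in range(1, 5):
            r = self.dbp[n - 1](d[-1])                                 # r_n: dense back projection of d_n
            x = self.conv[n](r)                                        # 3x3 convolution
            if n >= 2:
                x = self.res2[n - 2](x)                                # Res2Net block for scales 3-5
            d.append(self.pool(x))                                     # next scale feature matrix
        return d                                                       # [d1, d2, d3, d4, d5]

# Example: five feature matrices at 1/2, 1/4, 1/8, 1/16 and 1/32 of the input resolution.
print([f.shape for f in Encoder()(torch.rand(1, 32, 128, 128))])
```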
The topology of the decoder is as follows, as shown in fig. 6:
The decoder comprises five sub-modules, a 1×1 convolution layer and a Sigmoid activation function;
The five scale feature matrices output by the encoder are respectively input into the five sub-modules, namely the n-th scale feature matrix dn is input into the n-th sub-module; the output ends of the five sub-modules are connected to the input end of the 1×1 convolution layer, and the output of the 1×1 convolution layer is processed by the Sigmoid activation function to obtain the output of the deep learning network;
The output of the deep learning network has the same number of channels as the original hyperspectral image.
The first to third sub-modules are mainly formed by sequentially connecting a spatial residual attention block, a spectral attention block, a residual block and a deconvolution layer; the fourth and fifth sub-modules are mainly formed by sequentially connecting a residual block and a deconvolution layer.
The spatial residual attention block mainly comprises convolution operations and a residual connection. After the scale feature matrix is input into the spatial residual attention block, it is sequentially processed by two 3×3 convolution layers, one 1×1 convolution layer and a Sigmoid function to obtain a spatial attention map; the spatial attention map is then multiplied by the initially input scale feature matrix, and the product is added to the input scale feature matrix to obtain the output feature t_S ∈ R^(H×W×C) of the spatial residual attention block.
In specific implementation, within the spectral attention block the output t_S ∈ R^(H×W×C) of the spatial residual attention block first passes through a global average pooling operation to obtain a feature vector E ∈ R^(1×1×C), and the feature vector E is then subjected to a one-dimensional convolution with kernel size k to obtain a weight vector A_C ∈ R^(1×C) over all channels; the weight vector A_C is then input into the residual block and the deconvolution layer. The spectral attention block captures the correlation between the bands of the hyperspectral image using a local cross-channel interaction strategy.
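The following PyTorch sketch outlines the decoder sub-module composition described above; the ReLU activations, the 1-D convolution kernel size k = 3, and the use of ConvTranspose2d for the deconvolution layer are assumptions where the text does not fix a choice.

```python
import torch
import torch.nn as nn

class SpatialResidualAttention(nn.Module):
    """Two 3x3 convs, a 1x1 conv and a Sigmoid produce a spatial attention map; residual connection."""
    def __init__(self, ch):
        super().__init__()
        self.att = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, kernel_size=1), nn.Sigmoid())
    def forward(self, x):                                  # x: (B, C, H, W) scale feature matrix
        return x + x * self.att(x)                         # t_S = x + attention-map-weighted x

class SpectralAttention(nn.Module):
    """ECA-style block: global average pooling, a 1-D conv of kernel size k, per-channel weights A_C."""
    def __init__(self, k=3):
        super().__init__()
        self.conv1d = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2)
        self.sigmoid = nn.Sigmoid()
    def forward(self, t_s):                                # t_s: (B, C, H, W)
        e = t_s.mean(dim=(2, 3))                           # E: (B, C) from global average pooling
        a = self.sigmoid(self.conv1d(e.unsqueeze(1))).squeeze(1)   # A_C: (B, C) channel weights
        return t_s * a.unsqueeze(-1).unsqueeze(-1)         # reweight the spectral channels

class DecoderSubmodule(nn.Module):
    """Sub-modules 1-3: spatial attention -> spectral attention -> residual block -> deconvolution."""
    def __init__(self, ch, up):
        super().__init__()
        self.spatial = SpatialResidualAttention(ch)
        self.spectral = SpectralAttention()
        self.residual = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(ch, ch, 3, padding=1))
        self.deconv = nn.ConvTranspose2d(ch, ch, kernel_size=up, stride=up)
    def forward(self, d_n):
        x = self.spectral(self.spatial(d_n))
        x = x + self.residual(x)
        return self.deconv(x)                              # upsample back toward full resolution

# Example: a scale feature matrix upsampled by a factor of 2.
print(DecoderSubmodule(32, up=2)(torch.rand(1, 32, 64, 64)).shape)   # torch.Size([1, 32, 128, 128])
```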
The dense back projection module in the encoder is used for enhancing the characteristics and fusing the characteristics, and the input-output mapping relation expression of the dense back projection module is as follows:
r_n = down^(n−i)(Δr_i) + d_n
Δr_i = up^(n−i)(d_n) − r_i
r_0 = d_0
Wherein r_n represents the output of the n-th dense back projection module; d_n denotes the input feature vector of the current scale (i.e., the n-th scale feature matrix); down^(n−i) represents the downsampling operation in the dense back projection module, where the superscript indicates the number of downsampling steps; up^(n−i) denotes the upsampling operation in the dense back projection module, where the superscript denotes the number of upsampling steps; Δr_i denotes a difference feature, and its subscript i denotes the ordinal number of the difference feature.
In specific implementation, the mapping function F_n(·) of the dense back projection module is represented by the following formula (1):
r_n = F_n(d_n, {r_0, r_1, …, r_(n−1)})    (1)
Wherein r_n represents the output of the n-th (scale-n) dense back projection module; d_n represents the input feature vector of the current scale, r_0 is the initial reconstruction, and {r_0, r_1, …, r_(n−1)} represents the outputs of the dense back projection modules from scale 1 to scale n−1;
The mapping function F_n(·) first calculates the difference features Δr_i between the current scale feature d_n and all previous scales {r_0, r_1, …, r_(n−1)}. The input feature d_n is first upsampled several times to obtain features with the same scale as r_i (i = 0, 1, …, n−1), and the difference feature Δr_i between the current scale and each previous scale is then obtained by subtraction. This calculation can be expressed by the following formula:
Δr_i = up^(n−i)(d_n) − r_i    (2)
wherein up^(n−i) represents the upsampling operation in the dense back projection module, and n−i represents the number of upsampling steps;
All the difference features are then downsampled to the same scale as d_n and added to d_n to obtain the current enhanced feature r_n. This calculation can be expressed by the following formula:
r_n = down^(n−i)(Δr_i) + d_n    (3)
where down^(n−i) represents the downsampling operation in the dense back projection module and n−i represents the number of downsampling steps. The enhanced features of the dense back projection module are integrated into the downsampling process, so that missing high-resolution feature information can be compensated and the dense back projection encoder can extract the latent basic features more accurately.
The dense back projection module fuses complementary feature information between the scale n and all the previous scales, and transmits the fused features to the nth scale of the decoder so as to enrich reconstruction details.
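Under the formulas above, a minimal functional sketch of the dense back projection computation could be written as follows; bilinear interpolation is an assumed choice for the up- and down-sampling operators.

```python
import torch
import torch.nn.functional as F

def dense_back_projection(d_n, prev_r):
    """d_n: current-scale feature (B,C,H,W); prev_r: list [r_0, ..., r_{n-1}] from the earlier scales."""
    r_n = d_n
    for r_i in prev_r:
        up = F.interpolate(d_n, size=r_i.shape[-2:], mode="bilinear", align_corners=False)
        delta = up - r_i                                             # eq. (2): Δr_i = up^(n-i)(d_n) - r_i
        r_n = r_n + F.interpolate(delta, size=d_n.shape[-2:],        # eq. (3): add down^(n-i)(Δr_i) to d_n
                                  mode="bilinear", align_corners=False)
    return r_n

# Usage across scales: r_0 = d_0, then r_n = dense_back_projection(d_n, [r_0, ..., r_{n-1}]).
r0 = torch.rand(1, 32, 64, 64)                  # r_0 = d_0
d1 = torch.rand(1, 32, 32, 32)
print(dense_back_projection(d1, [r0]).shape)    # torch.Size([1, 32, 32, 32])
```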
In the deep learning network, the measurement snapshot first passes through the encoder, which compensates for the high-resolution spatial feature information lost in the downsampling process; the feature information is then transmitted to the decoder, which combines the spatial and spectral correlation feature information in the hyperspectral image, thereby achieving a high-quality reconstruction. The deep learning network is thus a dense back projection reconstruction network with joint attention. The encoder is a dense back projection encoder comprising downsampling operations and dense back projection modules. The decoder is a spatial-spectral attention-enhanced decoder comprising upsampling operations and spatial-spectral attention modules.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
1. An ultraviolet-visible band coded aperture spectral imaging system, characterized by:
The system comprises a light source (1), a front-end imaging light path (2), DMD equipment, an upper computer display control terminal (5), a rear-end coupling light path (6), a detector (7), a spectrum restoration data processing system (8) and a system black box (9); the DMD equipment, the rear coupling light path (6), the detector (7) and the spectrum restoration data processing system (8) are all arranged inside a system black box (9), a light passing hole is formed in the top of the system black box (9), the light source (1) is arranged at the position of the light passing hole of the system black box (9), and the light path direction of the light source (1) is axially parallel to the light passing hole; the concave reflecting mirror in the front-end imaging light path (2) is fixed on the light path of the light beam emitted by the light source (1), the DMD equipment is connected with the upper computer display control terminal (5), the detector (7) is arranged between the rear-end coupling light path (6) and the spectrum restoration data processing system (8), and the detector (7) is used for collecting image signals and transmitting the image signals to the spectrum restoration data processing system (8) for spectrum reconstruction.
2. An ultraviolet-visible band coded aperture spectral imaging system as defined in claim 1, wherein: the DMD equipment mainly comprises a DMD coding aperture (3) and a DMD drive control board (4) which are connected, wherein the DMD coding aperture (3) is positioned at the inclined rear of a front imaging light path (2), the DMD coding aperture (3) and the light path direction of a light source (1) are arranged at an angle of 45 degrees, the DMD drive control board (4) is connected with an upper computer display control terminal (5), and the upper computer display control terminal (5) controls the working mode of the DMD coding aperture (3) through the DMD drive control board (4), so as to control a mask image generated in the DMD coding aperture (3);
The front imaging light path (2) mainly comprises two concave reflectors, a convex reflector and three plane reflectors; the light beam emitted by the light source (1) is reflected by the first concave mirror, the convex mirror, the second concave mirror and the first plane mirror in sequence and then is incident on the DMD coding aperture (3), a mask image is generated on the DMD coding aperture (3), and the light beam reflected by the DMD coding aperture (3) is reflected by the second plane mirror and the third plane mirror in sequence and then is incident into the rear coupling light path (6).
3. An ultraviolet-visible band coded aperture spectral imaging system as defined in claim 2, wherein: the rear coupling light path (6) adopts a refraction type structure and mainly comprises two convex lenses and a positive meniscus lens, light beams emitted by the front imaging light path (2) sequentially pass through the two convex lenses and the positive meniscus lens, two-dimensional snapshots are acquired by the detector (7), and the two-dimensional snapshots are processed by the spectrum restoration data processing system (8) to reconstruct hyperspectral images;
The rear coupling light path (6) is fixed at the inclined rear part of the DMD coding aperture (3), and the rear coupling light path (6) is parallel to the light path direction of the light source (1); the detector (7) is positioned right behind the rear coupling light path (6).
4. An ultraviolet-visible band coded aperture spectral imaging system as defined in claim 1, wherein: the light source (1) adopts a halogen lamp light source; the wall surface of the system black box (9) is subjected to oxidation blackening treatment, and the system black box (9) is used for reducing interference of stray light on the system.
5. A deep learning based spectral reconstruction method applied to the system of any one of claims 1-4, comprising the steps of:
step S1, firstly, assembling an optical imaging system according to the trend of an optical path, and testing each element in the optical imaging system to ensure the normal operation of the system;
S2, starting a spectrum imaging system, irradiating a light source (1) on a detection target, enabling a DMD coding aperture (3) and a detector (7) to respectively acquire a mask image and a two-dimensional snapshot of the detection target, simultaneously acquiring a hyperspectral image of the detection target, taking the mask image, the two-dimensional snapshot and the corresponding hyperspectral image as a group of image data sets, and constructing an image training data set by utilizing a plurality of groups of image data sets;
Step S3: constructing a deep learning network in a spectrum restoration data processing system (8), inputting the image training data set in the step S2 into the constructed deep learning network for training, and acquiring the trained deep learning network after training is completed;
step S4: and (3) performing spectrum image reconstruction by using the trained deep learning network: inputting the two-dimensional snapshot of the target to be detected into a trained deep learning network, and outputting a reconstructed hyperspectral image.
6. The method for deep learning based spectral reconstruction as claimed in claim 5, wherein,
The training method of the deep learning network specifically comprises the following steps:
Firstly, inputting a two-dimensional snapshot into a deep learning network, and outputting a reconstructed spectrum image of a detection target; taking the hyperspectral image as a truth value tag of the deep learning network, and training the deep learning network according to the output reconstructed spectral image and the truth value tag to obtain a trained deep learning network;
The deep learning network is mainly formed by sequentially connecting a preprocessing module, an encoder and a decoder; the two-dimensional snapshot is input into a preprocessing module, and an initial feature matrix containing 32 channels is output; then, extracting the characteristic information of the two-dimensional snapshot by an encoder to obtain five scale characteristic matrixes; finally, the five scale feature matrices are all input into a decoder, and the reconstructed hyperspectral image is output through the decoder.
7. The deep learning based spectral reconstruction method according to claim 6, wherein:
the method for extracting the characteristic information of the two-dimensional snapshot by using the encoder comprises the following steps:
Firstly, sequentially carrying out a 3×3 convolution and a 2×2 maximum pooling operation on the initial feature matrix to obtain a first scale feature matrix d_1;
Then, the first scale feature matrix d_1 is processed by a first dense back projection module to obtain a first dense feature matrix r_1, and the first dense feature matrix r_1 is sequentially processed by a 3×3 convolution and a 2×2 maximum pooling operation to obtain a second scale feature matrix d_2;
Then, the second scale feature matrix d_2 is processed by a second dense back projection module to obtain a second dense feature matrix r_2, and the second dense feature matrix r_2 is sequentially processed by a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a third scale feature matrix d_3;
Then, the third scale feature matrix d_3 is processed by a third dense back projection module to obtain a third dense feature matrix r_3, and the third dense feature matrix r_3 is sequentially processed by a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a fourth scale feature matrix d_4;
Finally, the fourth scale feature matrix d_4 is processed by a fourth dense back projection module to obtain a fourth dense feature matrix r_4, and the fourth dense feature matrix r_4 is sequentially processed by a 3×3 convolution, a Res2Net residual block and a 2×2 maximum pooling operation to obtain a fifth scale feature matrix d_5.
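A sketch of this encoder data flow follows; the dense back projection modules and the Res2Net residual blocks are kept as identity placeholders (the dense back projection mapping is sketched after claim 10), and a constant channel width is assumed since the claim does not fix one.

```python
# Encoder data flow: conv -> pool for the first two scales, conv -> Res2Net -> pool
# for the last three, with a dense back projection (DBP) step between scales.
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.conv = nn.ModuleList([nn.Conv2d(ch, ch, 3, padding=1) for _ in range(5)])
        self.pool = nn.MaxPool2d(2)                                  # 2x2 maximum pooling
        self.dbp = nn.ModuleList([nn.Identity() for _ in range(4)])  # dense back projection (placeholder)
        self.res = nn.ModuleList([nn.Identity() for _ in range(3)])  # Res2Net residual block (placeholder)

    def forward(self, f0):
        d1 = self.pool(self.conv[0](f0))                 # first scale feature matrix d_1
        r1 = self.dbp[0](d1)
        d2 = self.pool(self.conv[1](r1))                 # d_2
        r2 = self.dbp[1](d2)
        d3 = self.pool(self.res[0](self.conv[2](r2)))    # d_3 (conv -> Res2Net -> pool)
        r3 = self.dbp[2](d3)
        d4 = self.pool(self.res[1](self.conv[3](r3)))    # d_4
        r4 = self.dbp[3](d4)
        d5 = self.pool(self.res[2](self.conv[4](r4)))    # d_5
        return d1, d2, d3, d4, d5
```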
8. The deep learning based spectral reconstruction method according to claim 6, wherein:
The topology of the decoder is as follows:
The decoder comprises five sub-modules, a 1×1 convolution layer and a Sigmoid activation function;
The five scale feature matrices output by the encoder are respectively input into the five sub-modules, the output ends of the five sub-modules are connected to the input end of the 1×1 convolution layer, and the output of the 1×1 convolution layer is processed by the Sigmoid activation function to obtain the output of the deep learning network;
The first to third sub-modules are mainly formed by sequentially connecting a spatial residual attention block, a spectral attention block, a residual block and a deconvolution layer; the fourth and fifth sub-modules are mainly formed by sequentially connecting a residual block and a deconvolution layer.
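A sketch of this decoder topology follows; the attention and residual blocks are placeholders, each branch's deconvolution is assumed to restore the full spatial resolution, and channel concatenation before the 1×1 convolution is an assumption, since the claim only states that the five sub-module outputs are connected to its input.

```python
# Decoder topology: five per-scale branches, merged (here by channel concatenation)
# through a 1x1 convolution and a Sigmoid activation. Block internals are placeholders.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, ch: int = 32, bands: int = 31):
        super().__init__()
        def branch(up_factor: int, with_attention: bool) -> nn.Sequential:
            blocks = []
            if with_attention:
                blocks += [nn.Identity(),  # spatial residual attention block (claim 9)
                           nn.Identity()]  # spectral attention block
            blocks += [nn.Identity(),      # residual block
                       nn.ConvTranspose2d(ch, ch, kernel_size=up_factor, stride=up_factor)]
            return nn.Sequential(*blocks)  # deconvolution restores full resolution

        # sub-modules 1-3 carry the attention blocks, sub-modules 4-5 do not
        self.branches = nn.ModuleList(
            [branch(2 ** i, with_attention=(i <= 3)) for i in range(1, 6)])
        self.fuse = nn.Conv2d(5 * ch, bands, kernel_size=1)   # 1x1 convolution layer
        self.act = nn.Sigmoid()                               # Sigmoid activation

    def forward(self, scales):
        outs = [b(d) for b, d in zip(self.branches, scales)]  # d_1..d_5 -> full size
        return self.act(self.fuse(torch.cat(outs, dim=1)))
```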
9. The deep learning based spectral reconstruction method according to claim 8, wherein:
The spatial residual attention block mainly comprises convolution operations and a residual connection: the scale feature matrix is input into the spatial residual attention block and is sequentially processed by two 3×3 convolution layers, one 1×1 convolution layer and a Sigmoid function to obtain a spatial attention map; the spatial attention map is then multiplied by the initially input scale feature matrix, and the multiplication result is added to the input scale feature matrix to obtain the output feature t_S ∈ R^(H×W×C) of the spatial residual attention block.
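A sketch of this block follows; a single-channel attention map broadcast over the C channels is an assumption, as the claim does not state the map's channel count.

```python
# Spatial residual attention block: two 3x3 convolutions, one 1x1 convolution and a
# Sigmoid produce the spatial attention map, which gates the input before a residual add.
import torch.nn as nn

class SpatialResidualAttention(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),   # first 3x3 convolution layer
            nn.Conv2d(ch, ch, 3, padding=1),   # second 3x3 convolution layer
            nn.Conv2d(ch, 1, 1),               # 1x1 convolution layer (assumed 1 channel)
            nn.Sigmoid(),                      # spatial attention map in [0, 1]
        )

    def forward(self, d):
        a = self.attn(d)       # spatial attention map
        return a * d + d       # multiply by the input, then residual addition -> t_S
```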
10. The deep learning based spectral reconstruction method according to claim 7, wherein: the dense back projection module in the encoder is used for feature enhancement and feature fusion, and its input-output mapping relation is expressed as follows:
r_n = down^(n-i)(Δr_i) + d_n
Δr_i = up^(n-i)(d_n) - r_i
r_0 = d_0
wherein r_n represents the output of the n-th dense back projection module; d_n denotes the input feature matrix at the current scale; down^(n-i) represents the downsampling operation in the dense back projection module, its superscript indicating the number of downsampling steps; up^(n-i) denotes the upsampling operation in the dense back projection module, its superscript indicating the number of upsampling steps; Δr_i denotes a difference feature, its subscript i denoting the ordinal number of the difference feature.
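A sketch of this mapping follows, assuming bilinear interpolation and average pooling as the up- and down-sampling operators and accumulating the back-projected differences over all earlier scales i < n; both choices are assumptions beyond the expressions above.

```python
# Dense back projection: project the current-scale feature d_n up to each earlier
# dense feature r_i, take the difference Δr_i, project it back down and add it to d_n.
import torch.nn as nn
import torch.nn.functional as F

class DenseBackProjection(nn.Module):
    def forward(self, d_n, earlier):
        """d_n: feature matrix at the current (n-th) scale.
        earlier: [r_0, ..., r_{n-1}], where r_i is 2^(n-i) times larger than d_n
        in each spatial dimension."""
        r_n = d_n
        n = len(earlier)
        for i, r_i in enumerate(earlier):
            up = F.interpolate(d_n, size=r_i.shape[-2:], mode="bilinear",
                               align_corners=False)                   # up^(n-i)(d_n)
            delta = up - r_i                                          # Δr_i
            r_n = r_n + F.avg_pool2d(delta, kernel_size=2 ** (n - i)) # down^(n-i)(Δr_i)
        return r_n
```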
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410431634.6A CN118347590A (en) | 2024-04-11 | 2024-04-11 | Ultraviolet-visible band coded aperture spectrum imaging system and reconstruction method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410431634.6A CN118347590A (en) | 2024-04-11 | 2024-04-11 | Ultraviolet-visible band coded aperture spectrum imaging system and reconstruction method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118347590A true CN118347590A (en) | 2024-07-16 |
Family
ID=91823565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410431634.6A Pending CN118347590A (en) | 2024-04-11 | 2024-04-11 | Ultraviolet-visible band coded aperture spectrum imaging system and reconstruction method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118347590A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9927300B2 (en) | Snapshot spectral imaging based on digital cameras | |
CN107343130B (en) | High dynamic imaging module based on DMD dynamic light splitting | |
US9459148B2 (en) | Snapshot spectral imaging based on digital cameras | |
CN105589210B (en) | Digital synthetic aperture imaging method based on pupil modulation | |
US20160004145A1 (en) | Pattern projection and imaging using lens arrays | |
CN108896183B (en) | Aperture coding polarization spectrum imaging device | |
CN114659634A (en) | Miniature snapshot type compressed spectrum imaging detection device and detection method | |
CN108663118B (en) | Infrared broadband hyperspectral calculation imaging device and method thereof | |
CN113790676B (en) | Three-dimensional space spectral imaging method and device based on coded aperture and light field distribution | |
CA3201859A1 (en) | Optical method | |
CN208270074U (en) | Space-time joint modulation light field spectrum imaging system | |
US7876434B2 (en) | Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy | |
CN118347590A (en) | Ultraviolet-visible band coded aperture spectrum imaging system and reconstruction method thereof | |
US20240098377A1 (en) | Imaging device and imaging method | |
US20220341781A1 (en) | Optical device and method | |
CN108024037A (en) | Hadamard matrixes perceive imaging system and its imaging method | |
CN110501069B (en) | Space-time joint modulation light field spectrum imaging system and method | |
CN112903103A (en) | Computed spectrum imaging system and method based on DMD and complementary all-pass | |
CN117848502B (en) | Aberration compensation-based coded aperture polarization spectrum imaging device and method | |
CN113520594B (en) | Assembling method of double-light-path 3D imaging module | |
CN219178730U (en) | Filter array calibrating device and spectrometer calibrating device | |
CN114674435B (en) | Double-dispersion multispectral target simulator and simulation method | |
CN115790850A (en) | High dynamic range high resolution split frame snapshot type hyperspectral imaging system | |
CN118644382A (en) | High-resolution multispectral video imaging system and multispectral mosaic restoration method | |
CN118329798A (en) | Compact snapshot hyperspectral imaging system and method based on liquid crystal spatial light modulator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||