WO2016045425A1 - Two-viewpoint stereoscopic image synthesizing method and system - Google Patents
- Publication number: WO2016045425A1
- Application: PCT/CN2015/082557
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- view
- stereoscopic image
- image
- data
- matching
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
Definitions
- The present disclosure relates to stereoscopic imaging technology, and in particular to a two-viewpoint stereoscopic image synthesis method and system.
- Vision is the most effective channel through which humans acquire information. Because the human eyes see real three-dimensional scenes in nature, reproducing true three-dimensional scenes on a screen has long been a human goal.
- Stereoscopic imaging technology developed to meet this need and can be used in scientific research, military, education, industrial, medical, and many other fields. With stereoscopic imaging, stereoscopic color images can be recorded, transmitted, and displayed, giving viewers an immersive experience.
- Spatially multiplexed stereoscopic imaging displays a stereoscopic image pair on the screen simultaneously and, by special means, presents a different image to each eye at the same time, producing a stereoscopic impression.
- A display that lets a stereoscopic image be viewed through optical surfaces alone, without glasses, is called an auto-stereoscopic display. Common optical surfaces include the lenticular plate, the parallax barrier, and the IP lens array.
- In a dual-viewpoint autostereoscopic display, a lenticular sheet or parallax barrier placed in front of the display controls the emission direction of the light from each pixel, so that the left-viewpoint image reaches only the left eye and the right-viewpoint image reaches only the right eye; stereoscopic vision then arises from binocular parallax.
- A lenticular sheet consists of a row of vertically arranged semi-cylindrical lenses; the refraction of each cylindrical lens directs two different planar images into the fields of view of the two eyes, focusing the left image on the viewer's left eye and the right image on the right eye.
- The parallax barrier is a vertical plate mounted in front of the display. For each eye it blocks part of the screen, so that the light from all left-viewpoint pixels enters the left eye's field of view and the light from all right-viewpoint pixels enters the right eye's.
- The parallax barrier thus acts like a lenticular sheet, except that it blocks part of the pixels with an opaque baffle rather than redirecting light by refraction.
- FIG. 1 is a schematic diagram of the structure of a related-art autostereoscopic display with a slit grating in front of an LCD (Liquid Crystal Display). As shown in FIG. 1, the slit grating is placed at a suitable position in front of the liquid crystal screen and blocks part of each eye's line of sight.
- The eyes view the liquid crystal screen through the slit grating. Because of its occlusion, a single eye can see only one column of pixels through a given slit: the right eye sees only the Rn pixel columns and the left eye only the Ln columns. If the Rn and Ln columns display the right-eye and left-eye images respectively, the viewer's brain fuses them into a stereoscopic image.
- Stereoscopic video is ultimately shown on the mobile screen, where the left and right eyes each view one of two views with a certain parallax, allowing the brain to recover the three-dimensional information in the views. Given the small display and battery constraints of a mobile terminal, two viewpoints are sufficient for viewing.
- The present disclosure provides a two-viewpoint stereoscopic image synthesis method and system that overcome the image-mutation problem, strengthen matching stability, and improve noise resistance, so that high-quality stereoscopic images can be displayed.
- A two-viewpoint stereoscopic image synthesis method includes: acquiring a dual-viewpoint image through a dual Mobile Industry Processor Interface (MIPI); matching and synthesizing the acquired dual-viewpoint image to generate two-viewpoint stereoscopic image data; and realizing naked-eye display of the two-viewpoint stereoscopic image on a slit-grating front-mounted LED stereoscopic display.
- Optionally, after the dual-viewpoint image is acquired, the method further includes performing video driver processing on the acquired dual-viewpoint image.
- Optionally, matching and synthesizing the acquired dual-viewpoint image includes: extracting the data of each frame of the left and right views with two preview threads executed in parallel, and then performing the matching and synthesis.
- Optionally, the matching and synthesis include:
- registering a dedicated buffer in memory for the camera preview display; acquiring the left and right single-frame video data from the acquired dual-viewpoint image and storing each in its dedicated buffer;
- synchronizing the obtained left and right single-frame data using the frame-data timestamps; converting the acquired YUV-format data to RGB; performing image smoothing and size conversion on the format-converted left and right views; extracting and matching the features of the processed views with the Scale-Invariant Feature Transform (SIFT) matching algorithm;
- arranging the left- and right-view pixels in a preset pixel arrangement to generate one frame of stereoscopic image data displayable under the grating.
- Optionally, the image smoothing is implemented with a Gaussian low-pass filter.
- Optionally, after the SIFT features of the left and right views are extracted and matched, the method further includes removing mismatched feature points from the matched features with the Random Sample Consensus (RANSAC) algorithm.
- Optionally, the pixel arrangement works in units of vertical columns: the first column of the composite image holds the first column of the left view, the second column holds the first column of the right view, the third column holds the second column of the left view, the fourth column holds the second column of the right view, and so on, until every left- and right-view pixel column has been placed in the composite image.
- Optionally, before the stereoscopic image data displayable under the grating is generated, the method further includes verifying whether the left and right views are occluded and, if so, repairing the occluded region.
- Optionally, repairing the occluded region includes correcting the occluded region in one view with the gray values of the corresponding pixels in the other view.
- Optionally, the method further includes detecting noise in the occlusion-repaired left and right views with a median filter, labeling the noise points, and repairing the points confirmed to be noise.
- Optionally, the noise repair includes correcting the gray value of each pixel within a confirmed noise point with the gray values of the corresponding pixels of the other viewpoint.
- A two-viewpoint stereoscopic image synthesis system comprises an acquisition unit, a processing unit, and a display unit, wherein:
- the acquisition unit is configured to: acquire a dual viewpoint image by using a dual MIPI interface;
- the processing unit is configured to: perform matching and synthesizing processing on the collected two-viewpoint image, generate two-viewpoint stereoscopic image data, and output the data to the display unit;
- the display unit is configured to realize naked-eye display of the two-view stereoscopic image through the slit grating front LED stereoscopic display.
- Optionally, the acquisition unit includes two rear cameras with MIPI interfaces, configured to collect the left and right views respectively;
- the two cameras are mounted on different I2C buses, exchange data with the memory and central processor over independent data lines, and use the timestamp of each frame for frame synchronization.
- Optionally, the cameras are OmniVision OV5640 chips.
- Optionally, the acquisition unit further includes a camera driver module configured to drive the two collected views and output them to the processing unit.
- Optionally, the camera driver module is implemented with the V4L2 video driver framework.
- the processing unit includes a preprocessing module, an extraction module, a matching module, and a synthesizing module;
- the preprocessing module is configured to: register a dedicated buffer in memory for the camera preview display; acquire the left and right single-frame video data and store each in its dedicated buffer; synchronize them in software using the frame-data timestamp function; convert the acquired YUV-format data to RGB; and perform image smoothing and size conversion on the format-converted left and right views before outputting them to the extraction module;
- the extraction module is configured to: extract the features of the preprocessed left and right views with the SIFT feature matching algorithm and generate 32-dimensional SIFT feature descriptors;
- the matching module is configured to: match the extracted left- and right-view features with the SIFT algorithm, take the point with the smallest Euclidean distance in the right view as the match of the current left-view SIFT keypoint, and record the coordinates of each matched point pair;
- the synthesis module is configured to: arrange the left- and right-view pixels in the preset pixel arrangement and generate one frame of stereoscopic image data displayable under the grating.
- Optionally, the processing unit further includes a culling module configured to remove mismatched feature points with the RANSAC algorithm and estimate the left/right-view pixel-coordinate mapping model.
- Optionally, the processing unit further includes an occlusion repair module configured to: when the estimated left/right-view pixel-coordinate mapping model indicates an occluded region, correct the region occluded by a foreign object in one view with the gray values of the corresponding pixels of the other view, thereby repairing the occlusion.
- Optionally, the processing unit further includes a noise repair module configured to detect noise in the left and right views with a median filter, label the noise points, and output them to the synthesis module.
- A computer-readable storage medium stores computer-executable instructions for performing any of the methods above.
- Embodiments of the present invention acquire a dual-viewpoint image through a dual MIPI interface, match and synthesize the acquired image to generate two-viewpoint stereoscopic image data, and realize naked-eye display of the two-viewpoint stereoscopic image on a slit-grating front-mounted LED stereoscopic display.
- Because the SIFT feature matching algorithm is used, matching the extracted left- and right-view features copes more effectively with the abrupt image changes, such as sudden focal-length changes, frequently encountered in mobile-terminal shooting, strengthening matching stability and noise resistance.
- This addresses the generally low quality of stereoscopic material in the related art.
- Even when a camera of the mobile terminal is occluded or the captured stereo material is noisy, high-quality stereoscopic images can still be acquired, synthesized, and displayed at a relatively high speed.
- FIG. 1 is a schematic structural diagram of a related-art autostereoscopic display with a slit grating mounted in front of an LCD;
- FIG. 2 is a flowchart of a method for synthesizing two-view stereoscopic images according to an embodiment of the present invention
- FIG. 3 is a schematic diagram of arrangement of pixels in left and right views according to an embodiment of the present invention.
- FIG. 4 is a schematic structural diagram of a two-view stereoscopic image synthesizing system according to an embodiment of the present invention.
- Stereoscopic video exploits the binocular parallax of the human eye: the two eyes independently receive left and right images of the same scene taken from specific imaging points to obtain a stereoscopic effect, so the amount of data to be processed is a multiple of that of conventional single-channel video.
- To overcome the high power consumption and electromagnetic interference (EMI) noise that parallel data buses cause at high data rates while presenting images and multimedia of the same quality, a bus with larger bandwidth and higher transmission rate is needed. Portable multimedia devices such as mobile phones, portable media players (PMPs), portable digital versatile disc (DVD) players, and digital still cameras (DSCs) all face this problem.
- Related interface technologies include: MVI, a video format for camera output; MPL (Mobile Pixel Link), a National Semiconductor interface; WhisperBus, a physical-layer interface; and CDMA, code-division multiple access.
- MIPI (Mobile Industry Processor Interface) defines, among others, the CSI serial camera interface, the DSI serial display interface, and the D-PHY digital physical layer.
- The MIPI specification defines the physical connections, protocol processing, and upper-layer applications in detail.
- D-PHY adopts 1.2 V source-synchronous scalable low-voltage signaling with 200 mV differential signal pairs; it supports up to four lanes, each at rates up to 1 Gbit/s.
- For portable device applications, the DSI specification also defines support for resolutions up to Extended Graphics Array (XGA).
- The CSI interface specification, in addition to raw image data formats such as the three-primary-color space (RGB), Bayer (an image format), and the luminance-chrominance color space (YUV), supports user-defined data types and compressed data formats.
- The DSI interface is applied in essentially the same way as the CSI. MIPI has split the application interfaces into sub-specifications for different application requirements, so in mobile handheld electronic products the MIPI interface specification can support data transmission at essentially any speed and resolution.
- The MIPI standard itself, however, does not address stereo acquisition.
- Some terminal CPU chips reserve two MIPI interfaces for image acquisition, for example Texas Instruments' OMAP 4 and OMAP 5 series chips.
- The usual application of the dual MIPI interface is front/rear dual-camera acquisition, which cannot meet the high real-time and synchronization requirements of stereo acquisition.
- FIG. 2 is a flowchart of a method for synthesizing a two-view stereoscopic image according to an embodiment of the present invention. As shown in FIG. 2, the method includes:
- Step 200: Acquire a dual-viewpoint image using a dual MIPI interface.
- This step is implemented by synchronous shooting from two capture devices (such as cameras) at different positions and angles to obtain a dual-viewpoint image containing the image data of the left and right views. The dual MIPI rear cameras satisfy the high bandwidth, tight synchronization, low power consumption, and low noise that stereo acquisition requires.
- The OMAP4-series processor chip provides two ARM Cortex-A9 cores clocked at 1.2 GHz, one TMS320C64+ DSP, two ARM Cortex-M3 cores, 1 GB of DRAM, two MIPI serial camera data interfaces (CSI-2_1 and CSI-2_2), and a direct memory access (DMA) controller, among other modules. It can meet the requirement of acquiring, synthesizing, and displaying stereoscopic images quickly and with high quality.
- Using the OMAP4-series processor chip is a conventional technique for those skilled in the art and is not described further here.
- In the embodiment of the present invention, the two cameras have nearly identical optical, geometric, and imaging characteristics and are aligned in the horizontal direction; the spacing between them may be 35 mm.
- The two cameras are mounted on different I2C buses, exchange data with the memory and central processor over independent data lines, and use the timestamp of each image frame for frame synchronization.
- The DMA controller and the central processing unit (CPU) access the memory alternately. The DMA controller transfers image data directly from the camera to memory without CPU involvement: the hardware opens a direct data path between the memory and the input/output device, which greatly improves CPU efficiency.
- The camera chip in the embodiment may be OmniVision's OV5640, whose interface is an extension of the MIPI interface.
- The camera control interface (CCI) defined in the MIPI interface is consistent with the I2C standard and has two lines, the serial clock line (SCL) and the serial data line (SDA); SCL is a unidirectional control clock signal line and SDA is a bidirectional control line.
- The RESET, SHUTTER, and STROBE lines, the clock bus, and the power/ground in the camera expansion interface can be shared by the two cameras, so no special handling is needed and the two cameras can simply be connected in parallel. How to connect the OV5640 camera chips to the expansion interface of the OMAP4460 central processor is a conventional technique for those skilled in the art and is not described further here.
- V4L2 is a two-layer driver system.
- The upper layer is the Video Device module, a character device registered through the device registration functions.
- The lower layer is the V4L2 driver itself. The V4L2 driver and its device nodes are registered through the registration function; once a device node is opened, operations on the device file are dispatched to the V4L2-compliant interfaces defined in the v4l2_ioctl_ops structure, through which the V4L2 driver module calls the device driver.
- The same driver framework and flow are used for both cameras.
- The main functions of the V4L2 driver framework are timing management of the video data and memory management of the data buffers; it controls the hardware and acquires image data over the I2C bus, the Peripheral Component Interconnect (PCI) interface, and similar channels.
- The initialization function called when the camera driver module starts completes a series of tasks, including powering the hardware, setting the bus controller's MIPI clock, initializing the I2C bus port and the MIPI data port, and detecting and binding the video devices.
- Two video frame buffer queues located in memory are managed inside the camera driver module, one as the input buffer queue and the other as the output buffer queue.
- Once a buffer in the input queue has been filled with image data, it is automatically moved to the output queue, where it waits for the video driver to issue the dequeue command (VIDIOC_DQBUF) and pass the data to the upper layer; after processing, the enqueue command (VIDIOC_QBUF) puts the buffer back into the input queue.
- The video-stream image acquisition steps are generally: open the video device file; obtain the device's capability list and detect the supported video formats; set the capture format and frame size; request several frame buffers from memory as the input and output queues (queueing multiple buffers improves capture efficiency); obtain the information of each buffer and map it into the address space of the upper system; start capturing the video stream; take a sampled frame out of the buffer at the head of the output queue and pass its data to the upper layer for processing; put the just-processed frame buffer back at the tail of the input queue for cyclic acquisition; stop the video stream; and close the video device.
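The ioctl-level loop above is normally written in C against the V4L2 API; purely as an illustration of the same open/queue/dequeue/requeue cycle, here is a minimal Python sketch using OpenCV's V4L2 backend. The device indices, resolution, and frame count are assumptions for illustration, not values from the original design:

```python
import cv2

LEFT_DEV, RIGHT_DEV = 0, 1  # hypothetical /dev/video0 and /dev/video1 nodes

def open_camera(index):
    cap = cv2.VideoCapture(index, cv2.CAP_V4L2)  # open the video device file
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)       # set the capture frame size
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    return cap

left, right = open_camera(LEFT_DEV), open_camera(RIGHT_DEV)
for _ in range(100):                 # capture a bounded number of frames
    # grab() latches a frame on each device as close together in time as
    # possible; retrieve() then dequeues and decodes it (the DQBUF step).
    if not (left.grab() and right.grab()):
        break
    ok_l, frame_l = left.retrieve()
    ok_r, frame_r = right.retrieve()
    if ok_l and ok_r:
        t_l = left.get(cv2.CAP_PROP_POS_MSEC)   # per-frame timestamps,
        t_r = right.get(cv2.CAP_PROP_POS_MSEC)  # usable for frame sync
        # ... hand frame_l / frame_r to the matching and synthesis stage ...
left.release()
right.release()
```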
- In step 200, based on full customization of the Android camera subsystem and with a Texas Instruments OMAP4-series central processor as the hardware platform core, real-time synchronous image acquisition by the two cameras is realized over dual MIPI serial data links.
- This acquisition approach has the advantages of little wiring, high speed, large data throughput, low power consumption, strong interference resistance, and strong adaptability.
- Step 201: Match and synthesize the acquired two-viewpoint image to generate two-viewpoint stereoscopic image data.
- In this step, two preview threads (PreviewThread) executed in parallel use the function interface provided by the V4L2 framework (if video driving has been performed) to extract the data of each frame of the left and right views of the dual-viewpoint image acquired in step 200 from the system kernel driver layer; the matching and synthesis then generally include:
- registering a dedicated buffer in memory for the camera preview display; acquiring the left and right single-frame video data from the acquired dual-viewpoint image and storing each in its dedicated buffer;
- synchronizing, format-converting, and preprocessing the single-frame data and matching the view features, as detailed below;
- arranging the left- and right-view pixels in the preset pixel arrangement to generate one frame of stereoscopic image data displayable under the grating.
- The method further includes repair, denoising, and similar processing; finally, the synthesized image data is delivered to the Android display subsystem (Surface) system library and shown on the application-layer interface.
- The Android Camera architecture follows the layered structure of the Android system itself; the corresponding levels are the application layer (Camera App), the application framework layer (Camera Service), the hardware abstraction layer (Camera HAL), and the kernel driver layer (Camera Driver).
- The whole camera subsystem is actually divided into two processes: the client and the server.
- Matching and synthesizing the acquired dual-viewpoint images includes:
- Registering the preview display buffer: a dedicated buffer for the camera preview display is registered in memory in the Android display (Surface) system library, and the image data type is specified.
- Acquiring the single-frame raw image data, i.e., the data of each frame of the left and right views of the dual-viewpoint image acquired in step 200: the V4L2 interface functions in the Linux kernel are called in the hardware abstraction layer library to obtain the left and right single-frame video data, which is stored in the dedicated buffers; at the same time, the software is synchronized using the frame-data timestamp function provided by the Android camera subsystem.
- Converting the acquired YUV-format data to RGB: the YUV color space is a color-coding method used in European television systems, where Y stands for luminance (gray level) and U and V stand for the color differences (R−Y) and (B−Y).
- The data collected by the camera is a pixel-information matrix in YUV format, while the stereoscopic image must be in RGB format for synthesis and display, so the raw data type must be converted to match the type registered for the preview display buffer.
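As a concrete illustration of this conversion (the exact coefficients depend on which YUV variant the sensor emits; the common BT.601 full-range matrix is assumed here, and unpacking of the camera's packed YUV422 buffer is omitted):

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """BT.601 YUV (Y: luminance; U, V: color differences offset by 128) -> RGB."""
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```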
- Preprocessing the images: image smoothing and size conversion are performed on the obtained left and right views.
- The smoothing can be implemented with a Gaussian low-pass filter, which effectively avoids the ringing effect and removes noise well; the size conversion adjusts the image size to actual needs while balancing search quality against processing speed. How to implement both is a conventional technique for those skilled in the art and is not described further here.
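A minimal sketch of this preprocessing step, assuming OpenCV; the kernel size, sigma, and target size are illustrative choices, not values from the text:

```python
import cv2

def preprocess(view, target_size=(640, 480)):  # target size is an assumption
    # A Gaussian low-pass filter smooths sensor noise without the ringing
    # that an ideal (sharp-cutoff) low-pass filter would introduce.
    smoothed = cv2.GaussianBlur(view, ksize=(5, 5), sigmaX=1.0)
    # Size conversion trades search quality against processing speed.
    return cv2.resize(smoothed, target_size, interpolation=cv2.INTER_AREA)
```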
- The features of the preprocessed left and right views are then extracted and matched, for example with the SIFT feature matching algorithm.
- The Scale-Invariant Feature Transform (SIFT) is a local feature descriptor proposed by David Lowe in 1999 and further developed and improved in 2004.
- The SIFT matching algorithm copes with matching problems between two images such as viewing-angle change, occlusion, brightness change, rotation, noise, and scale conversion, and has strong matching ability.
- The SIFT algorithm consists of two parts, interest-point detection and feature-description generation; the generated SIFT operator is a local feature descriptor that describes the gray-gradient distribution of the image's region of interest.
- SIFT is widely applied in image matching and target detection, and its target-positioning accuracy is very high. The extraction and matching include:
- Extracting the features of the preprocessed left and right views: first, a difference-of-Gaussians (DoG) scale space is constructed; second, each pixel is compared against its neighborhood in image space and in DoG scale space to search for extreme points, giving the initial positions of the feature points; then the position and scale of each keypoint are determined precisely (to sub-pixel accuracy) by fitting a three-dimensional quadratic function, while low-contrast keypoints and unstable edge responses are removed to strengthen matching stability and noise resistance; finally, a direction parameter is assigned to each keypoint from the gradient-direction distribution of its neighboring pixels, giving the operator rotation invariance, and a 32-dimensional SIFT feature descriptor is generated.
- Matching the extracted left- and right-view features: first, a transformation model is assumed for the transformation between the left and right viewpoints; then, using the position, scale, and rotation information of each feature point, the Euclidean distance between key feature vectors serves as the similarity measure between keypoints of the two images, and the transformation parameters of each matched pair are computed under the assumed model; finally, the point with the smallest Euclidean distance in the right view is taken as the match of the current left-view SIFT keypoint, and the coordinates of the matched pair are recorded.
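A sketch of the extract-and-match step using OpenCV's SIFT. Note one deviation: OpenCV produces the standard 128-dimensional descriptor rather than the 32-dimensional variant described here; the nearest-Euclidean-distance matching is the same:

```python
import cv2

def match_views(left_gray, right_gray):
    """Return matched (left_xy, right_xy) coordinate pairs for two grayscale views."""
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left_gray, None)
    kp_r, des_r = sift.detectAndCompute(right_gray, None)
    # The right-view point with the smallest Euclidean (L2) distance is
    # taken as the match of each left-view keypoint, and the coordinates
    # of each matched pair are recorded.
    matches = cv2.BFMatcher(cv2.NORM_L2).match(des_l, des_r)
    return [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches]
```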
- The SIFT matching algorithm, being scale-invariant and rotation-invariant, is used to match the left- and right-view material; the positions of the feature points are used to estimate the transformation model under the least-squares criterion, matched pairs that do not fit the model are discarded, and the model is then re-estimated from the remaining pairs, again under the least-squares criterion.
- Matching the extracted left- and right-view features in this way copes effectively with the abrupt image changes, such as sudden focal-length changes, frequently encountered when shooting with a mobile terminal.
- The embodiment of the present invention may further include rejecting mismatched feature points.
- A Random Sample Consensus (RANSAC) algorithm may be used, with the positions of the matched feature points as parameters, to estimate the mapping relationship between the two pictures.
- Once that relationship is estimated accurately, the SIFT matches are filtered against it, removing the mismatched points.
- RANSAC is a robust estimation method proposed by Fischler and Bolles.
- Estimating the coordinate-mapping relationship includes: randomly selecting a subset of data points from the matched-pair set S and initializing the model from that subset; finding the set of support points Si of the current model according to a threshold Td, where Si is the sample's consensus set, defined as the valid points; if the size of Si exceeds a specified threshold T, re-estimating the model with Si and terminating; if the size of Si is below the threshold Ts, selecting a new sample and repeating the steps above. After N attempts, the largest consensus set Si is selected and used to re-estimate the model to obtain the final result. Only the correct matched pairs that fit the coordinate-mapping model, i.e., the points in Si, are retained, and the resulting left/right-view pixel-coordinate mapping model is saved as the reference information of the frame.
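A sketch of this RANSAC filtering step; a planar homography stands in for the unspecified left/right pixel-coordinate mapping model, and the reprojection threshold plays the role of Td:

```python
import cv2
import numpy as np

def filter_matches(pairs, reproj_thresh=3.0):
    """Reject mismatched pairs with RANSAC; return (model, inlier pairs)."""
    src = np.float32([p for p, _ in pairs])
    dst = np.float32([q for _, q in pairs])
    # findHomography runs the sample/consensus loop internally and marks
    # the consensus set Si in the returned inlier mask.
    model, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if model is None:
        return None, []
    inliers = [pair for pair, ok in zip(pairs, mask.ravel()) if ok]
    return model, inliers  # the model is saved as the frame's reference info
```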
- The matching and synthesis of the acquired dual-viewpoint images runs cyclically, and only the reference information of the four most recently acquired frames is retained in a queue: each time a frame of stereo material is acquired, the information at the head of the queue is deleted and the newest frame's reference information is stored at the tail.
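This four-frame reference queue behaves like a bounded FIFO; a minimal sketch (the model object itself is whatever the RANSAC step produced):

```python
from collections import deque

# A full deque of maxlen 4 drops its oldest entry (the queue head) each
# time a new frame's reference information is appended at the tail.
model_queue = deque(maxlen=4)

def on_new_frame(model):
    model_queue.append(model)
```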
- The left- and right-view pixels are then arranged in the specific pixel arrangement, generating stereoscopic image data displayable under the grating and completing the image synthesis.
- The specific arrangement works in units of vertical columns: the first column of the composite view holds the first column of the left view and the second column holds the first column of the right view; the third column holds the second column of the left view, the fourth column the second column of the right view, and so on, until all left- and right-view pixel columns have been placed in the composite image.
- A particular feature of this arrangement is that the composite image has twice as many horizontal pixels as either original image, as the sketch below illustrates.
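A sketch of the column-interleaving synthesis, assuming the two views are equal-sized arrays:

```python
import numpy as np

def interleave_columns(left, right):
    """Column-interleave two equal-sized views for the slit-grating display."""
    h, w = left.shape[:2]
    out = np.empty((h, 2 * w) + left.shape[2:], dtype=left.dtype)
    out[:, 0::2] = left   # odd composite columns (1st, 3rd, ...) <- left view
    out[:, 1::2] = right  # even composite columns (2nd, 4th, ...) <- right view
    return out
```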
- The average pixel-coordinate mapping model is computed from the first three frames of the current model queue and used to verify whether the left and right views are occluded.
- Each of the left and right views is divided into 8 blocks, the average gray value of each block is computed, and the averages of corresponding blocks in the two views are compared. If the relative difference of a block's average gray value exceeds 10%, the block with the lower gray value is treated as a damaged block. If the number of damaged blocks is 0, there is no occlusion and processing moves on to noise handling.
- When damaged blocks exist, this step further includes repairing the occluded area, i.e., correcting the region occluded by a foreign object in one view with the gray values of the corresponding pixels of the other view. This ensures that left- and right-view material of relatively high quality can still be obtained even when a camera of the mobile terminal is partially blocked, addressing the generally low quality of stereo material and the low synthesis efficiency in the related art.
- The damage repair includes: determining the damaged area precisely, using the Sobel operator to detect gray-level step edges inside the damaged block; using the coordinate-mapping model of the current scene's left and right views to determine the coordinates of the damaged area in the other view; replacing the image content of the current damaged area with the image content of that area in the other view; and repairing the edges: for each step edge detected by the Sobel operator, the mean of the gray values of the corresponding left- and right-view pixels corrects each pixel within a 3×3 area.
- The view block with the lower gray value is marked as damaged. Since the left and right views have already been matched, the block with the higher gray value (in the other view mentioned above) can replace the lower-gray block (the area of the first view covered by the foreign object) to repair the damage.
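A sketch of the block-wise occlusion check described above, assuming grayscale inputs and an eight-block split arranged as a 2×4 grid (the text does not fix the grid shape); the darker block of a mismatched pair is flagged as damaged:

```python
import numpy as np

def find_damaged_blocks(left_gray, right_gray, rows=2, cols=4, rel=0.10):
    """Return (row, col, view) for blocks whose mean gray levels differ >10%."""
    damaged = []
    h, w = left_gray.shape
    bh, bw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            win = np.s_[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            ml, mr = left_gray[win].mean(), right_gray[win].mean()
            if abs(ml - mr) / max(ml, mr, 1e-6) > rel:
                # The block with the lower mean gray value is treated as
                # occluded and will be repaired from the other view.
                damaged.append((r, c, 'left' if ml < mr else 'right'))
    return damaged
```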
- This step further includes detecting noise in the left and right views with a median filter and labeling the noise points: within the N×N neighborhood of the current point (N odd), the maximum, minimum, and mean gray values are taken; if the current point's gray value is the neighborhood maximum or minimum and deviates beyond a preset threshold, it may be noise and is marked as suspicious. The threshold is an empirical value; the average gray value of the whole image can generally be used.
- The coordinate-mapping model then determines the location of the suspicious point in the other view, the current point is compared at that position, and the gray-level comparison is repeated to decide whether the point really is noise.
- For confirmed noise points, the gray values of the corresponding pixels of the other viewpoint may be used to correct each pixel within the 3×3 neighborhood of the noise point.
- The noise repair in this embodiment is simpler and less computationally intensive, its denoising effect is more significant, and the synthesized stereoscopic image is less affected.
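One possible reading of the suspicious-point test, sketched in Python; the neighbourhood size N and the deviation test are interpretations of the text, and the threshold defaults to the whole image's mean gray value as suggested:

```python
import numpy as np

def suspicious_points(gray, n=3, thresh=None):
    """Mark pixels that are NxN-neighbourhood extrema deviating past a threshold."""
    if thresh is None:
        thresh = gray.mean()            # empirical default from the text
    pad = n // 2
    g = np.pad(gray.astype(np.float32), pad, mode='edge')
    mask = np.zeros(gray.shape, dtype=bool)
    for y in range(gray.shape[0]):
        for x in range(gray.shape[1]):
            nb = g[y:y + n, x:x + n]
            v = g[y + pad, x + pad]
            # Suspicious: neighbourhood max or min, and far from the mean.
            if (v == nb.max() or v == nb.min()) and abs(v - nb.mean()) > thresh:
                mask[y, x] = True
    return mask
```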
- Step 202: Display the two-viewpoint stereoscopic image to the naked eye on a slit-grating front-mounted LED (Light Emitting Diode) stereoscopic display.
- The hardware abstraction layer system library sends a preview message to the server of the camera subsystem through a callback function, and the server calls the ISurface system library to fill the preview display buffer with data.
- Taking the Android operating system as an example, this involves two Camera object instances and their associated Surface preview controls in the application layer, with the Android.Hardware.Camera class in the application framework layer providing the interface to the upper-layer application.
- The technical solution proposed by the embodiments of the present invention processes a small amount of data, has low hardware cost, and is easy to manufacture.
- The display conditions are: the number of parallax images (viewpoints) is K, and the 2D display sub-pixel width is Wp.
- The viewing conditions are: the optimal viewing distance is L, and the viewpoint spacing of adjacent parallax images is Q, which may be equal to or smaller than the pupil spacing E of the two eyes.
- The slit-grating parameters are: the grating pitch is Ws, where the widths of the light-transmitting and light-blocking strips are Ww and Wb respectively, and the distance between the 2D display screen and the slit grating is D.
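These parameters are tied together by the standard similar-triangle geometry of a front-mounted slit grating; the following relations are implied by the setup rather than stated explicitly in the text:

```latex
% Adjacent sub-pixels seen through one slit map to adjacent viewpoints:
\frac{Q}{W_p} = \frac{L}{D} \quad\Longrightarrow\quad D = \frac{L\,W_p}{Q}

% One slit pitch covers a group of K sub-pixels as seen from distance L:
\frac{W_s}{K\,W_p} = \frac{L}{L + D} \quad\Longrightarrow\quad
W_s = \frac{K\,W_p\,L}{L + D}, \qquad W_s = W_w + W_b
```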
- According to its thread-loop mechanism, the hardware abstraction layer system library continuously repeats the matching and synthesis of the acquired dual-viewpoint images: the image data collected by the two camera hardware devices is matched and synthesized frame by frame, sent to the preview display buffer, and finally shown on the application interface.
- Naked-eye display of the two-viewpoint stereoscopic image can be realized by the steps above. For the video recording function, after the stereoscopic image synthesis of step 201 the data format is converted back into YUV for storage, the image data is passed to the Android video recorder subsystem (VideoRecorder) for encoding, and step 202 is then performed.
- FIG. 4 is a schematic structural diagram of a two-viewpoint stereoscopic image synthesis system according to an embodiment of the present invention. As shown in FIG. 4, it includes at least an acquisition unit 410, a processing unit 420, and a display unit 430.
- The acquisition unit 410 is configured to acquire the dual-viewpoint image through the dual MIPI interface.
- The acquisition unit 410 includes two rear cameras with MIPI interfaces, the first camera 411 and the second camera 412 in FIG. 4, configured to collect the left and right views respectively.
- The two cameras have nearly identical optical, geometric, and imaging characteristics and are aligned in the horizontal direction; the spacing between them may be 35 mm.
- The two cameras are mounted on different I2C buses, exchange data with the memory and central processor over independent data lines, and use the timestamp of each frame for frame synchronization.
- The camera chip may be OmniVision's OV5640.
- The acquisition unit 410 further includes a camera driver module 413 configured to drive the two collected views and output them to the processing unit 420.
- The camera driver module 413 can be implemented with the V4L2 video driver framework.
- the processing unit 420 is configured to perform matching and combining processing on the collected two-viewpoint image, generate two-viewpoint stereoscopic image data, and output the data to the display unit 430.
- The processing unit 420 can be implemented with Texas Instruments' OMAP4 family of processor chips.
- the processing unit includes a pre-processing module 421, an extraction module 422, a matching module 423, and a synthesizing module 424.
- The preprocessing module 421 is configured to: register a dedicated buffer in memory for the camera preview display; acquire the left and right single-frame video data and store each in its dedicated buffer; synchronize them in software using the frame-data timestamp function; convert the acquired YUV-format data to RGB; and perform image smoothing and size conversion on the format-converted left and right views before outputting them to the extraction module 422.
- the extracting module 422 is configured to: extract the features of the pre-processed left and right views by using a SIFT feature matching algorithm, and generate a 32-dimensional SIFT feature descriptor.
- The matching module 423 is configured to: match the extracted left- and right-view features with the SIFT algorithm, take the point with the smallest Euclidean distance in the right view as the match of the current left-view SIFT keypoint, and record the coordinates of each matched point pair.
- the synthesizing module 424 is configured to: arrange the left and right view pixel points by a specific pixel arrangement manner set in advance, and generate a frame of stereoscopic image data that can be displayed under the raster.
- The processing unit 420 further includes a culling module 425, configured to remove mismatched feature points with the RANSAC algorithm and estimate the left/right-view pixel-coordinate mapping model.
- The processing unit 420 further includes an occlusion repair module 426, configured to: when the estimated left/right-view pixel-coordinate mapping model indicates an occluded region, correct the occluded region of one view with the gray values of the corresponding pixels of the other view, thereby repairing the occlusion.
- The processing unit 420 further includes a noise repair module 427, configured to detect noise in the left and right views with a median filter, label the noise points, and output them to the synthesis module 424.
- the display unit 430 is configured to realize naked-eye display of the two-view stereoscopic image through the slit grating front LED stereoscopic display.
- All or part of the steps of the above embodiments may also be implemented with integrated circuits: the steps may be fabricated as individual integrated-circuit modules, or multiple modules or steps may be fabricated as a single integrated-circuit module.
- the devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, which may be centralized on a single computing device or distributed over a network of multiple computing devices.
- When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium.
- The computer-readable storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
- Because the SIFT feature matching algorithm is used, matching the extracted left- and right-view features copes more effectively with the abrupt image changes, such as sudden focal-length changes, frequently encountered in mobile-terminal shooting, strengthening matching stability and noise resistance. This solves the related-art problems of generally low stereo-material quality and low synthesis efficiency.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
A two-viewpoint stereoscopic image synthesizing method and system, comprising: acquiring two viewpoint images via two MIPI interfaces; matching and synthesizing the acquired viewpoint images to generate two-viewpoint stereoscopic image data; and displaying a two-viewpoint stereoscopic image to the naked eye via a slit-grating front-mounted LED stereoscopic display.
Description
The present disclosure relates to stereoscopic imaging technology, and in particular to a two-viewpoint stereoscopic image synthesis method and system.

Vision is the most effective channel through which humans acquire information. Because the human eyes see real three-dimensional scenes in nature, reproducing true three-dimensional scenes on a screen has long been a human goal. Stereoscopic imaging technology developed step by step to meet this need and can be used in scientific research, military, education, industrial, medical, and many other fields. With stereoscopic imaging, stereoscopic color images can be recorded, transmitted, and displayed, giving viewers an immersive experience.

Spatially multiplexed stereoscopic imaging displays a stereoscopic image pair on the screen simultaneously and, by special means, presents a different image to each eye at the same time, producing a stereoscopic impression. Optically, a display that lets a stereoscopic image be viewed through optical surfaces alone, without glasses, is called a dual-viewpoint auto-stereoscopic display. Common optical surfaces include the lenticular plate, the parallax barrier, and the IP lens array. In a dual-viewpoint autostereoscopic display, a lenticular sheet or parallax barrier is generally added in front of a CRT (Cathode Ray Tube) display or flat panel display to control the emission direction of the light from each pixel, so that the left-viewpoint image reaches only the left eye and the right-viewpoint image reaches only the right eye; stereoscopic vision then arises from binocular parallax.

A lenticular sheet consists of a row of vertically arranged semi-cylindrical lenses. The refraction of each cylindrical lens directs two different planar images into the fields of view of the two eyes, focusing the left-eye image on the viewer's left eye and the right-eye image on the right eye, thereby producing stereoscopic vision.

The parallax barrier is a vertical plate mounted in front of the display. For each eye it blocks part of the screen, so that the light from all left-viewpoint pixels enters the left eye's field of view and the light from all right-viewpoint pixels enters the right eye's. The parallax barrier acts like a lenticular sheet, except that it blocks part of the pixels with a baffle rather than redirecting light by refraction.

FIG. 1 is a schematic diagram of the structure of a related-art autostereoscopic display with a slit grating in front of an LCD (Liquid Crystal Display). As shown in FIG. 1, the slit grating is placed at a suitable position in front of the liquid crystal screen and blocks part of each eye's line of sight. The eyes view the liquid crystal screen through the slit grating; because of its occlusion, a single eye can see only one column of pixels through a given slit. For example, the right eye sees only the Rn pixel columns and the left eye only the Ln columns. If the Rn and Ln columns display the right-eye and left-eye images respectively, the viewer's brain fuses them into a stereoscopic image.

Stereoscopic video is ultimately shown on the mobile screen, where the left and right eyes each view one of two views with a certain parallax, allowing the brain to recover the three-dimensional information in the views. Given the small display and battery drive of a mobile terminal, two viewpoints are sufficient for viewing.

In the related art there is a scheme that performs stereoscopic image acquisition and display with a single camera: a single frame is artificially shifted in parallax to produce a second image with different parallax, and the two images are then used for stereo synthesis. Because the artificially simulated parallax simply shifts the whole image left or right without changing the relative distances of objects at different depths, the stereoscopic effect is poor; moreover, since only one camera is used, occlusion of the camera or heavy image noise strongly degrades the quality of the stereoscopic image. In addition, when the camera of the mobile terminal is occluded or the captured stereo material is noisy, high-quality stereoscopic images cannot be acquired, synthesized, and displayed at a relatively fast speed.
Summary of the invention
The present disclosure provides a two-viewpoint stereoscopic image synthesis method and system that overcome the image-mutation problem, strengthen matching stability, and improve noise resistance, so that high-quality stereoscopic images can be displayed.

A two-viewpoint stereoscopic image synthesis method includes: acquiring a dual-viewpoint image through a dual Mobile Industry Processor Interface (MIPI); matching and synthesizing the acquired dual-viewpoint image to generate two-viewpoint stereoscopic image data; and realizing naked-eye display of the two-viewpoint stereoscopic image on a slit-grating front-mounted LED stereoscopic display.
Optionally, after the dual-viewpoint image is acquired through the dual MIPI interface, the method further includes performing video driver processing on the acquired dual-viewpoint image.

Optionally, matching and synthesizing the acquired dual-viewpoint image includes: extracting the data of each frame of the left and right views of the acquired dual-viewpoint image with two preview threads executed in parallel, and performing the matching and synthesis.

Optionally, the matching and synthesis include: registering a dedicated buffer in memory for the camera preview display; acquiring the left and right single-frame video data from the acquired dual-viewpoint image and storing each in its dedicated buffer; synchronizing the obtained left and right single-frame video data using the frame-data timestamps; converting the acquired YUV-format data to RGB; performing image smoothing and size conversion on the format-converted left and right views; extracting and matching the features of the smoothed and resized left and right views with the Scale-Invariant Feature Transform (SIFT) matching algorithm; and arranging the left- and right-view pixels in a preset pixel arrangement to generate one frame of stereoscopic image data displayable under the grating.

Optionally, the image smoothing is implemented with a Gaussian low-pass filter.

Optionally, after the SIFT features of the smoothed and resized left and right views are extracted and matched, the method further includes removing mismatched feature points from the matched left- and right-view features with the Random Sample Consensus (RANSAC) algorithm.

Optionally, in the step of arranging the left- and right-view pixels in the preset arrangement to generate one frame of stereoscopic image data displayable under the grating, the arrangement works in units of vertical columns: the first column of the composite image holds the first column of the left view, the second column the first column of the right view, the third column the second column of the left view, the fourth column the second column of the right view, and so on, until every left- and right-view pixel column has been placed in the composite image.

Optionally, before the stereoscopic image data displayable under the grating is generated, the method further includes verifying whether the left and right views are occluded and, if so, repairing the occluded region.

Optionally, repairing the occluded region includes correcting the region occluded by a foreign object in one view with the gray values of the corresponding pixels in the other view.

Optionally, the method further includes detecting noise in the occlusion-repaired left and right views with a median filter, labeling the noise points, and repairing the points confirmed to be noise.

Optionally, the noise repair includes correcting the gray value of each pixel within a confirmed noise point with the gray values of the corresponding pixels of the other viewpoint.
一种两视点立体图像合成系统,包括采集单元、处理单元,以及显示单元;其中,A two-view stereoscopic image synthesis system, comprising an acquisition unit, a processing unit, and a display unit; wherein
采集单元,设置为:利用双MIPI接口采集双视点图像;The acquisition unit is configured to: acquire a dual viewpoint image by using a dual MIPI interface;
处理单元,设置为:对采集到的双视点图像进行匹配、合成处理,生成两视点立体图像数据并输出给显示单元;The processing unit is configured to: perform matching and synthesizing processing on the collected two-viewpoint image, generate two-viewpoint stereoscopic image data, and output the data to the display unit;
显示单元,设置为:通过狭缝光栅前置LED立体显示器实现对两视点立体图像的裸眼显示。The display unit is configured to realize naked-eye display of the two-view stereoscopic image through the slit grating front LED stereoscopic display.
可选地,所述采集单元包括两路具有MIPI接口的后置摄像头,设置为:分别采集左右两路视图;Optionally, the collecting unit includes two rear cameras with MIPI interfaces, and is configured to separately collect left and right views;
两路摄像头分别挂载在不同的I2C总线上,采用独立的数据线路与内存和中央处理器交互,并利用每一帧图像的时间戳进行帧同步。The two cameras are mounted on different I 2 C buses, interact with the memory and the central processor using separate data lines, and use the time stamp of each frame for frame synchronization.
可选地,所述摄像头的为Omnivision公司的OV5640芯片。Optionally, the camera is an OV5640 chip of Omnivision Corporation.
可选地,所述采集单元还包括摄像头驱动模块,设置为:对采集到的两路视图进行驱动处理后输出给处理单元。Optionally, the collecting unit further includes a camera driving module, configured to: drive the collected two-way view and output the result to the processing unit.
可选地,所述摄像头驱动模块采用V4L2视频驱动框架来实现。Optionally, the camera driving module is implemented by using a V4L2 video driving framework.
可选地,所述处理单元包括预处理模块、提取模块、匹配模块,合成模块;其中,Optionally, the processing unit includes a preprocessing module, an extraction module, a matching module, and a synthesizing module;
预处理模块,设置为:在内存中注册用于照相机预览显示的专用缓冲区;获取左右两路视频单帧数据,分别存放到专用缓冲区中;同时,利用帧数据时间戳功能进行软件同步;将采集到的YUV格式的数据转换为RGB格式;对格式转换后的左右视图进行图像平滑和尺寸变换处理后输出给提取模块;The pre-processing module is configured to: register a dedicated buffer for previewing the camera in the memory; acquire the single-frame data of the left and right video, and store them in the dedicated buffer respectively; at the same time, use the frame data timestamp function to perform software synchronization; Converting the collected data in the YUV format into an RGB format; performing image smoothing and size conversion processing on the left and right views after the format conversion, and outputting the image to the extraction module;
the extraction module is configured to extract features from the preprocessed left and right views using the SIFT feature matching algorithm and generate 32-dimensional SIFT feature descriptors;
the matching module is configured to match the extracted features of the left and right views using the SIFT feature matching algorithm, take the point in the right view with the smallest Euclidean distance as the matching point of the current left-view SIFT keypoint, and record the coordinate information of each matched point pair; and
the synthesis module is configured to arrange the left- and right-view pixels according to a preset pixel arrangement, generating a frame of stereoscopic image data displayable under the grating.
Optionally, the processing unit further includes a culling module configured to cull mismatched feature points using the RANSAC algorithm and to estimate a pixel-coordinate mapping model between the left and right views.
Optionally, the processing unit further includes an occlusion repair module configured to: when the estimated left/right-view pixel-coordinate mapping model indicates that an occluded region exists, correct the region of a view occluded by a foreign object using the gray values of the corresponding pixels in the other view, thereby repairing the occluded region.
Optionally, the processing unit further includes a noise repair module configured to detect noise in the left and right views using a median filter, mark the noise points, and output the result to the synthesis module.
A computer-readable storage medium stores computer-executable instructions for performing any of the methods above.
Compared with the related art, embodiments of the present invention acquire dual-viewpoint images through dual MIPI interfaces; match and synthesize the acquired dual-viewpoint images to generate two-view stereoscopic image data; and achieve naked-eye display of the two-view stereoscopic image through a slit-grating front-mounted LED stereoscopic display. By using the SIFT feature matching algorithm to match the extracted features of the left and right views, the embodiments cope more effectively with abrupt image changes, such as sudden focal-length changes, that frequently occur when shooting with a mobile terminal, which enhances matching stability and noise resistance.
By determining the pixel-coordinate mapping model between the left and right views of the current scene, performing occlusion repair and denoising on low-quality single-viewpoint stereoscopic material according to that model, and then generating the two-view stereoscopic image, the embodiments address the mediocre material quality and low synthesis efficiency of the related art.
Through occlusion repair and noise repair, a mobile terminal can acquire, synthesize, and display high-quality stereoscopic images at relatively high speed even when a camera is partially blocked or the captured stereoscopic material is noisy.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic structural diagram of a related-art slit front-mounted LCD autostereoscopic display;
FIG. 2 is a flowchart of a two-view stereoscopic image synthesis method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the left/right-view pixel arrangement according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the composition of a two-view stereoscopic image synthesis system according to an embodiment of the present invention.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, where no conflict arises, the embodiments herein and the features within them may be combined with one another arbitrarily.
Stereoscopic video exploits the binocular parallax of the human eye: each eye independently receives the left or right image of the same scene captured from a specific viewpoint, producing a stereoscopic impression, so the amount of data to be processed doubles compared with conventional single-channel video. In portable media devices, overcoming the high power consumption and electromagnetic interference (EMI) noise that parallel data buses incur under high-speed transmission requires bus designs with greater bandwidth and transfer rates in order to deliver images and multimedia of the same quality. Besides mobile phones, portable multimedia devices such as portable media players (PMP), portable digital versatile disc (DVD) players, and digital still cameras (DSC) face the same problem, and several standardization efforts have emerged in response. The MVI standard (a camera-output video format), developed jointly by Renesas Technology and Seiko Epson, is based on low-voltage differential signaling (LVDS) and effectively reduces EMI. The Mobile Pixel Link (MPL) bus standard proposed by National Semiconductor uses the company's patented WhisperBus physical-layer interface and has been adopted by handset makers including Sony Ericsson. Qualcomm, the originator of code division multiple access (CDMA) technology, introduced the MDDI standard, a high-speed serial interface for mobile terminals likewise based on the LVDS physical transmission specification. The largest and most prominent effort is the Mobile Industry Processor Interface (MIPI) standard, organized and published jointly by terminal and solution providers including Nokia, ARM (Advanced RISC Machines), STMicroelectronics, Texas Instruments, Intel, and Freescale. Among these technical standards, MIPI has, after years of refinement, become highly influential in the industry.
Compared with other standards, modules using the MIPI interface offer less wiring, higher speed, larger data throughput, lower power consumption, stronger interference immunity, and better adaptability. The Camera Serial Interface (CSI) and Display Serial Interface (DSI), introduced by MIPI's high-speed multipoint connection working group, are camera and display module interface specifications built on the D-PHY digital physical layer (part of the MIPI protocol family) as the physical transport layer. The specifications define the physical connections, protocol processing, and upper-layer applications in detail. D-PHY uses 1.2 V source-synchronous, scalable low-voltage signaling with 200 mV-swing differential pairs and supports up to four lanes, each running at up to 1 Gbps, for a theoretical aggregate of 4 Gbps with zero static power consumption. The DSI specification for portable devices supports resolutions up to Extended Graphics Array (XGA). The CSI specification supports not only raw image data such as RGB, Bayer, and YUV formats, but also user-defined data types and compressed data formats. The DSI interface is applied in essentially the same way as CSI. MIPI has refined its application interfaces into dedicated sub-specifications for different needs, so for handheld electronics, chiefly mobile phones, the MIPI interface specifications can support data transmission at essentially any speed and resolution.
However, the MIPI standard does not address stereoscopic acquisition. Terminal CPU chips increasingly reserve two MIPI interfaces for image acquisition, for example Texas Instruments' OMAP 4 and OMAP 5 series. On mobile terminals, however, the dual MIPI interfaces are used for front/rear dual-camera capture, which cannot satisfy a stereoscopic acquisition function with demanding real-time and synchronization requirements.
FIG. 2 is a flowchart of a two-view stereoscopic image synthesis method according to an embodiment of the present invention. As shown in FIG. 2, the method includes:
Step 200: acquire dual-viewpoint images through dual MIPI interfaces.
This step is implemented by shooting synchronously from two capture devices (such as cameras) at different positions and angles to obtain a dual-viewpoint image comprising left- and right-view image data. Using dual MIPI rear cameras satisfies the high-bandwidth, high-synchronization, low-power, and low-noise requirements of stereoscopic acquisition.
For example, on the basis of the dual MIPI interfaces provided by Texas Instruments' OMAP4-series processor chipset development platform, real-time dual-camera image acquisition can be achieved by mounting two cameras on the I2C bus. Compared with a conventional acquisition platform, this offers less wiring, higher speed, larger data throughput, lower power consumption, stronger interference immunity, and better adaptability.
As is known to those skilled in the art, the OMAP4-series processor chip provides two 1.2 GHz ARM Cortex-A9 processors, one TMS320C64x+ processor, two ARM Cortex-M3 processors, 1 GB of DRAM, two MIPI serial camera data interfaces (CSI-2_1 and CSI-2_2), and a direct memory access (DMA) controller, among other modules. This meets the processing requirements for acquiring, synthesizing, and displaying stereoscopic images quickly and with high quality. The application of the OMAP4-series processor chip is a routine technique for those skilled in the art and is not described further here.
The two cameras in this embodiment have nearly identical optical, geometric, and imaging characteristics and are arranged in the horizontal direction; the distance between them may be 35 mm. The two cameras are mounted on different I2C buses, exchange data with memory and the central processor over independent data lines, and use the timestamp of each image frame for frame synchronization. In the DMA controller's "transparent" working mode, the DMA controller and the CPU can access memory alternately. The DMA controller transfers image data directly from the cameras to memory without CPU involvement; the hardware opens a direct data path between memory and the input/output devices, greatly improving CPU efficiency.
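The timestamp-based frame synchronization described above can be illustrated with a short sketch. The following Python fragment is a hypothetical illustration only (the patent gives no code; the queue structure and the tolerance value are assumptions): it pairs left and right frames whose capture timestamps are closest, discarding frames that cannot be matched within the tolerance.

```python
from collections import deque

TOLERANCE_MS = 8  # assumed pairing tolerance, roughly half a frame at 60 fps

left_queue, right_queue = deque(), deque()  # queues of (timestamp_ms, frame)

def try_pair():
    """Pop the oldest left/right frames whose timestamps match within tolerance."""
    while left_queue and right_queue:
        t_l, frame_l = left_queue[0]
        t_r, frame_r = right_queue[0]
        if abs(t_l - t_r) <= TOLERANCE_MS:
            left_queue.popleft()
            right_queue.popleft()
            return frame_l, frame_r  # a synchronized stereo pair
        # Drop whichever frame is older; its partner was lost or delayed.
        if t_l < t_r:
            left_queue.popleft()
        else:
            right_queue.popleft()
    return None
```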
The camera chip in this embodiment may be Omnivision's OV5640, whose interface is an extension of the MIPI interface. The camera control interface (CCI) defined in MIPI is compatible with the I2C standard and has two lines, the serial clock line (SCL) and the serial data line (SDA); SCL is a unidirectional control clock signal line and SDA is a bidirectional control line. The RESET, SHUTTER, and STROBE lines, the clock bus, and the power ground in the camera expansion interface can be shared by the two cameras, so the two cameras can simply be connected to them in parallel without further processing. How the OV5640 camera chip is connected to the expansion interface of the OMAP4460 central processor is a routine technique for those skilled in the art and is not described further here.
For video driving, embodiments of the present invention may use the Video for Linux 2 (V4L2) video driver framework. V4L2 is a two-layer driver system: the upper layer is the Video Device module, a character device with registered device functions; the lower layer is the V4L2 driver itself. The V4L2 driver and device nodes are registered through registration functions; once a device node is opened, operations on the device file are dispatched to the V4L2-compliant interfaces defined by the v4l2_ioctl_ops structure. When accessing video hardware, the V4L2 driver module in the Android kernel is called first, and the V4L2 module then calls the device driver. The two cameras use exactly the same driver framework and flow. The main roles of the V4L2 driver framework are timing control of the video data and memory management of the data buffers, controlling the hardware and obtaining image data via drivers such as the I2C bus and the Peripheral Component Interconnect (PCI) interface.
When the camera driver module starts, its initialization functions complete a series of setup tasks, including hardware power-up, MIPI clock configuration of the bus controller, I2C bus port initialization, MIPI data port initialization, and video device detection and binding. The camera driver module internally manages two video frame buffer queues in memory, one serving as the input queue and the other as the output queue. For a camera device, once data acquisition starts, a buffer in the input queue automatically moves to the output queue after it has been filled with image data; it waits for the video driver to issue the dequeue command (VIDIOC_DQBUF) to pass the data to the upper layer for processing, after which the enqueue command (VIDIOC_QBUF) returns the buffer to the input queue.
Using the V4L2 video driver framework, video stream acquisition roughly comprises: opening the video device file; obtaining the device's capability list and detecting the supported video standards; setting the video frame capture format and frame size; requesting several frame buffer regions from memory as the input/output buffer queues (multiple buffers can be queued to improve acquisition efficiency); obtaining the information of each buffer and mapping it into upper-layer system address space; starting video stream capture; taking a sampled frame of data from the buffer at the head of the output queue and passing it to the upper layer for processing; returning the just-processed frame buffer to the tail of the input queue for cyclic acquisition; stopping the video stream; and closing the video device.
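As a rough, high-level analogue of this capture cycle, the following Python sketch uses OpenCV, whose Linux backend issues the V4L2 ioctls (VIDIOC_REQBUFS, VIDIOC_QBUF, VIDIOC_DQBUF, VIDIOC_STREAMON) internally. This is a hypothetical illustration, not the patent's driver code; the device indices, frame size, and the downstream handler process_stereo_pair are assumptions.

```python
import cv2

# Open the two video device files (device indices are assumptions).
cap_left = cv2.VideoCapture(0)   # e.g. /dev/video0
cap_right = cv2.VideoCapture(1)  # e.g. /dev/video1

# Set the capture format and frame size before streaming.
for cap in (cap_left, cap_right):
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

try:
    while True:
        # read() dequeues a filled frame buffer and requeues it internally.
        ok_l, frame_l = cap_left.read()
        ok_r, frame_r = cap_right.read()
        if not (ok_l and ok_r):
            break
        process_stereo_pair(frame_l, frame_r)  # hypothetical downstream handler
finally:
    cap_left.release()   # stop streaming and close the device files
    cap_right.release()
```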
In Step 200, on the basis of a fully customized camera subsystem of the Android operating system, with a Texas Instruments OMAP4-series central processor as the core of the hardware platform, real-time synchronized dual-camera image acquisition is achieved over dual MIPI serial data links. Compared with a conventional acquisition platform, this offers less wiring, higher speed, larger data throughput, lower power consumption, stronger interference immunity, and better adaptability.
Step 201: match and synthesize the acquired dual-viewpoint images to generate two-view stereoscopic image data.
This step uses two preview threads (PreviewThread) executing in parallel to extract, through the function interface provided by the V4L2 framework (where the video driver is used), the data of each left-view and right-view frame of the dual-viewpoint images acquired in Step 200 from the system kernel driver layer, and then matches and synthesizes the frames. This roughly comprises:
registering dedicated buffers in memory for the camera preview display; acquiring single-frame data of the left and right video streams from the acquired dual-viewpoint images and storing it in the dedicated buffers;
synchronizing the obtained left and right single-frame video data using the frame-data timestamps;
converting the acquired YUV-format data to RGB format;
applying image smoothing and size conversion to the format-converted left and right views;
extracting and matching features of the smoothed and resized left and right views using the SIFT feature matching algorithm; and
arranging the left- and right-view pixels according to a preset pixel arrangement to generate a frame of stereoscopic image data displayable under the grating.
Optionally, the processing also includes repair and denoising; finally, the synthesized image data is passed to the Android display subsystem (Surface) system library and shown on the application-layer interface.
Taking the customized hardware abstraction layer library of the Android camera subsystem as an example, the Android Camera architecture follows the layered structure of the Android system itself; the corresponding layers are the application layer (Camera App), the application framework layer (Camera Service), the hardware abstraction layer (Camera HAL), and the kernel driver layer (Camera Driver). The camera subsystem is actually split into two processes, a client and a service. In this step, matching and synthesizing the acquired dual-viewpoint images includes:
Registering the preview display buffer: a dedicated buffer is registered in memory for the camera preview display in the Android Surface system library, and the image data type is specified.
Acquiring single-frame raw image data, i.e. the data of each left-view and right-view frame of the dual-viewpoint images acquired in Step 200: the hardware abstraction layer library calls the V4L2 interface functions in the Linux kernel to obtain single-frame data of the left and right video streams and stores it in the dedicated buffers; at the same time, software synchronization is performed using the frame-data timestamp function provided by the Android camera subsystem.
Converting the acquired YUV-format data to RGB format: the YUV color space is a color coding method used in European television systems, where Y represents luminance (gray level) and U and V represent the color differences (R-Y) and (B-Y). Camera data usually arrives as a pixel information matrix in YUV format, while stereoscopic images must be in RGB format for both synthesis and display, so the acquired raw data must be converted to the type registered for the preview display buffer.
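A minimal sketch of this conversion with OpenCV follows. It is an illustration rather than the patent's implementation; the NV21 layout is an assumption, since the exact YUV sub-format is not specified in the source.

```python
import cv2
import numpy as np

def yuv_to_rgb(yuv_bytes: bytes, width: int, height: int) -> np.ndarray:
    """Convert one NV21 (YUV 4:2:0) camera frame to an RGB pixel matrix."""
    # NV21 stores a full-resolution Y plane followed by interleaved V/U data
    # at quarter resolution, giving height * 3 / 2 rows of `width` bytes.
    yuv = np.frombuffer(yuv_bytes, dtype=np.uint8).reshape(height * 3 // 2, width)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_NV21)
```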
Preprocessing the images: image smoothing and size conversion are applied to the obtained left and right views. Image smoothing can be implemented with a Gaussian low-pass filter, which effectively suppresses ringing artifacts and removes noise well; its implementation is a routine technique for those skilled in the art and is not described further here. Size conversion adjusts the image size according to actual needs while preserving search quality and processing speed; its implementation is likewise a routine technique for those skilled in the art and is not described further here.
Extracting and matching features of the preprocessed left and right views, for example with the SIFT feature matching algorithm. The Scale Invariant Feature Transform (SIFT) is a local feature descriptor proposed by David Lowe in 1999 and further developed and refined in 2004. The SIFT feature matching algorithm can handle matching between two images under viewpoint changes, occlusion, brightness changes, rotation, noise, and scale changes, and has strong matching capability. The SIFT algorithm consists of two parts, interest point detection and feature descriptor generation; the resulting SIFT operator is a local feature descriptor that describes the gray-gradient distribution of a region of interest in the image. SIFT is widely used in image matching and object detection, and its localization accuracy is very high. The procedure includes:
Extracting features of the preprocessed left and right views: first, a difference-of-Gaussians (DoG) scale space is constructed; next, extrema are searched for each pixel within its neighborhood in image space and DoG scale space, giving initial feature point locations; then a three-dimensional quadratic function is fitted to determine keypoint position and scale precisely (to sub-pixel accuracy), while low-contrast keypoints and unstable edge response points are removed to enhance matching stability and noise resistance; finally, a dominant orientation is assigned to each keypoint from the gradient-orientation distribution of its neighborhood pixels, making the operator rotation-invariant, and a 32-dimensional SIFT feature descriptor is generated.
Matching the extracted features of the left and right views: first, a transformation model is assumed for the transformation between the left and right viewpoints; then, using the position, scale, and rotation information of each feature point, the Euclidean distance between keypoint feature vectors is used as the similarity measure between keypoints of the two images, and the transformation parameters of each matched pair are computed under the assumed model; finally, the point in the right view with the smallest Euclidean distance is taken as the matching point of the current left-view SIFT keypoint, and the coordinate information of the matched point pair is recorded.
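A compact sketch of this extract-and-match step using OpenCV's SIFT is shown below. It is an illustration under assumptions: OpenCV's descriptors are 128-dimensional rather than the 32-dimensional descriptors described above, and the file names are placeholders.

```python
import cv2

left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, desc_l = sift.detectAndCompute(left, None)
kp_r, desc_r = sift.detectAndCompute(right, None)

# For each left-view keypoint, take the right-view descriptor with the
# smallest Euclidean (L2) distance as its matching point.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.match(desc_l, desc_r)

# Record the coordinates of each matched point pair.
pairs = [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches]
```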
Because the SIFT feature matching algorithm, which is both scale-invariant and rotation-invariant, is used to match the left- and right-view material, the transformation model is estimated from the feature point positions and related information by the least-squares criterion; matched pairs that do not fit the model are discarded, and the model parameters are then recomputed from the remaining pairs, again by least squares. Compared with conventional matching algorithms, matching the extracted left- and right-view features in this embodiment copes more effectively with abrupt image changes, such as sudden focal-length changes, that frequently occur when shooting with a mobile terminal.
Embodiments of the present invention may also cull mismatched feature points, for example with the Random Sample Consensus (RANSAC) algorithm, which takes the positions of the matching points within the images' feature point sets as parameters and estimates the mapping between the two images. By adjusting the RANSAC threshold, the mapping between the two views can be estimated accurately and the SIFT matches filtered, thereby removing erroneous matches. RANSAC is a robust estimation method proposed by Fischler and Bolles. Its basic idea is that, instead of treating all possible input data indiscriminately during parameter estimation, a search procedure designed for the specific problem iteratively rejects input data that are inconsistent with the estimated parameters (outliers), and the parameters are then estimated from the correct input data.
Estimating the coordinate mapping model includes: randomly selecting a subset of data points from the set S of matched pairs and initializing the model from this subset; finding the support set Si of the current model according to a threshold Td, where Si is the consensus set of the sample and its members are treated as inliers; if the size of Si exceeds a specified threshold T, re-estimating the model with Si and terminating; if the size of Si is below a threshold Ts, selecting a new sample and repeating the steps above. After N trials, the largest consensus set Si is selected and used to re-estimate the model, giving the final result. Only the correct matched pairs that fit the coordinate mapping model, i.e. the points in Si, are retained; together with the resulting left/right-view pixel-coordinate mapping model they are saved as the reference information of the current frame.
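Continuing the sketch above, this culling-and-estimation step can be expressed with OpenCV's RANSAC-based homography fit. This is an illustration only: a planar homography is one possible choice of the assumed transformation model (the patent does not name the model), and the reprojection threshold is an assumption.

```python
import cv2
import numpy as np

# `pairs` holds the (left_pt, right_pt) coordinates recorded during matching.
pts_l = np.float32([p[0] for p in pairs])
pts_r = np.float32([p[1] for p in pairs])

# RANSAC rejects matched pairs inconsistent with the estimated model;
# `model` is the left->right pixel-coordinate mapping, `mask` flags inliers.
model, mask = cv2.findHomography(pts_l, pts_r, cv2.RANSAC,
                                 ransacReprojThreshold=3.0)

inlier_pairs = [p for p, keep in zip(pairs, mask.ravel()) if keep]
```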
The above matching and synthesis process for the acquired dual-viewpoint images is executed in a loop; only the image reference information of the four most recently acquired frames is retained, forming a queue. Each time a frame of stereoscopic material is acquired, the information of the frame at the head of the queue is deleted and the reference information of the newest frame is stored at the tail.
After the left/right-view preprocessing and matching are completed, the left- and right-view pixels are arranged in a specific pixel pattern, as shown in FIG. 3, to generate a frame of stereoscopic image data displayable under the grating, completing the stereoscopic image synthesis. The specific pixel arrangement, in units of vertical columns, is: the first column of the composite image carries the first column of left-view pixels and the second column carries the first column of right-view pixels; the third column carries the second column of left-view pixels and the fourth column carries the second column of right-view pixels; and so on, repeating until all left- and right-view pixels have been placed into the composite image. A particular property of this arrangement is that the synthesized image has twice as many horizontal pixels as the original images.
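This column interleaving is straightforward to express with NumPy slicing. The sketch below assumes the two views share the same height x width x 3 RGB shape; it is an illustration of the arrangement just described, not the patent's code.

```python
import numpy as np

def interleave_columns(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Column-interleave two RGB views into one frame twice as wide."""
    assert left.shape == right.shape
    h, w, c = left.shape
    composite = np.empty((h, 2 * w, c), dtype=left.dtype)
    composite[:, 0::2] = left    # columns 1, 3, 5, ... from the left view
    composite[:, 1::2] = right   # columns 2, 4, 6, ... from the right view
    return composite
```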
Optionally, after the above loop completes, an average pixel-coordinate mapping model is computed from the first three frames in the current model queue and used to verify whether the left and right views are occluded. Each of the left and right views is divided evenly into eight blocks and the average gray value of each block is taken; the average gray values of corresponding blocks in the left and right views are compared, and if the relative difference of a block's average gray values exceeds 10%, the block with the lower gray value is treated as a damaged block. If the number of damaged blocks is zero, there is no occlusion and processing proceeds to noise handling.
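A small sketch of this block comparison follows. It is an illustration under assumptions: grayscale views, and an even 2x4 grid for the eight blocks, an arrangement the patent does not specify.

```python
import numpy as np

def damaged_blocks(left_gray: np.ndarray, right_gray: np.ndarray,
                   rows: int = 2, cols: int = 4, rel_diff: float = 0.10):
    """Return (row, col) indices of blocks whose mean gray values differ >10%."""
    h, w = left_gray.shape
    bh, bw = h // rows, w // cols
    damaged = []
    for r in range(rows):
        for c in range(cols):
            mean_l = left_gray[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean()
            mean_r = right_gray[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean()
            if abs(mean_l - mean_r) > rel_diff * max(mean_l, mean_r):
                damaged.append((r, c))
    return damaged
```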
If the number of damaged blocks is not zero, there is occlusion, and this step further includes repairing the occluded region: the region of a view occluded by a foreign object is corrected with the gray values of the corresponding pixels in the other view, so that the mobile terminal can still obtain relatively high-quality left- and right-view material even when a camera is blocked, addressing the mediocre material quality and low synthesis efficiency of the related art. The damage repair includes: determining the damaged region precisely, using the Sobel operator within the damaged block to detect edges of abrupt gray-level change; determining the coordinates of the damaged region in the other view using the coordinate mapping model of the current scene's left and right views; replacing the image content of the current damaged region with the image content of the corresponding region of the normal view; and performing edge repair, in which, for the abrupt gray-level edges detected by the Sobel operator, the gray value of every pixel within the 3×3 neighborhood of each edge pixel is corrected with the average of the gray values of the corresponding pixels in the left and right views. Here, for left and right views that have already been matched, if the relative difference of a block's average gray values exceeds 10%, the view block with the lower gray value is marked as the damaged block. Since the left and right views have already been matched, the view block with the higher gray value (i.e. the other view mentioned above) can replace the block with the lower gray value (i.e. the region of the view occluded by a foreign object), achieving the damage repair.
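One way this repair could look in code is sketched below. It is a hedged illustration, not the patent's implementation: the mapping of the damaged block into the other view is simplified to identical coordinates (which assumes already-matched, rectified grayscale views), the Sobel edge threshold is an assumption, and border pixels are ignored.

```python
import cv2
import numpy as np

def repair_block(damaged_view, donor_view, y0, y1, x0, x1):
    """Replace an occluded block with the donor view's content, then smooth edges."""
    # Locate abrupt gray-level edges inside the damaged block with Sobel.
    block = damaged_view[y0:y1, x0:x1]
    gx = cv2.Sobel(block, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(block, cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy) > 128  # assumed edge threshold

    # Replace the damaged content with the corresponding donor region.
    damaged_view[y0:y1, x0:x1] = donor_view[y0:y1, x0:x1]

    # Edge repair: average the two views' gray values in each edge pixel's
    # 3x3 neighborhood (interior pixels assumed; border handling omitted).
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys + y0, xs + x0):
        patch = (damaged_view[y-1:y+2, x-1:x+2].astype(np.uint16) +
                 donor_view[y-1:y+2, x-1:x+2].astype(np.uint16)) // 2
        damaged_view[y-1:y+2, x-1:x+2] = patch.astype(np.uint8)
    return damaged_view
```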
After the occluded region is repaired, a small amount of salt-and-pepper noise remains owing to the limitations of the coordinate mapping model. This step therefore further includes detecting noise in the left and right views with a median filter and marking the noise points: within the N×N neighborhood of the current point (N odd), the maximum, minimum, and mean gray values are taken; if the gray value of the current point is the maximum or minimum within the neighborhood and exceeds a preset threshold, it may be noise and is marked as a suspect point. The threshold is an empirical value; the average gray value of the whole image can generally be used. The coordinate mapping model then determines the region where the suspect point lies in the other view, the current point is placed at that position, the gray comparison is performed again, and it is thus determined whether the current point is a noise point.
For points determined to be noise, noise-point repair is performed: the gray value of each pixel within the 3×3 neighborhood of a confirmed noise point may be corrected with the gray values of the corresponding pixels of the other viewpoint. Compared with other denoising methods (most related-art methods filter the entire noisy image, which easily degrades image quality), the noise repair method in this embodiment is simpler, requires less computation, denoises more effectively, and has less impact on the appearance of the synthesized stereoscopic image.
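A sketch of the suspect-point test is given below. It is illustrative only: N = 3 and the whole-image mean threshold follow the empirical choices mentioned above, and the cross-view confirmation step is omitted.

```python
import numpy as np

def suspect_noise_points(gray: np.ndarray, n: int = 3) -> np.ndarray:
    """Flag pixels that are the max or min of their N x N neighborhood
    and exceed the whole-image mean gray value used as the threshold."""
    h, w = gray.shape
    half = n // 2
    threshold = gray.mean()
    suspects = np.zeros((h, w), dtype=bool)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = gray[y-half:y+half+1, x-half:x+half+1]
            v = gray[y, x]
            if (v == patch.max() or v == patch.min()) and v > threshold:
                suspects[y, x] = True
    return suspects
```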
Step 202: provide naked-eye display of the two-view stereoscopic image through a slit-grating front-mounted LED (Light Emitting Diode) stereoscopic display.
In this step, the hardware abstraction layer system library sends a preview message to the service side of the camera subsystem through a callback function. On receiving the message, the service side calls the ISurface system library to fill the preview display buffer with data. Taking the Android operating system as an example: in the application layer, two Camera object instances and their associated Surface preview controls implement the preview-related interfaces that the Android.Hardware.Camera class of the application framework layer provides to upper-layer applications; the stereoscopic image is brought up from the Android hardware abstraction layer, handed to the Android ISurface system library for the data logic processing, and finally shown on the preview interface.
Because this method uses an LED stereoscopic display with a front-mounted, matched grating, the stereoscopic image data delivered to the application layer is projected onto the 2D display at the same pixel scale as the original image. Compared with similar products, the technical solution proposed by the embodiments of the present invention requires little data processing, has low hardware cost, and is easy to manufacture.
For a given 2D display pixel size and given viewing conditions, the widths of the slit grating's transparent and opaque strips, the distance between the 2D display and the slit grating, and other structural parameters must be designed precisely so that the viewer's left and right eyes see the corresponding parallax images through the grating. For a given 2D display, the display conditions are: the number of parallax images (viewpoints) is K, and the 2D display sub-pixel width is Wp. The viewing conditions are: the optimal viewing distance is L, and the viewpoint spacing of adjacent parallax images is Q, which may be equal to or smaller than the interpupillary distance. In general, Q = E/N, where E is the interpupillary distance of the human eye and N is a natural number; then, at the optimal viewing distance, if the left eye sees the i-th parallax image, the right eye sees the (i+N)-th parallax image. When N = 1, the viewpoint spacing of adjacent parallax images equals the interpupillary distance. The slit grating parameters are: the grating pitch Ws, where the widths of the transparent and opaque strips are Ww and Wb respectively, and the distance between the 2D display and the slit grating is D.
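Although the patent does not state the design equations explicitly, the parameters defined above are related by the standard pinhole (similar-triangles) analysis of a slit grating; a sketch of those relations, under the assumption that L is measured from the grating plane to the viewer, is:

$$D = \frac{W_p\,L}{Q}, \qquad W_s = \frac{K\,W_p\,L}{L + D}.$$

The first relation makes adjacent sub-pixels (spacing $W_p$ behind a slit) map to adjacent viewpoints (spacing $Q$) at the viewing plane; the second makes one grating period cover a group of $K$ sub-pixels, slightly foreshortened so that the viewing zones of neighboring slits converge at distance $L$.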
In the method of this embodiment, through a thread loop mechanism, the hardware abstraction layer system library continuously repeats the matching and synthesis process of Step 201 on the acquired dual-viewpoint images: the image data frames captured by the two camera hardware devices are matched, synthesized, sent to the preview display buffer, and finally displayed on the application interface.
It should be noted that for the single-frame photo function, the above steps suffice to achieve naked-eye display of the two-view stereoscopic image. For the video recording function, after the stereoscopic image synthesis of Step 201, the data format is converted back to YUV for storage and the image data is passed to the Android video recorder subsystem (VideoRecorder) for encoding; Step 202 is then performed.
FIG. 4 is a schematic diagram of the composition of a two-view stereoscopic image synthesis system according to an embodiment of the present invention. As shown in FIG. 4, the system includes at least an acquisition unit 410, a processing unit 420, and a display unit 430, wherein:
the acquisition unit 410 is configured to acquire dual-viewpoint images through dual MIPI interfaces.
The acquisition unit 410 includes two rear-facing cameras with MIPI interfaces, namely the first camera 411 and the second camera 412 in FIG. 4, configured to capture the left and right views respectively. The two cameras have nearly identical optical, geometric, and imaging characteristics and are arranged in the horizontal direction; the distance between them may be 35 mm. They are mounted on different I2C buses, exchange data with memory and the central processor over independent data lines, and use the timestamp of each image frame for frame synchronization. The camera chip may be Omnivision's OV5640.
Optionally, the acquisition unit 410 further includes a camera driver module 413 configured to perform driver processing on the two captured views and output them to the processing unit 420. The camera driver module 413 can be implemented with the V4L2 video driver framework.
The processing unit 420 is configured to match and synthesize the acquired dual-viewpoint images, generate two-view stereoscopic image data, and output it to the display unit 430. The processing unit 420 can be implemented with a Texas Instruments OMAP4-series processor chip.
The processing unit includes a preprocessing module 421, an extraction module 422, a matching module 423, and a synthesis module 424, wherein:
the preprocessing module 421 is configured to: register dedicated buffers in memory for the camera preview display; acquire single-frame data of the left and right video streams and store it in the dedicated buffers; perform software synchronization using the frame-data timestamp function; convert the acquired YUV-format data to RGB format; and apply image smoothing and size conversion to the format-converted left and right views before outputting them to the extraction module 422.
The extraction module 422 is configured to extract features from the preprocessed left and right views using the SIFT feature matching algorithm and generate 32-dimensional SIFT feature descriptors.
The matching module 423 is configured to match the extracted features of the left and right views using the SIFT feature matching algorithm, take the point in the right view with the smallest Euclidean distance as the matching point of the current left-view SIFT keypoint, and record the coordinate information of each matched point pair.
The synthesis module 424 is configured to arrange the left- and right-view pixels according to a preset pixel arrangement, generating a frame of stereoscopic image data displayable under the grating.
Optionally, the processing unit further includes a culling module 425 configured to cull mismatched feature points using the RANSAC algorithm and to estimate the left/right-view pixel-coordinate mapping model.
Optionally, the processing unit further includes an occlusion repair module 426 configured to: when the estimated left/right-view pixel-coordinate mapping model indicates that an occluded region exists, correct the region of a view occluded by a foreign object using the gray values of the corresponding pixels in the other view, thereby repairing the occluded region.
Optionally, the processing unit further includes a noise repair module 427 configured to detect noise in the left and right views using a median filter, mark the noise points, and output the result to the synthesis module 424.
The display unit 430 is configured to provide naked-eye display of the two-view stereoscopic image through a slit-grating front-mounted LED stereoscopic display.
Those of ordinary skill in the art will understand that all or some of the steps of the above embodiments can be implemented as a computer program flow; the computer program may be stored in a computer-readable storage medium and executed on a corresponding hardware platform (such as a system, device, or apparatus), and when executed it includes one of, or a combination of, the steps of the method embodiments.
Optionally, all or some of the steps of the above embodiments can also be implemented with integrated circuits; these steps can be fabricated as individual integrated circuit modules, or several of the modules or steps can be fabricated as a single integrated circuit module.
The devices/functional modules/functional units in the above embodiments can be implemented with general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network of multiple computing devices.
When the devices/functional modules/functional units in the above embodiments are implemented as software functional modules and sold or used as stand-alone products, they can be stored in a computer-readable storage medium. The computer-readable storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Embodiments of the present invention use the SIFT feature matching algorithm; matching the extracted features of the left and right views copes more effectively with abrupt image changes, such as sudden focal-length changes, that frequently occur when shooting with a mobile terminal, enhancing matching stability and noise resistance. In addition, by determining the pixel-coordinate mapping model between the left and right views of the current scene, performing occlusion repair and denoising on low-quality single-viewpoint stereoscopic material according to that model, and then generating the two-view stereoscopic image, the embodiments address the mediocre material quality and low synthesis efficiency of the related art. Through occlusion repair and noise repair, a mobile terminal can acquire, synthesize, and display high-quality stereoscopic images at relatively high speed even when a camera is blocked or the captured stereoscopic material is noisy.
Claims (18)
- A two-view stereoscopic image synthesis method, comprising: acquiring dual-viewpoint images through a dual Mobile Industry Processor Interface (MIPI) interface; matching and synthesizing the acquired dual-viewpoint images to generate two-view stereoscopic image data; and providing naked-eye display of the two-view stereoscopic image through a slit-grating front-mounted light-emitting diode (LED) stereoscopic display.
- The two-view stereoscopic image synthesis method according to claim 1, wherein after the step of acquiring dual-viewpoint images through the dual MIPI interface, the method further comprises: performing video driver processing on the acquired dual-viewpoint images.
- The two-view stereoscopic image synthesis method according to claim 1 or 2, wherein matching and synthesizing the acquired dual-viewpoint images comprises: using two preview threads executing in parallel to extract the data of each left-view and right-view frame of the acquired dual-viewpoint images, and performing the matching and synthesis processing.
- The two-view stereoscopic image synthesis method according to claim 3, wherein performing the matching and synthesis processing comprises: registering dedicated buffers in memory for a camera preview display, acquiring single-frame data of the left and right video streams from the acquired dual-viewpoint images, and storing it in the dedicated buffers; synchronizing the obtained left and right single-frame video data using frame-data timestamps; converting the acquired YUV-format data to RGB format; applying image smoothing and size conversion to the format-converted left and right views; extracting and matching features of the smoothed and resized left and right views using the Scale Invariant Feature Transform (SIFT) feature matching algorithm; and arranging the left- and right-view pixels according to a preset pixel arrangement to generate a frame of stereoscopic image data displayable under the grating.
- The two-view stereoscopic image synthesis method according to claim 4, wherein after the step of extracting and matching features of the smoothed and resized left and right views using the SIFT feature matching algorithm, the method further comprises: culling mismatched feature points from the matched left- and right-view features using the Random Sample Consensus (RANSAC) algorithm.
- The two-view stereoscopic image synthesis method according to claim 4, wherein in the step of arranging the left- and right-view pixels according to the preset pixel arrangement to generate a frame of stereoscopic image data displayable under the grating, the pixel arrangement is, in units of vertical columns: the first column of the composite image carries the first column of left-view pixels and the second column carries the first column of right-view pixels; the third column carries the second column of left-view pixels and the fourth column carries the second column of right-view pixels; and so on, until all left- and right-view pixels have been placed into the composite image.
- The two-view stereoscopic image synthesis method according to claim 4 or 5, wherein before generating the frame of stereoscopic image data displayable under the grating, the method further comprises: verifying whether the left and right views are occluded and, if occlusion exists, repairing the occluded region.
- The two-view stereoscopic image synthesis method according to claim 7, wherein repairing the occluded region comprises: correcting the corresponding region occluded by a foreign object in one view using the gray values of the pixels in the other view.
- The two-view stereoscopic image synthesis method according to claim 7, further comprising: detecting noise in the occlusion-repaired left and right views using a median filter and marking the noise points; and performing noise-point repair on points determined to be noise.
- The two-view stereoscopic image synthesis method according to claim 9, wherein performing the noise repair comprises: correcting the gray value of each pixel within an identified noise point using the gray value of the corresponding pixel in the other viewpoint.
- A two-view stereoscopic image synthesis system, comprising an acquisition unit, a processing unit, and a display unit, wherein: the acquisition unit is configured to acquire dual-viewpoint images through a dual MIPI interface; the processing unit is configured to match and synthesize the acquired dual-viewpoint images, generate two-view stereoscopic image data, and output it to the display unit; and the display unit is configured to provide naked-eye display of the two-view stereoscopic image through a slit-grating front-mounted LED stereoscopic display.
- The two-viewpoint stereoscopic image synthesizing system according to claim 11, wherein the acquisition unit comprises two rear cameras with MIPI interfaces, configured to capture the left and right views respectively; the two cameras are mounted on separate I2C buses, exchange data with the memory and the central processing unit over independent data lines, and are frame-synchronized using the timestamp of each image frame.
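Timestamp-based synchronization of two independently clocked streams can be sketched as a nearest-timestamp pairing. The data layout (timestamp-sorted lists of `(timestamp_ms, frame)` tuples) and the skew tolerance are assumptions for illustration:

```python
def pair_frames_by_timestamp(left_frames, right_frames, max_skew_ms=10):
    """Pair frames from two independently clocked camera streams by
    nearest timestamp; left frames with no right partner within
    max_skew_ms are dropped. Both inputs are lists of (timestamp_ms,
    frame) tuples sorted by timestamp."""
    pairs, j = [], 0
    for ts_l, frame_l in left_frames:
        # advance the right-stream cursor while the next frame is closer
        while j + 1 < len(right_frames) and \
                abs(right_frames[j + 1][0] - ts_l) <= abs(right_frames[j][0] - ts_l):
            j += 1
        ts_r, frame_r = right_frames[j]
        if abs(ts_r - ts_l) <= max_skew_ms:
            pairs.append((frame_l, frame_r))
    return pairs
```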
- The two-viewpoint stereoscopic image synthesizing system according to claim 12, wherein the acquisition unit further comprises a camera driver module configured to perform driving processing on the two captured views and output them to the processing unit.
- The two-viewpoint stereoscopic image synthesizing system according to claim 11, wherein the processing unit comprises a preprocessing module, an extraction module, a matching module, and a synthesis module, wherein: the preprocessing module is configured to register a dedicated buffer in memory for the camera preview display; fetch single-frame data from the left and right video streams and store each in its dedicated buffer; perform software synchronization using the frame-data timestamp; convert the acquired YUV-format data to RGB format; and apply image smoothing and size transformation to the format-converted left and right views before outputting them to the extraction module; the extraction module is configured to extract features from the preprocessed left and right views using the SIFT feature matching algorithm and generate 32-dimensional SIFT feature descriptors; the matching module is configured to match the extracted features of the left and right views using the SIFT feature matching algorithm, taking the point in the right view with the smallest Euclidean distance as the matching point of the current left-view SIFT keypoint and recording the coordinates of each matched pair; and the synthesis module is configured to arrange the pixels of the left and right views in a preset pixel arrangement to generate one frame of stereoscopic image data displayable under the raster.
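The preprocessing and matching chain can be sketched with OpenCV as below. The NV21 input layout, smoothing kernel, and target size are illustrative assumptions; note also that stock OpenCV SIFT produces 128-dimensional descriptors, whereas the claim recites a 32-dimensional descriptor, so this stands in for the claimed variant rather than reproducing it.

```python
import cv2

def preprocess(frame_nv21, size=(640, 480)):
    """Convert a single-channel NV21 YUV frame (height*3/2 x width, a
    common camera preview format) to RGB, then smooth and resize it."""
    rgb = cv2.cvtColor(frame_nv21, cv2.COLOR_YUV2RGB_NV21)
    smoothed = cv2.GaussianBlur(rgb, (5, 5), 0)
    return cv2.resize(smoothed, size)

def extract_and_match(left_rgb, right_rgb):
    """Detect SIFT keypoints in both views and pair each left keypoint
    with the right-view descriptor nearest in Euclidean (L2) distance,
    returning the coordinates of each matched pair."""
    sift = cv2.SIFT_create()
    gray_l = cv2.cvtColor(left_rgb, cv2.COLOR_RGB2GRAY)
    gray_r = cv2.cvtColor(right_rgb, cv2.COLOR_RGB2GRAY)
    kp_l, des_l = sift.detectAndCompute(gray_l, None)
    kp_r, des_r = sift.detectAndCompute(gray_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.match(des_l, des_r)  # best right match per left keypoint
    return [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches]
```

The matched coordinate pairs feed directly into the RANSAC culling sketched after the earlier method claim.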
- The two-viewpoint stereoscopic image synthesizing system according to claim 14, wherein the processing unit further comprises a culling module configured to remove mismatched feature points using the RANSAC algorithm and to estimate a pixel-coordinate mapping model between the left and right views.
- The two-viewpoint stereoscopic image synthesizing system according to claim 18, wherein the processing unit further comprises an occlusion repair module configured to, when the estimated left-right pixel-coordinate mapping model indicates that an occluded region exists, correct the area of one view occluded by a foreign object using the gray values of the corresponding pixels in the other view, thereby repairing the occluded area.
- The two-viewpoint stereoscopic image synthesizing system according to claim 19, wherein the processing unit further comprises a noise repair module configured to detect noise in the left and right views with a median filter, label the noise points, and output the result to the synthesis module.
- A computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the method of any one of claims 1 to 10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410489839.6 | 2014-09-22 | ||
CN201410489839.6A CN105430368A (en) | 2014-09-22 | 2014-09-22 | Two-viewpoint stereo image synthesizing method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016045425A1 (en) | 2016-03-31 |
Family
ID=55508266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/082557 WO2016045425A1 (en) | 2014-09-22 | 2015-06-26 | Two-viewpoint stereoscopic image synthesizing method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105430368A (en) |
WO (1) | WO2016045425A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105913474A (en) * | 2016-04-05 | 2016-08-31 | 清华大学深圳研究生院 | Binocular three-dimensional reconstruction device and three-dimensional reconstruction method thereof, and Android application |
CN106097289B (en) * | 2016-05-30 | 2018-11-27 | 天津大学 | A kind of stereo-picture synthetic method based on MapReduce model |
US10650506B2 (en) * | 2016-07-22 | 2020-05-12 | Sony Corporation | Image processing apparatus and image processing method |
CN106643671B (en) * | 2016-12-01 | 2019-04-09 | 江苏省测绘工程院 | A kind of underwater cloud denoising method based on airborne LiDAR sounding system |
CN107404644A (en) * | 2017-07-27 | 2017-11-28 | 深圳依偎控股有限公司 | It is a kind of based on the live display methods of double 3D for taking the photograph collection and system |
CN107786866B (en) * | 2017-09-30 | 2020-05-19 | 深圳睛灵科技有限公司 | Binocular vision image synthesis system and method |
CN113115087B (en) * | 2021-03-22 | 2022-07-12 | 西安交通大学 | Wireless updated content U disk and implementation method thereof |
CN113673648A (en) * | 2021-08-31 | 2021-11-19 | 云南昆钢电子信息科技有限公司 | Unmanned electric locomotive two-dimensional code positioner |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102157112A (en) * | 2011-04-07 | 2011-08-17 | 黑龙江省四维影像数码科技有限公司 | Seamless splicing separate LED free stereo display screen |
CN102385816A (en) * | 2011-11-22 | 2012-03-21 | 吉林大学 | Manufacture method of slit grating for LED (Light Emitting Display) screen naked-eye stereo display |
CN102572482A (en) * | 2012-01-06 | 2012-07-11 | 浙江大学 | 3D (three-dimensional) reconstruction method for stereo/multi-view videos based on FPGA (field programmable gata array) |
US20120194512A1 (en) * | 2011-01-31 | 2012-08-02 | Samsung Electronics Co., Ltd. | Three-dimensional image data display controller and three-dimensional image data display system |
CN103108195A (en) * | 2011-11-10 | 2013-05-15 | 鸿富锦精密工业(深圳)有限公司 | Device capable of shooting in stereoscopic mode |
CN203444715U (en) * | 2013-08-13 | 2014-02-19 | 北京乐成光视科技发展有限公司 | LED display screen used for naked eye 3D display |
CN103995361A (en) * | 2014-06-17 | 2014-08-20 | 上海新视觉立体显示科技有限公司 | Naked eye 3D display pixel unit and multi-view naked eye 3D image display device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103345736B (en) * | 2013-05-28 | 2016-08-31 | 天津大学 | A kind of virtual viewpoint rendering method |
- 2014-09-22: CN application CN201410489839.6A filed; published as CN105430368A (status: not active, withdrawn)
- 2015-06-26: PCT application PCT/CN2015/082557 filed; published as WO2016045425A1 (status: active, application filing)
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11526820B2 (en) | 2014-12-09 | 2022-12-13 | Connectwise, Llc | Systems and methods for interfacing between a sales management system and a project planning system |
US12112286B2 (en) | 2014-12-09 | 2024-10-08 | Connect Wise, LLC | Systems and methods for interfacing between a sales management system and a project planning system |
CN108534091A (en) * | 2018-05-10 | 2018-09-14 | 华域视觉科技(上海)有限公司 | Car light and automobile with three-dimensional lighting effect |
CN110889814A (en) * | 2019-11-21 | 2020-03-17 | 上海无线电设备研究所 | Visible light image histogram enhancement method and device based on Sysgen |
CN110889814B (en) * | 2019-11-21 | 2024-04-26 | 上海无线电设备研究所 | Visible light image histogram enhancement method and device based on Sysgen |
CN117834844A (en) * | 2024-01-09 | 2024-04-05 | 国网湖北省电力有限公司荆门供电公司 | Binocular stereo matching method based on feature correspondence |
Also Published As
Publication number | Publication date |
---|---|
CN105430368A (en) | 2016-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016045425A1 (en) | Two-viewpoint stereoscopic image synthesizing method and system | |
US9948919B2 (en) | Stereoscopic 3D camera for virtual reality experience | |
US10511787B2 (en) | Light-field camera | |
CN102917235B (en) | Image processing apparatus and image processing method | |
JP5814692B2 (en) | Imaging apparatus, control method therefor, and program | |
CN203233507U (en) | Video signal processing equipment | |
CN101883215A (en) | Imaging device | |
Schmeing et al. | Depth image based rendering: A faithful approach for the disocclusion problem | |
CN105635720A (en) | Stereo vision camera with double-lens single sensor | |
US20130088574A1 (en) | Detective Adjusting Apparatus for Stereoscopic Image and Related Method | |
CN102209254A (en) | One-dimensional integrated imaging method and device | |
TWI584050B (en) | Panoramic stereoscopic image synthesis method, apparatus and mobile terminal | |
WO2012068724A1 (en) | Three-dimensional image acquisition system and method | |
US20120105593A1 (en) | Multi-view video and still 3d capture system | |
JP2009175866A (en) | Stereoscopic image generation device, its method, and its program | |
CN103488039A (en) | 3D camera module and electronic equipment with 3D camera module | |
JP2013115668A (en) | Image processing apparatus, image processing method, and program | |
JP2004200973A (en) | Apparatus and method of inputting simple stereoscopic image, program, and recording medium | |
CN107071391B (en) | A method of enhancing display 3D naked eye figure | |
CN201957179U (en) | Stereoscopic display system based on digital micro-mirror device (DMD) | |
CN101854559A (en) | Multimode stereoscopic three-dimensional camera system | |
JP2012134885A (en) | Image processing system and image processing method | |
CN207603821U (en) | A kind of bore hole 3D systems based on cluster and rendering | |
CN212364739U (en) | Holographic image display system | |
US20170048511A1 (en) | Method for Stereoscopic Reconstruction of Three Dimensional Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 15844638; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | EP: PCT application non-entry in European phase | Ref document number: 15844638; Country of ref document: EP; Kind code of ref document: A1 |