WO2023236508A1 - Image stitching method and system based on billion-pixel array camera - Google Patents
- Publication number
- WO2023236508A1 (PCT/CN2022/141925)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature point
- spliced
- moving speed
- gigapixel
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
Definitions
- the present application relates to the field of image processing, and more specifically, to an image splicing method and system based on a gigapixel array camera.
- the array camera replaces the shooting effect of one large lens with multiple small lenses. Its principle is to control multiple cameras to shoot at the same time. Compared with a traditional camera, a gigapixel array camera has a wider field of view, produces larger photos, and is smaller in size.
- Image registration is the key to image splicing. Image registration aims to find the same area in two images to calculate the coordinate changes between images. The accuracy of image registration directly determines the quality of image splicing.
- in the prior art, image registration is usually achieved by performing grayscale processing, angle transformation, edge processing, and similar operations on the image itself, ignoring the image deviation caused by the movement of the shooting target and the shooting device, which results in low image stitching accuracy.
- the purpose of the embodiments of the present invention is to provide an image splicing method and system based on a gigapixel array camera, which determines an offset rate based on the movement of the gigapixel array camera and the target object and corrects the feature points through the offset rate, so as to avoid the image deviation caused by movement and improve the efficiency and accuracy of image stitching.
- an image splicing method based on a gigapixel array camera includes: acquiring image data captured by a gigapixel array camera, the image data being a first image to be spliced and a second image to be spliced of a target object; performing feature point extraction on the first image to be spliced and the second image to be spliced respectively to determine a first feature point and a second feature point; obtaining a first moving speed of the gigapixel array camera; obtaining a second moving speed of the target object; calculating an offset rate based on the first moving speed and the second moving speed; adjusting the first feature point and the second feature point based on the offset rate to obtain a first corrected feature point and a second corrected feature point; matching the first corrected feature point and the second corrected feature point to obtain multiple optimal feature point pairs; and screening the multiple optimal feature point pairs to splice the first image to be spliced and the second image to be spliced into a total image.
- the first image to be spliced and the second image to be spliced are two consecutive frame images.
- performing feature point extraction on the first image to be spliced and the second image to be spliced respectively to determine the first feature point and the second feature point includes: for each pixel on the first image to be spliced or the second image to be spliced, with that pixel as the center, calculating the grayscale difference of n adjacent pixels; if more than n/2 adjacent pixels have a grayscale difference that meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
- calculating the offset rate based on the first moving speed and the second moving speed includes: calculating a first offset s1 and a second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
- if the gigapixel array camera and the target object move in opposite directions, the offset rate R is calculated by the following formula:
- t2 represents the shooting time of the second image to be spliced
- t1 represents the shooting time of the first image to be spliced
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vd1 represents the moving speed of the target object at time t1.
- calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes calculating them through the following formulas: s1 = |vx1 - vx2| * Δt and s2 = |vd1 - vd2| * Δt, where:
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vx2 represents the moving speed of the gigapixel array camera at time t2
- vd1 represents the moving speed of the target object at time t1
- vd2 represents the moving speed of the target object at time t2
- Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
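- As a purely illustrative numerical example (the speeds and times below are assumed values, not figures from the application): with vx1 = 2 m/s, vx2 = 3 m/s, vd1 = 1 m/s, vd2 = 1.5 m/s and Δt = 0.5 s, the offsets are s1 = |2 - 3| * 0.5 = 0.5 m and s2 = |1 - 1.5| * 0.5 = 0.25 m; these two offsets are then combined into the offset rate R according to whether the camera and the target move in the same or in opposite directions.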
- adjusting the first feature point based on the offset rate to obtain a first corrected feature point includes: constructing a three-dimensional coordinate system; obtaining the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and performing a coordinate transformation on that feature point through the following formula:
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point; all feature points among the first feature points are traversed and the above steps are repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
- adjusting the second feature point based on the offset rate to obtain a second corrected feature point includes: obtaining the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and performing a coordinate transformation on that feature point through the following formula:
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center of the camera, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point; all feature points among the second feature points are traversed and the above steps are repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
- matching the first corrected feature point and the second corrected feature point to obtain multiple optimal feature point pairs includes: using the Hamming distance to match each feature point in the first corrected feature points and the second corrected feature points, and obtaining the multiple optimal feature point pairs with the shortest Hamming distances.
- the splicing of the first image to be spliced and the second image to be spliced to obtain a total image includes: using a weighted fusion algorithm to obtain a spliced total image.
- in another aspect of the embodiments of the present invention, an image splicing system based on a gigapixel array camera is provided, including:
- an image data acquisition module for acquiring image data captured by the gigapixel array camera
- the image data is a first image to be spliced and a second image to be spliced of a target object
- a feature point determination module, configured to extract feature points from the first image to be spliced and the second image to be spliced respectively, and determine a first feature point and a second feature point; a speed determination module, used to obtain the first moving speed of the gigapixel array camera and the second moving speed of the target object; an offset rate calculation module, used to calculate the offset rate based on the first moving speed and the second moving speed; and a feature point correction module, used to adjust the first feature point and the second feature point based on the offset rate to obtain a first corrected feature point and a second corrected feature point.
- an optimal feature point pair acquisition module used to match the first corrected feature point and the second corrected feature point to obtain multiple optimal feature point pairs
- an image splicing module, used to screen the multiple optimal feature point pairs and splice the first image to be spliced and the second image to be spliced to obtain a total image.
- the first image to be spliced and the second image to be spliced are two consecutive frame images.
- the feature point determination module is further configured to: for each pixel on the first image to be spliced or the second image to be spliced, with that pixel as the center, calculate the grayscale difference of n neighboring pixels; if there are more than n/2 adjacent pixels whose grayscale difference meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
- the offset rate calculation module is further configured to: calculate the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
- if the gigapixel array camera and the target object move in opposite directions, the offset rate R is calculated by the following formula:
- t2 represents the shooting time of the second image to be spliced
- t1 represents the shooting time of the first image to be spliced
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vd1 represents the moving speed of the target object at time t1.
- calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes calculating them through the following formulas: s1 = |vx1 - vx2| * Δt and s2 = |vd1 - vd2| * Δt, where:
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vx2 represents the moving speed of the gigapixel array camera at time t2
- vd1 represents the moving speed of the target object at time t1
- vd2 represents the moving speed of the target object at time t2
- Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
- the feature point correction module is further used to: construct a three-dimensional coordinate system; obtain the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and perform a coordinate transformation on that feature point through the following formula:
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point; all feature points among the first feature points are traversed and the above steps are repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
- the feature point correction module is further configured to: obtain the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and perform a coordinate transformation on that feature point through the following formula:
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center of the camera, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point; all feature points among the second feature points are traversed and the above steps are repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
- the optimal feature point pair acquisition module is further configured to: use the Hamming distance to match each feature point in the first corrected feature points and the second corrected feature points, and obtain the multiple optimal feature point pairs with the shortest Hamming distances.
- the image splicing module is further used to: use a weighted fusion algorithm to obtain the spliced total image.
- the present invention uses a gigapixel array camera to photograph a target object and obtain a first image to be spliced and a second image to be spliced; feature point extraction is performed on the first image to be spliced and the second image to be spliced respectively to determine a first feature point and a second feature point; the first moving speed of the gigapixel array camera and the second moving speed of the target object are obtained; an offset rate is calculated based on the first moving speed and the second moving speed; the first feature point and the second feature point are adjusted based on the offset rate to obtain a first corrected feature point and a second corrected feature point; the first corrected feature point and the second corrected feature point are matched to obtain multiple optimal feature point pairs; and the multiple optimal feature point pairs are screened to splice the first image to be spliced and the second image to be spliced into a total image.
- the calculation of the offset rate takes into account the errors that the moving direction and speed of the camera and the subject may cause to the image stitching.
- a three-dimensional coordinate system is introduced to combine the camera parameters with the offset rate and correct the feature points of the two images; the image data collected in this way is of higher quality and can improve the efficiency and accuracy of image stitching.
- Figure 1 is a schematic flow chart of an image stitching method based on a gigapixel array camera provided by an embodiment of the present application
- Figure 2 is a schematic structural diagram of an image stitching system based on a gigapixel array camera provided by an embodiment of the present application.
- Embodiments of the present application provide an image splicing method and system based on a gigapixel array camera, which include: photographing a target object with a gigapixel array camera to acquire a first image to be spliced and a second image to be spliced; performing feature point extraction on the first image to be spliced and the second image to be spliced respectively to determine a first feature point and a second feature point; obtaining the first moving speed of the gigapixel array camera and the second moving speed of the target object; calculating an offset rate based on the two moving speeds; adjusting the first feature point and the second feature point based on the offset rate to obtain a first corrected feature point and a second corrected feature point; matching the corrected feature points to obtain multiple optimal feature point pairs; and screening the multiple optimal feature point pairs to splice the first image to be spliced and the second image to be spliced into a total image.
- the present invention can improve the efficiency and accuracy of image stitching.
- the image splicing method and system based on a gigapixel array camera can be integrated into electronic equipment, and the electronic equipment can be terminals, servers, and other equipment.
- the terminal can be a light field camera, a vehicle camera, a mobile phone, a tablet, a smart Bluetooth device, a laptop, or a personal computer (PC);
- the server can be a single server or a server cluster composed of multiple servers.
- the above examples should not be construed as limitations of this application.
- Figure 1 shows a schematic flowchart of an image stitching method based on a gigapixel array camera provided by an embodiment of the present application. Please refer to Figure 1, which specifically includes the following steps:
- the gigapixel array camera is a cross-scale imaging camera that combines a main lens and an array of N micro lenses.
- the micro lenses can form different focal lengths according to different optical path designs; when multiple lenses work in parallel, they can capture images at different distances, from near to far.
- the first image to be spliced and the second image to be spliced may be two consecutive frame images.
- the first image to be spliced and the second image to be spliced may be two frames of images acquired within a preset time interval; for example, if the starting time is 17:00:00 and the preset time interval is 5 seconds, the image taken at 17:00:00 is used as the first image to be stitched and the image taken at 17:00:05 as the second image to be stitched.
- the computer device receives the image data collected by the gigapixel array camera; the image data can be transmitted through fifth-generation (5G) mobile communication technology or through a Wi-Fi network.
- the image data can be portraits, large animals, small animals, vehicles, plants, etc.
- S120 Extract feature points from the first image to be spliced and the second image to be spliced respectively, and determine the first feature point and the second feature point.
- for each pixel on the first image to be spliced or the second image to be spliced, with that pixel as the center, the grayscale difference of n adjacent pixels on a circle of radius d can be calculated; if there are more than n/2 adjacent pixels whose grayscale difference meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
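- A minimal sketch of this check for a single pixel, assuming an 8-bit grayscale image and treating the preset condition as a fixed intensity threshold (both the threshold value and the function name are illustrative assumptions, not details fixed by the application):

```python
import numpy as np

def is_feature_point(gray, cx, cy, d=3, n=16, thresh=20):
    """Test the rule described above for the pixel (cx, cy): sample n
    neighbours on a circle of radius d, count how many differ from the
    center by more than `thresh` in grayscale value (the assumed preset
    condition), and report a feature point if more than n/2 do."""
    h, w = gray.shape
    if cx < d or cy < d or cx >= w - d or cy >= h - d:
        return False  # the circle would leave the image
    center = int(gray[cy, cx])
    count = 0
    for a in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        px = int(round(cx + d * np.cos(a)))
        py = int(round(cy + d * np.sin(a)))
        if abs(int(gray[py, px]) - center) > thresh:
            count += 1
    return count > n // 2
```

- Scanning every pixel of the first and second images to be spliced with such a test is what yields the first feature points and the second feature points.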
- step S150 may specifically include the following steps:
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vx2 represents the moving speed of the gigapixel array camera at time t2
- vd1 represents the moving speed of the target object at time t1
- vd2 represents the moving speed of the target object at time t2
- Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
- if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
- if the gigapixel array camera and the target object move in opposite directions, the offset rate R is calculated by the following formula:
- t2 represents the shooting time of the second image to be spliced
- t1 represents the shooting time of the first image to be spliced
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vd1 represents the moving speed of the target object at time t1.
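- A minimal sketch of the offset computation that feeds the offset rate R, assuming scalar speeds and using the offset formulas s1 = |vx1 - vx2| * Δt and s2 = |vd1 - vd2| * Δt given elsewhere in this application (the function name and signature are illustrative):

```python
def compute_offsets(vx1, vx2, vd1, vd2, t1, t2):
    """Compute the camera offset s1 and the target offset s2 over the
    interval between the two shots; the direction-dependent combination
    of s1 and s2 into the offset rate R is left to the caller."""
    dt = abs(t2 - t1)          # shooting time difference between the two images
    s1 = abs(vx1 - vx2) * dt   # offset of the gigapixel array camera
    s2 = abs(vd1 - vd2) * dt   # offset of the target object
    return s1, s2, dt
```

- For instance, compute_offsets(2, 3, 1, 1.5, 0.0, 0.5) returns s1 = 0.5, s2 = 0.25 and dt = 0.5 for these assumed speeds and times.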
- step S160 may specifically include the following steps:
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point.
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center of the camera, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point.
- the calculation of the offset rate takes into account the errors that may be caused by the moving direction and speed of the camera and the subject in image stitching.
- This implementation method introduces a three-dimensional coordinate system and innovatively combines the camera's internal parameter matrix and external parameter matrix with the offset rate caused by movement to correct the feature points of the two images and reduce the occurrence of splicing errors.
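- A rough sketch of this correction step. Building the internal parameter matrix from fx, fy, cx, cy and the external parameter matrix [Q T] is standard pinhole-camera practice; the exact formula by which the offset rate R enters the transformation is given only as an image in the application, so applying R as a scale on the projected coordinates below is purely an illustrative assumption:

```python
import numpy as np

def correct_feature_point(pt, fx, fy, cx, cy, Q, T, R):
    """Transform a 3-D feature point (x, y, z) with the camera's internal
    and external parameter matrices and apply the offset rate R.  Applying
    R to the projected coordinates is an assumption made for this sketch,
    not the exact formula of the application."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])                             # internal parameter matrix
    QT = np.hstack([np.asarray(Q, dtype=float),
                    np.asarray(T, dtype=float).reshape(3, 1)])  # [Q T] external parameter matrix
    xyz1 = np.append(np.asarray(pt, dtype=float), 1.0)          # homogeneous point (x, y, z, 1)
    converted = K @ QT @ xyz1                                   # coordinate conversion value
    pixel = converted[:2] / converted[2]                        # project to image coordinates
    return R * pixel                                            # assumed use of the offset rate R
```

- Traversing all of the first (or second) feature points with such a helper and collecting the corrected coordinate values is what the feature point correction step does before matching.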
- Hamming distance is used to match each feature point in the first modified feature point and the second modified feature point to obtain a plurality of optimal feature point pairs with the shortest Hamming distance.
- the required number of optimal feature point pairs can be preset, or the Hamming distance threshold can be preset, which is not specifically limited here.
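- A brief sketch of this matching step, assuming ORB-style binary descriptors and OpenCV's brute-force matcher; the application only states that the Hamming distance is used, so the descriptor type, the library call, and the default limits here are illustrative assumptions:

```python
import cv2

def best_pairs(desc1, desc2, max_pairs=50, max_dist=None):
    """Match two sets of binary descriptors by Hamming distance and keep
    the pairs with the shortest distances, optionally applying a preset
    distance threshold or a preset number of pairs as described above."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
    if max_dist is not None:
        matches = [m for m in matches if m.distance <= max_dist]
    return matches[:max_pairs]
```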
- a weighted fusion algorithm can be used to obtain the total image after stitching.
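- A minimal sketch of a weighted fusion of two aligned, equally sized images; linear left-to-right weights are one common choice and are assumed here only for illustration, since the application does not specify the weighting scheme:

```python
import numpy as np

def weighted_fuse(img1, img2):
    """Blend two aligned images of the same size: the weight of img1
    decreases linearly from left to right across the overlap, and img2
    takes the complementary weight."""
    h, w = img1.shape[:2]
    alpha = np.linspace(1.0, 0.0, w)                                  # per-column weight of img1
    alpha = alpha[None, :, None] if img1.ndim == 3 else alpha[None, :]
    fused = img1.astype(np.float64) * alpha + img2.astype(np.float64) * (1.0 - alpha)
    return fused.astype(img1.dtype)
```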
- this embodiment also provides an image stitching system based on a gigapixel array camera. As shown in Figure 2, the system includes:
- the image data acquisition module 210 is used to acquire image data captured by a gigapixel array camera; the image data is the first image to be spliced and the second image to be spliced of the target object.
- the feature point determination module 220 is configured to extract feature points from the first image to be spliced and the second image to be spliced respectively, and determine the first feature point and the second feature point.
- the speed determination module 230 is configured to obtain the first moving speed and the second moving speed of the gigapixel array camera and the target object respectively.
- the deviation rate calculation module 240 is configured to calculate a deviation rate based on the first moving speed and the second moving speed.
- the feature point correction module 250 is configured to adjust the first feature point and the second feature point based on the offset rate to obtain first corrected feature points and second corrected feature points.
- the optimal feature point pair acquisition module 260 is used to match the first modified feature point and the second modified feature point to obtain multiple optimal feature point pairs.
- the image splicing module 270 is used to screen the multiple optimal feature point pairs, splice the first image to be spliced and the second image to be spliced, and obtain a total image.
- the first image to be spliced and the second image to be spliced are two consecutive frame images.
- the feature point determination module 220 is further configured to: for each pixel on the first image to be spliced or the second image to be spliced, with that pixel as the center, calculate the grayscale difference of n neighboring pixels; if there are more than n/2 adjacent pixels whose grayscale difference meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
- the offset rate calculation module 240 is further configured to: calculate the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
- if the gigapixel array camera and the target object move in opposite directions, the offset rate R is calculated by the following formula:
- t2 represents the shooting time of the second image to be spliced
- t1 represents the shooting time of the first image to be spliced
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vd1 represents the moving speed of the target object at time t1.
- calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes calculating them through the following formulas: s1 = |vx1 - vx2| * Δt and s2 = |vd1 - vd2| * Δt, where:
- vx1 represents the moving speed of the gigapixel array camera at time t1
- vx2 represents the moving speed of the gigapixel array camera at time t2
- vd1 represents the moving speed of the target object at time t1
- vd2 represents the moving speed of the target object at time t2
- Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
- the feature point correction module 250 is further used to: construct a three-dimensional coordinate system; obtain the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and perform a coordinate transformation on that feature point through the following formula:
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point; all feature points among the first feature points are traversed and the above steps are repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
- the feature point correction module 250 is further configured to: obtain the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and perform a coordinate transformation on that feature point through the following formula:
- fx and fy represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters
- cx and cy represent the optical center of the camera, in pixels
- Q represents the rotation matrix
- T represents the translation matrix
- [Q T] represents the external parameter matrix of the gigapixel array camera
- R represents the offset rate, and the result of the transformation is the corrected coordinate value of the feature point; all feature points among the second feature points are traversed and the above steps are repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
- the optimal feature point pair acquisition module 260 is further configured to: use the Hamming distance to match each feature point in the first corrected feature points and the second corrected feature points, and obtain the multiple optimal feature point pairs with the shortest Hamming distances.
- the image splicing module 270 is further configured to use a weighted fusion algorithm to obtain a spliced total image.
- this system takes into account the errors that the moving direction and speed of the camera and the subject may cause in image stitching; it also introduces a three-dimensional coordinate system, combines the camera parameters with the offset rate, and corrects the feature points of the two images, which can improve the efficiency and accuracy of image stitching.
- the disclosed devices and methods can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division. In actual implementation, there may be other division methods.
- multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
- the coupling or direct coupling or communication connection between each other shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in the embodiment provided by this application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
- if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
Abstract
The present application is specifically applied to the field of image processing. Provided are an image stitching method and system based on a billion-pixel array camera. The method comprises: a billion-pixel array camera photographing a target object, so as to acquire a first image to be subjected to stitching and a second image to be subjected to stitching; respectively performing feature point extraction on said first image and said second image, so as to determine first feature points and second feature points; acquiring a first moving speed of the billion-pixel array camera; acquiring a second moving speed of the target object; calculating a deviation rate on the basis of the first moving speed and the second moving speed; adjusting the first feature points and the second feature points on the basis of the deviation rate, so as to obtain first corrected feature points and second corrected feature points; matching the first corrected feature points and the second corrected feature points, so as to obtain a plurality of optimal feature point pairs; and performing screening on the plurality of optimal feature point pairs, and stitching said first image and said second image, so as to obtain a complete image. In this way, the present invention can improve the splicing efficiency and accuracy of images.
Description
The present application relates to the field of image processing, and more specifically, to an image splicing method and system based on a gigapixel array camera.

An array camera replaces the shooting effect of one large lens with multiple small lenses; its principle is to control multiple cameras to shoot at the same time. Compared with a traditional camera, a gigapixel array camera has a wider field of view, produces larger photos, and is smaller in size.

Image registration is the key to image splicing. Image registration aims to find the same area in two images in order to calculate the coordinate changes between the images, and its accuracy directly determines the quality of image splicing.

In the prior art, image registration is usually achieved by performing grayscale processing, angle transformation, edge processing, and similar operations on the image itself, ignoring the image deviation caused by the movement of the shooting target and the shooting device, which results in low image stitching accuracy.
Contents of the Invention
The purpose of the embodiments of the present invention is to provide an image splicing method and system based on a gigapixel array camera, which determines an offset rate based on the movement of the gigapixel array camera and the target object and corrects the feature points through the offset rate, so as to avoid the image deviation caused by movement and improve the efficiency and accuracy of image stitching. The specific technical solutions are as follows.

In a first aspect of the embodiments of the present invention, an image splicing method based on a gigapixel array camera is provided, including: acquiring image data captured by a gigapixel array camera, the image data being a first image to be spliced and a second image to be spliced of a target object; performing feature point extraction on the first image to be spliced and the second image to be spliced respectively to determine a first feature point and a second feature point; obtaining a first moving speed of the gigapixel array camera; obtaining a second moving speed of the target object; calculating an offset rate based on the first moving speed and the second moving speed; adjusting the first feature point and the second feature point based on the offset rate to obtain a first corrected feature point and a second corrected feature point; matching the first corrected feature point and the second corrected feature point to obtain multiple optimal feature point pairs; and screening the multiple optimal feature point pairs to splice the first image to be spliced and the second image to be spliced into a total image.

Optionally, the first image to be spliced and the second image to be spliced are two consecutive frame images.

Optionally, performing feature point extraction on the first image to be spliced and the second image to be spliced respectively to determine the first feature point and the second feature point includes: for each pixel on the first image to be spliced or the second image to be spliced, with that pixel as the center, calculating the grayscale difference of n adjacent pixels; if more than n/2 adjacent pixels have a grayscale difference that meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
Optionally, calculating the offset rate based on the first moving speed and the second moving speed includes: calculating a first offset s1 and a second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:

If the gigapixel array camera and the target object move in opposite directions, the offset rate R is calculated by the following formula:

where t2 represents the shooting time of the second image to be spliced and t1 represents the shooting time of the first image to be spliced, so that the shooting time difference between the first image to be spliced and the second image to be spliced is Δt = |t2 - t1|; vx1 represents the moving speed of the gigapixel array camera at time t1, and vd1 represents the moving speed of the target object at time t1.
Optionally, calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes calculating the first offset s1 and the second offset s2 through the following formulas:

s1 = |vx1 - vx2| * Δt

s2 = |vd1 - vd2| * Δt

where vx1 represents the moving speed of the gigapixel array camera at time t1, vx2 represents the moving speed of the gigapixel array camera at time t2, vd1 represents the moving speed of the target object at time t1, vd2 represents the moving speed of the target object at time t2, and Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
Optionally, adjusting the first feature point based on the offset rate to obtain a first corrected feature point includes: constructing a three-dimensional coordinate system; obtaining the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and performing a coordinate transformation on that feature point through the following formula:

where the formula yields the coordinate conversion value; the internal parameter matrix of the gigapixel array camera is built from fx and fy, which represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters, and cx and cy, which represent the optical center, in pixels; Q represents the rotation matrix and T the translation matrix, so that [Q T] represents the external parameter matrix of the gigapixel array camera; R represents the offset rate; and the result of the transformation is the corrected coordinate value of the feature point. All feature points among the first feature points are traversed and the above steps are repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
Optionally, adjusting the second feature point based on the offset rate to obtain a second corrected feature point includes: obtaining the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and performing a coordinate transformation on that feature point through the following formula:

where the formula yields the coordinate conversion value; the internal parameter matrix of the gigapixel array camera is built from fx and fy, which represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters, and cx and cy, which represent the optical center of the camera, in pixels; Q represents the rotation matrix and T the translation matrix, so that [Q T] represents the external parameter matrix of the gigapixel array camera; R represents the offset rate; and the result of the transformation is the corrected coordinate value of the feature point. All feature points among the second feature points are traversed and the above steps are repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
Optionally, matching the first corrected feature point and the second corrected feature point to obtain multiple optimal feature point pairs includes: using the Hamming distance to match each feature point in the first corrected feature points and the second corrected feature points, and obtaining the multiple optimal feature point pairs with the shortest Hamming distances.

Optionally, splicing the first image to be spliced and the second image to be spliced to obtain a total image includes: using a weighted fusion algorithm to obtain the spliced total image.

In another aspect of the embodiments of the present invention, an image splicing system based on a gigapixel array camera is provided, including: an image data acquisition module, used to acquire image data captured by the gigapixel array camera, the image data being a first image to be spliced and a second image to be spliced of a target object; a feature point determination module, used to extract feature points from the first image to be spliced and the second image to be spliced respectively and determine a first feature point and a second feature point; a speed determination module, used to obtain the first moving speed of the gigapixel array camera and the second moving speed of the target object; an offset rate calculation module, used to calculate the offset rate based on the first moving speed and the second moving speed; a feature point correction module, used to adjust the first feature point and the second feature point based on the offset rate to obtain a first corrected feature point and a second corrected feature point; an optimal feature point pair acquisition module, used to match the first corrected feature point and the second corrected feature point to obtain multiple optimal feature point pairs; and an image splicing module, used to screen the multiple optimal feature point pairs and splice the first image to be spliced and the second image to be spliced to obtain a total image.

Optionally, the first image to be spliced and the second image to be spliced are two consecutive frame images.

Optionally, the feature point determination module is further configured to: for each pixel on the first image to be spliced or the second image to be spliced, with that pixel as the center, calculate the grayscale difference of n neighboring pixels; if there are more than n/2 adjacent pixels whose grayscale difference meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
Optionally, the offset rate calculation module is further configured to: calculate the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:

If the gigapixel array camera and the target object move in opposite directions, the offset rate R is calculated by the following formula:

where t2 represents the shooting time of the second image to be spliced and t1 represents the shooting time of the first image to be spliced, so that the shooting time difference between the first image to be spliced and the second image to be spliced is Δt = |t2 - t1|; vx1 represents the moving speed of the gigapixel array camera at time t1, and vd1 represents the moving speed of the target object at time t1.
Optionally, calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes calculating the first offset s1 and the second offset s2 through the following formulas:

s1 = |vx1 - vx2| * Δt

s2 = |vd1 - vd2| * Δt

where vx1 represents the moving speed of the gigapixel array camera at time t1, vx2 represents the moving speed of the gigapixel array camera at time t2, vd1 represents the moving speed of the target object at time t1, vd2 represents the moving speed of the target object at time t2, and Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
Optionally, the feature point correction module is further used to: construct a three-dimensional coordinate system; obtain the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and perform a coordinate transformation on that feature point through the following formula:

where the formula yields the coordinate conversion value; the internal parameter matrix of the gigapixel array camera is built from fx and fy, which represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters, and cx and cy, which represent the optical center, in pixels; Q represents the rotation matrix and T the translation matrix, so that [Q T] represents the external parameter matrix of the gigapixel array camera; R represents the offset rate; and the result of the transformation is the corrected coordinate value of the feature point. All feature points among the first feature points are traversed and the above steps are repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
Optionally, the feature point correction module is further configured to: obtain the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and perform a coordinate transformation on that feature point through the following formula:

where the formula yields the coordinate conversion value; the internal parameter matrix of the gigapixel array camera is built from fx and fy, which represent the length of the focal length in the x-axis and y-axis directions, respectively, in millimeters, and cx and cy, which represent the optical center of the camera, in pixels; Q represents the rotation matrix and T the translation matrix, so that [Q T] represents the external parameter matrix of the gigapixel array camera; R represents the offset rate; and the result of the transformation is the corrected coordinate value of the feature point. All feature points among the second feature points are traversed and the above steps are repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
Optionally, the optimal feature point pair acquisition module is further configured to: use the Hamming distance to match each feature point in the first corrected feature points and the second corrected feature points, and obtain the multiple optimal feature point pairs with the shortest Hamming distances.

Optionally, the image splicing module is further used to: use a weighted fusion algorithm to obtain the spliced total image.

The present invention uses a gigapixel array camera to photograph a target object and obtain a first image to be spliced and a second image to be spliced; feature point extraction is performed on the first image to be spliced and the second image to be spliced respectively to determine a first feature point and a second feature point; the first moving speed of the gigapixel array camera and the second moving speed of the target object are obtained; an offset rate is calculated based on the first moving speed and the second moving speed; the first feature point and the second feature point are adjusted based on the offset rate to obtain a first corrected feature point and a second corrected feature point; the first corrected feature point and the second corrected feature point are matched to obtain multiple optimal feature point pairs; and the multiple optimal feature point pairs are screened to splice the first image to be spliced and the second image to be spliced into a total image. The calculation of the offset rate takes into account the errors that the moving direction and speed of the camera and the subject may cause in image stitching; at the same time, a three-dimensional coordinate system is introduced to combine the camera parameters with the offset rate and correct the feature points of the two images. The image data collected in this way is of higher quality, and the efficiency and accuracy of image stitching can be improved.
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can also be obtained from these drawings without creative effort.

Figure 1 is a schematic flowchart of an image stitching method based on a gigapixel array camera provided by an embodiment of the present application.

Figure 2 is a schematic structural diagram of an image stitching system based on a gigapixel array camera provided by an embodiment of the present application.

In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a variety of different configurations.

Accordingly, the following detailed description of the embodiments of the application provided in the drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.
本申请实施例提供了一种基于亿像素阵列式相机的图像拼接方法及系统,包括亿像素阵列式相机拍摄目标对象,获取第一待拼接图像和第二待拼接图像;分别对第一待拼接图像和第二待拼接图像进行特征点提取,确定第一特征点和第二特征点;获取亿像素阵列式相机的第一移动速度;获取目标对象的第二移动速度;基于第一移动速度和第二移动速度,计算偏移率;基于偏移率调整第一特征点和第二特征点,获得第一修正特征点和第二修正特征点;匹配第一修正特征点和第二修正特征点,获得多个最优特征点对;筛选多个最优特征点对,拼接第一待拼接图像和第二待拼接图像,获得总图像。通过上述方式,本发明能够提高图像拼接效率和精度。Embodiments of the present application provide an image splicing method and system based on a gigapixel array camera, which includes photographing a target object with a gigapixel array camera, acquiring a first image to be spliced and a second image to be spliced; and separately processing the first image to be spliced. Extract feature points from the image and the second image to be spliced to determine the first feature point and the second feature point; obtain the first moving speed of the gigapixel array camera; obtain the second moving speed of the target object; based on the first moving speed and Second moving speed, calculate the offset rate; adjust the first feature point and the second feature point based on the offset rate to obtain the first corrected feature point and the second corrected feature point; match the first corrected feature point and the second corrected feature point , obtain multiple optimal feature point pairs; filter multiple optimal feature point pairs, splice the first image to be spliced and the second image to be spliced, and obtain the total image. Through the above method, the present invention can improve the efficiency and accuracy of image stitching.
The image stitching method and system based on a gigapixel array camera may be integrated in an electronic device, which may be a terminal, a server, or the like. The terminal may be a light-field camera, a vehicle-mounted camera, a mobile phone, a tablet computer, a smart Bluetooth device, a laptop, or a personal computer (PC); the server may be a single server or a cluster of multiple servers. The above examples should not be construed as limiting this application.

Figure 1 shows a schematic flowchart of the image stitching method based on a gigapixel array camera provided by an embodiment of the present application. Referring to Figure 1, the method specifically includes the following steps.
S110. Obtain image data captured by the gigapixel array camera; the image data comprises a first image to be stitched and a second image to be stitched of a target object.

The gigapixel array camera is a cross-scale imaging camera that combines one main lens with an array of N micro lenses. Depending on the optical path design, the micro lenses can form different focal lengths, so that when multiple lenses work in parallel they capture scenes at different distances.

Collecting image data with a gigapixel array camera therefore not only increases the amount of collected data and the imaging range by orders of magnitude, but also provides multiple focal points, balancing a large field of view with fine detail.
In one implementation, the first image to be stitched and the second image to be stitched may be two consecutive frames.

In another implementation, the first image to be stitched and the second image to be stitched may be two frames acquired a preset time interval apart. For example, if the starting time is 17:00:00 and the preset interval is 5 seconds, the image captured at 17:00:00 is taken as the first image to be stitched and the image captured at 17:00:05 as the second image to be stitched.
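As a minimal illustration of this frame-pairing step, the sketch below selects the frame closest to a start time and the frame closest to the start time plus the preset interval. The timestamped frame list, the helper name and the 5-second default are assumptions for the example, not part of the original description.

```python
# Hypothetical helper: `frames` is a list of (timestamp_seconds, image) pairs.
def pick_pair(frames, start_ts, interval=5.0):
    """Return the frame nearest start_ts as the first image to be stitched
    and the frame nearest start_ts + interval as the second."""
    first = min(frames, key=lambda f: abs(f[0] - start_ts))
    second = min(frames, key=lambda f: abs(f[0] - (start_ts + interval)))
    return first[1], second[1]
```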
Optionally, a computer device receives the image data collected by the gigapixel array camera. The image data may be transmitted over fifth-generation mobile communication (5G) or over a Wi-Fi network. The captured content may include portraits, large animals, small animals, vehicles, plants, and the like.
S120. Perform feature point extraction on the first image to be stitched and the second image to be stitched respectively, and determine first feature points and second feature points.

Specifically, for each pixel on the first image to be stitched or the second image to be stitched, taking that pixel as the center, the grayscale differences of n neighboring pixels on a circle of radius d are calculated. If more than n/2 of the neighboring pixels have grayscale differences meeting a preset condition, the pixel is determined to be a first feature point or a second feature point.
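The neighborhood test above resembles a FAST-style corner detector. The sketch below is one possible reading of it; the radius d, the number of sampled neighbors n and the difference threshold are assumptions, since the description does not fix their values.

```python
import numpy as np

def detect_feature_points(gray: np.ndarray, d: int = 3, n: int = 16, thresh: float = 20.0):
    """Return (row, col) pixels for which more than n/2 of the n neighbors
    on a circle of radius d differ from the center by more than `thresh`."""
    h, w = gray.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    offsets = [(int(round(d * np.sin(a))), int(round(d * np.cos(a)))) for a in angles]
    points = []
    for r in range(d, h - d):
        for c in range(d, w - d):
            center = float(gray[r, c])
            # count neighbors whose grayscale difference meets the preset condition
            hits = sum(abs(float(gray[r + dr, c + dc]) - center) > thresh
                       for dr, dc in offsets)
            if hits > n // 2:
                points.append((r, c))
    return points
```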
S130. Obtain a first moving speed of the gigapixel array camera.

S140. Obtain a second moving speed of the target object.

S150. Calculate an offset rate based on the first moving speed and the second moving speed.

In one implementation, step S150 may specifically include the following steps.
S151. Calculate a first offset s1 and a second offset s2 based on the first moving speed and the second moving speed respectively, using the following formulas:

s1 = |vx1 - vx2| * Δt

s2 = |vd1 - vd2| * Δt

where vx1 denotes the moving speed of the gigapixel array camera at time t1, vx2 denotes its moving speed at time t2, vd1 denotes the moving speed of the target object at time t1, vd2 denotes its moving speed at time t2, and Δt denotes the capture time difference between the first image to be stitched and the second image to be stitched.
S152. Calculate the offset rate according to the moving directions of the gigapixel array camera and the target object.
Specifically, if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the formula for same-direction movement; if the gigapixel array camera and the target object move in opposite directions, the offset rate R is calculated by the formula for opposite-direction movement.

In both formulas, t2 denotes the capture time of the second image to be stitched and t1 denotes the capture time of the first image to be stitched, so that the capture time difference between the two images is Δt = |t2 - t1|; vx1 denotes the moving speed of the gigapixel array camera at time t1, and vd1 denotes the moving speed of the target object at time t1.
In this way, the calculation formula for the offset rate accounts for the errors that the moving directions and speeds of the camera and of the subject may introduce into image stitching, which improves the stitching accuracy.
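The offsets s1 and s2 follow directly from the formulas above; the sketch below simply transcribes them. The offset rate R itself depends on whether the camera and the target move in the same or in opposite directions, and its exact formula appears only in the original figures, so it is treated as a given input elsewhere in these sketches.

```python
def compute_offsets(vx1, vx2, vd1, vd2, t1, t2):
    """First and second offsets from the camera and target speeds at times t1 and t2."""
    dt = abs(t2 - t1)           # Δt: capture time difference between the two frames
    s1 = abs(vx1 - vx2) * dt    # camera offset
    s2 = abs(vd1 - vd2) * dt    # target-object offset
    return s1, s2
```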
S160. Adjust the first feature points and the second feature points based on the offset rate to obtain first corrected feature points and second corrected feature points.

In one implementation, step S160 may specifically include the following steps.

S161. Construct a three-dimensional coordinate system.
S162. Obtain the coordinate values (x1, y1, z1) of any one of the plurality of first feature points.
S163. Transform the coordinates of that feature point using a formula built from the intrinsic parameter matrix of the gigapixel array camera, the extrinsic parameter matrix [Q T], and the offset rate R, to obtain the coordinate conversion value and, from it, the corrected coordinate value of the feature point. In the intrinsic parameter matrix, fx and fy denote the focal lengths along the x-axis and y-axis in millimeters, and cx and cy denote the optical center in pixels; Q denotes the rotation matrix, T denotes the translation matrix, and [Q T] denotes the extrinsic parameter matrix of the gigapixel array camera; R denotes the offset rate.
S164. Traverse all of the first feature points, repeating steps S162–S163.

S165. Map out the first corrected feature points based on the corrected coordinate values of all of the first feature points.
S166. Obtain the coordinate values (x2, y2, z2) of any one of the plurality of second feature points.
S167. Transform the coordinates of that feature point using the same formula, built from the intrinsic parameter matrix of the gigapixel array camera, the extrinsic parameter matrix [Q T], and the offset rate R, to obtain the coordinate conversion value and, from it, the corrected coordinate value of the feature point. Here fx and fy denote the focal lengths along the x-axis and y-axis in millimeters, cx and cy denote the optical center of the camera in pixels, Q denotes the rotation matrix, T denotes the translation matrix, [Q T] denotes the extrinsic parameter matrix, and R denotes the offset rate.
S168. Traverse all of the second feature points, repeating steps S166–S167.

S169. Map out the second corrected feature points based on the corrected coordinate values of all of the second feature points.
The calculation of the offset rate takes into account the errors that the moving directions and speeds of the camera and of the subject may introduce into image stitching. By introducing a three-dimensional coordinate system, this implementation combines the camera's intrinsic and extrinsic parameter matrices with the movement-induced offset rate to correct the feature points of the two images, reducing the occurrence of stitching errors.
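A minimal sketch of this correction step is given below. The exact formula is reproduced only as an image in the original publication, so the sketch assumes the usual pinhole composition (intrinsic matrix times extrinsic matrix [Q | T] applied to the homogeneous point), scaled by the offset rate R; all inputs (K, Q, T, R) are assumed to be known from calibration and from step S150.

```python
import numpy as np

def correct_feature_point(p, K, Q, T, R):
    """p: (x, y, z) coordinates of a feature point in the constructed 3D frame.
    K: 3x3 intrinsic matrix built from fx, fy (mm) and cx, cy (pixels).
    Q: 3x3 rotation matrix, T: length-3 translation vector, so [Q T] is the extrinsic matrix.
    R: scalar offset rate. Returns the assumed corrected coordinate value."""
    extrinsic = np.hstack([Q, np.asarray(T, dtype=float).reshape(3, 1)])  # [Q T], 3x4
    p_h = np.append(np.asarray(p, dtype=float), 1.0)                      # homogeneous point
    converted = K @ extrinsic @ p_h                                       # coordinate conversion value
    return R * converted                                                  # assumed scaling by the offset rate
```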
S170. Match the first corrected feature points and the second corrected feature points to obtain a plurality of optimal feature point pairs.

Specifically, the Hamming distance is used to match each of the first corrected feature points against the second corrected feature points, and the pairs with the shortest Hamming distances are taken as the optimal feature point pairs.

The required number of optimal feature point pairs may be preset, or a Hamming distance threshold may be preset; no specific limitation is imposed here.
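The description does not name a descriptor or a library for the Hamming-distance matching. The sketch below is one common realization using OpenCV: binary ORB descriptors computed at the corrected points (projected to pixel coordinates) and a brute-force matcher with the Hamming norm. The keypoint size of 7 and the `keep` count are assumptions.

```python
import cv2

def match_corrected_points(img1, img2, pts1, pts2, keep=50):
    """pts1, pts2: corrected feature points as (x, y) pixel coordinates.
    Returns the `keep` matches with the shortest Hamming distances."""
    orb = cv2.ORB_create()
    kp1 = [cv2.KeyPoint(float(x), float(y), 7) for (x, y) in pts1]
    kp2 = [cv2.KeyPoint(float(x), float(y), 7) for (x, y) in pts2]
    kp1, des1 = orb.compute(img1, kp1)   # binary descriptors at the corrected points
    kp2, des2 = orb.compute(img2, kp2)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return matches[:keep]                # shortest-Hamming-distance pairs first
```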
S180. Screen the plurality of optimal feature point pairs, and stitch the first image to be stitched and the second image to be stitched to obtain a total image.

A weighted fusion algorithm may be used to obtain the stitched total image.

Thus, on the basis of image data that balances a large field of view with fine detail, image deviations caused by movement of the camera and of the subject can be avoided, and the efficiency and accuracy of image stitching can be improved.
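As a sketch of the weighted-fusion step, the example below blends the overlap region of two already-aligned images with linearly varying weights. Linear weighting and a single horizontal overlap are assumptions; the description only requires some weighted fusion.

```python
import numpy as np

def weighted_blend(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """left, right: aligned images of equal height; the last `overlap` columns of
    `left` cover the same scene as the first `overlap` columns of `right`."""
    h, lw = left.shape[:2]
    total_w = lw + right.shape[1] - overlap
    out = np.zeros((h, total_w) + left.shape[2:], dtype=np.float32)
    out[:, :lw] = left
    out[:, total_w - right.shape[1]:] = right
    alpha = np.linspace(1.0, 0.0, overlap)                     # weight of the left image
    alpha = alpha[None, :, None] if left.ndim == 3 else alpha[None, :]
    seam_left = left[:, -overlap:].astype(np.float32)
    seam_right = right[:, :overlap].astype(np.float32)
    out[:, lw - overlap:lw] = alpha * seam_left + (1.0 - alpha) * seam_right
    return out.astype(left.dtype)
```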
To implement the above method embodiments, this embodiment further provides an image stitching system based on a gigapixel array camera. As shown in Figure 2, the system includes the following modules.
An image data acquisition module 210, configured to obtain image data captured by the gigapixel array camera; the image data comprises a first image to be stitched and a second image to be stitched of a target object.

A feature point determination module 220, configured to perform feature point extraction on the first image to be stitched and the second image to be stitched respectively, and determine first feature points and second feature points.

A speed determination module 230, configured to obtain a first moving speed of the gigapixel array camera and a second moving speed of the target object.

An offset rate calculation module 240, configured to calculate an offset rate based on the first moving speed and the second moving speed.

A feature point correction module 250, configured to adjust the first feature points and the second feature points based on the offset rate to obtain first corrected feature points and second corrected feature points.

An optimal feature point pair acquisition module 260, configured to match the first corrected feature points and the second corrected feature points to obtain a plurality of optimal feature point pairs.

An image stitching module 270, configured to screen the plurality of optimal feature point pairs and stitch the first image to be stitched and the second image to be stitched to obtain a total image.
Optionally, the first image to be stitched and the second image to be stitched are two consecutive frames.

Optionally, the feature point determination module 220 is further configured to: for each pixel on the first image to be stitched or the second image to be stitched, taking that pixel as the center, calculate the grayscale differences of n neighboring pixels; and if more than n/2 of the neighboring pixels have grayscale differences meeting a preset condition, determine the pixel to be a first feature point or a second feature point.
Optionally, the offset rate calculation module 240 is further configured to: calculate a first offset s1 and a second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, calculate the offset rate R by the formula for same-direction movement; and if the gigapixel array camera and the target object move in opposite directions, calculate the offset rate R by the formula for opposite-direction movement; where t2 denotes the capture time of the second image to be stitched and t1 denotes the capture time of the first image to be stitched, so that the capture time difference between the two images is Δt = |t2 - t1|; vx1 denotes the moving speed of the gigapixel array camera at time t1, and vd1 denotes the moving speed of the target object at time t1.
Optionally, calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes calculating them by the following formulas:

s1 = |vx1 - vx2| * Δt

s2 = |vd1 - vd2| * Δt

where vx1 denotes the moving speed of the gigapixel array camera at time t1, vx2 denotes its moving speed at time t2, vd1 denotes the moving speed of the target object at time t1, vd2 denotes its moving speed at time t2, and Δt denotes the capture time difference between the first image to be stitched and the second image to be stitched.
Optionally, the feature point correction module 250 is further configured to: construct a three-dimensional coordinate system; obtain the coordinate values (x1, y1, z1) of any one of the plurality of first feature points; transform the coordinates of that feature point using a formula built from the intrinsic parameter matrix of the gigapixel array camera (in which fx and fy denote the focal lengths along the x-axis and y-axis in millimeters, and cx and cy denote the optical center in pixels), the extrinsic parameter matrix [Q T] formed by the rotation matrix Q and the translation matrix T, and the offset rate R, to obtain the corrected coordinate value of the feature point; traverse all of the first feature points, repeating the above steps; and map out the first corrected feature points based on the corrected coordinate values of all of the first feature points.
Optionally, the feature point correction module 250 is further configured to: obtain the coordinate values (x2, y2, z2) of any one of the plurality of second feature points; transform the coordinates of that feature point using the same formula, in which cx and cy denote the optical center of the camera in pixels; traverse all of the second feature points, repeating the above steps; and map out the second corrected feature points based on the corrected coordinate values of all of the second feature points.
Optionally, the optimal feature point pair acquisition module 260 is further configured to match each of the first corrected feature points against the second corrected feature points using the Hamming distance, obtaining the optimal feature point pairs with the shortest Hamming distances.

Optionally, the image stitching module 270 is further configured to obtain the stitched total image using a weighted fusion algorithm.

When calculating the offset rate, the system takes into account the errors that the moving directions and speeds of the camera and of the subject may introduce into image stitching; it also introduces a three-dimensional coordinate system and combines the camera parameters with the offset rate to correct the feature points of the two images, thereby improving the efficiency and accuracy of image stitching.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the modules/units/sub-units/components of the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.

In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a logical functional division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of this embodiment.

In addition, the functional units in the embodiments provided in this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings. In addition, the terms "first", "second", "third" and the like are used only to distinguish descriptions and shall not be understood as indicating or implying relative importance.

Finally, it should be noted that the above embodiments are only specific implementations of this application, used to illustrate rather than limit its technical solutions, and the scope of protection of this application is not limited thereto. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the technical field can still, within the technical scope disclosed in this application, modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be covered by the scope of protection of this application. Therefore, the scope of protection of this application shall be subject to the scope of protection of the claims.
Claims (9)
- An image stitching method based on a gigapixel array camera, characterized by comprising: obtaining image data captured by the gigapixel array camera, the image data comprising a first image to be stitched and a second image to be stitched of a target object; performing feature point extraction on the first image to be stitched and the second image to be stitched respectively, to determine first feature points and second feature points; obtaining a first moving speed of the gigapixel array camera; obtaining a second moving speed of the target object; calculating an offset rate based on the first moving speed and the second moving speed, wherein calculating the offset rate based on the first moving speed and the second moving speed comprises: calculating a first offset s1 and a second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, calculating the offset rate R by the formula for same-direction movement; if the gigapixel array camera and the target object move in opposite directions, calculating the offset rate R by the formula for opposite-direction movement; where t2 denotes the capture time of the second image to be stitched and t1 denotes the capture time of the first image to be stitched, so that the capture time difference between the first image to be stitched and the second image to be stitched is Δt = |t2 - t1|, vx1 denotes the moving speed of the gigapixel array camera at time t1, and vd1 denotes the moving speed of the target object at time t1; adjusting the first feature points and the second feature points based on the offset rate to obtain first corrected feature points and second corrected feature points; matching the first corrected feature points and the second corrected feature points to obtain a plurality of optimal feature point pairs; and screening the plurality of optimal feature point pairs and stitching the first image to be stitched and the second image to be stitched to obtain a total image.
- The image stitching method according to claim 1, characterized in that the first image to be stitched and the second image to be stitched are two consecutive frames of images.
- The image stitching method according to claim 1, characterized in that performing feature point extraction on the first image to be stitched and the second image to be stitched respectively to determine the first feature points and the second feature points comprises: for each pixel on the first image to be stitched or the second image to be stitched, taking the pixel as the center, calculating the grayscale differences of n neighboring pixels; and if more than n/2 of the neighboring pixels have grayscale differences meeting a preset condition, determining the pixel to be a first feature point or a second feature point.
- The image stitching method according to claim 1, characterized in that calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively comprises calculating the first offset s1 and the second offset s2 by the following formulas: s1 = |vx1 - vx2| * Δt and s2 = |vd1 - vd2| * Δt, where vx1 denotes the moving speed of the gigapixel array camera at time t1, vx2 denotes the moving speed of the gigapixel array camera at time t2, vd1 denotes the moving speed of the target object at time t1, vd2 denotes the moving speed of the target object at time t2, and Δt denotes the capture time difference between the first image to be stitched and the second image to be stitched.
- The image stitching method according to claim 1, characterized in that adjusting the first feature points based on the offset rate to obtain the first corrected feature points comprises: constructing a three-dimensional coordinate system; obtaining the coordinate values (x1, y1, z1) of any one of the plurality of first feature points; transforming the coordinates of the feature point using a formula built from the intrinsic parameter matrix of the gigapixel array camera, in which fx and fy denote the focal lengths along the x-axis and y-axis in millimeters and cx and cy denote the optical center in pixels, the extrinsic parameter matrix [Q T] formed by the rotation matrix Q and the translation matrix T, and the offset rate R, to obtain the corrected coordinate value of the feature point; traversing all of the first feature points and repeating the above steps; and mapping out the first corrected feature points based on the corrected coordinate values of all of the first feature points.
- The image stitching method according to claim 5, characterized in that adjusting the second feature points based on the offset rate to obtain the second corrected feature points comprises: obtaining the coordinate values (x2, y2, z2) of any one of the plurality of second feature points; transforming the coordinates of the feature point using the same formula, in which cx and cy denote the optical center of the camera in pixels; traversing all of the second feature points and repeating the above steps; and mapping out the second corrected feature points based on the corrected coordinate values of all of the second feature points.
- The image stitching method according to claim 1, characterized in that matching the first corrected feature points and the second corrected feature points to obtain the plurality of optimal feature point pairs comprises: matching each of the first corrected feature points against the second corrected feature points using the Hamming distance, to obtain the optimal feature point pairs with the shortest Hamming distances.
- The image stitching method according to claim 1, characterized in that stitching the first image to be stitched and the second image to be stitched to obtain the total image comprises: obtaining the stitched total image using a weighted fusion algorithm.
- An image stitching system based on a gigapixel array camera, characterized by comprising: an image data acquisition module, configured to obtain image data captured by the gigapixel array camera, the image data comprising a first image to be stitched and a second image to be stitched of a target object; a feature point determination module, configured to perform feature point extraction on the first image to be stitched and the second image to be stitched respectively and determine first feature points and second feature points; a speed determination module, configured to obtain a first moving speed of the gigapixel array camera and a second moving speed of the target object; an offset rate calculation module, configured to calculate an offset rate based on the first moving speed and the second moving speed, wherein the offset rate calculation module is further configured to calculate a first offset s1 and a second offset s2 based on the first moving speed and the second moving speed respectively, to calculate the offset rate R by the formula for same-direction movement if the gigapixel array camera and the target object move in the same direction, and to calculate the offset rate R by the formula for opposite-direction movement if the gigapixel array camera and the target object move in opposite directions, where t2 denotes the capture time of the second image to be stitched and t1 denotes the capture time of the first image to be stitched, so that the capture time difference between the two images is Δt = |t2 - t1|, vx1 denotes the moving speed of the gigapixel array camera at time t1, and vd1 denotes the moving speed of the target object at time t1; a feature point correction module, configured to adjust the first feature points and the second feature points based on the offset rate to obtain first corrected feature points and second corrected feature points; an optimal feature point pair acquisition module, configured to match the first corrected feature points and the second corrected feature points to obtain a plurality of optimal feature point pairs; and an image stitching module, configured to screen the plurality of optimal feature point pairs and stitch the first image to be stitched and the second image to be stitched to obtain a total image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210639481.5A CN114841862B (en) | 2022-06-07 | 2022-06-07 | Image splicing method and system based on hundred million pixel array type camera |
CN202210639481.5 | 2022-06-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023236508A1 true WO2023236508A1 (en) | 2023-12-14 |
Family
ID=82573495
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/141925 WO2023236508A1 (en) | 2022-06-07 | 2022-12-26 | Image stitching method and system based on billion-pixel array camera |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114841862B (en) |
WO (1) | WO2023236508A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118521671A (en) * | 2024-07-19 | 2024-08-20 | 深圳中安高科电子有限公司 | CMOS area array camera array train bottom imaging method and device |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114841862B (en) * | 2022-06-07 | 2023-02-03 | 北京拙河科技有限公司 | Image splicing method and system based on hundred million pixel array type camera |
CN115829843B (en) * | 2023-01-09 | 2023-05-12 | 深圳思谋信息科技有限公司 | Image stitching method, device, computer equipment and storage medium |
CN118014828B (en) * | 2023-12-19 | 2024-08-20 | 苏州一际智能科技有限公司 | Image stitching method, device and system for array camera |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170006219A1 (en) * | 2015-06-30 | 2017-01-05 | Gopro, Inc. | Image stitching in a multi-camera array |
CN107945113A (en) * | 2017-11-17 | 2018-04-20 | 北京天睿空间科技股份有限公司 | The antidote of topography's splicing dislocation |
CN113891111A (en) * | 2021-09-29 | 2022-01-04 | 北京拙河科技有限公司 | Live broadcast method, device, medium and equipment for billion pixel video |
CN114418839A (en) * | 2021-12-09 | 2022-04-29 | 浙江大华技术股份有限公司 | Image stitching method, electronic device and computer-readable storage medium |
CN114841862A (en) * | 2022-06-07 | 2022-08-02 | 北京拙河科技有限公司 | Image splicing method and system based on hundred million pixel array type camera |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10547825B2 (en) * | 2014-09-22 | 2020-01-28 | Samsung Electronics Company, Ltd. | Transmission of three-dimensional video |
CN107646126B (en) * | 2015-07-16 | 2020-12-08 | 谷歌有限责任公司 | Camera pose estimation for mobile devices |
JP6741533B2 (en) * | 2016-09-26 | 2020-08-19 | キヤノン株式会社 | Imaging control device and control method thereof |
JP2019164136A (en) * | 2018-03-19 | 2019-09-26 | 株式会社リコー | Information processing device, image capturing device, mobile body, image processing system, and information processing method |
CN108566513A (en) * | 2018-03-28 | 2018-09-21 | 深圳臻迪信息技术有限公司 | A kind of image pickup method of unmanned plane to moving target |
CN110706257B (en) * | 2019-09-30 | 2022-07-22 | 北京迈格威科技有限公司 | Identification method of effective characteristic point pair, and camera state determination method and device |
CN112866542B (en) * | 2019-11-12 | 2022-08-12 | Oppo广东移动通信有限公司 | Focus tracking method and apparatus, electronic device, and computer-readable storage medium |
CN111260542A (en) * | 2020-01-17 | 2020-06-09 | 中国电子科技集团公司第十四研究所 | SAR image splicing method based on sub-block registration |
- 2022-06-07 CN CN202210639481.5A patent/CN114841862B/en active Active
- 2022-12-26 WO PCT/CN2022/141925 patent/WO2023236508A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
CN114841862B (en) | 2023-02-03 |
CN114841862A (en) | 2022-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023236508A1 (en) | Image stitching method and system based on billion-pixel array camera | |
CN111147741B (en) | Focusing processing-based anti-shake method and device, electronic equipment and storage medium | |
US11019330B2 (en) | Multiple camera system with auto recalibration | |
KR101657039B1 (en) | Image processing apparatus, image processing method, and imaging system | |
TWI808987B (en) | Apparatus and method of five dimensional (5d) video stabilization with camera and gyroscope fusion | |
US10733705B2 (en) | Information processing device, learning processing method, learning device, and object recognition device | |
WO2020088133A1 (en) | Image processing method and apparatus, electronic device and computer-readable storage medium | |
US10915998B2 (en) | Image processing method and device | |
WO2020259474A1 (en) | Focus tracking method and apparatus, terminal device, and computer-readable storage medium | |
WO2021139176A1 (en) | Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium | |
WO2017120771A1 (en) | Depth information acquisition method and apparatus, and image collection device | |
WO2017020150A1 (en) | Image processing method, device and camera | |
CN112005548B (en) | Method of generating depth information and electronic device supporting the same | |
JP6577703B2 (en) | Image processing apparatus, image processing method, program, and storage medium | |
JP2017108387A (en) | Image calibrating, stitching and depth rebuilding method of panoramic fish-eye camera and system thereof | |
WO2019232793A1 (en) | Two-camera calibration method, electronic device and computer-readable storage medium | |
JPWO2018235163A1 (en) | Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method | |
TWI761684B (en) | Calibration method of an image device and related image device and operational device thereof | |
JP2010041419A (en) | Image processor, image processing program, image processing method, and electronic apparatus | |
JP2017017689A (en) | Imaging system and program of entire-celestial-sphere moving image | |
WO2017128750A1 (en) | Image collection method and image collection device | |
JP5857712B2 (en) | Stereo image generation apparatus, stereo image generation method, and computer program for stereo image generation | |
JP7312026B2 (en) | Image processing device, image processing method and program | |
TW201342303A (en) | Three-dimensional image obtaining system and three-dimensional image obtaining method | |
CN111353945B (en) | Fisheye image correction method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22945636; Country of ref document: EP; Kind code of ref document: A1 |