CN107333064B - Spherical panoramic video splicing method and system - Google Patents
Classifications
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- G06T7/90—Determination of colour characteristics
- H04N5/265—Mixing
Description
Technical Field
The present invention relates to the technical field of image processing, and in particular to a stitching method and system for spherical panoramic video.
Background
Panoramic video is a new type of video. Whereas traditional video shows only the information in one direction and range of the scene, panoramic video records the full 360-degree space and allows a degree of user interaction, such as changing the viewing direction or switching scenes. It is assembled from several ordinary videos shot in different directions. Video stitching has important applications in the military, medicine, computer vision, video conferencing, security, and other fields. Panoramic video developed from the panoramic image, whose stitching has already been studied extensively.
By technique, the most common methods fall into three categories: frequency-domain methods, feature-based methods, and grey-gradient methods. Frequency-domain methods, typified by improved fractional Fourier transform approaches, solve registration under translation, rotation, and uniform scaling well, but cannot handle registration under perspective projective transformation. Feature-based methods, typified by the Scale-Invariant Feature Transform (SIFT), handle registration under a wide range of transformations including perspective, but their time complexity is very high and their results on textured images are not ideal. Grey-gradient methods solve for the projective-transformation parameters by minimising the grey-level difference between images, using Levenberg-Marquardt (LM) optimisation to obtain the perspective parameters, but they are easily affected by illumination changes.
By construction, panoramas (and panoramic videos likewise) fall into three types: cylindrical, cubic, and spherical. Cylindrical panoramas have been studied the most, and most panorama techniques were pioneered on them, but they have an inherent drawback: the spatial information is incomplete, since the top and bottom of the scene are missing. Spherical panoramas capture the complete space but are harder to build than cylindrical ones, and far fewer studies report on them. On the one hand, the capture procedure is cumbersome, requiring a dozen or even several dozen photographs; on the other hand, with so many photographs the errors accumulate until automatic stitching finally fails, making it a semi-automatic process that often needs manual adjustment. How to provide a spherical panoramic video stitching method that needs few shots, runs fully automatically, and produces high-quality stitched images, so that panoramic video capture becomes simpler and its quality improves, is therefore a technical problem to be solved by those skilled in the art.
Summary of the Invention
The object of the present invention is to provide a stitching method and system for spherical panoramic video that make panoramic video stitching simple and efficient while improving image quality.
To solve the above technical problem, the present invention provides a stitching method for spherical panoramic video, the method comprising:
acquiring a video pair captured by fisheye lenses, wherein the video pair comprises a first video having N frames of first images and a second video having N frames of second images, the N first images corresponding one-to-one to the N second images;
performing a stitching operation on each first image and its corresponding second image as an image pair, so as to form a stitched video, wherein the stitching operation comprises:
determining the corner points of the first image of the image pair;
searching the empirical region of the second image of the image pair that corresponds to each corner point, and checking the search results with an HSV algorithm to determine the matching point of each corner point;
computing the transformation matrix of the image pair from the selected corner points and their matching points;
using the transformation matrix to determine the colour value of each pixel in the non-overlapping regions of the image pair, and fusing the pixels of the overlapping region according to weight coefficients to determine the colour value of each pixel there;
filling all the colour values into the corresponding positions of the stitched video.
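The five steps above can be sketched as a per-frame loop. This is a minimal illustration rather than the patented implementation; the four callables are placeholders for the operations named in the steps.

```python
def stitch_video_pair(first_frames, second_frames,
                      detect_corners, find_matches, solve_transform, blend):
    """Stitch two fisheye videos frame by frame.

    detect_corners, find_matches, solve_transform, and blend stand in for
    the operations the method describes: corner detection on the first
    image, HSV-checked matching in the second image, transformation-matrix
    estimation, and colour filling with weighted fusion of the overlap.
    """
    assert len(first_frames) == len(second_frames)   # one-to-one pairing
    stitched = []
    for img1, img2 in zip(first_frames, second_frames):
        corners = detect_corners(img1)               # step 1
        matches = find_matches(img2, corners)        # step 2
        transform = solve_transform(corners, matches)  # step 3
        stitched.append(blend(img1, img2, transform))  # steps 4 and 5
    return stitched
```

The loop processes frame pairs in shooting order; the method also allows all pairs to be stitched in parallel, since each pair is independent.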
Optionally, determining the corner points of the first image of the image pair comprises:
judging whether the first image of the image pair is the first frame of the first video;
if it is the first frame, performing corner detection on it to determine its corner points;
if it is not the first frame, judging whether the image pair falls at the predetermined frame interval;
if it does not fall at the interval, taking the corner points of the first image of the previous image pair as the corner points of the current first image;
if it does fall at the interval, performing corner detection on a selected overlap region of the first image to obtain shadow corner points, and judging whether the number of shadow corner points and the number of corner points in the overlap region of the latitude-longitude image differ by more than a threshold; if so, performing corner detection on the first image of the image pair to determine its corner points.
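This branching can be sketched as follows, assuming a zero-based frame index and a helper that counts corners in the two overlap regions (both are illustrative assumptions, not specified by the text):

```python
def corners_for_frame(frame_idx, img1, prev_corners, interval, threshold,
                      detect_corners, count_overlap_corners):
    """Return the corner points to use for this frame's first image.

    frame_idx == 0   -> full corner detection (first frame)
    off the interval -> reuse the previous frame's corners
    on the interval  -> re-detect only if the corner counts of the two
                        overlap regions differ by more than `threshold`
    """
    if frame_idx == 0:
        return detect_corners(img1)
    if frame_idx % interval != 0:
        return prev_corners
    # periodic check: shadow corners vs. latitude-longitude overlap corners
    n_shadow, n_latlon = count_overlap_corners(img1)
    if abs(n_latlon - n_shadow) > threshold:
        return detect_corners(img1)      # stitching has degraded: re-detect
    return prev_corners
```

With `interval=10` this reproduces the "every 10 frames" correction schedule mentioned later in the description.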
Optionally, the corner-detection process comprises:
performing corner detection with the formula M = Σ_(x,y) w(x,y) [Ix^2, Ix·Iy; Ix·Iy, Iy^2] to obtain the preselected corner matrix M at each point;
computing the response value R of each preselected corner M with the formula R = det(M) - k·(trace(M))^2;
sorting the response values R from high to low, and taking the preselected corners M whose values are among the top predetermined number as the corner points selected by corner detection;
where I is the image (a frame of the first video), Ix and Iy are the gradient images obtained by filtering it with horizontal and vertical difference operators, w(x,y) is a two-dimensional Gaussian window, det is the matrix determinant, trace is the matrix trace, and k is a constant.
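A NumPy-only sketch of this Harris-style detector follows. The window size, sigma, and the particular difference operator are illustrative choices that the text leaves open:

```python
import numpy as np

def harris_corners(img, k=0.04, n_best=30, sigma=1.0):
    """Detect corners by ranking R = det(M) - k * trace(M)^2, where M is
    the gradient structure matrix accumulated under a Gaussian window."""
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)               # vertical / horizontal differences

    # two-dimensional Gaussian window w(x, y)
    r = int(3 * sigma)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    w = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    w /= w.sum()

    def window_sum(a):                      # correlation with w, zero-padded
        out = np.zeros_like(a)
        pad = np.pad(a, r)
        h, wd = a.shape
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += w[dy + r, dx + r] * pad[r + dy:r + dy + h,
                                               r + dx:r + dx + wd]
        return out

    # entries of M, summed under the window, then the response R
    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

    flat = np.argsort(R.ravel())[::-1][:n_best]      # highest R first
    return np.column_stack(np.unravel_index(flat, R.shape)), R
```

On a synthetic bright square, the highest-R positions land near the square's corners, while straight edges give negative R and flat areas give R near zero, matching the classification given later in the description.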
Optionally, searching the empirical region of the second image that corresponds to each corner point comprises:
determining the theoretical position in the second image of each corner point selected by corner detection, and extending a predetermined range outward from each theoretical position as the empirical region in which to search for the matching point.
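For instance, the search window can be built as below; the radius of the "predetermined range" is a tuning parameter that the text does not fix:

```python
def empirical_region(theoretical_pos, radius, img_shape):
    """Pixels of the search window centred on a corner's theoretical
    position in the second image, clipped to the image bounds."""
    y0, x0 = theoretical_pos
    h, w = img_shape
    return [(y, x)
            for y in range(max(0, y0 - radius), min(h, y0 + radius + 1))
            for x in range(max(0, x0 - radius), min(w, x0 + radius + 1))]
```

Restricting the search to this small region is what keeps matching fast compared with scanning the whole second image.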
Optionally, checking the search results with the HSV algorithm to determine the matching point of each corner point comprises:
computing, for every pixel of the empirical region, the weighted colour-difference sum E_(x,y) with respect to the corner point of that region, using the formula E_(x,y) = Σ_u Σ_v w(u,v)·[k_h·|H′_(s+u,t+v) - H″_(x+u,y+v)| + k_s·|S′_(s+u,t+v) - S″_(x+u,y+v)|], the sums running over the l×l window;
selecting the pixel with the smallest weighted colour-difference sum E_(x,y) as the matching point of the corner;
where E_(x,y) is the colour-difference sum of the second image at (x,y) with respect to the corner of the first image, H′_(s+u,t+v) and S′_(s+u,t+v) are the hue and saturation of the first image at (s+u,t+v), H″_(x+u,y+v) and S″_(x+u,y+v) are the hue and saturation of the second image at (x+u,y+v), k_h and k_s are two coefficients, w(u,v) is the weight function over window pixel positions, u and v are the pixel positions within the window, and l is the size of the square window.
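A sketch of this matching test follows. The absolute-difference form, the Gaussian shape of w(u, v), and the values of kh and ks are assumptions for illustration; the text only states that hue and saturation enter with two coefficients and a positional weight:

```python
import numpy as np

def hsv_cost(H1, S1, H2, S2, corner, candidate, l=5, kh=1.0, ks=0.5):
    """Weighted colour-difference sum E between the l x l window at the
    corner (s, t) of the first image and at a candidate (x, y) of the
    second image, in HSV hue/saturation channels."""
    s, t = corner
    x, y = candidate
    c = (l - 1) / 2.0
    u, v = np.mgrid[0:l, 0:l]
    # positional weight w(u, v): Gaussian centred on the window (assumed)
    w = np.exp(-((u - c) ** 2 + (v - c) ** 2) / (2.0 * max(l / 3.0, 1.0) ** 2))
    dh = np.abs(H1[s:s + l, t:t + l] - H2[x:x + l, y:y + l])
    ds = np.abs(S1[s:s + l, t:t + l] - S2[x:x + l, y:y + l])
    return float(np.sum(w * (kh * dh + ks * ds)))

def best_match(H1, S1, H2, S2, corner, candidates, l=5):
    """The candidate with the smallest E is taken as the matching point."""
    return min(candidates, key=lambda c: hsv_cost(H1, S1, H2, S2, corner, c, l))
```

In use, `candidates` would be the pixels of the empirical region around the corner's theoretical position.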
Optionally, computing the transformation matrix of the image pair from the selected corner points and their matching points comprises:
selecting, according to a distance threshold, 5 of the corner points chosen by corner detection as the selected corner points;
computing the transformation matrix of the image pair from the selected corner points and matching points;
where a11, a12, …, a43 are the 15 unknown entries of the matrix, and φL, θL and φR, θR are the longitude and latitude of the first and second images, respectively, after their capture coordinate systems are converted to a spherical coordinate system.
Optionally, the distance threshold r_t is r_t = k_t·r_0, where r_0 is the turning radius and k_t is a coefficient.
Optionally, using the transformation matrix to determine the colour value of each pixel in the non-overlapping regions of the image pair, and fusing the pixels of the overlapping region according to weight coefficients, comprises:
traversing the longitudes and latitudes through the transformation matrix, and determining the colour value c1 of each position in the first image and the colour value c2 of each position in the second image;
determining the colour value of each position in the stitched image with the formula c = k1·c1 + k2·c2;
where c is the colour value of the position in the stitched image, and k1 and k2 are the weight coefficients.
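The fusion rule c = k1·c1 + k2·c2 can be sketched as below; the linear ramp for k1 across the overlap, with k1 + k2 = 1, is an assumed choice that produces a gradual transition:

```python
import numpy as np

def fuse_overlap(c1, c2, k1):
    """Per-pixel weighted fusion: c = k1 * c1 + (1 - k1) * c2."""
    k1 = np.asarray(k1, dtype=float)
    return k1 * c1 + (1.0 - k1) * c2

# example: blend two constant colour strips across a 5-pixel overlap
c1 = np.full(5, 10.0)                 # colour from the first image
c2 = np.full(5, 20.0)                 # colour from the second image
k1 = np.linspace(1.0, 0.0, 5)         # weight falls off across the overlap
blended = fuse_overlap(c1, c2, k1)
```

The blended strip moves smoothly from the first image's colour to the second's, which is what suppresses a visible seam.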
The present invention also provides a stitching system for spherical panoramic video, the system comprising:
a video acquisition module for acquiring a video pair captured by fisheye lenses, wherein the video pair comprises a first video having N frames of first images and a second video having N frames of second images, the N first images corresponding one-to-one to the N second images;
a stitching module for performing the stitching operation on each first image and its corresponding second image as an image pair to form a stitched video;
a stitching operation module for determining the corner points of the first image of an image pair; searching the empirical region of the second image that corresponds to each corner point and checking the search results with the HSV algorithm to determine the matching point of each corner; computing the transformation matrix of the image pair from the selected corner points and their matching points; using the transformation matrix to determine the colour value of each pixel in the non-overlapping regions and fusing the pixels of the overlapping region according to weight coefficients to determine the colour value of each pixel there; and filling all the colour values into the corresponding positions of the stitched video.
The stitching method for spherical panoramic video provided by the present invention comprises: acquiring a video pair captured by fisheye lenses; performing a stitching operation on each one-to-one corresponding first and second image as an image pair to form a stitched video; wherein the stitching operation comprises: determining the corner points of the first image of the image pair; searching the empirical region of the second image that corresponds to each corner point and checking the search results with the HSV algorithm to determine the matching points; computing the transformation matrix of the image pair from the selected corner points and matching points; using the transformation matrix to determine the colour values of the pixels in the non-overlapping regions and fusing the pixels of the overlapping region according to weight coefficients; and filling all the colour values into the corresponding positions of the stitched video.
It can be seen that automatic stitching of panoramic video is achieved by applying suitable coordinate transformations to the video pair captured by the fisheye lenses. Solving for the transformation matrix involves corner detection, empirical-region search, and HSV colour-space matching, which together detect the matching points automatically. The method needs few shots, stitches quickly, and requires no manual adjustment or intervention once the stitching process has started. The generated video stitches well: the centre seam is positioned accurately and the transition is natural, and over the whole playback there is no image jitter or misalignment. The present invention also provides a stitching system for spherical panoramic video with the same beneficial effects, which are not repeated here.
Brief Description of the Drawings
To describe the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for their description are briefly introduced below. Evidently, the drawings described below show only embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the spherical panoramic video stitching method provided by an embodiment of the present invention;
Fig. 2 illustrates the computation of longitude, latitude, and turning radius in a fisheye image according to an embodiment of the present invention;
Fig. 3 shows the shooting directions and coordinate systems of the spherical panoramic video according to an embodiment of the present invention;
Fig. 4 illustrates the conversion from Cartesian coordinates to longitude and latitude according to an embodiment of the present invention;
Fig. 5 illustrates the conversion between longitude-latitude and pixel coordinates of the unfolded panoramic image according to an embodiment of the present invention;
Fig. 6 illustrates the computation of the colour-fusion weights according to an embodiment of the present invention;
Fig. 7 illustrates the overlapping region of the stitch according to an embodiment of the present invention;
Fig. 8 is a screenshot of a specific stitched video according to an embodiment of the present invention;
Fig. 9 is a flowchart of a specific spherical panoramic video stitching method according to an embodiment of the present invention;
Fig. 10 is a structural block diagram of the spherical panoramic video stitching system according to an embodiment of the present invention.
Detailed Description of the Embodiments
The core of the present invention is to provide a stitching method and system for spherical panoramic video that make panoramic video stitching simple and efficient while improving image quality.
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained from them by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Please refer to Fig. 1, a flowchart of the spherical panoramic video stitching method provided by an embodiment of the present invention. The method may comprise:
S100: acquire a video pair captured by fisheye lenses, wherein the video pair comprises a first video having N frames of first images and a second video having N frames of second images, the N first images corresponding one-to-one to the N second images.
Specifically, this embodiment stitches every frame image pair of the video pair captured by the fisheye lenses, thereby stitching the entire video pair. The embodiment places no restriction on the fisheye lenses themselves, as long as the captured video pair can be stitched into a panoramic video. "First video" and "second video" merely distinguish the two videos of the pair without limiting them: either video may be the first video, the other then being the second; likewise, "first image" and "second image" merely distinguish the frames of the two videos. N is not limited either and may be any integer greater than 0. The two videos have the same number of frames, corresponding one to one in shooting order. For example, if the first video has 5 first images (A1, A2, A3, A4, A5) and the second video has 5 second images (B1, B2, B3, B4, B5), then A1 and B1 form a corresponding image pair, as do A2 and B2.
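The one-to-one pairing of the example can be expressed directly:

```python
first_video = ["A1", "A2", "A3", "A4", "A5"]    # frames of the first video
second_video = ["B1", "B2", "B3", "B4", "B5"]   # frames of the second video

# pair frames in shooting order: (A1, B1), (A2, B2), ...
image_pairs = list(zip(first_video, second_video))
```

Each element of `image_pairs` is one image pair on which the stitching operation is performed.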
S110: perform the stitching operation on each one-to-one corresponding first and second image as an image pair to form a stitched video.
Specifically, once the stitching operation has been performed on every frame image pair of the video pair, the stitching of the entire video pair is complete; that is, every frame of the video is processed and appropriately optimised, and the stitched video is finally generated. The embodiment does not restrict how the image pairs are processed: all frame pairs may be stitched simultaneously, or one after another in frame order, as the user prefers. The stitching operation for each frame image pair proceeds as follows.
The stitching operation comprises the following steps.
Step 1: determine the corner points of the first image of the image pair.
Specifically, to obtain the subsequent transformation matrix, the image corner points must be determined. This embodiment first determines the corner points of the first image and then determines the matching point of each corner by matching.
The embodiment does not restrict how the corner points of the first image are determined for each frame pair, provided they can be determined for every first image. For example, corner detection may be run on the first image of every frame pair; or, if the corner points of the frames preceding the current frame are sufficiently accurate, the corner points of the previous frame's first image may be reused for the current frame. Preferably, to save computation and speed up stitching, determining the corner points of the first image of the image pair may comprise:
judging whether the first image of the image pair is the first frame of the first video;
if it is the first frame, performing corner detection on it to determine its corner points;
if it is not the first frame, judging whether the image pair falls at the predetermined frame interval;
if it does not fall at the interval, taking the corner points of the first image of the previous image pair as the corner points of the current first image;
if it does fall at the interval, performing corner detection on the selected overlap region of the first image to obtain shadow corner points, and judging whether the number of shadow corner points and the number of corner points in the overlap region of the latitude-longitude image differ by more than a threshold; if so, performing corner detection on the first image to determine its corner points; if not, reusing the corner points of the previous pair's first image.
Specifically, with the above procedure, full corner detection is performed only on the first image of the first frame pair; afterwards, the corner-correction step runs only on frame pairs at the predetermined interval, for example every 10 frames. This keeps the corner points accurate while improving computational efficiency.
Further, the embodiment does not restrict the size of the selected overlap region: the shadow corners may be computed over the whole overlap region or over only part of it. The latter is more efficient because the searched area is smaller. The way the selected overlap region is chosen is likewise not restricted.
可选的,角点检测的过程可以是:利用公式进行角点检测,得到预选角点M;利用公式R=det(M)-k(trace(M))2计算预选角点M的响应函数值R;将响应函数值R按照从高到底的顺序进行排列,并选取前预定数量的响应函数值R对应的预选角点M作为角点检测选定的角点;Optionally, the process of corner detection can be: using the formula Perform corner detection to obtain a preselected corner M; use the formula R=det(M)-k(trace(M)) 2 to calculate the response function value R of the preselected corner M; set the response function value R in the order from high to bottom Arrange, and select the preselected corner point M corresponding to the response function value R of the previous predetermined number as the corner point selected for the corner point detection;
where I is the first video frame, Ix and Iy are the gradient images obtained by filtering the image with horizontal and vertical difference operators, w(x, y) is a two-dimensional Gaussian window, det denotes the matrix determinant, trace the matrix trace, and k is a constant (its value may be 0.04-0.06). Where R is large, the point is a corner; where R < 0, the point lies on an edge; where |R| is small, the region is flat. Many positions in the whole image may be judged to be corners, so the n positions with the largest R values (for example n = 30) can be kept for further screening. That is, this embodiment does not limit the value of the predetermined number.
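A minimal numpy sketch of this Harris-style detector follows; the choice of central-difference operators, the Gaussian window size, and the n-largest selection are illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

def harris_response(img, k=0.04, sigma=1.0):
    """Per-pixel Harris response R = det(M) - k*trace(M)^2."""
    Ix = np.gradient(img.astype(float), axis=1)   # horizontal gradient
    Iy = np.gradient(img.astype(float), axis=0)   # vertical gradient

    def gauss_blur(a):
        # Separable Gaussian weighting w(x, y) applied to the products.
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        g = np.exp(-x**2 / (2 * sigma**2))
        g /= g.sum()
        a = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, a)
        return np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, a)

    Sxx = gauss_blur(Ix * Ix)
    Syy = gauss_blur(Iy * Iy)
    Sxy = gauss_blur(Ix * Iy)

    det_M = Sxx * Syy - Sxy**2
    trace_M = Sxx + Syy
    return det_M - k * trace_M**2

def top_n_corners(R, n=30):
    """(row, col) indices of the n largest responses, highest first."""
    flat = np.argsort(R.ravel())[::-1][:n]
    return np.column_stack(np.unravel_index(flat, R.shape))
```

A step corner (one bright quadrant) produces a large positive response near the corner point, negative responses along the edges, and near-zero responses in flat areas, matching the R classification described above.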
Every certain number of frames (for example every 10 frames), corner detection is performed again on the overlap region; see Figure 7 for an illustration of the overlap region. Because of slight camera shake and changes in scene depth, the transformation matrix drifts, and when this happens the matching points must be re-specified and the transformation matrix recomputed. Whether this is needed can be determined by checking the stitching quality in the overlap region. As shown in Figure 7, for 2×180° shooting the overlap region is the shaded area, which in the latitude-longitude image is a narrow rectangle in the middle. If the images are stitched poorly, many lines inside this rectangle will be misaligned, and corners form at the misalignments; corner detection picks them up. Therefore, for a given frame, one can check whether the number of corners in the shaded region of the fisheye image (for example, the first fisheye image) and in the shaded region of the latitude-longitude image are roughly equal. If they are, the stitching is good; if the latitude-longitude image contains far more corners than the fisheye image, the stitching quality has degraded, and 5 new sets of matching points must be taken and the transformation matrix recomputed.
This check does not need to run on every frame; once every so often (for example every 10 frames) is enough. Nor does the checked area need to cover the whole overlap region: in the fisheye image it suffices to expand outward by n pixels (for example n = 5) from the turning radius, shrink inward by n pixels, and take the annulus in between; in the latitude-longitude image it suffices to take the rectangle obtained by moving n pixels to the left and n pixels to the right of the center line (the dashed line in the figure). In practice, the corners produced by misalignment all lie at the center line. The number of corners detected in the fisheye overlap region is then compared with the number detected in the latitude-longitude overlap region; if the two numbers differ greatly, corner detection is performed again.
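A sketch of extracting the two check regions described above, the annulus around the turning radius in the fisheye image and the band around the center line in the latitude-longitude image; the array shapes and parameter names are illustrative:

```python
import numpy as np

def annulus_mask(shape, center, r0, n=5):
    """Boolean mask of the ring r0 - n <= r <= r0 + n in a fisheye image."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return (r >= r0 - n) & (r <= r0 + n)

def centerline_mask(shape, n=5):
    """Boolean mask of the vertical band of width ~2n around the center line."""
    mask = np.zeros(shape, dtype=bool)
    mid = shape[1] // 2
    mask[:, max(mid - n, 0):mid + n + 1] = True
    return mask
```

Corner detection would then be restricted to the masked pixels only, which is what makes the periodic check cheap.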
Step 2: search the empirical region corresponding to each corner in the second image of the image pair, and verify the search results with the HSV algorithm to determine the matching point corresponding to each corner;
Specifically, when shooting a panorama, if the camera's nodal point is guaranteed fixed and the shooting angle is known accurately, any point in the first image can in fact be computed in the second image. For example, with the 2×180° shooting mode, a point at polar coordinates (r′p, θ′) in the first image should have its matching point at (2r0 - r′p, π - θ′) in the second image, where r0 is the turning radius. However, because the camera's nodal point cannot be held strictly fixed and the shooting angle carries error, the matching point's position shifts; and because of vibration during panoramic shooting and strong changes in scene depth, the matching points must be readjusted. Therefore a search is performed over the empirical region corresponding to each corner in the second image of the image pair. This embodiment does not limit how the empirical region (also called the empirical position) is determined, nor its size. For example, the following method may be used: take the theoretical position of the matching point as the initial empirical position and, centered on it, search within a specified range, for example an l×l rectangle (l may be 0.1-0.3 times r0), to find the true matching point position.
When stitching a new frame of the video, if the matching point is found to have shifted, the original matching position is taken as the empirical position and, centered on it, the search is repeated within the l×l rectangle. Determining the search range from the empirical position avoids searching the entire image, greatly shrinking the search range and saving search time. That is, optionally: determine the theoretical position in the second image of the image pair corresponding to each corner selected by corner detection, and expand a predetermined range outward around each theoretical position as the empirical region in which to search for the corresponding matching point.
The HSV-based search that determines matching points in the second image can proceed as follows: given a small window for both images, the window center in the first image is fixed at the corner being examined, while the window center in the second image tries every pixel within the l×l rectangle in turn; for each position, the weighted color difference sum against the first image's window is computed.
For color comparison and matching, the HSV color model is more suitable than the RGB model, so the first and second images must first be converted to HSV representation. The window's weighted color difference sum is computed with the following formula:
That is, for every pixel in the empirical region, the weighted color difference sum Ex,y against the corner corresponding to that empirical region is computed as

Ex,y = Σu Σv w(u,v) [ kh·|H′s+u,t+v - H″x+u,y+v| + ks·|S′s+u,t+v - S″x+u,y+v| ],

with the sums running over the l×l window;
the pixel with the smallest weighted color difference sum Ex,y is then selected as the corner's matching point;
where Ex,y is the color difference sum of the second image at (x, y) against the corner at (s, t) in the first image; H′s+u,t+v and S′s+u,t+v are the hue and saturation at (s+u, t+v) in the first image; H″x+u,y+v and S″x+u,y+v are the hue and saturation at (x+u, y+v) in the second image; kh and ks are two coefficients that set the importance of hue and saturation in the result, for example kh = 0.8 and ks = 0.2, making hue 4 times as important as saturation; w(u, v) is the weight function over window pixel positions, which may be a two-dimensional Gaussian; u and v index pixel positions within the window; and l is the size of the square window.
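A sketch of the HSV window search just described; the window size, the Gaussian width, and the use of plain absolute hue differences (ignoring hue wrap-around) are assumptions:

```python
import numpy as np

def match_corner_hsv(hsv1, hsv2, corner, region, win=7, kh=0.8, ks=0.2):
    """Find the best match for `corner` = (s, t) of image 1 inside `region`
    of image 2 by minimizing the weighted hue/saturation difference E_{x,y}.

    hsv1, hsv2: HxWx3 float arrays; `region` = (y0, y1, x0, x1) in image 2.
    """
    r = win // 2
    s, t = corner
    # Gaussian weights w(u, v) over the comparison window.
    u = np.arange(-r, r + 1)
    g = np.exp(-u**2 / (2.0 * (r / 2 + 1e-9)**2))
    w = np.outer(g, g)

    ref_h = hsv1[s - r:s + r + 1, t - r:t + r + 1, 0]
    ref_s = hsv1[s - r:s + r + 1, t - r:t + r + 1, 1]

    y0, y1, x0, x1 = region
    best, best_pos = np.inf, None
    for y in range(y0, y1):
        for x in range(x0, x1):
            cand_h = hsv2[y - r:y + r + 1, x - r:x + r + 1, 0]
            cand_s = hsv2[y - r:y + r + 1, x - r:x + r + 1, 1]
            e = np.sum(w * (kh * np.abs(ref_h - cand_h)
                            + ks * np.abs(ref_s - cand_s)))
            if e < best:
                best, best_pos = e, (y, x)
    return best_pos, best
```

Restricting the search to the small empirical region keeps this exhaustive window comparison cheap, which is the point of the empirical-position strategy above.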
Step 3: compute the transformation matrix of the image pair from the selected corners and their corresponding matching points;
Specifically, this embodiment does not limit the number of selected corners and corresponding matching points; however, since the transformation matrix contains 15 unknowns, at least 5 pairs of matching points are required for the computation. Preferably, computing the transformation matrix of the image pair from the selected corners and their matching points may include:
selecting, from the corners chosen by corner detection, 5 corners that satisfy a distance threshold as the selected corners;
computing the transformation matrix of the image pair from the selected corners and their matching points by solving the transformation equation, that is, the matrix equation whose elements a11, a12, …, a43 relate, for each matched pair, the spherical coordinates (φL, θL) of the first image to the spherical coordinates (φR, θR) of the second image;
where a11, a12, …, a43 are the 15 unknowns, and φL, θL and φR, θR are the latitude and longitude of the first image and the second image, respectively, after their shooting coordinate systems are converted to spherical coordinates.
Specifically, only 5 points are actually needed. Besides being corners with distinct color variation, these 5 pairs of matching points should also be far apart from one another; otherwise the matrix solved later will carry some error. Therefore 5 corners must be chosen from the n corners above according to distance, using a distance threshold rt = kt·r0, where r0 is the turning radius and kt is a coefficient that may be 0.3-0.6. The n corners are traversed, checking pairwise distances, until 5 corners are found whose mutual distances all exceed rt.
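One way to sketch this distance-thresholded selection is a greedy pass over the corners ordered by response strength (the greedy traversal order is an assumption; the text only requires that mutual distances exceed rt):

```python
import numpy as np

def pick_far_corners(corners, r0, kt=0.4, need=5):
    """Greedily pick `need` corners whose pairwise distances all exceed
    rt = kt * r0. `corners` is an (n, 2) array, strongest response first.
    Returns the chosen subset (possibly fewer if none remain).
    """
    rt = kt * r0
    chosen = []
    for p in np.asarray(corners, dtype=float):
        if all(np.hypot(*(p - q)) > rt for q in chosen):
            chosen.append(p)
            if len(chosen) == need:
                break
    return np.array(chosen)
```

With kt around 0.3-0.6 the 5 points end up spread over a sizable fraction of the fisheye disc, which stabilizes the equation system solved in step 3.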
Here the turning radius is defined as the radius, in the coordinate system of the first image, corresponding to a camera field of view of 180°. It can be shown that when the two images are taken with the camera rotated by 180°, the turning radius equals the average of the distances from corresponding matching points of the two fisheye images image1 and image2 (please refer to Figure 2) to their origins, namely:
r0 = (r′p + r″p)/2, averaged over the matched pairs, where r′p = √(x′p² + y′p²) is the distance from the image center of image 1 to pixel P′, with x′p, y′p the Cartesian coordinates of P′ relative to the image center, and r″p = √(x″p² + y″p²) is the distance from the image center of image 2 to pixel P″, with x″p, y″p the Cartesian coordinates of P″ relative to that image center.
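As a sketch, this turning-radius estimate (the mean, over matched pairs, of the average distance of the two matched points to their image centers) might be computed as:

```python
import math

def turning_radius(pairs):
    """r0 = mean over pairs of (r'_p + r''_p) / 2.

    Each pair is ((x1, y1), (x2, y2)), coordinates relative to the
    respective image centers.
    """
    total = 0.0
    for (x1, y1), (x2, y2) in pairs:
        total += (math.hypot(x1, y1) + math.hypot(x2, y2)) / 2.0
    return total / len(pairs)
```

Per step 7 of the example flow below in the text, this value is computed once on the first frame and reused for all subsequent frames.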
This system of equations is the most important one in solving the panoramic image transformation. With a11, a12, …, a43 totaling 15 unknowns, 5 pairs of matching points suffice to compute all matrix elements. The system can be solved by methods such as Gaussian elimination with column pivoting. Here φL, θL, φR, θR are the latitude and longitude of the first image and the second image after their shooting coordinate systems are converted to spherical coordinates. Understanding and computing them requires the coordinate systems and conversion methods of the following figure:
The formation of a panorama can be imagined as follows: a person sits inside a huge glass sphere looking at the surrounding scene, with one eye exactly at the sphere's center; a light ray from the scene to that eye must pass through some point of the glass sphere, where one can imagine it being imaged. Eventually the entire scene is imaged on the glass sphere, and this image-bearing sphere is the panorama of the scene. A fisheye photo is an equiangular planar image, while a panoramic image is a latitude-longitude image. Turning a fisheye image into a panoramic image can be achieved by coordinate system transformation.
Figure 3 shows the situation of two fisheye images. The arrow shootL indicates the direction in which the camera shoots the first video, and the arrow shootR the direction for the second video. Five coordinate systems are involved: (1) the world coordinate system XYZ, the coordinate system of the three-dimensional world, from which the panorama's latitude and longitude are converted; (2) the shooting coordinate system oxlylzl of the left hemisphere; the first image's shooting direction shootL is opposite to its zl axis; (3) the image coordinate system o′x′y′ of the left photo (image1), a planar coordinate system used when viewing image1; (4) the shooting coordinate system oxryrzr of the right hemisphere, whose shooting direction shootR is opposite to its zr axis; (5) the image coordinate system o″x″y″ of the right photo (image2), a planar coordinate system used when viewing image2. Besides these five coordinate systems, there is also an unwrapped image coordinate system obtained by unrolling the global coordinate sphere by latitude and longitude.
Although most of the right hemisphere is not imaged in the first image, it can still be represented in the coordinate system oxlylzl. Rotating the coordinate system oxryrzr yields the coordinate system oxlylzl. The coordinates in oxryrzr of every point of the right hemisphere can be read from the second image, and the same point can be transformed into oxlylzl by multiplying by a certain matrix, written as:
where (xL, yL, zL) denotes coordinates in the coordinate system oxlylzl, and (xR, yR, zR) denotes coordinates in the coordinate system oxryrzr.
On the other hand, consider the conversion between the spherical and Cartesian coordinate systems, as in Figure 4. The sphere radius in Figures 3 and 4 is arbitrary and does not affect the imaging, so it can be taken as 1. The latitude φ, longitude θ, and Cartesian coordinates (x, y, z) are then related by x = cos φ cos θ, y = cos φ sin θ, z = sin φ,
and conversely by φ = arcsin z, θ = atan2(y, x).
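The spherical-to-Cartesian conversion and its inverse can be sketched as follows; the axis convention (longitude measured in the xy-plane, latitude from the equator toward +z) is an assumption:

```python
import math

def latlon_to_xyz(lat, lon):
    """Unit-sphere Cartesian coordinates from latitude/longitude (radians)."""
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def xyz_to_latlon(x, y, z):
    """Inverse conversion; the point is assumed to lie on the unit sphere."""
    return math.asin(max(-1.0, min(1.0, z))), math.atan2(y, x)
```

Because the sphere radius cancels out, these conversions can be used unchanged for every coordinate system described here once its points are normalized to the unit sphere.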
Combining the matrix transformation above with these spherical-Cartesian conversion relations, the transformation can be rewritten in terms of the latitudes and longitudes (φL, θL) and (φR, θR).
Step 4: use the transformation matrix to determine the color value of each pixel in the non-overlapping regions of the image pair, and perform pixel fusion in the overlapping region according to weight coefficients to determine the color value of each pixel there;
Specifically, the panoramic image has a 2:1 aspect ratio, with the width direction representing longitude and the height direction representing latitude. To generate one frame of the panoramic video, longitude 0-360° and latitude -90° to 90° must be traversed: for each longitude θ and latitude φ, look up the color values at the corresponding positions in the corresponding frames of the two fisheye videos, compute the fused color, and fill it into the corresponding frame of the generated video. The method is as follows:
The final stitched panorama is in fact the image obtained by unrolling the sphere in the global coordinate system by latitude and longitude. Pixel coordinates in the unrolled image convert to latitude and longitude, which in turn convert to global coordinates. For example, referring to Figure 5, the latitude and longitude of the pixel in row i, column j can be computed, for an unrolled image of width W and height H, as θ = 360°·j/W and φ = 90° - 180°·i/H.
The latitude and longitude are then converted into Cartesian coordinates (x, y, z) in the global coordinate system oxyz using the conversion relations above.
For this pixel, its corresponding position in the left shooting coordinate system is looked up. The relation between the shooting coordinate system and the global coordinate system must be considered first: consider the transformation from the coordinate system oxlylzl to oxyz in Figure 3. The solved matrix a transforms from the coordinate system oxryrzr to the oxlylzl coordinate system, while the global coordinate system oxyz differs from both (see Figure 2). oxlylzl can be transformed into oxyz as follows: rotate 180° about the yl axis, then rotate 90° about the new xl axis. That is:
Rearranging the above transformation expresses the shooting-system coordinates in terms of (x, y, z).
From this, the position of (x, y, z) in the left shooting coordinate system can be computed. The corresponding position in the right shooting coordinate system is obtained with the inverse of the solved matrix:
where the superscript -1 denotes matrix inversion. The shooting coordinate system shares latitude and longitude with its corresponding photo coordinate system, so the pixel coordinates (x′, y′) in the photo image can be computed from x′ = r′cos θ′, y′ = r′sin θ′, after which the pixel color can be looked up. Here r0 is the turning radius and (x′, y′) are the image coordinates of the first fisheye image; replacing (x′, y′) and the left-side quantities by (x″, y″) and their right-side counterparts gives the image coordinates (x″, y″) of the second fisheye image. See Figure 2 for the parameters and relations.
At least one of (x′, y′) in the first fisheye image and (x″, y″) in the second fisheye image has a color. A point examined in the unrolled image is sometimes imaged only in the first image, sometimes only in the second, and sometimes in both. The color value at each position of the stitched image is determined with the fusion formula c = k1·c1 + k2·c2, where c is the color value at that position in the stitched image, c1 and c2 denote the colors sampled from the first and second fisheye images, and k1, k2 are weight coefficients.
Here k1 + k2 = 1. The values can be taken as follows (see Figure 6): P1 and P2 are a matching point pair; P2e is the intersection of the ray through P2 along the radial direction with the fisheye boundary; P2e1 is the point in the first fisheye image corresponding to P2e (computed by multiplying P2e by the transformation matrix). P1e, analogous to P2e, is the intersection of the radial ray through P1 with the fisheye boundary. k1 and k2 can then be computed from the distance relation between P1 and the points P1e and P2e1, for example in proportion to those distances so that the weights vary smoothly across the overlap.
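A hedged sketch of such distance-proportional fusion weights; the linear falloff is an assumption consistent with k1 + k2 = 1, with d1 and d2 standing for a pixel's distances measured inward from the two coverage boundaries:

```python
def blend_weights(d1, d2):
    """Feathering weights (k1, k2) with k1 + k2 = 1, proportional to how
    deep the pixel lies inside each image's coverage."""
    total = d1 + d2
    if total == 0:
        return 0.5, 0.5
    return d1 / total, d2 / total

def blend(c1, c2, d1, d2):
    """Fused color c = k1*c1 + k2*c2 for one pixel (per-channel)."""
    k1, k2 = blend_weights(d1, d2)
    return tuple(k1 * a + k2 * b for a, b in zip(c1, c2))
```

With weights like these, a pixel at the edge of one image's coverage is taken almost entirely from the other image, producing the natural seam transition described in the summary below.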
Step 5: fill all the color values into the corresponding positions in the stitched video.
At this point, a panoramic video has been generated. Figure 8 shows an example screenshot of a generated panoramic video played with an ordinary video player. A dedicated panoramic video player can be used to obtain a 360-degree VR effect.
The above scheme is illustrated below with a concrete example. For a scene, two video clips each with a field of view greater than 180 degrees are shot with a camera and a fisheye lens, and the following steps are used to process them and generate a new video. Please refer to Figure 9.
1. Start. Perform the following operations on every frame of the two videos.
2. If this is the first frame, or control jumped here from step 11, proceed from step 3 onward; otherwise proceed directly from step 9.
3. Perform corner detection on image 1 (that is, the first image).
4. Search image 2 (that is, the second image) within the empirical range.
5. Determine the matching points in image 2 with the HSV-based search.
6. Select 5 pairs from the n matched pairs above.
7. If this is the first frame, compute the turning radius; otherwise skip this step. The turning radius computed on the first frame is used by the computations for all subsequent frames.
8. Solve the system of equations to compute the transformation matrix.
9. The panoramic image is a 2:1 image whose width direction represents longitude and height direction represents latitude. To generate one frame of the panoramic video, traverse longitude 0-360° and latitude -90° to 90°: for each longitude θ and latitude φ, look up the color values at the corresponding positions in the corresponding frames of the two fisheye videos, compute the fused color, and fill it into the corresponding frame of the generated video.
10. Every certain number of frames (for example every 10 frames), perform corner detection on the overlap region again.
11. Compare the number of corners in the fisheye overlap region detected in step 10 with the number in the latitude-longitude overlap region. If the two numbers differ greatly, return to step 2; otherwise jump to step 1, until all frames have been processed.
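The control flow of steps 1-11 can be sketched as follows; the entries of `ops` (detect_corners, match, select_five, turning_radius, solve_matrix, compose, overlap_ok) are hypothetical placeholders standing in for the procedures described in the text, not a real API:

```python
def stitch_video(frames, ops, check_every=10):
    """Recalibrate on the first frame and whenever the periodic overlap
    check fails; otherwise reuse the current transformation matrix."""
    matrix, r0, out = None, None, []
    for idx, (img1, img2) in enumerate(frames):
        if matrix is None:                                 # steps 2-8
            corners = ops['detect_corners'](img1)          # step 3
            pairs = ops['match'](img1, img2, corners)      # steps 4-5
            five = ops['select_five'](pairs)               # step 6
            if r0 is None:                                 # step 7 (first frame only)
                r0 = ops['turning_radius'](five)
            matrix = ops['solve_matrix'](five, r0)         # step 8
        out.append(ops['compose'](img1, img2, matrix, r0)) # step 9
        if (idx + 1) % check_every == 0:                   # steps 10-11
            if not ops['overlap_ok'](img1, out[-1]):
                matrix = None  # forces recalibration on the next frame
    return out
```

The key design point the text emphasizes is visible here: the expensive matching and equation solving run only when the cheap periodic corner-count check signals that the transformation has drifted.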
Based on the above technical solution, the spherical panoramic video stitching method provided by the embodiments of the present invention is an automatic stitching method suited to the 2×180° shooting mode. For two videos shot with fisheye lenses, corner detection is first performed on the images of the first video; the corresponding empirical positions in the corresponding frame images of the second video are then searched and compared, and the HSV-verification comparison yields each matching point in the second video; using 5 pairs of matching points, the system of equations is solved to compute the transformation matrix of the video images. The transformation matrix is used to copy the color value of each corresponding pixel into the generated video, with pixel fusion performed in the overlap region according to the coefficients. Each frame of the video is processed with appropriate optimizations, finally producing the stitched video. For panoramic video stitching, the method provided by this embodiment requires few shots, stitches quickly, and needs no manual adjustment or intervention once the stitching process starts. The generated video stitches well: the middle seam is joined accurately and the transition is natural. Over the entire video playback there is no image jitter or misalignment.
The spherical panoramic video stitching system provided by the embodiments of the present invention is introduced below; the stitching system described below and the stitching method described above may be referred to in correspondence with each other.
Please refer to FIG. 10, a structural block diagram of the spherical panoramic video stitching system provided by an embodiment of the present invention. The system may include:
a video acquisition module 100 for acquiring a video pair shot with fisheye lenses, the video pair comprising a first video having N frames of first images and a second video having N frames of second images, the N first images and the N second images being in one-to-one correspondence;
a stitching module 200 for performing the stitching operation on each corresponding first image and second image as an image pair to form the stitched video;
a stitching operation module 300 for determining the corners corresponding to the first image in the image pair; searching the empirical regions in the second image of the image pair corresponding to the corners and verifying the search results with the HSV algorithm to determine the matching point corresponding to each corner; computing the transformation matrix of the image pair from the selected corners and their matching points; using the transformation matrix to determine the color values of the pixels in the non-overlapping regions of the image pair, and performing pixel fusion in the overlap region according to the weight coefficients to determine the color values of the pixels there; and filling all the color values into the corresponding positions in the stitched video.
The embodiments in this specification are described in a progressive manner, each focusing on its differences from the others; for identical or similar parts, the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; see the description of the method for the relevant details.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The spherical panoramic video stitching method and system provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method and its core idea. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the present invention.
Claims (6)
Priority Applications (1)
- CN201710607242.0A | Priority date 2017-07-24 | Filing date 2017-07-24 | CN107333064B (en): Spherical panoramic video splicing method and system
Publications (2)
- CN107333064A (en) | 2017-11-07
- CN107333064B (en) | 2020-11-13
Family
ID=60200645
Cited By (1)
- US12167137B2 | 2024-12-10 | Realsee (Beijing) Technology Co., Ltd. | Method and device for generating a panoramic image
Families Citing this family (5)
- CN108154476A (en) | 2017-12-22 | 2018-06-12 | 成都华栖云科技有限公司 | The method of video-splicing correction
- CN109063632B (en) | 2018-07-27 | 2022-02-01 | 重庆大学 | Parking space characteristic screening method based on binocular vision
- CN110264397B (en) | 2019-07-01 | 2022-12-30 | 广东工业大学 | Method and device for extracting effective region of fisheye image
- CN113793382B (en) | 2021-08-04 | 2024-10-18 | 北京旷视科技有限公司 | Video image seam search method, video image splicing method and device
- CN114390222B (en) | 2022-03-24 | 2022-07-08 | 北京唱吧科技股份有限公司 | Switching method and device suitable for 180-degree panoramic video and storage medium
Citations (4)
- CN101621634A (en) | 2009-07-24 | 2010-01-06 | 北京工业大学 | Method for splicing large-scale video with separated dynamic foreground
- CN103516995A (en) | 2012-06-19 | 2014-01-15 | 中南大学 | A real time panorama video splicing method based on ORB characteristics and an apparatus
- CN103679636A (en) | 2013-12-23 | 2014-03-26 | 江苏物联网研究发展中心 | Rapid image splicing method based on point and line features
- CN106210535A (en) | 2016-07-29 | 2016-12-07 | 北京疯景科技有限公司 | The real-time joining method of panoramic video and device
2017-07-24: CN application CN201710607242.0A filed; granted as patent CN107333064B, status Active
Non-Patent Citations (2)
Title |
---|
Automatic panoramic image stitching algorithm based on grayscale accumulation evaluation; Luo Lihong, Tan Xiamei; Journal of Lanzhou University of Technology; 2007-06-30; Vol. 33, No. 3; full text * |
Automatic stitching of spherical panoramic images; Luo Lihong, Tan Xiamei; Computer Applications and Software; 2008-06-30; Vol. 25, No. 6; Sections 1-2, Figures 1-7 * |
Also Published As
Publication number | Publication date |
---|---|
CN107333064A (en) | 2017-11-07 |
Similar Documents
Publication | Title |
---|---|
CN107333064B (en) | Spherical panoramic video splicing method and system |
KR102227583B1 (en) | Method and apparatus for camera calibration based on deep learning | |
CN104699842B (en) | Picture display method and device | |
US6486908B1 (en) | Image-based method and system for building spherical panoramas | |
CN101394573B (en) | A method and system for generating panoramas based on feature matching | |
CN100437639C | Image processing apparatus and image processing method, storage medium, and computer program
TWI728620B (en) | Method of adjusting texture coordinates based on control regions in a panoramic image | |
Gallagher | Using vanishing points to correct camera rotation in images | |
CN108257183A | Camera lens axis calibration method and device
CN106157304A | Panoramic image stitching method and system based on multiple cameras
Ha et al. | Panorama mosaic optimization for mobile camera systems | |
JP6683307B2 (en) | Optimal spherical image acquisition method using multiple cameras | |
CN107527336B (en) | Lens relative position calibration method and device | |
Lo et al. | Image stitching for dual fisheye cameras | |
CN111866523B (en) | Panoramic video synthesis method and device, electronic equipment and computer storage medium | |
WO2013149866A2 (en) | Method and device for transforming an image | |
US10482571B2 (en) | Dual fisheye, hemispherical image projection and stitching method, device and computer-readable medium | |
KR20060056050A | Automated 360° Panorama Image Generation
KR101868740B1 (en) | Apparatus and method for generating panorama image | |
CN109767381A (en) | A shape-optimized rectangular panoramic image construction method based on feature selection | |
US20090059018A1 (en) | Navigation assisted mosaic photography | |
Ha et al. | Embedded panoramic mosaic system using auto-shot interface | |
CN107318010A (en) | Method and apparatus for generating stereoscopic panoramic image | |
CN112950468A (en) | Image splicing method, electronic device and readable storage medium | |
CN115375741A (en) | Registration method and related device for panoramic image and laser point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |