CN207369210U - Multilayer camera apparatus for stereoscopic image capture - Google Patents
Multilayer camera apparatus for stereoscopic image capture
- Publication number
- CN207369210U CN207369210U CN201721035347.5U CN201721035347U CN207369210U CN 207369210 U CN207369210 U CN 207369210U CN 201721035347 U CN201721035347 U CN 201721035347U CN 207369210 U CN207369210 U CN 207369210U
- Authority
- CN
- China
- Prior art keywords
- camera
- sensor
- multiple images
- image
- camera apparatus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H04N23/50—Cameras or camera modules comprising electronic image sensors; Control thereof; Constructional details
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N23/45—Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N2213/001—Details of stereoscopic systems; Constructional or mechanical details
Abstract
This application relates to a multilayer camera apparatus for stereoscopic image capture. In a general aspect, a camera apparatus can include a first tier of image sensors including a first plurality of image sensors, where the first plurality of image sensors is arranged in a circular shape and oriented such that a field of view of each sensor in the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape. The camera apparatus can include a second tier of image sensors including a second plurality of image sensors, where the second plurality of image sensors is oriented such that a field of view of each sensor in the second plurality of image sensors has an axis that is not parallel to the field-of-view axis of any sensor in the first plurality of image sensors.
Description
Cross reference to related applications
This application claims priority to and the benefit of U.S. Provisional Application No. 62/376,140, entitled "Multi-Tier Camera Rig for Stereoscopic Image Capture," filed on August 17, 2016, which is incorporated herein by reference in its entirety.
Technical field
This specification relates generally to camera apparatus. In particular, this specification relates to generating, from captured images, stereoscopic panoramas for display in virtual reality (VR) and/or augmented reality (AR) environments.
Background technology
Panoramic photography techniques can be used on images and video to provide a wide view of a scene. Conventionally, panoramic photography techniques and imaging techniques can be used to obtain panoramic images from a number of adjoining photographs taken with a traditional camera. The photographs can be aligned and mounted together to obtain a panoramic image.
Utility model content
A system of one or more computers, a camera rig, and image capture devices housed on the camera rig can be configured to perform particular operations or actions by virtue of software, firmware, hardware, or a combination thereof installed on the system that, in operation, causes the system to perform the actions.
In one general aspect, a camera apparatus includes: a first tier of image sensors, the first tier including a first plurality of image sensors arranged in a circular shape and oriented such that the field of view of each sensor in the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape; and a second tier of image sensors, the second tier including a second plurality of image sensors oriented such that the field of view of each sensor in the second plurality of image sensors has an axis that is not parallel to the field-of-view axis of any sensor in the first plurality of image sensors. The circular shape has a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
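For illustration only, the triple-overlap condition can be checked numerically. The sketch below is a minimal 2D geometric model, not part of the claimed apparatus; the sensor count, rig radius, field of view, and test distance are assumed example values.

```python
import math

def coverage_count(n_sensors, radius_m, fov_deg, point):
    """Count how many radially oriented sensors on a circular rig see a 2D point."""
    half_fov = math.radians(fov_deg) / 2.0
    count = 0
    for k in range(n_sensors):
        a = 2.0 * math.pi * k / n_sensors
        cam = (radius_m * math.cos(a), radius_m * math.sin(a))
        axis = (math.cos(a), math.sin(a))           # perpendicular to the circle's tangent
        v = (point[0] - cam[0], point[1] - cam[1])  # sensor-to-point vector
        dist = math.hypot(*v)
        cos_ang = (v[0] * axis[0] + v[1] * axis[1]) / dist
        if math.acos(max(-1.0, min(1.0, cos_ang))) <= half_fov:
            count += 1
    return count

# 16 sensors, 0.15 m rig radius, 95-degree horizontal FOV (assumed values):
# verify that every point on a 1 m ring is seen by at least three adjoining sensors.
worst = min(
    coverage_count(16, 0.15, 95.0, (math.cos(t / 100.0), math.sin(t / 100.0)))
    for t in range(628)
)
print("minimum sensors covering any 1 m point:", worst)  # expect >= 3
```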
In another general aspect, a camera apparatus includes: a first tier of image sensors, the first tier including a first plurality of image sensors arranged in a first plane and configured such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap; and a second tier of image sensors, the second tier including a second plurality of image sensors arranged in a second plane, each of the second plurality of image sensors having an aspect-ratio orientation different from the aspect-ratio orientation of each of the first plurality of image sensors. The first plurality of image sensors defines a circular shape, and the circular shape has a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
In yet another general aspect, a camera apparatus includes: a camera housing including a lower circumference and an upper multi-faceted cap, the lower circumference being below the multi-faceted cap; a first plurality of image sensors arranged in a circular shape along the lower circumference of the camera housing such that each of the first plurality of image sensors has an outward projection orthogonal to the lower circumference; and a second plurality of image sensors disposed on faces of the upper multi-faceted cap such that each of the second plurality of image sensors has an outward projection that is not parallel to a normal of the lower circumference. The circular shape of the first plurality of image sensors has a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
In addition, in other general aspects, a camera apparatus includes a first tier of image sensors with a first plurality of image sensors. The first plurality of image sensors can be arranged in a circular shape and oriented such that their field-of-view axes are perpendicular to a tangent of the circular shape in which they are arranged. The camera apparatus also includes a second tier including a second plurality of image sensors. The second plurality of image sensors can be oriented such that their field-of-view axes are not parallel to the field-of-view axes of the first plurality of image sensors. The second tier can be located above the first tier of the camera apparatus.
Embodiments can include one or more of the following features, alone or in combination with one or more other features. For example, in any or all of the above embodiments, the radius of the circular camera-apparatus housing that accommodates the first plurality of image sensors is defined such that a first field of view of a first image sensor in the first plurality of image sensors, a second field of view of a second image sensor in the first plurality of image sensors, and a third field of view of a third image sensor in the first plurality of image sensors intersect. In any or all of the above embodiments, the first image sensor, the second image sensor, and the third image sensor in the first plurality of image sensors are disposed in a plane.
In another aspect, a camera apparatus includes a camera housing. The camera housing includes a lower circumference and an upper multi-faceted cap, the lower circumference being located below the multi-faceted cap. The camera apparatus can also include a first plurality of cameras arranged in a circular shape along the lower circumference of the camera housing such that each of the first plurality of cameras has an outward projection orthogonal to the lower circumference. The camera apparatus can also include a second plurality of cameras, which can be disposed on respective faces of the multi-faceted cap such that each of the second plurality of cameras has an outward projection that is not parallel to a normal of the lower circumference.
In another aspect, a method includes defining a first set of images for a first tier of a multilayer camera apparatus, the first set of images being obtained from a first plurality of cameras arranged in a circular shape such that each of the first plurality of cameras has an outward projection orthogonal to the circular shape. The method can also include computing a first optical flow within the first set of images and stitching the first set of images together based on the first optical flow to create a first stitched image. The method also includes defining a second set of images for a second tier of the multilayer camera apparatus. The second set of images can be obtained from a second plurality of cameras arranged such that each of the plurality of cameras has an outward projection that is not parallel to a normal of the circular shape of the first plurality of cameras. The method can also include computing a second optical flow within the second set of images and stitching the second set of images together based on the second optical flow to create a second stitched image. The method generates an omnidirectional stereoscopic panoramic image by stitching the first stitched image and the second stitched image together.
Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the method.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features will be apparent from the description, the drawings, and the claims.
Brief description of the drawings
Fig. 1 is a block diagram of an example system for capturing and rendering stereoscopic panoramas in a 3D virtual reality (VR) environment.
Fig. 2 is a diagram depicting an example camera rig configured to capture images of a scene for use in generating stereoscopic panoramas.
Fig. 3 is a diagram depicting another example camera rig configured to capture images of a scene for use in generating stereoscopic panoramas.
Figs. 4A to 4D are diagrams depicting examples of a multilayer camera rig and associated components.
Fig. 5 is a diagram showing the field-of-view axes of the cameras in the lower circumference of the camera housing of a multilayer camera rig.
Fig. 6 is a diagram showing the field-of-view axes of the cameras in the upper multi-faceted cap of the camera housing of a multilayer camera rig.
Fig. 7 is a diagram illustrating an example VR device.
Fig. 8 is a graph illustrating an example of the number of cameras and neighbors as a function of camera field of view.
Fig. 9 is a graph illustrating an example of interpolated field of view as a function of camera field of view.
Fig. 10 is a graph illustrating an example of the selection of a configuration for a camera rig.
Fig. 11 is a graph illustrating an example relationship that can be used to determine a minimum number of cameras according to a predetermined rig diameter.
Figs. 12A-B are line-drawing examples of distortion that can occur during image capture.
Figs. 13A-B depict examples of rays captured during collection of a panoramic image.
Figs. 14A-B illustrate the use of approximately planar perspective projection as shown in Figs. 13A-B.
Figs. 15A-C illustrate examples of approximately planar perspective projection applied to a plane of an image.
Figs. 16A-B illustrate examples of introducing vertical parallax.
Figs. 17A-B depict example points of a coordinate system that can be used to illustrate points in a 3D panorama.
Fig. 18 represents a projected view of the points depicted in Figs. 17A-17B.
Fig. 19 illustrates rays captured in an omnidirectional stereo image using the panoramic imaging techniques described in this disclosure.
Fig. 20 is a graph illustrating the maximum vertical parallax caused by points in 3D space.
Fig. 21 is a flowchart illustrating one embodiment of a process for producing a stereoscopic panoramic image.
Fig. 22 is a flowchart illustrating one embodiment of a process for capturing a stereoscopic panoramic image.
Fig. 23 is a flowchart illustrating one embodiment of a process for rendering a panoramic image in a head-mounted display.
Fig. 24 is a flowchart illustrating one embodiment of a process for determining image boundaries.
Fig. 25 is a flowchart illustrating one embodiment of a process for generating video content.
Fig. 26 shows an example of a computer device and a mobile computer device that can be used to implement the techniques described herein.
Like reference numerals in the various drawings indicate like elements.
Detailed description
Creating a panoramic image generally includes capturing images or video of a surrounding three-dimensional (3D) scene using, for example, a single camera or a number of cameras in a camera rig. When using a camera rig that houses several cameras, each camera can be synchronized and configured to capture images at a particular point in time. For example, a first frame captured by each camera can be captured at approximately the same time as corresponding first frames captured by the second, third, and fourth cameras. The image capture can continue in this simultaneous manner until some or all of the scene is captured. Although many of the embodiments are described in terms of cameras, the embodiments could instead be described in terms of image sensors, or in terms of camera housings (which can include image sensors).
Camera rigs that house multiple cameras can be configured to capture particular angles of a scene. For example, cameras housed on the rig can be oriented at particular angles, and all (or at least part) of the content captured from those angles can be processed to generate a full panorama of a particular scene. In some embodiments, each of the cameras can be oriented at a different angle to capture different angles of the scene. In the event that only a portion of the scene is captured, or that some or all of the scene includes distortion, a number of processes can be performed to interpolate or configure any missing, corrupted, or distorted content of the panorama.
The following disclosure describes a number of apparatus and methods for capturing, processing, correcting, and rendering 3D panoramic content for the purpose of displaying such content in a head-mounted display (HMD) device in a 3D virtual reality (VR) environment. References to virtual reality can also include, or can be, augmented reality. In some embodiments, the camera apparatus can include multiple tiers of cameras to reduce or eliminate missing portions of the scene and to reduce interpolation. For example, in some embodiments, the camera apparatus can include 16 cameras in a lower tier and 6 cameras in an upper tier. In some embodiments, the ratio of lower-tier (or layer) cameras to upper-tier (or layer) cameras is greater than 2:1 but less than 3:1 (e.g., 2.67:1). The cameras can be oriented at different angles so that each camera captures different content that can be processed to generate a panorama of a particular scene. The ratio of cameras is important for capturing 360-degree video with appropriate depth, focus, and so forth, while also reducing or minimizing the number of cameras and the amount of image processing.
Fig. 1 is a block diagram of an example system 100 for capturing and rendering stereoscopic panoramas in a 3D virtual reality (VR) environment. In the example system 100, a camera rig 102 can capture images, store them locally (e.g., in permanent or removable storage), and/or provide them over a network 104; alternatively, the images can be provided directly to an image processing system 106 for analysis and processing. In some embodiments of system 100, a mobile device 108 can serve as the camera rig 102 to provide images throughout the network 104. Once the images are captured, the image processing system 106 can perform a number of calculations and processes on the images and provide the processed images to a head-mounted display (HMD) device 110 over the network 104 for rendering. In some embodiments, the image processing system 106 can be included in the camera rig 102 and/or the HMD device 110. In some embodiments, the image processing system 106 can also provide the processed images to a mobile device 108 and/or a computing device 112 for rendering, storage, or further processing.
The HMD device 110 can represent a virtual reality headset, glasses, an eyepiece, or another wearable device capable of displaying virtual reality content. In operation, the HMD device 110 can execute a VR application (not shown) that can play back received and/or processed images to a user. In some embodiments, the VR application can be hosted by one or more of the devices 106, 108, or 112 shown in Fig. 1. In one example, the HMD device 110 can provide a video playback of a scene captured by the camera rig 102. In another example, the HMD device 110 can provide playback of still images stitched into a single panoramic scene.
The camera rig 102 can be configured for use as a camera (also referred to as a capture device) and/or a processing device to gather image data for rendering content in a VR environment. Although the camera rig 102 is shown here as a block diagram described with particular functionality, the rig 102 can take the form of any of the implementations shown in Figs. 2 through 6 and, additionally, can have the functionality described for camera rigs throughout this disclosure. For example, for simplicity in describing the functionality of system 100, Fig. 1 shows the camera rig 102 without cameras disposed around the rig to capture images. Other implementations of the camera rig 102 can include any number of cameras arranged in multiple tiers that can surround the circumference of a circular camera rig such as rig 102.
As shown in Fig. 1, the camera rig 102 includes a number of cameras 139 and a communication system 132. The cameras 139 can include a single still camera or a single video camera. In some embodiments, the cameras 139 can include multiple still cameras or multiple video cameras disposed (e.g., seated) side by side along an outer periphery (e.g., ring) of the rig 102, in one or more tiers according to some embodiments. The cameras 139 can be video cameras, image sensors, stereoscopic cameras, infrared cameras, and/or mobile devices. The communication system 132 can be used to upload and download images, instructions, and/or other camera-related content. The communication can be wired or wireless and can interface over a private or public network.
The camera rig 102 can be configured to function as a stationary rig or a rotational rig. Each camera on the rig is disposed (e.g., placed) offset from a center of rotation of the rig. The camera rig 102 can be configured to rotate 360 degrees to sweep and capture, for example, all or a portion of a 360-degree view of a scene. In some embodiments, the rig 102 can be configured to operate in a stationary position, and in such a configuration, additional camera rigs can be added to the rig to capture additional outward views of the scene.
In some embodiments, the camera rig 102 includes multiple digital video cameras disposed in a side-to-side or back-to-back fashion (e.g., as shown in Fig. 3) such that their lenses each point in a radially outward direction to view different portions of the surrounding scene or environment. In some embodiments, the multiple digital video cameras are disposed in a tangential configuration with viewing directions tangent to the circular camera rig 102. For example, the camera rig 102 can include multiple digital video cameras disposed such that their lenses each point in a radially outward direction while being arranged tangentially to a base of the rig. The digital video cameras can be pointed to capture content in different directions to view different angled portions of the surrounding scene.
In some embodiments, the camera rig 102 can include multiple tiers of digital video cameras. For example, the camera rig can include a lower tier in which the digital video cameras are disposed in a side-to-side or back-to-back fashion, and can further include an upper tier with additional cameras disposed above the lower-tier cameras. In some embodiments, the upper-tier cameras face outward from the camera rig 102 in a plane different from that of the lower-tier cameras. For example, the upper-tier cameras can be disposed in a plane perpendicular, or nearly perpendicular, to the lower-tier cameras, and each camera can face outward from the center of the lower tier. In some embodiments, the number of cameras in the upper tier can be different from the number of cameras in the lower tier.

In some embodiments, images from the cameras in the lower tier can be processed in adjacent pairs on the camera rig 102. In such a configuration, a first camera in each set of adjacent cameras is disposed (e.g., placed) tangential to a circular path of the camera rig base and aligned (e.g., with the lens pointed) in a leftward direction. A second camera in each set of adjacent cameras is disposed (e.g., placed) tangential to the circular path of the camera rig base and aligned (e.g., with the lens pointed) in a rightward direction. The upper-tier cameras can be disposed similarly relative to one another. In some embodiments, adjacent cameras are (e.g., adjoining) neighbors on the same level or tier.
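As an illustration of the arrangement just described, the sketch below generates poses for a two-tier rig with tangentially aligned left/right pairs in the lower tier and upward-tilted cameras in the upper tier. The counts, radii, height, and tilt angle are assumed example values, not taken from the claims.

```python
import math
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple   # (x, y, z) in meters
    view_dir: tuple   # unit viewing direction

def lower_tier_poses(n_pairs, radius_m):
    """Adjacent pairs on a circle; each camera views tangentially (left or right)."""
    poses = []
    for k in range(n_pairs):
        a = 2.0 * math.pi * k / n_pairs
        pos = (radius_m * math.cos(a), radius_m * math.sin(a), 0.0)
        left = (-math.sin(a), math.cos(a), 0.0)    # tangent, leftward
        right = (math.sin(a), -math.cos(a), 0.0)   # tangent, rightward
        poses.append(CameraPose(pos, left))
        poses.append(CameraPose(pos, right))
    return poses

def upper_tier_poses(n_cams, radius_m, height_m, tilt_deg):
    """Cameras on a smaller ring above the lower tier, tilted upward and outward."""
    t = math.radians(tilt_deg)
    poses = []
    for k in range(n_cams):
        a = 2.0 * math.pi * k / n_cams
        pos = (radius_m * math.cos(a), radius_m * math.sin(a), height_m)
        view = (math.cos(a) * math.cos(t), math.sin(a) * math.cos(t), math.sin(t))
        poses.append(CameraPose(pos, view))
    return poses

rig = lower_tier_poses(8, 0.15) + upper_tier_poses(6, 0.10, 0.08, 45.0)
print(len(rig), "cameras")  # 16 lower + 6 upper
```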
Example settings for the cameras used on the camera rig 102 can include a progressive scan mode at about 60 frames per second (i.e., a mode in which each raster line is sampled to produce every frame of the video, rather than the interlaced mode that is standard for most video cameras). In addition, each of the cameras can be configured with identical (or similar) settings. Configuring each camera to identical (or similar) settings can provide the advantage that, after capture, the images can be stitched together in a desirable fashion. Example settings can include setting one or more of the cameras to the same zoom, focus, exposure, and shutter speed, as well as setting the cameras to have correlated white balance, with stabilization features either correlated or turned off.
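A minimal sketch of pushing one shared configuration to every camera follows; the `Camera.apply` setter and the field names are hypothetical stand-ins for a real camera SDK, which will differ.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CaptureSettings:
    fps: int = 60                  # progressive scan, ~60 frames per second
    scan_mode: str = "progressive"
    zoom: float = 1.0
    focus_m: float = 2.0
    exposure_ev: float = 0.0
    shutter_s: float = 1.0 / 120.0
    white_balance_k: int = 5600    # fixed, so color matches across cameras
    stabilization: bool = False    # off, so all cameras keep identical geometry

def configure_rig(cameras, settings=CaptureSettings()):
    """Push identical settings to every camera so captures stitch cleanly."""
    for cam in cameras:
        cam.apply(settings)        # `apply` is a hypothetical SDK call
```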
In some embodiments, the camera rig 102 can be calibrated prior to being used to capture one or more images or video. For example, each camera on the camera rig 102 can be calibrated and/or configured to take panoramic video. The settings can include, for example, configuring the rig to operate with a wide field of view, clockwise or counterclockwise, around a 360-degree sweep at a particular rotational speed. In some embodiments, the cameras on the rig 102 can be configured to capture, for example, one frame per degree of a 360-degree sweep of a capture path around a scene. In some embodiments, the cameras on the rig 102 can be configured to capture, for example, multiple frames per degree of a 360-degree (or less) sweep of a capture path around a scene. In some embodiments, the cameras on the rig 102 can be configured to capture, for example, multiple frames around a sweep of a capture path around a scene without having to capture a particular number of frames per degree.
In some embodiments, the cameras can be configured (e.g., set up) to function synchronously to capture video from the cameras on the camera rig at a specific point in time. In some embodiments, the cameras can be configured to function synchronously to capture particular portions of video from one or more of the cameras over a time period. Another example of calibrating the camera rig can include configuring how incoming images are stored. For example, incoming images can be stored as individual frames or video (e.g., .avi files, .mpg files), and such stored images can be uploaded to the internet, another server or device, or stored locally with each camera on the camera rig 102. In some embodiments, incoming images can be stored as encoded video.
The image processing system 106 includes an interpolation module 114, a capture correction module 116, and a stitching module 118. The interpolation module 114, for example, represents algorithms that can be used to sample portions of digital images and video and to determine a number of interpolated images that are likely to occur between adjoining images captured from the camera rig 102. In some embodiments, the interpolation module 114 can be configured to determine interpolated image fragments, image portions, and/or vertical or horizontal image strips between adjoining images. In some embodiments, the interpolation module 114 can be configured to determine flow fields (and/or flow vectors) between related pixels in adjoining images. Flow fields can be used to compensate for transformations that images have undergone and to process images that have undergone transformations. For example, flow fields can be used to compensate for a transformation of a particular pixel grid of an obtained image. In some embodiments, the interpolation module 114 can generate, by interpolation of surrounding images, one or more images that are not part of the captured images, and can interleave the generated images into the captured images to generate additional virtual reality content for a scene.
The capture correction module 116 can be configured to correct captured images by compensating for a non-ideal capture setup. Example capture setups, by way of non-limiting example, can include a circular camera trajectory, parallel principal (camera) axes, viewing directions perpendicular to the camera trajectory, viewing directions tangential to the camera rig trajectory, and/or other capture conditions. In some embodiments, the capture correction module 116 can be configured to compensate for one or both of a non-circular camera trajectory during image capture and/or non-parallel principal axes during image capture.
The capture correction module 116 can be configured to adjust a particular set of images to compensate for content captured using multiple cameras in which camera separation is greater than about 30 degrees. For example, if the distance between cameras is 40 degrees, the capture correction module 116 can account for any missing content in a particular scene based on too little camera coverage by collecting content from additional cameras or by interpolating the missing content.
In some embodiments, the capture correction module 116 can also be configured to adjust the set of images to compensate for camera misalignment due to camera pose errors and the like. For example, if camera pose errors (e.g., errors due to the orientation and position of a camera) occur during image capture, the module 116 can blend two or more columns of pixels from several image frames to remove artifacts, including artifacts due to poor exposure (or exposure changes from image frame to image frame) and/or due to misalignment of one or more cameras. The stitching module 118 can be configured to generate 3D stereoscopic images based on defined, obtained, and/or interpolated images. The stitching module 118 can be configured to blend/stitch pixels and/or image strips from multiple image portions. Stitching can be based on flow fields as determined by the interpolation module 114, for example. For example, the stitching module 118 can receive (from the interpolation module 114) interpolated image frames that are not part of the set of images and can interleave the image frames into the set of images. The interleaving can include the module 118 stitching the image frames and the set of images together based at least in part on the optical flow generated by the interpolation module 114.
The stitched combination can be used to generate an omnistereo (e.g., omnidirectional stereo) panorama for display in a VR head-mounted display. The image frames can be based on captured video streams collected from a number of adjacent pairs of cameras disposed on a particular rig. Such a rig can include about 12 to about 16 cameras in a first tier or level of the rig and 4 to 8 cameras in a second tier or level of the rig, where the second tier is located above the first tier. In some embodiments, an odd number of cameras can be included in each tier of the rig. In some embodiments, the rig includes more than one or two sets of adjacent cameras. In some embodiments, the rig can include as many sets of adjacent cameras as can fit side by side on the rig. In some embodiments, the stitching module 118 can use pose information associated with at least one adjacent pair to pre-stitch a portion of the set of images before performing the interleaving. Adjacent pairs on a camera rig are shown and described more clearly below in connection with, for example, Fig. 3.
In some embodiments, using optical flow techniques to stitch images together can include stitching together captured video content. Such optical flow techniques can be used to generate intermediate video content between particular video content previously captured using camera pairs and/or single cameras. This technique can be used as a way to simulate a continuum of cameras capturing images on a circular stationary camera rig. The simulated cameras can capture content similar to a method of sweeping a single camera around in a circular shape (e.g., a circle, a circular outline, a circular pattern) to capture a 360-degree image; however, in the technique above, fewer cameras are actually placed on the rig and the rig can be stationary. The ability to simulate a continuum of cameras also provides the advantage of being able to capture content for every frame in a video (e.g., capturing 360 images at capture intervals of one image per degree).
The generated intermediate video content can be stitched to the actually captured video content using optical flow by using a dense set of images (e.g., 360 images at one image per degree), when in fact the camera rig captured fewer than 360 images. For example, if a circular camera rig includes 8 pairs of cameras (i.e., 16 cameras), or 16 unpaired cameras, the captured image count can be as low as 16 images. The optical flow techniques can be used to simulate content between the 16 images to provide 360 degrees of video content.
In some embodiments, using optical flow techniques can improve interpolation efficiency. For example, instead of interpolating 360 images, optical flow can be computed between each consecutive pair of cameras (e.g., [1-2], [2-3], [3-4]). Given the 16 captured images and the optical flows, the interpolation module 114 and/or the capture correction module 116 can compute any pixel in any intermediate view without having to interpolate an entire image at one of the 16 image positions.
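A rough sketch of this pairwise-flow idea follows, using OpenCV's Farneback flow as a stand-in for the flow algorithm (which the disclosure does not specify). The backward-warp approximation of the intermediate view ignores occlusions and is for illustration only.

```python
import cv2
import numpy as np

def pairwise_flows(images):
    """Optical flow between each consecutive camera pair: [0-1], [1-2], ..."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    return [
        cv2.calcOpticalFlowFarneback(grays[i], grays[i + 1], None,
                                     0.5, 4, 31, 5, 7, 1.5, 0)
        for i in range(len(grays) - 1)
    ]

def intermediate_view(img_a, flow_ab, t):
    """Crude intermediate view at fraction t in [0, 1], made by warping
    image A along the scaled flow toward image B."""
    h, w = flow_ab.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx - t * flow_ab[..., 0]).astype(np.float32)
    map_y = (gy - t * flow_ab[..., 1]).astype(np.float32)
    return cv2.remap(img_a, map_x, map_y, cv2.INTER_LINEAR)
```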
In some embodiments, the stitching module 118 can be configured to stitch images collected from a rig with multiple tiers of cameras, where the tiers are positioned above or below one another. The stitching module 118 can process the video content captured from the cameras in each tier to create a stitched image for each tier and can then stitch the stitched images associated with each tier together to produce a 360-degree image. For example, a camera rig can include 16 cameras in a lower tier and 6 cameras in an upper tier, where the upper tier is located above the lower tier on the rig. In such an example, the stitching module 118 can stitch the images from the 16 cameras in the lower tier together to generate a stitched image associated with the lower tier (e.g., a lower stitched image). The stitching module 118 can also stitch the images from the 6 cameras in the upper tier together to generate a stitched image associated with the upper tier (e.g., an upper stitched image). To generate a 360-degree image, the stitching module can then stitch the lower stitched image together with the upper stitched image. In some embodiments, adjacent cameras are (e.g., adjoining) neighbors on the same level or tier.
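For illustration, the two-tier flow can be sketched with OpenCV's generic panorama stitcher standing in for the flow-based stitcher described above; this substitution is an assumption for the example, not the module's actual implementation.

```python
import cv2

def stitch_tier(images):
    """Stitch one tier's camera images into a single panorama."""
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"tier stitch failed with status {status}")
    return pano

def stitch_two_tier_rig(lower_images, upper_images):
    """Per-tier stitch (e.g., 16 lower, 6 upper), then combine the results."""
    lower_pano = stitch_tier(lower_images)        # lower stitched image
    upper_pano = stitch_tier(upper_images)        # upper stitched image
    return stitch_tier([lower_pano, upper_pano])  # combined 360-degree image
```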
The image processing system 106 also includes a projection module 120 and an image correction module 122. The projection module 120 can be configured to generate 3D stereoscopic images by projecting images into a planar perspective plane. For example, the projection module 120 can obtain a projection of a particular set of images and can configure a re-projection of a portion of the set of images by converting some of the images from a planar perspective projection into a spherical (i.e., equirectangular) perspective projection. The conversions include projection modeling techniques.
Projection modeling can include defining a projection center and a projection plane. In the examples described in this disclosure, the projection center can represent an optical center at an origin (0, 0, 0) of a predefined xyz coordinate system. The projection plane can be placed in front of the projection center, with a camera facing to capture images along a z-axis of the xyz coordinate system. In general, a projection can be computed using the intersection of the planar perspective plane with a particular image ray from a coordinate (x, y, z) to the projection center. Conversions of the projection can be effected, for example, by manipulating the coordinate systems using matrix computations.
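For a camera at the origin looking along +z, the ray-intersection computation described above reduces to the standard pinhole projection. A minimal sketch follows; the focal length and the matrix values are illustrative assumptions.

```python
import numpy as np

def project_point(p, f):
    """Intersect the ray from 3D point p to the projection center (origin)
    with the projection plane z = f in front of the center."""
    x, y, z = p
    if z <= 0:
        raise ValueError("point must be in front of the projection plane")
    return (f * x / z, f * y / z)

# Equivalent matrix form: apply the intrinsics, then dehomogenize.
K = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])  # intrinsics for f = 1 (assumed)
p = np.array([0.5, -0.2, 2.0])
u = K @ p
print(u[:2] / u[2], project_point(p, 1.0))  # both give (0.25, -0.1)
```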
Projection modeling for stereoscopic panoramas can include using multi-perspective images that do not have a single projection center. The multiple perspectives are typically shown as a circular shape (e.g., spherical) (see Fig. 13B). When rendering content, the systems described herein can use a sphere as an approximation when converting from one coordinate system to another.
In general, a spherical (i.e., equirectangular) projection provides a plane that is sphere-shaped, with the center of the sphere equally surrounding the projection center. A perspective projection provides a view of images of 3D objects on a planar (e.g., 2D surface) perspective plane to approximate a user's actual visual perception. In general, images can be rendered on flat image planes (e.g., a computer monitor, a mobile device LCD screen), so the projection is shown in planar perspective in order to provide an undistorted view. However, planar projection may not allow for a 360-degree field of view, so captured images (e.g., video) can be stored in equirectangular (i.e., spherical) perspective and re-projected to planar perspective at render time.
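A minimal sketch of that render-time re-projection, sampling a planar perspective view out of a stored equirectangular panorama; the field of view and output size are assumed example values.

```python
import cv2
import numpy as np

def equirect_to_perspective(pano, fov_deg=90.0, out_w=640, out_h=480):
    """Sample a planar perspective view out of an equirectangular panorama."""
    pano_h, pano_w = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    # Viewing ray for each output pixel (camera looks along +z).
    x = u - out_w / 2.0
    y = v - out_h / 2.0
    z = np.full_like(u, f, dtype=np.float64)
    norm = np.sqrt(x * x + y * y + z * z)
    lon = np.arctan2(x / norm, z / norm)   # longitude in [-pi, pi]
    lat = np.arcsin(y / norm)              # latitude in [-pi/2, pi/2]
    map_x = ((lon / (2 * np.pi) + 0.5) * pano_w).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * pano_h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)
```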
After particular re-projections are completed, the projection module 120 can transmit re-projected portions of images for rendering in an HMD. For example, the projection module 120 can provide a re-projected portion to a left-eye display in the HMD 110 and a re-projected portion to a right-eye display in the HMD 110. In some embodiments, the projection module 120 can be configured to calculate and reduce vertical parallax by performing the re-projection described above.
The image correction module 122 can be configured to generate 3D stereoscopic images by compensating for distortion, including, but not limited to, perspective distortion. In some embodiments, the image correction module 122 can determine a particular distance at which optical flow is maintained for 3D stereo and can segment the images to show only portions of a scene in which such flow is maintained. For example, the image correction module 122 can determine that the optical flow of 3D stereo images is maintained between, for example, about one radial meter from an outward edge of the circular camera rig 102 and about five radial meters from the outward edge of the camera rig. Accordingly, the image correction module 122 can ensure that the sample between one meter and five meters is selected for rendering in an undistorted projection in the HMD 110, while also providing a user of the HMD 110 with proper 3D stereo effects with proper parallax.
In some embodiments, the image correction module 122 can estimate optical flow by adjusting particular images. The adjustments can include, for example, rectifying a portion of an image, determining an estimated camera pose associated with the portion of the image, and determining a flow between images in the portion. In a non-limiting example, the image correction module 122 can compensate for a difference in rotation between two particular images in which flow is being computed. This correction can function to remove the flow component caused by the rotational difference (i.e., rotational flow). Such correction results in flow caused by translation (e.g., parallax flow), which can reduce the complexity of flow estimation computations while making the resulting images accurate and robust. In some embodiments, processes other than image correction can be performed on images before rendering. For example, stitching, blending, or additional corrective processes can be performed on the images before rendering is carried out.
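A sketch of removing the rotational flow component when the relative rotation R between two views is known (e.g., from pose estimation); the intrinsics K and rotation here are assumed inputs for illustration.

```python
import numpy as np

def rotational_flow(K, R, width, height):
    """Flow induced by a pure camera rotation R: warp each pixel through
    the infinite homography H = K @ R @ inv(K), then subtract its position."""
    H = K @ R @ np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pts = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    warped = pts @ H.T
    warped = warped[..., :2] / warped[..., 2:3]
    return warped - np.stack([u, v], axis=-1).astype(np.float64)

# Subtracting leaves only the translation-induced (parallax) flow:
# parallax_flow = total_flow - rotational_flow(K, R, w, h)
```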
In some embodiments, the image correction module 122 can correct projection distortion caused by image content captured with camera geometries that are not based on planar perspective projections. For example, corrections can be applied to images by interpolating images from a number of different viewing angles and by conditioning viewing rays associated with the images as originating from a common origin. The interpolated images can be interleaved into the captured images to produce virtual content that appears accurate to the human eye with a comfortable level of rotational parallax for the human eye.
In the example system 100, the devices 106, 108, and 112 can be a laptop computer, a desktop computer, a mobile computing device, or a gaming console. In some embodiments, the devices 106, 108, and 112 can be a mobile computing device that can be disposed (e.g., placed/located) within the HMD device 110. The mobile computing device can include a display device that can be used as the screen for the HMD device 110, for example. The devices 106, 108, and 112 can include hardware and/or software for executing a VR application. In addition, the devices 106, 108, and 112 can include hardware and/or software that can recognize, monitor, and track 3D movement of the HMD device 110 when these devices are placed in front of, or held within a range of positions relative to, the HMD device 110. In some embodiments, the devices 106, 108, and 112 can provide additional content to the HMD device 110 over the network 104. In some embodiments, the devices 102, 106, 108, 110, and 112 can be connected to or interfaced with one or more of each other, either paired or connected through the network 104. The connection can be wired or wireless. The network 104 can be a public communications network or a private communications network.
The system 100 can include electronic storage. The electronic storage can be included in any of the devices (e.g., the camera rig 102, the image processing system 106, the HMD device 110, and/or the like). The electronic storage can include non-transitory storage media that electronically store information. The electronic storage can be configured to store captured images, obtained images, pre-processed images, post-processed images, and so forth. Images captured with any of the disclosed camera rigs can be processed and stored as one or more streams of video or stored as individual frames. In some embodiments, storage can occur during capture, and rendering can occur directly after portions of the capture, to enable faster access to panoramic stereo content earlier than if capture and processing were not concurrent.
Fig. 2 is a diagram depicting an example camera rig 200 configured to capture images of a scene for use in generating stereoscopic panoramas. The camera rig 200 includes a first camera 202A and a second camera 202B attached to a ring-shaped support base (not shown). As shown, the cameras 202A and 202B are disposed in an annular position facing directly outward (toward the images/scene to be captured) and parallel to a center of rotation or axis (A1) of the rig 200. In some embodiments, the diagram of Fig. 2 can correspond to one tier of a multilayer camera rig.
In the depicted example, the cameras 202A and 202B are disposed (e.g., placed) a distance (B1) apart on a mount plate 208. In some embodiments, the distance (B1) between each camera on the camera rig 200 can represent an average human interpupillary distance (IPD). Placing the cameras an IPD distance apart can approximate how human eyes would view images as they rotate (left or right, as shown by arrow 204) to scan a scene around a capture path indicated by arrow 204. Example average human IPD measurements can be about 5 centimeters to about 6.5 centimeters. In some embodiments, each camera disposed at a standard IPD distance apart can be part of a stereo pair of cameras.
In some embodiments, the camera rig 200 can be configured to approximate the diameter of a standard human head. For example, the camera rig 200 can be designed with a diameter 206 of about 8 centimeters to about 10 centimeters. This diameter 206 can be selected for the rig 200 to approximate how a human head would rotate relative to the center of rotation A1 and view scene images with human eyes. Other measurements are possible, and the rig 200 or the system 100 can adjust the capture techniques and resulting images if, for example, a larger diameter were to be used.

In a non-limiting example, the camera rig 200 can have a diameter 206 of about 8 centimeters to about 10 centimeters and can house cameras placed an IPD distance of about 6 centimeters apart. A number of rig arrangements are described below. Each arrangement described in this disclosure can be configured with the aforementioned or other diameters and distances between cameras.
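For illustration, the interplay between rig diameter, the chord between adjacent cameras, and camera count can be sketched as below. Treating the adjacent-camera chord as an IPD-like stereo baseline is an assumption of this sketch, not a formula given in the disclosure.

```python
import math

def adjacent_baseline_m(diameter_m, n_cameras):
    """Chord length between adjacent cameras evenly spaced on the rig circle."""
    r = diameter_m / 2.0
    return 2.0 * r * math.sin(math.pi / n_cameras)

def cameras_for_baseline(diameter_m, baseline_m):
    """Smallest camera count whose adjacent chord does not exceed the baseline."""
    r = diameter_m / 2.0
    if baseline_m >= diameter_m:
        return 2
    return math.ceil(math.pi / math.asin(baseline_m / (2.0 * r)))

print(adjacent_baseline_m(0.10, 2))      # ~0.10 m: one pair spans the diameter
print(cameras_for_baseline(0.30, 0.06))  # -> 16 for an assumed 30 cm rig
```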
As shown in Fig. 2, the two cameras 202A, 202B can be configured with a wide field of view. For example, the cameras can capture a field of view of about 150 degrees to about 180 degrees. The cameras 202A, 202B can have fisheye lenses to capture wider views. In some embodiments, the cameras 202A, 202B function as a stereo pair.
In operation, the rig 200 can rotate 360 degrees around the center of rotation A1 to capture a panoramic scene. Alternatively, the rig can remain stationary, and additional camera rigs can be added to the camera rig 200 to capture additional portions of the 360-degree scene (e.g., as shown in Figs. 3 and 4).
Fig. 3 is a diagram depicting another example camera rig 300 configured to capture images of a scene for use in generating stereoscopic panoramas. The camera rig 300 includes multiple cameras 302A-302H attached to a ring-shaped support base (not shown). The first camera 302A is shown as a solid line, and the additional cameras 302B-302H are shown with broken lines to indicate that they are optional. In contrast to the parallel-mounted cameras shown in the camera rig 200 (see cameras 202A and 202B), the cameras 302A-302H are disposed tangentially to the outer circumference of the circular camera rig 300. As shown in Fig. 3, the camera 302A has a neighboring camera 302B and a neighboring camera 302H.
In the depicted example, similar to the cameras in the rig 200, the cameras 302A and 302B are disposed a specific distance (B1) apart. In this example, the cameras 302A and 302B can function as an adjacent pair to capture angles off of a center camera lens to the left and to the right, respectively, as described in detail below.
In one example, the camera rig 300 is a circular rig that includes a rotatable or fixed base (not shown) and a mount plate 306 (which can also be referred to as a support), and the adjacent pair of cameras includes: a first camera 302A placed on the mount plate 306 and configured to point in a viewing direction tangent to an edge of the mount plate 306 and arranged to point in a leftward direction; and a second camera 302B placed on the mount plate 306 side by side with the first camera and placed an interpupillary distance (or a different distance (e.g., less than an IPD distance)) from the first camera 302A, the second camera 302B being arranged to point in a viewing direction tangent to an edge of the mount plate 306 and arranged to point in a rightward direction. Similarly, an adjacent pair can be made from the cameras 302C and 302D, another pair from the cameras 302E and 302F, and yet another pair from the cameras 302G and 302H. In some embodiments, each camera (e.g., 302A) can be paired with a camera that is not adjacent to itself but is adjacent to its neighbor, so that each camera on the rig can be paired with another camera on the rig. In some embodiments, each camera can be paired with its direct neighbor (on either side).
In some embodiments, one or more stereo images can be generated by the interpolation module 114. For example, in addition to the stereo cameras shown on the camera rig 300, additional stereo cameras can be generated as synthetic stereo image cameras. In particular, analyzing rays from captured images (e.g., ray tracing) can produce simulated frames of a 3D scene. The analysis can include tracing rays backward from a viewpoint through a particular image or image frame and into the scene. If a particular ray strikes an object in the scene, each image pixel it passes through can be painted with a color to match the object. If the ray does not strike an object, the image pixel can be painted with a color matching a background or other feature in the scene. Using the viewpoint and ray tracing, the interpolation module 114 can generate additional scene content that appears to come from a simulated stereo camera. The additional content can include image effects, missing image content, background content, and content for outside the field of view.
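A minimal backward ray-tracing sketch of the pixel-painting rule just described; the single sphere is an assumed stand-in for scene geometry, and the colors and resolution are illustrative.

```python
import numpy as np

def ray_sphere_t(origin, d, center, radius):
    """Smallest positive t with |origin + t*d - center| = radius, else None."""
    oc = origin - center
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # d is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render_simulated_view(viewpoint, w=64, h=64, fov_deg=90.0):
    sphere_c = np.array([0.0, 0.0, 3.0])      # assumed scene object
    obj_color, bg_color = (200, 80, 40), (20, 20, 60)
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)
    img = np.zeros((h, w, 3), dtype=np.uint8)
    for v in range(h):
        for u in range(w):
            d = np.array([u - w / 2.0, v - h / 2.0, f])
            d /= np.linalg.norm(d)
            hit = ray_sphere_t(viewpoint, d, sphere_c, 1.0)
            # Paint the pixel with the object color on a hit, else background.
            img[v, u] = obj_color if hit is not None else bg_color
    return img

view = render_simulated_view(np.array([0.0, 0.0, 0.0]))
```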
As shown in Fig. 3, the cameras 302A-302H are disposed (e.g., placed) tangentially to the outer circumference of the camera rig 300 and, as such, can capture up to a 180-degree view of a scene. That is, since the cameras are placed in a tangential manner, a fully unobstructed 180-degree field of view can be captured in each camera on the rig.
In some embodiments, the camera rig 300 includes adjacent cameras. For example, the rig 300 can include the adjacent cameras 302A and 302B. The camera 302A can be configured with an associated lens directed in a viewing direction tangent to an edge of the mount plate 304 and arranged to point in a leftward direction. Similarly, the camera 302B can be disposed on the mount plate 304 in a side-by-side fashion to the camera 302A, placed at approximately a human interpupillary distance from the camera 302A, and arranged to point in a viewing direction tangent to an edge of the mount plate 304 and arranged to point in a rightward direction.
In some embodiments, particular sensors on the cameras 302A-H (or on the camera rig 300) can be disposed tangentially to the outer circumference of the cameras 302A-H (or the rig 300), rather than having the actual cameras 302A-H disposed tangentially. In this manner, the cameras 302A-H can be placed according to a user preference, and the sensors can detect which camera or cameras 302A-H can capture images based on the location of the rig 300, based on sweep speed, or based on camera configurations and settings.
In some embodiments, the neighbors can include a camera 302A and a camera 302E disposed in a back-to-back or side-by-side arrangement. This arrangement can also be used to gather viewing angles to the left and right of an azimuth 308 formed by the respective camera lenses and the mount plate 304. In some embodiments, the cameras are arranged at tilted angles to the left and to the right of the azimuth 308 formed by the camera lenses and the mount plate 304, respectively.
In some embodiments, cameras placed on camera apparatus 300 can be paired with any other adjacent camera during image interpolation, and simply aligned around the circular apparatus in an outward-facing direction. In some embodiments, apparatus 300 includes a single camera (for example, camera 302A). In the event that only camera 302A is mounted to apparatus 300, a stereoscopic panoramic image can be captured by rotating the camera apparatus 300 through a full 360 degrees clockwise.
In some embodiments, the diagram of Fig. 3 can correspond to one tier of a multilayer camera apparatus. For example, in such an embodiment, one tier of the multilayer camera apparatus can include cameras 302A-302H attached to a circumferential support structure of the multilayer camera apparatus.
Figs. 4A to 4D are diagrams illustrating various views (a perspective view, side view, top view, and bottom view, respectively) of a camera apparatus 400 (referred to as a multilayer camera apparatus), according to an embodiment. As shown, the camera apparatus 400 includes a camera housing 420 having a lower circumference 430 and an upper faceted cap 440. The lower circumference 430 can include cameras 405A to 405C and 405M. Although this embodiment of the lower circumference 430 includes more than four cameras, only four cameras are labeled for simplicity. In this embodiment, these cameras (which can also be referred to as capture devices or image sensors) may be collectively referred to as cameras 405. The upper cap 440 can include cameras 415A to 415B and 415M. Although this embodiment of the upper cap includes more than three cameras, only three cameras are labeled for simplicity. In this embodiment, these cameras (which can also be referred to as capture devices or image sensors) may be collectively referred to as cameras 415.
The cameras 405 (for example, 405A, etc.) are included in a first tier of cameras (or image sensors), and the cameras 415 (for example, 415A, etc.) are included in a second tier of cameras (or image sensors). The first tier of cameras can be referred to as the primary tier of the camera apparatus. As shown in Fig. 4B, the field of view (or its center) of each image sensor in the first tier is disposed in, or intersects, plane PQ1, and the field of view (or its center) of each image sensor in the second tier is disposed in, or intersects, plane PQ2. Plane PQ1 is parallel to plane PQ2.
In this embodiment, the camera apparatus 400 includes a first tier of sixteen cameras 405 and a second tier of six cameras 415. In some embodiments, the ratio of cameras in the lower tier (or layer) to cameras in the upper tier (or layer) is greater than 2:1 but less than 3:1 (for example, 2.67:1).
As shown, in this embodiment, the camera apparatus 400 includes only two tiers of cameras. The camera apparatus 400 does not include a third tier of cameras, and therefore has cameras in only two planes. In this embodiment, there is no corresponding tier of cameras, similar to the second tier, below the first tier of cameras of the camera apparatus 400. Cameras in a lower tier can be excluded to reduce image processing, weight, expense, and so on, without sacrificing image utility.
Although not shown, in some embodiments, a camera apparatus can include three tiers of cameras. In such an embodiment, the third tier can have the same number of cameras as the second tier (for example, six cameras) or a different number (for example, fewer or more). The first tier of cameras (for example, sixteen cameras) can be arranged between the second and third tiers.
As with other embodiments described herein, the cameras 405 of the lower circumference 430 of the camera apparatus 400 face outward (for example, away from the center of the camera apparatus 400). In this embodiment, each camera 405 is oriented such that the field of view of the camera 405 is centered on a lens-system axis perpendicular to a tangent of a circular shape (for example, a circle, or substantially a circle) defined by the lower circumference 430 of the camera housing 420 and, in turn, by the cameras 405. Such an example is illustrated at least in Fig. 5 with the axis 510 and tangent 520 associated with a camera 405.
In this embodiment, each camera is configured such that the axis 510 (shown in Fig. 5) can extend through a lens system (for example, the center of a lens or capture sensor) on one side of the camera apparatus 400, through the center 530 of the camera apparatus 400, and through another lens system on the opposite side of the camera apparatus 400. The cameras 405 (or image sensors) are laid out in a circular shape around the lower circumference 430 of the camera housing 420, such that each camera 405 has an outward projection (or projection center) that can be orthogonal to the lower circumference 430 and, in turn, orthogonal to the circular shape defined by the camera apparatus 400. In other words, the cameras 405 can have projections facing away from the interior of the camera apparatus 400.
In some embodiments, the lens system of each camera 405 is offset from the center of the body of that camera 405. This causes each camera to be disposed in the camera housing 420 at an angular offset relative to the other cameras 405, so that the field of view of each camera can be oriented perpendicular to the camera apparatus 400 (for example, perpendicular to a tangent of the circle defined by the lower circumference 430).
Although not shown, in some embodiments, an odd number of cameras can be included in the camera housing as part of the lower circumference 430. In such an embodiment, the lens system of a camera can have a field of view centered on an axis perpendicular to a tangent of the camera apparatus (or of the circle defined by the camera apparatus), without that axis passing through the lens systems of multiple cameras and through the center of the camera apparatus.
In some embodiments, a minimum or maximum geometry of the lower circumference 430 can be defined based on the optical properties (optics) of one or more cameras 405 (for example, field of view, pixel resolution). For example, a minimum diameter and/or maximum diameter of the lower circumference 430 can be defined based on the field of view of at least one camera 405. In some embodiments, a relatively large (or wide) field of view can result in a relatively small lower circumference 430. As shown in Figs. 4A to 4D, for example, each of the cameras 405 is arranged in a portrait mode (for example, a 4:3 aspect-ratio sensor oriented in portrait), so that the horizontal dimension of an image captured by a camera 405 is smaller than the vertical dimension of that image. In some embodiments, each of the cameras 405 can be arranged in any aspect ratio or orientation (for example, a 16:9 or 9:16 aspect ratio, or a 3:4 aspect ratio).
In some embodiments, the diameter (or radius RA, shown in Fig. 4C) of the lower circumference 430 is defined so that the fields of view of at least three adjoining cameras 405 overlap (for example, intersect at least at a point, an area, and/or a volume in space). The sensors of the cameras 405 are arranged in a plane (substantially parallel to a plane through the lower circumference 430). In some embodiments, the entire field of view (or substantially the entire field of view) of at least two adjoining cameras 405 can overlap with the field of view of a third camera 405 (adjoining at least one of the two adjoining cameras). In some embodiments, the fields of view of any set of three adjoining cameras 405 can overlap so that any point around the lower circumference 430 (for example, any point in the plane through the sensors of the cameras 405) can be captured by at least three cameras 405. The overlap of three adjoining cameras 405 can be important for capturing 360° video with appropriate depth, focus, and so on.
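The three-camera overlap condition can be checked with a simple far-field approximation. The sketch below assumes evenly spaced, radially oriented ring cameras and counts how many of them see a given horizontal direction; for nearby points the condition additionally depends on the ring diameter, as noted above:

```python
import math

def cameras_seeing_direction(theta_fov_deg, n):
    """Approximate number of ring cameras that see a given horizontal
    direction, assuming n evenly spaced cameras with radially centered
    fields of view (far-field approximation)."""
    spacing = 360.0 / n  # angular spacing between adjoining cameras
    return math.floor(theta_fov_deg / spacing)

# Sixteen cameras with 95-degree fields of view: every far direction is
# seen by at least four cameras, satisfying the three-camera overlap.
print(cameras_seeing_direction(95, 16))  # -> 4
```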
In some embodiments, the cameras 415 of the upper cap 440 of the camera apparatus 400 face outward (for example, away from the center of the camera apparatus 400). According to some embodiments, each of the cameras 415 is oriented such that the axis of the field of view along its lens system is not parallel to the axis of the field of view of the lens system of a camera 405. For example, as shown in Fig. 6, for those cameras 415 arranged directly above a camera 405 on the camera housing 420 (for example, camera 415A and camera 405A), the axis of the field of view 610 of the camera 415 forms an acute angle with the axis of the field of view 510 of the camera 405. For those cameras 415 arranged above a camera 405 on the opposite side of the camera housing 420 (for example, camera 415A and camera 405B), the axis of the field of view 610 of the camera 415 forms an obtuse angle with the axis of the field of view 510 of the camera 405.
In some embodiments, each camera is configured such that the axis 610 (shown in Fig. 6) can extend through a lens system (for example, the center of a lens or capture sensor) on one side of the camera apparatus 400 and through the center 630 of the lower circumference. The cameras 415 (or image sensors) are laid out in a circular shape around the upper cap 440 of the camera housing 420, such that each camera 415 has an outward projection (or projection center) that is not parallel to the normal of the lower circumference 430.
In some embodiments, the cameras 415 are arranged on respective faces 445 of the upper cap 440. For example, as shown in Figs. 4A-4D, camera 415A is arranged on face 445A, camera 415B is arranged on face 445B, and camera 415M is arranged on face 445M. The faces 445 of the upper cap 440 can be oriented in planes at angles different from the angle of the plane of the lower circumference 430. In some embodiments, while the cameras 405 of the lower circumference 430 can be directed outward from the center of the camera housing 420, the faces 445 can direct the cameras 415 upward and outward from the center of the camera housing 420, as shown in Figs. 4A-4D. In other words, the cameras 415 can have projections that face away from the interior of the camera apparatus 400 and upward. Although not shown, in some embodiments, an odd number of cameras 415 can be included in the camera housing 420 as part of the upper cap 440.
In some embodiments, a minimum or maximum geometry of the upper cap 440 can be defined based on the optical properties of one or more cameras 415 (for example, field of view, pixel resolution). For example, a minimum diameter and/or maximum diameter of the upper cap 440 can be defined based on the field of view of at least one camera 415. In some embodiments, a relatively large (or wide) field of view of at least one of the cameras 415 (for example, of the sensor of at least one camera 415) can result in a relatively small upper cap 440. As shown in Figs. 4A to 4D, for example, each of the cameras 415 is arranged in a landscape mode (for example, a 3:4 aspect-ratio mode), so that the horizontal dimension of an image captured by a camera 415 is larger than the vertical dimension of that image. In some embodiments, each of the cameras 415 can be laid out in any aspect ratio or orientation (for example, a 16:9 or 9:16 aspect ratio, or a 4:3 aspect ratio).
In some embodiments, a diameter (or radius) of the upper cap 440 is defined such that the fields of view of at least three adjoining cameras 415 overlap. In some embodiments, the entire field of view (or substantially the entire field of view) of at least two adjoining cameras 415 can overlap with the field of view of a third camera 415 (adjoining at least one of the two adjoining cameras 415). In some embodiments, the fields of view of any set of three adjoining cameras 415 can overlap so that any point around the upper cap 440 (for example, any point in the plane through the sensors of the cameras 415) can be captured by at least three cameras 415.
According to some embodiments, the faces 445 can be angled so that the cameras 415 capture images outside the fields of view of the cameras 405. For example, because the cameras 405 can be arranged along the lower circumference 430 of the camera housing 420 such that each of them has an outward projection orthogonal to the lower circumference 430, the cameras 405 may be unable to capture images directly above the camera housing 420. The faces 445 can therefore be angled so that the cameras 415 capture images directly above the camera housing 420.
According to some embodiments, the camera apparatus 400 can include a stem housing 450. The stem housing 450 can include one or more airflow chambers configured to carry heat away from the cameras 405 and 415 toward the bottom of the camera apparatus 400. According to some embodiments, a fan 460 located at the bottom of the stem housing can promote air flow through the camera apparatus 400 and remove heat from the camera housing 420.
In some embodiments, the camera apparatus 400 can include a microphone (not shown) for recording audio associated with the images (and video) captured with the cameras 405 and 415. In some embodiments, the camera apparatus 400 can include a microphone mount to which an external microphone can be attached and connected to the camera apparatus 400.
In some embodiments, the camera apparatus 400 can include a mechanism for mounting to another device, such as a tripod, and the mounting mechanism can be attached to the stem housing 450. In some embodiments, one or more openings can be arranged (for example, in the bottom side of the camera apparatus 400) so that the camera apparatus 400 can be mounted to a tripod. In some embodiments, a coupling mechanism for mounting the camera apparatus 400 to another device, such as a tripod, can be arranged on the side opposite the position of the microphone mount. In some embodiments, the coupling mechanism for mounting the camera apparatus 400 to another device can be on the same side as the position of the microphone mount.
In some embodiments, the camera apparatus 400 can be removably coupled to another device, such as a carrier (for example, an aerial carrier such as a quadcopter). In some embodiments, the camera apparatus 400 can be made of a material light enough that the camera apparatus 400 and the associated cameras 405, 415 can be moved using a carrier such as a quadcopter.
In some embodiments, the camera apparatus described in this disclosure can include any number of cameras mounted on a circular housing. In some embodiments, adjacent camera pairs can be mounted equidistantly in each of four directions outward from the center of the circular apparatus. In this example, the cameras configured as stereoscopic neighbors can be aimed outward along the circumference and arranged at zero degrees, 90 degrees, 180 degrees, and 270 degrees, so that each stereoscopic pair captures a separate quadrant of a 360-degree field of view. In general, the selectable field of view of the cameras determines the amount of overlap between the camera fields of view of stereoscopic neighbors, as well as the size of any blind spots between cameras and between adjoining quadrants. One example camera apparatus can employ one or more stereoscopic camera neighbors configured to capture a field of about 120 degrees to about 180 degrees.
In some embodiments, the camera housing of the multilayer camera apparatus described in this disclosure can be configured with a diameter of about 5 centimeters to about 8 centimeters (for example, diameter 206 in Fig. 2) to mimic a human interpupillary distance, in order to capture, for example, what a user would see if she rotated her head or body through a quarter circle, half circle, full circle, or other portion of a circle. In some embodiments, the diameter can refer to the distance through the apparatus or camera housing from camera lens to camera lens. In some embodiments, the diameter can refer to the distance through the apparatus from one camera sensor to another camera sensor.
In some embodiments, the camera apparatus is scaled up from about 8 centimeters to about 25 centimeters, for example, to accommodate additional camera fixtures. In some embodiments, fewer cameras can be used on an apparatus of smaller diameter. In such an example, the systems described in this disclosure can ascertain or deduce views between the cameras on the apparatus and interleave such views with the actually captured views.
In some embodiments, the camera apparatus described in this disclosure can be used to capture a panoramic image in a single exposure, for example, by using a camera with a rotating lens, or a rotating camera, to capture an entire panorama. The cameras and camera apparatus described above can be used with the methods described in this disclosure. Specifically, a method described with respect to one camera apparatus can be performed using any of the other camera apparatus described herein. In some embodiments, the camera apparatus and the content subsequently captured can be combined with other content, such as virtual content, rendered computer graphics (CG) content, and/or other obtained or generated images.
In general, images captured with at least three cameras (for example, 405A, 405B, 405C) on the camera apparatus 400 can be used to calculate depth measures for a particular scene. The depth measures can be used to convert portions of the scene (or images from the scene) into 3D stereoscopic content. For example, the interpolation module 114 can use the depth measures to produce 3D stereoscopic content that can be stitched into 360-degree stereoscopic video imagery.

In some embodiments, the camera apparatus 400 can capture all of the rays needed for the omnidirectional stereo (ODS) projection shown in, for example, Fig. 19, while maximizing image quality and minimizing image distortion.
The cameras (of each tier) lie along a circle of radius R that is larger than the radius r (not shown) of the ODS view circle. An ODS ray passing through a camera will make an angle of sin-1(r/R) with the normal of the circle on which the camera lies. Two different camera layouts are possible in this manner: a tangential layout (not shown), and a radial layout, such as shown in each tier of the camera apparatus 400 of Figs. 4A to 4D.

The tangential layout dedicates half of the cameras to capturing rays for the left image and the other half to capturing rays for the right image, and aligns each camera so that the ODS rays passing through that camera travel along the camera's optical axis. The radial layout of the camera apparatus 400, on the other hand, uses all of the cameras to collect rays for both the left and right images, and each camera therefore faces directly outward.
In some embodiments, an advantage of the radial design of the camera apparatus 400 is that image interpolation occurs between adjoining cameras, whereas in the tangential design image interpolation must occur between every other camera, which doubles the baseline of the view interpolation problem and makes it more challenging. In some embodiments, each camera of the radially designed camera apparatus 400 must capture rays for both the left and right images, which adds 2sin-1(r/R) to the horizontal field of view required of each camera. In practice, this means that the radial design of the camera apparatus 400 is better for larger apparatus radii, and the tangential design is better for smaller radii.
For example, the cameras included in the camera apparatus 400 may be more than 3 cm wide, which can limit how small the camera apparatus 400 can be made. Accordingly, in some embodiments, the radial design can be more suitable, and the further discussion is based on that layout. In some embodiments, the geometry of the camera apparatus 400 can be described with three parameters (see Fig. 4C): the radius R of the apparatus, the horizontal field of view γ of the cameras, and the number n of cameras. The camera apparatus 400 described herein realizes at least the following considerations:
- Minimize the apparatus radius R, thereby reducing vertical distortion.
- Minimize the distance between adjoining cameras, thereby reducing the baseline of the view interpolation.
- Provide each camera with enough horizontal field of view that content at least a certain distance d from the apparatus can be stitched.
- Maximize the vertical field of view of each camera, which yields a large vertical field of view for the output video.
- Maximize overall image quality, which generally requires using large cameras.
Between adjoining cameras in a ring (or tier) of the camera apparatus 400, views can be synthesized on the line between the two cameras, and these synthesized views can only include points observed by both cameras. Fig. 4C shows the quantities involved in allowing all points at least a distance d from the center of the camera apparatus 400 to be stitched and observed by a camera. Given a ring of n cameras with radius R, the minimum horizontal field of view required of each camera can be derived as follows:

b² = d² + R² − 2dR cos β (2)
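One way to evaluate this relation numerically is sketched below. It assumes the worst-case stitched point lies on the bisector between two adjoining cameras (central angle β = π/n), computes the camera-to-point distance b with equation (2), and adds the 2sin-1(r/R) margin required by the radial layout; the completion from b to a field of view is an illustrative assumption, not a formula given in this disclosure:

```python
import math

def min_horizontal_fov(n, R, d, r):
    """Estimate the minimum horizontal FOV (degrees) per ring camera.

    n: number of cameras in the ring; R: ring radius (m)
    d: minimum stitching distance from the ring center (m)
    r: radius of the ODS view circle (m)
    Assumes the worst-case point sits on the bisector between two
    adjoining cameras, i.e. central angle beta = pi / n.
    """
    beta = math.pi / n
    # Equation (2): distance from a camera to the stitched point.
    b = math.sqrt(d**2 + R**2 - 2 * d * R * math.cos(beta))
    # Angle between the camera's outward optical axis and the ray to the
    # point (law of sines in the same triangle; valid when d*cos(beta) > R).
    half_angle = math.asin(d * math.sin(beta) / b)
    # Radial layout: every camera collects left- and right-eye rays,
    # adding 2*asin(r/R) to the required horizontal field of view.
    return math.degrees(2 * half_angle + 2 * math.asin(r / R))

# Example: 16 cameras, 15 cm ring radius, 0.5 m stitch distance,
# 3 cm view-circle radius.
print(round(min_horizontal_fov(16, 0.15, 0.5, 0.03), 1))  # -> 55.1
```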
In some embodiments, the panoramic images produced by each tier of a camera apparatus (for example, camera apparatus 400) can have at least 10 degrees of overlap at a desired minimum stitching distance (for example, about 0.5 m). More details are disclosed in: Anderson et al., "Jump: Virtual Reality Video", ACM Transactions on Graphics (TOG), Proceedings of ACM SIGGRAPH Asia 2016, Vol. 35, Issue 6, Art. No. 198, November 2016, the entire contents of which are incorporated herein by reference.
Fig. 7 is a diagram illustrating an example VR device (VR headset) 702. A user can put on the VR headset 702 by placing the headset 702 over her eyes, similar to putting on goggles, sunglasses, and so on. In some embodiments, referring to Fig. 1, the VR headset 702 can interface with/connect to multiple monitors of the computing devices 106, 108, or 112 using one or more high-speed wired and/or wireless communication protocols (for example, Wi-Fi, Bluetooth, Bluetooth LE, USB, etc.), or by using an HDMI interface. The connection can supply virtual content to the VR headset 702 for display to the user on a screen (not shown) included in the VR headset 702. In some embodiments, the VR headset 702 can be a projection-enabled device. In these embodiments, the user can choose to provide or project (cast) content to the VR headset 702.
In addition, the VR headset 702 can interface with/connect to the computing device 104 using one or more high-speed wired and/or wireless communication interfaces and protocols (for example, Wi-Fi, Bluetooth, Bluetooth LE, Universal Serial Bus (USB), etc.). The computing device (Fig. 1) can recognize the interface to the VR headset 702 and, in response, can execute a VR application that places the user and the computing device in a computer-generated 3D environment (a VR space) that includes virtual content.
In some embodiments, the VR headset 702 can include a removable computing device that can execute a VR application. The removable computing device can be similar to computing device 108 or 112. The removable computing device can be incorporated in a housing or frame of a VR headset (for example, the VR headset 702), which can then be worn by a user of the VR headset 702. In these embodiments, the removable computing device can provide the display or screen that the user views when interacting with the computer-generated 3D environment (VR space). As described above, the mobile computing device 104 can connect to the VR headset 702 using a wired or wireless interface protocol. The mobile computing device 104 can be a controller in the VR space, can appear as an object in the VR space, can provide input to the VR space, and can receive feedback/output from the VR space.
In some embodiments, the mobile computing device 108 can execute a VR application and can provide data to the VR headset 702 for creating the VR space. In some embodiments, the content of the VR space shown to the user on the screen included in the VR headset 702 can also be shown on a display device included in the mobile computing device 108. This allows other people to see what the user may be interacting with in the VR space.
The VR headset 702 can provide information and data indicating the position and orientation of the mobile computing device 108. The VR application can receive and use the position and orientation data as an indication of user interactions within the VR space.
Fig. 8 is a diagram illustrating an example graph 800 of the number of cameras (and neighbors) as a function of the camera field of view for one tier of the multilayer camera apparatus. The graph 800 represents an example graph that can be used to determine, for a predefined field of view, the number of cameras that can be arranged on one tier of the multilayer camera apparatus to generate a stereoscopic panorama. The graph 800 can be used to calculate camera settings and camera placements that ensure a particular stereoscopic panorama result. One example setting can include selecting a number of cameras to attach to a particular camera apparatus. Another setting can include determining the algorithms to be used during the capture, pre-processing, or post-processing steps. For example, for optical flow interpolation techniques, stitching a full 360-degree panorama can dictate that every optical ray direction be seen by at least two cameras. This can limit the minimum number of cameras to be used to cover a full 360 degrees, as a function of the camera field of view theta [θ]. Optical flow interpolation techniques can be performed and configured per camera neighbor (or pair) or per camera.
As shown in Fig. 8, a graph illustrating function 802 is depicted. The function 802 represents the number of cameras [n] 804 as a function of the camera field of view [θ] 806. In this example, a camera field of view of about 95 degrees is shown by line 808. The intersection 810 of line 808 and function 802 shows that using sixteen (16) cameras, each with a 95-degree field of view, would provide the desired panoramic result. In such an example, the camera apparatus can be configured by interleaving the adjacent cameras of each adjacent camera set, so as to use any space that may arise when placing the adjacent cameras on the apparatus.
In addition to interleaving adjacent cameras, an optical flow requirement can specify that the system 100 compute optical flow between cameras of the same type. That is, optical flow can be computed for a first camera and then for a second camera, rather than for both simultaneously. In general, the flow at a pixel can be calculated as an orientation (for example, a direction and angle) and a magnitude (for example, a speed).
Fig. 9 is an example graph 900 illustrating the interpolated field of view [θ1] 902 as a function of the camera field of view [θ] 904. The graph 900 can be used to determine which portion of a camera's field of view is shared with its left or right neighbor. Here, at a camera field of view of about 95 degrees (shown by line 906), the interpolated field of view is shown as about 48 degrees, as indicated by intersection 908.
Given that two consecutive cameras typically do not capture images of exactly the same field of view, the field of view of an interpolated camera will be represented by the intersection of the fields of view of the camera neighbors. The interpolated field of view [θ1] can be a function of the camera field of view [θ] and the angle between camera neighbors. If the minimum number of cameras has been selected for a given camera field of view (using the method shown in Fig. 8), then [θ1] can be calculated as a function of [θ], as shown in Fig. 9.
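The readings of graphs 800 and 900 can be reproduced approximately under explicit assumptions. The sketch below assumes that every ray direction must be seen by two cameras per eye (for graph 800) and that each camera is paired with its neighbor's neighbor (for graph 900); these closed forms are illustrative readings of the graphs, not formulas given in this disclosure:

```python
import math

def min_cameras(theta_deg):
    """Cameras needed so every direction is seen by two cameras per eye
    (an assumed reading of graph 800)."""
    return math.ceil(4 * 360 / theta_deg)

def interpolated_fov(theta_deg, n):
    """Field of view shared by a camera pair two positions apart
    (an assumed reading of graph 900)."""
    return theta_deg - 2 * (360 / n)

n = min_cameras(95)                # -> 16, matching intersection 810
print(n, interpolated_fov(95, n))  # -> 16 50.0, near the ~48 degrees at 908
```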
Fig. 10 is an example graph 1000 illustrating selection of a configuration for a camera apparatus. Specifically, the graph 1000 can be used to determine how large a particular camera apparatus can be designed. The graph 1000 depicts the stitching ratio [d/D] 1002 as a function of the apparatus diameter [D, in centimeters]. To produce a comfortable virtual reality panoramic viewing experience, the omnidirectional stereo stitching diameter [d] is, in the examples of this disclosure, selected to be about 5 centimeters to about 6.5 centimeters, which is a typical human IPD. In some embodiments, omnidirectional stereo stitching can be performed using a capture diameter [D] roughly the same as the stitching diameter [d]. That is, maintaining a stitching ratio of about "1" can, for example, provide easier stitching in the post-processing of omnidirectional stereo imagery. This particular configuration can minimize distortion, because the optical rays used for stitching are the same as the rays actually captured by the cameras. Obtaining a stitching ratio of "1" can be difficult when the number of cameras selected is high (for example, 12-18 cameras per apparatus).
To alleviate the problem of too many cameras on an apparatus, the apparatus can be designed with a larger size to accommodate additional cameras while allowing the stitching ratio to remain the same (or substantially the same). To ensure that the stitching algorithm samples content in images captured near the lens centers during capture, the stitching ratio can be fixed to determine the angle [α] of the cameras relative to the apparatus. For example, Fig. 10 shows that sampling near the optical centers improves image quality and minimizes geometric distortion. Specifically, a smaller angle [α] can help avoid occlusion by the apparatus (for example, the apparatus itself imaging parts of its cameras).
As shown in Fig. 10, at 1006, a stitching ratio [d/D] of 0.75 corresponds to an apparatus diameter of about 6.5 centimeters (i.e., a typical human IPD). Reducing the stitching ratio [d/D] to about 0.45 allows the apparatus diameter to increase to about 15 centimeters (shown at 1008), which can allow additional cameras to be added to the apparatus. The angle of the cameras relative to the camera apparatus can be adjusted based on the selected stitching ratio. For example, adjusting the camera angle to about 30 degrees indicates that the apparatus diameter can be as large as about 12.5 centimeters. Similarly, for example, adjusting the camera angle to about 25 degrees indicates that the apparatus diameter can be as large as 15 centimeters, while still maintaining proper parallax and visual effects when rendering for a user.
In general, given the apparatus diameter [D], the optimal camera angle [α] can be calculated. From [α], the maximum field of view [Θu] can be calculated. The maximum field of view [Θu] generally corresponds to the field of view for which the apparatus does not partially occlude the cameras. The maximum field of view can limit how many cameras the camera apparatus can hold while still providing unoccluded views.
Fig. 11 is a graph 1100 illustrating an example relationship that can be used to determine the minimum number of cameras for one tier of the multilayer camera apparatus, according to a predefined apparatus diameter. Here, the minimum number of cameras [nmin] 1102 for a given tier is shown against the apparatus diameter [D] 1104. The apparatus diameter [D] 1104 limits the maximum unoccluded field of view, which in turn limits the minimum number of cameras. As shown, at 1106, for an apparatus diameter of about 10 centimeters, a minimum of sixteen (16) cameras can be used in one tier of the camera apparatus to provide unoccluded views. Modifying the apparatus diameter can allow the number of cameras placed on the apparatus to be increased or decreased. In one example, the apparatus can accommodate about 12 to about 16 cameras on an apparatus sized from about 8 to about 25 centimeters.
Because other methods can be used to adjust field of view and image capture settings, these calculations can be combined with those other methods to further refine the camera apparatus size. For example, optical flow algorithms can be used to change (for example, reduce) the number of cameras typically used to stitch an omnidirectional stereo panorama. In some embodiments, the graphs described in this disclosure, or generated from the systems and methods described in this disclosure, can be used in combination to generate virtual content for rendering in an HMD device.
Figs. 12A-B represent line-drawing examples of distortion that can occur during image capture. Specifically, the distortion shown here corresponds to effects that occur when capturing stereoscopic panoramas. In general, distortion can be more severe when the scene being captured is close to the cameras capturing it. Fig. 12A represents a two-meter-by-two-meter plane in a scene arranged one meter out from the camera centers. Fig. 12B is the same plane as in Fig. 12A, but the plane in this figure is 25 centimeters out from the cameras. Both figures use a 6.5-centimeter capture diameter. Fig. 12A shows slight stretching near the center at 1202, and Fig. 12B shows a more expanded center 1204. Many techniques can be used to correct this distortion. The following paragraphs describe approximation methods and systems (for example, camera apparatus/capture devices) that analyze projections (for example, spherical and planar projections) of captured image content to correct the distortion.
Figs. 13A-B depict examples of rays collected by the cameras on one tier of the multilayer camera apparatus during capture of a panoramic image. Fig. 13A shows that, given a captured set of images, perspective images can be generated for both the left eye and the right eye from anywhere on the capture path 1302. Here, rays for the left eye are shown with ray 1304a, and rays for the right eye are shown at 1306a. In some embodiments, not every depicted ray may be captured, due to camera settings, failures, or simply an apparatus setup insufficient for the scene. Because of this, some of the rays 1304a and 1306a can be approximated (for example, interpolated based on other rays). For example, if the scene is infinitely far away, one measurable feature of the scene is the ray direction from an origin to a destination.
In some embodiments, the ray origin may not be collectable. The systems of this disclosure can therefore approximate the left eye and/or right eye to determine an origin position for a ray. Fig. 13B shows approximated ray directions 1306b to 1306f for the right eye. In this example, instead of originating from the same point, each ray originates from a different point on the circle 1302. The rays 1306b to 1306f are shown angled tangentially to the capture circle 1302, and arranged at particular regions around the circumference of the capture circle 1302. In addition, the positions of two different image sensors associated with the camera apparatus, image sensor 13-1 and image sensor 13-2 (associated with, or included in, cameras), are shown on the camera apparatus circle 1303. As shown in Fig. 13B, the camera apparatus circle 1303 is larger than the capture circle 1302.
Multiple rays approximated in this way, pointing in different directions outward from the circle (together with the image color and intensity associated with each ray), can be used. In this way, full 360-degree panoramic views, comprising many images, can be provided for both left-eye and right-eye views. This technique can resolve distortion for objects at intermediate distances, but in some instances can still deform nearby objects. For simplicity, the approximated left-eye ray directions are not depicted. In this example embodiment, only a few rays 1306b to 1306f are illustrated. However, thousands of such rays (and images associated with those rays) can be defined. Accordingly, many new images associated with each of the rays can be defined (for example, interpolated).
As shown in Fig. 13B, ray 1306b is projected between image sensor 13-1 and image sensor 13-2, which can be arranged on one tier of the multilayer camera apparatus. Image sensor 13-1 is adjacent to image sensor 13-2. The ray can be a distance G1 from image sensor 13-1 (for example, from the projection center of image sensor 13-1) and a distance G2 from image sensor 13-2 (for example, from the projection center of image sensor 13-2). The distances G1 and G2 can be based on the position where ray 1306b intersects the camera apparatus circle 1303. The distance G1 can differ from (for example, be greater than or less than) the distance G2.
To define an image associated with ray 1306b (for example, an interpolated image, a new image), a first image (not shown) captured by image sensor 13-1 is combined (for example, stitched together) with a second image captured by image sensor 13-2. In some embodiments, optical flow techniques can be used to combine the first image and the second image. For example, pixels of the first image corresponding to pixels of the second image can be identified.
To define the image associated with, for example, ray 1306b, the corresponding pixels are offset based on the distances G1 and G2. For the purpose of defining the image (for example, the new image) for ray 1306b, the resolution, aspect ratio, elevation, and so on of image sensors 13-1 and 13-2 can be assumed to be the same. In some embodiments, the resolution, aspect ratio, elevation, and so on can differ. In such embodiments, however, the interpolation needs to be modified to accommodate those differences.
As a specific example, a first pixel associated with an object in the first image can be identified as corresponding to a second pixel associated with the object in the second image. Because the first image is captured from the perspective of image sensor 13-1 (which is at a first position around the camera apparatus circle 1303), and the second image is captured from the perspective of image sensor 13-2 (which is at a second position around the camera apparatus circle 1303), the object will be shifted in position (for example, in XY coordinate position) in the first image compared with its position (XY coordinate position) in the second image. Likewise, the first pixel associated with the object will be shifted in position (for example, X-Y coordinate position) relative to the second pixel, which is also associated with the object. To produce the new image associated with ray 1306b, a new pixel corresponding to the first and second pixels (and the object) can be defined based on the ratio of the distances G1 and G2. Specifically, the new pixel can be defined at a position offset from the first pixel based on the distance G1 (scaled by a factor based on the distance between the position of the first pixel and the position of the second pixel), and offset from the second pixel based on the distance G2 (scaled by a factor based on the distance between the position of the first pixel and the position of the second pixel).
According to the embodiment above, parallax can be defined for the new image associated with ray 1306b that is consistent with the first image and the second image. Specifically, objects relatively close to the camera apparatus can move by a greater amount than objects relatively far from the camera apparatus. This parallax can be maintained between the pixel offsets (for example, from the first pixel and the second pixel), based on the distances G1 and G2 of ray 1306b.
This process can be repeated for all of the rays around the capture circle 1302 (for example, rays 1306b to 1306f). The new images associated with each ray around the capture circle 1302 can be defined based on the distance between each ray and the image sensors (for example, adjacent image sensors, image sensors 13-1 and 13-2) around the camera apparatus circle 1303.
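A minimal sketch of this interpolation step is given below, assuming corresponding pixels have already been matched (for example, by optical flow) and that the weighting convention places the new pixel nearer the sensor the ray passes closer to:

```python
import numpy as np

def interpolate_new_pixel(p1, p2, c1, c2, g1, g2):
    """Place and color the new pixel for a synthetic ray between
    corresponding pixels from two adjacent sensors.

    p1, p2: (x, y) positions of the same object in the two images
    c1, c2: colors at those pixels
    g1, g2: distances from the ray to sensors 13-1 and 13-2
    """
    w = g1 / (g1 + g2)  # small g1 -> new view is near sensor 13-1
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    position = (1.0 - w) * p1 + w * p2  # offset scaled by the pixel gap
    color = (1.0 - w) * c1 + w * c2    # blend colors with the same weight
    return position, color
```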
As shown in Fig. 13B, the diameter of the camera apparatus circle 1303 is larger than the diameter of the capture circle 1302. In some embodiments, the diameter of the camera apparatus circle 1303 can be between 1.5 and 8 times larger than the diameter of the capture circle 1302. As a specific example, the diameter of the capture circle can be 6 centimeters, and the diameter of the camera apparatus circle 1303 (for example, the camera mounting ring 412 shown in Fig. 4A) can be 30 centimeters.
Figs. 14A-B illustrate the use of an approximated planar perspective projection, as described with respect to Figs. 13A-B. Fig. 14A shows a panoramic scene with distorted lines before the approximated planar perspective rays and projection are applied. As shown, a curtain rod 1402a, a window frame 1404a, and a door 1406a are depicted as objects with curved features, when in fact they are straight-line-feature objects. Straight-line-feature objects include objects without curved surfaces (such as flat index cards, rectangular boxes, rectangular frames, etc.). In this example, the objects 1402a, 1404a, and 1406a are shown curved because they are distorted in the image. Fig. 14B shows the corrected image with the approximated planar perspective projection at a 90-degree horizontal field of view. Here, the curtain rod 1402a, window frame 1404a, and door 1406a are shown as the corrected straight objects 1402b, 1404b, and 1406b, respectively.
Figs. 15A-C illustrate examples of the approximated planar perspective projection applied to a plane of an image. Fig. 15A shows a planar perspective projection from a panoramic capture using the techniques described in this disclosure. The depicted plane view 1500 can represent an overlay of the plane shown in the image of Fig. 14B. Specifically, Fig. 15A represents Fig. 14A corrected, with the curves projected into straight lines. Here, the plane 1500 of the panorama is shown at a distance of one meter (with a 90-degree horizontal field of view). The lines 1502, 1504, 1506, and 1508 are straight, whereas before (corresponding to Fig. 14A) the same lines were curved and distorted at the center.
Other distortions can occur based on the selected projection scheme. For example, Figs. 15B and 15C represent planes (1510 and 1520) generated using a planar perspective projection from a panoramic capture using the techniques of this disclosure. The panorama was captured at a distance of 25 centimeters (with a 90-degree horizontal field of view). Fig. 15B shows a left-eye capture 1510, and Fig. 15C shows a right-eye capture 1520. Here, the bottoms of the planes (1512, 1522) do not project to straight lines, and vertical parallax is introduced. This kind of deformation can occur when projecting with a planar perspective.
Figs. 16A-B illustrate examples of introduced vertical parallax. Fig. 16A depicts a straight line 1602a captured according to a typical omnidirectional stereo panorama technique. In the depicted example, each ray 1604a-1618a originates from a different point on the circle 1622.
Fig. 16B depicts the same straight line viewed with a perspective approximation technique. As shown, the straight line 1602a is shown deformed into 1602b. The rays 1604b-1618b originate from a single point on the circle 1622. The deformation can have the effect of pulling the left half of line 1602b toward the viewer and pushing the right half of the line away from the viewer. For the left eye, the opposite can happen: the left half of the line appears farther away, and the right half of the line appears closer. The deformed line bends between two asymptotes whose separation is equal to the diameter 1624 of the panorama rendering circle 1622. Because the deformation appears at the same scale as the panorama capture radius, it may be noticeable only for nearby objects. This kind of deformation can produce vertical parallax for a user viewing the image, which can make fusion difficult when performing stitching on the distorted images.
Figs. 17A-B depict example points of a coordinate system that can be used to illustrate points in a 3D panorama. Figs. 17A-B depict a point (0, Y, Z) 1702 imaged by the panoramic techniques described in this disclosure. The projections of this point into the left and right panoramas can be represented by (−θ, φ) and (θ, φ), as shown in equations (1) and (2) below, respectively (reconstructed from the tangent-ray geometry described above):

θ = sin⁻¹(r/Z) (1)
φ = tan⁻¹(Y/√(Z² − r²)) (2)

where r 1704 is the radius of the panoramic capture.
Fig. 17A depicts a top view of the panoramic imaging of the point (0, Y, Z) 1702. Fig. 17B depicts a side view of the panoramic imaging of the point (0, Y, Z) 1702. The point projects to (−θ, φ) in the left panorama and to (θ, φ) in the right panorama. These particular views are as captured and have not been projected into another plane.
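Under the reconstruction of equations (1) and (2) above, the projections can be computed as follows (a sketch; the coordinate conventions are assumptions):

```python
import math

def ods_projection(Y, Z, r):
    """Left/right panorama coordinates (theta, phi) of the point
    (0, Y, Z), per equations (1) and (2) as reconstructed above.
    Requires Z > r (the point lies outside the capture circle)."""
    theta = math.asin(r / Z)                       # horizontal offset, eq. (1)
    phi = math.atan2(Y, math.sqrt(Z * Z - r * r))  # elevation, eq. (2)
    return (-theta, phi), (theta, phi)             # left eye, right eye
```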
Fig. 18 represents a projected view of the point depicted in Figs. 17A-17B. Here, the point 1702 is viewed with a perspective projection oriented horizontally and rotated by an angle [α] around the y-axis, as shown by 1802 in Fig. 18. Because this perspective projection considers only ray directions, the rays along which the point 1702 projects can be found by converting the rays that view the point 1702 in the panoramic projection into the reference frame of the perspective camera 1802. For example, the point 1702 projects along the ray cast shown in Table 1 below:

Table 1 (ray cast for point 1702; contents not reproduced in this text)

Performing the perspective division, the projection of the point can be determined, as shown by the equations in Table 2 below:

Table 2 (perspective projection of point 1702; contents not reproduced in this text)
It can be seen that if θ = 0 (corresponding to the original 3D point 1702 being at infinity), the point 1702 will generally project to the same y-coordinate in both perspective images, and there will therefore be no vertical parallax. However, as θ moves away from 0 (as the point moves closer to the camera), the projected y-coordinates will differ for the left eye and the right eye (except for the case of α = 0, corresponding to a perspective view looking directly at the point 1702).
In some embodiments, distortion can be avoided by capturing images and scenes in a particular way. For example, capturing a scene close to the camera (i.e., less than one meter away) can cause distortion elements to appear. Capturing scenes or images from one meter outward is therefore one way to minimize distortion.
In some embodiments, distortion can be corrected using depth information. For example, given accurate depth information for a scene, it can be possible to correct the distortion. That is, because the distortion can depend on the current viewing direction, it may not be possible to apply a single distortion correction to the panoramic image before rendering. Instead, the depth information can be passed along with the panorama and used at render time.
Fig. 19 shows rays captured in an omnidirectional stereo image using the panoramic imaging techniques described in this disclosure. In this example, rays 1902, 1904, 1906 pointing clockwise around the circle 1900 correspond to rays for the left eye. Similarly, rays 1908, 1910, 1912 pointing counterclockwise around the circle 1900 correspond to rays for the right eye. Each counterclockwise ray can have a corresponding clockwise ray on the opposite side of the circle looking in the same direction. This can provide left/right viewing rays for each of the directions represented in a single image. Capturing the set of rays for the panoramas described in this disclosure can include moving a camera (not shown) around the circle 1900, with the camera aligned tangent to the circle 1900 (for example, with the camera lens facing out at the scene and pointing tangent to the circle 1900). For the left eye, the camera can point to the right (for example, ray 1904 is captured to the right of center line 1914a). Similarly, for the right eye, the camera can point to the left (for example, ray 1910 is captured to the left of center line 1914a). For cameras on the opposite side of the circle 1900, below center line 1914b, similar left and right regions can be defined using center line 1914b. Producing omnidirectional stereo images can work for real camera capture or for previously rendered computer graphics (CG) content. View interpolation can be used with both captured camera content and rendered CG content, for example, to simulate capturing points between real cameras on the circle 1900.
Stitching a set of images can include using a spherical/equirectangular projection for storing the panoramic images. In general, there are two images in this method, one for each eye. Each pixel in an equirectangular image corresponds to a direction on a sphere. For example, the x-coordinate can correspond to longitude and the y-coordinate to latitude. For a mono-omnidirectional image, the origins of the viewing rays of the pixels can be the same point. For a stereo image, however, each viewing ray can originate from a different point on the circle 1900. A panoramic image can then be stitched from the captured images by analyzing each pixel in the captured images, generating the ideal viewing ray from the projection model, and sampling the pixels from the captured or interpolated images whose viewing rays most closely match the ideal ray. Next, the ray values can be blended together to produce the panoramic pixel value.
In some embodiments, optical-flow-based view interpolation can be used to produce at least one image per degree around the circle 1900. In some embodiments, an entire column of the panoramic image can be filled at once, because it can be determined that if one pixel in the column will be sampled from a given image, the other pixels in that column will be sampled from the same image.
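A sketch of generating the ideal viewing ray for one equirectangular pixel under the projection model described above is given below; the coordinate conventions (y up, tangent direction flipped per eye) are assumptions for illustration:

```python
import math

def ods_viewing_ray(x, y, width, height, r, eye):
    """Ideal viewing ray for an equirectangular ODS pixel.

    x, y: pixel coordinates; width, height: panorama size
    r: radius of the viewing circle 1900
    eye: +1 for one eye, -1 for the other (tangent direction flips)
    Returns (origin, direction) of the viewing ray in 3D, y up.
    """
    theta = (x / width) * 2.0 * math.pi  # longitude from x-coordinate
    phi = (0.5 - y / height) * math.pi   # latitude from y-coordinate
    # Origin lies on the viewing circle; the horizontal direction is
    # tangent to the circle at that origin, flipped per eye.
    ox, oz = r * math.cos(theta), r * math.sin(theta)
    tx, tz = -math.sin(theta) * eye, math.cos(theta) * eye
    direction = (tx * math.cos(phi), math.sin(phi), tz * math.cos(phi))
    return (ox, 0.0, oz), direction
```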
Using the panorama formats described in this disclosure for capture and rendering can ensure that the image coordinates of objects viewed by the left eye and the right eye differ only by a horizontal shift. This horizontal shift is known as parallax. This applies to the equirectangular projection, and in this projection objects can appear quite distorted.

The magnitude of this distortion can depend on the distance to the camera and the viewing direction. The distortion can include line-bending distortion, differing left-eye and right-eye distortion, and, in some embodiments, parallax that may no longer appear horizontal. In general, 1-2 degrees of vertical parallax (on the spherical image plane) can be comfortably tolerated by a human user. In addition, distortion can be ignored for objects in the peripheral eye line. This corresponds to about 30 degrees away from the central viewing direction. Based on these findings, limits can be constructed that define regions near the camera which objects should not enter, so as to avoid uncomfortable distortion.
Fig. 20 is a graph 2000 illustrating the maximum vertical parallax caused by points in 3D space. Specifically, the graph 2000 depicts the maximum vertical parallax, in degrees, caused by a point in 3D space, given that the point projects to within 30 degrees of the center of the image. The graph 2000 plots the vertical position relative to the camera center (in meters) against the horizontal distance from the camera (in meters). In this figure, the camera is located at the origin [0, 0]. Moving away from the origin, the severity of the distortion becomes smaller. For example, on the graph, from about zero to one 2002 and from zero to negative one 2004 (vertically), the distortion is worst. This corresponds to imagery directly above and directly below the camera (placed at the origin). As the scene moves outward, the distortion lessens; when the camera images the scene at points 2006 and 2008, only half a degree of vertical parallax is encountered.
If distortion more than 30 degrees into the periphery can be ignored, then all pixels whose viewing directions are within 30 degrees of the vertical poles can be removed. If the peripheral threshold is allowed to be 15 degrees, pixels within 15 degrees can be removed. The removed pixels can be set, for example, to a color block (for example, black, white, magenta, etc.) or a static image (for example, a logo, a known boundary, a textured layer, etc.), and the new representation of the removed pixels can be inserted into the panorama in place of the removed pixels. In some embodiments, the removed pixels can be blurred, and a blurred representation of the removed pixels can be inserted into the panorama in place of the removed pixels.
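A minimal sketch of removing and filling polar pixels in an equirectangular panorama is given below; the fill policy (solid color, static image, or blur) is a design choice, as described above:

```python
import numpy as np

def mask_poles(pano, threshold_deg=30.0, fill=(0, 0, 0)):
    """Replace pixels whose viewing direction is within threshold_deg
    of straight up or straight down with a solid fill color.

    pano: H x W x 3 equirectangular image, row 0 at +90 deg latitude.
    """
    h = pano.shape[0]
    rows = np.arange(h)
    latitude = 90.0 - (rows + 0.5) * 180.0 / h  # degrees, +90 at top row
    polar = np.abs(latitude) > (90.0 - threshold_deg)
    out = pano.copy()
    out[polar, :, :] = fill  # swap in a blur or logo here if preferred
    return out
```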
Fig. 21 is a flow chart illustrating one embodiment of a process 2100 for producing a stereoscopic panoramic image. As shown in Fig. 21, at block 2102, the system 100 can define a set of images based on captured images. The images can include pre-processed images, post-processed images, virtual content, video, image frames, portions of image frames, pixels, and so on.
The defined images can be accessed by a user, for example, accessing content (for example, VR content) using a head-mounted display (HMD). The system 100 can determine particular actions performed by the user. For example, at some point, at block 2104, the system 100 can receive a viewing direction associated with a user of a VR HMD. Similarly, if the user changes her viewing direction, the system can receive an indication of the change in the user's viewing direction at block 2106.
In response to receiving an indication of such a change in viewing direction, the system 100 can configure a re-projection of a portion of the set of images, as shown at block 2108. The re-projection can be based at least in part on the changed viewing direction and on the field of view associated with the captured images. The field of view can be from one degree up to 180 degrees, and can account for anything from slivers of an image of the scene up to a full panoramic image of the scene. The configured re-projection can be used to convert a portion of the set of images from a spherical perspective projection into a planar projection. In some embodiments, the re-projection can include recasting a portion of the viewing rays associated with the set of images, from multiple viewpoints laid out around a curved path, from the spherical perspective projection into a planar perspective projection.

The re-projection can include any or all of the steps of mapping a portion of the surface of a spherical scene onto a planar scene. These steps can include: correcting distorted scene content, blending (for example, stitching) scene content at or near seams, tone mapping, and/or scaling.
Upon completing the re-projection, the system 100 can render an updated view based on the re-projection, as shown at block 2110. The updated view can be configured to correct distortion and provide stereoscopic parallax to the user. At block 2112, the system 100 can provide the updated view, including a stereoscopic panoramic scene corresponding to the changed viewing direction. For example, the system 100 can provide the updated view to correct distortion in the original view (before the re-projection), and can provide a stereoscopic parallax effect in the display of a VR head-mounted display.
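The spherical-to-planar conversion of block 2108 is commonly implemented by inverse mapping: for each pixel of the planar output view, the direction to sample in the equirectangular panorama is computed. A sketch under assumed conventions:

```python
import math

def plane_pixel_to_sphere(u, v, yaw, fov_deg, out_w, out_h):
    """Map a pixel (u, v) of a planar perspective view (looking along
    azimuth `yaw` with horizontal FOV fov_deg) to the (longitude,
    latitude) of the equirectangular panorama to sample. This is one
    step of the spherical-to-planar re-projection; the conventions
    (yaw about the vertical axis, v increasing downward) are assumed."""
    f = (out_w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)  # focal length
    x = u - out_w / 2.0
    y = v - out_h / 2.0
    # Direction of the viewing ray through the pixel, rotated by yaw.
    theta = yaw + math.atan2(x, f)            # longitude
    phi = math.atan2(-y, math.hypot(x, f))    # latitude
    return theta, phi
```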
Fig. 22 is a flow chart illustrating one embodiment of a process 2200 for capturing a full panoramic image using the multilayer camera apparatus. At block 2202, the system 100 can define a set of images for a first tier of the multi-camera apparatus based on captured video streams collected from at least one set of adjacent cameras in the first tier. In certain embodiments, the first tier can be the lower tier of the multilayer camera apparatus, and the cameras in the first tier can be arranged and laid out in a circular shape such that each of the first plurality of cameras has an outward projection orthogonal to the circular shape. For example, the system 100 can use an adjacent camera pair (for example, as shown in Fig. 2) or multiple sets of adjacent cameras (for example, as shown in Figs. 3 and 5). In some embodiments, the system 100 can define the set of images using captured video streams collected from about 12 to about 16 cameras laid out in a circular shape so that they have outward projections orthogonal to the circular shape. In some embodiments, the system 100 can define the set of images using partially or fully rendered computer graphics (CG) content.
At block 2204, system 100 can calculate a first optical flow for the first set of images. For example, calculating the optical flow in the first set of images can include analyzing image intensity fields for a portion of the pixel columns associated with the set of images, and performing an optical flow technique on that portion of the pixel columns, as described in detail above.
In some embodiments, the first optical flow can be used to interpolate image frames that are not part of the set of images, as also described in detail above. System 100 can then stitch the interpolated image frames and the first set of images together based at least in part on the first optical flow (at block 2206).
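One plausible realization of flow-based interpolation between adjacent cameras, sketched with OpenCV's Farneback optical flow; the backward-warping approximation and all parameter values are illustrative choices, not taken from the text:

```python
import cv2
import numpy as np

def interpolate_view(frame_a, frame_b, t=0.5):
    """Synthesize an approximate in-between view of two adjacent camera
    frames: estimate dense optical flow from frame_a to frame_b, then
    backward-warp frame_a a fraction t of the way along the flow."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = gray_a.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Approximation: sample frame_a at p - t*flow(p) so content shifts
    # a fraction t of the way toward its position in frame_b.
    map_x = (xs - t * flow[..., 0]).astype(np.float32)
    map_y = (ys - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)
```

A production stitcher would typically also warp frame_b backward and blend the two warps to handle occlusions; this sketch keeps only the single-direction warp for brevity.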
At block 2208, system 100 can define a second set of images for a second layer of the multi-camera apparatus based on captured video streams collected from at least one set of adjacent cameras in the second layer. In certain embodiments, the second layer can be the upper layer of the multilayer camera apparatus, and the cameras in the second layer can be laid out such that each of the second plurality of cameras has an outward projection that is not parallel to the normal of the circular shape of the first plurality of cameras. For example, system 100 can use adjacent cameras on the upper multi-faceted cap of the second layer of the multilayer camera apparatus (for example, as shown in Fig. 4A and Fig. 6), or multiple sets of adjacent cameras. In some embodiments, system 100 can define the set of images using captured video streams collected from about 4 to about 8 cameras. In some embodiments, system 100 can define the set of images using partially or fully rendered computer graphics (CG) content.
At block 2210, system 100 can calculate a second optical flow for the second set of images. For example, calculating the optical flow in the second set of images can include analyzing image intensity fields for a portion of the pixel columns associated with the set of images, and performing an optical flow technique on that portion of the pixel columns, as described in detail above.
In some embodiments, the second optical flow can be used to interpolate image frames that are not part of the set of images, as also described in detail above. System 100 can stitch the interpolated image frames and the second set of images together based at least in part on the second optical flow (at block 2212).
At block 2214, system 100 can generate an omnidirectional stereo panorama by stitching together a first stitched image associated with the first layer of the multilayer camera apparatus and a second stitched image associated with the second layer. In some embodiments, the omnidirectional stereo panorama is used for display in a VR head-mounted display. In some embodiments, system 100 can perform the image stitching using pose information associated with at least one stereo pair of adjacent cameras, for example, to pre-stitch a portion of the set of images before performing the interleaving.
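As a rough illustration of block 2214, the sketch below stacks a lower-ring panorama and an upper-cap panorama on one equirectangular canvas with a cross-fade at the seam. The band layout (cap above ring) and seam width are assumptions for the example:

```python
import numpy as np

def combine_layers(ring_pano, cap_pano, seam_rows=32):
    """Stack the lower-ring panorama (horizon band) under the upper-cap
    panorama (sky band) on one equirectangular canvas, cross-fading
    over `seam_rows` rows where the bands overlap."""
    assert ring_pano.shape[1] == cap_pano.shape[1], "same panorama width"
    overlap_cap = cap_pano[-seam_rows:].astype(float)   # bottom of cap band
    overlap_ring = ring_pano[:seam_rows].astype(float)  # top of ring band
    alpha = np.linspace(0.0, 1.0, seam_rows)[:, None, None]
    seam = (1.0 - alpha) * overlap_cap + alpha * overlap_ring
    return np.vstack([cap_pano[:-seam_rows],
                      seam.astype(ring_pano.dtype),
                      ring_pano[seam_rows:]])
```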
Figure 23 is a flow chart illustrating one embodiment of a process 2300 for rendering a panoramic image in a head-mounted display. As shown in Figure 23, at block 2302, system 100 can receive a set of images. The images can depict content captured with a multilayer camera apparatus. At block 2304, system 100 can select a portion of the image frames in the images. The image frames can contain content captured with the multilayer camera apparatus, and system 100 can use any portion of that captured content. For example, system 100 can select a portion of the image frames that includes content captured by the apparatus from about one radial meter from the outer edge of the camera apparatus base out to about five radial meters from the outer edge of the camera apparatus base. In some embodiments, the selection can be based on how far away a user can perceive 3D content. Here, the range from about one meter to about five meters from the camera apparatus can represent the "zone" in which a user can perceive 3D content. Closer than that, the 3D view may be distorted; farther than that, the user may not be able to make out 3D shapes. That is, distant scenery may simply look 2D.
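The one-to-five-meter zone can be motivated with a small binocular-disparity calculation. The baseline value below is a typical human interpupillary distance, chosen purely for illustration (the text does not specify one):

```python
import numpy as np

# Approximate angular disparity (degrees) between the two eyes for a
# point at distance d, given an interocular baseline b: theta ~ b / d.
baseline_m = 0.065  # illustrative; typical human interpupillary distance
for d in [0.5, 1.0, 2.0, 5.0, 20.0]:
    theta = np.degrees(baseline_m / d)
    print(f"distance {d:5.1f} m -> disparity ~ {theta:5.2f} deg")
```

Running this prints roughly 7.4 degrees at 0.5 m, 3.7 degrees at 1 m, 0.74 degrees at 5 m, and 0.19 degrees at 20 m, which is consistent with the idea that the stereo effect is strong within a few meters and fades toward 2D at a distance.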
At block 2306, the selected portions of the image frames can be stitched together to generate a stereoscopic panoramic view. In this example, the stitching can be based at least in part on matching the selected portions to at least one other image frame. At block 2308, the panoramic view can be provided in a display, such as an HMD device. In some embodiments, the stitching can be performed using a stitching ratio selected based at least in part on the diameter of the camera apparatus. In some embodiments, the stitching includes the steps of matching a first pixel column in a first image frame to a second pixel column in a second image frame, and matching the second pixel column to a third pixel column in a third image frame, to form a cohesive portion of the scene (see the sketch following this paragraph). In some embodiments, many pixel columns can be matched and combined in this way to form a frame, and those frames can be combined to form an image. Further, those images can be combined to form a scene. According to some embodiments, system 100 can perform blocks 2306 and 2308 for each layer of the multi-camera apparatus to create a stitched image associated with each layer, and system 100 can stitch the stitched images together to generate the panoramic view.
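By way of a non-limiting illustration, the column-matching step might look like the following Python sketch; the function names, the sum-of-squared-differences criterion, and the fixed overlap window are assumptions for the example, not requirements of the text:

```python
import numpy as np

def best_matching_column(col, frame, search=None):
    """Return the index of the pixel column in `frame` (H x W x 3) that
    most closely matches `col` (H x 3), by sum of squared differences."""
    diffs = ((frame.astype(float) - col.astype(float)[:, None, :]) ** 2
             ).sum(axis=(0, 2))
    if search is not None:
        lo, hi = search
        diffs[:lo] = np.inf
        diffs[hi:] = np.inf
    return int(np.argmin(diffs))

def stitch_columns(frame_a, frame_b, overlap=64):
    """Join two horizontally overlapping frames by matching the last
    column of frame_a against the first `overlap` columns of frame_b."""
    anchor = frame_a[:, -1, :]
    j = best_matching_column(anchor, frame_b, search=(0, overlap))
    return np.hstack([frame_a, frame_b[:, j + 1:, :]])
```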
In some embodiments, method 2300 can include interpolations steps, it carrys out interpolation and non-image using system 100
The additional image frame of a part for part in frame.For example, such interpolation can be performed to ensure by phase apart from each other
Flowed between the image of machine capture.Once performing the interpolation of additional image content, system 100 can hand over additional image frame
Knit in the part in picture frame, to generate the virtual content of view.The virtual content can be spliced together conduct
Part in the picture frame to interweave with additional image frame.For example, the result can be used as renewal view to be supplied to HMD.This is more
New view can be based at least partially on the part and additional image frame in picture frame.
Figure 24 is a flow chart illustrating one embodiment of a process 2400 for determining an image boundary for one layer of a multilayer camera apparatus. At block 2402, system 100 can define a set of images based on captured video streams collected from at least one set of adjacent cameras in one layer of the multilayer camera apparatus. For example, system 100 can use one set of adjacent cameras (as shown in Fig. 2) or multiple sets of adjacent cameras (as shown in Figs. 3 and 4). In some embodiments, system 100 can define the set of images using captured video streams collected from about 12 to about 16 cameras. In some embodiments, system 100 can define the set of images using partially or fully rendered computer graphics (CG) content. In some embodiments, the video streams corresponding to the set of images include encoded video content. In some embodiments, the video streams corresponding to the set of images can include content obtained with at least one set of adjacent cameras configured with 180-degree fields of view.
At block 2404, system 100 can project a portion of the set of images associated with one layer of the multilayer camera apparatus from a perspective image plane onto a spherical image plane, by recasting the viewing rays associated with that portion of the set of images from multiple viewpoints, laid out along part of a circular path, onto a single viewpoint. For example, the set of images can be captured by one layer of the multi-camera apparatus, in which image sensors are laid out in an annular shape on a circular camera housing that can host multiple cameras (for example, as shown in Figure 4A). Each camera can be associated with a viewpoint, and these viewpoints point outward from the camera apparatus at the scene. In particular, rather than originating from a single point, the viewing rays originate from each camera on the apparatus. System 100 can recast the rays from the individual viewpoints on the path onto a single viewpoint. For example, system 100 can analyze each viewpoint of the scene captured by the cameras, and can compute the similarities and differences to determine a scene (or set of scenes) that represents the scene from a single, interpolated viewpoint.
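A highly simplified sketch of recasting per-camera viewing rays onto a single viewpoint follows: each output longitude of a panorama strip is filled from the ring camera whose outward heading is nearest that longitude. The camera field of view, the column-to-angle mapping, and the nearest-camera rule are all assumptions for illustration:

```python
import numpy as np

def recast_to_single_viewpoint(camera_images, pano_width=1024):
    """Build a single-viewpoint panorama strip from n ring cameras by
    assigning each output longitude to the camera whose outward optical
    axis is nearest that longitude, then sampling the column of that
    camera's image which looks along the longitude."""
    n = len(camera_images)
    h, w, _ = camera_images[0].shape
    cam_yaws = np.arange(n) * (2 * np.pi / n)  # ring camera headings
    fov = (2 * np.pi / n) * 3                  # assume ~3x angular overlap
    pano = np.zeros((h, pano_width, 3), dtype=camera_images[0].dtype)

    for px in range(pano_width):
        lon = px / pano_width * 2 * np.pi
        # Angular offsets from every camera heading, wrapped to [-pi, pi).
        off = (lon - cam_yaws + np.pi) % (2 * np.pi) - np.pi
        cam = int(np.argmin(np.abs(off)))
        # Column of that camera's image looking along `lon`.
        col = int((off[cam] / fov + 0.5) * (w - 1))
        pano[:, px] = camera_images[cam][:, np.clip(col, 0, w - 1)]
    return pano
```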
At block 2406, system 100 can determine a peripheral boundary corresponding to the single viewpoint, and can generate an updated image by removing the pixels outside that peripheral boundary. The peripheral boundary can delineate clear, concise image content from distorted image content. For example, the peripheral boundary can separate pixels without distortion from pixels with distortion. In some embodiments, the peripheral boundary can relate to the field of view outside a typical peripheral viewing area of the user. Removing such pixels can ensure that distorted image content is not unnecessarily presented to the user. Removing the pixels can include replacing them with a color block, a still image, or a blurred pixel representation, as described in detail above. In some embodiments, the peripheral boundary is defined as a field of view of about 150 degrees for one or more cameras associated with the captured images. In some embodiments, the peripheral boundary is defined as a field of view of about 120 degrees for one or more cameras associated with the captured images. In some embodiments, the peripheral boundary is a portion of the sphere corresponding to about 30 degrees above the viewing plane of the cameras associated with the captured images, and removing the pixels includes blacking out or removing the top of the spherical scene. In some embodiments, the peripheral boundary is a portion of the sphere corresponding to about 30 degrees below the viewing plane of the cameras associated with the captured images, and removing the pixels includes blacking out or removing the bottom of the spherical scene. At block 2408, system 100 can provide the updated image, within the peripheral boundary, for display.
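For an equirectangular representation, blacking out content beyond such a boundary reduces to masking rows by elevation, as in this sketch (the elevation convention and fill value are assumptions):

```python
import numpy as np

def apply_peripheral_boundary(equirect, max_elev_deg=30.0, fill=0):
    """Black out rows of an equirectangular panorama whose elevation
    lies beyond +/- max_elev_deg from the cameras' viewing plane."""
    out = equirect.copy()
    h = out.shape[0]
    # Row i spans elevations from +90 deg (top row) to -90 deg (bottom).
    elev = 90.0 - (np.arange(h) + 0.5) * 180.0 / h
    out[np.abs(elev) > max_elev_deg] = fill
    return out
```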
In some embodiments, method 2400 can also include stitching together at least two frames from the set of images for one layer of the multilayer camera apparatus. The stitching can include the steps of sampling pixel columns from the frames, and inserting, between at least two sampled pixel columns, additional pixel columns that were not captured in the frames. In addition, the stitching can include the step of blending the sampled columns and the additional columns together to generate pixel values. In some embodiments, the blending can be performed using a stitching ratio selected based at least in part on the diameter of the circular camera apparatus used to obtain the captured images. The stitching can also include the step of generating a three-dimensional stereoscopic panorama by composing the pixel values into a left scene and a right scene that can be provided for display in an HMD, for example.
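The column sampling-and-blending described here could be sketched as follows; the every-`step`-th sampling pattern and the linear weighting are illustrative stand-ins for the stitching-ratio-driven blend the text describes:

```python
import numpy as np

def blend_columns(col_a, col_b, splice_ratio=0.5):
    """Blend two sampled pixel columns into one synthesized in-between
    column with a weighted average; splice_ratio stands in for the
    ratio the text says could be derived from the rig diameter."""
    a = col_a.astype(float)
    b = col_b.astype(float)
    return ((1.0 - splice_ratio) * a + splice_ratio * b).astype(col_a.dtype)

def insert_blended_columns(frame, step=8):
    """Rebuild a frame from every `step`-th sampled column, synthesizing
    the skipped columns by blending their sampled neighbors."""
    h, w, c = frame.shape
    out = np.empty_like(frame)
    for x in range(w):
        left = (x // step) * step
        right = min(left + step, w - 1)
        t = 0.0 if right == left else (x - left) / (right - left)
        out[:, x] = blend_columns(frame[:, left], frame[:, right], t)
    return out
```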
Figure 25 is a flow chart illustrating one embodiment of a process 2500 for generating video content. At block 2502, system 100 can define a set of images based on captured video streams collected from at least one set of adjacent cameras. For example, system 100 can use a stereo pair (as shown in Figure 2) or multiple sets of adjacent cameras (for example, as shown in Figures 3 and 4). In some embodiments, system 100 can define the set of images using captured video streams collected from about 12 to about 16 cameras in one layer of the multilayer camera apparatus and from 4 to 8 cameras in the second layer of the multilayer camera apparatus. In some embodiments, system 100 can define the set of images using partially or fully rendered computer graphics (CG) content.
At block 2504, system 100 can stitch the set of images into an equirectangular video stream. For example, the stitching can include combining images associated with a leftward camera capture angle with images associated with a rightward camera capture angle.
At block 2506, the system can render the video stream for playback by projecting the video stream from the equirectangular projection of a first view and a second view into a perspective projection. The first view can correspond to a left-eye view of a head-mounted display, and the second view can correspond to a right-eye view of the head-mounted display.
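Reusing the `reproject_to_plane` sketch given earlier, the per-eye rendering of block 2506 might look like the following; the per-eye panoramas `left_eye_pano` and `right_eye_pano` are assumed inputs, not names from the patent:

```python
# A sketch of block 2506, assuming reproject_to_plane() from the
# earlier sketch is in scope: render each eye's equirectangular stream
# into a perspective view for the current HMD head pose.
def render_for_hmd(left_eye_pano, right_eye_pano, yaw, pitch):
    left_view = reproject_to_plane(left_eye_pano, yaw, pitch)
    right_view = reproject_to_plane(right_eye_pano, yaw, pitch)
    return left_view, right_view
```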
At block 2508, the system can determine a boundary where distortion exceeds a predefined threshold. The predefined threshold can provide a parallax level, a level of mismatch, and/or a permissible error level within a particular set of images. For example, when the video stream is projected from one plane or view to another plane or view, the distortion can be based at least in part on the projection configuration.
At block 2510, as discussed in detail above, the system can generate an updated video stream by removing image content in the set of images at or beyond the boundary. Once the video stream is updated, the updated stream can be provided, for example, to a user of an HMD for display. In general, the systems and methods described throughout this disclosure can be used to capture images, remove distortion from the captured images, and render the images to provide 3D stereoscopic views to a user of an HMD device.
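A minimal sketch of block 2510, assuming a per-pixel distortion estimate is available; how that estimate is computed is left open here, as in the text, and the mismatch proxy in the usage comment is purely illustrative:

```python
import numpy as np

def remove_beyond_boundary(frame, distortion_map, threshold, fill=0):
    """Zero out pixels whose estimated distortion (e.g., parallax
    mismatch between the per-eye projections) exceeds `threshold`.
    `distortion_map` is an H x W array of per-pixel estimates."""
    out = frame.copy()
    out[distortion_map > threshold] = fill
    return out

# Illustrative usage (assumed, not specified by the patent): treat the
# mean absolute difference between left and right renders as a crude
# distortion proxy, then remove content above the threshold.
# mismatch = np.abs(left_view.astype(float)
#                   - right_view.astype(float)).mean(axis=-1)
# clean = remove_beyond_boundary(left_view, mismatch, threshold=40.0)
```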
Figure 26 shows an example of a general-purpose computing device 2600 and a general-purpose mobile computing device 2650 that can be used with the techniques described herein. Computing device 2600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 2650 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit the embodiments of the utility model described and/or claimed in this document.
Computing device 2600 includes a processor 2602, a memory 2604, a storage device 2606, a high-speed interface 2608 connecting to the memory 2604 and high-speed expansion ports 2610, and a low-speed interface 2612 connecting to a low-speed bus 2614 and the storage device 2606. Each of the components 2602, 2604, 2606, 2608, 2610, and 2612 is interconnected using various buses, and can be mounted on a common motherboard or in other manners as appropriate. The processor 2602 can process instructions for execution within the computing device 2600, including instructions stored in the memory 2604 or on the storage device 2606, to display graphical information for a GUI on an external input/output device, such as a display 2616 coupled to the high-speed interface 2608. In other embodiments, multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 2600 can be connected, with each device providing portions of the necessary operations (for example, as a server bank, a group of blade servers, or a multi-processor system).
The memory 2604 stores information within the computing device 2600. In one embodiment, the memory 2604 is one or more volatile memory units. In another embodiment, the memory 2604 is one or more non-volatile memory units. The memory 2604 can also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 2606 can provide mass storage for the computing device 2600. In one embodiment, the storage device 2606 can be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid-state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product can also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2604, the storage device 2606, or memory on the processor 2602.
The high-speed controller 2608 manages bandwidth-intensive operations for the computing device 2600, while the low-speed controller 2612 manages lower-bandwidth-intensive operations. Such allocation of functions is exemplary only. In one embodiment, the high-speed controller 2608 is coupled to the memory 2604, the display 2616 (for example, through a graphics processor or accelerator), and the high-speed expansion ports 2610, which can accept various expansion cards (not shown). In the embodiment, the low-speed controller 2612 is coupled to the storage device 2606 and the low-speed expansion port 2614. The low-speed expansion port, which can include various communication ports (for example, USB, Bluetooth, Ethernet, wireless Ethernet), can be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, for example, through a network adapter.
The computing device 2600 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a standard server 2620, or multiple times in a group of such servers. It can also be implemented as part of a rack server system 2624. In addition, it can be implemented in a personal computer such as a laptop computer 2622. Alternatively, components from the computing device 2600 can be combined with other components (not shown) in a mobile device, such as the device 2650. Each of such devices can contain one or more of the computing devices 2600, 2650, and an entire system can be made up of multiple computing devices 2600, 2650 communicating with each other.
Computing device 2650 includes a processor 2652, a memory 2664, an input/output device such as a display 2654, a communication interface 2666, and a transceiver 2668, among other components. The device 2650 can also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 2650, 2652, 2664, 2654, 2666, and 2668 is interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
The processor 2652 can execute instructions within the computing device 2650, including instructions stored in the memory 2664. The processor can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor can provide, for example, coordination of the other components of the device 2650, such as control of user interfaces, applications run by the device 2650, and wireless communication by the device 2650.
The processor 2652 can communicate with a user through a control interface 2658 and a display interface 2656 coupled to a display 2654. The display 2654 can be, for example, a TFT LCD (thin-film transistor liquid crystal display) or an OLED (organic light-emitting diode) display, or other appropriate display technology. The display interface 2656 can comprise appropriate circuitry for driving the display 2654 to present graphical and other information to a user. The control interface 2658 can receive commands from a user and convert them for submission to the processor 2652. In addition, an external interface 2662 can be provided in communication with the processor 2652, so as to enable near-area communication of the device 2650 with other devices. The external interface 2662 can provide, for example, wired communication in some embodiments, or wireless communication in other embodiments, and multiple interfaces can also be used.
The memory 2664 stores information within the computing device 2650. The memory 2664 can be implemented as one or more of one or more computer-readable media, one or more volatile memory units, or one or more non-volatile memory units. Expansion memory 2674 can also be provided and connected to the device 2650 through an expansion interface 2672, which can include, for example, a SIMM (single in-line memory module) card interface. Such expansion memory 2674 can provide extra storage space for the device 2650, or can also store applications or other information for the device 2650. Specifically, the expansion memory 2674 can include instructions to carry out or supplement the processes described above, and can also include secure information. Thus, for example, the expansion memory 2674 can be provided as a security module for the device 2650, and can be programmed with instructions that permit secure use of the device 2650. In addition, secure applications can be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
The memory can include, for example, flash memory and/or NVRAM memory, as discussed below. In one embodiment, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2664, the expansion memory 2674, or memory on the processor 2652, that can be received, for example, over the transceiver 2668 or the external interface 2662.
The device 2650 can communicate wirelessly through the communication interface 2666, which can include digital signal processing circuitry where necessary. The communication interface 2666 can provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication can occur, for example, through the RF transceiver 2668. In addition, short-range communication can occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 2670 can provide additional navigation- and location-related wireless data to the device 2650, which can be used as appropriate by applications running on the device 2650.
The device 2650 can also communicate audibly using an audio codec 2660, which can receive spoken information from a user and convert it to usable digital information. The audio codec 2660 can likewise generate audible sound for a user, such as through a speaker, for example, in a handset of the device 2650. Such sound can include sound from voice telephone calls, can include recorded sound (for example, voice messages, music files, and so forth), and can also include sound generated by applications operating on the device 2650.
The computing device 2650 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a cellular telephone 2680. It can also be implemented as part of a smartphone 2682, a personal digital assistant, or another similar mobile device.
Various embodiments of the systems and techniques described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (for example, magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described herein can be implemented on a computer having a display device (for example, a CRT (cathode-ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (for example, a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (for example, as a data server), or that includes a middleware component (for example, an application server), or that includes a front-end component (for example, a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described herein), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), and the Internet.
The computing system can include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
A number of embodiments have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of this specification. For example, each of the appended claims and the examples of such claims above can be combined in any combination to produce additional example embodiments.
Further embodiments are described in the following examples.
Example 1: A camera apparatus, comprising: a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors, the first plurality of image sensors laid out in a circular shape and oriented such that the field of view of each of the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape; and a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors, the second plurality of image sensors oriented such that the field of view of each of the second plurality of image sensors has an axis that is not parallel to the axis of the field of view of each of the first plurality of image sensors.
Example 2: The camera apparatus of Example 1, wherein the field of view of each of the first plurality of image sensors is disposed in a first plane, and the field of view of each of the second plurality of image sensors is disposed in a second plane.
Example 3: The camera apparatus of Example 1 or 2, wherein the first plurality of image sensors is disposed in a first plane, and the second plurality of image sensors is disposed in a second plane parallel to the first plane.
Example 4: The camera apparatus of any of Examples 1 to 3, wherein the first plurality of image sensors is included in the first layer such that a first field of view of a first image sensor in the first plurality of image sensors intersects a second field of view of a second image sensor in the first plurality of image sensors and a third field of view of a third image sensor in the first plurality of image sensors.
Example 5: The camera apparatus of any of Examples 1 to 4, wherein the camera apparatus has a housing, the housing defining a radius of the circular camera apparatus housing such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
Example 6: The camera apparatus of Example 5, wherein the three adjoining image sensors intersect with a plane.
Example 7: The camera apparatus of any of Examples 1 to 6, further comprising a rod housing, wherein the first layer of image sensors is arranged between the second layer of image sensors and the rod housing.
Example 8: The camera apparatus of any of Examples 1 to 7, wherein the second layer of image sensors includes six image sensors, and the first layer of image sensors includes sixteen image sensors.
Example 9: The camera apparatus of any of Examples 1 to 8, wherein the field of view of each of the first plurality of image sensors is orthogonal to the field of view of each of the second plurality of image sensors.
Example 10: The camera apparatus of any of Examples 1 to 9, wherein the aspect ratio of the field of view of each of the first plurality of image sensors is in a portrait orientation, and the aspect ratio of the field of view of each of the second plurality of image sensors is in a landscape orientation.
Example 11: A camera apparatus, comprising: a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors arranged in a first plane, the first plurality of image sensors configured such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap; and a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors arranged in a second plane, each of the second plurality of image sensors having an aspect ratio orientation different from the aspect ratio orientation of each of the first plurality of image sensors.
Example 12: The camera apparatus of Example 11, wherein the first plane is parallel to the second plane.
Example 13: The camera apparatus of Example 11 or 12, further comprising a rod housing, wherein the first layer of image sensors is arranged between the second layer of image sensors and the rod housing.
Example 14: The camera apparatus of any of Examples 1 to 10 or 11 to 13, wherein the ratio of image sensors in the first layer of image sensors to image sensors in the second layer of image sensors is between 2:1 and 3:1.
Example 15: The camera apparatus of any of Examples 1 to 10 or 11 to 14, wherein optical flow interpolation is used to stitch the images captured with the first layer of image sensors and the second layer of image sensors.
Example 16: A camera apparatus, comprising: a camera housing including a lower circumference and an upper multi-faceted cap, the lower circumference arranged below the multi-faceted cap; a first plurality of image sensors laid out in a circular shape along the lower circumference of the camera housing such that each of the first plurality of image sensors has an outward projection orthogonal to the lower circumference; and a second plurality of image sensors arranged on the faces of the upper multi-faceted cap such that each of the second plurality of image sensors has an outward projection that is not parallel to the normal of the lower circumference.
Example 17: The camera apparatus of Example 16, wherein the lower circumference defines a radius such that the fields of view of at least three adjoining image sensors in the first plurality of image sensors intersect.
Example 18: The camera apparatus of Example 16 or 17, wherein the ratio of image sensors in the first plurality of image sensors to image sensors in the second plurality of image sensors is between 2:1 and 3:1.
In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Furthermore, other steps can be provided, or steps can be eliminated, from the described flows, and other components can be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.
Claims (17)
1. A camera apparatus, comprising:
a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors, the first plurality of image sensors laid out in a circular shape and oriented such that the field of view of each of the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape; and
a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors, the second plurality of image sensors oriented such that the field of view of each of the second plurality of image sensors has an axis that is not parallel to the axis of the field of view of each of the first plurality of image sensors,
wherein the circular shape has a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
2. The camera apparatus according to claim 1, wherein the field of view of each of the first plurality of image sensors is disposed in a first plane, and the field of view of each of the second plurality of image sensors is disposed in a second plane.
3. The camera apparatus according to claim 1, wherein the first plurality of image sensors is disposed in a first plane, and the second plurality of image sensors is disposed in a second plane parallel to the first plane.
4. The camera apparatus according to claim 1, wherein the first plurality of image sensors is included in the first layer such that a first field of view of a first image sensor in the first plurality of image sensors intersects a second field of view of a second image sensor in the first plurality of image sensors and a third field of view of a third image sensor in the first plurality of image sensors.
5. The camera apparatus according to claim 1, wherein the three adjoining image sensors intersect with a plane.
6. The camera apparatus according to claim 1, further comprising a rod housing, wherein the first layer of image sensors is arranged between the second layer of image sensors and the rod housing.
7. The camera apparatus according to claim 1, wherein the second layer of image sensors includes six image sensors, and the first layer of image sensors includes sixteen image sensors.
8. The camera apparatus according to claim 1, wherein the field of view of each of the first plurality of image sensors is orthogonal to the field of view of each of the second plurality of image sensors.
9. The camera apparatus according to claim 1, wherein the aspect ratio of the field of view of each of the first plurality of image sensors is in a portrait mode, and the aspect ratio of the field of view of each of the second plurality of image sensors is in a landscape mode.
10. A camera apparatus, comprising:
a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors arranged in a first plane, the first plurality of image sensors configured such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap; and
a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors arranged in a second plane, each of the second plurality of image sensors having an aspect ratio orientation different from the aspect ratio orientation of each of the first plurality of image sensors,
wherein the first plurality of image sensors defines a circular shape, the circular shape having a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
11. The camera apparatus according to claim 10, wherein the first plane is parallel to the second plane.
12. The camera apparatus according to claim 10, further comprising a rod housing, wherein the first layer of image sensors is arranged between the second layer of image sensors and the rod housing.
13. The camera apparatus according to claim 10, wherein the ratio of image sensors in the first layer of image sensors to image sensors in the second layer of image sensors is between 2:1 and 3:1.
14. The camera apparatus according to claim 10, wherein optical flow interpolation is used to stitch the images captured with the first layer of image sensors and the second layer of image sensors.
15. A camera apparatus, comprising:
a camera housing, including:
a lower circumference, and
an upper multi-faceted cap, the lower circumference below the multi-faceted cap;
a first plurality of image sensors laid out in a circular shape along the lower circumference of the camera housing such that each of the first plurality of image sensors has an outward projection orthogonal to the lower circumference; and
a second plurality of image sensors arranged on the faces of the upper multi-faceted cap such that each of the second plurality of image sensors has an outward projection that is not parallel to the normal of the lower circumference,
wherein the circular shape of the first plurality of image sensors has a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
16. The camera apparatus according to claim 15, wherein the second plurality of image sensors defines a second radius such that the fields of view of at least three adjoining image sensors in the second plurality of image sensors intersect.
17. The camera apparatus according to claim 15, wherein the ratio of image sensors in the first plurality of image sensors to image sensors in the second plurality of image sensors is between 2:1 and 3:1.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662376140P | 2016-08-17 | 2016-08-17 | |
US62/376,140 | 2016-08-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN207369210U true CN207369210U (en) | 2018-05-15 |
Family
ID=59738478
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201721035347.5U Expired - Fee Related CN207369210U (en) | 2016-08-17 | 2017-08-17 | Multilayer camera apparatus for 3 D visual image capture |
CN201710706614.5A Pending CN109361912A (en) | 2016-08-17 | 2017-08-17 | Multilayer camera apparatus for 3 D visual image capture |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710706614.5A Pending CN109361912A (en) | 2016-08-17 | 2017-08-17 | Multilayer camera apparatus for 3 D visual image capture |
Country Status (4)
Country | Link |
---|---|
CN (2) | CN207369210U (en) |
DE (2) | DE202017104934U1 (en) |
GB (1) | GB2555908A (en) |
WO (1) | WO2018035347A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109361912A (en) * | 2016-08-17 | 2019-02-19 | 谷歌有限责任公司 | Multilayer camera apparatus for 3 D visual image capture |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019161289A1 (en) | 2018-02-17 | 2019-08-22 | Dreamvu, Inc. | System and method for capturing omni-stereo videos using multi-sensors |
USD931355S1 (en) | 2018-02-27 | 2021-09-21 | Dreamvu, Inc. | 360 degree stereo single sensor camera |
USD943017S1 (en) | 2018-02-27 | 2022-02-08 | Dreamvu, Inc. | 360 degree stereo optics mount for a camera |
EP3690822A1 (en) | 2019-01-30 | 2020-08-05 | Koninklijke Philips N.V. | Image representation of a scene |
CN113544733A (en) * | 2019-03-10 | 2021-10-22 | 谷歌有限责任公司 | 360-degree wide-angle camera using butt-joint method |
WO2020263868A1 (en) | 2019-06-24 | 2020-12-30 | Circle Optics, Inc. | Lens design for low parallax panoramic camera systems |
WO2021133843A1 (en) * | 2019-12-23 | 2021-07-01 | Circle Optics, Inc. | Mounting systems for multi-camera imagers |
CN111540017B (en) * | 2020-04-27 | 2023-05-05 | 深圳市瑞立视多媒体科技有限公司 | Method, device, equipment and storage medium for optimizing camera position variable |
CN114189697B (en) * | 2021-12-03 | 2022-10-14 | 腾讯科技(深圳)有限公司 | Video data processing method and device and readable storage medium |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6141034A (en) * | 1995-12-15 | 2000-10-31 | Immersive Media Co. | Immersive imaging method and apparatus |
IL139995A (en) * | 2000-11-29 | 2007-07-24 | Rvc Llc | System and method for spherical stereoscopic photographing |
JP2004531113A (en) * | 2001-02-09 | 2004-10-07 | リー,クジン | Omnidirectional three-dimensional image data acquisition apparatus by annotation, method and method for enlarging photosensitive area |
US6947059B2 (en) * | 2001-08-10 | 2005-09-20 | Micoy Corporation | Stereoscopic panoramic image capture device |
US20050025313A1 (en) * | 2003-06-19 | 2005-02-03 | Wachtel Robert A. | Digital imaging system for creating a wide-angle image from multiple narrow angle images |
US8004558B2 (en) * | 2005-04-07 | 2011-08-23 | Axis Engineering Technologies, Inc. | Stereoscopic wide field of view imaging system |
EP2490659B1 (en) * | 2009-10-22 | 2017-03-01 | Henkel AG & Co. KGaA | Composition for the temporary shaping of keratinic fibres comprising a nonionic propyleneoxide-modified starch and a chitosan |
US9036001B2 (en) * | 2010-12-16 | 2015-05-19 | Massachusetts Institute Of Technology | Imaging system for immersive surveillance |
WO2012082127A1 (en) * | 2010-12-16 | 2012-06-21 | Massachusetts Institute Of Technology | Imaging system for immersive surveillance |
US9007432B2 (en) * | 2010-12-16 | 2015-04-14 | The Massachusetts Institute Of Technology | Imaging systems and methods for immersive surveillance |
US9413930B2 (en) * | 2013-03-14 | 2016-08-09 | Joergen Geerds | Camera system |
US9911454B2 (en) * | 2014-05-29 | 2018-03-06 | Jaunt Inc. | Camera array including camera modules |
CA2960427A1 (en) * | 2014-10-07 | 2016-04-14 | Nokia Technologies Oy | Camera devices with a large field of view for stereo imaging |
US20160165211A1 (en) * | 2014-12-08 | 2016-06-09 | Board Of Trustees Of The University Of Alabama | Automotive imaging system |
US20170363949A1 (en) * | 2015-05-27 | 2017-12-21 | Google Inc | Multi-tier camera rig for stereoscopic image capture |
CN105739231B (en) * | 2016-05-06 | 2019-04-26 | 中国科学技术大学 | A kind of multi-cam full-view stereo imaging device of plane distribution |
DE202017104934U1 (en) * | 2016-08-17 | 2017-11-20 | Google Inc. | Multi-level camera carrier system for stereoscopic image acquisition |
- 2017-08-16 DE DE202017104934.5U patent/DE202017104934U1/en not_active Expired - Lifetime
- 2017-08-16 DE DE102017118714.6A patent/DE102017118714A1/en not_active Withdrawn
- 2017-08-17 GB GB1713180.6A patent/GB2555908A/en not_active Withdrawn
- 2017-08-17 CN CN201721035347.5U patent/CN207369210U/en not_active Expired - Fee Related
- 2017-08-17 WO PCT/US2017/047384 patent/WO2018035347A1/en active Application Filing
- 2017-08-17 CN CN201710706614.5A patent/CN109361912A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
DE202017104934U1 (en) | 2017-11-20 |
DE102017118714A1 (en) | 2018-02-22 |
CN109361912A (en) | 2019-02-19 |
GB201713180D0 (en) | 2017-10-04 |
WO2018035347A8 (en) | 2018-03-29 |
WO2018035347A1 (en) | 2018-02-22 |
GB2555908A (en) | 2018-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN207369210U (en) | Multilayer camera apparatus for 3 D visual image capture | |
CN107431796B (en) | The omnibearing stereo formula of panoramic virtual reality content captures and rendering | |
CN107637060A (en) | Camera is equipped and stereo-picture capture | |
US10375381B2 (en) | Omnistereo capture and render of panoramic virtual reality content | |
US20170363949A1 (en) | Multi-tier camera rig for stereoscopic image capture | |
US10460459B2 (en) | Stitching frames into a panoramic frame | |
US10038887B2 (en) | Capture and render of panoramic virtual reality content | |
CN106797460B (en) | The reconstruction of 3 D video | |
Lin et al. | Seamless video stitching from hand‐held camera inputs | |
CN107925722A (en) | Stabilisation based on accelerometer data | |
CN107810633A (en) | Three-dimensional rendering system | |
JP7196421B2 (en) | Information processing device, information processing system, information processing method and program | |
US11812009B2 (en) | Generating virtual reality content via light fields | |
Chapdelaine-Couture et al. | The omnipolar camera: A new approach to stereo immersive capture | |
Huang et al. | Stereo panorama imaging and display for 3D VR system | |
Gurrieri et al. | A model for the omnidirectional acquisition and rendering of stereoscopic images for human viewing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180515; Termination date: 20200817 |