US20190019299A1 - Adaptive stitching of frames in the process of creating a panoramic frame - Google Patents
Adaptive stitching of frames in the process of creating a panoramic frame Download PDFInfo
- Publication number
- US20190019299A1 US20190019299A1 US16/067,832 US201616067832A US2019019299A1 US 20190019299 A1 US20190019299 A1 US 20190019299A1 US 201616067832 A US201616067832 A US 201616067832A US 2019019299 A1 US2019019299 A1 US 2019019299A1
- Authority
- US
- United States
- Prior art keywords
- frame
- imagers
- frames
- panoramic
- irregular line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
- 238000000034 method Methods 0.000 title claims abstract description 80
- 230000008569 process Effects 0.000 title claims abstract description 26
- 230000003044 adaptive effect Effects 0.000 title 1
- 230000001788 irregular Effects 0.000 claims abstract description 50
- 238000003860 storage Methods 0.000 claims description 24
- 230000003287 optical effect Effects 0.000 claims description 18
- 238000003384 imaging method Methods 0.000 claims description 15
- 238000012360 testing method Methods 0.000 claims description 4
- 238000012545 processing Methods 0.000 description 18
- 238000010586 diagram Methods 0.000 description 12
- 230000006870 function Effects 0.000 description 7
- 230000005540 biological transmission Effects 0.000 description 4
- 238000004590 computer program Methods 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 150000001875 compounds Chemical class 0.000 description 3
- 238000005520 cutting process Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 239000000463 material Substances 0.000 description 3
- 239000000203 mixture Substances 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000003491 array Methods 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 239000011521 glass Substances 0.000 description 2
- 239000004615 ingredient Substances 0.000 description 2
- 230000033001 locomotion Effects 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 230000001902 propagating effect Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000010348 incorporation Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013178 mathematical model Methods 0.000 description 1
- 230000008693 nausea Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000035807 sensation Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000014616 translation Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
- G03B35/10—Stereoscopic photography by simultaneous recording having single camera with stereoscopic-base-defining system
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
- G03B37/04—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
- G06T3/047—Fisheye or wide-angle transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/38—Registration of image sequences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
Definitions
- the present invention, in some embodiments thereof, relates to image processing and, more specifically, but not exclusively, to stitching of frames.
- VR is a special type of image or video content.
- VR is designed to replace reality in order to provide the viewer with an immersive sensation of the recorded content, including video and audio.
- the viewer uses a special type of display glasses, commonly referred to as VR headsets, VR goggles or VR glasses.
- the VR headset effectively blocks the viewer's natural vision and replaces it with recorded or live-broadcast content.
- VR content is different from standard digital content designed to be presented on a flat screen, because VR is designed to replace the natural vision.
- VR is designed to be presented over a wide field of view (FOV) while providing stereo vision.
- a method of creating a panoramic frame by combining frames along an irregular line identified in an overlapping area depicted in the frames, comprising:
- the method comprises detecting in the first frame a plurality of feature points each having at least one predefined feature point characteristic; wherein the plurality of objects are segmented according to the plurality of feature points.
- the plurality of objects are identified on a watershed image generated from the first portion based on the plurality of feature points.
- the plurality of feature points are identified using an accelerated segment test (FAST) process.
- the overlap is identified by conducting a feature descriptor matching process.
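- As an illustration of the feature-detection and descriptor-matching steps above, the following Python sketch (an assumption-laden example using OpenCV, not the patent's own implementation) detects FAST feature points, computes ORB descriptors for them, and keeps confident matches between two neighboring frames; the matched locations approximate the overlap:

```python
import cv2
import numpy as np

def find_overlap(frame_a: np.ndarray, frame_b: np.ndarray):
    # FAST supplies corner locations; ORB descriptors make them matchable.
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    fast = cv2.FastFeatureDetector_create(threshold=25)  # threshold is assumed
    orb = cv2.ORB_create()
    kp_a, des_a = orb.compute(gray_a, fast.detect(gray_a, None))
    kp_b, des_b = orb.compute(gray_b, fast.detect(gray_b, None))

    # Descriptor matching with Lowe's ratio test to keep confident pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    # The band of matched points in frame_a approximates the overlapping area.
    xs = [kp_a[m.queryIdx].pt[0] for m in good]
    return (min(xs), frame_a.shape[1]) if xs else None
```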
- the first frame and the second frame are captured by imagers from a plurality of imagers of an imaging device.
- the first frame and the second frame are captured by two neighboring imagers from the plurality of imagers, wherein a first field of view of one of the two neighboring imagers overlaps with a second field of view of the other of the two neighboring imagers; wherein the two neighboring imagers are mounted around a common center together with other imagers.
- an optical axis of a first of the two neighboring imagers is tilted in relation to an axis passing through the common center and a respective tangential point of an origin point of a field of view of the first neighboring imager on a virtual circle passing via all origin points of the fields of view of the plurality of imagers.
- the method further comprises: adjusting the irregular line path according to a new irregular line path, calculated similarly to the irregular line path, for a pair of new frames captured after the first frame and the second frame, and using the adjusted irregular line path for stitching the new frames.
- the adjusting and the using are performed when a difference between the irregular line path and the new irregular line path is greater than a threshold.
- the panoramic frame is one of a plurality of frames of a virtual reality (VR) file.
- the panoramic frame is a left eye panoramic frame and the plurality of imagers are intertwined with a plurality of additional imagers capturing a plurality of additional frames which are combined into a right eye panoramic frame.
- the method further comprises combining the right eye panoramic frame with the left eye panoramic frame for creating a stereoscopic frame.
- members of the plurality of imagers and the plurality of additional imagers are alternately arranged along a virtual circle encircling a common center.
- the panoramic frame is a right eye panoramic frame and the plurality of imagers are intertwined with a plurality of additional imagers capturing a plurality of additional frames which are combined into a left eye panoramic frame; further comprising combining the left eye panoramic frame with the right eye panoramic frame for creating a stereoscopic frame.
- the plurality of imagers are arranged along a virtual circle encircling a common center; wherein each one of the plurality of imagers is mounted such that an optical axis thereof is tilted in relation to an axis passing through the common center and a tangential point of an origin of a field of view of a respective the imager.
- a system of creating a panoramic frame by combining frames along an irregular line identified in an overlapping area depicted in the frames comprises an interface adapted for receiving first and second frames from two of a plurality of frames captured by a plurality of imagers of an imaging device, a code store adapted for storing a code, and a processor adapted for executing the code, wherein the code comprises: code instructions for identifying an overlap between content documented in a first frame portion of a first frame and content documented in a second frame portion of a second frame, code instructions for segmenting a plurality of objects in the first frame portion, code instructions for identifying boundaries of the plurality of objects, code instructions for identifying an irregular line path connecting between two opposing sides of the first frame portion along at least some of the boundaries of at least some of the plurality of objects, and code instructions for stitching the first frame and second frame along the irregular line path in the process of creating a panoramic frame.
- FIG. 1 is a flowchart of creating a panoramic frame using an irregular stitch line identified in an overlapping portion of neighboring frames captured by a plurality of imagers having multiple viewing angles around a common center region, in accordance with some embodiments of the present invention
- FIG. 2 is a block diagram of components of a system for creating a panoramic frame, for instance by executing the method depicted in FIG. 1 , in accordance with some embodiments of the present invention
- FIG. 3A is a schematic illustration of an exemplary arrangement of imagers capturing frames at multiple viewing angles that are stitched into the panoramic frame using the systems and/or methods described herein, in accordance with some embodiments of the present invention
- FIG. 3B is a schematic lateral illustration of a virtual reality (VR) imaging device having an arrangement of imagers for capturing frames at multiple viewing angles and for stitching the captured frames into a panoramic frame using methods described herein, in accordance with some embodiments of the present invention
- FIGS. 3C and 3D are schematic illustrations of an overlap between fields of view of imagers having, respectively, tilted optical axes and non-tilted optical axes in relation to a radius of a virtual circle passing via mounting points of the imagers, in accordance with some embodiments of the present invention
- FIG. 4 is an exemplary overlapping area marked with feature points (dots), boundaries of objects (green colored lines), and a shortest irregular path which crosses the exemplary overlapping area along some of the boundaries, in accordance with some embodiments of the present invention.
- FIG. 5 is an exemplary overlapping area marked with boundaries of objects depicted in the exemplary overlapping area, in accordance with some embodiments of the present invention.
- FIGS. 6A-6C are exemplary images depicting the process of acquiring frames captured at multiple viewing angles, adapting the frames, and the created panoramic frame, in accordance with some embodiments of the present invention.
- the present invention, in some embodiments thereof, relates to image processing and, more specifically, but not exclusively, to stitching of frames.
- An aspect of some embodiments of the present invention relates to systems and/or methods (e.g., code executed by a processor of a computing device) for stitching frames or images (for brevity, referred to herein interchangeably) captured at multiple viewing angles, optionally around a common center region, by multiple imagers (e.g. cameras, image sensors), into a panoramic frame or panoramic image that depicts the environment surrounding the common center region, for brevity also referred to herein as a common center.
- the stitching is performed to reduce visibility of the stitching seams, and to create an improved panoramic frame viewing experience for the user, which more closely resembles the captured surrounding environment.
- the stitching of frames is performed along an irregular line identified in an overlapping area between fields of view of the frames.
- the irregular line is optionally identified along boundaries of objects or segmented areas which are located in the overlapping area.
- the frames to be stitched by embodiments of the present invention may be pre-processed (i.e., before the stitching) for alignment and registration.
- the overlapping areas may not necessarily be identified from an analysis of the frame content itself, but rather may be estimated using external methods.
- the overlapping areas are estimated by projecting a visual representation of the frames onto a virtual sphere skeleton model.
- the overlapping areas are estimated based on a calibration model (e.g., a mathematical model) defined for one or both of the imagers that capture the overlapping frames.
- the calibration model may be defined based on principal point parameter(s), focal length parameter(s), and/or fisheye distortion parameter(s).
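- A minimal sketch of such a calibration model, assuming OpenCV's fisheye camera model (the intrinsic values below are placeholders, not calibrated data): the principal point (cx, cy) and focal lengths (fx, fy) sit in the matrix K, and the fisheye distortion coefficients in D; undistorting pixel coordinates through the model yields viewing rays from which overlapping directions between neighboring imagers can be estimated.

```python
import cv2
import numpy as np

# Intrinsics: fx, fy on the diagonal, principal point (cx, cy) in the last column.
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])
D = np.array([0.1, -0.05, 0.01, 0.0])  # fisheye distortion coefficients k1..k4

# Undistort a pixel into a normalized viewing direction; frames whose rays
# cover a common direction range are candidates for an overlapping area.
pixel = np.array([[[500.0, 300.0]]])
ray = cv2.fisheye.undistortPoints(pixel, K, D)
print(ray)
```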
- the systems and/or methods described herein provide a technical solution to the technical problem of how to reduce the visibility of seams in a panoramic frame that depicts the environment surrounding the common center when the panoramic frame is created by combining frames captured at multiple viewing angles.
- the stitched frames may be frames of a VR file.
- Such VR files may be used in virtual reality systems, for example, presented to a user within a VR headset for viewing VR videos.
- the frames acquired for stitching together are captured by different cameras, which have different perspectives and/or different characteristics, such as focus, exposure, and white balance, as well as different lenses used by the imagers (e.g., wide and/or fish-eye lenses).
- a sub technical problem may be stitching the frames captured by the different cameras, which have different characteristics and changing orientation (e.g. due to minor movements of fixed or adjustable cameras), in a manner that decreases or eliminates visibility of the seams. Visible seams reduce the natural look or real feeling that the VR video is able to provide. Reducing or eliminating stitching distortions (e.g., in left and right panoramic frames, each designed to be viewed by the respective left and right eye) and/or other artifacts improves the VR video by removing or reducing inconsistent, non-horizontal parallax, which would otherwise cause the viewer discomfort and/or nausea.
- the systems and/or methods described herein tie mathematical operations (e.g., estimation of overlapping areas of frames, calculation of motion gradient(s), and frame stitching) to the ability of a processor to process digital images, for example, by stitching frames acquired at multiple viewing angles around a common center into a panoramic frame based on an irregular line crossing an overlaying area of the frames.
- There are several methods that define the cutting line. The simplest one is a straight vertical line cutting both overlapping frames at a selected coordinate. Adding a gradient transition from one overlapping frame to another is also used for adapting a straight line. For example, when connecting a left frame and a right frame, the transparency of the overlapping area of the left frame may gradually increase from left to right, and the exact opposite in the right frame. All the straight cutting methods have one significant disadvantage: human vision is especially sensitive to well defined geometric figures, such as straight lines, so any well-defined stitching line is prominent to the viewer.
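- For concreteness, a sketch of the gradient-transition variant described above (a plain linear alpha blend across a straight vertical cut; the column layout and overlap width are assumptions):

```python
import numpy as np

def blend_straight_seam(left: np.ndarray, right: np.ndarray, overlap: int):
    # Left-frame transparency ramps from opaque to transparent across the
    # overlap, while the right frame ramps the opposite way.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    blended = (left[:, -overlap:] * alpha +
               right[:, :overlap] * (1.0 - alpha)).astype(left.dtype)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])
```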
- the systems and/or methods described herein relate to processing frames acquired at multiple viewing angles by imagers mounted around a common center.
- New data is created in the form of a panoramic frame, by stitching together the acquired frames.
- the panoramic frame may be stored in a memory device, and optionally played back to a user, for example, displayed in VR headgear.
- the panoramic frame may be incorporated in a video that includes multiple consecutive panoramic frames.
- the systems and/or methods described herein improve performance of a computer, for example, by using less memory and/or improving computation time in producing an improved digital image.
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- FIG. 1 is a flowchart of a method of stitching frames captured by neighboring imagers at multiple viewing angles into a panoramic frame, in accordance with some embodiments of the present invention.
- the method identifies an irregular line in overlapping areas of frames by detecting boundaries of objects or any other segmented areas in the overlapping areas and identifying the shortest irregular line which crosses (e.g. divides into two) the overlapping areas.
- the method is repeated for each pair of frames captured substantially simultaneously by imagers of a panoramic imaging device.
- Reference is also made to FIG. 2 , which is a block diagram of components of a system 200 that allows a user to capture individual panoramic frames or a video of a sequence of panoramic frames of an environment surrounding (at least in part) a common center region, using multiple imagers, each pointed toward a different angle around the common center region.
- the user may record the video for playback in a virtual reality setting, such as using a VR headset.
- the method of FIG. 1 may be implemented by system 200 of FIG. 2 .
- System 200 includes a computing unit 202 housed with imagers 212 , for example a custom designed unit, referred to herein as a VR imaging device (see for example FIG. 3B which is an exemplary housing of a VR imaging device), or separate from a housing which includes the imagers 212 , for instance a personal computer, a server, a mobile device, a wearable computer, or other implementations.
- Computing unit 202 includes one or more processor(s) 204 , and a program store 206 storing code instructions for execution by processor(s) 204 .
- Processor(s) 204 may be, for example, central processing unit(s) (CPU), one or more graphics processing unit(s) (GPUs), field programmable gate array(s) (FPGA), digital signal processor(s) (DSP), and/or application specific integrated circuit(s) (ASIC).
- processors may be part of a processing unit that includes multiple processors (homogenous or heterogeneous) arranged for parallel processing, as clusters and/or as one or more multi core processing units.
- Program store 206 stores code instructions implementable by processor(s) 204 and may be, for example, a random access memory (RAM), a read-only memory (ROM), and/or a storage device, for example, non-volatile memory, magnetic media, semiconductor memory devices, a hard drive, removable storage, or optical media (e.g., DVD, CD-ROM).
- the instructions are optionally to implement the method described in FIG. 1 .
- Computing unit 202 includes or is in communication with a data repository 208 , for example, a memory, a storage unit, a hard drive, an optical disc, a remote storage server, and/or a cloud server (e.g., accessed via a network connection).
- Data repository 208 may store the raw acquired frames (e.g., in raw frame repository 208 A), store the frames adapted as described by the systems and/or methods described herein (e.g., in adapted frame repository 208 B), and/or the created panoramic frames (e.g., in panoramic frame repository 208 C).
- Computing unit 202 includes a data interface 210 (e.g., physical, virtual and/or a software interface) for receiving frames acquired from each of multiple imagers 212 (e.g., digital cameras).
- Imagers 212 (e.g. red, green, blue (RGB) imagers) are optionally paired to capture frames for the left and right eyes, which may be presented to the different eyes, for example, using a VR headset.
- imagers may be divided into two groups, a left eye group and a right eye group. Members of the left eye group are in even places and members of the right eye group are in odd places (assuming that places are distributed in a sequential order in a circle surrounding the common center region), or vice versa.
- frames captured by imagers of the left eye group are stitched to form a left eye panoramic frame and frames captured by imagers of the right eye group are stitched to form a right eye panoramic frame.
- the stitching of each one of the left eye panoramic frame and the right eye panoramic frame is done separately, for instance as described below (overlapping areas are between frames captured by members of the same group).
- the left eye panoramic frame and the right eye panoramic frame are combined to create a stereoscopic panoramic frame.
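- A tiny sketch of the alternating grouping (imager indices stand in for frames; the sequential ordering of places is an assumption):

```python
imagers = list(range(8))         # 8 imagers mounted around the common center
left_eye_group = imagers[0::2]   # even places -> [0, 2, 4, 6]
right_eye_group = imagers[1::2]  # odd places  -> [1, 3, 5, 7]
# Each group's frames are stitched separately into a left/right eye panorama,
# and the two panoramas are combined into a stereoscopic panoramic frame.
```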
- the dashed rectangles may be members of the left eye group and the non-dashed rectangles are members of the right eye group.
- imagers 212 capture frames that are displayed to both eyes simultaneously, for example, projected within a 180 degree theater.
- Each one of the imagers 212 may have a wide angle lens designed to capture a wide field of view, for example, a fish eye lens.
- Exemplary imagers 212 are cameras with ultra-wide and/or vertical angle lenses that capture about 120 degrees horizontally and about 175 degrees vertically, or other values.
- the number of imagers 212 is selected to cover the designed environment (twice in embodiments wherein a stereoscopic panoramic frame is created), and may be based on the field of view that may be captured by the lenses of the cameras, for example, 4 imagers, or 8 imagers, 10 imagers, 16 imagers and/or any intermediate or larger number of imagers.
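- A worked example with assumed numbers (4 imagers per eye group, 120 degree lenses), showing the coverage arithmetic behind the imager count:

```python
n_imagers = 4                  # imagers in one eye's group (8 in total for stereo)
fov_h = 120.0                  # horizontal field of view per imager, in degrees
spacing = 360.0 / n_imagers    # 90 degrees between neighboring optical axes
overlap = fov_h - spacing      # 30 degrees shared by neighboring fields of view
assert overlap > 0, "neighboring fields of view must overlap for stitching"
```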
- FIG. 3A depicts exemplary implementations for the arrangement of imagers 312 (e.g., corresponding to imagers 212 as described with reference to FIG. 2 ), in accordance with some embodiments of the present invention.
- Imagers 312 are mounted around a common center 302 .
- Imagers 312 are arranged to acquire frames at multiple viewing angles. The frames are stitched into a panoramic frame (using the systems and/or methods described herein) that depicts the environment surrounding common center 302 .
- Each one of arrangements 304 and 306 includes eight imagers 312 arranged to cover 360 degrees around common center 302 .
- Implementation 304 depicts imagers 312 arranged in a square 308 (or rectangular) format, including two imagers 312 per side of square 308 , which may be paired to capture frames for the left and right eyes.
- Reference is also made to FIG. 3B , which is a lateral view of an exemplary quadratic housing with truncated corners of a custom designed unit that includes 4 pairs of lateral imagers 1312 , where each pair is located at another truncated corner of the exemplary quadratic housing.
- Arrangement 306 includes imagers 312 arranged in a circle 310 (or oval) format, spaced apart along the circumference of circle 310 . Imagers 312 may capture frames for the left and right eyes. It is noted that other implementation shapes may be used.
- imagers are divided into pairs, wherein each pair is designed to capture a stereoscopic frame.
- how overlapping areas in pairs of stereoscopic frames are identified and used for creating a panoramic frame is described below.
- a stereoscopic frame is referred to herein as a frame and a pair of imagers designed to capture a stereoscopic frame is referred to herein as an imager.
- a VR imaging device having an arrangement of pairs of imagers for capturing stereoscopic frames is used where the fields of view are as depicted in FIG. 3C .
- the optical axis of each imager in a pair of imagers is tilted toward the other imager of the pair, for instance toward the optical axis thereof.
- the tilting of the optical axis is in relation to an axis passing through the common center and through a respective tangential point of an origin point of the field of view of the respective imager on a virtual circle passing via all origin points of the fields of view of all imagers, for example see the circle depicted in FIGS. 3C and 3D .
- the tilting is between 20 and 30 degrees, for instance 22 degrees as depicted in FIG. 3C .
- the tilting of imagers reduces the overlapping areas between the fields of views.
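- A geometry sketch with assumed values (the 45 degree pair separation is illustrative; the 22 degree tilt follows the example above), showing how tilting each imager of a pair toward its partner makes the pair's optical axes nearly parallel:

```python
pair_separation = 45.0   # degrees between the pair's radial axes (assumed)
tilt = 22.0              # inward tilt per imager, per the example above

divergence_untilted = pair_separation           # radial axes, as in FIG. 3D
divergence_tilted = pair_separation - 2 * tilt  # tilted axes, as in FIG. 3C
print(divergence_untilted, "->", divergence_tilted)  # 45.0 -> 1.0 degrees
```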
- FIG. 3C depicts an arrangement of imagers wherein each imager is tilted toward its paired imager
- FIG. 3D depicts another arrangement of imagers wherein the optical axis of each imager is aligned to continue a radius of a virtual circle passing via mounting points of all imagers, for instance the points of origin of the optical axes of the imagers.
- Implementations 304 and 306 may have a substantially disc shape, in which imagers 312 are arranged along a plane. It is noted that other implementation profiles may be used, for example, a sphere, or half-sphere.
- Additional imagers 312 may be positioned to face up or down (not shown).
- Computing unit 202 and imagers 312 may be housed within a casing based on implementations 304 and/or 306 , for example, as a standalone portable unit, which may be used by consumers at home.
- computing unit 202 includes a communication interface 214 (e.g., physical, software, and/or virtual) to communicate with one or more external devices, to store and/or present the created panoramic frames (e.g., videos), for instance a Wi-Fi™ module or a Bluetooth™ module.
- exemplary external devices include a personal display device 216 (e.g., a VR headset), a storage device 218 to store the videos for future playback, and a server 220 (e.g., web server, storage server, video server) that may be communicated with over a network 222 .
- the recorded videos may be publicly played, for example, projected onto a panoramic screen in a theater, in a room, or at home, for example, by a projector (not shown).
- Computing unit 202 includes or is in communication with a user interface 224 (which may be integrated within a housing containing computing unit 202 , implemented as software on a client terminal, and/or implemented as part of the display device displaying the panoramic frames), for example, a touchscreen, a keyboard, a mouse, and voice activated software using speakers and microphone.
- User interface 224 may access a code (e.g., stored on a client terminal and/or on computing unit 202 ) to customize the creation of the panoramic frames based on user inputs.
- the imagers are calibrated for calculating a camera calibration model for image alignment as well as reduction of parallax distortion.
- the camera calibration model is calculated based on intrinsic parameters of each imager, for instance principal point parameter(s), focal length parameter(s), and/or fisheye distortion parameter(s) and optionally based on extrinsic parameter(s).
- the parameters may be calculated by placing imagers of the VR imaging device in front of a chessboard pattern while the VR imaging device is rotated and capturing sequence(s) of frames.
- for the calculation of a camera calibration model, which is optionally executed using processor(s) 204 , corners of a pattern of n×m chessboard tiles are detected, for instance by finding a linear least squares homography of the pattern.
- a Gauss-Newton method may be applied to find the above imager parameters and the rotations and translations of the imagers that yield the detected homographies for several views of the chessboard.
- a Jacobian matrix is calculated and a quality criterion is calculated based on a mean square error method. This allows calibrating the extrinsic parameters of a respective rig for the calculation of the camera calibration model, for instance mutual rotations, by means of bundle adjustment on frames where distant objects are shot.
- the extrinsic parameters are angles of rotation of each imager (e.g. optical axis angle) in 3D space (e.g. tilt, pan, and roll).
- Distant content may be detected by calculating an optical flow between each two imagers looking in the same direction but displaced horizontally (a stereo pair).
- Pairs of frames containing distant content should have a significantly lower parallax.
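- A sketch of that distant-content test (dense Farneback optical flow between the two imagers of a stereo pair; the parallax threshold is an assumption):

```python
import cv2
import numpy as np

def looks_distant(gray_left: np.ndarray, gray_right: np.ndarray,
                  max_parallax_px: float = 2.0) -> bool:
    # Dense optical flow between the pair; distant content shows little
    # horizontal displacement (low parallax) between the two views.
    flow = cv2.calcOpticalFlowFarneback(
        gray_left, gray_right, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    parallax = np.median(np.abs(flow[..., 0]))
    return parallax < max_parallax_px
```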
- Homographies may be detected by matching feature points, for instance using a scale-invariant feature transform (SIFT) process or a speeded up robust features (SURF) process.
- Rotations may be found by a Levenberg-Marquardt method. Jacobian matrices are numerically approximated. Intrinsic parameters may not be changed at this stage.
- the calibration may be initially executed using twenty frames of the chessboard pattern per imager.
- unclear frames with a high pixel re-projection error of the calibration corners are removed, so that only frames having a quality above a threshold are used for the calibration, ensuring a low pixel re-projection error.
- the calibration is made on the corners of a checkerboard pattern. It is optionally assumed that the board is not moving, and hence remains at fixed coordinates in the world coordinate system (X-Y plane), with squares starting at (0,0,0).
- a bundle adjustment algorithm may be applied.
- the following pseudo code iteratively collects frames for calculating intrinsic parameters:
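- The pseudo code itself is not reproduced in this text; the following Python sketch is a hedged reconstruction of such a loop using OpenCV's fisheye model (the board size, point layouts, and re-projection threshold are assumptions):

```python
import cv2
import numpy as np

def collect_and_calibrate(gray_frames, board=(9, 6), max_rms=1.0):
    # World coordinates of the chessboard corners, fixed in the X-Y plane
    # with squares starting at (0, 0, 0), as assumed above.
    objp = np.zeros((1, board[0] * board[1], 3), np.float64)
    objp[0, :, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

    obj_points, img_points, size = [], [], None
    for gray in gray_frames:
        found, corners = cv2.findChessboardCorners(gray, board)
        if not found:
            continue                               # skip unclear frames
        obj_points.append(objp)
        img_points.append(corners.reshape(1, -1, 2).astype(np.float64))
        size = gray.shape[::-1]
    if not obj_points:
        return None

    K, D = np.zeros((3, 3)), np.zeros((4, 1))
    rms, K, D, _, _ = cv2.fisheye.calibrate(
        obj_points, img_points, size, K, D,
        flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC)
    # Reject calibrations with a high pixel re-projection error.
    return (K, D) if rms < max_rms else None
```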
- steps 2-5 of the pseudo code are executed using standard computer vision routines, using the intrinsic parameters of the respective imager calculated during the calibration process.
- each imager is calibrated separately. For instance, when 8 imagers are used, the calibration process is executed on 4 even (left) imagers and then on 4 odd (right) imagers.
- the 0 th imager may be added artificially and temporarily to the odd imagers for calibration. This way, both the even and the odd imagers have a common field of view.
- Reference is now made again to FIG. 1 , which describes a process wherein a pair of frames having an overlapping area depicting a common portion of the environment is processed for stitching the pair of frames.
- the process is optionally repeated, either iteratively or simultaneously, for each pair of frames captured by neighboring imagers which are used to capture a set of frames.
- the set of frames optionally includes frames captured simultaneously or substantially simultaneously (e.g. with minor technical time drift) by a plurality of imagers mounted around a common center, for instance using the arrangements described above, for instance using the system depicted in FIG. 2 and optionally any of the arrangements depicted in FIG. 3A .
- the method is used for stitching multiple sets of frames into multiple panoramic frames, optionally stereoscopic, which are captured sequentially using an imaging device such as the imaging device depicted in FIG. 3B .
- This allows creating a VR file having a plurality of sequential panoramic frames for the creation and viewing of photographically-captured panoramas and the exploration of objects through frames taken at multiple viewing angles.
- the panoramic frame or image (referred to interchangeably) is optionally a VR panorama frame that documents the environment surrounding a center area to emulate an environment around a viewer (inside, looking out), yielding a sense of place and optionally changes in the place over time.
- a pair of frames having an overlapping area depicting a common portion of the environment is selected for processing from a set of frames which is captured simultaneously (a term used herein to describe also substantially simultaneously, for instance with a minor time deviation of less than 1 second).
- the set of frames comprises frames captured by imagers at multiple viewing angles, for example as described above, for instance using a system as depicted in FIG. 2 and/or an arrangement as depicted in FIG. 3A .
- the frames are projected on a sphere.
- an overlapping area between the frames of the pair of frames is identified, as described in a co-filed application titled “STITCHING FRAMES INTO A PANORAMIC FRAME” of the same inventors, which is incorporated herein by reference (attorney reference number 64940).
- a portion of one of the frames which depicts the overlapping area is selected, optionally randomly.
- an irregular line path crossing the overlapping area along boundaries of the objects depicted in the overlapping area is identified in each pair of frames.
- feature points are detected in the respective overlapping area, for instance by executing a corner detection method, such as an accelerated segment test (FAST) or any other suitable feature detection process.
- the feature points may be any points having one or more predefined feature point characteristics (a value or a range of values), for instance a geometric shape and/or size, color, hue, and/or a characteristic intensity.
- objects depicted in the overlapping area are detected, optionally based on the location of the feature points in the overlapping area.
- the objects may be detected using grow-region algorithms, such as a Watershed process, for growing regions around the feature points.
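- A sketch of such region growing (OpenCV's watershed with one marker seeded per feature point; the seed radius is an assumption):

```python
import cv2
import numpy as np

def segment_objects(overlap_bgr: np.ndarray, feature_points):
    # One integer label per feature point; watershed grows a region
    # around each seed and writes -1 on the boundaries between regions.
    markers = np.zeros(overlap_bgr.shape[:2], np.int32)
    for label, (x, y) in enumerate(feature_points, start=1):
        cv2.circle(markers, (int(x), int(y)), 3, label, -1)
    cv2.watershed(overlap_bgr, markers)
    boundaries = (markers == -1)
    return markers, boundaries
```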
- boundaries of the objects depicted in the overlapping area are identified, for instance using boundaries marking or identification processes.
- an irregular line path crossing the overlapping area along the identified boundaries is selected, for instance the shortest irregular line path which crosses the overlapping area.
- optionally, the boundaries are represented as a graph, and the topmost and bottommost vertices are forced to be vertices through which the irregular line path passes.
- the graph is then analyzed for identifying the shortest irregular line path crossing the overlapping area along the boundaries of the segments or areas.
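- A self-contained sketch of that shortest-path search (boundary pixels become cheap graph nodes, 8-connected steps become edges, and Dijkstra's algorithm finds a cheapest top-to-bottom irregular line; the off-boundary penalty is an assumption):

```python
import heapq
import numpy as np

def shortest_seam(boundaries: np.ndarray):
    """boundaries: bool array, True on object-boundary pixels."""
    h, w = boundaries.shape
    cost = np.where(boundaries, 1.0, 1000.0)  # off-boundary steps are expensive
    dist = np.full((h, w), np.inf)
    prev = {}
    heap = []
    for x in range(w):                        # any topmost pixel may start the path
        dist[0, x] = cost[0, x]
        heapq.heappush(heap, (dist[0, x], (0, x)))
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                          # stale queue entry
        if y == h - 1:                        # bottommost row reached
            path = [(y, x)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]                 # top-to-bottom seam coordinates
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx]
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    return None
```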
- Reference is now made to FIG. 4 , which is an exemplary overlapping area marked with feature points (dots), boundaries of objects (green colored lines), and a shortest irregular path which crosses the exemplary overlapping area along some of the boundaries, in accordance with some embodiments of the present invention.
- FIG. 5 depicts an exemplary overlapping area marked with boundaries of objects depicted in the exemplary overlapping area, in accordance with some embodiments of the present invention.
- FIG. 6A depicts an exemplary set of 4 frames which are captured by imagers at multiple viewing angles, and FIGS. 6B-6C depict parts of a panoramic frame stitched in accordance with the process depicted in FIG. 1 and described above.
- the frames are acquired using system 200 as described with reference to FIG. 2 .
- coordinates of the selected irregular path are matched with a previously calculated irregular path which was calculated for frames from the same imagers.
- another irregular path is calculated by shifting the coordinates of the previously calculated irregular path toward the coordinates of the current irregular path.
- the respective frames are stitched along the irregular path (used as a seam).
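- A sketch of the seam adjustment between consecutive video frames described above (the threshold and blend factor are assumptions); keeping the previous path unless the new one differs noticeably avoids visible seam jitter:

```python
import numpy as np

def adjust_seam(prev_path: np.ndarray, new_path: np.ndarray,
                threshold: float = 5.0, blend: float = 0.3) -> np.ndarray:
    # Paths hold one x coordinate per row of the overlapping area.
    diff = np.abs(new_path - prev_path).mean()
    if diff <= threshold:
        return prev_path                  # keep the stable seam
    # Shift the previous path toward the newly calculated one.
    return (1 - blend) * prev_path + blend * new_path
```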
- the process is repeated for any pair of frames captured using neighboring imagers for creating a panoramic frame.
- the created panoramic frame is added to a previously created panoramic frame (when applicable) for creating a panoramic VR file.
- the term “consisting essentially of” means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
- as used herein, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- the description of ranges in a range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
- whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range.
- the phrases “ranging/ranges between” a first indicated number and a second indicated number and “ranging/ranges from” a first indicated number “to” a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Studio Devices (AREA)
- Stereoscopic And Panoramic Photography (AREA)
- Image Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Processing Or Creating Images (AREA)
- Closed-Circuit Television Systems (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/067,832 US20190019299A1 (en) | 2016-01-03 | 2016-12-12 | Adaptive stitching of frames in the process of creating a panoramic frame |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662274321P | 2016-01-03 | 2016-01-03 | |
US201662274317P | 2016-01-03 | 2016-01-03 | |
PCT/IL2016/051328 WO2017115348A1 (en) | 2016-01-03 | 2016-12-12 | Adaptive stitching of frames in the process of creating a panoramic frame |
US16/067,832 US20190019299A1 (en) | 2016-01-03 | 2016-12-12 | Adaptive stitching of frames in the process of creating a panoramic frame |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190019299A1 true US20190019299A1 (en) | 2019-01-17 |
Family
ID=59224696
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/067,832 Abandoned US20190019299A1 (en) | 2016-01-03 | 2016-12-12 | Adaptive stitching of frames in the process of creating a panoramic frame |
US15/560,495 Expired - Fee Related US10460459B2 (en) | 2016-01-03 | 2016-12-12 | Stitching frames into a panoramic frame |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/560,495 Expired - Fee Related US10460459B2 (en) | 2016-01-03 | 2016-12-12 | Stitching frames into a panoramic frame |
Country Status (6)
Country | Link |
---|---|
US (2) | US20190019299A1 (zh) |
EP (2) | EP3398163A4 (zh) |
JP (2) | JP2019511016A (zh) |
KR (2) | KR20180111798A (zh) |
CN (2) | CN107960121A (zh) |
WO (2) | WO2017115348A1 (zh) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10460459B2 (en) | 2016-01-03 | 2019-10-29 | Humaneyes Technologies Ltd. | Stitching frames into a panoramic frame |
WO2020194190A1 (en) * | 2019-03-25 | 2020-10-01 | Humaneyes Technologies Ltd. | Systems, apparatuses and methods for acquiring, processing and delivering stereophonic and panoramic images |
WO2022076934A1 (en) * | 2020-10-09 | 2022-04-14 | Arizona Board Of Regents On Behalf Of Arizona State University | Anti-tamper protection using dendrites |
US20220180491A1 (en) * | 2019-05-15 | 2022-06-09 | Ntt Docomo, Inc. | Image processing apparatus |
US20220253988A1 (en) * | 2021-02-05 | 2022-08-11 | Motorola Solutions, Inc. | Device, method and system for identifying objects in warped images from a fisheye camera |
WO2024090674A1 (en) * | 2022-10-29 | 2024-05-02 | Samsung Electronics Co., Ltd. | Method and apparatus for stitching frames of image comprising moving objects |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10636121B2 (en) * | 2016-01-12 | 2020-04-28 | Shanghaitech University | Calibration method and apparatus for panoramic stereo video system |
JP6545229B2 (ja) * | 2017-08-23 | 2019-07-17 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理装置の制御方法およびプログラム |
US11748952B2 (en) * | 2017-09-27 | 2023-09-05 | Intel Corporation | Apparatus and method for optimized image stitching based on optical flow |
JP2019080174A (ja) * | 2017-10-25 | 2019-05-23 | 株式会社リコー | 画像処理装置、撮像システム、通信システム、画像処理方法、およびプログラム |
KR102025735B1 (ko) * | 2017-11-23 | 2019-09-26 | 전자부품연구원 | 복수의 촬영 영상을 이용한 360 vr 영상 변환 시스템 및 방법 |
GB2572956B (en) * | 2018-04-16 | 2021-09-08 | Sony Interactive Entertainment Inc | Calibration system and method |
TWI678920B (zh) * | 2018-05-23 | 2019-12-01 | 宏碁股份有限公司 | 影片處理裝置、其影片處理方法及電腦程式產品 |
CN108900817A (zh) * | 2018-08-23 | 2018-11-27 | 重庆加河科技有限公司 | 一种基于vr的教育实时监测系统 |
EP3667414B1 (en) * | 2018-12-14 | 2020-11-25 | Axis AB | A system for panoramic imaging |
KR20200081527A (ko) | 2018-12-19 | 2020-07-08 | 삼성전자주식회사 | 전자 장치 및 그 제어 방법 |
US10832377B2 (en) * | 2019-01-04 | 2020-11-10 | Aspeed Technology Inc. | Spherical coordinates calibration method for linking spherical coordinates to texture coordinates |
CN112099616A (zh) * | 2019-06-17 | 2020-12-18 | 深圳市黑电科技有限公司 | 一种体控ar眼镜360度全视角实现方法、系统及ar眼镜 |
JP7393809B2 (ja) * | 2019-06-21 | 2023-12-07 | スリーアイ インク | 全方位画像情報に基づく自動位相マッピング処理方法及びそのシステムとコンピュータープログラム |
KR102384177B1 (ko) * | 2019-06-21 | 2022-04-29 | 주식회사 쓰리아이 | 전방위 화상정보 기반의 자동위상 매핑 처리 방법 및 그 시스템 |
WO2021031210A1 (zh) * | 2019-08-22 | 2021-02-25 | 深圳市铂岩科技有限公司 | 视频处理方法和装置、存储介质和电子设备 |
CN112927238B (zh) * | 2019-12-06 | 2022-07-01 | 四川大学 | 结合光流与分水岭分割的岩心序列图像标注方法 |
KR102617222B1 (ko) * | 2019-12-24 | 2023-12-26 | 주식회사 쓰리아이 | 전방위 화상정보 기반의 자동위상 매핑 처리 방법 및 그 시스템 |
GB2591278A (en) * | 2020-01-24 | 2021-07-28 | Bombardier Transp Gmbh | A monitoring system of a rail vehicle, a method for monitoring and a rail vehicle |
CN111476716B (zh) * | 2020-04-03 | 2023-09-26 | 深圳力维智联技术有限公司 | 一种实时视频拼接方法和装置 |
CN111738923B (zh) * | 2020-06-19 | 2024-05-10 | 京东方科技集团股份有限公司 | 图像处理方法、设备及存储介质 |
KR20220025600A (ko) | 2020-08-24 | 2022-03-03 | 삼성전자주식회사 | 영상 생성 방법 및 장치 |
CN112085814B (zh) * | 2020-09-07 | 2024-05-14 | 北京百度网讯科技有限公司 | 电子地图显示方法、装置、设备及可读存储介质 |
CN112102307B (zh) * | 2020-09-25 | 2023-10-20 | 杭州海康威视数字技术股份有限公司 | 全局区域的热度数据确定方法、装置及存储介质 |
KR102431955B1 (ko) * | 2021-02-09 | 2022-08-18 | 브라이튼코퍼레이션 주식회사 | 비주얼콘텐츠 제작관리를 위한 비주얼콘텐츠 리뷰기능 제공방법 및 제공장치 |
CN116563186B (zh) * | 2023-05-12 | 2024-07-12 | 中山大学 | 一种基于专用ai感知芯片的实时全景感知系统及方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060257053A1 (en) * | 2003-06-16 | 2006-11-16 | Boudreau Alexandre J | Segmentation and data mining for gel electrophoresis images |
US20140104378A1 (en) * | 2011-04-08 | 2014-04-17 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Capturing panoramic or semi-panoramic 3d scenes |
US20150054913A1 (en) * | 2013-08-21 | 2015-02-26 | Jaunt Inc. | Image stitching |
US20150124049A1 (en) * | 2012-06-06 | 2015-05-07 | Sony Corporation | Image processing apparatus, image processing method, and program |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7259784B2 (en) | 2002-06-21 | 2007-08-21 | Microsoft Corporation | System and method for camera color calibration and image stitching |
US7730406B2 (en) * | 2004-10-20 | 2010-06-01 | Hewlett-Packard Development Company, L.P. | Image processing system and method |
US9035968B2 (en) * | 2007-07-23 | 2015-05-19 | Humaneyes Technologies Ltd. | Multi view displays and methods for producing the same |
US10080006B2 (en) * | 2009-12-11 | 2018-09-18 | Fotonation Limited | Stereoscopic (3D) panorama creation on handheld device |
US9361717B2 (en) * | 2010-10-19 | 2016-06-07 | Humaneyes Technologies Ltd. | Methods and systems of generating an interlaced composite image |
KR101819621B1 (ko) | 2010-10-19 | 2018-01-17 | Humaneyes Technologies Ltd. | Method and system for generating an interlaced composite image |
US8818132B2 (en) * | 2010-11-29 | 2014-08-26 | Microsoft Corporation | Camera calibration with lens distortion from low-rank textures |
US8581961B2 (en) * | 2011-03-31 | 2013-11-12 | Vangogh Imaging, Inc. | Stereoscopic panoramic video capture system using surface identification and distance registration technique |
JP5769813B2 (ja) * | 2011-11-07 | 2015-08-26 | Sony Computer Entertainment Inc. | Image generation apparatus and image generation method |
CN104335569B (zh) * | 2012-06-11 | 2017-08-25 | Sony Computer Entertainment Inc. | Image generation device and image generation method |
JP5828039B2 (ja) * | 2012-06-11 | 2015-12-02 | Sony Computer Entertainment Inc. | Image generation apparatus and image generation method |
US9413930B2 (en) * | 2013-03-14 | 2016-08-09 | Joergen Geerds | Camera system |
CN203632790U (zh) * | 2013-10-18 | 2014-06-04 | The Second Research Institute of the Civil Aviation Administration of China | Distributed linkage system capable of simultaneous panoramic display and local viewing |
WO2015085406A1 (en) | 2013-12-13 | 2015-06-18 | 8702209 Canada Inc. | Systems and methods for producing panoramic and stereoscopic videos |
JP6224251B2 (ja) * | 2013-12-19 | 2017-11-01 | Intel Corporation | Bowl-shaped imaging system |
WO2015127535A1 (en) * | 2014-02-26 | 2015-09-03 | Searidge Technologies Inc. | Image stitching and automatic-color correction |
US9262801B2 (en) * | 2014-04-01 | 2016-02-16 | Gopro, Inc. | Image taping in a multi-camera array |
JP2019511016A (ja) | 2016-01-03 | 2019-04-18 | Humaneyes Technologies Ltd. | Stitching frames into a panoramic frame |
- 2016
- 2016-12-12 JP JP2017549069A patent/JP2019511016A/ja active Pending
- 2016-12-12 US US16/067,832 patent/US20190019299A1/en not_active Abandoned
- 2016-12-12 EP EP16881396.2A patent/EP3398163A4/en not_active Withdrawn
- 2016-12-12 JP JP2018534905A patent/JP2019511024A/ja active Pending
- 2016-12-12 WO PCT/IL2016/051328 patent/WO2017115348A1/en active Application Filing
- 2016-12-12 WO PCT/IL2016/051329 patent/WO2017115349A1/en active Application Filing
- 2016-12-12 CN CN201680023730.0A patent/CN107960121A/zh active Pending
- 2016-12-12 US US15/560,495 patent/US10460459B2/en not_active Expired - Fee Related
- 2016-12-12 KR KR1020187020220A patent/KR20180111798A/ko unknown
- 2016-12-12 CN CN201680081415.3A patent/CN108700798A/zh active Pending
- 2016-12-12 EP EP16881395.4A patent/EP3398016A4/en not_active Withdrawn
- 2016-12-12 KR KR1020177032907A patent/KR20180101165A/ko unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060257053A1 (en) * | 2003-06-16 | 2006-11-16 | Boudreau Alexandre J | Segmentation and data mining for gel electrophoresis images |
US20140104378A1 (en) * | 2011-04-08 | 2014-04-17 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Capturing panoramic or semi-panoramic 3d scenes |
US20150124049A1 (en) * | 2012-06-06 | 2015-05-07 | Sony Corporation | Image processing apparatus, image processing method, and program |
US20150054913A1 (en) * | 2013-08-21 | 2015-02-26 | Jaunt Inc. | Image stitching |
US20150058102A1 (en) * | 2013-08-21 | 2015-02-26 | Jaunt Inc. | Generating content for a virtual reality system |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10460459B2 (en) | 2016-01-03 | 2019-10-29 | Humaneyes Technologies Ltd. | Stitching frames into a panoramic frame |
WO2020194190A1 (en) * | 2019-03-25 | 2020-10-01 | Humaneyes Technologies Ltd. | Systems, apparatuses and methods for acquiring, processing and delivering stereophonic and panoramic images |
US20220180491A1 (en) * | 2019-05-15 | 2022-06-09 | Ntt Docomo, Inc. | Image processing apparatus |
US12136192B2 (en) * | 2019-05-15 | 2024-11-05 | Ntt Docomo, Inc. | Image processing apparatus |
WO2022076934A1 (en) * | 2020-10-09 | 2022-04-14 | Arizona Board Of Regents On Behalf Of Arizona State University | Anti-tamper protection using dendrites |
US20220253988A1 (en) * | 2021-02-05 | 2022-08-11 | Motorola Solutions, Inc. | Device, method and system for identifying objects in warped images from a fisheye camera |
WO2024090674A1 (en) * | 2022-10-29 | 2024-05-02 | Samsung Electronics Co., Ltd. | Method and apparatus for stitching frames of image comprising moving objects |
Also Published As
Publication number | Publication date |
---|---|
CN107960121A (zh) | 2018-04-24 |
EP3398016A1 (en) | 2018-11-07 |
EP3398163A1 (en) | 2018-11-07 |
WO2017115348A1 (en) | 2017-07-06 |
US10460459B2 (en) | 2019-10-29 |
JP2019511024A (ja) | 2019-04-18 |
WO2017115349A1 (en) | 2017-07-06 |
JP2019511016A (ja) | 2019-04-18 |
EP3398163A4 (en) | 2019-10-16 |
EP3398016A4 (en) | 2019-08-28 |
KR20180101165A (ko) | 2018-09-12 |
CN108700798A (zh) | 2018-10-23 |
US20180063513A1 (en) | 2018-03-01 |
KR20180111798A (ko) | 2018-10-11 |
Similar Documents
Publication | Title |
---|---|
US20190019299A1 (en) | Adaptive stitching of frames in the process of creating a panoramic frame |
KR101944050B1 (ko) | Capture and rendering of panoramic virtual reality content |
US10375381B2 (en) | Omnistereo capture and render of panoramic virtual reality content |
KR102023587B1 (ko) | Camera rig and stereoscopic image capture |
EP3262614B1 (en) | Calibration for immersive content systems |
US10373362B2 (en) | Systems and methods for adaptive stitching of digital images |
US20170363949A1 (en) | Multi-tier camera rig for stereoscopic image capture |
EP2323416A2 (en) | Stereoscopic editing for video production, post-production and display adaptation |
CN109361912A (zh) | Multi-tier camera apparatus for stereoscopic image capture |
CN111866523B (zh) | Panoramic video synthesis method and apparatus, electronic device, and computer storage medium |
US20190266802A1 (en) | Display of Visual Data with a Virtual Reality Headset |
JP2019509526A (ja) | Optimal spherical image acquisition method using multiple cameras |
KR101947799B1 (ko) | 360-degree VR fisheye rendering method for virtual reality content services |
Zilly | Method for the automated analysis, control and correction of stereoscopic distortions and parameters for 3D-TV applications: new image processing algorithms to improve the efficiency of stereo- and multi-camera 3D-TV productions |
Zilly | Method for the automated analysis, control and correction of stereoscopic distortions and parameters for 3D-TV applications |
CN108924530A (zh) | Method and apparatus for correcting abnormal images in 3D shooting, and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: HUMANEYES TECHNOLOGIES LTD., ISRAEL. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: BAR, ANTON; TULBOVICH, YITZCHAK; FINE, SHMUEL. Reel/Frame: 046421/0522. Effective date: 20161013 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |