US20190228500A1 - Information processing method, information processing device, and program - Google Patents
Information processing method, information processing device, and program
- Publication number
- US20190228500A1 (Application No. US16/372,764)
- Authority
- US
- United States
- Prior art keywords
- image
- editing
- smartphone
- output
- case
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06T3/0018
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
- G06T3/047—Fisheye or wide-angle transformations
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
- G06T19/00—Manipulating 3D models or images for computer graphics
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N5/2258
- H04N5/23216
- H04N5/23238
- H04N5/23293
Definitions
- the present invention relates to at least one of an information processing method, an information processing device, and a program.
- a method for displaying a panoramic image has been known conventionally.
- a user interface (that will be referred to as a “UI” below) has been known for accepting an instruction from a user with respect to display of a panoramic image in a panoramic image display (see, for example, Japanese Patent Application Publication No. 2011-076249).
- a conventional UI assigns a function for scrolling an image to so-called “dragging” at a time of image display on a smartphone or the like, and hence, it may be difficult for a user to execute an image operation, for example, editing of an image or the like.
- an information processing method for causing a computer to process an image
- the information processing method causes the computer to execute an acquisition step of acquiring the image, a first production step of producing an editing image for editing at least a portion of the image and a first changing image for changing the editing image to be output, an editing step of editing the image on the produced editing image, a second production step of producing a second changing image based on an image edited at the editing step, and an output step of outputting an output image that has at least the editing image edited at the editing step and the second changing image produced at the second production step.
- FIG. 1 is a diagram that illustrates one example of an entire configuration of an image taking system according to one embodiment of the present invention.
- FIG. 2A , FIG. 2B , and FIG. 2C are diagrams that illustrate one example of an image taking device according to one embodiment of the present invention.
- FIG. 3 is a diagram that illustrates one example of image taking by an image taking device according to one embodiment of the present invention.
- FIG. 4A , FIG. 4B , and FIG. 4C are diagrams that illustrate one example of an image taken by an image taking device according to one embodiment of the present invention.
- FIG. 5 is a block diagram that illustrates one example of a hardware configuration of an image taking device according to one embodiment of the present invention.
- FIG. 6 is a block diagram that illustrates one example of a hardware configuration of a smartphone according to one embodiment of the present invention.
- FIG. 7 is a sequence diagram that illustrates one example of an entire process of an image taking system according to one embodiment of the present invention.
- FIG. 8A , FIG. 8B , FIG. 8C , and FIG. 8D are diagrams that illustrate one example of an all celestial sphere image according to one embodiment of the present invention.
- FIG. 9 is a diagram that illustrates one example of an all celestial sphere panoramic image according to one embodiment of the present invention.
- FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D are diagrams for illustrating one example of an initial image according to one embodiment of the present invention.
- FIG. 11 is a diagram that illustrates one example of an output image at an initial state for executing editing of an image according to one embodiment of the present invention.
- FIG. 12A , FIG. 12B , and FIG. 12C are diagrams for illustrating one example of editing of an area to be output according to one embodiment of the present invention.
- FIG. 13A and FIG. 13B are diagrams for illustrating one example of enlargement or reduction of an area to be output according to one embodiment of the present invention.
- FIG. 14 is a diagram for illustrating one example of another zoom process according to one embodiment of the present invention.
- FIG. 15 is a table for illustrating one example of another zoom process according to one embodiment of the present invention.
- FIG. 16A , FIG. 16B , FIG. 16C , FIG. 16D , and FIG. 16E are diagrams for illustrating one example of a “range” of another zoom process according to one embodiment of the present invention.
- FIG. 17A and FIG. 17B are diagrams for illustrating one example of editing that is executed for a predetermined area based on an editing image according to one embodiment of the present invention.
- FIG. 18 is a flowchart that illustrates one example of an entire process of a smartphone according to one embodiment of the present invention.
- FIG. 19A and FIG. 19B are diagrams for illustrating one example of changing of an output such as a position or direction of a changing image according to one embodiment of the present invention.
- FIG. 20 is a functional diagram for illustrating one example of a functional configuration of an image taking system according to one embodiment of the present invention.
- FIG. 1 is a diagram that illustrates one example of an entire configuration of an image taking system according to one embodiment of the present invention.
- An image taking system 10 has an image taking device 1 and a smartphone 2 .
- the image taking device 1 has a plurality of optical systems, and produces, and outputs to the smartphone 2 , for example, a taken image of a wide range such as all directions around the image taking device 1 (that will be referred to as an “all celestial sphere image” below). Details of the image taking device 1 and an all celestial sphere image will be described below.
- An image that is processed by the image taking system 10 is, for example, an all celestial sphere image.
- a panoramic image is, for example, an all celestial sphere image.
- An example of an all celestial sphere image will be described below.
- An information processing device is, for example, the smartphone 2 .
- the smartphone 2 will be described as an example below.
- the smartphone 2 is a device for causing a user to operate an all celestial sphere image acquired from the image taking device 1 .
- the smartphone 2 is a device for causing a user to output an acquired all celestial sphere image. A detail of the smartphone 2 will be described below.
- the image taking device 1 and the smartphone 2 are subjected to wired or wireless connection.
- the smartphone 2 downloads from the image taking device 1 , and inputs to the smartphone 2 , data such as an all celestial sphere image output from the image taking device 1 .
- connection may be executed through a network.
- an entire configuration is not limited to a configuration illustrated in FIG. 1 .
- the image taking device 1 and the smartphone 2 may be an integrated device.
- another computer other than the image taking device 1 and the smartphone 2 may be connected, so that the image taking system 10 is composed of three or more devices.
- FIG. 2A , FIG. 2B , and FIG. 2C are diagrams that illustrate one example of an image taking device according to one embodiment of the present invention.
- FIG. 2A , FIG. 2B , and FIG. 2C are diagrams that illustrate one example of an appearance of the image taking device 1 .
- FIG. 2A is one example of an elevation view of the image taking device 1 .
- FIG. 2B is one example of a left side view of the image taking device 1 .
- FIG. 2C is one example of a plan view of the image taking device 1 .
- the image taking device 1 has a front side image taking element 1 H 1 , a back side image taking element 1 H 2 , and a switch 1 H 3 .
- a hardware that is provided in an interior of the image taking device 1 will be described below.
- the image taking device 1 produces an all celestial sphere image by using images taken by the front side image taking element 1 H 1 and the back side image taking element 1 H 2 .
- the switch 1 H 3 is a so-called “shutter button” and is an input device for causing a user to execute an instruction of image taking for the image taking device 1 .
- the image taking device 1 is held by a hand of a user, for example, as illustrated in FIG. 2A , and the switch 1 H 3 is pushed to execute image taking.
- FIG. 3 is a diagram that illustrates one example of image taking by an image taking device according to one embodiment of the present invention.
- a user holds the image taking device 1 by hand and pushes the switch 1 H 3 in FIG. 2A , FIG. 2B , and FIG. 2C to execute image taking.
- It is possible for the image taking device 1 to take an image in all directions around the image taking device 1 by the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C and the back side image taking element 1 H 2 in FIG. 2A , FIG. 2B , and FIG. 2C .
- FIG. 4A , FIG. 4B , and FIG. 4C are diagrams that illustrate one example of an image taken by an image taking device according to one embodiment of the present invention.
- FIG. 4A is one example of an image taken by the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C .
- FIG. 4B is one example of an image taken by the back side image taking element 1 H 2 in FIG. 2A , FIG. 2B , and FIG. 2C .
- FIG. 4C is one example of an image that is produced based on an image taken by the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C and an image taken by the back side image taking element 1 H 2 in FIG. 2A , FIG. 2B , and FIG. 2C .
- An image taken by the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C is an image with an image taking range that is a wide range in a front direction of the image taking device 1 , for example, a range of 180° as an angle of view, as illustrated in FIG. 4A .
- An image taken by the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C has a distortion aberration as illustrated in FIG. 4A , in a case where the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C uses an optical system for taking an image with a wide range, for example, a so-called “fisheye lens”.
- An image in FIG. 4A taken by the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C is a so-called "hemispherical image" that has a wide range on one side of the image taking device 1 and a distortion aberration (such an image will be referred to as a "hemispherical image" below).
- It is desirable for an angle of view to be within a range greater than or equal to 180° and less than or equal to 200°.
- In a case where an angle of view is greater than 180°, a hemispherical image in FIG. 4A and a hemispherical image in FIG. 4B that will be described below have an overlapping image area when they are synthesized, and hence, synthesis is facilitated.
- An image taken by the back side image taking element 1 H 2 in FIG. 2A , FIG. 2B , and FIG. 2C is an image with an image taking range that is a wide range in a back direction of the image taking device 1 , for example, a range of 180° as an angle of view, as illustrated in FIG. 4B .
- An image in FIG. 4B taken by the back side image taking element 1 H 2 in FIG. 2A , FIG. 2B , and FIG. 2C is a hemispherical image similar to that of FIG. 4A .
- the image taking device 1 executes processes such as a distortion correction process and a synthesis process, and thereby produces an image illustrated in FIG. 4C from a front side hemispherical image in FIG. 4A and a back side hemispherical image in FIG. 4B .
- FIG. 4C is an image produced by, for example, Mercator's projection, equidistant cylindrical projection, or the like, namely, an all celestial sphere image.
- an all celestial sphere image is not limited to an image produced by the image taking device 1 .
- An all celestial sphere image may be, for example, an image taken by another camera or the like, or an image produced based on an image taken by another camera. It is desirable for an all celestial sphere image to be an image with a view angle in a wide range taken by a so-called “all direction camera”, a so-called “wide angle lens camera”, or the like.
- an all celestial sphere image will be described as an example, and an image is not limited to such an all celestial sphere image.
- An image may be, for example, an image or the like taken by a compact camera, a single lens reflex camera, a smartphone, or the like.
- An image may be a panoramic image that extends horizontally or vertically, or the like.
- FIG. 5 is a block diagram that illustrates one example of a hardware configuration of an image taking device according to one embodiment of the present invention.
- the image taking device 1 has an image taking unit 1 H 4 , an image processing unit 1 H 7 , an image taking control unit 1 H 8 , a Central Processing Unit (CPU) 1 H 9 , and a Read-Only Memory (ROM) 1 H 10 . Furthermore, the image taking device 1 has a Static Random Access Memory (SRAM) 1 H 11 , a Dynamic Random Access Memory (DRAM) 1 H 12 , and an operation interface (I/F) 1 H 13 . Moreover, the image taking device 1 has a network I/F 1 H 14 , a wireless I/F 1 H 15 , and an antenna 1 H 16 . Each component of the image taking device 1 is connected through a bus 1 H 17 and executes input or output of data or a signal.
- the image taking unit 1 H 4 has the front side image taking element 1 H 1 and the back side image taking element 1 H 2 .
- a lens 1 H 5 that corresponds to the front side image taking element 1 H 1 and a lens 1 H 6 that corresponds to the back side image taking element 1 H 2 are placed.
- the front side image taking element 1 H 1 and the back side image taking element 1 H 2 are so-called “camera units”.
- the front side image taking element 1 H 1 and the back side image taking element 1 H 2 have optical sensors such as a Complementary Metal Oxide semiconductor (CMOS) or a Charge Coupled Device (CCD).
- the front side image taking element 1 H 1 executes a process for converting light incident on the lens 1 H 5 to produce image data.
- the back side image taking element 1 H 2 executes a process for converting light incident on the lens 1 H 6 to produce image data.
- the image taking unit 1 H 4 outputs image data produced by the front side image taking element 1 H 1 and the back side image taking element 1 H 2 to the image processing unit 1 H 7 .
- image data are the front side hemispherical image in FIG. 4A and the back side hemispherical image in FIG. 4B or the like.
- the front side image taking element 1 H 1 and the back side image taking element 1 H 2 may have an optical element other than a lens, such as a stop or a low-pass filter, in order to execute image taking with a high image quality. Furthermore, the front side image taking element 1 H 1 and the back side image taking element 1 H 2 may execute a process such as a so-called “defective pixel correction” or a so-called “hand movement correction” in order to execute image taking with a high image quality.
- the image processing unit 1 H 7 executes a process for producing an all celestial sphere image in FIG. 4C from image data that are input from the image taking unit 1 H 4 .
- a detail of a process for producing an all celestial sphere image will be described below.
- a process that is executed by the image processing unit 1 H 7 may be such that a part or an entirety of a process is executed in parallel and redundantly by another computer.
- the image taking control unit 1 H 8 is a control device that controls each component of the image taking device 1 .
- the CPU 1 H 9 executes an operation or a control for each process that is executed by the image taking device 1 .
- the CPU 1 H 9 executes each kind of program.
- the CPU 1 H 9 may be composed of a plurality of CPUs or devices or a plurality of cores in order to attain speeding-up due to parallel processing.
- a process of the CPU 1 H 9 may be such that another hardware resource is provided inside or outside the image taking device 1 and caused to execute a part or an entirety of a process for the image taking device 1 .
- the ROM 1 H 10 , the SRAM 1 H 11 , and the DRAM 1 H 12 are examples of a storage device.
- the ROM 1 H 10 stores, for example, a program, data, or a parameter that is executed by the CPU 1 H 9 .
- the SRAM 1 H 11 and the DRAM 1 H 12 store, for example, a program, data to be used in a program, data to be produced by a program, a parameter, or the like, in a case where the CPU 1 H 9 executes a program.
- the image taking device 1 may have an auxiliary storage device such as a hard disk.
- the operation I/F 1 H 13 is an interface that executes a process for inputting an operation of a user to the image taking device 1 , such as the switch 1 H 3 .
- the operation I/F 1 H 13 is an operation device such as a switch, a connector or cable for connecting an operation device, a circuit for processing a signal input from an operation device, a driver, a control device, or the like.
- the operation I/F 1 H 13 may have an output device such as a display.
- the operation I/F 1 H 13 may be a so-called “touch panel” wherein an input device and an output device are integrated, or the like.
- the operation I/F 1 H 13 may have an interface such as a Universal Serial Bus (USB), connect a storage medium such as Flash Memory ("Flash Memory" is a registered trademark), and input data to and output data from the image taking device 1 .
- the switch 1 H 3 may have an electric power source switch for executing an operation other than a shutter operation, a parameter input switch, or the like.
- the network I/F 1 H 14 , the wireless I/F 1 H 15 , and the antenna 1 H 16 are devices for connecting the image taking device 1 with another computer through a wireless or wired network and a peripheral circuit or the like.
- the image taking device 1 is connected to a network through the network I/F 1 H 14 and transmits data to the smartphone 2 .
- the network I/F 1 H 14 , the wireless I/F 1 H 15 , and the antenna 1 H 16 may be configured to be connected by using a connector such as a USB, a cable, or the like.
- the bus 1 H 17 is used for an input or an output of data or the like between respective components of the image taking device 1 .
- the bus 1 H 17 is a so-called “internal bus”.
- the bus 1 H 17 is, for example, a Peripheral Component Interconnect Bus Express (PCI Express).
- the image taking device 1 is not limited to a case of two image taking elements. For example, it may have three or more image taking elements. Moreover, the image taking device 1 may change an image taking angle of one image taking element to take a plurality of partial images. Furthermore, the image taking device 1 is not limited to an optical system that uses a fisheye lens. For example, a wide angle lens may be used.
- a process that is executed by the image taking device 1 is not limited to being executed by the image taking device 1 .
- a part or an entirety of a process that is executed by the image taking device 1 may be executed by the smartphone 2 or another computer connected through a network, while the image taking device 1 transmits data or a parameter.
- FIG. 6 is a block diagram that illustrates one example of a hardware configuration of an information processing device that includes a smartphone according to one embodiment of the present invention.
- An information processing device is a computer.
- An information processing device may be, for example, a notebook Personal Computer (PC), a Personal Digital Assistant (PDA), a tablet, a mobile phone, or the like, other than a smartphone.
- the smartphone 2 that is one example of an information processing device has an auxiliary storage device 2 H 1 , a main storage device 2 H 2 , an input/output device 2 H 3 , a state sensor 2 H 4 , a CPU 2 H 5 , and a network I/F 2 H 6 .
- Each component of the smartphone 2 is connected to a bus 2 H 7 and executes an input or an output of data or a signal.
- the auxiliary storage device 2 H 1 stores, due to a control of the CPU 2 H 5 , a control device, or the like, information such as each kind of data that includes an intermediate result of a process executed by the CPU 2 H 5 , a parameter, or a program.
- the auxiliary storage device 2 H 1 is, for example, a hard disk, a flash Solid State Drive (SSD), or the like.
- information stored in the auxiliary storage device 2 H 1 is such that a part or an entirety of such information may be stored in a file server connected to the network I/F 2 H 6 or the like, instead of the auxiliary storage device 2 H 1 .
- the main storage device 2 H 2 is a main storage device such as a storage area to be used by a program that is executed by the CPU 2 H 5 , that is, a so-called “Memory”.
- the main storage device 2 H 2 stores information such as data, a program, or a parameter.
- the main storage device 2 H 2 is, for example, a Static Random Access Memory (SRAM), a DRAM, or the like.
- the main storage device 2 H 2 may have a control device for executing storage in or acquisition from a memory.
- the input/output device 2 H 3 is a device that has functions of an output device for executing display and an input device for inputting an operation of a user.
- the input/output device 2 H 3 is a so-called “touch panel”, a “peripheral circuit”, a “driver”, or the like.
- the input/output device 2 H 3 executes a process for displaying, to a user, for example, a predetermined Graphical User Interface (GUI) or an image input to the smartphone 2 .
- the input/output device 2 H 3 executes a process for inputting an operation of a user, for example, in a case where a GUI with a display or an image is operated by such a user.
- the state sensor 2 H 4 is a sensor for detecting a state of the smartphone 2 .
- the state sensor 2 H 4 is a gyro sensor, an angle sensor, or the like.
- the state sensor 2 H 4 determines, for example, whether or not one side that is possessed by the smartphone 2 is provided at a predetermined or greater angle with respect to a horizon. That is, the state sensor 2 H 4 executes a detection as to whether the smartphone 2 is provided at a state of a longitudinally directional attitude or a state of a laterally directional attitude.
- the CPU 2 H 5 executes a calculation in each process that is executed by the smartphone 2 and a control of a device that is provided in the smartphone 2 .
- the CPU 2 H 5 executes each kind of program.
- the CPU 2 H 5 may be composed of a plurality of CPUs or devices, or a plurality of cores in order to execute a process in parallel, redundantly, or dispersedly.
- a process for the CPU 2 H 5 is such that another hardware resource may be provided inside or outside the smartphone 2 to execute a part or an entirety of a process for the smartphone 2 .
- the smartphone 2 may have a Graphics Processing Unit (GPU) for executing image processing, or the like.
- the network I/F 2 H 6 is a device such as an antenna, a peripheral circuit, a driver, or the like, for inputting or outputting data, or the like, that is connected to another computer through a wireless or wired network.
- the smartphone 2 executes a process for inputting image data from the image taking device 1 due to the CPU 2 H 5 and the network I/F 2 H 6 .
- the smartphone 2 executes a process for outputting a predetermined parameter or the like to the image taking device 1 due to the CPU 2 H 5 and the network I/F 2 H 6 .
- FIG. 7 is a sequence diagram that illustrates one example of an entire process for an image taking system according to one embodiment of the present invention.
- At step S 0701 , the image taking device 1 executes a process for producing an all celestial sphere image.
- FIG. 8A , FIG. 8B , FIG. 8C , and FIG. 8D are diagrams that illustrate one example of an all celestial sphere image according to one embodiment of the present invention.
- FIG. 8A , FIG. 8B , FIG. 8C , and FIG. 8D are diagrams that illustrate one example of a process for producing an all celestial sphere image at step S 0701 .
- FIG. 8A is a diagram illustrated in such a manner that positions in a hemispherical image in FIG. 4A where incidence angles are equal in a horizontal direction or a vertical direction with respect to an optical axis are connected by a line.
- An incidence angle θ in a horizontal direction with respect to an optical axis and an incidence angle φ in a vertical direction with respect to such an optical axis will be denoted below.
- FIG. 8B is a diagram illustrated in such a manner that positions in a hemispherical image in FIG. 4B where incidence angles are equal in a horizontal direction or a vertical direction with respect to an optical axis are connected by a line.
- FIG. 8C is a diagram that illustrates one example of an image processed in accordance with Mercator projection.
- FIG. 8C is an example of a case where an image in a state illustrated in FIG. 8A or FIG. 8 B is, for example, caused to correspond to a preliminarily produced Look Up Table (LUT) or the like and processed in accordance with equidistant cylindrical projection.
- FIG. 8D is one example of a synthesis process for synthesizing images provided by applying a process illustrated in FIG. 8C to FIG. 8A and FIG. 8B .
- a synthesis process is to produce an image by using a plurality of images, for example, in a state illustrated in FIG. 8C .
- a synthesis process is not limited to a process for simply arranging pre-processed images successively.
- a synthesis process may be a process for executing a synthesis process in such a manner that a pre-processed image in FIG. 4A is arranged at a center of an all celestial sphere image and a pre-processed image in FIG. 4B is divided and arranged at left and right sides thereof, so as to produce an all celestial sphere image illustrated in FIG. 4C .
- a process for producing an all celestial sphere image is not limited to a process in accordance with equidistant cylindrical projection.
- a so-called "upside-down" case is provided in such a manner that, like FIG. 8B , an alignment of pixels in a direction of φ is upside-down with respect to an alignment in FIG. 8A and an alignment of pixels in a direction of θ is left-right reversed with respect to an alignment in FIG. 8A .
- the image taking device 1 may execute a process for rolling or rotating a pre-processed image in a state of FIG. 8B by 180° so as to align with an alignment of pixels in a direction of θ and a direction of φ in FIG. 8A .
- a process for producing an all celestial sphere image may execute a correction process for correcting distortion aberration that is provided in an image in a state of FIG. 8A or FIG. 8B .
- a process for producing an all celestial sphere image may execute a process for improving an image quality, for example, shading correction, gamma correction, white balance, hand movement correction, an optical black correction process, a defective pixel correction process, an edge enhancement process, a linear correction process, or the like.
- a synthesis process may execute correction by utilizing an overlapping range to execute such a synthesis process at high precision.
- Due to a process for producing an all celestial sphere image, the image taking device 1 produces an all celestial sphere image from hemispherical images that are taken by the image taking device 1 .
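- The correspondence described above between a direction on the all celestial sphere and a pixel of one hemispherical image can be sketched as follows. This is a minimal illustration only: the ideal equidistant fisheye model, the 180° angle of view, and the image size are assumptions, and an actual image taking device 1 would rather use a calibrated correspondence such as the preliminarily produced LUT mentioned above.

    import math

    def direction_to_fisheye(theta, phi, fisheye_size, max_incidence=math.pi / 2):
        """Map a direction (theta: horizontal angle, phi: vertical angle, in radians)
        to pixel coordinates (u, v) in one hemispherical (fisheye) image.
        Assumes an ideal equidistant fisheye whose optical axis is the +z axis."""
        # Unit direction vector of the incoming ray.
        x = math.cos(phi) * math.sin(theta)
        y = math.sin(phi)
        z = math.cos(phi) * math.cos(theta)
        incidence = math.acos(max(-1.0, min(1.0, z)))  # angle from the optical axis
        if incidence > max_incidence:
            return None                                # this direction belongs to the other lens
        r = incidence / max_incidence * (fisheye_size / 2.0)  # equidistant projection
        azimuth = math.atan2(y, x)
        u = fisheye_size / 2.0 + r * math.cos(azimuth)
        v = fisheye_size / 2.0 + r * math.sin(azimuth)
        return u, v

    # The center of the equidistant cylindrical image corresponds to the center of the front image.
    print(direction_to_fisheye(0.0, 0.0, fisheye_size=1024))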
- At step S 0702 , the smartphone 2 executes a process for acquiring an all celestial sphere image produced at step S 0701 .
- a case where the smartphone 2 acquires an all celestial sphere image in FIG. 8D will be described as an example below.
- At step S 0703 , the smartphone 2 produces an all celestial sphere panoramic image from an all celestial sphere image acquired at step S 0702 .
- FIG. 9 is a diagram that illustrates one example of an all celestial sphere panoramic image according to one embodiment of the present invention.
- the smartphone 2 executes a process for producing an all celestial sphere panoramic image in FIG. 9 from an all celestial sphere image in FIG. 8D .
- An all celestial sphere panoramic image is an image provided in such a manner that an all celestial sphere image is applied onto a spherical shape.
- a process for producing an all celestial sphere panoramic image is realized by, for example, an Application Programming Interface (API) such as Open GL (“Open GL” is a registered trademark) for Embedded Systems (Open GL ES).
- An all celestial sphere panoramic image is produced by dividing an image into triangles, joining vertices P of triangles (that will be referred to as “vertices P” below), and applying a polygon thereof.
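- A minimal sketch of the idea of applying an all celestial sphere image onto a sphere CS is shown below. The vertex counts and the radius are arbitrary illustration values, and the code only prepares vertices P and triangles; an implementation based on an API such as Open GL ES would upload them as a textured triangle mesh.

    import math

    def sphere_mesh(stacks=16, slices=32, radius=1.0):
        """Produce vertices P on a sphere CS together with texture coordinates that
        sample an equidistant cylindrical (all celestial sphere) image."""
        vertices = []  # (x, y, z, u, v)
        for i in range(stacks + 1):
            phi = math.pi * i / stacks - math.pi / 2        # latitude: -90 deg .. +90 deg
            for j in range(slices + 1):
                theta = 2.0 * math.pi * j / slices          # longitude: 0 .. 360 deg
                x = radius * math.cos(phi) * math.sin(theta)
                y = radius * math.sin(phi)
                z = radius * math.cos(phi) * math.cos(theta)
                u = j / slices                              # horizontal position in the image
                v = i / stacks                              # vertical position in the image
                vertices.append((x, y, z, u, v))
        # Each quad of neighbouring vertices is split into two triangles.
        triangles = []
        for i in range(stacks):
            for j in range(slices):
                a = i * (slices + 1) + j
                b = a + slices + 1
                triangles.append((a, b, a + 1))
                triangles.append((a + 1, b, b + 1))
        return vertices, triangles

    vertices, triangles = sphere_mesh()
    print(len(vertices), "vertices,", len(triangles), "triangles")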
- the smartphone 2 executes a process for causing a user to input an operation for starting an output of an image.
- the smartphone 2 , for example, reduces and outputs an all celestial sphere panoramic image produced at step S 0703 , that is, displays a so-called "thumbnail image".
- the smartphone 2 outputs a list of thumbnail images, for example, to cause a user to select an image to be output.
- the smartphone 2 executes, for example, a process for inputting an operation for causing a user to select one image from a list of thumbnail images.
- At step S 0705 , the smartphone 2 executes a process for producing an initial image based on an all celestial sphere panoramic image selected at step S 0704 .
- FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D are diagrams for illustrating one example of an initial image according to one embodiment of the present invention.
- FIG. 10A is a diagram that illustrates a three-dimensional coordinate system for illustrating one example of an initial image according to one embodiment of the present invention.
- the smartphone 2 places a virtual camera 3 at a position of an origin and produces each kind of image at a viewpoint of the virtual camera 3 .
- an all celestial sphere panoramic image is represented by, for example, a sphere CS.
- the virtual camera 3 corresponds to a viewpoint of a user who views an all celestial sphere panoramic image that is represented as a sphere CS from a position where such a virtual camera 3 is placed.
- FIG. 10B is a diagram for illustrating one example of a predetermined area for a virtual camera according to one embodiment of the present invention.
- FIG. 10B is a case where FIG. 10A is represented by three-plane figures.
- FIG. 10B is a case where the virtual camera 3 is placed at an origin of FIG. 10A .
- FIG. 10C is a projection view of one example of a predetermined area for a virtual camera according to one embodiment of the present invention.
- a predetermined area T is an area where a view angle of the virtual camera 3 is projected onto a sphere CS.
- the smartphone 2 produces an image based on a predetermined area T.
- FIG. 10D is a diagram for illustrating one example of information for determining a predetermined area for a virtual camera according to one embodiment of the present invention.
- a predetermined area T is determined by, for example, predetermined area information (x, y, ⁇ ).
- a view angle ⁇ is an angle that indicates an angle of the virtual camera 3 as illustrated in FIG. 10D .
- coordinates of a center point CP of such a predetermined area T are represented by (x,y) in predetermined area information.
- a distance from the virtual camera 3 to a center point CP is represented by Formula (1) described below:
- An initial image is an image provided by determining a predetermined area T based on a preliminarily set initial setting and being produced based on such a determined predetermined area T.
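- A small sketch of holding the predetermined area information (x, y, α) and deriving the distance from the virtual camera 3 to the center point CP is shown below. Formula (1) itself is not reproduced in this text; the pinhole-style form f = tan(α/2) used here, as well as the initial values, are assumptions for illustration only.

    import math

    def predetermined_area(x=0.0, y=0.0, alpha_deg=60.0):
        """Hold predetermined area information (x, y, alpha) and derive the distance
        from the virtual camera 3 to the center point CP.  The distance formula
        f = tan(alpha / 2) is an assumed stand-in for Formula (1)."""
        alpha = math.radians(alpha_deg)
        f = math.tan(alpha / 2.0)
        return {"center_cp": (x, y), "view_angle_deg": alpha_deg, "distance_to_cp": f}

    # Initial setting used to produce the initial image (values are illustration only).
    print(predetermined_area())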
- the smartphone 2 causes a user to execute an operation for switching to an image editing mode.
- the smartphone 2 outputs, for example, an initial image.
- At step S 0707 , the smartphone 2 executes a process for outputting an output image for editing an image.
- FIG. 11 is a diagram that illustrates one example of an output image at an initial state for editing an image according to one embodiment of the present invention.
- An output image is, for example, an output image 21 at an initial state.
- An output image has an editing image 31 at an initial state and a changing image 41 at an initial state.
- An output image displays a button for a Graphical User Interface (GUI) for accepting an operation of a user.
- a GUI is, for example, a blur editing button 51 , a cancellation editing button 52 , or the like.
- an output image may have another GUI.
- An editing image 31 at an initial state is, for example, an initial image produced at step S 0705 .
- a changing image 41 at an initial state is, for example, an image provided by reducing an all celestial sphere panoramic image produced at step S 0703 .
- a user edits an image in an image editing mode, and hence, applies an operation to an editing image or a changing image that is displayed in an output image.
- At step S 0708 , the smartphone 2 executes a process for causing a user to input an operation for editing an image.
- the smartphone 2 acquires coordinates where a user inputs an operation for the input/output device 2 H 3 .
- the smartphone 2 executes a process for determining whether an operation is executed for an area of the editing image 31 at an initial state in FIG. 11 or an operation is executed for an area of the changing image 41 at an initial state in FIG. 11 , based on acquired coordinates.
- Image editing is editing that is executed based on an operation of a user.
- Editing of an image is editing for changing an area to be output in the image based on a changing image, or editing executed for a predetermined area based on an editing image.
- Editing for changing an area to be output is executed in a case where an operation is applied to an area of a changing image at step S 0709 .
- Editing to be executed for a predetermined area based on an editing image is executed in a case where an operation is applied to an area of an editing image at step S 0709 .
- In a case where a user operates a changing image (an area of a changing image is determined at step S 0709 ), the smartphone 2 goes to step S 0710 . In a case where a user operates an editing image (an area of an editing image is determined at step S 0709 ), the smartphone 2 goes to step S 0712 .
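- The determination at step S 0709 of which area a user operates can be sketched as a simple hit test on the acquired coordinates. The rectangular layout and the concrete numbers below are assumptions for illustration only; they are not specified by this description.

    def classify_touch(x, y, editing_area, changing_area):
        """Decide which part of the output image a touch at (x, y) belongs to.
        Each area is given as (left, top, width, height) in screen coordinates."""
        def inside(area):
            left, top, width, height = area
            return left <= x < left + width and top <= y < top + height

        if inside(changing_area):
            return "changing_image"   # go to the step that edits the area to be output
        if inside(editing_area):
            return "editing_image"    # go to the step that edits the predetermined area
        return "other"

    # Example layout: editing image on the upper part of the screen, changing image below it.
    print(classify_touch(100, 900, editing_area=(0, 0, 720, 800), changing_area=(0, 800, 720, 280)))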
- FIG. 12A , FIG. 12B , and FIG. 12C are diagrams for illustrating one example of editing of an area to be output according to one embodiment of the present invention.
- FIG. 12A is a diagram that illustrates one example of an output image after editing an area to be output according to one embodiment of the present invention.
- An output image is, for example, an output image 22 after editing an area to be output.
- the output image 22 after editing an area to be output has an editing image 32 after editing an area to be output and a changing image 42 after editing an area to be output.
- the editing image 32 after editing an area to be output is an image produced by changing a predetermined area T as illustrated in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D in the editing image 31 at an initial state in FIG. 11 .
- the changing image 42 after editing an area to be output is an image produced by changing a predetermined area T illustrated in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D in the changing image 41 at an initial state in FIG. 11 .
- FIG. 12B is a diagram that illustrates one example of a predetermined area after editing an area to be output according to one embodiment of the present invention.
- the output image 22 after editing an area to be output is provided at, for example, a viewpoint of a case where the virtual camera 3 at a state of FIG. 10B is pan-rotated as illustrated in FIG. 12B .
- FIG. 12C is a diagram that illustrates one example of an operation in a case of editing of an area to be output according to one embodiment of the present invention.
- Editing of an area to be output is executed in such a manner that a user operates a screen area where a changing image is output.
- An operation to be input at step S 0708 is, for example, an operation for changing an area to be output with respect to left and right directions of an image or the like.
- an operation that is input by a user is such that a screen where the changing image 41 at an initial state in FIG. 11 is output is traced with a finger in left and right directions of such a screen, as illustrated in FIG. 12C , that is, a so-called "swipe operation", or the like.
- an input amount on a swipe operation is provided as (dx, dy).
- a relation between a polar coordinate system (φ, θ) of an all celestial sphere in FIG. 8A , FIG. 8B , FIG. 8C , and FIG. 8D and an input amount (dx, dy) is represented by Formula (2) described below:
- k is a predetermined constant for executing adjustment.
- An output image is changed based on an input amount input for a swipe operation, and hence, it is possible for a user to operate an image with a feeling that a sphere such as a terrestrial globe is rotated.
- the changing image 42 after editing an area to be output is produced by executing perspective projection transformation of coordinates (Px, Py, Pz) of a vertex P in three-dimensional space based on (φ, θ) calculated in accordance with Formula (2).
- a polar coordinate system (φ, θ) of an all celestial sphere is represented by Formula (3) described below:
- a polar coordinate system (φ, θ) of an all celestial sphere is calculated based on a total value of input amounts for respective swipe operations. Even in a case where a plurality of swipe operations are executed or the like, calculation of a polar coordinate system (φ, θ) of an all celestial sphere is executed, and thereby, it is possible to keep constant operability.
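- A sketch of turning swipe input amounts into the polar coordinates of the all celestial sphere is shown below. Formula (2) is not reproduced in this text, so the simple linear relation used here (the totals of dx and dy multiplied by the constant k) and the mapping of dx and dy to θ and φ are assumed forms; only the use of totals, which keeps operability constant over several swipe operations, follows the description above.

    class OutputAreaRotation:
        """Accumulate swipe input amounts (dx, dy) and convert the totals into the
        polar coordinates (phi, theta) of the all celestial sphere.  The linear
        relation is an assumed form of Formula (2); k is the adjustment constant."""

        def __init__(self, k=0.2):
            self.k = k
            self.total_dx = 0.0
            self.total_dy = 0.0

        def on_swipe(self, dx, dy):
            # Totals over all swipe operations are used so that operability stays constant.
            self.total_dx += dx
            self.total_dy += dy
            theta = self.k * self.total_dx   # horizontal direction
            phi = self.k * self.total_dy     # vertical direction
            return phi, theta

    rotation = OutputAreaRotation()
    print(rotation.on_swipe(30, 0))    # pan to the side
    print(rotation.on_swipe(0, -15))   # tilt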
- editing of an area to be output is not limited to pan-rotation.
- tilt-rotation of the virtual camera 3 in upper and lower directions of an image may be realized.
- An operation that is input at step S 0708 is, for example, an operation for enlarging or reducing an area to be output or the like.
- FIG. 13A and FIG. 13B are diagrams for illustrating one example of enlargement or reduction of an area to be output according to one embodiment of the present invention.
- an operation that is input by a user is such that two fingers are spread on a screen where the changing image 41 at an initial state in FIG. 11 is output, as illustrated in FIG. 13A , that is, a so-called “pinch-out operation”, or the like.
- an operation that is input by a user is such that two fingers are moved closer to each other on a screen where the changing image 41 at an initial state in FIG. 11 is output, as illustrated in FIG. 13B , that is, a so-called “pinch-in operation”, or the like.
- a pinch-out or pinch-in operation is sufficient as long as a position where a finger of a user first contacts is provided in an area with a changing image displayed thereon, and may be an operation that subsequently uses an area with an editing image displayed thereon. Furthermore, an operation may be executed by a so-called “stylus pen” that is a tool for operating a touch panel or the like.
- the smartphone 2 executes a so-called “zoom process”.
- a zoom process is a process for producing an image with a predetermined area enlarged or reduced based on an operation that is input by a user.
- the smartphone 2 acquires an amount of change dz based on an operation that is input by a user.
- a zoom process is a process for executing calculation in accordance with Formula (4) described below:
- α indicated in Formula (4) described above is a view angle α of the virtual camera 3 as illustrated in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D .
- m indicated in Formula (4) is a coefficient for adjusting an amount of zoom.
- α0 indicated in Formula (4) is a view angle α at an initial state, that is, a view angle α in a case where an initial image is produced at step S 0705 .
- the smartphone 2 determines a range of a predetermined area T in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D by using a view angle α calculated in accordance with Formula (4) for a projection matrix.
- a view angle α is calculated based on a total value of amounts of change due to operations as illustrated in FIG. 13A and FIG. 13B . Even in a case where a plurality of operations as illustrated in FIG. 13A and FIG. 13B are executed or the like, calculation of a view angle α is executed, and thereby, it is possible to keep constant operability.
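- A sketch of the zoom process described around Formula (4) is shown below. The formula itself is not reproduced in this text, so the concrete form alpha = alpha0 + m * (total of dz), the sign convention of dz, and the clamp range are assumptions; what follows the description above is only that the view angle α is derived from the initial view angle α0, the coefficient m, and the total of the change amounts dz.

    class ZoomState:
        """Accumulate pinch change amounts dz and derive a view angle alpha of the
        virtual camera 3.  alpha = alpha0 + m * total_dz is an assumed reading of
        Formula (4); alpha0 is the initial view angle and m the zoom coefficient."""

        def __init__(self, alpha0_deg=60.0, m=0.1, min_deg=10.0, max_deg=120.0):
            self.alpha0 = alpha0_deg
            self.m = m
            self.min_deg = min_deg    # clamp values are illustration only
            self.max_deg = max_deg
            self.total_dz = 0.0

        def on_pinch(self, dz):
            # The total of change amounts keeps operability constant over several operations.
            self.total_dz += dz
            alpha = self.alpha0 + self.m * self.total_dz
            return max(self.min_deg, min(self.max_deg, alpha))

    zoom = ZoomState()
    print(zoom.on_pinch(100.0))    # widens the view angle under this sign convention
    print(zoom.on_pinch(-250.0))   # narrows it again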
- a zoom process is not limited to a process in accordance with Formula (4) or Formula (5).
- a zoom process may be realized by combining a view angle ⁇ of the virtual camera 3 and a change in a position of a viewpoint.
- FIG. 14 is a diagram for illustrating one example of another zoom process according to one embodiment of the present invention.
- FIG. 14 is a model diagram for illustrating another zoom process.
- a sphere CS in FIG. 14 is similar to a sphere CS in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D .
- a radius of a sphere CS is described as “1”.
- An origin in FIG. 14 is provided at an initial position of the virtual camera 3 .
- a position of the virtual camera 3 is changed on an optical axis, that is, a z-axis in FIG. 10A . It is possible to represent an amount of movement d of the virtual camera 3 by a distance from an origin. For example, in a case where the virtual camera 3 is positioned at an origin, that is, a case of an initial state, an amount of movement d is "0".
- a range of a predetermined area T in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D is represented by an angle of view ω based on an amount of movement d and a view angle α of the virtual camera 3 .
- an angle of view ω is identical to a view angle α.
- an angle of view ω and a view angle α exhibit different ranges.
- Another zoom process is a process for changing an angle of view ω.
- FIG. 15 is a table for illustrating one example of another zoom process according to one embodiment of the present invention.
- Illustrative table 4 illustrates an example of a case where an angle of view ω is in a range of 60° to 300°.
- the smartphone 2 determines which of a view angle α and an amount of movement d of the virtual camera 3 is preferentially changed, based on a zoom specification value ZP.
- RANGE is a range that is determined based on a zoom specification value ZP.
- “OUTPUT MAGNIFICATION” is an output magnification of an image calculated based on an image parameter determined by another zoom process.
- ZOOM SPECIFICATION VALUE ZP is a value that corresponds to an angle of view to be output. Another zoom process changes a process for determining an amount of movement d and a view angle ⁇ based on a zoom specification value ZP. For a process to be executed in another zoom process, one of four methods is determined based on a zoom specification value ZP as illustrated in illustrative table 4 . A range of a zoom specification value ZP is divided into four ranges that are a range of A-B, a range of B-C, a range of C-D, and a range of D-E.
- "ANGLE OF VIEW ω" is an angle of view ω that corresponds to an image parameter determined by another zoom process.
- "CHANGING PARAMETER" is a description that illustrates a parameter that is changed by each of four methods based on a zoom specification value ZP. "REMARKS" are remarks for "CHANGING PARAMETER".
- "viewWH" in illustrative table 4 is a value that represents a width or a height of an output area. In a case where an output area is laterally long, "viewWH" is a value of a width. In a case where an output area is longitudinally long, "viewWH" is a value of a height. That is, "viewWH" is a value that represents a size of an output area in its longer direction.
- "imgWH" in illustrative table 4 is a value that represents a width or a height of an output image. In a case where an output area is laterally long, "imgWH" is a value of a width of an output image. In a case where an output area is longitudinally long, "imgWH" is a value of a height of an output image. That is, "imgWH" is a value that represents a size of an output image in its longer direction.
- imageDeg in illustrative table 4 is a value that represents an angle of a display range of an output image. In a case where a width of an output image is represented, “imageDeg” is 360°. In a case where a height of an output image is represented, “imageDeg” is 180°.
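- The selection among the four ranges of illustrative table 4 can be sketched as below. The per-range formulas of the table are not reproduced in this text, so the numeric breakpoints, the dmax1 and dmax2 values, and the linear interpolation inside each range are assumptions; what the sketch preserves is the described behaviour: in "A-B", "C-D", and "D-E" the amount of movement d is changed while the view angle α is fixed, and in "B-C" the virtual camera 3 stays on the sphere CS while α is widened (α = ω/2, with ω equal to the zoom specification value ZP).

    def another_zoom(zp, breakpoints=(60.0, 120.0, 240.0, 270.0, 300.0),
                     dmax1=2.0, dmax2=3.0):
        """Return (amount of movement d, view angle alpha in degrees) for a zoom
        specification value ZP.  Breakpoints A, B, C, D, E, dmax1/dmax2, and the
        interpolation are illustration-only assumptions."""
        a, b, c, d_, e = breakpoints

        def lerp(t0, t1, v0, v1, t):
            # Linear interpolation of v between v0 and v1 as t runs from t0 to t1.
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

        if zp <= b:          # A-B: view angle fixed, camera moves from the origin to the sphere
            return lerp(a, b, 0.0, 1.0, max(zp, a)), 60.0
        if zp <= c:          # B-C: camera fixed on the sphere CS, alpha = ZP / 2
            return 1.0, zp / 2.0
        if zp <= d_:         # C-D: view angle fixed, camera moves out to dmax1
            return lerp(c, d_, 1.0, dmax1, zp), c / 2.0
        # D-E: view angle fixed, camera moves out to the limit display distance dmax2
        return lerp(d_, e, dmax1, dmax2, min(zp, e)), c / 2.0

    for zp in (60, 120, 180, 240, 270, 300):
        print(zp, another_zoom(zp))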
- FIG. 16A , FIG. 16B , FIG. 16C , FIG. 16D , and FIG. 16E are diagrams for illustrating one example of a “range” of another zoom process according to one embodiment of the present invention.
- A case of a so-called "zoom-out" in FIG. 16A , FIG. 16B , FIG. 16C , FIG. 16D , and FIG. 16E will be described as an example below.
- a left figure in each figure of FIG. 16A , FIG. 16B , FIG. 16C , FIG. 16D , and FIG. 16E illustrates one example of an image to be output.
- a right figure in each figure of FIG. 16A , FIG. 16B , FIG. 16C , FIG. 16D , and FIG. 16E is a diagram that illustrates one example of a state of the virtual camera 3 at a time of an output in a model diagram illustrated in FIG. 14 .
- FIG. 16A is one example of an output in a case where a zoom specification value ZP is input in such a manner that a “RANGE” in illustrative table 4 in FIG. 15 is “A-B”.
- an amount of movement d of the virtual camera 3 is changed on a condition that a view angle α is fixed as illustrated in FIG. 16A .
- an angle of view ω is increased.
- an amount of movement d of the virtual camera 3 in a case of “A-B” is from 0 to a radius of a sphere CS. That is, a radius of a sphere CS is “1” in a case of FIG. 16A , FIG. 16B , FIG. 16C , FIG. 16D , and FIG. 16E , and hence, an amount of movement d of the virtual camera 3 is a value within a range of 0-1.
- An amount of movement d of the virtual camera 3 is a value that corresponds to a zoom specification value ZP.
- FIG. 16B is one example of an output in a case where a zoom specification value ZP is input in such a manner that “RANGE” in illustrative table 4 in FIG. 15 is “B-C”.
- “B-C” is a case where a zoom specification value ZP is a value greater than that of “A-B”.
- an amount of movement d of the virtual camera 3 is fixed at a value for positioning the virtual camera 3 at a periphery of a sphere CS. That is, as illustrated in FIG. 16B , an amount of movement d of the virtual camera 3 is fixed at “1” that is a radius of a sphere CS.
- a view angle α is changed on a condition that an amount of movement d of the virtual camera 3 is fixed.
- In a case where a view angle α is increased on a condition that an amount of movement d of the virtual camera 3 is fixed, an angle of view ω is increased from FIG. 16A to FIG. 16B .
- an amount of movement d of the virtual camera 3 is fixed and a view angle α is increased, so that it is possible to realize a zoom-out process.
- a view angle α is calculated as ω/2.
- a range of a view angle α is from 60°, which is a value fixed in a case of "A-B", to 120°.
- an angle of view ω is identical to a zoom specification value ZP.
- a value of an angle of view ω is increased.
- FIG. 16C is one example of an output in a case where a zoom specification value ZP is input in such a manner that “RANGE” in illustrative table 4 in FIG. 15 is “C-D”.
- “C-D” is a case where a zoom specification value ZP is a value greater than that of “B-C”.
- an amount of movement d of the virtual camera 3 is changed on a condition that a view angle ⁇ is fixed as illustrated in FIG. 16C .
- As an amount of movement d of the virtual camera 3 is increased on a condition that a view angle is fixed, an angle of view is increased.
- An amount of movement d of the virtual camera 3 is calculated in accordance with a formula based on a zoom specification value ZP illustrated in illustrative table 4 in FIG. 15 .
- an amount of movement d of the virtual camera 3 is changed to a maximum display distance dmax 1 .
- a maximum display distance dmax 1 is a distance where a sphere CS is displayed so as to be maximum in an output area of the smartphone 2 .
- An output area is, for example, a size of a screen where the smartphone 2 outputs an image or the like, or the like.
- A case where an amount of movement d of the virtual camera 3 is a maximum display distance dmax 1 is, for example, the case illustrated in FIG. 16D .
- a maximum display distance dmax 1 is calculated in accordance with Formula (6) described below:
- viewW in Formula (6) described above is a value that represents a width of an output area of the smartphone 2 .
- viewH in Formula (6) described above is a value that represents a height of an output area of the smartphone 2 . The same applies below.
- a maximum display distance dmax 1 is calculated based on the values of “viewW” and “viewH” that represent an output area of the smartphone 2 .
- FIG. 16D is one example of an output in a case where a zoom specification value ZP is input in such a manner that “RANGE” in illustrative table 4 in FIG. 15 is “D-E”.
- “D-E” is a case where a zoom specification value ZP is a value greater than that of “C-D”.
- an amount of movement d of the virtual camera 3 is changed on a condition that a view angle ⁇ is fixed as illustrated in FIG. 16D .
- An amount of movement d of the virtual camera 3 is changed to a limit display distance dmax 2 .
- a limit display distance dmax 2 is a distance where a sphere CS is displayed so as to be inscribed in an output area of the smartphone 2 .
- a limit display distance dmax 2 is calculated in accordance with Formula (7) described below:
- A case where an amount of movement d of the virtual camera 3 is a limit display distance dmax 2 is, for example, the case illustrated in FIG. 16E .
- a limit display distance dmax 2 is calculated based on the values of “viewW” and “viewH” that represent an output area of the smartphone 2 .
- a limit display distance dmax 2 represents a maximum range that is able to be output by the smartphone 2 , that is, a limit value of an amount of movement d of the virtual camera 3 .
- An embodiment may be limited in such a manner that a zoom specification value ZP is included in a range illustrated in illustrative table 4 in FIG. 15 , that is, a value of an amount of movement d of the virtual camera 3 is less than or equal to a limit display distance dmax 2 . Due to such limitation, the smartphone 2 is provided on a condition that an output image is fitted to a screen that is an output area or a condition that an image with a predetermined output magnification is output to a user, so that it is possible to realize zoom-out.
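- As a non-limiting illustration, the following Python sketch maps a zoom specification value ZP to an amount of movement d and a view angle of the virtual camera 3 following the per-range behaviour described above: only one of the two parameters is changed in each range. The range boundaries, the linear interpolation within a range, and carrying the 120° view angle over into “C-D” and “D-E” are assumptions made for illustration; they are not the exact formulas of illustrative table 4.

```python
def zoom_state(zp, zp_a, zp_b, zp_c, zp_d, zp_e, dmax1, dmax2):
    """Return (d, view_angle_deg) for a zoom specification value ZP."""
    def lerp(t, lo, hi):
        t = max(0.0, min(1.0, t))
        return lo + (hi - lo) * t

    if zp <= zp_b:   # "A-B": view angle fixed at 60 deg, d moves from 0 to the sphere radius 1
        return lerp((zp - zp_a) / (zp_b - zp_a), 0.0, 1.0), 60.0
    if zp <= zp_c:   # "B-C": d fixed at 1 (periphery of sphere CS), view angle widens 60 -> 120 deg
        return 1.0, lerp((zp - zp_b) / (zp_c - zp_b), 60.0, 120.0)
    if zp <= zp_d:   # "C-D": view angle fixed, d moves from 1 to the maximum display distance dmax1
        return lerp((zp - zp_c) / (zp_d - zp_c), 1.0, dmax1), 120.0
    # "D-E": view angle fixed, d moves from dmax1 to the limit display distance dmax2
    return lerp((zp - zp_d) / (zp_e - zp_d), dmax1, dmax2), 120.0
```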
- an angle of view ⁇ is not identical to a zoom specification value ZP. Furthermore, as illustrated in illustrative table 4 in FIG. 15 and FIG. 16A , FIG. 16B , FIG. 16C , FIG. 16D , and FIG. 16E , an angle of view ⁇ is continuous in each range but such an angle of view ⁇ is not uniformly increased by zoom-out toward a wide-angle side. That is, in a case of “C-D” where an amount of movement d of the virtual camera 3 is changed, an angle of view ⁇ is increased with such an amount of movement d of the virtual camera 3 .
- On the other hand, in a case of “D-E”, an angle of view is decreased with such an amount of movement d of the virtual camera 3 .
- a decrease in an angle of view with an increase in an amount of movement d of the virtual camera 3 in “D-E” is caused by reflecting an outer area of a sphere CS .
- the smartphone 2 changes an amount of movement d of the virtual camera 3 , and thereby, it is possible to output an image with a less feeling of strangeness to a user and change an angle of view ω .
- an angle of view ⁇ is frequently increased.
- the smartphone 2 fixes a view angle ⁇ of the virtual camera 3 and increases an amount of movement d of the virtual camera 3 .
- the smartphone 2 fixes a view angle ⁇ of the virtual camera 3 , and thereby, it is possible to reduce an increase in such a view angle ⁇ of the virtual camera 3 .
- the smartphone 2 reduces an increase in a view angle ⁇ of the virtual camera 3 , and thereby, it is possible to output an image with less distortion to a user.
- the smartphone 2 increases an amount of movement d of the virtual camera 3 , that is, moves the virtual camera 3 to be distant, and thereby, it is possible to provide a user with an open-feeling of a wide angle display. Furthermore, movement for moving the virtual camera 3 to be distant is similar to movement at a time when a human being confirms a wide range, and hence, it is possible for the smartphone 2 to realize zoom-out with a less feeling of strangeness due to movement for moving the virtual camera to be distant.
- an angle of view ω is decreased with changing a zoom specification value ZP toward a wide-angle direction.
- the smartphone 2 decreases an angle of view ω , and thereby, it is possible to provide a user with a feeling of being distant from a sphere CS.
- the smartphone 2 provides a user with a feeling of being distant from a sphere CS, and thereby, it is possible to output an image with a less feeling of strangeness to a user.
- It is possible for the smartphone 2 to output an image with a less feeling of strangeness to a user, due to another zoom process illustrated in illustrative table 4 in FIG. 15 .
- an embodiment is not limited to a case where only an amount of movement d or a view angle ⁇ of the virtual camera 3 illustrated in illustrative table 4 in FIG. 15 is changed. It is sufficient for an embodiment to be a mode for preferentially changing an amount of movement d or a view angle ⁇ of the virtual camera 3 on a condition illustrated in illustrative table 4 in FIG. 15 , and a fixed value may be changed to a sufficiently small value, for example, for adjustment.
- an embodiment is not limited to zoom-out.
- An embodiment may realize, for example, zoom-in.
- a case where an area to be output is edited is not limited to a case where an operation is executed for a changing image.
- the smartphone 2 may edit an area to be output, for example, in a case where an operation is executed for an editing image.
- Editing to be executed for a predetermined area based on an editing image is blur editing that blurs a predetermined pixel.
- A case where a user executes blur editing for the output image 22 after editing of an area to be output in FIG. 12A , FIG. 12B , and FIG. 12C will be described as an example below.
- the smartphone 2 causes a user to input a so-called “tap operation” for an area where an editing image 32 for the output image 22 after editing of an area to be output in FIG. 12A , FIG. 12B , and FIG. 12C is displayed.
- the smartphone 2 executes a process for blurring a predetermined range centered at a point tapped by a user.
- FIG. 17A and FIG. 17B are diagrams for illustrating one example of editing to be executed for a predetermined area based on an editing image according to one embodiment of the present invention.
- FIG. 17A is a diagram for illustrating one example of blur editing according to one embodiment of the present invention.
- FIG. 17A is a diagram that illustrates an output image 23 after blur editing.
- the output image 23 after blur editing has an editing image 33 after blur editing and a changing image 43 after blur editing.
- the editing image 33 after blur editing is produced by applying blur editing to an output image after editing of an area to be output in FIG. 12A , FIG. 12B , and FIG. 12C .
- Blur editing is realized by, for example, a Gauss function, an average of peripheral pixels, a low-pass filter, or the like. Blur editing is illustrated like, for example, a blur editing area 5 .
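- As a non-limiting illustration, the following Python sketch blurs a predetermined range centered at a point tapped by a user, using OpenCV's Gaussian blur in place of the “Gauss function” mentioned above; the radius and kernel size are illustrative values, not values fixed by this document.

```python
import cv2
import numpy as np

def blur_around(image, cx, cy, radius=40, ksize=31):
    """Blur a circular region of `image` centred at the tapped pixel (cx, cy)."""
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (cx, cy), radius, 255, thickness=-1)
    out = image.copy()
    out[mask == 255] = blurred[mask == 255]   # copy blurred pixels only inside the circle
    return out
```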
- the smartphone 2 executes calculation of a point (Px, Py, Pz) in a three-dimensional space from coordinates of a point tapped by a user.
- the smartphone 2 calculates (Px, Py, Pz) from two-dimensional coordinates through inverse transformation of perspective projection transformation that uses a view frustum.
- There is no information of depth in two-dimensional coordinates and hence, (Px, Py, Pz) are calculated by using a point on a sphere and simultaneous equations.
- a sign of Pz in a projection coordinate system is constant, and hence, it is possible for the smartphone 2 to solve such simultaneous equations.
- Coordinates of an all celestial sphere panoramic image correspond to (Px, Py, Pz), and hence, it is possible for the smartphone 2 to calculate coordinates on an all celestial sphere panoramic image from calculated (Px, Py, Pz). Therefore, a changing image 43 after blur editing is provided on a condition that blur editing is reflected as illustrated in FIG. 17A .
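- As a non-limiting illustration, the following Python sketch converts a point (Px, Py, Pz) on the unit sphere CS, obtained from a tap as described above, into pixel coordinates on an equirectangular all celestial sphere panoramic image; the axis convention (y up, z toward the viewer) is an assumption made for illustration.

```python
import math

def sphere_point_to_equirect(px, py, pz, img_w, img_h):
    """Map a point on the unit sphere to equirectangular pixel coordinates."""
    lon = math.atan2(px, pz)                   # longitude, -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, py)))   # latitude, -pi/2 .. pi/2
    u = (lon / (2.0 * math.pi) + 0.5) * img_w
    v = (0.5 - lat / math.pi) * img_h
    return int(u) % img_w, min(int(v), img_h - 1)
```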
- FIG. 17B is a diagram for illustrating one example of editing that cancels blurring according to one embodiment of the present invention.
- Editing that is applied to a predetermined area based on an editing image is editing that cancels blur editing for a blur editing area 5 blurred by such blur editing.
- the smartphone 2 outputs an output image 24 for cancellation editing that displays a filling area 6 on the blur editing area 5 with applied blur editing.
- the output image 24 for cancellation editing is an image that displays the filling area 6 on the blur editing area 5 in the editing image 33 after blur editing in FIG. 17A .
- a user executes a tap operation for a displayed filling area 6 , that is, an area with applied blurring.
- the smartphone 2 executes a process for cancelling blur editing in a predetermined range centered at a point tapped by a user.
- the smartphone 2 returns a predetermined range centered at a point tapped by a user in the editing image 33 after blur editing to a state of the output image 22 after editing of an area to be output in FIG. 12A , FIG. 12B , and FIG. 12C .
- the smartphone 2 may change a range of editing applied to a predetermined area based on an editing image or the like in accordance with a magnification.
- the smartphone 2 calculates amounts of movement of coordinates to be output. That is, at step S 0710 , the smartphone 2 calculates a position of a predetermined area T in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D that corresponds to a swipe operation of a user based on, for example, Formula (2) described above.
- the smartphone 2 updates a position of a predetermined area T in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D to a position calculated at step S 0710 .
- the smartphone 2 calculates coordinates of a point that is an editing object. That is, at step S 0712 , the smartphone 2 calculates coordinates that correspond to a tap operation of a user and executes calculation for projection onto three-dimensional coordinates.
- the smartphone 2 calculates a predetermined area that is edited centered at coordinates calculated at step S 0712 and based on an editing image. That is, at step S 0713 , the smartphone 2 calculates a pixel that is a point specified by a tap operation of a user or a periphery of such a point and is an object for blur editing or the like.
- the smartphone 2 produces an editing image.
- the smartphone 2 produces an editing image based on a predetermined area T updated at step S 0711 .
- the smartphone 2 produces an editing image wherein a blurring process is reflected on a pixel calculated at step S 0713 .
- the smartphone 2 produces a changing image.
- the smartphone 2 produces a changing image based on a predetermined area T updated at step S 0711 .
- the smartphone 2 produces a changing image that indicates a location that is a blurring object at step S 0713 .
- the smartphone 2 repeats processes of step S 0708 through step S 0715 .
- FIG. 18 is a flowchart that illustrates one example of an entire process on a smartphone according to one embodiment of the present invention.
- At step S 1801 , the smartphone 2 executes a process for acquiring an image from the image taking device 1 in FIG. 1 or the like.
- a process at step S 1801 corresponds to a process at step S 0702 in FIG. 7 .
- At step S 1802 , the smartphone 2 executes a process for producing a panoramic image.
- a process at step S 1802 is executed based on an image acquired at step S 1801 .
- a process at step S 1802 corresponds to a process at step S 0703 in FIG. 7 .
- At step S 1803 , the smartphone 2 executes a process for causing a user to select an image to be output. A process for causing a user to select an image to be output is a process for outputting a thumbnail image or providing a UI for causing a user to execute an operation for a thumbnail image, or the like.
- At step S 1804 , the smartphone 2 executes a process for producing an initial image.
- a process at step S 1804 corresponds to a process at step S 0705 in FIG. 7 .
- the smartphone 2 produces and outputs an image selected by a user at step S 1803 as an initial image.
- At step S 1805 , the smartphone 2 executes determination as to whether or not switching to a mode for editing an image is executed.
- a process at step S 1805 executes determination based on whether or not an operation of a user at step S 0706 in FIG. 7 is provided.
- In a case where determination is provided at step S 1805 in such a manner that switching to a mode for editing an image is provided (YES at step S 1805 ), the smartphone 2 goes to step S 1806 .
- In a case where determination is provided at step S 1805 in such a manner that switching to a mode for editing an image is not provided (NO at step S 1805 ), the smartphone 2 returns to step S 1804 .
- a case where determination is provided at step S 1805 in such a manner that switching to a mode for editing an image is provided is a case where an input to start editing of an image is provided by a user.
- a case where determination is provided at step S 1805 in such a manner that switching to a mode for editing an image is not provided is a case where a user does not execute an operation. Therefore, in a case where a user does not execute an operation, the smartphone 2 continues to output an initial image and waits for an input of a user to start editing of an image.
- At step S 1806 , the smartphone 2 executes a process for outputting an output image for editing an image.
- a process at step S 1806 corresponds to a process at step S 0707 in FIG. 7 .
- the smartphone 2 outputs an output image and thereby accepts an operation of a user at step S 0708 in FIG. 7 .
- At step S 1807 , the smartphone 2 executes determination as to whether an operation of a user is executed for an editing image or a changing image.
- a process at step S 1807 corresponds to a process at step S 0709 in FIG. 7 .
- the smartphone 2 executes determination as to whether an operation of a user at step S 0708 in FIG. 7 is executed for an editing image or a changing image.
- In a case where determination is provided at step S 1807 in such a manner that an operation of a user is executed for a changing image (a changing image at step S 1807 ), the smartphone 2 goes to step S 1808 . In a case where determination is provided in such a manner that an operation of a user is executed for an editing image (an editing image at step S 1807 ), the smartphone 2 goes to step S 1810 .
- At step S 1808 , the smartphone 2 executes a process for calculating an amount of movement of a predetermined area due to an operation.
- a process at step S 1808 corresponds to a process at step S 0710 in FIG. 7 .
- the smartphone 2 calculates an amount of movement for moving a predetermined area based on a swipe operation that is executed by a user and changes such a predetermined area.
- At step S 1809 , the smartphone 2 executes a process for updating a predetermined area.
- a process at step S 1809 corresponds to a process at step S 0711 in FIG. 7 .
- the smartphone 2 moves a predetermined area T in FIG. 10A , FIG. 10B , FIG. 10C , and FIG. 10D to a position that corresponds to an amount of movement calculated at step S 1808 , and updates such a predetermined area T from a position of an initial image to a position that corresponds to a swipe operation of a user.
- At step S 1810 , the smartphone 2 executes a process for calculating, and three-dimensionally projecting, coordinates that are objects for an operation.
- a process at step S 1810 corresponds to a process at step S 0712 in FIG. 7 .
- the smartphone 2 calculates coordinates on an all celestial sphere image that corresponds to a point specified by a tap operation of a user.
- At step S 1811 , the smartphone 2 executes a process for calculating a pixel that is an object for blurring.
- the smartphone 2 has an editing state table that associates, with each pixel, flag data as to whether or not the pixel is an object for blurring.
- An editing state table represents whether or not each pixel is output in a blur state.
- the smartphone 2 refers to an editing state table, determines whether or not each pixel in an output image is output in a blur state, and outputs an image. That is, a process at step S 1811 is a process for updating an editing state table. In a case where an operation for either blurring as illustrated in FIG. 17A or cancellation as illustrated in FIG. 17B is provided in a tap operation of a user, the smartphone 2 updates an editing state table based on such an operation.
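- As a non-limiting illustration, the following Python sketch keeps an editing state table as per-pixel flag data and updates it for a blurring operation (FIG. 17A) or a cancellation operation (FIG. 17B); the class layout and names are illustrative, not structures defined by this document.

```python
import numpy as np

class EditingStateTable:
    """Per-pixel flags: True means the pixel is output in a blur state."""
    def __init__(self, height, width):
        self.flags = np.zeros((height, width), dtype=bool)

    def apply_tap(self, cy, cx, radius, blur=True):
        """Set (blurring) or clear (cancellation) flags in a circle around the tap."""
        h, w = self.flags.shape
        yy, xx = np.ogrid[:h, :w]
        inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        self.flags[inside] = blur
```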
- At step S 1812 , the smartphone 2 executes a process for producing an editing image.
- a process at step S 1812 corresponds to a process at step S 0714 in FIG. 7 .
- At step S 1813 , the smartphone 2 executes a process for producing a changing image.
- a process at step S 1813 corresponds to a process at step S 0715 in FIG. 7 .
- Due to processes at step S 1812 and step S 1813 , the smartphone 2 produces an output image and executes an output to a user.
- the smartphone 2 returns to step S 1807 and repeats previously illustrated processes.
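- As a non-limiting illustration, the following Python sketch shows one pass through the repeated part of FIG. 18 (step S1807 through step S1811), reusing the editing state table sketch above; the Operation fields and the handling of coordinates are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    target: str           # "changing_image" or "editing_image" (step S1807)
    x: float = 0.0        # swipe delta or tap position, depending on the target
    y: float = 0.0
    is_blur: bool = True  # blurring (FIG. 17A) or cancellation (FIG. 17B)

def handle_operation(op, area, state_table, radius=40):
    """Dispatch one user operation and update either the predetermined area T
    or the editing state table; producing the images (S1812/S1813) follows."""
    if op.target == "changing_image":
        area["x"] += op.x     # S1808/S1809: move and update the predetermined area T
        area["y"] += op.y
    else:
        # S1810/S1811: op.x and op.y are assumed to already be panoramic-image pixel
        # coordinates obtained from the three-dimensional projection of the tap.
        state_table.apply_tap(int(op.y), int(op.x), radius, blur=op.is_blur)
```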
- the smartphone 2 executes, for example, a blurring process as illustrated in FIG. 17A and an output.
- An image that is output to a user by the smartphone 2 is output at 30 frames per second or more in such a manner that such a user feels smooth reproduction of an animation. It is desirable for the smartphone 2 to execute an output at 60 frames per second or more in such a manner that a user feels particularly smooth reproduction.
- a frame rate of an output may be such that 60 frames per second is changed to, for example, 59.94 frames per second.
- processes at step S 1812 and step S 1813 are not limited to processes for causing the smartphone 2 to execute a blurring process and an output.
- For example, the smartphone 2 has an image provided by preliminarily applying a blurring process to all of the pixels of an image to be output and an image provided by applying no blurring process.
- the smartphone 2 outputs each pixel by selecting, based on an editing state table, either a pixel of the image provided by executing a blurring process or a pixel of the image provided by executing no blurring process. It is possible for the smartphone 2 to reduce an amount of calculation for outputting an image by preliminarily executing a blurring process. That is, it is possible for the smartphone 2 to realize a high-speed image output such as 60 frames per second by executing such selection and a simultaneous output of each pixel.
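- As a non-limiting illustration, the following Python sketch selects, pixel by pixel, either a preliminarily blurred image or the unprocessed image according to the editing state table, which is the kind of selection that avoids blurring at output time; the array shapes (height × width × channels) are assumptions made for illustration.

```python
import numpy as np

def compose_output(original, pre_blurred, flags):
    """Pick the pre-blurred pixel where the editing state table flag is set,
    otherwise the unprocessed pixel, in one vectorised selection."""
    return np.where(flags[..., None], pre_blurred, original)
```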
- the smartphone 2 may store an output image.
- the smartphone 2 outputs a stored image. Due to storage, a process for selecting and producing each pixel of an image to be output is not required, and hence, it is possible for the smartphone 2 to reduce an amount of calculation. Therefore, the smartphone 2 stores an output image, and thereby, it is possible to realize a high-speed image output such as 60 frames per second.
- an output image is not limited to an image illustrated in FIG. 11 or the like.
- a shape, a position, a size, or a range of an editing image or a changing image may be changed.
- FIG. 19A and FIG. 19B are diagrams for illustrating one example of changing of an output such as a position, a direction, or the like, of a changing image according to one embodiment of the present invention.
- An information processing device that is one example of a device for displaying an output image is, for example, the smartphone 2 .
- the smartphone 2 will be described as an example below.
- FIG. 19A is a diagram that illustrates one example of changing of an attitude of the smartphone 2 according to one embodiment of the present invention.
- a changing image is output to a position 7 before changing as illustrated in FIG. 19A .
- a case of FIG. 19A will be described as an example below.
- an attitude of the smartphone 2 is changed by a user in a direction of rotation as illustrated in FIG. 19A .
- An attitude of the smartphone 2 is detected by the state sensor 2 H 4 in FIG. 6 .
- the smartphone 2 rotates and outputs an output image based on an attitude of the smartphone 2 that is a result of detection.
- the smartphone 2 may change a position or a direction of an area for outputting a changing image based on a result of detection.
- FIG. 19B is a diagram that illustrates one example that changes a position or a direction of an area where a changing image is displayed based on a result of detection, according to one embodiment of the present invention.
- the smartphone 2 changes a position of an area for outputting a changing image from a position illustrated as the position 7 before changing in FIG. 19A to a first changing position 71 or a second changing position 72 .
- a changing image may be output on a condition that a direction for an output is changed based on a result of detection as illustrated in FIG. 19B , that is, rotated from a state of FIG. 19A to that as illustrated in FIG. 19B .
- the smartphone 2 changes a position or a direction of an area for outputting a changing image based on a result of detection. That is, it is possible for the smartphone 2 to output an image at a position or in a direction for facilitating an operation of a user even if an attitude of the smartphone 2 is changed, in order to output such an image in accordance with such an attitude.
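- As a non-limiting illustration, the following Python sketch chooses a position for the area that outputs a changing image from the attitude detected by the state sensor 2H4, so that the changing image stays easy to operate after the smartphone 2 is rotated; the concrete offsets and sizes are illustrative values.

```python
def changing_image_position(is_portrait, view_w, view_h, size=(200, 150), margin=16):
    """Return the top-left corner of the area where the changing image is drawn."""
    w, h = size
    if is_portrait:
        # e.g. the position 7 before changing: bottom centre of a portrait screen
        return ((view_w - w) // 2, view_h - h - margin)
    # e.g. the first changing position 71: right edge of a landscape screen
    return (view_w - w - margin, (view_h - h) // 2)
```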
- the smartphone 2 may display such a changing image so as to bounce during such changing.
- FIG. 20 is a block diagram that illustrates one example of a functional configuration of an image taking system according to one embodiment of the present invention.
- the image taking system 10 has the image taking device 1 and the smartphone 2 .
- the image taking device 1 has a first image taking part 1 F 1 , a second image taking part 1 F 2 , and an all celestial sphere image production part 1 F 3 .
- the smartphone 2 has an image acquisition part 2 F 1 , a production part 2 F 2 , an input/output part 2 F 3 , a detection part 2 F 4 , a storage part 2 F 5 , and a control part 2 F 6 .
- the first image taking part 1 F 1 and the second image taking part 1 F 2 take and produce images that are materials of an all celestial sphere image.
- the first image taking part 1 F 1 is realized by, for example, the front side image taking element 1 H 1 in FIG. 2A , FIG. 2B , and FIG. 2C or the like.
- the second image taking part 1 F 2 is realized by, for example, the back side image taking element 1 H 2 in FIG. 2A , FIG. 2B , and FIG. 2C or the like.
- An image that is a material of an all celestial sphere image is, for example, a hemispherical image as illustrated in FIG. 4A or FIG. 4B .
- the all celestial sphere image production part 1 F 3 produces an image that is output to the smartphone 2 , such as an all celestial sphere image.
- the all celestial sphere image production part 1 F 3 is realized by, for example, the image processing unit 1 H 7 in FIG. 5 or the like.
- the all celestial sphere image production part 1 F 3 produces an all celestial sphere image from hemispherical images that are taken by the first image taking part 1 F 1 and the second image taking part 1 F 2 .
- the image acquisition part 2 F 1 acquires image data such as an all celestial sphere image from the image taking device 1 .
- the image acquisition part 2 F 1 is realized by, for example, the network I/F 2 H 6 in FIG. 6 or the like.
- the image acquisition part 2 F 1 executes a process for causing the smartphone 2 to acquire image data such as an all celestial sphere image.
- the production part 2 F 2 executes a process for producing each kind of image and each kind of calculation necessary for production of an image.
- the production part 2 F 2 has a changing image production part 2 F 21 and an editing image production part 2 F 22 .
- the production part 2 F 2 is realized by the CPU 2 H 5 in FIG. 6 or the like.
- the changing image production part 2 F 21 executes a process for executing production of a changing image.
- the changing image production part 2 F 21 acquires, for example, image data and an editing state table from the storage part 2 F 5 .
- the changing image production part 2 F 21 produces a changing image based on an acquired editing state table and image data.
- the editing image production part 2 F 22 executes a process for executing production of an editing image.
- the editing image production part 2 F 22 acquires, for example, image data and an editing state table from the storage part 2 F 5 .
- the editing image production part 2 F 22 produces an editing image based on an acquired editing state table and image data.
- the production part 2 F 2 calculates, and stores as an editing state table, coordinates associated with an operation in a case where a user executes a tap or swipe operation. Furthermore, an image produced by the production part 2 F 2 may be stored in the storage part 2 F 5 and taken according to a process.
- the production part 2 F 2 may produce each kind of image based on a result of detection that is acquired from the detection part 2 F 4 .
- the input/output part 2 F 3 executes a process for inputting an operation of a user.
- the input/output part 2 F 3 executes a process for outputting, to a user, an image produced by the production part 2 F 2 .
- the input/output part 2 F 3 is realized by, for example, the input/output device 2 H 3 in FIG. 6 or the like.
- the detection part 2 F 4 executes a process for detecting an attitude of the smartphone 2 .
- the detection part 2 F 4 is realized by, for example, the state sensor 2 H 4 in FIG. 6 or the like.
- the storage part 2 F 5 stores each kind of information acquired or produced by the smartphone 2 .
- the storage part 2 F 5 has, for example, an editing state table storage part 2 F 51 and an image storage part 2 F 52 .
- the storage part 2 F 5 is realized by, for example, the auxiliary storage device 2 H 1 or the main storage device 2 H 2 in FIG. 6 or the like.
- the editing state table storage part 2 F 51 stores data of a table that represents a pixel where a blurring process is executed.
- the image storage part 2 F 52 stores an all celestial sphere image acquired by the image acquisition part 2 F 1 , an output image produced by the production part 2 F 2 , and the like.
- the control part 2 F 6 controls each kind of a component that is provided in the smartphone 2 .
- the control part 2 F 6 controls each kind of component, and thereby, realizes each kind of process, a process for assisting each kind of process, and the like.
- the control part 2 F 6 is realized by, for example, the CPU 2 H 5 in FIG. 6 or the like.
- an entire process is not limited to a case as illustrated in FIG. 7 .
- a part or an entirety of each process may be processed by a device other than a device as illustrated in FIG. 7 .
- the smartphone 2 produces an editing image and a changing image based on an all celestial sphere image acquired from the image taking device 1 or the like.
- An editing image is an image for outputting a predetermined area that is determined by a predetermined area T and causes a user to execute an editing operation such as blurring or cancellation of blurring.
- a changing image is an image for causing a user to execute an operation for changing a position, a size, or a range of a predetermined area T, or the like.
- the smartphone 2 outputs an output image that has at least an editing image and a changing image.
- An output image has an editing image and a changing image, and thereby, it is possible for the smartphone 2 to cause a user to execute editing such as blurring and simultaneously change an area output in such an editing image by such a changing image. Therefore, in a case where a user executes a blurring operation for an all celestial sphere image or the like, it is possible for the smartphone 2 to output an image for facilitating an operation. Hence, the smartphone 2 outputs an output image that has an editing image and a changing image, and thereby, it is possible for a user to readily execute an operation of an image.
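- As a non-limiting illustration, the following Python sketch combines an editing image and a reduced changing image into one output image; placing the changing image in a lower corner is only one possible layout and is not the layout fixed by this document.

```python
import numpy as np

def compose_output_image(editing_image, changing_image, margin=16):
    """Overlay the (smaller) changing image onto the editing image."""
    out = editing_image.copy()
    h, w = changing_image.shape[:2]
    H, W = out.shape[:2]          # the changing image is assumed to fit inside
    out[H - h - margin:H - margin, W - w - margin:W - margin] = changing_image
    return out
```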
- the smartphone 2 may be realized by a computer-executable program described in a legacy programming language such as Assembler, C, C++, C#, or Java (“Java” is a registered trademark), an object-oriented programming language, or the like. It is possible for a program to be stored in and distributed by a recording medium such as a ROM or an Electrically Erasable Programmable ROM (EEPROM). It is possible for a program to be stored in and distributed by a recording medium such as an Erasable Programmable ROM (EPROM).
- a program can be stored in and distributed by a recording medium such as a flash memory, a flexible disk, a CD-ROM, a CD-RW, a DVD-ROM, a DVD-RAM, a DVD-RW, or the like. It is possible for a program to be stored in a device-readable recording medium such as a Blu-Ray disk (“Blu-Ray disk” is a registered trademark), SD (“SD” is a registered trademark) card, or an MO or distributed through a telecommunication line.
- an image in an embodiment is not limited to a still image.
- an image may be an animation.
- a part or an entirety of each process in an embodiment may be realized by, for example, a programmable device (PD) such as a field programmable gate array (FPGA).
- a part or an entirety of each process in an embodiment may be realized by an Application Specific Integrated Circuit (ASIC).
- At least one illustrative embodiment of the present invention may relate to an information processing method, an information processing device, and a program.
- At least one illustrative embodiment of the present invention may aim at facilitating execution of an image operation for a user.
- At least one illustrative embodiment of the present invention may provide an information processing method that causes a computer to process an image, characterized by causing the computer to execute an acquisition step for acquiring the image, a production step for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output step for outputting an output image that has at least the editing image and the changing image.
- Illustrative embodiment (1) is an information processing method for causing a computer to process an image, wherein the image processing method causes the computer to execute an acquisition step for acquiring the image, a production step for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output step for outputting an output image that has at least the editing image and the changing image.
- Illustrative embodiment (2) is the information processing method as described in illustrative embodiment (1), wherein an editing area input step for acquiring an editing area that is a target area of the editing by using the editing image and an editing step for editing the editing area are executed.
- Illustrative embodiment (3) is the image processing method as described in illustrative embodiment (2), wherein the editing step is a step for blurring the editing area.
- Illustrative embodiment (4) is the information processing method as described in illustrative embodiment (3), wherein an acquisition image that has just been acquired in the acquisition step and a blurred image produced by a blurring process are produced and the output image is output by selecting a pixel of the blurred image for the editing area and a pixel of the acquisition image for that other than the editing area.
- Illustrative embodiment (5) is the information processing method as described in any one of illustrative embodiments (2) to (4), wherein a specified area input step for acquiring a specifying area for specifying an area of a part or an entirety of an image output with the editing image and a cancellation step for canceling the editing process executed for the specifying area are executed.
- Illustrative embodiment (6) is the information processing method as described in any one of illustrative embodiments (1) to (5), wherein an operation input step for acquiring an operation for changing, enlarging, or reducing the predetermined area that is output with the editing image by using the changing image is executed.
- Illustrative embodiment (7) is the information processing method as described in illustrative embodiment (6), wherein a determination step for determining a view point position and a view angle is executed based on the operation and the determination changes one of the view point position and the view angle based on an area indicated by the operation.
- Illustrative embodiment (8) is the information processing method as described in any one of illustrative embodiments (1) to (7), wherein a detection step for detecting an attitude of a device that displays the output image and a changing step for changing a position or direction of the changing image based on a result of detection by the detection step are executed.
- Illustrative embodiment (9) is an information processing device that processes an image, wherein the image processing device has an acquisition means for acquiring the image, a production means for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output means for outputting an output image that has at least the editing image and the changing image.
- Illustrative embodiment (10) is a program for causing a computer to process an image, wherein the program causes the computer to execute an acquisition step for acquiring the image, a production step for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output step for outputting an output image that has at least the editing image and the changing image.
Abstract
Description
- This application is a continuation application filed under 35 U.S.C. 111(a) claiming the benefit under 35 U.S.C. 120 of U.S. patent application Ser. No. 15/671,338 filed on Aug. 8, 2017, which is a continuation application filed under 35 U.S.C. 111(a) claiming the benefit under 35 U.S.C. 120 of U.S. patent application Ser. No. 14/924,871 filed on Oct. 28, 2015, which is a continuation application filed under 35 U.S.C. 111(a) claiming the benefit under 35 U.S.C. 120 and 365(c) of PCT International Application No.PCT/JP2015/057609 filed on Mar. 10, 2015, which is based upon and claims priority to Japanese Patent Application No. 2014-054782 filed on Mar. 18, 2014, the entire contents of which are incorporated herein by reference.
- The present invention relates to at least one of an information processing method, an information processing device, and a program.
- A method for displaying a panoramic image has been known conventionally.
- A user interface (that will be referred to as a “UI” below) has been known for accepting an instruction from a user with respect to display of a panoramic image in a panoramic image display (see, for example, Japanese Patent Application Publication No. 2011-076249).
- However, a conventional UI assigns a function for scrolling an image to so-called “dragging” at a time of image display on a smartphone or the like, and hence, it may be difficult for a user to execute an image operation, for example, editing of an image or the like.
- According to one aspect of the present invention, there is provided an information processing method for causing a computer to process an image, wherein the image processing method causes the computer to execute an acquisition step of acquiring the image, a first production step of producing an editing image for editing at least a portion of the image and a first changing image for changing the editing image to be output, an editing step of editing the image on the produced editing image, a second production step of producing a second changing image based on an image edited at the editing step, and an output step of outputting an output image that has at least the editing image edited at the editing step and the second changing image produced at the second production step.
-
FIG. 1 is a diagram that illustrates one example of an entire configuration of an image taking system according to one embodiment of the present invention. -
FIG. 2A ,FIG. 2B , andFIG. 2C are diagrams that illustrate one example of an image taking device according to one embodiment of the present invention. -
FIG. 3 is a diagram that illustrates one example of image taking by an image taking device according to one embodiment of the present invention. -
FIG. 4A ,FIG. 4B , andFIG. 4C are diagrams that illustrate one example of an image taken by an image taking device according to one embodiment of the present invention. -
FIG. 5 is a block diagram that illustrates one example of a hardware configuration of an image taking device according to one embodiment of the present invention. -
FIG. 6 is a block diagram that illustrates one example of a hardware configuration of a smartphone according to one embodiment of the present invention. -
FIG. 7 is a sequence diagram that illustrates one example of an entire process of an image taking system according to one embodiment of the present invention. -
FIG. 8A ,FIG. 8B ,FIG. 8C , andFIG. 8D are diagrams that illustrate one example of an all celestial sphere image according to one embodiment of the present invention. -
FIG. 9 is a diagram that illustrates one example of an all celestial sphere panoramic image according to one embodiment of the present invention. -
FIG. 10A ,FIG. 10B ,FIG. 10C , andFIG. 10D are diagrams for illustrating one example of an initial image according to one embodiment of the present invention. -
FIG. 11 is a diagram that illustrates one example of an output image at an initial state for executing editing of an image according to one embodiment of the present invention. -
FIG. 12A ,FIG. 12B , andFIG. 12C are diagrams for illustrating one example of editing of an area to be output according to one embodiment of the present invention. -
FIG. 13A andFIG. 13B are diagrams for illustrating one example of enlargement or reduction of an area to be output according to one embodiment of the present invention. -
FIG. 14 is a diagram for illustrating one example of another zoom process according to one embodiment of the present invention. -
FIG. 15 is a table for illustrating one example of another zoom process according to one embodiment of the present invention. -
FIG. 16A ,FIG. 16B ,FIG. 16C ,FIG. 16D , andFIG. 16E are diagrams for illustrating one example of a “range” of another zoom process according to one embodiment of the present invention. -
FIG. 17A andFIG. 17B are diagrams for illustrating one example of editing that is executed for a predetermined area based on an editing image according to one embodiment of the present invention. -
FIG. 18 is a flowchart that illustrates one example of an entire process of a smartphone according to one embodiment of the present invention. -
FIG. 19A andFIG. 19B are diagrams for illustrating one example of changing of an output such as a position or direction of a changing image according to one embodiment of the present invention. -
FIG. 20 is a functional diagram for illustrating one example of a functional configuration of an image taking system according to one embodiment of the present invention. - An embodiment of the present invention will be described below.
-
FIG. 1 is a diagram that illustrates one example of an entire configuration of an image taking system according to one embodiment of the present invention. - An
image taking system 10 has an image taking device 1 and a smartphone 2. - The
image taking device 1 has a plurality of optical systems, and produces, and outputs to the smartphone 2, for example, a taken image of a wide range such as all directions around the image taking device 1 (that will be referred to as an "all celestial sphere image" below). Details of the image taking device 1 and an all celestial sphere image will be described below. An image that is processed by the image taking system 10 is, for example, an all celestial sphere image. A panoramic image is, for example, an all celestial sphere image. An example of an all celestial sphere image will be described below. - An information processing device is, for example, the
smartphone 2. The smartphone 2 will be described as an example below. The smartphone 2 is a device for causing a user to operate an all celestial sphere image acquired from the image taking device 1. The smartphone 2 is a device for causing a user to output an acquired all celestial sphere image. A detail of the smartphone 2 will be described below. - The
image taking device 1 and the smartphone 2 are subjected to wired or wireless connection. For example, the smartphone 2 downloads from the image taking device 1, and inputs to the smartphone 2, data such as an all celestial sphere image output from the image taking device 1. Here, connection may be executed through a network. - Here, an entire configuration is not limited to a configuration illustrated in
FIG. 1 . For example, the image taking device 1 and the smartphone 2 may be an integrated device. Furthermore, another computer other than the image taking device 1 and the smartphone 2 may be connected so that the image taking system 10 is composed of three or more devices. -
FIG. 2A ,FIG. 2B , andFIG. 2C are diagrams that illustrate one example of an image taking device according to one embodiment of the present invention. -
FIG. 2A ,FIG. 2B , andFIG. 2C are diagrams that illustrate one example of an appearance of theimage taking device 1.FIG. 2A is one example of an elevation view of theimage taking device 1.FIG. 2B is one example of a left side view of theimage taking device 1.FIG. 2C is one example of a plan view of theimage taking device 1. - The
image taking device 1 has a front side image taking element 1H1, a back side image taking element 1H2, and a switch 1H3. A hardware that is provided in an interior of theimage taking device 1 will be described below. - The
image taking device 1 produces an all celestial sphere image by using images taken by the front side image taking element 1H1 and the back side image taking element 1H2. - The switch 1H3 is a so-called “shutter button” and is an input device for causing a user to execute an instruction of image taking for the
image taking device 1. - The
image taking device 1 is held by hand of a user, for example, as illustrated inFIG. 2A , and the switch 1H3 is pushed to execute image taking. -
FIG. 3 is a diagram that illustrates one example of image taking by an image taking device according to one embodiment of the present invention. - As illustrated in
FIG. 3 , a user holds theimage taking device 1 by hand and pushes the switch 1H3 inFIG. 2A ,FIG. 2B , andFIG. 2C to execute image taking. As illustrated inFIG. 3 , it is possible for theimage taking device 1 to take an image in all directions around theimage taking device 1 by the front side image taking element 1H1 inFIG. 2A ,FIG. 2B , andFIG. 2C and the back side image taking element 1H2 inFIG. 2A ,FIG. 2B , andFIG. 2C . -
FIG. 4A ,FIG. 4B , andFIG. 4C are diagrams that illustrate one example of an image taken by an image taking device according to one embodiment of the present invention. -
FIG. 4A is one example of an image taken by the front side image taking element 1H1 inFIG. 2A ,FIG. 2B , andFIG. 2C .FIG. 4B is one example of an image taken by the back side image taking element 1H2 inFIG. 2A ,FIG. 2B , andFIG. 2C .FIG. 4C is one example of an image that is produced based on an image taken by the front side image taking element 1H1 inFIG. 2A ,FIG. 2B , andFIG. 2C and an image taken by the back side image taking element 1H2 inFIG. 2A ,FIG. 2B , andFIG. 2C . - An image taken by the front side image taking element 1H1 in
FIG. 2A ,FIG. 2B , andFIG. 2C is an image with an image taking range that is a wide range in a front direction of theimage taking device 1, for example, a range of 180° as an angle of view, as illustrated inFIG. 4A . An image taken by the front side image taking element 1H1 inFIG. 2A ,FIG. 2B , andFIG. 2C has a distortion aberration as illustrated inFIG. 4A , in a case where the front side image taking element 1H1 inFIG. 2A ,FIG. 2B , andFIG. 2C uses an optical system for taking an image with a wide range, for example, a so-called “fisheye lens”. An image inFIG. 4A taken by the front side image taking element 1H1 inFIG. 2A ,FIG. 2B , andFIG. 2C is a so-called “hemispherical image” that has a wide range in one side of theimage taking device 1 and a distortion aberration (that will be referred to as a “hemispherical image” below). - Here, it is desirable for an angle of view to be within a range greater than or equal to 180° and less than or equal to 200°. In particular, as a hemispherical image in
FIG. 4A and a hemispherical image inFIG. 4B that will be described below are synthesized in a case where an angle of view is greater than 180°, there is an overlapping image area, and hence, synthesis is facilitated. - An image taken by the back side image taking element 1H2 in
FIG. 2A ,FIG. 2B , andFIG. 2C is an image with an image taking range that is a wide range in a back direction of theimage taking device 1, for example, a range of 180° as an angle of view, as illustrated inFIG. 4B . - An image in
FIG. 4B taken by the back side image taking element 1H2 inFIG. 2A ,FIG. 2B , andFIG. 2C is a hemispherical image similar to that ofFIG. 4A . - The
image taking device 1 executes processes such as a distortion correction process and a synthesis process, and thereby produces an image illustrated inFIG. 4C from a front side hemispherical image inFIG. 4A and a back side hemispherical image inFIG. 4B .FIG. 4C is an image produced by, for example, Mercator's projection, equidistant cylindrical projection, or the like, namely, an all celestial sphere image. - Here, an all celestial sphere image is not limited to an image produced by the
image taking device 1. An all celestial sphere image may be, for example, an image taken by another camera or the like, or an image produced based on an image taken by another camera. It is desirable for an all celestial sphere image to be an image with a view angle in a wide range taken by a so-called “all direction camera”, a so-called “wide angle lens camera”, or the like. - Furthermore, an all celestial sphere image will be described as an example, and an image is not limited to such an all celestial sphere image. An image may be, for example, an image or the like taken by a compact camera, a single lens reflex camera, a smartphone, or the like. An image may be a panoramic image that extends horizontally or vertically, or the like.
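- As a non-limiting illustration, the following Python sketch produces an equirectangular all celestial sphere image from two 180° hemispherical (fisheye) images such as those in FIG. 4A and FIG. 4B. It uses a nearest-neighbour lookup under an ideal equidistant fisheye model and ignores the distortion correction, calibration, and blending of the overlapping area that an actual synthesis process requires; the geometric conventions are assumptions made for illustration.

```python
import numpy as np

def dual_fisheye_to_equirect(front, back, out_w=2048, out_h=1024):
    """Map each output pixel to a direction on the sphere, then to a pixel of
    the front or back fisheye image (assumed H x W x 3, 180-degree image circle)."""
    v, u = np.mgrid[0:out_h, 0:out_w]
    lon = (u / out_w - 0.5) * 2.0 * np.pi        # -pi .. pi
    lat = (0.5 - v / out_h) * np.pi              # pi/2 .. -pi/2
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)                # +z is the front optical axis

    use_front = z >= 0.0
    zf = np.where(use_front, z, -z)              # mirror the back hemisphere
    xf = np.where(use_front, x, -x)
    theta = np.arccos(np.clip(zf, -1.0, 1.0))    # angle from the optical axis
    rho = np.hypot(xf, y)
    rho = np.where(rho == 0.0, 1.0, rho)

    h, w = front.shape[:2]
    radius = min(h, w) / 2.0                     # radius of the fisheye image circle
    r = radius * theta / (np.pi / 2.0)           # equidistant projection r = f * theta
    px = np.clip((w / 2.0 + r * xf / rho).astype(int), 0, w - 1)
    py = np.clip((h / 2.0 - r * y / rho).astype(int), 0, h - 1)
    return np.where(use_front[..., None], front[py, px], back[py, px])
```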
-
FIG. 5 is a block diagram that illustrates one example of a hardware configuration of an image taking device according to one embodiment of the present invention. - The
image taking device 1 has an image taking unit 1H4, an image processing unit 1H7, an image control unit 1H8, a Central Processing Unit (CPU) 1H9, and a Read-Only Memory (ROM) 1H10. Furthermore, theimage taking device 1 has a Static Random Access Memory (SRAM) 1H11, a Dynamic Random Access Memory (DRAM) 1H12, and an operation interface (I/F) 1H13. Moreover, theimage taking device 1 has a network I/F 1H14, a wireless I/F 1H15, and an antenna 1H16. Each component of theimage taking device 1 is connected through a bus 1H17 and executes input or output of data or a signal. - The image taking unit 1H4 has the front side image taking element 1H1 and the back side image taking element 1H2. A lens 1H5 that corresponds to the front side image taking element 1H1 and a lens 1H6 that corresponds to the back side image taking element 1H2 are placed. The front side image taking element 1H1 and the back side image taking element 1H2 are so-called “camera units”. The front side image taking element 1H1 and the back side image taking element 1H2 have optical sensors such as a Complementary Metal Oxide semiconductor (CMOS) or a Charge Coupled Device (CCD). The front side image taking element 1H1 executes a process for converting light incident on the lens 1H5 to produce image data. The back side image taking element 1H2 executes a process for converting light incident on the lens 1H6 to produce image data. The image taking unit 1H4 outputs image data produced by the front side image taking element 1H1 and the back side image taking element 1H2 to the image processing unit 1H7. For example, image data are the front side hemispherical image in
FIG. 4A and the back side hemispherical image inFIG. 4B or the like. - Here, the front side image taking element 1H1 and the back side image taking element 1H2 may have an optical element other than a lens, such as a stop or a low-pass filter, in order to execute image taking with a high image quality. Furthermore, the front side image taking element 1H1 and the back side image taking element 1H2 may execute a process such as a so-called “defective pixel correction” or a so-called “hand movement correction” in order to execute image taking with a high image quality.
- The image processing unit 1H7 executes a process for producing an all celestial sphere image in
FIG. 4C from image data that are input from the image taking unit 1H4. A detail of a process for producing an all celestial sphere image will be described below. Here, a process that is executed by the image processing unit 1H7 may be such that a part or an entirety of a process is executed in parallel and redundantly by another computer. - The image taking control unit 1H8 is a control device that controls each component of the
image taking device 1. - The CPU 1H9 executes an operation or a control for each process that is executed by the
image taking device 1. For example, the CPU 1H9 executes each kind of program. Here, the CPU 1H9 may be composed of a plurality of CPUs or devices or a plurality of cores in order to attain speeding-up due to parallel processing. Furthermore, a process of the CPU 1H9 may be such that another hardware resource is provided inside or outside theimage taking device 1 and caused to execute a part or an entirety of a process for theimage taking device 1. - The ROM 1H10, the SRAM 1H11, and the DRAM 1H12 are examples of a storage device. The ROM 1H10 stores, for example, a program, data, or a parameter that is executed by the CPU 1H9. The SRAM 1H11 and the DRAM 1H12 store, for example, a program, data to be used in a program, data to be produced by a program, a parameter, or the like, in a case where the CPU 1H9 executes a program. Here, the
image taking device 1 may have an auxiliary storage device such as a hard disk. - The operation I/F 1H13 is an interface that executes a process for inputting an operation of a user to the
image taking device 1, such as the switch 1H3. The operation I/F 1H13 is an operation device such as a switch, a connector or cable for connecting an operation device, a circuit for processing a signal input from an operation device, a driver, a control device, or the like. Here, the operation I/F 1H13 may have an output device such as a display. Furthermore, the operation I/F 1H13 may be a so-called “touch panel” wherein an input device and an output device are integrated, or the like. Moreover, the operation I/F 1H13 may have an interface such as a Universal Serial Bus (USB), connect a storage medium such as Flash Memory (“Flash Memory” is a registered trademark), and input from and output to theimage taking device 1, data. - Here, the switch 1H3 may have an electric power source switch for executing an operation other than a shutter operation, a parameter input switch, or the like.
- The network I/F 1H14, the wireless I/F 1H15, and the antenna 1H16 are devices for connecting the
image taking device 1 with another computer through a wireless or wired network and a peripheral circuit or the like. For example, theimage taking device 1 is connected to a network through the network I/F 1H14 and transmits data to thesmartphone 2. Here, the network I/F 1H14, the wireless I/F 1H15, and the antenna 1H16 may be configured to be connected by using a connector such as a USB, a cable, or the like. - The bus 1H17 is used for an input or an output of data or the like between respective components of the
image taking device 1. The bus 1H17 is a so-called “internal bus”. The bus 1H17 is, for example, a Peripheral Component Interconnect Bus Express (PCI Express). - Here, the
image taking device 1 is not limited to a case of two image taking elements. For example, it may have three or more image taking elements. Moreover, theimage taking device 1 may change an image taking angle of one image taking element to take a plurality of partial images. Furthermore, theimage taking device 1 is not limited to an optical system that uses a fisheye lens. For example, a wide angle lens may be used. - Here, a process that is executed by the
image taking device 1 is not limited to a process that the image taking device 1 itself executes. A part or an entirety of a process that is executed by the image taking device 1 may be executed by the smartphone 2 or another computer connected through a network, while the image taking device 1 may transmit data or a parameter. -
FIG. 6 is a block diagram that illustrates one example of a hardware configuration of an information processing device that includes a smartphone according to one embodiment of the present invention. - An information processing device is a computer. An information processing device may be, for example, a notebook Personal Computer (PC), a Personal Digital Assistance (PDA), a tablet, a mobile phone, or the like, other than a smartphone.
- The
smartphone 2 that is one example of an information processing device has an auxiliary storage device 2H1, a main storage device 2H2, an input/output device 2H3, a state sensor 2H4, a CPU 2H5, and a network I/F 2H6. Each component of thesmartphone 2 is connected to a bus 2H7 and executes an input or an output of data or a signal. - The auxiliary storage device 2H1 stores information such as each kind of data that includes an intermediate result of a process executed by the CPU 2H5 due to a control of the CPU 2H5, a control device, or the like, a parameter, or a program. The auxiliary storage device 2H1 is, for example, a hard disk, a flash Solid State Drive (SSD), or the like. Here, information stored in the auxiliary storage device 2H1 is such that a part or an entirety of such information may be stored in a file server connected to the network I/F 2H6 or the like, instead of the auxiliary storage device 2H1.
- The main storage device 2H2 is a main storage device such as a storage area to be used by a program that is executed by the CPU 2H5, that is, a so-called “Memory”. The main storage device 2H2 stores information such as data, a program, or a parameter. The main storage device 2H2 is, for example, a Static Random Access Memory (SRAM), a DRAM, or the like. The main storage device 2H2 may have a control device for executing storage in or acquisition from a memory.
- The input/output device 2H3 is a device that has functions of an output device for executing display and an input device for inputting an operation of a user.
- The input/output device 2H3 is a so-called “touch panel”, a “peripheral circuit”, a “driver”, or the like.
- The input/output device 2H3 executes a process for displaying, to a user, an image input in, for example, a predetermined Graphical User Interface (GUI) or the
smartphone 2. - The input/output device 2H3 executes a process for inputting an operation of a user, for example, in a case where a GUI with a display or an image is operated by such a user.
- The state sensor 2H4 is a sensor for detecting a state of the
smartphone 2. The state sensor 2H4 is a gyro sensor, an angle sensor, or the like. The state sensor 2H4 determines, for example, whether or not one side that is possessed by thesmartphone 2 is provided at a predetermined or greater angle with respect to a horizon. That is, the state sensor 2H4 executes a detection as to whether thesmartphone 2 is provided at a state of a longitudinally directional attitude or a state of a laterally directional attitude. - The CPU 2H5 executes a calculation in each process that is executed by the
smartphone 2 and a control of a device that is provided in thesmartphone 2. For example, the CPU 2H5 executes each kind of program. Here, the CPU 2H5 may be composed of a plurality of CPUs or devices, or a plurality of cores in order to execute a process in parallel, redundantly, or dispersedly. Furthermore, a process for the CPU 2H5 is such that another hardware resource may be provided inside or outside thesmartphone 2 to execute a part or an entirety of a process for thesmartphone 2. For example, thesmartphone 2 may have a Graphics Processing Unit (GPU) for executing image processing, or the like. - The network I/F 2H6 is a device such as an antenna, a peripheral circuit, a driver, or the like, for inputting or outputting data, or the like, that is connected to another computer through a wireless or wired network. For example, the
smartphone 2 executes a process for inputting image data from theimage taking device 1 due to the CPU 2H5 and the network I/F 2H6. Thesmartphone 2 executes a process for outputting a predetermined parameter or the like to theimage taking device 1 due to the CPU 2H5 and the network I/F 2H6. -
FIG. 7 is a sequence diagram that illustrates one example of an entire process for an image taking system according to one embodiment of the present invention. - At step S0701, the
image taking device 1 executes a process for producing an all celestial sphere image. -
FIG. 8A ,FIG. 8B ,FIG. 8C , andFIG. 8D are diagrams that illustrate one example of an all celestial sphere image according to one embodiment of the present invention. -
FIG. 8A ,FIG. 8B ,FIG. 8C , andFIG. 8D are diagrams that illustrate one example of a process for producing an all celestial sphere image at step S0701. -
FIG. 8A is a diagram illustrated in such a manner that positions in a hemispherical image inFIG. 4A where incidence angles are equal in a horizontal direction or a vertical direction with respect to an optical axis are connected by a line. An incidence angle θ in a horizontal direction with respect to an optical axis and an incidence angle ϕ in a vertical direction with respect to such an optical axis will be denoted below. - Similarly to
FIG. 8A ,FIG. 8B is a diagram illustrated in such a manner that positions in a hemispherical image inFIG. 4B where incidence angles are equal in a horizontal direction or a vertical direction with respect to an optical axis are connected by a line. -
FIG. 8C is a diagram that illustrates one example of an image processed in accordance with Mercator projection.FIG. 8C is an example of a case where an image in a state illustrated inFIG. 8A or FIG. 8B is, for example, caused to correspond to a preliminarily produced Look Up Table (LUT) or the like and processed in accordance with equidistant cylindrical projection. -
FIG. 8D is one example of a synthesis process for synthesizing images provided by applying a process illustrated inFIG. 8C toFIG. 8A andFIG. 8B . - As illustrated in
FIG. 8D , a synthesis process is to produce an image by using a plurality of images, for example, in a state illustrated inFIG. 8C . Here, a synthesis process is not limited to a process for simply arranging pre-processed images successively. For example, in a case where a center of an all celestial sphere image in a horizontal direction is not provided at θ=180°, a synthesis process may be a process for executing a synthesis process in such a manner that a pre-processed image inFIG. 4A is arranged at a center of an all celestial sphere image and a pre-processed image inFIG. 4B is divided and arranged at left and right sides thereof, so as to produce an all celestial sphere image illustrated inFIG. 4C . - Here, a process for producing an all celestial sphere image is not limited to a process in accordance with equidistant cylindrical projection. For example, a so-called “upside-down” case is provided in such a manner that, like
FIG. 8B , an alignment of pixels in a direction of ϕ is upside-down with respect to an alignment inFIG. 8A and an alignment of pixels in a direction of θ is left-right reversal with respect to an alignment inFIG. 8A . In an upside-down case, theimage taking device 1 may execute a process for rolling or rotating a pre-processed image in a state ofFIG. 8B by 180° so as to align with an alignment of pixels in a direction of ϕ and a direction of θ inFIG. 8A . - Furthermore, a process for producing an all celestial sphere image may execute a correction process for correcting distortion aberration that is provided in an image in a state of
FIG. 8A orFIG. 8B . Moreover, a process for producing an all celestial sphere image may execute a process for improving an image quality, for example, shading correction, gamma correction, white balance, hand movement correction, an optical black correction process, a defective pixel correction process, an edge enhancement process, a linear correction process, or the like. - Here, for example, in a case where an image taking range of a hemispherical image overlaps with an image taking range of another hemispherical image, a synthesis process may execute correction by utilizing an overlapping range to execute such a synthesis process at high precision.
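- The arrangement of the two pre-processed images described above can be sketched as follows. This is a minimal illustration in Python and rests on assumptions not stated in the original: each pre-processed hemispherical image is taken to cover 180° of longitude over the full height of the equirectangular image, the two halves have identical dimensions, and no blending of the overlapping seams is performed.

```python
import numpy as np

def synthesize_equirect(front: np.ndarray, back: np.ndarray) -> np.ndarray:
    """Arrange two pre-processed hemispherical images (each H x W/2) into one
    H x W equirectangular all celestial sphere image: the front image is placed
    at the center, and the back image is divided and placed at the left and
    right edges.  A sketch only; real stitching would also blend the seams.
    """
    h, half_w = front.shape[:2]
    w = half_w * 2
    out = np.zeros((h, w) + front.shape[2:], dtype=front.dtype)
    quarter = half_w // 2
    # the front hemisphere occupies the middle half of the panorama
    out[:, quarter:quarter + half_w] = front
    # the back hemisphere is split; its right half becomes the left edge and vice versa
    out[:, :quarter] = back[:, half_w - quarter:]
    out[:, quarter + half_w:] = back[:, :quarter]
    return out
```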
- Due to a process for producing an all celestial sphere image, the
image taking device 1 produces an all celestial sphere image from a hemispherical image that is taken by theimage taking device 1. - At step S0702, the
smartphone 2 executes a process for acquiring an all celestial sphere image produced at step S0701. A case where thesmartphone 2 acquires an all celestial sphere image inFIG. 8D will be described as an example below. - At step S0703, the
smartphone 2 produces an all celestial sphere panoramic image from an all celestial sphere image acquired at step S0702. -
FIG. 9 is a diagram that illustrates one example of an all celestial sphere panoramic image according to one embodiment of the present invention. - At step S0703, the
smartphone 2 executes a process for producing an all celestial sphere panoramic image inFIG. 9 from an all celestial sphere image inFIG. 8D . An all celestial sphere panoramic image is an image provided in such a manner that an all celestial sphere image is applied onto a spherical shape. - A process for producing an all celestial sphere panoramic image is realized by, for example, an Application Programming Interface (API) such as Open GL (“Open GL” is a registered trademark) for Embedded Systems (Open GL ES).
- An all celestial sphere panoramic image is produced by dividing an image into triangles, joining vertices P of triangles (that will be referred to as “vertices P” below), and applying a polygon thereof.
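- A minimal sketch of such a sphere mesh is shown below. It only generates vertex positions and equirectangular texture coordinates in Python; the stack and slice counts, and the omission of the triangle index list that an Open GL ES implementation would also build, are assumptions made for illustration.

```python
import math

def sphere_mesh(stacks: int = 32, slices: int = 64, radius: float = 1.0):
    """Generate vertex positions and texture coordinates for a sphere onto
    which an equirectangular (all celestial sphere) image can be applied,
    for example with Open GL ES.  Triangle index generation is analogous.
    """
    vertices, texcoords = [], []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks              # 0..pi (vertical)
        for j in range(slices + 1):
            theta = 2.0 * math.pi * j / slices  # 0..2*pi (horizontal)
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.cos(phi)
            z = radius * math.sin(phi) * math.sin(theta)
            vertices.append((x, y, z))
            # equirectangular texture coordinates in [0, 1]
            texcoords.append((j / slices, i / stacks))
    return vertices, texcoords
```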
- At step S0704, the
smartphone 2 executes a process for causing a user to input an operation for starting an output of an image. At step S0704, thesmartphone 2, for example, reduces and outputs an all celestial sphere panoramic image produced at step S0703, that is, displays a so-called “thumbnail image”. In a case where a plurality of all celestial sphere panoramic images are stored in thesmartphone 2, thesmartphone 2 outputs a list of thumbnail images, for example, to cause a user to select an image to be output. At step S0704, thesmartphone 2 executes, for example, a process for inputting an operation for causing a user to select one image from a list of thumbnail images. - At step S0705, the
smartphone 2 executes a process for producing an initial image based on an all celestial sphere panoramic image selected at step S0704. -
FIG. 10A, FIG. 10B, FIG. 10C, and FIG. 10D are diagrams for illustrating one example of an initial image according to one embodiment of the present invention. -
FIG. 10A is a diagram that illustrates a three-dimensional coordinate system for illustrating one example of an initial image according to one embodiment of the present invention. - As illustrated in
FIG. 10A , a three-dimensional coordinate system with XYZ axes will be described below. Thesmartphone 2 places avirtual camera 3 at a position of an origin and produces each kind of image at a viewpoint of thevirtual camera 3. In a case of a coordinate system inFIG. 10A , an all celestial sphere panoramic image is represented by, for example, a sphere CS. Thevirtual camera 3 corresponds to a viewpoint of a user that views an all celestial sphere panoramic image wherein such an all celestial sphere panoramic image is a sphere CS at a placed position thereof. -
FIG. 10B is a diagram for illustrating one example of a predetermined area for a virtual camera according to one embodiment of the present invention. -
FIG. 10B is a case whereFIG. 10A is represented by three-plane figures.FIG. 10B is a case where thevirtual camera 3 is placed at an origin ofFIG. 10A .FIG. 10C is a projection view of one example of a predetermined area for a virtual camera according to one embodiment of the present invention. - A predetermined area T is an area where a view angle of the
virtual camera 3 is projected onto a sphere CS. Thesmartphone 2 produces an image based on a predetermined area T. -
FIG. 10D is a diagram for illustrating one example of information for determining a predetermined area for a virtual camera according to one embodiment of the present invention. - A predetermined area T is determined by, for example, predetermined area information (x, y, α).
- A view angle α is an angle that indicates an angle of the
virtual camera 3 as illustrated inFIG. 10D . In a case of a diagonal angle ofview 2L of a predetermined area T that is represented by a view angle α, coordinates of a center point CP of such a predetermined area T are represented by (x,y) in predetermined area information. - Here, a distance from the
virtual camera 3 to a center point CP is represented by Formula (1) described below: -
f=tan(α/2) (Formula 1) - An initial image is an image provided by determining a predetermined area T based on a preliminarily set initial setting and being produced based on such a determined predetermined area T. An initial setting is, for example, (x, y, α)=(0, 0, 34) or the like.
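- The following sketch, in Python, illustrates how the predetermined area information (x, y, α) and Formula (1) might be combined when the initial image is produced; the function name and the returned fields are illustrative assumptions, not the actual implementation.

```python
import math

def predetermined_area(x: float = 0.0, y: float = 0.0, alpha_deg: float = 34.0):
    """Derive parameters of the predetermined area T from the predetermined
    area information (x, y, alpha): the center point CP and the distance f
    from the virtual camera 3 to CP given by Formula (1).
    """
    alpha = math.radians(alpha_deg)
    f = math.tan(alpha / 2.0)   # Formula (1)
    return {"center_point": (x, y), "view_angle_deg": alpha_deg, "distance_f": f}

# initial setting assumed for the initial image, e.g. (x, y, alpha) = (0, 0, 34)
initial = predetermined_area(0.0, 0.0, 34.0)
```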
- At step S0706, the
smartphone 2 causes a user to execute an operation for switching to an image editing mode. Here, in a case where a user does not execute an operation for switching to an image editing mode, thesmartphone 2 outputs, for example, an initial image. - At step S0707, the
smartphone 2 executes a process for outputting an output image for editing an image. -
FIG. 11 is a diagram that illustrates one example of an output image at an initial state for editing an image according to one embodiment of the present invention. - An output image is, for example, an
output image 21 at an initial state. An output image has anediting image 31 at an initial state and a changingimage 41 at an initial state. - An output image displays a button for a Graphical User Interface (GUI) for accepting an operation of a user. A GUI is, for example, a
blur editing button 51, acancellation editing button 52, or the like. Here, an output image may have another GUI. - An
editing image 31 at an initial state is, for example, an initial image produced at step S0705. - A changing
image 41 at an initial state is, for example, an image provided by reducing an all celestial sphere panoramic image produced at step S0703. - A user edits an image in an image editing mode, and hence, applies an operation to an editing image or a changing image that is displayed in an output image.
- At step S0708, the
smartphone 2 executes a process for causing a user to input an operation for editing an image. - At step S0709, the
smartphone 2 acquires coordinates where a user inputs an operation for the input/output device 2H3. At step S0709, thesmartphone 2 executes a process for determining whether an operation is executed for an area of theediting image 31 at an initial state inFIG. 11 or an operation is executed for an area of the changingimage 41 at an initial state inFIG. 11 , based on acquired coordinates. - Image editing is editing that is executed based on an operation of a user. Editing of an area to be output is editing for changing an area to be output in an image based on a changing image or editing executed for a predetermined area based on an editing image.
- Editing for changing an area to be output is executed in a case where an operation is applied to an area of a changing image at step S0709.
- Editing to be executed for a predetermined area based on an editing image is executed in a case where an operation is applied to an area of an editing image at step S0709.
- In a case where a user operates a changing image (an area of a changing image is determined at step S0709), the
smartphone 2 goes to step S0710. In a case where a user operates an editing image (an area of an editing image is determined at step S0709), thesmartphone 2 goes to step S0712. -
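- The determination at step S0709 can be sketched as a simple hit test, for example as follows; the rectangle representation of the two screen areas and the return values are assumptions made for illustration.

```python
def classify_touch(x: float, y: float, editing_rect, changing_rect) -> str:
    """Decide whether the acquired touch coordinates fall in the area of the
    editing image or in the area of the changing image.  Rectangles are
    (left, top, width, height) tuples.
    """
    def inside(rect) -> bool:
        left, top, width, height = rect
        return left <= x < left + width and top <= y < top + height

    if inside(editing_rect):
        return "editing_image"    # proceed to step S0712
    if inside(changing_rect):
        return "changing_image"   # proceed to step S0710
    return "none"
```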
FIG. 12A ,FIG. 12B , andFIG. 12C are diagrams for illustrating one example of editing of an area to be output according to one embodiment of the present invention. -
FIG. 12A is a diagram that illustrates one example of an output image after editing an area to be output according to one embodiment of the present invention. - An output image is, for example, an
output image 22 after editing an area to be output. Theoutput image 22 after editing an area to be output has anediting image 32 after editing an area to be output and a changingimage 42 after editing an area to be output. - The
editing image 32 after editing an area to be output is an image produced by changing a predetermined area T as illustrated inFIG. 10A ,FIG. 10B ,FIG. 100 , andFIG. 10D in theediting image 31 at an initial state inFIG. 11 . - The changing
image 42 after editing an area to be output is an image produced by changing a predetermined area T illustrated inFIG. 10A ,FIG. 10B ,FIG. 100 , andFIG. 10D in the changingimage 41 at an initial state inFIG. 11 . -
FIG. 12B is a diagram that illustrates one example of a predetermined area after editing an area to be output according to one embodiment of the present invention. - The
output image 22 after editing an area to be output is provided at, for example, a viewpoint of a case where thevirtual camera 3 at a state ofFIG. 10B is pan-rotated as illustrated inFIG. 12B . -
FIG. 12C is a diagram that illustrates one example of an operation in a case of editing of an area to be output according to one embodiment of the present invention. - Editing of an area to be output is executed in such a manner that a user operates a screen area where a changing image is output.
- An operation to be input at step S0708 is, for example, an operation for changing an area to be output with respect to left and right directions of an image or the like.
- In a case of
FIG. 12A ,FIG. 12B , andFIG. 12C , an operation that is input by a user is such that a screen where the changingimage 41 at an initial state inFIG. 11 is traced with a finger in left and right directions of such a screen as illustrated inFIG. 12C , that is, a so-called “swipe operation”, or the like. - Herein, an input amount on a swipe operation is provided as (dx, dy).
- A relation between a polar coordinate system (ϕ, θ) of an all celestial sphere in
FIG. 8A ,FIG. 8B ,FIG. 8C , andFIG. 8D and an input amount (dx, dy) is represented by Formula (2) described below: -
ϕ=k×dx -
θ=k×dy (Formula 2) - In Formula (2) described above, k is a predetermined constant for executing adjustment.
- An output image is changed based on an input amount input for a swipe operation, and hence, it is possible for a user to operate an image with a feeling that a sphere such as a terrestrial globe is rotated.
- Here, for simplifying a process, what position of a screen a swipe operation is input at may not be taken into consideration. That is, similar values may be input for an input amount (dx, dy) in Formula (2) even though a swipe operation is executed at any position of a screen where the changing
image 41 at an initial state is output. - The changing
image 42 after editing an area to be output executes perspective projection transformation of coordinates (Px, Py, Pz) of a vertex P in three-dimensional space based on (ϕ, θ) calculated in accordance with Formula (2). - In a case where a user executes a swipe operation with an input amount (dx2, dy2) in a case of
FIG. 12A , a polar coordinate system (ϕ, θ) of an all celestial sphere is represented by Formula (3) described below: -
ϕ=k×(dx+dx2) -
θ=k×(dy+dy2) (Formula 3) - As illustrated in (3) described above, a polar coordinate system (ϕ, θ) of an all celestial sphere is calculated based on a total value of input amounts for respective swipe operations. Even in a case where a plurality of swipe operations are executed or the like, calculation of a polar coordinate system (ϕ, θ) of an all celestial sphere is executed, and thereby, it is possible to keep constant operability.
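- A minimal sketch, in Python, of accumulating swipe input amounts according to Formula (2) and Formula (3) is shown below; the class name and the value of the constant k are illustrative assumptions.

```python
class OutputAreaController:
    """Accumulates swipe input amounts and converts the totals into the polar
    coordinates (phi, theta) of the all celestial sphere, following Formulas
    (2) and (3); k is the adjustment constant of Formula (2).
    """
    def __init__(self, k: float = 0.1):
        self.k = k
        self.total_dx = 0.0
        self.total_dy = 0.0

    def on_swipe(self, dx: float, dy: float):
        # totals over all swipe operations keep the operability constant
        self.total_dx += dx
        self.total_dy += dy
        phi = self.k * self.total_dx     # phi = k * (dx + dx2 + ...)
        theta = self.k * self.total_dy   # theta = k * (dy + dy2 + ...)
        return phi, theta
```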
- Here, editing of an area to be output is not limited to pan-rotation. For example, tilt-rotation of the
virtual camera 3 in upper and lower directions of an image may be realized. - An operation that is input at step S0708 is, for example, an operation for enlarging or reducing an area to be output or the like.
-
FIG. 13A andFIG. 13B are diagrams for illustrating one example of enlargement or reduction of an area to be output according to one embodiment of the present invention. - In a case where enlargement of an area to be output is executed, an operation that is input by a user is such that two fingers are spread on a screen where the changing
image 41 at an initial state inFIG. 11 is output, as illustrated inFIG. 13A , that is, a so-called “pinch-out operation”, or the like. - In a case where reduction of an area to be output is executed, an operation that is input by a user is such that two fingers are moved closer to each other on a screen where the changing
image 41 at an initial state inFIG. 11 is output, as illustrated inFIG. 13B , that is, a so-called “pinch-in operation”, or the like. - Here, a pinch-out or pinch-in operation is sufficient as long as a position where a finger of a user first contacts is provided in an area with a changing image displayed thereon, and may be an operation that subsequently uses an area with an editing image displayed thereon. Furthermore, an operation may be executed by a so-called “stylus pen” that is a tool for operating a touch panel or the like.
- In a case where an operation illustrated in
FIG. 13A andFIG. 13B is input, thesmartphone 2 executes a so-called “zoom process”. - A zoom process is a process for producing an image with a predetermined area enlarged or reduced based on an operation that is input by a user.
- In a case where an operation illustrated in
FIG. 13A andFIG. 13B is input, thesmartphone 2 acquires an amount of change dz based on an operation that is input by a user. - A zoom process is a process for executing calculation in accordance with Formula (4) described below:
-
α=α0+m×dz (Formula 4) - based on an amount of change dz.
- α indicated in Formula (4) described above is a view angle α of the
virtual camera 3 as illustrated inFIG. 10A ,FIG. 10B ,FIG. 10C , andFIG. 10D . m indicated in Formula (4) is a coefficient for adjusting an amount of zoom. α0 indicated in Formula (4) is a view angle α at an initial state, that is, a view angle α in a case where an initial image is produced at step S0705. - In a case where an operation illustrated in
FIG. 13A andFIG. 13B is input, thesmartphone 2 determines a range of a predetermined area T inFIG. 10A ,FIG. 10B ,FIG. 100 , andFIG. 10D by using a view angle α calculated in accordance with Formula (4) for a projection matrix. - In a case where calculation is executed in accordance with Formula (4) and a user executes an operation for providing an amount of change dz2, the
smartphone 2 executes calculation in accordance with Formula (5) described below: -
α=α0+m×(dz+dz2) (Formula 5) - As indicated in (5) described above, a view angle α is calculated based on a total value of amounts of change due to operations as illustrated in
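- A minimal sketch of accumulating amounts of change dz according to Formula (4) and Formula (5) might look as follows; the clamping of the view angle and the default values are assumptions, not part of the formulas.

```python
class ZoomController:
    """Accumulates pinch-in/pinch-out amounts of change dz and computes the
    view angle alpha of the virtual camera 3 by Formulas (4) and (5);
    alpha0 is the initial view angle and m the zoom-adjustment coefficient.
    """
    def __init__(self, alpha0: float = 34.0, m: float = 1.0,
                 alpha_min: float = 1.0, alpha_max: float = 170.0):
        self.alpha0 = alpha0
        self.m = m
        self.alpha_min = alpha_min
        self.alpha_max = alpha_max
        self.total_dz = 0.0

    def on_pinch(self, dz: float) -> float:
        self.total_dz += dz                          # total of dz, dz2, ...
        alpha = self.alpha0 + self.m * self.total_dz # Formula (5)
        # clamping is an assumption added here, not part of the formulas
        return max(self.alpha_min, min(self.alpha_max, alpha))
```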
FIG. 13A andFIG. 13B . Even in a case where a plurality of operations as illustrated inFIG. 13A andFIG. 13B are executed or the like, calculation of a view angle α of a celestial sphere is executed, and thereby, it is possible to keep constant operability. - Here, a zoom process is not limited to a process in accordance with Formula (4) or Formula (5).
- A zoom process may be realized by combining a view angle α of the
virtual camera 3 and a change in a position of a viewpoint. -
FIG. 14 is a diagram for illustrating one example of another zoom process according to one embodiment of the present invention. -
FIG. 14 is a model diagram for illustrating another zoom process. A sphere CS inFIG. 14 is similar to a sphere CS inFIG. 10A ,FIG. 10B ,FIG. 100 , andFIG. 10D . InFIG. 14 , a radius of a sphere CS is described as “1”. - An origin in
FIG. 14 is provided at an initial position of the virtual camera 3. A position of the virtual camera 3 is changed on an optical axis, that is, a z-axis in FIG. 10A. It is possible to represent an amount of movement d of the virtual camera 3 by a distance from an origin. For example, in a case where the virtual camera 3 is positioned at an origin, that is, a case of an initial state, an amount of movement d is "0". - A range of a predetermined area T in
FIG. 10A, FIG. 10B, FIG. 10C, and FIG. 10D is represented by an angle of view ω based on an amount of movement d and a view angle α of the virtual camera 3. An angle of view ω as illustrated in FIG. 14 is an angle of view in a case where the virtual camera 3 is positioned at an origin, namely, a case of d=0. - In a case where the
virtual camera 3 is positioned at an origin, namely, a case of d=θ, an angle of view ω is identical to a view angle α. In a case where thevirtual camera 3 is displaced from an origin, that is, a case where a value of d is increased, an angle of view ω and a view angle α exhibit different ranges. - Another zoom process is a process for changing an angle of view ω.
-
FIG. 15 is a table for illustrating one example of another zoom process according to one embodiment of the present invention. - Illustrative table 4 illustrates an example of a case where an angle of view ω is a range of 60° to 300°.
- As illustrated in illustrative table 4, the
smartphone 2 determines which of a view angle α and an amount of movement d of thevirtual camera 3 is preferentially changed based on a zoom specification value ZP. - “RANGE” is a range that is determined based on a zoom specification value ZP.
- “OUTPUT MAGNIFICATION” is an output magnification of an image calculated based on an image parameter determined by another zoom process.
- “ZOOM SPECIFICATION VALUE ZP” is a value that corresponds to an angle of view to be output. Another zoom process changes a process for determining an amount of movement d and a view angle α based on a zoom specification value ZP. For a process to be executed in another zoom process, one of four methods is determined based on a zoom specification value ZP as illustrated in illustrative table 4. A range of a zoom specification value ZP is divided into four ranges that are a range of A-B, a range of B-C, a range of C-D, and a range of D-E.
- “ANGLE OF VIEW ω” is an angle of view ω that corresponds to an image parameter determined by another zoom process.
- “CHANGING PARAMETER” is a description that illustrates a parameter that is changed by each of four methods based on a zoom specification value ZP. “REMARKS” are remarks for “CHANGING PARAMETER”.
- “viewWH” in illustrative table 4 is a value that represents a width or a height of an output area. In a case where an output area is laterally long, “viewWH” is a value of a width. In a case where an output area is longitudinally long, “viewWH” is a value of a height. That is, “viewWH” is a value that represents a size of an output area in longitudinal direction.
- “imgWH” in illustrative table 4 is a value that represents a width or a height of an output image. In a case where an output area is laterally long, “imgWH” is a value of a width of an output image. In a case where an output area is longitudinally long, “imgWH” is a value of a height of an output image. That is, “imgWH” is a value that represents a size of an output image in longitudinal direction.
- “imageDeg” in illustrative table 4 is a value that represents an angle of a display range of an output image. In a case where a width of an output image is represented, “imageDeg” is 360°. In a case where a height of an output image is represented, “imageDeg” is 180°.
-
FIG. 16A ,FIG. 16B ,FIG. 16C ,FIG. 16D , andFIG. 16E are diagrams for illustrating one example of a “range” of another zoom process according to one embodiment of the present invention. - A case of a so-called “zoom-out” in
FIG. 16A ,FIG. 16B ,FIG. 16C ,FIG. 16D , andFIG. 16E will be described as an example below. Here, a left figure in each figure ofFIG. 16A ,FIG. 16B ,FIG. 16C ,FIG. 16D , andFIG. 16E illustrates one example of an image to be output. A right figure in each figure ofFIG. 16A ,FIG. 16B ,FIG. 16C ,FIG. 16D , andFIG. 16E is a diagram that illustrates one example of a state of thevirtual camera 3 at a time of an output in a model diagram illustrated inFIG. 14 . -
FIG. 16A is one example of an output in a case where a zoom specification value ZP is input in such a manner that a “RANGE” in illustrative table 4 inFIG. 15 is “A-B”. In a case of “A-B”, a view angle α of thevirtual camera 3 is fixed at, for example α=60°. In a case of “A-B”, an amount of movement d of thevirtual camera 3 is changed on a condition that a view angle α is fixed as illustrated inFIG. 16A . In a case where an amount of movement d of thevirtual camera 3 is increased on a condition that a view angle α is fixed, an angle of view ω is increased. In a case of “A-B”, a view angle α is fixed and an amount of movement d of thevirtual camera 3 is increased, so that it is possible to realize a zoom-out process. Here, an amount of movement d of thevirtual camera 3 in a case of “A-B” is from 0 to a radius of a sphere CS. That is, a radius of a sphere CS is “1” in a case ofFIG. 16A ,FIG. 16B ,FIG. 16C ,FIG. 16D , andFIG. 16E , and hence, an amount of movement d of thevirtual camera 3 is a value within a range of 0-1. An amount of movement d of thevirtual camera 3 is a value that corresponds to a zoom specification value ZP. -
FIG. 16B is one example of an output in a case where a zoom specification value ZP is input in such a manner that "RANGE" in illustrative table 4 in FIG. 15 is "B-C". "B-C" is a case where a zoom specification value ZP is a value greater than that of "A-B". In a case of "B-C", an amount of movement d of the virtual camera 3 is fixed at a value for positioning the virtual camera 3 at a periphery of a sphere CS. That is, as illustrated in FIG. 16B, an amount of movement d of the virtual camera 3 is fixed at "1" that is a radius of a sphere CS. In a case of "B-C", a view angle α is changed on a condition that an amount of movement d of the virtual camera 3 is fixed. In a case where a view angle α is increased on a condition that an amount of movement d of the virtual camera 3 is fixed, an angle of view ω is increased from FIG. 16A to FIG. 16B. In a case of "B-C", an amount of movement d of the virtual camera 3 is fixed and a view angle α is increased, so that it is possible to realize a zoom-out process. In a case of "B-C", a view angle α is calculated as ω/2. In a case of "B-C", a range of a view angle α is from 60° that is a value fixed in a case of "A-B" to 120°. -
-
FIG. 16C is one example of an output in a case where a zoom specification value ZP is input in such a manner that “RANGE” in illustrative table 4 inFIG. 15 is “C-D”. “C-D” is a case where a zoom specification value ZP is a value greater than that of “B-C”. In a case of “C-D”, a view angle α is fixed at, for example, α=120°. In a case of “C-D”, an amount of movement d of thevirtual camera 3 is changed on a condition that a view angle α is fixed as illustrated inFIG. 16C . In a case where an amount of movement d of thevirtual camera 3 is increased on a condition that a view angle α is fixed, an angle of view ω is increased. An amount of movement d of thevirtual camera 3 is calculated in accordance with a formula based on a zoom specification value ZP illustrated in illustrative table 4 IFIG. 15 . In a case of “C-D”, an amount of movement d of thevirtual camera 3 is changed to a maximum display distance dmax1. - A maximum display distance dmax1 is a distance where a sphere CS is displayed so as to be maximum in an output area of the
smartphone 2. An output area is, for example, a size of a screen where thesmartphone 2 outputs an image or the like, or the like. A maximum display distance dmax1 is, for example, a case ofFIG. 16D . A maximum display distance dmax1 is calculated in accordance with Formula (6) described below: -
- “viewW” in Formula (6) described above is a value that represents a width of an output area of the
smartphone 2. “viewH” in Formula (6) described above is a value that represents a height of an output area of thesmartphone 2. A similar matter will be described below. - A maximum display distance dmax1 is calculated based on values of “viewW” and “viewH” that are output areas of the
smartphone 2. -
FIG. 16D is one example of an output in a case where a zoom specification value ZP is input in such a manner that “RANGE” in illustrative table 4 inFIG. 15 is “D-E”. “D-E” is a case where a zoom specification value ZP is a value greater than that of “C-D”. In a case of “D-E”, a view angle α is fixed at, for example, α=120°. In a case of “D-E”, an amount of movement d of thevirtual camera 3 is changed on a condition that a view angle α is fixed as illustrated inFIG. 16D . An amount of movement d of thevirtual camera 3 is changed to a limit display distance dmax2. A limit display distance dmax2 is a distance where a sphere CS is displayed so as to be inscribed in an output area of thesmartphone 2. A limit display distance dmax2 is calculated in Formula (7) described below: -
- A limit display distance dmax2 is, for example, a case of
FIG. 16E . - A limit display distance dmax2 is calculated based on values of “viewW” and “viewH” that are output areas of the
smartphone 2. A limit display distance dmax2 represents a maximum range that is able to be output by thesmartphone 2, that is, a limit value of an amount of movement d of thevirtual camera 3. An embodiment may be limited in such a manner that a zoom specification value ZP is included in a range illustrated in illustrative table 4 inFIG. 15 , that is, a value of an amount of movement d of thevirtual camera 3 is less than or equal to a limit display distance dmax2. Due to such limitation, thesmartphone 2 is provided on a condition that an output image is fitted to a screen that is an output area or a condition that an image with a predetermined output magnification is output to a user, so that it is possible to realize zoom-out. - Due to a process for “D-E”, it is possible for the
smartphone 2 to cause a user to recognize that an output image is all celestial sphere panorama. - Here, in a case of “C-D” or “D-E”, an angle of view ω is not identical to a zoom specification value ZP. Furthermore, as illustrated in illustrative table 4 in
FIG. 15 andFIG. 16A ,FIG. 16B ,FIG. 16C ,FIG. 16D , andFIG. 16E , an angle of view ω is continuous in each range but such an angle of view ω is not uniformly increased by zoom-out toward a wide-angle side. That is, in a case of “C-D” where an amount of movement d of thevirtual camera 3 is changed, an angle of view ω is increased with such an amount of movement d of thevirtual camera 3. In a case of “D-E” where an amount of movement d of thevirtual camera 3 is changed, an angle of view ω is decreased with such an amount of movement d of thevirtual camera 3. A decrease in an amount of movement d of thevirtual camera 3 in “D-E” is caused by reflecting an outer area of a sphere CS. In a case where a wide field of view greater than or equal to 240° is specified by a zoom specification value ZP, thesmartphone 2 changes an amount of movement d of thevirtual camera 3, and thereby, it is possible to output an image with a less feeling of strangeness to a user and change an angle of view w. - In a case where a zoom specification value ZP is changed toward a wide-angle direction, an angle of view ω is frequently increased. In a case where an angle of view ω is increased, the
smartphone 2 fixes a view angle α of thevirtual camera 3 and increases an amount of movement d of thevirtual camera 3. Thesmartphone 2 fixes a view angle α of thevirtual camera 3, and thereby, it is possible to reduce an increase in such a view angle α of thevirtual camera 3. Thesmartphone 2 reduces an increase in a view angle α of thevirtual camera 3, and thereby, it is possible to output an image with less distortion to a user. In a case where a view angle α of thevirtual camera 3 is fixed, thesmartphone 2 increases an amount of movement d of thevirtual camera 3, that is, moves thevirtual camera 3 to be distant, and thereby, it is possible to provide a user with an open-feeling of a wide angle display. Furthermore, movement for moving thevirtual camera 3 to be distant is similar to movement at a time when a human being confirms a wide range, and hence, it is possible for thesmartphone 2 to realize zoom-out with a less feeling of strangeness due to movement for moving the virtual camera to be distant. - In a case of “D-E”, an angle of view m is decreased with changing a zoom specification value ZP toward a wide-angle direction. In a case of “D-E”, the
smartphone 2 decreases an angle of view w, and thereby, it is possible to provide a user with a feeling of being distant from a sphere CS. Thesmartphone 2 provides a user with a feeling of being distant from a sphere CS, and thereby, it is possible to output an image with a less feeling of strangeness to a user. - Hence, it is possible for the
smartphone 2 to output an image with a less feeling of strangeness to a user, due to another zoom process illustrated in illustrative table 4 inFIG. 15 . - Here, an embodiment is not limited to a case where only an amount of movement d or a view angle α of the
virtual camera 3 illustrated in illustrative table 4 inFIG. 15 is changed. It is sufficient for an embodiment to be a mode for preferentially changing an amount of movement d or a view angle α of thevirtual camera 3 on a condition illustrated in illustrative table 4 inFIG. 15 , and a fixed value may be changed to a sufficiently small value, for example, for adjustment. - Furthermore, an embodiment is not limited to zoom-out. An embodiment may realize, for example, zoom-in.
- Here, a case where an area to be output is edited is not limited to a case where an operation is executed for a changing image. The
smartphone 2 may edit an area to be output, for example, in a case where an operation is executed for an editing image. - Editing to be executed for a predetermined area based on an editing image is blur editing that blurs a predetermined pixel. Herein, for another editing, it is possible to provide, erasing of a specified range of an image, changing of a color tone or a color depth of an image or the like, a color change of a specified range of an image, or the like.
- A case where a user executes blur editing for the
output image 22 after editing of an area to be output inFIG. 12A ,FIG. 12B , andFIG. 12C will be described as an example below. - In a case where a user executes an operation that pushes a
blur editing button 51, thesmartphone 2 causes a user to input a so-called “tap operation” for an area where anediting image 32 for theoutput image 22 after editing of an area to be output inFIG. 12A ,FIG. 12B , andFIG. 12C is displayed. - The
smartphone 2 executes a process for blurring a predetermined range centered at a point tapped by a user. -
FIG. 17A andFIG. 17B are diagrams for illustrating one example of editing to be executed for a predetermined area based on an editing image according to one embodiment of the present invention. -
FIG. 17A is a diagram for illustrating one example of blur editing according to one embodiment of the present invention.FIG. 17A is a diagram that illustrates anoutput image 23 after blur editing. Theoutput image 23 after blur editing has anediting image 33 after blur editing and a changingimage 43 after blur editing. - The
editing image 33 after blur editing is produced by applying blur editing to an output image after editing of an area to be output inFIG. 12A ,FIG. 12B , andFIG. 12C . Blur editing is realized by, for example, a Gauss function, an average of peripheral pixels, a low-pass filter, or the like. Blur editing is illustrated like, for example, ablur editing area 5. - Blur editing is applied to a changing image. The
smartphone 2 executes calculation of a point (Px, Py, Pz) in a three-dimensional space from coordinates of a point tapped by a user. The smartphone 2 calculates (Px, Py, Pz) from two-dimensional coordinates through inverse transformation of perspective projection transformation that uses a view frustum. There is no information of depth in two-dimensional coordinates, and hence, (Px, Py, Pz) are calculated by using a point on a sphere and simultaneous equations. A sign of Pz in a projection coordinate system is constant, and hence, it is possible for the smartphone 2 to calculate the simultaneous equations. Coordinates of an all celestial sphere panoramic image correspond to (Px, Py, Pz), and hence, it is possible for the smartphone 2 to calculate coordinates on an all celestial sphere panoramic image from the calculated (Px, Py, Pz). Therefore, the changing image 43 after blur editing is provided on a condition that blur editing is reflected as illustrated in FIG. 17A. -
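- A sketch of recovering (Px, Py, Pz) from tapped two-dimensional coordinates is shown below: the tap is converted into a viewing ray of the virtual camera 3 and intersected with the sphere CS, which plays the role of the simultaneous equations mentioned above. The axis conventions and the placement of the virtual camera 3 at the center of the sphere are assumptions of this sketch.

```python
import numpy as np

def tap_to_sphere_point(tap_x, tap_y, view_w, view_h, alpha_deg, radius=1.0):
    """Convert tap coordinates into a point (Px, Py, Pz) on the sphere CS."""
    alpha = np.radians(alpha_deg)
    aspect = view_w / view_h
    # normalized device coordinates in [-1, 1]
    ndx = 2.0 * tap_x / view_w - 1.0
    ndy = 1.0 - 2.0 * tap_y / view_h
    # viewing ray through the tapped pixel (camera assumed to look along -z)
    f = np.tan(alpha / 2.0)
    direction = np.array([ndx * f * aspect, ndy * f, -1.0])
    direction /= np.linalg.norm(direction)
    # with the camera at the sphere center, every unit ray hits at t = radius
    return radius * direction
```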
FIG. 17B is a diagram for illustrating one example of editing that cancels blurring according to one embodiment of the present invention. - Editing that is applied to a predetermined area based on an editing image is editing that cancels blur editing for a
blur editing area 5 blurred by such blur editing. - In a case where a user executes an operation that pushes the
cancellation editing button 52, thesmartphone 2 outputs anoutput image 24 for cancellation editing that displays a fillingarea 6 on theblur editing area 5 with applied blur editing. As illustrated inFIG. 17B , theoutput image 24 for cancellation editing is an image that displays the fillingarea 6 on theblur editing area 5 in theediting image 33 after blur editing inFIG. 17A . A user executes a tap operation for a displayed fillingarea 6, that is, an area with applied blurring. Thesmartphone 2 executes a process for cancelling blur editing in a predetermined range centered at a point tapped by a user. That is, in editing for cancelling blur editing, thesmartphone 2 provides a predetermined range centered at a point tapped by a user in theediting image 33 after blur editing on a state of theoutput image 22 after editing of an area to be output inFIG. 12A ,FIG. 12B , andFIG. 12C . - Once a taken image of a face of a person or a photography-prohibited building is released or shared on the internet, trouble may be caused. In particular, in a case where a panoramic image with a broad range is taken, an image of many objects in a broad range may frequently be taken. Therefore, it is possible for a user to reduce trouble due to a process for blurring an object that is possibly problematic at a time of release or sharing. It is possible for the
smartphone 2 to facilitate an operation for blurring a face of a person taken in an image due to editing to be applied to a predetermined area based on an editing image. Hence, it is possible for thesmartphone 2 to cause a user to readily execute an image operation due to editing to be applied to a predetermined area based on an editing image. - Here, in a case where editing of an area to be output is executed, the
smartphone 2 may change a range of editing applied to a predetermined area based on an editing image or the like in accordance with a magnification. - At step S0710, the
smartphone 2 calculates amounts of movement of coordinates to be output. That is, at step S0710, thesmartphone 2 calculates a position of a predetermined area T inFIG. 10A ,FIG. 10B ,FIG. 100 , andFIG. 10D that corresponds to a swipe operation of a user based on, for example, Formula (2) described above. - At step S0711, the
smartphone 2 updates a position of a predetermined area T inFIG. 10A ,FIG. 10B ,FIG. 100 , andFIG. 10D at a position calculated at step S0710. - At step S0712, the
smartphone 2 calculates coordinates of a point that is an editing object. That is, at step S0712, thesmartphone 2 calculates coordinates that correspond to a tap operation of a user and executes calculation for projection onto three-dimensional coordinates. - At step S0713, the
smartphone 2 calculates a predetermined area that is edited centered at coordinates calculated at step S0712 and based on an editing image. That is, at step S0713, thesmartphone 2 calculates a pixel that is a point specified by a tap operation of a user or a periphery of such a point and is an object for blur editing or the like. - At step S0714, the
smartphone 2 produces an editing image. In a case where a user executes an operation for a changing image at step S0714, thesmartphone 2 produces a changing image based on a predetermined area T updated at step S0711. In a case where a user executes an operation for an editing image at step S0714, thesmartphone 2 produces an editing image wherein a blurring process is reflected on a pixel calculated at step S0713. - At step S0715, the
smartphone 2 produces a changing image. In a case where a user executes an operation for a changing image at step S0715, the smartphone 2 produces a changing image based on a predetermined area T updated at step S0711. In a case where a user executes an operation for an editing image at step S0715, the smartphone 2 produces a changing image that indicates a location that is a blurring object at step S0713. - The
smartphone 2 repeats processes of step S0708 through step S0715. -
FIG. 18 is a flowchart that illustrates one example of an entire process on a smartphone according to one embodiment of the present invention. - At step S1801, the
smartphone 2 executes a process for acquiring an image from theimage taking device 1 inFIG. 1 or the like. A process at step S1801 corresponds to a process at step S0702 inFIG. 7 . - At step S1802, the
smartphone 2 executes a process for producing a panoramic image. A process at step S1802 is executed based on an image acquired at step S1801. A process at step S1802 corresponds to a process at step S0703 inFIG. 7 . - At step S1803, the
smartphone 2 executes a process for causing a user to select an image to be output. A process at step S1803 corresponds to a process at step S0704 inFIG. 7 . Specifically, a process for causing a user to select an image to be output is a process for outputting a thumbnail image or providing a UI for causing a user to execute an operation for a thumbnail image, or the like. - At step S1804, the
smartphone 2 executes a process for producing an initial image. A process at step S1804 corresponds to a process at step S0705 inFIG. 7 . At step S1804, thesmartphone 2 produces and outputs an image selected by a user at step S1803 as an initial image. - At step S1805, the
smartphone 2 executes determination as to whether or not switching to a mode for editing an image is executed. A process at step S1805 executes determination based on whether or not an operation of a user at step S0706 inFIG. 7 is provided. In a case where determination is provided at step S1805 in such a manner that switching to a mode for editing an image is provided (YES at step S1805), thesmartphone 2 goes to step S1806. In a case where determination is provided at step S1805 in such a manner that switching to a mode for editing an image is not provided (NO at step S1805), thesmartphone 2 returns to step S1804. - A case where determination is provided at step S1805 in such a manner that switching to a mode for editing an image is provided is a case where an input to start editing of an image is provided by a user. A case where determination is provided at step S1805 in such a manner that switching to a mode for editing an image is not provided is a case where a user does not execute an operation. Therefore, in a case where a user does not execute an operation, the
smartphone 2 continues to output an initial image and waits for an input of an user to start editing of an image. - At step S1806, the
smartphone 2 executes a process for outputting an output image for editing an image. A process at step S1806 corresponds to a process at step S0707 inFIG. 7 . Furthermore, thesmartphone 2 outputs an output image and thereby accepts an operation of a user at step S0708 inFIG. 7 . - At step S1807, the
smartphone 2 executes determination as to whether an operation of a user is executed for an editing image or a changing image. A process at step S1807 corresponds to a process at step S0709 inFIG. 7 . Thesmartphone 2 executes determination as to whether an operation of a user at step S0708 inFIG. 7 is executed for an editing image or a changing image. - In a case where determination is provided in such a manner that an operation of a user is executed for a changing image (a changing image at step S1807), the
smartphone 2 goes to step S1808. In a case where determination is provided in such a manner that an operation of a user is executed for an editing image (an editing image at step S1807), thesmartphone 2 goes to step S1810. - At step S1808, the
smartphone 2 executes a process for calculating an amount of movement of a predetermined area due to an operation. A process at step S1808 corresponds to a process at step S0710 inFIG. 7 . At step S1808, thesmartphone 2 calculates an amount of movement for moving a predetermined area based on a swipe operation that is executed by a user and changes such a predetermined area. - At step S1809, the
smartphone 2 executes a process for updating a predetermined area. A process at step S1809 corresponds to a process at step S0711 inFIG. 7 . At step S1809, thesmartphone 2 moves a predetermined area T inFIG. 10A ,FIG. 10B ,FIG. 10C , andFIG. 10D to a position that corresponds to an amount of movement calculated at step S1808, and updates such a predetermined area T from a position of an initial image to a position that corresponds to a swipe operation of a user. - At step S1810, the
smartphone 2 executes a process for calculating, and three-dimensionally projecting, coordinates that are objects for an operation. A process at step S1810 corresponds to a process at step S0712 inFIG. 7 . At step S1810, thesmartphone 2 calculates coordinates on an all celestial sphere image that corresponds to a point specified by a tap operation of a user. - At step S1811, the
smartphone 2 executes a process for calculating a pixel that is an object for blurring. For example, thesmartphone 2 has an editing state table that causes flag data as to whether or not an object for blurring is provided, to correspond to each pixel. An editing state table represents whether or not each pixel is output in a blur state. Thesmartphone 2 refers to an editing state table, determines whether or not each pixel in an output image is output in a blur state, and outputs an image. That is, a process at step S1811 is a process for updating an editing state table. In a case where an operation for either blurring as illustrated inFIG. 17A or cancellation as illustrated inFIG. 17B is provided in a tap operation of a user, thesmartphone 2 updates an editing state table based on such an operation. - At step S1812, the
smartphone 2 executes a process for producing an editing image. A process at step S1812 corresponds to a process at step S0714 inFIG. 7 . - At step S1813, the
smartphone 2 executes a process for producing a changing image. A process at step S1813 corresponds to a process at step S0715 inFIG. 7 . - Due to processes at step S1812 and step S1813, the
smartphone 2 produces an output image and executes an output to a user. - The
smartphone 2 returns to step S1807 and repeats previously illustrated processes. - In a case where an object for blurring is provided based on an editing state table at processes at step S1812 and step S1813, the
smartphone 2 executes, for example, a blurring process as illustrated in FIG. 17A and an output. - An image that is output to a user by the
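- The editing state table can be sketched as a per-pixel flag array, for example as follows; the circular editing range and its radius are illustrative assumptions, not part of the original description.

```python
import numpy as np

class EditingStateTable:
    """Per-pixel flag data indicating whether a pixel is output in a blurred
    state; updated by blur operations (FIG. 17A) and cancellations (FIG. 17B).
    """
    def __init__(self, width: int, height: int):
        self.flags = np.zeros((height, width), dtype=bool)

    def _mask(self, cx: int, cy: int, radius: int) -> np.ndarray:
        ys, xs = np.ogrid[:self.flags.shape[0], :self.flags.shape[1]]
        return (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2

    def apply_blur(self, cx: int, cy: int, radius: int = 20) -> None:
        self.flags[self._mask(cx, cy, radius)] = True    # blur editing

    def cancel_blur(self, cx: int, cy: int, radius: int = 20) -> None:
        self.flags[self._mask(cx, cy, radius)] = False   # cancellation editing
```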
smartphone 2 is output at 30 or more frames per 1 second in such a manner that such a user feels smooth reproduction of an animation. It is desirable for thesmartphone 2 to execute an output at 60 or more frames per 1 second in such a manner that a user feels particularly smooth reproduction. Here, a frame rate of an output may be such that 60 frames per 1 second is changed to, for example, 59.94 frames per 1 second. - Here, processes at step S1812 and step S1813 are not limited to processes for causing the
smartphone 2 to execute a blurring process and an output. - For example, the
smartphone 2 has an image provided by preliminarily applying a blurring process to all of pixels of an image to be output and an image provided by applying no blurring process. Thesmartphone 2 outputs each pixel by simultaneously selecting an image provided by executing a blurring process based on an editing state table or an image provided by executing no blurring process. It is possible for thesmartphone 2 to reduce an amount of calculation for outputting an image by preliminarily executing a blurring process. That is, it is possible for thesmartphone 2 to realize a high-speed image output such as 60 frames per 1 second by executing selection and a simultaneous output of each pixel. - Furthermore, for example, in a case where each pixel is selected and a simultaneously output, the
smartphone 2 may store an output image. In a case where a user does not execute an editing operation, thesmartphone 2 outputs a stored image. Due to storage, a process for selecting and producing each pixel of an image to be output is not required, and hence, it is possible for thesmartphone 2 to reduce an amount of calculation. Therefore, thesmartphone 2 stores an output image, and thereby, it is possible to realize a high-speed image output such as 60 frames per 1 second. - Here, an output image is not limited to an image illustrated in
FIG. 11 or the like. For example, a shape, a position, a size, or a range of an editing image or a changing image may be changed. -
FIG. 19A andFIG. 19B are diagrams for illustrating one example of changing of an output such as a position, a direction, or the like, of a changing image according to one embodiment of the present invention. - An information processing device that is one example of a device for displaying an output image is, for example, the
smartphone 2. Thesmartphone 2 will be described as an example below. -
FIG. 19A is a diagram that illustrates one example of changing of an attitude of thesmartphone 2 according to one embodiment of the present invention. - For example, a changing image is output to a
position 7 before changing as illustrated inFIG. 19A . A case ofFIG. 19A will be described as an example below. - For example, an attitude of the
smartphone 2 is changed by a user in a direction of rotation as illustrated inFIG. 19A . An attitude of thesmartphone 2 is detected by the state sensor 2H4 inFIG. 6 . Thesmartphone 2 rotates and outputs an output image based on an attitude of thesmartphone 2 that is a result of detection. Thesmartphone 2 may change a position or a direction of an area for outputting a changing image based on a result of detection. -
FIG. 19B is a diagram that illustrates one example that changes a position or a direction of an area where a changing image is displayed based on a result of detection, according to one embodiment of the present invention. - In a case where a user rotates the
smartphone 2 as illustrated inFIG. 19A , thesmartphone 2 changes a position of an area for outputting a changing image from a position illustrated as theposition 7 before changing inFIG. 19A to a first changingposition 71 or a second changingposition 72. - Here, a changing image may be output on a condition that a direction for an output is changed based on a result of detection as illustrated in
FIG. 19B , that is, rotated from a state ofFIG. 19A to that as illustrated inFIG. 19B . - The
smartphone 2 changes a position or a direction of an area for outputting a changing image based on a result of detection. That is, it is possible for thesmartphone 2 to output an image at a position or in a direction for facilitating an operation of a user even if an attitude of thesmartphone 2 is changed, in order to output such an image in accordance with such an attitude. - Furthermore, for changing of a position or a direction of a changing image, the
smartphone 2 may display such a changing image so as to bounce during such changing. -
FIG. 20 is a block diagram that illustrates one example of a functional configuration of an image taking system according to one embodiment of the present invention. - The
image taking system 10 has the image taking device 1 and the smartphone 2. The image taking system 10 has a first image taking part 1F1, a second image taking part 1F2, and an all celestial sphere image production part 1F3. The image taking system 10 has an image acquisition part 2F1, a production part 2F2, an input/output part 2F3, a detection part 2F4, a storage part 2F5, and a control part 2F6. - The first image taking part 1F1 and the second image taking part 1F2 take and produce images that are materials of an all celestial sphere image. The first image taking part 1F1 is realized by, for example, the front side image taking element 1H1 in
FIG. 2A ,FIG. 2B , andFIG. 2C or the like. The second image taking part 1F2 is realized by, for example, the back side image taking element 1H2 inFIG. 2A ,FIG. 2B , andFIG. 2C or the like. An image that is a material of an all celestial sphere image is, for example, a hemispherical image as illustrated inFIG. 4A orFIG. 4B . - The all celestial sphere image production part 1F3 produces an image that is output to the
smartphone 2, such as an all celestial sphere image. The all celestial sphere image production part 1F3 is realized by, for example, the image processing unit 1H7 inFIG. 5 or the like. The all celestial sphere image production part 1F3 produces an all celestial sphere image from hemispherical images that are taken by the first image taking part 1F1 and the second image taking part 1F2. - The image acquisition part 2F1 acquires image data such as an all celestial sphere image from the
image taking device 1. The image acquisition part 2F1 is realized by, for example, the network I/F 2H6 inFIG. 6 or the like. The image acquisition part 2F1 executes a process for causing thesmartphone 2 to acquire image data such as an all celestial sphere image. - The production part 2F2 executes a process for producing each kind of image and each kind of calculation necessary for production of an image. The production part 2F2 has a changing image production part 2F21 and an editing image production part 2F22. The production part 2F2 is realized by the CPU 2H5 in
- The production part 2F2 executes a process for producing each kind of image and each kind of calculation necessary for production of an image. The production part 2F2 has a changing image production part 2F21 and an editing image production part 2F22. The production part 2F2 is realized by the CPU 2H5 in FIG. 6 or the like.
- The changing image production part 2F21 executes a process for producing a changing image. The changing image production part 2F21 acquires, for example, image data and an editing state table from the storage part 2F5, and produces a changing image based on the acquired editing state table and image data.
- The editing image production part 2F22 executes a process for producing an editing image. The editing image production part 2F22 acquires, for example, image data and an editing state table from the storage part 2F5, and produces an editing image based on the acquired editing state table and image data.
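- A simplified sketch of these two production parts is shown below. The editing image is approximated by an axis-aligned crop of the equirectangular image instead of a true perspective projection of the predetermined area T, and the fields of the editing state table are illustrative assumptions.

```python
# Minimal sketch (assumed table fields): produce an editing image and a changing
# image from an all celestial sphere image and an editing state table entry.
import numpy as np

def produce_images(image: np.ndarray, state: dict) -> tuple[np.ndarray, np.ndarray]:
    """image: H x W x 3 all celestial sphere image; state: editing state table entry."""
    # Editing image: the area currently selected for editing (predetermined area T).
    top, left = state["top"], state["left"]
    editing_image = image[top:top + state["height"], left:left + state["width"]].copy()
    # Changing image: the whole image reduced, used to move or resize the area T.
    step = state.get("reduction", 4)
    changing_image = image[::step, ::step].copy()
    return editing_image, changing_image

if __name__ == "__main__":
    img = np.zeros((512, 1024, 3), dtype=np.uint8)
    table_entry = {"top": 128, "left": 256, "height": 256, "width": 512, "reduction": 4}
    editing, changing = produce_images(img, table_entry)
    print(editing.shape, changing.shape)   # (256, 512, 3) (128, 256, 3)
```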
- The production part 2F2 calculates coordinates associated with an operation in a case where a user executes a tap or swipe operation, and stores the coordinates as an editing state table. Furthermore, an image produced by the production part 2F2 may be stored in the storage part 2F5 and retrieved as a process requires.
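- The editing state table itself might be pictured as a simple in-memory structure that records, for each tap or swipe, the pixels that become targets of the blurring process. The layout below is an assumption for illustration, not the table format actually used by the storage part 2F5.

```python
# Minimal sketch (assumed structure): record tap/swipe coordinates as the set of
# pixels where the blurring process is to be executed.
class EditingStateTable:
    def __init__(self) -> None:
        self.blur_pixels: set[tuple[int, int]] = set()

    def record_tap(self, x: int, y: int, radius: int = 10) -> None:
        """Mark a disc of pixels around a tapped coordinate as blurring targets."""
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dx * dx + dy * dy <= radius * radius:
                    self.blur_pixels.add((x + dx, y + dy))

    def record_swipe(self, points: list[tuple[int, int]], radius: int = 10) -> None:
        """A swipe is recorded as taps along its sampled coordinates."""
        for x, y in points:
            self.record_tap(x, y, radius)
```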
- The production part 2F2 may produce each kind of image based on a result of detection that is acquired from the detection part 2F4.
- The input/output part 2F3 executes a process for inputting an operation of a user. The input/output part 2F3 also executes a process for outputting, to a user, an image produced by the production part 2F2. The input/output part 2F3 is realized by, for example, the input/output device 2H3 in FIG. 6 or the like.
- The detection part 2F4 executes a process for detecting an attitude of the smartphone 2. The detection part 2F4 is realized by, for example, the state sensor 2H4 in FIG. 6 or the like.
- The storage part 2F5 stores each kind of information acquired or produced by the smartphone 2. The storage part 2F5 has, for example, an editing state table storage part 2F51 and an image storage part 2F52. The storage part 2F5 is realized by, for example, the auxiliary storage device 2H1 or the main storage device 2H2 in FIG. 6 or the like.
- The editing state table storage part 2F51 stores data of a table that represents a pixel where a blurring process is executed.
- The image storage part 2F52 stores an all celestial sphere image acquired by the image acquisition part 2F1, an output image produced by the production part 2F2, and the like.
- The control part 2F6 controls each kind of component that is provided in the smartphone 2. The control part 2F6 controls each kind of component, and thereby realizes each kind of process, a process for assisting each kind of process, and the like. The control part 2F6 is realized by, for example, the CPU 2H5 in FIG. 6 or the like.
- Here, the entire process is not limited to the case illustrated in FIG. 7. For example, a part or an entirety of each process may be processed by a device other than the devices illustrated in FIG. 7.
- The smartphone 2 produces an editing image and a changing image based on an all celestial sphere image acquired from the image taking device 1 or the like. An editing image is an image for outputting a predetermined area that is determined by a predetermined area T, and it causes a user to execute an editing operation such as blurring or cancellation of blurring. A changing image is an image for causing a user to execute an operation for changing a position, a size, or a range of the predetermined area T, or the like. The smartphone 2 outputs an output image that has at least an editing image and a changing image. Because an output image has both an editing image and a changing image, the smartphone 2 can cause a user to execute editing such as blurring and, at the same time, change the area output in the editing image by means of the changing image. Therefore, in a case where a user executes a blurring operation for an all celestial sphere image or the like, the smartphone 2 can output an image that facilitates the operation. Hence, by outputting an output image that has an editing image and a changing image, the smartphone 2 enables a user to readily execute an operation on an image.
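- A minimal sketch of such an output image, assuming a simple vertical layout in which the editing image is placed above the changing image, is given below; the layout is an illustrative choice rather than one prescribed by the embodiment.

```python
# Minimal sketch (assumed layout): stack the editing image above the changing
# image to form one output image.
import numpy as np

def compose_output_image(editing_image: np.ndarray, changing_image: np.ndarray) -> np.ndarray:
    """Both inputs are H x W x 3 arrays; the narrower one is padded on the right."""
    width = max(editing_image.shape[1], changing_image.shape[1])

    def pad_to(img: np.ndarray, w: int) -> np.ndarray:
        pad = w - img.shape[1]
        return np.pad(img, ((0, 0), (0, pad), (0, 0))) if pad > 0 else img

    return np.vstack([pad_to(editing_image, width), pad_to(changing_image, width)])
```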
- Here, the smartphone 2 may be realized by a computer-executable program described in a legacy programming language such as Assembler, C, C++, C#, or Java (“Java” is a registered trademark), an object-oriented programming language, or the like. A program can be stored in and distributed by a recording medium such as a ROM, an Electrically Erasable Programmable ROM (EEPROM), an Erasable Programmable ROM (EPROM), a flash memory, a flexible disk, a CD-ROM, a CD-RW, a DVD-ROM, a DVD-RAM, or a DVD-RW. A program can also be stored in a device-readable recording medium such as a Blu-Ray disk (“Blu-Ray disk” is a registered trademark), an SD (“SD” is a registered trademark) card, or an MO, or distributed through a telecommunication line.
- Here, an image in an embodiment is not limited to a still image. For example, an image may be an animation.
- Furthermore, a part or an entirety of each process in an embodiment may be realized by, for example, a programmable device (PD) such as a field programmable gate array (FPGA). Moreover, a part or an entirety of each process in an embodiment may be realized by an Application Specific Integrated Circuit (ASIC).
- Although preferable practical examples of the present invention have been described in detail above, the present invention is not limited to such particular embodiments and a variety of alterations and modifications are possible within a scope of an essence of the present invention as recited in what is claimed.
- <An illustrative embodiment(s) of an information processing method, an information processing device, and a program>
- At least one illustrative embodiment of the present invention may relate to an information processing method, an information processing device, and a program.
- At least one illustrative embodiment of the present invention may aim at facilitating execution of an image operation for a user.
- According to at least one illustrative embodiment of the present invention, there may be provided an information processing method that causes a computer to process an image, characterized by causing the computer to execute an acquisition step for acquiring the image, a production step for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output step for outputting an output image that has at least the editing image and the changing image.
- Illustrative embodiment (1) is an information processing method for causing a computer to process an image, wherein the information processing method causes the computer to execute an acquisition step for acquiring the image, a production step for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output step for outputting an output image that has at least the editing image and the changing image.
- Illustrative embodiment (2) is the information processing method as described in illustrative embodiment (1), wherein an editing area input step for acquiring an editing area that is a target area of the editing by using the editing image and an editing step for editing the editing area are executed.
- Illustrative embodiment (3) is the information processing method as described in illustrative embodiment (2), wherein the editing step is a step for blurring the editing area.
- Illustrative embodiment (4) is the information processing method as described in illustrative embodiment (3), wherein a blurred image is produced by executing a blurring process on an acquisition image acquired in the acquisition step, and the output image is output by selecting a pixel of the blurred image for the editing area and a pixel of the acquisition image for an area other than the editing area (a minimal sketch of this pixel selection follows this list of embodiments).
- Illustrative embodiment (5) is the information processing method as described in any one of illustrative embodiments (2) to (4), wherein a specified area input step for acquiring a specifying area for specifying an area of a part or an entirety of an image output with the editing image and a cancellation step for canceling the editing process executed for the specifying area are executed.
- Illustrative embodiment (6) is the information processing method as described in any one of illustrative embodiments (1) to (5), wherein an operation input step for acquiring an operation for changing, enlarging, or reducing the predetermined area that is output with the editing image by using the changing image is executed.
- Illustrative embodiment (7) is the information processing method as described in illustrative embodiment (6), wherein a determination step for determining a view point position and a view angle is executed based on the operation and the determination changes one of the view point position and the view angle based on an area indicated by the operation.
- Illustrative embodiment (8) is the information processing method as described in any one of illustrative embodiments (1) to (7), wherein a detection step for detecting an attitude of a device that displays the output image and a changing step for changing a position or direction of the changing image based on a result of detection by the detection step are executed.
- Illustrative embodiment (9) is an information processing device that processes an image, wherein the information processing device has an acquisition means for acquiring the image, a production means for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output means for outputting an output image that has at least the editing image and the changing image.
- Illustrative embodiment (10) is a program for causing a computer to process an image, wherein the program causes the computer to execute an acquisition step for acquiring the image, a production step for producing an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output, and an output step for outputting an output image that has at least the editing image and the changing image.
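- A minimal sketch of the pixel selection described in illustrative embodiment (4) follows; the box blur and its kernel size are assumptions made for the example and are not values specified by the embodiments.

```python
# Minimal sketch (assumed blur): output pixels come from the blurred image inside
# the editing area and from the acquisition image everywhere else.
import numpy as np

def box_blur(image: np.ndarray, k: int = 5) -> np.ndarray:
    """Small box blur; image is H x W x 3, k is the (odd) kernel size."""
    pad = k // 2
    padded = np.pad(image.astype(np.float32), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return (out / (k * k)).astype(image.dtype)

def apply_editing_area(acquired: np.ndarray, editing_mask: np.ndarray) -> np.ndarray:
    """editing_mask: H x W boolean array that is True inside the editing area."""
    blurred = box_blur(acquired)
    return np.where(editing_mask[..., None], blurred, acquired)
```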
- According to at least an illustrative embodiment of the present invention, it may be possible to facilitate execution of an image operation for a user.
- Although the illustrative embodiment(s) and specific example(s) of the present invention have been described with reference to the accompanying drawings, the present invention is not limited to any of the illustrative embodiment(s) and specific example(s) and the illustrative embodiment(s) and specific example(s) may be altered, modified, or combined without departing from the scope of the present invention.
- All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority or inferiority of the invention. Although an information processing method has been described in detail, it should be understood that various changes, substitutions, and alterations could be made thereto without departing from the spirit and scope of the invention.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/372,764 US20190228500A1 (en) | 2014-03-18 | 2019-04-02 | Information processing method, information processing device, and program |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014054782A JP5835383B2 (en) | 2014-03-18 | 2014-03-18 | Information processing method, information processing apparatus, and program |
JP2014-054782 | 2014-03-18 | ||
PCT/JP2015/057609 WO2015141605A1 (en) | 2014-03-18 | 2015-03-10 | Information processing method, information processing device, and program |
US14/924,871 US9760974B2 (en) | 2014-03-18 | 2015-10-28 | Information processing method, information processing device, and program |
US15/671,338 US10304157B2 (en) | 2014-03-18 | 2017-08-08 | Information processing method, information processing device, and program |
US16/372,764 US20190228500A1 (en) | 2014-03-18 | 2019-04-02 | Information processing method, information processing device, and program |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/671,338 Continuation US10304157B2 (en) | 2014-03-18 | 2017-08-08 | Information processing method, information processing device, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190228500A1 true US20190228500A1 (en) | 2019-07-25 |
Family
ID=54144574
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/924,871 Active US9760974B2 (en) | 2014-03-18 | 2015-10-28 | Information processing method, information processing device, and program |
US15/671,338 Active US10304157B2 (en) | 2014-03-18 | 2017-08-08 | Information processing method, information processing device, and program |
US16/372,764 Abandoned US20190228500A1 (en) | 2014-03-18 | 2019-04-02 | Information processing method, information processing device, and program |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/924,871 Active US9760974B2 (en) | 2014-03-18 | 2015-10-28 | Information processing method, information processing device, and program |
US15/671,338 Active US10304157B2 (en) | 2014-03-18 | 2017-08-08 | Information processing method, information processing device, and program |
Country Status (6)
Country | Link |
---|---|
US (3) | US9760974B2 (en) |
EP (1) | EP3120327A4 (en) |
JP (1) | JP5835383B2 (en) |
CN (2) | CN110456967B (en) |
CA (1) | CA2941469C (en) |
WO (1) | WO2015141605A1 (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016002578A1 (en) * | 2014-07-04 | 2016-01-07 | ソニー株式会社 | Image processing device and method |
JP6518069B2 (en) * | 2015-01-09 | 2019-05-22 | キヤノン株式会社 | Display device, imaging system, display device control method, program, and recording medium |
JP5987931B2 (en) | 2015-02-09 | 2016-09-07 | 株式会社リコー | Video display system, information processing apparatus, video display method, video display program, video processing apparatus, video processing method, and video processing program |
USD791146S1 (en) * | 2015-09-25 | 2017-07-04 | Sz Dji Osmo Technology Co., Ltd. | Display screen or portion thereof with animated graphical user interface |
US10732809B2 (en) | 2015-12-30 | 2020-08-04 | Google Llc | Systems and methods for selective retention and editing of images captured by mobile image capture device |
CN108702441A (en) | 2016-02-24 | 2018-10-23 | 株式会社理光 | Image processing equipment, image processing system and program |
TWI567691B (en) * | 2016-03-07 | 2017-01-21 | 粉迷科技股份有限公司 | Method and system for editing scene in three-dimensional space |
WO2018043135A1 (en) * | 2016-08-31 | 2018-03-08 | ソニー株式会社 | Information processing device, information processing method, and program |
JP7133900B2 (en) * | 2016-11-17 | 2022-09-09 | 株式会社Nttファシリティーズ | Shooting position specifying system, shooting position specifying method, and program |
JP6885158B2 (en) * | 2017-03-31 | 2021-06-09 | 株式会社リコー | Image processing device, photographing device, and image processing method |
JP6331178B1 (en) * | 2017-05-12 | 2018-05-30 | パナソニックIpマネジメント株式会社 | Image processing apparatus and image processing method |
JP7122729B2 (en) * | 2017-05-19 | 2022-08-22 | 株式会社ユピテル | Drive recorder, display device and program for drive recorder |
JP2019099219A (en) * | 2017-12-01 | 2019-06-24 | 株式会社イオグランツ | Package body |
CN110278368A (en) * | 2018-03-15 | 2019-09-24 | 株式会社理光 | Image processing apparatus, camera chain, image processing method |
JP7268372B2 (en) * | 2019-01-31 | 2023-05-08 | 株式会社リコー | Imaging device |
US11436776B2 (en) * | 2019-03-15 | 2022-09-06 | Canon Kabushiki Kaisha | Information processing apparatus and control method thereof |
CN117616447A (en) | 2021-07-09 | 2024-02-27 | 三星电子株式会社 | Electronic device and operation method thereof |
JP7492497B2 (en) * | 2021-12-27 | 2024-05-29 | 株式会社コロプラ | PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030103063A1 (en) * | 2001-12-03 | 2003-06-05 | Tempest Microsystems | Panoramic imaging and display system with canonical magnifier |
US20070046804A1 (en) * | 2005-08-30 | 2007-03-01 | Olympus Corporation | Image capturing apparatus and image display apparatus |
US20090160996A1 (en) * | 2005-11-11 | 2009-06-25 | Shigemitsu Yamaoka | Image processing device, image processing method, program thereof, recording medium containing the program, and imaging device |
US20100162163A1 (en) * | 2008-12-18 | 2010-06-24 | Nokia Corporation | Image magnification |
US20120200665A1 (en) * | 2009-09-29 | 2012-08-09 | Sony Computer Entertainment Inc. | Apparatus and method for displaying panoramic images |
US20140184653A1 (en) * | 2012-12-28 | 2014-07-03 | Samsung Display Co., Ltd. | Image processing device and display device having the same |
US20140184858A1 (en) * | 2013-01-03 | 2014-07-03 | Samsung Electronics Co., Ltd. | Apparatus and method for photographing image in camera device and portable terminal having camera |
US20140194164A1 (en) * | 2013-01-04 | 2014-07-10 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US9025049B2 (en) * | 2009-08-13 | 2015-05-05 | Fujifilm Corporation | Image processing method, image processing apparatus, computer readable medium, and imaging apparatus |
US9124762B2 (en) * | 2012-12-20 | 2015-09-01 | Microsoft Technology Licensing, Llc | Privacy camera |
US9210321B2 (en) * | 2013-12-05 | 2015-12-08 | Here Global B.V. | Method and apparatus for a shutter animation for image capture |
US9253415B2 (en) * | 2013-11-27 | 2016-02-02 | Adobe Systems Incorporated | Simulating tracking shots from image sequences |
US9270883B2 (en) * | 2012-11-30 | 2016-02-23 | Canon Kabushiki Kaisha | Image processing apparatus, image pickup apparatus, image pickup system, image processing method, and non-transitory computer-readable storage medium |
US9467654B2 (en) * | 2012-05-18 | 2016-10-11 | Ricoh Company, Limited | Video-conference terminal device, video-conference system, image distortion correction method, and image distortion correction processing program product |
Family Cites Families (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5341466A (en) * | 1991-05-09 | 1994-08-23 | New York University | Fractal computer user centerface with zooming capability |
US6121966A (en) * | 1992-11-02 | 2000-09-19 | Apple Computer, Inc. | Navigable viewing system |
JPH06325144A (en) | 1993-05-12 | 1994-11-25 | Toppan Printing Co Ltd | Layout design system |
JPH10340075A (en) | 1997-06-06 | 1998-12-22 | Matsushita Electric Ind Co Ltd | Image display method |
US7620909B2 (en) * | 1999-05-12 | 2009-11-17 | Imove Inc. | Interactive image seamer for panoramic images |
US7149549B1 (en) * | 2000-10-26 | 2006-12-12 | Ortiz Luis M | Providing multiple perspectives for a venue activity through an electronic hand held device |
JP2002329212A (en) | 2001-05-02 | 2002-11-15 | Sony Corp | Device and method for information processing, recording medium, and program |
JP4439763B2 (en) * | 2001-07-04 | 2010-03-24 | 株式会社リコー | Image recording / reproducing system and image recording / reproducing method |
JP2003132362A (en) * | 2001-10-22 | 2003-05-09 | Sony Corp | Information communication system, information communication method and computer program |
JP3641747B2 (en) * | 2001-10-23 | 2005-04-27 | ヴイストン株式会社 | Image display method and image display apparatus |
JP2004254256A (en) * | 2003-02-24 | 2004-09-09 | Casio Comput Co Ltd | Camera apparatus, display method, and program |
FR2854265B1 (en) * | 2003-04-28 | 2006-05-19 | Snecma Moteurs | OPTIMIZING ERGONOMICS WHEN MOVING A VIRTUAL MANNEQUIN |
JP4635437B2 (en) | 2004-01-07 | 2011-02-23 | ソニー株式会社 | Electronic apparatus and image display method |
JP4756876B2 (en) | 2004-06-09 | 2011-08-24 | キヤノン株式会社 | Image display control device, image display control method, program, and storage medium |
JP4916237B2 (en) | 2005-09-16 | 2012-04-11 | 株式会社リコー | Image display apparatus, image display method, program for causing computer to execute the method, and image display system |
CN101305595B (en) * | 2005-11-11 | 2011-06-15 | 索尼株式会社 | Image processing device and image processing method |
EP1961205B1 (en) * | 2005-12-16 | 2019-06-19 | The 41st Parameter, Inc. | Methods and apparatus for securely displaying digital images |
US8477154B2 (en) * | 2006-03-20 | 2013-07-02 | Siemens Energy, Inc. | Method and system for interactive virtual inspection of modeled objects |
US7956847B2 (en) | 2007-01-05 | 2011-06-07 | Apple Inc. | Gestures for controlling, manipulating, and editing of media files using touch sensitive devices |
JP4841449B2 (en) | 2007-01-29 | 2011-12-21 | ソニー株式会社 | Imaging apparatus, image editing method, and program |
JP4345829B2 (en) * | 2007-03-09 | 2009-10-14 | ソニー株式会社 | Image display system, image display apparatus, image display method, and program |
US8259208B2 (en) * | 2008-04-15 | 2012-09-04 | Sony Corporation | Method and apparatus for performing touch-based adjustments within imaging devices |
US8214766B1 (en) | 2008-07-09 | 2012-07-03 | Adobe Systems Incorporated | Method and system for preview control for image adjustment |
EP2207342B1 (en) | 2009-01-07 | 2017-12-06 | LG Electronics Inc. | Mobile terminal and camera image control method thereof |
WO2011055451A1 (en) | 2009-11-06 | 2011-05-12 | パイオニア株式会社 | Information processing device, method therefor, and display device |
KR101662846B1 (en) | 2010-05-12 | 2016-10-06 | 삼성전자주식회사 | Apparatus and method for generating bokeh in out-of-focus shooting |
JP5589644B2 (en) * | 2010-07-27 | 2014-09-17 | 日本精機株式会社 | Peripheral image display device and display method thereof |
JP5645626B2 (en) | 2010-12-06 | 2014-12-24 | キヤノン株式会社 | Display control apparatus, display control method, program, and storage medium |
JP5701040B2 (en) * | 2010-12-14 | 2015-04-15 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
JP5678324B2 (en) | 2011-02-10 | 2015-03-04 | パナソニックIpマネジメント株式会社 | Display device, computer program, and display method |
JP5561214B2 (en) * | 2011-03-15 | 2014-07-30 | オムロン株式会社 | Image processing apparatus and image processing program |
US8898630B2 (en) * | 2011-04-06 | 2014-11-25 | Media Direct, Inc. | Systems and methods for a voice- and gesture-controlled mobile application development and deployment platform |
JP2012249175A (en) * | 2011-05-30 | 2012-12-13 | Olympus Imaging Corp | Imaging apparatus, display method, and program |
CN103186324A (en) | 2011-12-29 | 2013-07-03 | 富泰华工业(深圳)有限公司 | Image editing system and image editing method |
US8831371B2 (en) | 2012-03-02 | 2014-09-09 | Adobe Systems Incorporated | Methods and apparatus for applying blur patterns to images |
US8971623B2 (en) | 2012-03-06 | 2015-03-03 | Apple Inc. | Overlaid user interface tools for applying effects to image |
US8548778B1 (en) * | 2012-05-14 | 2013-10-01 | Heartflow, Inc. | Method and system for providing information from a patient-specific model of blood flow |
US10037820B2 (en) * | 2012-05-29 | 2018-07-31 | Medical Avatar Llc | System and method for managing past, present, and future states of health using personalized 3-D anatomical models |
JP6186775B2 (en) | 2012-05-31 | 2017-08-30 | 株式会社リコー | Communication terminal, display method, and program |
JP6006536B2 (en) * | 2012-06-01 | 2016-10-12 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and panoramic video display method |
JP5920057B2 (en) * | 2012-06-29 | 2016-05-18 | 株式会社リコー | Transmission device, image sharing system, transmission method, and program |
KR20140028311A (en) | 2012-08-28 | 2014-03-10 | 삼성전자주식회사 | Method for setting a selecting region and an electronic device thereof |
US20140062917A1 (en) | 2012-08-29 | 2014-03-06 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling zoom function in an electronic device |
JP6075066B2 (en) | 2012-12-28 | 2017-02-08 | 株式会社リコー | Image management system, image management method, and program |
US20150046299A1 (en) * | 2013-08-12 | 2015-02-12 | Sap Ag | Inventory Assessment with Mobile Devices |
WO2015030221A1 (en) * | 2013-08-28 | 2015-03-05 | Ricoh Company, Ltd. | Image processing apparatus, image processing method, and imaging system |
- 2014
  - 2014-03-18 JP JP2014054782A patent/JP5835383B2/en active Active
- 2015
  - 2015-03-10 WO PCT/JP2015/057609 patent/WO2015141605A1/en active Application Filing
  - 2015-03-10 CN CN201910830851.1A patent/CN110456967B/en active Active
  - 2015-03-10 EP EP15765011.0A patent/EP3120327A4/en not_active Withdrawn
  - 2015-03-10 CA CA2941469A patent/CA2941469C/en not_active Expired - Fee Related
  - 2015-03-10 CN CN201580013724.2A patent/CN106133794B/en active Active
  - 2015-10-28 US US14/924,871 patent/US9760974B2/en active Active
- 2017
  - 2017-08-08 US US15/671,338 patent/US10304157B2/en active Active
- 2019
  - 2019-04-02 US US16/372,764 patent/US20190228500A1/en not_active Abandoned
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030103063A1 (en) * | 2001-12-03 | 2003-06-05 | Tempest Microsystems | Panoramic imaging and display system with canonical magnifier |
US20070046804A1 (en) * | 2005-08-30 | 2007-03-01 | Olympus Corporation | Image capturing apparatus and image display apparatus |
US20090160996A1 (en) * | 2005-11-11 | 2009-06-25 | Shigemitsu Yamaoka | Image processing device, image processing method, program thereof, recording medium containing the program, and imaging device |
US8169527B2 (en) * | 2005-11-11 | 2012-05-01 | Sony Corporation | Apparatus selectively presenting distortion corrected image data |
US20100162163A1 (en) * | 2008-12-18 | 2010-06-24 | Nokia Corporation | Image magnification |
US9025049B2 (en) * | 2009-08-13 | 2015-05-05 | Fujifilm Corporation | Image processing method, image processing apparatus, computer readable medium, and imaging apparatus |
US20120200665A1 (en) * | 2009-09-29 | 2012-08-09 | Sony Computer Entertainment Inc. | Apparatus and method for displaying panoramic images |
US9467654B2 (en) * | 2012-05-18 | 2016-10-11 | Ricoh Company, Limited | Video-conference terminal device, video-conference system, image distortion correction method, and image distortion correction processing program product |
US9270883B2 (en) * | 2012-11-30 | 2016-02-23 | Canon Kabushiki Kaisha | Image processing apparatus, image pickup apparatus, image pickup system, image processing method, and non-transitory computer-readable storage medium |
US9124762B2 (en) * | 2012-12-20 | 2015-09-01 | Microsoft Technology Licensing, Llc | Privacy camera |
US20140184653A1 (en) * | 2012-12-28 | 2014-07-03 | Samsung Display Co., Ltd. | Image processing device and display device having the same |
US20140184858A1 (en) * | 2013-01-03 | 2014-07-03 | Samsung Electronics Co., Ltd. | Apparatus and method for photographing image in camera device and portable terminal having camera |
US20140194164A1 (en) * | 2013-01-04 | 2014-07-10 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US9253415B2 (en) * | 2013-11-27 | 2016-02-02 | Adobe Systems Incorporated | Simulating tracking shots from image sequences |
US9210321B2 (en) * | 2013-12-05 | 2015-12-08 | Here Global B.V. | Method and apparatus for a shutter animation for image capture |
Also Published As
Publication number | Publication date |
---|---|
US20160048942A1 (en) | 2016-02-18 |
CN106133794B (en) | 2021-11-23 |
CA2941469A1 (en) | 2015-09-24 |
CA2941469C (en) | 2018-05-08 |
EP3120327A1 (en) | 2017-01-25 |
CN110456967B (en) | 2023-05-02 |
JP2015176559A (en) | 2015-10-05 |
CN110456967A (en) | 2019-11-15 |
JP5835383B2 (en) | 2015-12-24 |
US20170337658A1 (en) | 2017-11-23 |
US9760974B2 (en) | 2017-09-12 |
EP3120327A4 (en) | 2017-01-25 |
WO2015141605A1 (en) | 2015-09-24 |
CN106133794A (en) | 2016-11-16 |
US10304157B2 (en) | 2019-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10304157B2 (en) | Information processing method, information processing device, and program | |
US9646404B2 (en) | Information processing method, information processing device, and program that facilitates image processing operations on a mobile device | |
US20130127901A1 (en) | Methods and Apparatus for Calibrating Focused Plenoptic Camera Data | |
JP6350695B2 (en) | Apparatus, method, and program | |
JP2018109946A (en) | Display device, program, and method for display | |
US10701286B2 (en) | Image processing device, image processing system, and non-transitory storage medium | |
JP6583486B2 (en) | Information processing method, information processing program, and information processing apparatus | |
US10785470B2 (en) | Image processing apparatus, image processing method, and image processing system | |
JP6128185B2 (en) | Apparatus, method, and program | |
JP6777208B2 (en) | program | |
US20220165021A1 (en) | Apparatus, system, method, and non-transitory medium | |
CN114157848B (en) | Projection device correction method, projection device correction device, storage medium and projection device | |
JP2016021267A (en) | Device, method, and program | |
JP2018109740A (en) | Display device, program, and method for display | |
JP2017224330A (en) | Device, method, and program | |
JP2024015890A (en) | Image processing system, image processing method, and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: RICOH COMPANY, LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IRIE, HIRONORI;TERASHITA, TOSHIYUKI;SASAKI, TOMOHIKO;REEL/FRAME:048766/0690 Effective date: 20151020 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |