
WO2005122553A1 - Image I/O device - Google Patents

Image I/O device

Info

Publication number
WO2005122553A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
projection
unit
imaging
light
Prior art date
Application number
PCT/JP2005/010470
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Uchigashima
Original Assignee
Brother Kogyo Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Brother Kogyo Kabushiki Kaisha filed Critical Brother Kogyo Kabushiki Kaisha
Publication of WO2005122553A1 (en)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00519 Constructional details not otherwise provided for, e.g. housings, covers
    • H04N 1/00562 Supporting the apparatus as a whole, e.g. stands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00795 Reading arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00795 Reading arrangements
    • H04N 1/00827 Arrangements for reading an image from an unusual original, e.g. 3-dimensional objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/04 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/04 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N 1/19 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
    • H04N 1/195 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a two-dimensional array or a combination of two-dimensional arrays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/04 Scanning arrangements
    • H04N 2201/0402 Arrangements not specific to a particular one of the scanning methods covered by groups H04N 1/04 - H04N 1/207
    • H04N 2201/0436 Scanning a picture-bearing surface lying face up on a support

Definitions

  • The present invention relates to an image input/output device that can image both a subject existing in the direction in which image signal light is projected by a projection unit and a subject existing in a direction different from that projection direction, and that can display and output arbitrary information in a desired direction.
  • Conventionally, there has been proposed an image capturing device provided with a projection unit that projects light onto a specific area of a surface so as to visually display the imageable area on that surface.
  • The projection means provided in such an image input/output device includes a liquid crystal panel that spatially modulates light emitted from a light source; the projection unit controls, pixel by pixel, the transmission of the light emitted from the light source using the liquid crystal panel, and projects spot light indicating the imaging area (see JP-A-8-32848, hereinafter referred to as Document 1).
  • In such a device, however, the projection means cannot be pointed in a direction that does not include the imaging direction of the subject; for example, information cannot be displayed and output on a desk using only the projection direction of the projection means.
  • The present invention has been made to solve the above problems. Its purpose is to provide an image input/output device that can image both a subject existing in the direction in which image signal light is projected by the projection means and a subject existing in a different direction, and that can display and output arbitrary information.
  • To achieve this, the present invention provides an image input/output device comprising: a light source unit that emits light; a spatial modulation unit that spatially modulates the light emitted from the light source unit and outputs image signal light; a projection unit that projects the image signal light output from the spatial modulation unit in a projection direction; and an imaging unit capable of imaging at least a subject existing in the projection direction and acquiring data of the captured image.
  • The imaging unit and the projection unit are provided so that the imaging direction of the imaging unit can be changed relative to the projection direction of the projection means, allowing an object existing in a direction different from the projection direction to be imaged by the imaging unit.
  • According to this configuration, the imaging direction of the imaging unit and the projection direction of the projection unit can be changed relative to each other into whatever directions the user desires, desired image signal light can be output by the spatial modulation unit, and that image signal light is projected by the projection means in the projection direction. Desired information can therefore be displayed and output by the projection means in a direction different from the imaging direction.
  • In other words, since the imaging direction of the imaging means and the projection direction of the projection means can be changed relative to each other while arbitrary image signal light is projected in the projection direction, the device can image a subject existing in the direction in which the image signal light is projected as well as a subject existing in a different direction, while displaying and outputting desired information in the projection direction.
  • FIGS. 1(a) and 1(b) are external perspective views of an image input/output device; FIG. 1(a) is a view from the top side, and FIG. 1(b) is an enlarged view of the imaging head. FIGS. 2(a) to 2(e) are views showing the internal configuration of the imaging head.
  • FIG. 3 is a diagram schematically showing an electric configuration inside the image input / output device.
  • FIG. 4 (a) is an enlarged view of an image projection unit
  • FIG. 4 (b) is a plan view of a light source lens
  • FIG. 4 (c) is a front view of a projection LCD.
  • FIGS. 5 (a) to 5 (c) are views for explaining the arrangement of LED arrays.
  • FIG. 6 is an electrical block diagram of the image input / output device.
  • FIG. 7 is a flowchart of a main process.
  • FIGS. 8(a) and 8(b) are diagrams showing the relative positional relationship between an image projection unit and an image imaging unit.
  • FIG. 9 is a flowchart of digital camera processing.
  • FIG. 10 is a diagram showing a state in which rectangular image light indicating an imageable area is projected on a surface.
  • FIG. 11 is a flowchart of webcam processing.
  • FIG. 12 is a diagram showing a state in which image output light is projected by an image projection unit in a projection process of webcam processing.
  • FIG. 13 is a diagram showing a state in which image output light is projected by an image projection unit in a projection process of webcam processing.
  • FIG. 14 is a flowchart of a projection process.
  • FIG. 15 is a flowchart of stereoscopic image processing.
  • FIG. 16(a) is a diagram for explaining the principle of the spatial code method, and FIG. 16(b) is a diagram showing a mask pattern (gray code) different from that of FIG. 16(a).
  • FIG. 17 (a) is a flowchart of a three-dimensional shape detection process
  • FIG. 17 (b) is a flowchart of an imaging process
  • FIG. 17 (c) is a flowchart of a three-dimensional measurement process.
  • FIG. 18 is a diagram showing a state in which pattern light is projected when the relative positional relationship between the image projection unit and the image pickup unit is C mode.
  • FIG. 19 is a diagram for explaining the outline of the code boundary coordinate detection process.
  • FIG. 21 is a flowchart of a process for obtaining code boundary coordinates with subpixel accuracy.
  • FIG. 22 is a flowchart of a process for obtaining a boundary CCDY value for a luminance image having a mask pattern number of PatID [i].
  • FIG. 23 (a) to FIG. 23 (c) are diagrams for explaining a lens aberration correction process.
  • FIGS. 24 (a) and 24 (b) are diagrams for explaining a method of calculating three-dimensional coordinates in a three-dimensional space from coordinates in a CCD space.
  • FIG. 25 is a flowchart of flattening image processing.
  • FIGS. 26 (a) to 26 (c) are diagrams for explaining a document orientation calculation process.
  • FIG. 27 is a flowchart of a plane conversion process.
  • FIG. 28(a) is a diagram for explaining the outline of the curvature calculation process, and FIG. 28(b) is a diagram showing a flattened image produced by the plane conversion process.
  • FIG. 29 is a flowchart of an image conversion process for distortion-free projection.
  • FIG. 30 (a) is a side view showing a light source lens 50 of a second embodiment
  • FIG. 30 (b) is a plan view showing a light source lens 60 of the second embodiment.
  • FIG. 31 (a) is a perspective view showing a state in which the light source lens 50 is fixed
  • FIG. 31 (b) is a partial sectional view thereof.
  • FIG. 32 is a diagram showing another example of the pattern light projected on the subject.
  • FIGS. 33(a) to 33(c) are views showing a modification of the image projection unit; FIG. 33(a) is a plan view of the imaging head, and FIGS. 33(b) and 33(c) are sectional views taken along line A-A of FIG. 33(a).
  • FIG. 1A and FIG. 1B are external perspective views of an image input / output device 1 as an embodiment of the present invention.
  • FIG. 1A is a view of the image input / output device 1 as viewed from above, and
  • FIG. 1B is an enlarged view of the imaging head 2.
  • The image input/output device 1 has various modes: a digital camera mode functioning as a digital camera, a webcam mode functioning as a web camera, a stereoscopic image mode for detecting a three-dimensional shape to obtain a three-dimensional image, and a flattened image mode for obtaining a flattened image of a curved document or the like.
  • the image input / output device 1 is a device capable of projecting an arbitrary image. Further, the image input / output device 1 is configured so as to be able to capture not only a subject located in a direction in which an image is projected but also a subject located in a direction different from a direction in which an image is projected.
  • As shown in FIG. 1A, in the stereoscopic image mode or the flattened image mode in particular, in order to detect the three-dimensional shape of the subject P, pattern light (image signal light having a predetermined pattern), such as a stripe pattern of alternating light and dark, is projected from an image projection unit 13 described later.
  • The image input/output device 1 includes an imaging head 2 and an arm member 3 having one end detachably connected to the imaging head 2. The other end of the arm member 3 is provided with a mounting portion 4 with which the imaging head 2 and the arm member 3 can be mounted on a laptop personal computer 49 (hereinafter simply "PC 49").
  • the imaging head 2 has a case that internally includes an image projection unit 13 described later, and that holds a cylindrical imaging case 11 rotatably around an axis outside thereof.
  • a cylindrical lens barrel 5 is disposed at the center thereof.
  • the surface on which the lens barrel 5 is provided is referred to as the front surface of the imaging head 2.
  • the lens barrel 5 is a member that protrudes from the front of the imaging head 2 and includes a projection optical system 20 that is a part of the image projection unit 13 therein.
  • the projection optical system 20 is held by the lens barrel 5.
  • the lens barrel 5 allows the entire projection optical system 20 to move for focus adjustment.
  • a part of the lens of the projection optical system 20, which is a part of the image projection unit 13, is exposed to the outside from the end surface of the lens barrel 5, and the image signal light is projected from this exposed part toward the projection surface.
  • the imaging case 11 is provided with a white balance sensor 6 and a flash 7.
  • a part of the lens of the imaging optical system 21 which is a part of the image imaging unit 14 described later is exposed on the outer surface. A subject facing the imaging optical system 21 is imaged.
  • the flash 7 is a light source for supplementing necessary subject illuminance in the digital camera mode.
  • the flash 7 is composed of, for example, a discharge tube filled with xenon gas.
  • The flash 7 can be used repeatedly by discharging a capacitor (not shown) built into the imaging head 2.
  • a monitor LCD 10 is arranged on the back of the imaging head 2.
  • The monitor LCD 10 is configured by a liquid crystal display; it receives an image signal from a processor 15 described later and displays the image to the user.
  • the monitor LCD 10 displays a captured image in the digital camera mode or the webcam mode, a three-dimensional shape detection result image in the stereoscopic image mode, a flattened image in the flattened image mode, and the like.
  • a connecting member 12 for detachably connecting the imaging head 2 and the arm member 3 is arranged on a side surface of the imaging head 2.
  • The imaging head 2 of the image input/output device 1 can also be used on its own as a mobile digital camera.
  • the connecting member 12 is formed in a ring shape and is fixed to a side surface of the imaging head 2. Further, a portion to be fitted to the connecting member 12 is formed on one end side of the arm member 3. By this fitting, the imaging head 2 and the arm member 3 can be connected, and the imaging head 2 can be fixed to the PC 49 at an arbitrary angle. By releasing the fitting, the arm member 3 and the imaging head 2 can be separated.
  • The arm member 3 holds the imaging head 2 with respect to the PC 49 and can keep the imaging head 2 oriented in any desired direction relative to the PC 49.
  • The arm member 3 is formed of a bellows-shaped pipe that can be bent into a desired shape. Because the user can turn the imaging head 2 to a desired position, the direction in which the image input/output device 1 projects images and the direction in which it captures images can be set freely even while the imaging head 2 is attached to the PC 49.
  • As described above, one end of the arm member 3 is connected to the imaging head 2 via the connecting member 12, and the other end is provided with the mounting portion 4 for detachably attaching the arm member 3 to the PC 49.
  • The mounting portion 4 contains a battery 26, an electronic circuit board, and the like, which will be described later. Further, a release button 8, an on/off switch 9a, a mode switching switch 9b, and the connector of the personal computer interface 24 are provided on the front side of the mounting portion 4.
  • The on/off switch 9a is located above the release button 8, and the mode switching switch 9b is provided below the release button 8.
  • the image input / output device 1 is connected to a PC 49 via a cable (not shown) connected to the personal computer interface 24.
  • the release button 8 is constituted by a two-stage push button switch that can be set to two states, a “half-pressed state” and a “fully-pressed state”.
  • the state of the release button 8 is managed by a processor 15 described later.
  • When the release button 8 is in the "half-pressed state", the well-known autofocus (AF) and auto exposure (AE) functions are activated to adjust the focus, aperture, and shutter speed; an image is taken when the release button 8 reaches the "fully-pressed state".
  • the mode switching switch 9b is a switch that can be set to various modes such as a digital camera mode, a webcam mode, a stereoscopic image mode, a flattened image mode, and an off mode.
  • the state of the mode switching switch 9b is managed by the processor 15. When the state of the mode switching switch 9b is detected by the processor 15, the processing of each mode is executed.
  • FIGS. 2A to 2E show the internal configuration of the imaging head 2.
  • the imaging head 2 has a lid 2a and a bottom 2b.
  • the imaging case 11 is a cylindrical case that includes an image imaging unit 14 described below. On the outer peripheral surface of the imaging case 11, a part of the lens of the imaging optical system 21 which is a part of the image imaging unit 14 is exposed, and a plurality of ribs 11a are provided upright.
  • FIG. 2B is a diagram showing a state in which the lid 2a of the imaging head 2 and the image projection unit 13 contained in the imaging head 2 have been removed.
  • the bottom 2b is provided with an imaging head position detection sensor S1, an imaging unit position detection sensor S2, and a hole 2c.
  • The imaging head position detection sensor S1 is a two-axis acceleration sensor for detecting the inclination of the imaging head 2.
  • The imaging head position detection sensor S1 detects, about two axes, by how many degrees the projection direction of the image projection unit 13 fixed to the imaging head 2 (the optical axis direction of the projection optical system 20 described later) is inclined with respect to the vertical direction, and outputs the result as an electrical signal; a rough sketch of this computation follows below.
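  • As an illustration of how such a two-axis acceleration reading can be turned into a tilt angle, here is a minimal Python sketch; the axis naming and the formula are assumptions for illustration, not taken from the patent:

```python
import math

def tilt_from_accelerometer(ax: float, az: float) -> float:
    """Estimate the inclination of the projection axis from the vertical.

    ax, az: static accelerations (in g) along two orthogonal sensor axes.
    At rest only gravity is sensed, so the direction of the measured
    gravity vector gives the tilt. The axis naming is a hypothetical
    convention, not the patent's.
    """
    return math.degrees(math.atan2(ax, az))

# Example: ax = 0.5 g, az = 0.866 g -> about 30 degrees from vertical,
# i.e. right at the edge of the 0 +/- 30 degree band used later.
print(round(tilt_from_accelerometer(0.5, 0.866)))  # 30
```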
  • the imaging unit position detection sensor S2 is a sensor for detecting the position of the imaging unit 14 with respect to the imaging head 2.
  • the hole 2c is a substantially circular hole formed on the surface constituting the front surface of the imaging head 2, and the hole 2c is threaded.
  • the lens barrel 5 (see FIG. 1) is screwed into the hole 2c.
  • the image projection unit 13 included in the imaging head 2 projects an image from the lens barrel 5 movably screwed into the hole 2c.
  • The imaging case 11 has small-diameter portions 11b, cylindrical members smaller in diameter than the imaging case 11, on both end surfaces.
  • One of the small-diameter portions 11b is provided with an imaging unit position detecting rib 11c.
  • The small-diameter portions 11b are rotatably supported around an axis by notches provided in the bottom portion 2b.
  • Thus, the imaging case 11 is held so as to be rotatable relative to the imaging head 2.
  • The imaging unit position detecting rib 11c is fixed to one of the small-diameter portions 11b and rotates around the axis of the imaging case 11 together with the rotation of the imaging case 11.
  • The imaging unit position detection sensor S2 described above, fixed to the bottom portion 2b, is located near the imaging unit position detecting rib 11c.
  • By detecting the position of the imaging unit position detecting rib 11c with the imaging unit position detection sensor S2, the imaging direction of the image imaging unit 14 relative to the imaging head 2 is detected.
  • An electric signal for determining whether or not the projection direction and the imaging direction have a positional relationship within a predetermined range is output by the imaging unit position detection sensor S2.
  • FIG. 2 (c) is a diagram showing a state where the imaging case 11 has been removed from the bottom 2b
  • FIG. 2 (d) is a diagram showing the imaging case 11.
  • the bottom portion 2b includes a predetermined position fixing rib 2d and a rotation stopper 2e.
  • The predetermined position fixing rib 2d engages, under bias, with the small-diameter portion 11b of the imaging case 11, so that the imaging case 11 is fixed at a predetermined position in which the relative positional relationship between the image projection unit 13 and the image imaging unit 14 becomes a predetermined one.
  • The rotation stopper 2e is provided so as to engage with the plurality of ribs 11a standing on the outer peripheral surface of the imaging case 11. Through this engagement, the imaging case 11 can be rotated intermittently and fixed at a desired position.
  • FIG. 2E is a diagram showing an internal configuration of the imaging case 11.
  • Inside the imaging case 11, an imaging optical system 21 and a CCD 22 that converts light incident through the imaging optical system 21 into an electric signal are fixed.
  • The image imaging unit 14, composed of the imaging optical system 21 and the CCD 22, can image a subject facing it and acquire the captured image data. Further, by rotating the imaging case 11 around its axis relative to the imaging head 2, the imaging direction of the image imaging unit 14 fixed to the imaging case 11 is changed relative to the projection direction of the image projection unit 13 fixed to the imaging head 2.
  • Accordingly, the image imaging unit 14 can be made to face a subject existing in a direction different from the projection direction, so that it can image not only a subject existing in the direction in which the image signal light is projected by the image projection unit 13 but also a subject existing in a different direction.
  • One of the small-diameter portions 11b is formed in a tubular shape, and a signal cable is passed through it. This signal cable electrically connects the electrical components in the imaging case 11 with those in the imaging head 2.
  • FIG. 3 is a block diagram showing an electrical configuration inside the image input / output device 1.
  • the imaging head 2 includes an image projection unit 13 and an image imaging unit 14 as main components.
  • the mounting part 4 contains a processor 15 and a battery 26 as main components.
  • a signal cable is inserted inside the arm member 3.
  • the signal cable extends to the inside of the mounting portion 4 and connects the electrical components in the mounting portion 4 to the electrical components in the imaging head 2.
  • the image projection unit 13 is a unit for projecting a desired projection image on a projection surface.
  • The image projection unit 13 includes a substrate 16, a plurality of LEDs 17 (hereinafter collectively referred to as the "LED array 17A"), a light source lens 18, a projection LCD 19, and a projection optical system 20, arranged in a straight line along the optical axis 13a of the projection optical system 20.
  • the image capturing section 14 is a unit for capturing an image of the subject P.
  • Inside the image imaging unit 14, an imaging optical system 21 and a CCD 22 are arranged along the light input direction 14a (the optical axis of the imaging optical system 21).
  • The imaging optical system 21 includes a plurality of lenses and has a well-known autofocus function, with which it automatically adjusts the focal length and aperture and forms an image of external light on the CCD 22.
  • the CCD 22 has photoelectric conversion elements such as CCD (Charge Coupled Device) elements arranged in a matrix.
  • The CCD 22 generates a signal corresponding to the color and intensity of the light of the image formed on its surface via the imaging optical system 21, converts the signal into digital data, and outputs the data to the processor 15.
  • To the processor 15, the flash 7, the release button 8, the on/off switch 9a, the mode switching switch 9b, the personal computer interface 24, the power supply interface 25, the external memory 27, the cache memory 28, the light source driver 29, the projection LCD driver 30, and the CCD interface 31 are electrically connected.
  • Specifically, the battery 26 is connected to the processor 15 via the power supply interface 25, the LED array 17A via the light source driver 29, the projection LCD 19 via the projection LCD driver 30, and the CCD 22 via the CCD interface 31.
  • the external memory 27 is a removable flash ROM.
  • the external memory 27 stores captured images and three-dimensional information in digital camera mode, webcam mode, and stereoscopic image mode.
  • an SD card, an XD card (both are registered trademarks) or the like can be used as the external memory 27.
  • the cache memory 28 is a high-speed storage device.
  • the cache memory 28 is used, for example, as follows. Under the control of the processor 15, the captured image is transferred to the cache memory 28 at high speed, subjected to image processing, and stored in the external memory 27.
  • SDRAM, DDR RAM, or the like can be used as the cache memory 28.
  • the power supply interface 25, the light source driver 29, the projection LCD driver 30, and the CCD interface 31 are configured by an IC (Integrated Circuit).
  • The power supply interface 25, the light source driver 29, the projection LCD driver 30, and the CCD interface 31 control the battery 26, the LED array 17A, the projection LCD 19, and the CCD 22, respectively.
  • Because the main battery 26 that supplies drive power to the imaging head 2 is housed in the mounting portion 4, the weight of the imaging head 2 can be reduced, which makes it easy to change the direction of the imaging head 2 with respect to the PC 49. Moreover, since the imaging head 2 is lighter, the arm member 3 does not need as much strength as it would if the main battery 26 were provided in the imaging head 2.
  • FIG. 4A is an enlarged view of the image projection unit 13
  • FIG. 4B is a plan view of the light source lens 18, and
  • FIG. 4C shows an arrangement relationship between the projection LCD 19 and the CCD 22.
  • the image projection unit 13 includes the substrate 16, the LED array 17A, the light source lens 18, the projection LCD 19, and the projection optical system 20 along the projection direction.
  • The substrate 16 serves to mount the LED array 17A and to provide its electrical wiring. Specifically, an aluminum substrate with through holes, coated with an insulating resin and then patterned with copper by electroless plating, or a single-layer or multilayer substrate with a glass epoxy core, can be used as the substrate 16.
  • the LED array 17A is a light source that emits radial light.
  • In the LED array 17A, a plurality of LEDs 17 are arranged in a staggered manner. The LEDs 17 are bonded and electrically connected to the substrate 16 via silver paste, and are also electrically connected to the substrate 16 via bonding wires.
  • By using the LEDs 17 as the light source, the efficiency of converting electricity into light is higher than when an incandescent bulb, a halogen lamp, or the like is used, and the generation of infrared and ultraviolet rays can be suppressed. The light source can therefore be driven with low power consumption, achieving power saving and long life, and the temperature rise of the device can be reduced.
  • Further, resin lenses can be employed for the light source lens 18 and the projection optical system 20 described later, so the light source lens 18 and the projection optical system 20 can be configured more inexpensively and lightly than when glass lenses are employed.
  • Each of the LEDs 17 constituting the LED array 17A emits light of the same color. Each LED 17 uses the four elements Al, In, Ga, and P as materials and emits amber light. There is therefore no need to consider correcting the chromatic aberration that would occur with multiple emission colors, and no need to employ an achromatic lens as the projection optical system 20, which provides a projection means with a simple lens configuration and inexpensive materials.
  • In the present embodiment, the LED array 17A includes 59 LEDs 17, and each LED 17 is driven at 50 mW (20 mA, 2.5 V). All 59 LEDs 17 are therefore driven with a total power consumption of approximately 3 W (checked below).
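  • As a quick check of these figures (simple arithmetic on the numbers above, not an additional claim of the patent):

```latex
P_{\text{LED}} = 20\,\text{mA} \times 2.5\,\text{V} = 50\,\text{mW},
\qquad
P_{\text{total}} = 59 \times 50\,\text{mW} = 2.95\,\text{W} \approx 3\,\text{W}.
```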
  • The luminous power of each LED 17 is set so that the brightness of the luminous flux projected from the projection optical system 20 through the light source lens 18 and the projection LCD 19 is about 25 ANSI lumens even at full illumination.
  • the light source lens 18 is a lens as a light-collecting optical system that collects light emitted radially from the LED array 17A, and is made of an optical resin represented by acrylic.
  • The light source lens 18 includes: convex lens portions 18a that protrude toward the projection LCD 19 at positions facing the respective LEDs 17 of the LED array 17A; a base portion 18b that supports the lens portions 18a; a resin sealing material 18c of epoxy or silicone that fills the opening inside the base portion 18b containing the LED array 17A, sealing the LEDs 17 and bonding the substrate 16 to the light source lens 18; and positioning pins 18d that project from the base portion 18b toward the substrate 16 and connect the light source lens 18 to the substrate 16.
  • The light source lens 18 is fixed on the substrate 16 by inserting the positioning pins 18d into elongated holes formed in the substrate 16, with the LED array 17A enclosed inside the opening.
  • The light source lens 18 can therefore be arranged in a small space. Also, since the substrate 16 supports the light source lens 18 in addition to mounting the LED array 17A, no separate component for supporting the light source lens 18 is required, and the number of parts can be reduced.
  • each lens portion 18a is arranged at a position facing each LED 17 of the LED array 17A in a one-to-one relationship.
  • The radial light emitted from each LED 17 is efficiently condensed by the lens portion 18a facing it, and is radiated to the projection LCD 19 as highly directional light, as shown in the figure.
  • the reason why the directivity is increased in this way is that by injecting light substantially perpendicularly to the projection LCD 19, the in-plane transmittance unevenness due to the optical rotation of the liquid crystal can be suppressed.
  • Further, since the projection optical system 20 has telecentric characteristics and its incident NA is about 0.1, it is regulated so that only light within ±5° of vertical can pass through the internal aperture. It is therefore essential for image quality that the light from the LEDs 17 be emitted perpendicularly to the projection LCD 19 and that almost all of the luminous flux fall within ±5°. If light deviating from the vertical enters the projection LCD 19, the transmittance changes with the incident angle due to the optical rotation of the liquid crystal, resulting in transmittance unevenness.
  • the projection LCD 19 is a spatial modulation element that spatially modulates light condensed through the light source lens 18 and outputs a desired image signal light to the projection optical system 20.
  • The projection LCD 19 is composed of a plate-like liquid crystal display that is elongated in one direction (its height and width differ).
  • The pixels constituting the projection LCD 19 are arranged so that pixel rows running in a straight line along the longitudinal direction of the projection LCD 19 alternate, in parallel, with other pixel rows shifted by a predetermined distance in the longitudinal direction.
  • In FIG. 4(c), it is assumed that the front of the imaging head 2 faces the front side of the page, light is irradiated toward the projection LCD 19 from the back side of the page, and a subject image is formed on the CCD 22 from the front side of the page.
  • By arranging the pixels of the projection LCD 19 in this staggered manner along the longitudinal direction, the light spatially modulated by the projection LCD 19 can be controlled at a 1/2-pitch interval in the direction perpendicular to the longitudinal direction (the transverse direction). The projection pattern can therefore be controlled at a fine pitch, and the three-dimensional shape can be detected with high accuracy owing to the increased resolution.
  • The direction of the stripes of the pattern light is determined accordingly.
  • The projection LCD 19 and the CCD 22 are arranged in the relationship shown in FIG. 4(c). More specifically, since the wide surface of the projection LCD 19 and the wide surface of the CCD 22 face in substantially the same direction, when the image projected from the projection LCD 19 onto the projection surface is formed on the CCD 22, the projected image can be formed as it is without being bent by a half mirror or the like.
  • The CCD 22 is arranged on the longitudinal-direction side of the projection LCD 19 (in the direction in which the pixel rows extend). Therefore, particularly when detecting the three-dimensional shape of the subject using the principle of triangulation in the stereoscopic image mode, the pattern as seen from the CCD 22 can be controlled at a half pitch, so that the three-dimensional shape can likewise be detected with high accuracy.
  • The projection optical system 20 is a group of lenses that projects the image signal light that has passed through the projection LCD 19 toward the projection surface, and is formed as a telecentric lens combining glass and resin elements. Telecentric means that the principal ray passing through the projection optical system 20 is parallel to the optical axis in the space on the incident side, with the exit pupil at infinity. Being telecentric in this manner, only light that passes through the projection LCD 19 within ±5° of vertical is projected, as described above, so that image quality can be improved.
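  • As a consistency check (my arithmetic, not the patent's): an incident-side numerical aperture of about 0.1 corresponds to an acceptance half-angle of

```latex
\theta = \arcsin(\mathrm{NA}) = \arcsin(0.1) \approx 5.7^{\circ},
```

which agrees with the roughly ±5° passband stated for the internal aperture.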
  • FIGS. 5A to 5C are views for explaining the arrangement of the LED array 17A.
  • FIG. 5 (a) is a diagram showing an illuminance distribution of light passing through the light source lens 18
  • FIG. 5 (b) is a plan view showing an arrangement state of the LED array 17A
  • FIG. 5(c) is a diagram showing the composite illuminance distribution in this arrangement.
  • The light emitted from each LED 17 is designed to reach the surface of the projection LCD 19 with the illuminance distribution shown in FIG. 5(a).
  • the plurality of LEDs 17 are arranged in a staggered pattern on the substrate 16.
  • Specifically, rows of LEDs 17 arranged in series at a pitch d are placed in parallel at a row pitch of (√3/2)d, with every other row shifted by d/2 in the row direction relative to its neighboring rows.
  • In other words, the distance between any one LED 17 and each of the LEDs 17 surrounding it is d (a triangular lattice arrangement).
  • The length d is set to be equal to or less than the full width at half maximum (FWHM) of the illuminance distribution that the light emitted from even a single LED 17 forms on the projection LCD 19.
  • As a result, the combined illuminance distribution of the light passing through the light source lens 18 and reaching the surface of the projection LCD 19 becomes substantially flat, containing only small ripple, as shown in FIG. 5(c), and the surface is illuminated substantially uniformly. Illuminance unevenness on the projection LCD 19 can therefore be suppressed, and as a result a high-quality image can be projected; a rough numerical check follows below.
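  • The following is a minimal sketch (not from the patent) of why the condition d ≤ FWHM yields a nearly flat combined illuminance: it superposes identical spots on the staggered triangular lattice described above and measures the residual ripple along a line across the LCD surface. Modeling each LED's spot as a Gaussian is an assumption.

```python
import numpy as np

FWHM = 1.0                    # spot full width at half maximum (arbitrary units)
sigma = FWHM / 2.3548         # Gaussian sigma corresponding to that FWHM
d = FWHM                      # lattice pitch set to the boundary case d = FWHM

# Staggered lattice: rows at vertical pitch (sqrt(3)/2)*d, every other row
# shifted by d/2, so each LED's six neighbours sit at distance d.
centers = [(col * d + (row % 2) * d / 2, row * d * np.sqrt(3) / 2)
           for row in range(-6, 7) for col in range(-6, 7)]

x = np.linspace(-2 * d, 2 * d, 400)   # sample line through the middle of the array
intensity = sum(np.exp(-((x - cx) ** 2 + cy ** 2) / (2 * sigma ** 2))
                for cx, cy in centers)

ripple = (intensity.max() - intensity.min()) / intensity.mean()
print(f"peak-to-peak ripple: {ripple:.1%}")  # small; shrinks further for d < FWHM
```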
  • FIG. 6 is an electrical block diagram of the image input / output device 1. The description of the configuration already described above is omitted.
  • the processor 15 includes a CPU 35, a ROM 36, and a RAM 37.
  • the CPU 35 performs various kinds of processing using the RAM 37, according to the program stored in the ROM 36.
  • the processing performed under the control of the CPU 35 includes detection of a pressing operation of the release button 8, capture of image data from the CCD 22, transfer and storage of the image data, detection of the state of the mode switching switch 9b, and the like.
  • The ROM 36 stores a camera control program 36a, a pattern light photographing program 36b, a luminance image generation program 36c, a code image generation program 36d, a code boundary extraction program 36e, a lens aberration correction program 36f, a triangulation calculation program 36g, a document attitude calculation program 36h, and a plane conversion program 36i.
  • The camera control program 36a is a program for controlling the entire image input/output device 1, including the main process shown in FIG. 7.
  • The pattern light photographing program 36b is a program for imaging, in order to detect the three-dimensional shape of the subject P, both a state in which pattern light is projected on the subject and a state in which it is not projected.
  • The luminance image generation program 36c is a program that calculates, for each of the pattern-light-present image obtained by imaging the state where the pattern light is projected by the pattern light photographing program 36b and the pattern-light-absent image obtained by imaging the state where it is not projected, the Y value in the YCbCr space from the RGB values of the image.
  • a plurality of types of pattern light are projected in time series and imaged for each pattern light to generate a plurality of types of luminance images.
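  • A minimal sketch of the Y-value computation: the patent names the YCbCr space but not its exact coefficients, so the standard ITU-R BT.601 luma weights are assumed here:

```python
import numpy as np

def luminance_image(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image into its Y (luma) plane.

    The BT.601 weights below are an assumption; the text only says
    "the Y value in the YCbCr space".
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# One luminance image is produced per projected pattern, plus one for the
# pattern-light-absent image; this stack feeds the code image generation below.
```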
  • The code image generation program 36d is a program that binarizes the plurality of luminance images generated by the luminance image generation program 36c, with reference to a preset luminance threshold or to the luminance image of the pattern-light-absent image, and generates from the result a code image in which a predetermined code is assigned to each pixel.
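  • A hedged sketch of this step, assuming the usual spatial-coding scheme in which the stripe patterns form a Gray code; the Gray-to-binary conversion and the use of the pattern-light-absent image as the per-pixel threshold are assumptions consistent with, but not spelled out by, the text:

```python
import numpy as np

def code_image(pattern_lit: list[np.ndarray], pattern_off: np.ndarray) -> np.ndarray:
    """Assign a spatial code to every pixel from a stack of luminance images.

    pattern_lit: one luminance image per projected stripe pattern,
    coarsest pattern first; pattern_off: the pattern-light-absent
    luminance image, used here as the per-pixel binarization threshold.
    """
    bits = [(img > pattern_off).astype(np.uint16) for img in pattern_lit]
    code = bits[0].copy()          # most significant bit of the code
    binary_msb = bits[0].copy()
    for gray_bit in bits[1:]:
        binary_msb = binary_msb ^ gray_bit   # Gray -> binary, bit by bit
        code = (code << 1) | binary_msb
    return code                    # per-pixel stripe index
```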
  • The code boundary extraction program 36e is a program that determines code boundary coordinates with sub-pixel accuracy, using the code image generated by the code image generation program 36d and the luminance images generated by the luminance image generation program 36c.
  • The lens aberration correction program 36f is a program that applies aberration correction for the imaging optical system 21 to the code boundary coordinates obtained with sub-pixel accuracy by the code boundary extraction program 36e.
  • the triangulation calculation program 36g is a program that calculates three-dimensional coordinates in the real space related to the boundary coordinates from the boundary coordinates of the code whose aberration has been corrected by the lens aberration correction program 36f.
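  • As a hedged sketch of the triangulation step: with a known baseline between the projector and the camera and the two angles at which a code boundary is seen, depth follows from the textbook triangulation relation below; the patent's actual calibrated model is more detailed:

```python
import math

def triangulate_depth(baseline: float, theta_proj: float, theta_cam: float) -> float:
    """Perpendicular distance of a code-boundary point from the baseline.

    baseline: projector-to-camera distance; theta_proj / theta_cam:
    angles (radians) of the projected stripe plane and of the camera ray,
    both measured from the baseline. A standard relation, not the
    patent's exact formulation.
    """
    return (baseline * math.sin(theta_proj) * math.sin(theta_cam)
            / math.sin(theta_proj + theta_cam))

# Example: 100 mm baseline, stripe plane at 60 deg, camera ray at 80 deg.
print(round(triangulate_depth(100.0, math.radians(60), math.radians(80)), 1))
```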
  • the document attitude calculation program 36h is a program for estimating and obtaining the three-dimensional shape of the subject P such as a book from the three-dimensional coordinates calculated by the triangulation calculation program 36g.
  • The plane conversion program 36i is a program that generates a flattened image, as if the subject P such as a book were imaged from the front, based on the three-dimensional shape of the subject P calculated by the document attitude calculation program 36h.
  • The RAM 37 has allocated as storage areas: a pattern-light-present image storage unit 37a, a pattern-light-absent image storage unit 37b, a luminance image storage unit 37c, a code image storage unit 37d, a code boundary coordinate storage unit 37e, an ID storage unit 37f, an aberration correction coordinate storage unit 37g, a three-dimensional coordinate storage unit 37h, a document attitude calculation result storage unit 37i, a plane conversion result storage unit 37j, a projection image storage unit 37k, and a working area 37l.
  • The pattern-light-present image storage unit 37a stores the pattern-light-present image obtained by imaging, with the pattern light photographing program 36b, the state where the pattern light is projected on the subject P.
  • The pattern-light-absent image storage unit 37b stores the pattern-light-absent image obtained by imaging the state where the pattern light is not projected on the subject P.
  • the luminance image storage unit 37c stores the luminance image generated by the luminance image generation program 36c.
  • the code image storage unit 37d stores a code image generated by the code image generation program 36d.
  • The code boundary coordinate storage unit 37e stores the code boundary coordinates extracted with sub-pixel accuracy by the code boundary extraction program 36e.
  • The ID storage unit 37f stores IDs and the like assigned to luminance images that have a change in brightness at pixel positions on a boundary.
  • the aberration correction coordinate storage unit 37g stores the boundary coordinates of the code whose aberration has been corrected by the lens aberration correction program 36f.
  • the three-dimensional shape coordinate storage unit 37h stores the three-dimensional coordinates of the real space calculated by the triangulation calculation program 36g.
  • the document attitude calculation result storage unit 37i stores parameters related to the three-dimensional shape of the subject P such as a document calculated by the document attitude calculation program 36h.
  • The plane conversion result storage unit 37j stores the plane conversion result generated by the plane conversion program 36i.
  • the projection image storage unit 37k stores image information projected from the image projection unit 13.
  • The working area 37l temporarily stores data used for calculations by the CPU 35.
  • An amplifier 32 is connected to the processor 15 via a bus line in addition to the configuration described above.
  • the amplifier 32 sounds a speaker 33 connected to the amplifier 32 to output a warning sound or the like.
  • the PC 49 has a CPU 50.
  • a ROM 51 and a RAM 52 are connected to the CPU 50 via an internal bus.
  • Further, an interface 55 connectable to the personal computer interface 24, a CRT display 58, and a keyboard 59 are electrically connected, via an input/output port 54, to the internal bus to which the CPU 50 is connected.
  • FIG. 7 is a flowchart of the main process executed under the control of the CPU 35. Details of the digital camera processing (S605), webcam processing (S607), stereoscopic image processing (S609), and flattened image processing (S611) in the main process will be described later.
  • In the main process, a key scan is first performed to determine the state of the mode switching switch 9b (S603), and it is determined whether the mode switching switch 9b is set to the digital camera mode (S604). If it is (S604: Yes), the process shifts to the digital camera processing described later (S605).
  • If the setting of the mode switching switch 9b is not the digital camera mode (S604: No), it is similarly determined in turn whether it is the webcam mode (S606), the stereoscopic image mode (S608), or the flattened image mode (S610), and the corresponding processing (S607, S609, S611) is executed for whichever mode is set.
  • FIGS. 8 (a) and 8 (b) are diagrams showing the positional relationship between the image projection unit 13 and the image pickup unit 14.
  • The case where the projection direction of the image projection unit 13 (the optical axis direction of the projection optical system 20) is within 0°±30° of the vertical direction and the imaging direction of the image imaging unit 14 (the optical axis direction of the imaging optical system 21) is toward the front of the imaging head 2 (the surface on which the lens barrel 5 is provided) is hereinafter referred to as the A mode (the state shown in FIG. 8(b) A).
  • The case where the projection direction of the image projection unit 13 is within 0°±30° of the vertical direction and the imaging direction of the image imaging unit 14 is not toward the front of the imaging head 2 is hereinafter referred to as the B mode (the state shown in FIG. 8(b) B).
  • In the B mode, the direction in which the subject captured by the image imaging unit 14 exists and the direction in which the image signal light is projected by the image projection unit 13 form a large angle, close to perpendicular. The image signal light is therefore not projected onto the subject, and even if an image is projected by the image projection unit 13 while the image imaging unit 14 is capturing, the projected image is prevented from appearing in the captured image. Accordingly, when the positional relationship between the image projection unit 13 and the image imaging unit 14 is the B mode, a character string or the like indicating, for example, the operation method or operation procedure (related information on the captured image acquired by the imaging unit) can be projected, and the user can perform imaging while checking the projected character string or the like.
  • In addition, the user can image himself or herself while viewing the image projected by the image projection unit 13.
  • The case where the projection direction of the image projection unit 13 is outside 0°±30° of the vertical direction and the imaging direction of the image imaging unit 14 is not substantially 0° with respect to the front of the imaging head 2 is hereinafter referred to as the C mode (the state shown in FIG. 8(b) C).
  • The case where the projection direction of the image projection unit 13 is outside 0°±30° of the vertical direction and the imaging direction of the image imaging unit 14 is substantially 0° with respect to the front of the imaging head 2 is hereinafter referred to as the D mode (the state shown in FIG. 8(b) D).
  • In the D mode as well, the direction in which the subject captured by the image imaging unit 14 exists and the direction in which the image signal light is projected by the image projection unit 13 form a large angle close to perpendicular, so the image signal light is not projected onto the subject, and the projected image is prevented from appearing in the captured image even when projection and imaging occur simultaneously. A sketch of the four-mode classification follows below.
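  • The four modes reduce to a simple decision on the two sensor readings; here is a Python sketch, where the tolerance for "substantially 0°" is a hypothetical value not given in the text:

```python
def positional_mode(projection_tilt_deg: float, imaging_dir_deg: float,
                    eps_deg: float = 5.0) -> str:
    """Classify the projection/imaging positional relationship (A-D).

    projection_tilt_deg: inclination of the projection direction from the
    vertical (sensor S1); imaging_dir_deg: imaging direction relative to
    the front of the imaging head (sensor S2). eps_deg is a hypothetical
    tolerance for "substantially 0 degrees".
    """
    projecting_down = abs(projection_tilt_deg) <= 30.0  # within 0 +/- 30 deg
    facing_front = abs(imaging_dir_deg) <= eps_deg
    if projecting_down:
        return "A" if facing_front else "B"
    return "D" if facing_front else "C"

# Example: projecting 10 deg off vertical while imaging 90 deg from the
# front of the head -> B mode (project onto a desk, image the user).
print(positional_mode(10.0, 90.0))  # B
```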
  • FIG. 9 is a flowchart of the digital camera processing (S605 in FIG. 7).
  • the digital camera process is a process of acquiring an image captured by the image capturing unit 14.
  • In the digital camera processing, first, a high-resolution setting signal is transmitted to the CCD 22 (S701), so that a high-quality captured image can be provided to the user.
  • Next, the position of the image projection unit 13 is obtained based on the signal output from the imaging head position detection sensor S1, and the position of the image imaging unit 14 is obtained based on the signal output from the imaging unit position detection sensor S2 (S702b).
  • FIG. 10 is a diagram showing a state in which rectangular image light indicating the imageable area is projected onto a surface. In this way, the user can visually recognize the imageable area from the rectangular image light before actually capturing an image by pressing the release button 8.
  • Then, the user is notified by a sound from the speaker 33 that the image can be captured (S702g).
  • When the projection direction of the image signal light from the image projection unit 13 is outside 0°±30° of the vertical direction (the C mode or the D mode), there is a high possibility that no surface suitable for projection exists in the projection direction. Therefore, when the positional relationship between the image projection unit 13 and the image imaging unit 14 is in neither the A mode nor the B mode described above, the notification that imaging is possible is given by sound rather than by image.
  • a pilot lamp (not shown) or the like may be lit or flashed to notify that the image can be captured.
  • Next, the release button 8 is scanned (S703a), and it is determined whether the release button 8 is half-pressed (S703b). If it is half-pressed (S703b: Yes), the autofocus (AF) and auto exposure (AE) functions are activated to adjust the focus, aperture, and shutter speed (S703c). If it is not half-pressed (S703b: No), the processing from S703a is repeated.
  • the release button 8 is scanned again (S703d), and it is determined whether or not the release button 8 is fully pressed (S703e). If the release button 8 is fully pressed (S703e: Yes), it is determined whether or not the flash mode is set (S704).
  • If the flash mode is set (S704: Yes), the flash 7 is fired (S705) and shooting is performed (S706). If the flash mode is not set (S704: No), shooting is performed without firing the flash 7 (S706). If it is determined in S703e that the button is not fully pressed (S703e: No), the processing from S703a is repeated.
  • the captured image is transferred from the CCD 22 to the cache memory 28 (S707).
  • the captured image stored in the cache memory 28 is displayed on the monitor LCD 10 (S708).
  • the captured image can be displayed on the monitor LCD 10 at a higher speed than when the captured image is transferred to the main memory.
  • the captured image is stored in the external memory 27 (S709).
  • FIG. 11 is a flowchart of the webcam process (S607 in FIG. 7).
  • the webcam process is a process of transmitting a captured image (including a still image and a moving image) captured by the image capturing unit 14 to an external network.
  • In FIG. 11, it is assumed that a moving image is transmitted to the external network as the captured image.
  • In the webcam processing, first, a low-resolution setting signal is transmitted to the CCD 22 (S801), and the well-known autofocus (AF) and auto exposure (AE) functions are activated to adjust the focus, aperture, and shutter speed (S802a).
  • Next, the position of the image projection unit 13 is obtained based on the signal output from the imaging head position detection sensor S1, and the position of the image imaging unit 14 is obtained based on the signal output from the imaging unit position detection sensor S2 (S802b).
  • FIG. 12 is a diagram showing a state in which the image output light is projected by the image projection unit 13 in the projection processing (S802f) of the webcam processing.
  • a message image M composed of a character string indicating an imaging mode such as “WEBCAM mode” and a captured image f are synthesized and projected.
  • When the captured image f has not yet been obtained, only the message image M is projected.
  • Then, the user is notified by a sound from the speaker 33 that the image can be captured (S802g), and shooting is started (S803). That is, for the same reason as in the digital camera processing (see FIG. 9), when the positional relationship between the image projection unit 13 and the image imaging unit 14 is in neither the A mode nor the B mode, the notification is given to the user by sound only, without image projection by the image projection unit 13.
  • the captured image is displayed on the monitor LCD 10 (S804).
  • Next, a message image M composed of a character string such as "imaging is in progress" and the captured image f are combined and stored in the projection image storage unit 37k (S805). Each time this step is performed, the projection image stored in the projection image storage unit 37k is updated, so in the projection process of S802f the captured image f and the message image M are projected as updated image output light.
  • The projected image thus allows the user to visually recognize changes in the imaging state of the image imaging unit 14.
  • Next, the captured image is transferred from the CCD 22 to the cache memory 28 (S807), and the captured image transferred to the cache memory 28 is transmitted to the external network via the RF driver 24 and the antenna 11 serving as an RF interface (S808).
  • When the relative positional relationship between the image projection unit 13 and the image imaging unit 14 is the B mode, for example, the user can check the captured image f at any time while imaging himself or herself.
  • FIG. 14 is a flowchart of the projection process (S802f in FIG. 11).
  • This process is a process of projecting an image stored in the projection image storage unit 37k from the image projection unit 13 onto a projection plane.
  • In this processing, first, it is confirmed whether an image is stored in the projection image storage unit 37k (S901). If one is stored (S901: Yes), the image stored in the projection image storage unit 37k is transferred to the projection LCD driver 30 (S902). Next, an image signal corresponding to the image is sent from the projection LCD driver 30 to the projection LCD 19, and the image is displayed on the projection LCD 19 (S903).
Next, the light source driver 29 is driven (S904), and the LED array 17A is turned on by an electric signal from the light source driver 29 (S905). After that, the process ends. When the LED array 17A is turned on, the light emitted from the LED array 17A reaches the projection LCD 19 via the light source lens 18, and in the projection LCD 19 spatial modulation corresponding to the image signal transmitted from the projection LCD driver 30 is applied to the light. The spatially modulated light is output from the projection LCD 19 as image signal light, which is then projected as a projection image onto the projection surface via the projection optical system 20.
FIG. 15 is a flowchart of the stereoscopic image processing (S609 in FIG. 7). The stereoscopic image processing is a process of detecting the three-dimensional shape of a subject, obtaining a three-dimensional shape detection result image as the stereoscopic image, and displaying that image. In this processing, first, a high-resolution setting signal is transmitted to the CCD 22 (S1001). Then, the position of the image projection unit 13 is obtained based on the signal output from the imaging head position detection sensor S1, and the position of the image capturing unit 14 is obtained based on the signal output from the imaging unit position detection sensor S2 (S1002b).
Next, it is determined whether or not the mode is the A mode (see FIG. 8) (S1002e). If the mode is the A mode (S1002e: Yes), the fact that an image can be captured is announced by sound from the speaker 33 (S1002d). Note that, instead of the notification by voice, a message may be projected, or rectangular image light indicating the imageable range may be projected.
In the stereoscopic image processing, the image capturing unit 14 captures an image of the subject in a state where predetermined pattern light is projected by the image projection unit 13, and the three-dimensional shape of the subject P is detected based on the image data thus acquired. Therefore, when the relative positional relationship between the image projecting unit 13 and the image capturing unit 14 is not a predetermined relationship, the accuracy of the three-dimensional shape measurement is significantly reduced. For this reason, execution of the three-dimensional shape detection process (S1006) is permitted only when the image projecting unit 13 and the image capturing unit 14 are in the predetermined relationship.
Next, the release button 8 is scanned (S1003a), and it is determined whether or not the release button 8 is half-pressed (S1003b). If the release button 8 is half-pressed (S1003b: Yes), the auto focus (AF) and auto exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S1003c). If the release button 8 is not half-pressed (S1003b: No), the processing from S1003a is repeated. Next, the release button 8 is scanned again (S1003d), and it is determined whether or not the release button 8 has been fully pressed (S1003e). If the release button 8 is fully pressed (S1003e: Yes), it is determined whether or not the flash mode is set (S1003f).
Then, the three-dimensional shape detection result of the three-dimensional shape detection processing (S1006) is stored in the external memory 27 (S1007), and the three-dimensional shape detection result is displayed on the monitor LCD 10 (S1008a). The three-dimensional shape detection result is a set of three-dimensional coordinates (X, Y, Z) in the real space of each measurement vertex; it is displayed on the monitor LCD 10 as a three-dimensional image (3D CG image) in which the surface is rendered by connecting the measurement vertices with polygons.
FIG. 16 (a) is a diagram for explaining the principle of the spatial code method used to detect the three-dimensional shape in the above-described three-dimensional shape detection processing (S1006 in FIG. 15), and FIG. 16 (b) is a view showing pattern light different from that of FIG. 16 (a). Either the pattern light of FIG. 16 (a) or that of FIG. 16 (b) may be used; furthermore, a gray-level code, which is a multi-tone code, may also be used.
The spatial code method is one type of method for detecting the three-dimensional shape of a subject based on triangulation between the projected light and the observed image. It is characterized in that, as shown in FIG. 16 (a), a projector L and an observer O are installed at a distance D from each other, and the space is divided into elongated fan-shaped areas that are then coded.
Each fan-shaped area is coded by the masks into bright “1” and dark “0”; for example, the area containing point P receives the code determined by whether it is bright or dark under each of the masks. Each fan-shaped area is thus assigned a code corresponding to its direction φ, and the boundary of each code can be regarded as one slit light beam. Therefore, the scene is photographed for each mask with a camera serving as the observation device, and the light and dark patterns are binarized to form the bit planes of a memory. The horizontal position (address) in the resulting multi-bit-plane image corresponds to the observation direction θ, and the contents of the memory at that address give the code of the projected light, i.e., φ. The coordinates of the point of interest are then determined from θ and φ.
The mask patterns used in this method include the mask patterns A, B, and C shown in FIG. 16 (a). The point Q in FIG. 16 (a) indicates the boundary between area 3 (011) and area 4 (100); if mask A is shifted by one, the code of area 7 (111) may be generated erroneously. In other words, where the Hamming distance between adjacent regions is 2 or greater, a large error is likely to occur.
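The gray code of FIG. 16 (b) avoids such errors because the Hamming distance between adjacent regions is always 1. As a purely illustrative sketch (not taken from the patent; the stripe width, bit count, and function names are invented for the example), both kinds of mask pattern could be generated as follows:

```python
# Illustrative generation of stripe mask patterns: pure binary code masks, and
# gray code masks whose adjacent regions differ by exactly one bit.
import numpy as np

def binary_masks(n_bits, width):
    """One row per mask; column x is bright (1) or dark (0) for its region."""
    regions = np.arange(width) * (2 ** n_bits) // width   # region index 0..2^n-1
    return np.array([(regions >> (n_bits - 1 - b)) & 1 for b in range(n_bits)])

def gray_masks(n_bits, width):
    regions = np.arange(width) * (2 ** n_bits) // width
    gray = regions ^ (regions >> 1)                       # binary -> gray code
    return np.array([(gray >> (n_bits - 1 - b)) & 1 for b in range(n_bits)])

if __name__ == "__main__":
    for make in (binary_masks, gray_masks):
        masks = make(3, 24)            # 3 masks (A, B, C), 24-pixel-wide image
        codes = np.zeros(24, dtype=int)
        for row in masks:              # pack MSB-first into the per-pixel code
            codes = (codes << 1) | row
        print(make.__name__, codes)    # gray codes change by one bit per region
```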
FIG. 17 (a) is a flowchart of the three-dimensional shape detection processing (S1006 in FIG. 15). In this processing, first, an imaging process is performed (S1210). This imaging process projects, from the image projection unit 13, striped pattern light in which light and dark are alternately arranged (see FIGS. 1 and 18), formed by the mask patterns of the pure binary codes shown in FIG. 16 (a), onto the subject in chronological order, and acquires images with pattern light, obtained by capturing the subject while the pattern light is projected, and an image without pattern light, obtained by capturing the subject while no pattern light is projected. Next, a three-dimensional measurement process is performed (S1220); this process actually measures the three-dimensional shape of the subject by using the images with pattern light and the image without pattern light acquired by the imaging process. After that, the processing ends.
FIG. 17 (b) is a flowchart of the imaging process (S1210 in FIG. 17 (a)). This process is executed based on the pattern light photographing program 36a. First, an image of the subject is captured by the image capturing unit 14 without the pattern light being projected from the image projecting unit 13, thereby acquiring the image without pattern light (S1211). The acquired image is stored in the pattern-light-free image storage unit 37b. Next, a counter i is initialized (S1212), and it is determined whether or not the value of the counter i equals the maximum value imax (S1213). If not, the i-th mask pattern among the mask patterns to be used is displayed on the projection LCD 19, and the i-th pattern light formed by that mask pattern is projected onto the projection surface (S1214); the state in which the pattern light is projected is then photographed by the image capturing unit 14 (S1215).
FIG. 17 (c) is a flowchart of the three-dimensional measurement process (S1220 in FIG. 17 (a)). This process is executed based on the luminance image generation program 36c. First, luminance images are generated (S1221): a luminance image is generated for each captured image, both with and without pattern light. The generated luminance images are stored in the luminance image storage unit 37c, and a number corresponding to the number of the pattern light is assigned to each luminance image.
Next, the code image generation program 36d generates a code image, in which a code is assigned to each pixel, by combining the generated luminance images using the spatial coding method described above (S1222). The code image can be generated by binarizing each pixel of the luminance images related to the images with pattern light stored in the luminance image storage unit 37c, through comparison with a preset luminance threshold or with the image without pattern light, and by assigning the results from LSB to MSB as described with reference to FIGS. 16 (a) and 16 (b). The generated code image is stored in the code image storage unit 37d.
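A minimal sketch of this binarize-and-pack step, assuming one numpy array per luminance image and LSB-first packing as described above (the function and variable names are illustrative, not the patent's):

```python
# Hedged sketch of code image generation (S1222): binarize each pattern-light
# luminance image against the pattern-light-free image (or a fixed threshold)
# and pack the per-pattern bits into a per-pixel code, LSB first.
import numpy as np

def make_code_image(pattern_images, no_pattern_image, fixed_thresh=None):
    """pattern_images[0] supplies the LSB, the last image the MSB."""
    h, w = no_pattern_image.shape
    code = np.zeros((h, w), dtype=np.uint16)
    thresh = fixed_thresh if fixed_thresh is not None else no_pattern_image
    for bit, img in enumerate(pattern_images):
        code |= (img > thresh).astype(np.uint16) << bit
    return code

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 100, (4, 6))                 # ambient (no-pattern) image
    patterns = [np.clip(base + rng.integers(-80, 160, (4, 6)), 0, 255)
                for _ in range(3)]                      # three fake pattern images
    print(make_code_image(patterns, base))
```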
Next, lens aberration correction processing is performed by the lens aberration correction program 36f (S1224). This processing makes it possible to correct errors in the code boundary coordinates detected in S1223, which include errors caused by distortion of the imaging optical system 21 and the like. Then, real-space conversion processing based on the triangulation principle is performed by the triangulation calculation program 36g (S1225); by this processing, the aberration-corrected code boundary coordinates in the CCD space are converted into three-dimensional coordinates in the real space, and those three-dimensional coordinates are determined as the three-dimensional shape detection result.
FIG. 19 is a diagram for explaining the outline of the code boundary coordinate detection process (S1223 in FIG. 17). In the figure, the actual boundary between light and dark of the pattern light in the CCD space is indicated by the boundary line K, and the boundary between one code and another code obtained when the pattern light is coded by the spatial coding method described above is indicated by the bold line. First, at the detection position curCCDX, a first pixel G whose code changes from a certain code of interest (hereinafter, “curCode”) to another code is detected (first pixel detection step). That is, the code changes from curCode at the pixel just beyond the boundary (bold line), the last pixel up to the boundary being a pixel having curCode, and the pixel at which the change occurs is detected as the first pixel G.
Next, in order to specify the pixel region to be used for approximation, the detection position is moved to the left by “2”, and at the detection position curCCDX-2 the code image is referred to in order to search for a pixel at which the code of interest (curCode) changes to another code (a boundary pixel; pixel H at the detection position curCCDX-2). A pixel range of a predetermined extent in the Y-axis direction around that pixel (in this embodiment, from -3 pixels to +2 pixels) is then specified (part of the pixel region specifying step). Then, the luminance threshold bTh is calculated from the pixels in the predetermined range (the threshold may instead be a fixed value given in advance), and the position at which an approximate expression of the luminance crosses the threshold bTh is obtained. In this way, the boundary between light and dark can be detected with sub-pixel accuracy.
Next, the detection position is moved to the right by “1” from curCCDX-2, and the same processing as described above is performed for curCCDX-1 to obtain a representative value at curCCDX-1 (part of the boundary coordinate detection step). In this way, the code boundary coordinates can be detected with high, sub-pixel accuracy, and by performing the real-space conversion processing (S1225 in FIG. 17) based on the triangulation principle described above using these boundary coordinates, the three-dimensional shape of the subject can be detected with high accuracy. Moreover, since the boundary coordinates can be detected with sub-pixel accuracy using the approximate expression calculated on the basis of the luminance images, it is not necessary to increase the number of captured images as in the related art, and it is not necessary to use a gray code, which is special pattern light; the method works even with pattern light whose light and dark stripes follow a pure binary code.
In the above, the pixel region has been described as the range from “-3” to “+2” in the Y-axis direction around the boundary pixel, with detection positions from curCCDX-2 in the X-axis direction, but the pixel region is not limited thereto; a predetermined range in the Y-axis direction around the boundary pixel at the curCCDX detection position alone may be set as the pixel region.
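The following is a hedged sketch of the sub-pixel idea: fit a low-order polynomial, standing in for the approximate expression Bt, to the luminance samples in the pixel region around a boundary pixel and intersect it with the threshold bTh. The sample values and the choice of a quadratic fit are assumptions made for the example:

```python
# Illustrative sub-pixel boundary detection: fit a polynomial to the luminance
# around a boundary pixel and solve for the crossing with the threshold bTh.
import numpy as np

def subpixel_boundary(ccdy, luminance, bTh, order=2):
    """ccdy: pixel Y positions (e.g. boundary pixel -3 .. +2); returns the
    ccdy value where the fitted polynomial crosses the luminance threshold."""
    coeffs = np.polyfit(ccdy, luminance, order)    # approximate polynomial Bt
    roots = np.roots(np.polysub(coeffs, [bTh]))    # solve Bt(y) = bTh
    real = roots[np.isreal(roots)].real
    inside = real[(real >= ccdy.min()) & (real <= ccdy.max())]
    return inside[0] if inside.size else None

if __name__ == "__main__":
    ys = np.arange(-3, 3)                                 # the -3 .. +2 pixel range
    lum = np.array([210., 205., 190., 120., 40., 30.])    # bright-to-dark edge
    print(subpixel_boundary(ys, lum, bTh=float(lum.mean())))
```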
FIG. 20 is a flowchart of the code boundary coordinate detection process (S1223 in FIG. 17). This process is executed based on the code boundary extraction program 36e. In this processing, first, each element of the code boundary coordinate sequence in the CCD space is initialized (S1401), and curCCDX is set to a start coordinate (S1402). Then, it is determined whether or not curCCDX is equal to or smaller than an end coordinate (S1403). If curCCDX is equal to or smaller than the end coordinate (S1403: Yes), curCode is set to “0” (S1404); that is, curCode is initially set to the minimum value. Next, it is determined whether or not curCode is smaller than the maximum code (S1405). If curCode is smaller than the maximum code (S1405: Yes), the code image is referred to at curCCDX to search for a pixel of curCode (S1406), and it is determined whether or not a pixel of curCode exists (S1407). If such a pixel exists (S1407: Yes) and a pixel having a code larger than curCode also exists (S1409: Yes), the boundary is provisionally regarded as being located at the position of that pixel, and the processing proceeds on this assumption to obtain the boundary coordinates with sub-pixel accuracy (S1410). If a pixel of curCode does not exist (S1407: No), or if a pixel having a code larger than curCode does not exist (S1409: No), “1” is added to curCode in order to calculate the boundary coordinates for the next curCode (S1411), and the processing from S1405 is repeated.
FIG. 21 is a flowchart of the process (S1410 in FIG. 20) for obtaining the code boundary coordinates with sub-pixel accuracy. In this processing, first, the luminance images having a change in brightness at the boundary are extracted; the mask pattern numbers of the extracted luminance images are stored in the array PatID[], and the number of extracted luminance images is stored in noPatID (S1502). The array PatID[] and noPatID are stored in the ID storage unit 37f.
Next, a counter i is initialized (S1503), and it is determined whether or not the value of the counter i is smaller than noPatID (S1504). If it is determined to be smaller (S1504: Yes), the CCDY value of the boundary is obtained for the luminance image having the mask pattern number PatID[i] corresponding to the counter i, and that value is set as fCCDY[i] (S1505). When all the extracted luminance images have been processed, the weighted average of the fCCDY[i] values obtained in S1505 is calculated (S1507); alternatively, the median of the values may be used as the boundary value, or the boundary value may be obtained by statistical calculation. Finally, the boundary coordinates are represented by the coordinate curCCDX and the weighted average value obtained in S1507, the boundary coordinates are stored in the code boundary coordinate storage unit 37e, and the process ends.
FIG. 22 is a flowchart of the process (S1505 in FIG. 21) for obtaining the boundary CCDY value for a luminance image having the mask pattern number PatID[i]. Note that “0” in S1601 means the minimum value of the ccdx value. The value of “dx” can be set in advance to an appropriate integer including “0”; in the example described in FIG. 19, “dx” is set to “2”, so that, according to that example, this ccdx is set to “curCCDX-2”.
Next, pixel I is detected as a candidate pixel having the boundary, and an eCCDY value is obtained at the position of pixel I. Then, the ccdy value at which the approximate polynomial Bt intersects the luminance threshold bTh is determined, and that ccdy value is assigned to efCCDY[j] (S1605). By S1604 and S1605, the boundary coordinates can be detected with sub-pixel accuracy. Next, “1” is added to each of ccdx and the counter j (S1606), and the processing from S1602 is repeated; that is, a boundary of sub-pixel accuracy is detected at each detection position within the predetermined range on the left and right of curCCDX.
FIGS. 23 (a) to 23 (c) are diagrams for explaining the lens aberration correction processing (S1224 in FIG. 17). The lens aberration correction process corrects, when the incident light flux is shifted by the aberration of the imaging optical system 21 from the position at which an ideal lens would form the image, the detected position to the position at which the image should originally be formed. This aberration correction is performed, for example, based on data obtained by calculating the aberration of the optical system over the imaging range of the imaging optical system 21 using the half angle of view hfa, which is the angle of the incident light, as shown in FIG. 23. The aberration correction processing is executed based on the lens aberration correction program 36f and is applied to the code boundary coordinates stored in the code boundary coordinate storage unit 37e; the corrected coordinates are stored in the aberration-corrected coordinate storage unit 37g.
In the camera calibration (approximation to an ideal camera), the correction is performed using expressions in which the focal length of the imaging optical system 21 is denoted focallength (mm), the CCD pixel length is denoted pixellength (mm), and the center coordinates of the lens in the CCD 22 are denoted (Centx, Centy).
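Since the correction expressions themselves are not reproduced above, the following sketch only illustrates the general shape of such a correction: map each detected CCD coordinate back to where an ideal, aberration-free lens would have imaged it, with the correction factor driven by the half angle of view hfa. The radial model dist(hfa) is an assumption made for the example, not the patent's formula:

```python
# Illustrative radial aberration correction. focal_length, pixel_length and
# (centx, centy) correspond to the quantities named in the text; dist(hfa) is
# an assumed correction factor derived from aberration data.
import math

def correct_point(ccdx, ccdy, focal_length, pixel_length, centx, centy,
                  dist=lambda hfa: 1.0):
    # half angle of view of the incident ray for this pixel
    dx, dy = (ccdx - centx) * pixel_length, (ccdy - centy) * pixel_length
    hfa = math.atan2(math.hypot(dx, dy), focal_length)
    k = dist(hfa)                       # radial correction factor for this angle
    return centx + (ccdx - centx) * k, centy + (ccdy - centy) * k

if __name__ == "__main__":
    # toy aberration model: slight barrel distortion growing with the half angle
    print(correct_point(620.0, 410.0, focal_length=8.0, pixel_length=0.005,
                        centx=320.0, centy=240.0,
                        dist=lambda hfa: 1.0 + 0.05 * hfa ** 2))
```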
FIGS. 24 (a) and 24 (b) are diagrams for explaining the method of calculating three-dimensional coordinates in the three-dimensional space from coordinates in the CCD space in the real-space conversion process (S1225 in FIG. 17) based on the principle of triangulation.
By this processing, the three-dimensional coordinates in the three-dimensional space corresponding to the aberration-corrected code boundary coordinates stored in the aberration-corrected coordinate storage unit 37g are calculated by the triangulation operation program 36g, and the calculated three-dimensional coordinates are stored in the three-dimensional coordinate storage unit 37h. In this embodiment, the optical axis direction of the imaging optical system 21 is taken as the Z axis, the point a distance VPZ away from the principal point of the imaging optical system 21 along the Z axis is taken as the origin, and the X axis and Y axis are taken horizontal and vertical, respectively, with respect to the image input / output device 1.
The projection angle from the image projection unit 13 into the three-dimensional space (X, Y, Z) is denoted θp, the distance between the optical axis of the imaging optical system 21 and the optical axis of the image projection unit 13 is denoted D, the field of view of the imaging optical system 21 in the Y direction extends from Yftop to Yfbottom, its field of view in the X direction extends from Xfstart to Xfend, the length (height) of the CCD 22 in the Y-axis direction is Hc, and its length (width) in the X-axis direction is Wc. The projection angle θp is given based on the code assigned to each pixel. The three-dimensional space position (X, Y, Z) corresponding to arbitrary coordinates (ccdx, ccdy) on the CCD 22 can then be obtained by solving five equations concerning the triangle formed by the point on the imaging surface of the CCD 22, the projection point of the pattern light, and the point at which they intersect in the X-Y plane:
(1) Y = -(tan θp) Z + PPZ tan θp - D + cmp(Xtarget)
Here, the principal point position of the image projection unit 13 is (0, 0, PPZ), the field of view of the image projection unit 13 in the Y direction extends from Ypftop to Ypfbottom, its field of view in the X direction extends from Xpfstart to Xpfend, the length (height) of the projection LCD 19 in the Y-axis direction is Hp, and its length (width) in the X-axis direction is Wp. For the LCD coordinate lcdcy, Yptarget = Ypftop - (lcdcy / Hp) × (Ypftop - Ypfbottom).
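As a simplified illustration of the triangulation, the following sketch reduces the problem to the Y-Z plane and intersects the camera ray through a CCD pixel with the projector ray at angle θp, offset by D; it is not the patent's system of five equations, and the sample numbers are invented:

```python
# 2-D (Y-Z plane) triangulation sketch: the camera ray through a CCD pixel and
# the projector ray at angle theta_p, offset D from the camera axis, intersect
# at the measured surface point.
import math

def triangulate_yz(ccdy, Hc, Yftop, Yfbottom, VPZ, theta_p, D, PPZ):
    """Camera principal point at (0, VPZ); projector principal point offset by
    D in Y at depth PPZ. Returns the (Y, Z) intersection of the two rays."""
    # viewing direction for this pixel: interpolate across the field of view
    Ytarget = Yftop - (ccdy / Hc) * (Yftop - Yfbottom)   # Y seen at depth Z = 0
    tan_c = (Ytarget - 0.0) / (0.0 - VPZ)                # camera ray slope dY/dZ
    tan_p = math.tan(theta_p)                            # projector ray slope
    # solve: Y = tan_c*(Z - VPZ)  and  Y = D + tan_p*(Z - PPZ)
    Z = (D - tan_p * PPZ + tan_c * VPZ) / (tan_c - tan_p)
    return tan_c * (Z - VPZ), Z

if __name__ == "__main__":
    print(triangulate_yz(ccdy=240, Hc=480, Yftop=100.0, Yfbottom=-100.0,
                         VPZ=500.0, theta_p=math.radians(-10), D=80.0, PPZ=500.0))
```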
FIG. 25 is a flowchart of the flattened image processing (S611 in FIG. 7). The flattened image processing is a process of acquiring and displaying a flattened image that has been corrected to a state in which the original is not curved, or to a state in which the original is imaged from a direction perpendicular to its surface, even when, for example, a document (subject) P in a curved state, such as a book, is imaged, or a rectangular document P is imaged from an oblique direction (so that the captured image becomes trapezoidal).
In this processing, first, the release button 8 is scanned (S1903a), and it is determined whether or not the release button 8 is half-pressed (S1903b). If the release button 8 is half-pressed (S1903b: Yes), the auto focus (AF) and automatic exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S1903c). If the release button 8 is not half-pressed (S1903b: No), the processing returns to S1903a. Next, the release button 8 is scanned again (S1903d), and it is determined whether or not the release button 8 is fully pressed (S1903e). If the release button 8 is fully pressed (S1903e: Yes), it is determined whether or not the flash mode is set (S1903f).
Then, a document posture calculation process for calculating the posture of the document P is performed (S1907). In this process, the position L, the angle θ, and the curvature φ(x) of the document P with respect to the image input / output device 1 are calculated as the posture parameters of the document P. Then, the flattened image obtained by the plane conversion process (S1908) is stored in the external memory 27 (S1909), and the flattened image is displayed on the monitor LCD 10 (S1910).
FIGS. 26 (a) to 26 (c) are diagrams for explaining the document posture calculation process (S1907 in FIG. 25). As a precondition for a document such as a book, it is assumed that the curvature of the document P is uniform in the y direction. In this document posture calculation process, first, as shown in FIG. 26 (a), the curve of the document surface is obtained; such a curve can be determined from the position information of the two code boundaries at the upper and lower quarter positions of the range onto which the pattern light is projected (the boundary between code 63 and code 64, and the boundary between code 191 and code 192). Next, the document P is rotationally transformed in the opposite direction by the previously obtained inclination θ about the X axis; that is, a state in which the document P is parallel to the X-Y plane is assumed. The displacement in the Z-axis direction can then be represented by the curvature φ(x) as a function of x. In this way, the position L, the angle θ, and the curvature φ(x) of the document P are calculated as the document posture parameters, and the process ends.
FIG. 27 is a flowchart of the plane conversion process (S1908 in FIG. 25). In this processing, first, the four corner points of the image without pattern light stored in the pattern-light-free image storage unit 37b are moved by L in the Z direction, rotated by θ about the X axis, and inversely transformed with respect to the curvature φ(x) (corresponding to the inverse of the “bending process” described later); a rectangular area is thereby set, that is, a rectangular area in which the surface of the original P on which characters and the like are written is viewed from a substantially orthogonal direction, and the number of pixels a included in this rectangular area is obtained (S2102).
Then, for each obtained three-dimensional space position, the coordinates (ccdcx, ccdcy) on the CCD image captured by an ideal camera are obtained by the inverse function of the triangulation described above (S2107); using these, the coordinates (ccdx, ccdy) on the CCD image captured by the actual camera are obtained by the inverse function of the camera calibration described above (S2108); and the state of the pixel of the image without pattern light at the corresponding position is obtained and stored in the working area 371 of the RAM 37 (S2109).
FIG. 28 (a) is a diagram for explaining the outline of the bending process (S2104 in FIG. 27), and FIG. 28 (b) shows the original P flattened by the plane conversion process (S1908 in FIG. 25). The details of the bending process are disclosed in IEICE Transactions D-II, Vol. J86-D2, No. 3, p. 409 (on flattening document images captured with an eye scanner).
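As a conceptual illustration of how a known curvature φ(x) can be undone, the sketch below dewarps one scanline by resampling it at equal arc-length steps along the curve; it illustrates the general idea only, not the patent's S1908 routine, and the page texture and curvature function are invented:

```python
# Illustrative dewarping of one scanline: equal steps in the flattened image
# correspond to equal arc-length steps along the curved page Z = phi(x).
import numpy as np

def flatten_row(row, phi, out_width):
    """row: one scanline of the captured (pattern-light-free) image.
    phi: curvature Z = phi(x) over x in [0, 1]. Returns the dewarped scanline."""
    xs = np.linspace(0.0, 1.0, row.size)
    zs = phi(xs)
    seg = np.hypot(np.diff(xs), np.diff(zs))          # arc length of each segment
    arc = np.concatenate(([0.0], np.cumsum(seg)))
    arc /= arc[-1]                                    # normalized arc length 0..1
    targets = np.linspace(0.0, 1.0, out_width)        # equal steps on the flat page
    src_x = np.interp(targets, arc, xs)               # where each flat pixel came from
    return np.interp(src_x, xs, row)

if __name__ == "__main__":
    row = np.sin(np.linspace(0, 20, 200)) * 127 + 128          # fake page texture
    print(flatten_row(row, phi=lambda x: 0.3 * np.sin(np.pi * x), out_width=200)[:8])
```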
In the digital camera processing shown in FIG. 9, the webcam processing shown in FIG. 11, and the stereoscopic image processing shown in FIG. 15, when a distortion-free projection image is to be projected regardless of the projection direction, a projection image conversion process (S2900), described later, is executed in place of the projection processing (S702f, S802f, S1010) that is otherwise executed in the same manner as the projection processing of S806.
FIG. 29 is a flowchart of the distortion-free projection image conversion process (S2900). The distortion-free projection image conversion process (S2900) converts the image displayed on the projection LCD 19 into an image that can be projected onto the subject without distortion, in accordance with the image information stored in the projection image storage unit 37k. In this processing, first, each pixel value of the image information stored in the projection image storage unit 37k is allocated to each pixel of the ideal camera image coordinate system (ccdcx, ccdcy) (S2903).
Next, using equations (6) to (9), the three-dimensional coordinates (X, Y, Z) stored in the three-dimensional coordinate storage unit 37h, which are the corresponding points on the surface of the subject, are obtained, and by solving equations (1) to (5) for (ccdcx, ccdcy), the pixel information of each pixel of the distortion-free projection image is calculated and set. That is, first, it is determined whether or not the counter q has reached the number of pixels Qa (S2904). If not, the LCD space coordinates (lcdcx, lcdcy) of the pixel corresponding to the value of the counter q are converted, by equations (6) to (9), into the coordinates (X, Y, Z) on the subject stored in the working area 371 (S2905).
Then, the pixel information at the LCD space coordinates (lcdcx, lcdcy) is transferred to the projection LCD driver 30, so that the projection LCD 19 displays a projection image that will appear without distortion on the distorted surface. An undistorted image is therefore projected onto the subject.
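The pre-warping idea can be sketched as follows: for each projector (LCD) pixel, look up the 3-D surface point that its ray hits (as obtained in S2905), project that point into the image as it should appear, and copy the pixel. The surface lookup and camera function used here are toy stand-ins, not the patent's equations:

```python
# Conceptual pre-warp for distortion-free projection: build the LCD image by
# inverse mapping through the measured surface geometry.
import numpy as np

def prewarp(desired, surface_xyz, cam):
    """desired: the image that should appear undistorted on the subject.
    surface_xyz[v, u] = (X, Y, Z) hit by LCD pixel (u, v).
    cam: function mapping (X, Y, Z) -> (ccdcx, ccdcy) in the desired image."""
    h, w = surface_xyz.shape[:2]
    lcd = np.zeros((h, w), dtype=desired.dtype)
    for v in range(h):
        for u in range(w):
            cx, cy = cam(*surface_xyz[v, u])
            if 0 <= int(cy) < desired.shape[0] and 0 <= int(cx) < desired.shape[1]:
                lcd[v, u] = desired[int(cy), int(cx)]   # nearest-neighbour sample
    return lcd

if __name__ == "__main__":
    desired = np.arange(100, dtype=np.uint8).reshape(10, 10)
    # toy surface: a cylinder-like bulge shifts where each LCD ray lands in X
    uu, vv = np.meshgrid(np.arange(10), np.arange(10))
    surface = np.dstack([uu + np.sin(vv / 3.0), vv, np.zeros_like(uu, float)])
    print(prewarp(desired, surface, cam=lambda X, Y, Z: (X, Y)))
```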
FIGS. 30 (a) and 30 (b) are views for explaining a light source lens 50a as another example of the light source lens 18 of the above-described embodiment. FIG. 30 (a) is a side view showing the light source lens 50a, and FIG. 30 (b) is a plan view showing the light source lens 50a. The same members as those described above are denoted by the same reference numerals, and description thereof is omitted.
While the light source lens 18 of the above-described embodiment is configured by integrally arranging, on a base 18b, lens portions 18a each having a convex aspherical shape corresponding to one of the LEDs 17, the light source lenses 50a are formed as separate bullet-shaped resin lenses, each containing one of the LEDs 17. Since the position of each LED 17 and that of the corresponding light source lens 50a can thus be determined on a one-to-one basis, the relative positional accuracy can be improved, and the light emitting directions are aligned. Therefore, the surface of the projection LCD 19 is irradiated with light whose direction of incidence from the LEDs 17 is aligned perpendicular to the surface of the projection LCD 19, and the light can pass uniformly through the stop of the projection optical system 20, so that illuminance unevenness of the projected image can be suppressed. As a result, a high-quality image can be projected.
The LED 17 contained in each light source lens 50a is mounted on the substrate 16 via an electrode 51a composed of a lead and a reflector. A frame-shaped elastic fixing member 52a, which bundles the light source lenses 50a and regulates them in a predetermined direction, is arranged on the outer peripheral surface of the group of light source lenses 50a. The fixing member 52a is made of a resin material such as rubber or plastic.
Since each light source lens 50a is formed separately for each LED 17, it is difficult to install the lenses facing the projection LCD 19 with the angles of the optical axes formed by their convex tip ends correctly aligned. Therefore, the fixing member 52a surrounds the group of light source lenses 50a with the outer peripheral surfaces of the light source lenses 50a in contact with one another, and the position of each light source lens 50a is regulated so that its optical axis faces the projection LCD 19 at the correct angle. According to such a configuration, light can be emitted from the light source lenses 50a toward the projection LCD 19 almost vertically; uniform light is therefore radiated onto the surface of the projection LCD 19 and can pass uniformly through the aperture of the projection optical system, so that illuminance unevenness can be suppressed and a higher-quality image can be projected.
Note that the fixing member 52a may be a rigid member whose size is specified in advance, or it may be made of an elastic material so that the position of each light source lens 50a is restricted to a predetermined position by its elasticity.
FIGS. 31 (a) and 31 (b) are views for explaining a fixing member 60 as another example of the fixing member 52a, described with reference to FIGS. 30 (a) and 30 (b), for restricting the light source lenses 50a to predetermined positions. FIG. 31 (a) is a perspective view showing a state in which the light source lenses 50a are fixed, and FIG. 31 (b) is a partial sectional view thereof. The same members as those described above are denoted by the same reference numerals, and description thereof is omitted.
The fixing member 60 is formed in a plate shape with through holes 60a, each of which has a cross section following the outer peripheral surface of a light source lens 50a and is conical in sectional view; each light source lens 50a is inserted into and fixed in one of the through holes 60a. An elastic urging plate 61 is interposed between the fixing member 60 and the substrate 16, and an annular elastic O-ring 62 is arranged between the urging plate 61 and the lower surface of each light source lens 50a so as to surround the electrode 51a. The LED 17 contained in each light source lens 50a is mounted on the substrate 16 via the urging plate 61 and the electrode 51a, which penetrates a through hole formed in the substrate 16.
Since each light source lens 50a is fixed by being passed through a through hole 60a whose cross section follows its outer peripheral surface, the optical axis of each light source lens 50a can be fixed more reliably so as to face the projection LCD 19 at the correct angle. Moreover, each LED 17 can be urged to the correct position by the urging force of the O-ring 62 and fixed there. Furthermore, an impact that may occur when the device 1 is carried can be absorbed by the elastic force of the O-ring 62; this prevents the position of the light source lens 50a from being shifted by the impact, and thus prevents the inconvenience that light can no longer be emitted from the light source lens 50a perpendicularly toward the projection LCD 19.
In the embodiment described above, the processing of S1002g in the stereoscopic image processing of FIG. 15 is positioned as the three-dimensional information detection prohibiting means, and the processing of S801 in the webcam processing of FIG. 11 is positioned as the resolution lowering transmission means.
In the above embodiment, the process of acquiring and displaying a flattened image has been described as the flattened image mode. If a well-known OCR function is additionally mounted on the image input / output device 1, a great effect is obtained in that the text written on a curved document can be read with higher accuracy than when the curved document is read directly by the OCR function.
Further, the process of transmitting the low-resolution setting signal to the CCD 22 (S801) may be replaced with a process of reducing the resolution of the captured image after it has been transferred from the CCD 22 to the cache memory 28 (S807) and before the captured image is transmitted. Also in this case, the subsequent processing speed can be improved by lowering the resolution of the captured image.
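A trivial sketch of this software alternative (illustrative only): reduce the resolution of the frame after it leaves the sensor and before it is transmitted, here with a simple 2x2 box filter:

```python
# Downscale a captured frame in software before transmission.
import numpy as np

def downscale_half(frame):
    """2x2 box filter; assumes even dimensions for brevity."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)).astype(frame.dtype)

if __name__ == "__main__":
    frame = np.random.default_rng(1).integers(0, 256, (480, 640), dtype=np.uint8)
    small = downscale_half(frame)        # transmit `small` instead of `frame`
    print(frame.shape, "->", small.shape)
```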
In the above embodiment, the case has been described in which all the luminance images having a change in brightness are extracted and a provisional CCDY value is obtained for every one of them. However, it is not necessary to use all of the luminance images; any number of them, as long as it is one or more, may be used. The boundary coordinates can be obtained at higher speed by reducing the number of images to be extracted.
In the above embodiment, fCCDY[i] is weighted and averaged in S1507 of FIG. 21, and in S1607 of FIG. 22 the values efCCDY[j] are averaged using an approximate polynomial. However, the method of averaging the values is not limited to these; for example, a method of taking the simple average of the values, a method of using the median of the values, a method of calculating an approximate expression of the values and using the detection position given by that expression as the boundary coordinate, or a method based on statistical operations may be adopted.
In the above embodiment, striped pattern light in which a plurality of light and dark stripes are alternately arranged is projected; however, the light used for detecting the three-dimensional shape is not limited to such pattern light.
In the above-described embodiment, the light source lens 18, the projection LCD 19, and the projection optical system 20 are arranged in a straight line along the projection direction; however, the configuration of the image projection unit 13 is not necessarily limited to this.
FIGS. 33 (a) to 33 (c) are diagrams showing modified examples of the image projection unit 13. FIG. 33 (a) is a plan view of the imaging head 2, and FIGS. 33 (b) and 33 (c) are cross-sectional views taken along line A-A of FIG. 33 (a). Note that, in FIG. 33 (a), the illustration of the cover 2a is omitted to show the inside of the imaging head 2.
As shown in FIG. 33 (b), the image projection unit may be provided with a reflection mirror 200 in the projection optical system 20 composed of a plurality of lenses, so that the direction of the light emitted from the light source lens 18 is changed to a predetermined projection direction for projection. Since the optical path can be changed by the reflection mirror 200, the light source lens 18, the projection LCD 19, and the projection optical system 20 need not always be arranged in a straight line along the projection direction, and they can be stored compactly in accordance with the shape of the imaging head 2.
In the above-described embodiment, the light source means comprises the LED array 17A and the light source lens 18; however, the light source means may instead be configured as shown in FIG. 33 (c). The light source means shown in FIG. 33 (c) includes a light source 170, such as a halogen lamp emitting white light or a white LED (Light Emitting Diode), and a reflecting plate 180 for efficiently causing the white light emitted from the light source 170 to enter the projection LCD 19.
In the above-described embodiment, the position of the image capturing unit 14 with respect to the imaging head 2 is detected by the imaging unit position detection sensor S2 sensing the position of the imaging-unit position detecting rib 11c fixed to the small diameter portion 11b of the imaging case 11; however, the means for detecting the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is not limited to this. For example, the predetermined position fixing rib 2d may be engaged with a stopper (not shown) provided on the small diameter portion 11b of the imaging case 11 so that further rotation of the imaging case 11 is restricted, and it may thereby be detected that the image capturing unit 14 is in the predetermined positional relationship with the image projecting unit 13. In this case, the detection of the three-dimensional shape is permitted only when the positional relationship between the image projecting unit 13 and the image capturing unit 14 reliably becomes the predetermined positional relationship, so that the three-dimensional shape can be detected with higher precision.
The image input / output device may include a projection housing that holds the projection means, and may further include an imaging housing that holds the imaging means and is disposed so as to be movable relative to the projection housing.
Further, the spatial modulation means may be constituted by a liquid crystal panel; the projection means may have a projection optical system for forming the image signal light output from the liquid crystal panel into an image on a predetermined projection surface; and the imaging means may be provided with a light-receiving element that converts light into an electric signal and an imaging optical system that forms incident light into an image on the light-receiving element. That is, the projection means has the projection optical system that images the image signal light output from the liquid crystal panel onto the predetermined projection surface, and the imaging means has the imaging optical system that images incident light onto the light-receiving element. Here, the light-receiving element provided in the imaging means is often smaller than the liquid crystal panel provided in the projection means, and accordingly the imaging optical system of the imaging means can be made smaller than the projection optical system of the projection means. In addition, the projection means is provided with the light source means. Therefore, it is easy to make the imaging housing holding the imaging means smaller than the projection housing holding the projection means, and to arrange the imaging housing so as to be movable relative to the projection housing.
Further, the projection means may include projection direction determination means for directing the light emitted from the light source means into a predetermined projection direction. According to such a configuration, since the projection means has the projection direction determination means, it is not necessary to arrange the light source means, the spatial modulation means, and the projection optical system on a straight line; the projection means can therefore be configured in accordance with the shape of the projection housing, and the projection housing can be downsized.
Further, the image input / output device may include a base that supports the projection means and the imaging means, and a support member that movably supports the position of at least one of the projection means and the imaging means with respect to the base. Since at least one of the projection means and the imaging means can then be moved with respect to the base, even when the projection means and the imaging means are supported by the base, flexibility can be provided in the direction in which the image signal light can be projected by the projection means and in the direction in which images can be captured by the imaging means. The support member may have one end connected to at least one of the projection means and the imaging means and, at the other end, an attachment portion for attaching the support member to the base. Further, at least one of the base and the support member may be provided with a power supply unit for supplying power to at least one of the projection means and the imaging means. Since the power supply unit is then provided on at least one of the base and the support member, the projection means and the imaging means can be made lighter, and their positions can easily be moved with respect to the base. Moreover, because the weight of the projection means and the imaging means is reduced, less strength is required of the support member than when a power supply unit is provided in the projection means and the imaging means themselves.
Further, the image input / output device may include projection imaging control means for executing imaging by the imaging means while the image signal light is projected by the projection means. According to such a configuration, imaging by the imaging means is performed while the image signal light is projected by the projection means, and the imaging means can capture an image of a subject existing in a direction different from the projection direction; for example, imaging can be performed while the image signal light is projected in a direction different from the direction in which the user himself or herself is present. Moreover, when the imaging means captures an image of a subject existing in a direction different from the projection direction, the image signal light is not projected onto the subject, so that an image of the subject can be captured without the image projected by the projection means being included in the captured image.
Further, the projection imaging control means may be configured to control the projection means so as to project related information concerning the captured image acquired by the imaging means. According to such a configuration, the user can visually recognize the related information on the captured image acquired by the imaging means. Here, the related information corresponds to, for example, a character string or a graphic indicating the operation method or operation procedure of the imaging means, or the imaging mode of the imaging means.
Further, the projection imaging control means may be configured to control the projection means so as to project the captured image of the subject imaged by the imaging means. According to such a configuration, the captured image of the subject is projected by the projection means, so that the user can perform imaging while visually checking the captured image of the subject.
Further, the image input / output device may include projection image updating means for updating the image signal light projected by the projection means. According to such a configuration, since the image signal light projected by the projection means is updated, the user can execute imaging while visually confirming the update of the image projected by the projection means.
Further, the image input / output device may include three-dimensional information detecting means for detecting three-dimensional information of the subject based on the image captured by the imaging means in a state where image signal light having a predetermined pattern is projected onto the subject by the projection means.
Further, the image input / output device may have position determination means for determining whether or not the relative positional relationship between the imaging means and the projection means is within a predetermined range. According to such a configuration, the position determination means can determine whether or not the relative positional relationship between the imaging means and the projection means is within the predetermined range. The image input / output device may also include fixing means for fixing the positional relationship between the imaging means and the projection means in a state where their relative positional relationship is a predetermined positional relationship. According to such a configuration, the positional relationship between the imaging means and the projection means can be fixed by the fixing means in that predetermined positional relationship, which is advantageous.
Further, the image input / output device may include position determination means for determining the relative positional relationship between the imaging means and the projection means, and three-dimensional information detection prohibiting means for prohibiting the detection of three-dimensional information by the three-dimensional information detecting means when the position determination means determines that the relative positional relationship between the imaging means and the projection means is not within a predetermined range. According to such a configuration, when it is determined that the relative positional relationship is not within the predetermined range, detection of three-dimensional information by the three-dimensional information detecting means is prohibited, so that highly accurate three-dimensional information can be detected reliably. That is, the three-dimensional information detecting means detects the three-dimensional information based on the image captured by the imaging means while image signal light having the predetermined pattern is projected onto the subject by the projection means; when the relative positional relationship between the imaging means and the projection means is not within the predetermined range, the image signal light having the predetermined pattern cannot be projected properly onto the subject, or highly accurate three-dimensional information cannot be obtained. Because the detection of three-dimensional information is prohibited in such a case, the detection of low-accuracy three-dimensional information can be suppressed.
Further, the image input / output device may include resolution reduction transmitting means for reducing the resolution of a captured image obtained by the imaging means and transmitting the reduced image to the outside. According to such a configuration, since the resolution of the captured image obtained by the imaging means is reduced before it is transmitted to the outside, the processing speed can be improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Input (AREA)
  • Studio Devices (AREA)

Abstract

There is provided an image I/O device including light source means for emitting light, space modulation means for subjecting the light emitted from the light source means to space modulation and outputting image signal light, projection means for projecting image signal light outputted by the space modulation means toward the projection direction, and imaging means capable of imaging an object existing at least in the projection direction and acquiring the imaged data. In the image I/O device, the imaging means and the projection means are arranged in such a manner that imaging direction of the imaging means can be modified with respect to the projection direction of the projection means so that an object existing in a direction different from the projection direction can be imaged by the imaging means.

Description

Specification

Image input / output device

Technical field

[0001] The present invention relates to an image input / output device capable of imaging a subject existing in the direction in which image signal light is projected by projection means and a subject existing in a direction different from the direction in which the image signal light is projected by the projection means, and capable of displaying and outputting arbitrary information in the projection direction.
Background art

[0002] Conventionally, in image input / output devices capable of imaging a subject placed on a surface and acquiring the imaged data, there is known a device provided with projection means for projecting light onto the imageable area of the surface in order to visually display the imageable area. The projection means provided in such an image input / output device includes a liquid crystal panel that applies spatial modulation to the light emitted from a light source. By using this liquid crystal panel, the projection means controls the transmission of the light emitted from the light source for each pixel and projects spot light indicating the imaging area. One image input / output device of this type is described in JP-A-8-32848 (hereinafter referred to as Document 1).
Disclosure of the invention

[0003] However, in the image input / output device described in Document 1, the relative positional relationship between the projection means and the imaging means is fixed, and only a subject existing in the direction in which the image signal light is projected by the projection means can be imaged. Such an image input / output device therefore has the problem that its uses are limited.

[0004] For example, a user may point the imaging means at himself or herself in order to capture an image of himself or herself. In that case, the projection means cannot be used to display and output information in a direction that does not include the imaging direction of the subject, for example with the projection direction of the projection means turned toward a desk.

[0005] The present invention has been made to solve the above-described problems. That is, an object of the present invention is to provide an image input / output device that can image a subject existing in the direction in which image signal light is projected by projection means and a subject existing in a direction different from the direction in which the image signal light is projected by the projection means, and that can display and output arbitrary information in the projection direction.
[0006] According to one aspect of the present invention, there is provided an image input / output device including: light source means for emitting light; spatial modulation means for applying spatial modulation to the light emitted from the light source means and outputting image signal light; projection means for projecting the image signal light output by the spatial modulation means toward a projection direction; and imaging means capable of imaging a subject existing at least in the projection direction and acquiring the imaged data. In this image input / output device, the imaging means and the projection means are provided such that the imaging direction of the imaging means can be changed relative to the projection direction of the projection means, so that a subject existing in a direction different from the projection direction can be imaged by the imaging means.

[0007] According to such a configuration, the imaging direction of the imaging means and the projection direction of the projection means can be changed relative to each other into various orientations desired by the user, and desired image signal light can be output by the spatial modulation means and projected in the projection direction by the projection means. Desired information can therefore be displayed and output in a direction different from the imaging direction.

[0008] Further, according to such a configuration, when the light emitted from the light source means is spatially modulated by the spatial modulation means and output as desired image signal light, that image signal light is projected in the projection direction by the projection means. Since the imaging direction of the imaging means and the projection direction of the projection means can be changed relative to each other and arbitrary image signal light is projected in the projection direction by the projection means, it is possible to image a subject existing in the direction in which the image signal light is projected and a subject existing in a direction different from that direction, and further to display and output desired information in the projection direction.
Brief description of drawings

[0009]
[FIG. 1] FIGS. 1 (a) and 1 (b) are external perspective views of the image input / output device; FIG. 1 (a) is a view seen from the top side, and FIG. 1 (b) is an enlarged view of the imaging head.
[FIG. 2] FIGS. 2 (a) to 2 (e) are views showing the internal configuration of the imaging head.
[FIG. 3] FIG. 3 is a diagram schematically showing the internal electrical configuration of the image input / output device.
[FIG. 4] FIG. 4 (a) is an enlarged view of the image projection unit, FIG. 4 (b) is a plan view of the light source lens, and FIG. 4 (c) is a front view of the projection LCD.
[FIG. 5] FIGS. 5 (a) to 5 (c) are views for explaining the arrangement of the LED array.
[FIG. 6] FIG. 6 is an electrical block diagram of the image input / output device.
[FIG. 7] FIG. 7 is a flowchart of the main process.
[FIG. 8] FIGS. 8 (a) and 8 (b) are views showing the relative positional relationship between the image projection unit and the image capturing unit.
[FIG. 9] FIG. 9 is a flowchart of the digital camera process.
[FIG. 10] FIG. 10 is a view showing a state in which rectangular image light indicating the imageable area is projected on a surface.
[FIG. 11] FIG. 11 is a flowchart of the webcam process.
[FIG. 12] FIG. 12 is a view showing a state in which image output light is projected by the image projection unit in the projection process of the webcam process.
[FIG. 13] FIG. 13 is a view showing a state in which image output light is projected by the image projection unit in the projection process of the webcam process.
[FIG. 14] FIG. 14 is a flowchart of the projection process.
[FIG. 15] FIG. 15 is a flowchart of the stereoscopic image process.
[FIG. 16] FIG. 16 (a) is a view for explaining the principle of the spatial code method, and FIG. 16 (b) is a view showing a mask pattern (gray code) different from that of FIG. 16 (a).
[FIG. 17] FIG. 17 (a) is a flowchart of the three-dimensional shape detection process, FIG. 17 (b) is a flowchart of the imaging process, and FIG. 17 (c) is a flowchart of the three-dimensional measurement process.
[FIG. 18] FIG. 18 is a view showing a state in which pattern light is projected when the relative positional relationship between the image projection unit and the image capturing unit is the C mode.
[FIG. 19] FIG. 19 is a view for explaining the outline of the code boundary coordinate detection process.
[FIG. 20] FIG. 20 is a flowchart of the code boundary coordinate detection process.
[FIG. 21] FIG. 21 is a flowchart of the process for obtaining code boundary coordinates with sub-pixel accuracy.
[FIG. 22] FIG. 22 is a flowchart of the process for obtaining the boundary CCDY value for a luminance image having the mask pattern number PatID[i].
[FIG. 23] FIGS. 23 (a) to 23 (c) are views for explaining the lens aberration correction process.
[FIG. 24] FIGS. 24 (a) and 24 (b) are views for explaining the method of calculating three-dimensional coordinates in the three-dimensional space from coordinates in the CCD space.
[FIG. 25] FIG. 25 is a flowchart of the flattened image process.
[FIG. 26] FIGS. 26 (a) to 26 (c) are views for explaining the document posture calculation process.
[FIG. 27] FIG. 27 is a flowchart of the plane conversion process.
[FIG. 28] FIG. 28 (a) is a view for explaining the outline of the curvature calculation process, and FIG. 28 (b) is a view showing a flattened image obtained by the plane conversion process.
[FIG. 29] FIG. 29 is a flowchart of the image conversion process for distortion-free projection.
[FIG. 30] FIG. 30 (a) is a side view showing the light source lens 50 of the second embodiment, and FIG. 30 (b) is a plan view showing the light source lens 60 of the second embodiment.
[FIG. 31] FIG. 31 (a) is a perspective view showing a state in which the light source lens 50 is fixed, and FIG. 31 (b) is a partial sectional view thereof.
[FIG. 32] FIG. 32 is a view showing another example of the pattern light projected on the subject.
[FIG. 33] FIGS. 33 (a) to 33 (c) are views showing modified examples of the image projection unit; FIG. 33 (a) is a plan view of the imaging head, and FIGS. 33 (b) and 33 (c) are sectional views taken along line A-A of FIG. 33 (a).
BEST MODE FOR CARRYING OUT THE INVENTION
[0010] Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings. FIGS. 1(a) and 1(b) are external perspective views of an image input/output device 1 as an embodiment of the present invention. FIG. 1(a) is a view of the image input/output device 1 from the top side, and FIG. 1(b) is an enlarged view of the imaging head 2.
[0011] The image input/output device 1 has various modes: a digital camera mode in which it functions as a digital camera, a webcam mode in which it functions as a web camera, a stereoscopic image mode for detecting a three-dimensional shape and acquiring a stereoscopic image, a flattened image mode for acquiring a flattened image of a curved document or the like, and so on. The image input/output device 1 can also project arbitrary images. Furthermore, it is configured so that it can image not only a subject located in the direction in which an image is projected but also a subject located in a different direction.
[0012] FIG. 1(a) shows a state in which striped pattern light (image signal light having a predetermined pattern) consisting of alternating light and dark bands is projected from the image projection unit 13, described later, in order to detect the three-dimensional shape of the subject P, as is done particularly in the stereoscopic image mode and the flattened image mode.
[0013] The image input/output device 1 includes the imaging head 2 and an arm member 3, one end of which is detachably connected to the imaging head 2. The other end of the arm member 3 is provided with a mounting portion 4 by which the imaging head 2 and the arm member 3 can be attached to a laptop personal computer 49 (hereinafter simply the "PC 49").
[0014] The imaging head 2 contains the image projection unit 13, described later, and has a case that externally holds the cylindrical imaging case 11 so that it can rotate about its axis. A cylindrical lens barrel 5 is arranged at the center of one face of the imaging head 2. In the following description, the face on which the lens barrel 5 is provided is referred to as the front of the imaging head 2.
[0015] The lens barrel 5 is a member that protrudes from the front of the imaging head 2 and contains the projection optical system 20, which is part of the image projection unit 13. The lens barrel 5 holds the projection optical system 20, and the entire projection optical system 20 can be moved for focus adjustment. Part of a lens of the projection optical system 20 is exposed at the end face of the lens barrel 5, and image signal light is projected from this exposed portion toward the projection surface.
[0016] The imaging case 11 is provided with a white balance sensor 6 and a flash 7. Between the white balance sensor 6 and the flash 7, part of a lens of the imaging optical system 21, which is part of the image capturing unit 14 described later, is exposed on the outer surface. A subject facing the imaging optical system 21 is imaged.
[0017] The flash 7 is a light source for supplementing subject illuminance as needed in the digital camera mode. It is composed of, for example, a discharge tube filled with xenon gas, and can be used repeatedly by discharging a capacitor (not shown) built into the imaging head 2.
[0018] A monitor LCD 10 is arranged on the back of the imaging head 2. The monitor LCD 10 is a liquid crystal display that receives image signals from the processor 15, described later, and displays images to the user. For example, the monitor LCD 10 displays captured images in the digital camera mode and the webcam mode, three-dimensional shape detection result images in the stereoscopic image mode, flattened images in the flattened image mode, and so on.
[0019] Further, a connecting member 12 that detachably connects the imaging head 2 and the arm member 3 is arranged on a side face of the imaging head 2. The imaging head 2 portion of the image input/output device 1 can also be used on its own as a mobile digital camera.
[0020] The connecting member 12 is formed in a ring shape and fixed to the side face of the imaging head 2. A portion that fits into the connecting member 12 is formed at one end of the arm member 3. This fitting connects the imaging head 2 to the arm member 3 and allows the imaging head 2 to be fixed at an arbitrary angle with respect to the PC 49; releasing the fitting separates the arm member 3 from the imaging head 2.
[0021] The arm member 3 holds the imaging head 2 with respect to the PC 49 and can hold it with the imaging head 2 oriented in any desired direction relative to the PC 49. The arm member 3 is formed of a bellows-like pipe that can be bent into a desired shape. Since the user can point the imaging head 2 in any desired direction, the direction in which the image input/output device 1 can project images and the direction in which it can capture images can be set as the user intends even while the imaging head 2 is attached to the PC 49.
[0022] As described above, one end of the arm member 3 is connected to the imaging head 2 via the connecting member 12, and the other end is provided with the mounting portion 4 for detachably attaching the arm member 3 to the PC 49.
[0023] The mounting portion 4 contains the battery 26, described later, an electronic circuit board, and the like. On the near side of the mounting portion 4 are provided a release button 8, an on/off switch 9a, a mode switching switch 9b, and a connector of the personal computer interface 24. The on/off switch 9a is located above the release button 8, and the mode switching switch 9b below it. The image input/output device 1 is connected to the PC 49 via a cable (not shown) attached to the personal computer interface 24.
[0024] The release button 8 is a two-stage push-button switch that can be set to two states, a half-pressed state and a fully pressed state. The state of the release button 8 is managed by the processor 15, described later. Half-pressing the release button 8 activates the well-known autofocus (AF) and automatic exposure (AE) functions, which adjust the focus, aperture, and shutter speed; fully pressing it performs imaging and other operations.
[0025] Pressing the on/off switch 9a switches the main power supply of the image input/output device 1 on and off.
[0026] The mode switching switch 9b is a switch that can be set to various modes such as the digital camera mode, the webcam mode, the stereoscopic image mode, the flattened image mode, and an off mode. The state of the mode switching switch 9b is managed by the processor 15; when the processor 15 detects its state, the process for the corresponding mode is executed.
[0027] FIGS. 2(a) to 2(e) show the internal configuration of the imaging head 2. As shown in FIG. 2(a), the imaging head 2 has a lid portion 2a and a bottom portion 2b. The imaging case 11 is a cylindrical case containing the image capturing unit 14, described later. On the outer peripheral surface of the imaging case 11, part of a lens of the imaging optical system 21, which is part of the image capturing unit 14, is exposed, and a plurality of ribs 11a stand upright.
[0028] FIG. 2(b) shows the imaging head 2 with its lid portion 2a removed and with the image projection unit 13 contained in the imaging head 2 also removed. As shown in FIG. 2(b), the bottom portion 2b is provided with an imaging head position detection sensor S1, an imaging unit position detection sensor S2, and a hole 2c.
[0029] The imaging head position detection sensor S1 is a two-axis acceleration sensor for detecting the inclination of the imaging head 2. It detects, on two axes, by how many degrees the projection direction of the image projection unit 13 fixed to the imaging head 2 (the optical axis direction of the projection optical system 20, described later) is inclined with respect to the vertical direction, and outputs the result as an electrical signal.
[0030] The imaging unit position detection sensor S2 is a sensor for detecting the position of the image capturing unit 14 with respect to the imaging head 2.
[0031] The hole 2c is a substantially circular, threaded hole formed in the face constituting the front of the imaging head 2. The lens barrel 5 (see FIG. 1) is screwed into the hole 2c. As described later, the image projection unit 13 contained in the imaging head 2 projects an image through the lens barrel 5, which is movably screwed into the hole 2c.
[0032] The imaging case 11 has, on both end faces, small-diameter portions 11b, which are cylindrical members smaller in diameter than the imaging case 11. One of the small-diameter portions 11b is provided with an imaging unit position detection rib 11c.
[0033] The small-diameter portions 11b are supported in notches provided in the bottom portion 2b so that they can rotate about the axis. The imaging case 11 is thereby held so that it can rotate relative to the imaging head 2.
[0034] The imaging unit position detection rib 11c is fixed to one of the small-diameter portions 11b and rotates about the axis of the imaging case 11 as the imaging case 11 rotates. The imaging unit position detection sensor S2, fixed to the bottom portion 2b, is located near this rib. By detecting the position of the imaging unit position detection rib 11c with the imaging unit position detection sensor S2, the orientation of the imaging direction of the image capturing unit 14 with respect to the imaging head 2 is detected, and the sensor outputs an electrical signal for determining whether the projection direction and the imaging direction are in a positional relationship within a predetermined range.
[0035] FIG. 2(c) shows the bottom portion 2b with the imaging case 11 removed, and FIG. 2(d) shows the imaging case 11. As shown in FIG. 2(c), the bottom portion 2b includes a predetermined-position fixing rib 2d and a rotation stopper 2e.
[0036] The predetermined-position fixing rib 2d engages the small-diameter portion 11b of the imaging case 11 under urging force, thereby fixing the imaging case 11 at a predetermined position in which the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is a predetermined one.
[0037] The rotation stopper 2e is provided so as to engage the plurality of ribs 11a standing on the outer peripheral surface of the imaging case 11. Therefore, by urging the imaging case 11 via the plurality of ribs 11a, the imaging case 11 can be rotated intermittently and fixed at a desired position.
[0038] FIG. 2(e) shows the internal configuration of the imaging case 11. As shown in FIG. 2(e), the imaging optical system 21 and a CCD 22, which converts light incident through the imaging optical system 21 into electrical signals, are fixed in the imaging case 11. The image capturing unit 14, composed of the imaging optical system 21 and the CCD 22, can image a subject facing it and acquire the imaging data. Rotating the imaging case 11 about its axis relative to the imaging head 2 changes the imaging direction of the image capturing unit 14 fixed to the imaging case 11 relative to the projection direction of the image projection unit 13 fixed to the imaging head 2. The image capturing unit 14 can thus be made to face a subject located in a direction different from the projection direction, so that not only a subject in the direction in which the image projection unit 13 projects image signal light but also a subject in a different direction can be imaged.
[0039] As shown in FIG. 2(e), one of the small-diameter portions 11b is tubular, and a signal cable passes through it. This signal cable electrically connects the electrical components in the imaging case 11 to those in the imaging head 2.
[0040] FIG. 3 is a block diagram showing the internal electrical configuration of the image input/output device 1. The imaging head 2 is provided with the image projection unit 13 and the image capturing unit 14 as its main components, and the mounting portion 4 contains the processor 15 and the battery 26 as its main components.
[0041] A signal cable passes through the inside of the arm member 3; it extends into the mounting portion 4 and connects the electrical components in the mounting portion 4 to those in the imaging head 2.
[0042] The image projection unit 13 is a unit for projecting a desired projection image onto a projection surface. In the image projection unit 13, a substrate 16, a plurality of LEDs 17 (collectively referred to below as the "LED array 17A"), a light source lens 18, a projection LCD 19, and the projection optical system 20 are arranged in a straight line along the projection direction 13a (the optical axis of the projection optical system 20). The image projection unit 13 is described in detail later with reference to FIGS. 4(a) to 4(c).
[0043] The image capturing unit 14 is a unit for imaging the subject P. In the image capturing unit 14, the imaging optical system 21 and the CCD 22 are arranged along the light input direction 14a (the optical axis of the imaging optical system 21).
[0044] The imaging optical system 21 is composed of a plurality of lenses and has a well-known autofocus function, by which it automatically adjusts the focal length and aperture to form an image of external light on the CCD 22.
[0045] The CCD 22 has photoelectric conversion elements, such as CCD (Charge Coupled Device) elements, arranged in a matrix. It generates signals corresponding to the color and intensity of the light of the image formed on its surface via the imaging optical system 21, converts them into digital data, and outputs the data to the processor 15.
[0046] The flash 7, the release button 8, the on/off switch 9a, the mode switching switch 9b, the personal computer interface 24, the power supply interface 25, the external memory 27, the cache memory 28, the light source driver 29, the projection LCD driver 30, and the CCD interface 31 are electrically connected to the processor 15. Further, the battery 26 is connected to the processor 15 via the power supply interface 25, the LED array 17A via the light source driver 29, the projection LCD 19 via the projection LCD driver 30, and the CCD 22 via the CCD interface 31. These components connected to the processor 15 are managed by the processor 15.
[0047] The external memory 27 is a removable flash ROM that stores the captured images and three-dimensional information acquired in the digital camera mode, the webcam mode, and the stereoscopic image mode. Specifically, an SD card, an XD card (both registered trademarks), or the like can be used as the external memory 27.
[0048] The cache memory 28 is a high-speed storage device. It is used, for example, as follows: under the control of the processor 15, a captured image is transferred to the cache memory 28 at high speed, subjected to image processing, and then stored in the external memory 27. An SDRAM, DDR RAM, or the like can be used as the cache memory 28.
[0049] The power supply interface 25, the light source driver 29, the projection LCD driver 30, and the CCD interface 31 are each configured as an IC (Integrated Circuit), and they control the battery 26, the LED array 17A, the projection LCD 19, and the CCD 22, respectively.
[0050] As shown in FIG. 3, the main battery 26 for supplying drive power to the imaging head 2 is provided in the mounting portion 4, so the imaging head 2 can be made lighter, which makes the operation of changing the orientation of the imaging head 2 with respect to the PC 49 easier. Moreover, because the imaging head 2 is lightweight, the arm member 3 requires less strength than it would if the main battery 26 were provided in the imaging head 2.
[0051] FIG. 4(a) is an enlarged view of the image projection unit 13, FIG. 4(b) is a plan view of the light source lens 18, and FIG. 4(c) is a diagram showing the arrangement relationship between the projection LCD 19 and the CCD 22.
[0052] As described above, the image projection unit 13 includes, along the projection direction, the substrate 16, the LED array 17A, the light source lens 18, the projection LCD 19, and the projection optical system 20.
[0053] The substrate 16 mounts the LED array 17A and provides the electrical wiring to it. Specifically, an aluminum substrate with through holes, coated with insulating resin and then given a copper pattern by electroless plating, or a single-layer or multilayer substrate with a glass-epoxy base material as its core, can be used as the substrate 16.
[0054] The LED array 17A is a light source whose elements each emit radial light. A plurality of LEDs 17 (light emitting diodes) are arranged in a staggered pattern on the substrate 16. These LEDs 17 are bonded and electrically connected to the substrate 16 via silver paste, and are also electrically connected to the substrate 16 via bonding wires.
[0055] Using a plurality of LEDs 17 as the light source in this way raises the efficiency of converting electricity into light (the electro-optical conversion efficiency) compared with using an incandescent bulb, a halogen lamp, or the like, and at the same time suppresses the generation of infrared and ultraviolet rays. According to this embodiment, therefore, the light source can be driven with little power, achieving power savings and a long life, and the temperature rise of the device can be reduced.
[0056] Because the LEDs 17 generate far less heat radiation than a halogen lamp or the like, and the total heat generated by the light source itself is also low, resin lenses can be adopted for the light source lens 18 and the projection optical system 20, described later. The light source lens 18 and the projection optical system 20 can therefore be made more cheaply and lightly than if glass lenses were used.
[0057] The LEDs 17 constituting the LED array 17A all emit the same color. Each LED 17 uses the four elements Al, In, Ga, and P as its material and emits amber light. There is thus no need to consider correcting the chromatic aberration that arises when several emission colors are used, and no need to adopt an achromatic lens as the projection optical system 20 for chromatic aberration correction, which has the effect of providing projection means with a simple surface configuration and inexpensive materials.
[0058] Furthermore, adopting four-element amber LEDs, whose electro-optical conversion efficiency of about 80 lumen/W is higher than that of other emission colors, achieves still higher luminance, greater power savings, and a longer life. The effect of arranging the LEDs 17 in a staggered pattern is described with reference to FIGS. 4(a) to 4(c).
[0059] Specifically, the LED array 17A consists of 59 LEDs 17, each driven at 50 mW (20 mA, 2.5 V); all 59 LEDs 17 are therefore driven with approximately 3 W of power. The brightness of the light emitted from the LEDs 17, expressed as the luminous flux projected from the projection optical system 20 after passing through the light source lens 18 and the projection LCD 19, is set to about 25 ANSI lumens even for full-area illumination.
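The approximately 3 W figure follows directly from the per-LED drive conditions stated above:

\[ 59 \times 50\,\mathrm{mW} = 59 \times (20\,\mathrm{mA} \times 2.5\,\mathrm{V}) = 2.95\,\mathrm{W} \approx 3\,\mathrm{W}. \]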
[0060] By adopting this brightness, when the three-dimensional shape of a subject such as the face of a person or an animal is detected in the stereoscopic image mode, for example, the person or animal is not dazzled, and the three-dimensional shape can be detected with the eyes open.
[0061] The light source lens 18 is a lens serving as a condensing optical system that condenses the light emitted radially from the LED array 17A; it is made of an optical resin typified by acrylic.
[0062] Specifically, the light source lens 18 includes: convex lens portions 18a protruding toward the projection LCD 19 at positions facing the individual LEDs 17 of the LED array 17A; a base portion 18b supporting the lens portions 18a; a sealing material 18c of epoxy or silicone resin that fills the opening in the internal space of the base portion 18b enclosing the LED array 17A, for sealing the LEDs 17 and bonding the substrate 16 to the light source lens 18; and positioning pins 18d protruding from the base portion 18b toward the substrate 16 to connect the light source lens 18 to the substrate 16.
[0063] The light source lens 18 is fixed on the substrate 16 by inserting the positioning pins 18d into elongated holes formed in the substrate 16, with the LED array 17A enclosed inside the opening.
[0064] The light source lens 18 can therefore be arranged in a small space. Moreover, by giving the substrate 16 the function of supporting the light source lens 18 in addition to the function of mounting the LED array 17A, no separate part for supporting the light source lens 18 is needed, and the number of parts can be reduced.
[0065] The lens portions 18a are arranged at positions facing the LEDs 17 of the LED array 17A in a one-to-one relationship.
[0066] The radial light emitted from each LED 17 is thus efficiently condensed by the facing lens portion 18a and strikes the projection LCD 19 as highly directional radiation, as shown in the figure. The directivity is raised in this way because in-plane unevenness of transmittance caused by the optical rotatory power of the liquid crystal can be suppressed by making the light enter the projection LCD 19 substantially perpendicularly. At the same time, the projection optical system 20 has telecentric characteristics with an entrance NA of about 0.1, and is therefore restricted so that only light within ±5° of the perpendicular can pass through its internal stop. Accordingly, the key points for improving image quality are to align the exit angle of the light from the LEDs 17 perpendicular to the projection LCD 19 and to keep most of the luminous flux within ±5°. If light away from the perpendicular enters the projection LCD 19, the optical rotatory power of the liquid crystal makes the transmittance depend on the angle of incidence, producing transmittance unevenness.
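The ±5° acceptance quoted here is consistent with the stated entrance numerical aperture: for NA ≈ 0.1, the maximum half-angle passed by the internal stop is

\[ \theta_{\max} = \arcsin(\mathrm{NA}) = \arcsin(0.1) \approx 5.7^{\circ}, \]

so rays inclined by much more than about ±5° to the normal of the projection LCD 19 are rejected by the stop.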
[0067] The projection LCD 19 is a spatial modulation element that spatially modulates the light condensed through the light source lens 18 and outputs the desired image signal light toward the projection optical system 20. Specifically, the projection LCD 19 is a plate-shaped liquid crystal display whose vertical and horizontal dimensions differ.
[0068] As shown in FIG. 4(c), the pixels of the projection LCD 19 are arranged so that pixel rows aligned in a straight line along the longitudinal direction of the projection LCD 19 alternate, in parallel, with other pixel rows shifted by a predetermined distance in the longitudinal direction.
[0069] In FIG. 4(c), the front of the imaging head 2 faces the viewer; light is irradiated toward the projection LCD 19 from the back of the page, and the subject image is formed on the CCD 22 from the front of the page.
[0070] By arranging the pixels of the projection LCD 19 in this staggered manner along the longitudinal direction, the light spatially modulated by the projection LCD 19 can be controlled at 1/2 pitch in the direction orthogonal to the longitudinal direction (the transverse direction). The projection pattern can therefore be controlled at a fine pitch, and the three-dimensional shape can be detected with increased resolution and high accuracy.
[0071] In particular, when striped pattern light of alternating light and dark bands is projected toward a subject to detect its three-dimensional shape in the stereoscopic image mode described later, aligning the stripe direction with the transverse direction of the projection LCD 19 allows the light-dark boundaries to be controlled at 1/2 pitch, so the three-dimensional shape can likewise be detected with a high number of resolution points and high accuracy.
[0072] Inside the imaging head 2, the projection LCD 19 and the CCD 22 are arranged in the relationship shown in FIG. 4(c). Specifically, since the wide face of the projection LCD 19 and the wide face of the CCD 22 are oriented in substantially the same direction, an image projected from the projection LCD 19 onto the projection surface can be formed on the CCD 22 as it is, without bending the projected image with a half mirror or the like.
[0073] The CCD 22 is arranged on the longitudinal-direction side of the projection LCD 19 (the side in the direction in which the pixel rows extend). Therefore, particularly when the three-dimensional shape of a subject is detected using the principle of triangulation in the stereoscopic image mode, the inclination formed between the CCD 22 and the subject can be controlled at 1/2 pitch, so the three-dimensional shape can likewise be detected with high accuracy.
[0074] The projection optical system 20 is a group of lenses, made of a combination of glass and resin, that projects the image signal light passing through the projection LCD 19 toward the projection surface, and it is configured as a telecentric lens. Telecentric refers to a configuration in which the principal rays passing through the projection optical system 20 are parallel to the optical axis in the space on the incidence side and the exit pupil lies at infinity. Being telecentric in this way means that, as described above, only light passing through the projection LCD 19 within ±5° of the perpendicular can be projected, which improves image quality.
[0075] FIGS. 5(a) to 5(c) are views for explaining the arrangement of the LED array 17A. FIG. 5(a) shows the illuminance distribution of the light that has passed through the light source lens 18, FIG. 5(b) is a plan view showing the arrangement of the LED array 17A, and FIG. 5(c) shows the combined illuminance distribution at the surface of the projection LCD 19.
[0076] As shown in FIG. 5(a), the light passing through the light source lens 18 is designed to reach the surface of the projection LCD 19 with a half-value spread half-angle θ of approximately 5°, as light having the illuminance distribution illustrated on the left side of FIG. 5(a).
[0077] As shown in FIG. 5(b), the LEDs 17 are arranged in a staggered pattern on the substrate 16. Specifically, LED columns, each consisting of LEDs 17 arranged in series at pitch d, are arranged in parallel at a pitch of (√3/2)d, and every other column is shifted by d/2 in the same direction relative to the adjacent columns.
[0078] In other words, the spacing between any one LED 17 and the LEDs 17 surrounding it is set to d (a triangular lattice arrangement).
[0079] The length d is chosen to be no greater than the full width at half maximum (FWHM) of the illuminance distribution formed on the projection LCD 19 by the light emitted from a single LED 17.
[0080] Consequently, the combined illuminance distribution of the light that passes through the light source lens 18 and reaches the surface of the projection LCD 19 is substantially flat, containing only small ripple, as shown in FIG. 5(c), so the surface of the projection LCD 19 can be illuminated substantially uniformly. Illuminance unevenness at the projection LCD 19 can therefore be suppressed, and as a result a high-quality image can be projected.
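To see numerically why the pitch condition of paragraph [0079] flattens the combined distribution, the following sketch sums the footprints of a staggered LED lattice at the LCD surface. It is illustrative only: the Gaussian footprint model, the grid size, and the boundary case FWHM = d are assumptions, not values taken from the specification.

```python
import numpy as np

# Minimal sketch (assumptions, not specification values): model each
# LED's footprint on the projection LCD as a Gaussian whose FWHM
# equals the lattice pitch d, the boundary case allowed by [0079].
d = 1.0
sigma = d / (2 * np.sqrt(2 * np.log(2)))   # convert FWHM to Gaussian sigma

# Triangular lattice: LEDs at pitch d within a column, columns spaced
# (sqrt(3)/2)*d apart, alternate columns shifted by d/2, so each LED
# sits a distance d from its nearest neighbours.
col_pitch = np.sqrt(3) / 2 * d
leds = [(c * col_pitch, r * d + (d / 2 if c % 2 else 0.0))
        for c in range(8) for r in range(8)]

# Sample the combined illuminance over the central region of the lattice.
xs = np.linspace(2 * col_pitch, 5 * col_pitch, 61)
ys = np.linspace(2 * d, 5 * d, 61)
X, Y = np.meshgrid(xs, ys)
E = np.zeros_like(X)
for lx, ly in leds:
    E += np.exp(-((X - lx) ** 2 + (Y - ly) ** 2) / (2 * sigma ** 2))

# A small value here corresponds to the "small ripple" of FIG. 5(c).
print(f"ripple (max - min) / mean: {(E.max() - E.min()) / E.mean():.3%}")
```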
[0081] FIG. 6 is an electrical block diagram of the image input/output device 1. Description of the components already covered above is omitted. The processor 15 includes a CPU 35, a ROM 36, and a RAM 37.
[0082] The CPU 35 performs various kinds of processing using the RAM 37 in accordance with the programs stored in the ROM 36. The processing performed under the control of the CPU 35 includes detecting presses of the release button 8, capturing image data from the CCD 22, transferring and storing that image data, and detecting the state of the mode switching switch 9b.
[0083] The ROM 36 stores a camera control program 36a, a pattern light photographing program 36b, a luminance image generation program 36c, a code image generation program 36d, a code boundary extraction program 36e, a lens aberration correction program 36f, a triangulation calculation program 36g, a document orientation calculation program 36h, and a plane conversion program 36i.
[0084] The camera control program 36a is a program for controlling the entire image input/output device 1, including the main process shown in FIG. 7.
[0085] The pattern light photographing program 36b is a program for imaging the subject both with pattern light projected onto it, in order to detect the three-dimensional shape of the subject P, and without pattern light projected.
[0086] The luminance image generation program 36c is a program that calculates the Y value in YCbCr space from the RGB values of the image, for each of the pattern-light image captured with pattern light projected by the pattern light photographing program 36b and the no-pattern-light image captured with no pattern light projected.
[0087] A plurality of types of pattern light are projected in time series, an image is captured for each pattern light, and a plurality of types of luminance images are thereby generated.
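As a concrete reading of the luminance calculation in paragraph [0086], the Y plane of YCbCr can be derived from RGB with the ITU-R BT.601 weights. The specification states only that Y is computed from the RGB values, so the particular coefficients below are an assumption.

```python
import numpy as np

def luminance_image(rgb: np.ndarray) -> np.ndarray:
    """Return the Y (luma) plane of an H x W x 3 RGB image.

    Uses the ITU-R BT.601 weighting commonly used for YCbCr; the
    specification only says Y is derived from RGB, so these exact
    coefficients are an assumption.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# One luminance image per projected pattern, plus one with no pattern:
#   y_on  = luminance_image(pattern_light_image)
#   y_off = luminance_image(no_pattern_light_image)
```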
[0088] The code image generation program 36d is a program that binarizes the plurality of luminance images generated by the luminance image generation program 36c, with reference to a preset luminance threshold or to the luminance image of the no-pattern-light image, and from the result generates a code image in which a predetermined code is assigned to each pixel.
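A minimal sketch of this binarize-and-code step is shown below. Thresholding against the no-pattern luminance image is one of the two options the paragraph names; packing bit i from pattern i, and then decoding the Gray code implied by FIG. 16(b) into a plain stripe index, are illustrative choices rather than the claimed procedure.

```python
import numpy as np

def code_image(pattern_lumas, no_pattern_luma):
    """Binarize each pattern's luminance image and pack the bits into
    a per-pixel code (one bit per projected mask pattern)."""
    code = np.zeros(no_pattern_luma.shape, dtype=np.uint16)
    for i, luma in enumerate(pattern_lumas):
        bit = (luma > no_pattern_luma).astype(np.uint16)  # 1 = bright stripe
        code |= bit << i                                  # pattern i -> bit i
    return code

def gray_to_binary(gray: np.ndarray) -> np.ndarray:
    """Convert Gray-coded per-pixel values to plain binary stripe indices."""
    binary = gray.copy()
    mask = gray >> 1
    while mask.any():          # XOR in successively shifted copies
        binary ^= mask
        mask >>= 1
    return binary
```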
[0089] The code boundary extraction program 36e is a program that uses the code image generated by the code image generation program 36d and the luminance images generated by the luminance image generation program 36c to obtain the boundary coordinates of the codes with sub-pixel accuracy.
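The sub-pixel procedure itself is deferred to the flowcharts of FIGS. 20 to 22, but its core idea can be sketched as locating, by interpolation between adjacent pixels, where the luminance profile crosses the threshold. The linear interpolation below is a hypothetical stand-in, not the claimed procedure.

```python
def subpixel_boundary(luma_row, thresh_row, x):
    """Estimate where the luminance crosses the threshold between
    integer pixels x and x+1 along one CCD row.

    luma_row:   luminance values along the row
    thresh_row: per-pixel threshold (e.g. from the no-pattern image)
    x:          pixel index where the binarized code changes

    Returns a float coordinate in [x, x+1].
    """
    d0 = luma_row[x] - thresh_row[x]          # signed margin at x
    d1 = luma_row[x + 1] - thresh_row[x + 1]  # opposite sign at x+1
    if d0 == d1:                              # flat profile: no crossing
        return float(x)
    return x + d0 / (d0 - d1)                 # zero-crossing fraction
```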
[0090] The lens aberration correction program 36f is a program that applies aberration correction for the imaging optical system 21 to the code boundary coordinates obtained with sub-pixel accuracy by the code boundary extraction program 36e.
[0091] The triangulation calculation program 36g is a program that calculates, from the aberration-corrected code boundary coordinates, the three-dimensional coordinates in real space corresponding to those boundary coordinates.
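Ahead of the full description given with FIGS. 24(a) and 24(b), the triangulation step can be illustrated as intersecting the camera's viewing ray for a boundary pixel with the projector light plane identified by the code. Everything below — the pinhole model, the coordinate frame, and the plane parameterization — is a simplifying assumption for illustration, not the claimed computation.

```python
import numpy as np

def triangulate(u, v, f_cam, baseline, theta):
    """Toy structured-light triangulation (illustrative assumptions).

    u, v:     boundary coordinate on the CCD, in pixels, with the
              origin at the principal point (pinhole camera model)
    f_cam:    camera focal length in pixels
    baseline: camera-to-projector distance along the x axis
    theta:    angle of the projector light plane (looked up from the
              code) about the v axis
    Returns the 3-D point in camera coordinates.
    """
    ray = np.array([u, v, f_cam], dtype=float)
    ray /= np.linalg.norm(ray)                          # viewing ray direction
    n = np.array([np.cos(theta), 0.0, -np.sin(theta)])  # light-plane normal
    p0 = np.array([baseline, 0.0, 0.0])                 # a point on the plane
    t = n.dot(p0) / n.dot(ray)                          # ray/plane intersection
    return t * ray
```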
[0092] The document orientation calculation program 36h is a program that estimates the three-dimensional shape of the subject P, such as a book, from the three-dimensional coordinates calculated by the triangulation calculation program 36g.
[0093] The plane conversion program 36i is a program that, based on the three-dimensional shape of the subject P (such as a document) calculated by the document orientation calculation program 36h, generates a flattened image as if the subject P, such as a book, had been imaged from the front.
[0094] In the RAM 37, a pattern-light image storage section 37a, a no-pattern-light image storage section 37b, a luminance image storage section 37c, a code image storage section 37d, a code boundary coordinate storage section 37e, an ID storage section 37f, an aberration correction coordinate storage section 37g, a three-dimensional coordinate storage section 37h, a document orientation calculation result storage section 37i, a plane conversion result storage section 37j, a projection image storage section 37k, and a working area 37l are allocated as storage areas.
[0095] The pattern-light image storage section 37a stores the pattern-light image captured by the pattern light photographing program 36b with pattern light projected onto the subject P. The no-pattern-light image storage section 37b stores the no-pattern-light image captured with no pattern light projected onto the subject P.
[0096] The luminance image storage section 37c stores the luminance images generated by the luminance image generation program 36c. The code image storage section 37d stores the code image generated by the code image generation program 36d. The code boundary coordinate storage section 37e stores the boundary coordinates of each code, extracted with sub-pixel accuracy by the code boundary extraction program 36e. The ID storage section 37f stores IDs and the like assigned to luminance images that have a light-dark change at a pixel position on a boundary. The aberration correction coordinate storage section 37g stores the code boundary coordinates after aberration correction by the lens aberration correction program 36f. The three-dimensional coordinate storage section 37h stores the three-dimensional coordinates in real space calculated by the triangulation calculation program 36g.
[0097] The document orientation calculation result storage section 37i stores the parameters of the three-dimensional shape of the subject P (such as a document) calculated by the document orientation calculation program 36h. The plane conversion result storage section 37j stores the plane conversion results generated by the plane conversion program 36i. The projection image storage section 37k stores the image information to be projected from the image projection unit 13. The working area 37l stores data used temporarily for computation by the CPU 35.
[0098] In addition to the components described above, an amplifier 32 is connected to the processor 15 via a bus line. The amplifier 32 drives a speaker 33 connected to it to output warning sounds and the like.
[0099] The PC 49 has a CPU 50. In the PC 49, a ROM 51 and a RAM 52 are connected to the CPU 50 via an internal bus. Also in the PC 49, an interface 55 connectable to the personal computer interface 24, a CRT display 58, and a keyboard 59 are electrically connected, via an input/output port 54, to the internal bus to which the CPU 50 is connected.
[0100] FIG. 7 is a flowchart of the main process executed under the control of the CPU 35. The digital camera process (S605), webcam process (S607), stereoscopic image process (S609), and flattened image process (S611) within this main process are described in detail later.
[0101] In the main process, first, when the power is turned on (S601), the processor 15, the other interfaces, and so on are initialized (S602).
[0102] A key scan is then performed to determine the state of the mode switching switch 9b (S603), and it is determined whether the mode switching switch 9b is set to the digital camera mode (S604). If it is (S604: Yes), the process proceeds to the digital camera process described later (S605).
[0103] If the mode switching switch 9b is not set to the digital camera mode (S604: No), it is determined whether it is set to the webcam mode (S606). If it is (S606: Yes), the process proceeds to the webcam process described later (S607).
[0104] If the mode switching switch 9b is not set to the webcam mode (S606: No), it is determined whether it is set to the stereoscopic image mode (S608). If it is (S608: Yes), the process proceeds to the stereoscopic image process described later (S609).
[0105] If the mode switching switch 9b is not set to the stereoscopic image mode (S608: No), it is determined whether it is set to the flattened image mode (S610). If it is (S610: Yes), the process proceeds to the flattened image process described later (S611).
[0106] If the mode switching switch 9b is not set to the flattened image mode (S610: No), it is determined whether it is set to the off mode (S612). If it is not set to the off mode (S612: No), the processing from S603 is repeated. If it is set to the off mode (S612: Yes), the process ends.
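Restated as code, the flow of S601 to S612 is a poll-and-dispatch loop over the mode switching switch 9b. The sketch below is hypothetical; the handler functions are stubs standing in for the processes described later.

```python
def initialize(): pass                       # S602: init processor/interfaces
def key_scan(): return "off"                 # S603: read switch 9b (stub)
def digicam_process(): pass                  # S605 (described later)
def webcam_process(): pass                   # S607
def stereoscopic_image_process(): pass       # S609
def flattened_image_process(): pass          # S611

def main_process():
    initialize()                             # S601 -> S602
    while True:
        mode = key_scan()                    # S603
        if mode == "digicam":                # S604: Yes
            digicam_process()
        elif mode == "webcam":               # S606: Yes
            webcam_process()
        elif mode == "stereo":               # S608: Yes
            stereoscopic_image_process()
        elif mode == "flatten":              # S610: Yes
            flattened_image_process()
        elif mode == "off":                  # S612: Yes -> end
            break
        # S612: No -> repeat from S603
```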
[0107] The digital camera process, the webcam process, and the stereoscopic image process will now be described in detail. In these processes, condition determinations are made based on the relative positional relationship between the image projection unit 13 and the image capturing unit 14, so this relative positional relationship is described first.
[0108] FIGS. 8(a) and 8(b) are diagrams showing the positional relationship between the image projection unit 13 and the image capturing unit 14. First, the case where, based on the signals output by the imaging head position detection sensor S1 and the imaging unit position detection sensor S2, it is detected that the projection direction of the image projection unit 13 (the optical axis direction of the projection optical system 20) is within 0° ± 30° of the vertical direction (approximately -30° to approximately 30°) and that the imaging direction of the image capturing unit 14 (the optical axis direction of the imaging optical system 21) is other than 0° with respect to the front of the imaging head 2 (the surface on which the lens barrel 5 is provided) is hereinafter referred to as the A mode (the state shown at A in FIG. 8(b)).
[0109] The case where, based on the signals output by the imaging head position detection sensor S1 and the imaging unit position detection sensor S2, it is detected that the projection direction of the image projection unit 13 is within 0° ± 30° of the vertical direction and that the imaging direction of the image capturing unit 14 is approximately 0° with respect to the front of the imaging head 2 is hereinafter referred to as the B mode (the state shown at B in FIG. 8(b)).
[0110] In the B mode, the direction in which the subject imaged by the image capturing unit 14 exists and the direction in which the image signal light is projected by the image projection unit 13 form a large angle close to perpendicular to each other. The image signal light is therefore not projected onto the subject, and even if imaging is performed by the image capturing unit 14 while the image projection unit 13 projects image signal light, the projected image can be kept out of the captured image. Consequently, when the positional relationship between the image projection unit 13 and the image capturing unit 14 is the B mode, imaging can be performed as appropriate while, for example, character strings indicating the operation method or operation procedure (related information concerning the captured image acquired by the imaging means) are projected and the user checks those projected character strings. Also, for example, by projecting the image signal light in a direction different from the direction in which the user is present and imaging with the image capturing unit 14 facing the user, the user can image himself or herself while viewing the image projected by the image projection unit 13.
[0111] The case where, based on the signals output by the imaging head position detection sensor S1 and the imaging unit position detection sensor S2, it is detected that the projection direction of the image projection unit 13 is outside 0° ± 30° of the vertical direction and that the imaging direction of the image capturing unit 14 is other than approximately 0° with respect to the front of the imaging head 2 is hereinafter referred to as the C mode (the state shown at C in FIG. 8(b)).
[0112] The case where, based on the signals output by the imaging head position detection sensor S1 and the imaging unit position detection sensor S2, it is detected that the projection direction of the image projection unit 13 is outside 0° ± 30° of the vertical direction and that the imaging direction of the image capturing unit 14 is approximately 0° with respect to the front of the imaging head 2 is hereinafter referred to as the D mode (the state shown at D in FIG. 8(b)).
[0113] In the D mode, the direction in which the subject imaged by the image capturing unit 14 exists and the direction in which the image signal light is projected by the image projection unit 13 form a large angle close to perpendicular to each other, so the image signal light is not projected onto the subject, and even if imaging is performed by the image capturing unit 14 while the image projection unit 13 projects image signal light, the projected image can be kept out of the captured image.
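Since the A to D modes are defined by just two angle tests, the classification can be expressed compactly. Below is a sketch under the assumption that sensor S1 delivers the projection angle relative to vertical and sensor S2 the imaging angle relative to the front of the imaging head, both in degrees; the function name and the tolerance parameter for "approximately 0°" are illustrative, not from the specification.

```python
def classify_mode(proj_angle_deg: float, imaging_angle_deg: float,
                  front_tol_deg: float = 1.0) -> str:
    """Classify the projector/camera arrangement into modes A-D.

    proj_angle_deg: projection direction relative to vertical (sensor S1).
    imaging_angle_deg: imaging direction relative to the front of the
        imaging head 2 (sensor S2).
    front_tol_deg: assumed tolerance for "approximately 0 degrees".
    """
    proj_near_vertical = -30.0 <= proj_angle_deg <= 30.0
    imaging_front = abs(imaging_angle_deg) <= front_tol_deg

    if proj_near_vertical:
        return "B" if imaging_front else "A"   # A: camera off-front
    return "D" if imaging_front else "C"       # projector off-vertical
```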
[0114] FIG. 9 is a flowchart of the digital camera process (S605 in FIG. 7). The digital camera process is a process of acquiring an image captured by the image capturing unit 14.
[0115] In this process, first, a high resolution setting signal is transmitted to the CCD 22 (S701). This makes it possible to provide the user with a high-quality captured image.
[0116] Next, the position of the image projection unit 13 is acquired based on the signal output from the imaging head position detection sensor S1, and the position of the image capturing unit 14 is acquired based on the signal output from the imaging unit position detection sensor S2 (S702b).
[0117] Subsequently, it is checked whether the positional relationship between the image projection unit 13 and the image capturing unit 14 is the above-described A mode (see FIGS. 8(a) and 8(b)) (S702c). When the positional relationship is the A mode (S702c: Yes), the image projection unit 13 projects rectangular image light, an image indicating the imageable range, onto the imageable area (S702d). FIG. 10 is a diagram showing a state in which the rectangular image light indicating the imageable area is projected onto a surface. In this way, the user can visually confirm the imageable area from the rectangular image light before actually capturing an image by pressing the release button 8.
[0118] On the other hand, when the positional relationship between the image projection unit 13 and the image capturing unit 14 is not the above-described A mode (S702c: No), it is checked whether it is the B mode (see FIG. 8) (S702e). When the positional relationship is the B mode (S702e: Yes), an image such as characters notifying the user that imaging is possible is projected by the image projection unit 13 (S702f).
[0119] On the other hand, when the positional relationship between the image projection unit 13 and the image capturing unit 14 is not the above-described B mode (S702e: No), the speaker 33 (see FIG. 6) is sounded to notify the user that imaging is possible (S702g). That is, when the projection direction of the image signal light from the image projection unit 13 is outside 0° ± 30° of the vertical direction (the C mode or the D mode), there is a high possibility that no projectable surface exists in the projection direction of the image projection unit 13. Therefore, when the positional relationship between the image projection unit 13 and the image capturing unit 14 is neither the A mode nor the B mode described above, the fact that imaging is possible is reported by sound rather than by a projected image. Instead of the sound notification, the fact that imaging is possible may be reported by lighting or blinking a pilot lamp or the like (not shown).
[0120] Next, the release button 8 is scanned (S703a), and it is determined whether the release button 8 has been half-pressed (S703b). If the release button 8 has been half-pressed (S703b: Yes), the autofocus (AF) and automatic exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S703c). If the release button 8 has not been half-pressed (S703b: No), the processing from S703a is repeated.
[0121] Next, the release button 8 is scanned again (S703d), and it is determined whether the release button 8 has been fully pressed (S703e). If the release button 8 has been fully pressed (S703e: Yes), it is determined whether the flash mode is set (S704).
[0122] As a result, if the flash mode is set (S704: Yes), the flash 7 is fired (S705) and an image is captured (S706). If the flash mode is not set (S704: No), an image is captured without firing the flash 7 (S706). If it is determined in S703e that the release button 8 has not been fully pressed (S703e: No), the processing from S703a is repeated.
[0123] Here, when it has been determined in S702e that the mode is the B mode and an image notifying the user that imaging is possible is being projected in S702f, the subject P is imaged in S706 while the projection of that image continues. That is, in the B mode, the imaging direction is approximately 0° with respect to the front of the imaging head 2, and the direction in which the image capturing unit 14 captures images and the direction in which the image projection unit 13 projects the image signal light form a large angle close to perpendicular to each other. The image signal light from the image projection unit 13 is therefore not projected onto the subject to be imaged by the image capturing unit 14, so even if an image is captured while the image projection unit 13 is projecting, the subject P can be imaged without including the projected image.

[0124] Next, the captured image is transferred from the CCD 22 to the cache memory 28 (S707), and the captured image stored in the cache memory 28 is displayed on the monitor LCD 10 (S708). By transferring the captured image to the cache memory 28 in this way, it can be displayed on the monitor LCD 10 faster than when it is transferred to the main memory. The captured image is then stored in the external memory 27 (S709).
[0125] Finally, it is determined whether there is any change in the mode switching switch 9 (S710). If there is no change (S710: Yes), the processing from S702b is repeated. If there is a change in the mode switching switch 9 (S710: No), the process ends.
[0126] FIG. 11 is a flowchart of the webcam process (S607 in FIG. 7). The webcam process is a process of transmitting captured images (including still images and moving images) captured by the image capturing unit 14 to an external network. In this embodiment, it is assumed that a moving image is transmitted to the external network as the captured image.
[0127] In this process, first, a low resolution setting signal is transmitted to the CCD 22 (S801), the well-known autofocus (AF) and automatic exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S802a).
[0128] Next, the position of the image projection unit 13 is acquired based on the signal output from the imaging head position detection sensor S1, and the position of the image capturing unit 14 is acquired based on the signal output from the imaging unit position detection sensor S2 (S802b).
[0129] Subsequently, it is checked whether the positional relationship between the image projection unit 13 and the image capturing unit 14 is the above-described A mode (see FIG. 8) (S802c). When the positional relationship is the A mode (S802c: Yes), rectangular image light, an image indicating the imageable range, is projected by the image projection unit 13 (S802d), as in the process of S702d in the digital camera process (see FIG. 9).
[0130] On the other hand, when the positional relationship between the image projection unit 13 and the image capturing unit 14 is not the above-described A mode (S802c: No), it is checked whether it is the B mode (see FIG. 8) (S802e). In the case of the B mode (S802e: Yes), a projection process described later is performed (S802f), and imaging is then performed by the image capturing unit 14. When the positional relationship between the image projection unit 13 and the image capturing unit 14 is the B mode, the imaging direction intersects the projection direction at a large angle close to perpendicular, so the image signal light from the image projection unit 13 is not projected onto the subject P to be imaged by the image capturing unit 14. Therefore, as with the process of S702f in the digital camera process (see FIG. 9), even if an image is captured while the image projection unit 13 projects an image, the subject P can be imaged without including the projected image.
[0131] FIG. 12 is a diagram showing a state in which image output light is projected by the image projection unit 13 in the projection process (S802f) of the webcam process. As shown in FIG. 12, in the webcam process, for example, a message image M composed of a character string indicating the imaging mode, such as "WEBCAM mode", and the captured image f are combined and projected. At the start of the webcam process, no captured image f has yet been obtained, so only the message image M is projected.
[0132] On the other hand, when the positional relationship between the image projection unit 13 and the image capturing unit 14 is not the above-described B mode (S802e: No), the speaker 33 (see FIG. 6) is sounded to notify the user that imaging is possible (S802g), and imaging is started (S803). That is, for the same reason as in the digital camera process (see FIG. 9), when the positional relationship between the image projection unit 13 and the image capturing unit 14 is neither the A mode nor the B mode, no image is projected by the image projection unit 13, and the user is notified by sound only.
[0133] Next, the captured image is displayed on the monitor LCD 10 (S804). Then, a message image M composed of a character string such as "Imaging in progress" and the captured image f are combined and stored in the projection image storage unit 37k (S805). That is, since the projection image stored in the projection image storage unit 37k is updated in the process of S805, the captured image f and the message image M are projected by the updated image output light in the projection process of S802f, and the user can visually confirm changes in the imaging state of the image capturing unit 14 through the projected image.
[0134] Next, the captured image is transferred from the CCD 22 to the cache memory 28 (S807), and the captured image transferred to the cache memory 28 is transmitted to the external network via the RF driver 24, which is an RF interface, and the antenna 11 (S808).
[0135] Finally, it is determined whether there is any change in the mode switching switch 9 (S809). If there is no change in the mode switching switch 9 (S809: Yes), the processing from S802a is repeated. If there is a change in the mode switching switch 9 (S809: No), the process ends.
[0136] In the above-described webcam process, the message image M and the captured image f are projected as shown in FIG. 12; however, only the captured image f may be projected by the image projection unit 13, as shown in FIG. 13.
[0137] Also, as shown in FIG. 13, when the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is the B mode, the user can, for example, check at any time the captured image f obtained while imaging himself or herself.
[0138] FIG. 14 is a flowchart of the projection process (S802f in FIG. 11). This process projects the image stored in the projection image storage unit 37k from the image projection unit 13 onto the projection surface. In this process, first, it is checked whether an image is stored in the projection image storage unit 37k (S901). If an image is stored (S901: Yes), the image stored in the projection image storage unit 37k is transferred to the projection LCD driver 30 (S902). Next, an image signal corresponding to the image is sent from the projection LCD driver 30 to the projection LCD 19, and the image is displayed on the projection LCD 19 (S903).
[0139] Next, the light source driver 29 is driven (S904), and the LED array 17A is turned on by an electric signal from the light source driver 29 (S905). The process then ends.
[0140] When the LED array 17A is turned on in this way, the light emitted from the LED array 17A reaches the projection LCD 19 via the light source lens 18, and in the projection LCD 19 that light is spatially modulated in accordance with the image signal transmitted from the projection LCD driver 30. The spatially modulated light is output from the projection LCD 19 as image signal light, and the image signal light output from the projection LCD 19 is projected as a projection image onto the projection surface via the projection optical system 20.
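The projection process of S901 through S905 is a fixed sequence: transfer the stored image to the LCD driver, display it on the projection LCD, then drive the light source. A minimal sketch of that ordering follows; all object and method names are hypothetical, since the specification describes hardware rather than an API.

```python
def projection_process(store, lcd_driver, light_source_driver):
    image = store.get()               # S901: check projection image storage 37k
    if image is None:
        return                        # nothing to project
    lcd_driver.load(image)            # S902: transfer to projection LCD driver 30
    lcd_driver.display()              # S903: image signal shown on projection LCD 19
    light_source_driver.drive()       # S904: drive light source driver 29
    light_source_driver.light_leds()  # S905: LED array 17A illuminates the LCD
```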
[0141] FIG. 15 is a flowchart of the stereoscopic image process (S609 in FIG. 7). The stereoscopic image process detects the three-dimensional shape of the subject and acquires and displays a three-dimensional shape detection result image as the stereoscopic image.
[0142] In this process, first, a high resolution setting signal is transmitted to the CCD 22 (S1001). Then, the position of the image projection unit 13 is acquired based on the signal output from the imaging head position detection sensor S1, and the position of the image capturing unit 14 is acquired based on the signal output from the imaging unit position detection sensor S2 (S1002b).
[0143] Subsequently, it is checked whether the positional relationship between the image projection unit 13 and the image capturing unit 14 is the above-described C mode (see FIG. 8) (S1002c). When the positional relationship is the C mode (S1002c: Yes), the speaker 33 is sounded to notify the user by voice that imaging is possible (S1002d).
[0144] On the other hand, when the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is not the above-described C mode (S1002c: No), it is checked whether it is the A mode (see FIG. 8) (S1002e). In the case of the A mode (S1002e: Yes), the speaker 33 is sounded to notify the user by voice that imaging is possible (S1002d). In the A mode, instead of this voice notification, a message may be projected, or rectangular image light indicating the imageable range may be projected.
[0145] On the other hand, when the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is neither the above-described C mode nor the A mode (S1002e: No), the speaker 33 is sounded to warn the user by voice to bring the device into the A mode or the C mode (S1002g), and the process returns to S1002b. That is, until the user adjusts the imaging head 2 and the imaging case 11 to attain the A mode or the C mode, detection of the three-dimensional shape by the three-dimensional shape detection process (S1006) described later is prohibited.
[0146] As will be described in detail later, in the three-dimensional shape detection process (S1006) the subject is imaged by the image capturing unit 14 while predetermined pattern light is projected by the image projection unit 13, and the three-dimensional shape of the subject P is detected based on the imaging data acquired in that way. Therefore, when the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is not the predetermined relationship, the accuracy of the three-dimensional shape measurement deteriorates significantly; for this reason, execution of the three-dimensional shape detection process (S1006) is permitted only when the image projection unit 13 and the image capturing unit 14 are in the predetermined relationship.
[0147] Next, the release button 8 is scanned (S1003a), and it is determined whether the release button 8 has been half-pressed (S1003b). If the release button 8 has been half-pressed (S1003b: Yes), the autofocus (AF) and automatic exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S1003c). If the release button 8 has not been half-pressed (S1003b: No), the processing from S1003a is repeated.
[0148] Next, the release button 8 is scanned again (S1003d), and it is determined whether the release button 8 has been fully pressed (S1003e). If the release button 8 has been fully pressed (S1003e: Yes), it is determined whether the flash mode is set (S1003f).
[0149] As a result, if the flash mode is set (S1003f: Yes), the flash 7 is fired (S1003g) and an image is captured (S1003h). If the flash mode is not set (S1003f: No), an image is captured without firing the flash 7 (S1003h). If it is determined in S1003e that the release button 8 has not been fully pressed (S1003e: No), the processing from S1003a is repeated.
[0150] Next, the three-dimensional shape detection process described later is executed, and the three-dimensional shape of the subject is detected (S1006).
[0151] Next, the three-dimensional shape detection result of the three-dimensional shape detection process (S1006) is stored in the external memory 27 (S1007), and the three-dimensional shape detection result is displayed on the monitor LCD 10 (S1008a). The three-dimensional shape detection result is displayed as a set of three-dimensional coordinates (X, Y, Z) in real space of the measurement vertices. That is, a three-dimensional shape detection result image is displayed on the monitor LCD 10 as a stereoscopic image (a 3D CG image) in which the measurement vertices obtained as the three-dimensional shape detection result are connected by polygons and the resulting surface is rendered.
[0152] Next, it is determined whether there is any change in the mode switching switch 9 (S1011). If there is no change (S1011: Yes), the processing from S1002b is repeated. If there is a change in the mode switching switch 9 (S1011: No), the process ends.
[0153] FIG. 16(a) is a diagram for explaining the principle of the spatial coding method used to detect the three-dimensional shape in the above-described three-dimensional shape detection process (S1006 in FIG. 15), and FIG. 16(b) is a diagram showing pattern light different from that of FIG. 16(a). Either of FIGS. 16(a) and 16(b) may be used as the pattern light, and furthermore a gray level code, which is a multi-tone code, may be used.
[0154] Details of this spatial coding method are disclosed in Kosuke Sato and one other, "Distance Image Input by Spatial Coding" (空間コード化による距離画像入力), Transactions of the Institute of Electronics and Communication Engineers of Japan, 85/3, Vol. J68-D, No. 3, pp. 369-375.
[0155] The spatial coding method is one type of method for detecting the three-dimensional shape of a subject based on triangulation between the projected light and the observed image. As shown in FIG. 16(a), it is characterized in that the projection light source L and the observer O are installed at a distance D from each other, and the space is divided into elongated fan-shaped regions and coded.
[0156] When the three mask patterns A, B, and C in the figure are projected in order from the MSB, each fan-shaped region is coded by the masks into bright "1" and dark "0". For example, the region containing the point P receives no light through masks A and B and is bright through mask C, so it is coded as 001 (A = 0, B = 0, C = 1).
[0157] Each fan-shaped region is assigned a code corresponding to its direction φ, and the boundary of each code can be regarded as one slit light beam. The scene is therefore photographed for each mask with a camera serving as the observation device, and the bright/dark patterns are binarized to construct the bit planes of a memory.
[0158] The horizontal position (address) of the multi-bit-plane image thus obtained corresponds to the observation direction θ, and the contents of the memory at that address give the projected light code, i.e., φ. The coordinates of the point of interest are determined from θ and φ.
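As a concrete illustration of this last step, consider the usual planar triangulation geometry: place the observer O at the origin, the light source L at distance D along the baseline, and measure the observation direction θ and the decoded projection direction φ from the baseline. Intersecting the two rays gives the coordinates of the point of interest. This is a standard formulation consistent with FIG. 16(a); the specific angle conventions are an assumption, not a formula quoted from the specification.

```latex
% Observation ray: z = x\tan\theta;  projection ray: z = (D - x)\tan\varphi.
% Solving the two ray equations for the intersection point (x, z):
x = \frac{D\,\tan\varphi}{\tan\theta + \tan\varphi}, \qquad
z = \frac{D\,\tan\theta\,\tan\varphi}{\tan\theta + \tan\varphi}
```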
[0159] As for the mask patterns used in this method, FIG. 16(a) illustrates the case where a pure binary code such as mask patterns A, B, and C is used; however, if positional misalignment of the masks occurs, there is a risk that a large error will arise at the boundaries of the regions.
[0160] For example, the point Q in FIG. 16(a) lies on the boundary between region 3 (011) and region 4 (100). If the "1" of mask A shifts, the code of region 7 (111) may be produced. In other words, wherever the Hamming distance between adjacent regions is 2 or more, a large error may occur.
[0161] Therefore, as shown in FIG. 16(b), the above-described coding errors can be avoided by using, as the mask patterns of this method, a code in which the Hamming distance between adjacent regions is always 1.
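The property "Hamming distance 1 between adjacent regions" is exactly the defining property of a Gray code, so mask patterns of this kind can be generated from the binary region index. A short sketch follows; the conversion itself is the standard binary-reflected Gray code, while the bit-plane-to-mask mapping is an illustrative assumption.

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code of n: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def mask_bit(region: int, plane: int, num_planes: int) -> int:
    """Bright (1) or dark (0) value of fan-shaped `region` in mask `plane`,
    counting planes from the MSB as in FIG. 16."""
    g = gray_encode(region)
    return (g >> (num_planes - 1 - plane)) & 1

# Example: with 3 planes, regions 3 and 4 get Gray codes 010 and 110,
# which differ in a single bit, avoiding the large error at point Q.
assert bin(gray_encode(3) ^ gray_encode(4)).count("1") == 1
```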
[0162] FIG. 17(a) is a flowchart of the three-dimensional shape detection process (S1006 in FIG. 15). In this process, first, an imaging process is performed (S1210). The imaging process projects, from the image projection unit 13, striped pattern light in which bright and dark stripes alternate (see FIGS. 1 and 18) onto the subject in time series, using the plurality of pure binary code mask patterns shown in FIG. 16(a), and acquires pattern-light images, each capturing the state in which one of the pattern lights is projected, and a no-pattern-light image capturing the state in which no pattern light is projected. FIG. 1 is a diagram showing the state in which this pattern light is projected when the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is the A mode, and FIG. 18 is a diagram showing the state in which this pattern light is projected when the relative positional relationship between the image projection unit 13 and the image capturing unit 14 is the C mode.
[0163] When the imaging process ends (S1210), a three-dimensional measurement process is performed (S1220). The three-dimensional measurement process actually measures the three-dimensional shape of the subject using the pattern-light images and the no-pattern-light image acquired by the imaging process. When the three-dimensional measurement process ends (S1220), the process as a whole ends.
[0164] FIG. 17(b) is a flowchart of the imaging process (S1210 in FIG. 17(a)). This process is executed based on the pattern light photographing program 36a. First, a no-pattern-light image is acquired by imaging the subject with the image capturing unit 14 without projecting pattern light from the image projection unit 13 (S1211). The acquired no-pattern-light image is stored in the no-pattern-light image storage unit 37b.
[0165] Next, a counter i is initialized (S1212), and it is determined whether the value of the counter i has reached the maximum value imax (S1213). The maximum value imax is determined by the number of mask patterns used; for example, when eight types of mask patterns are used, imax = 8.
[0166] As a result of the determination, if the value of the counter i is smaller than the maximum value imax (S1213: Yes), the i-th mask pattern among the mask patterns to be used is displayed on the projection LCD 19, and the i-th pattern light produced by that mask pattern is projected onto the projection surface (S1214). The state in which that pattern light is projected is then photographed by the image capturing unit 14 (S1215).
[0167] In this way, a pattern-light image capturing the state in which the i-th pattern light is projected onto the subject is acquired. The acquired pattern-light image is stored in the pattern-light image storage unit 37a.
[0168] When the photographing ends, the projection of the i-th pattern light is terminated (S1216), "1" is added to the counter i in order to project the next pattern light (S1217), and the processing from S1213 is repeated.
[0169] When it is determined that the value of the counter i is larger than the maximum value imax (S1213: No), the process ends. That is, in this imaging process, one no-pattern-light image and imax pattern-light images are acquired.
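The imaging process of S1211 through S1217 therefore reduces to one unpatterned exposure followed by a loop over the mask patterns. A minimal sketch, with hypothetical `projector` and `camera` objects standing in for the image projection unit 13 and the image capturing unit 14:

```python
def imaging_process(projector, camera, masks):
    """Capture one no-pattern-light image and one image per mask pattern."""
    no_pattern = camera.capture()        # S1211: nothing projected
    pattern_images = []
    for mask in masks:                   # S1213: loop while i < imax = len(masks)
        projector.project(mask)          # S1214: show mask i on projection LCD 19
        pattern_images.append(camera.capture())  # S1215: photograph projected state
        projector.stop()                 # S1216: end projection of pattern i
    return no_pattern, pattern_images    # 1 + imax images in total
```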
[0170] FIG. 17(c) is a flowchart of the three-dimensional measurement process (S1220 in FIG. 17(a)). This process is executed based on the luminance image generation program 36c. First, luminance images are generated (S1221). Here, the luminance is the Y value in YCbCr space, calculated from the RGB values of each pixel as Y = 0.2989·R + 0.5866·G + 0.1145·B. By obtaining the Y value for every pixel, a luminance image is generated for each of the pattern-light images and the no-pattern-light image. The generated luminance images are stored in the luminance image storage unit 37c, and a number corresponding to the number of the pattern light is assigned to each luminance image.
[0171] Next, the code image generation program 36d generates a code image in which a code is assigned to each pixel by combining the generated luminance images using the above-described spatial coding method (S1222).
[0172] This code image can be generated by binarizing each pixel of the luminance images of the pattern-light images stored in the luminance image storage unit 37c, by comparison with a preset luminance threshold or with the no-pattern-light image, and assigning the results to the LSB through MSB as described with reference to FIGS. 16(a) and 16(b). The generated code image is stored in the code image storage unit 37d.
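The two steps S1221 and S1222 thus amount to a per-pixel luminance conversion followed by per-pixel bit assembly. The sketch below assumes NumPy arrays of shape (H, W, 3) for the captured RGB images and uses the no-pattern-light luminance as the threshold, one of the two options the text allows; the coefficients are those given above.

```python
import numpy as np

def luminance(rgb: np.ndarray) -> np.ndarray:
    """Y value per pixel (S1221), using the coefficients from the text."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.2989 * r + 0.5866 * g + 0.1145 * b

def code_image(pattern_rgbs, no_pattern_rgb) -> np.ndarray:
    """Spatial-code image (S1222): binarize each pattern-light luminance
    image against the no-pattern-light image and pack the bits MSB-first,
    matching the MSB-first projection order described above."""
    threshold = luminance(no_pattern_rgb)
    codes = np.zeros(threshold.shape, dtype=np.int32)
    for rgb in pattern_rgbs:             # pattern_rgbs ordered MSB to LSB
        bit = (luminance(rgb) > threshold).astype(np.int32)
        codes = (codes << 1) | bit
    return codes
```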
[0173] Next, the code boundary extraction program 36e performs a code boundary coordinate detection process described later (S1223), and the boundary coordinates of the codes assigned to the pixels are detected with sub-pixel accuracy.
[0174] Next, a lens aberration correction process is performed by the lens aberration correction program 36f (S1224). This process corrects the errors contained in the code boundary coordinates detected in S1223, which arise from the influence of distortion of the imaging optical system 21 and the like.
[0175] Next, a real space conversion process based on the triangulation principle is performed by the triangulation calculation program 36g (S1225). By this process, the code boundary coordinates in CCD space after the aberration correction are converted into three-dimensional coordinates in real space, yielding the three-dimensional coordinates that form the three-dimensional shape detection result.
[0176] FIG. 19 is a diagram for explaining the outline of the code boundary coordinate detection process (S1223 in FIG. 17). In the upper part of FIG. 19, the actual bright/dark boundary of the pattern light in CCD space is indicated by a boundary line K, and the boundary between one code and another when that pattern light is coded by the above-described spatial coding method is indicated by a thick line.
[0177] That is, since the coding in the above-described spatial coding method is performed in units of pixels, an error of sub-pixel order arises between the actual boundary line K of the pattern light and the coded boundary (the thick line in the figure). The purpose of this code boundary coordinate detection process is therefore to detect the code boundary coordinates with sub-pixel accuracy.
[0178] In this process, first, at a certain detection position (hereinafter referred to as "curCCDX"), a first pixel G at which a certain code of interest (hereinafter referred to as "curCode") changes to another code is detected (first pixel detection step).
[0179] For example, when the pixels at curCCDX are examined in order from the top, the pixels up to the boundary (thick line) have curCode, and at the pixel immediately after the boundary, i.e., the first pixel G, the code has changed from curCode; this pixel is therefore detected as the first pixel G.
[0180] Next, at the pixel position of the first pixel G, all of the luminance images having a bright/dark change at that position are extracted from among the luminance images stored in the luminance image storage unit 37c in S1221 of FIG. 17 (luminance image extraction step).
[0181] Next, in order to specify the pixel region to be used for the approximation, the detection position is moved to the left by "2"; at detection position curCCDX - 2, the code image is referenced to find the pixel at which the code of interest (curCode) changes to another code (the boundary pixel; pixel H at detection position curCCDX - 2), and a predetermined range of pixels centered on that pixel (in this embodiment, the range from -3 pixels to +2 pixels in the Y-axis direction) is specified (part of the pixel region specifying step).
[0182] Next, within that predetermined range, as shown in the lower left graph of FIG. 19, an approximate expression (indicated by the solid line in the figure) relating the pixel position in the Y direction to the luminance is obtained, and the Y coordinate Y1 of the intersection of that approximate expression with the luminance threshold bTh is obtained (part of the boundary coordinate detection step).
[0183] The luminance threshold bTh may be calculated from the predetermined range (for example, half the average luminance of the pixels), or it may be a fixed value given in advance. In this way, the boundary between bright and dark can be detected with sub-pixel accuracy.
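This sub-pixel step amounts to fitting a low-order polynomial to luminance versus Y over the small window around the boundary pixel and solving for the crossing with the threshold bTh. A sketch follows, using the -3 to +2 pixel window of the embodiment; the polynomial degree is an assumption, since the specification does not state the form of the approximate expression.

```python
import numpy as np

def subpixel_boundary_y(lum_column: np.ndarray, boundary_y: int,
                        b_th: float, degree: int = 2) -> float:
    """Sub-pixel Y coordinate where the fitted luminance crosses b_th.

    lum_column: luminance values of one CCD column at the detection position.
    boundary_y: integer Y of the boundary pixel (pixel G or H in FIG. 19).
    """
    lo = max(boundary_y - 3, 0)
    hi = min(boundary_y + 3, len(lum_column))      # window: -3 .. +2 pixels
    ys = np.arange(lo, hi)
    coeffs = np.polyfit(ys, lum_column[ys], degree)  # approximate expression
    coeffs[-1] -= b_th                               # roots of fit(y) - b_th = 0
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    if real.size == 0:
        raise ValueError("fitted curve never crosses the threshold")
    inside = real[(real >= ys[0]) & (real <= ys[-1])]
    candidates = inside if inside.size else real
    # Keep the crossing nearest the boundary pixel (Y1 in FIG. 19).
    return float(candidates[np.argmin(np.abs(candidates - boundary_y))])
```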
[0184] Next, the detection position is moved from curCCDX - 2 to the right by "1", the same processing as described above is performed at curCCDX - 1, and a representative value at curCCDX - 1 is obtained (part of the boundary coordinate detection step).
[0185] In this way, a representative value is obtained at each detection position within the pixel region (see the hatched portion sloping down to the right in the figure) constituted by the predetermined range in the Y-axis direction centered on the boundary pixel and the range from curCCDX - 2 to curCCDX + 2 in the X-axis direction.
[0186] The above processing is performed on all of the luminance images having pixels at which the code changes from curCode to another code, and the weighted average of the representative values of the luminance images is finally adopted as the boundary coordinate for curCode (part of the boundary coordinate detection step).
[0187] As a result, the code boundary coordinates can be detected with high sub-pixel accuracy, and by performing the real space conversion process based on the above-described triangulation principle (S1225 in FIG. 17) using these boundary coordinates, the three-dimensional shape of the subject can be detected with high accuracy.
[0188] Furthermore, since the boundary coordinates can be detected with sub-pixel accuracy using the approximate expressions calculated from the luminance images in this way, there is no need to increase the number of captured images as in the conventional art, and there is no need to use the gray code, which is special pattern light; pattern light given its bright/dark pattern by a pure binary code may be used.
[0189] In this embodiment, the region constituted by the range from "-3" to "+2" in the Y-axis direction centered on the boundary pixel at each detection position and the range of detection positions from curCCDX - 2 to curCCDX + 2 in the X-axis direction has been described as the pixel region for obtaining the approximation; however, the Y-axis and X-axis extents of this pixel region are not limited to these. For example, only a predetermined range in the Y-axis direction centered on the boundary pixel at the curCCDX detection position may be used as the pixel region.
[0190] FIG. 20 is a flowchart of the code boundary coordinate detection process (S1223 in FIG. 17). This process is executed based on the code boundary extraction program 36e. First, each element of the code boundary coordinate array in CCD space is initialized (S1401), and curCCDX is set to the start coordinate (S1402).

[0191] Next, it is determined whether curCCDX is equal to or smaller than the end coordinate (S1403). If curCCDX is equal to or smaller than the end coordinate (S1403: Yes), curCode is set to "0" (S1404). That is, curCode is initially set to the minimum value.
[0192] Next, it is determined whether curCode is smaller than the maximum code (S1405). If curCode is smaller than the maximum code (S1405: Yes), the code image at curCCDX is referenced to search for a pixel having curCode (S1406). It is then determined whether a pixel having curCode exists (S1407).
[0193] As a result, if a pixel having curCode exists (S1407: Yes), a pixel at curCCDX having a code larger than curCode is searched for with reference to the code image (S1408). It is then determined whether a pixel having a code larger than curCode exists (S1409).
[0194] As a result, if a pixel having a code larger than curCode exists (S1409: Yes), a process of obtaining the boundary with sub-pixel accuracy, described later, is performed (S1410). Then, to obtain the boundary coordinates for the next curCode, "1" is added to curCode (S1411), and the processing from S1405 is repeated.
[0195] That is, since the boundary exists either at the pixel position of the pixel having curCode or at the pixel position of the pixel having a code larger than curCode, in this embodiment the processing proceeds on the provisional assumption that the boundary is at the pixel position of the pixel having a code larger than curCode.
[0196] If no pixel having curCode exists (S1407: No), or if no pixel having a code larger than curCode exists (S1409: No), "1" is added to curCode to obtain the boundary coordinates for the next curCode (S1411), and the processing from S1405 is repeated.
[0197] In this way, the processing of S1405 through S1411 is repeated for curCode values from 0 up to the maximum code, and when curCode becomes larger than the maximum code (S1405: No), "dCCDX" is added to curCCDX to change the detection position (S1412), and the processing from S1403 is repeated at the new detection position in the same manner as described above.
[0198] curCCDX is changed in this way, and when curCCDX finally becomes larger than the end coordinate (S1403: No), that is, when detection from the start coordinate to the end coordinate has been completed, the process ends.
[0199] FIG. 21 is a flowchart of the process of obtaining the code boundary coordinates with sub-pixel accuracy (S1410 in FIG. 20).
[0200] In this process, first, from among the luminance images stored in the luminance image storage unit 37c in S1221 of FIG. 17, all of the luminance images having a bright/dark change at the pixel position of the pixel having a code larger than curCode detected in S1409 of FIG. 20 are extracted (S1501).
[0201] Next, the mask pattern numbers of the extracted luminance images are stored in an array PatID[], and the number of extracted luminance images is stored in noPatID (S1502). The array PatID[] and noPatID are stored in the ID storage unit 37f.
[0202] Next, a counter i is initialized (S1503), and it is determined whether the value of the counter i is smaller than noPatID (S1504). As a result, if it is determined to be smaller (S1504: Yes), the boundary CCDY value is obtained for the luminance image having the mask pattern number PatID[i] corresponding to the counter i, and that value is stored in fCCDY[i] (S1505).
[0203] When the process of S1505 ends, "1" is added to the counter i (S1506), and the processing from S1504 is repeated. When it is determined in S1504 that the value of the counter i is equal to or larger than noPatID (S1504: No), that is, when the process of S1505 has been completed for all of the luminance images extracted in S1501, the weighted average of the fCCDY[i] values obtained in S1505 is calculated, and the result is taken as the boundary value (S1507).
[0204] Instead of the weighted average, the median of the fCCDY[i] values obtained in the process of S1505 may be calculated and used as the boundary value, or the boundary value may be computed by some other statistical calculation.
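The aggregation of S1507 and the alternatives just mentioned fit in a few lines. In the sketch below the weights are an assumption, since the specification does not state how the weighted average is weighted; uniform weights reduce it to a plain mean.

```python
import numpy as np

def boundary_value(f_ccdy: np.ndarray, weights=None,
                   use_median: bool = False) -> float:
    """Combine per-luminance-image boundary estimates fCCDY[i] (S1507).

    weights: optional per-image weights; the text leaves the weighting
    scheme unspecified, so uniform weights are assumed by default.
    """
    if use_median:                       # alternative noted in the text
        return float(np.median(f_ccdy))
    return float(np.average(f_ccdy, weights=weights))
```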
[0205] That is, the boundary coordinate is expressed by the curCCDX coordinate and the weighted average obtained in S1507; this boundary coordinate is stored in the code boundary coordinate storage unit 37e, and the process ends.
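For illustration, the combination of per-pattern boundary estimates in S1501 to S1507 can be sketched in Python as follows. This is not part of the patent: the helpers has_light_dark_change and find_boundary_ccdy are hypothetical stand-ins for the extraction test of S1501 and the per-image search of FIG. 22, and the uniform weights are an assumption, since the text does not specify the weighting used.

```python
def combine_boundaries(luminance_images, cur_code, cur_ccdx, weights=None):
    """Sketch of S1501-S1507: one boundary coordinate at curCCDX."""
    # S1501: keep only the luminance images that change between light
    # and dark at the pixel position in question (assumed helper).
    pat_ids = [pid for pid, img in luminance_images.items()
               if has_light_dark_change(img, cur_code, cur_ccdx)]
    no_pat_id = len(pat_ids)                            # S1502

    f_ccdy = []                                         # S1503-S1506 loop
    for i in range(no_pat_id):
        # S1505: boundary CCDY for the image with pattern PatID[i]
        # (assumed helper standing in for the process of FIG. 22).
        f_ccdy.append(find_boundary_ccdy(luminance_images[pat_ids[i]],
                                         cur_ccdx))

    # S1507: weighted average of the per-pattern estimates; the text
    # notes that the median or a statistical estimate may be used instead.
    if weights is None:
        weights = [1.0] * no_pat_id                     # assumed weights
    boundary_ccdy = (sum(w * y for w, y in zip(weights, f_ccdy))
                     / sum(weights))
    return (cur_ccdx, boundary_ccdy)                    # boundary coordinate
```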
[0206] FIG. 22 is a flowchart of the process for obtaining the boundary CCDY value for the luminance image having the mask pattern number PatID[i] (S1505 in FIG. 21).
[0207] In this process, first, the operation expressed as "ccdx = MAX(curCCDX - dx, 0)", which sets ccdx to the larger of "curCCDX - dx" and "0", is performed, and a counter j is initialized (S1601).
[0208] Specifically, "0" in S1601 means the minimum CCDX value. For example, if the curCCDX value at the current detection position is "1" and the preset dx value is "2", "curCCDX - dx" becomes "-1", which is smaller than the minimum CCDX value "0"; in that case the subsequent processing is performed with "ccdx = 0".
[0209] That is, positions smaller than the minimum CCDX value are excluded from the subsequent processing.
[0210] The value of "dx" can be set in advance to any appropriate integer including "0". In the example described with reference to FIG. 19, "dx" is set to "2", so that, following that example, ccdx is set to "curCCDX - 2".
[0211] Next, it is determined whether ccdx <= MIN(curCCDX + dx, ccdW - 1) (S1602). Here, "MIN(curCCDX + dx, ccdW - 1)" means the smaller of "curCCDX + dx" and "ccdW - 1", that is, the maximum CCDX value "ccdW" minus "1", and that value is compared with the "ccdx" value.
[0212] That is, positions larger than the maximum CCDX value are excluded from the subsequent processing.
[0213] If, as a result of the determination, ccdx does not exceed MIN(curCCDX + dx, ccdW - 1) (S1602: Yes), the code image and the luminance image to which PatID[i] is assigned are referenced, and the eCCDY value of the pixel position of the pixel where the boundary exists is obtained (S1603).
[0214] For example, if the detection position is curCCDX - 1 shown in FIG. 19, the pixel I is detected as a candidate pixel where a boundary exists, and the eCCDY value is obtained at the position of the pixel I.
[0215] Next, from the luminance image having the mask pattern number PatID[i], an approximate polynomial Bt = fb(ccdy) of the luminance in the ccdy direction is obtained over the range MAX(eCCDY - dy, 0) <= ccdy <= MIN(eCCDY + dy - 1, ccdH - 1) (S1604).
[0216] Next, the ccdy value at which the approximate polynomial Bt crosses the luminance threshold bTh is obtained, and that value is stored in efCCDY[j] (S1605). S1604 and S1605 make it possible to detect the boundary coordinate with sub-pixel accuracy.
[0217] Next, "1" is added to each of ccdx and the counter j (S1605), and the processing from S1602 onward is repeated. That is, a sub-pixel-accuracy boundary is detected at each detection position within the predetermined range to the left and right of curCCDX.
[0218] Then, when it is determined in S1602 that "ccdx" is greater than "MIN(curCCDX + dx, ccdW - 1)" (S1602: No), an approximate polynomial ccdy = fy(ccdx) is obtained for the efCCDY[j] values calculated over the range from curCCDX - dx to curCCDX + dx (S1606). Because this step uses all the values detected in S1605, the detection accuracy of the boundary coordinates can be improved compared with detecting the boundary coordinate at a single detection position.
[0219] The intersection of the approximate polynomial thus obtained with curCCDX is taken as the boundary CCDY value for the luminance image having the mask pattern number PatID[i] (S1607), and the process ends. As shown in the flowchart of FIG. 21, the processing up to this point is executed for each one of the extracted luminance images, and the weighted average of the obtained boundary coordinates is calculated and used as the final boundary coordinate (S1507), so the detection accuracy of the boundary coordinates is improved still further.
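The two-stage sub-pixel fit of S1601 to S1607 can be sketched as below. A minimal illustration assuming numpy; the coarse boundary search of S1603 is approximated here by the first row whose code exceeds curCode, and dx, dy, bTh and the polynomial degree are free parameters, not values fixed by the patent.

```python
import numpy as np

def subpixel_boundary(lum, code_img, cur_ccdx, cur_code,
                      dx=2, dy=2, b_th=127.0, deg=2):
    """Sketch of S1601-S1607: sub-pixel boundary CCDY at curCCDX."""
    ccd_h, ccd_w = lum.shape
    xs, ys = [], []
    # S1601/S1602: clamp the detection window to the valid CCDX range.
    for ccdx in range(max(cur_ccdx - dx, 0),
                      min(cur_ccdx + dx, ccd_w - 1) + 1):
        # S1603: coarse pixel where the code boundary lies (assumed test).
        e_ccdy = int(np.argmax(code_img[:, ccdx] > cur_code))
        lo = max(e_ccdy - dy, 0)
        hi = min(e_ccdy + dy - 1, ccd_h - 1)
        win = np.arange(lo, hi + 1)
        # S1604: approximate polynomial Bt = fb(ccdy) of the luminance.
        fb = np.polynomial.Polynomial.fit(win, lum[win, ccdx], deg)
        # S1605: crossing of Bt with the threshold bTh inside the window.
        crossings = [r.real for r in (fb - b_th).roots()
                     if abs(r.imag) < 1e-9 and lo <= r.real <= hi]
        if crossings:
            xs.append(ccdx)
            ys.append(crossings[0])          # efCCDY[j]
    # S1606: fit ccdy = fy(ccdx) over the window of detection positions.
    fy = np.polynomial.Polynomial.fit(np.asarray(xs, float),
                                      np.asarray(ys, float), 1)
    return float(fy(cur_ccdx))               # S1607: value at curCCDX
```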
[0220] FIGS. 23(a) to 23(c) are diagrams for explaining the lens aberration correction process (S1224 in FIG. 17). As shown in FIG. 23(a), the lens aberration correction process corrects the position of an imaged pixel to the position where the image should originally be formed, compensating for the fact that, owing to the aberration of the imaging optical system 21, the incident light flux deviates from the position at which an ideal lens would form the image.
[0221] This aberration correction is performed, for example, as shown in FIG. 23(b), based on data obtained by calculating the aberration of the optical system over the imaging range of the imaging optical system 21, using the half angle of view hfa, the angle of the incident light, as a parameter.
[0222] The aberration correction process is executed based on the lens aberration correction program 36f and is applied to the code boundary coordinates stored in the code boundary coordinate storage unit 37e; the aberration-corrected data are stored in the aberration correction coordinate storage unit 37g.
[0223] Specifically, the correction is performed using the following camera calibration approximations (a) to (c), which convert the coordinates (ccdx, ccdy) of an arbitrary point in the real image into the coordinates (ccdcx, ccdcy) in the ideal camera image.
[0224] In this embodiment, the aberration amount dist (%) is written as dist = f(hfa) using the half angle of view hfa (deg). Further, the focal length of the imaging optical system 21 is denoted focallength (mm), the CCD pixel length pixellength (mm), and the center coordinates of the lens on the CCD 22 (Centx, Centy).
[0225] (a) ccdcx = (ccdx - Centx)/(1 + dist/100) + Centx
(b) ccdcy = (ccdy - Centy)/(1 + dist/100) + Centy
(c) hfa = arctan[√((ccdx - Centx)² + (ccdy - Centy)²) × pixellength/focallength]
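Written out in code, (a) to (c) become the small routine below. A minimal sketch; the distortion model f_dist passed in stands for dist = f(hfa) and must come from the calibration data mentioned above (the lambda shown in the usage example is a placeholder, not a value from the patent).

```python
import math

def correct_aberration(ccdx, ccdy, centx, centy,
                       pixellength, focallength, f_dist):
    """Map real-image coordinates to ideal-camera coordinates per (a)-(c)."""
    # (c): half angle of view of the incident ray, in degrees.
    r = math.hypot(ccdx - centx, ccdy - centy)        # radius in pixels
    hfa = math.degrees(math.atan(r * pixellength / focallength))
    dist = f_dist(hfa)                                # aberration amount in %
    # (a), (b): scale the point radially about the lens center.
    ccdcx = (ccdx - centx) / (1 + dist / 100) + centx
    ccdcy = (ccdy - centy) / (1 + dist / 100) + centy
    return ccdcx, ccdcy

# Usage with a placeholder distortion curve (assumed, for illustration only):
ccdcx, ccdcy = correct_aberration(812.0, 644.0, centx=800.0, centy=600.0,
                                  pixellength=0.005, focallength=8.0,
                                  f_dist=lambda hfa: 0.02 * hfa ** 2)
```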
FIGS. 24(a) and 24(b) are diagrams for explaining the method of calculating three-dimensional coordinates in three-dimensional space from coordinates in the CCD space in the real-space conversion process based on the triangulation principle (S1225 in FIG. 17).
[0226] In the real-space conversion process based on the triangulation principle, the triangulation calculation program 36g calculates the three-dimensional coordinates in three-dimensional space of the aberration-corrected code boundary coordinates stored in the aberration correction coordinate storage unit 37g. The three-dimensional coordinates thus calculated are stored in the three-dimensional coordinate storage unit 37h.
[0227] In this embodiment, the coordinate system of the image input/output device 1 with respect to the horizontally curved subject P to be imaged is defined as follows: the optical axis direction of the imaging optical system 21 is the Z axis, the point at the distance VPZ from the principal point of the imaging optical system 21 along the Z axis is the origin, the direction horizontal with respect to the image input/output device 1 is the X axis, and the vertical direction is the Y axis.
[0228] Further, θp denotes the projection angle from the image projection unit 13 into the three-dimensional space (X, Y, Z), D the distance between the optical axis of the imaging optical system 21 and the optical axis of the image projection unit 13, Yftop to Yfbottom the field of view of the imaging optical system 21 in the Y direction, Xfstart to Xfend its field of view in the X direction, Hc the length (height) of the CCD 22 in the Y-axis direction, and Wc its length (width) in the X-axis direction. The projection angle θp is given based on the code assigned to each pixel.
[0229] In this case, the three-dimensional spatial position (X, Y, Z) corresponding to arbitrary coordinates (ccdx, ccdy) on the CCD 22 can be obtained by solving the following five equations for the triangle formed by the point on the imaging plane of the CCD 22, the projection point of the pattern light, and the point of intersection with the X-Y plane:
(1) Y = -(tan θp)Z + PPZ·tan θp - D + cmp(Xtarget)
(2) Y = -(Ytarget/VPZ)Z + Ytarget
(3) X = -(Xtarget/VPZ)Z + Xtarget
(4) Ytarget = Yftop - (ccdcy/Hc) × (Yftop - Yfbottom)
(5) Xtarget = Xfstart + (ccdcx/Wc) × (Xfend - Xfstart)
Here, cmp(Xtarget) in equation (1) is a function that corrects the misalignment between the imaging optical system 21 and the image projection unit 13; in the ideal case without misalignment, cmp(Xtarget) can be regarded as 0. These equations can be used not only for the curved subject P but for a subject having any three-dimensional shape.
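Since (1) and (2) are two lines in the Y-Z plane, Z follows from equating them, after which (2) and (3) give Y and X. A sketch under the reconstructed form of equation (1) above, with the calibration constants gathered into a simple container (an assumed convenience, not a structure from the patent):

```python
import math
from types import SimpleNamespace

def ccd_to_xyz(ccdcx, ccdcy, theta_p, cal):
    """Solve equations (1)-(5) for the spatial position (X, Y, Z)."""
    # (4), (5): target point of the viewing ray on the Z = 0 plane.
    ytarget = cal.yftop - (ccdcy / cal.hc) * (cal.yftop - cal.yfbottom)
    xtarget = cal.xfstart + (ccdcx / cal.wc) * (cal.xfend - cal.xfstart)
    # Equate (1) and (2) and solve for Z; cmp(Xtarget) = 0 in the ideal case.
    tan_p = math.tan(theta_p)
    z = ((ytarget - cal.ppz * tan_p + cal.d - cal.cmp(xtarget))
         / (ytarget / cal.vpz - tan_p))
    y = -(ytarget / cal.vpz) * z + ytarget    # (2)
    x = -(xtarget / cal.vpz) * z + xtarget    # (3)
    return x, y, z

# Placeholder constants, for illustration only (not values from the patent):
cal = SimpleNamespace(yftop=50.0, yfbottom=-50.0, xfstart=-60.0, xfend=60.0,
                      hc=6.0, wc=8.0, vpz=300.0, ppz=280.0, d=80.0,
                      cmp=lambda xt: 0.0)
print(ccd_to_xyz(3.1, 2.4, math.radians(20.0), cal))
```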
[0230] On the other hand, similarly to the above, the relationship between arbitrary coordinates (lcdcx, lcdcy) on the projection LCD 19 included in the image projection unit 13 and the three-dimensional coordinates (X, Y, Z) in three-dimensional space can be expressed by the following equations (6) to (9).
[0231] In this embodiment, the principal point position of the image projection unit 13 is (0, 0, PPZ), the field of view of the image projection unit 13 in the Y direction extends from Ypftop to Ypfbottom and in the X direction from Xpfstart to Xpfend, and the length (height) of the projection LCD 19 in the Y-axis direction is Hp and its length (width) in the X-axis direction is Wp.
(6) Y = -(Yptarget/PPZ)Z + Yptarget
(7) X = -(Xptarget/PPZ)Z + Xptarget
(8) Yptarget = Ypftop - (lcdcy/Hp) × (Ypftop - Ypfbottom)
(9) Xptarget = Xpfstart + (lcdcx/Wp) × (Xpfend - Xpfstart)
By using these relational expressions, the LCD space coordinates (lcdcx, lcdcy) can be calculated by substituting the three-dimensional space coordinates (X, Y, Z) into equations (6) to (9). Thus, for example, the LCD element pattern for projecting an arbitrary shape or character string into three-dimensional space can be calculated.
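Inverting (6) to (9) for a known surface point is direct, because (6) and (7) share the factor PPZ/(PPZ - Z). A sketch, reusing the same kind of assumed constants container for the projector parameters:

```python
def xyz_to_lcd(x, y, z, cal):
    """Invert (6)-(9): LCD coordinates that illuminate the point (x, y, z)."""
    # (6), (7) solved for the target point of the projection ray at Z = 0.
    s = cal.ppz / (cal.ppz - z)
    yptarget = y * s
    xptarget = x * s
    # (8), (9) solved for the LCD coordinates.
    lcdcy = (cal.ypftop - yptarget) * cal.hp / (cal.ypftop - cal.ypfbottom)
    lcdcx = (xptarget - cal.xpfstart) * cal.wp / (cal.xpfend - cal.xpfstart)
    return lcdcx, lcdcy
```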
[0232] FIG. 25 is a flowchart of the flattened image processing (S611 in FIG. 7). The flattened image processing acquires and displays a flattened image corrected to the state in which the document is not curved, or to the state in which it appears imaged from the direction perpendicular to its surface, even when a curved document (subject) P such as a book is imaged or when a rectangular document P is imaged from an oblique direction (in which case the captured image becomes trapezoidal).
[0233] In this process, first, a high-resolution setting signal is transmitted to the CCD 22 (S1901).
[0234] Next, the release button 8 is scanned (S1903a), and it is determined whether the release button 8 has been half-pressed (S1903b). If the release button 8 is half-pressed (S1903b: Yes), the autofocus (AF) and automatic exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S1903c). If the release button 8 is not half-pressed (S1903b: No), the processing from S1903a onward is repeated.
[0235] Next, the release button 8 is scanned again (S1903d), and it is determined whether the release button 8 has been fully pressed (S1903e). If the release button 8 is fully pressed (S1903e: Yes), it is determined whether the flash mode is set (S1903f).
[0236] If the flash mode is set (S1903f: Yes), the flash 7 is fired (S1903g) and an image is captured (S1903h). If the flash mode is not set (S1903f: No), an image is captured without firing the flash 7 (S1903h). If it is determined in S1903e that the release button 8 has not been fully pressed (S1903e: No), the processing from S1903a onward is repeated.
[0237] Next, three-dimensional shape detection processing identical to the three-dimensional shape detection processing described above (S1006 in FIG. 15) is performed, and the three-dimensional shape of the subject is detected (S1906).
[0238] Next, based on the three-dimensional shape detection result obtained by the three-dimensional shape detection processing (S1906), document posture calculation processing for calculating the posture of the document P is performed (S1907). Through this processing, the position L, the angle θ, and the curvature φ(X) of the document P with respect to the image input/output device 1 are calculated as the posture parameters of the document P.
[0239] Next, based on the calculation result, plane conversion processing described later is performed (S1908). By this plane conversion processing, even if the document P is curved, a flattened image planarized to an uncurved state is generated.
[0240] Next, the flattened image obtained by the plane conversion processing (S1908) is stored in the external memory 27 (S1909), and the flattened image is displayed on the monitor LCD 10 (S1910).
[0241] Then, it is determined whether the mode switching switch 9 is unchanged (S1911). If there is no change (S1911: Yes), the processing from S702 onward is repeated. If the mode switching switch 9 has changed (S1911: No), the process ends.
[0242] FIGS. 26(a) to 26(c) are diagrams for explaining the document posture calculation processing (S1907 in FIG. 25). As an assumed condition for a document such as a book, the curvature of the document P is taken to be uniform in the y direction. In this document posture calculation processing, first, as shown in FIG. 26(a), two curves are obtained by regression-curve approximation of the points lined up in two rows at three-dimensional spatial positions, using the coordinate data on the code boundaries stored in the three-dimensional coordinate storage unit 37h.
[0243] For example, such curves can be obtained from the position information at one quarter from the top and one quarter from the bottom of the range onto which the pattern light is projected (the boundaries between code 63 and code 64 and between code 191 and code 192).
[0244] A straight line connecting the points of the two curves at the position "0" in the X-axis direction is assumed; the point where this straight line crosses the Z axis, that is, the point where the optical axis intersects the document P, is taken as the three-dimensional spatial position (0, 0, L) of the document P, and the angle that this straight line makes with the X-Y plane is taken as the inclination θ of the document P about the X axis.
[0245] Next, as shown in FIG. 26(b), the document P is rotationally transformed in the opposite direction by the previously obtained inclination θ about the X axis; that is, a state in which the document P is parallel to the X-Y plane is assumed.
[0246] Then, as shown in FIG. 26(c), for the cross section of the document P in the X-Z plane, the displacement in the Z-axis direction can be expressed as the curvature φ(X), a function of X. In this way, the position L, the angle θ, and the curvature φ(X) of the document P are calculated as the document posture parameters, and the process ends.
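The two-point construction of [0244] reduces to elementary geometry once the two regression curves have been evaluated at X = 0. A minimal sketch; the (Y, Z) pairs are assumed to be precomputed from the fitted curves:

```python
import math

def document_posture(p_upper, p_lower):
    """Position L and tilt theta from the curves' points at X = 0.

    p_upper and p_lower are the (Y, Z) values of the two regression
    curves at X = 0 (assumed precomputed from the code boundaries).
    """
    (y1, z1), (y2, z2) = p_upper, p_lower
    # Where the connecting line crosses the Z axis (Y = 0): the point
    # (0, 0, L) at which the optical axis meets the document P.
    L = z1 - y1 * (z2 - z1) / (y2 - y1)
    # Angle the line makes with the X-Y plane: tilt about the X axis.
    theta = math.atan2(abs(z2 - z1), abs(y2 - y1))
    return L, theta

print(document_posture((10.0, 205.0), (-10.0, 195.0)))  # placeholder points
```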
[0247] FIG. 27 is a flowchart of the plane conversion processing (S1908 in FIG. 25). In this processing, first, a processing area for this processing is allocated in the working area 37l of the RAM 37, and the variable of a counter b used in this processing is set to its initial value (b = 0) (S2101).
[0248] Next, based on the position L, the inclination θ, and the curvature φ(X) of the document P calculated by the document posture calculation program 36h, a rectangular area is set that is formed by the points obtained by moving each of the four corner points of the pattern-light-absent image stored in the pattern-light-absent image storage unit 37b by -L in the Z direction, rotating them by -θ about the X axis, and further applying the inverse transformation of the curvature φ(X) (processing equivalent to the "curvature processing" described later); that is, a rectangular area yielding an image in which the surface of the document P on which the characters and the like are written appears as if observed from a substantially orthogonal direction. The number of pixels a contained in this rectangular area is also obtained (S2102).
[0249] Next, the coordinates on the pattern-light-absent image corresponding to each pixel constituting the set rectangular area are obtained, and the pixel information of each pixel of the flattened image is set from the pixel information around those coordinates.
[0250] That is, first, it is determined whether the counter b has reached the number of pixels a (S2103). If the counter b has not reached the number of pixels a (S2103: No), curvature calculation processing is performed that rotationally moves one pixel constituting the rectangular area by the curvature φ(X) about the Y axis (S2104), the pixel is rotationally moved by the inclination θ about the X axis (S2105), and it is shifted by the distance L in the Z-axis direction (S2106).
[0251] Next, for the three-dimensional spatial position thus obtained, the coordinates (ccdcx, ccdcy) on the CCD image of the ideal camera are obtained by the inverse function of the triangulation described above (S2107); according to the aberration characteristics of the imaging optical system 21 in use, the coordinates (ccdx, ccdy) on the CCD image of the actual camera are obtained by the inverse function of the camera calibration described above (S2108); and the state of the pixel of the pattern-light-absent image corresponding to this position is obtained and stored in the working area 37l of the RAM 37 (S2109).
[0252] Next, "1" is added to the counter b so that the above-described processing from S2103 to S2109 is executed for the next pixel (S2110).
[0253] When the processing from S2104 to S2110 has thus been repeated until the counter b reaches the number of pixels a (S2103: Yes), the processing area allocated in S2101 to the working area 37l for executing this processing is released (S2111), and the process ends.
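The loop of S2103 to S2110 is an inverse mapping: each pixel of the flattened target image is carried back onto the curved document and then through the camera model to a source pixel. A minimal sketch; curve_transform (the φ(X) bend of S2104), tri_inverse (S2107), cal_inverse (S2108) and sample (S2109) are assumed callables supplied by the caller:

```python
import math

def rotate_x(x, y, z, theta):
    """Rotate a point about the X axis by theta (S2105)."""
    c, s = math.cos(theta), math.sin(theta)
    return x, y * c - z * s, y * s + z * c

def flatten_image(rect_pixels, L, theta, curve_transform,
                  tri_inverse, cal_inverse, sample):
    """Sketch of S2103-S2110: build the flattened image pixel by pixel."""
    out = {}
    for (u, v) in rect_pixels:                   # the rectangular area
        x, y, z = curve_transform(u, v)          # S2104: bend by phi(X)
        x, y, z = rotate_x(x, y, z, theta)       # S2105: tilt about X
        z += L                                   # S2106: shift along Z
        ccdcx, ccdcy = tri_inverse(x, y, z)      # S2107: ideal camera coords
        ccdx, ccdy = cal_inverse(ccdcx, ccdcy)   # S2108: real camera coords
        out[(u, v)] = sample(ccdx, ccdy)         # S2109: read the source pixel
    return out
```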
[0254] FIG. 28(a) is a diagram outlining the curvature processing (S2104 in FIG. 27), and FIG. 28(b) shows the document P flattened by the plane conversion processing (S1908 in FIG. 25). The details of this curvature processing are disclosed in the IEICE Transactions D-II, Vol. J86-D2, No. 3, p. 409, "アイスキャナによる湾曲ドキュメント撮影" (Imaging of Curved Documents with an Eye Scanner).
[0255] The curvature Z = φ(x) is expressed by an equation that approximates, by a polynomial fitted with the least squares method, the cross-sectional shape obtained by cutting the three-dimensional shape formed by the obtained code boundary coordinate sequence (in real space) with a plane parallel to the XZ plane at an arbitrary Y value.
[0256] When flattening a curved surface, as shown in FIG. 28(a), the flattened point corresponding to a point on Z = φ(x) is associated with it by the length of the curve from Z = φ(0) to Z = φ(x).
[0257] By the plane conversion processing including such curvature processing, a flattened planar image as shown in FIG. 28(b) can be obtained even when, for example, a curved document P is imaged, and using an image flattened in this way raises the accuracy of OCR processing. The characters, figures, and the like written on the document can therefore be clearly recognized from the flattened image.
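Numerically, the arc-length correspondence of FIG. 28(a) is the integral of √(1 + (dZ/dx)²) along the fitted section. A sketch assuming numpy and an arbitrary placeholder curve:

```python
import numpy as np

def flattened_x(phi, x, n=1000):
    """Arc length of Z = phi(x) from 0 to x: the flattened position
    that corresponds to the curved point (x, phi(x))."""
    xs = np.linspace(0.0, x, n)
    dz = np.gradient(phi(xs), xs)               # dZ/dx along the section
    integrand = np.sqrt(1.0 + dz ** 2)
    # Trapezoidal integration of the arc-length element.
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs)))

# Placeholder curvature phi(x) = 0.05 x^2, for illustration only:
phi = np.polynomial.Polynomial([0.0, 0.0, 0.05])
print(flattened_x(phi, 10.0))   # > 10: the curve is longer than its chord
```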
[0258] In the digital camera processing shown in FIG. 9, the webcam processing shown in FIG. 11, and the stereoscopic image processing shown in FIG. 15, when a distortion-free projection image is to be projected regardless of the projection direction, the projection image conversion processing (S2900) described below is executed in place of the projection processing (S702f, S802f, S1010) that is executed in the same manner as the projection processing of S806 in FIG. 14.
[0259] FIG. 29 is a flowchart of the distortion-free projection image conversion processing (S2900). The distortion-free projection image conversion processing (S2900) converts the image displayed on the projection LCD 19 according to the image information stored in the projection image storage unit 37k into an image that can be projected onto the subject without distortion.
[0260] In this processing, first, a processing area for this processing is allocated in the working area 37l of the RAM 37, and the variable of a counter q used in this processing is set to its initial value (q = 0) (S2901).
[0261] Next, as the rectangular area that will hold the image after conversion into the distortion-free projection image (that is, an image that appears undistorted on the curved subject), memory corresponding to the space of the LCD spatial coordinates (lcdcx, lcdcy) is secured and set in the working area 37l of the RAM 37, and the number of pixels Qa contained in this rectangular area is obtained (S2902).
[0262] Next, each pixel value of the image information stored in the projection image storage unit 37k, for example the message image M or the captured image f, is arranged at each pixel of the ideal camera image coordinate system (ccdcx, ccdcy) (S2903).
[0263] Next, for each pixel on the LCD spatial coordinates (lcdcx, lcdcy) constituting the set rectangular area, the three-dimensional coordinates (X, Y, Z) of the corresponding point on the surface of the subject stored in the three-dimensional coordinate storage unit 37h are obtained by equations (6) to (9) above, and the pixel information of each pixel of the distortion-free projection image is calculated and set by solving equations (1) to (5) for (ccdcx, ccdcy).
[0264] That is, first, it is determined whether the counter q has reached the number of pixels Qa (S2904). If the counter q has not reached the number of pixels Qa (S2904: No), the LCD spatial coordinates (lcdcx, lcdcy) of the pixel corresponding to the value of the counter q are converted by equations (6) to (9) into the coordinates (X, Y, Z) on the subject stored in the working area 37l (S2905).
[0265] Next, the coordinates (X, Y, Z) on the subject obtained by the conversion in S2905 are converted into the coordinates (ccdcx, ccdcy) on the ideal camera image using the expressions obtained by solving equations (1) to (5) for (ccdcx, ccdcy) (S2906).
[0266] Next, the pixel information arranged at the coordinates (ccdcx, ccdcy) obtained by the conversion in S2906 is acquired, and that pixel information is written to the LCD spatial coordinates (lcdcx, lcdcy) corresponding to the value of the counter q (S2907).
[0267] Then, "1" is added to the counter q so that the above-described processing from S2904 to S2907 is executed for the next pixel (S2908).
[0268] When the processing from S2904 to S2908 has thus been repeated until the counter q reaches the number of pixels Qa (S2904: Yes), the pixel information associated with the LCD spatial coordinates (lcdcx, lcdcy) constituting the set rectangular area is transferred to the projection LCD driver 30 (S2909).
[0269] Finally, the processing area allocated to the working area 37l in S2901 for executing this processing is released (S2910), and the process ends.
[0270] By the processing of S2909, the pixel information on the LCD spatial coordinates (lcdcx, lcdcy) is transferred to the projection LCD driver 30, whereby the projection LCD 19 displays a projection image that is projected without distortion onto the curved surface. A distortion-free image is thus projected onto the subject.
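Putting the two coordinate maps together, the loop of S2904 to S2907 reads as follows. A minimal sketch; surface_xyz stands in for the lookup of the stored subject surface against equations (6) to (9), xyz_to_ccdc for the solution of (1) to (5), and ideal_image for the pixel arrangement prepared in S2903 — all assumed callables:

```python
def undistorted_projection_image(lcd_pixels, surface_xyz,
                                 xyz_to_ccdc, ideal_image):
    """Sketch of S2904-S2907: fill each LCD pixel so that the projected
    image appears undistorted on the curved subject."""
    out = {}
    for (lcdcx, lcdcy) in lcd_pixels:              # the rectangular area
        x, y, z = surface_xyz(lcdcx, lcdcy)        # S2905: point on subject
        ccdcx, ccdcy = xyz_to_ccdc(x, y, z)        # S2906: ideal camera coords
        out[(lcdcx, lcdcy)] = ideal_image(ccdcx, ccdcy)   # S2907
    return out                                     # transferred in S2909
```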
[0271] Therefore, by executing the distortion-free projection image conversion processing (S2900), a distortion-free projection image can be projected not only when the projection direction is oblique but also when the projection surface is curved. As a result, particularly when a character string such as the message image M is projected, the user can be made to recognize the information accurately.
[0272] FIGS. 30(a) and 30(b) are diagrams for explaining a light source lens 50a as another example of the light source lens 18 in the embodiment described above. FIG. 30(a) is a side view showing the light source lens 50a, and FIG. 30(b) is a plan view showing the light source lens 50a. Members identical to those described above are given the same reference numerals, and their description is omitted.
[0273] Whereas the light source lens 18 in the embodiment described above is configured by integrally arranging, side by side on the base 18b, the lens portions 18a each having a convex aspherical shape corresponding to one of the LEDs 17, in the example shown in FIGS. 30(a) and 30(b) bullet-shaped resin lenses each enclosing one of the LEDs 17 are formed as separate bodies.
[0274] By forming the light source lenses 50a enclosing the LEDs 17 as separate bodies in this way, the position of each LED 17 and that of its corresponding light source lens 50a are determined in one-to-one correspondence, so the relative positional accuracy can be increased, with the effect that the light emission directions are aligned.
[0275] In contrast, if a lens array is positioned on the substrate 16 as a single unit, the light emission directions may become scattered owing to positioning errors when the individual LEDs 17 are die-bonded and to the difference in linear expansion coefficient between the lens array and the substrate.
[0276] Accordingly, the surface of the projection LCD 19 is irradiated with light whose directions of incidence from the LEDs 17 are aligned perpendicular to the surface of the projection LCD 19, and the light can pass uniformly through the stop of the projection optical system 20, so illuminance unevenness of the projected image can be suppressed. As a result, a high-quality image can be projected. The LEDs 17 enclosed in the light source lenses 50a are mounted on the substrate 16 via electrodes 51a each consisting of a lead and a reflector.
[0277] On the outer peripheral surfaces of the group of light source lenses 50a, a frame-shaped elastic fixing member 52a is arranged that bundles the light source lenses 50a and constrains them in a predetermined direction. The fixing member 52a is made of a resin material such as rubber or plastic.
[0278] Since the light source lenses 50a are formed separately for the respective LEDs 17, it is difficult to install them with the angles of the optical axes formed by the convex tips of the light source lenses 50a correctly aligned to face the projection LCD 19.
[0279] Therefore, in the example shown in FIGS. 30(a) and 30(b), the group of light source lenses 50a is surrounded by this fixing member 52a, the outer peripheral surfaces of the light source lenses 50a are brought into contact with one another, and the position of each light source lens 50a is constrained so that its optical axis faces the projection LCD 19 at the correct angle. With this configuration, light can be emitted from each light source lens 50a substantially perpendicularly toward the projection LCD 19. Light aligned perpendicular to the surface of the projection LCD 19 is therefore irradiated and can pass uniformly through the stop of the projection lens, so illuminance unevenness of the projected image can be suppressed. Consequently, an even higher-quality image can be projected.
[0280] The fixing member 52a may have a rigidity specified in advance to a predetermined degree, or it may be made of an elastic material so that the position of each light source lens 50a is constrained at a predetermined position by its elastic force.
[0281] FIGS. 31(a) and 31(b) are diagrams for explaining a fixing member 60 as another example of the fixing member 52a, described with reference to FIGS. 30(a) and 30(b), that constrains the light source lenses 50a at predetermined positions. FIG. 31(a) is a perspective view showing a state in which the light source lenses 50a are fixed, and FIG. 31(b) is a partial cross-sectional view thereof. Members identical to those described above are given the same reference numerals, and their description is omitted.
[0282] The fixing member 60 is formed as a plate bored with through holes 60a, each having a conical cross section in sectional view that follows the outer peripheral surface of a light source lens 50a. Each light source lens 50a is inserted into and fixed in the corresponding through hole 60a.
[0283] An elastic biasing plate 61 is interposed between the fixing member 60 and the substrate 16, and between this biasing plate 61 and the lower surface of each light source lens 50a an annular elastic O-ring 62 is arranged so as to surround the electrode 51a.
[0284] The LED 17 enclosed in each light source lens 50a is mounted on the substrate 16 via the electrode 51a, which passes through through-holes bored in the biasing plate 61 and the substrate 16.
[0285] According to the fixing member 60 described above, each light source lens 50a is fixed by being passed through the corresponding through hole 60a having a cross section that follows the outer peripheral surface of the light source lens, so the optical axis of each light source lens 50a can be fixed to face the projection LCD 19 at the correct angle even more reliably than with the fixing member 52a described above.
[0286] Further, at the time of assembly, the LED 17 can be biased to and fixed at the correct position by the biasing force of the O-ring 62.
[0287] Further, impact forces that may arise, for example, when the device 1 is transported can be absorbed by the elastic force of the O-ring 62, which prevents the inconvenience that the position of a light source lens 50a shifts under such an impact so that light can no longer be emitted from the light source lens 50a perpendicularly toward the projection LCD 19.
[0288] The processing of S706 in the digital camera processing of FIG. 9 and the processing of S803 in the webcam processing of FIG. 11 are positioned as the projection imaging control means. The processing of S805 in the webcam processing of FIG. 11 is positioned as the projection image updating means. The processing of S1006 in the stereoscopic image processing of FIG. 15 is positioned as the three-dimensional information detecting means.
[0290] 以上実施形態に基づき本発明を説明したが、本発明は上記実施形態に何ら限定 されるものでなぐ本発明の主旨を逸脱しない範囲内で種々の改良変形が可能であ る。  [0290] Although the present invention has been described based on the embodiments, the present invention is not limited to the above embodiments, and various improvements and modifications can be made without departing from the gist of the present invention.
[0291] 例えば、上記実施形態では、平面化画像モードとして、平面化された画像を取得、 表示する処理を説明したが、周知の OCR機能を画像入出力装置 1に搭載することに より、平面化された平面画像がこの OCR機能によって読み取られるよう画像入出力 装置 1が構成されていても良レ、。このような構成の場合には、 OCR機能によって湾曲 した状態の原稿を読み取る場合に比べて高精度に原稿に記載された文章を読み取 ることができるという大きな効果が得られる。  For example, in the above embodiment, the process of acquiring and displaying a flattened image has been described as the flattened image mode. However, by mounting a well-known OCR function on the image input / output device 1, Even if the image input / output device 1 is configured so that the converted planar image can be read by this OCR function, it is acceptable. In the case of such a configuration, a great effect is obtained in that the text written on the document can be read with higher accuracy than when the document that is curved by the OCR function is read.
[0292] また、上記実施形態における図 11のウェブカム処理は、 CCD22に低解像度設定 信号を送信する処理(S801)に代えて、 CCD22からキャッシュメモリ 28に転送する 処理(S807)または撮像画像を外部ネットワークに送信する処理(S808)におレ、て、 撮像画像の解像度を低下させて送信するものであっても良い。このようにしても、撮 像画像の解像度を低下させることにより、その後の処理速度を向上させることができ る。  In the web cam process of FIG. 11 in the above embodiment, the process of transmitting the low resolution setting signal to the CCD 22 (S801) is replaced with the process of transferring from the CCD 22 to the cache memory 28 (S807) or the captured image In the process of transmitting to the network (S808), the resolution of the captured image may be reduced before transmission. Also in this case, the subsequent processing speed can be improved by lowering the resolution of the captured image.
[0293] また、上記実施形態における図 21の S1501においては、明暗の変化を持つ輝度 画像の全部が抽出され、その全部について暫定的な CCDY値を求める場合につい て説明がなされたが、抽出する輝度画像としては、全部である必要はなぐ 1枚以上 であれば、その枚数に限定されることはない。抽出する枚数を減らすことで境界座標 を高速に求めることができる。  Also, in S1501 of FIG. 21 in the above embodiment, all the luminance images having a change in brightness are extracted, and a case where a provisional CCDY value is obtained for all of the luminance images has been described. The number of luminance images is not limited to one if it is one or more, not necessarily the whole. The boundary coordinates can be obtained at high speed by reducing the number of sheets to be extracted.
[0294] また、上記実施形態における図 21の S1507では、 fCCDY[i]を加重平均し、図 2 2の S1607では efCCDY[j]を近似多項式として、各値を平均化する場合について 説明がなされたが、各値を平均化する方法としては、これらに限定されるものではな レ、。例えば、各値の単純平均値を採る方法、各値の中央値を採用する方法、各値の 近似式を算出し、その近似式における検出位置を境界座標とする方法、統計的な演 算により求める方法等が採用されても良い。 [0294] In S1507 of Fig. 21 in the above embodiment, fCCDY [i] is weighted and averaged, and in S1607 of Fig. 22, each value is averaged using efCCDY [j] as an approximate polynomial. Although explained, the method of averaging each value is not limited to these. For example, a method of taking the simple average of each value, a method of using the median of each value, a method of calculating an approximate expression of each value, and using the detection position in the approximate expression as a boundary coordinate, and a statistical operation A method for obtaining the information may be adopted.
[0295] また、例えば、上記実施形態における平面化画像モードにおける 3次元形状検出 処理においては、原稿 Pの 3次元形状を検出するために、複数種類の明暗を交互に 並べてなる縞状のパターン光を投影する場合について説明がなされたが、 3次元形 状を検出するための光は、力、かるパターン光に限定されるものではない。  [0295] For example, in the three-dimensional shape detection process in the flattened image mode in the above embodiment, in order to detect the three-dimensional shape of the document P, a striped pattern light in which a plurality of types of light and dark are alternately arranged. Although the description has been given of the case of projecting a three-dimensional shape, the light for detecting the three-dimensional shape is not limited to the power and the pattern light.
[0296] 例えば、図 32に示すように、湾曲原稿の 3次元形状の検出を簡便に行う場合には、 画像投影部 13から 2本の帯状のスリット光 70, 71が投影されても良レ、。この場合には 、 8枚のパターン光を投影する場合にくらべ、僅力、 2枚の撮像画像から高速に 3次元 形状の検出をすることができる。  For example, as shown in FIG. 32, in the case where the three-dimensional shape of a curved document is easily detected, even if two band-shaped slit lights 70, 71 are projected from the image projection unit 13, it is acceptable. ,. In this case, it is possible to detect a three-dimensional shape from two captured images at a higher speed than in the case of projecting eight pattern lights.
[0297] また、上記実施形態における画像投影部 13において、光源レンズ 18と、投影 LCD 19と、投影光学系 20とは、投影方向に沿って一直線上に備えられていたが、画像投 影部 13の構成は必ずしもこれに限られない。  [0297] In the image projection unit 13 in the above embodiment, the light source lens 18, the projection LCD 19, and the projection optical system 20 are provided in a straight line along the projection direction. The configuration of 13 is not necessarily limited to this.
[0298] 図 33 (a)から図 33 (c)は、画像投影部 13の変形例を示す図である。図 33 (a)は撮 像ヘッド 2の平面図であり、図 33 (b)および図 33 (c)は図 33 (a)の A— A視断面図で ある。なお、図 33 (a)では撮像ヘッド 2の内部を示すために蓋部 2aの記載は省略さ れている。図 33 (b)及び図 33 (c)に示されるように、画像投影部は、複数のレンズか ら構成される投影光学系 20に反射ミラー 200を設け、光源レンズ 18から出射した光 の方向を所定の投影方向に変更して投影するよう構成されても良い。  FIG. 33 (a) to FIG. 33 (c) are diagrams showing modified examples of the image projection unit 13. FIG. 33 (a) is a plan view of the imaging head 2, and FIGS. 33 (b) and 33 (c) are cross-sectional views taken along line AA of FIG. 33 (a). Note that, in FIG. 33 (a), the illustration of the cover 2a is omitted to show the inside of the imaging head 2. As shown in FIGS. 33 (b) and 33 (c), the image projection unit is provided with a reflection mirror 200 in a projection optical system 20 composed of a plurality of lenses, and a direction of light emitted from the light source lens 18. May be changed to a predetermined projection direction and projected.
[0299] このようにすれば、反射ミラー 200により光路を変更できるので、光源レンズ 18、 LC D19、投影光学系 20は必ずしも投影方向に沿って一直線上に設ける必要はなぐ 画像投影部 13を撮像ヘッド 2の形状に合わせてコンパクトに収納することができる。  [0299] In this way, since the optical path can be changed by the reflection mirror 200, the light source lens 18, the LCD 19, and the projection optical system 20 need not always be provided in a straight line along the projection direction. It can be stored compactly according to the shape of the head 2.
[0300] また、上記実施形態は光源手段として、 LEDアレイ 17A、光源レンズ 18を備えてい た力 このような構成に代えて、光源手段は図 33 (c)に示すように構成されていても 良い。図 33 (c)に示される光源手段は、白色光を発光するハロゲンランプや白色 LE D (Light Emitting Diode)などの光源 170と、光源 170力 発せられた白色光を LCD19に効率的に入射させるための反射板 180とから構成される。 In the above embodiment, the light source is provided with the LED array 17A and the light source lens 18. Instead of such a configuration, the light source may be configured as shown in FIG. 33 (c). good. The light source means shown in FIG. 33 (c) includes a light source 170 such as a halogen lamp that emits white light or a white LED (Light Emitting Diode), and a light source 170 that emits white light. And a reflecting plate 180 for efficiently entering the LCD 19.
[0301] また、上記実施形態のデジカメ処理(図 9参照)、 webcam処理(図 11参照)、立体 画像処理(図 15参照)の各処理においては、撮像ケース 11の小径部 l ibに固定さ れた撮像部位置検出用リブ 11cの位置を、撮像部位置検出センサ S2により検出する ことで、撮像ヘッド 2に対する画像撮像部 14の位置が検出されていたが、画像投影 部 13と画像撮像部 14との相対的な位置関係を検出する手段はこれに限られない。 例えば、所定位置固定用リブ 2dが、撮像ケース 11の小径部 l ibの一方に設けられ た図示しないストツバと係合し、撮像ケース 11のそれ以上の回動が規制されたことに 基づいて、画像撮像部 14が画像投影部 13に対し所定の位置関係にあることが検出 されても良レ、。このようにすれば、画像投影部 13と画像撮像部 14との位置関係が確 実に所定の位置関係となった場合にのみ三次元形状の検出が許可され、三次元形 状の検出をより高い精度で行うことができる。 [0301] In each of the digital camera processing (see Fig. 9), the webcam processing (see Fig. 11), and the stereoscopic image processing (see Fig. 15) in the above embodiment, the digital camera is fixed to the small diameter portion l ib of the imaging case 11. By detecting the position of the rib 11c for detecting the position of the imaging unit by the imaging unit position detection sensor S2, the position of the image imaging unit 14 with respect to the imaging head 2 has been detected, but the image projection unit 13 and the image imaging unit Means for detecting the relative positional relationship with 14 is not limited to this. For example, based on the fact that the predetermined position fixing rib 2d is engaged with a stove (not shown) provided on one of the small diameter portions l ib of the imaging case 11, and further rotation of the imaging case 11 is restricted, It is OK even if it is detected that the image capturing unit 14 has a predetermined positional relationship with the image projecting unit 13. With this configuration, the detection of the three-dimensional shape is permitted only when the positional relationship between the image projecting unit 13 and the image capturing unit 14 surely becomes a predetermined positional relationship, and the detection of the three-dimensional shape is more improved. Can be done with precision.
[0302] 本発明の一つの実施形態において、画像入出力装置は、投影手段を保持する投 影筐体と、 [0302] In one embodiment of the present invention, the image input / output device includes a projection housing for holding projection means,
その投影筐体に対し相対的に移動可能に配設された、撮像手段を保持する撮像 筐体とを更に備えていても良い。この場合、空間変調手段は液晶パネルで構成され 、投影手段はその液晶パネルにより出力された画像信号光を所定の投影面に結像さ せる投影光学系を有し、撮像手段は、光を電気信号に変換する受光素子と、入射す る光をその受光素子に結像させる撮像光学系とを有していても良い。  The image processing apparatus may further include an imaging housing that holds the imaging unit and that is disposed so as to be relatively movable with respect to the projection housing. In this case, the spatial modulation means is constituted by a liquid crystal panel, the projection means has a projection optical system for forming an image signal light output from the liquid crystal panel on a predetermined projection surface, and the imaging means converts the light into electricity. A light-receiving element that converts the light into a signal and an imaging optical system that forms incident light on the light-receiving element may be provided.
[0303] このような構成によれば、投影手段は液晶パネルにより出力された画像信号光を所 定の投影面に結像させる投影光学系を有し、撮像手段は入射する光を受光素子に 結像させる撮像光学系を有する。一般的に、上記撮像手段に設けられる受光素子は 、上記投影手段に設けられる液晶パネルに比較して、小さいものが多い。このような 場合、撮像手段が有する撮像光学系は、投影手段が有する投影光学系よりも小さく 構成される。さらに、投影手段には、光源手段が備えられている。よって、撮像手段を 保持する撮像筐体は、投影手段を保持する投影筐体に比較して小さく構成すること が容易であり、その撮像筐体が投影筐体に対し相対的に移動可能に配設されている [0304] また、このような画像入出力装置によれば、撮像筐体は投影筐体に比較して小さく 構成することが容易であり、その撮像筐体が投影筐体に対し相対的に移動可能に配 設されているので、その撮像筐体に保持された撮像手段と投影筐体に保持された投 影手段とを容易に相対的に移動させることができるという効果がある。 [0303] According to such a configuration, the projection unit has the projection optical system that forms an image signal light output from the liquid crystal panel on a predetermined projection surface, and the imaging unit transmits incident light to the light receiving element. It has an imaging optical system for forming an image. In general, the light receiving element provided in the imaging means is often smaller than the liquid crystal panel provided in the projection means. In such a case, the imaging optical system of the imaging means is configured to be smaller than the projection optical system of the projection means. Further, the projection means is provided with a light source means. Therefore, it is easy to make the imaging housing holding the imaging means smaller than the projection housing holding the projection means, and the imaging housing is arranged so as to be relatively movable with respect to the projection housing. Is established [0304] Further, according to such an image input / output device, it is easy to make the imaging housing smaller than the projection housing, and the imaging housing moves relatively to the projection housing. Since it is provided as possible, there is an effect that the imaging means held by the imaging housing and the projection means held by the projection housing can be relatively easily moved.
[0305] 本発明の一つの実施形態において、投影手段は、光源手段から出射される光を所 定の投影方向に投影するための投影方向決定手段を有していても良い。  [0305] In one embodiment of the present invention, the projection unit may include a projection direction determination unit for projecting the light emitted from the light source unit in a predetermined projection direction.
[0303] According to such a configuration, the projection means has the projection optical system that focuses the image signal light output by the liquid crystal panel onto the predetermined projection surface, and the imaging means has the imaging optical system that focuses incident light onto the light receiving element. In general, the light receiving element provided in the imaging means is often smaller than the liquid crystal panel provided in the projection means. In such a case, the imaging optical system of the imaging means is made smaller than the projection optical system of the projection means. Furthermore, the projection means is provided with the light source means. Accordingly, the imaging housing holding the imaging means can easily be made smaller than the projection housing holding the projection means, and this imaging housing is disposed so as to be movable relative to the projection housing.
[0304] Further, according to such an image input/output device, since the imaging housing can easily be made smaller than the projection housing and is disposed so as to be movable relative to the projection housing, there is the effect that the imaging means held in the imaging housing and the projection means held in the projection housing can easily be moved relative to each other.
[0305] In one embodiment of the present invention, the projection means may have projection direction determining means for projecting the light emitted from the light source means in a predetermined projection direction.
[0306] According to such a configuration, since the projection means has the projection direction determining means for projecting the light emitted from the light source means in the predetermined projection direction, the light source means, the spatial modulation means, and the projection optical system need not be provided on a straight line; the projection means can be arranged compactly to suit the shape of the projection housing, which has the effect that the projection housing can be miniaturized.
[0308] このような構成によれば、基体に対し投影手段及び撮像手段の少なくとも一方の位 置を移動することができるので、投影手段及び撮像手段が基体に支持された状態で あっても、投影手段により画像信号光を投影可能な方向や、撮像手段により撮像可 能な方向に融通性を持たせることができるという効果がある。  According to such a configuration, at least one of the projection unit and the imaging unit can be moved with respect to the base, so that even when the projection unit and the imaging unit are supported by the base, There is an effect that flexibility can be provided in a direction in which the image signal light can be projected by the projection means and in a direction in which the image signal can be captured by the imaging means.
[0309] In one embodiment of the present invention, the support member may be connected at one end to at least one of the projection means and the imaging means and may have, at the other end, an attachment portion for attaching the support member to the base. Further, at least one of the base and the support member may be provided with a power supply section for supplying electric power to at least one of the projection means and the imaging means.
[0310] According to such a configuration, since the power supply section for supplying electric power to at least one of the projection means and the imaging means is provided in at least one of the base and the support member, the projection means and the imaging means can be made lightweight, which has the effect that the operation of moving the positions of the projection means and the imaging means relative to the base becomes easier. In addition, since the projection means and the imaging means are made lighter, there is also the effect that less strength is required of the support member than when a power supply section is provided in the projection means and the imaging means.
[0310] According to this configuration, the power supply unit for supplying power to at least one of the projection unit and the imaging unit is provided in at least one of the base and the support member, so the projection unit and the imaging unit can be made lighter, which makes it easier to move their positions relative to the base. Moreover, because the projection unit and the imaging unit are lighter, the support member requires less strength than it would if each unit carried its own power supply.

[0311] In one embodiment of the present invention, the image input/output device may include a projection imaging control unit that causes the imaging unit to capture images while the projection unit projects image signal light.
[0312] According to this configuration, imaging by the imaging unit is performed while the projection unit projects image signal light. Because the imaging unit can capture a subject located in a direction different from the projection direction, the user can, for example, have the image signal light projected in a direction away from himself or herself while the imaging unit captures the user; the user can thus photograph himself or herself while viewing the projected image. Also, when the imaging unit captures a subject located in a direction different from the projection direction, the image signal light does not fall on that subject, so the subject can be imaged without the projected image appearing in the captured picture.
[0313] In one embodiment of the present invention, the projection imaging control unit may be configured to control the projection unit so that related information concerning the captured image acquired by the imaging unit is projected by the projection unit.
[0314] According to this configuration, related information concerning the captured image acquired by the imaging unit is projected by the projection unit, so the user can perform imaging while viewing that related information. The related information corresponds to, for example, character strings or graphics indicating the operation method or operation procedure of the imaging unit, or its imaging mode.
[0315] In one embodiment of the present invention, the projection imaging control unit may be configured to control the projection unit so that the captured image of the subject captured by the imaging unit is projected by the projection unit.
[0316] According to this configuration, the captured image of the subject captured by the imaging unit is projected by the projection unit, so the user can perform imaging while viewing the captured image of the subject.
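By way of illustration only, the projection imaging control described in paragraphs [0312] to [0316] amounts to a simple capture-and-display loop. The following is a minimal sketch in Python; the `projector` and `camera` objects and their `display`/`capture` methods are hypothetical stand-ins for device drivers and are not APIs named in this disclosure.

```python
import time

def preview_loop(projector, camera, duration_s=10.0, fps=15):
    """Project each captured frame back onto the projection surface so the
    user can watch the picture being taken, while the imaging direction may
    differ from the projection direction."""
    period = 1.0 / fps
    t_end = time.time() + duration_s
    frames = []
    while time.time() < t_end:
        frame = camera.capture()    # subject lies outside the projection direction
        projector.display(frame)    # projected image therefore does not fall on the subject
        frames.append(frame)
        time.sleep(period)
    return frames
```

Because the imaging direction and the projection direction differ, the projected preview never appears in the captured frames themselves.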
[0317] In one embodiment of the present invention, the image input/output device may include a projection image updating unit that updates the image signal light projected by the projection unit.

[0318] According to this configuration, the image signal light projected by the projection unit is updated, so the user can perform imaging while watching the projected image being updated.
[0319] In one embodiment of the present invention, the image input/output device may include a three-dimensional information detection unit that, when image signal light having a predetermined pattern is projected onto the subject by the projection unit, detects three-dimensional information of the subject based on the image captured by the imaging unit.
[0320] According to this configuration, in an image input/output device in which three-dimensional information of the subject is detected from the image captured by the imaging unit while image signal light having a predetermined pattern is projected onto the subject, the imaging unit and the projection unit can be moved relative to each other so that a subject located in a direction different from the projection direction can be imaged. This broadens the range of uses of the projection unit and the imaging unit that are provided for detecting three-dimensional information.
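For orientation only, the following is a minimal sketch of the triangulation on which pattern-projection methods of this kind typically rely. The pinhole model, the variable names, and the stripe-based correspondence are illustrative assumptions, not details taken from this disclosure.

```python
def depth_from_stripe(x_observed_px: float,
                      x_projected_px: float,
                      focal_length_px: float,
                      baseline_mm: float) -> float:
    """Depth from the disparity between where a pattern stripe is projected
    and where the camera observes it: Z = f * b / d (pinhole model)."""
    disparity = x_projected_px - x_observed_px
    if disparity <= 0.0:
        raise ValueError("non-positive disparity: point outside usable geometry")
    return focal_length_px * baseline_mm / disparity
```

With a coded pattern, the projected coordinate of each stripe is known in advance, so a single captured frame yields one depth value per illuminated pixel. This is also why the relative pose of the projection unit and the imaging unit (the baseline) must be known and stable, which motivates the position determination and fixing units described below.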
[0321] In one embodiment of the present invention, the image input/output device may include a position determination unit that determines whether the relative positional relationship between the imaging unit and the projection unit is within a predetermined range.
[0322] According to this configuration, the position determination unit can determine that the relative positional relationship between the imaging unit and the projection unit is not within the predetermined range.
[0323] In one embodiment of the present invention, the image input/output device may include a fixing unit that fixes the positional relationship between the imaging unit and the projection unit while their relative positional relationship is in a predetermined positional relationship.
[0324] According to this configuration, with the relative positional relationship between the imaging unit and the projection unit in the predetermined positional relationship, the fixing unit can fix the positional relationship between the two units.
[0325] In one embodiment of the present invention, the image input/output device may include a position determination unit that determines the relative positional relationship between the imaging unit and the projection unit, and a three-dimensional information detection prohibition unit that prohibits detection of three-dimensional information by the three-dimensional information detection unit when the position determination unit determines that the relative positional relationship is not within a predetermined range.
[0326] According to this configuration, when the position determination unit determines that the relative positional relationship between the imaging unit and the projection unit is not within the predetermined range, detection of three-dimensional information by the three-dimensional information detection unit is prohibited, so that only highly accurate three-dimensional information is reliably detected. That is, the three-dimensional information detection unit detects three-dimensional information from the image captured by the imaging unit while image signal light having a predetermined pattern is projected onto the subject by the projection unit. Therefore, when the relative positional relationship between the two units is outside the predetermined range, that is, when the projection unit cannot project the patterned image signal light onto the subject or when accurate three-dimensional information cannot be obtained, detection is prohibited, and the output of inaccurate three-dimensional information is suppressed.
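A minimal sketch of such an interlock follows. The pose representation (a pan angle and a baseline distance) and the numeric limits are invented for illustration and do not appear in this disclosure.

```python
PAN_RANGE_DEG = (-5.0, 5.0)          # assumed calibrated pan range
BASELINE_RANGE_MM = (80.0, 120.0)    # assumed calibrated baseline range

def pose_in_range(pan_deg: float, baseline_mm: float) -> bool:
    """Position determination: is the camera/projector pose inside the range
    for which the triangulation geometry is calibrated?"""
    return (PAN_RANGE_DEG[0] <= pan_deg <= PAN_RANGE_DEG[1]
            and BASELINE_RANGE_MM[0] <= baseline_mm <= BASELINE_RANGE_MM[1])

def try_detect_3d(pan_deg: float, baseline_mm: float, detect_fn):
    """Run pattern-based detection only inside the calibrated pose range;
    otherwise prohibit it so inaccurate 3D information is never produced."""
    if not pose_in_range(pan_deg, baseline_mm):
        return None                  # detection prohibited
    return detect_fn()
```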
[0327] In one embodiment of the present invention, the image input/output device may include a resolution reduction transmission unit that reduces the resolution of the captured image acquired by the imaging unit and transmits it to an external device.
[0328] According to this configuration, the resolution of the captured image acquired by the imaging unit is reduced before the image is transmitted to the outside, so the processing speed can be improved.
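As an illustration, resolution reduction can be as simple as block averaging before transmission. The sketch below operates on a nested-list grayscale image so that no third-party library is assumed; an actual device would more likely downsample in hardware or with an imaging library.

```python
def downsample_2x(image):
    """Halve each dimension by averaging 2x2 pixel blocks, cutting the data
    to roughly a quarter of its size before it is sent to the external device."""
    h = len(image) // 2 * 2          # drop an odd trailing row/column if present
    w = len(image[0]) // 2 * 2
    return [[(image[y][x] + image[y][x + 1] +
              image[y + 1][x] + image[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```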

Claims

[1] An image input/output device comprising: a projection unit that has a light source unit for emitting light and a spatial modulation unit for spatially modulating the light emitted from the light source unit and outputting image signal light, and that projects the image signal light output by the spatial modulation unit in a projection direction; and
an imaging unit capable of capturing at least a subject existing in the projection direction and acquiring captured data thereof,
wherein the imaging unit and the projection unit are provided such that the imaging direction of the imaging unit can be changed relative to the projection direction of the projection unit, so that a subject existing in a direction different from the projection direction can be imaged by the imaging unit.
[2] The image input/output device according to claim 1, further comprising: a projection housing that holds the projection unit; and
an imaging housing that holds the imaging unit and is arranged to be movable relative to the projection housing,
wherein the spatial modulation unit comprises a liquid crystal panel,
the projection unit has a projection optical system that focuses the image signal light output by the liquid crystal panel onto a predetermined projection surface, and
the imaging unit has a light receiving element that converts light into an electric signal, and an imaging optical system that focuses incident light onto the light receiving element.
[3] The image input/output device according to claim 2, wherein the projection unit has a projection direction determination unit for projecting the light emitted from the light source unit in a predetermined projection direction.
[4] The image input/output device according to claim 1, further comprising: a base that supports the projection unit and the imaging unit; and
a support member that supports at least one of the projection unit and the imaging unit so that its position is movable relative to the base.
[5] The image input/output device according to claim 4, wherein one end of the support member is connected to at least one of the projection unit and the imaging unit, and the other end has an attachment portion for attaching the support member to the base, and
at least one of the base and the support member is provided with a power supply unit for supplying power to at least one of the projection unit and the imaging unit.
[6] The image input/output device according to claim 1, further comprising a projection imaging control unit that causes the imaging unit to capture images while the projection unit projects image signal light.
[7] The image input/output device according to claim 6, wherein the projection imaging control unit controls the projection unit so that related information concerning the captured image acquired by the imaging unit is projected by the projection unit.
[8] The image input/output device according to claim 6, wherein the projection imaging control unit controls the projection unit so that the captured image of the subject captured by the imaging unit is projected by the projection unit.
[9] The image input/output device according to claim 6, further comprising a projection image updating unit that updates the image signal light projected by the projection unit.
[10] The image input/output device according to claim 1, further comprising a three-dimensional information detection unit that, when image signal light having a predetermined pattern is projected onto the subject by the projection unit, detects three-dimensional information of the subject based on the image captured by the imaging unit.
[11] The image input/output device according to claim 1, further comprising a position determination unit that determines whether the relative positional relationship between the imaging unit and the projection unit is within a predetermined range.
[12] The image input/output device according to claim 11, further comprising a fixing unit that fixes the positional relationship between the imaging unit and the projection unit in a state where their relative positional relationship is the predetermined positional relationship.
[13] The image input/output device according to claim 10, further comprising: a position determination unit that determines the relative positional relationship between the imaging unit and the projection unit; and a three-dimensional information detection prohibition unit that prohibits detection of three-dimensional information by the three-dimensional information detection unit when the position determination unit determines that the relative positional relationship between the imaging unit and the projection unit is not within a predetermined range.
[14] The image input/output device according to claim 1, further comprising a resolution reduction transmission unit that reduces the resolution of a captured image acquired by the imaging unit and transmits the image to the outside.
PCT/JP2005/010470 2004-06-09 2005-06-08 Image i/o device WO2005122553A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004171659A JP2005354306A (en) 2004-06-09 2004-06-09 Image input output apparatus
JP2004-171659 2004-06-09

Publications (1)

Publication Number Publication Date
WO2005122553A1 (en) 2005-12-22

Family

ID=35503494

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/010470 WO2005122553A1 (en) 2004-06-09 2005-06-08 Image i/o device

Country Status (2)

Country Link
JP (1) JP2005354306A (en)
WO (1) WO2005122553A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2790402A1 (en) * 2013-04-09 2014-10-15 HIMS International Corp. Image magnifying apparatus

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4935606B2 (en) * 2007-09-28 2012-05-23 オムロン株式会社 Imaging system
WO2013019217A1 (en) * 2011-08-02 2013-02-07 Hewlett-Packard Development Company, L.P. Projection capture system and method
US9521276B2 (en) 2011-08-02 2016-12-13 Hewlett-Packard Development Company, L.P. Portable projection capture device
JP2022131476A (en) * 2021-02-26 2022-09-07 レノボ・シンガポール・プライベート・リミテッド Camera module and electronic apparatus with camera module

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000138856A (en) * 1998-10-29 2000-05-16 Seiko Epson Corp Image input device, presentation system and information storage medium
JP2003015218A (en) * 2001-07-03 2003-01-15 Ricoh Co Ltd Projection display device
JP2003348387A (en) * 2002-05-29 2003-12-05 Elmo Co Ltd Document presentation apparatus



Also Published As

Publication number Publication date
JP2005354306A (en) 2005-12-22

Similar Documents

Publication Publication Date Title
WO2005095886A1 (en) 3d shape detection device, 3d shape detection method, and 3d shape detection program
US7845807B2 (en) Projection apparatus and three-dimensional-shape detection apparatus
JP2005293075A5 (en)
TWI668997B (en) Image device for generating panorama depth images and related image device
WO2006035736A1 (en) 3d information acquisition method and 3d information acquisition device
KR100753885B1 (en) Image obtaining apparatus
EP3617644A1 (en) Three-dimensional measuring system and corresponding operating method
JP2007271395A (en) Three-dimensional color/shape measuring apparatus
US20170023780A1 (en) Catadioptric projector systems, devices, and methods
JP2007271530A (en) Apparatus and method for detecting three-dimensional shape
US20080175581A1 (en) Flash device
JP2005291839A5 (en)
WO2005122553A1 (en) Image i/o device
US9906695B2 (en) Manufacturing method of imaging module and imaging module manufacturing apparatus
JP2007256116A (en) Three-dimensional shape detector
US11763491B2 (en) Compensation of three-dimensional measuring instrument having an autofocus camera
JP2005293291A (en) Image input/output device
US11326874B2 (en) Structured light projection optical system for obtaining 3D data of object surface
EP3988895A1 (en) Compensation of three-dimensional measuring instrument having an autofocus camera
JP2005352835A (en) Image i/o device
US20160323486A1 (en) Imaging module, manufacturing method of imaging module, and electronic device
JP4552484B2 (en) Image input / output device
JP2005293290A5 (en)
JP2006031506A (en) Image input-output apparatus
JP4141874B2 (en) Focal length and / or angle of view calculation method and focal length calculation light projection apparatus

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase