
WO2001041102A1 - Experience simulating device and method - Google Patents

Experience simulating device and method

Info

Publication number
WO2001041102A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
image
movement
display device
video
Prior art date
Application number
PCT/JP2000/008381
Other languages
French (fr)
Japanese (ja)
Inventor
Toshiya Iinuma
Kenji Oyamada
Masahiro Seto
Original Assignee
Sanyo Electric Co., Ltd.
Sanyo Electric Software Co., Ltd.
Priority date
Filing date
Publication date
Priority claimed from JP34131299A external-priority patent/JP2001154570A/en
Priority claimed from JP37113599A external-priority patent/JP2001183968A/en
Application filed by Sanyo Electric Co., Ltd., Sanyo Electric Software Co., Ltd. filed Critical Sanyo Electric Co., Ltd.
Publication of WO2001041102A1 publication Critical patent/WO2001041102A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00: Teaching not covered by other main groups of this subclass
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63G: MERRY-GO-ROUNDS; SWINGS; ROCKING-HORSES; CHUTES; SWITCHBACKS; SIMILAR DEVICES FOR PUBLIC AMUSEMENT
    • A63G 31/00: Amusement arrangements
    • A63G 31/16: Amusement arrangements creating illusions of travel

Definitions

  • The present invention relates to an experience simulating apparatus and method that simulate an experience by giving a person motion corresponding to an image.
  • Virtual reality is a technology that uses video, audio, and the like to give a person the illusion of being in another place even at a remote location, and it is coming into use in fields as varied as games, music, and movies.
  • A simulated experience device using virtual reality technology gives people a simulated experience, safely and as many times as desired, by playing back software created to resemble the movement, vibration, video, and sound that they would actually experience.
  • However, the software used in such a simulated experience device must appeal to human senses, so specialized know-how is required to create it and it cannot be created easily.
  • The simulated experience device disclosed in Japanese Patent Application Laid-Open No. 11-153949 comprises motion video detecting means for detecting a motion video, which is the motion of the video, from a video signal; voice detecting means for detecting, from an audio signal, the voice corresponding to the motion video; motion sensing means for sensing the movement of the user; motion generating means for generating respective motion signals from the motion video, the voice, and the movement of the user; motion drive control means for controlling, based on the motion signals, the motion drive of the vehicle on which the user rides; and reproduction effect control means for controlling the reproduction effects of the video signal and the audio signal from the movement of the user.
  • With this configuration, motion drive control is performed based on actually measured values, such as the motion video, the audio, and the movement of the user, instead of software data dedicated to motion drive, so the man-hours of software development can be greatly reduced and versatility is improved because the device does not depend on dedicated software.
  • However, simply controlling the motion drive of the vehicle based on the motion of the video does not produce realistic motion; the present invention is intended to solve this problem and provide a more realistic experience.
  • A first simulated experience device according to the invention comprises a display device for displaying video; video signal supply means for supplying a video signal to the display device; motion detecting means for detecting motion of the video based on the video signal supplied to the display device; and pseudo effect means for giving a stimulus to the user.
  • The pseudo effect means classifies the motion of the video detected by the motion detecting means into a central portion and a peripheral portion of the image display area of the display device, or into a subject and a background, and gives a stimulus to the user according to the motion of the peripheral portion or the background.
  • The pseudo effect means comprises, for example, ride means arranged in front of the display device on which the user is seated; rocking means for rocking the ride means to give a stimulus to the user; and swing control means that classifies the motion detected by the motion detecting means into the central portion and the peripheral portion of the image display area, or into the subject and the background, and drives the rocking means according to the motion of the peripheral portion or the background.
  • The swing control means includes, for example, means for driving the rocking means according to the motion of the central portion or the subject when the magnitude of the motion of the peripheral portion or the background is smaller than a first predetermined value.
  • The swing control means also includes, for example, means for driving the rocking means according to the motion of the image at the bottom of the screen, or with motion unrelated to the image, when the magnitude of the motion of the central portion or the subject is smaller than a second predetermined value.
  • A first simulated experience method according to the invention comprises a first step of supplying a video signal to a display device; a second step of detecting motion of the video based on the video signal supplied to the display device; and a third step of giving a stimulus to the user. The third step classifies the motion detected in the second step into a central portion and a peripheral portion of the image display area of the display device, or into a subject and a background, and gives a stimulus to the user according to the motion of the peripheral portion or the background.
  • The third step includes, for example, a step of driving the rocking means according to the motion of the central portion or the subject when the magnitude of the motion of the peripheral portion or the background is smaller than the first predetermined value.
  • The third step also includes, for example, a step of driving the rocking means according to the motion of the image at the bottom of the screen, or with motion unrelated to the image, when the magnitude of the motion of the central portion or the subject is smaller than the second predetermined value.
  • A second simulated experience device according to the invention comprises a display device for displaying video; video signal supply means for supplying a video signal to the display device; motion detecting means for detecting motion of the video from the video signal supplied to the display device; and pseudo effect means for giving a stimulus to the user.
  • The pseudo effect means comprises means for giving the user a stimulus whose strength corresponds to the motion of the video detected by the motion detecting means, and means for gradually weakening the stimulus when it has remained at a predetermined maximum strength for a predetermined time or more.
  • The pseudo effect means comprises, for example, ride means arranged in front of the display device on which the user is seated; rocking means for rocking the ride means to give a stimulus to the user; and swing control means for driving the rocking means.
  • The swing control means includes means for driving the rocking means according to the motion detected by the motion detecting means, and means for gradually returning the rocking means to its center position when the rocking amount of the rocking means has remained at its maximum for a predetermined time or more.
  • The swing control means also includes, for example, means for swinging the ride means slightly in the direction opposite to the motion of the image, before swinging it in the direction corresponding to that motion, when the motion detected by the motion detecting means increases sharply.
  • A second simulated experience method according to the invention comprises a first step of supplying a video signal to a display device; a second step of detecting motion of the video from the video signal supplied to the display device; and a third step of giving a stimulus to the user. The third step comprises a step of giving the user a stimulus whose strength corresponds to the motion detected in the second step, and a step of gradually weakening the stimulus when it has remained at a predetermined maximum strength for a predetermined time or more.
  • Where ride means for seating the user is provided in front of the display device, together with rocking means for rocking the ride means to give a stimulus to the user, the third step may comprise a step of driving the rocking means according to the motion detected in the second step, and a step of gradually returning the rocking means to its center position when the rocking amount of the rocking means has remained at its maximum for a predetermined time or more.
  • The third step includes, for example, a step of rocking the ride means slightly in the direction opposite to the motion of the image, before rocking it in the direction corresponding to that motion, when the motion detected in the second step increases sharply.
  • FIG. 1 is a block diagram showing the configuration of the simulated experience device.
  • FIG. 2 is a block diagram showing the configuration of the swing control means in FIG. 1.
  • FIG. 3 is a flowchart showing the operation of the simulated experience device.
  • FIG. 4 is a flowchart showing the operation of the operation signal generation unit 69 in FIG. 2, and FIG. 5 is a block diagram showing another example of the simulated experience device.
  • FIG. 6 is a block diagram showing still another example of the simulated experience device.
  • FIG. 1 shows the configuration of the simulated experience device.
  • In FIG. 1, reference numeral 1 denotes a video source, such as a VTR, CD-ROM, TV broadcast, or TV game, serving as video signal supply means.
  • 2 denotes video capturing means that captures the video signal from the video source 1 in units of at least one frame.
  • 3 denotes display means comprising a CRT, a liquid crystal display, or the like.
  • 4 denotes display control means for displaying the video captured by the video capturing means 2 on the display means 3.
  • In FIG. 2, reference numeral 61 denotes a classification unit that classifies the motion vectors detected for each small area by the motion vector detecting means 5 into the central portion of the screen displayed on the display means 3, the peripheral portion surrounding the central portion, and the lower portion of the screen; 62 denotes a peripheral average motion vector calculation unit that calculates the average of the motion vectors of the peripheral portion classified by the classification unit 61; 63 denotes a central average motion vector calculation unit that calculates the average of the motion vectors of the central portion classified by the classification unit 61; and 64 denotes a lower average motion vector calculation unit that calculates the average of the motion vectors of the lower portion classified by the classification unit 61.
  • Reference numeral 69 denotes an operation signal generation unit that generates a motion signal based on the average values calculated by the average motion vector calculation units 62, 63, and 64, or on motion unrelated to the video generated by the irrelevant motion generating means 68.
  • The classification unit 61 classifies the motion vectors detected for each small area by the motion vector detecting means 5 into the central portion of the screen, the peripheral portion surrounding the central portion, and the lower portion of the screen.
  • The peripheral average motion vector calculation unit 62 calculates the average value of the motion vectors of the peripheral portion.
  • The central average motion vector calculation unit 63 calculates the average value of the motion vectors of the central portion.
  • The lower average motion vector calculation unit 64 calculates the average value of the motion vectors of the lower portion of the screen.
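  • As an illustration only (the function name, the grid size, and the region boundaries `lower_rows` and `border` are our assumptions, since the patent does not define exactly where the central, peripheral, and lower regions begin and end), the region split and averaging described above could be sketched in Python as follows.

```python
import numpy as np

def average_region_vectors(vectors, rows, cols, lower_rows=2, border=2):
    """Split per-block motion vectors into peripheral, central and lower-screen
    groups and return the average (dx, dy) of each group.

    vectors: array of shape (rows, cols, 2), one motion vector per small area.
    lower_rows, border: assumed region boundaries, not specified by the patent.
    """
    v = np.asarray(vectors, dtype=float).reshape(rows, cols, 2)

    lower = v[rows - lower_rows:, :, :]                  # bottom rows of the screen
    rest = v[:rows - lower_rows, :, :]
    central = rest[border:-border, border:-border, :]    # inner blocks
    mask = np.ones(rest.shape[:2], dtype=bool)
    mask[border:-border, border:-border] = False         # everything except the center
    peripheral = rest[mask]

    return (peripheral.reshape(-1, 2).mean(axis=0),
            central.reshape(-1, 2).mean(axis=0),
            lower.reshape(-1, 2).mean(axis=0))
```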
  • The peripheral motion determination unit 65 determines whether the average motion vector of the peripheral portion calculated by the peripheral average motion vector calculation unit 62 is smaller than a first predetermined value. If it is smaller than the first predetermined value, the peripheral motion determination unit 65 issues a determination instruction to the central motion determination unit 66. If it is equal to or larger than the first predetermined value, the peripheral motion determination unit 65 sends the average motion vector of the peripheral portion to the operation signal generation unit 69, and the operation signal generation unit 69 generates a motion signal based on it.
  • When the central motion determination unit 66 receives the determination instruction from the peripheral motion determination unit 65, it determines whether the average motion vector of the central portion calculated by the central average motion vector calculation unit 63 is smaller than a second predetermined value. If it is smaller than the second predetermined value, the central motion determination unit 66 issues a determination instruction to the lower motion determination unit 67.
  • If the average motion vector of the central portion calculated by the central average motion vector calculation unit 63 is equal to or larger than the second predetermined value, the central motion determination unit 66 sends it to the operation signal generation unit 69, and the operation signal generation unit 69 generates a motion signal based on the average motion vector of the central portion.
  • When the lower motion determination unit 67 receives the determination instruction from the central motion determination unit 66, it determines whether the average motion vector of the lower portion of the screen calculated by the lower average motion vector calculation unit 64 is smaller than a third predetermined value; if it is smaller, the lower motion determination unit 67 issues an operation instruction to the irrelevant motion generation unit 68. If the average motion vector of the lower portion of the screen is equal to or larger than the third predetermined value, the lower motion determination unit 67 sends it to the operation signal generation unit 69, and the operation signal generation unit 69 generates a motion signal based on the average motion vector of the lower portion of the screen.
  • When the irrelevant motion generation unit 68 receives the operation instruction from the lower motion determination unit 67, it generates a motion vector of motion unrelated to the video and sends it to the operation signal generation unit 69, which generates a motion signal based on that motion vector.
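  • A minimal sketch of this peripheral-to-central-to-lower-to-irrelevant cascade is given below; the threshold values and the random gentle sway used for the motion unrelated to the video are placeholders, not values from the patent.

```python
import numpy as np

# Placeholder thresholds; the patent only calls them the first, second and
# third predetermined values.
FIRST, SECOND, THIRD = 1.0, 1.0, 1.0

def select_motion_vector(peripheral_avg, central_avg, lower_avg, rng=np.random):
    """Choose the vector used to generate the motion signal, following the
    peripheral -> central -> lower -> irrelevant-motion cascade."""
    if np.linalg.norm(peripheral_avg) >= FIRST:
        return np.asarray(peripheral_avg)   # peripheral/background motion wins
    if np.linalg.norm(central_avg) >= SECOND:
        return np.asarray(central_avg)      # fall back to the central subject
    if np.linalg.norm(lower_avg) >= THIRD:
        return np.asarray(lower_avg)        # then the bottom of the screen
    # Motion is small everywhere: generate a gentle sway unrelated to the video.
    return rng.uniform(-0.2, 0.2, size=2)
```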
  • 11 is effect control means for controlling the drive of the effect devices according to the motion signal from the swing control means 6, and 12 is audio output means for outputting the sound accompanying the video from the video source 1 to a speaker 13.
  • 14 is blower means that controls airflow toward the user 9 riding on the ride means 7 according to the control signal from the effect control means 11, that is, according to the motion of the video, to further enhance the sense of realism; 15 is sound image control means for controlling the sound image of the audio output from the speaker 13 according to the control signal from the effect control means 11; and 16 is light means that irradiates the user with light according to the control signal from the effect control means 11 to enhance the sense of realism.
  • Since the swing control means 6 gives the user 9 a sense of realism, and stimuli that enhance it, through the ride means 7, the speaker 13, the blower means 14, and the light means 16 according to the detected motion vectors, the swing control means 6, the ride means 7, the rocking means 8, the effect control means 11, the speaker 13, the blower means 14, the sound image control means 15, and the light means 16 correspond to the pseudo effect means of the present invention.
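  • Purely as an illustration of this architecture: the patent says only that the effect control means 11 drives the blower, sound image, and light according to the motion signal, so the specific mapping in the sketch below is an assumption of ours.

```python
import numpy as np

def drive_effects(motion_vector):
    """Derive illustrative effect-device commands from one motion signal.
    Returns fan speed (0..1), sound-image pan (-1..1) and light level (0..1)."""
    dx, dy = motion_vector
    magnitude = float(np.hypot(dx, dy))
    fan_speed = min(1.0, magnitude / 5.0)              # stronger apparent motion, more wind
    sound_pan = float(np.clip(dx / 5.0, -1.0, 1.0))    # pan the sound image with lateral motion
    light_level = min(1.0, 0.3 + magnitude / 10.0)     # brighten slightly with motion
    return fan_speed, sound_pan, light_level
```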
  • Figure 3 shows the operation of the virtual experience device.
  • First, the video capturing means 2 captures video from the video source 1 (step S1).
  • The motion vector detecting means 5 detects a motion vector for each small area from the video captured in step S1 (step S2).
  • The swing control means 6 takes in the motion vectors detected for each small area in step S2, and the classification unit 61 classifies them into the central portion of the image, the peripheral portion surrounding the central portion, and the lower portion of the screen (step S3).
  • The peripheral average motion vector calculation unit 62 calculates the average motion vector of the peripheral portion (step S4).
  • The central average motion vector calculation unit 63 calculates the average motion vector of the central portion (step S5).
  • The lower average motion vector calculation unit 64 calculates the average motion vector of the lower portion of the screen (step S6).
  • The peripheral motion determination unit 65 determines whether the average motion vector of the peripheral portion is smaller than the first predetermined value (step S7). If it is equal to or larger than the first predetermined value, the process proceeds to step S8, in which the operation signal generation unit 69 generates a motion signal based on the average motion vector of the peripheral portion and the rocking means 8 rocks the ride means 7 based on that motion signal. If it is determined in step S7 that the average motion vector of the peripheral portion is smaller than the first predetermined value, the central motion determination unit 66 determines whether the average motion vector of the central portion is smaller than the second predetermined value (step S9).
  • If the average motion vector of the central portion is equal to or larger than the second predetermined value, the process proceeds to step S10, in which the operation signal generation unit 69 generates a motion signal based on the average motion vector of the central portion and the rocking means 8 rocks the ride means 7 based on that motion signal. If it is determined in step S9 that the average motion vector of the central portion is smaller than the second predetermined value, the lower motion determination unit 67 determines whether the motion vector of the lower portion of the screen is smaller than the third predetermined value (step S11).
  • If the motion vector of the lower portion of the screen is equal to or larger than the third predetermined value, the process proceeds to step S12, in which the operation signal generation unit 69 generates a motion signal based on the average motion vector of the lower portion of the screen and the rocking means 8 rocks the ride means 7 based on that motion signal. If it is smaller than the third predetermined value, the irrelevant motion generation unit 68 generates motion unrelated to the video, from which the motion signal is generated (step S13).
  • In each of these cases, the motion signal generated by the operation signal generation unit 69 is also sent to the effect control means 11.
  • The effect control means 11 controls the blower means 14, the sound image control means 15, and the light means 16 according to the motion signal in order to enhance the sense of realism given to the user riding on the ride means 7.
  • FIG. 4 shows the processing procedure of the operation signal generation unit 69 of the swing control means 6 in steps S8, S10, S12, and S13.
  • In principle, the operation signal generation unit 69 operates the rocking means 8 so as to rock the ride means 7 according to the direction and magnitude of the average motion vector input to it (step S27). For example, if the image tilts to the right, the ride means 7 is tilted to the right.
  • In step S20, the operation signal generation unit 69 determines, based on the average motion vector input to it, whether the motion of the video has increased sharply. If it has increased sharply, the process proceeds to step S21; if not, the process proceeds to step S23.
  • In step S21, the rocking means 8 is controlled so as to move the ride means 7 slightly in the direction opposite to the average motion vector input to the operation signal generation unit 69. Then, in step S22, the rocking means 8 is operated so as to rock the ride means 7 in the direction of that average motion vector, with a magnitude obtained by adding the slight movement of step S21 to the magnitude of the average motion vector.
  • In step S23, it is determined whether the ride means 7 is tilted forward, backward, left, or right from its center position. If it is tilted, the process proceeds to step S24; if not, the process proceeds to step S27.
  • In step S24, it is determined whether the tilt of the ride means 7 has remained the same for a predetermined time. If it has, the process proceeds to step S26; if not, the process proceeds to step S25. In step S26, because the user 9 has often become accustomed to the sensation when the ride means 7 remains in the same tilted state, the rocking means 8 is controlled so that the ride means 7 is gradually returned to its center position.
  • In step S25, it is determined whether the forward, backward, left, or right tilt of the ride means 7 is at its limit position (maximum tilt position). If it is at the limit position, the process proceeds to step S26; if not, the process proceeds to step S27.
  • In step S26, if the ride means 7 is at its limit position, it cannot be moved any further even if there is additional motion in the image, so the rocking means 8 is controlled so that the ride means 7 is gradually returned to its center position.
  • In step S27, the rocking means 8 is operated so as to rock the ride means 7 according to the direction and magnitude of the average motion vector input to the operation signal generation unit 69.
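  • The exceptional control of steps S20 through S27 might be sketched as follows; the class name, the sudden-change test, the hold timeout, the tilt limit, the return rate, and the counter-nudge ratio are all assumed values (the description later gives 1/10 as one example for the counter-nudge).

```python
import numpy as np

class OperationSignalGenerator:
    """Illustrative version of the S20-S27 logic: a brief counter-swing on a
    sudden increase in motion, and a gradual return to center when the tilt
    has been held too long or has reached its limit."""

    def __init__(self, tilt_limit=1.0, hold_time=3.0, nudge_ratio=0.1):
        self.tilt = np.zeros(2)      # current forward/backward and left/right tilt
        self.prev = np.zeros(2)      # previous average motion vector
        self.held_for = 0.0          # seconds the tilt has stayed unchanged
        self.tilt_limit = tilt_limit
        self.hold_time = hold_time
        self.nudge_ratio = nudge_ratio

    def step(self, avg_vector, dt=0.1):
        v = np.asarray(avg_vector, dtype=float)
        prev_tilt = self.tilt.copy()

        if np.linalg.norm(v) > 2.0 * np.linalg.norm(self.prev) + 0.5:
            # S20-S22: sudden increase -> brief swing against the motion, then with it.
            commands = [-self.nudge_ratio * v, (1.0 + self.nudge_ratio) * v]
        elif np.any(self.tilt != 0.0) and (self.held_for >= self.hold_time
                                           or np.any(np.abs(self.tilt) >= self.tilt_limit)):
            # S23-S26: tilted and held too long, or at the limit -> ease back to center.
            commands = [-0.1 * self.tilt]
        else:
            # S27: normal case, rock the ride according to the image motion.
            commands = [v]

        for c in commands:
            self.tilt = np.clip(self.tilt + c * dt, -self.tilt_limit, self.tilt_limit)
        self.held_for = self.held_for + dt if np.allclose(self.tilt, prev_tilt) else 0.0
        self.prev = v
        return commands
```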
  • In the above embodiment, the classification unit 61 classifies the motion vectors detected by the motion vector detecting means 5 into the central portion of the screen of the display means 3, the peripheral portion surrounding it, and the lower portion of the screen.
  • However, the present invention is not limited to this, and the motion vectors may instead be classified into the subject and the background of the video. When the classification unit 61 classifies the video into subject and background, the same configuration as described above can be realized by treating the central portion in the above embodiment as the subject and the peripheral portion as the background.
  • The above embodiment has been described for a configuration in which a two-dimensional image is displayed on the display means 3.
  • However, the present invention is not limited to this, and may also be applied to a configuration in which a video signal obtained by converting a two-dimensional image into a three-dimensional image is displayed on stereoscopic display means 18.
  • This embodiment is shown in FIG. Configurations denoted by the same reference numerals as in FIG. 1 have the same functions, and their description is therefore omitted.
  • Not only may the video signal obtained by converting 2D video into 3D video be displayed, but the video source 1 may also output 3D video directly, that is, a left-eye image L and a right-eye image R, and the stereoscopic display means 18 may display based on these signals.
  • In this case, the video source 1 supplies a three-dimensional image, specifically a left-eye image L and a right-eye image R, and the display control means 4 performs a three-dimensional display operation based on these video signals. The video capturing means 2 captures the left-eye image L and the right-eye image R, and the motion vector detecting means 5 detects the motion vectors from the captured video, taking the result of the means 19 into consideration; the vectors are then sent to the swing control means 6 to drive the ride means 7.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An experience simulating device comprising a display unit for displaying images, an image signal supply means for supplying image signals to the display unit, a movement detection means for detecting the movements of images based on the image signals supplied to the display unit, and an effect simulating means for giving a stimulus to an experiencing person, characterized in that the effect simulating means classifies the movements of images detected by the movement detection means into those at the center and those at the periphery of the image display area of the display unit, or into those of the subject and those of the background, and gives a stimulus to the experiencing person according to the movements of the images at the periphery or in the background.

Description

Specification: Simulated experience device and simulated experience method

<Technical Field>

The present invention relates to an experience simulating apparatus and method that simulate an experience by giving a person motion corresponding to an image.
<Background Art>

In recent years, research and development of virtual reality technology has been actively conducted for the multimedia industry. Virtual reality is a technology that uses video, audio, and the like to give a person the illusion of being in another place even at a remote location, and it is coming into use in fields as varied as games, music, and movies.

A simulated experience device using this virtual reality technology gives people a simulated experience, safely and as many times as desired, by playing back software created to resemble the movement, vibration, video, and sound that they would actually experience. However, because the software used in such a simulated experience device must appeal to human senses, specialized know-how is required to create it, and it could not be created easily.

Therefore, in order to reduce the man-hours of software development for simulated experiences and to increase versatility, the following simulated experience device has been developed (see Japanese Patent Application Laid-Open No. 11-153949).

The simulated experience device disclosed in Japanese Patent Application Laid-Open No. 11-153949 comprises motion video detecting means for detecting a motion video, which is the motion of the video, from a video signal; voice detecting means for detecting, from an audio signal, the voice corresponding to the motion video; motion sensing means for sensing the movement of the user; motion generating means for generating respective motion signals from the motion video, the voice, and the movement of the user; motion drive control means for controlling, based on the motion signals, the motion drive of the vehicle on which the user rides; and reproduction effect control means for controlling the reproduction effects of the video signal and the audio signal from the movement of the user.

With this configuration, motion drive control is performed based on actually measured values, such as the motion video, the audio, and the movement of the user, instead of software data dedicated to motion drive, so the man-hours of software development can be greatly reduced, and versatility can be increased because the device does not depend on dedicated software.
However, simply controlling the motion drive of the vehicle based on the motion of the video does not produce realistic motion.

That is, it is known that for the user (rider) to feel as if he or she is moving, the sensation is produced not by the motion of the central portion or of the subject at the center of the image, but by the motion of the scenery in the peripheral portion or the surrounding background (see the Journal of the Institute of Television Engineers of Japan, Vol. 50, No. 4, pp. 429-435 (1996), "Reactions between Images and the Body"). Therefore, simply controlling the motion drive according to the motion of the video does not always give the user a sense of realism.

In addition, if stimuli are given to the user continuously, the user becomes accustomed to them and the sense of realism fades. Furthermore, the driving range of the vehicle is limited; once it has been driven to that limit it cannot be driven any further even if further driving is attempted, so there is a possibility that no stimulus can be given to the user.

The present invention solves these problems and makes a more realistic experience possible.
<Disclosure of the Invention>

A first simulated experience device according to the present invention comprises a display device for displaying video; video signal supply means for supplying a video signal to the display device; motion detecting means for detecting motion of the video based on the video signal supplied to the display device; and pseudo effect means for giving a stimulus to the user. The pseudo effect means classifies the motion of the video detected by the motion detecting means into a central portion and a peripheral portion of the image display area of the display device, or into a subject and a background, and gives a stimulus to the user according to the motion of the peripheral portion or the background.

The pseudo effect means comprises, for example, ride means arranged in front of the display device on which the user is seated; rocking means for rocking the ride means to give a stimulus to the user; and swing control means that classifies the motion of the video detected by the motion detecting means into the central portion and the peripheral portion of the image display area of the display device, or into the subject and the background, and drives the rocking means according to the motion of the peripheral portion or the background.

The swing control means includes, for example, means for driving the rocking means according to the motion of the central portion or the subject when the magnitude of the motion of the peripheral portion or the background is smaller than a first predetermined value.

The swing control means also includes, for example, means for driving the rocking means according to the motion of the image at the bottom of the screen, or with motion unrelated to the image, when the magnitude of the motion of the central portion or the subject is smaller than a second predetermined value.
A first simulated experience method according to the present invention comprises a first step of supplying a video signal to a display device; a second step of detecting motion of the video based on the video signal supplied to the display device; and a third step of giving a stimulus to the user. The third step classifies the motion of the video detected in the second step into a central portion and a peripheral portion of the image display area of the display device, or into a subject and a background, and gives a stimulus to the user according to the motion of the peripheral portion or the background.

Ride means for seating the user may be provided in front of the display device, together with rocking means for rocking the ride means to give a stimulus to the user; in that case, the third step may classify the motion of the video detected in the second step into the central portion and the peripheral portion of the image display area of the display device, or into the subject and the background, and drive the rocking means according to the motion of the peripheral portion or the background.

The third step includes, for example, a step of driving the rocking means according to the motion of the central portion or the subject when the magnitude of the motion of the peripheral portion or the background is smaller than the first predetermined value.

The third step also includes, for example, a step of driving the rocking means according to the motion of the image at the bottom of the screen, or with motion unrelated to the image, when the magnitude of the motion of the central portion or the subject is smaller than the second predetermined value.
A second simulated experience device according to the present invention comprises a display device for displaying video; video signal supply means for supplying a video signal to the display device; motion detecting means for detecting motion of the video from the video signal supplied to the display device; and pseudo effect means for giving a stimulus to the user. The pseudo effect means comprises means for giving the user a stimulus whose strength corresponds to the motion of the video detected by the motion detecting means, and means for gradually weakening the stimulus when it has remained at a predetermined maximum strength for a predetermined time or more.

The pseudo effect means comprises, for example, ride means arranged in front of the display device on which the user is seated; rocking means for rocking the ride means to give a stimulus to the user; and swing control means for driving the rocking means. The swing control means includes means for driving the rocking means according to the motion of the video detected by the motion detecting means, and means for gradually returning the rocking means to its center position when the rocking amount of the rocking means has remained at its maximum for a predetermined time or more.

The swing control means also includes, for example, means for swinging the ride means slightly in the direction opposite to the motion of the video, before swinging it in the direction corresponding to that motion, when the motion of the video detected by the motion detecting means increases sharply.
A second simulated experience method according to the present invention comprises a first step of supplying a video signal to a display device; a second step of detecting motion of the video from the video signal supplied to the display device; and a third step of giving a stimulus to the user. The third step comprises a step of giving the user a stimulus whose strength corresponds to the motion of the video detected in the second step, and a step of gradually weakening the stimulus when it has remained at a predetermined maximum strength for a predetermined time or more.

Ride means for seating the user may be provided in front of the display device, together with rocking means for rocking the ride means to give a stimulus to the user; in that case, the third step may comprise a step of driving the rocking means according to the motion of the video detected in the second step, and a step of gradually returning the rocking means to its center position when the rocking amount of the rocking means has remained at its maximum for a predetermined time or more.

The third step includes, for example, a step of rocking the ride means slightly in the direction opposite to the motion of the video, before rocking it in the direction corresponding to that motion, when the motion of the video detected in the second step increases sharply.
<Brief Description of the Drawings>

FIG. 1 is a block diagram showing the configuration of the simulated experience device.

FIG. 2 is a block diagram showing the configuration of the swing control means in FIG. 1.

FIG. 3 is a flowchart showing the operation of the simulated experience device.

FIG. 4 is a flowchart showing the operation of the operation signal generation unit 69 in FIG. 2.

FIG. 5 is a block diagram showing another example of the simulated experience device.

FIG. 6 is a block diagram showing still another example of the simulated experience device.

<Best Mode for Carrying Out the Invention>

Hereinafter, embodiments of the present invention will be described with reference to the drawings.

FIG. 1 shows the configuration of the simulated experience device.
In FIG. 1, reference numeral 1 denotes a video source, such as a VTR, CD-ROM, TV broadcast, or TV game, serving as video signal supply means; 2 denotes video capturing means that captures the video signal from the video source 1 in units of at least one frame; 3 denotes display means comprising a CRT, a liquid crystal display, or the like; and 4 denotes display control means for displaying the video captured by the video capturing means 2 on the display means 3.

Reference numeral 5 denotes motion vector detecting means for detecting motion vectors from the video captured by the video capturing means 2; 6 denotes swing control means for generating a motion signal according to the motion vectors detected by the motion vector detecting means 5; 7 denotes ride means having a seat 10 on which the user 9 rides, which gives the user 9 a physical sensation by, for example, tilting the seat 10 forward, backward, left, and right from its center position; and 8 denotes rocking means for rocking the ride means 7 according to the motion signal generated by the swing control means 6.

The motion vector detecting means 5 divides one screen of the display means 3 into a plurality of small areas and detects a motion vector for each small area.
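The text does not specify how the motion vector of each small area is obtained; one common way to do it, shown here only as an assumed example, is exhaustive block matching between two consecutive grayscale frames.

```python
import numpy as np

def block_motion_vectors(prev_frame, cur_frame, block=16, search=8):
    """Estimate one (dx, dy) motion vector per small area by exhaustive block
    matching between two grayscale frames (a minimal illustration)."""
    h, w = prev_frame.shape
    rows, cols = h // block, w // block
    vectors = np.zeros((rows, cols, 2))
    for r in range(rows):
        for c in range(cols):
            y, x = r * block, c * block
            cur = cur_frame[y:y + block, x:x + block].astype(float)
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        ref = prev_frame[yy:yy + block, xx:xx + block].astype(float)
                        err = np.abs(cur - ref).sum()   # sum of absolute differences
                        if err < best_err:
                            best_err, best = err, (dx, dy)
            vectors[r, c] = best
    return vectors
```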
FIG. 2 shows the configuration of the swing control means 6.

In FIG. 2, reference numeral 61 denotes a classification unit that classifies the motion vectors detected for each small area by the motion vector detecting means 5 into the central portion of the screen displayed on the display means 3, the peripheral portion surrounding the central portion, and the lower portion of the screen; 62 denotes a peripheral average motion vector calculation unit that calculates the average of the motion vectors of the peripheral portion classified by the classification unit 61; 63 denotes a central average motion vector calculation unit that calculates the average of the motion vectors of the central portion classified by the classification unit 61; and 64 denotes a lower average motion vector calculation unit that calculates the average of the motion vectors of the lower portion classified by the classification unit 61.

Reference numeral 65 denotes a peripheral motion determination unit that determines whether the average value calculated by the peripheral average motion vector calculation unit 62 is smaller than a first predetermined value; 66 denotes a central motion determination unit that determines whether the average value calculated by the central average motion vector calculation unit 63 is smaller than a second predetermined value; 67 denotes a lower motion determination unit that determines whether the average value calculated by the lower average motion vector calculation unit 64 is smaller than a third predetermined value; and 68 denotes irrelevant motion generating means that generates motion unrelated to the motion of the video when the motion of the video is small overall.

Reference numeral 69 denotes an operation signal generation unit that generates a motion signal based on the average values calculated by the average motion vector calculation units 62, 63, and 64, or on the motion unrelated to the video generated by the irrelevant motion generating means 68.
The operation of the swing control means 6 will now be described. First, the classification unit 61 classifies the motion vectors detected for each small area by the motion vector detecting means 5 into the central portion of the screen, the peripheral portion surrounding the central portion, and the lower portion of the screen.

The peripheral average motion vector calculation unit 62 calculates the average value of the motion vectors of the peripheral portion. The central average motion vector calculation unit 63 calculates the average value of the motion vectors of the central portion. The lower average motion vector calculation unit 64 calculates the average value of the motion vectors of the lower portion.

The peripheral motion determination unit 65 determines whether the average motion vector of the peripheral portion calculated by the peripheral average motion vector calculation unit 62 is smaller than the first predetermined value. If it is smaller than the first predetermined value, the peripheral motion determination unit 65 issues a determination instruction to the central motion determination unit 66. If it is equal to or larger than the first predetermined value, the peripheral motion determination unit 65 sends the average motion vector of the peripheral portion to the operation signal generation unit 69, and the operation signal generation unit 69 generates a motion signal based on it.

When the central motion determination unit 66 receives the determination instruction from the peripheral motion determination unit 65, it determines whether the average motion vector of the central portion calculated by the central average motion vector calculation unit 63 is smaller than the second predetermined value. If it is smaller than the second predetermined value, the central motion determination unit 66 issues a determination instruction to the lower motion determination unit 67.

If the average motion vector of the central portion calculated by the central average motion vector calculation unit 63 is equal to or larger than the second predetermined value, the central motion determination unit 66 sends it to the operation signal generation unit 69, and the operation signal generation unit 69 generates a motion signal based on the average motion vector of the central portion.

When the lower motion determination unit 67 receives the determination instruction from the central motion determination unit 66, it determines whether the average motion vector of the lower portion of the screen calculated by the lower average motion vector calculation unit 64 is smaller than the third predetermined value. If it is smaller than the third predetermined value, the lower motion determination unit 67 issues an operation instruction to the irrelevant motion generation unit 68.

If the average motion vector of the lower portion of the screen calculated by the lower average motion vector calculation unit 64 is equal to or larger than the third predetermined value, the lower motion determination unit 67 sends it to the operation signal generation unit 69, and the operation signal generation unit 69 generates a motion signal based on the average motion vector of the lower portion of the screen.

When the irrelevant motion generation unit 68 receives the operation instruction from the lower motion determination unit 67, it generates a motion vector of motion unrelated to the video and sends it to the operation signal generation unit 69, which generates a motion signal based on that motion vector.
Reference numeral 11 denotes effect control means for controlling the drive of the effect devices according to the motion signal from the swing control means 6; 12 denotes audio output means for outputting the sound accompanying the video from the video source 1 to a speaker 13; 14 denotes blower means that controls airflow toward the user 9 riding on the ride means 7 according to the control signal from the effect control means 11, that is, according to the motion of the video, to further enhance the sense of realism; 15 denotes sound image control means for controlling the sound image of the audio output from the speaker 13 according to the control signal from the effect control means 11; and 16 denotes light means that irradiates the user with light according to the control signal from the effect control means 11 to enhance the sense of realism.

Since the swing control means 6 gives the user 9 a sense of realism, and stimuli that enhance it, through the ride means 7, the speaker 13, the blower means 14, and the light means 16 according to the motion vectors detected by the motion vector detecting means 5, the swing control means 6, the ride means 7, the rocking means 8, the effect control means 11, the speaker 13, the blower means 14, the sound image control means 15, and the light means 16 correspond to the pseudo effect means of the present invention.
図 3は、 擬似体験装置の動作を示している。  Figure 3 shows the operation of the virtual experience device.
まず、 映像取込手段 2が、 映像供給源 1から映像を取り込む (ステップ S 1 ) 。 動きベク トル検出手段 5は、 ステップ S 1で取込んだ映像から、 小領域毎に動き ベク トルを検出する (ステップ S 2 ) 。 揺動制御手段 6がステップ S 2で検出さ れた小領域毎の動きべク トルを取込み、 分類部 6 1が小領域毎の動きべク トルを、 映像の中央部、 中央部の周囲部及び画面の下部に分類する (ステップ S 3 ) 。 周囲部の平均動きべク トルを、 周囲部平均動きべク トル算出部 6 2が算出する (ステップ S 4 ) 。 中央部の平均動きベク トルを、 中央部平均動きべク トル算出 部 6 3が算出する (ステップ S 5 ) 。 画面下部の平均動きベク トルを下部平均動 きベク トル算出部 6 4が算出する (ステップ S 6 ) 。 First, the image capturing means 2 captures an image from the image supply source 1 (step S 1). The motion vector detecting means 5 detects a motion vector for each small area from the video captured in step S1 (step S2). Oscillation control means 6 detects in step S2 The motion vector for each of the small areas is fetched, and the classification unit 61 classifies the motion vector for each of the small areas into the center of the image, the periphery of the center, and the bottom of the screen (step S3). . The average motion vector of the surrounding area is calculated by the average motion vector calculating section 62 of the surrounding area (step S4). The central portion average motion vector is calculated by the central portion average motion vector calculation section 63 (step S5). The average motion vector at the bottom of the screen is calculated by the lower average motion vector calculation unit 64 (step S6).
周囲動き判定部 6 5が、 周囲部の平均動きべク トルが第 1の所定値より小さい かどうかを判定する (ステップ S 7 ) 。 周囲部の平均動きベク トルが第 1の所定 値以上であれば、 ステップ S 8へ移行する。 ステップ S 8では、 動作信号生成部 6 9が、 周囲部の平均動きべク トルにら基づいてモーション信号を生成するとと もに、 揺動手段 8がこのモーション信号に基づいてライ ド手段 7を揺動させる。 上記ステップ S 7において、 周囲部の平均動きべク トルが第 1の所定値より小 さいと判定されたときには、 中央部の平均動きべク トルが第 2の所定値より小さ いかどうかを中央動き判定部 6 6が判定する (ステップ S 9 ) 。  The peripheral motion determining section 65 determines whether the average motion vector of the peripheral portion is smaller than a first predetermined value (step S7). If the average movement vector of the surrounding area is equal to or larger than the first predetermined value, the process proceeds to step S8. In step S8, the motion signal generation unit 69 generates a motion signal based on the average motion vector of the surrounding area, and the swing unit 8 controls the ride unit 7 based on the motion signal. Rock it. If it is determined in step S7 that the average motion vector of the peripheral portion is smaller than the first predetermined value, it is determined whether the average motion vector of the central portion is smaller than the second predetermined value. The determination unit 66 determines (step S9).
中央部の平均動きベク トルが第 2の所定値以上である場合には、 ステップ S 1 0へ移行する。 ステップ S 1 0では、 動作信号生成部 6 9が中央の平均動きべク トルに基づいてモーション信号を生成するとともに、 揺動手段 8がこのモ一ショ ン信号に基づいて揺動手段 8がライド手段 7を揺動させる。  If the average motion vector at the center is equal to or greater than the second predetermined value, the process proceeds to step S10. In step S10, the motion signal generator 69 generates a motion signal based on the average motion vector in the center, and the rocking means 8 rides on the rocking means 8 based on the motion signal. Swing means 7
ステップ S 9において、 中央部の平均動きべク トルが第 2の所定値より小さい と判定されたときには、 画面下部の動きベク トルが第 3の所定値より小さいかど うかを下部動き判定部 6 7が判定する (ステップ S 1 1 ) 。  If it is determined in step S9 that the average motion vector at the center is smaller than the second predetermined value, it is determined whether the motion vector at the bottom of the screen is smaller than the third predetermined value. Is determined (step S11).
画面下部の動きべク トルが第 3の所定値以上である場合には、 ステップ S 1 2 へ移行する。 ステップ S 1 2では、 動作信号生成部 6 9が画面下部の平均動きべ ク トルに基づいてモーション信号を生成するとともに、 揺動手段 8がこのモ一シ ヨン信号に基づいて揺動手段 8がライド手段 7を揺動させる。  If the motion vector at the bottom of the screen is equal to or greater than the third predetermined value, the process proceeds to step S12. In step S12, the motion signal generator 69 generates a motion signal based on the average motion vector at the bottom of the screen, and the rocking means 8 turns the rocking means 8 based on the motion signal. Swing the ride means 7.
ステップ S 1 1において、 画面下部の平均動きべク トルが第 3の所定値より小 さいと判定されたときには、 映像に無関係に定期的なゆれを揺動手段 8を介して ライド手段 7に与えるための動きべク トルを無関係動き生成部 6 8が生成する (ステップ S 1 3 ) 。 この場合には、 無関係動き生成部 6 8で生成した動きべク トルに応じて動作信号生成部 6 9がモ一ション信号を生成し、 このモーション信 号に基づいて揺動手段 8がライ ド手段 7を揺動させる。 In step S11, when it is determined that the average motion vector at the bottom of the screen is smaller than the third predetermined value, the periodic shaking is performed via the rocking means 8 regardless of the image. The motion vector to be given to the ride means 7 is generated by the unrelated motion generator 68 (step S13). In this case, the motion signal generator 69 generates a motion signal according to the motion vector generated by the irrelevant motion generator 68, and the rocking means 8 rides based on the motion signal. Swing means 7
In steps S8, S10, S12, and S13, the motion signal generated by the motion signal generation unit 69 is also sent to the effect control means 11. To heighten the sense of presence given to the experiencing person riding on the ride means 7, the effect control means 11 controls the blowing means 14, the sound image control means 15, and the lighting means 16 in accordance with the motion signal.
FIG. 4 shows the processing procedure of the motion signal generation unit 69 of the swing control means 6 in steps S8, S10, S12, and S13. In principle, the motion signal generation unit 69 operates the swinging means 8 so as to swing the ride means 7 in accordance with the direction and magnitude of the average motion vector input to it (step S27). For example, if the image tilts to the right, the ride means 7 is tilted to the right.
However, as described below, the motion signal generation unit 69 performs exceptional control when the motion of the image changes suddenly, when the tilt of the ride means 7 has remained the same for a predetermined time, or when the tilt of the ride means 7 in the front-rear or left-right direction is at the limit position (maximum tilt position).
In step S20, the motion signal generation unit 69 determines, based on the average motion vector input to it, whether the motion of the image has increased suddenly. If the motion of the image is judged to have increased suddenly, the process proceeds to step S21; otherwise, it proceeds to step S23.
In step S21, the swinging means 8 is controlled so as to move the ride means 7 slightly in the direction opposite to that of the average motion vector input to the motion signal generation unit 69. Here, "slightly" means, for example, a magnitude of 1/10 of the magnitude of the average motion vector input to the motion signal generation unit 69.
Next, in step S22, the swinging means 8 is operated so as to swing the ride means 7 in the direction of the average motion vector input to the motion signal generation unit 69, with a magnitude obtained by adding the small movement of step S21 to the magnitude of that average motion vector.
That is, when the motion of the image changes suddenly, the ride means 7 is first swung in the direction opposite to the motion of the image and then swung in the same direction as the motion of the image, so as to give the experiencing person a stronger sense of surprise and presence.
In step S23, it is determined whether the ride means 7 is tilted from the central position toward the front, rear, left, or right. If it is tilted, the process proceeds to step S24; if not, it proceeds to step S27.
In step S24, it is determined whether the tilt of the ride means 7 has remained the same for a predetermined time. If it has, the process proceeds to step S26; if not, it proceeds to step S25. In step S26, because the experiencing person 9 has usually become accustomed to the sensation when the ride means 7 keeps the same tilted state, the swinging means 8 is controlled so as to gradually return the ride means 7 to the central position.
In step S25, it is determined whether the tilt of the ride means 7 in the front-rear or left-right direction is at the limit position (maximum tilt position). If it is at the limit position, the process proceeds to step S26; if not, it proceeds to step S27.
In step S26, because the ride means 7 cannot be moved any further in response to additional image motion when it is at the limit position, the swinging means 8 is controlled so as to gradually return the ride means 7 to the central position.
In step S27, the swinging means 8 is operated so as to swing the ride means 7 in accordance with the direction and magnitude of the average motion vector input to the motion signal generation unit 69.

By repeatedly executing steps S1 to S13 and steps S20 to S27 each time an image is captured in step S1, the experiencing person 9 can be given a physical sensation that follows the image.
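Read as a whole, steps S20 through S27 post-process the selected average motion vector before it becomes a motion signal: a brief counter-movement when the motion jumps suddenly, and a gradual drift back to the neutral position when the tilt has been held too long or has reached the limit position. The sketch below is one possible reading of that logic; the 1/10 counter-swing factor comes from step S21, while the sudden-change test, the hold limit, and the return rate are assumptions.

```python
import numpy as np

COUNTER_FACTOR = 0.1   # "slightly" = 1/10 of the vector magnitude (step S21)
MAX_TILT = 10.0        # limit position (maximum tilt) of the ride means, assumed units
HOLD_LIMIT = 90        # frames a tilt may persist before returning to center (assumed)
RETURN_RATE = 0.05     # fraction of the current tilt removed per frame (assumed)

class MotionSignalGenerator:
    """Rough model of the motion signal generation unit 69 (steps S20 to S27)."""

    def __init__(self):
        self.tilt = np.zeros(2)   # current front-rear / left-right tilt of the ride means
        self.held_frames = 0      # how long the ride has stayed tilted
        self.prev_norm = 0.0      # magnitude of the previous average motion vector

    def step(self, v):
        """Turn one average motion vector (2-element array) into swing commands."""
        v = np.asarray(v, dtype=float)
        norm = float(np.linalg.norm(v))
        commands = []

        # Step S20: treat a large jump in magnitude as a sudden change (criterion assumed).
        if norm > 2.0 * self.prev_norm and norm > 1.0:
            commands.append(-COUNTER_FACTOR * v)      # step S21: brief counter-swing
            commands.append(v + COUNTER_FACTOR * v)   # step S22: then follow the motion
        elif np.any(self.tilt != 0.0):                # step S23: ride is tilted
            at_limit = np.max(np.abs(self.tilt)) >= MAX_TILT       # step S25
            if self.held_frames >= HOLD_LIMIT or at_limit:         # steps S24 / S26
                commands.append(-RETURN_RATE * self.tilt)          # drift back to center
            else:
                commands.append(v)                                 # step S27
        else:
            commands.append(v)                                     # step S27

        for c in commands:
            self.tilt = np.clip(self.tilt + c, -MAX_TILT, MAX_TILT)
        self.held_frames = self.held_frames + 1 if np.any(self.tilt != 0.0) else 0
        self.prev_norm = norm
        return commands
```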
In the embodiment described above, the classification unit 61 classifies the motion vectors extracted by the motion vector detection means 5 into the central portion, the portion surrounding the central portion, and the lower portion of the screen displayed on the display means 3. The present invention is not limited to this, however, and the subject and the background in the image may be classified instead. Even when the classification unit 61 is configured to classify the subject and the background of the image, the same configuration as described above can be used by reading the central portion in the above embodiment as the subject and the peripheral portion as the background.
In the embodiment described above, a two-dimensional image is displayed on the display means 3. The present invention is not limited to this, and may also be applied to a configuration in which a video signal obtained by converting a two-dimensional image into a three-dimensional image is displayed on the stereoscopic display means 18. This embodiment is shown in FIG. 5. Components given the same reference numerals as in FIG. 1 have the same functions, and their description is omitted.
Reference numeral 17 denotes 2D/3D conversion means that generates, from the image captured by the image capturing means 2, a right-eye image and a left-eye image having a parallax amount corresponding to the motion vector extracted by the motion vector detection means 5, and 18 denotes stereoscopic display means.
The motion vector detection means 5 is necessarily provided because a motion vector is used when generating a three-dimensional image from a two-dimensional image; in this embodiment, the motion vector used for the conversion is also used as a control signal for swinging the ride means 7.
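In the FIG. 5 arrangement the same motion vector therefore serves two purposes: it sets the parallax between the generated left-eye and right-eye images and it drives the ride. The sketch below only illustrates that shared use with a toy proportional mapping from horizontal motion to parallax; it is not the conversion method described in the patent, and the gain values are assumptions.

```python
import numpy as np

PARALLAX_GAIN = 0.5   # pixels of parallax per pixel of horizontal motion (assumed)
SWING_GAIN = 1.0      # motion-signal units per pixel of motion (assumed)

def use_motion_vector(frame, motion_vector):
    """Derive a stereo pair and a swing command from the same motion vector."""
    dx = float(motion_vector[0])
    parallax = int(round(PARALLAX_GAIN * dx))

    # Toy stand-in for the 2D/3D conversion means 17: shift the frame horizontally
    # so that the left-eye and right-eye images differ by the computed parallax.
    left_eye = np.roll(frame, parallax // 2, axis=1)
    right_eye = np.roll(frame, -(parallax - parallax // 2), axis=1)

    # The same vector is handed to the swing control side as the motion signal.
    motion_signal = SWING_GAIN * np.asarray(motion_vector, dtype=float)
    return left_eye, right_eye, motion_signal
```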
Further, in addition to the configuration of the above embodiment in which a video signal obtained by converting a two-dimensional image into a three-dimensional image is displayed, a configuration may be adopted in which the image supply source 1 outputs a three-dimensional image, that is, a left-eye image L and a right-eye image R, and the stereoscopic display means 18 displays an image based on these signals.
Such a configuration will be described with reference to FIG. 6. As in FIG. 5, components given the same reference numerals as in FIG. 1 have the same functions, and their description is omitted.
Reference numeral 19 denotes subject specifying means that takes in the left-eye image L and the right-eye image R from the image supply source 1, distinguishes the subject from the background, and specifies the subject.

In this case, the image supply source 1 supplies a three-dimensional image, specifically a left-eye image L and a right-eye image R, and the display control means 4 causes the stereoscopic display means 18 to perform stereoscopic display based on these image signals. The image capturing means 2 takes in the left-eye image L and the right-eye image R, and the motion vector detection means 5 detects a motion vector from the captured images while taking into account the result of the specification by the subject specifying means 19; the motion vector is then sent to the swing control means 6 to drive the ride means 7.
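Since the subject specifying means 19 has both views available, one plausible way to separate the subject from the background is to compare the left-eye and right-eye images block by block and treat high-disparity blocks as the nearer subject. The sketch below illustrates that idea only; the block-matching method, block size, and disparity threshold are assumptions and are not taken from the patent.

```python
import numpy as np

def block_disparity(left, right, y, x, block=16, max_shift=16):
    """Estimate the horizontal disparity of one block by SAD matching (assumed method)."""
    ref = left[y:y + block, x:x + block].astype(float)
    best_shift, best_cost = 0, np.inf
    for s in range(max_shift + 1):
        if x - s < 0:
            break
        cand = right[y:y + block, x - s:x - s + block].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

def subject_mask(left, right, block=16, threshold=4):
    """Mark blocks whose disparity exceeds a threshold as 'subject' (grayscale inputs)."""
    h, w = left.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            d = block_disparity(left, right, by * block, bx * block, block)
            mask[by, bx] = d > threshold
    return mask
```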

Claims

1. A simulated experience device comprising: a display device for displaying an image; image signal supply means for supplying an image signal to the display device; motion detection means for detecting motion of the image based on the image signal supplied to the display device; and pseudo-effect means for giving a stimulus to an experiencing person, wherein the pseudo-effect means classifies the motion of the image detected by the motion detection means into a central portion of an image display area of the display device and a peripheral portion thereof, or into a subject and a background, and gives the stimulus to the experiencing person in accordance with the motion of the image in the peripheral portion or the background.
2. The simulated experience device according to claim 1, wherein the pseudo-effect means comprises: ride means arranged in front of the display device, on which the experiencing person is placed; swinging means for swinging the ride means to give a stimulus to the experiencing person; and swing control means for classifying the motion of the image detected by the motion detection means into the central portion of the image display area of the display device and the peripheral portion thereof, or into the subject and the background, and for drive-controlling the swinging means in accordance with the motion of the image in the peripheral portion or the background.
3. The simulated experience device according to claim 2, wherein the swing control means comprises means for driving the swinging means in accordance with the motion of the image in the central portion or of the subject when the magnitude of the motion of the image in the peripheral portion or the background is smaller than a first predetermined value.
4. The simulated experience device according to claim 3, wherein the swing control means comprises means for driving the swinging means in accordance with the motion of the image in the lower portion of the screen, or with a motion unrelated to the image, when the magnitude of the motion of the image in the central portion or of the subject is smaller than a second predetermined value.
5. A simulated experience method comprising: a first step of supplying an image signal to a display device; a second step of detecting motion of the image based on the image signal supplied to the display device; and a third step of giving a stimulus to an experiencing person, wherein the third step classifies the motion of the image detected in the second step into a central portion of an image display area of the display device and a peripheral portion thereof, or into a subject and a background, and gives the stimulus to the experiencing person in accordance with the motion of the image in the peripheral portion or the background.
6. The simulated experience method according to claim 5, wherein ride means on which the experiencing person is placed is provided in front of the display device, and swinging means for swinging the ride means to give a stimulus to the experiencing person is provided, and wherein the third step classifies the motion of the image detected in the second step into the central portion of the image display area of the display device and the peripheral portion thereof, or into the subject and the background, and drive-controls the swinging means in accordance with the motion of the image in the peripheral portion or the background.
7. The simulated experience method according to claim 6, wherein the third step comprises a step of driving the swinging means in accordance with the motion of the image in the central portion or of the subject when the magnitude of the motion of the image in the peripheral portion or the background is smaller than a first predetermined value.
8. The simulated experience method according to claim 7, wherein the third step comprises a step of driving the swinging means in accordance with the motion of the image in the lower portion of the screen, or with a motion unrelated to the image, when the magnitude of the motion of the image in the central portion or of the subject is smaller than a second predetermined value.
9. A simulated experience device comprising: a display device for displaying an image; image signal supply means for supplying an image signal to the display device; motion detection means for detecting motion of the image from the image signal supplied to the display device; and pseudo-effect means for giving a stimulus to an experiencing person, wherein the pseudo-effect means comprises: means for giving the experiencing person a stimulus of an intensity corresponding to the motion of the image detected by the motion detection means; and means for gradually weakening the intensity of the stimulus when a state in which the intensity of the stimulus is at a predetermined maximum continues for a predetermined time or more.
10. The simulated experience device according to claim 9, wherein the pseudo-effect means comprises: ride means arranged in front of the display device, on which the experiencing person is placed; swinging means for swinging the ride means to give a stimulus to the experiencing person; and swing control means for drive-controlling the swinging means, and wherein the swing control means comprises: means for driving the swinging means in accordance with the motion of the image detected by the motion detection means; and means for gradually returning the swinging means to the center when a state in which the swing amount of the swinging means is at a predetermined maximum continues for a predetermined time or more.
11. The simulated experience device according to claim 10, wherein the swing control means comprises means for slightly swinging the ride means in the direction opposite to the direction of the motion of the image, before swinging the ride means in the direction corresponding to the motion of the image, when the motion of the image detected by the motion detection means increases suddenly.
12. A simulated experience method comprising: a first step of supplying an image signal to a display device; a second step of detecting motion of the image from the image signal supplied to the display device; and a third step of giving a stimulus to an experiencing person, wherein the third step comprises: a step of giving the experiencing person a stimulus of an intensity corresponding to the motion of the image detected in the second step; and a step of gradually weakening the intensity of the stimulus when a state in which the intensity of the stimulus is at a predetermined maximum continues for a predetermined time or more.
13. The simulated experience method according to claim 12, wherein ride means on which the experiencing person is placed is provided in front of the display device, and swinging means for swinging the ride means to give a stimulus to the experiencing person is provided, and wherein the third step comprises: a step of driving the swinging means in accordance with the motion of the image detected in the second step; and a step of gradually returning the swinging means to the center when a state in which the swing amount of the swinging means is at a predetermined maximum continues for a predetermined time or more.
14. The simulated experience method according to claim 13, wherein the third step comprises a step of slightly swinging the ride means in the direction opposite to the direction of the motion of the image, before swinging the ride means in the direction corresponding to the motion of the image, when the motion of the image detected in the second step increases suddenly.
PCT/JP2000/008381 1999-11-30 2000-11-28 Experience simulating device and method WO2001041102A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP11/341312 1999-11-30
JP34131299A JP2001154570A (en) 1999-11-30 1999-11-30 Device and method for virtual experience
JP11/371135 1999-12-27
JP37113599A JP2001183968A (en) 1999-12-27 1999-12-27 Pseudo experiencing device and pseudo experiencing method

Publications (1)

Publication Number Publication Date
WO2001041102A1 true WO2001041102A1 (en) 2001-06-07

Family

ID=26576943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2000/008381 WO2001041102A1 (en) 1999-11-30 2000-11-28 Experience simulating device and method

Country Status (1)

Country Link
WO (1) WO2001041102A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0854820A (en) * 1994-08-12 1996-02-27 Sega Enterp Ltd Driving game device and its background picture display method
JPH10277261A (en) * 1998-04-13 1998-10-20 Namco Ltd Image synthesis device and virtual experience device using
JPH11153949A (en) * 1997-11-20 1999-06-08 Sony Corp Body feeling motion device


Similar Documents

Publication Publication Date Title
US11790616B2 (en) Immersive virtual display
US10181212B2 (en) Method and system for reducing motion sickness in virtual reality ride systems
US10062247B2 (en) Vibration generation system, storage medium having stored therein vibration generation program, and vibration generation method
US10449446B2 (en) Sensation induction device, sensation induction system, and sensation induction method
JPH0819662A (en) Game unit with image displaying apparatus
JP2007181569A (en) Game device and its control method
JPWO2020090477A1 (en) VR sickness reduction system, head-mounted display, VR sickness reduction method and program
US10758821B2 (en) Operation input system, operation input device, and game system for adjusting force feedback control
JP2009061161A (en) Program, information storage medium and game system
JP4114822B2 (en) Image generating apparatus and information storage medium
JPH11146978A (en) Three-dimensional game unit, and information recording medium
JPH08131659A (en) Virtual reality generating device
JPH11153949A (en) Body feeling motion device
KR20160099075A (en) Image Simulating System, Apparatus for Controlling Platform and Method for Controlling Platform
JP2001183968A (en) Pseudo experiencing device and pseudo experiencing method
WO2001041102A1 (en) Experience simulating device and method
JP3838173B2 (en) Information processing apparatus and method, recording medium, and program
JP2009061159A (en) Program, information storage medium and game system
JP4212015B2 (en) Image generating apparatus and information storage medium
JP2000331184A (en) Image forming device and information storing medium
KR20180039415A (en) Tiny vibration type motion platform system based on serbo motor
JP2001154570A (en) Device and method for virtual experience
CN110947175A (en) High-simulation in-person three-screen body sense racing car
WO2024166207A1 (en) Vibration control device, vr device, vibration control method, and computer program
JP3631890B2 (en) Electronic game equipment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase