USRE40293E1 - Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data
- Publication number
- USRE40293E1 (U.S. application Ser. No. 11/448,581)
- Authority
- US
- United States
- Prior art keywords
- representation
- text data
- movement
- image
- user
- Prior art date
- Legal status
- Expired - Lifetime, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
Definitions
- the present invention relates to interactive computer systems, and in particular to a method and apparatus for facilitating wireless, full-body interaction between a human participant and a computer-generated graphical environment including textual data.
- Such systems typically require the user to don goggles, through which he or she perceives the virtual environment, as well as sensors that encode the user's gestures as electrical signals. The user reacts naturally to the changing virtual environment, generating signals that the computer interprets to determine the state and progress of the presented environment.
- In order to encode a sufficiently broad spectrum of gestures to facilitate natural interaction, VR systems ordinarily require the user to wear, in addition to the goggles, at least one “data glove” to detect hand and finger movements, and possibly a helmet to detect head movements.
- Full-body systems, which encode movements from numerous anatomical sites to develop a complete computational representation of the user's overall body action, require many more sensors; however, such systems would be capable of projecting the user fully into the virtual environment, providing the user with greater control and a heightened sense of participation ideally suited to interactive simulations.
- the Mandala system can integrate the user's full image within the virtual environment it creates, but requires a chroma-key blue background.
- the system developed by MIT provides a 3-Dimensional spatial video interaction with a user and is directed to detecting and analyzing gestural information of the user to cause predetermined responses from graphically generated objects within the virtual environment (e.g., if a virtual user pets a graphically generated dog within the virtual environment, the dog may wag its tail).
- all of the above-described VR systems employ complex video analysis and tracking techniques to detect and follow the boundaries of the user's image within the virtual environment.
- Applicant has recognized the need for a simple 2-dimensional video-generated real-time interaction system wherein text data is easily introduced into a virtual environment, as viewed on a monitor or on a projection screen.
- the inputted text data is manipulated in the virtual environment by video representation of a human participant (a “virtual user”).
- the human participant or virtual user freely interacts with the inputted text data in the virtual environment as the human participant (and other people watching the monitor or projection screen) absorbs the teachings of the text data as he or she interacts with the text data in the virtual environment.
- a controlling software program instructs an interactive display system.
- the interactive display system includes an electronic camera (e.g., charge-coupled device or “CCD”) which intermittently records (in realtime) a human participant's (a user) image and inputs the video image into a computer.
- the computer creates a digitized version of the inputted image (e.g., a bitmap of pixels) and thereby creates a “virtual user image”.
- the camera is positioned behind a large viewing screen which faces the user and records the user's image through an opening provided within the screen.
- the computer further receives a data record including text data and text controlling parameters, preferably including at least a fall rate value.
- the data record may be supplied to the computer using any of several conventional memory mediums, such as a hard drive memory, disc memory, compact disc, or the data record may only include a text string which is supplied to the computer directly from the Internet through a connected modem, for example, or through the use of a connected keyboard.
- the text may be inputted to the computer directly from a word processing program, such as Corel's WordPerfect®, or Microsoft Word® as a text file or by “copying” selected text from an open text file directly into the present software program.
- upon receiving a captured video image of the user, the software program (see appendix for source code) first parses the text data into lines of text. The program determines which line or lines of text should be displayed on the screen and further determines the destination of each character within any line that is to be displayed on the screen, according to the fall rate assigned to each character. The program then compares the pixel color value of “destination pixels” (pixels located at the destination of each particular character) with a threshold color value. If the color value of the destination pixels is below the threshold value (i.e., lighter), then the particular character will be positioned at a destination according to its prescribed fall rate.
- if, however, the color value of the destination pixels exceeds the threshold value (i.e., darker), the program locates a suitable new destination positioned vertically above the original destination which is lighter in color value than the threshold value.
- once all the characters of the lines of text to be displayed are provided with suitable destinations, the program instructs the computer to display all of the characters on the screen at their respective destinations with the particular captured video frame. The process continues whenever a new image frame is available from the camera.
- the program analyzes each captured image as it is acquired, at a prescribed sampling rate.
- the sampling rate is dependent on the speed of the video card components and drivers, the speed of the computer, and light sensitivity of the digital camera.
- the program keeps track of when a predetermined number of characters (e.g., at least half of them) travels a predetermined distance from the top of the viewing screen (such as the halfway point). If this is the case, the program causes the entire line of text to fade at a prescribed fade rate. When one line fades at the lower portion of the screen, another line appears at the top of the screen, with its characters beginning to fall. It is preferred that at least three lines of text be displayed on the screen at any given time.
- Another aspect of the invention displays each line of text data as a separate color so that viewers may easily discern the letters or characters from different lines of inputted text, allowing the lines of text to be read relatively easily, as the letters fall towards and rest on various darker elements located within the virtual environment, such as the arm of a virtual user and perhaps a “virtual umbrella image” carried by the virtual user.
- each character of an inputted text string is horizontally held so that each character is restricted to vertical movement as it descends on the viewing screen at its prescribed fall rate or ascends with similar upward movement by the virtual user.
- the program generates a shadow zone of dark pixels which are projected onto the viewing screen.
- the shadow zone is aligned with the optical input of the digital camera so that the camera will be protected from the relatively bright light of the projector.
- the computer generated shadow zone helps prevent the images recorded by the camera from becoming “washed out” by the light from the projector shining on the camera's lens.
- FIG. 1 is a schematic illustrating an interactive display system including a projection screen, a pickup camera, a projector, a computer, and a text input, according to the invention.
- FIG. 2 is a perspective view of the inventive display system of FIG. 1, showing the field of view of the camera, according to the invention.
- FIG. 3 is a block diagram of the interactive display system, according to the invention.
- FIG. 4 is an illustration of an exemplary text string to be inputted into the interactive display system, according to the invention.
- FIG. 5 is an illustration of an exemplary data record which includes the text string of FIG. 4 and further includes text-controlling parameters, according to the invention.
- FIG. 6 is a flow diagram showing steps of obtaining and combining video and text data, according to the invention.
- FIG. 7 is a front view of the viewing screen, illustrating an exemplary displayed virtual image including a virtual user holding a virtual umbrella, and further including falling text, according to the invention.
- FIG. 8 is an enlarged view of a portion of the virtual umbrella of FIG. 7 and select letters of the text, according to the invention.
- FIG. 9 is an enlarged partial front view of the viewing screen showing details of a central opening and a computer-generated shadow zone, according to the invention.
- the interactive display system 10 includes a large display screen 12 having a generally central and small opening 14, an image projector 16 positioned to project an image (along projection beam 17) onto the front surface of screen 12, and a camera 18, preferably a mini black and white CMOS-type video camera (alternately a Connectix QuickCam® by Logitech® with appropriate platform drivers may be used).
- the camera 18 is positioned behind screen 12 with its image-receiving lens aligned with opening 14, so that a person (a user) standing in front of screen 12 (in the area represented by reference numeral 15, in FIGS. 1 and 2) and within the field of view 19 of the camera 18 is recorded by the camera.
- the camera 18 is preferably sufficiently light sensitive to “capture” repetitive images of the user in apparent realtime (i.e., generating a sufficient number of consecutive images so that when the sampled images are eventually projected, the motions of the user appear smooth and continuous). It is preferred, although not shown, that a rear surface or wall be located opposite the screen 12 , behind the user, and that this rear surface be white and illuminated. Although the rear surface is not necessary for the present system to operate, it does allow the present program to quickly and easily distinguish the virtual user from the background.
- the present interactive display system 10 further includes a computer 20 having a memory 21 and a text input 22 , as shown in FIG. 1 .
- Computer 20 may be conventional, and preferably includes a fast central processing unit (CPU), such as Intel's Pentium II® or higher.
- the video card of the computer preferably includes enough VRAM to display 640×480 resolution at high color (16 bit). If a CMOS type camera 18 is being used behind the large screen 12, then the computer 20 further requires a video capture card 24 to create a bitmap of the analog image of the CMOS camera 18. If a Connectix®-type camera is used (or equivalent), no capture card 24 is necessary because the Connectix®-type camera uses software to digitize a recorded image.
- the computer 20 preferably uses Microsoft's Windows 98® or Microsoft's Windows NT® operating systems.
- the video capture card 24 is used to receive analog image data from the camera 18 and digitize the image by generating a bitmap (the present program can control the rate of sampling). During each sampling, as described below, the video card 24 captures a frame of image data from camera 18 as the camera records the user and digitizes the frame into an array of picture elements, or “pixels,” called a bitmap. Depending on the format, each pixel within a bitmap is identified by an RGB color code (a three-number code which identifies the exact color of the pixel). The bitmap representing the received frame of analog image data is stored in memory for analysis and manipulation by the software program.
- the computer receives a data record including text data and text-controlling parameters from text input 22 .
- the present software program manipulates both the digitized image of the received frame of analog image data and the text data from text input 22 and combines the video and text data onto the projection screen 12 in a generally smooth and continuous manner.
- Text input 22 may include any of a variety of conventional input devices, such as a hard drive memory, disc memory, compact disc, or a conventional Internet connection (such as through a connected modem), or through the use of a connected keyboard.
- the text may be supplied to the computer directly from a word processing program, such as Corel's WordPerfect®, or Microsoft Word® as a text file or by copying selected text from an open text file directly into the program.
- Text data is preferably inputted into the memory 21 of computer 20 for later retrieval by the present program and displayed in accordance with the invention, or alternately, the text is inputted in realtime for immediate projection onto screen 12 .
- An example of text data 30 for use with the present invention is shown in FIG. 4 (shown displayed on a computer monitor 32): “When it rains, it pours. When it stops, it's wet.”
- This exemplary text data 30 is inputted into the computer 20 by any conventional means and is parsed into appropriately-sized lines 31 (depending on the size of the projection screen 12 ). If the text data 30 exceeds a certain number of characters 36 , the program will parse the text data 30 into separate lines of text 31 so that each line may fit onto the screen 12 (the wider the screen 12 , the longer each line of text 31 ).
- the program will form a line data record to keep track of which lines of text 31 are on the screen 12 , are fading from the screen 12 , and should be introduced at the top of the screen.
- Each line of text 31 located on the line data record will be introduced in a prearranged order and every line of text 31 which is being projected on the screen 12 will be electronically marked or flagged in the line data record. In this manner, as one line of text 31 fades from the screen 12 , its electronic mark is removed from the line data record and a subsequent line of text 31 will be introduced onto the screen 12 .
- Each line located in the line data record is also stored in a character data record 34 , shown in FIG. 5 .
- the character data record 34 may further include various parameters which will control how each character 36 of each line of text 31 moves and appears within the virtual environment, as viewed on screen 12 .
- the parameters may vary depending on the desired effect of the characters 36 of each line of text 31 within the virtual environment and depending on the complexity and speed of the interaction system.
- the controlling parameters may include a fall rate value which controls the speed at which each character 36 of each line of text 31 appears to fall on the screen 12.
- the fall rate value may be defined in a variety of ways, such as the number of pixels traveled in one second or between captured frames of video.
- Another character parameter may be a fall delay value which determines the length of time each character 36 will remain at the top of the screen 12 prior to falling.
- a color code may be assigned to each character 36 or all of the characters within each line of text 31 .
- Another parameter for each character 36 is a horizontal hold condition which, when set, restricts each character to vertical movement when projected within the virtual environment.
- a fade parameter controls whether a character 36 fades when reaching the bottom of the screen 12 within the virtual environment.
- An edge-detect parameter determines whether a character will recognize the virtual image of a user within the virtual environment (i.e., determines if a virtual user interacts with the falling characters).
- the distance the characters 36 “fall” along the screen 12 within the virtual environment may also be controlled using a maximum fall distance parameter.
- each character 36 may be controlled in a variety of ways independently of the other characters 36 within a line of text 31 .
- Although a variety of parameters are possible, as exemplified above, it is preferred that most, if not all, include default values which simplify the setup and use of the present program. It is preferred that an operator merely be required to input the text data 30 into computer 20 without further setup.
- the present program first receives input text data 30 at step 100 .
- the text data 30 is first divided up into lines of text 31 at step 110 .
- the number of characters 36 in each line of text 31 depends, in part, on the dimensions of the viewing screen 12 .
- the lines of text 31 may be divided into any number of characters, following, for example, the length of each line of a poem.
- a fall rate value (and perhaps other variables) is assigned to each character 36 of each line of text 31 at step 120 . As described above, it is preferred that all the characters 36 of any given line of text 31 have the same fall rate.
- a frame of video from the camera is received and digitized by the video capture card 24 , generating a bitmap wherein an RGB color value is assigned to each pixel defined in the captured image frame.
- the program enters into a callback mode whereby every time a new frame of video data is received, select pixels of the bitmap are analyzed, as described below. Since only select pixels are analyzed by the present program, the speed of the program is much faster than prior art VR systems which rely on analyzing each pixel in the bitmap for each frame of image data received.
- the program compares the read color value of the destination pixels with a threshold color (or contrast) value.
- if the destination pixel color value is greater than the threshold color value, the destination of the selected character 36 is raised by a predetermined number of pixels, preferably a single pixel, at step 190.
- the pixel color value at the newly determined destination of the selected character 36 is again read at step 160 , its value compared again with the threshold value at step 170 and a decision is again made at step 180 until the read pixel value of the destination pixel is less than or equal to the threshold color value.
- the determined destination of the selected character 36 is stored in memory at step 200 .
- the captured video image frame together with each character 36 of the “to be displayed” lines of text 31 is displayed onto screen 12 (at step 230), with each character 36 located at each respective destination, as determined by the program and as stored at step 200.
- the program then returns to step 130 to acquire another image frame of video from camera 18 and repeats the above described steps, as shown in the flow diagram of FIG. 6 .
- In FIG. 7, an exemplary projected image on screen 12 is shown to have a shoulder 240 and an arm 242 of a virtual user 244.
- the actual user (not shown) and the virtual user 244 are holding an umbrella 246, which appears on the screen 12, in this example, as a virtual umbrella 246.
- the program has displayed the video image of the virtual user (shown in part) 244 holding the virtual umbrella 246 and the inputted lines of text 31 : “When it rains, it pours”.
- a new line of text 31 “When it rains, it pours” has just been introduced at the top portion of the screen 12 , generally indicated by arrow 248 of FIG. 7 .
- the characters “h” and “e” of the word “When” have already started to fall at their quicker fall rates.
- Various characters 36 from previously introduced lines of text 31 (in this example, all the lines of text 31 shown on the screen 12 are the same: “When it rains, it pours”) have been drifting downwardly from the top portion 248 of the screen 12 toward the bottom portion of the screen, generally indicated by arrow 250 in FIG. 7.
- As the computer 20 determines the destination of each character 36, it quickly reads the pixel color values at that destination of the captured image and determines whether a virtual obstacle is already located there. If no obstacle is there (i.e., the read pixels are less than or equal to a threshold value), then the particular character 36 appears on the screen 12 at the destination and, as subsequent images are displayed on screen 12, the particular character 36 will appear to slowly fall, like snow. If, however, a virtual obstacle is determined to be located at the destination of a particular character 36, then a new destination located higher on the screen 12 is selected so that the character 36 appears to interact with the virtual obstacle located in the virtual environment. This interaction is illustrated in FIG. 8.
- each character 36 of a line of text 31 appears to gradually descend on the screen 12 until the character 36 confronts a dark pixel, signifying a virtual obstacle within the virtual environment.
- Although the present program may be set up so that the characters 36 fall at any fall rate, it is preferred that the characters 36 fall in a manner which is somewhat familiar and pleasing to the user, similar, for example, to the graceful movements of an airborne leaf slowly falling from a tree.
- a user can “catch” the falling leaf and quickly push it back into the wind currents, but the leaf will always fall back down at its leisurely pace.
- By “training” the introduced characters 36 to respond in this manner within the virtual environment, the user is likely to remain comfortable and will be encouraged to further interact and explore within the virtual environment.
- the textual information introduced into the present program may be continuously changed, such as in the case of weather reports, or news stories. Alternately, the textual information may be altered in realtime, perhaps in response to the particular actions of the user.
- a user may be asked questions in a text format with the question appearing on the screen 12 together with the image of the user.
- When the user reads the question on the screen 12, perhaps after capturing the characters 36, he or she can audibly answer the question.
- the response to the question can be heard by the operator of the system, who may then ask another question (again in text format) which relates or corresponds to the user's response to the previous question.
- This type of interaction between the text-input operator and the user may further entertain and teach the user and any viewer of the system.
- Another aspect of the invention causes inputted lines of text 31 to fade (at a prescribed fade rate) when a predetermined number of characters (e.g., at least half of them) reach a predetermined position on the viewing screen (e.g., enter the lower half of the screen).
- In a preferred embodiment, at any given moment, three lines of text 31 are displayed on the viewing screen 12, one line of text 31 being introduced at the top of the screen, a second one falling in the middle of the screen (an “interactive region”), and a third line of text 31 fading out at the lower portion of the screen.
- some of the characters 36 (or letters) of earlier introduced lines of text 31 may remain on the screen, “captured” by virtual obstacles, such as the above-described umbrella 246 of FIG. 7.
- each line of text 31 is displayed as a separate color.
- viewers may easily discern the characters from different lines of text 31 so that the lines of text 31 may be read relatively easily even as the individual characters 36 fall towards and rest on various darker elements located within the virtual environment, such as the arm of a virtual user and perhaps a “virtual umbrella image” carried by the virtual user.
- each character 36 of inputted text data 30 is horizontally held, so that each character 36 is restricted to vertical movement as it descends on the viewing screen 12 at its prescribed fall rate, or ascends with upward movement of a virtual obstacle.
- the present system can also be installed at a store front, in which case a display monitor may be positioned within the store so that it may be viewed by passing pedestrians.
- the appearance of a pedestrian on the monitor will invariably attract the attention of the pedestrian, who will stop and interact with the textual message falling all around his or her image being displayed on the screen.
- the message in this example, would help lure customers inside the store to purchase products.
- the present invention can be setup within an art gallery, exhibit, or museum as an interesting and alluring way to convey any textual information either as a piece of art work, or as a source of information, such as an interactive information kiosk.
- the present software is connected to the Internet through appropriate modems, T1, or DSL communication lines for the purpose of allowing Internet users to introduce text data 30 to a particular interactive display shown over the Internet.
- the software program may be used with a local PC whereby the computer's monitor becomes the viewing screen, and a local “webcam” type camera (or any other appropriate camera) may be used to record the user, who in this case, would be positioned in front of the computer. If a CRT or LCD type monitor is used as the screen 12 , the camera may be positioned on top of (or adjacent to) the monitor.
- In FIG. 9, the shadow zone 13 is shown as a darkened square; however, the shadow zone 13 may take on any useful shape to cover opening 14, and may also be any shade of gray.
- the program includes means to project the shadow zone 13 when each image and lines of text 31 are projected onto screen 12 . Also, the projected position of the shadow zone 13 on screen 12 may be easily moved by the program, using an appropriate input device, such as a mouse or a keyboard (not shown).
- the invention is capable of simultaneously supporting multiple users. These users can occupy the same space in front of a single camera 18 , or images of each user can be inputted to a single computer 20 using separate cameras at physically disparate locations. In the latter case, remote image data reaches the invention by means of suitable network connections, and the invention sends picture signals to the user's remote monitor over the same facility. In this way, all users interact in the same virtual world.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Controls And Circuits For Display Device (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An interactive display system includes an electronic camera which intermittently records a participant's image and digitally inputs the video image into a computer, creating a digitized “virtual user image”. The camera is positioned behind a large viewing screen which faces the user and records the user's image through an opening provided within the screen. The computer further receives a text string having a fall rate value. For each digital image received from the camera, a software program determines the destination of each line of text according to its fall rate, and then uses pixel color comparison techniques to determine if a “virtual obstacle” is located at the particular destination. If not, the text is displayed at the destination so that the text appears to “fall” on the screen. If a virtual obstacle is present, a new higher destination is determined until the virtual obstacle is no longer detected. This arrangement causes the falling text displayed on the screen to be selectively “caught” by the user and otherwise manipulated in the virtual environment.
Description
This patent application claims priority under 35 U.S.C. §119 of provisional patent application, Ser. No.: 60/178,228, filed Jan. 26, 2000, which is hereby incorporated by reference as if set forth in its entirety herein (including the source code).
This application incorporates by reference in its entirety a computer program listing appendix containing the source code listings which were described in the patent application at the time of filing and which have now been reproduced on compact disk. The compact disk contains a file entitled "APPENDIX (SOURCE CODE)," which was created on Jun. 18, 2003, and encompasses 812 kilobytes.
1. Field of the Invention
The present invention relates to interactive computer systems, and in particular to a method and apparatus for facilitating wireless, full-body interaction between a human participant and a computer-generated graphical environment including textual data.
2. Description of Related Art
So-called “virtual reality” (“VR”) systems enable users to experience computer-generated environments instead of merely interacting with them over a display screen. Such systems typically require the user to don goggles, through which he or she perceives the virtual environment, as well as sensors that encode the user's gestures as electrical signals. The user reacts naturally to the changing virtual environment, generating signals that the computer interprets to determine the state and progress of the presented environment.
In order to encode a sufficiently broad spectrum of gestures to facilitate natural interaction, VR systems ordinarily require the user to wear, in addition to the goggles, at least one “data glove” to detect hand and finger movements, and possibly a helmet to detect head movements. Full-body systems, which encode movements from numerous anatomical sites to develop a complete computational representation of the user's overall body action, require many more sensors; however, such systems would be capable of projecting the user fully into the virtual environment, providing the user with greater control and a heightened sense of participation ideally suited to interactive simulations.
Unfortunately, numerous practical difficulties limit the capacity of current VR systems to achieve this goal. The nature of the interaction currently offered, even with full-body sensor arrays, is limited. The computational demands placed on a system receiving signals from many sensors can easily overwhelm even large computers, resulting in erratic “jumps” in the visual presentation that reflect processing delays. Moreover, no matter how many sensors surround the user, they cannot “see” the user, and therefore cannot integrate the user's true visual image into the virtual environment.
Economic and convenience factors also limit sensor-type VR systems. As the capabilities of VR systems increase, so do the cost, awkwardness and inconvenience of the sensor array. The sensors add weight and heft, impeding the very motions they are intended to detect. They must also ordinarily be connected, by means of wires, directly to the computer, further limiting the user's movement and complicating equipment arrangements.
In order to overcome the limitations associated with sensor-based VR systems, researchers have devised techniques to introduce the user's recorded image into a virtual environment. The resulting composite image is projected in a manner so that it may be viewed by the user, enabling the user to observe his or her appearance in and interaction with the virtual environment.
Three such approaches include the VideoPlace system (see, e.g., M. Krueger, Artificial Reality II (1991) and U.S. Pat. No. 4,843,568), the Mandala system (see, e.g., Mandala VR News, Fall/Winter 1993; Vincent, “Mandala: Virtual Village” and Stanfel, “Mandala: Virtual Cities,” Proceedings of ACM SIGGRAPH 1993 at 207-208 (1993)), and MIT's system disclosed in U.S. Pat. No. 5,563,988. Unfortunately, these systems exhibit various limitations. For example, Krueger's VideoPlace requires a special background and ultraviolet lamps, and extracts and represents only the user's silhouette. The Mandala system can integrate the user's full image within the virtual environment it creates, but requires a chroma-key blue background. The system developed by MIT provides a 3-Dimensional spatial video interaction with a user and is directed to detecting and analyzing gestural information of the user to cause predetermined responses from graphically generated objects within the virtual environment (e.g., if a virtual user pets a graphically generated dog within the virtual environment, the dog may wag its tail). Regardless of the approach, all of the above-described VR systems employ complex video analysis and tracking techniques to detect and follow the boundaries of the user's image within the virtual environment.
The disclosures of U.S. Pat. No. 5,563,988 of Maes et al. and U.S. Pat. No. 4,843,568 of Krueger et al. are hereby incorporated by reference as if set forth in their entirety herein.
Applicant has recognized the need for a simple 2-dimensional video-generated real-time interaction system wherein text data is easily introduced into a virtual environment, as viewed on a monitor or on a projection screen. The inputted text data is manipulated in the virtual environment by a video representation of a human participant (a “virtual user”). Through movements in the real world, the human participant or virtual user freely interacts with the inputted text data in the virtual environment as the human participant (and other people watching the monitor or projection screen) absorbs the teachings of the text data as he or she interacts with the text data in the virtual environment.
It is an object of the present invention to provide a wireless VR system that easily accepts text data from a text input to be displayed onto a monitor (or projection screen) so that the font, size, shape, and/or movements of words and/or characters (e.g., letters, symbols and numbers) of the inputted text data may be altered by a virtual user (a real-time video representation of a human participant).
It is another object of the invention to enable the virtual user to interact with computer-generated, visually represented text data in such a manner that allows the inputted text to be readable within the virtual environment so that the virtual user may learn, understand, or otherwise experience the meaning of the textual data as he or she interacts with the text within the virtual environment.
It is another object of the invention to provide an interaction display system wherein text data and the video image of a user are combined within a virtual environment to promote marketing, advertisement, sales, and entertainment.
It is another object of the invention to provide an interaction display system wherein text data and the video image of a user are combined within a virtual environment and wherein the text data viewed within the virtual environment may be changed in realtime in response to actions by the user or by others.
It is another object of the invention to provide an interaction display system wherein text data and the video image of a user are combined within a virtual environment and wherein the text data is imported from a local text file.
It is another object of the invention to provide an interaction display system wherein text data and the video image of a user are combined within a virtual environment and wherein the text data is imported from a remote site over the Internet.
It is another object of the invention to provide an interaction display system wherein text data and the video image of a user are combined within a virtual environment and wherein the text data is inputted directly from a keyboard.
The foregoing and other objects of this invention can be appreciated from the Summary of the Invention, the drawing Figures and Detailed Description.
In accordance with the invention, a controlling software program instructs an interactive display system. The interactive display system includes an electronic camera (e.g., charge-coupled device or “CCD”) which intermittently records (in realtime) a human participant's (a user) image and inputs the video image into a computer. The computer creates a digitized version of the inputted image (e.g., a bitmap of pixels) and thereby creates a “virtual user image”. According to one embodiment, the camera is positioned behind a large viewing screen which faces the user and records the user's image through an opening provided within the screen. The computer further receives a data record including text data and text controlling parameters, preferably including at least a fall rate value. The data record may be supplied to the computer using any of several conventional memory mediums, such as a hard drive memory, disc memory, compact disc, or the data record may only include a text string which is supplied to the computer directly from the Internet through a connected modem, for example, or through the use of a connected keyboard. Alternately, the text may be inputted to the computer directly from a word processing program, such as Corel's WordPerfect®, or Microsoft Word® as a text file or by “copying” selected text from an open text file directly into the present software program.
According to the invention, upon receiving a captured video image of the user, the software program (see appendix for source code) first parses the text data into lines of text. The program determines which line or lines of text should be displayed on the screen and further determines the destination of each character within any line that is to be displayed on the screen, according to the fall rate assigned to each character. The program then compares the pixel color value of “destination pixels” (pixels located at the destination of each particular character) with a threshold color value. If the color value of the destination pixels is below the threshold value (i.e., lighter), then the particular character will be positioned at a destination according to its prescribed fall rate. If, however, the color value of the destination pixels exceeds the threshold value (i.e., darker), then it is determined that the virtual user's image is located at the destination of the particular character of the line of text. In such instance, the program locates a suitable new destination positioned vertically above the original destination which is lighter in color value than the threshold value.
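A minimal sketch of this destination search, assuming the captured bitmap has been reduced to a two-dimensional array of per-pixel darkness values (the identifier names, the array layout, and the single-value "darkness" simplification are assumptions for illustration, not the appendix source code):

```python
# darkmap[y][x] follows the patent's convention: a value above the threshold
# marks a dark pixel (a "virtual obstacle" such as the user's image);
# y = 0 is the top of the screen.

def find_destination(x, y, fall_rate, darkmap, threshold):
    """Choose the vertical destination of one character for the next frame."""
    height = len(darkmap)
    dest = min(y + fall_rate, height - 1)      # fall by the prescribed fall rate
    while dest > 0 and darkmap[dest][x] > threshold:
        dest -= 1                              # obstacle found: raise the destination
    return dest                                # lighter than the threshold, so rest here
```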
Once all the characters of all the lines of text intended to be projected onto the screen are provided with suitable destinations, the program instructs the computer to display all of the characters on the screen at their respective destinations with the particular captured video frame. The process continues whenever a new image frame is available from the camera.
The program analyzes each captured image as it is acquired, at a prescribed sampling rate. The sampling rate is dependent on the speed of the video card components and drivers, the speed of the computer, and the light sensitivity of the digital camera.
According to the invention, as the individual characters of a single line make their way down along the projection screen, the program keeps track of when a predetermined number of characters (e.g., at least half of them) travels a predetermined distance from the top of the viewing screen (such as the halfway point). If this is the case, the program causes the entire line of text to fade at a prescribed fade rate. When one line fades at the lower portion of the screen, another line appears at the top of the screen, with its characters beginning to fall. It is preferred that at least three lines of text be displayed on the screen at any given time.
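A sketch of that line life-cycle rule under the same illustrative assumptions (the attribute names and the fade step are hypothetical; the patent specifies only a prescribed fade rate and the "at least half of the characters past a predetermined point" trigger):

```python
def update_line_fade(line, screen_height, fade_step=0.05):
    """Fade a line once enough of its characters have fallen far enough."""
    halfway = screen_height // 2
    fallen = sum(1 for ch in line.chars if ch.y >= halfway)
    if fallen * 2 >= len(line.chars):                     # e.g. at least half of them
        line.opacity = max(0.0, line.opacity - fade_step)  # prescribed fade rate
    return line.opacity == 0.0   # True: retire this line and introduce the next one
```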
Another aspect of the invention displays each line of text data as a separate color so that viewers may easily discern the letters or characters from different lines of inputted text, allowing the lines of text to be read relatively easily, as the letters fall towards and rest on various darker elements located within the virtual environment, such as the arm of a virtual user and perhaps a “virtual umbrella image” carried by the virtual user.
According to another aspect of the invention, each character of an inputted text string is horizontally held so that each character is restricted to vertical movement as it descends on the viewing screen at its prescribed fall rate or ascends with similar upward movement by the virtual user.
According to another embodiment of the invention, the program generates a shadow zone of dark pixels which are projected onto the viewing screen. The shadow zone is aligned with the optical input of the digital camera so that the camera will be protected from the relatively bright light of the projector. The computer generated shadow zone helps prevent the images recorded by the camera from becoming “washed out” by the light from the projector shining on the camera's lens.
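One way such a shadow zone might be composited into every projected frame is sketched below; the square shape and gray level follow the later description of FIG. 9, while the frame layout, function name, and default size are assumptions:

```python
def apply_shadow_zone(frame, cx, cy, half_size=40, gray=(40, 40, 40)):
    """Overwrite a square of pixels centred on the camera opening at (cx, cy)."""
    height, width = len(frame), len(frame[0])
    for y in range(max(0, cy - half_size), min(height, cy + half_size)):
        for x in range(max(0, cx - half_size), min(width, cx + half_size)):
            frame[y][x] = gray   # dark pixels keep the projector light off the lens
    return frame
```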
The foregoing discussion will be understood more readily from the following detailed description of the invention, when taken in conjunction with the accompanying drawings, in which:
Referring to FIGS. 1, 2, and 3, the interactive display system 10, according to the present invention, is shown including a large display screen 12 having a generally central and small opening 14, an image projector 16 positioned to project an image (along projection beam 17) onto the front surface of screen 12, and a camera 18, preferably a mini black and white CMOS-type video camera (alternately a Connectix QuickCam® by Logitech® with appropriate platform drivers may be used). The camera 18 is positioned behind screen 12 with its image-receiving lens aligned with opening 14, so that a person (a user) standing in front of screen 12 (in the area represented by reference numeral 15, in FIGS. 1 and 2) and within the field of view 19 of the camera 18, is recorded by the camera. The camera 18 is preferably sufficiently light sensitive to “capture” repetitive images of the user in apparent realtime (i.e., generating a sufficient number of consecutive images so that when the sampled images are eventually projected, the motions of the user appear smooth and continuous). It is preferred, although not shown, that a rear surface or wall be located opposite the screen 12, behind the user, and that this rear surface be white and illuminated. Although the rear surface is not necessary for the present system to operate, it does allow the present program to quickly and easily distinguish the virtual user from the background.
The present interactive display system 10 further includes a computer 20 having a memory 21 and a text input 22, as shown in FIG. 1. Computer 20 may be conventional, and preferably includes a fast central processing unit (CPU), such as Intel's Pentium II® or higher. The video card of the computer preferably includes enough VRAM to display 640×480 resolution at high color (16 bit). If a CMOS type camera 18 is being used behind the large screen 12, then the computer 20 further requires a video capture card 24 to create a bitmap of the analog image of the CMOS camera 18. If a Connectix®-type camera is used (or equivalent), no capture card 24 is necessary because the Connectix®-type camera uses software to digitize a recorded image. The computer 20 preferably uses Microsoft's Windows 98® or Microsoft's Windows NT® operating systems.
As is understood by those skilled in the art, the video capture card 24 is used to receive analog image data from the camera 18 and digitize the image by generating a bitmap (the present program can control the rate of sampling). During each sampling, as described below, the video card 24 captures a frame of image data from camera 18 as the camera records the user and digitizes the frame into an array of picture elements, or “pixels,” called a bitmap. Depending on the format, each pixel within a bitmap is identified by an RGB color code (a three-number code which identifies the exact color of the pixel). The bitmap representing the received frame of analog image data is stored in memory for analysis and manipulation by the software program.
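For concreteness, one way to reduce a pixel's three-number RGB code to the single value that is later compared against the threshold is a luminance-style weighting; the exact formula is an assumption, since the patent does not state how the three channels are combined:

```python
def pixel_darkness(rgb):
    """Map an (R, G, B) triple to 0 (white) .. 255 (black)."""
    r, g, b = rgb
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # standard luminance weights (assumed)
    return 255 - int(luma)                      # invert so that a higher value is darker

# e.g. pixel_darkness((255, 255, 255)) == 0 and pixel_darkness((0, 0, 0)) == 255
```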
The computer receives a data record including text data and text-controlling parameters from text input 22. As described below, the present software program manipulates both the digitized image of the received frame of analog image data and the text data from text input 22 and combines the video and text data onto the projection screen 12 in a generally smooth and continuous manner.
An example of text data 30 for use with the present invention is shown in FIG. 4 (shown displayed on a computer monitor 32): “When it rains, it pours. When it stops, it's wet.” This exemplary text data 30 is inputted into the computer 20 by any conventional means and is parsed into appropriately-sized lines 31 (depending on the size of the projection screen 12). If the text data 30 exceeds a certain number of characters 36, the program will parse the text data 30 into separate lines of text 31 so that each line may fit onto the screen 12 (the wider the screen 12, the longer each line of text 31). The program will form a line data record to keep track of which lines of text 31 are on the screen 12, are fading from the screen 12, and should be introduced at the top of the screen. Each line of text 31 located on the line data record will be introduced in a prearranged order and every line of text 31 which is being projected on the screen 12 will be electronically marked or flagged in the line data record. In this manner, as one line of text 31 fades from the screen 12, its electronic mark is removed from the line data record and a subsequent line of text 31 will be introduced onto the screen 12.
Each line located in the line data record is also stored in a character data record 34, shown in FIG. 5. According to one aspect of the invention, the character data record 34 may further include various parameters which will control how each character 36 of each line of text 31 moves and appears within the virtual environment, as viewed on screen 12. The parameters may vary depending on the desired effect of the characters 36 of each line of text 31 within the virtual environment and depending on the complexity and speed of the interaction system.
The controlling parameters may include a fall rate value which controls the speed at which each character 36 of each line of text 31 appears to fall on the screen 12. The fall rate value may be defined in a variety of ways, such as the number of pixels traveled in one second or between captured frames of video.
Another character parameter may be a fall delay value which determines the length of time each character 36 will remain at the top of the screen 12 prior to falling.
A color code may be assigned to each character 36 or all of the characters within each line of text 31.
Another parameter for each character 36 is a horizontal hold condition which, when set, restricts each character to vertical movement when projected within the virtual environment.
A fade parameter controls whether a character 36 fades when reaching the bottom of the screen 12 within the virtual environment.
An edge-detect parameter determines whether a character will recognize the virtual image of a user within the virtual environment (i.e., determines if a virtual user interacts with the falling characters).
The distance the characters 36 “fall” along the screen 12 within the virtual environment may also be controlled using a maximum fall distance parameter.
The above-listed character parameters are provided to illustrate that each character 36 may be controlled in a variety of ways independently of the other characters 36 within a line of text 31. Although a variety of parameters are possible, as exemplified above, it is preferred that most, if not all, include default values which simplify the setup and use of the present program. It is preferred that an operator merely be required to input the text data 30 into computer 20 without further setup.
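A compact sketch of one possible entry in the character data record 34, built from the parameters listed above; the field names, types, and default values are illustrative only, chosen so that an operator could supply text without further setup:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CharacterRecord:
    glyph: str                                # the character 36 itself
    fall_rate: int = 2                        # e.g. pixels travelled per captured frame
    fall_delay: int = 0                       # frames to wait at the top before falling
    color: Tuple[int, int, int] = (255, 255, 255)
    horizontal_hold: bool = True              # restrict motion to the vertical axis
    fade: bool = True                         # fade out at the bottom of the screen
    edge_detect: bool = True                  # interact with the virtual user's image
    max_fall_distance: Optional[int] = None   # optional cap on how far it may fall
```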
In operation, referring to FIG. 6 , the present program first receives input text data 30 at step 100. The text data 30 is first divided up into lines of text 31 at step 110. The number of characters 36 in each line of text 31 depends, in part, on the dimensions of the viewing screen 12. The lines of text 31 may be divided into any number of characters, following, for example, the length of each line of a poem. A fall rate value (and perhaps other variables) is assigned to each character 36 of each line of text 31 at step 120. As described above, it is preferred that all the characters 36 of any given line of text 31 have the same fall rate. At step 130, a frame of video from the camera is received and digitized by the video capture card 24, generating a bitmap wherein an RGB color value is assigned to each pixel defined in the captured image frame. Once the program starts capturing frames of video data, the program enters into a callback mode whereby every time a new frame of video data is received, select pixels of the bitmap are analyzed, as described below. Since only select pixels are analyzed by the present program, the speed of the program is much faster than prior art VR systems which rely on analyzing each pixel in the bitmap for each frame of image data received.
At step 140, the program determines which lines of text 31 are to be displayed on the screen 12. It is preferred that three lines of text 31 be displayed on the screen 12 at any given time. The number of lines of text 31 may vary, of course, depending on the particular application, the size of the viewing screen 12, and the desired effect. At step 150, in FIG. 6, the program determines the destination for a selected character 36 among the characters of the “to be displayed” lines of text 31, according to the predetermined fall rate of the selected character 36. At step 160, the program reads the color value of the “destination pixels,” those pixels which are located at the destination determined for the selected character 36 at step 150. The program, at step 170, then compares the read color value of the destination pixels with a threshold color (or contrast) value. At step 180, if it is determined at the comparison step 170 that the destination pixel color value is greater than the threshold color value, then the destination of the selected character 36 is raised by a predetermined number of pixels, preferably a single pixel, at step 190. In this instance, the pixel color value at the newly determined destination of the selected character 36 is again read at step 160, its value compared again with the threshold value at step 170, and a decision is again made at step 180 until the read pixel value of the destination pixel is less than or equal to the threshold color value.
At this point, the determined destination of the selected character 36 is stored in memory at step 200. At step 210, it is determined if any more characters 36 remain within the “to be displayed” lines of text 31. If there are more characters 36 remaining within the “to be displayed” lines of text 31, another character 36 is selected at step 220 and again the destination of the newly selected character 36 is determined at step 150 and the above-described process is repeated until the destination of each character 36 of the “to be displayed” lines of text 31 has been determined. When this occurs, the captured video image frame together with each character 36 of the “to be displayed” lines of text 31 is displayed onto screen 12 (at step 230), with each character 36 located at each respective destination, as determined by the program and as stored at step 200.
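Putting steps 130 through 230 together, a sketch of the callback that runs for each newly captured frame; the helpers (`pixel_darkness`, `find_destination`, `visible_lines`, `screen.draw`) are the hypothetical ones used in the earlier sketches, and each character object is assumed to carry its current screen coordinates:

```python
def on_frame(frame, lines, threshold, screen):
    """One pass of steps 130-230 for a newly captured frame (illustrative only)."""
    # Step 130: the digitized bitmap, reduced to a per-pixel darkness map.
    darkmap = [[pixel_darkness(px) for px in row] for row in frame]
    for line in visible_lines(lines):          # step 140: lines to be displayed
        for ch in line.chars:                  # steps 150-220: every character 36
            ch.y = find_destination(ch.x, ch.y, ch.fall_rate, darkmap, threshold)
    screen.draw(frame, lines)                  # step 230: frame plus positioned characters
```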
The program then returns to step 130 to acquire another image frame of video from camera 18 and repeats the above described steps, as shown in the flow diagram of FIG. 6.
The end result is that the individual characters 36 of their respective “to be displayed” lines of text 31 will appear to fall (at their prescribed fall rate) from the top of the projection screen 12, as illustrated in FIGS. 7 and 8. In FIG. 7, an exemplary projected image on screen 12 is shown to have a shoulder 240 and an arm 242 of a virtual user 244. The actual user (not shown) and the virtual user 244 are holding an umbrella 246, which appears on the screen 12, in this example, as a virtual umbrella 246. As is shown, the program has displayed the video image of the virtual user (shown in part) 244 holding the virtual umbrella 246 and the inputted lines of text 31: “When it rains, it pours”. In this exemplary scene, a new line of text 31: “When it rains, it pours” has just been introduced at the top portion of the screen 12, generally indicated by arrow 248 of FIG. 7. The characters “h” and “e” of the word “When” have already started to fall at their quicker fall rates. Various characters 36 from previously introduced lines of text 31 (in this example, all the lines of text 31 shown on the screen 12 are the same: “When it rains, it pours”) have been drifting downwardly from the top portion 248 of the screen 12 toward the bottom portion of the screen, generally indicated by arrow 250 in FIG. 7.
Following the steps described above, as the computer 20 determines the destination of each character 36, it quickly reads the pixel color values at that destination of the captured image and determines whether a virtual obstacle is already located there. If no obstacle is there (i.e., the read pixels are less than or equal to a threshold value), then the particular character 36 appears on the screen 12 at the destination and, as subsequent images are displayed on screen 12, the particular character 36 will appear to slowly fall, like snow. If, however, a virtual obstacle is determined to be located at the destination of a particular character 36, then a new destination located higher on the screen 12 is selected so that the character 36 appears to interact with the virtual obstacle located in the virtual environment. This interaction is illustrated in FIG. 8, wherein an enlarged partial section of the umbrella 246 is shown “capturing” the characters “r”, “a”, and “i” of the word “rains”. These characters 36 will remain at the first “light” pixel (below the pixel color threshold value) located above the relatively dark pixels which make up the umbrella 246 so that the characters 36 appear to have fallen onto and are supported by the umbrella 246. If the user raises the umbrella 246, the program will quickly detect this upward movement because the “lighter” pixel between the character 36 and the umbrella 246 will become a dark value, causing the computer to respond by raising the affected characters 36 upwardly with the umbrella movement, so that the characters 36 appear to have been lifted by the umbrella 246. The characters 36 can move upwardly very quickly, but will preferably always fall at the predetermined fall rate, as discussed above.
As subsequent video images are sampled, each character 36 of a line of text 31 appears to gradually descend on the screen 12 until the character 36 confronts a dark pixel, signifying a virtual obstacle within the virtual environment. Although the present program may be set up so that the characters 36 fall at any fall rate, it is preferred that the characters 36 fall in a manner which is somewhat familiar and pleasing to the user, similar, for example, to the graceful movements of an airborne leaf slowly falling from a tree. A user can “catch” the falling leaf and quickly push it back into the wind currents, but the leaf will always fall back down at its leisurely pace. By “training” the introduced characters 36 to respond in this manner within the virtual environment, the user is likely to remain comfortable and will be encouraged to further interact and explore within the virtual environment.
The text data 30 may be the lines of a poem, a song, or even the tag line of an advertisement for a particular product, and may be projected in any size, shape, or font (system fonts are preferably used so that processing is faster). As the user interacts with the introduced lines of text 31, he or she must physically manipulate the falling characters 36, and perhaps capture several of them, to read them and understand the actual text; the fun of combining such a physical activity with reading encourages the user to succeed in reading the textual information. Whatever message the textual information is to convey is more likely to reach the viewer or user when the user gets to participate with the message in an interactive, game-like environment.
The textual information introduced into the present program may be continuously changed, as in the case of weather reports or news stories. Alternately, the textual information may be altered in real time, perhaps in response to the particular actions of the user. In this instance, a user may be asked questions in a text format, with the question appearing on the screen 12 together with the image of the user. When the user reads the question on the screen 12, perhaps after capturing the characters 36, he or she can audibly answer the question. The response to the question can be heard by the operator of the system, who may then ask another question (again in text format) which relates or corresponds to the user's response to the previous question. This type of interaction between the text-input operator and the user may further entertain and teach the user and any viewer of the system.
According to another aspect of the invention, text data may be introduced one line at a time, allowing a first line of text 31 to be introduced at the top of the viewing screen 12 so that each character 36 begins to fall (as viewed on the viewing screen 12), either at similar fall rates or at different ones. When a predetermined number of the falling characters 36 (e.g., at least half of them) has traveled a predetermined distance from the top of the viewing screen 12, a second line (or select characters of the second line) of text 31 will appear at the top of the screen 12. The characters 36 of the newly introduced second line of text 31 will begin to fall along the viewing screen 12 in a manner similar to those of the first line of text. Third and subsequent lines of text 31 may be displayed in a similar and repeating manner.
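By way of illustration only, such a trigger might be computed as in the following sketch, in which a new line is released once a chosen fraction of the current line's characters has descended a chosen distance from the top of the screen; the type and function names are assumptions.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-character state: vertical position in pixels, measured
// downward from the top of the viewing screen.
struct FallingChar {
    float y = 0.0f;
};

// Returns true when at least 'fraction' (e.g., 0.5 for "at least half") of the
// most recently introduced line's characters have fallen 'distance' pixels,
// signalling that the next line of text may be introduced at the top.
bool readyForNextLine(const std::vector<FallingChar>& currentLine,
                      float distance, float fraction) {
    if (currentLine.empty()) return true;
    std::size_t travelled = 0;
    for (const FallingChar& c : currentLine)
        if (c.y >= distance) ++travelled;
    return static_cast<float>(travelled) >=
           fraction * static_cast<float>(currentLine.size());
}
```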
Another aspect of the invention causes inputted lines of text 31 to fade (at a prescribed fade rate) when a predetermined number of characters (e.g., at least half of them) reach a predetermined position on the viewing screen (e.g., enter the lower half of the screen). In a preferred embodiment, at any given moment, three lines of text 31 are displayed on the viewing screen 12, one line of text 31 being introduced at the top of the screen, a second one falling in the middle of the screen (an “interactive region”), and a third line of text 31 fading out at the lower portion of the screen. Of course, some of the characters 36 (or letters) of earlier introduced lines of text 31 may remain on the screen, “captured” by virtual obstacles, such as the above-described umbrella 246 of FIG. 7. If one particular character 36 from a first line of text 31 remains held in vertical space by a virtual obstacle (e.g., umbrella 246), the same character 36 from subsequent lines of text 31 will eventually overlay the first “held” character 36 so that similar characters 36 held by virtual obstacles will not accumulate in number over time. However, it is also contemplated that as characters 36 descend onto each other on the screen 12, the program can be altered so that any higher character 36 will consider a lower “held” character to be part of the virtual obstacle so that the characters will pile up (stack on top of each other), until the obstacle is cleared (i.e., the user moves).
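By way of illustration only, the fade behavior might be tracked per line as in the following sketch, using the example figures given above (at least half the characters entering the lower half of the screen); the names TextLine and updateFade are assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical per-line fade state; 'alpha' runs from 1.0 (opaque) to 0.0.
struct TextLine {
    std::vector<float> charY;    // vertical positions of the line's characters
    float alpha  = 1.0f;
    bool  fading = false;
};

// Begin fading once at least half of the line's characters have entered the
// lower half of the screen; thereafter reduce alpha by 'fadeRate' each frame.
void updateFade(TextLine& line, float screenHeight, float fadeRate) {
    if (!line.fading) {
        std::size_t lower = 0;
        for (float y : line.charY)
            if (y > screenHeight * 0.5f) ++lower;
        line.fading = (2 * lower >= line.charY.size());
    }
    if (line.fading)
        line.alpha = std::max(0.0f, line.alpha - fadeRate);
}
```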
According to another aspect of the invention, each line of text 31 is displayed in a separate color. In this arrangement, viewers may easily discern the characters from different lines of text 31, so that the lines of text 31 can be read relatively easily even as the individual characters 36 fall towards, and rest on, various darker elements located within the virtual environment, such as the arm of a virtual user or a “virtual umbrella image” carried by the virtual user.
According to another aspect of the invention, each character 36 of inputted text data 30 is horizontally held, so that each character 36 is restricted to vertical movement as it descends on the viewing screen 12 at its prescribed fall rate, or ascends with upward movement of a virtual obstacle.
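Purely as an illustrative sketch with assumed type names, these two per-line and per-character constraints might be represented by storing a single color for each line and a fixed horizontal coordinate for each character, so that only the vertical coordinate is ever updated:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical RGB color assigned once per line of text 31.
struct Color { std::uint8_t r, g, b; };

struct GlyphSprite {
    char  glyph;
    float fixedX;   // horizontal position, set at introduction and never changed
    float y;        // the only coordinate the per-frame update may modify
};

struct ColoredLine {
    Color                    color;    // one color for every character of the line
    std::vector<GlyphSprite> glyphs;
};
```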
The techniques described herein for integrating a user's image and inputted textual data within a digitally represented environment can be applied to numerous applications, including fun and educational communication applications for use in schools to help children learn to read or simply to have fun. In such an application, children could be asked, for example, to interact with a falling sentence by “capturing” a verb located within the falling sentence.
The present system can also be installed at a store front, in which case a display monitor may be positioned within the store so that it may be viewed by passing pedestrians. In this application, the appearance of a pedestrian on the monitor will invariably attract the attention of the pedestrian, who will stop and interact with the textual message falling all around his or her image being displayed on the screen. The message, in this example, would help lure customers inside the store to purchase products.
Other useful applications for this technology include performance art, used in connection with either an artist or an audience member. The present invention can be set up within an art gallery, exhibit, or museum as an interesting and alluring way to convey textual information, either as a piece of artwork or as a source of information, such as an interactive information kiosk.
According to another aspect of the invention, the present software is connected to the Internet through an appropriate modem, T1, or DSL communication line for the purpose of allowing Internet users to introduce text data 30 to a particular interactive display shown over the Internet. Also, the software program may be used with a local PC, whereby the computer's monitor becomes the viewing screen and a local “webcam” type camera (or any other appropriate camera) may be used to record the user, who, in this case, would be positioned in front of the computer. If a CRT or LCD type monitor is used as the screen 12, the camera may be positioned on top of (or adjacent to) the monitor.
Referring to FIG. 9, an enlarged partial view of screen 12 showing the central opening 14 is provided, according to another embodiment of the invention. Applicant has discovered that light from projector 16 directed into the optical input lens of camera 18 causes the images captured by the camera to appear “washed out” because the camera automatically compensates for correct exposure. To overcome this problem, the present program generates a shadow zone 13 of dark pixels which is projected onto the viewing screen 12. The shadow zone 13 is aligned with the optical input of the digital camera so that the camera is protected from the relatively bright light of the projector. The computer-generated shadow zone 13 helps prevent the images recorded by the camera from becoming “washed out” by the light from the projector shining on the camera's lens. The shadow zone 13 is shown as a darkened square; however, the shadow zone 13 may take on any useful shape to cover opening 14, and may also be any shade of gray. The program includes means to project the shadow zone 13 whenever each image and the lines of text 31 are projected onto screen 12. Also, the projected position of the shadow zone 13 on screen 12 may be easily moved by the program, using an appropriate input device, such as a mouse or a keyboard (not shown).
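By way of illustration only, the shadow zone might be produced in software as in the following sketch, which writes a block of uniformly dark pixels over the region of the output image that is projected onto opening 14; the function name, buffer layout, and parameters are assumptions, and the zone's position, size, and gray level can be changed simply by changing the arguments.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical output buffer: 8-bit grayscale, row-major, 'width' x 'height'.
// Writes the shadow zone 13 (a rectangle of 'shade'-valued pixels) at (x0, y0)
// so that the projector casts no bright light onto the camera's opening 14.
void drawShadowZone(std::vector<std::uint8_t>& frame, int width, int height,
                    int x0, int y0, int zoneW, int zoneH, std::uint8_t shade) {
    for (int y = y0; y < y0 + zoneH && y < height; ++y)
        for (int x = x0; x < x0 + zoneW && x < width; ++x)
            if (x >= 0 && y >= 0)
                frame[static_cast<std::size_t>(y) * width + x] = shade;
}
```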
The present software program (see appendix for actual source code) is written in C++ programming language using standard Microsoft® and Video-for-Windows® Libraries.
Although the above detailed description of the invention assumes the presence of a single user, the invention is capable of simultaneously supporting multiple users. These users can occupy the same space in front of a single camera 18, or images of each user can be inputted to a single computer 20 using separate cameras at physically disparate locations. In the latter case, remote image data reaches the invention by means of suitable network connections, and the invention sends picture signals to the user's remote monitor over the same facility. In this way, all users interact in the same virtual world.
The terms and expressions employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof, but it is recognized that various modifications are possible within the scope of the invention claimed.
Claims (33)
1. A method for facilitating real-time interaction between a user and a digitally represented visual environment within which the user's moving image is integrated, said method including the use of a computer, electronic memory, a display, a video camera, and a video input device, the method comprising the steps of:
storing a first computer generated digital image in said electronic memory;
assigning a velocity of movement to said digital image, said velocity of movement including a rate of movement and a direction of movement of said digital image;
recording the image of said user using said video camera;
simultaneously displaying the image of said user and said stored first digital image onto said display, at a predetermined refresh rate;
digitally repositioning said displayed first digital image on said display according to said assigned velocity of movement;
comparing the relative position of said displayed image of said user and said displayed first digital image;
determining when said displayed first digital image and the displayed image of said user are within a predetermined distance on said display;
changing said velocity of movement of said displayed first digital image in response to determining that said displayed first digital image and the displayed image of said user are within said predetermined distance; and
simultaneously displaying said first digital image at said new velocity of movement, and the image of said user.
2. A method for facilitating real-time interaction between a user and digitally represented text data on a display within which the user image is integrated, the method comprising the steps of:
storing text data in said electronic memory;
assigning a velocity of movement to said text data, said velocity of movement including a rate of movement and a direction of movement of said text data;
recording the image of said user using a video camera;
simultaneously displaying the image of said user and said stored text data onto said display, at a predetermined refresh rate;
digitally repositioning said displayed text data on said display by said assigned velocity of movement;
comparing the relative position of said displayed image of said user and said displayed text data;
determining when said displayed text data and the displayed image of said user are within a predetermined distance on said display;
changing the velocity of movement of said displayed text data in response to determining that said displayed text data and the displayed image of said user are within said predetermined distance; and
simultaneously displaying said text data at said new velocity of movement and the image of said user.
3. The method of claim 2 , wherein said display includes an upper edge and a lower edge, and said velocity of movement of said text data includes a direction towards said lower edge of the display.
4. The method of claim 2 , wherein said velocity of movement of said text data includes a first rate of movement, and said new velocity of movement of said text data includes a second rate of movement.
5. The method of claim 2 , wherein said new velocity of movement of said text data includes no movement.
6. The method of claim 2 , wherein said assigned velocity of movement of said text data includes a first direction of movement, and said new velocity of movement of said text data includes a second direction of movement, said second direction of movement being opposite said first direction of movement.
7. The method of claim 2 , wherein said text data is initially displayed on said display at a predetermined location prior to moving at said assigned velocity.
8. The method of claim 2 , wherein prior to performing the step of storing text data into electronic memory, a step of receiving text data from a keyboard is performed.
9. A method for facilitating real-time interaction between a user and digitally represented text data on a display within which the user image is integrated, the method comprising the steps of:
storing text data in an electronic memory;
assigning a velocity of movement to said text data, said velocity of movement including a rate of movement and a direction of movement of said text data;
storing a threshold pixel color value in electronic memory;
recording the image of said user using a video camera;
simultaneously displaying the image of said user and said stored text data onto said display, at a predetermined refresh rate, thereby creating a combined image;
determining a destination of said text data, according to the assigned velocity, said text data destination being the point within the combined image where the text will next be displayed;
measuring the pixel color value of the display image at the determined text data destination;
comparing the measured pixel color value at the text data destination with said stored threshold color value;
displaying said text data at said text data destination in response to said comparing step determining that said measured pixel color value at the text data destination is less than said stored threshold color value; and
displaying said text data at a position within said combined image on said display other than said text data destination in response to determining in said comparing step that said measured pixel color value at the text data destination is greater than the stored threshold color value.
10. The method of claim 9 , further comprising the step of changing the velocity of the text data in response to said comparing step determining that said measured pixel color value at the text data destination is greater than the stored threshold color value.
11. An arrangement capable of facilitating an interaction of at least one portion of a physical object with at least one portion of a virtual object when being executed by a processing arrangement, comprising:
a first set of instructions which, when executed by the processing arrangement, is configured to receive first information associated with the at least one portion of a first representation of the virtual object and second information associated with at least one section of a second representation of the at least one portion of the physical object, wherein the second representation is generated irrespective of an implementation of a particular characteristic associated with at least one of a color or an illumination of a background associated with the at least one portion of the physical object;
a second set of instructions which, when executed by the processing arrangement, is configured to determine a spatial relationship between respective positions associated with the first representation and the second representation by analyzing the first and second information;
a third set of instructions which, when executed by the processing arrangement, is configured to modify at least one portion of the first information based on the determination made by the second set of instructions; and
a fourth set of instructions which, when executed by the processing arrangement, is configured to provide further data associated with the first and second representations based on the modified portion at least one of for display or in a virtual space.
12. The arrangement according to claim 11 , wherein the third set of instructions is configured to modify the at least one portion of the first information by repositioning at least one of the relative position associated with at least one section of the first representation of the at least one portion of the virtual object and the respective position associated with at least one section of the second representation of the at least one portion of the physical object.
13. The arrangement according to claim 11 , wherein the first representation is associated with a first digital image of the virtual object.
14. The arrangement according to claim 13 , wherein the second representation is associated with a second digital image of the at least one portion of the physical object.
15. The arrangement according to claim 14 , further comprising a fifth set of instructions which, when executed by the processing arrangement, is configured to reposition at least one of the first digital image and the second digital image at least one of for display or in the virtual space.
16. The arrangement according to claim 15 , wherein the fourth set of instructions is capable of providing at least one of the first repositioned digital image or the second repositioned digital image at least one of for display or in the virtual space.
17. The arrangement according to claim 16 , wherein the first digital image is capable of being repositioned in a direction relative to the second digital image.
18. The arrangement according to claim 14 , wherein the spatial relationship is provided as a function of a determination of when the first digital image at least one of reaches or is positioned at a predetermined distance from the second digital image.
19. The arrangement according to claim 11 , wherein the second representation is associated with a digital image for the at least one portion of the physical object.
20. The arrangement according to claim 11 , wherein the further data is associated with data provided for at least one of marketing, advertising, sales, entertainment, learning, education, games, conveying weather or conveying news.
21. The arrangement according to claim 11 , further comprising a sixth set of instructions which, when executed by the processing arrangement, is configured to obtain destination data of the virtual object, the destination data being related to a particular position which is targeted to be reached by the virtual object.
22. The arrangement according to claim 21 , wherein the destination data is related to a particular position which is targeted to be reached by the relative position associated with the first representation based on the second information.
23. The arrangement according to claim 11 , further comprising a seventh set of instructions which, when executed by the processing arrangement, is configured to obtain third information associated with a speed of movement of the virtual object which is associated with the first representation.
24. The arrangement according to claim 23 , wherein at least one portion of the third information is capable of being modified based on the determination by the seventh set of instructions.
25. The arrangement according to claim 11 , further comprising an eighth set of instructions which, when executed by the processing arrangement, is configured to modify a speed of a movement of the relative position associated with the virtual object as a function of the second representation.
26. The arrangement according to claim 11 , further comprising a tenth set of instructions which, when executed by the processing arrangement, is configured to reposition at least one of the relative position associated with at least one section of the first representation of the at least one portion of the virtual object and the respective position associated with at least one section of the second representation of the at least one portion of the physical object.
27. A method for facilitating an interaction of at least one portion of a physical object with at least one portion of a virtual object, comprising:
receiving first information associated with the at least one portion of a first representation of the virtual object and second information associated with at least one section of a second representation of the at least one portion of the physical object, wherein the second representation is generated irrespective of an implementation of a particular characteristic associated with at least one of a color or an illumination of a background associated with the at least one portion of the physical object;
determining a spatial relationship between respective positions associated with the first representation and the second representation by analyzing the first and second information;
modifying at least one portion of the first information based on the determination of the spatial relationship; and
providing further data associated with the first and second representations based on the modified portion at least one of for display or in a virtual space.
28. A system for facilitating an interaction of at least one portion of a physical object with at least one portion of a virtual object, comprising:
a first arrangement configured to receive first information associated with the at least one portion of a first representation of the virtual object and second information associated with at least one section of a second representation of the at least one portion of the physical object, wherein the second representation is generated irrespective of an implementation of a particular characteristic associated with at least one of a color or an illumination of a background associated with the at least one portion of the physical object;
a second arrangement configured to determine a spatial relationship between respective positions associated with the first representation and the second representation by analyzing the first and second information;
a third arrangement configured to modify at least one portion of the first information based on the determination made by the second arrangement; and
a fourth arrangement configured to provide further data associated with the first and second representations based on the modified portion at least one of for display or in a virtual space.
29. An arrangement capable of facilitating an interaction of at least one portion of a physical object with at least one portion of a virtual object when being executed by a processing arrangement, comprising:
a first set of instructions which, when executed by the processing arrangement, is configured to receive first information associated with the at least one portion of a first representation of the virtual object and second information associated with at least one section of a representation of the at least one portion of the physical object;
a second set of instructions which, when executed by the processing arrangement, is configured to obtain destination data associated with the virtual object, the destination data being related to a particular position which is targeted to be reached by the virtual object;
a third set of instructions which, when executed by the processing arrangement, is configured to determine whether the destination data is at least one of reached or configured to be reached by the representation;
a fourth set of instructions which, when executed by the processing arrangement, is configured to modify at least one portion of the first information based on the determination; and
a fifth set of instructions which, when executed by the processing arrangement, is configured to provide the modified first information and the second information at least one of for display or in a virtual space.
30. The arrangement according to claim 29 , wherein the first representation is associated with a first digital image of the virtual object, and the second representation is associated with a second digital image of the at least one portion of the physical object.
31. The arrangement according to claim 29 , wherein the destination data is determined based on the second information.
32. An arrangement capable of facilitating an interaction of at least one portion of a physical object with at least one portion of a virtual object when being executed by a processing arrangement, comprising:
a first set of instructions which, when executed by the processing arrangement, is configured to receive first information associated with the at least one portion of a first representation of the virtual object and second information associated with at least one section of a second representation of the at least one portion of the physical object;
a second set of instructions which, when executed by the processing arrangement, is configured to obtain third information associated with a speed of movement of the virtual object;
a third set of instructions which, when executed by the processing arrangement, is configured to determine whether a relative position associated with the first representation at least one of reaches, is configured to reach or is provided at a predetermined distance from a relative position associated with the second representation by analyzing the first and second information;
a fourth set of instructions which, when executed by the processing arrangement, is configured to modify at least one portion of the third information based on the determination;
a fifth set of instructions which, when executed by the processing arrangement, is configured to reposition the relative position of the first representation as a function of the at least one modified portion of the third information; and
a sixth set of instructions which, when executed by the processing arrangement, is configured to provide the first modified representation and the second representation at least one of for display or in a virtual space.
33. The arrangement according to claim 31 , wherein the first representation is associated with a first digital image of the virtual object, and the second representation is associated with a second digital image of the at least one portion of the physical object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/448,581 USRE40293E1 (en) | 2000-01-26 | 2006-06-06 | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17822800P | 2000-01-26 | 2000-01-26 | |
US09/771,011 US6747666B2 (en) | 2000-01-26 | 2001-01-26 | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
US11/448,581 USRE40293E1 (en) | 2000-01-26 | 2006-06-06 | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/771,011 Reissue US6747666B2 (en) | 2000-01-26 | 2001-01-26 | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
Publications (1)
Publication Number | Publication Date |
---|---|
USRE40293E1 true USRE40293E1 (en) | 2008-05-06 |
Family
ID=22651725
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/771,011 Ceased US6747666B2 (en) | 2000-01-26 | 2001-01-26 | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
US11/448,581 Expired - Lifetime USRE40293E1 (en) | 2000-01-26 | 2006-06-06 | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/771,011 Ceased US6747666B2 (en) | 2000-01-26 | 2001-01-26 | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
Country Status (3)
Country | Link |
---|---|
US (2) | US6747666B2 (en) |
AU (1) | AU2001234601A1 (en) |
WO (1) | WO2001056010A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230083741A1 (en) * | 2012-04-12 | 2023-03-16 | Supercell Oy | System and method for controlling technical processes |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6747666B2 (en) | 2000-01-26 | 2004-06-08 | New York University | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
EP1249792A3 (en) * | 2001-04-12 | 2006-01-18 | Matsushita Electric Industrial Co., Ltd. | Animation data generation apparatus, animation data generation method, animated video generation apparatus, and animated video generation method |
US8300042B2 (en) * | 2001-06-05 | 2012-10-30 | Microsoft Corporation | Interactive video display system using strobed light |
US7259747B2 (en) | 2001-06-05 | 2007-08-21 | Reactrix Systems, Inc. | Interactive video display system |
US8035612B2 (en) * | 2002-05-28 | 2011-10-11 | Intellectual Ventures Holding 67 Llc | Self-contained interactive video display system |
US7348963B2 (en) * | 2002-05-28 | 2008-03-25 | Reactrix Systems, Inc. | Interactive video display system |
US7710391B2 (en) * | 2002-05-28 | 2010-05-04 | Matthew Bell | Processing an image utilizing a spatially varying pattern |
US7576727B2 (en) * | 2002-12-13 | 2009-08-18 | Matthew Bell | Interactive directed light/sound system |
WO2004055776A1 (en) | 2002-12-13 | 2004-07-01 | Reactrix Systems | Interactive directed light/sound system |
CN102034197A (en) | 2003-10-24 | 2011-04-27 | 瑞克楚斯系统公司 | Method and system for managing an interactive video display system |
US7536032B2 (en) | 2003-10-24 | 2009-05-19 | Reactrix Systems, Inc. | Method and system for processing captured image information in an interactive video display system |
US20060038884A1 (en) * | 2004-08-17 | 2006-02-23 | Joe Ma | Driving monitor device |
US7924285B2 (en) * | 2005-04-06 | 2011-04-12 | Microsoft Corporation | Exposing various levels of text granularity for animation and other effects |
US9128519B1 (en) | 2005-04-15 | 2015-09-08 | Intellectual Ventures Holding 67 Llc | Method and system for state-based control of objects |
US8081822B1 (en) | 2005-05-31 | 2011-12-20 | Intellectual Ventures Holding 67 Llc | System and method for sensing a feature of an object in an interactive video display |
JP4533273B2 (en) * | 2005-08-09 | 2010-09-01 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
US8098277B1 (en) | 2005-12-02 | 2012-01-17 | Intellectual Ventures Holding 67 Llc | Systems and methods for communication between a reactive video system and a mobile communication device |
CA2699628A1 (en) | 2007-09-14 | 2009-03-19 | Matthew Bell | Gesture-based user interactions with status indicators for acceptable inputs in volumetric zones |
US8159682B2 (en) | 2007-11-12 | 2012-04-17 | Intellectual Ventures Holding 67 Llc | Lens system |
US8259163B2 (en) | 2008-03-07 | 2012-09-04 | Intellectual Ventures Holding 67 Llc | Display with built in 3D sensing |
US8595218B2 (en) | 2008-06-12 | 2013-11-26 | Intellectual Ventures Holding 67 Llc | Interactive display management systems and methods |
US9132352B1 (en) | 2010-06-24 | 2015-09-15 | Gregory S. Rabin | Interactive system and method for rendering an object |
TW201326755A (en) * | 2011-12-29 | 2013-07-01 | Ind Tech Res Inst | Ranging apparatus, ranging method, and interactive display system |
US9098147B2 (en) * | 2011-12-29 | 2015-08-04 | Industrial Technology Research Institute | Ranging apparatus, ranging method, and interactive display system |
US8928590B1 (en) * | 2012-04-03 | 2015-01-06 | Edge 3 Technologies, Inc. | Gesture keyboard method and apparatus |
USD750666S1 (en) * | 2013-09-10 | 2016-03-01 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with icon |
- 2001
  - 2001-01-26 US US09/771,011 patent/US6747666B2/en not_active Ceased
  - 2001-01-26 WO PCT/US2001/002738 patent/WO2001056010A1/en active Application Filing
  - 2001-01-26 AU AU2001234601A patent/AU2001234601A1/en not_active Abandoned
- 2006
  - 2006-06-06 US US11/448,581 patent/USRE40293E1/en not_active Expired - Lifetime
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4521773A (en) | 1981-08-28 | 1985-06-04 | Xerox Corporation | Imaging array |
US4599611A (en) | 1982-06-02 | 1986-07-08 | Digital Equipment Corporation | Interactive computer-based information display system |
US4843568A (en) | 1986-04-11 | 1989-06-27 | Krueger Myron W | Real time perception of and response to the actions of an unencumbered participant/user |
US5129013A (en) | 1987-10-13 | 1992-07-07 | At&T Bell Laboratories | Graphics image editor |
US5051904A (en) | 1988-03-24 | 1991-09-24 | Olganix Corporation | Computerized dynamic tomography system |
US5341155A (en) | 1990-11-02 | 1994-08-23 | Xerox Corporation | Method for correction of position location indicator for a large area display system |
US5886707A (en) | 1990-11-20 | 1999-03-23 | Berg; David A. | Method for real-time on-demand interactive graphic communication for computer networks |
US5544317A (en) | 1990-11-20 | 1996-08-06 | Berg; David A. | Method for continuing transmission of commands for interactive graphics presentation in a computer network |
US5666175A (en) | 1990-12-31 | 1997-09-09 | Kopin Corporation | Optical systems for displays |
US5534917A (en) | 1991-05-09 | 1996-07-09 | Very Vivid, Inc. | Video image based control system |
US5552824A (en) | 1993-02-18 | 1996-09-03 | Lynx System Developers, Inc. | Line object scene generation apparatus |
US5490240A (en) | 1993-07-09 | 1996-02-06 | Silicon Graphics, Inc. | System and method of generating interactive computer graphic images incorporating three dimensional textures |
US6400463B2 (en) | 1993-11-22 | 2002-06-04 | Canon Kabushiki Kaisha | Image processing system |
US5473364A (en) | 1994-06-03 | 1995-12-05 | David Sarnoff Research Center, Inc. | Video technique for indicating moving objects from a movable platform |
US5563988A (en) | 1994-08-01 | 1996-10-08 | Massachusetts Institute Of Technology | Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment |
US6203425B1 (en) | 1996-02-13 | 2001-03-20 | Kabushiki Kaisha Sega Enterprises | Image generating device, method thereof, game device and storage medium |
US6040873A (en) | 1996-05-17 | 2000-03-21 | Sony Corporation | Apparatus and method for processing moving image data |
US6069637A (en) | 1996-07-29 | 2000-05-30 | Eastman Kodak Company | System for custom imprinting a variety of articles with images obtained from a variety of different sources |
US5721585A (en) | 1996-08-08 | 1998-02-24 | Keast; Jeffrey D. | Digital video panoramic image capture and display system |
US6005493A (en) | 1996-09-20 | 1999-12-21 | Hitachi, Ltd. | Method of displaying moving object for enabling identification of its moving route display system using the same, and program recording medium therefor |
US6067367A (en) | 1996-10-31 | 2000-05-23 | Yamatake-Honeywell Co., Ltd. | Moving direction measuring device and tracking apparatus |
US6008865A (en) | 1997-02-14 | 1999-12-28 | Eastman Kodak Company | Segmentation-based method for motion-compensated frame interpolation |
US6094215A (en) | 1998-01-06 | 2000-07-25 | Intel Corporation | Method of determining relative camera orientation position to create 3-D visual images |
US6891561B1 (en) * | 1999-03-31 | 2005-05-10 | Vulcan Patents Llc | Providing visual context for a mobile active visual display of a panoramic region |
US6747666B2 (en) | 2000-01-26 | 2004-06-08 | New York University | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data |
US6471649B1 (en) | 2000-11-09 | 2002-10-29 | Koninklijke Philips Electronics N.V. | Method and apparatus for storing image information in an ultrasound device |
Non-Patent Citations (20)
Title |
---|
"Mandala VR News, " Fall/Winter 1993. * |
GX Mandala, Technical Manual, Vivid Group, Feb. 22, 2000. |
GX System Manual, Vivid Group Inc. Wireless Virtual Reality Games. |
GX100 Entertainment System Installation Notes, www.vividgroup.com. |
GX200 System Virtual Theatre Installation Notes, www.vividgroup.com. |
http://web.archive.org/web/19991002085816http://vividgroup.com/, 1999. |
http://web.archive.org/web/1999100285816http://vividgroup.com/, 1999. |
http://web.archive.org/web/1999101310223http:/vividgroup.com/, 1999. |
http://web.archive.org/web/19991109111010/http://vividgroup.com/, 1999. |
http://web.archive.org/web/19991109111010http://vividgroup.com/, 1999. |
http://www.archive.org/web/1999101310223/http://vividgroup.com/, 1999. |
http://www.snibbe.com/scott/bf/video.html, 2002-2004. |
http://www.snibble.com/scott/bf/, 1998. |
http://www.vividgroup.com/pressrelease/full_court.htm, Nov. 10, 1998. |
http://www.vividgroup.com/pressrelease/soccerzg.htm, Oct. 10, 1998. |
http://www.vividgroup.com/pressrelease/virtual_court.htm, Mar. 10, 1999. |
International Search Report for PCT Patent Application No. PCT/US01/02738 mailed May 14, 2001. |
Kruger, "Video Place", Virtual Reality II Chapter 3, pp. 34-64. |
M. Kruger, Artificial Reality II (Addison-Wesley Publishing Co., 1991). * |
Vincent, "Mandala: Virtual Village" and Stanfel, "Mandala: Virtual Cities", Proceedings of ACM Siggraph 1993 at 207-208 (1993). |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230083741A1 (en) * | 2012-04-12 | 2023-03-16 | Supercell Oy | System and method for controlling technical processes |
US11771988B2 (en) * | 2012-04-12 | 2023-10-03 | Supercell Oy | System and method for controlling technical processes |
US20230415041A1 (en) * | 2012-04-12 | 2023-12-28 | Supercell Oy | System and method for controlling technical processes |
Also Published As
Publication number | Publication date |
---|---|
AU2001234601A1 (en) | 2001-08-07 |
US20010035865A1 (en) | 2001-11-01 |
WO2001056010A9 (en) | 2002-10-24 |
WO2001056010A1 (en) | 2001-08-02 |
US6747666B2 (en) | 2004-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
USRE40293E1 (en) | Method and system for facilitating wireless, full-body, real-time user interaction with digitally generated text data | |
US8878949B2 (en) | Camera based interaction and instruction | |
Christian et al. | Digital smart kiosk project | |
US20150332515A1 (en) | Augmented reality system | |
CN108109010A (en) | A kind of intelligence AR advertisement machines | |
US20090204880A1 (en) | Method of story telling presentation and manufacturing multimedia file using computer, and computer input device and computer system for the same | |
JP6859640B2 (en) | Information processing equipment, evaluation systems and programs | |
CN109391848B (en) | Interactive advertisement system | |
KR102517028B1 (en) | Real estate selling promoting apparatus using augmented reality | |
Davis et al. | A robust human-silhouette extraction technique for interactive virtual environments | |
van Eck et al. | The augmented painting: playful interaction with multi-spectral images | |
JP6819194B2 (en) | Information processing systems, information processing equipment and programs | |
CN101388067A (en) | Implantation method for interaction entertainment trademark advertisement | |
Gajic et al. | Egocentric human segmentation for mixed reality | |
JP6752007B2 (en) | Drawing image display system | |
KR102489564B1 (en) | Real estate selling promoting apparatus using drone and augmented reality | |
JP2009003606A (en) | Equipment control method by image recognition, and content creation method and device using the method | |
CN113342176A (en) | Immersive tourism interactive system | |
KR20010105012A (en) | Golf swing comparative analysis visual system using internet | |
US9977565B2 (en) | Interactive educational system with light emitting controller | |
Weede et al. | Virtual welcome guide for interactive museums | |
KR102581583B1 (en) | Digital signage apparatus | |
Kegeleers et al. | IMOVE: A Motion Tracking and Projection Framework for Social Interaction Applications | |
US20030123728A1 (en) | Interactive video installation and method thereof | |
Lönnqvist | Development directions for interactive media surfaces in an elevator context |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FPAY | Fee payment | Year of fee payment: 8 |
| AS | Assignment | Owner name: OPEN INVENTION NETWORK, NORTH CAROLINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NEW YORK UNIVERSITY;REEL/FRAME:030785/0764 Effective date: 20111121 |
| FEPP | Fee payment procedure | Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| FPAY | Fee payment | Year of fee payment: 12 |