EP1374019A2 - Browser system and method of using it - Google Patents
- Publication number
- EP1374019A2 (application EP01929798A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- virtual
- browser
- browser apparatus
- information
- user
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1626—Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1686—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/32—Operator till task planning
- G05B2219/32014—Augmented reality assists operator in maintenance, repair, programming, assembly, use of head mounted display with 2-D 3-D display and voice feedback, voice and gesture command
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/16—Indexing scheme relating to G06F1/16 - G06F1/18
- G06F2200/163—Indexing scheme relating to constructional details of the computer
- G06F2200/1637—Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
Definitions
- the invention relates to a browser system for displaying spatially arranged information, and in particular to a hand held browser system.
- the invention also relates to a method of using the browser system.
- a wide variety of systems suitable for displaying information from a computer system are known.
- the most widely available are the conventional cathode ray tube displays of a conventional computer monitor.
- Smaller computer systems such as laptops and palmtops generally use liquid crystal displays and a number of other forms of display are known.
- a particular form of display is a head-up display which is used in virtual reality systems.
- the intention of a virtual reality system is to immerse the wearer of the system in a virtual world created in the computer.
- the head-up display therefore includes a display in the form of goggles or a helmet fixed on the head of a user and displaying an image of a virtual world.
- the virtual world is a virtual 3-dimensional space with a number of objects within it which can be seen, viewed and frequently also manipulated by the virtual reality user.
- a development of virtual reality is so-called augmented reality, which combines an experience of reality with additional computer generated virtual reality.
- applications may include tele-medicine, architecture, entertainment, repair and construction.
- augmented reality differs from virtual reality; the former adds information to the real world whereas the latter is the simulation of a real or imagined environment which can be experienced visually in 3 dimensions and can provide an interactive experience.
- existing augmented reality systems almost all employ either static displays on traditional computer monitors or projectors, or alternatively head-mounted virtual-reality type displays.
- in general, a camera is used together with complex image analysis software which matches the image captured by the camera from the real world with information contained in the virtual world.
- this requires determining the position of one or more real reference objects in the real world in the field of view of the camera; knowledge of the position of these reference objects with respect to the camera is then used to generate the virtual world.
- a development of such augmented reality systems, due to Sony and known as the Navi-Cam system, reads bar-code information from the real world and displays information relating to the bar code as virtual information on a display.
- for example, barcodes can be provided on a library shelf keyed to information about the contents of that shelf.
- the browser may be either a conventional virtual reality display together with a camera, or a palmtop together with a camera. When the camera is directed at the bar codes in the real world, an image of the real world taken from the camera is combined with data referenced by the bar code.
- there are other augmented reality systems that combine a small display of information with information captured by a camera from the real world. See, for example, Feiner, S. and Shamash, A., "Hybrid user interfaces: Breeding virtually bigger interfaces for physically small computers", Proc. UIST '91, ACM Symposium on User Interface Software and Technology, Hilton Head, November 11-13, 1991, pages 9-17.
- Palmtops are widely known small computer systems. Another project includes tilt sensors in a palmtop. This can provide one-handed operation of the palmtop.
- One application of such a tilt sensor is a game in which the object is to guide a ball through a maze. The maze and the ball are displayed on the screen of the palmtop and the tilt of the palmtop is used to generate the acceleration for the ball.
- This game is known as "Maulg II", and is available for download from the Internet.
- a number of other display possibilities are known. For example, “City of News” is an immersive 3-dimensional web browser which represents a dynamically growing urban landscape of information.
- the browser fetches and displays URLs so as to form virtual skyscrapers and alleys of text and images through which the user can 'fly'.
- in such a system the control is by conventional computer controls, for example using a mouse.
- such traditional display systems and methods all have their disadvantages: display systems that use a head-up display for augmented reality are bulky, unsightly and take time to adjust, and such systems are currently very expensive; conversely, systems using conventional CRT displays are not flexible.
- according to the invention there is provided a method of displaying to a user portions of a virtual information space having information arranged spatially in a virtual co-ordinate system, the method comprising: providing a browser apparatus having a display and a position detector for determining the position of the browser apparatus, the browser apparatus being movable by the user to different real space positions including different positions relative to the user's eye; determining information characterising the position and orientation of the browser apparatus in real space and an inferred position of the user's eye; calculating a projected view, from the inferred position of the user's eye, of the spatially arranged information in the virtual information space projected onto the display, depending on the inferred position of the user's eye, the position and orientation of the display and a relationship between real space and virtual reality co-ordinate systems; and displaying on the display of the browser the calculated projected view of the virtual information space; whereby the displayed view of the virtual information space can be changed by moving the browser or changing the relative position or orientation of the browser and the inferred position of the user's eyes.
- the browser apparatus is moveable by the user to change the view of the virtual world; conveniently the browser apparatus may be hand-held.
- the apparatus may be a computer such as a palmtop.
- the browser apparatus may have mobile telephone connectivity and a display.
- the method according to the invention alleviates a limitation of devices, especially low resolution devices, in viewing large amounts of graphical and textual information.
- By moving the display further information may be viewed or selected.
- movement of the display may be used to provide a scroll and zoom control of the information displayed, as sketched below.
- this method of viewing data allows new ways of exploring and manipulating information; prior art approaches assume the screen to be fixed in relation to the eye - in prior art systems using goggles the screens are indeed fixed relative to the eye.
- by including information relating to eye and browser apparatus position in the model of the virtual reality, and allowing different parts of the virtual reality model to be displayed depending on the relative eye and display positions, including the orientation of the device, it may become much easier to view large amounts of data.
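- by way of illustration only - the text does not give an explicit algorithm for this mapping - the following is a minimal sketch of how display motion might drive scroll and zoom, assuming the position detector reports small displacements in the display's own axes; the gains and names are hypothetical:

```python
import numpy as np

def scroll_zoom_from_motion(view_centre, zoom, displacement,
                            scroll_gain=1.0, zoom_gain=2.0):
    """Map a small browser displacement to scroll and zoom updates.

    displacement: (dx, dy, dz) in metres in the display's own axes;
    dz is motion along the screen normal, towards or away from the user.
    The gains are illustrative values, not taken from the patent.
    """
    dx, dy, dz = displacement
    # Lateral motion pans (scrolls) over the spatially arranged data.
    view_centre = view_centre + scroll_gain * np.array([dx, dy])
    # Motion along the screen normal zooms, like a magnifying glass.
    zoom = zoom * np.exp(zoom_gain * dz)
    return view_centre, zoom
```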
- the apparatus may have a network connection for connection to a network.
- the network connection may be, for example, Bluetooth, infrared or a mobile telephone connection.
- the apparatus may permit a networked device with a display screen to become a motion sensitive device for browsing virtual spaces in relation to a real physical world.
- the data may include a plurality of 2-dimensional (2D) images arranged in a 3-dimensional (3D) virtual information space; the 2-dimensional images may be web pages.
- in embodiments, the browser apparatus may be moved to change the displayed resolution of the 2-dimensional image.
- alternatively, the virtual information space may be a virtual 3-dimensional world including at least one, preferably many, 3-dimensional objects.
- the method may further comprise the steps of controlling at least one navigation parameter by the position and/or orientation of the browser apparatus in space, and navigating through the virtual world by updating the position of the browser apparatus in the virtual world depending on the value of the said at least one navigation parameter.
- the step of navigating through the virtual world may update the position of the browser apparatus by reading the velocity of the browser apparatus in the virtual world, updating the velocity depending on the value of the said at least one navigation parameter, updating the position of the browser apparatus using the updated velocity, and storing the updated velocity of the browser apparatus. In this way the appearance of inertia may be provided.
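- purely as a sketch of the inertial update just described (the time step, the drag term and the interpretation of the navigation parameter as an acceleration command are assumptions, not taken from the text):

```python
import numpy as np

def navigate_step(position, velocity, nav_param, dt=0.02, drag=0.5):
    """One navigation update giving the appearance of inertia.

    nav_param: 3-vector acceleration command derived from the browser's
    real-space position and/or orientation (assumed form).
    """
    # Update the stored velocity from the navigation parameter, with a
    # drag term so that motion coasts smoothly to a halt.
    velocity = velocity + dt * (nav_param - drag * velocity)
    # Move the viewpoint through the virtual world at the new velocity.
    position = position + dt * velocity
    return position, velocity  # both are stored for the next update
```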
- the navigation may be abstract, for example for navigation through a virtual world of web pages.
- alternatively, the navigation parameter may be a direct simulation of a control.
- for example, for a driving simulation, the orientation of the browser may determine the position of the steering wheel in the driving simulation, and the position of the browser in the virtual world may then be updated using the position of the steering wheel and other parameters characterising the driving simulation.
- the eye position may be calculated by making one of a number of assumptions, not limited to those presented here.
- One possibility for calculating the eye position is to assume that its position is fixed regardless of the position or orientation of the screen.
- alternatively, the eye may be assumed to be in a fixed position in the frame of reference of the screen, i.e. a fixed position in front of the screen, taking into account the orientation of the screen.
- the position may be taken to be a convenient measure of arm length in front of the screen, for example 0.3m to 1.0m away from the screen, and located on an axis perpendicular to the screen and passing through the centre of the screen.
- the browser apparatus may be switchable between each of the above modes for inferring eye position.
- the eye position of the user may be measured. This may be done, for example, by fixing a sensor to the head of the user and detecting the position of the sensor, or by including a camera in the browser apparatus and measuring the eye position by recording an image of the user's head, identifying the eyes in the image and calculating the user's eye position therefrom.
- the browser apparatus may include a tilt sensor to determine the orientation.
- the browser apparatus may also include an accelerometer in order to use dead reckoning to calculate the fine movement of the browser apparatus. Movement on a large scale may be obtained using a number of systems, for example the Global Positioning System (GPS).
- the method may also include the step of selecting at least one object in the virtual world, de-coupling the position of the object from the virtual world, and updating the position of the selected object not by using the virtual world model but as a function of the movement of the browser. This may be done by updating the position of the selected object or objects by keeping the selected object or objects fixed with respect to the browser apparatus. Alternatively, the object or objects may be fixed to be along the line of sight from the eye position through the browser apparatus.
- the position along the line of sight may also be modified, for example by moving the browser along the line of sight or by using additional keys or controls on the browser.
- the selected object or objects may be moved in virtual space by moving the browser apparatus in real space, as sketched below. This allows rearrangement of objects in the virtual 3-dimensional world by the user.
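- a minimal sketch of this de-coupling, assuming the browser and object poses are available as 4x4 homogeneous transforms (the function names are hypothetical): the object's pose relative to the browser is captured at selection time and re-applied as the browser moves.

```python
import numpy as np

def select_object(browser_pose, object_pose):
    """On selection, capture the object's pose in the browser's frame."""
    return np.linalg.inv(browser_pose) @ object_pose

def update_selected_object(browser_pose, pose_in_browser_frame):
    """Keep the selected object fixed with respect to the browser, so
    moving the browser in real space moves the object in virtual space."""
    return browser_pose @ pose_in_browser_frame
```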
- the apparatus may also be used to author the virtual world, i.e. to create, delete and modify objects, in a similar way.
- after the object has been moved to the required position, it may be deselected so that its position is no longer updated based on the movement of the browser apparatus; instead, the object position can be calculated by the conventional rules determining object positions in the virtual world.
- in another aspect there is provided a method of displaying to a user portions of a virtual information space having objects arranged spatially in a virtual co-ordinate system using a browser apparatus movable by the user to different positions in real space, the browser apparatus having a display and a position detector for determining the position of the browser apparatus, the method comprising: displaying an image of the virtual world including at least one object on the browser apparatus, selecting an object in the virtual world, and moving the browser apparatus to move the selected object in the virtual world.
- in this way, a convenient and intuitive approach to moving virtual objects can be provided in a handheld system; the system may be a palmtop, PDA or mobile telephone with a display.
- the invention in another aspect, relates to a browser apparatus for displaying to a user portions of a virtual information space having information arranged spatially in a virtual co-ordinate system, wherein the browser apparatus is movable by the user to different positions in real space and different positions relative to the eye.
- the browser apparatus comprises a display, a memory, and a position detector for determining the position of the browser apparatus.
- the memory contains stored code for: determining the position of the browser apparatus in real space; determining the relationship between an inferred eye position in real space, the browser apparatus position in real space and the virtual co-ordinate system; calculating a projected view, from the inferred position of the user's eye, of the spatially arranged information in the virtual information space onto the display, using the inferred eye position, the browser position and/or orientation and the relationship between the real and virtual co-ordinate systems; and displaying on the display of the browser the calculated portion of the virtual information space.
- the browser apparatus may further comprise a transmitter and a receiver for remotely connecting the browser apparatus to a network.
- the browser apparatus may additionally include a decompression unit for decompressing compressed data received from the network by the receiver, and/or a rendering engine for rendering image data so as to permit so-called thin-client rendering.
- the invention in another aspect, relates to a network system, comprising a browser apparatus as described above with a transmitter and receiver for networking the browser apparatus, linked to a server network having a store containing data about the virtual world, and a transmitter and receiver for transmitting information between the browser apparatus and the server.
- the server network may include a filter for selecting information relating to part of the virtual world and transmitting it to the browser.
- specific embodiments of the invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:
- Figure 1 shows a palmtop computer in accordance with the invention
- Figure 2 shows a block diagram of the palmtop computer
- Figure 3 is a flow chart illustrating use of the browser apparatus
- Figure 4 illustrates the co-ordinate systems
- Figure 5 is a flow chart of the projection method
- Figure 6 is a block diagram illustrating the component processes carried out in the browser
- Figure 7 illustrates a virtual world and a projection screen.
- referring to Figure 1, a browser apparatus in the form of a palmtop computer 1 has a body 6 supporting a liquid crystal display screen 3 on its upper surface.
- the palmtop 1 may be controlled using a stylus 5 which can be used to input data and instructions by writing on the display screen 3, which may be touch sensitive, as is known.
- a selection button 25 is provided, which can be used to select objects.
- Figure 2 shows a block diagram of the palmtop and base station.
- an accelerometer 7 and a tilt sensor 9 are installed in the palmtop. The palmtop also contains a camera 11 which can capture images of the surroundings of the palmtop, and a transceiver 13 for communicating with a base station 15, in turn connected to a server 17.
- the palmtop contains a CPU 19, a graphics processor 21 capable of carrying out 3-dimensional graphics calculations, and a memory 23.
- the palmtop is a conventional palmtop fitted with a transceiver for remotely connecting to a network; both such components are known and will not be further described.
- the palmtop exists in the real, three-dimensional world.
- the palmtop is used to display information about a virtual information space, namely a set of visually displayable data that is spatially arranged.
- the data may relate to objects in the real world or may relate to a totally imaginary world, containing imaginary objects.
- the properties of objects in the imaginary, virtual world are limited only by the need to calculate the properties of the objects.
- the objects can be calculated to move in ways that correspond to the physical laws in the real world using physics modelling systems, or they may be more abstract.
- Figure 3 is a flow chart illustrating the use of the browser apparatus.
- in the first step 31, information about the position and orientation of the palmtop 1 is obtained from the tilt sensor and accelerometer.
- the accelerometer is a small, commercially available part, an Analog Devices™ ADXL202.
- the accelerometer outputs acceleration information which is twice integrated to obtain position information; the calculated velocity is damped to zero with a time constant of several seconds to avoid errors in the velocity accumulating, as sketched below.
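- a sketch of that double integration with velocity damping; the sample period and the damping time constant are representative values only:

```python
import numpy as np

def dead_reckon(accel, position, velocity, dt=0.01, tau=3.0):
    """Track fine movement by integrating accelerometer output twice.

    The velocity is damped to zero with time constant tau (a few
    seconds) so that integration errors do not accumulate without bound.
    """
    velocity = (velocity + accel * dt) * np.exp(-dt / tau)  # integrate and damp
    position = position + velocity * dt                     # integrate again
    return position, velocity
```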
- the unit is switchable between three modes; it is determined in step 33 which mode is currently selected.
- the modes relate to how the eye position is determined in the next step 35.
- in a first mode, the camera 11 takes a picture (image) of the space in front of the palmtop.
- the recorded image is then analysed to locate the user's eyes; then, from the distance between the eyes and their position in the image, the location of the user's eyes is determined.
- in a second mode, the position of the eyes is inferred: the position is calculated as a fixed arm's-length distance, for example 1 metre, or another suitable distance in the range 0.2 m to 1.2 m, directly in front of the centre of the screen.
- in a third mode, the eye is initially assumed to be in a fixed position in relation to the screen; thereafter, the eye is assumed to remain fixed in space.
- the user may freely select between first, second and third modes as convenient. If the first mode becomes unavailable or fails to determine the eye position, the user may be presented with a choice between the second and third modes only.
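- the three modes might be implemented along the following lines; this is a sketch in which the screen pose is assumed to be a 4x4 real-world transform whose first three columns are the screen's x, y and outward normal axes and whose fourth column is the screen centre, and in which the first mode uses a simple pinhole model with an assumed interocular distance and camera calibration:

```python
import numpy as np

IPD = 0.063        # assumed interocular distance (m)
FOCAL_PX = 500.0   # assumed camera focal length (pixels)
CX, CY = 320, 240  # assumed image centre (pixels)

def eye_position(mode, screen_pose, eyes_px=None, fixed_eye=None,
                 arm_length=1.0):
    """Infer the user's eye position in real-world co-ordinates."""
    if mode == 1:
        # First mode: locate both eyes in the camera image and use the
        # pinhole model, the camera being assumed to sit at the screen
        # centre looking along the outward screen normal.
        (lx, ly), (rx, ry) = eyes_px
        depth = FOCAL_PX * IPD / np.hypot(rx - lx, ry - ly)
        mx, my = (lx + rx) / 2.0 - CX, (ly + ry) / 2.0 - CY
        local = np.array([mx * depth / FOCAL_PX,
                          my * depth / FOCAL_PX, depth, 1.0])
        return (screen_pose @ local)[:3]
    if mode == 2:
        # Second mode: a fixed arm's length along the screen normal,
        # following the screen as it is tilted.
        return (screen_pose @ np.array([0.0, 0.0, arm_length, 1.0]))[:3]
    # Third mode: the eye stays where it was initially assumed to be.
    return fixed_eye
```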
- in step 37 the virtual information space is projected onto the display screen.
- to do this, it is necessary to assume or calculate the relationship between the real and virtual co-ordinate systems, the eye position and the browser display; from these pieces of information, it is possible to calculate the view of the virtual information space that would be seen through the display from the eye position, i.e. using the browser display as a movable window onto the virtual world.
- the projection of the virtual world onto the display uses as the projection point the virtual eye (camera) position. A straight line is drawn from each virtual world position to the eye position and those points that have such projection lines passing through the display are projected onto the position where the line passes through the display. Of course, such points will only be seen on the display if they are not obscured by objects closer to the display.
- Figure 4 illustrates the virtual camera, virtual screen and virtual origin together with the real world screen, eye and origin.
- the transformations between the virtual and real co-ordinate systems are homogeneous co-ordinate transforms.
- mathematically, es⁻¹ = cv(s′)⁻¹, where e is the eye position as a function of time, s the screen position, s′ the virtual screen position, v the virtual origin with respect to the world and c the virtual camera position; all are, in general, functions of time.
- at each time, the relationship between the real and virtual co-ordinate systems is stored in a memory; this information is updated as necessary, depending on the application.
- for example, when the virtual world provides information about the real world it is convenient to use a single fixed mapping between real and virtual worlds - one real world location always corresponds to the same virtual world location. In contrast, for a game application it will be necessary to alter the relationship depending on the position of the viewer in the virtual game world.
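- reading e, s, s′, v and c as 4x4 homogeneous transforms, the relation es⁻¹ = cv(s′)⁻¹ rearranges to c = e s⁻¹ s′ v⁻¹, so the virtual camera pose can be recovered from measured quantities; this reading of the formula, and hence the rearrangement, is an interpretation rather than something stated explicitly in the text:

```python
import numpy as np

def virtual_camera(e, s, s_virt, v):
    """Recover the virtual camera pose c from e s^-1 = c v (s')^-1,
    i.e. c = e s^-1 s' v^-1.  Arguments are 4x4 homogeneous transforms:
    e eye, s screen, s_virt virtual screen, v virtual origin."""
    return e @ np.linalg.inv(s) @ s_virt @ np.linalg.inv(v)
```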
- the co-ordinate system in the virtual world is conventional in 3D graphics systems. Each object location or direction is given by a 4-vector {x,y,z,w}. The first three co-ordinates are the conventional position co-ordinates and w is either 0 or 1: when the 4-vector is a position vector w is 1, whereas when the 4-vector represents a direction not tied in space to a position w is 0.
- the position of objects in the virtual world may be defined by a plurality of 4-vectors {x,y,z,1} giving the positions of vertices or control points of an object.
- Figure 5 illustrates the projection method of the projection step 37. The calculations may be performed either in real or virtual co-ordinates using the stored relationship therebetween to convert one to the other.
- after the eye position is inferred or measured in the real co-ordinate system as described above, and the screen position determined from the measurements of screen position, the next step is to calculate the eye position in the virtual co-ordinate system (the virtual camera in Fig. 4) and the screen position in the virtual co-ordinate system. This is determined from a knowledge of the mapping between real and virtual worlds, which will depend on the application. Then the positions of objects in the virtual world are projected onto the screen: individual 4-vectors of position are projected to produce a "projective point" {x,y,z,w}, which is used in subsequent parts of the graphics pipeline for viewport culling, hidden object tests, etc., as is conventional for such pipelines.
- the projection step is a simple geometric projection step that displays on the screen objects that would be seen from the eye position.
- however, for computational efficiency and to use existing 3D hardware and software, the projection is in fact carried out in quite a complex manner. It should not be forgotten, however, that all the projection step is doing is carrying out a simple geometric projection.
- the co-ordinates are transformed so that the eye is at the origin and the screen is at a unit distance along the z axis.
- This transform may be carried out by multiplying the 4-vectors of position by a matrix T.
- let (ex, ey, ez) be the eye position, (sxx, sxy, sxz) the x direction of the screen, (syx, syy, syz) the y direction of the screen, and (qx, qy, qz) the centre of the screen; the s co-ordinates thus give the orientation of the screen, and the matrix T is built from these vectors.
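- the explicit matrix is not reproduced in this text, so the following is a standard construction consistent with the description (eye at the origin, screen plane at unit distance along the z axis), taking the screen normal as the cross product of the given x and y directions - an assumption:

```python
import numpy as np

def eye_space_transform(e, sx, sy, q):
    """Build a 4x4 matrix T taking world points into eye co-ordinates,
    with the eye at the origin and the screen plane at z = 1.

    e: eye position; sx, sy: the screen's x and y directions;
    q: the centre of the screen.
    """
    e, q = np.asarray(e, float), np.asarray(q, float)
    x, y = np.asarray(sx, float), np.asarray(sy, float)
    x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)
    z = np.cross(x, y)                  # unit screen normal (assumed)
    d = np.dot(q - e, z)                # eye-to-screen distance
    R = np.vstack([x, y, z]) / d        # rotate into screen axes, scale by 1/d
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ e                   # translate the eye to the origin
    return T                            # the screen centre q maps to z = 1
```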
- U transforms x and y co-ordinates so that points at the edge of the viewing window move to the edges of a unit square. It also transforms the z and w co-ordinates so that the w co-ordinate is used in the perspective division process which leaves the z co-ordinate free to take a value between 0 and 1 for hidden surface removal and z-buffering.
- in the projection matrix U, n is the distance to the near focal point, scaled by the distance from the user's eye to the screen, and f is the distance to the far focal point, similarly calculated; b and h are the width (breadth) and height of the screen.
- multiplication by the matrix T followed by the matrix U may be implemented in a single step as multiplication by the matrix UT.
- the 4-vectors giving positions of objects are pre-multiplied by UT to transform to co-ordinates for input to a graphics pipeline.
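- the matrix U is likewise not shown in this text; a standard perspective matrix with the stated properties (window edges mapped to the edges of a square, w carrying the perspective divide, z mapped into [0, 1] between the near and far distances) would look like the following sketch, written for the eye frame produced by T, where the screen lies at z = 1 with width b and height h:

```python
import numpy as np

def projection_matrix(n, f, b, h):
    """A perspective matrix U with the properties the text describes;
    a standard form, not necessarily the patent's exact matrix."""
    return np.array([
        [2.0 / b, 0.0,     0.0,           0.0],
        [0.0,     2.0 / h, 0.0,           0.0],
        [0.0,     0.0,     f / (f - n),   -f * n / (f - n)],
        [0.0,     0.0,     1.0,           0.0],  # w takes the old z for the divide
    ])

# The combined matrix applied to every position 4-vector {x, y, z, 1}:
# UT = projection_matrix(n, f, b, h) @ eye_space_transform(e, sx, sy, q)
```
- after the perspective division by w, x and y lie in [-1, 1] at the window edges, and z runs from 0 at the near distance to 1 at the far distance, as required for hidden surface removal and z-buffering.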
- the remaining graphics processing step may be carried out in a conventional 3D rendering system with or without hardware support.
- the result is a signal capable of driving the display 3.
- in step 37 the signal drives the display to display the projected image of the virtual world.
- Figure 6 illustrates schematically the various component processes of the method and where they take place. The dotted line represents the steps that take place in the browser 1.
- the user inputs mode data which is passed to the viewing mode interface 63.
- Information 65 to calculate the position and orientation of the browser is fed into the viewing mode interface together with information 67 characterising the user's eye position.
- the viewing mode interface process then passes information to the calculation process 69 that calculates the projection matrix P and passes the matrix P in turn to the graphics process 71.
- the user control process 61 also passes information to the settings process 73. This transmits settings to the server. Information regarding the position, orientation and eye position is likewise transmitted to the server from the viewing mode interface 63.
- in the server process 75, which takes place in the server 17, data relating to the virtual world is retrieved and filtered to extract information relevant to the current browser position. This information is then transmitted back to the browser, where it is fed into an internal object database 77 containing information regarding the vertices of the objects, their texture and other information, such as lighting, as required. The information from the database is then fed through the graphics process 71, where the screen image is calculated and fed 79 to the screen of the browser. As can be seen, graphics rendering takes place in the browser apparatus; however, the information about the 3-dimensional world is stored on the server 17 and sent through the base station 15 and transceiver 13 to the palmtop 1.
- notable features present in the device illustrated in Figure 6 include the 3-dimensional rendering process, part of the graphics process 71.
- the invention is not limited to the above specific example.
- Devices according to the invention offer a new way to view a virtual world: the screen may become a window onto the world (or a magnifying glass), with the position and orientation of the screen moved by hand, independently of the user's eye position (unlike stereoscopic glasses), and the contents of the virtual world as seen by the user projected onto the screen according to the user's eye position and the screen position and orientation. The field of view is much smaller than that of a total immersion display.
- a virtual world on a small, hand-held device such as an organiser or mobile phone offers up huge possibilities.
- the projective, magnifying possibilities give a way to easily view and navigate web content on a small screen.
- a complete cyberspace may be set up with geographically relevant content.
- Exhibition halls may provide directions or even guiding avatars in this cyberspace.
- People may leave each other messages; cyber-graffiti only viewable by those you intend to view it, or the whole world if you are more artistically minded.
- the device not only offers passive viewing of a virtual world but also manipulation of that world, a world that becomes even richer if physically simulated. And most obviously, games can be played in the cyberspace.
- the multi-dimensional information space for display on the browser apparatus can be a virtual world containing objects, such as a virtual world used in playing a game.
- the objects may represent fixed elements of the virtual world, such as walls, floors or the like.
- Other objects can represent movable objects such as balls, creatures, furniture, or indeed any object at all that the designer of the virtual world wishes to include.
- the information space can be a more abstract virtual space with a number of information containing objects spatially arranged.
- the information containing objects may be web pages, database pages or the like, or more abstractly arranged information such as a network diagram of a networked computer system, telephone network or similar.
- the browser apparatus may be able to display any spatially arranged information. It is not even necessary that the information is arranged three-dimensionally; a four-, five- or even higher-dimensional space may be provided, though a three-dimensional information space may be much easier to navigate through intuitively.
- the position of the browser apparatus may be detected using any of a number of methods.
- the position detector may include, merely by way of example, an accelerometer.
- the acceleration may be numerically integrated once over time to form velocity information and again to form position information; such integration may be carried out using simple numerical methods. Such an approach amounts to dead reckoning.
- the position and velocity information may suffer from systematic drift, i.e. increases or decreases from the expected value, for example because of inaccuracies in the integration.
- This may be corrected by subtracting a reference velocity or reference acceleration, determined for example from an average velocity and acceleration over an extended period. It may also be possible to make assumptions about typical usage to allow drift correction, for example by damping the velocity to zero with a time constant longer than the time for a typical gesture.
- Alternative position detection methods include using camera or ultrasound systems, for example by triangulating based on fixed reference points.
- Bluetooth, a local communications system, might also be used, by triangulating the position of the browser based on the distance to three transmitters arranged at fixed positions.
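- a sketch of such a computation (strictly, trilateration), assuming distance estimates to three transmitters at known positions; subtracting the sphere equations pairwise gives a linear system, and with only three ranges the solution is determined only up to reflection in the plane of the transmitters, so a fourth range or a tie-break would be needed in practice:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate position from distances to three fixed transmitters.

    anchors: (3, 3) array of transmitter positions; ranges: the three
    measured distances.  Subtracting |x - p_i|^2 = r_i^2 from
    |x - p_0|^2 = r_0^2 gives the linear equations
    2(p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2.
    """
    p = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    # Least-squares solution; it lies in the plane of the transmitters.
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```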
- Coarse position information may be obtained by GPS, radio triangulation or simulation, all of which are known. The skilled person will appreciate that there are many other ways of measuring or estimating the position of the browser apparatus.
- the orientation of the browser apparatus with respect to the real world may be obtained using the same techniques as finding the position of the browser apparatus, for example by finding the position of three different fixed, known points with respect to the browser apparatus.
- the orientation may be measured in the browser apparatus, for example by providing a tilt sensor in the browser apparatus .
- position information from external sources (e.g. GPS, Bluetooth) is intermittent and noisy, so the palmtop must be able to intelligently calculate its current position and orientation using onboard inertial sensors and attitude sensors. It may, however, be possible to use just external sources, depending on the application. It may be useful to combine the large-scale information from external sources with the small-scale information from the onboard sensors.
- Eye position may also be obtained in a number of ways.
- One approach is to infer the eye position based on one of a number of assumptions.
- One assumption is that the eye position is static in the real-world reference frame, which may be an accurate assumption if the user is seated.
- the device can then be tilted and moved relative to the user's head to show different information.
- a second assumption is that the user's head is static in the frame of reference of the mobile device, i.e. always at the same position in front of the screen even as the screen is tilted.
- the browser apparatus may be fixed in either of the above modes or it may be switchable between them.
- the positions of the user's eyes may be measured.
- the user may wear a small transmitter on his head and the direction of the transmitter from the browser apparatus determined.
- the browser apparatus may include a camera which records an image of the user; the eyes, or another portion or the whole of the user's head, are then found by image processing and the location of the eyes thus determined.
- the images in the virtual information space may contain text; an example of such images is web pages. This creates difficulties because the resolution of the text is essentially continuously variable.
- a text rendering scheme to produce such text is accordingly required; one example of an existing scheme is the TexFont package.
- the plane of the image may be fixed relative to the device so that it is face on, whilst the distance from the device may be varied by moving the device, thus achieving intuitive magnification.
- the scale of real and virtual worlds may remain constant but the orientation, and the point in the real world corresponding to the origin of the virtual world, may vary (v no longer fixed).
- Such an application may be suitable in games applications.
- the scale is also changeable, perhaps under user control.
- Such an approach may be particularly suitable for navigating through arbitrary information spaces that are not in the form of a virtual world.
- the view from the eye position on the browser apparatus may be calculated.
- the information used is the position of the user's eye, the position and orientation of the display on the browser apparatus and information about the virtual information space.
- the projection of the virtual world onto a screen may be carried out in a number of ways, for example by calculation in the CPU of a browser apparatus or in a specific graphics processor. This projection approach is well suited to 3D graphics pipelines; dedicated hardware is accordingly available for carrying out the projection. This is highly convenient for implementing a browser apparatus according to the invention in a cost-effective way.
- a 3D graphics pipeline performs calculations to take 3D geometric data and draw on a 2D screen the result of viewing objects, also known as primitives, from a certain viewpoint.
- Figure 7 illustrates the viewpoint, screen and primitives.
- the pipeline transforms objects according to a reference frame hierarchy, lights the primitives according to their surface properties and virtual lights, projects the points from the space onto the plane, culls invisible objects outside the field of view, removes hidden surfaces, and textures the polygons.
- the objects may be manipulated such that movement of the screen moves the picked object.
- a handheld browser apparatus may not be able to hold all the information about the space.
- An appropriate solution is a networked architecture where some information is delivered to the browser apparatus, for example through the mobile telephone network.
- the browser apparatus may accordingly include a mobile telephone transceiver for connecting to the mobile telephone network.
- Other possibilities to connect the browser apparatus to a base station include infra-red ports, wire, radio or the like.
- the camera position may be predicted ahead of time from the current position and motion. This allows faster display of three dimensional information because it allows faster updates.
- the simplest approach is to use dead-reckoning to predict the camera position and display position for the next display updates.
- High-latency data paths, such as loading large datasets from disk or across a network, can then be started before they are required by the display system.
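- a sketch of that prediction; the world-query and loading interfaces shown are hypothetical:

```python
def predict_camera(position, velocity, latency):
    """Dead-reckon the camera position `latency` seconds ahead, so that
    slow loads (disk, network) can start before the view needs them."""
    return position + velocity * latency

def prefetch(world, position, velocity, latency=0.5):
    """Start loading data for the predicted view (hypothetical API)."""
    future = predict_camera(position, velocity, latency)
    for obj in world.objects_near(future):  # hypothetical spatial query
        if not obj.loaded:
            obj.start_async_load()          # hypothetical asynchronous fetch
    return future
```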
- the database in the server 17 may accordingly perform efficient, large-scale geographical culling of information in the virtual space to minimise the bandwidth required between the database and the handheld device.
- the palmtop may manage the information passed to it by the cyber-database intelligently, storing any texture and polygonal information likely to be needed again soon.
- Embodiments of the new system provide advantages over prior art approaches.
- embodiments of the invention permit the user to interact with a number of game components by moving the display, and to explore a wider viewpoint by moving the display.
- embodiments of the invention provide a movable screen, very useful for exploring particular areas of a web page.
- prior augmented reality systems using head-mounted displays have the limitation that the virtual and real worlds remain congruent; embodiments of the invention, however, permit additional linear or angular motion of the virtual display, following a path through the virtual environment.
- Hyperlinks may be selected using the picking procedure described above, i.e. a line-of-sight selection.
- a 3D object may be loaded into the virtual scene. By moving the handheld device and his own viewpoint, the user can explore the entirety of the object as if it were fixed in real space. If the object is large, it may be manipulated itself, without the user having to move around it, and with additional scaling operations if needed.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A browser system (1) is described that can display spatially arranged information on a browser (1) movable by a user, for example a handheld browser such as a PDA or a mobile telephony unit. The browser determines its position and orientation and calculates the view of the virtual world based on the position and orientation of the browser together with the inferred eye position of the user. The relevant view is projected onto the screen of the browser. Subsequent movement of the browser provides scroll and zoom control of the spatially arranged information displayed.
Description
TITLE: BROWSER SYSTEM AND METHOD OF USING IT
DESCRIPTION
The invention relates to a browser system for displaying spatially arranged information, and in particular to a hand held browser system. The invention also relates to a method of using the browser system.
BACKGROUND OF THE INVENTION A wide variety of systems suitable for displaying information from a computer system are known. The most widely available are the conventional cathode ray tube displays of a conventional computer monitor. Smaller computer systems such as laptops and palmtops generally use liquid crystal displays and a number of other forms of display are known.
A particular form of display is a head-up display which is used in virtual reality systems. The intention of
a virtual reality system is to immerse the wearer of the system in a virtual world created in the computer. The head-up display therefore includes a display in the form of goggles or a helmet fixed on the head of a user and displaying an image of a virtual world. The virtual world is a virtual 3 -dimensional space with a number of objects within it which can be seen, viewed and frequently also manipulated by the virtual reality user.
A development of virtual reality is so called augmented reality. Augmented reality combines an experience of reality with additional computer generated virtual reality. Applications may include tele-medicine, architecture, entertainment, repair and construction. Augmented reality differs from virtual reality; the former adds information to the real world whereas the latter is the simulation of a real or imagined environment which can be experienced visually in 3 -dimensions and can provide an interactive experience.
Existing augmented reality systems almost all employ either static displays on traditional computer monitors or projectors or alternatively head mounted virtual-reality type displays. In general, a camera is used together with complex image analysis software which matches the image captured on the camera from the real world with information contained in the virtual world. In general, this requires determining the position of one or more real reference objects in the real world in the field of view of the camera. Knowledge of the position of these reference
objects with respect to the camera is then used to generate the virtual world.
There is a development of such augmented reality systems due to Sony, known as the Navi-Cam system, which reads bar-code information from the real world and displays information relating to the bar code as virtual information on a display. For example, barcodes can be provided on a library shelf keyed to information about the contents of that shelf. The browser may be either a conventional virtual reality display together with a camera or a palmtop together with a camera. When the camera is directed at the bar codes in the real world an image of the real world taken from the camera is combined with data referenced by the bar code . There are other augmented reality systems that combine a small display of information with information captured by a camera from the real world. See for example Feiner, S and Shamash, A "Hybrid user interface: Breeding virtually bigger interfaces for physically small computers", Proc . UIST. 1991 ACM. symposium on user interface software and technology, Hilton Head, November 11 to 13 1991 pages 9 - 17.
Palmtops are widely known small computer systems. Another project includes tilt sensors in a palmtop. This can provide one-handed operation of the palmtop. One application of such a tilt sensor is a game in which the object is to guide a ball through a maze. The maze and the ball are displayed on the screen of the palmtop and the
tilt of the palmtop is used to generate the acceleration for the ball. This game is known as "Maulg II", and is available for download from the Interne .
A number of other display possibilities are known. For example, "City of News" is an immersive 3 -dimensional web browser which represents a dynamically growing urban landscape of information. The browser fetches and displays URLs so as to form virtual skyscrapers and alleys of text and images through which the user can 'fly' . In such a system the control is by conventional computer controls, for example using a mouse.
However, such traditional display systems and methods all have their disadvantages. Display systems that use a head-up display for augmented reality are bulky, unsightly and take time to adjust. Moreover, such systems are currently very expensive. Conversely, systems using conventional CRT displays are not flexible.
SUMMARY OF THE INVENTION According to the invention there is provided a method of displaying to a user portions of a virtual information space having information arranged spatially in a virtual co-ordinate system, the method comprising: providing a browser apparatus having a display and a position detector for determining the position of the browser apparatus, the browser apparatus being movable by the user to different real space positions including different positions relative to the user's eye; determining information characterising the position
and orientation of the browser apparatus in real space and an inferred position of the user's eye; calculating a projected view, from the inferred position of the user's eye, of the spatially arranged information in the virtual information space projected onto the display, depending on the inferred position of the user's eye, the position and orientation of the display and a relationship between real space and virtual reality co-ordinate systems; and displaying on the display of the browser the calculated projected view of the virtual information space; whereby the displayed view of the virtual information space can be changed by moving the browser or changing the relative position or orientation of the browser and the inferred position of the user's eyes.
The browser apparatus is moveable by the user to change the view of the virtual world; conveniently the browser apparatus may be hand-held. The apparatus may be a computer such as a palmtop. In alternative embodiments, the browser apparatus may have mobile telephone connectivity and a display.
The method according to the invention alleviates a limitation of devices, especially low resolution devices, in viewing large amounts of graphical and textual information. By moving the display, further information may be viewed or selected. In other words, movement of the display may be used to provide a scroll and zoom
control of the information displayed.
The method of viewing data allows new ways of exploring and manipulating information. Prior art approaches assume the screen to be fixed in relation to the eye - in prior art systems using goggles the screens are indeed fixed relative to the eye.
By including information relating to eye and browser apparatus position into the model of the virtual reality and allowing different parts of the virtual reality model to be displayed depending on the relative eye and display positions, including the orientation of the device, it may become much easier to view large amounts of data.
The apparatus may have a network connection for connection to a network. The network connection may be, for example, Bluetooth, infrared or a mobile telephone connection. Thus, the apparatus may permit a networked device with a display screen to become a motion sensitive device for browsing virtual spaces in relation to a real physical world. The data may include a plurality of 2 -dimensional
(2D) images arranged in a 3 -dimensional (3D) virtual information space; the 2 -dimensional images may be web pages. In embodiments, the browser apparatus may be moved to change the displayed resolution of the 2 -dimensional image.
Alternatively, the virtual information space may be a virtual 3 -dimensional world including at least one, preferably many, 3 -dimensional objects.
The method may further comprise the steps of controlling at least one navigation parameter by the position and/or orientation of the browser apparatus in space, and navigating through the virtual world by updating the position of the browser apparatus in the virtual world depending on the value of the said at least one navigation parameter.
The step of navigating through the virtual world may update the position of the browser apparatus by reading the velocity of the browser apparatus in the virtual world, updating the velocity depending on the value of the said at least one navigation parameter, updating the position of the browser apparatus using the updated velocity, and storing the updated velocity of the browser apparatus. In this way the appearance of inertia may be provided.
The navigation may be abstract, for example for navigation through a virtual world of web pages. Alternatively, the navigation parameter may be a direct simulation of a control. For example, for a driving simulation the orientation of the browser may determine the position of the steering wheel in the driving simulation and the position of the browser in the virtual world may then be updated using the position of the steering wheel and other parameters characterising the driving simulation.
The eye position may be calculated by making one of a number of assumptions, not limited to those presented
here. One possibility for calculating the eye position is to assume that its position is fixed regardless of the position or orientation of the screen. Alternatively, the eye may be assumed to be in a fixed position in the frame of reference of the screen, i.e. a fixed position in front of the screen, taking into account the orientation of the screen. The position may be taken to be a convenient measure of arm length in front of the screen, for example 0.3m to 1.0m away from the screen, and located on an axis perpendicular to the screen and passing through the centre of the screen.
The browser apparatus may be switchable between each of the above modes for inferring eye position.
Alternatively or additionally the eye position of the user may be measured. This may be done, for example, by fixing a sensor to the head of the user and detecting the position of the sensor, or by including a camera in the browser apparatus and measuring the eye position by recording an image of the user's head, identifying the eyes in the image and calculating the user's eye position therefrom.
The browser apparatus may include a tilt sensor to determine the orientation. Moreover, the browser apparatus may include an accelerometer in order to use dead reckoning to calculate the fine movement of the browser apparatus . Movement on a large scale may be obtained in a number of systems, for example Global Positioning System (GPS) .
The method may also include the step of selecting at least one object in the virtual world, de-coupling the position of the object from the virtual world, and updating the position of the selected object not by using the virtual world model but as a function of the movement of the browser. This may be done by updating the position of the selected object or objects by keeping the selected object or objects fixed with respect to the browser apparatus. Alternatively, the object or objects may be fixed to be along the line of sight from the eye position through the browser apparatus. The position along the line of sight may also be modified, for example by moving the browser along the line of sight or by using additional keys or controls on the browser. In these ways, the selected object or objects may be moved in virtual space by moving the browser apparatus in real space. This allows rearrangement of objects in the virtual 3 -dimensional world by the user.
The apparatus may also be used to author the virtual world, i.e. to create, delete and modify objects, in a similar way.
After the object has been moved to the required position, it may be deselected so that its position is no longer updated based on the movement of the browser apparatus. Instead, the object position can be calculated by the conventional rules determining object positions in the virtual world.
In another aspect there is provided a method of
displaying to a user portions of a virtual information space having objects arranged spatially in a virtual coordinate system, using a browser apparatus movable by the user to different positions in real space, the browser apparatus having a display and a position detector for determining the position of the browser apparatus, the method comprising, displaying an image of the virtual world including at least one object on the browser apparatus, selecting an object in the virtual world, and moving the browser apparatus to move the selected object in the virtual world.
In this way, a convenient and intuitive approach to moving virtual objects can be provided in a handheld system. The system may be a palmtop, PDA or mobile telephone with a display.
In another aspect, the invention relates to a browser apparatus for displaying to a user portions of a virtual information space having information arranged spatially in a virtual co-ordinate system, wherein the browser apparatus is movable by the user to different positions in real space and different positions relative to the eye. The browser apparatus comprises a display, a memory, and a position detector for determining the position of the browser apparatus. The memory contains stored code for: determining the position of the browser apparatus in real space; determining the relationship between an inferred eye
position in real space, the browser apparatus position in real space and the virtual co-ordinate system; calculating a projected view. from the inferred position of the user's eye, of the spatially arranged information in the virtual information space onto the display using the inferred eye position, the browser position and/or orientation and a relationship between the real and virtual co-ordinate systems, and displaying on the display of the browser the calculated portion of the virtual information space.
The browser apparatus may further comprise a transmitter and a receiver for remotely connecting the browser apparatus to a network. The browser apparatus may additionally include a decompression unit for decompressing compressed data received from the network by the receiver, and/or a rendering engine for rendering image data so as to permit so called thin-client rendering .
In another aspect, the invention relates to a network system, comprising a browser apparatus as described above with a transmitter and receiver for networking the browser apparatus, linked to a server network having a store containing data about the virtual world, and a transmitter and receiver for transmitting information between the browser apparatus and the server. The server network may include a filter for selecting information relating to part of the virtual world and transmitting it to the browser.
BRIEF DESCRIPTION OF THE DRAWINGS Specific embodiments of the invention will now be described, purely by way of example, with reference to the accompanying drawings in which: Figure 1 shows a palmtop computer in accordance with the invention,
Figure 2 shows a block diagram of the palmtop computer,
Figure 3 is a flow chart illustrating use of the browser apparatus,
Figure 4 illustrates the co-ordinate systems, Figure 5 is a flow chart of the projection method, Figure 6 is a block diagram illustrating the component processes carried out in the browser, and Figure 7 illustrates a virtual world and a projection screen.
SPECIFIC DESCRIPTION - BEST MODE
Referring to Figure 1, a browser apparatus in the form of a palmtop computer 1 has a body 6 supporting a liquid crystal display screen 3 on its upper surface. The palmtop
1 may be controlled using a stylus 5 which can be used to input data and instructions by writing on the display screen 3 which may be touch sensitive, as is known. A selection button 25 is provided, which can be used to select objects.
Figure 2 shows a block diagram of the palmtop and base station.
An accelerometer 7 and a tilt sensor 9 are installed
into the palmtop. The palmtop also contains a camera 11 which can capture images of the surroundings of the palmtop and a transceiver 13 for communicating with a base station 15, in turn connected to a server 17. The palmtop contains a CPU 19, a graphics processor 21 capable of carrying out 3 dimensional graphic calculations and a memory 23. The palmtop is a conventional palmtop fitted with a transceiver for remotely connecting to a network; both such components are known and will not be further described.
The palmtop exists in the real, three-dimensional world. However, the palmtop is used to display information about a virtual information space, namely a set of visually displayable data that is spatially arranged. As will be discussed later, the data may relate to objects in the real world or may relate to a totally imaginary world, containing imaginary objects. The properties of objects in the imaginary, virtual world are limited only by the need to calculate the properties of the objects. Thus, the objects can be calculated to move in ways that correspond to the physical laws in the real world using physics modelling systems, or they may be more abstract.
Figure 3 is a block diagram illustrating the use of the browser apparatus . In the first step 31, information about the position and orientation of the palmtop 1 is obtained from the tilt sensor and accelerometer . The accelerometer is a small commercially available accelerometer, a One Analog Devices™
ADXL202. The accelerometer outputs acceleration information which is twice integrated to obtain position information. The calculated velocity is damped to zero with a time constant of several seconds to avoid errors in the velocity accumulating.
The unit is switchable between three modes; it is determined in step 33 which mode is currently selected. The modes relate to how the eye position is determined in the next step 35. In a first mode, the camera 11 takes a picture (image) of the space in front of the palmtop . The recorded image is then analysed to locate the users eyes. Then, from the distance between the eyes and their position in the image, the location of the user's eyes is determined. In a second mode, the position of the eyes is inferred. The position is determined by calculating a position 1 metre or other suitable representation of a fixed arm distance, for example in the range 0.2m to 1.2m directly in front of the centre of the screen. In a third mode, the eye is initially assumed to be in a fixed position in relation to the screen. Thereafter, the eye is assumed to remain fixed in space.
The user may freely select between first, second and third modes as convenient. If the first mode becomes unavailable or fails to determine the eye position, the user may be presented with a choice between the second and third modes only.
In step 35 (Fig.3) the virtual information space is
projected onto the display screen. To do this, it is necessary to assume or calculate the relationship between real and virtual co-ordinate systems, the eye position and the browser display. From these pieces of information, it is possible to calculate the view of the virtual information space that would be viewed through the display from the eye position, i.e. using the browser display as a movable window onto the virtual world. The projection of the virtual world onto the display uses as the projection point the virtual eye (camera) position. A straight line is drawn from each virtual world position to the eye position and those points that have such projection lines passing through the display are projected onto the position where the line passes through the display. Of course, such points will only be seen on the display if they are not obscured by objects closer to the display.
Figure 4 illustrates the virtual camera, virtual screen and virtual origin together with the real world screen, eye and origin. The transformations between the virtual and real co-ordinate systems are homogeneous co-ordinate transforms. Mathematically, e s^-1 = c v (s')^-1, where e is the eye position as a function of time, s the screen position, s' the virtual screen position, v the virtual origin with respect to the world and c the virtual camera position. All are, in general, functions of time.
At each time, the relationship between the real and virtual co-ordinate systems is stored in a memory. This information is updated as necessary, depending on the application. For example, when the virtual world provides information about the real world it is convenient to use a single fixed mapping between real and virtual worlds - one real world location always corresponds to the same virtual world location. In contrast, for a game application it will be necessary to alter the relationship depending on the position of the viewer in the virtual game world.
The co-ordinate system in the virtual world is conventional in 3D graphics systems. Each object location or direction is given by a 4-vector {x, y, z, w}. The first three co-ordinates are the conventional position co-ordinates and w is either 0 or 1. When the 4-vector is a position vector, w is 1; when the 4-vector represents a direction not tied to a position in space, w is 0. The position of objects in the virtual world may be defined by a plurality of 4-vectors {x, y, z, 1} giving the positions of vertices or control points of an object.
Figure 5 illustrates the projection method of the projection step 37. The calculations may be performed either in real or virtual co-ordinates, using the stored relationship between them to convert one to the other.
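The w = 0 / w = 1 convention can be checked with a one-line experiment: a translation moves position vectors but leaves direction vectors unchanged. A small illustrative snippet, not from the patent:

```python
import numpy as np

T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]           # a pure translation

p = np.array([0.0, 0.0, 0.0, 1.0])   # position vector, w = 1
d = np.array([0.0, 0.0, 1.0, 0.0])   # direction vector, w = 0

print(T @ p)   # [1. 2. 3. 1.] - the position is translated
print(T @ d)   # [0. 0. 1. 0.] - the direction is unaffected
```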
After the eye position is inferred or measured in the real co-ordinate system as described above, and the screen position is determined from the measurements of screen position, the next step is to calculate the eye position in the virtual co-ordinate system (the virtual camera in Fig. 4) and the screen position in the virtual co-ordinate system. This is determined from a knowledge of the mapping between the real and virtual worlds, which will depend on the application. Then the positions of objects in the virtual world are projected onto the screen. This is done by projecting individual 4-vectors of position to produce a "projective point" {x, y, z, w} which is used for subsequent processing in a graphics pipeline in a conventional way, for example for viewport culling, hidden object tests and the like.
The projection step is, conceptually, a simple geometric projection that displays on the screen the objects that would be seen from the eye position. However, for computational efficiency and to use existing 3D hardware and software, the projection is in practice carried out in a rather more complex manner. It should not however be forgotten that all the projection step is doing is carrying out a simple geometric projection.
The co-ordinates are transformed so that the eye is at the origin and the screen is at a unit distance along the z axis. This transform may be carried out by multiplying the 4-vectors of position by a matrix T. Let (ex, ey, ez) be the eye position, (sxx, sxy, sxz) the x direction of the screen, (syx, syy, syz) the y direction of the screen and (qx, qy, qz) the centre of the screen. Thus the s co-ordinates give the orientation of the screen. Then, the matrix T is given by
T. (ex, ey, ez, 1) = (0, 0, 0, 1)
T. (qx, qy, qz, 1) = (0, 0, 1, 1)
T. (sxx, sxy, sxz, 0) = (1, 0, 0, 0)
T. (syx, syy, syz, 0) = (0, 1, 0, 0)
This set of equations can be inverted to find the 16 components of T, either algebraically or numerically, although given the special nature of the matrix problem (all the 1s and 0s), the algebraic route is best.
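Numerically, the four vector equations can be solved in one step: stacking the four input 4-vectors as the columns of a matrix M and the four required images as the columns of a matrix N gives T M = N, so T = N M^-1. A sketch under these assumptions (numpy; the function name is illustrative):

```python
import numpy as np

def eye_screen_transform(e, q, sx, sy):
    """Solve for T from T.e = (0,0,0,1), T.q = (0,0,1,1),
    T.sx = (1,0,0,0) and T.sy = (0,1,0,0).

    e: eye position, q: screen centre (3-vectors, homogenised with w=1)
    sx, sy: screen x and y directions (homogenised with w=0)
    """
    M = np.column_stack([np.append(e, 1.0),
                         np.append(q, 1.0),
                         np.append(sx, 0.0),
                         np.append(sy, 0.0)])
    # Columns of N are the required images of the columns of M.
    N = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 1, 0, 0],
                  [1, 1, 0, 0]], dtype=float)
    return N @ np.linalg.inv(M)
```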
If all that were required was to generate the screen position, it would now be a simple matter to calculate the point on the screen corresponding to the object. By inspection of Figure 5, a point at (sx', sy', sz') after transformation by the matrix T would be projected onto the screen at (sx'/sz', sy'/sz'). However, in practice the display will be carried out by a conventional graphics pipeline. In this case the preparation of the object 4-vectors of position for feeding to the graphics pipeline can be thought of as taking place in two steps. Firstly, the co-ordinates are transformed by the matrix T as above so that the eye position is at the origin and the screen passes through (0, 0, 1) parallel to the xy plane. Then, a further transformation U is carried out to calculate 4-vectors in a conventional form taking perspective into account. This conventional form is required as the input to conventional graphics pipelines. For completeness, the calculation of the 4-vectors of position in this format will now be presented.
U transforms the x and y co-ordinates so that points at the edge of the viewing window move to the edges of a unit square. It also transforms the z and w co-ordinates so that the w co-ordinate is used in the perspective division process, which leaves the z co-ordinate free to take a value between 0 and 1 for hidden surface removal and z-buffering. The projection matrix U is given in terms of the following quantities.
Here n is the distance to the near focal point, scaled by the distance from the user's eye to the screen, and f is the distance to the far focal point, similarly scaled; b and h are the width (breadth) and height of the screen.
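The matrix itself is reproduced in the original only as a figure. A conventional perspective matrix consistent with the description - window edges mapping to unit values after the perspective divide, z mapping to the range 0 to 1 between the near and far planes, and w carrying the depth for the divide - would be, as a hedged sketch:

```python
import numpy as np

def projection_matrix(n, f, b, h):
    """A conventional perspective projection matrix for the quantities
    described in the text: n, f near and far distances (scaled as
    described), b, h the screen width and height.

    After division by w, x and y at the window edges map to unit
    values, and z maps to 0 at the near plane and 1 at the far plane.
    """
    return np.array([
        [2*n/b, 0.0,   0.0,         0.0],
        [0.0,   2*n/h, 0.0,         0.0],
        [0.0,   0.0,   f/(f - n),  -f*n/(f - n)],
        [0.0,   0.0,   1.0,         0.0],   # w takes the transformed depth
    ])
```

A position p is then prepared for the pipeline as (UT).(px, py, pz, 1); after the perspective divide by w this reproduces the (sx'/sz', sy'/sz') projection described above.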
For convenience, multiplication by the matrix T followed by the matrix U may be implemented in a single step as multiplication by the matrix UT. The 4-vectors giving positions of objects are pre-multiplied by UT to transform to co-ordinates for input to a graphics pipeline.
The remaining graphics processing step may be carried out in a conventional 3D rendering system with or without hardware support. The result is a signal capable of driving the display 3.
In the final step (Fig. 3) the signal drives the display to show the projected image of the virtual world.
Figure 6 illustrates schematically the various component processes of the method and where they take place. The dotted line represents the steps that take place in the browser 1. In step 61, the user inputs mode data which is passed to the viewing mode interface 63. Information 65 to calculate the position and orientation of the browser is fed into the viewing mode interface together with information 67 characterising the user's eye position. The viewing mode interface process then passes information to the calculation process 69 that calculates the projection matrix P (in the notation above, the combined matrix UT) and passes the matrix P in turn to the graphics process 71.
The user control process 61 also passes information to the settings process 73. This transmits settings to the server. Information regarding the position, orientation and eye position is likewise transmitted to the server from the viewing mode interface 63.
In the server process 75, which takes place in the server 17, data relating to the virtual world is retrieved and filtered to extract information relevant to the current browser position. This information is then transmitted back to the browser where it is fed into an internal object database 77 containing information regarding the vertices of the objects, their texture and other information such as lighting as required. The information from the database is then fed through the graphics process 71 where the screen image is calculated and fed 79 to the screen of the browser.
As can be seen, graphics rendering takes place in the browser apparatus. However, the information about the 3-dimensional world is stored on the server 17 and sent through the base station 15 and transceiver 13 to the palmtop 1.
Notable features of the device illustrated in Figure 6 include the 3-dimensional rendering process, part of the graphics process 71. There is a graphics pipeline to perform projection and model transformations on vertices. Clipping, viewport culling, hidden line removal, fragment formation and pixelization may be provided in the pipeline. Texturing, operations on fragments and text rendering would ideally be present too, with the attendant bitmap memory requirements. Much of this is possible in hardware using 3-dimensional (3D) graphics cards.
The invention is not limited to the above specific example.
Viewing of a virtual world has hitherto been limited to two modes of viewing: immersion using stereoscopic glasses (e.g. Virtuality) and projection onto a static computer screen (e.g. Quake). The former requires extremely high resolution graphics with fast refresh rates and the concomitant bulky, expensive hardware. Complete immersion can also be disorienting, although using a partially transparent, head-up-display type display may alleviate this where appropriate. Projection onto a static screen reduces the illusion of 3D immersion, so such systems are often referred to as "2.5D". The interface to navigate this 2.5D world is usually complicated and non-intuitive. For example, in the well known computer game "Quake" a large number of key-strokes and mouse-key combinations are needed to control the game character - this requires significant effort and commitment from the user to learn the strokes.
Devices according to the invention offer a new way to view a virtual world: the screen may become a window onto the world (or a magnifying glass), with the position and orientation of the screen moved by hand, independently of the user's eye position, unlike stereoscopic glasses, and the contents of the virtual world as seen by the user projected onto the screen according to the user's eye position and the screen position and orientation. Since the field of view is much smaller than that of a total immersion (virtual reality) headset, the graphical requirements are not so severe, while the analogue, intuitive interface provided by movement of the viewing screen gives a very good feeling of there being a full, 3-dimensional world around the browser apparatus.
Viewing a virtual world on a small, hand-held device such as an organiser or mobile phone opens up huge possibilities. Firstly, the projective, magnifying possibilities give an easy way to view and navigate web content on a small screen. Secondly, a complete cyberspace may be set up with geographically relevant content. Exhibition halls may provide directions or even guiding avatars in this cyberspace. People may leave each other messages: cyber-graffiti viewable only by those you intend to view it, or by the whole world if you are more artistically minded. The device offers not only passive viewing of a virtual world but also manipulation of that world, a world that becomes even richer if physically simulated. And most obviously, games can be played in the cyberspace.
The multi-dimensional information space for display on the browser apparatus can be a virtual world containing objects, such as a virtual world used in playing a game. The objects may represent fixed elements of the virtual world, such as walls, floors or the like. Other objects can represent movable objects such as balls, creatures, furniture, or indeed any object at all that the designer of the virtual world wishes to include. Alternatively, the information space can be a more abstract virtual space with a number of information-containing objects spatially arranged. The information-containing objects may be web pages, database pages or the like, or more abstractly arranged information such as a network diagram of a networked computer system, telephone network or similar. Indeed, the browser apparatus may be able to display any spatially arranged information. It is not even necessary that the information is arranged three-dimensionally; a four, five or even higher dimensional space may be provided, though a three-dimensional information space may be much easier to navigate intuitively.
The position of the browser apparatus may be detected using any of a number of methods. Accordingly, the position detector may include, merely by way of example, an accelerometer. The acceleration may be numerically integrated once over time to form velocity information and again to form position information; such integration may be carried out using simple numerical methods. This approach amounts to dead reckoning.
Over extended periods of time the position and velocity information may suffer from systematic drift, i.e. increases or decreases from the expected value, for example because of inaccuracies in the integration. This may be corrected by subtracting a reference velocity or reference acceleration, determined for example from an average velocity and acceleration over an extended period. It may also be possible to make assumptions about typical usage to allow drift correction, for example by damping the velocity to zero with a time constant longer than the time for a typical gesture.
Alternative position detection methods include using camera or ultrasound systems, for example by triangulating based on fixed reference points. Bluetooth, a local communications system, might also be used, by triangulating the position of the browser based on the distances to three transmitters arranged at fixed positions. Coarse position information may be obtained by GPS, radio triangulation or estimation, all of which are known. The skilled person will appreciate that there are many other ways of measuring or estimating the position of the browser apparatus.
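Triangulating from three fixed transmitters reduces to classic trilateration from three measured ranges. A sketch under stated assumptions (numpy; non-collinear transmitter positions p1..p3; the 3D answer is ambiguous up to reflection in the transmitter plane, which a fourth range or a plausibility check would resolve):

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Position from ranges r1..r3 to fixed points p1..p3 (3-vectors)."""
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)   # local x axis
    i = ex @ (p3 - p1)
    ey = (p3 - p1) - i * ex                    # local y axis
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)                      # local z axis
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z2 = r1**2 - x**2 - y**2
    z = np.sqrt(max(z2, 0.0))                  # clamp small negatives from range noise
    return p1 + x * ex + y * ey + z * ez       # or - z * ez for the mirror solution
```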
The orientation of the browser apparatus with respect to the real world may be obtained using the same techniques as finding the position of the browser apparatus, for example by finding the positions of three different fixed, known points with respect to the browser apparatus. Alternatively, the orientation may be measured in the browser apparatus, for example by providing a tilt sensor in the browser apparatus. In situations where position information from external sources (e.g. GPS, Bluetooth) is intermittent and noisy, the palmtop must be able to calculate its current position and orientation intelligently using onboard inertial and attitude sensors. It may however be possible to use just external sources, depending on the application. It may also be useful to combine the large-scale information from external sources with the small-scale information from the onboard sensors.
Eye position may also be obtained in a number of ways. One approach is to infer the eye position based on one of a number of assumptions. One assumption is that the eye position is static in the real-world reference frame, which may be accurate if the user is seated. The device can then be tilted and moved relative to the user's head to show different information. A second assumption is that the user's head is static in the frame of reference of the mobile device, i.e. always at the same position in front of the screen even as the screen is tilted.
The browser apparatus may be fixed in either of the above modes or it may be switchable between them.
Alternatively or additionally, the position of the user's eyes may be measured. For example, the user may wear a small transmitter on his head and the direction of the transmitter from the browser apparatus determined. Alternatively, the browser apparatus may include a camera: the camera records an image of the user, the eyes or other portion or the whole of the head of the user is located by image processing, and the location of the eyes is thus determined.
The images in the virtual information space may contain text; an example of such images is web pages. This creates difficulties because the resolution of the text is essentially continuously variable. A text rendering scheme to produce such text is accordingly required; one example of an existing scheme is the TexFont package.
To facilitate the viewing of text and images, the plane of the image may be fixed relative to the device so that it is face on, whilst the distance from the device may be varied by moving the device, thus achieving intuitive magnification.
To calculate the portion of the information space that is to be displayed on the browser apparatus a mapping between the virtual information space and the real world is required.
Depending on the application, the mapping may be constant, with the scales and origins of the real and virtual world co-ordinate systems fixed with respect to one another, i.e. v = 0, c = e and s' = s. Such an approach may be appropriate where the virtual world provides information about elements of the real world and accordingly needs to remain in register with it.
Alternatively, the scale of the real and virtual worlds may remain constant but the orientation and the point in the real world corresponding to the origin of the virtual world may vary (v no longer fixed). Such an arrangement may be suitable in games applications.
A further possibility is that the scale is also changeable, perhaps under user control. Such an approach may be particularly suitable for navigating through arbitrary information spaces that are not in the form of a virtual world.
In prior art head mounted systems the eye was effectively fixed relative to the system, whereas in a monitor based system the monitor is effectively fixed in position in real space. In the invention, these constraints may be relaxed.
In any event, once the mapping between the real world and the virtual information space is known the view from the eye position on the browser apparatus may be calculated. The information used is the position of the user's eye, the position and orientation of the display on the browser apparatus and information about the virtual information space.
The projection of the virtual world onto a screen may be carried out in a number of ways, for example by calculation in the CPU of the browser apparatus or in a specific graphics processor. This projection approach is well suited to 3D graphics pipelines; dedicated hardware is accordingly available for carrying out the projection. This is highly convenient for implementing a browser apparatus according to the invention in a cost-effective way.
A 3D graphics pipeline performs calculations to take 3D geometric data and draw on a 2D screen the result of viewing objects, also known as primitives, from a certain viewpoint. Figure 7 illustrates the viewpoint, screen and primitives. Of course, in prior art approaches the use of the projection did not take into account the actual eye position of the user; rather, the viewpoint was a virtual viewpoint existing in the virtual world, and the screen simply a window onto the virtual world seen from the virtual viewpoint. The calculation transforms objects according to a reference frame hierarchy, lights the primitives according to their surface properties and virtual lights, projects the points from the space onto the plane, culls invisible objects outside the field of view, removes hidden surfaces, and textures the polygons.
Traditional graphics pipelines have a projection stage, and 3D graphics cards have hardware to perform the projection efficiently, so it is beneficial to put this part, if not all, of the graphics pipeline into the browser device to avoid overloading the CPU with floating point divisions.
Finally, the information contained in the virtual world as calculated by the projection step is simply displayed on the screen 3.
It is possible to implement selecting and moving an object on the 2D screen by using a cursor on the screen. The objects may be manipulated such that movement of the screen moves the picked object.
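One way to realise such picking is to cast a ray from the eye position through the cursor's point on the screen (both in world coordinates) and test it against the bounding volumes of the objects. The sketch below is illustrative rather than the patent's method; bounding spheres stand in for whatever volumes the object database actually uses:

```python
import numpy as np

def pick_ray(eye, cursor_world):
    """Ray from the eye through the cursor's position on the screen."""
    d = cursor_world - eye
    return eye, d / np.linalg.norm(d)

def hit_sphere(origin, direction, centre, radius):
    """Smallest positive ray parameter t at which the ray hits a
    bounding sphere, or None if it misses (direction must be unit)."""
    oc = origin - centre
    b = 2.0 * (direction @ oc)
    c = oc @ oc - radius**2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # ray misses the sphere
    t1 = (-b - np.sqrt(disc)) / 2.0       # near intersection
    t2 = (-b + np.sqrt(disc)) / 2.0       # far intersection
    if t1 > 0.0:
        return t1
    return t2 if t2 > 0.0 else None
```

The picked object is the one with the smallest positive t; keeping it at a fixed offset from the browser position then gives the described behaviour of moving the object by moving the screen.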
If the 3D information space is a large one then a handheld browser apparatus may not be able to hold all the information about the space. An appropriate solution is a networked architecture where some information is delivered to the browser apparatus, for example through the mobile telephone network. The browser apparatus may accordingly include a mobile telephone transceiver for connecting to the mobile telephone network. Other possibilities to connect the browser apparatus to a base station include infra-red ports, wire, radio or the like.
In order to achieve fast updates of computer generated 3D images, conventional 3D systems use many algorithmic techniques for filtering out extraneous information that will not affect the final image. These methods use information about the current position and orientation. Only the required information is paged into memory.
Such systems are also suitable for incorporation into the browser apparatus. The camera position may be predicted ahead of time from the current position and motion. This allows faster display of three dimensional information because it allows faster updates. The simplest approach is to use dead reckoning to predict the camera position and display position for the next display updates. High latency data paths, such as loading large datasets from disks or across a network, can then be started before they are required by the display system.
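The dead-reckoned prediction itself is a one-liner; a hypothetical sketch (the half-second default look-ahead is an assumption, tuned to the latency of the data path being hidden):

```python
import numpy as np

def predict_camera(position, velocity, lookahead=0.5):
    """Dead-reckoned camera position `lookahead` seconds in the future,
    used to start slow fetches (disk, network) before they are needed."""
    return np.asarray(position) + np.asarray(velocity) * lookahead
```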
The database in the server 17 may accordingly perform efficient, large scale geographical culling of information in the virtual space to minimise the bandwidth required between the database and the handheld device.
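Server-side, such geographical culling might look like the following sketch; the object record layout and the fetch radius are assumptions for illustration, not part of the patent:

```python
import numpy as np

def cull_region(objects, centre, radius):
    """Keep only objects whose bounding sphere intersects a sphere of
    `radius` around the (predicted) browser position in the virtual world.

    objects: iterable of dicts with 'centre' (3-vector) and 'bound_radius'
    """
    centre = np.asarray(centre)
    return [obj for obj in objects
            if np.linalg.norm(np.asarray(obj["centre"]) - centre)
               <= radius + obj["bound_radius"]]
```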
The palmtop may manage the information passed to it by the cyber-database intelligently, storing any texture and polygonal information likely to be needed again soon.
Embodiments of the new system provide advantages over prior art approaches.
The prior art approach to games playing on a static system gave a limited view of a virtual world, especially on small, low resolution handheld devices. In contrast, embodiments of the invention permit the user to interact with a number of game components and to explore a wider viewpoint, both by moving the display.
When viewing virtual web pages using a head up display, prior art systems have required additional equipment such as a data glove to manipulate the page. In contrast, embodiments of the invention provide a movable screen that is very useful for exploring particular areas of the web page.
Prior augmented reality systems using head mounted displays have the limitation that the virtual and real worlds remain congruent. However, embodiments of the invention permit additional linear or angular motion of the virtual display, following a path through the virtual environment.
On a small screen it is difficult to read and appreciate a typical World Wide Web page using traditional techniques. In traditional methods, the web page is rendered in relation to the actual display screen. If the web page is larger than the display the user can use "scroll bars" to move the view onto a different section of the web page. This method of interaction is not appropriate for small handheld screens because the screen resolution is much smaller than the web page. However, by treating the web page as a flat object within 3D space it is possible to see the web page in its entirety at low resolution or focus in on an area of interest to enlarge small text and graphics. For legibility, it is useful to maintain alignment between the web page and the plane of the virtual screen as previously described.
Using the invention presented here it is possible to explore the web page by physically moving the display. As previously discussed, the measurement of the motion affects the spatial relationships between the virtual camera, the screen and the virtual object. Using the viewing modes previously described, by moving the handheld screen closer or further away, it is possible to see either the web page in its entirety or to focus in on a specific area of graphical or textual information.
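The magnification behaviour follows from similar triangles: with the eye a distance E in front of the display and the page a distance D behind it, a feature of size S on the page appears at size S·E/(E+D) on the display, growing towards full size as the display is moved towards the page. A small illustrative sketch (the function name and the simple flat-page model are assumptions):

```python
def projected_size(feature_size, eye_to_screen, screen_to_page):
    """Apparent size on the display of a feature on a flat page held
    in the virtual space behind the screen (similar triangles)."""
    return feature_size * eye_to_screen / (eye_to_screen + screen_to_page)

# Moving the display towards the page enlarges the text it shows:
print(projected_size(0.01, 0.5, 1.0))   # page far behind the screen
print(projected_size(0.01, 0.5, 0.1))   # screen moved close to the page
```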
Hyperlinks may be selected using the picking procedure described above, i.e. a line-of-sight selection.
A 3D object may be loaded into the virtual scene. By moving the handheld device and his own viewpoint, the user can explore the entirety of the object as if it were fixed in real space. If the object is large, it may be manipulated itself, with additional scaling operations if needed, without the user having to move around it.
Since virtual objects may be manipulated using the system, it is possible to composite a scene consisting of more than one object. Typical authoring functions such as changing positions, scale, applying surface colours, textures and features, modifying shapes, creating and destroying shapes may be presented as selectable icons within the virtual space or as part of a more traditional 2D interface overlaid onto the 3D world.
The invention has now been described in detail for purposes of clarity of understanding. However, it will be appreciated that certain changes and modifications may be made. Thus, the scope and content of the invention should be determined in the light of the claims set forth below as well as the full range of equivalents to which those claims are entitled.
Claims
1. A method of displaying to a user portions of a virtual information space having information arranged spatially in a virtual co-ordinate system, the method comprising providing a browser apparatus having a display and a position detector for determining the position of the browser apparatus, the browser apparatus being movable by the user to different real space positions including different positions relative to a real or implied position of a user's eye, determining information characterising at least one of the position and orientation of the browser apparatus in real space, projecting the virtual information space onto the display using at least one of the inferred eye position, the browser position and orientation and a relationship between the real and virtual co-ordinate systems, and displaying on the display of the browser the calculated portion of the virtual information space.
2. A method according to claim 1 wherein the virtual information space includes a plurality of 2-dimensional images arranged in a 3-dimensional space.
3. A method according to claim 2 wherein the 2-dimensional images are web pages.
4. A method according to claim 2 wherein the browser apparatus is moved to change the displayed resolution of the 2-dimensional image.
5. A method according to claim 1 wherein the virtual information space is a virtual 3-dimensional world including at least one 3-dimensional object.
6. A method according to claim 5 further comprising the steps of selecting at least one object, and updating the position of the selected at least one object by determining the position of the browser apparatus and keeping the position of the selected at least one object in a predetermined functional relationship to the position of the browser apparatus so that the object can be moved in the virtual world by moving the browser apparatus.
7. A method according to claim 6 further including the step of deselecting the object to cease updating the position of the object based on the movement of the browser apparatus.
8. A method according to claim 1 in which the eye position is inferred to be at a fixed distance in front of the centre of the screen along an axis perpendicular to the screen.
9. A method according to claim 1 in which the eye position is inferred to be fixed.
10. A method according to claim 1 including the step of switching between a first mode in which the eye position is inferred to be at a fixed distance in front of the centre of the screen along an axis perpendicular to the screen, and a second mode in which the eye position is inferred to be at a fixed position.
11. A method according to claim 1 including the step of measuring the eye position.
12. A method according to claim 11 wherein the browser apparatus includes a camera and the step of measuring the eye position includes recording an image of the user's head on the camera, identifying the user's eyes in the image, and calculating the user's eye position.
13. A method according to claim 1 wherein the browser apparatus includes an accelerometer, and the step of determining the position of the browser apparatus uses, at least in part, dead reckoning based on the output of the accelerometer over time.
14. A method according to claim 5 wherein the objects relate to real world objects, the virtual information system providing additional information about the real world objects.
15. A method according to claim 5 wherein the virtual world is independent of the real world.
16. A method according to claim 1 further comprising the steps of controlling at least one navigation parameter by the position and/or orientation of the browser apparatus in space, and navigating through the virtual world by updating the position of the browser apparatus in the virtual world depending on the value of the said at least one navigation parameter.
17. A method according to claim 16 wherein the step of navigating through the virtual world updates the position of the browser apparatus by reading the velocity of the browser apparatus in the virtual world, updating the velocity depending on the value of the said at least one navigation parameter, updating the position of the browser apparatus using the updated velocity, and storing the updated velocity of the browser apparatus.
18. A method according to claim 5 further comprising the step of adding content to the virtual world under the control of the browser apparatus.
19. A method according to claim 1 including the step of obtaining information about at least part of the virtual information space from a remote server.
20. A method of displaying to a user portions of a virtual information space having objects arranged spatially in a virtual co-ordinate system, using a browser apparatus movable by the user to different positions in real space, the browser apparatus having a display and a position detector for determining the position of the browser apparatus, the method comprising displaying an image of the virtual world including at least one object on the browser apparatus, selecting an object in the virtual world, and moving the browser apparatus to move or change the selected object in the virtual world.
21. A browser apparatus for displaying to a user portions of a virtual information space having information arranged spatially in a virtual co-ordinate system, comprising a display, a memory, a position detector for determining the position of the browser apparatus, code stored in the memory for determining the position of the browser apparatus in real space, code stored in the memory for determining the relationship between an inferred eye position in real space, the browser apparatus position in real space and the virtual co-ordinate system, code stored in the memory for projecting the virtual information space onto the display using the inferred eye position and the browser position, and code for displaying on the display of the browser the calculated portion of the virtual information space, wherein the browser apparatus is movable by the user to different positions in real space and different positions relative to the eye.
22. A browser apparatus according to claim 21 further comprising a transmitter and a receiver for remotely connecting the browser apparatus to a network.
23. A browser apparatus according to claim 22 further comprising a decompression unit for decompressing compressed data received from the network by the receiver.
24. A browser apparatus according to claim 21 further comprising a rendering engine for rendering image data.
25. A network system, comprising a browser apparatus movable by a user to different positions in real space and different positions relative to the user's eye, the browser apparatus having a display, a memory, a position detector for determining the position of the browser apparatus, a transmitter and receiver for networking the browser apparatus, code stored in the memory for determining the position of the browser apparatus in real space, code stored in the memory for determining the relationship between an inferred eye position in real space, the browser apparatus position in real space and the virtual co-ordinate system, code stored in the memory for projecting virtual information space onto the display using the inferred eye position and the browser position, and code for displaying on the display of the browser the calculated portion of the virtual information space; the network system further comprising a server network having a store containing data about the virtual world, and a transmitter and receiver for transmitting information between the browser apparatus and the server.
26. A network system according to claim 25, wherein the server network further comprises a filter in the server for selecting information relating to part of the virtual world and transmitting it to the browser.
27. A method of displaying on a browser apparatus movable by a user, portions of a virtual information space having information arranged spatially in a virtual co-ordinate system, the method comprising, determining the position and orientation of the browser apparatus, inferring the position of the user's eye, and displaying on the display a projected view, from the inferred position of the user's eye, of the spatially arranged information in the virtual information space.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
GBGB0011455.3A (GB0011455D0) | 2000-05-13 | 2000-05-13 | Browser system and method for using it
GB0011455 | 2000-05-13 | |
PCT/GB2001/002066 (WO2001088679A2) | | 2001-05-11 | Browser system and method of using it
Publications (1)
Publication Number | Publication Date
---|---
EP1374019A2 (en) | 2004-01-02
Family: ID=9891451
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
EP01929798A (EP1374019A2, withdrawn) | Browser system and method of using it | | 2001-05-11
Country Status (5)
Country | Link
---|---
EP (1) | EP1374019A2 (en)
JP (1) | JP2003533815A (en)
AU (1) | AU2001256479A1 (en)
GB (1) | GB0011455D0 (en)
WO (1) | WO2001088679A2 (en)
- 2000-05-13: Priority application GBGB0011455.3A (GB0011455D0) filed in GB; not active (ceased)
- 2001-05-11: Application JP2001585009A (JP2003533815A) filed in JP; pending
- 2001-05-11: Application PCT/GB2001/002066 (WO2001088679A2) filed; application discontinued
- 2001-05-11: Application AU2001256479A filed in AU; abandoned
- 2001-05-11: Application EP01929798A (EP1374019A2) filed in EP; withdrawn
Also Published As
Publication number | Publication date
---|---
AU2001256479A1 (en) | 2001-11-26
GB0011455D0 (en) | 2000-06-28
JP2003533815A (en) | 2003-11-11
WO2001088679A2 (en) | 2001-11-22
WO2001088679A3 (en) | 2003-10-09
Legal Events
Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
20021125 | 17P | Request for examination filed |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR
| AX | Request for extension of the European patent | Extension state: AL LT LV MK RO SI
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
20041201 | 18D | Application deemed to be withdrawn |