
WO2001063560A1 - 3d game avatar using physical characteristics - Google Patents


Info

Publication number
WO2001063560A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
dimensional
head
virtual
data
Application number
PCT/GB2001/000770
Other languages
French (fr)
Inventor
Gary Clive Bracey
Keith Michael Goss
Yanna Nikolova Goss
Original Assignee
Digimask Limited
Application filed by Digimask Limited filed Critical Digimask Limited
Priority to AU2001233947A1
Publication of WO2001063560A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/57Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network
    • A63F2300/407Data transfer via internet
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/66Methods for processing data by generating or executing the game program for rendering three dimensional images
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/69Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F2300/695Imported photos, e.g. of the player

Definitions

  • This invention relates to 3-dimensional object creation and the application or use of such objects in virtual environments. More particularly, the invention relates to the provision of a system which enables the creation of personalised 3-dimensional human forms from 2-dimensional images of the subject human; and the application or use of such human forms in a variety of virtual environments adapted to receive them and to make use of such forms as an integral part of the environment.
  • Such environments could comprise electronic games or computer network sites but are not necessarily restricted thereto.
  • Electronic or virtual environments, such as games or computer network (internet) sites, provide for interaction between characters or situations pre-programmed into the environment, and external players or users.
  • the graphic content of such environments is typically fixed in accordance with the rigid parameters laid down by the developers of such environments.
  • the invention provides a system for creating and making use of 3-dimensional human body features in virtual environments, said system including:
  • a subject interface configured to receive data of the human subject from which said 3-dimensional human body features can be created
  • a creation process communicable with said subject interface and being configured and operable to create virtual 3-dimensional human body representations from data received by said subject interface;
  • an environment interface configured and operable to permit the creation or adaption of virtual environments to integrate therein, 3-dimensional human body representations created by said creation process.
  • said system is configured for the creation and use of virtual 3-dimensional human head representations in virtual environments.
  • Said subject interface is typically a software item which may be distributed conventionally, such as by floppy disc or CD, or may be downloaded over the internet. More preferably said subject interface may be downloaded from a location at which said creation process is resident.
  • said system is configured to provide for data collected by said subject interface to be passed to said creation process via the internet.
  • said subject interface is configured to receive a combination of 2-dimensional visual images of a human subject together with text data relating to said human subject.
  • said subject interface includes prompting means operable to prompt a human subject to input specified data.
  • the specified data includes 2-dimensional digital photographic data and may also include other data relating to that subject such as: name; nickname; e-mail address; age; height; weight; gender; and ethnic appearance.
  • if gender is specified, the data may further include gender specific data.
  • said prompting means is operable to prompt a subject to input digital photographic images of the front and at least one profile of the subject's head.
  • Said prompting means may be further operable to prompt said subject to establish particular feature points on said photographic images.
  • said subject interface may present an inputted photographic image alongside a corresponding generic image and, on said generic image, highlight feature points for which said subject should identify corresponding points on said inputted photographic image.
  • said subject interface is configured to receive photographic images created using readily available digital cameras or scanned from conventional photographic film images.
  • said prompting means is further operable to prompt a subject to select a generic head form which corresponds most in shape and features to said subject's own head.
  • said prompting means is operable to present particular generic head options in response to data provided by a subject as to gender and ethnic appearance.
  • Said environment interface may be incorporated in an electronic game.
  • said environment interface may be incorporated in an environment in the form of a computer network or internet site, wherein personalised 3-dimensional human representations may be imported into said site to participate in activity pre-programmed into said site.
  • said site is adapted to allow personalised human head representations to be imported and graphically represented in virtual "chat rooms".
  • a shopping site may be adapted to permit 3-dimensional human representations to be imported and graphically represented in virtual shopping situations.
  • sites may be adapted to permit the playing of multi-player games with two or more players being physically sited at different playing locations, the 3-dimensional head features of at least one of the players being present as part of one of the characters pre-programmed into the game.
  • said environment interface may be included in internet connectable wireless devices.
  • the invention provides a method of playing an electronic game adapted to receive and make use of personalised virtual 3-dimensional human representations, said method including the steps of: connecting said electronic game to a memory device on which at least one said 3-dimensional human representation is stored; loading said 3-dimensional human representation from said memory device into said game; and playing the game according to the facilities pre-programmed into said game whilst embodying said virtual 3-dimensional representation as part of said game.
  • Said memory device may comprise a memory card, computer hard drive or other computer memory device.
  • said virtual 3-dimensional representation may be loaded onto said memory card, computer hard drive or other computer memory device, from a databank containing a multiplicity of such representations, via the internet.
  • said method further includes nominating one or more locations in said game to be occupied by said 3-dimensional representations.
  • the invention provides a method of interacting with an internet site adapted to receive and make use of virtual 3-dimensional human features, said method including the steps of: connecting said internet site to a repository of said 3-dimensional human features via the internet; downloading virtual 3-dimensional features into said site; and interacting with said site according to the facilities pre-programmed into said site.
  • said method further includes nominating one or more locations in said site to be occupied by said 3-dimensional features.
  • the invention provides a computer game when adapted to receive substitute 3-dimensional human body features according to the system hereinbefore set forth.
  • the invention provides an electronic network site when adapted to receive from an external source, 3-dimensional human body features according to the system hereinbefore set forth.
  • This network site may, for example, be an internet or intranet site.
  • Figure 1A shows a general diagrammatic view of the creation part of a system according to the invention
  • Figure 1B shows a general diagrammatic view of the use or exploitation part of a system according to the invention incorporating the creation part of Figure 1A
  • Figure 2 shows a frontal photographic image of a human subject displayed alongside a corresponding generic head form and used to provide head and face data to be transferred into an electronic environment in accordance with the invention
  • Figure 3 shows a side or profile photographic image and corresponding generic head form to be used, in conjunction with the images shown in Figure 2, to form the data to be transferred into an electronic environment in accordance with the invention
  • Figure 4 shows a diagram of part of the draft MPEG-4 standard for identifying feature points when modelling human head features using digital modelling techniques
  • Figure 5 shows a data capture window forming part of the subject interface incorporated in a system according to the invention
  • Figures 6a & 6b show data capture windows subordinate to that shown in Figure 5;
  • Figure 7 shows a further data capture window subordinate to that shown in Figure 5;
  • Figure 8 shows a general flow diagram of the processing element of the invention
  • Figure 9 shows a projected head texture model showing an allocation and arrangement of triangular polygons therein;
  • Figure 10 shows a selection of HFPs used to demarcate the frontal region of a head for use in the invention described herein;
  • Figure 11 shows a general flow diagram of the operation of an environment interface or SDK.
  • This invention provides a system for allowing virtual 3-dimensional human features to be created, readily transferred into a multiplicity of pre-programmed electronic environments, and used therein as integral parts of the functions or activity pre-programmed into the environments.
  • the invention has been developed, in particular, to allow personalised 3-dimensional human head representations to be transferred into electronic or computer games and substituted for the head features of one or more characters pre-programmed into the game. In this way, players can readily import their own head representations into games and thereby assume the role of a character within the game.
  • the term "head" as used herein is to be interpreted as including face details.
  • the inventive system described herein could be applied to games intended for use on purpose built games consoles or on personal computers; and could also be applied to personalising human activity over computer networks such as the internet, for example in "chat rooms", in on-line shopping situations and in multi-player games where the players or participants are physically sited at remote locations.
  • Representations created according to the invention might also be used with internet connectable wireless devices.
  • the system according to the invention provides for the creation of realistic virtual 3-dimensional human features and then provides for the use of those features in electronic environments such as electronic games and computer network sites.
  • the system includes a subject interface 20 whereby visual images and other non-visual text data relating to a human subject can be captured and inputted.
  • the system further includes a creation process 21 configured to receive data from the subject interface 20 and, from that data, create realistic virtual 3-dimensional human features.
  • functional elements such as a computer game 22 and/or an internet site 23 may be suitably adapted by the incorporation of an environment interface 24 which allows the 3-dimensional human representations generated at 21 to be functionally integrated into the game and/or site.
  • the system as described herein is preferably configured so that all necessary data is inputted by the human subject whose features are to be replicated in virtual 3-dimensional form.
  • the system preferably makes use of a personal computer which is conveniently accessed via a network such as the internet.
  • the system may interact with a network connectable electronic games console, for example those sold under the trade mark DREAMCAST or PLAYSTATION 2.
  • the subject interface 20 comprises capture software, the functional elements of which are described in greater detail below.
  • This software may be distributed to users or potential users by floppy disc or CD, or may be downloaded via the internet, preferably from the same site at which the creation process software is resident.
  • this data can be transferred into the creation process 21 via an internet link as shown by the double lines in Figure 1A.
  • 3-dimensional human features created by the creation process 21 may be held in a local memory device 25 which may take the form of a memory card or memory disc, or a computer internal memory. Whatever the case, each 3-dimensional representation is preferably also held, in a suitably secure manner, in an electronic repository such as a network server or database 26.
  • virtual 3-dimensional data can be downloaded, at the command of the authorised subject, into local memory 25 or directly into adapted environments 22 and 23 to be used in a personal manner as an integral part of the pre-programmed environment.
  • the visual data which, in use, is inputted by a subject user preferably comprises a minimum of two 2-dimensional photographs of the subject's head, a frontal view as shown in Figure 2 and a side profile as shown in Figure 3.
  • a further side profile image taken from the opposite side to that shown in Figure 3 could also be provided though, in most instances, it is convenient to assume that the third image is a mirror image of Figure 3 and to simply produce a mirror image of Figure 3, digitally.
  • the photographic images are loaded after prompting by the subject interface software loaded on to the user's computer.
  • the images must be loaded in digital format. This can be achieved by downloading into the computer, images obtained using a digital camera, directly from the camera. Alternatively photographic film images may be scanned using a digital scanner, and the data then downloaded to the computer.
  • the subject interface software 20 prompts the identification and insertion of head feature points (HFPs) thereon.
  • HFPs are essentially 3-dimensional points lying on the head surface, each with its own unique identifier and each having a 2-dimensional projection on one or more photographs.
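  • To make the data relationships just described concrete, a minimal sketch of how an HFP record might be represented follows. The field names are illustrative assumptions only; the patent does not specify a storage format.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class HFP:
    """One head feature point: a 3-d point on the head surface with a
    unique identifier and a 2-d projection on one or more photographs
    (field names are assumptions for illustration)."""
    identifier: str                                           # unique identifier
    world_xyz: Tuple[float, float, float] = (0.0, 0.0, 0.0)   # 3-d surface point
    projections: Dict[str, Tuple[float, float]] = field(default_factory=dict)

# Example: a point marked on both the frontal and profile photographs
eye_corner = HFP(identifier="right_eye_outer_corner")
eye_corner.projections["front"] = (312.0, 204.0)    # pixel co-ordinates, front photo
eye_corner.projections["profile"] = (158.0, 201.0)  # pixel co-ordinates, side photo
```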
  • the creation process software 21 may incorporate feature recognition software which eliminates the need to insert HFPs.
  • the subject interface software 20 preferably displays each 2-dimensional user image alongside a corresponding generic image as shown in Figures 2 and 3. Feature points, such as are indicated by reference numerals 27 and 28 in Figures 2 and 3, are then highlighted and the user is prompted to locate and click the mouse pointer on the corresponding part of the inputted image so as to establish that HFP on the inputted image.
  • A number of inserted HFPs are shown on the inputted images of Figures 2 and 3. These correspond to points identified in the draft MPEG-4 standard shown in Figure 4.
  • the number of HFPs which are inputted can vary: the greater the number inserted, the greater the detail and realism of the output virtual 3-dimensional representation.
  • HFPs are presented one at a time on the generic images. As each corresponding HFP is established on the inputted photographic image, a new HFP is highlighted on the generic image. This continues until all HFPs on the generic images have corresponding points identified on the photographic images or until the user elects to terminate the HFP allocation process.
  • seven HFPs are identified on the profile view and eight HFPs are identified on the frontal view.
  • the subject interface preferably prompts the collection of further text data relating to the subject user.
  • a window such as that shown could be displayed requesting the user to specify details such as: name; nickname; e-mail address; age; height; weight; gender; and ethnic appearance.
  • Options for some of the above elements may be provided in the form of pull down menus.
  • the creation process 21 may operate by working from a single generic head model or, as shown, the subject interface 20 may further prompt subject users to select a base generic head which best represents the user's overall head shape. Since head shape and basic features tend to follow trends established by ethnic origin, different generic head options may be sorted according to similarity with particular ethnic groups.
  • Selecting the Ethnic Appearance option listed above preferably highlights a pull down menu such as is shown, by way of example only, in Figure 7.
  • a number of images may be presented both for males and females, the subject user opting for that one which, in the user's view, most closely represents the user's head characteristics.
  • Each may be labelled with an ethnic designation and, obviously, particular ethnic designations may be sub-divided and/or more than one option provided for a particular ethnic designation.
  • the data is transferred to the creation process 21 for creation of the virtual 3-dimensional representations in a manner described in greater detail below.
  • the subject's head should be upright in the frame of each of the photographic images although, as will become more apparent from the following description, the modelling method adopted herein does provide for some tilting of the head parallel to the image plane.
  • the method will accommodate some forward tilting of the head.
  • the creation process 21 takes as input the photographs, the 2-dimensional co-ordinates of the marked HFPs and the choice of generic head. It then produces as output a 3-dimensional digital representation of the head of the subject in the photographs. This representation is a general likeness of that subject as he/she appears in the photographs in both shape and surface appearance. Following appropriate application of the environment interface 24, this 3-dimensional digital representation can subsequently be displayed by software or hardware applications, such as 3-dimensional computer games, web browsers, MPEG-4 players or any application that renders 3-dimensional data.
  • the software system has several generic head models loaded (or available to load) which cover the range of gender, age and other attributes. One of these is selected which matches the choice of generic head made by the subject. This model will henceforth be referred to as the generic model. A copy of it is made by the software system; this copy is altered to produce a likeness of the subject. This copy will henceforth be referred to as the subject model.
  • the generic model contains 3-dimensional co-ordinates of the HFPs on that model. It also contains data representing the surface of the head and its component parts. A very common format for this data is the polygonal mesh, and this format is assumed herein. Other common 3-dimensional graphics representations such as NURBS could equally well be employed, with appropriate adjustments to the morphing and texture extraction modules.
  • the generic model, and hence the subject model, is placed in a co-ordinate frame referred to here as world space. As can be seen in Figure 4, the origin of this space is at the top of the neck, the positive X axis passes through the left side of the face, the positive Z axis passes underneath the nose and the positive Y axis passes vertically through the top of the head.
  • the 2-dimensional co-ordinate frame within which the marked HFPs in each photograph are stored is referred to as the image space of the photograph.
  • the positioning of the camera relative to world space is encoded by a co-ordinate frame referred to as the camera space of that photograph. Transformations between the world and camera spaces, and between camera and image space, are described in matrix notation in the conventional manner.
  • the 3-dimensional HFP coordinates of the subject model are constructed directly in world space. However, it is possible also to reconstruct a 3-dimensional structure in an intermediate space, and then construct a transformation to world space.
  • the first step in extracting the 3-dimensional shape of the subject's head is the calculation of the camera matrices for each photograph.
  • the camera matrices depend on the camera model.
  • a projective camera model incorporates the effect of perspective and hence is used in reconstruction from photographs of large objects such as buildings.
  • the depth of a face is usually not large compared to the distance from the camera to the face, and hence an affine camera model may be used.
  • a scaled orthographic camera model is a restriction on the affine model in which the world-to-camera space transformation is a rigid-body one and the object being photographed projects onto the image plane in a parallel fashion with uniform scaling only.
  • the system either needs to make assumptions about the camera parameters within the camera model, or about the shape of the subject's face.
  • One example of the latter approach under the affine camera model would be to extract the 3-dimensional co-ordinates of the HFPs in an intermediate space which is an unknown affine transformation away from the true Euclidean structure of world space.
  • a database of head statistics containing average distances and angles (and standard deviations) between HFPs for different ages and genders could then be employed in the form of constraints to build the final affine-to-Euclidean transformation [ref 2].
  • the camera parameters are encoded in a 3x4 matrix M which maps points from world space to image space. This is decomposed into two matrices:
  • where f is the focal length;
  • T encodes the transformation from head space to camera space and is of the following form under the assumption of a rigid-body transformation;
  • [TX, TY, TZ]^T is a translation vector representing the origin of the head frame in the camera frame;
  • the matrix M is thus of the form given below.
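  • The matrix forms themselves appear as images in the original document and are not reproduced here. A standard scaled orthographic decomposition consistent with the surrounding text would be the following; this is an assumed reconstruction rather than the patent's exact notation:

$$
M = P\,T, \qquad
P = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
T = \begin{bmatrix} R_X^{\top} & T_X \\ R_Y^{\top} & T_Y \\ R_Z^{\top} & T_Z \\ \mathbf{0}^{\top} & 1 \end{bmatrix}, \qquad
M = P\,T = \begin{bmatrix} f R_X^{\top} & f T_X \\ f R_Y^{\top} & f T_Y \\ \mathbf{0}^{\top} & 1 \end{bmatrix}
$$

where $R_X^{\top}$, $R_Y^{\top}$ and $R_Z^{\top}$ are the rows of the rotation part of the rigid-body transformation, so that the third row of $M$ simply copies the homogeneous co-ordinate, as the text notes below.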
  • the requested camera positions for the supplied photographs require the camera to be at the front or side of the head. Letting the camera's RZ axis lie on the line of sight, the RX and RY vectors lie in a plane parallel to the image plane and are orthogonal to each other. Hence their common rotation within that plane constitutes one degree of freedom, which allows for any tilt of the head around the direction of view (although it is requested that heads be upright in the frame of the photograph, this can be difficult to achieve). TX and TY form two more degrees of freedom, and f (the scale factor) forms the fourth and last degree of freedom.
  • the co-ordinates of two points in the front photograph and their corresponding XY co-ordinates in head space are sufficient to provide these degrees of freedom.
  • one HFP between the eyes and one HFP at the centre of the mouth can be chosen.
  • the desired XY co-ordinates of these points in world space are known from the generic model.
  • Each point in the photo is related to the world space coordinates as follows:
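  • The relation is an image in the original document; under the scaled orthographic model above, a form consistent with the text (an assumed reconstruction) is:

$$
\begin{bmatrix} x \\ y \end{bmatrix}
= f \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} X \\ Y \end{bmatrix}
+ f \begin{bmatrix} T_X \\ T_Y \end{bmatrix}
$$

where $\theta$ is the tilt of the head about the viewing direction. The two chosen HFPs, whose world space XY co-ordinates are known from the generic model, thus supply four equations in the four unknowns $f$, $\theta$, $T_X$ and $T_Y$.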
  • the next module has the responsibility of extracting the 3-dimensional world space co-ordinate data of the subject model's HFPs.
  • for each photograph k, the marked image point of an HFP is related to its world space position by pk = Rk [X Y Z]^T + Tk, where Rk is 2×3 and Tk is 2×1.
  • the third row of Mk can be ignored as it just copies the final homogeneous co-ordinate from world space to image space and hence will set it to be 1.
  • each XYZ is then found by solving a linear least-squares problem, using a technique such as Singular Value Decomposition.
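  • As a concrete illustration of this step, the sketch below stacks the 2×3 blocks from each photograph in which an HFP was marked and solves for the point's world space position with numpy's SVD-based least-squares routine. The camera parameters are placeholder values, not ones derived from real photographs.

```python
import numpy as np

def triangulate_hfp(observations):
    """Solve Rk @ [X Y Z]^T = pk - Tk over all photos k by least squares.

    observations: list of (R, T, p) per photo -- R is the 2x3 block of that
    photo's camera matrix, T its 2-vector translation part, p the marked
    2-d image point.  np.linalg.lstsq solves the stacked system via SVD.
    """
    A = np.vstack([R for R, _, _ in observations])           # (2k x 3)
    b = np.concatenate([p - T for _, T, p in observations])  # (2k,)
    xyz, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xyz

# Two illustrative views: a front camera and a side camera, scale f = 2.5
R_front = 2.5 * np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
T_front = np.array([160.0, 120.0])
R_side  = 2.5 * np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
T_side  = np.array([160.0, 120.0])

p_front = np.array([185.0, 95.0])   # HFP as marked in the front photo
p_side  = np.array([210.0, 95.0])   # the same HFP as marked in the side photo
print(triangulate_hfp([(R_front, T_front, p_front), (R_side, T_side, p_side)]))
```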
  • each HFP found in this manner will differ in position from its location in the generic head. These differences in position give the information necessary to interpolate the new positions of the remaining HFPs (just like the vertices in section 4).
  • This interpolation is achieved via the technique of scattered data interpolation with radial basis functions. This technique is used at several places by the software system and is described in section 6.
  • 4. Producing a Head Shape
  • the previous stage of the process has given us a full set of 3-dimensional HFPs for the subject model.
  • the next stage produces a geometric model whose shape is a likeness of the subject based on these points, by morphing the surface geometry of the subject head according to the spatial differences between the generic head's HFPs and the subject head's HFPs.
  • the process has produced a subject head whose shape is a likeness of the subject as depicted in the input photographs.
  • the next stage gives the subject head model the surface appearance of the subject. This is achieved via the creation of a texture image and its mapping onto the head model.
  • Texture mapping is a common and standard practice of 3-dimensional graphics software.
  • the 2-dimensional space of the texture image is addressed by uv coordinates, u horizontally and v vertically.
  • a pixel of a texture image may be referred to as a texel.
  • the mapping of the texture image onto the model is achieved by embedding uv co-ordinates with the vertex data - this serves to locate each vertex in the texture image.
  • the uv co-ordinates embedded in the model in this manner enable any subsequent rendering process (as in a game, browser or any other application that displays a virtual head created in accordance with this invention) to transfer texels from the appropriate portion of a texture image to the screen as it renders each triangle during display of the head.
  • Figure 9 illustrates the uv images of the triangles in a head model. It shows that component parts of a head model (such as eyes, ears, teeth and tongue) can have their uv images in different parts of the texture image.
  • the uv co-ordinates for head models are often assigned according to a cylindrical mapping. This tends to produce a blurry effect and allocates only a small portion of the texture image to the front of the face. [ref 3] describes a uv mapping which allocates more of the texture image to the front of the face and uses an orthographic projection for the front, since that is where most detail of interest lies.
  • the system described herein uses the uv values held in the vertices of the generic model unaltered in the subject model.
  • This allows the mapping strategy to be implemented as part of the generic model creation process (which is a separate, possibly time-consuming process finished before the virtual 3-dimensional creation system is running), and hence allows for possibilities such as the use of commercially-available texture mapping software tools, or specially-written software. Many such tools allow interactive adjustment of the uv co-ordinates of the vertices under operator control.
  • the generic heads used by the present implementation of the present invention are mapped such that the front of the head occupies more of the texture image than the sides, a constraint which carries over to the subject model, since vertices of the generic model morph to generally the same region of the head in the subject model.
  • Figure 9 shows one such mapping wherein a large proportion of the texture image is devoted to the front of the head.
  • the texture creation process is then a case of transferring colour information from the input photographs to the texture image, guided by the uv images of each triangle. This is achieved as follows:
  • (1) compute the XYZ co-ordinates and surface normal of the corresponding internal point of the triangle in world space;
  • (2) using the information from step (1), compute the weightings which express the relative contributions of each photograph to the texel;
  • (3) using the information from step (1), locate the point in each (remapped) photograph which step (2) has decided will contribute something to the texel;
  • (4) transfer colour information from the remapped photos to the texture image, weighted according to step (2).
  • Step (1) Computing the 3-dimensional location of a point
  • Step 1 is achieved via Barycentric co-ordinates; for each texel enclosed by the triangle's uv image, the Barycentric coordinates of that point within the triangle are computed. These Barycentric co-ordinates allow the calculation of the world space co-ordinates of that point based on the world space coordinates of each vertex. The surface normal is calculated trivially.
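  • A minimal sketch of this computation follows: it finds the Barycentric co-ordinates of a texel centre within a triangle's uv image and uses them to blend the triangle's world space vertices, then computes the face normal. All co-ordinate values are illustrative.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric co-ordinates of 2-d point p in the triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

uv = np.array([[0.20, 0.30], [0.45, 0.32], [0.31, 0.55]])              # uv image
world = np.array([[1.0, 2.0, 0.5], [1.8, 2.1, 0.4], [1.3, 2.9, 0.6]])  # world space

lam = barycentric(np.array([0.30, 0.38]), *uv)   # weights for one texel centre
point_xyz = lam @ world                          # world space point for the texel
normal = np.cross(world[1] - world[0], world[2] - world[0])
normal /= np.linalg.norm(normal)                 # surface normal, step (1)
```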
  • the weighting implemented in the system described herein is mutually exclusive; only one photograph is allowed to contribute to the texel.
  • the normal is not employed in determining which photograph; it is calculated for the purposes of software modularity since alternative implementations could use the dot product of the normal and the camera view vector to help determine weightings (e.g. as in [ref 1]).
  • only the location of the point is used. For example, if the triangle point lies at the front of the face, only the front photo is used to transfer the colour information.
  • the system uses certain HFPs to demarcate a frontal region of the face, which shall be referred to as the frontal mask.
  • An example list of HFPs for this purpose is as follows (working anti-clockwise when looking from the front of the face) and is illustrated in Figure 10:
  • This HFP list defines a straight-edged polygon when viewed from the front of the face as can be seen in Figure 10. If the point P lies within this polygon when viewed from the front and in the front half of the head, it is deemed to lie within the frontal mask, and the front photo is chosen. Otherwise the left or right photo is chosen according to which side of the head the point lies.
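  • The decision rule just described can be sketched as follows, using a standard even-odd point-in-polygon test. The polygon would be formed from the frontally projected mask HFPs; the values below are placeholders, and the sign conventions follow the world space described earlier (positive Z towards the front of the face, positive X towards its left side).

```python
def in_polygon(x, y, poly):
    """Even-odd ray-crossing test: is the 2-d point (x, y) inside poly?"""
    inside = False
    for i in range(len(poly)):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % len(poly)]
        if (y0 > y) != (y1 > y) and x < (x1 - x0) * (y - y0) / (y1 - y0) + x0:
            inside = not inside
    return inside

def choose_photo(point_xyz, frontal_mask_xy):
    """Pick the source photograph for a surface point P."""
    x, y, z = point_xyz
    if z > 0 and in_polygon(x, y, frontal_mask_xy):  # within the frontal mask
        return "front"
    return "left" if x > 0 else "right"              # otherwise, by side of head

mask = [(-4.0, -6.0), (4.0, -6.0), (5.0, 2.0), (0.0, 9.0), (-5.0, 2.0)]
print(choose_photo((1.0, 0.5, 3.0), mask))           # -> "front"
```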
  • the HFPs of the subject model when projected back onto the photographs will not generally lie at their marked positions (due to the arbitration between the different views as described in section 3). For this reason, the photos are remapped in a preliminary step as follows. For each photograph, all HFPs are projected into the image space of the photograph via the camera matrix. The original photo is then warped such that the projected HFPs and the marked HFPs on that photo are realigned. The image warping is again achieved via 2-dimensional radial basis functions as in [ref 4] and section 6. During texture filling, the XYZ point is projected onto the remapped photo via the camera matrix to obtain the colour information to be transferred to the texture image. Since points will generally lie between integer pixel co-ordinates, bilinear interpolation is used to obtain a colour from the nearest pixels.
  • the new position of any point P to be interpolated is calculated from a vector-valued function f(P).
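  • The formula for f(P) is an image in the original; the standard radial basis function form consistent with the surrounding text (an assumed reconstruction) is:

$$
f(P) = \sum_{i} c_i\,\phi\!\left(\lVert P - P_i \rVert\right) + M P + T
$$

where the $c_i$ are vector-valued coefficients, $\phi$ is a radial basis function, and $MP + T$ is an affine term.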
  • Ci, M and T are found by solving a set of linear equations expressing the above, plus equations expressing the following constraints for each co-ordinate (using the x coordinate as an example and assuming 3-d vectors):
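  • The constraint equations are likewise images in the original; for the x co-ordinate with 3-dimensional vectors they take the usual side-condition form (an assumed reconstruction):

$$
\sum_{i} c_i^{x} = 0, \qquad
\sum_{i} c_i^{x} P_i^{x} = 0, \qquad
\sum_{i} c_i^{x} P_i^{y} = 0, \qquad
\sum_{i} c_i^{x} P_i^{z} = 0
$$

together with the interpolation conditions $f(P_i) = Q_i$. These side conditions ensure the radial terms carry none of the affine part of the mapping.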
  • the original positions Pi are as marked in the photographs, and the new positions Qi are the projections of the subject model's HFPs.
  • the vector values are 2-dimensional.
  • the original points are the 3-dimensional HFPs of the generic model and the new positions are the 3-dimensional HFPs of the subject model.
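  • A compact sketch of this interpolation in the 3-dimensional case follows: it solves for the coefficients of the form given above and returns a function that morphs arbitrary points, as used when moving the generic model's HFPs and vertices to their subject positions. The kernel φ(r) = r is an assumption, since the text does not name the basis function used.

```python
import numpy as np

def fit_rbf(P, Q, phi=lambda r: r):
    """Fit f(X) = sum_i c_i phi(|X - P_i|) + M X + T with f(P_i) = Q_i.

    P, Q: (n x 3) original and new positions (e.g. generic and subject HFPs).
    The bordered system below enforces the side conditions sum_i c_i = 0 and
    sum_i c_i P_i = 0 alongside the interpolation conditions.
    """
    n = P.shape[0]
    K = phi(np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1))  # (n x n)
    A = np.hstack([P, np.ones((n, 1))])                              # affine block
    system = np.vstack([np.hstack([K, A]),
                        np.hstack([A.T, np.zeros((4, 4))])])
    rhs = np.vstack([Q, np.zeros((4, 3))])
    sol = np.linalg.solve(system, rhs)
    c, m = sol[:n], sol[n:]

    def f(X):
        X = np.atleast_2d(X)
        Kx = phi(np.linalg.norm(X[:, None, :] - P[None, :, :], axis=-1))
        return Kx @ c + np.hstack([X, np.ones((len(X), 1))]) @ m

    return f

rng = np.random.default_rng(0)
generic_hfps = rng.random((8, 3))
subject_hfps = generic_hfps + 0.05 * rng.standard_normal((8, 3))
morph = fit_rbf(generic_hfps, subject_hfps)
vertices = rng.random((100, 3))        # generic head surface vertices
morphed = morph(vertices)              # corresponding subject head vertices
```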
  • the subject model is output by the creation process software system 21 in a file format referred to, for present purposes, as the DMK file format with file extension .DMK.
  • This file resides on the server and may also be sent via email to the subject/owner for residence on his hard drive or other memory device.
  • the final principal part of the system described and claimed herein is the environment interface 24.
  • When the 3-dimensional model, created as described above, is to be imported into a piece of software, the DMK file must be available to the software (e.g. by connecting to the dedicated server) and that software must be capable of extracting a representation of the head that it is able to manipulate and display within its own 3-dimensional virtual space.
  • a broad description of the operating elements of the environment interface is shown in Figure 11.
  • the developers of that software have to utilise an application interface in the form of a software development kit (SDK) provided to enable importation of the 3-dimensional head model represented by the DMK file.
  • the SDK has been designed on the basis that the possible uses to which publishers and developers can put a 3-d model as produced herein, will always defeat any attempts to predict or enumerate those uses.
  • Applications that have been specifically anticipated include: 3-d Games, cosmetics web sites, talking heads, chat rooms, visual instant messaging, optician's web sites, online shopping, email greeting cards.
  • the SDK described herein provides a means for application developers to gain access to the information contained within a DMK file.
  • the run-time components of the SDK decompress and convert the .DMK file into an internal data structure in order to service requests from the application. These requests are made through an Application Programmer's Interface (API).
  • the SDK consists of: a core run-time library; an Application Programmer's Interface (API); and extension libraries.
  • the core run-time library parses and decompresses the .DMK file, builds the internal scene data structure and allows the application to extract data in platform-independent formats. Subject to unusual exceptions where some information is only
  • Source code will usually be provided for these libraries.
  • the tasks may or may not be platform dependent.
  • Private extensions are in compiled form only and have the ability to hook directly into the internal data structures in order to set values or present data to the application. They usually provide platform-dependent functionality and provide a more efficient method of querying and converting the 3-d model data. A typical example is converting some data into a platform-specific or third-party format. These might also provide bespoke functionality exclusive to certain applications.
  • the API and library, in the manner of software abstraction, provide the developers with their only perception of the data embedded in the DMK file. That is to say, it is irrelevant how the head representation is stored in the DMK file; developers only need know whether and how they can obtain a data representation of the head in a format that their software supports, where this representation includes not only surface geometry such as triangle mesh information, but also material properties (describing reflectance, shininess, textures, etc.).
  • the API presents the data to the application much in the manner of a 3-d modelling program. This includes data to allow the 3-d model to be animated.
  • the application makes calls to the API to obtain data in a format it can accommodate. If the application has a rendering component, that application has the responsibility of storing and/or further manipulating the data and presenting it when required.
  • the API supports this requirement in several ways.
  • the application has the ability to request a certain number of triangles in each mesh.
  • the SDK will return a mesh with the largest triangle count less than or equal to the requested number.
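  • By way of illustration only, such a request might look like the hypothetical sketch below. The class and method names are invented for this example; the document does not publish the actual SDK interface.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SceneStub:
    """Stand-in for the SDK's internal scene structure (hypothetical)."""
    variants: Dict[str, List[int]]   # mesh name -> available triangle counts

    def request_mesh(self, name: str, max_triangles: int) -> int:
        """Return the largest available triangle count <= max_triangles,
        mirroring the level-of-detail behaviour described above."""
        usable = [n for n in self.variants[name] if n <= max_triangles]
        if not usable:
            raise ValueError("no mesh variant fits the requested triangle budget")
        return max(usable)

scene = SceneStub(variants={"head": [250, 500, 1000, 2000]})
print(scene.request_mesh("head", max_triangles=800))   # -> 500
```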
  • textures can be supplied to the application
  • new versions of the SDK will be released from time to time.
  • the run-time components of the SDK take the form of dynamically linked libraries. Should there be a format change in DMK files, the owner of such a DMK should obtain updated run-time libraries.
  • the SDK and API provided in accordance with the invention thus includes routines that provide a head representation in several formats and at different levels of detail. All that is required for this to be possible is for the internal DMK data to be capable of conversion to the specified output representation.
  • the output itself may be a multi-resolution format which would allow the calling software to display the head with varying numbers of triangles under that software's own control (e.g. more triangles when the head is closer to the camera).
  • the library thus has the tasks of
  • an abstract representation of the head may also be extracted. This can consist of only the feature point co-ordinates (with (u,v) coordinates), and the texture.
  • the feature points output may be those specified in the ISO 14496 MPEG-4 standard for very-low bitrate audio-visual coding. These need not coincide with the HFPs used by the creation system described above, but so long as the MPEG-4 feature definition points are created for, and stored with, each generic model, they can be morphed to give the feature definition points for the subject model.
  • the MPEG-4 feature points plus the texture alone may be of use to application software developers who have their own proprietary format for representing a head but who wish to adjust their own model to conform to the individual's head.
  • this allows MPEG-4 Facial Animation Parameters to be applied to the subject model.
  • the creation process described herein can produce a likeness of the subject suitable for animating according to the MPEG-4 protocols and in any MPEG-4 player.
  • other data necessary for other animation protocols, such as smooth skinning, can also be embedded in the DMK file.
  • the server 26 may convert the DMK format before sending the result. This would be in preference to sending the DMK file to the run-time library on the other machine, if it is determined that the resulting file will be significantly smaller than the DMK file (and hence have a speedier transfer time). This is in effect simply a matter of allowing some functionality of the run-time library of the SDK to be distributed onto the server 26.
  • a person wishing to use his/her own image in an electronic game or internet site adapted as above described loads on to his/her personal computer the subject interface 20. This can be loaded from a CD or other memory disc, or can be accessed from an internet site. After loading digital images and completing data entry in response to the prompts described above, the data is then uploaded to the creation process 21 via the internet. The 3-dimensional modelling is then undertaken by the creation process 21 and the model thus created is then returned to the supplier of the photographs by e-mail for storage on the user's local memory device 25. A copy of the model is also stored in an electronic repository such as database or server 26. When the player wishes to make use of the model in a suitably formatted game, the player loads the model from the server 26 or the local memory device 25 into the games software. The game may allow the player to select the character in the game whose identity the player wishes to assume and may allow two or more players to take up identities within the game.
  • a subject may also download his/her virtual 3-dimensional model into network sites 23 so that interactive activity can be personalised.
  • Virtual chat rooms could have bars into which participants can transfer head and other body features to enable more personal participation.
  • purchasers can use their own head and body dimensions to better gauge how they might look in particular clothes or with a particular hair style or wearing particular make-up or accessories.
  • the present invention provides a system for personalising activity in virtual environments which is relatively simple to use and which considerably enhances the range of human activities using such environments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a three part system for creating and using three-dimensional virtual models, particularly human head models. The first part of the system comprises a subject interface (20) adapted to receive source digital images from a user and allow the user to load data relevant to the model creation process. The second element of the system is a creation element (21) which creates the three-dimensional model from the data collected by part (20). The third element is a use interface (24) which allows the created model to be received and used in virtual environments such as computer games.

Description

3D GAME AVATAR USING PHYSICAL CHARACTERISTICS
Field of the Invention
This invention relates to 3-dimensional object creation and the application or use of such objects in virtual environments. More particularly, the invention relates to the provision of a system which enables the creation of personalised 3-dimensional human forms from 2-dimensional images of the subject human; and the application or use of such human forms in a variety of virtual environments adapted to receive them and to make use of such forms as an integral part of the environment. Such environments could comprise electronic games or computer network sites but are not necessarily restricted thereto.
Background to the Invention
Electronic or virtual environments, such as games or computer network (internet) sites, provide for interaction between characters or situations pre-programmed into the environment, and external players or users. However, the graphic content of such environments is typically fixed in accordance with the rigid parameters laid down by the developers of such environments.
With the rapidly increasing use of electronic environments for both entertainment and commerce, there is an ongoing demand for interaction between player/user and the particular environment to not only be interactive, but also to be as entertaining and as personalised as possible. In particular, there is perceived to be a demand for "personalising" the role of the player or user to a greater extent than has been possible heretofore.
It is an object of this invention to provide methods of, and means for, creating and applying 3-dimensional object images which will enhance user participation; or which will at least provide a novel and useful choice.
Summary of the Invention
Accordingly, in a first aspect, the invention provides a system for creating and making use of 3-dimensional human body features in virtual environments, said system including:
a subject interface configured to receive data of the human subject from which said 3-dimensional human body features can be created;
a creation process communicable with said subject interface and being configured and operable to create virtual 3-dimensional human body representations from data received by said subject interface; and
an environment interface configured and operable to permit the creation or adaption of virtual environments to integrate therein, 3-dimensional human body representations created by said creation process. Preferably said system is configured for the creation and use of virtual 3-dimensional human head representations in virtual environments.
Said subject interface is typically a software item which may be distributed conventionally, such as by floppy disc or CD, or may be downloaded over the internet. More preferably said subject interface may be downloaded from a location at which said creation process is resident.
Preferably said system is configured to provide for data collected by said subject interface to be passed to said creation process via the internet.
Preferably said subject interface is configured to receive a combination of 2-dimensional visual images of a human subject together with text data relating to said human subject.
Preferably said subject interface includes prompting means operable to prompt a human subject to input specified data. The specified data includes 2-dimensional digital photographic data and may also include other data relating to that subject such as: name; nickname; e-mail address; age; height; weight; gender; and ethnic appearance. If gender is specified, the data may further include gender specific data.
Preferably said prompting means is operable to prompt a subject to input digital photographic images of the front and at least one profile of the subject's head. Said prompting means may be further operable to prompt said subject to establish particular feature points on said photographic images. To this end, said subject interface may present an inputted photographic image alongside a corresponding generic image and, on said generic image, highlight feature points for which said subject should identify corresponding points on said inputted photographic image.
Preferably said subject interface is configured to receive photographic images created using readily available digital cameras or scanned from conventional photographic film images.
Preferably said prompting means is further operable to prompt a subject to select a generic head form which corresponds most in shape and features to said subject's own head. Preferably said prompting means is operable to present particular generic head options in response to data provided by a subject as to gender and ethnic appearance.
Said environment interface may be incorporated in an electronic game. Alternatively, or in addition, said environment interface may be incorporated in an environment in the form of a computer network or internet site, wherein personalised 3-dimensional human representations may be imported into said site to participate in activity pre-programmed into said site. Preferably said site is adapted to allow personalised human head representations to be imported and graphically represented in virtual "chat rooms". Alternatively, or in addition, a shopping site may be adapted to permit 3-dimensional human representations to be imported and graphically represented in virtual shopping situations. Still further, sites may be adapted to permit the playing of multi-player games with two or more players being physically sited at different playing locations, the 3-dimensional head features of at least one of the players being present as part of one of the characters pre-programmed into the game. As a further alternative said environment interface may be included in internet connectable wireless devices.
In a second aspect the invention provides a method of playing an electronic game adapted to receive and make use of personalised virtual 3-dimensional human representations, said method including the steps of:
connecting said electronic game to a memory device on which at least one said 3-dimensional human representation is stored;
loading said 3-dimensional human representation from said memory device into said game; and
playing the game according to the facilities pre-programmed into said game whilst embodying said virtual 3-dimensional representation as part of said game. Said memory device may comprise a memory card, computer hard drive or other computer memory device. Further, said virtual 3-dimensional representation may be loaded onto said memory card, computer hard drive or other computer memory device, from a databank containing a multiplicity of such representations, via the internet.
Preferably said method further includes nominating one or more locations in said game to be occupied by said 3-dimensional representations.
In a third aspect the invention provides a method of interacting with an internet site adapted to receive and make use of virtual 3-dimensional human features, said method including the steps of: connecting said internet site to a repository of said 3-dimensional human features via the internet;
downloading virtual 3-dimensional features into said site; and
interacting with said site according to the facilities preprogrammed into said site.
Preferably said method further includes nominating one or more locations in said site to be occupied by said 3-dimensional features.
In a fourth aspect the invention provides a computer game when adapted to receive substitute 3-dimensional human body features according to the system hereinbefore set forth.
In a fifth aspect the invention provides an electronic network site when adapted to receive from an external source, 3-dimensional human body features according to the system hereinbefore set forth. This network site may, for example, be an internet or intranet site.
Many variations in the way the present invention may be performed will present themselves to those skilled in the art. The description which follows is intended as an illustration only of one mode of performing the various aspects of the invention and the absence of description of particular alternatives or variants should in no way be applied to limit the scope of the invention. Such description of specific elements which follows should also be interpreted as including equivalents whether existing now or in the future. The scope of the invention should be defined solely by the appended claims.
Brief Description of the Drawings
One particular mode of performing the invention, and some modifications thereof, will now be described with reference to the accompanying drawings in which:
Figure 1A: shows a general diagrammatic view of the creation part of a system according to the invention;
Figure 1B: shows a general diagrammatic view of the use or exploitation part of a system according to the invention incorporating the creation part of Figure 1A;
Figure 2: shows a frontal photographic image of a human subject displayed alongside a corresponding generic head form and used to provide head and face data to be transferred into an electronic environment in accordance with the invention;
Figure 3: shows a side or profile photographic image and corresponding generic head form to be used, in conjunction with the images shown in Figure 2, to form the data to be transferred into an electronic environment in accordance with the invention;
Figure 4: shows a diagram of part of the draft MPEG-4 standard for identifying feature points when modelling human head features using digital modelling techniques;
Figure 5: shows a data capture window forming part of the subject interface incorporated in a system according to the invention;
Figures 6a & 6b: show data capture windows subordinate to that shown in Figure 5;
Figure 7: shows a further data capture window subordinate to that shown in Figure 5;
Figure 8: shows a general flow diagram of the processing element of the invention;
Figure 9: shows a projected head texture model showing an allocation and arrangement of triangular polygons therein;
Figure 10: shows a selection of HFPs used to demarcate the frontal region of a head for use in the invention described herein; and
Figure 11: shows a general flow diagram of the operation of an environment interface or SDK.
Detailed Description of One Mode of Performing the Invention
This invention provides a system for allowing virtual 3-dimensional human features to be created, readily transferred into a multiplicity of pre-programmed electronic environments, and used therein as integral parts of the functions or activity pre-programmed into the environments. The invention has been developed, in particular, to allow personalised 3-dimensional human head representations to be transferred into electronic or computer games and substituted for the head features of one or more characters pre-programmed into the game. In this way, players can readily import their own head representations into games and thereby assume the role of a character within the game. The term "head" as used herein is to be interpreted as including face details.
Whilst the above introduction makes specific reference to computer games, it should be appreciated that the inventive system described herein could be applied to games intended for use on purpose built games consoles or on personal computers; and could also be applied to personalising human activity over computer networks such as the internet, for example in "chat rooms", in on-line shopping situations and in multi-player games where the players or participants are physically sited at remote locations. Representations created according to the invention might also be used with internet connectable wireless devices.
Referring firstly to Figures 1A and 1B, the system according to the invention provides for the creation of realistic virtual 3-dimensional human features and then provides for the use of those features in electronic environments such as electronic games and computer network sites.
In the form shown the system includes a subject interface 20 whereby visual images and other non-visual text data relating to a human subject can be captured and inputted. The system further includes a creation process 21 configured to receive data from the subject interface 20 and, from that data, create realistic virtual 3-dimensional human features. Finally, as can be seen in Figure 1B, functional elements such as a computer game 22 and/or an internet site 23 may be suitably adapted by the incorporation of an environment interface 24 which allows the 3-dimensional human representations generated at 21 to be functionally integrated into the game and/or site.
The system as described herein is preferably configured so that all necessary data is inputted by the human subject whose features are to be replicated in virtual 3-dimensional form. To this end the system preferably makes use of a personal computer which is conveniently accessed via a network such as the internet. Alternatively, or in addition, the system may interact with a network connectable electronic games console, for example those sold under the trade mark DREAMCAST or PLAYSTATION 2.
The subject interface 20 comprises capture software, the functional elements of which are described in greater detail below. This software may be distributed to users or potential users by floppy disc or CD, or may be downloaded via the internet, preferably from the same site at which the creation process software is resident.
Once the appropriate data has been collected using the subject interface 20, this data can be transferred into the creation process 21 via an internet link as shown by the double lines in Figure 1A.
3-dimensional human features created by the creation process 21 may be held in a local memory device 25 which may take the form of a memory card or memory disc, or a computer internal memory. Whatever the case, each 3-dimensional representation is preferably also held, in a suitably secure manner, in an electronic repository such as a network server or database 26. Thus virtual 3-dimensional data can be downloaded, at the command of the authorised subject, into local memory 25 or directly into adapted environments 22 and 23 to be used in a personal manner as an integral part of the pre-programmed environment.
Turning now to Figures 2 and 3, the visual data which, in use, is inputted by a subject user preferably comprises a minimum of two 2-dimensional photographs of the subject's head: a frontal view as shown in Figure 2 and a side profile as shown in Figure 3. A further side profile image taken from the opposite side to that shown in Figure 3 could also be provided though, in most instances, it is convenient to assume that the third image is a mirror image of Figure 3 and simply to produce a mirror image of Figure 3 digitally.
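By way of illustration only, the digital mirroring can be sketched in a few lines; this sketch assumes Python with NumPy and Pillow, and the filenames are hypothetical.

    # A minimal sketch of producing the third (mirrored) profile image
    # digitally, assuming the photograph is loaded as a NumPy array.
    import numpy as np
    from PIL import Image

    side = np.asarray(Image.open("side_right.jpg"))   # hypothetical filename
    mirrored = np.fliplr(side)                        # flip left-right about the vertical axis
    Image.fromarray(mirrored).save("side_left.jpg")   # serves as the opposite profile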
The photographic images are loaded after prompting by the subject interface software loaded on to the user's computer. The images must be loaded in digital format. This can be achieved by downloading images, obtained using a digital camera, into the computer directly from the camera. Alternatively, photographic film images may be scanned using a digital scanner and the data then downloaded to the computer.
Following loading and saving of the 2-dimensional visual images, the subject interface software 20 then prompts the identification and insertion of head feature points (HFPs) thereon. These HFPs are essentially 3-dimensional points lying on the head surface, each with its own unique identifier and each having a 2-dimensional projection on one or more photographs. Alternatively, the creation process software 21 may incorporate feature recognition software which eliminates the need to insert HFPs.
Assuming the absence of feature recognition software, the subject interface software 20 preferably displays each 2-dimensional user image alongside a corresponding generic image as shown in Figures 2 and 3. Feature points, such as are indicated by reference numerals 27 and 28 in Figures 2 and 3, are then highlighted and the user is prompted to locate and click the mouse pointer on the corresponding part of the inputted image so as to establish that HFP on the inputted image.
A number of inserted HFPs are shown on the inputted images of Figures 2 and 3. These correspond to points identified in the draft MPEG-4 standard shown in Figure 4. The number of HFPs which are inputted can vary: the greater the number inserted, the greater the detail and realism of the output virtual 3-dimensional representation.
In the form herein described, HFPs are presented one at a time on the generic images. As each corresponding HFP is established on the inputted photographic image, a new HFP is highlighted on the generic image. This continues until all HFPs on the generic images have corresponding points identified on the photographic images or until the user elects to terminate the HFP allocation process.
In the particular embodiment herein described, seven HFPs are identified on the profile view and eight HFPs are identified on the frontal view.
As an alternative all HFPs could be displayed simultaneously on the generic images.
After the HFP allocation process is complete, the subject interface preferably prompts the collection of further text data relating to the subject user. Referring now to Figure 5, a window such as that shown could be displayed requesting the user to specify details such as:

Name
Nickname
e-mail address
Age
Height
Weight
Gender
Ethnic Appearance
These additional details not only provide for ready identification of the subject but may also allow not just head and face features to be modelled, but complete body shapes to be adapted, so as to make a body shape in the particular environment a more faithful representation of the subject's true body shape.
Options for some of the above elements may be provided in the form of pull down menus.
Once the basic information requested in the menu shown in Figure 5 has been provided, more detailed gender information may be sought by the triggering of the menus shown in Figures 6a and 6b. As can be seen the male menu option shown in Figure 6a prompts the user to enter particular physical measurements which will enhance the functioning of the virtual 3-dimensional features created by the system, in virtual shopping situations. The same applies to the factors listed in the female option depicted in Figure 6b.
The creation process 21 may operate by working from a single generic head model or, as shown, the subject interface 20 may further prompt subject users to select a base generic head which best represents the user's overall head shape. Since head shape and basic features tend to follow trends established by ethnic origin, different generic head options may be sorted according to similarity with particular ethnic groups.
Selecting the Ethnic Appearance option listed above preferably highlights a pull down menu such as is shown, by way of example only, in Figure 7. A number of images may be presented both for males and females, the subject user opting for the one which, in the user's view, most closely represents the user's head characteristics. Each may be labelled with an ethnic designation and, obviously, particular ethnic designations may be sub-divided and/or more than one option provided for a particular ethnic designation.
Once data in the format identified above is complete to the satisfaction of the user, the data is transferred to the creation process 21 for creation of the virtual 3-dimensional representations in a manner described in greater detail below.
As can be seen from Figures 2 and 3, the subject's head should be upright in the frame of each of the photographic images although, as will become more apparent from the following description, the modelling method adopted herein does provide for some tilting of the head parallel to the image plane. For example, with reference to Figure 3, the method will accommodate some forward tilting of the head.
The creation process 21 takes as input the photographs, the 2-dimensional co-ordinates of the marked HFPs and the choice of generic head. It then produces as output a 3-dimensional digital representation of the head of the subject in the photographs. This representation is a general likeness of that subject as he/she appears in the photographs in both shape and surface appearance. Following appropriate application of the environment interface 24, this 3-dimensional digital representation can subsequently be displayed by software or hardware applications, such as 3-dimensional computer games, web browsers, MPEG-4 players or any application that renders 3-dimensional data.
A diagrammatic overview of the 3-dimensional creation process is given in Figure 8.
1. Generic Head Model
As stated above, the software system has several generic head models loaded (or available to load) which cover the range of gender, age and other attributes. One of these is selected which matches the choice of generic head made by the subject. This model will henceforth be referred to as the generic model. A copy of it is made by the software system; this copy is altered to produce a likeness of the subject. This copy will henceforth be referred to as the subject model.
The generic model contains 3-dimensional co-ordinates of the HFPs on that model. It also contains data representing the surface of the head and its component parts. A very common format for this data is the polygonal mesh, and this format is assumed herein. Other common 3-dimensional graphics representations such as NURBS could equally well be employed, with appropriate adjustments to the morphing and texture extraction modules. The generic model, and hence the subject model, is placed in a co-ordinate frame referred to here as world space. As can be seen in Figure 4, the origin of this space is at the top of the neck, the positive X axis passes through the left side of the face, the positive Z axis passes underneath the nose and the positive Y axis passes vertically through the top of the head.
2. Camera Matrices
The 2-dimensional co-ordinate frame within which the marked HFPs in each photograph are stored is referred to as the image space of the photograph. The positioning of the camera relative to world space is encoded by a co-ordinate frame referred to as the camera space of that photograph. Transformations between the world and camera spaces, and between camera and image space, are described in matrix notation in the conventional manner.
In the system described herein, the 3-dimensional HFP co-ordinates of the subject model are constructed directly in world space. However, it is possible also to reconstruct a 3-dimensional structure in an intermediate space, and then construct a transformation to world space.
Although it is possible to extract 3-dimensional structure without doing so, the first step in extracting the 3-dimensional shape of the subject's head is the calculation of the camera matrices for each photograph. The camera matrices depend on the camera model. A projective camera model incorporates the effect of perspective and hence is used in reconstruction from photographs of large objects such as buildings. However, for faces, the depth of a face is usually not large compared to the distance from the camera to the face, and hence an affine camera model may be used. A scaled orthographic camera model is a restriction on the affine model in which the world-to-camera space transformation is a rigid-body one and the object being photographed projects onto the image plane in a parallel fashion with uniform scaling only.
To extract 3-dimensional information, the system either needs to make assumptions about the camera parameters within the camera model, or about the shape of the subject's face. One example of the latter approach under the affine camera model would be to extract the 3-dimensional co-ordinates of the HFPs in an intermediate space which is an unknown affine transformation away from the true Euclidian structure of world space. A database of head statistics containing average distances and angles (and standard deviations) between HFPs for different ages and genders could then be employed in the form of constraints to build the final affine-to-Euclidian transformation [ref 2].
An alternative approach used herein is to make reasonable assumptions about the camera parameters. The transformation between world space and camera space can be assumed to be partially constrained by the requested photographic views (front, left, right). The internal camera parameters can be derived under the assumption of a scaled orthographic camera model. This makes the head look like the photographs in shape, i.e. including its distortions, rather than imposing structure from normalized faces. Points in world space are denoted by capital P and represented by the homogeneous co-ordinates [X Y Z 1]^T. Points on an input image are denoted by lower-case p = [x y 1]^T.
The camera parameters are encoded in a 3x4 matrix M which maps points from world space to image space. This is decomposed into two matrices:

M = C T

C accounts for the internal camera parameters and has the following form under the scaled orthographic model:

    C = | f 0 0 0 |
        | 0 f 0 0 |
        | 0 0 0 1 |

where f is the focal length. T encodes the transformation from head (world) space to camera space and, under the assumption of a rigid-body transformation, is of the following form:

    T = | RX^T TX |
        | RY^T TY |
        | RZ^T TZ |
        | 0^T  1  |

where RX^T, RY^T and RZ^T are the unit vectors in world space on which the x, y and z axes of the camera space lie (together forming a 3x3 rotation matrix), and [TX, TY, TZ]^T is a translation vector representing the origin of the head frame in the camera frame. The matrix M = C T is thus of the form:

    M = | f RX^T  f TX |
        | f RY^T  f TY |
        | 0^T     1    |
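By way of illustration only, the composition M = C T can be sketched as follows; this is a minimal sketch in Python with NumPy, not part of the system as claimed, and the example values of f, R and the translation are arbitrary.

    # A minimal sketch of composing the scaled orthographic camera matrix
    # M = C T from the scale factor f, the rotation R whose rows are
    # RX^T, RY^T, RZ^T, and the translation [TX, TY, TZ].
    import numpy as np

    def camera_matrix(f, R, t):
        T = np.eye(4)
        T[:3, :3] = R                     # world-to-camera rotation
        T[:3, 3] = t                      # origin of the head frame in the camera frame
        C = np.array([[f, 0, 0, 0],
                      [0, f, 0, 0],
                      [0, 0, 0, 1]], dtype=float)
        return C @ T                      # 3x4 matrix mapping world space to image space

    # Arbitrary example: project a homogeneous world point P = [X Y Z 1]^T
    M = camera_matrix(1.5, np.eye(3), [10.0, 5.0, 0.0])
    p = M @ np.array([0.0, 2.0, 1.0, 1.0])   # p = [x y 1]^T in image space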
The requested camera positions for the supplied photographs require the camera to be at the front or side of the head. Letting the camera's RZ axis lie on the line of sight, the RX and RY vectors lie in a plane parallel to the image plane and are orthogonal to each other. Hence their common orientation within that plane is one degree of freedom, which allows for any tilt of the head around the direction of view (although it is requested that heads are upright in the frame of the photograph, this can be difficult to achieve). TX and TY form two more degrees of freedom, and f (the scale factor) forms the fourth and last degree of freedom.
For the front camera matrix, the co-ordinates of two points in the front photograph and their corresponding XY co-ordinates in head space are sufficient to provide these degrees of freedom. As an example, one HFP between the eyes and one HFP at the centre of the mouth can be chosen. The desired XY co-ordinates of these points in world space are known from the generic model. Each point in the photo is related to the world space co-ordinates as follows:

x = f*RXx*X + f*RXy*Y + TX
y = f*RYx*X + f*RYy*Y + TY

Since RX is orthogonal to RY,

x = f*RYy*X - f*RYx*Y + TX

Hence the two matched points give four equations in four unknowns (f*RYx, f*RYy, TX, TY) which are solved trivially. Normalizing the found vector (f*RYx, f*RYy) gives RY; the scale factor f is the length of that vector.
Matching the YZ head space co-ordinates with the 2-dimensional co-ordinates of the same two HFPs in each side photograph will give the side camera matrices in the same manner.
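By way of illustration only, the following minimal sketch (Python with NumPy, not part of the system as claimed) solves the four equations above for the front camera; the two example HFP co-ordinate pairs are arbitrary.

    # Recover (f, RY, TX, TY) for the front camera under the scaled
    # orthographic model, from two matched HFPs.
    import numpy as np

    def front_camera_params(world_xy, image_xy):
        """world_xy, image_xy: two (X, Y) / (x, y) pairs for the chosen HFPs."""
        A, rhs = [], []
        for (X, Y), (x, y) in zip(world_xy, image_xy):
            # x = b*X - a*Y + TX  and  y = a*X + b*Y + TY,
            # with unknowns a = f*RYx, b = f*RYy, TX, TY
            A.append([-Y, X, 1.0, 0.0]); rhs.append(x)
            A.append([X, Y, 0.0, 1.0]);  rhs.append(y)
        a, b, TX, TY = np.linalg.solve(np.array(A), np.array(rhs))
        f = np.hypot(a, b)            # scale factor = length of (f*RYx, f*RYy)
        RY = np.array([a, b]) / f     # normalized in-plane y-axis of the camera
        return f, RY, TX, TY

    # Hypothetical example: eye-bridge and mouth-centre HFPs. Note that
    # image y typically grows downward, which shows up in the sign of RY.
    f, RY, TX, TY = front_camera_params([(0.0, 3.2), (0.0, -1.0)],
                                        [(160.0, 90.0), (160.0, 215.0)])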
3. 3D HFP Extraction
The next module has the responsibility of extracting the 3-dimensional world space co-ordinate data of the subject model's HFPs.
Denoting the camera projection matrix for a camera k as Mk:

Mk P = pk

Let Mk be partitioned as:

    Mk = | Rk  Tk |
         | 0^T 1  |

where Rk is 2x3 and Tk is 2x1. The third row of Mk can be ignored as it just copies the final homogeneous co-ordinate from world space to image space and hence will set it to be 1. Then:

Rk [X Y Z]^T = pk - Tk

For all the n images in which P is marked, the Rk on the left are stacked to give a 2nx3 matrix. The pk - Tk terms on the right are similarly stacked. Each XYZ is then found by solving the resulting system with a linear least-squares technique such as a Singular Value Decomposition.
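A minimal sketch of this stacking, assuming Python with NumPy (illustrative only, not part of the system as claimed):

    # Stack the 2x3 blocks R_k and the right-hand sides p_k - T_k, then
    # solve for XYZ by least squares (np.linalg.lstsq is SVD-based).
    import numpy as np

    def triangulate_hfp(cameras, points_2d):
        """cameras: list of 3x4 matrices M_k; points_2d: matching (x, y) marks."""
        A, rhs = [], []
        for M, p in zip(cameras, points_2d):
            R = M[:2, :3]            # 2x3 upper-left block of M_k
            T = M[:2, 3]             # 2x1 translation part
            A.append(R)
            rhs.append(np.asarray(p, dtype=float) - T)
        A = np.vstack(A)             # (2n) x 3
        rhs = np.concatenate(rhs)    # (2n)
        XYZ, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return XYZ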
This gives us the 3-dimensional co-ordinates for those HFPs that are marked in at least two images. With the exception of the two points used in calculating the camera matrices, each
HFP found in this manner will differ in position from its location in the generic head. These differences in position give the information necessary to interpolate the new positions of the remaining HFPs (just like the vertices in section 4).
This interpolation is achieved via the technique of scattered data interpolation with radial basis functions. This technique is used at several places by the software system and is described in section 6.

4. Producing a Head Shape
The previous stage of the process has given us a full set of 3-dimensional HFPs for the subject model. The next stage produces a geometric model whose shape is a likeness of the subject based on these points, by morphing the surface geometry of the subject head according to the spatial differences between the generic head's HFPs and the subject head's HFPs.
For a polygonal representation, this means interpolating the vertices of the representation. A triangle representation ensures that the polygons remain planar after this operation. This morphing is again achieved via the technique of scattered data interpolation with radial basis functions, as in [ref 1] and section 6.
5. Texture Extraction
Thus far the process has produced a subject head whose shape is a likeness of the subject as depicted in the input photographs. The next stage gives the subject head model the surface appearance of the subject. This is achieved via the creation of a texture image and its mapping onto the head model.
Texture mapping is a common and standard practice of 3-dimensional graphics software. By convention the 2-dimensional space of the texture image is addressed by uv co-ordinates, u horizontally and v vertically. A pixel of a texture image may be referred to as a texel. The mapping of the texture image onto the model is achieved by embedding uv co-ordinates with the vertex data - this serves to locate each vertex in the texture image. The uv co-ordinates embedded in the model in this manner enable any subsequent rendering process (as in a game, browser or any other application that displays a virtual head created in accordance with this invention) to transfer texels from the appropriate portion of a texture image to the screen as it renders each triangle during display of the head.
For a triangular polygonal model, the three uv co-ordinates locate each triangle within the texture image - we shall refer to this as the uv image of the triangle. Figure 9 illustrates the uv images of the triangles in a head model. It shows that component parts of a head model (such as eyes, ears, teeth and tongue) can have their uv images in different parts of the texture image.
The uv co-ordinates for head models are often assigned according to a cylindrical mapping. This tends to produce a blurry effect and allocates only a small portion of the texture image to the front of the face. [ref 3] describes a uv mapping which allocates more of the texture image to the front of the face and uses an orthographic projection for the front, since that is where most detail of interest lies.
The system described herein uses the uv values held in the vertices of the generic model unaltered in the subject model. This allows the mapping strategy to be implemented as part of the generic model creation process (which is a separate, possibly time-consuming process finished before the virtual 3-dimensional creation system is running), and hence allows for possibilities such as the use of commercially-available texture mapping software tools, or specially-written software. Many such tools allow interactive adjustment of the uv co-ordinates of the vertices under operator control. The generic heads used by the present implementation of the invention are mapped such that the front of the head occupies more of the texture image than the sides, a constraint which carries over to the subject model, since vertices of the generic model morph to generally the same region of the head in the subject model. Figure 9 shows one such mapping wherein a large proportion of the texture image is devoted to the front of the head.
The texture creation process is then a case of transferring colour information from the input photographs to the texture image, guided by the uv images of each triangle. This is achieved as follows:
For each triangle:
For each texel that lies within the triangle's uv image:
(1) Compute the XYZ co-ordinates and surface normal of the corresponding internal point of the triangle in world space;
(2) using the information from step (1), compute the weightings which express the relative contributions of each photograph to the texel;
(3) using the information from step (1), locate the point in each (remapped) photograph which step (2) has decided will contribute something to the texel;
(4) transfer colour information from the remapped photos to the texture image, weighted according to step (2).
Each step is elaborated upon below.
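By way of illustration only, the loop structure can be sketched as follows; every helper name here (texels_inside, choose_photo, project and the mesh/photo containers) is a hypothetical stand-in for the per-step sketches elaborated below, not part of the system as claimed.

    # Illustrative skeleton only: ties steps (1)-(4) together.
    def fill_texture(triangles, photos, texture):
        for tri in triangles:                                   # each triangle of the model
            for (u, v) in texels_inside(tri.uv):                # texels in the uv image
                P = texel_world_point((u, v), tri.uv, tri.xyz)  # step (1)
                photo = photos[choose_photo(P)]                 # step (2): frontal-mask test
                x, y = project(photo.camera_matrix, P)          # step (3): remapped photo
                texture[v, u] = bilinear_sample(photo.image, x, y)  # step (4)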
Step (1) Computing the 3-dimensional location of a point
Step 1 is achieved via Barycentric co-ordinates; for each texel enclosed by the triangle's uv image, the Barycentric coordinates of that point within the triangle are computed. These Barycentric co-ordinates allow the calculation of the world space co-ordinates of that point based on the world space coordinates of each vertex. The surface normal is calculated trivially.
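A minimal sketch of step (1), assuming Python with NumPy (illustrative only): compute the Barycentric co-ordinates of a texel within a triangle's uv image and use them to interpolate the world-space point. Vertex normals could be interpolated with the same weights.

    import numpy as np

    def barycentric(uv, uv0, uv1, uv2):
        """Barycentric co-ordinates of 2-d point uv in triangle (uv0, uv1, uv2)."""
        d = (uv1[1] - uv2[1]) * (uv0[0] - uv2[0]) + (uv2[0] - uv1[0]) * (uv0[1] - uv2[1])
        w0 = ((uv1[1] - uv2[1]) * (uv[0] - uv2[0]) + (uv2[0] - uv1[0]) * (uv[1] - uv2[1])) / d
        w1 = ((uv2[1] - uv0[1]) * (uv[0] - uv2[0]) + (uv0[0] - uv2[0]) * (uv[1] - uv2[1])) / d
        return w0, w1, 1.0 - w0 - w1

    def texel_world_point(uv, tri_uv, tri_xyz):
        """tri_uv: three uv pairs; tri_xyz: three world-space vertex positions (arrays)."""
        w0, w1, w2 = barycentric(uv, *tri_uv)
        return w0 * tri_xyz[0] + w1 * tri_xyz[1] + w2 * tri_xyz[2]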
Step (2) Computing the weighting between photographs
The weighting implemented in the system described herein is mutually exclusive; only one photograph is allowed to contribute to the texel. The normal is not employed in determining which photograph; it is calculated for the purposes of software modularity since alternative implementations could use the dot product of the normal and the camera view vector to help determine weightings (e.g. as in [ref 1]). Here, only the location of the point is used. For example, if the triangle point lies at the front of the face, only the front photo is used to transfer the colour information.
This requires the software system to be able to determine what part of the model constitutes the front of the face. Following [ref 3], it uses HFPs to demarcate a frontal region of the face, which shall be referred to as the frontal mask. An example list of HFPs for this purpose is as follows (working anti-clockwise when looking from the front of the face) and is illustrated in Figure 10:
* the top of the forehead,
* the right edge of the right eyebrow
* a right jaw point
* the chin tip
* a left jaw point
* the left edge of the left eyebrow
* and finally the top of the forehead again.
This HFP list defines a straight-edged polygon when viewed from the front of the face as can be seen in Figure 10. If the point P lies within this polygon when viewed from the front and in the front half of the head, it is deemed to lie within the frontal mask, and the front photo is chosen. Otherwise the left or right photo is chosen according to which side of the head the point lies.
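By way of illustration only, the frontal-mask test can be sketched as follows in Python, assuming the world space orientation described earlier in which the positive Z axis points out of the front of the face:

    # Project onto the XY plane (front view), apply a standard even-odd
    # point-in-polygon test, and require Z > 0 (the front half of the head).
    def in_frontal_mask(P, mask_xy):
        """P: world-space point [X, Y, Z]; mask_xy: HFP polygon as (X, Y) pairs."""
        if P[2] <= 0.0:               # behind the front half of the head
            return False
        x, y, inside = P[0], P[1], False
        n = len(mask_xy)
        for i in range(n):            # even-odd ray-casting test
            (x0, y0), (x1, y1) = mask_xy[i], mask_xy[(i + 1) % n]
            if (y0 > y) != (y1 > y):
                if x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
                    inside = not inside
        return inside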
Steps (3) and (4) Mapping to input photos
The HFPs of the subject model, when projected back onto the photographs, will not generally lie at their marked positions (due to the arbitration between the different views as described in section 3). For this reason, the photos are remapped in a preliminary step as follows. For each photograph, all HFPs are projected into the image space of the photograph via the camera matrix. The original photo is then warped such that the projected HFPs and the marked HFPs on that photo are realigned. The image warping is again achieved via 2-dimensional radial basis functions as in [ref 4] and section 6. During texture filling, the XYZ point is projected onto the remapped photo via the camera matrix to obtain the colour information to be transferred to the texture image. Since points will generally lie between integer pixel co-ordinates, bilinear interpolation is used to obtain a colour from the nearest pixels.
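A minimal sketch of the bilinear interpolation, assuming Python with NumPy (illustrative only):

    # Sample an image at continuous pixel co-ordinates (x, y) by blending
    # the four nearest pixels.
    import numpy as np

    def bilinear_sample(img, x, y):
        """img: HxWx3 array; (x, y): continuous pixel co-ordinates."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bottom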
Blending
In the texture image, different texels have their colour transferred from different photographs. The lighting conditions in different photographs may differ. For this reason, the photo used by each texel is recorded in a separate table. Once the texture has been filled in, boundaries in the texture between texels from different photos are blended by a technique such as multi-resolution pyramid decomposition as described in [ref 3].
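By way of illustration only, a compressed sketch of multi-resolution pyramid blending in the spirit of [ref 3], assuming Python with OpenCV and NumPy; the level count is arbitrary and the image dimensions are assumed to be divisible by 2 at each level.

    # Blend the Laplacian pyramids of two texture fills under a Gaussian
    # pyramid of the source mask, then collapse the result.
    import cv2
    import numpy as np

    def pyramid_blend(img_a, img_b, mask, levels=4):
        """img_a/img_b: float32 HxWx3; mask: float32 HxW in [0,1], 1 where img_a is used."""
        ga, gb, gm = [img_a], [img_b], [mask]
        for _ in range(levels):
            ga.append(cv2.pyrDown(ga[-1]))
            gb.append(cv2.pyrDown(gb[-1]))
            gm.append(cv2.pyrDown(gm[-1]))
        blended = ga[-1] * gm[-1][..., None] + gb[-1] * (1 - gm[-1][..., None])
        for i in range(levels - 1, -1, -1):
            size = (ga[i].shape[1], ga[i].shape[0])
            la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)   # Laplacian level of A
            lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)   # Laplacian level of B
            m = gm[i][..., None]
            blended = cv2.pyrUp(blended, dstsize=size) + la * m + lb * (1 - m)
        return blended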
6. Scattered Data Interpolation with Radial Basis Functions
Several modules described above utilise scattered data interpolation with radial basis functions. This section describes the technique.
Let Pi and Qi (i = 1 ... n) both denote the same set of n points; Pi denote the points in their original positions, Qi denote the points in their new positions.
The new position of any point P to be interpolated is calculated from a vector-valued function f(P). The form of this function is:

f(P) = ∑i Ci g(||P - Pi||) + MP + T
where Ci are vector-valued coefficients, g is the radial function, and M and T form an affine transformation.
A suggested function g for facial morphing is g(d) = e^(-d/64), with units measured in inches ([ref 1]). Values suggested for image warping include g(d) = (d^2 + r^2)^0.5 where r is a constant [ref 4].
For each point i, the interpolation condition is:

f(Pi) = Qi
Hence the Ci, M and T are found by solving a set of linear equations expressing the above, plus equations expressing the following constraints for each co-ordinate (using the x coordinate as an example and assuming 3-d vectors):
∑i Cix = 0
∑i Cix Xi = ∑i Cix Yi = ∑i Cix Zi = 0
Each co-ordinate can be solved for separately ([ref 5] lays out the form of the equations clearly). With the Ci, M and T calculated, the new position of any point can be calculated by substituting its old position P into f(P).
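By way of illustration only, the whole technique can be sketched as follows in Python with NumPy, here using the multiquadric g(d) = (d^2 + r^2)^0.5 of [ref 4]; the value of r and the convention of applying M on the right are incidental choices of this sketch.

    # Fit the RBF interpolant: interpolation conditions f(Pi) = Qi plus the
    # side conditions sum_i Ci = 0 and sum_i Ci Pi^T = 0, then evaluate f.
    import numpy as np

    def fit_rbf(P, Q, r=1.0):
        """P, Q: n x d arrays of original and displaced point positions."""
        n, d = P.shape
        g = lambda dist: np.sqrt(dist ** 2 + r ** 2)     # multiquadric radial function
        G = g(np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2))
        A = np.zeros((n + d + 1, n + d + 1))
        A[:n, :n] = G                 # radial terms
        A[:n, n:n + d] = P            # affine part M
        A[:n, -1] = 1.0               # affine part T
        A[n:n + d, :n] = P.T          # side condition: sum_i Ci Pi^T = 0
        A[-1, :n] = 1.0               # side condition: sum_i Ci = 0
        rhs = np.zeros((n + d + 1, Q.shape[1]))
        rhs[:n] = Q
        sol = np.linalg.solve(A, rhs)
        C, M, T = sol[:n], sol[n:n + d], sol[-1]
        def f(X):
            return g(np.linalg.norm(X[None, :] - P, axis=1)) @ C + X @ M + T
        return f

    # Usage as in sections 3 and 4: fit on matching HFP sets, then move any
    # generic-model vertex V to its subject-model position with f(V).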
For the image re-mapping of section 5, the original positions Pi are as marked in the photographs, and the new positions Qi are the projections of the subject model's HFPs. The vector values are 2-dimensional.
For the vertex morphing of section 4, the original points are the 3-dimensional HFPs of the generic model and the new positions are the 3-dimensional HFPs of the subject model.
7. Output
The subject model is output by the creation process software system 21 in a file format referred to, for present purposes, as the DMK file format with file extension .DMK. This file resides on the server and may also be sent via email to the subject/owner for residence on his hard drive or other memory device.
The final principal part of the system described and claimed herein is the environment interface 24. When the 3-dimensional model, created as described above, is to be imported into a piece of software, the DMK file must be available to the software (e.g. by connecting to the dedicated server) and that software must be capable of extracting a representation of the head that it is able to manipulate and display within its own 3-dimensional virtual space. A broad description of the operating elements of the environment interface is shown in Figure 11.
In order to make a game or other piece of software capable of displaying such a model, the developers of that software have to utilise an application interface in the form of a software development kit (SDK) provided to enable importation of the 3-dimensional head model represented by the DMK file. As described herein, the SDK has been designed on the basis that the possible uses to which publishers and developers can put a 3-d model as produced herein will always defeat any attempts to predict or enumerate those uses. Applications that have been specifically anticipated include: 3-d games, cosmetics web sites, talking heads, chat rooms, visual instant messaging, optician's web sites, online shopping, email greeting cards.
The SDK described herein provides a means for application developers to gain access to the information contained within a DMK file. The run-time components of the SDK decompress and convert the .DMK file into an internal data structure in order to service requests from the application. These requests are made
through an Application Programmer's Interface (API).
For any platform, the SDK consists of:

* A core run-time library.
* An Application Programmer's Interface (API) to call the core run-time library.
* One or more public extension libraries with attendant APIs.
* One or more private extension libraries with attendant APIs.
* Documentation, including tutorial documentation.
* Sample source code.
* Other reference materials.
The core run-time library parses and decompresses the .DMK file, builds the internal scene data structure and allows the application to extract data in platform-independent formats. Subject to unusual exceptions where some information is only
available through a private extension library, applications can work solely at this level to obtain the information they need.
Public extensions provide utilities and routines for common tasks to make the task of developing for the SDK simpler and more efficient. They access the internal data structures only via
calls to the core run-time. Source code will usually be provided for these libraries. The tasks may or may not be platform dependent.
Private extensions are in compiled form only and have the ability to hook directly into the internal data structures in order to set values or present data to the application. They usually provide platform-dependent functionality and provide a more efficient method of querying and converting the 3-d model data. A typical example is converting some data into a platform- specific or third-party format. These might also provide bespoke functionality exclusive to certain applications.
The API and library, in the manner of software abstraction, provide the developers with their only perception of the data embedded in the DMK file. That is to say, it is irrelevant how the head representation is stored in the DMK file; developers only need know whether and how they can obtain a data representation of the head in a format that their software supports, where this representation includes not only surface geometry such as triangle mesh information, but also material properties (describing reflectance, shininess, and textures etc).
Although the models created in accordance with the invention are designed to be rendered in 3-d, the API described herein is
designed such that it is not necessary to make calls to the API from within a rendering loop. There are no rendering calls in the API described herein.
The API presents the data to the application much in the manner of a 3-d modelling program. This includes data to allow
the 3-d model to be animated. Whenever required, the application makes calls to the API to obtain data in a format it can accommodate. If the application has a rendering component, that application has the responsibility of storing and/or further manipulating the data and presenting it when
required to the renderer. Help as to how to apply animation to the 3-d model data is provided in the form of source code. A typical point of calling in a game would be during start-up, level loading or in an options screen. On a web server an application might convert the 3-d model to a 3-d format and send that file over the web to a 3-d player on the client.
Although it is the decision of the developer when the SDK is closed down, it is intended that it will be closed immediately after reading the required contents.
The differing demands of applications enabled to make use of the 3-d models described herein inevitably lead to the issue of
scalability. The API supports this requirement in several ways. The application has the ability to request a certain number of triangles in each mesh. The SDK will return a mesh with the largest triangle count less than or equal to the requested number. In addition, textures can be supplied to the application
in any requested size.
Bearing in mind that files are constantly being created on the subject technology platform, from time to time, new forms of data will be embedded into the created DMK files to allow enhanced application capabilities. Similarly new versions of the
SDK will be released from time to time. To allow for forwards compatibility of enabled applications with 3-d models created in the future, the run-time components of the SDK take the form of dynamically linked libraries. Should there be a format change in DMK files, the owner of such a DMK should obtain
the new run-time components for the application to link with. Backwards compatibility is obtained by ensuring that parsing code in the core run-time library of the SDK is always backwards compatible with DMK files created up to that point. Some applications may wish to import a 3-d model of a specific person or people. One example could be a specific sports
personality in a game, or a company representative on a web site. The applicant herein takes steps to ensure that 3-d models are not used outside their original intention. In addition, within the consumer realm, a distinction is drawn between a local 3-d model, and one served over the web. As one means of enforcing
this, when a developer is given access to a copy of SDK, he is given an application key (a character string) to unlock 3-d model files. The application must pass this key through to the SDK when opening a DMK file. The SDK checks that the application is allowed to use that particular 3-d model by
comparing usage data contained in the application key with data embedded in the DMK file at creation time or when it is extracted off the server for a particular purpose.
The SDK and API provided in accordance with the invention thus includes routines that provide a head representation in several formats and at different levels of detail. All that is required for this to be possible is for the internal DMK data to be capable of conversion to the specified output representation. In the case of a triangular representation, the output itself may be a multi-resolution format which would allow the calling software to display the head with varying numbers of triangles under that software's own control (e.g. more triangles when the head is closer to the camera).
The library thus has the tasks of
* performing this conversion,
* scaling, rotating and otherwise transforming the head such that it may be placed appropriately within the virtual space of the calling software,
* adjusting the neck to allow the head to sit on the intended body (if any) in the virtual space of the calling software, and
* supplying the ancillary data.
In addition to a detailed head representation, an abstract representation of the head may also be extracted. This can consist of only the feature point co-ordinates (with (u,v) co-ordinates), and the texture. The feature points output may be those specified in the ISO 14496 MPEG-4 standard for very-low bitrate audio-visual coding. These need not coincide with the HFPs used by the creation system described above but, so long as the MPEG-4 feature definition points are created for, and stored with, each generic model, they can be morphed to give the feature definition points for the subject model. The MPEG-4 feature points plus the texture alone may be of use to application software developers who have their own proprietary format for representing a head but who wish to adjust their own model to conform to the individual's head.
The output of MPEG-4 feature definition points also allows MPEG-4 Facial Animation Parameters to be applied to the subject model. Hence the creation process described herein can produce a likeness of the subject suitable for animating according to the MPEG-4 protocols and in any MPEG-4 player. Of course, other data necessary for other animation protocols, such as smooth skinning data, can also be embedded in the DMK file.
Finally, without alteration of the general SDK concept, provision should be made, in the case of communication between a user website server and server 26 requesting a model created according to the invention, for the server 26 to convert the DMK file to another format before sending the result. This would be in preference to sending the DMK file to the run-time library on the other machine if it is determined that the resulting file will be significantly smaller than the DMK file (and hence have a speedier transfer time). This is in effect simply a matter of allowing some functionality of the run-time library of the SDK to be distributed onto the server 26.
The use of the invention is as follows:
A person wishing to use his/her own image in an electronic game or internet site adapted as described above loads on to his/her personal computer the subject interface 20. This can be loaded from a CD or other memory disc, or can be accessed from an internet site. After loading digital images and completing data entry in response to the prompts described above, the data is then uploaded to the creation process 21 via the internet. The 3-dimensional modelling is then undertaken by the creation process 21 and the model thus created is then returned to the supplier of the photographs by e-mail for storage on the user's local memory device 25. A copy of the model is also stored in an electronic repository such as database or server 26. When the player wishes to make use of the model in a suitably formatted game, the player loads the model from the server 26 or the local memory device 25 into the games software. The game may allow the player to select the character in the game whose identity the player wishes to assume and may allow two or more players to take up identities within the game.
A subject may also download his/her virtual 3-dimensional model into network sites 23 so that interactive activity can be personalised. Virtual chat rooms could have bars into which participants can transfer head and other body features to enable more personal participation. In shopping sites, purchasers can use their own head and body dimensions to better gauge how they might look in particular clothes or with a particular hair style or wearing particular make-up or accessories.
It will thus be appreciated that the present invention, at least in the case of the particular embodiment thereof described above, provides a system for personalising activity in virtual environments which is relatively simple to use and which considerably enhances the range of human activities using such environments.
References
[1] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, D. H. Salesin, "Synthesizing Realistic Facial Expressions from Photographs", Proceedings SIGGRAPH '98, ACM Press.

[2] Z. Zhang, K. Isono, S. Akamatsu, "Euclidian Structure from Uncalibrated Images Using Fuzzy Domain Knowledge: Application to Facial Image Synthesis", Proc. International Conference on Computer Vision (ICCV'98), Bombay, India, January 4-7, 1998.

[3] Won-Sook Lee & N. Magnenat Thalmann, "Head Modelling from Pictures and Morphing in 3D with Image Metamorphosis Based on Triangulation", Proceedings CAPTECH '98, pp 254-267, Springer-Verlag, 1998.

[4] D. Ruprecht & H. Muller, "Image Warping with Scattered Data Interpolation", IEEE Computer Graphics and Applications, March 1995, pp 37-43.

[5] F. L. Bookstein, "Principal Warps: Thin-Plate Splines and the Decomposition of Deformations", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 11, no 6, June 1989, pp 567-585.

Claims
1) A system for creating and making use of 3-dimensional human body features in virtual environments, said system including:
a subject interface configured to receive data of the human subject from which said 3-dimensional human body features can be created;
a creation process communicable with said subject interface and being configured and operable to create virtual 3-dimensional human body representations from data received by said subject interface; and
an environment interface configured and operable to permit the creation or adaption of virtual environments to integrate therein, 3-dimensional human body representations created by said creation process.
2) A system as claimed in claim 1 when configured for the creation and use of virtual 3-dimensional human head representations in virtual environments.
3) A system as claimed in claim 1 or claim 2 wherein said subject interface comprises a software item which may be distributed conventionally, such as by floppy disc or CD, or may be downloaded over the internet.

4) A system as claimed in claim 3 wherein said subject interface is downloadable from a location at which said creation process is resident.

5) A system as claimed in any one of the preceding claims when configured to provide for data collected by said subject interface to be passed to said creation process via the internet.

6) A system as claimed in any one of the preceding claims wherein said subject interface is configured to receive a combination of 2-dimensional visual images of a human subject together with text data relating to said human subject.

7) A system as claimed in any one of the preceding claims wherein said subject interface includes prompting means operable to prompt a human subject to input specified data.

8) A system as claimed in claim 7 wherein said specified data includes 2-dimensional digital photographic data and also other data relating to that subject selected from the group comprising:

Name
Nickname
e-mail address
Age
Height
Weight
Gender
Ethnic appearance
9) A system as claimed in claim 8 wherein said specified data further includes gender specific data.

10) A system as claimed in any one of claims 7 to 9 wherein said prompting means is operable to prompt a subject to input digital photographic images of the front and at least one profile of the subject's head.

11) A system as claimed in claim 10 wherein said prompting means is further operable to prompt said subject to establish particular feature points on said photographic images.

12) A system as claimed in any one of claims 7 to 11 wherein said prompting means is still further operable to prompt a subject to select a generic head form which corresponds most in shape and features to said subject's own head.

13) A system as claimed in any one of the preceding claims wherein said subject interface is configured to receive photographic images created using readily available digital cameras or scanned from conventional photographic film images.

14) A system as claimed in any one of the preceding claims wherein said environment interface is incorporated in a computer game.

15) A system as claimed in any one of the preceding claims wherein said environment interface is incorporated in an environment in the form of a computer network or internet site, wherein personalised 3-dimensional human representations may be imported into said site to participate in activity pre-programmed into said site.

16) A method of playing an electronic game adapted to receive and make use of personalised virtual 3-dimensional human representations, said method including the steps of:

connecting said electronic game to a memory device on which at least one said 3-dimensional human representation is stored;

loading said 3-dimensional human representation from said memory device into said game; and

playing the game according to the facilities preprogrammed into said game whilst embodying said virtual 3-dimensional representation as part of said game.

17) A method as claimed in claim 16 wherein said memory device comprises a memory card, computer hard drive or other computer memory device.

18) A method as claimed in claim 17 wherein said virtual 3-dimensional representation may be loaded onto said memory card, computer hard drive or other computer memory device, from a databank containing a multiplicity of such representations, via the internet.
19) A method as claimed in any one of claims 16 to 18 further including nominating one or more locations in said game to be occupied by said 3-dimensional representations.
20) A method of interacting with an internet site adapted to receive and make use of virtual 3-dimensional human features, said method including the steps of:
connecting said internet site to a repository of said 3- dimensional human features via the internet;
downloading virtual 3-dimensional features into said site; and
interacting with said site according to the facilities preprogrammed into said site.
21) A method as claimed in claim 20 further including nominating one or more locations in said site to be occupied by said 3-dimensional features.
22) A computer game when adapted to receive substitute 3- dimensional human body features according to the system claimed in any one of claims 1 to 15.
23) An electronic network site when adapted to receive from an external source, 3-dimensional human body features according to the system claimed in any one of claims 1 to 15.
PCT/GB2001/000770 2000-02-22 2001-02-22 3d game avatar using physical characteristics WO2001063560A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001233947A AU2001233947A1 (en) 2000-02-22 2001-02-22 3d game avatar using physical characteristics

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0004165.7A GB0004165D0 (en) 2000-02-22 2000-02-22 System for virtual three-dimensional object creation and use
GB0004165.7 2000-02-22

Publications (1)

Publication Number Publication Date
WO2001063560A1 true WO2001063560A1 (en) 2001-08-30

Family

ID=9886176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/000770 WO2001063560A1 (en) 2000-02-22 2001-02-22 3d game avatar using physical characteristics

Country Status (3)

Country Link
AU (1) AU2001233947A1 (en)
GB (1) GB0004165D0 (en)
WO (1) WO2001063560A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003030086A1 (en) * 2001-09-28 2003-04-10 Koninklijke Philips Electronics N.V. Head motion estimation from four feature points
FR2843472A1 (en) * 2002-08-09 2004-02-13 Almiti Technologies 3D representation method, especially for representation of a human face, whereby a plurality of characteristic points is selected on a 3D-face representation and then matched with image points from corresponding images
WO2004081853A1 (en) * 2003-03-06 2004-09-23 Animetrics, Inc. Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
EP1794703A2 (en) * 2004-09-17 2007-06-13 Cyberextruder.com Inc. System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
WO2008009070A1 (en) * 2006-07-21 2008-01-24 Anthony James Trothe System for creating a personalised 3d animated effigy
US20080152213A1 (en) * 2006-01-31 2008-06-26 Clone Interactive 3d face reconstruction from 2d images
KR100861861B1 (en) * 2003-06-02 2008-10-06 인터내셔널 비지네스 머신즈 코포레이션 Architecture for a speech input method editor for handheld portable devices
WO2008144843A1 (en) * 2007-05-31 2008-12-04 Depth Analysis Pty Ltd Systems and methods for applying a 3d scan of a physical target object to a virtual environment
US7643671B2 (en) 2003-03-24 2010-01-05 Animetrics Inc. Facial recognition system and method
US7660431B2 (en) 2004-12-16 2010-02-09 Motorola, Inc. Image recognition facilitation using remotely sourced content
US8730231B2 (en) 2007-11-20 2014-05-20 Image Metrics, Inc. Systems and methods for creating personalized media content having multiple content layers
US8823642B2 (en) 2011-07-04 2014-09-02 3Divi Company Methods and systems for controlling devices using gestures and related 3D sensor
US9143721B2 (en) 2008-07-01 2015-09-22 Noo Inc. Content preparation systems and methods for interactive video systems
US10332560B2 (en) 2013-05-06 2019-06-25 Noo Inc. Audio-video compositing and effects

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114425162A (en) * 2022-02-11 2022-05-03 腾讯科技(深圳)有限公司 Video processing method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2287387A (en) * 1994-03-01 1995-09-13 Virtuality Texture mapping
EP0807902A2 (en) * 1996-05-16 1997-11-19 Cyberclass Limited Method and apparatus for generating moving characters
EP0883090A2 (en) * 1997-06-06 1998-12-09 AT&T Corp. Method for generating photo-realistic animated characters
EP0883089A2 (en) * 1997-06-03 1998-12-09 AT&T Corp. System and apparatus for customizing a computer animation wireframe
WO1999057900A1 (en) * 1998-05-03 1999-11-11 John Karl Myers Videophone with enhanced user defined imaging system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2287387A (en) * 1994-03-01 1995-09-13 Virtuality Texture mapping
EP0807902A2 (en) * 1996-05-16 1997-11-19 Cyberclass Limited Method and apparatus for generating moving characters
EP0883089A2 (en) * 1997-06-03 1998-12-09 AT&T Corp. System and apparatus for customizing a computer animation wireframe
EP0883090A2 (en) * 1997-06-06 1998-12-09 AT&T Corp. Method for generating photo-realistic animated characters
WO1999057900A1 (en) * 1998-05-03 1999-11-11 John Karl Myers Videophone with enhanced user defined imaging system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GANG XU ET AL: "THREE-DIMENSIONAL FACE MODELING FOR VIRTUAL SPACE TELECONFERENCING SYSTEMS", TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS OF JAPAN,JP,INST. OF ELECTRONICS & COMMUNIC. ENGINEERS OF JAPAN. TOKYO, vol. E73, no. 10, 1 October 1990 (1990-10-01), pages 1753 - 1761, XP000176466 *
TAKAAKI AKIMOTO ET AL: "3D FACIAL MODEL CREATION USING GENERIC MODEL AND FRONT AND SIDE VIEWS OF FACE", IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS,JP,INSTITUTE OF ELECTRONICS INFORMATION AND COMM. ENG. TOKYO, vol. E75 - D, no. 2, 1 March 1992 (1992-03-01), pages 191 - 197, XP000301166, ISSN: 0916-8532 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1316416C (en) * 2001-09-28 2007-05-16 皇家飞利浦电子股份有限公司 Head motion estimation from four feature points
WO2003030086A1 (en) * 2001-09-28 2003-04-10 Koninklijke Philips Electronics N.V. Head motion estimation from four feature points
FR2843472A1 (en) * 2002-08-09 2004-02-13 Almiti Technologies 3D representation method, especially for representation of a human face, whereby a plurality of characteristic points is selected on a 3D-face representation and then matched with image points from corresponding images
WO2004081853A1 (en) * 2003-03-06 2004-09-23 Animetrics, Inc. Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
US7853085B2 (en) 2003-03-06 2010-12-14 Animetrics, Inc. Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery
US7643683B2 (en) 2003-03-06 2010-01-05 Animetrics Inc. Generation of image database for multifeatured objects
US7643685B2 (en) 2003-03-06 2010-01-05 Animetrics Inc. Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
US7643671B2 (en) 2003-03-24 2010-01-05 Animetrics Inc. Facial recognition system and method
KR100861861B1 (en) * 2003-06-02 2008-10-06 인터내셔널 비지네스 머신즈 코포레이션 Architecture for a speech input method editor for handheld portable devices
EP1794703A2 (en) * 2004-09-17 2007-06-13 Cyberextruder.com Inc. System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
EP1794703A4 (en) * 2004-09-17 2012-02-29 Cyberextruder Com Inc System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
US7660431B2 (en) 2004-12-16 2010-02-09 Motorola, Inc. Image recognition facilitation using remotely sourced content
US8126261B2 (en) 2006-01-31 2012-02-28 University Of Southern California 3D face reconstruction from 2D images
US7856125B2 (en) 2006-01-31 2010-12-21 University Of Southern California 3D face reconstruction from 2D images
US20080152213A1 (en) * 2006-01-31 2008-06-26 Clone Interactive 3d face reconstruction from 2d images
WO2008009070A1 (en) * 2006-07-21 2008-01-24 Anthony James Trothe System for creating a personalised 3d animated effigy
AU2007276715B2 (en) * 2006-07-21 2012-04-26 Anthony James Trothe System for creating a personalised 3D animated effigy
WO2008144843A1 (en) * 2007-05-31 2008-12-04 Depth Analysis Pty Ltd Systems and methods for applying a 3d scan of a physical target object to a virtual environment
US8730231B2 (en) 2007-11-20 2014-05-20 Image Metrics, Inc. Systems and methods for creating personalized media content having multiple content layers
US9143721B2 (en) 2008-07-01 2015-09-22 Noo Inc. Content preparation systems and methods for interactive video systems
US8823642B2 (en) 2011-07-04 2014-09-02 3Divi Company Methods and systems for controlling devices using gestures and related 3D sensor
US10332560B2 (en) 2013-05-06 2019-06-25 Noo Inc. Audio-video compositing and effects

Also Published As

Publication number Publication date
AU2001233947A1 (en) 2001-09-03
GB0004165D0 (en) 2000-04-12

Similar Documents

Publication Publication Date Title
Pighin et al. Synthesizing realistic facial expressions from photographs
US6532011B1 (en) Method of creating 3-D facial models starting from face images
Wei et al. Modeling hair from multiple views
Pighin et al. Modeling and animating realistic faces from images
WO2001063560A1 (en) 3d game avatar using physical characteristics
US20100060662A1 (en) Visual identifiers for virtual world avatars
US20060109274A1 (en) Client/server-based animation software, systems and methods
Wu et al. Interactive normal reconstruction from a single image
Hauswiesner et al. Free viewpoint virtual try-on with commodity depth cameras
KR100859502B1 (en) Method for providing virtual fitting service and server of enabling the method
JP2004506996A (en) Apparatus and method for generating synthetic face image based on form information of face image
JP2002042169A (en) Three-dimensional image providing system, its method, morphing image providing system, and its method
WO2002069272A2 (en) Real-time virtual viewpoint in simulated reality environment
JP2013524357A (en) Method for real-time cropping of real entities recorded in a video sequence
JP2002342789A (en) Picture processor for generating three-dimensional character picture and picture processing method and recording medium with picture processing program recorded
KR20220075339A (en) Face reconstruction method, apparatus, computer device and storage medium
KR20230110607A (en) Face reconstruction methods, devices, computer equipment and storage media
Tarini et al. Texturing faces
US6795090B2 (en) Method and system for panoramic image morphing
Mundra et al. Livehand: Real-time and photorealistic neural hand rendering
JP2003090714A (en) Image processor and image processing program
WO2018182938A1 (en) Method and system for wireless ultra-low footprint body scanning
JP2002517840A (en) Three-dimensional image processing method and apparatus
JP2004178036A (en) Device for presenting virtual space accompanied by remote person's picture
Turner et al. Sketching a virtual environment: modeling using line-drawing interpretation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ CZ DE DE DK DK DM DZ EE EE ES FI FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP