
USH2253H1 - Multiple personality articulation for animated characters - Google Patents


Info

Publication number
USH2253H1
Authority
US
United States
Prior art keywords
personality
component
model
indicia
animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/215,666
Other versions
US20100302252A1
Inventor
Lena Petrovic
John Anderson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixar
Original Assignee
Pixar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixar
Priority to US12/215,666
Assigned to PIXAR. Assignment of assignors interest (see document for details). Assignors: ANDERSON, JOHN; PETROVIC, LENA
Publication of US20100302252A1
Application granted
Publication of USH2253H1
Status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/6009Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content

Definitions

  • a user such as a modeler or rigger specifies the different personalities to be expressed from the multiple personality object 100 .
  • a claw-type arm 140, a tentacle-type arm 150, and an antenna-type arm 160 are shown.
  • each of these personalities may be associated with an identifier, such as a personality identifier, a version number, or the like.
  • two personalities for legs are shown: legs 170 and wheels 180.
  • the leg type personalities can also be associated with a personality identifier, version number, or the like.
  • a personality A (e.g. version A) is associated with claw type arm 140 and legs 170
  • personality B (version B) is associated with tentacle type arm 150 and wheels 180
  • personality C is associated with antenna type arm 160 , and wheels 180 .
  • different personality identifiers may be specified for each personality of each component.
  • personality identifiers A-C may be respectively associated with personalities 120 for “arms”
  • personality identifiers D-E may be respectively associated with personalities 130 for legs.
  • the different personalities of the components need not be connected to the same portion of body portion 110 .
  • arms 140 and 150 connect to different portions of body portion 110 than arms 160
  • legs 170 connect to the bottom of body portion 110 and wheels 180 connect to the sides of body portion 110 .
  • a personality need not be specified for each multiple personality component.
  • an object may have arms 160 , but no personality specified for its legs.
  • FIG. 2 illustrates a flow diagram according to various embodiments of the present invention. More specifically, FIG. 2 illustrates a process for creating an object with multiple personalities.
  • a number of different personalities for a component are determined, step 200 .
  • a number of different users may contribute to the definition of the different personalities.
  • users, e.g. modelers, may specify the geometric construction of the component (e.g. joints, connection of parts, etc.); the surface of the component (e.g. hair, scales, etc.); and the like.
  • users, e.g. riggers, may specify control points, e.g. animation variables, etc.
  • a user initiates a modeling environment and initiates definition of an object that will include a component having different personalities, step 210 .
  • the user may specify the component having different personalities before defining other portions of the object, or may define other portions of the object before specifying a component to have multiple personalities.
  • an entire object may be defined having components with different personalities. For example, a model for an object may require a “type A” head, “type D” body, “type N” arms, “type N” legs, or the like.
  • the Pixar modeling environment Menv may be used. However, it is contemplated that other embodiments of the present invention may utilize other modeling environments.
  • the user may specify the location where the multi-personality component is to be coupled to other portions of the object, step 220 .
  • the user may specify that the personalities 120 for “arms” are coupled to positions 195 on the object.
  • each of the different personalities may be associated with different positions on the object. For example, personality A type arms may be connected to the front surface of an object, whereas personality B type arms may be connected to the back surface of an object, or the like.
  • the models of the different personalities for the component are retrieved from disk and loaded within the modeling environment, step 230 . This may be done by physically opening each of the models of the different personalities within the modeling environment. In various embodiments, the user may be able to view the different personalities for components, in a similar manner as was illustrated in FIG. 1 .
  • additional control variables may be specified for the object with each of the different personalities, if desired, step 240.
  • animation variables may be specified that control more than one component (and each personality of the components) of the object at the same time.
  • a user may specify a similar reaction for different personalities for an animation variable, and in other embodiments, the modeler may specify different reactions for different personalities for an animation variable.
  • a “surprised” animation variable value of 1.0 may be associated with the arms being raised up, and 0.0 may be associated with the arms being next to the object body.
  • a “surprised” animation variable of 1.0 may be associated with the arms of the object being elongated and touching the floor, and 0.0 may be associated with the arms being fully “retracted” into the object.
  • the object along with more than one model of personality of the multiple personality components are stored in a tangible media, such as a hard disk, a network storage, optical storage media, database, or the like, step 250 .
  • FIG. 3 illustrates various embodiments of the present invention. More specifically, FIG. 3 illustrates retrieval of a model 300 of a multiple personality object into a working environment, e.g. an animation environment, a video game environment, etc.
  • multiple personality object 300 is the same as multiple personality object 100 in FIG. 1 , and includes body portion 110 , and a number of personalities 120 for “arms” and a number of personalities 130 for “legs.”
  • a first personality for the multiple personality object 300 is desired, such as personality A, in FIG. 1 .
  • only personality A components are provided for object 320 for the user within environment 310 .
  • object 320 includes claw-type arms 330 and legs 340 .
  • a different personality for the multiple personality object 300 is desired, such as personality B, in FIG. 1 .
  • object 360 includes tentacle-type arms 370 and wheels 380 .
  • a different personality for the multiple personality object 300 may be desired, such as personality C, in FIG. 1 .
  • personality C components are provided to the user for object 390 , as shown by antenna-type arms 395 and legs 397 .
  • object 300 may serve as the template for the different personalities of the objects illustrated. Such embodiments could greatly reduce the amount of time required to generate, for example, an army of objects with different personalities.
  • the respective objects can then be manipulated or posed based upon output of software, e.g. video game software, crowd simulation software; based upon specification by a user, e.g. via the use of animation variables, inverse kinematics software; or the like.
  • FIGS. 4A-B illustrate a flow diagram according to various embodiments of the present invention. More specifically, FIGS. 4A-B illustrate a process for manipulating an object with multiple personalities.
  • the object is used for non-real-time animation (e.g. defining animation for feature animation), real-time animation (e.g. video games), or the like.
  • a model of an object with multiple personality components is identified, step 400 .
  • the object may be identified by a user, by a computer program, or the like.
  • the computer program may be a video game, where in-game characters or other non-player characters are to be shown on the screen.
  • the computer program may be a crowd-simulation (multi-agent) type computer program that can specify/identify the different objects (agents) to form a crowd of objects.
  • software available from Massive Software of Auckland, New Zealand is used, although other brands of multi-agent software may also be used.
  • such software typically relies upon a user, e.g. an animator, to broadly specify the types of agents, or objects, for the crowd.
  • the model of the object including all the multiple personality components stored therein is retrieved from memory (e.g. optical memory, network memory) and loaded into a computer working memory, step 410 .
  • opening one file including an object with multiple personalities is potentially more time efficient than opening many different files to “build-up” a specific configuration of an object.
  • the desired personalities for components of the object are determined, step 420.
  • the specific personality type is specifically selected by a user, or specified by a computer program.
  • an object may be a soldier-type character, and the different personalities may reflect different equipment being worn by the soldier.
  • a crowd-simulation computer program may specify a personality type for an object.
  • such software may select personalities for objects such that the crowd appears random, the crowd includes small groups of objects, or the like.
  • object 360 was specified to express personality B, and object 390 was specified to express personality C. Accordingly, object 360 includes tentacle-type arms 370 and object 390 includes antenna-type arms 395.
  • manipulations of the specified personality of the object may be determined, step 430.
  • the manipulation is typically specified in a pre-run-time environment.
  • a user such as an animator may manipulate the desired personality for the object via manipulation (e.g. GUI, keyboard) of animation variables, via inverse kinematics software, or the like.
  • the specified manipulation of the object may be determined via software, e.g. crowd simulation software, video game engine, artificial intelligence software, or the like.
  • the manipulations of the object may be viewed or reviewed, step 440 .
  • a user such as an animator may review the animation of the object within an animation environment. In various embodiments, this review may not be a full rendering of an image, but a preview rendering.
  • this step may also include displaying the animation of the object on a display to a user, such as a game developer.
  • the types of animation of in-game characters may include animation of “scripted” behavior.
  • the user may approve of the manipulations, step 450 .
  • Changes to versions of specific components of the object may be performed, even after step 450 .
  • the animator may decide to replace arms 150 with arms 160.
  • the manipulations, e.g. animation variables, may then be stored into a memory, step 460.
  • the stored manipulations may be animation of the object, and in the context of a video game, these stored manipulations may be associated with “scripted” behavior for the object.
  • the stored manipulations may be retrieved from memory, step 470 , and used to animate the object.
  • an image of a scene including the posed object including the specified personality components is then created, step 480 .
  • the images are stored onto a tangible media, such as film media, an optical disk, a magnetic media, or the like, step 490 .
  • the representation of the images can later be retrieved and viewed by viewers (e.g. an audience), step 495.
  • step 430 may be based upon input from a user or the game.
  • the user may move the character on the screen by pressing keys on a keyboard, such as A, S, D, or W.
  • This input would be used as input to animate the character on the screen to walk left, right, backwards, or forwards, or the like.
  • in-game health-type conditions of a character may also influence (e.g. restrict) movement of portions of that object.
  • the right leg of the character may be injured and splinted, thus the animation of the right leg of the object may have a restricted range of movement.
  • an image of the scene including the object can then be directly rendered in step 480 .
  • no review or storage of these inputs is thus required.
  • the rendered image is then displayed to the user in step 495 .
  • FIG. 5 is a block diagram of a typical computer system 500 according to an embodiment of the present invention.
  • computer system 500 typically includes a display 510 , computer 520 , a keyboard 530 , a user input device 540 , computer interfaces 550 , and the like.
  • display (monitor) 510 may be embodied as a CRT display, an LCD display, a plasma display, a direct-projection or rear-projection DLP, a microdisplay, or the like. In various embodiments, display 510 may be used to visually display user interfaces, images, or the like.
  • user input device 540 is typically embodied as a computer mouse, a trackball, a track pad, a joystick, wireless remote, drawing tablet, voice command system, eye tracking system, and the like.
  • User input device 540 typically allows a user to select objects, icons, text and the like that appear on the display 510 via a command such as a click of a button or the like.
  • Embodiments of computer interfaces 550 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire interface, USB interface, and the like.
  • computer interfaces 550 may be coupled to a computer network, to a FireWire bus, or the like.
  • computer interfaces 550 may be physically integrated on the motherboard of computer 520 , may be a software program, such as soft DSL, or the like.
  • computer 520 typically includes familiar computer components such as a processor 560 , and memory storage devices, such as a random access memory (RAM) 570 , disk drives 580 , and system bus 590 interconnecting the above components.
  • computer 520 includes one or more Xeon microprocessors from Intel. Further, in the present embodiment, computer 520 typically includes a UNIX-based operating system.
  • RAM 570 and disk drive 580 are examples of computer-readable tangible media configured to store data such as geometrical descriptions of different personality components, models including multiple personality components, procedural descriptions of models, values of animation variables associated with animation of an object, embodiments of the present invention, including computer-executable computer code, or the like.
  • Types of tangible media include magnetic storage media such as floppy disks, networked hard disks, or removable hard disks; optical storage media such as CD-ROMS, DVDs, holographic memories, or bar codes; semiconductor media such as flash memories, read-only-memories (ROMS); battery-backed volatile memories; networked storage devices, and the like.
  • computer system 500 may also include software that enables communications over a network, such as the HTTP (HyperText Transfer Protocol), TCP/IP (Transmission Control Protocol/Internet Protocol), and RTP/RTSP (Real-time Transport Protocol/Real Time Streaming Protocol) protocols, and the like.
  • other communications software and transfer protocols may also be used, for example IPX, UDP or the like.
  • FIG. 5 is representative of a computer system capable of embodying the present invention.
  • the computer may be a desktop, portable, rack-mounted or tablet configuration.
  • the computer may be a series of networked computers.
  • other microprocessors are contemplated, such as Core™ microprocessors from Intel; Phenom™, Turion™ 64, Opteron™, or Athlon™ microprocessors from Advanced Micro Devices, Inc.; and the like.
  • animation of an object having a first personality may be easily reused by an object having a second personality.
  • animation used for one version of an object can be used for other versions of the object, since they simply have different versions of the same components.
  • an object having a first version of a component will have a directory path that can be used by an object having a second version of the component.
  • the consistency in nomenclature, or naming, facilitates animation reuse. Accordingly, after animation for an object is finished, the user can easily change the version of a component without having to worry about finding the correct directory path for the component.
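The animation-reuse property described in the bullets above can be sketched in code. This is an illustrative sketch only: the data layout and function names are invented for the example, not Pixar's actual tooling. Because every personality of a component exposes the same animation-variable names, avar curves authored against one personality replay directly on another.

```python
# Illustrative sketch only: data layout and function names are invented.
# Each personality of a component exposes the same avar name ("surprised"),
# mapped to a different articulation.

def apply_animation(model, personality, avar_curves, frame):
    """Pose one personality using avar curves keyed by the shared names."""
    component = model["personalities"][personality]
    pose = {}
    for avar_name, curve in avar_curves.items():
        # The same avar name exists in every personality, so this lookup
        # never depends on which personality is active.
        pose[avar_name] = component["avars"][avar_name](curve[frame])
    return pose

model = {
    "personalities": {
        # Personality A: "surprised" raises the arms.
        "A": {"avars": {"surprised": lambda v: f"arms raised {v:.0%}"}},
        # Personality B: the same avar elongates the arms instead.
        "B": {"avars": {"surprised": lambda v: f"arms elongated {v:.0%}"}},
    }
}
curves = {"surprised": [0.0, 0.5, 1.0]}  # one authored curve, reused verbatim

print(apply_animation(model, "A", curves, 2))  # {'surprised': 'arms raised 100%'}
print(apply_animation(model, "B", curves, 2))  # {'surprised': 'arms elongated 100%'}
```

The same `curves` dictionary drives both personalities with no renaming or path fix-up, which is the reuse the consistent nomenclature enables.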

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for a computer system includes determining a model for a first personality of a component of an object, wherein the model for the first personality of the component is associated with a component name and a first personality indicia, determining a model for a second personality of the component of the object, wherein the model for the second personality of the component is associated with the component name and a second personality indicia, determining a multiple personality model of the object, wherein the model of the object includes the model for the first personality of the component, the model of the second personality of the component, the first personality indicia, and the second personality indicia, and storing the multiple personality model of the object in a single file.

Description

The present invention relates to computer animation. More specifically, embodiments of the present invention relate to methods and apparatus for creating and using multiple personality articulation object models.
Throughout the years, movie makers have often tried to tell stories involving make-believe creatures, far away places, and fantastic things. To do so, they have often relied on animation techniques to bring the make-believe to “life.” Two of the major paths in animation have traditionally included drawing-based animation techniques and stop motion animation techniques.
Drawing-based animation techniques were refined in the twentieth century by movie makers such as Walt Disney and used in movies such as “Snow White and the Seven Dwarfs” (1937) and “Fantasia” (1940). This animation technique typically required artists to hand-draw (or paint) animated images onto transparent media, or cels. After painting, each cel would then be captured or recorded onto film as one or more frames in a movie.
Stop motion-based animation techniques typically required the construction of miniature sets, props, and characters. The filmmakers would construct the sets, add props, and position the miniature characters in a pose. After the animator was happy with how everything was arranged, one or more frames of film would be taken of that specific arrangement. Stop motion animation techniques were developed by movie makers such as Willis O'Brien for movies such as “King Kong” (1933). Subsequently, these techniques were refined by animators such as Ray Harryhausen for movies including “Mighty Joe Young” (1948) and “Clash of the Titans” (1981).
With the wide-spread availability of computers in the later part of the twentieth century, animators began to rely upon computers to assist in the animation process. This included using computers to facilitate drawing-based animation, for example, by painting images, by generating in-between images (“tweening”), and the like. This also included using computers to augment stop motion animation techniques. For example, physical models could be represented by virtual models in computer memory, and manipulated.
One of the pioneering companies in the computer-aided animation/computer generated imagery (CGI) industry was Pixar. Pixar is more widely known as Pixar Animation Studios, the creators of animated features such as “Toy Story” (1995) and “Toy Story 2” (1999), “A Bug's Life” (1998), “Monsters, Inc.” (2001), “Finding Nemo” (2003), “The Incredibles” (2004), “Cars” (2006), “Ratatouille” (2007) and others. In addition to creating animated features, Pixar developed computing platforms specially designed for computer animation and CGI, now known as RenderMan®. RenderMan® is now widely used in the film industry and the inventors of the present invention have been recognized for their contributions to RenderMan® with multiple Academy Awards®.
One core functional aspect of RenderMan® software was the use of a “rendering engine” to convert geometric and/or mathematical descriptions of objects into images or data that are combined into other images. This process is known in the industry as “rendering.” For movies or other features, a user (known as a modeler/rigger) specifies the geometric description of objects (e.g. characters), and a user (known as an animator) specifies poses and motions for the objects or portions of the objects. In some examples, the geometric description of objects includes a number of controls, e.g. animation variables (avars), and values for those controls.
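To illustrate the avar concept described above (the class and control names below are invented for this example and are not RenderMan®'s actual interface), an animation variable is simply a named control whose value an animator sets to drive some aspect of a character's pose:

```python
# Illustrative only: names are invented for the example. The modeler
# defines which avars exist; the animator sets their values to pose.
from dataclasses import dataclass, field

@dataclass
class Character:
    # Control name -> current value, as set by the animator.
    avars: dict = field(default_factory=dict)

    def set_avar(self, name: str, value: float) -> None:
        self.avars[name] = value

hero = Character()
hero.set_avar("elbow_bend", 0.75)  # pose value chosen by the animator
hero.set_avar("smile", 1.0)
print(hero.avars)  # {'elbow_bend': 0.75, 'smile': 1.0}
```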
As the rendering power of computers increased, users began to define and animate objects with higher levels of detail and higher levels of geometric complexity. The amount of data required to describe such objects therefore greatly increased. As a result, the amount of data required to store a scene that included many different objects (e.g. characters) also dramatically increased.
One approach developed by Pixar to manage such massive amounts of data has been through the use of modular components for objects. With this approach, an object may be separated into a number of logical components, where each of these logical components is stored in a separate data file. Further information is found in U.S. application Ser. No. 10/810,487, now U.S. Pat. No. 7,548,243, filed May 26, 2004, incorporated by reference herein for all purposes.
An issue contemplated by the inventors of the present invention is that this modular component approach required very careful file management, as objects could be created from thousands of disparate components. This approach tended to require freezing the on-disk storage locations, or paths, of components as soon as the components were used in a model. If the storage location of one file was moved or not located in a specified path, that component would fail to load, and the model of the object would be “broken.” The inventors of the present invention thus believe that it is undesirable to hard-code disk storage locations, as it greatly restricts the ability of users, e.g. modelers, to update and change models of components, for example.
Another issue contemplated by the inventors of the present invention is that the time required to open thousands of different files making up an object is large. In cases where components of an object are stored in hard-coded storage locations, the inventors believe that locating thousands of files, opening thousands of files from disk, and transferring such data to working memory is very time consuming. In cases where components of an object are stored in a database, the inventors believe that retrieving thousands of files is even more inefficient compared to the hard-coded storage approach.
In light of the above, what is desired are methods and apparatus that address many of the issues described above.
BRIEF SUMMARY OF THE INVENTION
The present invention relates to methods and apparatus for providing and using multiple personality articulation models. More specifically, embodiments of the present invention relate to providing objects having consistent animation variable naming among multiple personalities of objects.
Various embodiments of the present invention allow users, such as an object modeler or rigger to create a single model of an object that can include multiple personalities. Such personalities can be expressed in the form of alternative descriptions for a given object component. As merely an example, alternative descriptions for object components may include different types of heads for an object, different types of arms, different types of body shape, different types of surface properties, and the like. Typically, each of the alternative descriptions may include a common or identical component name/animation variable.
In various embodiments of the present invention, the multiple personality object is retrieved in the working environment of the user, such as an animator, a game player, etc. This typically includes retrieval of a single file, at one time, that includes each of the personalities for a given object component. Next, the user, or the program the user is using (e.g. a game), specifies the personality that is to be expressed. Then, using the common component name/animation variable, the object is animated (e.g. posed or manipulated) while reflecting the desired personality. Because one file may include all of the different personalities, file-management overhead is greatly reduced compared to file-referencing schemes.
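The single-file, multiple-personality scheme summarized above can be sketched in a few lines of Python. The class and method names below (e.g. `MultiPersonalityModel`, `set_avar`) are illustrative assumptions for this disclosure, not the actual Menv interface:

```python
# Hypothetical sketch: one model structure holds every personality of each
# component, and a common component name/animation variable addresses
# whichever personality is currently expressed. All names are illustrative.

class MultiPersonalityModel:
    def __init__(self):
        self.components = {}  # component name -> {personality id -> model data}
        self.active = {}      # component name -> currently expressed personality

    def add_personality(self, component, personality_id, model):
        self.components.setdefault(component, {})[personality_id] = model

    def set_personality(self, component, personality_id):
        self.active[component] = personality_id

    def set_avar(self, component, avar, value):
        # The shared name resolves to the expressed personality only.
        pid = self.active[component]
        self.components[component][pid][avar] = value


robot = MultiPersonalityModel()
robot.add_personality("arms", "A", {"surprised": 0.0})  # e.g. claw-type arms
robot.add_personality("arms", "B", {"surprised": 0.0})  # e.g. tentacle-type arms
robot.set_personality("arms", "B")
robot.set_avar("arms", "surprised", 1.0)  # drives only the expressed personality
```

Because both personalities register under the same component name, the caller never needs a personality-specific path to pose the object.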
BRIEF DESCRIPTION OF THE DRAWINGS
In order to more fully understand the present invention, reference is made to the accompanying drawings. Understanding that these drawings are not to be considered limitations in the scope of the invention, the presently described embodiments and the presently understood best mode of the invention are described with additional detail through use of the accompanying drawings.
FIG. 1 illustrates an example according to various embodiments of the present invention;
FIG. 2 illustrates a flow diagram according to various embodiments of the present invention;
FIG. 3 illustrates an example according to various embodiments of the present invention;
FIGS. 4A-B illustrate a flow diagram according to various embodiments of the present invention; and
FIG. 5 is a block diagram of a typical computer system according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates various embodiments of the present invention. More specifically, FIG. 1 illustrates a multiple personality object 100 within a working environment such as an object modeling environment. As illustrated in this example, a multiple personality object includes a body portion 110, and a number of personalities 120 for “arms” and a number of personalities 130 for “legs”.
In various embodiments, a user, such as a modeler or rigger, specifies the different personalities to be expressed from the multiple personality object 100. In the example illustrated, a claw-type arm 140, a tentacle-type arm 150, and an antenna-type arm 160 are shown. In various embodiments, each of these personalities may be associated with an identifier, such as a personality identifier, a version number, or the like. Also illustrated are two personalities for legs: legs 170 and wheels 180. In various embodiments, the leg-type personalities can also be associated with a personality identifier, version number, or the like.
In FIG. 1, a personality A (e.g. version A) is associated with claw-type arm 140 and legs 170, personality B (version B) is associated with tentacle-type arm 150 and wheels 180, and personality C is associated with antenna-type arm 160 and wheels 180. In other embodiments, different personality identifiers may be specified for each personality of each component. As an example, personality identifiers A-C may be respectively associated with personalities 120 for “arms” and personality identifiers D-E may be respectively associated with personalities 130 for legs.
As can be seen in FIG. 1, the different personalities of the components need not be connected to the same portion of body portion 110. For example, arms 140 and 150 connect to different portions of body portion 110 than arms 160, and legs 170 connect to the bottom of body portion 110 and wheels 180 connect to the sides of body portion 110.
In various embodiments, a personality need not be specified for each multiple personality component. For example, an object may have arms 160, but no personality specified for its legs.
FIG. 2 illustrates a flow diagram according to various embodiments of the present invention. More specifically, FIG. 2 illustrates a process for creating an object with multiple personalities.
Initially, a number of different personalities for a component are determined, step 200. In various embodiments, a number of different users may contribute to the definition of the different personalities. Typically, users (e.g. modelers) create models of the different personalities for components of an object. In various examples, the modeler may specify the geometric construction of the component (e.g. joints, connection of parts, etc.); the surface of the component (e.g. hair, scales, etc.); and the like. Additionally, users (e.g. riggers) specify connections between different portions of the components and provide control points (e.g. animation variables, etc.) for moving the portions of the component in a coordinated manner. These different personalities for a component may be initially created and stored in a memory for later use.
Next, in FIG. 2, a user initiates a modeling environment and initiates definition of an object that will include a component having different personalities, step 210. In various embodiments, the user may specify the component having different personalities before defining other portions of the object, or may define other portions of the object before specifying a component to have multiple personalities. In various embodiments, an entire object may be defined having components with different personalities. For example, a model for an object may require a “type A” head, “type D” body, “type N” arms, “type N” legs, or the like.
In various embodiments, the Pixar modeling environment Menv may be used. However, it is contemplated that other embodiments of the present invention may utilize other modeling environments.
In various embodiments, the user may specify the location where the multi-personality component is to be coupled to other portions of the object, step 220. Referring to the example in FIG. 1, the user may specify that the personalities 120 for “arms” are coupled to positions 195 on the object. In some embodiments, each of the different personalities may be associated with different positions on the object. For example, personality A type arms may be connected to the front surface of an object, whereas personality B type arms may be connected to the back surface of an object, or the like.
Next, the models of the different personalities for the component are retrieved from disk and loaded within the modeling environment, step 230. This may be done by physically opening each of the models of the different personalities within the modeling environment. In various embodiments, the user may be able to view the different personalities for components, in a similar manner as was illustrated in FIG. 1.
In various embodiments, additional control variables may be specified for the object with each of the different personalities, if desired, step 240. As mentioned above, animation variables may be specified that control more than one component (and each personality of a component) of the object at the same time. In various embodiments, a user may specify a similar reaction to an animation variable for different personalities, and in other embodiments, the modeler may specify different reactions to an animation variable for different personalities. As an example, for personality “A” arms, a “surprised” animation variable value of 1.0 may be associated with the arms being raised up, and 0.0 may be associated with the arms being next to the object body. As another example, in contrast with the above example, for personality “B” arms, a “surprised” animation variable value of 1.0 may be associated with the arms of the object being elongated and touching the floor, and 0.0 may be associated with the arms being fully “retracted” into the object.
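The personality-dependent response to a shared animation variable can be sketched as follows. The pose dictionaries and magnitudes are illustrative assumptions; an actual rig would drive many joints:

```python
# Hypothetical sketch: one shared "surprised" animation variable produces a
# different reaction per personality. Pose keys and values are illustrative.

def surprised_pose(personality_id, value):
    if personality_id == "A":                # claw-type arms raise up
        return {"raise_angle": 90.0 * value}
    if personality_id == "B":                # tentacle-type arms elongate
        return {"elongation": 2.0 * value}
    raise ValueError("unknown personality: %r" % personality_id)


# The same animation variable name and value, two different reactions:
pose_a = surprised_pose("A", 1.0)  # arms raised
pose_b = surprised_pose("B", 1.0)  # arms elongated toward the floor
```

At value 0.0 both personalities return their rest pose, so an animator can swap personalities without re-keying the animation variable.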
In various embodiments, after definition of the multiple personality object, the object, along with more than one personality model for the multiple personality components, is stored in a tangible media, such as a hard disk, a network storage, optical storage media, database, or the like, step 250.
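The storage of step 250 can be sketched with a single-file serialization. The JSON layout below is an assumption for illustration only; the point is that one file write captures every personality, and one file read later retrieves them all:

```python
import json
import os
import tempfile

# Hypothetical sketch of step 250: every personality of every component is
# serialized into one file, so later retrieval is a single open().

model = {
    "body": {"height": 2.0},
    "arms": {"A": {"type": "claw"}, "B": {"type": "tentacle"}},
    "legs": {"A": {"type": "legs"}, "B": {"type": "wheels"}},
}

path = os.path.join(tempfile.gettempdir(), "robot_model.json")
with open(path, "w") as f:
    json.dump(model, f)

# One retrieval brings back all personalities at once, with no per-component
# file references that could break if a component file were moved.
with open(path) as f:
    loaded = json.load(f)
```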
FIG. 3 illustrates various embodiments of the present invention. More specifically, FIG. 3 illustrates retrieval of a model 300 of a multiple personality object into a working environment, e.g. an animation environment, a video game environment, etc. As illustrated in this example, multiple personality object 300 is the same as multiple personality object 100 in FIG. 1, and includes body portion 110, and a number of personalities 120 for “arms” and a number of personalities 130 for “legs.”
In a first example, in a first environment 310, a first personality for the multiple personality object 300 is desired, such as personality A, in FIG. 1. In response, only personality A components are provided for object 320 for the user within environment 310. Specifically, as illustrated, object 320 includes claw-type arms 330 and legs 340.
In a second example, in a second environment 350, a different personality for the multiple personality object 300 is desired, such as personality B, in FIG. 1. In response, only personality B components are provided for object 360 within environment 350. Specifically, as illustrated, object 360 includes tentacle-type arms 370 and wheels 380. Still within environment 350, a different personality for the multiple personality object 300 may be desired, such as personality C, in FIG. 1. In response, personality C components are provided to the user for object 390, as shown by antenna-type arms 395 and legs 397.
In FIG. 3, it is envisioned that only one copy of object 300 be retrieved from memory 190 into environment 350. In this example, object 300 may serve as the template for the different personalities of the objects illustrated. Such embodiments could greatly reduce the amount of time required to generate, for example, an army of objects with different personalities.
Within each of the respective working environments, the respective objects can then be manipulated or posed based upon output of software, e.g. video game software, crowd simulation software; based upon specification by a user, e.g. via the use of animation variables, inverse kinematics software; or the like.
FIGS. 4A-B illustrate a flow diagram according to various embodiments of the present invention. More specifically, FIGS. 4A-B illustrate a process for manipulating an object with multiple personalities. In some embodiments of the present invention, the object is used for non-real-time animation (e.g. defining animation for feature animation), real-time animation (e.g. video games), or the like.
Initially, a model of an object with multiple personality components is identified, step 400. In various embodiments, the object may be identified by a user, by a computer program, or the like. In various embodiments, the computer program may be a video game, where in-game characters or other non-player characters are to be shown on the screen. In another embodiment, the computer program may be a crowd-simulation (multi-agent) type computer program that can specify/identify the different objects (agents) to form a crowd of objects. In one specific embodiment, software available from Massive Software of Auckland, New Zealand, is used, although other brands of multi-agent software may also be used. In various embodiments, such software typically relies upon a user, e.g. an animator, to broadly specify the types of agents, or objects, for the crowd.
Next, the model of the object including all the multiple personality components stored therein is retrieved from memory (e.g. optical memory, network memory) and loaded into a computer working memory, step 410. As discussed in the background, it is believed that opening one file including an object with multiple personalities is potentially more time efficient than opening many different files to “build-up” a specific configuration of an object.
In various embodiments of the present invention, the desired personalities for components of the object are determined, step 420. In some embodiments, the specific personality type is specifically selected by a user, or specified by a computer program. For example, in a video game situation, an object may be a soldier-type character, and the different personalities may reflect different equipment being worn by the soldier. As another example, a crowd-simulation computer program may specify a personality type for an object. In aggregate, for a crowd of objects, such software may select personalities for objects such that the crowd appears random, the crowd includes small groups of objects, or the like. As illustrated in the example in FIG. 3, above, object 360 was specified to express personality B, and object 390 was specified to express personality C. Accordingly, object 360 includes tentacle-type arms 370 and object 390 includes antenna-type arms 395.
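The crowd case, where one retrieved template model is expressed with many different personalities, might look like the following sketch. The helper name, personality lists, and random selection (standing in for the crowd-simulation software's choices) are all illustrative assumptions:

```python
import random

# Hypothetical sketch: a crowd is built from one template model that is
# loaded once; each agent only records which personality of each component
# it expresses. Personality identifiers are illustrative.

ARM_PERSONALITIES = ["A", "B", "C"]
LEG_PERSONALITIES = ["A", "B"]

def build_crowd(template, size, seed=0):
    rng = random.Random(seed)  # seeded so the crowd layout is reproducible
    return [
        {
            "template": template,               # shared, retrieved one time
            "arms": rng.choice(ARM_PERSONALITIES),
            "legs": rng.choice(LEG_PERSONALITIES),
        }
        for _ in range(size)
    ]

army = build_crowd("robot.model", 100)
```

Because the heavyweight model is loaded once, generating an army of varied objects costs little more than generating one.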
Next, in various embodiments, manipulations of the specified personality of the object may be determined, step 430. The manipulation is typically specified in a pre-run-time environment. In various embodiments of the present invention, a user such as an animator may manipulate the desired personality for the object via manipulation (e.g. GUI, keyboard) of animation variables, via inverse kinematics software, or the like. In other embodiments, the specified manipulation of the object may be determined via software, e.g. crowd simulation software, video game engine, artificial intelligence software, or the like.
In various embodiments, the manipulations of the object may be viewed or reviewed, step 440. In various embodiments, a user such as an animator may review the animation of the object within an animation environment. In various embodiments, this review may not be a full rendering of an image, but a preview rendering.
In other embodiments, such as video gaming, this step may also include displaying the animation of the object on a display to a user, such as a game developer. It is envisioned in this context, that the types of animation of in-game characters may include animation of “scripted” behavior.
In some embodiments of the present invention, after preview of the animation, the user may approve of the manipulations, step 450. Changes to versions of specific components of the object may be performed, even after step 450. For example, the animator may decide to replace arms 150 with arms 160. The manipulations (e.g. animation variables) may then be stored into a memory, step 460. In the context of animation, the stored manipulations may be animation of the object, and in the context of a video game, these stored manipulations may be associated with “scripted” behavior for the object.
Subsequently, at rendering run-time, the stored manipulations may be retrieved from memory, step 470, and used to animate the object. In various embodiments, an image of a scene including the posed object with the specified personality components is then created, step 480. In the case of animation, the images are stored onto a tangible media, such as film media, an optical disk, a magnetic media, or the like, step 490. The representation of the images can later be retrieved and viewed by viewers (e.g. an audience), step 495.
In some embodiments of the present invention directed towards video games, step 430 may be based upon input from a user or the game. As an example, the user may move the character on the screen by hitting keys on a keyboard, such as A, S, D, or W. This input would be used to animate the character on the screen to walk left, right, backwards, or forwards, or the like. Additionally, in-game health-type conditions of a character may also influence (e.g. restrict) movement of portions of that object. As an example, the right leg of the character may be injured and splinted, thus the animation of the right leg of the object may have a restricted range of movement.
In such video game embodiments, an image of the scene including the object can then be directly rendered in step 480. In contrast to the embodiments above, no review or storage of these inputs is thus required. The rendered image is then displayed to the user in step 495.
FIG. 5 is a block diagram of a typical computer system 500 according to an embodiment of the present invention.
In the present embodiment, computer system 500 typically includes a display 510, computer 520, a keyboard 530, a user input device 540, computer interfaces 550, and the like.
In various embodiments, display (monitor) 510 may be embodied as a CRT display, an LCD display, a plasma display, a direct-projection or rear-projection DLP, a microdisplay, or the like. In various embodiments, display 510 may be used to visually display user interfaces, images, or the like.
In various embodiments, user input device 540 is typically embodied as a computer mouse, a trackball, a track pad, a joystick, wireless remote, drawing tablet, voice command system, eye tracking system, and the like. User input device 540 typically allows a user to select objects, icons, text and the like that appear on the display 510 via a command such as a click of a button or the like.
Embodiments of computer interfaces 550 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire interface, USB interface, and the like. For example, computer interfaces 550 may be coupled to a computer network, to a FireWire bus, or the like. In other embodiments, computer interfaces 550 may be physically integrated on the motherboard of computer 520, may be a software program, such as soft DSL, or the like.
In various embodiments, computer 520 typically includes familiar computer components such as a processor 560, and memory storage devices, such as a random access memory (RAM) 570, disk drives 580, and system bus 590 interconnecting the above components.
In some embodiments, computer 520 includes one or more Xeon microprocessors from Intel. Further, in the present embodiment, computer 520 typically includes a UNIX-based operating system.
RAM 570 and disk drive 580 are examples of computer-readable tangible media configured to store data such as geometrical descriptions of different personality components, models including multiple personality components, procedural descriptions of models, values of animation variables associated with animation of an object, embodiments of the present invention, including computer-executable computer code, or the like. Types of tangible media include magnetic storage media such as floppy disks, networked hard disks, or removable hard disks; optical storage media such as CD-ROMS, DVDs, holographic memories, or bar codes; semiconductor media such as flash memories, read-only-memories (ROMS); battery-backed volatile memories; networked storage devices, and the like.
In the present embodiment, computer system 500 may also include software that enables communications over a network such as the HTTP, TCP/IP, RTP/RTSP protocols, and the like. In alternative embodiments of the present invention, other communications software and transfer protocols may also be used, for example IPX, UDP or the like.
FIG. 5 is representative of a computer system capable of embodying the present invention. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention. For example, the computer may be a desktop, portable, rack-mounted or tablet configuration. Additionally, the computer may be a series of networked computers. Further, the use of other microprocessors is contemplated, such as Core™ microprocessors from Intel; Phenom™, Turion™ 64, Opteron™ or Athlon™ microprocessors from Advanced Micro Devices, Inc.; and the like. Further, other types of operating systems are contemplated, such as WindowsVista®, WindowsXP®, WindowsNT®, or the like from Microsoft Corporation; Solaris from Sun Microsystems; LINUX; UNIX; and the like. In still other embodiments, the techniques described above may be implemented upon a chip or an auxiliary processing board.
In various embodiments of the present invention, animation of an object having a first personality may be easily reused by an object having a second personality. In other words, animation used for one version of an object can be used for other versions of the object, since they simply have different versions of the same components. From a nomenclature point of view, an object having a first version of a component will have a directory path that can be used by an object having a second version of the component. In various embodiments, the consistency in nomenclature, or naming, facilitates animation reuse. Accordingly, after animation for an object is finished, the user can easily change the version of a component, without having to worry about finding the correct directory path for the component.
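The animation reuse enabled by consistent naming can be sketched as follows. The keyframe format and nearest-keyframe sampling below are illustrative assumptions; production animation curves would be interpolated splines:

```python
# Hypothetical sketch of animation reuse: saved animation addresses a
# component by name, not by personality/version, so identical curves replay
# on any version of that component without path changes.

saved_animation = {
    ("arms", "surprised"): [(0, 0.0), (12, 1.0), (24, 0.0)],  # (frame, value)
}

def sample(curve, frame):
    # nearest-keyframe lookup for brevity; real curves would interpolate
    return min(curve, key=lambda kv: abs(kv[0] - frame))[1]

# The same stored curve drives personality "A" or "B" arms unchanged:
for personality in ("A", "B"):
    value = sample(saved_animation[("arms", "surprised")], 12)
    # ... apply value to the arm rig of the chosen personality ...
```

Swapping the component version after animation is finished therefore requires no edits to the stored curves.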
In other embodiments of the present invention, combinations or sub-combinations of the above disclosed invention can be advantageously made. The block diagrams of the architecture and graphical user interfaces are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.

Claims (6)

1. A method for a computer system includes:
determining a model for a first personality of a component of an object, wherein the model for the first personality of the component is associated with a component name and a first personality indicia;
determining a model for a second personality of the component of the object, wherein the model for the second personality of the component is associated with the component name and a second personality indicia;
determining a multiple personality model of the object, wherein the multiple personality model of the object includes the model for the first personality of the component, the model of the second personality of the component, the first personality indicia, and the second personality indicia; and
storing the multiple personality model of the object in a single file.
2. The method of claim 1 further comprising:
retrieving the multiple personality model of the object within a working environment;
receiving a specification of the first personality indicia and the component name within the working environment;
receiving a manipulation value for the component of the object; and
applying the manipulation value for the component to the model of the first personality of the component in response to the component name, the specification of the first personality indicia, and to the manipulation value.
3. The method of claim 2 further comprising:
determining a representation of an image including a representation of the manipulation value being applied to the model of the first personality of the component; and
displaying the image to a user.
4. The method of claim 3 wherein the working environment is selected from a group consisting of: an animation environment, a gaming environment.
5. A method for a computer system includes:
retrieving a multiple personality model of an object from a file, wherein the multiple personality model of the object includes a model of a first personality of a component, wherein the model for the first personality of the component is associated with a component name and a first personality indicia, wherein the multiple personality model of the object includes a model of a second personality of the component, wherein the model for the second personality of the component is associated with the component name and a second personality indicia;
determining a desired personality indicia associated with the component;
determining a plurality of manipulation values associated with the component;
associating the plurality of manipulation values to the model for the first personality of the component when the desired personality indicia comprises the first personality indicia; and
associating the plurality of manipulation values to the model for the second personality of the component when the desired personality indicia comprises the second personality indicia.
6. The method of claim 5 further comprising rendering an image using the model of the first personality of the component when the desired personality indicia comprises the first personality indicia.
US12/215,666 2008-06-26 2008-06-26 Multiple personality articulation for animated characters Abandoned USH2253H1 (en)
