NZ719982B2 - System, method and apparatus for rapid film pre-visualization - Google Patents
- Publication number
- NZ719982B2, NZ719982A, NZ71998212A
- Authority
- NZ
- New Zealand
- Prior art keywords
- component
- accordance
- virtual
- director
- dimensional virtual
Abstract
Disclosed is a system for rapid film pre-visualization. The system includes a motion capture component (12), a virtual digital rendering component (16), a display component and a controller component (18). The motion capture component (12) interfaces with wearable motion capture sensors. The virtual digital rendering component (16) is configured to receive the captured motion and re-create such motion in a three dimensional virtual space. The display component is configured to display an output of the virtual digital rendering component (16). The controller component (18) is configured to interface with the virtual digital rendering component (16) and allow a user to navigate within the three dimensional virtual space to control the visual aspects of one or more shots within the three dimensional virtual space.
Description
SYSTEM, METHOD AND APPARATUS FOR RAPID FILM PRE-VISUALIZATION
CROSS REFERENCE TO RELATED APPLICATION
This is a further application for a patent for subject matter disclosed in New Zealand
patent application no. 626197, the entire contents of which are incorporated herein by
reference.
TECHNICAL FIELD
This invention relates generally to pre-visualization of film, e.g. feature films. More
particularly, this invention relates to methods, systems and apparatuses for rapid, near real-
time or real-time pre-visualization of films.
BACKGROUND OF THE INVENTION
Pre-visualization is a technique whereby a script or storyline is rendered into one or
more images representative of that script or storyline. Traditional methods involved the
generation of comics, storyboards, proposed frame sketches, etc., by an artist reading the
script in an attempt to capture a writer’s or director’s vision. More recently, computer
animation, possibly even using motion capture technologies with an actor or stuntman, has
been used to produce proposed pre-visualizations for later review by a director.
However, a common problem with all of these approaches is that the pre-visualization
activities are merely attempts by others to capture the vision (action, style of the shot, etc.)
of a director on a scene by scene basis. While a director or producer might
review a script with a traditional pre-visualization team prior to generation of the pre-
visualization materials, it is a common problem that the end result is not what a director or
producer ultimately wants. This may be on the level of disliking one particular action
sequence, not liking a series of pans or angles on some or all of the pre-visualization
materials, or simply not liking the feel of the pre-visualization materials.
Dislike of the pre-visualization materials by a director or producer sends a pre-
visualization team back to the drawing board to generate a second (or further) attempt to
capture the vision of the director before the film can move forward. Accordingly, this process
is expensive and inaccurate, involving many artists and/or animators over several weeks or
months before further production can proceed. Additionally, because the creative vision of
the producers and directors was not continuously present in the animators’ process, all this
work might be scrapped when the final product was shared with the studio, executive
producers or the director.
In general, motion capture of live performance in real time has also been
extremely inefficient and expensive. For example, in the making of the film AVATAR, James
Cameron’s LightStorm production company developed a filming system and process
requiring costly, tethered, light-reflective mo-cap suits; a huge (warehouse sized) volume
filled with IR cameras and HD cameras; and a heavy, bulky, tethered hand-held virtual
camera wielded by the director (weighing approximately 35 lbs). The footage secured within
that virtual camera was limited artistically to a camera lens view of the action, and the
walking distance of the director. Additionally, the actual actors whose performances were
necessary for the production spent months on call and on set to pre-capture their contributions
to the film, thus further representing huge financial and time expenses. Because of these
limitations, the captured footage was actual final film footage (which would have been
captured after the pre-visualization stage).
Accordingly, there is a need in the art for an improved system, method and apparatus
for rapid film pre-visualization that avoids the above described problems and disadvantages.
SUMMARY
The above described and other problems and disadvantages of the prior art are
overcome and alleviated by the present system, method and apparatus for rapid film pre-
visualization, including a motion capture component interfacing with wearable motion capture
sensors; a virtual digital rendering component configured to receive the captured motion and
re-create such motion in a three dimensional virtual space; a display component configured to
display an output of the virtual digital rendering component; and a controller component,
configured to interface with the virtual digital rendering component and to act as a virtual
camera in a three dimensional virtual environment, the controller configured with plural
handheld remote components configured to navigate within the three dimensional virtual
environment and to control the virtual camera using film camera controls to control the visual
aspects of shots within the three dimensional virtual environment.
In exemplary embodiments, a user (e.g., a director) can navigate through the space in
real time to generate pre-visualizations according to the user’s preference or vision.
Exemplary embodiments allow for rough pre-visualizations, e.g. using computer animation
software MAYA as the virtual digital rendering component to output flat shaded blasts for
approval. Other exemplary embodiments allow for more developed pre-visualizations, e.g.,
using an engine such as CRYENGINE 3 to provide development (e.g., virtual terrain,
etc.) to the three dimensional virtual space defined by the pre-visualization process.
Also in exemplary embodiments, the controller may be a handheld device
incorporating a screen along with one or more hand controllers, wherein the hand controllers
are configured to provide navigation in the three dimensional virtual space and to provide film
camera controls, such as pan, tilt, zoom, etc. In one particular exemplary embodiment, at
least one hand control includes a navigation control that provides six degrees of movement
within the three dimensional virtual space (for reference, the “SpaceNavigator” from
3dConnexion provides six degrees of motion control). In exemplary embodiments, the
controller’s physical position and positional changes are tracked via a magnetic field, e.g.,
such as is done with the Razer Hydra system in video gaming, to provide additional
navigation functionality to the controller. In other embodiments, rather than using a controller
hand component similar to the “SpaceNavigator”, two hand controllers similar to the Razer
Hydra controller may be interconnected by a bar. In any of the controller embodiments, a
screen or viewfinder may or may not be used (e.g., mounted on a bar extending between left
and right hand controller units), according to the preference of the user.
In other exemplary embodiments, the motion capture component utilizes plural radio
frequency (RF) detectors in a motion grid (an exemplary motion grid may contain, e.g., nine
RF detectors and head and foot tags, which facilitate removing drift inherent in the system).
An exemplary system includes an XSENS system, including such a motion grid and MVN
suits (which include accelerometers therein). An exemplary system for interfacing with the
virtual digital rendering component (e.g., MAYA) includes an IKinema system, which
generates ‘stick figures’ from the positional data emitted by accelerometers in the suit(s). In
exemplary embodiments, the virtual digital rendering component (e.g., MAYA), provides the
environment framework for generating characters in a virtual three dimensional space.
In other exemplary embodiments, a motion capture component detects the position of
and motion of the face of a performer. In one such exemplary embodiment, a performer
wears an infrared camera on a head rig pointing back at the face of the performer.
Information from the facial capture may be fed into a virtual digital rendering component
(e.g., MAYA), either alone or in addition to the exemplary motion capture (utilizing
performer worn suits) described above. Subsequent pre-visualization processing of the data
may then be performed by a director or animator, either in real time with the motion capture
or subsequent to any motion capture.
According to some or all of the above exemplary embodiments, the present invention
thus provides systems, methods and apparatuses that provide fast pre-visualization for films
utilizing control input, such as input from a director, to shape the pre-visualization. Thus,
exemplary embodiments might provide a system where performers (actors, stuntmen, etc.)
wearing wireless suits are choreographed in real time by a film’s director. The director can sit
in front of a display that shows the output of the captured motion in a three dimensional
virtual environment and can both navigate and shape the visual shot within the three
dimensional virtual environment according to the director’s taste and vision. The pre-
visualizations can be output in basic form (e.g., flat shaded blasts) or within a virtual world
generated from an engine, such as the CRYENGINE 3, UNREAL engine, etc. The pre-
visualizations can be generated on-set with the motion capture and with the director, in
addition to the data being subsequently available (after motion capture) for off-set variations.
The above discussed and other features and advantages of the present invention will be
appreciated and understood by those skilled in the art from the following detailed description
and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Referring to the exemplary drawings wherein like elements are numbered alike in the
several FIGURES:
FIGURE 1 illustrates an exemplary flowchart for rapid film pre-visualization;
FIGURE 2 illustrates a perspective view of an exemplary pre-visualization setup in
accordance with the present disclosure;
FIGURE 3 illustrates a perspective view of an exemplary setup for data capture and
processing of data into a three dimensional virtual environment;
FIGURE 4 illustrates a perspective view of an exemplary setup for data capture;
FIGURE 5 illustrates a perspective view of an exemplary setup for processing of data
into a three dimensional virtual environment;
FIGURE 6 illustrates a perspective view of an exemplary setup for processing of data
into a three dimensional virtual environment;
FIGURE 7 illustrates a perspective view of an exemplary setup for a director’s
environment and an environment for processing directed material;
FIGURE 8 illustrates a perspective view of an exemplary setup for a director’s
environment;
FIGURE 9 illustrates a perspective view of an exemplary setup for an environment for
processing directed material;
FIGURE 10 illustrates an exemplary controller;
FIGURE 11 illustrates another exemplary controller including a virtual camera
viewfinder screen;
FIGURE 12 illustrates a standard Razer Hydra controller; and
FIGURE 13 illustrates an exemplary facial capture process.
DETAILED DESCRIPTION
Detailed illustrative embodiments are disclosed herein. However, specific functional
details disclosed herein are merely representative for purposes of describing example
embodiments. Example embodiments may, however, be embodied in many alternate forms
and should not be construed as limited to only the embodiments set forth herein.
Accordingly, while example embodiments are capable of various modifications and
alternative forms, embodiments thereof are shown by way of example in the drawings and
will herein be described in detail. It should be understood, however, that there is no intent to
limit example embodiments to the particular forms disclosed, but to the contrary, example
embodiments are to cover all modifications, equivalents, and alternatives falling within the
scope of example embodiments. Like numbers refer to like elements throughout the
description of the figures.
It will be further understood that, although the terms first, second, etc. may be used
herein to describe various steps or calculations, these steps or calculations should not be
limited by these terms. These terms are only used to distinguish one step or calculation from
another. For example, a first calculation could be termed a second calculation, and, similarly,
a second step could be termed a first step, without departing from the scope of this disclosure.
As used herein, the term "and/or" includes any and all combinations of one or more of the
associated listed items.
As used herein, the singular forms "a", "an" and "the" are intended to include the
plural forms as well, unless the context clearly indicates otherwise. It will be further
understood that the terms "comprises", "comprising", "includes" and/or "including", when
used herein, specify the presence of stated features, integers, steps, operations, elements,
and/or components, but do not preclude the presence or addition of one or more other
features, integers, steps, operations, elements, components, and/or groups thereof.
It will also be understood that the terms “photo,” “photograph,” “image,” or any
variation thereof may be interchangeable. Thus, any form of graphical image may be
applicable to example embodiments.
It will also be understood that the terms “audio,” “audio tracks,” “music,” “music
tracks,” or any variation thereof may be interchangeable. Thus any form of audio may be
applicable to example embodiments.
It will also be understood that the terms “film,” “media,” “multi-media,” “video,” or
any variation thereof may be interchangeable. Thus any form of rich media may be applicable
to example embodiments.
It should also be understood that other terms used herein may be applicable based
upon any associated definition as understood by one of ordinary skill in the art, although other
meanings may be applicable depending upon the particular context in which terms are used.
Therefore, the terminology used herein is for the purpose of describing particular
embodiments only and is not intended to be limiting of example embodiments. It should also
be noted that in some alternative implementations, the functions/acts noted may occur out of
the order noted in the figures. For example, two figures shown in succession may in fact be
executed substantially concurrently or may sometimes be executed in the reverse order,
depending upon the functionality/acts involved.
Further to the brief description provided above and associated textual detail of each of
the figures, the following description provides additional details of example embodiments of
the present invention.
As described herein, example embodiments of the present invention may include
systems, methods and apparatus for rapid film pre-visualization, including a motion capture
component interfacing with wearable motion capture sensors; a virtual digital rendering
component configured to receive the captured motion and re-create such motion in a three
dimensional virtual space; a display component configured to display an output of the virtual
digital rendering component; and a controller component, configured to interface with the
virtual digital rendering component and allow a user to navigate within the three dimensional
virtual space to control the visual aspects of one or more shots within the three dimensional
virtual space.
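For illustration only, the following minimal Python sketch models the data flow among the four components just described (capture, rendering, display, controller); every class and method name is hypothetical, standing in for whatever concrete systems an implementation uses.

```python
# Illustrative sketch of the four-component pre-visualization pipeline.
# All names are hypothetical; this models the described data flow
# (capture -> rendering -> display, with the controller steering a
# virtual camera), not any specific vendor API.
from dataclasses import dataclass, field


@dataclass
class JointSample:
    """One motion capture sensor reading: joint name plus world position."""
    joint: str
    position: tuple  # (x, y, z) in metres


@dataclass
class VirtualCamera:
    """Camera state that the controller component manipulates."""
    position: list = field(default_factory=lambda: [0.0, 1.7, 5.0])
    pan: float = 0.0    # degrees
    tilt: float = 0.0   # degrees
    zoom: float = 50.0  # focal length in mm


class RenderingComponent:
    """Re-creates captured motion in a three dimensional virtual space."""

    def __init__(self):
        self.scene = {}  # joint name -> latest world position

    def ingest(self, samples):
        for s in samples:
            self.scene[s.joint] = s.position

    def render(self, camera):
        # A real component would rasterize the scene from the camera's
        # viewpoint; for the sketch we just describe the shot.
        return (f"{len(self.scene)} joints viewed from {camera.position}, "
                f"pan={camera.pan} tilt={camera.tilt} zoom={camera.zoom}mm")


# One frame through the pipeline:
renderer = RenderingComponent()
renderer.ingest([JointSample("head", (0.0, 1.8, 0.0)),
                 JointSample("left_foot", (-0.2, 0.0, 0.0))])
cam = VirtualCamera()
cam.pan += 15.0  # the user pans the virtual camera via the controller
print(renderer.render(cam))  # the display component shows this output
```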
In exemplary embodiments, a user (e.g., a director) can navigate through the space in
real time to generate pre-visualizations according to the user’s preference or vision.
Exemplary embodiments allow for rough pre-visualizations, e.g. using MAYA as the virtual
digital rendering component to output flat shaded blasts for approval. Other exemplary
embodiments allow for more developed pre-visualizations, e.g., using an engine such as
CRYENGINE 3 to provide development (e.g., virtual terrain, etc.) to the three dimensional
virtual space defined by the pre-visualization process.
Also in exemplary embodiments, the controller may be a handheld device
incorporating a screen along with one or more hand controllers, wherein the hand controllers
are configured to provide navigation in the three dimensional virtual space and to provide film
camera controls, such as pan, tilt, zoom, etc. In one particular exemplary embodiment, at
least one hand control includes a navigation control that provides six degrees of movement
within the three dimensional virtual space (for reference, the “SpaceNavigator” from
3dConnexion provides six degrees of motion control). In exemplary embodiments, the
controller’s physical position and positional changes are tracked via a magnetic field, e.g.,
such as is done with the Razer Hydra system in video gaming, to provide additional
navigation functionality to the controller. In other embodiments, rather than using a controller
hand component similar to the “SpaceNavigator”, two hand controllers similar to the Razer
Hydra controller may be interconnected by a bar. In any of the controller embodiments, a
screen or viewfinder may or may not be used (e.g., mounted on a bar extending between left
and right hand controller units), according to the preference of the user.
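As a rough picture of how such a six-degree control might drive the virtual camera's film controls, consider this Python sketch; the axis names, speeds and clamping ranges are assumptions made for illustration, not specifications of the SpaceNavigator or Razer Hydra hardware.

```python
# Hypothetical mapping from a six-degree-of-freedom hand control to the
# virtual camera. A device of this class reports six axes per polling
# tick (three translation, three rotation); all scale factors here are
# illustrative assumptions.
import math


def apply_six_dof(camera, axes, dt, move_speed=2.0, turn_speed=45.0):
    """Integrate one tick of controller input into camera state.

    axes: dict with tx, ty, tz, rx, ry, rz, each in [-1.0, 1.0]
    dt:   seconds since the last tick
    """
    yaw = math.radians(camera["pan"])
    # Translate in the camera's local frame so "forward" follows the pan.
    camera["x"] += (axes["tx"] * math.cos(yaw) - axes["tz"] * math.sin(yaw)) * move_speed * dt
    camera["z"] += (axes["tx"] * math.sin(yaw) + axes["tz"] * math.cos(yaw)) * move_speed * dt
    camera["y"] += axes["ty"] * move_speed * dt
    # Rotational axes drive the film-style pan and tilt controls.
    camera["pan"] += axes["ry"] * turn_speed * dt
    camera["tilt"] = max(-90.0, min(90.0, camera["tilt"] + axes["rx"] * turn_speed * dt))
    # rz is spare (it could drive a roll/dutch control); zoom is clamped
    # to a plausible focal-length range, as a film lens would be.
    camera["zoom"] = max(12.0, min(200.0, camera["zoom"] + axes["rz"] * 30.0 * dt))
    return camera


cam = {"x": 0.0, "y": 1.7, "z": 5.0, "pan": 0.0, "tilt": 0.0, "zoom": 50.0}
tick = {"tx": 0.0, "ty": 0.0, "tz": -1.0, "rx": 0.1, "ry": 0.5, "rz": 0.0}
print(apply_six_dof(cam, tick, dt=1 / 60))  # one 60 Hz frame of input
```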
In other exemplary embodiments, the motion capture component utilizes plural radio
frequency (RF) detectors in a motion grid (an exemplary motion grid may contain, e.g., nine
RF detectors and head and foot tags, which facilitate removing drift inherent in the system).
An exemplary system includes an XSENS system, including such a motion grid and MVN
suits (which include accelerometers therein). An exemplary system for interfacing with the
virtual digital rendering component (e.g., MAYA) includes an IKinema system, which
generates ‘stick figures’ from the positional data emitted by accelerometers in the suit(s). In
exemplary embodiments, the virtual digital rendering component (e.g., MAYA), provides the
environment framework for generating characters in a virtual three dimensional space.
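The drift removal that the head and foot tags facilitate can be pictured with a small sketch: inertial suits integrate accelerometer data, so position estimates drift over time, and a tag at a known reference (here, a foot on the floor) supplies a correction applied to the whole skeleton. The scheme and all names below are illustrative assumptions, not the internals of the XSENS or IKinema systems.

```python
# Illustrative drift correction and stick-figure reduction for inertial
# motion capture data. The joint names, bone list, and correction
# scheme are assumptions made for this sketch.

def correct_drift(skeleton, foot_tag_position):
    """Shift every joint by the error observed at the tagged foot."""
    estimated = skeleton["left_foot"]
    error = tuple(e - t for e, t in zip(estimated, foot_tag_position))
    return {joint: tuple(p - err for p, err in zip(pos, error))
            for joint, pos in skeleton.items()}


def to_stick_figure(skeleton, bones):
    """Reduce joint positions to the line segments a solver would draw."""
    return [(skeleton[a], skeleton[b]) for a, b in bones]


# Integrated suit data has drifted 5 cm sideways and 2 cm upward:
skeleton = {"head": (0.05, 1.85, 0.0),
            "pelvis": (0.05, 1.02, 0.0),
            "left_foot": (0.05, 0.02, 0.0)}
bones = [("head", "pelvis"), ("pelvis", "left_foot")]

# The foot tag reports the foot is actually at the origin, on the floor:
corrected = correct_drift(skeleton, (0.0, 0.0, 0.0))
print(to_stick_figure(corrected, bones))
```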
In other exemplary embodiments, a motion capture component detects the position of
and motion of the face of a performer. In one such exemplary embodiment, a performer
wears an infrared camera on a head rig pointing back at the face of the performer.
Information from the facial capture may be fed into a virtual digital rendering component
(e.g., MAYA), either alone or in addition to the exemplary motion capture (utilizing
performer worn suits) described above. Subsequent pre-visualization processing of the data
may then be performed by a director or animator, either in real time with the motion capture
or subsequent to any motion capture.
Exemplary facial capture procedures in accordance with the above follow: An
exemplary process begins with an actor or stunt professional wearing a motion capture suit
with an infrared camera on a head rig pointing back at the face of the actor or stunt
professional. The infrared camera technology could be wireless or could have the camera
wired into a computer that is configured to capture a performance as a file (e.g., as a Quick
Time file). Additionally, the system could be configured to use facial markers for such
capture (utilizing placement of key readable markers physically placed directly on the
actors’/stunt performers’ faces); or the system can be marker-less, e.g., similar to the Motek
system illustrated at
http://www.motekentertainment.com/index.php?option=com_content&task=view&id=17&Itemid=67.
Reference is also made to FIGURE 13, which illustrates an example of such facial
capture utilized by the Motek Company.
In a further exemplary process, the actor or stunt performer runs through a broad range
of facial expressions. This enables the software within the facial capture system to
‘understand’ the actor’s or stunt performer’s features. At this juncture, the facial expressions
may be recorded in a convenient file format, e.g., a Quick Time video. This recording may be
fed into a suitable virtual digital rendering component, e.g. a MAYA system, along with any
information secured from the actor’s or stunt performer’s physical movements, e.g. as
detected by mo-cap suits.
Additionally or alternately, information from a bone driven face rig may be fed into
the virtual digital rendering component for the geometry and topography of the actor’s or
stunt performer’s face. The movements of the facial features captured by the infrared camera
and recorded as a video file may then be fed into a plug-in (i.e., a plug-in supporting such
bone driven face rig that is configured to feed into the virtual digital rendering component)
tied to the bone driven rig, so that the face appears within normal human parameters.
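One way to picture the "normal human parameters" constraint is as per-bone clamping of the tracked facial offsets before they drive the rig. The bone names and numeric ranges in this Python sketch are purely illustrative assumptions.

```python
# Hypothetical sketch: tracked facial-feature offsets drive face-rig
# bones, but each value is clamped to an anatomically plausible range
# before it reaches the rendering component, so the face stays within
# normal human parameters. Names and limits are illustrative only.

HUMAN_LIMITS = {
    "jaw_open": (0.0, 25.0),       # degrees of jaw rotation
    "brow_raise_l": (-5.0, 15.0),  # millimetres of brow travel
    "brow_raise_r": (-5.0, 15.0),
    "lip_corner_l": (-10.0, 10.0),
}


def drive_face_rig(tracked_offsets):
    """Map raw tracker output onto rig bones, clamped to human limits."""
    pose = {}
    for bone, value in tracked_offsets.items():
        lo, hi = HUMAN_LIMITS.get(bone, (-float("inf"), float("inf")))
        pose[bone] = min(hi, max(lo, value))
    return pose


# A noisy tracker frame: the jaw reading overshoots and is clamped.
frame = {"jaw_open": 31.2, "brow_raise_l": 4.0, "lip_corner_l": -3.5}
print(drive_face_rig(frame))  # jaw_open comes out as 25.0
```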
In exemplary embodiments, the data stream from the mo-cap suit (regarding the
physical placement of the actor’s or stunt performer’s body in time and space) and the data
stream from the facial capture infrared camera (which provides input about the facial
expression upon the actor’s or stunt performer’s face during performance) can both be
processed through the pre-visualization system described herein. These data streams enable
the director or animator to effectively pre-visualize a scene from both the actor’s or stunt
performer’s actions and facial expressions within an established environment in a cry-
engine/sandbox. Utilizing the virtual camera and overall pre-visualization system, the dual
data streams of physical movement, body placement in time and space and facial expression
expressed during those movements can be edited and re-edited into a seamless action
sequence.
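The pairing of the two streams for editing can be sketched as a timestamp merge, in which each body-pose sample is matched with the facial sample nearest in time; the stream format below is an assumption made for illustration.

```python
# Minimal sketch of merging the body and face capture streams: the suit
# and the head-rig camera sample on independent clocks, so each edited
# frame pairs a body pose with the nearest-in-time facial pose.
import bisect


def merge_streams(body_stream, face_stream):
    """Each stream is a list of (timestamp_seconds, payload), sorted by
    timestamp. Yields (t, body_payload, nearest_face_payload)."""
    face_times = [t for t, _ in face_stream]
    for t, body in body_stream:
        i = bisect.bisect_left(face_times, t)
        # Pick whichever neighbouring face sample is closer in time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(face_stream)]
        j = min(candidates, key=lambda k: abs(face_times[k] - t))
        yield t, body, face_stream[j][1]


body = [(0.000, "pose_A"), (0.033, "pose_B"), (0.066, "pose_C")]
face = [(0.010, "neutral"), (0.050, "smile")]
for frame in merge_streams(body, face):
    print(frame)
# (0.0, 'pose_A', 'neutral'), (0.033, 'pose_B', 'smile'),
# (0.066, 'pose_C', 'smile')
```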
While the above example describes pre-recording of an actor’s performance, it should
be recognized that such pre-recording is not necessary; in one such example, a
performance can be fed from a camera directly to a bone based rig and driven in real time.
According to some or all of the above exemplary embodiments, the present invention
thus provides systems, methods and apparatuses that provide fast pre-visualization for films
utilizing control input, such as input from a director, to shape the pre-visualization. Thus,
exemplary embodiments might provide a system where performers (actors, stuntmen, etc.)
wearing wireless suits are choreographed in real time by a film’s director. The director can sit
in front of a display that shows the output of the captured motion in a three dimensional
virtual environment and can both navigate and shape the visual shot within the three
dimensional virtual environment according to the director’s taste and vision. The pre-
visualizations can be output in basic form (e.g., flat shaded blasts) or within a virtual world
generated from an engine, such as the CRYENGINE 3, UNREAL engine, etc. The pre-
visualizations can be generated on-set with the motion capture and with the director, in
addition to the data being subsequently available (after motion capture) for off-set variations.
Further, due to the relatively small size of various components, the present system provides a
portable capture, processing and pre-visualization system that permits easy relocation and use
in office type settings.
Hereinafter, example embodiments of the present invention are described in detail.
Turning to FIGURE 1, a flowchart of an exemplary system includes a motion
capturing component 12, shown here as an XSENS system, including such a motion grid and
MVN suits (which include accelerometers therein). The present inventors have also modified
the suits with attachment points for harnesses via reinforced holes and reinforced
accelerometers.
An exemplary system for interfacing with the virtual digital rendering component 16
(e.g., MAYA) includes an IKinema system 14, which generates ‘stick figures’ from the
positional data emitted by accelerometers in the suit(s). In exemplary embodiments, the
virtual digital rendering component 16 (e.g., MAYA), provides the environment framework
for generating characters in a virtual three dimensional space. While the following portions of
the specification specifically refer to various specific systems, such as XSENS, IKinema,
MAYA, CRYENGINE 3, Adobe, etc., it should be recognized that they are merely exemplary
systems, and other systems may be used within the basic framework of the invention.
Referring still to FIGURE 1, in exemplary embodiments, a controller 18 acts as a
virtual camera within the framework provided by MAYA 16 via a virtual camera plugin 18.
Generated pre-visualization may then be output simply, e.g., as flat shaded blasts, for
approval, or with additional detail, such as a virtual world provided by an engine 22 such as
CRYENGINE 3. This additional detail may be provided with the MAYA or
MAYA/CINEBOX data for display to the director, so that the director receives an immersive
image (even at a level of detail representative of actual film production) when using the
virtual camera controller to direct action. Once a pre-visualization is considered satisfactory
(at 20), it may further be exported in a known or common format for storage (at 24).
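For concreteness, the flat shaded blast step might look like the following inside Maya's Python interpreter; getPanel, modelEditor and playblast are standard maya.cmds calls, but exact flags vary by Maya version, and the output path and frame range are placeholders.

```python
# Sketch of the flat shaded blast approval output, assuming Maya's
# Python commands module. Runs inside Maya only; paths and frame
# ranges are illustrative.
import maya.cmds as cmds


def flat_shaded_blast(output_path, start, end):
    """Switch the focused viewport to flat shading, then playblast it."""
    panel = cmds.getPanel(withFocus=True)  # assumes a model panel has focus
    cmds.modelEditor(panel, edit=True, displayAppearance="flatShaded")
    cmds.playblast(
        startTime=start,
        endTime=end,
        filename=output_path,  # exported in a common movie format (at 24)
        format="qt",           # QuickTime, as referenced elsewhere herein
        percent=100,
        viewer=False,
        forceOverwrite=True,
    )


flat_shaded_blast("previs/shot_010_blast", start=1, end=240)
```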
Referring now to FIGURE 2, an exemplary diagram of a set including exemplary
aspects of the present invention is shown in perspective. An XSENS motion capture system
is illustrated, including performers in motion sensing suits 26. The XSENS system includes
RF detectors 28, which detect motion of the suits on a stage 30. Various terminals, shown
generally at 32, are also illustrated for motion capture and three dimensional virtual
environment rendering. An exemplary director’s area 34 and exemplary directed image
processing area 36 are also generally illustrated, but will be described in more detail below.
Referring now to FIGURE 3, exemplary motion capture and virtual environment
rendering terminals are generally shown at 32. A first exemplary terminal 36 (labeled as a
MVN Studio) captures data from the suits 26. A second exemplary terminal 38 (labeled as
MAYA TD) renders the captured data into a three dimensional virtual environment. A third
exemplary terminal 40 (labeled as MAYA/CINEBOX AD) provides an optional image
enhancement, such as rendering the captured motion for a given suit as a specific character
for viewing by the director or other user.
FIGURE 4 further shows exemplary capture of data within the MVN motion grid
(including the RF detectors 28) of position and motion of the MVN suits 26 at the MVN
Studio terminal 36.
FIGURE 5 further shows exemplary conversion of the “stick figure” information into
character information at the MAYA TD terminal 38.
FIGURE 6 shows exemplary and optional additional shading or stereoscopic
processing of the MAYA image at the MAYA/CINEBOX AD terminal 40.
FIGURE 7 shows an exemplary director’s area 34 and an exemplary directed image
processing area 36 together as a “virtual village.” The exemplary director’s area 34 includes a
controller 18 that acts as a director’s virtual camera in the virtual environment and a multi-
panel set view 42 to immerse the director in the virtual environment while directing the
camera action. The exemplary directed image processing area 36 (labeled in FIGURE 7 as
Adobe Premiere) includes software to edit the director’s virtual shots.
FIGURE 8 illustrates the director’s area 34 in more detail, including the exemplary
multi-panel set view 42 and an exemplary handheld controller 18 with an integrated screen 44
imitating a camera’s viewfinder.
FIGURE 9 similarly illustrates the directed image processing area 36 in more detail,
showing an editing interface 46 and a display 48 showing the finished editing product.
FIGURE 10 illustrates an exemplary controller 18 that acts as a virtual camera for a
director. The exemplary controller includes a first handheld portion 50, which is configured
with a navigation toggle 52 having six degrees of motion. The illustrated exemplary first
handheld portion 50 is a modified controller with a toggle portion from a “SpaceNavigator”
product made by 3dConnexion. A second handheld controller 54 is tethered to the first
handheld controller 50 by an adjustable bar 56. The illustrated exemplary second handheld
controller 54 is derived from a Razer Hydra control system (which is shown generally at 58 in
FIGURE 12). In exemplary embodiments, the illustrated system would also make use of the
magnetic Orb Controller (60 in FIGURE 12) from the Hydra kit.
FIGURE 11 shows another exemplary controller 18. This exemplary controller
includes first 50 and second 54 handheld controllers (with the first optionally configured as a
Razer Hydra controller), a connection bar 56, and a video screen 44 (also shown in FIGURE
8) that is configured to act as a virtual viewfinder for a director or user.
It should be emphasized that the above-described example embodiments of the present
invention, including the best mode, and any detailed discussion of particular examples, are
merely possible examples of implementations of example embodiments, and are set forth for a
clear understanding of the principles of the invention. Many variations and modifications
may be made to the above-described embodiment(s) of the invention without departing from
the spirit and scope of the invention. For example, the present invention should not be
construed as being limited to a pre-visualization setting, since it should be recognized that the
ability to direct captured action in a three dimensional virtual environment via the controller
may be equally applicable to capture of finished film shots as to capture of shots for pre-
visualization of films. All such modifications and variations are intended to be included
herein within the scope of this disclosure and the present invention and protected by the
following claims.
Claims (24)
1. A system for rapid film pre-visualization, comprising: a motion capture component interfacing with wearable motion capture sensors; a virtual digital rendering component configured to receive the captured motion and re-create such motion in a three dimensional virtual environment; a display component configured to display an output of the virtual digital rendering component; and a controller component, configured to interface with the virtual digital rendering component and to act as a virtual camera in a three dimensional virtual environment, the controller configured with plural handheld remote components configured to navigate within the three dimensional virtual environment and to control the virtual camera using film camera controls to control the visual aspects of shots within the three dimensional virtual environment.
2. A system in accordance with claim 1, wherein said motion capture component is an RF motion capture component that detects accelerometers in a wearable suit within an RF grid.
3. A system in accordance with claim 1, wherein said virtual digital rendering component comprises computer animation software.
4. A system in accordance with claim 1, wherein at least one handheld remote component includes a toggle allowing for at least six degrees of motional control.
5. A system in accordance with claim 1, wherein at least one handheld remote component includes a handheld remote that is sensitive to a reference magnetic field to provide real time positional information about the remote control relative to the reference magnetic field.
6. A system in accordance with claim 1, wherein two handheld remote components are coupled together.
7. A system in accordance with claim 6, further comprising a view screen attached to said handheld remote components, the view screen configured to act as a virtual viewfinder for the virtual camera.
8. A system in accordance with claim 1, wherein said controller component is configured to navigate as a virtual camera in said three dimensional virtual environment in real time to provide pre-visualization for a film.
9. A system in accordance with claim 8, wherein said display component comprises a multi-panel display configured to provide a user with immersive image data such that a user can navigate said virtual camera therein.
10. A system in accordance with claim 1, further comprising a virtual world component configured to provide the three dimensional virtual environment with realistic world detail.
11. A system in accordance with claim 1, wherein at least one of said handheld remote components includes a control component having six degrees of movement and wherein said film camera controls include pan, tilt and zoom controls.
12. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide varying levels of detail in said three dimensional virtual environment for pre-visualization approval.
13. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide rendering of flat-shaded blasts in said three dimensional virtual environment for pre-visualization approval.
14. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide additional shading and stereoscopic processing to rendered figures derived from said received data in said three dimensional virtual environment for pre- visualization approval.
15. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide additional detail development in said three dimensional virtual environment for pre-visualization approval.
16. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide virtual terrain in said three dimensional virtual environment for pre- visualization approval.
17. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide a level of detail that is representative of actual film production in said three dimensional virtual environment for pre-visualization approval.
18. A system in accordance with claim 1, wherein said display component comprises plural displays surrounding a director configured to immerse a director in the virtual environment while the director directs the camera action utilizing the controller component.
19. A system in accordance with claim 18, wherein captured and rendered data follows a data pipeline from data capture to data rendering to a director's station, with further manipulation of data responsive to the controller component utilized by the director.
20. A system in accordance with claim 1, wherein said motion capture component interfaces with wearable motion capture sensors on suits with attachment points for harnesses via reinforced holes and reinforced accelerometers.
21. A method for rapid film pre-visualization, comprising: capturing position and motion data of wearable position and motion capture sensors using a motion capture system including plural sensor detectors; digitally rendering said captured data and re-creating motion of the sensors in a three dimensional virtual environment; displaying an output of the motion in the three dimensional virtual environment; and providing a controller component that is configured to interface with a three dimensional virtual environment to act as a virtual camera in the three dimensional virtual environment, the controller configured with plural handheld remote components configured to navigate within the three dimensional virtual environment to control the virtual camera using film camera controls within the three dimensional virtual space.
22. A method in accordance with claim 21, wherein at least one of said handheld remote components includes a control component having six degrees of movement and wherein said film camera controls include pan, tilt and zoom controls.
23. A system in accordance with claim 22, wherein captured and rendered data follows a data pipeline from data capture to data rendering to a director's station, the director's station at least including said display component and said controller component, the director's station providing a modification point of the data pipeline input to the director's station, the data pipeline input comprising data from data capture through virtual digital rendering.
24. A system in accordance with claim 23, wherein said director's station is configured to provide an export of director pre-visualization to a storage component.
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161578695P | 2011-12-21 | 2011-12-21 | |
US61/578,695 | 2011-12-21 | ||
US201261644066P | 2012-05-08 | 2012-05-08 | |
US13/466,522 | 2012-05-08 | ||
US13/466,522 US9799136B2 (en) | 2011-12-21 | 2012-05-08 | System, method and apparatus for rapid film pre-visualization |
US61/644,066 | 2012-05-08 | ||
NZ62619712 | 2012-12-19 |
Publications (2)
Publication Number | Publication Date |
---|---|
NZ719982A NZ719982A (en) | 2018-01-26 |
NZ719982B2 true NZ719982B2 (en) | 2018-04-27 |