
WO2000031689A1 - Free-form video editing system - Google Patents

Free-form video editing system

Info

Publication number
WO2000031689A1
Authority
WO
WIPO (PCT)
Prior art keywords
free
key
frame
markup
image
Prior art date
Application number
PCT/US1999/027792
Other languages
French (fr)
Other versions
WO2000031689A9 (en)
Inventor
Joseph Henry
Original Assignee
Synapix, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Synapix, Inc. filed Critical Synapix, Inc.
Priority to AU31029/00A priority Critical patent/AU3102900A/en
Publication of WO2000031689A1 publication Critical patent/WO2000031689A1/en
Publication of WO2000031689A9 publication Critical patent/WO2000031689A9/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20096 Interactive definition of curve of interest


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A technique for deriving multiple free-form curves from two or more key-frames such as may be used for segmenting portions of a sequence of static images. A user defines a free-form shape on a first key-frame. The free-form shape is then overlaid on a second key-frame. The user is allowed to make edits to the free-form shape in the second key-frame, and the information concerning these edits is recorded. The shape of the free-form curve for intermediate frames occurring between the first and second key-frames is then determined using the recorded edit information.

Description

FREE-FORM VIDEO EDITING SYSTEM
FIELD OF THE INVENTION
The present invention relates to computer based image processing and in particular to an interactive editing system that generates free-form curves from key frames for a sequence of static images .
BACKGROUND OF THE INVENTION
The composition of images using digital techniques has recently grown in popularity for providing special effects in movies, television programs, multimedia presentations, and in a variety of other applications. One of the tasks of such a system is so-called "image segmentation", which is the singling out of one part of the sequence of images for special treatment. Image segmentation is used in movies, for example, when it is desired to combine objects from different image sequences, such as in the movie Forrest Gump where the lead character meets President Kennedy. Image segmentation can also be used to indicate that part of an image should be treated differently from other parts. For example, a transparent window in an otherwise opaque wall should typically be treated differently for shading purposes. The goal of image segmentation is to combine objects or regions from various still photographs or movie frames to ultimately create a scene which is believable and convincing. Typically, a segment is identified in each of a certain number of frames in which it appears (called "key frames").
It is common, for example, to make use of the concept of key-frameable curves. In this instance, a user specifies a mathematical expression for a curve in one image of a sequence of images called a key frame. In some other downstream image, a second curve is also specified. The system then proceeds to create curves for the intervening images, such as by mathematical interpolation between the two curves. Various techniques are then used to identify the segment in the frames which occur between the key frames. For example, the most commonly used techniques require the user to identify the boundaries of a segment by manually placing seed points and redefining the entire boundary in each key frame. This tends to be a tedious and inaccurate process. The system then automatically fills in the intermediate frames, causing the segment to morph from one position to the next. Manual segmentation in general is tedious and time consuming, lacks precision, and is impractical when applied to long image sequences. Furthermore, due to the wide variety of image types, most automatic segmentation techniques are inaccurate and require significant user input to control the process.
Other boundary definition methods use contours, splines, or so-called "snakes" to automatically improve a manually entered rough approximation. For example, after an initial rough boundary approximation, such processes mathematically adjust the boundary points in an attempt to minimize an error function. Other classes of image segmentation techniques typically require a boundary template in the form of a manually entered rough approximation or figure of merit. The template is then used to impose directional sampling and/or searching constraints. Boundary extraction using graph searching techniques is therefore not interactive beyond the template specification, and loses the benefits of further human guidance and expertise.
SUMMARY OF THE INVENTION
The present invention is a technique for specifying key frames with free-form hand drawn curves that is not restricted by mathematical formulations.
The user is first enabled to draw any arbitrary or free-form shape in a first key frame. The user then edits the appearance of the free-form shape overlaid in a subsequent key frame. The process leaves unmodified the part of the free-form shape that the user did not edit. The system then makes use of the edit information in order to automatically generate a representation of the shapes for the intermediate frames. For example, the system may mathematically interpolate between the initial free-form shape and the edited form. The interpolation information is then used to render a representation of the image segment for frames which occur between the two key frames. Unlike other algorithms, the technique does not constrain the shapes to be particular types of predefined shapes such as Bezier curves, splines, or other curves defined by seed points. Rather, the technique works for any arbitrary shape that can be sketched.
Furthermore, the technique allows the image segment to change from frame to frame in a way that appears more natural to the viewer than the output of most currently available systems that make use of automation of some type.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Fig. 1 is a diagram of a computer based video graphics system that operates on a sequence of images to define image segments according to the invention.
Fig. 2 is a more detailed view of a screen presented to the user of the system shown in Fig. 1.
Fig. 3 illustrates how a curve can be generated for an intermediate key frame according to the invention.
Figs. 4A through 4C depict a sequence of exemplary images in which the technique is applied.
Figs. 5A through 5C further illustrate a technique for interpolation.
DETAILED DESCRIPTION OF THE INVENTION
Turning attention now more particularly to the drawings, Fig. 1 is a block diagram of the components of a digital image processing system 10 in which key-frameable free-form curves are developed according to the invention. The system 10 includes a computer workstation 20, a computer monitor 21, and input devices such as a keyboard 22 and mouse or stylus 23. The workstation 20 also includes an input/output interface 24, storage 25 such as a disk 26, and random access memory 27, as well as one or more processors 28. The workstation 20 may, for example, be a graphics workstation such as the O2/Octane sold by Silicon Graphics, Inc., a Windows NT-type workstation, or another suitable computer. The monitor 21, keyboard 22, and input device 23 are used to interact with various software elements of the system that exist within the workstation 20, causing software programs to be run and data to be stored, as more particularly described below. The system 10 also includes hardware elements typical of an image processing system such as a video monitor 30, audio monitor 31, hardware accelerator 32, and other user input devices 33. Also included are image capture devices such as a video cassette recorder (VCR), video tape recorder (VTR), and/or digital disc recorder (DDR) 34, cameras 35, and/or a film scanner/telecine 36. Sensors 38 might also provide information about the scene and the image capture devices. The present invention may be used in particular for developing an edited sequence of images. It should be understood, however, that there are other applications for systems which permit the user to (a) create a curve in one image, (b) then create a curve in some other downstream image, and (c) ask the system to automatically create curves for any intervening images. As explained above, image segmentation is often used to single out one part of a sequence of images for special treatment.
The special treatment may include replacing part of the image with some other image (for example, a sequence of images captured with a video camera may need to have a portion therein replaced with a synthetically generated image), where that part of the image may need to be treated differently from other parts of an image during subsequent processing. For example, a transparent window in an otherwise opaque wall might need to be treated differently for the purposes of generating proper lighting, shadowing, and reflection effects.
As shown in Fig. 2, the user is presented with a display, such as through a window on the computer screen on the monitor 21, in which the sequence of images 40 may be viewed. The user typically has available a set of VCR-like controls 42 that permit playing, fast forwarding, rewinding and stopping the sequence as desired. The system 10 also provides the user with a set of editing tools 44 common in graphics editing such as pick tools, shape drawing tools, line sketching tools, and the like. In the particular example shown, the user is working with a sequence of images that depict a locomotive train that is moving towards the camera. Fig. 3 shows a sequence of steps that are performed in part by the user interacting with the display shown in Fig. 2 and in part by the system 10 automatically performing certain steps. The sequence of steps:
• allows the user to draw any arbitrary shape in a first key frame;
• allows the user to edit its appearance in a second key frame; and
• automatically, using the edit information, provides corresponding shapes for the frames occurring between the two key frames.
The result is that the user may make use of key-framing techniques using only free-form hand drawn curves as a tool for identifying non-regularly shaped areas in an image for further processing, such as through image segmentation. Now more particularly, from an idle state 100, in a first state 102, the system 10 displays the first key frame, which is frame number "n" in this instance. In a next state 104, the user defines a first free-form curve such as by using the available drawing tools 44. This free-form curve is then stored as, for example, a series of x,y coordinate points by the system in its memory 27.
As shown in Fig. 4A, the user is presented with the view of frame number n and may choose to define a first free-form curve 50 that comprises a portion of the edge of the locomotive 48. In this instance, as shown in Fig. 5A, the free-form curve 50 is defined as a set of three points 52-1, 52-2, 52-3. In the next state 106, the user moves to a different frame such as frame n+m. In the next state 108, the system causes the first free-form curve to be overlaid upon the view of frame n+m. The dashed lines in Fig. 4C indicate the originally overlaid first curve 50 while the user is viewing frame n+m.
In state 110, the user defines an edited free-form curve while viewing frame n+m. The end result is not only a view of the edited curve 54, but also information that was used to transform the first curve 50 into the edited curve 54.
In the next state 112, as the user wishes to review intermediate frames located between frame n and frame n+m, the process for generating additional shapes proceeds.
More particularly, as shown in states 114 through 118, in a first state 114, points on the first curve 50 and the edited curve 54 are first identified. The edited curve 54 may have, for example, been defined by four points 56-1, 56-2, 56-3, 56-4. The original curve 50 thus may appear as the dotted lines in Fig. 4C. However, by using a stylus 23 or mouse, the user redefines a portion of the edited curve 54, such as by picking end points of a section and then dragging or "rubber banding" the section to a different location.
By using the information indicating how the user edited the curve 50, the system 10 then knows the areas of the two curves that are related to one another. In this instance, an interpolation process is used to change the section of the curve only where it has been edited, leaving the curve alone where it has not. Thus, for example, a first portion I of the curve is not changed; however, a second portion II of the curve which was edited will be used in the interpolation process. In the first state 114, a parameterization process is used to generate a parameter for each point of each curve in each key frame. For example, this process may begin by starting at a beginning point such as point 52-1 of the first segment 50, and performing a normalized distance calculation running from a value of 0 to a value of 1. In this instance, the second point 52-2 lies half-way along the segment 50 and is therefore given a parameter of 0.5. The third point 52-3 lies at the end of the segment 50 and is given a parameter of 1.0. The parameters may be determined, for example, by taking the cumulative distance of each point along the segment 50 from the beginning to the end.
A similar process takes place for the edited segment 54, as shown in Fig. 5C. In particular, the edited segment 54 also has a point 56-1 with a parameter of 0.0 and a point 56-4 with a parameter of 1.0. However, the edited segment 54 has points 56-2 and 56-3 that lie at parameter distances of 0.33 and 0.66, respectively. These parameters are thus similarly determined for the edited segment 54.
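The normalized cumulative-distance parameterization described above can be sketched in a few lines. The following is an illustrative Python sketch only; the patent specifies no code, and the function name is a hypothetical choice:

```python
import math

def parameterize(points):
    """Assign each (x, y) point a parameter in [0, 1]: its cumulative
    distance along the polyline, normalized by the total length."""
    # Running chord lengths from the first point.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    return [d / total for d in dists]

# Three evenly spaced points, as for curve 50 (points 52-1, 52-2, 52-3):
# the parameters come out 0.0, 0.5, 1.0.
print(parameterize([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]))
```

This is the chord-length parameterization common in curve processing; any monotone arc-length measure normalized to [0, 1] would serve the same role here.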
In the next state 116, which is a normalization state, points are added to both the first segment 50 and the edited segment 54 so that the number of points on each segment is the same. Thus, for example, points 52-4 and 52-5 are added to the first segment 50 at the corresponding parameter distance locations 0.33 and 0.66. Likewise, a fifth point, 56-5, is added to the edited segment 54 at the x,y location corresponding to a parameter of 0.5.
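The point-equalization step can be illustrated with a short sketch. This Python fragment is illustrative only (function names are hypothetical, and it assumes shared parameters compare exactly): it inserts a point on a polyline at a given parameter by blending the bracketing points, then adds to each curve the parameters that appear only on the other.

```python
def insert_at_parameter(points, params, t):
    """Insert a point on the polyline at parameter t, linearly
    interpolated between the two points whose parameters bracket t."""
    for i in range(len(params) - 1):
        if params[i] <= t <= params[i + 1]:
            f = (t - params[i]) / (params[i + 1] - params[i])
            (x0, y0), (x1, y1) = points[i], points[i + 1]
            new_pt = (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
            return (points[:i + 1] + [new_pt] + points[i + 1:],
                    params[:i + 1] + [t] + params[i + 1:])
    raise ValueError("parameter out of range")

def equalize(pts_a, par_a, pts_b, par_b):
    """Give both curves the same point count by adding, to each curve,
    points at the parameters found only on the other curve."""
    for t in par_b:
        if t not in par_a:
            pts_a, par_a = insert_at_parameter(pts_a, par_a, t)
    for t in par_a:
        if t not in par_b:
            pts_b, par_b = insert_at_parameter(pts_b, par_b, t)
    return pts_a, par_a, pts_b, par_b
```

For the example above, a first segment with parameters 0.0, 0.5, 1.0 and an edited segment with parameters 0.0, 0.33, 0.66, 1.0 both end up with five points at parameters 0.0, 0.33, 0.5, 0.66, 1.0.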
The final step 118 is to perform a linear interpolation assuming an x,y location for each point. For example, the linear interpolation may take the mathematical form of:

x_{n+i} = x_n + [i / (m + 1)] (x_{n+m} - x_n)
y_{n+i} = y_n + [i / (m + 1)] (y_{n+m} - y_n)
The result is an interpolated segment 60, such as shown in Fig. 5B, having a number of points 62-1 through 62-5. Thus, when the user "rewinds" his view of the sequence of images to a frame n+i, a view of the free-form segment 60 for the intermediate images is easily calculated and presented to the user as shown in Fig. 4C.
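Once both key-frame curves share matched point lists, the per-point linear interpolation for an intermediate frame can be sketched as below. This is a Python sketch for illustration (the function name is hypothetical); the i/(m+1) weight follows the linear form recited in the patent:

```python
def interpolate_frame(curve_n, curve_nm, i, m):
    """Blend matched point lists from key frames n and n+m to get the
    curve for intermediate frame n+i:
        p_{n+i} = p_n + (i / (m + 1)) * (p_{n+m} - p_n)
    """
    f = i / (m + 1)
    return [(xn + f * (xm - xn), yn + f * (ym - yn))
            for (xn, yn), (xm, ym) in zip(curve_n, curve_nm)]

# With m = 1 and i = 1 the weight is 1/2, a midpoint blend of the
# two key-frame curves.
print(interpolate_frame([(0.0, 0.0), (2.0, 0.0)], [(10.0, 10.0), (4.0, 2.0)], 1, 1))
```

Sweeping i across the intermediate frames between n and n+m yields one blended curve per frame, which is what the user sees when scrubbing back through the sequence.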
It should be understood that the user can make any number of edits to the original free-form curve, with the only constraint that the user cannot introduce a completely new free-form curve in the second key frame n+m.
It should also be understood that techniques other than strict mathematical interpolation may be used to derive the intermediate curves.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

What is claimed is:
1. A method for specifying markup of an intermediate image in a sequence of image frames, the method comprising the steps of: defining an arbitrary free-form shape markup for a first key frame in the sequence of image frames; accepting input indicating user edits to the free-form shape to define a markup for a second key frame in the sequence of image frames; and defining a markup for an intermediate frame occurring between the first key frame and the second key frame by deriving information from the user edits.
2. A method as in claim 1 wherein the markups indicate a segment of the image sequence.
3. A method as in claim 2 wherein the segment indicates an area of the image sequence where portions of another image are to be substituted.
4. A method as in claim 2 wherein the segment indicates an area of the image sequence where different visual effects are to be applied.
5. A method as in claim 1 additionally comprising the step of: before accepting input indicating user edits, overlaying a view of the arbitrary free-form shape onto the second key frame.
6. A method as in claim 1 wherein the arbitrary free-form shape is defined as a set of positional coordinate points in the first key frame.
7. A method as in claim 6 wherein the user edits to the free-form shape are defined as a set of positional coordinate points in the second key frame.
8. A method as in claim 7 wherein the step of defining a markup for the intermediate frame additionally comprises the step of: selecting a portion of the free-form shape to which user edits were made for interpolation.
9. A method as in claim 7 additionally comprising the step of : assigning a parameter to the coordinate points in the first and second key frames.
10. A method as in claim 9 wherein the parameter assigned to the coordinate points is a normalized distance from a common reference point .
11. A method as in claim 10 additionally comprising the step of: equalizing the number of coordinate points in the markup associated with each of the first and second key frames .
12. A method as in claim 11 additionally comprising the step of: determining a coordinate location for each point added by the equalizing step.
13. A method as in claim 12 wherein the step of defining a markup for the intermediate frame additionally comprises the step of: for each of the coordinate points in the first and second key frames, interpolating the corresponding coordinate points to determine a coordinate point of the markup for the intermediate frame.
14. A method as in claim 13 wherein the step of interpolating is a linear interpolation of two-dimensional positional coordinates specified by

$$x_{n+1} = x_n + \frac{1}{m+1}\left[x_{n+m} - x_n\right]$$

$$y_{n+1} = y_n + \frac{1}{m+1}\left[y_{n+m} - y_n\right]$$

wherein $(x_{n+1}, y_{n+1})$ is the two-dimensional position of an interpolated point in the intermediate frame, $(x_n, y_n)$ is the two-dimensional position of a corresponding point in the first key frame, and $(x_{n+m}, y_{n+m})$ is the two-dimensional position of a corresponding point in the second key frame.
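The claimed pipeline — assign each markup point a normalized-distance parameter (claims 9–10), equalize the point counts of the two key-frame markups (claims 11–12), and linearly interpolate corresponding points (claims 13–14) — can be sketched as follows. This is an illustrative reading, not the patented implementation: the function names are invented, and the choice of each shape's first point as the common reference point is an assumption the claims leave open.

```python
import math

def parameterize(points):
    """Claims 9-10: assign each coordinate point a parameter -- here a
    normalized cumulative distance in [0, 1] measured along the shape
    from its first point, taken as the common reference point
    (an assumption; the claims do not fix the reference point)."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    return [d / total for d in dists]

def point_at(points, params, t):
    """Claim 12: determine a coordinate location on the polyline at
    normalized parameter t by interpolating along the segment
    that contains t."""
    for i in range(len(params) - 1):
        if params[i] <= t <= params[i + 1]:
            span = (params[i + 1] - params[i]) or 1.0
            f = (t - params[i]) / span
            (x0, y0), (x1, y1) = points[i], points[i + 1]
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
    return points[-1]

def equalize(shape_a, shape_b):
    """Claim 11: resample both key-frame markups at the union of their
    parameter values so each has the same number of points."""
    pa, pb = parameterize(shape_a), parameterize(shape_b)
    ts = sorted(set(pa) | set(pb))
    return ([point_at(shape_a, pa, t) for t in ts],
            [point_at(shape_b, pb, t) for t in ts])

def interpolate_markup(shape_a, shape_b, m):
    """Claims 13-14: for each pair of corresponding points, apply the
    linear interpolation x_{n+1} = x_n + (x_{n+m} - x_n) / (m + 1)
    to derive the markup for the first intermediate frame."""
    ea, eb = equalize(shape_a, shape_b)
    return [(xa + (xb - xa) / (m + 1), ya + (yb - ya) / (m + 1))
            for (xa, ya), (xb, yb) in zip(ea, eb)]
```

For example, with key-frame markups of two and three points and m = 4, `equalize` resamples both shapes to three corresponding points before `interpolate_markup` moves each point one fifth of the way toward its position in the second key frame.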
PCT/US1999/027792 1998-11-24 1999-11-23 Free-form video editing system WO2000031689A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU31029/00A AU3102900A (en) 1998-11-24 1999-11-23 Free-form video editing system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US10963498P 1998-11-24 1998-11-24
US60/109,634 1998-11-24
US44702099A 1999-11-22 1999-11-22
US09/447,020 1999-11-22

Publications (2)

Publication Number Publication Date
WO2000031689A1 true WO2000031689A1 (en) 2000-06-02
WO2000031689A9 WO2000031689A9 (en) 2000-11-30

Family

ID=26807186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/027792 WO2000031689A1 (en) 1998-11-24 1999-11-23 Free-form video editing system

Country Status (2)

Country Link
AU (1) AU3102900A (en)
WO (1) WO2000031689A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1989011257A1 (en) * 1988-05-23 1989-11-30 Augspurger Lynn L Method and system for making prosthetic device
WO1995012289A1 (en) * 1993-10-28 1995-05-04 Pandora International Limited Digital video processing
EP0829821A2 (en) * 1996-09-11 1998-03-18 Da Vinci Systems, Inc. User definable windows for selecting image processing regions
US5825941A (en) * 1995-03-17 1998-10-20 Mirror Software Corporation Aesthetic imaging system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006044006A1 (en) * 2004-10-20 2006-04-27 Siemens Technology-To-Business Center, Llc Systems and methods for three-dimensional sketching
US7586490B2 (en) 2004-10-20 2009-09-08 Siemens Aktiengesellschaft Systems and methods for three-dimensional sketching
US9030462B2 (en) 2007-09-24 2015-05-12 Siemens Corporation Sketching three-dimensional(3D) physical simulations

Also Published As

Publication number Publication date
AU3102900A (en) 2000-06-13
WO2000031689A9 (en) 2000-11-30

Similar Documents

Publication Publication Date Title
US6249285B1 (en) Computer assisted mark-up and parameterization for scene analysis
US10872637B2 (en) Video inpainting via user-provided reference frame
US7788585B2 (en) Split edits
US6297825B1 (en) Temporal smoothing of scene analysis data for image sequence generation
US7084875B2 (en) Processing scene objects
US6268864B1 (en) Linking a video and an animation
US9286941B2 (en) Image sequence enhancement and motion picture project management system
US7194676B2 (en) Performance retiming effects on synchronized data in an editing system
Ueda et al. IMPACT: An interactive natural-motion-picture dedicated multimedia authoring system
US6665450B1 (en) Interpolation of a sequence of images using motion analysis
US5768447A (en) Method for indexing image information using a reference model
US6081278A (en) Animation object having multiple resolution format
US7146022B2 (en) Spatiotemporal locator processing method and apparatus
EP0788063B1 (en) Apparatuses for setting anchors in moving images and hypermedia
US20160172002A1 (en) Multi-stage production pipeline system
US6492990B1 (en) Method for the automatic computerized audio visual dubbing of movies
US20030095720A1 (en) Video production and compaction with collage picture frame user interface
US20040255251A1 (en) Assembling verbal narration for digital display images
GB2330265A (en) Image compositing using camera data
US20130183023A1 (en) Motion picture project management system
AU6787298A (en) Computer system process and user interface for providing intelligent scissors for image composition
EP1097568A2 (en) Creating animation from a video
US6473094B1 (en) Method and system for editing digital information using a comparison buffer
US7129961B1 (en) System and method for dynamic autocropping of images
WO2000031689A1 (en) Free-form video editing system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

COP Corrected version of pamphlet

Free format text: PAGES 1/4-4/4, DRAWINGS, REPLACED BY NEW PAGES 1/5-5/5; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 Ep: pct application non-entry in european phase