
CN101401130A - Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program - Google Patents


Info

Publication number
CN101401130A
Authority
CN
China
Prior art keywords
video
scene
model
sequence
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200780008655.1A
Other languages
Chinese (zh)
Other versions
CN101401130B (en)
Inventor
迪尔克·罗斯
托尔斯滕·布莱克
奥利弗·施奈德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nero AG
Original Assignee
Nero AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nero AG filed Critical Nero AG
Priority claimed from PCT/EP2007/000024 external-priority patent/WO2007104372A1/en
Publication of CN101401130A publication Critical patent/CN101401130A/en
Application granted granted Critical
Publication of CN101401130B publication Critical patent/CN101401130B/en
Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00 Record carriers by type
    • G11B2220/20 Disc-shaped record carriers
    • G11B2220/25 Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537 Optical discs
    • G11B2220/2562 DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An apparatus for providing a sequence of video frames on the basis of a scene model defining a scene comprises a video frame generator adapted to provide a sequence of a plurality of video frames on the basis of the scene model. The video frame generator is adapted to identify within the scene model a scene model object having a predetermined object name or a predetermined object property, to obtain an identified scene model object. The video frame generator is further adapted to generate a sequence of video frames such that user-provided content is displayed on a surface of the identified scene model object or as a replacement for the identified scene model object. An apparatus for creating a menu structure of a video medium comprises an apparatus for providing a sequence of video frames. The apparatus for providing a sequence of video frames is adapted to generate the sequence of video frames being part of the menu structure of the video medium on the basis of a scene model, on the basis of additional information, and on the basis of a menu structure-related characteristic. This concept allows the user-friendly generation of video transitions and menu structures.

Description

Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure, and computer program
Technical field
The present invention generally relates to an apparatus and a method for providing a sequence of video frames, an apparatus and a method for providing a scene model, a scene model, an apparatus and a method for creating a menu structure, and a computer program. In particular, the present invention relates to a concept for automatically producing animated scenes for the creation of interactive menus and video scenes.
In the past few years, the performance of home entertainment devices has steadily improved. Meanwhile, consumers can even produce their own digital videos and save them to a storage medium. However, so far it has not been possible to easily create sophisticated transitions between video scenes, between menu pages, or between a menu page and a video scene without an in-depth understanding of a programming language.
In addition, as it is typically necessary to provide separate code for any algorithm used to produce a transition, software companies attempting to offer consumers a solution for creating sophisticated video transitions have spent very large efforts on this task.
Summary of the invention
In view of the above, it is an object of the present invention to provide a concept for providing a sequence of video frames which allows a flexible generation of customized video frame sequences. A further object is to provide a user-friendly concept for creating a menu structure of a video medium.
This object is achieved by an apparatus according to claim 1, an apparatus according to claim 16, an apparatus according to claim 18, a method according to claim 23 or 24, an apparatus for creating a menu structure of a video medium according to claim 25, a method for creating a menu structure of a video medium according to claim 30, and a computer program according to claim 31.
The present invention provides, according to claim 1, an apparatus for providing a sequence of video frames on the basis of a scene model defining a scene.
A key idea of the present invention is that a sequence of video frames can be produced efficiently and flexibly by displaying user-provided content on a surface of an identified scene model object of the scene model, or as a replacement for the identified scene model object.
It has been found that within a scene model, a scene model object or a surface of a scene model object can be identified using a predetermined object name, surface name, object property, or surface property. Once an object or its surface has been identified, a video frame generator adapted to produce the sequence of video frames on the basis of the scene model comprising the identified object or surface can display the user-provided content (for example a user-provided image, a user-provided video frame, or a user-provided video sequence) on the identified surface, or as a replacement for the identified object.
Thus, two-dimensional user-defined content can be introduced into a predefined scene model, wherein a surface or face of an object of the predefined scene model serves as a placeholder surface.
Alternatively, by replacing an identified placeholder in the scene model with a user-provided three-dimensional object, a three-dimensional user-provided object (or user-provided content) can be introduced into the sequence of video frames described by the scene model.
In other words, it has been found that surfaces and objects in a scene model can act as placeholders for user-provided content (for example in the form of an image, a video frame, a sequence of video frames, or a three-dimensional object).
A placeholder object can be identified using a predetermined name or a predetermined object property. The user-provided content can therefore be introduced into the scene model by a video frame generator adapted to generate the sequence of a plurality of video frames on the basis of the scene model and the user-provided content.
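As an illustration of this identification step, the following Python sketch matches scene model objects against a predetermined name or a predetermined property; the data structures and function names are assumptions chosen for illustration only, not part of the claimed apparatus.

```python
# Minimal sketch of placeholder identification in a scene model.
# SceneObject and find_placeholders are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    properties: dict = field(default_factory=dict)

def find_placeholders(scene, predetermined_name=None, predetermined_property=None):
    """Return scene model objects having the predetermined name or property."""
    matches = []
    for obj in scene:
        if predetermined_name is not None and obj.name == predetermined_name:
            matches.append(obj)
        elif predetermined_property is not None and predetermined_property in obj.properties:
            matches.append(obj)
    return matches

scene = [
    SceneObject("cube1", {"texture": "video1"}),  # placeholder carrying a texture property
    SceneObject("background"),                    # ordinary object, no placeholder marking
]

# Identification by predetermined name ...
assert [o.name for o in find_placeholders(scene, predetermined_name="cube1")] == ["cube1"]
# ... or by predetermined property
assert [o.name for o in find_placeholders(scene, predetermined_property="texture")] == ["cube1"]
```

Either criterion alone suffices here; a real generator might combine both, and might additionally match individual surfaces rather than whole objects.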
The present invention further provides, according to claim 16, an apparatus for providing a scene model defining a three-dimensional video scene. The apparatus comprises an interface for receiving a description of a video scene, and a placeholder inserter. According to a key idea of the present invention, the placeholder inserter is adapted to insert a placeholder name or a placeholder attribute into the scene model, such that the placeholder name or placeholder attribute designates an object or a surface to be associated with user-provided content. In other words, the apparatus for providing a scene model creates a scene model for use by the inventive apparatus for providing a sequence of video frames. For this purpose, the apparatus for providing a scene model introduces a placeholder surface or a placeholder object into the scene model, wherein the placeholder surface or placeholder object can be identified by the apparatus for providing a sequence of video frames, and can be used to display the user-provided content.
The present invention further provides, according to claim 18, a scene model having at least one placeholder object, or at least one placeholder name or at least one placeholder attribute, associating a placeholder object or a placeholder surface with user-provided content. The inventive scene model is thus suited for use by the apparatus for providing a sequence of video frames.
The present invention further provides methods according to claims 23 and 24.
The present invention further provides, according to claim 25, an apparatus for creating a menu structure of a video medium.
The inventive concept for creating a menu structure of a video medium brings along the advantage that, by combining menu-structure-related information with the scene model, the video structure automatically adapts to the menu-structure-related information. The video frames produced by the apparatus for creating a menu structure are thus adjusted using the menu-structure-related information.
In other words, the scene described by the scene model is modified in accordance with the menu-structure-related information. Consequently, while still being based on the scene model, the sequence of video frames is adapted to the user's needs. The video frame sequence is thus customized by introducing the user-provided content into it. Nevertheless, the overall scene is still described by the scene model, which serves as a template of a predefined scene.
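As a hedged illustration of how menu-structure-related information could adjust the produced scene, the following Python sketch distributes chapter buttons over menu pages and adds navigation buttons only where needed (compare the examples described for Figs. 14 to 16); the function name, the page representation, and the pagination policy are all assumptions, not the claimed method.

```python
# Illustrative sketch: adapt a menu template to menu-structure-related
# information (chapter count, buttons per page). All names are assumptions.
def build_menu_pages(chapter_titles, buttons_per_page=6):
    """Split chapters over menu pages, adding navigation buttons as required."""
    pages = []
    for start in range(0, len(chapter_titles), buttons_per_page):
        chunk = chapter_titles[start:start + buttons_per_page]
        page = {"buttons": list(chunk), "nav": []}
        if start > 0:
            page["nav"].append("previous")       # not the first page
        if start + buttons_per_page < len(chapter_titles):
            page["nav"].append("next")           # more chapters follow
        pages.append(page)
    return pages

# 8 main chapters with 6 buttons per page yield two linked menu pages.
pages = build_menu_pages([f"Chapter {i + 1}" for i in range(8)])
assert len(pages) == 2
assert pages[0]["nav"] == ["next"] and pages[1]["nav"] == ["previous"]
```

A menu with 4 chapters would, under the same assumed policy, fit on a single page with no navigation buttons at all; the surrounding scene model remains the unchanged template.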
The present invention further provides a method for creating a menu structure of a video medium according to claim 30, and a computer program according to claim 31.
Further advantageous embodiments of the present invention are defined by the dependent claims.
Description of drawings
Preferred embodiments of the present invention will subsequently be described with reference to the accompanying drawings, in which:
Fig. 1 shows a block diagram of an inventive apparatus for providing a sequence of video frames on the basis of a scene model defining a scene and on the basis of user-provided content;
Fig. 2 shows a graphical representation of a scene model representing a cube;
Fig. 3 shows a listing describing the scene model shown in Fig. 2;
Fig. 4 shows a graphical representation of a transition between a first video frame sequence and a second video frame sequence, defined by a time-varying scene model and two user-defined video frame sequences;
Fig. 5 shows a flow chart of a method for rendering frames on the basis of a scene model and user-provided content;
Fig. 6 shows a flow chart of a method for producing a particular video frame using user-provided content and a scene geometry;
Fig. 7 shows a graphical representation of the use of frames of a first video frame sequence and of a second video frame sequence in the production of a generated video frame sequence;
Fig. 8 shows a graphical representation of the replacement of a placeholder object with a three-dimensional text object;
Fig. 9 shows a graphical representation of a sequence between two menu pages;
Fig. 10 shows a graphical representation of the progress of an introductory movie of a schematic overview;
Fig. 11 shows a graphical representation of the animation of the intermediate sequence "chapter selection menu → start of movie" of a schematic overview;
Fig. 12 shows a graphical representation of a sequence between a main menu and a submenu;
Fig. 13 shows a graphical representation of a smart 3D scene graph with 6 chapter buttons;
Fig. 14 shows a graphical representation of an example of a menu with 4 chapters;
Fig. 15 shows a graphical representation of an example of a menu with 8 main chapters, wherein the user can navigate to the next and previous menu pages;
Fig. 16 shows a graphical representation of an example of a menu with 8 main chapters, wherein the first main chapter has 4 further sub-chapters, and wherein the user can navigate back to the main menu by selecting an "up" button;
Fig. 17 shows a graphical representation of an example of a template, presented in a smart 3D internal representation, for the main menu on which the above examples are based;
Fig. 18 shows a flow chart of an inventive method for producing a sequence of video frames;
Fig. 19 shows a graphical representation of a user interface for selecting video titles;
Fig. 20 shows a graphical representation of a user interface for selecting a predefined smart 3D template;
Fig. 21 shows a graphical representation of a user interface for adapting a smart 3D template to the user's needs;
Fig. 22 shows a graphical representation of a user interface presenting a user-defined menu created by the smart 3D unit;
Fig. 23 shows a graphical representation of a highlight mask of a "monitor" menu, comprising 6 buttons and 3 navigation keys (arrows); and
Fig. 24 shows a graphical representation of the general workflow of the Nero smart 3D environment.
Detailed description of preferred embodiments
Fig. 1 shows a block diagram of an inventive apparatus for providing a sequence of video frames on the basis of a scene model defining a scene. The apparatus of Fig. 1 is designated 100 in its entirety. The apparatus 100 comprises a video frame generator 110. The video frame generator 110 is adapted to receive a scene model 112 and user-provided content 114. Furthermore, the video frame generator is adapted to provide a sequence of video frames 116.
It should be noted that the scene model 112 received by the video frame generator comprises at least one scene model object having an object name or an object property. For example, the scene model may comprise a description of a plurality of objects arranged in a two-dimensional or, preferably, a three-dimensional space. At least one of the objects has at least an object name or an object property associated with it.
Furthermore, the user-provided content 114 may, for example, comprise an image, a video frame, a sequence of video frames, or a description of at least one two-dimensional or three-dimensional object.
The video frame generator 110 is adapted to generate the sequence 116 of a plurality of video frames on the basis of the scene model and the user-provided content. The frame generator 110 is adapted to identify, within the scene model 112, a scene model object having a predetermined object name or a predetermined object property, to obtain an identified scene model object. Identifying a scene model object having a predetermined object name or a predetermined object property may comprise identifying a particular surface of the identified scene model object.
Furthermore, the video frame generator 110 is adapted to produce the sequence of video frames such that the user-provided content 114 is displayed on a surface of the identified scene model object. Alternatively, the video frame sequence generator 110 may be adapted to display the user-provided content 114 as a replacement for the identified scene model object.
It should be noted here that if the user-provided content 114 is an image, a video frame, or a sequence of video frames, the user-provided content is preferably displayed on a surface of the identified scene model object. On the other hand, if the user-provided content 114 is a two-dimensional or three-dimensional description of a replacement scene model object, the identified scene model object is preferably replaced with the user-provided content.
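The choice between the two display modes can be sketched as follows; the dictionary-based representation, the content kinds, and the function name are illustrative assumptions only.

```python
# Sketch: 2D content becomes a texture on the identified surface,
# a 3D description replaces the identified object entirely.
def apply_user_content(identified_object, user_content):
    """Return the scene element after applying the user-provided content."""
    if user_content["kind"] in ("image", "video_frame", "video_sequence"):
        updated = dict(identified_object)        # keep the identified object
        updated["texture"] = user_content["data"]  # display content on its surface
        return updated
    if user_content["kind"] == "object_3d":
        return user_content["data"]              # replace the identified object
    raise ValueError("unsupported content kind: " + user_content["kind"])

cube_surface = {"name": "surface 1"}

# 2D content: displayed as a texture on the identified surface.
textured = apply_user_content(cube_surface, {"kind": "video_frame", "data": "frame_0001"})
assert textured["texture"] == "frame_0001"

# 3D content: the identified placeholder object is replaced.
replaced = apply_user_content(
    cube_surface, {"kind": "object_3d", "data": {"name": "3d-text", "vertices": []}})
assert replaced["name"] == "3d-text"
```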
Thus, the video frame generator 110 provides the sequence of video frames 116, in which the user-provided content is displayed in a form controlled by the scene model 112. The scene model 112 can therefore be regarded as a template describing the video frame sequence 116 of the scene to be displayed, wherein the displayed scene is supplemented with the user-provided content.
In the following, further details regarding the scene model 112, the user-provided content 114, and the generation of the video frame sequence 116 will be described.
Fig. 2 shows a graphical representation of an exemplary scene model as used by the present invention. The scene model is designated 200 in its entirety. The scene model 200 comprises a cube 210 and an observation point 212. The cube 210 and the observation point 212 are arranged in a three-dimensional space, wherein the position and orientation of the cube 210 and of the observation point 212 can be described with reference to a coordinate system 220. Although only one of a plurality of possible coordinate systems (having directions x, y, z) is shown, any arbitrary coordinate system may be used.
It should be noted here that the cube 210, also designated "cube 1", comprises a total of 6 surfaces, three of which are shown. For example, the cube 210 comprises a first surface 230, a second surface 232, and a third surface 234. Furthermore, it should be noted that a preferred point inside the cube and a preferred direction of the cube may be defined in order to describe the position and orientation of the cube. For example, the position and orientation of the cube may be described in terms of the position of the center (or center of gravity) of the cube 210 and a preferred direction of the cube. The preferred direction may, for example, be a direction perpendicular to the first surface 230 and pointing outward from the first surface 230. Thus, the position of the cube 210 with respect to the origin 222 of the coordinate system 220 can be described using three scalar coordinates (for example, coordinates x, y, z). In addition, two further coordinates (for example two angular coordinates φ, θ) may be used to define the preferred direction or orientation of the cube 210.
Furthermore, the scene model 200 comprises the observation point 212, whose position can be described, for example, using three coordinates with reference to the origin 222 of the coordinate system 220. In addition, a viewing direction or a viewing sector may optionally be defined for the observation point 212. In other words, it can be defined in which direction an observer assumed to be located at the observation point 212 is looking, and/or which region of the scene model is visible to the observer. The viewing direction may, for example, be described in terms of two coordinates specifying a direction. Moreover, with respect to the observation point 212, a horizontal viewing angle and/or a vertical viewing angle may be defined, indicating which part of the scene model 200 an observer located at the observation point 212 can see.
In general, the scene model 200 comprises a definition of which part of the scene model 200 is visible (for example in terms of a viewing angle) to an observer located at the observation point 212.
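A minimal sketch of such a visibility definition, assuming a two-dimensional top view and a horizontal viewing angle only, could look as follows; the function and its parameters are illustrative, not taken from the patent.

```python
# Sketch: a point is visible if its bearing from the observation point
# lies within half the viewing angle of the viewing direction (top view).
import math

def is_visible(observation_point, viewing_direction_deg, viewing_angle_deg, point):
    """Top-view test: is `point` inside the observer's horizontal viewing sector?"""
    dx = point[0] - observation_point[0]
    dy = point[1] - observation_point[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Signed angular offset from the viewing direction, wrapped to (-180, 180].
    offset = (bearing - viewing_direction_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= viewing_angle_deg / 2.0

# Observer at the origin looking along +x with a 90 degree viewing angle:
assert is_visible((0.0, 0.0), 0.0, 90.0, (1.0, 0.2))       # slightly off-axis: visible
assert not is_visible((0.0, 0.0), 0.0, 90.0, (0.0, 1.0))   # 90 degrees off-axis: not visible
```

A full scene model would extend this with a vertical viewing angle and occlusion, but the principle of deriving visibility from observation point, viewing direction, and viewing angle is the same.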
In other words, the scene model 200 comprises: at least one object (namely the cube 210), at least one property of the object (for example a name or an attribute), and optionally an observer-related characteristic defining which part of the scene model 200 is visible to an observer located at the observation point 212.
Fig. 3 shows an exemplary listing of a scene model for the scene model of Fig. 2. The listing of Fig. 3 is designated 300 in its entirety.
It should be noted that the listing of the scene model may, for example, be defined in a structured description language (for example the XML description language, or a proprietary description language), and that the listing of the scene model may take any possible description format. It should also be noted that all characteristics outlined in the following example are to be considered optional; they may be replaced by other characteristics or may be omitted entirely.
Referring to Fig. 3, the listing 300 indicates that the scene model 200 comprises the cube 210. In the listing 300, the identifier "cube 1" is used to designate the cube 210. The listing 300 comprises numerous characteristics of the cube 210. For example, these characteristics may comprise the name of the cube 210 (characteristic "name") and the position of the cube 210 (attribute "position"), for example the position of the cube 210 in a Cartesian coordinate system (x, y, z). The listing 300 defining the scene model may further comprise parameters defining the rotation of the cube 210 (for example, described in terms of two angular dimensions φ, θ).
Furthermore, the description 300 of the scene model 200 may comprise further details about the surfaces of the cube 210. For example, a description of the first surface 230 (indicated by the attribute "surface 1") may comprise information related to the texture of the first surface 230 (attribute "texture"), information related to the material of the first surface 230 (attribute "material"), and/or additional information about the first surface 230 ("attributes").
In the example given, the scene model description 300 of the scene model 200 defines that the first surface 230 has a texture "video 1", the texture "video 1" indicating that the first user-provided video content should be displayed on the first surface 230 of the cube 210.
Further attributes may be provided for the second surface (designated "surface 2" in the listing or scene model description 300). For example, the second surface 232 ("surface 2") is defined to have a texture named "video 2", the texture "video 2" indicating that the second user-provided video content should be displayed on the second surface 232. Similar characteristics or attributes may be provided for the other surfaces of the cube 210.
The scene model description of the listing 300 also comprises information related to the observation point 212. For example, the position of the observation point 212 may be given in Cartesian coordinates (x, y, z) (see attribute "position"). In addition, a viewing direction (namely the direction in which an observer located at the observation point 212 is looking) may be defined for the observation point in terms of respective parameters (attribute "viewing direction"). Furthermore, a viewing angle may optionally be defined for an observer located at the observation point 212 (attribute "viewing angle"). The viewing angle defines which part of the scene model is visible to an observer located at the observation point 212.
Moreover, the scene model description of the listing 300 may optionally describe a motion of any object within the scene model. For example, it may be described how the cube 210 moves over time, wherein the description may be given in terms of a sequence of positions and/or positional parameters of the cube 210. Alternatively, the scene model description of the listing 300 may describe a direction of motion and/or a speed of motion of the cube 210. It should be noted here that the scene model description of the listing 300 may comprise a description of the evolution over time of the position of the cube 210 and of the orientation of the cube 210.
Furthermore, alternatively or additionally, the scene model description of the listing 300 may comprise a description of how the position of the observation point changes over time, and/or how the observer's viewing direction changes over time, and/or how the observer's viewing angle changes over time.
In other words, a scene model description may comprise a description of the scene model at a given time instance, and a description of the evolution of the scene model over time.
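As a hedged illustration, a structured scene model listing of the kind shown in Fig. 3 might be expressed in XML as follows; the tag and attribute names are assumptions chosen to mirror the listing 300, not a normative format defined by the patent.

```python
# Sketch of an XML scene model listing and how a generator might read it.
# Tag and attribute names ("object", "surface", "observation-point", ...)
# are illustrative assumptions only.
import xml.etree.ElementTree as ET

SCENE_XML = """
<scene>
  <object name="cube1" position="0 0 0" rotation="0 0">
    <surface name="surface1" texture="video1"/>
    <surface name="surface2" texture="video2"/>
  </object>
  <observation-point position="5 0 0" viewing-direction="180 0" viewing-angle="45"/>
</scene>
"""

scene = ET.fromstring(SCENE_XML)
cube = scene.find("object[@name='cube1']")

# The texture attributes mark the surfaces as placeholders for user content.
assert cube.find("surface[@name='surface1']").get("texture") == "video1"
assert scene.find("observation-point").get("viewing-angle") == "45"
```

A time evolution could be added in the same style, for example as child elements giving positions per time instance, without changing how the placeholders are identified.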
In a preferred embodiment, the video frame generator 110 is adapted to evaluate a scene model description (for example as given by the listing 300), and to generate the sequence of video frames 116 on the basis of this scene model description. For example, the video frame generator 110 may evaluate the scene model description valid at a first time instance to obtain a first video frame, and may evaluate the scene model description valid at a second time instance to obtain a second video frame. The scene model description for the second time instance may be provided as a separate, stand-alone scene model description valid for the second time instance, or it may be determined from the scene model description for the first time instance together with a time-development description or motion description (describing the change of the scene model between the first time instance and the second time instance).
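The per-time-instance evaluation can be sketched as follows, assuming a scene model holding an initial observation point plus a motion description from which the state at each time instance is derived; the structures shown are illustrative assumptions, and an actual generator would of course rasterize the scene instead of merely recording the viewpoint.

```python
# Sketch: derive the effective scene state per time instance from an
# initial state and a motion (time-development) description.
def observation_point_at(model, t):
    """Observation point at time t, from initial position and velocity."""
    x0, y0, z0 = model["observation_point"]
    vx, vy, vz = model["velocity"]
    return (x0 + vx * t, y0 + vy * t, z0 + vz * t)

def render_sequence(model, time_instances):
    # Each "frame" here just records the effective observation point;
    # a real video frame generator would render the visible scene region.
    return [{"t": t, "observation_point": observation_point_at(model, t)}
            for t in time_instances]

# Observation point moving away from the scene, as in the Fig. 4 example.
model = {"observation_point": (2.0, 0.0, 0.0), "velocity": (1.0, 0.0, 0.0)}
frames = render_sequence(model, [0.0, 1.0])
assert frames[0]["observation_point"] == (2.0, 0.0, 0.0)
assert frames[1]["observation_point"] == (3.0, 0.0, 0.0)
```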
The figured example that the content 114 of using frame of video generator 110 to provide according to model of place 112 and user produces sequence of frames of video is provided Fig. 4.Employing 400 is the diagram of index map 4 integrally.The left column 410 of diagram 400 shows the top view at the model of place of different time example.Another row 420 show the frame of video for the different sequence of frames of video that time instance produced 116.First row 430 shows corresponding frame of video in the top view of very first time example place model of place and sequence of frames of video 116.Show the top view of cube 432 with first surface 434 and second surface 436 at the model of place of very first time example.Here it should be noted that cube 432 is equal to the cube 210 of Fig. 2.The first surface 434 of cube 432 is equal to the first surface 230 of cube 210, and the second surface 436 of cube 432 is equal to the second surface 232 of cube 210.The content associated attributes (for example, title, material designator, texture designator or characteristic) of indicating the first surface 432 and first user to provide is provided the first surface 434 of cube 432.In the example of Fig. 4, suppose that the sequence of frames of video that image that the first surface 434 and first user provide, frame of video that first user provides or first user provide is associated.In addition, suppose that the sequence of frames of video that image that the second surface 136 (by attribute is carried out corresponding setting) and second user provide, frame of video that second user provides or second user provide is related.At very first time example place, model of place also comprises the description to observation station 438 and viewing angle 439.The full-screen image of selecting viewing angle 439 to make that observer at observation station 438 places sees first surface 434.
(observer at observation station 438 places can check with viewing angle 439) that the observer saw as observation station 438 places, according to the model of place at very first time example, frame of video generator 110 produces the frame of video of the view that shows the scene of being described by model of place.Therefore the frame of video 440 that is produced by frame of video generator 110 shows the zone at the visible model of place of observer at observation station 438 places.As above definition, the definition model of place makes the full-screen image of observer's perception first surface 434 at observation station 438 places, and makes frame 440 that the full-screen image on surface 434 is shown.As in model of place, defining, the video sequence that the frame of video that the image that first user provides, first user provide or first user provide is associated with first surface 434, and the frame of video that is produced 440 that produces at very first time example shows the full-screen image of the frame of video of the full-screen image of the full-screen image of the image that first user provides, frame of video that first user provides or the video sequence that first user provides.
The second row 444 shows the scene model at the second time instance and the corresponding produced video frame. The scene model 446 at the second time instance is similar to the scene model 431 at the first time instance. However, it should be noted that the observation point 438 has moved away from the cube 432 between the first time instance and the second time instance. The new observation point 448 at the second time instance is therefore farther from the cube 432 than the previous observation point. For simplicity, however, the viewing angle 449 at the second time instance is assumed to be equal to the viewing angle 439 at the first time instance (although the viewing angle 449 could differ from the viewing angle 439). Consequently, an observer at observation point 448 at the second time instance sees a larger portion of the scene than at the first time instance. In other words, at the second time instance the observer at observation point 448 sees not only the first surface 436 of the cube 432 but also a portion of the surroundings of the cube 432 (and possibly the top face of the cube).
Therefore, according to the scene model 446 at the second time instance, the video frame generator 110 produces a second video frame 450, the second video frame 450 showing an image (for example, a three-dimensional view) of the cube 432. Because the first surface 436 of the cube is visible in the second frame 450, and because the image provided by the first user, the video frame provided by the first user, or the video frame sequence provided by the first user (in the following, these three options are designated as the content provided by the first user) is associated with the first surface 436, the content provided by the first user is shown on the first surface 436 of the cube 432 in the second video frame 450. To achieve this, the video frame generator 110 may, for example, use the content provided by the first user as a texture for the first surface 436 of the cube 432 when producing the second produced video frame 450.
It should be noted here that the content provided by the first user at the first time instance may differ from the content provided by the first user at the second time instance. For example, the video frame generator 110 may show a first video frame (for example, of the user-provided video frame sequence) at the first time instance, and a second video frame (for example, of the user-provided video frame sequence) at the second time instance.
It should also be noted that at the second time instance the content provided by the first user is no longer shown as a full-screen image, but as a texture filling the first surface 434 of the cube 432 in the second produced video frame. The content provided by the first user therefore fills only a portion of the second produced video frame 450.
The third row 454 shows a scene model 456 and the third produced video frame 460. It should be noted that, for the example shown in Fig. 4, the scene model 456 at the third time instance is assumed to differ from the scene model 446 at the second time instance only in that the cube 432 has been rotated about a vertical axis (the vertical axis being perpendicular to the drawing plane).
The observer at observation point 448 can therefore see the first surface 434 and the second surface 436 of the cube 432. The third produced video frame 460 is also shown. It should be noted that content provided by a second user (for example, an image provided by the second user, a video frame provided by the second user, or a video frame sequence provided by the second user) is associated with the second surface 436 of the cube 432. The content provided by the second user is therefore shown on the second surface 436 of the cube 432 in the third produced video frame 460. In other words, when the video frame generator 110 produces the video frame 460 on the basis of the scene model 456 and the content provided by the second user, the content provided by the second user serves as a texture for the second surface 436 of the cube 432. Similarly, when the video frame generator 110 produces the third produced video frame 460, the content provided by the first user serves as a texture for the first surface 434 of the cube 432. It should further be noted that the content provided by the first user and the content provided by the second user are shown simultaneously in the third produced video frame 460, the content provided by the first user and the content provided by the second user being shown on two different surfaces of the cube 432.
More generally, the invention provides a solution for simultaneously showing the content provided by a first user and the content provided by a second user on different surfaces, where the surfaces on which the content provided by the first user and the content provided by the second user are shown may belong to a single (typically three-dimensional) object or to different (typically three-dimensional) objects.
The fourth row 464 shows the scene model 466 at the fourth time instance and the corresponding produced video frame 470. As can be seen from the scene model 466, the scene model 466 differs from the scene model 456 only in that the cube 432 has been rotated further, so that the second surface 436 of the cube 432 faces the observation point 448. The video frame generator 110 produces the fourth produced video frame 470 according to the scene model 466. The fourth produced video frame 470 is similar to the second produced video frame 450, the content provided by the second user being shown as a texture on the second surface 436 of the cube 432, which faces the observation point.
The fifth row 474 shows a scene model 476 and the fifth produced video frame 480. The fifth scene model 476 differs from the fourth scene model 466 in that the observation point 482 in the fifth scene model 476 is closer to the cube 432 than the observation point 448 in the fourth scene model 466. Preferably, the observation point 482 and the cube 432 are arranged in the scene model 476 such that an observer at observation point 482 sees (or perceives) the second surface 436 as a full-screen image. The fifth produced video frame therefore shows the content provided by the second user as a full-screen image.
In summary, the sequence of the five produced video frames 440, 450, 460, 470, 480 shows a transition between the content provided by the first user and the content provided by the second user, the first produced video frame 440 showing a full-screen image of the content provided by the first user, and the fifth produced video frame showing a full-screen image of the content provided by the second user.
In an alternative embodiment, the scene models 431, 446, 456, 466, 476 may represent a different transition between two scenes. For example, the scene models 431, 446, 456, 466, 476 may describe a transition between a menu page showing a plurality of menu items and content provided by a user. For example, the first scene model 431 may describe a full-screen image of the menu page, and the last scene model 476 may describe a full-screen image of the content provided by the user. The intermediate scene models 446, 456, 466 then describe intermediate steps of a preferably smooth transition between the first scene model 431 and the last scene model 476.
In an alternative embodiment, the scene models 431, 446, 456, 466, 476 may describe a transition between a menu page showing a first plurality of menu items and a menu page showing a second plurality of menu items. In this case, the first scene model may describe a full-screen image of the first menu page, and the last scene model 476 may describe a full-screen image of the second menu page. The intermediate scene models 446, 456, 466 describe intermediate steps of the transition between the first scene model 431 and the last scene model 476.
In an alternative embodiment, the scene models 431, 446, 456, 466, 476 may describe a transition between content provided by a user and a menu page. In this case, the first scene model 431 may preferably describe an image of the content provided by the user, and the last scene model 476 may describe an image of the menu page. The menu is, for example, an image of a 3D scene at the first time instance (for example, at time t=0 for a normalized time parameter) or at the second time instance (for example, at time t=1 for a normalized time parameter). The intermediate scene models 446, 456, 466 describe intermediate steps of the (preferably smooth) transition between the first scene model 431 and the last scene model 476.
Another possible application is that the first row 430 represents a presentation of content provided by a user, the content provided by the user being shown in the video frame 440. In addition, the third row 454 shows a presentation of a menu having three buttons (or, more generally, a different number of buttons, for example six). As shown in the third row 454, the three visible surfaces of the cube (shown in the video frame 460) may serve as buttons in the scene.
Fig. 5 shows a block diagram of a method of rendering a video frame, the method being applicable to the video frame generator 110. The method of Fig. 5 is designated in its entirety by 500. It should be noted that the method 500 of Fig. 5 may be carried out repeatedly for a plurality of frames in order to produce a video frame sequence.
The method 500 comprises, in a first step 510, obtaining the user-provided content for a video frame, the video frame to be rendered having an index f.
The method 500 further comprises, in a second step 520, obtaining a scene geometry for the video frame f.
The method 500 further comprises, in a third step 530, producing the video frame f using the user-provided content (for video frame f) and the scene geometry (for video frame f).
The method 500 further comprises, in a fourth step 540, providing the rendered video frame f.
If it is found in a decision step 550 that there are further frames to be rendered, the steps 510, 520, 530, 540 are repeated.
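The loop of steps 510 to 550 can be sketched as follows. This is a minimal illustration only, assuming a trivial content model; the helper functions `get_user_content`, `get_scene_geometry`, and `render` are hypothetical stand-ins for the operations described above, not part of the patent:

```python
def get_user_content(user_media, f):
    """Step 510 (sketch): pick the user-provided content for frame f.
    Static content is reused; a user-provided frame sequence is indexed per frame."""
    if isinstance(user_media, list):               # a user-provided frame sequence
        return user_media[min(f, len(user_media) - 1)]
    return user_media                              # a single (static) image

def get_scene_geometry(scene_model, f):
    """Step 520 (sketch): look up the scene geometry for frame f."""
    return scene_model[f]

def render(content, geometry):
    """Step 530 (sketch): stand-in renderer combining geometry and content."""
    return {"geometry": geometry, "texture": content}

def render_sequence(num_frames, user_media, scene_model):
    """Steps 510-550 of Fig. 5, repeated once per output frame."""
    rendered = []
    for f in range(num_frames):
        content = get_user_content(user_media, f)   # step 510
        geometry = get_scene_geometry(scene_model, f)  # step 520
        frame = render(content, geometry)           # step 530
        rendered.append(frame)                      # step 540: provide frame f
    return rendered                                 # step 550: no frames remain
```

The per-frame lookup in `get_user_content` corresponds to the reuse of static content versus the per-frame association of a user-provided sequence discussed below.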
The first step 510 of obtaining the user-provided content for frame f comprises determining which user-provided content is to be used for video frame f. For example, if it is found that all frames of the video frame sequence to be rendered use the same (static) user-provided content, the user-provided content obtained for a previously processed video frame can be reused. If, however, it is found that different user-provided content is to be used for producing different frames of the produced (or rendered) video sequence, the associated user-provided content is obtained.
For example, if the user-provided content is a video frame sequence, different frames of the user-provided video frame sequence may be associated with different frames of the produced (or rendered) video frame sequence. Therefore, in step 510, it is identified which frame of the user-provided video frame sequence is to be used for producing the currently rendered video frame.
It should be noted here that one or more user-provided video frames may be used for the production of a single produced (or rendered) video frame. For example, a single produced (or rendered) video frame may contain both a corresponding video frame of the video frame sequence provided by the first user and a corresponding video frame of the video frame sequence provided by the second user. An example of the video frames used is shown with reference to Fig. 7.
In the second step 520, the scene geometry for the currently processed frame f is obtained. The scene geometry may, for example, be provided in the form of a descriptive language describing the features of the geometric objects present in the respective frame. For example, the scene geometry for frame f may be described in a descriptive language similar to the listing 300 of Fig. 3. In other words, the scene description may comprise a list of the geometric shapes or elements to be shown in the respective frame, together with a plurality of properties or attributes associated with the geometric objects or shapes. Such properties may comprise, for example, the position and/or orientation of an object, the size of an object, the name of an object, the material of an object, a texture to be associated with the object or with individual surfaces of the object, the transparency of an object, and so on. It should be noted here that any attributes known from the description of virtual-reality worlds may be used for the geometric objects or geometric shapes.
In addition, the scene geometry may comprise information relating to an observer or an observation point, defining the point from which the image of the scene described by the scene geometry is produced by observing the scene. The description of the observation point and/or observer may comprise the position of the observation point, the direction of observation, and the viewing angle.
It should be noted here that the scene geometry for frame f may be obtained directly from a scene model available for frame f. Alternatively, the scene geometry for frame f may be obtained using a scene model for a frame e (shown before frame f) together with information about a movement of the objects between the times of frame e and frame f. Information about a movement of the observation point, the direction of observation, or the viewing angle may also be evaluated in order to obtain the scene geometry for frame f. The scene geometry for frame f is therefore a description of the geometric objects and/or geometric shapes to be shown in frame f.
In the third step 530, the video frame f is produced using the user-provided content and the scene geometry obtained in the second step 520. Details of producing the video frame f will subsequently be described with reference to Fig. 6. In the third step 530, the rendered video frame is obtained on the basis of the user-provided content for video frame f and the scene geometry for video frame f.
In the fourth step 540, the rendered frame f is therefore provided for further processing (for example, in order to form a frame sequence, or in order to further encode the raw material of the frame or frame sequence).
Fig. 6 shows a block diagram describing the production of a video frame f using user-provided content and a scene geometry. The method of Fig. 6 is designated in its entirety by 600.
The production of the video frame f comprises a first step 610 of identifying, in the scene model, an object having a predetermined name or a predetermined object attribute. If such an object can be identified in the first step 610, the identified object is replaced in a second step 620 by an object provided by the user. In a third step 630, an object having a surface with a predetermined surface property is identified in the scene model. The predetermined surface property may, for example, be a surface-texture attribute, a surface-material attribute, or a surface-name attribute. It should further be noted that, if an object having a predetermined name appears in the scene model, it may automatically be assumed that at least one particular surface of the object has the predetermined surface property. For example, it may be defined that, if the scene model comprises a cube having a predetermined name (for example video_object or NSG_Mov, where Mov stands for movie), each surface of the cube has the predetermined surface property suited for showing a video thereon.
In other words, the key purpose of the third step 630 is to identify at least one surface suited for showing the user-provided content thereon, or to identify at least one object having an attribute indicating that the user-provided content is intended to be shown on a surface of said object.
If a surface intended for showing the user-provided content has been identified, the user-provided content is shown on the respective surface. To this end, the video frame generator may use the user-provided content as a texture for the surface for which it has been recognized that the user-provided content is intended to be shown thereon.
For example, the video frame generator may parse a scene description or scene model for frame f in order to identify at least one surface intended for showing the user-provided content. The video frame generator may, for example, insert a reference (for example, a link) into the scene model, the reference indicating that the user-provided content is to be used as the texture of a particular surface. In other words, the video frame generator may parse the scene model or scene description for characteristic names or characteristic attributes in order to identify an object or surface, and may set the texture attribute of the identified object or surface such that the user-provided content is designated as the texture to be used.
For the parsing, the video frame generator may, for example, obey predetermined parsing rules defining, for example, that a surface having a predetermined surface name or surface property is to be filled with a texture according to the user-provided content.
Alternatively, the parsing rules may also indicate that the i-th predetermined surface of an object having a predetermined name is to be provided with a texture according to the user-provided content.
If a surface intended to carry a texture according to the user-provided content has been identified in the scene model or scene description, the video frame generator 110 subsequently shows the user-provided content on the identified surface. To this end, a graphical representation of the scene described by the scene model or scene description is produced. Taking into account the positions of the objects relative to one another and relative to the observation point, the objects described in the scene model or scene description by their attributes (such as position, size, orientation, color, material, texture, transparency) are converted into a graphical representation of the objects. In other words, the arrangement of objects described by the scene model or scene description is converted into a graphical representation as seen from the observation point. In the production of the graphical representation, the replacement of objects in the second step 620 is taken into account, as is the fact that the user-provided content is intended to serve as the texture of the identified surfaces.
It should be noted that producing a graphical representation of a scene described by a scene model or scene description is known to artists/designers.
It should also be noted that not all of the steps 610, 620, 630, 640 need to be carried out. On the contrary, in one embodiment, carrying out step 610 and step 620 is sufficient (provided that step 610 is successful). In this case, the video frame generator 110 produces a video frame showing the scene described by the scene model, the identified object being replaced by the user-provided object in accordance with the second step 620. Finally, step 640 is carried out in order to produce the graphical representation.
However, the first step 610 and the second step 620 need not be carried out, for example, in a case where no object needs to be replaced. In this case, it is sufficient to carry out the step 630 of identifying, in the scene model, a surface on which the user-provided content is to be shown (for example, as a texture). After step 630, the fourth step 640 is carried out. In step 640, the video frame generator 110 produces a video frame in which the user-provided content is shown on the identified surface.
In other words, it is possible to carry out only a replacement of the identified objects by user-provided objects (steps 610 and 620); only a replacement of surface textures by user-defined content (step 630); or both a replacement of the identified objects by user-provided objects (steps 610 and 620) and a replacement of surface textures by user-provided content (step 630).
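The three variants just described (object replacement, texture replacement, or both) can be sketched as a single pass over the scene objects. This is a hedged illustration: the dictionary representation of the scene, the attribute keys `name`, `surfaces`, `material`, and `texture`, and the placeholder name `placeholder_object` are assumptions made for this sketch; only the predetermined material name `NSG_Mov` is taken from the description:

```python
def apply_user_content(scene_objects, user_object, user_texture):
    """Sketch of steps 610-630 of Fig. 6: replace placeholder objects by
    a user-provided object, and assign user-provided content as the texture
    of surfaces marked with a predetermined material name."""
    result = []
    for obj in scene_objects:
        obj = dict(obj)
        # Steps 610/620: replace an object carrying a predetermined name.
        if obj.get("name") == "placeholder_object" and user_object is not None:
            obj = dict(user_object)
        # Step 630: a surface whose material carries the predetermined
        # name (here "NSG_Mov") receives the user content as texture.
        for surface in obj.get("surfaces", []):
            if surface.get("material") == "NSG_Mov":
                surface["texture"] = user_texture
        result.append(obj)
    # Step 640 would then rasterize this scene from the observation point.
    return result
```

Either replacement can be skipped simply by passing `None` for the corresponding user content, mirroring the three variants listed above.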
Fig. 7 shows a graphical representation of the video frames of the frame sequences provided by two users, which are used for producing a transition between the video frame sequence provided by the first user and the video frame sequence provided by the second user. It is assumed here that the transition comprises a time interval during which content of the video frame sequence provided by the first user and of the video frame sequence provided by the second user is shown in the produced video frame sequence 116.
To this end, the user may define an overlap region. The overlap region may, for example, comprise F frames (corresponding to a certain duration). The last F frames of the video frame sequence provided by the first user are then used in the transition. Frames of the video frame sequence provided by the first user are shown in a first graphical representation 710 of Fig. 7, the last F frames of the first user's video frame sequence having indices (n-F+1) to n. It is assumed here that the last F frames of the video frame sequence provided by the first user are used for the transition. However, it is not necessary to use the last F frames; rather, any F frames contained in the video frame sequence provided by the first user may be used.
In addition, it is assumed that the first F frames of the video frame sequence provided by the second user are used for the production of the produced video frame sequence.
It is further assumed that the produced video frame sequence comprises F video frames having indices 1 to F. Accordingly, the frame with index n-F+1 of the video frame sequence provided by the first user and the frame with index 1 of the video frame sequence provided by the second user are associated with the first frame of the produced video frame sequence. The associated video frames are therefore used for producing the first frame of the produced video frame sequence. In other words, in order to calculate the first frame of the produced video frame sequence, the (n-F+1)-th frame of the video frame sequence provided by the first user and the first frame of the video frame sequence provided by the second user are used.
Correspondingly, the n-th frame of the video frame sequence provided by the first user and the F-th frame of the video frame sequence provided by the second user are associated with the F-th frame of the produced video sequence.
It should be noted here that the association between the frames of the user-provided video frame sequences and the frames of the produced video frame sequence does not automatically mean that the associated frames are needed for calculating a particular frame of the produced video frame sequence. However, if it is found during the rendering of the f-th frame of the produced video frame sequence that a frame of the video frame sequence provided by the first user and/or of the video frame sequence provided by the second user is needed, the associated frames are used.
In other words, the association described above between the video frame sequence provided by the first user, the video frame sequence provided by the second user, and the produced video frame sequence allows an efficient calculation of the produced video frame sequence, in which variable (or moving) user-provided content can be embedded.
In other words, the frames of the video frame sequence provided by the first user constitute a frame-variant texture for the surfaces intended (or identified) for showing the video frame sequence provided by the first user.
The frames of the video frame sequence provided by the second user constitute a frame-variant texture for the surfaces intended (or identified) for showing the video frame sequence provided by the second user.
The produced video sequence is therefore provided using frame-variant textures.
It should also be noted that, in order to calculate the produced video frame sequence, the video frame sequence provided by the first user and/or the video frame sequence provided by the second user may be shifted with respect to the produced video frame sequence. In addition, the video frame sequence provided by the first user may be temporally expanded or compressed; the same applies to the video frame sequence provided by the second user. All that is required is that one frame of the video frame sequence provided by the first user and one frame of the video frame sequence provided by the second user are associated with each frame of the produced video frame sequence (in which that user-provided content is used).
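Under the assumptions of Fig. 7 (the first sequence contributes its last F frames with indices n-F+1 to n, the second sequence contributes its first F frames, and the produced sequence has F frames indexed 1 to F), the frame association reduces to a simple index calculation, sketched below:

```python
def associated_frames(k, n, F):
    """For the k-th frame (1-based, 1 <= k <= F) of the produced transition
    sequence, return the indices of the associated frames of the first and
    second user-provided sequences, following the association of Fig. 7."""
    first_index = n - F + k   # k = 1 -> n-F+1, ..., k = F -> n
    second_index = k          # k = 1 -> 1,     ..., k = F -> F
    return first_index, second_index
```

A temporal shift, expansion, or compression of a user-provided sequence, as mentioned above, would simply replace these two linear index maps by other per-frame mappings.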
Fig. 8 shows an illustration of replacing a text placeholder object with text.
The graphical representation of Fig. 8 is designated in its entirety by 800. As can be seen from the graphical representation 800, a scene description 810 (represented here in the form of a video frame) may comprise a text placeholder object. For example, the scene description 810 may describe a cube or cuboid having a name or attribute indicating that the cube or cuboid is a text placeholder object. Accordingly, if the video frame sequence generator 110 recognizes that the scene model 112 comprises a scene model object having a predetermined name or a predetermined object attribute indicating that the scene model object is a text placeholder object, the video frame generator replaces the text placeholder object with a representation of text. For example, the video frame generator 110 may replace the text placeholder object with one or more objects representing the text provided by the user. In other words, the video frame generator may introduce into the scene model a description of objects representing the user-provided text. For example, the scene model generator may be adapted to receive text in the form of a string input and to produce objects representing the text of the string input. Alternatively, the video frame generator may receive a description of the user-provided text in the form of one or more objects whose shape represents the text. In this case, the video frame generator may, for example, be adapted to include the user-provided description of the text (in the form of a description of a plurality of objects) in the scene model, and to produce the video frame on the basis of the scene model comprising the description of the objects representing the text.
As can be seen from Fig. 8, the video frame generator 110 produces a video frame 820 comprising an illustration of the user-provided text. It should be noted here that, in a preferred embodiment, the size of the illustration of the user-provided content is adapted to the size of the text placeholder object 812. For example, the text placeholder object may serve as an outer boundary for the user-provided text. In addition, attributes associated with the text placeholder object 812 (for example, a color attribute or a transparency attribute) may be applied to the user-provided text, irrespective of whether the user-provided text is provided as a string or as a plurality of objects.
The scene model 112 thus defines, as a template, the appearance of the user-provided text in the video frame sequence 116.
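A minimal sketch of the placeholder substitution of Fig. 8 follows. The attribute keys used here (`is_text_placeholder`, `bounds`, `color`, `transparency`) are illustrative assumptions; the description only specifies that the placeholder's size and attributes carry over to the rendered text:

```python
def replace_text_placeholders(scene_objects, user_text):
    """Swap every object flagged as a text placeholder for a text object
    that inherits the placeholder's bounds and appearance attributes."""
    result = []
    for obj in scene_objects:
        if obj.get("is_text_placeholder"):
            result.append({
                "type": "text",
                "string": user_text,
                # the text is scaled to fit the placeholder's outer boundary
                "bounds": obj["bounds"],
                # placeholder attributes carry over to the text
                "color": obj.get("color"),
                "transparency": obj.get("transparency"),
            })
        else:
            result.append(obj)
    return result
```

The substituted scene would then be rendered like any other scene model, so that the placeholder acts purely as a template for the text's position, size, and appearance.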
In the following, the invention will be described further. In addition, the production of a menu structure of a video data carrier using the present invention will be described. Furthermore, it will be described how transitions between different video contents can be established in accordance with the concept of the present invention. Moreover, it will be described how video effects and text effects can be produced.
In the following, some general information relating to DVD menus, video transitions, video effects, and text effects will be given. First, video transitions, video effects, and text effects will be described.
Although a key application of the present invention is the creation of three-dimensional (3D) DVD menus, three-dimensional video transitions, three-dimensional video effects, and three-dimensional text effects will also be described. Three-dimensional video transitions, three-dimensional video effects, and three-dimensional text effects may be regarded as simpler variants of the more complex DVD authoring.
Typically, when combining or linking two video sequences (or video films), a video transition is inserted in order to avoid an abrupt transition. A very simple two-dimensional (2D) video transition would, for example, be a fade, in which the first video is faded to black and, conversely, the second video is subsequently faded in. In general, a video transition is a video frame sequence (or film sequence) that begins by showing a frame identical to the first video and ends by showing a frame identical to the second video. This (video frame) sequence is then cut (or inserted) between the two videos, thereby allowing a continuous (or smooth) transition between the two videos.
In the case of a three-dimensional video transition, the video frame sequence (or film sequence) is the product of rendering the three-dimensional video transition. Furthermore, in the case of a three-dimensional video transition, the first frame of the sequence is preferably identical to a frame of the first video, and the last frame of the sequence is preferably identical to a frame of the second video. In addition to the 3D scene and its animation, the rendering engine receives synchronized frames of the first video and the second video as input. This process (of producing the transition) may be envisioned by assuming that the two videos are placed over each other in an overlapping manner, that the length of the overlap region defines the length of the video transition, and that the overlapping region is replaced by the rendered scene. A simple example of a three-dimensional video transition would be a plane on whose front side the first video is visible and on whose back side the second video is visible. The plane then needs to move in such a way that at the beginning of the animation (or transition) the front side is visible full-screen and at the end the back side is visible full-screen. For example, the plane may be moved away from the camera (or observer or observation point), perform a half rotation about its transverse axis of symmetry, and move toward the camera again.
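The plane example above can be parameterized over a normalized time t in [0, 1]. The sketch below fixes only the endpoint poses stated in the description (front side full-screen at t = 0, back side full-screen at t = 1); the particular distance curve used for the pull-back is an assumption:

```python
import math

def plane_transition_pose(t):
    """Pose of the transition plane at normalized time t in [0, 1]:
    a half rotation about the transverse axis of symmetry while the
    plane moves away from the camera and back again."""
    angle = math.pi * t                     # 0 -> front facing, pi -> back facing
    distance = 1.0 + math.sin(math.pi * t)  # pulls back mid-transition, returns
    return angle, distance
```

Evaluating this pose once per frame of the overlap region and rendering the textured plane from the camera yields the transition's frame sequence.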
Three-dimensional video effects and three-dimensional text effects are typically three-dimensional objects added to a video film (or video frame sequence). In this case, the 3D scene and its animation, together with the frames of the original video (or initial video), are the input to the renderer.
For a text effect, a text string must additionally be determined (or set). An example of a three-dimensional text effect may be envisioned as a sequence (for example, a video frame sequence) in which a string is built up, rendered character by character as three-dimensional text characters, and subsequently disappears again. The original video (or initial video) meanwhile continues running in the background.
A three-dimensional video effect may be, for example, a three-dimensional object (for example, a pacifier in a children's film or a football in a football World Cup film) that bursts into the frame and subsequently suddenly disappears again.
The cases of the 3D video transition, the 3D video effect, and the 3D text effect may, for example, be combined. The rendering engine receives as input a 3D scene, synchronized frames from one or more videos, and (optionally) one or more text strings. The rendering engine then produces a short film frame by frame, the film subsequently being further processed by external means (for example, being combined or cut with other audiovisual material).
The three-dimensional scene may conform to a proprietary or universal data format, or may be provided in a proprietary or universal data format, where the proprietary or universal data format can typically be a standard output data format of any 3D modeling software. In principle, any 3D data format (that is, any data format describing a three-dimensional scene) can serve as input. The detailed structure of the data file format is irrelevant to the present invention.
In addition, it is preferably possible to group geometric objects and to provide names for groups, objects and/or surface definitions (where, for example, a material is equivalent to a color and a texture: material = color + texture). Thus, for example, for the 3D video transition, the rendering engine can be notified, by using a specific (that is, characteristic or predetermined) name for the material on the front side of the plane of the above example, that a frame of the first video is to be placed (or shown) on said surface. In other words, the material of the front side of the plane is given a specific name (for example NSG_Mov). This specific name (NSG_Mov) indicates to the rendering engine that a frame of the first video is to be shown on the particular surface, namely the front side of the plane. In the same way, a certain material name instructs the rendering engine to display a frame of the second video on the back side of the plane.
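The material-name convention above can be sketched as a simple lookup: the renderer inspects the material name of each surface and, if the name is one of the characteristic names, substitutes the synchronized frame of the corresponding video. Only the name NSG_Mov is taken from the text; the second name and the data structures are assumptions, not the actual engine API.

```python
# Hypothetical mapping: characteristic material name -> index of the video
# whose synchronized frame is textured onto the surface. "NSG_Mov" is the
# name from the description; "NSG_Mov2" is an assumed name for the back side.
MATERIAL_TO_VIDEO = {
    "NSG_Mov": 0,   # front side of the plane -> first video
    "NSG_Mov2": 1,  # back side of the plane  -> second video (assumed name)
}

def frame_for_material(material_name, synchronized_frames):
    """Return the synchronized frame to show on a surface, or None if the
    material carries no characteristic name (plain color/texture)."""
    video_index = MATERIAL_TO_VIDEO.get(material_name)
    if video_index is None:
        return None
    return synchronized_frames[video_index]

frames = ["frame_of_video_1", "frame_of_video_2"]
assert frame_for_material("NSG_Mov", frames) == "frame_of_video_1"
assert frame_for_material("wood_texture", frames) is None
```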
In order to insert user-editable text into the three-dimensional scene, a three-dimensional object such as a cuboid is used, where the three-dimensional object is marked by a specific (or characteristic) name as a placeholder for a three-dimensional text object. The rendering engine can then remove these objects in advance (for example, before producing the graphical representation of the three-dimensional scene) and render the text defined by the end user at the position of these objects. The size of the rendered three-dimensional text conforms to (or depends on) the size of the placeholder object.
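The placeholder substitution just described can be sketched as follows, assuming a scene given as a flat list of named nodes; the Node class and the job format are illustrative assumptions, while the name NSG_Hdr is taken from the header-placeholder convention used later in the text.

```python
# Sketch: remove text-placeholder cuboids and emit (text, size) render jobs
# so that the rendered 3D text matches the placeholder's dimensions.

class Node:
    def __init__(self, name, size=(1.0, 1.0, 1.0)):
        self.name = name
        self.size = size  # bounding dimensions of the placeholder cuboid

def replace_text_placeholders(nodes, user_texts):
    """Split the scene into kept nodes and text-render jobs. A node whose
    name appears in user_texts is treated as a placeholder and removed."""
    jobs, kept = [], []
    for node in nodes:
        if node.name in user_texts:
            jobs.append((user_texts[node.name], node.size))
        else:
            kept.append(node)
    return kept, jobs

scene = [Node("NSG_Hdr", size=(4.0, 0.5, 0.2)), Node("monitor_1")]
kept, jobs = replace_text_placeholders(scene, {"NSG_Hdr": "My Holidays"})
assert [n.name for n in kept] == ["monitor_1"]
assert jobs == [("My Holidays", (4.0, 0.5, 0.2))]
```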
Thus, a 3D modeler can create three-dimensional scenes which, by the assignment of names and groupings, are interpreted by the intelligent 3D engine as video transitions, text effects or video effects, where commercial tools can be used (for example, any program that can output data in a 3D description format). The 3D modeler does not need any programming knowledge. While only a small number of rules in the form of object names have to be considered for (video) transitions and (video) effects, the creation of a functional DVD menu is more complex. However, the basic process remains the same.
In the following, the generation of a DVD menu will be described. It should be noted here that, in addition to the main film, most commercial DVDs contain additional video material, such as outtakes of the actors or interviews. In addition, the main film is usually divided into chapters. To allow the end user of the DVD to navigate through the DVD, the DVD contains, in addition to the audiovisual material described above, video sequences which are interpreted as a menu structure by the DVD player. The data format (or the details of the data format) of a video DVD is defined in a standard, and a DVD produced using the intelligent 3D design does not deviate from this standard.
A DVD menu may comprise a plurality of menu pages. The user can change between the pages by actions such as selecting buttons. In addition, the user can start a specific video, or a specific chapter of a video, by an action.
Between the display of two menu pages, between the display of a menu page and a video, or between the blank screen directly after inserting the DVD and the main menu page, small video sequences can be defined which, similarly to video transitions, avoid abrupt changes. Figs. 9, 10, 11, 12, 13, 14, 15, 16 and 17 show the schematic arrangement (or structure) of a DVD menu with inter-menu sequences. The inventive concept (also referred to as intelligent 3D) provides the possibility of defining menu pages and inter-menu sequences by means of three-dimensional models (also referred to as scene models).
A DVD menu page itself is also a short video sequence, so that even during the phase in which the DVD user (that is, the person using the DVD) can make a selection, a completely static image need not be shown. On the contrary, during the phase in which the DVD user can make a selection, one or more animations may run. These film sequences (that is, small animations) are rendered by the DVD authoring program using intelligent 3D.
The production of sequences (for example video frame sequences) from three-dimensional scenes (or on the basis of three-dimensional scenes) is therefore performed on the computer of the user of the authoring program or authoring software. The DVD player merely plays the videos (contained on the DVD produced by the DVD authoring program) in a fixed order or in accordance with the actions of the DVD user.
Typical transitions occurring on a DVD medium will be described below with reference to Figs. 9, 10, 11 and 12. Fig. 9 shows a diagram of a sequence (for example a video frame sequence) between two menu pages. The diagram of Fig. 9 is designated by 900 in its entirety. Fig. 9 shows a first menu page 910. The first menu page 910 comprises buttons 912, 914, 916, 918, 920, 922 which can be used for selecting specific chapters of the DVD content contained on the video DVD medium. The buttons 912, 914, 916, 918, 920, 922 may be represented by one or more graphical objects. In addition, the buttons 912, 914, 916, 918, 920, 922 may comprise selectable regions and/or highlight regions, so that a button to which a pointer is moved can be highlighted for selection. It should also be noted that the graphical representation of the buttons 912, 914, 916, 918, 920, 922 may comprise user-provided content, such as a user-provided image, a user-provided video frame or a user-provided video frame sequence. In other words, the graphical representation of a button may comprise static or dynamic, that is, changeable, graphical content.
It should also be noted that the menu page 910 is preferably described in terms of a scene model produced by a 3D modeler. The menu page 910 is therefore described in the form of scene description language elements (for example geometric objects). In addition, the scene model of the menu page 910 may comprise placeholder objects or placeholder surfaces, so that a placeholder object can be replaced with a user-provided object (that is, with user-provided content), and a placeholder surface can display (for example as a texture) user-provided content (for example a user-provided image, a user-provided video frame or a user-provided video frame sequence).
Fig. 9 shows a second menu page 930. The second menu page 930 comprises a plurality of buttons 932, 934, 936, 938, 940, 942. The buttons 932, 934, 936, 938, 940, 942 may have an appearance and a function similar to those of the buttons 912, 914, 916, 918, 920, 922.
Fig. 9 also shows an inter-menu sequence or menu-to-menu sequence 950 which is played by the DVD player when the transition between the first menu page 910 and the second menu page 930 is performed. Preferably, the inter-menu sequence 950 (typically an animated scene or animation) between the first menu page 910 and the second menu page 930 shows the old, previous (or previously shown) menu content disappearing and the scene (or content) of the new (subsequent or subsequently shown) menu being built up. Depending on the structure of the menu, some navigation arrows (for example green arrows) are preferably shown. It should be noted here that the menu structure described with reference to Fig. 9 is not an essential part of the invention and should be regarded as an example. In other words, the invention is not limited to a specific menu structure. The graphical representation of the menu shown is merely intended to explain the problem of dynamic menu creation. In this context, "dynamic" means that at the point in time of designing the menu (that is, for example, at the point in time of creating the menu template), the final appearance of the menu is unknown. For example, at the point in time of designing the menu, the occupancy (or allocation) and use of the individual buttons (or active switch regions) and of optional additional (three-dimensional) objects are unknown.
Fig. 10 shows a schematic overview of the process of an introductory film. The graphical representation of Fig. 10 is designated by 1000 in its entirety. The graphical representation 1000 shows a first menu page 1010 with a plurality of buttons 1012, 1014, 1016, 1018, 1020, 1022. The first menu page 1010 may, for example, be identical to the menu page 910. The graphical representation 1000 also shows a menu intro sequence 1030 (also referred to as an "intro"). When the DVD is inserted into the DVD player, the introductory film ("intro") is played once. The introductory film ends at the first main menu of the DVD.
In other words, the menu intro sequence 1030 is a video frame sequence that begins with a blank screen and ends with the first main menu. It should further be noted that the menu intro sequence 1030 is preferably described in terms of a scene model, as outlined above.
Fig. 11 shows a graphical representation schematically outlining the animation "chapter selection menu → film start" as an intermediate sequence. The graphical representation of Fig. 11 is designated by 1100 in its entirety, and shows a menu page 1110. The menu page 1110 may, for example, be identical to the menu page 910 of Fig. 9, the menu page 930 of Fig. 9 or the menu page 1010 of Fig. 10. The graphical representation of Fig. 11 also shows a first frame 1120 of a film (that is, of a video frame sequence). The graphical representation 1100 further shows a menu intermediate sequence or menu-to-title sequence 1130.
Preferably, the menu intermediate sequence 1130 begins with a video frame showing the menu page 1110 and ends with a video frame identical to the first frame of the user-provided video frames 1120. It should be noted here that the menu intermediate sequence 1130 may, for example, be described in terms of a scene model, as outlined above.
In an alternative, the menu intermediate sequence may lead back into the menu in reverse. Thus, the menu intermediate sequence 1130 may be played at the end of the video (a frame of which is shown as frame 1120), when a backward transition to the main menu is performed. In other words, a menu intermediate sequence for a title-to-menu transition may be provided. The corresponding transition may begin with a frame (the last frame) of the video frame sequence and may end with the menu page 1110.
Fig. 12 shows a graphical representation of a sequence between a main menu and a submenu. The graphical representation of Fig. 12 is designated by 1200 in its entirety. The graphical representation 1200 shows a main menu 1212 and a submenu 1220. The main menu 1212 may, for example, be identical to the first menu page 910 or the second menu page 930 of Fig. 9, the menu page 1010 of Fig. 10 or the menu page 1110 of Fig. 11. The submenu page 1220 may have a structure similar or identical to the structure of the main menu page 1212. However, the submenu page 1220 may, for example, comprise buttons allowing access to sub-chapters on the DVD. The submenu page 1220 may therefore comprise a plurality of buttons 1222, 1224, 1226, 1228, 1230, 1232. The graphical representation 1200 also shows a menu intermediate sequence or menu-to-submenu sequence 1240.
In the situation shown in Fig. 12, up to n = 6 chapters per menu may occur (according to an example embodiment). For the template of an exemplary menu intermediate sequence, n*4+10 designated objects should preferably be provided by the designer (for example by the 3D modeler). Thus, assuming a maximum number of n = 6 chapters per menu page, 34 appropriately designated objects should be provided by the designer. In particular, the following objects should be provided for an exemplary menu-to-menu animation sequence:
n "old" chapter images;
n "old" chapter texts;
3 "old" navigation arrows;
1 "old" header;
1 "old" footer;
n "new" chapter images;
n "new" chapter texts;
3 "new" navigation arrows;
1 "new" header;
1 "new" footer.
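The count n*4+10 stated above follows directly from the list: n chapter images and n chapter texts for each of the "old" and the "new" state (4n objects), plus twice the 5 fixed objects (3 arrows, 1 header, 1 footer). A one-line check, assuming nothing beyond the list itself:

```python
# Object count for a menu-to-menu animation template, per the list above.
def required_objects(n):
    per_state = n + n + 3 + 1 + 1   # images, texts, arrows, header, footer
    return 2 * per_state             # "old" state and "new" state

assert required_objects(6) == 6 * 4 + 10 == 34
assert required_objects(4) == 26
```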
Closely linked to the above objects, n "old" groups and a corresponding set of n "new" groups must be arranged accordingly in the three-dimensional scene. The "old" and "new" groups define which objects belong to which menu button. In the example "monitor" described in detail below, the chapter-1 image, the chapter-1 text and the entire mechanics of the first monitor are summarized as the first group.
A 3D modeler can therefore create a 3D menu by creating a series of animations using commercial software, such that the animations comply with the above rules. The 3D modeler does not need any programming knowledge. Likewise, the user of the authoring program does not need any knowledge of 3D modeling. The intelligent 3D engine reads the 3D scene (created by the 3D modeler) and creates short film sequences from the 3D sequences and the information obtained from the user of the DVD authoring program. The film sequences, together with the information about the menu structure, constitute the dynamic DVD menu on the standard-compliant DVD.
In the following, it will be described how the intelligent 3D engine processes the 3D scene together with the information from the authoring program in order to produce the menu intermediate sequences.
Different pieces of information are passed from the authoring program to the intelligent 3D engine. The user may want to include different numbers of (main) videos on the DVD. The user can determine the video frames or video frame sequences used for the button images in the 3D scene, the user can provide the texts for the header, the footer or the labels of the buttons, and the user can select the color and transparency of the highlight mask. However, other information is also possible, such as material colors in the three-dimensional scene or background images. In order to adjust the 3D scene accordingly, the 3D scene is first converted into an independent data structure, a so-called scene graph.
Fig. 13 shows a graphical representation of a scene graph. During the rendering process, the scene graph is traversed, and the geometric objects (rectangular nodes) are drawn in accordance with the transformations and materials located above them (that is, in accordance with the materials and transformations located at higher levels of the scene graph). Nodes designated "group" in the scene tree (or scene graph) serve to group objects. Generators serve to animate the objects located below them.
When the 3D scene data is read in and converted into the internal data format, the placeholder objects for text are converted on the fly into dynamic 3D text objects. The 3D text objects are designated "text" in the scene tree; a three-dimensional text object expects a text string as input value and produces three-dimensional text in the rendered three-dimensional scene.
Before the actual rendering process, the data structure present in memory can be adjusted in accordance with the preferences of the user of the authoring software.
For example, if the user has included (or linked) only 4 videos rather than 6, only 4 video buttons are necessary. If, for example, the user has provided 6 three-dimensional objects for buttons, two buttons then need to be masked out or omitted. This is readily possible because the buttons can be marked with specific (or characteristic) names. Thus, during the rendering process, the intelligent 3D engine merely needs to skip the respective branches in the scene tree. For the example given above (4 video buttons), the intelligent 3D engine may skip the branches designated by 5 and 6 in the scene graph of Fig. 13.
Before each frame of the menu intermediate sequence is rendered, the frames of the audiovisual material to be attached or shown (for example, user-provided content) can be introduced into (or assigned or linked to) the respective materials on the three-dimensional buttons. For example, the image designated "chapter image 1" is shown on the first button (button 1) of the menu described by the scene graph of Fig. 13.
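The branch-skipping step can be sketched on a flat list of group names; the NSG_BS0x naming convention is taken from the text, while the flat-list representation (rather than a real tree) is a simplifying assumption.

```python
# Sketch: keep only the button groups for which a video has been linked.
# With 4 linked videos and 6 template buttons, the groups for buttons 5
# and 6 are skipped; non-button nodes are kept unchanged.

def prune_scene_tree(group_names, linked_video_count):
    kept = []
    for name in group_names:
        if name.startswith("NSG_BS"):
            index = int(name[len("NSG_BS"):])  # e.g. "NSG_BS05" -> 5
            if index > linked_video_count:
                continue  # skip branch: no video for this button
        kept.append(name)
    return kept

groups = ["NSG_BS01", "NSG_BS02", "NSG_BS03", "NSG_BS04",
          "NSG_BS05", "NSG_BS06", "NSG_Hdr"]
assert prune_scene_tree(groups, 4) == \
    ["NSG_BS01", "NSG_BS02", "NSG_BS03", "NSG_BS04", "NSG_Hdr"]
```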
The user of a DVD produced with intelligent 3D therefore navigates through the DVD via 3D menus. The intermediate sequences, for example, are short video films recorded unalterably on the DVD. The user does not need any personal-computer knowledge. The user of the DVD authoring program has determined the appearance of the DVD menu beforehand by entering header strings, by selecting the video films to be integrated, or by fixing chapters. The intelligent 3D engine produces the video intermediate sequences in accordance with these entries or pieces of information (input of header strings; selection of video films; selection of chapters; selection of the images or video frame sequences to be shown on the buttons) and with the aid of the animated three-dimensional scenes. The user of the authoring software does not need any 3D knowledge or programming knowledge.
The 3D scenes can be produced by a 3D modeler using standard software, where only a few rules need to be observed. The 3D modeler does not need any programming knowledge. Any number of 3D menus, 3D transitions and 3D effects can be added without any change to the source code.
It should be noted here that Figs. 14, 15 and 16 show screenshots of an existing three-dimensional DVD menu in use. Fig. 17 shows the model of the 3D menu as defined by the 3D modeler.
An inserted chapter object comprises: an image area for the chapter image or video frame (or video image), the chapter text and optional additional model objects (for example, the moving mechanics of a monitor in the example called "monitor" shown below).
If a selectable region (or highlight region) comprises several objects, the objects can be summarized in a correspondingly named group. The region that can be activated by the mouse (or pointer) is automatically defined by the bounding box of the area occupied by the group's objects on the screen.
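Deriving the selectable region from the group's bounding box can be sketched as follows; the screen-space rectangle format (x0, y0, x1, y1) and the example data are assumptions for illustration.

```python
# Sketch: axis-aligned bounding box enclosing the screen areas of all
# objects belonging to one selectable (button) group.

def group_bounding_box(rects):
    x0 = min(r[0] for r in rects)
    y0 = min(r[1] for r in rects)
    x1 = max(r[2] for r in rects)
    y1 = max(r[3] for r in rects)
    return (x0, y0, x1, y1)

# A monitor group: screen, stand and chapter label, each with its own rect.
monitor_group = [(10, 10, 60, 50), (25, 50, 45, 70), (10, 72, 60, 80)]
assert group_bounding_box(monitor_group) == (10, 10, 60, 80)
```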
In the following, it will be described how transitions between menu pages are created. It is assumed here that the 3D modeler produces a scene model (or scene description) of the scene. The scene model is, for example, a description of a scene in terms of a three-dimensional modeling language, which is subsequently supplemented with user-provided content and then converted into a video frame sequence. In other words, the scene model comprises a description of the scene in terms of objects and object properties, a description of the temporal evolution of the scene model (for example the motion of objects and/or the motion of an observer or observation point), and a description of placeholder objects or placeholder surfaces for embedding user-provided content.
In the following, a modeler is assumed to be a person or a device creating a scene model of a (preferably three-dimensional) scene.
In order to create a 3D (three-dimensional) scene that can be used in a DVD menu, the modeler needs to obey a set of rules. Some of these rules result from the logical structure or logical composition of the DVD menu. Other rules are needed to notify the intelligent 3D engine of additional attributes of three-dimensional objects (such as, for example, the attribute of becoming a button, or the attribute of being used for the calculation of the highlight mask). When the menu page is displayed, the highlight mask is visible during the selection phase and marks the selected button by covering it with a color defined by the user of the authoring program. As shown with respect to Figs. 9, 10, 11 and 12, it is necessary to describe the menu structures supported by the intelligent 3D design in more detail with respect to the definition of the rules.
An intelligent 3D menu can be built up from a main menu and several submenus. Up to 6 buttons can be placed on the main menu page. Preferably, the buttons are arranged by the 3D modeler and given specific (or characteristic) names. For example, the 6 buttons can be given the names "NSG_BS01" to "NSG_BS06". If, for example, more buttons are needed because 10 videos are to be burned onto the DVD during the DVD authoring process, intermediate menu pages can be added, between which navigation in the horizontal direction can be performed via left/right arrow buttons. If, in addition, chapter marks are inserted into a video during the DVD authoring process, one or more menu pages of a submenu are added. Via an up button, the next higher (superordinate) menu page can be reached again. The arrow buttons are preferably also placed in the 3D scene and identified by names (for example: NSG_Up, NSG_Nxt, NSG_Pre).
In addition to the elements mentioned above, labels of the buttons as well as header and footer texts are supported in embodiments of the invention. For this purpose, the 3D modeler adds placeholder objects with designated names (as used for the text effects) to the 3D scene. For practical reasons, cuboids are preferred (for example: NSG_Hdr, NSG_Ftr).
Further naming and grouping of three-dimensional objects determines which objects are considered for the calculation of the highlight mask. The highlight mask calculation then renders the outlines of these objects as a black-and-white image. Fig. 23 shows an example of a highlight mask for 6 menu buttons and 3 navigation arrows.
The respective grouping also allows an exact addition (or definition) of highlight regions, for example the definition of the objects to be highlighted with a color in response to a user-defined selection of a chapter. Typically, this region (that is, the highlight region) is identical to the area in which the respective chapter image is located.
In the following, the calculation of the highlight mask will be briefly discussed. For this purpose, Fig. 23 shows a graphical representation of the highlight mask for the menu structure shown in Fig. 17.
The highlight mask is generated as follows: only the objects having a specific (highlight mask) name (or belonging to a specific group of objects) are drawn in full-bright white in front of a black background.
This produces the outlines of the highlighted objects, which are overlaid on the rendered main menu video in the player in order to highlight specific objects (for example buttons).
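The mask-rendering rule just stated — highlight-named objects drawn full-bright white on black, everything else left black — can be sketched on a tiny raster; the pixel grid, the rectangle-shaped "objects" and the rule that highlight geometry carries an NSG_BS prefix are all illustrative assumptions.

```python
# Sketch of highlight-mask generation: white silhouettes on black.
WIDTH, HEIGHT = 8, 4
BLACK, WHITE = 0, 255

def render_highlight_mask(objects):
    """objects: list of (name, rect) with rect = (x0, y0, x1, y1) in pixels.
    Only objects whose name marks them as highlight geometry are drawn."""
    mask = [[BLACK] * WIDTH for _ in range(HEIGHT)]
    for name, (x0, y0, x1, y1) in objects:
        if not name.startswith("NSG_BS"):  # assumed highlight-name rule
            continue
        for y in range(y0, y1):
            for x in range(x0, x1):
                mask[y][x] = WHITE
    return mask

mask = render_highlight_mask([("NSG_BS01", (1, 1, 3, 3)),
                              ("background", (0, 0, 8, 4))])
assert mask[1][1] == WHITE and mask[2][2] == WHITE
assert mask[0][0] == BLACK  # background object is not part of the mask
```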
In addition to the label of a button, an image (or video frame) attached or shown somewhere on the button makes the association between the button and the video easier for the DVD user. Typically, the image is a frame or a short film sequence (video frame sequence) from the associated video or video chapter. The 3D modeler determines how and where the image is attached (or shown) by means of placeholder textures in the three-dimensional scene. For this purpose, the 3D modeler gives the respective materials identifying names (for example NSG_BS01 to NSG_BS06).
Further boundary conditions for the 3D modeler result from the logical structure of the 3D model. Thus, preferably, the introductory animation (as illustrated, for example, with reference to Fig. 19) begins with a black image and ends with a menu page. A menu-to-menu animation (or menu-to-menu transition) as well as a menu-to-submenu or submenu-to-menu animation begins with a menu page (or submenu page) and ends with a menu page (or submenu page). A menu-to-video animation begins with a menu page and ends with the respective video at full frame size. The animation shown during the selection phase (that is, while the menu page is shown and the user can make a selection) may introduce only small movements into the menu, since otherwise the DVD user, who may select a button at any point in time, would perceive a jump (or discontinuity) at the beginning of the menu-to-video transition. In animations leading from a first menu page to a second menu page, buttons, labels and arrows must change; all objects (or at least the objects associated with buttons, labels and arrows) must therefore be provided twice by the 3D modeler (for example, NSG_BS01I to NSG_BS06I, NSG_UpI, etc.; the suffix "I" indicating "input").
In the following, an example of a DVD menu will be described with reference to Figs. 14 to 17. The example of Figs. 14 to 17 is based on a three-dimensional template which describes (or shows) monitors modeled on a system of connecting rods and pistons. The template of the example is called the "monitor template".
Fig. 14 shows a graphical representation of an example of a menu with 4 chapters. The graphical representation of Fig. 14 is designated by 1400 in its entirety.
Fig. 15 shows a graphical representation of an example of a menu with 8 main chapters, in which the user can navigate to the next and previous menu page (or to the first and second menu pages). The graphical representation of Fig. 15 is designated by 1500 in its entirety.
The graphical representation 1400 shows 4 monitor screens 1410, 1412, 1414, 1416. Each of the monitor screens represents a menu entry or menu button for selecting a chapter of the video content on the DVD. It should be noted that the menu scene shown in Fig. 14 is generated from a three-dimensional scene model, or three-dimensional scene model template, describing a total of 6 monitors. A menu page with 6 monitors can be seen, for example, in the left menu page 1510 of the graphical representation 1500 of Fig. 15. It can thus be seen that in the graphical representation 1400 the last two monitors (that is, the middle monitor and the right monitor of the lower row) and the corresponding chapter labels have been removed from the three-dimensional scene. Moreover, when comparing the menu scene of Fig. 14 with the menu scene of Fig. 15, it can be seen that the menu scene of Fig. 14 does not comprise any arrows. This is due to the fact that no arrows are needed, since the menu represented by the menu scene of Fig. 14 has no intermediate menu pages.
With regard to the graphical representation 1500 of Fig. 15, it should be noted that the menu described by the menu scene of Fig. 15 comprises two menu pages. A first menu page comprising 6 menu entries is designated by 1510, and a second menu page comprising 2 menu entries is designated by 1520. In other words, assuming that the template defining the menu scene comprises 6 menu entries, the first main menu page 1510 is completely filled. The first menu page 1510 also comprises a navigation arrow 1530. The navigation arrow 1530 serves as a navigation element and may be referred to as a "next" arrow.
On the second menu page 1520 (also referred to as main menu page 2), only 2 of the total of 8 videos remain, and correspondingly a "back" arrow (or "previous" arrow) is overlaid (or shown). The "back" arrow 1540 allows navigating back to the previous page, that is, back to the first menu page 1510.
Fig. 16 shows a graphical representation of an example of a menu with 8 main chapters. The graphical representation of Fig. 16 is designated by 1600 in its entirety. It should be noted here that the main menu of the example of Fig. 16 may be identical to the main menu of the example of Fig. 15. In other words, the graphical representation 1600 shows a first main menu page 1610 identical to the first main menu page 1510 of Fig. 15. The graphical representation 1600 also shows a submenu 1620. It should be noted here that the first main chapter has 5 further sub-chapters. In other words, the submenu 1620 can be displayed by selecting and activating the first monitor (or button) 1630 of the first menu page 1610. Since the first monitor or first button 1630 represents the first main chapter, the sub-chapters of the first main chapter can be accessed on the menu page 1620. It should also be noted that via the "up" button 1640 of the submenu page 1620, the user can navigate back (from the submenu page 1620) to the main menu (or main menu page 1610). In addition, the menu page 1610 comprises a "next" button 1650 for accessing the next main menu page (for example, identical to the menu page 1520).
In other words, in the example of Fig. 16 a submenu has been set up, which can be addressed via (or by means of) the first button 1630. After a short intermediate sequence, the user sees the submenu (or submenu page 1620), where (optionally) both menus (that is, the main menu page 1610 and the submenu page 1620) are visible during the animation. In the example embodiment, the 6 monitors of the main menu page 1610 move upward out of the image (or out of the visible screen), and new monitors (for example the 4 monitors of the submenu page 1620) follow from below. In the given example, the submenu (or submenu page 1620) comprises 4 videos and a corresponding navigation arrow 1660 allowing upward navigation back to the main menu or main menu page 1610.
Fig. 17 shows a graphical representation of the template of the main menu, rendered in the internal representation of intelligent 3D, on which the example described above is based.
In the template, the designer provides the maximum available number of 6 monitors 1710, 1712, 1714, 1716, 1718, 1720. In addition, the three navigation elements 1730 "arrow back", "arrow next" and "arrow up" need to be present. The header 1740 and footer 1750 as well as the chapter titles must obey predetermined naming conventions. In addition, the image areas for the chapter images (or chapter video frames) must have materials with predetermined names (NSG_BS01, NSG_BS02, NSG_BS03, NSG_BS04, NSG_BS05, NSG_BS06).
The individual monitors must each be summarized in groups defined by respective names (that is, one group per monitor, so that all elements and/or objects belonging to a particular monitor are contained in the group belonging to that monitor). As can be seen from the above example, if these conditions are met, the intelligent 3D engine can dynamically adapt the scene to the menu content.
Here it should be noted that employing 1700 integrally indicating graphic represents 1700.It should be noted that template 1700 comprises a plurality of menu items.In typical embodiment, corresponding a plurality of geometric objects are associated with menu item.To be grouped in the geometric object that the certain menu project is associated, just be included in the group of geometric object.Therefore, by identifying one group of geometric object, can identify the geometric object that belongs to menu item.Suppose that model of place or scene template describe n menu item, template comprises n group, and each of n group summarized and belonged to specific menu item purpose object.For example, belonging to specific menu item purpose object can comprise:
-having the predetermined title or a surface of attribute, described predetermined title or attribute indication: this surface is intended to show the content that the user that is associated with menu item provides, and the content of not specifying specific user to provide.In other words, each surface is the placeholder surface by the content that provides at the user of characteristic title or attribute appointment.
-having the placeholder object of predetermined title, the text placeholder object that described predetermined title is replaced the text that is intended to be provided by the user identifies.For example, text placeholder can be intended to provide the video sequence that is associated with menu item relevant " title " and/or information.
Therefore, frame of video generator 110 can be suitable for what menu entries should being presented in the menu scene (or menu page) based on menu model of place sign.The frame of video generator can also be suitable for use in determining has occur for how many individual groups that defined independent or independent menu entries in the menu template.According to information described above, if menu model of place or menu template comprise that than the more menu entries of actual needs then frame of video generator 110 can will belong to selecting or remove more than the object cancellation of menu entries.Therefore, what can guarantee is, even need also can use the template of the video entry that comprises some than the menu entries still less that is included in the template.
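As an illustration of the pruning described above, the surplus menu-entry groups can be removed by their index in the group name. The dict-based scene representation and the helper below are hypothetical assumptions for this sketch, not the patent's implementation; the group names follow the NSG_BS01 ... NSG_BS06 example given in the text.

```python
# Sketch: deselecting surplus menu-entry groups from a scene template
# when the user provides fewer videos than the template supports.

def prune_menu_groups(scene_groups, needed_entries):
    """Keep only the groups for the first `needed_entries` menu items;
    the remaining groups are removed so they are not rendered."""
    kept = {}
    for name, objects in scene_groups.items():
        # Group names carry a two-digit index, e.g. "NSG_BS04" -> item 4.
        index = int(name[-2:])
        if index <= needed_entries:
            kept[name] = objects
    return kept

template = {
    "NSG_BS01": ["monitor_1710", "title_text_1"],
    "NSG_BS02": ["monitor_1712", "title_text_2"],
    "NSG_BS03": ["monitor_1714", "title_text_3"],
    "NSG_BS04": ["monitor_1716", "title_text_4"],
}

# The user provided only 3 video clips, so the fourth group is dropped.
pruned = prune_menu_groups(template, needed_entries=3)
```

In this sketch, everything belonging to one menu item travels with its group, which mirrors the grouping requirement stated above.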
Figure 18 shows a flow chart of the inventive method for generating a video frame sequence. The method of Figure 18 is designated 1800 in its entirety. In a first step 1810, a scene model defining a scene is received. Preferably, the scene model comprises at least one scene model object having an object name and an object attribute.
The method 1800 also comprises a second step 1820, in which user-provided content is received.
In a third step 1830, a scene model object having a predetermined object name and a predetermined object attribute is identified in the scene model. An identified scene model object is thus obtained. In a fourth step 1840, a video frame sequence is generated such that the user-provided content is displayed on a surface of the identified scene model object, or is displayed as a replacement for the identified scene model object.
It should be noted here that the method 1800 of Figure 18 may be supplemented by any of the steps described above (for example, by any of the steps executed by the inventive video frame generator).
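The four steps of method 1800 can be condensed into the following sketch. The object model, the placeholder name and the toy "frame" structure are assumptions made for illustration only; a real implementation would render actual 3D frames.

```python
# Sketch of method 1800:
#   1810 - receive scene model, 1820 - receive user content,
#   1830 - identify placeholder objects by predetermined name,
#   1840 - generate frames with the user content substituted in.

def generate_frames(scene_model, user_content, placeholder_name, n_frames=3):
    # Step 1830: identify scene model objects with the predetermined name.
    identified = [obj for obj in scene_model if obj["name"] == placeholder_name]
    # Step 1840: produce a (toy) frame sequence in which the identified
    # object's surface shows the user-provided content.
    frames = []
    for t in range(n_frames):
        frame = {"time": t, "surfaces": {}}
        for obj in identified:
            frame["surfaces"][obj["name"]] = user_content
        frames.append(frame)
    return frames

scene = [{"name": "NSG_BS01"}, {"name": "background"}]
frames = generate_frames(scene, user_content="clip_A.mpg",
                         placeholder_name="NSG_BS01")
```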
In the following, an exemplary embodiment of the inventive apparatus and method for creating the menu structure of a DVD (or, more generally, of a video medium) will be described. To this end, Figure 19 shows a graphical representation of a user interface for selecting or entering video sequences. The graphical representation of Figure 19 is designated 1900 in its entirety. According to an embodiment of the invention, in a first step the user enters the video titles that he wants to be presented on the DVD (or on any other video medium, such as an HD-DVD, a Blu-ray disc or any other video medium). Optionally, chapter marks may be provided for each video. If chapter marks have been defined for a video, one or more submenus are created for that video title. Each button in the submenu indicates a chapter position. The video title can thus be started at a defined chapter position.
Figure 20 shows a graphical representation of a user interface page for selecting a template or scene model. In other words, in an embodiment of the invention, the user selects a predefined or predetermined smart 3D template (i.e. a pre-created scene model) in a second step. Figure 21 shows a graphical representation of a screenshot of a user interface for selecting attributes of the DVD menu structure.
In other words, according to an embodiment of the invention, the user can, in a third step, adjust the settings of the 3D template to suit his needs. This allows the button text, the header text, the footer text and/or the background music to be changed. In other words, the user can, for example, enter settings or adjustments with respect to the chapter titles to be shown in the scene model or scene template as replacements for the placeholder objects. Similarly, header text and footer text can be defined as replacements for text placeholder objects in the template.
In addition, the user can define which menu transitions are used (from the following list of possible menu transitions):
- an introductory animation;
- a transition animation between two menus;
- a transition animation between a menu and a chapter menu;
- a transition animation between a menu and a video title; and
- a transition animation between a video title and a menu.
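The list of transition types above could, for instance, be represented by a simple enumeration from which the user's selection is taken. The naming is a hypothetical illustration, not taken from the patent.

```python
from enum import Enum

class MenuTransition(Enum):
    INTRO = "introductory animation"
    MENU_TO_MENU = "between two menus"
    MENU_TO_CHAPTER_MENU = "between a menu and a chapter menu"
    MENU_TO_TITLE = "between a menu and a video title"
    TITLE_TO_MENU = "between a video title and a menu"

# The user enables a subset of the available transitions.
selected = {MenuTransition.INTRO, MenuTransition.MENU_TO_MENU}
```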
According to an embodiment of the invention, in a fourth step the menu structure created by the smart 3D engine can be viewed in a preview using a virtual remote control. Optionally, the menu transitions can be computed in real time by the smart 3D engine. Accordingly, Figure 22 shows a graphical representation of a screenshot of a user interface that allows the user to preview the menu transitions.
According to an embodiment of the invention, in a fifth (optional) step, the DVD (or a Blu-ray medium, an HD-DVD or another video medium) is burned or prepared.
It should be noted here that, with reference to Figures 19 to 22, the process of creating a smart 3D menu has been shown from the user's point of view. It should also be noted that the user entries described with reference to Figures 19 to 22, or optional entries therein, can be input to the video frame generator in order to control the replacement of placeholder objects by user-provided content, or the display of user-provided content on placeholder surfaces.
Thus, the user input controls the generation of a video frame sequence on the basis of the scene model (also referred to as a scene template, or simply as a "template") and on the basis of the user-provided content.
In the following, an overview of the menu creation concept according to an embodiment of the invention will be given.
It should be noted that a DVD typically comprises a certain number of videos. These videos are accessed via one or more menu pages, wherein each video, video chapter mark or further menu is represented by a selection button (for example, by a button on a menu page). The content of the DVD can be navigated by linking buttons to menu pages or videos. Here, the different menu pages are represented by different fixed short video sequences or still images.
The inventive concept (also referred to as smart 3D technology) consists in allowing the above-mentioned menu pages to be generated automatically as a function of a user-defined number of videos. In addition, transition videos are computed between two menu pages, or between a menu page (or at least one menu page) and a user-defined video title. This gives the user the illusion of seamless, interleaved and interactive video scenes. The individual menu pages and videos are no longer hard cuts placed one after the other, but appear to melt into one another in a virtual three-dimensional world.
The creation of the animated menu structure is performed automatically by the smart 3D engine. The user simply specifies which content (one or more video titles) he wants to appear on the disc and selects a predefined smart 3D template (for example, a template from a predetermined template list). The smart 3D engine then computes the necessary number of menus, of buttons per menu, and of transition videos between two menus or between a menu and a video title.
Each individual, predetermined smart 3D template shows (or represents) a three-dimensional video scene (or at least one three-dimensional video scene). For example, the individual menu pages may be interpreted as different sides of a room in the template. If the user navigates through the different menus, video sequences created by the smart 3D engine are played as transitions. These transitions show video transition scenes that fit seamlessly to the two menu scenes. Between a menu page and a video title, a seamlessly fitting video transition scene is also created.
Since the smart 3D engine is integrated between the authoring application and the authoring engine, the same animated menu structures can be created for DVD video as well as for Blu-ray media and HD-DVD media.
In the following, some characteristics of embodiments of the invention will be described, together with remarks regarding the requirements of a general setup.
In order to summarize some aspects of embodiments of the invention, the following statements can be made:
- By concatenation, any number of movie sequences can be merged via smooth 3D transitions.
- The linked (or merged or concatenated) movie sequences can be assembled into a common menu structure.
- The menu comprises an introductory sequence and one or more main menu pages. Optionally, the menu structure can provide submenu pages for addressing the individual chapters of a movie stream. The menu pages are linked by smooth transitions, the seamless transitions including a transition to the first frame of each movie (or at least a transition to the first frame of one movie).
- The menu scene adapts dynamically to the content. The presence of menu buttons (or, correspondingly, navigation buttons) and/or menus depends on the number of chapters occurring. The smart 3D engine takes care of the dynamic adaptation of the menu scene.
- The smart 3D engine combines high-level content (user input) with low-level content (a generic model of the menu scene carrying special labels to enable its dynamic interpretation) and metadata (general menu sequence information, time stamps) in order to produce video output in the form of individually rendered video frames. In addition, the smart 3D engine provides information regarding the highlighting and selection regions used for menu navigation.
- In the 3D model of the menu scene, special labels (for example names or attributes) are used to enable the above-described data to be generated automatically by the smart 3D engine.
- Each menu can have several lines of three-dimensional text, for example a header, a footer or chapter titles. The text is editable, i.e. the 3D meshes of the font characters are preferably generated in real time.
- The rendering of the transitions, 3D effects and menus is interactive. It is hardware-accelerated by modern graphics cards developed for the high-performance visualization of three-dimensional scenes.
In the following, some implementation details will be described.
According to an embodiment of the invention, the idea underlying the smart 3D design is to separate the three-dimensional data (3D data) carrying structural information from the engine that interprets the structure and renders the dynamic 3D model. For the organization of the data, generic means for 3D data are used.
In a preferred embodiment, all elements are given names, and existing data elements allow other elements to be grouped. Names and groupings can assign special functions (for example, the function of a button, as described above) to a 3D object or to a group.
In the smart 3D implementation, the engine reads a generic 3D data format. There, a metadata block defines the function of the 3D model. For example, for a DVD menu, this metadata may characterize a 3D scene as a menu-to-video transition, which is shown and played before the selected video when the end user selects a video button in the DVD menu. Further information contained in the metadata block may determine the number of buttons or the name of the DVD menu to which this transition belongs.
A complete set of 3D data for creating the video content then comprises files containing the 3D and structural data (possibly of any parts of menus or video effects). In order to make this content creation method more widely applicable, file formats other than the generic file format may also be imported. As further parts, there are audio files specifying the music or noises to be played inside (or during) a certain menu part or video effect.
So that the smart 3D engine can react flexibly to the user's needs, certain naming conventions for 3D objects or grouped elements exist in the 3D model. For example, the special name "NSG_BS04" may designate an object as the fourth button in the DVD menu. With this name, the engine will remove the object if four buttons are not needed, for example because the user has inserted only 3 video clips. Another name, such as "NSG_NxtH" (note the trailing "H" of the name, standing for "highlighted"), can designate an object or group as defining the highlight region of the "next" button possible in a DVD menu. By means of grouping, there may be geometry that will be removed by the smart 3D engine (if not necessary), as well as smaller geometry that will be considered when computing the highlight regions. Figure 23 shows an example of the highlight masks of a "monitor" menu with 6 menu buttons and 3 navigation arrows.
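The naming convention described above ("NSG_BS04" for the fourth button, a trailing "H" for a highlight mask) could be interpreted as in the following sketch. The parsing rules beyond the two examples named in the text are assumptions made for illustration.

```python
# Sketch: interpreting smart-3D object names per the convention above.

def classify(name):
    """Derive an object's role from its name."""
    if not name.startswith("NSG_"):
        return {"role": "plain geometry"}
    tag = name[4:]
    if tag.endswith("H"):                       # trailing 'H' => highlight mask
        return {"role": "highlight", "target": tag[:-1]}
    if tag.startswith("BS") and tag[2:].isdigit():
        return {"role": "button", "index": int(tag[2:])}
    return {"role": "special", "tag": tag}

button = classify("NSG_BS04")      # fourth button of the DVD menu
highlight = classify("NSG_NxtH")   # highlight region of the "next" button
plain = classify("monitor_1710")   # ordinary scene geometry
```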
In external data files, text is interpreted as a common geometric object. The object thus loses its meaning as a set of readable characters, and this meaning cannot be reinterpreted in order to change the text. This meaning is, however, essential for giving the user the possibility of inserting his own text (which will later be part of the DVD menu or of the video content) into the 3D scene.
To this end, a method has been established by which the 3D text of an object having a special name, such as "header", can be replaced with editable text; in this example, the editable text represents the heading of a DVD menu part.
In this scenario, the smart 3D implementation allows independent modelers to create an arbitrary number of authoring and video contents, without having to study software development. The smart 3D engine can interpret the structure and metadata of the 3D models, and thus knows the function of every part of the 3D scene.
In general, the present application comprises methods, apparatus and computer programs for generating animated scenes in order to create interactive menu and video scenes.
In the following, further implementation details will be described with reference to Figure 24. Figure 24 is a graphical representation of a hierarchy of modules for creating the content of a video medium. The graphical representation of Figure 24 is designated 2400 in its entirety. The process of creating the content of the video medium is controlled by a video editing and authoring application 2410. The video editing and authoring application 2410 receives one or more user video snippets 2420. The video editing and authoring application also receives user input not shown in the graphical representation of Figure 24. For example, the user input to the video editing and authoring application 2410 may comprise information about how many user video snippets 2420 are to be included on the video medium. The user information may also comprise information about the titles of the video clips (or video frame sequences) to be included on the video medium. The user input may further comprise user selections regarding the details of the menu structure. For example, the user input may comprise a definition of which of a plurality of available menu templates (or scene models) is to be used for generating the menu structure of the video medium. The user information may also comprise additional settings, such as color settings, a selection of a background image, a selection of music titles, and so on.
The rendering of the video sequences to be stored on the video medium is performed by a so-called smart 3D engine 2430, which is equivalent to the video frame generator 110. The smart 3D engine 2430 receives one or more template definitions for scenes and video effects. The template definitions 2440 are equivalent to the scene model 112 and describe scenes in terms of objects as well as grouping information and attribute information.
The smart 3D engine also receives one or more video streams and one or more attribute settings, designated 2450, from the video editing and authoring application 2410. It should be noted here that the video streams are equivalent to the user video snippets 2420, or are video streams authored from the user video snippets by means of the video editing and authoring application 2410. The smart 3D engine is adapted to create one or more video streams 2460 and to send the one or more video streams 2460 back to the video editing and authoring application 2410. It should be noted that the video streams 2460 are equivalent to the video frame sequence 116.
The video editing and authoring application 2410 is adapted to build up the menu and content structure of the video medium from the video streams 2460 provided by the smart 3D engine 2430. To this end, the video editing and authoring application is adapted to mark (on the basis of certain meta-information) which type of video content a video stream 2460 represents. For example, the video editing and authoring application 2410 may be adapted to recognize whether a particular video stream 2460 represents a menu-to-menu transition, a menu-to-video-frame-sequence transition, a video-frame-sequence-to-menu transition, an introductory transition (between a blank screen and a menu), or a video-frame-sequence-to-video-frame-sequence transition. On the basis of the information about the type of the video stream, the video editing and authoring application 2410 places the video stream at the correct position within the data structure of the video medium.
For example, if the video editing and authoring application 2410 recognizes that a particular video stream 2460 is a menu-to-video transition, the video editing and authoring application 2410 sets up the structure of the video medium such that, if the user selects the playback of a specific movie in a certain menu, the menu-to-video transition is played between the specific respective menu and the specific corresponding video (or movie).
In another example, if the user changes from a first menu page to a second menu page, for example by selecting a specific button (a "next" button) on the first menu page, a menu-to-menu transition between the first menu page and the second menu page should be shown to the user. Accordingly, the video editing and authoring application 2410 arranges the corresponding menu-to-menu transition on the video medium such that the menu-to-menu transition is played when the user selects the above-mentioned button on the first menu page.
Once the structure (in particular, the menu structure of the video medium) has been created in the video editing and authoring application 2410, the video editing and authoring application passes the information to be stored on the video medium to a burning engine 2470. The burning engine 2470 is adapted to format the data provided by the video editing and authoring application 2410 such that the data complies with the standard of the respective video medium (for example a DVD medium, a Blu-ray disc, an HD-DVD or any other video medium). The burning engine 2470 is also adapted to write the data provided by the video editing and authoring application 2410 to the video medium.
In summary, it can be stated that Figure 24 shows the general workflow of the smart 3D engine.
In the following, specific details relating to the invention described above will be given.
First, some additional details regarding the computation of the transition videos will be described. It should be noted that, for the computation of a transition video, the video frame generator receives two video images or video frames, one video frame being taken from the disappearing video and one video frame being taken from the appearing video.
Both images or video frames correspond to the same point in time of the final video stream (or of the final video frame sequence 116). The temporal positions of the two images or video frames within their input video streams depend on the lengths of the individual input video streams (or input videos) and on the duration of the overlap or transition. In a preferred embodiment, however, the 3D engine does not take absolute time information into account.
From the two input images or input video frames, a single output image or output video frame is generated. In the generation of the output video frame, the input video frames replace the textures of the respectively named materials in the three-dimensional scene (described by the scene model). The output image or output video frame is thus an image of the three-dimensional scene in which the texture of one object is replaced by the first input video frame, and another object texture is replaced by the second input video frame.
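Per output frame, the texture substitution described above could be sketched as follows. The material names "video_out"/"video_in" and the dict-based scene representation are illustrative assumptions; a real engine would render the full 3D scene after substituting the textures.

```python
# Sketch: one transition frame is produced from one frame of the
# disappearing video and one frame of the appearing video, both
# corresponding to the same output time point.

def render_transition_frame(scene_materials, frame_out, frame_in):
    """Replace the two specially named material textures with the current
    input frames; here, 'rendering' just returns the substituted map."""
    materials = dict(scene_materials)          # leave the scene model intact
    materials["video_out"] = frame_out         # frame from the disappearing video
    materials["video_in"] = frame_in           # frame from the appearing video
    return materials

scene = {"video_out": "placeholder", "video_in": "placeholder",
         "wall": "brick.png"}

out_frame = render_transition_frame(scene, "clipA_f120.png", "clipB_f000.png")
```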
In addition, the files and software used to generate a DVD menu will be described:
- one or more files describing the three-dimensional scenes in a 3D animation format;
- one or more description files describing the structure of the scene graphs and additional animation data (for example, the name of the 3D template, the type of the intermediate sequences, and so on);
- video imaging software providing the image data or video data and recombining the video data;
- a 3D engine which incorporates the image data and text data into the 3D scenes, formats the scenes in accordance with the input data, and subsequently renders the 3D scenes.
In order to generate the DVD menu, in an embodiment of the invention, all possible menu combinations and menu intermediate sequences are rendered at DVD production time, as a function of the number and division of the chapters. In addition, the menu combinations and menu intermediate sequences are burned onto the DVD in video files. Furthermore, a navigation file (having the file name extension ".ifo" and known from the DVD video disc standard) is generated. This navigation file allows the DVD player to jump to the respective sequence (i.e., for example, to the beginning of a transition video).
In order to determine the menu structure, the correspondingly modeled 3D scenes are adapted as a function of the number and structure of the available video chapters. Unneeded parts of the modeled three-dimensional scene (i.e. unneeded menu items) are removed automatically, so that they are not shown in the finally generated video frame sequence. In addition, user-editable text blocks are generated.
In this way, 3D menus are generated in which animation sequences are played between the menu pages. In addition, highlight masks are generated automatically from three-dimensional objects having predetermined names. Highlight masks of arbitrary shape can thus be created.
One of the key advantages of the embodiments of the invention is that the menu designer (for example a 3D modeler) merely needs to model the general menu sequence in advance. The user of the DVD authoring software is not involved in this task. The adaptation and generation of the menu video sequences are performed automatically in accordance with the characteristics of the chapter division.
In the following, it will be described how a plurality of movie sequences can be linked (or combined) by concatenation. It is assumed here that a video movie comprises 30 independent video clips. Thus, the complete movie comprising, for example, 30 individual video clips may have a sequence of 29 transitions. Alternatively, if, for example, a fade-in effect at the beginning of the movie and a fade-out effect at its end are taken into account, there is a sequence of 31 transitions.
The 3D engine only processes the data of the current transition. In other words, in a first step the transition between the first video clip and the second video clip is computed; in a second step the transition between the second video clip and the third video clip is computed, and so on. From the point of view of the cutting software, the temporal course is as follows:
- the first part of the first video clip is encoded, and the encoded information is stored in the video stream of the complete movie;
- the required image data (or video data or movie data) from the end of the first video clip (video clip 1) and from the beginning of the second video clip (video clip 2) is uploaded to the smart 3D engine (the end part of the first video clip and the beginning part of the second video clip here form part of the user-provided content);
- the image data (or video data or movie data or video frame sequence) of the rendered transition is read back from the smart 3D engine;
- the individually rendered images (or video frames) are encoded, and the encoded information is stored in the video stream of the complete movie;
- the middle part of the second video clip is encoded, and the processed information is stored in the video stream of the complete movie;
- the required video data from the end of the second video clip (video clip 2) and from the third video clip (video clip 3) is uploaded to the smart 3D engine;
- the image data of the rendered transition is read back from the smart 3D engine;
- the individually rendered images (or video frames) are encoded, and the rendered information is stored in the video stream of the complete movie.
The described process can be repeated until all required transitions have been computed. It should be noted that, since the individual video clips and transition sequences are stored in a single video file, a single video file can be generated by the above-described concatenation.
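The encode/upload/read-back loop above amounts to interleaving encoded clip bodies with rendered transitions into one stream. A compact sketch, using the 30-clip example from the text (the stream representation and the stand-in renderer are assumptions):

```python
# Sketch: concatenating clip bodies and rendered transitions into a
# single output stream, processing one transition at a time.

def concatenate(clips, render_transition):
    """Build the output stream: body, transition, body, transition, ..."""
    stream = []
    for i, clip in enumerate(clips):
        stream.append(("body", clip))
        if i + 1 < len(clips):
            # Upload end of clip i and start of clip i+1 to the engine,
            # read back the rendered transition, append its encoded frames.
            stream.append(("transition", render_transition(clip, clips[i + 1])))
    return stream

clips = [f"clip{i:02d}" for i in range(30)]
stream = concatenate(clips, lambda a, b: f"{a}->{b}")
transitions = [s for s in stream if s[0] == "transition"]
```

With 30 clips this yields the 29 transitions mentioned above (fade-in/fade-out at the movie boundaries, which would give 31, are omitted from the sketch).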
With respect to the dynamic adaptation of the menu scenes, it should be noted that the authoring software decides on the assignment of the chapter buttons (the assignment of image data and text data). In addition, the authoring software decides which objects (from the scene model) are needed in a particular scene and which objects (for example text contents) need to be adapted. The corresponding decisions are made at the point in time when the DVD is created, for example when the menu videos are rendered. In a preferred embodiment of the invention, it is no longer possible to modify the menu structure after the DVD has been created.
In addition, it should be noted that, within the scope of the present invention, the term "high-level content" designates the data provided by the user, for example video streams, chapter images, image headers or highlight colors. The term "low-level content", on the other hand, designates the generically modeled 3D scenes (for example, scene models comprising placeholder objects or placeholder surfaces, which are, however, not yet adapted to the user-provided content). Furthermore, the term "metadata" describes which 3D model files together form a menu. It should be noted that a complete menu comprises a scene for the general selection page, as well as a plurality of animated intermediate sequences that link the individual menu pages by movements of individual objects. In a preferred embodiment, different animation sequences are defined for an interaction with the chapter-1 button and for an interaction with the chapter-2 button. The metadata also comprises information about the individual menu sequences, information about the names of the menus, and references to additional audio tracks.
With respect to the highlight regions and selection regions, it should be noted that the highlight regions and selection regions are specified by respective groupings and namings of the relevant objects.
With respect to the generation of the meshes of the font characters, it should be noted that not all font characters contained in a font file are represented as 3D meshes. Rather, the mesh of a font character is computed when the font character is used for the first time. Subsequently, the computed mesh is reused for representing that font character. As an example, the described handling of the font characters allows the text "Hello World" to be represented as three-dimensional text using only 7 3D meshes (rather than 10 3D meshes), since the 3D mesh for the character "l" can be used 3 times and the mesh for the character "o" can be used twice (by way of instancing).
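The on-first-use mesh computation can be sketched as a simple cache. The mesh builder below is a stub standing in for the expensive glyph tessellation; the counts match the "Hello World" example above.

```python
# Sketch: glyph meshes are built lazily on first use and reused
# (instanced) for every further occurrence of the same character.

def build_mesh(char):
    """Stand-in for the expensive 3D mesh generation of one glyph."""
    return f"mesh<{char}>"

def text_to_meshes(text, cache):
    meshes = []
    for ch in text:
        if ch == " ":
            continue                     # spaces need no geometry
        if ch not in cache:              # compute a glyph mesh on first use
            cache[ch] = build_mesh(ch)
        meshes.append(cache[ch])         # reuse the cached mesh
    return meshes

cache = {}
meshes = text_to_meshes("Hello World", cache)
```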
It should be noted here that the generation of the font characters differs from the generation of the remaining video frames. Any objects or meshes other than the 3D meshes of the font characters are provided by the designer (for example, the person who created the scene model, also referred to as the "scene modeler"). For the font characters, the designer places respectively named box-shaped placeholder objects, which are replaced at run time by the text entered by the user (i.e. by a three-dimensional representation of the text). The height and thickness of the box (or, more generally, the dimensions of the box) define the size of the three-dimensional font characters. The texture properties and material properties (for depicting the text characters) are also taken from the box. In other words, the three-dimensional representation of the text characters entered by the user has the same texture and material properties as the box.
In the following, possible user interactions that can be used in rendering the transitions will be described. In general, the appearance of the three-dimensional scene can be influenced from the outside (i.e. by the user) by means of dialogs. In the above-described description files, individual attributes can be marked as editable. These attributes are represented in the dialogs according to their types. As soon as the user changes such an attribute, the changed attribute is taken into account in the scene. In this way, for example, object colors, background images and/or (object) flight trajectories can be changed within predetermined limits.
With respect to the rendering speed, it should also be noted that, in an embodiment of the invention, the rendering can be interactive. Traditional cutting programs typically use the central processor of the computer for rendering the effects. This is typically very slow, and the representation is not smooth. The inventive concept (for example the smart 3D engine) therefore uses the 3D graphics hardware that is nowadays available in almost any computer. Only in the case where no 3D graphics card is present is the slower CPU-based solution chosen. The use of a scene graph for representing the three-dimensional scene contributes to a high-performance representation.
It should also be noted that the smart 3D engine is accessed from the outside world in a manner similar to a traditional 2D engine. However, the additional intermediate sequences are taken into account in the handling of the menus. In addition, most of the logic is encapsulated inside the smart 3D engine.
It should also be noted that the present invention may be implemented in the form of a computer program. In other words, depending on certain implementation requirements of the inventive methods, the inventive methods can be implemented in hardware or in software. The implementation can be performed using a digital storage medium (for example a disk, DVD, CD, ROM, PROM, EPROM or flash memory storing electronically readable control signals) which cooperates with a programmable computer system such that the inventive methods are performed. Generally, the present invention is therefore a computer program product with a program code stored on a machine-readable carrier, the program code being operative for performing the inventive methods when the computer program product runs on a computer. In other words, the inventive methods are, therefore, a computer program having a program code for performing at least one of the inventive methods when the computer program runs on a computer.
In summary, the present invention has created a concept for the time-based generation of video transitions, menu-to-video transitions and menu-to-menu transitions. In addition, the present invention allows the time-based generation of interactive menus. The present invention thus allows video media to be created in a user-friendly manner.

Claims (31)

1. An apparatus (100; 2400) for providing a sequence (116; 440, 450, 460, 470, 480; 2460) of video frames (1, 2, ..., F-1, F) in dependence on a scene model (200, 300; 431, 446, 456, 466, 476; 810; 2440) defining a scene and in dependence on user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name ("cube 1") or an object attribute, the apparatus comprising:
a video frame generator (110; 2430) adapted to generate the sequence (440, 450, 460, 470, 480; 1, 2, ..., F-1, F) of a plurality of video frames in dependence on the scene model, wherein the video frame generator is adapted to identify, in the scene model, one or more scene model objects having a predetermined object name or a predetermined object attribute, so as to obtain an identified scene model object; and
wherein the video frame generator is adapted to generate the video frame sequence such that the user-provided content is displayed on a surface (230, 232, 234; 432, 436) of the identified scene model object, or is displayed as a replacement for the identified scene model object (812).
2. The apparatus (100; 2400) according to claim 1, wherein the scene model (112; 200,300; 431,446,456,466,476) defines the scene in terms of geometric properties of objects present in the scene.
3. The apparatus (100; 2400) according to claim 1 or 2, wherein the scene model (112; 200,300; 431,446,456,466,476; 810; 2440) defines the scene in terms of a motion of an object (210; 432; 812) with respect to an observer (212; 438,448,482).
4. The apparatus (100; 2400) according to one of claims 1 to 3, wherein the scene model (112; 200,300; 431,446,456,466,476; 810; 2440) defines the scene in terms of a material property or a surface texture property of at least one scene model object (210; 432).
5. The apparatus (100; 2400) according to one of claims 1 to 4, wherein the video frame generator (110; 2430) is adapted to identify a surface (230,232,234; 434,436) of a scene model object (210; 432) having a predetermined name, material property, texture property or surface property, to obtain an identified surface; and
wherein the video frame generator is adapted to generate the frames (440,450,460,470,480) of the generated sequence of video frames (116; 2460) such that frames of a user-provided video sequence (114; 2450) or of a user-provided image are displayed on the identified surface.
6. The apparatus (100; 2400) according to one of claims 1 to 5, wherein the video frame generator (110; 2430) is adapted to identify a first surface (230; 434) of a scene model object (210; 432) and a second surface (232; 436) of the scene model object, wherein the first surface has a first predetermined name, a predetermined material property or a predetermined texture property, and the second surface has a second predetermined name, a predetermined material property or a predetermined texture property,
the first predetermined name being different from the second predetermined name, the first predetermined material property being different from the second predetermined material property, or the first predetermined texture property being different from the second predetermined texture property;
wherein the video frame generator is adapted to generate the frames (440,450,460,470,480) of the video sequence (116; 2460) such that a frame of a first user-provided video sequence (114; 2450) or of a first user-provided image is displayed on the first identified surface, and such that a frame of a second user-provided video sequence (114; 2450) or of a second user-provided image is displayed on the second identified surface.
7. The apparatus (100; 2400) according to one of claims 1 to 6, wherein the video frame sequence generator (110; 2430) is adapted to identify a first surface (230; 434) of a scene model object (210; 432) and a second surface (232; 436) of the scene model object,
the first surface having a first predetermined name, a first predetermined material property or a first predetermined texture property, and
the second surface having a second predetermined name, a second predetermined material property or a second predetermined texture property,
the first name being different from the second name, the first material property being different from the second material property, or the first texture property being different from the second texture property;
wherein the video frame generator is adapted to generate the video sequence (116, 440,450,460,470,480; 2460) such that a sequence of frames of a first user-provided sequence of video frames (114; 2450) is displayed on the identified first surface, and such that a sequence of frames of a second user-provided video sequence (114; 2450) is displayed on the identified second surface.
8. The apparatus (100; 2400) according to claim 7, wherein the apparatus is adapted to receive a user input defining the first user-provided video sequence (114; 2450) and the second user-provided video sequence (114; 2450).
9. The apparatus (100; 2400) according to claim 7 or 8, wherein the video frame generator (110; 2430) is adapted to generate the sequence of video frames (116; 440,450,460,470,480; 2460) such that a first frame (440) of the generated sequence of video frames is a full-screen version of a frame of the first user-provided video sequence, and such that a last frame (480) of the generated sequence of video frames is a full-screen version of a frame of the second user-provided video sequence.
10. The apparatus (100; 2400) according to one of claims 7 to 9, wherein the video frame generator (110; 2430) is adapted to provide a gradual or smooth transition between the first frame (440) of the generated video sequence (116; 440,450,460,470,480; 2460) and the last frame (480) of the generated sequence of video frames.
11. The apparatus (100; 2400) according to one of claims 1 to 10, wherein the video frame generator (110; 2430) is adapted to obtain, as the user-provided content (114; 2450), a user-defined text object displaying a user-defined text;
wherein the video frame generator (110; 2430) is adapted to identify, within the scene model (112; 200,300; 431,446,456,466,476,810; 2440), a scene model object (812) having a predetermined object name or predetermined object attributes, the predetermined object name and the predetermined object attributes designating that the identified scene model object is a text placeholder object; and
wherein the video frame generator is adapted to generate the sequence (116; 440,450,460,470,480; 2460) such that the user-defined text object is displayed replacing the identified text placeholder object (812).
12. The apparatus (100; 2400) according to claim 11, wherein the video frame generator (110; 2430) is adapted to generate the sequence of video frames (116; 440,450,460,470,480; 2460) such that a size of a representation of the user-defined text object in the sequence of video frames is adapted to a size of the text placeholder object (812) throughout the sequence of video frames.
13. The apparatus (100; 2400) according to one of claims 1 to 12, wherein the apparatus is adapted to select a subset of selected scene model objects from a plurality of scene model objects forming the scene model, in dependence on a number of menu items (912,914,916,918,920,922,932,934,936,938,940,942; 1012,1014,1016,1018,1020,1024,1222,1224,1226,1228,1230,1232) to be displayed in the generated sequence of video frames (116; 440,450,460,470,480; 2460), such that the selected scene model objects describe a sequence of video frames (116; 440,450,460,470,480; 2460) in which the number of displayed menu items is adapted to the number of menu items to be displayed in the video sequence, and
wherein the video frame generator is adapted to generate the sequence of video frames on the basis of the selected scene model objects.
14. The apparatus (100; 2400) according to one of claims 1 to 13, wherein the apparatus comprises a highlight-region scene model object identifier adapted to determine, from the scene model (112; 200,300; 431,446,456,466,476; 2440), a set of at least one highlight-region scene model object,
the highlight scene model objects having a predetermined object name or object attributes; and
wherein the apparatus comprises a highlight-region description provider adapted to provide a description of a highlight region, the description of the highlight region defining a region of a video frame (440,450,460,470,480) in which at least one object of the set of highlight-region scene model objects is displayed.
15. The apparatus (100; 2400) according to claim 14, wherein the highlight-region description provider is adapted to describe the highlight region as a region of the video frame (440,450,460,470,480) defined by all pixels displaying a highlight-region scene model object.
16. An apparatus for providing a scene model defining a three-dimensional video scene, the apparatus comprising:
an interface for inputting a scene description (112; 200,300; 431,446,456,466,476; 2440); and
a placeholder inserter for inserting a placeholder name or a placeholder attribute into the scene model, such that the placeholder name or the placeholder attribute designates an object (210; 432; 812) or a surface (230,232,234; 434,436) to be associated with a user-provided content (114; 2450).
17. The apparatus according to claim 16, wherein the placeholder inserter is adapted to be aware of a syntax associated with the placeholder name or the placeholder attribute.
18. A scene model (112; 200,300; 431,446,456,466,476; 800,2400) defining a scene, the scene model having at least one placeholder object (210; 432; 812), the placeholder object comprising a placeholder name or a placeholder attribute designating an association with a user-provided content (114; 2450).
19. The scene model (112; 200,300; 431,446,456,466,476; 800,2400) according to claim 18, wherein the scene model comprises a first placeholder object and a second placeholder object,
the first placeholder object having a first placeholder surface, a name or surface attribute of the first placeholder surface designating that the first placeholder surface is associated with a first user-provided image or with user-provided video frames,
the second placeholder object having a second placeholder surface, a name or surface attribute of the second placeholder surface designating that the second placeholder surface is associated with a second user-provided image or with user-provided video frames;
wherein the scene model describes a position, with respect to the objects, of an observation point (212; 438,448,482) at which an observer is located; and
wherein the scene model is adjusted such that the scene model initially describes an arrangement of the first placeholder object and the observer such that a full-screen image of the first placeholder surface is seen from the observation point (212; 438,448,482), and
such that the scene model finally describes an arrangement of the second placeholder object and the observer such that a full-screen image of the second placeholder surface is seen from the observation point (212; 438,448,482).
20. The scene model (112; 200,300; 431,446,456,466,476; 800,2400) according to claim 18, wherein the scene model (112; 200,300; 431,446,456,466,476; 2400) is adapted to initially describe a menu page (910,930,1010; 1110; 1212,1220; 1400,1510,1520,1610,1620,1700) of a navigation menu of a digital video medium, as seen by an observer located at an observation point;
wherein the scene model (112; 200,300; 431,446,456,466,476; 2400) comprises a placeholder object (210; 432; 812) having a placeholder surface (230,232,234; 434,436), a name or surface attribute of the placeholder surface designating that the placeholder surface is associated with a user-provided image or with user-provided video frames (114; 2430); and
wherein the scene model is adjusted such that the scene model finally describes an arrangement of the placeholder object and the observer such that a full-screen image of the placeholder surface is seen from the observation point (212; 438,448,482).
21. The scene model (112; 200,300; 431,446,456,466,476; 800,2400) according to claim 18, wherein the scene model (112; 200,300; 431,446,456,466,476; 800; 2400) comprises a placeholder object (210; 432; 812) having a placeholder surface (230,232,234; 434,436), a name or surface attribute of the placeholder surface designating that the placeholder surface (230,232,234; 434,436) is associated with a user-provided image or with user-provided video frames (114; 2450);
wherein the scene model (112; 200,300; 431,446,456,466,476; 800; 2400) is adjusted such that the scene model initially describes an arrangement of the placeholder object (210; 432; 812) and an observation point (212; 438,448,482) such that a full-screen image of the placeholder surface (230,232,234; 434,436) is seen from the observation point; and
wherein the scene model (112; 200,300; 431,446,456,466,476; 800,2400) is adjusted such that the scene model finally describes a menu page (910,930,1010,1110,1212,1220,1400,1510,1520,1610,1620,1700) of a navigation menu of a digital video medium, as seen by an observer located at the observation point (212; 438,448,482).
22. The scene model (112; 200,300; 431,446,456,466,476; 800,2400) according to claim 18, wherein the scene model (112; 200,300; 431,446,456,466,476; 800; 2400) is adjusted such that the scene model initially describes a first menu page (910; 1212; 1510; 1610) of a navigation menu of a digital video medium, as seen by an observer located at a first observation point; and
wherein the scene model is adjusted such that the scene model finally describes a second menu page (930; 1220; 1520; 1620) of the navigation menu of the digital video medium, as seen by an observer located at a second observation point.
23. A method for providing a sequence (116; 440,450,460,470,480; 2460) of video frames (1, 2, ..., F-1, F) in dependence on a scene model (200,300; 431,446,456,466,476; 810; 2440) defining a scene and in dependence on a user-provided content (114; 2450), the scene model comprising at least one scene model object (210, cube 1; 432; 812) having an object name (cube 1) or object attributes, the method comprising:
generating the sequence (440,450,460,470,480; 1, 2, ..., F-1, F) of a plurality of video frames on the basis of the scene model;
wherein generating the sequence of a plurality of video frames comprises:
identifying (1830), within the scene model, a scene model object having a predetermined object name or predetermined object attributes, to obtain an identified scene model object; and
generating (1840) the sequence of video frames such that the user-provided content is displayed on a surface (230,232,234; 432,436) of the identified scene model object, or is displayed as a replacement of the identified scene model object (812).
24. A method for providing a scene model (112; 200,300; 431,446,456,466,476; 2440) defining a three-dimensional scene, the method comprising:
inputting a description (112; 200,300; 431,446,456,466,476, 2440) of the scene; and
inserting a placeholder name or a placeholder attribute into the scene model, such that the placeholder name or the placeholder attribute designates an object (210; 432; 812) or a surface (230,232,234; 434,436) to be associated with a user-provided content (114; 2450).
25. An apparatus (2400) for creating a menu structure of a video medium in dependence on a scene model (112; 200,300; 431,446,456,466,476; 800; 2440) defining a scene, in dependence on information defining at least one menu-structure-related property, and in dependence on a user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object attributes, the apparatus comprising:
an apparatus (100; 2430) for providing a sequence of video frames (116; 440,450,460,470,480; 2460) according to one of claims 1 to 15,
wherein the apparatus (2430) for providing a sequence of video frames is adapted to generate the sequence of video frames in dependence on the scene model, in dependence on the additional information defining the at least one menu-structure-related property, and in dependence on the user-provided content.
26. The apparatus (2400) according to claim 25, wherein the menu-structure-related information comprises information related to a grouping of elements;
wherein an i-th group of elements of the scene model (112; 200,300; 431,446,456,466,476; 800; 2440) describes an i-th menu button (912,914,916,918,920,922,932,934,936,938,940,942,1012,1014,1016,1018,1020,1024,1222,1224,1226,1228,1230,1232,1410,1412,1414,1416) for accessing a user-provided sequence of video frames (114; 2450);
wherein the apparatus (110; 2430) for providing a sequence of video frames (116; 440,450,460,470,480; 2460) is adapted to receive information related to a number of user-provided video sequences to be included in the video medium;
wherein the apparatus (110; 2430) for providing a sequence of video frames is adapted to determine, using the information related to the number of user-provided sequences of video frames, a number of menu buttons needed for accessing the user-provided video sequences;
wherein the apparatus (110; 2430) for providing a sequence of video frames is adapted to identify groups of elements in the scene model, each identified group of elements describing a menu button;
wherein the apparatus (110; 2430) for providing a sequence of video frames is adapted to select a plurality of groups of elements from the scene model, each selected group of elements describing a menu button, such that a number of menu buttons described by the selected groups of elements is adapted to the number of menu buttons needed for accessing the user-provided video sequences; and
wherein the apparatus (110; 2430) for providing a video sequence is adapted to generate the sequence of video frames such that the sequence of video frames shows the elements of the selected groups of elements, and such that additional objects of the scene model, which describe menu buttons not needed for accessing the user-provided sequences, are cancelled or reduced.
27. The apparatus (2400) according to claim 25 or 26, wherein the menu-structure-related information comprises information related to which elements of the scene model (112; 200,300; 431,446,456,466,476; 800; 2440) belong to a highlight group;
wherein the apparatus (110; 2430) for providing a sequence of video frames (116; 440,450,460,470,480; 2460) is adapted to create a description of a region of a video frame (440,450,460,470,480) in which objects of the highlight group are displayed.
28. The apparatus according to claim 27, wherein the description of the region of the video frame (440,450,460,470,480) in which objects of the highlight group are displayed comprises a monochrome image in which pixels displaying an object of the highlight group are described by a first color and pixels not displaying an object of the highlight group are described by a second color.
29. The apparatus (2400) according to one of claims 25 to 28, wherein the menu-structure-related information comprises information related to which type of video transition the scene model (112; 200,300; 431,446,456,466,476; 800; 2440) describes;
wherein the apparatus for creating the menu structure comprises an apparatus for inserting the sequence of video frames (116; 440,450,460,470,480; 2460) generated by the video frame generator (110; 2430) into the menu structure of the video medium;
wherein the apparatus for creating the menu structure is adapted to determine a position of the sequence of video frames within the menu structure in dependence on the information related to which type of video transition the scene model (112; 200,300; 431,446,456,466,476; 800; 2440) describes; and
wherein the apparatus for creating the menu structure is adapted to recognize and handle at least one of the following types of video transitions:
a menu-to-menu transition,
a blank-screen-to-menu transition,
a menu-to-video-frame-sequence transition,
a video-frame-sequence-to-menu transition,
a video-frame-sequence-to-video-frame-sequence transition.
30. A method for creating a menu structure of a video medium in dependence on a scene model (112; 200,300; 431,446,456,466,476; 800; 2440) defining a scene, in dependence on menu-structure-related information defining at least one menu-structure-related property, and in dependence on a user-provided content (114; 2450), the scene model comprising at least one scene model object (210; 432; 812) having an object name or object attributes, the method comprising:
providing a sequence of video frames (116; 440,450,460,470,480; 2460) according to claim 23,
wherein providing the sequence of video frames comprises: providing the sequence of video frames in dependence on the scene model, in dependence on the additional information defining the at least one menu-structure-related property, and in dependence on the user-provided content.
31. A computer program for performing, when running on a computer, a method according to claim 23, 24 or 30.
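The placeholder mechanism at the core of claims 1 and 23, identifying scene model objects by a predetermined name (steps 1830 and 1840), can be sketched as follows. The data layout, the function names, and the string content are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of claims 1/23: objects carrying a predetermined name are
# identified in the scene model, and user-provided content is attached to their
# surfaces before the sequence of video frames is generated.

scene_model = [
    {"name": "cube 1", "surface_texture": None},       # placeholder object
    {"name": "background", "surface_texture": "sky"},  # ordinary scene object
]

def identify_placeholders(model, predetermined_name="cube 1"):
    # step 1830: identify objects by a predetermined object name
    return [obj for obj in model if obj["name"] == predetermined_name]

def generate_frames(model, user_content, num_frames=3):
    # step 1840: display the user-provided content on the identified surfaces
    for obj in identify_placeholders(model):
        obj["surface_texture"] = user_content
    return [{"frame": i, "objects": [o.copy() for o in model]}
            for i in range(num_frames)]

frames = generate_frames(scene_model, user_content="my_holiday.mpg")
```

Because the scene model only names a placeholder rather than embedding the content, the same model can be reused with any user-provided video or image.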
CN200780008655.1A 2006-03-10 2007-01-03 Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program Expired - Fee Related CN101401130B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US78100606P 2006-03-10 2006-03-10
EP06005001.0 2006-03-10
EP06005001 2006-03-10
US60/781,006 2006-03-10
PCT/EP2007/000024 WO2007104372A1 (en) 2006-03-10 2007-01-03 Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program

Publications (2)

Publication Number Publication Date
CN101401130A true CN101401130A (en) 2009-04-01
CN101401130B CN101401130B (en) 2012-06-27

Family

ID=40518515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200780008655.1A Expired - Fee Related CN101401130B (en) 2006-03-10 2007-01-03 Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program

Country Status (3)

Country Link
JP (1) JP4845975B2 (en)
CN (1) CN101401130B (en)
RU (1) RU2433480C2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325135A (en) * 2013-07-17 2013-09-25 天脉聚源(北京)传媒科技有限公司 Resource display method, device and terminal
CN107180136A (en) * 2017-06-02 2017-09-19 王征 A kind of system and method for the 3D rooms texture loading based on interior wall object record device
CN112947817A (en) * 2021-02-04 2021-06-11 汉纳森(厦门)数据股份有限公司 Intelligent equipment page switching method and device
CN118762645A (en) * 2024-09-07 2024-10-11 深圳市伽彩光电有限公司 Energy-saving display method and system for LED display screen

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2523980C2 (en) * 2012-10-17 2014-07-27 Корпорация "САМУНГ ЭЛЕКТРОНИКС Ко., Лтд." Method and system for displaying set of multimedia objects on 3d display
JPWO2022124419A1 (en) * 2020-12-11 2022-06-16

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000196971A (en) * 1998-12-25 2000-07-14 Matsushita Electric Ind Co Ltd Video display device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325135A (en) * 2013-07-17 2013-09-25 天脉聚源(北京)传媒科技有限公司 Resource display method, device and terminal
CN107180136A (en) * 2017-06-02 2017-09-19 王征 A kind of system and method for the 3D rooms texture loading based on interior wall object record device
CN112947817A (en) * 2021-02-04 2021-06-11 汉纳森(厦门)数据股份有限公司 Intelligent equipment page switching method and device
CN112947817B (en) * 2021-02-04 2023-06-09 汉纳森(厦门)数据股份有限公司 Page switching method and device for intelligent equipment
CN118762645A (en) * 2024-09-07 2024-10-11 深圳市伽彩光电有限公司 Energy-saving display method and system for LED display screen
CN118762645B (en) * 2024-09-07 2024-11-05 深圳市伽彩光电有限公司 Energy-saving display method and system for LED display screen

Also Published As

Publication number Publication date
RU2433480C2 (en) 2011-11-10
CN101401130B (en) 2012-06-27
RU2008140163A (en) 2010-04-20
JP4845975B2 (en) 2011-12-28
JP2009529736A (en) 2009-08-20

Similar Documents

Publication Publication Date Title
US8462152B2 (en) Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program
US8174523B2 (en) Display controlling apparatus and display controlling method
CN102752640B (en) Metadata is used to process the method and apparatus of multiple video flowing
CN100471255C (en) Method for making and playing interactive video frequency with heat spot zone
US20120198412A1 (en) Software cinema
EP2044764A2 (en) Automatic generation of video from structured content
CN101401130B (en) Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program
US20100156893A1 (en) Information visualization device and information visualization method
CN101193250A (en) System, method and medium generating frame information for moving images
WO2018098340A1 (en) Intelligent graphical feature generation for user content
EP2428957B1 (en) Time stamp creation and evaluation in media effect template
KR20200093295A (en) The method for providing exhibition contents based on virtual reality
Ming Post-production of digital film and television with development of virtual reality image technology-advance research analysis
US20240029381A1 (en) Editing mixed-reality recordings
US20130156399A1 (en) Embedding content in rich media
Grahn The media9 Package, v1. 14
van Lammeren Geodata visualization: a rich picture of the future
CN116774902A (en) Virtual camera configuration method, device, equipment and storage medium
Lee et al. Efficient 3D content authoring framework based on mobile AR
CN116450588A (en) Method, device, computer equipment and storage medium for generating multimedia file
Chang et al. Automatically designed 3D environments for intuitive exploration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP02 Change in the address of a patent holder

Address after: Karlsruhe

Patentee after: NERO AG

Address before: Byrd, Germany

Patentee before: Nero AG

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120627

Termination date: 20220103