AU744893B2 - Applying a Set of Rules to a Description of a Resource
S&F Ref: 489841
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Canon Kabushiki Kaisha, 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, Japan
Actual Inventor(s): Alison Joan Lennon
Address for Service: Spruson Ferguson, St Martins Tower, 31 Market Street, Sydney NSW 2000
Invention Title: Applying a Set of Rules to a Description of a Resource
ASSOCIATED PROVISIONAL APPLICATION DETAILS
[33] Country: AU; [31] Applic. No: PP8370; [32] Application Date: 29 Jan 1999
[33] Country: AU; [31] Applic. No: PP8371; [32] Application Date: 29 Jan 1999
[33] Country: AU; [31] Applic. No: PP8372; [32] Application Date: 29 Jan 1999

The following statement is a full description of this invention, including the best method of performing it known to me/us:

APPLYING A SET OF RULES TO A DESCRIPTION OF A RESOURCE

Copyright Notice
This patent document contains material subject to copyright protection. The copyright owner has no objection to the reproduction of this patent document or any related materials in the files of the Patent Office, but otherwise reserves all copyright whatsoever.
Field of Invention
The present invention relates to methods of applying a set of rules to a description of a resource. The invention also relates to an apparatus and a computer program product for implementing the methods.
Background
As network connectivity has continued its explosive growth and digital storage has become smaller, faster, and less expensive, the quantity of electronically-accessible resources has increased enormously, so much so that the discovery and location of the available resources has become a critical problem. These electronically-accessible resources can be digital content (e.g., digital images, video and audio) which may be available over the network, web-based resources (e.g., HTML/XML documents) and electronic devices (e.g., printers, displays, etc.). In addition, there are electronically-accessible catalogues of other resources, which may not be electronically accessible (e.g., books, analog film media, etc.). What is needed is a consistent method of describing resources so that location of resources, electronically-accessible or otherwise, can be more readily achieved.
The problems of consistent resource description are twofold. First, there is the problem of acceptance of a standard (consistent) method of resource description. The second problem is related to the generation of descriptions. Often the cost of this process is significant. Generators of content are not generally inclined to manually annotate or describe their resources and, even if they do, their annotations might not be in the form in which people wish to retrieve the resources. For example, a digital camera user might annotate his or her personal photographs with details of who is in the shot (e.g., Aunt Bessy and Uncle Bertrand), but a graphic designer might want to use these photographs for design purposes and may wish to retrieve resources from the collection on the basis of the type of scene depicted in the photograph (e.g., an indoor or outdoor scene). Clearly the work involved in annotating each resource for all the potential uses of the resource is enormous and, in addition, would result in a considerable amount of redundant information in the description that then must be stored.
If a consistent method of describing resources can be achieved then consistent methods of visualising resource descriptions can be contemplated. Visualisation of descriptions can assist in the resource discovery process. For example, retrieval of scenes from digital video resources is aided by a visualisation which may include the display of key frames from individual scenes. Other visual cues, like icons, can be used to simplify the formulation of queries in search engines where each icon could represent a component of a description.
In many cases their description or annotation might not be in the language that another user would use to try to discover the resource. Although storage of the description of the resource in multiple languages would alleviate this problem, it is expensive in storage and inefficient in that redundant information is stored.
Summary of the Invention
It is an object of the present invention to ameliorate one or more disadvantages of the prior art.
According to one aspect of the invention, there is provided a method of applying a set of rules to a description of an electronically-accessible resource, said method comprising the steps of: reading a said description of the resource, wherein said read description comprises one or more descriptor components; reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with a set of one or more specified actions; locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and performing said specified actions on said read description in response to locating a said predetermined pattern.
According to one aspect of the invention, there is provided an apparatus for applying a set of rules to a description of an electronically-accessible resource, said apparatus comprising: means for reading a said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with a set of one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and means for performing said specified actions on said read description in response to locating a said predetermined pattern.
According to another aspect of the invention, there is provided a computer readable medium comprising a computer program for applying a set of rules to a description of an electronically-accessible resource, said computer program comprising: code for reading a said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with a set of one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and code for performing said specified actions on said read description in response to locating a said predetermined pattern.
According to still another aspect of the invention, there is provided a method of extending a description of an electronically-accessible resource, said method comprising the steps of: reading a said description of the resource, wherein said read description comprises one or more descriptor components; reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the creation of a new descriptor or the removal of an existing descriptor from the description.
According to still another aspect of the invention, there is provided an apparatus for extending a description of an electronically-accessible resource, said apparatus comprising: means for reading a said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and means for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and
According to still another aspect of the invention, there is provided a computer readable medium comprising a computer program for extending a description of an electronically-accessible resource, said computer program comprising: code for reading a said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and code for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the creation of a new descriptor or the removal of an existing descriptor from the description.
According to still another aspect of the invention, there is provided a method of visually presenting a description of an electronically-accessible resource, said method comprising the steps of: reading a said description of the resource, wherein said read description comprises one or more descriptor components; reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; locating patterns of descriptor components of said read description which correspond with said predetermined pattern; performing said specified actions on said read description in response to locating a said predetermined pattern, wherein said action comprises the addition or removal of a presentation property for one or more said descriptor components in said description to be visually presented; and visually presenting the read description using the presentation properties.
According to still another aspect of the invention, there is provided an apparatus for visually presenting a description of an electronically-accessible resource, said apparatus comprising: means for reading a said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; means for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein said action comprises the addition or removal of a presentation property for one or more said descriptor components in said description to be visually presented; and means for visually presenting the read description using the presentation properties.
According to still another aspect of the invention, there is provided a computer readable medium comprising a computer program for visually presenting a description of an electronically-accessible resource, said computer program comprising: code for reading a said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; code for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein said action comprises the addition or removal of a presentation property for one or more said descriptor components in said description to be visually presented; and code for visually presenting the read description using the presentation properties.
According to still another aspect of the invention, there is provided a method of translating a description of an electronically-accessible resource, wherein said description is in a first language, said method comprising the steps of: requesting said description for further processing, wherein said request is in a second language; reading said description of the resource, wherein said read description comprises one or more descriptor components; reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the replacement of one or more existing descriptors in the description with one or more equivalent descriptors thereby achieving a full or partial translation of the read description from the first language to the second language.
According to still another aspect of the invention, there is provided an apparatus for translating a description of an electronically-accessible resource, wherein said description is in a first language, said apparatus comprising: means for requesting said description for further processing, wherein said request is in a second language; means for reading said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and means for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the replacement of one or more existing descriptors in the description with one or more equivalent descriptors thereby achieving a full or partial translation of the read description from the first language to the second language.
According to still another aspect of the invention, there is provided a computer readable medium comprising a computer program for translating a description of an electronically-accessible resource, wherein said description is in a first language, said computer program comprising: code for requesting said description for further processing, wherein said request is in a second language; code for reading said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and code for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the replacement of one or more existing descriptors in the description with one or more equivalent descriptors thereby achieving a full or partial translation of the read description from the first language to the second language.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention are described with reference to the accompanying drawings, in which:
Fig. 1A shows a flow diagram of a method of generating a description of a resource in accordance with an embodiment;
Fig. 1B shows a flow diagram of a method of processing a description of a resource in accordance with another embodiment;
Fig. 1C shows a flow diagram of a method of encoding a description of a resource in accordance with another embodiment;
Fig. 1D shows a flow diagram of a method of decoding an encoded description of a resource in accordance with another embodiment;
Fig. 2A shows a flow diagram of a prior art method of generating a document object model;
Fig. 2B shows a flow diagram of a method of generating a Description Object Model in accordance with another embodiment;
Fig. 3 shows a UML class diagram showing core elements of the Dynamic Description Framework (DDF) data model;
Fig. 4 shows a schematic drawing depicting the processing model of an exemplary description according to a DDF;
Fig. 5 shows a schematic drawing depicting the processing model of another exemplary description according to a DDF;
Fig. 6 shows a schematic drawing depicting the relationship between a description scheme (Document Type Definition) and descriptions (XML documents);
Fig. 7A is a flow diagram of a method of generating a description of a resource in accordance with another embodiment;
Fig. 7B is a flow diagram of a method of processing a description of a resource in accordance with another embodiment;
Fig. 8 shows an example of the use of a descriptor handler in generating a video segment description using camera metadata that is saved to the video;
Fig. 9 shows an example of the use of descriptor handlers to support a query-by-example over remote image databases;
Fig. 10 shows an example of the use of a descriptor handler for encoding/decoding;
Fig. 11 shows an example of descriptor handlers implemented as Java classes;
Fig. 12 is a flow diagram of a method of extending a description of a resource in accordance with another embodiment;
Fig. 13 is a flow diagram of a method of using rules to add or remove attributes of a Description Scheme that are used to control the presentation of a description of a resource in accordance with another embodiment;
Fig. 14 is a flow diagram of a method of selecting one or more descriptions or part of one or more descriptions of a resource in accordance with another embodiment;
Fig. 15 is a flow diagram of a method of translating a description of a resource in accordance with another embodiment;
Fig. 16 shows a schematic diagram of a Digital Video Browser System in accordance with another embodiment;
Fig. 17 shows an implementation of the Digital Video Browser System in a remote handheld device in accordance with another embodiment;
Fig. 18 shows an alternative implementation of the Digital Video Browser System in a remote handheld device in accordance with another embodiment;
Fig. 19 is a block diagram of a general-purpose computer for implementing any one or more of the said methods; and
Fig. 20 shows a schematic diagram of a Media Browser System in accordance with another embodiment.
Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
BRIEF DESCRIPTION OF THE APPENDICES
Embodiments of the invention are also described with reference to the appendices, in which:
Appendix A shows core DDF element definitions;
Appendix B shows an example description scheme for an Australian Football League Game;
Appendix C shows an example description generated from the description scheme in Appendix B;
Appendix D shows a digital video resource description scheme;
Appendix E shows an example description generated from the video description scheme in Appendix D;
Appendix F shows presentation rules for the video description scheme in Appendix D;
Appendix G shows a digital video library description scheme;
Appendix H shows an example description generated from the digital video library description scheme in Appendix G;
Appendix I shows a video presentation description scheme;
Appendix J shows an example description generated from the video presentation description scheme in Appendix I; and
Appendix K shows DOM element nodes.
DETAILED DESCRIPTION
TABLE OF CONTENTS
1. INTRODUCTION
1.1 Terminology
1.1.1 Content
1.1.2 Resource
1.1.3 Feature
1.1.4 Descriptor
1.1.5 Description
1.1.6 Description Scheme
1.2 Descriptor Relationships
1.3 Overview of Embodiments of Methods
2. DYNAMIC DESCRIPTION FRAMEWORK
2.1 Overview
2.2 Object Model
2.2.1 Overview
2.2.2 Descriptor Class
2.2.3 Atomic Descriptor Value Class
2.2.4 Descriptor Handler Class
2.2.5 Description Class
2.3 API for Processing of Descriptions
2.4 Serialisation Syntax
2.4.1 Expression of Descriptor Relationships
2.4.1.1 Generalisation/Specialisation Relationships
2.4.1.1.1 Content Model Inheritance
2.4.1.1.2 Attribute Inheritance
2.4.1.1.3 Implementation Details
2.4.1.2 Equivalence Relationships
2.4.1.3 Association Relationships
2.4.1.4 Spatial, Temporal and Conceptual Relationships
2.4.1.5 Navigational Relationships
2.4.2 Expression of Specific Data Types
Implementation Issues
3. SERIALISATION SYNTAX SPECIFICATION
3.1.1 Element Definitions
3.1.2 Core DDF Element Definitions
3.1.2.1 Descriptor Definition
3.1.3 Descriptors Representing Spatial, Temporal and Conceptual Relationships
3.1.4 Elements Representing Navigational Relationships
4. DESOM API SPECIFICATION
4.1.1 Interface Descriptor
4.1.2 Interface DescriptorHandler
4.1.3 Interface AtomicDescriptorValue
5. EXAMPLE OF A DESCRIPTION SCHEME
6. METHODS OF APPLYING PROCEDURES
6.1 Method of Generating Descriptions of Electronically-Accessible Resources
6.2 Methods of Applying Procedures to a Description
6.3 Examples of Methods of Generating Descriptions and Applying Procedures to Descriptions
7. RULE-BASED PROCESSING USING THE DESOM
8. METHOD OF EXTENDING DESCRIPTIONS OF RESOURCES
9. METHOD OF PRESENTING DESCRIPTIONS OF RESOURCES
10. METHOD OF SELECTING RESOURCE DESCRIPTIONS
11. METHOD OF TRANSLATING DESCRIPTIONS OF RESOURCES
12. FIRST EMBODIMENT OF APPARATUS
13. SECOND EMBODIMENT OF APPARATUS - DIGITAL VIDEO BROWSER SYSTEM
14. THIRD AND FOURTH EMBODIMENT OF APPARATUS - REMOTE DIGITAL VIDEO BROWSER DEVICES
15. FIFTH EMBODIMENT OF APPARATUS - MEDIA BROWSER SYSTEM

1. INTRODUCTION
For a better understanding of the embodiments, an introduction (Section 1) including a brief review of terminology (Section 1.1) is first undertaken; then there is provided a discussion of relationships between components of descriptions (Section 1.2), the DDF (Section 2), the serialisation syntax specification (Section 3) and the DesOM API specification (Section 4) used in the embodiments. A more detailed description of the embodiments is then given in Sections 6 to 15.
Some portions of the detailed descriptions which follow are explicitly or implicitly presented in terms of algorithms and symbolic representations of operations on data within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilising terms such as "processing", "computing", "generating", "creating", "operating", "communicating", "rendering", "providing", and "linking" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to apparatus for performing the operations herein.
This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose machines may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialised apparatus to perform the required method steps.
The structure of a conventional general purpose computer will appear from the description below.
In addition, the present invention also relates to a computer program product comprising a computer readable medium including a computer program for implementing the preferred methods. The computer readable medium is taken herein to include any transmission medium for transmitting the computer program between a source and a destination. The transmission medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a general purpose computer. The transmission medium may also include a hard-wired medium such as exemplified in the Internet system, or a wireless medium such as exemplified in the GSM mobile telephone system. The computer program is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and implementations thereof may be used to implement the teachings of the invention as described herein.
1.1 Terminology
1.1.1 Content
Content is defined to be information, regardless of the storage, coding, display, transmission, medium, or technology. Examples of content include digital and analog video (such as an MPEG-4 stream or a video tape), film, music, a book printed on paper, and a web page.
1.1.2 Resource A resource is a particular unit of the content being described. Examples of a resource include an MPEG-I video stream, a JPEG-2000 image, and a WAVE audio file.
1.1.3 Feature
A feature is a distinctive part or characteristic of the resource which stands for something to somebody in some respect or capacity. A feature can be derived directly (e.g., extracted) from the content (e.g., the dominant colour of an image) or can be a relevant characteristic of the content. Examples of features include the name of the person who recorded the image, the colour of an image, the style of a video, the title of a movie, the author of a book, the composer of a piece of music, the pitch of an audio segment, and the actors in a movie.
1.1.4 Descriptor
A descriptor associates a representation value to a feature, where the representation value can have an atomic or compound type. An atomic type is defined as one of a basic set of predetermined data types (e.g., integer, string, date, etc.). A compound type is defined to be a collection of one or more descriptors. The descriptor comprises a feature-representation value pair, where the representation value is associated with the feature. Example descriptors having atomic types include:
Feature = Author; Representation Value (string) = "John Smith";
Feature = DateCreated; Representation Value (date) = "1998-08-08".
An example descriptor having a compound type is:
Feature = Colour; Representation Value = ColourHistogramDescriptor.
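The feature/representation-value pairing described above can be pictured with a small data-model sketch. The following Java fragment is purely illustrative (the class and field names are not taken from this specification's appendices); it shows an atomic descriptor such as Author = "John Smith" and a compound descriptor whose value is a collection of further descriptors.

```java
import java.util.List;

// Illustrative sketch only: a descriptor pairs a feature name with a
// representation value that is either atomic (string, date, integer, ...)
// or compound (a collection of further descriptors).
class SimpleDescriptor {
    final String feature;                     // e.g. "Author", "DateCreated", "Colour"
    final String atomicValue;                 // non-null for atomic descriptors
    final List<SimpleDescriptor> components;  // non-null for compound descriptors

    static SimpleDescriptor atomic(String feature, String value) {
        return new SimpleDescriptor(feature, value, null);
    }

    static SimpleDescriptor compound(String feature, List<SimpleDescriptor> parts) {
        return new SimpleDescriptor(feature, null, parts);
    }

    private SimpleDescriptor(String f, String v, List<SimpleDescriptor> c) {
        feature = f; atomicValue = v; components = c;
    }

    public static void main(String[] args) {
        // Atomic examples from the text: Author and DateCreated.
        SimpleDescriptor author = atomic("Author", "John Smith");
        SimpleDescriptor created = atomic("DateCreated", "1998-08-08");
        // Compound example: a Colour descriptor built from further descriptors
        // (here, hypothetical histogram bins).
        SimpleDescriptor colour = compound("Colour", List.of(
                atomic("Bin0", "0.12"), atomic("Bin1", "0.38")));
        System.out.println(author.feature + " = " + author.atomicValue);
        System.out.println(created.feature + " = " + created.atomicValue);
        System.out.println(colour.feature + " has " + colour.components.size() + " components");
    }
}
```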
1.1.5 Description A description is a descriptor having a compound type pertaining to a single resource.
1.1.6 Description Scheme A description scheme is a set of descriptor definitions and their relationships (associations, equivalence, specialisations, and generalisations). The descriptor relationships can be used to directly express the structure of the content or to create combinations of descriptors which form a richer expression of a higher-level concept. A description Scheme includes within its scope a comprehensive set of description schemes.
1.2 Descriptor Relationships
In order to express the information required for a description scheme, the DDF preferably provides a minimum set of descriptor relationships. This minimum set includes:
- Generalisation/specialisation relationships;
- Association relationships;
- Equivalence relationships;
- Spatial, temporal and conceptual relationships;
The generalisation/specialisation relationships specify that a particular descriptor is a more specific or more general form of another descriptor and hence can be viewed by a processing application as such. For example, a cat is a type of animal, and hence a search engine searching for occurrences of animal descriptors should also select descriptions which contain "cat" descriptors.
Association relationships are defined here to include descriptor containment and sequence and cardinality of occurrence. These relationships provide contextual information for a given descriptor and are necessary in order to provide a context in which a particular descriptor can be interpreted by an application. For example, a "Shot" descriptor which is contained within a "VideoScene" descriptor in a video description scheme would be interpreted differently from a "Shot" descriptor in another context in a sound effects description scheme.
An equivalence relationship is a form of a classification relationship where the relation is not necessarily of a generalisation/specialisation nature. Equivalence relationships are desirable between languages inter-language) and within a language intra-language). Typically equivalences will require the definition of synonyms (where two descriptors are equivalent) and quasi-synonyms (where two descriptors are equivalent to some specified extent). Also there is a need to define equivalence *relationships between non-textual values mean R, G and B values in an image) and a textual representative value red, green, etc.), and vice-versa.
Spatial, temporal and conceptual relationships between descriptors in a description may also be used. These relationships support the description of neighbouring objects in an image, sequential segments in a video scene, and similar concepts in a description.
Navigation relationships between descriptors are also desirable. Usage of descriptions will often involve navigation between a component of the description and an associated spatio-temporal extent in the resource (such as a key frame in a video resource).
Considered together these relationships can to some extent provide a level of semantic interoperability between different description schemes. Further levels of semantic interoperability could also be achieved at the application level.
1.3 Overview of Embodiments of Methods CFP1594AUIPR32-41 GRP2 489841 1\ELEC\CISRA\IPR\IPR32-41 GRP2]489841 .doc:PWM The methods described herein are specific examples of a generalised form of a method for generating and processing descriptions of resources utilising a Dynamic Description Framework (DDF). This framework provides an object model, a platformand language-neutral application programming interface (API) and a serialisation syntax for use in the description of resources, in particular audiovisual resources. The preferred DDF incorporates the benefits of declarative description of content with procedural methods for the creation and processing of descriptions and components of descriptions.
Fig. 1A shows an overview of a method of generating a description of an electronically-accessible resource. In this method, a description scheme (DS) 100A is read by a description generator 106A which in turn generates a representation 108A of a description 107A of the resource in memory. This representation 108A is an instance of the Description Object Model (DesOM) of the DDF. The representation 108A of the description 107A can be serialised as an XML document 110A for the purposes of storage Sand transport. Preferably, both the description scheme 100A and the serialised description 110A are textual and are both readable by machines and humans. It is further S. preferable that the description scheme 100A is provided with associated procedural code, called DescriptorHandler(s), so as to provide operations/processes which can unambiguously provide or generate descriptive information or other actions on the S" resource 104A. For example, the method in one operating mode is able to automatically generate a description 107A of the resource 104A. In this operating mode, the processes of the DescriptorHandler(s) operate on the resource 104A to generate a description 107A of that particular resource 104A. These description schemes 100A and descriptions 107A are defined in terms of the abovementioned DDF.
Fig. 1B shows an overview of the method of processing a description of a resource.
In this method, a serialised description 100B is parsed by a processor 102B which in turn generates a representation 104B of the description in memory. The representation 104B is an instance of the DesOM of the DDF. Such a serialised description 100B may be generated in accordance with the method of Fig. 1A. Preferably, the processor 102B and description generator 106A (Fig. 1) are incorporated as one unit. The serialised description 100B refers to a description scheme 106B which may in turn refer to a number of DescriptionHandler(s) 108B. The serialised description 100B also refers to the resource 11OB which the description describes. In this method, a set of rules may be applied to the DesOM representation of the description 104B to generate a modified CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-4 I GRP2]489841.doc:PWM l~l~ DesOM representation of the description 112B. [The term D+ has been used to indicate the modified DesOM representation of the description 112B in Fig. lB.] The modified DesOM representation of the description 112B can be serialised as an XML document.
This set of rules is defined by the Description Scheme 106B. The DescriptorHandler(s) 108B provide further processing of the DesOM representation of the description 114B or the modified DesOM representation of the description 116B. In one operating mode, the processing method is able to compute the similarity between resources 110B. In this mode, the DescriptorHandler provides a process for computing similarity between descriptions of resources. The processing method is further adapted to apply a set of rules 118B to the DesOM representation of the description 104B. The set of rules 118B provides one or more associated actions on the description 104B depending on the presence of pre-determined components of the serialised description 100B. The resultant output of these actions is itself a representation of a description which conforms to the DesOM 112B. Further, a description scheme may be read into memory and a set of rules provided for performing one or more associated actions on the description scheme itself.
These sets of rules are able to extend resource descriptions; translate resource descriptions; select one or more specific descriptions according to a query; visually present resource descriptions and many other actions.
Fig. 1C shows an overview of the method of encoding a description of a resource.
o In this method, a description scheme (DS) 104C is read by a description generator 108C which in turn generates a representation 110C of a description of the resource in memory.
This representation 11OC is an instance of the DesOM of the DDF. The description scheme 104C is provided with an associated procedural code, by means of a DescriptorHandlers so as to provide an encoding procedure 114C on the DesOM representation 11OC. The encoding procedures encodes the DesOM representation 11OC to provide an encoded DesOM 112C. The encoded DesOM representation 112C of the description can be serialised as an XML document for the purposes of storage and transport. The encoding procedure is preferably utilised for compression and/or encryption purposes.
Fig. ID shows an overview of a method of decoding an encoded description of a resource. This method has as its input a serialised description 104D which has been encoded by the method of Fig. 1C. In this method, the serialised description 104D is parsed by a processor 11OD which in turn generates an encoded representation 112D of CFP1594AU IPR32-41_GRP2 489841 1:\ELEC\CISRA\IPR\IPR32-4 I GRP2]489841.doc:PWM r- j~ -i -4t the description in memory. The representation 112D is an instance of the encoded DesOM. The description scheme 106D provided with associated procedural code, called descriptor handlers, so as to provide the decoding operation 114D which can decode the encoded DesOM representation 112D so as to provide the decoded representation 116D of the description in memory. The representation 116D is an instance of the DesOM of the DDF.
2. Dynamic Description Framework 2.1 Overview The preferred DDF attempts to incorporate the benefits of declarative description of content with procedural methods for the creation and processing of descriptors. It comprises an object model, an API for the processing of descriptions, and a serialisation syntax. The DDF can be used to adequately describe content using these components.
"The object model provides the core semantics of the description and is based on the *o descriptor entity. This model has the advantage that the containment relationship is inherent in the model. This containment relationship is particularly important in the description of audiovisual resources for two reasons. First, the structure of many audiovisual resources has an inherent hierarchical structure a video clip contains shots which contain key frames, etc.). Second, the representation values for many descriptors can be complex datatypes that can be represented in a hierarchical fashion a histogram contains bins which contain frequencies). The object model of the .'**preferred DDF is called the Description Object Model (DesOM). It is discussed in Section 2.2.
The preferred DDF also uses an API for the processing of descriptions. This enables applications and tools to perform further processing (e.g., transformations, presentations, etc.) on serialised descriptions. The preferred API, which is described further in Section 2.3, is based on the Document Object Model (DOM), which has been standardised by the W3C for use with XML documents.
The DesOM API also enables the application of rule-based processing, which can be used to:
- Extend a description by inferring the presence of additional descriptors based on the existence or absence of stored descriptors;
- Influence/control the presentation of a description;
This rule-based processing is described in more detail in Sections 7 to 11.
The tree-based structure of the DesOM (and for that matter, the DOM) is an appropriate representation of hierarchically structured data such as the preferred data model.
The DDF preferably uses a serialisation syntax for the purposes of storage and transport of descriptions and description schemes. Serialised descriptions can be parsed into an instance of the DesOM. In addition, the serialisation syntax provides a means for expressing the descriptor relationships detailed in Section 1.2. The syntax of XML Document Type Definitions (DTDs) is used to express description schemes and XML documents to serialise individual descriptions. The expression syntax of both description schemes and individual descriptions is referred to as the serialisation syntax.
XML is used as the serialisation syntax because of its inherent ability to express the containment relationship and its increasing acceptance as a form for the transmission of structured electronic data. A description scheme can be represented using the grammar of an XML DTD in which the individual element definitions represent the definitions of the descriptors and their relationships in the description scheme. Individual descriptions can be serialised as XML documents that conform to the DTD containing the relevant description scheme. Section 2.4 describes how the preferred object model and the Si required descriptor relationships are expressed using the serialisation syntax.
The use of XML as the serialisation syntax enables the possibility of DDF conformant descriptions to be interpreted, in theory, at two levels. First, any serialised description is able to be interpreted at an XML syntactical level. At this level the description could be parsed into an object model such as the DOM and a search/filter engine with no knowledge of the DDF could interpret the description in terms of its textual content the semantics of the DDF's object model are not used for the description's interpretation). Alternatively, the description could be parsed at a more semantic level by using the DDF object model, the DesOM, rather than the DOM.
In practice, however, it is necessary to parse the description scheme expressed using the XML DTD syntax into an XML DTD where descriptor specialisation/generalisation CFP1594AUIPR32-41 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM -121 r relationships are validated and explicitly realised (see Section 2.4.1.1 for further details).
This step is necessary because no level of subclassing or inheritance is provided for in Version 1.0 of XML. We refer to this step as DDF interpretation and the process performing the step is a DDF Interpreter. To differentiate between the DTD containing the DDF definition of a description scheme and the DTD to which the description (ie., XML Version 1.0 document) conforms, we name the DDF DTD using an extension "ddf' rather than "dtd" as is typically used for an XML DTD.
A serialised description can then be parsed and represented using the DOM from its conformant DTD the DTD stored using the extension "dtd") by a standard XML Processor. This processor needs no knowledge of DDF and the content of the descriptions can be accessed at a textual level. [Textual access to the description could also be achieved by simply scanning the description (XML document) or using XML Processors that are not based on the DOM SAX)] Alternatively, a DDF Processor can parse the serialised description and represent it using the DesOM from the DTD containing the description scheme expressed using DDF the DTD stored using the extension The first step of the latter process is the one of DDF interpretation.
This process of two level interpretation is depicted in Figs. 2A and 2B, which show how different semantic levels of access can be obtained from a (DDF) description serialised using the XML syntax. The DesOm and DOM are similar in that both are treebased structures. However, the DesOM differs from the DOM in that DesOM contains element nodes which have a richer interface than the corresponding element nodes in the DOM. In addition, the element nodes of the DesOM can have an associated DescriptorHandler which provides procedures that are relevant to the element.
CFP1594AU IPR32-41-GRP2 489841 I:\ELEC\CISRA\IPRPR32-41 GRP2]489841.doc:PWM LIY-li 2.2 Object Model O 2.2.1 Overview The object model adopted for the preferred DDF is based on the definition of a core Descriptor object. As defined in Section 1.1.4 a descriptor can be viewed as an "featurerepresentative value" pair. The representative value can be of atomic type integer, string, date, etc.) or compound type, where a compound type is a collection of one or more descriptors. The object model is represented by the UML class diagram in Fig 3.
[Note that the use of capitals in Descriptor and Description implies the objects as defined in Figure 3 rather than the general terms defined in Section 1.1.4.] A Description object is defined as a specialisation of a Descriptor in which all the contained Descriptors pertain to a single resource. Description schemes will contain ':.definitions of descriptors and descriptions which are specialisations of the core Descriptor and Description objects, respectively.
4 In the preferred object model descriptors can represent properties and relationships 4444*4 of their parent descriptors. For example, a Region Descriptor for a Region Adjacency ***Graph of an image could contain a Label Descriptor (containing a textual representative 0 44 value) and a Neighbours Descriptor (containing a representative value comprising a list of references to other Region Descriptors). In this example, the Label Descriptor can be ob4e.: viewed as representing a property of a region and the Neighbours Descriptor as representing a spatial relationship involving the region. Descriptors representing relationships spatial, temporal, conceptual) typically have representative values that S. comprise one or more references to other descriptors in the description. In Section 2.4.1.4, a standard set of descriptors are proposed to express spatial, temporal and conceptual relationships.
2.2.2 Descriptor Class Each Descriptor has an associated id, language code and dataType enumeration.
The id attribute provides each Descriptor with a unique identity. This identity can be used to reference other Descriptor objects in a description. The language code attribute specifies the language of any text in the Descriptor's representative value. The dataType enumeration provides the data type of the representative value if that value is atomic (ie., not composed of other descriptors; see Section Each Descriptor object can also be associated with a Descriptor Handler which provides procedural methods associated with the Descriptor (see Section 2.2.4).
CFP1594AU IPR32-41 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41 GRP2489841 .doc:PWM Implementations of the preferred DDF object model can implement to various extent the descriptor relationships detailed in Section 1.2. This approach means that different implementations can utilise the properties of the particular serialisation syntaxes adopted. Section 2.4.1 describes in detail how the descriptor relationships detailed in Section 1.5 are realised using an XML serialisation syntax.
2.2.3 Atomic Descriptor Value Class A Descriptor's representative value can be atomic or compound composed of other Descriptor objects). If it is atomic, then the value is stored in an Atomic Descriptor Value object as a string object. The data type of this atomic value is interpreted using the dataType attribute of the parent Descriptor object. Therefore the extent to which data o.•typing is provided depends on the dataType attribute for particular implementations of .9 S this data model. For example, refer to Section 2.4.2 for data typing implementation *see .9 details using the preferred XML serialisation syntax.
o9 The Atomic Descriptor Value could also be represented by a data attribute of the oo oo Descriptor class. The Atomic Descriptor Value is represented here as a class because of the one-to-one correspondence of this entity to a Text node in the DOM (and ooo° AtomicDescriptorValue node in the DesOM; see Section 4.1.3).
2.2.4 Descriptor Handler Class •In the preferred DDF, a Descriptor Handler is a class which provides procedural methods that apply to the Descriptor. The methods of the Descriptor Handler preferably satisfy a specified interface. The Descriptor Handler classes can provide methods for the 9 creation of a Descriptor's representative value (or content) and the computation of the similarity between two descriptors of the same type that use the same Descriptor definition and hence Descriptor Handler). There is no reason why this set of procedures could not be extended if required. Fig. 3 details some examples of the Descriptor Handler methods provided in the preferred implementation of the DDF.
The methods mentioned above are preferably implemented as static (class) methods that satisfy a specified interface see Section The role of the Descriptor Handler is to provide unambiguous procedures for the generation and processing of Descriptors. The ability to pass parameters to Descriptor Handler methods is discussed in Section 3.1.2.1 with respect to the use of XML as a serialisation syntax.
Preferably, the programmatic interface for a Descriptor Handler is fixed. In other embodiments, the interface could be specified as an attribute of the Descriptor class or CFP1594AU IPR32-4 I_GRP2 489841 I:\ELEC\CISRA\1PR\1PR32-4 I GRP21489841 .doc:PWM specified for the description scheme. These alternative embodiments enable the Descriptor Handler interface to be customised for particular description schemes.
Descriptor Handler methods can also be provided for the encoding and decoding of a Descriptor's representative value. Encoding methods could be provided in order to either compress reduce in size) the serialised description and therefore more efficiently store and transport the description, or alternatively to encrypt the Descriptor's representative value.
In the case of encoding for compression, the encoding method could vary depending on the type of data to be encoded. For example, a Descriptor with a textual representative value could use a text compression method LZW), whereas a Descriptor that represented a colour histogram structure of an image resource may encode the bin frequencies of the histogram using a form of entropy encoding most commonly occurring frequencies are represented by codewords requiring fewer bits). Encoding for encryption could be used to allow only privileged users access to the Descriptor. Standard 15 encryption methods public key encryption) could be used.
2.2.5 Description Class The Description has some additional attributes to those of the Descriptor. It has an associated resource which contains either the URI or ENTITY of the item of content being described. It also contains a reference to the data when that resource was last modified and an attribute that contains the URIs or ENTITIES of sets of rules that can be applied to the Description. Rule-based processing of descriptions is discussed further in Section 7.
Since a Description object is defined as a specialisation of the Descriptor object, Description objects can be treated as Descriptor objects in other descriptions the attributes of the Description are ignored). In an alternate data model, the Description object can contain both Descriptor and Description objects. With this data model Description objects can exist in another tree of Descriptors and refer to resources other than that of the root description.
Another alternative implementation could use a data model which did not include a Description object, since a Description is essentially the same as a Descriptor having a compound representative type. In this case the additional attributes of the Description (ie resource, dateResourceLastModified and ruleSets) would be treated as attributes of the CFP1594AU [PR3241I GRP2 489841 I:\ELEC\CISRA\IPRIPR32-4 1 GRP2]489841 .doc:PWM Descriptor. With this data model the resource would only need to be specified at the top of the Descriptor tree where it was relevant.
2.3 API for Processing of Descriptions The inherent containment property of the core Descriptor object is represented by a treebased processing model parent-children data model) where each node of the tree is either a Descriptor or Atomic Descriptor Value object. [Atomic Descriptor Value objects can only exist as leaf nodes of the tree.] The DesOM also contains references and navigational links between nodes in the tree. References are typically used to indicate relationships spatial, temporal and/or conceptual) between Descriptor objects.
Navigational links are used to provide browsing properties for the description and enable linking between Descriptor objects in the description and spatio-temporal extents in the resource a particular frame in the video stream being described). A schematic depicting the description processing model is shown in Fig. 4.
For a description to conform to the preferred DDF, the root of the DesOM must be a 15 Description object. In other words, the root must specify the resource to which the description refers. Since a Description object is just a specialisation of the Descriptor object, any Description object can become a sub-tree of another Description object. In S other words, a new Description object can be created from a set of related Description objects. This process is shown in Fig. The DesOM extends the DOM by providing the required generalisation/specialisation relationships for descriptors, data typing for atomic representative values for descriptors, DescriptorHandlers and reference and S--navigational links. The DOM provides a standard set of objects for representing XML S-•documents, a standard model of how these objects can be combined, and a standard platform- and language-neutral interface for accessing and manipulating them. The DOM representation of an XML document is a tree structure where the content of an element is represented as child nodes of the element. The DOM specifies interfaces which can be used to manage XML documents. In other words, it can be implemented in any (or nearly all common) programming languages.
Similarly only interfaces are specified for the DesOM. These interfaces can be used to process XML documents that are DDF conformant. Just as an XML (DOM) Processor must implement a DOM interface, a DDF Processor must implement a DesOM interface (see Fig. As mentioned in Section 2.1, a DDF Processor first performs an CFPI594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41 GRP2]489841.doc:PWM l I;~CI Wiil i.Ta-~*:u-:ii interpretation step in which the generalisation/specialisation relationships of descriptors is validated and processed in a Version 1.0 XML DTD form. [Invalid subclassing in the description scheme expressed using the DDF and the syntax of XML DTDs should result in a description scheme parsing error.] The DDF Processor can then either parse the description into a DOM and transform that structure into a DesOM or parse the description directly into a DesOM.
Essentially the DesOM differs from the DOM in that element and text nodes are replaced by the richer interfaces of Descriptor and Atomic Descriptor Value nodes.
Interfaces for these nodes are described in Section 4 and section 6.3. A basic DesOM implementation could provide just that interface, however a more expansive implementation might provide some level of interpretation of the reference and navigational relationships. For example, a set of spatial, temporal and conceptual relationships could be defined for the DDF (see Section 3.1.3) and these could be interpreted at the DesOM level.
Implementations of the DesOM could optionally execute Descriptor Handler methods to create, encode or process descriptors. For example, a DesOM implementation might implement a Descriptor Handler's method to create the content for a Descriptor if the content did not already exist.
The DesOM provides a basis for the further processing of descriptions. The tree-structure of the DesOM makes it amenable to rule-based processing where rules consist of a pattern and an associated action. Such processing could be performed by tools which implement the DesOM interface to process DDF descriptions. Rule-based processing is discussed further in Sections 7 to 11.
2.4 Serialisation Syntax
The serialisation syntax preferably used for the storage and transport of descriptions and description schemes is XML Version 1.0. The XML standard was developed as a subset of Standard Generalised Markup Language (SGML). An XML document contains one or more elements, the boundaries of which are either delimited by start and end tags, or by an empty-element tag. Each element is identified by its name, sometimes also called its "generic identifier" (GI), and may have a set of attribute specifications. Each attribute specification has a name and a value. For further details on the XML Version 1.0 standard, reference is made to the W3C website HTTP://www.w3.org/TR/1998/REC-xml-19980210.
The preferred DDF uses a set of core elements which can be defined in a DDF Core DTD. An SGML-like DTD syntax is used to define element types and their associated attributes (as specified in the Version 1.0 XML standard). Each description can be represented by an XML document. This document (ie. the description) refers to the DTD (ie. the description scheme) to which the description conforms. In other words the description is of the type specified by the DTD (see Fig. 6). The DDF Core DTD needs to provide definitions for the core elements required for the expression of the object model. The element definition that is central to the DDF is that of the Descriptor element. All descriptors can be defined as subclasses (specialisations) of this core element. For example, although a Description is defined to be a collection of descriptors pertaining to a single resource, it is defined as a subclass of the Descriptor element. Other subclasses of the Descriptor element are used to provide linking functionality between the descriptors and the resources being described (see Section 3.1.4).
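As a minimal sketch of this arrangement (the scheme name, entity name and file names are hypothetical, and the Description element and its resource attribute are those defined later in Section 3.1.2.2), a description might refer to its description scheme and to the resource it describes as follows. A JPEG NOTATION is assumed to be available from the Core.ddf.
<?xml version="1.0"?>
<!DOCTYPE Description SYSTEM "MyScheme.ddf" [
    <!ENTITY MyImage SYSTEM "MyImage.jpg" NDATA JPEG>
]>
<Description resource="MyImage">
    <Descriptor>...</Descriptor>
</Description>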
The data modelling requirements of the DDF are more extensive than those provided by the XML Specification version 1.0. Specifically the serialisation syntax of the DDF is able to: Express the required descriptor relationships (see Section 2); Provide data typing for the (atomic) representative value of a descriptor; These requirements are addressed in Sections 2.4.1 and 2.4.2 with respect to using Version 1.0 of the XML standard as the serialisation syntax.
2.4.1 Expression of Descriptor Relationships
2.4.1.1 Generalisation/Specialisation Relationships
Version 1.0 of the XML specification does not provide for the specification of generalisation/specialisation relationships. In addition, subclassing and inheritance in marked up documents is not well-defined. An element type is a subclass (specialisation) of another element type, the superclass, if it is substitutable wherever the superclass element occurs and is defined to be a subclass of the superclass. It is not essential for an element to be defined as a subclass of another element. The superclass can be viewed as a generalisation of the subclass. The notion of inheritance can be viewed as a code-saving mechanism which allows one element type to get (inherit) the properties of another element type "for free".
The preferred subclassing/inheritance guidelines for single subclassing/inheritance are described below in Sections 2.4.1.1.1 to 2.4.1.1.3. Multiple inheritance can be extended from the single subclassing/inheritance.
2.4.1.1.1 Content Model Inheritance
A subclass should faithfully implement a base class's interface. Therefore, if a base class has a content model of "ANY" then a subclass can have either an "ANY" content model or a more restricted content model. This is necessary for the subclass to be substitutable for the parent class. This is a somewhat different scenario from object oriented programming (OOP) where a subclass must accept any input that its super (parent) class can. The content model of an element should be viewed as "output" not "input". If each element is considered as an object having methods to retrieve its content, then a subclass must also be able to satisfy these methods. Each element type in a content model can be viewed as having a role and the roles of a subclass's content model must match up with those of its parent class. A subclass cannot make more flexible or extend components of the content model of its parent class, however it can implement new child elements that will be ignored when that element is treated as its parent class.
For example, if AA, BB and CC are subclasses of A, B and C, respectively, and A has a content model of (B, C), then the following are all valid content models of AA: (BB, CC), (BB, C) and (B, CC). The content models (BB, CC, D) and (D, B, C) are also valid content models for AA because they match the "roles" of (B, C). In addition, element AA can contain a child element D which will not be visible if element AA is to be treated as an instance of element A. The content models (B) and (C) are invalid because the "role" of (B, C) in the content model of A is not matched.
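A minimal sketch of this example using the superElement attribute described in Section 2.4.1.1.3 is given below. The elements A, B, C, AA, BB, CC and D are hypothetical and their remaining definitions are omitted.
<!ELEMENT A (B, C)>
<!ATTLIST A superElement NMTOKEN #FIXED "Descriptor">
<!ELEMENT AA (BB, CC, D)>
<!ATTLIST AA superElement NMTOKEN #FIXED "A">
When an AA element is treated as an A element, the BB and CC children fill the roles of B and C, and the additional D child is ignored.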
It would be possible to allow the content model for a subclass to be left unspecified in which event the subclass's content model would default to be that of the superclass.
Preferably, unspecified content models should not be allowed as they do not represent a valid construct using XML/SGML DTD syntax.
2.4.1.1.2 Attribute Inheritance
The same subclass and inheritance notions apply to attributes, however attributes are more intrinsically amenable to concepts of subclassing than content because they are "random access" in some sense, as are methods in OOP. A subclass can declare new attributes which are essentially ignored when the subclass is treated as its parent class.
However, a subclass cannot extend, or make more flexible, attributes of the parent class.
The attribute defaults are only considered when assessing whether an attribute definition has or has not extended that of its parent class. Consequently a subclass and its specified superclass should have the same attribute type, and only the attribute default can be further restricted in the subclass. Valid restrictions of attribute default definition are as in Table 1. In addition, if the superclass has a default declaration of "#FIXED" and the value of the default can be interpreted as an element name then preferably the value of the default can be further restricted to be a subclass of that element name.
Table 1. Permitted restrictions of the attribute default declaration in a subclass.
Superclass Attribute Default Declaration    Subclass Attribute Default Declaration
#IMPLIED                                    #IMPLIED, #REQUIRED, "value", #FIXED "value"
#REQUIRED                                   #REQUIRED, #FIXED "value"
"value"                                     "value", #FIXED "value"
#FIXED "value"                              #FIXED "value"

2.4.1.1.3 Implementation Details
In order to implement this subclassing/inheritance model using Version 1.0 of the XML Specification and the DOM, the superclass (or superElement) for an element is specified as an attribute in the element's defined attribute list. It is believed that this is not ideal and that subclassing information should be part of the element's definition. For example, the keyword "TYPEOF" has been suggested as a means of representing subclassing information (eg. <!ELEMENT Cat TYPEOF Animal>).
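Under the attribute-based approach adopted here, the same relationship would instead be declared with the superElement attribute, for example as sketched below. The Animal and Cat descriptors are hypothetical.
<!ELEMENT Animal (#PCDATA)>
<!ATTLIST Animal superElement NMTOKEN #FIXED "Descriptor">
<!ELEMENT Cat (#PCDATA)>
<!ATTLIST Cat superElement NMTOKEN #FIXED "Animal">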
The subclassing/inheritance implied by the use of the superElement attribute needs to be interpreted and validated against the provided guidelines for subclassing/inheritance.
Failure to conform to these guidelines should result in a description scheme parsing error.
Also, in order for a serialised DDF description to be a valid XML document, the description needs to conform to a valid XML DTD. Therefore the DDF description scheme that is expressed using the syntax of XML DTDs needs to be parsed to create an XML DTD in which all the inheritance aspects of the subclassing relationships are processed. This involves:
Making explicit content models which depend on subclassing (this may involve extending content models so that they represent valid XML DTD content models in the absence of subclassing semantics); and
The addition of inherited attribute definitions to subclassed Descriptor definitions.
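For instance, continuing the hypothetical Cat descriptor above, the DDF interpretation step might expand its definition into plain XML DTD declarations in which the attributes inherited from the core Descriptor element (see Section 3.1.2.1) have been added explicitly. This is a sketch only; the exact expansion produced by a given DDF Processor may differ.
<!ELEMENT Cat (#PCDATA)>
<!ATTLIST Cat
    superElement NMTOKEN #FIXED "Animal"
    id ID #IMPLIED
    xml:lang CDATA "en"
    dataType %DataTypes; "String"
    handler ENTITY #IMPLIED
>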
2.4.1.2 Equivalence Relationships
The location of described resources can be achieved by the method either by formulating requests directly based on a description scheme or by more unstructured queries in which the contents of a description scheme are unknown. Typically the former approach will result in a more satisfactory result because the query is specifically formulated for the form of the descriptions. However, in some cases a query might be formulated without a
(complete) knowledge of a description scheme (and hence use different terms than those used in the description scheme) or in a language other than that used by particular descriptions.
As highlighted in Section 1.2 there are three types of equivalences:
Intra-language equivalences (eg. synonyms or quasi-synonyms);
Inter-language equivalences (eg. translations); and
Inferred equivalences between textual and non-textual representative values.
Known intra-language equivalences could be incorporated into a descriptor's definition using an alias or sameAs attribute for elements. However, applications and tools that provide a level of intra-language equivalence interpretation exist and therefore it was considered unnecessary to provide this functionality. Separate search/query/filter engines can ultimately provide some level of intra-language equivalence interpretation.
It is desirable to provide a means for inter-language equivalence as queries will not always be formulated in the same language as the description. Although some degree of redundancy can be tolerated in a description scheme (eg. descriptors in different languages could be defined), it is not generally acceptable to express a description in multiple languages. The method can translate a parsed description into the language of the query by processing a set of rules that is defined for the description scheme. This set of rules effectively replaces the descriptors in the DesOM with equivalent descriptors in the same language as the query. This method provides a controlled mapping between descriptors in different languages rather than allowing a mapping to be estimated by a translation ability in the search/query/filter engine.
Equivalences between non-textual and textual descriptors can be provided in a similar manner. For example, if the colour of an object in an image is stored as an (R, G, B) value then a rule could instantiate another descriptor in the DesOM that maps the particular (R, G, B) values to particular colours expressed as a text string (eg. red, green, blue, orange, etc.).
The rules are stored as a rule set that can be specified as part of the description. The extra or translated descriptors are not serialised and are only generated when they are needed. In other words, they only exist in the DesOM and not in the XML document that represents the description. Rule sets are a way of providing a richer, more flexible, description at the time of the description being processed without increasing the overhead of storing redundant information.
2.4.1.3 Association Relationships
Association relationships specify the context in which a defined Descriptor can occur. The context includes relationships such as containment (eg. Descriptor A must occur within a Descriptor B), sequence (eg. Descriptors A, B and C must occur in that order), and cardinality (eg. Descriptor B can occur only once in an instance of Descriptor A).
To a large extent these association relationships can be specified in an XML DTD using an element's content model. A content model is a simple grammar governing the allowed types of child elements (ie. containment) and the order in which they are allowed to appear. Group connectors [and (comma), or (vertical bar)] are used to specify the order in which child elements can appear within the element. Occurrence indicators [one or more (+), zero or more (*), or zero or one (?)] are used to specify the cardinality or occurrence of the child elements in the element's content. Element content models are described in Section 3.2.1 of the XML 1.0 Recommendation. The XML 1.0 content model does not allow a specific non-zero cardinality to be defined (eg. an image can contain 0 to 20 objects) and consequently this association property is not provided in the preferred DDF implementation.
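A short sketch of these constructs follows. The Image, Caption, Object, Shot, Frame, Region, Colour and Texture elements are hypothetical and are used only to illustrate the connectors and occurrence indicators.
<!ELEMENT Image (Caption?, Object*)>    <!-- Caption optional (?), zero or more Objects (*) -->
<!ELEMENT Shot (Frame+)>                <!-- one or more Frames (+) -->
<!ELEMENT Region (Colour | Texture)>    <!-- either a Colour or a Texture (|) -->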
2.4.1.4 Spatial, Temporal and Conceptual Relationships Many descriptors will need to be able to model spatial, temporal and conceptual relationships often in addition to association relationships. For example, a Region Adjacency Graph which describes an image, comprises a graph object that contains a set of regions. In addition to being part of the graph object, each region also has a set of neighbouring regions spatial relationships). These relationships can be described using references to the relevant descriptors in the description.
In the method, these relationships are represented as Descriptors having atomic Descriptor values with IDREF or IDREFS data types. A set of core relationship descriptors is defined in the DDF Core DTD to enable DesOM implementations to realise a greater extent of semantic interpretation. Examples of the types of descriptor definitions to be included are provided in Section 3.1.3.
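As an illustration only, using the Neighbours relationship descriptor defined in Section 3.1.3 and hypothetical Region descriptors (assumed to be subclasses of Descriptor with inherited id attributes), adjacency between two regions might be serialised as follows.
<Region id="r1">
    <Neighbours>r2 r3</Neighbours>
</Region>
<Region id="r2">
    <Neighbours>r1</Neighbours>
</Region>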
2.4.1.5 Navigational Relationships
Many applications may require that descriptors can be explicitly linked to spatially and/or temporally localised extents in a resource. Although the resource is typically that being described, this is not always the case. The links should enable navigation from descriptors to indicated locations in a resource (eg. from a descriptor to a spatially and temporally localised extent in a digital video stream).
The means for expressing these links has been derived from an existing approach to this problem, namely the HyTime standard, which uses location address elements, or locators. This method requires that the resource must be declared as an external entity in the description. Link elements are then declared to create contextual (having a single linkend) and independent (having more than one linkend) links between locations in the description and extents in the declared entity. Locators provide a means for addressing extents in the resource being described.
The Locator and Extent elements defined in the DDF Core DTD are much simpler than those specified in the HyTime standard as the latter provided more than was required for the DDF requirement of linking. Also, because it is difficult to envisage all the possible different forms of locators required for the different media types, it was believed that description scheme designers should not be limited in the scope of their design of required locators.
2.4.2 Expression of Specific Data Types
The content model for an element can specify the order and cardinality of allowed child elements (see Section 2.4.1.3), that the element has EMPTY or no content, that the element has parsed character data (ie. #PCDATA), or some mixture of parsed character data and child elements (ie. ANY). [The allowed content models of elements are detailed in Section 3.2.1 of the XML 1.0 W3C Recommendation.] If the content of an element is used to store the representation value of a feature (eg. "DateCreated"), then the content model of the relevant Descriptor would need to be "#PCDATA" (or "ANY") and the content would be represented as a character string. Although this might be acceptable for a textual interpretation of the description, this form of representation does not permit more advanced queries where, for example, descriptions may be required to be selected if the "DateCreated" feature has a representation value that is later than some provided date. In other words, it is necessary to know how to parse the character content of the Descriptor (ie. the Atomic Descriptor Value).
The serialisation syntax of the DDF provides data typing of an element's content by using a dataType attribute for the element. Although it would not be explicit for a Version 1.0 XML (DOM) Processor, a DDF Processor can use the data type attribute to interpret an element's content appropriately. Datatyping of element content has been considered as part of the XML working group discussions and hence it would be preferable if the DDF could remain consistent with the XML standard.
In addition to the basic data types (eg. integer, floating-point value, string, date, time, etc.), the dataType attribute should allow types such as ID, IDREF and ENTITY in order to enable Atomic Descriptor Values to represent references to other Descriptors and links to entities external to the description. The XML concept of ENTITIES is preferred to using a URI data type in that the ENTITY type allows a URI to be linked to a type of the entity (eg. JPEG image, Java class, etc.).
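A brief sketch of this mechanism follows; the DateCreated descriptor is hypothetical. Its definition fixes the dataType attribute to "Date", so that a DDF Processor can parse the element's character content as an ISO 8601 date rather than an opaque string.
<!ELEMENT DateCreated (#PCDATA)>
<!ATTLIST DateCreated
    superElement NMTOKEN #FIXED "Descriptor"
    dataType %DataTypes; #FIXED "Date"
>
In a description, an instance of this descriptor might then appear as:
<DateCreated>1999-01-29</DateCreated>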
2.5 Implementation Issues
An implemented DDF Processor could use publicly available software (eg. IBM's XML parser) to parse descriptions into a DOM structure and then the method transforms this structure into a DesOM structure. The Java language is preferably used to implement Descriptor Handler classes because of its cross-platform properties.
Actual implementations of a DDF Processor would not need to create a DOM as an intermediate step and could parse the XML document directly into a DesOM structure using the DDF description scheme. Such a processor would need to first interpret the subclassing information in the DDF description scheme (see Fig. 2A).
A DDF Processor implementation could also take advantage of other core relationship descriptors (see Section 2.4.1.4) to provide a richer semantic interpretation of descriptions. Implementations could also interpret the linking elements when providing a graphical representation of descriptions and incorporate rule-engines to process rules which might be applied to the DesOM.
3. Serialisation Syntax Specification
3.1.1 Element Definitions
The preferred DDF includes the definition of a set of core elements using the XML/SGML DTD syntax. This set is preferably stored in a core, or set of core, DTDs.
Appendix A contains an example of such a DTD, Core.ddf. Note that we use the extension "ddf" to differentiate this document from an XML DTD, which would typically have the extension "dtd". A DDF set of definitions needs to have its subclassing/inheritance properties (eg. attribute inheritance from super elements) processed before a description can be interpreted with respect to the set of DDF definitions.
The set of core elements can be used as a basis for the definition of application DTDs or description schemes. The element definitions in the Core.ddf effectively provide a set of "foundation" elements from which description schemes can be based.
This specification of the core element definitions for the proposed DDF is based on Version 1.0 of the XML Specification. Elements that are included in the proposed Core.ddf are named according to the naming conventions used for Java classes all words in the name are capitalised and concatenated).
3.1.2 Core DDF Element Definitions
3.1.2.1 Descriptor Definition
The Descriptor element is the basic element which provides the data modelling properties detailed in Section 2.2. Any element definition requiring any of these properties should be represented as a subclass of this element. The element is the markup equivalent of the object class of an object-oriented programming language.
The content model for the Descriptor element needs to allow for either parsed character data (atomic representation value) or one or more Descriptor elements (compound representative value). The content model of the Descriptor element is defined to be "ANY" so as to allow the necessary content and be a valid XML DTD syntactical construct. However in order to control content models more tightly, it is also possible to define two subclasses of the Descriptor, the Atomic Descriptor and the Compound Descriptor. The content models of these subclasses could then have the more restricted content models of #PCDATA and (Descriptor+), respectively.
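A minimal sketch of how these two optional subclasses might be declared is given below; the naming and the fixed attribute values are illustrative only.
<!ELEMENT AtomicDescriptor (#PCDATA)>
<!ATTLIST AtomicDescriptor superElement NMTOKEN #FIXED "Descriptor">
<!ELEMENT CompoundDescriptor (Descriptor+)>
<!ATTLIST CompoundDescriptor superElement NMTOKEN #FIXED "Descriptor">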
In specialisations of the basic Compound Descriptor element, the "Descriptor+" would need to be interpreted by the DDF Interpreter as one or more Descriptor or subclasses of Descriptor elements. Specialisations of the Descriptor element that use this content model by default may have their content model extended to "ANY" during the DDF interpretation process (see Section 2.4.1.1.3) in order to form a valid XML DTD for a description scheme.
<!ENTITY DataTypes "(Int I Float Double I String I Date I Time I ID I IDREF IDREFS I ENTITY I ENTITIES)"> <!ELEMENT Descriptor (ANY)> <!ATTLIST Descriptor id ID #IMPLIED xml:lang CDATA "en" dataType %DataTypes; "String" superElement NMTOKEN #IMPLIED handler ENTITY #IMPLIED CFP1594AUIPR3241 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841 .doc:PWM Attribute id The value of this attribute provides a natural way to refer to a particular element in references). It must be unique for the document.
Attribute xml:lang The attribute xml:lang is included in Version 1.0 of the XML Specification. It specifies the natural language or formal language in which the content (of the element) is written. The default language used by the Descriptor element is English. If a description scheme was defined in French, for example, then one approach would be to define a FrenchDescriptor in which the value of xml:lang was FIXED to and then derive all application descriptors from the FrenchDescriptor element Attribute dataType .**:.Preferably, the definitions of many descriptors require some control over the data *type of an element's character data content.
The allowed data types for character data content are specified by the (XML) S 15 internal parameter entity, DataTypes (see above). The dataType attribute is only utilised if the content model for the Descriptor contains #PCDATA and the provided content for the Descriptor contained character data. In other words, if the content of a Descriptor is specified to contain child elements a compound representative value) then the dataType attribute is not used. In an alternative implementation, the allowable data types 20 could include a "Compound" type which would make the use of a compound descriptor more explicit in its definition.
Character data content of a Descriptor is represented by a DDF Processor using a AtomicDescriptorValue node (see Section 4.1.3 for the interface specification) rather than a Text node as used by a DOM Processor.
The default value of the attribute dataType is "String". This means that the dataType attribute does not need to be included in a Descriptor element's definition if the content of the element is to be treated as a string. Preferably, the DDF Processor dates and times are based on the profile of ISO 8601. The types, ENTITY/ENTITIES/ID/IDREF should be parsed as defined for the XML Version 1 standard.
Although the data type of the Descriptor element's character data content cannot be directly used by XML version 1.0 and DOM version 1.0 specifications, it might in some way assist textual access to the description. Also placing the data type of the character CFP1594AU 1PR32-4 I_GRP2 489841 I:\ELEC\CISRA\IPR\lPR32-4 I GRP2]489841.doc:PWM data content in an attribute is consistent with many current proposals for data typing in
XML.
Some Descriptors will require their representative value to be limited to a list of possible values an enumeration). In these cases, it is preferable to construct Descriptor elements (having an EMPTY content model) for each of the enumerated values and then specify the enumeration in the content model for the parent Descriptor.
An alternate approach is to include an enumeration data type and use a #PCDATA content model.
Attribute superElement The value of this attribute is an element name which is the parent (or super) element of the Descriptor element. The parent element's definition must be available. Subclassing is implemented as described in Section 2.4.1.1.
The information in this attribute is used by the DDF interpretation process (see Fig.
2) to validate the defined subclassing and to process the inheritance of attributes. When S 15 accessed at the DOM level, this attribute provides only descriptive information about the immediate generalisation of the element. When processed at a DesOM level the subclassing relationship(s) for the element are represented as a node list or inheritance tree (see Section 4.1.1).
Attribute handler 20 The value of this attribute specifies an external entity for a Descriptor Handler to be used to provide methods for the Descriptor element. The Descriptor Handler is a class which contains methods that conform to a specified Descriptor Handler interface (see Section 4.1.2).
The Descriptor Handler is specified using an ENTITY which can be defined in the description scheme (preferably before the elements of the scheme are defined). The ENTITY declaration can use a NOTATION to declare the type of the external entity and a helper application required to process the ENTITY. In the example below, a NOTATION is declared for a JavaClass type and this type is linked to the "Java" helper application (ie., Java virtual machine). An individual Java class in then declared using an ENTITY declaration which uses the JavaClass NOTATION.
<!NOTATION JavaClass SYSTEM "Java"> <!ENTITY MyDescHandler SYSTEM "MyDescHandler.class" NDATA JavaClass> CFP I594AU IPR32-4 I GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM Preferably, it is assumed that the methods provided by the Descriptor Handler do not require any parameters that are not available from the DesOM resource from a Description element). If methods of a Descriptor Handler require parameters to be set from individual descriptions, then attributes of a specialisation of the Descriptor element can be used to hold the parameter values. A Descriptor Handler could then have a method to set the parameters from the attribute values in the DesOM.
3.1.2.2 Description Definition A Description element is defined as a subclass of the Descriptor element. It represents the root node of an instance of the DesOM and should be the root element of a serialised description an XML document).
The content model for the element is defined as one or more Descriptors. This is a restriction of the content model of the Descriptor element. As with the Descriptor element, definitions of specialisations of this element need to be interpreted by the DDF Interpreter as one or more Descriptor or Descriptor subclass elements.
S* 15 <!ELEMENT Description (Descriptor+)> S" <!ATTLIST Description superElement NMTOKEN #FIXED "Descriptor" resource ENTITY #REQUIRED dateResourceLastModified CDATA #IMPLIED ruleSets ENTITIES #IMPLIED Attribute superElement Although the attribute superElement is inherited from the Descriptor element's definition, it is redefined here to declare that the Description element is a subclass of the Descriptor element. The default superElement is declared as #FIXED so that instances of the Description element cannot redefine the superElement value. Note, that a specialisation of the Description element can further restrict this default attribute value by specifying an element name that is a subclass of the Descriptor element (see 2.4.1.1.2).
Attribute resource This value of this attribute should contain an entity which references the resource being described by this description. The resource must have been declared as an entity in CFP1594AUPR32-41 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM I the description before the Description can be declared. The resource type can be obtained by using a NOTATION, defined in either the description scheme or in the Core.ddf, to describe the type of entity: eg., <!NOTATION MPEG-2 SYSTEM "MPEG-2Player">.
The NOTATION can then be used by an external ENTITY declaration in the DOCTYPE declaration of the description: eg., <!ENTITY MyVideo SYSTEM "MyVideo.mpg" NDATA MPEG-2>.
Note, that this method of referencing the resource being described not only identifies it as an MPEG-2 resource but also provides the name of a processor (helper application) for the resource type.
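Putting these declarations together, a description of the video might then begin as follows. This is a sketch only; the date value is illustrative and the dateResourceLastModified attribute is described below.
<Description resource="MyVideo" dateResourceLastModified="1999-01-29">
    <Descriptor>...</Descriptor>
</Description>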
Attribute dateResourceLastModified The value of this attribute is a string representation of the date that the resource was last modified. At any stage a process can check to see if this date has changed (by string comparison), and update the description if necessary.
S 15 Attribute ruleSets The value of this attribute contains one or more external ENTITIES. Each ENTITY refers to an XML document that contains a set of rules that can be applied to the description (see Section 7).
3.1.3 Descriptors Representing Spatial, Temporal and Conceptual Relationships 20 A set of Descriptor elements have been included to provide spatial, temporal and conceptual relationships between descriptors. These elements are preferably a part of the core DDF elements rather than specified in individual application description schemes in order to improve the semantic interpretation of description. These relationship Descriptor elements can have either atomic or compound representation values. The element set below is included more by way of example rather than attempting to demonstrate a complete list of the types of relationships that need to be modelled.
<!ELEMENT ParallelSequence (Descriptor+)> <!ATTLIST ParallelSequence CFP]594AUIPR32-41 GRP2 489841 I:\ELEC\CISRA\IPR\lPR32-41GRP2]489841.doc:PWM -38superElement NMTOKEN #FIXED "Descriptor" <!ELEMENT SerialSequence (Descriptor+)> <!ATTLIST SerialSequence superElement NMTOKEN <!ELEMENT Neighbours (#PCDATA)> <!ATTLIST Neighbours superElement NMTOKEN dataType %DataTypes; <!ELEMENT Before (#PCDATA)> <!IATTLIST Before superElement NMTOKEN dataType %DataTypes; #IFIXED "Descriptor" #FIXED "Descriptor" #FIXED 1
IDREFS"
#FIXED "Descriptor" #FIXED "[DREFS" <!ELEMENT After (#PCDATA)> <!ATTLIST After superElement dataType
NMTOKEN
%DataTypes; a.
*0 <!ELEMENT InFrontOf (#PCDATA)> <!ATTLIST InFrontOf superElement NMTOKEN dataType %DataTypes; #FIXED "Descriptor" #FIXED "IDREFS" #FIXED "Descriptor" #FIXED "IDREFS" CFP1594AU IPR32-41-GRP2 489841 CFPI94AU1PR3-41_GRP 48941 :\ELEC\CISRA\IPR\IPR32-41GRP2]489841 .doc:PWM <!ELEMENT Behind (#PCDATA)> <!ATTLIST Behind superElement NMTOKEN #FIXED "Descriptor" dataType %DataTypes; #FIXED "IDREFS" 3.1.4 Elements Representing Navigational Relationships The preferred DDF also includes some core elements that enable the linking of descriptions to spatio-temporal extents of the content being described. A spatio-temporal extent is defined to be a section of the content that is spatially and/or temporally localised.
For example, a spatio-temporal extent of a digital video signal might be represented as a rectangular region that extends for a number of frames. A contextual link, CLink, is defined to represent the common cross-reference or navigational link. A CLink connects 15 the location in the description where the link occurs to another location. In other words, a i C CLink has a single linkend attribute. An independent link, or ILink, is also defined for .applications that require links connecting more than two locations or stored separately from the link's location in the description. These elements are defined as subclasses of the basic Descriptor element so that they are interpreted by the DDF and represented as nodes in the DesOM. Since these elements do not require any of the data modelling properties described in Section 2.2, there may be a case for allowing elements, such as the set defined below, to not be based on the Descriptor element but still interpreted by a DDF processor.
The definitions of these linking elements are included in Core.ddf Note it might be preferable to include the definition of the core spatio-temporal linking elements in a separate (ddf) DTD.
<!ELEMENT CLink (#PCDATA)> <!ATTLIST CLink superElement NMTOKEN #FIXED "Descriptor" dataTvoe %DataTvoes; #FIXED "IDREF" j 1 <!ELEMENT ILink (#PCDATA)> CFP1594AU IPR32-41 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841 .doc:PWM i -i- <!ATTLIST Ilink superElement NMTOKEN #FIXED "Descriptor" dataType %DataTypes; #FIXED "IDREFS" The core Locator element simply provides an address for the location of one or more Extent elements within a particular resource. The value of the resource attribute identifies the resource using an ENTITY that has been previously declared in the description. This requires that the Core.ddf also includes a sufficiently rich set of NOTATIONS that include the types of resources that are going to be referenced by entities JPEG, TIFF, MPEG-1, MPEG-2, etc.). An instance of a Locator must contain one or more instances of an Extent. It is desirable to specify the resource even if it is the same resource specified for the description.
Several subclasses of Extent elements are defined in the Core.ddf The definitions of 15 these elements are included below. These element definitions provide an example of the types of Locator and Extent elements that could be required.
see*
S
0 00 a s 0 5 *555
S
SO
S0 S. S
S*
S
<!ELEMENT Locator (Extent+)> <!ATTLIST Locator superElement NMTOKEN resource ENTITY <!ELEMENT Extent (Descriptor+)> <!ATTLIST Extent superElement NMTOKEN <!ELEMENT ImageExtent (Descriptor+)> <!ATTLIST ImageExtent superElement NMTOKEN #FIXED "Descriptor
#REQUIRED
#FIXED "Descriptor" #FIXED "Extent" <!ELEMENT RectImageExtent (RectImageExtentXO, RectImageExtentYO, RectImageExtentHeight, RectImageExtentWidth)> CFP I594AU IP32-4 I GRP2 489841 I:\ELEC\CISRA\IPR\PRP 2-4 1GRP2]489841 .doc:PWM I 1 1-1/ ifi ~h;iiFj-.~~~~I~Pni.__iliii <!ATTLIST RecthmageExtent superElement NMTOKEN #FIXED "ImageExtent" <!ELEMENT RectlmageExtentXO CDATA)> <!ATTLIST RectlmageExtentXO superElement dataType
NMTOKEN
%DataTypes; #FIXED "Descriptor" #FIXED "Int" <!ELEMENT RectlmageExtentYO (#PCDATA)> <!ATTLIST RectlmageExtentYO superElement dataType
NMTOKEN
%DataTypes; #FIXED "Descriptor" #FIXED "Tnt" <!ELEMENT RecthmageExtentH eight (HP CDATA)> <!ATTLIST RectlmageExtentHeight superElement dataType
NMTOKEN
%DataTypes; #FIXED "Descriptor" #FIXED "it" <!ELEMENT RectlmageExtentWidth (#PCDATA)> <!ATTLIST RectlmnageExtentWidth superElement dataType
NMTOKEN
%DataTypes; #FIXED "Descriptor" #FIXED "Int" <!ELEMENT VideoExtent (VideoExtentStart, VideoExtentEnd, ImnageExtent?)> <!ATTLIST VideoExtent superElement NMTOKEN #FIXED "Extent" CFP1594AUIPR3241-GRP2 489841 CFPI94AU1PR2-41_GR2 488411:\ELEC\CISRA\IPR\IPR32-4 I GRP2]48984 I .doc:PWM U <!ELEMENT VideoExtentStart (#PCDATA)> <!ATTLIST VideoExtentStart superElement NMTOKEN #FIXED "Descriptor" dataType %DataTypes; #FIXED "Int" <!ELEMENT VideoExtentEnd (#PCDATA)> <!ATTLIST VideoExtentEnd superElement NMTOKEN #FIXED "Descriptor" dataType %DataTypes; #FIXED "Int" 4. DesOM API Specification The DesOM interface extends the existing DOM Object Model (DOM) interface 15 specification. The DOM is a platform and language-neutral interface that will allow programs and scripts to dynamically access and update the content, structure and style of XML and HTML documents. It provides a standard set of objects for representing HTML and XML documents, a standard model of how these objects can be combined, and a standard interface for accessing and manipulating them. Vendors can support the DOM 20 as an interface to their proprietary data structures and APIs, and content authors can write to the standard DOM interfaces rather than product-specific APIs, thus increasing interoperability on the Web.
The DOM interface does not stipulate how its associated methods are to be implemented. For, example, the method getElementsByTagName() (Appendix K) must satisfy the DOM interface, but can be implemented in any manner as a developer so choses. The implementation of the methods associated with DesOM and DOM interfaces are not essential to the invention and will not be described further.
The DOM Level 1 Specification is now publicly available; it has been reviewed by W3C Members and other interested parties and has been endorsed by the Director as aW3C Recommendation. For further details on the DOM version 1.0 standard reference is made to the W3C website HTTP://www.w3.org/TR/1998/REC-DOM-level-1- 199810001.
CFP1594AUIPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41 GRP2]489841.doc:PWM ii/ i x L ii;ii~ L. ;)I;I1(L;^_1_:;_jiii~;_iiiir*~i~"~ilir~ As mentioned, the DesOM requires extensions to the DOM. These extensions are 0) in the form of additional interface specifications. These specifications are detailed in this Section using the Object Management Group (OMG) Interface Definition Language (IDL). The specified interface represents a minimal interface for the DesOM.
4.1.1 Interface Descriptor The Descriptor node object in the DesOM is a subclass of the DOM Element node object (see Appendix Like the Element node object, the Descriptor node object represents both the Descriptor element, as well as any contained elements.
IDL Definition interface Descriptor: Element void setHandler(in DescriptorHandler handler); DescriptorHandler getHandler); Nodelterator getSuperElements(); Method setHandlerO Set the DescriptorHandler for this Descriptor node object. This handler can be instantiated on the basis of the handler ENTITY that is specified as the value of the handler attribute for the Descriptor element.
20 Parameters handler The DescriptorHandler to be assigned to this Descriptor node.
Returns void Exceptions This method throws no exceptions.
Method getHandlerO Returns the DescriptorHandler for this Descriptor node object.
Parameters None Returns DescriptorHandler for the Descriptor node object.
Exceptions This method throws no exceptions.
Method getSuperElementsO CFP1594AU IPR32-41-GRP2 489841 I:\ELEC\CISRA\IPR\lPR32-41 GRP2]489841.doc:PWM -44- Returns a list of Descriptor generalisations or superElements for the Descriptor node object.
Parameters None Returns Nodelterator Exceptions This method throws no exceptions.
4.1.2 Interface DescriptorHandler The DescriptorHandler object provides methods for a class of Descriptor nodes. A DescriptorHandler can provide methods for more than one type of Descriptor. For example, a collection of Descriptors might use the same similarity metric.
Preferably, the interface for the DescriptorHandler is fixed. In other embodiments this interface can be specified either for a Descriptor or description scheme.
The methods of a DescriptorHandler are generally implemented as class (static) methods.
IDL Definition 15 interface DescriptorHandler boolean canCreateDescriptorContent(); void createDescriptorContent(Descriptor descriptor, Entity resource); void removeDescriptorContent(Descriptor descriptor); double getSimilarity(Descriptor descriptorl, Descriptor descriptor2); 20 :Method can CreateDescriptorContentO Returns true if the DescriptorHandler contains an implemented method that can create the content for a descriptor..
Returns True if a method has been implemented else returns false.
Method createDescriptorContento Generates the content child nodes) of the specified Descriptor node object using the specified resource.
Parameters descriptor The Descriptor node object for which the content child nodes) is to be created from the resource.
CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-4 GRP2]489841.doc:PWM resource Returns Exceptions The resource, represented as an entity, from which the content is to be derived.
void This method throws a ResourceNotFoundException if the resource could not be found, or a IllegalResourceException if the resource is not compatible with the method.
Method removeDescriptorContentO Removes the content child nodes) of the specified Descriptor node. This method might be invoked to reduce the complexity of a description for storage and would typically only be invoked if the DescriptorHandler was capable of recreating the specified descriptor's content.
Parameters descriptor The Descriptor node object for which the content child nodes) r is to be removed.
void Returns Method getSimilarityO Returns a similarity metric in the range of 1.0] which provides a measure of the similarity between the two specified Descriptor node objects.
Parameters descriptorl The first of the two Descriptor node objects to be compared.
descriptor2 The second of the two Descriptor node objects to be compared.
Returns double Exceptions This method throws an UnmatchedDescriptorException if the two Descriptor node objects are of incompatible types.
4.1.3 Interface AtomicDescriptorValue The AtomicDescriptorNode object is a subclass of the Text (node) object that is specified as part of the DOM [The Text object contains the non-markup content of an Element]. It provides additional methods to the Text object which interpret the string data content of the Text object as other data types it is effectively a typed text node). The data types available are as specified for the dataType attribute of the Descriptor element (see Section It is assumed in this specification that the XML data types IDs, CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM 1 i(il-F i~i i.i-ii---l -li* IDREFs, ENTITY, ENTITIES) would be interpreted from the string value of the AtomicDescriptorValue node.
Dates and times are represented using the date an time formats specified by the profile of ISO 8601. Implementations of the AtomicDescriptorValue object can provide further methods that provide extra date functions getDataAsDateYear), getDataAsDateMontho, etc.).
IDL Definition interface AtomicDescriptorValue Text .0 int getDataAsInt); float getDataAsFloat(); double getDataAsDouble); Date getDataAsDate); Time getDataAsTime(); Method getDataAslnto Returns the value of the Text node as an integer.
Parameters None 0 Returns Integer Exceptions This method throws a DDFDataFormatException if the character string could not be parsed as an integer.
Method getDataAsFloatO Returns the value of the Text node as a float value.
Parameters None Returns Float Exceptions This method throws a DDFDataFormatException if the character string could not be parsed as a float value.
Method getDataAsDoubleO Returns the value of the Text node as a double value CFP]594AUIPR3241 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM Parameters None O Returns Double Exceptions This method throws a DDFDataFormatException if the character string could not be parsed as a double value.
Method getDataAsDateo Returns the value of the Text node as an ISO 8601 date.
Parameters None Returns ISO 8601 date Exceptions This method throws a DDFDataFormatException if the character string could not be parsed as an ISO 8601 date.
Method getDataAsTimeO Returns the value of the Text node as an ISO 8601 time.
Parameters None 15 Returns ISO 8601 time Exceptions This method throws a DDFDataFormatException if the character string could not be parsed as an ISO 8601 time.
Example of a Description Scheme An example of a description scheme expressed in DDF is contained in Appendix B.
20 The description scheme aims to provide a description for digital video footage of an Australian Football League (AFL) game. This description scheme makes use of some core element definitions that are contained in Appendix A. The Core.ddfis declared as an internal parameter entity Bl and then included in the description scheme using the operator (see B2). The indicated lines B1 and B2 of the description scheme result in all the element definitions included in Appendix A being available to the example description scheme.
In the definition of the descriptor AFLGameDescription B3 a descriptor handler B4 is specified. In this example, the descriptor handler is implemented as a Java class (AFLGameGen.class in the example contained in Appendix B) having a predetermined procedural method which automatically generates the (description) content for the AFLGameDescription descriptor by analysing the digital video signal containing the footage of the game being described.
CFP1594AU [PR3241 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM It should be noted that although the AFLGameDescription element is defined as a specialisation of a Description element, a Description element is just a specialisation of a Descriptor element, and so the AFLGameDescription can also be treated as a Descriptor.
An example description generated from the description scheme contained in Appendix B is shown in Appendix C. This example description would typically have been initially generated by the descriptor handler for the AFLGameDescription descriptor, however manual creation is also possible if an annotator so desires. The procedural method to generate the content for the descriptor AFLGameDescription would typically analyse the digital video resource signal containing the footage of the game to be described, identify the start and end of the four quarters of play, and within each quarter track and, if possible, identify individual tracked players. The tracking could be achieved using motion analysis of the digital video resource with player identification being 00:: achieved by attempting to recognise a player's number from his/her jersey. It is not an 0 object of this invention to specify a method for generating the content of the description.
i 15 Clearly it is unlikely that all the information required for the description, as specified by the description scheme, could be automatically generated from an analysis of the digital video resource signal. Where information is not available date and location of the game), the content generation method can either generate empty descriptors or simply omit the descriptors from the description. At a later date an 20 annotator can add this information manually if it is required. Similarly, it might be too difficult for an automatic analysis to classify the action of each tracked player. For example, it might be difficult to automatically analyse whether the player was involved in a mark, a kick or a tackle. This information could also be provided at a later date. In fact, an annotator could use a Digital Video Browser System, as described in Section 13, to browse the digital video resource and annotate as required. On completion of annotation the Digital Video Browser System could also be used to select to play all those sections of the digital video resource in which a particular player was involved, or all those sections in which a mark occurred. In other words, the Digital Video Browser System could be used to complete any annotation tasks and browse the described digital video resource.
Another example of a method to create the content for a descriptor, is one where the resource to be described has already been described using another description scheme.
For example, a digital video camera might generate a description (using, for example, a Video Capture Description Scheme) for a digital video resource as it is being captured.
CFPI594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41 GRP21489841.doc:PWM -r ~ri ;~ii~il~TIT:~C~-Y~li -iXia The automatically generated description might contain information such as exposure, O focus, eye-gaze location, shot boundaries, etc. It might be desirable to maintain some, if not all, of the information automatically recorded using the source description scheme, however it might be preferable to describe the digital video resource using another more generally accepted description scheme, in this case the destination description scheme. In this case the descriptor handler(s) in the destination description scheme could provide a mapping of descriptors from the source to destination descriptions. This mapping would typically be provided in the content creation method of descriptor handler for the Description element of the destination description scheme. This transformation from one description scheme to another could also be achieved by applying rules to the DesOM (see Section 7).
S6. Methods of Applying Procedures.
6.1 Method of Generating Descriptions of Electronically-Accessible Resources :o Turning now to Fig. 7A, there is a shown a method of generating descriptions of an :i 15 electronically accessible resource. The method commences at step 700A and continues at step 702A where a description scheme is read by a processor the description generator). In the next step 704A, a processor identifies the one or more DescriptorHandlers in the description scheme and afterwards the method continues to step 706A. In step 706A, the processor identifies the procedures corresponding to the S 20 previously identified DescriptorHandlers. These procedures are in the form of procedural code contained in the DescriptorHandlers. In the next step 708A the procedures are applied to the resource. The procedure generates a representative value which is associated with an attribute feature) of the resource. The method then outputs at step 710A the results of the application of the procedures. The method terminates at step 712A. Preferably, these procedures result in the automatic generation of a description of the resource in the form of a DesOM which may be subsequently serialised as a XML document. However other procedures or processes may be envisaged. Further this resultant description is preferably interpretable by both humans and machines.
6.2 Methods of Applying Procedures to a Description Turning now to Fig. 7B, there is shown a flow diagram of a method of applying procedures to description(s) of resource(s). The method commences at step 700B and continues at step 702B where a description is parsed by a DDF processor. In the next step 704B, the DDF processor identifies within the associated description scheme one or more CFPI594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41 GRP2]48984 I.doc:PWM DescriptorHandlers. In the next step 706B of the method, the DFF processor identifies the one or more procedures associated with the previously identified DescriptorHandlers.
These procedures are in the form of procedural code contained in the DescriptorHandlers.
In the next step 708B the procedures are applied to the DesOM corresponding to the description. The method then outputs at step 710B the results of the application of the procedures. The method terminates at step 712B. The method envisages many different types of procedures that can used in the method. In one embodiment, the method computes the similarity between two descriptors of the same type. In this embodiment, the descriptions are parsed by the DDF processor and a common descriptor definition is identified by the processor. The DDF processor then identifies within the description scheme containing the common descriptor definition an associated DescriptorHandler •which contains procedural code for computing similarity between two descriptors. The method then applies the procedural code to the DesOMs associated with the descriptions i- and determines the similarity of the descriptors and hence the similarity of the two 15 resources. The method then outputs the results of the similarity computation. This embodiment has particular application in searching/querying descriptions of resources. In another embodiment, the procedural code of the method can encode and/or decode one or more descriptor components of the description of a resource. This embodiment has particular application for efficient and/or secure transport or storage of descriptor components of descriptions of resources.
6.3 Examples of Methods of Generating Descriptions and Applying Procedures to Descriptions The method of generating descriptions and applying procedures standardises the way descriptors and description schemes are defined. These descriptors and description schemes can be used to describe various types of multimedia information. Using the descriptors and description schemes, descriptions that allow fast and efficient searching can be created and associated with multimedia content. The preferred embodiment provides for automatic extraction of descriptors. However, in general, this is only possible for low-level features. Features that represent higher level of abstraction usually have to be set manually or, at least, semi-automatically.
The method also provides a standard mechanism for associating descriptors with procedural code that generate them, which can greatly facilitate the deployment of the description schemes. For instance, such association will allow the development of very CFP 1594AU IPR32-4 I_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841 .doc:PWM -51general applications such as a multimedia database server, that make use of the procedural 0 code to generate descriptors for new descriptions or for comparing descriptions. Apart from procedures for generating descriptors, procedures for validating descriptors, computing the similarity between two descriptors of the same type as well as encoding and decoding descriptors can be made available to applications through a standard interface.
For example, the following exemplary applications are possible utilising such procedural code. One can whistle a melody to find a song, play a few notes on a keyboard and get in return a list of musical pieces, draw a few lines on a screen and get in return a set of images containing similar "graphics, logos, ideograms, g define objects, including colour patches or textures and get in return examples, using an excerpt of Pavarotti's voice, and getting a list of Pavarotti's record, etc.
The above scenarios involve the user providing some example content with his/her query. Standardisation of description schemes (in addition to a language for the exposure °ofDSS) would facilitate querying over multiple remote multimedia databases.
There a number of problems relating to the standardisation of descriptors and description schemes. For example, there is a problem with even the relatively simple colour histogram descriptor. Even if two description providers use the same colour histogram descriptor, they might use it in a different manner such as using different quantisation. This will mean that histogram bin i may mean different things to different description providers. When one uses the histogram of an image as an example to search multiple image databases, one either has to compare and/or convert between different histograms. Both of these alternatives are difficult to achieve and error prone. It is also not practical to have every database server to re-compute the histogram of its images in the same way the example histogram was generated.
The inventors propose two possible approaches for standardisation of description schemes: 1. Standardise completely, to the last detail, the colour space, the colour quantisation (bins) for colour histograms, and consequently the matrix of cross-bin similarities.
2. Use the image itself as the example. Then, each database uses its own extraction CFP1594AUIPR32-41 GRP2 489841 I:\ELEC\CIS RA\IPRlPR32-41 I GRP2]489841 .doc:PWM method to compute the histogram of the received query image and then compare the histogram with the rest of its database.
The first option is not very practical, as people will never agree on every detail of the histogram's specification. Option 2 is more practical and cleaner, for histograms and most other image queries. Option 2 means that each database can use its own particular parameters for descriptors and also its own methods of computing similarity between descriptions.
Only a low-level descriptor (the colour histogram) that could be generated from the content was considered as an example in these approaches. In practice, the query might also contain a textual description or keywords input by the user, which can be mapped to some high-level descriptors such as the photographer's name, the caption of the image, etc.
As described previously, a base Descriptor class from which all descriptors and descriptions are to be based is defined, and a description is treated as a compound descriptor. The base Descriptor class includes an attribute that allows the URI of a handler that implements the descriptor's procedures to be stored. It provides a standard mechanism for associating descriptors with procedural code. The handler is called a descriptor handler. A standard API (application program interface) for the descriptor handler is based on a DescriptorHandler class. The DescriptorHandler provides methods for generating the content (or value) of the descriptor, createDescriptorContent(), and computing similarity between two descriptors of the same type, getSimilarity() (see Section 4).
An alternative embodiment of the DesOM interface is now described below. The detailed definitions of the DesOM interface of the DescriptorHandler class can be found in Code Definitions A. In short, the DesOM interface of the DescriptorHandler class specifies the following methods: ParameterList getParameterList(in string methodName) for getting a list of parameters that are relevant to the specified method.
String getParameter(in string parameterName) for getting the parameter value.
Void setParameter(in string parameterName, in string parameterValue) for setting the value of a parameter.
Descriptor createDescriptorContent(URI resource) for creating the descriptor for the resource.
Double getSimilarity(in Descriptor descriptor1Object, in Descriptor descriptor2Object) for computing the similarity between two descriptors of the same type.
Boolean validate(in Descriptor descriptorObject) for validating the content of the descriptor.
ByteArray encode(in Descriptor descriptorObject) for encoding the descriptor for transmission or archive.
Descriptor decode(in ByteArray encodedString) for decoding an encoded descriptor.
In addition, the Descriptor class provides the following methods: Descriptor parseDescriptorString(in string XMLString) for parsing an XML formatted string into a descriptor object. (Note that, in this case, the parsing only checks for well-formedness).
String getXML() for returning the XML serialisation of the descriptor including its start and end tags.
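By way of illustration only, the following Java sketch shows how a concrete handler for a colour histogram descriptor might realise the createDescriptorContent() and getSimilarity() behaviour described above. The class name RgbHistogramHandler, the use of a plain int[] in place of a Descriptor object and the particular similarity measure are assumptions made for brevity; they are not part of the interface defined in this specification.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class RgbHistogramHandler {

    private int binSize = 32;                       // a parameter of the descriptor

    public void setParameter(String name, String value) {
        if ("binSize".equals(name)) {
            binSize = Integer.parseInt(value);
        }
    }

    public String getParameter(String name) {
        return "binSize".equals(name) ? Integer.toString(binSize) : null;
    }

    // Analogue of createDescriptorContent(): extracts the descriptor value
    // (one bin per quantised r/g/b combination) from the resource itself.
    public int[] createDescriptorContent(File imageFile) throws IOException {
        BufferedImage img = ImageIO.read(imageFile);
        int bins = 256 / binSize;
        int[] histogram = new int[bins * bins * bins];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) / binSize;
                int g = ((rgb >> 8) & 0xFF) / binSize;
                int b = (rgb & 0xFF) / binSize;
                histogram[(r * bins + g) * bins + b]++;
            }
        }
        return histogram;
    }

    // Analogue of getSimilarity(): a normalised L1 measure between histograms of
    // the same length. A real handler could expose several metrics via
    // getSimilarityMetrics()/setSimilarityMetric().
    public double getSimilarity(int[] h1, int[] h2) {
        long total = 0, diff = 0;
        for (int i = 0; i < h1.length; i++) {
            total += h1[i] + h2[i];
            diff += Math.abs(h1[i] - h2[i]);
        }
        return total == 0 ? 1.0 : 1.0 - (double) diff / total;
    }
}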
Different descriptor handlers can be implemented (usually by different developers) for any descriptor, with different trade-offs among performance, functionality and complexity. However, all descriptor handlers of a descriptor must comply with the definition of the descriptor. That is, the descriptor handlers can only generate descriptor values that conform to the definition of the descriptor. At the same time, the descriptor handler can assume that any input descriptor will conform to the definition of the descriptor.
A descriptor designer may assign a default descriptor handler to a descriptor.
However, a user of the descriptor is free to choose another handler or none at all.
Not all descriptors are required to have a descriptor handler. Indeed, many descriptors of a higher level of abstraction are expected to be handler-less. Nevertheless, even low-level descriptors may not have a handler. For instance, while a handler may exist for a histogram descriptor, we don't expect a handler would be required for a descriptor that holds the creation date of a document.
Even if a descriptor has a descriptor handler, a description is not required to use or reference the handler. In addition, different instances of the same descriptor class may refer to different descriptor handlers. For instance, due to the different characteristics of different classes of images, for each class of image a different handler with a more efficient segmentation algorithm is used for creating its region descriptors. Moreover, applications are not restricted or required to use the descriptor handler referred to by a descriptor instance.
At the same time, not all descriptor handlers (which are subclasses of the DescriptorHandler class) will override the default implementation of all the methods of the base DescriptorHandler class, that is, provide support for all methods of the base class. For instance, a validation method may be implemented to check that an ISBN has the right format; however, no method is implemented to generate the ISBN. Another example is that while a descriptor handler may support the getSimilarity() method of a certain descriptor for a non-electronically accessible resource, it would not support the corresponding createDescriptorContent() method.
Having the description (or the default handlers in the description schemes) pointing to the relevant descriptor handlers, and a standard interface for the descriptor handler, makes it possible to build very general applications. For instance, the database server application in the above-mentioned option 2 does not need to have a predefined set of procedures linked in. Indeed, as is explained below, all the description providers in the above option 2 can use the same database server application despite the different sets of optional descriptors and the different descriptor parameters they use.
Code Definitions A
IDL definitions of the DescriptorHandler interface:

// File: DescHdlr.idl
// Descriptor Handler IDL
#ifndef _DescHdlr_idl_
#define _DescHdlr_idl_
#pragma prefix "canon.com"

#include <Descriptor.idl>
#include <URI.idl>

module DescriptorHandler {

    interface DescriptorHandler;

    typedef sequence<octet>  ByteArray;
    typedef sequence<string> ParameterList;
    typedef sequence<string> MetricList;

    enum ExceptionType {
        INVALID_METHOD_NAME,
        METHOD_NOT_SUPPORTED,
        INVALID_PARAMETER_VALUE,
        INVALID_PARAMETER_NAME,
        XML_NOT_WELL_FORMED,
        NO_ACCESS_PRIVILEGE,
        RESOURCE_UNAVAILABLE,
        RESOURCE_NOT_FOUND,
        FORMAT_NOT_SUPPORTED,
        READ_ERROR,
        WRITE_ERROR,
        INVALID_DESCRIPTOR_CLASS,
        INVALID_DESCRIPTOR_ATTRIBUTE,
        INVALID_DESCRIPTOR_CONTENT,
        INVALID_METRIC,
        INVALID_ENCODED_STRING,
        OUT_OF_MEMORY,
        UNKNOWN_EXCEPTION
    };

    // Exceptions
    exception DescriptorHandlerException {
        ExceptionType error;
        wstring description;
    };

    interface DescriptorHandler {

        // Get the list of parameters used by the descriptor.
        ParameterList getParameterList()
            raises (DescriptorHandlerException);

        // Get/set the specified parameters.
        string getParameter(in string parameterName)
            raises (DescriptorHandlerException);
        void setParameter(in string parameterName, in string parameterValue)
            raises (DescriptorHandlerException);

        // The method creates the content of the descriptor using the specified
        // resource, based on the current value of the parameter attributes.
        // If not supported, a METHOD_NOT_SUPPORTED exception is raised.
        Descriptor createDescriptorContent(in URI resource)
            raises (DescriptorHandlerException);

        // The method computes the similarity between the two descriptors passed
        // in. The similarity measure computed is returned. If not supported,
        // a METHOD_NOT_SUPPORTED exception is raised.
        double getSimilarity(in Descriptor descriptor1, in Descriptor descriptor2)
            raises (DescriptorHandlerException);

        // Get the list of metrics supported for computing similarity.
        // The first in the list is the default metric used.
        MetricList getSimilarityMetrics()
            raises (DescriptorHandlerException);

        // Set the metric to be used for computing similarity.
        string setSimilarityMetric(in string metricName)
            raises (DescriptorHandlerException);

        // Validate the content of the descriptor. If the content is valid,
        // return true; otherwise, return false. The default implementation
        // always returns true.
        boolean validate(in Descriptor descriptor)
            raises (DescriptorHandlerException);

        // Encode (compress) the descriptor for transmission or for archiving.
        // If not supported, a METHOD_NOT_SUPPORTED exception is raised.
        ByteArray encode(in Descriptor descriptor)
            raises (DescriptorHandlerException);

        // Decode (decompress) the encoded descriptor. If not supported,
        // a METHOD_NOT_SUPPORTED exception is raised.
        Descriptor decode(in ByteArray encodedStr)
            raises (DescriptorHandlerException);
    };
};

#endif // _DescHdlr_idl_

The data (value) of a descriptor (or feature) usually depends on some parameters.
For instance, the data of the colour histogram will depend on the colour space used and the quantisation. These parameters, in general, are also used by the corresponding descriptor handler if one exists. Methods are provided in the DescriptorHandler interface for obtaining a list of relevant parameters and setting the value of the parameters. Note that the settings of the parameters control the characteristics of a descriptor instance but are not related to the actual content the descriptor instance describes.
In the light of the fact that the DDF is XML based, the parameters are specified as XML attributes and the data (value) describing the resource (content) should be part of the content model. For instance, an instance of the colour histogram descriptor may look like the following. In the rgbHistogram, each bin (marked by the <frequency> tags pair) stores the number of pixels whose value is between (r, g, b) and (r+binSize-1, g+binSize-1, b+binSize-1) inclusive.

<rgbHistogram binSize="32">
    <frequency r="0" g="0" b="0">14009</frequency>
    <frequency r="32" g="32" b="32">21015</frequency>
    <frequency r="224" g="224" b="224">12434</frequency>
</rgbHistogram>

The bin size, which is a parameter of the histogram, and the starting rgb value of each bin, which are parameters of the (frequency) bin, are specified as XML attributes. In contrast, the bin frequency, which describes the number of occurrences of a range of rgb values in the content, appears as the value of the content model. Nevertheless, the principle of using XML attributes for descriptor parameters can only be treated as a guideline for good descriptor design and cannot be verified by a DDF processor.
As is evident from the interface defined for the DescriptorHandler class, a descriptor handler may be used for the automatic creation of low-level descriptions, generating example descriptors for searching databases, computing similarity between descriptors of the same class, validating descriptor content, and encoding and decoding descriptor content.
Many low-level descriptors can be and, indeed, are expected to be extracted from the content automatically. It is even expected that some low-level descriptors can be created in real time as the content is being captured. For instance, during the recording of a video or in subsequent processing, a descriptor handler for some generic video segment description scheme can use the metadata provided by the video camera to segment the video temporally into clips (segments) and generate a description describing the structure of the video. Note that descriptor handlers of a non-standardised description scheme could also be used. For example, Fig. 8 shows a video processing application 808 generating a video segment description 802 utilising video and camera metadata 804. The video processing application 808 uses a video segment descriptor handler 800 from a standard library 806 in generating the description 802. As can be seen, the description 802 refers to the descriptor handler 800.
When generating the description, the processing application calls the following methods of the descriptor handler: the getParameterList() method to get the list of relevant parameters, the setParameter() method to set the parameters required, and the createDescriptorContent() method to generate the descriptor.
If required, the application may call the getXML() method of the descriptor to get the XML serialisation of the descriptor node. It is expected that other structural components of the video that are difficult to extract automatically, and higher level descriptors that describe the semantics of the structural components, would be added to the description later with the aid of interactive tools.
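As an informal illustration of this call sequence (not part of the specification), the following self-contained Java sketch shows how a processing application such as the one in Fig. 8 might drive a handler. The local Descriptor and DescriptorHandler interfaces and the parameter name "minShotLength" are assumed stand-ins for the framework types described above.

import java.util.List;

// Minimal local stand-ins so the sketch compiles on its own; the real types
// are defined by the DDF framework.
interface Descriptor { String getXML(); }

interface DescriptorHandler {
    List<String> getParameterList() throws Exception;
    String getParameter(String name) throws Exception;
    void setParameter(String name, String value) throws Exception;
    Descriptor createDescriptorContent(String resourceUri) throws Exception;
}

public class DescriptionGenerator {

    public static String generateDescription(DescriptorHandler handler, String resourceUri)
            throws Exception {
        // 1. Discover the parameters the handler understands.
        for (String name : handler.getParameterList()) {
            System.out.println(name + " = " + handler.getParameter(name));
        }
        // 2. Configure the handler before extraction (parameter name assumed).
        handler.setParameter("minShotLength", "25");
        // 3. Extract the descriptor from the resource itself.
        Descriptor videoSegments = handler.createDescriptorContent(resourceUri);
        // 4. Serialise the new sub-tree for insertion into the description.
        return videoSegments.getXML();
    }
}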
The descriptor handler approach also allows developers to develop different extraction algorithms for generating descriptions for (low-level) features, and market or distribute them as some sort of "plug-ins".
Individual database servers can use the same set of descriptor handlers referred to by the descriptions they store to generate similar or compatible descriptors for any example object specified in the query. A database server can then use the descriptor handlers' getSimilarity() method to compare the descriptors of the example object with those of the stored descriptions. For instance, in the above option 2, the client can send an example image with its query to multiple remote image databases. Each database will then generate a histogram descriptor of the image using the descriptor handler referred to by the descriptions of its images and the same parameter settings used by the descriptions of its images.
For example, Fig. 9 shows how descriptor handlers can be used to support query-by-example searches over multiple remote image databases. A client 900 sends an example image 902 with its query to description/content providers A to Z. Each description/content provider A to Z comprises an image database 904 for storing images, a description database 906 for storing colour histograms of the stored images, and a database search engine 908. The description/content provider A, upon receipt of the query, generates
910A a corresponding histogram descriptor 911A utilising the colour histogram handler 912A referred to by the image colour histograms 914A stored in its description database 906. The description/content providers B to Z generate corresponding histogram descriptors 911B,...,911Z in a similar manner. Namely, each provider generates a histogram descriptor of the example image using the descriptor handler referred to by the descriptions of its images and the same parameter settings used by its descriptions. The provider A then computes the similarity of the example histogram 911A with the image colour histograms 914A stored in its description database 906. The providers B to Z compute the similarity of the example histograms 911B,...911Z with the corresponding image colour histograms 914B,...,914Z in a similar manner. Those images and/or descriptions having a similar colour histogram are then retrieved from the databases 904, 906 and are transmitted by the providers as query results to the client 900. In this way, each provider may use different procedures for generating colour histograms, but at the same time provide consistent query results. The descriptor handler approach also allows a single database to use histograms with different parameters for the different classes of images it stores.
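A hedged Java sketch of this provider-side behaviour is given below. The stand-in types, the storage model (a map from image URI to the stored histogram descriptor) and the 0.8 similarity threshold are illustrative assumptions only and are not part of the system of Fig. 9.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal stand-ins so the sketch is self-contained; real types come from the
// DDF framework and the provider's handler library.
interface Descriptor { }

interface DescriptorHandler {
    Descriptor createDescriptorContent(String resourceUri) throws Exception;
    double getSimilarity(Descriptor d1, Descriptor d2) throws Exception;
}

public class QueryByExampleServer {

    // Descriptions stored by this provider, keyed by the URI of the image they
    // describe (assumed storage model).
    private final Map<String, Descriptor> storedDescriptions;
    private final DescriptorHandler histogramHandler;

    public QueryByExampleServer(Map<String, Descriptor> stored, DescriptorHandler handler) {
        this.storedDescriptions = stored;
        this.histogramHandler = handler;
    }

    // Returns the URIs of images whose stored histogram is sufficiently similar
    // to the histogram of the example image (threshold assumed).
    public List<String> query(String exampleImageUri) throws Exception {
        Descriptor example = histogramHandler.createDescriptorContent(exampleImageUri);
        List<String> results = new ArrayList<>();
        for (Map.Entry<String, Descriptor> entry : storedDescriptions.entrySet()) {
            if (histogramHandler.getSimilarity(example, entry.getValue()) > 0.8) {
                results.add(entry.getKey());
            }
        }
        return results;
    }
}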
Descriptor handlers also provide a flexible mechanism for computing similarities between two descriptors of the same class. The simple interface of the getSimilarity() method hides the complexity in computing the similarity between two descriptors. It allows the use of an appropriate algorithm and similarity metric for each class of descriptor and takes into account the different parameters the descriptors used (such as the different bin sizes used by two rgb histograms).
The descriptor handler also provides a way of validating descriptor content. It is possible in the serialisation syntax of the DDF (or an equivalent description definition language) to support the declaration of constraints. However, such a declarative approach is only possible for simple constraints such as maximum value, minimum value, etc. Alternatively, the serialisation syntax can support the use of an object model such as the DOM and a scripting language such as ECMAScript for specifying complex constraints.
However, procedural code is generally a more efficient way for validating complex constraints.
Descriptor handlers also allow a more flexible approach for encoding and decoding descriptions, or particular descriptors in descriptions. Instead of using a single encoding/decoding algorithm for the entire description, more efficient encoding/decoding mechanisms can be developed for individual descriptors that make use of the characteristics of the individual descriptor. These mechanisms could be made available through the encode()/decode() methods of descriptor handlers. The encoding/decoding procedures of any standardised descriptors and description schemes can be made available as methods of some descriptor handler library.
Fig. 10 shows an example of how descriptor handlers might be used for encoding/decoding standardised descriptors. The processing applications 1004, 1006 of the description consumer A and description provider B make use of methods of the descriptor handlers 1002 of a standard library 1000 to decode 1010 and, in the case of the provider, encode 1008 the descriptor instance 1012.
The basic methods of the descriptor handler can be divided into two types: one that requires the resource (the content to be described) to be accessible, and the other that doesn't. Only createDescriptorContent() belongs to the first type and requires the content, as well as some packages for processing the content, to be available. The other methods, such as getSimilarity(), validate(), encode() and decode(), only operate on descriptor instances and do not require the use of a special multimedia (handler) library.
In addition to it being inefficient to upload content to a remote site for description generation, security and privacy issues with regard to the content exist. Therefore, it is expected most descriptions will be generated locally on sites where the content is located.
Even in the case depicted in section 3.2, each database server uses a local descriptor handler to generate descriptors for the query example. In the case where content is downloaded for description generation, the application will still be using a local descriptor handler. Hence, as far as description generation is concerned, any descriptor handler used would be a local one.
Java presents an ideal object-oriented language for implementing descriptor handlers because of its cross-platform properties, its growing support on a large variety of devices of various sizes and its close tie to the Web, through which most descriptions are expected to be delivered. Some concern has been expressed over the issues that Java applications are not as efficient as other compiled code and that most existing feature extraction algorithms are not implemented in Java. The advent of Just-In-Time (JIT) compilers has greatly improved the performance of Java applications and applets. In addition, descriptions are likely to be generated using local descriptor handlers. That is, the createDescriptorContent() method is typically invoked locally. Hence, it is free to use any locally installed multimedia library, including non-Java libraries through the Java Native Interface. As for the other methods of the descriptor handlers, they deal with descriptions and not content. They are usually not as processing-intensive as the createDescriptorContent() method. Standard Java packages are usually sufficient for their purposes. Hence, they can have a pure Java implementation which remote sites can download for execution. Signed applets can be used to lift the severe constraints that are typically imposed on standard applets. For instance, an appropriately signed descriptor handler may be allowed to write to and read from a specific local directory.
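The following fragment is only an assumed illustration of the Java Native Interface arrangement mentioned above; the library name "featureextract" and the native method signature are hypothetical and not part of the specification.

public class NativeHistogramExtractor {

    static {
        // Loads libfeatureextract.so / featureextract.dll on the local machine.
        System.loadLibrary("featureextract");
    }

    // Implemented in existing non-Java code and registered through JNI; it can
    // only be invoked locally, on the machine where the native library and the
    // content are available.
    public native int[] extractRgbHistogram(String localImagePath, int binSize);
}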
Fig. 11 shows an example of descriptor handlers implemented as Java applets. The content/description provider B comprises a content database 1102, a description database 1104, a description generator 1112 and a description server 1110. Descriptor handlers 1116 of standardised descriptors are included in a standard library 1106, while those of non-standardised descriptors 1114 are available in other descriptor handler libraries 1108. The descriptor handlers of both libraries 1108 and 1116 are implemented in Java.
The descriptions stored on the description database 1104 can be generated by invoking the standard and/or non-standard createDescriptorContent() methods from the other descriptor handler libraries 1108 (non-standard) and/or the standard library 1106. The description server 1110 retrieves requested descriptions from the description database 1104 and transmits them to the client A. Before transmitting the descriptions, the description server 1110 may invoke encoding methods from the standard and/or non-standard libraries 1106, 1108.
The content consumer A comprises a processing application 1150 and a standard library 1152. The processing application 1150 receives the encoded description, which is decoded by invoking a decoding method from the standard library 1152. The decoded description forms a descriptor instance. Non-standard descriptor handler applets 1114 could be downloaded from the provider B, if required, and all their methods could be executed on the client machine A. In particular, a non-standard decoding descriptor handler can be downloaded 1154 to form a descriptor handler instance 1156. The processing application 1150 then invokes the non-standard decoding descriptor handler 1156 to decode 1160 the encoded description 1199 to produce the descriptor instance 1158.
The content/description provider B can use a Java Native Interface 1120 as part of the Java packages, together with non-Java libraries 1122 of descriptor handlers. For instance, the createDescriptorContent() method may be implemented in non-Java code and thus can only be invoked locally on the server machine B through the Java Native Interface 1120.
In summary, descriptor handlers could be implemented as applets. Descriptor handlers of the standardised descriptors can be provided as part of a standard library (together with the definitions of the standardised descriptors and description schemes).
However, users are free to use any valid descriptor handlers. Descriptor handlers for non-standardised descriptors would be available separately in other libraries. It is proposed that descriptor handler applets are properly signed, and all methods of the DescriptorHandler interface except possibly createDescriptorContent() are expected to be downloadable to remote sites for execution. The createDescriptorContent() method may require special libraries or native libraries and would cause an exception when not invoked locally.
7. Rule-based Processing using the DesOM
The internal memory structure of a description (ie, the DesOM) provides a convenient structure on which to perform further processing of a description (or indeed the relevant description scheme). This further processing can be achieved by locating patterns of nodes in the DesOM and performing specified actions in response to the located patterns. Each pattern-action association can be represented by a rule and a set of related rules can be collected into a rule set.
Rules can be used to automatically create further descriptors based on existing descriptors (see Section 8. Method of Extending Descriptions of Resources), to provide presentation properties for descriptions and description schemes (see Section 9.
Method of Presenting Descriptions of Resources), and to represent queries (see Section 10. Method of Selecting Resource Descriptions). Rules can also be used to translate a description to the language of the query (see Section 11. Method of Translating a Description of a Resource). The Digital Video Browser System described in Section 13 uses a method for formulating rules common to each of these functions. This method is described below.
Each rule consists of a pattern (of nodes in the DesOM) and an associated one or more actions. For each of the different functions (inference, equivalence, presentation and selection), a different set of actions is often applicable. However, each of these functions can be enabled using a common rule grammar, which will be described in this section. The rule grammar can be defined in an XML DTD. The rules for the different functions can simply use the common rule grammar (this is the case for the Digital Video Browser System), or alternatively the allowable actions can be controlled by defining different DTDs for each of the different functions (eg, an InferenceRules.dtd, a PresentationRules.dtd, etc.).
Rules can be represented as, or in a manner similar to, Extensible Style Language (XSL) rules. In the Digital Video Browser System (see Section 13), we have used the following basic rule grammar.
Rules.dtd

<!ELEMENT Rule (Action+)>
<!ATTLIST Rule
    target          (Element | ElementDef)     "Element"
    pattern         CDATA                      #REQUIRED>

<!ELEMENT Action (AddAttribute | RemoveAttribute | AddElement | RemoveElement |
                  AddAttributeDef | RemoveAttributeDef | Select)>

<!ELEMENT AddAttribute EMPTY>
<!ATTLIST AddAttribute
    attName         CDATA                      #REQUIRED
    attValue        CDATA                      #REQUIRED>

<!ELEMENT RemoveAttribute EMPTY>
<!ATTLIST RemoveAttribute
    attName         CDATA                      #REQUIRED>

<!ELEMENT AddElement (#PCDATA)>
<!ATTLIST AddElement
    position        (SiblingBefore | SiblingAfter | AsFirstChild | AsLastChild)  #REQUIRED>

<!ELEMENT RemoveElement EMPTY>

<!ELEMENT AddAttributeDef EMPTY>
<!ATTLIST AddAttributeDef
    attName         CDATA                      #REQUIRED
    attType         CDATA                      #REQUIRED
    attDefault      CDATA                      #REQUIRED>

<!ELEMENT RemoveAttributeDef EMPTY>
<!ATTLIST RemoveAttributeDef
    attName         CDATA                      #REQUIRED>

<!ELEMENT Select EMPTY>
<!ATTLIST Select
    attName         CDATA                      "selected"
    attValue        CDATA                      "YES"
    selectAncestors (YES | NO)                 "YES">
Each Rule element has a target attribute that has a default value of "Element" and a character string pattern attribute. The target attribute refers to the target of the defined Rule. Typically inference, equivalence and search rules are targeted at elements because the action of the rule results in either a new descriptor in the description or the selection of a descriptor for a query. Presentation rules, however are typically targeted at element definitions as their associated actions specify how a particular descriptor type is to be presented in an application. A set of rules can be serialised in an XML document. This is typically the case with inference, equivalence and presentation rules, but may not be required for selection rules which may often be processed on a single rule basis.
The role of the pattern character data string is to identify the particular elements (or element definitions) to which the action is applied. This character string can identify more than one element and can include element ancestry and attribute qualifiers.
Preferably, the pattern string is parsed according to the following Extended Backus-Naur Form (EBNF) notation.
Pattern          ::= ElementPatterns (ConnectorOp ElementPatterns)*
ElementPatterns  ::= ElementPattern (AncestryOp ElementPattern)*
ConnectorOp      ::= '|' | ','
AncestryOp       ::= '/' | '//'

Each pattern can consist of one or more alternative patterns ('|' represents an alternative) or must satisfy more than one ElementPattern (',' connector operation).
Element ancestry is represented within a pattern by using the parent operator '/'. Two patterns separated by a parent operator match an element if the right hand side matches the element and the left hand side matches the parent of the element. For example, Shot elements that have a Scene element as a parent and a VideoClipDescription element as a grandparent match the following Rule's pattern:

<Rule pattern="VideoClipDescription/Scene/Shot">
    <Action> etc...</Action>
</Rule>

Two patterns separated by the ancestry operator '//' match an element if the right-hand side matches the element and the element has at least one ancestor that the left-hand side matches. So, for example, any Shot elements that have a VideoClipDescription as an ancestor element will match the following Rule's pattern:

<Rule pattern="VideoClipDescription//Shot">
    <Action> etc...</Action>
</Rule>
ElementPattern        ::= ElementTypePattern ElementQualification?
ElementTypePattern    ::= OneElementTypePattern
OneElementTypePattern ::= ElementTypeName
ElementQualification  ::= '[' Qualifiers? ']'
Qualifiers            ::= Qualifier (',' Qualifier)*
Qualifier             ::= ChildQualifier | AttributeQualifier | PositionalQualifier
AttributeQualifier    ::= AttributePattern ('=' AttributeValue)?
AttributePattern      ::= 'attribute' '(' AttributeName ')'
PositionalQualifier   ::= Position '(' ')'
Position              ::= 'FirstOfType' | 'NotFirstOfType' | 'FirstOfAny' | 'NotFirstOfAny' |
                          'LastOfType' | 'NotLastOfType' | 'LastOfAny' | 'NotLastOfAny' |
                          'OnlyOfType' | 'NotOnlyOfType' | 'OnlyOfAny' | 'NotOnlyOfAny'

An element within the pattern hierarchy may have qualifiers applied to it, which further constrain which elements match the term. These qualifiers may constrain the element to have certain attributes or sub-elements, or may constrain its position with respect to its siblings. The qualifiers are specified in square brackets following the ElementTypeName (which is its tag name defined in the DTD). A pattern matches only if all of the qualifiers are satisfied.
For example, any Shot elements that have a child element KeyFrame will match the following Rule's pattern:

<Rule pattern="Shot[KeyFrame]">
    <Action> etc...</Action>
</Rule>

Attributes on the target element or any of its ancestor elements can also be used to determine whether a particular rule applies to an element. An attribute qualifier can constrain an element to have either a specific attribute with a specific value, or to have a specific attribute with any value. For example, the following pattern matches a Bin descriptor which has as its parent a Histogram descriptor which has an attribute noBins with a value of '100':

<Rule pattern="Histogram[attribute(noBins)='100']/Bin">
    <Action> etc...</Action>
</Rule>

Positional qualifiers can also be used to further constrain the pattern to match on the element's position or uniqueness amongst its siblings. For example, the following example matches Object descriptors which are the only Objects in a KeyFrame descriptor:

<Rule pattern="KeyFrame/Object[OnlyOfType()]">
    <Action> etc...</Action>
</Rule>

The above description of the matching method permits pattern matching only on elements (which are typically descriptors in the DesOM) or element definitions. Clearly there are many possible embodiments for defining the syntax of the node pattern matching without departing from the spirit and scope of the invention.
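The following Java fragment is a deliberately simplified illustration of how such patterns might be matched against a DOM representation of a description; it supports only the parent operator '/' and a single attribute(...)='...' qualifier, and it is not the matching method of the preferred embodiment.

import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class SimplePatternMatcher {

    // Returns true if 'element' matches a pattern such as
    // "VideoClipDescription/Scene/Shot" or "Histogram[attribute(noBins)='100']/Bin".
    public static boolean matches(Element element, String pattern) {
        String[] steps = pattern.split("/");
        Node current = element;
        // Walk the pattern right to left, climbing the element's ancestry.
        for (int i = steps.length - 1; i >= 0; i--) {
            if (!(current instanceof Element) || !stepMatches((Element) current, steps[i])) {
                return false;
            }
            current = current.getParentNode();
        }
        return true;
    }

    private static boolean stepMatches(Element e, String step) {
        String name = step;
        String qualifier = null;
        int bracket = step.indexOf('[');
        if (bracket >= 0) {
            name = step.substring(0, bracket);
            qualifier = step.substring(bracket + 1, step.length() - 1);
        }
        if (!e.getTagName().equals(name)) {
            return false;
        }
        if (qualifier == null) {
            return true;
        }
        if (qualifier.startsWith("attribute(") && qualifier.contains("='")) {
            // e.g. attribute(noBins)='100'
            int close = qualifier.indexOf(')');
            String attName = qualifier.substring("attribute(".length(), close);
            String expected = qualifier.substring(qualifier.indexOf('\'') + 1,
                                                  qualifier.lastIndexOf('\''));
            return expected.equals(e.getAttribute(attName));
        }
        // Child and positional qualifiers are not handled by this sketch.
        return false;
    }
}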
Each Rule can have one or more associated Action elements. In the Digital Video Browser System (see Section 13) the allowable Action elements for rules have been limited to the addition and removal of elements and attributes from elements (ie, descriptors) in descriptions, and the addition and removal of attribute definitions from element definitions in a description scheme. The actions involving individual descriptions are generally used by inference, equivalence and selection rules (see Sections 8 and 10) and the actions involving description schemes are generally used by presentation rules (see Section 9).
The attributes of the Action elements AddAttribute and RemoveAttribute specify the attribute to be added or removed from a target element (ie, an element that has matched the specified pattern in the rule). The content of the AddElement action contains the element to be added to the DesOM as a relation of a target element. The position attribute of the AddElement element specifies where the new element should be added with respect to the target element. This position attribute can indicate that the new element is to be added as a sibling node before the target element (SiblingBefore), as a sibling node after the target element (SiblingAfter), as the first child of the target element (AsFirstChild), or as the last child of the target element (AsLastChild). Clearly, since the element to be added to the DesOM is represented as parsed character data (#PCDATA), an element hierarchy can also be added to the DesOM. The RemoveElement action will simply remove a target element. Any child elements of the target element will also be removed.
The AddAttributeDef and RemoveAttributeDef actions are only valid if the target for the rule is an element definition. These actions are typically used by presentation rules (see Section 9). The AddAttributeDef action uses the attName, attType and attDefault attributes to specify the required information for the attribute definition to be added to an element definition. The RemoveAttributeDef action will simply remove the attribute definition that is identified by the value of the attName attribute of the action. Attribute definitions can be replaced by including both an AddAttributeDef and a RemoveAttributeDef action in a particular rule.
S 15 The Select action is typically only used by selection rules and is described in detail Section 10. Rules can also be used to transform a description. These rules are used to generate a second description conforming with a second description scheme.
8. Method of Extending Descriptions of Resources
Given a description scheme, it is possible that further descriptors can be automatically created, by inference or a known equivalence, in a description based on the existence or otherwise of a particular set of descriptors. For example, if a descriptor for a digitally captured image representing light exposure levels indicated outdoor lighting levels, then an additional descriptor could be automatically created to classify the image as an "Outdoor Scene". Since the latter classification can be inferred from the recorded light exposure levels, there is no advantage in storing the classification because it can always be re-generated while the inference rule exists. Rules can also be used to generate textual descriptors based on non-textual descriptors or vice versa. For example, the colour of an object might be stored in a description as an (R, G, B) value. A rule could be formulated which maps each (R, G, B) value to one of a possible number of colours represented in a text string (eg, red, green, purple, etc.). The additional descriptors generated by inference or equivalence rules can result in a richer description that can be exploited by applications (eg, search engines, filter agents, etc.).
A set of rules that is applicable for a given description scheme can be serialised (stored) in an XML document. In the Digital Video Browser System (see Section 13), a reference to such an XML document is stored in the value of the ruleSets attribute of the Description element for the description scheme (see Section 3.1.2.2 Description Definition). It is possible to associate more than one rule set with a description scheme.
In the Digital Video Browser System (see Section 13), if more than one rule set is specified then it is assumed that both rule sets can be applied (ie, the individual rule sets do not contain unresolvable rules). In other words, the individual rule sets are simply combined and treated as a single rule set, in which the order of rules to be processed is provided by the order of the listing of the individual rule sets and the order of the individual rules within each given rule set. Inference and equivalence rule sets can also be stored with an application without departing from the essence of the invention; however, in this event the value of the rules is limited to the particular application.
Preferably, the Action elements typically used are the addition and removal of attributes and elements from the DesOM. Replacement can be achieved by using a removal followed by an addition Action element.
A set of inference rules is preferably invoked whenever a description is first processed into the DesOM. The rules are iteratively processed until no further changes can be made to the DesOM, as some rules may depend on the actions of other rules. The rule set may need to be (iteratively) reapplied whenever the description is updated (eg, by a manual annotation in an application utilising the description). In the event that an application has permitted changes to be made to the description, then before serialising the altered description each change needs to be considered in light of the inference rules in order to ascertain whether the descriptor can be inferred from a knowledge of the other descriptors in the description. If a descriptor can be inferred then it is excluded from the serialised description.
The method preferably associates a set of inference and/or equivalence rules with a description scheme. This set of rules can be implemented according to the above-mentioned description and results in a richer description structure without any additional storage or transport overhead, which would result if the extra (inferred or equivalent) descriptors were included as part of the individual descriptions. Being able to represent this inferred or equivalent information as a set of rules that can be invoked when required represents a significant saving in storage and transport cost if a large digital
An important aspect of the method is that unlike existing stylesheet languages such as XSL, the inference and equivalence rules do not form the basis of a construction of a new tree structure which is typically used for rendering. In the method the rules are applied to the memory structure that represents the description (ie, the DesOM) and result in changes to that structure. The role of the rules is to provide a richer description of the resource that can be exploited by applications search engines, filter agents, etc.).
This richer description does not necessarily need to be serialised because the richer description can always be generated from the original description using the rules.
The embodiment for applying the inference and equivalence rules has a limited set of actions that can be performed on the selected elements (see Rules.dtd in Section 7.
Rule-based Processing using the DesOM. This set of actions is sufficient for the Digital Video Browser System described in Section 13, however it is possible that a more extensive set of rules may be required for other applications.
Turning now to Fig. 12, there is shown a flow diagram of a method ofextending a description of a resource. In step 1200, the method commences and a host application such as a search engine invokes a DDF processor and selects a description in response to a user request for further processing. The description may be generated as prviously .:oooi described. In the next step 1202, the DDF processor parses the description into a DesOM.
After step 1202 the method continues at step 1204, where an associated set of rules are accessed using the RuleSet attribute of the description scheme. These set of rules may be S"serialised in the form of an XML document. In the next step 1206, the first rule of the set is selected for processing.
The method then continues to decision block 1206, where a check is made whether a pattern associated with the selected rule can be found in the DesOM. The manner in which the pattern associated with the selected rule matches a pattern in the DesOM is described in more detail in 7. Rule-based processing using the DesOM. If the decision block 1208 returns true(yes), then the processing continues at the next step 810, where the inference or equivalence action associated with the rule is initiated on the DesOM. These actions preferably initiate addition and removal of attributes and elements from the DesOM thus modifying the DesOM. Afterwards, the method selects the next rule in step 1211 and the processing returns to decision block 1208. If the decision block 1208 CFP1594AU IPR32-4 I_GRP2 489841 I:\ELEC\CISRA\IPR\IPR324 I GRP2]48984 I.doc:PWM 1~ i(rt* ~T~uldt~-- returns false(no), the processing continues at decision block 1212, where a check is made 0whether all the selected rules have finally been processed without action. In this way, the rules are iteratively processed until no further changes can be made to the DesOM. This is advantageous in the situation where some rules are dependent on other rules. If, on the other hand, the decision block 1212 returns true(yes), the processing continues at step 1216 where the extended DesOM is output. The method then terminates at step 1218.
9. Method of Presenting Descriptions of Resources A description could be used by many applications. Each application might exploit different properties of the description and its defining description scheme. Some of these applications will invariably need to represent description schemes and/or descriptors in a graphical or pictorial manner. For example, many descriptors could be graphically S: represented by icons and a user's interaction with either a description or description scheme could be mediated by icon selection.
Presentation properties for descriptors could be included as part of the description S 15 scheme however this can be non-ideal for two reasons. First, the role of the description scheme and description is to describe classes of resources and a particular resource, respectively, and it is preferable to keep both entities as concise and precise as possible.
Presentation information would result in extra presentation information icons) being part of a description scheme (and perhaps descriptions) and would therefore increase the storage and transmission costs for each description scheme. Second, different applications might prefer to present descriptions and description schemes in different ways. In other words, the presentation properties of descriptions and description schemes can be application dependent.
It is advantageous, however, to have a set of presentation rules grouped in a rule set that can be serialised, transported with and used in conjunction with the description scheme so that other applications can, if they choose to, use a similar set of presentation rules. This would not be the case if the presentation rules were tightly linked with a particular application part of the application code base).
As with inference and equivalence rule sets, presentation rule sets can optionally be linked with a description scheme by specifying the XML document containing the presentation rule set as the value or part of the value of the ruleSets attribute in the Description element for the description scheme (see Section 3.1.2.2 Description Definition). Presentation rule sets can be included in the ruleSets attribute along with CFP1594AU IPR32-4 I_GRP2 489841 I:\ELEC\CISRA\IPR1PR32-41 GRP21489841 .doc:PWM other rule sets that might be concerned with inference and equivalence rules. In the 0I Digital Video Browser System, which is described in Section 13, the presentation rule sets are stored with the description scheme in the ruleSets attribute. Alternatively, they could be stored with the application rather than the description scheme. Presentation rule sets stored as part of the description scheme are processed like inference or equivalence rule sets. In other words, all the rules from the individual rule sets are combined into a single rule set. Resolution of rules is performed on the basis of rule order (as was described for inference rules in Section 8. Method of Extending Descriptions of Resources). If an alternative method of processing presentation rule set(s) is required then the presentation rule set(s) are best stored with the application so the application can control the processing.
Presentation properties can be attributed to the descriptor definitions in a description scheme or the descriptor elements of a description using application-specific presentation rules. Unlike, inference or equivalence rules, a presentation rule is typically applied to an element definition in a DTD. Its role is to provide presentation behaviour for the instances of the descriptors defined in the description scheme. In the Digital Video Browser System (see Section 13), presentation rules are only applied to descriptor definitions and not to descriptors within individual descriptions. However, it is conceivable that some applications might benefit from an ability to define presentation rules based on individual descriptors in descriptions. The rules in a presentation rule set can be formulated in a similar way to inference or equivalence rule sets.
Preferably, the Action elements of presentation rules typically involve the addition S"and removal of attribute definitions in element definitions (in the description scheme).
Consequently the rules are targeted at element definitions rather than elements.
Alternative embodiments could apply presentation rules to individual descriptions and therefore the target of these rules would be elements rather than element definitions.
Presentation rules are used in the Digital Video Browser System described in Section 13 for the following functions: STo classify descriptors as being structural (hence belonging in a Table of Contents) or of an index nature (hence belonging to an Index); CFP1594AU IPR3241 GRP2 489841 I:\ELEC\CISRAMPRiPR32-41GRP2]489841 .doc:PWM To assign icons to descriptors where the icons are assigned on a description scheme 0 basis by the addition of attribute definitions having default values to descriptor definitions), and; To add "Selected" attributes to all selectable descriptor definitions so that selection rules can interact with the presentation of the descriptions so the application can differentiate visually between selected and non-selected descriptors).
The method involves associating a set of rules with a description scheme that can influence the presentation properties of descriptors in descriptions which are conformant with a particular description scheme. It is an advantage to have these presentation rules grouped in a rule set that is either linked to a description scheme so that applications can utilise the defined set of presentation properties if required. Alternatively an application can select to use its own set of presentation rules.
Turning now to Fig. 13, there is shown a flow diagram of a method of_visually presenting a description of a resource. In step 1300, the method commences and a host 1i 5 application such as a search engine invokes a DDF processor. In the next step 1301, a description is selected for presentation. This selection can occur by way of user input or by way of another application. The method then continues at step 1302, where the associated defining description scheme is read into memory. The description scheme in memory comprises an array of element definitions where each element definition has an array of attribute definitions. Alternatively, the DDF processor can parse the description into a DesOM. After step 1302 the method continues at step 1304, where the presentation set of rules are accessed using the RuleSet Attribute of the description. In the next step S"1306, the first presentation rule of the set is selected for processing.
The method then continues to, decision block 1308, where a check is made whether a pattern associated with the selected rule can be found in the DesOM. A pattern matching process similar to that described in 7. Rule-based processing using the DesOM would be suitable. If the decision block 1308 returns true(yes), then the processing continues at the next step 1310, where the attribute definition(s) associated with the rule is removed or added to the array in memory. Afterwards, the method selects the next rule in step 1311 and the processing returns to decision block 1308. If the decision block 1308 returns false(no), the processing continues at decision block 1312, where a check is made whether all the selected rules have finally been processed without action. In this way, the rules are CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR3241 GRP2]489841 .doc:PWM iteratively processed until no further changes can be made to the array in memory. This is 0 advantageous in the situation where some rules are dependent on other rules. If, on the other hand, the decision block 1312 returns true(yes), the processing continues at step 1316 wherein a modified description is created using said modified description scheme as a template. This modified description is then output to an output device. For example, the modified description and it's associated resources, such as digital video resources or DVDs, can be rendered on a display or a printing device.
Method of Selecting Resource Descriptions Selection rules can be used to formulate queries directed at collections of descriptions digital libraries). A query can be viewed as a request to select those descriptions or components of descriptions descriptors) that match a specified 9999 *pattern. Like inference and equivalence rules, selection rules are typically directed at elements rather than element definitions. Unlike inference, equivalence and presentation 99rules, however, selection rules may be generated on a one-off basis and not collected in S 15 rule sets that are serialised in an XML document. For example, a query is usually formulated with help from the user, then processed, and the results presented to the user for their evaluation.
Selection rules often depend on presentation rules in that the selection action must be able to be interpreted by the application and presented to the user. For example, a selection action could simply set a (presentation) attribute for descriptors that match the specified pattern.
Selection rules are typically associated with the application. In the Digital Video 9. 9 S"Browser System (see section 13), selection rules use the same grammar as all other rules (see Section 7. Rule-based Processing using the DesOM). However, typically the only Action that is invoked by a selection rule is the Select action. Consequently it would be possible to define a more specific grammar for selection rules SelectionRules.dtd having just a Select action being allowed).
The Select action of a selection rule has three attributes which specify how the selection action is implemented. The value of the attribute attName refers to the attribute name used for a descriptor that is able to represent the action of being selected. This attribute would typically have been generated using a presentation rule. If the element matched by the pattern does not contain such an attribute, then the selection process will search for ancestors of the matched element in the DesOM up the description tree) CFP I 594AU IPR32-4 I _GRP2 489841 I:\ELEC\CISRAIPRPR32-41 GRP2]48984 I.doc:PWM until it locates an element with the specified attribute name. In the above DTD this 0 attribute name is provided with a default value of "selected". The value of the second attribute attValue refers to the value that the "selected" attribute should be assigned in order to indicate selection. The DTD also provides a default value of "YES". The third attribute specifies whether all selectable ancestors should also be selected. So, for example, if a user selects a Shot descriptor because of a matched descriptor contained in the Shot descriptor, then the user should also select the ancestors of the Shot descriptor (ie, the Scene descriptor and the VideoClipDescription descriptor).
In this way, the Select element provides information to the application on which elements have matched the specified pattern in the selection rule. Clearly the application needs to be aware of the attribute used to provide this information, hence the interaction between presentation and selection rules. In the Digital Video Browser System (see Section 13), selection rules are used to implement searches in a Digital Video Library.
The method involves that of representing queries by selection rules which attempt to 0 S 15 find matches to a rule's specified element pattern. The "select" action that is executed on a successful pattern match typically modifies attributes established by presentation rules, so that the selection process can interact with the application.
Turning now to Fig. 14, there is shown a flow diagram of a method of selecting one or more descriptions or part of one or more descriptions of a resource. In step 1400, the method commences and a host application such as a search engine invokes a DDF processor. In the next step 1402, a user inputs a query which is formulated as a rule in step 1404.The search engine then selects in step 1405 a first description for evaluation.
•0 The method then continues at step 1406, where the DDF processor parses the description into a DesOM.
The method then continues to decision block 1408, where a check is made whether a pattern associated with the selected rule can be found in the DesOM. The manner in which the pattern associated with the selected rule matches a pattern in the DesOM is described in more detail in 7. Rule-based processing using the DesOM. If the decision block 1408 returns true(yes), then the processing continues at the next step 1410, where the select action associated with the rule is initiated on the DesOM. The details of the select action is described above. Afterwards, the method then continues at decision block 1412 where a check is made whether the last description has been searched. If the decision block returns false(no) the processing continues at step 1414 where the next CFP 1594AU IPR32-4 I_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841 .doc:PWM description is selected. Otherwise, the processing continues at step 1416, where the 0 results of the searching process is output. The method then terminates at step 1418.
11. Method of Translating Descriptions of Resources Often descriptions of resources will be in a language different from the request.
Rather than store copies of the descriptions in each language, the method stores only one copy of the descriptions in one language. Preferably, the language is English. The method is then provided with a number of rule sets that enable the translation of the descriptions to the language of the request. For example, the description may have a "colour" attribute and a colour attribute value "red". If the request is received in French, then the method will translate the description to French. In the example given, "colour" and "red" will be translated to their French equivalent. This is a form of inter-language equivalence.
This procedure is similar to the way Inference Rules are processed, but on a conditional basis. Inference rules are preferably not processed on a conditional basis as described here for translation rules.
15 Turning now to Fig. 15, there is shown a flow diagram of a method of translating a description of a resource. In step 1500, the method commences and a host application such as a search engine invokes a DDF processor and selects a description in response to a user request for further processing. In the next step 1502, the DDF processor parses the description into a DesOM. After step 1502 the method continues at decision block 1503, 20 where a check is made whether the language of the request is different from the language of the description. This check is accomplished by comparing the language attributes of o to both the request and the description.
If the decision block 1503 returns true (yes), the processing continues at step 1504, where an associated set of translation rules is accessed using the RuleSet attribute of the description. This set of translation rules may be serialised in the form of an XML document. On the other hand, if the decision block returns false (no), then the processing continues at step 1516. After completion of step 1504, the method continues at step 1506, where the first rule of the set is selected for processing.
The method then continues to decision block 1508, where a check is made whether a pattern associated with the selected rule can be found in the DesOM. The manner in which the pattern associated with the selected rule matches a pattern in the DesOM is described in more detail in 7. Rule-based processing using the DesOM. If the decision block 1508 returns true (yes), then the processing continues at the next step 1510, where the translation action associated with the rule is initiated on the DesOM. These actions initiate the removal and addition of attributes and elements from the DesOM. The removal and addition action substitutes the language of the attributes and elements for another. Afterwards, the method selects the next rule in step 1507 and the processing returns to decision block 1508. If the decision block 1508 returns false (no), the processing continues at decision block 1512, where a check is made whether all the selected rules have finally been processed without action. If the decision block 1512 returns false (no), the processing continues at step 1507 where the next rule is selected. If, on the other hand, the decision block 1512 returns true (yes), the processing continues at step 1516 where the translated DesOM is output. The method then terminates at step 1518. Alternatively, it is also possible to include an action of a rule which invokes a DescriptorHandler method to translate the content of the selected Descriptor.
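A minimal sketch of the conditional translation step is given below. It assumes a simple lookup table for attribute names and values (for example "colour"/"red" to "couleur"/"rouge"), and assumes that the description language is carried in an xml:lang attribute on the root element; the table, the comparison and the method names are illustrative assumptions rather than the rule syntax defined elsewhere in this specification.

    import java.util.HashMap;
    import java.util.Map;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Hypothetical sketch of applying translation rules to a parsed description.
    public class TranslationSketch {

        // Illustrative English-to-French vocabulary standing in for a translation rule set.
        private static final Map<String, String> EN_TO_FR = new HashMap<>();
        static {
            EN_TO_FR.put("colour", "couleur");
            EN_TO_FR.put("red", "rouge");
        }

        // Translate the description only when the request language differs from
        // the description language (decision block 1503).
        public void translateIfNeeded(Document desom, String requestLang) {
            Element root = desom.getDocumentElement();
            String descriptionLang = root.getAttribute("xml:lang");
            if (requestLang.equals(descriptionLang)) {
                return; // languages already match; nothing to do
            }
            NodeList descriptors = desom.getElementsByTagName("Descriptor");
            for (int i = 0; i < descriptors.getLength(); i++) {
                Element d = (Element) descriptors.item(i);
                // Substitute a known attribute and its value for their equivalents
                // in the request language (steps 1508 to 1510 for one rule).
                if (d.hasAttribute("colour")) {
                    String value = d.getAttribute("colour");
                    d.removeAttribute("colour");
                    d.setAttribute(EN_TO_FR.get("colour"),
                            EN_TO_FR.getOrDefault(value, value));
                }
            }
            root.setAttribute("xml:lang", requestLang);
        }
    }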
12. First Embodiment of Apparatus

The processes described in relation to Figs. 1A to 15 can be practiced using a conventional general-purpose computer, such as the one shown in Fig. 19, wherein the processes may be implemented as software executing on the computer. In particular, the method steps are effected by instructions in the software that are carried out by the computer. The software may be divided into two separate parts; one part for carrying out the processing steps; and another part to manage the user interface between the latter and the user. The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer from the computer readable medium, and then executed by the computer. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer preferably effects an advantageous apparatus in accordance with the embodiments of the invention.
The computer system 1900 consists of the computer 1902, a video display 1916, and input devices 1918, 1920. In addition, the computer system 1900 can have any of a number of other output devices including line printers, laser printers, plotters, and other reproduction devices connected to the computer 1902. The computer system 1900 can be connected to one or more other computers via a communication interface 1908b using an appropriate communication channel 1930 such as a modem communications path, a computer network, or the like. The computer network may include a local area network (LAN), a wide area network (WAN), an Intranet, and/or the Internet. The computer 1902 itself consists of a central processing unit(s) (simply referred to as a processor hereinafter) 1904, a memory 1906 which may include random access memory (RAM) and read-only memory (ROM), input/output (IO) interfaces 1908a, 1908b and 1908c, a video interface 1910, and one or more storage devices generally represented by a block 1912 in Fig. 19. The storage device(s) 1912 can consist of one or more of the following: a floppy disc, a hard disc drive, a magneto-optical disc drive, CD-ROM, magnetic tape or any other of a number of non-volatile storage devices well known to those skilled in the art. Each of the components 1904 to 1912 is typically connected to one or more of the other devices via a bus 1914 that in turn can consist of data, address, and control buses.
The video interface 1910 is connected to the video display 1916 and provides video signals from the computer 1902 for display on the video display 1916. User input to operate the computer 1902 can be provided by one or more input devices 1908b. For example, an operator can use the keyboard 1918 and/or a pointing device such as the mouse 1920 to provide input to the computer 1902.
The system 1900 is simply provided for illustrative purposes and other configurations can be employed without departing from the scope and spirit of the invention. Exemplary computers on which the embodiment can be practiced include IBM-PC/ATs or compatibles, one of the Macintosh TM family of PCs, Sun Sparcstation 20 TM, or the like. The foregoing are merely exemplary of the types of computers with which the embodiments of the invention may be practiced. Typically, the processes of the embodiments, described hereinafter, are resident as software or a program recorded on a 0e hard disk drive (generally depicted as block 1912 in Fig. 19) as the computer readable medium, and read and controlled using the processor 1904. Intermediate storage of the program and pixel data and any data fetched from the network may be accomplished using the semiconductor memory 1906, possibly in concert with the hard disk drive 1912.
In some instances, the program may be supplied to the user encoded on a CD-ROM or a floppy disk (both generally depicted by block 1912), or alternatively could be read by the user from the network via a modem device connected to the computer, for example.
Still further, the software can also be loaded into the computer system 1900 from other computer readable media including magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between the computer and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets including email transmissions and information recorded on websites and the like. The foregoing are merely exemplary of relevant computer readable media. Other computer readable media may be practiced without departing from the scope and spirit of the invention.
The methods of the invention may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of methods of the invention. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
13. Second Embodiment of Apparatus - Digital Video Browser System

A Digital Video Browser System in accordance with a second embodiment of the apparatus is described in this section. The functionality of the Digital Video Browser System is enabled by the descriptions of digital video that are automatically generated using a description scheme, designed for digital video resources, such as that included in Appendix D.
The Digital Video Browser System allows a user to browse the digital video in a non-linear manner, manually annotate the digital video to provide additional descriptive information that was not able to be automatically generated, and to search for the presence of various descriptors in a description. It should be clear to the reader that all this functionality is enabled by an interaction of the user with the description scheme and the individual descriptions of the digital video resources and that the browser that is described in the following section can in essence be applied to any other electronically-accessible resource.
An example of such a Digital Video Browser System is shown in Fig. 16. The system contains a Video Browser Panel 1600 which consists of a Viewing Panel 1601, a Table-of-Contents (or TOC) Panel 1602, and an Index Panel 1603. Outside of the Video Browser Panel 1600 but within the system are three buttons required for user interaction; a Search button 1605, a Play button 1606, and an On/Off button 1607.
User interaction with the panels of the Digital Video Browser System can be mediated by a touch-sensitive Video Browser Panel, however this feature is not necessary for the operation of the system. The operation of the Digital Video Browser System will now be discussed in the terms of Fig. 16.
When a new digital video resource is added to the Digital Video Browser System a predetermined description scheme is applied to the digital video resource resulting in the content creation methods of the relevant descriptor handlers in the description schemes being initiated. Other implementations might provide more than one description scheme which can be applied to the digital video resources. For example, a Digital Video Browser System might provide the description schemes contained in Appendices B and D. In such an embodiment the user would require a means to select the description scheme that he/she would like to apply to each new digital video resource. So, for example, if he/she was adding a new digital video resource containing the footage from a football match then he/she would most likely use the description scheme in Appendix B, however if the digital video resource contained some footage of a recent holiday, then it is likely that the description scheme included in Appendix D would be more appropriate.
If more than one description scheme is available then the selection of the most appropriate description scheme to use could also be automated to some extent. The resource to be described could be analysed to see if it contained key features that typically indicate the use of a particular description scheme. For example, the sound track of a digital video resource could be analysed for repetitive whistle sounds arising from a referee's whistle. If detected, such sounds could provide evidence for the use of a particular description scheme (eg. the description scheme shown in Appendix B).
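A sketch of such automated scheme selection is given below. The feature detector and the mapping from detected features to description schemes, including the scheme file names, are purely illustrative assumptions.

    // Hypothetical sketch of choosing a description scheme from detected features.
    public class SchemeSelector {

        // Placeholder for an audio analysis step; a real detector is assumed, not shown.
        interface WhistleDetector {
            boolean containsRepeatedWhistles(String videoUri);
        }

        private final WhistleDetector detector;

        public SchemeSelector(WhistleDetector detector) {
            this.detector = detector;
        }

        // Return the name of the description scheme to apply to a new video resource.
        public String chooseScheme(String videoUri) {
            if (detector.containsRepeatedWhistles(videoUri)) {
                return "FootballMatch.dds";   // illustrative name for the Appendix B scheme
            }
            return "HomeVideo.dds";           // illustrative name for the Appendix D scheme
        }
    }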
In a simple description scheme such as that included in Appendix B there is a single descriptor handler specified for the description (which is also a descriptor), which generates the entire content for the description.
In other description schemes, more than one descriptor may have an associated descriptor handler which is responsible for automatically generating the content of just that descriptor. For example, consider the description scheme shown in Appendix D.
The VideoDescription descriptor D1 has an associated descriptor handler D2 which provides a method to automatically segment the digital video resource into a series of individual shots. The Shot descriptor D3 has an associated descriptor handler D4 which provides a method to automatically select a key frame from a specific shot and then generate a series of semantic labels which provide some information about the content of the particular shot (eg. whether or not the shot contained people, was an indoors or outdoors shot, etc.). These descriptor handler methods are executed on the creation of a descriptor in the description being generated. Therefore the description can be
In the case of the Digital Video Browser System depicted in Fig. 16, the descriptors able to be accessed in the Index Panel, rather than the TOC Panel are classified as Index Descriptors. The classification of descriptors as Index or TOC descriptors is achieved using presentation rules (see Section 8. Method of Presenting Descriptions of Resources), with each description scheme being used by the Digital Video Browser System having a corresponding presentation rule set. For example, a presentation rule could be applied to each of the descriptor definitions in the description scheme to add an attribute definition S to the descriptor's definition for the purposes of this classification. The added attribute definition could have a attribute default of #FIXED "Index" or #FIXED "TOC" to classify an Index and TOC descriptor, respectively. [Note: The use of the #FIXED keyword in the default value means that changing the value of the classifier from its default value results in an invalid XML construct and hence an invalid description.] S Selecting which descriptors are to be used as Index descriptors is similar to selecting which key words or phrases you would include in the index of a book. In other words, it is an authoring task that results in presentation rules. In general, a descriptor that is classified as a TOC descriptor represents a structural element of the resource a component that would normally appear in the TOC of a book). So, for example, a Shot descriptor is a TOC descriptor. An Index descriptor typically represents a property of a TOC descriptor a Shot descriptor could contain people scenes, be an indoor or outdoor scene, etc.).
The Index descriptors are the leaf nodes of the internal tree structure used to represent the description [The internal representation of descriptions is discussed in detail in Section 2.3 Description Object Model (DesOM) In the absence of presentation rules, this property can also be used to implicitly differentiate between Index and TOC descriptors in an implemented Digital Video Browser System. In the Digital Video Browser System, explicit differentiation between Index and TOC Descriptors is achieved using presentation rules. A set of presentation rules applicable to the description scheme in Appendix D is shown in Appendix F.
The Digital Video Browser System has access to a collection of digital video resources, which is hereinafter referred to as a Digital Video Library. A newly described digital video resource can be simply appended to an existing collection of described digital video resources. Alternatively (see Section 14. Remote Digital Video Browser Devices), the user can insert a new item at the desired location using a drag-and-drop means. The Digital Video Library is itself a resource able to be described. Therefore, on initialising the Digital Video Browser System a description scheme for a Digital Video Library is used to automatically generate a description for the Digital Video Library.
The description of the Digital Video Library can be very simple containing just a hierarchical representation of the individual descriptions of digital video resources described in the library. In other words, the description need not know about the location of the digital video resources described in the library. It is merely a catalogue of the descriptions of the digital video resources stored in the library. Each individual description has a reference to its corresponding digital video resource.
An example of a description scheme for a Digital Video Library is included in Appendix G. The Digital Video Library's description can contain zero or more Section elements or zero or more Item elements, where each Item element refers to an individual description in the Digital Video Library (eg. an XML document). A description of a Digital Video Library conforming to the description scheme included in Appendix G is shown in Appendix H.
During browsing the user can select sections of the Digital Video Library by selecting the relevant descriptors in the TOC Panel 1602 in the Video Browser Panel 1600. This selection method provides non-linear access to the digital video resource(s). Typically these selections are highlighted in the TOC panel to indicate which are currently selected.
The user can choose to play all the highlighted selections by pressing the "Play" button 1606.
Alternatively the user can search for sections, items or parts of items of the Digital Video Library by selecting relevant Index descriptors in the Index Panel 1603. In a simple Digital Video Browser System implementation, the Index descriptors might imply simple boolean presence of a specified feature. For example, the PeopleScene Index descriptor (see D5 in Appendix D) could indicate whether people are either present or absent from the shot. In a more sophisticated Digital Video Browser System the Index
Searches can be performed within a TOC context in the Digital Video Library. For example, if a user wanted to search for PeopleScene descriptors within a specific digital video resource, the user could select the TOC descriptor for that particular resource in the TOC Panel 1602 and then select the desired Index descriptor in the Index panel 1603 and press the "Search" button 1605 in the Digital Video Browser System. The search process would then result in all TOC descriptors that satisfied the search criteria becoming selected highlighted) in the TOC Panel 1602. The user could then select to play all the selected sections of the digital video resource by pressing the "Play" button 1606.
Searches can be implemented in the Digital Video Browser System using selection rules (see Section 10. Method of Selecting Resource Descriptions). The TOC context is automatically inserted as part of the pattern of the selection rule. The search process applies the selection rule pattern to each relevant description and updates a selection attribute that has been added for all selectable attributes using a presentation rule.
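A sketch of how the TOC context might be prefixed to the pattern of such a selection rule is shown below. Representing the pattern as an XPath expression, and identifying the selected TOC descriptor by an id attribute, are assumptions made only for this illustration.

    // Hypothetical sketch of composing a selection-rule pattern with a TOC context.
    public class ContextualSearch {

        // Restrict an index pattern to the subtree rooted at the selected TOC descriptor,
        // identified here by its id attribute.
        // e.g. addTocContext("video1", ".//PeopleScene")
        //   -> "//*[@id='video1']//PeopleScene"
        public String addTocContext(String tocDescriptorId, String indexPattern) {
            return "//*[@id='" + tocDescriptorId + "']"
                    + indexPattern.replaceFirst("^\\.", "");
        }
    }

The resulting pattern can then be evaluated against each relevant description in the same way as any other selection rule, with the select action updating the selection attribute added by the presentation rules.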
Selectable attributes will vary between description scheme and application. In the case of the description scheme included in Appendix D the only descriptors that might be classified as selectable would be the VideoDescription and Shot descriptors (see the 0 0 0 0presentation rules in Appendix F).
ooe 20 The Digital Video Browser System also provides functionality for manual o annotation, in conformance with the description scheme, of a digital video resource. If a o. o: particular TOC descriptor is selected, then the relevant Index descriptors 1609 can be displayed in the Index Panel 1603. The Index descriptors are preferably represented by icons (which in preferably are specified by presentation rules targeted at the descriptor definitions). The selected TOC descriptor can be viewed (played) and then manually annotated by dragging icons representing the Index descriptors 1610) into an Annotation Region 1604 of the Viewing Panel 1601. Annotations created in this fashion are then added to the description of the resource and are available for subsequent browsing.
Annotations in the form of titling various TOC Descriptors could also be possible in some implementations of a Digital Video Browser System. For example, in a Digital Video Browser System implemented in software on a regular personal computer, the screen representation of the Descriptor could be selected and then the title for the
Whenever new descriptions are retrieved for browsing the description is processed into a DesOM. Before the description is actually presented in the Video Browser System, any inference or equivalence rules (see Section 8. Method of Extending Descriptions of :.Resources) that are associated with the description's description scheme are processed.
This processing involves iterating through the defined inference rules until no more changes can be made to the description. Clearly, this rule processing requires that there are no circular dependencies in the rule set. The inference and/or equivalence rules will result in the creation of new descriptors which have been inferred from those that were part of the serialised description. Preferably, any new descriptors created by this process will have been defined as part of the relevant description scheme (and as such will have been classified as an Index or TOC descriptor). The inference rules will need to be reprocessed in the event of any annotations being created.
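The iteration-to-fixpoint behaviour described above can be sketched as follows. The InferenceRule interface is an illustrative abstraction introduced only for this example and is not an interface defined by the DDF.

    import java.util.List;
    import org.w3c.dom.Document;

    // Hypothetical sketch of processing inference rules until no further change occurs.
    public class InferenceProcessor {

        // An inference rule returns true if it added or modified anything in the DesOM.
        public interface InferenceRule {
            boolean apply(Document desom);
        }

        // Repeatedly apply every rule until a full pass produces no change.
        // Terminates only if the rule set contains no circular dependencies.
        public void process(Document desom, List<InferenceRule> rules) {
            boolean changed = true;
            while (changed) {
                changed = false;
                for (InferenceRule rule : rules) {
                    if (rule.apply(desom)) {
                        changed = true;
                    }
                }
            }
        }
    }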
14. Third and Fourth Embodiment of Apparatus - Remote Digital Video Browser Devices

The Digital Video Browser System described in the previous section can also be implemented as a dedicated remote device. In this section two possible remote device embodiments of the Digital Video Browser System are described with respect to Fig. 17 and Fig. 18.
The first remote device of the Digital Video Browser System is shown in Fig. 17. In this embodiment the Video Browser 1700, contains no storage for the Digital Video Library. The Video Browser 1700 communicates with a Server 1710 using a wireless transmitter/receiver 1702 and a wireless connection 1703. The Server 1710 has a connection 1717 with a storage device that contains the Digital Video Library 1711. All the digital video resources that can be browsed by the Video Browser are stored in this Digital Video Library. Preferably, in this remote device all the descriptions of the digital video resources are also stored in this library 1711. The Server 1710 also has a CFP 1594AU IPR32-4 I_GRP2 489841 I:\ELEC\CISRA\IPR\PR32-4 I GRP2]489841.doc:PWM connection 1714 to a large display 1712 that can be used for public viewing of the digital video resources. Preferably, the connections between the Server 1710 and the Digital Video Library 1711 and between the Server 1710 and the large display 1712 are wired connections.
New digital video resources can be added to the Digital Video Library 1711 which is directly connected to the Server 1710 independently of the Video Browser device 1700.
As the resources are added to the Digital Video Library 1711 (from, for example, a digital video camera), descriptions for the digital video resources are automatically generated using the description scheme. Also at this time, usually after the description has been generated, the user could optionally title sections of the digital video resource. These titles would then be visible when browsing using the Digital Video Browser device.
On power-up the Video Browser device connects to the Server 1710 using the wireless connection 1703. The Server 1710 communicates to the Digital Video Browser device a description of the Digital Video Library. This description, like descriptions of the digital video resources, conforms to a description scheme (in this case for a Digital Video Library), and is serialised in an XML document. An example of a description of a Digital Video Library is shown in Appendix H.
The remote Digital Video Browser device 1700 can either store the relevant description schemes permanently, or download these description schemes at the time of making its connection with the Server 1710. The latter method of obtaining the description schemes is preferred. The description of the Digital Video Library and the relevant description schemes contain all the information required to display an Index and TOC panel on the Digital Video Browser device 1700. The user can then use the Digital Video Browser device to navigate through the Digital Video Library, selecting or searching for video resources to view. Preferably, the navigation through the TOC and Index panels is enabled via a touch-sensitive screen. Other methods of navigation (eg. a pen or simple keyboard) could also be used.
Only when a Digital Video Browser user selects to "Play" a particular selection of digital video resources, is it necessary to transmit the required digital video resources from the Digital Video Library 1711 to the remote Digital Video Browser device 1700.
Preferably the digital video resources are stored and transmitted in compressed form (eg. MPEG-1 or MPEG-2), therefore minimising the bandwidth of the required wireless
The remote Digital Video Browser device can optionally have an additional button (to those shown in Fig. 16), which can be used to direct the Viewing Panel 1701 of the remote Digital Video Browser device to a large display 1712 connected to the Server 1710. This redirection can be achieved by transmitting a description of the required presentation an XML document) from the remote Digital Video Browser device 1700 to the Server 1710. This description would conform to a Video Presentation Description Scheme (eg Appendix I) that could be as simple as just a list of all the selected sections of the selected digital video resources. An example description of a video presentation is "shown in Appendix J.
The video browser system generates a description of a video presentation by first reading the description scheme for the presentation (eg Appendix I). This description scheme contains definitions of descriptors required for the video presentation (eg the VideoDescription Reference and Shot Reference descriptor definitions in Appendix I). The video browser system then generates the description of the presentation using the description scheme read by the browser and information about the resources that have been selected for presentation. The result of this step is a description such as shown by way of example in Appendix J. This description can then be directed to an output device 1712 for rendering.
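The generation step can be pictured with the following sketch, which builds a small DOM document from the user's selections. The element names follow the examples referred to in Appendices I and J, but their exact spelling here, and the shape of the selections map, are assumptions made only for illustration.

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of generating a video presentation description.
    public class PresentationBuilder {

        // selections maps a video description (e.g. "VideoEgl.xml") to the ids of
        // the Shot descriptors the user has selected from it.
        public Document build(Map<String, List<String>> selections) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("VideoPresentationDescription");
            doc.appendChild(root);

            for (Map.Entry<String, List<String>> entry : selections.entrySet()) {
                Element videoRef = doc.createElement("VideoDescriptionReference");
                videoRef.setAttribute("description", entry.getKey());
                for (String shotId : entry.getValue()) {
                    Element shotRef = doc.createElement("shotIDRef");
                    shotRef.setTextContent(shotId);
                    videoRef.appendChild(shotRef);
                }
                root.appendChild(videoRef);
            }
            // The resulting document can be serialised and sent to the Server for rendering.
            return doc;
        }
    }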
Preferably, this description is interpreted by the Server 1710 and the corresponding .o sections of the selected digital video resources would be rendered to the large display 1712. The rendering is performed by the Server 1710 and pixel data would be transmitted over the connection 1714, however if the large display 1712 had the processing ability to decode the compressed digital video resource, then the compressed resource could be transmitted over the connection 1714 and then decoded and rendered in the large display 1712.
Clearly, presentation rules could be applied to the presentation of the selected items in the same way as presentation rules are applied to a description of a digital video resource. Some presentation rules that could be applicable to the presentation of digital video resources include rules that specify the type of transitions to be inserted between shots of a particular digital video clip fades, cuts, wipes, etc.) and whether clip titles are to be rendered over the presented video and the style of title rendering to be used.
These rules could be collected in a presentation rule set that is linked with the Video Presentation Description Scheme in the same way that sets of presentation rules could be linked to the Digital Video Resource Description Scheme (see Appendix D).
Alternative Digital Video Browser implementations could allow users to specify additional presentation rules for the presentation of selected digital video resources. For example, an implementation could allow a user to specify whether a particular selection was to be played at recorded, slow or fast speed. Altering the speed of video playing can provide interesting presentation effects. Similarly, the Digital Video Browser user might also be able to specify the types of transitions to use on a one-off presentation basis rather than a default basis as provided by rules linked to the Video Presentation Description Scheme. These one-off presentation rules can be combined into a single rule set which is referenced by the Description element of the presentation description that is communicated to the Server 1710 when the user chooses to play the selected digital video resources (whether on the Digital Video Browser device itself or, more likely, when the presentation has been re-directed to the large display 1712).
An example of a Video Presentation Description Scheme, which could be used with S. the Video Description Scheme shown in Appendix D, is shown in Appendix I. In this description scheme, a standard set of presentation rules is provided as part of the description scheme. These rules have been collected into a rule set and stored in the XML 20 document which, in the case of the example is called "VideoPresentationRules.xml". The rule set has then been referenced by the description scheme by specifying an ENTITY for the ruleSets attribute II of the VideoPresentationDescription element. The attribute userPresentationRules 12 has been added to the VideoPresentationDescription subclass of the Description element to be able to contain an ENTITY that specifies an xml document that contains any presentation-specific rules.
An example of a video presentation description that conforms to the Video Presentation Description Scheme, which is included in Appendix I, is shown in Appendix J. A set of presentation-specific rules has been specified for the particular presentation using the userPresentationRules attribute of the VideoPresentationDescription element (see J1). Clearly the example description scheme and presentation description included in Appendices I and J pertain to the Video Description Scheme included in Appendix D since they refer to particular descriptors in that description scheme. For example, the VideoDescriptionReference element contains zero or more references to Shot elements in CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM the referenced video descriptions. In particular the shotIDRef element J2 specifies a particular shot descriptor in the description contained VideoEgl.xml, by using a reference to the ID of that descriptor in the description. It is not necessary to use a Video Presentation Description Scheme that is directed so specifically at a particular description scheme. For example, if a Digital Video Browser System was implemented with more than one description scheme, then a more general Video Presentation Description Scheme can be used.
The ability to be able to re-direct the Viewing Panel 1701 to a large display 1712 connected to the Server 1710 is a useful feature as the user can select the sections of his/her Digital Video Library that he/she wishes to share with an audience using the remote Digital Video Browser device. That selection can then simply be played to the large display 1712.
A second remote device implementation of the Digital Video Browser System is shown in Fig. 18. In this implementation the Digital Video Browser 1800 is implemented as a remote device that has a capability to read Digital Video Disks (DVDs). Typically each DVD is treated like an independent Digital Video Library and consequently each DVD has its own description of the Digital Video Library contained on the DVD. When the DVD 1815 is inserted into the remote Digital Video Browser device 1800 the Video Browser 1800 reads the description of the Digital Video Library contained on the DVD.
In this device the description scheme required to interpret the Digital Video Library would preferably reside in the remote Digital Video Browser device, however it is conceivable that the description scheme could also be located on each DVD. Similarly the description schemes required to interpret the descriptions of the digital video resources could either be located on the DVD or in the remote Digital Video Browser device. In the preferred implementation of this device, all the required description schemes are located in the remote Digital Video Browser device 1800. New description schemes for digital video resources can be downloaded via the wireless transmitter/receiver 1802 and wireless connection 1804 to a server or computer 1813 connected to a network 1814.
Alternatively, the remote Digital Video Browser can be docked at a server or networked computer for the download of new description schemes.
Once the description of a Digital Video Library has been read from the DVD 1815 then the user can navigate through this Digital Video Library as described previously.
Sections of described digital video resources can be selected and played on the remote
Sections of the selected digital video resources can be selected for viewing on a large display 1810 that has a wireless connection 1803 with the remote Digital Video Browser device. This large display 1810 must either contain, or be directly connected to, a processor able to decode and render the compressed digital video resource that is transmitted via the wireless connection 1803. As with the remote device depicted in Fig.
17, a description of the required presentation is communicated to the large display 1810.
In addition, any digital video resources required for the presentation description to be rendered must also be communicated. These resources are typically communicated in compressed (encoded) form MPEG-1 or MPEG-2). The processor either contained in, or directly connected to, the large display 1810 renders the presentation using the presentation description and its associated digital video resources. The rendering process 15 can typically adapt to the resolution of the large display 1810, which is usually greater than that of the handheld device.
In the preferred implementation of this device 1800, if the description of the S required presentation requires that only particular sections of a selected digital video resources be presented, then these required sections can be isolated from the original ooooo digital video resource, recorded if necessary in the handheld device, and then communicated to the large display 1810. This approach reduces the communication bandwidth of the wireless connection 1803. Alternatively, the entire digital video resource can be communicated and the processor that renders the presentation will need to extract the relevant sections of the digital video resource(s). The latter implementation is more costly in bandwidth but does not involve recording of digital video resources in the remote device.
In order to facilitate resource discovery on different DVDs, this remote Digital Video Browser device can also have an ability to generate printed DVD covers that display the contents of the DVD in a graphically pleasing manner. This printed presentation can be achieved in substantially the same manner as described for a video presentation.
This facility can be achieved using a wireless connection 1805 to either a printer with some processing ability 1812, or to a computer directly connected to a printer (not shown in Fig. 18). Typically the Digital Video Browser device would send to the printing device (1812 or the computer directly connected to a printer), a description of the (printed) presentation that is to be the printed DVD cover.
Description schemes for this presentation could be designed just as they can be designed for video presentations (see Appendix I). For example, at the simplest level the Digital Video Library description could form the basis of the printed presentation.
Presentation rules could then be used to specify the spatial layout and colour arrangement of the printed presentation, and also the association of icons or key frames to particular descriptors in the description. The presence of visual reminders of the content of the DVD, such as icons or key frames, are important for purposes of identification and retrieval.
A processor, which is located either in the printer 1812 or in a computer connected to the printer could then use the description of the required printed presentation and any provided presentation rule sets to render a DVD cover for the particular DVD using the 15 provided key frames. This processor would need to be able to interpret the description of .o0.°i the printed presentation.
15. Fifth Embodiment of Apparatus - Media Browser System

Browsing of electronically-accessible resources other than digital video can also be enabled by descriptions that conform to identified description schemes. In an alternative embodiment, a Media Browser System can enable the description-based browsing of any electronically-accessible resource. Although the description schemes used to describe these different resources might be significantly different, a common browsing framework, called here a Media Browsing System, can be used. The Video Browser System described in Section 13 is a more specific embodiment of the Media Browser System described in this Section. However, many aspects of the Video Browser System can also be implemented in the Media Browser System.
The browsing method requires that each resource is consistently described (eg. using the DDF) according to a description scheme and the resulting description contains a link to the resource or sections of the resource. Preferably, the DDF (see Section 2) is used to provide a consistent method of describing resources, however alternative methods of describing resources could also be used. For example, other schema languages such as XML-Schema of the W3C could also be used. In the case of XML-Schema, core descriptor elements can be defined in substantially the same way as described for the DDF in section 3.1.2.

In addition, in the embodiment described here, descriptor components of description schemes are further classified using predetermined classifications that provide axes of access to the resources. The preferred axes of access used in this embodiment are the structural access (Table of Contents (TOC) access) and the index access. These axes have been used because humans are familiar with their use in, for example, reference books.
Whereas the TOC-axis of access provides access to resources on the basis of context (ie., where a resource or section of a resource exists in relation to other resources or sections of resources), the index-axis effectively provides context-free access to resources Ojust as an index in a reference book). It should be clear to those skilled in the art of browsing *~:*.technologies that the value of this classification of descriptor components into TOC and index axes of access is that a Media Browser System can act both as a browser (in the sense of current web-browsing technology) and a search engine in one.
It is possible to use different axes of access and the number of axes is not limited to two. For example, the Media Browser System could use an interface similar to that shown in Fig. 16 for the Digital Video Browser System, but having a further axis of access on the left hand side of the viewing panel to provide access to the digital video via audio events.
Another variation is one where more than one TOC axis can be used to allow more than one structural view to browsable content. For example, one TOC might provide browsing access using content category (eg. birthday images) and another axis might provide access by date of creation.
The predetermined classification of descriptor components into axes of access can be achieved using the methods described for the Digital Video System (see Section 13).
Any electronically-accessible resource can be accessed by the Media Browser System using a description of the resource as long as the Media Browser System can access the description scheme to which the description conforms and a processor for the class of resource. For example, a digital video resource might require a MPEG-i or MPEG-2 processor (player) to be present, an image might require a JPEG viewer and an audio object might require an MP3 audio processor. These processors are preferably stored with the browser and new processors are able to be downloaded when required by a resource.
The Media Browser, using the DDF, identifies the processor required by the resource by the relevant NOTATION declaration in the description scheme (see Section 3.1.2.2).
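A sketch of this dispatch is given below. The notation names, the registry class and the player interface are illustrative assumptions; the specification itself only requires that the NOTATION declaration name the processor needed for the class of resource.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of selecting a resource processor from a NOTATION name.
    public class ProcessorRegistry {

        public interface ResourceProcessor {
            void play(String resourceUri);
        }

        private final Map<String, ResourceProcessor> processors = new HashMap<>();

        // Register the processor that handles resources declared with a given notation.
        public void register(String notationName, ResourceProcessor processor) {
            processors.put(notationName, processor);
        }

        // Look up the processor named by the description scheme's NOTATION declaration.
        public ResourceProcessor processorFor(String notationName) {
            ResourceProcessor p = processors.get(notationName);
            if (p == null) {
                // A new processor could be downloaded here when required by a resource.
                throw new IllegalStateException("No processor registered for " + notationName);
            }
            return p;
        }
    }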
The resources can be an electronic document or other resources available over the web. The resource can also be an electronic device. The descriptions that appear in the TOC axis can also be located at different sites on the web. In this sense, the TOC axis can be compared to a set of description bookmarks. A TOC item may contain links to other descriptions, to individual resources or sections of resources (eg. a spatio-temporal extent in a digital video).
It should be clear to someone skilled in the art that if resource library providers on the web described their resources using a consistent method such as the DDF, a TOC axis could be made to extend over all resource libraries of interest to a particular user. In other words, the TOC could represent an information landscape over which a user could browse and search for resources. This has the advantage of the user not having to visit each digital resource library site in turn in order to search for desired resources.
S The media can be browsed, annotated and searched in the same way as that described for digital video resources (see Section 13). Clearly, the descriptors that appear in the different axes during browsing will vary depending on the description schemes that are relevant with regard to the browsing context at any particular instance. For example, if more than one description is currently in context and these descriptions conform to S different description schemes then an index panel will reflect all the descriptor components which have been classified as index descriptors in the relevant description schemes. In other words, the set of index descriptors that are shown at any time in the Media Browsing System represents the union of the sets of index descriptors that arise S* .from all the description schemes that are relevant to the descriptions that are currently in context. In other embodiments, it would be possible to show only those index descriptors that represent the intersection of the sets of index descriptors that arise from the relevant description schemes. In this case, an index descriptor would need to exist in each of the relevant description schemes before it could be provided by the Media Browsing System as an index for browsing.
The selection of a TOC context for searching using the index panel is implemented in the Media Browser System using the method that is described for the Digital Video Browser System (see Section 13).
Links between descriptor components of descriptions and spatially and/or temporally localised sections of the resources can be represented in the descriptions using locators and extents (see Section 3.1.4 for how these constructs are used in the context of the DDF). Preferably, the navigation of these links is performed automatically by the Media Browser when the user selects to play/view the selected resource(s). The Media Browser identifies the spatial and/or temporal extent(s) that are linked to a selected descriptor and plays/views the section(s) of the resource which contain the extent(s).
The Media Browsing System differs from existing HTML browsers in that the entities mediating the browsing the (DDF) descriptions] contain only descriptions of the resources to be browsed. In the case of the HTML browser, the HTML documents represent the resource, control the presentation of the resource and also contain some description of the resource (the META tag). The browser typically does not use the 15 descriptive information. In contrast, the entities mediating the browsing in the Media Browser System are ONLY descriptions of resources. These descriptions contain links to the relevant resources or sections of the resource to be viewed or played. The key and most obvious advantage obtained by browsing using descriptions of resources is browsing access to non-textual resources digital signals). However, the Media Browser style of browsing also uses the descriptive information available about resources to provide a richer browser interface that can include annotation and searching. This richer interface is also available to textual documents XML and HTML documents).
Fig. 20 shows an example of the Media Browsing System in accordance with the fifth embodiment. The Media Browser system 2000 contains a viewing pane 2006, a Table of Contents or TOC) panel 2002 and an Index panel 2004. The TOC panel 2002 displays a Table of Contents of all resources and resource libraries that are of interest to a user. The TOC effectively represents a personal library where individual items are typically distributed across the web. It is like a set of bookmarks or pointers into different description libraries or single descriptions.
The TOC axis provides an information landscape for the user to browse. Individual items of TOC can be: expanded to show contained items or collapsed to hide contained items; (ii) selected to play/view; CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRAPRIPR32-41GRP2]489841.doc:PWM (iii) selected for the current context (eg. to search).
Preferably, these browsing functions are enabled in the following way. Each TOC item consists of a node symbol (eg. a large bullet symbol) and a node content which is defined by the corresponding descriptor in the description). The node content can be an image such as a key frame as is preferably used to represent a section of video) or some text which can describe the personal item whether it represents a section in the personal library or information landscape, or a resource).
TOC items are expanded and collapsed by checking on the node symbol (ie. the node symbol acts as a toggle). It is preferable for the node symbol to indicate whether an item can be expanded. For example, the node symbol can be displayed as an open bullet symbol if it has contained items (ie. can be expanded) and a filled bullet symbol if it cannot be expanded).
TOC items can be selected for viewing/playing by clicking on the node content.
Preferably, a single click indicates that an item should be queued to be viewed/played. A 15 double click action results in immediate viewing/playing of an item. Items that are queued for viewing/playing are only played when the user selects to present the media. A Media Presenter tool is described later in this Section. Preferably, a button appears in the control region 2016 to initiate presentation. Pressing this button involves the current presentation tool which is a plug-in tool of the user's preference. It should be clear that many such plug-in tools could be used. If a user was only interested in images then only an image plug-in tool would be required. If a user was browsing a range of content then a more sophisticated plug-in tool would be required. This tool would need to be able invoke more :specific tools for the playing/viewing of different type of resources.
Preferably, items that have been selected are differentiated from unselected items by highlighting the node content eg. displaying a node content's text in bold or highlighting the frame of an image or visual icon). Selection of an item that contains other items automatically selects the contained items.
TOC items can also be selected for context. Preferably, context selection is achieved by right clicking the node symbol. This action results in a coloured frame being displayed around both node symbol and the node content. Any context can be removed simply by right-clicking on the node symbol (ie. right-clicking on the node symbol acts as a toggle for select for context, just as left click on the node symbol acts as a toggle for expand/collapse. Right clicking of the node content of an item can be used to display CFP1I594AU [PR32-41I_GRP2 489841 I:\ELEC\CI5RA\IPR\1PR32-4 I GRP2]48984 I .doc:P'vVM properties of the corresponding item. Preferably, these properties contain the index 0 descriptors that pertain to that node item.
It should be clear to those skilled in the art that the browsing functions of the TOC axis can be implemented in many different ways without departing from the spirit and scope of the invention.
Preferably each axis of the Media Browser System can be scrolled. This means that a section of the TOC can be retained in an expanded form and a further section scrolled into view. The properties can be displayed in a small panel (like a callout) adjacent to the node content.
Initially, the Index panel 2004 displays all index items associated with all the table of contents items. In the present example, it can be seen that the item "Images Birthdays" 2008 of the Table of Contents has been currently selected for context by the user. The Media Browser System 2000 then displays in the index panel 2004 a list of indices determined by the description schemes that correspond to the currently selected item of the Table of Contents (eg. in this case birthday images).
The Media Browser System also allows the user to further describe or annotate TOC items. Annotation can be achieved by allowing the user to drag a displayed index item onto displayed TOC items. An annotation of this form is only allowed if the dragged index item is a valid descriptor for the corresponding TOC descriptor. For example, a user would be allowed to drag a "People" index item onto a particular birthday image if the corresponding "People" descriptor was a valid descriptor for the image descriptor. The descriptor is an example of a descriptor not having a representative value, ie. it acts like a boolean indicator of the presence of a person in, in this case, an image. Many index descriptors require representative values to be specified as part of the annotation process. In these cases, as the required index item is dragged onto the TOC item, if the corresponding descriptor is allowed for that TOC item, a field or edit box is displayed for the user to enter the required representative value. Preferably, the Media Browser System ensures that the entered representative value is in the required form for the descriptor eg.
dates may be specified to conform to a particular ISO standard. Datatyping of representative values is discussed in Section 2.
Index descriptors that have been added to TOC items can be viewed by selecting to view the properties of the TOC items. Preferably, properties of TOC items can be displayed by right clicking the node content.
CFP1I594AU [PR,32-4 IGRP2 489841 I:ELEC\CISRA\IPR\1R32-4 I GRP2]48984 I .doc: PWM -4 -98- Preferably, all descriptors are viewed as being annotable. In the event a description's O origin is a remote database which is not available for update, the annotations are stored locally as a copy of the updated description having a link to the remote description. The link for the relevant TOC item is updated to point to the updated local copy of the description. In a subsequent browsing session, the local copy of the description is read and the description which is used by the browser is constructed by obtaining the remote description and then modifying this description according to the local copy. This method ensures that changes in the remote description are available to the user ie. the local copy does not simply overwrite the original description). An alternative way of achieving this is to only store locally annotations in a partial description form. The form of these annotations could be defined by a special description scheme.
Another variation of the annotation procedure would be to allow read-only descriptors. For example, the core description element (as defined in Section 2) could be amended to include a read-only attribute. If a description was classified as a read-only 15 item then a user would not be permitted to annotate TOC items corresponding to that description.
*Preferably, the index panel 2004 contains an input box 2014 associated with each S,.2 index item for user entry of a query. In this way, a user may for example enter a date query eg. July 1999) in the input box associated with date index. The Media Browser ootoe will then highlight in the TOC any TOC items, that satisfy the query (ie. have a date value of July 1999) and are contained in the currently selected context of the TOC (ie. birthday images).
Preferably, the index panel 2004 also contains a input box 2010 for a user entry of a free query. This input box 2010 is used as the input interface for a searching engine across all description schemes. Preferably, the free query is entered as natural language then subsequent processed into a structured query which uses index descripors that correspond to the TOC context.
Alternatively, searching functionality can be provided by a plug-in tool that uses the Media presentation pane 2006 to help the user construct a query using the index panel 2004. This plug-in tool can be invoked by a user pressing a search button that can be located in the control region 2016 at the bottom of the screen. The search tool can allow a user to construct a query by dragging index items (which correspond to index descriptors) from the index panel 2004 to the Media presentation pane 2006. The plug-in tool can also CFP1594AU IPR3241_GRP2 489841 I:\ELEC\CISRA\IPRIPR32-4 I41GRP21489841.doc:PI'WM allow a user to combine various descriptors using the logical connectors typically used 0 with search engines (eg. AND, OR, NOT etc), and allow a user to formulate a free text query. Clearly any free text queries, whether entered using the search plug-in tool or the input box 2010 would need to formulated in terms of descriptor components. Inferencing techniques as employed in some expert systems) can be used for this purpose. The separate search plug-in tool could also optionally display the results of the search in the Media presentation pane 2006 and allow the user to select and play particular items returned by the search.
Other plug-in tools can provide additional functions to be applied to selected content.
Each of these tools could be invoked in the manner described above for the search tool.
Alternatively, these tools could be invoked by a pull down menu option. These additional functions could include emailing selected items to selected people (using, for example, 0t 0 a* addresses from an address book of a commonly used email tool), or generating an automatic presentation based on the selected content. The latter example could use .ooeo.
stylised templates that make presentation decisions based on the descriptor components of selected resources.
The foregoing description of the Media Browser System assumes that descriptors are classified according to their axis of access either by attributes which are part of the description schemes or by using a set of rules for adding the attributes to the description schemes. In the latter case the rules may be associated only with the Media Browser application or can be more widely used in, for example, other applications.
In the event that the classifications cannot be achieved with one of the foregoing methods, axes of access classifications for individual descriptors can be inferred by the Media Browser System. This inferred classification can use information about the base elements defined using the DDF. For example, a descriptor could be classified as a TOC descriptor if it is directly associated with either a resource or a section of the resource. If descriptions are generated according to the DDF then any descriptor which is a specialisation of a description element will have an associated resource through its definition (see Section 3.1.2.2) and any descriptor that contains a linking element to a section of a resource (see Section 3.1.4) will have an associated resource through the target of its link. These descriptors could be inferred to be TOC descriptors. All remaining descriptors could be treated as index descriptors. Although this method of classifying descriptors might not be ideal (all non-TOC descriptors might not appear to be sensible index descriptors), it does enable the Media Browser System to present descriptions not having sufficient presentation rules.
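A minimal sketch of this inference, written in Java purely for illustration, is set out below. The Descriptor interface and its two accessors are assumptions made for the sketch and are not interfaces defined elsewhere in this specification.

// Illustrative sketch only: Descriptor, its accessors and AxisOfAccess are
// assumed types for this example, not part of the DDF or the Media Browser.
enum AxisOfAccess { TOC, INDEX }

interface Descriptor {
    // true if the descriptor is a specialisation of a description element
    // and therefore has an associated resource through its definition
    boolean hasAssociatedResource();

    // true if the descriptor contains a linking element whose target is a
    // section of a resource (eg. a locator/extent)
    boolean linksToResourceSection();
}

final class AxisClassifier {
    // Infer an axis-of-access classification when neither description scheme
    // attributes nor classification rules are available.
    static AxisOfAccess classify(Descriptor d) {
        if (d.hasAssociatedResource() || d.linksToResourceSection()) {
            return AxisOfAccess.TOC;   // directly associated with a resource or a section of it
        }
        return AxisOfAccess.INDEX;     // all remaining descriptors are treated as index descriptors
    }
}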
The foregoing only describes a small number of embodiments of the present invention, however, modifications and/or changes can be made thereto by a person skilled in the art without departing from the scope and spirit of the invention. The present embodiments are, therefore, to be considered in all respects to be illustrative and not restrictive.
In the context of this specification and accompanying aspects of invention, the word "comprising" means "including principally but not necessarily solely". Variations of the word comprising, such as "comprise" and "comprises" have correspondingly varied meanings.
Appendix A: Core DDF Element Definitions

Core.ddf
Core.ddf: Contains the definitions of core DDL elements
<!NOTATION JavaClass SYSTEM "java">
<!ENTITY % DataTypes "(Int | Float | Double | String | Date | Time | ID | IDREF | IDREFS | ENTITY | ENTITIES)">

Definition of core elements
<!ELEMENT Descriptor (ANY)>
<!ATTLIST Descriptor
id
xml:lang
dataType
superElement
handler
ID
CDATA
%DataTypes;
NMTOKEN
ENTITY
#IMPLIED "String"
#IMPLIED
#IMPLIED
#FIXED "Descriptor"
#REQUIRED
#IMPLIED #IMPLIED

<!ELEMENT Description (Descriptor+)>
<!ATTLIST Description
superElement NMTOKEN
resource ENTITY
dateResourceLastModified CDATA
ruleSets ENTITIES

Definition of selected relationship elements
<!ELEMENT ParallelSequence (Descriptor+)>
<!ATTLIST ParallelSequence
superElement NMTOKEN #FIXED "Descriptor" >

<!ELEMENT SerialSequence (Descriptor+)>
<!ATTLIST SerialSequence
superElement NMTOKEN #FIXED "Descriptor" >

<!ELEMENT Neighbours (#PCDATA)>
<!ATTLIST Neighbours
superElement NMTOKEN #FIXED "Descriptor"
dataType %DataTypes; #FIXED "IDREFS" >

<!ELEMENT Before (#PCDATA)>
<!ATTLIST Before
superElement NMTOKEN #FIXED "Descriptor"
dataType %DataTypes; #FIXED "IDREFS" >

<!ELEMENT After (#PCDATA)>
<!ATTLIST After
superElement NMTOKEN #FIXED "Descriptor"
dataType %DataTypes; #FIXED "IDREFS" >
%DataTypes; CFP1594AU IPR32-41-GRP2 489841 CFPI94AU1PR2-41GRP 48941 :\ELEC\CISRA\IPR\IPPR324 I GRP2148984 I .doc:[PWM -103- <!ELEMENT InFrontOf (#PCDATA)> <!ATTLIST In~rontOf superElement NMTOKEN dataType %DataTypes; <!ELEMENT Behind (#PCDATA)> <!ATTLIST Behind superElement NMTOKEN dataType %DataTypes; #FI!XED "Descriptor" #FIXED "IDREFS" #FIXED "Descriptor" #FIXED "IiDREFS" Definition of link elements <!ELEMENT CLink (#PCDATA)> <!ATTLIST CLink superElement NMTOKEN #iFIXED "Descriptor" dataType %DataTypes; #FIXED "IDREF" <!ELEMENT Wink (#PCDATA)> <!ATTLIST Wink superElement dataType
NMTOKEN
%DataTypes; #FIXED "Descriptor" #FIXED "IDREFS" CFP1594AUIPR3241-GRP2 489841 CFPI94AU1PR2-41GRP 48941 :\ELEC\CISRA\IPR\IPR32-4 I GPP2]48984 I .dOC: PWM -104- Definition of locator and extent elements <!ELEMENT Locator (Extent+)> <!ATTLIST Locator superElement NMTOKEN #FIXED "Descriptor" resource ENTITY #REQUIRED <!ELEMENT Extent (Descriptor+)> <!ATTLIST Extent superElement NMTOKEN #FIXED "Descriptor" <!ELEMENT ImageExtent (Descriptor+)> <!ATTLIST ImageExtent superElement NMTOKEN #IFIXED "Extent" <!ELEMENT RectlmageExtent (RectlmageExtentXO, RectlmageExtentYO, RectlmageExtentHeight, RectlmageExtentWidth)> <!ATTLIST RectImageExtent superElement NMTOKIEN <!ELEMENT RectlmageExtentXO (#PCDATA)> <!ATTLIST RectlmageExtentXO superElement NMTOKEN dataType %DataTypes; <!ELEMENT RectlmageExtentYO (#PCDATA)> <!ATTLIST RectlmnageExtentYO superElement NMTOKIEN dataType %DataTypes; #FIXED, "ImageExtent" #FIXED "Descriptor" #FIXED "Int" #FIXED "Descriptor" #FIXED "Int" CFP1594AU IPR3241-GRP2 489841 CFPI94AU1PR2-41GRP 48941 :\ELEC\CISRA\IPR\IPR32-4 I GRP2]48984 I doc: PWM -105- <!ELEMENT RectlmageExtentHeight (#PCDATA)> <!ATTLIST RectlmageExtentHeight superElement NMTOKEN dataType %DataTypes; <!ELEMENT RectlmageExtentWidth (#PCDATA)> <ATTLIST RectimageExtent Width superElement NMTOKEN dataType %DataTypes; #FIXED "Descriptor" #FIXED "Tnt" #FIXED "Descriptor" #FIXED "Int" <!ELEMENT VideoExtent (VideoExtentStart, VideoExtentEnd, ImageExtent?)> <!ATTLIST VideoExtent superElement NMTOKEN #FIXED "Extent" <!ELEMENT VideoExtentStart (#PCDATA)> <I.ATTLIST VideoExtentStart superElement NMTOKiEN #FIXED "Descriptor' dataType %DataTypes; #FIXED "Int"
<!ELEMENT VideoExtentEnd (#PCDATA)> <ATTLIST VideoExtentEnd superElement NMTOKE-N dataType %DataTypes; #FIXED "Descriptor" #FIXED "Int" CFP I 594AU IPR32-4 I -GRP2 489841 CFPI54AU PR32-1 _GP2 48841 :\ELEC\CISRA\IPR\IPR32-4 I GRP2148984 L doc: IPWM -106- Appendix B: An Example Description Scheme for an Australian Football League Game Description Scheme (A-FLGame.ddf) Core.ddf included here <!ENTITY Core SYSTEM "Core.ddf'> %Core; q
Scheme specific entities <!ENTITY AFLGameGen SYSTEM "AFLGameGen.class" NDATA
JAVACLASS>
<!ENTITY PlayType "(Mark I Kick I Handball I Tackle)" Element definitions B3 *<.I!ELEMENT AFLGameDescription (Game, Locator*)> <!ATTLIST AFLGameDescription :superElement NMTOKEN #FIXED "Description" handler ENTITY #FIXED "AELGameGen" B4 <!ELEMENT Game (Location, Date, TeamName*, Quarter*)> <!ATTLIST Game superElement NMTOKEN #FIXED "Descriptor" <!ELEMENT Location (#PCDATA)> <!ATTLIST Location superElement NMTOKEN #FIXED "Descriptor" <!ELEMENT Date (#PCDATA)> CFP1594AU IPR32-41-GRP2 489841 1AELEC\C1SRAVPR\1PR32-4 I GPP2]48984 Ldoc: PWM -107- <!ATTLIST Date superElement dataType
NMTOKEN
%DataTypes; <!ELEMENT TeamName (#PCDATA)> <!ATTLIST TeamName superElement NMTOKEN <ELEMENT Quarter (Play*)> <!ATTLIST Quarter superElement NMTOKEN #FIXED "Descriptor" #FIXED "Date" #FIXED "Descriptor" #FIXED "Descriptor" <!ELEMENT Play (PlayerNo, PlayType, Annotator, CLink*)> <!ATTLIST Play superElement NMTOKEN #FIXED "Descriptor" <ELEMENT PlayerNo (#PCDATA)> <!ATTLIST PlayerNo superElement NMTOKEN dataType %DataTypes; <!ELEMENT PlayType (EMPTY)> <!ATTLIST PlayType superElement NMTOKEN value %PlayType; <!ELEMENT Annotator (#PCDATA)> <!ATTLIST Annotator superElement NMTO}CEN #FIXED "Descriptor" #FIXED "Int" #FIIXED "Descriptor"
#REQUIRED
#FIXED "Descriptor" CFP1594AU IPR32-41-GRP2 489841 CFP594U 1R3241 GRP 488411:\ELEC\CI5RA\IPR\1PR32-4 IGRP2]489841 .doc:PWM -108- Appendix C: An Example Description generated from the Description Scheme in Appendix B Example Description (AiFLGameEg.xml) <?xml version=" 1.0" standalone "no" <!DOCTYPE AiFLGameDescription SYSTEM "NFLGame.ddf'[ <!ENTITY Match Video SYSTEM "Match Video.mpg" NDATA MPEG2> <AFLGameDescription resource "MatchVideo"> description of the game is contained in this section-> <Game> First some details of the game being player-> <Location>Sydney Cricket Ground</Location> <Date>1I998-08-09<fDate> <TeamName>Sydney Swans</TeamName> <TeamName>West Coast Eagles</TeamName> <!-Now add play information with links <Quarter id "Q 1I"> *<Play id ="P1> <PlayerNo>23 </PlayerNo> <PlayType value "Mark"!> <Annotator>John Smith</Annotator> <CLink linkend "Li"1 </Play> <Play id "P2"> <PlayType value <Annotator>Joe BloggsK/Annotator> <CLink linkend ="L9/ </Play> CFP1594AUIPR3241-GRP2 489841 CFPI54AU PR3241_GR2 48841 :\ELEC\CI5RA\PR\1PR32-4 I GRP2]48984 L doc: PWM -109- </Quarter> <Quarter id </Quarter> <Quarter id </Quarter> <Quarter id </Quarter> </Game> This section now contains the linkends for the various plays <Locator id "Li1" resource "Match Video"> <VideoExtent <VideoExtentStart>O</VideoExtentStart> <VideoExtentEnd>1 O</ideoExtentEnd> <RecthmageExtent> O</RectlmageExtentYO> <RecthmageExtentHeight> I 00</RectlmageExtentHei ght> </RectlmageExtent> </VideoExtent> <VideoExtent> <VideoExtentStart> 11 <IVideoExtentStart> <VideoExtentEnd>3 2<INideoExtentEnd> *.*<RectlmageExtent> <RectlmageExtentYO> I O0</Rect~mageExtentYO> <RecthmageExtentHeight>I 00</RectimageExtentHeight> </RectlmageExtent> </VideoExtent> </Locator> CFP1594AU IPR3241-GRP2 489841 CFPI94AU1PR3-41_GRP 48941 :\ELEC\CISRA\IPR\IPR32-41 GRP2]489841 .doc:PWM -110- <Locator id resource "MatchVideo"> <VideoExtent> <VideoExtentStart>O</VideoExtentStart> <VideoExtentEnd>2 <RectlmageExtent> <RectlmageExtentXO>200</RectlmageExtentXO> <RectlmageExtentYO> 15 O<IRectlmageExtentYO> <RectlmageExtentHeight>8O</RectlmageExtentHeiglht> <RectlmageExtentWidth>3 O/RectlmageExtent Width> </RectlmageExtent> <I ideoExtent> </Locator> AFLGameDescription 9. 9.* 9 99 9.
*b CFPI 594AU IPR32-41 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-4 GRP2148984 i .doc:PIWM -111- Appendix D: Digital Video Resource Description Scheme Description Scheme (Video.ddf) K!-Core.ddf included here <!ENTITY Core SYSTEM "Core.ddf' %Core; p.
Scheme specific entities <!ENTITY VideoDescGen SYSTEM "VideoDescGen.class" NDATA JAVACLAS <!ENTITY ShotAnalyser SYSTEM "ShotAnalyser.class" NDATA JAVACLAS <!ENTITY VideoPresRules SYSTEM "VideoPresentationRules.xml"> Video resource related element definitions <!ELEMENT VideoDescription (Title, Shot*, Locator*)> <ATTLIST VideoDescription superElement NMTOKEN #FIXED "Description" handler ENTITY #FIXED "VideoDescGen" ruleSets ENTITIES #FIIXED "VideoPresRules" D2 CFP1594AU [PR3241-GRP2 489841 CFPI54AU PR3241_GP2 49841I:\ELEC\CISPRA\IPR\IPR32-41IGRP2]489841 I doc:PWM -112- <!ELEMENT Title (#PCDATA)> <!ATTLIST Title superElement NMTOKEN 0**S
0 *0 SO S Oe 05 <!ELEMENT Shot (Descriptor*)> <!ATTLIST Shot superElement NMTOKEN handler ENTITY keyFrame ENTITY locator J\IDREF D4 <!ELEMENT PeopleScene (EMPTY)> ATTLIST PeopleScne superElement NMTOKEN <!ELEMENT CrowdScene (EMPTY)> <!ATTLIST CrowdScene superElement NMTOKEN <!ELEMENT PortraitScene (EMPTY)> <!ATTLIST PortraitScene superElement NMTOKEN <ELEMENT IndoorScene (EMPTY) <!ATTLIST IndoorScene superElement NMTOKEN <ELEMENT OutdoorScene (EMPTY) <!ATTLIST OutdoorShot superElement NMTOKEN #FIXED "Descriptor" D3 I#FIXED "Descriptor" #FIXED "ShotAnalyser"
#REQUIRED
#REQUIRED
i#FIXED "Descriptor" #IXDED "PeopleScene" #FIXED "PeopleScene" #FIXED "Descriptor" #FIXED "Descriptor" CFP1594AU IPR.3241 _GRP2 489841 1:\ELEC\C1SRA\IPR\1PR32-4 I GRP2]48984 I doc:PWM -113- Appendix E: An Example Description generated from the Video Description Scheme in Appendix D Example Description (VideoEgl1.xml) <?xml version="1.O" standalone "no" <!DOCTYPE VideoDescription SYSTEM "Video.ddf'[ <!ENTITY MyVideo SYSTEM "MyVideo.mpg" NDATA MPEG2> <ENTITY KFramel SYSTEM "KFrame2.jpg" NDATA JPEG> <!ENTITY KFrame2 SYSTEM "KFrame2.jpg" NDATA JPEG> etc.
<VideoDescription resource "MyVideo"> <Title>Video Clip Title</Title> Shots detected in the digital video resource <Shot id "S I" keyFrame "KiramnelI" locator "L I" <CrowdScene!> <OutdoorScene!> </Shot> <Shot id "S2" keyFrame ="KiFrame2" locator "L" <PortraitScene!> <OutdoorScene!> :</Shot> Locators in the digital video resource <Locator id "L I" resource "MyVideo"> <VideoExtent <VideoExtentStart>O</VideoExtentStart> <!VideoExtent> </Locator> <Locator id CFP1594AU [PR32-41-GRP2 489841 CFPI94AU1PR2-41GRP 48941 :\ELEC\C1SRA\IPR\1PR32-4 I GRP2]489841 .dac: PWM -114- <VideoExtent <VideoExtentStart>2 1 </VideoExtentStart> O</VideoExtentEnd> <I VideoExtent> </Locator> </VideoDescription> CF19A .P341GP 4881.\LCCSAIRIR34IGP188 ~O: -115- Appendix F: Presentation Rules for the Video Description Scheme in AppendixD, Example Description (VideoPresentationRules.xml) <?xml version=" 1.O0" standalone "no" <!DOCTYPE PresentationRules SYSTEM "Rules.dtd" <!ENTITY CrowdScene SYSTEM "CrowdScenelconjpg" NDATA JPEG> <!ENTITY PortraitScene SYSTEM "PortraitScenelconjpg" NDATA JPEG> <ENTITY OutdoorScene SYSTEM "OutdoorScenelconjpg" NDATA JPEG> <ENTITY IndoorScene SYSTEM "JndoorScenelconjpg" NDATA JPEG> ****<PresentationRules> <Rule target ElementDefn pattern "VideoDescription"> <Action> <AddAttributeDef attName "selected" attType "CDATA" attDefault </Action> *..<Action> :<AddAttributeDef attName "presentationType" attType ="(IndexiTOC)" attDefault =#FIXED "TOC"/> </Action> </Rule> <Rule target ElementDefn pattern ="VideoDescription/Shot"> <Action> <AddAttributeDef attName "selected" CFP1594AUIPP,32-41-GRP2 489841 CFPI94AU1PR3-41_GRP 4894! :\ELEC\CI5RA\1PR\1PR32-4 I GRP2]48984 I .doc:PWM -116attType "CDATA" attDefault </Action> <Action> <AddAttributeDef attName "presentationType" attType "(IndexITOC)" attDefault #FIXED "TOC"/> </Action> </Rule> <Rule target ElementDefn pattern ="VideoDescription/Shot/CrowdScene"> <Action> <AddAttributeDef attName "presentationType" attType "(IndexITOC)" attDefault #FIXED "Index"/> </Action> <Action> <AddAttributeDef attName "icon"~ attType "ENTITY" attDefault #FIXED "CrowdScene"/> </Action> </Rule> a a a.
CFP1594AU 1PR32-41 _GRP2 489841 1:\ELEC\CISRA\IPR\IPR32-4 I GRP2]489841 I doc:PWM -117- <Rule target= ElementDefni pattern ="VideoDescriptionlShot/PortraitScene"> <Action> <AddAttributeDef attName "presentationType" attType "(IndexITOC)" attDefault #FIXED "Index"'> </Action> <Action> <AddAttributeDef attName "icon"~ attType "ENTITY" attDefault #FIXED "PortraitScene"!> </Action> </Rule> <Rule target ElementDefn pattern= "VideoDescriptionlShot/IndoorScene"> <Action> <AddAttributeDef attName "presentationType" 0 0 20attType "(IndexITOC)" attDefault #iFIXED "Index"!> </Action> <Action> <AddAttributeDef attName "icon"~ attType "ENTITY" attDefault #FIXED "IndoorScene"!> </Action> </Rule> CFP I 594AU IPR324 I -GRP2 489841 CFPI54AU PR32-1 _GP2 48841 :\ELEC\CISRA\IPR\1PR32-41IGRP2]48984I .dOC:PWM -118- <Rule target=ElementDefni pattern ="VideoDescription/ShotlOutdoorScene"> <Action> <AddAttributeDef attName "presentationType" attType "(IndexITOC)" attDefault =#FIXED "Index"/> </Action> <Action> <AddAttributeDef attName "icon" attType "ENTITY" attDefault #tFIXED "OutdoorScene"/> </Action> </Rule> </PresentationRules> *9 9. 9
*9 CFP I594AU IPR32-4 IGRP2 489841 I:\ELEC\CISRA\PR\1PR32-4 I GRP2]48984 I doc:PWM -119- Appendix G: Digital Video Library Description Scheme Description Scheme (Digital VideoLibrary.ddf) <!-Core.ddf included here-> <!ENTITY Core SYSTEM "Core.ddf'> %Core; Scheme specific entities <!IENTITY VideoLibraryGen SYSTEM "VideoLibraryGen.class" NDATA JAVACLASS> Digital Video Library related element definitions <!ELEMENT Digital VideoLibraryDescription (Section* IItem*)> <!ATTLIST Digital VideoLibraryDescription.
superElement NMTOKEN #FIXED "Description"
handler ENTITY #FIXED "VideoLibraryGen"
title CDATA #IMPLIED >

<!ELEMENT Section (Section* | Item*)>
<!ATTLIST Section
superElement NMTOKEN #FIXED "Descriptor"
title CDATA #IMPLIED >

<!ELEMENT Item (EMPTY)>
<!ATTLIST Item
superElement NMTOKEN #FIXED "Descriptor"
description ENTITY #REQUIRED >
CFP I 594AU IPR32-4 I -GRP2 489841 CFPI54AU 1R32-4 _GRP 48981 I:ELEC\CISRA\IR1P-32-4 GRP2]48984 I .doc:PWM -121- Appendix H: An Example Description generated from the Digital Video *Library Description Scheme in Appendix G Example Description (VideoLibraryEg.xml) <?xml version="1 standalone 4no" <!DOCTYPE Digital VideoLibraryDescription SYSTEM "Digital VideoLibrary.ddf" <!ENTITY VideoEg 1 SYSTEM "VideoEgl1.xml"> <!ENTITY VideoEg2 SYSTEM "VideoEg2 .xml"> 10 <!ENTITY VideoEg3 SYSTEM "VideoEg3.xml"> etc.
<Digital VideoLibraryDescription title "My Personal Digital Video Library"> <Section title "Holiday Videos"> 15 <Item description "VideoEgl"1 <Item description "VideoEg2"/> etc.
</Section> <Scto til. BrhdyVdo <Section title "aysBirthday s"> <Item description "VideoEg3/> etc.
</Section> <Section title "John's Birthdays"> </Section> </Section> </DigitalVideoLibraryDescription> CFP1594AU IPR3241-GRP2 489841 CFPI94AU1PR3-41_GRP 48941 :\ELEC\CIS RA\IPR\IPR32-4 I GRP2]48984 I .doc:PWM -122- Appendix 1: Video Presentation Description Scheme Description Scheme (VideoPresentation.ddf) <!-Core.ddf included here <!ENTITY Core SYSTEM "Core.ddf'> %Core; Scheme specific entities <!ENTITY VideoPresentationGen SYSTEM "VideoPresentation. class" ::::*NDATA JAVACLASS> <!ENTITY VideoPresentationRules SYSTEM "VideoPresentionRules.xml"> Video Presentation related element definitions 0 <!ELEMENT VideoPresentationDescription (VideoDescriptionReference*)> <!ATTLIST VideoPresentationDescription *superElement NMTOKEN #FIXED "Description" handler ENTITY #FIXED "VideoPresentationGen" title CDATA #IMPLIED :ruleSets ENTITIES #FIXED "VideoPresentationRules" /userPresentationRules ENTITY #IMPLIED >11 7-11 <!ELEMENT VideoDescriptionReference (ShotReference*) <!ATTLIST VideoDescriptionReference superElement NMTOKEN #FIXED "Descriptor" videoDescription ENTITY #REQUIRED CFP1594AU IPR3241-GRP2 489841 CFPI94AU1PR3-41 GRP248981 I:ELEC\C1SRA\PR\1PR32-4 I GRP2]489841 .doc:PWM -123- <!ELEMENT ShotReference (EMPTY) <!ATTLIST ShotReference superElement NMTOKEN #FIXED "Descriptor" shotlDRef IDREF #REQUILRED
CFP1594AU IPR32-41_GRP2 489841 l:\ELEC\CISRA\IPR\1PR32-4 I GRP2]489841 I doc:PWM -124a 0 0 0 0 Appendix J: An Example Description generated from the Video Presentation Description Scheme in Appendix I Example Description .(VideoPresentationEg.xml) <?xml version="1 standalone "no" <!DOCTYPE VideoPresentationDescription SYSTEM "VideoPresentation.ddf'[ <!ENTITY UserPresentationRules SYSTEM "UserPresentationRules.xml"> <ENTITY VideoEgl SYSTEM "VideoEgl .xml"> <!ENTITY VideoEg2 SYSTEM "VideoEg2 .xml"> etc.
<VideoPresentationDescription userPresentationRules "UserPresentationRules"> <VideoDescriptionReference videoDescription "VideoEg 1"> <ShotReference shotliDRef 4 J2 i <ShotReference shotliDRef "2/ VideoDescriptionReference <VideoDescriptionReference description "VideoEg2"> <ShotReference shotIDRef <ShotReference shotiDRef VideoDescriptionReference etc.
</VideoPresentationDescription> CFP I594AU IPR32-4 IGRP2 489841 1:\ELEC\CISRA\IPR\1PR32-4 I GRP2]48984 I .doc:PWM -125- Appendix K: DOM Element Nodes 0 Extract from DOM Version 1.0 obtained on the Website HTTP://www.w3.org/TR/1998/REC-DOM-level-1-199810001 Interface Element By far the vast majority (apart from text) of node types that authors will generally encounter when traversing a document will be Element nodes. These objects represent both the element itself, as well as any contained notes. For example (in XML): <elementExample id="demo"> <subelementl/> <subelement2><subsubelement/></subelement2> </elementExample> S" When represented using DOM, the top node would be "elementExample", which contains two child Element nodes (and some space), one for "subelementl" and one for "subelement2". "subelementl" contains no child nodes of its own.
interface Element: Node wstring getTagName(); Nodelterator get Attributes(); wstring getAttribute (in name name); void setAttribute (in string name, in string value); void removeAttribute(in wstring name); Attribute getAttributeNode(in name name); void setAttributeNode(in Attribute newAttr); void removeAttributeNode(in Attribute oldAttr); void getElementsByTagName(in wstring tagname); void normalize(); Method getTagNameO This method returns the string that is the element's name. For example, in: CFP1594UIPP,3241 GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841 .doc:PWM -lrujn~.r~~i i~u ii; -126r r <elementExample id="demo"> </elementExample> This would have the value "elementExample". Note that this is case-preserving, as are all of the operations of the DOM. See Name case in the DOM for a description why the DOM preserves case.
Parameters This method has no parameters Return Values wstring 10 Exceptions This method throws no exceptions Method getAttributesO The attributes for this element. In the elementExample example above, the attributes list would consist of the id attribute, as well as any attributes which were defined by the document type definition for this element which have default values.
Parameters This method has no parameters Return Values Nodelterator Exceptions This method throws no exceptions
Method getAttributeO CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\PR32-4 IGRP2]489841 .doc:PWM -i -127- Retrieves an Attribute value by name from an Element. object.
Parameters name The name of the attribute to retrieve Return Values wstring Exceptions This method throws no exceptions Method setAttributeO 10 Adds a new attribute/value pair to an Element node object. If an attribute by that name is already present in the element, its value is changed to that of the value parameter.
Parameters name value 15 Return Values void Exceptions This method throws no exceptions Method removeAttributeO Removes the specified attribute from an Element node object.
Parameters name Return Values
CFP1594AUIPR32-41 GRP2 489841 1:\ELEC\CISRA\IPR\IPR32-41GRP2]489841 .doc:PWM r4f -128- *t*0 void Exceptions This method throws no exceptions Method getAttributeNodeO Retrieves an Attribute node by name from an Element. object.
Parameters name The name of the attribute to retrieve Return Values Attribute Exceptions This method throws no exceptions Method setAttributeNodeO Adds a new attribute/value pair to an Element node object. If an attribute by that name is already present in the element, its value is changed to be that of the Attribute instance.
Parameters newAttr Return Values void Exceptions This method throws no exceptions
Method removeAttributeNodeO CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\C5RA\IPR\1PR32-4 I GRP248984 .doc:PWM II~ -129- Removes the specified attribute/value pair from an Element node object.
Parameters oldAttr Return Values void Exceptions a o O 10 This method throws no exceptions Method getElementsByTagNameO Returns an iterator through all subordinate elements with a given tag name.
Parameters tagname Return Values void Exceptions This method throws no exceptions Method normalizeO Puts all Tet nodes in the sub-tree underneath this Element into a "normal" form where only markup (eg tags, comment, PIs, CDATASections) and entity references separate Text nodes.
Parameters This method has no parameters Return Values CFP1594AU IPR32-41_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-4 I GRP2]489841 .doc:PWM -130void Exceptions This method throws no exceptions Attribute data This holds the actual content of the text node. Text nodes contain just plain text, without markup and without entities, both of which are represented as separate objects in the DOM.
Claims (48)
1. A method of applying a set of rules to a description of an electronically-accessible resource, said method comprising the steps of: reading a said description of the resource, wherein said read description comprises one or more descriptor components; reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with a set of one or more specified actions; locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and performing said specified actions on said read description in response to locating a said predetermined pattern.

2. The method as claimed in claim 1, wherein each said descriptor component comprises the association of a resource attribute with a representative value for that attribute.

3. The method as claimed in claim 1, wherein said descriptor components are defined in a description scheme using declarative description language.
4. The method as claimed in claim 1, wherein said read description is represented as a tree of descriptor components and one or more of said descriptor components have descriptor components as descendents.
5. The method as claimed in claim 4, wherein the said patterns of descriptor components are defined in context of said tree of descriptor components.
6. The method as claimed in claim 1, wherein the said resource is an item of digital content.
7. The method as claimed in claim 1, wherein the method iterates through the rules until no further changes can be made to the description.
8. The method as claimed in claim 1, wherein the pattern of descriptors in the rule can include logically combined patterns of descriptors.
9. The method as claimed in claim 3, wherein the set of rules is associated with a description scheme.

10. Apparatus for applying a set of rules to a description of an electronically-accessible resource, said apparatus comprising: means for reading a said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with a set of one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and means for performing said specified actions on said read description in response to locating a said predetermined pattern.

11. Apparatus as claimed in claim 10, wherein each said descriptor component comprises the association of a resource attribute with a representative value for that attribute.
12. Apparatus as claimed in claim 10, wherein said descriptor components are defined in a description scheme using declarative description language.
13. Apparatus as claimed in claim 10, wherein said read description is represented as a tree of descriptor components and one or more of said descriptor components have descriptor components as descendents.
14. Apparatus as claimed in claim 13, wherein the said patterns of descriptor components are defined in context of said tree of descriptor components.

15. Apparatus as claimed in claim 10, wherein the said resource is an item of digital content.
16. Apparatus as claimed in claim 10, wherein the apparatus comprises means for iterating through the rules until no further changes can be made to the description.
17. Apparatus as claimed in claim 10, wherein the pattern of descriptors in the rule can include logically combined patterns of descriptors.
18. Apparatus as claimed in claim 6, wherein the set of rules is associated with a description scheme.

19. A computer readable medium comprising a computer program for applying a set of rules to a description of an electronically-accessible resource, said computer program comprising: code for reading a said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with a set of one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and code for performing said specified actions on said read description in response to locating a said predetermined pattern.

20. A computer readable medium as claimed in claim 19, wherein each said descriptor component comprises the association of a resource attribute with a representative value for that attribute.
21. A computer readable medium as claimed in claim 19, wherein said descriptor components are defined in a description scheme using declarative description language.
22. A computer readable medium as claimed in claim 19, wherein said read description is represented as a tree of descriptor components and one or more of said descriptor components have descriptor components as descendents.
23. A computer readable medium as claimed in claim 22, wherein the said patterns of descriptor components are defined in context of said tree of descriptor components.
24. A computer readable medium as claimed in claim 19, wherein the said resource is an item of digital content.

25. A computer readable medium as claimed in claim 19, wherein the computer program comprises code for iterating through the rules until no further changes can be made to the description.
26. A computer readable medium as claimed in claim 19, wherein the pattern of descriptors in the rule can include logically combined patterns of descriptors.
27. A computer readable medium as claimed in claim 19, wherein the set of rules is associated with a description scheme.
28. A method of extending a description of an electronically-accessible resource, said method comprising the steps of: reading a said description of the resource, wherein said read description comprises one or more descriptor components; reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the creation of a new descriptor or the removal of an existing descriptor from the description.
29. The method as claimed in claim 28, wherein each said descriptor component comprises the association of a resource attribute with a representative value for that attribute.

30. The method as claimed in claim 28, wherein said descriptor components are defined in a description scheme using declarative description language.
31. The method as claimed in claim 28, wherein said read description is represented as a tree of descriptor components and one or more of said descriptor components have descriptor components as descendents.
32. The method as claimed in claim 31, wherein the said patterns of descriptor components are defined in context of said tree of descriptor components.
33. The method as claimed in claim 28, wherein the said resource is an item of digital content. o•
34. The method as claimed in claim 28, wherein the method iterates through the rules until no further changes can be made to the description.

35. The method as claimed in claim 28, wherein the pattern of descriptors in the rule can include logically combined patterns of descriptors.
36. The method as claimed in claim 30, wherein the set of rules is associated with said description scheme.

37. Apparatus for extending a description of an electronically-accessible resource, said apparatus comprising: means for reading a said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and means for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and
38. A computer readable medium comprising a computer program for extending a description of an electronically-accessible resource, said computer program comprising: code for reading a said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and code for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the creation of a new descriptor or the removal of an existing descriptor from the description.
39. A method of visually presenting a description of an electronically-accessible resource, said method comprising the steps of: reading a said description of the resource, wherein said read description comprises one or more descriptor components; reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; locating patterns of descriptor components of said read description which correspond with said predetermined pattern; performing said specified actions on said read description in response to locating a said predetermined pattern, wherein said action comprises the addition or removal of a presentation property for one or more said descriptor components in said description to be visually presented; and visually presenting the read description using the presentation properties.

40. The method as claimed in claim 39, wherein each said descriptor component comprises the association of a resource attribute with a representative value for that attribute.
41. The method as claimed in claim 39, wherein said descriptor components are defined in a description scheme using declarative description language.
42. The method as claimed in claim 39, wherein said read description is represented as a tree of descriptor components and one or more of said descriptor components have descriptor components as descendents.
43. The method as claimed in claim 42, wherein the said patterns of descriptor components are defined in context of said tree of descriptor components.
44. The method as claimed in claim 39, wherein the presentation property is an attribute of a descriptor definition that specifies whether the descriptor can be selected by a user.

45. The method as claimed in claim 39, wherein the presentation property is an attribute of a descriptor definition which classifies whether instances of the descriptor in the description should be assigned a particular presentation classification which influences how the descriptor is presented in an application.
46. The method as claimed in claim 45, wherein the said presentation classification is a Table-of-Contents classification.
47. The method as claimed in claim 45, wherein the said presentation classification is an Index classification.
48. The method as claimed in claim 45, wherein the said presentation classification is a classification that informs the method that the said descriptor instances are not presentable.
49. The method as claimed in claim 39, wherein the presentation property is an icon which is used to graphically represent instances of the descriptor in an application.

50. Apparatus for visually presenting a description of an electronically-accessible resource, said apparatus comprising: means for reading a said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; means for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein said action comprises the addition or removal of a presentation property for one or more said descriptor components in said description to be visually presented; and means for visually presenting the read description using the presentation properties.
51. A computer readable medium comprising a computer program for visually presenting a description of an electronically-accessible resource, said computer program comprising: code for reading a said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; code for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein said action comprises the addition or removal of a presentation property for one or more said descriptor components in said description to be visually presented; and code for visually presenting the read description using the presentation properties.
52. A method of translating a description of an electronically-accessible resource, wherein said description is in a first language, said method comprising the steps of:
53. Apparatus for translating a description of an electronically-accessible resource, wherein said description is in a first language, said apparatus comprising: means for requesting said description for fuirther processing, wherein said request is in a second language; means for reading said description of the resource, wherein said read description comprises one or more descriptor components; means for reading a set of rules, wherein each said rule associates a predetermined :pattern of said descriptor components with one or more specified actions; means for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and means for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the replacement of one or more existing descriptors in the description with one or more equivalent descriptors thereby achieving a fuill or partial translation of the read description from the first language to the second language. CFP1594AU 1PR32-41 _GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-4 I GRP2]48984 I .doc:PWM -140-
54. A computer readable medium comprising a computer program for translating a description of an electronically-accessible resource, wherein said description is in a first language, said computer program comprising: code for requesting said description for further processing, wherein said request is in a second language; code for reading said description of the resource, wherein said read description comprises one or more descriptor components; code for reading a set of rules, wherein each said rule associates a predetermined pattern of said descriptor components with one or more specified actions; code for locating patterns of descriptor components of said read description which correspond with said predetermined pattern; and code for performing said specified actions on said read description in response to locating a said predetermined pattern, wherein each said action is inferred by the presence of the predetermined pattern of said descriptor components in the description and comprises the replacement of one or more existing descriptors in the description with one or more equivalent descriptors thereby achieving a full or partial translation of the read description from the first language to the second language. A method of extending a description of an electronically-accessible resource, the method substantially as described herein with reference to the accompanying drawings.
56. Apparatus for extending a description of an electronically-accessible resource, the apparatus substantially as described herein with reference to the accompanying drawings.
57. A computer readable medium comprising a computer program for extending a description of an electronically-accessible resource, the computer program substantially as described herein with reference to the accompanying drawings.
58. A method of visually presenting a description of an electronically-accessible resource, the method substantially as described herein with reference to the accompanying drawings. CFPI 594AU 1PR32-4 I_GRP2 489841 I:\ELEC\CISRA\IPR\IPR32-41GRP2]489841.doc:PWM iri~z~;~~L -141-
59. Apparatus for visually presenting a description of an electronically-accessible resource, the apparatus substantially as described herein with reference to the accompanying drawings.
60. A computer readable medium comprising a computer program for visually presenting a description of an electronically-accessible resource, the computer program substantially as described herein with reference to the accompanying drawings.
61. A method of translating a description of an electronically-accessible resource, the method substantially as described herein with reference to the accompanying drawings.
62. Apparatus for translating a description of an electronically-accessible resource, the apparatus substantially as described herein with reference to the accompanying drawings.
63. A computer readable medium comprising a computer program for translating a description of an electronically-accessible resource, the computer program substantially as described herein with reference to the accompanying drawings.

Dated this Twenty-Eighth Day of January 2000
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant
SPRUSON FERGUSON
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU13609/00A AU744893B2 (en) | 1999-01-29 | 2000-01-28 | Applying a set of rules to a description of a resource |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AUPP8370A AUPP837099A0 (en) | 1999-01-29 | 1999-01-29 | Method and apparatus for translating a description of a resource |
AUPP8371A AUPP837199A0 (en) | 1999-01-29 | 1999-01-29 | Method and apparatus for a method of visually presenting a description of a resource |
AUPP8372 | 1999-01-29 | ||
AUPP8371 | 1999-01-29 | ||
AUPP8370 | 1999-01-29 | ||
AUPP8372A AUPP837299A0 (en) | 1999-01-29 | 1999-01-29 | Method and apparatus for extending a description of a resource |
AU13609/00A AU744893B2 (en) | 1999-01-29 | 2000-01-28 | Applying a set of rules to a description of a resource |
Publications (2)
Publication Number | Publication Date |
---|---|
AU1360900A AU1360900A (en) | 2000-08-24 |
AU744893B2 true AU744893B2 (en) | 2002-03-07 |
Family
ID=27422524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU13609/00A Ceased AU744893B2 (en) | 1999-01-29 | 2000-01-28 | Applying a set of rules to a description of a resource |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU744893B2 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1989009971A2 (en) * | 1988-04-13 | 1989-10-19 | Digital Equipment Corporation | Method of integrating software application programs using an attributive data model database |
EP0938053A1 (en) * | 1998-02-20 | 1999-08-25 | Hewlett-Packard Company | Methods of refining descriptors |
-
2000
- 2000-01-28 AU AU13609/00A patent/AU744893B2/en not_active Ceased
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1989009971A2 (en) * | 1988-04-13 | 1989-10-19 | Digital Equipment Corporation | Method of integrating software application programs using an attributive data model database |
EP0938053A1 (en) * | 1998-02-20 | 1999-08-25 | Hewlett-Packard Company | Methods of refining descriptors |
Also Published As
Publication number | Publication date |
---|---|
AU1360900A (en) | 2000-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7287018B2 (en) | Browsing electronically-accessible resources | |
US7099946B2 (en) | Transferring a media browsing session from one device to a second device by transferring a session identifier and a session key to the second device | |
US7162691B1 (en) | Methods and apparatus for indexing and searching of multi-media web pages | |
US20030018607A1 (en) | Method of enabling browse and search access to electronically-accessible multimedia databases | |
US20020152267A1 (en) | Method for facilitating access to multimedia content | |
US20030225829A1 (en) | System and method for platform and language-independent development and delivery of page-based content | |
KR20080005491A (en) | Efficiently describing relationships between resources | |
Smith et al. | Visual annotation tool for multimedia content description | |
Van Ossenbruggen et al. | Smart style on the semantic web | |
AU745061B2 (en) | Applying procedures to electronically-accessible resources and/or descriptions of resources | |
AU744893B2 (en) | Applying a set of rules to a description of a resource | |
AU776284B2 (en) | Browsing electronically-accessible resources | |
AU743900B2 (en) | Browsing electronically-accessible resources | |
JP2000353120A (en) | Method for processing resource electronically accessible and/or description of resource | |
Coleman et al. | SGML as a Framework for Digital Preservation and Access. | |
Bekaert et al. | Packaging models for the storage and distribution of complex digital objectsin archival information systems: a review of MPEG-21 DID principles | |
King et al. | METIS: a flexible foundation for the unified management of multimedia assets | |
JP2000298681A (en) | Method for applying rule to resource description | |
Hu et al. | MD/sup 2/L: content description of multimedia documents for efficient process and search/retrieval | |
Di Bono et al. | WP9: A review of data and metadata standards and techniques for representation of multimedia content | |
Feng et al. | Languages for Metadata | |
JP2009187528A (en) | Method of improved hierarchal xml database | |
AU770877B2 (en) | Metadata processes for multimedia database access | |
AU768160B2 (en) | Method of enabling browse and search access to electronically-accessible multimedia databases | |
AU769026B2 (en) | Multimedia database access system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGA | Letters patent sealed or granted (standard patent) |