
US20100005104A1 - Method and system for media navigation - Google Patents

Method and system for media navigation

Info

Publication number
US20100005104A1
US20100005104A1 (Application US 12/561,293, serial US56129309A)
Authority
US
United States
Prior art keywords
media
descriptor
navigation
hierarchy
category list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/561,293
Inventor
Peter C. DiMaria
Markus K. Cremer
Vadim Brenner
Dale T. Roberts
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gracenote Inc
Original Assignee
Gracenote Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gracenote Inc filed Critical Gracenote Inc
Priority to US 12/561,293
Publication of US20100005104A1
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/44 - Browsing; Visualisation therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63 - Querying
    • G06F16/635 - Filtering based on additional data, e.g. user or group profiles
    • G06F16/637 - Administration of user profiles, e.g. generation, initialization, adaptation or distribution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/64 - Browsing; Visualisation therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Definitions

  • This application relates to media navigation, and more specifically to systems and methods for navigating media from a plurality of sources.
  • descriptive metadata ordinarily associated with the digital media may be unavailable, inaccurate, incomplete and/or internally inconsistent.
  • the metadata is often only available via textual tags that indicate only a single level of description (e.g., rock genre), and the level of granularity used in that single level may vary greatly even within a single defined vocabulary for the metadata. Often the level of granularity available is either too detailed or too coarse to meet the needs of a given user interface requirement. Additionally, for a given media object there may be multiple values available for a given descriptor type, and/or multiple values of the same type from multiple sources that may cause additional problems when attempting to navigate the digital media.
  • Portable devices may include constraints that the length of lists and the length of the terms used in such lists remain short and simple.
  • the application logic driving media applications such as automatic playlist engines, recommendation engines, user profiling and personalization functions, and community services benefits from increasingly detailed and granular descriptive data.
  • Systems that attempt to utilize the same set of descriptors for both user interface display and application logic may be unable to meet both needs effectively at once, while using two completely separate systems risks discontinuity and user confusion.
  • different users may use different media navigation structures, labeling and related content when accessing content from different geographic regions, in different languages, from various types of devices and applications, from different user types, and/or according to personal media preferences.
  • Existing navigation structures may not enable developers to easily select, configure and deliver appropriate navigational elements to each user group or individual user, especially in the case of an embedded device.
  • FIGS. 1A & 1B are block diagrams of example navigation systems
  • FIG. 2 is a block diagram of an example configuration system
  • FIG. 3 is a block diagram of an example navigation template
  • FIG. 4 is a block diagram of an example information architecture
  • FIG. 5 is a flowchart illustrating a method for pre-processing the configuration system of FIG. 2 in accordance with an example embodiment
  • FIG. 6 is a flowchart illustrating a method for creating a reference media database in accordance with an example embodiment
  • FIG. 7 is a flowchart illustrating a method for creating an information architecture in accordance with an example embodiment
  • FIG. 8 is a flowchart illustrating a method for creating a descriptor system that may be deployed in the navigation systems of FIGS. 1A & 1B in accordance with an example embodiment
  • FIG. 9 is a flowchart illustrating a method for defining a navigation package in accordance with an example embodiment
  • FIG. 10 is a flowchart illustrating a method for dynamically generating a navigation template in accordance with an example embodiment
  • FIG. 11 is a flowchart illustrating a method for coding descriptor codes in media objects in accordance with an example embodiment
  • FIG. 12 is a flowchart illustrating a method for preloading a client in accordance with an example embodiment
  • FIG. 13 is a flowchart illustrating a method for loading an information architecture in accordance with an example embodiment
  • FIG. 14 is a flowchart illustrating a method for coding media items in accordance with an example embodiment
  • FIG. 15 is a flowchart illustrating a method for loading content from a plurality of sources in accordance with an example embodiment
  • FIG. 16 is a flowchart illustrating a method for content recognition in accordance with an example embodiment
  • FIG. 17 is a flowchart illustrating a method for mapping content IDs in accordance with an example embodiment
  • FIG. 18 is a flowchart illustrating a method for receiving master descriptor codes and content IDs for content in accordance with an example embodiment
  • FIG. 19 is a flowchart illustrating a method for utilizing a navigation package in a client in accordance with an example embodiment
  • FIG. 20 is a flowchart illustrating a method for creating a navigational view in accordance with an example embodiment
  • FIG. 21 is a flowchart illustrating a method for updating navigational views in accordance with an example embodiment
  • FIG. 22 is a flowchart illustrating a method for presenting a navigation view on a client in accordance with an example embodiment
  • FIG. 23 is a flowchart illustrating a method for altering navigation on a client in accordance with an example embodiment
  • FIG. 24 illustrates a diagrammatic representation of machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed;
  • FIG. 25 illustrates a block diagram of an example end-user system in which the client of FIGS. 1A & 1B may be deployed.
  • Example methods and systems for media navigation are described.
  • numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
  • a navigation system may provide consistent, simple, effective, and efficient access to a vast scope of digital media, regardless of application, display device or user interface (UI) paradigm.
  • the navigation system may be deployed to enable end-users to perform customization and re-classification of their media collections without impacting the integrity of an underlying portion of the navigation system.
  • Example embodiments of the navigation system may be used to navigate digital content (e.g., digital audio) and may thus be deployed in a portable media device (e.g., an MP3 player, an iPod, or any other portable audio player), in vehicle audio systems, home stereo systems, computers or the like.
  • the navigation system may enable the efficient configuration and dynamic updating of diverse navigational structures across a plurality of devices and applications, while maintaining the integrity of the underlying metadata.
  • a configuration module of the navigation system may automatically pre-generate and load into a client (e.g., provided on a media player) appropriate alternative normalized media navigation structures.
  • the structures may be used to create an indexed application media database and navigational elements to present one or more views of media items of diverse and multiple sources, types and metadata.
  • the views may be switched on demand, personalized, dynamically adapted and updated without altering the underlying global granular source media identifiers, descriptor codes and/or labeling data.
  • An example system may be built upon a foundation of navigation structures that may include lists, ordering data, hierarchies, trees, weighted relations, mappings, links, and other information architecture elements, as well as entity grouping, labeling, filters and related navigational content.
  • the navigation structures may incorporate a set of semantic entities and descriptors designed to be understandable and appropriate and may be automatically optimized.
  • the navigation system may include a mapping of global internal granular annotation codes to a variety of alternative category lists and navigational content used for user interface purposes.
  • the category lists may be fluidly mixed and matched to generate a wide variety of hierarchies and navigation trees while maintaining the integrity of the parent-child mapping between each list and without any change in the global granular annotation codes.
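  • The following Python sketch illustrates one way such a mapping could be organized; the genre codes, list names and labels below are invented for illustration and are not taken from the patent itself.

```python
# Hypothetical sketch: master genre codes stay fixed while category lists
# at different granularities are mixed and matched into navigation views.

# Master descriptor codes (global, granular, never changed by the UI layer).
MASTER_GENRE_CODES = {
    2301: "Delta Blues",
    2302: "Chicago Blues",
    4105: "Chanson",
}

# Each category list maps a master code to a category label at one granularity.
CATEGORY_LISTS = {
    "genre_25":  {2301: "Blues", 2302: "Blues", 4105: "World"},        # coarse list
    "genre_250": {2301: "Delta Blues", 2302: "Electric Blues",
                  4105: "French Pop"},                                  # finer list
}

def build_view(master_codes, list_ids):
    """Assemble a navigation view by chaining the chosen category lists."""
    view = {}
    for code in master_codes:
        # Each level of the view is just a lookup into a different list;
        # the master codes themselves are never altered.
        view[code] = [CATEGORY_LISTS[list_id][code] for list_id in list_ids]
    return view

# A two-level browse tree for one device ...
print(build_view(MASTER_GENRE_CODES, ["genre_25", "genre_250"]))
# ... and a one-level view for a simpler UI, using the same master codes.
print(build_view(MASTER_GENRE_CODES, ["genre_25"]))
```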
  • a configuration module may be utilized to automatically select from a pre-defined super-set of options, pre-generate and load into a specific connected or unconnected client at time of manufacture or initial start-up an appropriate set of alternative normalized media navigation structures.
  • the options chosen may be determined based on a combination of device, application, region, language, user type, manufacturer/publisher, and/or end-user account IDs which are provided as parameters to the configuration module.
  • a client application may then utilize the navigational structures to create an indexed application media database, typically in conjunction with media recognition technology.
  • the index may compile media items sourced from diverse and multiple types, services and providers. Additionally, the media items may be accessed from diverse and multiple devices, connectivity and transport. The media items may furthermore contain identifiers and metadata of diverse types, completeness, consistency and accuracy without materially impacting the performance of the system.
  • the client application may utilize the navigational structures to generate default and alternative user interface elements such as browse trees, faceted navigation, and relational lists, and/or to drive aspects of application logic such as auto-playlisting, search, personalization, recommendation, internet community, media retail and on-demand subscription services, and the like.
  • the user interface elements present, uniquely for each user on a client, simple, unified (descriptor-based and other) views.
  • the views and structures may be altered on demand, personalized, dynamically adapted (based on personalized and/or contextual input) and updated remotely from a set of master databases via a network service and/or from a local database.
  • the alternate views may be applied on a global (e.g., a country or several countries), region (e.g., regions within a country), application (e.g., an online catalog or store . . . ), device (e.g., portable media player, vehicle media player, etc.), and/or user-account level. All such changes at the presentation information architecture layer may be achieved without altering the underlying master granular media identifiers, descriptor codes, labeling data, and file tags.
  • the master media, information architecture, contextual data, and user profile databases may be refined and appended based on feedback from the clients and thus be modified based on user input.
  • the navigation system may be used to power various applications running on any device rendering media including a search, a recommendation, playlisting, internet community facilitation, stores, on-demand subscription services, and the like.
  • FIG. 1A illustrates an example navigation system 100 .
  • the system 100 may be deployed on any device that renders media (e.g., a portable media player, a vehicle media player such as a vehicle audio system, etc.).
  • the navigation system 100 may include a client 102 in the form of an application or a device.
  • the client 102 may receive architecture information (e.g., selected category lists 122) from a plurality of descriptor systems 104.1-104.n that may be used to enable the client 102 to navigate content from a plurality of media sources 106.1-106.n (e.g., a radio station, a flash drive, an MP3 player ...) and/or a media service 108 (e.g., provided on a media player or by an online catalog) by use of a client application 110.
  • the client application 110 may provide navigational access to content through a user interface 112 .
  • the user interface 112 may provide navigational information and other information to an end-user through a display device (e.g., on a device) or an application.
  • Navigation templates 114 may access one or more descriptor hierarchies 116 available to the client 102 through the user interface 112 to provide one or more navigation views.
  • Master descriptor code list 124 associated with content may be used to enable navigational access to the content (e.g., audio, video, or the like) by mapping it to selected descriptor category lists 122 that are contained within the descriptor hierarchies 116 .
  • the descriptor codes may describe an attribute of an entity of a media item.
  • the selected category lists 122 may be selected from the plurality of available category lists 132 contained within a plurality of the descriptor systems 104.1-104.n.
  • the master descriptor code list 123 of the client may not include all master descriptor codes of the master descriptor code list 124 of a descriptor system 104.1.
  • a descriptor system 104.1 includes a master descriptor code list 124 from which master descriptor codes may be selected.
  • the master descriptor code list 124 may include a list of a plurality of available master descriptor codes and associated names for a particular type of media descriptor.
  • the master descriptor code list 124 for a genre media descriptor may include coded identifications at the greatest amount of granularity desired for genre.
  • the master descriptor code list 124 for genre may include, e.g., more than fifteen hundred different genres.
  • the master descriptor code list 124 may be a detailed genre list used for coding media items such as artists, albums, and recordings.
  • the master descriptor code list 124 for a particular type of descriptor may optionally be given a unique identifier that may be used for identification.
  • the media item may be associated with the descriptor categories by mapping the media item to the master descriptor code.
  • the most granular descriptor category list of the available category lists 132 may be referred to as the master descriptor category list.
  • the master descriptor codes are mapped directly into the master descriptor category list, which may contain an equal or smaller number of categories than there are codes in the master descriptor code list 124 .
  • mapping from the master descriptor code list 124 to each of the other descriptor category lists may not need to be directly updated as the mapping from the master descriptor category list to each of the other descriptor category lists may remain unchanged.
  • the plurality of available category lists 132 may include a number of levels of category lists as part of each descriptor system. Each category list may include a differing number of category codes and associated labels from the other category lists in the descriptor system. For example a category list at a first level may include five category codes and associated labels and at a second level may include twenty category codes and associated labels.
  • the most granular level of the plurality of available category lists 132 may be referred to as a master descriptor category list.
  • Each descriptor category code in each descriptor category list in each descriptor system may be mapped to its parent descriptor category code in the next less granular descriptor category list.
  • a number of child category codes may be mapped to each parent category code.
  • the mapping may have an indirect effect of mapping each master descriptor code to its appropriate parent category in every category list via the master descriptor category list, and mapping every category code to its appropriate parent and child category code in every non-adjacent category list.
  • the mappings from the master descriptor codes to the descriptor category codes may also be stored directly.
  • Each descriptor code may be a unique identifier.
  • mapping may be used.
  • a one to one “downward” mapping 130 may be created and stored for each descriptor category.
  • Because the descriptor categories may generally be of a more aggregate nature than many of the master descriptor codes, the descriptor categories may typically be mapped to the more aggregated master descriptor codes to ensure that only the level of information actually known is encoded.
  • the user interface presented to the submitter may only display the applicable hierarchy of category lists, from which they select the annotation labels. Once selected, the item may be annotated with the mapped master descriptor code. For example, a venue operator may “publish” the music genres that are primarily associated with the venue, but do so by selecting from a simplified list of fifteen meta-genres.
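  • As a rough illustration of such a submission flow (the meta-genre labels and code values below are hypothetical), a "downward" mapping from each displayed category to an aggregated master descriptor code might look like this:

```python
# Hypothetical sketch of the "downward" mapping used when a submitter
# annotates an item by picking from a short meta-genre list: the label
# chosen in the UI resolves to an aggregated master descriptor code.

META_GENRE_TO_MASTER_CODE = {
    "Rock":  1000,   # aggregated "Rock" master code
    "Blues": 2000,   # aggregated "Blues" master code
    "World": 4000,   # aggregated "World" master code
}

def annotate_item(item, selected_label):
    """Attach the master code mapped to the submitter's category choice."""
    code = META_GENRE_TO_MASTER_CODE[selected_label]
    # Only the level of information actually known is encoded: the item
    # gets the aggregated code, not a guess at a more granular one.
    item.setdefault("genre_codes", []).append(code)
    return item

venue = {"name": "Example Venue"}
print(annotate_item(venue, "Blues"))  # {'name': 'Example Venue', 'genre_codes': [2000]}
```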
  • a descriptor system 104.1 may be created for each desired viewpoint.
  • the viewpoints of the descriptor system 104.1 may be defined by a combination of regional, genre preference, psychographic, demographic and/or other factors that combine to define a preferred perception of the applicable descriptor types. Examples of viewpoints of descriptor systems 104.1 include North American Default, Japanese Classical Aficionado, South American, Teen, and Southern European Traditional.
  • different descriptor systems 104.1-104.n may group the same master descriptor code list 124 into substantially different category arrangements. The grouping may enable the ability to substantially change the areas of category focus in different implementations while utilizing the same master descriptor codes.
  • a European genre descriptor system might include the genre “Chanson” in its shorter, more highly aggregated genre category lists, as a European end-user may desire quick access to music of this type, whereas for a North American genre descriptor system, the genre category might only be exposed at the lower levels, if at all, with content coded with “Chanson” master descriptor codes instead being included along with music from other related master genre codes in a genre category of “World”.
  • Navigational content 120 may be used to enhance navigational access on the user interface 112 .
  • the navigational content 120 may be provided during navigation to identify content and may include, by way of example, audio clips, media packaging graphics, photos, text, genre icons, genre mini-clips, genre descriptions, origin icons, origin mini-clips, origin descriptors, navigational icons (e.g., channel icons), phonetic data, and the like.
  • a recognition module 125 may be further included in the client to recognize media items 134.1, 134.2.
  • the recognition module 125 may optionally use a local database 127 and/or a remote database 129 to perform lookups and/or obtain metadata for the media items 134.1, 134.2.
  • An example embodiment of a method for recognizing the media items 134.1, 134.2 that may be performed at the recognition module 125 is described in greater detail below.
  • FIG. 1B illustrates another example of a navigation system 131 in which a client may be deployed.
  • the navigation system 131 may include a client 135 that may have access to media objects 138.1 from one or more media sources 133 and media objects 138.2 from one or more media services 136.
  • the media objects 138.1 may be associated with entity types 140.1.
  • the media objects 138.2 may be associated with the entity types 140.2.
  • the media objects 138.1, 138.2 may be accessed through an indexing module 144 by a local information architecture 142.
  • the media objects 138.1, 138.2 may be recognized by use of a recognition module 146.
  • the recognition module 146 may use a local media object lookup database 148 to identify the media objects 138.1, 138.2 to assemble a local metadata database 150 of metadata for the media objects 138.1, 138.2.
  • the recognition module 146 may also, instead of or in addition to the local lookup, perform a remote lookup by contacting a recognition service 170.
  • the recognition service 170 may use a master media object lookup database 172 to identify the media objects 138.1, 138.2 and a master media metadata database 174 to obtain metadata for the media objects 138.1, 138.2.
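  • A minimal sketch of this local-then-remote lookup flow is shown below; the fingerprint field, database shapes and service interface are placeholder assumptions, not the actual recognition APIs.

```python
# Hypothetical sketch: try the local media object lookup database first,
# fall back to the remote recognition service, and cache what comes back
# into the local metadata database.

def recognize(media_object, local_lookup, local_metadata, recognition_service):
    key = media_object["fingerprint"]          # placeholder identifier

    media_id = local_lookup.get(key)           # local media object lookup
    if media_id is None:
        # Remote lookup against the recognition service's master databases.
        media_id, metadata = recognition_service(key)
        local_lookup[key] = media_id           # remember the identification
        local_metadata[media_id] = metadata    # assemble local metadata
    return media_id, local_metadata[media_id]

# Toy stand-ins for the local databases and the remote service.
local_lookup, local_metadata = {}, {}
def fake_service(fingerprint):
    return "TRK-1", {"title": "Example Track", "genre_codes": [2301]}

print(recognize({"fingerprint": "abc123"}, local_lookup, local_metadata, fake_service))
print(recognize({"fingerprint": "abc123"}, local_lookup, local_metadata, fake_service))  # now served locally
```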
  • the local information architecture 142 may be used by a navigation system application 160 to configure various navigational views as described in greater detail below.
  • the navigation system application 160 may configure the navigational views according to a personalization module and/or a contextualization module.
  • Hierarchies 152 and navigation trees 154 may be generated from the local information architecture 142 and used to provide the navigational views.
  • the hierarchies 152 and the navigation trees 154 may optionally be stored and available for later retrieval.
  • a navigation API 162 may be used to provide access to the navigation system application 160 .
  • One or more contextual data feeds and/or sensors may be used to provide contextual data through the navigation API 162 to the navigation system application 160 .
  • Personalized data may also be received from a master user profile database 194 by use of a personalization service 192 .
  • a navigational update module 156 may be used to update the local information architecture 142 .
  • the navigation update module 156 may optionally use one or more IDs 158 which may identify a build ID, a device ID, an application ID, a customer ID, a region ID, a taste profile ID, a user type ID, a client ID, and/or a user ID to receive the appropriate updates for the local information architecture 142 as deployed.
  • a navigation service 176 may provide updated information to the navigation update module 156 .
  • the navigation service 176 may obtain information used for navigation (e.g., navigational content) from one or more descriptor systems 178 and navigational content from a master navigation content object store 180 , utilizing a navigational content metadata database 182 .
  • the navigational content may be provided to aid in navigation and may include, by way of example, audio clips, media packaging images, text, genre icons, genre mini-clips, genre descriptions, origin icons, origin descriptors, channel icons, phonetic data, and the like.
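  • One way such an update exchange might be parameterized is sketched below; the field names and merge logic are illustrative assumptions rather than the actual protocol between the navigation update module 156 and the navigation service 176.

```python
# Hypothetical sketch: the navigation update module describes its deployment
# with the IDs it holds, and the navigation service returns only the
# information architecture elements appropriate for that deployment.

def build_update_request(ids, current_build):
    """Assemble the parameters sent to the navigation service."""
    return {
        "build_id":  current_build,
        "device_id": ids.get("device_id"),
        "region_id": ids.get("region_id"),
        "user_type": ids.get("user_type"),
        "client_id": ids.get("client_id"),
    }

def apply_update(local_architecture, update):
    """Merge updated category lists and navigational content into the client."""
    local_architecture["category_lists"].update(update.get("category_lists", {}))
    local_architecture["content"].update(update.get("navigational_content", {}))
    local_architecture["build_id"] = update.get("build_id", local_architecture["build_id"])
    return local_architecture

request = build_update_request(
    {"device_id": "portable-player", "region_id": "EU", "user_type": "basic"},
    current_build="2024.1")
print(request)

local_architecture = {"build_id": "2023.4", "category_lists": {}, "content": {}}
update = {"build_id": "2024.1", "category_lists": {"genre_25": {}}, "navigational_content": {}}
print(apply_update(local_architecture, update)["build_id"])  # 2024.1
```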
  • the client application 166 may provide navigational access to media objects 138 . 1 , 138 . 2 through a user interface 168 .
  • the user interface 168 may provide navigational information and other information to an end-user through a display device (e.g., on a device) or an application.
  • One or more navigation templates 184 may be accessed by the navigation service 176 .
  • the navigation templates 184 may enable differing navigational views into one or more descriptor hierarchies 152 available to the client 135 through the user interface 168 .
  • Descriptor codes associated with content in the local metadata database 150 may be used to enable navigational access to the content by mapping content to selected category lists that are contained within the descriptor hierarchies 152 .
  • the selected category lists may be selected from the plurality of available category lists contained within a plurality of the descriptor systems 178 .
  • the plurality of available category lists may include a number of levels of category lists.
  • Each category list may include a differing number of category codes and associated labels. For example, a category list at a first level may include five category codes and associated labels and at a second level may include twenty category codes and associated labels.
  • FIG. 2 illustrates an example configuration system 200 .
  • the configuration system 200 may be used to create one or more hierarchies 116 from the descriptor systems 104 . 1 - 104 . n for use in the navigation system 100 (see FIG. 1 ).
  • the configuration system 200 includes a configuration application 202 in which a configuration module 216 may communicate with one or more master media sources 204 and a reference media database 206 .
  • the configuration application 202 may utilize one or more master descriptor code lists 124.1-124.n for content (e.g., media items) available through the master media sources 204.
  • a master descriptor code list 124.1 of the master descriptor code lists 124.1-124.n may be provided to a plurality of descriptor systems 208, 210, 212.
  • the configuration application 202 may also create a descriptor hierarchy 214 from an original descriptor system 208 or an alternate viewpoint descriptor system 210 . From each of these, an alternate label and/or a translated language descriptor system 212 may also be created.
  • the original descriptor system 208 includes the master descriptor code list 124 (see FIG. 1 ), third party descriptor label mappings, third party descriptor ID mappings, master category lists 126 , other category lists, system mapping tables 228 , and ordering value data 230 .
  • the third party label and identifier mapping tables may associate third party labels (e.g. genre label “R&R”) or IDs (e.g. ID3 Genre tag #96) with the appropriate master descriptor code.
  • Third party mapping tables associate descriptor term labels and descriptor term unique identifiers utilized externally with the most appropriate master descriptor code for each applicable descriptor type. If more than one third party system uses an identical descriptor label, this one label may be mapped to the single "best-fit" master descriptor code; alternatively, the system may store a third party organization ID with each instance of a descriptor label, such that it is possible to support alternate mappings from a third party descriptor to a master descriptor code, depending on the source, as indicated by the third party entity identifier. In the case of mapping third party descriptor unique identifiers, it is always necessary to include the third party organization ID. External descriptor labels do not have to be associated with a specific entity; they may represent colloquial expressions.
  • the ordering value data 230 may optionally be associated with the category lists 126 of the descriptor systems 208 , 210 , 212 .
  • the ordering values for each of the categories indicate an order in which the labels of a category list may be presented in the user interface 112 (see FIG. 1 ).
  • the ordering values may be based on judgmental similarity, judgmental importance, judgmental chronology, and the like.
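  • For example (with invented labels and ordering values), ordering value data might simply pair each category code with a sort key per ordering scheme:

```python
# Hypothetical sketch: ordering value data associated with a category list,
# allowing the same labels to be presented in different orders.

CATEGORY_LABELS = {10: "Rock", 20: "Blues", 30: "Classical", 40: "World"}

ORDERING_VALUES = {
    "judgmental_importance": {10: 1, 20: 3, 30: 4, 40: 2},
    "judgmental_similarity": {10: 1, 20: 2, 40: 3, 30: 4},
}

def ordered_labels(scheme):
    """Return the category labels sorted by the chosen ordering scheme."""
    order = ORDERING_VALUES[scheme]
    return [CATEGORY_LABELS[code] for code in sorted(order, key=order.get)]

print(ordered_labels("judgmental_importance"))  # ['Rock', 'World', 'Blues', 'Classical']
print(ordered_labels("judgmental_similarity"))  # ['Rock', 'Blues', 'World', 'Classical']
```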
  • the labeled hierarchy 210 may be assembled from a specific descriptor system 208 with specific labels associated with each category ID.
  • the labeled hierarchy 210 includes the master descriptor code list 124 , labeled category lists 224 , the system mapping tables 228 , and the ordering value data 230 .
  • the alternate master category lists 224 may include one or more alternate labels for the category lists contained in the master category lists 126 .
  • the alternate labels may include nicknames, short names, and the like.
  • the translated hierarchy 212 is a version of a labeled hierarchy 210 with translated labels (e.g., in Spanish, Japanese, and English).
  • the localized language descriptor system includes the master descriptor code list 124 , translated labeled category lists 226 , the system mapping tables 228 , and the ordering value data 230 .
  • the descriptor hierarchy 214 may include the selected category lists 122 (see FIG. 1 ), category list ordering data 234 , hierarchy mapping tables 236 , descriptor relational tables 238 , and entity hierarchies 240 .
  • the selected category lists 122 may be selected from the master category lists 126 , the alternate master category lists 224 , and/or the localized master category lists 226 .
  • the category list ordering data 234 may enable one or more alternate orderings of the category lists of the descriptor hierarchy 214 .
  • the third party mapping tables 236 may associate descriptor terms and unique identifiers used by third parties to the master descriptor codes contained in the master descriptor code list 124 .
  • a third party descriptor term may be mapped to a master descriptor code to which it is most similar. Thus discrepancies in terminology used may be accommodated.
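  • A simplified sketch of such a third party mapping table appears below; the labels, organization IDs and master codes are invented for illustration only.

```python
# Hypothetical sketch: map third party descriptor labels to the best-fit
# master descriptor code, optionally keyed by a third party organization ID
# so the same label can map differently depending on its source.

DEFAULT_LABEL_MAP = {
    "R&R":   1000,   # best-fit master code for a generic "R&R" label
    "Urban": 3000,
}

ORG_SPECIFIC_LABEL_MAP = {
    # (organization ID, label) -> master descriptor code
    ("org-42", "Urban"): 3100,   # this source uses "Urban" more narrowly
}

def map_third_party_label(label, org_id=None):
    """Resolve an external label to a master descriptor code."""
    if org_id is not None and (org_id, label) in ORG_SPECIFIC_LABEL_MAP:
        return ORG_SPECIFIC_LABEL_MAP[(org_id, label)]
    return DEFAULT_LABEL_MAP.get(label)

print(map_third_party_label("Urban"))            # 3000 (best-fit default)
print(map_third_party_label("Urban", "org-42"))  # 3100 (source-specific mapping)
```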
  • the descriptor relational tables 238 relate master descriptor codes with other master descriptor codes in the master descriptor code list 124 .
  • the relations may be defined by a correlation value or a weighting for each relationship.
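  • As a sketch (with made-up codes and weights), such a relational table can be read as a weighted adjacency list and queried for the most closely related descriptors:

```python
# Hypothetical sketch: descriptor relational tables as weighted relations
# between master descriptor codes, usable e.g. for recommendations.

DESCRIPTOR_RELATIONS = {
    # code -> {related code: correlation weight}
    2301: {2302: 0.9, 1000: 0.4, 4105: 0.1},
    4105: {4000: 0.8, 2301: 0.1},
}

def most_related(code, top_n=2):
    """Return the most strongly correlated descriptor codes."""
    relations = DESCRIPTOR_RELATIONS.get(code, {})
    return sorted(relations, key=relations.get, reverse=True)[:top_n]

print(most_related(2301))  # [2302, 1000]
```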
  • the entity hierarchies 240 define the parent/child relationship between at least some of the entity types.
  • An example of an entity item hierarchy is as follows: Series -> Album -> Edition -> Release -> SKU -> Disc -> Track -> Recording -> Movement -> Composition.
  • FIG. 3 illustrates an example navigation template 300 .
  • the navigation template 300 may be used by a navigation service to select the appropriate elements to be included in a navigation package.
  • the client may utilize these elements to assemble navigation views for navigating content from one or more media sources 106 . 1 - 106 . n and/or the media library 108 (see FIG. 1 ).
  • the navigation template 300 may include a template ID 302 , a region 304 , one or more application parameters 306 , one or more device parameters 308 , a user type 310 , one or more third party IDs 312 , and/or one or more navigation trees 314 .
  • the template ID 302 may be an identifier (e.g., a unique identifier) that may be used to select the navigation template 300 (e.g., among the one or more navigation templates 114) for navigational use.
  • the region 304 may define the regional viewpoint that is used to select the appropriate descriptor systems and other navigational elements such that they can be assembled into a user interface that conforms to the cultural expectations of a specific regional user base.
  • the selection of the region may include the United States, Germany, Brazil, Japan, Korea, the Middle East, China, global, Eurasia, America, and/or Asia, as well as the U.S. East Coast, the New England states, Chicago, and the like.
  • Other regions may also be available for selection
  • the one or more application parameters 306 indicate the application type (e.g. media player, a web site, a playlist generator, a collection manager, or other application) in which the navigation package may be deployed.
  • the application parameters 306 may cause alternate category labels to be selected, such as short names, standard names, or extended names.
  • the one or more device parameters 308 specify the device in which the navigation package may be deployed.
  • the device parameters 308 may indicate whether the device is a PC, a home media server, a vehicle stereo, a portable media player, a mobile phone, a digital media adapter, a connected CD/DVD/flash player, a remote control, and the like.
  • the user type 310 may identify a type of user that may use the deployment of the navigation template 300 in the client 102 .
  • the user type may identify a user of the navigation template 300 as a basic user, a simple user, a standard user, an advanced user, or a professional user. Other user types may also be used.
  • the one or more third party IDs 312 may identify third parties associated with a deployment of the navigation template 300 .
  • the third party IDs may be associated with third party customers and/or partners that have IDs (e.g., unique IDs) that define their navigational preferences for the end-user, as well as IDs of third parties whose labels or IDs may need to be mapped via the appropriate third party mapping tables.
  • the one or more navigation trees 314 may be a sequence of category lists taken from selected category lists 122 and/or entity types to be available for constructing elements of the user interface 112 based on the selection of the application parameters 306 , the device parameters 308 , the user type 310 , and/or the other third party IDs 312 .
  • the navigation trees 314 may also include an ordering of items in a category list.
  • the list ordering definition may include a judgmental unique ordering, an alphabetical ordering, a dynamic item count ordering, a dynamic popularity ordering, or the like.
  • FIG. 4 illustrates example information architecture 400 .
  • the information architecture may be deployed in the client 102 (see FIG. 1 ) and/or in other applications and devices.
  • the information architecture 400 may include one or more descriptor hierarchies 116 , one or more navigation trees 314 , one or more other category lists 406 , default mappings 408 , one or more descriptor relation tables 238 , and/or navigational content 120 (see FIGS. 1-3 ).
  • the other category lists 406 may be selected from one or more descriptor hierarchies 116 not included in the information architecture 400.
  • the default mappings 408 may be used in the information architecture 400 to enable mapping of legacy descriptor IDs (e.g., from a provider or a third party) and/or to perform third party descriptor label mapping.
  • Other configurations of the information architecture 400 including different elements may also be deployed in the client 102 .
  • FIG. 5 illustrates a method for pre-processing content to enable deployment of the information architecture 400 (see FIG. 4 ) in the client 102 (see FIG. 1 ) for navigation of the content in accordance with an example embodiment.
  • the reference media database 206 may be created at block 502 .
  • the reference media database 206 may include the plurality of master descriptor code lists 124 . 1 - 124 . n .
  • An example embodiment of creating the reference media database 206 is described in greater detail below.
  • Information architecture 400 may be created at block 504 .
  • An example embodiment of creating the information architecture 400 is described in greater detail below.
  • a navigation template 300 may be defined at block 506 .
  • An example embodiment of defining a navigation template is described in greater detail below.
  • FIG. 6 illustrates a method 600 for creating a reference media database 206 (see FIG. 2 ) according to an example embodiment.
  • the method 600 may be performed at block 502 (see FIG. 5 ).
  • the reference media database 206 created through the operations of the method 600 may be used in the configuration system 200 ; however it may also be used in other systems.
  • Media descriptors may be associated with a plurality of entities at block 602 .
  • the plurality of entities may be from a single media source 106 . 1 (e.g., from a single content provider or fixed disk), the plurality of media sources 106 . 1 - 106 . n , and/or the media library 108 (see FIG. 1 ).
  • the entities may include a channel, a stream, a station, a program, a slot, a playlist, a web page, a recording artist, a composer, a composition, a movement, a performance, a recording, a recording mix track, a track segment, a track, a release, an edition, an album, an album series, a graphic image, a photograph, a video segment, a video, a video still image, a TV episode, a TV series, a film, a podcast, an event, a location, and/or a venue.
  • Other entities may also be used.
  • the media descriptors may use an identification (ID) code to identify a characteristic of an entity with which it is associated.
  • ID codes used with the media descriptors may include a genre ID code, an origin ID code, a recording era ID code, a composition era ID code, an artist type ID code, a tempo ID code, a mood ID code, a situation ID code, a work type ID code, a topic ID code, a personal historical contextual ID code, a community historical contextual ID code, a timbre ID code, and the like.
  • Other ID codes may also be used with the media descriptors to identify characteristics of entities.
  • Each of a particular type of ID code may be associated with a single master descriptor code list 124 .
  • the genre ID codes may be on a genre master descriptor code list where each genre ID code is associated with a genre label
  • the mood ID codes may be on a mood master descriptor code list where each mood ID code is associated with a mood label.
  • a particular entity may be associated with a plurality of media descriptors.
  • One or a plurality of media descriptors with a same type of code (e.g., a genre ID code) may be associated with an entity.
  • the media descriptors may optionally be given a ranking and/or weighting to indicate their relevance among other media descriptors. For example, an album may be associated with a primary media descriptor of blues and a secondary media descriptor of rock, where the primary media descriptor is ranked higher than the secondary media descriptor.
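  • A small sketch of ranked descriptors on an entity (the codes, weights and entity shape are invented) might look like this:

```python
# Hypothetical sketch: an entity carrying several media descriptors of the
# same type, ranked and weighted to indicate their relative relevance.

album = {
    "title": "Example Album",
    "genre_descriptors": [
        {"code": 2000, "label": "Blues", "rank": 1, "weight": 0.7},  # primary
        {"code": 1000, "label": "Rock",  "rank": 2, "weight": 0.3},  # secondary
    ],
}

def primary_descriptor(entity, descriptor_type):
    """Pick the highest-ranked descriptor of the given type."""
    descriptors = entity[f"{descriptor_type}_descriptors"]
    return min(descriptors, key=lambda d: d["rank"])

print(primary_descriptor(album, "genre")["label"])  # Blues
```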
  • a determination of which media descriptors to associate with an entity may optionally be based on information received from one or more data sources.
  • the data sources may include content information received from providers (e.g., label data feeds and/or content registries), expert editorial information, individual community submissions, collaborative community submissions, digital signal processing (DSP) analysis, statistical analysis, and the like.
  • One or more flags may optionally be associated with one or more entities of the plurality of entities at block 604 .
  • the flags may be used to identify a certain type (e.g., a special type) of entity that may receive special handling during navigation.
  • a flag may be used to indicate that an entity is a various artist compilation, a soundtrack, a holiday related theme (e.g., Christmas), an interview, and/or a bootleg recording.
  • Other types of flags may also be used to identify other certain types of entities that may receive special handling during navigation.
  • Inheritance may be applied to the plurality of entities at block 606 .
  • Applying inheritance to the plurality of entities may enable access for at least some of the plurality of entities to selected media descriptors associated with one or more other entities.
  • Inheritance may optionally be used to reduce an amount of media descriptors associated with a plurality of entities based on the relationship among the entities.
  • the use of inheritance may provide greater efficiency and/or scalability of the reference media database 206 .
  • a set of media descriptors may be associated with a recording artist.
  • If the descriptors for an album differ from those of the recording artist, a set of media descriptors may be associated with the album.
  • If the individual recordings of the artist differ from the album on which the recordings were recorded, a set of media descriptors may be associated with the individual recordings. If one or more individual segments of a recording differ from a parent recording, a set of media descriptors may be associated with the individual segments.
  • Inheritance may be applied to entities in a cascading down manner (e.g., cascading down inheritance) to provide for inheritance from a parent and/or in a cascading up manner (e.g., cascading up inheritance) to provide for inheritance from a child.
  • Inheritance may be applied in a cascading down manner when a child entity does not have directly associated media descriptors. The child entity may then inherit the media descriptors associated with a parent entity. For example, if a genre ID code is associated with a recording artist and an album does not have an associated genre ID code, the album may inherit the genre ID code of the recording artist. If a recording on the album does not have an associated genre ID code, the recording may inherit the genre ID code of the album.
  • Inheritance may be applied in a cascading up manner when a parent entity does not have directly associated media descriptors.
  • the parent entity may then inherit the media descriptors associated with a child entity. For example, if a genre ID code is associated with one or more recordings and an album containing the recordings does not have an associated genre ID code, the album may inherit the most common genre ID code from its recordings. If an artist who recorded a plurality of recordings has not been associated with a genre ID code, the artist may inherit the most common genre ID code associated with albums associated with the artist.
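  • The cascading-down and cascading-up rules can be sketched roughly as follows; the entity structure and genre codes are illustrative only.

```python
# Hypothetical sketch of descriptor inheritance: a child without its own
# genre code inherits from its parent (cascading down), while a parent
# without a code inherits the most common code among its children
# (cascading up).

from collections import Counter

def inherit_down(parent_code, child_code):
    """Child keeps its own code if present, otherwise inherits the parent's."""
    return child_code if child_code is not None else parent_code

def inherit_up(child_codes):
    """Parent inherits the most common code among its children."""
    codes = [c for c in child_codes if c is not None]
    return Counter(codes).most_common(1)[0][0] if codes else None

# Cascading down: artist -> album -> recording.
artist_genre = 2000                                   # e.g. an aggregated "Blues" code
album_genre = inherit_down(artist_genre, None)        # album had no code of its own
recording_genre = inherit_down(album_genre, None)     # recording had no code of its own
print(album_genre, recording_genre)                   # 2000 2000

# Cascading up: an album inherits the most common genre of its recordings.
print(inherit_up([2301, 2301, 1000]))                 # 2301
```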
  • An additional mapping may be created for the reference media database 206 at block 608 to provide for an additional association among at least two entities of the plurality of entities. For example, an alternate artist billing may be mapped to a primary artist ID code and/or an individual artist ID code may be mapped to a collaboration artist ID code. Other additional mappings may also be created.
  • the reference media database 206 may optionally be stored at block 610 .
  • the reference database 206 may include the association of the media descriptors with a plurality of entities, the association of a flag with the selected entities of the plurality of entities, applied inheritance to the plurality of entities, and/or the additional mapping among at least two entities of the plurality of entities.
  • FIG. 7 illustrates a method 700 for creating an information architecture 400 (see FIG. 4 ) that may be deployed in a client 102 (see FIG. 1 ) according to an example embodiment.
  • the method 700 may be performed at block 504 (see FIG. 5 ).
  • a descriptor system 104 may be accessed at block 702 .
  • the descriptor system 104 may include a plurality of available category lists 132 .
  • An example embodiment of creating a descriptor system 104 that may be accessed during the operations at block 702 is described in greater detail below.
  • a descriptor hierarchy 116 may be created at block 704 .
  • the descriptor hierarchy 116 may be created by selecting from a plurality of available category lists 132 of a corresponding descriptor type and mappings from the upward mapping 128 and/or downward mapping 130 associated with the selected category lists 232 .
  • the descriptor system 104 may thereby act as a superset of available category lists 132 from which the selected category lists 122 of the descriptor hierarchy 116 are selected.
  • a plurality of descriptor hierarchies 116 may be created for a particular descriptor type from one or more descriptor systems 104 . 1 - 104 . n (e.g., an original descriptor system 208 , an alternate language descriptor system 210 , and/or a localized language descriptor system 212 ). Two or more of the multiple descriptor hierarchies 116 may be linked together (e.g., through pointers) to facilitate version selection during viewing.
  • the category list ordering data 234 may be created at block 706 .
  • the category list ordering data 234 may enable one or more alternate orderings of the selected category lists 122 of the descriptor hierarchy 214 .
  • Third party mapping tables 236 may optionally be created at block 708 .
  • the third party mapping tables 236 may associate descriptor terms used by third parties to the master descriptor codes contained in the master descriptor code list 124 .
  • Descriptor relation tables 238 may be created at block 710 .
  • the descriptor relation tables 238 may be created at a macro and micro level.
  • At a macro level, a correlate list may define correlation levels for the items on a single category list.
  • At a micro level, correlation levels may be defined between master descriptor codes. The micro level may then be mapped to the macro level to create the descriptor relation tables 238.
  • the entity hierarchies 240 may be created at block 712 .
  • the information architecture 400 created during the operations at blocks 702 - 712 may optionally be deployed in the client 102 at block 714 .
  • FIG. 8 illustrates a method 800 for creating a descriptor system 104 that may be deployed in the navigation system 100 (see FIG. 1 ) or another system according to an example embodiment.
  • the created descriptor system 104 may be accessed during the operations at block 702 (see FIG. 7 ).
  • a master descriptor code list 124 may be generated at block 802 .
  • a media descriptor for which the master descriptor code list 124 may be generated may include, but is not limited to a genre media descriptor, an origin media descriptor, a recording era media descriptor, a composition era media descriptor, a time cycle media descriptor, an artist type media descriptor, a tempo media descriptor, a mood media descriptor, a situation media descriptor, or a topic media descriptor.
  • a descriptor system 104 may be generated from the master descriptor code list 124 at block 804 .
  • the descriptor system 104 is generated to include a plurality of available category lists 132 .
  • the available category lists 132 of the descriptor system 104 may be generated to include summarized versions of the master descriptor code list 124 with decreasing or increasing granularity.
  • the summarized versions of the master descriptor code list 124 may include a lesser number of list entries.
  • the amount of list entries in a category list of the descriptor system 104 and the number of category lists in the descriptor system 104 may be selected to provide flexibility when selecting a number of category lists for deployment as a hierarchy 116 as described in greater detail below.
  • Each category list may optionally be provided with a unique identifier.
  • a plurality of descriptor systems 104.1-104.n may optionally be created for a particular descriptor type, where each descriptor system 104 of the particular descriptor type may have a different number of category lists and each category list may include a different number of list entries. For example, a plurality of descriptor systems 104.1-104.n may be created for different region focuses (e.g., a global descriptor system, a US descriptor system, a Japan descriptor system, etc.), different genre focuses (e.g., general descriptor system, classical descriptor system, internal descriptor system, etc.), different mood focuses (e.g., smooth, excited, reflective, etc.) or the like.
  • Master descriptor codes may be mapped to a master category list 222 of the descriptor system 104 at block 806 .
  • the master category list 222 may be the category list of the descriptor system with the highest granularity (e.g., the greatest number of list entries).
  • As the master category list 222 of a descriptor system is not a master descriptor code list, more than one master descriptor code may be mapped into a category code of a same list entry.
  • the category lists of the descriptor system 104 may be mapped to one another at block 808 .
  • the category codes of the category lists may be mapped to a category code of a parent category list (e.g., a less granular category list) in the descriptor system.
  • the category codes of a parent category list may include one or more mappings from a category code of a child category list.
  • An alternate label version of the descriptor system 104 (e.g., the alternate language descriptor system 210 ) may be created at block 810 .
  • the alternate language descriptor system 210 may include the same category lists of the original descriptor system on which the alternate version is based, but contain the alternate master category lists 224 with an alternate language label substituted for some or all of the category names of the master category lists 222 .
  • the alternate language labels may include abbreviated names, short names, extended names, and the like.
  • Multiple alternate language label versions of an original descriptor system 208 may be created.
  • a localized label version of the descriptor system 104 may be created at block 812 .
  • the localized language descriptor system 212 may include a different label based on localization.
  • a localized label version of the descriptor system 104 may be in Japanese, Korean, German, French, and/or other different languages.
  • a localized label version may be created for an original descriptor system 208 and/or the alternate language descriptor system 210 .
  • Ordering value data may optionally be associated with the category lists of a descriptor system at block 814 .
  • the information may then be deployed as a descriptor system 104 at block 816 .
  • FIG. 9 illustrates a method 900 for defining a navigation package that may be deployed in a client 102 (see FIG. 1 ) according to an example embodiment.
  • a determination of a template selection for inclusion in a navigation package may be made at decision block 902 .
  • the determination may be made from explicit input, presence, data received from a geolocation service, dynamically, or the like.
  • the template selection may be for a selection of a pre-built template at block 904 , a dynamically generated template at block 906 , or a manually configured template at block 908 .
  • the pre-built template that may be selected at block 904 may be a navigation profile based on a common use case.
  • the pre-built template may identify included descriptor hierarchies 116 and other information architecture elements that are associated with the template.
  • the pre-built template may include a unique identifier to enable selection among other available templates. Examples of pre-built templates include pre-built templates for a basic user in the European market that plays classical music in a car using a media player, and a pre-built template for an advanced user in the Japanese market that plays music on a personal computer using a media player. Other pre-built templates may also be available for selection.
  • the dynamically generated template that may be selected at block 906 may be a template dynamically generated as needed based on use case parameters.
  • the template may be dynamically generated by input regions and/or markets for deployment, taste/demographics/psychographic profiles(s), application and/or device type, user type, and/or customer/partner preferences.
  • An example embodiment of dynamically generating a template is described in greater detail below.
  • the manually defined template that may be selected at block 908 may be manually defined based on a use case.
  • the manual selection may include:
  • a determination may be made at decision block 910 whether to make another template selection for the navigation package. If a determination is made to make another template selection, the method 900 may return to decision block 902 . If a determination is made not to make another selection at decision block 910 , the method 900 may proceed to block 912 .
  • Descriptor hierarchies 116 , mappings and navigational content 120 corresponding to the selected templates may be associated with the navigation package at block 912 .
  • Other lists may be associated with the navigation package at block 914 .
  • the other lists may include category lists not contained within the descriptor hierarchies 116 .
  • FIG. 10 illustrates a method 1000 for dynamically generating a navigation template 300 (see FIG. 3 ) for use in a navigation path according to an example embodiment.
  • the method 1000 may be used to dynamically generate a template during the operations performed at block 906 (see FIG. 9 ).
  • a region for deployment may be selected at block 1002 .
  • the selection of the region may be used in determining which descriptor systems 104 . 1 - 104 . 3 will be used by the navigation template 300 and the default and/or alternate language labels that will be used to label the descriptor hierarchy 116 .
  • the selection of the region may also be used in determining which navigational content parameters may be used.
  • the navigational content parameters may include basic, audio, graphic, voice, or advanced settings.
  • the selection of the region may be used in determining which text terms may be used in a mapping set. For example, slang terms, regional dialect terms, professional terms, and the like may be used as text terms in the mapping set.
  • A personal profile may be selected at block 1004. The personal profiles may include a taste profile, a profile based on age, a profile based on sex, a profile based on historical information, a profile based on explicit input, and the like.
  • the taste profile may be a classical view, an electronica view, a boomer view, a generation X view, and the like.
  • the selection of the personal profile may be used in determining which taste profile variation may be selected.
  • the taste profile variation may be used in determining which view of the descriptor system 104 may be made.
  • the selection of the personal profile may also be used in determining alternate label terms selected for labelling of the descriptor hierarchy 116 .
  • the alternate labelling may include slang labels, regional dialect labels, informal labels, old school labels, and the like.
  • the selection of the personal profile may be used to determine which navigational content parameters and related content parameters may be used.
  • the related content parameters may include basic, audio, graphic, voice, or advanced settings.
  • the selection of the personal profile may also be used in determining the text terms for a mapping set.
  • Application parameters 306 may be selected at block 1006 .
  • the selection of the application parameters may be used in determining a number of levels (e.g., a number of category lists) to include in a hierarchy system and label formatting parameters.
  • Device parameters 308 may be selected at block 1008 .
  • the selection of the device parameters may be used in determining a number of levels (e.g., a number of category lists) to include in a hierarchy system, the length of category lists used at each level of the category list, and label formatting parameters.
  • the length of the category list at each level may be, for example, twenty for a single level; ten and seventy-five for two levels; twenty-five, two hundred and fifty, and eight hundred for three levels; and five for each of four levels.
  • the selection of the device parameters may also be used in determining the related content parameters.
  • a user type 310 may be selected at block 1010 .
  • the selection of the user type may be used in determining the length of category lists used at each level of the category list.
  • Third party IDs 312 may optionally be selected at block 1012 .
  • the selection of one or more third party IDs 312 may be used in determining which partner mapping selection may be used.
  • An identifier may be associated with a navigation tree 314 at block 1014 .
  • the navigation tree 314 may include a sequence of category lists that may be taken from selected hierarchies and/or entity types to be available for constructing UI elements based on the selection of application parameters, device parameters, user type, and/or other unique IDs.
  • a navigation template may be definable based on the selections that were made.
  • a unique identifier may optionally be associated with one or more navigation trees at block 1014 .
  • a template ID 302 may be generated at block 1016 .
  • the identifier (e.g., a unique identifier) may be generated and stored based on the combination of the selected attributes of the navigation template 300 .
  • Unique IDs for the component attributes and the navigational profiles that drive the component attributes may also be associated with the identifier of the navigation template 300 .
  • the current updated versions of the corresponding descriptor hierarchies 116, the mappings 236, and the navigational content 120 may be retrieved based on the parameters in the navigation template. Other lists may also be retrieved for the use case.
  • a navigation package including default hierarchies 214, alternate hierarchies 214, mapping tables, and navigational content 120 appropriate for the use case may be available for deployment.
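  • By way of illustration only (the class, field names, and hashing choice below are assumptions rather than part of the described system), the combination of selected attributes may be reduced to a stable template identifier roughly as follows:

        import hashlib
        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class NavigationTemplate:
            region: str
            personal_profile: str
            application_params: tuple
            device_params: tuple
            user_type: str
            third_party_ids: tuple = field(default_factory=tuple)

            def template_id(self) -> str:
                # Derive a stable identifier from the combination of selected attributes.
                key = "|".join([
                    self.region,
                    self.personal_profile,
                    ",".join(self.application_params),
                    ",".join(self.device_params),
                    self.user_type,
                    ",".join(self.third_party_ids),
                ])
                return hashlib.sha1(key.encode("utf-8")).hexdigest()[:16]

        template = NavigationTemplate(
            region="JP",
            personal_profile="classical",
            application_params=("levels=3",),
            device_params=("list_len=25,250,800",),
            user_type="advanced",
        )
        print(template.template_id())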
  • FIG. 11 illustrates a method 1100 for coding descriptor codes in media objects according to an example embodiment.
  • the media objects may be media available from the media sources 106.1-106.n and/or the media library 108 (see FIG. 1).
  • a determination may be made at decision block 1102 whether to code a media object (e.g., a media item).
  • Identifiers and/or master descriptor codes may be embedded (e.g., as metadata containers) in media objects during production (e.g., during production and/or post-production) at block 1104.
  • the embedded data may optionally be encrypted or otherwise obfuscated.
  • the master descriptor codes may be associated with one or multiple entities of the media object.
  • Unique identifiers and/or master descriptor codes may be associated with media objects during encoding (e.g., during encoding and/or other processing step as part of distribution) at block 1106 .
  • examples of identifiers include DDEX, GRID, MI3P ID, UPC, EAN, ISRC, ISWC, DOI ID, commercial IDs, public IDs (e.g., FreeDB), a proprietary recording ID, a proprietary album ID, or a proprietary composition ID. Other unique identifiers may also be embedded and/or associated.
  • the method 1100 may proceed to decision block 1108 .
  • a determination may be made whether to code another media object. If a determination is made to embed another media object, the method 1100 may return to decision block 1102. If a determination is made not to embed another media item at decision block 1108, the media objects may be delivered at block 1110.
  • the media objects may be delivered with any metadata that has been embedded or associated.
  • the delivery methods for providing the metadata with the media objects may include as part of a media object's exposed predefined tag field, in a proprietary tag field, as a watermark, as MPEG-7 data, as MPV or HiMAT Tag, in an FM sideband (e.g. RDS), in a satellite radio data channel, in a digital radio data channel, or in an Internet radio data stream (e.g. MPEG ancillary data). Other delivery methods may also be used.
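  • As a hypothetical sketch (the tag field names are invented for illustration; actual delivery may use watermarks, MPEG-7 data, sideband data, or other methods listed above), descriptor codes and identifiers might be carried in a media object's tag container as follows:

        from typing import Dict, List

        def embed_descriptor_metadata(media_tags: Dict[str, str],
                                      master_descriptor_codes: List[str],
                                      identifiers: Dict[str, str]) -> Dict[str, str]:
            """Return a copy of the media object's tag container with descriptor
            codes and unique identifiers added as proprietary tag fields."""
            tags = dict(media_tags)
            # Hypothetical proprietary tag fields; a real system may instead use a
            # watermark, an exposed predefined tag field, or a data stream.
            tags["X-MASTER-DESCRIPTOR-CODES"] = ";".join(master_descriptor_codes)
            for name, value in identifiers.items():  # e.g., ISRC, UPC, proprietary IDs
                tags[f"X-ID-{name}"] = value
            return tags

        coded = embed_descriptor_metadata(
            {"TITLE": "Example Track"},
            master_descriptor_codes=["GENRE:1507", "MOOD:042"],
            identifiers={"ISRC": "USRC17607839"},
        )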
  • FIG. 12 illustrates a method 1200 for preloading a client 102 (see FIG. 1 ) according to an example embodiment.
  • a media collection may be pre-loaded on the client 102 at block 1202 .
  • the media collection may be pre-loaded prior to use by an end-user.
  • the media collection may be the same for all of a number of units, may be one of a finite set of target libraries geared toward a specific genre, regional, lifestyle, or other interest, or may be personalized based on a personal profile of an end-user.
  • the media collection may include audio files, video files, image files, and the like.
  • Customer related content and information architecture elements may be ingested by the client 102 at block 1204 .
  • the related content may include the navigational content 120 and/or other content.
  • the other content may include album cover art, artist photos, concert posters, text artist factoids, lyrics, channel/station logos, and the like.
  • the information architecture 400 may be pre-loaded at block 1208 .
  • the information architecture may be made locally accessible and subject to customization based on the template ID 302 (see FIG. 3 ).
  • the template ID 302 may be stored on the client 102 for future use.
  • the preloaded information architecture 400 may include genre hierarchies, era hierarchies, origin hierarchies, navigation trees 404 , mappings 236 , ordering data, and station/channel directories.
  • FIG. 13 illustrates a method 1300 for loading an information architecture 400 (see FIG. 4 ) according to an example embodiment.
  • a request may be received for a navigation package including an information architecture 400 at block 1302 .
  • the request may also be made by obtaining a device ID, generating a default template ID using the device ID, and/or issuing a request for a navigation package from the default template ID. Other methods for requesting the navigation package may also be used.
  • Predefined descriptor hierarchies 116 may be accessed at block 1304 .
  • a default descriptor hierarchy 116 and optionally alternate descriptor hierarchies 116 may be accessed for use in the information architecture 400 .
  • the descriptor hierarchies 116 may optionally be accessed based on the template ID 302 .
  • the alternate hierarchies 116 may be accessed from one or more different descriptor systems 104.1-104.n.
  • the predefined descriptor hierarchies 116 that may be accessed include a predefined genre hierarchy and one or several alternate genre hierarchies, predefined and alternate origin hierarchies, predefined and alternate artist type hierarchies, predefined and alternate recording era hierarchies, predefined and alternate composition era hierarchies, predefined and alternate composition mood hierarchies, predefined and alternate tempo or rhythm hierarchies, predefined and alternate composition theme/topic hierarchies, and the like.
  • Predefined navigation trees 314 may be accessed at block 1306 .
  • a default and optionally alternate navigation trees may be accessed for use in the information architecture 400 .
  • the navigation trees 314 may be accessed from one or more descriptor systems 104.1-104.n.
  • the navigation trees may optionally be accessed based on the template ID 302 .
  • the navigation trees 314 may include genre/era/track, genre/mood/tempo/recording, artist/mood/year/track, and genre/album/mood/track. Other navigation trees 314 may also be used.
  • Other category lists 406 may be accessed at block 1308 .
  • the other category lists 406 may be used to generate alternate navigation trees 314 , descriptor hierarchies 116 , faceted navigation, or other local navigation options that have not been pre-defined.
  • a descriptor system 104 may be retrieved to provide increased flexibility.
  • the other category lists 406 may include a selected genre category list from a genre descriptor system, a selected origin category list from an origin descriptor system, a selected artist type category list from an artist type descriptor system, a selected recording era category list from a recording era descriptor system, a selected composition era category list from a composition era descriptor system, a selected mood category list from a mood descriptor system, a tempo category list from a tempo descriptor system, a selected theme/topic category list from a theme/topic descriptor system, and the like.
  • the selected category lists may be supported by the client.
  • the default mappings 408 may be accessed at block 1310 .
  • the descriptor relational tables 238 may be accessed at block 1312 .
  • a filter may optionally be applied to the descriptor relational tables 238 that are accessed to obtain relational data for relationships with weightings above a threshold level.
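  • A minimal sketch of such a weighting filter (the table layout and threshold value are assumptions) might be:

        def filter_relations(relational_table, threshold=0.5):
            """Keep only (descriptor_a, descriptor_b, weight) rows whose
            weighting is above the threshold level."""
            return [row for row in relational_table if row[2] > threshold]

        relations = [("rock", "blues", 0.8), ("rock", "polka", 0.1)]
        print(filter_relations(relations, threshold=0.5))  # [("rock", "blues", 0.8)]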
  • Navigational content 120 may be accessed at block 1314 .
  • the default navigational content may be accessed using a navigational content ID.
  • the accessed material may then be loaded in the client 102 as the information architecture 400 at block 1316 .
  • FIG. 14 illustrates a method 1400 for coding media items according to an example embodiment.
  • the media items 134 may be coded when a media collection is initially provided and/or during a media collection update on the client 102 .
  • Media items may be provided at block 1402 .
  • the media items may be loaded and/or transmitted into the client 102 .
  • An example embodiment of providing media items is described in greater detail below.
  • the provided media items may be identified (e.g., through recognition) at block 1404 .
  • An example embodiment of recognizing the provided media items is described in greater detail below.
  • the media items may be mapped to entities at block 1406 .
  • An example embodiment of mapping media items to entities is described in greater detail below.
  • Codes and content IDs may be received for the entities at block 1408 .
  • An example embodiment of receiving codes and content IDs is described in greater detail below.
  • Indices may be created at block 1410.
  • the index may be into a unified data system.
  • the index may physically reside on the client, on another client in a local network, on a remote server, or distributed across more than one client.
  • the indices may be created dynamically based on a client ID, a user ID, and/or a combination of one or more parameters indicating a type of indexing that is relevant for a particular user.
  • the indices may optionally indicate which one or more users are associated with a media collection.
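  • For illustration only (the key scheme and field names are assumptions), an index into a unified data system might be scoped to a client/user combination as follows:

        from collections import defaultdict

        def build_index(media_items, client_id, user_id):
            """Create a simple index keyed by descriptor code, scoped to a
            client/user combination."""
            index = {"client_id": client_id, "user_id": user_id,
                     "by_descriptor": defaultdict(list)}
            for item in media_items:
                for code in item.get("descriptor_codes", []):
                    index["by_descriptor"][code].append(item["item_id"])
            return index

        items = [{"item_id": "t1", "descriptor_codes": ["GENRE:1507"]},
                 {"item_id": "t2", "descriptor_codes": ["GENRE:0042"]}]
        idx = build_index(items, client_id="device-01", user_id="user-a")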
  • FIG. 15 illustrates a method 1500 for loading content from a plurality of sources according to an example embodiment.
  • the content may include media objects in a variety of forms including audio, video, image, text, and spoken word audio.
  • the formats of the media objects may include WAV, MP3, AAC, WMA, OggVorbis, FLAC, analog audio or video, MPEG2, WMV, QUICKTIME, JPEG, GIF, plaintext, MICROSOFT WORD, and the like.
  • the content may be loaded from a variety of media including, by way of example, optical media such as audio CD, CD-R, CD-RW, DVD, DVD-R, HD-DVD, and Blu-Ray; hard disk drive (HDD) and other magnetic media; solid state media including SD, MEMORY STICK, and flash memory; and streams and other IP or data transport.
  • the content may reside locally or be available through a tether using connections such as LAN, WAN, Wifi, WiMax, cellular networks, or the like.
  • the media objects may be taken from a variety of objects including an intra-media item segment, a media item (e.g., audio, video, photo, image text, etc.), a program, a channel, a collection, or a playlist.
  • the service/directory may include local content, AM/FM/HD radio, satellite radio, Internet on-demand and streaming radio, web page, satellite and cable TV, IPTV, RSS and other web data feeds, and other web services.
  • the devices may include, by way of example, a PC, a home media server, an auto stereo, a portable media player, a mobile phone, a PDA, a digital media adapter, a connected CD/DVD/Flash player/changer, a remote control, a connected TV, a connected DVD, in-flight entertainment, or a location based system (e.g., at a club, restaurant, or retail). Other devices may also be used.
  • FIG. 16 illustrates a method 1600 for content recognition according to an example embodiment.
  • the method 1600 may be performed at the block 1404 (see FIG. 14 ).
  • Identifier recognition may be performed by extracting the identifiers from the content at block 1604 , recognizing the content using one or more techniques at block 1606 , performing a lookup using a mapping table at block 1608 , and/or utilizing an embedded or associated identifier with the content at block 1610 .
  • the identifiers extracted during the operations at block 1604 may include, but are not limited to, TOCs, TOPs, audio file FP, audio stream FP, digital file/file system data (e.g., file tags, file names, folder names, etc.), image FP, voice FP, embedded entity IDs, embedded descriptor IDs, and a metadata data stream.
  • the one or more techniques used for recognizing the content at block 1606 may include optical media identifiers (e.g., TOCs or TOPs), digital file/stream identifiers (e.g., audio file FP, audio stream FP, or metadata data stream), digital file and digital system data (e.g., file tags, file names, or folder names), image recognition, voice/speech recognition, video FP recognition, text analysis, melody/humming recognition, and the like.
  • An example of a digital fingerprint technique that may be used during the operations at block 1606 to identify digital content is robust hashing. For example, in mono audio, a single signal may be sampled. If the audio is stereo, either hash signals may be extracted for the left and the right channels separately, or the left and the right channels may be added prior to hash signal extraction. A short piece or segment of audio (e.g., on the order of seconds) may be used to perform the analysis. As audio can be seen as an endless stream of audio samples, the audio signal of an audio track may be divided into time intervals or frames to calculate a hash word for every frame. However, any known technique that may be used to identify content from a segment or portion of content (e.g., the actual audio content or video content) may also be used. Thus, in an example embodiment, the content may be identified independent of any watermark, tag, or other identifier, but rather from the actual content data (actual video data or actual audio data).
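  • A highly simplified, illustrative sketch of frame-based hash extraction is shown below; production robust-hashing schemes derive hash bits from perceptual features and are considerably more involved:

        import numpy as np

        def frame_hashes(samples: np.ndarray, frame_size: int = 2048) -> list:
            """Divide an audio signal into frames and compute a hash word per frame.
            If the input is stereo, the channels are summed before hashing."""
            if samples.ndim == 2:                       # stereo: add left and right channels
                samples = samples.sum(axis=1)
            hashes = []
            for start in range(0, len(samples) - frame_size + 1, frame_size):
                frame = samples[start:start + frame_size]
                # Toy hash word: sign of energy differences across spectral sub-bands.
                bands = np.array_split(np.abs(np.fft.rfft(frame)), 33)
                energies = np.array([b.sum() for b in bands])
                bits = (np.diff(energies) > 0).astype(int)  # 32-bit hash word
                hashes.append(int("".join(map(str, bits)), 2))
            return hashes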
  • the embedded and/or associated identifiers that may be used in performing a lookup using a mapping table at block 1608 may include a UPC, an ISRC, an ISWC, a GRID/MI3P/DDEX, a third party ID, a SKU number, a watermark, HiMAT, MPV, and the like.
  • a determination may be made at decision block 1612 whether to perform text analysis recognition. If a determination is made to perform text analysis recognition, a text match of entity names may be performed at block 1614 and/or mapping tables may be utilized at block 1616 .
  • the text match of entity names to a more normalized entity ID at block 1614 may include artist name, album name, alternate spellings, abbreviations, misspellings, and the like.
  • the mapping tables may be utilized at block 1616 to map available textual descriptors from an available entity (e.g., album, artist, or track) to a normalized descriptor ID.
  • the available textual descriptors may include a genre name text, a mood name text, a situation name text, and the like.
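  • As a sketch (the alias and mapping tables below are hypothetical), text matching might normalize name variants and map free-text descriptors to normalized IDs as follows:

        import re

        ARTIST_ALIASES = {"beatles": "ART:0001", "the beatles": "ART:0001"}
        GENRE_TEXT_MAP = {"rock & roll": "GENRE:1500", "rocknroll": "GENRE:1500"}

        def normalize(text: str) -> str:
            return re.sub(r"[^a-z0-9& ]", "", text.lower()).strip()

        def match_entity(name: str):
            """Map an artist name (including misspellings/abbreviations present in
            the alias table) to a normalized entity ID."""
            return ARTIST_ALIASES.get(normalize(name))

        def match_descriptor(text: str):
            """Map an available textual descriptor (e.g., a genre name text) to a
            normalized descriptor ID via a mapping table."""
            return GENRE_TEXT_MAP.get(normalize(text))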
  • the most granular descriptive data may be accessed at block 1618 .
  • the most granular descriptive data may be embedded in a media item and/or may be retrieved from the database of descriptors associated with the identifier.
  • the most authoritative descriptive data may be accessed at block 1620 .
  • the most authoritative descriptive data may be embedded in a media item and/or may be retrieved from the database of descriptors associated with the identifier.
  • Higher level descriptors may optionally be created at block 1622 .
  • the higher level descriptors may be created by extracting features, and classifying and creating one or more mid-level or high-level descriptors.
  • a taste profile may optionally be generated at block 1624 .
  • the taste profile may be generated by summarizing the media collection and using the summary to generate the taste profile (e.g., a preference of a user of the client 102 ).
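  • One possible way (a sketch under assumed field names, not the described implementation) to summarize a collection into a taste profile is to tally descriptor categories across the recognized items:

        from collections import Counter

        def generate_taste_profile(collection):
            """Summarize a media collection into relative genre weights."""
            counts = Counter(item["genre"] for item in collection if "genre" in item)
            total = sum(counts.values()) or 1
            return {genre: n / total for genre, n in counts.items()}

        profile = generate_taste_profile(
            [{"genre": "Classical"}, {"genre": "Classical"}, {"genre": "Jazz"}])
        # {'Classical': 0.666..., 'Jazz': 0.333...}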
  • FIG. 17 illustrates a method 1700 for mapping content IDs according to an example embodiment.
  • the method 1700 may be performed at block 1406 (see FIG. 14 ).
  • Media identifiers may be mapped to entities at block 1702 .
  • the media identifiers may optionally be mapped to normalized, semantically meaningful core entities.
  • the media identifiers that may be mapped include, but are not limited to, a lyric ID, a composition ID, a recording ID, a track ID, a disc ID, a release ID, an edition ID, an album ID, a series ID, a recording artist ID, a composer ID, a playlist ID, a film/TV episode ID, a photo ID, or a text work ID.
  • Parent and child entities may be mapped at block 1704 .
  • the mapping may be a local map to hierarchically-related semantically meaningful entities.
  • Examples of mappings between parent and child entities may include mappings between a composition ID, a recording ID, and a track ID; a lyric ID, a recording ID, and a track ID; a lyric ID, a composition ID, and a track ID; a lyric ID, a composition ID, and a recording ID; a track ID, a release ID, an edition ID, an album ID, and a series ID; a track ID, a disc ID, an edition ID, an album ID, and a series ID; a track ID, a disc ID, a release ID, an album ID, and a series ID; a track ID, a disc ID, a release ID, an edition ID, and a series ID; and a track ID, a disc ID, a release ID, an edition ID, and an album ID.
  • Relationally related entities may be mapped at block 1706 .
  • segments may be mapped.
  • An example of a map for relationally related entities includes a map between disc ID, release ID, edition ID, album ID, and series ID.
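  • A minimal sketch of such entity maps (table contents are invented for illustration) links a media identifier to its core entity and to hierarchically related parent entities:

        # Hypothetical local maps from recognized identifiers to core entities
        TRACK_TO_RECORDING = {"TRK:9001": "REC:5001"}
        RECORDING_TO_COMPOSITION = {"REC:5001": "CMP:3001"}
        TRACK_TO_ALBUM = {"TRK:9001": "ALB:7001"}

        def map_track(track_id: str) -> dict:
            """Resolve a track ID to its parent recording, composition, and album."""
            recording = TRACK_TO_RECORDING.get(track_id)
            return {
                "track": track_id,
                "recording": recording,
                "composition": RECORDING_TO_COMPOSITION.get(recording),
                "album": TRACK_TO_ALBUM.get(track_id),
            }

        print(map_track("TRK:9001"))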
  • FIG. 18 illustrates a method 1800 for receiving master descriptor codes and content IDs for content according to an example embodiment.
  • the method 1800 may be performed at block 1408 (see FIG. 14 ).
  • Unrecognized descriptor IDs may be optionally mapped at block 1804 .
  • the mapping may translate the unrecognized descriptor code into a normalized descriptor code. Examples of maps that may be used to map the descriptor IDs include a genre map, an origin map, a recording era map, a composition era map, a mood map, a tempo map, and a situation map.
  • Descriptor IDs may optionally be mapped to entities without descriptor IDs at block 1806 .
  • a local mapping to an entity without a direct descriptor ID may be performed after retrieval of a parent or child descriptor ID.
  • Alternative billings and collaboration artist IDs may be received at block 1806 . Mappings of alternative billing IDs to primary artist IDs and mappings of collaborations to component individual contributors may be received.
  • Navigational and related content IDs may be received at block 1808 .
  • Navigational and related content IDs may include, by way of example, cover art, an artist photo, an artist bio, an album review, a track review, a label logo, a track audio preview, a track audio, a palette, or phonetic data.
  • Related content may be received at block 1810 .
  • Related content may be received by using a related content ID to request delivery of and/or unlocking of related content from local and/or remote sources (e.g., a store).
  • FIG. 19 illustrates a method 1900 for utilizing a navigation package in a client 102 (see FIG. 1 ) according to an example embodiment.
  • One or more default navigation views may be generated at block 1902 .
  • One or more alternate navigation views may be generated at block 1904 .
  • One or more personalized navigation views may be generated at block 1906 .
  • One or more contextually relevant views may be generated at block 1908 .
  • An example embodiment for creating the views that may be generated during the operations of the method 1900 is described in greater detail below.
  • FIG. 20 illustrates a method 2000 for creating a navigational view according to an example embodiment.
  • the method 2000 may be performed at block 1902 , block 1904 , block 1906 , and/or block 1908 .
  • a single descriptor type hierarchy 116 may be created at block 2002 .
  • Default templates may be used to organize media items into one or more single descriptor type hierarchies that may optionally be normalized (e.g., normalized single descriptor type hierarchy).
  • Options for the single descriptor type hierarchy may include a number of levels, an average number of categories under each node at each level, and/or the entity type contained in the lowest level categories (e.g., track, artist, album, or composition).
  • Examples of single descriptor type hierarchies 116 include meta-genre/genre/sub-genre:track, meta-mood/mood/sub-mood:track, meta-origin/origin/sub-origin:track, meta-era/era/sub-era:track, meta-type/type/sub-type:track, meta-tempo/tempo/sub-tempo:track, meta-theme/theme/sub-theme:track, meta-situation/situation/sub-situation:track, and meta-work type/work type/sub-work type:track.
  • Each of the single descriptor hierarchies may optionally be identified with a unique identifier.
  • Hierarchical views may be created using normalized metadata at block 2004 .
  • the normalized metadata from recognition may be used to organize media items into summarized, normalized single-entity hierarchical views.
  • the hierarchical views may be a single level or multiple levels. For example, a main artist may be displayed at a top level and an alternate billing level may be displayed when applicable.
  • the hierarchical views may include recording artist:track, composer:track, album:track, recording:track, and composition:track.
  • Hierarchical views may be created using existing metadata and/or entity hierarchies from templates at block 2006 .
  • the existing metadata and/or entity hierarchies from the navigation templates 300 may be used to organize media items into summarized, normalized single-entity hierarchical views.
  • the views may include artist/album:track or composer/composition:recording.
  • Other views may also be created.
  • Navigation trees 314 may be created at block 2008 .
  • the default navigation template 300 may be used to organize media items into one or more navigation trees 314 .
  • the navigation trees 314 may optionally be normalized and may include a unique identifier for subsequent identification.
  • a particular navigation tree 314 may be defined by a number of levels of the tree, the container (e.g., a descriptor category or entity type) used at each level of the tree, a number of categories for each category level of the tree (e.g., which category list is used for the levels using category lists), and the entity type or types that are ultimately organized by the categories of the tree at the lowest level; alternatively, the navigation tree may consist only of levels of descriptors.
  • the particular navigation tree 314 may be further defined by the mappings from the master descriptor list to the category lists and the mappings from each category list to the other category lists.
  • Examples of navigation trees 314 may include genre/era:track, genre/mood/tempo:recording, artist/mood/year:track, genre/year:news article, genre/origin, and the like.
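  • The following sketch (level names and data shapes are assumptions) illustrates organizing coded media items into a genre/era:track navigation tree:

        def build_navigation_tree(items, levels=("genre", "era")):
            """Organize media items into nested category levels ending in tracks."""
            tree = {}
            for item in items:
                node = tree
                for level in levels:
                    node = node.setdefault(item[level], {})
                node.setdefault("tracks", []).append(item["track_id"])
            return tree

        items = [{"track_id": "t1", "genre": "Jazz", "era": "1950s"},
                 {"track_id": "t2", "genre": "Jazz", "era": "1960s"}]
        tree = build_navigation_tree(items)  # {"Jazz": {"1950s": {...}, "1960s": {...}}}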
  • FIG. 21 illustrates a method 2100 for updating navigational views according to an example embodiment.
  • the navigational views generated during the method 2000 may be updated.
  • a determination may be made at decision block 2102 whether an ordering parameter has been received (e.g., through an API of the client 102). The ordering options of the API input may include alpha-numerical, date/time, number of sub-categories, number of items in container, editorial semantic relationship A (e.g., similarity clustering), editorial semantic relationship B (e.g., linear sequence closest fit), editorial importance or quality, popularity (e.g., static or dynamically generated), custom, and the like. If a determination is made that the parameter was not received at decision block 2102, the method 2100 may proceed to decision block 2106.
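  • As an illustration (the ordering keys and entry fields below are assumptions), a received ordering parameter might select among sort strategies for a category list:

        ORDERINGS = {
            "alpha": lambda entry: entry["label"].lower(),
            "date": lambda entry: entry["added"],
            "item_count": lambda entry: -entry["item_count"],
            "popularity": lambda entry: -entry["popularity"],
        }

        def order_category_list(entries, ordering="alpha"):
            """Re-order a category list according to the requested ordering option."""
            return sorted(entries, key=ORDERINGS[ordering])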
  • a determination may be made as to whether a display selection has been received. If a determination is made that the display selection has been received, the display selection may be processed at block 2108 .
  • the display selection may specify (e.g., through an API of the client 102) whether to display metadata as-is from a media item or alternatively to display a higher-level category item label from a selected hierarchy category list for a descriptor/category field.
  • the display selection may optionally be used to override parameters inherited from a global profile for a device or application. If a determination is made that the display selection has not been received at decision block 2106 , the method 2100 may proceed to decision block 2110 .
  • the unified library of media items may be navigated at block 2114 .
  • the unified library may include the media items 134 available from the plurality of media sources 106.1-106.n.
  • the unified library may be navigated from the client 102 and optionally another client on the network.
  • a determination may be made whether to modify the navigation. If a determination is made to modify the navigation, the method 2100 may return to decision block 2102. If a determination is made not to modify the navigation at decision block 2116, the method 2100 may proceed to decision block 2118.
  • FIG. 22 illustrates a method 2200 for presenting a navigation view on a client 102 (see FIG. 1 ) according to an example embodiment.
  • the method 2200 may be used to enable multiple descriptor hierarchies 116 to be available in a single client (e.g., a device and/or application).
  • the multiple hierarchies 116 may be built from the same master descriptor code. For example, both a gender-oriented and group-type oriented view of artist type master descriptor codes and both a recording-era oriented and a classical composition oriented view of era master descriptor codes may be presented.
  • Global settings may be defined per client 102 at block 2202 and per navigation tree at block 2204 .
  • Settings may also be defined per navigation tree level at block 2206 .
  • FIG. 23 illustrates a method 2300 for altering navigation on a client 102 (see FIG. 1 ) according to an example embodiment.
  • Navigation may be personalized by providing tools for user tagging and by adapting tagging menus based on a profile. End-users may be able to personalize by manually overriding category names, entity item names, and category groupings.
  • Navigation may also be personalized by dynamic configuration of hierarchies 116 based on collection and/or user profile.
  • An end-user may also manually select and/or construct a hierarchy 114 to be used for navigation.
  • Content may be flagged and/or tagged by the user at the client 102 level and/or file level.
  • Preferences may be stored remotely to be retrieved by a different client (e.g., application or device) of the user and/or may be transferred directly from one client to another via a wired or wireless connection.
  • a determination may be made whether to use dynamic personalization. If a determination is made to use dynamic personalization, dynamic personalization may be used at block 2308.
  • Assembled personal profile data may be accessed automatically to provide dynamic personalized navigation.
  • An implicit profile may be overridden with any explicit input provided by the user.
  • Sources used for dynamic personalization may include explicit input, an index of the user media collection including counts of media items per entity and per category, or an activity log that tracks user activity across a current device or multiple devices of a user and consists of plays, clicks, time engaged, and/or other factors.
  • the output of the dynamic personalization may be a combined profile.
  • the profile may indicate a relative level of interest/importance of category or entity item to the end-user and may be stored remotely and/or locally.
  • An alternative navigational tree 314 may be dynamically constructed based on profile input. A determination may be made as to how many levels to step up or down for sub-sections of the list in order to re-tune.
  • An example of the alternative navigation tree 314 dynamically constructed is as follows:
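  • By way of a purely hypothetical sketch (category names, weights, and thresholds are assumptions rather than part of the described system), profile input might drive a decision on how many levels to step up or down for each sub-section of the list:

        def retune_tree_levels(category_counts, profile_weights,
                               promote_at=0.25, demote_at=0.02):
            """Decide, per top-level category, whether to step down a level (expand),
            step up a level (collapse), or keep the default depth."""
            decisions = {}
            for category, count in category_counts.items():
                interest = profile_weights.get(category, 0.0)
                if interest >= promote_at and count > 1:
                    decisions[category] = "expand"     # show sub-categories directly
                elif interest <= demote_at:
                    decisions[category] = "collapse"   # fold into a parent/other bucket
                else:
                    decisions[category] = "keep"
            return decisions

        print(retune_tree_levels({"Jazz": 120, "Polka": 3},
                                 {"Jazz": 0.4, "Polka": 0.01}))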
  • Composite graphical, audio, video or textual navigational icons may be dynamically selected for personalization based on current contents of the container.
  • the method 2300 may proceed to decision block 2310 .
  • Contextual adaptation may use real-time global contextual adaptation and/or real-time personal contextual adaptation.
  • the data may be accessible by a client 102 either locally or remotely.
  • Examples of real-time global contextual adaptation include use of location (GPS, cell), time of day, week, season, year, date, holidays, weather, acceleration, speed, temperature, orientation, object presence (using technologies like RFID, BLUETOOTH), and public personal presence.
  • Examples of real-time personal contextual adaptation include use of personal location bookmarks (using data provided by GPS or cellular networks), personal time of day, week, year preference, birthdays, anniversaries, weather preference, personal tagged object presence, known tagged personal presence, heart rate, and other bio data feeds.
  • the method 2300 may proceed to decision block 2314 .
  • a determination may be made whether to restore navigation. If a determination is made to restore navigation, navigation may be restored at block 2316.
  • the method 2300 may proceed to decision block 2318 .
  • FIG. 24 shows a diagrammatic representation of a machine in the example form of a computer system 2400 within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
  • the machine may operate as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a media player (e.g., portable audio player, vehicle audio player, or any media rendering device), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the example computer system 2400 includes a processor 2412 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), digital signal processor (DSP) or any combination thereof), a main memory 2404 and a static memory 2406 , which communicate with each other via a bus 2408 .
  • the computer system 2400 may further include a video display unit 2410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
  • the computer system 2400 also includes an alphanumeric input device 2413 (e.g., a keyboard), a user interface (UI) navigation device 2414 (e.g., a mouse), a disk drive unit 2416 , a signal generation device 2418 (e.g., a speaker) and a network interface device 2420 .
  • the disk drive unit 2416 includes a machine-readable medium 2422 on which is stored one or more sets of instructions and data structures (e.g., software 2424 ) embodying or utilized by any one or more of the methodologies or functions described herein.
  • the software 2424 may also reside, completely or at least partially, within the main memory 2404 and/or within the processor 2412 during execution thereof by the computer system 2400 , the main memory 2404 and the processor 2412 also constituting machine-readable media.
  • the software 2424 may further be transmitted or received over a network 2426 via the network interface device 2420 utilizing any one of a number of well-known transfer protocols (e.g., FTP).
  • While the machine-readable medium 2422 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions.
  • the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
  • FIG. 25 illustrates an example end-use system in which the client 102 , 132 may be deployed.
  • the end-use system 2500 may include a device processor 2502 in which a client 2504 may be deployed on device storage 2506 .
  • the device may include a car radio, an MP3 player, a home stereo, a computing system, and the like.
  • the client 2504 may include the functionality of the client 102 and/or the client 132 (see FIGS. 1A and 1B).
  • the device storage 2506 may be a fixed disk drive, flash memory, and the like.
  • the device processor 2502 may receive content from a plurality of sources 2508.1-2508.6.
  • the plurality of sources as illustrated include a mobile phone/PDA 2508.1, a CD/DVD/BLURAY/HD-DVD 2508.2, a media player 2508.3, local and/or remote storage 2508.4, a radio station 2508.5, and internet access/streaming content 2508.6.
  • Other sources of content may also be used.
  • the content may be made navigable by an end-user through use of a user interface 2516 coupled to a display 2518, an audio output 2520, and a user input 2514 (e.g., for voice and/or text).
  • the information obtained by the device processor 2502 may be stored locally on the device storage 2506 and/or remotely on a system database 2512 by use of a system server 2512.
  • inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method and system for media navigation. A descriptor hierarchy may be accessed. The descriptor hierarchy may include at least one category list. One or more media descriptors may be accessed for a plurality of media items. The plurality of media items may be accessible from a plurality of sources. The one or more media descriptors may be mapped to the at least one category list. The navigation may be processed through a user interface to enable selection of the plurality of media items from the plurality of sources.

Description

    CROSS-REFERENCE TO A RELATED APPLICATION
  • This application is a Divisional of U.S. application Ser. No. 11/716,269, filed on Mar. 9, 2007, which claims the benefit of United States Provisional Patent Application entitled “Method and System to Navigate Media from Multiple Sources”, Ser. No. 60/781,609, Filed 9 Mar. 2006, which applications are incorporated herein by reference in their entirety.
  • FIELD
  • This application relates to media navigation, and more specifically to systems and methods for navigating media from a plurality of sources.
  • BACKGROUND
  • The growth in popularity of digital media has brought increased benefits and accompanying new challenges. While developments have continued to improve the digitization, compression, and distribution of digital media, it has become difficult to easily and effectively navigate through such media. Accessing media from a variety of sources of potentially different types for navigation increases the difficulty of navigation, as each of the sources may include a large number of media items including songs, videos, pictures, and/or text in a variety of formats and with varying attributes.
  • The sheer volume of digital media available, along with significant diversity in metadata systems, navigational structures, and localized approaches, makes it difficult for any single device or application user interface to meet the needs of all users for accessing the digital media.
  • In general, descriptive metadata ordinarily associated with the digital media may be unavailable, inaccurate, incomplete and/or internally inconsistent. When available, the metadata is often only available via textual tags that indicate only a single level of description (e.g., rock genre), and the level of granularity used in that single level may vary greatly even within a single defined vocabulary for the metadata. Often the level of granularity available is either too detailed or too coarse to meet the needs of a given user interface requirement. Additionally, for a given media object there may be multiple values available for a given descriptor type, and/or multiple values of the same type from multiple sources that may cause additional problems when attempting to navigate the digital media.
  • While the scope and diversity of digital media available to users has increased, the user interface constraints of limited screen size and resolution in some devices have remained effectively fixed. Although increases in resolution have been gained, the screen size being used to access some digital media has decreased as the devices have become more portable. Portable devices may include constraints that the length of lists and the length of the terms used in such lists remain short and simple.
  • At the same time, the application logic driving media applications such as automatic playlist engines, recommendation engines, user profiling and personalization functions, and community services benefits from increasingly detailed and granular descriptive data. Systems that attempt to utilize the same set of descriptors for both user interface display and application logic may be unable to meet both needs effectively at once, while using two completely separate systems risks discontinuity and user confusion.
  • In addition, different users may use different media navigation structures, labeling and related content when accessing content from different geographic regions, in different languages, from various types of devices and applications, from different user types, and/or according to personal media preferences. Existing navigation structures may not enable developers to easily select, configure and deliver appropriate navigational elements to each user group or individual user, especially in the case of an embedded device.
  • Traditionally, access to media available from such alternatives has been organized in a source or service paradigm. If a user desired to hear jazz music, they would either pre-select just a single device or source to browse, such as a local HDD, or drill in and out of the navigation UI for each device/source separately to view available jazz content.
  • Existing media navigation solutions have been more or less static in terms of not being able to respond dynamically to changes in a user's media collection or other explicit or implicit, temporary or long-term personal preferences. Likewise, there has been only limited capability for media navigation options to respond dynamically to real-time or periodic changes in other global and personal contextual data sources, including those related to time, location, motion, orientation, personal presence, and object presence.
  • BRIEF DESCRIPTION OF DRAWINGS
  • Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
  • FIGS. 1A & 1B are block diagrams of example navigation systems;
  • FIG. 2 is a block diagram of an example configuration system;
  • FIG. 3 is a block diagram of an example navigation template;
  • FIG. 4 is a block diagram of an example information architecture;
  • FIG. 5 is a flowchart illustrating a method for pre-processing the configuration system of FIG. 2 in accordance with an example embodiment;
  • FIG. 6 is a flowchart illustrating a method for creating a reference media database in accordance with an example embodiment;
  • FIG. 7 is a flowchart illustrating a method for creating an information architecture in accordance with an example embodiment;
  • FIG. 8 is a flowchart illustrating a method for creating a descriptor system that may be deployed in the navigation systems of FIGS. 1A & 1B in accordance with an example embodiment;
  • FIG. 9 is a flowchart illustrating a method for defining a navigation package in accordance with an example embodiment;
  • FIG. 10 is a flowchart illustrating a method for dynamically generating a navigation template in accordance with an example embodiment;
  • FIG. 11 is a flowchart illustrating a method for coding descriptor codes in media objects in accordance with an example embodiment;
  • FIG. 12 is a flowchart illustrating a method for preloading a client in accordance with an example embodiment;
  • FIG. 13 is a flowchart illustrating a method for loading an information architecture in accordance with an example embodiment;
  • FIG. 14 is a flowchart illustrating a method for coding media items in accordance with an example embodiment;
  • FIG. 15 is a flowchart illustrating a method for loading content from a plurality of sources in accordance with an example embodiment;
  • FIG. 16 is a flowchart illustrating a method for content recognition in accordance with an example embodiment;
  • FIG. 17 is a flowchart illustrating a method for mapping content IDs in accordance with an example embodiment;
  • FIG. 18 is a flowchart illustrating a method for receiving master descriptor codes and content IDs for content in accordance with an example embodiment;
  • FIG. 19 is a flowchart illustrating a method for utilizing a navigation package in a client in accordance with an example embodiment;
  • FIG. 20 is a flowchart illustrating a method for creating a navigational view in accordance with an example embodiment;
  • FIG. 21 is a flowchart illustrating a method for updating navigational views in accordance with an example embodiment;
  • FIG. 22 is a flowchart illustrating a method for presenting a navigation view on a client in accordance with an example embodiment;
  • FIG. 23 is a flowchart illustrating a method for altering navigation on a client in accordance with an example embodiment;
  • FIG. 24 illustrates a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed; and
  • FIG. 25 illustrates a block diagram of an example end-user system in which the client of FIGS. 1A & 1B may be deployed.
  • DETAILED DESCRIPTION
  • Example methods and systems for media navigation are described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
  • A navigation system is described that may provide consistent, simple, effective, and efficient access to a vast scope of digital media, regardless of application, display device or user interface (UI) paradigm. The navigation system may be deployed to enable end-users to perform customization and re-classification of their media collections without impacting the integrity of an underlying portion of the navigation system. Example embodiments of the navigation system may be used to navigate digital content (e.g., digital audio) and may thus be deployed in a portable media device (e.g., an MP3 player, iPOD, or any other portable audio player), in vehicle audio systems, home stereo systems, computers, or the like.
  • The navigation system may enable the efficient configuration and dynamic updating of diverse navigational structures across a plurality of devices and applications, while maintaining the integrity of the underlying metadata. A configuration module of the navigation system may automatically pre-generate and load into a client (e.g., provided on a media player) appropriate alternative normalized media navigation structures. The structures may be used to create an indexed application media database and navigational elements to present one or more views of media items of diverse and multiple sources, types and metadata. The views may be switched on demand, personalized, dynamically adapted and updated without altering the underlying global granular source media identifiers, descriptor codes and/or labeling data.
  • An example system may be built upon a foundation of navigation structures that may include lists, ordering data, hierarchies, trees, weighted relations, mappings, links, and other information architecture elements, as well as entity grouping, labeling, filters and related navigational content. The navigation structures may incorporate a set of semantic entities and descriptors designed to be understandable and appropriate and may be automatically optimized. Thus, for example, a media device (e.g., a portable media player or vehicle audio system) may be customized or optimized for one or more particular users or user location(s).
  • The navigation system may include a mapping of global internal granular annotation codes to a variety of alternative category lists and navigational content used for user interface purposes. The category lists may be fluidly mixed and matched to generate a wide variety of hierarchies and navigation trees while maintaining the integrity of the parent-child mapping between each list and without any change in the global granular annotation codes.
  • In a pre-processing phase, a configuration module may be utilized to automatically select from a pre-defined super-set of options, pre-generate and load into a specific connected or unconnected client at time of manufacture or initial start-up an appropriate set of alternative normalized media navigation structures. The options chosen may be determined based on a combination of device, application, region, language, user type, manufacturer/publisher, and/or end-user account IDs which are provided as parameters to the configuration module.
  • A client application may then utilize the navigational structures to create an indexed application media database, typically in conjunction with media recognition technology.
  • The index may compile media items sourced from diverse and multiple types, services and providers. Additionally, the media items may be accessed from diverse and multiple devices, connectivity and transport. The media items may furthermore contain identifiers and metadata of diverse types, completeness, consistency and accuracy without materially impacting the performance of the system.
  • The client application may utilize the navigational structures to generate default and alternative user interface elements such as browse trees, faceted navigation, and relational lists, and/or to drive aspects of application logic such as auto-playlisting, search, personalization, recommendation, internet community, media retail and on-demand subscription services, and the like.
  • In an example embodiment, the user interface elements present, uniquely for each user on a client, simple, unified (descriptor-based and other) views. Via an application programming interface (API) provided on the media player, the views and structures may be altered on demand, personalized, dynamically adapted (based on personalized and/or contextual input) and updated remotely from a set of master databases via a network service and/or from a local database.
  • The alternate views may be applied on a global (e.g., a country or several countries), region (e.g., regions within a country), application (e.g., an online catalog or store . . . ), device (e.g., portable media player, vehicle media player, etc.), and/or user-account level. All such changes at the presentation information architecture layer may be achieved without altering the underlying master granular media identifiers, descriptor codes, labeling data, and file tags.
  • The master media, information architecture, contextual data, and user profile databases may be refined and appended based on feedback from the clients and thus be modified based on user input.
  • The navigation system may be used to power various applications running on any device rendering media including a search, a recommendation, playlisting, internet community facilitation, stores, on-demand subscription services, and the like.
  • FIG. 1A illustrates an example navigation system 100. As mentioned above, the system 100 may be deployed on any device that renders media (e.g., a portable media player, a vehicle media player such as a vehicle audio system, etc.). The navigation system 100 may include a client 102 in the form of an application or a device. The client 102 may receive architecture information (e.g., selected category lists 122) from a plurality of descriptor systems 104.1-104.n that may be used to enable the client 102 to navigate content from a plurality of media sources 106.1-106.n (e.g., a radio station, a flash drive, an MP3 player . . . ) and/or a media service 108 (e.g., provided on a media player or by an online catalog) by use of a client application 110.
  • The client application 110 may provide navigational access to content through a user interface 112. The user interface 112 may provide navigational information and other information to an end-user through a display device (e.g., on a device) or an application.
  • Navigation templates 114 may access one or more descriptor hierarchies 116 available to the client 102 through the user interface 112 to provide one or more navigation views.
  • A master descriptor code list 124 associated with content may be used to enable navigational access to the content (e.g., audio, video, or the like) by mapping the content to selected descriptor category lists 122 that are contained within the descriptor hierarchies 116. The descriptor codes may describe an attribute of an entity of a media item. The selected category lists 122 may be selected from the plurality of available category lists 132 contained within a plurality of the descriptor systems 104.1-104.n. In an example embodiment, the master descriptor code list 123 of the client may not include all master descriptor codes of the master descriptor code list 124 of a descriptor system 104.1.
  • A descriptor system 104.1 includes a master descriptor code list 124 from which master descriptor codes may be selected. The master descriptor code list 124 may include a list of a plurality of available master descriptor codes and associated names for a particular type of media descriptor. For example, the master descriptor code list 124 for a genre media descriptor may include coded identification of the greatest amount of granularity desired for genre. By way of example, the master descriptor code list 124 for genre may include, e.g., more than fifteen hundred different genres. For example, the master descriptor code list 124 may be a detailed genre list used for coding media items such as artists, albums, and recordings. The master descriptor code list 124 for a particular type of descriptor may optionally be given a unique identifier that may be used for identification.
  • In an example embodiment, by linking the master descriptor code associated with a media item 134 to the corresponding master descriptor code in the master descriptor code list 124, the media item may be associated with the descriptor categories to which that master descriptor code is mapped.
  • The most granular descriptor category list of the available category lists 132 may be referred to as the master descriptor category list. The master descriptor codes are mapped directly into the master descriptor category list, which may contain an equal or smaller number of categories than there are codes in the master descriptor code list 124.
  • By maintaining the master descriptor category list separately from the other descriptor category lists, maintenance of the descriptor system 104.1 may be simplified. The mapping from the master descriptor code list 124 to each of the other descriptor category lists may not need to be directly updated as the mapping from the master descriptor category list to each of the other descriptor category lists may remain unchanged.
  • The plurality of available category lists 132 (e.g., the descriptor category lists) may include a number of levels of category lists as part of each descriptor system. Each category list may include a differing number of category codes and associated labels from the other category lists in the descriptor system. For example, a category list at a first level may include five category codes and associated labels and a category list at a second level may include twenty category codes and associated labels. The most granular level of the plurality of available category lists 132 may be referred to as a master descriptor category list.
  • Each descriptor category code in each descriptor category list in each descriptor system may be mapped to its parent descriptor category code in the next less granular descriptor category list. A number of child category codes may be mapped to each parent category code. The mapping may have an indirect effect of mapping each master descriptor code to its appropriate parent category in every category list via the master descriptor category list, and mapping every category code to its appropriate parent and child category code in every non-adjacent category list. The mappings from the master descriptor codes to the descriptor category codes may also be stored directly. Each descriptor code may be a unique identifier.
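  • The following sketch (hypothetical Python; the genre codes, category codes, and labels are invented for illustration) shows one way the layered mapping described above could be represented: master descriptor codes map into the most granular (master) category list, and each category maps to a single parent in the next less granular list, so that any master code can be resolved to a category at every level.

```python
# Hypothetical descriptor system: master codes map into the most granular
# (master) category list; each category maps to one parent in the next less
# granular list. All codes and labels are invented for illustration.
MASTER_CODES = {9001: "Delta Blues", 9002: "Chicago Blues", 9003: "Chanson", 9004: "J-Pop"}

# Level 3 is the master category list (most granular category list).
MASTER_CODE_TO_L3 = {9001: "C310", 9002: "C311", 9003: "C320", 9004: "C330"}
# Each child category maps to exactly one parent category in the next level up.
L3_TO_L2 = {"C310": "C210", "C311": "C210", "C320": "C220", "C330": "C230"}
L2_TO_L1 = {"C210": "C110", "C220": "C120", "C230": "C120"}

LABELS = {
    "C310": "Delta Blues", "C311": "Chicago Blues", "C320": "Chanson", "C330": "J-Pop",
    "C210": "Blues", "C220": "French Pop", "C230": "Asian Pop",
    "C110": "Blues", "C120": "World",
}

def categories_for(master_code: int) -> list:
    """Resolve a master descriptor code to its category label at every level."""
    l3 = MASTER_CODE_TO_L3[master_code]
    l2 = L3_TO_L2[l3]
    l1 = L2_TO_L1[l2]
    return [LABELS[l1], LABELS[l2], LABELS[l3]]

# A recording coded 9003 ("Chanson") surfaces as World > French Pop > Chanson.
print(categories_for(9003))  # ['World', 'French Pop', 'Chanson']
```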
  • To enable data annotation by end-users, artists, venue/label/content/broadcast partners, internal experts, or others using the more simplified category lists rather than the highly granular master descriptor lists, a mapping may be used.
  • A one-to-one “downward” mapping 130 may be created and stored for each descriptor category. As the descriptor categories may generally be of a more aggregate nature than many of the master descriptor codes, the descriptor categories may typically be mapped to the more aggregated master descriptor codes to ensure that only the level of information actually known is encoded. The user interface presented to the submitter may only display the applicable hierarchy of category lists, from which the submitter selects the annotation labels. Once selected, the item may be annotated with the mapped master descriptor code. For example, a venue operator may “publish” the music genres that are primarily associated with the venue, but do so by selecting from a simplified list of fifteen meta-genres.
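  • A minimal sketch of this annotation path, assuming a hypothetical simplified meta-genre list and invented aggregate master codes: the submitter picks a label from the short list, and the item is stored with the mapped, more aggregate master descriptor code.

```python
# Hypothetical one-to-one "downward" mapping from simplified annotation
# categories to aggregate master descriptor codes (all values are invented).
META_GENRE_TO_MASTER_CODE = {
    "Rock": 8100,        # aggregate "Rock (General)" master code
    "Blues": 8200,       # aggregate "Blues (General)" master code
    "Electronic": 8300,  # aggregate "Electronic (General)" master code
}

def annotate(item: dict, selected_meta_genre: str) -> dict:
    """Annotate a media item with the master code mapped from the chosen label."""
    annotated = dict(item)
    annotated["genre_master_code"] = META_GENRE_TO_MASTER_CODE[selected_meta_genre]
    return annotated

# A venue operator publishes the venue's primary genre from the short list.
venue = annotate({"entity": "venue", "name": "Example Club"}, "Blues")
print(venue)  # {'entity': 'venue', 'name': 'Example Club', 'genre_master_code': 8200}
```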
  • A descriptor system 104.1 may be created for each desired viewpoint. The viewpoints of the descriptor system 104.1 may be defined by a combination of regional, genre preference, psychographic, demographic, and/or other factors that combine to define a preferred perception of the applicable descriptor types. Examples of viewpoints of descriptor systems 104.1 include North American Default, Japanese Classical Aficionado, South American, Youth, and Southern European Traditional.
  • To provide for a more useful user interface 112, different descriptor systems 104.1-104.n may group the same master descriptor code list 124 into substantially different category arrangements. The grouping may enable the ability to substantially change the areas of category focus in different implementations while utilizing the same master descriptor codes. For example, a European genre descriptor system might include the genre “Chanson” in its shorter, more highly aggregated genre category lists, as a European end-user may desire quick access to music of this type, whereas for a North American genre descriptor system, the genre category might only be exposed at the lower levels, if at all, with content coded with “Chanson” master descriptor codes instead being included along with music from other related master genre codes in a genre category of “World”.
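  • The sketch below (hypothetical Python; the codes and groupings are invented) illustrates how two viewpoint descriptor systems could arrange the same master codes into different top-level categories, so that “Chanson” surfaces prominently in one view and rolls up into “World” in another.

```python
# Two hypothetical viewpoint descriptor systems grouping the same master codes
# into different top-level genre categories (codes and groupings are invented).
MASTER_CODES = {9003: "Chanson", 9005: "Fado", 9010: "Indie Rock"}

EUROPEAN_TOP_LEVEL = {
    "Chanson": {9003},      # exposed as its own highly aggregated category
    "World": {9005},
    "Rock": {9010},
}
NORTH_AMERICAN_TOP_LEVEL = {
    "World": {9003, 9005},  # Chanson folded into "World" with related genres
    "Rock": {9010},
}

def top_category(viewpoint: dict, master_code: int) -> str:
    """Find the top-level category that contains the given master code."""
    return next(cat for cat, codes in viewpoint.items() if master_code in codes)

code = 9003
print(MASTER_CODES[code], "->", top_category(EUROPEAN_TOP_LEVEL, code))       # Chanson -> Chanson
print(MASTER_CODES[code], "->", top_category(NORTH_AMERICAN_TOP_LEVEL, code)) # Chanson -> World
```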
  • Navigational content 120 may be used to enhance navigational access on the user interface 112. The navigational content 120 may be provided during navigation to identify content and may include, by way of example, audio clips, media packaging graphics, photos, text, genre icons, genre mini-clips, genre descriptions, origin icons, origin mini-clips, origin descriptors, navigational icons (e.g., channel icons), phonetic data, and the like.
  • A recognition module 125 may be further included in the client to recognize media items 134.1, 134.2. The recognition module 125 may optionally use a local database 127 and/or a remote database 129 to perform lookups and/or obtain metadata for the media items 134.1, 134.2. An example embodiment of a method for recognizing the media items 134.1, 134.2 that may be performed at the recognition module 125 is described in greater detail below.
  • FIG. 1B illustrates another example of a navigation system 131 in which a client may be deployed.
  • The navigation system 131 may include a client 135 that may have access to media objects 138.1 from one or more media sources 133 and media objects 138.2 from one or more media services 136. The media objects 138.1 may be associated with entity types 140.1, while the media objects 138.2 may be associated with the entity types 140.2. The media objects 138.1, 138.2 may be accessed through an indexing module 144 by a local information architecture 142.
  • The media objects 138.1, 138.2 may be recognized by use of a recognition module 146. The recognition module 146 may use a local media object lookup database 148 to identify the media objects 138.1, 138.2 to assemble a local metadata database 150 of metadata for the media objects 138.1, 138.2. The recognition module 146 may also, instead of or in addition to the local lookup, perform a remote lookup by contacting a recognition service 170. The recognition service 170 may use a master media object lookup database 172 to identify the media objects 138.1, 138.2 and a master media metadata database 174 to obtain metadata for the media objects 138.1, 138.2.
  • The local information architecture 142 may be used by a navigation system application 160 to configure various navigational views as described in greater detail below. The navigation system application 160 may configure the navigational views according to a personalization module and/or a contextualization module.
  • Hierarchies 152 and navigation trees 154 may be generated from the local information architecture 142 and used to provide the navigational views. The hierarchies 152 and the navigation trees 154 may optionally be stored and available for later retrieval.
  • A navigation API 162 may be used to provide access to the navigation system application 160. One or more contextual data feeds and/or sensors may be used to provide contextual data through the navigation API 162 to the navigation system application 160. Personalized data may also be received from a master user profile database 194 by use of a personalization service 192.
  • A navigational update module 156 may be used to update the local information architecture 142. The navigation update module 156 may optionally use one or more IDs 158 which may identify a build ID, a device ID, an application ID, a customer ID, a region ID, a taste profile ID, a user type ID, a client ID, and/or a user ID to receive the appropriate updates for the local information architecture 142 as deployed.
  • A navigation service 176 may provide updated information to the navigation update module 156. The navigation service 176 may obtain information used for navigation (e.g., navigational content) from one or more descriptor systems 178 and navigational content from a master navigation content object store 180, utilizing a navigational content metadata database 182. The navigational content may be provided to aid in navigation and may include, by way of example, audio clips, media packaging images, text, genre icons, genre mini-clips, genre descriptions, origin icons, origin descriptors, channel icons, phonetic data, and the like.
  • The client application 166 may provide navigational access to media objects 138.1, 138.2 through a user interface 168. The user interface 168 may provide navigational information and other information to an end-user through a display device (e.g., on a device) or an application.
  • One or more navigation templates 184 may be accessed by the navigation service 176. The navigation templates 184 may enable differing navigational views into one or more descriptor hierarchies 152 available to the client 135 through the user interface 168.
  • Descriptor codes associated with content in the local metadata database 150 may be used to enable navigational access to the content by mapping content to selected category lists that are contained within the descriptor hierarchies 152. The selected category lists may be selected from the plurality of available category lists contained within a plurality of the descriptor systems 178.
  • The plurality of available category lists may include a number of levels of category lists. Each category list may include a differing number of category codes and associated labels. For example, a category list at a first level may include five category codes and associated labels and at a second level may include twenty category codes and associated labels.
  • FIG. 2 illustrates an example configuration system 200. The configuration system 200 may be used to create one or more hierarchies 116 from the descriptor systems 104.1-104.n for use in the navigation system 100 (see FIG. 1).
  • The configuration system 200 includes a configuration application 202 in which a configuration module 216 may communicate with one or more master media sources 204 and a reference media database 206. The configuration application 202 may utilize one or more master descriptor code lists 124.1-124.n for content (e.g., media items) available through the master media sources 204. A master descriptor code list 124.1 of the master descriptor code lists 124.1-124.n may be provided to a plurality of descriptor systems 208, 210, 212.
  • The configuration application 202 may also create a descriptor hierarchy 214 from an original descriptor system 208 or an alternate viewpoint descriptor system 210. From each of these, an alternate label and/or a translated language descriptor system 212 may also be created.
  • The original descriptor system 208 includes the master descriptor code list 124 (see FIG. 1), third party descriptor label mappings, third party descriptor ID mappings, master category lists 126, other category lists, system mapping tables 228, and ordering value data 230. The third party label and identifier mapping tables may associate third party labels (e.g. genre label “R&R”) or IDs (e.g. ID3 Genre tag #96) with the appropriate master descriptor code.
  • 3rd party mapping tables associate descriptor term labels and descriptor term unique identifiers utilized externally with the most appropriate master descriptor code for each applicable descriptor type. If more than one 3rd party system uses an identical descriptor label, this one label may be mapped to the single “best-fit” master descriptor code; alternatively, the system may store a 3rd party organization ID with each instance of a descriptor label, such that it is possible to support alternate mappings from a 3rd party descriptor to a master descriptor code depending on the source, as indicated by the 3rd party entity identifier. In the case of mapping 3rd party descriptor unique identifiers, it is always necessary to include the 3rd party organization ID. External descriptor labels do not have to be associated with a specific entity; they may represent colloquial expressions.
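  • A hedged sketch of such 3rd party mapping tables follows (hypothetical Python; the labels, codes, and organization identifiers are invented). It shows a best-fit label mapping, a source-specific override keyed by a 3rd party organization ID, and an ID-based mapping that always carries the organization ID.

```python
# Hypothetical 3rd party mapping tables (all labels, codes, and organization
# identifiers are invented for illustration).
LABEL_MAP = {
    ("R&R", None): 8100,                 # best-fit mapping for a colloquial label
    ("R&R", "org_broadcaster_7"): 8150,  # source-specific mapping for one 3rd party
}
THIRD_PARTY_ID_MAP = {
    ("org_id3", "96"): 8400,             # external unique IDs always carry the org ID
}

def resolve_label(label: str, org_id=None) -> int:
    """Resolve an external descriptor label to a master descriptor code."""
    return LABEL_MAP.get((label, org_id), LABEL_MAP[(label, None)])

print(resolve_label("R&R"))                       # 8100 (best-fit mapping)
print(resolve_label("R&R", "org_broadcaster_7"))  # 8150 (source-specific mapping)
print(THIRD_PARTY_ID_MAP[("org_id3", "96")])      # 8400
```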
  • The ordering value data 230 may optionally be associated with the category lists 126 of the descriptor systems 208, 210, 212. The ordering values for each of the categories indicate an order in which the labels of a category list may be presented in the user interface 112 (see FIG. 1). For example, the ordering values may be based on judgmental similarity, judgmental importance, judgmental chronology, and the like.
  • The labeled hierarchy 210 may be assembled from a specific descriptor system 208 with specific labels associated with each category ID. The labeled hierarchy 210 includes the master descriptor code list 124, labeled category lists 224, the system mapping tables 228, and the ordering value data 230. The alternate master category lists 224 may include one or more alternate labels for the category lists contained in the master category lists 126. For example, the alternate labels may include nicknames, short names, and the like.
  • The translated hierarchy 212 is a version of a labeled hierarchy 210 with translated labels (e.g., in Spanish, Japanese, or English), and may also be referred to as a localized language descriptor system. The translated hierarchy 212 includes the master descriptor code list 124, translated labeled category lists 226, the system mapping tables 228, and the ordering value data 230.
  • The descriptor hierarchy 214 may include the selected category lists 122 (see FIG. 1), category list ordering data 234, hierarchy mapping tables 236, descriptor relational tables 238, and entity hierarchies 240. The selected category lists 122 may be selected from the master category lists 126, the alternate master category lists 224, and/or the localized master category lists 226.
  • The category list ordering data 234 may enable one or more alternate orderings of the category lists of the descriptor hierarchy 214.
  • The third party mapping tables 236 (e.g., tables provided by a third party content provider) may associate descriptor terms and unique identifiers used by third parties to the master descriptor codes contained in the master descriptor code list 124. For example, a third party descriptor term may be mapped to a master descriptor code to which it is most similar. Thus discrepancies in terminology used may be accommodated.
  • The descriptor relational tables 238 relate master descriptor codes with other master descriptor codes in the master descriptor code list 124. The relations may be defined by a correlation value or a weighting for each relationship.
  • The entity hierarchies 240 define the parent/child relationship between at least some of the entity types. An example of an entity item hierarchy is as follows: Series <-> Album <-> Edition <-> Release <-> SKU <-> Disc <-> Track <-> Recording <-> Movement <-> Composition.
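  • One possible encoding of the example entity item hierarchy above, shown as a hypothetical Python sketch (the helper names are illustrative only):

```python
# One possible encoding of the example entity item hierarchy, ordered from the
# most aggregate entity type down to the most granular.
ENTITY_HIERARCHY = [
    "Series", "Album", "Edition", "Release", "SKU",
    "Disc", "Track", "Recording", "Movement", "Composition",
]

def parent_of(entity_type: str):
    """Return the next more aggregate entity type, or None at the top."""
    i = ENTITY_HIERARCHY.index(entity_type)
    return ENTITY_HIERARCHY[i - 1] if i > 0 else None

def child_of(entity_type: str):
    """Return the next more granular entity type, or None at the bottom."""
    i = ENTITY_HIERARCHY.index(entity_type)
    return ENTITY_HIERARCHY[i + 1] if i + 1 < len(ENTITY_HIERARCHY) else None

print(parent_of("Track"))  # Disc
print(child_of("Track"))   # Recording
```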
  • FIG. 3 illustrates an example navigation template 300. The navigation template 300 may be used by a navigation service to select the appropriate elements to be included in a navigation package. The client may utilize these elements to assemble navigation views for navigating content from one or more media sources 106.1-106.n and/or the media library 108 (see FIG. 1).
  • The navigation template 300 may include a template ID 302, a region 304, one or more application parameters 306, one or more device parameters 308, a user type 310, one or more third party IDs 312, and/or one or more navigation trees 314.
  • The template ID 302 may be an identifier (e.g., a unique identifier) that may be used to select the navigation template 300 (e.g., among the one or more navigation templates 114) for navigational use.
  • The region 304 may define the regional viewpoint that is used to select the appropriate descriptor systems and other navigational elements such that they can be assembled into a user interface that conforms to the cultural expectations of a specific regional user base. For example, the selection of the region may include the United States, Germany, Brazil, Japan, Korea, the Middle East, China, global, Eurasia, America, Asia, the U.S. East Coast, the New England states, Chicago, and the like. Other regions may also be available for selection.
  • The one or more application parameters 306 indicate the application type (e.g., a media player, a web site, a playlist generator, a collection manager, or other application) in which the navigation package may be deployed. For example, the application parameters 306 may cause alternate category labels to be selected, such as short names, standard names, or extended names.
  • The one or more device parameters 308 specify the device in which the navigation package may be deployed. For example, the device parameters 308 may indicate whether the device is a PC, a home media server, a vehicle stereo, a portable media player, a mobile phone, a digital media adapter, a connected CD/DVD/flash player, a remote control, and the like.
  • The user type 310 may identify a type of user that may use the deployment of the navigation template 300 in the client 102. For example, the user type may identify a user of the navigation template 300 as a basic user, a simple user, a standard user, an advanced user, or a professional user. Other user types may also be used.
  • The one or more third party IDs 312 may identify third parties associated with a deployment of the navigation template 300. The third party IDs may be associated with third party customers and/or partners that have IDs (e.g., unique IDs) that define their navigational preferences for the end-user, as well as IDs of third parties whose labels or IDs may need to be mapped via the appropriate third party mapping tables.
  • The one or more navigation trees 314 may each be a sequence of category lists taken from the selected category lists 122 and/or entity types to be available for constructing elements of the user interface 112 based on the selection of the application parameters 306, the device parameters 308, the user type 310, and/or the other third party IDs 312. The navigation trees 314 may also include an ordering of items in a category list. The list ordering definition may include a judgmental unique ordering, an alphabetical ordering, a dynamic item count ordering, a dynamic popularity ordering, or the like.
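  • A hypothetical sketch of a navigation template as a simple data structure follows (Python; the field names and example values are illustrative assumptions, not the actual template schema). The example navigation trees reuse sequences mentioned elsewhere in this description, such as genre/era/track.

```python
# Hypothetical container for the navigation template fields described above.
from dataclasses import dataclass, field

@dataclass
class NavigationTemplate:
    template_id: str
    region: str
    application_parameters: list = field(default_factory=list)
    device_parameters: list = field(default_factory=list)
    user_type: str = "standard"
    third_party_ids: list = field(default_factory=list)
    # Each navigation tree is an ordered sequence of category lists and/or entity types.
    navigation_trees: list = field(default_factory=list)

template = NavigationTemplate(
    template_id="tmpl-0001",
    region="US",
    application_parameters=["media_player"],
    device_parameters=["portable_media_player"],
    user_type="basic",
    navigation_trees=[["genre", "era", "track"], ["genre", "album", "mood", "track"]],
)
print(template.navigation_trees[0])  # ['genre', 'era', 'track']
```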
  • FIG. 4 illustrates example information architecture 400. The information architecture may be deployed in the client 102 (see FIG. 1) and/or in other applications and devices. The information architecture 400 may include one or more descriptor hierarchies 116, one or more navigation trees 314, one or more other category lists 406, default mappings 408, one or more descriptor relation tables 238, and/or navigational content 120 (see FIGS. 1-3).
  • The other category lists 406 may be selected from one or more descriptor hierarchies 116 not included in the information architecture 400. The default mappings 408 may be used in the information architecture 400 to enable mapping of legacy descriptor IDs (e.g., from a provider or a third party) and/or to perform third party descriptor label mapping. Other configurations of the information architecture 400 including different elements may also be deployed in the client 102.
  • FIG. 5 illustrates a method for pre-processing content to enable deployment of the information architecture 400 (see FIG. 4) in the client 102 (see FIG. 1) for navigation of the content in accordance with an example embodiment.
  • The reference media database 206 may be created at block 502. Upon creation, the reference media database 206 may include the plurality of master descriptor code lists 124.1-124.n. An example embodiment of creating the reference media database 206 is described in greater detail below.
  • Information architecture 400 (see FIG. 4) may be created at block 504. An example embodiment of creating the information architecture 400 is described in greater detail below.
  • A navigation template 300 (see FIG. 3) may be defined at block 506. An example embodiment of defining a navigation template is described in greater detail below.
  • FIG. 6 illustrates a method 600 for creating a reference media database 206 (see FIG. 2) according to an example embodiment. In an example embodiment, the method 600 may be performed at block 502 (see FIG. 5). The reference media database 206 created through the operations of the method 600 may be used in the configuration system 200; however, it may also be used in other systems.
  • Media descriptors may be associated with a plurality of entities at block 602. The plurality of entities may be from a single media source 106.1 (e.g., from a single content provider or fixed disk), the plurality of media sources 106.1-106.n, and/or the media library 108 (see FIG. 1). For example, the entities may include a channel, a stream, a station, a program, a slot, a playlist, a web page, a recording artist, a composer, a composition, a movement, a performance, a recording, a recording mix track, a track segment, a track, a release, an edition, an album, an album series, a graphic image, a photograph, a video segment, a video, a video still image, a TV episode, a TV series, a film, a podcast, an event, a location, and/or a venue. Other entities may also be used.
  • The media descriptors may use an identification (ID) code to identify a characteristic of an entity with which it is associated. For example, the ID code used with the media descriptors may include a genre ID code, an origin ID code, a recording era ID code, a composition era ID code, an artist type ID code, a tempo ID code, a mood ID code, a situation ID code, a work type ID code, a topic ID code, a personal historical contextual ID code, a community historical contextual ID code, a timbre ID code, and the like. Other ID codes may also be used with the media descriptors to identify characteristics of entities.
  • Each of a particular type of ID code may be associated with a single master descriptor code list 124. For example, the genre ID codes may be on a genre master descriptor code list where each genre ID code is associated with a genre label, and the mood ID codes may be on a mood master descriptor code list where each mood ID code is associated with a mood label.
  • A particular entity may be associated with a plurality of media descriptors. One or a plurality of media descriptors with a same type of code (e.g., a genre ID code) may be associated with an entity. The media descriptors may optionally be given a ranking and/or weighting to indicate their relevance among other media descriptors. For example, an album may be associated with a primary media descriptor of blues and a secondary media descriptor of rock, where the primary media descriptor is ranked higher than the secondary media descriptor.
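  • A minimal sketch of ranked and weighted descriptor assignments for a single entity (hypothetical Python; the codes, ranks, and weights are invented):

```python
# Hypothetical ranked and weighted descriptor assignments for one album
# (codes, ranks, and weights are invented).
album_descriptors = [
    {"type": "genre", "code": 8200, "label": "Blues", "rank": 1, "weight": 0.7},
    {"type": "genre", "code": 8100, "label": "Rock",  "rank": 2, "weight": 0.3},
]

def primary_descriptor(descriptors: list, descriptor_type: str) -> dict:
    """Return the highest-ranked descriptor of the given type."""
    of_type = [d for d in descriptors if d["type"] == descriptor_type]
    return min(of_type, key=lambda d: d["rank"])

print(primary_descriptor(album_descriptors, "genre")["label"])  # Blues
```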
  • A determination of which media descriptors to associate with an entity may optionally be based on information received from one or more data sources. For example, the data sources may include content information received from providers (e.g., label data feeds and/or content registries), expert editorial information, individual community submissions, collaborative community submissions, digital signal processing (DSP) analysis, statistical analysis, and the like.
  • One or more flags may optionally be associated with one or more entities of the plurality of entities at block 604. The flags may be used to identify a certain type (e.g., a special type) of entity that may receive special handling during navigation. For example, a flag may be used to indicate that an entity is a various artist compilation, a soundtrack, a holiday related theme (e.g., Christmas), an interview, and/or a bootleg recording. Other types of flags may also be used to identify other certain types of entities that may receive special handling during navigation.
  • Inheritance may be applied to the plurality of entities at block 606. Applying inheritance to the plurality of entities may enable access for at least some of the plurality of entities to selected media descriptors associated with one or more other entities. Inheritance may optionally be used to reduce an amount of media descriptors associated with a plurality of entities based on the relationship among the entities. The use of inheritance may provide greater efficiency and/or scalability of the reference media database 206.
  • By way of an example, a set of media descriptors may be associated with a recording artist. For an album of the recording artist where the media descriptors differ from the set of media descriptors associated with the recording artist, a set of media descriptors may be associated with the album. When the individual recordings of the artist differ from that of an album on which the recordings were recorded, a set of media descriptors may be associated with the individual recordings. If one or more individual segments of a recording differ from a parent recording, a set of media descriptors may be associated with the individual segments.
  • Inheritance may be applied to entities in a cascading down manner (e.g., cascading down inheritance) to provide for inheritance from a parent and/or in a cascading up manner (e.g., cascading up inheritance) to provide for inheritance from a child. Inheritance may be applied in a cascading down manner when a child entity does not have directly associated media descriptors. The child entity may then inherit the media descriptors associated with a parent entity. For example, if a genre ID code is associated with a recording artist and an album does not have an associated genre ID code, the album may inherit the genre ID code of the recording artist. If a recording on the album does not have an associated genre ID code, the recording may inherit the genre ID code of the album.
  • Inheritance may be applied in a cascading up manner when a parent entity does not have directly associated media descriptors. The parent entity may then inherit the media descriptors associated with a child entity. For example, if a genre ID code is associated with one or more recordings and an album containing the recordings does not have associated genre ID code, the album may inherit the genre ID code associated with the most common genre ID code from its recordings. If an artist who recorded a plurality of recordings has not been associated with a genre ID code, the artist may inherit the most common genre ID code associated with albums associated with the artist.
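  • The cascading inheritance described above might be sketched as follows (hypothetical Python; the genre codes are invented). Cascading down fills a child's missing genre from its parent; cascading up fills a parent's missing genre from the most common genre among its children.

```python
from collections import Counter

def inherit_down(child_genre, parent_genre):
    """Cascading down: a child with no genre code inherits its parent's genre code."""
    return child_genre if child_genre is not None else parent_genre

def inherit_up(parent_genre, child_genres):
    """Cascading up: a parent with no genre code inherits the most common child code."""
    if parent_genre is not None:
        return parent_genre
    known = [g for g in child_genres if g is not None]
    return Counter(known).most_common(1)[0][0] if known else None

artist_genre = 8200                                  # artist coded directly (invented code)
album_genre = inherit_down(None, artist_genre)       # album inherits 8200 from the artist
recording_genres = [8200, 8200, 8100]                # recordings coded directly
print(album_genre)                                   # 8200
print(inherit_up(None, recording_genres))            # 8200 (most common among recordings)
```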
  • An additional mapping may be created for the reference media database 206 at block 608 to provide for an additional association among at least two entities of the plurality of entities. For example, an alternate artist billing may be mapped to a primary artist ID code and/or an individual artist ID code may be mapped to a collaboration artist ID code. Other additional mappings may also be created.
  • The reference media database 206 may optionally be stored at block 610. The reference media database 206 may include the association of the media descriptors with the plurality of entities, the association of a flag with the selected entities of the plurality of entities, the applied inheritance to the plurality of entities, and/or the additional mapping among at least two entities of the plurality of entities.
  • FIG. 7 illustrates a method 700 for creating an information architecture 400 (see FIG. 4) that may be deployed in a client 102 (see FIG. 1) according to an example embodiment. In an example embodiment, the method 700 may be performed at block 504 (see FIG. 5).
  • A descriptor system 104 (see FIG. 1) may be accessed at block 702. The descriptor system 104 may include a plurality of available category lists 132. An example embodiment of creating a descriptor system 104 that may be accessed during the operations at block 702 is described in greater detail below.
  • A descriptor hierarchy 116 (see FIG. 1) may be created at block 704. The descriptor hierarchy 116 may be created by selecting from the plurality of available category lists 132 of a corresponding descriptor type, along with the upward mapping 128 and/or downward mapping 130 associated with the selected category lists 122. The descriptor system 104 thereby acts as a superset of available category lists 132 from which the selected category lists 122 of the descriptor hierarchy 116 are selected.
  • A plurality of descriptor hierarchies 116 may be created for a particular descriptor type from one or more descriptor systems 104.1-104.n (e.g., an original descriptor system 208, an alternate language descriptor system 210, and/or a localized language descriptor system 212). Two or more of the multiple descriptor hierarchies 116 may be linked together (e.g., through pointers) to facilitate version selection during viewing.
  • The category list ordering data 234 may be created at block 706. The category list ordering data 234 may enable one or more alternate orderings of the selected category lists 122 of the descriptor hierarchy 214.
  • Third party mapping tables 236 may optionally be created at block 708. The third party mapping tables 236 may associate descriptor terms used by third parties to the master descriptor codes contained in the master descriptor code list 124.
  • Descriptor relation tables 238 may be created at block 710. In an example embodiment, the descriptor relation tables 238 may be created at a macro and a micro level. At the macro level, a macro-level correlation list may define correlation levels for the items on a single category list. At the micro level, correlation levels may be defined between master descriptor codes. The micro level may then be mapped to the macro level to create the descriptor relation tables 238.
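  • One possible way to derive the macro level from the micro level is to aggregate (here, by averaging) the correlations between master descriptor codes into correlations between their categories. The sketch below is a hypothetical Python illustration with invented codes and weights; the actual aggregation used may differ.

```python
from collections import defaultdict

# Micro level: correlation weights between individual master descriptor codes
# (codes and weights are invented for illustration).
CODE_TO_CATEGORY = {9001: "Blues", 9002: "Blues", 9010: "Rock", 9011: "Rock"}
MICRO = {(9001, 9010): 0.6, (9001, 9011): 0.4, (9002, 9010): 0.5, (9002, 9011): 0.7}

def macro_relations(micro: dict, code_to_cat: dict) -> dict:
    """Average the micro-level correlations into category-to-category correlations."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (code_a, code_b), weight in micro.items():
        key = (code_to_cat[code_a], code_to_cat[code_b])
        sums[key] += weight
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

print(macro_relations(MICRO, CODE_TO_CATEGORY))  # {('Blues', 'Rock'): 0.55}
```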
  • The entity hierarchies 240 may be created at block 712.
  • The information architecture 400 created during the operations at blocks 702-712 may optionally be deployed in the client 102 at block 714.
  • FIG. 8 illustrates a method 800 for creating a descriptor system 104 that may be deployed in the navigation system 100 (see FIG. 1) or another system according to an example embodiment. The created descriptor system 104 may be accessed during the operations at block 702 (see FIG. 7).
  • A master descriptor code list 124 (see FIG. 1) may be generated at block 802. A media descriptor for which the master descriptor code list 124 may be generated may include, but is not limited to, a genre media descriptor, an origin media descriptor, a recording era media descriptor, a composition era media descriptor, a time cycle media descriptor, an artist type media descriptor, a tempo media descriptor, a mood media descriptor, a situation media descriptor, or a topic media descriptor.
  • A descriptor system 104 may be generated from the master descriptor code list 124 at block 804. The descriptor system 104 is generated to include a plurality of available category lists 132. The available category lists 132 of the descriptor system 104 may be generated to include summarized versions of the master descriptor code list 124 with decreasing or increasing granularity. The summarized versions of the master descriptor code list 124 may include a lesser number of list entries.
  • The number of list entries in a category list of the descriptor system 104 and the number of category lists in the descriptor system 104 may be selected to provide flexibility when selecting a number of category lists for deployment as a hierarchy 116 as described in greater detail below. Each category list may optionally be provided with a unique identifier.
  • A plurality of descriptor systems 104.1-104.n may optionally be created for a particular descriptor type, where each descriptor system 104 of the particular descriptor type may have a different number of category lists and each category list may include a different number of list entries. For example, a plurality of descriptor systems 104.1-104.n may be created for different region focuses (e.g., a global descriptor system, a US descriptor system, a Japan descriptor system, etc.), different genre focuses (e.g., general descriptor system, classical descriptor system, internal descriptor system, etc.), different mood focuses (e.g., smooth, excited, reflective, etc.) or the like.
  • Master descriptor codes may be mapped to a master category list 222 of the descriptor system 104 at block 806. The master category list 222 may be the category list of the descriptor system with the highest granularity (e.g., the greatest number of list entries). When the master category list 222 of a descriptor system is less granular than the master descriptor code list 124, more than one master descriptor code may be mapped into a category code of a same list entry.
  • The category lists of the descriptor system 104 may be mapped to one another at block 808. The category codes of the category lists may be mapped to a category code of a parent category list (e.g., a less granular category list) in the descriptor system. The category codes of a parent category list may include one or more mappings from a category code of a child category list.
  • An alternate label version of the descriptor system 104 (e.g., the alternate language descriptor system 210) may be created at block 810. The alternate language descriptor system 210 may include the same category lists of the original descriptor system on which the alternate version is based, but contain the alternate master category lists 224 with an alternate language label substituted for some or all of the category names of the master category lists 222. The alternate language labels may include abbreviated names, short names, extended names, and the like. Multiple alternate language label versions of an original descriptor system 208 may be created.
  • A localized label version of the descriptor system 104 (e.g., the localized language descriptor system 212) may be created at block 812. The localized language descriptor system 212 may include a different label based on localization. For example, a localized label version of the descriptor system 104 may be in Japanese, Korean, German, French, and/or other different languages. A localized label version may be created for an original descriptor system 208 and/or the alternate language descriptor system 210.
  • Ordering value data may optionally be associated with the category lists of a descriptor system at block 814.
  • The information may then be deployed as a descriptor system 104 at block 816.
  • FIG. 9 illustrates a method 900 for defining a navigation package that may be deployed in a client 102 (see FIG. 1) according to an example embodiment.
  • A determination of a template selection for inclusion in a navigation package may be made at decision block 902. The determination may be made from explicit input, presence, data received from a geolocation service, dynamically, or the like. The template selection may be for a selection of a pre-built template at block 904, a dynamically generated template at block 906, or a manually configured template at block 908.
  • The pre-built template that may be selected at block 904 may be a navigation profile based on a common use case. The pre-built template may identify included descriptor hierarchies 116 and other information architecture elements that are associated with the template. The pre-built template may include a unique identifier to enable selection among other available templates. Examples of pre-built templates include a pre-built template for a basic user in the European market who plays classical music in a car using a media player, and a pre-built template for an advanced user in the Japanese market who plays music on a personal computer using a media player. Other pre-built templates may also be available for selection.
  • The dynamically generated template that may be selected at block 906 may be a template dynamically generated as needed based on use case parameters. The template may be dynamically generated from input regions and/or markets for deployment, taste/demographic/psychographic profile(s), application and/or device type, user type, and/or customer/partner preferences. An example embodiment of dynamically generating a template is described in greater detail below.
  • The manually defined template that may be selected at block 908 may be manually defined based on a use case. For example, the manual selection may include:
      • Default/alternate regions for a descriptor system selection,
      • Default/alternate taste profiles (e.g., general, classical view, electronica view) for a descriptor system selection,
      • A number of levels (e.g., category lists) from the descriptor system 104 for a descriptor hierarchy 116,
      • Which category lists are used at each level of the descriptor hierarchy 116,
      • Default and/or alternate languages,
      • Label alternate terms,
      • Label formatting alternatives (e.g., short names, full names, etc.),
      • Text term mappings to master codes and other mappings,
      • Navigational content 120, and/or
      • Navigation trees 314.
  • Upon completion of the selection at block 904, block 906, or block 908, a determination may be made at decision block 910 whether to make another template selection for the navigation package. If a determination is made to make another template selection, the method 900 may return to decision block 902. If a determination is made not to make another selection at decision block 910, the method 900 may proceed to block 912.
  • Descriptor hierarchies 116, mappings and navigational content 120 corresponding to the selected templates may be associated with the navigation package at block 912.
  • Other lists may be associated with the navigation package at block 914. The other lists may include category lists not contained within the descriptor hierarchies 116.
  • FIG. 10 illustrates a method 1000 for dynamically generating a navigation template 300 (see FIG. 3) for use in a navigation package according to an example embodiment. In an example embodiment, the method 1000 may be used to dynamically generate a template during the operations performed at block 906 (see FIG. 9).
  • A region for deployment may be selected at block 1002. The selection of the region may be used in determining which descriptor systems 104.1-104.n will be used by the navigation template 300 and the default and/or alternate language labels that will be used to label the descriptor hierarchy 116.
  • The selection of the region may also be used in determining which navigational content parameters may be used. For example, the navigational content parameters may include basic, audio, graphic, voice, or advanced settings. The selection of the region may be used in determining which text terms may be used in a mapping set. For example, slang terms, regional dialect terms, professional terms, and the like may be used as text terms in the mapping set.
  • Personal profile parameters may be selected at block 1004. The personal profiles may include a taste profile and may be based on age, sex, historical information, explicit input, and the like. The taste profile may be a classical view, an electronica view, a boomer view, a generation X view, and the like.
  • The selection of the personal profile may be used in determining which taste profile variation may be selected. For example, the taste profile variation may be used in determining which view of the descriptor system 104 may be made. The selection of the personal profile may also be used in determining alternate label terms selected for labeling of the descriptor hierarchy 116. For example, the alternate labeling may include slang labels, regional dialect labels, informal labels, old school labels, and the like.
  • The selection of the personal profile may be used in determining the navigational content parameters and the related content parameters. For example, the related content parameters may include basic, audio, graphic, voice, or advanced settings. The selection of the personal profile may also be used in determining the text terms for a mapping set.
  • Application parameters 306 (see FIG. 3) may be selected at block 1006. The selection of the application parameters may be used in determining a number of levels (e.g., a number of category lists) to include in a hierarchy system and label formatting parameters.
  • Device parameters 308 (see FIG. 3) may be selected at block 1008. The selection of the device parameters may be used in determining a number of levels (e.g., a number of category lists) to include in a hierarchy system, the length of the category list used at each level, and label formatting parameters. For example, the category list lengths may be twenty for a single level; ten and seventy-five for two levels; twenty-five, two hundred fifty, and eight hundred for three levels; or five for each of four levels. The selection of the device parameters may also be used in determining the related content parameters.
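  • As an illustration, the device-driven selection of hierarchy depth and per-level category list lengths could be expressed as a simple lookup (hypothetical Python; the assignment of particular devices to particular level configurations is an invented example, though the lengths follow the figures given above).

```python
# Hypothetical lookup of hierarchy depth and per-level category list lengths
# keyed by device type. Which device gets which configuration is an assumption.
LEVEL_CONFIG_BY_DEVICE = {
    "mobile_phone": [20],                # one level of twenty categories
    "portable_media_player": [10, 75],   # two levels
    "pc_media_player": [25, 250, 800],   # three levels
    "vehicle_stereo": [5, 5, 5, 5],      # four levels of five categories each
}

def hierarchy_levels(device_type: str) -> list:
    """Return the category list length to use at each level of the hierarchy."""
    return LEVEL_CONFIG_BY_DEVICE[device_type]

print(len(hierarchy_levels("pc_media_player")))  # 3 levels
print(hierarchy_levels("vehicle_stereo"))        # [5, 5, 5, 5]
```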
  • A user type 310 (see FIG. 3) may be selected at block 1010. The selection of the user type may be used in determining the length of category lists used at each level of the category list.
  • Third party IDs 312 (see FIG. 3) may optionally be selected at block 1012. The selection of one or more third party IDs 312 may be used in determining which partner mapping selection may be used.
  • An identifier may be associated with a navigation tree 314 at block 1014. The navigation tree 314 may include a sequence of category lists that may be taken from selected hierarchies and/or entity types to be available for constructing UI elements based on the selection of application parameters, device parameters, user type, and/or other unique IDs.
  • Upon completion of the selections from the operations of blocks 1002-1012, a navigation template may be definable based on the selections that were made. A unique identifier may optionally be associated with one or more navigation trees at block 1014.
  • A template ID 302 (see FIG. 3) may be generated at block 1016. The identifier (e.g., a unique identifier) may be generated and stored based on the combination of the selected attributes of the navigation template 300. Unique IDs for the component attributes and the navigational profiles that drive the component attributes may also be associated with the identifier of the navigation template 300.
  • In an example embodiment, a current updated version of the corresponding descriptor hierarchies 116, the mappings 236, and the navigational content 120 may be retrieved based on the parameters in the navigation template. Other lists may also be retrieved for the use case.
  • Upon completion of the method 1000, a navigation package including default hierarchies 214, alternate hierarchies 214, mapping tables and navigational content 120 appropriate for the use case may be available for deployment.
  • FIG. 11 illustrates a method 1100 for coding descriptor codes in media objects according to an example embodiment. The media objects may be media available from the media sources 106.1-106.n and/or the media library 108 (see FIG. 1).
  • A determination may be made at decision block 1102 whether to code a media object (e.g., a media item). If a determination is made to code the media object, the media object may be embedded during production at block 1104 and/or during encoding at block 1106.
  • Identifiers and/or master descriptor codes may be embedded (e.g., as metadata containers) in media objects during production (e.g., during production and/or post-production) at block 1104. The embedded data may optionally be encrypted or otherwise obfuscated. The master descriptor codes may be associated with one or multiple entities of the media object.
  • Unique identifiers and/or master descriptor codes may be associated with media objects during encoding (e.g., during encoding and/or other processing step as part of distribution) at block 1106.
  • Examples of identifiers that may be embedded during the operations at block 1104 or associated at block 1106 include DDEX, GRID, MI3P ID, UPC, EAN, ISRC, ISWC, DOI ID, commercial IDs, public IDs (e.g., FreeDB), proprietary recording ID, proprietary album ID, or a proprietary composition ID. Other unique identifiers may also be embedded and/or associated.
  • If a determination is made not to code the media object at decision block 1102 or upon completion of the operations at block 1104 and/or block 1106, the method 1100 may proceed to decision block 1108.
  • At decision block 1108, a determination may be made whether to code another media object. If a determination is made to code another media object, the method 1100 may return to decision block 1102. If a determination is made not to code another media object at decision block 1108, the media objects may be delivered at block 1110.
  • The media objects may be delivered with any metadata that has been embedded or associated. The delivery methods for providing the metadata with the media objects may include as part of a media object's exposed predefined tag field, in a proprietary tag field, as a watermark, as MPEG-7 data, as MPV or HiMAT Tag, in an FM sideband (e.g. RDS), in a satellite radio data channel, in a digital radio data channel, or in an Internet radio data stream (e.g. MPEG ancillary data). Other delivery methods may also be used.
  • FIG. 12 illustrates a method 1200 for preloading a client 102 (see FIG. 1) according to an example embodiment.
  • A media collection may be pre-loaded on the client 102 at block 1202. The media collection may be pre-loaded prior to use by an end-user. The media collection may be the same for all of a number of units, a finite set of target libraries geared for a specific genre, regional, lifestyle, or other interest, or may be personalized based on a personal profile of an end-user. The media collection may include audio files, video files, image files, and the like.
  • Customer related content and information architecture elements may be ingested by the client 102 at block 1204.
  • Related content may be pre-loaded (e.g., prior to use by an end-user) on the client 102 at block 1206. The related content may include the navigational content 120 and/or other content. The other content may include album cover art, artist photos, concert posters, text artist factoids, lyrics, channel/station logos, and the like.
  • The information architecture 400 (see FIG. 4) may be pre-loaded at block 1208. The information architecture may be made locally accessible and subject to customization based on the template ID 302 (see FIG. 3). The template ID 302 may be stored on the client 102 for future use. The preloaded information architecture 400 may include genre hierarchies, era hierarchies, origin hierarchies, navigation trees 404, mappings 236, ordering data, and station/channel directories.
  • FIG. 13 illustrates a method 1300 for loading an information architecture 400 (see FIG. 4) according to an example embodiment.
  • A request may be received for a navigation package including an information architecture 400 at block 1302. The request may also be made by obtaining a device ID, generating a default template ID using the device ID, and/or issuing a request for a navigation package from the default template ID. Other methods for requesting the navigation package may also be used.
  • Predefined descriptor hierarchies 116 may be accessed at block 1304. A default descriptor hierarchy 116 and optionally alternate descriptor hierarchies 116 may be accessed for use in the information architecture 400. The descriptor hierarchies 116 may optionally be accessed based on the template ID 302. The alternate hierarchies 116 may be accessed from one or more different descriptor systems 104.1-104.n.
  • By way of example, the predefined descriptor hierarchies 116 that may be accessed include one predefined and one or several alternate genre hierarchies, predefined and alternate origin hierarchies, predefined and alternate artist type hierarchies, predefined and alternate recording era hierarchies, predefined and alternate composition era hierarchies, predefined and alternate composition mood hierarchies, predefined and alternate tempo or rhythm hierarchies, predefined and alternate composition theme/topic hierarchies, and the like.
  • Predefined navigation trees 314 may be accessed at block 1306. A default and optionally alternate navigation trees may be accessed for use in the information architecture 400. The navigation trees 314 may be accessed from one or more descriptor systems 104.1-104.n. The navigation trees may optionally be accessed based on the template ID 302.
  • By way of example, the navigation trees 314 may include genre/era/track, genre/mood/tempo/recording, artist/mood/year/track, and genre/album/mood/track. Other navigation trees 314 may also be used.
  • Other category lists 406 may be accessed at block 1308. The other category lists 406 may be used to generate alternate navigation trees 314, descriptor hierarchies 116, faceted navigation, or other local navigation options that have not been pre-defined. In an example embodiment, a descriptor system 104 may be retrieved to provide increased flexibility.
  • By way of example, the other category lists 406 may include a selected genre category list from a genre descriptor system, a selected origin category list from an origin descriptor system, a selected artist type category list from an artist type descriptor system, a selected recording era category list from a recording era descriptor system, a selected composition era category list from a composition era descriptor system, a selected mood category list from a mood descriptor system, a tempo category list from a tempo descriptor system, a selected theme/topic category list from a theme/topic descriptor system, and the like. The selected category lists may be supported by the client.
  • The default mappings 408 may be accessed at block 1310.
  • The descriptor relational tables 238 may be accessed at block 1312. A filter may optionally be applied to the descriptor relational tables 238 that are accessed to obtain relational data for relationships with weightings above a threshold level.
  • Navigational content 120 may be accessed at block 1314. The default navigational content may be accessed using a navigational content ID.
  • The accessed material may then be loaded in the client 102 as the information architecture 400 at block 1316.
  • FIG. 14 illustrates a method 1400 for coding media items according to an example embodiment. The media items 134 may be coded when a media collection is initially provided and/or during a media collection update on the client 102.
  • Media items may be provided at block 1402. The media items may be loaded and/or transmitted into the client 102. An example embodiment of providing media items is described in greater detail below.
  • The provided media items may be identified (e.g., through recognition) at block 1404. An example embodiment of recognizing the provided media items is described in greater detail below.
  • The media items may be mapped to entities at block 1406. An example embodiment of mapping media items to entities is described in greater detail below.
  • Codes and content IDs may be received for the entities at block 1408. An example embodiment of receiving codes and content IDs is described in greater detail below.
  • Indices may be created at block 1410. The index may be into a unified data system. The index may physically reside on the client, on another client in a local network, on a remote server, or distributed across more than one client. The indices may be created dynamically based on a client ID, a user ID, and/or a combination of one or more parameters indicating a type of indexing that is relevant for a particular user. The indices may optionally indicate which one or more users are associated with a media collection.
  • FIG. 15 illustrates a method 1500 for loading content from a plurality of sources according to an example embodiment.
  • A determination may be made at decision block 1502 whether to load content from a service/directory or from a device. The content may include media objects in a variety of forms including audio, video, image, text, and spoken word audio. For example, the formats of the media objects may include WAV, MP3, AAC, WMA, OggVorbis, FLAC, analog audio or video, MPEG2, WMV, QUICKTIME, JPEG, GIF, plaintext, MICROSOFT WORD, and the like.
  • The content may be loaded from a variety of media including, by way of example, optical media such as audio CD, CD-R, CD-RW, DVD, DVD-R, HD-DVD, Blu-Ray, hard disk drive (HDD) and other magnetic media, solid state media including SD, MEMORY STICK, and flash memory, stream, and other IP or data transport. The content may reside locally or be available through a tether using connections such as LAN, WAN, Wifi, WiMax, cellular networks, or the like.
  • The media objects may be taken from a variety of objects including an intra-media item segment, a media item (e.g., audio, video, photo, image, text, etc.), a program, a channel, a collection, or a playlist.
  • If a determination is made to load from a service/directory, one or more media items may be loaded from the service/directory at block 1504. The service/directory may include local content, AM/FM/HD radio, satellite radio, Internet on-demand and streaming radio, web page, satellite and cable TV, IPTV, RSS and other web data feeds, and other web services.
  • If a determination is made to load from a device, one or more media items may be loaded from the device at block 1506. The devices may include, by way of example, a PC, a home media server, an auto stereo, a portable media player, a mobile phone, a PDA, a digital media adapter, a connected CD/DVD/Flash player/changer, a remote control, a connected TV, a connected DVD, in-flight entertainment, or a location based system (e.g., at a club, restaurant, or retail). Other devices may also be used.
  • Upon completion of the operations at block 1504 and/or 1506, a determination may be made at decision block 1508 as to whether additional media items may be loaded. If additional media items may be loaded, the method 1500 may return to decision block 1502. If no additional media items may be loaded at decision block 1508, the method 1500 may terminate.
  • FIG. 16 illustrates a method 1600 for content recognition according to an example embodiment. In an example embodiment, the method 1600 may be performed at the block 1404 (see FIG. 14).
  • A determination may be made at decision block 1602 as to how identifier recognition may be performed on content. Identifier recognition may be performed by extracting the identifiers from the content at block 1604, recognizing the content using one or more techniques at block 1606, performing a lookup using a mapping table at block 1608, and/or utilizing an embedded or associated identifier with the content at block 1610.
  • The identifiers extracted during the operations at block 1604 may include, but are not limited to, TOCs, TOPs, audio file FP, audio stream FP, digital file/file system data (e.g., file tags, file names, folder names, etc.), image FP, voice FP, embedded entity IDs, embedded descriptor IDs, and a metadata data stream.
  • The one or more techniques used for recognizing the content at block 1606 may include optical media identifiers (e.g., TOCs or TOPs), digital file/stream identifiers (e.g., audio file FP, audio stream FP, or metadata data stream), digital file and digital system data (e.g., file tags, file names, or folder names), image recognition, voice/speech recognition, video FP recognition, text analysis, melody/humming recognition, and the like.
  • An example of a digital fingerprint technique that may be used during the operations at block 1606 to identify digital content is robust hashing. For example, in mono audio, a single signal may be sampled. If the audio is stereo, either hash signals may be extracted for the left and the right channels separately, or the left and the right channels are added prior to hash signal extraction. A short piece or segment of audio (e.g., on the order of seconds) may be used to perform the analysis. As audio can be seen as an endless stream of audio samples, the audio signals of an audio track may be divided into time intervals or frames to calculate a hash word for every frame. However, any known technique that may be used to identify content from a segment or portion of content (e.g., the actual audio content or video content) may also be used. Thus, in an example embodiment, the content may be identified independent of any watermark, tag, or other identifier, but rather from the actual content data (actual video data or actual audio data).
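  • A minimal sketch of frame-based robust hashing in the spirit of the description above (hypothetical Python using NumPy; the frame length, hop size, band count, bit derivation, and synthetic test signal are illustrative choices and not the parameters of any particular fingerprinting system):

```python
import numpy as np

def hash_words(samples: np.ndarray, frame_len: int = 2048, hop: int = 1024,
               n_bands: int = 17) -> list:
    """Compute one hash word per frame from changes in band-energy differences."""
    words, prev_energies = [], None
    window = np.hanning(frame_len)
    for start in range(0, len(samples) - frame_len, hop):
        frame = samples[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        # Split the spectrum into coarse bands and sum the energy in each band.
        energies = np.array([band.sum() for band in np.array_split(spectrum, n_bands)])
        if prev_energies is not None:
            # One bit per adjacent band pair: sign of the change in energy difference.
            diffs = (energies[1:] - energies[:-1]) - (prev_energies[1:] - prev_energies[:-1])
            bits = (diffs > 0).astype(int)
            words.append(int("".join(str(b) for b in bits), 2))
        prev_energies = energies
    return words

if __name__ == "__main__":
    # A few seconds of synthetic audio stand in for a real mono recording segment.
    sample_rate = 22050
    t = np.arange(5 * sample_rate) / sample_rate
    audio = 0.5 * np.sin(2 * np.pi * 440.0 * t) + 0.1 * np.random.randn(len(t))
    print(len(hash_words(audio)), "hash words")
```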
  • The embedded and/or associated identifiers that may be used in performing a lookup using a mapping table at block 1608 may include a UPC, an ISRC, an ISWC, a GRID/MI3P/DDEX, a third party ID, a SKU number, a watermark, HiMAT, MPV, and the like.
  • Upon completion of the operations at blocks 1604, 1606, 1608, and/or 1610, a determination may be made at decision block 1612 as to whether to perform text analysis recognition. If a determination is made to perform text analysis recognition, a text match of entity names may be performed at block 1614 and/or mapping tables may be utilized at block 1616.
  • The text match of entity names to a more normalized entity ID at block 1614 may include artist name, album name, alternate spellings, abbreviations, misspellings, and the like. The mapping tables may be utilized at block 1616 to map available textual descriptors from an available entity (e.g., album, artist, or track) to a normalized descriptor ID. The available textual descriptors may include a genre name text, a mood name text, a situation name text, and the like.
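  • A minimal sketch of the text matching and mapping-table lookup described above follows. The alias and genre tables, their contents, and the function names are illustrative assumptions rather than data from this disclosure.

```python
# Illustrative normalization of free-text metadata to IDs via mapping tables
# (blocks 1614 and 1616). Table contents are invented examples.

ARTIST_ALIASES = {   # alternate spellings, abbreviations, misspellings -> normalized entity ID
    "the beatles": 1001,
    "beatles": 1001,
    "beetles": 1001,
}

GENRE_TEXT_MAP = {   # available textual descriptors -> normalized descriptor ID
    "rock & roll": 501,
    "rock'n'roll": 501,
    "rock": 500,
}

def normalize_artist(name):
    """Match an artist name string to a normalized entity ID, if known."""
    return ARTIST_ALIASES.get(name.strip().lower())

def normalize_genre(text):
    """Map an available genre name text to a normalized descriptor ID, if known."""
    return GENRE_TEXT_MAP.get(text.strip().lower())

if __name__ == "__main__":
    print(normalize_artist("Beetles"))      # -> 1001
    print(normalize_genre("Rock'N'Roll"))   # -> 501
```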
  • If text analysis recognition is not to be performed or upon completion of the operations at block 1614 and/or block 1616, the most granular descriptive data may be accessed at block 1618. The most granular descriptive data may be embedded in a media item and/or may be retrieved from the database of descriptors associated with the identifier.
  • The most authoritative descriptive data may be accessed at block 1620. The most authoritative descriptive data may be embedded in a media item and/or may be retrieved from the database of descriptors associated with the identifier.
  • Higher level descriptors may optionally be created at block 1622. The higher level descriptors may be created by extracting features, and classifying and creating one or more mid-level or high-level descriptors.
  • A taste profile may optionally be generated at block 1624. The taste profile may be generated by summarizing the media collection and using the summary to generate the taste profile (e.g., a preference of a user of the client 102).
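  • A minimal sketch of summarizing a media collection into a taste profile follows, assuming each media item already carries a genre descriptor; the weighting scheme and field names are illustrative assumptions.

```python
# Illustrative taste-profile generation (block 1624): summarize descriptor counts
# across the media collection into relative weights.
from collections import Counter

def taste_profile(media_items):
    """Return relative genre weights (0..1) summarizing the collection."""
    counts = Counter(item["genre"] for item in media_items if "genre" in item)
    total = sum(counts.values()) or 1
    return {genre: count / total for genre, count in counts.items()}

if __name__ == "__main__":
    collection = [{"genre": "Rock"}, {"genre": "Rock"}, {"genre": "Jazz"}]
    print(taste_profile(collection))   # e.g. {'Rock': 0.67, 'Jazz': 0.33} (approximately)
```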
  • FIG. 17 illustrates a method 1700 for mapping content IDs according to an example embodiment. In an example embodiment, the method 1700 may be performed at block 1406 (see FIG. 14).
  • Media identifiers may be mapped to entities at block 1702. The media identifiers may optionally be mapped to normalized, semantically meaningfully core entities. The media identifiers that may be mapped include, but are not limited to, a lyric ID, a composition ID, a recording ID, a track ID, a disc ID, a release ID, an edition ID, an album ID, a series ID, a recording artist ID, a composer ID, a playlist ID, a film/TV episode ID, a photo ID, or a text work ID.
  • Parent and child entities may be mapped at block 1704. The mapping may be a local map to hierarchically-related, semantically meaningful entities. Examples of mappings between parent and child entities include mappings between a composition ID, a recording ID, and a track ID; a lyric ID, a recording ID, and a track ID; a lyric ID, a composition ID, and a track ID; a lyric ID, a composition ID, and a recording ID; a track ID, a release ID, an edition ID, an album ID, and a series ID; a track ID, a disc ID, an edition ID, an album ID, and a series ID; a track ID, a disc ID, a release ID, an album ID, and a series ID; a track ID, a disc ID, a release ID, an edition ID, and a series ID; and a track ID, a disc ID, a release ID, an edition ID, and an album ID. Other mappings may also be used.
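  • A minimal sketch of a parent/child entity map such as the track/disc/release/edition/album/series chain above follows. The IDs, the dictionary layout, and the `ancestors` helper are illustrative assumptions.

```python
# Illustrative parent/child entity map (block 1704). IDs and relationships are examples.

PARENT_OF = {
    # child ID       : parent ID
    "track:9001":    "disc:3001",
    "disc:3001":     "release:2001",
    "release:2001":  "edition:1501",
    "edition:1501":  "album:1001",
    "album:1001":    "series:0101",
}

def ancestors(entity_id):
    """Walk up the hierarchy from a child entity to its root."""
    chain = []
    while entity_id in PARENT_OF:
        entity_id = PARENT_OF[entity_id]
        chain.append(entity_id)
    return chain

if __name__ == "__main__":
    print(ancestors("track:9001"))
    # ['disc:3001', 'release:2001', 'edition:1501', 'album:1001', 'series:0101']
```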
  • Relationally related entities may be mapped at block 1706. For example, segments may be mapped. An example of a map for relationally related entities includes a map between disc ID, release ID, edition ID, album ID, and series ID.
  • FIG. 18 illustrates a method 1800 for receiving master descriptor codes and content IDs for content according to an example embodiment. In an example embodiment, the method 1800 may be performed at block 1408 (see FIG. 14).
  • Descriptor codes may be received for the entities of the content at block 1802. The received descriptor codes may be granular, ordered or weighted, factual and descriptive codes. The descriptor codes may be received from one or a plurality of sources locally and/or over a network.
  • Unrecognized descriptor IDs may be optionally mapped at block 1804. The mapping may translate the unrecognized descriptor code into a normalized descriptor code. Examples of maps that may be used to map the descriptor IDs include a genre map, an origin map, a recording era map, a composition era map, a mood map, a tempo map, and a situation map.
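  • A minimal sketch of translating unrecognized descriptor codes into normalized codes with a genre map follows; the vendor codes, master codes, and fallback value are invented examples, not codes defined by this disclosure.

```python
# Illustrative genre map (block 1804): unrecognized third-party descriptor codes
# translated into normalized master descriptor codes. All codes are invented examples.

GENRE_MAP = {             # source descriptor code -> normalized master descriptor code
    "vendorA:rk":  "MASTER:ROCK",
    "vendorA:jz":  "MASTER:JAZZ",
    "vendorB:010": "MASTER:ROCK",
}

def normalize_descriptor(code, fallback="MASTER:UNCLASSIFIED"):
    """Translate an unrecognized descriptor code into a normalized descriptor code."""
    return GENRE_MAP.get(code, fallback)

if __name__ == "__main__":
    print(normalize_descriptor("vendorB:010"))   # MASTER:ROCK
    print(normalize_descriptor("vendorC:xyz"))   # MASTER:UNCLASSIFIED
```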
  • Descriptor IDs may optionally be mapped to entities without descriptor IDs at block 1806. For example, when a descriptor has not been mapped to parent and child entities via cascading prior to retrieval, a local mapping to an entity without a direct descriptor ID may be performed after retrieval of a parent or child descriptor ID.
  • Alternative billings and collaboration artist IDs may be received at block 1806. Mappings of alternative billing IDs to primary artist IDs and mappings of collaborations to component individual contributors may be received.
  • Navigational and related content IDs may be received at block 1808. Navigational and related content IDs may include, by way of example, cover art, an artist photo, an artist bio, an album review, a track review, a label logo, a track audio preview, a track audio, a palette, or phonetic data.
  • Related content may be received at block 1810. Related content may be received by using a related content ID to request delivery and/or unlocking of related content from local and/or remote sources (e.g., a store).
  • FIG. 19 illustrates a method 1900 for utilizing a navigation package in a client 102 (see FIG. 1) according to an example embodiment. One or more default navigation views may be generated at block 1902. One or more alternate navigation views may be generated at block 1904. One or more personalized navigation views may be generated at block 1906. One or more contextually relevant views may be generated at block 1908. An example embodiment for creating the views that may be generated during the operations of the method 1900 is described in greater detail below.
  • FIG. 20 illustrates a method 2000 for creating a navigational view according to an example embodiment. In an example embodiment, the method 2000 may be performed at block 1902, block 1904, block 1906, and/or block 1908.
  • A single descriptor type hierarchy 116 may be created at block 2002. Default templates may be used to organize media items into one or more single descriptor type hierarchies that may optionally be normalized (e.g., a normalized single descriptor type hierarchy). Options for the single descriptor type hierarchy may include a number of levels, an average number of categories under each node at each level, and/or the entity type contained in the lowest level categories (e.g., track, artist, album, or composition).
  • Examples of single descriptor type hierarchies 116 include meta-genre/genre/sub-genre:track, meta-mood/mood/sub-mood:track, meta-origin/origin/sub-origin:track, meta-era/era/sub-era:track, meta-type/type/sub-type:track, meta-tempo/tempo/sub-tempo:track, meta-theme/theme/sub-theme:track, meta-situation/situation/sub-situation:track, and meta-work type/work type/sub-work type:track. Each of the single descriptor hierarchies may optionally be identified with a unique identifier.
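  • A minimal sketch of organizing tracks into a meta-genre/genre/sub-genre:track hierarchy follows. The nested-dictionary representation and the sample tracks are illustrative assumptions.

```python
# Illustrative single descriptor type hierarchy (block 2002):
# meta-genre/genre/sub-genre:track, represented as a nested dictionary.

def build_genre_hierarchy(tracks):
    """Group tracks under meta-genre -> genre -> sub-genre category levels."""
    tree = {}
    for track in tracks:
        meta, genre, sub = track["meta_genre"], track["genre"], track["sub_genre"]
        tree.setdefault(meta, {}).setdefault(genre, {}).setdefault(sub, []).append(track["title"])
    return tree

if __name__ == "__main__":
    tracks = [
        {"title": "Song A", "meta_genre": "Rock", "genre": "Alternative", "sub_genre": "Grunge"},
        {"title": "Song B", "meta_genre": "Rock", "genre": "Classic Rock", "sub_genre": "Arena Rock"},
    ]
    print(build_genre_hierarchy(tracks))
```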
  • Hierarchical views may be created using normalized metadata at block 2004. The normalized metadata from recognition may be used to organize media items into summarized, normalized single-entity hierarchical views. The hierarchical views may be a single level or multiple levels. For example, a main artist may be displayed at a top level and an alternate billing level may be displayed when applicable.
  • By way of example, the hierarchical views may include recording artist:track, composer:track, album:track, recording:track, and composition:track.
  • Hierarchical views may be created using existing metadata and/or entity hierarchies from templates at block 2006. The existing metadata and/or entity hierarchies from the navigation templates 300 may be used to organize media items into summarized, normalized single-entity hierarchical views. For example, the views may include artist/album:track or composer/composition:recording. Other views may also be created.
  • Navigation trees 314 may be created at block 2008. The default navigation template 300 may be used to organize media items into one or more navigation trees 314. The navigation trees 314 may optionally be normalized and may include a unique identifier for subsequent identification.
  • A particular navigation tree 314 may be defined by the number of levels of the tree, the container (e.g., a descriptor category or entity type) used at each level of the tree, the number of categories for each category level of the tree (e.g., which category list is used for the levels using category lists), and the entity type or types that are ultimately organized by the categories of the tree at the lowest level; alternatively, the navigation tree may consist only of levels of descriptors. The particular navigation tree 314 may be further defined by the mappings from the master descriptor list to the category lists and the mappings from each category list to the other category lists.
  • Examples of navigation trees 314 may include genre/era:track, genre/mood/tempo:recording, artist/mood/year:track, genre/year:news article, genre/origin, and the like.
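  • A minimal sketch of a navigation tree 314 definition capturing the number of levels, the container used at each level, the category list for category levels, and the entity type at the lowest level follows. The dataclass fields and category-list names are illustrative assumptions.

```python
# Illustrative definition of a navigation tree 314: levels, per-level container,
# category list for category levels, and the entity type organized at the lowest level.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TreeLevel:
    container: str                        # "descriptor" or "entity"
    category_list: Optional[str] = None   # e.g. "genre_25" (used for descriptor levels)
    entity_type: Optional[str] = None     # e.g. "track" (used for the lowest entity level)

@dataclass
class NavigationTree:
    tree_id: str
    levels: List[TreeLevel] = field(default_factory=list)

# Example: a genre/era:track navigation tree.
genre_era_track = NavigationTree(
    tree_id="nav:genre-era-track",
    levels=[
        TreeLevel(container="descriptor", category_list="genre_25"),
        TreeLevel(container="descriptor", category_list="era_10"),
        TreeLevel(container="entity", entity_type="track"),
    ],
)

if __name__ == "__main__":
    print(genre_era_track)
```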
  • FIG. 21 illustrates a method 2100 for updating navigational views according to an example embodiment. In an example embodiment, the navigational views generated during the method 2000 (see FIG. 20) may be updated.
  • A determination may be made at decision block 2102 whether a parameter was received. If a determination is made that the parameter was received, the parameter may be processed to enable control at block 2104. The parameter may be received to allow for control over category list and entity item ordering at each level. For example, the ordering options of the API input may include alpha-numerical, date/time, number of sub-categories, number of items in container, editorial semantic relationship A (e.g., similarity clustering), editorial semantic relationship B (e.g., linear sequence closest fit), editorial: importance or quality, popularity (e.g., may be static or dynamically generated), custom, and the like. If a determination is made that the parameter was not received at decision block 2102, the method 2100 may proceed to decision block 2106.
  • At decision block 2106, a determination may be made as to whether a display selection has been received. If a determination is made that the display selection has been received, the display selection may be processed at block 2108. The display selection may specify (e.g., through an API of the client 102) whether to display metadata as-is from a media item or, alternatively, to display a higher-level category item label from a selected hierarchy category list for a descriptor/category field. The display selection may optionally be used to override parameters inherited from a global profile for a device or application. If a determination is made that the display selection has not been received at decision block 2106, the method 2100 may proceed to decision block 2110.
  • A determination may be made at decision block 2110 whether to perform a conversion. If a determination is made to perform a conversion (e.g., based on instructions received through the API), the conversion may be performed at block 2112. The conversion may be to convert text into alternate forms of presentation (e.g., TLS) for a name/label field. If a determination is made not to perform the conversion at decision block 2110, the method 2100 may proceed to block 2114.
  • The unified library of media items may be navigated at block 2114. The unified library may include the media items 134 available from the plurality of media sources 106.1-106.n. The unified library may be navigated from the client 102 and optionally another client on the network.
  • At decision block 2116, a determination may be made whether to modify the navigation. If a determination is made to modify the navigation, the method 2100 may return to decision block 2102. If a determination is made not to modify the navigation at decision block 2116, the method 2100 may proceed to decision block 2118.
  • A determination may be made at decision block 2118 whether to further navigate the unified library of media items. If a determination is made to further navigate the unified library, the method 2100 may return to block 2114. If a determination is made not to further navigate the unified library, the method 2100 may terminate.
  • FIG. 22 illustrates a method 2200 for presenting a navigation view on a client 102 (see FIG. 1) according to an example embodiment. The method 2200 may be used to enable multiple descriptor hierarchies 116 to be available in a single client (e.g., a device and/or application). The multiple hierarchies 116 may be built from the same master descriptor code. For example, both a gender-oriented and group-type oriented view of artist type master descriptor codes and both a recording-era oriented and a classical composition oriented view of era master descriptor codes may be presented.
  • Global settings may be defined per client 102 at block 2202 and per navigation tree at block 2204. Settings may also be defined per navigation tree level at block 2206.
  • Examples of settings that may be defined during the operation of the method 2200 are as follows (a brief configuration sketch in code follows the outline below):
  • Define Settings Per Navigation Tree Level
      • I. Define Navigation Tree Levels: Descriptor Levels
        • System
        • Hierarchy
        • Category List
      • II. Define Navigation Tree Levels: Entity Levels
        • Entity Type
        • Hierarchy
        • Category List
      • III. Navigation Tree Item Inclusion Options API Input (e.g., to what is a specific tree providing access)
        • Select Object Type(s) to be Included
        • Select Service(s) to be Included
        • Select Format(s) to be Included
        • Select Media(s) to be Included
        • Select Devices(s) to be Included
        • Select Descriptor Value(s) associated with Specific Entity Level to be Included
        • Select Most Granular Entities to Be included
      • IV. Entity Item Categorization Options API Input
        • Display Entity Item only under Primary Category that is coded to it or display under all Categories that are coded to it
        • Display Category and Entity Item if Applicable Descriptor is Coded to Which Entity Item Level(s)
    Parameters Input to the API Allow for the Control Over Category Display
      • V. Category Display Options API Input (yes/no)
        • Display “All Other/General” Child Categories
        • Display child Category When only a single Child Category Exists
        • Move Child Category Label to Parent Level when only a single Child Category Exists
        • Display Child Category only under Primary Parent, or display under all Parents
        • Skip Identical Categories at Adjacent Levels
        • Display Empty Categories
      • VI. Category Name Display API Input
        • Language
        • Use Short Terms
        • Use Alternate Labels
      • VII. Entity List Grouping API options
        • In Artist List, Group Artists appearing on Various Recording Artist compilations under single Category
        • In Artist List, do not list Artists appearing on Various Recording Artist compilations separately
        • In Composer List, Group Composers appearing on Various Composer compilations under single Category
        • In Composer List, do not list Composers appearing on Various Composer compilations separately
        • In Artist List, Group Artists with only a single track under single Category
        • In Artist List, Alternate Billings of Artist Name all Under Main Artist Name
        • In Artist List, List all Alternate Billings of Artist Name Separately
        • In Artist List, List all Alternate Billings of Artist Name Separately, and under Main Artist
        • For Collaborations, List only the Collaboration Name
        • For Collaborations, List the Collaboration name and the Primary Artist Name
        • For collaborations, List the Collaboration Name, Primary Artist Name and Secondary Artist Name
        • In Album List, group single-artist Collection/Anthology compilations under a single category
      • VIII. Item Name Profanity Filter Options API Input
        • Remove Item from Item List
        • Display with Flagged word(s) obscured
        • Display with all but first letter of flagged word(s) obscured
        • Display all Flagged words, but flag
        • No special Treatment
      • IX. Entity List Duplicate Item Handling
        • Flag duplicates based on Fingerprint
        • Flag duplicates based on TOC
        • Flag duplicates based on Filename
        • Flag duplicates based on Track Name+Artist Name
        • Flag duplicates based on Track Name+Artist Name+Album Name
        • Flag duplicates based on File Hash
        • For each duplicate:
          • Display both, but flag
          • Display only one
      • X. Item Name List Display Options API Input
        • Display Short Names
        • Display First, Last
        • Display Last, First
        • Display “Name, The”
        • Display “The Name”
      • XI. Detail Item Display
        • Only Display Movement Name in Track Field for TLS Classical
        • Display Composer Name Last in Track for TLS Classical
        • Do not Display Soloist in Artist Field for TLS Classical
        • Display Short Names
        • Display First, Last
        • Display Last, First
        • Display “Name, The”
        • Display “The Name”
    Parameters Input to the API Allow for the Control Over Category List Ordering at the Levels
      • XII. Category Name Ordering Options API Input (options may be selected for a sort number (e.g., first and second sort) and ascending/descending)
        • Alpha-Numerical
        • Date/Time
        • Number of Sub-Categories
        • Number of Items in Container
        • Duration of Items in Container
        • Editorial Semantic Relationship A (e.g., similarity clustering)
        • Editorial Semantic Relationship B (e.g., linear sequence closest fit)
        • Editorial: Importance or Quality
        • Popularity (may be dynamically generated)
        • Custom—Developer
        • Custom—End-User
        • Order based on other descriptor of Category
    Parameters Input to the API Allow for the Control Over Entity Item Ordering at Each Level
      • XIII. Item Ordering Options API Input (options may be selected for a sort number (e.g., first and second sort) and ascending/descending)
        • Alpha-Numerical (based on as-displayed, First Last Option, Preceding Article Option)
        • Flags (e.g. Release Type: Album, Single, Collection)
        • Chronological
        • Popularity (Can be dynamically generated)
        • Custom
        • Order based on other descriptor/attribute of Entity
      • XIV. Navigational content Items to Display API Input—Navigational content items can be dynamic by being updated in real-time or by periodic updates upon connection with a server (yes/no)
        • Album Cover Art
        • Artist Photo
        • Genre Icon
        • Era Icon
        • Origin Icon/Flag
        • Venue Logo
        • Time Cycle Icon
        • Mood Icon
        • Custom
        • Tempo Icon
        • Instrument Icon
        • Situation Icon
        • Theme Icon
        • Video Still
        • Moving Thumbnail
        • Thumbnail Sequence
        • Descriptor/Entity Type Icon
        • Audio Description of Category, Entity Item
        • Custom
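  • A minimal sketch of how a client might hold and apply a few of the per-level settings enumerated in the outline above follows. The field names, defaults, and the `apply_settings` helper are illustrative assumptions, not the API defined by this disclosure.

```python
# Illustrative container for a handful of the per-level API settings listed above.
from dataclasses import dataclass

@dataclass
class TreeLevelSettings:
    category_ordering: str = "alpha-numerical"   # XII. category name ordering
    item_ordering: str = "popularity"            # XIII. entity item ordering
    display_empty_categories: bool = False       # V. category display options
    profanity_filter: str = "obscure_flagged"    # VIII. item name profanity filter
    duplicate_handling: str = "fingerprint"      # IX. duplicate item handling
    language: str = "en"                         # VI. category name display

def apply_settings(settings, categories):
    """Apply a subset of the display options to a list of (name, item_count) categories."""
    if not settings.display_empty_categories:
        categories = [c for c in categories if c[1] > 0]
    if settings.category_ordering == "alpha-numerical":
        categories = sorted(categories, key=lambda c: c[0].lower())
    return categories

if __name__ == "__main__":
    print(apply_settings(TreeLevelSettings(), [("Rock", 12), ("Jazz", 0), ("Blues", 3)]))
```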
  • FIG. 23 illustrates a method 2300 for altering navigation on a client 102 (see FIG. 1) according to an example embodiment.
  • A determination may be made at decision block 2302 whether to use personalized navigation. If a determination is made to use personalized navigation, personalized navigation may be used at block 2304.
  • Navigation may be personalized by providing tools for user tagging and by adapting tagging menus based on a profile. End-users may be able to personalize by manually overriding category names, entity item names, and category groupings.
  • Navigation may also be personalized by dynamic configuration of hierarchies 116 based on collection and/or user profile. An end-user may also manually select and/or construct a hierarchy 114 to be used for navigation. Content may be flagged and/or tagged by the user at the client 102 level and/or file level.
  • Preferences may be stored remotely to be retrieved by a different client (e.g., application or device) of the user and/or may be transferred directly from one client to another via a wired or wireless connection.
  • If a determination is made not to use personalized navigation at decision block 2302, the method 2300 may proceed to decision block 2306.
  • At decision block 2306, a determination may be made whether to use dynamic personalization. If a determination is made to use dynamic personalization, dynamic personalization may be used at block 2308.
  • Assembled personal profile data may be accessed automatically to provide dynamic personalized navigation. An implicit profile may be overridden with any explicit input provided by the user. Sources used for dynamic personalization may include explicit input, an index of the user media collection including counts of media items per entity and per category, or an activity log that tracks user activity across a current device or multiple devices of the user and consists of plays, clicks, time engaged, and/or other factors.
  • The output of the dynamic personalization may be a combined profile. The profile may indicate a relative level of interest/importance of category or entity item to the end-user and may be stored remotely and/or locally.
  • An alternative navigational tree 314 may be dynamically constructed based on profile input. A determination may be made as to how many levels to step up or down for sub-sections of the list in order to re-tune. An example of dynamically constructing the alternative navigation tree 314 follows (a code sketch of this re-tuning appears after the example):
      • A standard view may present a U.S. default 25-Item Genre Category List at Top level;
      • A customized 25-item list view may be presented instead by looking at the next higher level in the hierarchy specified by the template (e.g., a 6-item list). The profile may indicate that “rock” is the user's most preferred genre of those listed at the higher level.
      • The new 25-Item list may then be generated as follows: First, the remaining categories (e.g., 5 categories) from the higher list (e.g., 6 items) are used to cover all music other than Rock. This leaves twenty rock categories to be populated. These are taken by identifying the 19 most preferred categories from the rock section of the next lower level in the hierarchy (which, say, includes 30 Rock Categories), each of which is listed. The main category labelled “more rock” may contain all of the remaining rock categories from the next lower level.
      • The next level down under the “More Rock” category may either show the remaining 11 rock categories from the next lower level, or just show a single list of all sub-categories that would normally be displayed at that level.
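  • A minimal sketch of the category-list re-tuning in the example above follows, assuming a 6-item higher-level list, a 30-item lower-level rock list, and a simple preference score per category; all names and scores are illustrative assumptions.

```python
# Illustrative re-tuning of a 25-item genre list around the user's preferred higher-level genre.

def retune_category_list(higher_level, lower_level_by_parent, preference, target_size=25):
    """Keep the non-preferred higher-level categories, expand the preferred one with its
    most preferred children, and add a 'More ...' catch-all for the remainder."""
    preferred = max(higher_level, key=lambda c: preference.get(c, 0))
    others = [c for c in higher_level if c != preferred]   # e.g. 5 non-Rock categories
    slots = target_size - len(others) - 1                  # leave one slot for the catch-all
    children = lower_level_by_parent.get(preferred, [])
    ranked = sorted(children, key=lambda c: preference.get(c, 0), reverse=True)
    shown = ranked[:slots]                                 # e.g. the 19 most preferred rock categories
    catch_all = f"More {preferred}"                        # holds the remaining children
    return others + shown + [catch_all]

if __name__ == "__main__":
    higher = ["Rock", "Jazz", "Classical", "Electronic", "Country", "World"]
    lower = {"Rock": [f"Rock Sub-genre {i}" for i in range(1, 31)]}  # say, 30 rock categories
    prefs = {"Rock": 1.0, **{f"Rock Sub-genre {i}": 1.0 / i for i in range(1, 31)}}
    view = retune_category_list(higher, lower, prefs)
    print(len(view), view[:8])   # 25 items: 5 other genres, 19 rock categories, "More Rock"
```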
  • Composite graphical, audio, video or textual navigational icons may be dynamically selected for personalization based on current contents of the container.
  • If a determination is made not to use dynamic personalization at decision block 2306, the method 2300 may proceed to decision block 2310.
  • A determination may be made at decision block 2310 whether to use contextual adaptation. If a determination is made to use contextual adaptation, contextual adaptation may be used at block 2312.
  • Contextual adaptation may use real-time global contextual adaptation and/or real-time personal contextual adaptation. The data may be accessible by a client 102 either locally or remotely.
  • Examples of real-time global contextual adaptation include use of location (GPS, cell), time of day, week, season, year, date, holidays, weather, acceleration, speed, temperature, orientation, object presence (using technologies like RFID, BLUETOOTH), and public personal presence. Examples of real-time personal contextual adaptation include use of personal location bookmarks (using data provided by GPS or cellular networks), personal time of day, week, year preference, birthdays, anniversaries, weather preference, personal tagged object presence, known tagged personal presence, heart rate, and other bio data feeds.
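  • A minimal sketch of real-time contextual adaptation selecting a navigation view from context signals such as time of day and weather follows; the signal names and view labels are illustrative assumptions.

```python
# Illustrative contextual adaptation: pick a navigation view from real-time context signals.
from datetime import datetime

def choose_contextual_view(context):
    """Map a few context signals (time of day, weather) to a navigation view name."""
    hour = context.get("hour", datetime.now().hour)
    if 6 <= hour < 10:
        return "morning-commute"        # e.g. news feeds, upbeat genres
    if context.get("weather") == "rain":
        return "rainy-day-moods"
    if hour >= 22 or hour < 6:
        return "late-night"
    return "default"

if __name__ == "__main__":
    print(choose_contextual_view({"hour": 8}))                      # morning-commute
    print(choose_contextual_view({"hour": 15, "weather": "rain"}))  # rainy-day-moods
```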
  • If a determination is made not to use contextual adaptation at decision block 2310, the method 2300 may proceed to decision block 2314.
  • At decision block 2314, a determination may be made whether to restore navigation. If a determination is made to restore navigation, navigation may be restored at block 2316.
  • If a determination is made not to restore navigation at decision block 2314, the method 2300 may proceed to decision block 2318.
  • A determination may be made at decision block 2318 whether to make further modifications. If a determination is made to make further modifications, the method 2300 may return to decision block 2302. If a determination is made not to make further modifications, the method 2300 may terminate.
  • FIG. 24 shows a diagrammatic representation of a machine in the example form of a computer system 2400 within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a media player (e.g., a portable audio player, a vehicle audio player, or any media rendering device), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 2400 includes a processor 2412 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or any combination thereof), a main memory 2404, and a static memory 2406, which communicate with each other via a bus 2408. The computer system 2400 may further include a video display unit 2410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 2400 also includes an alphanumeric input device 2413 (e.g., a keyboard), a user interface (UI) navigation device 2414 (e.g., a mouse), a disk drive unit 2416, a signal generation device 2418 (e.g., a speaker), and a network interface device 2420.
  • The disk drive unit 2416 includes a machine-readable medium 2422 on which is stored one or more sets of instructions and data structures (e.g., software 2424) embodying or utilized by any one or more of the methodologies or functions described herein. The software 2424 may also reside, completely or at least partially, within the main memory 2404 and/or within the processor 2412 during execution thereof by the computer system 2400, the main memory 2404 and the processor 2412 also constituting machine-readable media.
  • The software 2424 may further be transmitted or received over a network 2426 via the network interface device 2420 utilizing any one of a number of well-known transfer protocols (e.g., FTP).
  • While the machine-readable medium 2422 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
  • FIG. 25 illustrates an example end-use system in which the client 102, 132 may be deployed. The end-use system 2500 may include a device processor 2502 in which a client 2504 may be deployed on device storage 2506. The device may include a car radio, an MP3 player, a home stereo, a computing system, and the like. The client 2504 may include the functionality of the client 102 and/or the client 132 (see FIGS. 1A and 1B). The device storage 2506 may be a fixed disk drive, flash memory, and the like.
  • The device processor 2502 may receive content from a plurality of sources 2508.1-2508.6. The plurality of sources as illustrated include a mobile phone/PDA 2508.1, a CD/DVD/Blu-ray/HD-DVD 2508.2, a media player 2508.3, local and/or remote storage 2508.4, a radio station 2508.5, and Internet access/streaming content 2508.6. However, other sources of content may also be used.
  • The content may be made navigable by an end-user through use of a user interface 2516 coupled to a display 2518, an audio output 2520, and a user input 2514 (e.g., for voice and/or text). The information obtained by the device processor 2502 may be stored locally on the device storage 2506 and/or remotely on a system database 2512 by use of a system server 2512.
  • Although an embodiment of the present invention has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
  • Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
  • The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

Claims (23)

1. A method comprising:
accessing media descriptors for a plurality of media items, the media items provided by a plurality of different media sources;
accessing a media descriptor hierarchy, the media descriptor hierarchy comprising at least one unified category list;
mapping the media descriptors to the at least one unified category list; and
providing a user interface to display the at least one unified category list to allow a user to select at least one media item in the at least one unified category list.
2. The method of claim 1, wherein the plurality of different media sources comprises at least two media sources selected from the group consisting of a satellite radio station, an FM radio station, an AM radio station, an HD radio station, an Internet radio station, a TV station, a Compact Disk (CD), a portable media player, a mobile phone, a local media database, and an online media database.
3. The method of claim 1, wherein the media items comprise digital audio tracks or digital video tracks.
4. The method of claim 3, comprising displaying track names of media items that have media descriptors mapped to the at least one unified category list.
5. The method of claim 1, wherein the media descriptor describes the media item and is a genre, an origin, a recording era, a composition era, an artist type, a tempo, a mood, a situation, a work type, a topic, personal historical context, a community historical context, and/or a timbre of the media item.
6. The method of claim 5, wherein each media descriptor is represented by a code.
7. The method of claim 1, wherein each media descriptor comprises a weighting to indicate a relevance of the media descriptor relative to at least one other media descriptor.
8. The method of claim 1, wherein the descriptor hierarchy defines a plurality of parent/child relationships of a plurality of unified category lists, and wherein at least one media descriptor is mapped to each unified category list.
9. The method of claim 8, wherein the unified category list comprises one or more of Album, Edition, Release, SKU, Disc, Track, Recording, Movement, and/or Composition of a media item.
10. The method of claim 1, wherein a plurality of descriptor hierarchies are provided, each descriptor hierarchy corresponding to a region hierarchy, a demographic hierarchy, a psychographic hierarchy, genre hierarchy, an era hierarchy, an origin hierarchy, a navigation tree, an artist type hierarchy, an alternate recording hierarchy, and/or a mapping of the media items.
11. The method of claim 1, wherein a plurality of unified category lists are provided, each unified category list corresponding to a region, a demographic, a psychographic, genre, an era hierarchy, an origin, a navigation tree, an artist type, and/or an alternate recording.
12. The method of claim 1, wherein the at least one unified category list comprises a plurality of levels, higher levels in the unified category list including fewer items than lower levels in the unified category list.
13. The method of claim 1, wherein at least one media descriptor is embedded in a media item.
14. The method of claim 1, where accessing the media descriptors comprises:
performing a digital signal processing (DSP) analysis on a plurality of media items; and
accessing the descriptors for the plurality of media items based on the DSP analysis of the plurality of media items.
15. The method of claim 1, where accessing the media descriptors for media content comprises:
obtaining a digital fingerprinting of each of the plurality of media items; and
accessing the media descriptors for the plurality of media items based on the digital fingerprinting.
16. The method of claim 1, wherein the user interface displays at least a portion of a navigation tree derived from the media descriptor hierarchy.
17. The method of claim 1, comprising displaying a portion of a unified category list on the user interface.
18. The method of claim 17, further comprising providing a plurality of navigation trees selected from a region navigation tree, an application parameter navigation tree, a device parameter navigation tree, a user type navigation tree, and/or a third party navigation tree.
19. Apparatus comprising:
memory to store instructions; and
at least one processor to execute the instructions to:
access media descriptors for a plurality of media items, the media items provided by a plurality of different media sources;
access a media descriptor hierarchy, the media descriptor hierarchy comprising at least one unified category list;
map the media descriptors to the at least one unified category list; and
provide a user interface to display the at least one unified category list to allow a user to select at least one media item in the at least one unified category list.
20. The apparatus of claim 19, wherein the plurality of different media sources comprises at least two media sources selected from the group consisting of a satellite radio station, an FM radio station, an AM radio station, an HD radio station, an Internet radio station, a TV station, a Compact Disk (CD), a portable media player, a mobile phone, a local media database, and an online media database.
21. The apparatus of claim 19, comprising local storage to store media items in the form of digital audio tracks or digital video tracks.
22. The apparatus of claim 21, wherein the track names of media items are displayed that have media descriptors mapped to the at least one unified category list.
23. A machine-readable medium embodying instructions which, when executed by a machine, cause the machine to:
access media descriptors for a plurality of media items, the media items provided by a plurality of different media sources;
access a media descriptor hierarchy, the media descriptor hierarchy comprising at least one unified category list;
map the media descriptors to the at least one unified category list; and
provide a user interface to display the at least one unified category list to allow a user to select at least one media item in the at least one unified category list.
US12/561,293 2006-03-09 2009-09-17 Method and system for media navigation Abandoned US20100005104A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/561,293 US20100005104A1 (en) 2006-03-09 2009-09-17 Method and system for media navigation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US78160906P 2006-03-09 2006-03-09
US11/716,269 US7908273B2 (en) 2006-03-09 2007-03-09 Method and system for media navigation
US12/561,293 US20100005104A1 (en) 2006-03-09 2009-09-17 Method and system for media navigation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/716,269 Division US7908273B2 (en) 2006-03-09 2007-03-09 Method and system for media navigation

Publications (1)

Publication Number Publication Date
US20100005104A1 true US20100005104A1 (en) 2010-01-07

Family

ID=38475617

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/716,269 Active 2028-06-13 US7908273B2 (en) 2006-03-09 2007-03-09 Method and system for media navigation
US12/561,293 Abandoned US20100005104A1 (en) 2006-03-09 2009-09-17 Method and system for media navigation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/716,269 Active 2028-06-13 US7908273B2 (en) 2006-03-09 2007-03-09 Method and system for media navigation

Country Status (4)

Country Link
US (2) US7908273B2 (en)
EP (1) EP2001583A4 (en)
JP (1) JP2009529753A (en)
WO (1) WO2007103583A2 (en)



US5796393A (en) * 1996-11-08 1998-08-18 Compuserve Incorporated System for integrating an on-line service community with a foreign service
US5809512A (en) * 1995-07-28 1998-09-15 Matsushita Electric Industrial Co., Ltd. Information provider apparatus enabling selective playing of multimedia information by interactive input based on displayed hypertext information
US5815471A (en) * 1996-03-19 1998-09-29 Pics Previews Inc. Method and apparatus for previewing audio selections
US5862223A (en) * 1996-07-24 1999-01-19 Walker Asset Management Limited Partnership Method and apparatus for a cryptographically-assisted commercial network system designed to facilitate and support expert-based commerce
US5884298A (en) * 1996-03-29 1999-03-16 Cygnet Storage Solutions, Inc. Method for accessing and updating a library of optical discs
US5894554A (en) * 1996-04-23 1999-04-13 Infospinner, Inc. System for managing dynamic web page generation requests by intercepting request at web server and routing to page server thereby releasing web server to process other requests
US5903816A (en) * 1996-07-01 1999-05-11 Thomson Consumer Electronics, Inc. Interactive television system and method for displaying web-like stills with hyperlinks
US5918223A (en) * 1996-07-22 1999-06-29 Muscle Fish Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information
US5929857A (en) * 1997-09-10 1999-07-27 Oak Technology, Inc. Method and apparatus for dynamically constructing a graphic user interface from a DVD data stream
US6012112A (en) * 1997-09-30 2000-01-04 Compaq Computer Corporation DVD assembly, and associated apparatus, for a convergent device
US6025837A (en) * 1996-03-29 2000-02-15 Microsoft Corporation Electronic program guide with hyperlinks to target resources
US6031795A (en) * 1996-12-02 2000-02-29 Thomson Consumer Electronics, Inc. Method and apparatus for programming a jukebox with information related to content on media contained therein
US6035329A (en) * 1995-12-07 2000-03-07 Hyperlock Technologies, Inc. Method of securing the playback of a DVD-ROM via triggering data sent via a cable network
US6034925A (en) * 1996-12-02 2000-03-07 Thomson Consumer Electronics, Inc. Accessing control method for identifying a recording medium in a jukebox
US6061306A (en) * 1999-07-20 2000-05-09 James Buchheim Portable digital player compatible with a cassette player
US6104334A (en) * 1997-12-31 2000-08-15 Eremote, Inc. Portable internet-enabled controller and information browser for consumer devices
US6112240A (en) * 1997-09-03 2000-08-29 International Business Machines Corporation Web site client information tracker
US6175857B1 (en) * 1997-04-30 2001-01-16 Sony Corporation Method and apparatus for processing attached e-mail data and storage medium for processing program for attached data
US6189030B1 (en) * 1996-02-21 2001-02-13 Infoseek Corporation Method and apparatus for redirection of server external hyper-link references
US6201176B1 (en) * 1998-05-07 2001-03-13 Canon Kabushiki Kaisha System and method for querying a music database
US6226672B1 (en) * 1997-05-02 2001-05-01 Sony Corporation Method and system for allowing users to access and/or share media libraries, including multimedia collections of audio and video information via a wide area network
US6230192B1 (en) * 1997-04-15 2001-05-08 Cddb, Inc. Method and system for accessing remote data based on playback of recordings
US6243725B1 (en) * 1997-05-21 2001-06-05 Premier International, Ltd. List building system
US6243328B1 (en) * 1998-04-03 2001-06-05 Sony Corporation Modular media storage system and integrated player unit and method for accessing additional external information
US6260059B1 (en) * 1998-04-16 2001-07-10 Matsushita Electric Industrial Co., Ltd. Knowledge provider system and knowledge providing method utilizing plural knowledge provider agents which are linked by communication network and execute message processing using successive pattern matching operations
US6356914B1 (en) * 1998-02-05 2002-03-12 Oak Technology, Inc. DVD system for seamless transfer between titles on a DVD disc which minimizes memory consumption
US6359914B1 (en) * 1999-10-04 2002-03-19 University Of Dayton Tunable pulsed narrow bandwidth light source
US20020033844A1 (en) * 1998-10-01 2002-03-21 Levy Kenneth L. Content sensitive connected content
US20020037083A1 (en) * 2000-07-14 2002-03-28 Weare Christopher B. System and methods for providing automatic classification of media entities according to tempo properties
US6377518B1 (en) * 1998-11-16 2002-04-23 U.S. Philips Corporation Method and device for recording real-time information
US20020082731A1 (en) * 2000-11-03 2002-06-27 International Business Machines Corporation System for monitoring audio content in a video broadcast
US6505160B1 (en) * 1995-07-27 2003-01-07 Digimarc Corporation Connected audio and other media objects
US20030009340A1 (en) * 2001-06-08 2003-01-09 Kazunori Hayashi Synthetic voice sales system and phoneme copyright authentication system
US20030023852A1 (en) * 2001-07-10 2003-01-30 Wold Erling H. Method and apparatus for identifying an unknown work
US20030033321A1 (en) * 2001-07-20 2003-02-13 Audible Magic, Inc. Method and apparatus for identifying new media content
US20030031260A1 (en) * 2001-07-16 2003-02-13 Ali Tabatabai Transcoding between content data and description data
US20030037010A1 (en) * 2001-04-05 2003-02-20 Audible Magic, Inc. Copyright detection and protection system and method
US20030046283A1 (en) * 1997-04-15 2003-03-06 Gracenote Inc. Method and system for finding approximate matches in database
US6535869B1 (en) * 1999-03-23 2003-03-18 International Business Machines Corporation Increasing efficiency of indexing random-access files composed of fixed-length data blocks by embedding a file index therein
US20030086341A1 (en) * 2001-07-20 2003-05-08 Gracenote, Inc. Automatic identification of sound recordings
US20030135488A1 (en) * 2002-01-11 2003-07-17 International Business Machines Corporation Synthesizing information-bearing content from multiple channels
US6609105B2 (en) * 2000-01-07 2003-08-19 Mp3.Com, Inc. System and method for providing access to electronic works
US20040034441A1 (en) * 2002-08-16 2004-02-19 Malcolm Eaton System and method for creating an index of audio tracks
US20040102973A1 (en) * 2002-11-21 2004-05-27 Lott Christopher B. Process, apparatus, and system for phonetic dictation and instruction
US20040099126A1 (en) * 2002-11-19 2004-05-27 Yamaha Corporation Interchange format of voice data in music file
US6766523B2 (en) * 2002-05-31 2004-07-20 Microsoft Corporation System and method for identifying and segmenting repeating media objects embedded in a stream
US6775374B2 (en) * 2001-09-25 2004-08-10 Sanyo Electric Co., Ltd. Network device control system, network interconnection apparatus and network device
US20050154588A1 (en) * 2001-12-12 2005-07-14 Janas John J.Iii Speech recognition and control in a process support system
US6983289B2 (en) * 2000-12-05 2006-01-03 Digital Networks North America, Inc. Automatic identification of DVD title using internet technologies and fuzzy matching techniques
US20060026162A1 (en) * 2004-07-19 2006-02-02 Zoran Corporation Content management system
US20060062363A1 (en) * 2004-09-19 2006-03-23 Sirenada, Inc. Method and apparatus for interacting with broadcast programming
US20060167903A1 (en) * 2005-01-25 2006-07-27 Microsoft Corporation MediaDescription data structures for carrying descriptive content metadata and content acquisition data in multimedia systems
US7181543B2 (en) * 2001-08-10 2007-02-20 Sun Microsystems, Inc. Secure network identity distribution
US20070050409A1 (en) * 2005-08-26 2007-03-01 Harris Corporation System, methods, and program product to trace content genealogy
US20070050408A1 (en) * 2005-08-30 2007-03-01 Microsoft Corporation Strategies for discovering media resources
US20070106405A1 (en) * 2005-08-19 2007-05-10 Gracenote, Inc. Method and system to provide reference data for identification of digital content
US7415129B2 (en) * 1995-05-08 2008-08-19 Digimarc Corporation Providing reports associated with video and audio content
US20090076821A1 (en) * 2005-08-19 2009-03-19 Gracenote, Inc. Method and apparatus to control operation of a playback device
US7908273B2 (en) * 2006-03-09 2011-03-15 Gracenote, Inc. Method and system for media navigation

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3036552C2 (en) 1980-09-27 1985-04-25 Blaupunkt-Werke Gmbh, 3200 Hildesheim Television reception system
JP2849161B2 (en) 1989-10-14 1999-01-20 三菱電機株式会社 Information playback device
JPH0786737B2 (en) * 1989-12-13 1995-09-20 パイオニア株式会社 Car navigation system
US5691964A (en) 1992-12-24 1997-11-25 Nsm Aktiengesellschaft Music playing system with decentralized units
US5464946A (en) 1993-02-11 1995-11-07 Multimedia Systems Corporation System and apparatus for interactive multimedia entertainment
US5475835A (en) 1993-03-02 1995-12-12 Research Design & Marketing Inc. Audio-visual inventory and play-back control system
EP1189231B1 (en) * 1993-05-26 2005-04-20 Pioneer Electronic Corporation Recording Medium for Karaoke
US5583560A (en) 1993-06-22 1996-12-10 Apple Computer, Inc. Method and apparatus for audio-visual interface for the selective display of listing information on a display
US5694162A (en) 1993-10-15 1997-12-02 Automated Business Companies, Inc. Method for automatically changing broadcast programs based on audience response
DE4427046C2 (en) 1994-07-29 2001-02-01 Yes Internat Ag Zug Method for reproducing additional information contained in a television or radio program signal
US6829368B2 (en) 2000-01-26 2004-12-07 Digimarc Corporation Establishing and interacting with on-line media collections using identifiers in media signals
US7562392B1 (en) * 1999-05-19 2009-07-14 Digimarc Corporation Methods of interacting with audio and ambient music
US6408331B1 (en) 1995-07-27 2002-06-18 Digimarc Corporation Computer linking methods using encoded graphics
US5822216A (en) * 1995-08-17 1998-10-13 Satchell, Jr.; James A. Vending machine and computer assembly
JP3898242B2 (en) 1995-09-14 2007-03-28 富士通株式会社 Information changing system and method for changing output of network terminal
US6314570B1 (en) 1996-02-08 2001-11-06 Matsushita Electric Industrial Co., Ltd. Data processing apparatus for facilitating data selection and data processing in a television environment with reusable menu structures
US5838910A (en) 1996-03-14 1998-11-17 Domenikos; Steven D. Systems and methods for executing application programs from a memory device linked to a server at an internet site
US5781897A (en) * 1996-04-18 1998-07-14 International Business Machines Corporation Method and system for performing record searches in a database within a computer peripheral storage device
US6138162A (en) * 1997-02-11 2000-10-24 Pointcast, Inc. Method and apparatus for configuring a client to redirect requests to a caching proxy server based on a category ID with the request
US5835914A (en) 1997-02-18 1998-11-10 Wall Data Incorporated Method for preserving and reusing software objects associated with web pages
US6005565A (en) 1997-03-25 1999-12-21 Sony Corporation Integrated search of electronic program guide, internet and other information resources
US5959945A (en) * 1997-04-04 1999-09-28 Advanced Technology Research Sa Cv System for selectively distributing music to a plurality of jukeboxes
US7308485B2 (en) 1997-04-15 2007-12-11 Gracenote, Inc. Method and system for accessing web pages based on playback of recordings
US5987454A (en) 1997-06-09 1999-11-16 Hobbs; Allen Method and apparatus for selectively augmenting retrieved text, numbers, maps, charts, still pictures and/or graphics, moving pictures and/or graphics and audio information from a network resource
US6131129A (en) * 1997-07-30 2000-10-10 Sony Corporation Of Japan Computer system within an AV/C based media changer subunit providing a standardized command set
JPH11232286A (en) 1998-02-12 1999-08-27 Hitachi Ltd Information retrieving system
US6138175A (en) * 1998-05-20 2000-10-24 Oak Technology, Inc. System for dynamically optimizing DVD navigational commands by combining a first and a second navigational commands retrieved from a medium for playback
EP0961209B1 (en) * 1998-05-27 2009-10-14 Sony France S.A. Sequence generation using a constraint satisfaction problem formulation
US6327233B1 (en) 1998-08-14 2001-12-04 Intel Corporation Method and apparatus for reporting programming selections from compact disk players
JP3583657B2 (en) 1998-09-30 2004-11-04 株式会社東芝 Relay device and communication device
JP2000194726A (en) * 1998-10-19 2000-07-14 Sony Corp Device, method and system for processing information and providing medium
US6304523B1 (en) * 1999-01-05 2001-10-16 Openglobe, Inc. Playback device having text display and communication with remote database of titles
US6654735B1 (en) 1999-01-08 2003-11-25 International Business Machines Corporation Outbound information analysis for generating user interest profiles and improving user productivity
US6941325B1 (en) * 1999-02-01 2005-09-06 The Trustees Of Columbia University Multimedia archive description scheme
WO2000058940A2 (en) 1999-03-29 2000-10-05 Gotuit Media, Inc. Electronic music and programme storage, comprising the recognition of programme segments, such as recorded musical performances, and system for the management and playback of these programme segments
US7302574B2 (en) 1999-05-19 2007-11-27 Digimarc Corporation Content identifiers triggering corresponding responses through collaborative processing
US6941275B1 (en) * 1999-10-07 2005-09-06 Remi Swierczek Music identification system
EP1193616A1 (en) * 2000-09-29 2002-04-03 Sony France S.A. Fixed-length sequence generation of items out of a database using descriptors
ATE405101T1 (en) 2001-02-12 2008-08-15 Gracenote Inc METHOD FOR GENERATING AN IDENTIFICATION HASH FROM THE CONTENTS OF A MULTIMEDIA FILE
US7073193B2 (en) * 2002-04-16 2006-07-04 Microsoft Corporation Media content descriptions
JP2004309795A (en) * 2003-04-07 2004-11-04 Mitsubishi Electric Corp Music providing system
US20050015488A1 (en) * 2003-05-30 2005-01-20 Pavan Bayyapu Selectively managing data conveyance between computing devices

Patent Citations (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4677466A (en) * 1985-07-29 1987-06-30 A. C. Nielsen Company Broadcast program identification method and apparatus
US4870568A (en) * 1986-06-25 1989-09-26 Thinking Machines Corporation Method for searching a database system including parallel processors
US5206949A (en) * 1986-09-19 1993-04-27 Nancy P. Cochran Database search and record retrieval system which continuously displays category names during scrolling and selection of individually displayed search terms
US5019899A (en) * 1988-11-01 1991-05-28 Control Data Corporation Electronic data encoding and recognition system
US5132949A (en) * 1990-04-30 1992-07-21 Samsung Electronics Co., Ltd. Record medium searching apparatus for a recording/reproducing system
US5781889A (en) * 1990-06-15 1998-07-14 Martin; John R. Computer jukebox and jukebox network
US5341350A (en) * 1990-07-07 1994-08-23 Nsm Aktiengesellschaft Coin operated jukebox device using data communication network
US5237157A (en) * 1990-09-13 1993-08-17 Intouch Group, Inc. Kiosk apparatus and method for point of preview and for compilation of market data
US5488725A (en) * 1991-10-08 1996-01-30 West Publishing Company System of document representation retrieval by successive iterated probability sampling
US5446891A (en) * 1992-02-26 1995-08-29 International Business Machines Corporation System for adjusting hypertext links with weighed user goals and activities
US5392264A (en) * 1992-04-24 1995-02-21 Pioneer Electronic Corporation Information reproducing apparatus
US5388259A (en) * 1992-05-15 1995-02-07 Bell Communications Research, Inc. System for accessing a database with an iterated fuzzy query notified by retrieval response
US5337347A (en) * 1992-06-25 1994-08-09 International Business Machines Corporation Method and system for progressive database search termination and dynamic information presentation utilizing telephone keypad input
US5446714A (en) * 1992-07-21 1995-08-29 Pioneer Electronic Corporation Disc changer and player that reads and stores program data of all discs prior to reproduction and method of reproducing music on the same
US5410543A (en) * 1993-01-04 1995-04-25 Apple Computer, Inc. Method for connecting a mobile computer to a computer network by using an address server
US5483598A (en) * 1993-07-01 1996-01-09 Digital Equipment Corp., Patent Law Group Message encryption using a hash function
US5768222A (en) * 1994-05-25 1998-06-16 Sony Corporation Reproducing apparatus for a recording medium where a transferring means returns a recording medium into the stocker before execution of normal operation and method therefor
US5740304A (en) * 1994-07-04 1998-04-14 Sony Corporation Method and apparatus for replaying recording medium from any bookmark-set position thereon
US5642337A (en) * 1995-03-14 1997-06-24 Sony Corporation Network with optical mass storage devices
US5757739A (en) * 1995-03-30 1998-05-26 U.S. Philips Corporation System including a presentation apparatus, in which different items are selectable, and a control device for controlling the presentation apparatus, and control device for such a system
US5616876A (en) * 1995-04-19 1997-04-01 Microsoft Corporation System and methods for selecting music on the basis of subjective content
US7415129B2 (en) * 1995-05-08 2008-08-19 Digimarc Corporation Providing reports associated with video and audio content
US5659732A (en) * 1995-05-17 1997-08-19 Infoseek Corporation Document retrieval over networks wherein ranking and relevance scores are computed at the client for multiple database documents
US5625608A (en) * 1995-05-22 1997-04-29 Lucent Technologies Inc. Remote control device capable of downloading content information from an audio system
US5615345A (en) * 1995-06-08 1997-03-25 Hewlett-Packard Company System for interfacing an optical disk autochanger to a plurality of disk drives
US5740549A (en) * 1995-06-12 1998-04-14 Pointcast, Inc. Information and advertising distribution system and method
US6272078B2 (en) * 1995-07-26 2001-08-07 Sony Corporation Method for updating a memory in a recorded media player
US5751672A (en) * 1995-07-26 1998-05-12 Sony Corporation Compact disc changer utilizing disc database
US6247022B1 (en) * 1995-07-26 2001-06-12 Sony Corporation Internet based provision of information supplemental to that stored on compact discs
US6388958B1 (en) * 1995-07-26 2002-05-14 Sony Corporation Method of building a play list for a recorded media changer
US6388957B2 (en) * 1995-07-26 2002-05-14 Sony Corporation Recorded media player with database
US7349552B2 (en) * 1995-07-27 2008-03-25 Digimarc Corporation Connected audio and other media objects
US6505160B1 (en) * 1995-07-27 2003-01-07 Digimarc Corporation Connected audio and other media objects
US5809512A (en) * 1995-07-28 1998-09-15 Matsushita Electric Industrial Co., Ltd. Information provider apparatus enabling selective playing of multimedia information by interactive input based on displayed hypertext information
US6035329A (en) * 1995-12-07 2000-03-07 Hyperlock Technologies, Inc. Method of securing the playback of a DVD-ROM via triggering data sent via a cable network
US5761606A (en) * 1996-02-08 1998-06-02 Wolzien; Thomas R. Media online services access via address embedded in video or audio program
US5781909A (en) * 1996-02-13 1998-07-14 Microtouch Systems, Inc. Supervised satellite kiosk management system with combined local and remote data storage
US6189030B1 (en) * 1996-02-21 2001-02-13 Infoseek Corporation Method and apparatus for redirection of server external hyper-link references
US5751956A (en) * 1996-02-21 1998-05-12 Infoseek Corporation Method and apparatus for redirection of server external hyper-link references
US5815471A (en) * 1996-03-19 1998-09-29 Pics Previews Inc. Method and apparatus for previewing audio selections
US5673322A (en) * 1996-03-22 1997-09-30 Bell Communications Research, Inc. System and method for providing protocol translation and filtering to access the world wide web from wireless or low-bandwidth networks
US6025837A (en) * 1996-03-29 2000-02-15 Microsoft Corporation Electronic program guide with hyperlinks to target resources
US5884298A (en) * 1996-03-29 1999-03-16 Cygnet Storage Solutions, Inc. Method for accessing and updating a library of optical discs
US5894554A (en) * 1996-04-23 1999-04-13 Infospinner, Inc. System for managing dynamic web page generation requests by intercepting request at web server and routing to page server thereby releasing web server to process other requests
US5727129A (en) * 1996-06-04 1998-03-10 International Business Machines Corporation Network system for profiling and actively facilitating user activities
US5903816A (en) * 1996-07-01 1999-05-11 Thomson Consumer Electronics, Inc. Interactive television system and method for displaying web-like stills with hyperlinks
US5918223A (en) * 1996-07-22 1999-06-29 Muscle Fish Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information
US5862223A (en) * 1996-07-24 1999-01-19 Walker Asset Management Limited Partnership Method and apparatus for a cryptographically-assisted commercial network system designed to facilitate and support expert-based commerce
US5721827A (en) * 1996-10-02 1998-02-24 James Logan System for electrically distributing personalized information
US5774666A (en) * 1996-10-18 1998-06-30 Silicon Graphics, Inc. System and method for displaying uniform network resource locators embedded in time-based medium
US5796393A (en) * 1996-11-08 1998-08-18 Compuserve Incorporated System for integrating an on-line service community with a foreign service
US6031795A (en) * 1996-12-02 2000-02-29 Thomson Consumer Electronics, Inc. Method and apparatus for programming a jukebox with information related to content on media contained therein
US6034925A (en) * 1996-12-02 2000-03-07 Thomson Consumer Electronics, Inc. Accessing control method for identifying a recording medium in a jukebox
US20030046283A1 (en) * 1997-04-15 2003-03-06 Gracenote Inc. Method and system for finding approximate matches in database
US6230192B1 (en) * 1997-04-15 2001-05-08 Cddb, Inc. Method and system for accessing remote data based on playback of recordings
US6175857B1 (en) * 1997-04-30 2001-01-16 Sony Corporation Method and apparatus for processing attached e-mail data and storage medium for processing program for attached data
US6535907B1 (en) * 1997-04-30 2003-03-18 Sony Corporation Method and apparatus for processing attached E-mail data and storage medium for processing program for attached data
US6434597B1 (en) * 1997-04-30 2002-08-13 Sony Corporation Animated virtual agent displaying apparatus, method for displaying a virtual agent, and medium for storing instructions for displaying a virtual agent
US6226672B1 (en) * 1997-05-02 2001-05-01 Sony Corporation Method and system for allowing users to access and/or share media libraries, including multimedia collections of audio and video information via a wide area network
US6243725B1 (en) * 1997-05-21 2001-06-05 Premier International, Ltd. List building system
US6112240A (en) * 1997-09-03 2000-08-29 International Business Machines Corporation Web site client information tracker
US5929857A (en) * 1997-09-10 1999-07-27 Oak Technology, Inc. Method and apparatus for dynamically constructing a graphic user interface from a DVD data stream
US6012112A (en) * 1997-09-30 2000-01-04 Compaq Computer Corporation DVD assembly, and associated apparatus, for a convergent device
US6104334A (en) * 1997-12-31 2000-08-15 Eremote, Inc. Portable internet-enabled controller and information browser for consumer devices
US6356914B1 (en) * 1998-02-05 2002-03-12 Oak Technology, Inc. DVD system for seamless transfer between titles on a DVD disc which minimizes memory consumption
US6243328B1 (en) * 1998-04-03 2001-06-05 Sony Corporation Modular media storage system and integrated player unit and method for accessing additional external information
US6260059B1 (en) * 1998-04-16 2001-07-10 Matsushita Electric Industrial Co., Ltd. Knowledge provider system and knowledge providing method utilizing plural knowledge provider agents which are linked by communication network and execute message processing using successive pattern matching operations
US6201176B1 (en) * 1998-05-07 2001-03-13 Canon Kabushiki Kaisha System and method for querying a music database
US20020033844A1 (en) * 1998-10-01 2002-03-21 Levy Kenneth L. Content sensitive connected content
US6377518B1 (en) * 1998-11-16 2002-04-23 U.S. Philips Corporation Method and device for recording real-time information
US6535869B1 (en) * 1999-03-23 2003-03-18 International Business Machines Corporation Increasing efficiency of indexing random-access files composed of fixed-length data blocks by embedding a file index therein
US6061306A (en) * 1999-07-20 2000-05-09 James Buchheim Portable digital player compatible with a cassette player
US6359914B1 (en) * 1999-10-04 2002-03-19 University Of Dayton Tunable pulsed narrow bandwidth light source
US6609105B2 (en) * 2000-01-07 2003-08-19 Mp3.Com, Inc. System and method for providing access to electronic works
US20020037083A1 (en) * 2000-07-14 2002-03-28 Weare Christopher B. System and methods for providing automatic classification of media entities according to tempo properties
US20020082731A1 (en) * 2000-11-03 2002-06-27 International Business Machines Corporation System for monitoring audio content in a video broadcast
US6983289B2 (en) * 2000-12-05 2006-01-03 Digital Networks North America, Inc. Automatic identification of DVD title using internet technologies and fuzzy matching techniques
US20030037010A1 (en) * 2001-04-05 2003-02-20 Audible Magic, Inc. Copyright detection and protection system and method
US20030009340A1 (en) * 2001-06-08 2003-01-09 Kazunori Hayashi Synthetic voice sales system and phoneme copyright authentication system
US20030023852A1 (en) * 2001-07-10 2003-01-30 Wold Erling H. Method and apparatus for identifying an unknown work
US20030031260A1 (en) * 2001-07-16 2003-02-13 Ali Tabatabai Transcoding between content data and description data
US20030033321A1 (en) * 2001-07-20 2003-02-13 Audible Magic, Inc. Method and apparatus for identifying new media content
US20030086341A1 (en) * 2001-07-20 2003-05-08 Gracenote, Inc. Automatic identification of sound recordings
US7181543B2 (en) * 2001-08-10 2007-02-20 Sun Microsystems, Inc. Secure network identity distribution
US6775374B2 (en) * 2001-09-25 2004-08-10 Sanyo Electric Co., Ltd. Network device control system, network interconnection apparatus and network device
US20050154588A1 (en) * 2001-12-12 2005-07-14 Janas John J.Iii Speech recognition and control in a process support system
US20030135488A1 (en) * 2002-01-11 2003-07-17 International Business Machines Corporation Synthesizing information-bearing content from multiple channels
US6766523B2 (en) * 2002-05-31 2004-07-20 Microsoft Corporation System and method for identifying and segmenting repeating media objects embedded in a stream
US20040034441A1 (en) * 2002-08-16 2004-02-19 Malcolm Eaton System and method for creating an index of audio tracks
US20040099126A1 (en) * 2002-11-19 2004-05-27 Yamaha Corporation Interchange format of voice data in music file
US20040102973A1 (en) * 2002-11-21 2004-05-27 Lott Christopher B. Process, apparatus, and system for phonetic dictation and instruction
US20060026162A1 (en) * 2004-07-19 2006-02-02 Zoran Corporation Content management system
US20060062363A1 (en) * 2004-09-19 2006-03-23 Sirenada, Inc. Method and apparatus for interacting with broadcast programming
US20060167903A1 (en) * 2005-01-25 2006-07-27 Microsoft Corporation MediaDescription data structures for carrying descriptive content metadata and content acquisition data in multimedia systems
US20070106405A1 (en) * 2005-08-19 2007-05-10 Gracenote, Inc. Method and system to provide reference data for identification of digital content
US20090076821A1 (en) * 2005-08-19 2009-03-19 Gracenote, Inc. Method and apparatus to control operation of a playback device
US20070050409A1 (en) * 2005-08-26 2007-03-01 Harris Corporation System, methods, and program product to trace content genealogy
US20070050408A1 (en) * 2005-08-30 2007-03-01 Microsoft Corporation Strategies for discovering media resources
US7908273B2 (en) * 2006-03-09 2011-03-15 Gracenote, Inc. Method and system for media navigation

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7856443B2 (en) 2001-12-05 2010-12-21 Gracenote, Inc. Automatic identification of DVD title using internet technologies and fuzzy matching techniques
US20050019008A1 (en) * 2001-12-05 2005-01-27 Digital Networks North America, Inc. Automatic identification of DVD title using internet technologies and fuzzy matching techniques
US7908273B2 (en) 2006-03-09 2011-03-15 Gracenote, Inc. Method and system for media navigation
US8959202B2 (en) * 2008-03-18 2015-02-17 Civolution B.V. Generating statistics of popular content
US20110066723A1 (en) * 2008-03-18 2011-03-17 Civolution B.V. Generating statistics of popular content
US20100115472A1 (en) * 2008-10-30 2010-05-06 Lee Kun-Bin Method of Facilitating Browsing and Management of Multimedia Files with Data Structure thereof
US8495699B2 (en) 2008-12-23 2013-07-23 At&T Intellectual Property I, L.P. Distributed content analysis network
US9843843B2 (en) 2008-12-23 2017-12-12 At&T Intellectual Property I, L.P. Distributed content analysis network
US9078019B2 (en) 2008-12-23 2015-07-07 At&T Intellectual Property I, L.P. Distributed content analysis network
US11093544B2 (en) * 2009-08-13 2021-08-17 TunesMap Inc. Analyzing captured sound and seeking a match for temporal and geographic presentation and navigation of linked cultural, artistic, and historic content
US10235017B2 (en) 2010-02-04 2019-03-19 Microsoft Technology Licensing, Llc Integrated media user interface
US20130298022A1 (en) * 2010-02-04 2013-11-07 Microsoft Corporation Integrated Media User Interface
US20110190032A1 (en) * 2010-02-04 2011-08-04 Sheldon Kerri I H Integrated Media User Interface
US8494590B2 (en) * 2010-02-04 2013-07-23 Microsoft Corporation Integrated media user interface
US9335903B2 (en) * 2010-02-04 2016-05-10 Microsoft Corporation Integrated media user interface
US20160306512A1 (en) * 2010-02-04 2016-10-20 Microsoft Technology Licensing, Llc Integrated Media User Interface
US8688712B1 (en) * 2011-09-20 2014-04-01 Google Inc. Personal media database
US9043340B1 (en) 2011-09-20 2015-05-26 Google Inc. Personal media database
US20130238964A1 (en) * 2012-03-06 2013-09-12 Apple Inc. Application for designing journals
US20130282956A1 (en) * 2012-04-20 2013-10-24 Pradeep Ramdeo Automobile MP3 System
US10140372B2 (en) * 2012-09-12 2018-11-27 Gracenote, Inc. User profile based on clustering tiered descriptors
EP2895971A4 (en) * 2012-09-12 2016-05-04 Gracenote Inc User profile based on clustering tiered descriptors
EP3543872A1 (en) * 2012-09-12 2019-09-25 Gracenote Inc. User profile based on clustering tiered descriptors
US10949482B2 (en) 2012-09-12 2021-03-16 Gracenote, Inc. User profile based on clustering tiered descriptors
US20210200825A1 (en) * 2012-09-12 2021-07-01 Gracenote, Inc. User profile based on clustering tiered descriptors
US20140074839A1 (en) * 2012-09-12 2014-03-13 Gracenote, Inc. User profile based on clustering tiered descriptors
US11886521B2 (en) * 2012-09-12 2024-01-30 Gracenote, Inc. User profile based on clustering tiered descriptors
CN110597867A (en) * 2019-09-09 2019-12-20 珠海格力电器股份有限公司 Image-text data processing method and system

Also Published As

Publication number Publication date
WO2007103583A3 (en) 2008-05-08
JP2009529753A (en) 2009-08-20
EP2001583A4 (en) 2010-09-01
US20070288478A1 (en) 2007-12-13
EP2001583A2 (en) 2008-12-17
US7908273B2 (en) 2011-03-15
WO2007103583A2 (en) 2007-09-13

Similar Documents

Publication Publication Date Title
US7908273B2 (en) Method and system for media navigation
US8321456B2 (en) Generating metadata for association with a collection of content items
US7392477B2 (en) Resolving metadata matched to media content
US7707500B2 (en) User interface for media item portion search tool
EP1900207B1 (en) Creating standardized playlists and maintaining coherency
US7912565B2 (en) Method for creating and accessing a menu for audio content without using a display
US8620967B2 (en) Managing metadata for occurrences of a recording
US20120239690A1 (en) Utilizing time-localized metadata
US20060085383A1 (en) Network-based data collection, including local data attributes, enabling media management without requiring a network connection
US20040019658A1 (en) Metadata retrieval protocols and namespace identifiers
KR101540429B1 (en) Method and apparatus for recommending playlist of contents
US20100023485A1 (en) Method of generating audiovisual content through meta-data analysis
US8762403B2 (en) Method of searching for media item portions
CN104182413A (en) Method and system for recommending multimedia content
US20090287649A1 (en) Method and apparatus for providing content playlist
US20120239689A1 (en) Communicating time-localized metadata
US7921140B2 (en) Apparatus and method for browsing contents
US20070250533A1 (en) Method, Apparatus, System, and Computer Program Product for Generating or Updating a Metadata of a Multimedia File
US8595266B2 (en) Method of suggesting accompaniment tracks for synchronised rendering with a content data item
US20040182225A1 (en) Portable custom media server
JP2008225584A (en) Article recommendation apparatus, article recommendation system, article recommendation method, and article recommendation program
EP1437738B1 (en) Method for creating and accessing a menu for audio content without using a display
Waters et al. Music metadata in a new key: Metadata and annotation for music in a digital world
Di Bono et al. WP9: A review of data and metadata standards and techniques for representation of multimedia content
Weller et al. The Future is Meta: Metadata, Formats and Perspectives towards Interactive and Personalized AV Content

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION