US20140324895A1 - System and method for creating and maintaining a database of annotations corresponding to portions of a content item
- Publication number
- US20140324895A1 (application US 14/195,046)
- Authority
- US
- United States
- Prior art keywords
- content item
- annotation
- presentation
- user
- reference time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F17/30882
- G06F16/9558—Details of hyperlinks; Management of linked annotations
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F17/30312
- G06F40/169—Annotation, e.g. comment data or footnotes
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23109—Content storage by placing content in organized collections, e.g. EPG data repository
- H04N21/23424—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
- H04N21/2668—Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
- H04N21/43079—Synchronising the rendering of additional data with content streams on multiple devices
- H04N21/44226—Monitoring of user activity on external systems, e.g. Internet browsing on social networks
- H04N21/454—Content or additional data filtering, e.g. blocking advertisements
- H04N21/458—Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules; time-related management operations
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4784—Supplemental services for receiving rewards
- H04N21/4788—Supplemental services for communicating with other users, e.g. chatting
- H04N21/8133—Additional data specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
Definitions
- the invention relates generally to methods, apparatuses, and/or systems for enabling a time-shifted, on-demand social network for watching, creating, and/or sharing time-shifted annotation datasets (e.g., commentary tracks) synced to any on-demand programming, and more particularly to creating and maintaining a database of annotations corresponding to portions of a content item.
- users are able to disseminate information to others, as well as interact with one another, via various social networks.
- users may utilize a social networking service to inform others about movie and/or television episodes that they have watched, share their reactions to events occurring during an episode in real-time, and respond to one another's reactions to events in an episode.
- users that miss an episode during an original airing and watch the episode at a later time are typically unable to experience the reactions of other users as they watch the episode, for example, due to the significantly lower number of viewers that are watching the episode at the same time (for subsequent airings) or because they are watching it “on-demand” at a time when others are not viewing the episode.
- the shared reactions may act as “spoilers” that ruin the experience of the users that have yet to watch the episode.
- the invention addressing these and other drawbacks relates to methods, apparatuses, and/or systems for enabling a time-shifted, on-demand social network for watching, creating, and/or sharing time-shifted commentary tracks synced to any on-demand programming, according to an aspect of the invention.
- the invention may facilitate the presentation of content items, annotations associated with the content items, or related items.
- content items may include movies, television episodes, portions or segments of movies or television episodes, video clips, songs, audio books, e-books, or other content items.
- a presentation of a content item may be provided to a user via a content delivery service such as, for example, NETFLIX, HULU, AMAZON INSTANT VIDEO, a cable provider, a local service at a user device programmed to present content items stored locally at an electronic storage of the user device (e.g., a hard drive, a CD, a DVD, etc.), or other content delivery service.
- Presentations of a content item may include reproductions of the content item that are of varying versions (e.g., extended versions, versions with alternative endings or scenes, etc.), reproductions of the content item with auxiliary information (e.g., advertisements, warnings, etc.), or other presentations of the content item.
- annotations may include reviews, comments, ratings, markups, posts, links to other media, or other annotations.
- Annotations may be manually entered by a user for a content item (or a portion thereof), or automatically determined for the content item (or portion thereof) based on interactions of the user with the content item (or portion thereof), interactions of the user with other portions of the content item or other content items, or other parameters.
- Annotations may be manually entered or automatically determined for the content item or the content item portion either before, during, or after a presentation of the content item.
- Annotations may be stored as data or metadata, for example, in association with information indicative of the content item or the content item portion.
- a database of annotations may be created and maintained.
- the database of annotations may, for example, include annotations corresponding to portions of a content item.
- the annotations may be created based on presentations of the content item that are provided via one or more content delivery services, and stored in a database for later use.
- an annotation in the database may correspond to a time at which a first portion of a content item is presented via a first content delivery service (e.g., NETFLIX), and another annotation in the database may correspond to a time at which a second portion of the content item is presented via a second content delivery service (e.g., HULU).
- the annotations may be stored in the database respectively in association with reference times that correspond to portions of the content item.
- reference times associated with annotations may be utilized to subsequently provide the annotations such that they are presented in a time-synchronized fashion with corresponding portions of a content item during subsequent presentations of that content item.
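As a minimal, illustrative sketch only (the class, method, and field names below are assumptions rather than anything disclosed in the application), storing annotations keyed by reference times and retrieving them in a time-synchronized fashion during a later presentation might look like this in Python:

```python
from collections import defaultdict


class AnnotationStore:
    """Toy in-memory stand-in for the annotation database described above."""

    def __init__(self):
        # content_item_id -> reference time (seconds) -> list of annotations.
        self._annotations = defaultdict(lambda: defaultdict(list))

    def add(self, content_item_id, reference_time, annotation):
        """Store an annotation in association with a reference time."""
        self._annotations[content_item_id][reference_time].append(annotation)

    def due(self, content_item_id, current_time, window=2.0):
        """Return annotations whose reference times fall within `window` seconds
        of the current playback position, so they can be presented in a
        time-synchronized fashion during a subsequent presentation."""
        by_time = self._annotations[content_item_id]
        return [annotation
                for ref_time, annotations in by_time.items()
                if abs(ref_time - current_time) <= window
                for annotation in annotations]


# Usage: store a comment at reference time 754.0 s, then fetch it during playback.
store = AnnotationStore()
store.add("episode-42", 754.0, {"user": "X", "text": "Great scene!"})
print(store.due("episode-42", current_time=755.1))
```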
- In this way, reactions of users to portions of the content item (e.g., captured in the form of annotations) may be shared with other users regardless of when each user experiences the content item.
- audio content recognition may be performed for a portion of a content item when a user submits an annotation (e.g., comment) for the portion of the content item.
- a comparison of a resulting pattern of the audio content recognition with stored patterns associated with a set of reference times may then be performed to identify a reference time that corresponds to the portion of the content item.
- the annotation may be stored in association with the reference time so that the reference time may later be used to present the annotation when its corresponding portion of the content item is presented.
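The application does not specify a particular audio-recognition technique; the following hedged sketch simply stands in a compact integer fingerprint and a bitwise similarity measure for whatever recognition result is produced, and shows how the best-matching stored pattern could yield the reference time under which to store the annotation:

```python
def hamming_similarity(a: int, b: int, bits: int = 64) -> float:
    """Fraction of matching bits between two integer audio fingerprints."""
    return 1.0 - bin(a ^ b).count("1") / bits


def resolve_reference_time(sample_fingerprint, reference_patterns, threshold=0.9):
    """Compare the recognition result for the audio heard when the annotation was
    submitted against stored patterns, each keyed to a reference time, and return
    the best-matching reference time (or None if nothing is close enough)."""
    best_time, best_score = None, threshold
    for reference_time, pattern in reference_patterns.items():
        score = hamming_similarity(sample_fingerprint, pattern)
        if score >= best_score:
            best_time, best_score = reference_time, score
    return best_time


# Usage: the resolved reference time is then what the annotation is stored under.
reference_patterns = {120.0: 0xA5A55A5A0F0FF0F0, 121.0: 0xA5A55A5A0F0FF0F1}
print(resolve_reference_time(0xA5A55A5A0F0FF0F3, reference_patterns))  # 121.0
```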
- audio content recognition may be performed on presentations of a content item to recognize and map portions of individual ones of the presentations to portions of other ones of the presentations. Because portions of one presentation are mapped to portions of other presentations (and/or vice versa), a particular portion of the content item (to which an annotation corresponds) may be determined when the corresponding portion of any one of the presentations is known.
- a master set of reference times may be maintained for portions of a content item, and used to identify a particular portion of the content item to which an annotation corresponds. For example, upon receipt of an annotation, a reference time corresponding to receipt of the annotation may be correlated to a master reference time of the master set. The annotation may then be stored in association with the correlated master reference time so that the master reference time may later be used to present the annotation when its corresponding portion of the content item is presented.
- portions of presentations of a content item may be mapped to portions of other ones of the presentations, and/or mapped to a master set of reference times, using other approaches.
- information regarding the portions of the presentations, the reference times of the presentations that correspond to the portions, etc. may be shared by content delivery services (through which access to the presentations are provided).
- the content delivery services may each provide an application programming interface (API) through which the shared information may be obtained.
- the information shared by the content delivery services may then be utilized to correlate the portions of the presentations to portions of other ones of the presentations, and/or to the master set of reference times.
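No real content delivery service is known to expose such an API, so the interface below is purely hypothetical; it only illustrates how per-service timeline information, if shared, could be correlated against a master set of reference times:

```python
from typing import Dict, Protocol


class DeliveryServiceTimelineAPI(Protocol):
    """Hypothetical interface: each service reports, for a content item, the
    local start time of every content portion it presents."""

    def portion_start_times(self, content_item_id: str) -> Dict[str, float]: ...


def build_mappings(master: Dict[str, float],
                   services: Dict[str, DeliveryServiceTimelineAPI],
                   content_item_id: str) -> Dict[str, Dict[float, float]]:
    """For each service, map its local reference times to the master reference
    times of the same content item portions."""
    mappings: Dict[str, Dict[float, float]] = {}
    for name, api in services.items():
        local = api.portion_start_times(content_item_id)
        mappings[name] = {local[p]: master[p] for p in master if p in local}
    return mappings


class StubService:
    """Stand-in for one content delivery service's (hypothetical) API."""

    def __init__(self, times):
        self._times = times

    def portion_start_times(self, content_item_id):
        return self._times


master = {"portion-A": 0.0, "portion-B": 300.0}
services = {"service-1": StubService({"portion-A": 0.0, "portion-B": 330.0})}
print(build_mappings(master, services, "episode-42"))
# {'service-1': {0.0: 0.0, 330.0: 300.0}}
```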
- annotations corresponding to portions of a content item may be provided to one or more social networking services (e.g., FACEBOOK, TWITTER, etc.).
- a user may submit annotations during a presentation of a content item to a plurality of social networking services.
- a user interface may enable a user to experience a presentation of a content item, view annotations corresponding to portions of the content item during the presentation, submit annotations for portions of the content item during the presentation, select to provide a submitted annotation to different social networking services, or perform other operations.
- users may consume and share their experiences regarding the content item with subsequent viewers (or consumers) of the content item, and other users associated with social networking services.
- annotations may be selectively presented to a user based on one or more parameters.
- the parameters may, for instance, include annotation types, annotation sources (e.g., authors or other sources), annotation set identifiers, social distances, user relationship status, spatial proximity, temporal proximity, or other parameters.
- the parameters may be manually selected by a user, or automatically selected.
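A rough sketch of selective presentation, using only a subset of the parameters listed above (annotation type, source, and social distance); the field names are assumptions:

```python
def filter_annotations(annotations, allowed_types=None, allowed_sources=None,
                       max_social_distance=None):
    """Keep only annotations matching the selected parameters; a parameter left
    as None is simply not applied."""
    kept = []
    for annotation in annotations:
        if allowed_types is not None and annotation.get("type") not in allowed_types:
            continue
        if allowed_sources is not None and annotation.get("source") not in allowed_sources:
            continue
        if (max_social_distance is not None
                and annotation.get("social_distance", float("inf")) > max_social_distance):
            continue
        kept.append(annotation)
    return kept


annotations = [
    {"type": "comment", "source": "friend-1", "social_distance": 1, "text": "Ha!"},
    {"type": "rating", "source": "critic-9", "social_distance": 3, "text": "3/5"},
]
# Only comments from users within a social distance of 2 are presented.
print(filter_annotations(annotations, allowed_types={"comment"}, max_social_distance=2))
```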
- annotations may be provided by users, or automatically determined. For example, in one implementation, interactions of users with a presentation of a content item may be monitored, and a characteristic (e.g., funny, boring, 4.5/5 stars, etc.) of the content item may be determined. An annotation may then be generated for the content item based on the characteristic(s). In another implementation, a reference time for the annotation may be identified. The annotation may then be stored in association with the reference time, for example, so that the annotation may be presented when the portion of the content item is presented during a subsequent presentation.
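A sketch of the automatic path described above, with invented interaction event names and thresholds; it derives one of the example characteristics ("funny", "boring") from monitored interactions and wraps it in a generated annotation:

```python
def characterize(interactions):
    """Derive a coarse characteristic of a content item portion from monitored
    interactions; the event names and thresholds are invented for illustration."""
    laughs = interactions.count("laugh_reaction")
    skips = interactions.count("skip")
    if laughs >= 3 and laughs > skips:
        return "funny"
    if skips >= 3 and skips > laughs:
        return "boring"
    return None


def generate_annotation(content_item_id, reference_time, interactions):
    """Generate an automatic annotation for the portion at `reference_time`,
    or return None when no clear characteristic emerges."""
    characteristic = characterize(interactions)
    if characteristic is None:
        return None
    return {"content_item": content_item_id,
            "reference_time": reference_time,
            "source": "auto",
            "text": f"Viewers found this portion {characteristic}."}


print(generate_annotation("episode-42", 610.0,
                          ["laugh_reaction", "laugh_reaction", "laugh_reaction", "skip"]))
```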
- FIG. 1 is an exemplary illustration of a system for facilitating the presentation of content items, annotations associated with the content items, or other related items, according to an aspect of the invention.
- FIGS. 2A-2C are exemplary illustrations of a user interface at different times during presentation of a content item, according to aspects of the invention.
- FIG. 3 is an exemplary illustration of different presentations of a content item, according to an aspect of the invention.
- FIGS. 4A-4C are exemplary illustrations of user interfaces for presenting a content item, interacting with the presentation of the content item, and/or interacting with a social networking service, according to aspects of the invention.
- FIGS. 5A-5C are exemplary illustrations of user interfaces for presenting a content item, interacting with the content item, and/or sharing a portion of the content item, according to aspects of the invention.
- FIGS. 6A-6C are exemplary illustrations of a user interface for textually and/or graphically depicting information related to annotations at different times during a presentation of a content item, according to aspects of the invention.
- FIGS. 7A-7C are exemplary illustrations of a user interface for presenting a content item and annotations of a dataset related to the content item, according to aspects of the invention.
- FIGS. 8A-8B are exemplary illustrations of a user interface that depicts mechanisms in annotations that enable transactions related to products or services, according to aspects of the invention.
- FIGS. 9A-9C are exemplary illustrations of a user interface for enabling reactions to annotations and/or initiating a message thread via a social networking service, and a user interface for interacting with the message thread via the social networking service, according to aspects of the invention.
- FIGS. 10A-10D are exemplary illustrations of a user interface depicting an intelligent presentation of user interface elements, according to aspects of the invention.
- FIGS. 11A-11B are exemplary illustrations of user interfaces depicting presentations of a content item to a group of users, according to aspects of the invention.
- FIG. 12 is an exemplary illustration of a flowchart of a method of creating and maintaining a database of annotations corresponding to portions of a content item, according to an aspect of the invention.
- FIG. 13 is an exemplary illustration of a flowchart of a method of generating annotations for a content item based on interactions of users with presentations of the content item, according to an aspect of the invention.
- FIG. 14 is an exemplary illustration of a flowchart of a method of providing annotations corresponding to portions of a content item to social networking services, according to an aspect of the invention.
- FIG. 15 is an exemplary illustration of a flowchart of a method of presenting annotations corresponding to portions of a content item during a presentation of the content item, according to an aspect of the invention.
- FIG. 16 is an exemplary illustration of a flowchart of a method of facilitating rewards for the creation of annotations, according to an aspect of the invention.
- FIG. 17 is an exemplary illustration of a flowchart of a method of facilitating rewards based on interactions with annotations, according to an aspect of the invention.
- FIG. 18 is an exemplary illustration of a flowchart of a method of facilitating rewards based on execution of transactions enabled via annotations, according to an aspect of the invention.
- FIG. 19 is an exemplary illustration of a flowchart of a method of providing a dataset of annotations corresponding to portions of a content item, according to an aspect of the invention.
- FIG. 20 is an exemplary illustration of a flowchart of a method of facilitating rewards based on interactions with datasets, according to an aspect of the invention.
- FIG. 21 is an exemplary illustration of a flowchart of a method of facilitating rewards based on execution of transactions enabled via datasets, according to an aspect of the invention.
- FIG. 22 is an exemplary illustration of a flowchart of a method of facilitating the sharing of portions of a content item across different content delivery services, according to an aspect of the invention.
- FIG. 23 is an exemplary illustration of a flowchart of a method of facilitating the access of a portion of a content item, according to an aspect of the invention.
- FIG. 24 is an exemplary illustration of a flowchart of a method of enabling storage of reactions to annotations, according to an aspect of the invention.
- FIG. 25 is an exemplary illustration of a flowchart of a method of initiating conversations between users based on reactions to annotations, according to an aspect of the invention.
- FIG. 26 is an exemplary illustration of a flowchart of a method of presenting user interface elements based on relevancy, according to an aspect of the invention.
- FIG. 27 is an exemplary illustration of a flowchart of a method of facilitating control of presentations of a content item to a group of users, according to an aspect of the invention.
- FIG. 1 is an exemplary illustration of a system 100 that may enable a time-shifted, on-demand vertical social network for watching, creating, and/or sharing time-shifted commentary tracks synced to any on-demand programming, according to an aspect of the invention.
- system 100 may facilitate the presentation of content items, annotations associated with the content items, or other related items.
- content items may include movies, television episodes, portions or segments of movies or television episodes, video clips, songs, audio books, e-books, or other content items.
- a presentation of a content item may be provided to a user via a content delivery service such as, for example, NETFLIX, HULU, AMAZON INSTANT VIDEO, a cable provider, a local service at a user device programmed to present content items stored locally at an electronic storage of the user device (e.g., a hard drive, a CD, a DVD, etc.), or other content delivery service.
- Presentations of a content item may include reproductions of the content item that are of varying versions (e.g., extended versions, versions with alternative endings or scenes, etc.), reproductions of the content item with auxiliary information (e.g., advertisements, warnings, etc.), or other presentations of the content item.
- annotations may include reviews, comments, ratings, markups, posts, links to other media, or other annotations.
- Annotations may be manually entered by a user for a content item (or a portion thereof), or automatically determined for the content item (or portion thereof) based on interactions of the user with the content item (or portion thereof), interactions of the user with other portions of the content item or other content items, or other parameters.
- Annotations may be manually entered or automatically determined for the content item or the content item portion either before, during, or after a presentation of the content item.
- Annotations may be stored as data or metadata, for example, in association with information indicative of the content item or the content item portion.
- System 100 may include one or more computers and sub-systems to create and maintain a database of annotations corresponding to portions of a content item, provide annotations corresponding to portions of a content item to social networking services, facilitate sharing of portions of a content item, facilitate aggregation of annotations, modify presentations of a content item based on annotations, selectively filter annotations, create datasets of annotations corresponding to portions of a content item, incentivize creation of annotations or datasets of annotations, manage replies or other reactions to annotations, intelligently present user interface elements, facilitate group control of presentations of a content item, or otherwise enhance the experience of users with respect to presentations of content items, annotations, or other related items.
- system 100 may comprise server 102 (or servers 102 ).
- Server 102 may comprise annotation subsystem 106 , content reference subsystem 108 , account subsystem 110 , interaction monitoring subsystem 112 , reward subsystem 114 , content presentation subsystem 116 , or other components.
- System 100 may further comprise a user device 104 (or multiple user devices 104 a - 104 n ).
- User device 104 may comprise any type of mobile terminal, fixed terminal, or other device.
- user device 104 may comprise a desktop computer, a notebook computer, a netbook computer, a tablet computer, a smartphone, a navigation device, an electronic book device, a gaming device, or other user device. Users may, for instance, utilize one or more user devices 104 to interact with server 102 or other components of system 100 .
- user device 104 may comprise user annotation subsystem 118 , user content presentation subsystem 120 , or other components.
- server 102 may initiate storage of an annotation in association with a reference time corresponding to a portion of a content item by providing the annotation, the reference time, and other information (e.g., instructions for storage, other parameters, etc.) to an annotation database.
- user device 104 may initiate storage of an annotation in association with a reference time corresponding to a portion of a content item by providing the annotation, the reference time, and other information to the server for storage at the annotation database.
- Server 102 and/or user device 104 may be communicatively coupled to one or more content delivery services 122 a - 122 n , social networking services 124 a - 124 n , or other services.
- one or more of content delivery services 122 a - 122 n or social networking services 124 a - 124 n may be hosted at server 102 and/or user device 104 .
- server 102 may host a content delivery service 122 to provide users with access to content items or portions of content items.
- server 102 may host a social networking service 124 to offer a social network through which users may interact with one another, other entities of the social network, content on the social network, etc.
- one or more of content delivery services 122 a - 122 n or social networking services 124 a - 124 n may be hosted remotely from server 102 and/or user device 104 .
- the various computers and subsystems illustrated in FIG. 1 may comprise one or more computing devices that are programmed to perform the functions described herein.
- the computing devices may include one or more electronic storages (e.g., electronic storage 126 or other electronic storages), one or more physical processors programmed with one or more computer program instructions, and/or other components.
- the computing devices may include communication lines, or ports to enable the exchange of information with a network or other computing platforms.
- the computing devices may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to the servers.
- the computing devices may be implemented by a cloud of computing platforms operating together as the computing devices.
- the electronic storages may comprise non-transitory storage media that electronically stores information.
- the electronic storage media of the electronic storages may include one or both of system storage that is provided integrally (e.g., substantially non-removable) with the servers or removable storage that is removably connectable to the servers via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
- the electronic storages may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
- the electronic storages may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
- the electronic storage may store software algorithms, information determined by the processors, information received from the servers, information received from client computing platforms, or other information that enables the servers to function as described herein.
- the processors may be programmed to provide information processing capabilities in the servers.
- the processors may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
- the processors may include a plurality of processing units. These processing units may be physically located within the same device, or the processors may represent processing functionality of a plurality of devices operating in coordination.
- the processors may be programmed to execute computer program instructions to perform functions described herein of subsystems 106 , 108 , 110 , 112 , 114 , 116 , 118 , 120 , or other subsystems.
- the processors may be programmed to execute computer program instructions by software; hardware; firmware; some combination of software, hardware, or firmware; and/or other mechanisms for configuring processing capabilities on the processors.
- one or more of subsystems 106 , 108 , 110 , 112 , 114 , 116 , 118 , or 120 may be eliminated, and some or all of its functionality may be provided by other ones of subsystems 106 , 108 , 110 , 112 , 114 , 116 , 118 , or 120 .
- additional subsystems may be programmed to perform some or all of the functionality attributed herein to one of subsystems 106 , 108 , 110 , 112 , 114 , 116 , 118 , or 120 .
- a database of annotations that correspond to portions of a content item may be created and/or maintained.
- the database of annotations may comprise annotations received during presentations of a content item that are provided via at least first and/or second content delivery services.
- An annotation in the database may, for instance, correspond to a time at which a first portion of a content item is presented via the first content delivery service (e.g., NETFLIX), and another annotation in the database may correspond to a time at which a second portion of the content item is presented via the second content delivery service (e.g., HULU).
- the annotations may be stored in the database respectively in association with reference times that correspond to portions of the content item.
- reference times associated with annotations may be utilized to provide the annotations such that the annotations are presented in a time-synchronized fashion with corresponding portions of a content item (e.g., for which the annotations are received) during subsequent presentations of the content item.
- annotations that are submitted by prior users (e.g., prior viewers, listeners, etc.) during prior presentations of the content item may be presented to subsequent users as the subsequent users are experiencing corresponding portions of the content item (e.g., portions that correspond to reference times associated with the annotations).
- reactions of users to portions of the content item may be shared with other users regardless of the time at which the content item is experienced by users that submit the annotations, or regardless of the time at which the content item is experienced by users that are presented with the submitted annotations.
- users that experience a content item after annotations have been submitted by other users can do so without having to worry about prior annotations “spoiling” the user experience.
- reactions of users to portions of a content item may be shared with other users during subsequent presentations of the content item even though the reactions were submitted (e.g., in the form of annotations) during prior presentations of the content item.
- annotations can be received from any number of users in any order.
- any examples as set forth herein are for illustrative purposes only, and not intended to be limiting.
- user interface 202 may present a content item to a user.
- user interface 202 may present annotations (e.g., Annotations 1A, 1B, 2A, 3A, 3B, or other annotations) when portions of the content item that correspond to reference times associated with the annotations are presented.
- a first reference time associated with Annotations 1A and 1B may, for example, be represented by a first position of control element 204 on presentation time bar 206 .
- a second reference time associated with Annotation 2A may be represented by a second position of control element 204 on presentation time bar 206 .
- a third reference time associated with Annotations 3A and 3B may be represented by a third position of control element 204 on presentation time bar 206 .
- Annotation 1A may have been submitted by User X as User X was watching a first portion (Portion A) of a content item that corresponds to a first reference time during a presentation of the content item provided via Content Delivery Service #1 (e.g., NETFLIX).
- Annotation 1B may have been submitted by User X as User X was watching the first portion of the content item (that corresponds to the first reference time) during a presentation of the content item provided via Content Delivery Service #2 (e.g., HULU).
- Annotation 2A may have been submitted by User Y as User Y was watching a second portion (Portion B) of the content item that corresponds to a second reference time during a presentation of the content item provided via Content Delivery Service #3 (e.g., a local service at User Y's user device that presents a DVD version of the content item).
- Annotation 3A may have been submitted by User X as User X was watching a third portion (Portion C) of the content item that corresponds to a third reference time during a presentation of the content item provided via Content Delivery Service #1.
- Annotation 3B may have been submitted by User Y as User Y was watching the third portion of the content item (that corresponds to the third reference time) during a presentation of the content item provided via Content Delivery Service #3.
- each of Annotations 1A, 1B, 2A, 3A, and 3B is presented when the corresponding portion of the content item (e.g., the portion for which the respective annotation was submitted) is presented.
- different content delivery services may provide different presentations of the same content item (e.g., presentations 302 , 304 , 306 , 308 , 310 , or other presentations).
- differences among the presentations may comprise different durations of the presentations, different orders of portions of the content item or auxiliary information within the presentations of the content item, different durations of auxiliary information within the presentations of the content item, different versions of the content item included in the presentations (e.g., extended versions, versions with different endings, etc.), or other differences.
- presentations 302 , 304 , and 306 may differ in duration from one another even though presentations 302 , 304 , and 306 include the same portions of a content item (e.g., the set of content item portions 314 ) due to, for example, formatting or for various other reasons (e.g., inclusion of advertisements, warnings, etc.).
- presentations 304 and 306 may include different sets of auxiliary information 316 and 318 (e.g., advertisements or other auxiliary information), where the auxiliary information of the different sets 316 and 318 appears in different orders within the respective presentations and is of different durations.
- Presentations 308 and 310 may include versions of the content item that differ from the version of the content item in presentation 302 , and that further differ from one another.
- the version of the content item in presentation 308 includes additional portions 320 of the content item that are not in presentations 302 or 310 , while the version of the content item in presentation 310 includes additional portions 322 that are not in presentations 302 or 308 .
- reference times that correspond to portions of a content item may be utilized to present annotations regardless of the differences between the presentations of the content item that were provided to annotating users when the users submitted the annotations.
- the reference times on which presentations of the annotations are based may, for example, comprise a master set of reference times (e.g., reference set 312 ) with which other reference times (associated with different presentations) may be compared to identify a reference time from the master set with which an annotation is to be associated.
- the master set may include master reference times that correspond to portions of a content item where the master reference times are independent of the content delivery service through which a presentation of the content item is provided.
- reference set 312 may represent a master set of reference times associated with a content item.
- master reference time 1 in reference set 312 may correspond to reference time 1 of presentation 302 , reference time 1 of presentation 304 , reference time 3 of presentation 306 , reference time 1 of presentation 308 , and reference time 1 of presentation 310 .
- annotations that are submitted by users at time 1 during presentation 302 , time 1 during presentation 304 , time 3 during presentation 306 , time 1 during presentation 308 , and time 1 during presentation 310 may all be stored in association with master reference time 1 corresponding to a first portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the first portion is presented.
- master reference time 6 may correspond to reference time 6 of presentation 302 , reference time 7 of presentation 304 , reference time 9 of presentation 306 , reference time 8 of presentation 308 , and reference time 8 of presentation 310 .
- Annotations that are submitted by users at time 6 during presentation 302 , time 7 during presentation 304 , time 9 during presentation 306 , time 8 during presentation 308 , and time 8 during presentation 310 may all be stored in association with master reference time 6 corresponding to a second portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the second portion is presented.
- master reference time 11 may correspond to reference time 11 of presentation 302 , reference time 13 of presentation 304 , reference time 15 of presentation 306 , reference time 13 of presentation 308 , and reference time 13 of presentation 310 .
- annotations that are submitted by users at time 11 during presentation 302 , time 13 during presentation 304 , time 15 during presentation 306 , time 13 during presentation 308 , and time 13 during presentation 310 may all be stored in association with master reference time 11 corresponding to a third portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the third portion is presented.
- master reference time 16 may correspond to reference time 16 of presentation 302 , reference time 19 of presentation 304 , reference time 20 of presentation 306 , reference time 18 of presentation 308 , and reference time 18 of presentation 310 .
- annotations that are submitted by users at time 16 during presentation 302 , time 19 during presentation 304 , time 20 during presentation 306 , time 18 during presentation 308 , and time 18 during presentation 310 may all be stored in association with master reference time 16 corresponding to a fourth portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the fourth portion is presented.
- master reference time 21 may correspond to reference time 21 of presentation 302 , reference time 24 of presentation 304 , reference time 25 of presentation 306 , reference time 24 of presentation 308 , and reference time 24 of presentation 310 .
- annotations that are submitted by users at time 21 during presentation 302 , time 24 during presentation 304 , time 25 during presentation 306 , time 24 during presentation 308 , and time 24 during presentation 310 may all be stored in association with master reference time 21 corresponding to a fifth portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the fifth portion is presented.
- master reference set 312 may comprise master reference times that correspond to the additional portions of presentations 308 and 310 .
- annotations that are submitted by users at time 6 during presentation 308 may be stored in association with master reference time 22 (corresponding to the additional portion presented at time 6 during presentation 308 ) so that the annotations may be presented during a subsequent presentation of the content item (if and) when the additional portion is presented.
- annotations that are submitted by users at time 6 during presentation 310 may be stored in association with master reference time 25 (corresponding to the additional portion presented at time 6 during presentation 310 ) so that the annotations may be presented during a subsequent presentation of the content item (if and) when the additional portion is presented.
- a set of annotations submitted for a portion of a content item during prior presentations of the content item may be presented during a subsequent presentation of the content item to a user when the subsequent presentation to the user reaches the reference time corresponding to the portion of the content item for which the set of annotations are submitted.
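The correlation described above can be pictured as a lookup table from (presentation, local reference time) pairs to master reference times; the fragment below reproduces only master reference times 1 and 6 from the example, and the function name is an assumption:

```python
# (presentation id, local reference time) -> master reference time, reproducing
# master reference times 1 and 6 from the example above.
MASTER_TIME = {
    ("302", 1): 1, ("304", 1): 1, ("306", 3): 1, ("308", 1): 1, ("310", 1): 1,
    ("302", 6): 6, ("304", 7): 6, ("306", 9): 6, ("308", 8): 6, ("310", 8): 6,
}


def master_reference_time(presentation_id, local_time):
    """Correlate a local reference time of a particular presentation with the
    master reference time of the corresponding content item portion."""
    return MASTER_TIME.get((presentation_id, local_time))


# An annotation submitted at time 7 during presentation 304 and one submitted at
# time 9 during presentation 306 both resolve to master reference time 6.
print(master_reference_time("304", 7), master_reference_time("306", 9))  # 6 6
```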
- a first presentation of a content item may be utilized as a reference (e.g., as the master reference) for other presentations of the content item. For example, portions of the content item in the first presentation may be mapped to corresponding portions of the content item in a second presentation. The mapping of the first and second presentations may then be utilized to store annotations inputted during the second presentation in association with reference times corresponding to portions of the content item in the first presentation.
- the reference times may be utilized to present the annotations when corresponding portions of the content item are presented during the subsequent presentation by mapping the reference times to portions of the content item in the subsequent presentation.
- audio content recognition of a portion of a movie may be performed in response to a comment submitted by a user when the portion of the movie was presented to the user.
- the result of the audio content recognition (e.g., an audio pattern, a visual pattern, or other result) may then be compared to stored reference patterns associated with reference times corresponding to portions of the movie to identify the portion of the movie and the reference time corresponding to that movie portion.
- the annotation may be stored in association with the reference time so that the reference time may be utilized in the future to present the annotation when the portion of the movie is presented during subsequent presentations of the movie.
- annotation subsystem 106 may be programmed to receive a first annotation that corresponds to a time at which a first portion of a content item is presented via a first content delivery service, and/or receive a second annotation that corresponds to a time at which the first portion of the content item is presented via a second content delivery service.
- the first and second annotations may, for example, be received at annotation subsystem 106 from one or more user devices at which the first and second annotations are inputted by one or more users.
- the presentation via the first content delivery service may correspond to a first presentation that includes the first portion of the content item.
- the presentation via the second content delivery service may correspond to a second presentation that includes the first portion of the content item.
- the first presentation (provided via the first content delivery service) and the second presentation (provided via the second content delivery service) may be the same or different than one another.
- the first presentation may include a first portion of a content item but not a second portion of the content item, while the second presentation may include both the first and the second portions of the content item.
- presentation 302 may not include additional portion 320 , while presentation 308 does include additional portion 320 .
- the first presentation may include the first portion of the content item and first auxiliary information (e.g., first advertisement), and the second presentation may include the first portion of the content item and second auxiliary information (e.g., second advertisement).
- presentation 304 may include a first set of auxiliary information 316 , and presentation 306 may include a second set of auxiliary information 318 .
- annotation subsystem 106 may be programmed to initiate storage of the first annotation in association with a first reference time that corresponds to the first portion of the content item, and/or initiate storage of the second annotation in association with the first reference time.
- annotation subsystem 106 may be programmed to receive a third annotation corresponding to a time at which a second portion of the content item is presented (e.g., via the first content delivery service, the second content delivery service, or a third content delivery service), and initiate storage of the third annotation in association with a second reference time corresponding to the second portion of the content item.
- annotations may be stored in association with other information, such as an identifier of the content item for which the annotation is submitted, identifiers of the sources from which the annotations are received, an identifier of the content delivery service that provided the presentation of the content item during which the annotation is submitted by a user, or other information.
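For illustration only, an annotation record carrying the additional information listed above might be laid out as follows (field names are assumptions, not taken from the specification):

```python
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class AnnotationRecord:
    """Illustrative record layout: an annotation stored together with its
    reference time and the identifying metadata listed above."""
    text: str
    content_item_id: str
    reference_time: float                        # reference time of the portion
    source_id: str                               # user or other source that submitted it
    delivery_service_id: Optional[str] = None    # service that provided the presentation


record = AnnotationRecord(text="Watch the background!",
                          content_item_id="episode-42",
                          reference_time=6.0,
                          source_id="user-X",
                          delivery_service_id="service-1")
print(asdict(record))
```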
- content reference subsystem 108 may be programmed to identify a set of reference times corresponding to portions of the content item.
- Content reference subsystem 108 may be programmed to identify, based on the set of reference times, the first reference time as a reference time for the first annotation, the first reference time as a reference time for the second annotation, and/or the second reference time as a reference time for the third annotation.
- the annotations may be stored in association with the respective reference times and/or other information (e.g., an identifier of the content item, identifiers of the sources from which the annotations are received, an identifier of the content delivery service that provided the presentation of the content item, etc.).
- At least one of the first or second presentations of the content item may be associated with another set of reference times that correspond to portions of the first and/or second presentations.
- content reference subsystem 108 may correlate the identified set of reference times with the other set of reference times to determine a mapping between the reference times of the two different sets of reference times. The mapping may then be utilized to identify the first reference time as a reference time for the first annotation, the first reference time as a reference time for the second annotation, and/or the second reference time as a reference time for the third annotation.
- content reference subsystem 108 may utilize the reference times of presentation 302 as at least part of a master set of reference times corresponding to portions of the content item with which other sets of reference times are mapped.
- annotation subsystem 106 may receive, from user device 104 , an annotation inputted via user device 104 during presentation 304 and information indicating that the annotation is associated with reference time 7 of presentation 304 (e.g., the annotation was inputted at time 7 during presentation 304 , the annotation was inputted for a portion of presentation 304 that corresponds to time 7, etc.).
- Content reference subsystem 108 may then identify reference time 6 of presentation 302 as a reference time for the annotation based on a determination that the annotation is associated with reference time 7 of presentation 304 .
- Annotation subsystem 106 may thereafter store the annotation in association with reference time 6.
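- A minimal sketch of the reference-time mapping described above, assuming a simple lookup table keyed by presentation and presentation-specific time; the function and field names (store_annotation, PRESENTATION_TO_MASTER, etc.) are hypothetical and not part of the disclosed system.

```python
# Hypothetical sketch: map a presentation-specific annotation time to a master
# reference time before storage (table contents mirror the example above, where
# time 7 of presentation 304 corresponds to master reference time 6 of presentation 302).
PRESENTATION_TO_MASTER = {
    ("presentation_304", 7): 6,
    ("presentation_304", 8): 7,
}

annotation_store = []  # stand-in for the annotation database

def store_annotation(text, presentation_id, presentation_time, content_item_id, user_id):
    """Resolve the presentation-specific time to a master reference time and store the annotation."""
    master_time = PRESENTATION_TO_MASTER.get((presentation_id, presentation_time))
    if master_time is None:
        raise ValueError("no master reference time known for this presentation time")
    annotation_store.append({
        "content_item": content_item_id,
        "reference_time": master_time,  # stored against the master reference time
        "text": text,
        "source": user_id,
        "presentation": presentation_id,
    })

# An annotation entered at time 7 of presentation 304 ends up stored at master time 6.
store_annotation("Great scene!", "presentation_304", 7, "content_item_1", "user_x")
print(annotation_store[0]["reference_time"])  # 6
```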
- first and second annotations may be stored in association with a first reference time corresponding to a first portion of a content item.
- a third annotation may be stored in association with a second reference time corresponding to a second portion of the content item.
- annotation subsystem 106 may be programmed to provide, based on the first reference time, the first annotation such that the first annotation is presented when the first portion of the content item is presented during a subsequent presentation of the content item.
- Annotation subsystem 106 may be programmed to provide, based on the first reference time, the second annotation such that the second annotation is presented when the first portion of the content item is presented during the subsequent presentation.
- Annotation subsystem 106 may be programmed to provide, based on the second reference time, the third annotation such that the third annotation is presented when the second portion of the content item is presented during the subsequent presentation of the content item.
- annotation subsystem 106 may be programmed to identify one or more annotations of a first user regarding a content item, for example, in response to an initiation of a presentation of the content item by a second user associated with the first user (e.g., the second user may be a friend of the first user in a social network, a contact of the first user, or associated with the first user in some manner).
- interaction monitoring subsystem 112 may detect the initiation of a presentation of the content item by the second user.
- annotation subsystem 106 may be caused to identify stored annotations of the second user's friends (e.g., including annotations of the first user) in a social network, and transmit the annotations to a user device of the second user for display during the presentation of the content item to the second user.
- interaction monitoring subsystem 112 may be programmed to monitor interactions of users with presentations of a content item, and/or determine a characteristic of the content item based on the interactions.
- Annotation subsystem 106 may be programmed to generate an annotation for the content item based on the characteristic.
- content reference subsystem 108 may be programmed to identify, based on the interactions, a reference time for the annotation.
- Annotation subsystem 106 may be programmed to initiate storage of the annotation in association with the reference time.
- interaction monitoring subsystem 112 may determine that a majority of users that watch Content Item 1 activate the “Share Scene” button 212 to share a particular portion (e.g., Portions A, B, C, or other portions) with other users.
- annotation subsystem 106 may generate the comment (or annotation) “This portion is frequently shared!” and store the comment in association with a reference time corresponding to the frequently shared portion.
- the comment that the portion is frequently shared may encourage the other users to share the portion with their contacts.
- interaction monitoring subsystem 112 may determine that a majority of users skip over a particular scene in a movie when watching the movie.
- annotation subsystem 106 may generate the comment (or annotation) "This scene is often skipped over" and store the comment in association with a reference time corresponding to the frequently skipped portion.
- the comment that the portion is often skipped over may inform the other users that the portion may not be worth watching.
- interaction monitoring subsystem 112 may determine that a significant number of a user's “friends” (or other associated set of users) watched a particular episode within a certain time period (e.g., 10% of the user's friends in a social network watched the episode within the last 24 hours).
- annotation subsystem 106 may generate the comment (or annotation) “This episode has recently been really popular with your friends!” and store the comment in association with the episode.
- the comment that the episode has recently been really popular may be presented to the user before, during, or after the user watches the episode.
- the comment may be presented to the user when the user is deciding what episode to watch, when the user is shown a promotion for the episode, when the user initiates a presentation of the episode, at the end of or after the presentation of the episode, or at other times.
- Other thresholds of viewership may be used (e.g., 20%, 30%, etc.) to trigger such a comment.
- a similar comment may be generated based on total viewership (e.g., for all viewers—not necessarily limited to a subset of users (such as friends)).
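- The following is a rough sketch of how a viewership-threshold comment such as the one above might be generated; the view log format, threshold value, and function name are assumptions made for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical view log entries: (user_id, episode_id, time watched).
VIEW_LOG = [
    ("friend_1", "episode_6", datetime.now() - timedelta(hours=3)),
    ("friend_2", "episode_6", datetime.now() - timedelta(hours=20)),
]

def popularity_annotation(user_friends, episode_id, threshold=0.10, window_hours=24):
    """Generate the popularity comment if enough of the user's friends watched recently."""
    cutoff = datetime.now() - timedelta(hours=window_hours)
    recent_watchers = {
        uid for uid, ep, watched_at in VIEW_LOG
        if ep == episode_id and uid in user_friends and watched_at >= cutoff
    }
    if user_friends and len(recent_watchers) / len(user_friends) >= threshold:
        return "This episode has recently been really popular with your friends!"
    return None

print(popularity_annotation({"friend_1", "friend_2", "friend_3"}, "episode_6"))
```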
- interaction monitoring subsystem 112 may determine that a user has binge-watched every episode of a particular season of a show during a weekend.
- annotation subsystem 106 may generate a comment (or annotation) indicating that the user has binge-watched every episode of the season during a single weekend, and store the comment in association with one or more of the episodes. Subsequently, when the user's friends watch an episode of the season, they may be presented with the comment to encourage them to continue watching the rest of the episodes of the season, or with another comment noting the captivating nature of the show.
- users may be enabled to provide annotations received during presentations of a content item (and/or other information related to the content item) to a plurality of social networking services (e.g., FACEBOOK, TWITTER, etc.).
- user device 104 may be programmed to initiate a presentation of a content item.
- a user may execute an application installed on user device 104 to launch user interface 202 .
- User interface 202 may enable the user to initiate a presentation of a content item on user interface 202 and/or user interface 402 (e.g., using the play feature of the play/pause button 208 ).
- user interface 202 may be used to display and control a presentation of a content item, view annotations corresponding to portions of the content item during the presentation of the content item, input annotations for portions of the content item, or perform other operations.
- user interface 402 may be used to display a presentation of a content item, and user interface 202 may enable a second screen experience for the user.
- user interface 202 may enable the user to control the presentation of the content item (displayed on user interface 402 ), view annotations corresponding to portions of the content item during the presentation of the content item, input annotations for portions of the content item, or perform other operations.
- User interface 202 may, for instance, be a user interface that is displayed on user device 104
- user interface 402 may be a user interface that is displayed on another user device.
- user device 104 may be programmed to receive a first annotation at a time during which a first portion of the content item is presented, initiate storage of the first annotation in association with a first reference time corresponding to the first portion of the content item, and/or provide the first annotation to a first social networking service.
- user device 104 may be programmed to receive a second annotation at a time during which a second portion of the content item is presented, initiate storage of the second annotation in association with a second reference time corresponding to the second portion of the content item, and/or provide the second annotation to a second social networking service.
- user interface 202 may provide an “Add Annotation” button 210 that enables a user to submit an annotation corresponding to a portion of Content Item 1 that is currently being presented to the user (e.g., Portions A, B, C, or other portion).
- user interface 202 may provide the user with an annotation window 404 where the user may enter the user's reaction to (or comment concerning) a portion of the content item and/or select a thumbs-up or thumbs-down (or other “like” or “dislike” indication) rating for the portion of the content item.
- Annotation window 404 may further enable the user to submit the annotation comprising at least one of the textual reaction or the thumbs-up/thumbs-down rating to one or more of Social Networking Services #1, 2, 3, 4, etc.
- the user has submitted via user interface 202 both the textual reaction and a thumbs-up rating to Social Networking Service #3.
- the submission may, for example, cause the textual reaction and the thumbs-up rating to appear on user interface 406 (e.g., the user's page on Social Networking Service #3), along with storage of the textual reaction and the thumbs-up rating in association with a reference time that corresponds to the portion of the content item that was presented to the user when button 210 was activated.
- the user in the above example may submit another annotation at a later time during the presentation of Content Item 1 by activating the “Add Annotation” button 210 at the later time, and submitting an annotation via annotation window 404 .
- the user may, however, choose to submit the later annotation to another social networking service (e.g., Social Networking Services #1, 2, or 4) different from the social networking service to which the earlier annotation was submitted (e.g., Social Networking Service #3).
- the later annotation may appear on a page (or interface) of the chosen social networking service, as well as be stored in association with a reference time that corresponds to the portion of the content item presented at the later time.
- user interface 202 may enable a user to provide an annotation for a content item (and other information related to the content item) to a social networking service (e.g., Social Networking Services #1, 2, 3, 4, etc.).
- an application associated with user interface 202 may provide the annotation along with an identifier of the user, an identifier of the content item for which the annotation is submitted, an identifier of the content delivery service through which the presentation is provided, a reference time corresponding to a portion of the content item for which the annotation is submitted, a link to the portion of the content item, or other information.
- a link to the portion of the content item is provided along with the annotation to Social Networking Service #3
- the link may be posted along with the annotation on the user's page at Social Networking Service #3.
- other users having access to the user's page may utilize the link to jump, via a content delivery service available to them, to the portion of the content item to which the user's annotation relates.
- a user may share access to portions of a content item across a plurality of content delivery services.
- a sharing user may share access to a portion of a content item to a recipient user even when the sharing user and the recipient user do not have access to the same content delivery service (e.g., the sharing user uses NETFLIX while the recipient user uses HULU).
- content presentation subsystem 116 may be programmed to receive, during a first presentation of a content item via a first content delivery service, a request to provide information to enable access to a first portion of the content item.
- Content presentation subsystem 116 may be programmed to associate a first reference time with the first portion of the content item.
- the first reference time may, for example, correspond to a time at which the first portion of the content item is presented via the first content delivery service.
- Content presentation subsystem 116 may be programmed to generate, based on the first reference time, reference information that enables access to the first portion of the content item in a second presentation of the content item via a second content delivery service.
- content presentation subsystem 116 may be programmed to receive the request from a first user device associated with a first user (e.g., user device 104 a ), and/or provide the reference information to a second user device associated with a second user (e.g., user device 104 b ) such that the reference information enables the second user to access the first portion of the content item via the second content delivery service.
- the reference information may be independent of the content delivery service that may be used by the second user to access the first portion of the content item.
- the reference information may, for example, indicate the content item (e.g., content item identifier), the first reference time, the first portion (e.g., scene identifier determined based on the first reference time), or other information.
- the indication of the content item and at least one of the indications of the first reference time or the first portion may be utilized to access the first portion of the content item via the second content delivery service.
- the reference information may be specific to the second content delivery service (e.g., a direct link to the first portion of the content item stored at the second content delivery service or other reference information). For example, a content item identifier of the content item and the first reference time may be processed to determine a presentation-specific start reference time when the first portion of the content item is presented via the second content delivery service. The reference information may then be generated to indicate the content item, the presentation-specific reference time, the second content delivery service, or other information.
- a presentation of Content Item 1 via Content Delivery Service #1 may be displayed on user interface 202 and/or user interface 402 .
- user interface 202 may provide a “Share Scene” button 212 that enables a user to share a portion of Content Item 1 with other users.
- user interface 202 may provide the user with a recipient selection window 502 where the user may select a recipient user from a drop-down menu, or enter a recipient user's email address.
- user device 104 may generate a request to provide the recipient user with information to enable the recipient user to access Portion B of Content Item 1 (e.g., Portion B was playing or presented when button 212 was activated, Portion B corresponds to a start and/or end time manually entered by the user, etc.). Thereafter, user device 104 may transmit the request to the server.
- the request may include an item identifier associated with Content Item 1, a start reference time corresponding to Portion B, an end reference time corresponding to Portion B, a portion identifier associated with Portion B, or other information.
- content presentation subsystem 116 may process the request to generate a link (or other reference information) associated with Portion B.
- the portion link may, for instance, be independent of the content delivery service that the recipient user may utilize to access Portion B.
- an automated message comprising a portion link (e.g., the hyperlink embedded in “CLICK HERE”) is provided to the recipient user to enable the recipient user to access Portion B of Content Item 1 via Content Delivery Service #2.
- the portion link may, for instance, include the link “http://CDSIndepentSite.com/[CI1_ID]/[Master_Ref_Time_Corr_To_Portion_B]” or other link.
- clicking on the portion link may cause the recipient user's device to execute an application associated with Content Delivery Service #2 and begin rendering a presentation of Content Item 1 at a time corresponding to a start time of Portion B.
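- A minimal sketch of building the service-independent portion link mentioned above from a content item identifier and a master reference time; the helper name and path layout are assumptions, with only the example base URL taken from the link shown above.

```python
from urllib.parse import quote

def build_portion_link(content_item_id, master_ref_time,
                       base="http://CDSIndepentSite.com"):
    """Build a portion link that does not name any particular content delivery service."""
    return f"{base}/{quote(str(content_item_id))}/{master_ref_time}"

# e.g., Portion B of Content Item 1 starting at master reference time 6
print(build_portion_link("CI1", 6))  # http://CDSIndepentSite.com/CI1/6
```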
- a selection of a recipient user using user device 104 may be received by user device 104 as a request to provide the recipient user with information to enable the recipient user to access Portion B of Content Item 1.
- User device 104 may then generate a link (or other reference information) to Portion B (e.g., a portion link that is independent of the content delivery service that the recipient user may utilize to access Portion B).
- User device 104 may thereafter transmit the portion link as part of a message (e.g., via email, short message service (SMS), multimedia messaging service (MMS), social networking service, etc.) to the recipient user.
- the message may comprise the portion link along with other information.
- when the recipient user clicks on the portion link, the recipient user's device may execute an application associated with Content Delivery Service #2 and begin rendering a presentation of Content Item 1 at a time corresponding to a start time of Portion B.
- content presentation subsystem 116 may be programmed to receive, during a first presentation of a content item via a first content delivery service, a request to provide information to enable access to a first portion of the content item.
- content reference subsystem 108 may be programmed to identify a set of reference times corresponding to portions of the content item (e.g., a master set of reference times). Content reference subsystem 108 may be programmed to identify which reference time of the set of reference times corresponds to the first portion of the content item. Upon identification of the corresponding reference time (for the first portion), reference information that enables access to the first portion of the content item in a second presentation of the content item via a second content delivery service may be generated based on the corresponding reference time.
- At least one of the first or second presentations of the content item may be associated with another set of reference times that correspond to portions of the first and/or second presentations.
- the identified set of reference times may, for instance, include master reference times that correspond to portions of the content item independently of a content delivery service, while the other set of reference times include reference times that are specific to a presentation of the content item provided via a content delivery service.
- content reference subsystem 108 may correlate the identified set of reference times with the other set of reference times to determine a mapping between the reference times of the two different sets of reference times. The mapping may then be utilized to identify a corresponding master reference time for the first portion of the content item.
- content reference subsystem 108 may utilize the reference times of presentation 302 as at least part of a master set of reference times corresponding to portions of the content item with which other sets of reference times are mapped.
- content presentation subsystem 116 may receive, from user device 104 , a request to share a link to a scene of the content item that corresponds to a start time 7 and an end time 8 of presentation 304 .
- Content reference subsystem 108 may identify reference time 6 of presentation 302 as a start reference time for the scene and reference time 7 of presentation 302 as an end reference time for the scene based on a determination that the scene corresponds to start time 7 and end time 8 of presentation 304 .
- An identifier of the content item, reference time 6 of presentation 302 , and reference time 7 of presentation 302 may be utilized to generate the scene link.
- content presentation subsystem 116 may be programmed to generate reference information that is specific to a content delivery service. For example, upon receipt of a request from a first user to provide reference information to enable a second user to access a first portion of a content item, content presentation subsystem 116 may identify a content delivery service through which access to the first portion of the content item is available to a second user. The content delivery service may, for instance, be identified based on a determination that the second user has an account associated with the content delivery service. Content presentation subsystem 116 may be programmed to generate the reference information based on a first reference time corresponding to the first portion of the content item and the identification of the content delivery service.
- content presentation subsystem 116 may obtain account information associated with the second user that identifies content delivery service(s) with which the second user has account(s). After determining that the second user has an account with a given content delivery service, content presentation subsystem 116 may generate the reference information specifically for the given content delivery service based on a first reference time corresponding to the first portion of the content item.
- a user may access a portion of a content item via a content delivery service based on reference information.
- a portion of a content item may be accessed via a content delivery service based on reference information that is independent of the content delivery service to access the portion of the content item.
- the same reference information may, for example, be utilized to access a portion of a content item via different content delivery services.
- user content presentation subsystem 120 may be programmed to receive reference information related to a first portion of a content item.
- the reference information may be generated based on a user input that occurred during a first presentation of the content item via a first content delivery service (e.g., NETFLIX).
- the user input and/or a time of the user input may, for example, correspond to a presentation-specific reference time at which the first portion of the content item is presented during the first presentation.
- the reference information may then be generated based on the presentation-specific reference time to include information indicating the content item (e.g., content item identifier), the first portion (e.g., scene identifier), the presentation-specific reference time, a master reference time corresponding to the presentation-specific reference time, or other information.
- user content presentation subsystem 120 may be programmed to identify a second content delivery service (e.g., HULU) through which access to the first portion of the content item (in a second presentation of the content item) is available.
- User content presentation subsystem 120 may be programmed to provide, based on the reference information, the first portion of the content item (in the second presentation) via the second content delivery service.
- the reference information may be generated based on input from a first user during the first presentation of the content item.
- User content presentation subsystem 120 may be programmed to identify a second user to which the first portion of the content item (in the second presentation) is to be provided. Based on the identification of the second user, user content presentation subsystem 120 may identify the second content delivery service as a content delivery service through which access to the first portion of the first content item (in the second presentation of the first content item) is available to the second user.
- user content presentation subsystem 120 may identify content delivery service(s) with which the second user has an account. Based on the identified content delivery service(s), user content presentation subsystem 120 may determine which (if any) of the content delivery service(s) provide access to the first portion of the content item. If, for instance, one of the identified content delivery service(s) provides access to the first portion of the content item, then that content delivery service (e.g., the second content delivery service) may be identified as a content delivery service that the second user can use to access the first portion of the content item (e.g., and, thus, available to the second user).
- user content presentation subsystem 120 may identify a second content delivery service (e.g., HULU) through which access to the first portion of the content item (in a second presentation of the content item) is available.
- the first portion of the content item may then be provided via the second content delivery service based on a reference time of the first presentation, a reference time of the second presentation that corresponds to the reference time of the first presentation, or an identifier associated with the first portion of the first content item.
- the reference time of the first presentation, the reference time of the second presentation, or the first-portion identifier may be determined from the reference information and utilized to access the first portion of the content item via the second content delivery service.
- the reference information may include a content item identifier associated with the content item, and a first presentation-specific reference time at which the first portion of the content item is presented during the first presentation.
- user content presentation subsystem 120 may use information indicating the content item to identify a mapping of portions of the first presentation to portions of the second presentation (e.g., the mapping of portions of presentation 302 to portions of presentation 304 in FIG. 3 , the mapping of other portions shown in FIG. 3 , etc.).
- the first presentation-specific reference time and the mapping may then be utilized to identify a second presentation-specific reference time at which the first portion of the content item is presented during the second presentation.
- User content presentation subsystem 120 may execute an application associated with the second content delivery service (e.g., HULU application), and utilize the content item identifier and the second presentation-specific reference time with the application to jump to the first portion of the content item in the second presentation provided via the second content delivery service.
- the reference information may include an identifier associated with the content item and a scene identifier (or other portion identifier) associated with the first portion of the content item.
- user content presentation subsystem 120 may execute an application associated with the second content delivery service (e.g., HULU application).
- User content presentation subsystem 120 may then utilize the content item identifier and the scene identifier with the application to jump to the first portion of the content item in the second presentation provided via the second content delivery service.
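- A rough sketch of the recipient-side resolution described above, in which service-independent reference information is translated into a playback position for the second content delivery service; the mapping table, identifiers, and made-up deep-link scheme are assumptions, not a real service's API.

```python
# Hypothetical mapping from (content item, master reference time) to the start offset
# (in seconds) of the corresponding portion in the second service's presentation.
PORTION_MAP = {
    ("content_item_1", 6): 412,
}

def deep_link_for_second_service(reference_info):
    """Resolve reference information into a fictitious deep link for the second service."""
    key = (reference_info["content_item"], reference_info["master_ref_time"])
    start_seconds = PORTION_MAP[key]
    # The "service2://" scheme is a stand-in for launching the service's application
    # and seeking to the first portion of the content item.
    return f"service2://play?item={reference_info['content_item']}&t={start_seconds}"

print(deep_link_for_second_service({"content_item": "content_item_1", "master_ref_time": 6}))
```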
- annotations may be aggregated to determine an overall experience of a user or a group of users with various presentation aspects (e.g., portions of a content item, the overall content item, individual annotations, a set of annotations, etc.).
- the overall experience may then, for example, be displayed to users during a presentation of the portions of the content item, the overall content item, the individual annotations, the set of annotations, etc.
- a user may be able to see how other users (e.g., the user's friends, the user's family members, the user's co-workers, users within the user's social network, or all system users, etc.) reacted to various presentation aspects as the user is experiencing the presentation aspects.
- annotation subsystem 106 may be programmed to identify annotations associated with one or more parameters, and/or process the identified annotations to determine one or more statistics with respect to various presentation aspects (e.g., the portions of the content item, the overall content item, the individual annotations, the set of annotations, etc.).
- the parameters may, for example, include annotation types, sources (e.g., authors or other sources), annotation set identifiers, social distances, user relationship status, spatial proximity, temporal proximity, or other parameters.
- the parameters may be manually selected by a user, or automatically selected for the user based on configurable system settings.
- annotations may be aggregated based on annotation type.
- numerical ratings associated with portions of a content item may be aggregated for each of the portions of the content item, normalized (e.g., a rating based on a 1-10 rating scale may be converted to a rating based on a 1-5 rating scale), and averaged to produce an average rating for each portion.
- a first portion of the content item may be associated with an average rating of 4.6/5
- a second portion of the content item may be associated with an average rating of 4.2/5
- a third portion of the content item may be associated with an average rating of 4.3/5, and so on.
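- A minimal sketch of the per-portion rating aggregation described above; the record layout is hypothetical, and the scale conversion shown is a simple proportional one chosen for illustration.

```python
from collections import defaultdict

# Hypothetical rating records: each carries the portion it rates and the scale it was given on.
ratings = [
    {"portion": "A", "value": 9, "scale": 10},
    {"portion": "A", "value": 4, "scale": 5},
    {"portion": "B", "value": 8, "scale": 10},
]

def average_ratings(rating_list):
    """Normalize every rating to a 1-5 scale (proportionally) and average per portion."""
    per_portion = defaultdict(list)
    for r in rating_list:
        per_portion[r["portion"]].append(r["value"] * 5 / r["scale"])
    return {portion: round(sum(vals) / len(vals), 1) for portion, vals in per_portion.items()}

print(average_ratings(ratings))  # e.g., {'A': 4.2, 'B': 4.0}
```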
- comments associated with portions of a content item may be aggregated and analyzed to determine a common characteristic associated with each of the portions of the content item.
- a characteristic may, for example, be determined to be a common characteristic based on a determination that terms associated with the characteristic are included in the most number of the aggregated comments, that the terms associated with the characteristic appear the most frequently in the aggregated comments, etc.
- the characteristic “funny” is determined to be the most common characteristic for first, second, and third portions (Portions A, B, and C) of the content item.
- Terms associated with the characteristic “funny” may, for example, include synonyms of “funny” or other related terms.
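- The following sketch illustrates one way a common characteristic could be chosen by counting comments that contain terms associated with each characteristic; the term lists and function name are assumptions for illustration only.

```python
import re
from collections import Counter

# Hypothetical table of characteristics and terms associated with them.
CHARACTERISTIC_TERMS = {
    "funny": {"funny", "hilarious", "laughed", "lol"},
    "scary": {"scary", "terrifying", "creepy"},
}

def common_characteristic(comments):
    """Return the characteristic whose terms appear in the greatest number of comments."""
    counts = Counter()
    for comment in comments:
        words = set(re.findall(r"[a-z']+", comment.lower()))
        for characteristic, terms in CHARACTERISTIC_TERMS.items():
            if words & terms:
                counts[characteristic] += 1
    return counts.most_common(1)[0][0] if counts else None

print(common_characteristic(["So funny!", "I laughed so hard", "A bit creepy"]))  # "funny"
```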
- annotations may be aggregated based on authorship.
- As an example, if a user selects to be presented only with annotations from cast or crew members of a television episode or movie (or other content item) for which the annotations are submitted, then each of the aggregated annotations may be annotations authored by actors, actresses, directors, producers, or other cast or crew members of the television episode or movie.
- annotations may be aggregated based on social distances between authors of the annotations and a user satisfying a specified social distance threshold.
- Each of the aggregated annotations may, as an example, be annotations authored by other users that are at most 2 connections away from the user in a social network (e.g., friends of friends, two degrees away, etc.).
- the social distance threshold may be specified by a user.
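- A minimal sketch of filtering annotations by social distance, assuming the social graph is available as an adjacency map; the graph, names, and breadth-first helper are hypothetical.

```python
from collections import deque

# Hypothetical friendship graph (adjacency sets).
FRIENDS = {
    "me": {"alice", "bob"},
    "alice": {"me", "carol"},
    "bob": {"me"},
    "carol": {"alice"},
}

def within_distance(user, author, max_hops=2):
    """Breadth-first search over the friendship graph up to max_hops connections."""
    seen, frontier = {user}, deque([(user, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node == author:
            return True
        if hops < max_hops:
            for neighbor in FRIENDS.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, hops + 1))
    return False

annotations = [{"author": "carol", "text": "Great scene"}, {"author": "dave", "text": "Meh"}]
visible = [a for a in annotations if within_distance("me", a["author"], max_hops=2)]
print(visible)  # only carol's annotation (a friend of a friend) passes the 2-connection threshold
```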
- annotations may be aggregated based on authors of the annotations having a particular relationship with a user.
- Each of the aggregated annotations may, for example, be annotations authored by other users that are “friends” of the user in a social network (e.g., rather than an “acquaintance,” a “colleague,” etc.).
- a user relationship (of one user with another user) may refer to one or more definitions of how a user knows, knows of, or is connected to the other user. For example, a first user may have a user relationship with a second user based on the first user being a “friend,” “co-worker,” “family relative,” etc., of the second user.
- a first user may have a user relationship with a second user based on the first user “following” the social media posts of the second user, the first user being a “fan” of the second user, etc.
- the user relationships may be user-defined, or automatically defined.
- annotations may be aggregated based on authors of the annotations being associated with a location that is a threshold distance away from a user location.
- Each of the aggregated annotations may, for example, be annotations authored by other users that are currently within a particular distance from the current location of the user, that live within a particular distance from the user's residence, etc.
- annotations may be aggregated based on the annotations being submitted within a particular time period.
- each of the aggregated annotations may be annotations submitted during a time period when the content item was the most popular.
- Comments (or annotations) that are aggregated may be limited to comments provided when a television episode originally aired (e.g., to exclude comments submitted during re-runs), comments provided during a time period associated with a season (e.g., during a given season of a television series when the episode first aired), comments provided during a specified date range, etc.
- annotation subsystem 106 may be programmed to provide statistics associated with aggregated annotations. For example, the statistics may be presented to a user during a presentation of a content item to the user. In one scenario, as shown in FIG. 6C , statistics, such as an average rating, a common characteristic, etc., may be presented to a user during a presentation of a content item to the user in the form of text.
- statistics may be graphically presented to a user during a presentation of a content item.
- the statistics may be presented in the form of a line graph, a heat map, or other graphical representation.
- the line graph on time bar 602 , a heat map, or other graphical representation may, for instance, depict a degree of a characteristic corresponding to portions of the content item (e.g., a high point on the line graph may indicate a very funny portion, a low point on the graph may indicate a non-funny portion, a hot color on the heat map may indicate a very popular portion, a cold color on the map may indicate an unpopular portion, etc.).
- statistics (or other information) associated with aggregated annotations may be provided to various third party entities in exchange for compensation or other reasons.
- statistics regarding viewership of a television show (or other content item) or portions thereof may be provided to NIELSEN or other entity.
- portions of a content item may be presented based on annotations for the content item.
- a presentation of a content item may be based on annotations corresponding to portions of the content item and preferences of a user (e.g., selected by the user, inferred for the user, etc.) related to the content item portions and/or the annotations.
- portions of a content item may be removed from a presentation of the content item based on annotations for the portions indicating that the portions do not satisfy conditions related to the user's preferences (e.g., a user preference may indicate an aversion to violence, nudity, profanity, adult themes, etc.).
- playback of a first set of portions of a content item may be skipped, fast-forwarded, censored, blurred, decreased in volume, or otherwise adjusted during a presentation of the content item based on annotations for the portions of the first set indicating that the portions of the first set do not satisfy conditions related to the user's preferences.
- Playback of a second set of portions of the content item may be enhanced or occur normally during the presentation of the content item based on annotations for the portions of the second set indicating that the portions of the second set satisfy conditions related to the user's preferences.
- annotations for a content item may be utilized to enable a user (or other entity) to control or modify a presentation of the content item.
- Parents may, for example, censor their children from portions of a content item that are indicated by corresponding annotations as indecent or otherwise not for children, users may set their preferences to skip portions of a content item that are indicated by corresponding annotations as having an undesirable characteristic (e.g., boring, romantic, gruesome, or other characteristics that a user may deem undesirable), etc.
- an undesirable characteristic e.g., boring, romantic, gruesome, or other characteristics that a user may deem undesirable
- annotations corresponding to portions of Content Item 1 may be automatically obtained when a user initiates a presentation of Content Item 1 (e.g., detection of the user's request to play Content Item 1 may trigger a request for the annotations).
- Playback of scenes of Content Item 1 may be skipped, fast-forwarded, or sped up if the scenes are associated with an average rating of less than 4/5.
- playback of scenes of Content Item 1 may be skipped, fast-forwarded, or sped up if the scenes are not deemed as funny by at least a threshold number (e.g., fixed number or percentage) of users that submit comments for the scenes.
- Users may, for example, indicate desired ratings (e.g., only 4/5 or higher), threshold numbers, or other parameters via user-configurable settings. Other modifications related to the presentation may of course be implemented.
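- A rough sketch of preference-driven playback adjustment along the lines described above; the per-scene statistics, thresholds, and the "skip" action are assumptions configured for illustration.

```python
# Hypothetical per-scene statistics derived from aggregated annotations.
scenes = [
    {"id": "scene_1", "avg_rating": 4.6, "funny_votes": 40},
    {"id": "scene_2", "avg_rating": 3.7, "funny_votes": 3},
]

# Hypothetical user-configurable preferences.
preferences = {"min_rating": 4.0, "min_funny_votes": 10, "action": "skip"}

def playback_plan(scene_list, prefs):
    """Decide, scene by scene, whether to play normally or apply the configured adjustment."""
    plan = []
    for scene in scene_list:
        keep = (scene["avg_rating"] >= prefs["min_rating"]
                or scene["funny_votes"] >= prefs["min_funny_votes"])
        plan.append((scene["id"], "play" if keep else prefs["action"]))
    return plan

print(playback_plan(scenes, preferences))  # [('scene_1', 'play'), ('scene_2', 'skip')]
```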
- when a user initiates a presentation of a content item, the user may be presented with a set of tracks (associated with the content item) from which to select. Upon selection of a track, the content item may be presented in accordance with annotations of the selected track.
- a selection of a track by a user may trigger a presentation of a content item (to which annotations of the track corresponds) to be initiated. Upon initiation, the content item may be presented in accordance with annotations of the selected track.
- annotations may be selectively presented to a user based on one or more parameters.
- the parameters may, for instance, include annotation types, sources (e.g., authors or other sources), annotation set identifiers, social distances, user relationship status, spatial proximity, temporal proximity, or other parameters.
- the parameters may be manually selected by a user, or automatically selected.
- annotation subsystem 106 may be programmed to provide annotations to user device 104 based on one or more parameters associated with a user.
- user device 104 may selectively present annotations (e.g., from annotation subsystem 106 or other component) to a user during a presentation of a content item to the user based on one or more parameters associated with the user.
- a user may specify that he/she only desires to be presented with numerical ratings (e.g., out of 5 stars, on a 1-10 scale, etc., as opposed to comments, likes/dislikes, etc.). As such, the user may only be provided with numerical ratings. It should be appreciated that the foregoing values, ranges, etc., are exemplary in nature, and should not be viewed as limiting.
- particular authors of annotations may be selected for a user based on historical information associated with the user.
- Selected authors may, for instance, be chosen based on a determination that the authors are similar to authors that the user likes (e.g., the selected authors and the authors liked by the user have similar preferences for content items, annotations from the selected authors are similar in character to annotations from the authors liked by the user, etc.).
- the user may only be provided with annotations from the selected authors during presentation of a content item.
- a user may specify a social distance threshold (e.g., a number of connections away from the user) that authors of annotations must fall within in order for their annotations to be presented to the user.
- annotation “tracks” or other annotation datasets may be created.
- annotation datasets may each enable access to annotations from one or more sources, annotations that correspond to presentations from one or more content delivery services, annotations that are provided to one or more social networking services, or other annotations.
- Annotation datasets may, for example, enable annotations corresponding to portions of a content item to be presented when the portions of the content item are presented during a presentation of the content item.
- an annotation dataset may include information indicating reference times for annotations to enable the annotations to be presented when the corresponding portions of the content item are presented during the presentation of the content item.
- annotation tracks or other annotation datasets may enable annotations to be packaged and shared as a collection of annotations among users.
- the creation of annotation tracks may facilitate the creation of an “author ecosystem” where, for example, users may gain a following or become “trendsetters” based on their tracks.
- annotation tracks may be provided to one or more third party entities in exchange for compensation or other reasons.
- a network may want to re-broadcast a movie (or other content item) with a track of annotations provided by any one or more of the movie's director, actors, or other “insiders” or individuals associated with production of the movie. Other examples may be implemented.
- Annotation tracks may include tracks that are only accessible by a single user (e.g., the user that created the track, a user designated to access the track, etc.), tracks that are only accessible to a group of users (e.g., a user's friends as specified by the user that created the track), tracks that are publicly available to all users, etc.
- privacy settings of a user's account may dictate by default how tracks created by the user are shared.
- Annotation tracks may, for example, be created when a user selects or approves annotations to be included in a track, or may be automatically created when the user enters annotations for a content item.
- Tracks associated with a user may, for instance, be created when the user inputs annotations for a movie or television episode for the first time, and/or updated when the user subsequently inputs annotations while re-watching the movie or television episode.
- tracks may be created automatically when a service selects annotations to be included in a track based on one or more parameters.
- tracks may be created and stored in a database that is searchable by users.
- tracks may be created on the fly for presentation to a user in response to a track request from the user (e.g., play the movie with a track having the highest rated comments, play the episode with a track having comments that my friends posted since yesterday, etc.).
- Tracks may, for example, comprise static tracks or dynamic tracks.
- a set of annotations that are available via a static track may, for instance, remain the same over time, unless the static track is modified by a user or a service (e.g., a user may be able to add/remove annotations to/from a static track).
- a set of annotations that are available via a dynamic track may change over time without modification by a user or service.
- the playing of a dynamic track during a presentation of an associated content item may cause annotations to be streamed and presented to a user such that the annotations during a first presentation of the track differ from those during a second presentation of the track.
- a track that is generated to present the most recent comments (e.g., within the last 7 days, within the last 24 hours, within the last hour, etc.) submitted by a user's friends for a particular episode may include different comments each time the episode is played.
- the track may, for instance, include a query that searches a database for the most recent comments authored by the user's friends for the episode each time playback of the episode is initiated.
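- The dynamic-track behavior above can be pictured as a stored query that is re-run at playback time, as in the sketch below; the annotation records, field names, and seven-day window are assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical annotation records in the database.
ANNOTATION_DB = [
    {"episode": "ep_6", "author": "alice", "text": "Classic!",
     "submitted": datetime.now() - timedelta(days=2)},
    {"episode": "ep_6", "author": "dave", "text": "Old take",
     "submitted": datetime.now() - timedelta(days=30)},
]

def dynamic_track(episode_id, friends, max_age_days=7):
    """Return the friend comments for the episode submitted within the recency window."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    return [a for a in ANNOTATION_DB
            if a["episode"] == episode_id and a["author"] in friends and a["submitted"] >= cutoff]

# Re-evaluated each time playback of the episode is initiated, so the presented
# annotations can differ between viewings of the same track.
print(dynamic_track("ep_6", friends={"alice", "bob"}))
```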
- annotation subsystem 106 may be programmed to generate a dataset that enables access to a first annotation corresponding to a first portion of a content item, information indicating a first source of the first annotation (e.g., a user or other source from which the first annotation is received), information indicating a content item with which the first annotation is associated, information indicating a first reference time that corresponds to the first portion of the content item, or other information.
- the generated dataset may further enable access to a second annotation corresponding to a second portion of the content item, information indicating the first source as a source of the second annotation, information indicating the content item, information indicating a second reference time that corresponds to the second portion of the content item, or other information.
- the generated dataset may further enable access to a third annotation corresponding to a third portion of the content item, information indicating a second source of the third annotation, information indicating the content item, information indicating a third reference time that corresponds to the third portion of the content item, or other information.
- the STAR track may enable access to Annotations 1A, 1B, 2A, 3A, and 3B.
- the STAR track may also enable access to information which indicates that the annotations are associated with Content Item 1, User X is a source of Annotations 1A, 1B, and 3A, and User Y is a source of Annotations 2A and 3B.
- the STAR track may further enable access to information which indicates that Annotations 1A and 1B are associated with a first reference time (represented by a first position of control element 204 ), Annotation 2A is associated with a second reference time (represented by a second position of control element 204 ), and Annotations 3A and 3B are associated with a third reference time (represented by a third position of control element 204 ).
- the STAR track enables Annotations 1A and 1B to be presented when Portion A is presented during the presentation of Content Item 1 (e.g., based on the first reference time corresponding to Portion A), Annotation 2A to be presented when Portion B is presented during the presentation of Content Item 1 (e.g., based on the second reference time corresponding to Portion B), and Annotations 3A and 3B to be presented when Portion C is presented during the presentation of Content Item 1 (e.g., based on the third reference time corresponding to Portion C).
- Table 1 below is an exemplary depiction of information included in the STAR track.
- the STAR track may include annotation identifiers that can be used to obtain the associated annotations from a database when the STAR track is played.
- Table 2 below is another exemplary depiction of information included in the STAR track. As shown in Table 2, the STAR track may include the content of the annotations.
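- Since Tables 1 and 2 are not reproduced here, the sketch below shows a hypothetical record layout consistent with the STAR track description above (annotation identifiers, sources, and a reference time per entry); the field names and numeric reference times are assumptions.

```python
# Hypothetical in-memory representation of a track like the STAR track described above.
star_track = {
    "track_id": "STAR",
    "content_item": "content_item_1",
    "entries": [
        {"annotation_id": "1A", "source": "user_x", "reference_time": 1},
        {"annotation_id": "1B", "source": "user_x", "reference_time": 1},
        {"annotation_id": "2A", "source": "user_y", "reference_time": 2},
        {"annotation_id": "3A", "source": "user_x", "reference_time": 3},
        {"annotation_id": "3B", "source": "user_y", "reference_time": 3},
    ],
}

def annotations_at(track, reference_time):
    """Look up which annotations to present when playback reaches a given reference time."""
    return [e["annotation_id"] for e in track["entries"] if e["reference_time"] == reference_time]

print(annotations_at(star_track, 3))  # ['3A', '3B']
```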
- annotation subsystem 106 may be programmed to receive a request to generate a track.
- the request to generate the track may include information that indicates annotations for inclusion in the track.
- the request may indicate annotations that are selected by a user for inclusion in the track.
- the track may be generated to enable access to the selected annotations along with other information that enables the selected annotations to be presented when corresponding portions of a content item are presented during a presentation of the content item.
- the request may indicate a content item for which the track is targeted, a source indicating the origin of the annotations (e.g., an author of the annotations or other source), or other parameters.
- annotations associated with the content item and the source may be obtained to generate the track.
- a user may submit a request to generate a track that includes annotations associated with a content item that are authored by a particular person, whether it be an individual associated with the production of the content item (e.g., an actor, a director, a producer, etc.), or a viewer or consumer of the content item, such as a member of the user's social group (e.g., the user's friends, the user's colleagues, etc.).
- User X may be an actor that stars in the content item, while User Y may be a member of the user's social group.
- Annotations 1A, 1B, 2A, 3A, 3B, and other annotations may be obtained to generate the STAR track.
- annotation subsystem 106 may be programmed to receive, from a user, a request to search for a track.
- the request may, for example, include a query that comprises keywords or other parameters (e.g., annotation types, sources, social distances, user relationship status, spatial proximity, temporal proximity, etc.).
- Annotation subsystem 106 may be programmed to process the request to identify a first track in a database based on the keywords or other parameters, after which the first track may be provided to the user.
- a user may submit a query for tracks by entering the question “What tracks are available for season #1, episode #6, of Family Guy?”
- annotation subsystem 106 may process the query, identify tracks for season #1, episode #6, of Family Guy in a database, and provide the identified tracks to the user.
- queries may include queries related to inputs, such as “Show me directors' or actors' tracks for Movie X,” “Show me the highest rated track for Television Show Y,” “Show me tracks by Famous Person Z,” “Show me tracks for Movie X that are rated PG-13,” “Show me tracks by members of my social group,” or other inputs. Any number and type of queries may be used.
- ratings, feedback, or classification of tracks may be facilitated.
- annotation subsystem 106 may be programmed to enable users to rate tracks or provide other feedback about the tracks.
- ratings or other feedback provided by users regarding a track may, for instance, be aggregated to determine an overall rating for the track (e.g., average rating, total number of “likes,” etc.) or other statistical information regarding the track (e.g., commonly characterized as “funny” and “interesting,” highly enjoyed by women between 18-30, a favorite among a user's friends, etc.).
- the STAR track may be associated with an average rating of 4.6/5.
- the average rating may, for instance, be an average of all the ratings given to the STAR track by users in general or by a particular set of users (e.g., a user's friends, users with a certain level of status, etc.). It should be appreciated that the foregoing values, ranges, etc., are exemplary in nature, and should not be viewed as limiting.
- annotation subsystem 106 may be programmed to infer ratings or other feedback for tracks.
- a track may be characterized as “popular” based on a determination that the track has been downloaded/streamed by users a threshold number of times, or that the track has been downloaded/streamed more times than a majority of other tracks.
- a track may be characterized as "cheerful" based on an analysis of the content of the annotations in the track indicating that many of the annotations include cheerful messages. Other characterizations (e.g., positive, negative, angry, etc.) may of course be utilized without limitation.
- characteristics may be inferred for a track based on reactions associated with ratings or feedback of the track.
- interaction monitoring subsystem 112 may be programmed to identify a reaction associated with a rating or feedback of a track.
- Annotation subsystem 106 may be programmed to determine a characteristic for the track based on the reaction, and/or associate the characteristic with the track.
- users may submit a rating for each of the annotations of a track (e.g., thumbs-up/thumbs-down, like/dislike, etc.).
- if a threshold number (e.g., a fixed number or percentage) of ratings is satisfied, and the number of positive ratings is greater than the number of negative ratings, the STAR track may be associated with the characteristic of "more positive than not."
- if the threshold numbers are satisfied, and the number of positive ratings is 101% to 300% greater than the number of negative ratings, the STAR track may be associated with the characteristic of "well-liked." If the threshold numbers are satisfied, and the number of positive ratings is over 300% greater than the number of negative ratings, the STAR track may be associated with the characteristic of "superb."
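- A minimal sketch of the ratio-based characterization just described; the minimum-rating threshold is an assumed, configurable value, and the function name is hypothetical.

```python
def characterize_track(positive, negative, min_total=50):
    """Return the strongest characteristic supported by the positive/negative rating counts."""
    if positive + negative < min_total or positive <= negative:
        return None  # rating threshold not met, or not more positive than negative
    excess_pct = (positive - negative) / negative * 100 if negative else float("inf")
    if excess_pct > 300:
        return "superb"
    if excess_pct > 100:
        return "well-liked"
    return "more positive than not"

print(characterize_track(positive=180, negative=60))  # positives exceed negatives by 200% -> "well-liked"
```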
- users may reply to annotations of a track during presentation of a content item and the track.
- Each of the replies to an annotation may be analyzed to determine one or more characteristics associated with the annotation. Based on the annotation characteristics, one or more characteristics may be determined for (and associated with) the overall track.
- an annotation in a track may be characterized as "funny" when a reply to the annotation includes terms associated with the characteristic "funny."
- the track may be characterized as “funny” when a threshold number of the annotations in the track are characterized as “funny.”
- account subsystem 110 may be programmed to enable users to rate or provide other feedback about one another, and/or infer ratings or other feedback for a user.
- account subsystem 110 may enable users to submit ratings regarding other users. The ratings regarding a user may, for instance, be aggregated to determine statistics for the user (e.g., an average rating of the user, a number of likes vs. dislikes, etc.).
- account subsystem 110 may infer ratings or other feedback about a user based on ratings or other feedback that other users submitted for annotations created by the user, tracks created by the user, etc.
- a database of annotations may be generated by incentivizing users to create annotations.
- users may be provided with rewards for the creation of annotations, interactions with the annotations, creating annotations that enable transactions via the annotations, or other reasons.
- users may be encouraged to create annotations that include quality feedback for content items with which others will positively interact, annotations that enable transactions to facilitate revenue earnings, or annotations that offer other benefits.
- annotation subsystem 106 may be programmed to receive an annotation by a user.
- the annotation may, for instance, correspond to a time at which a portion of a content item is presented.
- Account subsystem 110 may be programmed to associate the annotation with a user account (associated with the user).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the receipt of the annotation.
- a user may be compensated when the user creates annotations (e.g., 1 cent for every 20 annotations created, 1 point for each annotation created, etc.), when other users interact with the annotations created by the user, or when other conditions for rewards are satisfied. It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting.
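- as a rough sketch of how such creation-based crediting might be tallied (using the example rates of 1 point per annotation and 1 cent for every 20 annotations created), the following fragment is illustrative only; the UserAccount record and its field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class UserAccount:
    """Hypothetical account record used only for this sketch."""
    annotations_created: int = 0
    points: int = 0
    cents: int = 0


def credit_creation_reward(account, new_annotations):
    """Credit the example rates: 1 point per annotation created and
    1 cent for every 20 annotations created (cumulative)."""
    before = account.annotations_created
    account.annotations_created += new_annotations
    account.points += new_annotations
    account.cents += account.annotations_created // 20 - before // 20


account = UserAccount()
credit_creation_reward(account, 25)
print(account.points, account.cents)  # 25 points, 1 cent
```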
- interaction monitoring subsystem 112 may be programmed to monitor interactions with an annotation associated with a user account (e.g., interactions by a user of the user account, interaction by other users, etc.).
- the monitored interactions may, for example, include access of the annotation (e.g., viewing the annotation, listening to the annotation, etc.) during a presentation of an associated content item, reactions by users to the annotation (e.g., rating the annotation, replying to the annotation, etc.), execution of transactions enabled via the annotation, or other interactions.
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the interactions.
- interaction monitoring subsystem 112 may be programmed to identify requests by one or more users for an annotation associated with a user account (e.g., requests by other users for the annotation).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the requests.
- the requests may, for example, include requests to be exposed to the annotation, to include the annotation in an annotation track, or other requests.
- an authoring user of comments may be rewarded based on the exposure of the comments to other users (e.g., 1 cent for every 100 comment views, 1 point for each comment view, etc.).
- a threshold number of comment views may need to be satisfied before the authoring user may begin to be compensated.
- the authoring user may be provided with a first type of reward (e.g., points that cannot be exchanged for real world money) until the authoring user obtains a particular status (e.g., Silver status, Gold status, etc.) that is achieved when a threshold number of comment views is satisfied.
- upon obtaining the particular status, the authoring user may be provided with a second type of reward (e.g., real world money, points that can be exchanged for real world money, etc.).
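- a minimal sketch of the two-tier scheme above, assuming a hypothetical status threshold of 1,000 comment views and the example rate of 1 cent per 100 views; the numbers and the function name are placeholders, not prescribed values.

```python
def comment_view_reward(total_views, status_threshold=1_000):
    """Two-tier reward sketch: points only below the status threshold;
    points plus real-world cents (1 cent per 100 views) once, say,
    Silver status has been reached."""
    points = total_views  # e.g., 1 point per comment view
    if total_views < status_threshold:
        return {"status": "none", "points": points, "cents": 0}
    return {"status": "Silver", "points": points, "cents": total_views // 100}


print(comment_view_reward(250))    # below threshold: points only
print(comment_view_reward(2_500))  # Silver status: points plus cents
```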
- interaction monitoring subsystem 112 may be programmed to identify reactions of one or more users to a comment associated with a user account (e.g., reactions of other users to the comment).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the reactions.
- the reactions may, for example, include a rating of the comment, a reply to the comment, or other reactions to the comment.
- an authoring user of comments may be rewarded based on ratings given to the comments by other users (e.g., $1 for every 10 four-star (or higher) ratings given to the comments, 1 point for each “like” given to the comments, etc.).
- an authoring user of comments may be rewarded based on replies to the comments by other users (e.g., $1 for every 10 replies, 1 point for each reply, etc.). It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting.
- interaction monitoring subsystem 112 may be programmed to identify an exposure of a promotion related to a product or service to one or more users via a comment associated with a user account (e.g., viewing a product/service promotion via the comment, listening to a product/service promotion via the comment, etc.).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the exposure.
- the promotion may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds.
- an authoring user of comments may be rewarded for including, in a comment, a reference to a product or service that is depicted in a portion of a television episode (corresponding to the comment); for instance, the authoring user may be compensated when the reference to the product or service is exposed to other users.
- User X may be compensated for including in Annotation 1A a reference to a jacket that is depicted in Portion A of Content Item 1 when the reference is exposed to other users.
- User Y may be compensated for including in Annotation 2A a reference to a Brand X dress that is depicted in Portion B of Content Item 1 when the reference is exposed to other users.
- interaction monitoring subsystem 112 may be programmed to identify use of a mechanism (via a comment associated with a user account) that enables execution of a transaction related to a product or service (e.g., accessing a shopping site via a link in the comment).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the use of the mechanism.
- the transaction may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds.
- an authoring user of a comment may be rewarded based on execution of a mechanism in the comment that enables execution of a transaction. For example, with respect to FIG.
- User X may be compensated for including in Annotation 1A a link to a shopping site through which a jacket depicted in Portion A of Content Item 1 may be purchased when the link is clicked (or otherwise executed).
- User Y may be compensated for including in Annotation 2A a link to a product page of a shopping site through which a Brand X dress depicted in Portion B of Content Item 1 may be purchased when the link is clicked (or otherwise executed).
- interaction monitoring subsystem 112 may be programmed to identify an execution of a transaction related to a product or service that is enabled via a comment associated with a user account (e.g., purchasing of a product, a user sign-up with a service, etc.).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the execution of the transaction.
- the transaction may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds.
- an authoring user of a comment may be rewarded based on execution of transactions that are enabled via the comment. For example, with respect to FIG.
- User X may be compensated for including in Annotation 1A a reference (e.g., a link) to a shopping site through which a jacket depicted in Portion A of Content Item 1 may be purchased when the jacket is purchased by other users using the reference to the shopping site.
- User Y may be compensated for including in Annotation 2A a reference to a product page of a shopping site through which a Brand X dress depicted in Portion B of Content Item 1 may be purchased when the dress is purchased by other users using the reference to the product page.
- annotation subsystem 106 may be programmed to identify a reference associated with a product or service in a comment associated with a user account.
- Annotation subsystem 106 may be programmed to provide a mechanism in the comment to enable a transaction related to the product or service.
- the reference may, for example, include a product/service identifier, a product/service type identifier, a link to a website through which the transaction related to the product or service may be executed, or other reference.
- User X may include in Annotation 1A a hyperlink to a shopping site through which a jacket that is depicted in Portion A of Content Item 1 may be purchased.
- annotation subsystem 106 may modify the hyperlink to include an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114 ).
- an account associated with reward subsystem 114 (or the entity associated with reward subsystem 114 ) may be provided with a portion of the revenue from the purchase of the jacket.
- Reward subsystem 114 may detect that the jacket purchase was made through the modified hyperlink, and compensate User X for including the original hyperlink to the shopping site in Annotation 1A.
- User Y may include the term “Brand X dress” in Annotation 2A without a link to a shopping site through which the Brand X dress depicted in Portion B of Content Item 1 may be purchased. Nevertheless, upon identification of the term “Brand X dress” and that a dress is depicted in Portion B of Content Item 1, annotation subsystem 106 may add a hyperlink, including an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114 ), for the dress's product page on the shopping site to Annotation 2A.
- an account associated with reward subsystem 114 may be provided with a portion of the revenue from the purchase of the dress.
- Reward subsystem 114 may detect that the dress purchase was made through the hyperlink, and compensate User Y for including the term “Brand X dress” in Annotation 2A.
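- the affiliate-code flow described above may be pictured with the following illustrative fragment; the query-parameter name, the affiliate identifier, and the commission amount are assumptions, and an actual reward subsystem 114 could attribute purchases differently.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

AFFILIATE_ID = "rewardsub-114"  # placeholder affiliate code


def add_affiliate_code(url):
    """Append the (hypothetical) affiliate code to a product link found in,
    or added to, an annotation."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["aff"] = AFFILIATE_ID
    return urlunparse(parts._replace(query=urlencode(query)))


def credit_purchase(purchase_url, author_account, commission_cents):
    """Compensate the annotation's author when a purchase is detected to
    have come through a link carrying the affiliate code."""
    if AFFILIATE_ID in purchase_url:
        author_account["cents"] = author_account.get("cents", 0) + commission_cents


link = add_affiliate_code("https://shop.example.com/jacket?id=42")
user_x = {}
credit_purchase(link, user_x, commission_cents=50)
print(link)    # ...?id=42&aff=rewardsub-114
print(user_x)  # {'cents': 50}
```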
- the generation of a database of annotation datasets (or tracks) may be facilitated by incentivizing users to create tracks.
- users may be provided rewards for creation of tracks, interactions with the tracks, enabling of transactions via the tracks, or for other reasons.
- users may be encouraged to create tracks that include quality annotations, tracks that enable transactions to facilitate revenue earnings, or tracks that offer other benefits.
- account subsystem 110 may be programmed to associate a track created by a user with a user account associated with the user.
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the creation of the track.
- the track may, for example, enable access to comments corresponding to portions of a content item, information that allows the comments to be presented when the corresponding portions are presented during a presentation of the content item, or other information.
- interaction monitoring subsystem 112 may be programmed to monitor interactions with a track associated with a user account (e.g., interactions by a user of the user account, interaction by other users, etc.).
- the monitored interactions may, for example, include access of the track (e.g., downloading the track, viewing the comments in the track, listening to the comments in the track, etc.), reactions by users to the track (e.g., rating the track, rating comments of the track, replying to comments in the track, etc.), execution of transactions enabled via the track, or other interactions.
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the interactions.
- interaction monitoring subsystem 112 may be programmed to identify requests by one or more users for a track associated with a user account (e.g., requests by other users for the track).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the requests.
- the requests may, for example, include requests to access the track.
- a creating user of tracks may be rewarded based on requests by other users to access the tracks (e.g., 1 cent for each track access, 1 point for each track access, etc.).
- a threshold number of track accesses may need to be satisfied before the creating user may begin to be compensated.
- the creating user may be provided with a first type of reward (e.g., points that cannot be exchanged for real world money) until the creating user obtains a particular status (e.g., Silver status, Gold status, etc.) that is achieved when a threshold number of track accesses is satisfied.
- upon obtaining the particular status, the creating user may be provided with a second type of reward (e.g., real world money, points that can be exchanged for real world money, etc.).
- interaction monitoring subsystem 112 may be programmed to identify reactions of one or more users to a track associated with a user account (e.g., reactions of other users to the track).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the reactions.
- the reactions may, for example, include a rating of the track, ratings of comments of the track, a reply to a comment of the track, or other reactions to the track.
- a creating user of a track may be rewarded based on ratings given to the track (e.g., $1 for every 10 four-star (or higher) ratings given to the track, 1 point for each “like” given to the track, etc.).
- a creating user of a track may be rewarded based on ratings given to comments of the track by other users (e.g., 10 cents for every 10 four-star (or higher) ratings given to the comments, 1 point for every 10 “likes” given to the comments, etc.).
- a creating user of a track may be rewarded based on replies to comments of the track by other users (e.g., 1 cent for each reply, 1 point for each reply, etc.). It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting.
- interaction monitoring subsystem 112 may be programmed to identify an exposure of a promotion related to a product or service to one or more users via a track associated with a user account (e.g., viewing a product/service promotion via the track, listening to a product/service promotion via the track, etc.).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the exposure.
- the promotion may, for example, relate to a product or service that appears in a portion of a content item to which a comment of the track corresponds.
- a creating user of a track may be rewarded for including, in the track, a comment having a reference to a product or service that is depicted in a portion of a television episode (corresponding to the comment); for instance, the creating user may be compensated when the reference to the product or service is exposed to other users.
- a creating user of the STAR track (e.g., a user that generated the STAR track) may be compensated for including Annotation 2A in the STAR track when a reference to a Brand X dress that is depicted in Portion B of Content Item 1 is exposed to other users.
- interaction monitoring subsystem 112 may be programmed to identify use of a mechanism (via a comment in a track associated with a user account) that enables execution of a transaction related to a product or service (e.g., accessing a shopping site via a link in the comment).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the use of the mechanism.
- the transaction may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds.
- a creating user of a track may be rewarded based on execution of a mechanism in a comment of the track that enables execution of a transaction. For example, with respect to FIG.
- a creating user of the STAR track may be compensated for including Annotation 1A in the STAR track when a link to a shopping site (through which a jacket depicted in Portion A of Content Item 1 may be purchased) is clicked (or otherwise executed).
- the creating user may be compensated for including Annotation 2A in the STAR track when a link to a product page of a shopping site (through which a Brand X dress depicted in Portion B of Content Item 1 may be purchased) is clicked (or otherwise executed).
- interaction monitoring subsystem 112 may be programmed to identify an execution of a transaction related to a product or service that is enabled via a track associated with a user account (e.g., purchasing of a product, a user sign-up with a service, etc.).
- Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the execution of the transaction.
- the transaction may, for example, relate to a product or service that appears in a portion of a content item to which a comment of the track corresponds.
- a creating user of a track may be rewarded based on execution of transactions that are enabled via comments in the track. For example, with respect to FIG.
- a creating user of the STAR track may be compensated for including Annotation 1A in the STAR track when the jacket depicted in Portion A of Content Item 1 is purchased by other users using a reference (e.g., a link) to a shopping site that sells the jacket.
- a creating user of the STAR track may be compensated for including Annotation 2A in the STAR track when the dress depicted in Portion B of Content Item 1 is purchased by other users using a reference to a product page of a shopping site that sells the dress.
- annotation subsystem 106 may be programmed to identify a reference associated with a product or service in a comment of a track associated with a user account.
- Annotation subsystem 106 may be programmed to provide a mechanism in the track (e.g., in the comment having the reference, in a reply to the comment, in another comment in the track, etc.) to enable a transaction related to the product or service.
- the reference may, for example, include a product/service identifier, a product/service type identifier, a link to a website through which the transaction related to the product or service may be executed, or other reference. In one scenario, with respect to FIG.
- Annotation 1A (which is accessible via the STAR track) may include a hyperlink to a shopping site through which a jacket that is depicted in Portion A of Content Item 1 may be purchased.
- annotation subsystem 106 may modify the hyperlink to include an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114 ).
- an account associated with reward subsystem 114 (or the entity associated with reward subsystem 114 ) may be provided with a portion of the revenue from the purchase of the jacket.
- Reward subsystem 114 may detect that the jacket purchase was made through the modified hyperlink, and compensate a creating user of the STAR track for Annotation 1A in the STAR track.
- Annotation 2A (which is accessible via the STAR track) may include the term “Brand X dress” without a link to a shopping site through which the Brand X dress depicted in Portion B of Content Item 1 may be purchased. Nevertheless, upon identification of the term “Brand X dress” and that a dress is depicted in Portion B of Content Item 1, annotation subsystem 106 may add a hyperlink, including an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114 ), for the dress's product page on the shopping site to Annotation 2A.
- an account associated with reward subsystem 114 may be provided with a portion of the revenue from the purchase of the dress.
- Reward subsystem 114 may detect that the dress purchase was made through the hyperlink, and compensate a creating user of the STAR track for including Annotation 2A in the STAR track.
- replies or other reactions to annotations may be handled in a number of ways.
- replies or other reactions to annotations may be stored in association with the annotations.
- annotation subsystem 106 may be programmed to obtain a first annotation corresponding to a portion of a content item.
- the first annotation may, for instance, be received at a time at which the portion of the content item is presented during a first presentation of the content item, and stored in a database so that the first annotation may be subsequently obtained from the database.
- Annotation subsystem 106 may be programmed to provide the first annotation to enable the first annotation to be presented with the portion of the content item during a second presentation of the content item.
- Annotation subsystem 106 may be programmed to receive, during the second presentation, a second annotation as a reaction (or reply) to the first annotation.
- annotation subsystem 106 may initiate storage of the second annotation in association with the first annotation.
- User X may have previously watched a presentation of Content Item 1, and submitted Annotation 1B when Portion A of Content Item 1 was presented. As shown in FIG. 9A , User X submitted Annotation 1B to ask other users where he/she can purchase the hat depicted in Portion A.
- Annotation 1B is presented to User Z when Portion A is presented during the presentation of Content Item 1 ( FIG. 9B ).
- User Z may reply to Annotation 1B with a link to a shopping site through which the hat depicted in Portion A can be purchased to provide an answer to User X's question (e.g., using “Reply” button 902 and reply window 904 ).
- the reply may be stored as Annotation 1C in association with Annotation 1B such that Annotation 1C may appear as a reply to Annotation 1B when User X or other users (e.g., future viewers of Portion A) watch Portion A of Content Item 1.
- questions and their corresponding answers may be presented together during respective portions of a content item that are relevant to the question and answer combinations.
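- one possible (purely illustrative) way to keep a question and its answers together at the relevant reference time is sketched below; the in-memory dictionaries stand in for the annotation database, and the function names are hypothetical.

```python
from collections import defaultdict

annotations = {}             # annotation_id -> {"text": ..., "ref_time": ...}
replies = defaultdict(list)  # annotation_id -> list of reply annotation_ids


def store_annotation(annotation_id, text, ref_time):
    annotations[annotation_id] = {"text": text, "ref_time": ref_time}


def store_reply(parent_id, reply_id, text):
    """Store a reply in association with the annotation it reacts to,
    inheriting the parent's reference time."""
    store_annotation(reply_id, text, annotations[parent_id]["ref_time"])
    replies[parent_id].append(reply_id)


def annotations_for_time(ref_time):
    """Yield each top-level annotation at this reference time together with
    its replies, so a question and its answers appear together."""
    reply_ids = {r for rs in replies.values() for r in rs}
    for ann_id, ann in annotations.items():
        if ann["ref_time"] == ref_time and ann_id not in reply_ids:
            yield ann["text"], [annotations[r]["text"] for r in replies[ann_id]]


store_annotation("1B", "Where can I buy that hat?", ref_time=125.0)
store_reply("1B", "1C", "You can get it here: https://shop.example.com/hat")
print(list(annotations_for_time(125.0)))
```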
- the reply to Annotation 1B may cause Annotation 1B and the reply (e.g., Annotation 1C) to be provided to a social networking service (e.g., Social Networking Service #1) to store Annotation 1B and the reply as a message thread, and initiate a conversation between User X and User Z via the social networking service (e.g., Social Networking Service #1) based on the message thread.
- annotation subsystem 106 may be programmed to obtain a first track that enables access to a first annotation that corresponds to a portion of a content item.
- the first track may, for example, include an annotation identifier associated with the first annotation, a first reference time for the first annotation, or other information.
- the first reference time may correspond to the same portion of the content item as the first annotation, and may be utilized along with the annotation identifier to present the first annotation when the corresponding portion is presented during a presentation of the content item.
- annotation subsystem 106 may be programmed to provide the first track (e.g., to user device 104 ) to enable the first annotation to be presented with the corresponding portion of the content item.
- upon receiving, during a presentation of the content item, a second annotation as a reaction (or reply) to the first annotation, annotation subsystem 106 may initiate storage of the second annotation in association with the first annotation.
- the storage of the second annotation (in association with the first annotation) may, for instance, result in the first track further enabling access to the second annotation (e.g., the STAR track in FIGS. 9A-9B may further enable access to Annotation 1C).
- the second annotation may be stored in a database with information indicating that the second annotation is a reaction to the first annotation.
- when the first track is played during a presentation of the content item, the first track may indicate that the first annotation is to be presented with its corresponding portion of the content item.
- based on a query of the database for the first annotation (e.g., using the annotation identifier of the first annotation), the second annotation may be obtained in addition to the first annotation as a result of the second annotation being identified in the database as a reaction to the first annotation. Subsequently, both the first annotation and the second annotation may be presented when the corresponding portion of the content item is presented.
- the first track may be updated to further enable access to the second annotation based on the receipt of the second annotation.
- the first track may be updated to further include an annotation identifier associated with the second annotation and information indicating that the second annotation is a reaction to the first annotation.
- a second track that enables access to the first annotation may be updated such that the second track further enables access to the second annotation.
- in this way, two different tracks (e.g., annotation tracks or other tracks) that enable access to two different sets of annotations may both be updated when a user submits a reaction to an annotation common to both tracks during playback of only one of the two tracks.
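- the following sketch illustrates, under simplifying assumptions, how storing reactions against the annotation itself (rather than inside any one track) lets every track that references the annotation surface the reaction; the data layout shown is hypothetical.

```python
# A track stores annotation identifiers and reference times; reactions are
# stored against the annotation itself in a shared store, so any track that
# references the annotation surfaces new reactions automatically.
annotation_db = {"1A": {"text": "Nice jacket!", "reactions": []}}

star_track = [{"annotation_id": "1A", "ref_time": 125.0}]
other_track = [{"annotation_id": "1A", "ref_time": 125.0}]


def add_reaction(annotation_id, reaction_text):
    annotation_db[annotation_id]["reactions"].append(reaction_text)


def render_track_at(track, ref_time):
    """When playback reaches ref_time, return each referenced annotation
    together with the reactions stored against it."""
    for entry in track:
        if entry["ref_time"] == ref_time:
            ann = annotation_db[entry["annotation_id"]]
            yield ann["text"], list(ann["reactions"])


add_reaction("1A", "Agreed, where is it from?")  # submitted during one playback

# Both tracks now surface the reaction.
print(list(render_track_at(star_track, 125.0)))
print(list(render_track_at(other_track, 125.0)))
```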
- annotation subsystem 106 may be programmed to obtain an annotation inputted by a first user during a first presentation of a content item.
- Annotation subsystem 106 may be programmed to present the annotation during a second presentation of the content item to a second user.
- Annotation subsystem 106 may be programmed to receive a reaction associated with the annotation from the second user. Based on the receipt of the reaction, annotation subsystem 106 may be programmed to provide the annotation and the reaction to the first user.
- annotation subsystem 106 may be programmed to initiate a message thread associated with the first user and the second user based on the receipt of the reaction.
- annotation subsystem 106 may cause the message thread to be generated at a messaging service (e.g., a social networking service, a chat service, a SMS service, a MMS service, etc.) that is accessible to the first user and the second user.
- a messaging service e.g., a social networking service, a chat service, a SMS service, a MMS service, etc.
- the annotation and the reaction may be provided to the user device (e.g., pulled by the user device, pushed to the user device, etc.).
- the reaction to the annotation may initiate a conversation between the first and second users even if the annotation by the first user had not been intended specifically for the second user, as well as without either user having to re-experience the portion of the content item to which the annotation corresponds.
- conversations may be initiated between users regarding subject matter of mutual interest, continued through a messaging service independent of an annotation service or a content delivery service, etc.
- annotation subsystem 106 may be programmed to provide the annotation and the reaction to the first user via a social networking service.
- annotation subsystem 106 may identify a social network service with which the first user and the second user both have accounts, and provide the annotation and the reaction to the first user via the social network service.
- Annotation 1B and its reaction may be provided to User X via Social Network Service #1.
- annotation subsystem 106 may be programmed to identify a social distance between the first user and the second user within a social network.
- Annotation subsystem 106 may be programmed to determine whether the social distance satisfies a social distance threshold, and provide the annotation and the reaction to the first user based on a determination that the social distance satisfies the social distance threshold.
- User X may be associated with a preference to only receive communications from users that are 1 degree away from User X.
- based on this preference, annotation subsystem 106 may refrain from initiating a conversation between User X and another user who is only a friend of one of User X's friends (e.g., more than 1 degree away from User X).
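- a minimal sketch of such a social-distance check, assuming a simple friend graph and a breadth-first search; the graph structure and threshold handling are illustrative only.

```python
from collections import deque


def social_distance(graph, a, b):
    """Degrees of separation between two users in a friend graph (BFS)."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        user, dist = queue.popleft()
        for friend in graph.get(user, ()):
            if friend == b:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return float("inf")


def may_initiate_conversation(graph, author, reactor, max_degrees=1):
    """Deliver the reaction / start a thread only if the reactor is within
    the author's preferred social distance."""
    return social_distance(graph, author, reactor) <= max_degrees


friends = {"UserX": ["UserY"], "UserY": ["UserX", "UserZ"], "UserZ": ["UserY"]}
print(may_initiate_conversation(friends, "UserX", "UserY"))  # True  (1 degree)
print(may_initiate_conversation(friends, "UserX", "UserZ"))  # False (2 degrees)
```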
- a reaction to an annotation in a track may result in the track being updated to include (or otherwise further enable access to) the reaction.
- the annotation and the reaction may be provided to an authoring user of the annotation without the track being updated to include (or otherwise enable access to) the reaction.
- user interface elements may be displayed based on relevancy.
- user interface elements may be presented with varying characteristics based on, for example, the relevancy of the user interface elements to a user or other users, the relevancy of data associated with the user interface elements to the user or other users, the relevancy of the user interface elements to activity being performed by the user or other users, etc.
- the presentation of the user interface elements may allow a user to quickly identify relevant information, actions that may be of interest to the user, recommendations related to the user's interests, etc.
- Characteristics of the user interface elements may, for example, include one or more shapes, designs, sizes, colors, locations, animations, orientations, degrees of transparency/opaqueness, degrees of sharpness or blurriness, labels (e.g., number, letter, etc.), or other characteristics.
- the characteristics of the user interface elements may change over time based on changes with respect to the number of user interface elements on display, data associated with each of the user interface elements, activities of a user or other users, etc.
- the user interface elements may be static in their presentation or may move dynamically in response to changes in the X, Y, or Z plane of the user interface. For example, rather than simply moving user interface elements horizontally (e.g., X plane), vertically (e.g., Y plane), or some combination thereof in the user interface, user interface elements may move into or out of the background of a user interface in a dynamic 3-dimensional fashion (e.g., Z plane). In this way, user interface elements of the user interface may be presented in a manner that simplifies the user interface while also providing the user with simultaneous access to many different user interface elements.
- user interface elements may be associated with content items (e.g., movies, episodes, video clips, songs, audio books, e-books, or other content items).
- Content presentation subsystem 116 may be programmed to determine relevancy information indicating the relevancy of each of the content items to a user.
- Content presentation subsystem 116 may be programmed to modify and/or present the user interface elements based on the relevancy information.
- user interface 1002 may include a display of user interface elements 1004 a - 1004 g that are associated with television shows. While FIG. 10A depicts user interface elements corresponding to television shows, it should be appreciated that a similar interface may be utilized for any other content item (e.g., movies, songs, etc.).
- the characteristics of user interface elements 1004 may, for example, be based on which shows are most frequently viewed by users in general, which shows are most relevant to a specific genre specified by the user, which shows the user has viewed the most, etc.
- the size of user interface element 1004 b (e.g., associated with the “Big Bang Theory” television show) may be increased, and the location of user interface element 1004 b within the Z plane (e.g., depth) may be changed to feature user interface element 1004 b more prominently in front of other user interface elements (e.g., from the perspective of the user).
- changes to the size and location of user interface element 1004 b may be effectuated when a number of the user's friends begin to tune into one or more episodes of the Big Bang Theory.
- the change in the size and location of user interface element 1004 b may, for example, alert the user that there may be information of potential interest to the user associated with user interface element 1004 b .
- the user may be presented with an information page that indicates the episodes of the Big Bang Theory that the user's friends are currently watching or have recently watched.
- the user may, for example, be inclined to start watching the episodes indicated on the information page in order to see the comments or other annotations that the user's friends have submitted for portions of the episodes.
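- as an illustrative sketch (not a prescribed algorithm), a relevancy score might drive both the size and the Z-plane depth of each user interface element as follows; the scaling formulas and field names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class UIElement:
    label: str
    relevancy: float       # 0.0 - 1.0, e.g., derived from friends' viewing activity
    size: float = 1.0
    z_depth: float = 0.0   # larger values sit further into the background


def apply_relevancy(elements):
    """Scale each element and pull the most relevant ones toward the viewer
    (smaller z_depth) so items of likely interest stand out."""
    for el in elements:
        el.size = 1.0 + el.relevancy     # up to 2x base size
        el.z_depth = 1.0 - el.relevancy  # most relevant element sits in front
    return sorted(elements, key=lambda e: e.z_depth)


shows = [UIElement("Big Bang Theory", relevancy=0.9),
         UIElement("Other Show", relevancy=0.2)]
for el in apply_relevancy(shows):
    print(el.label, round(el.size, 2), round(el.z_depth, 2))
```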
- user interface elements may be associated with other users within a user's social group (e.g., the user's friends, the user's colleagues, the user's connections within a social network, or other group).
- Content presentation subsystem 116 may be programmed to determine relevancy information indicating the relevancy of each of the other users to the user.
- Content presentation subsystem 116 may be programmed to present user interface elements based on the relevancy information, for example, by modifying the characteristics of the user interface elements based on the relevancy of respective ones of the other users to the user. For example, as shown in FIG.
- the characteristics of user interface elements 1006 a - 1006 g may, for example, be based on the frequency of the user's interactions with the other users (e.g., reactions to the other users' annotations, conversations with the other users, etc.), the frequency of the interactions with one another, similarity of the other users' activities with the user's activities, etc.
- the size of user interface element 1006 b (e.g., associated with Karl Thomas) may be increased, and the location of user interface element 1006 b within the Z plane (e.g., depth) may be changed to feature user interface element 1006 b more prominently in front of other user interface elements (e.g., from the perspective of the user).
- the change to the size and location of user interface element 1006 b may be effectuated when an increase in interactions with items associated with Karl Thomas's account is detected.
- the change in the size and location of user interface element 1006 b may, for example, alert the user that there may be information of potential interest to the user associated with user interface element 1006 b .
- the user may be presented with an information page that indicates the annotations that Karl Thomas has recently submitted for content items, the reactions that users have recently submitted for Karl Thomas's annotations, users that have recently engaged in conversation with Karl Thomas, etc.
- the user may be inclined to view Karl Thomas's annotations and the reactions associated with Karl Thomas's annotations, start submitting annotations for content items for which Karl Thomas has submitted annotations, initiate a conversation with Karl Thomas, or perform other activities.
- control of presentations of a content item to a group of users may be managed such that an application or a user may control playback or other features of the presentations of the content item to the group of users.
- a group of users (e.g., friends) may, for example, wish to watch a movie or television show together along with an accompanying game (e.g., a trivia game).
- however, the users may not have access to the same content delivery service to watch a presentation of the movie or television show, or the users may be watching a presentation of the movie or television show on different applications or devices.
- one or more users of a group may be enabled to simultaneously control multiple presentations of a content item to respective users of the group even when the presentations are provided to the users via different content delivery services, different user applications, or different user devices.
- in this way, the group interaction experience (e.g., group watching experience, group listening experience, group gaming experience, etc.) may be enhanced.
- content presentation subsystem 116 may be programmed to manage presentations of a content item to at least two users.
- content presentation subsystem 116 may synchronize the presentations of the content item (e.g., based on presentation reference times or other information) so that users may experience the same portion of the content item at a given time.
- content reference subsystem 108 may be programmed to map portions of a first presentation of a content item to portions of a second presentation of the content item.
- the portions of the first and second presentations may, for example, be mapped to one another via a master set of reference times (as described in detail above with regard to FIG. 3 ).
- User A may watch a first presentation of Content Item 1 on user interface 202 a , while User B may watch a second presentation of Content Item 1 on user interface 202 b .
- the first presentation may, for instance, be provided via a first content delivery service (e.g., NETFLIX), and the second presentation may be provided via a second content delivery service (e.g., HULU).
- the two presentations may be synchronized so that User A and User B are watching the same portion of Content Item 1 at the same time.
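- assuming, purely for illustration, that each content delivery service's presentation differs from the master reference timeline by a constant offset (the mapping described with regard to FIG. 3 may be more general), synchronization of the two presentations might look like the following sketch.

```python
# Hypothetical per-service offsets relative to a master set of reference
# times (e.g., a service that prepends a 30-second promo shifts every
# portion of the content item by 30 seconds).
SERVICE_OFFSETS = {"NETFLIX": 0.0, "HULU": 30.0}


def to_master(service, presentation_time):
    return presentation_time - SERVICE_OFFSETS[service]


def from_master(service, reference_time):
    return reference_time + SERVICE_OFFSETS[service]


def sync_peer(source_service, source_time, target_service):
    """Translate User A's playback position into the equivalent position in
    User B's presentation so both see the same portion at the same time."""
    return from_master(target_service, to_master(source_service, source_time))


# User A is 125.0 seconds into the NETFLIX presentation; User B's HULU
# presentation should be positioned at the corresponding point.
print(sync_peer("NETFLIX", 125.0, "HULU"))  # 155.0
```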
- User A and User B may be playing a trivia game related to Content Item 1 (e.g., a movie, an episode, etc.).
- a remote application may control the presentations of Content Item 1 to User A and User B, and pause the presentations at particular times to ask trivia questions related to a portion of Content Item 1.
- User A and User B may be presented with a question (e.g., Question 4) on window 1102 , and they may each answer Question 4 using “Answer” button 1104 .
- Questions of the trivia game may, for example, be presented as comments on a track (e.g., a trivia game track or other track) created by administrators or other users, and answers to the trivia questions may be stored as reactions to the comments.
- User A or User B may control the ability to pause the presentations of Content Item 1. For example, when User A activates the play/pause button 208 a to pause the presentations of Content Item 1 at Portion A, both User A and User B may be presented with a question corresponding to Portion A. As illustrated in FIGS. 11A-11B , User A has 3 points for answering 3 previous questions correctly, and User B has 1 point for answering 1 previous question correctly.
- a threshold number of users in a group of users may need to issue a command in order for the command to be implemented for presentations of a content item to the group of users.
- both User A and User B may need to activate their play/pause buttons 208 when Portion A is presented to pause the presentations of Content Item 1 at Portion A and trigger the presentation of Question 4.
- both User A and User B may be allowed to gauge whether they are comfortable with questions regarding a certain portion of a content item before activating their play/pause button 208 to trigger a question.
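- a minimal sketch of such threshold-based group commands; the vote bookkeeping and function name are illustrative assumptions.

```python
def group_command(votes, command, threshold):
    """A command (e.g., 'pause') takes effect for the whole group only once
    a threshold number of group members have issued it."""
    issued = sum(1 for cmd in votes.values() if cmd == command)
    return issued >= threshold


votes = {"UserA": "pause"}
print(group_command(votes, "pause", threshold=2))  # False - waiting on User B
votes["UserB"] = "pause"
print(group_command(votes, "pause", threshold=2))  # True - pause both presentations
```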
- the control of presentations of a content item may be passed among a group of users based on a schedule, pass intervals (e.g., time intervals, use intervals, etc.), token-based criteria, or other criteria.
- a user may manually pass his/her control of the presentations to another user in the group.
- a schedule may indicate when each one of the group of users should be given full or partial control.
- passing of control may be performed after a user has had control for a particular time interval, or after a user has used all of his/her available number of commands to control presentations of the content item.
- a user may be given a certain number of tokens which may be exchanged for issuing commands to control the presentations of the content item to the group of users (e.g., 1 token to pause the presentations for 5 seconds, 3 tokens to rewind or fast-forward the presentations up to 5 minutes back or forward, 6 tokens to cause the presentations to jump to any portion of the content item, etc.).
- the control of the presentations may be passed to another user in the group that has available tokens.
- a user may, for example, be given an initial set of tokens for controlling the presentations of the content item for free, but may have the option to purchase additional tokens.
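- the token-based control scheme may be pictured with the following sketch, using the example token costs above; the command names and bookkeeping are illustrative only.

```python
# Example token costs drawn from the passage above (illustrative values).
TOKEN_COSTS = {"pause_5s": 1, "seek_5min": 3, "jump_anywhere": 6}


def spend_tokens(balances, user, command):
    """Deduct tokens if the user can afford the command; otherwise the
    command is refused and control may pass to a member with tokens left."""
    cost = TOKEN_COSTS[command]
    if balances.get(user, 0) >= cost:
        balances[user] -= cost
        return True
    return False


balances = {"UserA": 4, "UserB": 10}
print(spend_tokens(balances, "UserA", "seek_5min"))      # True; UserA has 1 token left
print(spend_tokens(balances, "UserA", "jump_anywhere"))  # False; not enough tokens
print(balances)
```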
- the foregoing trivia game is but one example. It should be recognized that other examples may be implemented when a group of users wish to view a movie or television show together.
- third parties may generate and provide “group viewing tracks” to encourage social behavior.
- FIGS. 12-27 comprise exemplary illustrations of flowcharts of processing operations of methods that enable the various features and functionality of the system as described in detail above (and illustrated in FIGS. 1-11 ).
- the processing operations of each method presented below are intended to be illustrative and non-limiting. In some implementations, for example, the methods may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the processing operations of the methods are illustrated and described below is not intended to be limiting.
- the methods may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the one or more processing devices may include one or more devices executing some or all of the operations of the methods in response to instructions stored electronically on an electronic storage medium.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of the methods.
- FIG. 12 is an exemplary illustration of a flowchart of a method 1200 of creating and maintaining a database of annotations corresponding to portions of a content item, according to an aspect of the invention.
- a first annotation corresponding to a time at which a first portion of a content item is presented via a first content delivery service may be received. Operation 1202 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- a second annotation corresponding to a time at which the first portion of the content item is presented via a second content delivery service may be received.
- the presentation via the first content delivery service may correspond to a first presentation that includes the content item
- the presentation via the second content delivery service may correspond to a second presentation that includes the content item.
- the presentation via the first content delivery service may correspond to a first presentation that includes the first portion of the content item and does not include a second portion of the content item
- the presentation via the second content delivery service may include the first and second portions of the content item.
- Operation 1204 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- storage of the first annotation in association with a first reference time corresponding to the first portion of the content item may be initiated. Operation 1206 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- storage of the second annotation in association with the first reference time (corresponding to the first portion of the content item) may be initiated. Operation 1208 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- a third annotation corresponding to a time at which a second portion of the content item is presented may be received. Operation 1210 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- storage of the third annotation in association with a second reference time corresponding to the second portion of the content item may be initiated. Operation 1212 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- the first and/or second annotations may be provided based on the first reference time.
- the first and/or second annotations may be provided based on the first reference time such that the first and/or second annotations are presented when the first portion of the content item (to which the first reference time corresponds) is presented during a third presentation of the content item.
- the third presentation may, for example, be provided via the first content delivery service, the second content delivery service, or a third content delivery service.
- the third presentation may be the same as one of the first or second presentations of the content item, or different than both the first and second presentations of the content item.
- Operation 1214 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- the third annotation may be provided based on the second reference time.
- the third annotation may be provided based on the second reference time such that the third annotation is presented when the second portion of the content item (to which the second reference time corresponds) is presented during the third presentation of the content item.
- Operation 1216 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
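- a compact, illustrative sketch of the overall flow of method 1200, assuming (for simplicity) constant per-service offsets to the master reference times; the service names, offsets, and data structures are placeholders.

```python
from collections import defaultdict

# Per-service offsets to the master reference timeline (illustrative; the
# actual mapping may be more general).
OFFSETS = {"service_1": 0.0, "service_2": 42.0, "service_3": 10.0}
annotation_store = defaultdict(list)  # master reference time -> [annotation texts]


def receive_annotation(service, presentation_time, text):
    """Operations 1202-1212 in spirit: map the presentation time to a master
    reference time and store the annotation against that reference time."""
    ref_time = presentation_time - OFFSETS[service]
    annotation_store[ref_time].append(text)


def annotations_for(service, presentation_time):
    """Operations 1214-1216 in spirit: when a later presentation reaches the
    portion tied to a reference time, provide every stored annotation."""
    ref_time = presentation_time - OFFSETS[service]
    return annotation_store.get(ref_time, [])


receive_annotation("service_1", 100.0, "First annotation on Portion A")
receive_annotation("service_2", 142.0, "Second annotation on Portion A")
print(annotations_for("service_3", 110.0))  # both annotations for Portion A
```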
- FIG. 13 is an exemplary illustration of a flowchart of a method 1300 of generating annotations for a content item based on interactions of users with presentations of the content item, according to an aspect of the invention.
- an interaction of a user with a presentation of a content item may be monitored. Operation 1302 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112 , in accordance with one or more implementations.
- a characteristic of the content item may be determined based on the interaction. Operation 1304 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- an annotation may be generated for the content item based on the characteristic. Operation 1306 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- a reference time that corresponds to a portion of the content item may be identified for the annotation based on the interaction. Operation 1308 may be performed by a content reference subsystem that is the same as or similar to content reference subsystem 108 , in accordance with one or more implementations.
- storage of the annotation in association with the reference time may be initiated. Operation 1310 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- the annotation may be provided based on the reference time such that the annotation is presented when the portion of the content item (to which the reference time corresponds) is presented during a subsequent presentation of the content item.
- Operation 1312 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- FIG. 14 is an exemplary illustration of a flowchart of a method 1400 of providing annotations (corresponding to portions of a content item) to social networking services, according to an aspect of the invention.
- a presentation of a content item may be initiated. Operation 1402 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120 , in accordance with one or more implementations.
- a first annotation may be received at a time at which a first portion of the content item is presented. Operation 1404 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- storage of the first annotation in association with a first reference time (corresponding to the first portion of the content item) may be initiated. Operation 1406 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- the first annotation may be provided to a first social networking service. Operation 1408 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- a second annotation may be received at a time at which a second portion of the content item is presented. Operation 1410 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- storage of the second annotation in association with a second reference time (corresponding to the second portion of the content item) may be initiated. Operation 1412 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- the second annotation may be provided to a second social networking service. Operation 1414 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- FIG. 15 is an exemplary illustration of a flowchart of a method 1500 of presenting annotations corresponding to portions of a content item during a presentation of the content item, according to an aspect of the invention.
- a selection of a content item to be presented to a user may be received. Operation 1502 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120 , in accordance with one or more implementations.
- a first parameter associated with the user that is related to presentation of annotations may be received. Operation 1504 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- annotations corresponding to portions of the content item may be obtained based on the first parameter. Operation 1506 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- a second parameter associated with the user that indicates a characteristic desired by the user may be received. Operation 1508 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120 , in accordance with one or more implementations.
- a presentation of the selected content item may be initiated such that the presentation of the selected content item is based on the second parameter.
- Operation 1510 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120 , in accordance with one or more implementations.
- a determination that the presentation of the selected content item has reached a first reference time corresponding to a first portion of the content item may be effectuated. Operation 1512 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- first and/or second annotations associated with the first reference time may be presented at a time corresponding to the first portion of the content item. Operation 1514 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- a third annotation by the user that corresponds to a time at which a second portion of the content item is presented may be received during the presentation of the content item.
- Operation 1516 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- storage of the third annotation in association with a second reference time corresponding to the second portion of the content item may be initiated. Operation 1518 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118 , in accordance with one or more implementations.
- FIG. 16 is an exemplary illustration of a flowchart of a method 1600 of facilitating rewards for the creation of annotations, according to an aspect of the invention.
- an annotation corresponding to a time at which a portion of a content item is presented may be received from a user. Operation 1602 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106 , in accordance with one or more implementations.
- the annotation may be associated with a user account of the user. Operation 1604 may be performed by an account subsystem that is the same as or similar to account subsystem 110 , in accordance with one or more implementations.
- a reward to be provided (or credited) to the user account may be determined based on the annotation. For example, a reward may be provided (or credited) to the user account based on a receipt of the annotation from the user, interactions of other users with the annotation, or other criteria. Operation 1606 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114 , in accordance with one or more implementations.
- FIG. 17 is an exemplary illustration of a flowchart of a method 1700 of facilitating rewards based on interactions with annotations, according to an aspect of the invention.
- an annotation received during a presentation of a content item may be associated with a user account.
- Operation 1702 may be performed by an account subsystem that is the same as or similar to account subsystem 110 , in accordance with one or more implementations.
- interactions with the annotation may be monitored. Monitored interactions may, for instance, include access of the annotation (e.g., viewing the annotation, listening to the annotation, etc.) by other users during presentation of the content item, reactions by users to the annotation (e.g., rating the annotation, replying to the annotation, etc.), execution of transactions enabled via the annotation, or other interactions. Operation 1704 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112 , in accordance with one or more implementations.
- a reward to be provided (or credited) to the user account may be determined based on the interactions. For example, a determination of whether the interactions satisfy one or more criteria for compensating a user associated with the user account may be effectuated. The reward to be provided to the user account may be determined based on whether the interactions satisfy the compensation criteria. Operation 1706 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114 , in accordance with one or more implementations.
- FIG. 18 is an exemplary illustration of a flowchart of a method 1800 of facilitating rewards based on execution of transactions enabled via annotations, according to an aspect of the invention.
- In an operation 1802, an annotation received during a presentation of a content item may be associated with a user account. Operation 1802 may be performed by an account subsystem that is the same as or similar to account subsystem 110, in accordance with one or more implementations.
- In an operation 1804, a reference associated with a product or service may be identified in the annotation. Operation 1804 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1806, a mechanism that enables a transaction related to the product or service may be provided in the annotation. The mechanism may be provided in the annotation based on the identification of the reference associated with the product or service in the annotation. Operation 1806 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1808, an execution of the transaction enabled via the mechanism may be identified. The execution of the transaction may be identified based on use of the mechanism by a user to facilitate the execution of the transaction. Operation 1808 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 1810, a reward to be provided (or credited) to the user account may be determined based on the execution of the transaction. Operation 1810 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
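As a hedged sketch of operations 1804 and 1806 (identifying a product or service reference and providing a transaction mechanism in the annotation), the snippet below uses simple keyword matching against a hypothetical catalog; the catalog contents, URLs, and matching strategy are invented for illustration.

```python
# Hypothetical catalog mapping product keywords to purchase endpoints.
CATALOG = {
    "sunglasses": "https://shop.example.com/buy/sunglasses",
    "soundtrack": "https://shop.example.com/buy/soundtrack",
}

def attach_transaction_mechanism(annotation_text: str) -> dict:
    """Scan annotation text for product references and, if one is found,
    return the annotation enriched with a transaction mechanism (here, a
    purchase URL). Keyword matching stands in for whatever reference
    identification an annotation subsystem would actually perform."""
    lowered = annotation_text.lower()
    for keyword, url in CATALOG.items():
        if keyword in lowered:
            return {"text": annotation_text,
                    "transaction_mechanism": {"product": keyword, "buy_url": url}}
    return {"text": annotation_text, "transaction_mechanism": None}

print(attach_transaction_mechanism("Love the sunglasses the lead is wearing in this scene!"))
```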
- FIG. 19 is an exemplary illustration of a flowchart of a method 1900 of providing a dataset (or track) of annotations corresponding to portions of a content item, according to an aspect of the invention.
- In an operation 1902, a first annotation received from a first source during a first presentation of a content item may be stored. The first source may, for example, include an authoring user of the first annotation, an entity associated with the authoring user, or other entity. Operation 1902 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1904, a second annotation received from the first source during a second presentation of the content item may be stored. Operation 1904 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1906, a third annotation received from a second source during a third presentation of the content item may be stored. The second source may, for example, include an authoring user of the third annotation, an entity associated with the authoring user, or other entity. Operation 1906 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1908, the first, second, or third annotations may be identified for inclusion in a dataset (or track). The annotations may be identified for inclusion in the dataset based on a selection of the annotations by a user for inclusion in the dataset, one or more parameters selected by a user for creating the dataset, automatic creation of the dataset by a service without explicit user input to create the dataset, etc. Operation 1908 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1910, the dataset may be generated. The dataset may be generated such that the dataset enables access to the first, second, or third annotations. Operation 1910 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1912, the dataset may be provided to enable the first, second, or third annotations to be presented, respectively, at times corresponding to first, second, or third portions of the content item. Operation 1912 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
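The following sketch illustrates, under assumed data shapes, how a dataset (or track) might be generated so that each annotation can be looked up by the reference time of its corresponding portion; field names such as reference_time and source are illustrative.

```python
from typing import Dict, List

def generate_dataset(track_name: str, annotations: List[Dict]) -> Dict:
    """Bundle selected annotations into a track ordered by reference time,
    so a player can look up what to present as playback reaches each time."""
    entries = sorted(annotations, key=lambda a: a["reference_time"])
    return {
        "track": track_name,
        "entries": [
            {"reference_time": a["reference_time"],
             "source": a["source"],
             "text": a["text"]}
            for a in entries
        ],
    }

track = generate_dataset("Fan commentary", [
    {"reference_time": 421.0, "source": "user_x", "text": "Foreshadowing here."},
    {"reference_time": 75.5, "source": "user_y", "text": "Great opening shot."},
])
print(track["entries"][0]["reference_time"])  # 75.5 -- earliest portion first
```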
- In an operation 1914, a reaction associated with the first, second, or third annotations may be identified. Operation 1914 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 1916, a characteristic may be determined for the dataset based on the reaction. Operation 1916 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1918, the characteristic may be associated with the dataset. Operation 1918 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- FIG. 20 is an exemplary illustration of a flowchart of a method 2000 of facilitating rewards based on interactions with datasets (e.g., tracks), according to an aspect of the invention.
- In an operation 2002, a dataset (or track) that enables access to annotations corresponding to portions of a content item may be generated. Operation 2002 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2004, the dataset may be associated with a user account. The dataset may be received from a first source, and the dataset may be associated with a user account of the first source. The first source may include a creating user of the dataset, an entity associated with the creating user, or other entity. Operation 2004 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2006, interactions with the dataset may be monitored. Monitored interactions may, for example, include access of the dataset (e.g., viewing annotations of the dataset, listening to annotations of the dataset, etc.) by users during a presentation of the content item, reactions by users to the dataset (e.g., rating the dataset, rating annotations of the dataset, replying to annotations of the dataset, etc.), execution of transactions enabled via the dataset, or other interactions. Operation 2006 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 2008, a reward to be provided (or credited) to the user account may be determined based on the interactions. Operation 2008 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
- FIG. 21 is an exemplary illustration of a flowchart of a method 2100 of facilitating rewards based on execution of transactions enabled via datasets (e.g., tracks), according to an aspect of the invention.
- In an operation 2102, a reference associated with a product or service may be identified in an annotation that is to be included in a dataset (or track). Operation 2102 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2104, a mechanism that enables a transaction related to the product or service may be generated. Operation 2104 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2106, the dataset may be generated such that the dataset enables access to the annotation and the mechanism. Operation 2106 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2108, the dataset may be associated with a user account. Operation 2108 may be performed by an account subsystem that is the same as or similar to account subsystem 110, in accordance with one or more implementations.
- In an operation 2110, an execution of the transaction via the mechanism may be identified. The execution of the transaction may be identified based on use of the mechanism by a user to facilitate the execution of the transaction. Operation 2110 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 2112, a reward to be provided to the user account may be determined based on the execution of the transaction. Operation 2112 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
- FIG. 22 is an exemplary illustration of a flowchart of a method 2200 of facilitating the sharing of portions of a content item across different content delivery services, according to an aspect of the invention.
- In an operation 2202, a request to provide information to enable access to a portion of a content item may be received. The request may, for example, be based on a first presentation of the content item via a first content delivery service. Operation 2202 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2204, a reference time corresponding to the portion of the content item may be identified. Operation 2204 may be performed by a content reference subsystem that is the same as or similar to content reference subsystem 108, in accordance with one or more implementations.
- In an operation 2206, reference information that enables access to the portion of the content item in a second presentation of the content item (via a second content delivery service) may be generated based on the reference time. Operation 2206 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2208, the reference information may be provided to enable access to the portion of the content item via the second content delivery service. Operation 2208 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
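A minimal sketch of operation 2206-style reference information is shown below: the shareable payload carries only a service-independent content item identifier and a reference time, so any other content delivery service that offers the item can locate the portion. The JSON layout and identifiers are assumptions.

```python
import json
from typing import Optional

def build_reference_info(content_item_id: str, reference_time: float,
                         portion_id: Optional[str] = None) -> str:
    """Serialize the minimum information another content delivery service would
    need to locate the same portion: a service-independent content item
    identifier and a reference time (plus an optional portion identifier)."""
    payload = {"content_item_id": content_item_id,
               "reference_time": reference_time,
               "portion_id": portion_id}
    return json.dumps(payload)

share_token = build_reference_info("episode-204", reference_time=1312.4)
print(share_token)
```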
- FIG. 23 is an exemplary illustration of a flowchart of a method 2300 of facilitating the access of a portion of a content item, according to an aspect of the invention.
- In an operation 2302, reference information related to a portion of a content item may be received. The reference information may be generated based on user input during a first presentation of the content item (via a first content delivery service). Operation 2302 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2304, a second content delivery service (through which access to the portion of the content item is available) may be identified based on the reference information. The reference information may include information indicating the content item (e.g., content item identifier), the portion of the content item (e.g., portion identifier), a reference time corresponding to the portion of the content item, or other information. The second content delivery service may be identified based on a determination that the second content delivery service offers access to the content item or the portion of the content item. Operation 2304 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2306, the portion of the content item may be provided in the second presentation (via the second content delivery service) based on the reference information. The reference information may enable a user to jump to the portion of the content item in the second presentation (e.g., using a content item identifier associated with the content item and a reference time corresponding to the portion of the content item). Operation 2306 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
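Continuing the assumption of the JSON payload sketched above, the following snippet shows one way the receiving side might identify a second content delivery service that offers the content item and turn the reference information into a seek instruction; the availability map and service names are hypothetical.

```python
import json
from typing import Dict, List

# Hypothetical availability map: which services can present which content items.
AVAILABILITY: Dict[str, List[str]] = {
    "episode-204": ["ServiceA", "ServiceB"],
}

def resolve_and_jump(reference_info: str, candidate_services: List[str]) -> Dict:
    """Pick a content delivery service that offers the content item and return
    a playback instruction (content item id plus seek offset) for it."""
    info = json.loads(reference_info)
    offered = AVAILABILITY.get(info["content_item_id"], [])
    for service in candidate_services:
        if service in offered:
            return {"service": service,
                    "content_item_id": info["content_item_id"],
                    "seek_to": info["reference_time"]}
    raise LookupError("no accessible content delivery service offers this item")

token = '{"content_item_id": "episode-204", "reference_time": 1312.4, "portion_id": null}'
print(resolve_and_jump(token, ["ServiceB", "ServiceC"]))
```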
- FIG. 24 is an exemplary illustration of a flowchart of a method 2400 of enabling storage of reactions to annotations, according to an aspect of the invention.
- In an operation 2402, a first annotation (initially received at a time at which a portion of a content item is presented during a first presentation of the content item) may be obtained. Operation 2402 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2404, the first annotation may be provided when the corresponding portion of the content item is presented during a second presentation of the content item. Operation 2404 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2406, a second annotation may be received during the second presentation as a reaction to the first annotation. Operation 2406 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2408, storage of the second annotation in association with the first annotation may be initiated. The second annotation may be stored in association with the first annotation based on a determination that the second annotation is a reaction to the first annotation. Operation 2408 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
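The sketch below shows one plausible way to initiate storage of a reaction annotation in association with the annotation it reacts to, using an in-memory store as a stand-in for the annotation database; the identifiers and field names are assumptions.

```python
from typing import Dict, List, Optional

class AnnotationStore:
    """In-memory stand-in for the annotation database described above."""

    def __init__(self) -> None:
        self._annotations: Dict[str, Dict] = {}
        self._next_id = 1

    def store(self, text: str, reference_time: float,
              in_reaction_to: Optional[str] = None) -> str:
        """Store an annotation; when in_reaction_to is given, the new
        annotation is linked to the annotation it reacts to."""
        annotation_id = f"a{self._next_id}"
        self._next_id += 1
        self._annotations[annotation_id] = {
            "text": text,
            "reference_time": reference_time,
            "in_reaction_to": in_reaction_to,
        }
        return annotation_id

    def reactions_to(self, annotation_id: str) -> List[Dict]:
        return [a for a in self._annotations.values()
                if a["in_reaction_to"] == annotation_id]

store = AnnotationStore()
first = store.store("That twist!", reference_time=2710.0)
store.store("I did not see it coming either.", reference_time=2710.0, in_reaction_to=first)
print(len(store.reactions_to(first)))  # 1
```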
- FIG. 25 is an exemplary illustration of a flowchart of a method 2500 of initiating conversations between users based on reactions to annotations, according to an aspect of the invention.
- In an operation 2502, an annotation entered by a first user during a first presentation of a content item may be obtained. Operation 2502 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2504, the annotation may be presented during a second presentation of the content item (to a second user). Operation 2504 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2506, a reaction associated with the annotation may be received from the second user. Operation 2506 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2508, the annotation and the reaction may be provided via a messaging service to the first user. The annotation and/or the reaction may be provided to the first user based on a determination that the first and second users are associated with the same social network, a determination that the first and second users are within a social distance threshold from one another, or other criteria. Operation 2508 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
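Where delivery of the annotation and reaction is conditioned on a social distance threshold, the check could resemble the breadth-first-search sketch below; the friendship graph, threshold value, and relay decision are illustrative assumptions.

```python
from collections import deque
from typing import Dict, List

def social_distance(graph: Dict[str, List[str]], a: str, b: str) -> float:
    """Breadth-first-search distance between two users in a friendship graph."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        user, dist = queue.popleft()
        for friend in graph.get(user, []):
            if friend == b:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return float("inf")

def should_relay(graph: Dict[str, List[str]], author: str, reactor: str,
                 threshold: int = 2) -> bool:
    """Relay the annotation and reaction only if the two users are within the
    assumed social distance threshold of one another."""
    return social_distance(graph, author, reactor) <= threshold

graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
print(should_relay(graph, "alice", "carol"))  # True (distance 2)
```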
- FIG. 26 is an exemplary illustration of a flowchart of a method 2600 of presenting user interface elements based on relevancy, according to an aspect of the invention.
- In an operation 2602, relevancy of a user interface element to a user may be determined with respect to a first time. Operation 2602 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2604, a first set of characteristics may be determined for the user interface element based on the determined relevancy with respect to the first time. Operation 2604 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2606, the user interface element may be presented based on the first set of characteristics. Operation 2606 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2608, relevancy of the user interface element to the user may be determined with respect to a second time (e.g., relevancy of the user interface element at the second time). Operation 2608 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2610, a second set of characteristics may be determined for the user interface element based on the determined relevancy with respect to the second time. Operation 2610 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2612, the user interface element may be modified during the presentation of the user interface element based on the second set of characteristics. Operation 2612 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
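One way to read operations 2604 through 2612 is as a mapping from a relevancy score to presentation characteristics that can be re-evaluated at a later time; the thresholds and attribute names in the sketch below are invented for illustration.

```python
def characteristics_for(relevancy: float) -> dict:
    """Map a relevancy score in [0, 1] to presentation characteristics for a
    user interface element; thresholds and attribute names are assumptions."""
    if relevancy >= 0.7:
        return {"size": "large", "opacity": 1.0, "position": "foreground"}
    if relevancy >= 0.3:
        return {"size": "medium", "opacity": 0.7, "position": "sidebar"}
    return {"size": "small", "opacity": 0.3, "position": "collapsed"}

# The same element can be restyled mid-presentation as its relevancy changes.
print(characteristics_for(0.9))   # at a first time
print(characteristics_for(0.2))   # at a second time
```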
- FIG. 27 is an exemplary illustration of a flowchart of a method 2700 of facilitating control of presentations of a content item to a group of users, according to an aspect of the invention.
- In an operation 2702, presentations of a content item to first and second users via first and second content delivery services, respectively, may be synchronized. Operation 2702 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2704, control of the presentations of the content item may be enabled for the first or second users. User control of the presentations of the content item may be enabled for the first user, while user control of the presentations of the content item may be disabled for the second user (or vice versa). Operation 2704 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2706, a control command may be received from a controlling user (e.g., first user, second user, etc.) during the presentations of the content item. Operation 2706 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2708, the presentations of the content item may be controlled based on the control command. Operation 2708 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
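A minimal sketch of group-controlled, synchronized presentations is shown below: a control command is applied to every session only when it comes from a user for whom control is enabled. The session structure and command names are assumptions.

```python
from typing import Dict, List

def apply_control_command(sessions: Dict[str, Dict], issuer: str,
                          controllers: List[str], command: str,
                          seek_to: float = 0.0) -> None:
    """Apply a playback command to every synchronized session, but only when
    the issuing user has control; other users' commands are ignored."""
    if issuer not in controllers:
        return  # user control is disabled for this user
    for session in sessions.values():
        if command == "pause":
            session["state"] = "paused"
        elif command == "play":
            session["state"] = "playing"
        elif command == "seek":
            session["position"] = seek_to

sessions = {"user1": {"state": "playing", "position": 10.0},
            "user2": {"state": "playing", "position": 10.0}}
apply_control_command(sessions, issuer="user2", controllers=["user1"], command="pause")
print(sessions["user2"]["state"])  # still "playing" -- user2 has no control
apply_control_command(sessions, issuer="user1", controllers=["user1"], command="pause")
print(sessions["user1"]["state"], sessions["user2"]["state"])  # paused paused
```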
Abstract
Methods, apparatuses, and/or systems are provided for enabling a time-shifted, on-demand social network for watching, creating, and/or sharing time-shifted annotation datasets (e.g., commentary tracks) synced to any on-demand programming, and more particularly for creating and maintaining a database of annotations corresponding to portions of a content item.
Description
- This application claims priority to: (1) U.S. Provisional Patent Application Ser. No. 61/771,461, filed on Mar. 1, 2013, entitled “Marking (Annotating) Live or Captured Media;” (2) U.S. Provisional Patent Application Ser. No. 61/794,202, filed on Mar. 15, 2013, entitled “Method and Apparatus for Marking (Annotating) Live or Captured Media;” (3) U.S. Provisional Patent Application Ser. No. 61/771,467, filed on Mar. 1, 2013, entitled “Time Shift Local;” (4) U.S. Provisional Patent Application Ser. No. 61/794,271, filed on Mar. 15, 2013, entitled “Method and Apparatus for Conducting Time Shifted Interactions With Media;” (5) U.S. Provisional Patent Application Ser. No. 61/771,514, filed on Mar. 1, 2013, entitled “Group Media Control;” (6) U.S. Provisional Patent Application Ser. No. 61/794,322, filed on Mar. 15, 2013, entitled “Method and Apparatus for Controlling Media Being Experienced by a Group of Users;” (7) U.S. Provisional Patent Application Ser. No. 61/771,519, filed on Mar. 1, 2013, entitled “Relevant/Navigation Path User Interface;” (8) U.S. Provisional Patent Application Ser. No. 61/794,419, filed on Mar. 15, 2013, entitled “Method and Apparatus for Controlling Display of User Interface Elements Based on Relevancy;” and (9) U.S. Provisional Patent Application Ser. No. 61/819,941, filed on May 6, 2013, entitled “Method and Apparatus for Sharing Virtual Video Clips Resulting In Audio Playback Without Owning the Source Material,” each of which is hereby incorporated by reference herein in its entirety.
- This application is additionally related to the following, co-pending U.S. utility patent applications, filed on even date herewith: (1) U.S. patent application Ser. No. [Attorney Docket No. 022730-0429853], entitled “SYSTEM AND METHOD FOR PROVIDING A DATASET OF ANNOTATIONS CORRESPONDING TO PORTIONS OF A CONTENT ITEM;” (2) U.S. patent application Ser. No. [Attorney Docket No. 022730-0429854], entitled “SYSTEM AND METHOD FOR PROVIDING ANNOTATIONS RECEIVED DURING PRESENTATIONS OF A CONTENT ITEM;” (3) U.S. patent application Ser. No. [Attorney Docket No. 022730-0429855], entitled “SYSTEM AND METHOD FOR PROVIDING REWARDS BASED ON ANNOTATIONS;” (4) U.S. patent application Ser. No. [Attorney Docket No. 022730-0429856], entitled “SYSTEM AND METHOD FOR SHARING PORTIONS OF A CONTENT ITEM;” and (5) U.S. patent application Ser. No. [Attorney Docket No. 022730-0429856], entitled “SYSTEM AND METHOD FOR MANAGING REACTIONS TO ANNOTATIONS,” each of which is additionally hereby incorporated by reference herein in its entirety.
- The invention relates generally to methods, apparatuses, and/or systems for enabling a time-shifted, on-demand social network for watching, creating, and/or sharing time-shifted annotation datasets (e.g., commentary tracks) synced to any on-demand programming, and more particularly to creating and maintaining a database of annotations corresponding to portions of a content item.
- Through the advent of social media, users are able to disseminate information to others, as well as interact with one another, via various social networks. For example, users may utilize a social networking service to inform others about movie and/or television episodes that they have watched, share their reactions to events occurring during an episode in real-time, and respond to one another's reactions to events in an episode. However, users that miss an episode during an original airing and watch the episode at a later time (e.g., subsequent airing, online streaming, DVD presentation, etc.) are typically unable to experience the reactions of other users as they watch the episode, for example, due to the significantly lower number of viewers that are watching the episode at the same time (for subsequent airings) or because they are watching it “on-demand” at a time when others are not viewing the episode. Moreover, because the previously-shared reactions are available to users that have not watched the episode, the shared reactions may act as “spoilers” that ruin the experience of the users that have yet to watch the episode.
- In addition, it is not uncommon for users to want to use social networks to share information about different scenes (e.g., funny or poignant scenes) in a movie or television episode with other users of the social network. Although users may share information with others regarding a specific scene within an episode, they are generally unable to easily provide others with easy access to actually view the scene within the movie or television episode. As an example, users may describe the scene that they wish to share, or they may specify a particular content delivery service and a reference time at which the scene can be accessed by a user via the specified content delivery service. However, both of these approaches require users to manually search for the scene within the episode, and the latter approach further requires that the users have access to the specified content delivery service. These and other drawbacks exist.
- The invention addressing these and other drawbacks relates to methods, apparatuses, and/or systems for enabling a time-shifted, on-demand social network for watching, creating, and/or sharing time-shifted commentary tracks synced to any on-demand programming, according to an aspect of the invention. In particular, the invention may facilitate the presentation of content items, annotations associated with the content items, or related items.
- As used herein, “content items” may include movies, television episodes, portions or segments of movies or television episodes, video clips, songs, audio books, e-books, or other content items. A presentation of a content item may be provided to a user via a content delivery service such as, for example, NETFLIX, HULU, AMAZON INSTANT VIDEO, a cable provider, a local service at a user device programmed to present content items stored locally at an electronic storage of the user device (e.g., a hard drive, a CD, a DVD, etc.), or other content delivery service. Presentations of a content item may include reproductions of the content item that are of varying versions (e.g., extended versions, versions with alternative endings or scenes, etc.), reproductions of the content item with auxiliary information (e.g., advertisements, warnings, etc.), or other presentations of the content item.
- As used herein, “annotations” may include reviews, comments, ratings, markups, posts, links to other media, or other annotations. Annotations may be manually entered by a user for a content item (or a portion thereof), or automatically determined for the content item (or portion thereof) based on interactions of the user with the content item (or portion thereof), interactions of the user with other portions of the content item or other content items, or other parameters. Annotations may be manually entered or automatically determined for the content item or the content item portion either before, during, or after a presentation of the content item. Annotations may be stored as data or metadata, for example, in association with information indicative of the content item or the content item portion.
- According to an aspect of the invention, a database of annotations (e.g., comments) may be created and maintained. The database of annotations may, for example, include annotations corresponding to portions of a content item. The annotations may be created based on presentations of the content item that are provided via one or more content delivery services, and stored in a database for later use. As an example, an annotation in the database may correspond to a time at which a first portion of a content item is presented via a first content delivery service (e.g., NETFLIX), and another annotation in the database may correspond to a time at which a second portion of the content item is presented via a second content delivery service (e.g., HULU). The annotations may be stored in the database respectively in association with reference times that correspond to portions of the content item. In this way, reference times associated with annotations may be utilized to subsequently provide the annotations such that they are presented in a time-synchronized fashion with corresponding portions of a content item during subsequent presentations of that content item. Among other benefits, reactions of users to portions of the content item (e.g., captured in the form of annotations) may be shared with other users regardless of the time at which each user experiences the content item. In this manner, users can experience a content item (and its accompanying annotations) whenever they choose without having to worry about prior annotations “spoiling” the user experience.
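For illustration only, a database of annotations keyed by a service-independent content item identifier and a reference time could be organized as in the SQLite sketch below; the schema, field names, and the five-second retrieval window are assumptions, not the patented design.

```python
import sqlite3

# One way such a database could be organized: each annotation row stores the
# service-independent content item id and a reference time on a shared timeline.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE annotations (
        id INTEGER PRIMARY KEY,
        content_item_id TEXT NOT NULL,
        reference_time REAL NOT NULL,   -- seconds into the shared timeline
        author TEXT NOT NULL,
        body TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO annotations (content_item_id, reference_time, author, body) "
             "VALUES (?, ?, ?, ?)", ("episode-204", 754.2, "user_x", "Classic line."))

def annotations_near(content_item_id: str, playback_time: float, window: float = 5.0):
    """Fetch annotations whose reference times fall within a small window of
    the current playback position, so they appear in sync with the portion."""
    cur = conn.execute(
        "SELECT author, body, reference_time FROM annotations "
        "WHERE content_item_id = ? AND reference_time BETWEEN ? AND ?",
        (content_item_id, playback_time - window, playback_time + window))
    return cur.fetchall()

print(annotations_near("episode-204", 756.0))
```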
- In one implementation, audio content recognition may be performed for a portion of a content item when a user submits an annotation (e.g., comment) for the portion of the content item. A comparison of a resulting pattern of the audio content recognition with stored patterns associated with a set of reference times may then be performed to identify a reference time that corresponds to the portion of the content item. Upon identification of the reference time, the annotation may be stored in association with the reference time so that the reference time may later be used to present the annotation when its corresponding portion of the content item is presented.
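The comparison of a recognition pattern against stored patterns associated with reference times might, in the simplest possible terms, look like the sketch below; a toy bit-string pattern and Hamming distance stand in for whatever audio fingerprinting and similarity metric an actual implementation would use.

```python
from typing import Dict

def hamming(a: str, b: str) -> int:
    """Count differing positions between two equal-length pattern strings."""
    return sum(x != y for x, y in zip(a, b))

def match_reference_time(query_pattern: str,
                         stored_patterns: Dict[float, str],
                         max_distance: int = 2) -> float:
    """Compare a pattern computed from the audio around the user's comment
    against stored patterns keyed by reference time, and return the
    best-matching reference time."""
    best_time, best_dist = None, max_distance + 1
    for reference_time, pattern in stored_patterns.items():
        dist = hamming(query_pattern, pattern)
        if dist < best_dist:
            best_time, best_dist = reference_time, dist
    if best_time is None:
        raise LookupError("no stored pattern matched closely enough")
    return best_time

stored = {30.0: "1011001110", 60.0: "0001110101", 90.0: "1110000011"}
print(match_reference_time("0001110111", stored))  # 60.0
```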
- In another implementation, audio content recognition may be performed on presentations of a content item to recognize and map portions of individual ones of the presentations to portions of other ones of the presentations. Because portions of one presentation are mapped to portions of other presentations (and/or vice versa), a particular portion of the content item (to which an annotation corresponds) may be determined when the corresponding portion of any one of the presentations is known.
- In yet another implementation, a master set of reference times may be maintained for portions of a content item, and used to identify a particular portion of the content item to which an annotation corresponds. For example, upon receipt of an annotation, a reference time corresponding to receipt of the annotation may be correlated to a master reference time of the master set. The annotation may then be stored in association with the correlated master reference time so that the master reference time may later be used to present the annotation when its corresponding portion of the content item is presented.
- In a further implementation, portions of presentations of a content item may be mapped to portions of other ones of the presentations, and/or mapped to a master set of reference times, using other approaches. As an example, information regarding the portions of the presentations, the reference times of the presentations that correspond to the portions, etc., may be shared by content delivery services (through which access to the presentations are provided). In one scenario, the content delivery services may each provide an application programming interface (API) through which the shared information may be obtained. The information shared by the content delivery services may then be utilized to correlate the portions of the presentations to portions of other ones of the presentations, and/or to the master set of reference times.
- In various implementations, annotations corresponding to portions of a content item may be provided to one or more social networking services (e.g., FACEBOOK, TWITTER, etc.). For example, a user may submit annotations during a presentation of a content item to a plurality of social networking services. In one scenario, for instance, a user interface may enable a user to experience a presentation of a content item, view annotations corresponding to portions of the content item during the presentation, submit annotations for portions of the content item during the presentation, select to provide a submitted annotation to different social networking services, or perform other operations. In this way, during a presentation of a content item, users may consume and share their experiences regarding the content item with subsequent viewers (or consumers) of the content item, and other users associated with social networking services.
- In some implementations, annotations may be selectively presented to a user based on one or more parameters. The parameters may, for instance, include annotation types, annotation sources (e.g., authors or other sources), annotation set identifiers, social distances, user relationship status, spatial proximity, temporal proximity, or other parameters. The parameters may be manually selected by a user, or automatically selected.
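Selective presentation based on such parameters could reduce to a filter like the following sketch; the parameter names, annotation fields, and defaults are assumptions.

```python
from typing import Dict, Iterable, List, Optional

def filter_annotations(annotations: Iterable[Dict],
                       allowed_types: Optional[set] = None,
                       allowed_sources: Optional[set] = None,
                       max_social_distance: Optional[int] = None) -> List[Dict]:
    """Keep only annotations matching the selected parameters; a parameter set
    to None imposes no restriction."""
    selected = []
    for a in annotations:
        if allowed_types is not None and a["type"] not in allowed_types:
            continue
        if allowed_sources is not None and a["source"] not in allowed_sources:
            continue
        if (max_social_distance is not None
                and a["social_distance"] > max_social_distance):
            continue
        selected.append(a)
    return selected

pool = [
    {"type": "comment", "source": "user_x", "social_distance": 1, "text": "Ha!"},
    {"type": "rating", "source": "critic_z", "social_distance": 4, "text": "3/5"},
]
print(filter_annotations(pool, allowed_types={"comment"}, max_social_distance=2))
```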
- In some implementations, annotations may be provided by users, or automatically determined. For example, in one implementation, interactions of users with a presentation of a content item may be monitored, and a characteristic (e.g., funny, boring, 4.5/5 stars, etc.) of the content item may be determined. An annotation may then be generated for the content item based on the characteristic(s). In another implementation, a reference time for the annotation may be identified. The annotation may then be stored in association with the reference time, for example, so that the annotation may be presented when the portion of the content item is presented during a subsequent presentation.
- Various other aspects, features, and advantages of the invention will be apparent through the detailed description of the invention and the drawings attached hereto. It is also to be understood that both the foregoing general description and the following detailed description are exemplary and not restrictive of the scope of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise. In addition, as used in the specification and the claims, the term “or” means “and/or” unless the context clearly dictates otherwise.
- FIG. 1 is an exemplary illustration of a system for facilitating the presentation of content items, annotations associated with the content items, or other related items, according to an aspect of the invention.
- FIGS. 2A-2C are exemplary illustrations of a user interface at different times during presentation of a content item, according to aspects of the invention.
- FIG. 3 is an exemplary illustration of different presentations of a content item, according to an aspect of the invention.
- FIGS. 4A-4C are exemplary illustrations of user interfaces for presenting a content item, interacting with the presentation of the content item, and/or interacting with a social networking service, according to aspects of the invention.
- FIGS. 5A-5C are exemplary illustrations of user interfaces for presenting a content item, interacting with the content item, and/or sharing a portion of the content item, according to aspects of the invention.
- FIGS. 6A-6C are exemplary illustrations of a user interface for textually and/or graphically depicting information related to annotations at different times during a presentation of a content item, according to aspects of the invention.
- FIGS. 7A-7C are exemplary illustrations of a user interface for presenting a content item and annotations of a dataset related to the content item, according to aspects of the invention.
- FIGS. 8A-8B are exemplary illustrations of a user interface that depicts mechanisms in annotations that enable transactions related to products or services, according to aspects of the invention.
- FIGS. 9A-9C are exemplary illustrations of a user interface for enabling reactions to annotations and/or initiating a message thread via a social networking service, and a user interface for interacting with the message thread via the social networking service, according to aspects of the invention.
- FIGS. 10A-10D are exemplary illustrations of a user interface depicting an intelligent presentation of user interface elements, according to aspects of the invention.
- FIGS. 11A-11B are exemplary illustrations of user interfaces depicting presentations of a content item to a group of users, according to aspects of the invention.
- FIG. 12 is an exemplary illustration of a flowchart of a method of creating and maintaining a database of annotations corresponding to portions of a content item, according to an aspect of the invention.
- FIG. 13 is an exemplary illustration of a flowchart of a method of generating annotations for a content item based on interactions of users with presentations of the content item, according to an aspect of the invention.
- FIG. 14 is an exemplary illustration of a flowchart of a method of providing annotations corresponding to portions of a content item to social networking services, according to an aspect of the invention.
- FIG. 15 is an exemplary illustration of a flowchart of a method of presenting annotations corresponding to portions of a content item during a presentation of the content item, according to an aspect of the invention.
- FIG. 16 is an exemplary illustration of a flowchart of a method of facilitating rewards for the creation of annotations, according to an aspect of the invention.
- FIG. 17 is an exemplary illustration of a flowchart of a method of facilitating rewards based on interactions with annotations, according to an aspect of the invention.
- FIG. 18 is an exemplary illustration of a flowchart of a method of facilitating rewards based on execution of transactions enabled via annotations, according to an aspect of the invention.
- FIG. 19 is an exemplary illustration of a flowchart of a method of providing a dataset of annotations corresponding to portions of a content item, according to an aspect of the invention.
- FIG. 20 is an exemplary illustration of a flowchart of a method of facilitating rewards based on interactions with datasets, according to an aspect of the invention.
- FIG. 21 is an exemplary illustration of a flowchart of a method of facilitating rewards based on execution of transactions enabled via datasets, according to an aspect of the invention.
- FIG. 22 is an exemplary illustration of a flowchart of a method of facilitating the sharing of portions of a content item across different content delivery services, according to an aspect of the invention.
- FIG. 23 is an exemplary illustration of a flowchart of a method of facilitating the access of a portion of a content item, according to an aspect of the invention.
- FIG. 24 is an exemplary illustration of a flowchart of a method of enabling storage of reactions to annotations, according to an aspect of the invention.
- FIG. 25 is an exemplary illustration of a flowchart of a method of initiating conversations between users based on reactions to annotations, according to an aspect of the invention.
- FIG. 26 is an exemplary illustration of a flowchart of a method of presenting user interface elements based on relevancy, according to an aspect of the invention.
- FIG. 27 is an exemplary illustration of a flowchart of a method of facilitating control of presentations of a content item to a group of users, according to an aspect of the invention.
- In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the implementations of the invention. It will be appreciated, however, by those having skill in the art that the implementations of the invention may be practiced without these specific details or with an equivalent arrangement. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the implementations of the invention.
- Exemplary System Description
-
FIG. 1 is an exemplary illustration of asystem 100 that may enable a time-shifted, on-demand vertical social network for watching, creating, and/or sharing time-shifted commentary tracks synced to any on-demand programming, according to an aspect of the invention. Particularly,system 100 may facilitate the presentation of content items, annotations associated with the content items, or other related items. - As used herein, “content items” may include movies, television episodes, portions or segments of movies or television episodes, video clips, songs, audio books, e-books, or other content items. A presentation of a content item may be provided to a user via a content delivery service such as, for example, NETFLIX, HULU, AMAZON INSTANT VIDEO, a cable provider, a local service at a user device programmed to present content items stored locally at an electronic storage of the user device (e.g., a hard drive, a CD, a DVD, etc.), or other content delivery service. Presentations of a content item may include reproductions of the content item that are of varying versions (e.g., extended versions, versions with alternative endings or scenes, etc.), reproductions of the content item with auxiliary information (e.g., advertisements, warnings, etc.), or other presentations of the content item.
- As used herein, “annotations” may include reviews, comments, ratings, markups, posts, links to other media, or other annotations. Annotations may be manually entered by a user for a content item (or a portion thereof), or automatically determined for the content item (or portion thereof) based on interactions of the user with the content item (or portion thereof), interactions of the user with other portions of the content item or other content items, or other parameters. Annotations may be manually entered or automatically determined for the content item or the content item portion either before, during, or after a presentation of the content item. Annotations may be stored as data or metadata, for example, in association with information indicative of the content item or the content item.
-
System 100 may include one or more computers and sub-systems to create and maintain a database of annotations corresponding to portions of a content item, provide annotations corresponding to portions of a content item to social networking services, facilitate sharing of portions of a content item, facilitate aggregation of annotations, modify presentations of a content item based on annotations, selectively filter annotations, create datasets of annotations corresponding to portions of a content item, incentivize creation of annotations or datasets of annotations, manage replies or other reactions to annotations, intelligently present user interface elements, facilitate group control of presentations of a content item, or otherwise enhance the experience of users with respect to presentations of content items, annotations, or other related items. - As shown in
FIG. 1 ,system 100 may comprise server 102 (or servers 102).Server 102 may compriseannotation subsystem 106, content reference subsystem 108,account subsystem 110,interaction monitoring subsystem 112,reward subsystem 114,content presentation subsystem 116, or other components. -
System 100 may further comprise a user device 104 (or multiple user devices 104 a-104 n). User device 104 may comprise any type of mobile terminal, fixed terminal, or other device. By way of example, user device 104 may comprise a desktop computer, a notebook computer, a netbook computer, a tablet computer, a smartphone, a navigation device, an electronic book device, a gaming device, or other user device. Users may, for instance, utilize one or more user devices 104 to interact withserver 102 or other components ofsystem 100. In some implementations, user device 104 may comprise user annotation subsystem 118, user content presentation subsystem 120, or other components. - It should be noted that while one or more operations are described herein as being performed by components of
server 102, those operations may, in some implementations, be performed by components of user device 104. In addition, while one or more operations are described herein as being performed by components of user device 104, those operations may, in some implementations, be performed by components ofservice 102. For example, whileserver 102 may initiate storage of an annotation in association with a reference time corresponding to a portion of a content item by providing the annotation, the reference time, and other information (e.g., instructions for storage, other parameters, etc.) to an annotation database, user device 104 may initiate storage of an annotation in association with a reference time corresponding to a portion of a content item by providing the annotation, the reference time, and other information to the server for storage at the annotation database. -
Server 102 and/or user device 104 may be communicatively coupled to one or more content delivery services 122 a-122 n, social networking services 124 a-124 n, or other services. In one implementation, one or more of content delivery services 122 a-122 n or social networking services 124 a-124 n may be hosted atserver 102 and/or user device 104. For example,server 102 may host a content delivery service 122 to provide users with access to content items or portions of content items. As another example,server 102 may host a social networking service 124 to offer a social network through which users may interact with one another, other entities of the social network, content on the social network, etc. In another implementation, one or more of content delivery services 122 a-122 n or social networking services 124 a-124 n may be hosted remotely fromserver 102 and/or user device 104. - In some implementations, the various computers and subsystems illustrated in
FIG. 1 may comprise one or more computing devices that are programmed to perform the functions described herein. The computing devices may include one or more electronic storages (e.g.,electronic storage 126 or other electric storages), one or more physical processors programmed with one or more computer program instructions, and/or other components. The computing devices may include communication lines, or ports to enable the exchange of information with a network or other computing platforms. The computing devices may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to the servers. For example, the computing devices may be implemented by a cloud of computing platforms operating together as the computing devices. - The electronic storages may comprise non-transitory storage media that electronically stores information. The electronic storage media of the electronic storages may include one or both of system storage that is provided integrally (e.g., substantially non-removable) with the servers or removable storage that is removably connectable to the servers via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storages may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storages may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storage may store software algorithms, information determined by the processors, information received from the servers, information received from client computing platforms, or other information that enables the servers to function as described herein.
- The processors may be programmed to provide information processing capabilities in the servers. As such, the processors may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. In some implementations, the processors may include a plurality of processing units. These processing units may be physically located within the same device, or the processors may represent processing functionality of a plurality of devices operating in coordination. The processors may be programmed to execute computer program instructions to perform functions described herein of
subsystems - It should be appreciated that the description of the functionality provided by the
different subsystems subsystems subsystems subsystems subsystems - Attention will now be turned to a more detailed description of various implementations comprising one or more features relating to facilitating the presentation of content items, annotations associated with the content items, or other related items. It should be noted that features described herein may be implemented separately or in combination with one another.
- Creating and Maintaining a Database of Annotations
- In various implementations, a database of annotations that correspond to portions of a content item may be created and/or maintained. By way of example, the database of annotations may comprise annotations received during presentations of a content item that are provided via at least first and/or second content delivery services. An annotation in the database may, for instance, correspond to a time at which a first portion of a content item is presented via the first content delivery service (e.g., NETFLIX), and another annotation in the database may correspond to a time at which a second portion of the content item is presented via the second content delivery service (e.g., HULU). The annotations may be stored in the database respectively in association with reference times that correspond to portions of the content item. In this way, reference times associated with annotations may be utilized to provide the annotations such that the annotations are presented in a time-synchronized fashion with corresponding portions of a content item (e.g., for which the annotations are received) during subsequent presentations of the content item. As such, annotations that are submitted by prior users (e.g., prior viewers, listeners, etc.) during prior presentations of the content item may be presented to subsequent users as the subsequent users are experiencing corresponding portions of the content item (e.g., portions that correspond to reference times associated with the annotations). Among other benefits, reactions of users to portions of the content item (e.g., captured in the form of annotations) may be shared with other users regardless of the time at which the content item is experienced by users that submit the annotations, or regardless of the time at which the content item is experienced by users that are presented with the submitted annotations. In this manner, users that experience a content item after annotations have been submitted by other users can do so without having to worry about prior annotations “spoiling” the user experience.
- In addition, as illustrated in
FIGS. 2A-2C , reactions of users to portions of a content item may be shared with other users during subsequent presentations of the content item even though the reactions were submitted (e.g., in the form of annotations) during prior presentations of the content item. It should be appreciated that any number of annotations can be received from any number of users in any order. As such, any examples as set forth herein are for illustrative purposes only, and not intended to be limiting. - In one use case, with respect to
FIGS. 2 A-2C, user interface 202 (e.g., of an application hosted atserver 102, of an application hosted at user device 104, etc.) may present a content item to a user. During the presentation of the content item,user interface 202 may present annotations (e.g.,Annotations Annotations control element 204 onpresentation time bar 206. A second reference time associated withAnnotation 2A may be represented by a second position ofcontrol element 204 onpresentation time bar 206. A third reference time associated with Annotations 3A and 3B may be represented by a third position ofcontrol element 204 onpresentation time bar 206. - As indicated in
FIG. 2A , for example,Annotation 1A may have been submitted by User X as User X was watching a first portion (Portion A) of a content item that corresponds to a first reference time during a presentation of the content item provided via Content Delivery Service #1 (e.g., NETFLIX).Annotation 1B may have been submitted by User X as User X was watching the first portion of the content item (that corresponds to the first reference time) during a presentation of the content item provided via Content Delivery Service #2 (e.g., HULU). - As indicated in
FIG. 2B ,Annotation 2A may have been submitted by User Y as User Y was watching a second portion (Portion B) of the content item that corresponds to a second reference time during a presentation of the content item provided via Content Delivery Service #3 (e.g., a local service at User Y's user device that presents a DVD version of the content item). - As indicated in
FIG. 2C , Annotation 3A may have been submitted by User X as User X was watching a third portion (Portion C) of the content item that corresponds to a third reference time during a presentation of the content item provided via ContentDelivery Service # 1. Annotation 3B may been submitted by User Y as User Y was watching the third portion of the content item (that corresponds to the third reference time) during a presentation of the content item provided via ContentDelivery Service # 3. Nevertheless, despite the annotations being provided by users during presentations via different content delivery services, each ofAnnotations - As illustrated in
FIG. 3 , different content delivery services may provide different presentations of the same content item (e.g.,presentations - In one scenario, for instance,
presentations presentations presentations auxiliary information 316 and 318 (e.g., advertisements or other auxiliary information) where the auxiliary information of thedifferent sets Presentations presentation 302, and are further different from one another's version of the content item. For example, the version of the content item inpresentation 308 includeadditional portions 320 of the content item that are not inpresentations presentation 310 includeadditional portions 322 that are not inpresentation - Nevertheless, in some implementations, reference times that correspond to portions of a content item may be utilized to present annotations regardless of the differences between the presentations of the content item that were provided to annotating users when the users submitted the annotations. The reference times on which presentations of the annotations are based may, for example, comprise a master set of reference times (e.g., reference set 312) with which other reference times (associated with different presentations) may be compared to identify a reference time from the master set with which an annotation is to be associated. As an example, the master set may include master reference times that correspond to portions of a content item where the master reference times are independent of the content delivery service through which a presentation of the content item is provided.
- In one use case, with respect to
FIG. 3 , reference set 312 may represent a master set of reference times associated with a content item. By way of example,master reference time 1 in reference set 312 may correspond toreference time 1 ofpresentation 302,reference time 1 ofpresentation 304,reference time 3 ofpresentation 306,reference time 1 ofpresentation 308, andreference time 1 ofpresentation 310. As such, annotations that are submitted by users attime 1 duringpresentation 302,time 1 duringpresentation 304,time 3 duringpresentation 306,time 1 duringpresentation 308, andtime 1 duringpresentation 310 may all be stored in association withmaster reference time 1 corresponding to a first portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the first portion is presented. - As another example,
master reference time 6 may correspond to reference time 6 of presentation 302, reference time 7 of presentation 304, reference time 9 of presentation 306, reference time 8 of presentation 308, and reference time 8 of presentation 310. Annotations that are submitted by users at time 6 during presentation 302, time 7 during presentation 304, time 9 during presentation 306, time 8 during presentation 308, and time 8 during presentation 310 may all be stored in association with master reference time 6 corresponding to a second portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the second portion is presented. - As another example,
master reference time 11 may correspond to reference time 11 of presentation 302, reference time 13 of presentation 304, reference time 15 of presentation 306, reference time 13 of presentation 308, and reference time 13 of presentation 310. As a result, annotations that are submitted by users at time 11 during presentation 302, time 13 during presentation 304, time 15 during presentation 306, time 13 during presentation 308, and time 13 during presentation 310 may all be stored in association with master reference time 11 corresponding to a third portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the third portion is presented. - As yet another example,
master reference time 16 may correspond to reference time 16 of presentation 302, reference time 19 of presentation 304, reference time 20 of presentation 306, reference time 18 of presentation 308, and reference time 18 of presentation 310. As such, annotations that are submitted by users at time 16 during presentation 302, time 19 during presentation 304, time 20 during presentation 306, time 18 during presentation 308, and time 18 during presentation 310 may all be stored in association with master reference time 16 corresponding to a fourth portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the fourth portion is presented. - As a further example,
master reference time 21 may correspond to reference time 21 of presentation 302, reference time 24 of presentation 304, reference time 25 of presentation 306, reference time 24 of presentation 308, and reference time 24 of presentation 310. Thus, annotations that are submitted by users at time 21 during presentation 302, time 24 during presentation 304, time 25 during presentation 306, time 24 during presentation 308, and time 24 during presentation 310 may all be stored in association with master reference time 21 corresponding to a fifth portion of the content item so that the annotations may be presented during a subsequent presentation of the content item when the fifth portion is presented. - In another use case, with respect to
FIG. 3, master reference set 312 may comprise master reference times that correspond to the additional portions of presentations 308 and 310. As an example, annotations that are submitted by users at time 6 during presentation 308 may be stored in association with master reference time 22 (corresponding to the additional portion presented at time 6 during presentation 308) so that the annotations may be presented during a subsequent presentation of the content item (if and) when the additional portion is presented. - As another example, annotations that are submitted by users at
time 6 duringpresentation 310 may be stored in association with master reference time 25 (corresponding to the additional portion presented attime 6 during presentation 310) so that the annotations may be presented during a subsequent presentation of the content item (if and) when the additional portion is presented. In this way, regardless of differences of presentations that may be provided via different content delivery services, a set of annotations submitted for a portion of a content item during prior presentations of the content item may be presented during a subsequent presentation of the content item to a user when the subsequent presentation to the user reaches the reference time corresponding to the portion of the content item for which the set of annotations are submitted. - In some implementations, a first presentation of a content item may be utilized as a reference (e.g., as the master reference) for other presentations of the content item. For example, portions of the content item in the first presentation may be mapped to corresponding portions of the content item in a second presentation. The mapping of the first and second presentations may then be utilized to store annotations inputted during the second presentation in association with reference times corresponding to portions of the content item in the first presentation. When a subsequent presentation of the content item is initiated, the reference times may be utilized to present the annotations when corresponding portions of the content item are presented during the subsequent presentation by mapping the reference times to portions of the content item in the subsequent presentation.
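- The following is a minimal sketch of this mapping approach, assuming simple in-memory structures; the table names, the store_annotation and annotations_for helpers, and the field layout are illustrative assumptions rather than the actual annotation storage described above. The presentation-to-master table loosely follows the FIG. 3 example (e.g., time 7 of presentation 304 corresponds to master reference time 6).

# Minimal sketch: map presentation-specific reference times to master
# reference times so annotations submitted via different content delivery
# services are stored against the same portion of the content item.
# All structures and names are hypothetical.

# Presentation-specific reference time -> master reference time,
# loosely following FIG. 3 (presentation 304 time 7 -> master time 6).
PRESENTATION_TO_MASTER = {
    "presentation_302": {1: 1, 6: 6, 11: 11, 16: 16, 21: 21},
    "presentation_304": {1: 1, 7: 6, 13: 11, 19: 16, 24: 21},
    "presentation_306": {3: 1, 9: 6, 15: 11, 20: 16, 25: 21},
}

# Master reference time -> annotations submitted for the corresponding portion.
ANNOTATION_STORE: dict[int, list[dict]] = {}


def store_annotation(presentation_id: str, presentation_time: int,
                     user_id: str, text: str) -> int:
    """Store an annotation under the master reference time that corresponds
    to the presentation-specific time at which it was submitted."""
    master_time = PRESENTATION_TO_MASTER[presentation_id][presentation_time]
    ANNOTATION_STORE.setdefault(master_time, []).append(
        {"user": user_id, "text": text, "source_presentation": presentation_id}
    )
    return master_time


def annotations_for(presentation_id: str, presentation_time: int) -> list[dict]:
    """Look up annotations to show when a subsequent presentation reaches
    the portion corresponding to the given presentation-specific time."""
    master_time = PRESENTATION_TO_MASTER[presentation_id][presentation_time]
    return ANNOTATION_STORE.get(master_time, [])


# Annotations submitted at time 6 of presentation 302 and at time 7 of
# presentation 304 end up under the same master reference time (6) and are
# both returned when presentation 306 reaches its time 9.
store_annotation("presentation_302", 6, "User X", "Great scene!")
store_annotation("presentation_304", 7, "User Y", "Agreed.")
print(annotations_for("presentation_306", 9))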
- In one scenario, for example, audio content recognition of a portion of a movie may be performed in response to a comment submitted by a user when the portion of the movie was presented to the user. The result of the audio content recognition (e.g., an audio pattern, a visual pattern, or other result) may then be compared to stored reference patterns associated with reference times corresponding to portions of the movie to identify the portion of the movie and the reference time corresponding to that movie portion. Upon identification of the corresponding reference time, the annotation may be stored in association with the reference time so that the reference time may be utilized in the future to present the annotation when the portion of the movie is presented during subsequent presentations of the movie.
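- A minimal sketch of that lookup is shown below, assuming a placeholder fingerprint() function stands in for actual audio or visual content recognition; the function names, the pattern store, and the sample inputs are hypothetical, not the recognition technique itself.

# Sketch of resolving an annotation's reference time via content recognition.
# A real system would compute an acoustic or visual fingerprint of the media
# around the moment the comment was submitted; here a placeholder hash stands
# in for that step, and all names are assumptions for illustration.
import hashlib
from typing import Optional


def fingerprint(media_sample: bytes) -> str:
    # Placeholder for audio/visual content recognition.
    return hashlib.sha256(media_sample).hexdigest()


# Stored reference patterns for portions of the movie, keyed by fingerprint,
# each mapped to the master reference time of the corresponding portion.
REFERENCE_PATTERNS = {
    fingerprint(b"audio-around-portion-a"): 1,
    fingerprint(b"audio-around-portion-b"): 6,
    fingerprint(b"audio-around-portion-c"): 11,
}


def resolve_reference_time(media_sample: bytes) -> Optional[int]:
    """Compare the recognition result to stored reference patterns and return
    the reference time of the matching portion, if any."""
    return REFERENCE_PATTERNS.get(fingerprint(media_sample))


def store_comment(media_sample: bytes, user_id: str, text: str,
                  store: dict) -> None:
    reference_time = resolve_reference_time(media_sample)
    if reference_time is not None:
        store.setdefault(reference_time, []).append({"user": user_id, "text": text})


comments: dict = {}
store_comment(b"audio-around-portion-b", "User X", "Love this part", comments)
print(comments)   # {6: [{'user': 'User X', 'text': 'Love this part'}]}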
- According to an aspect of the invention,
annotation subsystem 106 may be programmed to receive a first annotation that corresponds to a time at which a first portion of a content item is presented via a first content delivery service, and/or receive a second annotation that corresponds to a time at which the first portion of the content item is presented via a second content delivery service. The first and second annotations may, for example, be received atannotation subsystem 106 from one or more user devices at which the first and second annotations are inputted by one or more users. - In some implementations, the presentation via the first content delivery service may correspond to a first presentation that includes the first portion of the content item. The presentation via the second content delivery service may correspond to a second presentation that includes the first portion of the content item. The first presentation (provided via the first content delivery service) and the second presentation (provided via the second content delivery service) may be the same or different than one another. In one use case, the first presentation may include a first portion of a content item but not a second portion of the content item, while the second presentation may include both the first and the second portions of the content item. As shown in
FIG. 3, for example, presentation 302 may not include additional portion 320, while presentation 308 does include additional portion 320. - In another use case, the first presentation may include the first portion of the content item and first auxiliary information (e.g., first advertisement), and the second presentation may include the first portion of the content item and second auxiliary information (e.g., second advertisement). As shown in
FIG. 3, for example, presentation 304 may include a first set of auxiliary information 316, while presentation 306 may include a second set of auxiliary information 318. - In various implementations,
annotation subsystem 106 may be programmed to initiate storage of the first annotation in association with a first reference time that corresponds to the first portion of the content item, and/or initiate storage of the second annotation in association with the first reference time. In some implementations,annotation subsystem 106 may be programmed to receive a third annotation corresponding to a time at which a second portion of the content item is presented (e.g., via the first content delivery service, the second content delivery service, or a third content delivery service), and initiate storage of the third annotation in association with a second reference time corresponding to the second portion of the content item. In addition, in one implementation, the annotations may be stored in association with other information, such as an identifier of the content item for which the annotation is submitted, identifiers of the sources from which the annotations are received, an identifier of the content delivery service that provided the presentation of the content item during which the annotation is submitted by a user, or other information. - In some implementations, content reference subsystem 108 may be programmed to identify a set of reference times corresponding to portions of the content item. Content reference subsystem 108 may be programmed to identify, based on the set of reference times, the first reference time as a reference time for the first annotation, the first reference time as a reference time for the second annotation, and/or the second reference time as a reference time for the third annotation. Upon identification of the respective reference times, the annotations may be stored in association with the respective reference times and/or other information (e.g., an identifier of the content item, identifiers of the sources from which the annotations are received, an identifier of the content delivery service that provided the presentation of the content item, etc.). As an example, at least one of the first or second presentations of the content item may be associated with another set of reference times that correspond to portions of the first and/or second presentations. As such, content reference subsystem 108 may correlate the identified set of reference times with the other set of reference times to determine a mapping between the reference times of the two different set of reference times. The mapping may then be utilized to identify the first reference time as a reference time for the first annotation, the first reference time as a reference time for the second annotation, and/or the second reference time as a reference time for the third annotation.
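- As a rough illustration of the kind of record that might be stored, the sketch below bundles an annotation with the associated reference time, content item identifier, source identifier, and content delivery service identifier; the dataclass names and the in-memory store are assumptions for illustration only.

# Sketch of an annotation record and a simple store for it.
# The structures are hypothetical stand-ins, not the actual database schema.
from dataclasses import dataclass, field


@dataclass
class AnnotationRecord:
    text: str
    reference_time: int          # reference time of the annotated portion
    content_item_id: str         # identifier of the content item
    source_id: str               # user or other source of the annotation
    delivery_service_id: str     # service that provided the presentation


@dataclass
class AnnotationDatabase:
    records: list = field(default_factory=list)

    def initiate_storage(self, record: AnnotationRecord) -> None:
        self.records.append(record)

    def for_reference_time(self, content_item_id: str, reference_time: int) -> list:
        return [r for r in self.records
                if r.content_item_id == content_item_id
                and r.reference_time == reference_time]


db = AnnotationDatabase()
db.initiate_storage(AnnotationRecord("Nice opening!", 1, "CI_1", "User X", "CDS_1"))
db.initiate_storage(AnnotationRecord("Agreed.", 1, "CI_1", "User X", "CDS_2"))
print(len(db.for_reference_time("CI_1", 1)))   # 2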
- In one use case, with respect to
FIG. 3 , content reference subsystem 108 may utilize the reference times ofpresentation 302 as at least part of a master set of reference times corresponding to portions of the content item with which other sets of reference times are mapped. For example,annotation subsystem 106 may receive, from user device 104, an annotation inputted via user device 104 duringpresentation 304 and information indicating that the annotation is associated with reference time 7 of presentation 304 (e.g., the annotation was inputted at time 7 duringpresentation 304, the annotation was inputted for a portion ofpresentation 304 that corresponds to time 7, etc.). Content reference subsystem 108 may then identifyreference time 6 ofpresentation 302 as a reference time for the annotation based on a determination that the annotation is associated with reference time 7 ofpresentation 304.Annotation subsystem 106 may thereafter store the annotation in association withreference time 6. - In certain implementations, first and second annotations may be stored in association with a first reference time corresponding to a first portion of a content item. A third annotation may be stored in association with a second reference time corresponding to a second portion of the content item. As such, in some implementations,
annotation subsystem 106 may be programmed to provide, based on the first reference time, the first annotation such that the first annotation is presented when the first portion of the content item is presented during a subsequent presentation of the content item.Annotation subsystem 106 may be programmed to provide, based on the first reference time, the second annotation such that the second annotation is presented when the first portion of the content item is presented during the subsequent presentation.Annotation subsystem 106 may be programmed to provide, based on the second reference time, the third annotation such that the third annotation is presented when the second portion of the content item is presented during the subsequent presentation of the content item. - In various implementations,
annotation subsystem 106 may be programmed to identify one or more annotations of a first user regarding a content item, for example, in response to an initiation of a presentation of the content item by a second user associated with the first user (e.g., the second user may be a friend of the first user in a social network, a contact of the first user, or associated with the first user in some manner). In one scenario,interaction monitoring subsystem 112 may detect the initiation of a presentation of the content item by the second user. Upon detection,annotation subsystem 106 may be caused to identify stored annotations of the second user's friends (e.g., including annotations of the first user) in a social network, and transmit the annotations to a user device of the second user for display during the presentation of the content item to the second user. - In certain implementations,
interaction monitoring subsystem 112 may be programmed to monitor interactions of users with presentations of a content item, and/or determine a characteristic of the content item based on the interactions.Annotation subsystem 106 may be programmed to generate an annotation for the content item based on the characteristic. In some implementations, content reference subsystem 108 may be programmed to identify, based on the interactions, a reference time for the annotation.Annotation subsystem 106 may be programmed to initiate storage of the annotation in association with the reference time. - As an example, with respect to
FIGS. 2A-2C, interaction monitoring subsystem 112 may determine that a majority of users that watch Content Item 1 activate the "Share Scene" button 212 to share a particular portion (e.g., Portions A, B, C, or other portions) with other users. As a result, annotation subsystem 106 may generate the comment (or annotation) "This portion is frequently shared!" and store the comment in association with a reference time corresponding to the frequently shared portion. Thus, when the frequently shared portion is subsequently presented to other users, the comment that the portion is frequently shared may encourage the other users to share the portion with their contacts. - In another use case,
interaction monitoring subsystem 112 may determine that a majority of users skip over a particular scene in a movie when watching the movie. As a result, annotation subsystem 106 may generate the comment (or annotation) "This scene is often skipped over" and store the comment in association with a reference time corresponding to the often-skipped portion. Thus, when the often-skipped-over portion is subsequently presented to other users, the comment that the portion is often skipped over may inform the other users that the portion may not be worth watching. - In another use case,
interaction monitoring subsystem 112 may determine that a significant number of a user's "friends" (or other associated set of users) watched a particular episode within a certain time period (e.g., 10% of the user's friends in a social network watched the episode within the last 24 hours). As such, annotation subsystem 106 may generate the comment (or annotation) "This episode has recently been really popular with your friends!" and store the comment in association with the episode. As a result, the comment that the episode has recently been really popular may be presented to the user before, during, or after the user watches the episode. For example, the comment may be presented to the user when the user is deciding what episode to watch, when the user is shown a promotion for the episode, when the user initiates a presentation of the episode, at the end of or after the presentation of the episode, or at other times. Other thresholds of viewership may be used (e.g., 20%, 30%, etc.) to trigger such a comment. Further, a similar comment may be generated based on total viewership (e.g., for all viewers, not necessarily limited to a subset of users such as friends). - In another use case,
interaction monitoring subsystem 112 may determine that a user has binge-watched every episode of a particular show for a season during a weekend. As a result, annotation subsystem 106 may generate a comment (or annotation) indicating that the user has binge-watched every episode of the season during a single weekend, and store the comment in association with one or more of the episodes. Subsequently, when the user's friends watch an episode of the season, they may be presented with that comment, or with another comment noting the captivating nature of the show, to encourage them to continue watching the rest of the episodes of the season. - Providing Annotations to Social Networking Services
- According to an aspect of the invention, users may be enabled to provide annotations received during presentations of a content item (and/or other information related to the content item) to a plurality of social networking services (e.g., FACEBOOK, TWITTER, etc.).
- In certain implementations, user device 104 may be programmed to initiate a presentation of a content item. For example, with respect to
FIGS. 4A-4B , a user may execute an application installed on user device 104 to launchuser interface 202.User interface 202 may enable the user to initiate a presentation of a content item onuser interface 202 and/or user interface 402 (e.g., using the play feature of the play/pause button 208). In one scenario, for instance,user interface 202 may be used to display and control a presentation of a content item, view annotations corresponding to portions of the content item during the presentation of the content item, input annotations for portions of the content item, or perform other operations. - In another scenario,
user interface 402 may be used to display a presentation of a content item, anduser interface 202 may enable a second screen experience for the user. For example,user interface 202 may enable the user to control the presentation of the content item (displayed on user interface 402), view annotations corresponding to portions of the content item during the presentation of the content item, input annotations for portions of the content item, or perform other operations.User interface 202 may, for instance, be a user interface that is displayed on user device 104, whileuser interface 402 may be a user interface that is displayed on another user device. - In various implementations, user device 104 may be programmed to receive a first annotation at a time during which a first portion of the content item is presented, initiate storage of the first annotation in association with a first reference time corresponding to the first portion of the content item, and/or provide the first annotation to a first social networking service. In some implementations, user device 104 may be programmed to receive a second annotation at a time during which a second portion of the content item is presented, initiate storage of the second annotation in association with a second reference time corresponding to the second portion of the content item, and/or provide the second annotation to a second social networking service.
- As an example, with respect to
FIG. 4B, user interface 202 may provide an "Add Annotation" button 210 that enables a user to submit an annotation corresponding to a portion of Content Item 1 that is currently being presented to the user (e.g., Portions A, B, C, or other portion). Upon activation of button 210, user interface 202 may provide the user with an annotation window 404 where the user may enter the user's reaction to (or comment concerning) a portion of the content item and/or select a thumbs-up or thumbs-down (or other "like" or "dislike" indication) rating for the portion of the content item. Annotation window 404 may further enable the user to submit the annotation comprising at least one of the textual reaction or the thumbs-up/thumbs-down rating to one or more of the available social networking services. As shown in user interface 406 of FIG. 4C, the user has submitted via user interface 202 both the textual reaction and a thumbs-up rating to Social Networking Service #3. The submission may, for example, cause the textual reaction and the thumbs-up rating to appear on user interface 406 (e.g., the user's page on Social Networking Service #3), along with storage of the textual reaction and the thumbs-up rating in association with a reference time that corresponds to the portion of the content item that was presented to the user when button 210 was activated. - In a further use case, the user in the above example (depicted in
FIGS. 4A-4C) may submit another annotation at a later time during the presentation of Content Item 1 by activating the "Add Annotation" button 210 at the later time, and submitting an annotation via annotation window 404. The user may, however, choose to submit the later annotation to another social networking service (e.g., a social networking service other than Social Networking Service #3). - In another use case,
user interface 202 may enable a user to provide an annotation for a content item (and other information related to the content item) to a social networking service. For example, when a user submits an annotation to a social networking service (e.g., Social Networking Service #3), user interface 202 may provide the annotation along with an identifier of the user, an identifier of the content item for which the annotation is submitted, an identifier of the content delivery service through which the presentation is provided, a reference time corresponding to a portion of the content item for which the annotation is submitted, a link to the portion of the content item, or other information. As a further example, if a link to the portion of the content item is provided along with the annotation to Social Networking Service #3, the link may be posted along with the annotation on the user's page at Social Networking Service #3. As such, other users having access to the user's page may utilize the link to jump to the portion of the content item using a content delivery service that is available to the other users to see the portion of the content item to which the user's annotation is related. - Sharing Portions of a Content Item
- According to an aspect of the invention, a user may share access to portions of a content item across a plurality of content delivery services. For example, a sharing user may share access to a portion of a content item to a recipient user even when the sharing user and the recipient user do not have access to the same content delivery service (e.g., the sharing user uses NETFLIX while the recipient user uses HULU).
- In various implementations,
content presentation subsystem 116 may be programmed to receive, during a first presentation of a content item via a first content delivery service, a request to provide information to enable access to a first portion of the content item.Content presentation subsystem 116 may be programmed to associate a first reference time with the first portion of the content item. The first reference time may, for example, correspond to a time at which the first portion of the content item is presented via the first content delivery service.Content presentation subsystem 116 may be programmed to generate, based on the first reference time, reference information that enables access to the first portion of the content item in a second presentation of the content item via a second content delivery service. - In some implementations,
content presentation subsystem 116 may be programmed to receive the request from a first user device associated with a first user (e.g., user device 104 a), and/or provide the reference information to a second user device associated with a second user (e.g., user device 104 b) such that the reference information enables the second user to access the first portion of the content item via the second content delivery service. In one use case, the reference information may be independent of the content delivery service that may be used by the second user to access the first portion of the content item. The reference information may, for example, indicate the content item (e.g., content item identifier), the first reference time, the first portion (e.g., scene identifier determined based on the first reference time), or other information. The indication of the content item and at least one of the indications of the first reference time or the first portion may be utilized to access the first portion of the content item via the secondary content delivery service. - In another use case, the reference information may be specific to the second content delivery service (e.g., a direct link to the first portion of the content item stored at the second content delivery service or other reference information). For example, a content item identifier of the content item and the first reference time may be processed to determine a presentation-specific start reference time when the first portion of the content item is presented via the second content delivery service. The reference information may then be generated to indicate the content item, the presentation-specific reference time, the second content delivery service, or other information.
- By way of example, with respect to
FIGS. 5A-5B, a presentation of Content Item 1 via Content Delivery Service #1 may be displayed on user interface 202 and/or user interface 402. As shown in FIG. 5B, user interface 202 may provide a "Share Scene" button 212 that enables a user to share a portion of Content Item 1 with other users. Upon activation of button 212, user interface 202 may provide the user with a recipient selection window 502 where the user may select a recipient user from a drop-down menu, or enter a recipient user's email address. - In one use case, in response to a selection of a recipient user (e.g., using the drop-down menu, the recipient's email address, etc.), user device 104 may generate a request to provide the recipient user with information to enable the recipient user to access Portion B of Content Item 1 (e.g., Portion B was playing or presented when
button 212 was activated, Portion B corresponds to a start and/or end time manually entered by the user, etc.). Thereafter, user device 104 may transmit the request to server 202. The request may include an item identifier associated with Content Item 1, a start reference time corresponding to Portion B, an end reference time corresponding to Portion B, a portion identifier associated with Portion B, or other information. - Upon receipt of the request from user device 104,
content presentation subsystem 116 may process the request to generate a link (or other reference information) associated with Portion B. The portion link may, for instance, be independent of the content delivery service that the recipient user may utilize to access Portion B. As shown in user interface 504, an automated message comprising a portion link (e.g., the hyperlink embedded in "CLICK HERE") is provided to the recipient user to enable the recipient user to access Portion B of Content Item 1 via Content Delivery Service #2. The portion link may, for instance, include the link "http://CDSIndepentSite.com/[CI1_ID]/[Master_Ref_Time_Corr_To_Portion_B]" or other link. As an example, clicking on the portion link may cause the recipient user's device to execute an application associated with Content Delivery Service #2 and begin rendering a presentation of Content Item 1 at a time corresponding to a start time of Portion B. - In another use case, a selection of a recipient user using user device 104 may be received by user device 104 as a request to provide the recipient user with information to enable the recipient user to access Portion B of
Content Item 1. User device 104 may then generate a link (or other reference information) to Portion B (e.g., a portion link that is independent of the content delivery service that the recipient user may utilize to access Portion B). User device 104 may thereafter transmit the portion link as part of a message (e.g., via email, short message service (SMS), multimedia messaging service (MMS), social networking service, etc.) to the recipient user. The message may comprise the portion link along with other information. As an example, when the recipient user clicks on the portion link, the recipient user's device may execute an application associated with ContentDelivery Service # 2 and begin rendering a presentation ofContent Item 1 at a time corresponding to a start time of Portion B. - As discussed, in some implementations,
content presentation subsystem 116 may be programmed to receive, during a first presentation of a content item via a first content delivery service, a request to provide information to enable access to a first portion of the content item. In some implementations, content reference subsystem 108 may be programmed to identify a set of reference times corresponding to portions of the content item (e.g., a master set of reference times). Content reference subsystem 108 may be programmed to identify which reference time of the set of reference times corresponds to the first portion of the content item. Upon identification of the corresponding reference time (for the first portion), reference information that enables access to a second presentation of the content item via a second content delivery service may be generated based on the corresponding reference time. - As an example, at least one of the first or second presentations of the content item may be associated with another set of reference times that correspond to portions of the first and/or second presentations. The identified set of reference times may, for instance, include master reference times that correspond to portions of the content item independently of a content delivery service, while the other set of reference times include reference times that are specific to a presentation of the content item provided via a content delivery service. As such, content reference subsystem 108 may correlate the identified set of reference times with the other set of reference times to determine a mapping between the reference times of the two different set of reference times. The mapping may then be utilized to identify a corresponding master reference for the first portion of the content item.
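- A minimal sketch of generating such service-independent reference information is shown below; the mapping table, the URL format, and the function names are assumptions for illustration (the table loosely follows the FIG. 3 use case in which times 7 and 8 of presentation 304 correspond to master reference times 6 and 7).

# Sketch of building content-delivery-service-independent reference
# information for a shared scene, based on master reference times.
# The mapping table and the link format are hypothetical.

# Presentation-specific reference time -> master reference time.
PRESENTATION_304_TO_MASTER = {7: 6, 8: 7}


def generate_scene_reference(content_item_id: str,
                             start_time: int, end_time: int) -> dict:
    """Translate presentation-specific start/end times into master reference
    times and package them as service-independent reference information."""
    return {
        "content_item": content_item_id,
        "start_reference_time": PRESENTATION_304_TO_MASTER[start_time],
        "end_reference_time": PRESENTATION_304_TO_MASTER[end_time],
    }


def scene_link(reference: dict) -> str:
    # Hypothetical link format; any service able to resolve master reference
    # times for the content item could honor such a link.
    return (f"https://example.invalid/{reference['content_item']}"
            f"/{reference['start_reference_time']}-{reference['end_reference_time']}")


ref = generate_scene_reference("CI_1", start_time=7, end_time=8)
print(scene_link(ref))   # https://example.invalid/CI_1/6-7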
- In one use case, with respect to
FIG. 3, content reference subsystem 108 may utilize the reference times of presentation 302 as at least part of a master set of reference times corresponding to portions of the content item with which other sets of reference times are mapped. For example, content presentation subsystem 116 may receive, from user device 104, a request to share a link to a scene of the content item that corresponds to a start time 7 and an end time 8 of presentation 304. Content reference subsystem 108 may identify reference time 6 of presentation 302 as a start reference time for the scene and reference time 7 of presentation 302 as an end reference time for the scene based on a determination that the scene corresponds to start time 7 and end time 8 of presentation 304. An identifier of the content item, reference time 6 of presentation 302, and reference time 7 of presentation 302 may be utilized to generate the scene link. - As discussed, in some implementations,
content presentation subsystem 116 may be programmed to generate reference information that is specific to a content delivery service. For example, upon receipt of a request from a first user to provide reference information to enable a second user to access a first portion of a content item,content presentation subsystem 116 may identify a content delivery service through which access to the first portion of the content item is available to a second user. The content delivery service may, for instance, be identified based on a determination that the second user has an account associated with the content delivery service.Content presentation subsystem 116 may be programmed to generate the reference information based on a first reference time corresponding to the first portion of the content item and the identification of the content delivery service. - In one scenario, for example,
content presentation subsystem 116 may obtain account information associated with the second user that identifies content delivery service(s) with which the second user has account(s). After determining that the second user has an account with a given content delivery service,content presentation subsystem 116 may generate the reference information specifically for the given content delivery service based on a first reference time corresponding to the first portion of the content item. - Accessing Portions of a Content Item
- According to an aspect of the invention, a user may access a portion of a content item via a content delivery service based on reference information. For example, in some implementations, a portion of a content item may be accessed via a content delivery service based on reference information that is independent of the content delivery service to access the portion of the content item. The same reference information may, for example, be utilized to access a portion of a content item via different content delivery services.
- In some implementations, user content presentation subsystem 120 may be programmed to receive reference information related to a first portion of a content item. In one implementation, the reference information may be generated based on a user input that occurred during a first presentation of the content item via a first content delivery service (e.g., NETFLIX). The user input and/or a time of the user input may, for example, correspond to a presentation-specific reference time at which the first portion of the content item is presented during the first presentation. The reference information may then be generated based on the presentation-specific reference time to include information indicating the content item (e.g., content item identifier), the first portion (e.g., scene identifier), the presentation-specific reference time, a master reference time corresponding to the presentation-specific reference time, or other information.
- In some implementations, user content presentation subsystem 120 may be programmed to identify a second content delivery service (e.g., HULU) through which access to the first portion of the content item (in a second presentation of the content item) is available. User content presentation subsystem 120 may be programmed to provide, based on the reference information, the first portion of the content item (in the second presentation) via the second content delivery service.
- In one implementation, for example, the reference information may be generated based on input from a first user during the first presentation of the content item. User content presentation subsystem 120 may be programmed to identify a second user to which the first portion of the content item (in the second presentation) is to be provided. Based on the identification of the second user, user content presentation subsystem 120 may identify the second content delivery service as a content delivery service through which access to the first portion of the first content item (in the second presentation of the first content item) is available to the second user.
- As an example, user content presentation subsystem 120 may identify content delivery service(s) with which the second user has an account. Based on the identified content delivery service(s), user content presentation subsystem 120 may determine which (if any) of the content delivery service(s) provide access to the first portion of the content item. If, for instance, one of the identified content delivery service(s) provides access to the first portion of the content item, then the content delivery service (e.g., the second content delivery service) may be identified as a content delivery service that the second user can use to access the first portion of the content item (e.g., and, thus, available to the second user).
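- A minimal sketch of that selection step is shown below, assuming hypothetical account and catalog data: pick a content delivery service with which the user has an account and through which the content item is available.

# Sketch of choosing a content delivery service for the recipient user.
# The account and catalog data are hypothetical stand-ins.
from typing import Optional

USER_ACCOUNTS = {
    "user_2": ["CDS_2", "CDS_3"],   # services the recipient has accounts with
}

SERVICE_CATALOG = {
    "CDS_1": {"CI_1", "CI_2"},
    "CDS_2": {"CI_1"},
    "CDS_3": {"CI_3"},
}


def identify_delivery_service(user_id: str, content_item_id: str) -> Optional[str]:
    """Return a content delivery service through which the given user can
    access the content item, if any."""
    for service in USER_ACCOUNTS.get(user_id, []):
        if content_item_id in SERVICE_CATALOG.get(service, set()):
            return service
    return None


print(identify_delivery_service("user_2", "CI_1"))   # CDS_2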
- In another implementation, when reference information that is generated based on user input during a first presentation of a content item via a first content delivery service (e.g., NETFLIX) is received, user content presentation subsystem 120 may identify a second content delivery service (e.g., HULU) through which access to the first portion of the content item (in a second presentation of the content item) is available. The first portion of the content item may then be provided via the second content delivery service based on a reference time of the first presentation, a reference time of the second presentation that corresponds to the reference time of the first presentation, or an identifier associated with the first portion of the first content item. As further described in the use cases below, for example, the reference time of the first presentation, the reference time of the second presentation, or the first-portion identifier may be determined from the reference information and utilized to access the first portion of the content item via the second content delivery service.
- In one use case, the reference information may include a content item identifier associated with the content item, and a first presentation-specific reference time at which the first portion of the content item is presented during the first presentation. Upon identification of the second content delivery service, user content presentation subsystem 120 may use information indicating the content item to identify a mapping of portions of the first presentation to portions of the second presentation (e.g., the mapping of portions of
presentation 302 to portion ofpresentations 304 inFIG. 3 , the mapping of other portions shown inFIG. 3 , etc.). The first presentation-specific reference time and the mapping may then be utilized to identify a second presentation-specific reference time at which the first portion of the content item is presented during the second presentation. User content presentation subsystem 120 may execute an application associated with the second content delivery service (e.g., HULU application), and utilize the content item identifier and the second presentation-specific reference time with the application to jump to the first portion of the content item in the second presentation provided via the second content delivery service. - In another use case, the reference information may include an identifier associated with the content item and a scene identifier (or other portion identifier) associated with the first portion of the content item. Upon identifying the second content delivery service for the second user, user content presentation subsystem 120 may execute an application associated with the second content delivery service (e.g., HULU application). User content presentation subsystem 120 may then utilize the content item identifier and the scene identifier with the application to jump to the first portion of the content item in the second presentation provided via the second content delivery service.
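- The sketch below illustrates resolving such reference information into a presentation-specific seek time for the identified content delivery service; the mapping tables, service identifiers, and the open_portion helper are illustrative assumptions (the CDS_2 mapping loosely follows presentation 304 in FIG. 3).

# Sketch of resolving service-independent reference information into a
# presentation-specific seek time for the second content delivery service.
# All tables and names are hypothetical.

# Master reference time -> presentation-specific reference time, per service.
MASTER_TO_PRESENTATION = {
    "CDS_1": {1: 1, 6: 6, 11: 11},     # e.g., presentation 302
    "CDS_2": {1: 1, 6: 7, 11: 13},     # e.g., presentation 304
}


def resolve_seek_time(reference: dict, delivery_service: str) -> int:
    """Map the master reference time in the reference information to the
    reference time used by the target service's presentation."""
    return MASTER_TO_PRESENTATION[delivery_service][reference["reference_time"]]


def open_portion(reference: dict, delivery_service: str) -> str:
    seek_time = resolve_seek_time(reference, delivery_service)
    # A real client would launch the service's application here and seek to
    # seek_time; returning a description keeps the sketch self-contained.
    return f"launch {delivery_service} app for {reference['content_item']} at time {seek_time}"


reference_info = {"content_item": "CI_1", "reference_time": 6}
print(open_portion(reference_info, "CDS_2"))   # ... at time 7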
- Aggregation of Annotations
- According to an aspect of the invention, annotations may be aggregated to determine an overall experience of a user or a group of users with various presentation aspects (e.g., portions of a content item, the overall content item, individual annotations, a set of annotations, etc.). The overall experience may then, for example, be displayed to users during a presentation of the portions of the content item, the overall content item, the individual annotations, the set of annotations, etc. In this way, among other benefits, a user may be able to see how other users (e.g., the user's friends, the user's family members, the user's co-workers, users within the user's social network, or all system users, etc.) reacted to various presentation aspects as the user is experiencing the presentation aspects.
- In certain implementations,
annotation subsystem 106 may be programmed to identify annotations associated with one or more parameters, and/or process the identified annotations to determine one or more statistics with respect to various presentation aspects (e.g., the portions of the content item, the overall content item, the individual annotations, the set of annotations, etc.). The parameters may, for example, include annotation types, sources (e.g., authors or other sources), annotation set identifiers, social distances, user relationship status, spatial proximity, temporal proximity, or other parameters. The parameters may be manually selected by a user, or automatically selected for the user based on configurable system settings. - In one use case, as illustrated in
FIGS. 6A-6C , annotations may be aggregated based on annotation type. As an example, numerical ratings associated with portions of a content item may be aggregated for each of the portions of the content item, normalized (e.g., a rating based on a 1-10 rating scale may be converted to a rating based on a 1-5 rating scale), and averaged to produce an average rating for each portion. As shown in the interfaces depicted inFIGS. 6A-6C , a first portion of the content item may be associated with an average rating of 4.6/5, a second portion of the content item may be associated with an average rating of 4.2/5, a third portion of the content item may be associated with an average rating of 4.3/5, and so on. It should be appreciated that the foregoing values, ranges, etc., are exemplary in nature, and should not be viewed as limiting. - As another example, comments associated with portions of a content item may be aggregated and analyzed to determine a common characteristic associated with each of the portions of the content item. A characteristic may, for example, be determined to be a common characteristic based on a determination that terms associated with the characteristic are included in the most number of the aggregated comments, that the terms associated with the characteristic appear the most frequently in the aggregated comments, etc. For example, as depicted in
FIGS. 6A-6C , the characteristic “funny” is determined to be the most common characteristic for first, second, and third portions (Portions A, B, and C) of the content item. Terms associated with the characteristic “funny” may, for example, include synonyms of “funny” or other related terms. - In another use case, annotations may be aggregated based on authorship. As an example, if a user selects to only be presented with annotations from cast or crew members of a television episode or movie (or other content item) for which the annotations are submitted, then each of the aggregated annotations may be annotations authored by actors, actresses, directors, producers, or other cast or crew members of the television episode or movie.
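- The sketch below illustrates the aggregation examples above: normalizing numerical ratings to a common 1-5 scale and averaging them per portion, and picking a common characteristic from comments. The list of funny-related terms and the function names are assumptions for illustration.

# Sketch of two aggregation steps: rating normalization/averaging and
# common-characteristic detection. The term list is a hypothetical stand-in
# for whatever synonym data a real system would use.
from collections import Counter

FUNNY_TERMS = {"funny", "hilarious", "lol"}   # assumed related terms


def normalize(rating: float, scale_max: float) -> float:
    """Convert a rating on a 1..scale_max scale to a 1..5 scale."""
    return 1 + (rating - 1) * 4 / (scale_max - 1)


def average_rating(ratings: list) -> float:
    """ratings is a list of (value, scale_max) pairs for one portion."""
    normalized = [normalize(value, scale_max) for value, scale_max in ratings]
    return round(sum(normalized) / len(normalized), 1)


def common_characteristic(comments: list):
    """Return 'funny' if funny-related terms appear in a majority of comments;
    a real system would score many candidate characteristics this way."""
    counts = Counter("funny" for c in comments
                     if any(term in c.lower() for term in FUNNY_TERMS))
    return "funny" if counts.get("funny", 0) > len(comments) / 2 else None


# A 9/10 rating and a 4/5 rating for the same portion average to about 4.3/5.
print(average_rating([(9, 10), (4, 5)]))
print(common_characteristic(["So funny!", "Hilarious scene", "meh"]))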
- In another use case, annotations may be aggregated based on social distances between authors of the annotations and a user satisfying a specified social distance threshold. Each of the aggregated annotations may, as an example, be annotations authored by other users that are at most 2 connections away from the user in a social network (e.g., friends of friends, two degrees away, etc.). The social distance threshold may be specified by a user.
- In another use case, annotations may be aggregated based on authors of the annotations having a particular relationship with a user. Each of the aggregated annotations may, for example, be annotations authored by other users that are “friends” of the user in a social network (e.g., rather than an “acquaintance,” a “colleague,” etc.). As used herein, a user relationship (of one user with another user) may refer to one or more definitions of how a user knows, knows of, or is connected to the other user. For example, a first user may have a user relationship with a second user based on the first user being a “friend,” “co-worker,” “family relative,” etc., of the second user. As another example, a first user may have a user relationship with a second user based on the first user “following” the social media posts of the second user, the first user being a “fan” of the second user, etc. The user relationships may be user-defined, or automatically defined.
- In another use case, annotations may be aggregated based on authors of the annotations being associated with a location that is a threshold distance away from a user location. Each of the aggregated annotations may, for example, be annotations authored by other users that are currently within a particular distance from the current location of the user, that live within a particular distance from the user's residence, etc.
- In another use case, annotations may be aggregated based on the annotations being submitted within a particular time period. As an example, each of the aggregated annotations may be annotations submitted during a time period when the content item was the most popular. Comments (or annotations) that are aggregated may be limited to comments provided when a television episode originally aired (e.g., to exclude comments submitted during re-runs), comments provided during a time period associated with a season (e.g., during a given season of a television series when the episode first aired), comments provided during a specified date range, etc.
- In some implementations,
annotation subsystem 106 may be programmed to provide statistics associated with aggregated annotations. For example, the statistics may be presented to a user during a presentation of a content item to the user. In one scenario, as shown inFIG. 6C , statistics, such as an average rating, a common characteristic, etc., may be presented to a user during a presentation of a content item to the user in the form of text. - In another scenario, as depicted on
time bar 602 ofFIG. 6C , statistics may be graphically presented to a user during a presentation of a content item. As an example, the statistics may be presented in the form of a line graph, a heat map, or other graphical representation. The line graph ontime bar 602, a heat map, or other graphical representation may, for instance, depict a degree of a characteristic corresponding to portions of the content item (e.g., a high point on the line graph may indicate a very funny portion, a low point on the graph may indicate a non-funny portion, a hot color on the heat map may indicate a very popular portion, a cold color on the map may indicate an unpopular portion, etc.). - In some implementations, statistics (or other information) associated with aggregated annotations may be provided to various third party entities in exchange for compensation or other reasons. As an example, statistics regarding viewership of a television show (or other content item) or portions thereof may be provided to NIELSEN or other entity.
- Filtered Presentation of Content Items
- According to an aspect of the invention, portions of a content item may be presented based on annotations for the content item. For example, a presentation of a content item may be based on annotations corresponding to portions of the content item and preferences of a user (e.g., selected by the user, inferred for the user, etc.) related to the content item portions and/or the annotations. In one scenario, for instance, portions of a content item may be removed from a presentation of the content item based on annotations for the portions indicating that the portions do not satisfy conditions related to the user's preferences (e.g., a user preference may indicate an aversion to violence, nudity, profanity, adult themes, etc.). In another scenario, playback of a first set of portions of a content item may be skipped, fast-forwarded, censored, blurred, decreased in volume, or otherwise adjusted during a presentation of the content item based on annotations for the portions of the first set indicating that the portions of the first set do not satisfy conditions related to the user's preferences. Playback of a second set of portions of the content item may be enhanced or occur normally during the presentation of the content item based on annotations for the portions of the second set indicating that the portions of the second set satisfy conditions related to the user's preferences. In this way, among other benefits, annotations for a content item may be utilized to enable a user (or other entity) to control or modify a presentation of the content item. Parents may, for example, censor their children from portions of a content item that are indicated by corresponding annotations as indecent or otherwise not for children, users may set their preferences to skip portions of a content item that are indicated by corresponding annotations as having an undesirable characteristic (e.g., boring, romantic, gruesome, or other characteristics that a user may deem undesirable), etc.
- In one example, with respect to
FIGS. 6A-6C, annotations corresponding to portions of Content Item 1 may be automatically obtained when a user initiates a presentation of Content Item 1 (e.g., detection of the user's request to play Content Item 1 may trigger a request for the annotations). Playback of scenes of Content Item 1 may be skipped, fast-forwarded, or sped up if the scenes are associated with an average rating of less than 4/5. In another use case, playback of scenes of Content Item 1 may be skipped, fast-forwarded, or sped up if the scenes are not deemed as funny by at least a threshold number (e.g., fixed number or percentage) of users that submit comments for the scenes. Users may, for example, indicate desired ratings (e.g., only 4/5 or higher), threshold numbers, or other parameters via user-configurable settings. Other modifications related to the presentation may, of course, be implemented. - In another example, when a user initiates a presentation of a content item, the user may be presented with a set of tracks (associated with the content item) from which to select. Upon selection of a track, the content item may be presented in accordance with annotations of the selected track. In yet another example, a selection of a track by a user may trigger a presentation of a content item (to which the annotations of the track correspond) to be initiated. Upon initiation, the content item may be presented in accordance with annotations of the selected track.
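- A minimal sketch of such annotation-driven playback filtering is shown below, assuming a hypothetical per-scene average rating table and a user-configured minimum rating; a fuller implementation might fast-forward, blur, or mute instead of skipping.

# Sketch of adjusting playback based on aggregated annotations and a
# user-configured threshold: scenes whose average rating falls below the
# threshold are skipped. The scene data is a hypothetical stand-in.

SCENE_RATINGS = {          # average rating per scene, derived from annotations
    "scene_1": 4.6,
    "scene_2": 3.4,
    "scene_3": 4.2,
}


def playback_plan(scenes: list, minimum_rating: float) -> list:
    """Return (scene, action) pairs: play scenes at or above the threshold,
    skip the rest."""
    plan = []
    for scene in scenes:
        action = "play" if SCENE_RATINGS.get(scene, 0.0) >= minimum_rating else "skip"
        plan.append((scene, action))
    return plan


# With a 4/5 threshold, scene_2 is skipped and the others play normally.
print(playback_plan(["scene_1", "scene_2", "scene_3"], minimum_rating=4.0))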
- Filtered Presentation of Annotations
- According to an aspect of the invention, annotations may be selectively presented to a user based on one or more parameters. The parameters may, for instance, include annotation types, sources (e.g., authors or other sources), annotation set identifiers, social distances, user relationship status, spatial proximity, temporal proximity, or other parameters. The parameters may be manually selected by a user, or automatically selected.
- In various implementations,
annotation subsystem 106 may be programmed to provide annotations to user device 104 based on one or more parameters associated with a user. In some implementations, user device 104 may selectively present annotations (e.g., fromannotation subsystem 106 or other component) to a user during a presentation of a content item to the user based on one or more parameters associated with the user. - As an example, a user may specify that he/she only desires to be presented with numerical ratings (e.g., out of 5 stars, on a 1-10 scale, etc., as opposed to comments, likes/dislikes, etc.). As such, the user may only be provided with numerical ratings. It should be appreciated that the foregoing values, ranges, etc., are exemplary in nature, and should not be viewed as limiting.
- As another example, particular authors of annotations may be selected for a user based on historical information associated with the user. Selected authors may, for instance, be chosen based on a determination that the authors are similar to authors that the user likes (e.g., the selected authors and the authors liked by the user have similar preferences for content items, annotations from the selected authors are similar in character to annotations from the authors liked by the user, etc.). As a result, the user may only be provided with annotations from the selected authors during presentation of a content item.
- As another example, a user may specify a social distance threshold (e.g., a number of connections away from the user) that authors of annotations must fall within in order for their annotations to be presented to the user. Thus, the user may only be provided with annotations from authors within the social distance threshold.
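- A minimal sketch of such parameter-based selection is shown below, assuming hypothetical annotation records and a precomputed social-distance table.

# Sketch of selectively choosing annotations based on annotation type and
# social distance. The records and the social graph are hypothetical.

ANNOTATIONS = [
    {"author": "alice", "type": "rating", "value": 5},
    {"author": "bob", "type": "comment", "value": "Great twist"},
    {"author": "carol", "type": "rating", "value": 3},
]

SOCIAL_DISTANCE = {"alice": 1, "bob": 2, "carol": 3}   # connections away from the user


def select_annotations(annotations: list, allowed_types: set,
                       max_social_distance: int) -> list:
    """Keep annotations whose type the user wants and whose author is within
    the user's social distance threshold."""
    return [a for a in annotations
            if a["type"] in allowed_types
            and SOCIAL_DISTANCE.get(a["author"], 99) <= max_social_distance]


# Only numerical ratings from authors at most 2 connections away are kept.
print(select_annotations(ANNOTATIONS, {"rating"}, max_social_distance=2))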
- Creating Annotation Tracks or Other Annotation Datasets
- According to an aspect of the invention, annotation “tracks” or other annotation datasets may be created. By way of example, annotation datasets may each enable access to annotations from one or more sources, annotations that correspond to presentations from one or more content delivery services, annotations that are provided to one or more social networking services, or other annotations. Annotation datasets may, for example, enable annotations corresponding to portions of a content item to be presented when the portions of the content item are presented during a presentation of the content item. In one scenario, for instance, an annotation dataset may include information indicating reference times for annotations to enable the annotations to be presented when the corresponding portions of the content item are presented during the presentation of the content item. Among other benefits, the creation of annotation tracks or other annotation datasets may enable annotations to be packaged and shared as a collection of annotations among users. In addition, the creation of annotation tracks may facilitate the creation of an “author ecosystem” where, for example, users may gain a following or become “trendsetters” based on their tracks. Furthermore, annotation tracks may be provided to one or more third party entities in exchange for compensation or other reasons. As an example, a network may want to re-broadcast a movie (or other content item) with a track of annotations provided by any one or more of the movie's director, actors, or other “insiders” or individuals associated with production of the movie. Other examples may be implemented.
- Annotation tracks (or other annotation datasets) may include tracks that are only accessible by a single user (e.g., the user that created the track, a user designated to access the track, etc.), tracks that are only accessible to a group of users (e.g., a user's friend as specified by the user that created the track), tracks that are publically available to all users, etc. In one use case, for example, privacy settings of a user's account may dictate by default how tracks created by the user are shared.
- Annotation tracks (or other annotation datasets) may, for example, be created when a user selects or approves annotations to be included in a track, or may be automatically created when the user enters annotations for a content item. Tracks associated with a user may, for instance, be created when the user inputs annotations for a movie or television episode for the first time, and/or updated when the user subsequently inputs annotations while re-watching the movie or television episode. As another example, tracks may be created automatically when a service selects annotations to be included in a track based on one or more parameters. In one scenario, tracks may be created and stored in a database that is searchable by users. In another scenario, tracks may be created on the fly for presentation to a user in response to a track request from the user (e.g., play the movie with a track having the highest rated comments, play the episode with a track having comments that my friends posted since yesterday, etc.).
- Tracks may, for example, comprise static tracks or dynamic tracks. A set of annotations that are available via a static track may, for instance, remain the same over time, unless the static track is modified by a user or a service (e.g., a user may be able to add/remove annotations to/from a static track). On the other hand, a set of annotations that are available via a dynamic track may change over time without modification by a user or service. In one use case, the playing of a dynamic track during a presentation of an associated content item may cause annotations to be streamed and presented to a user such that the annotations during a first presentation of the track differ from those during a second presentation of the track. For example, a track that is generated to present the most recent comments (e.g., within the last 7 days, within the last 24 hours, within the last hour, etc.) submitted by a user's friends for a particular episode may include different comments each time the episode is played. The track may, for instance, include a query that searches a database for the most recent comments authored by the user's friends for the episode each time playback of the episode is initiated.
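- The sketch below illustrates a dynamic track as a stored query that is re-run each time playback is initiated, so the returned comments can differ between presentations; the annotation store, friend list, and time window are assumptions for illustration.

# Sketch of a dynamic track: a query over the annotation store that is
# evaluated at playback time. The data below is hypothetical.
from datetime import datetime, timedelta

ANNOTATION_STORE = [
    {"author": "alice", "text": "Classic episode", "submitted": datetime(2014, 3, 1, 12, 0)},
    {"author": "bob", "text": "That ending!", "submitted": datetime(2014, 3, 2, 9, 30)},
    {"author": "dave", "text": "Meh", "submitted": datetime(2014, 3, 2, 10, 0)},
]

FRIENDS = {"alice", "bob"}


def dynamic_track(now: datetime, window: timedelta = timedelta(days=7)) -> list:
    """Run the track's query: the most recent comments from the user's
    friends submitted within the window."""
    cutoff = now - window
    return sorted((a for a in ANNOTATION_STORE
                   if a["author"] in FRIENDS and a["submitted"] >= cutoff),
                  key=lambda a: a["submitted"], reverse=True)


# Played on March 3rd, the track includes both friends' comments; played a
# month later it would come back empty, since the query runs at playback time.
print(dynamic_track(datetime(2014, 3, 3)))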
- In various implementations,
annotation subsystem 106 may be programmed to generate a dataset that enables access to a first annotation corresponding to a first portion of a content item, information indicating a first source of the first annotation (e.g., a user or other source from which the first annotation is received), information indicating a content item with which the first annotation is associated, information indicating a first reference time that corresponds to the first portion of the content item, or other information. - In one implementation, the generated dataset may further enable access to a second annotation corresponding to a second portion of the content item, information indicating the first source as a source of the second annotation, information indicating the content item, information indicating a second reference time that corresponds to the second portion of the content item, or other information. In another implementation, the generated dataset may further enable access to a third annotation corresponding to a third portion of the content item, information indicating a second source of the third annotation, information indicating the content item, information indicating a third reference time that corresponds to the third portion of the content item, or other information.
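- To make the structure of such a dataset concrete, the following sketch shows one possible in-memory representation. The class and field names (e.g., AnnotationEntry, Track, annotations_at) are hypothetical and merely mirror the information enumerated above; they do not represent a required implementation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AnnotationEntry:
        """One annotation made accessible by a track (annotation dataset)."""
        annotation_id: str        # identifier used to look up the annotation content
        reference_time: float     # reference time of the corresponding portion of the content item
        content_item_id: str      # content item with which the annotation is associated
        source_id: str            # user or other source of the annotation
        delivery_service_id: str  # content delivery service via which the annotation was received

    @dataclass
    class Track:
        """An annotation track: a collection of annotation entries for a content item."""
        track_id: str
        content_item_id: str
        entries: List[AnnotationEntry] = field(default_factory=list)

        def annotations_at(self, reference_time: float, tolerance: float = 0.5) -> List[AnnotationEntry]:
            """Return entries whose reference time matches the current playback position."""
            return [e for e in self.entries
                    if abs(e.reference_time - reference_time) <= tolerance]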
- By way of example, as shown in
FIGS. 7A-7C , the STAR track may enable access to Annotations 1A, 1B, 2A, 3A, and 3B, as well as information which indicates that Annotations 1A, 1B, 2A, 3A, and 3B are associated with Content Item 1, that User X is a source of Annotations 1A, 1B, and 3A, and that User Y is a source of Annotations 2A and 3B. The STAR track may further enable access to information which indicates that Annotations 1A and 1B are associated with a first reference time (represented by a first position of control element 204), Annotation 2A is associated with a second reference time (represented by a second position of control element 204), and Annotations 3A and 3B are associated with a third reference time (represented by a third position of control element 204). - In one example, as depicted in
FIGS. 7A-7C , the STAR track enables Annotations 1A and 1B to be presented when Portion A is presented during the presentation of Content Item 1 (e.g., based on the first reference time corresponding to Portion A), Annotation 2A to be presented when Portion B is presented during the presentation of Content Item 1 (e.g., based on the second reference time corresponding to Portion B), and Annotations 3A and 3B to be presented when Portion C is presented during the presentation of Content Item 1 (e.g., based on the third reference time corresponding to Portion C). - In another example, with respect to
FIGS. 7A-7C , Table 1 below is an exemplary depiction of information included in the STAR track. As an example, the STAR track may include annotation identifiers that can be used to obtain the associated annotations from a database when the STAR track is played. -
TABLE 1

Annotation         Reference Time   Content Item        Source     Delivery Service
[1A Identifier]    [First Time]     [CI 1 Identifier]   [User X]   [CDS #1 Identifier]
[1B Identifier]    [First Time]     [CI 1 Identifier]   [User X]   [CDS #2 Identifier]
[2A Identifier]    [Second Time]    [CI 1 Identifier]   [User Y]   [CDS #3 Identifier]
[3A Identifier]    [Third Time]     [CI 1 Identifier]   [User X]   [CDS #1 Identifier]
[3B Identifier]    [Third Time]     [CI 1 Identifier]   [User Y]   [CDS #3 Identifier]
. . .              . . .            . . .               . . .      . . .

- In yet another example, with respect to
FIGS. 7A-7C , Table 2 below is another exemplary depiction of information included in the STAR track. As shown in Table 2, the STAR track may include the content of the annotations. -
TABLE 2

Annotation         Reference Time   Content Item        Source     Delivery Service
[Content for 1A]   [First Time]     [CI 1 Identifier]   [User X]   [CDS #1 Identifier]
[Content for 1B]   [First Time]     [CI 1 Identifier]   [User X]   [CDS #2 Identifier]
[Content for 2A]   [Second Time]    [CI 1 Identifier]   [User Y]   [CDS #3 Identifier]
[Content for 3A]   [Third Time]     [CI 1 Identifier]   [User X]   [CDS #1 Identifier]
[Content for 3B]   [Third Time]     [CI 1 Identifier]   [User Y]   [CDS #3 Identifier]
. . .              . . .            . . .               . . .      . . .

- In certain implementations,
annotation subsystem 106 may be programmed to receive a request to generate a track. The request to generate the track may include information that indicates annotations for inclusion in the track. As an example, the request may indicate annotations that are selected by a user for inclusion in the track. As such, the track may be generated to enable access to the selected annotations along with other information that enables the selected annotations to be presented when corresponding portions of a content item are presented during a presentation of the content item. - As another example, the request may indicate a content item for which the track is targeted, a source indicating the origin of the annotations (e.g., an author of the annotations or other source), or other parameters. In response to the request, annotations associated with the content item and the source may be obtained to generate the track.
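- As a minimal sketch of servicing such a request, the function below filters stored annotation records by content item and (optionally) source and assembles them into a track ordered by reference time. The dictionary keys and the name generate_track are assumptions for illustration only.

    def generate_track(request: dict, annotation_db: list[dict]) -> dict:
        """Build a track from a request that names a content item and, optionally, a source.

        `request` and the records in `annotation_db` are hypothetical dictionaries, e.g.
        {"content_item_id": "CI-1", "source_id": "user-x"}; a real system could equally
        filter on explicitly selected annotation identifiers supplied with the request.
        """
        selected = [
            a for a in annotation_db
            if a["content_item_id"] == request["content_item_id"]
            and (request.get("source_id") is None or a["source_id"] == request["source_id"])
        ]
        return {
            "content_item_id": request["content_item_id"],
            "entries": [
                {"annotation_id": a["annotation_id"],
                 "reference_time": a["reference_time"],
                 "source_id": a["source_id"]}
                for a in sorted(selected, key=lambda a: a["reference_time"])
            ],
        }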
- In one use case, a user may submit a request to generate a track that includes annotations associated with a content item that are authored by a particular person, whether it be an individual associated with the production of the content item (e.g., an actor, a director, a producer, etc.), or a viewer or consumer of the content item, such as a member of the user's social group (e.g., the user's friends, the user's colleagues, etc.). With respect to
FIGS. 7A-7C , for example, User X may be an actor that stars in the content item, and User Y may be a member of the user's social group. In response to the request, Annotations 1A, 1B, and 3A (of which User X is the source) and/or Annotations 2A and 3B (of which User Y is the source) may be obtained to generate the requested track. - In various implementations,
annotation subsystem 106 may be programmed to receive, from a user, a request to search for a track. The request may, for example, include a query that comprises keywords or other parameters (e.g., annotation types, sources, social distances, user relationship status, spatial proximity, temporal proximity, etc.).Annotation subsystem 106 may be programmed to process the request to identify a first track in a database based on the keywords or other parameters, after which the first track may be provided to the user. - As an example, a user may submit a query for tracks by entering the question “What tracks are available for
season #1, episode #6, of Family Guy?" As a result, annotation subsystem 106 may process the query, identify tracks for season #1, episode #6, of Family Guy in a database, and provide the identified tracks to the user. Other examples of queries may include queries related to inputs, such as "Show me directors' or actors' tracks for Movie X," "Show me the highest rated track for Television Show Y," "Show me tracks by Famous Person Z," "Show me tracks for Movie X that are rated PG-13," "Show me tracks by members of my social group," or other inputs. Any number and type of queries may be used.
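- One simple way to resolve such queries is to match keywords extracted from the query against track metadata, as in the sketch below. The metadata field names and the substring matching are assumptions for illustration; a production system would more likely use a dedicated search index and natural-language query parsing.

    def search_tracks(tracks: list[dict], keywords: list[str]) -> list[dict]:
        """Return tracks whose metadata mentions every keyword extracted from a query.

        Each track is assumed to carry simple metadata fields such as 'title',
        'content_item', 'author', and 'tags'.
        """
        def matches(track: dict) -> bool:
            haystack = " ".join(
                str(track.get(name, "")) for name in ("title", "content_item", "author", "tags")
            ).lower()
            return all(keyword.lower() in haystack for keyword in keywords)

        return [track for track in tracks if matches(track)]

    # e.g., tracks for season #1, episode #6, of Family Guy:
    # results = search_tracks(all_tracks, ["family guy", "season 1", "episode 6"])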
- In some implementations, ratings, feedback, or classification of tracks may be facilitated. For example, annotation subsystem 106 may be programmed to enable users to rate tracks or provide other feedback about the tracks. As an example, ratings or other feedback provided by users regarding a track may be aggregated to determine an overall rating for the track (e.g., average rating, total number of "likes," etc.) or other statistical information regarding the track (e.g., commonly characterized as "funny" and "interesting," highly enjoyed by women between 18-30, a favorite among a user's friends, etc.). In one use case, as shown in FIGS. 7A-7C , the STAR track may be associated with an average rating of 4.6/5. The average rating may, for instance, be an average of all the ratings given to the STAR track by users in general or by a particular set of users (e.g., a user's friends, users with a certain level of status, etc.). It should be appreciated that the foregoing values, ranges, etc., are exemplary in nature, and should not be viewed as limiting. - In another implementation,
annotation subsystem 106 may be programmed to infer ratings or other feedback for tracks. As an example, in one scenario, a track may be characterized as “popular” based on a determination that the track has been downloaded/streamed by users a threshold number of times, or that the track has been downloaded/streamed more times than a majority of other tracks. In another scenario, a track may be characterized as “cheerful” based on an analysis of the content of the annotations in the track indicating that many of the annotations include cheerful messages. Other characterization (e.g., positive, negative, angry, etc.) may of course be utilized without limitation. - As another example, characteristics may be inferred for a track based on reactions associated with ratings or feedback of the track. In some implementations,
interaction monitoring subsystem 112 may be programmed to identify a reaction associated with a rating or feedback of a track.Annotation subsystem 106 may be programmed to determine a characteristic for the track based on the reaction, and/or associate the characteristic with the track. - In one example, with respect to
FIGS. 7A-7C , users may submit a rating for each of the annotations of a track (e.g., thumbs-up/thumbs-down, like/dislike, etc.). If, for example, at least a threshold number (e.g., fixed number or percentage) of the annotations of the STAR track are collectively rated by a threshold number of users, and the number of positive ratings is 1% to 100% greater than the number of negative ratings, the STAR track may be associated with the characteristic of "more positive than not." If the threshold numbers are satisfied, and the number of positive ratings is 101% to 300% greater than the number of negative ratings, the STAR track may be associated with the characteristic of "well-liked." If the threshold numbers are satisfied, and the number of positive ratings is over 300% greater than the number of negative ratings, the STAR track may be associated with the characteristic of "superb." It should be appreciated that the foregoing track descriptors, values, ranges, etc., are exemplary in nature, and should not be viewed as limiting. - In another use case, users may reply to annotations of a track during presentation of a content item and the track. Each of the replies to an annotation may be analyzed to determine one or more characteristics associated with the annotation. Based on the annotation characteristics, one or more characteristics may be determined for (and associated with) the overall track. As an example, an annotation in a track may be characterized as "funny" when a reply to the annotation includes terms associated with the characteristic "funny." The track may be characterized as "funny" when a threshold number of the annotations in the track are characterized as "funny."
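- Applied mechanically, the exemplary bands above reduce to a simple classification over the ratio of positive to negative ratings, as in the sketch below. The descriptor strings and percentage bands are taken from the example, and min_raters stands in for the unspecified threshold number of raters; all of these values are illustrative and non-limiting.

    def characterize_track(positive: int, negative: int, min_raters: int = 100) -> str | None:
        """Map the ratio of positive to negative ratings onto an exemplary descriptor."""
        if positive + negative < min_raters:
            return None  # the threshold number of raters has not yet been satisfied
        # percentage by which positive ratings exceed negative ratings
        excess = float("inf") if negative == 0 else (positive - negative) / negative * 100
        if excess > 300:
            return "superb"
        if excess > 100:
            return "well-liked"
        if excess > 0:
            return "more positive than not"
        return None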
- In another implementation,
account subsystem 110 may be programmed to enable users to rate or provide other feedback about one another, and/or infer ratings or other feedback for a user. In one use case, for example, account subsystem 110 may enable users to submit ratings regarding other users. The ratings regarding a user may, for instance, be aggregated to determine statistics for the user (e.g., an average rating of the user, a number of likes vs. dislikes, etc.). In another use case, account subsystem 110 may infer ratings or other feedback about a user based on ratings or other feedback that other users submitted for annotations created by the user, tracks created by the user, etc. - Incentivizing Creation of Annotations
- According to an aspect of the invention, a database of annotations may be generated by incentivizing users to create annotations. By way of example, users may be provided with rewards for the creation of annotations, interactions with the annotations, creating annotations that enable transaction via the annotations, or other reasons. In this way, among other benefits, users may be encouraged to create annotations that include quality feedback for content items with which others will positively interact, annotations that enable transactions to facilitate revenue earnings, or annotations that offer other benefits.
- In certain implementations,
annotation subsystem 106 may be programmed to receive an annotation from a user. The annotation may, for instance, correspond to a time at which a portion of a content item is presented. Account subsystem 110 may be programmed to associate the annotation with a user account (associated with the user). Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the receipt of the annotation. As an example, a user may be compensated when the user creates annotations (e.g., 1 cent for every 20 annotations created, 1 point for each annotation created, etc.), when other users interact with the annotations created by the user, or when other conditions for rewards are satisfied. It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting. - In some implementations,
interaction monitoring subsystem 112 may be programmed to monitor interactions with an annotation associated with a user account (e.g., interactions by a user of the user account, interaction by other users, etc.). The monitored interactions may, for example, include access of the annotation (e.g., viewing the annotation, listening to the annotation, etc.) during a presentation of an associated content item, reactions by users to the annotation (e.g., rating the annotation, replying to the annotation, etc.), execution of transactions enabled via the annotation, or other interactions.Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the interactions. - In one implementation,
interaction monitoring subsystem 112 may be programmed to identify requests by one or more users for an annotation associated with a user account (e.g., requests by other users for the annotation). Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the requests. The requests may, for example, include requests to be exposed to the annotation, to include the annotation in an annotation track, or other requests. In one scenario, an authoring user of comments may be rewarded based on the exposure of the comments to other users (e.g., 1 cent for every 100 comment views, 1 point for each comment view, etc.). In another scenario, a threshold number of comment views may need to be satisfied before the authoring user may begin to be compensated. In a further scenario, the authoring user may be provided with a first type of reward (e.g., points that cannot be exchanged for real world money) until the authoring user obtains a particular status (e.g., Silver status, Gold status, etc.) that is achieved when a threshold number of comment views is satisfied. After the threshold number of comment views is satisfied, the authoring user may be provided with a second type of reward (e.g., real world money, points that can be exchanged for real world money, etc.). It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting.
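- The tiered arrangement described above may be sketched as follows. The per-view rates, the status names, and the threshold value echo the examples in this section (which are expressly non-limiting), and the function name credit_for_comment_views is hypothetical.

    def credit_for_comment_views(total_views: int, gold_threshold: int = 10_000) -> dict:
        """Compute reward credits for an authoring user based on views of the user's comments.

        Below the threshold the user earns only non-redeemable points; once the threshold
        is met (e.g., "Gold" status), further views earn real-world cents at the exemplary
        rate of 1 cent for every 100 views.
        """
        if total_views < gold_threshold:
            return {"status": "Silver", "points": total_views, "cents": 0}
        redeemable_views = total_views - gold_threshold
        return {
            "status": "Gold",
            "points": gold_threshold,          # points accrued before reaching Gold status
            "cents": redeemable_views // 100,  # 1 cent for every 100 views thereafter
        }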
- In another implementation, interaction monitoring subsystem 112 may be programmed to identify reactions of one or more users to a comment associated with a user account (e.g., reactions of other users to the comment). Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the reactions. The reactions may, for example, include a rating of the comment, a reply to the comment, or other reactions to the comment. In one use case, an authoring user of comments may be rewarded based on ratings given to the comments by other users (e.g., $1 for every 10 four-star (or higher) ratings given to the comments, 1 point for each "like" given to the comments, etc.). In another use case, an authoring user of comments may be rewarded based on replies to the comments by other users (e.g., $1 for every 10 replies, 1 point for each reply, etc.). It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting. - In another implementation,
interaction monitoring subsystem 112 may be programmed to identify an exposure of a promotion related to a product or service to one or more users via a comment associated with a user account (e.g., viewing a product/service promotion via the comment, listening to a product/service promotion via the comment, etc.).Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the exposure. The promotion may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds. In one scenario, an authoring user of comments may be rewarded for including, in a comment, a reference to a product or service that is depicted in a portion of a television episode (corresponding to the comment) by compensating the authoring user when the reference to the product or service is exposed to other users. For example, with respect toFIG. 8A , User X may be compensated for including inAnnotation 1A a reference to a jacket that is depicted in Portion A ofContent Item 1 when the reference is exposed to other users. As indicated inFIG. 8B , User Y may be compensated for including inAnnotation 2A a reference to a Brand X dress that is depicted in Portion B ofContent Item 1 when the reference is exposed to other users. - In another implementation,
interaction monitoring subsystem 112 may be programmed to identify use of a mechanism (via a comment associated with a user account) that enables execution of a transaction related to a product or service (e.g., accessing a shopping site via a link in the comment).Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the use of the mechanism. The transaction may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds. In one use case, an authoring user of a comment may be rewarded based on execution of a mechanism in the comment that enables execution of a transaction. For example, with respect toFIG. 8A , User X may be compensated for including inAnnotation 1A a link to a shopping site through which a jacket depicted in Portion A ofContent Item 1 may be purchased when the link is clicked (or otherwise executed). As indicated inFIG. 8B , User Y may be compensated for including inAnnotation 2A a link to a product page of a shopping site through which a Brand X dress depicted in Portion B ofContent Item 1 may be purchased when the link is clicked (or otherwise executed). - In another implementation,
interaction monitoring subsystem 112 may be programmed to identify an execution of a transaction related to a product or service that is enabled via a comment associated with a user account (e.g., purchasing of a product, a user sign-up with a service, etc.).Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the execution of the transaction. The transaction may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds. In one use case, an authoring user of a comment may be rewarded based on execution of transactions that are enabled via the comment. For example, with respect toFIG. 8A , User X may be compensated for including inAnnotation 1A a reference (e.g., a link) to a shopping site through which a jacket depicted in Portion A ofContent Item 1 may be purchased when the jacket is purchased by other users using the reference to the shopping site. With respect toFIG. 8B , User Y may be compensated for including inAnnotation 2A a reference to a product page of a shopping site through which a Brand X dress depicted in Portion B ofContent Item 1 may be purchased when the dress is purchased by other users using the reference to the product page. - In some implementations,
annotation subsystem 106 may be programmed to identify a reference associated with a product or service in a comment associated with a user account. Annotation subsystem 106 may be programmed to provide a mechanism in the comment to enable a transaction related to the product or service. The reference may, for example, include a product/service identifier, a product/service type identifier, a link to a website through which the transaction related to the product or service may be executed, or other reference. In one scenario, with respect to FIG. 8A , User X may include in Annotation 1A a hyperlink to a shopping site through which a jacket that is depicted in Portion A of Content Item 1 may be purchased. Upon identification of the shopping site hyperlink, annotation subsystem 106 may modify the hyperlink to include an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114). As such, when the jacket is purchased through the modified hyperlink, an account associated with reward subsystem 114 (or the entity associated with reward subsystem 114) may be provided with a portion of the revenue from the purchase of the jacket. Reward subsystem 114 may detect that the jacket purchase was made through the modified hyperlink, and compensate User X for including the original hyperlink to the shopping site in Annotation 1A.
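- The hyperlink modification described above amounts to rewriting the URL so that it carries an affiliate code, as in the sketch below. The query-parameter name ("aff") and the example code are assumptions for illustration; the actual attribution mechanism may differ.

    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    def add_affiliate_code(url: str, affiliate_code: str, param: str = "aff") -> str:
        """Rewrite a shopping-site hyperlink found in an annotation so that purchases
        made through it can be attributed to the reward subsystem."""
        parts = urlparse(url)
        query = dict(parse_qsl(parts.query))
        query[param] = affiliate_code
        return urlunparse(parts._replace(query=urlencode(query)))

    # e.g., add_affiliate_code("https://shop.example.com/jacket?id=42", "rw-114")
    #       -> "https://shop.example.com/jacket?id=42&aff=rw-114"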
- In another scenario, with respect to FIG. 8B , User Y may include the term "Brand X dress" in Annotation 2A without a link to a shopping site through which the Brand X dress depicted in Portion B of Content Item 1 may be purchased. Nevertheless, upon identification of the term "Brand X dress" and that a dress is depicted in Portion B of Content Item 1, annotation subsystem 106 may add a hyperlink, including an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114), for the dress's product page on the shopping site to Annotation 2A. As such, when the dress is purchased through the hyperlink, an account associated with reward subsystem 114 (or the entity associated with reward subsystem 114) may be provided with a portion of the revenue from the purchase of the dress. Reward subsystem 114 may detect that the dress purchase was made through the hyperlink, and compensate User Y for including the term "Brand X dress" in Annotation 2A. - Incentivizing Creation of Annotation Datasets
- According to an aspect of the invention, a database of annotation datasets (or tracks) may be facilitated by incentivizing users to create tracks. By way of example, users may be provided rewards for creation of tracks, interactions with the tracks, enabling of transactions via the tracks, or for other reasons. In this way, among other benefits, users may be encouraged to create tracks that include quality annotations, tracks that enable transactions to facilitate revenue earnings, or tracks that offer other benefits.
- In certain implementations,
account subsystem 110 may be programmed to associate a track created by a user with a user account associated with the user.Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the creation of the track. The track may, for example, enable access to comments corresponding to portions of a content item, information that allows the comments to be presented when the corresponding portions are presented during a presentation of the content item, or other information. - In some implementations,
interaction monitoring subsystem 112 may be programmed to monitor interactions with a track associated with a user account (e.g., interactions by a user of the user account, interaction by other users, etc.). The monitored interactions may, for example, include access of the track (e.g., downloading the track, viewing the comments in the track, listening to the comments in the track, etc.), reactions by users to the track (e.g., rating the track, rating comments of the track, replying to comments in the track, etc.), execution of transactions enabled via the track, or other interactions.Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the interactions. - In one implementation,
interaction monitoring subsystem 112 may be programmed to identify requests by one or more users for a track associated with a user account (e.g., requests by other users for the track).Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the requests. The requests may, for example, include requests to access the track. In one scenario, a creating user of tracks may be rewarded based on requests by other users to access the tracks (e.g., 1 cent for each track access, 1 point for each track access, etc.). In another scenario, a threshold number of track accesses may need to be satisfied before the creating user may begin to be compensated. In a further scenario, the creating user may be provided with a first type of reward (e.g., points that cannot be exchanged for real world money) until the creating user obtains a particular status (e.g., Silver status, Gold status, etc.) that is achieved when a threshold number of track accesses is satisfied. After the threshold number of track accesses is satisfied, the creating user may be provided with a second type of reward (e.g., real world money, points that can be exchanged for real world money, etc.). It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting. - In another implementation,
interaction monitoring subsystem 112 may be programmed to identify reactions of one or more users to a track associated with a user account (e.g., reactions of other users to the track).Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the reactions. The reactions may, for example, include a rating of the track, ratings of comments of the track, a reply to a comment of the track, or other reactions to the track. In one use case, a creating user of a track may be rewarded based on ratings given to the track (e.g., $1 for every 10 four-star (or higher) ratings given to the track, 1 point for each “like” given to the track, etc.). In another use case, a creating user of a track may be rewarded based on ratings given to comments of the track by other users (e.g., 10 cents for every 10 four-star (or higher) ratings given to the comments, 1 point for every 10 “likes” given to the comments, etc.). In another use case, a creating user of a track may be rewarded based on replies to comments of the track by other users (e.g., 1 cent for each reply, 1 point for each reply, etc.). It should be appreciated that the foregoing values, reward types, etc., are exemplary in nature, and should not be viewed as limiting. - In another implementation,
interaction monitoring subsystem 112 may be programmed to identify an exposure of a promotion related to a product or service to one or more users via a track associated with a user account (e.g., viewing a product/service promotion via the track, listening to a product/service promotion via the track, etc.). Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the exposure. The promotion may, for example, relate to a product or service that appears in a portion of a content item to which a comment of the track corresponds. In one scenario, a creating user of a track may be rewarded for including, in the track, a comment having a reference to a product or service that is depicted in a portion of a television episode (corresponding to the comment) by compensating the creating user when the reference to the product or service is exposed to other users. For example, with respect to FIG. 8A , a creating user of the STAR track (e.g., a user that generated the STAR track) may be compensated for including Annotation 1A in the STAR track when a reference to a jacket that is depicted in Portion A of Content Item 1 is exposed to other users. As another example, with respect to FIG. 8B , the creating user may be compensated for including Annotation 2A in the STAR track when a reference to a Brand X dress that is depicted in Portion B of Content Item 1 is exposed to other users. - In another implementation,
interaction monitoring subsystem 112 may be programmed to identify use of a mechanism (via a comment in a track associated with a user account) that enables execution of a transaction related to a product or service (e.g., accessing a shopping site via a link in the comment). Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the use of the mechanism. The transaction may, for example, relate to a product or service that appears in a portion of a content item to which the comment corresponds. In one use case, a creating user of a track may be rewarded based on execution of a mechanism in a comment of the track that enables execution of a transaction. For example, with respect to FIG. 8A , a creating user of the STAR track may be compensated for including Annotation 1A in the STAR track when a link to a shopping site, through which a jacket depicted in Portion A of Content Item 1 may be purchased, is clicked (or otherwise executed). As another example, with respect to FIG. 8B , the creating user may be compensated for including Annotation 2A in the STAR track when a link to a product page of a shopping site, through which a Brand X dress depicted in Portion B of Content Item 1 may be purchased, is clicked (or otherwise executed). - In another implementation,
interaction monitoring subsystem 112 may be programmed to identify an execution of a transaction related to a product or service that is enabled via a track associated with a user account (e.g., purchasing of a product, a user sign-up with a service, etc.).Reward subsystem 114 may be programmed to determine a reward to be provided (or credited) to the user account based on the execution of the transaction. The transaction may, for example, relate to a product or service that appears in a portion of a content item to which a comment of the track corresponds. In one use case, a creating user of a track may be rewarded based on execution of transactions that are enabled via comments in the track. For example, with respect toFIG. 8A , a creating user of the STAR track may be compensated for includingAnnotation 1A in the STAR track when the jacket depicted in Portion A ofContent Item 1 is purchased by other users using a reference (e.g., a link) to a shopping site that sells the jacket. As another example, with respect toFIG. 8B , a creating user of the STAR track may be compensated for includingAnnotation 2A in the STAR track when the dress depicted in Portion B ofContent Item 1 is purchased by other users using a reference to a product page of a shopping site that sells the dress. - In some implementations,
annotation subsystem 106 may be programmed to identify a reference associated with a product or service in a comment of a track associated with a user account.Annotation subsystem 106 may be programmed to provide a mechanism in the track (e.g., in the comment having the reference, in a reply to the comment, in another comment in the track, etc.) to enable a transaction related to the product or service. The reference may, for example, include a product/service identifier, a product/service type identifier, a link to a website through which the transaction related to the product or service may be executed, or other reference. In one scenario, with respect toFIG. 8A ,Annotation 1A (which is accessible via the STAR track) may include a hyperlink to a shopping site through which a jacket that is depicted in Portion A ofContent Item 1 may be purchased. Upon identification of the shopping site hyperlink,annotation subsystem 106 may modify the hyperlink to include an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114). As such, when the jacket is purchased through the modified hyperlink, an account associated with reward subsystem 114 (or the entity associated with reward subsystem 114) may be provided with a portion of the revenue from the purchase of the jacket.Reward subsystem 114 may detect that the jacket purchase was made through the modified hyperlink, and compensate a creating user of the STAR track forAnnotation 1A in the STAR track. - In another scenario, with respect to
FIG. 8B ,Annotation 2A (which is accessible via the STAR track) may include the term “Brand X dress” without a link to a shopping site through which the Brand X dress depicted in Portion B ofContent Item 1 may be purchased. Nevertheless, upon identification of the term “Brand X dress” and that a dress is depicted in Portion B ofContent Item 1,annotation subsystem 106 may add a hyperlink, including an affiliate code associated with reward subsystem 114 (or an entity associated with reward subsystem 114), for the dress's product page on the shopping site toAnnotation 2A. As such, when the dress is purchased through the hyperlink, an account associated with reward subsystem 114 (or the entity associated with reward subsystem 114) may be provided with a portion of the revenue from the purchase of the dress.Reward subsystem 114 may detect that the dress purchase was made through the hyperlink, and compensate a creating user of the STAR track for includingAnnotation 2A in the STAR track. - Managing Replies or Other Reactions to Annotations
- According to an aspect of the invention, replies or other reactions to annotations may be handled in a number of ways.
- In various implementations, replies or other reactions to annotations may be stored in association with the annotations. For example, in some implementations,
annotation subsystem 106 may be programmed to obtain a first annotation corresponding to a portion of a content item. The first annotation may, for instance, be received at a time at which the portion of the content item is presented during a first presentation of the content item, and stored in a database so that the first annotation may be subsequently obtained from the database. Annotation subsystem 106 may be programmed to provide the first annotation to enable the first annotation to be presented with the portion of the content item during a second presentation of the content item. Annotation subsystem 106 may be programmed to receive, during the second presentation, a second annotation as a reaction (or reply) to the first annotation. Upon receipt of the second annotation, annotation subsystem 106 may initiate storage of the second annotation in association with the first annotation. - In one use case, with respect to
FIG. 9A , User X may have previously watched a presentation ofContent Item 1, and submittedAnnotation 1B when Portion A ofContent Item 1 was presented. As shown inFIG. 9A , User X submittedAnnotation 1B to ask other users where he/she can purchase the hat depicted in Portion A. When User Z is watching a presentation ofContent Item 1,Annotation 1B is presented to User Z when Portion A is presented during the presentation of Content Item 1 (FIG. 9B ). In response, User Z may reply toAnnotation 1B with a link to a shopping site through which the hat depicted in Portion A can be purchased to provide an answer to User X's question (e.g., using “Reply”button 902 and reply window 904). - As an example, as illustrated in
FIG. 9B , the reply may be stored asAnnotation 1C in association withAnnotation 1B such thatAnnotation 1C may appear as a reply toAnnotation 1B when User X or other users (e.g., future viewers of Portion A) watch Portion A ofContent Item 1. As such, among other benefits, questions and their corresponding answers may be presented together during respective portions of a content item that are relevant to the question and answer combinations. - As another example, as indicated by
user interface 906 inFIG. 9C , the reply toAnnotation 1B may causeAnnotation 1B and the reply (e.g.,Annotation 1C) to be provided to a social networking service (e.g., Social Networking Service #1) to storeAnnotation 1B and the reply as a message thread, and initiate a conversation between User X and User Z via the social networking service (e.g., Social Networking Service #1) based on the message thread. - In certain implementations,
annotation subsystem 106 may be programmed to obtain a first track that enables access to a first annotation that corresponds to a portion of a content item. The first track may, for example, include an annotation identifier associated with the first annotation, a first reference time for the first annotation, or other information. The first reference time may correspond to the same portion of the content item as the first annotation, and may be utilized along with the annotation identifier to present the first annotation when the corresponding portion is presented during a presentation of the content item. - In some implementations,
annotation subsystem 106 may be programmed to provide the first track (e.g., to user device 104) to enable the first annotation to be presented with the corresponding portion of the content item. Upon receipt of a second annotation as a reaction to the first annotation during a presentation of the content item, annotation subsystem 106 may initiate storage of the second annotation in association with the first annotation. The storage of the second annotation (in association with the first annotation) may, for instance, result in the first track further enabling access to the second annotation (e.g., the STAR track in FIGS. 9A-9B may further enable access to Annotation 1C). - In one implementation, the second annotation may be stored in a database with information indicating that the second annotation is a reaction to the first annotation. As an example, when the first track is played during a presentation of the content item, the first track may indicate that the first annotation is to be presented with its corresponding portion of the content item. Based on a query of the database for the first annotation (e.g., using the annotation identifier of the first annotation), the second annotation may be obtained in addition to the first annotation as a result of the second annotation being identified in the database as a reaction to the first annotation. Subsequently, both the first annotation and the second annotation may be presented when the corresponding portion of the content item is presented.
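- A minimal sketch of this lookup is shown below: when a track references an annotation, a single query retrieves the annotation together with any annotations recorded as reactions to it. The table and column names (annotations, reaction_to, created_at) are assumptions for illustration.

    import sqlite3

    def annotations_for_playback(conn: sqlite3.Connection, annotation_id: str) -> list[tuple]:
        """Fetch an annotation referenced by a track together with any annotations
        stored in the database as reactions (replies) to it."""
        return conn.execute(
            "SELECT annotation_id, content, reference_time "
            "FROM annotations "
            "WHERE annotation_id = ? OR reaction_to = ? "
            "ORDER BY created_at",
            (annotation_id, annotation_id),
        ).fetchall()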
- In another implementation, the first track may be updated to further enable access to the second annotation based on the receipt of the second annotation. For example, the first track may be updated to further include an annotation identifier associated with the second annotation and information indicating that the second annotation is a reaction to the first annotation.
- In another implementation, a second track that enables access to the first annotation may be updated such that the second track further enables access to the second annotation. As an example, two different tracks (e.g., annotation tracks or other tracks) that enable access to two different sets of annotations may both be updated when a user submits a reaction to an annotation common to both tracks during playback of only one of the two tracks.
- As discussed, in various implementations, a reply or other reaction to an annotation may initiate a conversation between users. For example, in some implementations,
annotation subsystem 106 may be programmed to obtain an annotation inputted by a first user during a first presentation of a content item.Annotation subsystem 106 may be programmed to present the annotation during a second presentation of the content item to a second user.Annotation subsystem 106 may be programmed to receive a reaction associated with the annotation from the second user. Based on the receipt of the reaction,annotation subsystem 106 may be programmed to provide the annotation and the reaction to the first user. - In one implementation,
annotation subsystem 106 may be programmed to initiate a message thread associated with the first user and the second user based on the receipt of the reaction. As an example,annotation subsystem 106 may cause the message thread to be generated at a messaging service (e.g., a social networking service, a chat service, a SMS service, a MMS service, etc.) that is accessible to the first user and the second user. If, for instance, the first user's user device is logged into the messaging service, the annotation and the reaction may be provided to the user device (e.g., pulled by the user device, pushed to the user device, etc.). As such, the reaction to the annotation may initiate a conversation between the first and second users even if the annotation by the first user had not been intended specifically for the second user, as well as without either user having to re-experience the portion of the content item to which the annotation corresponds. Among other benefits, conversations may be initiated between users regarding subject matter of mutual interest, continued through a messaging service independent of an annotation service or a content delivery service, etc. - In another implementation,
annotation subsystem 106 may be programmed to provide the annotation and the reaction to the first user via a social networking service. For example, annotation subsystem 106 may identify a social networking service with which the first user and the second user both have accounts, and provide the annotation and the reaction to the first user via the social networking service. In one use case, as shown in FIG. 9C , Annotation 1B and its reaction (Annotation 1C) may be provided to User X via Social Networking Service #1. - In another implementation,
annotation subsystem 106 may be programmed to identify a social distance between the first user and the second user within a social network. Annotation subsystem 106 may be programmed to determine whether the social distance satisfies a social distance threshold, and provide the annotation and the reaction to the first user based on a determination that the social distance satisfies the social distance threshold. In one use case, for example, User X may be associated with a preference to only receive communications from users that are 1 degree away from User X. As such, while a conversation between User X and one of User X's friends may be initiated when the friend replies to one of User X's annotations corresponding to portions of a content item, annotation subsystem 106 may know not to initiate a conversation between User X and another user that is only a friend of one of User X's friends (e.g., more than 1 degree away from User X). - As discussed, in some implementations, a reaction to an annotation in a track may result in the track being updated to include (or otherwise further enable access to) the reaction. On the other hand, in other implementations, the annotation and the reaction may be provided to an authoring user of the annotation without the track being updated to include (or otherwise enable access to) the reaction.
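- The social distance check described above can be sketched as a bounded breadth-first search over a friend graph, as shown below. The graph representation and the default threshold of 1 degree mirror the example and are assumptions for illustration.

    from collections import deque

    def within_social_distance(graph: dict[str, set[str]], author: str,
                               reactor: str, max_degrees: int = 1) -> bool:
        """Decide whether a reaction should be forwarded to the annotation's author,
        e.g., only when the reacting user is at most 1 degree away in the friend graph."""
        if author == reactor:
            return True
        seen, frontier = {author}, deque([(author, 0)])
        while frontier:
            user, distance = frontier.popleft()
            if distance == max_degrees:
                continue
            for friend in graph.get(user, ()):
                if friend == reactor:
                    return True
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, distance + 1))
        return False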
- Intelligently Presenting User Interface Elements
- According to an aspect of the invention, display of user interface elements may be presented based on relevancy. By way of example, user interface elements may be presented with varying characteristics based on, for example, the relevancy of the user interface elements to a user or other users, the relevancy of data associated with the user interface elements to the user or other users, the relevancy of the user interface elements to activity being performed by the user or other users, etc. In this way, the presentation of the user interface elements may allow a user to quickly identify relevant information, actions that may be of interest to the user, recommendations related to the user's interests, etc.
- Characteristics of the user interface elements may, for example, include one or more shapes, designs, sizes, colors, locations, animations, orientations, degrees of transparencies/opaqueness, degrees of sharpness or blurriness, labels (e.g., number, letter, etc.), or other characteristics. The characteristics of the user interface elements may change over time based on changes with respect to the number of user interface elements on display, data associated with each of the user interface elements, activities of a user or other users, etc.
- In some implementations, the user interface elements may be static in their presentation or may move dynamically in response to changes in the X, Y, or Z plane of the user interface. For example, rather than simply moving user interface elements horizontally (e.g., X plane), vertically (e.g., Y plane), or some combination thereof in the user interface, user interface elements may move into or out of the background of a user interface in a dynamic 3-dimensional fashion (e.g., Z plane). In this way, user interface elements of the user interface may be presented in a manner that simplifies the user interface while also providing the user with simultaneous access to many different user interface elements.
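- One way to realize the relevancy-based presentation described in this section is to map a relevancy score directly onto the size and Z-plane depth of a user interface element, as in the sketch below. The ElementLayout structure and the scaling constants are arbitrary and serve only to illustrate the idea.

    from dataclasses import dataclass

    @dataclass
    class ElementLayout:
        scale: float    # relative on-screen size of the user interface element
        z_depth: float  # 0.0 = front of the Z plane, 1.0 = far background

    def layout_for_relevancy(relevancy: float) -> ElementLayout:
        """Map a relevancy score in [0, 1] to presentation characteristics so that more
        relevant user interface elements appear larger and closer to the viewer."""
        relevancy = max(0.0, min(1.0, relevancy))
        return ElementLayout(scale=0.6 + 0.8 * relevancy,  # 0.6x .. 1.4x
                             z_depth=1.0 - relevancy)      # most relevant element in front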
- In certain implementations, user interface elements may be associated with content items (e.g., movies, episodes, video clips, songs, audio books, e-books, or other content items).
Content presentation subsystem 116 may be programmed to determine relevancy information indicating the relevancy of each of the content items to a user.Content presentation subsystem 116 may be programmed to modify and/or present the user interface elements based on the relevancy information. - In one example, with respect to
FIG. 10A ,user interface 1002 may include a display of user interface elements 1004 a-1004 g that are associated with television shows. WhileFIG. 10A depicts user interface elements corresponding to television shows, it should be appreciated that a similar interface may be utilized for any other content item (e.g., movies, songs, etc.). The characteristics of user interface elements 1004 may, for example, be based on which shows are most frequently viewed by the user, which shows are most relevant to a specific genre specified by the user, which shows the user has viewed the most, etc. - In another example, as shown in
FIG. 10B , the size of user interface element 1004 b (e.g., associated with the "Big Bang Theory" television show) may be increased, and the location of user interface element 1004 b within the Z plane (e.g., depth) may be changed to feature user interface element 1004 b more prominently in front of other user interface elements (e.g., from the perspective of the user). As an example, changes to the size and location of user interface element 1004 b may be effectuated when a number of the user's friends begin to tune into one or more episodes of the Big Bang Theory. The change in the size and location of user interface element 1004 b may, for example, alert the user that there may be information of potential interest to the user associated with user interface element 1004 b. In one use case, upon selection of user interface element 1004 b, the user may be presented with an information page that indicates the episodes of the Big Bang Theory that the user's friends are currently watching or have recently watched. The user may, for example, be inclined to start watching the episodes indicated on the information page in order to see the comments or other annotations that the user's friends have submitted for portions of the episodes. - In various implementations, user interface elements may be associated with other users within a user's social group (e.g., the user's friends, the user's colleagues, the user's connections within a social network, or other group).
Content presentation subsystem 116 may be programmed to determine relevancy information indicating the relevancy of each of the other users to the user.Content presentation subsystem 116 may be programmed to present user interface elements based on the relevancy information, for example, by modifying the characteristics of the user interface elements based on the relevancy of respective ones of the other users to the user. For example, as shown inFIG. 10C , the characteristics of user interface elements 1006 a-1006 g may, for example, be based on the frequency of the user's interactions with the other users (e.g., reactions to the other users' annotations, conversations with the other users, etc.), the frequency of the interactions with one another, similarity of the other users' activities with the user's activities, etc. - In another example, as shown in
FIG. 10D , the size of user interface element 1006 b (e.g., associated with Karl Thomas) may be increased, and the location of user interface element 1006 b within the Z plane (e.g., depth) may be changed to feature user interface element 1006 b more prominently in front of other user interface elements (e.g., from the perspective of the user). As an example, the change to the size and location of user interface element 1006 b may be effectuated when an increase in interactions with items associated with Karl Thomas's account is detected. The change in the size and location of user interface element 1006 b may, for example, alert the user that there may be information of potential interest to the user associated with user interface element 1006 b. In one use case, upon selection of user interface element 1006 b, the user may be presented with an information page that indicates the annotations that Karl Thomas has recently submitted for content items, the reactions that users have recently submitted for Karl Thomas's annotations, users that have recently engaged in conversation with Karl Thomas, etc. As a result, for example, the user may be inclined to view Karl Thomas's annotations and the reactions associated with Karl Thomas's annotations, start submitting annotations for content items for which Karl Thomas has submitted annotations, initiate a conversation with Karl Thomas, or perform other activities. - Managing Control of Presentations of a Content Item to a Group of Users
- According to an aspect of the invention, control of presentations of a content item to a group of users may be managed such that an application or a user may control playback or other features of the presentations of the content item to the group of users. For example, a group of users (e.g., friends) may wish to view a movie or television show together and play an accompanying game (e.g., a trivia game), engage in a related contest, etc. However, the users may not have access to the same content delivery service to watch a presentation of the movie or television show, or the users may be watching a presentation of the movie or television show on different applications or devices. As such, it may be difficult for the users to control multiple presentations of the movie or television show to play an accompanying game, engage in a related contest, etc. Accordingly, in some implementations, one or more users of a group may be enabled to simultaneously control multiple presentations of a content item to respective users of the group even when the presentations are provided to the users via different content delivery services, different user applications, or different user devices. In this way, among other benefits, the group interaction experience (e.g., group watching experience, group listening experience, group gaming experience, etc.) may be enhanced.
- In some implementations,
content presentation subsystem 116 may be programmed to manage presentations of a content item to at least two users. By way of example,content presentation subsystem 116 may synchronize the presentations of the content item (e.g., based on presentation reference times or other information) so that users may experience the same portion of the content item at a given time. - In some implementations, content reference subsystem 108 may be programmed to map portions of a first presentation of a content item to portions of a second presentation of the content item. The portions of the first and second presentations may, for example, be mapped to one another via a master set of reference times (as described in detail above with regard to
FIG. 3 ). - As shown in
FIGS. 11A-11B , for example, User A may watch a first presentation of Content Item 1 on user interface 202 a, and User B may watch a second presentation of Content Item 1 on user interface 202 b. The first presentation may, for instance, be provided via a first content delivery service (e.g., NETFLIX), and the second presentation may be provided via a second content delivery service (e.g., HULU). Nevertheless, the two presentations may be synchronized so that User A and User B are watching the same portion of Content Item 1 at the same time.
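- Such synchronization may be sketched as a piecewise mapping between each presentation's local playback time and a shared master reference time. In the sketch below, the per-service offset tables and the assumption that the two timelines advance at the same rate within a segment are simplifications for illustration.

    def to_master_time(service_time: float, offsets: list[tuple[float, float]]) -> float:
        """Map a delivery-service playback time to the master reference time.

        `offsets` is a hypothetical, ascending list of (service_start_time,
        master_start_time) pairs marking segment boundaries (e.g., where one
        service inserts advertisements).
        """
        service_start, master_start = 0.0, 0.0
        for segment_service, segment_master in offsets:
            if segment_service > service_time:
                break
            service_start, master_start = segment_service, segment_master
        return master_start + (service_time - service_start)

    def seek_target(master_time: float, offsets: list[tuple[float, float]]) -> float:
        """Inverse mapping: where a presentation must seek so that both users are
        watching the same portion of the content item at the same time."""
        inverted = [(master, service) for service, master in offsets]
        return to_master_time(master_time, inverted)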
- In one scenario, with respect to FIGS. 11A-11B , User A and User B may be playing a trivia game related to Content Item 1 (e.g., a movie, an episode, etc.). A remote application may control the presentations of Content Item 1 to User A and User B, and pause the presentations at particular times to ask trivia questions related to a portion of Content Item 1. As depicted in FIGS. 11A-11B , User A and User B may be presented with a question (e.g., Question 4) on window 1102, and they may each answer Question 4 using "Answer" button 1104. Questions of the trivia game may, for example, be presented as comments on a track (e.g., a trivia game track or other track) created by administrators or other users, and answers to the trivia questions may be stored as reactions to the comments. - In another scenario, User A or User B may control the ability to pause the presentations of
Content Item 1. For example, when User A activates the play/pause button 208 a to pause the presentations ofContent Item 1 at Portion A, both User A and User B may be presented with a question corresponding to Portion A. As illustrated inFIGS. 11A-11B , User A has 3 points for answering 3 previous questions correctly, and User B has 1 point for answering 1 previous question correctly. - In various implementations, a threshold number of users in a group of users may need to issue a command in order for the command to be implemented for presentations of a content item to the group of users. As an example, with respect to
FIGS. 11A-11B , both User A and User B may need to activate their play/pause buttons 208 when Portion A is presented to pause the presentations ofContent Item 1 at Portion A and trigger the presentation ofQuestion 4. As such, both User A and User B may be allowed to gauge whether they are comfortable with questions regarding a certain portion of a content item before activating their play/pause button 208 to trigger a question. - In some implementations, the control of presentations of a content item (e.g., partial control, full control, etc.) may be passed among a group of users based on a schedule, pass intervals (e.g., time intervals, use intervals, etc.), token-based criteria, or other criteria. In one use case, a user may manually pass his/her control of the presentations to another user in the group. In another use case, a schedule may indicate when each one of the group of users should be given full or partial control. In another use case, passing of control may be performed after a user has had control for a particular time interval, or after a user has used all of his/her available number of commands to control presentations of the content item.
- In yet another use case, a user may be given a certain number of tokens which may be exchanged for issuing commands to control the presentations of the content item to the group of users (e.g., 1 token to pause the presentations for 5 seconds, 3 tokens to rewind or fast-forward the presentations up to 5 minutes back or forward, 6 tokens to cause the presentations to jump to any portion of the content item, etc.). After the user has used all of his/her tokens, the control of the presentations may be passed to another user in the group that has available tokens. A user may, for example, be given an initial set of tokens for controlling the presentations of the content item for free, but may have the option to purchase additional tokens. The foregoing trivia game is but one example. It should be recognized that other examples may be implemented when a group of users wish to view a movie or television show together. In some implementations, third parties may generate and provide “group viewing tracks” to encourage social behavior.
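- The token-based criteria above may be sketched as a simple ledger of command costs, as shown below. The command names, the token costs, and the ControlTokens class echo the example and are hypothetical.

    class ControlTokens:
        """Track the tokens each group member spends to issue playback commands.

        The costs mirror the example above (1 token to pause, 3 to rewind or
        fast-forward, 6 to jump); once a member runs out of tokens, control may
        pass to the next member of the group who still has tokens available.
        """
        COSTS = {"pause": 1, "rewind": 3, "fast_forward": 3, "jump": 6}

        def __init__(self, balances: dict[str, int]):
            self.balances = dict(balances)

        def issue(self, user: str, command: str) -> bool:
            cost = self.COSTS[command]
            if self.balances.get(user, 0) < cost:
                return False  # command refused; control may pass to another user
            self.balances[user] -= cost
            return True

    # tokens = ControlTokens({"User A": 6, "User B": 6})
    # tokens.issue("User A", "pause")  # True; User A now has 5 tokens remaining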
- Exemplary Flowcharts
-
FIGS. 12-27 comprise exemplary illustrations of flowcharts of processing operations of methods that enable the various features and functionality of the system as described in detail above (and illustrated in FIGS. 1-11). The processing operations of each method presented below are intended to be illustrative and non-limiting. In some implementations, for example, the methods may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the processing operations of the methods are illustrated and described below is not intended to be limiting.
- In some implementations, the methods may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of the methods in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of the methods.
- Exemplary Flowchart for Creating and Maintaining a Database of Annotations
-
FIG. 12 is an exemplary illustration of a flowchart of a method 1200 of creating and maintaining a database of annotations corresponding to portions of a content item, according to an aspect of the invention.
- In an operation 1202, a first annotation corresponding to a time at which a first portion of a content item is presented via a first content delivery service may be received. Operation 1202 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1204, a second annotation corresponding to a time at which the first portion of the content item is presented via a second content delivery service may be received. As an example, the presentation via the first content delivery service may correspond to a first presentation that includes the content item, and the presentation via the second content delivery service may correspond to a second presentation that includes the content item. As another example, the presentation via the first content delivery service may correspond to a first presentation that includes the first portion of the content item and does not include a second portion of the content item, and the presentation via the second content delivery service may include the first and second portions of the content item. Operation 1204 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1206, storage of the first annotation in association with a first reference time corresponding to the first portion of the content item may be initiated. Operation 1206 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1208, storage of the second annotation in association with the first reference time (corresponding to the first portion of the content item) may be initiated. Operation 1208 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1210, a third annotation corresponding to a time at which a second portion of the content item is presented may be received. Operation 1210 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1212, storage of the third annotation in association with a second reference time corresponding to the second portion of the content item may be initiated. Operation 1212 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1214, the first and/or second annotations may be provided based on the first reference time. For example, the first and/or second annotations may be provided based on the first reference time such that the first and/or second annotations are presented when the first portion of the content item (to which the first reference time corresponds) is presented during a third presentation of the content item. The third presentation may, for example, be provided via the first content delivery service, the second content delivery service, or a third content delivery service. The third presentation may be the same as one of the first or second presentations of the content item, or different than both the first and second presentations of the content item. Operation 1214 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1216, the third annotation may be provided based on the second reference time. For example, the third annotation may be provided based on the second reference time such that the third annotation is presented when the second portion of the content item (to which the second reference time corresponds) is presented during the third presentation of the content item. Operation 1216 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
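- By way of a non-limiting illustration of operations 1202-1216, the Python sketch below stores annotations received through different content delivery services against a shared reference time for the content item. The table layout, field names, and the simple ad-break subtraction used to derive a reference time are assumptions made for illustration only, not the claimed mapping.

```python
# Sketch: storing annotations from different delivery services against
# canonical reference times (schema and field names are hypothetical).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE annotations (
    content_item_id TEXT,
    reference_time_s REAL,   -- canonical time into the content item itself
    service_id TEXT,         -- delivery service through which it was received
    body TEXT)""")

def to_reference_time(presentation_time_s, ad_breaks):
    """Map a time within a specific presentation to a canonical reference time
    by subtracting auxiliary material (e.g., ads) shown before that point."""
    return presentation_time_s - sum(d for start, d in ad_breaks if start <= presentation_time_s)

def store_annotation(content_item_id, service_id, presentation_time_s, ad_breaks, body):
    ref = to_reference_time(presentation_time_s, ad_breaks)
    db.execute("INSERT INTO annotations VALUES (?, ?, ?, ?)",
               (content_item_id, ref, service_id, body))

# First annotation: received via service A, whose presentation opens with a 30 s ad.
store_annotation("item-1", "service-a", 130.0, [(0.0, 30.0)], "Great scene!")
# Second annotation: received via service B, which presents the item with no ads.
store_annotation("item-1", "service-b", 100.0, [], "Agreed!")

# Both rows share reference time 100.0, so either annotation can be presented
# again whenever that portion of the content item is reached in a later presentation.
print(db.execute("SELECT reference_time_s, service_id, body FROM annotations").fetchall())
```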
- Exemplary Flowchart for Generating Annotations Based on User Interactions
-
FIG. 13 is an exemplary illustration of a flowchart of a method 1300 of generating annotations for a content item based on interactions of users with presentations of the content item, according to an aspect of the invention.
- In an operation 1302, an interaction of a user with a presentation of a content item may be monitored. Operation 1302 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 1304, a characteristic of the content item may be determined based on the interaction. Operation 1304 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1306, an annotation may be generated for the content item based on the characteristic. Operation 1306 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1308, a reference time that corresponds to a portion of the content item may be identified for the annotation based on the interaction. Operation 1308 may be performed by a content reference subsystem that is the same as or similar to content reference subsystem 108, in accordance with one or more implementations.
- In an operation 1310, storage of the annotation in association with the reference time may be initiated. Operation 1310 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1312, the annotation may be provided based on the reference time such that the annotation is presented when the portion of the content item (to which the reference time corresponds) is presented during a subsequent presentation of the content item. Operation 1312 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
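- As a rough illustration of operations 1302-1310, the sketch below infers a characteristic of a content item from monitored interactions (here, many users rewinding to the same portion) and generates an annotation tied to a reference time. The bucket size, threshold, and event format are hypothetical choices for the example, not values taken from the disclosure.

```python
# Sketch: inferring a characteristic from monitored interactions and generating
# an annotation for the corresponding reference time (thresholds are hypothetical).
from collections import Counter

def annotations_from_rewinds(rewind_events, bucket_s=10, min_users=50):
    """rewind_events: iterable of (user_id, reference_time_s) rewind targets."""
    buckets, seen = Counter(), set()
    for user_id, ref_time in rewind_events:
        bucket = int(ref_time // bucket_s) * bucket_s
        if (user_id, bucket) not in seen:          # count each user once per bucket
            seen.add((user_id, bucket))
            buckets[bucket] += 1
    annotations = []
    for bucket, count in buckets.items():
        if count >= min_users:
            # A characteristic (frequently re-watched) determined from the interaction,
            # turned into an annotation with an identified reference time.
            annotations.append({"reference_time_s": bucket,
                                "text": f"{count} viewers re-watched this portion"})
    return annotations

events = [(f"user-{i}", 612.0 + (i % 5)) for i in range(80)]  # 80 users rewind near 10:12
print(annotations_from_rewinds(events))
```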
- Exemplary Flowchart for Providing Annotations to Social Networking Services
-
FIG. 14 is an exemplary illustration of a flowchart of a method 1400 of providing annotations (corresponding to portions of a content item) to social networking services, according to an aspect of the invention.
- In an operation 1402, a presentation of a content item may be initiated. Operation 1402 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120, in accordance with one or more implementations.
- In an operation 1404, a first annotation may be received at a time at which a first portion of the content item is presented. Operation 1404 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1406, storage of the first annotation in association with a first reference time (corresponding to the first portion of the content item) may be initiated. Operation 1406 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1408, the first annotation may be provided to a first social networking service. Operation 1408 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1410, a second annotation may be received at a time at which a second portion of the content item is presented. Operation 1410 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1412, storage of the second annotation in association with a second reference time (corresponding to the second portion of the content item) may be initiated. Operation 1412 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1414, the second annotation may be provided to a second social networking service. Operation 1414 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
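- One plausible way to realize operations 1408 and 1414 is to route each annotation to whichever social networking services the authoring user has linked, subject to per-service preferences. The sketch below is a hypothetical illustration; the service names, preference flags, and posting function are stand-ins rather than any real social networking API.

```python
# Sketch: providing annotations to different social networking services
# (service names and the posting function are hypothetical stand-ins).
def post_to_service(service, message):
    # Placeholder for a call to a real social networking service.
    print(f"[{service}] {message}")

def share_annotation(annotation, linked_services):
    """Route an annotation to each linked service whose rules allow it,
    e.g., services marked spoiler-free only receive non-spoiler annotations."""
    for service, rules in linked_services.items():
        if annotation.get("spoiler") and not rules.get("allow_spoilers", False):
            continue
        post_to_service(service, f'{annotation["text"]} (at {annotation["reference_time_s"]}s)')

share_annotation(
    {"text": "What a twist!", "reference_time_s": 1320.0, "spoiler": True},
    {"service-one": {"allow_spoilers": True}, "service-two": {"allow_spoilers": False}},
)
```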
- Exemplary Flowchart for Presenting Annotations
-
FIG. 15 is an exemplary illustration of a flowchart of a method 1500 of presenting annotations corresponding to portions of a content item during a presentation of the content item, according to an aspect of the invention.
- In an operation 1502, a selection of a content item to be presented to a user may be received. Operation 1502 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120, in accordance with one or more implementations.
- In an operation 1504, a first parameter associated with the user that is related to presentation of annotations may be received. Operation 1504 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1506, annotations corresponding to portions of the content item may be obtained based on the first parameter. Operation 1506 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1508, a second parameter associated with the user that indicates a characteristic desired by the user may be received. Operation 1508 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120, in accordance with one or more implementations.
- In an operation 1510, a presentation of the selected content item may be initiated such that the presentation of the selected content item is based on the second parameter. Operation 1510 may be performed by a user content presentation subsystem that is the same as or similar to user content presentation subsystem 120, in accordance with one or more implementations.
- In an operation 1512, a determination that the presentation of the selected content item has reached a first reference time corresponding to a first portion of the content item may be effectuated. Operation 1512 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1514, first and/or second annotations associated with the first reference time may be presented at a time corresponding to the first portion of the content item. Operation 1514 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1516, a third annotation by the user that corresponds to a time at which a second portion of the content item is presented may be received during the presentation of the content item. Operation 1516 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
- In an operation 1518, storage of the third annotation in association with a second reference time corresponding to the second portion of the content item may be initiated. Operation 1518 may be performed by a user annotation subsystem that is the same as or similar to user annotation subsystem 118, in accordance with one or more implementations.
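- To make operations 1512-1514 concrete, the sketch below checks on each playback tick whether the presentation has crossed a stored reference time and, if so, surfaces the associated annotations. The polling-style player loop and field names are assumptions for illustration only.

```python
# Sketch: presenting stored annotations when playback reaches their reference
# times (the tick-based player loop and field names are hypothetical).
annotations = [
    {"reference_time_s": 95.0, "text": "Watch the background here"},
    {"reference_time_s": 100.0, "text": "Great scene!"},
]

def annotations_due(annotations, prev_time_s, now_time_s):
    """Return annotations whose reference time was crossed since the last tick."""
    return [a for a in annotations if prev_time_s < a["reference_time_s"] <= now_time_s]

prev = 94.0
for now in (96.0, 99.0, 101.0):          # simulated playback ticks
    for a in annotations_due(annotations, prev, now):
        print(f'show at {a["reference_time_s"]}s: {a["text"]}')
    prev = now
```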
- Exemplary Flowchart for Rewarding the Creation of Annotations
-
FIG. 16 is an exemplary illustration of a flowchart of a method 1600 of facilitating rewards for the creation of annotations, according to an aspect of the invention.
- In an operation 1602, an annotation corresponding to a time at which a portion of a content item is presented may be received from a user. Operation 1602 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1604, the annotation may be associated with a user account of the user. Operation 1604 may be performed by an account subsystem that is the same as or similar to account subsystem 110, in accordance with one or more implementations.
- In an operation 1606, a reward to be provided (or credited) to the user account may be determined based on the annotation. For example, a reward may be provided (or credited) to the user account based on receipt of the annotation from the user, interactions of other users with the annotation, or other criteria. Operation 1606 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
- Exemplary Flowchart for Rewarding Based on Interactions with Annotations
-
FIG. 17 is an exemplary illustration of a flowchart of a method 1700 of facilitating rewards based on interactions with annotations, according to an aspect of the invention.
- In an operation 1702, an annotation received during a presentation of a content item may be associated with a user account. Operation 1702 may be performed by an account subsystem that is the same as or similar to account subsystem 110, in accordance with one or more implementations.
- In an operation 1704, interactions with the annotation may be monitored. Monitored interactions may, for instance, include access of the annotation (e.g., viewing the annotation, listening to the annotation, etc.) by other users during presentation of the content item, reactions by users to the annotation (e.g., rating the annotation, replying to the annotation, etc.), execution of transactions enabled via the annotation, or other interactions. Operation 1704 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 1706, a reward to be provided (or credited) to the user account may be determined based on the interactions. For example, a determination of whether the interactions satisfy one or more criteria for compensating a user associated with the user account may be effectuated. The reward to be provided to the user account may be determined based on whether the interactions satisfy the compensation criteria. Operation 1706 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
- Exemplary Flowchart for Rewarding Based on Annotation-Enabled Transactions
-
FIG. 18 is an exemplary illustration of a flowchart of a method 1800 of facilitating rewards based on execution of transactions enabled via annotations, according to an aspect of the invention.
- In an operation 1802, an annotation received during a presentation of a content item may be associated with a user account. Operation 1802 may be performed by an account subsystem that is the same as or similar to account subsystem 110, in accordance with one or more implementations.
- In an operation 1804, a reference associated with a product or service may be identified in the annotation. Operation 1804 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1806, a mechanism that enables a transaction related to the product or service may be provided in the annotation. For example, the mechanism may be provided in the annotation based on the identification of the reference associated with the product or service in the annotation. Operation 1806 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1808, an execution of the transaction enabled via the mechanism may be identified. For example, the execution of the transaction may be identified based on use of the mechanism by a user to facilitate the execution of the transaction. Operation 1808 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 1810, a reward to be provided (or credited) to the user account may be determined based on the execution of the transaction. Operation 1810 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
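- The reward determinations described in connection with FIGS. 16-18 could, for instance, weigh monitored interactions and annotation-enabled transactions against compensation criteria before crediting the authoring user's account. The sketch below illustrates that idea with entirely hypothetical criteria and amounts; it is not the compensation scheme of the disclosure.

```python
# Sketch: crediting a reward to the authoring user's account based on monitored
# interactions and transactions (criteria and amounts are hypothetical).
def determine_reward(interactions):
    """interactions: dict of counts for each monitored interaction type."""
    reward = 0.0
    if interactions.get("views", 0) >= 1000:           # access of the annotation
        reward += 1.00
    reward += 0.05 * interactions.get("replies", 0)     # reactions to the annotation
    reward += 0.50 * interactions.get("purchases", 0)   # transactions enabled via the annotation
    return round(reward, 2)

account_balances = {"author-1": 0.0}
account_balances["author-1"] += determine_reward({"views": 2500, "replies": 12, "purchases": 3})
print(account_balances)  # {'author-1': 3.1}
```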
- Exemplary Flowchart for Providing Annotation Tracks or Other Datasets
-
FIG. 19 is an exemplary illustration of a flowchart of a method 1900 of providing a dataset (or track) of annotations corresponding to portions of a content item, according to an aspect of the invention.
- In an operation 1902, a first annotation received from a first source during a first presentation of a content item (via a first content delivery service) may be stored. The first source may, for example, include an authoring user of the first annotation, an entity associated with the authoring user, or other entity. Operation 1902 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1904, a second annotation received from the first source during a second presentation of the content item (via a second content delivery service) may be stored. Operation 1904 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1906, a third annotation received from a second source during a third presentation of the content item may be stored. The second source may, for example, include an authoring user of the third annotation, an entity associated with the authoring user, or other entity. Operation 1906 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1908, the first, second, or third annotations may be identified for inclusion in a dataset (or track). For example, the annotations may be identified for inclusion in the dataset based on a selection of the annotations by a user for inclusion in the dataset, one or more parameters selected by a user for creating the dataset, automatic creation of the dataset by a service without explicit user input to create the dataset, etc. Operation 1908 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1910, the dataset may be generated. For example, the dataset may be generated such that the dataset enables access to the first, second, or third annotations. Operation 1910 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1912, the dataset may be provided to enable the first, second, or third annotations to be presented, respectively, at times corresponding to first, second, or third portions of the content item. Operation 1912 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1914, a reaction associated with the first, second, or third annotations may be identified. Operation 1914 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 1916, a characteristic may be determined for the dataset based on the reaction. Operation 1916 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 1918, the characteristic may be associated with the dataset. Operation 1918 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
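- A minimal sketch of operations 1908-1918 might assemble a track from annotations matching a creator's selection criteria and then attach a characteristic derived from reactions to its entries. The selection predicate, the "popular" characteristic, and the threshold below are hypothetical illustrations rather than anything specified by the disclosure.

```python
# Sketch: assembling a dataset ("track") of selected annotations and deriving a
# characteristic for it from reactions (names and thresholds are hypothetical).
def build_track(title, annotations, include):
    """Include only annotations matching the creator's selection criteria."""
    entries = sorted((a for a in annotations if include(a)),
                     key=lambda a: a["reference_time_s"])
    return {"title": title, "entries": entries, "characteristics": {}}

def tag_track_from_reactions(track, reactions):
    """Derive a characteristic for the whole dataset from reactions to its entries."""
    likes = sum(1 for r in reactions if r.get("type") == "like")
    if likes >= 100:
        track["characteristics"]["popular"] = True
    return track

annotations = [
    {"reference_time_s": 100.0, "text": "Director cameo", "source": "editor-1"},
    {"reference_time_s": 250.0, "text": "lol", "source": "viewer-7"},
]
track = build_track("Trivia track", annotations, lambda a: a["source"] == "editor-1")
track = tag_track_from_reactions(track, [{"type": "like"}] * 150)
print(track["title"], len(track["entries"]), track["characteristics"])
```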
- Exemplary Flowchart for Rewarding Based on Interactions with Datasets
-
FIG. 20 is an exemplary illustration of a flowchart of a method 2000 of facilitating rewards based on interactions with datasets (e.g., tracks), according to an aspect of the invention.
- In an operation 2002, a dataset (or track) that enables access to annotations corresponding to portions of a content item may be generated. Operation 2002 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2004, the dataset may be associated with a user account. For example, the dataset may be received from a first source, and the dataset may be associated with a user account of the first source. The first source may include a creating user of the dataset, an entity associated with the creating user, or other entity. Operation 2004 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2006, interactions with the dataset may be monitored. Monitored interactions may, for example, include access of the dataset (e.g., viewing annotations of the dataset, listening to annotations of the dataset, etc.) by users during a presentation of the content item, reactions by users to the dataset (e.g., rating the dataset, rating annotations of the dataset, replying to annotations of the dataset, etc.), execution of transactions enabled via the dataset, or other interactions. Operation 2006 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 2008, a reward to be provided (or credited) to the user account may be determined based on the interactions. Operation 2008 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
- Exemplary Flowchart for Rewarding Based on Dataset-Enabled Transactions
-
FIG. 21 is an exemplary illustration of a flowchart of a method 2100 of facilitating rewards based on execution of transactions enabled via datasets (e.g., tracks), according to an aspect of the invention.
- In an operation 2102, a reference associated with a product or service may be identified in an annotation that is to be included in a dataset (or track). Operation 2102 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2104, a mechanism that enables a transaction related to the product or service may be generated. Operation 2104 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2106, the dataset may be generated such that the dataset enables access to the annotation and the mechanism. Operation 2106 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2108, the dataset may be associated with a user account. Operation 2108 may be performed by an account subsystem that is the same as or similar to account subsystem 110, in accordance with one or more implementations.
- In an operation 2110, an execution of the transaction via the mechanism may be identified. For example, the execution of the transaction may be identified based on use of the mechanism by a user to facilitate the execution of the transaction. Operation 2110 may be performed by an interaction monitoring subsystem that is the same as or similar to interaction monitoring subsystem 112, in accordance with one or more implementations.
- In an operation 2112, a reward to be provided to the user account may be determined based on the execution of the transaction. Operation 2112 may be performed by a reward subsystem that is the same as or similar to reward subsystem 114, in accordance with one or more implementations.
- Exemplary Flowchart for Facilitating the Sharing of Portions of a Content Item
-
FIG. 22 is an exemplary illustration of a flowchart of a method 2200 of facilitating the sharing of portions of a content item across different content delivery services, according to an aspect of the invention.
- In an operation 2202, a request to provide information to enable access to a portion of a content item may be received. The request may, for example, be based on a first presentation of the content item via a first content delivery service. Operation 2202 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2204, a reference time corresponding to the portion of the content item may be identified. Operation 2204 may be performed by a content reference subsystem that is the same as or similar to content reference subsystem 108, in accordance with one or more implementations.
- In an operation 2206, reference information that enables access to the portion of the content item in a second presentation of the content item (via a second content delivery service) may be generated based on the reference time. Operation 2206 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2208, the reference information may be provided to enable access to the portion of the content item via the second content delivery service. Operation 2208 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- Exemplary Flowchart for Facilitating the Access of a Portion of a Content Item
-
FIG. 23 is an exemplary illustration of a flowchart of a method 2300 of facilitating the access of a portion of a content item, according to an aspect of the invention.
- In an operation 2302, reference information related to a portion of a content item may be received. For example, the reference information may be generated based on user input during a first presentation of the content item (via a first content delivery service). Operation 2302 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2304, a second content delivery service (through which access to the portion of the content item is available) may be identified based on the reference information. For example, the reference information may include information indicating the content item (e.g., content item identifier), the portion of the content item (e.g., portion identifier), a reference time corresponding to the portion of the content item, or other information. The second content delivery service may be identified based on a determination that the second content delivery service offers access to the content item or the portion of the content item. Operation 2304 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2306, the portion of the content item may be provided in the second presentation (via the second content delivery service) based on the reference information. For example, the reference information may enable a user to jump to the portion of the content item in the second presentation (e.g., using a content item identifier associated with the content item and a reference time corresponding to the portion of the content item). Operation 2306 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
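- Taken together, methods 2200 and 2300 suggest service-agnostic reference information (a content item identifier plus a reference time) that a second service can resolve into its own deep link. The sketch below assumes a hypothetical catalog and URL format purely for illustration; no real delivery-service API is implied.

```python
# Sketch: sharing a portion of a content item across delivery services using
# service-agnostic reference information (catalog and URL format are hypothetical).
from urllib.parse import urlencode

def make_reference(content_item_id, reference_time_s):
    """Reference information generated from the first presentation (method 2200)."""
    return {"content_item_id": content_item_id, "reference_time_s": reference_time_s}

# Hypothetical catalog of which services carry which items and their link formats.
CATALOG = {
    "service-b": {"items": {"item-1": "abc123"},
                  "url": "https://service-b.example/watch?{query}"},
}

def resolve_reference(reference):
    """Identify a second service offering the item and build a link that jumps
    to the referenced portion (method 2300)."""
    for service, info in CATALOG.items():
        native_id = info["items"].get(reference["content_item_id"])
        if native_id:
            query = urlencode({"v": native_id, "t": int(reference["reference_time_s"])})
            return service, info["url"].format(query=query)
    return None, None

print(resolve_reference(make_reference("item-1", 754.0)))
# ('service-b', 'https://service-b.example/watch?v=abc123&t=754')
```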
- Exemplary Flowchart for Enabling Storage of Reactions to Annotations
-
FIG. 24 is an exemplary illustration of a flowchart of a method 2400 of enabling storage of reactions to annotations, according to an aspect of the invention.
- In an operation 2402, a first annotation (initially received at a time at which a portion of a content item is presented during a first presentation of the content item) may be obtained. Operation 2402 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2404, the first annotation may be provided when the corresponding portion of the content item is presented during a second presentation of the content item. Operation 2404 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2406, a second annotation may be received during the second presentation as a reaction to the first annotation. Operation 2406 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2408, storage of the second annotation in association with the first annotation may be initiated. By way of example, the second annotation may be stored in association with the first annotation based on a determination that the second annotation is a reaction to the first annotation. Operation 2408 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- Exemplary Flowchart for Initiating Conversations Based on Annotation Reactions
-
FIG. 25 is an exemplary illustration of a flowchart of a method 2500 of initiating conversations between users based on reactions to annotations, according to an aspect of the invention.
- In an operation 2502, an annotation entered by a first user during a first presentation of a content item may be obtained. Operation 2502 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2504, the annotation may be presented during a second presentation of the content item (to a second user). Operation 2504 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2506, a reaction associated with the annotation may be received from the second user. Operation 2506 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
- In an operation 2508, the annotation and the reaction may be provided via a messaging service to the first user. As an example, the annotation and/or the reaction may be provided to the first user based on a determination that the first and second users are associated with the same social network, a determination that the first and second users are within a social distance threshold from one another, or other criteria. Operation 2508 may be performed by an annotation subsystem that is the same as or similar to annotation subsystem 106, in accordance with one or more implementations.
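- Operation 2508 could plausibly be realized by checking a social-distance criterion before relaying the annotation and reaction to the original author over a messaging service. In the sketch below, the social graph, the distance threshold, and the message format are all hypothetical.

```python
# Sketch: sending an annotation and a reaction back to the authoring user via a
# messaging service when a social-distance criterion is met (names hypothetical).
def notify_author(annotation, reaction, social_distance, send_message, max_distance=2):
    author, reactor = annotation["author"], reaction["author"]
    distance = social_distance.get((author, reactor))
    if distance is not None and distance <= max_distance:
        send_message(to=author,
                     text=f'{reactor} reacted "{reaction["text"]}" to your note "{annotation["text"]}"')

distances = {("alice", "bob"): 1}   # alice and bob are directly connected
notify_author(
    {"author": "alice", "text": "Best line of the episode"},
    {"author": "bob", "text": "So true!"},
    distances,
    send_message=lambda to, text: print(f"message to {to}: {text}"),
)
```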
- Exemplary Flowchart for Presenting User Interface Elements Based on Relevancy
-
FIG. 26 is an exemplary illustration of a flowchart of a method 2600 of presenting user interface elements based on relevancy, according to an aspect of the invention.
- In an operation 2602, relevancy of a user interface element to a user may be determined with respect to a first time. Operation 2602 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2604, a first set of characteristics may be determined for the user interface element based on the determined relevancy with respect to the first time. Operation 2604 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2606, the user interface element may be presented based on the first set of characteristics. Operation 2606 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2608, relevancy of the user interface element to the user may be determined with respect to a second time (e.g., relevancy of the user interface element at the second time). Operation 2608 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2610, a second set of characteristics may be determined for the user interface element based on the determined relevancy with respect to the second time. Operation 2610 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2612, the user interface element may be modified during the presentation of the user interface element based on the second set of characteristics. Operation 2612 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
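- One way to picture operations 2602-2612 is as a mapping from a time-varying relevancy score to display characteristics that are recomputed as the presentation progresses. The scoring function, the particular characteristics, and their scales in the sketch below are assumptions chosen only to illustrate the idea.

```python
# Sketch: deriving and updating display characteristics of a user interface
# element from its relevancy at different times (scales are hypothetical).
def characteristics_for(relevancy):
    """Map a relevancy score in [0, 1] to display characteristics."""
    return {"opacity": round(0.3 + 0.7 * relevancy, 2),
            "font_size_px": int(12 + 8 * relevancy),
            "highlighted": relevancy > 0.8}

def relevancy_at(element, time_s):
    # Hypothetical model: the element is most relevant near its reference time.
    return max(0.0, 1.0 - abs(time_s - element["reference_time_s"]) / 60.0)

element = {"label": "Buy the soundtrack", "reference_time_s": 300.0}
first = characteristics_for(relevancy_at(element, 290.0))   # presented with these (operation 2606)
second = characteristics_for(relevancy_at(element, 350.0))  # later modified to these (operation 2612)
print(first)
print(second)
```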
- Exemplary Flowchart for Controlling Multiple Presentations of a Content Item
-
FIG. 27 is an exemplary illustration of a flowchart of a method 2700 of facilitating control of presentations of a content item to a group of users, according to an aspect of the invention.
- In an operation 2702, presentations of a content item to first and second users via first and second content delivery services, respectively, may be synchronized. Operation 2702 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2704, control of the presentations of the content item may be enabled for the first or second users. For example, user control of the presentations of the content item may be enabled for the first user, while user control of the presentations of the content item may be disabled for the second user (or vice versa). Operation 2704 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2706, a control command may be received from a controlling user (e.g., first user, second user, etc.) during the presentations of the content item. Operation 2706 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
- In an operation 2708, the presentations of the content item may be controlled based on the control command. Operation 2708 may be performed by a content presentation subsystem that is the same as or similar to content presentation subsystem 116, in accordance with one or more implementations.
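- A rough sketch of operations 2702-2708 is shown below: a single controller's command is applied to each user's presentation so the group stays in lockstep, while commands from users whose control is disabled are ignored. The player interface, class names, and example commands are hypothetical.

```python
# Sketch: applying one user's control command to synchronized presentations of
# the same content item (player interface and names are hypothetical).
class SyncedPresentations:
    def __init__(self, players, controller):
        self.players = players          # e.g., {"user1": service_a_player, "user2": service_b_player}
        self.controller = controller    # the user currently allowed to issue commands

    def handle_command(self, user, command, *args):
        if user != self.controller:
            return False                # control is disabled for this user (operation 2704)
        for player in self.players.values():
            getattr(player, command)(*args)   # e.g., pause() or seek(120.0)
        return True

class FakePlayer:
    def __init__(self, name):
        self.name = name
    def pause(self):
        print(f"{self.name}: pause")
    def seek(self, t):
        print(f"{self.name}: seek to {t}s")

group = SyncedPresentations({"user1": FakePlayer("service-a"), "user2": FakePlayer("service-b")},
                            controller="user1")
group.handle_command("user1", "seek", 120.0)   # applied to both presentations (operation 2708)
group.handle_command("user2", "pause")         # ignored: user2 does not currently have control
```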
- Although the present invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the invention is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present invention contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
Claims (20)
1. A method of creating and maintaining a database of annotations received during presentations of a content item via at least first and second content delivery services, the method being implemented by a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising:
receiving, at the computer system, a first annotation corresponding to a time at which a first portion of a first content item is presented via a first content delivery service, the presentation via the first content delivery service corresponding to a first presentation that includes the first content item;
receiving, at the computer system, a second annotation corresponding to a time at which the first portion of the first content item is presented via a second content delivery service, the presentation via the second content delivery service corresponding to a second presentation that includes the first content item;
initiating, by the computer system, storage of the first annotation in association with a first reference time corresponding to the first portion of the first content item;
initiating, by the computer system, storage of the second annotation in association with the first reference time;
receiving, at the computer system, a third annotation corresponding to a time at which a second portion of the first content item is presented; and
initiating, by the computer system, storage of the third annotation in association with a second reference time corresponding to the second portion of the first content item.
2. The method of claim 1 , wherein the first presentation includes the first content item and first auxiliary information, and wherein the second presentation includes the first content item and second auxiliary information, the method further comprising:
mapping, by the computer system, portions of the first content item in the first presentation with corresponding portions of the first content item in the second presentation.
3. The method of claim 2 , wherein at least one of a duration of the first auxiliary information is different than a duration of the second auxiliary information or an order of the first auxiliary information during the first presentation is different than an order of the second auxiliary information during the second presentation.
4. The method of claim 1 , further comprising:
identifying, by the computer system, a set of reference times corresponding to portions of the first content item, wherein at least one of the first presentation or the second presentation is associated with another set of reference times such that the other set of reference times corresponds to at least one of portions of the first presentation or portions of the second presentation; and
identifying, by the computer system, based on the set of reference times, at least one of the first reference time as a reference time for the first annotation, the first reference time as a reference time for the second annotation, or the second reference time as a reference time for the third annotation.
5. The method of claim 4 , wherein a reference time of the set of reference times and a reference time of the other set of reference times correspond to the same one of the portions of the first content item but are different than one another.
6. The method of claim 4 , wherein the set of reference times includes the first reference time, and wherein identifying the first reference time as a reference time for the first annotation comprises:
receiving information regarding a first time, relative to the first presentation, at which the first annotation is initially received at a user device; and
identifying the first reference time as a reference time for the first annotation based on a determination that the first reference time corresponds to the first time.
7. The method of claim 1 , further comprising:
providing, by the computer system, based on the first reference time, at least one of the first annotation or the second annotation such that at least one of the first annotation or the second annotation is presented when the first portion of the first content item is presented during a third presentation of the first content item; and
providing, by the computer system, based on the second reference time, the third annotation such that the third annotation is presented when the second portion of the first content item is presented during the third presentation.
8. The method of claim 1 , further comprising:
identifying, by the computer system, one or more annotations of a first user regarding the first content item in response to an initiation of a third presentation of the first content item by a second user associated with the first user, wherein the one or more annotations include the first annotation; and
providing, by the computer system, the one or more annotations such that the first annotation is presented when the first portion of the first content item is presented during the third presentation, wherein the first annotation is presented based on the first reference time.
9. The method of claim 1 , further comprising:
monitoring, by the computer system, an interaction of a user with at least one of the first presentation or the second presentation;
determining, by the computer system, a characteristic of the first content item based on the interaction; and
generating, by the computer system, a fourth annotation for the first content item based on the characteristic.
10. The method of claim 9 , further comprising:
identifying, by the computer system, based on the interaction, a third reference time for the fourth annotation corresponding to a third portion of the first content item; and
initiating, by the computer system, storage of the fourth annotation in association with the third reference time.
11. A method of creating and maintaining a database of annotations received during presentations of a content item via at least first and second content delivery services, the method being implemented by a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising:
receiving, at the computer system, a first annotation corresponding to a time at which a first portion of a first content item is presented via a first content delivery service, the presentation via the first content delivery service corresponding to a first presentation that includes the first portion of the first content item;
receiving, at the computer system, a second annotation corresponding to a time at which the first portion of the first content item is presented via a second content delivery service, the presentation via the second content delivery service corresponding to a second presentation that includes the first portion of the first content item and a second portion of the first content item;
initiating, by the computer system, storage of the first annotation in association with a first reference time corresponding to the first portion of the first content item;
initiating, by the computer system, storage of the second annotation in association with the first reference time;
receiving, at the computer system, a third annotation corresponding to a time at which the second portion of the first content item is presented via the second content delivery service; and
initiating, by the computer system, storage of the third annotation in association with a second reference time corresponding to the second portion of the first content item.
12. The method of claim 11 , wherein the first presentation does not include the second portion of the first content item.
13. The method of claim 11 , further comprising:
mapping, by the computer system, portions of the first content item in the first presentation with corresponding portions of the first content item in the second presentation such that the first portion of the first content item in the first presentation is mapped to the first portion of the first content item in the second presentation.
14. The method of claim 11 , further comprising:
identifying, by the computer system, a set of reference times corresponding to portions of the first content item, wherein at least one of the first presentation or the second presentation is associated with another set of reference times such that the other set of reference times corresponds to at least one of portions of the first presentation or portions of the second presentation; and
identifying, by the computer system, based on the set of reference times, at least one of the first reference time as a reference time for the first annotation, the first reference time as a reference time for the second annotation, or the second reference time as a reference time for the third annotation.
15. A method of providing at least first and second social networking services with annotations received during a presentation of a content item, the method being implemented by a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising:
initiating, by the computer system, a first presentation of a first content item;
receiving, at the computer system, a first annotation at a time at which a first portion of the first content item is presented during the first presentation;
initiating, by the computer system, storage of the first annotation in association with a first reference time corresponding to the first portion of the first content item;
providing, by the computer system, the first annotation to a first social networking service;
receiving, at the computer system, a second annotation at a time at which a second portion of the first content item is presented during the first presentation;
initiating, by the computer system, storage of the second annotation in association with a second reference time corresponding to the second portion of the first content item; and
providing, by the computer system, the second annotation to a second social networking service.
16. The method of claim 15 , further comprising:
identifying, by the computer system, a set of reference times corresponding to portions of the first content item, wherein the first presentation is associated with another set of reference times that correspond to portions of the first presentation; and
identifying, by the computer system, based on the set of reference times, at least one of the first reference time as a reference time for the first annotation or the second reference time as a reference time for the second annotation.
17. The method of claim 16 , wherein a reference time of the set of reference times and a reference time of the other set of reference times correspond to the same one of the portions of the first content item but are different than one another.
18. The method of claim 16 , wherein the set of reference times includes the first reference time, and wherein identifying the first reference time as a reference time for the first annotation comprises:
receiving, at the computer system, information regarding a first time, relative to the first presentation, at which the first annotation is initially received at a user device; and
identifying, by the computer system, the first reference time as a reference time for the first annotation based on a determination that the first reference time corresponds to the first time.
19. The method of claim 15 , further comprising:
providing, by the computer system, based on the first reference time, the first annotation such that the first annotation is presented when the first portion of the first content item is presented during a third presentation of the first content item; and
providing, by the computer system, based on the second reference time, the second annotation such that the second annotation is presented when the second portion of the first content item is presented during the third presentation.
20. A system for creating and maintaining a database of annotations received during presentations of a content item via at least first and second content delivery services, the system comprising:
one or more physical processors programmed to execute one or more computer program instructions which, when executed, cause the one or more physical processors to:
receive a first annotation corresponding to a time at which a first portion of a first content item is presented via a first content delivery service, the presentation via the first content delivery service corresponding to a first presentation that includes the first content item;
receive a second annotation corresponding to a time at which the first portion of the first content item is presented via a second content delivery service, the presentation via the second content delivery service corresponding to a second presentation that includes the first content item;
initiate storage of the first annotation in association with a first reference time corresponding to the first portion of the first content item;
initiate storage of the second annotation in association with the first reference time;
receive a third annotation corresponding to a time at which a second portion of the first content item is presented; and
initiate storage of the third annotation in association with a second reference time corresponding to the second portion of the first content item.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/195,046 US20140324895A1 (en) | 2013-03-01 | 2014-03-03 | System and method for creating and maintaining a database of annotations corresponding to portions of a content item |
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361771514P | 2013-03-01 | 2013-03-01 | |
US201361771467P | 2013-03-01 | 2013-03-01 | |
US201361771461P | 2013-03-01 | 2013-03-01 | |
US201361771519P | 2013-03-01 | 2013-03-01 | |
US201361794271P | 2013-03-15 | 2013-03-15 | |
US201361794419P | 2013-03-15 | 2013-03-15 | |
US201361794202P | 2013-03-15 | 2013-03-15 | |
US201361794322P | 2013-03-15 | 2013-03-15 | |
US201361819941P | 2013-05-06 | 2013-05-06 | |
US14/195,046 US20140324895A1 (en) | 2013-03-01 | 2014-03-03 | System and method for creating and maintaining a database of annotations corresponding to portions of a content item |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140324895A1 true US20140324895A1 (en) | 2014-10-30 |
Family ID=51428885
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/195,046 Abandoned US20140324895A1 (en) | 2013-03-01 | 2014-03-03 | System and method for creating and maintaining a database of annotations corresponding to portions of a content item |
US14/195,336 Expired - Fee Related US9268866B2 (en) | 2013-03-01 | 2014-03-03 | System and method for providing rewards based on annotations |
US14/195,167 Abandoned US20140325542A1 (en) | 2013-03-01 | 2014-03-03 | System and method for providing a dataset of annotations corresponding to portions of a content item |
US14/195,403 Abandoned US20140325552A1 (en) | 2013-03-01 | 2014-03-03 | System and method for sharing portions of a content item |
US14/195,486 Abandoned US20140325333A1 (en) | 2013-03-01 | 2014-03-03 | System and method for managing reactions to annotations |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/195,336 Expired - Fee Related US9268866B2 (en) | 2013-03-01 | 2014-03-03 | System and method for providing rewards based on annotations |
US14/195,167 Abandoned US20140325542A1 (en) | 2013-03-01 | 2014-03-03 | System and method for providing a dataset of annotations corresponding to portions of a content item |
US14/195,403 Abandoned US20140325552A1 (en) | 2013-03-01 | 2014-03-03 | System and method for sharing portions of a content item |
US14/195,486 Abandoned US20140325333A1 (en) | 2013-03-01 | 2014-03-03 | System and method for managing reactions to annotations |
Country Status (2)
Country | Link |
---|---|
US (5) | US20140324895A1 (en) |
WO (1) | WO2014134603A1 (en) |
US8839306B2 (en) * | 2009-11-20 | 2014-09-16 | At&T Intellectual Property I, Lp | Method and apparatus for presenting media programs |
US20120151345A1 (en) | 2010-12-10 | 2012-06-14 | Mcclements Iv James Burns | Recognition lookups for synchronization of media playback with comment creation and delivery |
US9153000B2 (en) * | 2010-12-13 | 2015-10-06 | Microsoft Technology Licensing, Llc | Presenting content items shared within social networks |
US8737820B2 (en) * | 2011-06-17 | 2014-05-27 | Snapone, Inc. | Systems and methods for recording content within digital video |
US8744237B2 (en) * | 2011-06-20 | 2014-06-03 | Microsoft Corporation | Providing video presentation commentary |
US20130290084A1 (en) * | 2012-04-28 | 2013-10-31 | Shmuel Ur | Social network advertising |
US20140122601A1 (en) | 2012-10-26 | 2014-05-01 | Milyoni, Inc. | API translator for providing a uniform interface for users using a variety of media players
US20140325557A1 (en) | 2013-03-01 | 2014-10-30 | GoPop.TV, Inc. | System and method for providing annotations received during presentations of a content item
US20140324895A1 (en) | 2013-03-01 | 2014-10-30 | GoPop.TV, Inc. | System and method for creating and maintaining a database of annotations corresponding to portions of a content item |
2014
- 2014-03-03 US US14/195,046 patent/US20140324895A1/en not_active Abandoned
- 2014-03-03 US US14/195,336 patent/US9268866B2/en not_active Expired - Fee Related
- 2014-03-03 US US14/195,167 patent/US20140325542A1/en not_active Abandoned
- 2014-03-03 US US14/195,403 patent/US20140325552A1/en not_active Abandoned
- 2014-03-03 WO PCT/US2014/019918 patent/WO2014134603A1/en active Application Filing
- 2014-03-03 US US14/195,486 patent/US20140325333A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US20140325552A1 (en) | 2014-10-30 |
US20140325542A1 (en) | 2014-10-30 |
WO2014134603A1 (en) | 2014-09-04 |
US20140325333A1 (en) | 2014-10-30 |
US20140325543A1 (en) | 2014-10-30 |
US9268866B2 (en) | 2016-02-23 |
Similar Documents
Publication | Title |
---|---|
US9268866B2 (en) | System and method for providing rewards based on annotations |
US20140325557A1 (en) | System and method for providing annotations received during presentations of a content item |
US11991257B2 (en) | Systems and methods for resolving ambiguous terms based on media asset chronology |
US20210274007A1 (en) | Methods and systems for recommending media content |
US20220365994A1 (en) | Systems and methods for updating links between keywords associated with a trending topic |
JP2022185066A (en) | System and method for identifying and storing portion of media asset |
CN110168541B (en) | System and method for eliminating word ambiguity based on static and time knowledge graph |
US20160227283A1 (en) | Systems and methods for providing a recommendation to a user based on a user profile and social chatter |
US20150379132A1 (en) | Systems and methods for providing context-specific media assets |
US11617020B2 (en) | Systems and methods for enabling and monitoring content creation while consuming a live video |
US20140379456A1 (en) | Methods and systems for determining impact of an advertisement |
US20220141535A1 (en) | Systems and methods for dynamically enabling and disabling a biometric device |
US20150244972A1 (en) | Methods and systems for determining lengths of time for retaining media assets |
US20150347357A1 (en) | Systems and methods for automatic text recognition and linking |
WO2016123188A1 (en) | Systems and methods for providing a recommendation to a user based on a user profile |
US10776824B2 (en) | Systems and methods for recommending electronic devices based on user purchase habits |
US20170323348A1 (en) | Method, apparatus, and computer-readable medium for content delivery |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: GOPOP.TV, INC., VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EVANS, EUGENE;SMALL, JONATHAN;MARSH, DAVID;AND OTHERS;REEL/FRAME:034977/0082 Effective date: 20140717 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |