
US20150293995A1 - Systems and Methods for Performing Multi-Modal Video Search - Google Patents

Systems and Methods for Performing Multi-Modal Video Search

Info

Publication number
US20150293995A1
Authority
US
United States
Prior art keywords
video
search engine
video segments
keywords
segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/325,191
Inventor
David Mo Chen
Huizhong Chen
Maryam Daneshi
Andre Filgueiras de Araujo
Bernd Girod
Shanghsuan Tsai
Peter Vajda
Matthew Chuck-Jun Yu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Leland Stanford Junior University
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US14/325,191
Assigned to THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY reassignment THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GIROD, BERND, CHEN, HUIZHONG, DE ARAUJO, ANDRE FILGUEIRAS, DANESHI, MARYAM, CHEN, David Mo, TSAI, SHANGHSUAN, VAJDA, PETER, YU, MATTHEW CHUCK-JUN
Publication of US20150293995A1
Status: Abandoned (current)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • G06F17/30817
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • G06F16/24578Query processing with adaptation to user needs using ranking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31Indexing; Data structures therefor; Storage structures
    • G06F16/316Indexing structures
    • G06F16/319Inverted lists
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/435Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/438Presentation of query results
    • G06F16/4387Presentation of query results by the use of playlists
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • G06F17/30268
    • G06F17/3053
    • G06F17/30622
    • G06F17/30864
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/222Secondary servers, e.g. proxy server, cable television Head-end
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/237Communication with additional data server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26258Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2665Gathering content from different sources, e.g. Internet and satellite
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring

Definitions

  • the present invention relates generally to video distribution systems and more specifically to generation of video recommendations based upon user preferences.
  • News aggregation sites such as the Google News service provided by Google, Inc. of Mountain View, Calif. and the Yahoo News service provided by Yahoo, Inc. of Sunnyvale, Calif. have garnered significant attention in recent years. These services provide a user interface via which users can customize the types of news stories they want to read. Furthermore, the sites can progressively learn each user's preferences from their reading history to improve future selections.
  • the term video content references video information, but is typically utilized to encompass a combination of video, audio, and text data.
  • video content can also include and/or reference sources of metadata. While video news has traditionally been broadcast over-the-air or transmitted via cable networks, video content is increasingly being distributed via the Internet. Therefore, video news stories can be obtained from a variety of sources.
  • Next-generation media consumption is likely to be more personalized, device agnostic, and pooled from many different sources.
  • Systems and methods in accordance with embodiments of the invention can provide users with personalized video content feeds providing the video content that matters most to them.
  • a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream.
  • video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. The additional information obtained by linking a video segment to an additional source of data, such as an online article, can be utilized in the generation of personalized video playlists for one or more users.
  • the personalized video playlists are utilized to playback video segments via a television, personal computer, tablet computer, and/or mobile device such as (but not limited to) a smartphone, or a media player.
  • viewing histories and user interactions can be utilized to continuously optimize the personalization.
  • the dynamic mixing and aggregation of news videos from multiple sources can greatly enrich the news watching experience by providing more comprehensive coverage and varying perspectives.
  • processes for linking video segments to additional sources of data can be implemented as part of a video search engine service that constructs indexes including inverted indexes relating keywords to video segments to facilitate the retrieval of video segments relevant to a search query.
  • One embodiment includes a video search engine server system, including: at least one processor; and memory containing an indexing application and a search engine application.
  • the indexing application configures at least one processor to: identify a set of video segments; extract text data from a selected video segment in the set of video segments and use keywords from the extracted text data to identify candidate sources of relevant data based upon keywords contained within the candidate sources of relevant data; and identify images from the candidate sources of relevant data, where at least a portion of the image matches at least a portion of a frame of video from within the selected video segment; identify additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, and the identified images; and generate an inverted index of video segments in the set of video segments that are relevant to specific keywords using the extracted keywords and the keywords contained within the additional sources of relevant data.
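  • As a loose illustration of the inverted index described above (a minimal sketch, assuming keywords have already been pooled from each segment's extracted text and its linked additional sources; the identifiers and helper names are invented for this example):

```python
from collections import defaultdict

def build_inverted_index(segment_keywords):
    """Map each keyword to the set of video segments in which it occurs.

    segment_keywords: dict of segment_id -> iterable of keywords drawn from
    the segment's extracted text data and from linked additional sources.
    """
    index = defaultdict(set)
    for segment_id, keywords in segment_keywords.items():
        for keyword in keywords:
            index[keyword.lower()].add(segment_id)
    return index

# Toy example: keywords pooled from closed captions plus a linked article.
index = build_inverted_index({
    "seg-001": ["election", "senate", "debate"],
    "seg-002": ["senate", "budget"],
})
```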
  • the search engine application configures at least one processor to: receive a search query; identify video segments from the set of video segments that are relevant to the search query using the inverted index; score the relevancy of the identified video segments to the search query; and generate search results identifying at least one video segment relevant to the search query.
  • the search query is a text string
  • the search engine application configures at least one processor to extract query keywords from the text string and identify relevant video segments using the inverted index based upon the extracted query keywords.
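  • A correspondingly simple sketch of this text-query path (not the claimed implementation; here relevancy is approximated by counting matched query keywords, whereas the patent describes richer scoring):

```python
def search(index, query, top_k=10):
    """Look up query keywords in the inverted index and rank the candidate
    segments by how many query keywords each one matches."""
    scores = {}
    for keyword in (w.lower() for w in query.split()):
        for segment_id in index.get(keyword, ()):
            scores[segment_id] = scores.get(segment_id, 0) + 1
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_k]

# With a toy index of the kind built in the previous sketch:
print(search({"senate": {"seg-001", "seg-002"}, "debate": {"seg-001"}},
             "senate debate"))
```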
  • the search query is an image
  • the search engine application configures at least one processor to identify frames from video segments in the set of video segments, where at least a portion of the image matches at least a portion of a frame of video from within a given video segment from the set of video segments.
  • the search engine application configures at least one processor to: identify keywords relevant to the image based upon the keywords that are relevant to video segments from the set of video segments that include at least one frame in which at least a portion of the image matches at least a portion of the frame; and identify relevant video segments using the inverted index based upon the keywords identified as relevant to the image.
  • the search query is a video segment; and the search engine application configures at least one processor to identify frames from video segments in the set of video segments, where at least a portion of a frame from the query video segment matches at least a portion of a frame of video from within a given video segment from the set of video segments.
  • the search engine application configures at least one processor to: identify keywords relevant to the query video segment based upon the keywords that are relevant to video segments from the set of video segments that include at least one frame in which at least a portion of the frame matches at least a portion of a frame from the query video segment; and identify relevant video segments using the inverted index based upon the keywords identified as relevant to the query video segment.
  • the indexing application further configures at least one processor to identify additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, the identified images, and timestamps associated with the selected video segment and the candidate sources of relevant data.
  • the indexing application further configures at least one processor to use keywords from the extracted text data to identify candidate sources of relevant data based upon keywords contained within the candidate sources of relevant data based upon bag-of-words histogram comparisons that enable matching of text segments from the extracted text data with similar distributions of words in a candidate source of relevant data.
  • the indexing application further configures at least one processor to calculate a term frequency-inverse document frequency (tf-idf) histogram intersection score S(H_a, H_b), where H_a(w) and H_b(w) are the L1-normalized histograms of the words in the two sets of words and {f(w)} is the set of estimated relative word frequencies.
  • the indexing application further configures at least one processor to determine that a candidate source of relevant data is an additional source of relevant data when the tf-idf histogram intersection score S(H_a, H_b) exceeds a predetermined threshold.
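  • The equation for the score does not survive in this text; a standard idf-weighted histogram intersection consistent with the definitions above (an assumed reconstruction, not necessarily the patent's exact form) is:

```latex
S(H_a, H_b) = \sum_{w} \min\bigl(H_a(w),\, H_b(w)\bigr)\,\log\frac{1}{f(w)}
```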
  • the indexing application further configures at least one processor to: identify named entities within the text data extracted from the selected video segment; and determine that a candidate source of relevant data is an additional source of relevant data when a predetermined number of named entities are present within both the candidate source of relevant data and the text data extracted from the selected video segment.
  • the indexing application further configures at least one processor to identify additional named entities by performing object recognition.
  • the indexing application further configures at least one processor to identify candidate sources of relevant data by providing at least some of the keywords extracted from the selected video segment to a search engine.
  • the indexing application further configures at least one processor to identify a title from text extracted from at least one frame of video from a selected video segment and to identify candidate sources of relevant data, where the keyword provided to the search engine is the extracted title.
  • the indexing application further configures at least one processor to identify at least a portion of an image from a candidate source of relevant data that matches at least a portion of a frame of video from within the selected video segment by determining that a given frame of video contains a region that includes a geometrically and photometrically distorted version of a portion of an image obtained from the candidate source of relevant data.
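  • One common way to test for this kind of geometrically and photometrically distorted partial match is local-feature matching followed by a RANSAC geometric check. The sketch below uses OpenCV's ORB features purely for illustration; the patent does not prescribe a particular feature pipeline, and the thresholds are assumptions:

```python
import cv2
import numpy as np

def frame_matches_image(frame_gray, article_image_gray, min_inliers=15):
    """Return True if (a portion of) the article image appears in the video
    frame under a geometric/photometric distortion."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(article_image_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < min_inliers:
        return False
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return inlier_mask is not None and int(inlier_mask.sum()) >= min_inliers
```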
  • the indexing application configures at least one processor to identify relationships between individual video segments in the set of video segments
  • the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of related video segments in the set of video segments.
  • timestamps are associated with the video segments in the set of video segments
  • the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of related video segments in the set of video segments with associated timestamps that are within a predetermined time period.
  • the indexing application configures at least one processor to identify whether video segments are related based upon keywords associated with the video segments.
  • the indexing application configures at least one processor to calculate the term frequency-inverse document frequency (tf-idf) histogram intersection score S(H_a, H_b) described above for the keywords associated with the two video segments, where H_a(w) and H_b(w) are the L1-normalized histograms of the two keyword sets and {f(w)} is the set of estimated relative word frequencies.
  • the indexing application configures at least one processor to determine that a first video segment is related to a second video segment when the term frequency-inverse document frequency (tf-idf) histogram intersection score exceeds a first threshold and the number of named entities associated with each of the video segments exceeds a second threshold.
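  • A compact sketch of that two-threshold relatedness test (the thresholds, and the reading of the named-entity condition as requiring shared entities, are illustrative assumptions):

```python
def segments_related(keywords_a, keywords_b, entities_a, entities_b,
                     intersection_score, score_threshold=0.2, entity_threshold=2):
    """Declare two segments related when their keyword-histogram intersection
    score clears one threshold and they share enough named entities."""
    shared_entities = set(entities_a) & set(entities_b)
    return (intersection_score(keywords_a, keywords_b) > score_threshold
            and len(shared_entities) >= entity_threshold)
```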
  • the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of common keywords.
  • the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the frequency of the common keywords with respect to the specific video segment.
  • the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of images from the search query, where at least a portion of the image matches at least a portion of a frame from the specific video segment.
  • the search engine application configures at least one processor to weight the relevancy score of a specific video segment based upon user preferences.
  • the search results also include links to additional sources of relevant data that are relevant to the relevant video segments identified in the search results.
  • An embodiment of the method of the invention includes: identifying a set of video segments using a video search engine server system; extracting text data from a selected video segment in the set of video segments using the video search engine server system; identifying candidate sources of relevant data using the video search engine server system based upon keywords contained within the candidate sources of relevant data and keywords from the extracted text data; and identifying images from the candidate sources of relevant data using the video search engine server system, where at least a portion of the image matches at least a portion of a frame of video from within the selected video segment; identifying additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, and the identified images; and generating an inverted index of video segments in the set of video segments that are relevant to specific keywords using the video search engine server system based upon the extracted keywords and the keywords contained within the additional sources of relevant data; receiving a search query using the video search engine server system; identifying video segments from the set of video segments that are relevant to the search query using the video search engine server system based upon the inverted index; scoring the relevancy of the identified video segments to the search query; and generating search results identifying at least one video segment relevant to the search query.
  • FIG. 1 is a flow chart that conceptually illustrates a process for generating a personalized playlist of video segments in accordance with an embodiment of the invention.
  • FIG. 2 is a system diagram that conceptually illustrates a system for generating personalized playlists, distributing video segments to users based upon the personalized playlists, and collecting analytic data based upon user interactions with the video segments during playback in accordance with an embodiment of the invention.
  • FIG. 3 is a flowchart illustrating a process for generating personalized playlists, distributing video segments to users based upon the personalized playlists, and collecting analytic data based upon user interactions with the video segments during playback in accordance with an embodiment of the invention.
  • FIG. 4 is a system diagram that conceptually illustrates a system for recording video segments from cable and over-the-air television broadcasts in accordance with an embodiment of the invention.
  • FIG. 5A is a system diagram that conceptually illustrates a multi-modal video data stream segmentation system in accordance with an embodiment of the invention.
  • FIG. 5B is a flowchart illustrating a process for performing multi-modal segmentation of a video data stream in accordance with an embodiment of the invention.
  • FIG. 6 is a flowchart illustrating a process for detecting text segmentation cues in a video data stream in accordance with an embodiment of the invention.
  • FIG. 7A conceptually illustrates the location of a face within a frame of video as part of a video segmentation process in accordance with an embodiment of the invention.
  • FIG. 7B is a flowchart illustrating a process for detecting an anchor frame segmentation cue in accordance with an embodiment of the invention.
  • FIG. 8A conceptually illustrates the matching of a logo image to content within a frame of video in accordance with an embodiment of the invention.
  • FIGS. 8B and 8C conceptually illustrate the identification of a transition animation segmentation cue in accordance with an embodiment of the invention.
  • FIG. 9 is a flowchart illustrating a process for identifying a logo and/or transition animation segmentation cue in accordance with an embodiment of the invention.
  • FIG. 10 is a system diagram that conceptually illustrates a playlist generation server in accordance with an embodiment of the invention.
  • FIG. 11 conceptually illustrates a process for matching video segments to additional sources of data by matching visual and/or text features of the video segments to relevant additional data sources in accordance with an embodiment of the invention.
  • FIG. 12 is a flowchart that illustrates a process for identifying sources of additional data that are relevant to a video segment using text analysis in accordance with an embodiment of the invention.
  • FIGS. 13A-13D conceptually illustrate extraction of metadata concerning a video segment by detecting and recognizing text contained within frames of the video segment in accordance with embodiments of the invention.
  • FIG. 14 is a flowchart illustrating a process for obtaining metadata concerning a video segment and/or identifying relevant sources of additional data based upon text extracted from one or more frames of video in accordance with an embodiment of the invention.
  • FIG. 15 conceptually illustrates a process for obtaining metadata concerning a video segment by performing face recognition in accordance with an embodiment of the invention.
  • FIG. 16 is a flowchart illustrating a process for obtaining metadata concerning a video segment and/or identifying relevant sources of additional data by performing face recognition in accordance with an embodiment of the invention.
  • FIG. 17 is a flowchart illustrating a process for generating a personalized playlist based upon a set of video segments, user preferences, and/or a user's viewing history in accordance with an embodiment of the invention.
  • FIG. 18 is a flowchart illustrating a process for identifying related video segments in accordance with an embodiment of the invention.
  • FIG. 19 is a system diagram that conceptually illustrates a playback device configured to retrieve a personalized playlist and select video segments for playback utilizing the personalized playlist in accordance with an embodiment of the invention.
  • FIG. 20A conceptually illustrates a user interface generated by a playback device using a personalized playlist in accordance with an embodiment of the invention.
  • FIG. 20B conceptually illustrates a user interface generated by a playback device that enables a user to specify a preferred duration and user preferences with respect to specific categories, sources of video content, and/or keywords in accordance with an embodiment of the invention.
  • FIG. 21A conceptually illustrates a user interface generated by a playback device that employs a gesture based user interface during playback of a video segment in accordance with an embodiment of the invention.
  • FIG. 21B conceptually illustrates a user interface generated by a playback device that employs a gesture based user interface displaying available channels of video segments in accordance with an embodiment of the invention.
  • FIG. 22A conceptually illustrates a “second screen” user interface generated by a playback device that provides information concerning related video segments to a video segment being played back on another playback device in accordance with an embodiment of the invention.
  • FIG. 22B conceptually illustrates a “second screen” user interface generated by a playback device that provides information concerning related video segments to a video segment being played back on another playback device and playback controls that can be utilized by a user to control playback of video segments on another playback device in accordance with an embodiment of the invention.
  • FIG. 23 conceptually illustrates a log file maintained by a playlist generation server based upon user interactions with video segments accessed via a playback device in accordance with an embodiment of the invention.
  • FIG. 24 is a flowchart illustrating a process for generating a summary of video segments by combining portions of video segments based upon the content of the portions of the video segments in accordance with an embodiment of the invention.
  • FIG. 25 is a system diagram that conceptually illustrates a multi-modal video search engine system in accordance with an embodiment of the invention.
  • FIG. 26 is a system diagram that conceptually illustrates a multi-modal video search engine server system in accordance with an embodiment of the invention.
  • FIG. 27 is a flowchart illustrating a process for retrieving video segments relevant to a search query in accordance with an embodiment of the invention.
  • data streams of video content are aggregated from various sources. Relationships are identified between various segments of the video content and/or between segments of the video content and other relevant sources of information including (but not limited to) metadata databases, web pages and/or social media services. Relevant information concerning the video segments can then be utilized to generate personalized playlists of video content based upon each user's viewing history and preferences. Users can then utilize the playlists to playback segments of video content via any of a variety of playback devices.
  • the user interface presented to the user via the playback device and/or via a second screen can display and/or provide users with links to information related to the displayed video segment.
  • Online sources of video content, such as news websites, typically provide video content in individual segments.
  • traditional broadcast sources of video content are typically provided in continuous streams.
  • the process of aggregating video content from various sources can include segmentation of continuous data streams of video content.
  • the streams of video content can be segmented into individual news stories.
  • the streams of video content can be segmented in accordance with other criteria including (but not limited to) commercial breaks, repeated events, slow motion sequences, camera shots, sentences, and/or anchor frames.
  • repeated sequences, slow motion sequences, and shots of the crowd are often indicative of important activity and can be utilized as segmentation boundaries.
  • the segmentation process is a multi-modal segmentation process that detects segmentation cues in video, audio, and/or text data available in the data stream.
  • Multi-modal segmentation processes in accordance with certain embodiments of the invention utilize specific text segmentation cues contained within closed caption text data.
  • specific video segmentation cues such as the recognition of a recurring face (e.g. an anchorperson), and/or recurring logo or logo animation are utilized to assist video segmentation.
  • any of a variety of segmentation techniques can be utilized as appropriate to the requirements of specific applications.
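  • As a rough illustration of how such multi-modal cues might be fused into boundary decisions (the cue names, weights, and threshold below are assumptions for the sketch, not the patent's parameters):

```python
def fuse_segmentation_cues(cue_scores, weights=None, threshold=1.0):
    """Combine per-timestamp cue scores (e.g. closed-caption markers, anchor-face
    reappearance, logo/transition detection) into segment-boundary decisions.

    cue_scores: dict of cue_name -> list of scores, one per candidate timestamp.
    Returns the indices of candidate timestamps declared segment boundaries.
    """
    weights = weights or {name: 1.0 for name in cue_scores}
    num_candidates = len(next(iter(cue_scores.values())))
    boundaries = []
    for t in range(num_candidates):
        fused = sum(weights[name] * scores[t] for name, scores in cue_scores.items())
        if fused >= threshold:
            boundaries.append(t)
    return boundaries

# Toy example: three candidate timestamps, three cue channels.
print(fuse_segmentation_cues({
    "closed_caption": [0.0, 0.9, 0.1],
    "anchor_face":    [0.2, 0.8, 0.0],
    "logo_animation": [0.0, 0.5, 0.0],
}))
```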
  • segments of video content are analyzed to identify links between the segments and other relevant sources of information including (but not limited to) metadata databases, web pages and/or short messages posted via social media services such as the Facebook service provided by Facebook, Inc. of Menlo Park, Calif. and the Twitter service provided by Twitter, Inc. of San Francisco, Calif.
  • a multi-modal search for relevant additional data sources is performed that utilizes textual analysis and visual analysis of the video segments to identify relevant sources of additional data.
  • the textual analysis involves extracting keywords from text data such as closed caption and/or subtitles. The extracted keywords can then be utilized to locate relevant text data.
  • the visual analysis involves recognizing elements within individual frames of video such as (but not limited to) text, faces, images and/or image patterns (e.g. clothing, scene background).
  • visual analysis can also involve object detection and/or detection of specific object events (e.g. gestures or specific object movements).
  • Text and faces of named entities can be extracted as metadata describing the video segment and utilized to locate sources of relevant text data.
  • some or all of a frame of video can be compared to images related to additional sources of data and matching images used to identify relevant sources of additional data.
  • any of a variety of text and/or visual analysis can be performed to identify relevant sources of additional information.
  • a multi-modal video search engine service that creates an index of video segments that are relevant to specific keywords based upon relevant keywords identified through the textual and visual analysis of the video segments.
  • the list of relevant keywords for a particular video segment can be expanded by identifying keywords found in additional sources of data identified through the textual and visual analysis of the video segment.
  • the index can be utilized to generate a list of video segments that are relevant to a text search query.
  • an image, a video segment, and/or a Universal Resource Locator (URL) identifying a data source such as (but not limited to) an image, a video sequence, a web page, and/or an online article can be provided as an input to the search engine (as opposed to a text query) to generate a list of related video segments.
  • any of a variety of multi-modal search engine services can be implemented as appropriate to the requirements of specific applications.
  • a personalized playlist can be constructed by selecting video segments of news stories that provide the greatest coverage of the stories taking into consideration an individual user's preferences concerning factors such as (but not limited to) content source, content category, anchorperson and/or any other factors appropriate to specific applications.
  • embodiments of the invention utilize an integer linear programming optimization or a suitable approximate solution that employs an objective function that weighs both content coverage and user preferences in the generation of a personalized playlist.
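  • A rough sketch of what such an objective could look like (the notation, the mixing weight α, and the per-segment scores are assumptions, not taken from the patent): with binary selection variables x_i over N candidate segments, a content-coverage score c_i, a user-preference score p_i, a duration d_i, and a user-specified duration budget D,

```latex
\max_{x \in \{0,1\}^{N}} \; \sum_{i=1}^{N} \bigl( \alpha\, c_i + (1-\alpha)\, p_i \bigr) x_i
\qquad \text{s.t.} \qquad \sum_{i=1}^{N} d_i\, x_i \le D
```

  • A greedy, knapsack-style heuristic over this objective would be one example of the "suitable approximate solution" mentioned above.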
  • any of a variety of techniques for recommending video segments can be utilized in accordance with embodiments of the invention including (but not limited to) processes that generate playlists using video segments that do not contain cumulative content.
  • Playlist generation systems in accordance with embodiments of the invention perform multi-modal analysis of video segments to generate personalized playlists based upon factors including (but not limited to) a user's preferences, and/or viewing history. In a number of embodiments, the user's preferences can touch upon topic, content provider, and total playlist duration.
  • a playlist generation system configured to generate personalized playlists of news stories in accordance with an embodiment of the invention is conceptually illustrated in FIG. 1 .
  • the playlist generation system 100 obtains video data streams and video segments from a variety of sources including (but not limited to) over-the-air broadcasts and cable television transmissions ( 102 ), online news websites ( 104 ), and social media services ( 106 ).
  • continuous data streams such as (but not limited to) over-the-air broadcasts and cable television transmissions ( 102 ) are segmented and the video segments stored for later retrieval.
  • a multi-modal segmentation process is utilized that considers a variety of video, audio, and/or text cues in the determination of segmentation boundaries.
  • the system only sources previously segmented video.
  • any of a variety of segmentation processes can be utilized as appropriate to the requirements of specific applications. Segmentation processes that are utilized by various playlist generation systems in accordance with embodiments of the invention are described further below.
  • the playlist generation system 100 analyzes and indexes ( 108 ) the video segments.
  • a multi-modal process that performs textual and visual analysis is utilized to analyze and index the video segments.
  • the multi-modal process identifies keywords from text sources within the video segment including (but not limited to) closed caption and subtitles. Keywords can also be extracted based upon text recognition and object recognition.
  • various object recognition processes are utilized including facial recognition processes to identify named entities.
  • the set of keywords associated with a video segment can then be utilized to identify additional sources of data. Examples of additional sources of data include (but are not limited to) online articles and websites, and posting to social media services.
  • comparisons can be performed between frames of a video segment and images associated with additional sources of data as an additional modality for determining the extent of the relevance of an additional source of data.
  • any of a variety of analysis and indexing processes can be utilized as appropriate to the requirements of specific applications. Analysis and indexing processes that are utilized by various playlist generation systems in accordance with embodiments of the invention are discussed further below.
  • the indexed video segments can be utilized by the playlist generation system 100 to generate personalized playlists ( 110 ). Any of a variety of processes can be utilized to generate personalized playlists in accordance with embodiments of the invention. Several particularly effective processes for generating personalized playlists are described below. A number of embodiments are directed toward the generation of playlists in the context of news stories and select video segments that provide the greatest coverage of recent news stories in a manner that is informed by user preferences. In several embodiments, the selection process is further constrained by the need to generate a playlist having a playback duration that does not exceed a duration specified by the user.
  • Personalized playlists can be provided by the playlist generation system to playback devices.
  • the playlist can take the form of JSON playlist metadata.
  • any of a variety of data transfer techniques can be utilized including the creation of a top level index file such as (but not limited to) a SMIL file, or an MPEG-DASH file.
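  • For concreteness, a hypothetical shape for such JSON playlist metadata, written here as a Python structure (every field name and URL is invented for illustration; the patent does not specify a schema):

```python
import json

playlist = {
    "user_id": "u-123",
    "generated_at": "2014-07-07T12:00:00Z",
    "segments": [
        {
            "segment_id": "seg-001",
            "title": "Senate budget debate",
            "source": "Evening News",
            "duration_s": 95,
            "stream_url": "https://cdn.example.com/seg-001/index.m3u8",
            "related_articles": ["https://news.example.com/budget-vote"],
        },
    ],
}
print(json.dumps(playlist, indent=2))
```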
  • Client applications on playback devices can generate a user interface ( 112 ) that enables the user to obtain and playback the video segments identified within the playlist. In many instances, the user may simply enable the playback device to continuously play through the playlist.
  • the user interface provides the user with the ability to select video segments and express sentiment toward video segments (e.g. like or dislike).
  • the playlist generation system 100 logs user interactions via the user interface and uses the interactions to infer user preferences. In this way, the system can learn over time information about a user's preferences including (but not limited to) preferred content categories, content services, and/or anchorpeople.
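  • One simple way to fold logged interactions back into stored preferences is an exponential-moving-average update, sketched below; the reward definition and learning rate are assumptions, and this is not presented as the patent's learning method:

```python
def update_preferences(preferences, interaction, learning_rate=0.1):
    """Nudge a per-category preference weight toward observed behaviour.

    interaction: dict with the watched segment's 'category' and a 'reward'
    in [0, 1] (e.g. fraction of the segment watched, or 1.0 for a 'like').
    """
    category = interaction["category"]
    old = preferences.get(category, 0.5)
    preferences[category] = (1 - learning_rate) * old + learning_rate * interaction["reward"]
    return preferences

print(update_preferences({"politics": 0.5},
                         {"category": "politics", "reward": 0.9}))
```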
  • playback devices can generate a so-called “second screen” user interface that can enable control of playback of a playlist on another playback device and/or provide information that complements a video segment and/or playlist being played back by another playback device.
  • the specific user interface generated by a playback device is typically only limited by the capabilities of the playback device and the requirements of a specific application.
  • although specific playlist generation systems are described above with reference to FIG. 1, any of a variety of playlist generation systems that produce playlists of video segments from multiple sources that are personalized based upon the preferences of individual users can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • Personalized video distribution systems that utilize personalized playlists in the distribution of video content in accordance with various embodiments of the invention are discussed further below.
  • the video distribution system 200 includes a playlist generation server system 202 that is configured to index video segments accessible via a content storage system 204 , a content distribution network 206 , web server systems 208 and/or social media server systems 210 , 214 .
  • the content storage system 204 contains video segments generated by a video segmentation system 212 that can segment and transcode continuous video data streams obtained from sources including (but not limited to) over-the-air broadcasts and cable television transmissions.
  • Various processes that can be utilized to perform segmentation of continuous data streams in accordance with embodiments of the invention are discussed below.
  • Playlist generation server systems 202 in accordance with many embodiments of the invention utilize multi-modal analysis of video segments to identify additional relevant sources of data accessible via the content storage system 204 , a content distribution network 206 , a web server system 208 and/or a social media server system 210 .
  • the playlist generation server system 202 annotates video segments with metadata extracted from the video segment and/or from additional sources of relevant data.
  • the metadata describing the video segments can be stored in a database 216 and utilized to generate personalized playlists based upon user preferences that can also be stored in the database.
  • Playback client applications installed on a variety of playback devices 218 can be utilized to request personalized playlists from a playlist generation server system 202 via a network 220 such as (but not limited to) the Internet.
  • the playback client applications can configure the playback devices 218 to display a user interface that enables a user to view and interact with the video segments identified in the user's personalized playlist.
  • the playlist generation server system and the playback devices can support multi-screen user interfaces.
  • a first playback device can be utilized to playback video segments identified in the playlist and a second playback device can be utilized to provide a “second screen” user interface enabling control of playback of video segments on the first playback device and/or additional information concerning the video segments and/or playlist being played back on the first playback device.
  • the playback devices 218 are personal computers and mobile phones.
  • playback client applications can be created for any of a variety of playback devices including (but not limited to) network connected consumer electronics devices such as televisions, game consoles, and media players, tablet computers, and/or any other class of device that is typically utilized to view video content obtained via a network connection.
  • the process 300 includes crawling ( 302 ) the websites of video content sources to identify new video segments.
  • the process of identifying new video segments also includes aggregating video data from a variety of sources including (but not limited to) over-the-air broadcasts and cable television transmissions.
  • the aggregated video data may benefit from segmentation ( 304 ).
  • the result of the crawling and/or aggregation of video data is typically a list of video segments that can be recommended to a given user.
  • the process 300 seeks to annotate the video segments with metadata describing the content of the segments.
  • a video segment linking process ( 306 ) is performed that seeks to identify additional sources of relevant data that describe the content of the video segment.
  • the video segment linking process ( 306 ) also seeks to identify relationships between video segments.
  • knowledge concerning the relationship between video segments can be useful in identifying video segments that contain cumulative content and can be excluded from a playlist without significant loss of information or content coverage.
  • Information concerning the number of related stories can also provide an indication of the importance of the story.
  • Metadata describing a set of video segments can be utilized to generate ( 308 ) personalized playlists for one or more users.
  • a variety of processes can be utilized in the generation of a personalized playlist based upon the metadata generated by process 300 .
  • a number of embodiments utilize an integer linear programming optimization and/or an approximation of an integer linear programming optimization that employs an objective function that weighs both content coverage including (but not limited to) measured trending topics (e.g. breaking news, or popular stories) and user preferences in the generation of a personalized playlist.
  • any of a variety of processes for recommending video segments can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • video segments are streamed to playback devices.
  • Many of the standards that exist for encoding video specify profiles and playback devices are typically constructed in a manner that enables playback of content encoded in accordance with one or more of the profiles specified within the standard. The same profile may not, however, be suitable or desirable for playing back content on different classes of playback device.
  • mobile devices are typically unable to support playback of profiles designed for home theaters.
  • a network connected television may be capable of playing back content encoded in accordance with a mobile profile.
  • playback quality may be significantly reduced relative to the quality achieved with a profile that demands the resources that are typically available in a home theater setting.
  • processes for generating personalized video playlists in accordance with many embodiments of the invention involve transcoding video segments into formats and/or profiles suitable for different classes of device.
  • the transcoding of media into target profiles can be performed in parallel with the processes utilized to perform video segment linking ( 306 ) and personalized playlist generation ( 308 ).
  • personalized playlists can be utilized by playback devices to obtain ( 312 ) and playback video segments identified within the playlists.
  • the video segments are streamed to the playback device and any of a variety of streaming technologies can be utilized including any of the common progressive playback or adaptive bitrate streaming protocols utilized to stream video content over a network.
  • a playback device can download the video segments using a personalized video playlist for disconnected (or connected) playback.
  • the personalized playlists are generated based upon user preferences. Therefore, the process of generating personalized playlists can be continuously improved by collecting information concerning user interactions with video segments identified in a personalized playlist. The interactions can be indicative of implicit user preferences and may be utilized to update explicit user preferences obtained from the user.
  • computers and television tuners are utilized to continually record media content from over-the-air broadcasts and cable television transmissions.
  • the recorded programs can include national morning and evening news programs (e.g., TODAY Show, ABC World News), investigative journalism (e.g., 60 Minutes), and late-night talk shows (e.g., The Tonight Show).
  • the closed caption (CC) and/or any subtitles and metadata that may be available within the broadcast data stream are recorded along with the media content for use in subsequent processing of the recorded media content.
  • content sources appropriate to the requirements of specific applications can be recorded.
  • segmentation is performed in real-time prior to storage.
  • the video data streams are recorded and segmentation is performed on the recorded data streams.
  • a video segmentation system configured to aggregate and segment over-the-air broadcasts and cable television transmissions in accordance with an embodiment of the invention is illustrated in FIG. 4.
  • the video segmentation system 400 receives video data stream inputs 402 from over-the-air broadcasts and cable television transmissions.
  • the video segmentation system 400 uses a signal splitter 404 to split and amplify a signal received via a cable television service.
  • the signal is split into a number of inputs that are provided to a set of tuners 408 that possess the capability to demodulate a digital television signal from the cable television transmission and record the data stream to a storage device.
  • the tuners are controlled by a server based upon program guide information.
  • the server can utilize the program guide information to identify desired content and can control the tuners 408 to tune to the appropriate channel at the appropriate time to commence recording of the content.
  • the tuners 408 connect to a central storage system 410 via a high bandwidth digital switch 412 .
  • the data streams are recorded to the central storage system 410 and then a video segmentation server system 414 can commence the process of segmenting the data stream into discrete video segments.
  • tuner boxes 416 are utilized to tune to and demodulate digital television signals that are provided via a network 418 to the video segmentation server system 414 for segmentation.
  • the video segmentation server system records the over-the-air data streams to the central storage system 410 and then processes the recorded data streams.
  • the video segmentation server 414 system performs video segmentation in real-time and the video segments are recorded to the central storage system 410 .
  • local machines 420 can be utilized to administer the aggregation and segmentation of video and/or view video segments.
  • Video segmentation server systems and multi-modal segmentation processes that can be utilized in the segmentation of video data streams in accordance with embodiments of the invention are discussed further below.
  • video segmentation systems in accordance with many embodiments of the invention can utilize a variety of cues to reliably segment content.
  • the sources of information concerning the structure of the content include (but are not limited to) image data in the form of frames of video, audio data in the form of time synchronized audio tracks, text data in the form of closed caption and/or subtitles, and/or additional sources of video, audio, and/or text information indicated by metadata contained within the data stream (e.g. in a time synchronized metadata track).
  • the term structure can often be used to describe a common progression of content within a data stream. For example, many data streams include content interrupted by advertising.
  • video segmentation is to use information concerning the structure of content to divide a continuous video data stream into logical video segments such as (but not limited to) discrete news stories.
  • video segmentation is performed using multi-modal fusion of a variety of visual, auditory and textual cues. By combining cues from different types of data contained within the data stream, the segmentation process has a greater likelihood of correctly identifying structure within the content indicative of logical boundaries between video segments.
  • the multi-modal video segmentation server system 500 includes a processor 510 in communication with volatile memory 520 , non-volatile memory 530 , and a network interface 540 .
  • the non-volatile memory includes a video segmentation application 532 that configures the processor 510 to identify video segmentation boundaries in a video data stream 524 retrieved via the network interface 540 .
  • the segmentation boundaries are utilized to generate video segmentation metadata 526 that can be utilized in the subsequent transcoding of the video data into one or more target video profiles for distribution to playback devices.
  • the term processor is used with respect to all of the processing systems described herein to refer to a single processor, multiple processors, and/or a combination of one or more general purpose processors and one or more graphics coprocessors or graphics processing units (GPUs).
  • the term memory is used to refer to one or more memory components that may be housed within separate computing devices. Multi-modal video segmentation processes that can be performed by multi-modal video segmentation server systems in accordance with embodiments of the invention are described in detail below.
  • Multi-modal video segmentation processes can utilize a variety of different types of data contained within a video data stream to identify cues indicative of the structure of the data stream.
  • a multi-modal video segmentation process that utilizes textual, audio and visual cues to identify segmentation boundaries in accordance with an embodiment of the invention is conceptually illustrated in FIG. 5B .
  • the process 550 involves detecting textual cues ( 552 ), audio cues ( 554 ), and visual cues ( 555 ). The detected cues and their associated timestamps are then fused to identify segmentation boundaries.
  • machine learning techniques can be utilized to train a system to identify segmentation boundaries based upon a fused stream of segmentation cues.
  • a supervised learning approach utilizing techniques such as (but not limited to) a support vector machine, a neural network classifier, and/or a decision tree classifier is utilized to implement a classifier that can identify segmentation boundaries based upon a training data set of video streams in which segmentation boundaries are manually identified.
  • any of a variety of techniques including but not limited to supervised and unsupervised machine learning techniques can be utilized to implement systems for identifying segmentation boundaries based upon multi-modal segmentation cues in accordance with embodiments of the invention.
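  • As a non-limiting illustration of the supervised approach described above, the following Python sketch trains a support vector machine on fused cue vectors; the feature layout (one score per cue type for each candidate timestamp), the use of scikit-learn, and the placeholder data are assumptions made for illustration rather than the specific implementation of any embodiment.

```python
# Minimal sketch: train an SVM to classify candidate timestamps as
# segmentation boundaries from fused multi-modal cue scores.
# Assumes each row of X is [cc_marker_score, transition_phrase_score,
# anchor_frame_score, logo_score, dark_frame_score, pause_score] for a
# candidate timestamp, and y marks manually labeled boundaries.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 6))                       # placeholder fused cue vectors
y = (X[:, :3].sum(axis=1) > 1.8).astype(int)   # placeholder boundary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Probability that a new fused cue vector corresponds to a boundary.
candidate = np.array([[0.9, 0.8, 0.7, 0.1, 0.0, 0.6]])
print(clf.predict_proba(candidate)[0, 1])
```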
  • the various textual, visual and audio cues that can be utilized in processes similar to those described above with reference to FIG. 5B are discussed further below.
  • segmentation analysis of closed caption data can be enhanced by looking for additional cues including (but not limited to) commonly used transition phrases that occur at segmentation boundaries.
  • string searches are performed within closed caption textual data and all >>> markers and transition phrases are identified as potential segmentation boundaries.
  • the list of transition phrases includes "Now, we turn to . . . " and "Stephanie Gross, NBC News, Seattle".
  • any of a variety of text tags and/or phrases can be utilized as textual segmentation cues as appropriate to the requirements of specific applications.
  • automatic speech recognition can be performed with respect to the audio track and the timestamps of the audio track used to align the audio track textual data output by the automatic speech recognition process with text in the accompanying closed caption textual data.
  • the text data output by the automatic speech recognition process can also be analyzed to detect the presence of transition phrases.
  • the uncertainty in the time alignment between the closed caption text and the video content can be accommodated by the multi-modal segmentation process and a separate time alignment process is not required.
  • the process 600 includes extracting closed caption textual data ( 602 ) and performing automatic speech recognition ( 604 ). These processes can be performed in parallel and any of a variety of automatic speech recognition processes typically used to perform automated speech to text conversions can be utilized as appropriate to the requirements of specific applications.
  • the number of speakers may be limited and speech recognition models that are speaker dependent can be utilized to achieve greater accuracy in the speech to text conversion of speech by recurring speakers such as (but not limited to) news anchors.
  • Timestamps within the audio track utilized as the input to the automatic speech recognition process can be utilized to time synchronize ( 606 ) closed caption textual data with the video track within the video segment.
  • Text segmentation cues can be identified by performing string searches within the closed caption textual data.
  • Information concerning the textual cue and the timestamp associated with the textual cue can then be utilized in the identification of segmentation boundaries.
  • a confidence score is associated with the timestamp assigned to a textual cue and the confidence score can also be considered in the determination of a segmentation boundary.
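  • A minimal sketch of the string-search step described above is shown below; the caption record format and the confidence values are assumptions, and the example transition phrases merely echo the illustrative phrases mentioned earlier.

```python
# Sketch: scan time-stamped closed-caption lines for ">>>" story markers
# and known transition phrases, emitting (timestamp, cue type, confidence).
TRANSITION_PHRASES = ["now, we turn to", "nbc news, seattle"]  # illustrative

def find_textual_cues(caption_lines):
    """caption_lines: iterable of (timestamp_seconds, text) tuples."""
    cues = []
    for ts, text in caption_lines:
        lowered = text.lower()
        if ">>>" in text:
            cues.append((ts, "cc_marker", 1.0))
        for phrase in TRANSITION_PHRASES:
            if phrase in lowered:
                # Lower confidence: such phrases may also occur mid-story.
                cues.append((ts, "transition_phrase", 0.7))
    return cues

print(find_textual_cues([(12.4, ">>> Now, we turn to the weather."),
                         (98.0, "Stephanie Gross, NBC News, Seattle.")]))
```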
  • Visual boundaries in video content can provide information concerning transitions in content that cannot be discerned from analysis of closed caption textual data alone.
  • an analysis of video content for visual cues indicative of segmentation boundaries can be utilized to identify additional segmentation boundaries and to confirm and/or improve the accuracy of boundaries identified using closed caption textual data.
  • the set of visual cues includes (but is not limited to) anchor frames, logo frames, logo animation sequences and/or dark frames. In other embodiments and/or contexts, any of a variety of visual cues can be utilized as appropriate to the requirements of specific applications.
  • anchor frame refers to a frame in which an anchorperson appears. Typically, one or more anchorpersons appear between stories to provide a graceful transition.
  • a face detector is applied to some or all of the video frames in a video data stream.
  • a face detector that can detect the presence of a face (without performing identification) is utilized to identify candidate anchor frames and then a facial recognition process is applied to the candidate anchor faces to detect anchor frames.
  • any of a variety of techniques can be used to identify the presence of a specific person's face within a frame in a video data stream as appropriate to the requirements of specific applications.
  • A process for detecting anchor frames in a data stream in accordance with an embodiment of the invention is conceptually illustrated in FIG. 7A.
  • the frame of video 700 contains an image of the face 702 of NBC News anchor Brian Williams.
  • a process for detecting that a region 704 of the frame 700 contains the face of a known anchorperson, thereby identifying the frame as an anchor frame, is illustrated in FIG. 7B.
  • the process 750 includes selecting ( 752 ) a frame from the video data stream and detecting ( 754 ) a region of the frame containing a face.
  • a Viola-Jones or cascade of classifiers based face detector is utilized.
  • any of a variety of face detection techniques can be utilized as appropriate to the requirements of a specific application.
  • a face identification process can be performed within the region containing the detected face.
  • face identification is performed by generating a color histogram for a region containing a candidate face.
  • an elliptical region is utilized.
  • confidence information generated by the face detection process is utilized to define the region from which to form a histogram. The color histograms can be clustered from candidate anchor frames across the video data stream and dominant clusters identified as corresponding to an anchorperson.
  • the dominant clusters can then be used to identify candidate anchor frames that contain a face having a color histogram close to one of the dominant "anchor" color histograms.
  • similarity is determined using the L1 distance between the color histograms.
  • any of a variety of metrics can be utilized as appropriate to the requirements of specific applications, including metrics that consider the color histogram of a potential anchor face over more than one frame.
  • an anchor frame is detected ( 762 ).
  • factors including (but not limited to) the L1 distance and the number of adjacent frames in which the anchor face is detected are utilized to generate a confidence score that can be used by a multi-modal segmentation process in combination with information concerning other cues to determine the likelihood of a transition indicative of a segmentation boundary.
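  • The anchor-frame test described above can be sketched as follows, assuming OpenCV's stock Haar cascade as the face detector, a set of previously clustered "anchor" color histograms, and an illustrative L1-distance threshold; none of these specific choices are mandated by the embodiments described above.

```python
# Sketch: detect a face, build a color histogram over the face region, and
# compare it (L1 distance) against previously clustered "anchor" histograms.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_histogram(frame_bgr, rect, bins=16):
    x, y, w, h = rect
    face = frame_bgr[y:y + h, x:x + w]
    hist = cv2.calcHist([face], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3).flatten()
    return hist / (hist.sum() + 1e-9)          # L1 normalize

def is_anchor_frame(frame_bgr, anchor_histograms, threshold=0.5):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    for rect in faces:
        hist = face_histogram(frame_bgr, rect)
        dists = [np.abs(hist - anchor).sum() for anchor in anchor_histograms]
        if dists and min(dists) < threshold:
            return True, min(dists)
    return False, None
```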
  • logo appearance and position can vary unpredictably over time.
  • feature matching is performed between a set of logo images and frames from a video data stream.
  • a set of logo images can be obtained by periodically crawling the websites of news organizations and/or other appropriate sources.
  • Feature matching can also be performed between sequences of images in a transition animation and frames from a video data stream.
  • new transition animations can be periodically observed in video data streams generated by specific content sources and added to a library of transition animations.
  • Feature matching between logo images and frames of video in accordance with an embodiment of the invention is illustrated in FIG. 8A.
  • the process involves comparing a logo image 800 with a frame of video 802 and identifying matches 804 between local features in the logo image 806 and in the frame of video 808 .
  • a match is identified and factors including (but not limited to) the similarity of the local features can be used to generate a confidence score indicating the reliability of the match.
  • a similar process can be utilized to identify a sequence of frames of video that match a sequence of frames in a transition animation. Local feature matching between frames in transition animations and sequences of frames of video in accordance with embodiments of the invention are illustrated in FIGS. 8B and 8C .
  • A frame from a transition animation that has previously been identified as indicative of a segmentation boundary is illustrated in FIG. 8B.
  • the frame 850 from the transition animation shows two framed pictures 854 and 856, a white ticker bar 858 positioned below the two framed pictures, and a logo 860 in the larger ( 856 ) of the two framed pictures.
  • Identification of the same features in the frame of video 852 can be indicative of the frame of video 852 belonging to a transition animation.
  • the content within the framed pictures and the ticker differs; however, the presence of a sufficiently large number of matching local features can be utilized to detect a match between the two frames.
  • additional features such as the presence of an anchorperson's face in the smaller of the two framed pictures can also be utilized in the detection of a frame of a transition animation.
  • any of a variety of features can be utilized to detect transition animations as appropriate to the requirements of specific applications including (but not limited to) analysis of an audio track to detect a musical accompaniment to a transition animation.
  • the process 900 involves selecting ( 902 ) frames from a video data stream. Local features can be extracted ( 904 ) from a reference image and the selected frames of video.
  • SURF features are extracted using processes similar to those described in H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.
  • any of a variety of processes can be utilized to extract localized features in accordance with embodiments of the invention.
  • the localized features can be utilized to generate ( 906 ) global signatures and the selected frames ranked by comparing their global signatures to the global signature of the reference image.
  • the ranking can be utilized to select ( 908 ) a set of candidate frames that are compared in a pairwise fashion ( 910 ) with the logo image.
  • the pairwise comparisons can utilize the techniques described in D. Chen, S. Tsai, V. Chandrasekhar, G. Takacs, R. Vedantham, R. Grezeszczuk, and B. Girod, “Residual enhanced visual vector as a compact signature for mobile visual search,” Signal Processing, 2012.
  • a match is identified ( 912 ).
  • a match may represent that the candidate frame incorporates a logo and/or that the candidate frame corresponds to a frame from a transition animation.
  • the process of determining a match also involves determining a confidence metric that can also be utilized in the segmentation of a video data stream.
  • any of a variety of processes for comparing features within images can be utilized to detect logos, animations, and/or other features indicative of segmentation boundaries as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
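  • A simplified sketch of the pairwise local-feature comparison is given below. ORB features are used as a freely available stand-in for the SURF features cited above, the global-signature ranking stage is omitted, and the match-count and distance thresholds are assumptions.

```python
# Sketch: match local features between a reference image (logo or transition
# animation frame) and a candidate video frame, and count good matches.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def count_feature_matches(reference_gray, frame_gray, max_distance=40):
    kp1, des1 = orb.detectAndCompute(reference_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return 0
    matches = matcher.match(des1, des2)
    return sum(1 for m in matches if m.distance < max_distance)

# A candidate frame might be declared a logo or transition-animation match
# when the number of good matches exceeds an empirically chosen threshold.
def is_match(reference_gray, frame_gray, min_matches=25):
    return count_feature_matches(reference_gray, frame_gray) >= min_matches
```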
  • the processes described above with respect to FIG. 9 can also be utilized in the indexing of video segments to identify the presence of images associated with additional sources of data within a video segment. While logos and transition animations can be strong indicators of segmentation boundaries in a video data stream, they are not the only visual cues that can be utilized to detect segmentation boundaries. Additional visual cues including dark frames that are indicative of segmentation boundaries are discussed further below.
  • dark frames are frequently inserted at the boundaries of commercials and hence provide another valuable visual cue for segmentation.
  • dark frames are detected by converting some or all frames in a video data stream to gray scale and comparing the mean and standard deviation of the pixel intensities.
  • a frame is determined to be a dark frame if the mean is below a predetermined mean threshold and the standard deviation is below a predetermined standard deviation threshold.
  • any of a variety of processes can be utilized to identify dark frames in accordance with embodiments of the invention, including (but not limited) to processes that identify sequences of multiple dark frames and/or processes that provide a confidence measure that can be utilized by a multi-modal segmentation process in combination with information concerning other cues to determine the likelihood of a transition indicative of a segmentation boundary.
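  • A sketch of the dark-frame test follows; the mean and standard-deviation thresholds are illustrative values, not values specified by the embodiments described above.

```python
# Sketch: flag a frame as "dark" when the gray-scale mean and standard
# deviation both fall below illustrative thresholds.
import cv2

def is_dark_frame(frame_bgr, mean_thresh=20.0, std_thresh=10.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean, std = cv2.meanStdDev(gray)
    return float(mean) < mean_thresh and float(std) < std_thresh
```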
  • an audio track within a data stream can also be utilized as a source of segmentation cues.
  • Anchorpersons commonly pause momentarily or take a long breath before introducing a new story.
  • significant pauses in an audio track are utilized as a segmentation cue.
  • a significant pause is defined as a pause in speech having a duration of 0.3 seconds or longer.
  • any of a variety of classifiers can be utilized to detect pauses indicative of a segmentation boundary in accordance with embodiments of the invention including processes that provide a confidence measure that can be utilized by a multi-modal segmentation process in combination with information concerning other cues to determine the likelihood of a transition indicative of a segmentation boundary.
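  • A minimal energy-based pause detector consistent with the 0.3 second criterion mentioned above might look as follows; the window size and energy threshold are assumptions.

```python
# Sketch: detect pauses of 0.3 s or longer by thresholding the short-term
# energy of the audio samples (1-D numpy array of mono samples in [-1, 1]).
import numpy as np

def find_pauses(samples, sample_rate, min_pause=0.3, energy_thresh=1e-4,
                window=0.02):
    hop = int(window * sample_rate)
    silent = []
    for start in range(0, len(samples) - hop, hop):
        frame = samples[start:start + hop]
        silent.append(np.mean(frame.astype(np.float64) ** 2) < energy_thresh)
    pauses, run_start = [], None
    for i, s in enumerate(silent + [False]):   # sentinel closes a final run
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            duration = (i - run_start) * window
            if duration >= min_pause:
                pauses.append((run_start * window, duration))
            run_start = None
    return pauses  # list of (start_time_seconds, duration_seconds)
```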
  • Pauses are not the only auditory cues that can be utilized in the detection of segmentation boundaries.
  • specific changes in tone and/or pitch can be utilized as indicative of segmentation boundaries as can musical accompaniment that is indicative of a transition to a commercial break and/or between segments.
  • any segmentation process that can be utilized to segment a video data stream in a manner that enables indexing of the video segments for the purposes of generating personalized playlists can be utilized in accordance with embodiments of the invention.
  • Processes for generating personalized video playlists based upon user preferences in accordance with embodiments of the invention are described further below.
  • Playlist generation systems in accordance with many embodiments of the invention are configured to index sets of video segments and generate personalized playlists based upon user preferences.
  • the user preferences can be explicit preferences specified by the user, and/or can be inferred based upon user interactions with previously recommended video segments (i.e. the user's viewing history).
  • the playlist generation system also generates playlists that are subject to time constraints in recognition of the limited time available to a user to consume content.
  • the playlist generation server system 1000 includes a processor 1010 in communication with volatile memory 1020 , non-volatile memory 1030 , and a network interface 1040 .
  • the non-volatile memory 1030 includes an indexing application 1032 that configures the processor 1010 to annotate video segments with metadata 1038 describing the content of each video segment and to generate an index relating video segments to keywords.
  • the indexing application 1032 configures the processor 1010 to extract metadata from textual analysis of textual data contained within a video segment and visual analysis of video data contained within the video segment.
  • the indexing application 1032 configures the processor 1010 to identify additional sources of relevant data that can be used to further annotate the video segment based upon textual and visual comparisons of the video segment and sources of additional data.
  • any of a variety of techniques including (but not limited to) manual annotation of video segments can be utilized to associate metadata with individual video segments.
  • the non-volatile memory 1030 can also contain a playlist generation application 1034 that configures the processor 1010 to generate personalized playlists for individual users based upon information collected by the playlist generation server system 1000 concerning user preferences and viewing histories 1036 .
  • Various processes for generating personalized video playlists in accordance with embodiments of the invention are discussed further below.
  • playlist generation server system implementations are described above with reference to FIG. 10
  • any of a variety of architectures, including architectures where the indexing application and playlist generation application execute on different processors and/or on different server systems, can be utilized to implement playlist generation server systems in accordance with embodiments of the invention.
  • Processes for annotating and indexing video segments and processes for generating personalized video playlists in accordance with various embodiments of the invention are discussed separately below.
  • Metadata describing video segments can be utilized as inputs to a personalized video playlist generation system and to populate the user interfaces of playback devices with descriptive information concerning the video segments.
  • a great deal of metadata describing a video segment can be derived from the video segment itself.
  • Analysis of text data such as closed caption and subtitle text data can be utilized to identify relevant keywords.
  • Analysis of visual data using techniques such as (but not limited to) text recognition, object recognition, and facial recognition can be utilized to identify the presence of keywords and/or named entities within the content.
  • video segments can also include a metadata track that describes the content of the video segment.
  • Metadata describing video segments can also be obtained by matching the video segments to additional sources of relevant data.
  • video segments can be matched to online articles related to the content of the video segment.
  • visual analysis is used to match portions of images associated with online articles to frames of video as an indication of the relevance of the online article.
  • sources of additional data (e.g. online news articles or Wikipedia pages) can be utilized to further annotate the video segments.
  • online articles matched to specific video segments can be utilized to generate titles for video segments and provide thumbnail images that can be used within user interfaces of playback devices. Hyperlinks to the online articles can also be provided via the user interfaces to enable a user to link to the additional content.
  • any of a variety of data sources appropriate to the requirements of the specific application can be utilized in the generation of user interfaces and/or personalized playlists in accordance with embodiments of the invention.
  • visual analysis and text analysis are utilized to match video segments to additional sources of data.
  • a process for matching a segment of video to an online news article in accordance with an embodiment of the invention is conceptually illustrated in FIG. 11 .
  • the process involves matching ( 1100 ) visual features, which can involve comparing a video segment 1102 to images 104 associated with additional sources of data to identify the presence of at least a portion of the image within at least one frame of video within the video segment.
  • the process can also involve matching ( 1108 ) text features.
  • keywords found in closed caption text data 1110 can be compared to keywords contained in text data 1112 present within additional sources of data.
  • computational complexity can be reduced by initially performing text analysis to identify candidate sources of additional data. Images related to the candidate sources of additional data can then be utilized to perform visual analysis and the final ranking of the candidate sources of additional data determined based upon the combination of the text and visual analysis.
  • the text and visual analysis can be performed in alternative sequences and/or independently. Processes for performing text analysis and visual analysis to identify additional sources of data relevant to the content of video segments in accordance with embodiments of the invention are discussed further below.
  • sources of text within a video segment including (but not limited to) closed caption, subtitles, text generated by automatic speech recognition processes, and text generated by text recognition (optical character recognition) processes can be utilized to annotate video segments and identify additional sources of relevant data.
  • time stamp metadata associated with additional sources of data and/or dates and/or times contained within text forming part of an additional source of data can be utilized in limiting the sources of additional data considered when determining relevancy.
  • the presence of common dates and/or times in text extracted from a video segment and text from an additional data source can be considered indicative of relevance.
  • bag-of-words histogram comparisons enable matching of text segments with similar distributions of words.
  • a term frequency-inverse document frequency (tf-idf) histogram intersection score S(H_a, H_b) is computed as follows:
  • H_a(w) and H_b(w) are the L1-normalized histograms of the words in the two sets of words (i.e. the text from the video segment and the additional data source);
  • {f(w)} is the set of estimated relative word frequencies.
  • a candidate additional data source is considered to have been identified when the tf-idf histogram intersection score S(H_a, H_b) exceeds a predetermined threshold.
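  • The exact scoring equation is not reproduced in the text above; the sketch below shows one plausible tf-idf-weighted histogram intersection, assuming the per-word overlap is weighted by log(1/f(w)).

```python
# Sketch of one plausible tf-idf-weighted histogram intersection; the exact
# form of S(H_a, H_b) used by the system is not reproduced in the text above.
import math
from collections import Counter

def l1_histogram(words):
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def tfidf_intersection(words_a, words_b, rel_freq):
    """rel_freq: dict of estimated relative word frequencies f(w)."""
    ha, hb = l1_histogram(words_a), l1_histogram(words_b)
    score = 0.0
    for w in set(ha) & set(hb):
        idf = math.log(1.0 / rel_freq.get(w, 1e-6))  # rare words count more
        score += min(ha[w], hb[w]) * idf
    return score

print(tfidf_intersection("girls found alive in cleveland".split(),
                         "three missing girls found alive".split(),
                         {"girls": 0.001, "found": 0.01, "alive": 0.005}))
```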
  • the process of identifying relevant sources of additional data places particular significance upon named entities.
  • a database of named entities can be built using sources such as (but not limited to) Wikipedia, Twitter, the Stanford Named Entity Recognizer, and/or Open Calais. String searches can then be utilized to identify named entities in text extracted from a video segment and a potential source of additional data, such as an online article.
  • the presence of a predetermined number of common named entities is used to identify a source of additional data that is relevant to a video segment.
  • the presence of five or more named entities in common is indicative of a relevant source of additional data.
  • any of a variety of processes can be utilized to determine relevancy based upon named entities including processes that utilize a variety of matching rules such as (but not limited to) number of matching named entities, number of matching named entities that are people, number of matching named entities that are places and/or combinations of numbers of matching named entities that are people and number of matching named entities that are places.
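  • A sketch of the named-entity overlap test, using the five-entity threshold mentioned above, is given below; treating entity strings case-insensitively is an assumption.

```python
# Sketch: treat an additional data source as relevant when it shares at
# least five named entities with the video segment.
def shares_enough_entities(segment_entities, article_entities, min_common=5):
    common = ({e.lower() for e in segment_entities} &
              {e.lower() for e in article_entities})
    return len(common) >= min_common, common

# Example: entities extracted from a video segment and a candidate article.
print(shares_enough_entities(
    ["Cleveland", "Ohio", "Amanda Berry", "FBI", "Ariel Castro"],
    ["Amanda Berry", "Cleveland", "Ariel Castro", "Ohio", "FBI", "Seattle"]))
```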
  • A process for performing text analysis of video segments to identify relevant sources of additional data in accordance with an embodiment of the invention is illustrated in FIG. 12.
  • the process 1200 includes determining ( 1202 ) tf-idf for the annotated video segment(s). Similar processes can be utilized to determine ( 1204 ) tf-idf for additional sources of data such as online articles. Processes similar to those outlined above can be utilized to determine ( 1206 ) the similarity of the tf-idf histograms of the video segments and the additional sources of data.
  • the relevancy of additional sources of data to specific video segments can be confirmed by identifying ( 1208 ) named entities in text data describing a video segment, identifying ( 1210 ) named entities referenced in candidate additional sources of data that share common terms with the video segment, and determining ( 1212 ) that an additional source of data relates to the content of a video segment when a predetermined number of named entities are referenced in the text data extracted from the video segment and the additional source of data.
  • named entities associated with a video segment can be identified within text data extracted from the video segment and/or by performing object detection and/or facial recognition processes with respect to frames from the video segment.
  • any of a variety of processes can be utilized to identify relevant sources of additional data based upon text extracted from a video segment and the text associated with the additional data source as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • the frames of a video segment can contain a variety of visual information including images, faces, and/or text.
  • text analysis processes similar to those described above can be augmented using relevant keywords identified through analysis of the visual information (as opposed to text data) within a video segment.
  • text recognition processes are utilized to identify text that is visually represented within a frame of video and relevant keywords can be extracted from the identified text.
  • additional relevant keywords can also be extracted from a video segment by performing object detection and/or facial recognition.
  • Text extraction processes can be used to detect and recognize letters forming words within frames in a video segment.
  • the text can be utilized to identify keywords that annotate the video segment.
  • keywords such as (but not limited to) "breaking news" can be utilized to categorize news stories both for the purpose of detecting additional sources of data and during the generation of personalized playlists.
  • text is extracted from frames of video and filtered to identify text that describes the video segment.
  • News stories commonly include title text and identification of the title text can be useful for the purpose of incorporating the title into a user interface and/or for using keywords in the title to identify relevant additional sources of data.
  • an extracted title is provided to a search engine to identify additional sources of potentially relevant data.
  • the title can be provided as a query to a vertical search engine (e.g. the Google News search engine service provided by Google, Inc. of Mountain View, Calif.) to identify additional sources of potentially relevant data.
  • the ranking of the search results is utilized to determine relevancy.
  • the search results are separately scored to determine relevancy.
  • Processes for extracting relevant keywords from video segments for use in the annotation of video segments in accordance with embodiments of the invention are illustrated in FIGS. 13A-13D.
  • FIG. 13A is a frame of video containing visual representations of text.
  • the text includes the words “BREAKING NEWS” and “THREE MISSING GIRLS FOUND ALIVE”, which can be identified using common text recognition processes.
  • FIG. 13C another frame of video is shown containing visual representations of text.
  • the frame also includes the words “BREAKING NEWS” and the words “WITNESS TO TERROR” that can be identified using common text recognition processes.
  • A process for extracting relevant keywords from frames of video using automatic text recognition in accordance with an embodiment of the invention is illustrated in FIG. 14.
  • the process 1400 includes extracting ( 1402 ) text from one or more frames of video. With the exception of logos, the amount of time that text appears within a video segment can be highly correlated with the importance of the text. Therefore, many embodiments of the invention analyze multiple frames of video and filter text and/or keywords based upon the duration of the time period in which text and/or keywords are visible.
  • the extracted ( 1402 ) text can be analyzed to identify ( 1404 ) keywords.
  • the keywords can be filtered ( 1406 ) to identify relevant keywords and key phrases, which can be utilized to annotate ( 1408 ) the video segment.
  • the text is filtered for “stop words” and a “stemming” process is applied to the remaining words to increase the matching results.
  • any of a variety of filtering and/or keyword expansion processes can be applied to recognized text to identify relevant keywords in accordance with embodiments of the invention.
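  • The stop-word filtering, stemming, and on-screen-duration filtering described above can be sketched as follows, assuming NLTK's Porter stemmer and an abbreviated stop-word list; the two-second duration threshold is an assumption.

```python
# Sketch: filter OCR text for stop words, stem the remaining words, and keep
# keyword stems that stay on screen long enough to be considered important.
from nltk.stem import PorterStemmer

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "in", "on"}  # abbreviated
stemmer = PorterStemmer()

def keywords_from_ocr(ocr_results, min_duration=2.0):
    """ocr_results: iterable of (timestamp_seconds, recognized_text) pairs."""
    first_seen, last_seen = {}, {}
    for ts, text in ocr_results:
        for word in text.lower().split():
            if word in STOP_WORDS:
                continue
            stem = stemmer.stem(word)
            first_seen.setdefault(stem, ts)
            last_seen[stem] = ts
    # Keep stems visible for at least min_duration seconds.
    return [w for w in first_seen
            if last_seen[w] - first_seen[w] >= min_duration]

print(keywords_from_ocr([(0.0, "BREAKING NEWS"), (3.0, "BREAKING NEWS"),
                         (3.0, "THREE MISSING GIRLS FOUND ALIVE")]))
```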
  • any of a variety of processes for annotating video segments using keywords identified by analyzing frames of a video segment using automatic text recognition processes can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Additional automatic recognition tasks that can be performed to identify faces and objects during the annotation of video segments in accordance with various embodiments of the invention are discussed further below.
  • a variety of techniques are known for performing object detection including various face recognition processes. Processes for detecting anchor faces are described above with respect to video segmentation. As can readily be appreciated, recognizing the people appearing in video segments can be useful in identifying additional sources of data that are relevant to the content of the video segments. In a number of embodiments, similar processes can be utilized to identify a larger number of faces (i.e. more named entities than simply anchorpeople). In other embodiments, any of a variety of processes can be utilized to perform face recognition including processes that have high recognition precision across a large population of faces.
  • A process for performing face recognition based upon localized features during the annotation of a video segment in accordance with an embodiment of the invention is conceptually illustrated in FIGS. 15 and 16.
  • the frame of video 1500 shown in FIG. 15 is a shot of Warren Buffett, Chairman of Berkshire Hathaway.
  • the subject of the shot can be ascertained by performing automated text recognition.
  • the presence of Mr. Buffett's face can be identified by performing a process 1600 involving initially performing ( 1602 ) a face detection process.
  • a region determined to contain a face can then be analyzed ( 1604 ) to locate landmark features 1502 such as the corners of the face's eyes, the tip of the face's nose, and the edges of the face's mouth.
  • such features can be utilized to perform facial recognition by matching ( 1606 ) the relationship of the landmark features against a database of facial landmark feature geometries. Once a face is recognized, the identity of the person visible in the frame of video can be utilized to annotate ( 1608 ) the video segment with a keyword corresponding to a named entity. A confidence score can also be associated with the named entity annotation and utilized in weighting the named entity keyword when identifying additional sources of data.
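  • A sketch of landmark-geometry matching follows; it assumes landmark coordinates produced by an external facial landmark detector and uses a simple normalized nearest-neighbor comparison, which is only one of many ways the matching ( 1606 ) step could be realized.

```python
# Sketch: identify a face by comparing its normalized landmark geometry
# (eye corners, nose tip, mouth edges) against a small reference database.
import numpy as np

def normalize_landmarks(points):
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)                      # translation invariance
    return pts / (np.linalg.norm(pts) + 1e-9)    # scale invariance

def identify_face(landmarks, database, max_dist=0.1):
    """database: dict mapping person name -> reference landmark array."""
    query = normalize_landmarks(landmarks)
    best_name, best_dist = None, float("inf")
    for name, ref in database.items():
        d = np.linalg.norm(query - normalize_landmarks(ref))
        if d < best_dist:
            best_name, best_dist = name, d
    return (best_name, best_dist) if best_dist < max_dist else (None, best_dist)
```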
  • any of a variety of object detection processes can be utilized to annotate video segments with relevant keywords as appropriate to the requirements of specific applications in accordance with embodiments of the invention. While the processes described above with reference to FIGS. 13A-16 involve the analysis of visual information contained within frames of a video segment in order to identify keywords that are relevant to the content of the video segment, visual analysis can also be utilized to identify images that are relevant to the content of a video segment. Processes that utilize visual analysis to identify relationships between video segments and images in accordance with various embodiments of the invention are discussed further below.
  • Video segments and additional sources of data, such as online articles, often utilize the same image, different portions of the same image, or different images of the same scene.
  • an image portion within one or more frames in a video segment can be matched to an image associated with additional sources of information to assist with establishing the relevancy of additional sources of data.
  • matching is performed by determining whether the frame of video contains a region that includes a geometrically and photometrically distorted version of a portion of an image obtained from the additional data source.
  • processes similar to those described above with reference to FIG. 9 can be utilized to determine a match between a portion of an image associated with an additional data source and a portion of a frame of video.
  • any of a variety of techniques can be utilized to determine whether portions of a frame of video and an image associated with an additional data source correspond.
  • an index can be generated using keywords extracted from the video segment and/or from additional sources of data that are relevant to the content of the video segment.
  • the resulting index and metadata can be utilized in the generation of personalized video playlists.
  • Playlist personalization is a complex problem that can consider user preferences, viewing history, and/or story relationships in choosing the video segments that are most likely to form the set of content that is of most interest to a user.
  • processes for generating personalized playlists for users involve consideration of a recommended set of content in recognition of the limited amount of time an individual user may have to view video segments.
  • processes in accordance with a number of embodiments of the invention can attempt to select a set of video segments having a combined duration less than a predetermined time period and spanning the content that is most likely to be of interest to the user.
  • the video segments can be further sorted into a preferred order.
  • the order can be determined based upon relevancy and/or based upon heuristics concerning sequences of content categories that make for “good television”.
  • the process of generating playlists involves the generation of multiple playlists including a personalized playlist and “channels” of content filtered by categories such as “technology” or keywords such as “Barack Obama”. Within categories, user preferences can still be considered in the generation of the playlist.
  • process for generating a personalized video playlist is simply applied to a smaller set of video segments.
  • processes for generating personalized playlists in accordance with many embodiments of the invention attempt to provide a comprehensive view of the day's news in a way that avoids duplicate or near-duplicate stories. Additionally, more recent video segments can receive higher weightings.
  • this formulation chooses trending video segments that originated from news programs the user prefers and that are also associated with categories in which the user is interested.
  • the process of generating a personalized playlist is treated as a maximum coverage problem.
  • a maximum coverage problem typically involves a number of sets of elements, where the sets of elements can intersect (i.e. a single element can belong to multiple sets). Solving a maximum coverage problem involves finding the fixed number of elements that cover the largest number of sets of elements.
  • the elements are the video segments and video segments that relate to the same content are treated as belonging to the same set. Therefore, the concept of content coverage can be used to refer to the amount of different content covered by a set of video segments. As noted above, video segments can be compared to determine whether the content is related or unrelated.
  • an objective function for solving the maximum coverage problem can be weighted by a linear combination of several personalization factors. These factors can include (but are not limited to) explicit preferences specified by a user, personal information provided by the user and/or obtained from secondary sources including (but not limited to) online social networks, and implicit preferences obtained by analyzing a user's viewing history. Information concerning implicit preferences may be derived by analyzing a user's viewing history with respect to playlists generated by a playlist generation server system.
  • implicit preferences can be derived from additional sources of information including (but not limited to) a user's browsing activity (especially with respect to online articles relevant to video segment content), activity within an online social network, and/or viewing history with respect to video and/or audio content provided by one or more additional services.
  • A process for generating personalized playlists from metadata describing a set of video segments based upon user preferences in accordance with an embodiment of the invention is illustrated in FIG. 17.
  • the process 1700 involves obtaining ( 1702 ) user preferences, which can involve observing ( 1704 ) a user's viewing history.
  • the process of generating personalized playlists utilizes metadata identifying video segments having related content or cumulative content.
  • related video segments are identified ( 1706 ) and personalization weightings can be determined ( 1708 ) for a new set of video segments from which the personalized playlists will be generated, based upon metadata describing the video segments.
  • metadata describing the relationships between video segments and the personalization weightings are utilized to generate ( 1710 ) personalized playlists.
  • the process of generating a personalized playlist can be constrained by a specified cumulative playback duration of the video segments identified in the playlist.
  • Personalized playlists can be provided to playback devices, which can utilize the playlists to stream ( 1712 ), or otherwise obtain, the video segments identified in the playlist and to enable the user to interact with the video segments.
  • the playback devices and/or the playlist generation server system can collect analytic data based upon user interactions with the video segments and/or additional data sources identified within the playlist.
  • the analytic information can be utilized to improve the manner in which personalization ratings are determined for specific users so that the playlist generation process can provide more relevant content recommendations over time.
  • any of a variety of processes can be utilized to perform playlist generation based upon metadata describing a set of video segments and information concerning user preferences in accordance with embodiments of the invention.
  • information concerning relationships between video segments and specifically with respect to the cumulative nature of video segments can be highly relevant in the generation of personalized playlists for certain types of video content including (but not limited to) news stories. Processes for identifying related and/or cumulative content in accordance with various embodiments of the invention are discussed further below.
  • playlist generation processes in accordance with many embodiments of the invention rely upon information concerning the relationships between the content in video segments to identify the greatest amount of information that can be conveyed within the shortest or a specified time period.
  • related video segments can be considered to be video segments that relate to the same news story.
  • care is taken when classifying two video segments relating to the same content as “related” to avoid classifying a video segment that includes updated information as related in the sense of being cumulative.
  • a video segment that contains additional information can be identified as a primary video segment and a video segment containing an earlier version of the content and/or a subset of the content can be classified as a related or cumulative video segment.
  • a related classification can be considered hierarchical or one directional. Stated another way, the classification of a first segment as related to a second segment does not imply that the second segment is related to (cumulative of) the first segment. In many embodiments, however, only bidirectional relationships are utilized.
  • A process for identifying whether a first video segment is cumulative of the content in a second video segment based upon keywords associated with the video segments in accordance with an embodiment of the invention is illustrated in FIG. 18.
  • the process 1800 includes determining ( 1802 ) the tf-idf histograms for both of the video segments and ( 1804 ) lists of named entities associated with each of the segments.
  • a decision concerning whether one of the video segments is cumulative of the other can be made by comparing the tf-idf histograms in the manner described above with respect to FIG. 12 .
  • a determination that one of the video segments is cumulative of the other video segment (or that both video segments are cumulative of each other) can be determined by comparing ( 1808 ) whether the number of shared named entities exceeds a predetermined threshold.
  • any of a variety of processes for determining whether the content of a first video segment is cumulative of a second video segment can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • processes that identify relationships other than the cumulative nature of video segments, such as processes that determine visual similarity between shots in order to identify appealing and/or dominant shots within video segments, can be utilized in a variety of contexts.
  • the manner in which metadata describing the relationships between video segments can be utilized in the generation of personalized video playlists in accordance with various embodiments of the invention is discussed further below.
  • personalized playlists are generated by formalizing the problem of generating a playlist for a user as an integer linear programming optimization problem, or more specifically a maximum coverage problem, as follows:
  • n is the number of today's videos
  • w coverage represents a weighting applied to the news story coverage relative to user preferences
  • x is a vector including an element for each identified video segment, where for i ∈ [1 . . . n], x_i ∈ {0,1} is 1 if the i-th video segment is selected,
  • y is a vector including an element for each identified video segment, where for i ∈ [1 . . . n], y_i ∈ {0,1} is 1 if x_i is covered by a video segment that has already been selected,
  • c is a vector representing a set of personalization weights c_i determined with respect to each video segment x_i based upon user preferences,
  • R ∈ {0,1}^{n×n} denotes an adjacency matrix, where 1 represents a link between news stories.
  • the duration of each news story and the overall time limitation are represented by d_i and t, respectively.
  • factors including (but not limited to) a user's preferences with respect to sources and/or categories of video segments (s_source, s_category), recency (s_time), and viewing history (s_history) are considered in calculating the personalization weights c.
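  • Since the integer linear program itself is not reproduced above, the following sketch approximates the selection with a greedy strategy that trades off the personalization weights c against story coverage under the duration budget t; the data layout and the coverage weighting are assumptions.

```python
# Sketch: greedy approximation of the maximum-coverage playlist selection.
# Segments carry a personalization weight 'c', a duration 'd' (seconds), and
# a 'story' id; links in R are represented here by shared story ids.
def greedy_playlist(segments, time_limit, w_coverage=1.0):
    """segments: list of dicts with keys 'id', 'story', 'c', 'd'."""
    chosen, covered_stories, used = [], set(), 0.0
    remaining = list(segments)
    while remaining:
        def gain(seg):
            coverage = w_coverage if seg["story"] not in covered_stories else 0.0
            return seg["c"] + coverage
        best = max(remaining, key=gain)
        remaining.remove(best)
        if used + best["d"] <= time_limit:
            chosen.append(best["id"])
            covered_stories.add(best["story"])
            used += best["d"]
    return chosen

print(greedy_playlist(
    [{"id": 1, "story": "ohio", "c": 0.9, "d": 120},
     {"id": 2, "story": "ohio", "c": 0.8, "d": 90},
     {"id": 3, "story": "markets", "c": 0.4, "d": 60}],
    time_limit=200))
```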
  • viewing history (s history ) can be determined based upon the number of related news stories, which were watched previously by the user.
  • processes for detecting related and/or similar stories similar to those described above with respect to FIG. 18 can be utilized to identify similar video segments previously watched by a user.
  • a separate novelty metric is determined as part of the process of identifying similar stories and the novelty metric can be used to assess the extent to which the content of two similar video segments differs.
  • the novelty metric is related to the number of words that are not common between the two video segments. In other embodiments, any of a variety of factors can be considered in the calculation of a novelty metric.
  • the overall weighting c_i for a video segment v_i from the set of n recent video segments v can be expressed as follows:
  • c_i = w_source · s_source(v_i) + w_category · s_category(v_i) + w_time · s_time(v_i) + w_history · s_history(v_i)
  • the weights can be selected arbitrarily and updated manually and/or automatically based upon user feedback.
  • s_time(v_i) and s_history(v_i) are defined as follows:
  • Video is a set of all video segments (i.e. not just the recent segments v).
  • the function related(v_i, w) ∈ {0,1} is 1 if video segments v_i and w are linked.
  • a process similar to any of the processes described above with respect to FIG. 18 can be utilized to determine whether stories are cumulative.
  • the links identified by such processes are very specific in the sense that the process is intended to identify video segments that contain the same or very similar content. Accordingly, processes in accordance with many embodiments of the invention may (also) attempt to draw more general conclusions concerning viewing history such as keyword preferences, topic preferences, and source preferences.
  • more general preferences can be utilized to modify source and/or category preference scores that are separately used to weight video segments.
  • any of a variety of processes for scoring a specific video segment based upon viewing history can be utilized in accordance with embodiments of the invention.
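  • The component scores can be combined into c_i as sketched below; the particular definitions of s_time and s_history (a recency decay and the fraction of previously watched related segments) are assumptions, since their exact formulas are not reproduced above.

```python
# Sketch: combine source, category, recency, and history scores into the
# per-segment weight c_i. 'related' is a callable returning 1 when two
# segments are linked; the component definitions are illustrative only.
def personalization_weight(seg, prefs, watched, related,
                           w_source=1.0, w_category=1.0,
                           w_time=0.5, w_history=0.5):
    s_source = prefs["sources"].get(seg["source"], 0.0)
    s_category = prefs["categories"].get(seg["category"], 0.0)
    s_time = 1.0 / (1.0 + seg["age_hours"])      # newer segments score higher
    # Fraction of previously watched segments linked to this one.
    s_history = (sum(related(seg["id"], w) for w in watched) / len(watched)
                 if watched else 0.0)
    return (w_source * s_source + w_category * s_category +
            w_time * s_time + w_history * s_history)
```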
  • the “importance” of a video segment can be scored and utilized to determine the order in which the video segments are presented in a playlist.
  • importance can be scored based upon factors including (but not limited to) the number of related video segments.
  • the number of related video segments within a predetermined time period can be indicative of breaking news. Therefore, the number of related video segments to a video segment within a predetermined time period can be indicative of importance.
  • any of a variety of techniques can be utilized to measure the importance of a video segment as appropriate to the requirements of specific applications.
  • the content of the video segments is utilized to determine the order of the video segments in a personalized video playlist.
  • sentiment analysis of metadata annotating a video segment can be utilized to estimate the sentiment of the video segment and heuristics utilized to order video segments based upon sentiment. For example, a playlist may start with the most important story. Where the story has a negative sentiment (a dispatch from a warzone), the process can select a second story that has more uplifting sentiment.
  • machine learning techniques can be utilized to determine processes for ordering stories from a set of stories to create a personalized playlist as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • any of a variety of processes can be utilized to generate personalized video playlists using a set of video segments based upon user preferences in accordance with embodiments of the invention including processes that indirectly consider viewing history by modifying source and category weightings. Furthermore, processes in accordance with many embodiments of the invention consider other user preferences including (but not limited to) keyword and/or named entity preferences.
  • Personalized video playlists can be provided to a host of playback devices to enable viewing of video segments and/or additional data sources identified in the playlists.
  • a playback device is configured via a client application to render a user interface based upon metadata describing video segments obtained using the playlist.
  • Playback devices can also be configured to provide a “second screen” display that can enable control of playback of video segments on another playback device and/or viewing of additional video segments and/or data related to the video segment being played back on the other playback device.
  • the user interfaces that can be generated by playback devices are largely only limited by the capabilities of the playback device and the requirements of specific applications.
  • the playback device 1900 includes a processor 1910 in communication with volatile memory 1920 , non-volatile memory 1930 , and a network interface 1940 .
  • the non-volatile memory 1930 includes a media decoder application 1932 that configures the processor 1910 to decode video for playback via a display device, and a client application 1934 that configures the processor to render a user interface based upon metadata describing video segments contained within a personalized playlist 1926 retrieved from a playlist generation server system via the network interface 1940.
  • any of a variety of playback device architectures can be utilized to play back video segments identified in a personalized playlist in accordance with embodiments of the invention.
  • User interfaces generated by playback devices that enable viewing and interaction with video segments identified in personalized playlists in accordance with embodiments of the invention are described further below.
  • the user interface generated by a playback device based upon a personalized playlist is typically determined by the capabilities of a playback device.
  • instructions for generating a user interface can be provided to a playback device by a remote server.
  • the instructions can be in a markup and/or scripting language that can be rendered by the rendering engine of a web browser application on a computing device.
  • the remote server provides structured data to a client application on a playback device and the client application utilizes the structured data to populate a locally generated user interface.
  • any of a variety of approaches to generating a user interface can be utilized in accordance with an embodiment of the invention.
  • A user interface rendered by the rendering engine of a web browser application in accordance with an embodiment of the invention is illustrated in FIG. 20A.
  • the user interface 2000 includes a player region 2002 in which a video segment is played back.
  • the video segment being played back via the user interface is described by displaying the video segment's title 2004 , source 2006 , recency 2008 , and number of views 2010 above the player region 2002 .
  • any of a variety of information describing a video segment being played back within a player region can be displayed in any location(s) within a user interface as appropriate to the requirements of specific applications.
  • the player region 2002 includes user interface buttons for sharing a link to the current story 2012 , skipping to the previous 2014 or next story 2016 and expressing like 2018 or dislike 2020 toward the story being played back within the player region 2002 .
  • additional user interface affordances can be provided to facilitate user interaction including (but not limited to) user interface mechanisms that enable the user to select an option to follow stories related to the story currently being played back within the player region 2002 .
  • the user interface also includes a personalized playlist 2022 filled with tiles 2024 that each include a description 2025 of a video segment intended to interest the user and an accompanying image 2026 .
  • tiles 2024 in the playlist 2022 can also be easily reordered or removed.
  • the tile at the bottom of the list 2028 contains a description of the video segment being played back in the player region.
  • the tile also contains sliders 2030 indicating categories, sources, and/or keywords for which a user has or can provide an explicit user preference. In this way, the user is prompted to modify previously provided user preference information and/or provide additional user preference information during playback of the video segment.
  • any of a variety of affordances can be utilized to directly obtain user preference information via a user interface in which video segments identified within a playlist are played back as appropriate to the requirements of specific applications.
  • the query is executed by comparing keywords from the query to keywords contained within the segment of video content (e.g. speech, closed caption, metadata).
  • the query is executed by also considering the presence of keywords in additional sources of information that were determined to be related to the video segment during the process of generating the personalized playlist.
  • indexes relating keywords to video segments that are constructed as part of the process of generating personalized playlists can also be utilized to generate lists of video segments in response to text based search queries in accordance with embodiments of the invention. Implementation of various video search engines in accordance with embodiments of the invention are described further below.
  • the displayed user interface 2000 also includes an option 2042 to enter a settings menu for adjusting preferences toward different categories of video content and/or sources of video content.
  • a settings menu user interface in accordance with an embodiment of the invention is illustrated in FIG. 20B .
  • the settings menu user interface 2050 includes a set of sliders 2052 indicating user preferences provided and/or inferred based upon a user's viewing history.
  • a user can adjust an individual slider 2046 to modify the weighting attributed to the corresponding attribute of a video segment.
  • the user can add and/or remove any of a variety of factors to the list of factors considered by a playlist generation system.
  • the settings menu user interface can include a set of options 2056 that a user can select to specify a playlist duration.
  • playlist duration is a factor that can be considered in the selection of video segments to incorporate within a personalized playlist.
  • user preference information can be obtained via any of a variety of affordances provided via a user interface of a playback device as appropriate to the requirements of a specific application.
  • the display and input capabilities of a playback device can inform the user interface provided by the playback device.
  • a user interface for a touch screen computing device, such as (but not limited to) a tablet computer, in accordance with an embodiment of the invention is illustrated in FIG. 21A .
  • the user interface 2100 includes a player region 2102 in which a video segment is played back. Due to the limited display size, the majority of the display is devoted to the playback region; however, the title 2104 and source 2106 of the video segment being played back are displayed above the player region 2102 .
  • the user interface also includes a channels button 2108 that can be selected to display a list of available playlists.
  • a screen shot of a user interface in which channels are displayed in accordance with an embodiment of the invention is illustrated in FIG. 21B .
  • the channels list 2150 includes the personalized playlist of video segments 2152 and selections for personalized playlists generated by filtering video segments based upon specific categories, sources, and/or keywords.
  • a mobile computing device such as (but not limited to) a mobile phone or tablet computer can act as a second display enabling control of playlist playback on another playback device and/or providing additional information concerning a video segment being played back on a playback device.
  • a screen shot of a “second screen” user interface generated by a tablet computing device in accordance with an embodiment of the invention is illustrated in FIG. 22A .
  • the user interface 2002 includes a listing 2202 of video segments that are related to a video segment identified in a personalized playlist that is being played back on another playback device.
  • title 2204 , source 2208 , release date 2208 , text summaries 2206 and one or more images 2212 are provided to describe each video segment in the listing 2202 .
  • any of a variety of information can be presented to a user via a user interface to provide information concerning a video segment being played back on another playback device and/or related video segments.
  • A screen shot of a “second screen” user interface generated by a tablet computing device enabling control of playback of video segments identified in a personalized playlist on another playback device in accordance with an embodiment of the invention is illustrated in FIG. 22B .
  • the user interface 2252 includes information ( 2204 - 2212 ) describing related videos and a set of controls 2252 that can be utilized to control playback of video segments identified in a personalized playlist on another playback device.
  • any of a variety of user interfaces can be generated using numerous techniques based upon personalized playlists obtained from playlist generation systems as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • appropriate user interfaces can be generated for wearable computing devices including (but not limited to) augmented reality headsets, and smart watches.
  • user interactions with a user interface and the user's viewing history can be logged into a database to update and/or infer user preferences.
  • logged user interactions can be analyzed to refine the manner in which future recommendations are generated. Processes for collecting and analyzing information concerning user interactions with video segments in accordance with embodiments of the invention are discussed further below.
  • the user interaction information that can be logged by a personalized playlist generation system in accordance with embodiments of the invention is typically only limited by the user interface generated by a playback device and the input modalities available to the playback device.
  • An example of a user interaction log generated based upon user interactions with a user interface generated to enable playback of video segments identified within a personalized playlist in accordance with an embodiment of the invention is illustrated in FIG. 23 .
  • the log includes information concerning video segments played by the user, the duration of playback, reordering of videos, and other interactions related to the playback experience, such as volume control and display of closed caption text.
  • information concerning playback of video segments can be utilized to obtain metrics indicative of user interest such as (but not limited to) the percentage of a video segment played back.
  • the illustrated log also includes information concerning user mouse activity such as mouse over events.
  • any manner in which a user interacts with a user interface can be logged, and/or a subset of interactions can be logged as appropriate to the needs of a specific playlist generation system, including (but not limited to) user interactions indicating sentiment (e.g. "like" or "dislike"), sharing of content, skipping of content, rearranging and/or deleting video segments from a playlist, and percentage of a video segment watched.
  • playlist generation considers some or all user interactions contained within a log file and techniques including (but not limited to) linear regressions can be utilized to determine weighting parameters to apply to each category of user interactions considered during playlist generation.
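  • As a non-limiting illustration of how such weighting parameters might be derived, the following sketch fits per-category weights to a watch-fraction target with an ordinary least-squares regression; the feature names and toy values are assumptions, not actual logged data:

```python
import numpy as np

# Each row summarizes one (user, video segment) pair from the interaction log;
# the columns are hypothetical interaction categories.
features = np.array([
    # liked, shared, skipped, reordered, captions_on
    [1, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
], dtype=float)

# Proxy for interest: fraction of the segment that was actually watched.
fraction_watched = np.array([0.95, 0.80, 0.10, 0.60])

# Ordinary least-squares fit; the coefficients can serve as per-category
# weights applied to logged interactions during playlist generation.
weights, *_ = np.linalg.lstsq(features, fraction_watched, rcond=None)
print(dict(zip(["liked", "shared", "skipped", "reordered", "captions_on"],
               weights.round(3))))
```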
  • any of a variety of techniques can be utilized to consider user history as appropriate to the requirements of specific applications.
  • the ability to identify related video segments enables the generation of summaries of a number of related video segments or news briefs.
  • Text data extracted from video segments in the form of closed caption or subtitle data, or through use of automatic speech recognition, can be utilized to identify sentences that include keywords that are not present in related video segments.
  • the portions of some or all of the related video segments in which the sentences containing the “unique” keywords occur can then be combined to provide a summary of the related video segments.
  • the news brief can be constructed in time sequence order so that the news brief provides a sense of how a particular story evolved over time.
  • the video segments that are combined can be filtered based upon factors including (but not limited to) user preferences and/or proximity in time. In other embodiments, any of a variety of criteria can be utilized in the filtering and/or ordering of related video segments in the creation of a video summary sequence.
  • the process 2400 includes ( 2402 ) identifying related video segments and identifying ( 2404 ) unique keywords related to the video segments.
  • the unique keywords are extracted from text data contained within the video segment and/or through the use of automatic speech recognition.
  • timestamps are associated with the keywords and a portion of the video segment such as (but not limited to) a sentence can be extracted ( 2406 ) from at least some of the related video segments.
  • the extracted portions of the video segments can then be combined ( 2410 ) and encoded to create a video segment that is a summary of all of the related video segments.
  • any of a variety of criteria can be utilized to determine the ordering of the portions of video segments and/or to filter the portions of video segments that are included in the video summary as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
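  • A minimal sketch of this summary-assembly step is shown below; the data model (sentences with timestamps and keyword sets grouped by segment) is an assumption made purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Sentence:
    segment_id: str
    start: float        # seconds from the start of the parent segment
    end: float
    keywords: set       # keywords extracted from captions, subtitles, or ASR
    published: float    # publication timestamp of the parent segment

def build_news_brief(sentences_by_segment):
    """Select sentences whose keywords do not appear in any related segment,
    then order the clips by publication time so the brief follows the story."""
    clips = []
    for seg_id, sentences in sentences_by_segment.items():
        other_keywords = set()
        for other_id, group in sentences_by_segment.items():
            if other_id != seg_id:
                for sentence in group:
                    other_keywords |= sentence.keywords
        for sentence in sentences:
            if sentence.keywords - other_keywords:   # carries "unique" keywords
                clips.append(sentence)
    clips.sort(key=lambda s: s.published)            # time-sequence order
    # each (segment_id, start, end) triple would then be cut out and re-encoded
    return [(c.segment_id, c.start, c.end) for c in clips]
```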
  • the techniques described above for annotating video segments and utilizing the annotations to generate indexes relating keywords to video segments are not limited to the generation of personalized playlists, but can be utilized in a myriad of applications including the provision of a video search engine service.
  • a system for accessing video segments utilizing a video search engine service in accordance with an embodiment of the invention is illustrated in FIG. 25 .
  • the system 2500 includes a video search engine server system 2502 that is configured to crawl various servers including (but not limited to) content distribution networks 2508 , web servers 2510 , and social media server systems 2512 , 2514 to identify video segments.
  • the video search engine server can annotate the identified video segments using keyword and/or image metadata extracted from the video segment and/or from additional data sources identified as relevant utilizing processes similar to those described above with reference to FIGS. 11-18 .
  • the metadata annotations can be stored in a database 2516 and utilized to generate an inverted index relating keywords to identified video segments.
  • the video search engine server system 2502 can then utilize the inverted index to identify video segments in response to a search query received from a user device 2518 via a network connection 2520 .
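  • A minimal sketch of the indexing and lookup steps is given below; the data shapes (segment ids mapped to keyword lists) and the AND-style query semantics are illustrative assumptions rather than the claimed implementation:

```python
from collections import defaultdict

def build_inverted_index(annotations):
    """annotations: {segment_id: keywords gathered from the segment itself
    and from any linked additional sources of relevant data}."""
    index = defaultdict(set)
    for segment_id, keywords in annotations.items():
        for keyword in keywords:
            index[keyword.lower()].add(segment_id)
    return index

def text_query(index, query):
    """Return segment ids matching every keyword in the query."""
    postings = [index.get(term.lower(), set()) for term in query.split()]
    return set.intersection(*postings) if postings else set()

# Example:
# index = build_inverted_index({"seg-1": ["election", "senate"], "seg-2": ["senate"]})
# text_query(index, "senate election")  ->  {"seg-1"}
```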
  • the techniques described above for identifying the presence of image portions within a frame of a video segment can be utilized to provide a video search service that can accept images and/or video sequences as search query inputs.
  • A multi-modal video search engine server system in accordance with an embodiment of the invention is illustrated in FIG. 26 . The multi-modal video search engine server system 2600 includes a processor 2610 in communication with volatile memory 2620 , non-volatile memory 2630 , and a network interface 2640 .
  • the non-volatile memory 2630 includes an indexing application 2632 that configures the processor 2610 to annotate video segments with metadata 2622 describing the content of the video segment and generate an inverted index 2624 relating video segments to keywords.
  • the indexing application 2632 configures the processor 2610 to extract metadata from textual analysis of text data contained within a video segment and visual analysis of video data contained within the video segment.
  • the indexing application 2632 configures the processor 2610 to identify additional sources of relevant data that can be used to annotate the video segment based upon textual and visual comparisons of the video segment and sources of additional data.
  • any of a variety of techniques including (but not limited to) manual annotation of video segments can be utilized to associate metadata with individual video segments.
  • the non-volatile memory 2630 can also contain a search engine application 2634 that configures the processor 2610 to generate a user interface via which a user can provide a search query.
  • a search query can be in the form of a text string, an image, and/or a video sequence.
  • the search engine application can utilize the inverted index to identify video segments relevant to text queries and can utilize the processes described above for locating image portions within frames of video to identify video segments relevant to images and/or video segments provided as search queries.
  • relevant video segments can also be found by comparing query images or frames to images or frames of video obtained from additional data sources known to be relevant to one or more video segments.
  • text data can be extracted from images and/or video sequences provided as search queries to the search engine application and a multi-modal search can be performed utilizing the extracted text and searches for portions of images within frames of indexed video segments.
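  • The image branch of such a multi-modal query might be combined with the inverted index roughly as sketched below; `frame_matcher` stands in for the frame/image matching processes referenced above, and the other parameter names are assumptions:

```python
def multimodal_search(query_image, frame_matcher, segment_keywords, inverted_index):
    """Find segments whose frames visually match the query image, borrow their
    keywords, and use those keywords to pull further candidates from the
    inverted index.  Candidates would then be scored and ranked."""
    matched_ids = frame_matcher(query_image)            # visually matching segments
    expanded_keywords = set()
    for seg_id in matched_ids:
        expanded_keywords |= set(segment_keywords.get(seg_id, ()))
    candidates = set(matched_ids)
    for keyword in expanded_keywords:
        candidates |= inverted_index.get(keyword, set())
    return candidates
```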
  • identification of a video segment can also be utilized to identify other relevant video segments using the processes for identifying relationships between video segments described above with reference to FIG. 18 .
  • the functions of crawling, indexing, and responding to search queries can be distributed across a number of different servers in a video search engine server system.
  • the size of the database(s) utilized to store the metadata annotations and/or the inverted index may be sufficiently large as to necessitate the splitting of the database table across multiple computing devices utilizing techniques that are well known in the provision of search engine services. Accordingly, although specific architectures for providing online video search engine services are described above with reference to FIGS. 25 and 26 , any of a variety of system implementations can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • A process for generating multi-modal video search engine results in accordance with an embodiment of the invention is illustrated in FIG. 27 .
  • a set of video segments is provided and/or obtained by crawling video sources and the process 2700 identifies ( 2702 ) keywords related to the video segments using text and visual analysis of the video segments.
  • the identified keywords can be utilized to generate ( 2704 ) an inverted index mapping keywords to video segments.
  • when a search query is received, keywords can be extracted from text, an image, and/or a video sequence provided as part of the search query, and the keywords are used to identify ( 2708 ) relevant videos from the inverted index.
  • a search can also be performed for one or more image portions within the frames of the indexed video segments.
  • the relevancy of the identified video segments can be scored ( 2710 ) and search results including a listing of one or more video segments can be returned.
  • the process of annotating the video segments includes identifying additional sources of relevant data, and links to the additional sources of relevant data and/or excerpts of relevant data can be returned with the search results.
  • video segments are scored based upon a variety of factors including the number of related stories. Analysis of news story video segments reveals that related stories tend not to form fully connected graphs. Therefore, the number of related video segments (stories) can be indicative of the importance of the video segment. Time can also be an important factor: the number of related video segments published within a predetermined time period can provide an even stronger indication of the relevance of a story to a particular query. In several embodiments, the relevance of a video segment to a search query can also be ranked based upon common keywords, frequency of common keywords, and/or common images.
  • a search query that includes an image, video sequence, and/or URL can be related to sources of additional data including (but not limited to) other video segments, and/or online articles.
  • the sources of additional data can be utilized to perform keyword expansion and the expanded set of keywords utilized in scoring the relevance of a specific video segment to the search query.
  • search result scores can be personalized based upon similar factors to those discussed above with respect to the generation of personalized video playlists. In this way, the most relevant search result for a specific user can be informed by factors including (but not limited to) a user's preferences with respect to content source, anchor people, and/or actors.
  • video search results can be scored and/or personalized in any of a variety of ways appropriate to the requirements of specific applications.
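  • One way these factors might be combined into a single score is sketched below; the weights, field names, and linear form are illustrative assumptions rather than the claimed scoring method:

```python
def relevance_score(segment, query_keywords, related_total, related_recent,
                    user_prefs, weights=(1.0, 0.5, 2.0, 1.0)):
    """Combine common-keyword frequency, the overall number of related stories,
    the number of related stories published within a predetermined time period,
    and a per-source user preference into a single relevancy score."""
    w_keywords, w_related, w_recent, w_pref = weights
    common = set(query_keywords) & set(segment["keywords"])
    keyword_term = sum(segment["keyword_freq"].get(k, 0) for k in common)
    preference_term = user_prefs.get(segment["source"], 0.0)
    return (w_keywords * keyword_term
            + w_related * related_total
            + w_recent * related_recent
            + w_pref * preference_term)
```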
  • analytics are collected ( 2712 ) concerning user interactions with video segments selected by users.
  • metrics including (but not limited to) percentage of playback duration watched can be utilized to infer information concerning the relevancy of the video segment to the search query and update ( 2714 ) relevance parameters associated with an indexed video by a video search engine service.
  • any of a variety of analytics can be collected and utilized to improve the performance of the search results in accordance with embodiments of the invention.
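  • As one hedged example of feeding such analytics back into the index, the stored relevance parameter could be nudged toward the observed watch fraction with an exponential moving average; the learning rate below is an arbitrary illustrative value:

```python
def update_relevance(current_relevance, fraction_watched, learning_rate=0.1):
    """Blend the fraction of the segment actually watched into the stored
    relevance parameter associated with the indexed video segment."""
    return (1.0 - learning_rate) * current_relevance + learning_rate * fraction_watched

# update_relevance(0.5, 0.9) -> 0.54
```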

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Security & Cryptography (AREA)
  • Astronomy & Astrophysics (AREA)
  • Library & Information Science (AREA)
  • Marketing (AREA)
  • Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Systems and methods are described that can provide users with personalized video content feeds. In several embodiments, a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream. In a number of embodiments, video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. The additional information obtained by linking a video segment to an additional source of data can be utilized in the generation of personalized playlists. In the context of news programming, the dynamic mixing and aggregation of news videos from multiple sources can greatly enrich the news watching experience. In several embodiments, processes for linking video segments to additional sources of data can be implemented as part of a video search engine service.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The current application claims priority to U.S. Provisional Patent Application No. 61/978,988, filed Apr. 14, 2014, entitled “Systems and Methods for Generating Personalized Video Playlists”, to Chen et al., the disclosure of which is incorporated herein by reference in its entirety.
  • FIELD OF THE INVENTION
  • The present invention relates generally to video distribution systems and more specifically to generation of video recommendations based upon user preferences.
  • BACKGROUND
  • News aggregation sites such as the Google News service provided by Google, Inc. of Mountain View, Calif. and the Yahoo News service provided by Yahoo, Inc. of Sunnyvale, Calif. have garnered significant attention in recent years. These services provide a user interface via which users can customize the types of news stories they want to read. Furthermore, the sites can progressively learn each user's preferences from their reading history to improve future selections.
  • A great deal of news information is distributed in the form of video content. Although the term “video content” references video information, the term is typically utilized to encompass a combination of video, audio, and text data. In many instances, video content can also include and/or reference sources of metadata. While video news has traditionally been broadcast over-the-air or transmitted via cable networks, video content is increasingly being distributed via the Internet. Therefore, video news stories can be obtained from a variety of sources.
  • SUMMARY OF THE INVENTION
  • Next-generation media consumption is likely to be more personalized, device agnostic, and pooled from many different sources. Systems and methods in accordance with embodiments of the invention can provide users with personalized video content feeds providing the video content that matters most to them. In several embodiments, a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream. In a number of embodiments, video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. The additional information obtained by linking a video segment to an additional source of data, such as an online article, can be utilized in the generation of personalized video playlists for one or more users. In several embodiments, the personalized video playlists are utilized to playback video segments via a television, personal computer, tablet computer, and/or mobile device such as (but not limited to) a smartphone, or a media player. In many embodiments, viewing histories and user interactions can be utilized to continuously optimize the personalization. In the context of video streams containing news programming, the dynamic mixing and aggregation of news videos from multiple sources can greatly enrich the news watching experience by providing more comprehensive coverage and varying perspectives. In several embodiments, processes for linking video segments to additional sources of data can be implemented as part of a video search engine service that constructs indexes including inverted indexes relating keywords to video segments to facilitate the retrieval of video segments relevant to a search query.
  • One embodiment includes a video search engine server system, including: at least one processor; and memory containing an indexing application and a search engine application. In addition, the indexing application configures at least one processor to: identify a set of video segments; extract text data from a selected video segment in the set of video segments and use keywords from the extracted text data to identify candidate sources of relevant data based upon keywords contained within the candidate sources of relevant data; and identify images from the candidate sources of relevant data, where at least a portion of the image matches at least a portion of a frame of video from within the selected video segment; identify additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, and the identified images; and generate an inverted index of video segments in the set of video segments that are relevant to specific keywords using the extracted keywords and the keywords contained within the additional sources of relevant data. Furthermore, the search engine application configures at least one processor to: receive a search query; identify video segments from the set of video segments that are relevant to the search query using the inverted index; score the relevancy of the identified video segments to the search query; and generate search results identifying at least one video segment relevant to the search query.
  • In a further embodiment, the search query is a text string, and the search engine application configures at least one processor to extract query keywords from the text string and identify relevant video segments using the inverted index based upon the extracted query keywords.
  • In another embodiment, the search query is an image, and the search engine application configures at least one processor to identify frames from video segments in the set of video segments, where at least a portion of the image matches at least a portion of a frame of video from within a given video segment from the set of video segments.
  • In a still further embodiment, the search engine application configures at least one processor to: identify keywords relevant to the image based upon the keywords that are relevant to video segments from the set of video segments that include at least one frame in which at least a portion of the image matches at least a portion of the frame; and identify relevant video segments using the inverted index based upon the keywords identified as relevant to the image.
  • In still another embodiment, the search query is a video segment; and the search engine application configures at least one processor to identify frames from video segments in the set of video segments, where at least a portion of a frame from the query video segment matches at least a portion of a frame of video from within a given video segment from the set of video segments.
  • In a yet further embodiment, the search engine application configures at least one processor to: identify keywords relevant to the query video segment based upon the keywords that are relevant to video segments from the set of video segments that include at least one frame in which at least a portion of the frame matches at least a portion of a frame from the query video segment; and identify relevant video segments using the inverted index based upon the keywords identified as relevant to the query video segment.
  • In yet another embodiment, the indexing application further configures at least one processor to identify additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, the identified images, and timestamps associated with the selected video segment and the candidate sources of relevant data.
  • In a further embodiment again, the indexing application further configures at least one processor to use keywords from the extracted text data to identify candidate sources of relevant data based upon keywords contained within the candidate sources of relevant data based upon bag-of-words histogram comparisons that enable matching of text segments from the extracted text data with similar distributions of words in a candidate source of relevant data.
  • In another embodiment again, the indexing application further configures at least one processor to calculate a term frequency-inverse document frequency (tf-idf) histogram intersection score (S(Ha, Hb)) as follows:
  • $$S(H_a, H_b) = \sum_{w} \operatorname{idf}(w)\cdot\min\bigl(H_a(w), H_b(w)\bigr), \qquad \operatorname{idf}(w) = \log\Bigl(\max_{x} f(x)\Bigr) - \log\bigl(f(w)\bigr)$$
  • where $H_a(w)$ and $H_b(w)$ are the $L_1$ normalized histograms of the words in the two sets of words; and
  • $\{f(w)\}$ is the set of estimated relative word frequencies.
  • In a further additional embodiment, the indexing application further configures at least one processor to determine that a candidate source of relevant data is an additional source of relevant data when the tf-idf histogram intersection score (S(Ha, Hb)) exceeds a predetermined threshold.
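  • For illustration only, the score above can be computed from two bags of words as in the sketch below; the fallback frequency for out-of-vocabulary words and the example threshold are assumptions, not values taken from the disclosure:

```python
from collections import Counter
from math import log

def tfidf_histogram_intersection(words_a, words_b, rel_freq):
    """Compute S(Ha, Hb) from two word lists and a table of estimated
    relative word frequencies {f(w)}."""
    def l1_histogram(words):
        counts = Counter(words)
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    h_a, h_b = l1_histogram(words_a), l1_histogram(words_b)
    max_f = max(rel_freq.values())
    min_f = min(rel_freq.values())           # fallback for unseen words
    score = 0.0
    for w in set(h_a) & set(h_b):            # min(Ha, Hb) is zero elsewhere
        idf = log(max_f) - log(rel_freq.get(w, min_f))
        score += idf * min(h_a[w], h_b[w])
    return score

# e.g. treat a candidate article as an additional source of relevant data
# when the score exceeds a tuned threshold:
# tfidf_histogram_intersection(segment_words, article_words, rel_freq) > 0.2
```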
  • In another additional embodiment, the indexing application further configures at least one processor to: identify named entities within the text data extracted from the selected video segment; and determine that a candidate source of relevant data is an additional source of relevant data when a predetermined number of named entities are present within both the candidate source of relevant data and the text data extracted from the selected video segment.
  • In a still yet further embodiment, the indexing application further configures at least one processor to identify additional named entities by performing object recognition.
  • In still yet another embodiment, the indexing application further configures at least one processor to identify candidate sources of relevant data by providing at least some of the keywords extracted from the selected video segment to a search engine.
  • In a still further embodiment again, the indexing application further configures at least one processor to identify a title from text extracted from at least one frame of video from a selected video segment and identify candidate sources of relevant data and the keyword provided to the search engine is the extracted title.
  • In still another embodiment again, the indexing application further configures at least one processor to identify at least a portion of an image from a candidate source of relevant data that matches at least a portion of a frame of video from within the selected video segment by determining that a given frame of video contains a region that includes a geometrically and photometrically distorted version of a portion of an image obtained from the candidate source of relevant data.
  • In a still further additional embodiment, the indexing application configures at least one processor to identify relationships between individual video segments in the set of video segments, and the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of related video segments in the set of video segments.
  • In still another additional embodiment, timestamps are associated with the video segments in the set of video segments, and the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of related video segments in the set of video segments with associated timestamps that are within a predetermined time period.
  • In a yet further embodiment again, the indexing application configures at least one processor to identify whether video segments are related based upon keywords associated with the video segments.
  • In yet another embodiment again, the indexing application configures at least one processor to calculate a term frequency-inverse document frequency (tf-idf) histogram intersection score (S(Ha, Hb)) for the keywords associated with the two video segments as follows:
  • $$S(H_a, H_b) = \sum_{w} \operatorname{idf}(w)\cdot\min\bigl(H_a(w), H_b(w)\bigr), \qquad \operatorname{idf}(w) = \log\Bigl(\max_{x} f(x)\Bigr) - \log\bigl(f(w)\bigr)$$
  • where $H_a(w)$ and $H_b(w)$ are the $L_1$ normalized histograms of the words in the two sets of words; and
  • $\{f(w)\}$ is the set of estimated relative word frequencies.
  • In a yet further additional embodiment, the indexing application configures at least one processor to determine that a first video segment is related to a second video segment when the term frequency-inverse document frequency (tf-idf) histogram intersection score exceeds a first threshold and the number of named entities associated with each of the video segments exceeds a second threshold.
  • In yet another additional embodiment, the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of common keywords.
  • In a further additional embodiment again, the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the frequency of the common keywords with respect to the specific video segment.
  • In another additional embodiment again, the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of images from the search query, where at least a portion of the image matches at least a portion of a frame from the specific video segment.
  • In a still yet further embodiment again, the search engine application configures at least one processor to weight the relevancy score of a specific video segment based upon user preferences.
  • In still yet another embodiment again, the search results also include links to additional sources of relevant data that are relevant to the relevant video segments identified in the search results.
  • An embodiment of the method of the invention includes: identifying a set of video segments using a video search engine server system; extracting text data from a selected video segment in the set of video segments using the video search engine server system; identifying candidate sources of relevant data using the video search engine server system based upon keywords contained within the candidate sources of relevant data and keywords from the extracted text data; and identifying images from the candidate sources of relevant data using the video search engine server system, where at least a portion of the image matches at least a portion of a frame of video from within the selected video segment; identifying additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, and the identified images; and generating an inverted index of video segments in the set of video segments that are relevant to specific keywords using the video search engine server system based upon the extracted keywords and the keywords contained within the additional sources of relevant data; receiving a search query using the video search engine server system; identifying video segments from the set of video segments that are relevant to the search query using the video search engine server system based upon the inverted index; scoring the relevancy of the identified video segments to the search query using the video search engine server system; and generating search results identifying at least one video segment relevant to the search using the video search engine server system.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow chart that conceptually illustrates a process for generating a personalized playlist of video segments in accordance with an embodiment of the invention.
  • FIG. 2 is a system diagram that conceptually illustrates a system for generating personalized playlists, distributing video segments to users based upon the personalized playlists, and collecting analytic data based upon user interactions with the video segments during playback in accordance with an embodiment of the invention.
  • FIG. 3 is a flowchart illustrating a process for generating personalized playlists, distributing video segments to users based upon the personalized playlists, and collecting analytic data based upon user interactions with the video segments during playback in accordance with an embodiment of the invention.
  • FIG. 4 is a system diagram that conceptually illustrates a system for recording video segments from cable and over-the-air television broadcasts in accordance with an embodiment of the invention.
  • FIG. 5A is a system diagram that conceptually illustrates a multi-modal video data stream segmentation system in accordance with an embodiment of the invention.
  • FIG. 5B is a flowchart illustrating a process for performing multi-modal segmentation of a video data stream in accordance with an embodiment of the invention.
  • FIG. 6 is a flowchart illustrating a process for detecting text segmentation cues in a video data stream in accordance with an embodiment of the invention.
  • FIG. 7A conceptually illustrates the location of a face within a frame of video as part of a video segmentation process in accordance with an embodiment of the invention.
  • FIG. 7B is a flowchart illustrating a process for detecting an anchor frame segmentation cue in accordance with an embodiment of the invention.
  • FIG. 8A conceptually illustrates the matching of a logo image to content within a frame of video in accordance with an embodiment of the invention.
  • FIGS. 8B and 8C conceptually illustrate the identification of a transition animation segmentation cue in accordance with an embodiment of the invention.
  • FIG. 9 is a flowchart illustrating a process for identifying a logo and/or transition animation segmentation cue in accordance with an embodiment of the invention.
  • FIG. 10 is a system diagram that conceptually illustrates a playlist generation server in accordance with an embodiment of the invention.
  • FIG. 11 conceptually illustrates a process for matching video segments to additional sources of data by matching visual and/or text features of the video segments to relevant additional data sources in accordance with an embodiment of the invention.
  • FIG. 12 is a flowchart that illustrates a process for identifying sources of additional data that are relevant to a video segment using text analysis in accordance with an embodiment of the invention.
  • FIGS. 13A-13D conceptually illustrate extraction of metadata concerning a video segment by detecting and recognizing text contained within frames of the video segment in accordance with embodiments of the invention.
  • FIG. 14 is a flowchart illustrating a process for obtaining metadata concerning a video segment and/or identifying relevant sources of additional data based upon text extracted from one or more frames of video in accordance with an embodiment of the invention.
  • FIG. 15 conceptually illustrates a process for obtaining metadata concerning a video segment by performing face recognition in accordance with an embodiment of the invention.
  • FIG. 16 is a flowchart illustrating a process for obtaining metadata concerning a video segment and/or identifying relevant sources of additional data by performing face recognition in accordance with an embodiment of the invention.
  • FIG. 17 is a flowchart illustrating a process for generating a personalized playlist based upon a set of video segments, user preferences, and/or a user's viewing history in accordance with an embodiment of the invention.
  • FIG. 18 is a flowchart illustrating a process for identifying related video segments in accordance with an embodiment of the invention.
  • FIG. 19 is a system diagram that conceptually illustrates a playback device configured to retrieve a personalized playlist and select video segments for playback utilizing the personalized playlist in accordance with an embodiment of the invention.
  • FIG. 20A conceptually illustrates a user interface generated by a playback device using a personalized playlist in accordance with an embodiment of the invention.
  • FIG. 20B conceptually illustrates a user interface generated by a playback device that enables a user to specify a preferred duration and user preferences with respect to specific categories, sources of video content, and/or keywords in accordance with an embodiment of the invention.
  • FIG. 21A conceptually illustrates a user interface generated by a playback device that employs a gesture based user interface during playback of a video segment in accordance with an embodiment of the invention.
  • FIG. 21B conceptually illustrates a user interface generated by a playback device that employs a gesture based user interface displaying available channels of video segments in accordance with an embodiment of the invention.
  • FIG. 22A conceptually illustrates a “second screen” user interface generated by a playback device that provides information concerning related video segments to a video segment being played back on another playback device in accordance with an embodiment of the invention.
  • FIG. 22B conceptually illustrates a “second screen” user interface generated by a playback device that provides information concerning related video segments to a video segment being played back on another playback device and playback controls that can be utilized by a user to control playback of video segments on another playback device in accordance with an embodiment of the invention.
  • FIG. 23 conceptually illustrates a log file maintained by a playlist generation server based upon user interactions with video segments accessed via a playback device in accordance with an embodiment of the invention.
  • FIG. 24 is a flowchart illustrating a process for generating a summary of video segments by combining portions of video segments based upon the content of the portions of the video segments in accordance with an embodiment of the invention.
  • FIG. 25 is a system diagram that conceptually illustrates a multi-modal video search engine system in accordance with an embodiment of the invention.
  • FIG. 26 is a system diagram that conceptually illustrates a multi-modal video search engine server system in accordance with an embodiment of the invention.
  • FIG. 27 is a flowchart illustrating a process for retrieving video segments relevant to a search query in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION
  • Turning now to the drawings, systems and methods for generating personalized video playlists for video content aggregated from a variety of content sources in accordance with embodiments of the invention are illustrated. In many embodiments, data streams of video content are aggregated from various sources. Relationships are identified between various segments of the video content and/or between segments of the video content and other relevant sources of information including (but not limited to) metadata databases, web pages and/or social media services. Relevant information concerning the video segments can then be utilized to generate personalized playlists of video content based upon each user's viewing history and preferences. Users can then utilize the playlists to playback segments of video content via any of a variety of playback devices. In a number of embodiments, the user interface presented to the user via the playback device and/or via a second screen can display and/or provide users with links to information related to the displayed video segment.
  • Online sources of video content, such as news websites, typically provide video content in individual segments. By contrast, traditional broadcast sources typically provide video content in continuous streams. In many embodiments, the process of aggregating video content from various sources can include segmentation of continuous data streams of video content. In the context of a news personalization service, the streams of video content can be segmented into individual news stories. In other contexts, the streams of video content can be segmented in accordance with other criteria including (but not limited to) commercial breaks, repeated events, slow motion sequences, camera shots, sentences, and/or anchor frames. In the specific context of sporting events, repeated sequences, slow motion sequences, and shots of the crowd are often indicative of important activity and can be utilized as segmentation boundaries. In addition, certain camera angles are typically utilized to capture video of important regions of a sports field. Therefore, camera angle can also be utilized as a segmentation boundary. As can readily be appreciated, any of a variety of segmentation cues can be utilized to identify specific segmentation boundaries that are appropriate to the requirements of a given application. In a number of embodiments, the segmentation process is a multi-modal segmentation process that detects segmentation cues in video, audio, and/or text data available in the data stream. Multi-modal segmentation processes in accordance with certain embodiments of the invention utilize specific text segmentation cues contained within closed caption text data. In a number of embodiments, specific video segmentation cues such as the recognition of a recurring face (e.g. an anchorperson), and/or recurring logo or logo animation are utilized to assist video segmentation. In other embodiments, any of a variety of segmentation techniques can be utilized as appropriate to the requirements of specific applications.
  • In a number of embodiments, segments of video content are analyzed to identify links between the segments and other relevant sources of information including (but not limited to) metadata databases, web pages and/or short messages posted via social media services such as the Facebook service provided by Facebook, Inc. of Menlo Park, Calif. and the Twitter service provided by Twitter, Inc. of San Francisco, Calif. In several embodiments, a multi-modal search for relevant additional data sources is performed that utilizes textual analysis and visual analysis of the video segments to identify relevant sources of additional data. In a number of embodiments, the textual analysis involves extracting keywords from text data such as closed caption and/or subtitles. The extracted keywords can then be utilized to locate relevant text data. In certain embodiments, the visual analysis involves recognizing elements within individual frames of video such as (but not limited to) text, faces, images and/or image patterns (e.g. clothing, scene background). In several embodiments, visual analysis can also involve object detection and/or detection of specific object events (e.g. gestures or specific object movements). Text and faces of named entities can be extracted as metadata describing the video segment and utilized to locate sources of relevant text data. In several embodiments, some or all of a frame of video can be compared to images related to additional sources of data and matching images used to identify relevant sources of additional data. In other embodiments, any of a variety of text and/or visual analysis can be performed to identify relevant sources of additional information.
  • In a number of embodiments, a multi-modal video search engine service is provided that creates an index of video segments that are relevant to specific keywords based upon relevant keywords identified through the textual and visual analysis of the video segments. In several embodiments, the list of relevant keywords for a particular video segment can be expanded by identifying keywords in additional sources of data identified through the textual and visual analysis of the video segment. Once generated, the index can be utilized to generate a list of video segments that are relevant to a text search query. In several embodiments, an image, a video segment, and/or a Uniform Resource Locator (URL) identifying a data source such as (but not limited to) an image, a video sequence, a web page, and/or an online article can be provided as an input to the search engine (as opposed to a text query) to generate a list of related video segments. In other embodiments, any of a variety of multi-modal search engine services can be implemented as appropriate to the requirements of specific applications.
  • With specific regard to the generation of personalized playlists, the ability to identify related video segments can be useful in generating a playlist having a specified duration that provides the greatest coverage of the content of a set of video segments. The ability to identify related and/or duplicate content in a set of video segments can be utilized in the selection of video segments to include in a playlist. In the context of news stories, a personalized playlist can be constructed by selecting video segments of news stories that provide the greatest coverage of the stories taking into consideration an individual user's preferences concerning factors such as (but not limited to) content source, content category, anchorperson and/or any other factors appropriate to specific applications. As discussed further below, many embodiments of the invention utilize an integer linear programming optimization or a suitable approximate solution that employs an objective function that weighs both content coverage and user preferences in the generation of a personalized playlist. However, any of a variety of techniques for recommending video segments can be utilized in accordance with embodiments of the invention including (but not limited to) processes that generate playlists using video segments that do not contain cumulative content.
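  • A greedy approximation of this coverage-and-preference objective under a duration budget might look like the following sketch; the scoring callables and field names are assumptions, and a full integer linear program would replace this heuristic where an exact solution is required:

```python
def greedy_playlist(segments, coverage_gain, preference, max_duration):
    """Repeatedly pick the segment with the best marginal (coverage + preference)
    gain per second of playback until the requested duration is exhausted.
    `coverage_gain(segment, chosen)` rewards stories not yet covered and
    `preference(segment)` reflects the user's preferences."""
    chosen, remaining = [], max_duration
    candidates = [s for s in segments if s["duration"] <= max_duration]
    while candidates:
        best = max(candidates,
                   key=lambda s: (coverage_gain(s, chosen) + preference(s)) / s["duration"])
        candidates.remove(best)
        if best["duration"] <= remaining:
            chosen.append(best)
            remaining -= best["duration"]
    return chosen
```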
  • Systems and methods for generating personalized video playlists, performing multi-modal video data stream segmentation, and generating video search results using multi-modal analysis of video segments in accordance with embodiments of the invention are discussed further below.
  • Playlist Generation Systems
  • Playlist generation systems in accordance with embodiments of the invention perform multi-modal analysis of video segments to generate personalized playlists based upon factors including (but not limited to) a user's preferences, and/or viewing history. In a number of embodiments, the user's preferences can touch upon topic, content provider, and total playlist duration. A playlist generation system configured to generate personalized playlists of news stories in accordance with an embodiment of the invention is conceptually illustrated in FIG. 1. The playlist generation system 100 obtains video data streams and video segments from a variety of sources including (but not limited to) over-the-air broadcasts and cable television transmissions (102), online news websites (104), and social media services (106). In several embodiments, continuous data streams such as (but not limited to) over-the-air broadcasts and cable television transmissions (102) are segmented and the video segments stored for later retrieval. In a number of embodiments, a multi-modal segmentation process is utilized that considers a variety of video, audio, and/or text cues in the determination of segmentation boundaries. In other embodiments, the system only sources previously segmented video. In other embodiments, any of a variety of segmentation processes can be utilized as appropriate to the requirements of specific applications. Segmentation processes that are utilized by various playlist generation systems in accordance with embodiments of the invention are described further below.
  • The playlist generation system 100 analyzes and indexes (108) the video segments. In several embodiments, a multi-modal process that performs textual and visual analysis is utilized to analyze and index the video segments. In a number of embodiments, the multi-modal process identifies keywords from text sources within the video segment including (but not limited to) closed caption, and subtitles. Keywords can also be extracted based upon text recognition, and object recognition. In certain embodiments, various object recognition processes are utilized including facial recognition processes to identify named entities. The set of keywords associated with a video segment can then be utilized to identify additional sources of data. Examples of additional sources of data include (but are not limited to) online articles and websites, and posting to social media services. In certain embodiments, comparisons can be performed between frames of a video segment and images associated with additional sources of data as an additional modality for determining the extent of the relevance of an additional source of data. In other embodiments, any of a variety of analysis and indexing processes can be utilized as appropriate to the requirements of specific applications. Analysis and indexing processes that are utilized by various playlist generation systems in accordance with embodiments of the invention are discussed further below.
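  • The textual part of this analysis can be as simple as the sketch below, which tokenizes closed caption text and keeps the most frequent non-stopword terms as candidate keywords; the stopword list and cut-offs are illustrative, and a production system would typically add the named-entity and object recognition steps described above:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "on",
             "that", "for", "with", "this", "are", "was", "be", "at", "by"}

def extract_keywords(caption_text, top_n=20):
    """Tokenize closed caption / subtitle text, drop stopwords and very short
    tokens, and keep the most frequent terms as candidate keywords."""
    tokens = re.findall(r"[a-z']+", caption_text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(top_n)]
```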
  • The indexed video segments can be utilized by the playlist generation system 100 to generate personalized playlists (110). Any of a variety of processes can be utilized to generate personalized playlists in accordance with embodiments of the invention. Several particularly effective processes for generating personalized playlists are described below. A number of embodiments are directed toward the generation of playlists in the context of news stories and select video segments that provide the greatest coverage of recent news stories in a manner that is informed by user preferences. In several embodiments, the selection process is further constrained by the need to generate a playlist having a playback duration that does not exceed a duration specified by the user.
  • Personalized playlists can be provided by the playlist generation system to playback devices. In a number of embodiments, the playlist can take the form of JSON playlist metadata. In other embodiments, any of a variety of data transfer techniques can be utilized including the creation of a top level index file such as (but not limited to) a SMIL file, or an MPEG-DASH file. Client applications on playback devices can generate a user interface (112) that enables the user to obtain and playback the video segments identified within the playlist. In many instances, the user may simply enable the playback device to continuously play through the playlist. In several embodiments, the user interface provides the user with the ability to select video segments, express sentiment toward video segments (e.g. like/dislike), skip video segments, reorder and/or delete video segments from the playlist, and share video segments via email, messaging services, and/or social media services. In a number of embodiments, the playlist generation system 100 logs user interactions via the user interface and uses the interactions to infer user preferences. In this way, the system can learn over time information about a user's preferences including (but not limited to) preferred content categories, content services, and/or anchorpeople. In a number of embodiments, playback devices can generate a so-called “second screen” user interface that can enable control of playback of a playlist on another playback device and/or provide information that complements a video segment and/or playlist being played back by another playback device. As can readily be appreciated, the specific user interface generated by a playback device is typically only limited by the capabilities of the playback device and the requirements of a specific application.
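  • The JSON playlist metadata delivered to a client application might resemble the following sketch; every field name and value here is an illustrative assumption rather than a defined schema:

```json
{
  "playlist_id": "user-123-briefing",
  "max_duration_seconds": 900,
  "segments": [
    {
      "title": "Example headline",
      "source": "Example News",
      "published": "2014-04-14T08:30:00Z",
      "duration_seconds": 95,
      "stream_url": "https://cdn.example.com/segments/abc123/index.m3u8",
      "keywords": ["example", "headline"],
      "related_articles": ["https://www.example.com/story"]
    }
  ]
}
```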
  • Although specific playlist generation systems are described above with reference to FIG. 1, any of a variety of playlist generation systems that produce playlists of video segments from multiple sources that are personalized based upon the preferences of individual users can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Personalized video distribution systems that utilize personalized playlists in the distribution of video content in accordance with various embodiments of the invention are discussed further below.
  • Personalized Video Distribution Systems
  • A video distribution system incorporating a playlist generation server system in accordance with an embodiment of the invention is illustrated in FIG. 2. The video distribution system 200 includes a playlist generation server system 202 that is configured to index video segments accessible via a content storage system 204, a content distribution network 206, web server systems 208 and/or social media server systems 210, 214. In a number of embodiments, the content storage system 204 contains video segments generated by a video segmentation system 212 that can segment and transcode continuous video data streams obtained from sources including (but not limited to) over-the-air broadcasts and cable television transmissions. Various processes that can be utilized to perform segmentation of continuous data streams in accordance with embodiments of the invention are discussed below.
  • Playlist generation server systems 202 in accordance with many embodiments of the invention utilize multi-modal analysis of video segments to identify additional relevant sources of data accessible via the content storage system 204, a content distribution network 206, a web server system 208 and/or a social media server system 210. In several embodiments, the playlist generation server system 202 annotates video segments with metadata extracted from the video segment and/or from additional sources of relevant data. The metadata describing the video segments can be stored in a database 216 and utilized to generate personalized playlists based upon user preferences that can also be stored in the database.
  • Playback client applications installed on a variety of playback devices 218 can be utilized to request personalized playlists from a playlist generation server system 202 via a network 220 such as (but not limited to) the Internet. The playback client applications can configure the playback devices 218 to display a user interface that enables a user to view and interact with the video segments identified in the user's personalized playlist. In a number of embodiments, the playlist generation server system and the playback devices can support multi-screen user interfaces. For example, a first playback device can be utilized to play back video segments identified in the playlist and a second playback device can be utilized to provide a “second screen” user interface enabling control of playback of video segments on the first playback device and/or additional information concerning the video segments and/or playlist being played back on the first playback device. In the illustrated embodiment, the playback devices 218 are personal computers and mobile phones. As can be readily appreciated, playback client applications can be created for any of a variety of playback devices including (but not limited to) network connected consumer electronics devices such as televisions, game consoles, and media players, tablet computers and/or any other class of device that is typically utilized to view video content obtained via a network connection.
  • Generating Personalized Playlists
  • A process for generating a personalized playlist of video segments drawn from different content sources based upon user preferences in accordance with an embodiment of the invention is illustrated in FIG. 3. The process 300 includes crawling (302) the websites of video content sources to identify new video segments. In a number of embodiments, the process of identifying new video segments also includes aggregating video data from a variety of sources including (but not limited to) over-the-air broadcasts and cable television transmissions. In embodiments where video data is aggregated, the aggregated video data may benefit from segmentation (304). The result of the crawling and/or aggregation of video data is typically a list of video segments that can be recommended to a given user.
  • In order to generate a playlist of video segments personalized to a user's preferences, the process 300 seeks to annotate the video segments with metadata describing the content of the segments. In a number of embodiments, a video segment linking process (306) is performed that seeks to identify additional sources of relevant data that describe the content of the video segment. In a number of embodiments, the video segment linking process (306) also seeks to identify relationships between video segments. In various contexts, including in the generation of personalized playlists of news stories, knowledge concerning the relationship between video segments can be useful in identifying video segments that contain cumulative content and can be excluded from a playlist without significant loss of information or content coverage. Information concerning the number of related stories can also provide an indication of the importance of the story.
  • Metadata describing a set of video segments can be utilized to generate (308) personalized playlists for one or more users. As is described in detail below, a variety of processes can be utilized in the generation of a personalized playlist based upon the metadata generated by process 300. In the context of news stories, a number of embodiments utilize an integer linear programming optimization and/or an approximation of an integer linear programming optimization that employs an objective function that weighs both content coverage including (but not limited to) measured trending topics (e.g. breaking news, or popular stories) and user preferences in the generation of a personalized playlist. Any of a variety of processes for recommending video segments can, however, be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • In many embodiments, video segments are streamed to playback devices. Many of the standards that exist for encoding video specify profiles, and playback devices are typically constructed in a manner that enables playback of content encoded in accordance with one or more of the profiles specified within the standard. The same profile may not, however, be suitable or desirable for playing back content on different classes of playback device. For example, mobile devices are typically unable to support playback of profiles designed for home theaters. Similarly, a network connected television may be capable of playing back content encoded in accordance with a mobile profile. However, playback quality may be significantly reduced relative to the quality achieved with a profile that demands the resources that are typically available in a home theater setting. Accordingly, processes for generating personalized video playlists in accordance with many embodiments of the invention involve transcoding video segments into formats and/or profiles suitable for different classes of device. As can readily be appreciated, the transcoding of media into target profiles can be performed in parallel with the processes utilized to perform video segment linking (306) and personalized playlist generation (308).
  • As discussed above, personalized playlists can be utilized by playback devices to obtain (312) and playback video segments identified within the playlists. In a number of embodiments, the video segments are streamed to the playback device and any of a variety of streaming technologies can be utilized including any of the common progressive playback or adaptive bitrate streaming protocols utilized to stream video content over a network. In several embodiments, a playback device can download the video segments using a personalized video playlist for disconnected (or connected) playback. The personalized playlists are generated based upon user preferences. Therefore, the process of generating personalized playlists can be continuously improved by collecting information concerning user interactions with video segments identified in a personalized playlist. The interactions can be indicative of implicit user preferences and may be utilized to update explicit user preferences obtained from the user.
  • Although specific processes for generating personalized video playlists are described above with reference to FIG. 3, any of a variety of processes that annotate video segments from multiple video sources with metadata describing the content of the video segments and utilize the metadata annotations and user preferences to generate a playlist can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Video segmentation and playlist generation systems that can be utilized in the generation of personalized video playlists in accordance with embodiments of the invention are discussed further below.
  • Video Segmentation Systems
  • In a number of embodiments, computers and television tuners are utilized to continually record media content from over-the-air broadcasts and cable television transmissions. In the context of a playlist generation system configured to generate personalized video playlists of news stories, the recorded programs can include national morning and evening news programs (e.g., TODAY Show, ABC World News), investigative journalism (e.g., 60 Minutes), and late-night talk shows (e.g., The Tonight Show). In many embodiments, the closed caption (CC) and/or any subtitles and metadata that may be available within the broadcast data stream are recorded along with the media content for use in subsequent processing of the recorded media content. In other contexts, content sources appropriate to the requirements of specific applications can be recorded. In several embodiments, segmentation is performed in real-time prior to storage. In a number of embodiments, the video data streams are recorded and segmentation is performed on the recorded data streams.
  • A video segmentation system configured to aggregate and segment over-the-air broadcasts and cable television transmissions in accordance with an embodiment of the invention is illustrated in FIG. 4. The video segmentation system 400 receives video data stream inputs 402 from over-the-air broadcasts and cable television transmissions. In the illustrated embodiment, the video segmentation system 400 uses a signal splitter 404 to split and amplify a signal received via a cable television service. The signal is split into a number of inputs that are provided to a set of tuners 408 that possess the capability to demodulate a digital television signal from the cable television transmission and record the data stream to a storage device. In a number of embodiments, the tuners are controlled by a server based upon program guide information. The server can utilize the program guide information to identify desired content and can control the tuners 408 to tune to the appropriate channel at the appropriate time to commence recording of the content.
  • In the illustrated embodiment, the tuners 408 connect to a central storage system 410 via a high bandwidth digital switch 412. The data streams are recorded to the central storage system 410 and then a video segmentation server system 414 can commence the process of segmenting the data stream into discrete video segments.
  • A similar process is utilized to record and segment data streams obtained from over-the-air broadcasts. In the illustrated embodiment, tuner boxes 416 are utilized to tune to and demodulate digital television signals that are provided via a network 418 to the video segmentation server system 414 for segmentation. In many embodiments, the video segmentation server system records the over-the-air data streams to the central storage system 410 and then processes the recorded data streams. In a number of embodiments, the video segmentation server system 414 performs video segmentation in real-time and the video segments are recorded to the central storage system 410. In a number of embodiments, local machines 420 can be utilized to administer the aggregation and segmentation of video and/or view video segments.
  • Although specific systems for performing video aggregation and segmentation are described above with reference to FIG. 4, any of a variety of video segmentation systems can be utilized to receive and segment video data streams in accordance with embodiments of the invention. Video segmentation server systems and multi-modal segmentation processes that can be utilized in the segmentation of video data streams in accordance with embodiments of the invention are discussed further below.
  • Multi-Modal Video Segmentation
  • Due to the diversity of video content generated by various broadcast and online content sources, video segmentation systems in accordance with many embodiments of the invention can utilize a variety of cues to reliably segment content. In a typical data stream of video content, the sources of information concerning the structure of the content include (but are not limited to) image data in the form of frames of video, audio data in the form of time synchronized audio tracks, text data in the form of closed caption and/or subtitles, and/or additional sources of video, audio, and/or text information indicated by metadata contained within the data stream (e.g. in a time synchronized metadata track). In the context of video data streams, the term structure can often be used to describe a common progression of content within a data stream. For example, many data streams include content interrupted by advertising. At a more sophisticated level many news services structure transitions between news stories to incorporate shots of an anchorperson, which can be referred to as anchor frames, and/or transition animations that often include a station logo. The goal of video segmentation is to use information concerning the structure of content to divide a continuous video data stream into logical video segments such as (but not limited to) discrete news stories. In a number of embodiments, video segmentation is performed using multi-modal fusion of a variety of visual, auditory and textual cues. By combining cues from different types of data contained within the data stream, the segmentation process has a greater likelihood of correctly identifying structure within the content indicative of logical boundaries between video segments.
  • Multi-Modal Video Segmentation Server Systems
  • A multi-modal video segmentation server system in accordance with an embodiment of the invention is illustrated in FIG. 5A. The multi-modal video segmentation server system 500 includes a processor 510 in communication with volatile memory 520, non-volatile memory 530, and a network interface 540. In the illustrated embodiment, the non-volatile memory includes a video segmentation application 532 that configures the processor 510 to identify video segmentation boundaries in a video data stream 524 retrieved via the network interface 540. In a number of embodiments, the segmentation boundaries are utilized to generate video segmentation metadata 526 that can be utilized in the subsequent transcoding of the video data into one or more target video profiles for distribution to playback devices.
  • Although specific multi-modal video segmentation server systems are described above with reference to FIG. 5A, any of a variety of architectures can be utilized to implement multi-modal segmentation server systems in accordance with embodiments of the invention. Furthermore, the term processor is used with respect to all of the processing systems described herein to refer to a single processor, multiple processors, and/or a combination of one or more general purpose processors and one or more graphics coprocessors or graphics processing units (GPUs). Furthermore, the term memory is used to refer to one or more memory components that may be housed within separate computing devices. Multi-modal video segmentation processes that can be performed using multi-modal video segmentation server systems in accordance with embodiments of the invention are described in detail below.
  • Multi-Modal Video Segmentation Processes
  • Multi-modal video segmentation processes can utilize a variety of different types of data contained within a video data stream to identify cues indicative of the structure of the data stream. A multi-modal video segmentation process that utilizes textual, audio and visual cues to identify segmentation boundaries in accordance with an embodiment of the invention is conceptually illustrated in FIG. 5B. The process 550 involves detecting textual cues (552), audio cues (554), and visual cues (555). The detected cues and their associated timestamps are then fused to identify segmentation boundaries. In several embodiments, machine learning techniques can be utilized to train a system to identify segmentation boundaries based upon a fused stream of segmentation cues. In a number of embodiments, a supervised learning approach such as (but not limited to) a support vector machine, a neural network classifier, and/or a decision tree classifier is utilized to implement a classifier that can identify segmentation boundaries based upon a training data set of video streams in which segmentation boundaries have been manually identified. In other embodiments, any of a variety of techniques including but not limited to supervised and unsupervised machine learning techniques can be utilized to implement systems for identifying segmentation boundaries based upon multi-modal segmentation cues in accordance with embodiments of the invention. The various textual, visual and audio cues that can be utilized in processes similar to those described above with reference to FIG. 5B are discussed further below.
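  • The following is a minimal sketch (in Python, using scikit-learn) of such a supervised fusion step: candidate boundary times are represented by fused vectors of cue scores and a support vector machine is trained on manually labeled examples. The feature layout, toy training data, and use of an SVM are illustrative assumptions rather than a required implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Each candidate boundary time is described by a fused feature vector of cue
# scores observed in a small window around it. The column layout is an assumption:
# [cc_marker, transition_phrase, anchor_frame, logo_match, dark_frame, pause]
X_train = np.array([
    [1, 0, 1, 0, 0, 1],   # manually labeled boundary
    [0, 0, 0, 0, 0, 0],   # not a boundary
    [1, 1, 0, 1, 0, 1],   # boundary
    [0, 0, 1, 0, 0, 0],   # not a boundary (anchor on screen mid-story)
], dtype=float)
y_train = np.array([1, 0, 1, 0])

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

def boundary_probability(cue_vector):
    """Return the estimated probability that the fused cues indicate a boundary."""
    return clf.predict_proba(np.asarray(cue_vector, dtype=float).reshape(1, -1))[0, 1]

print(boundary_probability([1, 0, 1, 0, 1, 1]))
```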
  • Textual Cues
  • Some of the most important cues for story boundaries can be found in closed caption textual data incorporated within a video data stream. Often, >>> and >> markers are inserted to denote changes in stories or changes in speakers, respectively. Due to human errors, relying solely on these markers can provide inaccurate segmentation results. Therefore, segmentation analysis of closed caption data can be enhanced by looking for additional cues including (but not limited to) commonly used transition phrases that occur at segmentation boundaries. In several embodiments, string searches are performed within closed caption textual data and all >>> markers and transition phrases are identified as potential segmentation boundaries. In a number of embodiments, the list of transition phrases include “Now, we turn to . . . ” and “Stephanie Gross, NBC News, Seattle”. In other embodiments, any of a variety of text tags and/or phrases can be utilized as textual segmentation cues as appropriate to the requirements of specific applications.
  • In many instances, there is a delay between the video and closed caption text that varies randomly even within the same segment of video content. Indeed, delays of the order of tens of seconds have been observed. In a number of embodiments, automatic speech recognition can be performed with respect to the audio track and the timestamps of the audio track used to align the audio track textual data output by the automatic speech recognition process with text in the accompanying closed caption textual data. In several embodiments, the text data output by the automatic speech recognition process can also be analyzed to detect the presence of transition phrases. In other embodiments, the uncertainty in the time alignment between the closed caption text and the video content can be accommodated by the multi-modal segmentation process and a separate time alignment process is not required.
  • A process for identifying textual segmentation cues in accordance with an embodiment of the invention is illustrated in FIG. 6. The process 600 includes extracting closed caption textual data (602) and performing automatic speech recognition (604). These processes can be performed in parallel and any of a variety of automatic speech recognition processes typically used to perform automated speech to text conversions can be utilized as appropriate to the requirements of specific applications. In the context of news services, the number of speakers may be limited and speech recognition models that are speaker dependent can be utilized to achieve greater accuracy in the speech to text conversion of speech by recurring speakers such as (but not limited to) news anchors. Timestamps within the audio track utilized as the input to the automatic speech recognition process can be utilized to time synchronize (606) closed caption textual data with the video track within the video segment. Text segmentation cues can be identified by performing string searches within the closed caption textual data. Information concerning the textual cue and the timestamp associated with the textual cue can then be utilized in the identification of segmentation boundaries. In a number of embodiments, a confidence score is associated with the timestamp assigned to a textual cue and the confidence score can also be considered in the determination of a segmentation boundary.
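  • A minimal sketch of the string-search step is shown below; it scans time-stamped closed caption lines for ">>>" story markers and for transition phrases. The caption tuple format and the phrase patterns are assumptions for illustration.

```python
import re

# Illustrative transition-phrase patterns; a deployed system would maintain a
# larger, per-broadcaster library of phrases.
TRANSITION_PHRASES = [
    r"now,? we turn to",
    r",\s*nbc news,",          # sign-off pattern, e.g. "..., NBC News, Seattle"
]

def find_textual_cues(captions):
    """captions: list of (timestamp_seconds, text). Returns (timestamp, cue_type) pairs."""
    cues = []
    for ts, text in captions:
        if ">>>" in text:
            cues.append((ts, "story_marker"))
        lowered = text.lower()
        if any(re.search(pattern, lowered) for pattern in TRANSITION_PHRASES):
            cues.append((ts, "transition_phrase"))
    return cues

captions = [
    (12.4, ">>> Now, we turn to the weather."),
    (95.1, "Stephanie Gross, NBC News, Seattle."),
]
print(find_textual_cues(captions))
```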
  • Visual Cues
  • Visual boundaries in video content can provide information concerning transitions in content that cannot be discerned from analysis of closed caption textual data alone. In several embodiments, an analysis of video content for visual cues indicative of segmentation boundaries can be utilized to identify additional segmentation boundaries and to confirm and/or improve the accuracy of boundaries identified using closed caption textual data.
  • In the context of segmentation of news stories, several embodiments of the invention rely upon one or more of a set of visual cues as strong indicators of a segmentation boundary. In a number of embodiments, the set of visual cues includes (but is not limited to) anchor frames, logo frames, logo animation sequences and/or dark frames. In other embodiments and/or contexts, any of a variety of visual cues can be utilized as appropriate to the requirements of specific applications.
  • Detecting Anchor Frames
  • The term anchor frame refers to a frame in which an anchorperson appears. Typically, one or more anchorpersons appear between stories to provide a graceful transition. In several embodiments, a face detector is applied to some or all of the video frames in a video data stream. In certain embodiments, a face detector that can detect the presence of a face (without performing identification) is utilized to identify candidate anchor frames and then a facial recognition process is applied to the candidate anchor faces to detect anchor frames. In other embodiments, any of a variety of techniques can be used to identify the presence of a specific person's face within a frame in a video data stream as appropriate to the requirements of specific applications.
  • A process for detecting anchor frames in a data stream in accordance with an embodiment of the invention is conceptually illustrated in FIG. 7A. The frame of video 700 contains an image of the face 702 of NBC News anchor Brian Williams. A process for detecting that a region 704 of the frame 700 contains the face of a known anchorperson, thereby identifying the frame as an anchor frame, is illustrated in FIG. 7B. The process 750 includes selecting (752) a frame from the video data stream and detecting (754) a region of the frame containing a face. In several embodiments, a Viola-Jones or other cascade-of-classifiers-based face detector is utilized. In other embodiments, any of a variety of face detection techniques can be utilized as appropriate to the requirements of a specific application.
  • When no faces are detected (756), the frame is determined not to be an anchor frame. When a determination (756) is made that a face is present, then a face identification process (758) can be performed within the region containing the detected face. In several embodiments, face identification is performed by generating a color histogram for a region containing a candidate face. In several embodiments, an elliptical region is utilized. In a number of embodiments, confidence information generated by the face detection process is utilized to define the region from which to form a histogram. The color histograms can be clustered from candidate anchor frames across the video data stream and dominant clusters identified as corresponding to an anchorperson. The dominant clusters can then be used to identify candidate anchor frames that contain a face having a color histogram that is close to one of the dominant “anchor” color histograms. In certain embodiments, similarity is determined using the L1 distance between the color histograms. In other embodiments, any of a variety of metrics can be utilized as appropriate to the requirements of specific applications, including metrics that consider the color histogram of a potential anchor face over more than one frame.
  • When a determination (760) is made that an anchorperson's face is present, an anchor frame is detected (762). In several embodiments, factors including (but not limited to) the L1 distance and the number of adjacent frames in which the anchor face is detected are utilized to generate a confidence score that can be used by a multi-modal segmentation process in combination with information concerning other cues to determine the likelihood of a transition indicative of a segmentation boundary.
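  • The following sketch (using OpenCV) illustrates one way such an anchor-frame test could be assembled, assuming the dominant “anchor” color histograms have already been obtained by clustering candidate faces across the data stream. The cascade model, histogram size, and L1 threshold are illustrative assumptions, not prescribed values.

```python
import cv2
import numpy as np

# Illustrative anchor-frame test: detect faces, compute an L1-normalized color
# histogram over the face region, and compare it to dominant "anchor" histograms.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_color_histogram(frame_bgr, face_rect, bins=16):
    x, y, w, h = face_rect
    roi = frame_bgr[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256]).flatten()
    return hist / max(hist.sum(), 1e-9)          # L1 normalization

def is_anchor_frame(frame_bgr, anchor_histograms, l1_threshold=0.5):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for rect in faces:
        hist = face_color_histogram(frame_bgr, rect)
        # L1 distance to each dominant anchor histogram
        if any(np.abs(hist - anchor).sum() < l1_threshold for anchor in anchor_histograms):
            return True
    return False
```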
  • Detecting Logo Frames
  • Many news programs insert a program logo or transition animation between stories or segments. Logo appearance and position can vary unpredictably over time. In a number of embodiments, feature matching is performed between a set of logo images and frames from a video data stream. A set of logo images can be obtained by periodically crawling the websites of news organizations and/or other appropriate sources. Feature matching can also be performed between sequences of images in a transition animation and frames from a video data stream. Similarly, new transition animations can be periodically observed in video data streams generated by specific content sources and added to a library of transition animations.
  • Feature matching between logo images and frames of video in accordance with an embodiment of the invention is illustrated in FIG. 8A. The process involves comparing a logo image 800 with a frame of video 802 and identifying matches 804 between local features in the logo image 806 and in the frame of video 808. When a sufficiently large number of local features are present, a match is identified and factors including (but not limited to) the similarity of the local features can be used to generate a confidence score indicating the reliability of the match. A similar process can be utilized to identify a sequence of frames of video that match a sequence of frames in a transition animation. Local feature matching between frames in transition animations and sequences of frames of video in accordance with embodiments of the invention is illustrated in FIGS. 8B and 8C. A frame from a transition animation that has previously been identified as indicative of a segmentation boundary is illustrated in FIG. 8B. The frame 850 from the transition animation shows two framed pictures 854 and 856, a white ticker bar 858 positioned below the two framed pictures and a logo 860 in the larger (856) of the two frames. Identification of the same features in the frame of video 852 can be indicative of the frame of video 852 belonging to a transition animation. As can readily be appreciated, the content within the framed pictures and the ticker differs; however, the presence of a sufficiently large number of local features can be utilized to detect a match between the two frames. In a number of embodiments, additional features such as the presence of an anchorperson's face in the smaller of the two framed pictures can also be utilized in the detection of a frame of a transition animation. In other embodiments, any of a variety of features can be utilized to detect transition animations as appropriate to the requirements of specific applications including (but not limited to) analysis of an audio track to detect a musical accompaniment to a transition animation.
  • A specific process for performing feature matching is illustrated in FIG. 9. The process 900 involves selecting (902) frames from a video data stream. Local features can be extracted (904) from a reference image and the selected frames of video. In a number of embodiments, SURF features are extracted using processes similar to those described in H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008. In other embodiments, any of a variety of processes can be utilized to extract localized features in accordance with embodiments of the invention.
  • The localized features can be utilized to generate (906) global signatures and the selected frames ranked by comparing their global signatures to the global signature of the reference image. The ranking can be utilized to select (908) a set of candidate frames that are compared in a pairwise fashion (910) with the logo image. In several embodiments, the pairwise comparisons can utilize the techniques described in D. Chen, S. Tsai, V. Chandrasekhar, G. Takacs, R. Vedantham, R. Grezeszczuk, and B. Girod, “Residual enhanced visual vector as a compact signature for mobile visual search,” Signal Processing, 2012. When the pairwise comparison yields a similarity score exceeding a predetermined threshold, a match is identified (912). As noted above, a match may represent that the candidate frame incorporates a logo and/or that the candidate frame corresponds to a frame from a transition animation. In many embodiments, the process of determining a match also involves determining a confidence metric that can also be utilized in the segmentation of a video data stream.
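  • A minimal sketch of pairwise local-feature matching between a logo image and a candidate frame follows. ORB features are used here purely as a freely available stand-in for the SURF features referenced above, and the ratio-test and match-count thresholds are illustrative assumptions.

```python
import cv2

# Illustrative local-feature matching: ORB keypoints/descriptors with a ratio test.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def logo_match_score(logo_gray, frame_gray, ratio=0.75):
    kp1, des1 = orb.detectAndCompute(logo_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return 0
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)

def contains_logo(logo_gray, frame_gray, min_matches=25):
    # Threshold on the number of ratio-test survivors; illustrative value only.
    return logo_match_score(logo_gray, frame_gray) >= min_matches
```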
  • Although specific processes are described above with reference to FIGS. 8A-8C and FIG. 9, any of a variety of processes for comparing features within images can be utilized to detect logos, animations, and/or other features indicative of segmentation boundaries as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Furthermore, as discussed below, the processes described above with respect to FIG. 9 can also be utilized in the indexing of video segments to identify the presence of images associated with additional sources of data within a video segment. While logos and transition animations can be strong indicators of segmentation boundaries in a video data stream, they are not the only visual cues that can be utilized to detect segmentation boundaries. Additional visual cues including dark frames that are indicative of segmentation boundaries are discussed further below.
  • Detecting Dark Frames
  • Dark frames are frequently inserted at the boundaries of commercials and hence provide another valuable visual cue for segmentation. In several embodiments, dark frames are detected by converting some or all frames in a video data stream to gray scale and computing the mean and standard deviation of the pixel intensities. In many embodiments, a frame is determined to be a dark frame if the mean is below μb and the standard deviation is below σb. In several embodiments, values of μb=40 and σb=10 can be utilized for gray levels in the range [0, 255]. In other embodiments, any of a variety of processes can be utilized to identify dark frames in accordance with embodiments of the invention, including (but not limited to) processes that identify sequences of multiple dark frames and/or processes that provide a confidence measure that can be utilized by a multi-modal segmentation process in combination with information concerning other cues to determine the likelihood of a transition indicative of a segmentation boundary.
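  • The dark-frame test described above can be expressed directly; the following sketch assumes OpenCV and uses the example thresholds μb=40 and σb=10 from the text.

```python
import cv2

# A frame is "dark" when the mean gray level is below mu_b and the standard
# deviation of the gray levels is below sigma_b (gray levels in [0, 255]).
def is_dark_frame(frame_bgr, mu_b=40.0, sigma_b=10.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean, std = cv2.meanStdDev(gray)
    return float(mean[0][0]) < mu_b and float(std[0][0]) < sigma_b
```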
  • Auditory Cues
  • In a number of embodiments, an audio track within a data stream can also be utilized as a source of segmentation cues. Anchorpersons commonly pause momentarily or take a long breath before introducing a new story. In several embodiments, significant pauses in an audio track are utilized as a segmentation cue. In many embodiments, a significant pause is defined as a pause in speech having a duration of 0.3 seconds or longer. In other embodiments, any of a variety of classifiers can be utilized to detect pauses indicative of a segmentation boundary in accordance with embodiments of the invention including processes that provide a confidence measure that can be utilized by a multi-modal segmentation process in combination with information concerning other cues to determine the likelihood of a transition indicative of a segmentation boundary. Pauses are not the only auditory cues that can be utilized in the detection of segmentation boundaries. In many embodiments, specific changes in tone and/or pitch can be utilized as indicative of segmentation boundaries as can musical accompaniment that is indicative of a transition to a commercial break and/or between segments.
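  • A minimal sketch of pause detection by short-term energy thresholding is shown below. The frame length and energy threshold are assumptions; a production system might instead use a trained voice activity detector or classifier, as noted above.

```python
import numpy as np

# Flag pauses of at least min_pause_s seconds by thresholding short-term energy.
def find_pauses(samples, sample_rate, frame_ms=20, energy_thresh=1e-4,
                min_pause_s=0.3):
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    silent = []
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        silent.append(np.mean(frame.astype(float) ** 2) < energy_thresh)

    pauses, start = [], None
    for i, is_silent in enumerate(silent + [False]):
        if is_silent and start is None:
            start = i
        elif not is_silent and start is not None:
            duration = (i - start) * frame_ms / 1000.0
            if duration >= min_pause_s:
                pauses.append((start * frame_ms / 1000.0, duration))
            start = None
    return pauses  # list of (start_time_s, duration_s)
```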
  • Although various systems and methods that utilize a variety of segmentation cues in the multi-modal segmentation of video data streams are described above with reference to FIGS. 5A-9, any segmentation process that can be utilized to segment a video data stream in a manner that enables indexing of the video segments for the purposes of generating personalized playlists can be utilized in accordance with embodiments of the invention. Processes for generating personalized video playlists based upon user preferences in accordance with embodiments of the invention are described further below.
  • Personalized Video Playlist Generation
  • Playlist generation systems in accordance with many embodiments of the invention are configured to index sets of video segments and generate personalized playlists based upon user preferences. The user preferences can be explicit preferences specified by the user, and/or can be inferred based upon user interactions with previously recommended video segments (i.e. the user's viewing history). In many embodiments, the playlist generation system also generates playlists that are subject to time constraints in recognition of the limited time available to a user to consume content.
  • A playlist generation server system configured to index video segments and generate personalized playlists in accordance with an embodiment of the invention is illustrated in FIG. 10. The playlist generation server system 1000 includes a processor 1010 in communication with volatile memory 1020, non-volatile memory 1030, and a network interface 1040. In the illustrated embodiment, the non-volatile memory 1030 includes an indexing application 1032 that configures the processor 1010 to annotate video segments with metadata 1038 describing the content of the video segments and generate an index relating video segments to keywords. In several embodiments, the indexing application 1032 configures the processor 1010 to extract metadata from textual analysis of textual data contained within a video segment and visual analysis of video data contained within the video segment. In a number of embodiments, the indexing application 1032 configures the processor 1010 to identify additional sources of relevant data that can be used to further annotate the video segment based upon textual and visual comparisons of the video segment and sources of additional data. In other embodiments, any of a variety of techniques including (but not limited to) manual annotation of video segments can be utilized to associate metadata with individual video segments.
  • The non-volatile memory 1030 can also contain a playlist generation application 1034 that configures the processor 1010 to generate personalized playlists for individual users based upon information collected by the playlist generation server system 1000 concerning user preferences and viewing histories 1036. Various processes for generating personalized video playlists in accordance with embodiments of the invention are discussed further below.
  • Although specific playlist generation server system implementations are described above with reference to FIG. 10, any of a variety of architectures including architectures where the indexing application and playlist generation application execute on different processors and/or on different server systems can be utilized to implement playlist generation server systems in accordance with embodiments of the invention. Processes for annotating and indexing video segments and processes for generating personalized video playlists in accordance with various embodiments of the invention are discussed separately below.
  • Automated Video Segment Annotation
  • Metadata describing video segments can be utilized as inputs to a personalized video playlist generation system and to populate the user interfaces of playback devices with descriptive information concerning the video segments. A great deal of metadata describing a video segment can be derived from the video segment itself. Analysis of text data such as closed caption and subtitle text data can be utilized to identify relevant keywords. Analysis of visual data using techniques such as (but not limited to) text recognition, object recognition, and facial recognition can be utilized to identify the presence of keywords and/or named entities within the content. In many instances video segments can also include a metadata track that describes the content of the video segment.
  • Metadata describing video segments can also be obtained by matching the video segments to additional sources of relevant data. In the context of news stories, video segments can be matched to online articles related to the content of the video segment. In a number of embodiments, visual analysis is used to match portions of images associated with online articles to frames of video as an indication of the relevance of the online article. These sources of additional data (e.g. online news articles or Wikipedia pages) can be used to identify additional keywords describing the content. In addition, online articles matched to specific video segments can be utilized to generate titles for video segments and provide thumbnail images that can be used within user interfaces of playback devices. Hyperlinks to the online articles can also be provided via the user interfaces to enable a user to link to the additional content. In other contexts, any of a variety of data sources appropriate to the requirements of the specific application can be utilized in the generation of user interfaces and/or personalized playlists in accordance with embodiments of the invention.
  • In several embodiments, visual analysis and text analysis are utilized to match video segments to additional sources of data. A process for matching a segment of video to an online news article in accordance with an embodiment of the invention is conceptually illustrated in FIG. 11. The process involves matching (1100) visual features, which can involve comparing a video segment 1102 to images 104 associated with additional sources of data to identify the presence of at least a portion of the image within at least one frame of video within the video segment. The process can also involve matching (1108) text features. In several embodiments, keywords found in closed caption text data 1110 can be compared to keywords contained in text data 1112 present within additional sources of data.
  • In a number of embodiments, computational complexity can be reduced by initially performing text analysis to identify candidate sources of additional data. Images related to the candidate sources of additional data can then be utilized to perform visual analysis and the final ranking of the candidate sources of additional data determined based upon the combination of the text and visual analysis. In other embodiments, the text and visual analysis can be performed in alternative sequences and/or independently. Processes for performing text analysis and visual analysis to identify additional sources of data relevant to the content of video segments in accordance with embodiments of the invention are discussed further below.
  • Text Analysis
  • In a number of embodiments, sources of text within a video segment including (but not limited to) closed caption, subtitles, text generated by automatic speech recognition processes, and text generated by text recognition (optical character recognition) processes can be utilized to annotate video segments and identify additional sources of relevant data. In the context of video segments that have a temporal relevancy component (e.g. news stories), time stamp metadata associated with additional sources of data and/or dates and/or times contained within text forming part of an additional source of data can be utilized in limiting the sources of additional data considered when determining relevancy. In many instances, the presence of common dates and/or times in text extracted from a video segment and text from an additional data source can be considered indicative of relevance.
  • In a number of embodiments, bag-of-words histogram comparisons enable matching of text segments with similar distributions of words. In certain embodiments, a term frequency-inverse document frequency (tf-idf) histogram intersection score (S(Ha, Hb)) is computed as follows:
  • S(Ha, Hb) = Σ_w idf(w) · min(Ha(w), Hb(w))
  • idf(w) = log(max_x f(x)) − log(f(w))
  • where Ha(w) and Hb(w) are the L1 normalized histograms of the words in the two sets of words (i.e. the text from the video segment and the additional data source); and
  • {f(w)} is the set of estimated relative word frequencies.
  • In many embodiments, a candidate additional data source is considered to have been identified when the tf-idf histogram intersection score (S(Ha, Hb)) exceeds a predetermined threshold.
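  • The score defined above can be computed directly from word lists; in the sketch below the relative word frequencies {f(w)} are assumed to have been estimated from a background corpus, and unseen words fall back to the smallest known frequency (an illustrative choice, not a prescribed one).

```python
import math
from collections import Counter

def l1_histogram(words):
    """L1-normalized histogram of a word list."""
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def tfidf_intersection(words_a, words_b, rel_freq):
    """S(Ha, Hb) = sum_w idf(w) * min(Ha(w), Hb(w)), with
    idf(w) = log(max_x f(x)) - log(f(w))."""
    h_a, h_b = l1_histogram(words_a), l1_histogram(words_b)
    max_f = max(rel_freq.values())
    min_f = min(rel_freq.values())

    def idf(w):
        return math.log(max_f) - math.log(rel_freq.get(w, min_f))

    return sum(idf(w) * min(h_a[w], h_b[w]) for w in h_a.keys() & h_b.keys())
```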
  • In a number of embodiments, the process of identifying relevant sources of additional data places particular significance upon named entities. A database of named entities can be built using sources such as (but not limited to) Wikipedia, Twitter, the Stanford Named Entity Recognizer, and/or Open Calais. String searches can then be utilized to identify named entities in text extracted from a video segment and a potential source of additional data, such as an online article. In several embodiments, the presence of a predetermined number of common named entities is used to identify a source of additional data that is relevant to a video segment. In certain embodiments, the presence of five or more named entities in common is indicative of a relevant source of additional data. In other embodiments, any of a variety of processes can be utilized to determine relevancy based upon named entities including processes that utilize a variety of matching rules such as (but not limited to) number of matching named entities, number of matching named entities that are people, number of matching named entities that are places and/or combinations of numbers of matching named entities that are people and number of matching named entities that are places.
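  • A minimal sketch of the named-entity matching rule follows; entity extraction itself (via a named entity recognizer or a dictionary of known entities) is assumed to have been performed already, and the threshold of five common entities follows the example above.

```python
# An article is considered relevant when it shares at least `min_common` named
# entities with the video segment; the entity lists here are illustrative.
def is_relevant_by_entities(segment_entities, article_entities, min_common=5):
    common = ({e.lower() for e in segment_entities} &
              {e.lower() for e in article_entities})
    return len(common) >= min_common, common

relevant, shared = is_relevant_by_entities(
    ["Barack Obama", "Seattle", "NBC", "Boeing", "Washington"],
    ["Boeing", "Seattle", "Barack Obama", "Washington", "NBC", "FAA"])
print(relevant, shared)
```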
  • A process for performing text analysis of video segments to identify relevant sources of additional data in accordance with an embodiment of the invention is illustrated in FIG. 12. The process 1200 includes determining (1202) tf-idf for the annotated video segment(s). Similar processes can be utilized to determine (1204) tf-idf for additional sources of data such as online articles. Processes similar to those outlined above can be utilized to determine (1206) the similarity of the tf-idf histograms of the video segments and the additional sources of data.
  • In a number of embodiments, the relevancy of additional sources of data to specific video segments can be confirmed by identifying (1208) named entities in text data describing a video segment, identifying (1210) named entities referenced in candidate additional sources of data that share common terms with the video segment, and determining (1212) that an additional source of data relates to the content of a video segment when a predetermined number of named entities are referenced in the text data extracted from the video segment and the additional source of data. As is discussed further below, named entities associated with a video segment can be identified within text data extracted from the video segment and/or by performing object detection and/or facial recognition processes with respect to frames from the video segment.
  • Although specific processes are described above with reference to FIG. 12, any of a variety of processes can be utilized to identify relevant sources of additional data based upon text extracted from a video segment and the text associated with the additional data source as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • Use of Visual Analysis to Extract Additional Keywords
  • The frames of a video segment can contain a variety of visual information including images, faces, and/or text. In a number of embodiments, text analysis processes similar to those described above can be augmented using relevant keywords identified through analysis of the visual information (as opposed to text data) within a video segment. In several embodiments, text recognition processes are utilized to identify text that is visually represented within a frame of video and relevant keywords can be extracted from the identified text. In a number of embodiments, additional relevant keywords can also be extracted from a video segment by performing object detection and/or facial recognition.
  • Text Recognition
  • Text extraction processes can be used to detect and recognize letters forming words within frames in a video segment. In several embodiments, the text can be utilized to identify keywords that annotate the video segment. In the context of news stories, keywords such as (but not limited to) “breaking news” can be utilized to categorize news stories both for the purpose of detecting additional sources of data and during the generation of personalized playlists.
  • In a number of embodiments, text is extracted from frames of video and filtered to identify text that describes the video segment. News stories commonly include title text and identification of the title text can be useful for the purpose of incorporating the title into a user interface and/or for using keywords in the title to identify relevant additional sources of data. In many embodiments, an extracted title is provided to a search engine to identify additional sources of potentially relevant data. In the context of video segments within a specific category or vertical (e.g. news stories), the title can be provided as a query to a vertical search engine (e.g. the Google News search engine service provided by Google, Inc. of Mountain View, Calif.) to identify additional sources of potentially relevant data. In many embodiments, the ranking of the search results is utilized to determine relevancy. In several embodiments, the search results are separately scored to determine relevancy.
  • Processes for extracting relevant keywords from video segments for use in the annotation of video segments in accordance with embodiments of the invention are illustrated in FIGS. 13A-13D. FIG. 13A is a frame of video containing visual representations of text. As can be seen in FIG. 13B, the text includes the words “BREAKING NEWS” and “THREE MISSING GIRLS FOUND ALIVE”, which can be identified using common text recognition processes. In FIG. 13C, another frame of video is shown containing visual representations of text. As can be seen in FIG. 13D, the frame also includes the words “BREAKING NEWS” and the words “WITNESS TO TERROR” that can be identified using common text recognition processes. As can be readily appreciated, the presence of text information such as (but not limited to) scrolling tickers, and logos can introduce a great deal of textual “clutter” in a frame of video. Therefore, processes in accordance with many embodiments of the invention apply filters to recognized text in an effort to identify meaningful keywords. Furthermore, the regions within a frame of video searched using text recognition processes can be restricted to regions likely to contain text descriptive of the content of the video segments.
  • A process for extracting relevant keywords from frames of video using automatic text recognition in accordance with an embodiment of the invention is illustrated in FIG. 14. The process 1400 includes extracting (1402) text from one or more frames of video. With the exception of logos, the amount of time that text appears within a video segment can be highly correlated with the importance of the text. Therefore, many embodiments of the invention analyze multiple frames of video and filter text and/or keywords based upon the duration of the time period in which text and/or keywords are visible.
  • Referring again to the process 1400 shown in FIG. 14, the extracted (1402) text can be analyzed to identify (1404) keywords. The keywords can be filtered (1406) to identify relevant keywords and a library of key phrases, which can be utilized to annotate (1408) the video segment. In several embodiments, the text is filtered for “stop words” and a “stemming” process is applied to the remaining words to increase the matching results. In other embodiments, any of a variety of filtering and/or keyword expansion processes can be applied to recognized text to identify relevant keywords in accordance with embodiments of the invention.
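  • The filtering step can be sketched as follows; the stop-word list is a small illustrative subset, the Porter stemmer stands in for whatever stemming process is used, and the on-screen duration filter (expressed as a minimum number of frames) is an assumption.

```python
from collections import Counter
from nltk.stem import PorterStemmer

# Drop stop words, stem the remainder, and weight keywords by how many analyzed
# frames they appear in (a proxy for on-screen duration).
STOP_WORDS = {"the", "a", "an", "to", "of", "in", "and", "on", "for", "is"}
stemmer = PorterStemmer()

def extract_keywords(frame_texts, min_frames=30):
    """frame_texts: list of recognized-text strings, one per analyzed frame."""
    counts = Counter()
    for text in frame_texts:
        seen = set()
        for token in text.lower().split():
            token = token.strip(".,:;!?\"'")
            if not token or token in STOP_WORDS:
                continue
            seen.add(stemmer.stem(token))
        counts.update(seen)   # count each keyword at most once per frame
    return {kw for kw, n in counts.items() if n >= min_frames}
```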
  • Although specific processes for extracting additional relevant keywords from frames of video by performing automatic text recognition are described above with reference to FIG. 14, any of a variety of processes for annotating video segments using keywords identified by analyzing frames of a video segment using automatic text recognition processes can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Additional automatic recognition tasks that can be performed to identify faces and objects during the annotation of video segments in accordance with various embodiments of the invention are discussed further below.
  • Face Recognition
  • A variety of techniques are known for performing object detection including various face recognition processes. Processes for detecting anchor faces are described above with respect to video segmentation. As can readily be appreciated, recognizing the people appearing in video segments can be useful in identifying additional sources of data that are relevant to the content of the video segments. In a number of embodiments, similar processes can be utilized to identify a larger number of faces (i.e. more named entities than simply anchorpeople). In other embodiments, any of a variety of processes can be utilized to perform face recognition including processes that have high recognition precision across a large population of faces.
  • A process for performing face recognition based upon localized features during the annotation of a video segment in accordance with an embodiment of the invention is conceptually illustrated in FIGS. 15 and 16. The frame of video 1500 shown in FIG. 15 is a shot of Warren Buffett, Chairman of Berkshire Hathaway. As can readily be appreciated, the subject of the shot can be ascertained by performing automated text recognition. Alternatively, the presence of Mr. Buffett's face can be identified by performing a process 1600 involving initially performing (1602) a face detection process. A region determined to contain a face can then be analyzed (1604) to locate landmark features 1502 such as the corners of the face's eyes, the tip of the face's nose, and the edges of the face's mouth. As is well known, such features can be utilized to perform facial recognition by matching (1606) the relationship of the landmark features against a database of facial landmark feature geometries. Once a face is recognized, the identity of the person visible in the frame of video can be utilized to annotate (1608) the video segment with a keyword corresponding to a named entity. A confidence score can also be associated with the name entity annotation and utilized in weighting the named entity keyword when identifying additional sources of data.
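  • The following sketch illustrates one simple way landmark geometries could be compared, assuming a landmark detector has already produced (x, y) points for the eye corners, nose tip, and mouth edges. The normalization and distance threshold are illustrative; practical face recognizers typically use richer descriptors.

```python
import numpy as np

def normalize_landmarks(points):
    """Center and scale landmark points to remove translation and scale."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)                    # translation invariance
    scale = np.linalg.norm(pts) or 1.0
    return (pts / scale).flatten()             # scale invariance

def identify_face(landmarks, known_faces, max_distance=0.15):
    """known_faces: dict mapping person name -> landmark point list."""
    query = normalize_landmarks(landmarks)
    best_name, best_dist = None, float("inf")
    for name, pts in known_faces.items():
        dist = np.linalg.norm(query - normalize_landmarks(pts))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None
```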
  • Although specific processes for annotating video segments with named entity keywords by performing automatic face recognition are described above with reference to FIGS. 15 and 16, any of a variety of object detection processes can be utilized to annotate video segments with relevant keywords as appropriate to the requirements of specific applications in accordance with embodiments of the invention. While the processes described above with reference to FIGS. 13A-16 involve the analysis of visual information contained within frames of a video segment in order to identify keywords that are relevant to the content of the video segment, visual analysis can also be utilized to identify images that are relevant to the content of a video segment. Processes that utilize visual analysis to identify relationships between video segments and images in accordance with various embodiments of the invention are discussed further below.
  • Using Visual Analysis to Perform Image Linking
  • Video segments and additional sources of data, such as online articles, often utilize the same image, different portions of the same image, or different images of the same scene. In a number of embodiments, an image portion within one or more frames in a video segment can be matched to an image associated with additional sources of information to assist with establishing the relevancy of additional sources of data. In several embodiments, matching is performed by determining whether the frame of video contains a region that includes a geometrically and photometrically distorted version of a portion of an image obtained from the additional data source. As noted previously, processes similar to those described above with reference to FIG. 9 can be utilized to determine a match between a portion of an image associated with an additional data source and a portion of a frame of video. In other embodiments, any of a variety of techniques can be utilized to determine whether portions of a frame of video and an image associated with an additional data source correspond.
  • Personalized Playlist Generation
  • Once a set of video segments is annotated, an index can be generated using keywords extracted from the video segment and/or additional sources of data that are relevant to the content of the video segment. The resulting index and metadata can be utilized in the generation of personalized video playlists. Playlist personalization is a complex problem that can consider user preferences, viewing history, and/or story relationships in choosing the video segments that are most likely to form the set of content that is of most interest to a user. In many embodiments, processes for generating personalized playlists for users involve consideration of a recommended set of content in recognition of the limited amount of time an individual user may have to view video segments. Accordingly, processes in accordance with a number of embodiments of the invention can attempt to select a set of video segments having a combined duration less than a predetermined time period and spanning the content that is most likely to be of interest to the user. In several embodiments, the video segments can be further sorted into a preferred order. In a number of embodiments, the order can be determined based upon relevancy and/or based upon heuristics concerning sequences of content categories that make for “good television”. In certain embodiments, the process of generating playlists involves the generation of multiple playlists including a personalized playlist and “channels” of content filtered by categories such as “technology” or keywords such as “Barack Obama”. Within categories, user preferences can still be considered in the generation of the playlist. Effectively, the process for generating a personalized video playlist is simply applied to a smaller set of video segments. In the context of news stories, processes for generating personalized playlists in accordance with many embodiments of the invention attempt to provide a comprehensive view of the day's news in a way that avoids duplicate or near-duplicate stories. Additionally, more recent video segments can receive higher weightings. Intuitively, this formulation chooses trending video segments that originate from news programs the user prefers and are also associated with categories in which the user is interested.
  • In many embodiments, the process of generating a personalized playlist is treated as a maximum coverage problem. A maximum coverage problem typically involves a number of sets of elements, where the sets of elements can intersect (i.e. a single element can belong to multiple sets). Solving a maximum coverage problem involves selecting a fixed number of elements that together cover the largest number of sets of elements. In the context of generating a personalized playlist, the elements are the video segments and video segments that relate to the same content are treated as belonging to the same set. Therefore, the concept of content coverage can be used to refer to the amount of different content covered by a set of video segments. As noted above, video segments can be compared to determine whether the content is related or unrelated. In the context of news stories, many embodiments attempt to span the major news stories of the day and an objective function for solving the maximum coverage problem can be weighted by a linear combination of several personalization factors. These factors can include (but are not limited to) explicit preferences specified by a user, personal information provided by the user and/or obtained from secondary sources including (but not limited to) online social networks, and implicit preferences obtained by analyzing a user's viewing history. Information concerning implicit preferences may be derived by analyzing a user's viewing history with respect to playlists generated by a playlist generation server system. In other embodiments, implicit preferences can be derived from additional sources of information including (but not limited to) a user's browsing activity (especially with respect to online articles relevant to video segment content), activity within an online social network, and/or viewing history with respect to video and/or audio content provided by one or more additional services.
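  • The following sketch shows a greedy, time-budgeted approximation of the coverage formulation described above; a greedy pass is a standard approximation for maximum coverage and stands in here for the integer linear program. The segment fields, preference structure, and the linear weights combining coverage, category preference, source preference, and recency are illustrative assumptions.

```python
# Greedy, duration-constrained playlist selection under a coverage-plus-preference
# objective. All weights (0.5, 0.3, 0.2) and field names are illustrative only.
def generate_playlist(segments, user_prefs, max_duration_s):
    """
    segments: list of dicts with keys 'id', 'duration', 'story_id',
              'category', 'source', 'recency' (recency in [0, 1], newer = higher).
    user_prefs: dict with 'categories' and 'sources' mapping names to weights.
    """
    def score(seg, covered_stories):
        coverage = 0.0 if seg["story_id"] in covered_stories else 1.0
        return (coverage
                + 0.5 * user_prefs["categories"].get(seg["category"], 0.0)
                + 0.3 * user_prefs["sources"].get(seg["source"], 0.0)
                + 0.2 * seg["recency"])

    playlist, covered, remaining_time = [], set(), max_duration_s
    candidates = list(segments)
    while candidates:
        # Pick the segment with the best score per second of playback time.
        best = max(candidates, key=lambda s: score(s, covered) / s["duration"])
        candidates.remove(best)
        if best["duration"] <= remaining_time:
            playlist.append(best["id"])
            covered.add(best["story_id"])
            remaining_time -= best["duration"]
    return playlist
```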
  • A process for generating personalized playlists from metadata describing a set of video segments based upon user preferences in accordance with an embodiment of the invention is illustrated in FIG. 17. The process 1700 involves obtaining (1702) user preferences, which can involve observing (1704) a user's viewing history. In many embodiments, the process of generating personalized playlists utilizes metadata identifying video segments having related content or cumulative content. In a number of embodiments, related video segments are identified (1706) and personalization weightings can be determined (1708) for a new set of video segments from which the personalized playlists will be generated based upon metadata describing the video segments. In several embodiments, metadata describing the relationships between video segments and the personalization weightings are utilized to generate (1710) personalized playlists. In a number of embodiments, the process of generating a personalized playlist can be constrained by a specified cumulative playback duration of the video segments identified in the playlist.
  • Personalized playlists can be provided to playback devices, which can utilize the playlists to stream (1712), or otherwise obtain, the video segments identified in the playlist and to enable the user to interact with the video segments. In several embodiments, the playback devices and/or the playlist generation server system can collect analytic data based upon user interactions with the video segments and/or additional data sources identified within the playlist. The analytic information can be utilized to improve the manner in which personalization ratings are determined for specific users so that the playlist generation process can provide more relevant content recommendations over time.
  • Although specific processes for performing personalized playlist generation with respect to a set of video segments based upon user preferences are described above with reference to FIG. 17, any of a variety of processes can be utilized to perform playlist generation based upon metadata describing a set of video segments and information concerning user preferences in accordance with embodiments of the invention. As noted above, information concerning relationships between video segments and specifically with respect to the cumulative nature of video segments can be highly relevant in the generation of personalized playlists for certain types of video content including (but not limited to) news stories. Processes for identifying related and/or cumulative content in accordance with various embodiments of the invention are discussed further below.
  • Identifying Related Video Segments
  • As is discussed in further detail below, playlist generation processes in accordance with many embodiments of the invention rely upon information concerning the relationships between the content in video segments to identify the greatest amount of information that can be conveyed within the shortest or a specified time period. In the context of video segments extracted from news programming, related video segments can be considered to be video segments that relate to the same news story. In many embodiments, care is taken when classifying two video segments relating to the same content as “related” to avoid classifying a video segment that includes updated information as related in the sense of being cumulative. In many embodiments, a video segment that contains additional information can be identified as a primary video segment and a video segment containing an earlier version of the content and/or a subset of the content can be classified as a related or cumulative video segment. In this way, a related classification can be considered hierarchical or one directional. Stated another way, the classification of a first segment as related to a second segment does not imply that the second segment is related to (cumulative of) the first segment. In many embodiments, however, only bidirectional relationships are utilized.
  • A process for identifying whether a first video segment is cumulative of the content in a second video segment based upon keywords associated with the video segments in accordance with an embodiment of the invention is illustrated in FIG. 18. The process 1800 includes determining (1802) the tf-idf histograms for both of the video segments and determining (1804) lists of named entities associated with each of the segments. A decision concerning whether one of the video segments is cumulative of the other can be made by comparing the tf-idf histograms in the manner described above with respect to FIG. 12. In the event that the tf-idf histograms are determined to be sufficiently similar, a determination that one of the video segments is cumulative of the other video segment (or that both video segments are cumulative of each other) can be made by checking (1808) whether the number of shared named entities exceeds a predetermined threshold.
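  • A minimal sketch of such a comparison appears below. The histogram intersection follows the tf-idf intersection score defined later in this document, while the tokenization, the named-entity sets, and the two thresholds are placeholder assumptions rather than prescribed values.

```python
import math
from collections import Counter

def tfidf_histogram(words):
    """Return the L1-normalized word histogram for a list of tokens."""
    counts = Counter(words)
    total = float(sum(counts.values()))
    return {w: c / total for w, c in counts.items()}

def intersection_score(ha, hb, word_freq):
    """S(Ha, Hb) = sum_w idf(w) * min(Ha(w), Hb(w)), idf(w) = log(max_x f(x)) - log(f(w))."""
    max_f = max(word_freq.values())
    score = 0.0
    for w in set(ha) & set(hb):
        # Words missing from the frequency table are treated as very rare (high idf).
        idf = math.log(max_f) - math.log(word_freq.get(w, 1e-9))
        score += idf * min(ha[w], hb[w])
    return score

def is_cumulative(seg_a, seg_b, word_freq, score_thresh=0.5, entity_thresh=2):
    """seg_* are dicts with 'words' (token list) and 'entities' (set of named entities)."""
    ha, hb = tfidf_histogram(seg_a["words"]), tfidf_histogram(seg_b["words"])
    if intersection_score(ha, hb, word_freq) < score_thresh:
        return False
    return len(seg_a["entities"] & seg_b["entities"]) >= entity_thresh
```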
  • Although specific processes for identifying whether one video segment is cumulative of another are described above with respect to FIG. 18, any of a variety of processes for determining whether the content of a first video segment is cumulative of a second video segment can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Furthermore, processes that identify relationships other than the cumulative nature of video segments, such as processes that determine visual similarity between shots in order to identify appealing and/or dominant shots within video segments, can be utilized in a variety of contexts. The manner in which metadata describing the relationships between video segments can be utilized in the generation of personalized video playlists in accordance with various embodiments of the invention is discussed further below.
  • Generating Personalized Playlists Using Integer Linear Programming Optimization
  • In several embodiments, personalized playlists are generated by formalizing the problem of generating a playlist for a user as an integer linear programming optimization problem, or more specifically a maximum coverage problem, as follows:
$$
\begin{aligned}
\text{maximize} \quad & w_{\text{coverage}} \sum_{i=1}^{n} y_i \; + \; c^{T} x \\
\text{subject to} \quad & R x \ge y \\
& d^{T} x \le t
\end{aligned}
$$
  • where n is the number of today's videos,
  • w_coverage represents a weighting applied to the news story coverage relative to user preferences,
  • x is a vector including an element for each identified video segment, where for i ∈ [1 . . . n], x_i ∈ {0,1} is 1 if the i-th video segment is selected,
  • y is a vector including an element for each identified video segment, where for i ∈ [1 . . . n], y_i ∈ {0,1} is 1 if the i-th video segment is covered by a video segment that has already been selected,
  • c is a vector representing a set of personalization weights c_i determined with respect to each video segment x_i based upon user preferences, and
  • R ∈ {0,1}^(n×n) denotes an adjacency matrix, where 1 represents a link between news stories.
  • In the above formulation, the duration of each news story and the overall time limitation are represented by d_i and t, respectively. As can readily be appreciated, the above objective function maximizes a weighted combination of the coverage of the day's news stories achieved within a specified time period ($w_{\text{coverage}} \sum_{i=1}^{n} y_i$) and the user's preferences ($c^{T} x$).
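  • A minimal sketch of this optimization using the PuLP modeling library (an assumed choice; any mixed-integer solver could be substituted) is shown below, with the adjacency matrix R, personalization weights c, durations d, and time budget t supplied by the caller.

```python
import pulp

def solve_playlist(R, c, d, t, w_coverage=1.0):
    """Select segments maximizing story coverage plus personalization score.

    R: n x n 0/1 adjacency (list of lists), R[i][j] = 1 if segments i and j
       relate to the same story (with R[i][i] = 1).
    c: personalization weights, d: durations in seconds, t: time budget.
    """
    n = len(c)
    prob = pulp.LpProblem("personalized_playlist", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(n)]

    # Objective: weighted story coverage plus user-preference score of the selection.
    prob += w_coverage * pulp.lpSum(y) + pulp.lpSum(c[i] * x[i] for i in range(n))

    # y_i may only be 1 if some selected segment covers story i (R x >= y).
    for i in range(n):
        prob += pulp.lpSum(R[i][j] * x[j] for j in range(n)) >= y[i]

    # The selected segments must fit within the time budget (d^T x <= t).
    prob += pulp.lpSum(d[i] * x[i] for i in range(n)) <= t

    prob.solve()  # defaults to the bundled CBC solver
    return [i for i in range(n) if x[i].value() > 0.5]
```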
  • In a number of embodiments, factors including (but not limited to) a user's preferences with respect to sources and/or categories of video segments (s_source, s_category), recency (s_time), and viewing history (s_history) are considered in calculating the personalization weights c. In several embodiments, viewing history (s_history) can be determined based upon the number of related news stories that were previously watched by the user. In several embodiments, processes for detecting related and/or similar stories similar to those described above with respect to FIG. 18, but with relaxed matching criteria, can be utilized to identify similar video segments previously watched by a user. In a number of embodiments, a separate novelty metric is determined as part of the process of identifying similar stories and the novelty metric can be used to assess the extent to which the content of two similar video segments differs. In a number of embodiments, the novelty metric is related to the number of words that are not common between the two video segments. In other embodiments, any of a variety of factors can be considered in the calculation of a novelty metric. The overall weightings c_i for a video segment v_i from the set of n recent video segments v can be expressed as follows:

$$
c_i = w_{\text{source}} \cdot s_{\text{source}}(v_i) + w_{\text{category}} \cdot s_{\text{category}}(v_i) + w_{\text{time}} \cdot s_{\text{time}}(v_i) + w_{\text{history}} \cdot s_{\text{history}}(v_i)
$$
  • As can readily be appreciated, the weights can be selected arbitrarily and updated manually and/or automatically based upon user feedback.
  • In certain embodiments, s_time(v_i) and s_history(v_i) are defined as follows:
$$
s_{\text{time}}(v_i) = \text{time}_{v_i} - \text{time}_{\text{current}}, \qquad
s_{\text{history}}(v_i) = \sum_{w \in \text{Videos}} \text{related}(v_i, w)
$$
  • where Videos is the set of all video segments (i.e. not just the recent segments v).
  • The function related(v_i, w) ∈ {0,1} is 1 if video segments v_i and w are linked. In several embodiments, a process similar to any of the processes described above with respect to FIG. 18 can be utilized to determine whether stories are cumulative. As can readily be appreciated, the links identified by such processes are very specific in the sense that the process is intended to identify video segments that contain the same or very similar content. Accordingly, processes in accordance with many embodiments of the invention may (also) attempt to draw more general conclusions concerning viewing history such as keyword preferences, topic preferences, and source preferences. In certain embodiments, video segments can be marked as related (i.e. related(v_i, w) = 1) based upon preferences identified in this manner. Alternatively, more general preferences can be utilized to modify source and/or category preference scores that are separately used to weight video segments. As can readily be appreciated, any of a variety of processes for scoring a specific video segment based upon viewing history can be utilized in accordance with embodiments of the invention.
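  • The sketch below illustrates one way the component scores might be combined into the weights c_i; the user-profile structure, the related() helper passed in by the caller, and the default weight values are hypothetical.

```python
def personalization_weight(segment, user, all_segments, related,
                           w_source=1.0, w_category=1.0, w_time=0.5, w_history=1.0):
    """Combine per-factor scores into c_i for one candidate segment.

    `related(a, b)` is assumed to return 1 if two segments are linked, else 0.
    """
    s_source = user["source_prefs"].get(segment["source"], 0.0)
    s_category = user["category_prefs"].get(segment["category"], 0.0)
    # Recency score: older segments receive a more negative value.
    s_time = segment["timestamp"] - user["current_time"]
    # History score: number of previously watched segments linked to this one.
    s_history = sum(related(segment, w) for w in all_segments
                    if w["id"] in user["watched"])
    return (w_source * s_source + w_category * s_category
            + w_time * s_time + w_history * s_history)
```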
  • Once a set of video segments is identified, a variety of choices can be made with respect to the ordering of the set of video segments to generate a playlist. In a number of embodiments, the “importance” of a video segment can be scored and utilized to determine the order in which the video segments are presented in a playlist. In several embodiments, importance can be scored based upon factors including (but not limited to) the number of related video segments. In the context of news stories, the number of related video segments within a predetermined time period can be indicative of breaking news. Therefore, the number of related video segments to a video segment within a predetermined time period can be indicative of importance. In other embodiments, any of a variety of techniques can be utilized to measure the importance of a video segment as appropriate to the requirements of specific applications. In a number of embodiments, the content of the video segments is utilized to determine the order of the video segments in a personalized video playlist. In several embodiments, sentiment analysis of metadata annotating a video segment can be utilized to estimate the sentiment of the video segment and heuristics utilized to order video segments based upon sentiment. For example, a playlist may start with the most important story. Where the story has a negative sentiment (a dispatch from a warzone), the process can select a second story that has more uplifting sentiment. As can readily be appreciated, machine learning techniques can be utilized to determine processes for ordering stories from a set of stories to create a personalized playlist as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
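  • One such ordering heuristic is sketched below; the importance and sentiment scores and the alternation rule are illustrative assumptions rather than a prescribed ordering process.

```python
def order_playlist(segments):
    """Start with the most important segment, then avoid long runs of negative
    sentiment by following a negative story with the most positive remaining one."""
    remaining = sorted(segments, key=lambda s: s["importance"], reverse=True)
    ordered = [remaining.pop(0)]
    while remaining:
        if ordered[-1]["sentiment"] < 0:
            nxt = max(remaining, key=lambda s: s["sentiment"])
        else:
            nxt = max(remaining, key=lambda s: s["importance"])
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered
```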
  • Although specific processes are described above for generating personalized video playlists using an integer linear programming optimization process, any of a variety of processes can be utilized to generate personalized video playlists using a set of video segments based upon user preferences in accordance with embodiments of the invention, including processes that indirectly consider viewing history by modifying source and category weightings. Furthermore, processes in accordance with many embodiments of the invention consider other user preferences including (but not limited to) keyword and/or named entity preferences.
  • Playback Devices
  • Personalized video playlists can be provided to a host of playback devices to enable viewing of video segments and/or additional data sources identified in the playlists. In a number of embodiments a playback device is configured via a client application to render a user interface based upon metadata describing video segments obtained using the playlist. Playback devices can also be configured to provide a “second screen” display that can enable control of playback of video segments on another playback device and/or viewing of additional video segments and/or data related to the video segment being played back on the other playback device. As can readily be appreciated, the user interfaces that can be generated by playback devices are largely only limited by the capabilities of the playback device and the requirements of specific applications.
  • A playback device in accordance with an embodiment of the invention is illustrated in FIG. 19. The playback device 1900 includes a processor 1910 in communication with volatile memory 1920, non-volatile memory 1930, and a network interface 1940. In the illustrated embodiment, the non-volatile memory 1930 includes a media decoder application 1932 that configures the processor 1910 to decode video for playback via a display device, and a client application 1934 that configures the processor to render a user interface based upon metadata describing video segments contained within a personalized playlist 1926 retrieved from a playlist generation server system via the network interface 1940.
  • Although a specific playback device implementation is illustrated in FIG. 19, any of a variety of playback device architectures can be utilized to play back video segments identified in a personalized playlist in accordance with embodiments of the invention. User interfaces generated by playback devices that enable viewing and interaction with video segments identified in personalized playlists in accordance with embodiments of the invention are described further below.
  • User Interfaces
  • The user interface generated by a playback device based upon a personalized playlist is typically determined by the capabilities of a playback device. In many embodiments, instructions for generating a user interface can be provided to a playback device by a remote server. In several embodiments, the instructions can be in a markup and/or scripting language that can be rendered by the rendering engine of a web browser application on a computing device. In a number of embodiments, the remote server provides structured data to a client application on a playback device and the client application utilizes the structured data to populate a locally generated user interface. In other embodiments, any of a variety of approaches to generating a user interface can be utilized in accordance with an embodiment of the invention.
  • A user interface rendered by the rendering engine of a web browser application in accordance with an embodiment of the invention is illustrated in FIG. 20A. The user interface 2000 includes a player region 2002 in which a video segment is played back. The video segment being played back via the user interface is described by displaying the video segment's title 2004, source 2006, recency 2008, and number of views 2010 above the player region 2002. As can readily be appreciated, any of a variety of information describing a video segment being played back within a player region can be displayed in any location(s) within a user interface as appropriate to the requirements of specific applications.
  • In the illustrated embodiment, the player region 2002 includes user interface buttons for sharing a link to the current story 2012, skipping to the previous 2014 or next story 2016 and expressing like 2018 or dislike 2020 toward the story being played back within the player region 2002. In other embodiments, additional user interface affordances can be provided to facilitate user interaction including (but not limited to) user interface mechanisms that enable the user to select an option to follow stories related to the story currently being played back within the player region 2002.
  • The user interface also includes a personalized playlist 2022 filled with tiles 2024 that each include a description 2025 of a video segment intended to interest the user and an accompanying image 2026. In many embodiments, tiles 2024 in the playlist 2022 can also be easily reordered or removed. In the illustrated embodiment, the tile at the bottom of the list 2028 contains a description of the video segment being played back in the player region. The tile also contains sliders 2030 indicating categories, sources, and/or keywords for which a user has or can provide an explicit user preference. In this way, the user is prompted to modify previously provided user preference information and/or provide additional user preference information during playback of the video segment. In other embodiments, any of a variety of affordances can be utilized to directly obtain user preference information via a user interface in which video segments identified within a playlist are played back as appropriate to the requirements of specific applications.
  • Beneath the player region 2002, there are several menus for video segment exploration showing: video segments related to the current video segment 2032, other (recent) video segments from the same source 2034, video segments from “channels” (i.e. playlists) generated around a specific category and/or keyword(s) 2036, and news briefs 2038 (i.e. aggregations of video segments across one or more sources to provide a news summary). As can readily be appreciated, any of a variety of playlists can be generated utilizing video segment metadata annotations generated in accordance with embodiments of the invention. Various processes for generating news brief video segments in accordance with embodiments of the invention are discussed further below.
  • At the top of the displayed user interface 2000, there is a search bar 2040 for receiving a search query. In several embodiments, the query is executed by comparing keywords from the query to keywords contained within the segment of video content (e.g. speech, closed caption, metadata). In a number of embodiments, the query is executed by also considering the presence of keywords in additional sources of information that were determined to be related to the video segment during the process of generating the personalized playlist. As can readily be appreciated, indexes relating keywords to video segments that are constructed as part of the process of generating personalized playlists can also be utilized to generate lists of video segments in response to text based search queries in accordance with embodiments of the invention. Implementation of various video search engines in accordance with embodiments of the invention are described further below.
  • The displayed user interface 2000 also includes an option 2042 to enter a settings menu for adjusting preferences toward different categories of video content and/or sources of video content. A settings menu user interface in accordance with an embodiment of the invention is illustrated in FIG. 20B. The settings menu user interface 2050 includes a set of sliders 2052 indicating user preferences provided and/or inferred based upon a user's viewing history. A user can adjust an individual slider 2046 to modify the weighting attributed to the corresponding attribute of a video segment. In several embodiments, the user can add and/or remove any of a variety of factors to the list of factors considered by a playlist generation system. In several embodiments, the settings menu user interface can include a set of options 2056 that a user can select to specify a playlist duration. As noted above, playlist duration is a factor that can be considered in the selection of video segments to incorporate within a personalized playlist. In other embodiments, user preference information can be obtained via any of a variety of affordances provided via a user interface of a playback device as appropriate to the requirements of a specific application.
  • Mobile User Interfaces
  • The display and input capabilities of a playback device can inform the user interface provided by the playback device. A user interface for a touch screen computing device, such as (but not limited to) a tablet computer, in accordance with an embodiment of the invention is illustrated in FIG. 21A. The user interface 2100 includes a player region 2102 in which a video segment is played back. Due to the limited display size, the majority of the display is devoted to the playback region; however, the title 2104 and source 2106 of the video segment being played back are displayed above the player region 2102. The user interface also includes a channels button 2108 that can be selected to display a list of available playlists. A screen shot of a user interface in which channels are displayed in accordance with an embodiment of the invention is illustrated in FIG. 21B. The channels list 2150 includes the personalized playlist of video segments 2152 and selections for personalized playlists generated by filtering video segments based upon specific categories, sources, and/or keywords.
  • In a number of embodiments, a mobile computing device such as (but not limited to) a mobile phone or tablet computer can act as a second display enabling control of playlist playback on another playback device and/or providing additional information concerning a video segment being played back on a playback device. A screen shot of a “second screen” user interface generated by a tablet computing device in accordance with an embodiment of the invention is illustrated in FIG. 22A. The user interface 2200 includes a listing 2202 of video segments that are related to a video segment identified in a personalized playlist that is being played back on another playback device. In the illustrated embodiment, title 2204, source 2208, release date 2208, text summaries 2206 and one or more images 2212 are provided to describe each video segment in the listing 2202. In other embodiments, any of a variety of information can be presented to a user via a user interface to provide information concerning a video segment being played back on another playback device and/or related video segments.
  • A screen shot of a “second screen” user interface generated by a tablet computing device enabling control of playback of video segments identified in a personalized playlist on another playback device in accordance with an embodiment of the invention is illustrated in FIG. 22B. The user interface 2252 includes information (2204-2212) describing related videos and a set of controls 2252 that can be utilized to control playback of video segments identified in a personalized playlist on another playback device.
  • Although specific user interfaces are illustrated in FIGS. 20A-22B, any of a variety of user interfaces can be generated using numerous techniques based upon personalized playlists obtained from playlist generation systems as appropriate to the requirements of specific applications in accordance with embodiments of the invention. For example, appropriate user interfaces can be generated for wearable computing devices including (but not limited to) augmented reality headsets, and smart watches. In a number of embodiments, user interactions with a user interface and the user's viewing history can be logged into a database to update and/or infer user preferences. In several embodiments, logged user interactions can be analyzed to refine the manner in which future recommendations are generated. Processes for collecting and analyzing information concerning user interactions with video segments in accordance with embodiments of the invention are discussed further below.
  • Analytics
  • The user interaction information that can be logged by a personalized playlist generation system in accordance with embodiments of the invention is typically only limited by the user interface generated by a playback device and the input modalities available to the playback device. An example of a user interaction log generated based upon user interactions with a user interface generated to enable playback of video segments identified within a personalized playlist in accordance with an embodiment of the invention is illustrated in FIG. 23. The log includes information concerning video segments played by the user, the duration of playback, reordering of videos and other interactions related to the playback experiences such as volume control and display of closed caption text. In a number of embodiments, information concerning playback of video segments can be utilized to obtain metrics indicative of user interest such as (but not limited to) the percentage of a video segment played back. The illustrated log also includes information concerning user mouse activity such as mouse over events. In other embodiments, any manner in which a user interacts with a user interface can be logged and/or a subset of interactions can be logged as appropriate to the needs of a specific playlist generation system including but not limited to user interactions indicating sentiment (e.g. “like”, or “dislike”), sharing of content, skipping of content, rearranging and/or deleting video segments from a playlist and percentage of video segment watched. In a number of embodiments, playlist generation considers some or all user interactions contained within a log file and techniques including (but not limited to) linear regressions can be utilized to determine weighting parameters to apply to each category of user interactions considered during playlist generation. In other embodiments, any of a variety of techniques can be utilized to consider user history as appropriate to the requirements of specific applications.
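  • As a sketch of the regression idea, the example below fits per-interaction weights by ordinary least squares; the particular interaction features and the use of fraction-of-segment-watched as the engagement target are assumptions.

```python
import numpy as np

def fit_interaction_weights(log_rows):
    """Each row is a dict of interaction counts plus the fraction of the segment watched.

    Returns one weight per interaction type, fit so that weighted interaction
    counts predict the observed engagement.
    """
    features = ["liked", "shared", "skipped", "reordered", "captions_on"]
    X = np.array([[row.get(f, 0) for f in features] for row in log_rows], dtype=float)
    y = np.array([row["fraction_watched"] for row in log_rows], dtype=float)
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(features, weights))
```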
  • Although specific processes are described above with respect to the logging of user interactions with user interfaces and the use of user interaction information to continuously update and improve personalized video playlist generation, any of a variety of techniques can be utilized to infer user preferences from user interactions and incorporate the user preferences in the generation of personalized playlists as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • Generation of News Briefs
  • The ability to identify related video segments enables the generation of summaries of a number of related video segments, or news briefs. Text data extracted from video segments in the form of closed caption or subtitle data, or obtained through the use of automatic speech recognition, can be utilized to identify sentences that include keywords that are not present in related video segments. The portions of some or all of the related video segments in which the sentences containing the “unique” keywords occur can then be combined to provide a summary of the related video segments. In the context of news stories, the news brief can be constructed in time sequence order so that the news brief provides a sense of how a particular story evolved over time. In several embodiments, the video segments that are combined can be filtered based upon factors including (but not limited to) user preferences and/or proximity in time. In other embodiments, any of a variety of criteria can be utilized in the filtering and/or ordering of related video segments in the creation of a video summary sequence.
  • A process for generating a summary of related video segments in accordance with an embodiment of the invention is illustrated in FIG. 24. The process 2400 includes identifying (2402) related video segments and identifying (2404) unique keywords related to the video segments. In a number of embodiments, the unique keywords are extracted from text data contained within the video segment and/or through the use of automatic speech recognition. In this way, timestamps are associated with the keywords and a portion of the video segment such as (but not limited to) a sentence can be extracted (2406) from at least some of the related video segments. The extracted portions of the video segments can then be combined (2410) and encoded to create a video segment that is a summary of all of the related video segments. As noted above, any of a variety of criteria can be utilized to determine the ordering of the portions of video segments and/or to filter the portions of video segments that are included in the video summary as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
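  • A minimal sketch of the brief-assembly step is shown below; the timed-sentence representation, the keyword tokenization, and the downstream concatenation and encoding of the selected clips are placeholders for whatever timed-text and video-editing tools are actually used.

```python
def build_news_brief(related_segments):
    """Select, from each related segment, the timed sentences whose keywords do not
    appear in the other segments, and return clip boundaries in time order.

    Each segment: {'id', 'published', 'sentences': [(start, end, text), ...]}.
    """
    def keywords(text):
        return {w.lower().strip(".,!?") for w in text.split() if len(w) > 3}

    all_words = {seg["id"]: set().union(*(keywords(t) for _, _, t in seg["sentences"]))
                 for seg in related_segments}
    clips = []
    for seg in sorted(related_segments, key=lambda s: s["published"]):
        others = set().union(*(v for k, v in all_words.items() if k != seg["id"]))
        for start, end, text in seg["sentences"]:
            if keywords(text) - others:  # sentence contributes "unique" keywords
                clips.append((seg["id"], start, end))
    return clips  # pass to a video editor/encoder to concatenate the excerpts
```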
  • Video Search Engines
  • The techniques described above for annotating video segments and utilizing the annotations to generate indexes relating keywords to video segments are not limited to the generation of personalized playlists, but can be utilized in a myriad of applications including the provision of a video search engine service. A system for accessing video segments utilizing a video search engine service in accordance with an embodiment of the invention is illustrated in FIG. 25. The system 2500 includes a video search engine server system 2502 that is configured to crawl various servers including (but not limited to) content distribution networks 2508, web servers 2510, and social media server systems 2512, 2514 to identify video segments. The video search engine server can annotate the identified video segments using keyword and/or image metadata extracted from the video segment and/or from additional data sources identified as relevant utilizing processes similar to those described above with reference to FIGS. 11-18. The metadata annotations can be stored in a database 2516 and utilized to generate an inverted index relating keywords to identified video segments. The video search engine server system 2502 can then utilize the inverted index to identify video segments in response to a search query received from a user device 2518 via a network connection 2520. In a number of embodiments, the techniques described above for identifying the presence of image portions within a frame of a video segment can be utilized to provide a video search service that can accept images and/or video sequences as search query inputs.
  • A multi-modal video search engine server system that can be utilized to index video segments and respond to search queries in accordance with an embodiment of the invention is illustrated in FIG. 26. The multi-modal video search engine server system 2600 includes a processor 2610 in communication with volatile memory 2620, non-volatile memory 2630, and a network interface 2640. In the illustrated embodiment, the non-volatile memory 2630 includes an indexing application 2632 that configures the processor 2610 to annotate video segments with metadata 2622 describing the content of the video segment and generate an inverted index 2624 relating video segments to keywords. In several embodiments, the indexing application 2632 configures the processor 2610 to extract metadata from textual analysis of text data contained within a video segment and visual analysis of video data contained within the video segment. In a number of embodiments, the indexing application 2632 configures the processor 2610 to identify additional sources of relevant data that can be used to annotate the video segment based upon textual and visual comparisons of the video segment and sources of additional data. In other embodiments, any of a variety of techniques including (but not limited to) manual annotation of video segments can be utilized to associate metadata with individual video segments.
  • The non-volatile memory 2630 can also contain a search engine application 2634 that configures the processor 2610 to generate a user interface via which a user can provide a search query. As noted above, a search query can be in the form of a text string, an image, and/or a video sequence. The search engine application can utilize the inverted index to identify video segments relevant to text queries and can utilize the processes described above for locating image portions within frames of video to identify video segments relevant to images and/or video segments provided as search queries. In a number of embodiments, relevant video segments can also be found by comparing query images or frames to images or frames of video obtained from additional data sources known to be relevant to one or more video segments. In several embodiments, text data can be extracted from images and/or video sequences provided as search queries to the search engine application and a multi-modal search can be performed utilizing the extracted text and searches for portions of images within frames of indexed video segments. As can readily be appreciated, identification of a video segment can also be utilized to identify other relevant video segments using the processes for identifying relationships between video segments described above with reference to FIG. 18.
  • As can readily be appreciated, the functions of crawling, indexing, and responding to search queries can be distributed across a number of different servers in a video search engine server system. Furthermore, depending upon the number of video segments indexed, the size of the database(s) utilized to store the metadata annotations and/or the inverted index may be sufficiently large as to necessitate the splitting of the database table across multiple computing devices utilizing techniques that are well known in the provision of search engine services. Accordingly, although specific architectures for providing online video search engine services are described above with reference to FIGS. 25 and 26, any of a variety of system implementations can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
  • A process for generating multi-modal video search engine results in accordance with an embodiment of the invention is illustrated in FIG. 27. Typically, a set of video segments is provided and/or obtained by crawling video sources and the process 2700 identifies (2702) keywords related to the video segments using text and visual analysis of the video segments. The identified keywords can be utilized to generate (2704) an inverted index mapping keywords to video segments. When a search query is received (2706), keywords can be extracted from text, an image, and/or a video sequence provided as part of the search query and the keywords used to identify (2708) relevant videos from the inverted index. As noted above, a search can also be performed for one or more image portions within the frames of the indexed video segments. The relevancy of the identified video segments can be scored (2710) and search results including a listing of one or more video segments can be returned. In several embodiments, the process of annotating the video segments includes identifying additional sources of relevant data and links to the additional sources of relevant data and/or excerpts of relevant data can be returned with the search results.
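  • The text path through this process can be sketched as follows; the plain keyword-overlap scoring stands in for the fuller relevancy scoring described here, and the image/video query path and related-segment boosts are omitted.

```python
from collections import defaultdict

def build_inverted_index(segments):
    """segments: {segment_id: iterable of keywords drawn from text analysis,
    visual analysis, and linked additional sources of relevant data}."""
    index = defaultdict(set)
    for seg_id, keywords in segments.items():
        for kw in keywords:
            index[kw.lower()].add(seg_id)
    return index

def search(index, query, top_k=10):
    """Score each candidate segment by the number of query keywords it matches."""
    scores = defaultdict(int)
    for kw in (w.lower() for w in query.split()):
        for seg_id in index.get(kw, ()):
            scores[seg_id] += 1
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]
```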
  • In many embodiments, video segments are scored based upon a variety of factors including the number of related stories. Analysis of news story video segments reveals that related stories tend not to form fully connected graphs. Therefore, the number of related video segments (stories) can be indicative of the importance of the video segment. Time can also be an important measure of importance: the number of related video segments published within a predetermined time period can provide an even stronger indication of the relevance of a story to a particular query. In several embodiments, the relevance of a video segment to a search query can also be ranked based upon common keywords, frequency of common keywords, and/or common images. In several embodiments, a search query that includes an image, video sequence, and/or URL can be related to sources of additional data including (but not limited to) other video segments, and/or online articles. The sources of additional data can be utilized to perform keyword expansion and the expanded set of keywords utilized in scoring the relevance of a specific video segment to the search query.
  • In a number of embodiments, search result scores can be personalized based upon similar factors to those discussed above with respect to the generation of personalized video playlists. In this way, the most relevant search result for a specific user can be informed by factors including (but not limited to) a user's preferences with respect to content source, anchor people, and/or actors. In other embodiments, video search results can be scored and/or personalized in any of a variety of ways appropriate to the requirements of specific applications.
  • In several embodiments, analytics are collected (2712) concerning user interactions with video segments selected by users. In several embodiments, metrics including (but not limited to) percentage of playback duration watched can be utilized to infer information concerning the relevancy of the video segment to the search query and update (2714) relevance parameters associated with an indexed video by a video search engine service. In other embodiments, any of a variety of analytics can be collected and utilized to improve the performance of the search results in accordance with embodiments of the invention.
  • Although certain specific features and aspects of personalized video playlist generation systems, multi-modal video segmentation systems, and video search engine systems have been described herein, many additional modifications and variations would be apparent to those skilled in the art. For example, the features and aspects described herein may be implemented independently, cooperatively or alternatively without deviating from the spirit of the disclosure. It is therefore to be understood that the systems and methods disclosed herein may be practiced otherwise than as specifically described. Accordingly, the scope of the invention should be determined not by the described embodiments, but by the appended claims and their equivalents.

Claims (26)

What is claimed is:
1. A video search engine, comprising:
a video search engine server system, comprising:
at least one processor;
memory containing an indexing application and a search engine application;
wherein the indexing application configures at least one processor to:
identify a set of video segments;
extract text data from a selected video segment in the set of video segments and use keywords from the extracted text data to identify candidate sources of relevant data based upon keywords contained within the candidate sources of relevant data; and
identify images from the candidate sources of relevant data, where at least a portion of the image matches at least a portion of a frame of video from within the selected video segment;
identify additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, and the identified images; and
generate an inverted index of video segments in the set of video segments that are relevant to specific keywords using the extracted keywords and the keywords contained within the additional sources of relevant data;
wherein the search engine application configures at least one processor to:
receive a search query;
identify video segments from the set of video segments that are relevant to the search query using the inverted index;
score the relevancy of the identified video segments to the search query; and
generate search results identifying at least one video segment relevant to the search query.
2. The video search engine of claim 1, wherein:
the search query is a text string; and
the search engine application configures at least one processor to extract query keywords from the text string and identify relevant video segments using the inverted index based upon the extracted query keywords.
3. The video search engine of claim 1, wherein:
the search query is an image;
the search engine application configures at least one processor to identify frames from video segments in the set of video segments, where at least a portion of the image matches at least a portion of a frame of video from within a given video segment from the set of video segments.
4. The video search engine of claim 3, wherein the search engine application configures at least one processor to:
identify keywords relevant to the image based upon the keywords that are relevant to video segments from the set of video segments that include at least one frame in which at least a portion of the image matches at least a portion of the frame; and
identify relevant video segments using the inverted index based upon the keywords identified as relevant to the image.
5. The video search engine of claim 1, wherein:
the search query is a video segment;
the search engine application configures at least one processor to identify frames from video segments in the set of video segments, where at least a portion of a frame from the query video segment matches at least a portion of a frame of video from within a given video segment from the set of video segments.
6. The video search engine of claim 5, wherein the search engine application configures at least one processor to:
identify keywords relevant to the query video segment based upon the keywords that are relevant to video segments from the set of video segments that include at least one frame in which at least a portion of the frame matches at least a portion of a frame from the query video segment; and
identify relevant video segments using the inverted index based upon the keywords identified as relevant to the query video segment.
7. The video search engine of claim 1, wherein the indexing application further configures at least one processor to identify additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, the identified images, and timestamps associated with the selected video segment and the candidate sources of relevant data.
8. The video search engine of claim 1, wherein the indexing application further configures at least one processor to use keywords from the extracted text data to identify candidate sources of relevant data based upon keywords contained within the candidate sources of relevant data based upon bag-of-words histogram comparisons that enable matching of text segments from the extracted text data with similar distributions of words in a candidate source of relevant data.
9. The video search engine of claim 8, wherein the indexing application further configures at least one processor to calculate a term frequency-inverse document frequency (tf-idf) histogram intersection score (S(Ha, Hb)) as follows:
$$
S(H_a, H_b) = \sum_{w} \mathrm{idf}(w) \cdot \min\bigl(H_a(w), H_b(w)\bigr), \qquad
\mathrm{idf}(w) = \log\Bigl(\max_{x} f(x)\Bigr) - \log\bigl(f(w)\bigr)
$$
where H_a(w) and H_b(w) are the L1-normalized histograms of the words in the two sets of words; and
{f(w)} is the set of estimated relative word frequencies.
10. The video search engine of claim 9, where the indexing application further configures at least one processor to determine that a candidate source of relevant data is an additional source of relevant data when the tf-idf histogram intersection score (S(Ha, Hb)) exceeds a predetermined threshold.
11. The video search engine of claim 1, wherein the indexing application further configures at least one processor to:
identify named entities within the text data extracted from the selected video segment; and
determine that a candidate source of relevant data is an additional source of relevant data when a predetermined number of named entities are present within both the candidate source of relevant data and the text data extracted from the selected video segment.
12. The video search engine of claim 11, wherein the indexing application further configures at least one processor to identify additional named entities by performing object recognition.
13. The video search engine of claim 1, wherein the indexing application further configures at least one processor to identify candidate sources of relevant data by providing at least some of the keywords extracted from the selected video segment to a search engine.
14. The video search engine of claim 13, wherein the indexing application further configures at least one processor to identify a title from text extracted from at least one frame of video from a selected video segment and to identify candidate sources of relevant data, and wherein the keyword provided to the search engine is the extracted title.
15. The video search engine of claim 1, wherein the indexing application further configures at least one processor to identify at least a portion of an image from a candidate source of relevant data that matches at least a portion of a frame of video from within the selected video segment by determining that a given frame of video contains a region that includes a geometrically and photometrically distorted version of a portion of an image obtained from the candidate source of relevant data.
16. The video search engine of claim 1, wherein:
the indexing application configures at least one processor to identify relationships between individual video segments in the set of video segments; and
the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of related video segments in the set of video segments.
17. The video search engine of claim 1, wherein:
timestamps are associated with the video segments in the set of video segments; and
the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of related video segments in the set of video segments with associated timestamps that are within a predetermined time period.
18. The video search engine of claim 16, wherein the indexing application configures at least one processor to identify whether video segments are related based upon keywords associated with the video segments.
19. The video search engine of claim 18, wherein the indexing application configures at least one processor to calculate a term frequency-inverse document frequency (tf-idf) histogram intersection score (S(Ha, Hb)) for the keywords associated with the two video segments as follows:
$$
S(H_a, H_b) = \sum_{w} \mathrm{idf}(w) \cdot \min\bigl(H_a(w), H_b(w)\bigr), \qquad
\mathrm{idf}(w) = \log\Bigl(\max_{x} f(x)\Bigr) - \log\bigl(f(w)\bigr)
$$
where H_a(w) and H_b(w) are the L1-normalized histograms of the words in the two sets of words; and
{f(w)} is the set of estimated relative word frequencies.
20. The video search engine of claim 19, wherein the indexing application configures at least one processor to determine that a first video segment is related to a second video segment when the term frequency-inverse document frequency (tf-idf) histogram intersection score exceeds a first threshold and the number of named entities associated with each of the video segments exceeds a second threshold.
21. The video search engine of claim 1, wherein the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of common keywords.
22. The video search engine of claim 1, wherein the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the frequency of the common keywords with respect to the specific video segment.
23. The video search engine of claim 1, wherein the search engine application configures at least one processor to determine at least a portion of the relevancy score of a specific video segment to the search query based upon the number of images from the search query, where at least a portion of the image matches at least a portion of a frame from the specific video segment.
24. The video search engine of claim 1, wherein the search engine application configures at least one processor to weight the relevancy score of a specific video segment based upon user preferences.
25. The video search engine of claim 1, wherein the search results also include links to additional sources of relevant data that are relevant to the relevant video segments identified in the search results.
26. A method of providing a video search engine service, comprising:
identifying a set of video segments using a video search engine server system;
extracting text data from a selected video segment in the set of video segments using the video search engine server system;
identifying candidate sources of relevant data using the video search engine server system based upon keywords contained within the candidate sources of relevant data and keywords from the extracted text data; and
identifying images from the candidate sources of relevant data using the video search engine server system, where at least a portion of the image matches at least a portion of a frame of video from within the selected video segment;
identifying additional sources of relevant data from the candidate sources of relevant data based upon the extracted keywords, the keywords in the candidate sources of relevant data, and the identified images; and
generating an inverted index of video segments in the set of video segments that are relevant to specific keywords using the video search engine server system based upon the extracted keywords and the keywords contained within the additional sources of relevant data;
receiving a search query using the video search engine server system;
identifying video segments from the set of video segments that are relevant to the search query using the video search engine server system based upon the inverted index;
scoring the relevancy of the identified video segments to the search query using the video search engine server system; and
generating search results identifying at least one video segment relevant to the search using the video search engine server system.
US14/325,191 2014-04-14 2014-07-07 Systems and Methods for Performing Multi-Modal Video Search Abandoned US20150293995A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/325,191 US20150293995A1 (en) 2014-04-14 2014-07-07 Systems and Methods for Performing Multi-Modal Video Search

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461978988P 2014-04-14 2014-04-14
US14/325,191 US20150293995A1 (en) 2014-04-14 2014-07-07 Systems and Methods for Performing Multi-Modal Video Search

Publications (1)

Publication Number Publication Date
US20150293995A1 true US20150293995A1 (en) 2015-10-15

Family

ID=54265214

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/325,191 Abandoned US20150293995A1 (en) 2014-04-14 2014-07-07 Systems and Methods for Performing Multi-Modal Video Search
US14/325,177 Abandoned US20150293928A1 (en) 2014-04-14 2014-07-07 Systems and Methods for Generating Personalized Video Playlists
US14/325,202 Expired - Fee Related US9253511B2 (en) 2014-04-14 2014-07-07 Systems and methods for performing multi-modal video datastream segmentation

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/325,177 Abandoned US20150293928A1 (en) 2014-04-14 2014-07-07 Systems and Methods for Generating Personalized Video Playlists
US14/325,202 Expired - Fee Related US9253511B2 (en) 2014-04-14 2014-07-07 Systems and methods for performing multi-modal video datastream segmentation

Country Status (1)

Country Link
US (3) US20150293995A1 (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160063103A1 (en) * 2014-08-27 2016-03-03 International Business Machines Corporation Consolidating video search for an event
CN105389361A (en) * 2015-11-05 2016-03-09 百度在线网络技术(北京)有限公司 Search recommendation method and apparatus
US20170109327A1 (en) * 2015-05-20 2017-04-20 Shenzhen Skyworth-Rgb Electronic Co., Ltd Method and system for webpage processing
CN106682195A (en) * 2016-12-29 2017-05-17 北京奇虎科技有限公司 Method for processing search result page, search server and system
US9870800B2 (en) 2014-08-27 2018-01-16 International Business Machines Corporation Multi-source video input
WO2018026567A1 (en) * 2016-08-01 2018-02-08 Microsoft Technology Licensing, Llc Video segment playlist generation in a video management system
US10277953B2 (en) * 2016-12-06 2019-04-30 The Directv Group, Inc. Search for content data in content
US10448063B2 (en) * 2017-02-22 2019-10-15 International Business Machines Corporation System and method for perspective switching during video access
US10474724B1 (en) * 2015-09-18 2019-11-12 Mpulse Mobile, Inc. Mobile content attribute recommendation engine
US10592750B1 (en) * 2015-12-21 2020-03-17 Amazon Technlogies, Inc. Video rule engine
JP2020042770A (en) * 2018-09-07 2020-03-19 台達電子工業股▲ふん▼有限公司Delta Electronics,Inc. Data search method and data search system
US20200195989A1 (en) * 2018-12-14 2020-06-18 Rovi Guides, Inc. Generating media content keywords based on video-hosting website content
CN113127679A (en) * 2019-12-30 2021-07-16 阿里巴巴集团控股有限公司 Video searching method and device and index construction method and device
US20210224346A1 (en) 2018-04-20 2021-07-22 Facebook, Inc. Engaging Users by Personalized Composing-Content Recommendation
US20220012076A1 (en) * 2018-04-20 2022-01-13 Facebook, Inc. Processing Multimodal User Input for Assistant Systems
US11386163B2 (en) 2018-09-07 2022-07-12 Delta Electronics, Inc. Data search method and data search system thereof for generating and comparing strings
US20220284218A1 (en) * 2021-03-05 2022-09-08 Beijing Baidu Netcom Science Technology Co., Ltd. Video classification method, electronic device and storage medium
US11604920B2 (en) 2020-04-20 2023-03-14 Microsoft Technology Licensing, Llc Visual parsing for annotation extraction
US11715042B1 (en) 2018-04-20 2023-08-01 Meta Platforms Technologies, Llc Interpretability of deep reinforcement learning models in assistant systems
WO2023095043A3 (en) * 2021-11-24 2023-09-21 Jio Platforms Limited System and method for generating recommendations from multiple domains
US11886473B2 (en) 2018-04-20 2024-01-30 Meta Platforms, Inc. Intent identification for agent matching by assistant systems
US12118371B2 (en) 2018-04-20 2024-10-15 Meta Platforms, Inc. Assisting users with personalized and contextual communication content

Families Citing this family (89)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140123178A1 (en) * 2012-04-27 2014-05-01 Mixaroo, Inc. Self-learning methods, entity relations, remote control, and other features for real-time processing, storage, indexing, and delivery of segmented video
US11055340B2 (en) * 2013-10-03 2021-07-06 Minute Spoteam Ltd. System and method for creating synopsis for multimedia content
US9646227B2 (en) * 2014-07-29 2017-05-09 Microsoft Technology Licensing, Llc Computerized machine learning of interesting video sections
US9934423B2 (en) 2014-07-29 2018-04-03 Microsoft Technology Licensing, Llc Computerized prominent character recognition in videos
US9628873B2 (en) * 2014-09-05 2017-04-18 Verizon Patent And Licensing Inc. Methods and systems for identifying a media program clip associated with a trending topic
US9467743B2 (en) * 2014-09-15 2016-10-11 Verizon Patent And Licensing Inc. Personalized content aggregation platform
US10318575B2 (en) * 2014-11-14 2019-06-11 Zorroa Corporation Systems and methods of building and using an image catalog
WO2016081749A1 (en) 2014-11-19 2016-05-26 Google Inc. Methods, systems, and media for presenting related media content items
WO2016118537A1 (en) * 2015-01-19 2016-07-28 Srinivas Rao Method and system for creating seamless narrated videos using real time streaming media
US11488042B1 (en) * 2015-02-26 2022-11-01 Imdb.Com, Inc. Dynamic determination of media consumption
US20160294891A1 (en) 2015-03-31 2016-10-06 Facebook, Inc. Multi-user media presentation system
US10575057B2 (en) * 2015-04-23 2020-02-25 Rovi Guides, Inc. Systems and methods for improving accuracy in media asset recommendation models
US10003836B2 (en) 2015-04-23 2018-06-19 Rovi Guides, Inc. Systems and methods for improving accuracy in media asset recommendation models based on users' levels of enjoyment with respect to media assets
WO2016196693A1 (en) * 2015-06-01 2016-12-08 Miller Benjamin Aaron Content segmentation and time reconciliation
US20170055014A1 (en) * 2015-08-21 2017-02-23 Vilynx, Inc. Processing video usage information for the delivery of advertising
KR101656245B1 (en) * 2015-09-09 2016-09-09 주식회사 위버플 Method and system for extracting sentences
US9858967B1 (en) * 2015-09-09 2018-01-02 A9.Com, Inc. Section identification in video content
US10261964B2 (en) 2016-01-04 2019-04-16 Gracenote, Inc. Generating and distributing playlists with music and stories having related moods
US20170235828A1 (en) * 2016-02-12 2017-08-17 Microsoft Technology Licensing, Llc Text Digest Generation For Searching Multiple Video Streams
US9984314B2 (en) 2016-05-06 2018-05-29 Microsoft Technology Licensing, Llc Dynamic classifier selection based on class skew
US10204417B2 (en) * 2016-05-10 2019-02-12 International Business Machines Corporation Interactive video generation
US11409791B2 (en) 2016-06-10 2022-08-09 Disney Enterprises, Inc. Joint heterogeneous language-vision embeddings for video tagging and search
US20170366860A1 (en) * 2016-06-21 2017-12-21 Thomson Licensing Apparatus and Method for Recording Transition History and Selecting Next Playback from the Transition History
US10956481B2 (en) 2016-07-29 2021-03-23 Splunk Inc. Event-based correlation of non-text machine data
US11314799B2 (en) * 2016-07-29 2022-04-26 Splunk Inc. Event-based data intake and query system employing non-text machine data
US10467257B2 (en) 2016-08-09 2019-11-05 Zorroa Corporation Hierarchical search folders for a document repository
US10311112B2 (en) 2016-08-09 2019-06-04 Zorroa Corporation Linearized search of visual media
US10277540B2 (en) * 2016-08-11 2019-04-30 Jurni Inc. Systems and methods for digital video journaling
US10664514B2 (en) 2016-09-06 2020-05-26 Zorroa Corporation Media search processing using partial schemas
US10387488B2 (en) * 2016-12-07 2019-08-20 AT&T Intellectual Property I, L.P. User configurable radio
US10171843B2 (en) * 2017-01-19 2019-01-01 International Business Machines Corporation Video segment manager
US10789291B1 (en) 2017-03-01 2020-09-29 Matroid, Inc. Machine learning in video classification with playback highlighting
US10225603B2 (en) 2017-03-13 2019-03-05 Wipro Limited Methods and systems for rendering multimedia content on a user device
US10735784B2 (en) * 2017-03-31 2020-08-04 Scripps Networks Interactive, Inc. Social media asset portal
CN108932252A (en) * 2017-05-25 2018-12-04 合网络技术(北京)有限公司 Video aggregation method and device
US10701413B2 (en) * 2017-06-05 2020-06-30 Disney Enterprises, Inc. Real-time sub-second download and transcode of a video stream
US10652592B2 (en) * 2017-07-02 2020-05-12 Comigo Ltd. Named entity disambiguation for providing TV content enrichment
EP3425483B1 (en) * 2017-07-07 2024-01-10 Accenture Global Solutions Limited Intelligent object recognizer
US10970334B2 (en) * 2017-07-24 2021-04-06 International Business Machines Corporation Navigating video scenes using cognitive insights
US10769207B2 (en) * 2017-08-17 2020-09-08 Opentv, Inc. Multimedia focalization
US11275787B2 (en) * 2017-08-31 2022-03-15 Micro Focus Llc Entity viewpoint determinations
CN107770598B (en) * 2017-10-12 2020-06-30 维沃移动通信有限公司 Synchronous play detection method and mobile terminal
US10628486B2 (en) * 2017-11-15 2020-04-21 Google Llc Partitioning videos
CN107918657B (en) * 2017-11-20 2021-10-08 腾讯科技(深圳)有限公司 Data source matching method and device
CN108134950B (en) * 2017-12-07 2022-05-06 广州锐竞信息科技有限责任公司 Intelligent video recommendation method and system
CN108388570B (en) * 2018-01-09 2021-09-28 北京一览科技有限公司 Method and device for carrying out classification matching on videos and selection engine
US12052455B2 (en) * 2018-01-19 2024-07-30 Mux, Inc. System and method for detecting and reporting concurrent viewership of online audio-video content
US11064268B2 (en) * 2018-03-23 2021-07-13 Disney Enterprises, Inc. Media content metadata mapping
US10558761B2 (en) * 2018-07-05 2020-02-11 Disney Enterprises, Inc. Alignment of video and textual sequences for metadata analysis
CN109101558B (en) * 2018-07-12 2022-07-01 北京猫眼文化传媒有限公司 Video retrieval method and device
US11651043B2 (en) * 2018-07-24 2023-05-16 MachEye, Inc. Leveraging analytics across disparate computing devices
US11853107B2 (en) 2018-07-24 2023-12-26 MachEye, Inc. Dynamic phase generation and resource load reduction for a query
US11816436B2 (en) 2018-07-24 2023-11-14 MachEye, Inc. Automated summarization of extracted insight data
US11841854B2 (en) 2018-07-24 2023-12-12 MachEye, Inc. Differentiation of search results for accurate query output
US11341126B2 (en) 2018-07-24 2022-05-24 MachEye, Inc. Modifying a scope of a canonical query
US10764654B2 (en) 2018-08-01 2020-09-01 Dish Network L.L.C. Automatically generating supercuts
US11153663B2 (en) 2018-08-01 2021-10-19 Dish Network L.L.C. Automatically generating supercuts
CN109151557B (en) * 2018-08-10 2021-02-19 Oppo广东移动通信有限公司 Video creation method and related device
US11550951B2 (en) * 2018-09-18 2023-01-10 Inspired Patents, Llc Interoperable digital social recorder of multi-threaded smart routed media
CN110248208B (en) * 2018-10-22 2023-03-24 浙江大华技术股份有限公司 Video playing method and device, electronic equipment and storage medium
CN111314775B (en) * 2018-12-12 2021-09-07 华为终端有限公司 Video splitting method and electronic equipment
US11102540B2 (en) * 2019-04-04 2021-08-24 Wangsu Science & Technology Co., Ltd. Method, device and system for synchronously playing message stream and audio-video stream
CN110035311A (en) * 2019-04-04 2019-07-19 网宿科技股份有限公司 A kind of methods, devices and systems that message flow and audio/video flow is played simultaneously
US12073177B2 (en) * 2019-05-17 2024-08-27 Applications Technology (Apptek), Llc Method and apparatus for improved automatic subtitle segmentation using an artificial neural network model
CN110321441A (en) * 2019-07-11 2019-10-11 北京奇艺世纪科技有限公司 A kind of method and relevant device generating recommendation information
US11836179B1 (en) * 2019-10-29 2023-12-05 Meta Platforms Technologies, Llc Multimedia query system
CN111212329B (en) * 2020-01-14 2021-03-19 北京锐马视讯科技有限公司 IP video/audio code stream switching method and device, equipment and storage medium
IT202000005875A1 (en) 2020-03-19 2021-09-19 Radio Dimensione Suono Spa SYSTEM AND METHOD OF AUTOMATIC ENRICHMENT OF INFORMATION FOR AUDIO STREAMS
US11238287B2 (en) * 2020-04-02 2022-02-01 Rovi Guides, Inc. Systems and methods for automated content curation using signature analysis
US11574248B2 (en) * 2020-04-02 2023-02-07 Rovi Guides, Inc. Systems and methods for automated content curation using signature analysis
CN111522970A (en) * 2020-04-10 2020-08-11 广东小天才科技有限公司 Exercise recommendation method, exercise recommendation device, exercise recommendation equipment and storage medium
CN111681677B (en) * 2020-06-09 2023-08-04 杭州星合尚世影视传媒有限公司 Video object sound effect construction method, system, device and readable storage medium
CN111669627B (en) * 2020-06-30 2022-02-15 广州市百果园信息技术有限公司 Method, device, server and storage medium for determining video code rate
WO2022031283A1 (en) * 2020-08-05 2022-02-10 Hewlett-Packard Development Company, L.P. Video stream content
EP3985669A1 (en) * 2020-10-16 2022-04-20 Moodagent A/S Methods and systems for automatically matching audio content with visual input
US12039501B2 (en) * 2020-10-26 2024-07-16 Genpact Usa, Inc. Artificial intelligence based determination of damage to physical structures via video
CN112033335B (en) * 2020-11-05 2021-01-26 成都中轨轨道设备有限公司 Intelligent monitoring and early warning system and method for railway gauging rule
US11297383B1 (en) 2020-11-20 2022-04-05 International Business Machines Corporation Gap filling using personalized injectable media
US20220245424A1 (en) * 2021-01-29 2022-08-04 Samsung Electronics Co., Ltd. Microgenre-based hyper-personalization with multi-modal machine learning
US12056929B2 (en) * 2021-05-11 2024-08-06 Google Llc Automatic generation of events using a machine-learning model
US11805083B2 (en) * 2021-05-27 2023-10-31 Rovi Guides, Inc. System and methods to generate messages for user shared media
US12022153B2 (en) * 2021-06-29 2024-06-25 Rovi Guides, Inc. Methods and systems for generating a playlist of content items and content item segments
FR3125193A1 (en) * 2021-07-08 2023-01-13 Ecole nationale supérieure de l'électronique et de ses applications Computerized process of audiovisual de-linearization
US20230418861A1 (en) * 2022-06-28 2023-12-28 Adobe Inc. Generating embeddings for text and image queries within a common embedding space for visual-text image searches
CN115695910A (en) * 2022-09-27 2023-02-03 北京奇艺世纪科技有限公司 Video arrangement method and device, electronic equipment and storage medium
US12046260B2 (en) 2022-10-27 2024-07-23 Spotify Ab Architecture for personalized media segmentation
EP4391552A1 (en) * 2022-12-23 2024-06-26 NOS Inovação, S.A. Method and system for generating video streaming content bookmarks
CN116055825B (en) * 2023-01-10 2024-08-09 湖南快乐阳光互动娱乐传媒有限公司 Method and device for generating video title
CN116958331B (en) * 2023-09-20 2024-01-19 四川蜀天信息技术有限公司 Sound and picture synchronization adjusting method and device and electronic equipment

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030093790A1 (en) * 2000-03-28 2003-05-15 Logan James D. Audio and video program recording, editing and playback systems using metadata
US7403910B1 (en) 2000-04-28 2008-07-22 Netflix, Inc. Approach for estimating user ratings of items
WO2006130824A2 (en) 2005-06-01 2006-12-07 Google Inc. Media play optimization
US9697231B2 (en) * 2005-11-09 2017-07-04 Cxense Asa Methods and apparatus for providing virtual media channels based on media search
US8275764B2 (en) 2007-08-24 2012-09-25 Google Inc. Recommending media programs based on media program popularity
US7853622B1 (en) 2007-11-01 2010-12-14 Google Inc. Video-related recommendations using link structure
US8055655B1 (en) 2008-02-15 2011-11-08 Google Inc. User interaction based related digital content items
US9396258B2 (en) 2009-01-22 2016-07-19 Google Inc. Recommending video programs
US8402482B2 (en) 2010-03-23 2013-03-19 Google Inc. Distributing content
US8533066B2 (en) 2010-10-13 2013-09-10 Hulu, LLC Method and apparatus for recommending media programs based on correlated user feedback
US8693844B2 (en) 2010-10-15 2014-04-08 Hulu, LLC Bookmarking media programs for subsequent viewing
EP2646964A4 (en) 2010-12-01 2015-06-03 Google Inc Recommendations based on topic clusters
GB2486257B (en) * 2010-12-09 2015-05-27 Samsung Electronics Co Ltd Multimedia system and method of recommending multimedia content
US10311386B2 (en) 2011-07-08 2019-06-04 Netflix, Inc. Identifying similar items based on interaction history
US8868481B2 (en) 2011-12-14 2014-10-21 Google Inc. Video recommendation based on video co-occurrence statistics
US8484203B1 (en) 2012-01-04 2013-07-09 Google Inc. Cross media type recommendations for media items based on identified entities

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050262051A1 (en) * 2004-05-13 2005-11-24 International Business Machines Corporation Method and system for propagating annotations using pattern matching
US7933338B1 (en) * 2004-11-10 2011-04-26 Google Inc. Ranking video articles
US20070038600A1 (en) * 2005-08-10 2007-02-15 Guha Ramanathan V Detecting spam related and biased contexts for programmable search engines
US20090112830A1 (en) * 2007-10-25 2009-04-30 Fuji Xerox Co., Ltd. System and methods for searching images in presentations
US20120036139A1 (en) * 2009-03-31 2012-02-09 Kabushiki Kaisha Toshiba Content recommendation device, method of recommending content, and computer program product
US20110099195A1 (en) * 2009-10-22 2011-04-28 Chintamani Patwardhan Method and Apparatus for Video Search and Delivery
US20140044361A1 (en) * 2011-02-21 2014-02-13 Enswers Co., Ltd. Device and method for analyzing the correlation between an image and another image or between an image and a video
US20150055854A1 (en) * 2013-08-20 2015-02-26 Xerox Corporation Learning beautiful and ugly visual attributes

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10713297B2 (en) 2014-08-27 2020-07-14 International Business Machines Corporation Consolidating video search for an event
US9870800B2 (en) 2014-08-27 2018-01-16 International Business Machines Corporation Multi-source video input
US20160063103A1 (en) * 2014-08-27 2016-03-03 International Business Machines Corporation Consolidating video search for an event
US10102285B2 (en) * 2014-08-27 2018-10-16 International Business Machines Corporation Consolidating video search for an event
US11847163B2 (en) 2014-08-27 2023-12-19 International Business Machines Corporation Consolidating video search for an event
US10332561B2 (en) 2014-08-27 2019-06-25 International Business Machines Corporation Multi-source video input
US20170109327A1 (en) * 2015-05-20 2017-04-20 Shenzhen Skyworth-Rgb Electronic Co., Ltd Method and system for webpage processing
US9898443B2 (en) * 2015-05-20 2018-02-20 Shenzhen Skyworth-Rgb Electronic Co., Ltd Method and system for webpage processing
US10474724B1 (en) * 2015-09-18 2019-11-12 Mpulse Mobile, Inc. Mobile content attribute recommendation engine
US11416568B2 (en) * 2015-09-18 2022-08-16 Mpulse Mobile, Inc. Mobile content attribute recommendation engine
CN105389361A (en) * 2015-11-05 2016-03-09 百度在线网络技术(北京)有限公司 Search recommendation method and apparatus
US10592750B1 (en) * 2015-12-21 2020-03-17 Amazon Technologies, Inc. Video rule engine
US10116981B2 (en) 2016-08-01 2018-10-30 Microsoft Technology Licensing, Llc Video management system for generating video segment playlist using enhanced segmented videos
CN109564576A (en) * 2016-08-01 2019-04-02 微软技术许可有限责任公司 Video clip playlist in system for managing video generates
WO2018026567A1 (en) * 2016-08-01 2018-02-08 Microsoft Technology Licensing, Llc Video segment playlist generation in a video management system
US10277953B2 (en) * 2016-12-06 2019-04-30 The Directv Group, Inc. Search for content data in content
CN106682195A (en) * 2016-12-29 2017-05-17 北京奇虎科技有限公司 Method for processing search result page, search server and system
US10448063B2 (en) * 2017-02-22 2019-10-15 International Business Machines Corporation System and method for perspective switching during video access
US10674183B2 (en) 2017-02-22 2020-06-02 International Business Machines Corporation System and method for perspective switching during video access
US11727677B2 (en) 2018-04-20 2023-08-15 Meta Platforms Technologies, Llc Personalized gesture recognition for user interaction with assistant systems
US11908181B2 (en) 2018-04-20 2024-02-20 Meta Platforms, Inc. Generating multi-perspective responses by assistant systems
US20210224346A1 (en) 2018-04-20 2021-07-22 Facebook, Inc. Engaging Users by Personalized Composing-Content Recommendation
US20220012076A1 (en) * 2018-04-20 2022-01-13 Facebook, Inc. Processing Multimodal User Input for Assistant Systems
US12131522B2 (en) 2018-04-20 2024-10-29 Meta Platforms, Inc. Contextual auto-completion for assistant systems
US12131523B2 (en) 2018-04-20 2024-10-29 Meta Platforms, Inc. Multiple wake words for systems with multiple smart assistants
US12125272B2 (en) 2018-04-20 2024-10-22 Meta Platforms Technologies, Llc Personalized gesture recognition for user interaction with assistant systems
US12118371B2 (en) 2018-04-20 2024-10-15 Meta Platforms, Inc. Assisting users with personalized and contextual communication content
US11544305B2 (en) 2018-04-20 2023-01-03 Meta Platforms, Inc. Intent identification for agent matching by assistant systems
US12112530B2 (en) 2018-04-20 2024-10-08 Meta Platforms, Inc. Execution engine for compositional entity resolution for assistant systems
US11676220B2 (en) * 2018-04-20 2023-06-13 Meta Platforms, Inc. Processing multimodal user input for assistant systems
US20230186618A1 (en) 2018-04-20 2023-06-15 Meta Platforms, Inc. Generating Multi-Perspective Responses by Assistant Systems
US11688159B2 (en) 2018-04-20 2023-06-27 Meta Platforms, Inc. Engaging users by personalized composing-content recommendation
US11694429B2 (en) 2018-04-20 2023-07-04 Meta Platforms Technologies, Llc Auto-completion for gesture-input in assistant systems
US11704900B2 (en) 2018-04-20 2023-07-18 Meta Platforms, Inc. Predictive injection of conversation fillers for assistant systems
US11704899B2 (en) 2018-04-20 2023-07-18 Meta Platforms, Inc. Resolving entities from multiple data sources for assistant systems
US11715289B2 (en) 2018-04-20 2023-08-01 Meta Platforms, Inc. Generating multi-perspective responses by assistant systems
US11715042B1 (en) 2018-04-20 2023-08-01 Meta Platforms Technologies, Llc Interpretability of deep reinforcement learning models in assistant systems
US11721093B2 (en) 2018-04-20 2023-08-08 Meta Platforms, Inc. Content summarization for assistant systems
US12001862B1 (en) 2018-04-20 2024-06-04 Meta Platforms, Inc. Disambiguating user input with memorization for improved user assistance
US11908179B2 (en) 2018-04-20 2024-02-20 Meta Platforms, Inc. Suggestions for fallback social contacts for assistant systems
US11886473B2 (en) 2018-04-20 2024-01-30 Meta Platforms, Inc. Intent identification for agent matching by assistant systems
US11887359B2 (en) 2018-04-20 2024-01-30 Meta Platforms, Inc. Content suggestions for content digests for assistant systems
JP2020042770A (en) * 2018-09-07 2020-03-19 Delta Electronics, Inc. Data search method and data search system
US11386163B2 (en) 2018-09-07 2022-07-12 Delta Electronics, Inc. Data search method and data search system thereof for generating and comparing strings
US20200195989A1 (en) * 2018-12-14 2020-06-18 Rovi Guides, Inc. Generating media content keywords based on video-hosting website content
US12015814B2 (en) 2018-12-14 2024-06-18 Rovi Guides, Inc. Generating media content keywords based on video-hosting website content
US11539994B2 (en) 2018-12-14 2022-12-27 Rovi Guides, Inc. Generating media content keywords based on video-hosting website content
US10897639B2 (en) * 2018-12-14 2021-01-19 Rovi Guides, Inc. Generating media content keywords based on video-hosting website content
CN113127679A (en) * 2019-12-30 2021-07-16 阿里巴巴集团控股有限公司 Video searching method and device and index construction method and device
US11604920B2 (en) 2020-04-20 2023-03-14 Microsoft Technology Licensing, Llc Visual parsing for annotation extraction
US12094208B2 (en) * 2021-03-05 2024-09-17 Beijing Baidu Netcom Science Technology Co., Ltd. Video classification method, electronic device and storage medium
US20220284218A1 (en) * 2021-03-05 2022-09-08 Beijing Baidu Netcom Science Technology Co., Ltd. Video classification method, electronic device and storage medium
WO2023095043A3 (en) * 2021-11-24 2023-09-21 Jio Platforms Limited System and method for generating recommendations from multiple domains

Also Published As

Publication number Publication date
US9253511B2 (en) 2016-02-02
US20150293928A1 (en) 2015-10-15
US20150296228A1 (en) 2015-10-15

Similar Documents

Publication Publication Date Title
US9253511B2 (en) Systems and methods for performing multi-modal video datastream segmentation
US20160014482A1 (en) Systems and Methods for Generating Video Summary Sequences From One or More Video Segments
US11790933B2 (en) Systems and methods for manipulating electronic content based on speech recognition
US8750681B2 (en) Electronic apparatus, content recommendation method, and program therefor
US10102284B2 (en) System and method for generating media bookmarks
US9471936B2 (en) Web identity to social media identity correlation
US20190392866A1 (en) Video summarization and collaboration systems and methods
US11394675B2 (en) Method and device for commenting on multimedia resource
CN111279709B (en) Providing video recommendations
US20140255003A1 (en) Surfacing information about items mentioned or presented in a film in association with viewing the film
US10795560B2 (en) System and method for detection and visualization of anomalous media events
US20150301718A1 (en) Methods, systems, and media for presenting music items relating to media content
US10524005B2 (en) Facilitating television based interaction with social networking tools
US20220107978A1 (en) Method for recommending video content
US20170337201A1 (en) Methods, systems, and media for presenting content organized by category
Daneshi et al. Eigennews: Generating and delivering personalized news video
US10990456B2 (en) Methods and systems for facilitating application programming interface communications
TWI538491B (en) Television service system and method for supplying video service
Lian Innovative Internet video consuming based on media analysis techniques
CN112445921A (en) Abstract generation method and device
Peronikolis et al. Personalized Video Summarization: A Comprehensive Survey of Methods and Datasets
Xu et al. Personalized sports video customization based on multi-modal analysis for mobile devices

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, DAVID MO;CHEN, HUIZHONG;DANESHI, MARYAM;AND OTHERS;SIGNING DATES FROM 20140624 TO 20140701;REEL/FRAME:033254/0937

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION