
US20140223309A1 - Method and system for browsing, searching and sharing of personal video by a non-parametric approach - Google Patents


Info

Publication number
US20140223309A1
Authority
US
United States
Prior art keywords
media
user
video
selected media
descriptors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/249,565
Inventor
Oren Boiman
Alexander Rav-Acha
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kreos Capital V (Expert Fund) LP
Vimeo.com, Inc.
Original Assignee
SIGHTERA TECHNOLOGIES Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SIGHTERA TECHNOLOGIES Ltd filed Critical SIGHTERA TECHNOLOGIES Ltd
Priority to US14/249,565 priority Critical patent/US20140223309A1/en
Assigned to SIGHTERA TECHNOLOGIES LTD. reassignment SIGHTERA TECHNOLOGIES LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOIMAN, OREN, RAV-ACHA, ALEXANDER
Publication of US20140223309A1 publication Critical patent/US20140223309A1/en
Assigned to MAGISTO LTD. reassignment MAGISTO LTD. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SIGHTERA TECHNOLOGIES LTD.
Assigned to KREOS CAPITAL V (EXPERT FUND) L.P. reassignment KREOS CAPITAL V (EXPERT FUND) L.P. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MAGISTO LTD.
Assigned to Vimeo, Inc. reassignment Vimeo, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MAGISTO LTD.
Assigned to KREOS CAPITAL V (EXPERT FUND) L.P. reassignment KREOS CAPITAL V (EXPERT FUND) L.P. CORRECTIVE ASSIGNMENT TO CORRECT THE SERIAL NO. 15/374,023 SHOULD BE 15/012,875 PREVIOUSLY RECORDED ON REEL 041151 FRAME 0899. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST. Assignors: MAGISTO LTD.
Assigned to MAGISTO LTD. (NOW KNOWN AS NOLAN LEGACY LTD) reassignment MAGISTO LTD. (NOW KNOWN AS NOLAN LEGACY LTD) RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: KREOS CAPITAL V (EXPERT FUND) LP
Priority to US17/135,273 priority patent/US20210117471A1/en
Assigned to JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT reassignment JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT SECURITY AGREEMENT Assignors: Vimeo, Inc.
Assigned to VIMEO.COM, INC. reassignment VIMEO.COM, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: Vimeo, Inc.
Assigned to VIMEO.COM, INC. (FKA VIMEO, INC.) reassignment VIMEO.COM, INC. (FKA VIMEO, INC.) RELEASE OF PATENT SECURITY INTERESTS FILED FEBRUARY 12, 2021 AT REEL/FRAME 055288/0371 Assignors: JPMORGAN CHASE BANK, N.A.


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/732 Query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/732 Query formulation
    • G06F16/7335 Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G11B27/32 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
    • G11B27/327 Table of contents
    • G11B27/329 Table of contents on a disc [VTOC]
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements

Definitions

  • a method for determining a predictability of a media entity portion may include: receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities; and calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives.
  • the method may include calculating distances between descriptors of the media entity and the descriptor space representatives.
  • the calculating of the predictability score of the media entity may include calculating a weighted sum of probability estimations of the descriptor space representatives, wherein weights applied for the weighted sum are determined according to distances between descriptors of the media entity portion and descriptor space representatives . . . .
  • the method may include generating the probability estimations given the reference media descriptors; wherein the generating may include calculating, for each descriptor space representative, a Parzen estimation of a probability of the descriptor space representative given the reference media descriptors.
  • the method may include calculating the predictability of the media entity portion based on reference media descriptors that represent media entity portions that precede the media entity portion and belong to a same media entity as the media entity portion.
  • the method may include calculating the predictability of multiple media entity portions of the media entity and detecting media entity portions of interest.
  • the method may include generating a representation of the media entity from the media entity portions of interest.
  • the method may include defining the media entity portion as a media entity portion of interest based on the predictability of the media entity portion and on at least one out of a detection of a camera motion, a detection of a camera zoom or a detection of a face close-up.
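As a concrete illustration of the paragraph above, the following sketch (an assumption of this write-up, not code from the patent) flags media entity portions whose predictability score falls below a threshold, optionally combined with auxiliary cues such as a detected camera zoom or a face close-up:

```python
# Illustrative sketch: mark low-predictability (surprising) portions as portions of
# interest, optionally boosted by auxiliary detections. All names and the threshold
# value are assumptions, not values from the patent.
from dataclasses import dataclass
from typing import List

@dataclass
class PortionAnalysis:
    predictability: float      # P(portion | reference); higher means more predictable
    has_camera_zoom: bool
    has_face_closeup: bool

def portions_of_interest(portions: List[PortionAnalysis],
                         predictability_threshold: float = 0.2) -> List[int]:
    """Return indices of portions considered to be of interest."""
    interesting = []
    for i, p in enumerate(portions):
        surprising = p.predictability < predictability_threshold
        if surprising or p.has_camera_zoom or p.has_face_closeup:
            interesting.append(i)
    return interesting
```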
  • a method for evaluating a relationship between a first media entity and a second media entity may include: determining a predictability of the first media entity given the second media entity based on (a) probability estimations of descriptor space representatives given second media entity descriptors, wherein the descriptor space representatives are representative of a set of media entities and (b) relationships between second media entity descriptors and descriptors of the first media entity; determining a predictability of the second media entity given the first media entity based on (a) probability estimations of descriptor space representatives given first media entity descriptors, and (b) the relationships between first media entity descriptors and descriptors of the second media entity; and evaluating a similarity value between the first media entity and the second media entity based on the predictability of the first media entity given the second media entity and the predictability of the second media entity given the first media entity.
  • the method may include evaluating the relationships between multiple first media entities and multiple second media entities based on a predictability of each first media entity given the multiple second media entities and a predictability of each second media entity given the first media entity.
  • the method may include clustering first and second media entities based on the relationships between the multiple first media entities and the multiple second media entities.
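A minimal sketch of the similarity evaluation above, assuming directional predictability values are already available from some estimator; the geometric-mean combination and the helper names are illustrative choices, not the patent's formula:

```python
# Illustrative sketch: a symmetric similarity value from the two directional
# predictabilities P(A|B) and P(B|A), plus a pairwise matrix usable for clustering.
import math
from typing import Callable, Sequence

def similarity(p_a_given_b: float, p_b_given_a: float) -> float:
    # The geometric mean is symmetric in its two arguments and stays in [0, 1].
    return math.sqrt(p_a_given_b * p_b_given_a)

def similarity_matrix(entities: Sequence[object],
                      predictability: Callable[[object, object], float]):
    """predictability(x, y) should return P(x | y); any estimator can be plugged in."""
    n = len(entities)
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = similarity(predictability(entities[i], entities[j]),
                           predictability(entities[j], entities[i]))
            sim[i][j] = sim[j][i] = s
    return sim
```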
  • a method for classifying media entities may include: receiving or generating (a) media class descriptors for each media entity class out of a set of media entity classes, and (b) probability estimations of descriptor space representatives given each of the media entity classes; wherein the descriptor space representatives are representative of a set of media entities; and calculating, for each pair of media entity and media class, a predictability score based on (a) the probability estimations of the descriptor space representatives given the media class descriptors of the media class, and (b) relationships between the media class descriptors and the descriptor space representatives descriptors of the media entity; and classifying each media entity based on predictability scores of the media entity and each media class.
  • a method for searching for a feature in a media entity may include: receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a predictability score given a first media class based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • the method may include searching for the feature in multiple query media entities by repeating, for each media entity of the multiple media entities, the stage of receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a first media class predictability score based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • the method may include detecting media entities of interest in response to a detection of the feature.
  • the method may include searching for a feature that is a face.
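A hedged sketch of the two-class feature test described above (e.g., face versus no face); the log-ratio decision rule and the margin parameter are assumptions for illustration only:

```python
# Illustrative sketch: decide whether a media entity contains a feature by comparing
# its predictability given a "feature" class against its predictability given a
# "background" (no feature) class.
import math

def contains_feature(p_given_feature_class: float,
                     p_given_background_class: float,
                     margin: float = 0.0) -> bool:
    """Return True when the entity is better explained by the feature class."""
    # Work in log space to avoid underflow with very small likelihoods.
    log_ratio = (math.log(p_given_feature_class + 1e-12)
                 - math.log(p_given_background_class + 1e-12))
    return log_ratio > margin
```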
  • a method for processing media streams may include: applying a probabilistic non-parametric process on the media stream to locate media portions of interest; and generating metadata indicative of the media portions of interest.
  • the method may include adding tags to the media portions of interest.
  • the method may include generating a representation of the media stream from the media portions of interest.
  • the method further may include detecting media portions of interest in response to at least one additional parameter out of: (a) a detection of a change of focal length of a camera that acquires the media; (b) a detection of a motion of the camera; (c) a detection of a face; (d) a detection of predefined sounds; (e) a detection of laughter; (f) a detection of predefined facial expressions; (g) a detection of an excited voice, and (h) a detection of predefined behavior.
  • the method may include generating a trick play media stream that may include the media portions of interest.
  • the method may include finding media portions of interest that are similar to each other.
  • the method may include tagging media portions of interest that are similar to each other.
  • the method may include editing the media stream based on the media portions of interest.
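The stream-processing paragraphs above could look roughly like the following sketch; the segmentation, saliency and tagging callables are placeholders, not the patent's implementation:

```python
# Illustrative sketch: scan a media stream segment by segment, score each segment with
# a probabilistic non-parametric saliency measure, and emit metadata (time range, score,
# tags) for the portions of interest.
from typing import Callable, Dict, Iterable, List, Tuple

Segment = Tuple[float, float]  # (start_seconds, end_seconds)

def stream_metadata(segments: Iterable[Segment],
                    saliency: Callable[[Segment], float],
                    tagger: Callable[[Segment], List[str]],
                    threshold: float = 0.5) -> List[Dict]:
    metadata = []
    for seg in segments:
        score = saliency(seg)
        if score >= threshold:
            metadata.append({
                "start": seg[0],
                "end": seg[1],
                "saliency": score,
                "tags": tagger(seg),   # e.g. ["face", "laughter"]
            })
    return metadata
```

The resulting metadata records can then feed the tagging, trick-play and editing steps listed above.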
  • a computer program product may include a non-transitory computer readable medium that stores instructions for receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities; and calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives.
  • the computer program product can store instructions for calculating distances between descriptors of the media entity and the descriptor space representatives . . . .
  • the calculating of the predictability score of the media entity may include calculating a weighted sum of probability estimations of the descriptor space representatives, wherein weights applied for the weighted sum are determined according to distances between descriptors of the media entity portion and descriptor space representatives . . . .
  • the computer program product can store instructions for generating the probability estimations given the reference media descriptors; wherein the generating may include calculating, for each descriptor space representative, a Parzen estimation of a probability of the descriptor space representative given the reference media descriptors.
  • the computer program product can store instructions for calculating the predictability of the media entity portion based on reference media descriptors that represent media entity portions that precede the media entity portion and belong to a same media entity as the media entity portion.
  • the computer program product can store instructions for calculating the predictability of multiple media entity portions of the media entity and detecting media entity portions of interest.
  • the computer program product can store instructions for generating a representation of the media entity from the media entity portions of interest.
  • the computer program product can store instructions for defining the media entity portion as a media entity portion of interest based on the predictability of the media entity portion and on at least one out of a detection of a camera motion, a detection of a camera zoom or a detection of a face close-up.
  • the computer program product can store instructions for: determining a predictability of the first media entity given the second media entity based on (a) probability estimations of descriptor space representatives given second media entity descriptors, wherein the descriptor space representatives are representative of a set of media entities and (b) relationships between second media entity descriptors and descriptors of the first media entity; determining a predictability of the second media entity given the first media entity based on (a) probability estimations of descriptor space representatives given first media entity descriptors, and (b) the relationships between first media entity descriptors and descriptors of the second media entity; and evaluating a similarity value between the first media entity and the second media entity based on the predictability of the first media entity given the second media entity and the predictability of the second media entity given the first media entity.
  • the computer program product can store instructions for evaluating the relationships between multiple first media entities and multiple second media entities based on a predictability of each first media entity given the multiple second media entities and a predictability of each second media entity given the first media entity.
  • the computer program product can store instructions for clustering first and second media entities based on the relationships between the multiple first media entities and the multiple second media entities.
  • the computer program product can store instructions for: receiving or generating (a) media class descriptors for each media entity class out of a set of media entity classes, and (b) probability estimations of descriptor space representatives given each of the media entity classes; wherein the descriptor space representatives are representative of a set of media entities; and calculating, for each pair of media entity and media class, a predictability score based on (a) the probability estimations of the descriptor space representatives given the media class descriptors of the media class, and (b) relationships between the media class descriptors and the descriptor space representatives descriptors of the media entity; and classifying each media entity based on predictability scores of the media entity and each media class.
  • the computer program product can store instructions for: receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a predictability score given a first media class based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • the computer program product can store instructions for searching for the feature in multiple query media entities by repeating, for each media entity of the multiple media entities, the stage of receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a first media class predictability score based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • the computer program product can store instructions for detecting media entities of interest in response to a detection of the feature.
  • the computer program product can store instructions for searching for a feature that is a face.
  • the computer program product can store instructions for: applying a probabilistic non-parametric process on the media stream to locate media portions of interest; and generating metadata indicative of the media portions of interest.
  • the computer program product can store instructions for adding tags to the media portions of interest.
  • the computer program product can store instructions for generating a representation of the media stream from the media portions of interest.
  • the computer program product can store instructions for detecting media portions of interest in response to at least one additional parameter out of: (a) a detection of a change of focal length of a camera that acquires the media; (b) a detection of a motion of the camera; (c) a detection of a face; (d) a detection of predefined sounds; (e) a detection of laughter; (f) a detection of predefined facial expressions; (g) a detection of an excited voice, and (h) a detection of predefined behavior.
  • the computer program product can store instructions for generating a trick play media stream that may include the media portions of interest.
  • the computer program product can store instructions for finding media portions of interest that are similar to each other.
  • the computer program product can store instructions for tagging media portions of interest that are similar to each other.
  • the computer program product can store instructions for editing the media stream based on the media portions of interest.
  • FIG. 1 illustrates a system according to an embodiment of the invention
  • FIG. 2 illustrates a system and its environment according to an embodiment of the invention
  • FIG. 3 illustrates a method according to an embodiment of the invention
  • FIG. 4 illustrates a pre-processing block according to an embodiment of the invention
  • FIG. 5 illustrates a query block according to an embodiment of the invention
  • FIG. 6 illustrates a similarity block according to an embodiment of the invention
  • FIG. 7 illustrates a classification block according to an embodiment of the invention
  • FIG. 8 illustrates a clustering block according to an embodiment of the invention
  • FIG. 9 illustrates a SalienSee block according to an embodiment of the invention.
  • FIG. 10 illustrates a detection block according to an embodiment of the invention
  • FIG. 11 illustrates an editing process according to an embodiment of the invention
  • FIG. 12 illustrates a system and its environment according to an embodiment of the invention.
  • FIGS. 13-17 illustrate methods according to an embodiment of the invention.
  • the illustrated methods, systems and computer program products may provide a comprehensive solution to the problems of browsing, searching, editing and producing personal video, by utilizing automatic image and video content analysis.
  • the methods, systems and computer program products may identify all the required aspects of the problem and thereby provide a complete solution.
  • media entity refers to information representative of visual information, information representative of audio information or a combination thereof.
  • Non-limiting examples of a media entity may include an image, a video stream, an access unit, multiple images, a portion of an image, a portion of a video stream, a transport packet, an elementary stream, a packetized elementary stream, an audio stream, an audio frame, or any combination of audio representative information.
  • Any reference to a method should be interpreted as a reference to a system and additionally or alternatively as a reference to a computer program product. Thus, when describing a method it is noted that the method can be executed by a system or by a computer that executes instructions of the computer program product.
  • Any reference to a system should be interpreted as a reference to a method executed by the system and additionally or alternatively as a reference to a computer program product. Thus, when describing a system it is noted that the system can execute a method or can execute instructions of the computer program product.
  • Any reference to a block can include a reference to a hardware block, a software block or a stage of a method.
  • any of the blocks illustrated in FIGS. 4-9 can be regarded as method stages.
  • the methods, systems and computer program products may provide a unified and generic approach—the media predictability framework—for handling the numerous capabilities required for a comprehensive solution.
  • the methods, systems and computer program products may provide a single coherent approach to tackle the entire problem.
  • the methods, systems and computer program products can be applied in diverse technological environments.
  • Methods, systems and computer program products may provide a comprehensive solution for using personal video, as they enable browsing, searching, editing and production of personal video.
  • the methods, systems and computer program products may rely on a unified automated media content analysis method, instead of relying on numerous methods for implementing the long list of features required for ‘media understanding’.
  • the proposed method relies on a unified content analysis platform that is based on the Media Predictability Framework (discussed in the next section), which forms the technological foundation of the product.
  • the processing of media entities may involve running software components on various hardware components and the processing of data files in several internet locations. We use the following entities in the text below:
  • User Computer: A computer with general computing capabilities, such as a desktop, laptop, tablet, media center or smartphone.
  • Personal Media: Images and video of any common format (e.g., for images: JPEG, TIFF, GIF, JPEG2000, etc.).
  • Private Data and Meta-Data Database: Binary and textual data and meta-data kept in tables and files, either as a flat database organization or as a relational database (e.g., MySQL).
  • Interaction Server: An online server (either dedicated or in a computing cloud) which handles at least one of: uploading of user media, streaming, recording usage and viewing analytics, handling user and visitor interaction and registration, handling online payment, storage of online data and meta-data, and selecting ads per viewed video and per user/visitor.
  • Content Analysis Server: A server which performs content analysis for uploaded user media (user video including audio, user images, user selected soundtrack).
  • Production Server: A server which utilizes the original footage and the analyzed meta-data to create various personalized and stylized video productions.
  • This server may utilize professional video creative software such as Adobe After Effects, Sony Vegas, etc. to render the video production (e.g., video effects and transitions).
  • Online Data and Meta-Data Database: An online database which contains binary and textual data and meta-data kept in tables and files, either as a flat database organization or as a relational database (e.g., MySQL).
  • User Interface (UI) Application: A standalone application or web application (runs inside a web browser) or a software widget or software gadget which enables the user to (at least one of) play, view, browse, search, produce, upload, broadcast and share his personal media.
  • Mobile Application: An application designed for a mobile device (e.g., cellular application, iPad application, etc.). This application is a specialized user interface application for the respective mobile device.
  • Local Player: A mini-version of the User Interface Application with reduced capabilities, which runs locally on the user/visitor computing device using a playing platform (e.g., Flash, Silverlight, HTML5).
  • Electronic Media Capturing Device: An electronic device which can capture personal images and/or video, such as: camcorder, still camera, camera-phone, internet camera, network camera, camera embedded in a user computer (e.g., laptop), etc.
  • ‘My Video’/‘My Pictures’: Any set of file directories or libraries which reside on the user computer (e.g., on a hard drive, or any electro-magnetic or optical media such as DVD, CD, Blu-Ray disk, flash memory, etc.) or in the user's online folders (e.g., DropBox) and which store the user's personal media or shared media.
  • FIG. 1 illustrates an interaction server 10, a user computer 20 and image acquisition devices 31-33 according to an embodiment of the invention.
  • the user provides acquired media from image acquisition devices such as camcorder 31 , camera-phones 32 , digital still camera 33 etc.
  • the media can be stored in a private database 21 of the user computer 20 and/or be loaded to the interaction server 10 .
  • the content analysis engine 22 of the user computer 20 analyzes the media using database accesses to a database 23 of the user computer 20 .
  • the database 23 can store private data and private meta-data of the user.
  • Another database 11 (also referred to as on-line database) can store data and meta-data shared by multiple users.
  • the other database 11 and a content analysis server 12 belong to the interaction server 10 .
  • the analysis results of the content analysis engine 22 or of the content analysis server 12 can be stored in either one of the databases 11 and 23 —based on, at least, a selection of a user.
  • the user can directly upload media to the interaction server 10 .
  • the media is stored on the online database 11 and analyzed by the content analysis server 12.
  • the resulting data and meta-data can be stored on the Online database 11 .
  • Another option for the user is to use a combination of the approaches above: Uploading to the Interaction server, downloading and synchronizing to the user computer and processing in the Content Analysis Engine.
  • FIG. 2 illustrates an interaction between an interaction server 10, the user computer 20, a mobile network 50 and the Internet 60 according to an embodiment of the invention.
  • the user can interact using a User Interface (UI) Application which might be a standalone application or a web application in a web browser.
  • the UI may get input from the original user media (e.g., in ‘My Video’/‘My Pictures’ or other user media locations) together with the extracted data and meta-data from the private and online databases 11, 15, 21 and 23.
  • the user can still search and browse the online databases 11 and 13 using the UI.
  • Using the Mobile Application UI 60, the user can search and browse the data on the interaction server 10 (according to his user privacy settings) from a mobile platform (e.g., cellular phones, iPad). Users as well as visitors can view, browse and search media on the interaction server using the ‘Local Player’ (e.g., Flash Player embedded in HTML pages), which can be embedded in other web content.
  • Browsing enables users to quickly find interesting information, when the users cannot easily describe what they are seeking. For this mode of associative discovery, it should be easy to understand the content of a video and to quickly navigate inside video and between semantically related video clips.
  • the invention enables automatic generation of a table of contents, intelligent previews and thumbnails, links to “similar” video, content-based fast-forwarding and spatial video browsing.
  • Table of content may be a table of visual content (optionally hierarchical), which segments a video (or any other set of visual entities) into scenes with similar visual content. Note that these scenes usually cannot be separated by detecting different shots and they might overlap in time (e.g., the cameraman zooms in on a first context, then moves on to a second context, then returns to the first context).
  • Intelligent preview and thumbnails may include a very short (e.g., 5-10 seconds long) summary of the most representative portions of the video. This condensed summary enables the user to get a quick impression of the content in the video. It could comprise frames (storyboard), short clips or a combination of both. Such short representation can be even used as an intelligent thumbnail that plays the video preview when the user selects it (e.g., mouse hovers over thumbnail).
  • Link to “similar” video may include a list of related video and images, where relatedness is determined according to direct visual similarity as well as semantic similarity of the visual content: similar persons, similar objects, similar place, similar event, similar scene, similar time.
  • the link can either point to an entire clip or to a time frame in it. Such links enable associative browsing when the user is not seeking a specific content.
  • Content-based fast forward Viewing personal video may become a boring task very quickly, as real-life activity tends to repeat itself.
  • Content-based fast-forward enables the user to fast forward to the next novel activity (with different actions, behavior, etc.). This capability is executed either by adapting the speedup to the (automatically determined) degree of interest or by jumping to the next interesting segment in the video.
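For illustration, a content-based fast-forward policy along these lines could map the automatically determined degree of interest to a playback speed, or jump to the next interesting segment; the mapping, speed limits and threshold below are assumptions, not values from the patent:

```python
# Illustrative sketch: interest close to 1.0 plays near real time, repetitive footage
# (interest close to 0.0) is skimmed at the fastest speed.
def playback_speed(interest: float, min_speed: float = 1.0, max_speed: float = 16.0) -> float:
    interest = min(max(interest, 0.0), 1.0)
    # Linear interpolation between the fastest skim and real-time playback.
    return max_speed - interest * (max_speed - min_speed)

def next_interesting_segment(interest_scores, current_index, threshold=0.7):
    """Alternative mode: jump to the next segment whose interest exceeds a threshold."""
    for i in range(current_index + 1, len(interest_scores)):
        if interest_scores[i] >= threshold:
            return i
    return None
```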
  • Spatial Video Browsing In many video shots, the camera wanders around while scanning the area of interest. Spatial Browsing enables the user to freeze time and simulate spatial browsing with the camera. Namely, in response to a request from the user to move the camera (via keyboard, mouse or touch screen) the viewed image will change to an image with the proper camera point of view.
  • the Search engine enables the users to quickly retrieve information according to a given criterion. Searching can be done using a visual or textual query. In order to enable searching, the method enables deep, frame-based indexing, automatic tagging and keywords, and criterion-based search.
  • Deep, frame-based indexing The method creates an index of objects, actions, faces, facial expressions, types of sound, places and people. Objects include, among many possible options, pets, cars, computers, cellular phones, books, paintings, TVs, tables, chairs, etc.
  • the indexing includes the extraction of new entities, comparing them to known entities (e.g., a known face) and keeping an index item for them.
  • the index can be associated with a frame, a video segment or with the entire video clip.
  • the method clusters repeating entities (e.g., a repeating face) and generates a tag from it.
  • a tag has a visual representation (e.g., image of a face) and a textual tag (e.g., name of a person). The user can name a visual tag.
  • Each frame has a list of tags and each video has a list of the most important (frequent) tags.
  • the user can add his own tags to the automatically generated tags.
  • when a tag has a semantic meaning (e.g., ‘dog’ as opposed to ‘Rexy’), the method relates the semantic meaning of the tag to other synonym keywords, enabling easier textual search.
  • Criterion-based Search The user can search by a query combining free text, visual tags and textual tags. The method finds the video or the images that are most relevant to the query. For instance, the user can select a picture of a person's face, select the textual tag ‘living-room’ and add the free text ‘birthday party’ (which is used as a keyword).
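A possible sketch of such a criterion-based search, assuming segments have already been indexed with free text, tags and visual descriptors; the scoring weights and the visual_similarity callable are illustrative assumptions rather than the patent's ranking method:

```python
# Illustrative sketch: rank indexed video segments against a mixed query of keywords,
# textual tags and an optional visual query (e.g. a selected face image).
from typing import Callable, Dict, List, Optional

def rank_segments(segments: List[Dict],
                  keywords: List[str],
                  text_tags: List[str],
                  visual_query=None,
                  visual_similarity: Optional[Callable] = None,
                  weights=(1.0, 2.0, 3.0)) -> List[Dict]:
    w_keyword, w_tag, w_visual = weights
    scored = []
    for seg in segments:
        score = 0.0
        text = seg.get("free_text", "").lower()
        score += w_keyword * sum(1 for kw in keywords if kw.lower() in text)
        score += w_tag * len(set(text_tags) & set(seg.get("tags", [])))
        if visual_query is not None and visual_similarity is not None:
            score += w_visual * visual_similarity(visual_query, seg)
        scored.append((score, seg))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [seg for score, seg in scored if score > 0]
```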
  • the method may enable at least one of the following: (a) Automatic Editing of Video and Images; (b) Semi-Automatic Editing of Video and Images; (c) Automatic Video production of selected clips; (d) Automatic Interpretation of user directives; (e) Manual Post Production; (f) Personalized Production; (g) Professional Production; (h) Automatic Movie “Trailer”; (i) Automatic Content Suggestions; (j) Automatic News and Updates; (k) Automatic Group and Event Suggestions; (l) Graphics-Video interaction; (m) Return to original video; (n) Uploading and Broadcasting: and (o) Documentary web-pages.
  • the method automatically selects and edits clips and images from raw video and images input, in order to create a shorter video summary.
  • the automatic editing relies on various factors for choosing the most important parts: Faces, known persons/objects, camera motion/zoom, video and image quality, action saliency, photo-artistic quality, type of voice/sound, facial expression (e.g., smile).
  • the image quality is improved using de-noising, video stabilization and super-resolution.
  • the automatic editing can change the speed of a video (e.g., slow motion/fast motion) or even convert a video clip to an image if, for instance, the clip is too short.
  • Another case for converting video clip to image is when the camera pans and the automatic editing decides to create a mosaic image from the clip.
  • the user can select a sound track to add to the edited video.
  • Prior meta-data and analysis on the audio track might affect the automatic editing decisions (e.g., fast pace, short clips for high tempo audio track).
  • the automatic editing generates the selected clips (and images) to fit a video length specified by the user (e.g., 45 seconds).
  • objects or tagged entities can be similarly removed or emphasized (e.g., emphasizing a certain location).
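As an illustration of fitting the selection to a user-specified length, a greedy sketch (an assumption of this write-up, not the patent's editing algorithm) could pick the most important clips until the target duration is reached:

```python
# Illustrative sketch: greedily select the highest-importance clips that still fit the
# target duration (e.g. 45 seconds), then restore chronological order. Real editing
# would also honor pacing, transitions and the audio track.
from typing import List, Tuple

Clip = Tuple[float, float]  # (importance_score, duration_seconds)

def select_clips(clips: List[Clip], target_seconds: float) -> List[int]:
    order = sorted(range(len(clips)), key=lambda i: clips[i][0], reverse=True)
    chosen, total = [], 0.0
    for i in order:
        duration = clips[i][1]
        if total + duration <= target_seconds:
            chosen.append(i)
            total += duration
    return sorted(chosen)
```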
  • FIG. 11 illustrates a process of editing a video entity.
  • Symbols representing media entity portions of interest 220 can be displayed to the user.
  • the user can select which media entity portions to include in an edited media entity and can, additionally or alternatively, indicate an attribute such as an importance level of features.
  • An attribute can reflect a preference of a user—for example—whether the feature is important or not, a level of importance of the feature, or any other attribute that may affect an editing that is responsive to the attribute.
  • an editing process can include one or more iterations.
  • the user can be presented with media entity portions of interest, features, and even an edited media entity, and feedback can be received from the user (whether to alter the edited media entity, which features are more important or less important, adding media entity portions of interest, defining a level of interest that should allow a media entity portion of interest to be considered as a candidate to be included in an edited media entity, and the like).
  • the selected clips and images can be used in a straightforward manner to create a video clip summary. However, the method can also provide a much more compelling automatically produced video clip.
  • the automatic production makes use of a library of effects, transitions, graphic assets and sound tracks, which are determined according to the video and the extracted meta-data. For instance, an algorithm can choose to use a face-morphing transition effect between two clips, where the first clip ends in a face and the second clip starts in a different face. Another example is to use an effect where the frame is moving in the direction of the camera motion.
  • the user can act as a director during the filming of the video and perform various predefined gestures, in order to guide the later automatic editing and production stage. For instance, a user can indicate that he would like to create a mosaic by passing a finger from one side of the camera to the other and then panning slowly. Another example is that a user signals that he has just captured an important clip that should pop up in any editing by a special gesture (e.g. making ‘V’ with the fingers). In this manner, the system can identify user gestures and enables the user to act as the director of the automatic summarization in vivo.
  • Manual Post Production The user can watch the resulting production and can intervene to override automatic decisions. For instance, the user can remove or add clips from a candidate list of clips using a simple checkbox interface. In addition, the user can change the starting point and end point of each selected clip. Moreover, the user can change the transitions if he likes, in a post-production stage.
  • Personalized Production Besides manual post-editing, the user can affect the automatic production and editing stages using a search query, which emphasizes the parts of the video which are important to the user.
  • the query can take the form of a full search query (text + tags + keywords). For instance, a query of the form ‘Danny jumping in the living room’ would put more emphasis in the editing and the production stages on parts which fit the query.
  • Another example is of a query which uses a visual tag describing a pet dog and a location tag with an image of the back yard.
  • Another option for the user to affect the editing stage is by directly marking a sub-clip in the video which must appear in the production.
  • the user marks several people (resulting from Face Clustering and Recognition) and gets several productions, each production with the selected person highlighted in the resulting clip, suitable for sharing with that respective person.
  • the method allows an additional, professional human editing and production.
  • the method delivers the raw video, the extracted meta-data and the automatically produced video to professional producers (via internet or via a delivery service using DVDs etc.).
  • the user receives a final product (e.g., produced DVD) via mail or delivery.
  • a professional production can complement the automatic production when professional quality is needed (e.g., for souvenirs, presents).
  • the method can export the automatic editing and the respective meta-data to common video editing formats (e.g., Adobe Premiere, Apple Final Cut).
  • Automatic Movie “Trailer” The method described above for editing and production of video can be used to create an automatic movie trailer for every video in the user library. This is a produced version of the video preview, which can be served as the default version for sharing a single video. This “Trailer” can also be used as a short version for various kinds of user generated content (even if not personal), for instance for automatic “Trailers” of popular YouTube videos for users who prefer to view the highlight before viewing the entire video.
  • Automatic Content Suggestions The method automatically suggests to the user edited video clips which are suitable for sharing. For instance, after the video from a recent trip was loaded to the user computer, the method automatically produces the relevant data and suggests it to the user, who can decide to share the suggestion by a simple approval of the suggestion.
  • the method uses the extracted meta-data to automatically find shared video and images which might interest the user. For instance, the method can suggest to the user to view a video in one of his friend's shared content in which he participates. In this manner, a user can be informed of visual information, which may be of interest to him, even if he did not upload the video by himself.
  • Automatic Group and Event Suggestions The method uses the extracted meta-data and discovered similarities between user data and shared data to propose the formation of groups of people (e.g., close family, trip friends) and event suggestions (e.g., trip, party, birthday).
  • shared media entities which can be clustered with other media, can be grouped in a semi-automatic manner (with user approval).
  • the method can suggest producing personalized summaries of events—for instance, generating a different summary for each chosen participant in which this participant is highlighted in the generated synopsis.
  • personalized summaries can encourage event and group participants to add their own media from the event, remix the results and so on. This can promote the building of a large media pool for an event or a group.
  • Graphics-Video interaction The method enables adding a layer of graphic-video interaction, based on the extracted meta-data. For instance, a conversation bubble can track a person's head or face. Another example is a graphic sprite interacting with the video (e.g., a fly added as a graphic layer to the video which avoids a person as he moves in the clip). This added layer can be disabled by the user.
  • the method enables the user to return to the original video clip from any point in the produced video by double-clicking (or tapping in touch screen) the display in that point.
  • the method enables the user to upload the produced video and related meta-data to a video storage site, which enables the video to be embedded and streamed via a video player (e.g., Flash Player) in various internet locations including: email, social networks, blog sites, home pages, content management systems, and image and video sharing sites.
  • Documentary web-pages The method enables the user to create documentary web pages, which are dedicated for a certain entity such as event, person, group and object. For example, creating a web page of a child, where video clips and images of the child are kept, documenting the child at different stages of his life. Another example is a page documenting a party where all participating users are invited to view current productions, upload their footage of the party, invite further participants and use all uploaded footage to create new productions (and so on). A different example is a web page documenting a user's trips in the world. Yet another important example is a memorial page dedicated to the memory of a deceased person. The system can automatically detect new videos or images that are relevant to the documentary page, and add them to the page via approval of the user. This web page can be organized as an album or as a storyboard, and can be accompanied with annotations and text that was inserted automatically (using the meta-data) or by the user.
  • FIG. 3 illustrates a method 300 according to an embodiment of the invention.
  • Method 300 may start by stage 302 or 304 . These stages are followed by a sequence of stages 310 , 320 , 330 , 340 , 350 and 360 .
  • Stage 302 includes selecting, by a user, clips and images to be included in the production, a time limit and an optional query for indicating importance for the editing stage.
  • Stage 304 includes selecting, by the content analysis server or content analysis engine, clips and images automatically to be used in a proposed production
  • Stage 310 includes completing, by the content analysis server or the content analysis engine, any unfinished analysis (if any) for the requested media
  • Stage 320 includes using the ImportanSee measure and other meta-data properties to automatically provide at least one video editing proposal
  • Stage 330 includes adding, automatically, production graphics to the video according to the meta-data.
  • Stage 340 includes presenting the results to the user.
  • the results may include clip selection, additional media clip/image proposals (which are currently out of the production), and relevant graphical effects. Optionally, the user may also preview the current production.
  • Stage 350 includes adapting the selection: changing start/end points, selected clips, audio track etc.
  • Stage 360 includes saving the video production compilation in the meta-data DB and producing the video after obtaining user approval.
  • the proposed method relies on a unified media content analysis platform, which we denote as the media predictability framework.
  • in the media predictability framework we measure to what extent a query media entity (visual or audio) is predictable from other reference media entities and use it to derive meta-data on this query entity: for instance, if a query media entity is un-predictable given the reference media, we might say that this media entity is interesting or surprising. We can utilize this measurement, for example, to detect interesting parts in a movie by seeking video segments that are unpredictable in this manner from the rest of the video.
  • the media predictability framework can also be used to associate between related media entities. For example, we can associate a photo of a face with a specific person if this photo is highly predictable from other photos of that person.
  • the predictability framework is a non-parametric probabilistic approach for media analysis, which is used by our method as a unified framework for all the basic building blocks that require high-level media analysis: Recognition, Clustering, Classification, SalienSee Detection, etc. We will first describe the predictability framework in detail and then show how to derive the different building blocks from it.
  • the predictability measure is defined as follows: Given a query media entity d and a reference media entity C (e.g., portions of images, videos or audio), we say that d is predictable from C if the likelihood P(d|C) is high.
  • a specific case of media descriptors is image descriptors.
  • Each image descriptor describes a patch or region of interest or arbitrarily shaped region in the image (this can also be the entire image).
  • One of the most informative image descriptors is the Daisy descriptor (Fua 2008) which computes a gradient image, and then, for each sample point, produces a log-polar sampling (of size 200) of the gradient image around this point (a detailed description is given in (Fua 2008)).
  • Video descriptors describe space-time regions (e.g., an x-y-t cube in a video). Examples of video descriptors include raw space-time patches or concatenated Daisy descriptors applied on several consecutive frames.
  • there are many types of descriptors, known in the literature, that capture different aspects of the media, such as simple image patches, shape descriptors (see for example (G. Mori, S. Belongie, and J. Malik 2005)), color descriptors, motion descriptors, etc. Information from different types of descriptors can be fused to produce better predictability estimation.
  • audio can also be analyzed using audio descriptors.
  • Some audio descriptors that are popular in the literature are MFCC, PLP, or the short-time spectrum. Audio descriptors can be specialized for speech representation, music representation, or general sound analysis. These descriptors can be computed, for example, using open source tools such as the CMU sphinx (http://cmusphinx.sourceforge.net/). Although each media has its own very different descriptor type, our predictability framework is applicable to all descriptor and media types.
  • FIG. 4 illustrates a pre-processing block 40 according to an embodiment of the invention.
  • the pre-processing block 40 receives reference media entities 101 and a set of media data and outputs reference media descriptors 103 that can be stored in a media descriptors database.
  • the pre-processing block 40 processes the reference media entities 101 by a descriptor extractor 44 to provide a descriptor set of the reference media entities.
  • The pre-processing block 40 generates (by descriptor extractor 41 and representative extractor 42) descriptor space representatives of the set of media data 102.
  • The descriptor set of the reference media entities and the descriptor space representatives are fed to a likelihood estimator 45 that outputs the reference media descriptors 103.
  • Descriptor Extraction: Given a reference set of media entities C, we first compute a set of descriptors over a set of sampling points.
  • the sampling points can be a uniform dense sampling of the media (for example, a grid in an image) or only at points of interest (e.g.—corners in image).
  • Let {f_1^c, . . . , f_K^c} denote the set of descriptors computed for the media reference C.
  • Descriptor-Space Representatives: Given a set of media entities (which can be the reference media itself), the descriptors for these entities are extracted. Next, the representative set is extracted from the full descriptor set in the following manner: a random sampling of the descriptors can be used to generate representatives, but vector quantization might also be used (for example, mean-shift or k-means quantization).
  • Likelihood Estimation: the likelihood of each representative q_l given the reference C is estimated with a Parzen (kernel density) estimate over the reference descriptors, P(q_l) = (1/K) Σ_k K(q_l − f_k^c), where K(·) is the Parzen kernel function (a non-negative operator that integrates to 1), for example a Gaussian kernel.
  • The set of descriptor-space representatives {q_1, . . . , q_L}, together with their corresponding likelihoods {P(q_1), . . . , P(q_L)} and the original descriptors {f_1^c, . . . , f_K^c}, is used to construct the Media Descriptors Database, which is used in the query block.
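As a minimal sketch of this pre-processing stage, assuming the reference descriptors are rows of a NumPy array: representatives are drawn by random sampling (mean-shift or k-means quantization could be substituted), and each representative's likelihood given the reference is a Gaussian-kernel Parzen estimate over the reference descriptors. The function name, bandwidth and number of representatives are illustrative assumptions.

```python
import numpy as np

def build_reference_db(ref_descriptors, num_representatives=256, bandwidth=0.5, seed=0):
    """Pre-processing block: choose descriptor-space representatives and estimate
    the likelihood of each representative given the reference descriptors."""
    rng = np.random.default_rng(seed)
    count = min(num_representatives, len(ref_descriptors))
    idx = rng.choice(len(ref_descriptors), size=count, replace=False)
    representatives = ref_descriptors[idx]

    # Gaussian Parzen estimate: P(q) ~ mean_k exp(-||q - f_k||^2 / (2 * h^2))
    diffs = representatives[:, None, :] - ref_descriptors[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    likelihoods = np.exp(-sq_dists / (2.0 * bandwidth ** 2)).mean(axis=1) + 1e-12

    return {"representatives": representatives, "likelihoods": likelihoods}
```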
  • FIG. 5 illustrates a query block 50 according to an embodiment of the invention.
  • The query block 50 receives a query media entity (d) 104 and reference media descriptors from the reference descriptor database, and outputs a predictability score P(d | C).
  • The query block 50 includes a descriptor extractor 51, a set (1 to K) of descriptor likelihood estimators 52(1)-52(k) and a combination unit 53.
  • Descriptor Extraction 51: Given a query media entity d, we first compute a set of descriptors {f_1^d, . . . , f_N^d} over a set of sampling points (similar to the descriptor extraction step of the pre-processing block).
  • Each descriptor f_i^d is attached with a weight m_i of its sample point, which can be user defined.
  • The likelihood of each query descriptor given the reference is estimated as a weighted sum of the representative likelihoods, P(f_i^d | C) ≈ Σ_k w_k P(q_k), where the w_k are interpolation weights determined as a function of the distance of f_i^d from q_k. The simplest weighting scheme is linear, setting w_k ∝ ||f_i^d − q_k||^(−1). This estimation can be approximated by taking only the first few nearest-neighbor representatives, and setting w_k to zero for the rest of the representatives.
  • Alternatively, P(f_i^d | C) can be estimated using a non-linear function of the representative log-likelihood values and the distances from them: P(f_i^d | C) = F(log P(q_1), . . . , log P(q_L), ||f_i^d − q_1||, . . . , ||f_i^d − q_L||).
  • When R different descriptor types (or R aspects) are used, the query descriptors form R sets {f_11^d, . . . , f_N1^d}, . . . , {f_1R^d, . . . , f_NR^d}, and the per-aspect scores are combined using weights α_r for each aspect (the weights can be determined manually or automatically from a training set).
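Building on the reference database sketched above, the per-descriptor likelihood interpolation and the final combination can be approximated as below: each query descriptor's likelihood is interpolated from its few nearest representatives with inverse-distance weights, and the per-descriptor log-likelihoods are averaged using the sample-point weights m_i. The number of neighbors and the averaging as the combination function are assumptions made for illustration.

```python
import numpy as np

def predictability_score(query_descriptors, ref_db, knn=5, sample_weights=None):
    """Query block: interpolate P(f_i | C) from the nearest descriptor-space
    representatives, then combine the per-descriptor log-likelihoods into a
    single predictability score."""
    reps = ref_db["representatives"]
    rep_likelihoods = ref_db["likelihoods"]
    if sample_weights is None:
        sample_weights = np.ones(len(query_descriptors))  # uniform m_i

    log_likelihoods = []
    for f in query_descriptors:
        dists = np.linalg.norm(reps - f, axis=1) + 1e-8
        nearest = np.argsort(dists)[:knn]
        w = 1.0 / dists[nearest]          # linear (inverse-distance) weights
        w /= w.sum()
        p = float(np.dot(w, rep_likelihoods[nearest]))
        log_likelihoods.append(np.log(p + 1e-12))

    log_likelihoods = np.array(log_likelihoods)
    return float(np.dot(sample_weights, log_likelihoods) / sample_weights.sum())
```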
  • The predictability score PredictabilityScore(d | C) can be further improved using empirical post-processing.
  • comparing between different reference media sets can be improved by empirically estimating the distribution of the predictability score over a “training” set.
  • This training set aims to represent the set of queries, so it is best (if possible) to draw it randomly from the query set.
  • The distribution that we are trying to estimate now is simply the distribution of the predictability scores of a media entity given a set of references C_1, . . . , C_S (note that this generates a new "feature" vector of dimension S for representing the query media).
  • a straightforward approach is to use the non-parametric Parzen estimation, which has been described earlier, or recursively using our non-parametric likelihood estimation.
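One possible realization of this empirical correction, assuming a set of predictability scores collected over a training set, is to report the rank of a raw score within the training scores (an empirical CDF value); the sketch below is only one way to implement the post-processing described here.

```python
import numpy as np

def empirically_corrected_score(raw_score, training_scores):
    """Empirical post-processing: map a raw predictability score to its rank
    within training scores, making scores comparable across reference sets."""
    training_scores = np.asarray(training_scores)
    return float((training_scores <= raw_score).mean())
```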
  • FIG. 6 illustrates a similarity block 60 according to an embodiment of the invention.
  • the similarity block 60 (also referred to as a similarity building block) is used to quantify the similarity between two media entities M1,M2. To do so, we use each media entity twice: once as a reference, and once as a query.
  • the similarity block 60 receives a first media entity 111 and a second media entity 112 .
  • the first media entity is provided to a pre-processing block 61 (when used as a reference) that extracts first media entity descriptor space representatives that are fed (in addition to the second media entity) to a query block 50 .
  • the query block 50 outputs a predictability score of the second media entity given the first media entity.
  • the second media entity is provided to a pre-processing block 61 (when used as a reference) that extracts second media entity descriptor space representatives that are fed (in addition to the first media entity) to another query block 50 .
  • the other query block 50 outputs a predictability score of the first media entity given the second media entity.
  • Both predictability scores are fed to a unification unit 53 that outputs similarity(M1, M2) 65 .
  • a descriptor database is constructed from each media entity (using the pre-processing block—as was shown in the pre-processing section of the predictability framework).
  • The predictability score P(M_1 | M_2) of media entity M_1 given the media entity M_2 as a reference is computed using the query block (as shown in the query section of the predictability framework), and likewise for P(M_2 | M_1).
  • the two predictability scores are combined to produce a single similarity measure.
  • As a combination function one can use any binary (two-argument) operator suited to the specific application, such as the 'average' or the 'max' operators.
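Reusing the helpers sketched earlier (build_reference_db and predictability_score), the similarity block reduces to a few lines; the 'average' operator is used here as the combination function, as suggested above.

```python
def similarity(descriptors_m1, descriptors_m2):
    """Similarity block: each media entity serves once as reference and once as
    query; the two predictability scores are combined with the 'average' operator."""
    db1 = build_reference_db(descriptors_m1)
    db2 = build_reference_db(descriptors_m2)
    score_m2_given_m1 = predictability_score(descriptors_m2, db1)
    score_m1_given_m2 = predictability_score(descriptors_m1, db2)
    return 0.5 * (score_m1_given_m2 + score_m2_given_m1)
```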
  • FIG. 7 illustrates a classification building block 70 according to an embodiment of the invention.
  • the classification building block is also referred to as classification block.
  • the classification building block is used to classify a media entity into one of several classes. To do so, we collect a set of media entities that relates to each class, construct a media descriptor DB from each reference class, and compare the query media to all of them using the query building block.
  • the classification block 70 receives reference media entities of each class out of multiple media classes—C1 120 ( 1 )- 120 (N).
  • a query media entity d 104 and reference media entities of each class are fed to N query blocks 50 —each query block receives the query media entity d and one of the reference media entities of a class—separate query blocks receive reference media entities of different classes.
  • Each query block 50 outputs a predictability score of the query media entity given the media entity class.
  • A classification decision block 72 classifies the query media entity into one of these classes based on the predictability scores.
  • For each class, a descriptor database DB_i is constructed using the pre-processing block (as shown in the pre-processing section of the predictability framework).
  • The predictability score P(d | C_i) of the query media entity d given each class is estimated using the query block (as shown in the query section of the predictability framework).
  • the predictability scores are entered into the classification decision block, which outputs the classification of d (Note that the classification doesn't necessarily have to be a hard decision on a single class, but it can be the posterior probability of d to belong to each class).
  • the simplest decision rule is setting the classification of d to be the class C for which the predictability score of d given C is the highest. But other decision rules are also possible—for example, computing posterior probabilities (given the prior probabilities of each class).
  • the distribution of the predictability scores given all (or subset) of the classes can be estimated using a “training” set. (A simple way to do it is using the non-parametric Parzen estimation, as described earlier). With this empirical distribution estimation, the probability of classifying d with each class can now be determined directly from the distribution, providing “Empirically Corrected” probabilities.
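A sketch of the classification block along these lines, again reusing the earlier helpers; the hard arg-max rule is the simplest decision rule mentioned above, while the soft-max posterior is only one illustrative way of turning predictability scores into class probabilities.

```python
import numpy as np

def classify(query_descriptors, class_descriptor_sets, return_posterior=False):
    """Classification block: build a reference DB per class, score the query
    against each class, then decide (hard arg-max or soft posterior)."""
    scores = np.array([
        predictability_score(query_descriptors, build_reference_db(class_descriptors))
        for class_descriptors in class_descriptor_sets
    ])
    if return_posterior:
        posterior = np.exp(scores - scores.max())   # soft-max over the scores
        return posterior / posterior.sum()
    return int(np.argmax(scores))                   # index of the winning class
```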
  • The classification block can operate as a detection block. Assume that a certain feature is being searched for in a query media stream. One reference media entity class is selected as including the feature, while another reference media entity class is selected as not including the feature. The query media entity and these two media entity classes are fed to the classification block, which classifies the query media entity into one of these media classes, i.e., as including the feature or not including the feature. It is noted that more than two media classes can be provided and may include different associations with the feature (not just a binary relationship of including or not including the feature).
  • FIG. 10 illustrates a decision block according to an embodiment of the invention.
  • A set of media entities 160 is pre-filtered 99 to provide a set of candidates within which the feature is searched.
  • the set of candidates and two classes of reference examples 162 and 164 are provided to a classification block 98 that decides whether the feature exists in the candidates.
  • the output is a list of detections 97 that indicates in which candidates the feature appears.
  • the detection building block is used to detect some pre-defined class (for example—face detection, or a detection of some specific person) inside a set of media entities.
  • The detection building block is actually a special case of the classification building block, in which the two reference classes are the "Class" and the "Non-Class" (for example, "Face"-"Non Face", "Speech"-"Non-Speech"), and the set of queries is all the sub-segments of the media to which we would like to apply the detection, for example, a set of sub-windows in an image.
  • Since the classification process usually takes too much time to be applied to all sub-segments, a pre-filtering can be applied, choosing only a subset of the segments.
  • The cascade-based Viola & Jones method is widely used for object (e.g., face) detection, outputting a set of rectangles for which a face was detected. Yet, it also outputs a large set of erroneous detections, which can be further eliminated by the "Class"-"Non Class" detection block described herein.
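The detection block can then be sketched as an optional pre-filter followed by a 'Class' versus 'Non-Class' comparison, again reusing the earlier helpers. The prefilter argument stands in for any fast candidate generator (such as a cascade detector); its interface here is an assumption.

```python
def detect(candidate_descriptor_sets, class_descriptors, non_class_descriptors,
           prefilter=None):
    """Detection block: a two-class special case of classification ('Class' vs.
    'Non-Class') applied to pre-filtered candidate sub-segments. Returns the
    indices of candidates classified as containing the searched feature."""
    class_db = build_reference_db(class_descriptors)
    non_class_db = build_reference_db(non_class_descriptors)

    detections = []
    for i, seg_descriptors in enumerate(candidate_descriptor_sets):
        if prefilter is not None and not prefilter(seg_descriptors):
            continue  # cheap rejection before the expensive comparison
        in_class = predictability_score(seg_descriptors, class_db)
        out_of_class = predictability_score(seg_descriptors, non_class_db)
        if in_class > out_of_class:
            detections.append(i)
    return detections
```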
  • The clustering building block is used to cluster a set of media entities into groups. This building block uses the similarity building block described above to compute a similarity measure between pairs of media entities, and then uses standard clustering methods to cluster the affinity matrix.
  • FIG. 8 illustrates a clustering block 80 according to an embodiment of the invention.
  • The clustering block 80 includes multiple similarity blocks 60 that are fed with different media entities. During each iteration, the similarity blocks output a similarity score between two media entities. These similarity scores can be arranged to form a similarity/affinity matrix (or any other data structure) that is fed to a clustering algorithm 81 that clusters the media entities based on the similarity scores, producing a clustering of M_1, . . . , M_N 85.
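A sketch of the clustering block, assuming the pairwise similarity helper above: the affinity matrix is filled with pairwise similarities and handed to an off-the-shelf algorithm (spectral clustering with a precomputed affinity is used here); the exponential rescaling of the scores into non-negative affinities is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_media(descriptor_sets, num_clusters=3):
    """Clustering block: build a pairwise similarity (affinity) matrix using the
    similarity block, then cluster it with a standard clustering algorithm."""
    n = len(descriptor_sets)
    scores = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            scores[i, j] = scores[j, i] = similarity(descriptor_sets[i], descriptor_sets[j])

    affinity = np.exp(scores - scores.max())  # map scores to positive affinities
    np.fill_diagonal(affinity, 1.0)
    labels = SpectralClustering(n_clusters=num_clusters,
                                affinity="precomputed").fit_predict(affinity)
    return labels
```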
  • FIG. 9 illustrates a SalienSee block 90 according to an embodiment of the invention.
  • The SalienSee block tries to predict a portion of a media entity (I_t) based on the previous media entity portions (I_1, . . . , I_{t−1}) that precede it.
  • An input media entity 130 that includes multiple media entity portions is fed to the SalienSee block 90 one media entity portion after the other, so that the media entity portions can be evaluated in an iterative manner, one after the other.
  • Query block 50 receives (as a query media entity) the media entity portion It and receives (as reference descriptor space representative) descriptors space representatives of the previous media entity portions.
  • The query block 50 calculates a predictability score that may be regarded as a SalienSee (saliency) score 95.
  • the media entity portions are also fed to a database 92 .
  • The contents of the database are processed by pre-processing block 40.
  • SalienSee measures the extent by which a point in time in the media is salient in the media. This can also indicate that this point in time is “surprising”, “unusual” or “interesting”.
  • Formally, let d be some query media entity and let C denote the reference set of media entities. The SalienSee measure is defined as SalienSee(d | C) = −log PredictabilityScore(d | C).
  • The SalienSee measure can capture the moments in video in which the activity becomes boring (which is very common in personal video); for example, when someone starts jumping it might be interesting, but the next jumps become more and more boring as they are already very predictable from the past.
  • Let d(t_1, t_2) denote the time segment t_1 ≤ t ≤ t_2 of the video clip d. The video segment d(t, t+Δt) is 'boring' if its SalienSee measure with respect to the past is small, i.e., if SalienSee(d(t, t+Δt) | d(0, t)) falls below some threshold.
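A sketch of the SalienSee computation under the assumptions above: the media is processed as a sequence of short descriptor chunks, each chunk is scored against a reference database built from all the chunks that precede it, and chunks whose score falls below a threshold are marked as 'boring'. The chunking, the score negation and the threshold are illustrative simplifications of the measure defined here.

```python
import numpy as np

def saliensee_scores(chunk_descriptor_sets):
    """SalienSee block: score each chunk by how unpredictable it is given
    everything that preceded it (higher score = more salient / surprising)."""
    scores, past = [], None
    for chunk in chunk_descriptor_sets:
        if past is None:
            scores.append(0.0)  # nothing to predict the first chunk from
        else:
            db = build_reference_db(past)
            scores.append(-predictability_score(chunk, db))  # low predictability -> high salience
        past = chunk if past is None else np.vstack([past, chunk])
    return np.array(scores)

def boring_segments(scores, threshold):
    """Mark chunks whose SalienSee measure falls below the threshold as 'boring'."""
    return [t for t, s in enumerate(scores) if t > 0 and s < threshold]
```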
  • Tagging: Automatic tagging of media entities is achieved by applying the Detection/Recognition building block several times. Some tags are extracted by solving a detection problem. For instance, adding a tag "face" whenever the face detector detected a face in a video clip, or a tag "applause" when a sound of clapping hands is detected. Other types of tags are extracted by solving a recognition (or classification) problem. For instance, a specific person-tag is added whenever the face-recognition module classifies a detected face as a specific, previously known face. Another example is classifying a scene to be "living-room scene" out of several possibilities of pre-defined scene location types. The combination of many detection and recognition modules can produce a rich and deep tagging of the media assets, which is valuable for many of the features described below.
  • the method utilizes at least some of the following tagging: face poses (“frontal”, “profile” etc.), specific persons, facial expressions (“smile”, “frown” etc.), scene-types (“living-room”, “backyard”, “seaside” etc.), behavior type (“running”, “jumping”, “dancing”, “clapping-hands” etc.), speech detection, soundtrack segment beat classification (e.g. “fast-beat”, “medium-beat”, “slow beat”), voice classification (“speech”, “shout”, “giggle”, etc.).
  • ImportanSee: Our "ImportanSee" measure is used to describe the importance or the amount of interest of a video clip for some application; for example, in a video summary we can display only the important parts while omitting the unimportant ones. In principle, this measure is subjective and cannot be determined automatically. However, in many cases it can be estimated with no user intervention using attributes such as those listed below:
  • Camera motion is an important source of information on the intent of the cameraman.
  • a panning of the camera usually indicates that the photographer is either scanning the scene (to get a panorama of the view), or just changing the focus of attention.
  • Video segments that relate to the second option can be assigned a low ImportanSee.
  • A case where the camera is very shaky and not stabilized can also reduce the overall ImportanSee.
  • the camera motion can be estimated using various common methods (e.g. (J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani 1992)).
  • A camera zoom-in is usually a good indication of high importance (i.e., resulting in high ImportanSee). In many cases, the photographer zooms in on some object of interest to get a close-up view of the subject (or event).
  • Face close-up: Images or video clips in which faces appear in the scene are usually important. Specifically, a close-up on a face (in a frontal view) will usually indicate a clear intention of the photographer to capture the person (or persons) being photographed, and can serve as a strong cue for high ImportanSee.
  • Speech detection and recognition can help detect interesting periods in the video. Moreover, laughter (in general, or of a child) increases the ImportanSee measure of the corresponding video segment. An excited voice may also be used as a cue for ImportanSee.
  • Facial expressions are a good cue for high ImportanSee. For instance, moments when a person smiles or a child frowns or cries indicate a high ImportanSee.
  • The attributes above can be used to compute intermediate importance scores s_1, . . . , s_l (in our implementation, these scores can be negative), which are then combined into the overall ImportanSee measure (for example, as in the sketch below).
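A minimal sketch of such a combination by weighted sum; the cue names, weights and example values are purely illustrative assumptions.

```python
def importansee(cues, weights=None):
    """Combine intermediate importance scores (zoom-in, face close-up, laughter,
    camera shake, etc.) into a single ImportanSee value. Scores may be negative."""
    if weights is None:
        weights = {name: 1.0 for name in cues}
    return sum(weights.get(name, 1.0) * score for name, score in cues.items())

# Example: a zoom-in and a frontal face close-up raise importance, heavy shake lowers it.
score = importansee({"zoom_in": 1.0, "face_closeup": 0.8, "camera_shake": -0.5})
```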
  • Table of contents is a hierarchical segmentation of visual entities (video or set of videos and images). This feature can be implemented as a clustering of the various scenes in a video. For instance, sampling short video chunks (e.g., 1 second of video every 5 seconds of video) and clustering these media chunks (using the clustering building block) will produce a flat or hierarchical table of contents of the video.
  • each segment is attached with either a textual or visual short description (for example, a representative frame or a short clip). This representative can be selected randomly, or according to its ImportanSee measure.
  • Intelligent preview and thumbnails: This is a very short (e.g., 5-10 seconds long) summary of the most representative and important portions of the video. This feature can be implemented by simply selecting the time segments of the video with the maximal ImportanSee.
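Assuming per-chunk ImportanSee values and a fixed chunk duration, the selection described above can be sketched as picking the top-scoring chunks up to the target preview length and playing them in temporal order; the parameter values are illustrative.

```python
def select_preview(importance_scores, chunk_seconds=1.0, target_seconds=8.0):
    """Intelligent preview: keep the chunks with maximal ImportanSee until the
    target duration is reached, then return them in temporal order."""
    max_chunks = max(1, int(target_seconds / chunk_seconds))
    ranked = sorted(range(len(importance_scores)),
                    key=lambda t: importance_scores[t], reverse=True)
    return sorted(ranked[:max_chunks])
```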
  • Video links and Associative browsing: This feature facilitates video and image links, which are based on audio-visual and semantic similarity.
  • This feature can be implemented as a combination of using the Tagging feature and the similarity building block:
  • the similarity building block is used to quantify the direct audio-visual similarity between images and video.
  • the Tagging feature is used to quantify the semantic association between media entities—for instance, two videos of birthday parties, two videos of dogs etc.
  • various simple distances can be used between the tag lists of each media entity, such as the number of mutual tags or a weighted sum of the mutual tags, which emphasizes some tags over others.
  • a (weighted) sum of the semantic and audio-visual similarity can be used to combine the different similarity measures. Links between media entities can be formed for pairs of entities with high enough overall similarity.
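A sketch of the combined link score, assuming each media entity carries a tag list and that an audio-visual similarity value is available from the similarity block; the tag-overlap normalization, the weights and the linking threshold are illustrative assumptions.

```python
def link_score(tags_a, tags_b, audiovisual_similarity, tag_weight=0.5, av_weight=0.5):
    """Associative browsing: combine semantic similarity (mutual tags) with
    direct audio-visual similarity into one link score."""
    mutual = len(set(tags_a) & set(tags_b))
    semantic = mutual / max(1, min(len(tags_a), len(tags_b)))  # normalized tag overlap
    return tag_weight * semantic + av_weight * audiovisual_similarity

def should_link(tags_a, tags_b, audiovisual_similarity, threshold=0.6):
    """Form a link only when the overall similarity is high enough."""
    return link_score(tags_a, tags_b, audiovisual_similarity) >= threshold
```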
  • Automatic Video Editing & Synopsis: The main challenge in automatic video editing is to automatically select the most important sub-clips in the video, which best represent the content of the original video. This selection is an essential stage for most of the features that relate to automatic video editing: creating a video synopsis (or Movie "Trailer"), video production, intelligent thumbnails, etc. This task is best served by the ImportanSee building block (described above), which determines the importance of each sub-clip in the video and promotes the selection of the most important ones to be used in the edited video.
  • FIG. 12 illustrates a system and its environment according to an embodiment of the invention. The system implements any of the methods described above to provide a comprehensive solution for browsing, searching and sharing of personal video.
  • the system has various components which reside on several sites. The related sites and the components on them are described next.
  • the user computer may include two databases 21 and 23 , content analysis engine 22 and user interface application 24 .
  • The user computer can store a large amount of visual data in general locations such as the 'My Video' and 'My Pictures' directories in Microsoft Windows operating systems. Most of the data in these locations is raw, yet personal.
  • The content analysis engine 22 may run in the background (optionally only during computer idle time) or upon user request. It analyzes the user's visual data (videos and pictures), and extracts meta-data using a work queue.
  • the work queue is filled by the content analysis engine 22 as well as by the user selection (a user can insert any video or image to the top of the queue).
  • the content analysis engine 22 may use the private Meta-Data DB 23 to store the extracted meta-data and reuses this meta-data for its own analysis (e.g., extracted visual tags are stored there for future automatic tagging).
  • Alternatively, the content analysis engine 22 is not software installed on the user computer 20, but rather an internet browser plug-in or a software component (e.g., ActiveX), which enables the content analysis engine 22 to run without a full software installation (only a plug-in installation).
  • the user interface application 24 lets the user apply a sub-set of the method capabilities discussed above, thus enabling browsing, searching and sharing of personal video.
  • the sub-set depends on the type of client, license and computer. In one embodiment, this is a standalone client installed on the user computer. In another embodiment, this is a web application which uses an internet browser for running the user interface, which enables running it from any internet browser, without installing software.
  • the interaction server 10 hosts several servers which enable users to share personal video and images and broadcast them on various internet locations by embedding them.
  • the ‘User Profile’ 18 contains various information about the user, such as its personal details, a list of accounts in various internet services, a list of friend and family members and usage statistics.
  • the ‘Public Data+Meta-Data DB’ 17 contains data that the user selected to share from the ‘User Computer’: relevant meta-data and also video clips, images, etc. Sharing can be limited to various groups—family, friends, everyone etc.
  • the database is also responsible for initiating synchronization with connected ‘User Computers’ and mobile appliances.
  • the ‘Content Analysis Server’ 12 is a powerful version of the content analysis engine on the user computer 20 which enables to process a large amount of visual data being uploaded to the site. This enables the user to process video even from a computer that does not have the content analysis engine installed (i.e., SaaS—Software as a Service).
  • the ‘Video Platform Server’ 19 performs the actual streaming and interaction with users and visitors that view video and images stored on the ‘Interaction server’. It contains the actual ‘Streaming’ module 194 which is responsible for the actual delivery of the video on time and with the right quality.
  • the ‘Interaction’ module 192 is responsible for interpreting the user requests (e.g., press on a table of contents element) and communicate it with the ‘Streaming’ server or the ‘Local Player’.
  • the ‘Analytics’ module 193 is responsible for recording user behavior and response for each video and advertise that was displayed on it (e.g., number of times a video was watched, number of skips, number of times an ad was watched till its end).
  • the ‘Ad-Logic’ 191 uses information from the ‘Analytics’ module to choose the best strategy to select an ad for a specific video and user and how to display it. This information is synchronized in real-time with the ‘Local Player’.
  • the ‘Ad-Logic’ module can instruct the ‘Local Player’ to display an ad in various forms, including: pre-roll, post-roll, banners, floating ads, textual ads, bubble ads, ads embedded as visual objects using the extracted video meta-data (e.g., adding a Coca-Cola bottle on a table).
  • Users and visitors can view video and images which users decided to share on various ‘Internet Locations’ 40 that may include social networks, email services, blogs, MySpace, Gmail, Drupel, Facebook and the like.
  • the actual viewing of video is performed by an embedded player which can be based on various platforms such as Adobe Flash, Microsoft Silverlight, HTML5 etc.
  • the player can be embedded either directly or using a local application (e.g., Facebook application) in various internet locations including: Social Networks (e.g., Facebook, Myspace), Email messages, Homepages, Sharing-Sites (e.g, Flickr, Picasa), Blogging sites and platforms (e.g., Wordpress, Blogger) and Content Management Systems (e.g., Joomla, Wikimedia).
  • a ‘Local Player’ the user can user an internet link to a dedicated video page on the ‘Interaction server’.
  • Users can view and synchronize video via mobile appliances (e.g., cell phones) using the cellular networks 50 or internet networks 40 .
  • If the mobile appliance is computationally strong enough (e.g., a Pocket-PC or Smartphone), it can be regarded as a 'User Computer'.
  • it can use a ‘Mobile Application’ which enables to view media from the ‘Interaction server’ as well as uploading raw media from the mobile appliance.
  • the ‘Mobile Application’ can use the ‘Content Analysis Server’ in the ‘Interaction server’ to produce and share video even for appliances with low computational powers.
  • the ‘Interaction server’ can automatically synchronize uploaded content with other connected ‘User Computers’.
  • the system proceeds by sending the relevant raw video, the extracted meta-data and the automatically produced video to a professional producer 70 (via internet or via a delivery service using DVDs etc.). After the professional editing is finished, the user receives a final product (e.g., produced DVD) via mail or delivery.
  • The system may also be implemented on 'Other Electronic Appliances' which do not utilize general-purpose CPUs or which lack sufficient computational power.
  • In that case, parts of the software modules described for the user computer are implemented in embedded form (ASIC, FPGA, DSP, etc.).
  • FIG. 13 illustrates method 1300 according to an embodiment of the invention.
  • Method 1300 is for determining a predictability of a media entity portion.
  • Method 1300 starts by stage 1310 of receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities.
  • Stage 1310 is followed by stage 1320 of calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives.
  • Stage 1320 may be followed by stage 1330 of responding to the predictability score.
  • Stages 1310 - 1330 can be repeated multiple times on multiple media entity portions.
  • Stage 1320 may include at least one of the following: (a) calculating distances between descriptors of the media entity and the descriptor space representatives; (b) calculating a weighted sum of probability estimations of the descriptor space representatives, wherein weights applied for the weighted sum are determined according to distances between descriptors of the media entity portion and descriptor space representatives; (c) generating the probability estimations given the reference media descriptors; wherein the generating comprises calculating, for each descriptor space representative, a Parzen estimation of a probability of the descriptor space representative given the reference media descriptors.
  • stage 1320 may include calculating the predictability of the media entity portion based on reference media descriptors that represent media entity portions that precede the media entity portion and belong to a same media entity as the media entity portion. Repeating stage 1310 and 1320 on multiple portions of the media entity can result in calculating the predictability of multiple media entity portions of the media entity and detecting media entity portions of interest. Stage 1330 may include generating a representation of the media entity from the media entity portions of interest.
  • stage 1320 can be augmented to include defining a media entity portion as a media entity portion of interest based on the predictability of the media entity portion and on at least one out of a detection of a camera motion, a detection of a camera zoom or a detection of a face close-up.
  • FIG. 14 illustrates method 1400 according to an embodiment of the invention.
  • Method 1400 is for evaluating a relationship between a first media entity and a second media entity.
  • Method 1400 starts by stage 1410 of determining a predictability of the first media entity given the second media entity based on (a) probability estimations of descriptor space representatives given second media entity descriptors, wherein the descriptor space representatives are representative of a set of media entities and (b) relationships between second media entity descriptors and descriptors of the first media entity.
  • Stage 1410 is followed by stage 1420 of determining a predictability of the second media entity given the first media entity based on (a) probability estimations of descriptor space representatives given first media entity descriptors, and (b) the relationships between first media entity descriptors and descriptors of the second media entity.
  • Stage 1420 is followed by stage 1430 of evaluating a similarity value between the first media entity and the second media entity based on the predictability of the first media entity given the second media entity and the predictability of the second media entity given the first media entity.
  • Method 1400 may be repeated multiple times, on multiple media entities. For example, it may include evaluating the relationships between multiple first media entities and multiple second media entities based on a predictability of each first media entity given the multiple second media entities and a predictability of each second media entity given the first media entity.
  • Method 1400 can be used for clustering—by evaluating the similarity value of a media entity to a cluster of media entities.
  • method 1400 can include clustering first and second media entities based on the relationships between the multiple first media entities and the multiple second media entities.
  • FIG. 15 illustrates method 1500 according to an embodiment of the invention.
  • Method 1500 is for classifying media entities.
  • Method 1500 starts by stage 1510 of receiving or generating (a) media class descriptors for each media entity class out of a set of media entity classes, and (b) probability estimations of descriptor space representatives given each of the media entity classes; wherein the descriptor space representatives are representative of a set of media entities.
  • Stage 1510 is followed by stage 1520 of calculating, for each pair of media entity and media class, a predictability score based on (a) the probability estimations of the descriptor space representatives given the media class descriptors of the media class, and (b) relationships between the media class descriptors and the descriptor space representatives descriptors of the media entity.
  • Stage 1520 is followed by stage 1530 of classifying each media entity based on predictability scores of the media entity and each media class.
  • FIG. 16 illustrates method 1600 according to an embodiment of the invention.
  • Method 1600 is for searching for a feature in a media entity.
  • Method 1600 starts by stage 1610 of receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that comprises a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature.
  • Stage 1610 is followed by stage 1620 of calculating a predictability score given a first media class based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity.
  • Stage 1620 is followed by stage 1630 of calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity.
  • Stage 1630 is followed by stage 1640 of determining whether the media entity comprises the feature based on the first media class predictability score and the second media class predictability score.
  • Stage 1640 can be followed by stage 1650 of responding to the determination.
  • stage 1650 may include detecting media entities of interest in response to a detection of the feature.
  • Method 1600 can be repeated in order to detect a feature in multiple media entities by repeating, for each media entity, stages 1610-1650.
  • the feature can be a face but this is not necessarily so.
  • FIG. 17 illustrates method 1700 according to an embodiment of the invention.
  • Method 1700 is for processing media streams.
  • Method 1700 starts by stage 1710 of applying probabilistic non-parametric process on the media stream to locate media portions of interest.
  • Examples of probabilistic non-parametric processes are provided in the specification.
  • A non-parametric probability estimation is an estimation that does not rely on a predefined (or known in advance) probability distribution, but derives probability estimations directly from the (sample) data.
  • Stage 1710 may include detecting media portions of interest in response to at least one additional parameter out of: (a) a detection of a change of focal length of a camera that acquires the media; (b) a detection of a motion of the camera; (c) a detection of a face; (d) a detection of predefined sounds; (e) a detection of laughter; (f) a detection of predefined facial expressions; (g) a detection of an excited voice, and (h) detection of predefined behavior
  • Stage 1710 is followed by stage 1720 of generating metadata indicative of the media portions of interest.
  • Stage 1720 may include adding tags to the media portions of interest.
  • Stage 1720 is followed by stage 1730 of responding to the metadata.
  • Stage 1730 may include at least one of the following: (a) generating a representation of the media stream from the media portions of interest; (b) generating a trick play media stream that comprises the media portions of interest; (c) finding media portions of interest that are similar to each other; (d) tagging media portions of interest that are similar to each other; and (e) editing the media stream based on the media portions of interest.
  • Either one of the mentioned above methods can be executed by a computer program product that includes a non-transitory computer readable medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method for determining a predictability of a media entity portion, the method includes: receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities; and calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/041,457, filed on Mar. 7, 2011, which claims the priority of U.S. Provisional Patent Application No. 61/311,524, filed on Mar. 8, 2010, which is incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • In recent years, there has been an explosion of visual information including personal images and video. Personal cameras today are affordable and portable, and enable shooting video with portable camcorders (e.g., Flip), pocket still cameras and camera-phones (e.g., iPhone). This enhanced portability and increased ease of use enable people to shoot video casually on any occasion. This creates an exponential growth in the amount of generated personal video. Although people are shooting more and more video, there is not a matching increase in the amount of viewing or sharing of personal video.
  • The internet video revolution has made a considerable impact in making video widely available to anyone. However, while large companies have grown by providing internet video services (e.g., YouTube, Hulu, Blinkx etc.), they provide a comprehensive solution only for viral video, TV-shows and movies. Personal video is left without any real comprehensive solution and thus viewing and sharing personal video is very limited. In contrast to other kinds of internet video, personal video is initially very raw and boring and thus not suitable for watching or sharing. In addition, personal video is completely unstructured, and thus can only be browsed primitively (manual forward-backward). Lastly, personal video does not contain meaningful meta-data and therefore cannot be searched. These problems, which create a poor user experience, stand in contrast to other videos on the internet (e.g., viral video in YouTube), which can be searched, browsed and shared.
  • When compared to other kinds of internet video, personal video has an inherent problem that each personal video has a very small circle of interest (a few friends and family). As a result, the few viewers of each such video will not supply enough textual information and meta-data to enable textual mining engines (e.g., Google). Thus, while other forms of internet video gain a significant amount of textual meta-data from viewers, personal video remains raw and mostly non-usable. In addition, personal video is mostly not edited and not produced, which creates huge files with boring content. As a result, besides being difficult to transmit, share and upload, such files require bandwidth and storage space that is expensive relative to the minimal or zero amount of viewing they can generate.
  • There are many publications and patents involving partial solutions to the problem of browsing, searching and sharing personal video. For instance, (Method and system for searching graphic images and videos n.d.) provides a method and system for searching in images and video. In (System and method for adaptive video fast forward using scene generative models n.d.) a method and system are presented for adaptive fast forward in video using a specific approach. The work in (Analysis of Video Footage n.d.) presents a method for extracting segments of interest from video, which are useful for a table of contents. The paper in (Emiliano Acosta and Luis Tones and Alberto Albiol and Edward Delp 2002) presents an approach for utilizing face detection and recognition for video indexing. The paper in (Oren Boiman and Eli Shechtman and Michal Irani 2008) presents an approach for classifying images. There are many other works, each dealing with specific aspects of the problem discussed above. While there are many partial, ad hoc solutions to the problem of browsing, searching and sharing of personal video, there is no single unified solution for handling this problem. Due to the magnitude of the problem and the large number of required modules, any practical system for solving this problem that uses many ad hoc solutions would be extremely complicated, inflexible and not scalable. However, partial solutions to this problem are inadequate: for instance, without being able to automatically edit and produce personal video, users would not be interested in sharing the raw footage, which eliminates one of the main drivers for using personal video. Without searching capabilities, and considering the exponential increase in personal video data, users will not be able to locate interesting parts in their personal media. Similarly, without browsing capabilities inside video and between related videos, users will not be able to explore their vast personal video library. Therefore, although partial solutions to the problems discussed above have existed for more than 20 years, it is hard to point to a single usable system for browsing, searching and sharing personal video. This lack of suitable solutions explains the relatively tiny fraction of personal video which is actually shared on the Internet.
  • SUMMARY OF THE INVENTION
  • A method for determining a predictability of a media entity portion, the method may include: receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities; and calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives.
  • The method may include calculating distances between descriptors of the media entity and the descriptor space representatives.
  • The calculating of the predictability score of the media entity may include calculating a weighted sum of probability estimations of the descriptor space representatives, wherein weights applied for the weighted sum are determined according to distances between descriptors of the media entity portion and descriptor space representatives.
  • The method may include generating the probability estimations given the reference media descriptors; wherein the generating may include calculating, for each descriptor space representative, a Parzen estimation of a probability of the descriptor space representative given the reference media descriptors.
  • The method may include calculating the predictability of the media entity portion based on reference media descriptors that represent media entity portions that precede the media entity portion and belong to a same media entity as the media entity portion.
  • The method may include calculating the predictability of multiple media entity portions of the media entity and detecting media entity portions of interest.
  • The method may include generating a representation of the media entity from the media entity portions of interest.
  • The method may include defining the media entity portion as a media entity portion of interest based on the predictability of the media entity portion and on at least one out of a detection of a camera motion, a detection of a camera zoom or a detection of a face close-up.
  • A method for evaluating a relationship between a first media entity and a second media entity; the method may include: determining a predictability of the first media entity given the second media entity based on (a) probability estimations of descriptor space representatives given second media entity descriptors, wherein the descriptor space representatives are representative of a set of media entities and (b) relationships between second media entity descriptors and descriptors of the first media entity; determining a predictability of the second media entity given the first media entity based on (a) probability estimations of descriptor space representatives given first media entity descriptors, and (b) the relationships between first media entity descriptors and descriptors of the second media entity; and evaluating a similarity value between the first media entity and the second media entity based on the predictability of the first media entity given the second media entity and the predictability of the second media entity given the first media entity.
  • The method may include evaluating the relationships between multiple first media entities and multiple second media entities based on a predictability of each first media entity given the multiple second media entities and a predictability of each second media entity given the first media entity.
  • The method may include clustering first and second media entities based on the relationships between the multiple first media entities and the multiple second media entities.
  • A method for classifying media entities, the method may include: receiving or generating (a) media class descriptors for each media entity class out of a set of media entity classes, and (b) probability estimations of descriptor space representatives given each of the media entity classes; wherein the descriptor space representatives are representative of a set of media entities; and calculating, for each pair of media entity and media class, a predictability score based on (a) the probability estimations of the descriptor space representatives given the media class descriptors of the media class, and (b) relationships between the media class descriptors and the descriptor space representatives descriptors of the media entity; and classifying each media entity based on predictability scores of the media entity and each media class.
  • A method for searching for a feature in a media entity, the method may include: receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a predictability score given a first media class based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • The method may include searching for the feature in multiple query media entities by repeating, for each media entity of the multiple media entities, the stage of receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a first media class predictability score based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • The method may include detecting media entities of interest in response to a detection of the feature.
  • The method may include searching for a feature that is a face.
  • A method for processing media streams, the method may include: applying probabilistic non-parametric process on the media stream to locate media portions of interest; and generating metadata indicative of the media portions of interest.
  • The method may include adding tags to the media portions of interest.
  • The method may include generating a representation of the media stream from the media portions of interest.
  • The method further may include detecting media portions of interest in response to at least one additional parameter out of: (a) a detection of a change of focal length of a camera that acquires the media; (b) a detection of a motion of the camera; (c) a detection of a face; (d) a detection of predefined sounds; (e) a detection of laughter; (f) a detection of predefined facial expressions; (g) a detection of an excited voice, and (h) detection of predefined behavior
  • The method may include generating a trick play media stream that may include the media portions of interest.
  • The method may include finding media portions of interest that are similar to each other.
  • The method may include tagging media portions of interest that are similar to each other.
  • The method may include editing the media stream based on the media portions of interest.
  • A computer program product that may include a non-transitory computer readable medium that stores instructions for receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities; and calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives.
  • The computer program product can store instructions for calculating distances between descriptors of the media entity and the descriptor space representatives.
  • The calculating of the predictability score of the media entity may include calculating a weighted sum of probability estimations of the descriptor space representatives, wherein weights applied for the weighted sum are determined according to distances between descriptors of the media entity portion and descriptor space representatives.
  • The computer program product can store instructions for generating the probability estimations given the reference media descriptors; wherein the generating may include calculating, for each descriptor space representative, a Parzen estimation of a probability of the descriptor space representative given the reference media descriptors.
  • The computer program product can store instructions for calculating the predictability of the media entity portion based on reference media descriptors that represent media entity portions that precede the media entity portion and belong to a same media entity as the media entity portion.
  • The computer program product can store instructions for calculating the predictability of multiple media entity portions of the media entity and detecting media entity portions of interest.
  • The computer program product can store instructions for generating a representation of the media entity from the media entity portions of interest.
  • The computer program product can store instructions for defining the media entity portion as a media entity portion of interest based on the predictability of the media entity portion and on at least one out of a detection of a camera motion, a detection of a camera zoom or a detection of a face close-up.
  • The computer program product can store instructions for: determining a predictability of the first media entity given the second media entity based on (a) probability estimations of descriptor space representatives given second media entity descriptors, wherein the descriptor space representatives are representative of a set of media entities and (b) relationships between second media entity descriptors and descriptors of the first media entity; determining a predictability of the second media entity given the first media entity based on (a) probability estimations of descriptor space representatives given first media entity descriptors, and (b) the relationships between first media entity descriptors and descriptors of the second media entity; and evaluating a similarity value between the first media entity and the second media entity based on the predictability of the first media entity given the second media entity and the predictability of the second media entity given the first media entity.
  • The computer program product can store instructions for evaluating the relationships between multiple first media entities and multiple second media entities based on a predictability of each first media entity given the multiple second media entities and a predictability of each second media entity given the first media entity.
  • The computer program product can store instructions for clustering first and second media entities based on the relationships between the multiple first media entities and the multiple second media entities.
  • The computer program product can store instructions for: receiving or generating (a) media class descriptors for each media entity class out of a set of media entity classes, and (b) probability estimations of descriptor space representatives given each of the media entity classes; wherein the descriptor space representatives are representative of a set of media entities; and calculating, for each pair of media entity and media class, a predictability score based on (a) the probability estimations of the descriptor space representatives given the media class descriptors of the media class, and (b) relationships between the media class descriptors and the descriptor space representatives descriptors of the media entity; and classifying each media entity based on predictability scores of the media entity and each media class.
  • The computer program product can store instructions for: receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a predictability score given a first media class based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • The computer program product can store instructions for searching for the feature in multiple query media entities by repeating, for each media entity of the multiple media entities, the stage of receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that may include a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature; calculating a first media class predictability score based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity; calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity; and determining whether the media entity may include the feature based on the first media class predictability score and the second media class predictability score.
  • The computer program product can store instructions for detecting media entities of interest in response to a detection of the feature.
  • The computer program product can store instructions for searching for a feature that is a face.
  • The computer program product can store instructions for: applying probabilistic non-parametric process on the media stream to locate media portions of interest; and generating metadata indicative of the media portions of interest.
  • The computer program product can store instructions for adding tags to the media portions of interest.
  • The computer program product can store instructions for generating a representation of the media stream from the media portions of interest.
• The computer program product can store instructions for detecting media portions of interest in response to at least one additional parameter out of: (a) a detection of a change of focal length of a camera that acquires the media; (b) a detection of a motion of the camera; (c) a detection of a face; (d) a detection of predefined sounds; (e) a detection of laughter; (f) a detection of predefined facial expressions; (g) a detection of an excited voice; and (h) a detection of predefined behavior.
  • The computer program product can store instructions for generating a trick play media stream that may include the media portions of interest.
  • The computer program product can store instructions for finding media portions of interest that are similar to each other.
  • The computer program product can store instructions for tagging media portions of interest that are similar to each other.
  • The computer program product can store instructions for editing the media stream based on the media portions of interest.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
  • FIG. 1 illustrates a system according to an embodiment of the invention;
  • FIG. 2 illustrates a system and its environment according to an embodiment of the invention;
  • FIG. 3 illustrates a method according to an embodiment of the invention;
  • FIG. 4 illustrates a pre-processing block according to an embodiment of the invention;
  • FIG. 5 illustrates a query block according to an embodiment of the invention;
  • FIG. 6 illustrates a similarity block according to an embodiment of the invention;
  • FIG. 7 illustrates a classification block according to an embodiment of the invention;
  • FIG. 8 illustrates a clustering block according to an embodiment of the invention;
  • FIG. 9 illustrates a SalienSee block according to an embodiment of the invention;
  • FIG. 10 illustrates a detection block according to an embodiment of the invention;
  • FIG. 11 illustrates an editing process according to an embodiment of the invention;
  • FIG. 12 illustrates a system and its environment according to an embodiment of the invention; and
• FIGS. 13-17 illustrate methods according to an embodiment of the invention.
  • It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
• The illustrated methods, systems and computer program products may provide a comprehensive solution to the problems of browsing, searching, editing and producing personal video, by utilizing automatic image and video content analysis. In contrast to previous related art, the methods, systems and computer program products may identify all the required aspects of the problem and thereby provide a complete solution.
• The term media entity refers to information representative of visual information, information representative of audio information, or a combination thereof. Non-limiting examples of a media entity may include an image, a video stream, an access unit, multiple images, a portion of an image, a portion of a video stream, a transport packet, an elementary stream, a packetized elementary stream, an audio stream, an audio frame, or any combination of audio representative information.
• Any reference to a method should be interpreted as a reference to a system and additionally or alternatively as a reference to a computer program product. Thus, when describing a method, it is noted that the method can be executed by a system or by a computer that executes instructions of the computer program product.
• Any reference to a system should be interpreted as a reference to a method executed by the system and additionally or alternatively as a reference to a computer program product. Thus, when describing a system, it is noted that the system can execute a method or can execute instructions of the computer program product.
• Any reference to a block can include a reference to a hardware block, a software block or a stage of a method. Thus, for example, any of the blocks illustrated in FIGS. 4-9 can be regarded as method stages.
  • The methods, systems and computer program products may provide a unified and generic approach—the media predictability framework—for handling the numerous capabilities required for a comprehensive solution.
• Thus, instead of multiple ad hoc modules and partial solutions, the methods, systems and computer program products may provide a single coherent approach to tackle the entire problem.
  • The methods, systems and computer program products can be applied in diverse technological environments.
• Methods, systems and computer program products may provide a comprehensive solution for using personal video, as they enable browsing, searching, editing and production of personal video.
  • The methods, systems and computer program products may rely on a unified automated media content analysis method, instead of relying on numerous methods for implementing the long list of features required for ‘media understanding’. The proposed method relies on a unified content analysis platform that is based on the Media Predictability Framework (discussed in the next section), which forms the technological foundation of the product.
• In this section we discuss the various types of meta-data (and their use) obtained using analysis with the media predictability framework.
  • The processing of media entities may involve running software components on various hardware components and the processing of data files in several internet locations. We use the following entities in the text below:
  • User Computer: A computer with general computing capabilities such as Desktop, Laptop, Tablet, Media Center, Smartphone.
  • Personal Media: Images and Video of any common format (e.g., For images: Jpeg, Tiff, Gif, Jpeg2000 etc. For Video: Avi, wmv, mpeg-4, QuickTime etc.)
  • Private Data and Meta-Data Database: Binary and Textual data and meta-data kept in tables and files either as a flat database organization or as a relational database (e.g., MySql).
  • Interaction Server: An online server (either dedicated or in a computing cloud) which handles at least one of: uploading of user media, streaming, recording usage and viewing analytics, handling user and visitor interaction and registration, handling online payment, storage of online data and meta-data, selecting ads per viewed video and per user/visitor.
  • Content Analysis Server: A server which performs content analysis for uploaded user media (user video including audio, user images, user selected soundtrack)
  • Production Server: A server, which utilizes the original footage and the analyzed meta-data to create various personalized and stylized video productions. This server may utilize professional video creative software such as Adobe After Effects, Sony Vegas etc. to render the video production (e.g., video effects and transitions).
  • Online Data and Meta-Data Database: An online database, which contains Binary and Textual data and meta-data kept in tables and files either as a flat database organization or as a relational database (e.g., MySql).
  • User Interface Application: A standalone application or web application (runs inside a web browser) or a software widget or software gadget which enables the user to (at least one of) play, view, browse, search, produce, upload, broadcast and share his personal media.
  • Mobile Application: An application designed for a mobile device (e.g., Cellular application, iPad application etc.). This application is a specialized user interface application for the respective mobile device.
  • Local Player—A mini-version of the User Interface Application with reduced capabilities, which runs locally on the user/visitor computing device using a playing platform (e.g., Flash, Silverlight, HTML5).
  • Electronic Media Capturing Device—An electronic device which can capture personal image and/or video such as: Camcorder, Still Camera, Camera-phone, Internet Camera, Network Camera, Camera embedded in User Computer (e.g., Laptop) etc.
• ‘My Video’; ‘My Pictures’—any set of file directories or libraries which reside on the user computer (e.g., on a hard drive, or on any electro-magnetic or optical media such as DVD, CD, Blu-Ray disk, Flash memory etc.) or in the user's online folders (e.g., DropBox) and which store the user's personal media or shared media.
• FIG. 1 illustrates an interaction server 10, a user computer 20 and image acquisition devices 31-33 according to an embodiment of the invention.
  • The user provides acquired media from image acquisition devices such as camcorder 31, camera-phones 32, digital still camera 33 etc. The media can be stored in a private database 21 of the user computer 20 and/or be loaded to the interaction server 10.
• If the user stores the media on the user computer 20, the content analysis engine 22 of the user computer 20 analyzes the media using database accesses to a database 23 of the user computer 20. The database 23 can store private data and private meta-data of the user. Another database 11 (also referred to as the on-line database) can store data and meta-data shared by multiple users. The other database 11 and a content analysis server 12 belong to the interaction server 10.
  • The analysis results of the content analysis engine 22 or of the content analysis server 12 can be stored in either one of the databases 11 and 23—based on, at least, a selection of a user.
• The user can directly upload media to the interaction server 10. In this case, the media is stored on the online database 11 and analyzed by the content analysis server 12. The resulting data and meta-data can be stored on the online database 11. Another option for the user is to use a combination of the approaches above: uploading to the Interaction server, downloading and synchronizing to the user computer, and processing in the Content Analysis Engine.
• FIG. 2 illustrates an interaction between an interaction server 10, the user computer 20, a mobile network 50 and the Internet 60 according to an embodiment of the invention.
• The user can interact using a User Interface (UI) Application, which might be a standalone application or a web application in a web browser. Using this UI the user can search, browse, produce and broadcast his personal media (stored on the user computer 30). The UI may get input from the original user media (e.g., in ‘My Video’/‘My Pictures’ or other user media locations) together with the extracted data and meta-data from the private and online databases 11, 15, 21 and 23. For instance, even if the user computer 20 has no private database, the user can still search and browse the online databases 11 and 13 using the UI. Using the Mobile Application UI 60 the user can search and browse the data on the interaction server 10 (according to his user privacy settings) from a mobile platform (e.g., cellular phones, iPad). Users as well as visitors can view, browse and search media on the Interaction server using the ‘Local Player’ (e.g., Flash Player embedded in HTML pages), which can be embedded in other web content.
  • Browsing
  • Browsing enables users to quickly find interesting information, when the users cannot easily describe what they are seeking. For this mode of associative discovery, it should be easy to understand the content of a video and to quickly navigate inside video and between semantically related video clips.
• In order to support browsing, the invention enables automatic generation of a table of contents, intelligent previews and thumbnails, links to “similar” video, content-based fast-forwarding and spatial video browsing.
• The table of contents may be a table of visual content (optionally hierarchical), which segments a video (or any other set of visual entities) into scenes with similar visual content. Note that these scenes usually cannot be separated by detecting different shots, and they might overlap in time (e.g., the cameraman zooms in on a first context, then moves on to a second context, then returns to the first context).
  • Intelligent preview and thumbnails may include a very short (e.g., 5-10 seconds long) summary of the most representative portions of the video. This condensed summary enables the user to get a quick impression of the content in the video. It could comprise frames (storyboard), short clips or a combination of both. Such short representation can be even used as an intelligent thumbnail that plays the video preview when the user selects it (e.g., mouse hovers over thumbnail).
• Link to “similar” video—may include a list of related video and images, where relatedness is determined according to direct visual similarity as well as semantic similarity of the visual content: similar persons, similar objects, similar place, similar event, similar scene, similar time. The link can either point to an entire clip or to a time frame in it. Such links enable associative browsing when the user is not seeking a specific content.
• Content-based fast forward. Viewing personal video may become a boring task very quickly, as real-life activity tends to repeat itself. Content-based fast-forward enables the user to fast forward to the next novel activity (with different actions, behavior, etc.). This capability is executed either by adapting the speedup to the (automatically determined) degree of interest or by jumping to the next interesting segment in the video.
  • Spatial Video Browsing. In many video shots, the camera wanders around while scanning the area of interest. Spatial Browsing enables the user to freeze time and simulate spatial browsing with the camera. Namely, in response to a request from the user to move the camera (via keyboard, mouse or touch screen) the viewed image will change to an image with the proper camera point of view.
  • Searching
• The Search engine enables the users to quickly retrieve information according to a given criterion. Searching can be done using a visual or textual query. In order to enable searching, the method provides deep, frame-based indexing, automatic tagging and keywords, and criterion-based search.
• Deep, frame-based indexing—The method creates an index of objects, actions, faces, facial expressions, types of sound, places and people. Objects include, among many possible options, pets, cars, computers, cellular phones, books, paintings, TVs, tables, chairs etc. The indexing includes the extraction of new entities, comparing them to known entities (e.g., a known face) and keeping an index item for them. The index can be associated with a frame, a video segment or with the entire video clip.
  • Automatic Tagging and Keywords—The method clusters repeating entities (e.g., a repeating face) and generates a tag from it. A tag has a visual representation (e.g., image of a face) and a textual tag (e.g., name of a person). The user can name a visual tag. Each frame has a list of tags and each video has a list of the most important (frequent) tags. The user can add his own tags to the automatically generated tags. When a tag has a semantic meaning (e.g., ‘dog’ as opposed to ‘Rexy’) the method relates the semantic meaning of the tag to other synonym keywords enabling easier textual search.
  • Criterion based Search—The user can search by a query combining free text, visual and textual tags. The method finds the video or the images that are most relevant to the query. For instance, the user can select a picture of a person's face, select the textual tag ‘living-room’ and add free text ‘birthday party’ (which is used as a keyword).
  • Automatic Editing and Production—In order to support sharing and broadcasting of personal video the raw video should be edited and produced automatically (or with minimal user interaction). The method may enable at least one of the following: (a) Automatic Editing of Video and Images; (b) Semi-Automatic Editing of Video and Images; (c) Automatic Video production of selected clips; (d) Automatic Interpretation of user directives; (e) Manual Post Production; (f) Personalized Production; (g) Professional Production; (h) Automatic Movie “Trailer”; (i) Automatic Content Suggestions; (j) Automatic News and Updates; (k) Automatic Group and Event Suggestions; (l) Graphics-Video interaction; (m) Return to original video; (n) Uploading and Broadcasting: and (o) Documentary web-pages.
  • Automatic Editing of Video and Images—The method automatically selects and edits clips and images from raw video and images input, in order to create a shorter video summary. The automatic editing relies on various factors for choosing the most important parts: Faces, known persons/objects, camera motion/zoom, video and image quality, action saliency, photo-artistic quality, type of voice/sound, facial expression (e.g., smile).
• As a part of the editing process, the image quality is improved using de-noising, video stabilization and super-resolution. The automatic editing can change the speed of a video (e.g., slow motion/fast motion) or even convert a video clip to an image if, for instance, the clip is too short. Another case for converting a video clip to an image is when the camera pans and the automatic editing decides to create a mosaic image from the clip.
• The user can select a sound track to add to the edited video. Prior meta-data and analysis of the audio track might affect the automatic editing decisions (e.g., fast-paced, short clips for a high-tempo audio track). The automatic editing generates the selected clips (and images) to fit a video length specified by the user (e.g., 45 seconds).
  • Semi-Automatic Editing of Video and Images—The user can modify the resulting automatic editing by the following operations:
  • Removing an unwanted clip
  • Adding a suggested clip (from an automatically prepared candidate list)
• Selecting one or more faces to be emphasized or excluded from the edited video. This list of faces is automatically extracted from the video and can be displayed to the user using a graphical user interface similar to the figure below.
  • Other types of object or tagged entities can be similarly removed or emphasized (e.g. emphasizing a certain location).
  • FIG. 11 illustrates a process of editing a video entity.
  • Symbols representing media entity portions of interest 220, media entity portions that may be of interest 230 (but may have a lower importance level), features 240 (such as faces of persons) and feature attributes 250 can be displayed to the user. The user can select which media entity portions to include in an edited media entity and can, additionally or alternatively, indicate an attribute such as an importance level of features. An attribute can reflect a preference of a user—for example—whether the feature is important or not, a level of importance of the feature, or any other attribute that may affect an editing that is responsive to the attribute.
• According to an embodiment of the invention, an editing process can include one or more iterations. The user can be presented with media entity portions of interest, features, and even an edited media entity, and feedback can be received from the user (whether to alter the edited media entity, which features are more or less important, adding media entity portions of interest, defining a level of interest that should allow a media entity portion of interest to be considered as a candidate to be included in an edited media entity, and the like).
• These inputs are provided to any of the above-mentioned blocks or systems, which may edit the edited media entity in response. The importance level provided by the user is taken into account during the editing, as images that include a feature requested by the user will more likely be included in the edited media entity.
  • Automatic Video production of selected clips—The selected clips and images can be used in a straightforward manner to create a video clip summary. However, the method can also provide a much more compelling automatically produced video clip. The automatic production makes use of a library of effects, transitions, graphic assets and sound tracks, which are determined according to the video and the extracted meta-data. For instance, an algorithm can choose to use a face-morphing transition effect between two clips, where the first clip ends in a face and the second clip starts in a different face. Another example is to use an effect where the frame is moving in the direction of the camera motion.
  • Automatic Interpretation of user directives—The user can act as a director during the filming of the video and perform various predefined gestures, in order to guide the later automatic editing and production stage. For instance, a user can indicate that he would like to create a mosaic by passing a finger from one side of the camera to the other and then panning slowly. Another example is that a user signals that he has just captured an important clip that should pop up in any editing by a special gesture (e.g. making ‘V’ with the fingers). In this manner, the system can identify user gestures and enables the user to act as the director of the automatic summarization in vivo.
• Manual Post Production—The user can watch the resulting production and can intervene to override automatic decisions. For instance, the user can remove or add clips from a candidate list of clips using a simple checkbox interface. In addition, the user can change the starting point and end point of each selected clip. Moreover, the user can change the transitions, if he likes, in a post-production stage.
• Personalized Production—Besides manual post editing, the user can affect the automatic production and editing stages using a search query, which emphasizes the parts of the video which are important to the user. The query can take the form of a full search query (text+tags+keywords). For instance, a query of the form ‘Danny jumping in the living room’ would put more emphasis in the editing and the production stages on parts which fit the query. Another example is of a query which uses a visual tag describing a pet dog and a location tag with an image of the back yard. Another option for the user to affect the editing stage is by directly marking a sub-clip in the video which must appear in the production. Yet another example is that the user marks several people (resulting from Face Clustering and Recognition) and gets several productions, each production with the selected person highlighted in the resulting clip, suitable for sharing with that respective person.
  • Professional Production—The method allows an additional, professional human editing and production. The method delivers the raw video, the extracted meta-data and the automatically produced video to professional producers (via internet or via a delivery service using DVDs etc.). After the professional editing, the user receives a final product (e.g., produced DVD) via mail or delivery. Such a professional production can complement the automatic production when professional quality is needed (e.g., for souvenirs, presents). Alternatively, the method can export the automatic editing and the respective meta-data to common video editing formats (e.g., Adobe Premiere, Apple Final Cut).
  • Automatic Movie “Trailer”—The method described above for editing and production of video can be used to create an automatic movie trailer for every video in the user library. This is a produced version of the video preview, which can be served as the default version for sharing a single video. This “Trailer” can also be used as a short version for various kinds of user generated content (even if not personal), for instance for automatic “Trailers” of popular YouTube videos for users who prefer to view the highlight before viewing the entire video.
  • Automatic Content Suggestions—The method automatically suggests to the user edited video clips which are suitable for sharing. For instance, after the video from a recent trip was loaded to the user computer, the method automatically produces the relevant data and suggests it to the user, who can decide to share the suggestion by a simple approval of the suggestion.
  • Automatic News and Updates—The method uses the extracted meta-data to automatically find shared video and images which might interest the user. For instance, the method can suggest to the user to view a video in one of his friend's shared content in which he participates. In this manner, a user can be informed of visual information, which may be of interest to him, even if he did not upload the video by himself.
• Automatic Group and Event Suggestions—The method uses the extracted meta-data and discovered similarities between user data and shared data to propose the formation of groups of people (e.g., close family, trip friends) and event suggestions (e.g., trip, party, birthday). In this manner, shared media entities, which can be clustered with other media, can be grouped in a semi-automatic manner (with user approval). In addition, the method can suggest producing personalized summaries of events—for instance, generating a different summary for each chosen participant in which this participant is highlighted in the generated synopsis. Such personalized summaries can encourage event and group participants to add their own media from the event, remix the results and so on. This can promote the building of a large media pool for an event or a group.
  • Graphics-Video interaction—The method enables to add a layer of graphic-video interaction, based on the extracted meta-data. For instance, a conversation bubble can track a person's head or face. Another example is of a graphic sprite interacting with the video (e.g., a fly added as a graphic layer to the video and which avoids a person as he moves in the clip). This added layer can be disabled by the user.
  • Return to original video—The method enables the user to return to the original video clip from any point in the produced video by double-clicking (or tapping in touch screen) the display in that point.
• Uploading and Broadcasting—The method enables the user to upload the produced video and related meta-data to a video storage site, which enables the video to be embedded and streamed via a video player (e.g., Flash Player) in various internet locations including: email, social networks, blog sites, home pages, content management systems, and image and video sharing sites.
  • Documentary web-pages—The method enables the user to create documentary web pages, which are dedicated for a certain entity such as event, person, group and object. For example, creating a web page of a child, where video clips and images of the child are kept, documenting the child at different stages of his life. Another example is a page documenting a party where all participating users are invited to view current productions, upload their footage of the party, invite further participants and use all uploaded footage to create new productions (and so on). A different example is a web page documenting a user's trips in the world. Yet another important example is a memorial page dedicated to the memory of a deceased person. The system can automatically detect new videos or images that are relevant to the documentary page, and add them to the page via approval of the user. This web page can be organized as an album or as a storyboard, and can be accompanied with annotations and text that was inserted automatically (using the meta-data) or by the user.
  • FIG. 3 illustrates a method 300 according to an embodiment of the invention.
  • Method 300 may start by stage 302 or 304. These stages are followed by a sequence of stages 310, 320, 330, 340, 350 and 360.
  • Stage 302 includes selecting, by a user, clips and images to be included in the production, a time limit and an optional query for indicating importance for the editing stage.
• Stage 304 includes selecting, by the content analysis server or content analysis engine, clips and images automatically to be used in a proposed production.
• Stage 310 includes completing, by the content analysis server or the content analysis engine, any unfinished analysis (if any) for the requested media.
• Stage 320 includes using the ImportanSee measure and other meta-data properties to automatically provide at least one video editing proposal.
• Stage 330 includes adding, automatically, production graphics to the video according to the meta-data, and optionally suggesting an audio track to add to the production.
• Stage 340 includes presenting the results to the user. The results may include the clip selection, additional media clip/image proposals (which are currently out of the production), and relevant graphical effects. Optionally, the user also previews the current production.
• Stage 350 includes adapting the selection: changing start/end points, selected clips, audio track etc.
• Stage 360 includes saving the video production compilation in the meta-data DB and producing the video after obtaining user approval.
  • The Media Predictability Framework
• The long list of features above is very difficult to implement in an ad hoc manner. Instead, the proposed method relies on a unified media content analysis platform, which we denote as the media predictability framework. In this framework, we measure to what extent a query media (visual or audio) entity is predictable from other reference media entities and use it to derive meta-data on this query entity. For instance, if a query media entity is unpredictable given the reference media, we might say that this media entity is interesting or surprising. We can utilize this measurement, for example, to detect interesting parts in a movie by seeking video segments that are unpredictable in this manner from the rest of the video. In addition, we can use the media predictability framework to associate related media entities. For example, we can associate a photo of a face with a specific person if this photo is highly predictable from other photos of that person.
  • In the sections below, we first describe the theoretical foundations of the media predictability framework, then detail the implementation of the media analysis building blocks using this framework. Lastly, we describe how to implement the diverse features above, providing a comprehensive solution for personal video using the media analysis building blocks.
  • A Non Parametric Approach for Determining Media Predictability
• The predictability framework is a non-parametric probabilistic approach for media analysis, which is used by our method as a unified framework for all the basic building blocks that require high-level media analysis: Recognition, Clustering, Classification, SalienSee Detection, etc. We will first describe the predictability framework in detail and then show how to derive the different building blocks from it.
  • Generally speaking, the predictability measure is defined as follows: Given a query media entity d and a reference media entity C (e.g.—portions of images, videos or audio) we say that d is predictable from C if the likelihood P(d|C) is high, and un-predictable if it is low. In this section we describe how to actually compute this predictability score in a unified manner, regardless of the application.
  • Descriptor Extraction
  • In this subsection we describe how to extract descriptors for a media entity.
• A specific case of media descriptors is image descriptors. Each image descriptor describes a patch, a region of interest or an arbitrarily shaped region in the image (this can also be the entire image). One of the most informative image descriptors is the Daisy descriptor (Fua 2008), which computes a gradient image and then, for each sample point, produces a log-polar sampling (of size 200) of the gradient image around this point (a detailed description is given in (Fua 2008)). Video descriptors describe space-time regions (e.g., an x-y-t cube in a video). Examples of video descriptors include raw space-time patches or concatenating Daisy descriptors applied on several consecutive frames (e.g., 3 frames, yielding a descriptor of length 200×3=600 around each sample point). However, there are many types of descriptors, known in the literature, that capture different aspects of the media, such as simple image patches, shape descriptors (see for example (G. Mori, S. Belongie, and J. Malik 2005)), color descriptors, motion descriptors, etc. Information from different types of descriptors can be fused to produce a better predictability estimation.
  • Similar to visual descriptors, audio can also be analyzed using audio descriptors. Some audio descriptors that are popular in the literature are MFCC, PLP, or the short-time spectrum. Audio descriptors can be specialized for speech representation, music representation, or general sound analysis. These descriptors can be computed, for example, using open source tools such as the CMU sphinx (http://cmusphinx.sourceforge.net/). Although each media has its own very different descriptor type, our predictability framework is applicable to all descriptor and media types.
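• As a rough illustration only (not the Daisy, MFCC or any other descriptor named above), the following Python sketch samples a grayscale image on a dense grid and computes a simple gradient-orientation histogram per patch; the patch size, grid step and histogram size are arbitrary assumptions.

    import numpy as np

    def dense_patch_descriptors(gray_image, step=8, patch=16, bins=8):
        """Return (descriptors, sample_points) for a 2-D grayscale image array."""
        gy, gx = np.gradient(gray_image.astype(float))
        magnitude = np.hypot(gx, gy)
        orientation = np.arctan2(gy, gx)          # gradient orientation in [-pi, pi]
        descriptors, points = [], []
        h, w = gray_image.shape
        for y in range(0, h - patch, step):
            for x in range(0, w - patch, step):
                m = magnitude[y:y + patch, x:x + patch].ravel()
                o = orientation[y:y + patch, x:x + patch].ravel()
                hist, _ = np.histogram(o, bins=bins, range=(-np.pi, np.pi), weights=m)
                norm = np.linalg.norm(hist)
                descriptors.append(hist / norm if norm > 0 else hist)
                points.append((y, x))
        return np.array(descriptors), points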
  • FIG. 4 illustrates a pre-processing block 40 according to an embodiment of the invention.
  • The pre-processing block 40 receives reference media entities 101 and a set of media data and outputs reference media descriptors 103 that can be stored in a media descriptors database.
• The pre-processing block 40 processes the reference media entities 101 by a descriptor extractor 44 to provide a descriptor set of the reference media entities. The pre-processing block 40 also generates (by descriptor extractor 41 and representative extractor 42) descriptor space representatives of the set of media data 102. The descriptor set of the reference media entities and the descriptor space representatives are fed to a likelihood estimator 45 that outputs the reference media descriptors 103.
• Descriptor Extraction: Given a reference set of media entities C, we first compute a set of descriptors over a set of sampling points. The sampling points can be a uniform dense sampling of the media (for example, a grid in an image) or only at points of interest (e.g., corners in an image). Let {f_1^C, …, f_K^C} denote the set of descriptors computed for the media reference C.
• Descriptor-Space Representatives: Given a set of media entities (which can be the reference media itself), the descriptors for these entities are extracted. Next, the representative set is extracted from the full descriptor set in the following manner: a random sampling of the descriptors can be used to generate the representatives, but vector quantization may also be used (for example, using mean-shift or k-means quantization).
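• A minimal Python sketch of the representative-extraction step above, showing both the random-sampling and the k-means (vector quantization) variants; the number of representatives L and the use of scikit-learn's KMeans are assumptions.

    import numpy as np

    def extract_representatives(descriptors, L=256, method="kmeans", seed=0):
        """Reduce a full descriptor set to L descriptor-space representatives."""
        rng = np.random.default_rng(seed)
        if method == "random":
            idx = rng.choice(len(descriptors), size=min(L, len(descriptors)), replace=False)
            return descriptors[idx]
        # vector quantization with k-means (scikit-learn assumed to be installed)
        from sklearn.cluster import KMeans
        km = KMeans(n_clusters=min(L, len(descriptors)), n_init=10, random_state=seed)
        km.fit(descriptors)
        return km.cluster_centers_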
• Density Estimation: Given both the descriptor-space representatives {q_1, …, q_L} and the descriptor set {f_1^C, …, f_K^C} extracted from the reference C, the next step is likelihood estimation. {f_1^C, …, f_K^C} is an empirical sampling from the underlying probability distribution of the reference. In this step, we estimate the log-likelihood log P(q_i) of each representative q_i in this empirical distribution. Several non-parametric probability density estimation methods exist in the literature. The Parzen estimation of the likelihood is given by:
• p̂(q_i | f_1^C, …, f_K^C) = (1/K) · Σ_{j=1..K} K(q_i, f_j^C)
• where K(·) is the Parzen kernel function (a non-negative operator that integrates to 1). A common kernel is the Gaussian kernel K(q_i, f_j^C) = exp(−s·‖q_i − f_j^C‖²), with s a fixed kernel width. The set of descriptor-space representatives {q_1, …, q_L}, together with their corresponding likelihoods {P(q_1), …, P(q_L)} and the original descriptors {f_1^C, …, f_K^C}, is used to construct the Media Descriptors Database, which is used in the query block.
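• The Parzen estimation above can be sketched in Python as follows; the Gaussian kernel width s is an assumed parameter, and the code simply averages the kernel responses over the K reference descriptors.

    import numpy as np

    def representative_log_likelihoods(representatives, reference_descriptors, s=1.0):
        """Estimate log P(q_i) for every representative q_i with a Gaussian Parzen kernel."""
        # squared Euclidean distance between each representative and each reference descriptor
        d2 = ((representatives[:, None, :] - reference_descriptors[None, :, :]) ** 2).sum(-1)
        kernel = np.exp(-s * d2)          # Gaussian kernel K(q_i, f_j)
        p = kernel.mean(axis=1)           # average over the K reference descriptors
        return np.log(p + 1e-300)         # small constant guards against log(0)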
  • FIG. 5 illustrates a query block 50 according to an embodiment of the invention.
• The query block 50 receives a query media entity (d) 104 and reference media descriptors from the reference descriptor database, and outputs a predictability score P(d|C) 54. The query block 50 includes a descriptor extractor 51, a set (1 to K) of descriptor likelihood estimators 52(1)-52(K) and a combination unit 53.
• Descriptor Extraction 51: Given a query media entity d, we first compute a set of descriptors {f_1^d, …, f_N^d} over a set of sampling points (similar to the descriptor extraction step of the pre-processing block).
• In addition, each descriptor is attached with a weight m_i of its sample point, which can be user defined. Commonly, we use uniform weights, but other weighting schemes can be used: for example, giving a larger weight to a region of interest (e.g., an ROI in an image, which gives a weight of 1 to all descriptors inside the ROI and zero outside).
• Media Likelihood Estimation 52(1)-52(K): For each descriptor f_i^d, the log-likelihood log P(f_i^d | C) is estimated, where C is the reference media. The log-likelihood of each descriptor can be estimated in the following way:
• log P(f_i^d | C) = w_1·log P(q_1) + … + w_L·log P(q_L), with Σ_k w_k = 1
• where P(q_k) are pre-computed values extracted from the reference media descriptor database, and w_k are interpolation weights determined as a function of the distance of f_i^d from q_k. The simplest weighting scheme is linear, setting w_k ∝ ‖f_i^d − q_k‖⁻¹. This estimation can be approximated by taking only the first few nearest-neighbor representatives and setting w_k to zero for the rest of the representatives.
• More generally, the log-likelihood log P(f_i^d | C) can be estimated using a non-linear function of the representative log-likelihood values and the distances from them:
• log P(f_i^d | C) = F(log P(q_1), …, log P(q_L), ‖f_i^d − q_1‖, …, ‖f_i^d − q_L‖)
• Combination: All the likelihoods of the different descriptors are combined into a predictability score of the entire query media entity d. The simplest combination is a weighted sum of the log-likelihood estimations:
• PredictabilityScore(d | C) = Σ_i m_i·log P(f_i^d | C)
• where m_i are the sample-point weights mentioned above. If we have multiple types of descriptors (referred to below as aspects), {f_11^d, …, f_N1^d}, …, {f_1R^d, …, f_NR^d} (i.e., R different descriptor types or R aspects), the combined score becomes:
• PredictabilityScore(d | C) = Σ_{r=1..R} α_r · Σ_{i=1..N} m_i·log P(f_ir^d | C)
• where α_r are the weights of each aspect (they can be determined manually or automatically from a training set).
• More generally, dependencies between the different descriptor types can be taken into account by setting:
• F_Q = [ (Σ_{i=1..N} m_i·log P(f_i1^d | C))^0.5, …, (Σ_{i=1..N} m_i·log P(f_iR^d | C))^0.5 ]
• and:
• PredictabilityScore(d | C) = F_Q^T · A · F_Q
• where A encapsulates the dependencies between the different descriptor types (a diagonal matrix A will yield the previous formula, while taking the covariance matrix estimated empirically will yield the general formula).
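• A Python sketch of the query block for a single aspect: each query descriptor's log-likelihood is interpolated from its nearest descriptor-space representatives and the values are combined with the sample-point weights m_i, as in the formulas above. The number of neighbors and the default uniform weights are assumptions, and the function builds on the earlier sketches rather than on any actual implementation.

    import numpy as np

    def predictability_score(query_descriptors, representatives, rep_log_likelihoods,
                             n_neighbors=4, sample_weights=None):
        """Combine interpolated per-descriptor log-likelihoods into a predictability score."""
        N = len(query_descriptors)
        if sample_weights is None:
            sample_weights = np.full(N, 1.0 / N)          # uniform sample-point weights m_i
        score = 0.0
        for m_i, f in zip(sample_weights, query_descriptors):
            dist = np.linalg.norm(representatives - f, axis=1)
            nn = np.argsort(dist)[:n_neighbors]           # nearest representatives only
            w = 1.0 / (dist[nn] + 1e-12)                  # w_k proportional to 1 / distance
            w /= w.sum()                                  # so that the weights sum to 1
            log_p_f = np.dot(w, rep_log_likelihoods[nn])  # interpolated log P(f_i | C)
            score += m_i * log_p_f
        return score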
  • Empirical Predictability Improvement.
  • The predictability score can be further improved using empirical post-processing.
  • Specifically, given a single media entity d, sometimes the predictability scores for several media references PredictabilityScore (d|C1), . . . , PredictabilityScore(d|CS) are dependent.
• As a result, comparing between different reference media sets can be improved by empirically estimating the distribution of the predictability score over a “training” set. This training set aims to represent the set of queries, so it is best (if possible) to draw it randomly from the query set. Note that the distribution we are trying to estimate now is simply the distribution of the predictability scores of a media entity given a set of references C_1, …, C_S (note that this generates a new “feature” vector of dimension S for representing the query media). A straightforward approach is to use the non-parametric Parzen estimation, which has been described earlier, or recursively use our non-parametric likelihood estimation.
  • Media Analysis Building Blocks
  • In this section we describe how to derive each building block using the media predictability framework. The text below refers to the case of using a single aspect but the same approach holds for multiple aspects.
  • FIG. 6 illustrates a similarity block 60 according to an embodiment of the invention.
  • The similarity block 60 (also referred to as a similarity building block) is used to quantify the similarity between two media entities M1,M2. To do so, we use each media entity twice: once as a reference, and once as a query.
  • Referring to FIG. 6, the similarity block 60 receives a first media entity 111 and a second media entity 112. The first media entity is provided to a pre-processing block 61 (when used as a reference) that extracts first media entity descriptor space representatives that are fed (in addition to the second media entity) to a query block 50. The query block 50 outputs a predictability score of the second media entity given the first media entity.
  • The second media entity is provided to a pre-processing block 61 (when used as a reference) that extracts second media entity descriptor space representatives that are fed (in addition to the first media entity) to another query block 50. The other query block 50 outputs a predictability score of the first media entity given the second media entity.
  • Both predictability scores are fed to a unification unit 53 that outputs similarity(M1, M2) 65.
  • In more details:
  • A descriptor database is constructed from each media entity (using the pre-processing block—as was shown in the pre-processing section of the predictability framework).
  • The predictability PredictabilityScore (M1|M2) of media entity M1 given the media entity M2 as a reference is computed using the query block (as shown in the query section of the predictability framework).
  • Similarly, the predictability PredictabilityScore (M2|M1) of media entity M2 given the media entity M1 as a reference is computed.
  • The two predictability scores are combined to produce a single similarity measure. As a combination function, one can use any bimodal operator according to the specific application, such as the ‘average’ or the ‘max’ operators.
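• A minimal sketch of the similarity building block, using each media entity once as a reference and once as a query and combining the two scores with the ‘average’ operator; build_reference() is a hypothetical helper standing for the pre-processing block, and the other helpers refer to the sketches given earlier.

    def build_reference(descriptors):
        """Hypothetical helper standing for the pre-processing block: returns the
        descriptor-space representatives and their log-likelihoods for a media entity."""
        reps = extract_representatives(descriptors)
        return reps, representative_log_likelihoods(reps, descriptors)

    def similarity(descriptors_m1, descriptors_m2):
        """Symmetric similarity between two media entities M1 and M2."""
        reps1, ll1 = build_reference(descriptors_m1)             # M1 as reference
        reps2, ll2 = build_reference(descriptors_m2)             # M2 as reference
        s12 = predictability_score(descriptors_m1, reps2, ll2)   # score of M1 given M2
        s21 = predictability_score(descriptors_m2, reps1, ll1)   # score of M2 given M1
        return 0.5 * (s12 + s21)                                 # 'average' combination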
  • The “Classification” Building Block
  • FIG. 7 illustrates a classification building block 70 according to an embodiment of the invention. The classification building block is also referred to as classification block.
  • The classification building block is used to classify a media entity into one of several classes. To do so, we collect a set of media entities that relates to each class, construct a media descriptor DB from each reference class, and compare the query media to all of them using the query building block.
  • The classification block 70 receives reference media entities of each class out of multiple media classes—C1 120(1)-120(N).
• A query media entity d 104 and reference media entities of each class are fed to N query blocks 50—each query block receives the query media entity d and the reference media entities of one class, and separate query blocks receive reference media entities of different classes. Each query block 50 outputs a predictability score of the query media entity given the media entity class. A classification decision block 72 classifies the query media entity to one of these classes based on the predictability scores.
  • In more details:
  • For each class Ci, an example set of media entities relating to this class is selected.
• For each set of entities, a descriptor database DBi is constructed using the pre-processing block—as was shown in the pre-processing section of the predictability framework.
  • The predictability PredictabilityScore(d|Ci) of the query media entity d given each class is estimated using the query block (as shown in the query section of the predictability framework).
  • Finally, the predictability scores are entered into the classification decision block, which outputs the classification of d (Note that the classification doesn't necessarily have to be a hard decision on a single class, but it can be the posterior probability of d to belong to each class). The simplest decision rule is setting the classification of d to be the class C for which the predictability score of d given C is the highest. But other decision rules are also possible—for example, computing posterior probabilities (given the prior probabilities of each class). In addition, the distribution of the predictability scores given all (or subset) of the classes can be estimated using a “training” set. (A simple way to do it is using the non-parametric Parzen estimation, as described earlier). With this empirical distribution estimation, the probability of classifying d with each class can now be determined directly from the distribution, providing “Empirically Corrected” probabilities.
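• A minimal sketch of the classification building block using the simplest decision rule above (the class with the highest predictability score wins); the dictionary-based representation of the per-class descriptor databases is an assumption, and predictability_score() refers to the query-block sketch.

    def classify(query_descriptors, class_databases):
        """class_databases maps a class name to (representatives, rep_log_likelihoods)."""
        scores = {name: predictability_score(query_descriptors, reps, ll)
                  for name, (reps, ll) in class_databases.items()}
        best_class = max(scores, key=scores.get)      # simplest decision rule
        return best_class, scores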
  • The “Detection” Building Block
• The classification block can operate as a detection block. Assume that a certain feature is being searched for in a query media stream. One reference media entity class is selected as including the feature, while another reference media entity class is selected as not including the feature. The query media entity and these two media entity classes are fed to the classification block, which classifies the query media entity as belonging to one of these media classes, i.e., as including the feature or not including the feature. It is noted that more than two media classes can be provided, and they may include different associations with the feature (not just a binary relationship of including or not including the feature).
• FIG. 10 illustrates a detection block according to an embodiment of the invention. A set of media entities 160 is pre-filtered 99 to provide a set of candidates within which to search for the feature. The set of candidates and two classes of reference examples 162 and 164 are provided to a classification block 98 that decides whether the feature exists in the candidates. The output is a list of detections 97 that indicates in which candidates the feature appears.
  • The detection building block is used to detect some pre-defined class (for example—face detection, or a detection of some specific person) inside a set of media entities. The detection building block is actually a special case of the classification building block, in which the two reference classes are the “Class” and the “Non-Class” (for example—“Face”—“Non Face”, “Speech”—“Non-Speech”), and the set of queries is all the sub-segments of the media for which we would like to apply the detection—for example, a set of sub-windows in a image.
• Since the classification process usually takes too much time to be applied on all sub-segments, a pre-filtering can be applied, choosing only a subset of the segments. For example, the cascade-based Viola & Jones method is widely used for object (e.g., face) detection, outputting a set of rectangles for which a face was detected. Yet, it also outputs a large set of erroneous detections, which can be further eliminated by the “Class”—“Non-Class” detection block described herein. See FIG. 10 for a schematic description of the detection building block.
  • The “Clustering” Building Block
• The clustering building block is used to cluster a set of media entities into groups. This building block uses the similarity building block described above to compute a similarity measure between pairs of media entities, and then uses standard clustering methods to cluster the resulting affinity matrix.
  • FIG. 8 illustrates a clustering block 80 according to an embodiment of the invention.
• The clustering block 80 includes multiple similarity blocks 60 that are fed with different media entities. During each iteration, the similarity blocks output a similarity score between two media entities. These similarity scores can be arranged to form a similarity/affinity matrix (or any other data structure) that is fed to a clustering algorithm 81 that clusters the media entities based on the similarity scores—clustering M1, . . . , MN 85.
  • In more details:
  • For each pair of media entities Mi and Mj, the similarity between them is computed using the similarity building block (described above).
  • A similarity matrix Aij is computed by Aij=similarity(Mi, Mj). This similarity matrix forms an Affinity matrix which is a common input for many clustering algorithms.
• Finally, clustering from a similarity or an affinity matrix is well known in the art (for example, agglomerative hierarchical clustering, spectral clustering (Andrew Y. Ng, Michael I. Jordan and Yair Weiss 2001), or simply merging all pairs for which similarity(Mi, Mj)>Threshold).
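• A sketch of the clustering building block: pairwise similarities are arranged into an affinity matrix and handed to a standard algorithm (spectral clustering here, one of the options named above). The shift to non-negative affinities and the number of clusters are assumptions; pairwise_similarity can be any two-argument callable, e.g. the similarity() sketch above.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def cluster_media(descriptor_sets, pairwise_similarity, n_clusters=5):
        """Cluster media entities from their pairwise similarity scores."""
        n = len(descriptor_sets)
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                A[i, j] = A[j, i] = pairwise_similarity(descriptor_sets[i], descriptor_sets[j])
        A = A - A.min()                   # shift so the precomputed affinities are non-negative
        labels = SpectralClustering(n_clusters=n_clusters,
                                    affinity="precomputed").fit_predict(A)
        return labels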
• The “SalienSee” Building Block
  • FIG. 9 illustrates a SalienSee block 90 according to an embodiment of the invention.
  • The SalienSee block tries to predict a portion of a media entity (It) based on previous media entity portions (I1 . . . It−1) that precede it.
• An input media entity 130 that includes multiple media entity portions is fed to the SalienSee block 90 one media entity portion after the other, so that the media entity portions can be evaluated in an iterative manner, one after the other.
• At a point of time t, a media entity portion (It) is evaluated based on the previous media entity portions (I1 . . . It−1) that precede it.
  • Query block 50 receives (as a query media entity) the media entity portion It and receives (as reference descriptor space representative) descriptors space representatives of the previous media entity portions.
• The query block 50 calculates a predictability score that may be regarded as a saliency (SalienSee) score 95. The media entity portions are also fed to a database 92. The contents of the database are processed by the pre-processing block 40.
• The proposed method uses a new measure called “SalienSee”. It measures the extent to which a point in time in the media is salient. This can also indicate that this point in time is “surprising”, “unusual” or “interesting”. We say that a media entity has high SalienSee if it cannot be predicted from some reference set of media entities. Let d be some query media entity, and let C denote the reference set of media entities. We define the SalienSee of d with respect to C as the negative log predictability of d given C (i.e., SalienSee(d | C) = −log PredictabilityScore(d | C)). Using this notation, we can say an event is unusual if its SalienSee measure given other events is high. For instance, the SalienSee measure can capture the moments in video in which the activity becomes boring (which is very common in personal video): for example, when someone starts jumping it might be interesting, but the next jumps become more and more boring as they are already very predictable from the past. Formally, let d(t1, t2) denote the time segment t1 < t < t2 of the video clip d. We say that the video d(t, t+δt) is ‘boring’ if its SalienSee measure with respect to the past is small, i.e., if SalienSee(d(t, t+δt) | d(t−T, t)) < S, where T and δt are some periods of time (e.g., T is a minute and δt is a second).
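• A minimal sketch of the SalienSee measure and the ‘boring’ test above; build_reference() and predictability_score() refer to the earlier sketches, and the threshold value is an arbitrary assumption.

    def saliensee(segment_descriptors, past_descriptors):
        """Negative log-predictability of the current segment given its recent past."""
        reps, ll = build_reference(past_descriptors)          # reference C = d(t - T, t)
        return -predictability_score(segment_descriptors, reps, ll)

    def is_boring(segment_descriptors, past_descriptors, threshold=5.0):
        """True when SalienSee(d(t, t + dt) | d(t - T, t)) falls below the threshold S."""
        return saliensee(segment_descriptors, past_descriptors) < threshold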
  • Implementing the personal video features above using the building blocks
• As shown in the previous sub-section, all the basic building blocks that are used by the proposed method can be directly implemented using the media predictability framework. Next, we show how these building blocks (e.g., Recognition, Clustering) can be used to realize the long list of features presented above, in order to enable a comprehensive solution for searching, browsing, editing and production of personal video.
  • Tagging: Automatic tagging of media entities is achieved by applying the Detection/Recognition building block several times. Some tags are extracted by solving a detection problem. For instance adding a tag “face” whenever the face detector detected a face in a video clip, or a tag “applause” when a sound of clapping hands is detected. Other types of tags are extracted by solving a recognition (or classification) problem. For instance, a specific person-tag is added whenever the face-recognition module classifies a detected face as a specific, previously known face. Another example is classifying a scene to be “living-room scene” out of several possibilities of pre-defined scene location types. The combination of many detection and recognition modules can produce a rich and deep tagging of the media assets, which is valuable for many of the features described below.
  • The method utilizes at least some of the following tagging: face poses (“frontal”, “profile” etc.), specific persons, facial expressions (“smile”, “frown” etc.), scene-types (“living-room”, “backyard”, “seaside” etc.), behavior type (“running”, “jumping”, “dancing”, “clapping-hands” etc.), speech detection, soundtrack segment beat classification (e.g. “fast-beat”, “medium-beat”, “slow beat”), voice classification (“speech”, “shout”, “giggle”, etc.). Note that the Media Predictability Framework enables a single unified method to handle recognition and detection problems from completely different domains (from behavior recognition to audio classification), simply by supplying examples from the recognized classes (whether video, image or audio examples).
  • ImportanSee: our “ImportanSee” measure is used to describe the importance or the amount of interest of a video clip for some application—for example, in a video summary we can display only the important parts while omitting the unimportant ones. In principle, this measure is subjective, and cannot be determined automatically. However, in many cases it can be estimated with no user intervention using attributes such as the attributes listed below:
• SalienSee—Very low saliency clips are usually boring and not important. Therefore, we can attribute low ImportanSee to those clips.
• Camera Motion: Camera motion is an important source of information on the intent of the cameraman. A panning of the camera usually indicates that the photographer is either scanning the scene (to get a panorama of the view) or just changing the focus of attention. Video segments that relate to the second option (a wandering camera) can be assigned with a low ImportanSee. A case where the camera is very shaky and not stabilized can also reduce the overall ImportanSee. The camera motion can be estimated using various common methods (e.g. (J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani 1992)).
  • Camera Zoom: A Camera zoom-in is usually a good indication for high importance (i.e., resulting in high ImportanSee). In many cases, the photographer zooms in on some object of interest to get a close-up view of the subject (or event).
  • Face close-up: Images or video clips in which faces appear in the scene are usually important. Specifically, a close-up on a face (in a frontal view) will usually indicate a clear intention of the photographer to capture the person (or persons) being photographed, and can serve as a strong cue for high importanSee.
  • Speech: Speech detection and recognition can help detect interesting periods in the video. Moreover, laughter (in general, or of a child) increases the ImportanSee measure of the corresponding video segment. An excited voice may also be used as a cue for high ImportanSee.
  • Facial expressions: Facial expressions are a good cue for high ImportanSee. For instance, moments when a person smiles or a child frowns or cries indicate high ImportanSee.
  • Given a visual entity d (for example, a video segment), the attributes above can be used to compute intermediate importance scores s1, . . . , sl (in our implementation, these scores can be negative). Such scores can be obtained by using direct measurements (e.g., the SalienSee measure of a clip), or by some binary predicate using the extracted meta-data (e.g., s=1 if the clip includes a 'large face closeup' tag and s=0 otherwise). The final ImportanSee measure is given as a weighted sum of all attribute scores, i.e., ImportanSee(d) = max(Σi αi si, 0), where αi are the relative weights of the attributes. A minimal sketch of this computation is given below.
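  • The following Python sketch illustrates the weighted-sum ImportanSee computation; the attribute names and weight values used in the example are assumptions made only for illustration.

```python
# Illustrative sketch of ImportanSee(d) = max(sum_i alpha_i * s_i, 0).

def importansee(attribute_scores, weights):
    """attribute_scores: intermediate scores s_i (may be negative), keyed by name.
    weights:            relative weights alpha_i, keyed by the same names."""
    total = sum(weights.get(name, 0.0) * s for name, s in attribute_scores.items())
    return max(total, 0.0)

# Hypothetical example: a clip with a large face close-up but a shaky camera.
print(importansee({"saliensee": 0.4, "face_closeup": 1.0, "shaky_camera": -1.0},
                  {"saliensee": 1.0, "face_closeup": 2.0, "shaky_camera": 0.5}))
```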
  • Table of contents: A table of (visual) contents is a hierarchical segmentation of visual entities (a video, or a set of videos and images). This feature can be implemented as a clustering of the various scenes in a video: for instance, sampling short video chunks (e.g., 1 second of video every 5 seconds) and clustering these media chunks (using the clustering building block) will produce a flat or hierarchical table of contents of the video. In addition to this segmentation, each segment is attached with either a textual or a visual short description (for example, a representative frame or a short clip). This representative can be selected randomly, or according to its ImportanSee measure, as sketched below.
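  • A minimal Python sketch of the sampling-and-clustering idea follows; scikit-learn's KMeans is used here only as a stand-in for the clustering building block described earlier, and the chunk format and representative-selection rule are assumptions.

```python
# Illustrative sketch: build a flat table of contents by clustering sampled
# chunks and picking one representative chunk per cluster.

import numpy as np
from sklearn.cluster import KMeans

def table_of_contents(chunk_descriptors, chunk_times, importansee_scores, n_sections=5):
    """chunk_descriptors: (N, D) array, one descriptor per sampled chunk.
    chunk_times:        (N,) array of chunk start times in seconds.
    importansee_scores: (N,) array of per-chunk ImportanSee values."""
    labels = KMeans(n_clusters=n_sections, n_init=10).fit_predict(chunk_descriptors)
    sections = []
    for c in range(n_sections):
        members = np.flatnonzero(labels == c)
        rep = members[np.argmax(importansee_scores[members])]  # best representative
        sections.append({"start": float(chunk_times[members.min()]),
                         "end": float(chunk_times[members.max()]),
                         "representative": float(chunk_times[rep])})
    return sorted(sections, key=lambda s: s["start"])
```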
  • Intelligent preview and thumbnails: This is a very short (e.g., 5-10 seconds long) summary of the most representative and important portions of the video. This feature can be implemented by simply selecting the time segments of the video with the maximal ImportanSee.
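  • A minimal sketch of this selection, assuming each segment carries a pre-computed ImportanSee value and using a simple greedy fill of the time budget:

```python
# Illustrative sketch: keep the highest-ImportanSee segments that fit into a
# short preview budget, then play them in their original order.

def intelligent_preview(segments, budget_seconds=8.0):
    """segments: list of dicts {"start": float, "end": float, "importansee": float}."""
    chosen, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s["importansee"], reverse=True):
        length = seg["end"] - seg["start"]
        if used + length <= budget_seconds:
            chosen.append(seg)
            used += length
    return sorted(chosen, key=lambda s: s["start"])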
  • Video links and Associative browsing: This feature facilitates video and image links, which are based on audio-visual and semantic similarity. It can be implemented as a combination of the Tagging feature and the similarity building block: the similarity building block is used to quantify the direct audio-visual similarity between images and videos, while the Tagging feature is used to quantify the semantic association between media entities—for instance, two videos of birthday parties, two videos of dogs, etc. To quantify the semantic similarity, various simple distances between the tag lists of each media entity can be used, such as the number of mutual tags or a weighted sum of the mutual tags, which emphasizes some tags over others. To quantify the overall similarity, a (weighted) sum of the semantic and audio-visual similarities can be used. Links between media entities can be formed for pairs of entities with a high enough overall similarity, as sketched below.
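  • A small Python sketch of this combination follows; the tag weights, the relative weighting of the two similarity terms and the link threshold are illustrative assumptions.

```python
# Illustrative sketch: combine tag-overlap (semantic) similarity with a
# pre-computed audio-visual similarity, and link entities above a threshold.

def overall_similarity(tags_a, tags_b, av_similarity, tag_weights=None, alpha=0.5):
    tag_weights = tag_weights or {}
    mutual = set(tags_a) & set(tags_b)
    semantic = sum(tag_weights.get(t, 1.0) for t in mutual)  # weighted mutual tags
    return alpha * semantic + (1.0 - alpha) * av_similarity

def should_link(tags_a, tags_b, av_similarity, threshold=1.5):
    return overall_similarity(tags_a, tags_b, av_similarity) >= threshold
```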
  • Content-based fast forward: In content-based fast-forward, interesting parts are displayed at normal speed (or with a small speed-up), while less interesting parts are skipped (or displayed very fast). This can be done automatically using the ImportanSee measure: the speed-up of each video segment d is determined as a function of its ImportanSee, i.e., speedup(d) = F(ImportanSee(d)). Two simple examples for F (see the sketch below) are F(x) = 1/x and the threshold function
  • F(x) = { 1 if x > S; ∞ if x ≤ S }
  • (which is equivalent to selecting the important video segments).
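  • Both example speed-up functions can be written as follows; this is a sketch only, and the default threshold value and the use of infinity to encode a skipped segment are assumptions.

```python
# Illustrative sketch of the two example speed-up functions F.

import math

def speedup_inverse(importansee):
    # F(x) = 1 / x: the more important the segment, the slower it is played.
    return 1.0 / max(importansee, 1e-6)

def speedup_threshold(importansee, s=0.5):
    # Threshold function: normal speed above S, skipped (infinite speed-up) below.
    return 1.0 if importansee > s else math.inf
```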
  • Automatic Video Editing & Synopsis: The main challenge in automatic video editing is to automatically select the most important sub-clips in the video, which best represent the content of the original video. This selection is an essential stage for most of the features that relate to automatic video editing: creating a video synopsis (or movie "trailer"), video production, intelligent thumbnails, etc. This task is best served by the ImportanSee building block (described above)—determining the importance of each sub-clip in the video and promoting the selection of the most important ones to be used in the edited video. Using the fact that we can compute the ImportanSee measure on any video sub-clip, we define a video editing score for an editing selection of clips c1, . . . , cn from a video v: score(c1, . . . , cn) = Σi ImportanSee(ci).
  • Thus we can pose the problem of automatic video editing as an optimization of the editing score above given some constraints (e.g., that the total length of all selected sub-clips is no longer than one minute). This is a highly non-continuous function and is best optimized using stochastic optimization techniques (e.g., Simulated Annealing, Genetic Algorithms), where the score function evaluates the quality of a candidate selection, and random selection and mutation (e.g., slightly changing clip starting and ending points) enable exploration of the problem space during the optimization process; a sketch follows.
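  • The following is a hedged Python sketch of such an optimization using simple simulated annealing; the mutation operator, the constraint handling and the cooling schedule are illustrative assumptions rather than the disclosed procedure.

```python
# Illustrative sketch: simulated annealing over clip selections, maximizing
# score(c1..cn) = sum_i ImportanSee(ci) under a total-length constraint.

import math
import random

def editing_score(clips, importansee):
    return sum(importansee(c) for c in clips)

def optimize_selection(initial_clips, importansee, mutate, total_length,
                       max_length=60.0, iterations=10000, t0=1.0):
    """importansee:  callable(clip) -> float
    mutate:       callable(selection) -> new candidate selection
    total_length: callable(selection) -> total duration in seconds"""
    current = list(initial_clips)
    best = list(initial_clips)
    for i in range(iterations):
        temperature = t0 * (1.0 - i / iterations) + 1e-9
        candidate = mutate(current)            # e.g. shift a clip's start/end point
        if total_length(candidate) > max_length:
            continue                           # violates the length constraint
        delta = editing_score(candidate, importansee) - editing_score(current, importansee)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
            if editing_score(current, importansee) > editing_score(best, importansee):
                best = list(current)
    return best
```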
  • System
  • FIG. 12 illustrates a system and its environment according to an embodiment of the invention. The system implements any of the methods described above to provide a comprehensive solution for browsing, searching and sharing of personal video.
  • The system has various components which reside on several sites. The related sites and the components on them are described next.
  • User Computer 20—The user computer (Desktop, Laptop, Tablet, Media-Center, Pocket PC, Smartphone, etc.) may include two databases 21 and 23, a content analysis engine 22 and a user interface application 24.
  • The user computer can store a large amount of visual data in general locations such as the 'My Video' and 'My Pictures' directories in Microsoft Windows Operating Systems. Most of the data in these locations is raw, yet personal.
  • The content analysis engine 22 may run in the background (optionally only during the computer's idle time) or upon user request. It analyzes the user's visual data (videos and pictures) and extracts meta-data using a work queue.
  • The work queue is filled by the content analysis engine 22 as well as by user selection (a user can insert any video or image at the top of the queue).
  • While the original video and images of the user may remain intact, the content analysis engine 22 may use the private Meta-Data DB 23 to store the extracted meta-data and reuse this meta-data for its own analysis (e.g., extracted visual tags are stored there for future automatic tagging).
  • In a different embodiment, the content analysis engine 22 is not software installed on the user computer 20, but rather an internet browser plug-in or a software component (e.g., ActiveX) which enables the content analysis engine 22 to run without a full software installation (only a plug-in installation). In another embodiment of this system, there is no content analysis engine on the 'User Computer'. Instead, the user can make use of the content analysis server software 12, which resides on the interaction server 10, as a service.
  • The user interface application 24 lets the user apply a sub-set of the method capabilities discussed above, thus enabling browsing, searching and sharing of personal video. The sub-set depends on the type of client, license and computer. In one embodiment, this is a standalone client installed on the user computer. In another embodiment, this is a web application which uses an internet browser for running the user interface, enabling it to run from any internet browser without installing software.
  • Interaction Server
  • The interaction server 10 hosts several servers which enable users to share personal video and images and broadcast them at various internet locations by embedding them. The 'User Profile' 18 contains various information about the user, such as personal details, a list of accounts on various internet services, a list of friends and family members, and usage statistics. The 'Public Data+Meta-Data DB' 17 contains data that the user selected to share from the 'User Computer': relevant meta-data and also video clips, images, etc. Sharing can be limited to various groups—family, friends, everyone, etc. The database is also responsible for initiating synchronization with connected 'User Computers' and mobile appliances. The 'Content Analysis Server' 12 is a more powerful version of the content analysis engine on the user computer 20, which makes it possible to process the large amount of visual data being uploaded to the site. This enables the user to process video even from a computer that does not have the content analysis engine installed (i.e., SaaS—Software as a Service).
  • The 'Video Platform Server' 19 performs the actual streaming and interaction with users and visitors that view video and images stored on the 'Interaction Server'. It contains the 'Streaming' module 194, which is responsible for the delivery of the video on time and with the right quality. The 'Interaction' module 192 is responsible for interpreting user requests (e.g., a press on a table-of-contents element) and communicating them to the 'Streaming' server or the 'Local Player'. The 'Analytics' module 193 is responsible for recording user behavior and responses for each video and each advertisement displayed on it (e.g., the number of times a video was watched, the number of skips, the number of times an ad was watched to its end). The 'Ad-Logic' module 191 uses information from the 'Analytics' module to choose the best strategy for selecting an ad for a specific video and user and how to display it. This information is synchronized in real time with the 'Local Player'. The 'Ad-Logic' module can instruct the 'Local Player' to display an ad in various forms, including: pre-roll, post-roll, banners, floating ads, textual ads, bubble ads, and ads embedded as visual objects using the extracted video meta-data (e.g., adding a Coca-Cola bottle on a table).
  • Internet Locations
  • Users and visitors can view video and images which users decided to share at various 'Internet Locations' 40 that may include social networks, email services, blogs, MySpace, Gmail, Drupal, Facebook and the like. The actual viewing of video is performed by an embedded player which can be based on various platforms such as Adobe Flash, Microsoft Silverlight, HTML5, etc. The player can be embedded either directly or using a local application (e.g., a Facebook application) in various internet locations including: Social Networks (e.g., Facebook, MySpace), email messages, homepages, sharing sites (e.g., Flickr, Picasa), blogging sites and platforms (e.g., Wordpress, Blogger) and Content Management Systems (e.g., Drupal, Wikimedia). Alternatively to embedding a 'Local Player', the user can use an internet link to a dedicated video page on the 'Interaction Server'.
  • Mobile Networks
  • Users can view and synchronize video via mobile appliances (e.g., cell phones) using the cellular networks 50 or internet networks 40. In cases where the mobile appliance is computationally strong enough (e.g., a Pocket-PC or Smartphone), it can be regarded as a 'User Computer'. In other cases it can use a 'Mobile Application' which enables viewing media from the 'Interaction Server' as well as uploading raw media from the mobile appliance. In this manner the 'Mobile Application' can use the 'Content Analysis Server' on the 'Interaction Server' to produce and share video even for appliances with low computational power. Moreover, the 'Interaction Server' can automatically synchronize uploaded content with other connected 'User Computers'.
  • Movie Production
  • Users can select to send automatically produced media for further, professional production by human experts. The system proceeds by sending the relevant raw video, the extracted meta-data and the automatically produced video to a professional producer 70 (via the internet or via a delivery service using DVDs, etc.). After the professional editing is finished, the user receives the final product (e.g., a produced DVD) via mail or delivery.
  • Other Electronic Appliances
  • In other embodiments, the system is implemented on 'Other Electronic Appliances' which do not utilize general-purpose CPUs or lack sufficient computational power. In these cases, parts of the software modules described for the user computer are implemented in embedded form (ASIC, FPGA, DSP, etc.).
  • FIG. 13 illustrates method 1300 according to an embodiment of the invention. Method 1300 is for determining a predictability of a media entity portion.
  • Method 1300 starts by stage 1310 of receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities.
  • Stage 1310 is followed by stage 1320 of calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives.
  • Stage 1320 may be followed by stage 1330 of responding to the predictability score.
  • Stages 1310-1330 can be repeated multiple times on multiple media entity portions.
  • Stage 1320 may include at least one of the following: (a) calculating distances between descriptors of the media entity and the descriptor space representatives; (b) calculating a weighted sum of probability estimations of the descriptor space representatives, wherein weights applied for the weighted sum are determined according to distances between descriptors of the media entity portion and descriptor space representatives; (c) generating the probability estimations given the reference media descriptors; wherein the generating comprises calculating, for each descriptor space representative, a Parzen estimation of a probability of the descriptor space representative given the reference media descriptors.
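  • A minimal numerical sketch of options (a)-(c) above is given below; the Gaussian kernel, its bandwidth and the use of a mean log-probability as the final score are assumptions made only for illustration.

```python
# Illustrative sketch of stage 1320: Parzen estimates of representative
# probabilities given reference descriptors, then a distance-weighted sum of
# those estimates over the query portion's descriptors.

import numpy as np

def parzen_probabilities(representatives, reference_descriptors, bandwidth=1.0):
    """P(representative | reference descriptors) via a Gaussian Parzen estimate.

    representatives:       (R, D) array of descriptor space representatives.
    reference_descriptors: (M, D) array of reference media descriptors."""
    d2 = ((representatives[:, None, :] - reference_descriptors[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)

def predictability_score(portion_descriptors, representatives, rep_probabilities,
                         bandwidth=1.0):
    """Distance-weighted sum of representative probabilities, averaged in log space."""
    d2 = ((portion_descriptors[:, None, :] - representatives[None, :, :]) ** 2).sum(-1)
    weights = np.exp(-d2 / (2 * bandwidth ** 2))
    weights /= weights.sum(axis=1, keepdims=True) + 1e-12
    per_descriptor = weights @ rep_probabilities
    return float(np.log(per_descriptor + 1e-12).mean())
```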
  • According to an embodiment of the invention, method 1300 may be applied to different portions of a media entity in order to locate media portions of interest. Thus, stage 1320 may include calculating the predictability of the media entity portion based on reference media descriptors that represent media entity portions that precede the media entity portion and belong to the same media entity as the media entity portion. Repeating stages 1310 and 1320 on multiple portions of the media entity can result in calculating the predictability of multiple media entity portions of the media entity and detecting media entity portions of interest. Stage 1330 may include generating a representation of the media entity from the media entity portions of interest.
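  • As a sketch only, reusing the functions from the previous example (an assumption), portions that are poorly predicted by the portions preceding them can be flagged as portions of interest; the threshold value is arbitrary.

```python
# Illustrative sketch: score each portion against the descriptors of the
# portions that precede it in the same media entity; keep surprising portions.

import numpy as np

def portions_of_interest(portion_descriptor_list, representatives, threshold=-5.0):
    """portion_descriptor_list: list of (P_i, D) descriptor arrays, in time order."""
    interesting = []
    for i in range(1, len(portion_descriptor_list)):
        past = np.concatenate(portion_descriptor_list[:i])
        rep_probs = parzen_probabilities(representatives, past)
        score = predictability_score(portion_descriptor_list[i], representatives, rep_probs)
        if score < threshold:   # hard to predict from the past -> interesting
            interesting.append(i)
    return interesting
```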
  • According to an embodiment of the invention, the importance of a media entity portion can be determined based on additional factors. Thus, stage 1320 can be augmented to include defining a media entity portion as a media entity portion of interest based on the predictability of the media entity portion and on at least one out of a detection of a camera motion, a detection of a camera zoom or a detection of a face close-up.
  • FIG. 14 illustrates method 1400 according to an embodiment of the invention. Method 1400 is for evaluating a relationship between a first media entity and a second media entity.
  • Method 1400 starts by stage 1410 of determining a predictability of the first media entity given the second media entity based on (a) probability estimations of descriptor space representatives given second media entity descriptors, wherein the descriptor space representatives are representative of a set of media entities and (b) relationships between second media entity descriptors and descriptors of the first media entity.
  • Stage 1410 is followed by stage 1420 of determining a predictability of the second media entity given the first media entity based on (a) probability estimations of descriptor space representatives given first media entity descriptors, and (b) the relationships between first media entity descriptors and descriptors of the second media entity.
  • Stage 1420 is followed by stage 1430 of evaluating a similarity value between the first media entity and the second media entity based on the predictability of the first media entity given the second media entity and the predictability of the second media entity given the first media entity.
  • Method 1400 may be repeated multiple times, on multiple media entity portions. For example, it may include evaluating the relationships between multiple first media entities and multiple second media entities based on a predictability of each first media entity given the multiple second media entities and a predictability of each second media entity given the first media entity.
  • Method 1400 can be used for clustering—by evaluating the similarity value of a media entity to a cluster of media entities. Thus, method 1400 can include clustering first and second media entities based on the relationships between the multiple first media entities and the multiple second media entities.
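  • A short sketch of the symmetric similarity of method 1400 is given below; it reuses the earlier sketch functions and averages the two directional predictabilities, both of which are assumptions.

```python
# Illustrative sketch: similarity of two media entities as the average of the
# predictability of each entity given the other.

def symmetric_similarity(descriptors_a, descriptors_b, representatives):
    a_given_b = predictability_score(
        descriptors_a, representatives,
        parzen_probabilities(representatives, descriptors_b))
    b_given_a = predictability_score(
        descriptors_b, representatives,
        parzen_probabilities(representatives, descriptors_a))
    return 0.5 * (a_given_b + b_given_a)
```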
  • FIG. 15 illustrates method 1500 according to an embodiment of the invention. Method 1500 is for classifying media entities.
  • Method 1500 starts by stage 1510 of receiving or generating (a) media class descriptors for each media entity class out of a set of media entity classes, and (b) probability estimations of descriptor space representatives given each of the media entity classes; wherein the descriptor space representatives are representative of a set of media entities.
  • Stage 1510 is followed by stage 1520 of calculating, for each pair of media entity and media class, a predictability score based on (a) the probability estimations of the descriptor space representatives given the media class descriptors of the media class, and (b) relationships between descriptors of the media entity and the descriptor space representatives.
  • Stage 1520 is followed by stage 1530 of classifying each media entity based on predictability scores of the media entity and each media class.
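  • A minimal sketch of stages 1520-1530, under the same assumptions as the earlier predictability sketch, where the entity is assigned to the class with the highest predictability score:

```python
# Illustrative sketch: classify a media entity by its predictability score
# against each media class.

def classify(entity_descriptors, representatives, class_descriptor_sets):
    """class_descriptor_sets: dict mapping class name -> (M, D) descriptor array."""
    scores = {}
    for name, class_descriptors in class_descriptor_sets.items():
        rep_probs = parzen_probabilities(representatives, class_descriptors)
        scores[name] = predictability_score(entity_descriptors, representatives, rep_probs)
    best_class = max(scores, key=scores.get)
    return best_class, scores
```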
  • FIG. 16 illustrates method 1600 according to an embodiment of the invention. Method 1600 is for searching for a feature in a media entity.
  • Method 1600 starts by stage 1610 of receiving or generating first media class descriptors and second media class descriptors; wherein the first media class descriptors represent a first media class of media entities that comprises a first media feature; wherein the second media class descriptors represent a second media class of media entities that does not comprise the first media feature.
  • Stage 1610 is followed by stage 1620 of calculating a first media class predictability score based on (a) probability estimations of descriptor space representatives given the first media class descriptors, and (b) relationships between the first media class descriptors and descriptors of the media entity.
  • Stage 1620 is followed by stage 1630 of calculating a second media class predictability score based on (a) probability estimations of descriptor space representatives given the second media class descriptors, and (b) relationships between the second media class descriptors and descriptors of the media entity.
  • Stage 1630 is followed by stage 1640 of determining whether the media entity comprises the feature based on the first media class predictability score and the second media class predictability score.
  • Stage 1640 can be followed by stage 1650 of responding to the determination. For example, stage 1650 may include detecting media entities of interest in response to a detection of the feature.
  • Method 1600 can be repeated in order to detect a feature in multiple media entities by repeating stages 1610-1650 for each media entity.
  • The feature can be a face but this is not necessarily so.
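  • A sketch of the two-class decision of method 1600 follows; the decision margin and the reuse of the earlier sketch functions are assumptions.

```python
# Illustrative sketch: the entity is deemed to contain the feature when its
# predictability given the "with feature" class exceeds its predictability
# given the "without feature" class by some margin.

def contains_feature(entity_descriptors, representatives,
                     with_feature_descriptors, without_feature_descriptors,
                     margin=0.0):
    p_with = predictability_score(
        entity_descriptors, representatives,
        parzen_probabilities(representatives, with_feature_descriptors))
    p_without = predictability_score(
        entity_descriptors, representatives,
        parzen_probabilities(representatives, without_feature_descriptors))
    return p_with - p_without > margin
```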
  • FIG. 17 illustrates method 1700 according to an embodiment of the invention. Method 1700 is for processing media streams.
  • Method 1700 starts by stage 1710 of applying a probabilistic non-parametric process on the media stream to locate media portions of interest. Non-limiting examples of such a probabilistic non-parametric process are provided in the specification.
  • A non-parametric probability estimation is an estimation that does not rely on a predefined (or known-in-advance) probability distribution, but derives probability estimations directly from the (sample) data.
  • Stage 1710 may include detecting media portions of interest in response to at least one additional parameter out of: (a) a detection of a change of focal length of a camera that acquires the media; (b) a detection of a motion of the camera; (c) a detection of a face; (d) a detection of predefined sounds; (e) a detection of laughter; (f) a detection of predefined facial expressions; (g) a detection of an excited voice; and (h) a detection of predefined behavior.
  • Stage 1710 is followed by stage 1720 of generating metadata indicative of the media portions of interest.
  • Stage 1720 may include adding tags to the media portions of interest.
  • Stage 1720 is followed by stage 1730 of responding to the metadata.
  • Stage 1730 may include at least one of the following: (a) generating a representation of the media stream from the media portions of interest; (b) generating a trick play media stream that comprises the media portions of interest; (c) finding media portions of interest that are similar to each other; (d) tagging media portions of interest that are similar to each other; and (e) editing the media stream based on the media portions of interest.
  • Any one of the above-mentioned methods can be executed by a computer program product that includes a non-transitory computer readable medium.
  • While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

Claims (20)

1. A method comprising:
obtaining at least one user-captured footage which includes at least one video footage;
automatically generating, based on a decision function, a sequence of selected media portions, wherein at least one selected media portion is a subset of the at least one video footage; and
allowing a user to apply modification operations to said sequence of selected media portions, wherein said modification operations comprise at least one of:
changing at least a starting point or an end point of at least one selected media portion of the video footage; removing at least one of said selected media portions; and
automatically generating, based on the user modifications, an audiovisual work comprising the selected media portions.
2. The method according to claim 1, wherein the at least one user-captured footage further comprises a plurality of still images, and wherein the selected media portions further comprise one of the plurality of still images.
3. The method according to claim 1, wherein the audiovisual work further comprises at least one transition between two of the selected media portions.
4. The method according to claim 1, further comprising automatically generating, prior to the modification operations by the user, an audiovisual work comprising the selected media portions.
5. The method according to claim 1, wherein said modification operations further comprise adding at least one additional selected media portion.
6. The method according to claim 1, wherein said modification operations further comprise adding at least one additional video media portion being a portion of said at least one user captured video footage.
7. The method according to claim 1, wherein said additional media portion is selected by a user from a second set of media portions, each media portion being one of the plurality of still images or a portion of the at least one video footage; and wherein this second set is automatically generated based on a decision function.
8. A method comprising:
obtaining at least one user-captured footage;
automatically generating, based on a decision function, a sequence of selected media portions; and
allowing a user to apply modification operations to said sequence of selected media portions, wherein said modification operations comprise selecting a plurality of faces to be emphasized; and
automatically generating, based on the user modifications, an edited audiovisual work that emphasizes the selected faces.
9. The method according to claim 8, further comprising automatically generating, prior to the modification operations by the user, an initial audiovisual work comprising the selected media portions, and wherein the emphasized faces appear more often in the edited audiovisual work than in the initial audiovisual work.
10. A system comprising:
a computer processor;
an input device configured to obtain at least one user-captured footage which includes at least one video footage; and
a display device,
wherein the computer processor is configured to automatically generate, based on a decision function, a sequence of selected media portions, wherein at least one of the selected media portions is a subset of the at least one video footage;
wherein the input device is configured to allow a user to apply modification operations to said sequence of selected media portions presented over the display device, wherein said modification operations comprise: changing at least a starting point or an end point of at least one selected media portion of the video footage; and
removing at least one of said selected media portions, and wherein the computer processor is configured to automatically generate, based on the user modifications, an audiovisual work comprising the selected media portions.
11. The system according to claim 10, wherein the one user-captured footage further comprises a plurality of still images, and wherein the selected media portions further comprise one of the plurality of still images.
12. The system according to claim 10, wherein the audiovisual work further comprises at least one transition between two of the selected media portions.
13. The system according to claim 10, further comprising automatically generating, prior to the modification operations by the user, an audiovisual work comprising the selected media portions.
14. The system according to claim 10, wherein said modification operations further comprise adding at least one additional selected media portion.
15. The system according to claim 10, wherein said modification operations further comprise adding at least one additional video media portion being a portion of said at least one user captured video footage.
16. The system according to claim 10, wherein said additional media portion is selected by a user from a second set of media portions, each media portion being one of the plurality of still images or a portion of the at least one video footage; and wherein this second set is automatically generated based on a decision function.
17. A system comprising:
a computer processor;
an input device configured to obtain at least one user-captured footage; and
a display device,
wherein the computer processor is configured to automatically generate, based on a decision function, a first sequence of selected media portions,
wherein the input device is configured to allow a user to apply modification operations to said first sequence of selected media portions presented over the display device, wherein said modification operations comprise selecting a plurality of faces to be emphasized, and
wherein the computer processor is further configured to automatically generate, based on the user modifications, an edited audiovisual work that emphasizes the selected faces.
18. The system according to claim 17, wherein the computer processor is further configured to automatically generate, prior to the modification operations by the user, an initial audiovisual work comprising the selected media portions; and wherein the emphasized faces appear more often in the edited audiovisual work than in the initial audiovisual work.
19. A computer program product comprising:
a non-transitory computer readable storage medium having computer readable program embodied therewith, the computer readable program comprising:
computer readable program configured to obtain at least one user-captured footage which includes at least one video footage;
computer readable program configured to automatically generate, based on a decision function, a sequence of selected media portions, wherein at least one selected media portion is a subset of the at least one video footage;
computer readable program configured to allow a user to apply modification operations to said sequence of selected media portions presented over the display device, wherein said modification operations comprise: changing at least a starting point or an end point of at least one selected media portion of the video footage; and removing at least one of said selected media portions; and
computer readable program configured to automatically generate, based on the user modifications, an audiovisual work comprising the selected media portions.
20. A computer program product comprising:
a non-transitory computer readable storage medium having computer readable program embodied therewith, the computer readable program comprising:
computer readable program configured to obtain at least one user-captured footage;
computer readable program configured to automatically generate, based on a decision function, a first sequence of selected media portions,
computer readable program configured to allow a user to apply modification operations to said first sequence of selected media portions presented over the display device, wherein said modification operations comprise selecting a plurality of faces to be emphasized; and
computer readable program configured to automatically generate, based on the user modifications, an edited audiovisual work that emphasizes the selected faces.
US14/249,565 2010-03-08 2014-04-10 Method and system for browsing, searching and sharing of personal video by a non-parametric approach Abandoned US20140223309A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/249,565 US20140223309A1 (en) 2010-03-08 2014-04-10 Method and system for browsing, searching and sharing of personal video by a non-parametric approach
US17/135,273 US20210117471A1 (en) 2010-03-08 2020-12-28 Method and system for automatically generating a video from an online product representation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US31152410P 2010-03-08 2010-03-08
US13/041,457 US9189137B2 (en) 2010-03-08 2011-03-07 Method and system for browsing, searching and sharing of personal video by a non-parametric approach
US14/249,565 US20140223309A1 (en) 2010-03-08 2014-04-10 Method and system for browsing, searching and sharing of personal video by a non-parametric approach

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/041,457 Continuation US9189137B2 (en) 2010-03-08 2011-03-07 Method and system for browsing, searching and sharing of personal video by a non-parametric approach

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/135,273 Continuation-In-Part US20210117471A1 (en) 2010-03-08 2020-12-28 Method and system for automatically generating a video from an online product representation

Publications (1)

Publication Number Publication Date
US20140223309A1 true US20140223309A1 (en) 2014-08-07

Family

ID=44532189

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/041,457 Active 2031-09-05 US9189137B2 (en) 2010-03-08 2011-03-07 Method and system for browsing, searching and sharing of personal video by a non-parametric approach
US14/249,565 Abandoned US20140223309A1 (en) 2010-03-08 2014-04-10 Method and system for browsing, searching and sharing of personal video by a non-parametric approach

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/041,457 Active 2031-09-05 US9189137B2 (en) 2010-03-08 2011-03-07 Method and system for browsing, searching and sharing of personal video by a non-parametric approach

Country Status (1)

Country Link
US (2) US9189137B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120131005A1 (en) * 2010-11-19 2012-05-24 Microsoft Corporation File Kinship for Multimedia Data Tracking
US20150302028A1 (en) * 2012-12-20 2015-10-22 Alcatel Lucent Method and arrangement for image retrieval based on multiple images
WO2016161301A1 (en) * 2015-04-01 2016-10-06 Tiranoff Louise Video databases and methods for detection or diagnosis of neurodevelopment disorders

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9654447B2 (en) * 2006-08-29 2017-05-16 Digimarc Corporation Customized handling of copied content based on owner-specified similarity thresholds
EP2507743A2 (en) 2009-12-02 2012-10-10 QUALCOMM Incorporated Fast subspace projection of descriptor patches for image recognition
JP5570079B2 (en) * 2010-03-18 2014-08-13 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Data processing apparatus and data processing method
US9866922B2 (en) * 2010-03-31 2018-01-09 Thomson Licensing Trick playback of video data
US9530073B2 (en) * 2010-04-20 2016-12-27 Qualcomm Incorporated Efficient descriptor extraction over multiple levels of an image scale space
US20120089429A1 (en) * 2010-05-13 2012-04-12 Accenture Global Services Limited. Systems and methods for gathering and analyzing social media data
US9208599B2 (en) * 2010-06-17 2015-12-08 Microsoft Technology Licensing, Llc Visual previews
US8712930B1 (en) 2010-08-09 2014-04-29 Google Inc. Encoding digital content based on models for predicting similarity between exemplars
KR101681176B1 (en) 2010-09-17 2016-11-30 톰슨 라이센싱 Method for semantics based trick mode play in video system
US20120177345A1 (en) * 2011-01-09 2012-07-12 Matthew Joe Trainer Automated Video Creation Techniques
US8515193B1 (en) 2011-04-08 2013-08-20 Google Inc. Image compression using exemplar dictionary based on hierarchical clustering
US8756627B2 (en) * 2012-04-19 2014-06-17 Jumpercut, Inc. Distributed video creation
US9208819B1 (en) * 2012-06-20 2015-12-08 Google Inc. Dynamic split-frame preview of video editing effects
US9569100B2 (en) 2012-07-22 2017-02-14 Magisto Ltd. Method and system for scribble based editing
EP2713290A3 (en) * 2012-09-28 2014-09-17 Orange System and method for screencast creation and abridging based on user web browsing
US9020325B2 (en) 2012-11-14 2015-04-28 Storyvine, LLC Storyboard-directed video production from shared and individualized assets
US9843623B2 (en) * 2013-05-28 2017-12-12 Qualcomm Incorporated Systems and methods for selecting media items
US20140365572A1 (en) * 2013-06-05 2014-12-11 Brabble TV.com LLC System and Method for Media-Centric and Monetizable Social Networking
US20150009363A1 (en) * 2013-07-08 2015-01-08 Htc Corporation Video tagging method
US20150026578A1 (en) * 2013-07-22 2015-01-22 Sightera Technologies Ltd. Method and system for integrating user generated media items with externally generated media items
US9436705B2 (en) 2013-09-17 2016-09-06 Google Technology Holdings LLC Grading images and video clips
US9928877B2 (en) 2013-11-11 2018-03-27 Magisto Ltd. Method and system for automatic generation of an animated message from one or more images
US10867635B2 (en) 2013-11-11 2020-12-15 Vimeo, Inc. Method and system for generation of a variant video production from an edited video production
US11042274B2 (en) * 2013-12-04 2021-06-22 Autodesk, Inc. Extracting demonstrations from in-situ video content
US9510030B2 (en) * 2014-02-13 2016-11-29 Piksel, Inc. Delivering media content to a content consuming user
US9571727B2 (en) 2014-05-21 2017-02-14 Google Technology Holdings LLC Enhanced image capture
US10158908B1 (en) 2015-09-13 2018-12-18 Amazon Technologies, Inc. Content aggregator
CN106776890B (en) 2016-11-29 2021-06-29 北京小米移动软件有限公司 Method and device for adjusting video playing progress
US10795952B2 (en) 2017-01-05 2020-10-06 Microsoft Technology Licensing, Llc Identification of documents based on location, usage patterns and content
CN109982105A (en) * 2017-12-27 2019-07-05 上海文广互动电视有限公司 Content retrieval system and method for broadcast platform
US10825481B2 (en) * 2018-05-16 2020-11-03 At&T Intellectual Property I, L.P. Video curation service for personal streaming
CN111436005B (en) 2019-01-15 2022-03-08 北京字节跳动网络技术有限公司 Method and apparatus for displaying image
US11500923B2 (en) * 2019-07-29 2022-11-15 Meta Platforms, Inc. Systems and methods for generating interactive music charts
CN111177071B (en) * 2019-12-12 2023-07-07 广州地理研究所 Picture downloading method and device of Flickr platform
CN111614912B (en) * 2020-05-26 2023-10-03 北京达佳互联信息技术有限公司 Video generation method, device, equipment and storage medium
US11210596B1 (en) 2020-11-06 2021-12-28 issuerPixel Inc. a Nevada C. Corp Self-building hierarchically indexed multimedia database

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060126093A1 (en) * 2004-12-09 2006-06-15 Fedorovskaya Elena A Method for automatically determining the acceptability of a digital image
US20060187305A1 (en) * 2002-07-01 2006-08-24 Trivedi Mohan M Digital processing of video images
US20070159533A1 (en) * 2005-12-22 2007-07-12 Fujifilm Corporation Image filing method, digital camera, image filing program and video recording player
US20070183741A1 (en) * 2005-04-20 2007-08-09 Videoegg, Inc. Browser based video editing
US20080080743A1 (en) * 2006-09-29 2008-04-03 Pittsburgh Pattern Recognition, Inc. Video retrieval system for human face content
US7383508B2 (en) * 2002-06-19 2008-06-03 Microsoft Corporation Computer user interface for interacting with video cliplets generated from digital video
US20100026842A1 (en) * 2008-08-04 2010-02-04 Sony Corporation Image processing device, imaging apparatus, image-processing method, and program
US20110142420A1 (en) * 2009-01-23 2011-06-16 Matthew Benjamin Singer Computer device, method, and graphical user interface for automating the digital tranformation, enhancement, and editing of personal and professional videos
US20120281969A1 (en) * 2011-05-03 2012-11-08 Wei Jiang Video summarization using audio and visual cues
US20130343727A1 (en) * 2010-03-08 2013-12-26 Alex Rav-Acha System and method for semi-automatic video editing
US8775480B2 (en) * 2011-01-28 2014-07-08 Apple Inc. Media clip management

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6594392B2 (en) 1999-05-17 2003-07-15 Intel Corporation Pattern recognition based on piecewise linear probability density function
US6895111B1 (en) 2000-05-26 2005-05-17 Kidsmart, L.L.C. Evaluating graphic image files for objectionable content
US20060015904A1 (en) 2000-09-08 2006-01-19 Dwight Marcus Method and apparatus for creation, distribution, assembly and verification of media
IL146597A0 (en) 2001-11-20 2002-08-14 Gordon Goren Method and system for creating meaningful summaries from interrelated sets of information
US6751348B2 (en) 2001-03-29 2004-06-15 Fotonation Holdings, Llc Automated detection of pornographic images
US6892193B2 (en) * 2001-05-10 2005-05-10 International Business Machines Corporation Method and apparatus for inducing classifiers for multimedia based on unified representation of features reflecting disparate modalities
US7821541B2 (en) 2002-04-05 2010-10-26 Bruno Delean Remote control apparatus using gesture recognition
US7596247B2 (en) 2003-11-14 2009-09-29 Fujifilm Corporation Method and apparatus for object recognition using probability models
US20050160258A1 (en) 2003-12-11 2005-07-21 Bioobservation Systems Limited Detecting objectionable content in displayed images
US7437005B2 (en) 2004-02-17 2008-10-14 Microsoft Corporation Rapid visual sorting of digital files and data
US20050219665A1 (en) 2004-03-01 2005-10-06 Fuji Photo Film Co., Ltd. Apparatus, method, and program for editing images
US20070223871A1 (en) * 2004-04-15 2007-09-27 Koninklijke Philips Electronic, N.V. Method of Generating a Content Item Having a Specific Emotional Influence on a User
US20070136671A1 (en) 2005-12-12 2007-06-14 Buhrke Eric R Method and system for directing attention during a conversation
US20080159624A1 (en) 2006-12-27 2008-07-03 Yahoo! Inc. Texture-based pornography detection
FR2911426A1 (en) 2007-01-15 2008-07-18 France Telecom MODIFICATION OF A SPEECH SIGNAL
EP2203850A1 (en) 2007-08-31 2010-07-07 International Business Machines Corporation Method for synchronizing data flows
US8358846B2 (en) 2007-11-13 2013-01-22 Symantec Corporation Scanning images for pornography
US8472753B2 (en) 2008-01-10 2013-06-25 Samsung Electronics Co., Ltd. Method and system of adaptive reformatting of digital image
CA2723204C (en) * 2008-07-02 2013-04-09 Lexisnexis Risk & Information Analytics Group, Inc. Statistical measure and calibration of search criteria where one or both of the search criteria and database is incomplete
EP2377310A4 (en) 2009-01-06 2013-01-16 Lg Electronics Inc Apparatus for processing images and method thereof
JP5247480B2 (en) * 2009-01-13 2013-07-24 キヤノン株式会社 Object identification device and object identification method
US20100226582A1 (en) 2009-03-03 2010-09-09 Jiebo Luo Assigning labels to images in a collection
US8725660B2 (en) * 2009-07-30 2014-05-13 Hewlett-Packard Development Company, L.P. Applying non-linear transformation of feature values for training a classifier
JP5424819B2 (en) 2009-11-04 2014-02-26 キヤノン株式会社 Image processing apparatus and image processing method
US8775424B2 (en) * 2010-01-26 2014-07-08 Xerox Corporation System for creative image navigation and exploration
US8625905B2 (en) 2011-01-28 2014-01-07 Raytheon Company Classification of target objects in motion

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7383508B2 (en) * 2002-06-19 2008-06-03 Microsoft Corporation Computer user interface for interacting with video cliplets generated from digital video
US20060187305A1 (en) * 2002-07-01 2006-08-24 Trivedi Mohan M Digital processing of video images
US20060126093A1 (en) * 2004-12-09 2006-06-15 Fedorovskaya Elena A Method for automatically determining the acceptability of a digital image
US20070183741A1 (en) * 2005-04-20 2007-08-09 Videoegg, Inc. Browser based video editing
US20070159533A1 (en) * 2005-12-22 2007-07-12 Fujifilm Corporation Image filing method, digital camera, image filing program and video recording player
US20080080743A1 (en) * 2006-09-29 2008-04-03 Pittsburgh Pattern Recognition, Inc. Video retrieval system for human face content
US20100026842A1 (en) * 2008-08-04 2010-02-04 Sony Corporation Image processing device, imaging apparatus, image-processing method, and program
US20110142420A1 (en) * 2009-01-23 2011-06-16 Matthew Benjamin Singer Computer device, method, and graphical user interface for automating the digital tranformation, enhancement, and editing of personal and professional videos
US20130343727A1 (en) * 2010-03-08 2013-12-26 Alex Rav-Acha System and method for semi-automatic video editing
US8775480B2 (en) * 2011-01-28 2014-07-08 Apple Inc. Media clip management
US20120281969A1 (en) * 2011-05-03 2012-11-08 Wei Jiang Video summarization using audio and visual cues

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120131005A1 (en) * 2010-11-19 2012-05-24 Microsoft Corporation File Kinship for Multimedia Data Tracking
US9449024B2 (en) * 2010-11-19 2016-09-20 Microsoft Technology Licensing, Llc File kinship for multimedia data tracking
US11144586B2 (en) 2010-11-19 2021-10-12 Microsoft Technology Licensing, Llc File kinship for multimedia data tracking
US20150302028A1 (en) * 2012-12-20 2015-10-22 Alcatel Lucent Method and arrangement for image retrieval based on multiple images
US9535929B2 (en) * 2012-12-20 2017-01-03 Alcatel Lucent Method and arrangement for image retrieval based on multiple images
WO2016161301A1 (en) * 2015-04-01 2016-10-06 Tiranoff Louise Video databases and methods for detection or diagnosis of neurodevelopment disorders

Also Published As

Publication number Publication date
US9189137B2 (en) 2015-11-17
US20110218997A1 (en) 2011-09-08

Similar Documents

Publication Publication Date Title
US9189137B2 (en) Method and system for browsing, searching and sharing of personal video by a non-parametric approach
US8948515B2 (en) Method and system for classifying one or more images
US9570107B2 (en) System and method for semi-automatic video editing
US9554111B2 (en) System and method for semi-automatic video editing
US10714145B2 (en) Systems and methods to associate multimedia tags with user comments and generate user modifiable snippets around a tag time for efficient storage and sharing of tagged items
US20210117471A1 (en) Method and system for automatically generating a video from an online product representation
JP5092000B2 (en) Video processing apparatus, method, and video processing system
US9870797B1 (en) Generating and providing different length versions of a video
JP5507386B2 (en) Generating video content from image sets
US10679063B2 (en) Recognizing salient video events through learning-based multimodal analysis of visual features and audio-based analytics
US9715731B2 (en) Selecting a high valence representative image
Truong et al. Video abstraction: A systematic review and classification
KR102290419B1 (en) Method and Appratus For Creating Photo Story based on Visual Context Analysis of Digital Contents
CN101300567B (en) Method for media sharing and authoring on the web
US9270964B1 (en) Extracting audio components of a portion of video to facilitate editing audio of the video
WO2023011094A1 (en) Video editing method and apparatus, electronic device, and storage medium
JP2011217209A (en) Electronic apparatus, content recommendation method, and program
JP2016500218A (en) Join video to integrated video
Sun et al. Ranking highlights in personal videos by analyzing edited videos
Nixon et al. Data-driven personalisation of television content: a survey
WO2014103374A1 (en) Information management device, server and control method
Valdés et al. On-line video abstract generation of multimedia news
Niu et al. Real-time generation of personalized home video summaries on mobile devices
CN115917647A (en) Automatic non-linear editing style transfer
US20140189769A1 (en) Information management device, server, and control method

Legal Events

Date Code Title Description
AS Assignment

Owner name: SIGHTERA TECHNOLOGIES LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOIMAN, OREN;RAV-ACHA, ALEXANDER;REEL/FRAME:033082/0543

Effective date: 20140525

AS Assignment

Owner name: MAGISTO LTD., ISRAEL

Free format text: CHANGE OF NAME;ASSIGNOR:SIGHTERA TECHNOLOGIES LTD.;REEL/FRAME:036181/0194

Effective date: 20140922

AS Assignment

Owner name: KREOS CAPITAL V (EXPERT FUND) L.P., JERSEY

Free format text: SECURITY INTEREST;ASSIGNOR:MAGISTO LTD.;REEL/FRAME:041151/0899

Effective date: 20170202

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

AS Assignment

Owner name: VIMEO, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MAGISTO LTD.;REEL/FRAME:051435/0430

Effective date: 20190523

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: KREOS CAPITAL V (EXPERT FUND) L.P., JERSEY

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE SERIAL NO. 15/374,023 SHOULD BE 15/012,875 PREVIOUSLY RECORDED ON REEL 041151 FRAME 0899. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST;ASSIGNOR:MAGISTO LTD.;REEL/FRAME:052497/0880

Effective date: 20170202

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

AS Assignment

Owner name: MAGISTO LTD. (NOW KNOWN AS NOLAN LEGACY LTD), ISRAEL

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:KREOS CAPITAL V (EXPERT FUND) LP;REEL/FRAME:053136/0297

Effective date: 20200625

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNOR:VIMEO, INC.;REEL/FRAME:055288/0371

Effective date: 20210212

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: VIMEO.COM, INC., NEW YORK

Free format text: CHANGE OF NAME;ASSIGNOR:VIMEO, INC.;REEL/FRAME:056754/0261

Effective date: 20210521

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: VIMEO.COM, INC. (FKA VIMEO, INC.), NEW YORK

Free format text: RELEASE OF PATENT SECURITY INTERESTS FILED FEBRUARY 12, 2021 AT REEL/FRAME 055288/0371;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:064193/0449

Effective date: 20230630