
CN109327713A - Method and device for generating media information - Google Patents

Method and device for generating media information

Info

Publication number
CN109327713A
CN109327713A (application CN201811287695.0A; granted as CN109327713B)
Authority
CN
China
Prior art keywords
video
play time
frame
segment data
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811287695.0A
Other languages
Chinese (zh)
Other versions
CN109327713B (en)
Inventor
王琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weimeng Chuangke Network Technology China Co Ltd
Original Assignee
Weimeng Chuangke Network Technology China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weimeng Chuangke Network Technology China Co Ltd filed Critical Weimeng Chuangke Network Technology China Co Ltd
Priority to CN201811287695.0A
Publication of CN109327713A
Application granted
Publication of CN109327713B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

An embodiment of the invention provides a method and device for generating media information. The method comprises: selecting, from a video to be processed, video segment data that represents the playing scene of the video to be processed; when the play time of the video segment data is greater than a time threshold, selecting multiple frame images from the video segment data according to a preset maximum play time interval between two frame images in any video, where the frame images selected at the preset maximum play time interval preserve the playing continuity of the video; and generating a new video from the multiple frame images to serve as the video cover of the video to be processed. This technical solution not only presents the main content of the video but also avoids the lifelessness of a static video cover, further helping users find videos they like to watch and giving them a better viewing experience.

Description

Method and device for generating media information
Technical field
The present invention relates to the field of computer technology, and in particular to a method and device for generating media information.
Background technique
In the mobile Internet era, short video has developed rapidly in recent years. With the rise of short video, video image processing algorithms centered on short video have likewise attracted the attention of researchers. Common static cover-frame algorithms for short video generally extract a single frame image to represent the whole short video, and sometimes apply image processing to the cover frame to make it more representative and interesting. Although static cover-frame methods have their merits, a single frame can hardly present and display the content of a short video.
Summary of the invention
Embodiments of the present invention provide a method and device for generating media information, which can present the main playing content of a video more vividly and give users a better viewing experience.
In one aspect, an embodiment of the invention provides a method for generating media information, applied at a server, comprising: selecting, from a video to be processed, video segment data that represents the playing scene of the video to be processed; when the play time of the video segment data is greater than a time threshold, selecting multiple frame images from the video segment data according to a preset maximum play time interval between two frame images in any video, where the frame images selected at the preset maximum play time interval preserve the playing continuity of the video; and generating a new video from the multiple frame images to serve as the video cover of the video to be processed.
In another aspect, an embodiment of the invention provides a device for generating media information, applied at a server, comprising: a choosing unit, which selects, from a video to be processed, video segment data that represents the playing scene of the video to be processed; a first selecting unit, which, when the play time of the video segment data is greater than a time threshold, selects multiple frame images from the video segment data according to a preset maximum play time interval between two frame images in any video, where the frame images selected at the preset maximum play time interval preserve the playing continuity of the video; and a generating unit, which generates a new video from the multiple frame images to serve as the video cover of the video to be processed.
The above technical solution has the following beneficial effects: because the play time of the video segment data representing the playing scene of the video to be processed exceeds the time threshold, multiple frame images are selected from the video segment data according to the preset maximum play time interval between two frame images in any video, and a new video is generated from them as the cover of the video to be processed. This not only presents the main content of the video but also avoids the lifelessness of a static video cover, further helps users find videos they like to watch, and gives them a better viewing experience.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for generating media information in an embodiment of the present invention;
Fig. 2 is a flow chart of selecting one sub-video playing scene in an embodiment of the present invention;
Fig. 3 is a structural diagram of a device for generating media information in another embodiment of the present invention;
Fig. 4 is a structural diagram of the choosing unit in another preferred embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, a method for generating media information in an embodiment of the present invention, applied at a server, comprises:
101: selecting, from a video to be processed, video segment data that represents the playing scene of the video to be processed.
102: when the play time of the video segment data is greater than a time threshold, selecting multiple frame images from the video segment data according to a preset maximum play time interval between two frame images in any video; the frame images selected at the preset maximum play time interval preserve the playing continuity of the video.
103: generating a new video from the multiple frame images to serve as the video cover of the video to be processed.
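As an illustration of steps 102 and 103, the following Python sketch selects the cover frames from an already-chosen video segment (step 101, scene selection, is sketched separately below). The concrete numbers (60 s threshold, 120-frame budget, 0.5 s interval) are taken from the application example later in the description; the function and parameter names are hypothetical and not part of the patent.

```python
def choose_cover_frames(segment_frames, fps,
                        time_threshold=60.0,   # example value from the description
                        frame_budget=120,      # preset number of frame images
                        max_interval_s=0.5):   # preset maximum play time interval
    """Steps 102/103: pick the frames that will form the new cover video."""
    play_time = len(segment_frames) / fps
    if play_time > time_threshold:
        # Keep adjacent chosen frames at most max_interval_s apart so the
        # cover still plays continuously (step 102).
        step = max(1, int(max_interval_s * fps))
        chosen = segment_frames[step - 1::step][:frame_budget]
    else:
        # Shorter segment: spread the fixed frame budget uniformly over it
        # (the "less than time threshold" variant described further below).
        # Integer step is an approximation of the fractional step in the example.
        step = max(1, len(segment_frames) // frame_budget)
        chosen = segment_frames[step - 1::step][:frame_budget]
    # Step 103: the chosen frames are assembled into the new cover video.
    return chosen
```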
Preferably, selecting, from the video to be processed, the video segment data that represents the playing scene comprises: obtaining the image feature data of each frame image in the video to be processed; dividing the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images; and choosing one sub-video playing scene from them as the video segment data.
Preferably, as shown in Fig. 2, dividing the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images and choosing one sub-video playing scene as the video segment data comprises: 201: determining the similarity of the image feature data of adjacent frame images; when the similarity is less than a similarity threshold, determining that the two adjacent frame images belong to different playing scenes and, following the playing order of the video, selecting the earlier frame of each such pair of adjacent frames as a video node; 202: cutting the video to be processed into multiple sub-video playing scenes at the video nodes; 203: choosing, from the multiple sub-video playing scenes, the sub-video playing scene with the longest play time as the video segment data. A minimal sketch of these steps is given below.
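The sketch below assumes each frame is represented by a feature vector (for example an RGB histogram, as in the application example further on) and expresses "similarity below the threshold" as a feature distance above a distance threshold, which the description later states is equivalent. The function name and data layout are illustrative, not prescribed by the patent.

```python
import numpy as np

def longest_scene(frame_features, distance_threshold):
    """Steps 201-203: cut the video where adjacent frames differ too much,
    then return the (start, end) frame-index range of the longest sub-scene."""
    n = len(frame_features)
    # Step 201: the earlier frame of each dissimilar adjacent pair is a node,
    # i.e. the video is cut right after frame i.
    nodes = [i for i in range(n - 1)
             if np.linalg.norm(frame_features[i + 1] - frame_features[i])
             > distance_threshold]
    # Step 202: cut the video into sub-video playing scenes at the nodes.
    bounds = [0] + [i + 1 for i in nodes] + [n]
    scenes = [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]
    # Step 203: choose the sub-scene with the longest play time.
    return max(scenes, key=lambda s: s[1] - s[0])
```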
Preferably, the method further comprises: when the play time of the video segment data is less than the time threshold, determining the play times of the frame images to be chosen according to a preset number of frame images and the play time of the video segment data; and selecting multiple frame images from the video segment data according to the play times of the frame images to be chosen.
Preferably, selecting multiple frame images from the video segment data according to the preset maximum play time interval between two frame images in any video specifically comprises: selecting, in play time order, the preset number of frame images from the video segment data according to the preset number of frame images and the preset maximum play time interval; or selecting, in play time order, multiple frame images from the video segment data according to the preset maximum play time interval.
Preferably, the method further comprises: determining the preset maximum play time interval between two frame images in any video according to the playout speed of the video and a preset maximum interval in frames between two frame images; and determining the time threshold according to the preset number of frame images and the preset maximum play time interval.
As shown in Fig. 3, a device for generating media information in another embodiment of the present invention, applied at a server, comprises:
a choosing unit 31, which selects, from a video to be processed, video segment data that represents the playing scene of the video to be processed;
a first selecting unit 32, which, when the play time of the video segment data is greater than a time threshold, selects multiple frame images from the video segment data according to a preset maximum play time interval between two frame images in any video, the frame images selected at the preset maximum play time interval preserving the playing continuity of the video;
a generating unit 33, which generates a new video from the multiple frame images to serve as the video cover of the video to be processed.
Preferably, as shown in Fig. 4, the choosing unit 31 comprises:
an obtaining module 41, which obtains the image feature data of each frame image in the video to be processed;
a choosing module 42, which divides the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images, and chooses one sub-video playing scene from them as the video segment data.
Preferably, the choosing module 42 determines the similarity of the image feature data of adjacent frame images; when the similarity is less than a similarity threshold, it determines that the adjacent frame images belong to different playing scenes and, following the playing order of the video, selects the earlier frame of each such pair of adjacent frames as a video node; it cuts the video to be processed into multiple sub-video playing scenes at the video nodes, and chooses the sub-video playing scene with the longest play time as the video segment data.
Preferably, the device further comprises: a first determining unit, which, when the play time of the video segment data is less than the time threshold, determines the play times of the frame images to be chosen according to the preset number of frame images and the play time of the video segment data; and a second selecting unit, which selects multiple frame images from the video segment data according to the play times of the frame images to be chosen.
Preferably, the first selecting unit 32 comprises: a first selecting module, which selects, in play time order, the preset number of frame images from the video segment data according to the preset number of frame images and the preset maximum play time interval; or a second selecting module, which selects, in play time order, multiple frame images from the video segment data according to the preset maximum play time interval.
Preferably, the device further comprises: a second determining unit, which determines the preset maximum play time interval between two frame images in any video according to the playout speed of the video and a preset maximum interval in frames between two frame images; and a third determining unit, which determines the time threshold according to the preset number of frame images and the preset maximum play time interval.
The above technical solutions of the embodiments of the present invention have the following beneficial effects: because the play time of the video segment data representing the playing scene of the video to be processed exceeds the time threshold, multiple frame images are selected from the video segment data according to the preset maximum play time interval between two frame images in any video, and a new video is generated from them as the cover of the video to be processed. This not only presents the main content of the video but also avoids the lifelessness of a static video cover, further helps users find videos they like to watch, and gives them a better viewing experience.
The above technical solutions of the embodiments of the present invention are described in detail below with reference to an application example:
The application examples of the present invention aim to present the main playing content of a video more vividly and to bring users a good viewing experience.
As shown in Fig. 1, for example, a Weibo video server can obtain, from a locally stored region, multiple short videos produced or uploaded by users, randomly pick one short video as the video to be processed, read its video data, and compute the RGB three-color histogram of every frame image. It then determines the RGB histogram distance between adjacent frames; the distance can be the L2 (Euclidean) distance, the histogram intersection, or the Hausdorff distance. When the distance is less than a distance threshold, the two adjacent frames belong to the same playing scene; when the distance is greater than the threshold, they belong to different playing scenes, and the short video can then be divided into at least two sub-video playing scenes. According to the play time of each sub-video playing scene, the one with the longest play time is taken as the video segment data representing the playing scene of the video to be processed; if at least two sub-video playing scenes have the same play time, the one that plays first in the short video is chosen. The distance threshold is determined as twice the average RGB histogram distance between all adjacent frames of the short video. After the video segment data is selected, its play time is compared with a time threshold, e.g. 60 s. When the play time of the short video is greater than 60 s, frames are selected from the video segment (say a segment playing from 0 s to 70 s) according to the maximum play time interval between two frames that does not make playback appear discontinuous, e.g. 0.5 s. The selection can work in either of two ways: based on a preset number of frame images, e.g. 120, and the maximum play time interval, the first frame is taken at 0.5 s, the second at 1 s, and so on at 0.5 s steps until 120 frames have been collected; or, based only on the maximum play time interval of 0.5 s, frames are taken uniformly from the segment, starting at 0.5 s and stepping by 0.5 s until the 70 s segment ends, giving 140 frames in total. These 120 or 140 frame images are then made into a GIF animation or a continuously playing picture sequence and used as the cover of the short video. When a user accesses the short video on the Weibo video server through the Weibo page, the server pushes the short video together with its cover to the user's Weibo page; when the user places the mouse over the short video or drags across it, the GIF animation cover of the short video is shown, and it loops continuously.
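The patent does not prescribe how the GIF cover is encoded. Purely as an illustration, the selected frames could be written out as a looping GIF with Pillow, assuming the frames are available as PIL images and using the 0.5 s interval from the example as the per-frame duration; the function name and default path are hypothetical.

```python
from PIL import Image

def write_gif_cover(frames, path="cover.gif", frame_duration_ms=500):
    """Assemble the chosen frames into a looping GIF cover.

    `frames` is a list of PIL.Image objects; 500 ms per frame matches the
    0.5 s maximum play time interval used in the example above.
    """
    first, rest = frames[0], frames[1:]
    first.save(path,
               save_all=True,          # write all frames, not just the first
               append_images=rest,     # remaining frames of the cover
               duration=frame_duration_ms,
               loop=0)                 # loop forever, like the example cover
```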
It should be noted that dividing the short video into at least two sub-video playing scenes by means of the RGB three-color histogram of each frame image allows the short video to be cut more accurately, so that each sub-video playing scene is intercepted more precisely.
The video to be processed can be a short video. A short video is made up of multiple frame images.
In this technical solution, because the play time of the video segment data representing the playing scene of the video to be processed exceeds the time threshold, multiple frame images are selected from the video segment data according to the preset maximum play time interval between two frame images in any video, and a new video is generated from them as the cover of the video to be processed. This not only presents the main content of the video but also avoids the lifelessness of a static video cover, further helps users find videos they like to watch, and gives them a better viewing experience.
Optionally, selecting, from the video to be processed, the video segment data that represents the playing scene comprises: obtaining the image feature data of each frame image in the video to be processed; dividing the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images; and choosing one sub-video playing scene from them as the video segment data.
The image feature data can be the RGB three-color histogram of each frame image, and the similarity of the image feature data of adjacent frame images refers to the RGB histogram distance between the adjacent frame images.
The playing scene of the video to be processed includes at least two sub-video playing scenes, and each sub-video playing scene is determined by the scene in the video content: content shot in the same scene belongs to the same sub-video playing scene, while content shot in different scenes belongs to different sub-video playing scenes.
Optionally, as shown in Fig. 2, dividing the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images and choosing one sub-video playing scene as the video segment data comprises: 201: determining the similarity of the image feature data of adjacent frame images; when the similarity is less than a similarity threshold, determining that the two adjacent frame images belong to different playing scenes and, following the playing order of the video, selecting the earlier frame of each such pair of adjacent frames as a video node; 202: cutting the video to be processed into multiple sub-video playing scenes at the video nodes; 203: choosing, from the multiple sub-video playing scenes, the sub-video playing scene with the longest play time as the video segment data.
The similarity threshold corresponds to the distance threshold: when the similarity is less than the similarity threshold, the distance value can be regarded as greater than the distance threshold, and the adjacent frame images are determined to belong to different playing scenes.
For example, as described above, each frame image of the short video is read and the RGB three-color histogram of every frame is computed; the RGB histogram distance of adjacent frames is then determined, and the distance can be the L2 (Euclidean) distance, the histogram intersection, or the Hausdorff distance. When the distance is less than the distance threshold, the two adjacent frames belong to the same playing scene; when it is greater than the threshold, they belong to different playing scenes, and the short video can be divided into at least two sub-video playing scenes. According to the play time of each sub-video playing scene, the one with the longest play time is taken as the video segment data representing the playing scene of the video to be processed; if at least two sub-video playing scenes have the same play time, the one that plays first in the short video is chosen. The distance threshold is determined as twice the average RGB histogram distance between all adjacent frames of the short video.
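As a concrete illustration of the feature and threshold used in this example (RGB three-color histograms compared with an L2 distance, and a threshold of twice the mean adjacent-frame distance), the following sketch assumes OpenCV and NumPy are available and uses an 8-bins-per-channel quantisation, which the patent does not specify.

```python
import cv2
import numpy as np

def rgb_histogram(frame_bgr, bins=8):
    """RGB three-color histogram of one frame (bin count is an assumption)."""
    hist = cv2.calcHist([frame_bgr], [0, 1, 2], None,
                        [bins, bins, bins],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def adjacent_distances_and_threshold(frames_bgr):
    """L2 distances between adjacent frames, and the adaptive threshold
    defined in the example as 2x the mean adjacent-frame distance."""
    hists = [rgb_histogram(f) for f in frames_bgr]
    dists = [float(np.linalg.norm(hists[i + 1] - hists[i]))
             for i in range(len(hists) - 1)]
    return dists, 2.0 * float(np.mean(dists))
```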
In some examples, the method further comprises: when the play time of the video segment data is less than the time threshold, determining the play times of the frame images to be chosen according to the preset number of frame images and the play time of the video segment data; and selecting multiple frame images from the video segment data according to the play times of the frame images to be chosen.
The video segment data here is the data of the selected sub-video playing scene.
For example, as described above, after the sub-video playing scene is selected, the play time of its video segment is compared with the time threshold, e.g. 60 s. When the play time of the short video is 48 s, i.e. less than 60 s, frames are selected based on the preset number of frame images, e.g. 120, and the 48 s play time: 120 frames are taken uniformly from the 48 s. Since the video plays at 24 frames/s, the short video has 1152 frames in total, so one frame is taken every 1152/120 = 9.6 frames, i.e. every 9.6/24 = 0.4 s: the first frame is taken at 0.4 s, the second at 0.8 s, and so on at 0.4 s steps until 120 frames have been collected.
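The 0.4 s spacing in this example follows directly from the frame rate and the frame budget; a quick check of the arithmetic:

```python
fps = 24            # playout speed in the example
play_time_s = 48    # play time of the short video
frame_budget = 120  # preset number of frame images

total_frames = fps * play_time_s          # 24 * 48 = 1152 frames
frame_step = total_frames / frame_budget  # 1152 / 120 = 9.6 frames per pick
time_step = frame_step / fps              # 9.6 / 24 = 0.4 s between picks
print(total_frames, frame_step, time_step)  # 1152 9.6 0.4
```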
Optionally, selecting multiple frame images from the video segment data according to the preset maximum play time interval between two frame images in any video specifically comprises: selecting, in play time order, the preset number of frame images from the video segment data according to the preset number of frame images and the preset maximum play time interval; or selecting, in play time order, multiple frame images from the video segment data according to the preset maximum play time interval.
The preset maximum play time interval refers to the largest time interval between any two consecutive chosen frame images such that the new video composed of the multiple frame images chosen from the original video still keeps the playing content of the original video from appearing to break.
For example, as described above, when the play time of the video segment is 70 s, i.e. greater than 60 s, frames are selected with a preset number of 120 frame images and a preset maximum play time interval of 0.5 s: the first frame is taken at 0.5 s from the start of the segment, the second at 1 s, and so on at 0.5 s steps until 120 frames have been collected; alternatively, based only on the maximum play time interval of 0.5 s, frames are taken uniformly from the segment, starting at 0.5 s and stepping by 0.5 s until the 70 s segment ends, giving 140 frames in total.
In some examples, the method further comprises: determining the preset maximum play time interval between two frame images in any video according to the playout speed of the video and a preset maximum interval in frames between two frame images; and determining the time threshold according to the preset number of frame images and the preset maximum play time interval.
For example, as described above, by watching multiple videos and following the playback progress of each, the maximum frame gap that does not make any video appear to "break" during playback is estimated, e.g. 12 frame images. At a playout speed of 24 frames/s this gives the maximum interval deltaT between candidate frames, i.e. the preset maximum play time interval between two frame images in any video, e.g. 0.5 s. From deltaT and the fixed total number of frames to be chosen, i.e. the preset number of 120 frame images, the time threshold timevalTH can be computed as 0.5 s * 120 = 60 s.
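The relationship between the estimated maximum frame gap, the playout speed, the frame budget, and the resulting time threshold can be written down directly; the numbers below are the ones used in this example:

```python
fps = 24               # playout speed (frames per second)
max_gap_frames = 12    # estimated largest gap that still plays smoothly
frame_budget = 120     # preset number of frame images for the cover

delta_t = max_gap_frames / fps           # 12 / 24 = 0.5 s (maximum interval)
time_threshold = delta_t * frame_budget  # 0.5 * 120 = 60 s (timevalTH)
print(delta_t, time_threshold)           # 0.5 60.0
```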
It should be understood that the playout speed of these videos is assumed to be the same, and the preset maximum interval in frames between two frame images is the same for all of them.
An embodiment of the present invention provides a device for generating media information that can implement the method embodiments described above; for its specific functions, refer to the explanations in the method embodiments, which are not repeated here.
It should be understood that the particular order or hierarchy of the steps in the disclosed processes is an example of an exemplary approach. Based on design preferences, the particular order or hierarchy of steps in a process may be rearranged without departing from the protection scope of the present disclosure. The appended method claims present the elements of the various steps in an exemplary order and are not limited to the particular order or hierarchy given.
In the above detailed description, various features are grouped together in a single embodiment to simplify the disclosure. This method of disclosure should not be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the appended claims reflect, the invention lies in less than all features of a single disclosed embodiment. Therefore, the appended claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the disclosure is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methods for the purposes of describing the above embodiments, but one of ordinary skill in the art will recognize that further combinations and permutations of the embodiments are possible. Accordingly, the embodiments described herein are intended to cover all such alterations, modifications, and variations that fall within the protection scope of the appended claims. Furthermore, with respect to the term "comprising" as used in the specification or claims, the word is inclusive in a manner similar to the term "including", as "including" is interpreted when used as a transitional word in a claim. In addition, any use of the term "or" in the specification or the claims is meant to denote a "non-exclusive or".
Those skilled in the art will further appreciate that the various illustrative logical blocks, units, and steps listed in the embodiments of the present invention can be implemented by electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the various illustrative components, units, and steps above have been described generally in terms of their functions. Whether such functions are implemented by hardware or software depends on the specific application and the design requirements of the overall system. Those skilled in the art may implement the described functions in different ways for each particular application, but such implementation should not be understood as exceeding the protection scope of the embodiments of the present invention.
Various illustrative logical blocks or unit described in the embodiment of the present invention can by general processor, Digital signal processor, specific integrated circuit (ASIC), field programmable gate array or other programmable logic devices, discrete gate Or transistor logic, discrete hardware components or above-mentioned any combination of design carry out implementation or operation described function.General place Managing device can be microprocessor, and optionally, which may be any traditional processor, controller, microcontroller Device or state machine.Processor can also be realized by the combination of computing device, such as digital signal processor and microprocessor, Multi-microprocessor, one or more microprocessors combine a digital signal processor core or any other like configuration To realize.
The steps of the method or algorithm described in the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. Illustratively, the storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integrated into the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. Alternatively, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium in the form of one or more instructions or code. Computer-readable media include computer storage media and communication media that facilitate the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer or processor. In addition, any connection may properly be termed a computer-readable medium; for example, if the software is transmitted from a website, server, or other remote source through a coaxial cable, fiber-optic cable, twisted pair, or digital subscriber line (DSL), or wirelessly such as by infrared, radio, or microwave, it is also included in the definition of computer-readable medium. Disk and disc, as used here, include compact disc, laser disc, optical disc, DVD, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.
The specific embodiments described above further explain in detail the objects, technical solutions, and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit its protection scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A method for generating media information, applied at a server, comprising:
selecting, from a video to be processed, video segment data that represents the playing scene of the video to be processed;
when the play time of the video segment data is greater than a time threshold, selecting multiple frame images from the video segment data according to a preset maximum play time interval between two frame images in any video, wherein the frame images selected at the preset maximum play time interval preserve the playing continuity of the video;
generating a new video from the multiple frame images to serve as the video cover of the video to be processed.
2. The method according to claim 1, wherein selecting, from the video to be processed, the video segment data that represents the playing scene of the video to be processed comprises:
obtaining the image feature data of each frame image in the video to be processed;
dividing the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images, and choosing one sub-video playing scene from them as the video segment data.
3. The method according to claim 2, wherein dividing the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images and choosing one sub-video playing scene as the video segment data comprises:
determining the similarity of the image feature data of adjacent frame images; when the similarity is less than a similarity threshold, determining that the adjacent frame images belong to different playing scenes and, following the playing order of the video, selecting the earlier frame of each such pair of adjacent frames as a video node;
cutting the video to be processed into multiple sub-video playing scenes at the video nodes;
choosing, from the multiple sub-video playing scenes, the sub-video playing scene with the longest play time as the video segment data.
4. The method according to claim 1, further comprising:
when the play time of the video segment data is less than the time threshold, determining the play times of the frame images to be chosen according to a preset number of frame images and the play time of the video segment data;
selecting multiple frame images from the video segment data according to the play times of the frame images to be chosen.
5. The method according to claim 1, wherein selecting multiple frame images from the video segment data according to the preset maximum play time interval between two frame images in any video specifically comprises:
selecting, in play time order, a preset number of frame images from the video segment data according to the preset number of frame images and the preset maximum play time interval; or
selecting, in play time order, multiple frame images from the video segment data according to the preset maximum play time interval.
6. The method according to claim 1, further comprising:
determining the preset maximum play time interval between two frame images in any video according to the playout speed of the video and a preset maximum interval in frames between two frame images;
determining the time threshold according to the preset number of frame images and the preset maximum play time interval.
7. A device for generating media information, applied at a server, comprising:
a choosing unit, which selects, from a video to be processed, video segment data that represents the playing scene of the video to be processed;
a first selecting unit, which, when the play time of the video segment data is greater than a time threshold, selects multiple frame images from the video segment data according to a preset maximum play time interval between two frame images in any video, wherein the frame images selected at the preset maximum play time interval preserve the playing continuity of the video;
a generating unit, which generates a new video from the multiple frame images to serve as the video cover of the video to be processed.
8. The device according to claim 7, wherein the choosing unit comprises:
an obtaining module, which obtains the image feature data of each frame image in the video to be processed;
a choosing module, which divides the video to be processed into at least two sub-video playing scenes according to the similarity of the image feature data of adjacent frame images, and chooses one sub-video playing scene from them as the video segment data.
9. The device according to claim 8, wherein the choosing module determines the similarity of the image feature data of adjacent frame images; when the similarity is less than a similarity threshold, it determines that the adjacent frame images belong to different playing scenes and, following the playing order of the video, selects the earlier frame of each such pair of adjacent frames as a video node; it cuts the video to be processed into multiple sub-video playing scenes at the video nodes and chooses the sub-video playing scene with the longest play time as the video segment data.
10. The device according to claim 7, further comprising:
a first determining unit, which, when the play time of the video segment data is less than the time threshold, determines the play times of the frame images to be chosen according to a preset number of frame images and the play time of the video segment data;
a second selecting unit, which selects multiple frame images from the video segment data according to the play times of the frame images to be chosen.
11. The device according to claim 10, wherein the first selecting unit comprises:
a first selecting module, which selects, in play time order, a preset number of frame images from the video segment data according to the preset number of frame images and the preset maximum play time interval; or
a second selecting module, which selects, in play time order, multiple frame images from the video segment data according to the preset maximum play time interval.
12. The device according to claim 11, further comprising:
a second determining unit, which determines the preset maximum play time interval between two frame images in any video according to the playout speed of the video and a preset maximum interval in frames between two frame images;
a third determining unit, which determines the time threshold according to the preset number of frame images and the preset maximum play time interval.
CN201811287695.0A 2018-10-31 2018-10-31 Method and device for generating media information Active CN109327713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811287695.0A CN109327713B (en) 2018-10-31 2018-10-31 Method and device for generating media information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811287695.0A CN109327713B (en) 2018-10-31 2018-10-31 Method and device for generating media information

Publications (2)

Publication Number Publication Date
CN109327713A true CN109327713A (en) 2019-02-12
CN109327713B CN109327713B (en) 2022-02-25

Family

ID=65260491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811287695.0A Active CN109327713B (en) 2018-10-31 2018-10-31 Method and device for generating media information

Country Status (1)

Country Link
CN (1) CN109327713B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778257A (en) * 2010-03-05 2010-07-14 北京邮电大学 Generation method of video abstract fragments for digital video on demand
CN102332001A (en) * 2011-07-26 2012-01-25 深圳市万兴软件有限公司 Video thumbnail generation method and device
KR20150089598A (en) * 2014-01-28 2015-08-05 에스케이플래닛 주식회사 Apparatus and method for creating summary information, and computer readable medium having computer program recorded therefor
CN104581407A (en) * 2014-12-31 2015-04-29 北京奇艺世纪科技有限公司 Video previewing method and device
CN104811745A (en) * 2015-04-28 2015-07-29 无锡天脉聚源传媒科技有限公司 Video content displaying method and device
US9578279B1 (en) * 2015-12-18 2017-02-21 Amazon Technologies, Inc. Preview streaming of video data
CN106028094A (en) * 2016-05-26 2016-10-12 北京金山安全软件有限公司 Video content providing method and device and electronic equipment
CN106851437A (en) * 2017-01-17 2017-06-13 南通同洲电子有限责任公司 A kind of method for extracting video frequency abstract
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 Video editing method and electronic equipment
CN108038825A (en) * 2017-12-12 2018-05-15 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108401193A (en) * 2018-03-21 2018-08-14 北京奇艺世纪科技有限公司 A kind of video broadcasting method, device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112015310A (en) * 2019-05-31 2020-12-01 阿里巴巴集团控股有限公司 Cover for acquiring electronic icon, cover setting method and device and electronic equipment
CN114007133A (en) * 2021-10-25 2022-02-01 杭州当虹科技股份有限公司 Video playing start cover automatic generation method and device based on video playing
CN114007133B (en) * 2021-10-25 2024-02-23 杭州当虹科技股份有限公司 Video playing cover automatic generation method and device based on video playing

Also Published As

Publication number Publication date
CN109327713B (en) 2022-02-25

Similar Documents

Publication Publication Date Title
US11483625B2 (en) Optimizing timing of display of a video overlay
US9473548B1 (en) Latency reduction in streamed content consumption
US9077956B1 (en) Scene identification
ES2470976T3 (en) Method and system to control the recording and playback of interactive applications
US20220070540A1 (en) Content structure aware multimedia streaming service for movies, tv shows and multimedia contents
CN109565620A (en) Low latency HTTP real-time streaming transport
CN106331877A (en) Bullet screen playing method and device
WO2016081856A1 (en) Media management and sharing system
CN106792152B (en) Video synthesis method and terminal
JP2016531512A (en) Movie screen processing method and apparatus
CN104159151A (en) Device and method for intercepting and processing of videos on OTT box
CN102256095A (en) Electronic apparatus, video processing method, and program
US10659842B2 (en) Integral program content distribution
CN106162357B (en) Obtain the method and device of video content
CN109327713A (en) A kind of generation method and device of media information
WO2021052130A1 (en) Video processing method, apparatus and device, and computer-readable storage medium
CN108153882A (en) A kind of data processing method and device
CN105898398A (en) Advertisement play method and device, advertising method and device and advertisement system
CN104093084A (en) Method and apparatus for playing video
CN106375801B (en) Method and system for playing video containing advertisement content
CN106385613A (en) Method and device for controlling playing of bullet screens
CN107580264A (en) Multimedia resource play handling method and device
CN106576181A (en) Method and system for backward recording
KR102220088B1 (en) Method and device for determining intercut time bucket in audio or video
WO2023083064A1 (en) Video processing method and apparatus, electronic device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant