
CN107241645B - Method for automatically extracting goal highlight moments through caption recognition of a video - Google Patents

Method for automatically extracting goal highlight moments through caption recognition of a video

Info

Publication number
CN107241645B
Authority
CN
China
Prior art keywords
shot
score
goal
picture
scores
Prior art date
Legal status
Active
Application number
CN201710434108.5A
Other languages
Chinese (zh)
Other versions
CN107241645A (en)
Inventor
杨益红
吴春中
陈晓军
李婷婷
Current Assignee
Chengdu Sobey Digital Technology Co Ltd
Original Assignee
Chengdu Sobey Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Sobey Digital Technology Co Ltd
Priority to CN201710434108.5A
Publication of CN107241645A
Application granted
Publication of CN107241645B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method for automatically extracting goal highlight moments from a video by recognizing its captions, which comprises: S1, determining the score caption position by identifying the score caption in the video picture according to the characteristics of the score board; S2, monitoring the score by repeatedly sampling the picture at the score position at intervals and comparing the current score with the previous one, where an unchanged score is ignored and a changed score indicates that a goal highlight moment has just occurred; and S3, extracting the highlight shot by searching from the frame at which the score changed for the nearest labelled long shot and, taking the end frame of that long shot as a reference, cutting a certain number of consecutive video frames before and after it as the goal shot. Based on the principle that the scoring shot occurs shortly before the score changes, the method extracts the goal shots of a match by recognizing the video captions, overcomes the drawback of manually marking and cutting highlight shots in the prior art, and improves efficiency.

Description

Method for automatically extracting goal highlight moments through caption recognition of a video
Technical Field
The invention relates to video extraction technology, and in particular to a method for automatically extracting goal highlight moments from a video by recognizing its captions.
Background
At present, in the era of media convergence, broadcast media and internet media are combined and complement each other. Broadcast media have rich resources; for example, a large number of professional sports event signals can provide content support for internet media, and sports events are topics that people discuss with relish and that attract large audiences. Internet media, for their part, are characterized by fast propagation, wide reach and fragmented information, and can keep a topic's heat going; on the one hand this promotes production by broadcast media, and on the other hand it places new content and efficiency demands on the existing broadcast production systems.
Existing production systems mainly have the following two problems:
The content problem: a match generally lasts one or two hours, and content of that length is not suited to propagation on the internet and social media; with the development of the internet and converged media, more and more people have become accustomed to, and demand, 'fragmented' information. Therefore, on the content side, the usual approach is to extract the match highlights and then distribute them to the internet.
The efficiency problem: highlight shots are currently extracted by manually marking points during the match and manually cutting the video after recording finishes. This obviously consumes a large amount of manpower and time, is inefficient, and cannot meet timeliness requirements.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method for automatically extracting goal highlight moments through caption recognition of a video: based on the principle that the scoring shot occurs shortly before the score changes, the goal shots of a match are extracted by recognizing the video captions.
The purpose of the invention is achieved by the following technical solution:
a method for automatically extracting goal highlights through caption recognition of a video comprises the following steps:
s1, determining the position of the score subtitle, and identifying the score subtitle in the video picture according to the characteristics of the score plate;
s2, the score is monitored, the picture corresponding to the current score is continuously obtained at intervals, the current score is compared with the previous score, if the current score is the same as the previous score, the current score is ignored, and if the current score is different from the previous score, the shot with the goal at the wonderful moment is shown;
and S3, extracting the wonderful shot, searching the marked nearest distant scene from the frame with the changed score, and after identifying the distant shot, taking the end frame of the distant shot as a reference, and intercepting a certain number of continuous video frames before and after the end frame of the distant shot to be used as the goal shot.
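As an illustration of how the three steps fit together, the following is a minimal Python sketch of the pipeline (not part of the patent): the scoreboard locator, score OCR and shot-labelling components are passed in as callables because the description does not fix any concrete implementation, and the names, signatures and 25 fps sampling default are assumptions for illustration only.

```python
# Minimal sketch of the S1-S3 pipeline, assuming injected components.
from typing import Callable, List, Sequence, Tuple

def extract_goal_moments(frames: Sequence,                 # decoded video frames
                         locate_scoreboard: Callable,      # S1: find the score caption region
                         read_score: Callable,             # S2: OCR the score in that region
                         nearest_long_shot_end: Callable,  # S3: shot labelling / backwards search
                         poll_interval: int = 25           # sample roughly once per second at 25 fps
                         ) -> List[Tuple[int, int]]:
    """Return (score_change_frame, long_shot_end_frame) pairs, one per detected goal."""
    region = locate_scoreboard(frames)
    moments, last_score = [], None
    for i in range(0, len(frames), poll_interval):
        score = read_score(frames[i], region)
        if last_score is not None and score != last_score:   # score changed: a goal preceded this frame
            moments.append((i, nearest_long_shot_end(frames, before=i)))
        last_score = score
    return moments
```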
Further, the step S1 of determining the score caption position specifically includes the following sub-steps:
S11: searching the full screen for text captions and recognizing the caption text in each area;
S12: comparing the recognized caption text with the characteristics of the score board; if the comparison succeeds, that area is the position of the score caption board.
Further, the score board is characterized by the pattern 'team 1 name, team 1 score : team 2 score, team 2 name' or 'team 1 name, team 1 score - team 2 score, team 2 name', i.e. by identifying the character ':' or '-' in the caption.
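A minimal sketch of the comparison in S11/S12, assuming the OCR output is plain text: a caption is treated as a score board when it matches the 'name score : score name' or 'name score - score name' pattern. The regex and the single-token team names are illustrative assumptions, not the patent's specification.

```python
import re

# Hypothetical pattern for "Team1 N : M Team2" or "Team1 N - M Team2" captions.
SCOREBOARD_RE = re.compile(
    r"^\s*(?P<team1>\S+)\s*(?P<score1>\d{1,3})\s*[:\-]\s*(?P<score2>\d{1,3})\s*(?P<team2>\S+)\s*$"
)

def looks_like_scoreboard(caption: str) -> bool:
    """True if an OCR'd caption string matches the assumed score-board pattern."""
    return SCOREBOARD_RE.match(caption) is not None

# Both separators named in the description are accepted.
assert looks_like_scoreboard("China 00 : 00 Korea")
assert looks_like_scoreboard("Bulls 00-00 Rockets")
assert not looks_like_scoreboard("First half")
```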
Further, the step S2 of monitoring the score specifically includes the following sub-steps:
S21: acquiring the picture at the score position at a set time interval;
S22: recognizing the score by a caption recognition method;
S23: comparing the score recognized this time with the score recognized last time (the first reading is only stored); if they are the same the reading is ignored, and if they differ, a goal highlight moment has occurred at this shot.
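Sub-steps S21-S23 amount to a polling loop over the score region; a sketch follows. The OCR backend is passed in as a callable because the description names no specific subtitle-recognition method, and the first reading is stored without being treated as a change.

```python
from typing import Callable, Iterable, List, Tuple

def monitor_score(sampled_frames: Iterable[Tuple[int, object]],  # (frame_index, frame) pairs at the set interval
                  ocr_score: Callable[[object], str]) -> List[int]:
    """Return the frame indices at which the recognised score string changed."""
    change_frames, last_score = [], None
    for index, frame in sampled_frames:
        score = ocr_score(frame)                 # e.g. "1:0"
        if last_score is not None and score != last_score:
            change_frames.append(index)          # a goal highlight moment precedes this frame
        last_score = score                       # the first reading is only stored (S23)
    return change_frames
```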
Further, the step S3 of extracting the highlight shot specifically includes the following sub-steps:
S31: analyzing the video frame by frame;
S32: labelling each frame as a close shot, medium shot or long shot according to the number of human heads, or as a long shot or non-long shot according to intra-shot frame comparison;
S33: after the long shot is identified, taking the end frame of the long shot as a reference, cutting a certain number of consecutive video frames before and after it as the goal shot.
Further, judging a frame by the number of human heads specifically means training a head-detection model with an artificial neural network and using it to count the heads in the picture: if the count is less than or equal to 1, the picture is a close shot; if the count is between 2 and a set value x, the picture is a medium shot; and if the count is greater than x, the picture is a long shot.
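The head-count rule itself is a simple thresholding scheme; a sketch is below. The head detector (described only as a model trained with an artificial neural network) is outside the sketch and is represented by the integer count it would return, and the default x=5 is an assumed value since the description leaves x video-dependent.

```python
def classify_by_head_count(head_count: int, x: int = 5) -> str:
    """Map a detected head count to the shot type described above; x is the tunable threshold."""
    if head_count <= 1:
        return "close"       # close shot: at most one person
    if head_count <= x:
        return "medium"      # medium shot: 2 .. x people
    return "long"            # long shot: more than x people

print(classify_by_head_count(1))    # close
print(classify_by_head_count(4))    # medium
print(classify_by_head_count(12))   # long
```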
Furthermore, intra-shot frame comparison means identifying transitions, treating the segment between two transitions as one natural shot, and comparing the frames within each natural shot to observe how strongly it fluctuates: low fluctuation indicates a long shot and high fluctuation a non-long shot. Since mainly long shots are used when cutting the video, close and medium shots do not need to be distinguished in detail.
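One way to read 'intra-shot frame comparison' is as a mean frame-difference measure within each natural shot; the sketch below takes that reading. The grayscale input format and the threshold value are assumptions; the description only states that low fluctuation marks a long shot and high fluctuation a non-long shot.

```python
import numpy as np

def shot_fluctuation(gray_frames: np.ndarray) -> float:
    """gray_frames: (n_frames, height, width) grayscale frames of one natural shot."""
    diffs = np.abs(np.diff(gray_frames.astype(np.float32), axis=0))
    return float(diffs.mean())                 # mean absolute frame-to-frame change

def is_long_shot(gray_frames: np.ndarray, threshold: float = 8.0) -> bool:
    """Low fluctuation -> long shot; the threshold is a hypothetical tuning value."""
    return shot_fluctuation(gray_frames) < threshold
```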
Further, the close shot, medium shot and long shot are defined as follows:
Close shot: a close-up of an individual player's or coach's skills and expressions after a goal or a player leaving the field; a shot of one person;
Medium shot: a local close-up of events such as a challenge for the ball or a foul as they occur; a shot of 2-5 people;
Long shot: a shot taken with the movement of the ball as its thread; a shot of many people, more than in a medium shot.
The invention has the beneficial effect that the method recognizes score changes in the caption and, combined with the analysis of close, medium and long shots, computes and locates the highlight video at the moment of a goal.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to FIG. 1, but the scope of the present invention is not limited to the following.
Introduction to the principle:
1. We classify the various shots in a ball game into close shots, medium shots and long shots. Analysis shows that goal shots occur within long shots.
1) Close shot: generally a close-up of an individual player's or coach's skills and expressions after an event such as a goal or a player leaving the field; it is basically a shot of one person and usually occurs while play is interrupted.
2) Medium shot: usually a local close-up of a challenge for the ball, a foul and the like, generally a shot of three to five people. Such shots typically occur in midfield or near the touchline, not in front of the goal.
3) Long shot: usually a shot of the activity across the whole pitch, typically taken with the movement of the ball as its thread. The broadcast normally switches to this kind of shot when the ball approaches the goal, and goals are scored within it.
2. The position of the score board does not change during a match.
3. A change of the score indicates that a goal moment has just occurred. The goal moment is the highlight that audiences care about most in a match, so the long shot closest to the score change can be extracted as a highlight shot and published to internet media.
[ Example 1 ]
A method for automatically extracting goal highlight moments through caption recognition of a video, applied to a football match, comprises the following steps:
S1: determining the score caption position
Search the full screen for text captions, recognize the caption text in each area, and compare the recognized text with the score board features; if the comparison succeeds, that area is the position of the score caption board. The score board follows the pattern 'team 1 name, team 1 score : team 2 score, team 2 name', such as 'China 00 : 00 Korea'.
S2: monitoring the score
Acquire the picture at the score position at a set time interval, recognize the score with a caption recognition method, and compare it with the previously recognized score (the first reading is only stored). If the scores are the same, ignore the reading; if they differ, a goal moment has occurred, and the next step is carried out.
S3: extracting the highlight shot
Search from the frame at which the score changed for the nearest labelled long shot. The classification of picture types is carried out in parallel with the score monitoring and comprises the following steps:
Analyze the video frame by frame, and label each frame as a close shot, medium shot or long shot by head count, or as a long shot or non-long shot by intra-shot frame comparison.
Judgment by the number of human heads:
Train a head-detection model with an artificial neural network and use it to count the heads in the picture: if the count is less than or equal to 1, the picture is a close shot; if it is between 2 and a set value X, the picture is a medium shot; if it is greater than X, the picture is a long shot. X can be set according to the specific video.
Judgment by intra-shot frame comparison:
Identify transitions and treat the segment between two transitions as one natural shot. Compare the frames within each natural shot to observe how strongly it fluctuates: low fluctuation indicates a long shot and high fluctuation a non-long shot. Since mainly long shots are used when cutting the video, close and medium shots do not need to be distinguished in detail.
Selecting the goal shot
After the long shot is identified, take its end frame as a reference and cut a certain number of consecutive video frames before and after it, for example 15 seconds before and 5 seconds after the end frame, as the goal shot.
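Putting the backwards search and the cut together, here is a sketch under stated assumptions (25 fps, per-frame shot labels as strings, and the 15 s / 5 s window from this example); it is an illustration, not the patent's prescribed implementation.

```python
from typing import List, Optional, Tuple

def nearest_long_shot_end(shot_labels: List[str], change_frame: int) -> Optional[int]:
    """Find the nearest 'long' label at or before the score change, then walk to the end of that long shot."""
    i = min(change_frame, len(shot_labels) - 1)
    while i >= 0 and shot_labels[i] != "long":
        i -= 1                                   # search backwards for the nearest long shot
    if i < 0:
        return None
    while i + 1 < len(shot_labels) and shot_labels[i + 1] == "long":
        i += 1                                   # move to the last frame of that long shot
    return i

def goal_clip_window(long_shot_end: int, fps: float = 25.0,
                     before_s: float = 15.0, after_s: float = 5.0) -> Tuple[int, int]:
    """Cut before_s seconds before and after_s seconds after the long-shot end frame."""
    return max(0, long_shot_end - int(before_s * fps)), long_shot_end + int(after_s * fps)

labels = ["medium"] * 100 + ["long"] * 300 + ["close"] * 50   # toy per-frame labels
end = nearest_long_shot_end(labels, change_frame=430)
print(end, goal_clip_window(end))   # 399 (24, 524)
```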
[ Example 2 ]
A method for automatically extracting goal highlight moments through caption recognition of a video, applied to a basketball game, comprises the following steps:
S1: determining the score caption position
Search the full screen for text captions, recognize the caption text in each area, and compare the recognized text with the score board features; if the comparison succeeds, that area is the position of the score caption board. The score board follows the pattern 'team 1 name, team 1 score - team 2 score, team 2 name', such as 'Bulls 00-00 Rockets'.
S2: monitoring the score
Acquire the picture at the score position at a set time interval, recognize the score with a caption recognition method, and compare it with the previously recognized score (the first reading is only stored). If the scores are the same, ignore the reading; if the difference is 3, a goal highlight moment has occurred (a 3-point shot in a basketball game is generally a highlight), and the next step is carried out.
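The only difference from Example 1 is the trigger condition, which can be expressed as a per-sport rule on the score delta. A sketch follows, under the assumption that scores are recognised as 'a:b' or 'a-b' strings as in step S2; the function names and the sport switch are illustrative.

```python
import re

def parse_score(text: str):
    """Split a recognised score string such as '1:0' or '98-95' into two integers."""
    left, right = re.split(r"\s*[:\-]\s*", text.strip())
    return int(left), int(right)

def is_highlight(prev: str, curr: str, sport: str = "football") -> bool:
    (pa, pb), (ca, cb) = parse_score(prev), parse_score(curr)
    delta = abs(ca - pa) + abs(cb - pb)
    if sport == "basketball":
        return delta == 3            # Example 2: only 3-point shots are flagged
    return delta > 0                 # Example 1: any goal in football

print(is_highlight("00 : 00", "01 : 00"))                      # True
print(is_highlight("98-95", "101-95", sport="basketball"))     # True
print(is_highlight("98-95", "100-95", sport="basketball"))     # False
```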
S3: extracting the highlight shot
Search from the frame at which the score changed for the nearest labelled long shot. The classification of picture types is carried out in parallel with the score monitoring and comprises the following steps:
Analyze the video frame by frame, judge the type of each frame comprehensively by head count, intra-shot frame comparison and other means, and label it as a close shot, medium shot or long shot, or as a long shot or non-long shot.
Judgment by the number of human heads:
Train a head-detection model with an artificial neural network and use it to count the heads in the picture: if the count is less than or equal to 1, the picture is a close shot; if it is between 2 and a set value X, the picture is a medium shot; if it is greater than X, the picture is a long shot. X can be set according to the specific video.
Judgment by intra-shot frame comparison:
Identify transitions and treat the segment between two transitions as one natural shot. Compare the frames within each natural shot to observe how strongly it fluctuates: low fluctuation indicates a long shot and high fluctuation a non-long shot. Since mainly long shots are used when cutting the video, close and medium shots do not need to be distinguished in detail.
Selecting the goal shot
After the long shot is identified, take its end frame as a reference and cut a certain number of consecutive video frames before and after it, for example 15 seconds before and 5 seconds after the end frame, as the goal shot.
The invention is not limited to basketball or football games; it is also applicable to other scoring sports. The score-change criterion for a highlight shot can be formulated according to the scoring mechanism of each game, so that the corresponding highlight shots are extracted.
The foregoing is illustrative of the preferred embodiments of this invention. It is to be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications and environments falling within the scope of the inventive concept described herein may be resorted to, whether suggested by the above teachings or by the skill or knowledge of the relevant art. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (3)

1. A method for automatically extracting goal highlight moments by recognizing captions of a video, characterized by comprising the following steps:
S1: searching the full screen for text captions and recognizing the caption text in each area;
S2: comparing the recognized caption text with the characteristics of the score board, wherein if the comparison succeeds, that area is the position of the score caption board;
S3: acquiring the picture at the score position at a set time interval;
S4: recognizing the score by a caption recognition method;
S5: comparing the score recognized this time with the score recognized last time (the first reading is only stored); if they are the same the reading is ignored, and if they differ, a goal highlight moment has occurred at this shot;
S6: analyzing the video frame by frame;
S7: labelling each frame as a close shot, medium shot or long shot according to the number of human heads, or as a long shot or non-long shot according to intra-shot frame comparison;
wherein judging a frame by the number of human heads specifically means training a head-detection model with an artificial neural network and using it to count the heads in the picture: if the count is less than or equal to 1, the picture is a close shot; if the count is between 2 and a set value x, the picture is a medium shot; and if the count is greater than x, the picture is a long shot;
the close shot, medium shot and long shot are defined as follows:
close shot: a close-up of an individual player's or coach's skills and expressions after a goal or a player leaving the field; a shot of one person;
medium shot: a local close-up of events such as a challenge for the ball or a foul as they occur; a shot of 2-5 people;
long shot: a shot of the activity across the whole pitch, taken with the movement of the ball as its thread; a shot of many people, more than in a medium shot;
S8: searching from the frame at which the score changed for the nearest labelled long shot, and after identifying the long shot, taking its end frame as a reference and cutting a certain number of consecutive video frames before and after it as the goal shot.
2. The method for automatically extracting goal highlight moments by caption recognition of a video according to claim 1, characterized in that: the score board follows the pattern 'team 1 name, team 1 score : team 2 score, team 2 name' or 'team 1 name, team 1 score - team 2 score, team 2 name'.
3. The method for automatically extracting goal highlight moments by caption recognition of a video according to claim 1, characterized in that: the intra-shot frame comparison means identifying transitions, treating the segment between two transitions as one natural shot, and comparing the frames within each natural shot to observe how strongly it fluctuates, where low fluctuation indicates a long shot and high fluctuation a non-long shot; since mainly long shots are used when cutting the video, close and medium shots do not need to be distinguished in detail.
CN201710434108.5A 2017-06-09 2017-06-09 Method for automatically extracting goal highlight moments through caption recognition of a video Active CN107241645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710434108.5A CN107241645B (en) 2017-06-09 2017-06-09 Method for automatically extracting goal highlight moments through caption recognition of a video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710434108.5A CN107241645B (en) 2017-06-09 2017-06-09 Method for automatically extracting goal highlight moments through caption recognition of a video

Publications (2)

Publication Number Publication Date
CN107241645A CN107241645A (en) 2017-10-10
CN107241645B true CN107241645B (en) 2020-07-24

Family

ID=59986153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710434108.5A Active CN107241645B (en) 2017-06-09 2017-06-09 Method for automatically extracting goal highlight moments through caption recognition of a video

Country Status (1)

Country Link
CN (1) CN107241645B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107801106B (en) * 2017-10-24 2019-10-15 维沃移动通信有限公司 A kind of video clip intercept method and electronic equipment
CN109344292B (en) * 2018-09-28 2022-04-22 百度在线网络技术(北京)有限公司 Method, device, server and storage medium for generating event score segments
CN109635707A (en) * 2018-12-06 2019-04-16 安徽海豚新媒体产业发展有限公司 A kind of video lens extracting method based on feature identification
CN110267116A (en) * 2019-05-22 2019-09-20 北京奇艺世纪科技有限公司 Video generation method, device, electronic equipment and computer-readable medium
CN110339566A (en) * 2019-05-29 2019-10-18 努比亚技术有限公司 A kind of game Wonderful time recognition methods, terminal and computer readable storage medium
CN111031384A (en) * 2019-12-24 2020-04-17 北京多格科技有限公司 Video content display method and device
CN111340837A (en) * 2020-02-18 2020-06-26 上海眼控科技股份有限公司 Image processing method, device, equipment and storage medium
CN111488847B (en) * 2020-04-17 2024-02-02 上海媒智科技有限公司 Sports game video ball-feeding segment acquisition system, method and terminal
CN111988670B (en) * 2020-08-18 2021-10-22 腾讯科技(深圳)有限公司 Video playing method and device, electronic equipment and computer readable storage medium
CN113537052B (en) * 2021-07-14 2023-07-28 北京百度网讯科技有限公司 Video clip extraction method, device, equipment and storage medium
CN113490049B (en) * 2021-08-10 2023-04-21 深圳市前海动竞体育科技有限公司 Sports event video editing method and system based on artificial intelligence
CN117132925B (en) * 2023-10-26 2024-02-06 成都索贝数码科技股份有限公司 Intelligent stadium method and device for sports event

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1460835A1 (en) * 2003-03-19 2004-09-22 Thomson Licensing S.A. Method for identification of tokens in video sequences
CN101211460A (en) * 2006-12-30 2008-07-02 中国科学院计算技术研究所 Method and device for automatically dividing and classifying sports vision frequency shot
CN101604325A (en) * 2009-07-17 2009-12-16 北京邮电大学 Method for classifying sports video based on key frame of main scene lens
CN102393909A (en) * 2011-06-29 2012-03-28 西安电子科技大学 Method for detecting goal events in soccer video based on hidden markov model
CN102254160A (en) * 2011-07-12 2011-11-23 央视国际网络有限公司 Video score detecting and recognizing method and device
CN102263907A (en) * 2011-08-04 2011-11-30 央视国际网络有限公司 Play control method of competition video, and generation method and device for clip information of competition video
CN103049787A (en) * 2011-10-11 2013-04-17 汉王科技股份有限公司 People counting method and system based on head and shoulder features
CN105955708A (en) * 2016-05-09 2016-09-21 西安北升信息科技有限公司 Sports video lens classification method based on deep convolutional neural networks

Also Published As

Publication number Publication date
CN107241645A (en) 2017-10-10

Similar Documents

Publication Publication Date Title
CN107241645B (en) Method for automatically extracting goal highlight moments through caption recognition of a video
US10758807B2 (en) Smart court system
CN110012348B (en) A kind of automatic collection of choice specimens system and method for race program
US7793205B2 (en) Synchronization of video and data
CN105183849A (en) Event detection and semantic annotation method for snooker game videos
US10616663B2 (en) Computer-implemented capture of live sporting event data
CN101377852B (en) Apparatus for determining highlight segments of sport video
US20080193099A1 (en) Video Edition Device and Method
CN112533003B (en) Video processing system, device and method
CN107172487A (en) A kind of method that Highlight is extracted by camera lens playback feature
MX2012000902A (en) Play sequence visualization and analysis.
CN109672899A (en) The Wonderful time of object game live scene identifies and prerecording method in real time
US20080269924A1 (en) Method of summarizing sports video and apparatus thereof
CN104915433A (en) Method for searching for film and television video
JP6307892B2 (en) Extraction program, method, and apparatus, and baseball video meta information creation apparatus, method, and program
CN110188241A (en) A kind of race intelligence manufacturing system and production method
Lee et al. Highlight-video generation system for baseball games
CN107277409A (en) Spurt video diced system and method in a kind of timing type games project
US10200764B2 (en) Determination method and device
Raunsbjerg et al. TV sport and rhetoric: The mediated event
CN101090453A (en) Searching method of searching highlight in film of tennis game
CN101833978A (en) Character signal-triggered court trial video real-time indexing method
Dai et al. Replay scene classification in soccer video using web broadcast text
CN110321766A (en) A kind of mistake in shooting sports penetrates detection system and method
KR102338188B1 (en) 3-ball billiard trajectory analysis and prediction system using AI, and method thereof

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant