
CN108337573A - Method and medium for implementing real-time commentary on sporting events - Google Patents

Method and medium for implementing real-time commentary on sporting events

Info

Publication number
CN108337573A
CN108337573A (application CN201810251213.XA)
Authority
CN
China
Prior art keywords
event
information
instruction
commentary
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810251213.XA
Other languages
Chinese (zh)
Inventor
陈彦均
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN201810251213.XA priority Critical patent/CN108337573A/en
Publication of CN108337573A publication Critical patent/CN108337573A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4662 Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms
    • H04N21/4666 Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms using neural networks, e.g. processing the feedback provided by the user
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses a method and medium for implementing real-time commentary on sporting events. The method includes: obtaining basic information about the event; receiving a commentary instruction sent by a user; generating, based on the basic information about the event, commentary information corresponding to the commentary instruction, the commentary information including graphic-and-text report information and/or voice commentary information; and outputting the commentary information. The technical solution of the embodiments of the present invention allows spectators at an event venue without live commentary to hear real-time commentary, improving the viewing experience and the viewing enjoyment of the audience.

Description

Method and medium for implementing real-time commentary on sporting events
Technical field
This disclosure relates to the field of augmented reality, and in particular to a method and medium for implementing real-time commentary on sporting events.
Background technology
Currently, many television or Internet video programs, such as sports competitions, live broadcasts of major news events, and documentaries, require a commentator to provide real-time commentary or narration that follows the video as it plays, so that the audience can understand the broadcast content more deeply.
However, at the venue of some of these programs, taking sports competitions as an example, such as football, basketball, table tennis, billiards, or large events in which multiple disciplines run in parallel such as the Olympic Games, no real-time commentary for the on-site audience is usually provided, because the venue is noisy or because commentary might disturb the athletes. For on-site spectators who are unfamiliar with the rules of the game, or who are accustomed to watching with live commentary, this can strongly degrade the viewing experience.
Summary of the invention
In view of the above drawbacks or deficiencies of the prior art, it is desirable to provide a solution that can effectively improve the viewing experience of on-site spectators.
In a first aspect, an embodiment of the present invention provides a method for implementing real-time commentary on sporting events, applied to an augmented reality device, the method including:
obtaining basic information about the event;
receiving a commentary instruction sent by a user;
generating, based on the basic information about the event, commentary information corresponding to the commentary instruction, the commentary information including graphic-and-text report information and/or voice commentary information;
outputting the commentary information.
Optionally, obtaining basic information about the event includes:
determining current location information;
obtaining the basic information about the event based on the current location information.
Optionally, the commentary instruction includes: a first instruction indicating switching to a live online commentator's commentary; or a second instruction indicating not switching to a live online commentator's commentary.
Generating, based on the basic information about the event, commentary information corresponding to the commentary instruction then includes:
when the commentary instruction includes the first instruction, relaying a live online commentary signal based on the basic information about the event;
when the commentary instruction includes the second instruction, generating simulated commentary information based on the basic information about the event.
Optionally, when the commentary instruction includes the first instruction, relaying a live online commentary signal based on the basic information about the event includes:
when the commentary instruction includes the first instruction, outputting a list of commentators currently broadcasting online;
after confirming that the list of online commentators contains the user's target commentator, relaying the live online commentary signal of the target commentator based on the basic information about the event.
Optionally, the method further includes:
when the commentary instruction includes the first instruction and it is confirmed that the list of online commentators does not contain the user's target commentator, generating simulated commentary information based on the basic information about the event.
Optionally, the commentary instruction further includes: a third instruction to start a video playback function; or a fourth instruction not to start the video playback function.
Generating simulated commentary information based on the basic information about the event then includes:
when the commentary instruction further includes the third instruction, generating first simulated commentary information based on the basic information about the event, the first simulated commentary information being graphic-and-text report information and voice commentary information;
when the commentary instruction further includes the fourth instruction, obtaining live video data of the event, and generating second simulated commentary information based on the live video data and a commentary information database pre-stored on the network side, the second simulated commentary information being voice commentary information.
Optionally, the commentary instruction further includes: a fifth instruction to use user-defined partial playback; or a sixth instruction not to use user-defined partial playback.
Generating the first simulated commentary information based on the basic information about the event then includes:
when the commentary instruction further includes the fifth instruction, obtaining gaze-tracking data of the user, and determining the user's field-of-view observation range based on the gaze-tracking data;
obtaining live video data of the event;
selecting, according to the user's field-of-view observation range, the video data within that range from the live video data, and taking it as partial playback video data;
generating the first simulated commentary information from the partial playback video data;
when the commentary instruction further includes the sixth instruction, obtaining global tactical playback video data of the event;
generating the first simulated commentary information from the global tactical playback video data of the event.
Optionally, the method further includes:
determining commentary preference information of the user, the commentary preference information including at least one of commentator preference information, team preference information, and player preference information.
Generating the second simulated commentary information based on the live video data and the commentary information database pre-stored on the network side then includes:
generating the second simulated commentary information based on the live video data, the commentary information database pre-stored on the network side, and the commentary preference information of the user.
Optionally, determining the commentary preference information of the user includes:
identifying the identity of the user;
judging, based on the identity of the user, whether commentary preference information of the user has been stored in advance;
if so, directly retrieving the commentary preference information of the user;
if not, receiving a setting instruction carrying the commentary preference information sent by the user.
In a second aspect, an embodiment of the present invention further provides a computer-readable storage medium on which computer program instructions are stored, the computer program instructions, when executed by a processor, implementing the above method.
In the method for implementing real-time commentary on sporting events provided by the embodiments of the present invention, basic information about the event is obtained, a commentary instruction sent by the user is received, commentary information corresponding to the commentary instruction is generated based on the basic information about the event, and the commentary information is output. This method allows spectators at an event venue without live commentary to hear real-time commentary, improving the viewing experience and enjoyment of the audience.
Description of the drawings
Other features, objects, and advantages of the application will become more apparent from the following detailed description of non-restrictive embodiments, read with reference to the accompanying drawings:
Fig. 1 is a schematic flowchart of a method for implementing real-time commentary on sporting events according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the implementation principle of a real-time commentary system according to an embodiment of the present invention;
Fig. 3 is a detailed schematic flowchart of a method for implementing real-time commentary on sporting events according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an apparatus for implementing real-time commentary on sporting events according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an augmented reality device suitable for implementing embodiments of the present application.
Detailed description of the embodiments
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention and do not limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that the embodiments of the present application and the features in the embodiments may be combined with one another provided there is no conflict.
The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
As shown in Fig. 1, an embodiment of the present invention provides a method for implementing real-time commentary on sporting events, applied to an AR (Augmented Reality) device. The method includes the following steps:
Step 110: obtain basic information about the event.
In the embodiment of the present invention, the basic information about the event can be obtained by determining current location information.
In practical applications, the current location information can be determined by GPS (Global Positioning System) tracking and positioning technology. For example, a GPS tracking and positioning module is arranged in the AR device; the module determines the location of the AR device, that is, the location of the venue, and the location is then compared against a network database to determine the basic information about the event.
The basic information about the event may include, but is not limited to, information such as the teams, players, coaches, and referees of the event.
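Purely as an illustration, and not part of the original disclosure, the location-based lookup described above could be sketched as follows; the EventInfo structure, the in-memory VENUE_DB, and the 1 km matching radius are assumptions standing in for the network-side database and its query interface.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class EventInfo:
    name: str
    teams: list
    players: dict    # team name -> list of player names
    coaches: dict    # team name -> coach name
    referees: list

# Stand-in for the network database, keyed by venue coordinates (lat, lon).
VENUE_DB = {
    (39.993, 116.397): EventInfo(
        name="City Derby",
        teams=["Team A", "Team B"],
        players={"Team A": ["Player 1"], "Team B": ["Player 2"]},
        coaches={"Team A": "Coach A", "Team B": "Coach B"},
        referees=["Referee X"],
    ),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def lookup_event(lat, lon, max_km=1.0):
    """Return the event whose venue is closest to the GPS fix, if within max_km."""
    (vlat, vlon), info = min(VENUE_DB.items(),
                             key=lambda kv: haversine_km(lat, lon, *kv[0]))
    return info if haversine_km(lat, lon, vlat, vlon) <= max_km else None

print(lookup_event(39.9931, 116.3972).name)  # -> "City Derby"
```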
Step 120: receive the commentary instruction sent by the user.
In the embodiment of the present invention, the commentary instruction may be sent by the user through a voice command, gesture recognition, a button operation, or a touch operation. For example, a human-computer interaction module is arranged in the AR device, and the user interacts with the AR device to send the commentary instruction.
It should also be noted that, in the embodiments of the present application, the commentary instruction may be a single instruction or a series of instructions.
Specifically, the commentary instruction may include, but is not limited to:
a first instruction indicating switching to a live online commentator's commentary, or a second instruction indicating not switching to a live online commentator's commentary;
a third instruction to start a video playback function, or a fourth instruction not to start the video playback function;
a fifth instruction to use user-defined partial playback, or a sixth instruction not to use user-defined partial playback.
The third or fourth instruction is received after the first or second instruction, and the fifth or sixth instruction is received after the third instruction.
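For illustration only, the six instruction types and the ordering constraint just described might be modelled as below; the enum member names and the validation helper are assumptions, not part of the disclosure.

```python
from enum import Enum, auto

class CommentaryInstruction(Enum):
    """The six instruction types described above (names are illustrative)."""
    RELAY_LIVE_COMMENTARY = auto()      # first instruction
    NO_LIVE_COMMENTARY = auto()         # second instruction
    START_PLAYBACK = auto()             # third instruction
    NO_PLAYBACK = auto()                # fourth instruction
    CUSTOM_PARTIAL_PLAYBACK = auto()    # fifth instruction
    GLOBAL_PLAYBACK = auto()            # sixth instruction

# Which instruction may follow which: third/fourth only after first/second,
# fifth/sixth only after third.
ALLOWED_NEXT = {
    None: {CommentaryInstruction.RELAY_LIVE_COMMENTARY,
           CommentaryInstruction.NO_LIVE_COMMENTARY},
    CommentaryInstruction.RELAY_LIVE_COMMENTARY: {CommentaryInstruction.START_PLAYBACK,
                                                  CommentaryInstruction.NO_PLAYBACK},
    CommentaryInstruction.NO_LIVE_COMMENTARY: {CommentaryInstruction.START_PLAYBACK,
                                               CommentaryInstruction.NO_PLAYBACK},
    CommentaryInstruction.START_PLAYBACK: {CommentaryInstruction.CUSTOM_PARTIAL_PLAYBACK,
                                           CommentaryInstruction.GLOBAL_PLAYBACK},
    CommentaryInstruction.NO_PLAYBACK: set(),
    CommentaryInstruction.CUSTOM_PARTIAL_PLAYBACK: set(),
    CommentaryInstruction.GLOBAL_PLAYBACK: set(),
}

def validate_sequence(instructions):
    """Check that a received instruction sequence respects the ordering above."""
    prev = None
    for ins in instructions:
        if ins not in ALLOWED_NEXT[prev]:
            return False
        prev = ins
    return True

seq = [CommentaryInstruction.NO_LIVE_COMMENTARY,
       CommentaryInstruction.START_PLAYBACK,
       CommentaryInstruction.GLOBAL_PLAYBACK]
print(validate_sequence(seq))  # -> True
```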
Step 130: based on the basic information about the event, generate commentary information corresponding to the commentary instruction, the commentary information including graphic-and-text report information and/or voice commentary information.
Step 140: output the commentary information.
The realization of step 130 is described in detail below.
Step 130 can be realized as follows:
When the first instruction is received, the live online commentary signal is directly relayed based on the basic information about the event.
Further, when the first instruction is received, a list of commentators currently broadcasting online can first be output; after it is confirmed that the list contains the user's target commentator (for example, the user's favourite commentator), the live online commentary signal of the target commentator is relayed based on the basic information about the event.
In practical applications, a voice output module can be arranged in the AR device to output the live online commentary signal.
When the second instruction is received, a simulated-commentator mode is entered, and simulated commentary information is generated based on the basic information about the event.
In addition, when the first instruction is received but it is confirmed that the list of online commentators does not contain the user's target commentator, the simulated-commentator mode is likewise entered, and simulated commentary information is generated based on the basic information about the event.
Further, after the first or second instruction has been received and the simulated-commentator mode has been entered, generating simulated commentary information based on the basic information about the event can be realized as follows:
when the third instruction is then received, first simulated commentary information is generated based on the basic information about the event, the first simulated commentary information being graphic-and-text report information and voice commentary information;
when the fourth instruction is then received, live video data of the event is obtained, and second simulated commentary information is generated based on the live video data and a commentary information database pre-stored on the network side, the second simulated commentary information being voice commentary information.
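The branching just described can be summarised, for illustration only, by a small decision function; the argument names and return labels are assumptions, and a real system would drive the relay, rendering, and speech-synthesis modules rather than return a string.

```python
def choose_commentary_source(wants_live_relay, favourite_commentator_online, wants_playback):
    """Mirror the branching above. Returns "relay_live", "simulate_with_playback",
    or "simulate_voice_only"; arguments stand in for the first/second instruction,
    the commentator-list check, and the third/fourth instruction."""
    if wants_live_relay and favourite_commentator_online:
        return "relay_live"               # first instruction, target commentator found
    if wants_playback:
        return "simulate_with_playback"   # third instruction: graphic-and-text + voice commentary
    return "simulate_voice_only"          # fourth instruction: voice from live video + cloud database

# First instruction sent, but the favourite commentator is offline and no playback requested:
print(choose_commentary_source(True, False, False))  # -> "simulate_voice_only"
```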
The live video data of the event can be obtained from a camera cluster arranged at the venue.
The commentary information database pre-stored on the network side can be obtained in the following manner:
relevant information (such as videos, commentary audio, news, pictures, and comments) is extracted from a large volume of past events using network big-data technology and a deep-learning neural-network algorithm; the data and knowledge related to those events are learned and classified, and the classification results are stored in the network-side commentary information database.
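The patent does not specify the mining or classification pipeline; purely as a loose illustration under that caveat, the sketch below groups toy commentary snippets into a class-keyed store using a small neural classifier over TF-IDF features (scikit-learn), standing in for the big-data collection and deep-learning classification described above.

```python
# pip install scikit-learn
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-ins for commentary snippets mined from past matches.
snippets = [
    ("What a strike, the keeper had no chance!", "goal"),
    ("That is a clear foul, the referee reaches for his pocket.", "foul"),
    ("A patient spell of possession in midfield.", "possession"),
    ("He slots it home from the penalty spot.", "goal"),
]
texts, labels = zip(*snippets)

# A small neural classifier over TF-IDF features stands in for the
# deep-learning classification unit described in the text.
classifier = make_pipeline(TfidfVectorizer(),
                           MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
classifier.fit(texts, labels)

# "Cloud" commentary-information database: snippets grouped by predicted class.
commentary_db = defaultdict(list)
for text in texts:
    commentary_db[classifier.predict([text])[0]].append(text)

print(dict(commentary_db))
```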
Further, the method may also include:
determining commentary preference information of the user, the commentary preference information including at least one of commentator preference information, team preference information, and player preference information.
Generating the second simulated commentary information based on the live video data and the commentary information database pre-stored on the network side may then include:
generating the second simulated commentary information based on the live video data, the commentary information database pre-stored on the network side, and the commentary preference information of the user. For example, second simulated commentary information in the commentary style of the user's preferred commentator can be generated according to the user's commentary preference information.
The commentary preference information of the user can be determined as follows:
first, the identity of the user is recognised through the iris-recognition technology of the AR device; then, based on the identity of the user, it is judged whether commentary preference information of the user has been stored in advance; if so, it is directly retrieved; if not, the user's iris information can be sampled multiple times and saved, and a setting instruction carrying the commentary preference information, sent by the user, is received and saved.
In practical applications, this can be realized by arranging an identification and gaze-tracking module in the AR device.
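For illustration only, the store-or-ask logic for preference information might look like the following; iris recognition itself is out of scope here, so an opaque user_id string stands in for the recognised identity, and all names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CommentaryPreferences:
    commentators: list = field(default_factory=list)
    teams: list = field(default_factory=list)
    players: list = field(default_factory=list)

# Stand-in for the store keyed by the identity returned by iris recognition.
_preference_store = {}

def get_preferences(user_id, ask_user):
    """Return stored preferences for user_id, or ask for them and store the answer.

    `ask_user` is a callable returning a CommentaryPreferences object; it models the
    setting instruction carrying the preference information sent by the user.
    """
    if user_id in _preference_store:
        return _preference_store[user_id]   # previously stored: retrieve directly
    prefs = ask_user()                      # not stored: receive the setting instruction
    _preference_store[user_id] = prefs
    return prefs

# Example: the first call stores the answer, the second call retrieves it.
prefs = get_preferences("iris-0001",
                        lambda: CommentaryPreferences(commentators=["Commentator A"]))
assert get_preferences("iris-0001", lambda: None) is prefs
```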
In addition, after the third instruction is received, generating the first simulated commentary information based on the basic information about the event can be realized as follows:
when the fifth instruction is then received, gaze-tracking data of the user is obtained, and the user's field-of-view observation range is determined based on the gaze-tracking data;
live video data of the event is obtained;
according to the user's field-of-view observation range, the video data within that range is selected from the live video data and taken as partial playback video data;
the first simulated commentary information is generated from the partial playback video data;
when the sixth instruction is then received, global tactical playback video data of the event is obtained;
the first simulated commentary information is generated from the global tactical playback video data of the event.
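As a rough sketch (not the patented implementation), selecting the partial playback region from decoded frames given a gaze point could be done as below with NumPy; real gaze tracking, video decoding, and the commentary generation itself are outside the sketch.

```python
import numpy as np

def crop_to_gaze(frame, gaze_xy, view_radius):
    """Return the part of a frame inside the user's field-of-view observation range.

    frame: H x W x 3 array (one decoded video frame).
    gaze_xy: (x, y) pixel coordinates of the gaze point from the eye tracker.
    view_radius: half-size, in pixels, of the square observation window.
    """
    h, w = frame.shape[:2]
    x, y = gaze_xy
    x0, x1 = max(0, x - view_radius), min(w, x + view_radius)
    y0, y1 = max(0, y - view_radius), min(h, y + view_radius)
    return frame[y0:y1, x0:x1]

def partial_playback(frames, gaze_points, view_radius=200):
    """Build the partial playback clip by cropping each frame around the gaze point."""
    return [crop_to_gaze(f, g, view_radius) for f, g in zip(frames, gaze_points)]

# Example on synthetic frames.
frames = [np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(3)]
clip = partial_playback(frames, [(960, 540)] * 3)
print(clip[0].shape)  # -> (400, 400, 3)
```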
In the embodiments of the present application, basic information about the event is obtained, a commentary instruction sent by the user is received, commentary information corresponding to the commentary instruction is generated based on the basic information about the event, and the commentary information is output. This method allows spectators at an event venue without live commentary to hear real-time commentary, improving the viewing experience and enjoyment of the audience.
Based on the above method for implementing real-time commentary, the implementation principle of the embodiment of the present invention is introduced below.
As shown in Fig. 2, a system implementing real-time commentary according to an embodiment of the present invention includes:
the event venue, a venue camera group, a wireless communication base station, a system control unit, and an augmented reality device with a gaze-tracking function.
The system control unit includes: an augmented-reality-device data real-time processing unit, a video data processing unit, an event data collection and training unit, an information fusion processing unit, and an augmented-reality-device voice and virtual graphic-and-text output control unit.
The augmented reality device includes: a tracking and positioning module, a wireless communication module, an identification and gaze-tracking module, a human-computer interaction module, and a voice output module.
The venue camera group can capture live views of the competition from all directions and angles as required. Taking football as an example, a match typically uses about 20 cameras: one at each corner position, one suspended behind each goal, close-range manually operated cameras along the sidelines, crane cameras that follow the players and shoot full-pitch pictures from left to right across the field, cameras suspended above the pitch, and so on. The captured real-time video data of the event is transmitted through the wireless communication base station to the video data processing unit of the system control unit for processing.
The event data collection and training unit includes a network data collection unit, a deep-learning neural-network classification computing unit, and a cloud information database. Relevant information (videos, commentary audio, news, pictures, comments, and the like) is extracted from a large volume of past events using network big-data technology and a deep-learning neural-network algorithm; the data and knowledge related to those events are learned and classified, and the classification results are stored in the cloud information database.
The augmented-reality-device data real-time processing unit includes an identification and gaze-tracking unit (corresponding to the identification and gaze-tracking module of the augmented reality device), a human-computer interaction unit (corresponding to the human-computer interaction module), a tracking and positioning unit (corresponding to the tracking and positioning module), a wireless communication unit (corresponding to the wireless communication module, not marked in the figure), and a voice output unit (corresponding to the voice output module, not marked in the figure). The corresponding modules and units can exchange data with one another.
The identification and gaze-tracking module of the augmented reality device is used to identify the user and retrieve the user's commentary preference information (preferred commentary style, favourite teams and players, and so on); the tracking and positioning unit obtains the information of the current event from the location information; through the human-computer interaction module, the user can interact with the device to set its operating mode, including commentary mode selection, event picture playback selection, tactics analysis and prediction selection, and real-time voice questions; the interaction can take the form of voice commands, gesture recognition, buttons, or touch operations; the wireless communication module and the voice output module are used to output the voice commentary signal.
The information fusion processing unit combines the live video data from the video data processing unit with the event classification and training results in the cloud information database to analyse the live video in real time and, together with the user gaze-tracking data and human-computer interaction data obtained by the augmented-reality-device data real-time processing unit, controls the augmented reality device to provide real-time virtual graphic-and-text reports and voice commentary on the content the user is interested in.
Depending on user demand, the real-time commentary mode can be: 1) relaying the real-time commentary audio of an online streamer/commentator; 2) if none of the streamers/commentators broadcasting online is one the user likes, using machine learning to call up the training results, stored in the cloud, for the user's favourite streamer/commentator and simulating that person's real-time commentary; or 3) if the user likes particular aspects of the styles of several streamers/commentators, using machine learning to call up the corresponding styles and segments, stored in the cloud, of the streamers/commentators the user has chosen, and providing real-time commentary in the fused style.
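A toy decision helper for these three modes is sketched below for illustration only; the commentator names and the simple normalised weighting stand in for the machine-learned style models stored in the cloud.

```python
def pick_commentary_style(online_commentators, favourites, style_weights=None):
    """Decide how commentary is produced, following modes 1)-3) above.

    online_commentators: names currently broadcasting.
    favourites: names the user likes.
    style_weights: optional {commentator: weight} when the user likes aspects of several.
    Returns (mode, payload); all names and the weighting scheme are illustrative.
    """
    liked_online = [c for c in favourites if c in online_commentators]
    if liked_online:
        return "relay", liked_online[0]                   # mode 1: relay the live commentary
    if style_weights:
        total = sum(style_weights.values())
        fused = {c: w / total for c, w in style_weights.items()}
        return "simulate_fused_style", fused              # mode 3: fuse several styles
    return "simulate_single_style", (favourites[0] if favourites else None)  # mode 2

print(pick_commentary_style(["Anchor X"], ["Anchor Y", "Anchor Z"],
                            {"Anchor Y": 2.0, "Anchor Z": 1.0}))
```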
Depending on the state of computing hardware and software, if the computing unit cannot be miniaturized or its cost controlled, the system control unit can be deployed as a peripheral (for example, arranged in the auditorium seats) and control the display of the augmented reality device through a Wi-Fi wireless communication module, radio communication, or a wired connection; if the computing capability is sufficient and can be miniaturized, the system control unit can be integrated directly into the augmented reality device.
The present invention is further explained below with reference to specific embodiments, but the present invention is not limited to the following embodiments.
Fig. 3 is a detailed schematic flowchart of a method for implementing real-time commentary on sporting events according to an embodiment of the present invention. The method specifically includes the following process:
Step 301: the system starts.
Step 302: the system is initialized, and the tracking and positioning unit obtains the device location information.
Step 303: information about the current event is obtained, including the players, coaches, and referees taking part.
Step 304: iris recognition is performed to determine the user's identity.
Step 305: judge whether identity information for this person exists; if not, execute step 306; if so, execute step 308.
Step 306: sample the iris information multiple times and save the user identity information.
Step 307: the user sets commentary mode preferences, specifying one or more favourite commentators, favourite teams and players, and so on.
Step 308: the stored commentary mode preference settings of the user are retrieved.
Step 309: judge whether to relay an online live commentator's commentary; if so, execute step 310; if not, execute step 313.
Step 310: output the identity information of the commentators currently broadcasting.
Step 311: the user judges whether a favourite online commentator is broadcasting; if so, execute step 312; if not, execute step 313.
Step 312: relay the live commentary voice signal of that online commentator. The flow ends.
Step 313: start the simulated-commentator mode.
Step 314: judge whether the user starts the video playback function; if so, execute step 316; if not, execute step 315.
Step 315: generate simulated commentary by combining the cloud commentary database, the live video data, and the user preference settings, and output it as voice. The flow ends.
Specifically, the cloud commentary database is retrieved and real-time simulated commentary is generated in accordance with the commentary mode set by the user, in combination with the live video data, and the commentary information is output as voice. For a football match, for example, the content may include who has the ball, who scored and how, fouls, the score, match predictions, predictions of the coaches' tactics, and so on. The cloud commentary database is built by the event data collection and training unit, which extracts relevant information (videos, commentary audio, news, pictures, comments, and the like) from a large volume of past events using network big-data technology and a deep-learning neural-network algorithm, and learns and classifies the data and knowledge related to those events.
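To make the idea concrete, here is a minimal, illustrative template-based rendering of detected events into commentary lines, with a trivial preference bias; the event fields, templates, and the extra remark are all assumptions, not the disclosed generation method.

```python
TEMPLATES = {
    "possession": "{player} of {team} has the ball.",
    "goal": "GOAL! {player} scores for {team}, it's now {score}.",
    "foul": "{player} is penalised for a foul.",
}

def render_commentary(events, preferred_team=None):
    """Turn detected match events into commentary lines (toy template approach).

    events: list of dicts such as {"type": "goal", "player": ..., "team": ..., "score": ...}.
    preferred_team: if given, lines involving this team get an extra remark,
    standing in for preference-conditioned commentary.
    """
    lines = []
    for ev in events:
        line = TEMPLATES[ev["type"]].format(**ev)
        if preferred_team and ev.get("team") == preferred_team:
            line += " Great moment for your team!"
        lines.append(line)
    return lines

events = [
    {"type": "possession", "player": "No. 10", "team": "Team A"},
    {"type": "goal", "player": "No. 9", "team": "Team A", "score": "1-0"},
]
print("\n".join(render_commentary(events, preferred_team="Team A")))
```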
Step 316: judge whether to use the user-defined partial playback function; if so, execute step 320; if not, execute step 317.
Step 317: retrieve the global tactical playback pictures.
Step 318: control the virtual display content to output the playback video combined with tactics analysis (for example, tactics animation simulations and labels inserted into the playback video).
Step 319: analyse the competition tactics and predict the trend of the match in combination with the cloud commentary database. The flow ends.
Step 320: obtain gaze-tracking data and determine the user's field-of-view observation range.
Step 321: the information fusion processing unit retrieves the camera-cluster video within the user's field of view.
Step 322: control the virtual display content to output the playback video.
Step 323: output simulated voice commentary according to the video content. The flow ends.
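For orientation only, the Fig. 3 flow can be condensed into a single decision routine like the one below; the ctx keys are illustrative stand-ins for the checks and data sources named in steps 301-323.

```python
def run_commentary_session(ctx):
    """Condensed, illustrative version of the Fig. 3 flow (steps 301-323); ctx is a dict
    of flags and values standing in for the checks and data sources in the figure."""
    # Steps 302-303: initialise, locate the device, fetch event information.
    event = ctx["event_info"]

    # Steps 304-308: identify the user and load or create preference settings.
    prefs = ctx["stored_prefs"] if ctx["user_known"] else ctx["new_prefs"]

    # Steps 309-312: relay a live commentator if the user wants one and likes one.
    if ctx["wants_live_relay"] and ctx["favourite_commentator_online"]:
        return ("relay_live", event, prefs)

    # Steps 313-315: simulated-commentator mode without playback.
    if not ctx["wants_playback"]:
        return ("simulate_voice_only", event, prefs)

    # Steps 316-319: global tactical playback with tactics analysis.
    if not ctx["wants_custom_partial_playback"]:
        return ("global_tactics_playback", event, prefs)

    # Steps 320-323: gaze-driven partial playback with simulated commentary.
    return ("gaze_partial_playback", event, prefs)

example = {
    "event_info": {"name": "City Derby"},
    "user_known": True, "stored_prefs": {"team": "Team A"}, "new_prefs": None,
    "wants_live_relay": True, "favourite_commentator_online": False,
    "wants_playback": True, "wants_custom_partial_playback": True,
}
print(run_commentary_session(example))  # -> ("gaze_partial_playback", ...)
```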
In the embodiments of the present application, basic information about the event is obtained, a commentary instruction sent by the user is received, commentary information corresponding to the commentary instruction is generated based on the basic information about the event, and the commentary information is output. This method allows spectators at an event venue without live commentary to hear real-time commentary, improving the viewing experience and enjoyment of the audience.
It should be noted that although the operations of the method of the present invention are described in a particular order in the drawings, this does not require or imply that the operations must be executed in that particular order, or that all of the illustrated operations must be executed, to achieve the desired result. On the contrary, the steps depicted in the flowchart may be executed in a different order. Additionally or alternatively, certain steps may be omitted, several steps may be merged into one, and/or one step may be decomposed into several.
Based on the same inventive concept, an embodiment of the present invention further provides an apparatus for implementing real-time commentary on sporting events, applied to the above AR device. As shown in Fig. 4, the apparatus includes:
an event information obtaining unit 41, configured to obtain basic information about the event;
a commentary instruction receiving unit 42, configured to receive the commentary instruction sent by the user;
a commentary information generating unit 43, configured to generate, based on the basic information about the event, commentary information corresponding to the commentary instruction, the commentary information including graphic-and-text report information and/or voice commentary information;
a commentary information output unit 44, configured to output the commentary information.
Optionally, the event information obtaining unit 41 specifically includes:
a location information determining module 411, configured to determine current location information;
an event information obtaining module 412, configured to obtain the basic information about the event based on the current location information.
Optionally, the commentary instruction includes: a first instruction indicating switching to a live online commentator's commentary; or a second instruction indicating not switching to a live online commentator's commentary.
The commentary information generating unit 43 includes:
a live online commentary signal generating module 431, configured to relay a live online commentary signal based on the basic information about the event when the commentary instruction includes the first instruction;
a first simulated commentary information generating module 432, configured to generate simulated commentary information based on the basic information about the event when the commentary instruction includes the second instruction.
Optionally, the live online commentary signal generating module 431 is configured to:
output a list of commentators currently broadcasting online when the commentary instruction includes the first instruction;
after confirming that the list of online commentators contains the user's target commentator, relay the live online commentary signal of the target commentator based on the basic information about the event.
Optionally, the commentary information generating unit 43 further includes:
a second simulated commentary information generating module 433, configured to generate simulated commentary information based on the basic information about the event when the commentary instruction includes the first instruction and it is confirmed that the list of online commentators does not contain the user's target commentator.
Optionally, the commentary instruction further includes: a third instruction to start a video playback function; or a fourth instruction not to start the video playback function.
When generating simulated commentary information based on the basic information about the event, the first simulated commentary information generating module 432 and the second simulated commentary information generating module 433 are specifically configured to:
when the commentary instruction further includes the third instruction, generate first simulated commentary information based on the basic information about the event, the first simulated commentary information being graphic-and-text report information and voice commentary information;
when the commentary instruction further includes the fourth instruction, obtain live video data of the event, and generate second simulated commentary information based on the live video data and a commentary information database pre-stored on the network side, the second simulated commentary information being voice commentary information.
Optionally, the commentary instruction further includes: a fifth instruction to use user-defined partial playback; or a sixth instruction not to use user-defined partial playback.
When generating the first simulated commentary information based on the basic information about the event, the first simulated commentary information generating module 432 and the second simulated commentary information generating module 433 are specifically configured to:
when the commentary instruction further includes the fifth instruction, obtain gaze-tracking data of the user and determine the user's field-of-view observation range based on the gaze-tracking data;
obtain live video data of the event;
select, according to the user's field-of-view observation range, the video data within that range from the live video data and take it as partial playback video data;
generate the first simulated commentary information from the partial playback video data;
when the commentary instruction further includes the sixth instruction, obtain global tactical playback video data of the event;
generate the first simulated commentary information from the global tactical playback video data of the event.
Optionally, the apparatus further includes:
a preference information determining unit 45, configured to determine commentary preference information of the user, the commentary preference information including at least one of commentator preference information, team preference information, and player preference information.
When generating the second simulated commentary information based on the live video data and the commentary information database pre-stored on the network side, the first simulated commentary information generating module 432 and the second simulated commentary information generating module 433 are specifically configured to:
generate the second simulated commentary information based on the live video data, the commentary information database pre-stored on the network side, and the commentary preference information of the user.
Optionally, the preference information determining unit 45 is specifically configured to:
identify the identity of the user;
judge, based on the identity of the user, whether commentary preference information of the user has been stored in advance;
if so, directly retrieve the commentary preference information of the user;
if not, receive a setting instruction carrying the commentary preference information sent by the user.
It should be understood that the units and modules described in the apparatus correspond to the steps of the method described with reference to Figs. 1-3. The operations and features described above for the method are therefore equally applicable to the apparatus and the units it contains, and are not repeated here.
Based on the same inventive concept, an embodiment of the present invention further provides an AR device suitable for implementing the embodiments of the present application. Fig. 5, referred to below, shows a schematic structural diagram of such an AR device.
As shown in Fig. 5, the AR device includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores the various programs and data needed for system operation. The CPU 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504, to which an input/output (I/O) interface 505 is also connected.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section including a cathode-ray tube (CRT) or liquid-crystal display (LCD), a speaker, and the like; the storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 510 as needed, so that the computer program read from it can be installed into the storage section 508 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to Figs. 1-3 may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for executing the methods of Figs. 1-3. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511.
The flowcharts and block diagrams in the drawings illustrate possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and combinations of such blocks, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or by hardware. The described units or modules may also be arranged in a processor, and their names do not, in some cases, constitute a limitation of the units or modules themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus of the above embodiments, or may exist separately without being assembled into the device. The computer-readable storage medium stores one or more programs, which are used by one or more processors to execute the method described in the present application.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions in which the above features are replaced by technical features with similar functions disclosed in (but not limited to) the present application.

Claims (10)

1. A method for implementing real-time commentary on sporting events, characterized in that it is applied to an augmented reality device, the method comprising:
obtaining basic information about the event;
receiving a commentary instruction sent by a user;
generating, based on the basic information about the event, commentary information corresponding to the commentary instruction, the commentary information comprising graphic-and-text report information and/or voice commentary information;
outputting the commentary information.
2. The method according to claim 1, characterized in that obtaining basic information about the event comprises:
determining current location information;
obtaining the basic information about the event based on the current location information.
3. The method according to claim 1, characterized in that the commentary instruction comprises: a first instruction indicating switching to a live online commentator's commentary; or a second instruction indicating not switching to a live online commentator's commentary;
generating, based on the basic information about the event, commentary information corresponding to the commentary instruction comprises:
when the commentary instruction comprises the first instruction, relaying a live online commentary signal based on the basic information about the event;
when the commentary instruction comprises the second instruction, generating simulated commentary information based on the basic information about the event.
4. The method according to claim 3, characterized in that, when the commentary instruction comprises the first instruction, relaying a live online commentary signal based on the basic information about the event comprises:
when the commentary instruction comprises the first instruction, outputting a list of commentators currently broadcasting online;
after confirming that the list of online commentators contains the user's target commentator, relaying the live online commentary signal of the target commentator based on the basic information about the event.
5. The method according to claim 4, characterized in that the method further comprises:
when the commentary instruction comprises the first instruction and it is confirmed that the list of online commentators does not contain the user's target commentator, generating simulated commentary information based on the basic information about the event.
6. The method according to claim 3 or 5, characterized in that the commentary instruction further comprises: a third instruction to start a video playback function; or a fourth instruction not to start the video playback function;
generating simulated commentary information based on the basic information about the event comprises:
when the commentary instruction further comprises the third instruction, generating first simulated commentary information based on the basic information about the event, the first simulated commentary information being graphic-and-text report information and voice commentary information;
when the commentary instruction further comprises the fourth instruction, obtaining live video data of the event, and generating second simulated commentary information based on the live video data and a commentary information database pre-stored on the network side, the second simulated commentary information being voice commentary information.
7. The method according to claim 6, characterized in that the commentary instruction further comprises: a fifth instruction to use user-defined partial playback; or a sixth instruction not to use user-defined partial playback;
generating the first simulated commentary information based on the basic information about the event comprises:
when the commentary instruction further comprises the fifth instruction, obtaining gaze-tracking data of the user, and determining the user's field-of-view observation range based on the gaze-tracking data;
obtaining live video data of the event;
selecting, according to the user's field-of-view observation range, the video data within that range from the live video data, and taking it as partial playback video data;
generating the first simulated commentary information from the partial playback video data;
when the commentary instruction further comprises the sixth instruction, obtaining global tactical playback video data of the event;
generating the first simulated commentary information from the global tactical playback video data of the event.
8. The method according to claim 6, characterized in that the method further comprises:
determining commentary preference information of the user, the commentary preference information comprising at least one of commentator preference information, team preference information, and player preference information;
generating the second simulated commentary information based on the live video data and the commentary information database pre-stored on the network side comprises:
generating the second simulated commentary information based on the live video data, the commentary information database pre-stored on the network side, and the commentary preference information of the user.
9. The method according to claim 8, characterized in that determining the commentary preference information of the user comprises:
identifying the identity of the user;
judging, based on the identity of the user, whether commentary preference information of the user has been stored in advance;
if so, directly retrieving the commentary preference information of the user;
if not, receiving a setting instruction carrying the commentary preference information sent by the user.
10. A computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1-9.
CN201810251213.XA 2018-03-26 2018-03-26 Method and medium for implementing real-time commentary on sporting events Pending CN108337573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810251213.XA CN108337573A (en) 2018-03-26 2018-03-26 Method and medium for implementing real-time commentary on sporting events

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810251213.XA CN108337573A (en) 2018-03-26 2018-03-26 Method and medium for implementing real-time commentary on sporting events

Publications (1)

Publication Number Publication Date
CN108337573A true CN108337573A (en) 2018-07-27

Family

ID=62932367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810251213.XA Pending CN108337573A (en) 2018-03-26 2018-03-26 Method and medium for implementing real-time commentary on sporting events

Country Status (1)

Country Link
CN (1) CN108337573A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1759909A (en) * 2004-09-15 2006-04-19 微软公司 Online gaming spectator system
CN102075696A (en) * 2011-01-07 2011-05-25 卢泳 Signal acquisition, transmission, signal editing and integration, broadcasting and viewing method and system
CN103608716A (en) * 2011-06-17 2014-02-26 微软公司 Volumetric video presentation
CN103946732A (en) * 2011-09-26 2014-07-23 微软公司 Video display modification based on sensor input for a see-through near-to-eye display
CN106056405A (en) * 2016-05-27 2016-10-26 上海青研科技有限公司 Advertisement directional-pushing technology based on virtual reality visual interest area
US20180063461A1 (en) * 2016-08-30 2018-03-01 Samsung Electronics Co., Ltd. Apparatus and method for displaying image
CN106327268A (en) * 2016-08-31 2017-01-11 李明昊 Multi-dimension interest information interconnection method and system
CN106375347A (en) * 2016-11-18 2017-02-01 上海悦野健康科技有限公司 Tourism live broadcast platform based on virtual reality
CN106774862A (en) * 2016-12-03 2017-05-31 西安科锐盛创新科技有限公司 VR display methods and VR equipment based on sight line
CN107423274A (en) * 2017-06-07 2017-12-01 北京百度网讯科技有限公司 Commentary content generating method, device and storage medium based on artificial intelligence
CN107392157A (en) * 2017-07-25 2017-11-24 中国人民解放军火箭军工程大学 A kind of Chinese chess match intelligent virtual live broadcasting method based on machine vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Anonymous user: "How to use headphones made for F1 fans", Baidu Zhidao *
WeiFeng.com: "NBA Trail Blazers use iPads for real-time tactical guidance on court", WeiFeng.com, HTTP://BBS.FENG.COM/READ-HTM-TID-7324138.HTML *
Li Qingmin: "Research and design of an intelligent commentary system for smart tourism", Internet of Things Technologies *
Xu Huadong et al.: "A live commentary system for robot soccer", Computer Engineering *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826361A (en) * 2018-08-09 2020-02-21 北京优酷科技有限公司 Method and device for explaining sports game
CN109769132B (en) * 2019-01-15 2021-02-02 北京中视广信科技有限公司 Multi-channel remote live video commentary method based on frame synchronization
CN109769132A (en) * 2019-01-15 2019-05-17 北京中视广信科技有限公司 A kind of multi-channel long live video explanation method based on frame synchronization
CN112118488A (en) * 2019-06-20 2020-12-22 京东方科技集团股份有限公司 Live broadcast method, electronic equipment and live broadcast system
CN110971964A (en) * 2019-12-12 2020-04-07 腾讯科技(深圳)有限公司 Intelligent comment generation and playing method, device, equipment and storage medium
US11765439B2 (en) 2019-12-12 2023-09-19 Tencent Technology (Shenzhen) Company Limited Intelligent commentary generation and playing methods, apparatuses, and devices, and computer storage medium
CN110971964B (en) * 2019-12-12 2022-11-04 腾讯科技(深圳)有限公司 Intelligent comment generation and playing method, device, equipment and storage medium
WO2021114881A1 (en) * 2019-12-12 2021-06-17 腾讯科技(深圳)有限公司 Intelligent commentary generation method, apparatus and device, intelligent commentary playback method, apparatus and device, and computer storage medium
CN111526471A (en) * 2020-04-03 2020-08-11 深圳康佳电子科技有限公司 Multi-role audio playing method, intelligent terminal and storage medium
CN111539976A (en) * 2020-04-23 2020-08-14 北京字节跳动网络技术有限公司 Comment information generation method and device, electronic equipment and computer readable medium
CN111539978A (en) * 2020-04-23 2020-08-14 北京字节跳动网络技术有限公司 Method, apparatus, electronic device and medium for generating comment information
CN111988635A (en) * 2020-08-17 2020-11-24 深圳市四维合创信息技术有限公司 AI (Artificial intelligence) -based competition 3D animation live broadcast method and system
CN112040272A (en) * 2020-09-08 2020-12-04 海信电子科技(武汉)有限公司 Intelligent explanation method for sports events, server and display equipment
CN114979741A (en) * 2021-02-20 2022-08-30 腾讯科技(北京)有限公司 Method and device for playing video, computer equipment and storage medium
CN113038162A (en) * 2021-03-25 2021-06-25 梁栋 Live broadcast method and system for billiard game
CN114491143A (en) * 2022-02-12 2022-05-13 北京蜂巢世纪科技有限公司 Audio comment searching method, device, equipment and medium for field activity
CN116506689A (en) * 2023-06-28 2023-07-28 央视频融媒体发展有限公司 Method and device for realizing multipath real-time explanation intellectualization suitable for online video
CN116506689B (en) * 2023-06-28 2023-09-26 央视频融媒体发展有限公司 Method and device for realizing multipath real-time explanation intellectualization suitable for online video

Similar Documents

Publication Publication Date Title
CN108337573A (en) A kind of implementation method that race explains in real time and medium
US10762351B2 (en) Methods and systems of spatiotemporal pattern recognition for video content development
US20200342233A1 (en) Data processing systems and methods for generating interactive user interfaces and interactive game systems based on spatiotemporal analysis of video content
US20190351306A1 (en) Smart-court system and method for providing real-time debriefing and training services of sport games
US20190267041A1 (en) System and method for generating probabilistic play analyses from sports videos
US9463388B2 (en) Fantasy sports transition score estimates
US9138652B1 (en) Fantasy sports integration with video content
CA2798298C (en) Systems and methods for video processing
US10412467B2 (en) Personalized live media content
WO2013171658A1 (en) System and method for automatic video filming and broadcasting of sports events
US11875567B2 (en) System and method for generating probabilistic play analyses
CN112312142B (en) Video playing control method and device and computer readable storage medium
Yu et al. Current and emerging topics in sports video processing
CN112533003B (en) Video processing system, device and method
CN110944123A (en) Intelligent guide method for sports events
CN112287848A (en) Live broadcast-based image processing method and device, electronic equipment and storage medium
CN112287771A (en) Method, apparatus, server and medium for detecting video event
US20110255742A1 (en) Information processing device, information processing system, information processing method, and information storage medium
US11606608B1 (en) Gamification of video content presented to a user
CN110753267B (en) Display control method and device and display
WO2020154425A1 (en) System and method for generating probabilistic play analyses from sports videos
US20210084352A1 (en) Automatic generation of augmented reality media
US12142041B2 (en) Enhancing viewing experience by animated tracking of user specific key instruments
US20230013988A1 (en) Enhancing viewing experience by animated tracking of user specific key instruments
US20120194736A1 (en) Methods and Apparatus for Interactive Media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180727