CN109963162B - Cloud directing system and live broadcast processing method and device - Google Patents
- Publication number
- CN109963162B CN201711420104.8A
- Authority
- CN
- China
- Prior art keywords
- video stream
- data
- video
- processing
- live broadcast
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/239—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
- H04N21/2393—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
- H04N21/26208—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8352—Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8543—Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
Abstract
Disclosed are a cloud director system and a live broadcast processing method and device, including: receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier, and a visual effect instruction, and the visual effect instruction comprises visual effect setting information; acquiring a corresponding input video stream according to the domain name identifier, the application identifier, and the video stream identifier in the live broadcast request, and decoding the input video stream to obtain video data; performing special effect processing on the video data according to the visual effect setting information, and forming a video stream to be output based on the processed video data; and pushing the video stream to be output. The method and device can accommodate users' need to add visual effects in the varied scenes of social-network live video.
Description
Technical Field
The invention relates to the field of network technologies, and in particular to a cloud director system and a live broadcast processing method and device.
Background
A director station switches or mixes multiple channels of live video input into a single output video stream as required. In social-network live video, electronic devices such as mobile phones and tablet computers can serve as live sources, the number of live input channels can reach hundreds of streams, scenes vary widely, and users need to add different visual effects to different live streams.
In the related art, a cloud director system is configured with a set of scene templates, and a user adds visual effects to a live program by selecting among them, but the templates the system provides are limited. This causes at least the following problems: on the one hand, users cannot add visual effects beyond the scene templates or customize how effects are presented, so use is restricted, flexibility is poor, and personalized requirements cannot be met; on the other hand, the scene templates must be developed and configured in advance by the provider of the cloud director system, which is costly, slow to update, hard to extend, and poor in effect, and cannot adapt to the varied scenes of social-network live video.
Disclosure of Invention
The present application is directed to solving at least one of the technical problems in the related art.
The application provides a cloud director system and a live broadcast processing method and device, which can at least accommodate users' need to add visual effects in the varied scenes of social-network live video.
The technical scheme is as follows.
A live broadcast processing method of a cloud director system comprises the following steps:
receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier and a visual effect instruction, and the visual effect instruction comprises visual effect setting information;
acquiring a corresponding input video stream according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request, and decoding the input video stream to obtain video data;
performing special effect processing on the video data according to the visual effect setting information, and forming a video stream to be output based on the video data after the special effect processing;
and pushing the video stream to be output.
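The four steps above can be sketched as follows. This is an illustration only: the request fields and the pipeline callables (fetch, decode, effect, encode, push) are assumptions standing in for real media components, not the patent's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class LiveRequest:
    """Illustrative request shape; the field names are assumptions."""
    domain: str                 # domain name identifier
    app: str                    # application identifier
    stream: str                 # video stream identifier
    effect_settings: dict = field(default_factory=dict)  # visual effect setting information

def handle_live_request(req, fetch_stream, decode, apply_effects, encode, push):
    """Sketch of the four claimed steps; each callable stands in for a real media pipeline stage."""
    raw = fetch_stream(req.domain, req.app, req.stream)    # 1. acquire the input video stream
    frames = decode(raw)                                   # 2. decode it into video data
    processed = [apply_effects(f, req.effect_settings)     # 3. special effect processing
                 for f in frames]
    push(encode(processed))                                # 4. form and push the output stream
```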
Wherein, the performing special effect processing on the video data according to the visual effect setting information includes: and acquiring corresponding material data from a corresponding material data set according to the material identification in the visual effect setting information, and performing special effect processing on the material data and the video data.
Before the obtaining of the corresponding material data from the corresponding material data set, the method further includes: and acquiring corresponding materials according to the material identification and the material display parameters provided by the requester or the third party, processing the materials to obtain the material data, and storing the material data into the material data set.
The material data set is a video file, and the material data is YUV format data.
Wherein the processing the material to obtain the material data includes: processing the material into material data based on a hypertext markup language; and converting the material data based on the hypertext markup language into YUV format data.
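The claim leaves the exact "YUV format" unspecified. As one hedged possibility, a BT.601 full-range conversion could map each RGB pixel of the rendered hypertext-markup-language material to YUV; the coefficients below are the standard BT.601 ones, but whether the patent's conversion tool uses this variant is an assumption.

```python
def rgb_to_yuv(r, g, b):
    """BT.601 full-range RGB->YUV for one 8-bit pixel (one plausible reading of 'YUV format data')."""
    y = 0.299 * r + 0.587 * g + 0.114 * b            # luma
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128     # blue-difference chroma, offset to 0..255
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128      # red-difference chroma, offset to 0..255
    return round(y), round(u), round(v)
```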
Before the obtaining of the corresponding material data from the corresponding material data set, the method further includes: rendering according to the content of the material data, and providing a result obtained by the rendering to the requesting party or a third party so as to facilitate the requesting party or the third party to preview the special effect.
Wherein the rendering according to the content of the material data and providing the result obtained by the rendering to the requester or a third party includes: rendering is carried out according to the content of the material data based on the hypertext markup language, and a result obtained by rendering is provided for the requesting party or the third party, so that the requesting party or the third party can preview the special effect.
Wherein, the converting the material data based on the hypertext markup language into YUV format data comprises: and after receiving the confirmation of the requester or the third party, converting the material data based on the hypertext markup language into YUV format data by using a preset conversion tool.
Wherein, after performing special effect processing on the video data according to the visual effect setting information, the method includes: rendering based on the video data after the special effect processing, and providing a result obtained by the rendering to the requesting party so that the requesting party can preview the special effect; and after receiving the confirmation of the requester, forming a video stream to be output based on the video data after the special effect processing.
The live broadcast request carries video stream identifications of multiple paths of video streams; the performing special effect processing on the video data according to the visual effect setting information includes: and performing special effect processing on the video data of at least one path of video stream indicated by the visual effect setting information in the multiple paths of video streams.
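The selective processing of multiple streams described above can be sketched as follows; the data shapes (a dict of stream ID to frame list) and the pass-through behavior for unselected streams are assumptions for illustration.

```python
def apply_selective_effects(frames_by_stream, target_streams, effect):
    """Apply 'effect' only to the streams the visual effect setting information
    indicates; frames of other streams pass through unchanged (a sketch)."""
    targets = set(target_streams)
    return {
        sid: [effect(f) for f in frames] if sid in targets else list(frames)
        for sid, frames in frames_by_stream.items()
    }
```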
Wherein, the obtaining of the corresponding input video stream according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request includes: and acquiring a corresponding input video stream from a CDN node of a near live source in the CDN system according to the domain name identifier, the application identifier and the video stream identifier in the live request.
Wherein the pushing the video stream to be output includes: and outputting the video stream to be output to a CDN node of a near-viewer device in the CDN system so that the CDN node of the near-viewer device provides the video stream to be output to the viewer device.
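The near-node behavior described in the last two paragraphs (serve from the near CDN node if it has the stream, otherwise pull from the origin) can be sketched as a simple cache-with-fallback; real CDN nodes would also handle expiry, routing, and concurrency, which this sketch omits.

```python
class EdgeNode:
    """Sketch of a near CDN node: serve a cached stream, else pull from origin once."""
    def __init__(self, origin_fetch):
        self.cache = {}
        self.origin_fetch = origin_fetch   # callable that pulls from the origin/center cluster

    def get(self, stream_key):
        if stream_key not in self.cache:
            # Near node does not have the stream: pull it from the origin, then cache it.
            self.cache[stream_key] = self.origin_fetch(stream_key)
        return self.cache[stream_key]
```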
A live broadcast processing device of a cloud directing system comprises:
the receiving module is used for receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier and a visual effect instruction, and the visual effect instruction comprises visual effect setting information;
the input module is used for acquiring corresponding input video stream data according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request;
the special effect processing module is used for decoding the input video stream to obtain video data and performing special effect processing on the video data according to the visual effect setting information;
and the output module is used for forming a video stream to be output based on the video data after the special effect processing and pushing the video stream to be output.
The special effect processing module is specifically configured to obtain corresponding material data from a corresponding material data set according to a material identifier in the visual effect setting information, and perform special effect processing on the material data and the video data.
Wherein, still include: and the material processing module is used for acquiring corresponding materials according to the material identification and the material display parameters provided by the requester or the third party, processing the materials to obtain the material data, and storing the material data into the material data set.
A cloud director system comprising a plurality of server nodes distributed in parallel, the server nodes comprising:
a memory storing a live broadcast processing program;
a processor configured to read the live processing program to perform the following operations:
receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier and a visual effect instruction, and the visual effect instruction comprises visual effect setting information;
acquiring a corresponding input video stream according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request, and decoding the input video stream to obtain video data;
performing special effect processing on the video data according to the visual effect setting information, and forming a video stream to be output based on the video data after the special effect processing;
and pushing the video stream to be output.
The application includes the following advantages:
on one hand, the method and the device can execute corresponding special effect processing on the corresponding video stream according to the visual effect instruction provided by the request party, namely the request party requirement, a scene template does not need to be configured in advance, the method and the device are flexible to use, can meet the personalized requirement of the user, are not limited in applicable scene, easy to realize, low in cost, and are suitable for the changeable scene of social network live video, and the visual effect better meets the requirements of the user and the scene.
On the other hand, the cloud director system in the application can meet requirements such as switching at will among hundreds of live streams in social-network live video and using individual functions on demand.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
Fig. 1 is a schematic diagram illustrating an architecture of a webcast system in the related art;
fig. 2 is a schematic diagram illustrating a network live broadcast system architecture supporting a cloud director in the related art;
fig. 3 is a schematic view of a user interface provided by a cloud director in the related art;
fig. 4 is a flowchart illustrating a live broadcast processing method of a cloud director system according to an embodiment;
fig. 5 is an exemplary structural diagram of a cloud director system in accordance with an embodiment;
fig. 6 is a schematic diagram illustrating an exemplary combination of a cloud director system and a CDN system according to the first embodiment;
fig. 7 is a schematic diagram of an exemplary implementation procedure of a live broadcast processing method according to an embodiment;
fig. 8 is a diagram illustrating an exemplary process of processing material according to the first embodiment;
fig. 9 is a schematic structural diagram illustrating a composition of a live broadcast processing apparatus of a cloud director system according to a second embodiment.
Detailed Description
The technical solutions of the present application will be described in more detail below with reference to the accompanying drawings and embodiments.
It should be noted that, provided there is no conflict, the embodiments and the features of the embodiments may be combined with one another, and such combinations fall within the scope of protection of the present application. Additionally, while a logical order is shown in the flow diagrams, in some cases the steps shown or described may be performed in an order different from that shown here.
In a typical configuration, a computing device of a client or server may include one or more processors (CPUs), input/output interfaces, network interfaces, and memory (memory).
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. The memory may include module 1, module 2, ..., and module N (N is an integer greater than 2).
Computer-readable media include permanent and non-permanent, removable and non-removable storage media. A storage medium may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transient media), such as modulated data signals and carrier waves.
In the related art, the network live broadcast system mainly has the following two implementation modes:
fig. 1 is a schematic diagram illustrating an architecture of a webcast system in the related art. As shown in fig. 1, a process of a webcast system performing a live broadcast process may generally include: client push flow → Content Delivery Network (CDN) access → cloud handling → CDN Delivery → player. The cloud processing comprises transcoding and recording functions of the single-path stream. As shown in fig. 1, the live video capture device a0 and the live video capture device a1 represent different live video devices, and are responsible for capturing and compressing audio and video content, and uploading the compressed audio and video content to network access nodes such as a video stream access node B0 and a video stream access node B1 through a network. The video stream near access node B0 and the video stream near access node B1 are responsible for forwarding the video stream to the video stream receiving center cluster C0, the video switching cluster D0 obtains the video stream from the video stream receiving center cluster C0, completes transcoding, recording and other operations according to the instruction, and then returns the result to the video stream receiving center cluster C0. The viewer obtains the video stream from the video stream near distribution node B3, B4 through viewing devices E0, E1, and if the video stream near distribution node B3, B4 does not have the video stream, the video stream near distribution node B3, B4 obtains the video live stream from C0. The video stream is accessed to the node B0 nearby, the video stream is accessed to the node B1 nearby, the video stream is distributed to the node B3 nearby, and the video stream is multiplexed to the video stream transmission node Bn such as the node B4 nearby. The video stream transcoding cluster D0 does not have the function of a cloud director, and cannot complete processing of video streams such as switching and special effects. As shown in fig. 
1, in the live webcast system, a management cluster G is used to control the live webcast system to realize live webcast.
Fig. 2 is a schematic diagram of a webcast system architecture supporting a cloud director in the related art. As shown in fig. 2, the live broadcast process of a webcast system supporting a cloud director station may include: client stream pushing → cloud access and processing → CDN delivery (optional) → player. Live video devices A8 and A9 represent different live video devices; they acquire and compress audio and video content and transmit it to the cloud director station M over a network or cable. Viewers obtain the video stream from the CDN through viewing device E9, and the CDN obtains the live video stream from the cloud director M when it does not have the stream. The cloud director M may be a dedicated device or a computer, and may be deployed in an IDC room or as a virtual machine on the cloud.
In the related art, the cloud director M is standalone software deployed on a single device or virtual machine; that is, video processing software is installed on a machine to perform the corresponding video processing operations. Fig. 3 shows a schematic view of a user interface provided by the cloud director M in the related art.
As can be seen from fig. 3, the interface shows the CPU utilization, disk space, and network bandwidth of the machine on which the cloud director M is deployed. The cloud director M in the related art therefore has the following defects: first, the stability of the system depends entirely on a single machine, and disaster recovery cannot be realized; second, the capacity of the system is limited: generally only 4 live streams can be switched and processed, and at most 16 are supported; third, even using only a single function (e.g., video stream switching or adding visual special effects) requires renting the entire device, which is costly. Therefore, the number of live sources the current cloud director system can take as input is limited, and it cannot meet requirements such as switching at will among hundreds of live streams in social-network live video or using only a single function (e.g., video stream switching or adding visual special effects).
As shown in fig. 3, the cloud director M in the related art pre-configures some scene templates, and the user selects among them to add a visual effect to a live program. As shown in fig. 3, the cloud director system supports only a few scene templates, namely an explanation mode, a conversation mode, and a conference mode; the scene templates provided by the cloud director in the related art are therefore very limited. The cloud director M and its way of processing visual effects have at least the following problems: on the one hand, users cannot add visual effects beyond the scene templates or customize how effects are presented, so use is restricted, flexibility is poor, and personalized requirements cannot be met; on the other hand, the scene templates must be developed and configured in advance by the provider of the cloud director system, which is costly, slow to update, time-consuming, labor-intensive, and poor in effect, and cannot adapt to the varied scenes of social-network live video.
To address the above technical problems in the related art, the application provides a cloud director system and a live broadcast processing method thereof, which can at least solve the technical problem that, in the related art, a cloud director station can add visual effects to a live interface only through pre-configured scene templates. In addition, it can solve the technical problem that the number of input live sources a related-art cloud director can support is limited, so that the cost of use for a single user can be kept low, and requirements such as switching at will among hundreds of live streams in social-network live video and using individual functions (e.g., video stream switching or adding visual special effects) on demand can be met.
Various implementations of the technical solution of the present application are described in detail below.
Embodiment One
As shown in fig. 4, a live broadcast processing method of a cloud director system is provided, which may be implemented by a distributed system and may include: receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier, and a visual effect instruction, and the visual effect instruction comprises visual effect setting information; acquiring a corresponding input video stream according to the domain name identifier, the application identifier, and the video stream identifier in the live broadcast request, and decoding the input video stream to obtain video data; performing special effect processing on the video data according to the visual effect setting information, and forming a video stream to be output based on the processed video data; and pushing the video stream to be output.
In this embodiment, the cloud director system can perform corresponding special effect processing on the corresponding video stream according to the visual effect instruction provided by the requester, that is, according to the requester's requirements, without pre-configuring a scene template. The system is therefore flexible to use, can meet users' personalized requirements, is not limited in applicable scenes, is easy to implement and low in cost, produces visual effects that better match the requirements of the user and the scene, and suits the varied scenes of social-network live video.
In this embodiment, a live source (e.g., a video source, an audio source, etc.) can be uniquely identified by the combination of the domain name identifier, the application identifier, and the video stream identifier. A domain name identifier uniquely represents a domain name and may be, for example, a domain name, a domain ID, or the like. An application identifier uniquely represents an application and may be, for example, an application ID, an application name, or the like. A video stream identifier uniquely identifies a video stream and may be the name or ID of the video stream, or the like. The domain name identifier, the application identifier, and the video stream identifier can be provided by the requester and stored in the cloud director system in advance. In practical applications, a requester can configure them in the cloud director system by registration. One requester may register one or more domain names; multiple applications may be created under each domain name, and multiple live streams (e.g., video streams, audio streams, etc.) may be created under each application. That is, the domain name identifier and the application identifier may be in one-to-one or one-to-many correspondence, the application identifier and the video stream identifier may be in one-to-one or one-to-many correspondence, and the domain name identifier and the video stream identifier may be in one-to-one or one-to-many correspondence.
In one implementation, the cloud director system may generate a stream pushing address that uniquely identifies a live source from the domain name identifier, the application identifier, and the video stream identifier, and obtain the corresponding video stream via that address (for example, a CDN may pull the video stream from the stream pushing address and then provide it to the cloud director system). In addition, the cloud director system can map the stream pushing address to a corresponding playing address according to a preset rule and push the video stream to be output to that playing address; viewer equipment can then access the playing address to obtain and play the video stream for the viewers. In practical applications, the video content carried in the video stream may be collected by a live source provider in real time, or may be pre-recorded by the live source provider and stored in a specified location (for example, a cloud storage space).
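The address generation and mapping above can be sketched as two pure functions. The URL schemes, host prefixes, and `.flv` suffix are purely illustrative assumptions about the "preset rule"; the patent does not specify any concrete address format.

```python
def push_address(domain, app, stream):
    # Build a stream pushing address from the identifier triple
    # (RTMP ingest format assumed for illustration only).
    return f"rtmp://push.{domain}/{app}/{stream}"

def play_address(domain, app, stream):
    # Preset mapping rule (assumed): same path as the push address,
    # but a different scheme, host prefix, and container suffix.
    return f"http://play.{domain}/{app}/{stream}.flv"


assert push_address("example.com", "app1", "sA") == "rtmp://push.example.com/app1/sA"
assert play_address("example.com", "app1", "sA") == "http://play.example.com/app1/sA.flv"
```

Because both addresses are derived deterministically from the same triple, the cloud director system never needs to store a push-to-play lookup table; the mapping rule alone suffices.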
In this embodiment, the visual effect instruction is an instruction matching an API provided by the cloud director system. The visual effect setting information carried in the visual effect instruction directly describes the desired visual effect and is entered by a user on the requesting equipment. For example, when video stream A and video stream B need to be displayed in picture-in-picture form, the visual effect setting information may include: an identifier of video stream A, an identifier of video stream B, a display position of video stream A, and a display position of video stream B (for example, position information such as upper left, upper right, lower left, or lower right, possibly together with a horizontal offset, a vertical offset, and the like). As yet another example,
in one implementation, the visual effect setting information may further include material-related information, which may include at least a material identifier and display position information of the material. Here, the material may be text, a picture, an audio file, a video file, an animation, or the like. For example, when a picture X is to be superimposed on video stream A and displayed in picture-in-picture form, the visual effect setting information may include: the identifier of video stream A, information about picture X (for example, the storage address of picture X, the identifier of picture X, a thumbnail or high-definition version of picture X, or the like), and the display position of picture X (for example, upper left, upper right, lower left, or lower right, possibly together with a horizontal offset, a vertical offset, and the like). In practical application, the material identifier can take several forms. In one implementation, it may be the storage address of the material, so that the cloud director system can acquire the corresponding material data from that address. In another implementation, it may be a material ID, so that the cloud director system can directly obtain the corresponding material data from the corresponding material data set. In yet another implementation, the material identifier in the visual effect setting information may be the material itself (e.g., a picture thumbnail, a video file, a voice file, etc.); in that case, the cloud director system converts the material into corresponding material data, stores it in a material data set, and then performs the subsequent special effect processing.
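A concrete shape for the visual effect setting information described above might look like the structure below. All field names and values are hypothetical; the patent only enumerates the kinds of information carried, not a wire format.

```python
# Illustrative visual effect setting information for "video A full screen,
# video B picture-in-picture at bottom right, picture X overlaid at top left".
visual_effect_setting = {
    "layout": "picture_in_picture",
    "streams": [
        {"id": "streamA", "position": "fullscreen"},
        {"id": "streamB", "position": "bottom_right",
         "horizontal_offset": 16, "vertical_offset": 16},
    ],
    "materials": [
        # material identifier + display position, as described above;
        # the identifier here is assumed to be a material ID
        {"id": "pictureX", "position": "top_left"},
    ],
}

assert visual_effect_setting["streams"][1]["position"] == "bottom_right"
```

Such a structure would be carried inside the visual effect instruction sent through the system's API, and the `materials` entries are what later drive the material data set lookups.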
In this embodiment, a corresponding material data set may be created before the special effect processing is performed. The material data set contains material data that can be mixed directly with the video data during special effect processing, and the material data or the material data set may be built from information (e.g., a material identifier and material display parameters) provided by a requester or a third party. In practical application, the cloud director system can expose a corresponding API; the requester or third party calls the API with this information to build the material data set it needs. In this way, material processing and live broadcast processing run through separate data-processing links, and during live broadcast processing the requester only needs to set the material identifier in the visual effect setting information to instruct the cloud director system to apply the desired special effect.
In one implementation, the material data may be YUV format data, and the corresponding material data set may be a video file.
In one implementation, the performing special effect processing on the video data according to the visual effect setting information may include: and acquiring corresponding material data from a corresponding material data set according to the material identification in the visual effect setting information, and performing special effect processing on the material data and the video data.
In one implementation manner, before acquiring the corresponding material data from the corresponding material data set, the method may further include: and acquiring corresponding materials according to the material identification and the material display parameters provided by the requester or the third party, processing the materials to obtain the material data, and storing the material data into the material data set.
In this embodiment, the manner of processing the material to obtain the material data may be various. In one implementation, processing the material to obtain the material data may include: processing the material into material data based on a hypertext markup language; and converting the material data based on the hypertext markup language into YUV format data.
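The second step above, converting rendered material into YUV-format data, ultimately reduces to a per-pixel color space conversion. The patent does not name a conversion tool or matrix, so the sketch below uses the standard BT.601 full-range formula as one plausible choice.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (0..255) to YUV using BT.601 full-range
    coefficients -- an assumed choice; the patent's "preset conversion
    tool" is not specified."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128
    return y, u, v


# White stays at full luma with neutral chroma; black at zero luma.
y, u, v = rgb_to_yuv(255, 255, 255)
assert round(y) == 255 and round(u) == 128 and round(v) == 128
```

A real pipeline would first rasterize the hypertext-markup-language material (e.g., with a headless browser) into an RGB bitmap and then apply this conversion to every pixel, writing the planes out as the YUV-format material data.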
In this embodiment, a special effect preview function may be provided to the requester before the special effect processing is performed. That is, before acquiring the corresponding material data from the corresponding material data set, the method may further include: rendering according to the content of the material data, and providing the rendered result to the requester or a third party so that the requester or third party can preview the special effect.
In this embodiment, there may be a plurality of rendering manners according to the content of the material data. In one implementation, the rendering according to the content of the material data and providing the result of the rendering to the requester or a third party may include: rendering is carried out according to the content of the material data based on the hypertext markup language, and a result obtained by rendering is provided for the requesting party or the third party, so that the requesting party or the third party can preview the special effect.
Here, two kinds of material data sets can be created: one contains hypertext-markup-language-based material data (e.g., web pages), which can be used to provide special effect previews to a requester; the other contains YUV-format material data (e.g., video), which can be used directly for special effect processing. In this way, the hypertext-markup-language-based material data can be obtained directly from the first material data set to render a previewable image of the visual effect for the requester. Both material data sets can be created by processing material information, such as material identifiers and material display parameters, provided by the requester or the third party.
In one implementation, the converting the material data based on the hypertext markup language into YUV format data may include: and after receiving the confirmation of the requester or the third party, converting the material data based on the hypertext markup language into YUV format data by using a preset conversion tool.
In one implementation, the live broadcast request may carry the video stream identifiers of multiple video streams; performing special effect processing on the video data according to the visual effect setting information may then include: performing special effect processing on the video data of at least one of the multiple video streams, as indicated by the visual effect setting information. This satisfies the need to switch arbitrarily among hundreds of live streams in social-network live video.
In this embodiment, a preview may also be provided to the requester before the video stream is output. In one implementation, after performing special effect processing on the video data according to the visual effect setting information, the method may include: rendering based on the video data after the special effect processing, and providing the rendered result to the requester so that the requester can preview the special effect; and after receiving the confirmation of the requester, forming the video stream to be output based on the video data after the special effect processing.
In this embodiment, the cloud director system may be implemented by a server cluster. In one implementation manner, the server cluster may be implemented as a distributed computing system, where the distributed computing system includes a plurality of nodes and a resource scheduler, the resource scheduler is configured to schedule computing resources of the plurality of nodes to complete the live broadcast processing method of this embodiment, and each node executes a task under the scheduling of the resource scheduler to complete the live broadcast processing method of this embodiment. For example, the resource scheduler may create a corresponding task according to a live broadcast request of a requester, query a node whose currently available computing resource is not less than the computing resource required by the task, allocate the task to the node, and execute the task to complete corresponding live broadcast processing.
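The resource scheduler's behavior described above, finding a node whose currently available computing resources are not less than what the task requires, can be sketched as follows. The node representation and the first-fit policy are illustrative assumptions; the patent does not prescribe a specific scheduling algorithm.

```python
def assign_task(nodes, required):
    """First-fit scheduling sketch: give the task to the first node whose
    available computing resource is not less than `required`."""
    for node in nodes:
        if node["available"] >= required:
            node["available"] -= required  # reserve the resource for this task
            return node["name"]
    return None  # no node can currently host the task


nodes = [{"name": "n0", "available": 2}, {"name": "n1", "available": 8}]
assert assign_task(nodes, 4) == "n1"   # n0 is too small, n1 takes the task
assert nodes[1]["available"] == 4      # n1's capacity is now reduced
```

In the distributed system described, such a task would correspond to one live broadcast request, so each request's special effect processing runs on a node with sufficient headroom.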
Fig. 5 is a schematic diagram of an exemplary structure of the cloud director system according to this embodiment. The cloud director system includes an API, first processing units PU_Ux (PU_U0, ..., PU_Uw), second processing units PU_Px (PU_P0, ..., PU_Pv), and a master unit. The API is called by a requester or a third party to communicate with the cloud director system, and may be of various types; for example, it may include API_1, API_2, and so on. A requester may send a live broadcast request to the cloud director system by calling API_1, and the requester or a third party may provide the material information of a user-specified special effect (e.g., material identifiers, material display parameters, etc.) by calling API_2, so that the cloud director system generates the corresponding material data and creates a material data set for that special effect. The first processing unit PU_Ux is responsible for material processing and special effect processing: it generates corresponding material data from the material information (e.g., material identifiers, material display parameters, etc.) provided by a requester or a third party for a user-specified special effect, creates the material data set for that special effect, acquires the corresponding material data based on a live broadcast request of the requester, and completes the corresponding special effect processing.
The second processing unit PU_Px is responsible for stream processing: it obtains the corresponding video stream according to a live broadcast request of a requester, extracts its raw data, and processes the data after special effect processing into an output video stream. The first processing unit PU_Ux is connected to the second processing unit PU_Px; an output of PU_Px may be an input of PU_Ux, and an output of PU_Ux may in turn be an input of PU_Px. In practical applications, the first processing unit PU_Ux and the second processing unit PU_Px may be implemented by one node of the distributed computing system or by different nodes.
In this embodiment, the cloud director system may be combined with a CDN to meet the requirements of hundreds of live stream inputs and arbitrary switching in social-network live video: the CDN system obtains the video content of a live source and feeds it into the cloud director system for live broadcast processing, and the cloud director system outputs the processed video content back to the CDN system for delivery to the viewers. The basic idea of a CDN is to avoid, as far as possible, the bottlenecks and links on the Internet that may affect data transmission speed and stability, so that content is delivered faster and more stably. A CDN is a layer of intelligent virtual network on top of the existing Internet, formed by placing node servers throughout the network. Based on real-time information such as network traffic, the connectivity and load of each node, the distance to the user, and response time, the CDN system redirects a user's request to the service node closest to that user, so that users obtain the required content nearby. This relieves network congestion and improves the response speed of users accessing a website.
In one implementation, the CDN system includes: a video stream access node near the live source, a video stream receiving center cluster, and a video stream distribution node near the viewer. In this case, the cloud director system and the CDN system may communicate in the manner shown in fig. 6, with both the input and the output of the cloud director system being the video stream receiving center cluster of the CDN system.
In one implementation, obtaining the corresponding input video stream according to the domain name identifier, the application identifier, and the video stream identifier in the live broadcast request may include: acquiring the corresponding input video stream from a CDN node near the live source in the CDN system according to the domain name identifier, the application identifier, and the video stream identifier in the live broadcast request.
In one implementation, pushing the video stream to be output includes: outputting the video stream to be output to a CDN node near the viewer device in the CDN system, so that this CDN node provides the video stream to the viewer device, and the viewer device acquires and plays the video stream for a viewer to watch.
An exemplary implementation of the live broadcast processing method of the present embodiment is described in detail below. It should be noted that, in practical applications, the present embodiment may also have other implementation manners, and the implementation procedure of the following exemplary implementation manner may be adjusted according to the needs of practical applications, and the specific implementation procedure is not limited herein.
Fig. 7 is a schematic diagram illustrating an exemplary implementation process of the live broadcast processing method in this embodiment. As shown in fig. 7, a special effect processing link for the video stream, namely special effect processing A in fig. 7, is established first. When a live broadcast request is received, the corresponding video stream can be acquired directly, its raw data and the material data are fed into the special effect processing link for processing, and the result is finally encoded and output as a video stream.
As shown in fig. 7, the live broadcast processing procedure when special effects are required may include: obtaining video stream A and decapsulating and decoding it to generate its raw data (the video in video stream A becomes YUV data and the audio becomes PCM data); obtaining video stream B and decapsulating and decoding it to generate its raw data (the video in video stream B becomes YUV data and the audio becomes PCM data); having special effect processing A perform the copying, copying the raw data of video stream A; and sending the copied data to an encoder and a multiplexer to be processed into the video stream to be output. Here, special effect processing A acquires the raw data of the corresponding material (for example, material data in YUV format) from special effect processing C according to the visual effect instruction of the requester, and performs processing based on the raw data of the material and the raw data of the video stream, thereby completing the special effect processing.
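The processing chain above can be sketched as a pipeline of stub stages. Every function name and data shape below is an illustrative assumption; the real demuxer, decoder, and encoder are outside the scope of this sketch, so each stage merely tags the data it would transform.

```python
def decode(stream):
    # decapsulate + decode: video becomes YUV data, audio becomes PCM data
    return {"source": stream, "video": "YUV", "audio": "PCM"}

def special_effect_a(frames, material=None):
    """Copy the raw data and optionally mix in material data,
    leaving the decoded originals untouched (as described in fig. 7)."""
    mixed = [dict(f) for f in frames]      # the copying step
    if material is not None:
        for f in mixed:
            f["overlay"] = material        # the mixing step with material raw data
    return mixed

def encode_and_mux(frames):
    # encoder + multiplexer: package the processed raw data into a stream
    return {"container": "flv", "frames": frames}


out = encode_and_mux(special_effect_a([decode("A"), decode("B")], material="pictureX"))
assert out["frames"][0]["overlay"] == "pictureX"
```

The key structural point the sketch preserves is that special effect processing A works on *copies* of the decoded raw data, so the original decoded streams remain available for other outputs.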
In practical application, a material data set, namely special effect processing C in fig. 7, may be created in advance so that the raw data of the corresponding material can be obtained directly when special effect processing is performed. Fig. 8 is an exemplary diagram of the material processing procedure. As shown in fig. 8, two material data sets may be created in advance: material data set D0 and material data set D1. Material data set D0 contains hypertext-markup-language-based material data (e.g., a web page); material data set D1 contains YUV-format material data (i.e., the raw material data shown in fig. 7, such as a video file). The material data in D0 and D1 is produced by processing the material information (e.g., material identifiers, material display parameters, etc.) provided by a requester or a third party, that is, it is the material data corresponding to a user-specified special effect. As shown in fig. 8, the material processing may include: according to the material information (such as material identifiers and material display parameters) from the requester or a third party that represents a user-specified special effect (such as picture-in-picture or a superimposed picture), processing materials such as text, pictures, and video files into hypertext-markup-language-based material data (such as a web page) to obtain the material data set D0 corresponding to that special effect; and converting the hypertext-markup-language-based material data into YUV-format material data with a conversion tool to obtain the material data set D1 corresponding to that special effect. Material data set D0 can be used for special effect preview: the corresponding material data is obtained at the requester's request and rendered through screen rendering R0, and the rendered result is provided to the requester so that the requester can preview the image or video after special effect processing. Material data set D1 can be used for actual data processing: in the live broadcast processing shown in fig. 7, the raw data of the corresponding material (e.g., YUV-format material data) can be obtained directly from material data set D1, and the corresponding special effect processing is realized through off-screen rendering R1.
The following exemplifies an implementation of live broadcast processing including special effects processing.
Example Ex0: if the live broadcast request from the requester is simply to output the content of video stream B, the process may be: the cloud director system acquires video stream B from the specified live source, decapsulates and decodes it to generate its raw data (video becomes YUV data and audio becomes PCM data); special effect processing A copies the raw data of video stream B; and the copied raw data is sent to an encoder, a multiplexer, and so on to be processed into a video stream, which is pushed to the specified viewer.
Example Ex1: if the live broadcast request from the requester is to display the content of video A and the content of video B in picture-in-picture form, the process may be: the cloud director system acquires video stream A and video stream B from the specified live source, decapsulates and decodes each of them to generate the raw data of video stream A and of video stream B (video becomes YUV data and audio becomes PCM data); special effect processing A performs mixing, copying the raw data of video stream A and of video stream B and performing picture-in-picture splicing according to the effect specified by the requester; the splicing result is sent to an encoder, a multiplexer, and so on to be processed into a video stream, which is pushed to the specified viewer.
Example Ex2: if the live broadcast request from the requester is to output the content of video A displayed with a specified picture X superimposed on it, the process may be: the cloud director system acquires video stream A from the specified live source, decapsulates and decodes it to generate its raw data (video becomes YUV data and audio becomes PCM data); special effect processing A performs mixing, copying the raw data of video stream A and the raw data of picture X (i.e., the material data of picture X in material data set D1) and splicing them according to the effect specified by the requester; the splicing result is sent to an encoder, a multiplexer, and so on to be processed into a video stream, which is pushed to the specified viewer.
Example Ex3: if the live broadcast request from the requester is to output the content of video A, video B, and video C displayed in picture-in-picture form, the process may be: the cloud director system acquires video streams A, B, and C from the specified live source, decapsulates and decodes each of them to generate their raw data (video becomes YUV data and audio becomes PCM data); special effect processing A performs mixing, copying the raw data of video streams A, B, and C and performing picture-in-picture splicing according to the effect specified by the requester; the splicing result is sent to an encoder, a multiplexer, and so on to be processed into a video stream, which is pushed to the specified viewer.
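The picture-in-picture splicing used in examples Ex1 through Ex3 amounts, at the raw-data level, to writing one frame's pixels into a region of another frame. The toy below operates on luma (Y) planes represented as nested lists; frame sizes and positions are illustrative only.

```python
def pip_composite(main, sub, top, left):
    """Splice `sub`'s luma plane into `main` at (top, left):
    a toy picture-in-picture on YUV raw data (Y plane only)."""
    out = [row[:] for row in main]         # copy, leaving the original raw data intact
    for i, row in enumerate(sub):
        out[top + i][left:left + len(row)] = row
    return out


main = [[0] * 8 for _ in range(8)]          # 8x8 black background frame
sub = [[255] * 4 for _ in range(4)]         # 4x4 white inset frame
framed = pip_composite(main, sub, 4, 4)     # place inset at the bottom-right quadrant
assert framed[4][4] == 255 and framed[0][0] == 0 and main[4][4] == 0
```

A full implementation would also composite the chroma planes (at their subsampled resolution) and mix the PCM audio, but the copy-then-splice structure matches the processing described in the examples above.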
It should be noted that, in this embodiment, the requester is a party that provides a live broadcast service to viewers, and the requester device may be a user device on the anchor side, a live broadcast platform, or the like. The viewer is a user watching the live broadcast, and the viewer device may be an electronic device, such as a mobile phone or a tablet computer, that can access the playing address to acquire and play the live stream. The third party is a special effect processing service provider, which can provide special effect production services to the live broadcast party on demand.
Example two
A live broadcast processing apparatus of a cloud director system, as shown in fig. 9, may include:
a receiving module 91, configured to receive a live broadcast request from a requestor, where the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier, and a visual effect instruction, and the visual effect instruction includes visual effect setting information;
an input module 92, configured to obtain corresponding input video stream data according to the domain name identifier, the application identifier, and the video stream identifier in the live broadcast request;
a special effect processing module 93, configured to decode the input video stream to obtain video data, and perform special effect processing on the video data according to the visual effect setting information;
and an output module 94, configured to form a video stream to be output based on the video data after the special effect processing, and push the video stream to be output.
In an implementation manner, the special effect processing module 93 may be specifically configured to obtain corresponding material data from a corresponding material data set according to a material identifier in the visual effect setting information, and perform special effect processing on the material data and the video data.
In one implementation manner, the live broadcast processing apparatus of the cloud director system may further include: and the material processing module 95 is configured to obtain corresponding materials according to the material identifiers and the material display parameters provided by the requester or the third party, process the materials to obtain the material data, and store the material data in the material data set.
In this embodiment, the live broadcast processing device of the cloud director system may be implemented by one server in a server cluster. In one implementation, the live processing device may be located on one or more nodes of a distributed computing system. In this embodiment, the receiving module 91, the input module 92, the special effect processing module 93, the output module 94, and the material processing module 95 may be software, hardware, or a combination of both.
Other technical details of the present embodiment may refer to the first embodiment.
EXAMPLE III
A cloud director system comprising a plurality of server nodes distributed in parallel, the server nodes comprising:
a memory storing a live broadcast processing program;
a processor configured to read the live processing program to perform the following operations: receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier and a visual effect instruction, and the visual effect instruction comprises visual effect setting information; acquiring a corresponding input video stream according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request, and decoding the input video stream to obtain video data; performing special effect processing on the video data according to the visual effect setting information, and forming a video stream to be output based on the video data after the special effect processing; and pushing the video stream to be output.
In this embodiment, the cloud director system may be implemented as a distributed computing system.
Other technical details of the present embodiment may refer to the first embodiment.
Example four
A computer-readable storage medium having stored thereon a live broadcast processing program, which when executed by a processor, implements the steps of the live broadcast processing method of embodiment one.
Other implementation details of the present embodiment can refer to embodiment one.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, and the program may be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present application is not limited to any specific form of hardware or software combination.
There are, of course, many other embodiments of the invention that can be devised without departing from the spirit and scope thereof, and it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the invention.
Claims (16)
1. A live broadcast processing method of a cloud director system comprises the following steps:
receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier and a visual effect instruction, and the visual effect instruction comprises visual effect setting information;
acquiring a corresponding input video stream according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request, and decoding the input video stream to obtain video data;
performing special effect processing on the video data according to the visual effect setting information, and forming a video stream to be output based on the video data after the special effect processing;
and pushing the video stream to be output.
2. The method of claim 1,
the performing special effect processing on the video data according to the visual effect setting information includes: and acquiring corresponding material data from a corresponding material data set according to the material identification in the visual effect setting information, and performing special effect processing on the material data and the video data.
3. The method of claim 2, wherein prior to obtaining the corresponding material data from the corresponding material data set, further comprising:
and acquiring corresponding materials according to the material identification and the material display parameters provided by the requester or the third party, processing the materials to obtain the material data, and storing the material data into the material data set.
4. The method according to claim 2, wherein the material data set is a video file, and the material data is YUV format data.
5. The method of claim 3, wherein said processing said material to obtain said material data comprises:
processing the material into material data based on a hypertext markup language;
and converting the material data based on the hypertext markup language into YUV format data.
6. The method of any one of claims 2 to 5, wherein, prior to acquiring the corresponding material data from the corresponding material data set, the method further comprises:
rendering according to the content of the material data, and providing a result obtained by the rendering to the requester or a third party so that the requester or the third party can preview the special effect.
7. The method of claim 6, wherein
the rendering according to the content of the material data and providing the result obtained by the rendering to the requester or a third party includes: rendering according to the content of the material data based on the hypertext markup language, and providing the result obtained by the rendering to the requester or the third party so that the requester or the third party can preview the special effect.
8. The method of claim 5, wherein
the converting the material data based on the hypertext markup language into YUV format data comprises: after receiving confirmation from the requester or the third party, converting the material data based on the hypertext markup language into YUV format data using a preset conversion tool.
9. The method according to any one of claims 1 to 5, wherein,
after the special effect processing is performed on the video data according to the visual effect setting information, the method further comprises: rendering based on the video data after the special effect processing, and providing a result obtained by the rendering to the requester so that the requester can preview the special effect;
and, after receiving confirmation from the requester, forming a video stream to be output based on the video data after the special effect processing.
10. The method according to any one of claims 1 to 5, wherein
the live broadcast request carries video stream identifiers of multiple paths of video streams; and
the performing special effect processing on the video data according to the visual effect setting information includes: performing special effect processing on the video data of at least one path of video stream, among the multiple paths of video streams, indicated by the visual effect setting information.
11. The method according to any one of claims 1 to 5, wherein the acquiring a corresponding input video stream according to the domain name identifier, the application identifier, and the video stream identifier in the live broadcast request comprises:
acquiring the corresponding input video stream from a CDN node near the live source in the CDN system, according to the domain name identifier, the application identifier, and the video stream identifier in the live broadcast request.
12. The method according to any one of claims 1 to 5, wherein
the pushing the video stream to be output includes: outputting the video stream to be output to a CDN node near the viewer device in the CDN system, so that the CDN node near the viewer device provides the video stream to be output to the viewer device.
13. A live broadcast processing device of a cloud directing system comprises:
the receiving module is used for receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier and a visual effect instruction, and the visual effect instruction comprises visual effect setting information;
the input module is used for acquiring corresponding input video stream data according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request;
the special effect processing module is used for decoding the input video stream to obtain video data and carrying out special effect processing on the video data according to the visual effect setting information;
and the output module is used for forming a video stream to be output based on the video data after the special effect processing and pushing the video stream to be output.
14. The live processing apparatus of claim 13,
the special effect processing module is specifically configured to obtain corresponding material data from a corresponding material data set according to the material identifier in the visual effect setting information, and perform special effect processing on the material data and the video data.
15. The live processing apparatus of claim 14, further comprising:
a material processing module, used for acquiring corresponding materials according to the material identifier and the material display parameters provided by the requester or a third party, processing the materials to obtain the material data, and storing the material data into the material data set.
16. A cloud director system comprising a plurality of server nodes distributed in parallel, the server nodes comprising:
a memory storing a live broadcast processing program;
a processor configured to read the live processing program to perform the following operations:
receiving a live broadcast request from a requester, wherein the live broadcast request carries a domain name identifier, an application identifier, a video stream identifier and a visual effect instruction, and the visual effect instruction comprises visual effect setting information;
acquiring a corresponding input video stream according to the domain name identifier, the application identifier and the video stream identifier in the live broadcast request, and decoding the input video stream to obtain video data;
performing special effect processing on the video data according to the visual effect setting information, and forming a video stream to be output based on the video data after the special effect processing;
and pushing the video stream to be output.
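For illustration only, the request-to-push pipeline recited in claim 1 (receive a live broadcast request, locate the input stream by its domain/application/stream identifiers, decode, composite material data as a special effect, form and push the output stream) can be sketched as a minimal in-memory simulation. Everything below is hypothetical and not the patented implementation: the stream table, the material set, the byte-string "frames", and the names `LiveRequest` and `process_live_request` are all assumptions; a real cloud directing system would pull encoded streams from CDN nodes, decode them to YUV frames, composite the material data, re-encode, and push the result to a CDN node near the viewer.

```python
from dataclasses import dataclass

# Hypothetical in-memory stand-ins for the CDN pull endpoint and the
# material data set; keys mirror the identifiers carried in the request.
INPUT_STREAMS = {
    ("example.com", "liveApp", "stream01"): [b"frame-a", b"frame-b"],
}
MATERIALS = {"logo": b"LOGO"}  # material data set, keyed by material identifier

@dataclass
class LiveRequest:
    domain_id: str
    app_id: str
    stream_id: str
    material_id: str  # visual effect setting information, reduced to one field

def process_live_request(req: LiveRequest, output_sink: list) -> None:
    # Step 1: locate the input stream by domain/application/stream identifiers.
    frames = INPUT_STREAMS[(req.domain_id, req.app_id, req.stream_id)]
    # Step 2: "decode" (identity here; a real system would produce YUV frames).
    video_data = list(frames)
    # Step 3: special effect processing - composite material data onto each frame.
    material = MATERIALS[req.material_id]
    processed = [frame + b"|" + material for frame in video_data]
    # Step 4: form the stream to be output and push it (here: append to a sink).
    output_sink.extend(processed)

sink: list = []
process_live_request(LiveRequest("example.com", "liveApp", "stream01", "logo"), sink)
print(sink)  # each output frame now carries the composited material
```

The sketch keeps the four claimed steps as separate, commented stages so the mapping back to the claim language stays visible.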
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110413164.7A CN113099258B (en) | 2017-12-25 | 2017-12-25 | Cloud guide system, live broadcast processing method and device, and computer readable storage medium |
CN201711420104.8A CN109963162B (en) | 2017-12-25 | 2017-12-25 | Cloud directing system and live broadcast processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711420104.8A CN109963162B (en) | 2017-12-25 | 2017-12-25 | Cloud directing system and live broadcast processing method and device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110413164.7A Division CN113099258B (en) | 2017-12-25 | 2017-12-25 | Cloud guide system, live broadcast processing method and device, and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109963162A CN109963162A (en) | 2019-07-02 |
CN109963162B true CN109963162B (en) | 2021-04-30 |
Family
ID=67020931
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110413164.7A Active CN113099258B (en) | 2017-12-25 | 2017-12-25 | Cloud guide system, live broadcast processing method and device, and computer readable storage medium |
CN201711420104.8A Active CN109963162B (en) | 2017-12-25 | 2017-12-25 | Cloud directing system and live broadcast processing method and device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110413164.7A Active CN113099258B (en) | 2017-12-25 | 2017-12-25 | Cloud guide system, live broadcast processing method and device, and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113099258B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112419456B (en) * | 2019-08-23 | 2024-04-16 | 腾讯科技(深圳)有限公司 | Special effect picture generation method and device |
CN110784730B (en) * | 2019-10-31 | 2022-03-08 | 广州方硅信息技术有限公司 | Live video data transmission method, device, equipment and storage medium |
CN112752033B (en) * | 2019-10-31 | 2022-03-22 | 上海哔哩哔哩科技有限公司 | Broadcasting directing method and system |
CN111104376B (en) * | 2019-12-19 | 2023-04-07 | 湖南快乐阳光互动娱乐传媒有限公司 | Resource file query method and device |
CN111182322B (en) * | 2019-12-31 | 2021-04-06 | 北京达佳互联信息技术有限公司 | Director control method and device, electronic equipment and storage medium |
CN111355971B (en) * | 2020-02-20 | 2021-12-24 | 北京金山云网络技术有限公司 | Live streaming transmission method and device, CDN server and computer readable medium |
CN111447460B (en) * | 2020-05-15 | 2022-02-18 | 杭州当虹科技股份有限公司 | Method for applying low-delay network to broadcasting station |
CN112866727B (en) * | 2020-12-23 | 2024-03-01 | 贵阳叁玖互联网医疗有限公司 | Streaming media live broadcast method and system capable of receiving third party push stream |
CN112738540B (en) * | 2020-12-25 | 2023-09-05 | 广州虎牙科技有限公司 | Multi-device live broadcast switching method, device, system, electronic device and readable storage medium |
CN112770122B (en) * | 2020-12-31 | 2022-10-14 | 上海网达软件股份有限公司 | Method and system for synchronizing videos on cloud director |
CN112804564A (en) * | 2021-03-29 | 2021-05-14 | 浙江华创视讯科技有限公司 | Media stream processing method, device and equipment for video conference and readable storage medium |
CN113365093B (en) * | 2021-06-07 | 2022-09-06 | 广州虎牙科技有限公司 | Live broadcast method, device, system, electronic equipment and storage medium |
CN116916051B (en) * | 2023-06-09 | 2024-04-16 | 北京医百科技有限公司 | Method and device for updating layout scene in cloud director client |
CN116866624B (en) * | 2023-06-09 | 2024-03-26 | 北京医百科技有限公司 | Method and system for copying and sharing configuration information of guide table |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103856543A (en) * | 2012-12-07 | 2014-06-11 | 腾讯科技(深圳)有限公司 | Method for processing video, mobile terminal and server |
CN106331436A (en) * | 2016-08-26 | 2017-01-11 | 杭州奥点科技股份有限公司 | Cloud program directing system and online audio and video program production method |
CN106385590A (en) * | 2016-09-12 | 2017-02-08 | 广州华多网络科技有限公司 | Video push remote control method and device |
CN107483460A (en) * | 2017-08-29 | 2017-12-15 | 广州华多网络科技有限公司 | A kind of method and system of multi-platform parallel instructor in broadcasting's plug-flow |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7882258B1 (en) * | 2003-02-05 | 2011-02-01 | Silver Screen Tele-Reality, Inc. | System, method, and computer readable medium for creating a video clip |
US9294624B2 (en) * | 2009-01-28 | 2016-03-22 | Virtual Hold Technology, Llc | System and method for client interaction application integration |
CN206042179U (en) * | 2016-09-30 | 2017-03-22 | 徐文波 | Live integrative equipment of instructor in broadcasting |
CN107197172A (en) * | 2017-06-21 | 2017-09-22 | 北京小米移动软件有限公司 | Net cast methods, devices and systems |
- 2017-12-25 CN CN202110413164.7A patent/CN113099258B/en active Active
- 2017-12-25 CN CN201711420104.8A patent/CN109963162B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103856543A (en) * | 2012-12-07 | 2014-06-11 | 腾讯科技(深圳)有限公司 | Method for processing video, mobile terminal and server |
CN106331436A (en) * | 2016-08-26 | 2017-01-11 | 杭州奥点科技股份有限公司 | Cloud program directing system and online audio and video program production method |
CN106385590A (en) * | 2016-09-12 | 2017-02-08 | 广州华多网络科技有限公司 | Video push remote control method and device |
CN107483460A (en) * | 2017-08-29 | 2017-12-15 | 广州华多网络科技有限公司 | A kind of method and system of multi-platform parallel instructor in broadcasting's plug-flow |
Also Published As
Publication number | Publication date |
---|---|
CN113099258B (en) | 2023-09-29 |
CN109963162A (en) | 2019-07-02 |
CN113099258A (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109963162B (en) | Cloud directing system and live broadcast processing method and device | |
JP6570646B2 (en) | Audio video file live streaming method, system and server | |
CN104539977A (en) | Live broadcast previewing method and device | |
TW201547265A (en) | Media projection method and device, control terminal and cloud server | |
CN112261416A (en) | Cloud-based video processing method and device, storage medium and electronic equipment | |
CN107517411B (en) | Video playing method based on GSstreamer frame | |
JP6280215B2 (en) | Video conference terminal, secondary stream data access method, and computer storage medium | |
CN103491431A (en) | Method, terminal and system for audio and video sharing of digital television | |
KR20160077066A (en) | Transmission device, transmission method, reception device, and reception method | |
WO2015180446A1 (en) | System and method for maintaining connection channel in multi-device interworking service | |
CN106998441A (en) | Method, system and the Homed systems for supporting camera video data multiplex to broadcast | |
KR20160077067A (en) | Transmission device, transmission method, reception device, and reception method | |
JPWO2018043134A1 (en) | Delivery device, delivery method, receiving device, receiving method, program, and content delivery system | |
CN107509093A (en) | Video resource processing method, the method and device across the synchronous broadcasting video resource of screen | |
KR102373195B1 (en) | Receiving device, transmission device, data communication method, and data processing method | |
CN108156490A (en) | A kind of method, system and storage medium using mobile terminal playback live telecast | |
KR20180065432A (en) | system and method for providing cloud based user interfaces | |
US11778011B2 (en) | Live streaming architecture with server-side stream mixing | |
CN116366890B (en) | Method for providing data monitoring service and integrated machine equipment | |
JP7526414B1 (en) | Server, method and computer program | |
TWI653884B (en) | Digital signage system | |
CN105554586B (en) | A kind of new business rendering method | |
JP2018006846A (en) | Synchronous presentation system, synchronous presentation method, and synchronous presentation program | |
KR102052385B1 (en) | Collaborating service providing method for media sharing and system thereof | |
US20140150018A1 (en) | Apparatus for receiving augmented broadcast, method of receiving augmented broadcast content using the same, and system for providing augmented broadcast content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40010301; Country of ref document: HK |
GR01 | Patent grant | ||