CN111563027B - Application operation monitoring method, device and system - Google Patents
- Publication number
- CN111563027B (application number CN202010360865.4A)
- Authority
- CN
- China
- Prior art keywords
- application
- target frame
- frame picture
- terminal
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/302—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Computer Hardware Design (AREA)
- Mathematical Physics (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The invention provides an application operation monitoring method, device and system. The method is applied to a monitoring server and comprises: acquiring operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by a cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal; and analyzing the running condition of the application according to the operation parameters of the specified sub-process, and outputting the analysis result of the running condition of the application. A dedicated monitoring server thus monitors the running of the application efficiently, making it convenient for staff to manage and check the running condition and user experience of the application while reducing the data processing load of the cloud rendering server.
Description
Technical Field
The invention relates to the technical field of cloud computing, in particular to an application operation monitoring method, device and system.
Background
In the running mode of cloud games, cloud VR and similar applications, rendering is performed in the cloud. In this mode the application runs on a cloud rendering server, which renders the scenes generated by the running application, captures the rendered images together with the audio data produced by the application, encodes them and transmits them to the terminal, where they are decoded and presented.
During the running of cloud games, cloud VR and similar applications, administrators need to learn the running condition of the application in time in order to maintain the system on which the application runs. In the prior art, the terminal display device is configured to upload running state data to the cloud rendering server at regular intervals, the cloud rendering server records the state data by generating log files, and administrators then learn the running information of the whole system by inspecting those log files.
This approach of generating log files on the cloud rendering server has two drawbacks: when the number of applications is large it increases the load of the cloud rendering server, and when an administrator wants to understand the running condition of the system the log files must be downloaded and the relevant data looked up entry by entry before a judgement can be made, which is inefficient.
Disclosure of Invention
In view of the above, the present invention provides an application operation monitoring method, apparatus and system, so as to monitor the running state of an application efficiently and learn about the user experience in time.
Specifically, the invention is realized by the following technical scheme:
in a first aspect, an embodiment of the present invention provides an application operation monitoring method, where the method is applied to a monitoring server, and the method includes:
acquiring operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal;
and analyzing the running condition of the application according to the operation parameters of the specified sub-process, and outputting the analysis result of the running condition of the application.
In a second aspect, an embodiment of the present invention provides an operation monitoring system for an application, the system including: the cloud rendering server, the terminal and the monitoring server;
The monitoring server is configured to acquire operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal;
the monitoring server is further configured to analyze the running condition of the application according to the operation parameters of the specified sub-process and to output the analysis result of the running condition of the application.
In a third aspect, an embodiment of the present invention provides an apparatus for monitoring operation of an application, where the apparatus includes:
an acquisition module, configured to acquire operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal;
and an analysis module, configured to analyze the running condition of the application according to the operation parameters of the specified sub-process and to output the analysis result of the running condition of the application.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present invention further provides a computer device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for performing the method steps as described in the first aspect when executing a program stored on a memory.
The embodiment of the invention provides an application operation monitoring method, device and system. A monitoring server acquires operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, the complete process period covering the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, where the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal. The monitoring server analyzes the running condition of the application according to the operation parameters of those sub-processes and outputs the analysis result. Because a dedicated monitoring server uniformly monitors the running state of the application, analyzes that state from the monitored operation parameters and outputs the analysis result, the running of the application is monitored efficiently, staff can conveniently manage and check the running condition and user experience of the application, and the data processing load of the cloud rendering server is reduced.
Drawings
FIG. 1 is a schematic diagram of the complete process cycle of one frame of game picture according to an exemplary embodiment of the invention;
FIG. 2 is a flow chart of a method of monitoring operation of an application according to an exemplary embodiment of the present invention;
FIG. 3 is a flow chart illustrating a method of operation monitoring of a first application according to an exemplary embodiment of the present invention;
FIG. 4 is a schematic illustration of a scenario illustrating a first application's operation monitoring method according to an exemplary embodiment of the present invention;
FIG. 5 is a time axis diagram of a complete process cycle of a frame of game screen according to an exemplary embodiment of the present invention;
FIG. 6 is a schematic diagram of a scenario illustrating a second application operation monitoring method according to an exemplary embodiment of the present invention;
FIG. 7 is a flow chart illustrating a method of operation monitoring of a second application according to an exemplary embodiment of the present invention;
FIG. 8 is a schematic diagram of an operation monitoring device for an application according to an exemplary embodiment of the present invention;
fig. 9 is a schematic diagram of a computer device according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the invention. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Under the running architecture of applications such as cloud games, cloud VR (Virtual Reality), cloud AR (Augmented Reality) and cloud MR (Mixed Reality), the application is deployed on a cloud rendering server. Taking a cloud game as an example, during operation the cloud rendering server receives, over the network, the operation instructions with which the user controls the cloud game on the terminal and passes them to the cloud game application; the cloud game application generates game data in response to the instructions; the cloud rendering server renders a game picture from that game data, captures the picture, encodes it together with the audio data that the cloud game application outputs to the audio system, and transmits the encoded picture and audio to the terminal in real time over the network as an audio-video stream, which the terminal decodes and outputs. Because rendering and encoding are performed on the cloud rendering server, the cloud game mode greatly reduces the game's dependence on the computing and storage capabilities of the terminal.
In the prior art, running state parameters of applications such as cloud games are generally collected by the cloud rendering server and stored as logs, and administrators search the logs for the data they want to see, which is inefficient; moreover, the parameters collected in existing schemes are limited, making it difficult to evaluate the running state of an application systematically and comprehensively. On this basis, the embodiments of the invention provide an application operation monitoring method, device and system.
During the running of the application, the terminal periodically collects its own gesture information and uploads it to the cloud rendering server, and the cloud rendering server renders the application picture according to that gesture information so that the rendered picture matches the user's field of view. In an embodiment of the present invention, the terminal includes head mounted displays (such as VR, AR or MR headsets), mobile terminals, and the like.
Taking a cloud game as an example, fig. 1 is a schematic diagram of the complete process cycle of one frame of cloud game picture according to an exemplary embodiment of the present invention. Referring to fig. 1, the complete process cycle of one frame of game picture includes the following sub-processes: the gesture information detection sub-process of the terminal, the gesture information sending sub-process (through which the terminal sends the detected gesture information to the cloud rendering server), the gesture information receiving sub-process, the game picture rendering sub-process, the game picture encoding sub-process, the game picture sending sub-process, the game picture receiving sub-process, the decoding sub-process and the display sub-process. In the embodiment of the invention, the running condition of the cloud game application is analyzed by monitoring the operation data of specified sub-processes within the complete process cycle of a cloud game picture, so that the experience of the end user is learned in time.
FIG. 2 is a flow chart of a method of monitoring operation of an application according to an exemplary embodiment of the present invention; referring to fig. 2, the method comprises the steps of:
S10, acquiring operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal.
In this embodiment, one or more sub-processes within the complete process period of the target frame picture may be monitored, and which sub-processes are monitored can be selected by the operations staff.
The target frame picture may be every frame picture of the application, in which case each upload of gesture information by the terminal triggers monitoring of the application's running state; or the target frame picture may be only some frames of the application, for example the running state is monitored at set intervals, and monitoring is triggered by the first gesture information upload detected after each monitoring period elapses.
With continued reference to fig. 1, in this embodiment the complete processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame includes the following sub-processes: the gesture information detection sub-process of the terminal, the gesture information sending sub-process, the gesture information receiving sub-process, the game picture rendering sub-process, the game picture encoding sub-process, the game picture sending sub-process, the game picture receiving sub-process, the decoding sub-process and the display sub-process; the operation parameters of each sub-process include the time consumed by that sub-process.
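For illustration only, the sketch below shows one possible way of representing the sub-processes of fig. 1 and the per-frame timing record a monitoring server might keep; the enumeration names and the FrameCycleRecord structure are assumptions made for this example and are not defined in the patent.

```python
# Hypothetical data model for one complete process cycle of a target frame picture.
from dataclasses import dataclass, field
from enum import Enum, auto


class SubProcess(Enum):
    """Sub-processes of the complete process cycle shown in Fig. 1 (names assumed)."""
    GESTURE_DETECT = auto()      # terminal detects its gesture (pose) information
    GESTURE_SEND = auto()        # terminal sends gesture information to the cloud
    GESTURE_RECEIVE = auto()     # cloud receives gesture information / app fetches it
    RENDER = auto()              # cloud renders the game picture
    ENCODE = auto()              # cloud encodes the rendered picture
    PICTURE_SEND = auto()        # cloud sends the encoded picture to the terminal
    PICTURE_RECEIVE = auto()     # terminal receives the encoded picture
    DECODE = auto()              # terminal decodes the picture
    DISPLAY = auto()             # terminal displays the picture


@dataclass
class FrameCycleRecord:
    """Time consumed (milliseconds) by each monitored sub-process for one frame."""
    frame_id: int
    consumed_ms: dict[SubProcess, float] = field(default_factory=dict)

    def total_ms(self) -> float:
        # Sum of the time consumed by all monitored sub-processes.
        return sum(self.consumed_ms.values())
```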
S20, analyzing the running condition of the application according to the operation parameters of the specified sub-process, and outputting the analysis result of the running condition of the application.
In this embodiment, the operation parameters of the specified sub-process may be compared with preset operation parameter thresholds, and the comparison result taken as the analysis result of the application's running condition; according to the user's selection, the operation parameters and/or analysis results the user wants to view are output in a presentation form chosen by the user, such as a chart or a table.
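As a minimal illustration of the threshold comparison described above, and continuing the data model sketched earlier, the snippet below flags sub-processes whose consumed time exceeds a preset threshold; the threshold values and function names are assumptions for this example, not values taken from the patent.

```python
# Hypothetical threshold check for the analysis step S20.
DEFAULT_THRESHOLDS_MS = {          # assumed example thresholds
    SubProcess.RENDER: 11.0,
    SubProcess.ENCODE: 5.0,
    SubProcess.DECODE: 5.0,
}


def analyze_frame(record: FrameCycleRecord,
                  thresholds_ms: dict[SubProcess, float] | None = None) -> dict:
    """Compare each monitored sub-process against its threshold."""
    thresholds_ms = thresholds_ms or DEFAULT_THRESHOLDS_MS
    exceeded = {
        sp: t for sp, t in record.consumed_ms.items()
        if sp in thresholds_ms and t > thresholds_ms[sp]
    }
    return {
        "frame_id": record.frame_id,
        "total_ms": record.total_ms(),
        "exceeded": exceeded,          # sub-processes over their threshold
        "ok": not exceeded,
    }
```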
In one embodiment of the present invention, the monitoring server monitors the operation parameters of the designated sub-process by receiving the operation parameters uploaded by the terminal.
FIG. 3 is a flow chart illustrating a method of operation monitoring of a first application according to an exemplary embodiment of the present invention; referring to fig. 3, in the above step S10, acquiring the operation parameters of the specified sub-process within the complete process period of the target frame picture of the monitored application specifically includes the following step S10':
S10', acquiring the time consumed by each specified sub-process within the complete process period of the target frame picture, as uploaded by the terminal; the time consumed by the specified sub-processes is calculated by the terminal and/or the cloud rendering server.
The terminal calculates the time consumed by each sub-process from the timestamp data of the specified sub-processes, or directly obtains the time consumed by the server-side specified sub-processes as calculated by the cloud rendering server together with the time consumed by the terminal-side specified sub-processes as calculated by the terminal.
In the embodiment of the application, because the time consumed by each specified sub-process within the complete process period of the target frame picture is uploaded to the monitoring server by the terminal alone, the network congestion that would arise if the terminal and the cloud rendering server each uploaded their own data separately is avoided.
In the embodiment of the application, the terminal and the cloud rendering server record the start timestamp and the end timestamp of each specified sub-process while executing it.
For example, the recording may be performed by encapsulating the start timestamp and the end timestamp of each sub-process in the data packet processed by that sub-process; the terminal then uses the timestamp data of all the specified sub-processes, accumulated in the final data packet containing the target frame picture, to calculate the time consumed by each specified sub-process.
The specified sub-processes in this embodiment include, for example: the gesture information sending sub-process, the gesture information acquiring sub-process, the target frame picture rendering sub-process, the target frame picture encoding sub-process, the target frame picture sending sub-process, the target frame picture receiving sub-process and the decoding sub-process. The consumed time of these specified sub-processes includes: the sum of the time consumed by all the specified sub-processes, the receiving time consumed by the terminal for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, and the network downlink delay; as well as the first gesture information acquisition time, i.e. the time from when the cloud rendering server receives the first gesture information until the application fetches it, the rendering time consumed by the cloud rendering server for rendering the target frame picture according to the first gesture information, the encoding time consumed for encoding the target frame picture, the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal, and the network uplink delay.
FIG. 4 is a schematic illustration of a scenario of the first application operation monitoring method according to an exemplary embodiment of the present invention. Referring to fig. 4, when uploading the collected first gesture information to the cloud rendering server 200, the terminal 100 encapsulates the sending timestamp T1 in the data packet of the first gesture information; the cloud rendering server 200 encapsulates the receiving timestamp T2 when it receives the first gesture information and the gesture information fetch timestamp T3 when it detects that the cloud game application has fetched the gesture information; the cloud rendering server 200 renders the game picture using the gesture information, and when rendering is completed it encapsulates the first gesture information and the previously encapsulated timestamp data, together with the rendering completion timestamp T4, in the rendered game picture; it continues to encapsulate the encoding start timestamp T5 and encoding end timestamp T6 while encoding the game picture, and the sending timestamp T7 when sending the encoded game picture to the terminal 100; the terminal 100 in turn encapsulates the reception start timestamp T8 and reception end timestamp T9 when receiving the encoded game picture, and the decoding start timestamp T10 and decoding end timestamp T11 during the decoding process.
The terminal calculates the time consumed by each specified sub-process from this timestamp data. For example, the time taken by the cloud game application to acquire the gesture information is calculated from the receiving timestamp T2 and the gesture information fetch timestamp T3 of the first gesture information; the rendering time consumed by the game picture rendering sub-process is calculated from the gesture information fetch timestamp T3 and the rendering completion timestamp T4; and the encoding time consumed for encoding the game picture is calculated from the encoding start timestamp T5 and the encoding end timestamp T6.
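Continuing the Python sketches above, a minimal illustration of how the consumed times could be derived from the encapsulated timestamps T1–T11 is given below; the timestamp field names are assumptions, and the pairing of T10/T11 with decoding start/end follows the reading of the preceding paragraph.

```python
# Hypothetical derivation of per-sub-process times from the encapsulated timestamps.
def durations_from_timestamps(ts: dict[str, float]) -> dict[SubProcess, float]:
    """ts maps 'T1'..'T11' to clock readings in milliseconds."""
    return {
        SubProcess.GESTURE_RECEIVE: ts["T3"] - ts["T2"],    # app fetches gesture info
        SubProcess.RENDER:          ts["T4"] - ts["T3"],    # rendering the game picture
        SubProcess.ENCODE:          ts["T6"] - ts["T5"],    # encoding the game picture
        SubProcess.PICTURE_SEND:    ts["T7"] - ts["T6"],    # waiting-to-send time
        SubProcess.PICTURE_RECEIVE: ts["T9"] - ts["T8"],    # terminal receives picture
        SubProcess.DECODE:          ts["T11"] - ts["T10"],  # terminal decodes picture
    }
```

Note that each difference is taken between timestamps recorded on the same device (server clock for T2–T7, terminal clock for T8–T11), so clock synchronization between terminal and server is not required for these terms.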
In an optional embodiment of the present invention, the network uplink delay may be calculated from the sending timestamp T1 encapsulated in the data packet of the first gesture information and the receiving timestamp T2 encapsulated by the cloud rendering server when it receives the first gesture information.
In another embodiment of the present invention, the terminal sends a delay detection signalling message to the cloud rendering server, and the cloud rendering server returns feedback information upon receiving it; the terminal measures the time elapsed from sending the delay detection signalling to receiving the feedback information, divides it by 2, takes the result as the network uplink delay and uploads it to the monitoring server.
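A minimal sketch of this round-trip measurement follows, under the assumption of a simple request/response exchange over UDP; the transport, message format and function name are not specified in the patent and are chosen here for illustration only.

```python
# Hypothetical RTT/2 estimate of the network uplink delay (terminal side).
import socket
import time


def estimate_uplink_delay_ms(server_addr: tuple[str, int],
                             timeout_s: float = 1.0) -> float:
    """Send a delay-detection datagram and treat half the round-trip time as uplink delay."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout_s)
        start = time.monotonic()
        sock.sendto(b"DELAY_PROBE", server_addr)   # delay detection signalling (assumed format)
        sock.recvfrom(64)                          # feedback from the cloud rendering server
        rtt_ms = (time.monotonic() - start) * 1000.0
    return rtt_ms / 2.0
```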
Optionally, after the sum of the consumed times is obtained, the terminal or the monitoring server subtracts from that sum the receiving time consumed for receiving the target frame picture, the decoding time, the first gesture information acquisition time, the rendering time, the encoding time, the waiting-to-send time and the network uplink delay, thereby calculating the network downlink delay of the target frame picture within the complete processing period. The subject performing this calculation may be the terminal, in which case all time data are uploaded to the monitoring server for analysis and presentation once the terminal has completed the calculation.
Referring to fig. 5, in an embodiment of the present application the operation monitoring data of the application further includes a terminal delay sum and a server delay sum; after obtaining the time consumed by each sub-process, the terminal further calculates the terminal delay sum (receiving time + decoding time) and the server delay sum (first gesture information acquisition time + rendering time + encoding time + waiting-to-send time) of the target frame picture during processing, and uploads the terminal delay sum and the server delay sum to the monitoring server.
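The derived quantities described in the two preceding paragraphs can be summarised as below, continuing the sketches above; this is only an illustration under the assumption that all times are in milliseconds and that the uplink delay has already been estimated (for example by the probe shown earlier).

```python
# Hypothetical computation of the derived delay metrics for one frame.
def derived_delays(d: dict[SubProcess, float],
                   total_ms: float,
                   uplink_delay_ms: float) -> dict[str, float]:
    terminal_delay = d[SubProcess.PICTURE_RECEIVE] + d[SubProcess.DECODE]
    server_delay = (d[SubProcess.GESTURE_RECEIVE] + d[SubProcess.RENDER]
                    + d[SubProcess.ENCODE] + d[SubProcess.PICTURE_SEND])
    # Downlink delay = total time minus every other accounted component.
    downlink_delay = total_ms - terminal_delay - server_delay - uplink_delay_ms
    return {
        "terminal_delay_ms": terminal_delay,
        "server_delay_ms": server_delay,
        "network_uplink_ms": uplink_delay_ms,
        "network_downlink_ms": downlink_delay,
    }
```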
The terminal may calculate the time consumed by each sub-process and upload it after the encoding of the target frame picture ends or after the display output of the target frame picture is completed; the present application does not limit this.
It should be noted that the specific timestamps encapsulated and the specific moments at which they are encapsulated in the above embodiment are given only by way of example and should not be construed as limiting the present invention.
In another possible embodiment of the present invention, referring to fig. 6, the monitoring server 300 obtains the operation parameters of the specified sub-processes within the complete process period of the target frame picture from the terminal 100 and from the cloud rendering server 200 respectively.
FIG. 7 is a flow chart illustrating a method of operation monitoring of a second application according to an exemplary embodiment of the present invention; referring to fig. 7, in this embodiment the above step S10 of acquiring the operation parameters of the specified sub-process within the complete process period of the target frame picture of the monitored application specifically includes the following steps S101 to S102:
S101, respectively acquiring first operation data of the terminal-side specified sub-processes within the complete process period of the target frame picture, uploaded by the terminal, and second operation data of the server-side specified sub-processes of the target frame picture, uploaded by the cloud rendering server running the application.
In this embodiment, the terminal-side specified sub-processes include: the terminal gesture information sending sub-process, the receiving sub-process and the decoding sub-process; the server-side specified sub-processes include: the terminal gesture information receiving sub-process, the game picture rendering sub-process, the game picture encoding sub-process and the sending sub-process.
S102, calculating network delay data in the complete processing period based on the first operation data and the second operation data.
In this embodiment, in the above step S20, analyzing the running condition of the application and outputting the analysis result of the running condition of the application includes:
S20', analyzing the running condition of the application according to the first operation data, the second operation data and the network delay data, and outputting the analysis result of the running condition of the application.
In this embodiment, the above-mentioned designated sub-process includes: the method comprises a gesture information sending sub-process, a gesture information obtaining sub-process, a target frame picture rendering sub-process, a target frame picture coding sub-process, a target frame picture sending sub-process, a target frame picture receiving sub-process and a decoding sub-process; further, in this embodiment, the first operation data includes: the sum of the time consumed by all the specified sub-processes, the reception time consumed by receiving the target frame picture, and the decoding time consumed by decoding the target frame picture.
The second operation data includes:
the first gesture information acquisition time obtained by the cloud rendering server, i.e. the time consumed from receiving the first gesture information until the application fetches it; the rendering time consumed for rendering the target frame picture according to the first gesture information; the encoding time consumed for encoding the target frame picture; the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal; and the network uplink delay.
Optionally, in this embodiment, the step S102 of calculating, based on the first operation data and the second operation data, the network delay data of the application within the complete processing period specifically includes the following step A10:
Step A10, after the sum of the time consumed by all the specified sub-processes is obtained, subtracting from that sum the receiving time consumed for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, and the first gesture information acquisition time, rendering time, encoding time, waiting-to-send time and network uplink delay of the cloud rendering server, thereby calculating the network downlink delay of the target frame picture within the complete processing period.
In the embodiment of the present invention, the terminal and the cloud rendering server may each calculate the time consumed by their corresponding specified sub-processes from the timestamp data of those sub-processes and upload it to the monitoring server respectively; they may upload the time consumed by each specified sub-process as soon as that sub-process ends, or upload the operation parameters of all their specified sub-processes after the last sub-process ends. The present invention does not limit the specific timing of the calculation and uploading.
In another possible embodiment of the present invention, the specified sub-processes further include the display sub-process, and the operation parameters of the display sub-process include display parameters that characterize the display quality of the target frame picture; an exemplary display parameter of this kind is the black edge rate.
In this embodiment, during the display of a target frame picture the terminal obtains the first gesture information that was used by the cloud rendering server when rendering the target frame picture and the current second gesture information of the terminal; the terminal then calculates the black edge rate of the target frame picture based on the rendering FOV (field of view) of the target frame picture, the terminal FOV, the first gesture information and the second gesture information.
In this embodiment, after calculating the black edge rate of the target frame picture, the terminal uploads it to the monitoring server; the monitoring server analyzes the black edge rate, and administrators can then judge the user experience from the black edge rate analysis result.
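The patent does not give a formula for the black edge rate, so the sketch below is only a simplified, yaw-only illustration of the idea: if the head has rotated between the rendering pose and the display pose by more than the extra margin of the rendered FOV over the terminal FOV, part of the displayed view falls outside the rendered picture and appears black. A real implementation would work over two or three rotational axes.

```python
# Simplified, assumption-laden estimate of the black edge rate (yaw only).
def black_edge_rate_yaw(render_fov_deg: float,
                        terminal_fov_deg: float,
                        render_yaw_deg: float,
                        display_yaw_deg: float) -> float:
    """Fraction of the displayed horizontal FOV not covered by the rendered picture."""
    margin = (render_fov_deg - terminal_fov_deg) / 2.0   # extra rendered margin per side
    overshoot = abs(display_yaw_deg - render_yaw_deg) - max(margin, 0.0)
    uncovered = min(max(overshoot, 0.0), terminal_fov_deg)
    return uncovered / terminal_fov_deg
```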
The operation parameters further include the code rate and frame rate on the cloud rendering server side; the cloud rendering server either uploads the code rate and frame rate directly to the monitoring server, or delivers them to the terminal together with the timestamp data so that the terminal aggregates them and uploads them to the monitoring server.
In another possible embodiment of the present invention, the method further includes: acquiring working state parameters and/or application information of the terminal, where the working state parameters include information such as battery level and CPU utilization, and the application information includes information such as the configured resolution, configured code rate and name of the application, so that the running of the application is monitored and counted comprehensively during its operation.
In one possible embodiment of the present invention, the monitoring server notifies staff of the analysis result of the application's running state at a set time and in a set manner: the monitoring server counts, within a set time period, the number of times each operation parameter shows abnormal data and the number of terminal working-state anomalies, and when the time period ends it notifies staff of the monitored application's abnormal data, the number of occurrences and other data by mail, short message or another configured means, thereby effectively monitoring the running of the monitored application and the user experience.
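A minimal sketch of this periodic counting and notification, continuing the Python sketches above; the anomaly criterion, reporting period and notification channel are assumptions made for illustration only.

```python
# Hypothetical periodic anomaly summary on the monitoring server.
from collections import Counter
from typing import Callable


class AnomalyReporter:
    def __init__(self) -> None:
        self.counts: Counter[str] = Counter()

    def record(self, analysis: dict) -> None:
        """analysis is the output of analyze_frame() above."""
        for sub_process in analysis["exceeded"]:
            self.counts[sub_process.name] += 1

    def flush(self, notify: Callable[[str], None]) -> None:
        """Called at the end of each set time period; notify could send mail or SMS."""
        if self.counts:
            notify(f"Abnormal sub-process counts in this period: {dict(self.counts)}")
        self.counts.clear()
```

Calling `reporter.flush(print)` would simply print the summary; a real deployment might plug a mail or SMS gateway into the same hook.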
In another embodiment of the present invention, an operation monitoring system for an application is provided, the system including a cloud rendering server, a terminal and a monitoring server; for the way application operation monitoring is performed in this system, reference may be made to the description in the embodiments of the application operation monitoring method above.
In this embodiment, the monitoring server is configured to acquire operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal.
The monitoring server is further configured to analyze the running condition of the application according to the operation parameters of the specified sub-process and to output the analysis result of the running condition of the application.
Optionally, the monitoring server is configured to acquire the operation parameters of the specified sub-process within the complete process period of the target frame picture of the monitored application by acquiring the time consumed by each specified sub-process within that period, as uploaded by the terminal;
the terminal is configured to upload the time consumed by each specified sub-process within the complete process period of the target frame picture; the time consumed by the specified sub-processes is calculated by the terminal and/or the cloud rendering server.
Optionally, the consumed time of the specified sub-processes includes: the sum of the time consumed by all the specified sub-processes, the receiving time consumed by the terminal for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, and the network downlink delay; as well as the first gesture information acquisition time, i.e. the time from when the cloud rendering server receives the first gesture information until the application fetches it, the rendering time consumed by the cloud rendering server for rendering the target frame picture according to the first gesture information, the encoding time consumed for encoding the target frame picture, the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal, and the network uplink delay.
In another embodiment of the present application, the monitoring server is specifically configured to: respectively acquire first operation data of the terminal-side specified sub-processes within the complete process period of the target frame picture, uploaded by the terminal, and second operation data of the server-side specified sub-processes of the target frame picture, uploaded by the cloud rendering server running the application; calculate network delay data within the complete processing period based on the first operation data and the second operation data;
and analyze the running condition of the application according to the first operation data, the second operation data and the network delay data, and output the analysis result of the running condition of the application.
In this embodiment, the first operation data includes: the sum of the time consumed by all the specified sub-processes, the reception time consumed by receiving the target frame picture, and the decoding time consumed by decoding the target frame picture.
The second operation data includes:
the first gesture information acquisition time obtained by the cloud rendering server, i.e. the time consumed from receiving the first gesture information until the application fetches it; the rendering time consumed for rendering the target frame picture according to the first gesture information; the encoding time consumed for encoding the target frame picture; the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal; and the network uplink delay.
Optionally, the monitoring server is specifically configured to:
after the sum of the time consumed by all the specified sub-processes is obtained, subtract from that sum the receiving time consumed for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, and the first gesture information acquisition time, rendering time, encoding time, waiting-to-send time and network uplink delay of the cloud rendering server, thereby calculating the network downlink delay of the target frame picture within the complete processing period.
Optionally, the above operation parameters further include: display parameters characterizing the display quality of the target frame picture, and/or the code rate and frame rate corresponding to the target frame picture on the cloud rendering server side.
FIG. 8 is a schematic diagram of an operation monitoring device for an application according to an exemplary embodiment of the present invention; referring to fig. 8, there is further provided an apparatus 800 for monitoring operation of an application according to an embodiment of the present invention, including:
an obtaining module 801, configured to obtain operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal;
and an analysis module 802, configured to analyze the running condition of the application according to the operation parameters of the specified sub-process, and to output the analysis result of the running condition of the application.
Optionally, the acquiring module 801 is specifically configured to:
acquire the time consumed by each specified sub-process within the complete process period of the target frame picture, as uploaded by the terminal;
wherein the time consumed by the specified sub-processes is calculated by the terminal and/or the cloud rendering server.
Optionally, the consumed time of the specified sub-processes includes: the sum of the time consumed by all the specified sub-processes, the receiving time consumed by the terminal for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, and the network downlink delay;
as well as the first gesture information acquisition time, i.e. the time from when the cloud rendering server receives the first gesture information until the application fetches it, the rendering time consumed by the cloud rendering server for rendering the target frame picture according to the first gesture information, the encoding time consumed for encoding the target frame picture, the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal, and the network uplink delay.
Optionally, the acquiring module 801 is specifically configured to:
respectively acquire first operation data of the terminal-side specified sub-processes within the complete process period of the target frame picture, uploaded by the terminal, and second operation data of the server-side specified sub-processes of the target frame picture, uploaded by the cloud rendering server running the application;
Calculating network delay data in the complete processing period based on the first operation data and the second operation data;
the analysis module 802 is specifically configured to:
and analyzing the running condition of the application according to the first running data, the second running data and the network time delay data, and outputting an analysis result of the running condition of the application.
Optionally, the first operation data includes: the sum of the time consumed by all the specified sub-processes, the reception time consumed by receiving the target frame picture, and the decoding time consumed by decoding the target frame picture.
Optionally, the second operation data includes:
the first gesture information acquisition time obtained by the cloud rendering server, i.e. the time consumed from receiving the first gesture information until the application fetches it; the rendering time consumed for rendering the target frame picture according to the first gesture information; the encoding time consumed for encoding the target frame picture; the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal; and the network uplink delay.
Optionally, the acquiring module 801 is specifically configured to calculate, based on the first operation data and the second operation data, network delay data of the application in the complete processing cycle by:
when the sum of the time consumed by all the specified sub-processes uploaded by the terminal has been received, subtract from that sum the receiving time consumed for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, and the first gesture information acquisition time, rendering time, encoding time, waiting-to-send time and network uplink delay uploaded by the cloud rendering server, thereby calculating the network downlink delay of the target frame picture within the complete processing period.
Optionally, the operating parameters further include: display parameters representing the display quality of the target frame picture and/or code rate and frame rate of the target frame picture.
In another embodiment, the present invention further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the steps of the application operation monitoring method described in any one of the above embodiments.
FIG. 9 is a schematic diagram of a computer device according to an exemplary embodiment of the invention; as shown in fig. 9, a computer device provided in an embodiment of the present invention includes a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 communicate with each other through the communication bus 504;
a memory 503 for storing a computer program;
the processor 501 is configured to implement, when executing the program stored in the memory 503, the steps of the application operation monitoring method described in any of the above embodiments.
The communication bus mentioned above for the computer device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the computer device and other devices.
The memory may include random access memory (Random Access Memory, RAM) or non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The implementation process of the functions and roles of each unit in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant points. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art can understand and implement this without creative effort.
Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices including, for example, semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., internal magnetic disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features of specific embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Furthermore, the processes depicted in the accompanying drawings are not necessarily required to be in the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement or the like made within the spirit and principles of the invention shall fall within the scope of protection of the present invention.
Claims (12)
1. An application operation monitoring method, wherein the method is applied to a monitoring server, and the method comprises:
acquiring operation parameters of a specified sub-process within a complete process period of a target frame picture of the monitored application, where the complete process period refers to the processing from the moment the terminal collects its first gesture information to the moment the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by a cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal;
and analyzing the running condition of the application according to the operation parameters of the specified sub-process, and outputting the analysis result of the running condition of the application.
2. The method according to claim 1, wherein the acquiring of the operation parameters of the specified sub-process within the complete process period of the target frame picture of the monitored application comprises:
acquiring the time consumed by each specified sub-process within the complete process period of the target frame picture, as uploaded by the terminal;
wherein the time consumed by the specified sub-processes is calculated by the terminal and/or the cloud rendering server.
3. The method of claim 2, wherein the consumed time of the specified sub-processes comprises: the sum of the time consumed by all the specified sub-processes, the receiving time consumed by the terminal for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, and the network downlink delay;
as well as the first gesture information acquisition time, i.e. the time from when the cloud rendering server receives the first gesture information until the application fetches it, the rendering time consumed by the cloud rendering server for rendering the target frame picture according to the first gesture information, the encoding time consumed for encoding the target frame picture, the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal, and the network uplink delay.
4. The method according to claim 1, wherein the acquiring of the operation parameters of the specified sub-process within the complete process period of the target frame picture of the monitored application comprises:
respectively acquiring first operation data of the terminal-side specified sub-processes within the complete process period of the target frame picture, uploaded by the terminal, and second operation data of the server-side specified sub-processes of the target frame picture, uploaded by the cloud rendering server running the application;
calculating network delay data within the complete process period based on the first operation data and the second operation data;
and wherein the analyzing of the running condition of the application according to the operation parameters of the specified sub-process and the outputting of the analysis result comprises:
analyzing the running condition of the application according to the first operation data, the second operation data and the network delay data, and outputting the analysis result of the running condition of the application.
5. The method of claim 4, wherein the first operation data comprises: the sum of the time consumed by all the specified sub-processes, the receiving time consumed for receiving the target frame picture, and the decoding time consumed for decoding the target frame picture.
6. The method of claim 5, wherein the second operation data comprises:
the first gesture information acquisition time obtained by the cloud rendering server, i.e. the time consumed from receiving the first gesture information until the application fetches it; the rendering time consumed for rendering the target frame picture according to the first gesture information; the encoding time consumed for encoding the target frame picture; the waiting-to-send time from the end of encoding until the target frame picture is sent to the terminal; and the network uplink delay.
7. The method according to claim 6, wherein the calculating the network delay data within the complete process period based on the first operation data and the second operation data comprises:
after the sum of the time consumed by all the specified sub-processes is obtained, subtracting from that sum the receiving time consumed for receiving the target frame picture, the decoding time consumed for decoding the target frame picture, the first gesture information acquisition time, the rendering time, the encoding time, the waiting-to-send time and the network uplink delay of the cloud rendering server, so as to obtain the network downlink delay of the target frame picture within the complete process period.
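For illustration only (not part of the claims): the subtraction described in claim 7 attributes whatever time is left unexplained to the network downlink. A minimal sketch, assuming all times are reported in milliseconds for a single target frame picture; the function and parameter names are hypothetical.

```python
def downlink_delay_ms(total_ms: float, receiving_ms: float, decoding_ms: float,
                      gesture_acquisition_ms: float, rendering_ms: float,
                      encoding_ms: float, waiting_to_send_ms: float,
                      uplink_delay_ms: float) -> float:
    """Downlink delay = sum of all specified sub-process times minus every other known time."""
    return total_ms - (receiving_ms + decoding_ms + gesture_acquisition_ms
                       + rendering_ms + encoding_ms + waiting_to_send_ms + uplink_delay_ms)

# Example: a 70 ms complete process period leaves 8 ms attributed to the network downlink.
print(downlink_delay_ms(70, 6, 5, 2, 20, 9, 4, 16))  # -> 8.0
```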
8. The method according to any one of claims 1-7, wherein the operation parameters further comprise: a display parameter representing the display quality of the target frame picture, and/or a bit rate and a frame rate corresponding to the target frame picture at the cloud rendering server side.
9. An operation monitoring system for an application, the system comprising: a cloud rendering server, a terminal and a monitoring server;
wherein the monitoring server is used for acquiring operation parameters of specified sub-processes within a complete process period of a target frame picture of the monitored application, the complete process period refers to the processing procedure that starts when the terminal acquires first gesture information of the terminal and ends when the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by the cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal;
and the monitoring server is further used for analyzing the running condition of the application according to the operation parameters of the specified sub-processes and outputting an analysis result of the running condition of the application.
10. An operation monitoring device for an application, the device comprising:
the acquisition module is used for acquiring operation parameters of specified sub-processes within a complete process period of a target frame picture of the monitored application, wherein the complete process period refers to the processing procedure that starts when a terminal acquires first gesture information of the terminal and ends when the terminal finishes displaying the received target frame picture, and the target frame picture is an application picture rendered by a cloud rendering server according to the first gesture information, encoded by the cloud rendering server and then delivered to the terminal;
and the analysis module is used for analyzing the running condition of the application according to the operation parameters of the specified sub-processes and outputting an analysis result of the running condition of the application.
11. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-8.
12. A computer device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is used for storing a computer program;
and the processor is used for implementing the method steps of any one of claims 1-8 when executing the program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010360865.4A CN111563027B (en) | 2020-04-30 | 2020-04-30 | Application operation monitoring method, device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111563027A CN111563027A (en) | 2020-08-21 |
CN111563027B true CN111563027B (en) | 2023-09-01 |
Family
ID=72070748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010360865.4A Active CN111563027B (en) | 2020-04-30 | 2020-04-30 | Application operation monitoring method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111563027B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115209178A (en) * | 2021-04-14 | 2022-10-18 | 华为技术有限公司 | Information processing method, device and system |
CN113452944B (en) * | 2021-08-31 | 2021-11-02 | 江苏北弓智能科技有限公司 | Picture display method of cloud mobile phone |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107992392A (en) * | 2017-11-21 | 2018-05-04 | 国家超级计算深圳中心(深圳云计算中心) | Automatic monitoring and repair system and method for a cloud rendering system
CN111061560A (en) * | 2019-11-18 | 2020-04-24 | 北京视博云科技有限公司 | Cloud rendering resource scheduling method and device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5155462B2 (en) * | 2011-08-17 | 2013-03-06 | 株式会社スクウェア・エニックス・ホールディングス | VIDEO DISTRIBUTION SERVER, VIDEO REPRODUCTION DEVICE, CONTROL METHOD, PROGRAM, AND RECORDING MEDIUM |
Also Published As
Publication number | Publication date |
---|---|
CN111563027A (en) | 2020-08-21 |
Similar Documents
Publication | Title
---|---
US11551408B2 (en) | Three-dimensional model distribution method, three-dimensional model receiving method, three-dimensional model distribution device, and three-dimensional model receiving device
US11240483B2 (en) | Three-dimensional model distribution method and three-dimensional model distribution device
CN111563027B (en) | Application operation monitoring method, device and system
CN112291520B (en) | Abnormal event identification method and device, storage medium and electronic device
EP3703375A1 (en) | Three-dimensional model encoding device, three-dimensional model decoding device, three-dimensional model encoding method, and three-dimensional model decoding method
CN105376335B (en) | Collected data uploading method and device
CN103731631B (en) | Method, apparatus and system for transmitting video images
CN108184166A (en) | Video quality analysis method and system
EP3706411A1 (en) | Early video equipment failure detection system
CN110177024A (en) | Monitoring method, client, server and system for hotspot devices
CN111628905B (en) | Data packet capturing method, device and equipment
CN111741247B (en) | Video playback method and device and computer equipment
CN113452630B (en) | Data merging method, data splitting method, device, equipment and storage medium
CN111506769B (en) | Video file processing method and device, storage medium and electronic device
CN111263113B (en) | Data packet sending method and device and data packet processing method and device
CN109120468A (en) | Method and apparatus for obtaining end-to-end network delay
CN111083527A (en) | Video playing method and device for an application, storage medium and electronic equipment
CN110855947A (en) | Image snapshot processing method and device
CN106549794A (en) | Quality monitoring system, apparatus and method for OTT services
CN107846586B (en) | Video stream quality monitoring method, device and server
CN116264592A (en) | Virtual desktop performance detection method, device, equipment and storage medium
CN114222096A (en) | Data transmission method, camera and electronic equipment
JP2007006203A (en) | Forming device for forming user's physical feeling quality estimating model, quality control device, and program
CN112449151B (en) | Data generation method, device and computer readable storage medium
CN106330548B (en) | Flow statistics method, device and system
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant