
CN111782488B - Message queue monitoring method, device, electronic equipment and medium - Google Patents


Info

Publication number
CN111782488B
CN111782488B (application CN202010666202.5A)
Authority
CN
China
Prior art keywords
message queue
data
depth value
neural network
monitoring data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010666202.5A
Other languages
Chinese (zh)
Other versions
CN111782488A (en)
Inventor
赵海龙 (Zhao Hailong)
王薇薇 (Wang Weiwei)
张寒 (Zhang Han)
刘颖 (Liu Ying)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202010666202.5A priority Critical patent/CN111782488B/en
Publication of CN111782488A publication Critical patent/CN111782488A/en
Application granted granted Critical
Publication of CN111782488B publication Critical patent/CN111782488B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/32Monitoring with visual or acoustical indication of the functioning of the machine
    • G06F11/324Display of status information
    • G06F11/327Alarm or error message display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3055Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3447Performance evaluation by modeling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466Performance evaluation by tracing or monitoring
    • G06F11/3476Data logging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure provides a message queue monitoring method, including: acquiring monitoring data; determining the current bearable capacity of the message queue and a predicted depth value at the next moment based on the monitoring data; determining an alarm threshold of the message queue based on a current bearable amount of the message queue; and determining that the message queue is in a blocking state under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value. The embodiment of the disclosure also provides a message queue monitoring device, electronic equipment and a computer readable medium.

Description

Message queue monitoring method, device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of computer technology, and more particularly, to a message queue monitoring method, a message queue monitoring apparatus, an electronic device, and a computer readable medium.
Background
In the design and implementation of security software, an alarm function is one of the common functions: when an abnormal event occurs, the user needs to be alerted in real time so that the event can be handled.
In the related art, monitoring of message queue depth is mainly achieved by setting a fixed threshold. For example, an alarm is raised if the average queue depth measured over a unit time period is higher than the set threshold. The threshold is then manually adjusted from time to time, according to the alarm behavior observed over a period, to fit the current running state of the queue.
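The fixed-threshold scheme described above can be sketched as follows (the function name and the threshold value are illustrative, not part of the related art being described):

```python
def fixed_threshold_alarm(depth_samples, threshold=5000):
    """Baseline approach: alarm when the average queue depth
    over a unit time period exceeds a manually set threshold."""
    avg_depth = sum(depth_samples) / len(depth_samples)
    return avg_depth > threshold

# e.g. depth sampled periodically over a one-minute window
assert fixed_threshold_alarm([100, 200, 150], threshold=5000) is False
assert fixed_threshold_alarm([6000, 7000, 8000], threshold=5000) is True
```

The whole weakness discussed below lies in that single hard-coded `threshold` value.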
In the course of implementing the inventive concept, the inventors found that the related art has at least the following problems. The alarm threshold is typically a fixed value set based on the historical experience of operations staff. However, the depth of a message queue is affected by real-time fluctuations in factors such as the running state of upstream and downstream systems (e.g., their call request volume), the performance of the system itself (e.g., memory, CPU, storage), and network conditions. A fixed threshold therefore struggles to achieve real-time monitoring. If the threshold is set too low, a large number of false alarms is likely, producing an alarm storm and wasting operations manpower; if it is set too high, the risk of missed alarms increases. Repeatedly re-tuning an "optimized" threshold from historical experience undoubtedly loses the effectiveness of automated monitoring while increasing the operational risk of the system.
Disclosure of Invention
In view of this, the present disclosure provides a message queue monitoring method, a message queue monitoring apparatus, an electronic device, and a computer-readable medium.
One aspect of the present disclosure provides a message queue monitoring method, including: acquiring monitoring data, determining the current bearable capacity of the message queue and a predicted depth value at the next moment based on the monitoring data, determining an alarm threshold of the message queue based on the current bearable capacity of the message queue, and determining that the message queue is in a blocking state under the condition that the predicted depth value at the next moment of the message queue exceeds the alarm threshold.
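As a rough illustration only, the four steps of this aspect might be sketched as below, where `predict` stands in for whatever trained model produces the two estimates, and the `margin` used to derive the alarm threshold from the bearable amount is an assumed detail not specified in the disclosure:

```python
def monitor_message_queue(monitoring_data, predict, margin=0.8):
    """One monitoring cycle (sketch).

    `predict` maps monitoring data to (current bearable amount,
    predicted queue depth at the next time). `margin` is an assumed
    way of deriving the alarm threshold from the bearable amount.
    """
    bearable, next_depth = predict(monitoring_data)
    alarm_threshold = bearable * margin
    # Blocking state: predicted next-time depth exceeds the threshold.
    blocked = next_depth > alarm_threshold
    return blocked, alarm_threshold

# Stub predictor standing in for the neural network:
blocked, threshold = monitor_message_queue({}, lambda data: (10000, 9500))
```

Because the threshold is re-derived from the bearable amount on every cycle, it tracks the queue's real-time capacity instead of staying fixed.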
According to an embodiment of the present disclosure, the method further comprises: and under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value, alarm processing is carried out.
According to an embodiment of the present disclosure, the method further comprises: and when the predicted depth value of the next moment of the message queue exceeds the alarm threshold value, accumulating a count value, wherein the count value represents the accumulated times that the predicted depth value of the next moment of the message queue continuously exceeds the alarm threshold value, and when the count value exceeds the count threshold value, performing alarm processing.
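The count-based alarm suppression described here might look like the following sketch (class and parameter names are illustrative):

```python
class CountedAlarm:
    """Suppress transient spikes: alarm only after the predicted depth
    has exceeded the alarm threshold more than `count_threshold`
    consecutive times."""

    def __init__(self, count_threshold=3):
        self.count_threshold = count_threshold
        self.count = 0

    def observe(self, predicted_depth, alarm_threshold):
        if predicted_depth > alarm_threshold:
            self.count += 1          # accumulate the consecutive count
        else:
            self.count = 0           # streak broken: reset the accumulator
        return self.count > self.count_threshold
```

A single spike therefore never alarms; only a sustained run of exceedances does.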
According to an embodiment of the disclosure, the determining, based on the monitoring data, a current bearable amount of the message queue and a predicted depth value at a next time, includes: and inputting the monitoring data into a trained neural network to obtain the current bearable capacity of the message queue and the predicted depth value at the next moment.
According to an embodiment of the present disclosure, the method further comprises: and acquiring historical data, wherein the historical data comprises historical monitoring data and a bearable amount and a depth value at the next moment corresponding to the historical monitoring data, and training the neural network based on the historical data.
According to an embodiment of the present disclosure, training the neural network includes: the neural network is trained based on an elastic network algorithm.
According to an embodiment of the present disclosure, the method further comprises: and performing discrete processing on the historical data based on an entropy discretization method to obtain discretized historical data. The training the neural network based on the historical data includes: training the neural network based on the discretized historical data.
According to an embodiment of the disclosure, the training the neural network based on the discretized historical data includes: and acquiring discretization historical data corresponding to each time point, determining data packets of each time point based on the discretization historical data of each time point, determining at least one training data packet based on a preset time period and the data packets of each time point, and training the neural network by using the at least one training data packet.
According to an embodiment of the present disclosure, the monitoring data includes at least one of a current queue depth value of the message queue, a queue in per unit time of the message queue, a queue out per unit time of the message queue, a disk space usage rate where the message queue is located, a CPU usage rate, a memory usage rate, a network connectivity, and a network instant network speed.
According to an embodiment of the present disclosure, the acquiring the monitoring data includes acquiring, by a log collection system, at least one of an operating system log, a web log, and an application log.
Another aspect of the present disclosure provides a message queue monitoring apparatus, which includes an acquisition module, a first determination module, a second determination module, and a third determination module. The acquisition module is used for acquiring the monitoring data. The first determining module is used for determining the current bearable capacity of the message queue and the predicted depth value of the next moment based on the monitoring data. The second determining module is used for determining an alarm threshold value of the message queue based on the current bearable capacity of the message queue. And the third determining module is used for determining that the message queue is in a blocking state under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value.
According to an embodiment of the present disclosure, the apparatus further comprises: and the first alarm module is used for carrying out alarm processing under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value.
According to an embodiment of the present disclosure, the apparatus further comprises: the second alarm module is used for accumulating a count value under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value, wherein the count value represents the accumulated times that the predicted depth value of the next moment of the message queue continuously exceeds the alarm threshold value; and when the count value exceeds the count threshold, alarm processing is performed.
According to an embodiment of the disclosure, the determining, based on the monitoring data, a current bearable amount of the message queue and a predicted depth value at a next time, includes: and inputting the monitoring data into a trained neural network to obtain the current bearable capacity of the message queue and the predicted depth value at the next moment.
According to an embodiment of the disclosure, the apparatus further includes a training module for acquiring historical data, the historical data including historical monitoring data and a bearable amount corresponding to the historical monitoring data and a depth value at a next time, and training the neural network based on the historical data.
According to an embodiment of the present disclosure, training the neural network includes: the neural network is trained based on an elastic network algorithm.
According to an embodiment of the disclosure, the apparatus further includes a processing module configured to perform a discretization process on the history data based on an entropy discretization method, to obtain discretized history data. The training the neural network based on the historical data includes: training the neural network based on the discretized historical data.
According to an embodiment of the disclosure, the training the neural network based on the discretized historical data includes: and acquiring discretization historical data corresponding to each time point, determining data packets of each time point based on the discretization historical data of each time point, determining at least one training data packet based on a preset time period and the data packets of each time point, and training the neural network by using the at least one training data packet.
According to an embodiment of the present disclosure, the monitoring data includes at least one of a current queue depth value of the message queue, a queue in per unit time of the message queue, a queue out per unit time of the message queue, a disk space usage rate where the message queue is located, a CPU usage rate, a memory usage rate, a network connectivity, and a network instant network speed.
According to an embodiment of the present disclosure, the acquiring the monitoring data includes acquiring, by a log collection system, at least one of an operating system log, a web log, and an application log.
Another aspect of the present disclosure provides an electronic device, comprising: one or more processors, a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods described above.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement a method as described above.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which when executed are for implementing a method as described above.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an exemplary system architecture to which a message queue monitoring method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a message queue monitoring method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates an operational schematic of a monitoring system according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a block diagram of a message queue monitoring apparatus according to an embodiment of the disclosure; and
fig. 5 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. The singular forms "a", "an", and "the" used herein are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, the terms "comprises", "comprising", and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B, and C" is used, it should generally be interpreted as commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together). The same applies to expressions like "at least one of A, B, or C". It should also be appreciated by those skilled in the art that virtually any disjunctive word or phrase presenting two or more alternatives, whether in the description, claims, or drawings, contemplates the possibilities of including one of the items, either of the items, or both. For example, the phrase "A or B" includes the possibility of "A", "B", or "A and B".
The embodiment of the disclosure provides a message queue monitoring method and device. The method comprises the following steps: and acquiring monitoring data, and determining the current bearable capacity of the message queue and the predicted depth value at the next moment based on the monitoring data. An alarm threshold for the message queue is then determined based on the current bearable amount of the message queue, such that if the predicted depth value for the next time of the message queue exceeds the alarm threshold, the message queue is determined to be in a blocked state.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which a message queue monitoring method may be applied, according to an embodiment of the present disclosure.
It should be noted that fig. 1 illustrates only an example of a system architecture to which the message queue monitoring method of the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but it does not mean that the embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104, to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as shopping applications, web browsers, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only).
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the message queue monitoring method provided in the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the message queue monitoring apparatus provided by the embodiments of the present disclosure may be generally disposed in the server 105. The message queue monitoring method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the message queue monitoring apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
For example, any one of the terminal devices 101, 102, 103 (e.g., the terminal device 101, but not limited thereto) may collect own monitoring data and transmit the monitoring data to the server 105. Server 105 may obtain the monitoring data and determine the current bearable amount of the message queue and the predicted depth value at the next time based on the monitoring data. An alarm threshold for the message queue is then determined based on the current bearable amount of the message queue, such that if the predicted depth value for the next time of the message queue exceeds the alarm threshold, the message queue is determined to be in a blocked state.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates a flow chart of a message queue monitoring method according to an embodiment of the disclosure.
As shown in fig. 2, the method includes operations S201 to S204.
In operation S201, monitoring data is acquired.
In embodiments of the present disclosure, a Message Queue (MQ) may be, for example, a storage container that caches information exchanged between upstream and downstream systems (e.g., text, pictures, audio, mails, etc.). For example, the message queue may be a first-in-first-out data structure: after a message is pushed into it by the upstream application, it is cached there until it is pushed onward to, or actively pulled by, the downstream application. Message queues are widely used as middleware in distributed systems and can help solve problems such as application coupling, asynchronous messaging, and traffic peak shaving.
According to embodiments of the present disclosure, at least one of an operating system log, a web log, and an application log may be collected by a log collection system to obtain monitoring data.
In the embodiment of the present disclosure, the monitoring data may include at least one of a current queue depth value of the message queue, a queue in unit time of the message queue, a queue out unit time of the message queue, a disk space usage rate where the message queue is located, a CPU usage rate, a memory usage rate, a network connectivity, and a network instant network speed.
For example, operating system logs (e.g., disk space usage, CPU usage, memory usage), network logs (e.g., network connectivity, instantaneous network speed), and application logs (e.g., current queue depth value, enqueue amount per unit time, dequeue amount per unit time) can be generated by a custom log collection script based on Fluentd (td-agent). The td-agent client then sends the generated log files to the server, so that the server can perform the next stage of data processing on the collected monitoring data.
In operation S202, a current bearable amount of the message queue and a predicted depth value at a next time are determined based on the monitoring data.
In an embodiment of the present disclosure, the monitoring data may be input into a trained neural network to obtain the current bearable amount of the message queue and the predicted depth value at the next time. According to the embodiment of the disclosure, the current bearable capacity of the message queue and the predicted depth value at the next moment are predicted through the trained neural network, so that the prediction accuracy can be improved.
In another embodiment of the present disclosure, the current bearable amount of the message queue and the predicted depth value of the next moment may also be determined through a preset algorithm based on the monitoring data. The preset algorithm may be, for example, a fitted algorithm, which may, for example, characterize the correlation between the monitoring data and the current bearable capacity of the message queue and the predicted depth value at the next moment.
In yet another embodiment of the present disclosure, the current bearable amount of the message queue and the predicted depth value at the next time may be determined from the monitoring data by a preset judgment rule. The preset judgment rule may be, for example, one tuned according to the experience of operations staff. For example, where the monitored data is the usage rate of the disk space where the message queue is located, it may be determined whether the current disk space usage exceeds a threshold (e.g., 80%); if so, the current bearable amount of the message queue may be appropriately lowered, thereby lowering the alarm threshold and alerting the user in time.
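A minimal sketch of such an empirical rule, with illustrative `limit` and `factor` values (the disclosure does not specify by how much the bearable amount is lowered):

```python
def adjust_bearable(bearable, disk_usage, limit=0.80, factor=0.9):
    """Empirical rule sketch: if disk usage where the queue lives
    exceeds `limit`, lower the bearable amount (and hence the derived
    alarm threshold) so that users are alerted earlier.
    Both `limit` and `factor` are illustrative assumptions."""
    if disk_usage > limit:
        return bearable * factor
    return bearable
```
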
It will be appreciated that the embodiments of the present disclosure are not limited to a particular manner or algorithm for determining the current bearable amount of the message queue and the predicted depth value at the next time based on the monitoring data, and those skilled in the art may set the embodiments according to the actual situation.
According to the embodiment of the disclosure, in a scenario in which the current bearable amount of the message queue and the predicted depth value at the next time are predicted by a trained neural network, the embodiment of the disclosure can train the neural network in the following manner to improve the accuracy of the prediction.
In an embodiment of the present disclosure, historical data may be acquired, including historical monitoring data together with the corresponding bearable amount and the depth value at the next time, and the neural network may be trained based on this historical data.
For example, the current queue depth value of the message queue at time T1, the enqueue amount per unit time, the dequeue amount per unit time, the disk space usage where the message queue is located, the CPU usage, the memory usage, the network connectivity, and the instantaneous network speed may be obtained as historical input training data; the bearable amount corresponding to time T1 and the queue depth value at the next time T2 may be obtained as historical output training data. The neural network may then be trained on historical input training data at a plurality of times and the corresponding historical output training data.
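The pairing of inputs at time T1 with outputs at T1/T2 could be sketched as follows (the data layout, using per-time dictionaries keyed by metric name, is an assumption for illustration):

```python
def build_training_pairs(metrics_by_time, bearable_by_time):
    """Pair the monitoring metrics at each time T with the targets:
    the bearable amount at T and the observed queue depth at the
    following time T+1 (the 'next moment')."""
    times = sorted(metrics_by_time)
    pairs = []
    for t, t_next in zip(times, times[1:]):
        x = metrics_by_time[t]                     # input vector at T1
        y = (bearable_by_time[t],                  # bearable amount at T1
             metrics_by_time[t_next]["depth"])     # queue depth at T2
        pairs.append((x, y))
    return pairs
```
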
According to the embodiment of the disclosure, the depth value of the queue at the next moment in the historical output training data can be directly obtained through the historical data at the next moment.
According to embodiments of the present disclosure, the current bearable amount in the historical output training data may be determined from the CPU usage, the memory usage, and the disk space usage. For example, an amount may be taken as the current bearable amount if, when the queue carries that amount, the CPU usage, memory usage, and disk space usage are all below a threshold (e.g., 90%) and at least one of them approaches the threshold. Specifically, an initial value H0 of the bearable amount may be given based on historical experience (for example, the average daily queue depth in the historical data), together with the peak message increment M per unit time (e.g., the maximum positive difference between the numbers of enqueued and dequeued messages per unit time in the historical data), and the thresholds of CPU usage, memory usage, and disk space usage may be set to 90% (by way of illustration only; those skilled in the art may adjust this up or down as appropriate). A candidate bearable amount may then be calculated as H1 = H0 + M×N (where N is a multiple, initially set to 1), and the CPU usage, memory usage, and disk space usage expected at the next time under the amount H1 are estimated. If they do not break through the threshold, the multiple N may be increased (for example, the current N may be doubled) and the estimation repeated with the updated N.
If the threshold is broken through, the multiple N may instead be reduced (for example, the current N may be halved), and N is then adjusted step by step until the estimated CPU usage, memory usage, and disk space usage at the next time are all below the threshold (e.g., 90%) and at least one of them approaches it (e.g., falls within 80%–90%); the H1 obtained at that point is determined as the current bearable amount.
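The iterative search just described might be sketched as follows; `usage_at` is a hypothetical stand-in for the estimation step (it returns projected CPU/memory/disk usage ratios at a given load), and the doubling/halving of N and the 80%–90% acceptance band follow the illustrative values in the text:

```python
def estimate_bearable(h0, m, usage_at, low=0.80, high=0.90, max_iter=50):
    """Search for the bearable amount H1 = H0 + M*N (sketch).

    N grows while all projected usages leave ample headroom, shrinks
    when any usage breaks through `high`, and the search stops when
    all usages are below `high` with at least one in [low, high).
    """
    n = 1.0
    for _ in range(max_iter):
        h1 = h0 + m * n
        usages = usage_at(h1)
        if max(usages) >= high:
            n /= 2.0          # broke through the threshold: back off
        elif max(usages) >= low:
            return h1         # near-saturated but safe: accept H1
        else:
            n *= 2.0          # plenty of headroom: probe higher
    return h0 + m * n         # give up after max_iter (edge case)
```

With a simple linear usage model such as `usage_at = lambda h: (h / 20000, h / 25000, h / 30000)`, starting from H0 = 1000 and M = 500, the search doubles N until the CPU ratio lands in the 80%–90% band.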
According to embodiments of the present disclosure, historical monitoring data may be obtained as input data for training the neural network; the queue depth value at the next time serves as one output, and the bearable amount at the current time, calculated from the CPU usage, memory usage, and disk space usage at the next time, serves as the other output. The neural network may thus be trained on this data so that, once trained, it can predict the current bearable amount of the message queue and the next-time queue depth value from the collected monitoring data.
In embodiments of the present disclosure, the neural network may be trained based on an elastic network algorithm. For example, an elastic network regression (ElasticNet Regression) machine learning algorithm may be applied to train the neural network. The elastic network regression algorithm can handle data that are not linearly separable and can fit a surface to the data points; it is well suited to situations where high collinearity exists between feature variables (i.e., two feature values have an approximately linear correlation). According to embodiments of the present disclosure, training the neural network with the elastic network regression algorithm can reduce the influence on model training of correlations among some index items (for example, between CPU utilization and memory utilization, or between the in-queue and out-queue amounts per unit time), thereby mitigating the high collinearity in the data and providing a better fitting effect.
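As a hedged sketch of this training step, the snippet below fits scikit-learn's `ElasticNet` (assuming scikit-learn is available) on a synthetic data set containing two deliberately collinear features; the feature layout, target, and hyperparameters are illustrative, not the patent's actual configuration.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Hypothetical training matrix: each row is one time point, each column
# one monitoring index item (queue depth, in/out amounts, CPU, memory, ...).
rng = np.random.default_rng(0)
X = rng.random((200, 8))
# Make columns 0 and 1 highly collinear, mimicking correlated index
# items such as CPU utilization and memory utilization.
X[:, 1] = X[:, 0] + 0.01 * rng.random(200)
# Synthetic target standing in for the next-moment queue depth.
y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + 0.1 * rng.random(200)

# l1_ratio blends the L1 and L2 penalties of the elastic net objective.
model = ElasticNet(alpha=0.01, l1_ratio=0.5)
model.fit(X, y)
predicted_depth = model.predict(X[-1:])  # prediction for the latest sample
```

The combined L1/L2 penalty is what tolerates the collinear columns: a pure L1 (lasso) fit tends to pick one of the correlated features arbitrarily, while the elastic net spreads the weight between them.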
According to embodiments of the present disclosure, the historical data may further be discretized based on an entropy discretization method to obtain discretized historical data, and the neural network may be trained based on the discretized historical data. For example, discretized historical data corresponding to each of a plurality of time points may be acquired, a data packet for each time point may be determined based on the discretized historical data for that time point, at least one training data packet may be determined based on a preset time period and the data packets for the time points, and the neural network may be trained using the at least one training data packet.
For example, the historical monitoring data may be combined and discretized. For example, one or more key index items may be selected from the data items in the operating system log, the network log, and the application log for fitting, and composed into an initial data set. For example, only one or more of the current queue depth value of the message queue, the in-queue amount per unit time of the message queue, the out-queue amount per unit time of the message queue, the disk space usage rate where the message queue is located, the CPU usage rate, the memory usage rate, the network connectivity, and the network instant network speed may be selected as key index items to form the input training set of the neural network.
In embodiments of the present disclosure, the historical monitoring data may also be processed by an entropy-based discretization method. For example, the entropy of each historical monitoring data index item within a preset time period is obtained, and each index item is then discretized according to its entropy, so as to obtain a training data set usable for machine learning.
It will be appreciated that discretizing historical acquisition data with continuous attributes is beneficial for improving model training efficiency and accuracy when applying machine learning. Continuous attribute discretization divides a continuous interval into a number of sub-intervals and associates the resulting intervals with discrete values. The entropy-based data discretization of embodiments of the present disclosure is applicable to supervised learning, typically using class information to calculate and determine split points, and is a top-down splitting technique. The specific operation steps are as follows. Step 1: define the entropy of an interval. Step 2: regard each value as a candidate split point, divide the data into two parts, and among the possible splits select the one that produces the minimum entropy. Step 3: find the interval with the larger entropy among the two and repeat the splitting. Step 4: when the number of intervals specified by the user is reached, end the splitting process.
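The cut-point selection in step 2 can be sketched as follows; the class labels (e.g., blocked / not blocked) are an assumption for illustration, since the source does not specify the supervision signal.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of the class labels inside one interval."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_split(values, labels):
    """Try each value as a cut point and keep the one that minimises
    the weighted entropy of the two resulting parts (step 2 above)."""
    n = len(values)
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, n):
        cut = pairs[i][0]
        left = [l for v, l in pairs if v < cut]
        right = [l for v, l in pairs if v >= cut]
        if not left or not right:
            continue  # degenerate split, e.g. duplicated minimum value
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        best = min(best, (e, cut))
    return best[1]
```

Applying this recursively to the higher-entropy side of each split (step 3) until the requested number of intervals is reached completes the top-down procedure.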
For example, embodiments of the present disclosure may select one or more key indicators (queue length, memory/CPU utilization, etc.) in the historical data as split feature items, and select the optimal cut point by calculating and comparing the sum of the entropies of the two parts on either side of each candidate cut point. It can be understood that each historical value of a feature item can serve as a candidate cut point, so the entropy values need to be calculated in turn, and the cut point with the lowest value is taken as the optimal one.
According to embodiments of the present disclosure, data discretization may refer to grouping originally continuous data into discretized intervals; it is essentially a method of data binning or grouping. Compared with other discretization methods such as the histogram method, the entropy-based discretization method reduces the error of manually setting interval boundary thresholds, and offers higher discretization efficiency for large sample sets (i.e., when the monitoring data volume per unit time is large).
For example, the neural network can be trained using an Elastic Net algorithm based on the discretized historical data. First, taking time as a dimension, entropy-based discretization is applied to feature index items such as the out-queue/in-queue cached message amounts, the CPU utilization, the memory utilization, and the disk space utilization to obtain the discretized data for the current time point, and at the same time the data packet best suited to current production use is determined by adjusting the selection of cut points. Second, the large amount of historical monitoring data is initially grouped by time period, and iterative model training is then performed group by group, which shortens the model training time and ensures the timeliness of the training data. Finally, the model parameters are adjusted according to the training results of each time period, and the predicted value for the current time point is output.
According to the embodiment of the disclosure, in a scenario of determining the current bearable amount of the message queue and the predicted depth value of the next moment through a preset algorithm based on the monitoring data, the embodiment of the disclosure can be realized through the following algorithm:
y = w1x1 + w2x2 + ... + wixi + w0
where y is the predicted output term, the xi are the input terms (i.e., one or more of the current queue depth value of the message queue, the in-queue amount per unit time of the message queue, the out-queue amount per unit time of the message queue, the disk space usage rate where the message queue is located, the CPU usage rate, the memory usage rate, the network connectivity, and the network instant network speed), and the wi are the weight terms.
The corresponding loss function (whose optimal solution is the w that minimizes the function value) can be expressed as:
min(||xw - y||^2 + z1||w||_1 + z2||w||_2^2)
the embodiments of the present disclosure are not limited to a specific preset algorithm, and those skilled in the art may set the algorithm according to actual situations.
It may be appreciated that embodiments of the present disclosure may predict or calculate the current bearable amount and the predicted depth value at the next moment based on one or more of the current queue depth value of the message queue, the in-queue amount per unit time of the message queue, the out-queue amount per unit time of the message queue, the disk space usage rate where the message queue is located, the CPU usage rate, the memory usage rate, the network connectivity, and the network instant network speed; however, the present disclosure does not limit the prediction or calculation method, which may be set by those skilled in the art according to actual situations.
In operation S203, an alarm threshold for the message queue is determined based on the current bearable amount of the message queue.
According to embodiments of the present disclosure, an alarm threshold for a message queue may be determined based on a current bearable amount and a preset proportion of the message queue. For example, 80% of the current bearable capacity of a message queue may be used as an alarm threshold for the message queue.
In operation S204, it is determined that the message queue is in a blocking state in case the predicted depth value of the next time of the message queue exceeds the alarm threshold.
According to the embodiment of the disclosure, if the predicted depth value of the next time exceeds the determined alarm threshold, it may be determined that the message queue is in a blocking state, and an alarm process may be performed to notify the user.
In an embodiment of the present disclosure, the immediate alarm processing may be performed in response to the predicted depth value at the next time of the message queue exceeding an alarm threshold.
In another embodiment of the present disclosure, a count value may be accumulated when the predicted depth value at the next moment of the message queue exceeds the alarm threshold, where the count value represents the accumulated number of times that the predicted depth value at the next moment has continuously exceeded the alarm threshold; alarm processing is performed when the count value exceeds the count threshold. For example, counting starts in response to the predicted depth value at the next moment exceeding the alarm threshold: if the predicted depth value given by the prediction model for the next moment still exceeds the alarm threshold, the count is incremented by 1, and if it does not, the count is cleared to zero. When the accumulated value reaches the count threshold, the current queue is judged to be in a blocking state and alarm processing is performed.
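The consecutive-count alarm logic can be sketched as follows; the class name and threshold values are illustrative.

```python
class CountingAlarm:
    """Raise an alarm only after the predicted depth value exceeds the
    alarm threshold for `count_threshold` consecutive predictions."""

    def __init__(self, count_threshold):
        self.count_threshold = count_threshold
        self.count = 0

    def check(self, predicted_depth, alarm_threshold):
        if predicted_depth > alarm_threshold:
            self.count += 1   # accumulate while above the threshold
        else:
            self.count = 0    # cleared as soon as the prediction drops back
        return self.count >= self.count_threshold
```

Requiring several consecutive exceedances filters out one-off prediction spikes that would otherwise trigger an immediate alarm.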
In still another embodiment of the present disclosure, a certain time interval may further be taken, and by calculating the average over that interval it can be detected whether the queue has remained in a critical, near-full-load state for a long period, so as to give early warning of the potential risk. For example, the mean of the predicted current bearable amounts over a period of time and the mean of the predicted depth values at the next moments may be calculated, and whether the message queue is in a blocking state during that period may be determined based on these means. Alternatively, the predicted average depth value may be calculated from the predicted depth values while the variance of the group of depth values is also calculated; after screening out cases where peak spikes make the variance excessively large, the queue is judged to be in a blocking state and early warning is triggered once the predicted average depth value reaches a certain percentage of the preset threshold.
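The windowed early-warning variant can be sketched as follows; the window size, percentage, and variance limit are illustrative assumptions.

```python
from collections import deque
from statistics import mean, pvariance

class WindowedEarlyWarning:
    """Flag a queue that stays near full load over a time window.

    Windows whose variance is too large (peak spikes) are screened out;
    the ratio and variance limits are illustrative values."""

    def __init__(self, window, ratio=0.8, max_variance=1e6):
        self.depths = deque(maxlen=window)
        self.ratio = ratio
        self.max_variance = max_variance

    def check(self, predicted_depth, bearable_amount):
        self.depths.append(predicted_depth)
        if len(self.depths) < self.depths.maxlen:
            return False                          # window not yet full
        if pvariance(self.depths) > self.max_variance:
            return False                          # peak spikes: skip window
        return mean(self.depths) >= self.ratio * bearable_amount
```

Averaging over the window catches a queue hovering just below the alarm threshold, a condition the single-prediction check would never report.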
The embodiment of the disclosure can collect monitoring data in real time, and predict the current bearable capacity of the message queue and the predicted depth value at the next moment based on the neural network, so that whether the current moment of the queue is in a blocking state can be determined based on the predicted current bearable capacity and the predicted depth value at the next moment. The problem of alarm omission or alarm storm caused by the fact that the threshold value in the fixed threshold value monitoring cannot be adapted to the current queue condition in real time is well solved, the labor cost in the process of checking, adjusting and optimizing the monitoring threshold value is reduced, and the alarm accuracy and the alarm effectiveness of the system are improved.
Fig. 3 schematically illustrates an operational schematic of a monitoring system according to an embodiment of the present disclosure.
As shown in fig. 3, the monitoring system comprises a data acquisition module, a data preprocessing module, a model training module, a prediction alarm module and a unified monitoring platform.
The data acquisition module acquires historical monitoring data such as an operating system log (disk space utilization rate, CPU utilization rate and memory utilization rate), a network log (network card port link connectivity and network instant network speed), an application log (current queue depth value, message queuing amount per unit time and message dequeuing amount per unit time) and the like to form a log file, and transmits the log file to the data preprocessing module.
The data preprocessing module further combines and discretizes a plurality of index data items in the history log to form a training data set; after the training data set reaches a certain scale, the prediction model is trained according to the preselected elastic network regression algorithm.
the prediction alarm module predicts the current bearable capacity of the message queue and the predicted depth value at the next moment according to the prediction model obtained in the last step and the monitoring data obtained in real time by the data acquisition module, updates the alarm threshold according to the current bearable capacity, and judges whether to trigger an alarm according to the predicted depth value at the next moment and the updated alarm threshold.
When the prediction alarm module judges that the predicted depth value at the next moment exceeds the alarm threshold value, the organization alarm information is uploaded to a unified monitoring platform for formal alarm release.
According to the embodiment of the disclosure, the monitoring data can be collected in real time, the current bearable capacity of the message queue and the predicted depth value of the next moment are predicted based on the neural network, so that whether the current moment of the queue is in a blocking state or not can be determined based on the predicted current bearable capacity and the predicted depth value of the next moment, and when the current moment is in the blocking state, alarm processing is carried out to remind a user. The problem of alarm omission or alarm storm caused by the fact that the threshold value in the fixed threshold value monitoring cannot be adapted to the current queue condition in real time is well solved, the labor cost in the process of checking, adjusting and optimizing the monitoring threshold value is reduced, and the alarm accuracy and the alarm effectiveness of the system are improved.
Fig. 4 schematically illustrates a block diagram of a message queue monitoring apparatus 400 according to an embodiment of the disclosure.
As shown in fig. 4, the apparatus 400 includes an acquisition module 410, a first determination module 420, a second determination module 430, and a third determination module 440.
The acquisition module 410 is configured to acquire monitoring data.
The first determining module 420 is configured to determine, based on the monitoring data, a current bearable amount of the message queue and a predicted depth value at a next time.
The second determining module 430 is configured to determine an alarm threshold for the message queue based on a current bearable amount of the message queue.
The third determining module 440 is configured to determine that the message queue is in a blocking state if the predicted depth value at the next time of the message queue exceeds the alarm threshold.
According to an embodiment of the present disclosure, the apparatus further comprises: and the first alarm module is used for carrying out alarm processing under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value.
According to an embodiment of the present disclosure, the apparatus further comprises: the second alarm module is used for accumulating a count value under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value, wherein the count value represents the accumulated times that the predicted depth value of the next moment of the message queue continuously exceeds the alarm threshold value; and when the count value exceeds the count threshold, alarm processing is performed.
According to an embodiment of the disclosure, the determining, based on the monitoring data, a current bearable amount of the message queue and a predicted depth value at a next time, includes: and inputting the monitoring data into a trained neural network to obtain the current bearable capacity of the message queue and the predicted depth value at the next moment.
According to an embodiment of the disclosure, the apparatus further includes a training module for acquiring historical data, the historical data including historical monitoring data and a bearable amount corresponding to the historical monitoring data and a depth value at a next time, and training the neural network based on the historical data.
According to an embodiment of the present disclosure, training the neural network includes: the neural network is trained based on an elastic network algorithm.
According to an embodiment of the disclosure, the apparatus further includes a processing module configured to perform a discretization process on the history data based on an entropy discretization method, to obtain discretized history data. The training the neural network based on the historical data includes: training the neural network based on the discretized historical data.
According to an embodiment of the disclosure, the training the neural network based on the discretized historical data includes: and acquiring discretization historical data corresponding to each time point, determining data packets of each time point based on the discretization historical data of each time point, determining at least one training data packet based on a preset time period and the data packets of each time point, and training the neural network by using the at least one training data packet.
According to an embodiment of the present disclosure, the monitoring data includes at least one of a current queue depth value of the message queue, a queue in per unit time of the message queue, a queue out per unit time of the message queue, a disk space usage rate where the message queue is located, a CPU usage rate, a memory usage rate, a network connectivity, and a network instant network speed.
According to an embodiment of the present disclosure, the acquiring the monitoring data includes acquiring, by a log collection system, at least one of an operating system log, a web log, and an application log.
The apparatus 400 may, for example, perform the method described above with reference to fig. 2 according to the embodiments of the present disclosure, and will not be described here again.
Any number of modules, sub-modules, units, sub-units, or at least some of the functionality of any number of the sub-units according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented as split into multiple modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or encapsulates the circuit, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules, which when executed, may perform the corresponding functions.
For example, any of the acquisition module 410, the first determination module 420, the second determination module 430, and the third determination module 440 may be combined in one module/unit/sub-unit or any of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least some of the functionality of one or more of these modules/units/sub-units may be combined with at least some of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the acquisition module 410, the first determination module 420, the second determination module 430, and the third determination module 440 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging the circuitry, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, at least one of the acquisition module 410, the first determination module 420, the second determination module 430, and the third determination module 440 may be at least partially implemented as computer program modules, which when executed, may perform the respective functions.
Fig. 5 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device according to the embodiment of the present disclosure includes a processor 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 501 may also include on-board memory for caching purposes. The processor 501 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 503, various programs and data required for the operation of the system 500 are stored. The processor 501, ROM 502, and RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the program may be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 500 may further include an input/output (I/O) interface 505, the input/output (I/O) interface 505 also being connected to the bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output portion 507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as needed so that a computer program read therefrom is mounted into the storage section 508 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable media 511. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 501. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in various combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (9)

1. A message queue monitoring method, comprising:
acquiring monitoring data;
determining the current bearable capacity of the message queue and a predicted depth value at the next moment based on the monitoring data;
determining an alarm threshold of the message queue based on a current bearable amount of the message queue; and
determining that the message queue is in a blocking state under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value;
wherein the determining, based on the monitoring data, the current bearable capacity of the message queue and the predicted depth value at the next moment includes: inputting the monitoring data into a trained neural network to obtain the current bearable capacity of the message queue and a predicted depth value at the next moment; wherein the trained neural network is obtained by training the neural network based on discretized historical data; the discretization historical data is obtained by performing discretization on the obtained historical data by adopting an entropy discretization method; training the neural network includes: training the neural network based on an elastic network algorithm; the history data comprises history monitoring data and a bearable capacity and a depth value at the next moment corresponding to the history monitoring data.
2. The method of claim 1, further comprising:
and under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value, alarm processing is carried out.
3. The method of claim 2, further comprising:
accumulating a count value under the condition that the predicted depth value of the next moment of the message queue exceeds the alarm threshold value, wherein the count value represents the accumulated times that the predicted depth value of the next moment of the message queue continuously exceeds the alarm threshold value;
and when the count value exceeds the count threshold, alarm processing is performed.
4. The method of claim 1, wherein the training the neural network based on the discretized historical data comprises:
acquiring discretization historical data corresponding to a plurality of time points respectively;
determining a data packet for each point in time based on the discretized history data for each point in time;
determining at least one training data packet based on a preset time period and the data packets at each time point;
training the neural network using the at least one training data packet.
5. The method of claim 1, wherein the monitoring data comprises at least one of a current queue depth value of the message queue, an in-queue amount per unit time of the message queue, an out-queue amount per unit time of the message queue, a disk space usage rate at which the message queue is located, a CPU usage rate, a memory usage rate, a network connectivity, and a network instant network speed.
6. The method of claim 1, wherein the acquiring monitoring data comprises acquiring, by a log collection system, at least one of an operating system log, a web log, and an application log.
7. A message queue monitoring apparatus comprising:
an acquisition module configured to acquire monitoring data;
a first determining module configured to determine, based on the monitoring data, a current bearable capacity of the message queue and a predicted depth value at a next moment;
a second determining module configured to determine an alarm threshold of the message queue based on the current bearable capacity of the message queue; and
a third determining module configured to determine that the message queue is in a blocked state when the predicted depth value at the next moment of the message queue exceeds the alarm threshold;
wherein the determining, based on the monitoring data, the current bearable capacity of the message queue and the predicted depth value at the next moment comprises: inputting the monitoring data into a trained neural network to obtain the current bearable capacity of the message queue and the predicted depth value at the next moment; wherein the trained neural network is obtained by training a neural network on discretized historical data; the discretized historical data are obtained by discretizing acquired historical data using an entropy-based discretization method; training the neural network comprises training the neural network based on an elastic net algorithm; and the historical data comprise historical monitoring data together with the bearable capacity and the next-moment depth value corresponding to the historical monitoring data.
8. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
9. A computer readable medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1-6.
CN202010666202.5A 2020-07-10 2020-07-10 Message queue monitoring method, device, electronic equipment and medium Active CN111782488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010666202.5A CN111782488B (en) 2020-07-10 2020-07-10 Message queue monitoring method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111782488A CN111782488A (en) 2020-10-16
CN111782488B true CN111782488B (en) 2024-02-02

Family

ID=72767461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010666202.5A Active CN111782488B (en) 2020-07-10 2020-07-10 Message queue monitoring method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111782488B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190369A (en) * 2021-04-21 2021-07-30 北京海博思创科技股份有限公司 Data processing method, device, equipment and storage medium
CN113645156B (en) * 2021-07-16 2023-08-08 苏州浪潮智能科技有限公司 Switch SAI layer message queue adjusting method, system, terminal and storage medium
CN113676419B (en) * 2021-09-03 2024-08-27 中国人民银行清算总中心 Message transmission method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06209330A (en) * 1993-01-11 1994-07-26 Nec Corp System and device for congestion detection in asynchronous transfer mode
US5696764A (en) * 1993-07-21 1997-12-09 Fujitsu Limited ATM exchange for monitoring congestion and allocating and transmitting bandwidth-guaranteed and non-bandwidth-guaranteed connection calls
CN1545286A (en) * 2003-11-21 2004-11-10 清华大学 ECN based congestion control method with prediction verification
CN101212389A (en) * 2006-12-30 2008-07-02 华为技术有限公司 Outburst convergence control method, device, and communication device
CN102457906A (en) * 2010-10-26 2012-05-16 中国移动通信集团河南有限公司 Load balancing control method and system of message queues
CN109039727A (en) * 2018-07-24 2018-12-18 中国银行股份有限公司 Message queue monitoring method and device based on deep learning
CN111372284A (en) * 2020-03-10 2020-07-03 中国联合网络通信集团有限公司 Congestion processing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An adaptive dynamic load balancing method based on middleware; Zhu Zhi, Zhu Yi, Xing Chunxiao; Computer Engineering and Applications, No. 34; entire document *
ATM network congestion control based on an additive-multiplicative fuzzy neural network; Zhai Donghai, Li Li, Jin Fan; Control and Decision, Vol. 19, No. 6; Chapters 1-5 *

Similar Documents

Publication Publication Date Title
CN111221702B (en) Log analysis-based exception handling method, system, terminal and medium
CN111782488B (en) Message queue monitoring method, device, electronic equipment and medium
CN109471783B (en) Method and device for predicting task operation parameters
CN114443429B (en) Alarm event processing method and device and computer readable storage medium
CN109257200A (en) The method and apparatus of big data platform monitoring
CN108170580A (en) A kind of rule-based log alarming method, apparatus and system
US10896073B1 (en) Actionability metric generation for events
CN105335271A (en) State monitoring apparatus and comprehensive monitoring system and method
CN110955586A (en) System fault prediction method, device and equipment based on log
CN110933172A (en) Remote monitoring system and method based on cloud computing
CN111049673A (en) Method and system for counting and monitoring API call in service gateway
CN111585785A (en) Method and device for shielding alarm information, computer equipment and storage medium
CN112948223B (en) Method and device for monitoring running condition
CN110677271B (en) Big data alarm method, device, equipment and storage medium based on ELK
CN105471938B (en) Server load management method and device
CN114500318B (en) Batch operation monitoring method, device, equipment and medium
US20150195174A1 (en) Traffic data collection apparatus, traffic data collection method and program
US11477215B2 (en) Scaling a processing resource of a security information and event management system
CN112910733A (en) Full link monitoring system and method based on big data
CN114697247B (en) Fault detection method, device, equipment and storage medium of streaming media system
CN117354206A (en) Method, device, system and medium for monitoring API (application program interface)
CN114661562A (en) Data warning method, device, equipment and medium
KR20230055575A (en) Universal large-scale multi-cloud environment monitoring system and method for private and public networks
CN113672472A (en) Disk monitoring method and device
CN112988417A (en) Message processing method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant