
WO2024125201A1 - Service mesh rate limiting method and related apparatus - Google Patents

Service mesh rate limiting method and related apparatus

Info

Publication number
WO2024125201A1
Authority
WO
WIPO (PCT)
Prior art keywords
container group
pod
downstream
group pod
upstream
Prior art date
Application number
PCT/CN2023/132091
Other languages
English (en)
French (fr)
Inventor
刘冬冬 (Liu Dongdong)
Original Assignee
Huawei Cloud Computing Technologies Co., Ltd. (华为云计算技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co., Ltd.
Publication of WO2024125201A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/32: Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames

Definitions

  • The present application relates to the field of cloud computing, and in particular to a service mesh rate limiting method and related apparatus.
  • Due to the large number of microservices and the complex calling relationships between them, a backend system facing a large volume of business requests is easily overloaded. Business requests therefore need to be rate limited to avoid system crashes and keep the business as stable as possible.
  • A commonly used rate limiting method is to perform control based on a threshold on the service request rate, that is, to count the service requests within a time window; when the service request volume within a time window exceeds the set threshold, new service requests are rejected or discarded until the current time window ends.
  • The above rate limiting method applies a one-size-fits-all cut-off without considering service levels, which can easily damage important services.
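  • As an illustration of the fixed-window counting approach described above, the following is a minimal sketch in Go; the type name, the window length field, and the threshold handling are assumptions made for illustration and are not part of this application.

```go
// Minimal sketch of a fixed-window request counter: requests beyond the
// per-window threshold are rejected (discarded) until the window ends.
package ratelimit

import (
	"sync"
	"time"
)

type FixedWindowLimiter struct {
	mu          sync.Mutex
	windowStart time.Time     // start of the current time window
	window      time.Duration // preset length of one time window
	count       int           // requests counted in the current window
	threshold   int           // maximum requests accepted per window
}

// Allow reports whether one more request may be accepted in the current window.
func (l *FixedWindowLimiter) Allow(now time.Time) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if now.Sub(l.windowStart) >= l.window {
		l.windowStart = now // a new window begins, reset the counter
		l.count = 0
	}
	if l.count >= l.threshold {
		return false // threshold exceeded: discard the request
	}
	l.count++
	return true
}
```

  • Such a limiter treats all requests identically regardless of service level, which is the one-size-fits-all drawback noted above.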
  • The present application provides a service mesh rate limiting method and related apparatus.
  • With this method, the upstream node can rate limit according to the overall condition of its downstream nodes, thereby improving business service quality.
  • In a first aspect, the present application provides a service mesh rate limiting method. The method is applied to an upstream container group Pod having an upstream-downstream correspondence relationship; a sidecar container and an application container are deployed in each container group Pod, the application container is used to process business requests, and the sidecar container is used to control the traffic of business requests. The method includes:
  • when it is determined that the upstream container group Pod is overloaded, determining the comprehensive level of each downstream container group Pod in at least one downstream container group Pod that multiple first business requests received by the upstream container group Pod need to call; and, according to the comprehensive levels of the downstream container group Pods that the multiple first business requests need to call, sending some of the multiple first business requests to the corresponding downstream container group Pods and discarding the remaining business requests among the multiple first business requests.
  • In other words, the upstream container group Pod determines which first business requests to discard based on the comprehensive levels of the downstream container group Pods, thereby achieving rate limiting, where the comprehensive level of each downstream container group Pod is not determined by a single factor but by the static business level of the downstream container group Pod and its actual ability to process business requests.
  • That is, the rate limiting method described in this application determines which requests are discarded and which are processed based on this comprehensive level, rather than by a single factor or a one-size-fits-all cut-off, which improves the quality of business services.
  • the ability of the downstream container group Pod to actually process business requests includes one or more of a business request discard rate threshold of the downstream container group Pod and an overload rate of the downstream container group Pod.
  • the overload rate of the downstream container group Pod is positively correlated with the discard rate difference, and the discard rate difference refers to the difference between the business request discard rate of the downstream container group Pod and the business request discard rate threshold.
  • The service request discard rate threshold refers to a preset upper limit on the service request discard rate; it can be understood as the number of service requests that are allowed to be discarded, or the tolerance of the downstream container group Pod for discarded service requests. Therefore, the preset service request discard rate threshold can reflect the ability of the downstream container group Pod to actually process service requests.
  • the overload rate can reflect the ability of the downstream container group Pod to actually process service requests. The overload rate is positively correlated with the difference between the actual service request discard rate of the container group Pod and the preset service request discard rate threshold.
  • The comprehensive level of the downstream container group Pod is positively correlated with the static service level of the downstream container group Pod; the comprehensive level of the downstream container group Pod is negatively correlated with the service request discard rate threshold of the downstream container group Pod; and the comprehensive level of the downstream container group Pod is negatively correlated with the overload rate of the downstream container group Pod.
  • The lower the static service level, the lower the comprehensive level of the container group Pod; the larger the preset service request discard rate threshold, the more service requests the container group Pod allows to be discarded, and the lower its comprehensive level; the smaller the preset service request discard rate threshold, the fewer service requests the container group Pod allows to be discarded, and the higher its comprehensive level; the larger the difference between the actual service request discard rate and the preset service request discard rate threshold, the higher the overload degree of the container group Pod, and the lower its comprehensive level.
  • the method further includes: obtaining a resource usage rate of the upstream container group Pod in a current time window; and determining whether the upstream container group Pod is overloaded in the current window according to the resource usage rate of the upstream container group Pod.
  • the resource utilization of the upstream container group Pod includes the utilization of the processor and/or the utilization of the memory in the upstream container group Pod; determining whether the upstream container group Pod is overloaded in the current window according to the resource utilization of the upstream container group Pod includes: when the utilization of the processor is greater than a set processor utilization overload threshold, and/or when the utilization of the memory is greater than a set memory utilization overload threshold, determining that the upstream container group Pod is overloaded in the current window; otherwise, determining that the upstream container group Pod is not overloaded in the current window.
  • In a possible implementation, a downstream container group Pod with the lowest comprehensive level among the downstream container group Pods that the multiple first service requests need to call is determined as a first target container group Pod; service requests among the multiple first service requests that call a downstream container group Pod with a higher comprehensive level than the first target container group Pod are sent to the corresponding downstream container group Pods, and the remaining service requests are discarded.
  • A downstream container group Pod has the lowest comprehensive level in any one or more of the following situations: its static business level is relatively low; its preset business request discard rate threshold is relatively large (in other words, the number of business requests it is allowed to drop is relatively large); its overload rate (overload degree) is relatively high.
  • the method further includes:
  • when it is determined that the upstream container group Pod is overloaded, determining the comprehensive level of each downstream container group Pod in at least one downstream container group Pod that multiple second service requests received by the upstream container group Pod need to call;
  • determining, from the downstream container group Pods that the multiple second business requests need to call, the downstream container group Pods having a higher comprehensive level than the first target container group Pod as a first set;
  • determining, from the first set, a downstream container group Pod with the lowest comprehensive level as a second target container group Pod; and
  • sending the service requests among the multiple second service requests that call a downstream Pod with a higher comprehensive level than the second target container group Pod to the corresponding downstream container group Pods, and discarding the remaining service requests.
  • the business request that calls the downstream container group Pod with the lowest comprehensive level is discarded.
  • the business request that calls the container group Pod with a higher comprehensive level than the downstream container group Pod with the lowest comprehensive level in the previous time window is discarded. Therefore, when a certain upstream container group Pod is overloaded in consecutive time windows, the comprehensive level of the downstream container group Pod called by the discarded business requests becomes higher and higher, so as to eliminate the overload status of the current upstream container group Pod and/or downstream container group Pod as soon as possible.
  • the method further includes:
  • if the upstream container group Pod is overloaded, determining the comprehensive level of each downstream container group Pod in at least one downstream container group Pod that multiple third service requests received by the upstream container group Pod need to call;
  • determining, from the downstream container group Pods that the multiple third business requests need to call, the downstream container group Pods having a higher comprehensive level than the second target container group Pod as a second set;
  • determining, from the second set, a downstream container group Pod with the lowest comprehensive level as a third target container group Pod; and
  • sending the service requests among the multiple third service requests that call a downstream Pod with a higher comprehensive level than the third target container group Pod to the corresponding downstream container group Pods, and discarding the remaining service requests.
  • the comprehensive level of the downstream container group Pod is represented by a numerical value.
  • the comprehensive level can be represented by a numerical value.
  • the larger the numerical value the higher the comprehensive level.
  • the smaller the numerical value the higher the comprehensive level.
  • In a second aspect, the present application provides a service mesh rate limiting device. The device comprises an upstream container group Pod having an upstream-downstream correspondence relationship; a sidecar container and an application container are deployed in each container group Pod, the application container is used to process business requests, and the sidecar container is used to control the traffic of business requests. The device comprises:
  • a comprehensive level determination module is used to determine the comprehensive level of each downstream container group Pod in at least one downstream container group Pod required to be called by multiple first business requests received by the upstream container group Pod when it is determined that the upstream container group Pod is overloaded, wherein the comprehensive level of the downstream container group Pod is determined based on the preset static business level of the downstream container group Pod and the actual ability of the downstream container group Pod to process business requests;
  • a communication module is used to send some of the multiple first business requests to the corresponding downstream container group Pod according to the comprehensive levels of each downstream container group Pod required to be called by the multiple first business requests, and discard the remaining business requests in the multiple first business requests except the part of the business requests.
  • the ability of the downstream container group Pod to actually process business requests includes one or more of a business request discard rate threshold of the downstream container group Pod and an overload rate of the downstream container group Pod.
  • the overload rate of the downstream container group Pod is positively correlated with the discard rate difference, and the discard rate difference refers to the difference between the business request discard rate of the downstream container group Pod and the business request discard rate threshold.
  • the comprehensive level of the downstream container group Pod is positively correlated with the static business level of the downstream container group Pod; the comprehensive level of the downstream container group Pod is negatively correlated with the business request discard rate threshold of the downstream container group Pod; the comprehensive level of the downstream container group Pod is negatively correlated with the overload rate of the downstream container group Pod.
  • an acquisition module is used to obtain the resource utilization rate of the upstream container group Pod in the current time window; and an overload determination module is used to determine whether the upstream container group Pod is overloaded in the current window according to the resource utilization rate of the upstream container group Pod.
  • the resource utilization of the upstream container group Pod includes the utilization of the processor and/or the memory in the upstream container group Pod; the overload determination module is used to: when the utilization of the processor is greater than a set processor utilization overload threshold, and/or when the utilization of the memory is greater than a set memory utilization overload threshold, determine that the upstream container group Pod is overloaded in the current window; otherwise, determine that the upstream container group Pod is not overloaded in the current window.
  • the comprehensive level determination module is used to determine, from the downstream container group Pods required to be called by the multiple first business requests, a downstream container group Pod with the lowest comprehensive level as the first target container group Pod;
  • the communication module is used to send a service request for indicating the call of a downstream container group Pod with a higher comprehensive level than the first target container group Pod among the multiple first service requests to the corresponding downstream container group Pod, and discard the remaining service requests.
  • the comprehensive level determination module is used to:
  • when it is determined that the upstream container group Pod is overloaded, determine the comprehensive level of each downstream container group Pod in at least one downstream container group Pod that the multiple second service requests received by the upstream container group Pod need to call;
  • determine, from the downstream container group Pods that the multiple second business requests need to call, the downstream container group Pods having a higher comprehensive level than the first target container group Pod as a first set;
  • the communication module is used for:
  • the service requests for indicating the call of the downstream pod whose comprehensive level is higher than the second target container group pod among the multiple second service requests are sent to the corresponding downstream container group Pod, and the remaining service requests are discarded.
  • the comprehensive level determination module is used to:
  • if the upstream container group Pod is overloaded, determine the comprehensive level of each downstream container group Pod in at least one downstream container group Pod that the multiple third service requests received by the upstream container group Pod need to call;
  • determine, from the downstream container group Pods that the multiple third business requests need to call, the downstream container group Pods having a higher comprehensive level than the second target container group Pod as a second set;
  • the communication module is used for:
  • the service requests for indicating the call of the downstream pod whose comprehensive level is higher than the third target container group pod among the multiple third service requests are sent to the corresponding downstream container group Pod, and the remaining service requests are discarded.
  • the comprehensive level of the downstream container group Pod is represented by a numerical value.
  • Each functional module of the second aspect is used to implement the method described in the first aspect and any possible implementation method of the first aspect.
  • the present application provides a computing device cluster, comprising at least one computing device, each of the at least one computing device comprising a memory and a processor, the processor of the at least one computing device being used to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster executes the method described in the first aspect and any possible implementation method of the first aspect.
  • the present application provides a computer storage medium containing instructions, which, when executed in a computing device cluster, enables the computing device cluster to execute the method described in the first aspect and any possible implementation of the first aspect.
  • the present application provides a computer program product comprising program instructions.
  • When the program instructions are executed on a computing device cluster, the computing device cluster executes the method described in the first aspect and any possible implementation of the first aspect.
  • FIG1 is a schematic diagram of a system architecture provided by the present application.
  • FIG2 is a flow chart of a service mesh rate limiting method provided by the present application.
  • FIG3 is a schematic flow chart of a first rate limiting operation provided by the present application.
  • FIG4 is an example diagram provided by the present application.
  • FIG5 is a schematic diagram of the structure of a service mesh rate limiting device provided by the present application.
  • FIG6 is a schematic diagram of the structure of a computing device provided by the present application.
  • FIG7 is a schematic diagram of the structure of a computing device cluster provided by the present application.
  • FIG8 is a schematic diagram of the structure of another computing device cluster provided in the present application.
  • Service mesh is the infrastructure layer for communication between many services. It is responsible for describing the complex service topology relationship of cloud-native applications and controlling network traffic between various microservices.
  • a cloud may include multiple server nodes, and a server node may be a virtual machine or a physical host.
  • a server node can include one or more container groups Pod.
  • A Pod is the basic unit used by Kubernetes (K8s, an open source container orchestration engine from Google) to deploy, manage, and orchestrate containerized applications.
  • Pod can include one or more containers.
  • the sidecar container and the application container are deployed in a Pod, as shown in Figure 1.
  • The sidecar container is used to manage and control network traffic. Specifically, the sidecar container processes each business request received by this Pod, including performing rate limiting operations and distributing business requests to the corresponding Pods.
  • the application container is used to implement the business functions of microservices. It can be understood that any business request must first pass through the sidecar container, and the sidecar container distributes the business request.
  • the sidecar container is used to distribute the business request to the application container where the Pod is located, so that the application container where the Pod is located can process the business request to achieve the corresponding function; or, the sidecar container is used to distribute the business request to the sidecar container in other Pods, and the sidecar container in other Pods will send the received business request to the application container in its own Pod, so that the application container in its own Pod can process the business request to achieve the corresponding function.
  • an application can include multiple microservices, and multiple microservices can be implemented in one or more application containers. Therefore, an application can be implemented in one or more application containers. Multiple microservices of an application can have a calling relationship, and different applications can also have a calling relationship.
  • The service mesh management platform specifies the arrangement of each sidecar container and the calling relationship between microservices, where the calling relationship between microservices can also be called the network topology relationship.
  • the system architecture shown in Figure 1 may also include a service mesh management platform (not shown in Figure 1).
  • the service mesh management platform is used to manage each sidecar container, such as adding or deleting a sidecar container.
  • the service mesh management platform is also used to manage the calling relationship and calling frequency between each microservice, for example, the number of times microservice a (implemented in Poda) is allowed to call microservice b (implemented in Podb) per unit time.
  • For example, microservice a is implemented in Pod a and microservice b is implemented in Pod b. If microservice a can call microservice b, then Pod a and Pod b have an upstream-downstream correspondence relationship: Pod a is the upstream and Pod b is the downstream.
  • FIG. 1 may include more or fewer server nodes, and a server node may include more or fewer Pods.
  • the network topology relationship is merely an example, and FIG. 1 does not constitute a limitation on the present application.
  • The present application provides a method for rate limiting based on service level by introducing a rate limiting software development kit (SDK) inside the application container.
  • The rate limiting SDK determines the service level and overload status of the received service requests, selects service requests with a high service level for processing, and discards service requests with a low service level.
  • This method requires introducing the SDK into the application container, with the microservices and the SDK cooperating to process business requests.
  • It is therefore an intrusive container rate limiting method. With the development of cloud computing, the number of microservices has increased and the calling relationships between microservices have become complex, so this intrusive rate limiting method is complex to implement and makes business development difficult.
  • The present application also provides a non-intrusive service mesh rate limiting method, which is applied to an upstream Pod having an upstream-downstream correspondence relationship, and specifically can be applied to the sidecar container in the upstream Pod. See Figure 2, a flow chart of a service mesh rate limiting method provided by the present application. The method includes but is not limited to the following content.
  • S101: This Pod obtains the resource usage rate of this Pod in the current time window.
  • this Pod refers to the upstream Pod.
  • the time window refers to a time interval of a preset duration, which can be one minute, one second, or other preset durations, which are not limited in this application.
  • the resource utilization of this Pod includes the processor utilization of this Pod and/or the memory utilization of this Pod.
  • the processor utilization of this Pod refers to the ratio of the amount of processor resources consumed by this Pod to the total amount of processor resources of this Pod.
  • the memory utilization of this Pod refers to the ratio of the memory capacity occupied by this Pod to the total memory capacity of this Pod.
  • A resource capacity has been allocated to each Pod, including the processor capacity and the memory capacity of each Pod.
  • S102: Determine whether this Pod is overloaded in the current time window according to the resource usage rate of this Pod.
  • the overload threshold of this Pod includes the processor usage overload threshold of this Pod and/or the memory usage overload threshold of this Pod.
  • the overload threshold of this Pod is pre-set.
  • the overload threshold of the Pod includes a processor usage overload threshold. If the processor usage of the Pod is greater than the processor usage overload threshold, it is determined that the Pod is overloaded in the current time window.
  • the overload threshold of the Pod includes a memory usage overload threshold. If the memory usage of the Pod is greater than the memory usage overload threshold, it is determined that the Pod is overloaded in the current time window.
  • the overload threshold of this Pod includes a memory usage overload threshold and a processor usage overload threshold. If the memory usage of this Pod is greater than the memory usage overload threshold and the processor usage is greater than the processor usage overload threshold, it is determined that this Pod is overloaded in the current time window.
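  • The following is a minimal sketch, in Go, of the per-window overload check described in the steps above; the struct fields and how the usage ratios are obtained are illustrative assumptions. The sketch uses the variant in which exceeding either threshold counts as overload; the application also describes variants in which only one threshold is configured or both must be exceeded.

```go
// Minimal sketch of the per-window overload check based on processor and
// memory usage ratios compared against preset overload thresholds.
package overload

type PodUsage struct {
	CPUUsage    float64 // consumed processor resources / allocated processor capacity
	MemoryUsage float64 // occupied memory / allocated memory capacity
}

type Thresholds struct {
	CPU    float64 // processor usage overload threshold
	Memory float64 // memory usage overload threshold
}

// IsOverloaded reports whether the Pod is considered overloaded in the
// current time window: the processor usage or the memory usage exceeds
// its preset overload threshold.
func IsOverloaded(u PodUsage, t Thresholds) bool {
	return u.CPUUsage > t.CPU || u.MemoryUsage > t.Memory
}
```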
  • When it is determined that this Pod is overloaded in the current time window, a first rate limiting operation is performed, which may specifically include but is not limited to step S1031 and step S1032; see FIG3, a schematic flow chart of the first rate limiting operation provided in the present application.
  • S1031: This Pod receives multiple first business requests. Based on the first business requests, this Pod can identify which microservice each request corresponds to; in other words, this Pod can identify which downstream Pod each request needs to call. According to the multiple first business requests, the one or more downstream Pods that the multiple first business requests need to call are first determined, and then the comprehensive level of each of these downstream Pods is calculated. The comprehensive level of each downstream Pod changes dynamically: for any downstream Pod, the number of business requests it discards differs between time windows.
  • The comprehensive level of each Pod is determined based on the static business level of the Pod and the actual ability of the Pod to process business requests, where the number of business requests discarded by the downstream Pod reflects that actual ability. How to understand the ability of a Pod to actually process business requests is described below.
  • In one example, this Pod is Pod A, and the downstream Pods that the first service requests received by Pod A need to call include Pod B, Pod C, and Pod D, where each Pod corresponds to a microservice and is used to implement a certain business function.
  • the comprehensive level of the downstream container group Pod can be determined based on the static business level of the downstream container group Pod, the business request discard rate threshold of the downstream container group Pod, and the overload rate of the downstream container group Pod.
  • the comprehensive level of Pod B can be determined based on the static business level of downstream Pod B, the business request discard rate threshold of Pod B, and the overload rate of Pod B.
  • the comprehensive levels of Pod C and Pod D can be determined.
  • the static business level of each Pod is pre-set according to the business implemented by the microservices in each Pod.
  • Taking downstream Pod B and Pod C in Figure 4 as examples, the business request discard rate threshold and the business request discard rate of a downstream Pod are explained.
  • For the downstream Pod B, in a certain time window, the number of business requests sent by the upstream Pod A to the downstream Pod B is x1, but the downstream Pod B only processes y1 of the x1 business requests, where y1 is less than x1.
  • the upstream Pod A receives the response of y1 business requests returned by the downstream Pod B, and x1-y1 business requests are discarded by the downstream Pod B. Therefore, the upstream Pod A can determine that the business request discard rate of the downstream Pod B in the current time window is (x1-y1)/x1.
  • the business request discard rate threshold of the downstream Pod B refers to the ratio of the number of business requests that the upstream Pod A allows the downstream Pod B to discard to the total number of business requests sent by the upstream Pod A to the downstream Pod B.
  • For the downstream Pod C, the number of business requests sent by the upstream Pod A to the downstream Pod C is x2, but the downstream Pod C only processes y2 of the x2 business requests, where y2 is less than x2.
  • The upstream Pod A receives the responses of the y2 business requests returned by the downstream Pod C, and x2-y2 business requests are discarded by the downstream Pod C. Therefore, the upstream Pod A can determine that the business request discard rate of the downstream Pod C in the current time window is (x2-y2)/x2.
  • The business request discard rate threshold of the downstream Pod C refers to the ratio of the number of business requests that the upstream Pod A allows the downstream Pod C to discard to the total number of business requests sent by the upstream Pod A to the downstream Pod C.
  • The business request discard rate and the business request discard rate threshold of other downstream Pods are defined similarly.
  • the static service level of each Pod can be represented by a numerical value.
  • 0-100 can be used to represent the high and low static service levels. The larger the numerical value, the lower the static service level. 0 represents the highest static service level, and 100 represents the lowest static service level.
  • the comprehensive level of the downstream Pod can be calculated using the Euclidean distance algorithm based on the static service level of the downstream Pod, the service request discard rate threshold of the downstream Pod, and the overload rate of the downstream Pod.
  • p is used to represent the static service level of the downstream Pod, where the value range of p is [0,100], L is the service request discard rate threshold of the downstream Pod, and the value range is [0,1], and f is the overload rate, which is in the range of [0,100].
  • The business request discard rate d of a downstream Pod can be expressed as d = r/s, where r represents the number of service requests sent by the upstream Pod that are discarded by the downstream Pod, and s represents the number of service requests sent by the upstream Pod to the downstream Pod.
  • In one example, d represents the service request discard rate per unit time; r and s are then the corresponding counts per unit time.
  • In another example, d represents the service request discard rate in the previous time window; r and s are then the corresponding counts in the previous time window.
  • When the actual service request discard rate d is less than the service request discard rate threshold L, the service requests in the downstream Pod are not overloaded, and the overload rate f is 0.
  • When d is greater than or equal to L, the service requests in the downstream Pod are overloaded or about to be overloaded, and f = 100·γ, where γ is a scaling factor with a value range of [0,1]; the larger the discard rate difference, the larger γ, where the discard rate difference refers to the difference between the actual service request discard rate and the preset discard rate threshold.
  • The specific value of γ can be set according to the service request overload situation; for example, γ can take a smaller value when the overload is small and a larger value when the overload is large.
  • The above calculation of the overload rate and of γ is only an example; in practice, the overload rate can also be calculated in other ways, and γ can also be determined in other ways, which is not limited in this application.
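  • The following is a minimal sketch, in Go, of how the discard rate d and the overload rate f described above could be computed; the function names are illustrative, and the scaling factor γ is passed in by the caller because the application does not fix how γ is derived from the discard rate difference.

```go
// Minimal sketch of the discard rate d = r/s and the overload rate f.
package levelcalc

// DiscardRate returns d = r/s, where discarded (r) is the number of requests
// sent to the downstream Pod that were discarded, and sent (s) is the total
// number of requests sent to that downstream Pod.
func DiscardRate(discarded, sent int) float64 {
	if sent == 0 {
		return 0
	}
	return float64(discarded) / float64(sent)
}

// OverloadRate returns f: 0 when d is below the discard rate threshold L,
// and 100*gamma otherwise, where gamma in [0,1] is chosen according to the
// overload situation (a larger discard rate difference means a larger gamma).
func OverloadRate(d, threshold, gamma float64) float64 {
	if d < threshold {
		return 0
	}
	return 100 * gamma
}
```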
  • the comprehensive level of the downstream Pod is calculated using the Euclidean distance algorithm.
  • The value representing the comprehensive level of the downstream Pod is the Euclidean norm of the three-dimensional vector (p, 100·L, f), that is, √(p² + (100·L)² + f²), and a smaller value indicates a higher comprehensive level.
  • For the downstream Pod B, the static service level p is 10, the service request discard rate threshold L is 20%, and the service request discard rate d is 0, so the overload rate f is 0.
  • The corresponding three-dimensional vector is (10, 20, 0), and the value representing the comprehensive level of the downstream Pod B is √(10² + 20² + 0²) = √500 ≈ 22.
  • For the downstream Pod C, the static service level p is 30, the service request discard rate threshold L is 10%, and the service request discard rate d is 20%. Since d exceeds L, γ is set to 1 and the overload rate f is 100.
  • The corresponding three-dimensional vector is (30, 10, 100), and the value representing the comprehensive level of the downstream Pod C is √(30² + 10² + 100²) = √11000 ≈ 104.
  • For the downstream Pod D, the static service level p is 40, the service request discard rate threshold L is 30%, and the service request discard rate d is 0, so the overload rate f is 0.
  • The corresponding three-dimensional vector is (40, 30, 0), and the value representing the comprehensive level of the downstream Pod D is √(40² + 30² + 0²) = 50.
  • From this, it can be concluded that among the downstream Pods of the upstream Pod A, the downstream Pod B has the highest comprehensive level, the downstream Pod D has the second highest comprehensive level, and the downstream Pod C has the lowest comprehensive level.
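  • Continuing the sketch above, the comprehensive level value reconstructed from this worked example is the Euclidean norm of the vector (p, 100·L, f), with a smaller value meaning a higher comprehensive level; the function name is an illustrative assumption.

```go
// Minimal sketch of the comprehensive level value as the Euclidean norm of
// the three-dimensional vector (p, 100*L, f).
package levelcalc

import "math"

// ComprehensiveLevelValue returns sqrt(p^2 + (100*L)^2 + f^2), where p is the
// static service level in [0,100], L is the discard rate threshold in [0,1],
// and f is the overload rate in [0,100]. A smaller result means a higher
// comprehensive level.
func ComprehensiveLevelValue(p, L, f float64) float64 {
	return math.Sqrt(p*p + (100*L)*(100*L) + f*f)
}
```

  • With the example values above, ComprehensiveLevelValue(10, 0.20, 0) ≈ 22.4, ComprehensiveLevelValue(30, 0.10, 100) ≈ 104.9, and ComprehensiveLevelValue(40, 0.30, 0) = 50, matching the values 22, 104, and 50 given below for Pods B, C, and D.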
  • When L and f are constant, the larger the static service level value p, the larger the value of the comprehensive level of the downstream Pod, and the lower the comprehensive level of the downstream Pod. It can be understood that the larger p is, the lower the static service level of the downstream Pod, the lower the service level of the microservice in the Pod, and the lower the calculated comprehensive level; when p and f are constant, the larger L is, the larger the value of the comprehensive level of the downstream Pod, and the lower the comprehensive level of the downstream Pod.
  • the service request discard rate threshold can be understood as the tolerance for the number of discarded service requests.
  • the step of calculating the comprehensive level of each Pod is performed by this Pod (upstream Pod).
  • The static service level of each Pod is globally shared in the service mesh, so this Pod (the upstream Pod) can obtain the static service level of each downstream Pod and thus calculate the comprehensive level of each downstream Pod.
  • S1032: Among the multiple first business requests, the business requests that call the downstream Pod with the lowest comprehensive level are discarded, and the other business requests are sent to the corresponding downstream Pods.
  • the downstream Pods that need to be called in each business request received by upstream Pod A include Pod B, Pod C, and Pod D.
  • Pod C has the lowest comprehensive level, so Pod A discards the business request used to indicate the call of Pod C in each business request, and sends the other business requests to Pod B and Pod D accordingly.
  • the value representing the comprehensive level of Pod B is 22
  • the value representing the comprehensive level of Pod C is 104
  • The value representing the comprehensive level of Pod D is 50. According to these values, it is determined that Pod C has the lowest comprehensive level, so Pod A discards the service requests that instruct calling Pod C among the received service requests, and sends the other service requests to Pod B and Pod D accordingly. That is, in this application, discarding a service request can be understood as not processing the service request.
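  • The following is a minimal sketch, in Go, of this first rate limiting operation: requests that call the downstream Pod with the lowest comprehensive level (the largest level value) are discarded and the rest are forwarded. The Request type and the function name are illustrative assumptions.

```go
// Minimal sketch of the first rate limiting operation: discard the requests
// that target the downstream Pod with the largest comprehensive level value
// (i.e. the lowest comprehensive level) and forward the rest.
package limiter

type Request struct {
	Target string // name of the downstream Pod this request needs to call
}

// FilterFirst splits requests into those to forward and those to discard,
// given the comprehensive level value of each downstream Pod.
func FilterFirst(reqs []Request, levelValue map[string]float64) (forward, discard []Request) {
	var worst string // downstream Pod with the largest level value
	for pod, v := range levelValue {
		if worst == "" || v > levelValue[worst] {
			worst = pod
		}
	}
	for _, r := range reqs {
		if r.Target == worst {
			discard = append(discard, r)
		} else {
			forward = append(forward, r)
		}
	}
	return forward, discard
}
```

  • With the example values above (Pod B: 22, Pod C: 104, Pod D: 50), the requests targeting Pod C are discarded and the others are forwarded.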
  • When it is determined that this Pod is not overloaded in the current time window, this Pod sends the multiple first business requests to the corresponding downstream Pods as normal without discarding any business requests, until the next time window arrives, and then determines whether this Pod is overloaded in the next time window. If an overload occurs, step S105 is executed; if not, step S101 is executed.
  • the judgment method is similar to the method described in the above steps S101 and S102: 1) Obtain the resource utilization rate of the Pod in the first time window after the current time window, where the resource utilization rate includes the processor utilization rate and/or the memory utilization rate; 2) According to the resource utilization rate of the Pod, determine whether the Pod is overloaded in the first time window after the current time window. Specifically, the resource utilization rate of the Pod can be compared with the overload threshold to determine whether the Pod is overloaded in the first time window after the current time window.
  • When it is determined that this Pod is overloaded, a second rate limiting operation is performed, specifically including but not limited to the following steps S1061 and S1062.
  • S1061: Based on the second business requests, this Pod can identify which microservice each request corresponds to; in other words, this Pod can identify which downstream Pod each request needs to call. Then, the comprehensive level of the one or more downstream Pods that the multiple second business requests need to call is determined. For any downstream Pod, the comprehensive level is determined based on the static business level of the Pod, the business request discard rate threshold of the Pod, and the business request discard rate of the Pod. In one example, the static business level can be represented by a numerical value, and the comprehensive level of each downstream Pod can be calculated using the Euclidean distance algorithm; for details, refer to the description in step S1031, which is not repeated here for brevity.
  • In one example, the Pod with the lowest comprehensive level in the previous time window is Pod C.
  • From the first target set, the Pod with the lowest comprehensive level is determined and deleted to obtain the second target set.
  • Here, this Pod refers to the upstream Pod.
  • Pod C has the lowest comprehensive level in the previous time window, which means that the comprehensive level value of Pod C is 104.
  • The comprehensive level values of the downstream Pods that the multiple second business requests need to call are calculated; then the Pods with a comprehensive level value smaller than 104 (Pods with a higher comprehensive level than Pod C) are determined as the first target set; and then the Pod with the largest comprehensive level value (the Pod with the lowest comprehensive level) is determined from the first target set and deleted to obtain the second target set.
  • The business requests among the multiple second business requests that call any Pod in the second target set are sent to the corresponding Pods, and the other business requests are discarded.
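  • The following is a minimal sketch, in Go, of the set construction in this second rate limiting operation; it reuses the illustrative Request type from the previous sketch, and the function name and parameters are assumptions.

```go
// Minimal sketch of the second rate limiting operation: Pods whose level
// value is smaller than that of the previous window's lowest-level Pod form
// the first target set; the worst Pod in that set is removed to obtain the
// second target set; only requests calling Pods in the second target set are
// forwarded.
package limiter

// FilterSecond forwards only requests that call Pods in the second target set.
// prevWorstValue is the comprehensive level value of the Pod that had the
// lowest comprehensive level in the previous time window (104 for Pod C above).
func FilterSecond(reqs []Request, levelValue map[string]float64, prevWorstValue float64) (forward, discard []Request) {
	firstSet := map[string]bool{}
	var worst string // Pod with the largest level value within the first set
	for pod, v := range levelValue {
		if v < prevWorstValue {
			firstSet[pod] = true
			if worst == "" || v > levelValue[worst] {
				worst = pod
			}
		}
	}
	delete(firstSet, worst) // second target set: first set minus its worst Pod
	for _, r := range reqs {
		if firstSet[r.Target] {
			forward = append(forward, r)
		} else {
			discard = append(discard, r)
		}
	}
	return forward, discard
}
```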
  • If this Pod is not overloaded, step S101 is executed.
  • In the subsequent time window, it is determined whether this Pod is overloaded; the method is similar to that of step S105.
  • If this Pod is overloaded, a third rate limiting operation is performed, including:
  • determining the comprehensive level of the one or more downstream Pods that the multiple third service requests received by this Pod need to call; for this step, reference may be made to the relevant description of step S1061, which is not repeated here;
  • In another case, in step S101 it is determined that this Pod (the upstream Pod) is not overloaded, that is, this Pod is not overloaded in a time window before the current time window.
  • The first rate limiting operation is performed in the (n+1)th time window, where n can be set by the user according to the specific situation.
  • A rate limiting operation similar to the first rate limiting operation or the second rate limiting operation may be performed to discard some business requests.
  • The embodiment of the present application describes the rate limiting method for two consecutive time windows in steps S101 to S107.
  • In one example, if this Pod (the upstream Pod) is overloaded in three consecutive time windows, the Pod performs a rate limiting operation in each of the three time windows in which the overload occurs. The comprehensive level of the downstream Pods called by the business requests discarded in the second rate limiting operation is higher than that of the downstream Pods called by the business requests discarded in the first rate limiting operation, and the comprehensive level of the downstream Pods called by the business requests discarded in the third rate limiting operation is higher than that of the downstream Pods called by the business requests discarded in the second rate limiting operation, so that the overload of the Pod is eliminated as soon as possible.
  • This application provides a service mesh rate limiting method, which determines whether the upstream Pod is overloaded according to the resource usage of the upstream Pod, and performs rate limiting when an overload is determined.
  • During rate limiting, the static business level of the downstream Pods and their actual overload situation are considered together to determine the comprehensive level of each downstream Pod, and business requests with a low comprehensive level are discarded to ensure the stability of the system and of the business.
  • the Pod with a low comprehensive level may have a low static business level (low microservice level) or may be in a high load state.
  • FIG. 5 is a schematic diagram of the structure of a service mesh rate limiting device 500 provided in the present application.
  • the device 500 includes an upstream container group Pod having an upstream and downstream correspondence relationship.
  • a sidecar container and an application container are deployed in each container group Pod.
  • the application container is used to process business requests, and the sidecar container is used to control the traffic of business requests.
  • the device 500 includes:
  • the comprehensive level determination module 510 is used to determine the comprehensive level of each downstream container group Pod in at least one downstream container group Pod required to be called by multiple first business requests received by the upstream container group Pod when it is determined that the upstream container group Pod is overloaded.
  • the comprehensive level of the downstream container group Pod is determined based on the preset static business level of the downstream container group Pod and the actual ability of the downstream container group Pod to process business requests;
  • the communication module 520 is used to send some of the multiple first business requests to the corresponding downstream container group Pod according to the comprehensive levels of each downstream container group Pod required to be called by the multiple first business requests, and discard the remaining business requests except some of the business requests.
  • the ability of the downstream container group Pod to actually process business requests includes one or more of a business request discard rate threshold of the downstream container group Pod and an overload rate of the downstream container group Pod.
  • the overload rate of the downstream container group Pod is positively correlated with the discard rate difference.
  • the discard rate difference refers to the difference between the business request discard rate of the downstream container group Pod and the business request discard rate threshold.
  • the comprehensive level of the downstream container group Pod is positively correlated with the static business level of the downstream container group Pod; the comprehensive level of the downstream container group Pod is negatively correlated with the business request discard rate threshold of the downstream container group Pod; the comprehensive level of the downstream container group Pod is negatively correlated with the overload rate of the downstream container group Pod.
  • the acquisition module 530 is used to obtain the resource usage rate of the upstream container group Pod in the current time window; the overload determination module 540 is used to determine whether the upstream container group Pod is overloaded in the current window according to the resource usage rate of the upstream container group Pod.
  • the resource usage of the upstream container group Pod includes the processor usage and/or memory usage in the upstream container group Pod; the overload determination module 540 is used to: when the processor usage is greater than a set processor usage overload threshold, and/or when the memory usage is greater than a set memory usage overload threshold, determine that the upstream container group Pod is overloaded in the current window; otherwise, determine that the upstream container group Pod is not overloaded in the current window.
  • the comprehensive level determination module 510 is used to determine, from among the downstream container group Pods required to be called by the multiple first business requests, a downstream container group Pod with the lowest comprehensive level as the first target container group Pod;
  • the communication module 520 is used to send the service request for instructing to call the downstream container group Pod with a higher comprehensive level than the first target container group Pod among the multiple first service requests to the corresponding downstream container group Pod, and discard the remaining service requests.
  • the comprehensive level determination module 510 is used to:
  • when it is determined that the upstream container group Pod is overloaded, determine the comprehensive level of each downstream container group Pod in at least one downstream container group Pod that the multiple second service requests received by the upstream container group Pod need to call;
  • the communication module 520 is used for:
  • the service requests for indicating the call of the downstream pod with a higher comprehensive level than the second target container group pod in the multiple second service requests are sent to the corresponding downstream container group Pod, and the remaining service requests are discarded.
  • the comprehensive level determination module is used to:
  • if the upstream container group Pod is overloaded, determine the comprehensive level of each downstream container group Pod in at least one downstream container group Pod that the multiple third business requests received by the upstream container group Pod need to call;
  • the communication module is used for:
  • a business request for indicating a call to a downstream pod having a higher comprehensive level than the third target container group pod among the multiple third business requests is sent to the corresponding downstream container group Pod, and the remaining business requests are discarded.
  • the comprehensive level of the downstream container group Pod is represented by a numerical value.
  • the comprehensive level determination module 510, the communication module 520, the acquisition module 530, and the overload determination module 540 can all be implemented by software, or can be implemented by hardware.
  • the implementation of the comprehensive level determination module 510 is introduced below by taking the comprehensive level determination module 510 as an example.
  • the implementation of the communication module 520, the acquisition module 530, and the overload determination module 540 can refer to the implementation of the comprehensive level determination module 510.
  • the comprehensive level determination module 510 may include code running on a computing device.
  • the computing device may be at least one of a physical host, a virtual machine, a container, etc. Further, the above-mentioned computing device may be one or more.
  • the comprehensive level determination module 510 may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers used to run the application can be distributed in the same region (region) or in different regions. The multiple hosts/virtual machines/containers used to run the code can be distributed in the same availability zone (AZ) or in different AZs, each AZ including a data center or multiple data centers with similar geographical locations. Among them, usually a region can include multiple AZs.
  • multiple hosts/virtual machines/containers used to run the code can be distributed in the same virtual private cloud (VPC) or in multiple VPCs.
  • a VPC is set up in a region.
  • a communication gateway needs to be set up in each VPC to achieve interconnection between VPCs through the communication gateway.
  • the comprehensive level determination module 510 may include at least one computing device, such as a server, etc.
  • the comprehensive level determination module 510 may also be a device implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the PLD may be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL) or any combination thereof.
  • the multiple computing devices included in the comprehensive level determination module 510 can be distributed in the same region or in different regions.
  • the multiple computing devices included in the comprehensive level determination module 510 can be distributed in the same AZ or in different AZs.
  • the multiple computing devices included in the comprehensive level determination module 510 can be distributed in the same VPC or in multiple VPCs.
  • the multiple computing devices can be any combination of computing devices such as servers, ASICs, PLDs, CPLDs, FPGAs, and GALs.
  • FIG. 6 is a schematic diagram of the structure of a computing device 600 provided in the present application.
  • the computing device 600 is, for example, a bare metal server, a virtual machine, a container, etc.
  • the computing device 600 can be configured as an upstream container group Pod in the method embodiment, and can be specifically configured as a sidecar container in the upstream container group Pod.
  • the computing device 600 includes: a bus 602, a processor 604, a memory 606, and a communication interface 608.
  • the processor 604, the memory 606, and the communication interface 608 communicate through the bus 602. It should be understood that the present application does not limit the number of processors and memories in the computing device 600.
  • the bus 602 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus may be divided into an address bus, a data bus, a control bus, etc.
  • The bus in FIG. 6 is represented by only one line, but this does not mean that there is only one bus or only one type of bus.
  • the bus 602 may include a path for transmitting information between various components of the computing device 600 (e.g., the memory 606, the processor 604, and the communication interface 608).
  • Processor 604 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
  • the memory 606 may include a volatile memory, such as a random access memory (RAM).
  • The memory 606 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • The memory 606 stores executable program code, and the processor 604 executes the executable program code to respectively implement the functions of the aforementioned comprehensive level determination module 510, communication module 520, acquisition module 530, and overload determination module 540, thereby implementing a service mesh rate limiting method. That is, the memory 606 stores instructions for executing a service mesh rate limiting method.
  • the communication interface 608 uses a transceiver module such as, but not limited to, a network interface card or a transceiver to implement communication between the computing device 600 and other devices or a communication network.
  • the communication module 520 may be located in the communication interface 608 .
  • the embodiment of the present application also provides a computing device cluster.
  • the computing device cluster includes at least one computing device.
  • the computing device can be a server, a virtual machine, or a container, such as a central server, an edge server, or a sidecar container.
  • Figure 7 is a structural diagram of a computing device cluster provided by the present application, wherein the computing device cluster includes at least one computing device 600.
  • The computing device cluster can be configured as a sidecar container, and the memory 606 in one or more computing devices 600 in the computing device cluster can store the same instructions for executing a service mesh rate limiting method.
  • The memory 606 of one or more computing devices 600 in the computing device cluster may also store partial instructions for executing a service mesh rate limiting method.
  • The combination of one or more computing devices 600 can be used to jointly execute the instructions of a service mesh rate limiting method.
  • the memory 606 in different computing devices 600 in the computing device cluster may store different instructions, which are respectively used to execute part of the functions of the device 500. That is, the instructions stored in the memory 606 in different computing devices 600 may implement the functions of one or more modules among the comprehensive level determination module 510, the communication module 520, the acquisition module 530, and the overload determination module 540.
  • one or more computing devices in a computing device cluster may be connected via a network.
  • the network may be a wide area network or a local area network, etc.
  • FIG. 8 shows a possible implementation of a computing device cluster. As shown in FIG. 8 , two computing devices 600A and 600B are connected via a network. Specifically, the network is connected via a communication interface in each computing device.
  • the memory 606 in the computing device 600A stores instructions for executing the functions of the acquisition module 530 and the overload determination module 540
  • the memory 606 in the computing device 600B stores instructions for executing the functions of the communication module 520 and the comprehensive level determination module 510.
  • the function of the computing device 600A shown in Figure 8 can also be completed by multiple computing devices 600, or the computing device cluster includes multiple computing devices with the same function as the computing device 600A.
  • the function of the computing device 600B can also be completed by multiple computing devices 600, or the computing device cluster includes multiple computing devices with the same function as the computing device 600B.
  • the embodiment of the present application also provides another computing device cluster.
  • the connection relationship between the computing devices in the computing device cluster can be similar to the connection method of the computing device cluster described in Figures 7 and 8.
  • the memory 606 in one or more computing devices 600 in the computing device cluster may store different instructions for executing a service grid current limiting method.
  • the memory 606 of one or more computing devices 600 in the computing device cluster may also respectively store partial instructions for executing a service grid current limiting method.
  • a combination of one or more computing devices 600 can jointly execute instructions for executing a service grid current limiting method.
  • the embodiment of the present application also provides a computer program product including instructions.
  • the computer program product may be a software or program product including instructions that can be run on a computing device or stored in any available medium.
  • when the computer program product runs on at least one computing device, the at least one computing device is caused to execute a service grid current limiting method.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium can be any available medium that a computing device can store, or a data storage device, such as a data center, that contains one or more available media.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state hard disk).
  • the computer-readable storage medium includes instructions that instruct a computing device or a computing device cluster to execute a service grid current limiting method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

本申请提供了一种服务网格限流方法及相关装置,所述方法应用于具有上下游对应关系的上游容器组Pod,所述方法包括:在确定上游容器组Pod发生过载的情况下,确定上游容器组Pod对应的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;根据多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将多个第一业务请求中除部分业务请求之外的剩余部分业务请求丢弃。本申请综合考虑了下游Pod的业务等级和真实过载情况,选择综合等级低的业务请求丢弃,从而保证系统的稳定性和业务的稳定性,提高服务质量。

Description

一种服务网格限流方法及相关装置
本申请要求于2022年12月13日提交中国国家知识产权局、申请号为202211595059.0、发明名称为“一种服务网格限流方法及相关装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及云计算领域,尤其涉及一种服务网格限流方法及相关装置。
背景技术
随着云技术的发展,越来越多的应用程序通过微服务的形式实现,由于微服务众多,微服务之间的调用关系复杂,当后台系统在面临大量业务请求时,容易造成过载的情况,这就需要对业务请求进行限流,避免系统崩溃,又要尽量保持业务稳定。
常用的限流方法是,基于业务请求速率的阈值进行限流控制,即,计算一个时间窗口内的业务请求量,当一个时间窗口内的业务请求量超过设定的阈值的情况下,不再接收业务请求或者丢弃新的业务请求,直至当前时间窗口结束。上述限流方法,采用一刀切的方式进行限流,没有关注业务等级,容易导致业务受损。另外,有可能存在这种情况:上游Pod中的业务请求量过载,但是该上游Pod对应的某个下游Pod可能处于空闲状态,这种情况下,上游Pod采用一刀切的方式丢弃新的业务请求是不合理的。
发明内容
本申请提供了一种服务网格限流方法及相关装置,采用本申请所述的方法,上游节点能够根据下游节点的综合情况来进行限流,提高业务服务质量。
第一方面,本申请提供了一种服务网格限流方法,所述方法应用于具有上下游对应关系的上游容器组Pod,每个容器组Pod中部署了边车容器与应用程序容器,所述应用程序容器用于处理所述业务请求,所述边车容器用于对业务请求的流量进行管控,所述方法包括:
在当前时间窗口下:
在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第一业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级,所述下游容器组Pod的综合等级是基于预先设置的所述下游容器组Pod的静态业务等级和所述下游容器组Pod实际处理业务请求的能力确定的;
根据所述多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将所述多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将所述多个第一业务请求中除所述部分业务请求之外的剩余部分业务请求丢弃。
可以看到,上游容器组Pod根据下游容器组Pod的综合等级,确定将哪些第一业务请求丢弃,从而达到限流的目的,其中各个下游容器组Pod的综合等级并非是根据单一因素确定的,而是根据各个下游容器组Pod的静态业务等级和实际处理业务请求的能力确定的。采用本申请所述的限流方法来确定丢弃的部分请求是哪些,需要处理的部分请求是哪些,并非通过单一因素确定或者一刀切的方式,提高了业务服务的质量。
基于第一方面,在可能的实现方式中,所述下游容器组Pod实际处理业务请求的能力包括所述下游容器组Pod的业务请求丢弃率阈值和所述下游容器组Pod的过载率中的一项或多项,所述下游容器组Pod的过载率与丢弃率差值成正相关,所述丢弃率差值指的是所述下游容器组Pod的业务请求丢弃率与所述业务请求丢弃率阈值的差值。
业务请求丢弃率阈值指的是预先设置的业务请求丢弃率的界限,可以理解为允许丢弃的业务请求的数量是多少,或者可以理解为该下游容器组Pod对丢弃的业务请求的容忍度是多少,因此预先设置的业务请求率阈值可以反映该下游容器组Pod实际处理业务请求的能力。过载率可以体现该下游容器组Pod的实际处理业务请求的能力,过载率与该容器组Pod实际业务请求丢弃率和预设的业务请求丢弃率阈值的差值成正相关。
基于第一方面,在可能的实现方式中,所述下游容器组Pod的综合等级与所述下游容器组Pod的静态业务等级成正相关;所述下游容器组Pod的综合等级与所述下游容器组Pod的业务请求丢弃率阈值成负相 关;所述下游容器组Pod的综合等级与所述下游容器组Pod的过载率成负相关。
可以理解,静态业务等级越低,该容器组Pod的综合等级越低;预设的业务请求丢弃率阈值越大,表示该容器组Pod允许丢弃的业务请求的数量越多,则该容器组Pod的综合等级越低,预设的业务请求丢弃率阈值越小,表示该容器组Pod允许丢弃的业务请求的数量越少,则该容器组Pod的综合等级越高;实际业务请求丢弃率和预设的业务请求丢弃率阈值的差值越大,表示该容器组Pod过载程度越高,则该容器组Pod的综合等级越低。
基于第一方面,在可能的实现方式中,所述方法还包括:获取当前时间窗口下所述上游容器组Pod的资源使用率;根据所述上游容器组Pod的资源使用率,确定当前窗口下所述上游容器组Pod是否发生过载。
基于第一方面,在可能的实现方式中,所述上游容器组Pod的资源使用率包括所述上游容器组Pod中处理器的使用率和/或内存的使用率;所述根据所述上游容器组Pod的资源使用率,确定当前窗口下所述上游容器组Pod是否发生过载,包括:当所述处理器的使用率大于设置的处理器使用率过载阈值,和/或当所述内存的使用率大于设置的内存使用率过载阈值时,确定当前窗口下所述上游容器组Pod发生过载;否则,确定当前窗口下所述上游容器组Pod未发生过载。
基于第一方面,在可能的实现方式中,所述根据所述多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将所述多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将所述多个第一业务请求中除所述部分业务请求之外的剩余部分业务请求丢弃,包括:
从所述多个第一业务请求所需调用的各个下游容器组Pod中,确定出综合等级最低的下游容器组Pod,作为第一目标容器组Pod;
将所述多个第一业务请求中用于指示调用综合等级高于所述第一目标容器组Pod的下游容器组Pod的业务请求下发至对应的下游容器组Pod,将剩余部分业务请求丢弃。
可以理解,将调用综合等级最低的下游容器组Pod的第一业务请求丢弃,下游容器组Pod的综合等级最低包括以下情况中的任意一种或多种:下游容器组Pod的静态业务等级比较低;下游容器组Pod预设的业务请求丢弃率阈值比较大(或者称,下游容器组Pod允许丢弃的业务请求的数量较多);下游容器组Pod的过载率(过载程度)较高。
基于第一方面,在可能的实现方式中,所述方法还包括:
在所述当前时间窗口之后的第一个时间窗口下:
在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第二业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
从所述多个第二业务请求所需调用的各个下游容器组Pod中,确定出综合等级比所述第一目标容器组Pod高的下游容器组Pod,作为第一集合;
确定出所述第一集合中综合等级最低的下游容器组Pod,作为第二目标容器组pod;
将所述多个第二业务请求中用于指示调用综合等级高于所述第二目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
可以看到,当前时间窗口下上游容器组Pod过载的情况下,丢弃的是调用综合等级最低的下游容器组Pod的业务请求,在当前时间窗口之后的第一个时间窗口下上游容器组Pod过载的情况下,丢弃的是调用比上一个时间窗口中综合等级最低的下游容器组Pod高的容器组Pod的业务请求,因此,当某个上游容器组Pod连续时间窗口发生过载的情况下,丢弃的业务请求所调用的下游容器组Pod的综合等级越来越高,以尽快消除当前上游容器组Pod和/或下游容器组Pod的过载状态。
基于第一方面,在可能的实现方式中,所述方法还包括:
在所述当前时间窗口之后的第二个时间窗口下:
若所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第三业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
从所述多个第三业务请求所需调用的各个下游容器组Pod中,确定出综合等级比所述第二目标容器组Pod高的下游容器组Pod,作为第二集合;
确定出所述第二集合中综合等级最低的下游容器组Pod,作为第三目标容器组pod;
将所述多个第三业务请求中用于指示调用综合等级高于所述第三目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
可以看到,当某个上游容器组Pod连续时间窗口发生过载的情况下,丢弃的业务请求所调用的下游容器组Pod的综合等级越来越高,目的是尽快消除当前上游容器组Pod和/或下游容器组Pod的过载状态。
基于第一方面,在可能的实现方式中,所述下游容器组Pod的综合等级通过数值表示。
可以理解,综合等级可以通过数值来表示,在一种示例中,数值越大,综合等级越高,在一种示例中,数值越小,综合等级越高。
第二方面,本申请提供了一种服务网格限流装置,所述装置包括具有上下游对应关系的上游容器组Pod,每个容器组Pod中部署了边车容器与应用程序容器,所述应用程序容器用于处理所述业务请求,所述边车容器用于对业务请求的流量进行管控,所述装置包括:
在当前时间窗口下:
综合等级确定模块,用于在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第一业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级,所述下游容器组Pod的综合等级是基于预先设置的所述下游容器组Pod的静态业务等级和所述下游容器组Pod实际处理业务请求的能力确定的;
通信模块,用于根据所述多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将所述多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将所述多个第一业务请求中除所述部分业务请求之外的剩余部分业务请求丢弃。
基于第二方面,在可能的实现方式中,所述下游容器组Pod实际处理业务请求的能力包括所述下游容器组Pod的业务请求丢弃率阈值和所述下游容器组Pod的过载率中的一项或多项,所述下游容器组Pod的过载率与丢弃率差值成正相关,所述丢弃率差值指的是所述下游容器组Pod的业务请求丢弃率与所述业务请求丢弃率阈值的差值。
基于第二方面,在可能的实现方式中,所述下游容器组Pod的综合等级与所述下游容器组Pod的静态业务等级成正相关;所述下游容器组Pod的综合等级与所述下游容器组Pod的业务请求丢弃率阈值成负相关;所述下游容器组Pod的综合等级与所述下游容器组Pod的过载率成负相关。
基于第二方面,在可能的实现方式中,获取模块,用于获取当前时间窗口下所述上游容器组Pod的资源使用率;过载确定模块,用于根据所述上游容器组Pod的资源使用率,确定当前窗口下所述上游容器组Pod是否发生过载。
基于第二方面,在可能的实现方式中,所述上游容器组Pod的资源使用率包括所述上游容器组Pod中处理器的使用率和/或内存的使用率;所述过载确定模块用于:当所述处理器的使用率大于设置的处理器使用率过载阈值,和/或当所述内存的使用率大于设置的内存使用率过载阈值时,确定当前窗口下所述上游容器组Pod发生过载;否则,确定当前窗口下所述上游容器组Pod未发生过载。
基于第二方面,在可能的实现方式中,所述综合等级确定模块用于,从所述多个第一业务请求所需调用的各个下游容器组Pod中,确定出综合等级最低的下游容器组Pod,作为第一目标容器组Pod;
所述通信模块用于,将所述多个第一业务请求中用于指示调用综合等级高于所述第一目标容器组Pod的下游容器组Pod的业务请求下发至对应的下游容器组Pod,将剩余部分业务请求丢弃。
基于第二方面,在可能的实现方式中,所述综合等级确定模块用于:
在所述当前时间窗口之后的第一个时间窗口下:
在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第二业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
从所述多个第二业务请求所需调用的各个下游容器组Pod中,确定出综合等级比所述第一目标容器组Pod高的下游容器组Pod,作为第一集合;
确定出所述第一集合中综合等级最低的下游容器组Pod,作为第二目标容器组pod;
所述通信模块用于:
将所述多个第二业务请求中用于指示调用综合等级高于所述第二目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
基于第二方面,在可能的实现方式中,所述综合等级确定模块用于:
在所述当前时间窗口之后的第二个时间窗口下:
若所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第三业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
从所述多个第三业务请求所需调用的各个下游容器组Pod中,确定出综合等级比所述第二目标容器组Pod高的下游容器组Pod,作为第二集合;
确定出所述第二集合中综合等级最低的下游容器组Pod,作为第三目标容器组pod;
所述通信模块用于:
将所述多个第三业务请求中用于指示调用综合等级高于所述第三目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
基于第二方面,在可能的实现方式中,所述下游容器组Pod的综合等级通过数值表示。
第二方面的各个功能模块用于实现上述第一方面以及第一方面的任意一种可能的实现方式所述的方法。
第三方面,本申请提供了一种计算设备集群,包括至少一台计算设备,所述至少一台计算设备中的每台计算设备包括存储器和处理器,所述至少一台计算设备的处理器用于执行所述至少一台计算设备的存储器中存储的指令,使得所述计算设备集群执行上述第一方面以及第一方面的任意一种可能的实现方式所述的方法。
第四方面,本申请提供了一种包含指令的计算机存储介质,当所述指令在计算设备集群中运行时,使得所述计算设备集群执行上述第一方面以及第一方面的任意一种可能的实现方式所述的方法。
第五方面,本申请提供了一种包含程序指令的计算机程序产品,当所述程序指令在计算设备集群上执行时,所述计算设备集群执行上述第一方面以及第一方面的任意一种可能的实现方式所述的方法。
附图说明
图1为本申请提供的一种系统架构示意图;
图2为本申请提供的一种服务网格限流方法流程示意图;
图3为本申请提供的一种第一限流操作的流程示意图;
图4为本申请提供的一种示例图;
图5为本申请提供的一种服务网格限流装置结构示意图;
图6为本申请提供的一种计算设备的结构示意图;
图7为本申请提供的一种计算设备集群的结构示意图;
图8为本申请提供的又一种计算设备集群的结构示意图。
具体实施方式
在云原生架构中,应用程序通常被设计为多个微服务的分布式集合的形式,其中每个微服务用于执行一些离散的业务功能。服务网格(servicemesh)是众多服务之间通信的基础设施层,用于负责描述云原生应用程序的复杂服务拓扑关系,以及各个微服务之间的网络流量控制。
参见图1,图1为本申请提供的一种系统架构示意图。如图1所示,云中可以包括多个服务器节点,一个服务器节点可以是一个虚拟机,也可以是一个物理主机。
一个服务器节点上可以包括一个或多个容器组Pod,Pod是Kubernetes(Kubernetes是谷歌开源的一种容器编排引擎,简称为K8s)为部署、管理、编排容器化应用的基本单位,Pod可以包括一个或多个容器。
在云原生架构中,边车(sidecar)容器和应用程序容器部署在一个Pod中,如图1所示。边车容器用于对网络流量进行管理控制,具体的,边车容器用于对本Pod接收到的各个业务请求进行处理,处理包括限流操作和将相应的业务请求分发至对应的Pod。应用程序容器用于实现微服务的业务功能。可以理解,任一个业务请求都需先经过边车容器,由边车容器对业务请求进行分发。具体的,边车容器用于将业务请求分发给本Pod所在的应用程序容器,以便本Pod所在的应用程序容器对业务请求进行处理,以实现相应的功能;或者,边车容器用于将业务请求分发给其他Pod中的边车容器,其他Pod中的边车容器将接收到业务请求发送给自身Pod中的应用程序容器,以使自身Pod中的应用程序容器对业务请求进行处理,以实现相应的功能。
可以理解,一个应用程序可以包括多个微服务,多个微服务可以在一个或多个应用程序容器中实现,因此,一个应用程序可以在一个或多个应用程序容器中实现。一个应用程序的多个微服务之间可以具有调用关系,不同应用程序之间也可以具有调用关系。服务网格管理平台中规定了各个边车容器的排布形式以及各个微服务之间的调用关系,其中各个微服务之间的调用关系也可称为网络拓扑关系。
可选的,图1所示的系统架构中,还可以包括服务网格管理平台(图1中未示出),服务网格管理平台用于管理各个边车容器,比如增加或删除边车容器,服务网格管理平台还用于管理各个微服务之间的调用关系、调用频率,比如,单位时间内允许微服务a(对应在Poda中实现)调用微服务b(对应在Podb中实现)的次数。
可以理解,若多个微服务之间具有调用关系,则多个微服务所在的多个Pod具有上下游关系。例如,微服务a在Poda中实现,微服务b在Podb中实现,微服务a可以调用微服务b,则Poda与Podb之间具有上下游对应关系,Poda为上游,Podb为下游。
需要说明的是,图1所示的系统中可以包括更多或更少的服务器节点,一个服务器节点可以包括更多或更少的Pod,网络拓扑关系仅仅是一种示例,图1并不构成对本申请的限定。
本申请提供了一种基于业务等级进行限流的方法,通过在应用程序容器内部引入一个限流软件开发工具包(software development kit,SDK),限流SDK判断接收的业务请求的业务等级和过载情况,选择业务等级高的业务请求进行处理,丢弃业务等级低的业务请求。
但是这种方法,需要在应用程序容器中引入SDK,微服务和SDK相互配合处理业务请求,这种方式属于侵入式容器限流方法。随着云计算的发展,微服务数量增多,微服务之间调用关系复杂,这种侵入式容器限流方法实现起来复杂,业务开发较难。
本申请又提供了一种无侵入式服务网格限流方法,所述方法应用于具有上下游对应关系的上游Pod,具体的,可应用于上游Pod中的边车容器,参见图2,图2为本申请提供的一种服务网格限流方法的流程示意图,所述方法包括但不限于以下内容的描述。
S101、本Pod获取当前时间窗口下本Pod的资源使用率。
本申请所述方法中,本Pod指的是上游Pod。时间窗口指的是预设时长的时间间隔,预设时长可以是一分钟,也可以是一秒钟,还可以是其他预设时长,本申请不做限定。
本Pod的资源使用率包括本Pod的处理器使用率和/或本Pod的内存使用率。本Pod的处理器使用率指的是本Pod已消耗的处理器资源量与本Pod处理器资源总量的比值,本Pod的内存使用率指的是本Pod已占用的内存容量与本Pod内存总容量的比值。其中,在建立各个Pod时,已经为各个Pod分配好了资源容量大小,包括每个Pod的处理器容量大小和内存容量大小。计算本Pod的处理器使用率和本Pod的内存使用率时,可以是在当前时间窗口的某一时刻计算一次,也可以是通过其他方法计算,本申请不做限定。
S102、根据本Pod的资源使用率,确定当前时间窗口下本Pod是否发生过载。
将本Pod的资源使用率与本Pod的过载阈值进行比较,确定当前时间窗口下本Pod是否发生过载。本Pod的过载阈值包括本Pod的处理器使用率过载阈值和/或本Pod的内存使用率过载阈值,本Pod的过载阈值是预先设置好的。
在一种示例中,本Pod的过载阈值包括处理器使用率过载阈值,若本Pod的处理器使用率大于处理器使用率过载阈值,则确定当前时间窗口下本Pod发生过载。
在一种示例中,本Pod的过载阈值包括内存使用率过载阈值,若本Pod的内存使用率大于内存使用率过载阈值,则确定当前时间窗口下本Pod发生过载。
在一种示例中,本Pod的过载阈值包括内存使用率过载阈值和处理器使用率过载阈值,若本Pod的内存使用率大于内存使用率过载阈值且处理器使用率大于处理器使用率过载阈值,则确定当前时间窗口下本Pod发生过载。
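为便于理解,下面给出一段示意性的Go代码,概括步骤S101与S102中“将本Pod的资源使用率与过载阈值比较以判断是否过载”的一种可能实现;其中的类型名与字段名(如ResourceUsage、OverloadThreshold)均为本文为举例而假设的名称,并非本申请限定的实现,判断条件也可按上述三种示例任意配置:

    package limiter

    // ResourceUsage 表示当前时间窗口内采样得到的本Pod资源使用率(示例假设)。
    type ResourceUsage struct {
        CPU    float64 // 处理器使用率,取值范围[0,1]
        Memory float64 // 内存使用率,取值范围[0,1]
    }

    // OverloadThreshold 表示预先设置的本Pod过载阈值(示例假设)。
    type OverloadThreshold struct {
        CPU    float64 // 处理器使用率过载阈值
        Memory float64 // 内存使用率过载阈值
    }

    // isOverloaded 判断当前时间窗口下本Pod是否发生过载:
    // 这里按“处理器使用率或内存使用率任一项超过阈值即过载”实现,
    // 也可以改为两项同时超过阈值才判定过载(对应上文第三种示例)。
    func isOverloaded(u ResourceUsage, t OverloadThreshold) bool {
        return u.CPU > t.CPU || u.Memory > t.Memory
    }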
S103、在确定当前时间窗口下本Pod发生过载的情况下,执行第一限流操作,丢弃调用综合等级最低的下游Pod的业务请求。
在确定当前时间窗口下本Pod发生过载的情况下,执行第一限流操作,具体的,可以包括但不限于步骤S1031和步骤S1032,如图3所示,图3为本申请提供的第一限流操作的方法流程示意图。
S1031、确定本Pod接收到的多个第一业务请求所需调用的至少一个下游Pod中各个下游Pod的综合等级。
当前时间窗口下,本Pod接收到多个第一业务请求,Pod根据第一业务请求可以识别出每个请求对应的是哪个微服务,可以理解为,Pod根据第一业务请求可以识别出每个请求对应需要调用哪个下游Pod。根据多个第一业务请求,首先,可以确定出多个第一业务请求需要调用的一个或多个下游Pod,然后,计算多个第一业务请求需要调用的一个或多个下游Pod的综合等级。其中,各个下游Pod的综合等级是动态变化的,对于任一个下游Pod来说,在不同时间窗口下,每个下游Pod丢弃业务请求的数量不同,每个Pod的综合等级是基于该Pod的静态业务等级和该Pod实际处理业务请求的能力确定的,其中,下游Pod丢弃业务请求的数量多少是实际处理业务请求的能力的一种体现,关于Pod实际处理业务请求的能力的理解在下文中描述,具体参见下文的描述。
例如,参见图4所示的示例图,本Pod为Pod A,Pod A接收到的各个第一业务请求所需调用的下游Pod包括Pod B、Pod C和Pod D,其中,每个Pod对应一个微服务,用于实现某个业务功能。
下面介绍一下如何计算各个下游Pod的综合等级。
对于任一个下游Pod来说,根据该下游容器组Pod的静态业务等级、该下游容器组Pod的业务请求丢弃率阈值、该下游容器组Pod的过载率,可以确定该下游容器组Pod的综合等级。例如,可以根据下游Pod B的静态业务等级、Pod B的业务请求丢弃率阈值和Pod B的过载率,确定Pod B的综合等级。类似的,可以确定Pod C和Pod D的综合等级。其中,各个Pod的静态业务等级是根据各个Pod中微服务实现的业务预先设置的。
分别以图4中的下游Pod B、Pod C为例,解释一下下游Pod的业务请求丢弃率阈值和下游Pod的业务请求丢弃率。
以下游Pod B为例,在某一个时间窗口下,上游Pod A下发给下游Pod B的业务请求数量为x1,但下游Pod B只处理了x1个业务请求中的y1个业务请求,其中y1小于x1,上游Pod A接收到下游Pod B返回的y1个业务请求的响应,x1-y1个业务请求被下游Pod B丢弃,因此,上游Pod A可以确定当前时间窗口下下游Pod B的业务请求丢弃率为(x1-y1)/x1。下游Pod B的业务请求丢弃率阈值指的是上游Pod A允许下游Pod B丢弃的业务请求的数量与上游Pod A发送给下游Pod B的业务请求总数量的比值。
以下游Pod C为例,在某一个时间窗口下,上游Pod A下发给下游Pod C的业务请求数量为x2,但下游Pod C只处理了x2个业务请求中的y2个业务请求,其中y2小于x2,上游Pod A接收到下游Pod C返回的y2个业务请求的响应,x2-y2个业务请求被下游Pod C丢弃,因此,上游Pod A可以确定当前时间窗口下下游Pod C的业务请求丢弃率为(x2-y2)/x2。下游Pod C的业务请求丢弃率阈值指的是上游Pod A允许下游Pod C丢弃的业务请求的数量与上游Pod A发送给下游Pod C的业务请求总数量的比值。对于其他下游Pod,业务请求丢弃率和业务请求丢弃率阈值是类似的。
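下面给出一段示意性的Go代码,说明上游Pod如何根据“下发的业务请求数量”与“收到响应的业务请求数量”统计某个下游Pod在一个时间窗口内的业务请求丢弃率,即上文中的(x-y)/x;类型名与方法名均为示例假设,沿用前文示例中的package limiter:

    package limiter

    // DownstreamStats 记录一个时间窗口内上游Pod针对某个下游Pod的请求统计(示例假设)。
    type DownstreamStats struct {
        Sent      int // 上游Pod下发给该下游Pod的业务请求数量,即x
        Responded int // 上游Pod收到该下游Pod返回响应的业务请求数量,即y
    }

    // DropRate 返回该下游Pod在该时间窗口内的业务请求丢弃率(x-y)/x。
    func (s DownstreamStats) DropRate() float64 {
        if s.Sent == 0 {
            return 0 // 未下发任何请求时按未丢弃处理(示例假设)
        }
        return float64(s.Sent-s.Responded) / float64(s.Sent)
    }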
可选的,各个Pod的静态业务等级可以通过数值来表示,例如,可以使用0-100表示静态业务等级的高低,数值越大,表示静态业务等级越低,0表示静态业务等级最高,100表示静态业务等级最低。
可选的,可以根据下游Pod的静态业务等级、下游Pod的业务请求丢弃率阈值和下游Pod的过载率,使用欧几里得距离算法,计算下游Pod的综合等级。例如,使用p表示下游Pod的静态业务等级,其中p的取值范围为[0,100],L表示下游Pod的业务请求丢弃率阈值,取值范围为[0,1],f表示过载率,取值范围为[0,100],其中,
f = 0(当d<L时);f = 100×α(当d≥L时)    公式(1)
其中,d表示业务请求丢弃率,计算方式为d=r/s,r表示上游Pod发送过来的业务请求被下游Pod丢弃的数量,s表示上游Pod发送至下游Pod的业务请求的数量。在一种示例中,d可以表示单位时间内的业务请求丢弃率,r表示单位时间内上游Pod发送过来的业务请求被下游Pod丢弃的数量,s表示单位时间内上游Pod发送至下游Pod的业务请求的数量。在一种示例中,d可以表示上一时间窗口下的业务请求丢弃率,r表示上一时间窗口下上游Pod发送过来的业务请求被下游Pod丢弃的数量,s表示上一时间窗口下上游Pod发送至下游Pod的业务请求的数量。
当d小于L时,实际业务请求丢弃率小于业务请求丢弃率阈值,表示该下游Pod中的业务请求未过载,过载率f取值为0;当d大于或等于L时,实际业务请求丢弃率大于或等于业务请求丢弃率阈值,表示该下游Pod中的业务请求已经发生过载或即将发生过载,f取值为100*α,α为缩放因子,α的取值范围是[0,1],且丢弃率差值越大,α的取值越大,其中,丢弃率差值指的是实际业务请求丢弃率与预设的丢弃率阈值之间的差值。需要说明的是,α具体取值可以根据业务请求过载情况具体设置,比如,当过载较少时,α可以取较小的值,当过载较多时,α可以取较大的值。本申请中过载率的计算方式以及α的取值仅仅是一种示例,在实际应用中,过载率还可以通过其他方式计算,α取值还可以是其他计算方式,本申请不做限定。
利用欧几里得距离算法计算下游Pod的综合等级,下游Pod的综合等级为
√(p² + (L×100)² + f²)    公式(2)
也就是计算三维向量(p,L*100,f)距离坐标原点的欧几里得距离。
例如,图4示例中,下游Pod B的静态业务等级p为10,业务请求丢弃率阈值L为20%,业务请求丢弃率d为0,则对应的三维向量为(10,20,0),则下游Pod B的综合等级为√(10²+20²+0²)=√500≈22;下游Pod C的静态业务等级p为30,业务请求丢弃率阈值L为10%,业务请求丢弃率d为20%,为了便于计算,这里将α取值为1,则对应的三维向量为(30,10,100),则下游Pod C的综合等级为√(30²+10²+100²)=√11000≈104;下游Pod D的静态业务等级p为40,业务请求丢弃率阈值L为30%,业务请求丢弃率d为0,则对应的三维向量为(40,30,0),则下游Pod D的综合等级为√(40²+30²+0²)=√2500=50。由此可以得到,上游Pod A的各个下游Pod中,下游Pod B的综合等级最高,下游Pod D的综合等级次之,下游Pod C的综合等级最低。
根据公式(2)分析可知:当L和f一定时,静态业务等级p越大,则表示下游Pod的综合等级的数值越大,则下游Pod的综合等级越低,可以理解,p越大时表示该下游Pod的静态业务等级越低,该Pod中微服务的业务等级越低,则计算获得的综合等级越低;当p和f一定时,L越大,则表示下游Pod的综合等级的数值越大,则下游Pod的综合等级越低,业务请求丢弃率阈值可以理解为对丢弃的业务请求数量的容忍度,业务请求丢弃率阈值L越大,即容忍度越大,则该下游Pod的综合等级越低;当p和L一定时,下游Pod的业务请求丢弃率越大,则过载率f越大,则下游Pod的综合等级越低。
需要说明的是,计算各个Pod的综合等级这个步骤是由本Pod(上游Pod)执行的。其中,各个Pod的静态业务等级在服务网格中是全局共享的,因此本Pod(上游Pod)可以获取到各个下游Pod的静态业务等级,从而计算出各个下游Pod的综合等级。
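下面给出一段示意性的Go代码,按照公式(1)与公式(2)计算某个下游Pod的综合等级数值(数值越大表示综合等级越低);为与上文示例一致,过载时缩放因子α简化取1,实际实现中α可随丢弃率差值增大而增大,函数名与参数均为本文假设的示例,并非本申请限定的实现:

    package limiter

    import "math"

    // compositeLevel 计算下游Pod的综合等级数值,数值越大,综合等级越低。
    // p: 静态业务等级,取值范围[0,100];
    // L: 业务请求丢弃率阈值,取值范围[0,1];
    // d: 当前统计到的业务请求丢弃率,取值范围[0,1]。
    func compositeLevel(p, L, d float64) float64 {
        // 公式(1):丢弃率未达到阈值时过载率f取0,否则取100*α(此处α简化为1)。
        f := 0.0
        if d >= L {
            alpha := 1.0
            f = 100 * alpha
        }
        // 公式(2):三维向量(p, L*100, f)到坐标原点的欧几里得距离。
        return math.Sqrt(p*p + (L*100)*(L*100) + f*f)
    }

按图4示例代入,compositeLevel(10, 0.2, 0)、compositeLevel(30, 0.1, 0.2)、compositeLevel(40, 0.3, 0)分别约为22、104与50,与上文Pod B、Pod C、Pod D的计算结果一致。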
S1032、根据多个第一业务请求所需调用的各个下游Pod的综合等级,将多个第一业务请求中的部分业务请求丢弃,将剩余部分业务请求下发至对应的下游Pod。
在确定出多个第一业务请求所需调用的各个下游Pod的综合等级后,将多个第一业务请求中用于指示调用综合等级最低的下游Pod的业务请求丢弃,将其他业务请求下发至对应的下游Pod。例如,在图4示例中,上游Pod A接收的各个业务请求中所需调用的下游Pod包括Pod B、Pod C和Pod D,经计算,Pod C的综合等级最低,所以Pod A将各个业务请求中用于指示调用Pod C的业务请求丢弃掉,将其他业务请求对应发送至Pod B、Pod D。
在一种示例中,经计算,表示Pod B综合等级的数值为22,表示Pod C综合等级的数值为104,表示Pod D综合等级的数值为50,根据数值确定出综合等级最低的是Pod C,因此Pod A将各个业务请求中用于指示调用Pod C的业务请求丢弃掉,将其他业务请求对应发送至Pod B、Pod D。可以理解,本申请中,将业务请求丢弃,可以理解为不处理该业务请求。
当前时间窗口下,本Pod发生过载的情况下,通过将部分业务请求丢弃后,本Pod中的业务请求数量减少。
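下面给出一段示意性的Go代码,演示第一限流操作中“丢弃调用综合等级最低的下游Pod的业务请求、下发其余业务请求”的一种可能实现;Request结构体及其字段均为示例假设,levels为按前文compositeLevel计算得到的各下游Pod综合等级数值:

    package limiter

    // Request 表示上游Pod收到的一个业务请求,Target为其所需调用的下游Pod标识(示例假设)。
    type Request struct {
        ID     string
        Target string
    }

    // firstLimit 执行第一限流操作:从levels中找出综合等级数值最大(即综合等级最低)的
    // 下游Pod作为第一目标,丢弃调用该Pod的业务请求,返回需要下发的其余业务请求。
    func firstLimit(reqs []Request, levels map[string]float64) (forward []Request, dropTarget string) {
        worst := -1.0
        for pod, lv := range levels {
            if lv > worst {
                worst, dropTarget = lv, pod
            }
        }
        for _, r := range reqs {
            if r.Target != dropTarget {
                forward = append(forward, r)
            }
        }
        return forward, dropTarget
    }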
S104、在确定当前时间窗口下本Pod未发生过载的情况下,将多个第一业务请求下发至对应的下游容器组Pod。
在确定当前时间窗口下本Pod未发生过载的情况下,本Pod按照正常情况将多个第一业务请求下发至对应的下游Pod,未丢弃任何业务请求,直至下一个时间窗口来临,再判断下一个时间窗口下本Pod是否发生过载,若发生过载执行步骤S105,若未发生过载,则执行步骤S101。
S105、在当前时间窗口发生过载的情况下,确定当前时间窗口之后的第一个时间窗口本Pod是否发生过载。
在当前时间窗口结束后,进入下一个时间窗口,即当前时间窗口之后的第一个时间窗口,判断当前时间窗口之后的第一个时间窗口下本Pod(上游Pod)是否发生过载。判断方法与上述S101和S102步骤中所述方法类似:1)获取本Pod在当前时间窗口之后的第一个时间窗口下的资源使用率,其中,资源使用率包括处理器使用率和/或内存使用率;2)根据本Pod的资源使用率,确定当前时间窗口之后的第一个时间窗口下本Pod是否发生过载。具体的,可以将本Pod的资源使用率与过载阈值进行比较,确定当前时间窗口之后的第一个时间窗口下本Pod是否发生过载,具体内容可参考上述S101和S102步骤中的描述,为了说明书的简洁,在此不再展开介绍。
S106、在确定当前时间窗口之后的第一个时间窗口下本Pod发生过载的情况下,执行第二限流操作,第二限流操作中丢弃的业务请求所调用的下游Pod的综合等级高于第一限流操作中丢弃的业务请求所调用的下游Pod的综合等级。
在确定当前时间窗口之后的第一个时间窗口下本Pod发生过载的情况下,执行第二限流操作,具体的,包括但不限于如下步骤S1061、S1062中的内容。
S1061、确定本Pod接收到的多个第二业务请求所需调用的至少一个下游Pod中各个下游Pod的综合等级。
首先,确定本Pod接收到的多个第二业务请求所需调用的一个或多个下游Pod,其中,Pod根据第二业务请求可以识别出每个请求对应的是哪个微服务,可以理解为,Pod根据第二业务请求可以识别出每个请求对应需要调用哪个下游Pod。然后,确定多个第二业务请求所需调用的一个或多个下游Pod的综合等级。对于任一个下游Pod来说,根据该Pod的静态业务等级、该Pod的业务请求丢弃率阈值、该Pod的业务请求丢弃率,确定该Pod的综合等级。在一种示例中,可以通过数值表示静态业务等级,利用欧几里得距离算法计算获得各个下游Pod的综合等级。具体内容可参考步骤S1031中内容的描述,为了说明书的简洁,在此不再展开介绍。
S1062、根据多个第二业务请求所需调用的各个下游Pod的综合等级,将多个第二业务请求中的部分业务请求丢弃,将剩余部分业务请求下发至对应的下游Pod。
确定出多个第二业务请求所需调用的各个下游Pod的综合等级后,首先,从其中筛选出综合等级比第一限流操作中多个第一业务请求调用的各个下游Pod中综合等级最低的Pod高的Pod,作为第一目标集合。例如,图4示例中,综合等级最低的Pod是Pod C,则从多个第二业务请求所需调用的各个下游Pod中筛选出综合等级比Pod C高的Pod,作为第一目标集合。然后,从第一目标集合中,确定出综合等级最低的Pod并删除,获得第二目标集合。最后,本Pod(上游Pod)将多个第二业务请求中用于指示调用第二目标集合中的任意一个Pod的业务请求下发至对应的下游Pod,将其他业务请求丢弃。
例如,图4示例中,第一限流操作中,Pod C的综合等级最低,表示Pod C的综合等级的数值为104,则在第二限流操作中,首先,计算出多个第二业务请求所需调用的各个下游Pod的综合等级的数值,然后,从其中确定出综合等级数值比104小的Pod(综合等级比Pod C高的Pod),作为第一目标集合,再从第一目标集合中确定出综合等级数值最大的Pod(综合等级最低的Pod),并将其删除,获得第二目标集合,最后,将多个第二业务请求中用于指示调用第二目标集合中的任意一个Pod的业务请求下发至对应的Pod,将其他业务请求丢弃。
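对于连续过载的时间窗口,下面给出一段示意性的Go代码,演示如何在“综合等级高于上一次丢弃目标”的下游Pod中继续选出等级最低者作为新的丢弃目标,并只下发调用综合等级更高的下游Pod的业务请求;prevWorst为上一次限流操作中丢弃目标的综合等级数值,首次限流时可取足够大的值。函数名与边界处理(如候选集合为空时全部下发)均为本文的示例假设,并非本申请限定:

    package limiter

    import "math"

    // escalate 执行后续限流操作:在综合等级数值小于prevWorst(即综合等级更高)的
    // 下游Pod中,选出数值最大(等级最低)的Pod作为本次丢弃目标,
    // 只下发调用综合等级高于该目标的下游Pod的业务请求,其余请求丢弃。
    func escalate(reqs []Request, levels map[string]float64, prevWorst float64) (forward []Request, dropTarget string, worst float64) {
        worst = math.Inf(-1)
        for pod, lv := range levels {
            if lv < prevWorst && lv > worst {
                worst, dropTarget = lv, pod
            }
        }
        if dropTarget == "" {
            // 描述中未覆盖候选集合为空的情形,此处示例选择全部下发(假设)。
            return reqs, "", prevWorst
        }
        for _, r := range reqs {
            if lv, ok := levels[r.Target]; ok && lv < worst {
                forward = append(forward, r)
            }
        }
        return forward, dropTarget, worst
    }

当某个上游Pod连续多个时间窗口过载时,反复调用escalate并用返回的worst更新prevWorst,即可得到上文所述丢弃目标的综合等级逐窗口升高的效果。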
S107、在当前时间窗口发生过载且在当前时间窗口之后的第一个时间窗口下本Pod未发生过载的情况下,将多个第二业务请求下发至对应的下游容器组Pod。
在确定当前时间窗口之后的第一个时间窗口下本Pod未发生过载的情况下,按照正常操作将多个第二业务请求下发至对应的下游容器组Pod,等待下一个时间窗口的来临,再次判断下一个时间窗口是否发生过载,若发生过载则执行第三限流操作,若未发生过载,则执行步骤S101。
可选的,在当前时间窗口之后的第二个时间窗口下,确定本Pod是否发生过载,方法类似步骤S105,具体内容可参考上述S105步骤中的描述,为了说明书的简洁,在此不再展开介绍。在确定本Pod发生过载的情况下,执行第三限流操作,包括:
1)确定本Pod接收的多个第三业务请求所需调用的一个或多个下游Pod的综合等级,本步骤可参考步骤S1061的相关描述,在此不再展开描述;
2)从多个第三业务请求所需调用的各个下游Pod中,确定出第三目标集合,第三目标集合包括综合等级比第二限流操作中第一目标集合中综合等级最低的Pod高的Pod;
3)从第三目标集合中确定出综合等级最低的Pod,并删除,获得第四目标集合;
4)将多个第三业务请求中用于指示调用第四目标集合中任意一个Pod的业务请求下发至对应的下游Pod中,将其他业务请求丢弃。
需要说明的是,在执行步骤S101之前,本Pod(上游Pod)未发生过载,即在当前时间窗口之前的一个时间窗口下,本Pod未发生过载。
可选的,在一种示例中,若在当前时间窗口之后的连续n个时间窗口下本Pod均未发生过载,在第n+1个时间窗口下发生过载的情况下,则第n+1个时间窗口下执行第一限流操作,其中n可以由用户根据具体情况具体设置。
可选的,在一种示例中,若本Pod未达到在当前时间窗口之后的连续n个时间窗口下不发生过载,也可以执行类似第一限流或第二限流操作的限流操作,丢弃一部分业务请求。
需要说明的是,为了便于理解,本申请实施例描述了步骤S101至步骤S107连续两个时间窗口的限流方法,实际应用中,可能存在更多数量或更少数量的连续几个时间窗口发生过载。比如,若连续三个时间窗口本Pod(上游Pod)发生过载行为,则本Pod在发生过载行为的这三个时间窗口下,均执行一次限流操作,且第二次限流操作中丢弃的业务请求所调用的下游Pod的综合等级比第一次限流操作中丢弃的业务请求所调用的下游Pod的综合等级高,第三次限流操作中丢弃的业务请求所调用的下游Pod的综合等级比第二次限流操作中丢弃的业务请求所调用的下游Pod的综合等级高,以使本Pod的过载行为尽快消失。
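结合上述各步骤,下面给出一段示意性的Go代码,勾勒边车容器在每个时间窗口内的限流主流程:未过载时全部下发并重置升级状态,过载时调用前文示例中的escalate逐窗口提高丢弃目标的综合等级;其中“未过载即重置”只是本文为举例所作的一种简化假设(相当于n取1的情形),并非本申请限定的处理方式:

    package limiter

    import "math"

    // windowLimiter 维护连续过载窗口之间的限流状态(示例假设)。
    type windowLimiter struct {
        prevWorst float64 // 上一次限流操作中丢弃目标的综合等级数值
    }

    func newWindowLimiter() *windowLimiter {
        return &windowLimiter{prevWorst: math.MaxFloat64}
    }

    // OnWindow 处理一个时间窗口,返回需要下发的业务请求。
    func (w *windowLimiter) OnWindow(reqs []Request, u ResourceUsage, t OverloadThreshold,
        levels map[string]float64) []Request {
        if !isOverloaded(u, t) {
            // 未过载:全部下发,并重置升级状态(简化假设)。
            w.prevWorst = math.MaxFloat64
            return reqs
        }
        // 首次过载时prevWorst足够大,escalate即退化为第一限流操作。
        forward, _, worst := escalate(reqs, levels, w.prevWorst)
        w.prevWorst = worst
        return forward
    }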
可以看到,本申请提供了一种服务网格限流方法,根据上游Pod的资源使用情况确定该上游Pod是否发生过载,在确定发生过载的情况下,进行限流,在进行限流时,综合考虑了下游Pod的静态业务等级和下游Pod的真实过载情况,来确定下游Pod的综合等级,选择调用综合等级低的下游Pod的业务请求进行丢弃,从而保证系统的稳定性和业务的稳定性。其中综合等级低的Pod可能是静态业务等级低(微服务等级低),也可能是处于高负载状态。
参见图5,图5为本申请提供的一种服务网格限流装置500的结构示意图,装置500包括具有上下游对应关系的上游容器组Pod,每个容器组Pod中部署了边车容器与应用程序容器,应用程序容器用于处理业务请求,边车容器用于对业务请求的流量进行管控,装置500包括:
在当前时间窗口下:
综合等级确定模块510,用于在确定上游容器组Pod发生过载的情况下,确定上游容器组Pod接收的多个第一业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级,下游容器组Pod的综合等级是基于预先设置的下游容器组Pod的静态业务等级和下游容器组Pod实际处理业务请求的能力确定的;
通信模块520,用于根据多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将多个第一业务请求中除部分业务请求之外的剩余部分业务请求丢弃。
在可能的实现方式中,下游容器组Pod实际处理业务请求的能力包括下游容器组Pod的业务请求丢弃率阈值和下游容器组Pod的过载率中的一项或多项,下游容器组Pod的过载率与丢弃率差值成正相关,丢弃率差值指的是下游容器组Pod的业务请求丢弃率与业务请求丢弃率阈值的差值。
在可能的实现方式中,下游容器组Pod的综合等级与下游容器组Pod的静态业务等级成正相关;下游容器组Pod的综合等级与下游容器组Pod的业务请求丢弃率阈值成负相关;下游容器组Pod的综合等级与下游容器组Pod的过载率成负相关。
在可能的实现方式中,获取模块530,用于获取当前时间窗口下上游容器组Pod的资源使用率;过载确定模块540,用于根据上游容器组Pod的资源使用率,确定当前窗口下上游容器组Pod是否发生过载。
在可能的实现方式中,上游容器组Pod的资源使用率包括上游容器组Pod中处理器的使用率和/或内存的使用率;过载确定模块540用于:当处理器的使用率大于设置的处理器使用率过载阈值,和/或当内存的使用率大于设置的内存使用率过载阈值时,确定当前窗口下上游容器组Pod发生过载;否则,确定当前窗口下上游容器组Pod未发生过载。
在可能的实现方式中,综合等级确定模块510用于,从多个第一业务请求所需调用的各个下游容器组Pod中,确定出综合等级最低的下游容器组Pod,作为第一目标容器组Pod;
通信模块520用于,将多个第一业务请求中用于指示调用综合等级高于第一目标容器组Pod的下游容器组Pod的业务请求下发至对应的下游容器组Pod,将剩余部分业务请求丢弃。
在可能的实现方式中,综合等级确定模块510用于:
在当前时间窗口之后的第一个时间窗口下:
在确定上游容器组Pod发生过载的情况下,确定上游容器组Pod接收的多个第二业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
从多个第二业务请求所需调用的各个下游容器组Pod中,确定出综合等级比第一目标容器组Pod高的下游容器组Pod,作为第一集合;
确定出第一集合中综合等级最低的下游容器组Pod,作为第二目标容器组pod;
通信模块520用于:
将多个第二业务请求中用于指示调用综合等级高于第二目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
在可能的实现方式中,综合等级确定模块用于:
在当前时间窗口之后的第二个时间窗口下:
若上游容器组Pod发生过载的情况下,确定上游容器组Pod接收的多个第三业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
从多个第三业务请求所需调用的各个下游容器组Pod中,确定出综合等级比第二目标容器组Pod高的下游容器组Pod,作为第二集合;
确定出第二集合中综合等级最低的下游容器组Pod,作为第三目标容器组pod;
通信模块用于:
将多个第三业务请求中用于指示调用综合等级高于第三目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
在可能的实现方式中,下游容器组Pod的综合等级通过数值表示。
其中,综合等级确定模块510、通信模块520、获取模块530、过载确定模块540均可以通过软件实现,或者可以通过硬件实现。示例性的,接下来以综合等级确定模块510为例,介绍综合等级确定模块510的实现方式。类似的,通信模块520、获取模块530、过载确定模块540的实现方式可以参考综合等级确定模块510的实现方式。
模块作为软件功能单元的一种举例,综合等级确定模块510可以包括运行在计算设备上的代码。其中,计算设备可以是物理主机、虚拟机、容器等中的至少一种。进一步地,上述计算设备可以是一台或者多台。例如,综合等级确定模块510可以包括运行在多个主机/虚拟机/容器上的代码。需要说明的是,用于运行该应用程序的多个主机/虚拟机/容器可以分布在相同的区域(region)中,也可以分布在不同的region中。用于运行该代码的多个主机/虚拟机/容器可以分布在相同的可用区(availability zone,AZ)中,也可以分布在不同的AZ中,每个AZ包括一个数据中心或多个地理位置相近的数据中心。其中,通常一个region可以包括多个AZ。
同样,用于运行该代码的多个主机/虚拟机/容器可以分布在同一个虚拟私有云(virtual private cloud,VPC)中,也可以分布在多个VPC中。其中,通常一个VPC设置在一个region内。同一region内两个VPC之间,以及不同region的VPC之间跨区通信需在每个VPC内设置通信网关,经通信网关实现VPC之间的互连。
模块作为硬件功能单元的一种举例,综合等级确定模块510可以包括至少一个计算设备,如服务器等。或者,综合等级确定模块510也可以是利用专用集成电路(application-specific integrated circuit,ASIC)实现、或可编程逻辑器件(programmable logic device,PLD)实现的设备等。其中,上述PLD可以是复杂程序逻辑器件(complex programmable logical device,CPLD)、现场可编程门阵列(field-programmable gate array,FPGA)、通用阵列逻辑(generic array logic,GAL)或其任意组合实现。
综合等级确定模块510包括的多个计算设备可以分布在相同的region中,也可以分布在不同的region中。综合等级确定模块510包括的多个计算设备可以分布在相同的AZ中,也可以分布在不同的AZ中。同样,综合等级确定模块510包括的多个计算设备可以分布在同一个VPC中,也可以分布在多个VPC中。其中,所述多个计算设备可以是服务器、ASIC、PLD、CPLD、FPGA和GAL等计算设备的任意组合。
参见图6,图6为本申请提供的一种计算设备600的结构示意图,计算设备600例如裸金属服务器、虚拟机、容器等,该计算设备600可以配置为方法实施例中的上游容器组Pod,具体的可配置为上游容器组Pod中的边车容器。计算设备600包括:总线602、处理器604、存储器606和通信接口608。处理器604、存储器606和通信接口608之间通过总线602通信。应理解,本申请不限定计算设备600中的处理器、存储器的个数。
总线602可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图6中仅用一条线表示,但并不表示仅有一根总线或一种类型的总线。总线602可包括在计算设备600各个部件(例如,存储器606、处理器604、通信接口608)之间传送信息的通路。
处理器604可以包括中央处理器(central processing unit,CPU)、图形处理器(graphics processing unit,GPU)、微处理器(micro processor,MP)或者数字信号处理器(digital signal processor,DSP)等处理器中的任意一种或多种。
存储器606可以包括易失性存储器(volatile memory),例如随机存取存储器(random access memory,RAM)。存储器606还可以包括非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM),快闪存储器,机械硬盘(hard disk drive,HDD)或固态硬盘(solid state drive,SSD)。
存储器606中存储有可执行的程序代码,处理器604执行该可执行的程序代码以分别实现前述综合等级确定模块510、通信模块520、获取模块530、过载确定模块540的功能,从而实现一种服务网格限流方法。也即,存储器606上存有用于执行一种服务网格限流方法的指令。
通信接口608使用例如但不限于网络接口卡、收发器一类的收发模块,来实现计算设备600与其他设备或通信网络之间的通信。可选的,例如通信模块520可以位于通信接口608中。
本申请实施例还提供了一种计算设备集群。该计算设备集群包括至少一台计算设备。该计算设备可以是服务器、虚拟机、容器,例如是中心服务器、边缘服务器、边车容器。
如图7所示,图7为本申请提供的一种计算设备集群的结构示意图,所述计算设备集群包括至少一个计算设备600,在一种场景中,所述计算设备集群中的至少一个计算设备可以配置为边车容器,计算设备集群中的一个或多个计算设备600中的存储器606中可以存有相同的用于执行一种服务网格限流方法的指令。
在一些可能的实现方式中,该计算设备集群中的一个或多个计算设备600的存储器606中也可以分别存有用于执行一种服务网格限流方法的部分指令。换言之,一个或多个计算设备600的组合可用于共同执行一种服务网格限流方法的指令。
当计算设备集群中的至少一个计算设备配置为服务网格限流装置500时,计算设备集群中的不同的计算设备600中的存储器606可以存储不同的指令,分别用于执行装置500的部分功能。也即,不同的计算设备600中的存储器606存储的指令可以实现综合等级确定模块510、通信模块520、获取模块530、过载确定模块540中的一个或多个模块的功能。
在一些可能的实现方式中,计算设备集群中的一个或多个计算设备可以通过网络连接。其中,所述网络可以是广域网或局域网等等。图8示出了一种可能的计算设备集群的实现方式。如图8所示,两个计算设备600A和600B之间通过网络进行连接。具体地,通过各个计算设备中的通信接口与所述网络进行连接。在这一类可能的实现方式中,计算设备600A中的存储器606中存储有获取模块530、过载确定模块540的功能的指令,计算设备600B中的存储器606中存有执行通信模块520、综合等级确定模块510的功能的指令。
应理解,图8中示出的计算设备600A的功能也可以由多个计算设备600完成,或者计算设备集群中包括多个与计算设备600A具有相同功能的计算设备。同样,计算设备600B的功能也可以由多个计算设备600完成,或者计算设备集群中包括多个与计算设备600B具有相同功能的计算设备。
本申请实施例还提供了另一种计算设备集群。该计算设备集群中各计算设备之间的连接关系可以类似的参考图7和图8所述计算设备集群的连接方式。不同的是,该计算设备集群中的一个或多个计算设备600中的存储器606中可以存有不同的用于执行一种服务网格限流方法的指令。在一些可能的实现方式中,该计算设备集群中的一个或多个计算设备600的存储器606中也可以分别存有用于执行一种服务网格限流方法的部分指令。换言之,一个或多个计算设备600的组合可以共同执行用于执行一种服务网格限流方法的指令。
本申请实施例还提供了一种包含指令的计算机程序产品。所述计算机程序产品可以是包含指令的,能够运行在计算设备上或被储存在任何可用介质中的软件或程序产品。当所述计算机程序产品在至少一个计算设备上运行时,使得至少一个计算设备执行一种服务网格限流方法。
本申请实施例还提供了一种计算机可读存储介质。所述计算机可读存储介质可以是计算设备能够存储的任何可用介质或者是包含一个或多个可用介质的数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘)等。该计算机可读存储介质包括指令,所述指令指示计算设备或计算设备集群执行一种服务网格限流方法。
以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的保护范围。

Claims (18)

  1. 一种服务网格限流方法,其特征在于,所述方法应用于具有上下游对应关系的上游容器组Pod,每个容器组Pod中部署了边车容器与应用程序容器,所述应用程序容器用于处理所述业务请求,所述边车容器用于对业务请求的流量进行管控,所述方法包括:
    在当前时间窗口下:
    在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第一业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级,所述下游容器组Pod的综合等级是基于预先设置的所述下游容器组Pod的静态业务等级和所述下游容器组Pod实际处理业务请求的能力确定的;
    根据所述多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将所述多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将所述多个第一业务请求中除所述部分业务请求之外的剩余部分业务请求丢弃。
  2. 根据权利要求1所述的方法,其特征在于,所述下游容器组Pod实际处理业务请求的能力包括所述下游容器组Pod的业务请求丢弃率阈值和所述下游容器组Pod的过载率中的一项或多项,所述下游容器组Pod的过载率与丢弃率差值成正相关,所述丢弃率差值指的是所述下游容器组Pod的业务请求丢弃率与所述业务请求丢弃率阈值的差值。
  3. 根据权利要求2所述的方法,其特征在于,
    所述下游容器组Pod的综合等级与所述下游容器组Pod的静态业务等级成正相关;
    所述下游容器组Pod的综合等级与所述下游容器组Pod的业务请求丢弃率阈值成负相关;
    所述下游容器组Pod的综合等级与所述下游容器组Pod的过载率成负相关。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述方法还包括:
    获取当前时间窗口下所述上游容器组Pod的资源使用率;
    根据所述上游容器组Pod的资源使用率,确定当前窗口下所述上游容器组Pod是否发生过载。
  5. 根据权利要求4所述的方法,其特征在于,所述上游容器组Pod的资源使用率包括所述上游容器组Pod中处理器的使用率和/或内存的使用率;
    所述根据所述上游容器组Pod的资源使用率,确定当前窗口下所述上游容器组Pod是否发生过载,包括:
    当所述处理器的使用率大于设置的处理器使用率过载阈值,和/或当所述内存的使用率大于设置的内存使用率过载阈值时,确定当前窗口下所述上游容器组Pod发生过载;
    否则,确定当前窗口下所述上游容器组Pod未发生过载。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述根据所述多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将所述多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将所述多个第一业务请求中除所述部分业务请求之外的剩余部分业务请求丢弃,包括:
    从所述多个第一业务请求所需调用的各个下游容器组Pod中,确定出综合等级最低的下游容器组Pod,作为第一目标容器组Pod;
    将所述多个第一业务请求中用于指示调用综合等级高于所述第一目标容器组Pod的下游容器组Pod的业务请求下发至对应的下游容器组Pod,将剩余部分业务请求丢弃。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    在所述当前时间窗口之后的第一个时间窗口下:
    在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第二业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
    从所述多个第二业务请求所需调用的各个下游容器组Pod中,确定出综合等级比所述第一目标容器组Pod高的下游容器组Pod,作为第一集合;
    确定出所述第一集合中综合等级最低的下游容器组Pod,作为第二目标容器组pod;
    将所述多个第二业务请求中用于指示调用综合等级高于所述第二目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述下游容器组Pod的综合等级通过数值表示。
  9. 一种服务网格限流装置,其特征在于,所述装置包括具有上下游对应关系的上游容器组Pod,每个容器组Pod中部署了边车容器与应用程序容器,所述应用程序容器用于处理所述业务请求,所述边车容器 用于对业务请求的流量进行管控,所述装置包括:
    在当前时间窗口下:
    综合等级确定模块,用于在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第一业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级,所述下游容器组Pod的综合等级是基于预先设置的所述下游容器组Pod的静态业务等级和所述下游容器组Pod实际处理业务请求的能力确定的;
    通信模块,用于根据所述多个第一业务请求所需调用的各个下游容器组Pod的综合等级,将所述多个第一业务请求中的部分业务请求下发至对应的下游容器组Pod,将所述多个第一业务请求中除所述部分业务请求之外的剩余部分业务请求丢弃。
  10. 根据权利要求9所述的装置,其特征在于,所述下游容器组Pod实际处理业务请求的能力包括所述下游容器组Pod的业务请求丢弃率阈值和所述下游容器组Pod的过载率中的一项或多项,所述下游容器组Pod的过载率与丢弃率差值成正相关,所述丢弃率差值指的是所述下游容器组Pod的业务请求丢弃率与所述业务请求丢弃率阈值的差值。
  11. 根据权利要求10所述的装置,其特征在于,
    所述下游容器组Pod的综合等级与所述下游容器组Pod的静态业务等级成正相关;
    所述下游容器组Pod的综合等级与所述下游容器组Pod的业务请求丢弃率阈值成负相关;
    所述下游容器组Pod的综合等级与所述下游容器组Pod的过载率成负相关。
  12. 根据权利要求9-11任一项所述的装置,其特征在于,
    获取模块,用于获取当前时间窗口下所述上游容器组Pod的资源使用率;
    过载确定模块,用于根据所述上游容器组Pod的资源使用率,确定当前窗口下所述上游容器组Pod是否发生过载。
  13. 根据权利要求12所述的装置,其特征在于,所述上游容器组Pod的资源使用率包括所述上游容器组Pod中处理器的使用率和/或内存的使用率;
    所述过载确定模块用于:
    当所述处理器的使用率大于设置的处理器使用率过载阈值,和/或当所述内存的使用率大于设置的内存使用率过载阈值时,确定当前窗口下所述上游容器组Pod发生过载;
    否则,确定当前窗口下所述上游容器组Pod未发生过载。
  14. 根据权利要求9-13任一项所述的装置,其特征在于,
    所述综合等级确定模块用于,从所述多个第一业务请求所需调用的各个下游容器组Pod中,确定出综合等级最低的下游容器组Pod,作为第一目标容器组Pod;
    所述通信模块用于,将所述多个第一业务请求中用于指示调用综合等级高于所述第一目标容器组Pod的下游容器组Pod的业务请求下发至对应的下游容器组Pod,将剩余部分业务请求丢弃。
  15. 根据权利要求14所述的装置,其特征在于,所述综合等级确定模块用于:
    在所述当前时间窗口之后的第一个时间窗口下:
    在确定所述上游容器组Pod发生过载的情况下,确定所述上游容器组Pod接收的多个第二业务请求所需调用的至少一个下游容器组Pod中各个下游容器组Pod的综合等级;
    从所述多个第二业务请求所需调用的各个下游容器组Pod中,确定出综合等级比所述第一目标容器组Pod高的下游容器组Pod,作为第一集合;
    确定出所述第一集合中综合等级最低的下游容器组Pod,作为第二目标容器组pod;
    所述通信模块用于:
    将所述多个第二业务请求中用于指示调用综合等级高于所述第二目标容器组pod的下游pod的业务请求下发至对应的下游容器组Pod中,将剩余部分业务请求丢弃。
  16. 根据权利要求9-15任一项所述的装置,其特征在于,所述下游容器组Pod的综合等级通过数值表示。
  17. 一种计算设备集群,其特征在于,包括至少一台计算设备,所述至少一台计算设备中的每台计算设备包括存储器和处理器,所述至少一台计算设备的处理器用于执行所述至少一台计算设备的存储器中存储的指令,使得所述计算设备集群执行如权利要求1至8任一项所述的方法。
  18. 一种包含指令的计算机存储介质,其特征在于,当所述指令在计算设备集群中运行时,使得所述计算设备集群执行如权利要求1至8任一项所述的方法。
PCT/CN2023/132091 2022-12-13 2023-11-16 一种服务网格限流方法及相关装置 WO2024125201A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211595059.0A CN118233401A (zh) 2022-12-13 2022-12-13 一种服务网格限流方法及相关装置
CN202211595059.0 2022-12-13

Publications (1)

Publication Number Publication Date
WO2024125201A1 true WO2024125201A1 (zh) 2024-06-20

Family

ID=91484424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/132091 WO2024125201A1 (zh) 2022-12-13 2023-11-16 一种服务网格限流方法及相关装置

Country Status (2)

Country Link
CN (1) CN118233401A (zh)
WO (1) WO2024125201A1 (zh)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021111516A1 (ja) * 2019-12-03 2021-06-10 日本電信電話株式会社 通信管理装置及び通信管理方法
CN112187660A (zh) * 2020-08-31 2021-01-05 浪潮云信息技术股份公司 一种用于云平台容器网络的租户流量限制方法及系统
CN113765816A (zh) * 2021-08-02 2021-12-07 阿里巴巴新加坡控股有限公司 一种基于服务网格的流量控制方法、系统、设备及介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MULANI SAFA: "Traffic Management in Istio - A detailed Guide", 3 August 2022 (2022-08-03), XP093182249, Retrieved from the Internet <URL:https://www.digitalocean.com/community/tutorials/traffic-management-in-istio> *

Also Published As

Publication number Publication date
CN118233401A (zh) 2024-06-21

Similar Documents

Publication Publication Date Title
CN108776934A (zh) 分布式数据计算方法、装置、计算机设备及可读存储介质
CN113485822A (zh) 内存管理方法、系统、客户端、服务器及存储介质
WO2021068205A1 (zh) 访问控制方法、装置、服务器和计算机可读介质
CN111857992B (zh) 一种Radosgw模块中线程资源分配方法和装置
US20120005688A1 (en) Allocating space in message queue for heterogeneous messages
CN112053105A (zh) 划分服务区域的方法和装置
WO2024022443A1 (zh) 一种资源弹性伸缩方法、装置及设备
WO2024060682A9 (zh) 内存管理方法、装置、内存管理器、设备及存储介质
CN105430028B (zh) 服务调用方法、提供方法及节点
WO2024125201A1 (zh) 一种服务网格限流方法及相关装置
CN111404839A (zh) 报文处理方法和装置
CN112685169A (zh) 一种负载控制方法、装置、服务器及可读存储介质
CN113794755A (zh) 基于微服务架构的共享服务推送方法及系统
CN108366102A (zh) 一种基于Consul的服务发现方法、装置及电子设备
US11962476B1 (en) Systems and methods for disaggregated software defined networking control
CN111597041A (zh) 一种分布式系统的调用方法、装置、终端设备及服务器
CN112416506A (zh) 一种容器管理方法、设备及计算机存储介质
CN115733800A (zh) 一种网卡选择方法、系统、电子设备及介质
CN115914236A (zh) 存储空间的分配调整方法、装置、电子设备及存储介质
CN111049758B (zh) 一种实现报文QoS处理的方法、系统及设备
CN110866066B (zh) 一种业务处理方法及装置
CN114675973A (zh) 资源管理方法、设备、存储介质及程序产品
CN114374657A (zh) 一种数据处理方法和装置
CN113765796A (zh) 流量转发控制方法及装置
WO2024113847A1 (zh) 共享资源分配方法、装置及计算设备集群

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23902401

Country of ref document: EP

Kind code of ref document: A1