CN110149396B - Internet of things platform construction method based on micro-service architecture - Google Patents
Internet of things platform construction method based on micro-service architecture
- Publication number
- CN110149396B (publication), CN201910420269.8A (application)
- Authority
- CN
- China
- Prior art keywords: service, micro, load, module, container
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- H04L41/0893—Assignment of logical groups to network elements
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
- H04L41/147—Network analysis or design for predicting network behaviour
- H04L41/5054—Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1031—Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/30—Profiles
- H04L67/51—Discovery or management thereof, e.g. service location protocol [SLP] or web services
Abstract
The invention provides a method for constructing an Internet of things platform based on a micro-service architecture, which comprises the following steps: S1, dividing the Internet of things platform into micro-service levels; S2, the micro-service deployment module deploys the micro-service cluster of the Internet of things platform by using the Docker Swarm tool and registers micro-service information to a service discovery center; S3, distributing client requests to the micro-services of the Internet of things platform through a load balancing scheduling module; S4, the service load prediction module establishes an autoregressive integrated moving average (ARIMA) model and calculates a service load prediction value; S5, the service performance prediction module generates a performance prediction value according to the service load prediction value and the number of instances; and S6, the dynamic container scheduling module performs horizontal scaling of the micro-services according to the performance prediction value. An Internet of things platform constructed by the method can rapidly add new functions as market demands change and can scale its container clusters according to real-time load, improving the scalability of the Internet of things platform.
Description
Technical Field
The invention relates to the field of Internet of things platform construction, in particular to a method for constructing an Internet of things platform based on a micro-service architecture.
Background Art
The micro-service architecture splits a single application into a number of small services that are developed independently and decoupled from one another through lightweight communication mechanisms. Each service is built around a specific business capability, with build techniques and development tools chosen according to the business context, and the services are deployed through an automated deployment mechanism.
The Docker container technology provides a viable solution for the implementation, deployment and maintenance of micro-service architecture systems. Compared with a virtual machine, a container has obvious advantages in start-up speed, elastic scaling and resource consumption, making it better suited to micro-service applications. Based on these advantages, Docker can effectively meet the requirements of a micro-service system for multi-node, multi-instance operation, deployment and maintenance in a distributed environment. Meanwhile, deploying multiple Docker containers on one host, with each container independently running one micro-service, avoids the waste of equipment resources caused by running virtual machines.
Therefore, the invention provides an Internet of things platform construction method based on a micro-service architecture, and provides a corresponding solution for the deployment and scheduling problems involved in the Internet of things platform construction process.
Disclosure of Invention
The invention aims to apply the micro-service architecture and the Docker container technology to an Internet of things platform, solve the problems of poor expandability and usability of the Internet of things platform and construct a multi-instance deployed distributed Internet of things platform.
The invention is realized by at least one of the following technical schemes.
A method for constructing an Internet of things platform based on a micro-service architecture, comprising the following steps:
s1, dividing micro service levels into an access layer, a service layer, a middle layer and a basic layer for the platform of the Internet of things;
s2, the micro-service deployment module deploys a micro-service cluster of the Internet of things platform by using a container cluster management tool Docker Swarm, and registers micro-service information to a service discovery center;
s3, distributing the client requests to the micro-services of the Internet of things platform through a load balancing scheduling module;
S4, the service load prediction module analyzes the time series by using an autoregressive integrated moving average (ARIMA) model to calculate a service load prediction value;
S5, the service performance prediction module generates a performance prediction value according to the service load prediction value and the number of instances;
and S6, the dynamic container scheduling module performs horizontal scaling of the micro-service according to the performance predicted value.
Further, the access layer in step S1 provides terminal access components of the heterogeneous device and the platform user, and provides a terminal access authentication group; the service layer provides connection and communication components of the Internet of things platform, the intelligent device, the user APP, the management platform and the third-party platform, and the communication components comprise Application Programming Interface (API) service components, a central control service component and a transit service component; the middle layer comprises a service logic component which does not directly interact with the terminal, and the service logic component comprises a data analysis component and a log management component; the basic layer provides basic service components required by the access layer, the service layer and the middle layer, and the basic service components comprise a memory database, a relational database and a message queue.
Further, the Docker Swarm tool described in step S2 is a built-in cluster management tool of the Docker container, and the micro-service deployment module constructs Docker container clusters using the Docker Swarm tool, where each container cluster deploys and runs a micro-service separately, and selects a deployment node of the Docker container by using a Spread policy built in a scheduler in the Docker Swarm tool.
Further, the service discovery center described in step S2 is configured to store the node and the container state of the current container cluster, and when a container in the node joins or leaves the cluster, the service registration tool on the node reports a corresponding event to the service discovery center, and the service registration center updates the cluster state according to the reported event.
Further, the load balancing scheduling module in step S3 includes a service discovery module, a container registration module, a load balancing module, and a configuration update module;
the service discovery module adopts a Consul cluster as a service discovery center of the container cluster, and the Consul provides registration and discovery services of nodes and container instances for the Docker container cluster;
the Consul cluster is constructed by adopting a Consul open source tool, two service nodes exist, namely a Consul Server and a Consul Client, and the Consul Server is used for storing and copying configuration data and is communicated with the Consul Client and a data center; the Consul Client forwards the data access request to a Consul Server cluster and provides a key value to read and write data for the outside; the construction mode of the Consul Server cluster is that a Consul open source tool is used for deploying and operating a Server node on a host machine, the Server node is designated as a Server mode during operation, if the Server node is not a first Server node, an adding command is used for configuring the address of the first Server node, and the first Server node and the address of the first Server node form a cluster;
the information of all Swarm working nodes and container instances in the Consul cluster is stored in the Consul Server cluster; services and modules access a Consul Client by using the Representational State Transfer (RESTful) API of the Consul cluster, and the Consul Client forwards the request to a Consul Server of the same data center to complete querying, adding and deleting of Docker container and node information;
the container registration module realizes registration and deregistration of Docker container information on the nodes by deploying the service registration tool Registrator; Registrator monitors container start and stop events of the Docker engine and registers the Docker container information in the Consul cluster in the form of Consul services, and the Internet of things platform runs the Registrator tool on each Swarm working node to monitor and forward the container survival state to the service discovery center;
the load balancing module adopts a Nginx load balancer to realize load balancing of the Internet of things platform; when a client request arrives, the kernel part of the Nginx load balancer maps the access request to the corresponding location block by searching the configuration file, and each configuration instruction in the location block starts the corresponding functional module to complete the corresponding work;
the configuration updating module is connected with the service discovery module and the load balancing module through a configuration updating tool Consul-Template to realize the timely updating of the configuration files of the load balancing module, when the registration information of the service instance on the Consul cluster is changed, the Consul-Template timely updates the configuration files of all the Nginx reverse proxy servers on the load balancing module, and calls the Nginx command to reload the configuration so as to update the scheduling of the load balancing module.
Further, the calculating of the load prediction value of the service in step S4 includes the following steps:
s41, the service load forecasting module acquires the name, the analysis time period and the forecasting time period information of the micro service needing load forecasting;
s42, the load prediction module obtains access request data in the analysis time period from the InfluxDB time sequence database of the service monitoring module according to the service name and the analysis time period, and generates a service load time sequence;
S43, judging whether the service load time sequence is stationary through a unit root (ADF) test. The unit root test checks whether a unit root exists in the service load time sequence; if a unit root exists, the sequence is not stationary and regression analysis on it would be spurious. Specifically, the null hypothesis of the unit root test is that the service load time sequence has a unit root; if the calculated statistic is less than the critical values at the 1%, 5% and 10% confidence levels, the null hypothesis can be rejected, which is sufficient evidence that the service load time sequence is stationary. If the calculated statistic is greater than these critical values, differential transformation is applied to convert the service load time sequence into a stationary sequence;
S44, for a non-stationary service load time sequence, d-order differencing is required: the value at time t-1 is subtracted from the value at time t in the original sequence to obtain a first-order difference sequence; if the first-order difference sequence is still not stationary, the new difference sequence is differenced again to obtain a second-order difference sequence, and so on until a stationary d-order difference sequence is obtained. After the differencing order d of ARIMA(p, d, q) is determined, the autoregressive order p and the moving average order q of the ARIMA model are determined by the Bayesian Information Criterion (BIC);
S45, constructing an ARIMA model for service load prediction, wherein the ARIMA model is as follows:

x_t = α_1·x_(t-1) + α_2·x_(t-2) + … + α_p·x_(t-p) + ω_t + β_1·ω_(t-1) + β_2·ω_(t-2) + … + β_q·ω_(t-q)

In the above formula, the input load time series X = {x_1, x_2, …, x_n} is the new stationary sequence obtained after d-order differential transformation, where x_n is the average load value of the nth time window; p and q are the autoregressive order and the moving average order of the model respectively; α_i and β_h are the parameters of the model, each taking a random value between 0 and 1, with i = 1…p and h = 1…q; and ω_t is a white noise sequence.
To train the model, service load data is first extracted from the time series database of the service monitoring module and input into the model until the model error is lower than a set value; after the model is generated, a prediction time period is set for the model, and the model outputs the service load prediction value for that time period;
s46, the service load prediction module generates a service load prediction value of a time period to be predicted by using the trained ARIMA model;
and S47, carrying out d-order differential reduction on the service load predicted value, wherein the service load predicted value is a group of time sequences, and the reduction process is the inverse process of differential change, namely accumulating each element in the sequence and all the elements in front of the sequence, and executing d times to obtain the finally required service load predicted value.
Further, in step S5, the service performance prediction module generates a service performance prediction model using an Extreme Learning Machine (ELM), where the service performance prediction model takes a load prediction value of the service and a container instance number of the service as inputs of the Extreme learning Machine, and takes a performance prediction value of the service as an output;
the service performance prediction model building process comprises the steps of obtaining historical data of services from a time sequence database, inputting the historical data into an extreme learning machine for training, storing the model when the model error is lower than a set value, obtaining the number of instances of micro-services from a service discovery center and obtaining a service load prediction value from a service load prediction module by a service performance prediction module, inputting the obtained number of instances and the load prediction value into the service performance prediction model, and outputting the service performance prediction value by the model.
Further, in step S6, the container dynamic scheduling module obtains the micro-service performance prediction value; when the performance prediction value exceeds the service performance threshold range specified by the Internet of things platform administrator, the service performance prediction module is called again to calculate the number of container instances required by the micro-service, and horizontal scaling of the micro-service is performed, that is, the number of instances of the micro-service in the micro-service cluster is adjusted.
Compared with the prior art, the invention has the following advantages and technical effects:
the invention constructs the micro-service of the Internet of things platform by using a micro-service architecture, and deploys and runs the micro-service by using a Docker container. The constructed Internet of things platform can be used for rapidly updating and increasing micro-services according to business requirements, and the expandability of the platform is realized; and the horizontal extension and load balance of the micro-service cluster are realized by combining a load balancing technology and a container dynamic scheduling technology, the problems of insufficient and excessive load capacity of the micro-service of the platform of the Internet of things in the operation process are solved, and the availability of the platform is improved.
Drawings
Fig. 1 is a micro-service layered architecture diagram of an internet of things platform according to the embodiment;
fig. 2 is a flowchart of micro-service deployment of the micro-service cluster module according to this embodiment;
fig. 3 is a load balancing framework diagram of the load balancing scheduling module of the present embodiment;
FIG. 4 is a flowchart illustrating the operation of the service load prediction module according to this embodiment;
FIG. 5 is an ELM structure diagram of a service performance prediction model of the present embodiment;
FIG. 6 is a diagram illustrating exemplary modules for dynamic scheduling of containers;
fig. 7 is a flowchart illustrating the operation of the dynamic container scheduling module according to this embodiment.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
A method for constructing an Internet of things platform based on a micro-service architecture comprises the following steps:
s1, dividing micro service levels for the platform of the Internet of things
The overall architecture of the Internet of things platform is shown in fig. 1. The Internet of things platform supports access from various clients such as devices, gateways, user APPs, enterprise management platforms and third-party platforms, provides services such as protocol conversion, remote device control and data forwarding, and realizes interaction among devices, users and the cloud. At the system architecture level, the Internet of things platform can be divided into four layers, namely an access layer, a service layer, a middle layer and a basic layer, and each layer is further divided into a plurality of groups according to service characteristics. The access layer provides access services for terminals such as heterogeneous devices and platform users, and includes protocol proxy groups for converting different communication protocols into the platform standard communication protocol (MQTT) as well as terminal access authentication groups. Communication based on CoAP (Constrained Application Protocol), XMPP (Extensible Messaging and Presence Protocol) and the HyperText Transfer Protocol (HTTP) must first pass through the services in the protocol proxy group for protocol conversion before communicating with the platform, and terminal access authentication comprises enterprise administrator and sub-user access authentication services and a device authentication access service. The service layer provides connection and communication services among the Internet of things platform, intelligent devices, user APPs, the management platform and third-party platforms, and can be divided into an API service group, a central control service group and a data transit service group; the central control services comprise a data reporting service, a firmware upgrade service, a device control service, a timed task service and a user binding service, the API services comprise a user management service, a product management service, a message pushing service, an enterprise management service and a device authorization service, and the data transit services comprise a third-party platform management service and a transit rule service. The middle layer contains business logic groups which do not directly interact with terminals, such as data analysis services, log management, contextual models, device linkage services and group control services. The basic layer provides the basic service groups required by the access layer, the service layer and the middle layer, such as a MySQL persistence service, a cache service, a Cassandra data service, an in-memory database, a relational database and a message queue service.
The services required in the Internet of things platform conform to the single-responsibility principle, the stateless principle, the independent deployment principle and the lightweight communication principle, that is, the four service design principles of a micro-service architecture. Fig. 1 divides the Internet of things platform into four layers using service type as the layering basis, and each layer contains several groups with complex service functions. The Internet of things platform could encapsulate all functions of one service group into a single component, which would simplify service cooperation and deployment; however, such an implementation does not follow the single-responsibility principle of micro-service design, and the service functions within the same component would be highly coupled, which is not conducive to service upgrade, maintenance and expansion. For example, the central control service group provides multiple functional services such as data reporting, remote control and user binding for devices; if these were integrated into one application for development, testing, deployment and operation, the difficulty of team cooperation, service upgrade and maintenance would increase. Therefore, it is necessary to continue analyzing and modeling the service components at each level and to split each component into one or more single-function, low-coupling services. Based on the micro-service concept, components such as the MQTT service and the API service are divided into multiple autonomous fine-grained services, and each micro-service supports independent development, testing, deployment and scaling.
S2, the micro service deployment module deploys a micro service cluster of the Internet of things platform by using a Docker Swarm tool, and registers micro service information to the service discovery center, so that other micro services and each module can acquire the state of the micro service cluster;
and the micro-service deployment module deploys the micro-service cluster of the Internet of things platform by using a Docker Swarm tool. The Docker Swarm is a built-in cluster management tool of the Docker container, uses a standard Docker API interface as an access entry, abstracts the container cluster into a virtual Docker host, and facilitates container cluster management of users. Deployment and management of massive containers on hundreds or thousands of nodes can be easily achieved using Docker Swarm, and the built-in Raft consistency algorithm maintains the high availability of Swarm clusters. Therefore, the micro-service deployment module uses a cluster management tool to construct a container cluster, and combines an external load balancing scheduling module to realize service deployment and service discovery of the platform of the internet of things.
Fig. 2 shows the flow of packaging the micro-services of the Internet of things platform and deploying them to the Swarm cluster. After development is complete, each platform micro-service is packaged into an independent Docker image and published to the platform's private image registry. After the micro-service deployment module parses the configuration of each micro-service, the Docker client initiates a command to deploy the micro-service cluster to Docker Swarm through the Docker API; the Swarm management node then selects available Swarm working nodes with its built-in scheduler and distributes container deployment tasks to them. The Swarm working node pulls the image of the micro-service to be deployed from the private image registry and, once the download is complete, uses the downloaded image to create and run a container on the node. After the Swarm working node has successfully deployed and run the container, the container information, that is, the deployment information of the micro-service, is registered with the service discovery center.
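By way of illustration only, a minimal sketch of this deployment step is given below using the Docker SDK for Python, which is one possible way to issue the deployment command; the image name, registry address and service name are hypothetical and do not appear in the embodiment.

```python
import docker
from docker.types import ServiceMode

# Connect to the Docker engine of a Swarm manager node.
client = docker.from_env()

# Deploy one micro-service as a replicated Swarm service; the Swarm
# scheduler (e.g. the Spread strategy) chooses the worker nodes.
service = client.services.create(
    "registry.iot.local/platform/device-control:1.0",  # hypothetical image in the private registry
    name="device-control",                              # hypothetical service name
    mode=ServiceMode("replicated", replicas=3),
)
print(service.id)
```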
Three node ranking strategies are built into the Docker Swarm scheduler for selecting container deployment nodes: the Random strategy, the Binpack strategy and the Spread strategy. The Random strategy is mainly used in the application development stage and randomly selects a working node that satisfies the container resource constraints for container deployment; the Binpack strategy fills up one working node as much as possible so as to leave more nodes vacant; the Spread strategy preferentially selects, from all Swarm working nodes, the node running the fewest containers and assigns the container deployment request to that node, thereby ensuring even use of all node resources in the Swarm cluster. In a Swarm cluster based on the Spread strategy, the management node organizes the information of all working nodes into a min-heap, using the number of containers running on each node as the heap ordering key. Compared with the Random and Binpack strategies, the Spread strategy keeps the number of containers running on all Swarm working nodes similar and avoids unbalanced use of some node resources, so the micro-service deployment module adopts the Spread strategy as the container deployment strategy of the Internet of things platform.
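The selection rule of the Spread strategy can be summarised as a min-heap over worker nodes keyed by running-container count. The following short sketch only illustrates that rule; the node names and counts are hypothetical.

```python
import heapq

def pick_spread_node(node_container_counts):
    """Return the worker node that currently runs the fewest containers,
    mirroring the min-heap ordering used by the Spread strategy."""
    heap = [(count, name) for name, count in node_container_counts.items()]
    heapq.heapify(heap)            # heap root = node with the fewest containers
    count, name = heap[0]
    return name

# Hypothetical cluster state: node name -> running container count.
print(pick_spread_node({"worker-1": 7, "worker-2": 3, "worker-3": 5}))  # -> worker-2
```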
The service discovery center is used for storing the node and container states of the current container cluster for other services and modules to use. When a container on a node joins or leaves the cluster, the service registration tool on that node reports the corresponding event to the service discovery center, and the service discovery center updates the cluster state according to the reported event. The Internet of things platform uses the Consul open source tool to build a Consul cluster as the service discovery center of the container cluster; Consul is an open source tool released by HashiCorp for service discovery and service registration in distributed systems. Consul provides service health checking, multiple data centers, key-value storage and other functions, and exposes a RESTful API for other service components to call. Two kinds of service nodes exist in the Consul cluster, namely the Consul Server and the Consul Client. The Consul Server stores and replicates configuration data and communicates with Consul Clients and other data centers; the Consul Client forwards data access requests to the Consul Server cluster and provides key-value data read/write to the outside.
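As an illustrative sketch only, registration and discovery against the Consul RESTful API could look as follows (using the Python requests library; the agent address, service name, ID and port are placeholders, not values from the embodiment).

```python
import requests

CONSUL = "http://127.0.0.1:8500"   # local Consul Client agent (placeholder address)

def register_container(name, service_id, address, port):
    """Register a running container as a Consul service."""
    payload = {"Name": name, "ID": service_id, "Address": address, "Port": port}
    requests.put(f"{CONSUL}/v1/agent/service/register", json=payload).raise_for_status()

def discover(name):
    """Query the service discovery center for all instances of a micro-service."""
    resp = requests.get(f"{CONSUL}/v1/catalog/service/{name}")
    resp.raise_for_status()
    return [(entry["ServiceAddress"], entry["ServicePort"]) for entry in resp.json()]

register_container("device-control", "device-control-1", "10.0.0.5", 8080)
print(discover("device-control"))
```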
S3, distributing the client requests to the micro-services of the Internet of things platform through a load balancing scheduling module;
the load balancing scheduling module of the internet of things platform uses the Nginx as a load balancer of the internet of things platform, and comprises a Domain Name System (DNS), a service discovery module, a container registration module, a load balancing module and a configuration updating module. Fig. 3 is a load balancing framework diagram of the internet of things platform.
A service discovery module: the service discovery module adopts a Consul cluster as a service discovery center of the container cluster, and the Consul provides registration and discovery services of nodes and container instances for the container cluster. The information of all working nodes and container instances in the Consul cluster is stored in a Consul Server cluster, service and function modules in the Consul cluster can use RESTful API of the Consul cluster to access a Consul Client, and the Consul Client forwards a request to a Consul Server of the same data center to complete the functions of querying, adding and deleting the information of the containers and the nodes.
A container registration module: the Swarm working nodes access the Consul cluster through a custom configuration; when a node is added or removed, the corresponding event is automatically sent to the Consul cluster, completing the update of the Swarm working node data in the Consul cluster. However, a Swarm working node cannot actively report the state of its internal containers to the Consul cluster, so the container information on the node must be registered and deregistered by deploying a container registration module. Registrator listens for container start and stop events of the Docker engine and registers the container information in the Consul cluster in the form of Consul services. Therefore, the Internet of things platform runs the Registrator service on each Swarm working node to monitor and forward the container survival state to the service discovery center.
A load balancing module: and the load balancing module adopts a Nginx load balancer to realize load balancing of the Internet of things platform. The Nginx load balancer can be used as a lightweight high-concurrency Web server and a reverse proxy server of a back-end service. The reverse proxy server hides the condition of back-end service, the front-end request only interacts with the reverse proxy server, and the reverse proxy server selects the upstream node providing service.
The Internet of things platform deploys the Nginx cluster at the entrance to intercept and distribute access requests. When terminals such as devices, users and enterprise management platforms query the DNS server for the access address of the platform, the DNS server looks up the DNS records of the requested domain name, obtains the IP address list of the Nginx service, and returns an IP address from the list to the terminal. After the Internet of things terminal obtains the Nginx service address of the platform, the terminal sends its request to that address; Nginx forwards the request data to an appropriate back-end container instance according to the configured load balancing scheduling, and sends the response data returned by the back-end container instance to the Internet of things terminal.
A configuration update module: the container cluster of the internet of things platform is dynamically changed, the number of container instances corresponding to each micro service needs to be dynamically adjusted according to the actual load condition of the service, and the Nginx load balancing module needs to update configuration information when the state of the container cluster changes, so that all rear-end container instances are in an available state in the load balancing process. Therefore, the configuration updating module realizes the timely updating of the configuration file of the load balancing module through the Consul-Template connection service discovery module and the load balancing module. The Consul-Template is a configuration file updating tool depending on Consul, which monitors a Consul cluster in real time, updates configuration files of all Nginx reverse proxy servers on the load balancing module in time when the registration information of service instances on the Consul cluster is changed, and calls a Nginx command to reload configuration so as to update the scheduling of the load balancing module.
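The behaviour of Consul-Template described above can be summarised as: query the healthy instances from Consul, rewrite the Nginx upstream configuration, and reload Nginx. The following simplified stand-in illustrates that cycle only; it is not the Consul-Template tool itself, and the Consul address and configuration path are hypothetical.

```python
import subprocess
import requests

CONSUL = "http://127.0.0.1:8500"                                    # placeholder Consul address
UPSTREAM_CONF = "/etc/nginx/conf.d/device-control-upstream.conf"    # hypothetical config path

def healthy_instances(service):
    """List address/port pairs of instances passing Consul health checks."""
    resp = requests.get(f"{CONSUL}/v1/health/service/{service}", params={"passing": "true"})
    resp.raise_for_status()
    return [(e["Service"]["Address"], e["Service"]["Port"]) for e in resp.json()]

def refresh_upstream(service):
    """Rewrite the upstream block for one micro-service and reload Nginx."""
    servers = "\n".join(f"    server {addr}:{port};" for addr, port in healthy_instances(service))
    with open(UPSTREAM_CONF, "w") as f:
        f.write(f"upstream {service} {{\n{servers}\n}}\n")
    subprocess.run(["nginx", "-s", "reload"], check=True)   # reload configuration without downtime
```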
S4, the service load prediction module analyzes the time series by using an autoregressive integrated moving average (ARIMA) model to calculate a service load prediction value.
The service load prediction module obtains the historical access load data of the containers from the InfluxDB database of the service monitoring module. In a micro-service cluster, the performance state of a service is mainly affected by its access load, and since micro-services conform to the single-responsibility principle, each service has a single type of access load. Therefore, when the number of container instances is unchanged, predicting the future performance of a service can start from predicting its future access load, that is, the number of access requests to the service over a future period of time. Service access load data is data associated with time; it is essentially a set of time series data, namely the service load time sequence. The service load time sequence is the chronological set of access load values of all time windows TW within a time period TZ, which may be written as X = {x_1, x_2, …, x_i}, where x_i represents the number of access requests to the service within the ith time window tw_i.
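As a small sketch only, the service load time sequence defined above can be built from raw request timestamps (e.g. retrieved from the monitoring database) as follows; the window length is an assumption.

```python
from collections import Counter

def build_load_series(request_timestamps, tz_start, tz_end, tw_seconds=60):
    """Count access requests per time window TW over the analysis period TZ,
    producing the service load time sequence X = [x_1, x_2, ..., x_i]."""
    n_windows = int((tz_end - tz_start) // tw_seconds)
    counts = Counter(int((ts - tz_start) // tw_seconds)
                     for ts in request_timestamps if tz_start <= ts < tz_end)
    return [counts.get(i, 0) for i in range(n_windows)]
```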
The service load prediction module applies the ARIMA model to the service load prediction, and the work flow of the service load prediction module is shown in fig. 4. The service load prediction process can be split into the following key steps:
s41, the service load forecasting module acquires the name, the analysis time period and the forecasting time period information of the micro service needing load forecasting;
s42, the load prediction module obtains access request data in the analysis time period from the InfluxDB database of the service monitoring module according to the service name and the analysis time period, and generates a service load time sequence;
S43, judging whether the service load time sequence is stationary through a unit root (ADF) test. The unit root test checks whether a unit root exists in the service load time sequence; if a unit root exists, the sequence is not stationary and regression analysis on it would be spurious. Specifically, the null hypothesis of the unit root test is that the service load time sequence has a unit root; if the calculated statistic is less than the critical values at the 1%, 5% and 10% confidence levels, the null hypothesis can be rejected, which is sufficient evidence that the service load time sequence is stationary. If the calculated statistic is greater than these critical values, d-order differential transformation is applied to convert the service load time sequence into a stationary sequence;
S44, for a non-stationary service load time sequence, d-order differencing is required: the value at time t-1 is subtracted from the value at time t in the original sequence to obtain a first-order difference sequence; if the first-order difference sequence is still not stationary, the new difference sequence is differenced again to obtain a second-order difference sequence, and so on until a stationary d-order difference sequence is finally obtained. After the differencing order d of ARIMA(p, d, q) is determined, the order p and the order q of the ARIMA model are determined by the Bayesian Information Criterion (BIC);
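A sketch of steps S43-S44 is given below, assuming the statsmodels library as one possible tooling choice (the embodiment does not name a library): the sequence is differenced until the ADF statistic falls below the chosen critical value.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def difference_until_stationary(series, level="5%", max_d=3):
    """Apply d-order differencing until the ADF statistic is below the
    critical value at the chosen confidence level; return (sequence, d)."""
    x, d = np.asarray(series, dtype=float), 0
    while d < max_d:
        stat, _, _, _, crit, _ = adfuller(x)
        if stat < crit[level]:          # null hypothesis (unit root) rejected: stationary
            break
        x, d = np.diff(x), d + 1        # one more order of differencing
    return x, d
```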
S45, constructing an ARIMA model for service load prediction, wherein the ARIMA model is as follows:

x_t = α_1·x_(t-1) + α_2·x_(t-2) + … + α_p·x_(t-p) + ω_t + β_1·ω_(t-1) + β_2·ω_(t-2) + … + β_q·ω_(t-q)

In the above formula, the input load time series X = {x_1, x_2, …, x_n} is the new stationary sequence obtained after d-order differential transformation, where x_n is the average load value of the nth time window; p and q are the orders of the model; α_i and β_i are the parameters of the model (each taking a random value between 0 and 1); and ω_t is a white noise sequence.
To train the model, service load data is first extracted from the time series database of the service monitoring module and input into the model until the model error is lower than a set value; after the model is generated, a prediction time period is set for the model, and the model outputs the service load prediction value for that time period.
S46, the service load prediction module generates a service load prediction value of a time period to be predicted by using the trained ARIMA model;
and S47, carrying out d-order differential reduction on the service load predicted value, wherein the predicted value is a group of time sequences, the reduction process is the inverse process of differential change, each element in the sequence and all the elements in front of the element are accumulated, and the required service load predicted value is obtained after d times of execution.
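A sketch of steps S45-S47 follows, again assuming statsmodels as the tooling (an assumption, not the embodiment's stated implementation): the orders p and q are chosen by BIC on the differenced series, and the forecast is restored by d cumulative summations as described in step S47.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def forecast_load(loads, d, steps, max_p=5, max_q=5):
    """Select (p, q) by BIC, fit ARIMA on the d-differenced series,
    forecast, then undo the d-order differencing (step S47)."""
    loads = np.asarray(loads, dtype=float)
    diffed = np.diff(loads, n=d) if d > 0 else loads
    best = None
    for p in range(max_p + 1):                    # BIC search over (p, q)
        for q in range(max_q + 1):
            try:
                res = ARIMA(diffed, order=(p, 0, q)).fit()
            except Exception:
                continue
            if best is None or res.bic < best[0]:
                best = (res.bic, res)
    fc = np.asarray(best[1].forecast(steps=steps))   # forecast of the differenced series
    for k in range(d, 0, -1):                        # d-order inverse differencing
        last = loads[-1] if k == 1 else np.diff(loads, n=k - 1)[-1]
        fc = np.cumsum(fc) + last
    return fc
```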
S5, the service performance prediction module generates a performance prediction value according to the service load prediction value and the number of instances;
the service performance prediction module takes the CPU utilization rate of the service as a prediction target to generate a service performance prediction model. The use conditions (CPU utilization rate, memory utilization rate and the like) of container resources are regularly collected through Docker API commands in the Internet of things platform and stored in a time sequence database. The service performance prediction model takes the load prediction value of the service and the container instance number of the service as the input of the extreme learning machine, and takes the performance prediction value of the service as the output. The model construction process is to obtain service history data (service load value, service performance value and service container instance number) from a time sequence database, input the service history data into an extreme learning machine for training, and store the model when the model error is lower than a set value. The service performance prediction module obtains the number of instances of the micro-service from the service discovery center and obtains a service load prediction value from the service load prediction module, and inputs the obtained number of instances and the load prediction value into a service performance prediction model, and the model outputs the service performance prediction value.
Service performance prediction is a multiple regression analysis process, and the change of service performance within a single time window depends on the load condition of the service and the number of container instances of the service. Therefore, in the service performance prediction model, the service load value and the number of containers may be used as input variables of the model, and the service performance (CPU utilization) may be used as output variables of the model. In a micro-service cluster scene of an internet of things platform, the service characteristics of each micro-service are different, the influence trends of two input variables on the service performance are different, and the micro-service cluster scene is not suitable for realizing data fitting by uniformly using models such as linear regression and polynomial regression. An Extreme Learning Machine (ELM) as a simple-structure single hidden layer feedforward neural network has the advantages of high learning speed, high generalization capability, capability of approximating any nonlinear function and the like, so the ELM can be applied to service performance prediction nonlinear multiple regression analysis.
Fig. 5 shows the ELM network structure of the service performance prediction model: the input layer has 2 neurons, the output layer has one neuron, and the hidden-layer neurons use the Sigmoid function as the activation function g(x). The sample set of the service performance prediction model is S = {(x_j, t_j) | j = 1, 2, …, N}, where the sample input space is x_j = (x_j1, x_j2) ∈ R²: the feature x_j1 represents the service load value (number of access requests) within time window j, the feature x_j2 represents the number of container instances corresponding to the service within time window j, N represents the number of time windows, and R² is the two-dimensional real space. The output space is t_j ∈ R¹, where R¹ is the one-dimensional real space and the sample label t_j represents the service performance within time window j, here the average CPU utilization of all container instances of the service. In FIG. 5, a_i is the weight vector connecting the input layer nodes to the ith hidden-layer neuron, β_k is the weight vector connecting the kth hidden-layer neuron to the output layer, b_k is the bias of the kth hidden neuron, and K is the number of hidden-layer neurons.
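A compact numpy sketch of the ELM in FIG. 5 is given below for illustration: the random input weights a_i and biases b_k are fixed, the sigmoid activation g(x) is applied, and the output weights β are solved with the Moore-Penrose pseudo-inverse. The hidden-layer size is an assumption.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine (illustrative sketch)."""
    def __init__(self, n_hidden=20, seed=None):
        self.K = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, t):
        # Random input weights a_i and hidden biases b_k are drawn once and never trained.
        self.A = self.rng.standard_normal((X.shape[1], self.K))
        self.b = self.rng.standard_normal(self.K)
        H = 1.0 / (1.0 + np.exp(-(X @ self.A + self.b)))   # hidden outputs with sigmoid g(x)
        self.beta = np.linalg.pinv(H) @ t                   # output weights by least squares
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.A + self.b)))
        return H @ self.beta
```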
The service monitoring module of the Internet of things platform collects the service load data, service performance data and service container instance data of each service into the time series database, and the service performance prediction module obtains these three kinds of data from the time series database according to the set collection time window and forms a data set. The data set is divided into a training set, a validation set and a test set in the ratio 60:20:20; the training set is used to train the service performance prediction model, the validation set is used to validate the trained service performance prediction model and to select the optimal model within a limited number of iterations, and the test set is used to evaluate the generalization ability of the model. The generation flow of the service prediction model based on the ELM algorithm is shown as pseudo-code 1 (for example reference only).
In the process of generating the service prediction model, because the input weight and the hidden element bias of the newly-built service prediction model are random each time, the service performance prediction module generates a plurality of service prediction models by using a training set, and evaluates the prediction result of a verification set by using Root Mean Square Error (RMSE) to select the optimal model. And the model evaluation process comprises the steps of inputting a verification set to a newly-built service prediction model, outputting a service performance prediction value, calculating the RMSE of the service performance prediction value, taking the model as a final model if the RMSE is lower than a set value, and continuing the model training if the RMSE is not lower than the set value.
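The repeated-training selection described above can be sketched as follows, reusing the ELM class from the previous sketch; the RMSE limit and iteration cap are assumptions, not values fixed by the embodiment.

```python
import numpy as np

def select_performance_model(X_tr, t_tr, X_val, t_val, rmse_limit=0.05, max_iter=50):
    """Train candidate ELMs (random weights each time) and keep the one with the
    lowest validation RMSE, stopping early once the RMSE falls below the set value."""
    best_model, best_rmse = None, np.inf
    for _ in range(max_iter):
        model = ELM(n_hidden=20).fit(X_tr, t_tr)
        rmse = float(np.sqrt(np.mean((model.predict(X_val) - t_val) ** 2)))
        if rmse < best_rmse:
            best_model, best_rmse = model, rmse
        if best_rmse < rmse_limit:
            break
    return best_model, best_rmse
```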
And S6, the dynamic container scheduling module performs horizontal scaling of the micro-service according to the performance predicted value.
As shown in fig. 6, the cooperation process between the container dynamic scheduling module and the other modules is as follows: the service load prediction module obtains service request load data from the service monitoring module and processes the load data into a time series applied to the ARIMA model to obtain a load prediction value; the service performance prediction module takes the current container number and the load prediction value as the input of the service performance prediction model, outputs the service performance prediction value, and transmits this value to the container dynamic scheduling module; the container dynamic scheduling module determines, according to the service performance change over a future period of time, whether to perform horizontal scaling of the micro-service, that is, whether to adjust the number of instances of the micro-service in the micro-service cluster.
And the container dynamic scheduling module calculates a service performance predicted value through the cooperation of the service load prediction module and the service performance prediction module, compares the performance predicted value with a service performance threshold value, and performs horizontal expansion and contraction of the container instance if the performance predicted value exceeds the service performance threshold value. And in each horizontal expansion and contraction, the dynamic scheduling module can pause working according to the set cooling time (such as 5 minutes) so as to avoid cluster shaking caused by frequent expansion and contraction.
And the container dynamic scheduling module adopts the upper and lower limits of the performance threshold set by the user as the judgment standard of the container dynamic scheduling. When the predicted value of the service performance is higher than the upper threshold, the container dynamic scheduling module considers that the current number of the instances is not enough to support the access load, and the instances need to be added to share the load. And when the service performance prediction value is lower than the lower limit, the redundancy of the current container instance is considered to be too high, whether part of container resources are recovered or not can be determined according to the current service performance state, and the service cluster is contracted only when the current service performance value is also lower than the lower limit of the threshold value, so that the service availability of the current time period is ensured.
The scheduling process of the container dynamic scheduling module, as shown in fig. 7, comprises the following steps (a condensed sketch of the decision follows the steps):
(1) the user specifies the upper and lower limits of the service performance threshold and the upper and lower limits of the container instance number threshold. The instance number threshold is an integer no less than the redundancy of the micro-service and is mainly used to limit the number of instances, preventing excessive resource occupation while ensuring high availability of the service. Turn to step (2);
(2) the service performance prediction module uses the load prediction value and the current container instance number to calculate a performance prediction value, and the container dynamic scheduling module receives the performance prediction value and transfers to the step (3);
(3) if the performance prediction value is larger than the upper limit of the threshold value, turning to the step (6), otherwise, turning to the step (4);
(4) if the service performance prediction value is smaller than the lower limit of the threshold value, turning to the step (5), otherwise, turning to the step (8);
(5) if the current service performance is smaller than the lower threshold, turning to the step (6), otherwise, turning to the step (8);
(6) with the service load prediction value unchanged, the number of container instances is varied as a parameter and the performance prediction module is used to generate new performance prediction values. If a container instance number is found whose performance prediction value falls within the performance threshold range and which does not exceed the instance number threshold, that instance number is selected as the scheduling target; otherwise the instance number threshold is taken as the scheduling target. Turn to step (7);
(7) the container dynamic scheduling module horizontally expands and contracts according to the selected number of container instances, and then the step (8) is carried out;
(8) End.
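The decision in steps (3)-(7) can be condensed into the sketch below; `predict_with_n` stands for a hypothetical callback into the service performance prediction module with the load prediction unchanged, and all thresholds are the values supplied by the administrator in step (1).

```python
def plan_target_instances(perf_pred, perf_now, n_now, perf_lo, perf_hi,
                          n_min, n_max, predict_with_n):
    """Return the target container instance count, or the current count if no scaling is needed."""
    scale_up = perf_pred > perf_hi                              # step (3)
    scale_down = perf_pred < perf_lo and perf_now < perf_lo     # steps (4)-(5)
    if not (scale_up or scale_down):
        return n_now                                            # step (8): nothing to do
    for n in range(n_min, n_max + 1):                           # step (6): search instance counts
        if perf_lo <= predict_with_n(n) <= perf_hi:
            return n
    return n_max if scale_up else n_min                         # fall back to the instance threshold
```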
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any substitution or change that a person skilled in the art can readily conceive within the technical scope and inventive concept disclosed by the present invention shall fall within the protection scope of the present invention.
Claims (7)
1. A method for constructing an Internet of things platform based on a micro-service architecture is characterized by comprising the following steps:
s1, dividing micro service levels into an access layer, a service layer, a middle layer and a basic layer for the platform of the Internet of things;
s2, the micro-service deployment module deploys a micro-service cluster of the Internet of things platform by using a container cluster management tool Docker Swarm, and registers micro-service information to a service discovery center;
s3, distributing the client requests to the micro-services of the Internet of things platform through a load balancing scheduling module;
S4, analyzing the time sequence by the service load prediction module through an autoregressive integrated moving average (ARIMA) model, and calculating a service load prediction value; calculating the load prediction value of a service comprises the following steps:
s41, the service load forecasting module acquires the name, the analysis time period and the forecasting time period information of the micro service needing load forecasting;
s42, the load prediction module obtains access request data in the analysis time period from the InfluxDB time sequence database of the service monitoring module according to the service name and the analysis time period, and generates a service load time sequence;
S43, judging whether the service load time sequence is stationary through a unit root (ADF) test. The unit root test checks whether a unit root exists in the service load time sequence; if a unit root exists, the sequence is not stationary and regression analysis on it would be spurious. Specifically, the null hypothesis of the unit root test is that the service load time sequence has a unit root; if the calculated statistic is less than the critical values at the 1%, 5% and 10% confidence levels, the null hypothesis can be rejected, which is sufficient evidence that the service load time sequence is stationary. If the calculated statistic is greater than these critical values, differential transformation is applied to convert the service load time sequence into a stationary sequence;
S44, for a non-stationary service load time sequence, d-order differencing is required: the value at time t-1 is subtracted from the value at time t in the original sequence to obtain a first-order difference sequence; if the first-order difference sequence is still not stationary, the new difference sequence is differenced again to obtain a second-order difference sequence, and so on until a stationary d-order difference sequence is finally obtained. After the differencing order d of ARIMA(p, d, q) is determined, the autoregressive order p and the moving average order q of the ARIMA model are determined by the Bayesian Information Criterion (BIC);
S45, constructing an ARIMA model for service load prediction, wherein the ARIMA model is as follows:

x_t = α_1·x_(t-1) + α_2·x_(t-2) + … + α_p·x_(t-p) + ω_t + β_1·ω_(t-1) + β_2·ω_(t-2) + … + β_q·ω_(t-q)

In the above formula, the input load time series X = {x_1, x_2, …, x_n} is the new stationary sequence obtained after d-order differential transformation, where x_n is the average load value of the nth time window; α_i and β_h are the parameters of the model, each taking a random value between 0 and 1, with i = 1…p and h = 1…q; and ω_t is a white noise sequence;
training the model: firstly, service load data is extracted from the time series database of the service monitoring module and input into the model until the error of the model is lower than a set value; after the model is generated, a prediction time period t is set for the model, and the model outputs the service load prediction value x_t for the time period t;
S46, the service load prediction module generates a service load prediction value of a time period to be predicted by using the trained ARIMA model;
s47, carrying out d-order difference restoration on the service load predicted values; the predicted values form a time series, and the restoration process is the inverse of the differencing, namely each element of the series is accumulated with all elements before it, and this is performed d times to obtain the final required service load predicted values;
s5, the service performance prediction module generates a performance prediction value according to the service load prediction value and the number of instances;
and S6, the dynamic container scheduling module performs horizontal scaling of the micro-service according to the performance predicted value.
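For readers implementing step S4, the following is a minimal Python sketch of the described load-prediction pipeline (ADF stationarity test, differencing, BIC order selection, ARIMA fitting and forecasting). It assumes the numpy and statsmodels libraries; function names such as predict_load are illustrative and not part of the claimed method.

```python
# Minimal sketch of the load-prediction pipeline of step S4 (S43-S47), assuming the
# numpy and statsmodels libraries; names such as predict_load are illustrative only.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA


def difference_until_stationary(series, max_d=2, alpha=0.05):
    """S43/S44: difference the series until the ADF test rejects the unit-root null."""
    d, current = 0, np.asarray(series, dtype=float)
    while d < max_d:
        statistic, p_value, *_ = adfuller(current)
        if p_value < alpha:          # unit-root hypothesis rejected -> stationary
            break
        current = np.diff(current)   # x_t - x_(t-1)
        d += 1
    return current, d


def select_order_by_bic(series, d, max_p=3, max_q=3):
    """S44: pick the (p, q) pair with the smallest Bayesian Information Criterion."""
    best, best_bic = (1, 1), np.inf
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            try:
                bic = ARIMA(series, order=(p, d, q)).fit().bic
            except Exception:
                continue
            if bic < best_bic:
                best_bic, best = bic, (p, q)
    return best


def predict_load(load_series, steps):
    """S45-S47: fit ARIMA(p, d, q) and forecast the next `steps` time windows."""
    _, d = difference_until_stationary(load_series)
    p, q = select_order_by_bic(load_series, d)
    fitted = ARIMA(load_series, order=(p, d, q)).fit()
    # statsmodels differences the series and restores (inverse-differences) the
    # forecast internally, so the result is already on the original load scale.
    return fitted.forecast(steps=steps)


if __name__ == "__main__":
    history = [120, 130, 128, 140, 155, 160, 158, 170, 182, 190, 205, 210,
               220, 218, 231, 240, 252, 260, 258, 271]   # requests per time window
    print(predict_load(history, steps=3))
```

If the differencing and restoration are instead performed manually, as steps S44 and S47 describe, the d-order restoration amounts to d successive cumulative sums, each re-anchored on the last observed value of the corresponding lower-order series.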
2. The method for constructing an internet of things platform based on a micro-service architecture as claimed in claim 1, wherein the access layer in step S1 provides terminal access components for heterogeneous devices and platform users, and provides a terminal access authentication component; the service layer provides the connection and communication components between the Internet of things platform and the intelligent devices, user APPs, the management platform and third-party platforms, the communication components comprising an Application Programming Interface (API) service component, a central control service component and a transit service component; the middle layer comprises service logic components which do not interact directly with the terminals, including a data analysis component and a log management component; and the basic layer provides the basic service components required by the access layer, the service layer and the middle layer, comprising an in-memory database, a relational database and a message queue.
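The layered decomposition of claim 2 can be summarised, purely for illustration, as a mapping from layers to the micro-service components they contribute. The component names below follow the claim, while the dictionary structure and the service_names helper are assumptions of this sketch.

```python
# Illustrative (non-normative) listing of the four layers of claim 2 and the
# component micro-services each layer contributes.
PLATFORM_LAYERS = {
    "access layer":  ["device-access", "user-access", "access-authentication"],
    "service layer": ["api-service", "central-control-service", "transit-service"],
    "middle layer":  ["data-analysis", "log-management"],
    "basic layer":   ["in-memory-database", "relational-database", "message-queue"],
}

def service_names():
    """Flatten the layer map into the list of micro-services handed to the deployment module."""
    return [name for components in PLATFORM_LAYERS.values() for name in components]
```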
3. The method for constructing an internet of things platform based on a micro-service architecture as claimed in claim 1, wherein the Docker Swarm tool in step S2 is the built-in cluster management tool of the Docker container engine; the micro-service deployment module uses the Docker Swarm tool to construct a Docker container cluster in which each micro-service is independently deployed and run, and the deployment node of each Docker container is selected using the Spread policy built into the scheduler of the Docker Swarm tool.
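A minimal sketch of how the deployment module of step S2 / claim 3 might create one Swarm service per micro-service, assuming a Swarm has already been initialised (docker swarm init) and using illustrative registry and image names; Swarm's default spread placement then distributes the tasks across nodes.

```python
# Sketch of step S2 / claim 3: create one replicated Swarm service per micro-service.
import subprocess

MICROSERVICES = {
    "device-access": "registry.example.com/iot/device-access:1.0",   # illustrative images
    "api-service":   "registry.example.com/iot/api-service:1.0",
    "data-analysis": "registry.example.com/iot/data-analysis:1.0",
}

def deploy_service(name, image, replicas=2):
    """Create a replicated Swarm service for one micro-service of the platform."""
    subprocess.run(
        ["docker", "service", "create", "--name", name,
         "--replicas", str(replicas), image],
        check=True,
    )

for service_name, service_image in MICROSERVICES.items():
    deploy_service(service_name, service_image)
```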
4. The method for constructing an internet of things platform based on a micro-service architecture as claimed in claim 1, wherein the service discovery center in step S2 is configured to store the node and container states of the current container cluster; when a container on a node joins or leaves the cluster, a service registration tool on that node reports the corresponding event to the service discovery center, and the service discovery center updates the cluster state according to the reported event.
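A sketch of the event reporting described in claim 4, using Consul's standard agent HTTP API (the discovery center adopted in claim 5): a container joining the cluster is registered as a service instance, and a container leaving is deregistered. Addresses, ports and identifiers are illustrative, and the requests library is assumed.

```python
# Sketch of claim 4: report container join/leave events to the service discovery center.
import requests

CONSUL_AGENT = "http://127.0.0.1:8500"   # local Consul agent, illustrative address

def report_container_started(service_id, service_name, address, port):
    """Container joined the cluster: register it as a service instance in Consul."""
    payload = {"ID": service_id, "Name": service_name, "Address": address, "Port": port}
    resp = requests.put(f"{CONSUL_AGENT}/v1/agent/service/register", json=payload, timeout=5)
    resp.raise_for_status()

def report_container_stopped(service_id):
    """Container left the cluster: remove its registration from Consul."""
    resp = requests.put(f"{CONSUL_AGENT}/v1/agent/service/deregister/{service_id}", timeout=5)
    resp.raise_for_status()

report_container_started("api-service-1", "api-service", "10.0.0.12", 8080)
```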
5. The method for constructing a platform of internet of things based on micro-service architecture as claimed in claim 1, wherein the load balancing scheduling module in step S3 includes a service discovery module, a container registration module, a load balancing module and a configuration update module;
the service discovery module adopts a Consul cluster as a service discovery center of the container cluster, and the Consul provides registration and discovery services of nodes and container instances for the Docker container cluster;
the Consul cluster is constructed by using the Consul open-source tool and contains two kinds of service nodes, namely the Consul Server and the Consul Client; the Consul Server is used for storing and replicating configuration data and communicates with the Consul Clients and the data center; the Consul Client forwards data access requests to the Consul Server cluster and externally provides key-value read and write functions for the data; the Consul cluster is built by deploying and running a Consul node with the Consul open-source tool on a host machine, the node being designated as Server mode at start-up, and if the node is not the first Server node, the address of the first Server node is configured with a join command so that the node forms a cluster with the first Server node;
the information of all Swarm worker nodes and container instances in the Consul cluster is stored in the Consul Server cluster; the Consul cluster is accessed through its Representational State Transfer (RESTful) application programming interface via a Consul Client, and the Consul Client forwards the request to a Consul Server in the same data center to complete the query, addition and deletion of Docker container and node information;
the container registration module realises the registration and deregistration of Docker container information on the nodes by deploying the service registration tool Registrator; Registrator listens for container start and stop events of the Docker engine and registers the Docker container information in the Consul cluster in the form of Consul services; the Internet of things platform runs the Registrator tool on every Swarm worker node to monitor the container survival state and forward it to the service discovery center;
the load balancing module adopts the Nginx load balancer to realise load balancing for the Internet of things platform; when a client request arrives, the core of the Nginx load balancer maps the access request to the corresponding location block by looking up the configuration file, and each configuration directive in the location block starts the corresponding functional module to complete its work;
the configuration update module connects the service discovery module and the load balancing module through the configuration update tool Consul-Template to keep the configuration files of the load balancing module up to date; when the registration information of service instances on the Consul cluster changes, Consul-Template promptly updates the configuration files of all Nginx reverse proxy servers of the load balancing module and invokes the Nginx reload command so that the scheduling of the load balancing module is updated (a simplified sketch of this update flow follows).
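The sketch below approximates the effect of the configuration update module: read the healthy instances of a micro-service from Consul's health API, rewrite the corresponding Nginx upstream block, and reload Nginx. In the claim this is performed by Consul-Template; this plain-Python equivalent only illustrates the data flow, and the file path and service name are assumptions.

```python
# Simplified sketch of claim 5's configuration update flow (Consul -> Nginx upstream -> reload).
import subprocess
import requests

CONSUL = "http://127.0.0.1:8500"
UPSTREAM_CONF = "/etc/nginx/conf.d/api-service-upstream.conf"   # assumed path

def healthy_endpoints(service):
    """Query Consul's health API for instances of the service whose checks pass."""
    entries = requests.get(f"{CONSUL}/v1/health/service/{service}",
                           params={"passing": "true"}, timeout=5).json()
    return [(entry["Service"]["Address"], entry["Service"]["Port"]) for entry in entries]

def rewrite_upstream_and_reload(service):
    """Regenerate the upstream block from the current registrations and reload Nginx."""
    servers = "\n".join(f"    server {address}:{port};"
                        for address, port in healthy_endpoints(service))
    with open(UPSTREAM_CONF, "w") as conf:
        conf.write(f"upstream {service} {{\n{servers}\n}}\n")
    subprocess.run(["nginx", "-s", "reload"], check=True)

rewrite_upstream_and_reload("api-service")
```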
6. The method for constructing an internet of things platform based on a micro-service architecture as claimed in claim 1, wherein in step S5 the service performance prediction module generates a service performance prediction model using an Extreme Learning Machine (ELM), the service performance prediction model taking the load prediction value of a service and the number of container instances of the service as the input of the extreme learning machine and taking the performance prediction value of the service as the output;
the service performance prediction model is built as follows: historical data of the service is obtained from the time-series database and input into the extreme learning machine for training, and the model is saved when the model error falls below a set value; the service performance prediction module then obtains the number of instances of the micro-service from the service discovery center and the service load prediction value from the service load prediction module, inputs the obtained number of instances and load prediction value into the service performance prediction model, and the model outputs the service performance prediction value.
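A minimal Extreme Learning Machine sketch for claim 6: the inputs are the predicted load of a service and its number of container instances, and the output is the predicted service performance (here assumed to be a mean response time). The hidden-layer size, the sigmoid activation and the synthetic training data are assumptions of this sketch, not taken from the patent.

```python
# Minimal ELM sketch: random hidden layer, least-squares output weights.
import numpy as np

class ELM:
    def __init__(self, n_hidden=32, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))     # sigmoid activation

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        # Random, never-trained input weights and biases -- the defining trait of an ELM.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        # Output weights by least squares (Moore-Penrose pseudo-inverse of the hidden output).
        self.beta = np.linalg.pinv(self._hidden(X)) @ np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        return self._hidden(np.asarray(X, dtype=float)) @ self.beta

# Historical samples: (predicted load, instance count) -> mean response time in ms (synthetic).
X_hist = [[200, 2], [400, 2], [400, 4], [800, 4], [800, 8], [1200, 8]]
y_hist = [80, 180, 95, 190, 100, 150]
perf_model = ELM().fit(X_hist, y_hist)
print(perf_model.predict([[600, 4]]))
```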
7. The method for constructing an internet of things platform based on a micro-service architecture according to claim 1, wherein in step S6, the container dynamic scheduling module obtains a predicted value of micro-service performance, and when the predicted value of performance exceeds a service performance threshold range specified by an internet of things platform administrator, the service performance prediction module is called again to calculate the number of container instances required by the micro-service, and horizontal scaling of the micro-service is performed, that is, the number of instances of the micro-service in the micro-service cluster is adjusted.
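The scaling decision of claim 7 can be sketched as follows: if the predicted performance breaches the administrator's threshold, search for the smallest instance count whose predicted performance is acceptable and scale the Swarm service to it. The threshold value, replica bounds and service name are assumptions; perf_model is a trained performance predictor (for example the ELM sketch above) and predicted_load comes from the ARIMA load prediction module.

```python
# Sketch of claim 7: threshold check, required-replica search, and Swarm horizontal scaling.
import subprocess

RESPONSE_TIME_SLA_MS = 150          # administrator-specified performance threshold (assumed)
MIN_REPLICAS, MAX_REPLICAS = 1, 20

def required_replicas(perf_model, predicted_load):
    """Smallest replica count whose predicted performance stays within the threshold."""
    for n in range(MIN_REPLICAS, MAX_REPLICAS + 1):
        if perf_model.predict([[predicted_load, n]])[0] <= RESPONSE_TIME_SLA_MS:
            return n
    return MAX_REPLICAS

def autoscale(service_name, perf_model, predicted_load, current_replicas):
    """Adjust the Swarm service only when the predicted performance breaches the SLA."""
    predicted_perf = perf_model.predict([[predicted_load, current_replicas]])[0]
    if predicted_perf <= RESPONSE_TIME_SLA_MS:
        return current_replicas                       # within the threshold, no change
    target = required_replicas(perf_model, predicted_load)
    subprocess.run(["docker", "service", "scale", f"{service_name}={target}"], check=True)
    return target
```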
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910420269.8A CN110149396B (en) | 2019-05-20 | 2019-05-20 | Internet of things platform construction method based on micro-service architecture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110149396A CN110149396A (en) | 2019-08-20 |
CN110149396B (en) | 2022-03-29
Family
ID=67592256
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910420269.8A Active CN110149396B (en) | 2019-05-20 | 2019-05-20 | Internet of things platform construction method based on micro-service architecture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110149396B (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9729615B2 (en) * | 2013-11-18 | 2017-08-08 | Nuwafin Holdings Ltd | System and method for collaborative designing, development, deployment, execution, monitoring and maintenance of enterprise applications |
CN106506605B (en) * | 2016-10-14 | 2020-09-22 | 华南理工大学 | SaaS application construction method based on micro-service architecture |
CN106533929B (en) * | 2016-12-30 | 2019-08-23 | 北京中电普华信息技术有限公司 | A kind of micro services development system, generation method and dispositions method and device |
CN108712464A (en) * | 2018-04-13 | 2018-10-26 | 中国科学院信息工程研究所 | A kind of implementation method towards cluster micro services High Availabitity |
CN108833462A (en) * | 2018-04-13 | 2018-11-16 | 中国科学院信息工程研究所 | A kind of system and method found from registration service towards micro services |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105515759A (en) * | 2015-11-27 | 2016-04-20 | 国网信息通信产业集团有限公司 | Micro service registration method and micro service registration system |
CN108337106A (en) * | 2017-12-18 | 2018-07-27 | 海尔优家智能科技(北京)有限公司 | Construction method, platform and the computer equipment of Internet of Things micro services system architecture |
CN108228347A (en) * | 2017-12-21 | 2018-06-29 | 上海电机学院 | The Docker self-adapting dispatching systems that a kind of task perceives |
CN108270855A (en) * | 2018-01-15 | 2018-07-10 | 司中明 | A kind of method of platform of internet of things access device |
US10289538B1 (en) * | 2018-07-02 | 2019-05-14 | Capital One Services, Llc | Systems and methods for failure detection with orchestration layer |
Non-Patent Citations (1)
Title |
---|
Design of a basic framework for Internet of Things applications based on micro-service architecture; Wu Changyu, Li Yunsong, Liu Qing, Wang Shanqin; Journal of Suzhou University (宿州学院学报); 2015-07-31; full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |