CN116954810A - Method, system, storage medium and program product for creating container application instance - Google Patents
- Publication number
- CN116954810A (application number CN202210399836.8A)
- Authority
- CN
- China
- Prior art keywords
- virtual machine
- container
- management component
- virtual
- application instance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F9/45558 — Hypervisor-specific management and integration aspects (under G06F9/455, Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines)
- G06F21/53 — Monitoring users, programs or devices to maintain the integrity of platforms during program execution by executing in a restricted environment, e.g. sandbox or secure virtual machine
- G06F2009/45562 — Creating, deleting, cloning virtual machine instances
- G06F2009/45587 — Isolation or security of virtual machine instances
- G06F2209/5011 — Pool (indexing scheme relating to G06F9/50)
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Stored Programmes (AREA)
Abstract
Embodiments of this application provide a method, a system, a storage medium, and a program product for creating container application instances, relating to the technical fields of computers and containers. The method is applied to a container application management system that comprises a management cluster and a node cluster, the management cluster comprising a virtual machine management component and the node cluster comprising a plurality of physical machines. The method comprises the following steps: the virtual machine management component creates a virtual machine on a target physical machine, the virtual machine being configured with a container management component and the target physical machine being one of the plurality of physical machines; the container management component creates a container application instance, comprising at least one container, in the virtual machine. Each container application instance exclusively occupies one virtual machine, and different virtual machines are isolated from each other. The technical scheme provided by the embodiments of this application improves the independence between different container application instances and thereby the privacy and security of each instance.
Description
Technical Field
Embodiments of the present application relate to the field of computers and containers, and in particular, to a method, a system, a storage medium, and a program product for creating a container application instance.
Background
Container technology partitions the resources of a single operating system into isolated groups so that conflicting resource demands can be balanced between those groups.
In the related art, a container application management system includes multiple nodes, each of which may run several container application instances that share all of the node's resources. This can lead to resource contention between container application instances on the same node: for example, if some instances issue too many read-write requests, the read-write speed of the other instances may be affected.
Disclosure of Invention
The embodiments of this application provide a method, a system, a storage medium, and a program product for creating container application instances, which improve the independence between different container application instances. The technical scheme is as follows:
according to one aspect of the embodiment of the application, a method for creating a container application instance is provided, and the method is applied to a container application management system, wherein the container application management system comprises a management cluster and a node cluster, the management cluster comprises a virtual machine management component, and the node cluster comprises a plurality of physical machines; the method comprises the following steps:
The virtual machine management component creates a virtual machine on a target physical machine, the virtual machine is configured with a container management component, and the target physical machine is one physical machine in the plurality of physical machines;
the container management component creates a container application instance in the virtual machine, the container application instance comprising at least one container; wherein each container application instance exclusively occupies one virtual machine, and different virtual machines are isolated from each other.
According to an aspect of an embodiment of the present application, there is provided a container application management system, including a management cluster including a virtual machine management component and a node cluster including a plurality of physical machines;
the virtual machine management component is used for creating a virtual machine on a target physical machine, the virtual machine is configured with the container management component, and the target physical machine is one physical machine in the plurality of physical machines;
the container management component is used for creating a container application instance in the virtual machine, wherein the container application instance comprises at least one container; wherein each container application instance monopolizes one virtual machine, and different virtual machines are isolated from each other.
According to an aspect of an embodiment of the present application, there is provided a computer-readable storage medium in which a computer program is stored; the computer program is loaded and executed by a processor to implement the method of creating a container application instance performed by any one or more of the entities described above.
According to an aspect of an embodiment of the present application, there is provided a computer program product comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method of creating a container application instance performed by any one or more of the entities described above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
When a container application instance needs to run, the virtual machine management component creates a virtual machine on a target physical machine, and that one container application instance is created inside that one virtual machine. Each container application instance therefore has exclusive use of one virtual machine, and virtual machines are isolated from one another, so different container application instances are kept apart: each runs in its own virtual machine, and none affects the others. This improves the independence between different container application instances.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
FIG. 1 is a schematic diagram of a container application management system provided in one embodiment of the present application;
FIG. 2 is a flow chart of a method of creating a container application instance provided by one embodiment of the present application;
FIG. 3 is a flow chart of a method of creating a container application instance provided by another embodiment of the present application;
FIG. 4 is a block diagram of a system for creating an instance of a container application provided by one embodiment of the present application;
FIG. 5 is a block diagram of a system for creating an instance of a container application provided by another embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of methods consistent with aspects of the application as recited in the appended claims.
First, some terms related to the embodiments of the present application will be described:
Container application management system Kubernetes (k8s for short): a container orchestration platform that can combine multiple containers into one service, dynamically assign the hosts on which containers run, and so on, providing great convenience for users of containers. Kubernetes is an open-source project.
Container application instance (Pod): the basic scheduling unit of Kubernetes is called a "pod". This abstraction adds a higher level of content on top of containerized components. A pod typically contains one or more containers, which are guaranteed to be co-located on the same host and can share resources. Each pod in Kubernetes is assigned a unique IP address (within the cluster), which allows applications to use the same ports without conflict.
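To make the pod abstraction concrete, the sketch below expresses a minimal pod manifest as a Python dictionary. The container names and image tags ("web", "nginx:1.25", etc.) are hypothetical examples, not taken from the patent; the sketch only illustrates how one pod groups several containers behind a single cluster IP.

```python
# A minimal pod manifest expressed as a Python dict. Container names
# and image tags are hypothetical examples for illustration only.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-pod"},
    "spec": {
        "containers": [
            # Both containers share the pod's single cluster IP, so they
            # can use distinct ports without conflicting with other pods.
            {"name": "web", "image": "nginx:1.25",
             "ports": [{"containerPort": 80}]},
            {"name": "sidecar", "image": "busybox:1.36",
             "command": ["sh", "-c", "sleep infinity"]},
        ]
    },
}

def container_names(manifest):
    """Return the names of all containers declared in a pod manifest."""
    return [c["name"] for c in manifest["spec"]["containers"]]

print(container_names(pod_manifest))  # ['web', 'sidecar']
```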
APIServer (Application Programming Interface Server): the APIServer is a key component that provides the internal and external interfaces of Kubernetes through the Kubernetes API, using JSON over HTTP (HyperText Transfer Protocol). The APIServer processes and validates REST requests (e.g., get resources, add (post) resources, update (put) resources, and delete resources) and updates the state of API objects in etcd (a distributed key-value store), allowing clients to configure workloads and containers across worker nodes.
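The validate-then-store behaviour described above can be sketched as a toy API server over an in-memory dictionary standing in for etcd. The class and method names are illustrative assumptions, not the real Kubernetes APIServer interface.

```python
class ToyAPIServer:
    """Toy sketch: REST verbs mapped onto a key-value store.
    The dict stands in for etcd; not the real Kubernetes APIServer."""

    def __init__(self):
        self.store = {}  # stand-in for the etcd key-value store

    def post(self, key, obj):
        # Create: reject a key that already exists (validation step).
        if key in self.store:
            raise KeyError(f"{key} already exists")
        self.store[key] = obj

    def put(self, key, obj):
        # Update: the object must already exist.
        if key not in self.store:
            raise KeyError(f"{key} not found")
        self.store[key] = obj

    def get(self, key):
        return self.store[key]

    def delete(self, key):
        del self.store[key]

api = ToyAPIServer()
api.post("pods/p1", {"phase": "Pending"})
api.put("pods/p1", {"phase": "Running"})
print(api.get("pods/p1"))  # {'phase': 'Running'}
```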
Virtual node (VK, Virtual Kubelet): an open-source implementation of the Kubernetes kubelet that bridges the Kubernetes API to other platforms. The main scenario for virtual nodes is extending the Kubernetes API onto serverless container platforms such as ACI (Azure Container Instances) and AWS Fargate. From the perspective of the Kubernetes APIServer, a virtual node may schedule containers elsewhere, for example via a cloud-server API, rather than on a real node. Optionally, virtual nodes have a pluggable architecture.
Cloud technology (Cloud technology) refers to hosting technology that unifies a series of resources, such as hardware, software and networks, in a wide-area or local-area network to realize the computation, storage, processing and sharing of data.
Cloud technology is the general term for the network technology, information technology, integration technology, management-platform technology, application technology and the like applied under the cloud-computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud-computing technology will become an important support. Background services of networked systems require large amounts of computing and storage resources, for example video websites, picture websites and other portals. With the continued development of the internet industry, each item may in the future carry its own identification mark, which must be transmitted to a background system for logical processing; data at different levels will be processed separately, and all kinds of industry data require strong backend system support, which can only be realized through cloud computing.
Cloud computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, enabling various application systems to obtain computing power, storage space and information services as needed. The network that provides the resources is referred to as the "cloud". From the user's point of view, the resources in the cloud are infinitely expandable and can be acquired at any time, used on demand, expanded at any time and paid for according to use.
In some embodiments, a provider of basic cloud-computing capabilities establishes a cloud-computing resource pool platform (a cloud platform for short, commonly referred to as IaaS (Infrastructure as a Service)) and deploys multiple types of virtual resources in the resource pool for external clients to choose from. The cloud-computing resource pool mainly comprises computing devices (virtualized machines, including operating systems), storage devices and network devices.
According to logical function, a PaaS (Platform as a Service) layer can be deployed on the IaaS layer and a SaaS (Software as a Service) layer can be deployed on the PaaS layer; SaaS can also be deployed directly on IaaS. PaaS is a platform on which software runs, such as a database or a web container. SaaS covers the wide variety of business software, such as web portals and bulk SMS senders. Generally speaking, SaaS and PaaS are upper layers relative to IaaS.
Referring to fig. 1, a schematic diagram of a container application management system according to an embodiment of the application is shown. As shown in fig. 1, the container application management system 10 may include: the cluster 11 and the node cluster 12 are managed.
The management cluster 11 is used for managing the node cluster 12. The management cluster may be a server. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud-computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, CDN, big data and artificial-intelligence platforms.
The management cluster 11 includes a virtual machine management component (eklet) 13, which is responsible for lifecycle management of the virtual machines.
The node cluster 12 includes a plurality of physical machines 14 for running container application instances 15. Optionally, one or more virtual machines 16 may run on a physical machine 14. Each virtual machine 16 includes a container application instance 15 and a container management component (eklet-agent) 17; the virtual machine 16 runs the container management component 17, which is responsible for lifecycle management of the containers in the container application instance 15. Optionally, a security group may be configured for the container application instance 15 to implement network security policy (network policy). Optionally, the container management component 17 performs status reporting for the container application instance, event reporting for the container application instance, synchronizing updates of ConfigMaps and Secrets, synchronizing updates of Services in the cluster, and creating the relevant ipvs rules, by connecting directly to the API server. Optionally, the container management component 17 exposes operational data externally through only one port (e.g., port 9100) and exposes no other ports or operations. Each container application instance 15 includes at least one container 18. In some embodiments, the physical machine 14 may be a server as described above, or a terminal. The terminal may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The physical machine 14 and the management cluster 11 may be connected directly or indirectly through wired or wireless communication, which is not limited in this application.
In some embodiments, the administration cluster 11 further comprises an interface Server 19, a Scheduler (Scheduler) 20 and a support component (EKS Server, elastic Kubernetes Service Server) 21. The interface server 19 is used for realizing data interaction and communication among all components in the management and control cluster 11; the scheduler 20 is configured to determine a virtual node and a target physical machine where the container application instance 15 is located; the support assembly 21 is used to create a container application instance 15. Optionally, a communication connection is established between the client and the interface server 19.
In some implementations, at least one virtual node 22 is also included in the node cluster 12, each virtual node 22 corresponding to at least one physical machine 14. In some embodiments, each virtual machine 16 further includes a container network interface (Container Network Interface) 23 and a Log Collector (Log Collector) 24, where the container network interface 23 is used to implement network communication between the container application instance 15 and the outside world, and the Log Collector 24 is used to collect a running Log of the container application instance 15.
The technical scheme of the application is described and illustrated by the following examples.
Referring to fig. 2, a flowchart of a method for creating a container application instance according to an embodiment of the present application is shown. In this embodiment, the method is applied to the container application management system described in the embodiment of fig. 1 above for illustration. Optionally, the container application management system includes a management cluster and a node cluster, the management cluster includes a virtual machine management component, and the node cluster includes a plurality of physical machines, the method may include the following steps (201-202):
In step 201, the virtual machine management component creates a virtual machine on the target physical machine, the virtual machine configured with the container management component.
Wherein the target physical machine is one of a plurality of physical machines.
In some embodiments, the virtual machine management component located in the management cluster may, via other components, create a virtual machine on one physical machine (i.e., the target physical machine) in the node cluster to provide a runtime environment for the container application instance. Optionally, the virtual machine is configured with a container management component for managing the containers running in the virtual machine.
Optionally, after the virtual machine is created, the container management component is automatically configured and run (i.e., the virtual machine image self-boots the container management component).
Optionally, the implementation and design of the virtual machine may include, but is not limited to, at least one of the following: based on the tlinux 5.4 kernel; optimized operating-system boot speed (optimizing the loading of modules such as disk encryption and random-number generation); built-in components such as log collection, GPU drivers and clients for various storage systems, enabled on demand; privileged mode enabled, allowing the user to alter kernel parameters.
In step 202, the container management component creates a container application instance in the virtual machine.
In some embodiments, the container application instance includes at least one container; a container application instance can therefore be regarded as a collection of containers. Optionally, the containers located in the same container application instance together implement the same service (e.g., the service corresponding to a particular application).
Each container application instance exclusively occupies one virtual machine, and different virtual machines are isolated from each other. Optionally, image pulling, container rootfs creation, and log collection are all carried out inside the virtual machine. That is, one virtual machine runs only one container application instance, and different container application instances run in different virtual machines, so the runtime environments/operating systems of different container application instances are independent and unrelated. Each virtual machine can therefore be regarded as a sandbox. The container management component is located in the same virtual machine/sandbox as the container application instance it creates.
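The one-instance-per-VM invariant just described can be sketched in a few lines; all identifiers here are illustrative assumptions, not names from the patent.

```python
class NodeState:
    """Sketch of the one-instance-per-VM invariant: every container
    application instance gets a dedicated VM, and no two instances
    ever share one. Names are illustrative assumptions."""

    def __init__(self):
        self.vm_of_instance = {}  # instance id -> vm id
        self._next = 0

    def create_vm_for(self, instance_id):
        # A second request for the same instance is rejected: the
        # instance already has exclusive use of its VM.
        if instance_id in self.vm_of_instance:
            raise ValueError(f"{instance_id} already has a dedicated VM")
        vm_id = f"vm-{self._next}"
        self._next += 1
        self.vm_of_instance[instance_id] = vm_id
        return vm_id

    def isolated(self, a, b):
        # Two different instances never share a VM.
        return self.vm_of_instance[a] != self.vm_of_instance[b]

node = NodeState()
node.create_vm_for("pod-a")
node.create_vm_for("pod-b")
print(node.isolated("pod-a", "pod-b"))  # True
```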
In some embodiments, the container management component checks the running state of the containers contained in the container application instance, and if it finds a target container whose running state is abnormal, it restarts that container. Running faults are thus eliminated promptly, the impact of a partially failed container on the overall operation of the container application instance is reduced, and the operating efficiency of the containers and the container application instance is improved.
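The check-and-restart step above can be sketched as a small reconciliation function; the state strings ("running", "exited") are assumed for illustration and are not taken from any real runtime API.

```python
def containers_to_restart(statuses):
    """Return the names of containers whose state is not 'running'.
    The state strings are illustrative assumptions."""
    return [name for name, state in statuses.items()
            if state != "running"]

def reconcile(statuses, restart):
    # Restart each faulty container and mark it running again,
    # leaving healthy containers untouched.
    for name in containers_to_restart(statuses):
        restart(name)
        statuses[name] = "running"
    return statuses
```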
In some embodiments, if the virtual machine management component determines, based on heartbeat information exchanged with the container management component, that the virtual machine has failed, it executes a rebuild flow for the virtual machine. Optionally, a communication connection (e.g., a gRPC remote-procedure-call connection) is established between the virtual machine management component and the container management component. In some embodiments, the virtual machine may fail, for example, due to a network fault. Optionally, the container management component periodically reports heartbeat information to the virtual machine management component; if the virtual machine management component does not receive the heartbeat information in time, or receives information that the virtual machine has faulted, the virtual machine can be considered to have failed. The virtual machine management component can then execute the rebuild flow, i.e., rebuild a virtual machine to run the corresponding container application instance. This reduces the impact of virtual machine failures on the operation of the container application instance and improves its operating efficiency.
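The heartbeat-timeout failure detection described above can be sketched as follows; the 30-second timeout is an assumed value, not one specified in the patent.

```python
class HeartbeatMonitor:
    """Sketch of heartbeat-based VM failure detection. A VM whose last
    heartbeat is older than `timeout` seconds is presumed failed and
    becomes a candidate for the rebuild flow. The 30-second default
    is an assumed value."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_seen = {}  # vm id -> timestamp of last heartbeat

    def report(self, vm_id, now):
        # Called when a heartbeat from the container management
        # component running inside the VM arrives.
        self.last_seen[vm_id] = now

    def failed_vms(self, now):
        # VMs whose heartbeat is overdue at time `now`.
        return [vm for vm, t in self.last_seen.items()
                if now - t > self.timeout]
```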
Optionally, after a virtual machine fails, the failed virtual machine is deleted; then the flow in which the virtual machine management component creates a virtual machine on the target physical machine and the container management component creates a container application instance in that virtual machine is executed again.
In some embodiments, all components associated with a container application instance run in a virtual machine.
In some embodiments, some components associated with the container application instance may run outside of the virtual machine.
In summary, in the technical solution provided by the embodiments of this application, when a container application instance needs to run, a virtual machine is created on a target physical machine by the virtual machine management component, and that one container application instance is created inside that one virtual machine. Each container application instance therefore has exclusive use of one virtual machine, and virtual machines are isolated from one another, so different container application instances are kept apart: each runs in its own virtual machine, and none affects the others. This improves the independence between different container application instances and thereby the privacy and security of each instance.
In addition, in the embodiments of this application, because each container application instance exclusively occupies one virtual machine, rebuilding a virtual machine affects only that one container application instance and no others, which improves the overall operating efficiency of the system.
Referring to fig. 3, a flowchart of a method for creating a container application instance according to another embodiment of the present application is shown. In this embodiment, the method is applied to the container application management system described in the embodiment of fig. 1 above for illustration. The method may comprise the following steps (301-311):
in step 301, an interface server receives a resource claim from a client.
Optionally, the resource declaration is used to create a container application instance. In some embodiments, a communication connection is established between the interface server and the client. When needed, the client may generate a resource declaration and send it to the interface server in order to create a container application instance.
In step 302, the scheduler obtains the resource declaration by monitoring the interface server.
In some embodiments, the scheduler may continually monitor the interface server, and thus after the interface server receives the resource declaration described above, the scheduler may obtain the resource declaration from the interface server by monitoring the interface server.
Step 303, the scheduler determines a scheduling result corresponding to the resource declaration according to the remaining computing resources of each virtual node, and sends the scheduling result to the interface server.
In some embodiments, the scheduling result is used to indicate that the container application instance is to be created on the target physical machine of the target virtual node. In some embodiments, a virtual node may be understood as a pool of computing resources: when a portion of the computing resources in the virtual node is allocated to a task/instance, that portion belongs to the occupied computing resources and cannot be used to run other tasks/instances until it is released; the unoccupied computing resources in the virtual node are referred to as the remaining computing resources of the virtual node.
In some embodiments, the scheduler may query the remaining computing resources of each virtual node and select a virtual node with sufficient remaining computing resources as the virtual node corresponding to the container application instance. Specifically, a virtual node may correspond to at least one physical machine, and the remaining computing resources of the virtual node may indicate the remaining computing resources of each physical machine corresponding to the virtual node. The scheduler may select, from the at least one physical machine, a physical machine with sufficient remaining computing resources (i.e., a physical machine whose remaining computing resources meet the operating requirements of the container application instance) as the target physical machine, where the target physical machine is configured to carry the container application instance.
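A minimal sketch of this selection step, under the assumption that each virtual node exposes per-physical-machine remaining resources (the data layout here is illustrative, not taken from the application):

```python
def pick_target(virtual_nodes, request):
    """Return (node_name, machine_name) of the first physical machine whose
    remaining computing resources satisfy the request, or None if none fits."""
    for node in virtual_nodes:
        for machine in node["machines"]:
            free = machine["remaining"]
            if free["cpu"] >= request["cpu"] and free["mem"] >= request["mem"]:
                return node["name"], machine["name"]
    return None

# pm-1 is too small for the request, so the scheduler falls through to pm-2.
nodes = [
    {"name": "vnode-a", "machines": [
        {"name": "pm-1", "remaining": {"cpu": 200, "mem": 128}},
        {"name": "pm-2", "remaining": {"cpu": 4000, "mem": 8192}},
    ]},
]
print(pick_target(nodes, {"cpu": 500, "mem": 256}))  # → ('vnode-a', 'pm-2')
```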
In some embodiments, the remaining computing resources of a virtual node are contained in the label information of the virtual node; the virtual machine management component periodically acquires the remaining computing resources of each virtual node from the support component, and adds or updates the remaining computing resources in the label information of the virtual node. Optionally, the support component continuously detects/acquires the computing resource usage of the virtual nodes, and a communication connection exists between the virtual machine management component and the support component, so that the virtual machine management component can periodically acquire the computing resource usage of each virtual node from the support component and add or update it in the label information of the virtual node.
In some embodiments, the computing resource usage of the virtual node may include an amount of computing resources already occupied in the virtual node, an amount of computing resources remaining; further, the computing resource usage of the virtual node may also indicate an amount of computing resources occupied by each physical machine corresponding to the virtual node, and an amount of remaining computing resources.
Optionally, the label information of the virtual node is stored in a storage system in the management and control cluster. In some embodiments, the storage system may be a distributed storage system (e.g., etcd system).
In some embodiments, the scheduler accesses the storage system through the interface server to obtain the tag information of the virtual node, so as to obtain the amount of occupied computing resources and the amount of remaining computing resources of each physical machine corresponding to the virtual node, thereby selecting one physical machine from the plurality of physical machines as a target physical machine, where the amount of remaining computing resources of the target physical machine needs to meet the running requirement of the container application instance.
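The periodic label refresh described above can be sketched as follows; the label keys (`occupied`, `remaining`) and the usage format are assumptions for illustration:

```python
def refresh_node_labels(node_labels: dict, usage_from_support: dict) -> dict:
    """Add or update remaining-resource entries in each virtual node's label
    information, as the virtual machine management component does each period
    after fetching usage from the support component."""
    for node, usage in usage_from_support.items():
        labels = node_labels.setdefault(node, {})
        labels["occupied"] = usage["occupied"]
        labels["remaining"] = usage["total"] - usage["occupied"]
    return node_labels

# Existing labels are preserved; resource entries are added or overwritten.
labels = {"vnode-a": {"zone": "z1"}}
usage = {"vnode-a": {"total": 8000, "occupied": 3000}}
refresh_node_labels(labels, usage)
print(labels["vnode-a"]["remaining"])  # → 5000
```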
In step 304, the virtual machine management component obtains the scheduling result by monitoring the interface server, and generates a virtual machine creation request according to the scheduling result.
Optionally, the virtual machine creation request is for requesting creation of a virtual machine on the target physical machine. In some embodiments, the virtual machine management component may obtain the scheduling result from the interface server by continuously monitoring the interface server, and generate the virtual machine creation request according to the scheduling result.
In step 305, the virtual machine management component sends a virtual machine creation request to the support component.
In step 306, the support component creates a virtual machine on the target physical machine based on the virtual machine creation request.
In some embodiments, the virtual machine creation request carries an identification of the target physical machine.
In step 307, the container management component creates a container application instance in the virtual machine.
Some of the contents of step 307 are the same as or similar to those of step 202 in the embodiment of fig. 2, and will not be described again here.
In some embodiments, the container management component obtains running information of the container application instance, and transmits the running information to the interface server through a direct communication connection established with the interface server; or sends the running information to the virtual machine management component, which forwards the running information to the interface server.
In some embodiments, a direct communication connection may be established between the container management component and the interface server, such that data interactions and communications may be directly performed between the container management component and the interface server. For example, the container management component may send the operation information directly to the interface server without forwarding by the virtual machine management component, thereby improving the operation efficiency between the container management component and the interface server.
In some embodiments, since a communication connection is established between the container management component and the virtual machine management component, and a communication connection is established between the virtual machine management component and the interface server, the container management component may send the running information to the virtual machine management component first, and then the virtual machine management component forwards the running information to the interface server.
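The two reporting paths can be sketched side by side. The lists standing in for the interface server and the virtual machine management component are placeholders; the real components and their transport are not modeled here.

```python
def report_running_info(info, interface_server, vm_manager=None, direct=True):
    """Deliver running information to the interface server either directly or
    via the virtual machine management component (the two paths above)."""
    if direct:
        # Path 1: direct communication connection to the interface server.
        interface_server.append(("direct", info))
    else:
        # Path 2: container management component -> VM management component,
        # which then forwards to the interface server.
        vm_manager.append(info)
        interface_server.append(("forwarded", info))

server, manager = [], []
report_running_info({"cpu": 0.2}, server, direct=True)
report_running_info({"cpu": 0.3}, server, manager, direct=False)
print([tag for tag, _ in server])  # → ['direct', 'forwarded']
```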
In some embodiments, the network namespace of the container application instance is isolated from the network namespace of the container management component. That is, the network namespace of the container application instance needs to be distinguished from that of the container management component, so that the network connections of the container application instance and those of the container management component do not conflict, which could otherwise affect the communication between the container management component and the virtual machine management component, or the communication between the container application instance and the client; this reduces the probability of abnormal network communication. Further, since the container application instance does not use the host network (the virtual machine's network), traffic of the control-plane components (i.e., components in the management and control cluster) is not affected when a service container changes iptables (IP packet filtering) rules; that is, service network rules are prevented from affecting the components in the management and control cluster.
In some embodiments, the container management component uses a network card of the virtual machine to establish a communication connection with the client directly, without any intermediate physical device for forwarding, thereby saving the resources and energy that forwarding would consume.
In some embodiments, the support component deletes the virtual machine in response to the container application instance ceasing to run. If the container application instance stops running, the container management component directly or indirectly sends a message to the interface server indicating that the container application instance has stopped; because the support component continuously monitors the interface server, it learns that the container application instance has stopped and deletes the virtual machine, so that unnecessary occupation of computing resources is released in a timely manner, further improving the utilization efficiency of the computing resources.
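A sketch of this cleanup behavior, with plain dicts standing in for the support component's view of instances and virtual machines (illustrative only):

```python
def reap_stopped_instances(instances: dict, vms: dict) -> dict:
    """Delete the virtual machine backing every container application instance
    that has stopped, freeing its computing resources immediately."""
    for name, state in list(instances.items()):
        if state == "stopped":
            vms.pop(name, None)   # support component deletes the VM
            del instances[name]   # forget the stopped instance
    return vms

instances = {"a": "running", "b": "stopped"}
vms = {"a": "vm-a", "b": "vm-b"}
reap_stopped_instances(instances, vms)
print(sorted(vms))  # → ['a']
```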
In step 308, the interface server obtains a data acquisition request from the client, where the data acquisition request is used to request acquisition of the target data.
In some embodiments, after the container application instance is created, the container application instance may run in a virtual machine and generate some data during the run. The client may generate a data acquisition request and send the data acquisition request to the interface server in the case where the data generated by the container application instance needs to be acquired. Optionally, the data acquisition request carries an identifier of the container application instance, and is used for requesting acquisition of the target data.
The target data may be of various types; accordingly, various types of interfaces may be implemented over the connection between the virtual machine management component and the container management component, such as logs (querying logs via the container management component), exec (executing commands in the container via the container management component), cadvisor (obtaining running information of the container), and port-forward (port forwarding).
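A dispatch over these interface types might look like the following sketch; the handler bodies are placeholders, and the request fields are assumptions for illustration:

```python
def dispatch(request: dict):
    """Route a data-acquisition request to the matching handler, mirroring the
    logs / exec / cadvisor / port-forward interfaces named above."""
    handlers = {
        "logs": lambda r: f"log lines for {r['instance']}",
        "exec": lambda r: f"ran {r['cmd']} in {r['instance']}",
        "cadvisor": lambda r: {"instance": r["instance"], "cpu": 0.0},
        "portforward": lambda r: f"forwarding port {r['port']}",
    }
    handler = handlers.get(request["type"])
    if handler is None:
        raise ValueError(f"unsupported request type: {request['type']}")
    return handler(request)

print(dispatch({"type": "logs", "instance": "demo"}))  # → log lines for demo
```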
In step 309, the virtual machine management component obtains the data acquisition request by monitoring the interface server, and sends the data acquisition request to the container management component corresponding to the data acquisition request.
In some embodiments, the interface server or the virtual machine management component may determine, according to the data acquisition request, which container application instance's data needs to be obtained, and the virtual machine management component sends the data acquisition request to the corresponding container management component.
In step 310, the container management component corresponding to the data acquisition request sends the target data to the virtual machine management component.
In step 311, the virtual machine management component sends the target data to the interface server, which forwards the target data to the client.
In summary, in the technical solution provided in the embodiments of the present application, the physical machine on which to create the container application instance is determined according to the remaining computing resources of the virtual nodes, and computing resources are occupied to create a virtual machine only when a new container application instance really needs to run; the corresponding virtual machine is deleted immediately after the container application instance stops running, which reduces situations where a virtual machine occupies computing resources without running any container application instance. In other words, in the technical scheme provided by the embodiments of the present application, virtual machines are created on demand and destroyed when no longer needed, realizing elastic capacity expansion with few or no redundant empty virtual machines and thereby improving the utilization efficiency of the virtual machines. In addition, since good isolation is achieved at the virtual machine level, the operation and maintenance burden of container users is reduced.
The following are system embodiments of the present application that may be used to perform method embodiments of the present application. For details not disclosed in the system embodiments of the present application, please refer to the method embodiments of the present application.
Referring to FIG. 4, a block diagram of a container application management system according to one embodiment of the present application is shown. The container application management system has the function of implementing the above-described examples of the method for creating a container application instance. The container application management system 400 includes a management cluster including a virtual machine management component 410 and a node cluster including a plurality of physical machines; the container application management system 400 further includes a container management component 420.
The virtual machine management component 410 is configured to create a virtual machine on a target physical machine, where the virtual machine is configured with the container management component 420, and the target physical machine is one of the plurality of physical machines.
The container management component 420 is configured to create a container application instance in the virtual machine, where the container application instance includes at least one container; wherein each container application instance monopolizes one virtual machine, and different virtual machines are isolated from each other.
In some embodiments, the node cluster further comprises a support component 430.
The virtual machine management component 410 is configured to send a virtual machine creation request to the support component 430, where the virtual machine creation request is for requesting creation of a virtual machine on the target physical machine.
The supporting component 430 is configured to create a virtual machine on the target physical machine according to the virtual machine creation request.
In some embodiments, the management cluster further includes an interface server 440 and a scheduler 450.
The interface server 440 is configured to receive a resource declaration from a client, where the resource declaration is used to create the container application instance.
The scheduler 450 is configured to obtain the resource declaration by monitoring the interface server 440; and to determine a scheduling result corresponding to the resource declaration according to the remaining computing resources of each virtual node and send the scheduling result to the interface server, where the scheduling result is used to indicate that the container application instance is to be created on the target physical machine of the target virtual node.
The virtual machine management component 410 is further configured to obtain the scheduling result by monitoring the interface server 440, and generate the virtual machine creation request according to the scheduling result.
In some embodiments, the remaining computing resources of the virtual node are contained in the label information of the virtual node; the virtual machine management component 410 is further configured to periodically obtain the remaining computing resources of each virtual node from the support component, and add or update the remaining computing resources of the virtual node in the tag information of the virtual node.
In some embodiments, the interface server 440 is further configured to obtain a data acquisition request from the client, where the data acquisition request is used to request acquisition of the target data.
The virtual machine management component 410 is further configured to obtain the data acquisition request by monitoring the interface server 440, and send the data acquisition request to the container management component 420 corresponding to the data acquisition request.
The container management component 420 corresponding to the data acquisition request is configured to send the target data to the virtual machine management component 410.
The virtual machine management component 410 is further configured to send the target data to the interface server 440, and the interface server 440 forwards the target data to the client.
In some embodiments, the support component 430 is further configured to delete the virtual machine in response to the container application instance ceasing to run.
In some embodiments, the container management component 420 is further configured to check the running state of the containers included in the container application instance, and restart a target container when a target container with an abnormal running state is detected.
In some embodiments, the container management component 420 is further configured to:
acquiring operation information of the container application instance;
transmitting the operation information to an interface server through a direct communication connection established with the interface server; or sending the running information to the virtual machine management component, and forwarding the running information to the interface server by the virtual machine management component.
In some embodiments, the virtual machine management component 410 is further configured to execute a rebuilding flow for the virtual machine when it determines, based on heartbeat information exchanged with the container management component 420, that the virtual machine has failed.
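The heartbeat check can be sketched as a simple timeout test; the 30-second default is an arbitrary illustrative value, not one specified by the application:

```python
import time

def needs_rebuild(last_heartbeat: float, now: float, timeout: float = 30.0) -> bool:
    """The virtual machine management component treats the virtual machine as
    failed when the container management component's last heartbeat is older
    than `timeout` seconds, and then triggers the rebuilding flow."""
    return (now - last_heartbeat) > timeout

now = time.time()
print(needs_rebuild(now - 5, now))   # → False
print(needs_rebuild(now - 60, now))  # → True
```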
In some embodiments, the network namespaces of the container application instances are isolated from the network namespaces of the container management component.
In summary, in the technical solution provided in the embodiments of the present application, when a container application instance needs to be run, a virtual machine is created on a target physical machine through the virtual machine management component, and the container application instance is created in that virtual machine. Each container application instance therefore monopolizes one virtual machine, and virtual machines are isolated from one another, so that different container application instances are separated: each runs in its own virtual machine and does not affect the others, thereby improving the independence between different container application instances.
In some embodiments, a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions which, when executed by a processor, implement a method of creating a container application instance executed by any one or more of the above-described principals is also provided.
Alternatively, the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random Access Memory), SSD (Solid State Drive), optical disc, or the like. The random access memory may include ReRAM (Resistive Random Access Memory) and DRAM (Dynamic Random Access Memory), among others.
In some embodiments, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the method of creating a container application instance executed by any one or more of the principals described above.
It should be understood that references herein to "a plurality" are to two or more.
The foregoing description of the exemplary embodiments of the application is not intended to limit the application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.
Claims (20)
1. A method for creating a container application instance, applied to a container application management system, wherein the container application management system comprises a management cluster and a node cluster, the management cluster comprises a virtual machine management component, and the node cluster comprises a plurality of physical machines; the method comprises the following steps:
the virtual machine management component creates a virtual machine on a target physical machine, the virtual machine is configured with a container management component, and the target physical machine is one physical machine in the plurality of physical machines;
the container management component creates a container application instance in the virtual machine, the container application instance comprising at least one container; wherein each container application instance monopolizes one virtual machine, and different virtual machines are isolated from each other.
2. The method of claim 1, wherein the virtual machine management component creates a virtual machine on the target physical machine, comprising:
The virtual machine management component sends a virtual machine creation request to the support component, wherein the virtual machine creation request is used for requesting to create a virtual machine on the target physical machine;
the support component creates a virtual machine on the target physical machine according to the virtual machine creation request.
3. The method of claim 2, wherein the management and control cluster further comprises an interface server and a scheduler; the method further comprises the steps of:
the interface server receives a resource declaration from a client, wherein the resource declaration is used for creating the container application instance;
the scheduler acquires the resource declaration by monitoring the interface server;
the scheduler determines a scheduling result corresponding to the resource declaration according to the remaining computing resources of each virtual node, and sends the scheduling result to the interface server, wherein the scheduling result is used for indicating the creation of the container application instance on a target physical machine of a target virtual node;
and the virtual machine management component acquires the scheduling result by monitoring the interface server and generates the virtual machine creation request according to the scheduling result.
4. A method according to claim 3, wherein the remaining computing resources of the virtual node are contained in the label information of the virtual node; the method further comprises the steps of:
The virtual machine management component periodically acquires the remaining computing resources of each virtual node from the support component, and adds or updates the remaining computing resources of the virtual node in the label information of the virtual node.
5. A method according to claim 3, characterized in that the method further comprises:
the interface server acquires a data acquisition request from a client, wherein the data acquisition request is used for requesting acquisition of target data;
the virtual machine management component acquires the data acquisition request by monitoring the interface server and sends the data acquisition request to a container management component corresponding to the data acquisition request;
the container management component corresponding to the data acquisition request sends the target data to the virtual machine management component;
the virtual machine management component sends the target data to the interface server, and the interface server forwards the target data to the client.
6. The method according to claim 2, wherein the method further comprises:
in response to the container application instance ceasing to run, the support component deletes the virtual machine.
7. The method according to any one of claims 1 to 6, further comprising:
the container management component checks the running state of the containers contained in the container application instance, and restarts a target container when a target container with an abnormal running state is detected.
8. The method according to any one of claims 1 to 6, further comprising:
the container management component obtains the running information of the container application instance;
the container management component sends the operation information to the interface server through a direct communication connection established between the container management component and the interface server; or sending the running information to the virtual machine management component, and forwarding the running information to the interface server by the virtual machine management component.
9. The method according to any one of claims 1 to 6, further comprising:
the virtual machine management component executes a rebuilding flow for the virtual machine when a virtual machine operation fault is determined based on heartbeat information between the virtual machine management component and the container management component.
10. The method of any of claims 1 to 6, wherein a network namespace of the container application instance is isolated from a network namespace of the container management component.
11. A container application management system, comprising a management cluster and a node cluster, wherein the management cluster comprises a virtual machine management component, and the node cluster comprises a plurality of physical machines;
the virtual machine management component is used for creating a virtual machine on a target physical machine, the virtual machine is configured with the container management component, and the target physical machine is one physical machine in the plurality of physical machines;
the container management component is used for creating a container application instance in the virtual machine, wherein the container application instance comprises at least one container; wherein each container application instance monopolizes one virtual machine, and different virtual machines are isolated from each other.
12. The system of claim 11, wherein:
the virtual machine management component is used for sending a virtual machine creation request to the supporting component, wherein the virtual machine creation request is used for requesting to create a virtual machine on the target physical machine;
the support component is used for creating a virtual machine on the target physical machine according to the virtual machine creation request.
13. The system of claim 12, wherein the management and control cluster further comprises an interface server and a scheduler;
The interface server is used for receiving a resource statement from a client, wherein the resource statement is used for creating the container application instance;
the scheduler is used for acquiring the resource declaration by monitoring the interface server; determining a scheduling result corresponding to the resource declaration according to the remaining computing resources of each virtual node, and sending the scheduling result to the interface server, wherein the scheduling result is used for indicating to create the container application instance on a target physical machine of a target virtual node;
the virtual machine management component is further configured to obtain the scheduling result by monitoring the interface server, and generate the virtual machine creation request according to the scheduling result.
14. The system of claim 13, wherein remaining computing resources of the virtual node are included in tag information of the virtual node;
the virtual machine management component is further configured to periodically acquire the remaining computing resources of each virtual node from the support component, and add or update the remaining computing resources of the virtual node in the tag information of the virtual node.
15. The system of claim 13, wherein:
The interface server is further used for acquiring a data acquisition request from the client, wherein the data acquisition request is used for requesting to acquire target data;
the virtual machine management component is further configured to acquire the data acquisition request by monitoring the interface server, and send the data acquisition request to a container management component corresponding to the data acquisition request;
the container management component corresponding to the data acquisition request is used for sending the target data to the virtual machine management component;
the virtual machine management component is further configured to send the target data to the interface server, and the interface server forwards the target data to the client.
16. The system of claim 12, wherein the support component is further configured to delete the virtual machine in response to the container application instance ceasing to run.
17. The system of any one of claims 11 to 16, wherein the container management component is further configured to check the running state of the containers included in the container application instance, and restart a target container when a target container with an abnormal running state is detected.
18. The system of any one of claims 11 to 16, wherein the container management component is further configured to:
acquiring operation information of the container application instance;
transmitting the operation information to an interface server through a direct communication connection established with the interface server; or sending the running information to the virtual machine management component, and forwarding the running information to the interface server by the virtual machine management component.
19. A computer readable storage medium having stored therein a computer program that is loaded and executed by a processor to implement a method performed by one or more of the subjects of any of the preceding claims 1 to 10.
20. A computer program product, characterized in that it comprises a computer program stored in a computer readable storage medium, from which a processor reads and executes the computer program to implement a method performed by the subject matter according to any one or more of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210399836.8A CN116954810A (en) | 2022-04-15 | 2022-04-15 | Method, system, storage medium and program product for creating container application instance |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116954810A true CN116954810A (en) | 2023-10-27 |
Family
ID=88458956
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117270916A (en) * | 2023-11-21 | 2023-12-22 | 北京凌云雀科技有限公司 | Istio-based Sidecar thermal updating method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |