CN115065730B - Data processing method, first container, electronic equipment and storage medium - Google Patents
- Publication number
- CN115065730B (publication), CN202210572930.9A / CN202210572930A (application)
- Authority
- CN
- China
- Prior art keywords
- network interface
- data packet
- address
- network
- container
- Prior art date
- Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Abstract
Embodiments of the invention, which belong to the field of computer technology, provide a data processing method, a first container, an electronic device, and a storage medium. The data processing method is applied to a first container in a first container group of a container cluster, where the first container group further includes a first network interface and a second network interface. The method includes: receiving a request data packet through the first network interface; matching a back-end server based on the request data packet, where the back-end server represents a node in the container cluster that provides an application service; modifying the network address in the request data packet based on the matched back-end server and the second network interface; and sending the modified request data packet to the back-end server through the second network interface.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method, a first container, an electronic device, and a storage medium.
Background
In Kubernetes clusters, the related art introduces north-south traffic into the cluster through Ingress, NodePort Service, or LoadBalancer Service. None of these traffic-steering approaches can determine, from the destination IP of a request data packet, the service to which that packet belongs.
Disclosure of Invention
To solve the above problem, embodiments of the present invention provide a data processing method, a first container, an electronic device, and a storage medium, so as to at least solve the problem that the related art cannot determine the service to which a data packet belongs according to the destination IP of the data packet.
The technical solution of the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a data processing method applied to a first container in a first container group of a container cluster, where the first container group further includes a first network interface and a second network interface. The method includes:
receiving a request data packet based on the first network interface;
matching a back-end server based on the request data packet, where the back-end server represents a node in the container cluster that provides an application service;
modifying a network address in the request data packet based on the matched back-end server and the second network interface;
and sending the modified request data packet to the back-end server based on the second network interface.
In the above solution, modifying the network address in the request data packet based on the matched back-end server and the second network interface includes:
modifying the source IP address in the request data packet to the IP address of the second network interface;
and modifying the destination IP address of the request data packet to the IP address of the matched back-end server.
In the above solution, after the modified request data packet is sent to the back-end server based on the second network interface, the method further includes:
receiving a response data packet from the back-end server based on the second network interface;
and modifying the response data packet, and sending the modified response data packet based on the first network interface.
In the above solution, modifying the response data packet includes:
modifying the destination IP address of the response data packet to the source IP address of the request data packet;
and modifying the source IP address of the response data packet to the destination IP address of the request data packet.
In the above solution, before the request data packet is received based on the first network interface, the method further includes:
receiving a set broadcast request;
sending the network address of the first network interface to a peer device based on the set broadcast request, so that the peer device sends the request data packet to the first network interface based on the network address, where the peer device is the sender of the set broadcast request.
In the above solution, matching the back-end server based on the request data packet includes:
matching a virtual service based on request parameters in the request data packet;
and selecting a corresponding back-end server based on the matched virtual service.
In the above solution, the first network interface and the second network interface are taken over by the first container, where:
the first network interface is created based on a network port of the node where the first container group is located; the second network interface is created based on a third network interface of the first container group; the third network interface is the network interface created for the first container group by a network plug-in of the container cluster; and the IP address of the second network interface is the IP address of the third network interface.
In a second aspect, embodiments of the present invention provide a first container comprising:
a receiving module, configured to receive a request packet based on a first network interface;
a matching module, configured to match a back-end server based on the request data packet, where the back-end server represents a node in the container cluster that provides an application service;
A modifying module, configured to modify a network address in the request packet based on the matched backend server and the second network interface;
and the sending module is used for sending the modified request data packet to the back-end server based on the second network interface.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory connected to each other, where the memory is configured to store a computer program including program instructions, and the processor is configured to invoke the program instructions to perform the steps of the data processing method provided in the first aspect of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the steps of the data processing method provided in the first aspect of the embodiments of the present invention.
Embodiments of the invention are applied to a first container in a first container group of a container cluster, where the first container group further includes a first network interface and a second network interface. A request data packet is received through the first network interface, a back-end server is matched based on the request data packet, the network address in the request data packet is modified based on the matched back-end server and the second network interface, and the modified request data packet is sent to the corresponding back-end server through the second network interface, where the back-end server represents a node in the container cluster that provides an application service. In embodiments of the invention, the first container can receive data packets with different destination IPs through the first network interface, distinguish the service to which a request data packet belongs according to its destination IP, and then send the data packet through the second network interface to the back-end server corresponding to that service. Embodiments of the invention do not need to distinguish services by port, which better suits services such as HTTP and HTTPS that normally use fixed ports.
Drawings
FIG. 1 is a schematic diagram of a container cluster according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an implementation flow of a data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an implementation flow of another data processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an implementation flow of another data processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an implementation flow of another data processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the data path of north-south traffic according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of a first container provided in accordance with an embodiment of the present invention;
Fig. 8 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Kubernetes (K8s) is an exemplary open-source container cluster management system for automatically deploying, scaling, and managing containerized applications. In a Kubernetes cluster, the Pod (container group) is the basis for all workload types and the smallest unit managed by K8s; it is a combination of one or more containers. One or more containers may run inside a Pod, and these containers share the same Pod network environment. K8s uses the Pod as the smallest unit for scheduling, scaling, resource sharing, and lifecycle management.
Current schemes for introducing north-south traffic into a Kubernetes cluster mainly include Ingress, NodePort Service, and LoadBalancer Service. Traffic between clients and servers is north-south traffic, i.e., client-server traffic. Ingress is a seven-layer virtual service that introduces network traffic into the Kubernetes cluster network according to Host and Path. NodePort Service is a four-layer virtual service that introduces network traffic into the Kubernetes cluster network according to Port. LoadBalancer Service is a four-layer virtual service, based on NodePort Service and an external hardware load balancer, that introduces network traffic into the Kubernetes cluster network according to an external IP.
A virtual service is an abstraction of an application service. It provides a unified ingress Internet Protocol (IP) address, port, Host, and Path for an application server cluster, and schedules client requests to appropriate back-end servers according to a load-balancing algorithm, thereby enabling high availability and horizontal scaling of the back-end services. Virtual services are generally classified into four-layer virtual services, which schedule requests according to packet information at and below layer four of the Open Systems Interconnection (OSI) model, and seven-layer virtual services, which schedule requests according to packet information at and below layer seven of the OSI model.
Disadvantages of NodePort Service: it occupies precious host port resources, the number of configurable services is limited by the number of ports, and requests generally need to be forwarded across multiple hops (for example, a request hitting the Service on Node A may be scheduled to Node B), which yields poor performance and makes troubleshooting difficult (an overlong traffic path not only reduces network performance but also increases network complexity and troubleshooting difficulty). Second, classic HTTP/HTTPS/FTP services typically use fixed ports such as 80 and 443, but publishing such services through NodePort Service requires avoiding these common ports; for example, NodePort Service uses ports in the range 30000 to 50000, which is unfriendly to clients.
Disadvantages of LoadBalancer Service: because LoadBalancer Service is implemented on top of NodePort Service, it suffers from the same port-number limitations and performance issues.
Disadvantages of Ingress: the traditional Ingress is implemented based on Nginx, an open-source reverse proxy server deployed in the Kubernetes cluster in the form of a Pod. Traffic is generally led into the Nginx Pod through NodePort or LoadBalancer Service, so the destination IP of a network data packet arriving at the Nginx Pod is necessarily the Nginx Pod IP. Nginx therefore cannot determine the service to which a network data packet belongs according to the packet's destination IP.
As can be seen, the traditional Ingress introduces traffic into the cluster through NodePort or LoadBalancer Service and then schedules it according to application-layer information such as Host and Path. This traffic-steering approach cannot determine the service to which a data packet belongs according to the packet's destination IP, so traditional services that do not support domain names are difficult to deploy into Kubernetes. It is also affected by the port-management difficulties and performance problems of NodePort and LoadBalancer Service.
In view of the above drawbacks of the related art, an embodiment of the present invention provides a data processing method that can receive request data packets with different destination IPs and distinguish the service to which each request data packet belongs according to its destination IP. To illustrate the technical solution of the invention, specific embodiments are described below.
FIG. 1 is a schematic diagram of a container cluster according to an embodiment of the present invention. The Node in FIG. 1 is a node in the container cluster, and the node contains a container group (Pod). In the related art, a Pod contains only one eth0 network interface; eth0 is a Veth interface created for the Pod by a Kubernetes CNI (Container Network Interface) plug-in, and Veth is a virtual network device.
As shown in FIG. 1, the embodiment of the present invention creates a data plane (dataplane), an eth1 (first network interface), and an eth2 (second network interface) in a Pod on a node of the container cluster. The data plane is deployed in the container cluster in the form of a container, that is, the first container in the embodiment of the present invention; eth1 and eth2 are taken over by the data plane process.
In practical applications, dataplane is a software load-balancing data plane supporting the Intel DPDK technology. DPDK (Data Plane Development Kit) is a set of data plane development libraries provided by Intel; a DPDK application runs in user space and uses the data plane libraries provided by DPDK to send and receive data packets.
Eth1 may be a MacVTap interface running in VEPA mode, created based on a network port (Nic) in the default network namespace and then added to the network namespace in which the data plane process resides. Nic is one of the ports in the default network namespace of the Kubernetes worker node. MacVTap is a Linux kernel device driver aimed at simplifying virtualized bridge networking; it is essentially a combination of the Macvlan driver and a Tap device, and is commonly used to provide virtual network devices for virtual machines in a virtualized environment instead of Linux Bridge + Tap.
Eth2 may be a MacVTap interface running in Passthru mode; eth2 is created based on eth0 and is located in the same network namespace as the data plane process.
An externally reachable IP address, IP1, is configured on eth1 for receiving external network traffic. The Pod IP address (IP2) on eth0 is moved to eth2 for accessing the Pod network. If deployed in a virtualized environment, the underlying virtualization platform is required to turn on network promiscuous mode, ensuring that packets whose destination media access control (MAC) address is that of eth1 are not discarded by the virtualization platform.
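Purely as an illustration of the interface layout described above, the following Python sketch drives standard iproute2 commands to create eth1 (MacVTap in VEPA mode on a host port) and eth2 (MacVTap in Passthru mode on eth0), and to move the Pod IP onto eth2. The namespace name, host port name, and the IP1/IP2 values are hypothetical placeholders rather than values from the embodiment; in practice the data plane process would typically open these MacVTap devices through DPDK instead of the kernel network stack.

```python
# Minimal sketch (not from the patent): prepare eth1 (MacVTap, VEPA mode, on the
# host port) and eth2 (MacVTap, Passthru mode, on eth0), then move the Pod IP to
# eth2. All names and addresses below are illustrative assumptions.
import subprocess

POD_NETNS = "pod-netns"    # hypothetical name for the Pod's network namespace
HOST_PORT = "ens1"         # hypothetical host NIC in the default namespace
IP1 = "10.0.0.100/24"      # externally reachable address configured on eth1
IP2 = "172.16.1.5/24"      # Pod IP originally carried by eth0

def sh(*args: str) -> None:
    """Run one command and fail loudly so a broken step is visible."""
    subprocess.run(args, check=True)

# eth1: MacVTap in VEPA mode on the host port, moved into the Pod's namespace.
sh("ip", "link", "add", "link", HOST_PORT, "name", "eth1", "type", "macvtap", "mode", "vepa")
sh("ip", "link", "set", "eth1", "netns", POD_NETNS)
sh("ip", "netns", "exec", POD_NETNS, "ip", "addr", "add", IP1, "dev", "eth1")
sh("ip", "netns", "exec", POD_NETNS, "ip", "link", "set", "eth1", "up")

# eth2: MacVTap in Passthru mode on eth0, created inside the Pod's namespace;
# the Pod IP (IP2) is removed from eth0 and re-added on eth2 so that eth2 can
# reach the Pod network.
sh("ip", "netns", "exec", POD_NETNS, "ip", "link", "add", "link", "eth0",
   "name", "eth2", "type", "macvtap", "mode", "passthru")
sh("ip", "netns", "exec", POD_NETNS, "ip", "addr", "del", IP2, "dev", "eth0")
sh("ip", "netns", "exec", POD_NETNS, "ip", "addr", "add", IP2, "dev", "eth2")
sh("ip", "netns", "exec", POD_NETNS, "ip", "link", "set", "eth2", "up")
```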
FIG. 2 is a schematic flowchart of an implementation of a data processing method according to an embodiment of the present invention. The data processing method is applied to the first container in a first container group of a container cluster, that is, the data plane in FIG. 1. The physical executing entity of the data processing method is the node of the container cluster on which the first container group is located, and the node may be an electronic device such as a desktop computer or a notebook computer. Referring to FIG. 2, the data processing method includes:
S201, receiving a request data packet based on the first network interface.
Here, the first container corresponds to the data plane in fig. 1, and the first container group further includes a first network interface corresponding to eth1 in fig. 1 and a second network interface corresponding to eth2 in fig. 1.
In an embodiment, the first network interface and the second network interface are taken over by the first container; the first network interface is created based on a network port of the node where the first container group is located; the second network interface is created based on a third network interface of the first container group; the third network interface is the network interface created for the first container group by a network plug-in of the container cluster; and the IP address of the second network interface is the IP address of the third network interface.
The first container receives external network requests through the first network interface, which is created based on a network port (Nic) in the default network namespace and then added to the network namespace in which the data plane process resides. The first network interface is configured with an externally reachable IP address for receiving external network traffic.
The third network interface corresponds to eth0 in FIG. 1; the Pod IP address carried on eth0 is moved to the second network interface, which is used for accessing the Pod network.
In practical applications, the first network interface and the second network interface may be MacVTap interfaces. MacVTap is a Linux kernel device driver for simplifying virtualized bridge networking; it is essentially a combination of the Macvlan driver and a Tap device, and is generally used to provide virtual network devices for virtual machines in a virtualized environment instead of Linux Bridge + Tap. The first container is a software load-balancing data plane supporting the Intel DPDK technology, and it takes over the two MacVTap high-performance virtual network ports through DPDK. Using MacVTap high-performance virtual network ports avoids the performance impact and port-management difficulty of NodePort/LoadBalancer Service, shortens the path by which north-south traffic enters the cluster, improves performance, and reduces troubleshooting difficulty.
In an embodiment, before receiving the request packet based on the first network interface, the method further comprises:
S301, receiving a set broadcast request.
Here, the set broadcast request is sent by an upstream network device, i.e., a device outside the container cluster that serves clients. The set broadcast request is used to obtain the network address of the first network interface.
Specifically, the upstream network device sends an Address Resolution Protocol (ARP) broadcast request to obtain the MAC address of the first network interface. ARP is used to implement the mapping from an IP address to a MAC address, i.e., to query the MAC address corresponding to a target IP.
S302, sending the network address of the first network interface to a peer device based on the set broadcast request, so that the peer device sends a request data packet to the first network interface based on the network address; the peer device is the sender of the set broadcast request.
Here, the peer device is the upstream network device. In response to the set broadcast request, the first container provides the network address of the first network interface (the MAC address of eth1) to the upstream network device. The upstream network device can then send network traffic to the first network interface over the layer-2 network.
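For illustration only, the ARP reply that advertises the MAC address of eth1 can be assembled with nothing more than Python's struct module, as in the sketch below. The MAC and IP values are invented examples; a real data plane would read them from the configuration of eth1 and from the received request.

```python
# Minimal sketch: build a raw Ethernet + ARP reply frame that answers a
# "who-has IP1" broadcast with eth1's MAC address. Addresses are hypothetical.
import struct

def mac_bytes(mac: str) -> bytes:
    return bytes(int(part, 16) for part in mac.split(":"))

def ip_bytes(ip: str) -> bytes:
    return bytes(int(octet) for octet in ip.split("."))

def build_arp_reply(eth1_mac: str, eth1_ip: str, asker_mac: str, asker_ip: str) -> bytes:
    # Ethernet header: destination (the asker), source (eth1), EtherType 0x0806 = ARP.
    eth_header = mac_bytes(asker_mac) + mac_bytes(eth1_mac) + struct.pack("!H", 0x0806)
    arp_body = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                # hardware type: Ethernet
        0x0800,           # protocol type: IPv4
        6, 4,             # hardware / protocol address lengths
        2,                # opcode 2 = ARP reply
        mac_bytes(eth1_mac), ip_bytes(eth1_ip),    # sender: eth1 / IP1
        mac_bytes(asker_mac), ip_bytes(asker_ip),  # target: the upstream device
    )
    return eth_header + arp_body

# Example values only.
frame = build_arp_reply("02:00:00:aa:bb:01", "10.0.0.100",
                        "02:00:00:cc:dd:02", "10.0.0.1")
assert len(frame) == 14 + 28   # Ethernet header + ARP payload
```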
In the embodiment of the invention, the request data packets may be packets with different destination IPs, for example HTTP/HTTPS/FTP packets. The embodiment of the invention supports pure-IP services, i.e., traditional services that do not support domain names.
S202, matching a back-end server based on the request data packet; the back-end server represents a node in the container cluster that provides an application service.
After receiving the request data packet through the first network interface, the first container matches the back-end server according to the request data packet.
Referring to FIG. 4, in an embodiment, matching the back-end server based on the request data packet includes:
S401, matching virtual services based on the request parameters in the request data packet.
S402, selecting a corresponding back-end server based on the matched virtual service.
Here, a virtual service refers to an application service provided by the container cluster. The request parameters include information such as the destination IP, destination port, Host, and Path in the request data packet, and the virtual service is matched according to this information. One virtual service may correspond to multiple back-end servers, and an appropriate back-end server is selected according to a scheduling algorithm; for example, a back-end server with a low load may be selected according to the load of each back-end server.
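The following minimal sketch illustrates this matching and scheduling step, assuming a simple in-memory table of virtual services and a least-load policy; the class and field names (VirtualService, Backend, active_connections) are illustrative assumptions and are not defined by the embodiment.

```python
# Minimal sketch of virtual-service matching by destination IP/port/Host/Path,
# followed by least-load backend selection. Names and fields are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Backend:
    ip: str
    active_connections: int = 0      # stand-in for a real load metric

@dataclass
class VirtualService:
    name: str
    vip: str                         # destination IP the service is published on
    port: Optional[int] = None       # None = any port (pure-IP service)
    host: Optional[str] = None       # optional layer-7 criteria
    path: Optional[str] = None
    backends: list[Backend] = field(default_factory=list)

def match_virtual_service(services: list[VirtualService], dst_ip: str, dst_port: int,
                          host: Optional[str] = None,
                          path: Optional[str] = None) -> Optional[VirtualService]:
    """Return the first virtual service whose configured criteria all match."""
    for vs in services:
        if vs.vip != dst_ip:
            continue
        if vs.port is not None and vs.port != dst_port:
            continue
        if vs.host is not None and vs.host != host:
            continue
        if vs.path is not None and not (path or "").startswith(vs.path):
            continue
        return vs
    return None

def pick_backend(vs: VirtualService) -> Backend:
    """Least-load scheduling: choose the backend with the fewest active connections."""
    return min(vs.backends, key=lambda backend: backend.active_connections)
```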
S203, modifying the network address in the request data packet based on the matched back-end server and the second network interface.
In an embodiment, modifying the network address in the request data packet based on the matched back-end server and the second network interface includes:
modifying the source IP address in the request data packet to the IP address of the second network interface;
and modifying the destination IP address of the request data packet to the IP address of the matched back-end server.
Specifically, the source IP of the request data packet is modified to the IP address of eth2 and the destination IP is modified to the IP address of the corresponding back-end server, after which the request data packet is sent out through eth2 over the Pod network.
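A minimal sketch of this rewrite step is shown below, operating on a toy packet model rather than real packet buffers; the Packet type and the example eth2 address are assumptions introduced only for illustration. The original (source, destination) pair is returned so that it can be recorded for the reverse rewrite of the response.

```python
# Minimal sketch of the address rewrite in S203: the source IP becomes eth2's IP
# and the destination IP becomes the selected backend's IP (full NAT). The packet
# model and the example address are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    payload: bytes = b""

ETH2_IP = "172.16.1.5"    # hypothetical IP2 carried by eth2

def rewrite_request(pkt: Packet, backend_ip: str,
                    eth2_ip: str = ETH2_IP) -> tuple[Packet, tuple[str, str]]:
    """Rewrite the request and return it together with the original (src, dst)
    pair, which must be remembered (connection tracking) so the response can be
    rewritten back before it leaves through eth1."""
    original = (pkt.src_ip, pkt.dst_ip)
    rewritten = Packet(src_ip=eth2_ip, dst_ip=backend_ip, payload=pkt.payload)
    return rewritten, original
```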
S204, the modified request data packet is sent to the back-end server based on the second network interface.
The second network interface connects to the Pod network, so the modified request data packet is sent to the corresponding back-end server through the second network interface.
The embodiment of the invention is applied to a first container in a first container group of a container cluster, where the first container group further includes a first network interface and a second network interface. A request data packet is received through the first network interface, a back-end server is matched based on the request data packet, the network address in the request data packet is modified based on the matched back-end server and the second network interface, and the modified request data packet is sent to the corresponding back-end server through the second network interface, where the back-end server represents a node in the container cluster that provides an application service. In the embodiment of the invention, the first container can receive data packets with different destination IPs through the first network interface, distinguish the service to which a request data packet belongs according to its destination IP, and then send the data packet through the second network interface to the back-end server corresponding to that service. The embodiment of the invention does not need to distinguish services by port, which better suits services such as HTTP and HTTPS that normally use fixed ports.
Referring to fig. 5, in an embodiment, after the modified request packet is sent to the backend server based on the second network interface, the method further comprises:
S501, receiving a response data packet from the back-end server based on the second network interface.
After receiving the request data packet, the back-end server processes it and, when processing is complete, sends a response data packet to the second network interface through the Pod network.
S502, modifying the response data packet, and sending the modified response data packet based on the first network interface.
The first container receives, through the second network interface, the response data packet sent by the back-end server, matches the connection-tracking information, and returns the response data packet to the upstream network device through the first network interface.
In an embodiment, modifying the response data packet includes:
modifying the destination IP address of the response data packet to the source IP address of the request data packet;
and modifying the source IP address of the response data packet to the destination IP address of the request data packet.
After the connection-tracking entry is matched, the destination IP address and source IP address of the response data packet are modified to the source IP address and destination IP address of the request data packet, respectively, which ensures that the response data packet can be returned to the client correctly.
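The response-side rewrite can be sketched as a lookup in a connection-tracking table that was filled when the request was forwarded, as shown below; the table layout and all addresses are illustrative assumptions, and a real implementation would also key on ports and protocol.

```python
# Minimal sketch of S501/S502: match the response against the connection-tracking
# entry recorded when the request was forwarded, then restore the client-facing
# addresses. Keys and addresses below are illustrative assumptions only.
from typing import Optional

# Maps (backend_ip, eth2_ip) -> (client_ip, vip) recorded when the request left eth2.
conntrack: dict[tuple[str, str], tuple[str, str]] = {}

def track_request(client_ip: str, vip: str, eth2_ip: str, backend_ip: str) -> None:
    """Record the original client/VIP pair when the rewritten request is forwarded."""
    conntrack[(backend_ip, eth2_ip)] = (client_ip, vip)

def rewrite_response(src_ip: str, dst_ip: str) -> Optional[tuple[str, str]]:
    """For a response arriving on eth2 (source = backend, destination = eth2),
    restore the original addresses: the destination becomes the client's IP (the
    request's source) and the source becomes the VIP (the request's destination)."""
    entry = conntrack.get((src_ip, dst_ip))
    if entry is None:
        return None          # no tracked connection: drop or handle separately
    client_ip, vip = entry
    return vip, client_ip    # (new source, new destination) for the response

# Example with hypothetical addresses: request 10.0.0.1 -> 192.168.0.1 was
# forwarded as 172.16.1.5 -> 172.16.2.9; the response 172.16.2.9 -> 172.16.1.5
# is rewritten back to 192.168.0.1 -> 10.0.0.1 before being sent out via eth1.
track_request("10.0.0.1", "192.168.0.1", "172.16.1.5", "172.16.2.9")
assert rewrite_response("172.16.2.9", "172.16.1.5") == ("192.168.0.1", "10.0.0.1")
```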
Referring to FIG. 6, FIG. 6 is a schematic diagram of the data path of north-south traffic according to an embodiment of the present invention. The upstream network device sends an ARP broadcast request to resolve the MAC address corresponding to IP1. The data plane process receives the ARP broadcast request through eth1 and, in response to the ARP request, provides the MAC address of eth1 to the upstream network device.
The upstream network device then sends a network request to eth1 over the layer-2 network. The data plane process receives the network request through eth1, matches a virtual service according to information such as the destination IP, destination port, Host, and Path in the request data packet, and selects an appropriate back-end server according to a scheduling algorithm. The source IP of the request data packet is modified to IP2 and the destination IP is modified to the IP of the selected back-end server, after which the request data packet is sent out through eth2 over the Pod network.
After receiving the request data packet, the back-end server processes it and, when processing is complete, sends the response data packet to eth2 through the Pod network. After receiving the response data packet through eth2, the data plane process matches the connection-tracking information and returns the response data packet to the upstream network device through eth1. Once the connection-tracking entry is matched, the destination IP and source IP of the response packet are modified to the source IP and destination IP of the request packet, respectively, ensuring that the response packet can be returned to the client correctly.
The embodiment of the invention can distinguish different service traffic according to the destination IP of the request data packet. For example, assume there are two virtual services, VS1 and VS2, representing two pure-IP services with VIPs 192.168.0.1 and 192.168.0.2, respectively. The upstream network device assigns an externally reachable floating IP, EIP1, for VS1 and an externally reachable floating IP, EIP2, for VS2.
When a client accesses EIP1, the upstream network device maps EIP1 to 192.168.0.1 (changing the destination IP of the packet to 192.168.0.1 by DNAT) and routes the packet to IP1 (which can be accomplished by configuring static routes). The data plane process receives the request packet, successfully matches it to VS1 according to the destination IP (192.168.0.1), and forwards the packet to the corresponding back-end server.
When a client accesses EIP2, the upstream network device maps EIP2 to 192.168.0.2 (changing the destination IP of the packet to 192.168.0.2 by DNAT) and routes the packet to IP1 (which can be accomplished by configuring static routes). The data plane process receives the request packet, successfully matches it to VS2 according to the destination IP (192.168.0.2), and forwards the packet to the corresponding back-end server.
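Reduced to its essence, the VS1/VS2 example above is a dispatch keyed on the destination IP once the upstream DNAT has translated EIP1/EIP2 to the VIPs; the tiny sketch below shows that lookup, with made-up back-end addresses.

```python
# Tiny sketch of the VS1/VS2 example: after the upstream DNAT, the data plane only
# needs a lookup on the destination IP. Backend addresses are made-up examples.
VIP_TABLE = {
    "192.168.0.1": {"service": "VS1", "backends": ["172.16.2.11", "172.16.2.12"]},
    "192.168.0.2": {"service": "VS2", "backends": ["172.16.3.21"]},
}

def dispatch(dst_ip: str) -> str:
    """Return the name of the virtual service a packet belongs to, by destination IP."""
    entry = VIP_TABLE.get(dst_ip)
    if entry is None:
        raise LookupError(f"no virtual service published on {dst_ip}")
    return entry["service"]

assert dispatch("192.168.0.1") == "VS1"   # packet that arrived via EIP1
assert dispatch("192.168.0.2") == "VS2"   # packet that arrived via EIP2
```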
The embodiment of the invention can publish pure-IP services in a Kubernetes environment; it can receive data packets with different destination IPs and distinguish the service to which each data packet belongs according to its destination IP. It is not affected by the port-management difficulty and performance problems of NodePort/LoadBalancer Service, does not need to distinguish services by port, and is better suited to services such as HTTP/HTTPS that normally use fixed ports.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and shall not limit the implementation of the embodiments of the present invention in any way.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The technical solutions described in the embodiments of the present invention may be combined arbitrarily, provided that no conflict arises.
In addition, in the embodiments of the present invention, "first", "second", etc. are used to distinguish similar objects and are not necessarily used to describe a particular order or precedence.
Referring to FIG. 7, FIG. 7 is a schematic diagram of a first container according to an embodiment of the present invention. As shown in FIG. 7, the first container includes a receiving module, a matching module, a modifying module, and a sending module.
A receiving module, configured to receive a request packet based on a first network interface;
a matching module, configured to match a back-end server based on the request data packet, where the back-end server represents a node in the container cluster that provides an application service;
A modifying module, configured to modify a network address in the request packet based on the matched backend server and the second network interface;
and the sending module is used for sending the modified request data packet to the back-end server based on the second network interface.
In an embodiment, the modifying module modifies the network address in the request data packet based on the matched back-end server and the second network interface by:
modifying the source IP address in the request data packet to the IP address of the second network interface;
and modifying the destination IP address of the request data packet to the IP address of the matched back-end server.
In an embodiment, the first container further includes:
a second receiving module, configured to receive the response data packet from the back-end server based on the second network interface;
and the modifying module is further configured to modify the response data packet and send the modified response data packet based on the first network interface.
In an embodiment, the modifying module modifies the response data packet by:
modifying the destination IP address of the response data packet to the source IP address of the request data packet;
and modifying the source IP address of the response data packet to the destination IP address of the request data packet.
In an embodiment, the receiving module is further configured to receive a set broadcast request;
and a second sending module is configured to send the network address of the first network interface to a peer device based on the set broadcast request, so that the peer device sends a request data packet to the first network interface based on the network address; the peer device is the sender of the set broadcast request.
In an embodiment, the matching module matches the back-end server based on the request data packet by:
matching a virtual service based on request parameters in the request data packet;
and selecting a corresponding back-end server based on the matched virtual service.
In an embodiment, the first network interface and the second network interface are taken over by the first container; the first network interface is created based on the network port of the node where the first container group is located; the second network interface is created based on a third network interface of the first container group; the third network interface characterizes a network interface created by a network plug-in of the container cluster for the first container group; the IP address of the second network interface is characterized as the IP address of the third network interface.
In practice, the receiving module, the matching module, the modifying module, and the sending module may be implemented by a processor in the electronic device, such as a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA).
It should be noted that: in the data processing of the first container provided in the foregoing embodiment, only the division of the modules is illustrated, and in practical application, the processing allocation may be performed by different modules according to needs, that is, the internal structure of the first container is divided into different modules, so as to complete all or part of the processing described above. In addition, the first container provided in the above embodiment and the data processing method embodiment belong to the same concept, and specific implementation processes of the first container are detailed in the method embodiment, which is not described herein again.
Based on the hardware implementation of the foregoing program modules, and in order to implement the method of the embodiments of the present application, an embodiment of the present application further provides an electronic device. FIG. 8 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application. As shown in FIG. 8, the electronic device includes:
a communication interface, capable of exchanging information with other devices such as network devices;
and a processor, connected to the communication interface to exchange information with other devices, and configured to execute, when running a computer program, the method provided by one or more of the foregoing technical solutions on the electronic device side. The computer program is stored in the memory.
Of course, in practice, the components of the electronic device are coupled together through a bus system. It can be understood that the bus system is used to implement connection and communication between these components. In addition to a data bus, the bus system includes a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are all labeled as the bus system in FIG. 8.
The memory in the embodiments of the present application is used to store various types of data to support the operation of the electronic device. Examples of such data include: any computer program for operating on an electronic device.
It can be understood that the memory may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferroelectric random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as a static random access memory (SRAM), a synchronous static random access memory (SSRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synclink dynamic random access memory (SLDRAM), and a direct Rambus random access memory (DRRAM). The memory described in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
The methods disclosed in the embodiments of the present application may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium is located in the memory, and the processor reads the program in the memory and completes the steps of the foregoing methods in combination with its hardware.
Optionally, when the processor executes the program, a corresponding flow implemented by the electronic device in each method of the embodiment of the present application is implemented, and for brevity, will not be described herein.
In an exemplary embodiment, the present application further provides a storage medium, i.e., a computer storage medium, specifically a computer-readable storage medium, for example including a memory storing a computer program. The computer program can be executed by a processor of the electronic device to complete the steps of the foregoing methods. The computer-readable storage medium may be an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM.
In the several embodiments provided in the present application, it should be understood that the disclosed first container, electronic device, and method may be implemented in other manners. The device embodiments described above are only illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer readable storage medium, and when executed, the program performs steps including the above method embodiments; and the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
Alternatively, when the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The technical solutions described in the embodiments of the present application may be combined arbitrarily, provided that no conflict arises.
In addition, in the present examples, "first," "second," etc. are used to distinguish similar objects and not necessarily to describe a particular order or sequence.
The foregoing is merely a description of specific implementations of the present application, and the protection scope of the present application is not limited thereto. Any variation or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed by the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (8)
1. A data processing method, applied to a first container in a first container group of a container cluster, the first container group further comprising a first network interface and a second network interface, the method comprising:
receiving a request data packet based on the first network interface, wherein the first network interface is configured with an externally reachable IP address for receiving external network traffic;
matching a back-end server based on the request data packet, wherein the back-end server represents a node in the container cluster that provides an application service;
modifying a network address in the request data packet based on the matched back-end server and the second network interface;
and sending the modified request data packet to the back-end server based on the second network interface;
wherein matching the back-end server based on the request data packet comprises:
matching a virtual service based on request parameters in the request data packet;
and selecting a corresponding back-end server based on the matched virtual service;
wherein the first network interface and the second network interface are taken over by the first container; the first network interface is created based on a network port of the node where the first container group is located; the second network interface is created based on a third network interface of the first container group; the third network interface is the network interface created for the first container group by a network plug-in of the container cluster; and the IP address of the second network interface is the IP address of the third network interface.
2. The method according to claim 1, wherein modifying the network address in the request data packet based on the matched back-end server and the second network interface comprises:
modifying the source IP address in the request data packet to the IP address of the second network interface;
and modifying the destination IP address of the request data packet to the IP address of the matched back-end server.
3. The method according to claim 1, wherein after the modified request data packet is sent to the back-end server based on the second network interface, the method further comprises:
receiving a response data packet from the back-end server based on the second network interface;
and modifying the response data packet, and sending the modified response data packet based on the first network interface.
4. The method according to claim 3, wherein modifying the response data packet comprises:
modifying the destination IP address of the response data packet to the source IP address of the request data packet;
and modifying the source IP address of the response data packet to the destination IP address of the request data packet.
5. The method according to claim 1, wherein before the request data packet is received based on the first network interface, the method further comprises:
receiving a set broadcast request;
and sending the network address of the first network interface to a peer device based on the set broadcast request, so that the peer device sends the request data packet to the first network interface based on the network address, wherein the peer device is the sender of the set broadcast request.
6. A first container, comprising:
a receiving module, configured to receive a request data packet based on a first network interface, wherein the first network interface is configured with an externally reachable IP address for receiving external network traffic;
a matching module, configured to match a back-end server based on the request data packet, wherein the back-end server represents a node in the container cluster that provides an application service, and specifically configured to match a virtual service based on request parameters in the request data packet and to select a corresponding back-end server based on the matched virtual service;
a modifying module, configured to modify a network address in the request data packet based on the matched back-end server and a second network interface;
and a sending module, configured to send the modified request data packet to the back-end server based on the second network interface;
wherein the first network interface and the second network interface are taken over by the first container; the first network interface is created based on a network port of the node where the first container group is located; the second network interface is created based on a third network interface of the first container group; the third network interface is the network interface created for the first container group by a network plug-in of the container cluster; and the IP address of the second network interface is the IP address of the third network interface.
7. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the data processing method according to any of claims 1 to 5 when executing the computer program.
8. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the data processing method according to any of claims 1 to 5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210572930.9A | 2022-05-24 | 2022-05-24 | Data processing method, first container, electronic equipment and storage medium |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210572930.9A | 2022-05-24 | 2022-05-24 | Data processing method, first container, electronic equipment and storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN115065730A | 2022-09-16 |
| CN115065730B | 2024-07-09 |

Family

ID=83198477

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210572930.9A (Active) | Data processing method, first container, electronic equipment and storage medium | 2022-05-24 | 2022-05-24 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN115065730B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107278360A (en) * | 2017-06-16 | 2017-10-20 | 唐全德 | A kind of system for realizing network interconnection, method and device |
CN113783922A (en) * | 2021-03-26 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Load balancing method, system and device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107026890B (en) * | 2016-02-02 | 2020-10-09 | 华为技术有限公司 | Message generation method based on server cluster and load balancer |
US10791088B1 (en) * | 2016-06-17 | 2020-09-29 | F5 Networks, Inc. | Methods for disaggregating subscribers via DHCP address translation and devices thereof |
CN111866064B (en) * | 2016-12-29 | 2021-12-28 | 华为技术有限公司 | Load balancing method, device and system |
US10841226B2 (en) * | 2019-03-29 | 2020-11-17 | Juniper Networks, Inc. | Configuring service load balancers with specified backend virtual networks |
CN110971698B (en) * | 2019-12-09 | 2022-04-22 | 北京奇艺世纪科技有限公司 | Data forwarding system, method and device |
CN111797341B (en) * | 2020-06-22 | 2023-04-18 | 电子科技大学 | Programmable switch-based in-network caching method |
CN113676564B (en) * | 2021-09-28 | 2022-11-22 | 深信服科技股份有限公司 | Data transmission method, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN115065730A (en) | 2022-09-16 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |