
CN114338524A - Method and system for improving large-scale container cloud cluster network Service performance - Google Patents


Info

Publication number
CN114338524A
Authority
CN
China
Prior art keywords
network
mode
ipvs
access
container
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111586161.XA
Other languages
Chinese (zh)
Inventor
姚昊赟
张勇
石光银
蔡卫卫
高传集
孙思清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority to CN202111586161.XA priority Critical patent/CN114338524A/en
Publication of CN114338524A publication Critical patent/CN114338524A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method and a system for improving the performance of large-scale container cloud cluster network Services, relating to the technical field of container networks, and comprising the following steps: using the Kubernetes network plug-in to change the default IPinIP mode to BGP mode, so that network packets are routed directly to the nodes during container network packet forwarding; when Kubernetes uses a Service to access a container application, switching the network proxy component from its default iptables mode to IPVS mode, whereupon the network proxy component generates a unified rule for all Services based on the IPVS mode, looks up the IP in the IPSet through a hash algorithm, and load-balances the IP to the corresponding PodIP through IPVS; custom-writing eBPF programs that run in the Linux kernel and loading them to different hook points, where the eBPF programs use a map or a ring buffer to store network matching rules, and when a network request for a client ClusterIP Service comes in, the eBPF program looks up the corresponding back-end pod address in the map or the ring buffer for forwarding. The invention can solve the problem of slow rule matching at scale for large-scale cluster network Services.

Description

Method and system for improving large-scale container cloud cluster network Service performance
Technical Field
The invention relates to the technical field of container networks, in particular to a method and a system for improving the performance of a large-scale container cloud cluster network Service.
Background
With the large-scale adoption of container cloud platforms, traditional applications have begun moving to the cloud, using container technology to deliver applications as microservices. As the number of applications supported by a container cloud platform grows, the number of nodes managed by the platform also increases, and reducing container network latency and loss is an important performance requirement the platform must meet.
On one hand, the container network plug-in used by Kubernetes defaults to IPinIP mode, which encapsulates and decapsulates network packets over the underlying network and supports cross-subnet communication for the container network; when the cluster scales to thousands of nodes, every node must encapsulate and decapsulate packets, which greatly increases network latency.
On the other hand, in a Kubernetes cluster, a Service is generally used to access a container application. The network proxy supports iptables mode by default and uses iptables to implement the NAT rules from the Service IP to the container IP. When the cluster grows, rules must be matched by sequential scanning; the iptables rules of a cluster can number in the millions, matching takes a long time, and the complexity is O(n), creating a performance bottleneck. During rule matching the whole cluster is essentially unusable, which greatly increases network latency.
Disclosure of Invention
The invention provides a method and a system for improving the Service performance of a large-scale container cloud cluster network, aiming to reduce container network latency and loss.
Firstly, the invention provides a method for improving the performance of a large-scale container cloud cluster network Service, and the technical solution adopted to solve the technical problem is as follows:
a method for improving the performance of a large-scale container cloud cluster network Service specifically comprises the following steps:
changing the default IPinIP mode into a BGP mode by using the Kubernetes network plug-in, wherein in the forwarding process of container network packets, the BGP mode uses the cluster nodes as layer-3 routers and routes the network packets directly to the nodes;
when Kubernetes uses a Service to access a container application, switching the default iptables mode of the network proxy component to IPVS mode, whereupon the network proxy component generates a unified rule for all Services based on the IPVS mode, then looks up the IP in the IPSet through a hash algorithm, and load-balances the IP to the corresponding PodIP through IPVS;
custom-writing eBPF programs that run in the Linux kernel and loading them to different hook points; the eBPF programs use a map or a ring buffer to store network matching rules, and when a network request for a client ClusterIP Service comes in, the eBPF program looks up the corresponding back-end pod address in the map or the ring buffer for forwarding.
Optionally, the network proxy component adopts the IPVS mode, and when Kubernetes uses a Service to access the container application:
if the container application being accessed is on a specific node, rule matching is first completed through iptables + IPSet;
then, in combination with the kube-ipvs0 network interface, it is determined whether the accessed IP belongs to the current node:
if the accessed IP is on the current node, the INPUT chain is taken and the request is finally routed to a specific PodIP through IPVS;
if the accessed IP is not on the current node, the FORWARD chain forwards the request to another node.
Optionally, the same eBPF program can be attached to multiple events;
different eBPF programs can also access the same map to share data.
Optionally, the specific steps of running the eBPF program in the Linux kernel are as follows:
1) user space sends the bytecode together with the program type to the Linux kernel;
2) the Linux kernel runs a verifier on the bytecode to ensure the program can run safely;
3) the Linux kernel compiles the bytecode into native code and inserts it at the specified code position;
4) the inserted code writes data into a ring buffer or a generic key-value map;
5) user space reads the result values from the shared map or ring buffer.
Secondly, the invention provides a system for improving the performance of a large-scale container cloud cluster network Service, and the technical solution adopted to solve the technical problem is as follows:
a system for improving the performance of a large-scale container cloud cluster network Service realizes a network plug-in and a network agent component related to Kubernetes and an eBPF program which runs in a Linux kernel and is subjected to real-time local compilation;
changing the default IPinIP mode into a BGP mode by using the Kubernetes network plug-in, wherein in the forwarding process of container network packets, the BGP mode uses the cluster nodes as layer-3 routers and routes the network packets directly to the nodes;
when Kubernetes uses a Service to access the container application, switching the default iptables mode of the network proxy component to IPVS mode, whereupon the network proxy component generates a unified rule for all Services based on the IPVS mode, looks up the IP in the IPSet through a hash algorithm, and load-balances the IP to the corresponding PodIP through IPVS;
the eBPF programs are custom-written and loaded to different hook points; the eBPF programs use a map or a ring buffer to store network matching rules, and when a network request for a client ClusterIP Service comes in, the eBPF program looks up the corresponding back-end pod address in the map or the ring buffer for forwarding.
Optionally, the network proxy component adopts the IPVS mode, and when Kubernetes uses a Service to access the container application:
if the container application being accessed is on a specific node, rule matching is first completed through iptables + IPSet;
then, in combination with the kube-ipvs0 network interface, it is determined whether the accessed IP belongs to the current node:
if the accessed IP is on the current node, the INPUT chain is taken and the request is finally routed to a specific PodIP through IPVS;
if the accessed IP is not on the current node, the FORWARD chain forwards the request to another node.
Optionally, the same eBPF program can be attached to multiple events;
different eBPF programs can also access the same map to share data.
Optionally, the specific steps of running the eBPF program in the Linux kernel are as follows:
1) user space sends the bytecode together with the program type to the Linux kernel;
2) the Linux kernel runs a verifier on the bytecode to ensure the program can run safely;
3) the Linux kernel compiles the bytecode into native code and inserts it at the specified code position;
4) the inserted code writes data into a ring buffer or a generic key-value map;
5) user space reads the result values from the shared map or ring buffer.
Compared with the prior art, the method and the system for improving the Service performance of the large-scale container cloud cluster network of the present invention have the following beneficial effects:
on one hand, the invention uses the BGP mode of the network plug-in and the IPVS mode of the network proxy component to reduce the network delay and reduce the network performance loss, on the other hand, the invention reduces the network forwarding link by writing the self-defined eBPF program and loading the program to different hook points, improves the long and short connection performance of the network, further improves the Service performance of the network, and solves the problem of slow scale matching of large-scale cluster network Service.
Drawings
FIG. 1 is a schematic flow diagram of the network proxy component looking up the IPSet through a hash algorithm in IPVS mode;
FIG. 2 is a flow diagram illustrating the use of a Service by Kubernetes to access container applications according to the present invention.
Detailed Description
In order to make the technical solution, the technical problems to be solved, and the technical effects of the present invention clearer, the technical solution of the present invention is described clearly and completely below with reference to specific embodiments.
The first embodiment is as follows:
the embodiment provides a method for improving the performance of a large-scale container cloud cluster network Service, and the improvement measure relates to the following three aspects:
and (I) changing the default IPinIP mode into a BGP mode by using a network plug-in of Kubernetes, wherein the BGP mode uses the cluster nodes as network layer-3 routers in the forwarding process of the container network packet and directly routes the network packet to the nodes. In BGP mode, the rule matching time complexity is O (1).
(II) When Kubernetes uses a Service to access the container application, the default iptables mode of the network proxy component is switched to IPVS mode; as shown in FIG. 1, the network proxy component then generates a unified rule for all Services based on the IPVS mode, looks up the IP in the IPSet through a hash algorithm, and load-balances the IP to the corresponding PodIP through IPVS.
The network proxy component is specifically the Kube-Proxy component, which supports the IPVS mode. When Kubernetes accesses a container application using a Service, with reference to FIG. 2:
if the container application being accessed is on a specific node, rule matching is first completed through iptables + IPSet;
then, in combination with the kube-ipvs0 network interface, it is determined whether the accessed IP belongs to the current node:
if the accessed IP is on the current node, the INPUT chain is taken and the request is finally routed to a specific PodIP through IPVS;
if the accessed IP is not on the current node, the FORWARD chain forwards the request to another node.
When the network proxy component uses the iptables mode, the time complexity of rule lookup is O(n); after switching to the IPVS mode, the problem of slow Service rule matching in large-scale clusters is solved and the complexity is reduced to O(1), thereby reducing container network access latency.
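The complexity difference can be illustrated with a small sketch (hypothetical Service IPs and backend names, not the patent's code): iptables matches rules by scanning an ordered list, while an IPVS/IPSet-style lookup resolves a Service through a hash table:

```python
# Sequential scan, as in iptables: every rule is checked in order, O(n).
rules = [(f"10.96.0.{i}", f"pod-{i}") for i in range(1, 255)]

def iptables_match(service_ip):
    for ip, backend in rules:       # worst case walks the whole rule list
        if ip == service_ip:
            return backend
    return None

# Hash lookup, as in IPVS backed by an IPSet: O(1) on average.
ipvs_table = dict(rules)

def ipvs_match(service_ip):
    return ipvs_table.get(service_ip)

# Both find the same backend; only the lookup cost differs.
assert iptables_match("10.96.0.200") == ipvs_match("10.96.0.200") == "pod-200"
```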
(III) eBPF programs running in the Linux kernel are custom-written and loaded to different hook points; the eBPF programs use a map or a ring buffer to store network matching rules, and when a network request for a client ClusterIP Service comes in, the eBPF program looks up the corresponding back-end pod address in the map or the ring buffer for forwarding.
The specific steps of running the eBPF program in the Linux kernel are as follows:
1) user space sends the bytecode together with the program type to the Linux kernel;
2) the Linux kernel runs a verifier on the bytecode to ensure the program can run safely;
3) the Linux kernel compiles the bytecode into native code and inserts it at the specified code position;
4) the inserted code writes data into a ring buffer or a generic key-value map;
5) user space reads the result values from the shared map or ring buffer.
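The five steps above can be sketched as a toy simulation (plain Python, not real bpf(2) system calls; the class and function names are illustrative): user space hands a program to a "kernel" that verifies it, installs it, and lets it write results into a shared map that user space then reads:

```python
from collections import deque

class ToyKernel:
    """Toy stand-in for the kernel side of the eBPF load path."""

    def __init__(self):
        self.shared_map = {}        # steps 4/5: generic key-value map
        self.ring_buffer = deque()  # alternative result channel

    def load(self, bytecode, prog_type):
        # Step 1: user space sends bytecode plus program type.
        # Step 2: (toy) verifier check before the program may run.
        if not callable(bytecode):
            raise ValueError("verifier rejected program")
        # Step 3: "compile" and insert at the hook point.
        self.prog = bytecode

    def trigger(self, event):
        # Step 4: the inserted program writes data into the shared map.
        self.prog(event, self.shared_map, self.ring_buffer)

kernel = ToyKernel()
kernel.load(lambda ev, m, rb: m.__setitem__(ev, ev * 2), "socket_filter")
kernel.trigger(21)
print(kernel.shared_map[21])  # step 5: user space reads the result -> 42
```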
It should be added that the same eBPF program can be attached to multiple events;
different eBPF programs can also access the same map to share data.
It should also be noted that a user-mode program can exchange data in real time with the eBPF program in the Linux kernel through the bpf system call and bpf map structures. When a network request for a client ClusterIP Service comes in, for the DNAT of the ClusterIP, the back-end address corresponding to the front end is stored in an eBPF map, and the request is sent directly to the destination back-end Pod. On the data plane, the bpf program originally mounted on the node-side veth is instead mounted on the independent network interface inside the Pod, so that the Pod's network packets are DNATed when sent out, and return packets are reverse-DNATed when received by that interface.
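A minimal sketch of the ClusterIP DNAT described above (hypothetical addresses; in a real implementation this table lives in an eBPF map and packets are rewritten in the kernel): the forward table rewrites the destination to a back-end Pod on egress, and the reverse table restores the Service address on the return path:

```python
# Hypothetical ClusterIP:port -> back-end PodIP:port table,
# as would be stored in an eBPF map.
dnat = {("10.96.5.1", 80): ("10.244.2.7", 8080)}
reverse_dnat = {v: k for k, v in dnat.items()}

def egress(dst):
    """DNAT on send: rewrite the Service address to the chosen Pod address."""
    return dnat.get(dst, dst)

def ingress(src):
    """Reverse DNAT on the return packet: restore the Service address."""
    return reverse_dnat.get(src, src)

pod = egress(("10.96.5.1", 80))
assert pod == ("10.244.2.7", 8080)
assert ingress(pod) == ("10.96.5.1", 80)  # client still sees the Service IP
```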
The second embodiment is as follows:
the embodiment provides a system for improving the performance of a large-scale container cloud cluster network Service, which realizes a network plug-in and a network agent component related to Kubernets and an eBPF program which runs in a Linux kernel and is subjected to real-time local compilation.
In order to improve the Service performance of the large-scale container cloud cluster network, measures are mainly taken from the following aspects:
and (I) changing the default IPinIP mode into a BGP mode by using a network plug-in of Kubernetes, wherein the BGP mode uses the cluster nodes as network layer-3 routers in the forwarding process of the container network packet and directly routes the network packet to the nodes. In BGP mode, the rule matching time complexity is O (1).
(II) When Kubernetes uses Services to access the container application, the default iptables mode of the network proxy component is switched to IPVS mode; as shown in FIG. 1, the network proxy component then generates a unified rule for all Services based on the IPVS mode, looks up the IP in the IPSet through a hash algorithm, and load-balances the IP to the corresponding PodIP through IPVS.
The network proxy component is specifically the Kube-Proxy component, which supports the IPVS mode. When Kubernetes accesses a container application using a Service, with reference to FIG. 2:
if the container application being accessed is on a specific node, rule matching is first completed through iptables + IPSet;
then, in combination with the kube-ipvs0 network interface, it is determined whether the accessed IP belongs to the current node:
if the accessed IP is on the current node, the INPUT chain is taken and the request is finally routed to a specific PodIP through IPVS;
if the accessed IP is not on the current node, the FORWARD chain forwards the request to another node.
When the network proxy component uses the iptables mode, the time complexity of rule lookup is O(n); after switching to the IPVS mode, the problem of slow Service rule matching in large-scale clusters is solved and the complexity is reduced to O(1), thereby reducing container network access latency.
(III) eBPF programs running in the Linux kernel are custom-written and loaded to different hook points; the eBPF programs use a map or a ring buffer to store network matching rules, and when a network request for a client ClusterIP Service comes in, the eBPF program looks up the corresponding back-end pod address in the map or the ring buffer for forwarding.
The specific steps of running the eBPF program in the Linux kernel are as follows:
1) user space sends the bytecode together with the program type to the Linux kernel;
2) the Linux kernel runs a verifier on the bytecode to ensure the program can run safely;
3) the Linux kernel compiles the bytecode into native code and inserts it at the specified code position;
4) the inserted code writes data into a ring buffer or a generic key-value map;
5) user space reads the result values from the shared map or ring buffer.
It should be added that the same eBPF program can be attached to multiple events;
different eBPF programs can also access the same map to share data.
It should also be noted that a user-mode program can exchange data in real time with the eBPF program in the Linux kernel through the bpf system call and bpf map structures. When a network request for a client ClusterIP Service comes in, for the DNAT of the ClusterIP, the back-end address corresponding to the front end is stored in an eBPF map, and the request is sent directly to the destination back-end Pod. On the data plane, the bpf program originally mounted on the node-side veth is instead mounted on the independent network interface inside the Pod, so that the Pod's network packets are DNATed when sent out, and return packets are reverse-DNATed when received by that interface.
In summary, the method and the system for improving the Service performance of the large-scale container cloud cluster network can reduce network delay, reduce network performance loss, improve the Service performance of the network, and solve the problem of slow scale matching of the large-scale cluster network Service.
The principles and embodiments of the present invention have been described in detail using specific examples, which are provided only to aid in understanding the core technical content of the present invention. Based on the above embodiments, those skilled in the art may make improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (8)

1. A method for improving the performance of a large-scale container cloud cluster network Service is characterized by specifically comprising the following steps:
changing the default IPinIP mode into a BGP mode by using the Kubernetes network plug-in, wherein in the forwarding process of container network packets, the BGP mode uses the cluster nodes as layer-3 routers and routes the network packets directly to the nodes;
when Kubernetes uses a Service to access a container application, switching the default iptables mode of the network proxy component to IPVS mode, whereupon the network proxy component generates a unified rule for all Services based on the IPVS mode, then looks up the IP in the IPSet through a hash algorithm, and load-balances the IP to the corresponding PodIP through IPVS;
custom-writing eBPF programs that run in the Linux kernel and loading them to different hook points; the eBPF programs use a map or a ring buffer to store network matching rules, and when a network request for a client ClusterIP Service comes in, the eBPF program looks up the corresponding back-end pod address in the map or the ring buffer for forwarding.
2. The method for improving the performance of the large-scale container cloud cluster network Service according to claim 1, wherein the network proxy component adopts the IPVS mode, and when Kubernetes uses a Service to access the container application:
if the container application being accessed is on a specific node, rule matching is first completed through iptables + IPSet;
then, in combination with the kube-ipvs0 network interface, it is determined whether the accessed IP belongs to the current node:
if the accessed IP is on the current node, the INPUT chain is taken and the request is finally routed to a specific PodIP through IPVS;
if the accessed IP is not on the current node, the FORWARD chain forwards the request to another node.
3. The method of claim 1, wherein the same eBPF program can be attached to multiple events;
different eBPF programs can also access the same map to share data.
4. The method according to claim 1 or 3, wherein the specific steps of running the eBPF program in the Linux kernel are as follows:
1) user space sends the bytecode together with the program type to the Linux kernel;
2) the Linux kernel runs a verifier on the bytecode to ensure the program can run safely;
3) the Linux kernel compiles the bytecode into native code and inserts it at the specified code position;
4) the inserted code writes data into a ring buffer or a generic key-value map;
5) user space reads the result values from the shared map or ring buffer.
5. A system for improving the performance of a large-scale container cloud cluster network Service, characterized in that the system implements the Kubernetes network plug-in and network proxy component, together with eBPF programs that run in the Linux kernel and are JIT-compiled to native code;
changing the default IPinIP mode into a BGP mode by using the Kubernetes network plug-in, wherein in the forwarding process of container network packets, the BGP mode uses the cluster nodes as layer-3 routers and routes the network packets directly to the nodes;
when Kubernetes uses a Service to access the container application, switching the default iptables mode of the network proxy component to IPVS mode, whereupon the network proxy component generates a unified rule for all Services based on the IPVS mode, looks up the IP in the IPSet through a hash algorithm, and load-balances the IP to the corresponding PodIP through IPVS;
the eBPF programs are custom-written and loaded to different hook points; the eBPF programs use a map or a ring buffer to store network matching rules, and when a network request for a client ClusterIP Service comes in, the eBPF program looks up the corresponding back-end pod address in the map or the ring buffer for forwarding.
6. The system according to claim 5, wherein the network proxy component adopts the IPVS mode, and when Kubernetes uses a Service to access a container application:
if the container application being accessed is on a specific node, rule matching is first completed through iptables + IPSet;
then, in combination with the kube-ipvs0 network interface, it is determined whether the accessed IP belongs to the current node:
if the accessed IP is on the current node, the INPUT chain is taken and the request is finally routed to a specific PodIP through IPVS;
if the accessed IP is not on the current node, the FORWARD chain forwards the request to another node.
7. The system of claim 5, wherein the same eBPF program can be attached to multiple events;
different eBPF programs can also access the same map to share data.
8. The system of claim 5 or 7, wherein the specific steps of running the eBPF program in the Linux kernel are as follows:
1) user space sends the bytecode together with the program type to the Linux kernel;
2) the Linux kernel runs a verifier on the bytecode to ensure the program can run safely;
3) the Linux kernel compiles the bytecode into native code and inserts it at the specified code position;
4) the inserted code writes data into a ring buffer or a generic key-value map;
5) user space reads the result values from the shared map or ring buffer.
CN202111586161.XA 2021-12-20 2021-12-20 Method and system for improving large-scale container cloud cluster network Service performance Pending CN114338524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111586161.XA CN114338524A (en) 2021-12-20 2021-12-20 Method and system for improving large-scale container cloud cluster network Service performance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111586161.XA CN114338524A (en) 2021-12-20 2021-12-20 Method and system for improving large-scale container cloud cluster network Service performance

Publications (1)

Publication Number Publication Date
CN114338524A true CN114338524A (en) 2022-04-12

Family

ID=81053976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111586161.XA Pending CN114338524A (en) 2021-12-20 2021-12-20 Method and system for improving large-scale container cloud cluster network Service performance

Country Status (1)

Country Link
CN (1) CN114338524A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112162828A (en) * 2020-10-29 2021-01-01 杭州谐云科技有限公司 Container network cooperation system and method based on cloud side scene
CN115589383A (en) * 2022-09-28 2023-01-10 建信金融科技有限责任公司 eBPF-based virtual machine data transmission method, device, equipment and storage medium
CN117544506A (en) * 2023-11-09 2024-02-09 北京中电汇通科技有限公司 Container cloud DNS performance optimization method based on eBPF technology
WO2024148877A1 (en) * 2023-01-09 2024-07-18 苏州元脑智能科技有限公司 Method and apparatus for implementing service topology awareness of cluster, and device and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443993A (en) * 2020-04-01 2020-07-24 山东汇贸电子口岸有限公司 Method for realizing large-scale container cluster
JP2020174259A (en) * 2019-04-09 2020-10-22 日本電信電話株式会社 Communication system and communication method
CN112817597A (en) * 2021-01-12 2021-05-18 山东兆物网络技术股份有限公司 EBPF-based software container implementation method operating in user space
US20210216369A1 (en) * 2020-01-09 2021-07-15 Sap Se Multi-language stateful serverless workloads
CN113676524A (en) * 2021-08-09 2021-11-19 浪潮云信息技术股份公司 Method for realizing multi-CPU architecture container network proxy

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020174259A (en) * 2019-04-09 2020-10-22 日本電信電話株式会社 Communication system and communication method
US20210216369A1 (en) * 2020-01-09 2021-07-15 Sap Se Multi-language stateful serverless workloads
CN111443993A (en) * 2020-04-01 2020-07-24 山东汇贸电子口岸有限公司 Method for realizing large-scale container cluster
CN112817597A (en) * 2021-01-12 2021-05-18 山东兆物网络技术股份有限公司 EBPF-based software container implementation method operating in user space
CN113676524A (en) * 2021-08-09 2021-11-19 浪潮云信息技术股份公司 Method for realizing multi-CPU architecture container network proxy

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CUI XIULONG: "eBPF Concepts and Basic Principles", Retrieved from the Internet <URL:https://cloud.tencent.com/developer/article/1749470> *
ZHANG ZHONGLIN: "[kubernetes/k8s Concepts] Introduction to Calico and Its Principles", Retrieved from the Internet <URL:https://blog.csdn.net/zhonglinzhang/article/details/97613927> *
ANONYMOUS: "Calico: Introduction, Principles and Usage", Retrieved from the Internet <URL:https://cloud.tencent.com/developer/article/1638845> *
BI XIAOHONG et al.: "Research and Optimization of Network Performance of Microservice Application Platforms", Computer Engineering, 7 July 2017 (2017-07-07), pages 2 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112162828A (en) * 2020-10-29 2021-01-01 杭州谐云科技有限公司 Container network cooperation system and method based on cloud side scene
CN115589383A (en) * 2022-09-28 2023-01-10 建信金融科技有限责任公司 eBPF-based virtual machine data transmission method, device, equipment and storage medium
CN115589383B (en) * 2022-09-28 2024-04-26 建信金融科技有限责任公司 eBPF-based virtual machine data transmission method, apparatus, device, and storage medium
WO2024148877A1 (en) * 2023-01-09 2024-07-18 苏州元脑智能科技有限公司 Method and apparatus for implementing service topology awareness of cluster, and device and medium
CN117544506A (en) * 2023-11-09 2024-02-09 北京中电汇通科技有限公司 Container cloud DNS performance optimization method based on eBPF technology
CN117544506B (en) * 2023-11-09 2024-05-24 北京中电汇通科技有限公司 Container cloud DNS performance optimization method based on eBPF technology

Similar Documents

Publication Publication Date Title
CN114338524A (en) Method and system for improving large-scale container cloud cluster network Service performance
US20190327345A1 (en) Method and apparatus for forwarding heterogeneous protocol message and network switching device
US10079780B2 (en) Packet processing method and device
CN111193773B (en) Load balancing method, device, equipment and storage medium
US9973400B2 (en) Network flow information collection method and apparatus
CN102857414A (en) Forwarding table writing method and device and message forwarding method and device
US11800587B2 (en) Method for establishing subflow of multipath connection, apparatus, and system
US9391896B2 (en) System and method for packet forwarding using a conjunctive normal form strategy in a content-centric network
CN112448887A (en) Segmented routing method and device
US10536368B2 (en) Network-aware routing in information centric networking
CN114422415A (en) Egress node processing flows in segmented routing
CN112787932B (en) Method, device and system for generating forwarding information
CN107483628A (en) Unidirectional proxy method and system based on DPDK
CN116055446B (en) Cross-network message forwarding method, electronic equipment and machine-readable storage medium
CN112714075A (en) Method for limiting speed of data packet forwarding by bridge
CN110071872B (en) Service message forwarding method and device, and electronic device
CN105072043B (en) Client announcement procedure optimization method in MESH network Routing Protocol
CN114338529B (en) Five-tuple rule matching method and device
CN115460213A (en) Service processing method and device, electronic equipment and computer readable medium
US20040093424A1 (en) Packet routing device
CN117014501A (en) Stateless SRv6 service chain proxy method and system based on programmable switch
CN115297082A (en) ARP protocol processing method and system based on cooperation of FPGA and eBPF
CN110401594B (en) Message forwarding method and device, electronic equipment and machine-readable storage medium
US10708221B1 (en) Generating a natural name data structure to prevent duplicate network data associated with an asynchronous distributed network operating system
CN111988221A (en) Data transmission method, data transmission device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination