US20230179484A1 - Automatic Configuring of VLAN and Overlay Logical Switches for Container Secondary Interfaces - Google Patents
- Publication number
- US20230179484A1 (application Ser. No. 18/102,700)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0876—Aspects of the degree of configuration automation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0895—Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4641—Virtual LANs, VLANs, e.g. virtual private networks [VPN]
Definitions
- Container networks are an increasingly popular type of network system for deploying applications in datacenters.
- The pods of containers produced by such a system can be deployed more rapidly than virtual machines (VMs) or physical computers. Therefore, a deployment can be scaled up or down to meet demand more rapidly than is typical for VMs or physical computers.
- A set of containers in a container network system has less overhead and can generally perform the same tasks faster than a corresponding VM would.
- Pods are instantiated with an automatically configured primary interface for communicating with outside devices (e.g., physical or virtual machines or containers separate from the pod).
- However, existing container-based network systems do not have a convenient way of adding secondary interfaces to a pod, and in some cases multiple interfaces for a single pod are necessary.
- The method of some embodiments allocates a secondary network interface for a pod, which has a primary network interface, in a container network operating on an underlying logical network.
- The method receives a network attachment definition (ND) that designates a network segment.
- The method receives the pod, wherein the pod includes an identifier of the ND.
- The method then creates a secondary network interface for the pod and connects the secondary network interface to the network segment.
- In some embodiments, the pods include multiple ND identifiers that each identify a network segment. The method of such embodiments creates multiple secondary network interfaces and attaches the multiple network segments to the multiple secondary network interfaces.
- In some embodiments, designating the network segment includes identifying a network segment created on the logical network before the ND is received.
- The method may further include directing the logical network to modify the network segment according to a set of attributes in the received ND.
- In some embodiments, designating the network segment includes providing a set of attributes of the network segment.
- The method of such embodiments further includes directing the logical network to create the network segment according to the received set of attributes.
- The set of attributes may include a network type, where the network type is a VLAN-backed network segment or an overlay-backed network segment.
- In some embodiments, for one set of NDs, each ND designates a network segment by identifying a network segment created on the logical network before the ND is received, while for another set of NDs, each ND designates a network segment by providing a set of attributes of the network segment.
- The method of such embodiments further includes directing the logical network to create the second set of network segments according to the received set of attributes.
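The allocation flow summarized above can be sketched in a few lines of code. This is an illustrative sketch only; the `NetworkAttachmentDefinition`, `Pod`, and `allocate_secondary_interfaces` names are invented for the example rather than taken from the patent.

```python
# Sketch of the described method: given NDs that designate network
# segments and a pod that references NDs by identifier, create one
# secondary interface per referenced ND and connect it to the segment
# that the ND designates. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class NetworkAttachmentDefinition:
    name: str
    segment_id: str  # the network segment this ND designates

@dataclass
class Pod:
    name: str
    nd_names: list                                   # ND identifiers in the pod
    interfaces: dict = field(default_factory=dict)   # interface name -> segment

def allocate_secondary_interfaces(pod, nds_by_name):
    """Create a secondary interface per ND identifier in the pod and
    attach it to the network segment designated by that ND."""
    for i, nd_name in enumerate(pod.nd_names, start=1):
        nd = nds_by_name[nd_name]
        iface = f"net{i}"  # secondary interface name (the primary is separate)
        pod.interfaces[iface] = nd.segment_id
    return pod
```

A pod listing two ND identifiers would thus end up with two secondary interfaces, each attached to the segment designated by its ND.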
- FIG. 1 illustrates an example of a control system of some embodiments of the invention.
- FIG. 2 illustrates an example of a logical network for a virtual private cloud.
- FIG. 3 illustrates pods implemented on VMs of a host computer.
- FIG. 4 conceptually illustrates pods with interfaces to one or more network segments.
- FIG. 5 illustrates a communication sequence of some embodiments for adding a secondary interface to a pod.
- FIG. 6 conceptually illustrates a process of some embodiments for allocating a secondary network interface for a pod with a primary network interface.
- FIG. 7 conceptually illustrates a computer system with which some embodiments of the invention are implemented.
- A container in a container network is a lightweight executable image that contains software and all of its dependencies (e.g., libraries, etc.).
- Containers are executed in pods. A pod is the smallest deployable unit a user can create in a Kubernetes system, and may have one or more containers running in it.
- The containers of a pod may use shared storage and network resources. The pod includes a specification for how to run the containers.
- A pod's contents in some embodiments are always stored together and executed together.
- A pod provides an application-specific logical host containing one or more application containers.
- One of the potential shared resources of a pod is a secondary interface.
- The network control system of some embodiments processes one or more Custom Resource Definitions (CRDs) that define attributes of custom-specified network resources.
- The CRDs define extensions to the Kubernetes networking requirements.
- Some embodiments use the following CRDs: network attachment definition (ND) CRDs, virtual interface (VIF) CRDs, virtual network CRDs, endpoint group CRDs, security CRDs, virtual service object (VSO) CRDs, and load balancer CRDs.
- FIG. 1 illustrates an example of a control system 100 of some embodiments of the invention.
- This system 100 processes APIs that use the Kubernetes-based declarative model to describe the desired state of (1) the machines to deploy, and (2) the connectivity, security, and service operations that are to be performed for the deployed machines (e.g., private and public IP address connectivity, load balancing, security policies, etc.).
- The control system 100 uses one or more CRDs to define some of the resources referenced in the APIs.
- The system 100 performs automated processes to deploy a logical network that connects the deployed machines and segregates these machines from other machines in the datacenter set.
- The machines are connected to the deployed logical network of a virtual private cloud (VPC) in some embodiments.
- The control system 100 includes an API processing cluster 105, a software-defined network (SDN) manager cluster 110, an SDN controller cluster 115, and compute managers and controllers 117.
- The API processing cluster 105 includes two or more API processing nodes 135, with each node comprising an API processing server 140, a Kubelet 142 node agent, and a network controller plugin (NCP) 145.
- The API processing server 140 receives intent-based API calls and parses these calls. In some embodiments, the received API calls are in a declarative, hierarchical Kubernetes format, and may contain multiple different requests.
- The API processing server 140 parses each received intent-based API request into one or more individual requests. The API server provides these requests directly to the compute managers and controllers 117, or indirectly provides these requests to the compute managers and controllers 117 through the Kubelet 142 and/or the NCP 145 running on the Kubernetes master node 135.
- The compute managers and controllers 117 then deploy VMs and/or Pods on host computers in the availability zone.
- The kubelet 142 node agent on a node can register the node with the API server 140 using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
- The kubelet 142 receives PodSpecs: YAML (a data serialization language) or JavaScript Object Notation (JSON) formatted objects that each describe a pod.
- The kubelet 142 uses a set of PodSpecs to create (e.g., using the compute managers and controllers 117) the pods that are provided by various mechanisms (e.g., from the API server 140) and ensures that the containers described in those PodSpecs are running and healthy.
- The API calls can also include requests that require network elements to be deployed. In some embodiments, these requests explicitly identify the network elements to deploy, while in other embodiments the requests can also implicitly identify these network elements by requesting the deployment of compute constructs (e.g., compute clusters, containers, etc.) for which network elements have to be defined by default. As further described below, the control system 100 uses the NCP 145 to identify the network elements that need to be deployed, and to direct the deployment of these network elements.
- In some embodiments, the API calls refer to extended resources that are not defined per se by the baseline Kubernetes system.
- The API processing server 140 uses one or more CRDs 120 to interpret the references in the API calls to the extended resources.
- The CRDs in some embodiments include the ND, VIF, Virtual Network, Endpoint Group, Security Policy, Admin Policy, Load Balancer, and VSO CRDs.
- In some embodiments, the CRDs are provided to the API processing server 140 in one stream with the API calls.
- NCP 145 is the interface between the API server 140 and the SDN manager cluster 110 that manages the network elements that serve as the forwarding elements (e.g., switches, routers, bridges, etc.) and service elements (e.g., firewalls, load balancers, etc.) in an availability zone.
- The SDN manager cluster 110 directs the SDN controller cluster 115 to configure the network elements to implement the desired forwarding elements and/or service elements (e.g., logical forwarding elements and logical service elements) of one or more logical networks.
- The SDN controller cluster 115 interacts with local controllers on host computers and edge gateways to configure the network elements in some embodiments.
- NCP 145 registers for event notifications with the API server 140, e.g., sets up a long-pull session with the API server to receive all CRUD (Create, Read, Update, and Delete) events for the various CRDs that are defined for networking.
- In some embodiments, the API server 140 is a Kubernetes master VM, and the NCP 145 runs in this VM as a Pod.
- NCP 145 in some embodiments collects realization data from the SDN resources for the CRDs and provides this realization data as it relates to the CRD status.
- In some embodiments, the NCP 145 communicates directly with the API server 140 and/or through the Kubelet 142.
- NCP 145 processes the parsed API requests relating to NDs, VIFs, virtual networks, load balancers, endpoint groups, security policies, and VSOs, to direct the SDN manager cluster 110 to implement (1) the NDs to designate network segments for use with secondary interfaces of pods, (2) the VIFs needed to connect VMs and Pods to forwarding elements on host computers, (3) the virtual networks to implement different segments of a logical network of the VPC, (4) the load balancers to distribute the traffic load to endpoint machines, (5) the firewalls to implement security and admin policies, and (6) the exposed ports to access services provided by a set of machines in the VPC to machines outside and inside of the VPC.
- The API server 140 provides the CRDs 120 that have been defined for these extended network constructs to the NCP 145 for it to process the APIs that refer to the corresponding network constructs (e.g., network segments).
- The API server 140 also provides configuration data from the configuration storage 125 to the NCP 145.
- The configuration data in some embodiments includes parameters that adjust the pre-defined template rules that the NCP 145 follows to perform its automated processes.
- In some embodiments, the configuration data includes a configuration map. The configuration map of some embodiments may be generated from one or more directories, files, or literal values. The configuration map (or “ConfigMap”) is discussed further with respect to the device plugin 144, below.
- The NCP 145 performs these automated processes to execute the received API requests in order to direct the SDN manager cluster 110 to deploy the network elements for the VPC.
- In some embodiments, the control system 100 performs one or more automated processes to identify and deploy one or more network elements that are used to implement the logical network for a VPC.
- The control system performs these automated processes without an administrator performing any action to direct the identification and deployment of the network elements after an API request is received.
- The SDN managers 110 and controllers 115 can be any SDN managers and controllers available today. In some embodiments, these managers and controllers are the NSX-T managers and controllers licensed by VMware Inc. In such embodiments, NCP 145 detects network events by processing the data supplied by its corresponding API server 140, and uses NSX-T APIs to direct the NSX-T manager 110 to deploy and/or modify the NSX-T network constructs needed to implement the network state expressed by the API calls.
- The communication between the NCP and the NSX-T manager 110 is asynchronous: the NCP provides the desired state to the NSX-T managers, which then relay the desired state to the NSX-T controllers to compute and disseminate the state asynchronously to the host computers, forwarding elements, and service nodes in the availability zone (i.e., to the SDDC set controlled by the controllers 115).
- After receiving the APIs from the NCPs 145, the SDN managers 110 in some embodiments direct the SDN controllers 115 to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers serve as the central control plane (CCP) of the control system 100.
- A device plugin 144 identifies resources available to the pods on a node based on a configuration map of the node.
- The configuration map in some embodiments is received from the API server 140.
- In various embodiments, the configuration map is generated from files in the configuration storage 125, from data received by the API server from the NCP, and/or from data generated by the SDN manager 110.
- In some embodiments, the device plugin receives the configuration map directly from the API server 140; in other embodiments, it receives the configuration map through the kubelet 142.
- The configuration map in some embodiments includes identifiers of pre-created network segments of the logical network.
- A network segment acts in a manner similar to a subnet, e.g., a layer 2 broadcast zone.
- Individual pods can interface with a network segment and communicate with other pods or devices configured to interface with the network segment.
- A network segment does not operate as a physical switch connecting directly attached devices; rather, like a VPN tunnel or VLAN, it allows pods or devices that are not directly connected to communicate as though they were all connected to a common switch.
- FIG. 2 illustrates an example of a logical network for a virtual private cloud.
- FIG. 2 depicts the SDN controllers 115, acting as the CCP, computing high-level configuration data (e.g., port configuration, policies, forwarding tables, service tables, etc.).
- The SDN controllers 115 push the high-level configuration data to the local control plane (LCP) agents 220 on host computers 205, LCP agents 225 on edge appliances 210, and TOR (top-of-rack) agents 230 of TOR switches 215.
- The CCP and LCPs configure managed physical forwarding elements (PFEs), e.g., switches, routers, bridges, etc., to implement logical forwarding elements (LFEs).
- A typical LFE spans multiple PFEs running on multiple physical devices (e.g., computers, etc.).
- The LCP agents 220 on the host computers 205 configure one or more software switches 250 and software routers 255 to implement distributed logical switches, routers, bridges, and/or service nodes (e.g., service VMs or hypervisor service engines) of one or more logical networks with the corresponding switches and routers on other host computers 205, edge appliances 210, and TOR switches 215.
- On the edge appliances 210, the LCP agents 225 configure packet processing stages 270 of these appliances to implement the logical switches, routers, bridges, and/or service nodes of one or more logical networks along with the corresponding switches and routers on other host computers 205, edge appliances 210, and TOR switches 215.
- The TOR agents 230 configure one or more configuration tables 275 of TOR switches 215 through an OVSdb server 240.
- The data in the configuration tables is then used to configure the hardware ASIC packet-processing pipelines 280 to perform the desired forwarding operations to implement the desired logical switching, routing, bridging, and service operations.
- FIG. 2 illustrates an example of a logical network 295 that defines a VPC for one entity, such as one corporation in a multi-tenant public datacenter, or one department of one corporation in a private datacenter.
- The logical network 295 includes multiple logical switches 284, with each logical switch connecting different sets of machines and serving as a different network segment.
- Each logical switch has a port 252 that connects with (i.e., is associated with) a virtual interface 265 of a machine 260.
- The machines 260 in some embodiments include VMs and Pods, with each Pod having one or more containers.
- The logical network 295 also includes a logical router 282 that connects the different network segments defined by the different logical switches 284. In some embodiments, the logical router 282 serves as a gateway for the deployed VPC in FIG. 2.
- FIG. 3 illustrates pods 365 implemented on a VM 360 of a host computer 205.
- The pods 365 are connected to a software forwarding element (SFE) 370.
- The SFE 370 is a software switch, a software bridge, or software code that enables the pods to share the virtual network interface card (VNIC) 375 of the VM 360.
- The connection between the pods 365 and the SFE 370 is initiated by an NSX node agent 380 that performs the functions of an NCP (e.g., as part of a distributed NCP) on the VM 360.
- The SFE 370 in turn passes communications between the pods 365 and the VNIC 375.
- The VNIC 375 connects to the port 385 of the software switch 250 that is configured by the LCP 220.
- The LCP 220 acts as a local agent of a CCP and, in some embodiments, configures the software switch 250 to implement one or more network segments.
- A network segment (or logical switch) allows multiple pods to communicate as though they were on a common switch, but the logical switch itself is implemented by multiple software switches 250 that operate on different host computers, VMs, etc.
- A single software switch 250 may implement parts of multiple different network segments.
- Pods of some embodiments may require multiple interfaces to provide multiple avenues of communication with different characteristics.
- For example, a pod may implement part of a telecommunications application; the primary interface of the pod may connect to the main telecommunications network (e.g., to handle one or more of telecommunications control functions, voice data, etc.) while a secondary interface of the pod provides a high-performance link for data traffic.
- Such a high-performance link may be used in some embodiments to connect to a Single Root I/O Virtualization (SR-IOV) system.
- The pods are not limited to just the primary and one secondary interface, but may have an arbitrary number of interfaces, up to the capacity of the logical network to provide network segments.
- FIG. 4 conceptually illustrates pods 405 , 410 , and 415 with interfaces to one or more network segments.
- Pod 405 is limited to a single interface 407, connecting to network segment 420.
- The network segment 420 is a logical construct provided by a software switch (not shown) that enables the pod 405 to communicate (e.g., through a VLAN or tunnel in some embodiments) with other pods that interface with the network segment 420, such as pod 410.
- Pod 410 may be implemented by the same VM as pod 405, by a different VM on the same host, by a VM on a different host, or even directly on a physical computer without a VM.
- Pod 410 also has a primary interface 412 that connects it to network segment 420. However, pod 410 also has secondary interfaces 413 and 414 connecting pod 410 to network segments 430 and 440, respectively. Pod 415 has primary interface 417 and secondary interface 418 connecting pod 415 to network segments 430 and 440, respectively. Thus pods 410 and 415 can communicate using either network segment 430 or network segment 440.
- The logical router 282 connects the network segments 420-440.
- FIG. 5 illustrates a communication sequence 500 of some embodiments for adding a secondary interface to a pod.
- The communication sequence includes several steps, numbered (1) to (7) in the diagram and the following description.
- The communication sequence 500 begins when the API server 140 (1) sends the device plugin 144 a list of network segments and, for each network segment, a list of the segment's interfaces.
- The device plugin 144 determines which interfaces are available and (2) provides the list of available interfaces for each network segment to the kubelet 142.
- The device plugin 144 determines the available interfaces of a network segment by retrieving an interface list from a specific file location (e.g., sys/class/net) and comparing this interface list with the interface names of the network segment. If an interface name from the interface list matches an interface name of the network segment, the device plugin 144 identifies it as available; it is the list of such available interfaces that is sent to the kubelet 142 in step (2).
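The availability check described above, comparing the host's interface list with each segment's interface names, can be sketched as follows. The `available_interfaces` function and its data shapes are hypothetical illustrations, not the patented implementation.

```python
# Hypothetical sketch of step (2): intersect the host's interface list
# (as would be read from a location such as /sys/class/net) with each
# network segment's interface names to report which are available.
def available_interfaces(host_ifaces, segments):
    """Return a mapping of segment name -> interfaces present on the host.

    host_ifaces: iterable of interface names found on the host.
    segments: mapping of segment name -> list of the segment's interface names.
    """
    host = set(host_ifaces)
    return {seg: [name for name in names if name in host]
            for seg, names in segments.items()}
```

An interface name appearing in both lists is reported as available for that segment; names the host does not expose are dropped.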
- The API server (3) sends a pod definition to the kubelet 142 that the kubelet 142 will use to create a pod.
- The pod definition in some embodiments contains a name or other identifier of a secondary network segment to attach the pod to.
- The pod includes an internal identifier of the secondary interface to identify the interface to containers of the pod. This internal identifier is separate from, and generally distinct from, the interface identifiers in the list of available interfaces identified by the device plugin.
- The kubelet 142 then (4) sends the device plugin 144 a request for an interface ID of an unallocated interface of the network segment identified in the pod definition.
- The device plugin 144 then (5) sends an interface ID of an unallocated interface of the identified network segment to the kubelet 142.
- The device plugin 144 monitors the allocated interface IDs in the embodiment of FIG. 5; in other embodiments, however, the kubelet 142 or the NCP 145 monitors the allocated interface IDs.
- Whichever element monitors the allocated interface IDs updates the status of the secondary interface(s) allocated to a pod to “unallocated” when the pod releases them.
- The NCP 145 (6) queries the kubelet 142 for any pods with secondary interfaces to be attached and (7) receives the interface ID from the kubelet 142.
- The NCP 145 then creates an interface for the pod and attaches the interface to the identified network segment.
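Steps (4) and (5) of the sequence amount to handing out an unallocated interface ID for the requested segment and recording the allocation. A minimal sketch, with invented names, might look like this:

```python
# Hedged sketch of steps (4)-(5): pick an unallocated interface ID of the
# requested segment and record it as allocated. The function name and
# data shapes are illustrative, not taken from the patent.
def request_interface(segment, available, allocated):
    """Return an unallocated interface ID of `segment`, or None if exhausted.

    available: mapping of segment name -> list of interface IDs (step (2)).
    allocated: set of interface IDs already handed out; mutated in place.
    """
    for iface in available.get(segment, []):
        if iface not in allocated:
            allocated.add(iface)
            return iface
    return None  # no unallocated interface remains for this segment
```

Releasing an interface (setting its status back to “unallocated”, as described above) would correspond to removing its ID from the allocated set.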
- FIG. 6 conceptually illustrates a process 600 of some embodiments for allocating a secondary network interface for a pod with a primary network interface. In some embodiments, the process 600 is performed by an NCP.
- The process 600 begins by receiving (at 605) a pod. In some embodiments, receiving a pod means receiving at the NCP a notification that a pod has been created (e.g., by a kubelet).
- The process 600 determines (at 610) that the pod includes an identifier of a network attachment definition (ND). An ND designates a network segment to attach to a secondary network interface of the pod.
- Designating a network segment may include identifying, in the ND, a pre-created network segment of a logical network and/or providing attributes in the ND that allow an NCP to command a network manager or controller to dynamically create a network segment in the logical network.
- The NCP uses the ND identifier in the pod (e.g., in operation 620) to determine which ND designates the network segment to be attached to a secondary interface of the pod.
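The two designation modes, a pre-created segment identified by ID versus a dynamically created segment described by attributes, can be sketched as a simple dispatch on the ND's configuration. The `resolve_segment` helper below is hypothetical, though the “networkID” and “networkType” keys match the ND examples later in this document.

```python
# Illustrative decision sketch: an ND config with a non-empty "networkID"
# designates a pre-created segment of the logical network; otherwise,
# attributes such as "networkType" direct the logical network to create
# the segment dynamically. The helper name is invented.
def resolve_segment(nd_config):
    """Classify an ND config dict as using an existing segment or
    requesting creation of a new one."""
    if nd_config.get("networkID"):
        # pre-created segment: the logical network already knows this ID
        return ("use-existing", nd_config["networkID"])
    # dynamic creation: hand the attributes to the network manager/controller
    return ("create", nd_config.get("networkType"))
```

An ND carrying both an empty networkID and a networkType of “vlan” would thus request creation of a VLAN-backed segment.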
- Pod example:
metadata:
 name: my-pod
 namespace: my-namespace
 annotations:
  k8s.v1.cni.cncf.io/networks: ‘[ {
   “name”: “net-nsx”,
   “ips”: [“1.2.3.4/24”], # (optional) IP/prefix_length for the interface
   “mac”: “aa:bb:cc:dd:ee:ff” # (optional) MAC address for the interface
  } ]’
- In this example, the pod includes one ND identifier, indicating that the pod should have one secondary network interface.
- In some embodiments, pods may include multiple ND identifiers, indicating that the pods should have multiple secondary network interfaces attached to multiple network segments.
- The identified ND has an identifier called a name, in this example “net-nsx”.
- In other embodiments, the ND may have other designations, such as a number, code, or other type of identifier.
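Assuming the `k8s.v1.cni.cncf.io/networks` annotation holds a JSON list of entries, one per ND, a sketch of extracting the ND identifiers might look like this (the helper name is invented):

```python
# Hypothetical parse of the networks annotation shown in the pod example
# above: each entry names one ND, so a pod listing several entries gets
# several secondary interfaces.
import json

def nd_names_from_annotation(annotation_value):
    """Return the ND names referenced by the pod's networks annotation."""
    return [entry["name"] for entry in json.loads(annotation_value)]
```

A pod annotation listing two entries would yield two ND names, and hence two secondary interfaces under the method described above.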
- The process 600 creates (at 615) a secondary interface for the pod.
- The process 600 then connects (at 620) the secondary network interface created for the pod in operation 615 to the network segment designated by the ND identified in operation 610.
- The network segment in some embodiments may be a pre-created network segment. Pre-created network segments are created independently on the logical network without the use of an ND. When a user codes the corresponding ND, the user adds a network identifier, used by the logical network to identify the pre-created network segment, to the ND.
- The ND designates the network segment to be attached when a pod uses the ND identifier “net-nsx” (the identifier in the pod example above). This ND example, and the subsequent dynamically created network segment examples, include the name: net-nsx.
- The following example of an ND that designates a pre-created network segment includes an identifier of an existing, pre-created network segment:
- ND example 1:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
 name: net-nsx
spec:
 config: ‘{
  “cniVersion”: “0.3.0”,
  “type”: “nsx”, # NCP CNI plugin type
  “networkID”: “071c3745-f982-45ba-91b2-3f9c22af0240”, # ID of pre-created NSX-T Segment
  “ipam”: { # “ipam” is optional
   “subnet”: “192.168.0.0/24”, # required in “ipam”
   “rangeStart”: “192.168.0.2”, # optional, default value is the second IP of the subnet
   “rangeEnd”: “192.168.0.254”, # optional, default value is the penultimate IP of the subnet
   “gateway”: “192.168.0.1” # optional, default value is the first IP of the subnet
  }
 }’
- The networkID “071c3745-f982-45ba-91b2-3f9c22af0240” is an ID used by the logical network to identify a pre-created network segment of the logical network.
- The identified network segment was created (e.g., at the instruction of the user, using the logical network) without the ND and selected by the user (e.g., using the network ID placed in the ND when it was coded) to be used as the network segment for pods using that ND.
- The NDs of some embodiments with pre-created network segment IDs may also contain additional attributes that modify the pre-created network and/or the interface of the pod on the network segment.
- Alternatively, the process 600 in operation 620 may connect network segments that are dynamically created according to network attributes provided in an ND.
- These network attributes may merely identify the type of network (e.g., VLAN, overlay, MACVLAN, IPVLAN, ENS, etc.) to create or may include additional network attributes.
- The following are examples of NDs for creating a VLAN-backed network segment and an overlay-backed network segment.
- ND example 2:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: net-nsx
    spec:
      config: '{
        "cniVersion": "0.3.0",
        "type": "nsx",                     # NCP CNI plugin type
        "networkID": "",                   # ID of pre-created NSX-T segment
        "networkType": "vlan",
        "vlanID": 100,
        "ipam": {                          # "ipam" is optional
          "subnet": "192.168.0.0/24",      # required in "ipam"
          "rangeStart": "192.168.0.2",     # optional, default value is the secondary IP of subnet
          "rangeEnd": "192.168.0.254",     # optional, default value is the penultimate IP of subnet
          "gateway": "192.168.0.1"         # optional, default value is the first IP of subnet
        }
      }'

- ND example 3:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: net-nsx
    spec:
      config: '{
        "cniVersion": "0.3.0",
        "type": "nsx",                     # NCP CNI plugin type
        "networkID": "",                   # ID of pre-created NSX-T segment
        "networkType": "overlay",
        "gatewayID": "081c3745-d982-45bc-91c2-3f9c22af0249",  # ID of NSX-T gateway to which the created segment should be connected
        "ipam": {                          # "ipam" is optional
          "subnet": "192.168.0.0/24",      # required in "ipam"
          "rangeStart": "192.168.0.2",     # optional, default value is the secondary IP of subnet
          "rangeEnd": "192.168.0.254",     # optional, default value is the penultimate IP of subnet
          "gateway": "192.168.0.1"         # optional, default value is the first IP of subnet
        }
      }'
- In ND example 2, there is no networkID, as the ND CRD does not specify a pre-created network segment.
- Instead, the ND includes a network type (vlan) and a vlanID number (100).
- In ND example 3, the ND includes a network type (overlay) and an ID of a logical network gateway to which the created segment should be connected (081c3745-d982-45bc-91c2-3f9c22af0249).
- The illustrated embodiment of FIG. 6 handles both pre-created and dynamically created network segments.
- The dynamically created network segments are created by the NCP directing the logical network to create the network segments: (1) when the Kubernetes system is being brought online, (2) when the NDs are initially provided to the Kubernetes API (e.g., for NDs coded after the Kubernetes system is started), and/or (3) the first time the NCP receives a pod identifying a particular ND that designates a particular network segment.
- The NCP provides, to the logical network, default attributes for a dynamic network segment to supplement any attributes supplied by the ND. In some embodiments, these default attributes are supplied in one or more CRDs (that are not network attachment definition CRDs).
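Because the NCP can supply default attributes, an ND for a dynamically created segment can be quite sparse. The following is a hedged sketch of such a minimal ND; the name is an illustrative placeholder, and exactly which attributes may be omitted and defaulted depends on the embodiment:

```yaml
# Hypothetical minimal ND; omitted attributes (e.g., the "ipam" block)
# would be supplemented by default attributes that the NCP provides
# to the logical network, per the description above.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-nsx-minimal        # illustrative name
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "nsx",
    "networkType": "vlan"
  }'
```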
- A pod may have more than one secondary interface. Therefore, the process 600 determines (at 625) whether the ND identifier was the last ND identifier of the pod. If the ND identifier was not the last one in the pod, the process 600 loops back to operation 615. If the ND identifier was the last one in the pod, the process 600 ends.
- Software processes of some embodiments are specified as sets of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing units (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the operations indicated in the instructions.
- Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc.
- The computer-readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
- The term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor.
- Multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions.
- Multiple software inventions can also be implemented as separate programs.
- Any combination of separate programs that together implement a software invention described here is within the scope of the invention.
- The software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
- FIG. 7 conceptually illustrates a computer system 700 with which some embodiments of the invention are implemented.
- The computer system 700 can be used to implement any of the above-described hosts, controllers, gateway and edge forwarding elements. As such, it can be used to execute any of the above-described processes.
- This computer system 700 includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media.
- Computer system 700 includes a bus 705, processing unit(s) 710, a system memory 725, a read-only memory 730, a permanent storage device 735, input devices 740, and output devices 745.
- The bus 705 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 700.
- The bus 705 communicatively connects the processing unit(s) 710 with the read-only memory 730, the system memory 725, and the permanent storage device 735.
- The processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of the invention.
- The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
- The read-only memory (ROM) 730 stores static data and instructions that are needed by the processing unit(s) 710 and other modules of the computer system.
- The permanent storage device 735 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 700 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 735.
- The system memory 725 is a read-and-write memory device. However, unlike storage device 735, the system memory 725 is a volatile read-and-write memory, such as random access memory.
- The system memory 725 stores some of the instructions and data that the processor needs at runtime.
- The invention's processes are stored in the system memory 725, the permanent storage device 735, and/or the read-only memory 730. From these various memory units, the processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
- The bus 705 also connects to the input and output devices 740 and 745.
- The input devices 740 enable the user to communicate information and select commands to the computer system 700.
- The input devices 740 include alphanumeric keyboards and pointing devices (also called “cursor control devices”).
- The output devices 745 display images generated by the computer system 700.
- The output devices 745 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices 740 and 745.
- Bus 705 also couples computer system 700 to a network 765 through a network adapter (not shown).
- The computer 700 can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an intranet), or a network of networks (such as the Internet). Any or all components of computer system 700 may be used in conjunction with the invention.
- Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
- Such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid-state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks.
- The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
- Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- Some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
- The terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
- The terms “display” or “displaying” mean displaying on an electronic device.
- The terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
- Several embodiments described above deploy gateways in public cloud datacenters. In other embodiments, the gateways are deployed in a third-party's private cloud datacenters (e.g., datacenters that the third-party uses to deploy cloud gateways for different entities in order to deploy virtual networks for these entities).
Description
- Container networks (e.g., Kubernetes) are an increasingly popular type of network system for deploying applications in datacenters. The pods of containers produced by such a system can be deployed more rapidly than virtual machines (VMs) or physical computers. Therefore, a deployment can be scaled up or down to meet demand more rapidly than is typical for VMs or physical computers. In addition, a set of containers in a container network system has less overhead and can generally perform the same tasks faster than a corresponding VM would.
- In present container-based network systems (e.g., Kubernetes), pods are instantiated with an automatically configured primary interface for communicating with outside devices (e.g., physical or virtual machines or containers separate from the pod). However, existing container-based network systems do not have a convenient way of adding secondary interfaces to a pod, and for some container-network-based applications, multiple interfaces for a single pod are necessary. Therefore, there is a need in the art for an automated way to add secondary interfaces to a pod.
- The method of some embodiments allocates a secondary network interface for a pod, which has a primary network interface, in a container network operating on an underlying logical network. The method receives a network attachment definition (ND) that designates a network segment. The method receives the pod, wherein the pod includes an identifier of the ND. The method then creates a secondary network interface for the pod and connects the secondary network interface to the network segment. In some embodiments, the pods include multiple ND identifiers that each identify a network segment. The method of such embodiments creates multiple secondary network interfaces and attaches the multiple network segments to the multiple secondary network interfaces.
- Designating the network segment includes identifying a network segment created on the logical network before the ND is received in some embodiments. The method may further include directing the logical network to modify the network segment according to a set of attributes in the received ND.
- Designating the network segment includes providing a set of attributes of the network segment in some embodiments. The method of such embodiments further includes directing the logical network to create the network segment according to the received set of attributes. The set of attributes may include a network type, where the network type is a VLAN-backed network segment or an overlay-backed network segment.
- In some embodiments in which a pod includes multiple ND identifiers, for one set of NDs, each ND designates a network segment by identifying a network segment created on the logical network before the ND is received while for another set of NDs, each ND designates a network segment by providing a set of attributes of the network segment. The method of such embodiments further includes directing the logical network to create the second set of network segments according to the received set of attributes.
- The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, the Detailed Description, and the Drawings.
- The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
- FIG. 1 illustrates an example of a control system of some embodiments of the invention.
- FIG. 2 illustrates an example of a logical network for a virtual private cloud.
- FIG. 3 illustrates pods implemented on VMs of a host computer.
- FIG. 4 conceptually illustrates pods with interfaces to one or more network segments.
- FIG. 5 illustrates a communication sequence of some embodiments for adding a secondary interface to a pod.
- FIG. 6 conceptually illustrates a process of some embodiments for allocating a secondary network interface for a pod with a primary network interface.
- FIG. 7 conceptually illustrates a computer system with which some embodiments of the invention are implemented.
- In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
- The method of some embodiments allocates a secondary network interface for a pod, which has a primary network interface, in a container network operating on an underlying logical network. The method receives an ND that designates a network segment. The method receives the pod, wherein the pod includes an identifier of the ND. The method then creates a secondary network interface for the pod and connects the secondary network interface to the network segment. In some embodiments, the pods include multiple ND identifiers that each identify a network segment. The method of such embodiments creates multiple secondary network interfaces and attaches the multiple network segments to the multiple secondary network interfaces.
- Designating the network segment includes identifying a network segment created on the logical network before the ND is received in some embodiments. The method may further include directing the logical network to modify the network segment according to a set of attributes in the received ND.
- Designating the network segment includes providing a set of attributes of the network segment in some embodiments. The method of such embodiments further includes directing the logical network to create the network segment according to the received set of attributes. The set of attributes may include a network type, where the network type is a VLAN-backed network segment or an overlay-backed network segment.
- In some embodiments in which a pod includes multiple ND identifiers, for one set of NDs, each ND designates a network segment by identifying a network segment created on the logical network before the ND is received while for another set of NDs, each ND designates a network segment by providing a set of attributes of the network segment. The method of such embodiments further includes directing the logical network to create the second set of network segments according to the received set of attributes.
- Many of the embodiments described herein are described with relation to a Kubernetes system, sometimes abbreviated “Kubes” or “K8s.” However, one of ordinary skill in the art will understand that this is merely one example of a container network system that embodies the inventions described herein and that other embodiments may apply to other container network systems.
- In the Kubernetes system, a container in a container network is a lightweight executable image that contains software and all of its dependencies (e.g., libraries, etc.). Containers are executed in pods. A pod is the smallest deployable unit a user can create in a Kubernetes system. A pod may have one or more containers running in it. The containers of a pod may use shared storage and network resources. The pod includes a specification for how to run the containers. A pod's contents in some embodiments are always stored together and executed together. A pod provides an application-specific logical host. The logical host contains one or more application containers. One of the potential shared resources of a pod is a secondary interface.
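As an illustration of the shared storage and network resources described above, the following hedged sketch shows a pod whose two containers share one volume and the pod's network interfaces (all names and images are illustrative placeholders, not taken from the specification):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-resources-demo      # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: example/writer:latest   # illustrative image
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: example/reader:latest   # illustrative image
    volumeMounts:
    - name: shared-data
      mountPath: /data
# Both containers see the same /data contents and share the pod's network
# interfaces, including any secondary interface added to the pod.
```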
- In addition to the templates and code that are supplied by the original programmers of the Kubernetes system, the system allows a user to create customized resources. The network control system of some embodiments processes one or more Custom Resource Definitions (CRDs) that define attributes of custom-specified network resources. The CRDs define extensions to the Kubernetes networking requirements. Some embodiments use the following CRDs: network attachment definitions (NDs), Virtual Network Interface (VIF) CRDs, Virtual Network CRDs, Endpoint Group CRDs, security CRDs, Virtual Service Object (VSO) CRDs, and Load Balancer CRDs.
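A CRD registers a new resource kind with the Kubernetes API. As a hedged sketch of the mechanism only (the group, names, and fields below are illustrative placeholders, not the actual definitions used by the system described), a custom resource for a virtual network might be registered like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>; both parts are illustrative here.
  name: virtualnetworks.example.nsx.io
spec:
  group: example.nsx.io
  scope: Namespaced
  names:
    plural: virtualnetworks
    singular: virtualnetwork
    kind: VirtualNetwork
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              networkType:       # illustrative attribute
                type: string
```

Once such a CRD is applied, API calls can create and reference instances of the new kind, which the NCP can then interpret.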
- FIG. 1 illustrates an example of a control system 100 of some embodiments of the invention. This system 100 processes APIs that use the Kubernetes-based declarative model to describe the desired state of (1) the machines to deploy, and (2) the connectivity, security, and service operations that are to be performed for the deployed machines (e.g., private and public IP address connectivity, load balancing, security policies, etc.). To process these APIs, the control system 100 uses one or more CRDs to define some of the resources referenced in the APIs. The system 100 performs automated processes to deploy a logical network that connects the deployed machines and segregates these machines from other machines in the datacenter set. The machines are connected to the deployed logical network of a virtual private cloud (VPC) in some embodiments.
- As shown, the control system 100 includes an API processing cluster 105, a software-defined network (SDN) manager cluster 110, an SDN controller cluster 115, and compute managers and controllers 117. The API processing cluster 105 includes two or more API processing nodes 135, with each node comprising an API processing server 140, a Kubelet 142 node agent, and a network controller plugin (NCP) 145. The API processing server 140 receives intent-based API calls and parses these calls. In some embodiments, the received API calls are in a declarative, hierarchical Kubernetes format, and may contain multiple different requests.
- The API processing server 140 parses each received intent-based API request into one or more individual requests. When the requests relate to the deployment of machines, the API server provides these requests directly to the compute managers and controllers 117, or indirectly provides these requests to the compute managers and controllers 117 through the Kubelet 142 and/or the NCP 145 running on the Kubernetes master node 135. The compute managers and controllers 117 then deploy VMs and/or Pods on host computers in the availability zone.
- The kubelet 142 node agent on a node can register the node with the API server 140 using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider. The kubelet 142 receives PodSpecs, YAML (a data serialization language) or JavaScript Object Notation (JSON) formatted objects that each describe a pod. The kubelet 142 uses a set of PodSpecs to create (e.g., using the compute managers and controllers 117) the pods that are provided by various mechanisms (e.g., from the API server 140) and ensures that the containers described in those PodSpecs are running and healthy.
- The API calls can also include requests that require network elements to be deployed. In some embodiments, these requests explicitly identify the network elements to deploy, while in other embodiments the requests can also implicitly identify these network elements by requesting the deployment of compute constructs (e.g., compute clusters, containers, etc.) for which network elements have to be defined by default. As further described below, the
control system 100 uses the NCP 145 to identify the network elements that need to be deployed, and to direct the deployment of these network elements.
- In some embodiments, the API calls refer to extended resources that are not defined per se by the baseline Kubernetes system. For these references, the API processing server 140 uses one or more CRDs 120 to interpret the references in the API calls to the extended resources. As mentioned above, the CRDs in some embodiments include the ND, VIF, Virtual Network, Endpoint Group, Security Policy, Admin Policy, Load Balancer, and VSO CRDs. In some embodiments, the CRDs are provided to the API processing server 140 in one stream with the API calls.
- NCP 145 is the interface between the API server 140 and the SDN manager cluster 110 that manages the network elements that serve as the forwarding elements (e.g., switches, routers, bridges, etc.) and service elements (e.g., firewalls, load balancers, etc.) in an availability zone. The SDN manager cluster 110 directs the SDN controller cluster 115 to configure the network elements to implement the desired forwarding elements and/or service elements (e.g., logical forwarding elements and logical service elements) of one or more logical networks. As further described below, the SDN controller cluster 115 interacts with local controllers on host computers and edge gateways to configure the network elements in some embodiments.
- In some embodiments, NCP 145 registers for event notifications with the API server 140, e.g., sets up a long-pull session with the API server to receive all CRUD (Create, Read, Update, and Delete) events for various CRDs that are defined for networking. In some embodiments, the API server 140 is a Kubernetes master VM, and the NCP 145 runs in this VM as a Pod. NCP 145 in some embodiments collects realization data from the SDN resources for the CRDs and provides this realization data as it relates to the CRD status. In some embodiments, the NCP 145 communicates directly with the API server 140 and/or through the Kubelet 142.
- In some embodiments,
NCP 145 processes the parsed API requests relating to NDs, VIFs, virtual networks, load balancers, endpoint groups, security policies, and VSOs, to direct the SDN manager cluster 110 to implement (1) the NDs to designate network segments for use with secondary interfaces of pods, (2) the VIFs needed to connect VMs and Pods to forwarding elements on host computers, (3) the virtual networks to implement different segments of a logical network of the VPC, (4) the load balancers to distribute the traffic load to endpoint machines, (5) the firewalls to implement security and admin policies, and (6) the exposed ports to access services provided by a set of machines in the VPC to machines outside and inside of the VPC.
- The API server 140 provides the CRDs 120 that have been defined for these extended network constructs to the NCP 145 for it to process the APIs that refer to the corresponding network constructs (e.g., network segments). The API server 140 also provides configuration data from the configuration storage 125 to the NCP 145. The configuration data in some embodiments includes parameters that adjust the pre-defined template rules that the NCP 145 follows to perform its automated processes. In some embodiments, the configuration data includes a configuration map. The configuration map of some embodiments may be generated from one or more directories, files, or literal values. The configuration map (or “ConfigMap”) is discussed further with respect to the device plugin 144, below.
- The NCP 145 performs these automated processes to execute the received API requests in order to direct the SDN manager cluster 110 to deploy the network elements for the VPC. For a received API, the control system 100 performs one or more automated processes to identify and deploy one or more network elements that are used to implement the logical network for a VPC. The control system performs these automated processes without an administrator performing any action to direct the identification and deployment of the network elements after an API request is received.
- The
SDN managers 110 and controllers 115 can be any SDN managers and controllers available today. In some embodiments, these managers and controllers are the NSX-T managers and controllers licensed by VMware Inc. In such embodiments, NCP 145 detects network events by processing the data supplied by its corresponding API server 140, and uses NSX-T APIs to direct the NSX-T manager 110 to deploy and/or modify NSX-T network constructs needed to implement the network state expressed by the API calls. The communication between the NCP and NSX-T manager 110 is an asynchronous communication, in which the NCP provides the desired state to the NSX-T managers, which then relay the desired state to the NSX-T controllers to compute and disseminate the state asynchronously to the host computers, forwarding elements, and service nodes in the availability zone (i.e., to the SDDC set controlled by the controllers 115).
- After receiving the APIs from the NCPs 145, the SDN managers 110 in some embodiments direct the SDN controllers 115 to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers serve as the central control plane (CCP) of the control system 100.
- In some embodiments, a device plug-in 144 identifies resources available to the pods on a node based on a configuration map of the node. The configuration map in some embodiments is received from the API server 140. In some embodiments, the configuration map is generated from files in the configuration storage 125, from data received by the API server from the NCP, and/or from data generated by the SDN manager 110. In some embodiments, the device plug-in receives the configuration map directly from the API server 140. In other embodiments, the device plug-in receives the configuration map through the kubelet 142. The configuration map in some embodiments includes identifiers of pre-created network segments of the logical network.
- A network segment, sometimes called a logical switch, logical network segment, or transport zone, acts in a manner similar to a subnet, e.g., a layer 2 broadcast zone. Individual pods can interface with a network segment and communicate with other pods or devices configured to interface with the network segment. However, one of ordinary skill in the art will understand that a network segment (or logical switch) does not operate as a physical switch connecting devices that are both directly connected to the same switch, but for example as a VPN tunnel or VLAN, allowing pods or devices that are not directly connected to communicate as though they are all connected to a common switch.
- FIG. 2 illustrates an example of a logical network for a virtual private cloud. FIG. 2 depicts the SDN controllers 115, acting as the CCP, computing high-level configuration data (e.g., port configuration, policies, forwarding tables, service tables, etc.). In such capacity, the SDN controllers 115 push the high-level configuration data to the local control plane (LCP) agents 220 on host computers 205, LCP agents 225 on edge appliances 210, and TOR (top-of-rack) agents 230 of TOR switches 215. The CCP and LCPs configure managed physical forwarding elements (PFEs), e.g., switches, routers, bridges, etc., to implement logical forwarding elements (LFEs). A typical LFE spans multiple PFEs running on multiple physical devices (e.g., computers, etc.).
- Based on the received configuration data, the LCP agents 220 on the host computers 205 configure one or more software switches 250 and software routers 255 to implement distributed logical switches, routers, bridges, and/or service nodes (e.g., service VMs or hypervisor service engines) of one or more logical networks with the corresponding switches and routers on other host computers 205, edge appliances 210, and TOR switches 215. On the edge appliances, the LCP agents 225 configure packet processing stages 270 of these appliances to implement the logical switches, routers, bridges, and/or service nodes of one or more logical networks along with the corresponding switches and routers on other host computers 205, edge appliances 210, and TOR switches 215.
- For the TORs 215, the TOR agents 230 configure one or more configuration tables 275 of TOR switches 215 through an OVSdb server 240. The data in the configuration tables is then used to configure the hardware ASIC packet-processing pipelines 280 to perform the desired forwarding operations to implement the desired logical switching, routing, bridging, and service operations. U.S. patent application Ser. No. 14/836,802, filed Aug. 26, 2015, now issued as U.S. Pat. No. 10,554,484, U.S. patent application Ser. No. 15/342,921, filed Nov. 3, 2016, now issued as U.S. Pat. No. 10,250,553, U.S. patent application Ser. No. 14/815,839, filed Jul. 31, 2015, now issued as U.S. Pat. No. 9,847,938, and U.S. patent application Ser. No. 13/589,077, filed Aug. 17, 2012, now issued as U.S. Pat. No. 9,178,833 describe CCPs, LCPs, and TOR agents in more detail, and are incorporated herein by reference.
- After the
host computers 205 are configured along with the edge appliances 210 and/or TOR switches 215, they can implement one or more logical networks, with each logical network segregating the machines and network traffic of the entity for which it is deployed from the machines and network traffic of other entities in the same availability zone. FIG. 2 illustrates an example of a logical network 295 that defines a VPC for one entity, such as one corporation in a multi-tenant public datacenter, or one department of one corporation in a private datacenter. - As shown, the
logical network 295 includes multiple logical switches 284, with each logical switch connecting different sets of machines and serving as a different network segment. Each logical switch has a port 252 that connects with (i.e., is associated with) a virtual interface 265 of a machine 260. The machines 260 in some embodiments include VMs and Pods, with each Pod having one or more containers. The logical network 295 also includes a logical router 282 that connects the different network segments defined by the different logical switches 284. In some embodiments, the logical router 282 serves as a gateway for the deployed VPC in FIG. 2. -
FIG. 3 illustrates pods 365 implemented on a VM 360 of a host computer 205. The pods 365 are connected to a software forwarding element (SFE) 370. In some embodiments, the SFE 370 is a software switch, a software bridge, or software code that enables the pods to share the virtual network interface card (VNIC) 375 of the VM 360. The connection between the pods 365 and the SFE 370 is initiated by an NSX node agent 380 that performs the functions of an NCP (e.g., as part of a distributed NCP) on the VM 360. The SFE 370 in turn passes communications between the pods 365 and the VNIC 375. The VNIC 375 connects to the port 385 of the software switch 250 that is configured by the LCP 220. - The
LCP 220 acts as a local agent of a CCP and, in some embodiments, configures the software switch 250 to implement one or more network segments. As mentioned above, a network segment (or logical switch) allows multiple pods to communicate as though they were on a common switch, but the logical switch itself is implemented by multiple software switches 250 that operate on different host computers, VMs, etc. In some embodiments, a single software switch 250 may implement parts of multiple different network segments. - Pods of some embodiments may require multiple interfaces to provide multiple avenues of communication with different characteristics. For example, in some embodiments in which a pod implements part of a telecommunications application, the primary interface of the pod may connect to the main telecommunications network (e.g., to handle one or more of telecommunications control functions, voice data, etc.), while a secondary interface of the pod may provide a high-performance link for data traffic. Such a high-performance link may be used in some embodiments to connect to a Single Root I/O Virtualization (SR-IOV) system. In some embodiments, the pods are not limited to just the primary and one secondary interface, but may have an arbitrary number of interfaces, up to the capacity of the logical network to provide network segments.
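As an illustration of the arrangement just described, the following sketch models pods attached to one or more network segments, with one shared primary segment and a separate secondary segment for data traffic. This is a toy model for exposition only; all class, function, and segment names are hypothetical and are not part of the described embodiments.

```python
# Illustrative sketch only: a minimal model of pods and network segments.
# A pod holds a primary interface on a shared segment and may add secondary
# interfaces on further segments. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class NetworkSegment:
    name: str
    attached: list = field(default_factory=list)    # (pod name, interface) pairs

@dataclass
class Pod:
    name: str
    interfaces: dict = field(default_factory=dict)  # interface name -> segment name

    def attach(self, ifname: str, segment: NetworkSegment) -> None:
        self.interfaces[ifname] = segment.name
        segment.attached.append((self.name, ifname))

primary_seg = NetworkSegment("segment-420")   # shared primary segment
data_seg = NetworkSegment("segment-430")      # high-performance data segment

pod = Pod("pod-410")
pod.attach("eth0", primary_seg)   # primary interface
pod.attach("eth1", data_seg)      # secondary interface for data traffic

print(sorted(pod.interfaces))     # ['eth0', 'eth1']
```

The segment numerals echo FIG. 4 of the specification; in an actual deployment the attachment bookkeeping is performed by the network control plane, not by the pod objects themselves.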
-
FIG. 4 conceptually illustrates pods 405, 410, and 415 with different numbers of interfaces. Pod 405 is limited to a single interface 407, connecting to network segment 420. The network segment 420 is a logical construct provided by a software switch (not shown) that enables the pod 405 to communicate (e.g., through a VLAN or tunnel in some embodiments) with other pods that interface with the network segment 420, such as pod 410. Pod 410 may be implemented by the same VM as pod 405, by a different VM on the same host, by a VM on a different host, or even directly on a physical computer without a VM. Pod 410 also has a primary interface 412 that connects it to network segment 420. However, pod 410 also has secondary interfaces connecting pod 410 to further network segments. Pod 415 has primary interface 417 and secondary interface 418 connecting pod 415 to network segments such as network segment 430 or network segment 440. The logical router 282 connects the network segments 420-440. - Some embodiments provide a sequence for providing resources (including interfaces) to pods, using a device plugin to identify the resources for a kubelet creating the pods. Although the discussion below is limited to a list of network segments, in some embodiments the device plugin supplies lists of other devices in addition to network segments.
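The device plugin's role of matching the host's network interfaces against each segment's declared interface names can be sketched as follows. This is a hedged illustration, not the plugin's real API: the function names, the dictionary shapes, and the stand-in interface sets are all assumptions.

```python
# Hedged sketch (not the plugin's actual API): determine, per network segment,
# which of the segment's declared interface names are actually present on the
# host, by listing the host's network interfaces (e.g., from /sys/class/net).
import os

def host_interfaces(sysfs: str = "/sys/class/net") -> set:
    """Names of network interfaces present on this (Linux) host."""
    try:
        return set(os.listdir(sysfs))
    except FileNotFoundError:   # e.g., running on a non-Linux machine
        return set()

def available_interfaces(segments: dict, present: set) -> dict:
    """Map each segment to the subset of its declared interfaces that exist."""
    return {seg: [name for name in names if name in present]
            for seg, names in segments.items()}

# Example with a stand-in for host_interfaces():
segments = {"net-a": ["eth1", "eth2"], "net-b": ["eth3"]}
present = {"lo", "eth0", "eth1", "eth3"}
print(available_interfaces(segments, present))
# {'net-a': ['eth1'], 'net-b': ['eth3']}
```

In the sequence of FIG. 5, it is this per-segment availability result that the device plugin reports to the kubelet.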
FIG. 5 illustrates a communication sequence 500 of some embodiments for adding a secondary interface to a pod. The communication sequence includes several steps, numbered (1) to (7) in the diagram and the following description. The communication sequence 500 begins when the API server 140 (1) sends a list of network segments and, for each network segment, a list of interfaces of the segment to the device plugin 144. The device plugin 144 then determines which interfaces are available and (2) provides the list of available interfaces for each network segment to the kubelet 142. In some embodiments, the device plugin 144 determines the available interfaces of a network segment by retrieving an interface list from a specific file location (e.g., sys/class/net) and comparing this interface list with the interface names of the network segment. If an interface name from the interface list matches an interface name of the network segment, then the device plugin 144 identifies it as available; it is the list of such available interfaces that is sent to the kubelet 142 in step (2). - At some point after the
kubelet 142 receives the network segment and available interface lists, the API server (3) sends a pod definition to the kubelet 142 that the kubelet 142 will use to create a pod. The pod definition in some embodiments contains a name or other identifier of a secondary network segment to attach the pod to. In some embodiments, the pod includes an internal identifier of the secondary interface to identify the interface to containers of the pod. One of ordinary skill in the art will understand that this internal identifier is separate and generally distinct from the identifiers in the list of available interfaces identified by the device plugin. - The
kubelet 142, in some embodiments, then sends (4) a request for an interface ID of an unallocated interface of the network segment identified in the pod definition to the device plugin 144. The device plugin 144 then sends (5) an interface ID of an unallocated interface of the identified network segment to the kubelet 142. The device plugin 144 monitors the allocated interface IDs in the embodiment of FIG. 5; in other embodiments, however, the kubelet 142 or the NCP 145 monitors the allocated interface IDs. In some embodiments, when a pod is deleted, whichever element monitors the allocated interface IDs updates the status of the secondary interface(s) allocated to that pod to "unallocated." The NCP 145 queries (6) the kubelet 142 for any pods with secondary interfaces to be attached and receives (7) the interface ID from the kubelet 142. The NCP 145 then creates an interface for the pod and attaches the interface to the identified network segment. - Although the communications sequence of
FIG. 5 includes a particular set of messages sent in a particular order, in other embodiments different messages may be sent or the order may be different. For example, in some embodiments, rather than a device plugin tracking the allocated and unallocated interface IDs of a network segment, a kubelet or the NCP tracks which interfaces are allocated and unallocated. In some embodiments, the kubelet receives a pod definition with a network segment identified and creates the pod. Then an NCP determines that the pod includes a network segment identifier, creates a secondary network interface for the pod, and connects the secondary interface to the identified network segment. FIG. 6 conceptually illustrates a process 600 of some embodiments for allocating a secondary network interface for a pod with a primary network interface. In some embodiments, the process 600 is performed by an NCP. - The
process 600 begins by receiving (at 605) a pod. In some embodiments, receiving a pod means receiving at the NCP a notification that a pod has been created (e.g., by a kubelet). The process 600 determines (at 610) that the pod includes an identifier of a network attachment definition (ND). An ND designates a network segment to attach to a secondary network interface of the pod. In some embodiments, designating a network segment may include identifying, in the ND, a pre-created network segment of a logical network and/or providing attributes in the ND that allow an NCP to command a network manager or controller to dynamically create a network segment in the logical network. When the pod includes an identifier of an ND, the NCP uses that identifier (e.g., in operation 620) to determine which ND designates the network segment to be attached to a secondary interface of the pod. - This is an example of a pod definition that includes an identifier of an ND:
-
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "net-nsx",           # The name of the network attachment CRD
          "interface": "eth1",         # (optional) The name of the interface within the pod
          "ips": ["1.2.3.4/24"],       # (optional) IP/prefix_length for the interface
          "mac": "aa:bb:cc:dd:ee:ff"   # (optional) MAC address for the interface
        },
      ]
- In the above example (pod example 1), the pod includes one ND identifier, indicating that the pod should have one secondary network interface. However, in some embodiments, pods may include multiple ND identifiers, indicating that the pods should have multiple secondary network interfaces attached to multiple network segments. The identified ND has an identifier called a name, in this example "net-nsx". However, in some embodiments the ND may have other designations, such as a number, code, or other type of identifier. Some examples of NDs that might designate the secondary network segments to attach to the pod of the pod example are provided below.
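The annotation in pod example 1 is a JSON list of requested network attachments. A minimal sketch of extracting the ND identifiers from such a pod's metadata might read as follows; this is an assumed helper for illustration, not NCP source code, and only the JSON form of the annotation shown above is handled.

```python
# Sketch: parse the k8s.v1.cni.cncf.io/networks annotation of a pod's
# metadata and return the names of the network attachment definitions (NDs)
# that designate the pod's secondary network segments.
import json

NETWORKS_ANNOTATION = "k8s.v1.cni.cncf.io/networks"

def nd_identifiers(pod_metadata: dict) -> list:
    raw = pod_metadata.get("annotations", {}).get(NETWORKS_ANNOTATION)
    if not raw:
        return []                    # no secondary interfaces requested
    return [entry["name"] for entry in json.loads(raw)]

metadata = {
    "name": "my-pod",
    "namespace": "my-namespace",
    "annotations": {
        NETWORKS_ANNOTATION: '[{"name": "net-nsx", "interface": "eth1"}]'
    },
}
print(nd_identifiers(metadata))      # ['net-nsx']
```

A pod carrying several entries in the list would, per the description above, yield several ND names and therefore several secondary interfaces.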
- The
process 600 creates (at 615) a secondary interface for the pod. The process 600 then connects (at 620) the secondary network interface created in operation 615 to the network segment designated by the ND identified in operation 610. The network segment, in some embodiments, may be a pre-created network segment. Pre-created network segments are created independently on the logical network without the use of an ND. When a user codes the corresponding ND, the user adds a network identifier, used by the logical network to identify the pre-created network segment, to the ND. - Here is an example of an ND corresponding to the name net-nsx (the identifier in the pod example above). The ND designates the network segment to be attached when a pod uses the ND identifier "net-nsx". This ND example and the subsequent dynamically created network segment examples include the name net-nsx. However, unlike the dynamic network segment examples, this example of an ND that designates a pre-created network segment includes an identifier of an existing, pre-created network segment:
-
ND example 1:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-nsx
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "nsx",                                        # NCP CNI plugin type
    "networkID": "071c3745-f982-45ba-91b2-3f9c22af0240",  # ID of pre-created NSX-T segment
    "ipam": {                                             # "ipam" is optional
      "subnet": "192.168.0.0/24",     # required in "ipam"
      "rangeStart": "192.168.0.2",    # optional, default is the second IP of the subnet
      "rangeEnd": "192.168.0.254",    # optional, default is the penultimate IP of the subnet
      "gateway": "192.168.0.1"        # optional, default is the first IP of the subnet
    }
  }'
- In
ND example 1, the networkID "071c3745-f982-45ba-91b2-3f9c22af0240" is an ID used by the logical network to identify a pre-created network segment of the logical network. The identified network segment was created (e.g., at the instructions of the user, using the logical network) without the ND and selected by the user (e.g., using the network ID placed in the ND when it was coded) to be used as the network segment for pods using that ND. The NDs of some embodiments with pre-created network segment IDs may also contain additional attributes that modify the pre-created network and/or the interface of the pod on the network segment. - In some embodiments, in addition to or instead of connecting pre-created network segments to pods, the
process 600 in operation 620 may connect network segments that are dynamically created according to network attributes provided in an ND. In some embodiments, these network attributes may merely identify the type of network (e.g., VLAN, overlay, MACVLAN, IPVLAN, ENS, etc.) to create or may include additional network attributes. The following are examples of NDs for creating a VLAN-backed network segment and an overlay-backed network segment.
-
ND example 2:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-nsx
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "nsx",              # NCP CNI plugin type
    "networkID": "",            # empty: no pre-created NSX-T segment
    "networkType": "vlan",
    "vlanID": 100,
    "ipam": {                   # "ipam" is optional
      "subnet": "192.168.0.0/24",     # required in "ipam"
      "rangeStart": "192.168.0.2",    # optional, default is the second IP of the subnet
      "rangeEnd": "192.168.0.254",    # optional, default is the penultimate IP of the subnet
      "gateway": "192.168.0.1"        # optional, default is the first IP of the subnet
    }
  }'
ND example 3:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-nsx
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "nsx",              # NCP CNI plugin type
    "networkID": "",            # empty: no pre-created NSX-T segment
    "networkType": "overlay",
    "gatewayID": "081c3745-d982-45bc-91c2-3f9c22af0249",  # optional, ID of the NSX-T gateway to which the created segment should be connected
    "ipam": {                   # "ipam" is optional
      "subnet": "192.168.0.0/24",     # required in "ipam"
      "rangeStart": "192.168.0.2",    # optional, default is the second IP of the subnet
      "rangeEnd": "192.168.0.254",    # optional, default is the penultimate IP of the subnet
      "gateway": "192.168.0.1"        # optional, default is the first IP of the subnet
    }
  }'
- In ND example 2, there is no networkID because the ND CRD does not specify a pre-created network segment. Instead, the ND includes a network type (vlan) and a vlanID number (100). In ND example 3, the ND includes a network type (overlay) and an ID of a logical network gateway to which the created segment should be connected (081c3745-d982-45bc-91c2-3f9c22af0249).
- The illustrated embodiment of
FIG. 6 handles both pre-created and dynamically created network segments. In some embodiments, the dynamically created network segments are created by the NCP directing the logical network to create them: (1) when the Kubernetes system is being brought online, (2) when the NDs are initially provided to the Kubernetes API (e.g., for NDs coded after the Kubernetes system is started), and/or (3) the first time a pod identifying a particular ND that designates a particular network segment is received by the NCP. In some embodiments, the NCP provides, to the logical network, default attributes for a dynamic network segment to supplement any attributes supplied by the ND. In some embodiments, these default attributes are supplied in one or more CRDs (that are not network attachment definition CRDs). - As previously mentioned, in some embodiments, a pod may have more than one secondary interface. Therefore, the
process 600 determines (at 625) whether the ND identifier was the last ND identifier of the pod. If the ND identifier was not the last one in the pod, the process 600 loops back to operation 615. If the ND identifier was the last one in the pod, the process 600 ends. - Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
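The loop of process 600 (operations 610 through 625) can be sketched as follows. The network-manager calls here are hypothetical stubs, not the NCP's actual API; the sketch only illustrates the control flow of choosing a pre-created segment (non-empty networkID) versus requesting a dynamically created one, then creating and attaching a secondary interface per ND identifier.

```python
# Illustrative control flow for process 600. All helper names are assumed.
def resolve_segment(nd_config: dict, network) -> str:
    if nd_config.get("networkID"):                 # pre-created segment
        return nd_config["networkID"]
    # dynamic: ask the network manager to create a segment of the requested type
    return network.create_segment(nd_config.get("networkType", "overlay"),
                                  nd_config)

def process_pod(pod: dict, nd_store: dict, network) -> None:
    for nd_name in pod["nds"]:                          # operations 610/625 loop
        segment = resolve_segment(nd_store[nd_name], network)
        iface = network.create_interface(pod["name"])   # operation 615
        network.attach(iface, segment)                  # operation 620

class FakeNetwork:
    """Stand-in for the network manager/controller; records attachments."""
    def __init__(self):
        self.attachments = []
    def create_segment(self, kind, cfg):
        return f"seg-{kind}"
    def create_interface(self, pod):
        return f"{pod}-if{len(self.attachments)}"
    def attach(self, iface, seg):
        self.attachments.append((iface, seg))

net = FakeNetwork()
process_pod({"name": "my-pod", "nds": ["net-nsx"]},
            {"net-nsx": {"networkID": "", "networkType": "vlan"}}, net)
print(net.attachments)                 # [('my-pod-if0', 'seg-vlan')]
```

With a non-empty networkID in the ND configuration, the same loop would skip the dynamic-creation branch and attach the interface directly to the pre-created segment.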
- In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
-
FIG. 7 conceptually illustrates a computer system 700 with which some embodiments of the invention are implemented. The computer system 700 can be used to implement any of the above-described hosts, controllers, gateway and edge forwarding elements. As such, it can be used to execute any of the above-described processes. This computer system 700 includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media. Computer system 700 includes a bus 705, processing unit(s) 710, a system memory 725, a read-only memory 730, a permanent storage device 735, input devices 740, and output devices 745. - The
bus 705 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 700. For instance, the bus 705 communicatively connects the processing unit(s) 710 with the read-only memory 730, the system memory 725, and the permanent storage device 735. - From these various memory units, the processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 730 stores static data and instructions that are needed by the processing unit(s) 710 and other modules of the computer system. The
permanent storage device 735, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 700 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 735. - Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the
permanent storage device 735. Like the permanent storage device 735, the system memory 725 is a read-and-write memory device. However, unlike storage device 735, the system memory 725 is a volatile read-and-write memory, such as random access memory. The system memory 725 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 725, the permanent storage device 735, and/or the read-only memory 730. From these various memory units, the processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of some embodiments. - The
bus 705 also connects to the input and output devices 740 and 745. The input devices 740 enable the user to communicate information and select commands to the computer system 700. The input devices 740 include alphanumeric keyboards and pointing devices (also called "cursor control devices"). The output devices 745 display images generated by the computer system 700. The output devices 745 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices, such as touchscreens, that function as both input and output devices. - Finally, as shown in
FIG. 7, bus 705 also couples computer system 700 to a network 765 through a network adapter (not shown). In this manner, the computer 700 can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an Intranet), or a network of networks (such as the Internet). Any or all components of computer system 700 may be used in conjunction with the invention. - Some embodiments include electronic components, such as microprocessors, storage, and memory, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
- As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
- While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, several of the above-described embodiments deploy gateways in public cloud datacenters. However, in other embodiments, the gateways are deployed in a third-party's private cloud datacenters (e.g., datacenters that the third-party uses to deploy cloud gateways for different entities in order to deploy virtual networks for these entities). Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/102,700 US20230179484A1 (en) | 2021-06-11 | 2023-01-28 | Automatic configuring of vlan and overlay logical switches for container secondary interfaces |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2021099722 | 2021-06-11 | ||
CNPCT/CN2021/099722 | 2021-06-11 | ||
US17/389,305 US11606254B2 (en) | 2021-06-11 | 2021-07-29 | Automatic configuring of VLAN and overlay logical switches for container secondary interfaces |
US18/102,700 US20230179484A1 (en) | 2021-06-11 | 2023-01-28 | Automatic configuring of vlan and overlay logical switches for container secondary interfaces |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/389,305 Continuation US11606254B2 (en) | 2021-06-11 | 2021-07-29 | Automatic configuring of VLAN and overlay logical switches for container secondary interfaces |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230179484A1 true US20230179484A1 (en) | 2023-06-08 |
Family
ID=84390197
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/389,305 Active US11606254B2 (en) | 2021-06-11 | 2021-07-29 | Automatic configuring of VLAN and overlay logical switches for container secondary interfaces |
US18/102,700 Pending US20230179484A1 (en) | 2021-06-11 | 2023-01-28 | Automatic configuring of vlan and overlay logical switches for container secondary interfaces |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/389,305 Active US11606254B2 (en) | 2021-06-11 | 2021-07-29 | Automatic configuring of VLAN and overlay logical switches for container secondary interfaces |
Country Status (1)
Country | Link |
---|---|
US (2) | US11606254B2 (en) |
Family Cites Families (151)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040098154A1 (en) | 2000-10-04 | 2004-05-20 | Mccarthy Brendan | Method and apparatus for computer system engineering |
AU2004227600B2 (en) | 2003-04-09 | 2009-05-07 | Cisco Technology, Inc. | Selective diversion and injection of communication traffic |
US8146148B2 (en) | 2003-11-19 | 2012-03-27 | Cisco Technology, Inc. | Tunneled security groups |
US10180809B2 (en) | 2006-05-17 | 2019-01-15 | Richard Fetik | Secure application acceleration system, methods and apparatus |
CN101222772B (en) | 2008-01-23 | 2010-06-09 | 西安西电捷通无线网络通信有限公司 | Wireless multi-hop network authentication access method based on ID |
CA2734041A1 (en) | 2008-08-12 | 2010-02-18 | Ntt Docomo, Inc. | Communication control system, communication system and communication control method |
US8266477B2 (en) | 2009-01-09 | 2012-09-11 | Ca, Inc. | System and method for modifying execution of scripts for a job scheduler using deontic logic |
US8385332B2 (en) | 2009-01-12 | 2013-02-26 | Juniper Networks, Inc. | Network-based macro mobility in cellular networks using an extended routing protocol |
US8194680B1 (en) | 2009-03-11 | 2012-06-05 | Amazon Technologies, Inc. | Managing communications for modified computer networks |
US8514864B2 (en) | 2009-03-31 | 2013-08-20 | Verizon Patent And Licensing Inc. | System and method for providing network mobility |
US10200251B2 (en) | 2009-06-11 | 2019-02-05 | Talari Networks, Inc. | Methods and apparatus for accessing selectable application processing of data packets in an adaptive private network |
EP2583211B1 (en) | 2010-06-15 | 2020-04-15 | Oracle International Corporation | Virtual computing infrastructure |
US9258312B1 (en) | 2010-12-06 | 2016-02-09 | Amazon Technologies, Inc. | Distributed policy enforcement with verification mode |
US8793286B2 (en) | 2010-12-09 | 2014-07-29 | International Business Machines Corporation | Hierarchical multi-tenancy management of system resources in resource groups |
US8683560B1 (en) | 2010-12-29 | 2014-03-25 | Amazon Technologies, Inc. | Techniques for credential generation |
US10122735B1 (en) | 2011-01-17 | 2018-11-06 | Marvell Israel (M.I.S.L) Ltd. | Switch having dynamic bypass per flow |
US8479089B2 (en) | 2011-03-08 | 2013-07-02 | Certusoft, Inc. | Constructing and applying a constraint-choice-action matrix for decision making |
US8627442B2 (en) | 2011-05-24 | 2014-01-07 | International Business Machines Corporation | Hierarchical rule development and binding for web application server firewall |
FR2977050A1 (en) | 2011-06-24 | 2012-12-28 | France Telecom | METHOD OF DETECTING ATTACKS AND PROTECTION |
US8407323B2 (en) | 2011-07-12 | 2013-03-26 | At&T Intellectual Property I, L.P. | Network connectivity wizard to support automated creation of customized configurations for virtual private cloud computing networks |
US20130019314A1 (en) | 2011-07-14 | 2013-01-17 | International Business Machines Corporation | Interactive virtual patching using a web application server firewall |
EP2748978B1 (en) | 2011-11-15 | 2018-04-25 | Nicira Inc. | Migrating middlebox state for distributed middleboxes |
US8966085B2 (en) | 2012-01-04 | 2015-02-24 | International Business Machines Corporation | Policy-based scaling of computing resources in a networked computing environment |
US9152803B2 (en) | 2012-04-24 | 2015-10-06 | Oracle International Incorporated | Optimized policy matching and evaluation for hierarchical resources |
US9058219B2 (en) | 2012-11-02 | 2015-06-16 | Amazon Technologies, Inc. | Custom resources in a resource stack |
US9755900B2 (en) | 2013-03-11 | 2017-09-05 | Amazon Technologies, Inc. | Managing configuration updates |
CN105051749A (en) | 2013-03-15 | 2015-11-11 | 瑞典爱立信有限公司 | Policy based data protection |
US9225638B2 (en) | 2013-05-09 | 2015-12-29 | Vmware, Inc. | Method and system for service switching using service tags |
WO2015013685A1 (en) | 2013-07-25 | 2015-01-29 | Convida Wireless, Llc | End-to-end m2m service layer sessions |
US20190171650A1 (en) | 2017-12-01 | 2019-06-06 | Chavdar Botev | System and method to improve data synchronization and integration of heterogeneous databases distributed across enterprise and cloud using bi-directional transactional bus of asynchronous change data system |
CN105900518B (en) | 2013-08-27 | 2019-08-20 | 华为技术有限公司 | System and method for mobile network feature virtualization |
US9571603B2 (en) | 2013-09-17 | 2017-02-14 | Cisco Technology, Inc. | Redundancy network protocol system |
JP6026385B2 (en) | 2013-10-25 | 2016-11-16 | 株式会社日立製作所 | Attribute information providing method and attribute information providing system |
CA2936810C (en) | 2014-01-16 | 2018-03-06 | Arz MURR | Device, system and method of mobile identity verification |
US9634900B2 (en) | 2014-02-28 | 2017-04-25 | Futurewei Technologies, Inc. | Declarative approach to virtual network creation and operation |
US9225597B2 (en) | 2014-03-14 | 2015-12-29 | Nicira, Inc. | Managed gateways peering with external router to attract ingress packets |
US9590901B2 (en) | 2014-03-14 | 2017-03-07 | Nicira, Inc. | Route advertisement by managed gateways |
US20150317169A1 (en) | 2014-05-04 | 2015-11-05 | Midfin Systems Inc. | Constructing and operating high-performance unified compute infrastructure across geo-distributed datacenters |
US20150348044A1 (en) | 2014-05-30 | 2015-12-03 | Verizon Patent And Licensing Inc. | Secure credit card transactions based on a mobile device |
US20170085561A1 (en) | 2014-06-09 | 2017-03-23 | Beijing Stone Shield Technology Co., Ltd. | Key storage device and method for using same |
US9613218B2 (en) | 2014-06-30 | 2017-04-04 | Nicira, Inc. | Encryption system in a virtualized environment |
US9973380B1 (en) | 2014-07-10 | 2018-05-15 | Cisco Technology, Inc. | Datacenter workload deployment using cross-domain global service profiles and identifiers |
US20160080422A1 (en) | 2014-09-12 | 2016-03-17 | International Business Machines Corporation | Transforming business policies to information technology security control terms for improved system compliance |
US10257095B2 (en) | 2014-09-30 | 2019-04-09 | Nicira, Inc. | Dynamically adjusting load balancing |
US9531590B2 (en) | 2014-09-30 | 2016-12-27 | Nicira, Inc. | Load balancing across a group of load balancers |
US10516568B2 (en) | 2014-09-30 | 2019-12-24 | Nicira, Inc. | Controller driven reconfiguration of a multi-layered application or service model |
CN105682093A (en) | 2014-11-20 | 2016-06-15 | 中兴通讯股份有限公司 | Wireless network access method and access device, and client |
US10205701B1 (en) | 2014-12-16 | 2019-02-12 | Infoblox Inc. | Cloud network automation for IP address and DNS record management |
RU2671949C1 (en) | 2015-01-12 | 2018-11-08 | Telefonaktiebolaget LM Ericsson (Publ) | Methods and modules for managing packets in a software-defined network |
US9594546B1 (en) | 2015-01-30 | 2017-03-14 | EMC IP Holding Company LLC | Governed application deployment on trusted infrastructure |
US10530697B2 (en) | 2015-02-17 | 2020-01-07 | Futurewei Technologies, Inc. | Intent based network configuration |
US9674275B1 (en) | 2015-03-16 | 2017-06-06 | Amazon Technologies, Inc. | Providing a file system interface to network-accessible computing resources |
US9380027B1 (en) | 2015-03-30 | 2016-06-28 | Varmour Networks, Inc. | Conditional declarative policies |
US10609091B2 (en) | 2015-04-03 | 2020-03-31 | Nicira, Inc. | Method, apparatus, and system for implementing a content switch |
US10038628B2 (en) | 2015-04-04 | 2018-07-31 | Nicira, Inc. | Route server mode for dynamic routing between logical and physical networks |
US9971624B2 (en) | 2015-05-17 | 2018-05-15 | Nicira, Inc. | Logical processing for containers |
US10129097B2 (en) | 2015-06-02 | 2018-11-13 | ALTR Solutions, Inc. | GUI and high-level API wrapper for software defined networking and software defined access for controlling network routing and rules |
US10148493B1 (en) | 2015-06-08 | 2018-12-04 | Infoblox Inc. | API gateway for network policy and configuration management with public cloud |
US9787641B2 (en) | 2015-06-30 | 2017-10-10 | Nicira, Inc. | Firewall rule management |
US11204791B2 (en) | 2015-06-30 | 2021-12-21 | Nicira, Inc. | Dynamic virtual machine network policy for ingress optimization |
US20170033997A1 (en) | 2015-07-31 | 2017-02-02 | Vmware, Inc. | Binding Policies to Computing Resources |
US10051002B2 (en) | 2015-08-28 | 2018-08-14 | Nicira, Inc. | Distributed VPN gateway for processing remote device management attribute based rules |
US10075363B2 (en) | 2015-08-31 | 2018-09-11 | Nicira, Inc. | Authorization for advertised routes among logical routers |
US10129201B2 (en) | 2015-12-09 | 2018-11-13 | Bluedata Software, Inc. | Management of domain name systems in a large-scale processing environment |
US10613888B1 (en) | 2015-12-15 | 2020-04-07 | Amazon Technologies, Inc. | Custom placement policies for virtual machines |
US9864624B2 (en) | 2015-12-21 | 2018-01-09 | International Business Machines Corporation | Software-defined computing system remote support |
US10095669B1 (en) | 2015-12-22 | 2018-10-09 | Amazon Technologies, Inc. | Virtualized rendering |
US10237163B2 (en) | 2015-12-30 | 2019-03-19 | Juniper Networks, Inc. | Static route advertisement |
US10270796B1 (en) | 2016-03-25 | 2019-04-23 | EMC IP Holding Company LLC | Data protection analytics in cloud computing platform |
US10812452B2 (en) | 2016-04-01 | 2020-10-20 | Egnyte, Inc. | Methods for improving performance and security in a cloud computing system |
US10333849B2 (en) | 2016-04-28 | 2019-06-25 | Nicira, Inc. | Automatic configuration of logical routers on edge nodes |
US10193977B2 (en) | 2016-04-29 | 2019-01-29 | Huawei Technologies Co., Ltd. | System, device and process for dynamic tenant structure adjustment in a distributed resource management system |
US10496605B2 (en) | 2016-04-29 | 2019-12-03 | Splunk Inc. | Application deployment for data intake and query system |
US10263840B2 (en) | 2016-05-24 | 2019-04-16 | Microsoft Technology Licensing, Llc | Subnet stretching via layer three communications |
US10348556B2 (en) | 2016-06-02 | 2019-07-09 | Alibaba Group Holding Limited | Method and network infrastructure for a direct public traffic connection within a datacenter |
US10375121B2 (en) | 2016-06-23 | 2019-08-06 | Vmware, Inc. | Micro-segmentation in virtualized computing environments |
EP4199579B1 (en) | 2016-07-07 | 2024-06-05 | Huawei Technologies Co., Ltd. | Network resource management method, apparatus, and system |
US10333983B2 (en) | 2016-08-30 | 2019-06-25 | Nicira, Inc. | Policy definition and enforcement for a network virtualization platform |
US10484243B2 (en) | 2016-09-16 | 2019-11-19 | Oracle International Corporation | Application management for a multi-tenant identity cloud service |
US10489424B2 (en) | 2016-09-26 | 2019-11-26 | Amazon Technologies, Inc. | Different hierarchies of resource data objects for managing system resources |
US10469359B2 (en) | 2016-11-03 | 2019-11-05 | Futurewei Technologies, Inc. | Global resource orchestration system for network function virtualization |
US10320749B2 (en) | 2016-11-07 | 2019-06-11 | Nicira, Inc. | Firewall rule creation in a virtualized computing environment |
US20180167487A1 (en) | 2016-12-13 | 2018-06-14 | Red Hat, Inc. | Container deployment scheduling with constant time rejection request filtering |
EP3361675B1 (en) | 2016-12-14 | 2019-05-08 | Huawei Technologies Co., Ltd. | Distributed load balancing system, health check method and service node |
US10873565B2 (en) | 2016-12-22 | 2020-12-22 | Nicira, Inc. | Micro-segmentation of virtual computing elements |
CN106789367A (en) | 2017-02-23 | 2017-05-31 | 郑州云海信息技术有限公司 | The construction method and device of a kind of network system |
US10554607B2 (en) | 2017-02-24 | 2020-02-04 | Telefonaktiebolaget Lm Ericsson (Publ) | Heterogeneous cloud controller |
US10791089B2 (en) | 2017-03-29 | 2020-09-29 | Hewlett Packard Enterprise Development Lp | Converged address translation |
US10698714B2 (en) | 2017-04-07 | 2020-06-30 | Nicira, Inc. | Application/context-based management of virtual networks using customizable workflows |
EP3635540A4 (en) | 2017-04-25 | 2021-02-24 | Intento, Inc. | Intent-based organisation of apis |
US10693704B2 (en) | 2017-05-10 | 2020-06-23 | B.yond, Inc. | Dynamic allocation of service components of information service in hierarchical telecommunication architecture |
CA3066459C (en) | 2017-06-13 | 2023-10-17 | Equinix, Inc. | Service peering exchange |
US10911397B2 (en) | 2017-07-31 | 2021-02-02 | Nicira, Inc. | Agent for implementing layer 2 communication on layer 3 underlay network |
US11194753B2 (en) | 2017-09-01 | 2021-12-07 | Intel Corporation | Platform interface layer and protocol for accelerators |
US10360025B2 (en) | 2017-09-08 | 2019-07-23 | Accenture Global Solutions Limited | Infrastructure instantiation, collaboration, and validation architecture for serverless execution frameworks |
JP7196164B2 (en) | 2017-09-30 | 2022-12-26 | オラクル・インターナショナル・コーポレイション | Bindings from backend service endpoints to API functions in the API Registry |
US11005684B2 (en) | 2017-10-02 | 2021-05-11 | Vmware, Inc. | Creating virtual networks spanning multiple public clouds |
CN107947961B (en) | 2017-10-17 | 2021-07-30 | 上海数讯信息技术有限公司 | SDN-based Kubernetes network management system and method |
US10805181B2 (en) | 2017-10-29 | 2020-10-13 | Nicira, Inc. | Service operation chaining |
US10708229B2 (en) | 2017-11-15 | 2020-07-07 | Nicira, Inc. | Packet induced revalidation of connection tracker |
US11012420B2 (en) | 2017-11-15 | 2021-05-18 | Nicira, Inc. | Third-party service chaining using packet encapsulation in a flow-based forwarding element |
US10757077B2 (en) | 2017-11-15 | 2020-08-25 | Nicira, Inc. | Stateful connection policy filtering |
US20190229987A1 (en) | 2018-01-24 | 2019-07-25 | Nicira, Inc. | Methods and apparatus to deploy virtual networking in a data center |
US10797910B2 (en) | 2018-01-26 | 2020-10-06 | Nicira, Inc. | Specifying and utilizing paths through a network |
US10659252B2 (en) | 2018-01-26 | 2020-05-19 | Nicira, Inc. | Specifying and utilizing paths through a network |
US10454824B2 (en) | 2018-03-01 | 2019-10-22 | Nicira, Inc. | Generic communication channel for information exchange between a hypervisor and a virtual machine |
US10728174B2 (en) | 2018-03-27 | 2020-07-28 | Nicira, Inc. | Incorporating layer 2 service between two interfaces of gateway device |
US10805192B2 (en) | 2018-03-27 | 2020-10-13 | Nicira, Inc. | Detecting failure of layer 2 service using broadcast messages |
US10951661B1 (en) | 2018-05-08 | 2021-03-16 | Amazon Technologies, Inc. | Secure programming interface hierarchies |
US10841336B2 (en) | 2018-05-21 | 2020-11-17 | International Business Machines Corporation | Selectively providing mutual transport layer security using alternative server names |
CN108809722B (en) | 2018-06-13 | 2022-03-22 | 郑州云海信息技术有限公司 | Method, device and storage medium for deploying Kubernetes cluster |
US10795909B1 (en) | 2018-06-14 | 2020-10-06 | Palantir Technologies Inc. | Minimized and collapsed resource dependency path |
US10942788B2 (en) | 2018-06-15 | 2021-03-09 | Vmware, Inc. | Policy constraint framework for an sddc |
US10812337B2 (en) | 2018-06-15 | 2020-10-20 | Vmware, Inc. | Hierarchical API for a SDDC |
US11086700B2 (en) | 2018-08-24 | 2021-08-10 | Vmware, Inc. | Template driven approach to deploy a multi-segmented application in an SDDC |
US10628144B2 (en) | 2018-08-24 | 2020-04-21 | Vmware, Inc. | Hierarchical API for defining a multi-segmented application in an SDDC |
CA3107455A1 (en) | 2018-08-24 | 2020-02-27 | Vmware, Inc. | Hierarchical api for defining a multi-segmented application in an sddc |
US10728145B2 (en) | 2018-08-30 | 2020-07-28 | Juniper Networks, Inc. | Multiple virtual network interface support for virtual execution elements |
US11595250B2 (en) | 2018-09-02 | 2023-02-28 | Vmware, Inc. | Service insertion at logical network gateway |
US10944673B2 (en) | 2018-09-02 | 2021-03-09 | Vmware, Inc. | Redirection of data messages at logical network gateway |
US11074091B1 (en) | 2018-09-27 | 2021-07-27 | Juniper Networks, Inc. | Deployment of microservices-based network controller |
US11159366B1 (en) | 2018-09-28 | 2021-10-26 | Juniper Networks, Inc. | Service chaining for virtual execution elements |
US11316822B1 (en) | 2018-09-28 | 2022-04-26 | Juniper Networks, Inc. | Allocating external IP addresses from isolated pools |
US11153203B2 (en) | 2018-10-05 | 2021-10-19 | Sandvine Corporation | System and method for adaptive traffic path management |
US11175964B2 (en) | 2019-02-01 | 2021-11-16 | Virtustream Ip Holding Company Llc | Partner enablement services for managed service automation |
US11307967B2 (en) | 2019-02-04 | 2022-04-19 | Oracle International Corporation | Test orchestration platform |
US10972386B2 (en) | 2019-03-29 | 2021-04-06 | Juniper Networks, Inc. | Scalable multi-tenant underlay network supporting multi-tenant overlay network |
US10841226B2 (en) | 2019-03-29 | 2020-11-17 | Juniper Networks, Inc. | Configuring service load balancers with specified backend virtual networks |
US11095504B2 (en) | 2019-04-26 | 2021-08-17 | Juniper Networks, Inc. | Initializing network device and server configurations in a data center |
US11290493B2 (en) | 2019-05-31 | 2022-03-29 | Varmour Networks, Inc. | Template-driven intent-based security |
US11194632B2 (en) | 2019-06-18 | 2021-12-07 | Nutanix, Inc. | Deploying microservices into virtualized computing systems |
US11003423B2 (en) | 2019-06-28 | 2021-05-11 | Atlassian Pty Ltd. | System and method for autowiring of a microservice architecture |
US11416342B2 (en) | 2019-07-03 | 2022-08-16 | EMC IP Holding Company LLC | Automatically configuring boot sequence of container systems for disaster recovery |
CN110531987A (en) | 2019-07-30 | 2019-12-03 | 平安科技(深圳)有限公司 | Management method, device and computer readable storage medium based on Kubernetes cluster |
CN110611588B (en) | 2019-09-02 | 2022-04-29 | 深信服科技股份有限公司 | Network creation method, server, computer readable storage medium and system |
US10708368B1 (en) | 2019-10-30 | 2020-07-07 | Verizon Patent And Licensing Inc. | System and methods for generating a slice deployment description for a network slice instance |
US11347806B2 (en) | 2019-12-30 | 2022-05-31 | Servicenow, Inc. | Discovery of containerized platform and orchestration services |
US10944691B1 (en) | 2020-01-15 | 2021-03-09 | Vmware, Inc. | Container-based network policy configuration in software-defined networking (SDN) environments |
CN113141386B (en) | 2020-01-19 | 2023-01-06 | 北京百度网讯科技有限公司 | Kubernetes cluster access method, device, equipment and medium in private network |
US11153279B2 (en) | 2020-01-30 | 2021-10-19 | Hewlett Packard Enterprise Development Lp | Locally representing a remote application programming interface (API) endpoint within an application platform |
CN111327640B (en) * | 2020-03-24 | 2022-02-18 | 广西梯度科技有限公司 | Method for setting IPv6 for Pod in Kubernetes |
CN111371627B (en) * | 2020-03-24 | 2022-05-10 | 广西梯度科技有限公司 | Method for setting multiple IPs (Internet protocol) in Kubernetes through Pod |
CN115380514B (en) | 2020-04-01 | 2024-03-01 | 威睿有限责任公司 | Automatic deployment of network elements for heterogeneous computing elements |
US11194483B1 (en) | 2020-06-05 | 2021-12-07 | Vmware, Inc. | Enriching a storage provider with container orchestrator metadata in a virtualized computing system |
US11625256B2 (en) | 2020-06-22 | 2023-04-11 | Hewlett Packard Enterprise Development Lp | Container-as-a-service (CAAS) controller for selecting a bare-metal machine of a private cloud for a cluster of a managed container service |
US11803408B2 (en) | 2020-07-29 | 2023-10-31 | Vmware, Inc. | Distributed network plugin agents for container networking |
US11863352B2 (en) | 2020-07-30 | 2024-01-02 | Vmware, Inc. | Hierarchical networking for nested container clusters |
US11522951B2 (en) | 2020-08-28 | 2022-12-06 | Microsoft Technology Licensing, Llc | Configuring service mesh networking resources for dynamically discovered peers or network functions |
US20220182439A1 (en) | 2020-12-04 | 2022-06-09 | Vmware, Inc. | General grouping mechanism for endpoints |
US11190491B1 (en) | 2020-12-31 | 2021-11-30 | Netflow, UAB | Method and apparatus for maintaining a resilient VPN connection |
US11743182B2 (en) * | 2021-03-01 | 2023-08-29 | Juniper Networks, Inc. | Container networking interface for multiple types of interfaces |
2021
- 2021-07-29 US US17/389,305 patent/US11606254B2/en active Active
2023
- 2023-01-28 US US18/102,700 patent/US20230179484A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5515373A (en) * | 1994-01-11 | 1996-05-07 | Apple Computer, Inc. | Telecommunications interface for unified handling of varied analog-derived and digital data streams |
EP3617880A1 (en) * | 2018-08-30 | 2020-03-04 | Juniper Networks, Inc. | Multiple networks for virtual execution elements |
US20210328858A1 (en) * | 2020-04-16 | 2021-10-21 | Ribbon Communications Operating Company, Inc. | Communications methods and apparatus for migrating a network interface and/or ip address from one pod to another pod in a kubernetes system |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12058102B2 (en) | 2020-04-01 | 2024-08-06 | VMware LLC | Virtual load-balanced service object |
US11902245B2 (en) | 2022-01-14 | 2024-02-13 | VMware LLC | Per-namespace IP address management method for container networks |
US11848910B1 (en) | 2022-11-11 | 2023-12-19 | Vmware, Inc. | Assigning stateful pods fixed IP addresses depending on unique pod identity |
US11831511B1 (en) | 2023-01-17 | 2023-11-28 | Vmware, Inc. | Enforcing network policies in heterogeneous systems |
US12101244B1 (en) | 2023-06-12 | 2024-09-24 | VMware LLC | Layer 7 network security for container workloads |
Also Published As
Publication number | Publication date |
---|---|
US11606254B2 (en) | 2023-03-14 |
US20220400053A1 (en) | 2022-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11606254B2 (en) | Automatic configuring of VLAN and overlay logical switches for container secondary interfaces | |
US11863352B2 (en) | Hierarchical networking for nested container clusters | |
US11757940B2 (en) | Firewall rules for application connectivity | |
US10608993B2 (en) | Firewall rule management | |
US10481933B2 (en) | Enabling virtual machines access to switches configured by different management entities | |
US10567482B2 (en) | Accessing endpoints in logical networks and public cloud service providers native networks using a single network interface and a single routing table | |
US10148696B2 (en) | Service rule console for creating, viewing and updating template based service rules | |
US20210136140A1 (en) | Using service containers to implement service chains | |
US20220321495A1 (en) | Efficient trouble shooting on container network by correlating kubernetes resources and underlying resources | |
US10469450B2 (en) | Creating and distributing template based service rules | |
EP3373560B1 (en) | Network control system for configuring middleboxes | |
US11902245B2 (en) | Per-namespace IP address management method for container networks | |
US20170180319A1 (en) | Datapath processing of service rules with qualifiers defined in terms of template identifiers and/or template matching criteria | |
US20190230064A1 (en) | Remote session based micro-segmentation | |
US20230094120A1 (en) | Runtime customization of nodes for network function deployment | |
US11113085B2 (en) | Virtual network abstraction | |
CN116057909A (en) | Routing advertisement supporting distributed gateway services architecture | |
WO2023133797A1 (en) | Per-namespace ip address management method for container networks | |
US20230300002A1 (en) | Mapping vlan of container network to logical network in hypervisor to support flexible ipam and routing container traffic | |
US11848910B1 (en) | Assigning stateful pods fixed IP addresses depending on unique pod identity | |
US20240380810A1 (en) | Fast provisioning of machines using network cloning |
Legal Events
Code | Title | Description |
---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
AS | Assignment | Owner name: VMWARE LLC, CALIFORNIA; Free format text: CHANGE OF NAME;ASSIGNOR:VMWARE, INC.;REEL/FRAME:066692/0103; Effective date: 20231121 |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |