US20210377162A1 - Malleable routing for data packets - Google Patents
Malleable routing for data packets
- Publication number
- US20210377162A1 (application US 17/360,283)
- Authority
- US
- United States
- Prior art keywords
- implementations
- data packets
- routing
- network
- network nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/302—Route determination based on requested QoS
- H04L45/306—Route determination based on the nature of the carried application
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
- H04L45/123—Evaluation of link metrics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/42—Centralised routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/44—Distributed routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/50—Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
Definitions
- the present disclosure relates generally to routing, and in particular, to malleable routing for data packets.
- data packets are routed based on a fixed rule that aims to optimize a specific metric. For example, in some previously available networks, data packets are routed based on a shortest path algorithm.
- fixed rules are sometimes unsuitable for certain types of data packets. For example, not all data packets need to be routed based on the shortest path algorithm. As an example, some data packets may need to be routed via network nodes that support a heightened level of encryption even if such network nodes are not on the shortest path.
- FIGS. 1A-1F are schematic diagrams of a network environment that allows malleable routing for data packets in accordance with some implementations.
- FIG. 2A is a flowchart representation of a method of configuring network nodes in a network in accordance with some implementations.
- FIG. 2B is a flowchart representation of a method of propagating data packets in accordance with some implementations.
- FIG. 3 is a block diagram of a device enabled with various modules that are provided to configure network nodes and propagate data packets in accordance with some implementations.
- a method of routing a type of data packets is performed by a device.
- the device includes a non-transitory memory and one or more processors coupled with the non-transitory memory.
- the method includes determining a routing criterion to transmit a set of data packets across a network.
- the method includes identifying network nodes and communication links in the network that satisfy the routing criterion.
- the method includes determining a route for the set of data packets through the network nodes and the communication links that satisfy the routing criterion.
- the method includes configuring the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route.
- Some networks treat various types of data packets equally. For example, some networks route different types of data packets according to the same routing criterion (e.g., routing algorithm). Many networks primarily utilize a shortest route criterion (e.g., the shortest path algorithm) to route data packets. In such networks, data packets corresponding to a video download are routed according to the same criterion as data packets corresponding to a phone call. This rigid approach of routing data packets according to a fixed criterion is sometimes unsuitable for certain types of data packets. For example, while the shortest route criterion may be suitable for data packets corresponding to a phone call, the shortest route criterion may be unsuitable for data packets corresponding to an encrypted file transfer.
- the present disclosure provides methods, systems and/or devices that enable malleable routing for data packets.
- different routing criteria are utilized to transport different types of data packets.
- different routing criteria include different routing algorithms or different routing schemes. This flexible approach of utilizing different routing criteria for different types of data packets tends to result in routes that are more suitable for the type of data packets. For example, in some implementations, the shortest route criterion is utilized to route data packets corresponding to a phone call but a different routing criterion is utilized to route data packets corresponding to an encrypted file transfer.
- the present disclosure provides more configuration options to network operators by allowing the network operators to support different routing criteria.
- the present disclosure enables a network operator to support an existing routing criterion, support a modified version of an existing routing criterion, and/or create a new routing criterion.
- the present disclosure also provides more configuration options for individual network nodes.
- the present disclosure enables a network node to install configuration information for an existing routing criterion, a modified version of an existing routing criterion, and/or a new routing criterion.
- FIGS. 1A-1F are schematic diagrams of a network environment 10 that allows malleable routing for data packets in accordance with some implementations. While certain specific features are illustrated, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, the network environment 10 includes client devices A and B, various network nodes N 1 , N 2 . . . N 9 , various communication links L 1 , L 2 . . . L 19 , and a network controller 20 . Although the network controller 20 is shown as being separate from the network nodes N 1 , N 2 . . . N 9 , in some implementations, the network controller 20 is implemented by one or more of the network nodes N 1 , N 2 . . . N 9 . In other words, in some implementations, the network controller 20 is distributed across one or more of the network nodes N 1 , N 2 . . . N 9 .
- the network nodes N 1 . . . N 9 support one or more routing criteria.
- the network nodes N 1 . . . N 9 support one or more routing algorithms. In the example of FIG. 1B :
- the network node N 1 supports algorithm (Alg) x
- the network node N 2 supports Algs x and y
- the network node N 3 supports Algs x and z
- the network node N 4 supports Algs x and y
- the network node N 5 supports Algs x, y and z
- the network node N 6 supports Algs y and z
- the network node N 7 supports Alg z
- the network node N 8 supports Algs x and z
- the network node N 9 supports Algs x and z.
- the network nodes N 1 . . . N 9 are configured to route data packets in accordance with the routing criteria (e.g., the routing algorithms) that the network nodes N 1 . . . N 9 support.
- the network node N 1 is configured to route data packets in accordance with Alg x.
- data packets that are to be routed in accordance with a particular routing criterion are propagated along a route that includes network nodes that support that particular routing criterion.
- data packets that are to be routed in accordance with Alg x may be transmitted along a route that includes network node N 1 or network node N 4 .
- data packets that are to be routed in accordance with Alg y are transmitted along a route that includes network node N 4 , and not network node N 1 .
- some network nodes indicate (e.g., advertise, broadcast and/or publish) the routing criteria that the network nodes support.
- the network nodes indicate their support for a particular routing criterion via a router capability (RC).
- the network nodes are associated with a respective set of router capabilities.
- a network node utilizes a first router capability to indicate the definition of a routing criterion, a second router capability to indicate whether or not the network node supports the routing criterion, and a third router capability to indicate a segment identifier (SID) that is associated with the routing criterion.
- a router capability of network node N 1 indicates that the SID for Alg x is 16,001.
- a router capability of network node N 2 indicates that the SID for Alg x is 16,002 and the SID for Alg y is 16,102.
- a router capability of network node N 5 indicates that the SID for Alg x is 16,005, the SID for Alg y is 16,105 and the SID for Alg z is 16,205.
- a router capability of network node N 9 indicates that the SID for Alg x is 16,009 and the SID for Alg z is 16,209.
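The per-node SID advertisements above can be sketched as a simple lookup table. This is an illustrative model of the advertised router capabilities, not a structure defined in the patent; the algorithm names and SID values follow FIG. 1C.

```python
# Hypothetical in-memory view of the router-capability advertisements:
# each advertising node maps a routing algorithm to the segment identifier
# (SID) it advertises for that algorithm.
SID_ADVERTISEMENTS = {
    "N1": {"x": 16001},
    "N2": {"x": 16002, "y": 16102},
    "N5": {"x": 16005, "y": 16105, "z": 16205},
    "N9": {"x": 16009, "z": 16209},
}

def nodes_supporting(alg):
    """Return the advertising nodes that indicate support for an algorithm."""
    return sorted(n for n, sids in SID_ADVERTISEMENTS.items() if alg in sids)

# Among the advertisers above, only N5 and N9 advertise an SID for Alg z:
print(nodes_supporting("z"))  # ['N5', 'N9']
```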
- some network nodes do not advertise the routing criteria that the network nodes support. In some implementations, some network nodes are not associated with SIDs. For example, in FIG. 1C , network nodes N 3 , N 4 , N 7 and N 8 are not associated with any SIDs. In some implementations, some network nodes have SIDs for some routing criteria but not for others. For example, in FIG. 1C , the network node N 6 has an SID for Alg y but the network node N 6 does not have an SID for Alg z.
- the network nodes N 1 . . . N 9 install configuration information that allows the network nodes N 1 . . . N 9 to direct (e.g., propagate or transmit) data packets in accordance with different routing criteria.
- the configuration information includes forwarding entries that indicate downstream network nodes that support the routing criterion being used to transmit data packets.
- the forwarding entries include a mapping of SIDs to downstream network nodes that are associated with the SIDs.
- the network node N 1 includes forwarding entries that map SIDs 16,002, 16,005 and 16,009 to the network node N 2 .
- the network node N 1 transmits any data packets that are labeled with SID 16,002, 16,005 or 16,009 to the network node N 2 .
- the network node N 2 includes forwarding entries that map SID 16,009 to network node N 3 , and SID 16,106 to network node N 6 .
- the network node N 2 transmits any data packets that are labeled with SID 16,009 to network node N 3 , and any data packets that are labeled with SID 16,106 to network node N 6 .
- the network node N 5 includes forwarding entries that map SID 16,009 to network node N 9 . As such, the network node N 5 transmits any data packets that are labeled with SID 16,009 to network node N 9 .
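The SID-to-downstream-node forwarding entries can be sketched as nested lookup tables. The structure below is an assumed representation, not the patent's; the entries shown are those consistent with the FIG. 1E route N 1 → N 2 → N 5 → N 9 for SID 16,009, plus N 1 's additional entries described above.

```python
# Hypothetical forwarding tables: node -> {SID -> downstream node}.
FORWARDING = {
    "N1": {16002: "N2", 16005: "N2", 16009: "N2"},
    "N2": {16009: "N5"},
    "N5": {16009: "N9"},
}

def next_hop(node, sid):
    """Look up the downstream node installed for the given SID, if any."""
    return FORWARDING.get(node, {}).get(sid)

print(next_hop("N1", 16009))  # N2
print(next_hop("N5", 16009))  # N9
print(next_hop("N9", 16009))  # None -- no entry: N9 is the egress here
```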
- the forwarding entries are installed at the network nodes in response to the network nodes being on a route that is being used to transport data packets.
- the forwarding entries are installed at the network nodes by a controller (e.g., the network controller 20 shown in FIG. 1A ).
- the controller pushes the forwarding entries to the network nodes after determining that the network nodes are on a selected route for transporting data packets.
- the forwarding entries are installed on the network nodes that are on the route, and not on the network nodes that are not on the route. More generally, in various implementations, network nodes that are on a route for transporting a set of data packets are configured with configuration information.
- the configuration information is updated when data packets are transmitted using a different routing criterion. For example, different forwarding entries are installed when data packets are transmitted using different routing algorithms.
- Updating the configuration information (e.g., the forwarding entries) allows more flexibility in transporting data packets using different routing criteria (e.g., different routing algorithms). For example, with reference to FIG. 1D , updating the forwarding entries enables transporting a first set of data packets in accordance with Alg x, a second set of data packets in accordance with Alg y, and a third set of data packets in accordance with Alg z.
- the network obtains a request to transmit a set of data packets 100 from client device A to client device B.
- the set of data packets are labeled with an MPLS (multiprotocol label switching) label of 16,009.
- the MPLS label of 16,009 indicates that the set of data packets are to be transmitted in accordance with Alg x because 16,009 is the SID for Alg x at network node N 9 .
- the set of data packets 100 indicates a routing criterion that is to be used to transport the set of data packets 100 .
- the MPLS label is applied to the set of data packets 100 by the network node N 1 , the network controller 20 shown in FIG. 1A and/or the client device A.
- FIG. 1E illustrates a route which includes network nodes that support Alg x—the routing criterion that is to be used to transport the set of data packets 100 .
- the route includes network nodes N 1 , N 2 , N 5 and N 9 , and communication links L 1 , L 2 , L 6 , L 14 and L 19 .
- the network node N 1 receives the set of data packets 100 . Since the set of data packets 100 are labeled with 16,009, in accordance with the forwarding entries installed at network node N 1 , network node N 1 forwards the set of data packets 100 to network node N 2 .
- Network node N 2 receives the set of data packets 100 from network node N 1 over the communication link L 2 . Since the set of data packets 100 are labeled with 16,009, in accordance with the forwarding entries installed at network node N 2 , network node N 2 forwards the set of data packets 100 to network node N 5 .
- Network node N 5 receives the set of data packets 100 from network node N 2 over the communication link L 6 . Since the set of data packets 100 are labeled with 16,009, in accordance with the forwarding entries installed at network node N 5 , network node N 5 forwards the set of data packets 100 to network node N 9 .
- Network node N 9 receives the set of data packets 100 from network node N 5 over the communication link L 14 .
- Network node N 9 forwards the set of data packets 100 to the client device B over the communication link L 19 .
- the network nodes N 1 , N 2 , N 5 and N 9 on the route satisfy the routing criterion associated with the set of data packets 100 .
- the network nodes N 1 , N 2 , N 5 and N 9 support Alg x.
- Network nodes that do not support the routing criterion associated with the set of data packets 100 are not included in the route.
- network nodes N 7 and N 6 are not included in the route.
- a controller (e.g., the network controller 20 shown in FIG. 1A ) identifies all network nodes that support the routing criterion associated with the set of data packets 100 .
- the controller identifies that network nodes N 1 , N 2 , N 3 , N 4 , N 5 , N 8 and N 9 support Alg x.
- the controller determines the shortest/fastest route through the network nodes that support the routing criterion associated with the set of data packets 100 .
- the shortest/fastest route includes network nodes N 1 , N 2 , N 5 and N 9 .
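The controller's two-step computation described above (identify supporting nodes, then find the shortest route through them) can be sketched as a breadth-first search restricted to the allowed node set. The adjacency below is hypothetical, since the figures' full link topology is not enumerated here; the supporting-node set follows FIG. 1B.

```python
from collections import deque

# Nodes that support Alg x, per FIG. 1B.
SUPPORTS_X = {"N1", "N2", "N3", "N4", "N5", "N8", "N9"}

# Hypothetical adjacency among the nodes (illustrative only).
ADJACENCY = {
    "N1": ["N2", "N4"], "N2": ["N1", "N5", "N6"], "N4": ["N1", "N5"],
    "N5": ["N2", "N4", "N9"], "N6": ["N2", "N9"], "N9": ["N5", "N6"],
}

def constrained_shortest_path(src, dst, allowed):
    """Breadth-first search that only traverses nodes satisfying the criterion."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in ADJACENCY.get(path[-1], []):
            if nbr in allowed and nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

# N6 is skipped (no Alg x support), so the route goes through N5:
print(constrained_shortest_path("N1", "N9", SUPPORTS_X))
# ['N1', 'N2', 'N5', 'N9']
```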
- the routing criterion indicates one or more restrictions (e.g., exclusionary constraints).
- the exclusionary constraints indicate characteristics of network nodes and/or communication links that are to be avoided.
- the determined route does not include network nodes and/or communication links with characteristics that match the exclusionary constraints.
- For example, in FIG. 1F , the communication link L 6 is associated with affinity red, and one of the exclusionary constraints associated with the routing criterion (e.g., with Alg x) is to avoid affinity red. As such, the route does not include communication link L 6 ; instead, the route in the example of FIG. 1F includes communication links L 3 and L 7 .
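Applying an exclusionary constraint amounts to pruning matching links before the route is computed. The sketch below uses an assumed data model (a per-link affinity table); only link L 6 's red affinity comes from the example above.

```python
# Hypothetical per-link affinity assignments; only L6's is from the example.
LINK_AFFINITY = {"L2": None, "L3": None, "L6": "red", "L7": None, "L14": None}

def usable_links(links, excluded_affinities):
    """Drop links whose affinity appears in the criterion's exclusion set."""
    return [l for l in links if LINK_AFFINITY.get(l) not in excluded_affinities]

# With affinity "red" excluded, L6 is pruned, so a route must use L3/L7:
print(usable_links(["L2", "L3", "L6", "L7", "L14"], {"red"}))
# ['L2', 'L3', 'L7', 'L14']
```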
- the routing criterion for the set of data packets 100 is determined based on a type of the data packets 100 . For example, in some implementations, if the set of data packets 100 correspond to streaming video, then the routing criterion for the set of data packets 100 is set to Alg x. In some implementations, if the set of data packets 100 correspond to messaging, then the routing criterion for the set of data packets 100 is set to Alg y. In some implementations, if the set of data packets 100 correspond to encrypted traffic, then the routing criterion for the set of data packets 100 is set to Alg z. In some implementations, the routing criterion for the set of data packets 100 is set by the client device A.
- the routing criterion for the set of data packets 100 is set by network node N 1 . In some implementations, the routing criterion for the set of data packets 100 is set by a controller (e.g., by the network controller 20 shown in FIG. 1A ).
- the network controller 20 is shown as being separate from the network nodes N 1 . . . N 9 . However, in some implementations, the network controller 20 resides at one or more of the network nodes N 1 . . . N 9 . In some implementations, the network controller 20 is distributed across various computing devices. For example, in some implementations, the network controller 20 is implemented by a cloud computing system. In the example of FIG. 1A , a single instance of the network controller 20 is shown. However, in some implementations, there are multiple instances of the network controller 20 . For example, in some implementations, different network controllers control different parts of the network. In some implementations, the network nodes N 1 . . . N 9 are controlled by different network operating entities. In such implementations, each network operating entity utilizes a network controller to control its network nodes.
- FIG. 2A is a flowchart representation of a method 200 of configuring network nodes (e.g., the network nodes N 1 . . . N 9 shown in FIGS. 1A-1F ) in a network in accordance with some implementations.
- the method 200 is implemented as a set of computer readable instructions that are executed at a device (e.g., the network controller 20 shown in FIG. 1A , one or more of the network nodes N 1 . . . N 9 shown in FIGS. 1A-1F and/or the device 300 shown in FIG. 3 ).
- the method 200 includes determining a routing criterion to transmit a set of data packets ( 210 ), identifying network nodes and communication links that satisfy the routing criterion ( 220 ), determining a route for the set of data packets through the network nodes and the communication links that satisfy the routing criterion ( 230 ), and configuring the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route ( 240 ).
- the method 200 includes determining a routing criterion to transmit a set of data packets across a network. For example, in some implementations, the method 200 includes determining a routing algorithm to transmit the set of data packets across a set of interconnected network nodes. As represented by block 210 a, in some implementations, the method 200 includes determining the routing criterion based on the set of data packets. In some implementations, the method 200 includes determining the routing criterion based on a type of the set of data packets.
- the type indicates whether the set of data packets carry messaging data (e.g., messages from an instant messaging application), media data (e.g., videos, music, etc.), voice data, file transfer data, streaming data (e.g., video streaming data, audio streaming data, etc.), and/or encrypted data.
- the method 200 includes selecting a first routing criterion for data packets that correspond to video streaming, selecting a second routing criterion for data packets that correspond to messaging, and selecting a third routing criterion for data packets that correspond to all other types of traffic. As represented by block 210 b, in some implementations, the method 200 includes selecting the routing criterion from a plurality of routing criteria.
- the method 200 includes determining the routing criterion by selecting an existing routing criterion. In some implementations, the method 200 includes determining the routing criterion by modifying an existing routing criterion. In some implementations, the method 200 includes determining the routing criterion by creating a new routing criterion.
- the set of data packets are associated with a transmission priority value (e.g., ‘1’ for high priority, ‘0’ for medium priority and ‘-1’ for low priority).
- the method 200 includes determining the routing criterion for the set of data packets based on the transmission priority value. For example, in such implementations, the method 200 includes selecting a first routing criterion (e.g., Alg x shown in FIG. 1B ) for data packets with a transmission priority value of ‘1’, a second routing criterion (e.g., Alg y shown in FIG. 1B ) for data packets with a transmission priority value of ‘0’, and a third routing criterion (e.g., Alg z shown in FIG. 1B ) for data packets with a transmission priority value of ‘-1’.
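The priority-to-criterion selection just described can be sketched as a direct mapping. The table values follow the example above; the function name is illustrative, not from the claims.

```python
# Priority-to-criterion mapping from the example above.
PRIORITY_TO_CRITERION = {1: "Alg x", 0: "Alg y", -1: "Alg z"}

def select_criterion(priority):
    """Pick the routing criterion for a packet's transmission priority value."""
    try:
        return PRIORITY_TO_CRITERION[priority]
    except KeyError:
        raise ValueError(f"unknown transmission priority: {priority}")

print(select_criterion(1))   # Alg x
print(select_criterion(-1))  # Alg z
```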
- the method 200 includes determining the routing criterion based on a target metric associated with the set of data packets.
- the target metric includes an Interior Gateway Protocol (IGP) metric.
- the network nodes and/or the communication links are configured to control (e.g., maintain, reduce or increase) the IGP metric.
- the target metric relates to affinity values.
- the target metric is to exclude predefined affinity values (e.g., exclude Traffic Engineering (TE) affinity 2, exclude TE affinity 1, etc.).
- the target metric relates to color values.
- the target metric specifies specific color values (e.g., Color 1, Color 2, etc.).
- the method 200 includes identifying network nodes and communication links in the network that satisfy the routing criterion.
- the method 200 includes selecting nodes that support and/or advertise support for the routing criterion for the set of data packets.
- the method 200 includes selecting network nodes that support the routing criterion regardless of whether the network nodes advertise support for the routing criterion.
- the method 200 includes selecting network nodes that support the routing criterion and advertise support for the routing criterion (e.g., the network nodes indicate that they support the routing criterion).
- the method 200 includes selecting network nodes and/or communication links that are not associated with exclusionary constraints corresponding to the routing criterion.
- the routing criterion indicates one or more exclusionary constraints (e.g., characteristics of network nodes and/or communication links that are to be avoided).
- the method 200 includes forgoing selecting network nodes and/or communication links with characteristics that are among the exclusionary constraints.
- the method 200 includes determining a route through the network nodes and the communication links that satisfy the routing criterion. In some implementations, the method 200 includes determining the shortest route through the network nodes and the communication links that satisfy the routing criterion. In some implementations, the method 200 includes determining the fastest route through the network nodes and the communication links that satisfy the routing criterion.
- the method 200 includes determining the route based on a target metric associated with the set of data packets.
- the network nodes are associated with respective target metrics.
- the network nodes are configured to control (e.g., maintain, reduce or increase) their respective target metrics.
- the method 200 includes determining the route by selecting network nodes that are configured to control the target metric associated with the set of data packets.
- the target metric includes one or more of an Interior Gateway Protocol (IGP) metric, a Traffic Engineering (TE) metric, etc.
- the method 200 includes configuring the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route.
- the method 200 includes installing forwarding entries at the network nodes that are on the route.
- the method 200 includes pushing, by a controller (e.g., the network controller 20 shown in FIG. 1A ) the forwarding entries to the network nodes that are on the route.
- the method 200 includes fetching, by the network nodes that are on the route, the forwarding entries.
- the forwarding entries identify downstream nodes that satisfy the routing criterion. For example, in some implementations, the forwarding entries map SIDs to corresponding network nodes (e.g., Internet Protocol (IP) addresses of corresponding network nodes).
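The configuration step (block 240 ) can be sketched as deriving, for each on-route node except the last, one forwarding entry that maps the SID to that node's downstream neighbor on the route. The representation is assumed; the route and SID values follow the FIG. 1E example.

```python
def derive_entries(route, sid):
    """Map each non-terminal node on the route to a {SID: next hop} entry."""
    # zip(route, route[1:]) pairs each node with its downstream neighbor,
    # so the final node on the route gets no entry (it is the egress).
    return {node: {sid: nxt} for node, nxt in zip(route, route[1:])}

entries = derive_entries(["N1", "N2", "N5", "N9"], 16009)
print(entries)
# {'N1': {16009: 'N2'}, 'N2': {16009: 'N5'}, 'N5': {16009: 'N9'}}
```

In a deployment, a controller would push each node's entry to that node (or the nodes would fetch it), as described above.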
- FIG. 2B is a flowchart representation of a method 250 of propagating data packets in accordance with some implementations.
- the method 250 is implemented as a set of computer readable instructions that are executed at a device (e.g., the network controller 20 shown in FIG. 1A , one or more of the network nodes N 1 . . . N 9 shown in FIGS. 1A-1F and/or the device 300 shown in FIG. 3 ).
- the method 250 includes obtaining a request to transmit a set of data packets ( 260 ), applying a label to the set of data packets ( 270 ), and propagating the data packets in accordance with forwarding entries and the label ( 280 ).
- the method 250 includes obtaining a request to transmit the set of data packets (e.g., the set of data packets 100 shown in FIG. 1E ).
- the method 250 includes receiving the request from a client device (e.g., the client device A shown in FIGS. 1A-1F ).
- the method 250 includes receiving the request at a network node that is at an edge of the network (e.g., at an edge node, for example, at network node N 1 shown in FIGS. 1A-1F ).
- the method 250 includes receiving the request at a controller (e.g., the network controller 20 shown in FIG. 1A ).
- the method 250 includes applying a label to the set of data packets.
- the label indicates a routing criterion for transmitting the data packets.
- the method 250 includes determining the routing criterion for transmitting the data packets.
- the method 250 includes determining the routing criterion based on the set of data packets.
- the method 250 includes selecting an existing routing criterion, modifying an existing routing criterion or creating a new routing criterion based on a type of the set of data packets.
- the method 250 includes determining the routing criterion based on the request (e.g., by retrieving the routing criterion from the request). As represented by block 270 b, in some implementations, the method 250 includes applying an MPLS label to the set of data packets. For example, in some implementations, the method 250 includes inserting the label in respective header fields of the data packets.
- the method 250 includes propagating the data packets in accordance with configuration information of the network nodes and the label. In some implementations, the method 250 includes forwarding the data packets in accordance with forwarding entries and the label affixed to the set of data packets. For example, in some implementations, the method 250 includes forwarding the data packets to the network node that is mapped to the label.
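Method 250 's label-then-forward flow can be sketched end to end under assumed data structures: label the packets at the edge (block 270 ), then forward hop by hop per the installed entries (block 280 ). The packet and table representations are illustrative.

```python
def apply_label(packet, sid):
    """Block 270: attach the MPLS label (SID) to an assumed packet dict."""
    return {**packet, "mpls_label": sid}

def propagate(packet, start, forwarding):
    """Block 280: follow forwarding entries until no entry matches the label."""
    node, hops = start, [start]
    while (nh := forwarding.get(node, {}).get(packet["mpls_label"])) is not None:
        hops.append(nh)
        node = nh
    return hops

# Entries matching the FIG. 1E route for SID 16,009:
fw = {"N1": {16009: "N2"}, "N2": {16009: "N5"}, "N5": {16009: "N9"}}
pkt = apply_label({"payload": b"..."}, 16009)
print(propagate(pkt, "N1", fw))  # ['N1', 'N2', 'N5', 'N9']
```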
- the method 200 and/or the method 250 allow routing of data packets even though the network nodes and the communication links have different routing capabilities.
- the method 200 allows the network nodes and the communication links to support different routing criteria thereby providing more flexibility.
- the method 200 allows the network nodes and/or the communication links to support different routing algorithms.
- the method 200 enables malleable routing for data packets by allowing the network nodes and/or the communication links to change their respective routing capabilities (e.g., by supporting different routing criteria).
- FIG. 3 is a block diagram of a device 300 enabled with one or more components of a device (e.g., the network controller 20 shown in FIG. 1A , and/or one or more of the network nodes N 1 . . . N 9 shown in FIGS. 1A-1F ) in accordance with some implementations. While certain specific features are illustrated, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein.
- the device 300 includes one or more processing units (CPUs) 302 , a network interface 303 , a programming interface 305 , a memory 306 , and one or more communication buses 304 for interconnecting these and various other components.
- the network interface 303 is provided to, among other uses, establish and maintain a metadata tunnel between a cloud hosted network management system and at least one private network including one or more compliant devices.
- the communication buses 304 include circuitry that interconnects and controls communications between system components.
- the memory 306 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices.
- the memory 306 optionally includes one or more storage devices remotely located from the CPU(s) 302 .
- the memory 306 comprises a non-transitory computer readable storage medium.
- the memory 306 or the non-transitory computer readable storage medium of the memory 306 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 308 , a routing criterion determination module 310 , a node/link identification module 320 , a route determination module 330 , and a configuration module 340 .
- the routing criterion determination module 310 determines a routing criterion that is to be used to transmit a set of data packets across a network. To that end, the routing criterion determination module 310 includes instructions 310 a, and heuristics and metadata 310 b.
- the node/link identification module 320 identifies network nodes and communication links in the network that satisfy the routing criterion. To that end, the node/link identification module 320 includes instructions 320 a, and heuristics and metadata 320 b. In various implementations, the route determination module 330 determines a route for the set of data packets through the network nodes and the communication links that satisfy the routing criterion. To that end, the route determination module 330 includes instructions 330 a, and heuristics and metadata 330 b . In various implementations, the configuration module 340 configures the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route. To that end, the configuration module 340 includes instructions 340 a, and heuristics and metadata 340 b.
- the method 200 , the method 250 and/or the device 300 enable a routing criterion (e.g., a routing algorithm, for example, an IGP prefix SID algorithm) to be defined on a per-deployment basis.
- a flexible algorithm K is defined as controlling (e.g., reducing, for example, minimizing) a particular target metric (e.g., an IGP metric, a TE metric, or other network performance metrics such as latency).
- the flexible algorithm K further defines a set of one or more restrictions (e.g., exclusionary constraints or excluded resources).
- the set of restrictions are identified by their Shared Risk Link Groups (SRLG), TE affinity and/or Internet Protocol (IP) address.
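A flexible algorithm definition of this kind, a target metric plus a set of exclusions, can be sketched as a small data structure. The class name, field names, and values below are illustrative only; the ALG11 definition is taken from the examples later in this description:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlexAlgoDefinition:
    """Illustrative flexible algorithm definition: the metric to control
    plus exclusionary constraints (TE affinities and SRLGs)."""
    algo_id: int
    metric: str                               # e.g., "igp", "te", "latency"
    exclude_affinities: frozenset = frozenset()
    exclude_srlgs: frozenset = frozenset()

# ALG11 from the examples below: minimize the IGP metric, exclude TE affinity 2.
ALG11 = FlexAlgoDefinition(algo_id=11, metric="igp",
                           exclude_affinities=frozenset({2}))

def link_allowed(defn, link_affinities, link_srlgs):
    """A link is usable under `defn` only if it carries none of the
    excluded affinities or SRLGs."""
    return not (defn.exclude_affinities & set(link_affinities)
                or defn.exclude_srlgs & set(link_srlgs))
```

Under ALG11, a link tagged with TE affinity 2 is rejected while other links remain usable.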
- the method 200 , the method 250 and/or the device 300 allow different operators to define different routing criteria (e.g., different routing algorithms).
- an operator K that controls one or more network nodes defines a first routing criterion as controlling an IGP metric (e.g., reducing the IGP metric, for example, minimizing the IGP metric) and avoiding a particular SRLG (e.g., avoiding SRLG1).
- an operator J that controls one or more network nodes defines a second routing criterion as controlling a TE metric (e.g., reducing the TE metric, for example, minimizing the TE metric) and avoiding TE affinity 1.
- the method 200 , the method 250 and/or the device 300 enable support for different routing criteria.
- the method 200 , the method 250 and/or the device 300 enable support for a routing criterion (e.g., ALG 11 ) that controls an IGP metric (e.g., reduces the IGP metric, for example, minimizes the IGP metric) and excludes TE affinity 2.
- the method 200 , the method 250 and/or the device 300 enable support for another routing criterion (e.g., ALG 12) that controls the IGP metric and excludes TE affinity 1.
- a set of Type, Length and Value (TLV) fields is utilized to encode the defining characteristics of a routing criterion.
- the method 200 , the method 250 and/or the device 300 allow network-wide automation of the assignment/modification of the routing criteria.
- the method 200 , the method 250 and/or the device 300 allow network nodes to indicate (e.g., advertise, for example, broadcast) the definition of their respective routing criterion.
- the device 300 advertises the example routing criteria ALG11 and ALG 12 as:
- ALG11: control (e.g., reduce, for example, minimize) IGP metric, exclude TE affinity 2
- ALG12: control (e.g., reduce, for example, minimize) IGP metric, exclude TE affinity 1
- the device 300 detects inconsistencies between network nodes that support the same routing criterion.
- the router capability (RC) of network nodes indicates the definition of the routing criteria supported by the network nodes.
- the router capability of network nodes M and N indicates that the network nodes M and N support ALG11:
- the device 300 determines that the definition of the routing criterion supported by the network nodes M and N is consistent.
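The consistency check can be sketched as grouping advertised definitions by algorithm identifier and flagging any disagreement. The advertisement shape below (a per-node map from algorithm ID to a definition tuple) is an assumption for illustration:

```python
def find_inconsistencies(advertisements):
    """Return {algo_id: {definition: [nodes]}} for every algorithm whose
    advertised definition differs across the nodes that support it.

    `advertisements` maps node name -> {algo_id: definition}, where a
    definition is any hashable value (here, a tuple of its fields).
    """
    seen = {}
    for node, algos in advertisements.items():
        for algo_id, defn in algos.items():
            seen.setdefault(algo_id, {}).setdefault(defn, []).append(node)
    # An algorithm is inconsistent if more than one distinct definition exists.
    return {a: defs for a, defs in seen.items() if len(defs) > 1}

# Nodes M and N advertise the same definition for ALG11, so no
# inconsistency is reported.
ads = {
    "M": {11: ("igp", "exclude-affinity-2")},
    "N": {11: ("igp", "exclude-affinity-2")},
}
```

If a third node later advertises a conflicting definition for algorithm 11, the function reports it, mirroring the inconsistency detection described above.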
- the network nodes utilize a new RC TLV to indicate the definition of the routing criterion that the network nodes support.
- the device 300 determines whether a network node N is enabled for (e.g., supports) a particular routing criterion (e.g., ALG(K)). If the device 300 determines that the network node N does not support ALG(K), the device 300 does not include the network node N in the route. For example, in some implementations, the network node N does not compute ALG(K) Dijkstra and does not install ALG(K) prefix SID.
- if the device 300 determines that the network node N supports ALG(K), the device 300 prunes all the nodes that do not support ALG(K), prunes all the communication links falling under the exclude constraints defined for ALG(K), computes Dijkstra on the resulting topology according to the target metric associated with ALG(K), and installs the prefix SID according to the computed Dijkstra shortest route for any prefix leaf with an ALG(K) prefix SID.
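The prune-then-compute steps above can be sketched as follows; the topology representation and function name are illustrative, and the prefix SID installation step is omitted:

```python
import heapq

def algk_shortest_paths(nodes, links, supports_algk, src):
    """Prune nodes that do not support ALG(K) and links that fall under
    its exclude constraints, then run Dijkstra on the resulting topology.

    `links` is a list of (u, v, metric, excluded) tuples; returns the
    shortest-distance map from `src` over the pruned topology.
    """
    topo = {n: [] for n in nodes if n in supports_algk}
    for u, v, metric, excluded in links:
        if excluded or u not in topo or v not in topo:
            continue                      # excluded link or pruned node
        topo[u].append((v, metric))
        topo[v].append((u, metric))
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in topo.get(u, ()):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Toy topology: N7 does not support ALG(K), so routes avoid it.
NODES = {"N1", "N2", "N5", "N7", "N9"}
LINKS = [("N1", "N2", 1, False), ("N2", "N5", 1, False),
         ("N5", "N9", 1, False), ("N1", "N7", 1, False),
         ("N7", "N9", 1, False)]
```

Because N7 is pruned, the shortest route from N1 to N9 goes through N2 and N5 even though the path through N7 has fewer hops in the unpruned topology.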
- the device 300 determines a backup route (e.g., a secondary route, for example, a Fast Reroute (FRR) backup route).
- the backup route is associated with (e.g., respects) the same characteristics (e.g., constraints) as the route (e.g., the primary route).
- the backup route is determined based on the same routing criterion as the primary route.
- the device 300 determines (e.g., computes) the backup route for the Prefix SID S of ALG(K).
- the device 300 executes a Topology-Independent Loop-Free Alternate (TI-LFA) algorithm on the topology T′(K), where T′(K) is T(K) minus the resource protected with TI-LFA (e.g., link, node, SRLG).
- the post-convergence backup route is encoded with SIDs associated with ALG(K).
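A minimal sketch of that computation, under the assumption of a simple adjacency-map topology: remove the protected resource from T(K) to obtain T′(K), then recompute shortest paths. The SID encoding of the post-convergence path, which actual TI-LFA performs, is omitted here:

```python
import heapq

def backup_distances(topo, src, protected_link):
    """Recompute shortest distances on T'(K): the topology `topo`
    (node -> {neighbor: metric}) minus the protected link (u, v)."""
    u, v = protected_link
    # T'(K): drop the protected link in both directions.
    pruned = {n: {m: w for m, w in nbrs.items()
                  if (n, m) not in {(u, v), (v, u)}}
              for n, nbrs in topo.items()}
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, float("inf")):
            continue
        for y, w in pruned.get(x, {}).items():
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return dist

# With link (A, B) protected, traffic from A reaches B via C.
TOPO = {"A": {"B": 1, "C": 2}, "B": {"A": 1, "C": 1}, "C": {"A": 2, "B": 1}}
```

Protecting the A-B link forces the A-to-B distance up to 3 (via C), while protecting an unrelated link leaves the direct path intact.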
- the device 300 provides automated steering of service traffic on the IGP prefix SID with the routing criterion implementing the service level agreement (SLA) associated with (e.g., required by) the service route.
- PE receives BGP route 1/8 via 2.2.2.2 with color 1. 2.2.2.2 is advertised in IGP with Prefix SID 17002 for a particular routing criterion (e.g., ALG 11)
- ALG 11 is defined by Mapping Server as “IGP metric, exclude TE-affinity2, color 1”
- PE installs 1/8 via 17002 because 17002 is the Prefix SID of 2.2.2.2 according to ALG 11.
- ALG 11 is bound to color 1.
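The worked example above reduces to a lookup chain from the route's color to the bound algorithm, and from the (next hop, algorithm) pair to a prefix SID. The data shapes below are assumptions for illustration:

```python
def steer(bgp_route, color_to_alg, prefix_sids):
    """Resolve a colored BGP service route onto the prefix SID of the
    algorithm bound to its color; None means no matching SID exists.

    `bgp_route` is (prefix, next_hop, color); `prefix_sids` maps
    (next_hop, algorithm) -> SID.
    """
    prefix, next_hop, color = bgp_route
    alg = color_to_alg.get(color)
    sid = prefix_sids.get((next_hop, alg))
    return None if sid is None else (prefix, sid)

# The 1/8 example: color 1 is bound to ALG 11, and 2.2.2.2 advertises
# Prefix SID 17002 for ALG 11, so 1/8 is installed via 17002.
ROUTE = ("1/8", "2.2.2.2", 1)
```

If the color maps to an algorithm for which the next hop advertises no SID, the resolution fails and the route falls back to whatever default behavior the deployment defines.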
- a single SID is utilized to encode the shortest route instead of N SIDs.
- the device 300 encodes the shortest route as a list of N SIDs of algorithm zero.
- the SID list size is the primary data plane constraint for a segment routing (SR) deployment.
- the device 300 enables flexible and customized configuration of the network nodes and/or the communication links. In some implementations, the device 300 supports dual-plane policies. In some implementations, the device 300 encodes planes differently (e.g., any TE affinity value can be used, and/or any SRLG value can be used). In various implementations, the device 300 enables network operators, network nodes and/or communication links to define their own routing criterion.
- the device 300 enables network-wide automation of adopting/modifying routing criteria.
- a mapping server extension is defined to distribute the definition of routing criteria (e.g., an IGP algorithm) across all the network nodes of the domain/area.
- the device 300 detects inconsistent definitions of a routing criterion.
- a router capability extension is defined to indicate (e.g., advertise) the definition of a routing criterion for the network nodes. In some implementations, if network nodes supporting the same routing criterion do not indicate the same definition for the routing criterion, the device 300 detects an inconsistency.
- the device 300 enables automated steering of service flows onto the prefix SID associated with the routing criterion. In some implementations, the device 300 determines a backup route that is associated with (e.g., follows or respects) the same constraints as the primary route.
- the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context.
- the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Description
- This application claims priority to U.S. provisional patent application No. 62/527,611 filed on Jun. 30, 2017, the contents of which are hereby incorporated by reference.
- The present disclosure relates generally to routing, and in particular, to malleable routing for data packets.
- In some previously available networks, data packets are routed based on a fixed rule that aims to optimize a specific metric. For example, in some previously available networks, data packets are routed based on a shortest path algorithm. However, fixed rules are sometimes unsuitable for certain types of data packets. For example, not all data packets need to be routed based on the shortest path algorithm. As an example, some data packets may need to be routed via network nodes that support a heightened level of encryption even if such network nodes are not on the shortest path.
- So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
-
FIGS. 1A-1F are schematic diagrams of a network environment that allows malleable routing for data packets in accordance with some implementations. -
FIG. 2A is a flowchart representation of a method of configuring network nodes in a network in accordance with some implementations. -
FIG. 2B is a flowchart representation of a method of propagating data packets in accordance with some implementations. -
FIG. 3 is a block diagram of a device enabled with various modules that are provided to configure network nodes and propagate data packets in accordance with some implementations. - In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
- Numerous details are described herein in order to provide a thorough understanding of the illustrative implementations shown in the accompanying drawings. However, the accompanying drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate from the present disclosure that other effective aspects and/or variants do not include all of the specific details of the example implementations described herein. While pertinent features are shown and described, those of ordinary skill in the art will appreciate from the present disclosure that various other features, including well-known systems, methods, components, devices, and circuits, have not been illustrated or described in exhaustive detail for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein.
- Various implementations disclosed herein enable malleable routing for data packets. For example, in various implementations, a method of routing a type of data packets is performed by a device. In some implementations, the device includes a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, the method includes determining a routing criterion to transmit a set of data packets across a network. In some implementations, the method includes identifying network nodes and communication links in the network that satisfy the routing criterion. In some implementations, the method includes determining a route for the set of data packets through the network nodes and the communication links that satisfy the routing criterion. In some implementations, the method includes configuring the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route.
- Some networks treat various types of data packets equally. For example, some networks route different types of data packets according to the same routing criterion (e.g., routing algorithm). Many networks primarily utilize a shortest route criterion (e.g., the shortest path algorithm) to route data packets. In such networks, data packets corresponding to a video download are routed according to the same criterion as data packets corresponding to a phone call. This rigid approach of routing data packets according to a fixed criterion is sometimes unsuitable for certain types of data packets. For example, while the shortest route criterion may be suitable for data packets corresponding to a phone call, the shortest route criterion may be unsuitable for data packets corresponding to an encrypted file transfer.
- The present disclosure provides methods, systems and/or devices that enable malleable routing for data packets. In some implementations, different routing criteria are utilized to transport different types of data packets. In some implementations, different routing criteria include different routing algorithms or different routing schemes. This flexible approach of utilizing different routing criteria for different types of data packets tends to result in routes that are more suitable for the type of data packets. For example, in some implementations, the shortest route criterion is utilized to route data packets corresponding to a phone call but a different routing criterion is utilized to route data packets corresponding to an encrypted file transfer. The present disclosure provides more configuration options to network operators by allowing the network operators to support different routing criteria. For example, the present disclosure enables a network operator to support an existing routing criterion, support a modified version of an existing routing criterion, and/or create a new routing criterion. The present disclosure also provides more configuration options for individual network nodes. For example, the present disclosure enables a network node to install configuration information for an existing routing criterion, a modified version of an existing routing criterion, and/or a new routing criterion.
-
FIGS. 1A-1F are schematic diagrams of a network environment 10 that allows malleable routing for data packets in accordance with some implementations. While certain specific features are illustrated, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, the network environment 10 includes client devices A and B, various network nodes N1, N2 . . . N9, various communication links L1, L2 . . . L19, and a network controller 20. Although the network controller 20 is shown as being separate from the network nodes N1, N2 . . . N9, in some implementations, the network controller 20 is implemented by one or more of the network nodes N1, N2 . . . N9. In other words, in some implementations, the network controller 20 is distributed across one or more of the network nodes N1, N2 . . . N9. - Referring to
FIG. 1B , in various implementations, the network nodes N1 . . . N9 support one or more routing criteria. For example, in some implementations, the network nodes N1 . . . N9 support one or more routing algorithms. In the example of FIG. 1B , the network node N1 supports algorithm (Alg) x, the network node N2 supports Alg.s x and y, the network node N3 supports Alg.s x and z, the network node N4 supports Alg.s x and y, the network node N5 supports Alg.s x, y and z, the network node N6 supports Alg.s y and z, the network node N7 supports Alg z, the network node N8 supports Alg.s x and z, and the network node N9 supports Alg.s x and z. In some implementations, the network nodes N1 . . . N9 are configured to route data packets in accordance with the routing criteria (e.g., the routing algorithms) that the network nodes N1 . . . N9 support. For example, the network node N1 is configured to route data packets in accordance with Alg x. - In some implementations, data packets that are to be routed in accordance with a particular routing criterion are propagated along a route that includes network nodes that support that particular routing criterion. In the example of
FIG. 1B , data packets that are to be routed in accordance with Alg x may be transmitted along a route that includes network node N1 or network node N4. However, data packets that are to be routed in accordance with Alg y are transmitted along a route that includes network node N4, and not network node N1. - Referring to
FIG. 1C , in some implementations, some network nodes indicate (e.g., advertise, broadcast and/or publish) the routing criteria that the network nodes support. In some implementations, the network nodes indicate their support for a particular routing criterion via a router capability (RC). In some implementations, the network nodes are associated with a respective set of router capabilities. In some implementations, a network node utilizes a first router capability to indicate the definition of a routing criterion, a second router capability to indicate whether or not the network node supports the routing criterion, and a third router capability to indicate a segment identifier (SID) that is associated with the routing criterion. In the example of FIG. 1C , a router capability of network node N1 indicates that the SID for Alg x is 16,001. In the example of FIG. 1C , a router capability of network node N2 indicates that the SID for Alg x is 16,002 and the SID for Alg y is 16,102. In the example of FIG. 1C , a router capability of network node N5 indicates that the SID for Alg x is 16,005, the SID for Alg y is 16,105 and the SID for Alg z is 16,205. In the example of FIG. 1C , a router capability of network node N9 indicates that the SID for Alg x is 16,009 and the SID for Alg z is 16,209. In some implementations, some network nodes do not advertise the routing criteria that the network nodes support. In some implementations, some network nodes are not associated with SIDs. For example, in FIG. 1C , network nodes N3, N4, N7 and N8 are not associated with any SIDs. In some implementations, some network nodes have SIDs for some routing criteria but not for other routing criteria. For example, in FIG. 1C , the network node N6 has an SID for Alg y but the network node N6 does not have an SID for Alg z. - Referring to
FIG. 1D , in various implementations, the network nodes N1 . . . N9 install configuration information that allows the network nodes N1 . . . N9 to direct (e.g., propagate or transmit) data packets in accordance with different routing criteria. In some implementations, the configuration information includes forwarding entries that indicate downstream network nodes that support the routing criterion being used to transmit data packets. In some implementations, the forwarding entries include a mapping of SIDs to downstream network nodes that are associated with the SIDs. In the example of FIG. 1D , the network node N1 includes forwarding entries that map SIDs 16,002, 16,005 and 16,009 to the network node N2. As such, the network node N1 transmits any data packets that are labeled with SID 16,002, 16,005 or 16,009 to the network node N2. In the example of FIG. 1D , the network node N2 includes forwarding entries that map SID 16,009 to network node N3, and SID 16,106 to network node N6. As such, the network node N2 transmits any data packets that are labeled with SID 16,009 to network node N3, and any data packets that are labeled with SID 16,106 to network node N6. In the example of FIG. 1D , the network node N5 includes forwarding entries that map SID 16,009 to network node N9. As such, the network node N5 transmits any data packets that are labeled with SID 16,009 to network node N9. - In some implementations, the forwarding entries (e.g., the configuration information) are installed at the network nodes in response to the network nodes being on a route that is being used to transport data packets. In some implementations, the forwarding entries are installed at the network nodes by a controller (e.g., the network controller 20 shown in
FIG. 1A ). For example, in some implementations, the controller pushes the forwarding entries to the network nodes after determining that the network nodes are on a selected route for transporting data packets. In some implementations, the forwarding entries are installed on the network nodes that are on the route, and not on the network nodes that are not on the route. More generally, in various implementations, network nodes that are on a route for transporting a set of data packets are configured with configuration information. - In various implementations, the configuration information is updated when data packets are transmitted using a different routing criterion. For example, in some implementations, different forwarding entries are installed when data packets are transmitted using different routing algorithms. More generally, in various implementations, the configuration information (e.g., the forwarding entries) is based on the routing criterion (e.g., the routing algorithm) that is being used to transport the data packets. Updating the configuration information (e.g., updating the forwarding entries) allows more flexibility in transporting data packets using different routing criteria (e.g., different routing algorithms). For example, with reference to
FIG. 1D , updating the forwarding entries enables transporting a first set of data packets in accordance with Alg x, a second set of data packets in accordance with Alg y, and a third set of data packets in accordance with Alg z. - Referring to
FIG. 1E , the network obtains a request to transmit a set of data packets 100 from client device A to client device B. In the example of FIG. 1E , the set of data packets are labeled with an MPLS (multiprotocol label switching) label of 16,009. The MPLS label of 16,009 indicates that the set of data packets are to be transmitted in accordance with Alg x because 16,009 is the SID for Alg x at network node N9. More generally, in various implementations, the set of data packets 100 indicates a routing criterion that is to be used to transport the set of data packets 100. In some implementations, the MPLS label is applied to the set of data packets 100 by the network node N1, the network controller 20 shown in FIG. 1A and/or the client device A. -
FIG. 1E illustrates a route which includes network nodes that support Alg x, the routing criterion that is to be used to transport the set of data packets 100. As illustrated in FIG. 1E , the route includes network nodes N1, N2, N5 and N9, and communication links L1, L2, L6, L14 and L19. In operation, the network node N1 receives the set of data packets 100. Since the set of data packets 100 are labeled with 16,009, in accordance with the forwarding entries installed at network node N1, network node N1 forwards the set of data packets 100 to network node N2. Network node N2 receives the set of data packets 100 from network node N1 over the communication link L2. Since the set of data packets 100 are labeled with 16,009, in accordance with the forwarding entries installed at network node N2, network node N2 forwards the set of data packets 100 to network node N5. Network node N5 receives the set of data packets 100 from network node N2 over the communication link L6. Since the set of data packets 100 are labeled with 16,009, in accordance with the forwarding entries installed at network node N5, network node N5 forwards the set of data packets 100 to network node N9. Network node N9 receives the set of data packets 100 from network node N5 over the communication link L14. Network node N9 forwards the set of data packets 100 to the client device B over the communication link L19. - In the example of
FIG. 1E , the network nodes N1, N2, N5 and N9 on the route satisfy the routing criterion associated with the set of data packets 100. For example, the network nodes N1, N2, N5 and N9 support Alg x. Network nodes that do not support the routing criterion associated with the set of data packets 100 are not included in the route. For example, network nodes N7 and N6 are not included in the route. In some implementations, a controller (e.g., the network controller 20 shown in FIG. 1A ) identifies all network nodes that support the routing criterion associated with the set of data packets 100. For example, the controller identifies that network nodes N1, N2, N3, N4, N5, N8 and N9 support Alg x. In some implementations, the controller determines the shortest/fastest route through the network nodes that support the routing criterion associated with the set of data packets 100. In the example of FIG. 1E , the shortest/fastest route includes network nodes N1, N2, N5 and N9. - Referring to
FIG. 1F , in various implementations, the routing criterion indicates one or more restrictions (e.g., exclusionary constraints). In some implementations, the exclusionary constraints indicate characteristics of network nodes and/or communication links that are to be avoided. In such implementations, the determined route does not include network nodes and/or communication links with characteristics that match the exclusionary constraints. In the example of FIG. 1F , the communication link L6 is associated with affinity red. Moreover, in the example of FIG. 1F , one of the exclusionary constraints associated with the routing criterion (e.g., with Alg x) is to avoid communication links with affinity red. As such, in the example of FIG. 1F , the route does not include communication link L6. Instead of communication link L6, the route in the example of FIG. 1F includes communication links L3 and L7.
data packets 100 is determined based on a type of thedata packets 100. For example, in some implementations, if the set ofdata packets 100 correspond to streaming video, then the routing criterion for the set ofdata packets 100 is set to Alg x. In some implementations, if the set ofdata packets 100 correspond to messaging, then the routing criterion for the set ofdata packets 100 is set to Alg y. In some implementations, if the set ofdata packets 100 correspond to encrypted traffic, then the routing criterion for the set ofdata packets 100 is set to Alg z. In some implementations, the routing criterion for the set ofdata packets 100 is set by the client device A. In some implementations, the routing criterion for the set ofdata packets 100 is set by network node Ni. In some implementations, the routing criterion for the set ofdata packets 100 is set by a controller (e.g., by the network controller 20 shown inFIG. 1A ). - In the example of
FIG. 1A , the network controller 20 is shown as being separate from the network nodes N1 . . . N9. However, in some implementations, the network controller 20 resides at one or more of the network nodes N1 . . . N9. In some implementations, the network controller 20 is distributed across various computing devices. For example, in some implementations, the network controller 20 is implemented by a cloud computing system. In the example of FIG. 1A , a single instance of the network controller 20 is shown. However, in some implementations, there are multiple instances of the network controller 20. For example, in some implementations, different network controllers control different parts of the network. In some implementations, the network nodes N1 . . . N9 are controlled by different network operating entities. In such implementations, each network operating entity utilizes a network controller to control its network nodes. -
FIG. 2A is a flowchart representation of a method 200 of configuring network nodes (e.g., the network nodes N1 . . . N9 shown in FIGS. 1A-1F ) in a network in accordance with some implementations. In various implementations, the method 200 is implemented as a set of computer readable instructions that are executed at a device (e.g., the network controller 20 shown in FIG. 1A , one or more of the network nodes N1 . . . N9 shown in FIGS. 1A-1F and/or the device 300 shown in FIG. 3 ). Briefly, the method 200 includes determining a routing criterion to transmit a set of data packets (210), identifying network nodes and communication links that satisfy the routing criterion (220), determining a route for the set of data packets through the network nodes and the communication links that satisfy the routing criterion (230), and configuring the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route (240). - As represented by
block 210, in various implementations, the method 200 includes determining a routing criterion to transmit a set of data packets across a network. For example, in some implementations, the method 200 includes determining a routing algorithm to transmit the set of data packets across a set of interconnected network nodes. As represented by block 210a, in some implementations, the method 200 includes determining the routing criterion based on the set of data packets. In some implementations, the method 200 includes determining the routing criterion based on a type of the set of data packets. For example, in some implementations, the type indicates whether the set of data packets carry messaging data (e.g., messages from an instant messaging application), media data (e.g., videos, music, etc.), voice data, file transfer data, streaming data (e.g., video streaming data, audio streaming data, etc.), and/or encrypted data. In some implementations, the method 200 includes selecting a first routing criterion for data packets that correspond to video streaming, selecting a second routing criterion for data packets that correspond to messaging, and selecting a third routing criterion for data packets that correspond to all other types of traffic. As represented by block 210b, in some implementations, the method 200 includes selecting the routing criterion from a plurality of routing criteria. In some implementations, the method 200 includes determining the routing criterion by selecting an existing routing criterion. In some implementations, the method 200 includes determining the routing criterion by modifying an existing routing criterion. In some implementations, the method 200 includes determining the routing criterion by creating a new routing criterion. - In some implementations, the set of data packets are associated with a transmission priority value (e.g., '1' for high priority, '0' for medium priority and '−1' for low priority). In such implementations, the
method 200 includes determining the routing criterion for the set of data packets based on the transmission priority value. For example, in such implementations, the method 200 includes selecting a first routing criterion (e.g., Alg x shown in FIG. 1B) for data packets with a transmission priority value of '1', a second routing criterion (e.g., Alg y shown in FIG. 1B) for data packets with a transmission priority value of '0', and a third routing criterion (e.g., Alg z shown in FIG. 1B) for data packets with a transmission priority value of '−1'. - In some implementations, the
method 200 includes determining the routing criterion based on a target metric associated with the set of data packets. In some implementations, the target metric includes an Interior Gateway Protocol (IGP) metric. In some implementations, the network nodes and/or the communication links are configured to control (e.g., maintain, reduce or increase) the IGP metric. In some implementations, the target metric relates to affinity values. For example, in some implementations, the target metric is to exclude predefined affinity values (e.g., exclude Traffic Engineering (TE) affinity 2, exclude TE affinity 1, etc.). In some implementations, the target metric relates to color values. For example, in some implementations, the target metric specifies specific color values (e.g., Color 1, Color 2, etc.). - As represented by
block 220, in various implementations, the method 200 includes identifying network nodes and communication links in the network that satisfy the routing criterion. As represented by block 220a, in some implementations, the method 200 includes selecting nodes that support and/or advertise support for the routing criterion for the set of data packets. In some implementations, the method 200 includes selecting network nodes that support the routing criterion regardless of whether the network nodes advertise support for the routing criterion. In some implementations, the method 200 includes selecting network nodes that support the routing criterion and advertise support for the routing criterion (e.g., the network nodes indicate that they support the routing criterion). - As represented by
block 220b, in some implementations, the method 200 includes selecting network nodes and/or communication links that are not associated with exclusionary constraints corresponding to the routing criterion. In some implementations, the routing criterion indicates one or more exclusionary constraints (e.g., characteristics of network nodes and/or communication links that are to be avoided). In such implementations, the method 200 includes forgoing selecting network nodes and/or communication links with characteristics that are among the exclusionary constraints. - As represented by
block 230, in some implementations, the method 200 includes determining a route through the network nodes and the communication links that satisfy the routing criterion. In some implementations, the method 200 includes determining the shortest route through the network nodes and the communication links that satisfy the routing criterion. In some implementations, the method 200 includes determining the fastest route through the network nodes and the communication links that satisfy the routing criterion. - In some implementations, the
method 200 includes determining the route based on a target metric associated with the set of data packets. In various implementations, the network nodes are associated with respective target metrics. For example, in some implementations, the network nodes are configured to control (e.g., maintain, reduce or increase) their respective target metrics. In such implementations, the method 200 includes determining the route by selecting network nodes that are configured to control the target metric associated with the set of data packets. In some implementations, the target metric includes one or more of an Interior Gateway Protocol (IGP) metric, a Traffic Engineering (TE) metric, etc. - As represented by
block 240, in various implementations, the method 200 includes configuring the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route. As represented by block 240a, in some implementations, the method 200 includes installing forwarding entries at the network nodes that are on the route. In some implementations, the method 200 includes pushing, by a controller (e.g., the network controller 20 shown in FIG. 1A), the forwarding entries to the network nodes that are on the route. In some implementations, the method 200 includes fetching, by the network nodes that are on the route, the forwarding entries. As represented by block 240b, in some implementations, the forwarding entries identify downstream nodes that satisfy the routing criterion. For example, in some implementations, the forwarding entries map segment identifiers (SIDs) to corresponding network nodes (e.g., Internet Protocol (IP) addresses of corresponding network nodes). -
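Taken together, blocks 220-240 reduce to: prune the topology to nodes and links that satisfy the routing criterion, compute a shortest route by the target metric, and install per-hop forwarding entries. The following is a minimal sketch under assumed data shapes (node-support sets, links as (a, b, affinity, metric) tuples, integer SIDs); it is not the patent's implementation.

```python
import heapq

def eligible_topology(supports, links, criterion, excluded_affinities):
    """Block 220: keep only nodes that advertise support for the criterion
    and links whose affinity is not among the exclusionary constraints."""
    nodes = {n for n, algs in supports.items() if criterion in algs}
    return {
        (a, b): metric
        for (a, b, affinity, metric) in links
        if a in nodes and b in nodes and affinity not in excluded_affinities
    }

def shortest_route(adj, src, dst):
    """Block 230: Dijkstra over the pruned topology, by the target metric."""
    graph = {}
    for (a, b), metric in adj.items():
        graph.setdefault(a, []).append((b, metric))
        graph.setdefault(b, []).append((a, metric))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + metric, nxt, path + [nxt]))
    return None

def forwarding_entries(route, sid_of):
    """Block 240: per-hop entries mapping the destination's SID to the next
    hop, i.e., the configuration pushed to (or fetched by) each node."""
    return {hop: {sid_of[route[-1]]: route[i + 1]}
            for i, hop in enumerate(route[:-1])}

# Toy topology: N3 does not support the criterion and is therefore pruned.
supports = {"N1": {"Alg x"}, "N2": {"Alg x"}, "N3": set(), "N4": {"Alg x"}}
links = [("N1", "N2", 0, 1), ("N2", "N4", 0, 1),
         ("N1", "N3", 0, 1), ("N3", "N4", 0, 1)]
adj = eligible_topology(supports, links, "Alg x", excluded_affinities={2})
route = shortest_route(adj, "N1", "N4")
entries = forwarding_entries(route, sid_of={"N4": 17004})
```

Because N3 is pruned, the computed route goes through N2, and each on-route node receives an entry mapping the destination's SID to its next hop along the route.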
FIG. 2B is a flowchart representation of a method 250 of propagating data packets in accordance with some implementations. In various implementations, the method 250 is implemented as a set of computer readable instructions that are executed at a device (e.g., the network controller 20 shown in FIG. 1A, one or more of the network nodes N1 . . . N9 shown in FIGS. 1A-1F and/or the device 300 shown in FIG. 3). Briefly, the method 250 includes obtaining a request to transmit a set of data packets (260), applying a label to the set of data packets (270), and propagating the data packets in accordance with forwarding entries and the label (280). - As represented by
block 260, in various implementations, the method 250 includes obtaining a request to transmit the set of data packets (e.g., the set of data packets 100 shown in FIG. 1E). In some implementations, the method 250 includes receiving the request from a client device (e.g., the client device A shown in FIGS. 1A-1F). In some implementations, the method 250 includes receiving the request at a network node that is at an edge of the network (e.g., at an edge node, for example, at network node N1 shown in FIGS. 1A-1F). In some implementations, the method 250 includes receiving the request at a controller (e.g., the network controller 20 shown in FIG. 1A). - As represented by
block 270, in various implementations, the method 250 includes applying a label to the set of data packets. As represented by block 270a, in some implementations, the label indicates a routing criterion for transmitting the data packets. To that end, in some implementations, the method 250 includes determining the routing criterion for transmitting the data packets. For example, in some implementations, the method 250 includes determining the routing criterion based on the set of data packets. In some implementations, the method 250 includes selecting an existing routing criterion, modifying an existing routing criterion or creating a new routing criterion based on a type of the set of data packets. In some implementations, the method 250 includes determining the routing criterion based on the request (e.g., by retrieving the routing criterion from the request). As represented by block 270b, in some implementations, the method 250 includes applying a Multiprotocol Label Switching (MPLS) label to the set of data packets. For example, in some implementations, the method 250 includes inserting the label in respective header fields of the data packets. - As represented by
block 280, in various implementations, the method 250 includes propagating the data packets in accordance with configuration information of the network nodes and the label. In some implementations, the method 250 includes forwarding the data packets in accordance with forwarding entries and the label affixed to the set of data packets. For example, in some implementations, the method 250 includes forwarding the data packets to the network node that is mapped to the label. - In various implementations, the
method 200 and/or the method 250 allow routing of data packets even though the network nodes and the communication links have different routing capabilities. In some implementations, the method 200 allows the network nodes and the communication links to support different routing criteria, thereby providing more flexibility. For example, in some implementations, the method 200 allows the network nodes and/or the communication links to support different routing algorithms. Advantageously, the method 200 enables malleable routing for data packets by allowing the network nodes and/or the communication links to change their respective routing capabilities (e.g., by supporting different routing criteria). -
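On the data-plane side, the label-and-forward behavior of method 250 can be sketched with a toy packet model. The dict-based packet and table shapes below are assumptions; the integer label stands in for the MPLS label described above.

```python
def apply_label(packets, label):
    """Block 270: insert the label in each packet's header field."""
    for pkt in packets:
        pkt["header"]["label"] = label
    return packets

def forward(pkt, forwarding_table):
    """Block 280: propagate per the node's configuration information,
    i.e., hand the packet to the node mapped to its label."""
    return forwarding_table[pkt["header"]["label"]]

packets = [{"header": {}, "payload": b"frame-0"}]
apply_label(packets, 17002)            # label chosen for the routing criterion
next_hop = forward(packets[0], {17002: "N4"})
```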
FIG. 3 is a block diagram of a device 300 enabled with one or more components of a device (e.g., the network controller 20 shown in FIG. 1A, and/or one or more of the network nodes N1 . . . N9 shown in FIGS. 1A-1F) in accordance with some implementations. While certain specific features are illustrated, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein. To that end, as a non-limiting example, in some implementations the device 300 includes one or more processing units (CPUs) 302, a network interface 303, a programming interface 305, a memory 306, and one or more communication buses 304 for interconnecting these and various other components. - In some implementations, the
network interface 303 is provided to, among other uses, establish and maintain a metadata tunnel between a cloud hosted network management system and at least one private network including one or more compliant devices. In some implementations, the communication buses 304 include circuitry that interconnects and controls communications between system components. The memory 306 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 306 optionally includes one or more storage devices remotely located from the CPU(s) 302. The memory 306 comprises a non-transitory computer readable storage medium. - In some implementations, the
memory 306 or the non-transitory computer readable storage medium of the memory 306 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 308, a routing criterion determination module 310, a node/link identification module 320, a route determination module 330, and a configuration module 340. In various implementations, the routing criterion determination module 310 determines a routing criterion that is to be used to transmit a set of data packets across a network. To that end, the routing criterion determination module 310 includes instructions 310a, and heuristics and metadata 310b. In various implementations, the node/link identification module 320 identifies network nodes and communication links in the network that satisfy the routing criterion. To that end, the node/link identification module 320 includes instructions 320a, and heuristics and metadata 320b. In various implementations, the route determination module 330 determines a route for the set of data packets through the network nodes and the communication links that satisfy the routing criterion. To that end, the route determination module 330 includes instructions 330a, and heuristics and metadata 330b. In various implementations, the configuration module 340 configures the network nodes that are on the route with configuration information that allows the set of data packets to propagate along the route. To that end, the configuration module 340 includes instructions 340a, and heuristics and metadata 340b. - In some implementations, the
method 200, the method 250 and/or the device 300 enable a routing criterion (e.g., a routing algorithm, for example, an IGP prefix SID algorithm) to be defined on a per-deployment basis. For example, in some implementations, a flexible algorithm K is defined as controlling (e.g., reducing, for example, minimizing) a particular target metric (e.g., an IGP metric, a TE metric, or other network performance metrics such as latency). In some implementations, the flexible algorithm K further defines a set of one or more restrictions (e.g., exclusionary constraints or excluded resources). In some implementations, the set of restrictions are identified by their Shared Risk Link Groups (SRLG), TE affinity and/or Internet Protocol (IP) address. - In various implementations, the
method 200, the method 250 and/or the device 300 allow different operators to define different routing criteria (e.g., different routing algorithms). For example, in some implementations, an operator K that controls one or more network nodes defines a first routing criterion as controlling an IGP metric (e.g., reducing the IGP metric, for example, minimizing the IGP metric) and avoiding a particular SRLG (e.g., avoiding SRLG1). In some implementations, an operator J that controls one or more network nodes defines a second routing criterion as controlling a TE metric (e.g., reducing the TE metric, for example, minimizing the TE metric) and avoiding TE affinity 1. - In various implementations, the
method 200, the method 250 and/or the device 300 enable support for different routing criteria. For example, in some implementations, the method 200, the method 250 and/or the device 300 enable support for a routing criterion (e.g., ALG 11) that controls an IGP metric (e.g., reduces the IGP metric, for example, minimizes the IGP metric) and excludes TE affinity 2. In some implementations, the method 200, the method 250 and/or the device 300 enable support for another routing criterion (e.g., ALG 12) that controls the IGP metric and excludes TE affinity 1. In some implementations, a set of Type, Length and Value elements (TLVs) is utilized to encode the defining characteristics of a routing criterion. In various implementations, the method 200, the method 250 and/or the device 300 allow network-wide automation of the assignment/modification of the routing criteria. - In various implementations, the
method 200, the method 250 and/or the device 300 allow network nodes to indicate (e.g., advertise, for example, broadcast) the definition of their respective routing criterion. For example, in some implementations, the device 300 advertises the example routing criteria ALG 11 and ALG 12 as: - ALG 11=control (e.g., reduce, for example, minimize) IGP metric, exclude TE affinity 2, Color 1
ALG 12=control (e.g., reduce, for example, minimize) IGP metric, exclude TE affinity 1, Color 2 - In some implementations, the device 300 (e.g., the node/
link identification module 320 and/or the route determination module 330) detects inconsistencies between network nodes that support the same routing criterion. - In some implementations, the router capability (RC) of network nodes indicates the definition of the routing criteria supported by the network nodes. For example, in some implementations, the router capability of network nodes M and N indicates that the network nodes M and N support ALG 11:
- RC of network node M indicates ALG 11=control IGP metric, exclude TE affinity 2
RC of network node N indicates ALG 11=control IGP metric, exclude TE affinity 2 - Since network nodes M and N indicate the same definition for ALG 11, the device 300 (e.g., the node/
link identification module 320 and/or the route determination module 330) determines that the definition of the routing criterion supported by the network nodes M and N is consistent. In some implementations, the network nodes utilize a new RC TLV to indicate the definition of the routing criterion that the network nodes support. - In various implementations, the
device 300 determines whether a network node N is enabled for (e.g., supports) a particular routing criterion (e.g., ALG(K)). If the device 300 determines that the network node N does not support ALG(K), the device 300 does not include the network node N in the route. For example, in some implementations, the network node N does not compute ALG(K) Dijkstra and does not install ALG(K) prefix SID. If the device 300 determines that the network node N supports ALG(K), the device 300 prunes all the nodes that do not support ALG(K), prunes all the communication links falling under the exclude constraints defined for ALG(K), computes Dijkstra on the resulting topology according to the target metric associated with ALG(K), and installs the prefix SID according to the computed Dijkstra shortest route for any prefix leaf with an ALG(K) prefix SID. - In various implementations, the
device 300 determines a backup route (e.g., a secondary route, for example, a Fast Reroute (FRR) backup route). In some implementations, the backup route is associated with (e.g., respects) the same characteristics (e.g., constraints) as the route (e.g., the primary route). For example, in some implementations, the backup route is determined based on the same routing criterion as the primary route. In some implementations, to determine (e.g., compute) the backup route for the Prefix SID S of ALG(K), the device 300 executes a Topology-Independent Loop-Free Alternate (TI-LFA) algorithm on the topology T′(K), where T′(K) is T(K) minus the resource protected with TI-LFA (e.g., link, node, SRLG). In some implementations, the post-convergence backup route is encoded with SIDs associated with ALG(K). - In some implementations, the
device 300 provides automated steering of service traffic on the IGP prefix SID with the routing criterion implementing the service level agreement (SLA) associated with (e.g., required by) the service route. In some implementations, when a provider edge (PE) receives a Border Gateway Protocol (BGP)/Service route R via N with Color Extended Community C, the PE installs R via S, where S is the Prefix SID of N for Alg(K) mapped to C. In such implementations, there is automated steering of BGP/Service routes onto prefix SIDs associated with the routing criterion. The following is a non-limiting example of automated steering of BGP/Service routes: - PE receives BGP route 1/8 via 2.2.2.2 with color 1
2.2.2.2 is advertised in IGP with Prefix SID 17002 for a particular routing criterion (e.g., ALG 11)
ALG 11 is defined by the Mapping Server as "IGP metric, exclude TE affinity 2, color 1" - In the above example, PE installs 1/8 via 17002 because 17002 is the Prefix SID of 2.2.2.2 according to ALG 11. In the above example, ALG 11 is bound to color 1.
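The steering example above reduces to two lookups: Color Extended Community to routing criterion, then (next hop, criterion) to Prefix SID. The sketch below restates the example's numbers; the table shapes and function name are assumptions, not the patent's implementation.

```python
# Color Extended Community -> routing criterion bound to that color
COLOR_TO_ALG = {1: "ALG 11"}
# (BGP next hop, routing criterion) -> Prefix SID advertised in the IGP
PREFIX_SID = {("2.2.2.2", "ALG 11"): 17002}

def steer(prefix, next_hop, color):
    """Install the BGP/Service route via the Prefix SID of its next hop
    for the routing criterion mapped to the route's color."""
    alg = COLOR_TO_ALG[color]
    return {prefix: PREFIX_SID[(next_hop, alg)]}

rib_entry = steer("1/8", "2.2.2.2", color=1)   # PE installs 1/8 via 17002
```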
- In some implementations, a single SID is utilized to encode the shortest route instead of N SIDs. In some implementations, the
device 300 encodes the shortest route as a list of N SIDs of algorithm zero. In some implementations, the SID list size is the primary data plane constraint for a segment routing (SR) deployment. - In some implementations, the
device 300 enables flexible and customized configuration of the network nodes and/or the communication links. In some implementations, the device 300 supports dual-plane policies. In some implementations, the device 300 encodes planes differently (e.g., any TE affinity value can be used, and/or any SRLG value can be used). In various implementations, the device 300 enables network operators, network nodes and/or communication links to define their own routing criterion. - In some implementations, the
device 300 enables network-wide automation of adopting/modifying routing criteria. In some implementations, a mapping server extension is defined to distribute the definition of routing criteria (e.g., an IGP algorithm) across all the network nodes of the domain/area. In some implementations, the device 300 detects inconsistent definitions of a routing criterion. In some implementations, a router capability extension is defined to indicate (e.g., advertise) the definition of a routing criterion for the network nodes. In some implementations, if network nodes supporting the same routing criterion do not indicate the same definition for the routing criterion, the device 300 detects an inconsistency. In some implementations, the device 300 enables automated steering of service flows onto the prefix SID associated with the routing criterion. In some implementations, the device 300 determines a backup route that is associated with (e.g., follows or respects) the same constraints as the primary route. - While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
- It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without changing the meaning of the description, so long as all occurrences of the "first contact" are renamed consistently and all occurrences of the second contact are renamed consistently. The first contact and the second contact are both contacts, but they are not the same contact.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the embodiments and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Claims (1)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/360,283 US20210377162A1 (en) | 2017-06-30 | 2021-06-28 | Malleable routing for data packets |
US17/685,929 US20220272032A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,857 US20220191133A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,986 US20220191134A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762527611P | 2017-06-30 | 2017-06-30 | |
US15/986,174 US11050662B2 (en) | 2017-06-30 | 2018-05-22 | Malleable routing for data packets |
US17/360,283 US20210377162A1 (en) | 2017-06-30 | 2021-06-28 | Malleable routing for data packets |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/986,174 Continuation US11050662B2 (en) | 2017-06-30 | 2018-05-22 | Malleable routing for data packets |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/685,986 Continuation US20220191134A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,929 Continuation US20220272032A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,857 Continuation US20220191133A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210377162A1 true US20210377162A1 (en) | 2021-12-02 |
Family
ID=64738435
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/986,174 Active US11050662B2 (en) | 2017-06-30 | 2018-05-22 | Malleable routing for data packets |
US17/360,283 Pending US20210377162A1 (en) | 2017-06-30 | 2021-06-28 | Malleable routing for data packets |
US17/685,929 Pending US20220272032A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,986 Abandoned US20220191134A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,857 Pending US20220191133A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/986,174 Active US11050662B2 (en) | 2017-06-30 | 2018-05-22 | Malleable routing for data packets |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/685,929 Pending US20220272032A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,986 Abandoned US20220191134A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
US17/685,857 Pending US20220191133A1 (en) | 2017-06-30 | 2022-03-03 | Malleable routing for data packets |
Country Status (1)
Country | Link |
---|---|
US (5) | US11050662B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230012242A1 (en) * | 2021-07-08 | 2023-01-12 | T-Mobile Usa, Inc. | Intelligent route selection for low latency services |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10439927B2 (en) * | 2018-01-31 | 2019-10-08 | International Business Machines Corporation | Distributed storage path configuration |
CN113366804A (en) * | 2019-01-24 | 2021-09-07 | 瑞典爱立信有限公司 | Method and system for preventing micro-loops during network topology changes |
CN113615133B (en) * | 2019-03-20 | 2024-06-21 | 华为技术有限公司 | Method, node and system for performing optimal routing in inter-area SRMPLS IGP network |
CN113691445B (en) * | 2020-05-18 | 2022-12-02 | 华为技术有限公司 | Message forwarding backup path determining method and related equipment |
CN115987866A (en) * | 2020-05-26 | 2023-04-18 | 华为技术有限公司 | Notification information processing method and device and storage medium |
TWI733560B (en) * | 2020-08-13 | 2021-07-11 | 瑞昱半導體股份有限公司 | Switch and switch network system thereof |
CN114172836B (en) * | 2020-08-19 | 2024-05-14 | 瞻博网络公司 | Route reflector, computer readable medium and method for route reflection |
US20220060413A1 (en) * | 2020-08-19 | 2022-02-24 | Juniper Networks, Inc. | Utilizing flex-algorithms with route reflection |
CN115514640A (en) * | 2021-06-22 | 2022-12-23 | 华为技术有限公司 | Method and device for determining path |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150026313A1 (en) * | 2013-07-19 | 2015-01-22 | Dell Products L.P. | Data center bridging network configuration and management |
US9559985B1 (en) * | 2014-01-28 | 2017-01-31 | Google Inc. | Weighted cost multipath routing with intra-node port weights and inter-node port weights |
US20170171066A1 (en) * | 2015-12-09 | 2017-06-15 | Alcatel-Lucent Usa, Inc. | Optimizing restoration with segment routing |
US20180006931A1 (en) * | 2016-06-30 | 2018-01-04 | Alcatel-Lucent Canada Inc. | Near-real-time and real-time communications |
US20180088746A1 (en) * | 2016-09-26 | 2018-03-29 | Microsoft Technology Licensing, Llc | Navigation in augmented reality via a transient user interface control |
US20180167458A1 (en) * | 2016-12-13 | 2018-06-14 | Alcatel-Lucent Canada Inc. | Discovery of ingress provider edge devices in egress peering networks |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080301053A1 (en) * | 2007-05-29 | 2008-12-04 | Verizon Services Organization Inc. | Service broker |
US20090154699A1 (en) * | 2007-12-13 | 2009-06-18 | Verizon Services Organization Inc. | Network-based data exchange |
US9609575B2 (en) * | 2012-12-31 | 2017-03-28 | T-Mobile Usa, Inc. | Intelligent routing of network packets on telecommunication devices |
US9537769B2 (en) | 2013-03-15 | 2017-01-03 | Cisco Technology, Inc. | Opportunistic compression of routing segment identifier stacks |
US9853888B2 (en) * | 2013-07-15 | 2017-12-26 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and arrangements for QoS-aware routing in a LI system |
US10062036B2 (en) * | 2014-05-16 | 2018-08-28 | Cisco Technology, Inc. | Predictive path characteristics based on non-greedy probing |
AU2014401818B2 (en) * | 2014-07-24 | 2018-01-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Segment routing in a multi-domain network |
US10244076B2 (en) * | 2014-12-29 | 2019-03-26 | Verizon Patent And Licensing Inc. | Secure cloud interconnect private routing |
US9800507B2 (en) * | 2015-02-10 | 2017-10-24 | Verizon Patent And Licensing Inc. | Application-based path computation |
US10171338B2 (en) | 2016-02-08 | 2019-01-01 | Cisco Technology, Inc. | On-demand next-hop resolution |
US10270691B2 (en) * | 2016-02-29 | 2019-04-23 | Cisco Technology, Inc. | System and method for dataplane-signaled packet capture in a segment routing environment |
US10142243B2 (en) * | 2016-09-12 | 2018-11-27 | Citrix Systems, Inc. | Systems and methods for quality of service reprioritization of compressed traffic |
US10382323B1 (en) * | 2016-12-29 | 2019-08-13 | Juniper Networks, Inc. | Flooding-based routing protocol having label switched path session information |
2018
- 2018-05-22 US US15/986,174 patent/US11050662B2/en active Active

2021
- 2021-06-28 US US17/360,283 patent/US20210377162A1/en active Pending

2022
- 2022-03-03 US US17/685,929 patent/US20220272032A1/en active Pending
- 2022-03-03 US US17/685,986 patent/US20220191134A1/en not_active Abandoned
- 2022-03-03 US US17/685,857 patent/US20220191133A1/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230012242A1 (en) * | 2021-07-08 | 2023-01-12 | T-Mobile Usa, Inc. | Intelligent route selection for low latency services |
US12088495B2 (en) * | 2021-07-08 | 2024-09-10 | T-Mobile Usa, Inc. | Intelligent route selection for low latency services |
Also Published As
Publication number | Publication date |
---|---|
US20220272032A1 (en) | 2022-08-25 |
US20220191133A1 (en) | 2022-06-16 |
US20190007305A1 (en) | 2019-01-03 |
US20220191134A1 (en) | 2022-06-16 |
US11050662B2 (en) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210377162A1 (en) | Malleable routing for data packets | |
US10637686B2 (en) | Capability aware routing | |
CN107078966B (en) | Method and apparatus for assigning receiver identifiers and automatically determining tree attributes | |
US10212088B2 (en) | Tactical traffic engineering based on segment routing policies | |
US10158561B2 (en) | Data plane learning of bi-directional service chains | |
US9680751B2 (en) | Methods and devices for providing service insertion in a TRILL network | |
EP3012999B1 (en) | Method, apparatus and system for creating virtual interfaces based on network characteristics | |
US9450874B2 (en) | Method for internet traffic management using a central traffic controller | |
US9369347B2 (en) | Service to node resolution | |
US10397044B2 (en) | Network function virtualization (“NFV”) based communications network resilience | |
WO2016108140A1 (en) | Ccn fragmentation gateway | |
US9584422B2 (en) | Methods and apparatuses for automating return traffic redirection to a service appliance by injecting traffic interception/redirection rules into network nodes | |
US20200314016A1 (en) | Tunneling inter-domain stateless internet protocol multicast packets | |
CN112822106A (en) | Segment routing service processing method, device, source node and storage medium | |
Papán et al. | The survey of current IPFRR mechanisms | |
US20140254423A1 (en) | System and method for improved routing in autonomous systems | |
US20210352012A1 (en) | Method for creating inter-domain bidirectional tunnel, communication method and device, and storage medium | |
WO2019212678A1 (en) | Explicit backups and fast re-route mechanisms for preferred path routes in a network | |
WO2018095438A1 (en) | Method and device for processing equal cost multi-path (ecmp) | |
JP6751059B2 (en) | Communication system and flow control method | |
JP2024537477A (en) | Method for receiving BGP-intent route and method for advertising BGP-intent route | |
CN116743649A (en) | Method, device, medium and equipment for expanding message segment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FILSFILS, CLARENCE;PSENAK, PETER;CLAD, FRANCOIS;AND OTHERS;SIGNING DATES FROM 20180415 TO 20180622;REEL/FRAME:057274/0707 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STCV | Information on status: appeal procedure |
Free format text: NOTICE OF APPEAL FILED |
|