US20130083660A1 - Per-Group ECMP for Multidestination Traffic in DCE/TRILL Networks
- Publication number
- US20130083660A1 (application US13/251,957; priority US201113251957A)
- Authority
- US
- United States
- Prior art keywords
- group
- parent
- information
- ecmp
- per
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/185—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with management of multicast group membership
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/24—Multipath
- H04L45/245—Link aggregation, e.g. trunking
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/32—Flooding
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/66—Layer 2 routing, e.g. in Ethernet based MAN's
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B70/00—Technologies for an efficient end-user side electric power management and consumption
- Y02B70/30—Systems integrating technologies related to power network operation and communication or information technologies for improving the carbon footprint of the management of residential or tertiary loads, i.e. smart grids as climate change mitigation technology in the buildings sector, including also the last stages of power distribution and the control, monitoring or operating management systems at local level
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
Consistent with embodiments of the present disclosure, systems and methods are disclosed for providing per-group ECMP for multidestination traffic in a DCE/TRILL network. Embodiments enable per-group load balancing of multidestination traffic in DCE/L2MP networks by creating a new IS-IS PDU to convey the affinity of the parent node for a given multicast group. For broadcast and unknown unicast flooded traffic, the load balancing may be done on a per-vlan basis.
Description
- Fabricpath/Transparent Interconnection of Lots of Links ("FP/TRILL") provides, among other benefits, a multipathing solution in a Layer 2 network. Multipathing is provided for unicast traffic through Equal-Cost Multi-Path Routing ("ECMP"). For unknown unicast, broadcast, and multicast traffic (henceforth referred to as multidestination traffic), multipathing is provided by using multiple trees, with each tree rooted at a different switch. The use of multiple trees may be expensive to maintain in terms of both software and hardware resources. Therefore, there exists a need to take the graphs constructed for unicast traffic and use them for multidestination traffic as well. This not only provides a way of using ECMP for multidestination traffic, but also uses fewer resources and unified control plane constructs for unicast and multidestination traffic.
- The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various embodiments. In the drawings:
- FIG. 1 is a block diagram of a network device operable according to embodiments of this disclosure;
- FIG. 2 is a flow chart of a method according to embodiments of this disclosure;
- FIG. 3 is a flow chart of a method according to embodiments of this disclosure;
- FIG. 4 is a flow chart of a method according to embodiments of this disclosure.
- Consistent with embodiments of the present disclosure, systems and methods are disclosed for providing per-group ECMP for multidestination traffic in a FP/TRILL network. Embodiments enable per-group load balancing of multidestination traffic in FP/TRILL networks by creating a new IS-IS PDU to convey the affinity of the parent node for a given multicast group. For broadcast and unknown unicast flooded traffic, the load balancing may be done on a per-vlan basis.
- It is to be understood that both the foregoing general description and the following detailed description are examples and explanatory only, and should not be considered to restrict the application's scope, as described and claimed. Further, features and/or variations may be provided in addition to those set forth herein. For example, embodiments of the present disclosure may be directed to various feature combinations and sub-combinations described in the detailed description.
- The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While embodiments of this disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the appended claims.
- FP/TRILL may provide Layer 2 multicast multipathing by creating a plurality of trees for multidestination packets. This may provide for multipathing on a per-flow basis, but can require extra hardware and software resources. Classical Layer 2 Ethernet (using Per-Vlan Rapid Spanning Tree Protocol ("PVRSTP"), for instance) can construct a spanning tree for each vlan and provide multiple paths, but only at a per-vlan granularity. For Layer 3 multicast, there have been solutions that provide multicast multipathing on a per-group basis using Protocol Independent Multicast ("PIM"). These solutions require a tree, constructed by PIM protocol packets, for multicast, but allow for ECMP towards the PIM RP or Source Router. However, none of these prior approaches take the graphs constructed for unicast traffic and use them for multidestination traffic as well. The embodiments described herein use fewer resources and unified control plane constructs for unicast and multidestination traffic.
- It should be understood that FP/TRILL may use an extension of Intermediate System to Intermediate System ("IS-IS") as its routing protocol. In described embodiments, it may be necessary to propagate a special type of Link State PDU ("LSP") called a Group Membership LSP ("GM-LSP"). A GM-LSP may be an LSP that conveys the per-VLAN Layer 2 multicast addresses derived from IPv4 IGMP or IPv6 MLD notification messages received from attached nodes in the vlan, indicating the location of listeners for these multicast addresses. Since the LSPs are distributed by reliable flooding, all the FP/TRILL nodes in the network have information on where the different multicast group listeners are located in the network.
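- For context, the per-VLAN Layer 2 multicast address mentioned above is conventionally derived from the IPv4 group address using the RFC 1112 mapping (01:00:5e plus the low 23 bits of the group address). A minimal sketch of that standard mapping, not itself part of the disclosure:

```python
def ipv4_group_to_mac(group: str) -> str:
    """RFC 1112: map an IPv4 multicast group address to its IEEE 802
    multicast MAC (01:00:5e + the low 23 bits of the group address)."""
    o = [int(x) for x in group.split(".")]
    return "01:00:5e:%02x:%02x:%02x" % (o[1] & 0x7F, o[2], o[3])

# Example: ipv4_group_to_mac("239.1.1.1") -> "01:00:5e:01:01:01"
```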
- Unlike Layer 3 PIM multicast, the trees constructed for multipathing in FP/TRILL may not be driven by any control plane/data packets. In FP/TRILL, a certain number of switches are selected as roots and trees are created using those nodes (switches) as roots. The FP/TRILL dataplane packets may carry extra encapsulation that identifies both the source of the multicast and the chosen tree which the multidestination packet must traverse.
- Embodiments described herein, instead of using trees for multicast, may use ECMPs. The ECMPs constructed for unicast traffic forwarding may also be used for multidestination traffic forwarding. With ECMPs, a switch can have multiple parent switches for a given source switch (compared to trees, where there is just one parent). It should be ensured that a child switch receives at most one copy of the frame; that is, for a given source switch and a given group, each switch should have only one parent that may forward it a copy.
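- To make the parent relationship concrete, the following is a minimal sketch, assuming unit link costs (an IS-IS SPF with real metrics is analogous), of deriving each switch's set of equal-cost parents for a given source switch from the unicast graph. Any switch whose parent set holds more than one entry is exactly the case where a single forwarding parent must be chosen per group:

```python
from collections import deque

def ecmp_parents(adj, source):
    """For every switch, collect each upstream neighbor lying on an
    equal-cost shortest path from `source` (BFS over unit-cost links)."""
    dist, parents = {source: 0}, {source: set()}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:               # first shortest path found
                dist[v] = dist[u] + 1
                parents[v] = {u}
                q.append(v)
            elif dist[v] == dist[u] + 1:    # another equal-cost path
                parents[v].add(u)
    return parents
```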
- To accomplish this, embodiments herein propose a new PDU extension to IS-IS, similar to the extensions proposed in "Extensions to IS-IS for Layer-2 Systems" (http://tools.ietf.org/html/draft-ietf-isis-layer2-03). In addition to GM-LSP flooding, a switch with interested multicast receivers for a multicast group uses this PDU (also referred to as the Group Parent Select PDU, or GPS-PDU) to indicate to the parent switches for a given source switch which one of its multiple parents (when multiple parents exist for the switch) should send it traffic for a given multicast group address.
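- The disclosure fixes what the GPS-PDU must convey, not its wire format. A hypothetical sketch of the payload and how it might be distributed (all field and function names here are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class GpsPdu:
    """Illustrative GPS-PDU contents; only the conveyed information
    is specified by the disclosure, not this layout."""
    source_switch: str   # source switch whose ECMP graph applies
    vlan: int            # VLAN in which the membership was learned
    group_mac: str       # Layer 2 multicast group address
    chosen_parent: str   # the one parent that should forward copies

def signal_parent_choice(pdu: GpsPdu, parents, send):
    # Every candidate parent learns the choice, so unchosen parents
    # can prune this (group, source switch) from their outgoing lists.
    for parent in parents:
        send(parent, pdu)
```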
- In the forwarding plane, there will no longer be a requirement to use a tree identifier in the data packet (the FTAG in FP, or the distribution tree's RBridge nickname in TRILL), as there are no longer different trees for multidestination multipathing. In some embodiments, a special tree identifier can be used to indicate that these data packets are using the enhanced protocol, to facilitate interoperability between embodiments and the present scheme used in FP/TRILL. (Nicknames from 0xFFC0 through 0xFFFF, as well as 0x0000, are reserved nicknames in TRILL; one of these may be reserved for the special identifier.)
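- As a small illustration of that reservation (the reserved values are per RFC 6325; the specific identifier chosen below is hypothetical):

```python
def is_reserved_trill_nickname(nick: int) -> bool:
    """Nickname 0x0000 and the range 0xFFC0-0xFFFF are reserved in TRILL."""
    return nick == 0x0000 or 0xFFC0 <= nick <= 0xFFFF

ENHANCED_PROTOCOL_ID = 0xFFC1  # hypothetical pick from the reserved range
assert is_reserved_trill_nickname(ENHANCED_PROTOCOL_ID)
```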
- The outgoing interface list computed at an intermediate parent for a given multicast group may simply include the ECMP path that was signaled by the PDU. In the forwarding plane, the Incoming Interface Check (IIC) in FP, or the Reverse Path Forwarding check (RPF check) in TRILL, is modified such that the check is performed on a per-(multicast group, source switch) basis, instead of on a per-source switch (or, in TRILL, per-ingress RBridge) basis as done in current FP/TRILL implementations.
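- A minimal sketch of the modified check (the table shape is an assumption; the disclosure fixes only that the lookup key becomes the (multicast group, source switch) pair):

```python
# Allowed incoming interface, keyed by (group, source switch) rather
# than by source switch alone as in current FP/TRILL implementations.
rpf_table: dict[tuple[str, str], str] = {}

def accept_frame(group: str, source_switch: str, in_iface: str) -> bool:
    """Accept a multidestination frame only if it arrived on the
    interface toward the parent chosen for this (group, source)."""
    return rpf_table.get((group, source_switch)) == in_iface
```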
- The GPS-PDU may be sent between a switch and its parent switches in an ECMP graph, for each switch in the network. First, each switch in the network may choose a parent switch from which to accept the traffic for a given group. Next, the parent choice for a group is centralized at each switch.
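- The disclosure leaves the actual choice to local policy information. One plausible policy, shown as a sketch (an assumption, not the mandated behavior), hashes the (source switch, group) pair over the sorted parent set, so that different groups spread across different parents deterministically:

```python
import hashlib

def choose_parent(parents, source_switch: str, group_mac: str) -> str:
    """Deterministically pick one parent per (source switch, group);
    a stable hash keeps the choice consistent across recomputation."""
    key = f"{source_switch}|{group_mac}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return sorted(parents)[h % len(parents)]
```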
- FIG. 1 is a block diagram of a system including network device 100. Embodiments of per-group ECMP for multidestination traffic in FP/TRILL networks may be implemented in one or more network devices, such as network device 100 of FIG. 1. In embodiments, network device 100 may be a network switch, network router, or other suitable network device. Any suitable combination of hardware, software, or firmware may be used to implement embodiments of per-group ECMP for multidestination traffic in FP/TRILL networks. For example, embodiments of per-group ECMP for multidestination traffic in FP/TRILL networks may be implemented with network device 100 or any of other network devices 118. The aforementioned system, device, and processors are examples, and other systems, devices, and processors may comprise the aforementioned memory storage and processing unit, consistent with embodiments of per-group ECMP for multidestination traffic in FP/TRILL networks.
- With reference to FIG. 1, a system consistent with embodiments of per-group ECMP for multidestination traffic in FP/TRILL networks may include a network device, such as network device 100. In a basic configuration, network device 100 may include at least one processing unit 102 and a system memory 104. Depending on the configuration and type of network device, system memory 104 may comprise, but is not limited to, volatile memory (e.g., random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM)), flash memory, or any combination. System memory 104 may include operating system 105, one or more programming modules 106, and may include program data 107. Operating system 105, for example, may be suitable for controlling network device 100's operation. Furthermore, embodiments of per-group ECMP for multidestination traffic in FP/TRILL networks may be practiced in conjunction with a graphics library, other operating systems, or any other application program, and are not limited to any particular application or system. This basic configuration is illustrated in FIG. 1 by those components within a dashed line 108.
- Network device 100 may have additional features or functionality. For example, network device 100 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 1 by a removable storage 109 and a non-removable storage 110. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. System memory 104, removable storage 109, and non-removable storage 110 are all computer storage media examples (i.e., memory storage). Computer storage media may include, but is not limited to, RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store information and which can be accessed by network device 100. Any such computer storage media may be part of device 100. Network device 100 may also have input device(s) 112 such as a keyboard, a mouse, a pen, a sound input device, a touch input device, etc. Output device(s) 114 such as a display, speakers, a printer, etc. may also be included. The aforementioned devices are examples and others may be used.
- Network device 100 may also contain a communication connection 116 that may allow network device 100 to communicate with other network devices 118, such as over a network in a distributed network environment, for example, an intranet or the Internet. Communication connection 116 is one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, radio frequency (RF), infrared, and other wireless media. The term computer readable media as used herein may include both storage media and communication media.
- As stated above, a number of program modules and data files may be stored in system memory 104, including operating system 105. While executing on processing unit 102, programming modules 106 may perform processes including, for example, one or more stages of methods 200, 300, or 400 as described below.
- FIG. 2 is a flow chart illustrating the steps to support multipathing across multicast groups according to embodiments described herein. The method may begin at step 200, where the unicast ECMP graph is initially obtained. The method may then proceed to step 210, where, for each switch in the network, the available paths and the available parents for that switch are identified by using the unicast ECMP graph.
- The method may next proceed to step 220, where, as group membership information is learned (using IGMP or MLD), local policy information may be used to inform the parents of the chosen parent for this group. The method may then advance to step 230, where the group membership information is flooded via GM-LSP as required by FP/TRILL.
- Next, at step 240, the parents and child may enforce the selection made, using forwarding constructs. Finally, at step 250, if there is a change in the ECMP graph, needed adjustments may be made after re-obtaining the ECMP graph.
- In some embodiments, there could be some multidestination groups, such as broadcast or floods due to an unknown L2 unicast packet. These multidestination packets share the same group address, and so they would not benefit from multipathing on a per-group basis. In these scenarios, the parent may be chosen based on VLAN for these special groups. This would be over and above the selection of a parent based on multicast groups for the same vlan. Thus, multipathing for multicast groups within a given vlan, and multipathing for constant-group multidestination traffic across different vlans, may still be achieved.
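- A sketch of that per-VLAN choice for the constant-group traffic (the policy shown is an assumption, mirroring the per-group selection idea above):

```python
import hashlib

def choose_flood_parent(parents, source_switch: str, vlan: int) -> str:
    """Broadcast and unknown-unicast floods share one group address,
    so spread them across ECMP parents per VLAN instead of per group."""
    key = f"{source_switch}|vlan:{vlan}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return sorted(parents)[h % len(parents)]
```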
- Current FP/TRILL-capable hardware may support embodiments of multipathing across groups without performing the IIC/RPF check. A small but significant ASIC change is needed to perform the modified IIC/RPF checks using the multicast group address. This involves checking the (multicast group, source switch ID) information against the allowed list of incoming interfaces. This is a version of the Reverse Path Forwarding (RPF) check, and is meant to avoid loops and duplicates.
- It should be noted that the current implementation of DCE/TRILL may provide multidestination multipathing at a flow level (at the extra cost of more trees, not all paths necessarily being used, etc.). The present disclosure reduces hardware resource usage, and allows for multidestination multipathing at a per-group level by enabling per-group load balancing of multidestination traffic in DCE/L2MP networks, using a new IS-IS PDU to convey the affinity of the parent node for a given multicast group. For broadcast and unknown unicast flooded traffic, load balancing may be done on a per-vlan basis.
- Enabling the use of the unicast ECMP graph for multidestination traffic may eliminate the software and hardware complexity of maintaining multiple trees for the load balancing of multidestination traffic. This also has the added benefit of faster convergence when there is a change in network topology.
- In some embodiments, the receiver of the multicast traffic may need to ensure that it accepts traffic only on the interfaces on which its control plane, at a given time, indicates the traffic should be accepted. In these embodiments, an RPF check is required per group, per source switch. It should be understood that in most embodiments the RPF checks can be considered optional.
- For example, say there are 100 switches in a network, and for multicast multipathing there are 5 different trees. In this example, there may be 100 multicast groups. The RPF check table in current implementations of DCE/TRILL would then require 500 entries in each of the switches. However, the number of RPF check entries needed in embodiments described herein is quite different. At a given switch, if there is only one parent for a given source switch, then only one entry in the RPF check table is needed.
- If the table allows for masking of some lookup fields, then it would be possible to mask the group address and use only one entry. So, for cases such as these, the described embodiments result in better table utilization. For situations where there is more than one parent switch for a given source switch, the number of RPF entries needed may be as great as the number of multicast groups. However, in the majority of topologies, only a small group of switches will have multiple parents as seen from a given switch where the RPF entries are installed, and so the RPF entries would scale reasonably.
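- The sizing rule above can be condensed into a short sketch (the masking-capability flag is an assumption about the lookup hardware):

```python
def rpf_entries_per_source(num_groups: int, num_parents: int,
                           can_mask_group: bool) -> int:
    """RPF entries one switch needs for one source switch: a single
    parent collapses to one group-masked entry, while multiple parents
    may need one entry per multicast group in the worst case."""
    if num_parents == 1 and can_mask_group:
        return 1
    return num_groups

# Worked numbers from the text: with 100 groups, a single-parent source
# needs 1 entry, versus 5 per-tree entries per source (500 entries
# across 100 source switches) in tree-based DCE/TRILL.
```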
- FIG. 3 is a flow chart illustrating embodiments of the present disclosure. Method 300 may begin at step 310, where a unicast ECMP graph may be obtained. Once the unicast ECMP graph is obtained, method 300 may proceed to step 320. At step 320, available paths for a plurality of network devices may be identified from the unicast ECMP graph. For example, there may be a plurality of paths over which data can travel from a first network device to a second network device.
- Once available network paths have been determined, method 300 may proceed to step 330. At step 330, the parents for each of the network devices may be identified. Similarly, at step 340, group membership information for each of the identified network devices may be obtained. In some embodiments, the group membership information may be obtained using local policy information via the IGMP or MLD protocols. Single parent switches may be designated on a per-group basis.
- Next, at step 350, the identified parents are informed of their designated status, using chosen group parent information derived from the group membership information. In some embodiments, this may include sending a GPS-PDU between each of the plurality of network devices and its respective associated parent network devices. The GPS-PDU may indicate which of the multiple parents existing for a network device should receive traffic from the source switch for a given multicast group address. Following this, method 300 may proceed to step 360. At step 360, the chosen group parent information may be flooded to the other network devices via group membership LSP ("GM-LSP").
- Once the parent information has been distributed, method 300 may proceed to enforce selection of group information through forwarding constructs. In some embodiments, this enforcement may include selecting an associated parent network device to accept traffic for a designated group. In some embodiments, enforcement may include dynamically adjusting chosen group parent information upon notification of a change in the unicast ECMP graph. Furthermore, in some embodiments, the address associated with the designated group may be masked using any number of known methods.
- FIG. 4 is a flow chart illustrating embodiments of the present disclosure. Method 400 may start at step 410, where a PDU extension is established as an extension to the IS-IS protocol. In some embodiments, an identifier may be established to alert switching devices that an enhanced protocol is employed. This identifier may be stored in space reserved for TRILL nicknames. Once the PDU extension has been established, method 400 may proceed to step 420. At step 420, the network may flood information associated with the first PDU extension to a plurality of switches in a network, wherein each of the plurality of switches has interested multicast receivers for a first group.
- Here, in some embodiments, an outgoing interface list may be created at an intermediate parent switch. The outgoing interface list may include only the ECMP path signaled by the PDU. The list information may be employed to further modify an incoming interface check or a reverse path forwarding check, resulting in the check being done on a per-multicast group/per-source switch basis.
- Method 400 may then advance to step 430, where it may be indicated, via PDU information, which of a plurality of parent switches should send traffic associated with a given multicast address.
- Embodiments of the present disclosure provide many advantages over prior art systems, including aiding multicast multipathing across groups in the same VLAN. Embodiments further provide important multicast features such as resiliency, faster convergence, and redundancy. Also, embodiments described herein do not require software to build and maintain multiple trees, and they allow multicast forwarding to be derived from unicast forwarding information.
- Unlike current DCE/TRILL implementations, there is no requirement for embodiments of the present disclosure to compute multiple trees rooted at different switches for multidestination multipathing. If the underlying topology changes, then the link state protocol reconverges on the changed topology. If a given node determines that the list of parents for a given source switch has changed, it sends out a GPS-PDU to communicate the new mapping between the multicast groups and the parent interfaces.
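- A small sketch of that reconvergence hook (the callback shapes are assumptions):

```python
def on_spf_reconverged(source_switch, old_parents, new_parents,
                       groups, choose_parent, send_gps):
    """After the link state protocol reconverges, re-signal the
    group-to-parent mapping if the parent set for a source changed."""
    if set(old_parents) == set(new_parents):
        return
    for group in groups:
        parent = choose_parent(new_parents, source_switch, group)
        send_gps(new_parents, source_switch, group, parent)
```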
- Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of this disclosure. The functions/acts noted in the blocks may occur out of the order shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
- While certain embodiments of the disclosure have been described, other embodiments may exist. Furthermore, although embodiments of the present disclosure have been described as being associated with data stored in memory and other storage mediums, data can also be stored on or read from other types of computer-readable media, such as secondary storage devices, like hard disks, floppy disks, or a CD-ROM, a carrier wave from the Internet, or other forms of RAM or ROM. Further, the disclosed methods' stages may be modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the disclosure.
- All rights including copyrights in the code included herein are vested in and are the property of the Applicant. The Applicant retains and reserves all rights in the code included herein, and grants permission to reproduce the material only in connection with reproduction of the granted patent and for no other purpose.
- While the specification includes examples, the disclosure's scope is indicated by the following claims. Furthermore, while the specification has been described in language specific to structural features and/or methodological acts, the claims are not limited to the features or acts described above. Rather, the specific features and acts described above are disclosed as examples for embodiments of the disclosure.
Claims (20)
1. A method comprising:
obtaining a unicast ECMP graph;
identifying available paths for a plurality of network devices from the unicast ECMP graph;
identifying parents for each of the identified network devices;
obtaining group membership information for each of the identified network devices;
informing the identified parents of chosen group parent information derived from the group membership information;
flooding chosen group parent information via group membership LSP (“GM-LSP”); and
enforcing selection of group information through forwarding constructs.
2. The method of claim 1 , further comprising obtaining group membership information using local policy information via one of: IGMP or MLD.
3. The method of claim 1 , further comprising dynamically adjusting chosen group parent information upon notification of a change in the unicast ECMP graph.
4. The method of claim 1 , further comprising sending a GPS-PDU between each of the plurality of network devices and their respective associated parent network devices.
5. The method of claim 4 , further comprising each of the plurality of network devices selecting an associated parent network device to accept traffic for a designated group.
6. The method of claim 5 , further comprising masking the address associated with the designated group.
7. The method of claim 1 , further comprising ensuring that each of the plurality of network devices has only one associated parent network device for a given source switch.
8. The method of claim 7 , further comprising forwarding the only one associated parent network device information for a given source switch.
9. A network device comprising:
a processor configured to:
obtain a unicast ECMP graph;
perform load balancing based on a hash packet associated with the unicast ECMP graph;
select a unicast path based on load balancing; and
determine per-group based on the unicast ECMP graph which of a plurality of parent switches can send traffic directed to each group such that a single parent switch exists for each source switch.
10. The network device of claim 9 , wherein the processor is further configured to add an outgoing interface to a tree associated with the determined parent switch.
11. The network device of claim 10 , wherein the processor is further configured to add a PDU extension to an IS-IS protocol.
12. A method comprising:
establishing a first PDU extension;
flooding information associated with the first PDU extension to a plurality of switches in a network, wherein each of the plurality of switches has interested multicast receivers for a first group; and
indicating via PDU information which of a plurality of parent switches should send traffic associated with a given multicast address.
13. The method of claim 12 , further comprising establishing an identifier that alerts switching devices that an enhanced protocol is employed.
14. The method of claim 13 , further comprising storing the identifier in space reserved for TRILL nicknames.
15. The method of claim 12 , further comprising computing an outgoing interface list at an intermediate parent switch.
16. The method of claim 15 , wherein the outgoing interface list only includes an ECMP path signaled by the PDU.
17. The method of claim 16 , further comprising modifying one of: an incoming interface check or a reverse path forwarding check.
18. The method of claim 17 , wherein the modification of one of: the incoming interface check or the reverse path forwarding check results in the check being done on a per-multicast group/per-source switch basis.
19. The method of claim 12 , further comprising receiving multicast packets for a multidestination group, wherein the multicast packets share an identical group address.
20. The method of claim 19 , further comprising selecting a parent switch for the identical group address based on VLAN information associated with the group.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/251,957 US20130083660A1 (en) | 2011-10-03 | 2011-10-03 | Per-Group ECMP for Multidestination Traffic in DCE/TRILL Networks |
US14/949,915 US9876649B2 (en) | 2011-10-03 | 2015-11-24 | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
US15/827,273 US10205602B2 (en) | 2011-10-03 | 2017-11-30 | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/251,957 US20130083660A1 (en) | 2011-10-03 | 2011-10-03 | Per-Group ECMP for Multidestination Traffic in DCE/TRILL Networks |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/949,915 Division US9876649B2 (en) | 2011-10-03 | 2015-11-24 | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130083660A1 (en) | 2013-04-04
Family
ID=47992487
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/251,957 Abandoned US20130083660A1 (en) | 2011-10-03 | 2011-10-03 | Per-Group ECMP for Multidestination Traffic in DCE/TRILL Networks |
US14/949,915 Active 2032-03-20 US9876649B2 (en) | 2011-10-03 | 2015-11-24 | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
US15/827,273 Active US10205602B2 (en) | 2011-10-03 | 2017-11-30 | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/949,915 Active 2032-03-20 US9876649B2 (en) | 2011-10-03 | 2015-11-24 | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
US15/827,273 Active US10205602B2 (en) | 2011-10-03 | 2017-11-30 | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
Country Status (1)
Country | Link |
---|---|
US (3) | US20130083660A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9246821B1 (en) * | 2014-01-28 | 2016-01-26 | Google Inc. | Systems and methods for implementing weighted cost multi-path using two-level equal cost multi-path tables |
WO2018137682A1 (en) * | 2017-01-25 | 2018-08-02 | 新华三技术有限公司 | Entry creation for equal cost paths |
WO2018137677A1 (en) * | 2017-01-25 | 2018-08-02 | 新华三技术有限公司 | Establishment for table entry of equal-cost path |
US10205602B2 (en) | 2011-10-03 | 2019-02-12 | Cisco Technology, Inc. | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
US10237206B1 (en) * | 2017-03-05 | 2019-03-19 | Barefoot Networks, Inc. | Equal cost multiple path group failover for multicast |
WO2019114810A1 (en) * | 2017-12-15 | 2019-06-20 | Huawei Technologies Co., Ltd. | Method and system of packet aggregation |
US10404619B1 (en) | 2017-03-05 | 2019-09-03 | Barefoot Networks, Inc. | Link aggregation group failover for multicast |
US11310099B2 (en) | 2016-02-08 | 2022-04-19 | Barefoot Networks, Inc. | Identifying and marking failed egress links in data plane |
US11811902B2 (en) | 2016-02-08 | 2023-11-07 | Barefoot Networks, Inc. | Resilient hashing for forwarding packets |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020236302A1 (en) | 2019-05-23 | 2020-11-26 | Cray Inc. | Systems and methods for on the fly routing in the presence of errors |
US11296980B2 (en) | 2019-08-29 | 2022-04-05 | Dell Products L.P. | Multicast transmissions management |
US11290394B2 (en) | 2019-10-11 | 2022-03-29 | Dell Products L.P. | Traffic control in hybrid networks containing both software defined networking domains and non-SDN IP domains |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070025276A1 (en) * | 2005-08-01 | 2007-02-01 | Cisco Technology, Inc. | Congruent forwarding paths for unicast and multicast traffic |
US20070177594A1 (en) * | 2006-01-30 | 2007-08-02 | Juniper Networks, Inc. | Forming equal cost multipath multicast distribution structures |
US7310335B1 (en) * | 2000-09-06 | 2007-12-18 | Nokia Networks | Multicast routing in ad-hoc networks |
US20090037607A1 (en) * | 2007-07-31 | 2009-02-05 | Cisco Technology, Inc. | Overlay transport virtualization |
US20100020797A1 (en) * | 2006-12-14 | 2010-01-28 | Nortel Networks Limited | Method and apparatus for exchanging routing information and establishing connectivity across multiple network areas |
US20100061272A1 (en) * | 2008-09-04 | 2010-03-11 | Trilliant Networks, Inc. | System and method for implementing mesh network communications using a mesh network protocol |
US20100061269A1 (en) * | 2008-09-09 | 2010-03-11 | Cisco Technology, Inc. | Differentiated services for unicast and multicast frames in layer 2 topologies |
US20110299528A1 (en) * | 2010-06-08 | 2011-12-08 | Brocade Communications Systems, Inc. | Network layer multicasting in trill networks |
US20120320800A1 (en) * | 2011-06-17 | 2012-12-20 | International Business Machines Corporation | Mac Learning in a Trill Network |
US8509087B2 (en) * | 2010-05-07 | 2013-08-13 | Cisco Technology, Inc. | Per-graph link cost assignment in layer 2 multipath networks |
US20130254356A1 (en) * | 2010-11-30 | 2013-09-26 | Donald E. Eastlake, III | Systems and methods for recovery from network changes |
US20130279513A1 (en) * | 2010-12-11 | 2013-10-24 | Donald E. Eastlake, III | Systems and methods for pseudo-link creation |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009061973A1 (en) * | 2007-11-09 | 2009-05-14 | Blade Network Technologies, Inc. | Session-less load balancing of client traffic across servers in a server group |
US8014278B1 (en) * | 2007-12-17 | 2011-09-06 | Force 10 Networks, Inc | Adaptive load balancing between ECMP or LAG port group members |
US7684316B2 (en) * | 2008-02-12 | 2010-03-23 | Cisco Technology, Inc. | Multicast fast reroute for network topologies |
US8351429B2 (en) * | 2009-05-13 | 2013-01-08 | Avaya Inc. | Method and apparatus for providing fast reroute of a packet that may be forwarded on one of a plurality of equal cost multipath routes through a network |
US8619584B2 (en) * | 2010-04-30 | 2013-12-31 | Cisco Technology, Inc. | Load balancing over DCE multipath ECMP links for HPC and FCoE |
US8660005B2 (en) * | 2010-11-30 | 2014-02-25 | Marvell Israel (M.I.S.L) Ltd. | Load balancing hash computation for network switches |
US8630291B2 (en) * | 2011-08-22 | 2014-01-14 | Cisco Technology, Inc. | Dynamic multi-path forwarding for shared-media communication networks |
US20130083660A1 (en) | 2011-10-03 | 2013-04-04 | Cisco Technology, Inc. | Per-Group ECMP for Multidestination Traffic in DCE/TRILL Networks |
- 2011-10-03: US 13/251,957 filed; published as US20130083660A1 (status: abandoned)
- 2015-11-24: US 14/949,915 filed; granted as US9876649B2 (status: active)
- 2017-11-30: US 15/827,273 filed; granted as US10205602B2 (status: active)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7310335B1 (en) * | 2000-09-06 | 2007-12-18 | Nokia Networks | Multicast routing in ad-hoc networks |
US20070025276A1 (en) * | 2005-08-01 | 2007-02-01 | Cisco Technology, Inc. | Congruent forwarding paths for unicast and multicast traffic |
US20070177594A1 (en) * | 2006-01-30 | 2007-08-02 | Juniper Networks, Inc. | Forming equal cost multipath multicast distribution structures |
US20100020797A1 (en) * | 2006-12-14 | 2010-01-28 | Nortel Networks Limited | Method and apparatus for exchanging routing information and establishing connectivity across multiple network areas |
US20090037607A1 (en) * | 2007-07-31 | 2009-02-05 | Cisco Technology, Inc. | Overlay transport virtualization |
US20100061272A1 (en) * | 2008-09-04 | 2010-03-11 | Trilliant Networks, Inc. | System and method for implementing mesh network communications using a mesh network protocol |
US20100061269A1 (en) * | 2008-09-09 | 2010-03-11 | Cisco Technology, Inc. | Differentiated services for unicast and multicast frames in layer 2 topologies |
US8509087B2 (en) * | 2010-05-07 | 2013-08-13 | Cisco Technology, Inc. | Per-graph link cost assignment in layer 2 multipath networks |
US20110299528A1 (en) * | 2010-06-08 | 2011-12-08 | Brocade Communications Systems, Inc. | Network layer multicasting in trill networks |
US20130254356A1 (en) * | 2010-11-30 | 2013-09-26 | Donald E. Eastlake, III | Systems and methods for recovery from network changes |
US20130279513A1 (en) * | 2010-12-11 | 2013-10-24 | Donald E. Eastlake, III | Systems and methods for pseudo-link creation |
US20120320800A1 (en) * | 2011-06-17 | 2012-12-20 | International Business Machines Corporation | Mac Learning in a Trill Network |
Non-Patent Citations (9)
Title |
---|
Banerjee et al., rfc 6165, Extensions to IS-IS for Layer-2 Systems, April 2011 * |
Eastlake et al., rfc 6326, Transparent Interconnection of Lots of Links (TRILL) Use of IS-IS, July 2011 * |
Eastlake, "RBridges and the IETF TRILL Protocol", North American Network Operators' Group (NANOG) 50 meeting, October 3, 2010, 83 pages. *
Gai, "[rbridge] How is an IS-IS packet differentiated from layer 3 ISIS,and TRILL-encapsulated data packets?", 05/20/2007, 3 pages. * |
Jacobsen, "Introduction to TRILL", The Internet Protocol Journal, Cisco, September 2011, Volume 14, Number 3, Total 32 Pages, Page 2-20. * |
Perlman et al., "RFC 6326: Transparent Interconnection of Lots of Links (TRILL) Use of IS-IS", Internet Engineering Task Force (IETF), July 2011, 25 Pages. * |
Perlman et al., rfc6325, "Routing Bridges (RBridges): Base Protocol Specification", July, 2011, 99 pages. * |
Perlman, "RBridges: Transparent Routing", IEEE INFOCOM 2004, Page 1211-1218. * |
Xi et al.: "Enabling Flow-based Routing Control in Data Center Networks using Probe and ECMP", IEEE 2011, Page 608-613. * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10205602B2 (en) | 2011-10-03 | 2019-02-12 | Cisco Technology, Inc. | ECMP parent group selection for multidestination traffic in DCE/TRILL networks |
US9246821B1 (en) * | 2014-01-28 | 2016-01-26 | Google Inc. | Systems and methods for implementing weighted cost multi-path using two-level equal cost multi-path tables |
US11811902B2 (en) | 2016-02-08 | 2023-11-07 | Barefoot Networks, Inc. | Resilient hashing for forwarding packets |
US11310099B2 (en) | 2016-02-08 | 2022-04-19 | Barefoot Networks, Inc. | Identifying and marking failed egress links in data plane |
US11115314B2 (en) | 2017-01-25 | 2021-09-07 | New H3C Technologies Co., Ltd. | Establishing entry corresponding to equal cost paths |
US11108682B2 (en) | 2017-01-25 | 2021-08-31 | New H3C Technologies Co., Ltd. | Establishing entry corresponding to equal-cost paths |
WO2018137677A1 (en) * | 2017-01-25 | 2018-08-02 | 新华三技术有限公司 | Establishment for table entry of equal-cost path |
WO2018137682A1 (en) * | 2017-01-25 | 2018-08-02 | 新华三技术有限公司 | Entry creation for equal cost paths |
US10404619B1 (en) | 2017-03-05 | 2019-09-03 | Barefoot Networks, Inc. | Link aggregation group failover for multicast |
US10728173B1 (en) | 2017-03-05 | 2020-07-28 | Barefoot Networks, Inc. | Equal cost multiple path group failover for multicast |
US10237206B1 (en) * | 2017-03-05 | 2019-03-19 | Barefoot Networks, Inc. | Equal cost multiple path group failover for multicast |
US11271869B1 (en) | 2017-03-05 | 2022-03-08 | Barefoot Networks, Inc. | Link aggregation group failover for multicast |
US11716291B1 (en) | 2017-03-05 | 2023-08-01 | Barefoot Networks, Inc. | Link aggregation group failover for multicast |
WO2019114810A1 (en) * | 2017-12-15 | 2019-06-20 | Huawei Technologies Co., Ltd. | Method and system of packet aggregation |
US11368873B2 (en) | 2017-12-15 | 2022-06-21 | Huawei Technologies Co., Ltd. | Method and system of packet aggregation |
Also Published As
Publication number | Publication date |
---|---|
US20160080162A1 (en) | 2016-03-17 |
US10205602B2 (en) | 2019-02-12 |
US9876649B2 (en) | 2018-01-23 |
US20180097645A1 (en) | 2018-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10205602B2 (en) | ECMP parent group selection for multidestination traffic in DCE/TRILL networks | |
US11206148B2 (en) | Bit indexed explicit replication | |
US9948574B2 (en) | Bit indexed explicit replication packet encapsulation | |
US10051022B2 (en) | Hot root standby support for multicast | |
EP3343833B1 (en) | Multicast flow prioritization | |
US11362954B2 (en) | Tunneling inter-domain stateless internet protocol multicast packets | |
US11233741B1 (en) | Replication mode selection for EVPN multicast | |
US11627066B2 (en) | IGP topology information and use for BIER-TE | |
US20240137307A1 (en) | Weighted multicast join load balance | |
EP3840310B1 (en) | Source-active community for improved multicasting | |
WO2024011897A1 (en) | Packet generation method, packet transmission method, network device, medium, and product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAJAGOPALAN, SANTOSH;KULHARI, AJAY;BALASUBRAMANIAN, HARIHARAN;SIGNING DATES FROM 20110908 TO 20110909;REEL/FRAME:027008/0056
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION