Vxlan Design Guide
April 2018
© 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 37
Contents
Introduction .............................................................................................................................................................. 3
Frame accurate switching ....................................................................................................................................... 4
Nexus 9000 for IP fabric for Media ......................................................................................................................... 4
Spine and leaf design with network controller ...................................................................................................... 5
Why use a layer 3 spine and leaf design .............................................................................................................. 5
Non-blocking multicast in CLOS architecture ........................................................................................................ 5
Why Media Fabrics can’t rely on ECMP and PIM alone ........................................................................................ 6
Flow orchestration with Cisco DCNM media controller ......................................................................................... 6
Cisco DCNM media controller features ................................................................................................................. 6
Cisco DCNM media controller installation ............................................................................................................. 7
Fabric topology ..................................................................................................................................................... 7
Fabric configuration using Power-On Auto Provisioning (POAP) .......................................................................... 9
Fabric configuration using the command-line interface ....................................................................................... 12
Topology discovery ............................................................................................................................................. 14
Host discovery .................................................................................................................................................... 15
Host policy .......................................................................................................................................................... 16
Flow policy .......................................................................................................................................................... 17
Flow setup........................................................................................................................................................... 18
Flow visibility and bandwidth tracking ................................................................................................................. 18
Flow statistics and analysis ................................................................................................................................. 20
Flow alias ............................................................................................................................................................ 20
Events and notification ........................................................................................................................................ 21
Precision Time Protocol ...................................................................................................................................... 22
Quality of Service (QoS) ..................................................................................................................................... 23
API integration with the broadcast controller ....................................................................................................... 23
Failure handling .................................................................................................................................................. 24
Cisco DCNM media controller connectivity options: fabric control ...................................................................... 26
Designing the control network ............................................................................................................................. 28
Cisco IP fabric for Media architecture ................................................................................................................. 29
Multi-Site and PIM Border ................................................................................................................................... 31
Guidelines and Limitations ................................................................................................................................... 32
Topology discovery ............................................................................................................................................. 32
Flow setup........................................................................................................................................................... 32
Host policy .......................................................................................................................................................... 32
Flow policy .......................................................................................................................................................... 33
Design ................................................................................................................................................................. 33
Live and File based workflows on same fabric .................................................................................................... 33
Single switch with controller ................................................................................................................................ 34
Single switch without a controller ........................................................................................................................ 35
Conclusion ............................................................................................................................................................. 36
For more information............................................................................................................................................. 36
Introduction
Today, the broadcast industry uses an SDI router and SDI cables to transport video and audio signals. The SDI
cables can carry only a single unidirectional signal. As a result, a large number of cables, frequently stretched over
long distances, are required, making it difficult and time-consuming to expand or change an SDI-based
infrastructure.
Cisco IP Fabric for Media helps you migrate from an SDI router to an IP-based infrastructure (Figure 1). In an IP-
based infrastructure, a single cable has the capacity to carry multiple bidirectional traffic flows and can support
different flow sizes without requiring changes to the physical infrastructure.
● Supports various types and sizes of broadcasting equipment endpoints with port speeds up to 100 Gbps
● Supports the latest video technologies, including 4K and 8K ultra HD
● Allows for a deterministic network with zero packet loss, ultra-low latency, and minimal jitter
● Supports the AES67 and SMPTE-2059-2 PTP profiles
The Society of Motion Picture and Television Engineers (SMPTE) 2022-6 standard defines the way that SDI is
encapsulated in an IP frame, and SMPTE 2110 defines how video, audio, and ancillary data are carried over IP.
Similarly, Audio Engineering Society (AES) 67 defines the way that audio is carried over IP. All these flows are
typically User Datagram Protocol (UDP) and IP multicast flows. A network built to carry these flows must help
provide zero-drop transport with guaranteed forwarding, low latency, and minimal jitter.
Several design options are available today, depending on the use case:
1. Spine and Leaf CLOS based architecture using Nexus 9000 series switches with Data Center Network
Manager (DCNM) Controller – Provides a flexible and scalable architecture that is suitable for studio
deployments.
2. A single switch, Nexus 9000 series with DCNM controller – Provides a design that is identical to an SDI router.
A simple architecture in which the DCNM controller adds security and flow visibility. Suitable for studio or
OB van (OBVAN) deployments.
3. A single switch, Nexus 9000 without a controller – Provides a design that is identical to an SDI router. Simple
architecture. Switch operates like any multicast router.
Network Timed Switching: Here the network is responsible for switching on a frame boundary. This is not possible
with a standard off-the-shelf switch because it requires an FPGA to switch at the frame boundary.
Source Timed Switching: Source timed switching requires the source to modify the UDP port number when a
switch is triggered. At a precise time interval, the source is instructed to change the destination UDP port number
of the signal. The network switch is programmed with a rule to steer the signal to the receiver just before the
source switches their destination UDP port. While this works, it does not scale when the network involves multiple
devices (routers and switches) and limits the ability to switch flows across different sites.
Destination Timed Switching: The destination subscribes or joins the new flow before leaving the old flow and
switches at the frame boundary. Destination timed switching does not require special capabilities in the network
and can also scale across multiple network switches and sites.
Because the industry's goal is to use commercial off-the-shelf switches, it has generally chosen to move forward
with destination timed switching, and Cisco has adopted that model.
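The make-before-break behavior described above can be sketched as follows. The `igmp_join` and `igmp_leave` callbacks are hypothetical stand-ins for the receiver's IGMP signaling; real endpoints drive this through their media stack.

```python
def switch_flow(old_group, new_group, igmp_join, igmp_leave):
    """Destination-timed switching: join the new multicast flow before
    leaving the old one, so the receiver can cut over cleanly on a frame
    boundary with no gap in the signal."""
    igmp_join(new_group)    # both flows arrive briefly (double bandwidth)
    # ... receiver waits for the next frame boundary, then cuts over ...
    igmp_leave(old_group)   # free the bandwidth held by the old flow
```

Note that during the overlap the receiver's link briefly carries both flows, which is one reason receiver-facing bandwidth must be planned with headroom.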
The supported Nexus 9000 switches and their roles are listed below.
Part Number              Description  Supported Mode
9500 with N9K-X9636C-R   36x100G      Spine with controller / standalone switch with or without controller
9500 with N9K-X9636Q-R   36x40G       Spine with controller / standalone switch with or without controller
Although a Layer 2 network design may seem simple, it has a very large failure domain. A misbehaving endpoint
could potentially storm the network with traffic that is propagated to all devices in the Layer 2 domain. Also, in a
Layer 2 network, traffic is always flooded to the multicast router or querier, which can cause excessive traffic to be
sent to the router or querier, even when there are no active receivers. This results in non-optimal and non-
deterministic use of bandwidth.
Layer 3 multicast networks contain the fault domain and forward traffic across the network only when there are
active receivers, thereby ensuring optimal use of bandwidth. Layer 3 also allows filtering policy to be applied
granularly to a specific port, rather than to all devices as in a Layer 2 domain.
In an ideal scenario, the sender leaf (first-hop router) sends one copy of the flow to one of the spine switches.
The spine creates N copies, one for each receiver leaf switch that has interested receivers for that flow. Each
receiver leaf (last-hop router) in turn creates one copy per local receiver connected to it. At times, especially
when the system is at peak capacity, you could encounter a scenario in which the sender leaf has replicated a flow
to a certain spine, but a receiver leaf cannot get traffic from that spine because its link bandwidth to that spine
is completely occupied by other flows. When this happens, the sender leaf must replicate the flow to another spine,
using twice the bandwidth for a single flow.
To keep the CLOS network non-blocking, a sender leaf must have enough bandwidth to replicate all of its local
senders to every spine.
The bandwidth of all senders connected to a leaf must not exceed the bandwidth of the links going from that
leaf to each of the spines. The bandwidth of all receivers connected to a leaf must not exceed the aggregate
bandwidth of all links going from that leaf to all spines.
For example, a two-spine design using the N9K-C93180YC-EX with 6 x 100-Gbps uplinks (300 Gbps to each spine)
can support 300 Gbps of senders and 600 Gbps of receivers connected to the leaf.
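The non-blocking rule and the worked example above can be checked with a small sketch; the function and its parameter names are illustrative, not part of any Cisco tooling.

```python
def is_non_blocking(sender_gbps, receiver_gbps, uplink_gbps_per_spine, num_spines):
    """Leaf-level non-blocking check from the text: senders on a leaf must
    fit the uplinks to EACH spine (worst case, all flows replicated to one
    spine), while receivers only need to fit the aggregate uplink bandwidth."""
    aggregate = uplink_gbps_per_spine * num_spines
    return sender_gbps <= uplink_gbps_per_spine and receiver_gbps <= aggregate

# 93180YC-EX example: 6x100G uplinks, 300G to each of 2 spines
is_non_blocking(300, 600, 300, 2)   # holds: 300G of senders, 600G of receivers
is_non_blocking(400, 600, 300, 2)   # violated: senders exceed per-spine links
```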
In a broadcasting facility, most of the endpoints are unidirectional: cameras, microphones, multiviewers, and so
on. There are also typically more receivers than senders (often around a 4:1 ratio), and when a receiver no longer
needs a flow, it leaves the flow, freeing up the bandwidth. Hence, the network can be designed with placement of
senders and receivers such that the CLOS architecture becomes non-blocking.
● Fabric configuration using Power-On Auto Provisioning (POAP) to help automate configuration
● Topology and host discovery to dynamically discover the topology and host connection
● Flow orchestration to help set up traffic flow with guaranteed bandwidth
● Flow policies, including bandwidth reservation and flow policing
● Host policies to secure the fabric by restricting senders and receivers
● Flow visibility and analytics, including end-to-end path visibility with bit rate set on a per-flow basis
● Events and notifications for operation and monitoring
● Northbound API integration with the broadcast controller
For more information about DCNM, see https://www.cisco.com/c/en/us/support/cloud-systems-management/data-
center-network-manager-10/model.html.
The recommended approach is to set up the DCNM media controller in native high-availability mode.
After DCNM is installed, you must manually configure it to work in media-controller mode. You accomplish this by
connecting to the DCNM server through Secure Shell (SSH) and logging in with the username root.
[root@mc ~]# appmgr stop dcnm
[root@mc ~]# appmgr set-mode media-controller
With an OVA installation, ensure that DCNM is deployed using the Large configuration (four vCPUs and 12 GB of
RAM) and in thick-provisioned mode.
Fabric topology
The number and type of leaf and spine switches required in your IP fabric depend on the number and type of
endpoints in your broadcasting center.
Follow these steps to help determine the number of leaf switches you need:
Count the number of endpoints (cameras, microphones, gateway, production switchers etc.) in your broadcasting
center. For example, assume that your requirements are as follows:
Table 3. Supported spine switch
● The 9236C can be used as a leaf switch for 40-Gbps endpoints. Each supports up to 25 x 40-Gbps
endpoints and requires 10 x 100-Gbps uplinks.
● The 93180YC-EX can be used as a leaf switch for 10-Gbps endpoints. Each supports up to 48 x 10-Gbps
endpoints and requires 6 x 100-Gbps uplinks.
● The 93108TC-EX can be used as a leaf switch for 1/10GBASE-T endpoints. Each supports up to 48 x
1/10GBASE-T endpoints with 6 x 100-Gbps uplinks.
● 40 x 40-Gbps endpoints would require 2 x 9236C leaf switches with 20 x 100-Gbps uplinks.
● 160 x 10-Gbps endpoints would require 4 x 93180-EX leaf switches with 24 x 100-Gbps uplinks.
● 70 x 1-Gbps endpoints would require 2 x 93108-EX leaf switches with 4 x 100-Gbps uplinks. (Not all uplinks
are used.)
● The total number of uplinks required is 48 x 100 Gbps.
● The 9500 with N9K-X9636C-R line card or a 9236C can be used as a SPINE.
● With 9236C, each switch supports up to 36 x 100-Gbps ports. Two spine switches with 24 x 100-Gbps ports
per spine can be used.
● With 9508 and N9K-X9636C-R line card, each line card supports 36x100G ports. Two line cards with a
single spine switch can be used.
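The leaf-sizing arithmetic above is a simple ceiling division over the per-leaf endpoint capacities, sketched here for the worked example.

```python
import math

def leaf_count(endpoints, endpoints_per_leaf):
    """Number of leaf switches needed for a given endpoint count."""
    return math.ceil(endpoints / endpoints_per_leaf)

# Worked example from the text:
leaf_count(40, 25)    # 2 x 9236C for the 40-Gbps endpoints
leaf_count(160, 48)   # 4 x 93180YC-EX for the 10-Gbps endpoints
leaf_count(70, 48)    # 2 x 93108TC-EX for the 1/10GBASE-T endpoints
```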
Figure 4. Network topology with 9508 as spine
The DCNM controller ships with configuration templates: the Professional Media Network (PMN) fabric spine
template and the PMN fabric leaf template. The POAP definition can be generated using these templates as the
baseline. Alternatively, you can generate a startup configuration for a switch and use it during POAP definition.
(The section “Fabric Configuration Using the Command-Line Interface” later in this document provides steps for
creating a configuration manually.)
Figure 8. DCNM > Configure > POAP > Images and Configurations
Figure 10. DCNM > Configure > POAP > POAP Definitions
Figure 11. Generate configuration using a template: DCNM > Configure > POAP > POAP Definitions > POAP Wizard
! Verification
show nbm controller
Configuring TCAM carving
! RACL TCAM for Host Policy and ing-l3-vlan-qos for flow statistics
! Requires a switch reload post configuration
! Only required on LEAF switches
router ospf 1
  maximum-paths 36
! Maximum paths must be greater than the number of uplinks between a single
! leaf and the spine layer. The default is 8, which is sufficient in most
! scenarios.
interface Ethernetx/y
ip pim sparse-mode
ip pim passive
ip router ospf 1 area 0
ip igmp suppress v3-gsq
no shutdown
! Host Port Configuration when using an SVI with port as a trunk or access port
Topology discovery
The DCNM media controller automatically discovers the topology when the fabric is brought up using POAP. If the
fabric is provisioned through the CLI, the switches must be discovered manually in DCNM. Be sure that the fabric
has been discovered and that the default flow and host policies are set before any endpoints are connected and
discovered on the fabric. Figures 12 and 13 show the steps required to discover the fabric.
Figure 12. DCNM > Inventory > Discover Switches > LAN Switches
Figure 13. DCNM > Media Controller > Topology
Host discovery
A host can be discovered in the following ways:
● When the host sends an Address Resolution Protocol (ARP) request for its default gateway (the switch)
● When the sender host sends a multicast flow
● When a receiver host sends an IGMP join message
● When the host is manually added through an API
You can find host information by choosing Media Controller > Host.
The host is populated on the topology page, providing a visual representation showing where the host is
connected.
Only hosts that are discovered are displayed on the topology page.
Figure 14(b) DCNM > Media Controller > Hosts
Host policy
Host policy is used to prevent or allow a sender to send traffic to certain multicast groups and a receiver to
subscribe to certain multicast groups. Default policy can use a whitelist or a blacklist model. Sender host policy is
implemented with an ingress access-list programmed on the sender leaf. Receiver host policy is implemented by
applying an IGMP filter on the receiver leaf.
The host policy page includes all policies created on DCNM through the GUI or API. It also indicates the devices to
which a given policy is applied. To see this information, click the “i” on the host policy page. (Figure 15).
Changes can be made to a host-specific policy at any time. However, the default policy cannot be changed while it
is applied to any host. To change the default policy, shut down the host-facing ports; the flows then time out,
and the DCNM controller removes the policy associations on the switch.
Figure 15. DCNM > Media Controller > Policies > Host Policies
Flow policy
The flow policy is used to specify the flow properties, including the flow bandwidth and Differentiated Services
Code Point (DSCP) marking. The default policy is applied to flows that do not have a specific policy defined. The
flow policy helps ensure that sufficient bandwidth is provisioned on the network. It also helps ensure that the
sender does not send traffic at a rate higher than the rate provisioned using the policy. Excessive traffic is policed.
(Figure 16).
Figure 16. DCNM > Media Controller > Policies > Flow Policies
Flow setup
Flow setup can be accomplished in two ways: a receiver can request a flow using Internet Group Management
Protocol (IGMP), or the broadcast controller can set up a flow using an API. The API and its integration are
discussed in the "API Integration with the Broadcast Controller" section.
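As a rough illustration of the API-driven path, a broadcast controller integration might assemble a static flow request like the following. The field names here are purely hypothetical; consult the DCNM media controller API documentation for the actual endpoint and schema.

```python
def build_flow_request(sender_ip, receiver_interface, multicast_group):
    """Assemble an illustrative static flow-setup request body.
    All keys below are placeholders, not the real DCNM schema."""
    if not multicast_group.split(".")[0].isdigit():
        raise ValueError("multicast group must be a dotted-quad address")
    return {
        "source": sender_ip,              # sender host IP
        "egressInterface": receiver_interface,  # receiver-facing port
        "group": multicast_group,         # multicast group of the flow
    }

payload = build_flow_request("10.1.1.10", "Ethernet1/20", "239.1.1.1")
```

The controller would then POST this payload to DCNM and, per the guidelines later in this document, must be prepared to retry on an unsuccessful call notified over the AMQP bus.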
You can view bandwidth utilization per link through the GUI or an API. Also, when the Bandwidth check box is
selected, the color of each link changes based on its utilization. See Figures 17, 18, and 19.
Figure 17. DCNM > Media Controller > Topology > Multicast Group
Figure 18. DCNM > Media Controller > Topology and Double-Click Link
Figure 19. DCNM > Media Controller > Topology
Flow alias
Every flow on the fabric is a video, audio, or ancillary flow. To make flows easier for an operator to recognize,
a more meaningful name can be associated with a multicast group. Flow Alias provides the option to name a flow,
and the alias can be referenced throughout the DCNM GUI instead of the multicast group (Figure 21).
Figure 21. DCNM > Media Controller > Flow Alias
Figure 23. DCNM > Media Controller > Events
The active and passive grandmaster clocks are connected to the leaf switch.
Two profiles are supported: SMPTE 2059-2 and AES67. Either profile can be enabled on any interface. The slave
switches connected to the grandmaster must be configured with the same PTP parameters (Figure 24).
Example
feature ptp
! The PTP source IP can be any IP address. If the switch has a loopback,
! use the loopback IP as the PTP source.
ptp source 1.1.1.1
interface Ethernet1/1
ptp
ptp delay-request minimum interval smpte-2059-2 -2
ptp announce interval smpte-2059-2 0
ptp sync interval smpte-2059-2 -3
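The numeric arguments in the PTP interval commands above are log2 message intervals in seconds: a sync interval of -3 means 2^-3 = 0.125 s, that is, 8 sync messages per second, consistent with the rates commonly used in the SMPTE 2059-2 profile. A small sketch of the conversion:

```python
def ptp_interval_seconds(log_interval):
    """PTP message intervals are configured as log2(seconds):
    negative values mean multiple messages per second."""
    return 2.0 ** log_interval

ptp_interval_seconds(-3)   # sync interval -3 -> 0.125 s (8 per second)
ptp_interval_seconds(0)    # announce interval 0 -> 1 s (1 per second)
```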
Figure 24. Grandmaster and passive clock connectivity
Table 4. QoS
Figure 25. Integration of broadcast and DCNM media controllers
Failure handling
Most deployments implement redundancy by provisioning parallel networks and connecting endpoints to both
networks (for example, 2022-7). Any failure on one network will not affect traffic to endpoints because a copy of the
frame is always available on the redundant network (Figure 26).
Within a fabric, when a failure takes place, such as a link down or a spine failure, recovery mechanisms built into
the DCNM media controller will help ensure that traffic converges. However, a few seconds may be needed to
reprogram the flows on other available paths. Flows are not moved back to the original path after the original path
recovers. New flows use the recovered path. The DCNM media controller provides a list of flows affected by any
failure.
Figure 26. Redundant IP fabrics: Completely independent of each other
Link failure
When a link fails between a leaf and a spine, flows are moved to other available links. If the available bandwidth
is not sufficient to move all affected flows, the media controller moves as many flows as it can (selected at
random) and generates an event notifying the user of the flows it could not service.
When bandwidth becomes available in the future, the flows will be programmed after an IGMP join message is
received from a receiver.
Spine failure
A spine reload causes all flows routed through that spine to fail. The media controller moves the affected flows
to other spine switches. If the available bandwidth is not sufficient to move all affected flows, the media
controller moves as many flows as it can (selected at random) and generates an event notifying the user of the
flows it could not service.
When bandwidth becomes available in the future, the flows will be programmed after an IGMP join message is
received from a receiver.
Leaf failure
A leaf failure results in the loss of traffic to endpoints connected to that leaf.
After the leaf recovers, the endpoints must resend IGMP join messages.
Connectivity loss between Cisco DCNM and switch
The DCNM media controller is the control plane for the fabric. When connectivity is lost between a switch and the
controller, no new control-plane activities are possible: new flows cannot be programmed, and existing flows
cannot be removed. Existing flows continue to work, and the switch periodically retries its connection to DCNM.
DCNM high-availability failover takes about three minutes to complete, during which time no new control-plane
activities can be performed. (A future release will provide an active/hot-standby HA implementation.)
DCNM ships with two network interface cards (NICs): Eth0 and Eth1. POAP works on Eth1, with the DHCP server on
DCNM configured to listen for DHCP messages on Eth1 only. Switches can perform POAP through the management port
or a front-panel port. For POAP to work, the OOB management port or the front-panel port used for communication
with DCNM must be on the same VLAN as the DCNM Eth1 port.
Figure 27. Using OOB management ports to communicate with Cisco DCNM media controller
Figure 28. Using OOB Front-Panel Ports to Communicate with Cisco DCNM media controller
Figure 29. Using Inband Ports to Communicate with Cisco DCNM media controller
The following CLI snippet shows the configuration using the OOB management port:
interface mgmt0
vrf member management
ip address 1.2.3.4/24
nbm mode controller
controller ip <ip address> vrf management
controller-credentials username admin password 7 ljw39655
The following CLI snippet shows the configuration using the OOB front-panel port:
! Define Fabric Control VRF
vrf context fabric_control
interface Ethernet1/10
description to_fabric_control_network
vrf member fabric_control
ip address x.x.x.x/x
no shutdown
Figure 30 shows the logical network connectivity between the broadcast controller, endpoints, and DCNM.
The control network typically carries unicast control traffic between controllers and endpoints.
Figure 30. Control network
Figure 29 describes a topology that supports a redundant IP network that can carry 2022-6/7 or 2110 flows.
Endpoints must be capable of supporting hitless merge.
Figure 30 shows the flexibility offered by a SPINE and LEAF design that can be used in a distributed studio
deployment model.
Figure 31 shows a possible deployment scenario that connects a remote site using the concept of a remote leaf.
The remote leaf is not configured any differently from a leaf in the main facility; it is part of the fabric.
Figure 33. Remote leaf
This solution requires designating a leaf as a border leaf. The border leaf is the leaf used to interconnect the
fabric to remote sites or to a PIM router. The current software release supports a single border leaf per site;
however, the border leaf itself can have multiple Layer 3 routed links toward the external network or sites.
The following CLI snippet shows the configuration of the Border Leaf:
! Specify the switch role under NBM controller configuration
nbm mode controller
switch-role border-leaf
Figure 34. Multisite
Topology discovery
Because the topology can be discovered when the switches use POAP or when they are manually added to
DCNM, be sure to add only switches that are participating in the Cisco IP Fabric for Media and that are managed
by the DCNM media controller.
Flow setup
Flows can be set up using IGMP join messages or through an API call. The system supports IGMP version 2 (IGMPv2)
and IGMPv3. A static outgoing interface (OIF) cannot be added through the CLI; any static flow setup must be
implemented through an API call.
Note the following guidelines and limitations for flows set up using an API:
● DCNM notifies the broadcast controller and user about any unsuccessful API call that requires the
broadcast controller or user to retry the call. The notification is done via the AMQP bus.
● Static APIs are not supported when the receiver is connected through a Layer 2 port and Switch Virtual
Interface (SVI).
Host policy
The default host policy shipped with DCNM is a blacklist policy (allow all by default). Any changes to the default
policy must be made before any hosts are discovered on the system. The default policy cannot be edited after
hosts are active. This restriction does not apply to host-specific policies.
The receiver host policy applied to a host connected through a Layer 2 port and SVI applies to all the join
messages sent by all the hosts on that VLAN and cannot be applied to just a single receiver.
When multiple hosts sit behind a Layer 3 port or SVI and the default host policy is a whitelist, in certain
scenarios ARP is the only mechanism through which a host is detected and its specific host policy is
programmed, even when the policy permits that host. Host detection through IGMP or multicast traffic may not work.
Do not modify any host policy from the CLI, and do not apply custom policies from the CLI. All changes must be
made through the DCNM media controller.
Flow policy
You must make any changes to the default policy before any flows are active on the fabric.
Flow policy cannot be modified while the flow is active. You can make modifications only when the flow ages out,
which typically occurs after the sender stops transmitting the flow (inactive flows time out after 180 seconds).
As a best practice, configure a flow size that is at least 5 percent greater than the flow bit rate. This headroom
accommodates a certain amount of burst without causing the flow to be policed. For example, a 3-Gbps flow should
be provisioned as 3.15 Gbps (5 percent headroom) or 3.3 Gbps (10 percent).
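The headroom rule is simple arithmetic; the helper below (illustrative, not part of any Cisco tool) makes the calculation concrete:

```python
# Provision each flow with headroom above its nominal bit rate so that
# short bursts are not policed. 5 percent is the minimum recommended.
def provisioned_rate(flow_gbps, headroom=0.05):
    return round(flow_gbps * (1 + headroom), 3)

print(provisioned_rate(3.0))        # 3-Gbps flow with 5% headroom -> 3.15
print(provisioned_rate(3.0, 0.10))  # with 10% headroom -> 3.3
```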
Design
Always be sure that the uplink bandwidth from the leaf layer to the spine layer is equal to or greater than the
bandwidth to the endpoint. For example, when using a Cisco Nexus 93180YC-EX Switch, 48 x 10-Gbps
connectivity to the endpoints requires a minimum of 5 x 100-Gbps uplinks. You can use all 6 x 100-Gbps uplinks.
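The uplink count follows directly from the non-blocking requirement; a small sanity-check helper (an illustration, not a Cisco tool) captures the rule:

```python
import math

# Minimum number of uplinks so that uplink bandwidth is at least equal
# to the aggregate endpoint-facing bandwidth (non-blocking leaf).
def min_uplinks(host_ports, port_gbps, uplink_gbps):
    return math.ceil(host_ports * port_gbps / uplink_gbps)

# Nexus 93180YC-EX example: 48 x 10G host ports, 100G uplinks
print(min_uplinks(48, 10, 100))  # minimum of 5; all 6 uplinks may be used
```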
When possible, spread endpoints across different leaf switches to distribute sources and receivers evenly across all
leaf switches.
Limit the number of spines to two. In scenarios where a fixed spine such as the 9326C is not sufficient, consider
using the Nexus 9508 as a spine.
Design redundant IP networks (SMPTE 2022-7). The networks should be independent, and each should have its own
instance of the DCNM media controller.
The bandwidth management algorithm does not account for unicast traffic on the network. Be sure that unicast
traffic, if any, occupies minimal network bandwidth.
ip access-list pmn-mcast
  10 permit ip any 224.0.0.0/4
! Apply the service policy on all interfaces on all switches in the fabric
interface ethernet 1/1-54
  service-policy type qos input pmn-qos
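The `pmn-qos` policy referenced above must be defined before it can be applied. A minimal sketch is shown below; the class-map name and the qos-group value are assumptions for illustration and should match your fabric's QoS design.

```
! Classify multicast traffic matched by the pmn-mcast ACL
! (class-map name and qos-group value are illustrative)
class-map type qos match-all pmn-mcast-class
  match access-group name pmn-mcast
policy-map type qos pmn-qos
  class pmn-mcast-class
    set qos-group 7
```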
The DCNM media controller offers features such as fabric configuration, flow visibility and analytics, security, and
northbound integration with the broadcast controller.
(Support for the Nexus 9508 with R-Series line cards will be available in a future release.)
! Verification
show nbm controller
Configuring TCAM carving
! Carve the RACL TCAM region for host policies and ing-l3-vlan-qos for flow statistics
! A switch reload is required after this configuration
! The configuration below is not required on the R-Series Nexus 9500
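A possible carving is sketched below. The region sizes shown are illustrative examples only, not mandated values; verify the platform defaults and available TCAM for your switch and release before applying, and reload the switch afterward.

```
! Example carving; sizes are illustrative, not mandated values
hardware access-list tcam region ing-racl 2304
hardware access-list tcam region ing-l3-vlan-qos 256
! Save the configuration and reload the switch
```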
Configuring interface
! Host Port Configuration (link between switch and end host)
interface Ethernet1/1
ip address x.x.x.x/x
ip router ospf 1 area 0.0.0.0
ip ospf passive-interface
ip pim sparse-mode
ip pim passive
ip igmp version 3
ip igmp immediate-leave
ip igmp suppress v3-gsq
no shutdown
Configuring flow policy
! Define the per-flow bandwidth for a range of multicast groups
nbm mode flow
policy <policy-name>
bandwidth <flow-bandwidth>
ip group-range <ip-address> to <ip-address>
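As an illustration, a 3-Gbps flow provisioned with 5 percent headroom for a group range could look like the sketch below. The policy name, group range, and bandwidth value are examples only, and the bandwidth unit syntax depends on the platform and release, so verify it before use.

```
! Illustrative values; verify the bandwidth unit for your release
nbm mode flow
policy uhd-3g
bandwidth 3150
ip group-range 239.1.1.1 to 239.1.1.255
```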
Conclusion
Cisco IP Fabric for Media provides the broadcast industry a path to migrate from SDI to an IP-based infrastructure,
with multiple modes of deployment: a flexible spine-and-leaf design or a single modular chassis or switch. The
fabric guarantees zero-drop multicast transport with minimal latency and jitter, and it provides flow visibility and
bit-rate statistics on a per-flow basis. Host policies help ensure that the fabric is secured against unauthorized
endpoints. Integration with any broadcast controller through open REST APIs abstracts the complexity of the
network and gives the operator an unchanged experience.
For more information, see the following Cisco Nexus 9000 Series data sheets:
https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-735989.html
https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-736651.html
https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-738321.html
Printed in USA C11-738605-02 04/18