Config Guide PSD
Junos OS
Protected System Domain Configuration Guide
Release
11.1
Published: 2011-01-20
This product includes memory allocation software developed by Mark Moraes, copyright © 1988, 1989, 1993, University of Toronto.
This product includes FreeBSD software developed by the University of California, Berkeley, and its contributors. All of the documentation
and software included in the 4.4BSD and 4.4BSD-Lite Releases is copyrighted by the Regents of the University of California. Copyright ©
1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994. The Regents of the University of California. All rights reserved.
GateD software copyright © 1995, the Regents of the University. All rights reserved. Gate Daemon was originated and developed through
release 3.0 by Cornell University and its collaborators. Gated is based on Kirton’s EGP, UC Berkeley’s routing daemon (routed), and DCN’s
HELLO routing protocol. Development of Gated has been supported in part by the National Science Foundation. Portions of the GateD
software copyright © 1988, Regents of the University of California. All rights reserved. Portions of the GateD software copyright © 1991, D.
L. S. Associates.
This product includes software developed by Maker Communications, Inc., copyright © 1996, 1997, Maker Communications, Inc.
Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United
States and other countries. The Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other
trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
Products made or sold by Juniper Networks or components thereof might be covered by one or more of the following patents that are
owned by or licensed to Juniper Networks: U.S. Patent Nos. 5,473,599, 5,905,725, 5,909,440, 6,192,051, 6,333,650, 6,359,479, 6,406,312,
6,429,706, 6,459,579, 6,493,347, 6,538,518, 6,538,899, 6,552,918, 6,567,902, 6,578,186, and 6,590,785.
Junos OS Protected System Domain Configuration Guide
Release 11.1
Copyright © 2011, Juniper Networks, Inc.
All rights reserved. Printed in USA.
Revision History
January 2011—R1 Junos OS 11.1
The information in this document is current as of the date listed in the revision history.
Juniper Networks hardware and software products are Year 2000 compliant. The Junos OS has no known time-related limitations through
the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
READ THIS END USER LICENSE AGREEMENT (“AGREEMENT”) BEFORE DOWNLOADING, INSTALLING, OR USING THE SOFTWARE.
BY DOWNLOADING, INSTALLING, OR USING THE SOFTWARE OR OTHERWISE EXPRESSING YOUR AGREEMENT TO THE TERMS
CONTAINED HEREIN, YOU (AS CUSTOMER OR IF YOU ARE NOT THE CUSTOMER, AS A REPRESENTATIVE/AGENT AUTHORIZED TO
BIND THE CUSTOMER) CONSENT TO BE BOUND BY THIS AGREEMENT. IF YOU DO NOT OR CANNOT AGREE TO THE TERMS CONTAINED
HEREIN, THEN (A) DO NOT DOWNLOAD, INSTALL, OR USE THE SOFTWARE, AND (B) YOU MAY CONTACT JUNIPER NETWORKS
REGARDING LICENSE TERMS.
1. The Parties. The parties to this Agreement are (i) Juniper Networks, Inc. (if the Customer’s principal office is located in the Americas) or
Juniper Networks (Cayman) Limited (if the Customer’s principal office is located outside the Americas) (such applicable entity being referred
to herein as “Juniper”), and (ii) the person or organization that originally purchased from Juniper or an authorized Juniper reseller the applicable
license(s) for use of the Software (“Customer”) (collectively, the “Parties”).
2. The Software. In this Agreement, “Software” means the program modules and features of the Juniper or Juniper-supplied software, for
which Customer has paid the applicable license or support fees to Juniper or an authorized Juniper reseller, or which was embedded by
Juniper in equipment which Customer purchased from Juniper or an authorized Juniper reseller. “Software” also includes updates, upgrades
and new releases of such software. “Embedded Software” means Software which Juniper has embedded in or loaded onto the Juniper
equipment and any updates, upgrades, additions or replacements which are subsequently embedded in or loaded onto the equipment.
3. License Grant. Subject to payment of the applicable fees and the limitations and restrictions set forth herein, Juniper grants to Customer
a non-exclusive and non-transferable license, without right to sublicense, to use the Software, in executable form only, subject to the
following use restrictions:
a. Customer shall use Embedded Software solely as embedded in, and for execution on, Juniper equipment originally purchased by
Customer from Juniper or an authorized Juniper reseller.
b. Customer shall use the Software on a single hardware chassis having a single processing unit, or as many chassis or processing units
for which Customer has paid the applicable license fees; provided, however, with respect to the Steel-Belted Radius or Odyssey Access
Client software only, Customer shall use such Software on a single computer containing a single physical random access memory space
and containing any number of processors. Use of the Steel-Belted Radius or IMS AAA software on multiple computers or virtual machines
(e.g., Solaris zones) requires multiple licenses, regardless of whether such computers or virtualizations are physically contained on a single
chassis.
c. Product purchase documents, paper or electronic user documentation, and/or the particular licenses purchased by Customer may
specify limits to Customer’s use of the Software. Such limits may restrict use to a maximum number of seats, registered endpoints, concurrent
users, sessions, calls, connections, subscribers, clusters, nodes, realms, devices, links, ports or transactions, or require the purchase of
separate licenses to use particular features, functionalities, services, applications, operations, or capabilities, or provide throughput,
performance, configuration, bandwidth, interface, processing, temporal, or geographical limits. In addition, such limits may restrict the use
of the Software to managing certain kinds of networks or require the Software to be used only in conjunction with other specific Software.
Customer’s use of the Software shall be subject to all such limitations and purchase of all applicable licenses.
d. For any trial copy of the Software, Customer’s right to use the Software expires 30 days after download, installation or use of the
Software. Customer may operate the Software after the 30-day trial period only if Customer pays for a license to do so. Customer may not
extend or create an additional trial period by re-installing the Software after the 30-day trial period.
e. The Global Enterprise Edition of the Steel-Belted Radius software may be used by Customer only to manage access to Customer’s
enterprise network. Specifically, service provider customers are expressly prohibited from using the Global Enterprise Edition of the
Steel-Belted Radius software to support any commercial network access services.
The foregoing license is not transferable or assignable by Customer. No license is granted herein to any user who did not originally purchase
the applicable license(s) for the Software from Juniper or an authorized Juniper reseller.
4. Use Prohibitions. Notwithstanding the foregoing, the license provided herein does not permit the Customer to, and Customer agrees
not to and shall not: (a) modify, unbundle, reverse engineer, or create derivative works based on the Software; (b) make unauthorized
copies of the Software (except as necessary for backup purposes); (c) rent, sell, transfer, or grant any rights in and to any copy of the
Software, in any form, to any third party; (d) remove any proprietary notices, labels, or marks on or in any copy of the Software or any product
in which the Software is embedded; (e) distribute any copy of the Software to any third party, including as may be embedded in Juniper
equipment sold in the secondhand market; (f) use any ‘locked’ or key-restricted feature, function, service, application, operation, or capability
without first purchasing the applicable license(s) and obtaining a valid key from Juniper, even if such feature, function, service, application,
operation, or capability is enabled without a key; (g) distribute any key for the Software provided by Juniper to any third party; (h) use the
5. Audit. Customer shall maintain accurate records as necessary to verify compliance with this Agreement. Upon request by Juniper,
Customer shall furnish such records to Juniper and certify its compliance with this Agreement.
6. Confidentiality. The Parties agree that aspects of the Software and associated documentation are the confidential property of Juniper.
As such, Customer shall exercise all reasonable commercial efforts to maintain the Software and associated documentation in confidence,
which at a minimum includes restricting access to the Software to Customer employees and contractors having a need to use the Software
for Customer’s internal business purposes.
7. Ownership. Juniper and Juniper’s licensors, respectively, retain ownership of all right, title, and interest (including copyright) in and to
the Software, associated documentation, and all copies of the Software. Nothing in this Agreement constitutes a transfer or conveyance
of any right, title, or interest in the Software or associated documentation, or a sale of the Software, associated documentation, or copies
of the Software.
8. Warranty, Limitation of Liability, Disclaimer of Warranty. The warranty applicable to the Software shall be as set forth in the warranty
statement that accompanies the Software (the “Warranty Statement”). Nothing in this Agreement shall give rise to any obligation to support
the Software. Support services may be purchased separately. Any such support shall be governed by a separate, written support services
agreement. TO THE MAXIMUM EXTENT PERMITTED BY LAW, JUNIPER SHALL NOT BE LIABLE FOR ANY LOST PROFITS, LOSS OF DATA,
OR COSTS OR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY SPECIAL, INDIRECT, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THIS AGREEMENT, THE SOFTWARE, OR ANY JUNIPER OR JUNIPER-SUPPLIED SOFTWARE. IN NO EVENT SHALL JUNIPER
BE LIABLE FOR DAMAGES ARISING FROM UNAUTHORIZED OR IMPROPER USE OF ANY JUNIPER OR JUNIPER-SUPPLIED SOFTWARE.
EXCEPT AS EXPRESSLY PROVIDED IN THE WARRANTY STATEMENT TO THE EXTENT PERMITTED BY LAW, JUNIPER DISCLAIMS ANY
AND ALL WARRANTIES IN AND TO THE SOFTWARE (WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHERWISE), INCLUDING ANY
IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT. IN NO EVENT DOES
JUNIPER WARRANT THAT THE SOFTWARE, OR ANY EQUIPMENT OR NETWORK RUNNING THE SOFTWARE, WILL OPERATE WITHOUT
ERROR OR INTERRUPTION, OR WILL BE FREE OF VULNERABILITY TO INTRUSION OR ATTACK. In no event shall Juniper’s or its suppliers’
or licensors’ liability to Customer, whether in contract, tort (including negligence), breach of warranty, or otherwise, exceed the price paid
by Customer for the Software that gave rise to the claim, or if the Software is embedded in another Juniper product, the price paid by
Customer for such other product. Customer acknowledges and agrees that Juniper has set its prices and entered into this Agreement in
reliance upon the disclaimers of warranty and the limitations of liability set forth herein, that the same reflect an allocation of risk between
the Parties (including the risk that a contract remedy may fail of its essential purpose and cause consequential loss), and that the same
form an essential basis of the bargain between the Parties.
9. Termination. Any breach of this Agreement or failure by Customer to pay any applicable fees due shall result in automatic termination
of the license granted herein. Upon such termination, Customer shall destroy or return to Juniper all copies of the Software and related
documentation in Customer’s possession or control.
10. Taxes. All license fees payable under this agreement are exclusive of tax. Customer shall be responsible for paying Taxes arising from
the purchase of the license, or importation or use of the Software. If applicable, valid exemption documentation for each taxing jurisdiction
shall be provided to Juniper prior to invoicing, and Customer shall promptly notify Juniper if their exemption is revoked or modified. All
payments made by Customer shall be net of any applicable withholding tax. Customer will provide reasonable assistance to Juniper in
connection with such withholding taxes by promptly: providing Juniper with valid tax receipts and other required documentation showing
Customer’s payment of any withholding taxes; completing appropriate applications that would reduce the amount of withholding tax to
be paid; and notifying and assisting Juniper in any audit or tax proceeding related to transactions hereunder. Customer shall comply with
all applicable tax laws and regulations, and Customer will promptly pay or reimburse Juniper for all costs and damages related to any
liability incurred by Juniper as a result of Customer’s non-compliance or delay with its responsibilities herein. Customer’s obligations under
this Section shall survive termination or expiration of this Agreement.
11. Export. Customer agrees to comply with all applicable export laws and restrictions and regulations of any United States and any
applicable foreign agency or authority, and not to export or re-export the Software or any direct product thereof in violation of any such
restrictions, laws or regulations, or without all necessary approvals. Customer shall be liable for any such violations. The version of the
Software supplied to Customer may contain encryption or other capabilities restricting Customer’s ability to export the Software without
an export license.
13. Interface Information. To the extent required by applicable law, and at Customer's written request, Juniper shall provide Customer
with the interface information needed to achieve interoperability between the Software and another independently created program, on
payment of applicable fee, if any. Customer shall observe strict obligations of confidentiality with respect to such information and shall use
such information in compliance with any applicable terms and conditions upon which Juniper makes such information available.
14. Third Party Software. Any licensor of Juniper whose software is embedded in the Software and any supplier of Juniper whose products
or technology are embedded in (or services are accessed by) the Software shall be a third party beneficiary with respect to this Agreement,
and such licensor or vendor shall have the right to enforce this Agreement in its own name as if it were Juniper. In addition, certain third party
software may be provided with the Software and is subject to the accompanying license(s), if any, of its respective owner(s). To the extent
portions of the Software are distributed under and subject to open source licenses obligating Juniper to make the source code for such
portions publicly available (such as the GNU General Public License (“GPL”) or the GNU Library General Public License (“LGPL”)), Juniper
will make such source code portions (including Juniper modifications, as appropriate) available upon request for a period of up to three
years from the date of distribution. Such request can be made in writing to Juniper Networks, Inc., 1194 N. Mathilda Ave., Sunnyvale, CA
94089, ATTN: General Counsel. You may obtain a copy of the GPL at http://www.gnu.org/licenses/gpl.html, and a copy of the LGPL
at http://www.gnu.org/licenses/lgpl.html.
15. Miscellaneous. This Agreement shall be governed by the laws of the State of California without reference to its conflicts of laws
principles. The provisions of the U.N. Convention for the International Sale of Goods shall not apply to this Agreement. For any disputes
arising under this Agreement, the Parties hereby consent to the personal and exclusive jurisdiction of, and venue in, the state and federal
courts within Santa Clara County, California. This Agreement constitutes the entire and sole agreement between Juniper and the Customer
with respect to the Software, and supersedes all prior and contemporaneous agreements relating to the Software, whether oral or written
(including any inconsistent terms contained in a purchase order), except that the terms of a separate written agreement executed by an
authorized Juniper representative and Customer shall govern to the extent such terms are inconsistent or conflict with terms contained
herein. No modification to this Agreement nor any waiver of any rights hereunder shall be effective unless expressly assented to in writing
by the party to be charged. If any portion of this Agreement is held invalid, the Parties agree that such invalidity shall not affect the validity
of the remainder of this Agreement. This Agreement and associated documentation has been written in the English language, and the
Parties agree that the English version will govern. (For Canada: Les parties aux présentés confirment leur volonté que cette convention de
même que tous les documents y compris tout avis qui s'y rattaché, soient redigés en langue anglaise. (Translation: The parties confirm that
this Agreement and all related documentation is and will be in the English language)).
displaylog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
fuelg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
health . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
read . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
reset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
temps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
volts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
write . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Part 8 Appendix
Appendix A Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Troubleshooting a Routing Engine on the JCS1200 Platform . . . . . . . . . . . . . . . . 253
Restarting a Routing Engine on the JCS1200 Platform . . . . . . . . . . . . . . . . . . . . 254
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Part 9 Indexes
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Index of Statements and Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
If the information in the latest release notes differs from the information in the
documentation, follow the Junos OS Release Notes.
To obtain the most current version of all Juniper Networks technical documentation,
see the product documentation page on the Juniper Networks website at
http://www.juniper.net/techpubs/.
Juniper Networks supports a technical book program to publish books by Juniper Networks
engineers and subject matter experts with book publishers around the world. These
books go beyond the technical documentation to explore the nuances of network
architecture, deployment, and administration using the Junos operating system (Junos
OS) and Juniper Networks devices. In addition, the Juniper Networks Technical Library,
published in conjunction with O'Reilly Media, explores improving network security,
reliability, and availability using Junos OS configuration techniques. All the books are for
sale at technical bookstores and book outlets around the world. The current list can be
viewed at http://www.juniper.net/books.
Objectives
This guide is designed to provide an overview of the Juniper Networks JCS1200 Control
System and the concept of Protected System Domains (PSDs). The JCS1200 platform,
which contains up to 12 Routing Engines (or 6 redundant Routing Engine pairs) running
Junos OS, is connected to up to three T Series routers, including any combination of
T320 Core Routers, T640 Core Routers, and T1600 Core Routers.
RSDs and PSDs can run different versions of Junos OS. Each RSD and PSD must be
running Junos OS Release 9.4 or later.
Different PSDs can share interfaces on a single Physical Interface Card (PIC) owned by
the RSD. The RSD and PSDs must be running Junos OS Release 9.3 or later.
NOTE: This guide documents Release 11.1 of the Junos OS. For additional
information about the Junos OS—either corrections to or information that
might have been omitted from this guide—see the software release notes at
http://www.juniper.net/.
Audience
This guide is designed for network administrators who are configuring and monitoring a
Juniper Networks T Series router and JCS1200 platform.
To use this guide, you need a broad understanding of networks in general, the Internet
in particular, networking principles, and network configuration. You must also be familiar
with one or more of the following Internet routing protocols:
Personnel operating the equipment must be trained and competent; must not conduct
themselves in a careless, willfully negligent, or hostile manner; and must abide by the
instructions provided by the documentation.
For the features described in this manual, the Junos OS currently supports the following
routing platforms:
• T320 Core Routers, T640 Core Routers, and T1600 Core Routers
This guide contains two indexes: a complete index of all index entries, and an index of
statements and commands only.
The complete index points to pages in the statement summary chapters. The index entry
for each configuration statement contains at least two entries:
• The second entry, usage guidelines, points to the section in a configuration guidelines
chapter that describes how to use the statement.
If you want to use the examples in this manual, you can use the load merge or the load
merge relative command. These commands cause the software to merge the incoming
configuration into the current candidate configuration. If the example configuration
contains the top level of the hierarchy (or multiple hierarchies), the example is a full
example. In this case, use the load merge command.
If the example configuration does not start at the top level of the hierarchy, the example
is a snippet. In this case, use the load merge relative command. These procedures are
described in the following sections.
1. From the HTML or PDF version of the manual, copy a configuration example into a
text file, save the file with a name, and copy the file to a directory on your routing
platform.
For example, copy the following configuration to a file and name the file ex-script.conf.
Copy the ex-script.conf file to the /var/tmp directory on your routing platform.
system {
    scripts {
        commit {
            file ex-script.xsl;
        }
    }
}
interfaces {
    fxp0 {
        disable;
        unit 0 {
            family inet {
                address 10.0.0.1/24;
            }
        }
    }
}
2. Merge the contents of the file into your routing platform configuration by issuing the
load merge configuration mode command:
[edit]
user@host# load merge /var/tmp/ex-script.conf
load complete
Merging a Snippet
To merge a snippet, follow these steps:
1. From the HTML or PDF version of the manual, copy a configuration snippet into a text
file, save the file with a name, and copy the file to a directory on your routing platform.
For example, copy the following snippet to a file and name the file
ex-script-snippet.conf. Copy the ex-script-snippet.conf file to the /var/tmp directory
on your routing platform.
commit {
    file ex-script-snippet.xsl;
}
2. Move to the hierarchy level that is relevant for this snippet by issuing the following
configuration mode command:
[edit]
user@host# edit system scripts
[edit system scripts]
3. Merge the contents of the file into your routing platform configuration by issuing the
load merge relative configuration mode command:
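Following the pattern of the full-example merge shown earlier, and using the snippet file saved in step 1, the command and its output look like this:

```
[edit system scripts]
user@host# load merge relative /var/tmp/ex-script-snippet.conf
load complete
```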
For more information about the load command, see the Junos OS CLI User Guide.
Documentation Conventions
• Caution: Indicates a situation that might result in loss of data or hardware damage.
• Laser warning: Alerts you to the risk of personal injury from a laser.
Requesting Technical Support
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or JNASC support contract,
or are covered under warranty, and need post-sales technical support, you can access
our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: http://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://tools.juniper.net/SerialNumberEntitlementSearch/
Product Overview
• JCS1200 Chassis and T Series Routers as a Single Platform on page 3
• JCS1200 and T Series Platform Software Views on page 17
• JCS1200 Platform Components on page 21
Product Overview
The JCS1200 chassis and T Series routers as a single platform include the following
components and product benefits:
Existing Juniper Networks technology already separates the tasks of the Routing Engine
from the Packet Forwarding Engine on a single routing platform. Each component
performs its primary tasks independently, while constantly communicating through a
high-speed internal link. This arrangement provides streamlined forwarding and routing
control and the capability to run Internet-scale networks at high speeds. Now, with
Routing Engines located in a separate chassis, the JCS1200 platform provides a greatly
expanded control plane capacity without sacrificing any forwarding slots in the T Series
router. All memory-intensive processing occurs on the Routing Engines on the JCS chassis,
whereas the FPCs on the T Series router are dedicated to efficient high-speed forwarding.
• The parameters used to create Protected System Domains (PSDs) under the RSD,
namely:
• Which Routing Engine or redundant Routing Engine pair on the JCS1200 platform is
assigned to the PSD.
• Which Flexible PIC Concentrator (FPC) or FPCs on the T Series router are assigned
to the PSD.
Because you can connect up to three T Series routers to the JCS1200 chassis, you can
configure up to three RSDs. The PSD identifiers must be unique across RSDs. That is,
PSD1 can belong only to RSD1, and not to RSD2 or RSD3.
Related Documentation
• JCS1200 Chassis and T Series Core Routers as a Single Platform on page 3
• Protected System Domains on page 4
Any number of FPCs can be assigned to a PSD. Only one redundant Routing Engine pair
(or single Routing Engine) can be assigned to a PSD.
NOTE: When an FPC is not assigned to a PSD, it belongs to the Root System
Domain (RSD) by default. A Physical Interface Card (PIC) on an FPC owned
by the RSD can be configured as an interface that is shared by multiple PSDs.
For more information, see “Shared Interfaces” on page 5.
You create each PSD under the RSD configuration through the Junos OS running on the
Routing Engines on the T Series router. Once a PSD is configured, you access it as you
would any separate physical router by connecting to the console port on the master
Routing Engine on the JCS1200 chassis for the PSD you want to configure. Using the
Junos OS, configure basic system properties, such as hostname, domain name, Ethernet
management IP address, and so on. You can also download a configuration file to the
PSD.
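As a sketch of the RSD-side configuration (the domain name, FPC slots, and Routing Engine slot numbers shown are illustrative values, not defaults; see the chassis configuration statements for exact syntax), a PSD is defined under the [edit chassis system-domains] hierarchy by naming the FPCs and JCS1200 Routing Engine slots assigned to it:

```
[edit chassis system-domains]
protected-system-domains {
    psd1 {
        description "PSD 1";             # illustrative
        fpcs [ 0 1 ];                    # T Series FPC slots assigned to PSD1
        control-system-id 1;             # JCS1200 chassis identifier
        control-slot-numbers [ 3 4 ];    # Routing Engine slots on the JCS1200
    }
}
```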
A PSD detects and manages only its own Routing Engines in the JCS1200 chassis and
the assigned FPCs and PICs in the T Series router. In addition, failures on one PSD do not
affect other PSDs.
Shared Interfaces
A single Physical Interface Card (PIC) can host a physical interface that is shared by
different Protected System Domains (PSDs). The Flexible PIC Concentrator (FPC) and
the physical shared interface are owned by the Root System Domain (RSD). However,
the logical interfaces configured under the shared interface are assigned to and owned
by different PSDs. By sharing a single interface among multiple PSDs, the cost of traffic
forwarding is reduced and resources can be allocated flexibly at a more granular level.
Any FPC that has not been assigned to a specific PSD can be used to host shared
interfaces. On the RSD, multiple logical interfaces are configured on the physical interface
and each individual logical interface is assigned to a different PSD. On the PSD, each
assigned logical interface is configured and peered with an uplink tunnel interface
(ut-fpc/pic/slot), which transports packets between the PSD and the shared interface
on the RSD. See Figure 2 on page 6.
[Figure 2: A shared interface on an OC192 PIC owned by the Root System Domain; logical interface 1 and logical interface 2 are cross-connected through the forwarding fabric to tunnel PICs on their respective PSDs.]
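The division of ownership between the RSD and the PSDs might be configured along the following lines. The interface names, unit numbers, and slot numbers are illustrative; the shared-interface, peer-psd, and peer-interface statements are documented in the statement reference:

```
# On the RSD: mark the physical interface as shared and bind each
# logical unit to a PSD (names and slots are illustrative).
[edit interfaces]
so-1/0/0 {
    shared-interface;
    unit 1 {
        peer-psd psd1;
    }
    unit 2 {
        peer-psd psd2;
    }
}

# On PSD1: peer an uplink tunnel (ut-) interface with the shared
# logical interface on the RSD.
[edit interfaces]
ut-0/1/0 {
    unit 0 {
        peer-interface so-1/0/0.1;
    }
}
```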
NOTE:
When applied to shared interfaces:
• Junos features that are configured under physical interfaces, such as drop
profiles and scheduler maps, are configured on the RSD.
The packets belonging to a shared interface pass between the Packet Forwarding Engine
on the PIC in the RSD and the Packet Forwarding Engine on the uplink tunnel PIC in the
PSD through a cross-connect in the forwarding fabric.
Traffic flow from the PSD to the RSD over a shared interface is as follows:
1. A packet destined for the shared PIC at the RSD is received on an interface at the PSD
and sent to the Packet Forwarding Engine on the PSD’s tunnel PIC. (The tunnel PIC
is configured to peer with the shared PIC at the RSD.)
2. The tunnel PIC loops the packet back to the input side of its Packet Forwarding Engine,
and the packet is sent over the switch fabric to the Packet Forwarding Engine on the
shared PIC at the RSD.
Traffic flow from the RSD to the PSD over a shared interface is as follows:
1. The Packet Forwarding Engine on the shared PIC at the RSD determines on which
logical interface the packet arrived.
2. Based on the RSD configuration, the PSD associated with this logical interface is
known, and the packet is sent over the switch fabric to the tunnel PIC at that PSD.
3. The tunnel PIC loops the packet back to the input side of its Packet Forwarding Engine,
and the packet is then handled as if it had arrived on a directly connected PIC.
PIC type                            Model number          First supported release
Ethernet
1-port 10-Gigabit Ethernet DWDM     PC-1XGE-DWDM-CBAND    9.4
SONET/SDH
4-port OC48 SONET, SFP              PC-4OC48-SON-SFP      9.3
NOTE: Only SONET PICs that are installed on an Enhanced Services (ES)
FPC on a T320 router or on a T1600 router can support shared interfaces.
Related Documentation
• JCS1200 Chassis and T Series Core Routers as a Single Platform on page 3
• Root System Domains on page 4
Inter-PSD forwarding is achieved by using tunnel PICs that reside on the PSD. Each PSD
you configure for inter-PSD forwarding must have a tunnel PIC available to the PSD. The
PSDs communicate over logical interfaces configured on the tunnel PICs. Multiple logical
interfaces can be configured on each tunnel PIC, allowing the PSD to communicate with
multiple PSDs over the same tunnel PIC.
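Assuming the tunnel PIC logical interfaces follow the standard Junos logical tunnel (lt-) configuration, a sketch of one PSD's side of an inter-PSD link might look like the following. The interface name, unit numbers, DLCI, and addresses are illustrative, and the use of peer-unit for PSD-to-PSD pairing is an assumption; check the inter-PSD forwarding statements for your release.

```
[edit interfaces]
lt-1/2/0 {
    unit 1 {
        # Logical interface used to reach another PSD (illustrative values)
        encapsulation frame-relay;
        dlci 100;
        peer-unit 2;
        family inet {
            address 10.0.12.1/30;
        }
    }
}
```

Additional units on the same lt- interface would carry traffic to other PSDs, which is how a single tunnel PIC can reach multiple PSDs.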
Typically, route reflection is performed by a dedicated router. The router is not in the
forwarding path (it does not forward IP packets) but is equipped with ample memory
and a powerful CPU.
The number of route reflectors in an IP network is much smaller than the number of
routers. A network with 30 or more routers might have one route reflector (or possibly
two for redundancy). Larger networks with hundreds of routers might have 20 route
reflectors.
Figure 3 on page 9 shows a typical network with route reflectors. These route reflectors
are not in the forwarding path.
[Figure 3: A typical network in which route reflectors RR 1 through RR 4 peer with PE routers but are not in the forwarding path.]
Figure 4 on page 9 shows the JCS1200 platform providing four distinct route reflectors
and preserving the current network architecture.
[Figure 4: The JCS1200 platform providing four route reflectors (RR1 through RR4) that peer with the PE routers.]
[Figure 5: Route reflectors deployed across POPs, with master (M) and backup (B) Routing Engines serving VPN groups VPN 1-100, VPN 101-200, VPN 201-300, and VPN 301-400.]
NOTE: Support for dual Routing Engines (master and backup) is currently
not available, but is planned for a future release.
As shown in Figure 6 on page 11, each of the 12 Routing Engines can be configured as a
standalone route reflector. The 12 Routing Engines on the JCS1200 platform are connected
to the JCS switch modules in a dual-star configuration. Each Routing Engine has access
to two interfaces (fxp0 and fxp1), one on each switch. These interfaces are used for
protocol peerings.
Each JCS switch module has 6 Gigabit Ethernet ports to connect to the outside world
for a total of 12 ports. One port on each JCS switch module is used as a management
port, and three of the remaining ports on each JCS switch module can be used to connect to
the network. (The remaining two ports are reserved.) Each Gigabit Ethernet port represents
a separate LAN.
Multiple route reflectors can be configured to share the same port and hence be part of
the same LAN. Port sharing enables JCS1200 route reflectors to conserve Gigabit Ethernet
ports and reduce the cost of adding additional line cards for connectivity to the network.
The result is a cost-effective solution for networks where multiple route reflectors are
deployed.
[Figure 6: Routing Engines RE1 through RE12 configured as route reflectors, each connected through fxp0 to JCS switch module 1 (six GE ports) and through fxp1 to JCS switch module 2 (six GE ports).]
In Figure 7 on page 12, Ethernet port 1 (RSD1) on JCS switch module 1 is connected to the
T Series Connector Interface Panel (CIP) port on T-CB-0, whereas Ethernet port 1 (RSD1)
on JCS switch module 2 is connected to the CIP port on T-CB-1.
When there are two JCS switch modules, each Routing Engine can be configured with
two Ethernet management ports. One port (fxp0.0) is connected to port 6 on the JCS
switch module in bay 1, whereas the other port (fxp1.0) is connected to port 6 on the JCS
switch module in bay 2. Each connection is a dedicated 1000-Mbps link.
When you first access a PSD through the console port on the Routing Engine, you configure
the IP address for one or both of these management ports.
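For example, a minimal management-port configuration on the PSD might look like the following sketch (the address and prefix length are illustrative; fxp1 is configured the same way with its own address):

```
[edit interfaces]
fxp0 {
    unit 0 {
        family inet {
            # Reaches port 6 on the JCS switch module in bay 1
            address 192.168.171.10/22;
        }
    }
}
```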
Figure 8 on page 12 provides a more detailed look at the connections between the two
platforms. RE m indicates a master Routing Engine on the JCS1200 platform, whereas
RE b represents a backup Routing Engine.
• Increased efficiency and investment protection—A single T Series router used with the
JCS1200 platform supports up to eight Protected System Domains (PSDs). With
multiple (up to 3) T Series chassis connected to the JCS1200 chassis, 12 PSDs can be
supported. Instead of purchasing 12 physical routers, a service provider can configure
12 PSDs using a single interconnected platform. In addition, operations and
administration are simplified through consolidation of resources.
• Maximum scaling and flexibility—A highly scalable control plane chassis preserves
slots in the router chassis that can be used for revenue-generating, high-speed
forwarding of Internet traffic. Service providers can assign control processors and
memory space on the Routing Engines in the JCS chassis to achieve the most efficient
use of resources, while delivering outstanding performance. In addition, different PSDs
can share interfaces on a single Physical Interface Card (PIC), reducing capital
expenditure and enabling you to allocate resources with finer granularity.
• Rapid service rollout—New services can be planned, tested, and deployed more quickly
with fewer resources. Each PSD provides a secure administration domain, where new
features can be tested, while other PSDs continue to provide tested software to
customers. Through fault isolation and streamlined administration domains, service
providers achieve faster revenue and accommodate rapid customer growth. In addition,
RSDs and PSDs can run different versions of Junos OS; however, the versions must be
within one release of each other. For example, if the RSD is running Junos OS Release
10.1, a PSD can run Release 10.0 or 10.2, but not Release 9.6 or 10.3. Each RSD and PSD
must be running Junos OS Release 9.4 or later.
Network Consolidation
Many carriers operate separate IP networks for public and private services. Others have
application-specific IP networks (voice and video, for example). PSDs enable carriers to
consolidate and simplify network architecture. Rather than adding more routing at the
edge to support individual services, a single platform provides service-specific virtualization
in the core of the network.
In Figure 9 on page 14, three separate networks (IPTV, enterprise VPN, and public IP) are
consolidated into one network. Instead of three core routers, only the JCS1200 platform
interconnected with a single T640 router is required to support all three services.
By delineating fault and administrative domains on a single system, PSDs enable network
administrators to decrease the number of nodes and fiber interconnections between
routers, reducing the cost and complexity of existing point of presence (PoP) architectures.
Because each PSD maintains its own routing and processes in separate partitions, security
is enhanced. With fault isolation, network anomalies in one PSD do not affect another
PSD. Streamlined boundaries allow operational domains to be isolated logically, providing
more control over router administration.
Cost Efficiency
With PSDs, forwarding resources are allocated where they are most needed. This
flexibility ensures that the most bandwidth-intensive services receive the resources
needed to meet service-level agreements. By consolidating network equipment and
functions and streamlining management and administrative tasks, resource utilization
is maximized. Shared interfaces enable you to assign expensive forwarding resources
with more granularity to specific routing domains. RSDs and PSDs can run different
versions of Junos OS; however, the versions must be within one release of each other.
For example, if the RSD is running Junos OS Release 10.1, a PSD can run Release 10.0
or 10.2, but not Release 9.6 or 10.3. To configure shared interfaces, each RSD and PSD
must be running Junos OS Release 9.3 or later.
Service providers can use a separate partition for testing and activating new services
without having to deploy a new system. Software upgrades can occur without affecting
software versions used for existing services. Carriers can begin generating revenue more
quickly and minimize the cost of introducing new services. RSDs and PSDs can run
different versions of Junos OS; however, the versions must be within one release of each
other. For example, if the RSD is running Junos OS Release 10.1, a PSD can run Release
10.0 or 10.2, but not Release 9.6 or 10.3. Each RSD and PSD must be running Junos OS
Release 9.4 or later.
Related Documentation
• Example: Configuring a JCS1200 Platform and a Single T Series Router on page 123
• Example: Configuring a JCS1200 Platform and Multiple T Series Routers on page 128
Configuring and managing the Juniper Networks JCS1200 control system and connected
T Series routers requires three separate control points (views). Each view provides
access to a different part of the system:
• Through the Junos OS running on the Routing Engine pair in the T Series router (called
the Root System Domain), the RSD administration view enables you to create the
Protected System Domains (PSDs) and to manage all the hardware in the T Series
chassis.
• Through the Junos OS running on a Routing Engine (or Routing Engine pair) in the JCS
chassis, the PSD administration view enables you to configure and manage the hardware
that is assigned to the PSD.
• The JCS administration view is controlled by JCS supervisors and operators who have
access to configuration and settings associated with the hardware and software that
reside on the JCS1200 platform. This includes JCS management modules, JCS switch
modules, the JCS Routing Engines (blades), JCS media trays, power supplies, and so on.
• Operator—Login accounts configured with operator privileges enable you to view and
enter JCS management module operational commands such as info and health. JCS
management module configuration commands are not available for operator logins.
• You can set the command target of the info command to selectively display information
about a specific Routing Engine in the JCS chassis, all Routing Engines in the chassis,
and so on.
• JCS operators can use the ifconf command to display network interface settings for
the JCS Ethernet interfaces. In addition, JCS supervisors can use the ifconf command
to change network interface settings.
The Root System Domain (RSD) view is controlled by the administrators and users of
the Junos OS running on the Routing Engines on the T Series router. RSD administration
view considerations include:
Access Privileges
The RSD administrator creates the PSDs through the Junos OS running on the Routing
Engines in the T Series chassis. With the correct user privileges and authentication, an
RSD administrator can log in to a PSD from the RSD.
System Information
The RSD administrator can use the show chassis psd command to view which PSDs are
configured within the RSD. Otherwise, when issuing show commands on the RSD, the
administrator views all hardware on the T Series router.
By default, system log messages are stored in the /var/log/messages file on the router. If
a system log message on an RSD originates from an FPC that is assigned to a PSD, the
system message is logged locally at the RSD and is also forwarded to that particular
PSD. If a system log message originates from a hardware resource that is shared between
the RSD and PSDs, the message is logged locally at the RSD and is also forwarded to
all PSDs associated with the RSD.
Management Tasks
The RSD administrator manages all hardware on the T Series router, including the Routing
Engines, FPCs, Switch Interface Boards (SIBs), the Switch Processor Mezzanine Board
(SPMB), Power Entry Modules (PEMs), and fans. The RSD administrator can issue show,
request, clear, and test commands for any hardware on the T Series router and for any
FPCs that are part of a PSD.
The Protected System Domain (PSD) view is controlled by the administrators and users
of the Junos OS running on the Routing Engines in the JCS chassis that belong to a
particular PSD. Topics in this section include:
Access Privileges
Each PSD is independent of all other PSDs and requires login authentication. When you
initially configure a PSD, you set its root authentication parameters. Authentication is
enforced when a user attempts to log in to a PSD directly or from the RSD.
System Information
The PSD administrator can display information about the Routing Engines, FPCs, and
PICs that are assigned to the PSD. The administrator can also display information about
shared T Series hardware, such as SIBs, the SPMB, PEMs, and fans. When a show
command is issued on a PSD, a field heading such as psd1-re0: precedes the set of
information that pertains only to the PSD, whereas a field heading such as rsd-re0:
precedes the set of information that pertains to the shared hardware.
System log messages originating from an FPC that is assigned to a PSD are logged locally
at the RSD and forwarded to the PSD. If a system log message originates from a hardware
resource that is shared between the RSD and PSDs, the message is logged locally at
the RSD and is forwarded to all PSDs associated with the RSD. Again, you can determine
the origin of a system message by labels such as psd1-re0: and rsd-re0:.
Management Tasks
The PSD administrator controls and manages Routing Engines and FPCs assigned to
that PSD. For example, the PSD administrator can issue request, clear, and test commands
for the FPCs and PICs that are part of the PSD. The PSD administrator has view-only
access to shared T Series hardware, such as SIBs, the SPMB, PEMs, and fans.
Routing Engines
The JCS chassis provides 12 slots (bays) for Routing Engines. A Routing Engine is a
hot-swappable, independent server with its own processors, memory, storage, network
controllers, operating system, and applications. The Routing Engine is installed in a slot
in the JCS chassis and shares power, fans, switches, and ports with other Routing Engines.
Routing Engines in the JCS1200 platform have the latest Junos OS preinstalled on them.
Management Module
The JCS management module is a hot-swappable module that you use to configure and
manage JCS components. The JCS chassis comes with one hot-swappable management
module in management module slot 1. To provide redundancy, you can add a second
management module in management module slot 2. Only one management module is
active. The other is a backup in case of failure. Each JCS management module has a
separate internal link to each JCS switch module. See Figure 10 on page 22.
You can access the JCS management module CLI through a local connection to the serial
port on the JCS management module. Or, you can access the CLI from a remote network
management station on the network through the console (Ethernet) connector.
Switch Module
The JCS switch module connects Routing Engines on the JCS1200 platform to a T Series
router and controls traffic between the two devices. The JCS chassis comes with one
hot-swappable switch module in switch module slot 1. To provide redundancy, you can
add a second switch module in switch module slot 2.
Media Tray
The media tray is a hot-swappable module that provides two USB connectors for use
by the Routing Engines, error LEDs, an ambient air temperature sensor and a pressure
sensor for use by the JCS management module, and two CompactFlash card slots. Junos
OS is preloaded onto each Routing Engine. The media tray USB ports are used to copy
new Junos OS packages onto Routing Engines. See Figure 11 on page 22. The JCS chassis
comes with one hot-swappable media tray in media tray slot 1. To provide redundancy,
you can add a second media tray in media tray slot 2.
Power Modules
Each pair of power modules operates as a redundant pair. If either power module fails,
the remaining power module continues to supply power, but there is no redundancy.
Replace a failed power module as soon as possible.
Fan Modules
The JCS1200 platform comes with four hot-swappable fan modules for cooling
redundancy. The fan module speeds vary depending on the ambient air temperature
within the JCS1200 platform.
If the ambient temperature is 25°C (77°F) or below, the JCS1200 platform fan modules
will run at their minimum rotational speed. If the ambient temperature is above 25°C
(77°F), the fan modules will run faster, increasing their speed as required to control
internal JCS1200 platform temperature.
Each fan module contains two fans operating in series. If one fan fails, the
remaining fan will run at full speed and continue to cool the JCS1200 platform. To maintain
cooling redundancy, replace a failed fan module as soon as possible.
The JCS management module CLI is a straightforward command interface. You type
commands on a single line, and commands are executed when you press the Enter key.
The CLI provides command help and command history.
Unlike the Junos OS CLI, in which configuration commands are stored in a candidate
configuration and are not activated until you commit the configuration, commands you
enter with the JCS management module CLI take effect as soon as you enter them.
• Strings that contain spaces are enclosed in quotation marks (for example, "Software Lab").
• Depending on which command options you enter, you can use the same JCS CLI
command to display configuration information or to change a configuration. For
example, compare the following:
mt -T system
Displays the Routing Engine (blade) that currently controls (owns) the media tray.
mt -T system -b 6
Configures the Routing Engine (blade) in slot 6 to control the media tray.
Username: USERID
Password: PASSW0RD
The 0 in PASSW0RD is a zero, not the letter O.
When you have created user accounts, you can log in as a specific user.
• For a list of available commands, enter the help command. For example:
system> help
? — Display commands
accseccfg — View/edit account security config
advfailover — View/edit advanced failover mode
alarm — Manage Telco System Management alarm(s)
alertcfg — Displays/Configures the global remote alert systems
alertentries — View/edit remote alarm recipients
baydata — View/edit Blade Bay Data string
...
• For help on individual commands, enter command -help, where command is the name
of the command for which you want help. For example:
usage:
clock [-options]
options:
-d - date (mm/dd/yyyy)
-t - time (hh:mm:ss)
-g - GMT offset
-dst - daylight savings time (on|off|special case)
For a GMT offset of +2:00, use one of the following values for dst:
ee - Eastern Europe
gtb - Great Britain
egt - Egypt
fle - Finland
off
For a GMT offset of +10:00, use one of the following values for dst:
ea - Eastern Australia
tas - Tasmania
vlad - Vladivostok
off
For a GMT offset in the set {-9:00, -8:00, -7:00, -6:00, -5:00}, use one of the
following values for dst:
uc - USA and Canada
other - Other locations
off
For a GMT offset of -4:00, use one of the following values for dst:
can - Canada
other - Other locations
off
• You can also use the ? or -h shortcuts to get help. For example:
system> clock -h
system> clock ?
| Denotes a choice
• Use the env command to change the command target. For example:
• The following command changes the command target from system to JCS
management module 1 (mm[1]):
system> env -T system:mm[1]
OK
system:mm[1]>
• To return the command target to the top level of the hierarchy, type the following:
system:mm[1]> env -T system
OK
system>
• Use the -T option to temporarily override the active command target for individual
commands. For example, to redirect a command to the JCS management module in
bay 1, include the following option:
-T system:mm[1]
Table 4 on page 26 lists command targets you typically use to configure and monitor the
JCS1200 platform.
Routing Engine (blade server) system:blade[x] x is the blade slot number (1 through 12)
NOTE: Additional target paths are available in the JCS management module
CLI.
• Configure the graceful-switchover statement for all PSDs on the JCS1200 platform
that include redundant Routing Engines.
Configuration Overview
• Before You Begin on page 31
• Configuration Roadmap on page 33
Before connecting the JCS1200 platform to any T Series router, ensure that the bootROM
version for all FPCs on the T Series chassis is ROM Monitor Version 6.4 or later. If an FPC
bootROM version is earlier than Version 6.4, the FPC will not come online. To upgrade
the firmware, you must contact your Juniper Networks customer support representative.
To determine if you need to upgrade the FPC firmware, display the version of the firmware
on all FPCs by issuing the show chassis firmware command:
Related Documentation
• Keeping the JCS Management Module Default SNMP Setting on page 32
• Configuration Roadmap on page 33
Configuration Roadmap
Complete the following tasks to configure the JCS1200 platform and T Series routers:
• Configure Routing Engine (blade) bay data to assign a single Routing Engine (or
redundant pair) to a Root System Domain (RSD) and to a unique Protected System
Domain (PSD).
This ID number must match the ID number set through the JCS management module
baydata command.
NOTE: Any FPC that is not assigned to a PSD belongs to the RSD. For
information about how to configure shared interfaces on a SONET PIC in
an unassigned FPC, see “Step Four: Configure Shared Interfaces
(Optional)” on page 35.
The ID number must match the ID number set through the JCS management module
baydata command.
6. Assign a Routing Engine (or redundant pair) on the JCS1200 platform to the PSD.
Routing Engine assignments must match the assignments configured through the
JCS management module baydata command.
7. Repeat Step 2 through Step 6 for each PSD to be configured under the RSD.
• Hostname
• Domain name
2. For SONET interfaces, configure Frame Relay encapsulation. For Ethernet interfaces,
configure virtual LAN (VLAN) tagging.
b. Specify the Protected System Domain (PSD) that owns the shared interface.
2. For a SONET interface, configure Frame Relay encapsulation on the physical interface
to match the RSD configuration. For an Ethernet interface, configure VLAN tagging
to match the RSD configuration.
3. For a SONET physical interface, configure the maximum transmission unit (MTU) size
to match the RSD configuration. (If the RSD has no MTU size specified, do not include
an MTU size on the PSD.)
5. Configure the logical interfaces that belong to the PSD (as specified in the RSD
configuration).
a. Configure the same DLCI (for SONET) or VLAN ID (for Ethernet) that has been
specified in the RSD configuration.
b. Specify the logical tunnel interface that is peered with the logical SONET or Ethernet
interface.
The logical unit number of the tunnel interface must be the same as the one that
is configured on the SONET or Ethernet interface.
c. Configure the protocol family and IP address of the logical SONET or Ethernet
interface.
9. For each logical tunnel interface, specify its peer logical SONET or Ethernet interface.
The logical unit number must be the same as the one that is configured on the logical
tunnel interface.
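The PSD-side steps above might look like the following sketch for a shared SONET interface. The interface names, unit numbers, DLCI, and addresses are illustrative, and the peer-interface statement name is an assumption; consult the shared-interface statements for your Junos OS release.

```
[edit interfaces]
so-2/0/0 {
    encapsulation frame-relay;        # must match the RSD configuration
    mtu 4474;                         # only if the RSD specifies an MTU
    unit 100 {
        dlci 100;                     # same DLCI as in the RSD configuration
        peer-interface ut-3/0/0.100;  # assumed statement name
        family inet {
            address 10.10.1.1/30;
        }
    }
}
ut-3/0/0 {
    unit 100 {                        # unit number must match the SONET unit
        peer-interface so-2/0/0.100;  # assumed statement name
    }
}
```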
You use the JCS management module CLI to configure basic system parameters on the
JCS1200 platform:
This is equivalent to pressing the recessed button on the front panel of the JCS
management module for more than 5 seconds.
If you are logging in for the first time, use the default username and password:
Username: USERID
Password: PASSW0RD
2. Use the env command to set JCS management module 1 (mm[1]) as the configuration
target. For example:
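The two commands implied by this procedure might look as follows; the clear command flag is an assumption patterned on the management module CLI conventions:

```
system> env -T system:mm[1]
OK
system:mm[1]> clear -cnfg
```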
This example clears the configuration on mm[1] and returns the JCS management
module to the factory default settings.
2. Use the env command to set JCS management module 1 (mm[1]) as the configuration
target. For example:
NOTE: You only need to configure the Ethernet interface on the primary
management module. The backup management module will use the IP
address from the primary if it becomes the primary management module.
NOTE: The IP address for the JCS switch modules must be on the same
subnet as the IP address for the JCS management module.
To configure the JCS switch module Ethernet interface on the JCS management module:
2. Use the env command to set JCS switch module 1 (switch[1]) as the configuration
target. For example:
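A command of the following shape would produce the settings described below; the flag names (-i for IP address, -g for gateway, -s for subnet mask, -ep for external ports) are assumptions patterned on common management module CLI usage:

```
system:switch[1]> ifconfig -i 192.168.171.98 -g 192.168.171.254 -s 255.255.252.0 -ep enabled
```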
In this example, the Ethernet interface for JCS switch module 1 is configured for an IP
address of 192.168.171.98 and a gateway address of 192.168.171.254. The subnet mask
is 255.255.252.0. The external ports (ep) of the switch module are enabled.
4. Repeat this procedure for JCS switch module 2. Use the env command to set switch
module 2 (switch[2]) as the configuration target. For example:
In this example, the Ethernet interface for JCS switch module 2 is configured for an IP
address of 192.168.171.99 and a gateway address of 192.168.171.254. The subnet mask
is 255.255.252.0. The external ports (ep) of the switch module are enabled.
• Supervisor—This role has full read and write access to the JCS1200 platform. Users
can configure the JCS management module, the JCS switch module, and Routing
Engines (blades) on the JCS1200 platform. You must configure at least one user to
have a Supervisor role.
• Operator—This role has read-only access to the JCS platform. Users can view the
configuration of the JCS management module, the JCS switch module, and the JCS
Routing Engines. They can monitor JCS operations, but they cannot change the JCS
configuration.
You can add up to 12 users to the JCS management module. Each user you add must be
assigned a unique number (1 through 12).
2. Use the env command to set JCS management module 1 (mm[1]) as the configuration
target. For example:
2. Use the env command to set JCS management module 1 (mm[1]) as the configuration
target. For example:
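The ntp command that would produce the settings described below might look like this; the flag names (-en to enable, -i for the server address, -f for the update frequency in minutes) are assumptions:

```
system:mm[1]> ntp -en enabled -i 172.17.28.5 -f 60
```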
In this example, the IP address of the NTP server is 172.17.28.5, the JCS management
module clock is updated by the NTP server every 60 minutes, and NTP is enabled.
2. Use the env command to set JCS management module 1 (mm[1]) as the configuration
target. For example:
3. Use the clock command to configure the time zone. For example:
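Based on the clock usage summary shown earlier, the command corresponding to the settings described below might be:

```
system:mm[1]> clock -g -8 -dst uc
```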
In this example, the clock is configured for 8 hours earlier than UTC (GMT) (-g -8),
and daylight saving time for the USA and Canada (-dst uc) is set.
To configure the system name, location, and contact information for the JCS management
module:
2. Use the env command to set JCS management module 1 (mm[1]) as the configuration
target. For example:
3. Use the config command to configure the system name, location, and contact
information for the JCS. For example:
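A config command of the following shape would produce the settings described below; the flag names (-name, -contact, -loc) are assumptions:

```
system:mm[1]> config -name system5 -contact "George Chang" -loc "Software Lab"
```

Note the quotation marks around strings that contain spaces.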
In this example, the system name is system5. This name identifies the JCS on the
network, appears in monitoring command output, and so on. The contact information
is for George Chang and the location is Software Lab.
The Simple Network Management Protocol (SNMP) enables the monitoring of network
devices from a central location. This section describes how to configure SNMP traps on
the JCS management module.
Tasks to configure SNMP traps and alerts on the JCS management module are:
2. Use the env command to specify mm[1] as the configuration target. For example:
3. Use the snmp command to configure the SNMP community. For example:
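An snmp command matching the settings described below might take the following shape; the flag names are guesses patterned on the other management module commands and may differ in your firmware:

```
system:mm[1]> snmp -c3 trap -c3i1 192.168.171.100 -c3t trap
```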
In this example, the community 3 name is trap, the IP address of the trap destination
is 192.168.171.100, and the community 3 type is trap.
2. Use the env command to specify mm[1] as the configuration target. For example:
3. Use the alertentries command to configure the alert recipient. For example:
In this example, the alert recipient number is 1, the recipient is named trap, the alert
status is on, alert filtering is none (all alerts are received, not just critical alerts), and
the alert type is SNMP.
2. Use the env command to specify mm[1] as the configuration target. For example:
3. Use the monalerts command to configure the monitored alerts. For example:
In this example, the enhanced alert categories are enabled. All critical (ca), warning
(wa), and informational (ia) alerts are enabled.
• monalerts on page 69
• snmp on page 74
Secure Shell, or SSH, is a network protocol that allows data to be exchanged over a
secure channel between two systems. This section describes how to use JCS commands
to configure SSH access to the JCS1200 platform.
1. Use an existing username and password to connect to the JCS management module
serial port. For example:
tcsh-1:telnet bcgmm1-con
In this example, the serial port is connected to a telnet server port identified as
bcgmm1-con.
2. Use the env command to specify mm[1] as the configuration target. For example:
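Assuming host keys are generated with the sshcfg command (the -hk gen flag is an assumption), the generation step might be:

```
system:mm[1]> sshcfg -hk gen
```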
4. You can use the displaylog command to monitor host key generation. For example:
system:mm[1]> displaylog -f
5. Once the host key is generated, use the sshcfg command to enable SSH for the JCS
CLI. For example:
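The command to enable SSH might take the following shape; the -cstatus flag is an assumption:

```
system:mm[1]> sshcfg -cstatus enabled
```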
1. See the “Generating the Host Key” section to generate a host key.
2. Locate the /.ssh/authorized_keys file and copy your public key from this file.
You copy the public key from the authorized_keys file and paste it on the command
line. For example:
4. Issue the users command to verify that the public key has been installed. For example:
system:mm[1]> users -2
- n chang
- a Role:supervisor
...
Number of SSH public keys installed for this user: 1
Last login: 1/28/08 09:26:59
5. Log out, and then use SSH to log back in. For example:
system:mm[1]> exit
In this example, the JCS management module Ethernet port is identified as bcgmm1.
• sshcfg on page 76
• users on page 78
The JCS switch module in the JCS chassis connects JCS Routing Engines to a T Series
router. For redundancy, the JCS chassis includes two JCS switch modules. The JCS switch
module is preconfigured with defaults, and the configuration should not be changed. A
script is available to complete switch configuration. This script enables you to configure
the following items on the switch module:
• Network Time Protocol (NTP)—The JCS switch module does not have a real-time
clock. You must configure NTP so that the system clock on the JCS switch module has
the correct time. The script sets the IP address for the NTP server, enables the NTP
server, and sets the time zone for the switch module.
• SNMP traps—The script also configures SNMP trap information for the switch module.
This includes setting the SNMP community name and type and specifying alert
recipients.
For more information on the JCS switch configuration script, see the Junos OS Release
Notes.
On a JCS1200 route reflector, you can install 64-bit Junos OS to increase available
memory and improve performance. However, you cannot mix a 32-bit image and a 64-bit
image in the same JCS chassis. This upgrade is available only for route reflector
applications; Protected System Domains are not supported.
NOTE: You can also order a Routing Engine with the 64-bit Junos OS image
preinstalled.
• Download the 64-bit software package from the Juniper Networks Support website
at http://www.juniper.net/support/. Under Download Software, select either Junos
(US & Canada) or Junos (Worldwide).
To download the software package, you must have a service contract and an access
account. If you need help obtaining an account, complete the registration form at the
Juniper Networks website: https://www.juniper.net/registration/Register.jsp.
1. Back up the currently running file system so that you can recover to a known, stable
environment in case something goes wrong with the upgrade:
user@host> request system snapshot
The root file system is backed up to /altroot, and /config is backed up to /altconfig.
The root and /config file systems are on the router’s CompactFlash card, and the
/altroot and /altconfig file systems are on the router’s hard disk.
NOTE: After you issue the request system snapshot command, you cannot
return to the previous version of the software, because the running copy
and the backup copy of the software are identical.
2. Copy the downloaded software package to the /var/tmp directory on the hard disk:
installation-package is the full name of the file copied in the previous step. For 64-bit
Junos OS, the full name would be jinstall64.tgz.
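The copy can be performed with the Junos OS file copy command; the server name and path shown here are placeholders:

```
user@host> file copy ftp://server.example.com/pub/jinstall64.tgz /var/tmp/
```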
This message indicates that someone manually deleted or changed an item that was
in a package. You do not need to take any action; the package is still properly deleted.
5. After you have upgraded the software and are satisfied that the new software is
properly running, issue the request system snapshot command to back up the new
software:
The root file system is backed up to /altroot, and /config is backed up to /altconfig. The
root and /config file systems are on the router’s CompactFlash card, and the /altroot and
/altconfig file systems are on the router’s hard disk.
NOTE: After you issue the request system snapshot command, you cannot
return to the previous version of the software, because the running copy and
backup copy of the software are identical.
To pass system configuration information to the Routing Engines on the JCS, you must
configure the blade bay data. Blade bay data is stored as a 60-byte text string that
contains information about how the Routing Engines on the JCS1200 platform are mapped
to PSDs and to the RSD. The blade bay mapping information is passed from the JCS
management module to the appropriate Routing Engine, so that it is available when the
Junos OS boots.
You enter a blade bay data string for each primary and standby Routing Engine on the
JCS chassis.
Blade bay data is entered as a text string with the following format. See Table 5 on
page 51 for details.
Vn-JCSn-SDn-PSDn-REPn-REBn-PRDplatform-type
V—Version number of the blade bay data. The accepted value is 01.
JCS—JCS identifier. The range of values is 01 through 04. The value for this parameter must match
the value set by the control-system-id statement configured through the Junos OS CLI.
SD—RSD identifier. The range of values is 01 through 03. The value for this parameter must match
the value set by the root-domain-id statement configured through the Junos OS CLI.
PSD—PSD identifier. Each identifier must be unique. The range of values is 01 through 31. The value
for this parameter must match the value set by the protected-system-domains statement configured
through the Junos OS CLI.
REP—Slot identifier of the primary Routing Engine. The range of values is 01 through 12. In the
absence of any Junos OS CLI configuration that affects mastership, the Routing Engine in the slot
indicated by REP will boot as the master, and the Routing Engine in slot REB will boot as the backup.
The value for this parameter must match the value set by the control-slot-numbers statement
configured through the Junos OS CLI.
REB—Slot identifier of the backup Routing Engine. Typically, the range of values is 01 through 12.
Use 00 if no backup Routing Engine is installed. In the absence of any Junos OS CLI configuration
that affects mastership, the Routing Engine in the slot indicated by REB will boot as the backup.
PRD—Routing platform type. The accepted values are T1600, T640, T320, or SCE (standalone control
element).
2. Use the baydata command to configure the blade bay data. For example:
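The configuration described in the following paragraph might look like this (a reconstruction based on the blade bay data format defined above, not the original sample; the OK responses assume the commands succeed):

```
system> baydata -b 1 -data "01-JCS01-SD01-PSD01-REP01-REB02-PRDT640"
OK
system> baydata -b 2 -data "01-JCS01-SD01-PSD01-REP01-REB02-PRDT640"
OK
```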
The bay data slots are Routing Engine slots 1 through 12 on the JCS chassis. In this
example, the blade bay data is configured for the Routing Engine in slot 1 and the
Routing Engine in slot 2. Blade 1 is the primary Routing Engine of PSD 1. Blade 2 is the
backup Routing Engine of PSD 1. PSD 1 is connected to RSD 1, and RSD 1 is a T640
router.
3. Repeat this procedure for each Routing Engine on the JCS1200 platform.
JCS configuration should include a name for each Routing Engine (blade) installed in the
JCS1200 platform. This name identifies the Routing Engine in CLI command output and
in monitoring displays.
2. Use the env command to specify the blade you want to configure. For example:
3. Use the config command to configure the blade name. For example:
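A sketch of the two commands together (BLADE01 is the name used in the explanation that follows; the system:blade[1] target and the prompt change are assumptions based on how the env command sets the persistent command target):

```
system> env -T system:blade[1]
OK
system:blade[1]> config -name BLADE01
OK
```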
In this example, the blade name is BLADE01. This name identifies the JCS Routing
Engine on the network, and it appears in monitoring command output.
alertentries
Description (JCS management module CLI) Display or configure the recipients of SNMP alerts
generated by the JCS management module.
-f filter-type—(Optional) Filter the type of alerts received by the alert recipient. Replace
filter-type with a value of critical (receive critical alerts only) or none (no filtering, receive
all alerts).
-status (on | off)—(Optional) Set alert status for the specified alert recipient. When the
status is on, the recipient receives alarm notifications. When the status is off, the recipient
does not receive alarm notifications.
-t snmp—(Optional) Sets SNMP as the alert notification method for the specified alert
recipient.
• snmp on page 74
Output Fields Table 6 on page 54 lists the output fields for the alertentries command. Output fields are
listed in the approximate order in which they appear.
-status Alert status for the specified recipient. Alert status is on or off.
baydata
Description (JCS management module CLI) Display, configure, or remove informational data (blade
bay data) associated with Routing Engine blades.
NOTE: When a blade restarts, the status should change from “BSMP” to
“Supported”. The "Supported" status indicates that the blade has been
restarted since the last baydata change for that blade, and it should have
the proper baydata configuration information. However, if the management
module (MM) is reset, all blades will show a "BSMP" status, because the MM
does not know if the blades have current baydata information after a restart.
As individual blades are restarted, their status should change to "Supported".
Options -b n—(Optional) Specify a specific Routing Engine. Replace n with the Routing Engine
slot number (1 through 12). If a Routing Engine is not specified, the command applies to
all Routing Engines in the JCS chassis.
-data “data-definition”—Set the blade bay data. Blade bay data is an ASCII text string
with the following format: Vn-JCSn-SDn-PSDn-REPn-REBn-PRDplatform-type. Enclose
the text string in double quotation marks (" ").
• Vn—Version number of the blade bay data. Replace n with a version number. The
accepted value is 01.
• JCSn—JCS identifier. Replace n with the ID number of the JCS. The range of values is
01 through 04.
• SDn—RSD identifier. Replace n with the ID number of the RSD. The range of values is
01 through 03.
• PSDn—PSD identifier. Replace n with the ID number of the PSD. The range is 01 through
31.
• REPn—Slot number of the primary Routing Engine in a primary, backup Routing Engine
pair. Replace n with the slot number of the Routing Engine. The range is 01 through 12.
• REBn—Slot number of the backup Routing Engine in a primary, backup Routing Engine
pair. Replace n with the slot number of the Routing Engine. The range is 01 through 12.
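Combining these fields, a hypothetical invocation for the Routing Engine in slot 3 might look like this (all identifiers are illustrative):

```
system> baydata -b 3 -data "01-JCS01-SD01-PSD02-REP03-REB04-PRDT1600"
```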
Related • Configuring the Routing Engine Parameters (Blade Bay Data) on page 51
Documentation
• control-slot-numbers on page 114
Output Fields Table 7 on page 57 lists the output fields for the baydata command. Output fields are
listed in the approximate order in which they appear.
Definition Blade bay data (if any) assigned to the Routing Engine.
clear
Description (JCS management module CLI) Restore the JCS management module configuration to
the default settings.
NOTE: Use this command to clear the JCS management module configuration
only. Do not clear the JCS switch module configuration.
Output Fields No results are returned from this command. After the JCS management module resets,
you must start a new CLI session.
clock
Syntax clock <-d date> <-dst dst-mode> <-g offset> <-t time> -T system:mm[x]
Description (JCS management module CLI) Display or configure the JCS management module clock
settings.
-dst dst-mode—(Optional) Daylight saving mode for the clock. Choices include:
• others—Nonstandard daylight saving time (outside the United States and Canada)
-g offset—(Optional) UTC (GMT) offset, in hours. Replace offset with a value from -12 to
+12.
Output Fields When you enter this command, you are provided with feedback on the status of your
request.
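For example, to display the current clock settings and then set a UTC offset of -5 hours (a sketch using only the options shown in the syntax above):

```
system> clock -T system:mm[1]
system> clock -T system:mm[1] -g -5
```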
config
Description (JCS management module CLI) Display configuration information or configure a device
on the JCS1200 platform.
Routing Engine names can contain any character except for less than (<) and greater
than (>) symbols.
JCS management module names can contain only alphanumeric characters, hyphens
(–), pound signs (#), underscores (_), and periods (.).
NOTE: Unlike the contact name and location, the device name is not enclosed
in quotation marks.
-loc “location”—(Optional) JCS management module only. Specify the location of the
primary JCS management module. The location must be enclosed in double quotation
marks (“ ”) and can be up to 47 characters. A location can contain any character except
for less than (<) and greater than (>) symbols.
Output Fields Table 8 on page 61 lists the output fields for the config command. Output fields are listed
in the approximate order in which they appear.
config (Configure a JCS Management Module)
system> config -T system:mm[1] -contact "George Chu x2556" -name SW-MM1 -loc "SW Lab"
OK
env
Description (JCS management module CLI) Set the persistent environment for commands you enter
in the JCS management module. Commands entered during the remainder of the login
session apply to this target, unless you specify a new command target.
Output Fields When you enter this command, you are provided feedback on the status of your request.
The command prompt changes to reflect the new command target.
exit
Description (JCS management module CLI) Terminate the current CLI session.
Output Fields When you enter this command, no feedback is provided. Instead, the user login prompt
appears.
help
Syntax [help | ?]
Description (JCS management module CLI) Display a list of available commands with a brief
description of each command. You can also add a –help, -h, or ? option to a command
to display help for the command.
Output Fields When you enter this command, you are provided feedback on the status of your request.
Description (JCS management module CLI) Configure or display the JCS management module
Ethernet interface.
Output Fields Table 9 on page 65 lists the output fields for the ifconfig command. Output fields are
listed in the approximate order in which they appear.
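A sketch of a static address configuration for the management module interface, assuming the same -c, -i, -s, and -g options shown below for the switch module version of the command; the addresses are placeholders:

```
system> ifconfig -T system:mm[1] -c static -i 192.168.70.125 -s 255.255.255.0 -g 192.168.70.1
```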
Syntax ifconfig -T system:switch[x] <-c static> <-em (enabled | disabled)> <-ep (enabled |
disabled)> <-g gateway-address> <-i static-ip-address> <-s subnet-mask>
Description (JCS management module CLI) Configure or display the JCS switch module Ethernet
interface.
NOTE: For redundancy, you must configure the Ethernet interface for both
JCS switch modules.
-ep (enabled | disabled)—(Optional) Enable or disable external ports on the JCS switch
module.
Output Fields Table 10 on page 68 lists the output fields for the ifconfig command. Output fields are
listed in the approximate order in which they appear.
ifconfig (Configure)
system> ifconfig -T system:switch[1] -c static -em enabled -ep enabled -i 157.210.171.98 -g 157.210.171.254 -s 255.255.252.0
OK
monalerts
Syntax monalerts -T system:mm[x] <-ca (enabled | disabled)> <-ec (enabled | disabled)> <-ia
(enabled | disabled)> <-wa (enabled | disabled)>
Description (JCS management module CLI) Display or configure alerts monitored by the JCS
management module.
NOTE: Make sure enhanced legacy alerts are enabled for the JCS1200
platform.
• snmp on page 74
Output Fields Table 11 on page 69 lists the output fields for the monalerts command. Output fields are
listed in the approximate order in which they appear.
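Using the options from the syntax above, enabling all four monitored alert categories might look like this:

```
system> monalerts -T system:mm[1] -ca enabled -wa enabled -ia enabled -ec enabled
```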
mt
Description (JCS management module CLI) Configure or display the Routing Engine (blade) that is
in control of the JCS media tray (mt). You can use the media tray to copy Junos OS from
a USB device to a Routing Engine installed in the JCS chassis.
-b n—(Optional) Configure which Routing Engine controls (owns) the media tray. Replace
n with a value of 1 through 12 to indicate the slot number of the Routing Engine to which
you want to assign control of the media tray.
Output Fields When you enter this command, you are provided feedback on the status of your request.
mt (Configure)
system> mt -b 12
OK
ntp
Syntax ntp -T system:mm[x] <-en (enabled | disabled)> <-i ip-address | hostname> <-f
update-frequency> <-synch>
Description (JCS management module CLI) Configure or display the JCS management module
network time protocol (NTP) settings.
-en (enabled | disabled)—(Optional) Enable or disable NTP for the JCS management
module.
-synch—(Optional) Synchronize the JCS management module clock with the NTP server.
Output Fields Table 12 on page 72 lists the output fields for the ntp command. Output fields are listed
in the approximate order in which they appear.
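For example, to enable NTP, point the management module at an NTP server (the address is a placeholder), and force an immediate synchronization:

```
system> ntp -T system:mm[1] -en enabled -i 172.16.10.1 -synch
```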
snmp
Description (JCS1200 platform only) Display or configure SNMP settings on the JCS management
module.
Options -T system:mm[x]—Specify the JCS management module as the target of the command.
Replace x with a value of 1 or 2.
-cax community-type—(Optional) Specify an SNMPv3 view type for the community. View
types can be get, set, or trap. Replace x with a value of 1 through 3 to represent the
community number.
-cn contact-name—(Optional) Specify a contact name for the SNMP community host
server.
• monalerts on page 69
Output Fields Table 13 on page 74 lists the output fields for the snmp command. Output fields are listed
in the approximate order in which they appear.
snmp (Configure)
system> snmp -T system:mm[1] -ca1 trap -c1 Traps -c3i1 192.168.171.100
OK
sshcfg
Syntax sshcfg -T system:mm[x] <-cstatus (enabled | disabled)> <-hk (rsa | dsa | gen)> <-v1 (on |
off)>
Description (JCS1200 platform only) Display or configure SSH access on the JCS management
module.
Options -T system:mm[x]—Specify the JCS management module as the target of the command.
Replace x with a value of 1 or 2.
-cstatus (enabled | disabled)—(Optional) Enable or disable the SSH server on the JCS
management module.
-hk gen—(Optional) Generate a host key for the JCS management module.
-hk (rsa | dsa)—(Optional) Display RSA or DSA host key information for the JCS
management module.
-v1 (on | off)—(Optional) Enable or disable SSH v1 on the JCS management module. (SSH
v2 is always enabled.)
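For example, to enable the SSH server and disable SSH v1 (a sketch using only the options listed above):

```
system> sshcfg -T system:mm[1] -cstatus enabled -v1 off
```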
Output Fields Table 14 on page 76 lists the output fields for the sshcfg command. Output fields are
listed in the approximate order in which they appear.
CLI SSH port Port number assigned to the CLI SSH server.
SMASH SSH port Port number assigned to the SMASH SSH server.
ssh-dss DSS fingerprint for the SSH server. This fingerprint is used to verify
the authenticity of the server.
ssh-rsa RSA fingerprint for the SSH server. This fingerprint is used to verify
the authenticity of the server.
users
Description (JCS1200 platform only) Display, configure, or clear user accounts on the JCS
management module.
Output Fields Table 15 on page 78 lists the output fields for the users command. Output fields are listed
in the approximate order in which they appear.
Role Authority level assigned to the user. Users can have either supervisor
or operator authority.
Blades Routing Engines (blades) to which the user has access. By default,
users have access to all Routing Engines.
Switches JCS switch modules to which the user has access. By default, users
have access to all switch modules.
Using the Junos OS command-line interface (CLI), you configure Root System Domain
(RSD) and Protected System Domain (PSD) parameters at the [edit chassis
system-domains] hierarchy level:
[edit chassis]
system-domains {
protected-system-domains psdn {
control-plane-bandwidth-percent percent;
control-slot-numbers [ slot-numbers ];
control-system-id control-system-id;
description description;
fpcs [ slot-numbers ];
}
root-domain-id root-domain-id;
}
To configure a Root System Domain (RSD), create Protected System Domains (PSDs)
under it, and assign FPCs from the T Series router and Routing Engines from the JCS1200
routing platform to each PSD, perform the following steps.
NOTE: Several of the values set through the following Junos configuration
statements must match the values set by the baydata command through
the JCS management module CLI. For the baydata command format, see
baydata.
The value for this statement must match the SD value set through the baydata
command.
NOTE: The PSD identifier must be unique for each RSD. For example, if
PSD1 is assigned to RSD1, neither RSD2 nor RSD3 can contain PSD1.
The value for this statement must match the PSD value set through the baydata
command.
For Junos OS Release 9.4, supported values for slot-numbers are 0 through 7.
The value for this statement must match the JCS value set through the baydata
command.
The value for control-slot-numbers for the primary Routing Engine assigned to the
PSD must match the REP value set through the JCS management module baydata
command. Similarly, the value for control-slot-numbers for the backup Routing
Engine must match the REB value set through the baydata command. In the absence
of any Junos OS CLI configuration that affects mastership, the Routing Engine in
the slot indicated by REP will boot as the master, and the Routing Engine in slot
REB will boot as the backup. See baydata.
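Putting these requirements together, a minimal sketch of a PSD configuration follows. All identifiers are illustrative and must match the corresponding baydata values (this example matches a string of 01-JCS01-SD01-PSD01-REP01-REB02-PRDT640):

```
[edit chassis]
system-domains {
    root-domain-id 1;
    protected-system-domains psd1 {
        description "PSD 1 on RSD 1";
        fpcs [ 0 1 ];
        control-system-id 1;
        control-slot-numbers [ 1 2 ];
    }
}
```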
• Example: Configuring a JCS1200 Platform and a Single T Series Router on page 123
• Example: Configuring a JCS1200 Platform and Multiple T Series Routers on page 128
1. Connect to the console port on the Routing Engine that is assigned to the PSD you
want to configure.
2. At the login prompt on the console, log in with the username root.
Initially, the root user account requires no password. You can see that you are the root
user, because the prompt on the routing platform shows the username root@%.
root@% cli
root@>
cli> configure
[edit]
root#
5. Configure the name of the routing platform (the routing platform hostname). We do
not recommend spaces in the routing platform name. However, if the name does
include spaces, enclose the entire name in quotation marks (" ").
[edit]
root# set system host-name host-name
[edit]
root# set system domain-name domain-name
7. Configure the IP addresses and prefix lengths for one or both of the router management
Ethernet interfaces (fxp0 and fxp1) on each Routing Engine.
[edit]
If both interfaces are configured (for JCS switch module redundancy), we recommend
that the IP address for each interface be on a separate subnet. The fxp0 interface
connects to port 6 on the JCS switch module in bay 1, whereas the fxp1 interface
connects to port 6 on the JCS switch module in bay 2.
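For example (the addresses are placeholders on separate subnets, per the recommendation above):

```
[edit]
root# set interfaces fxp0 unit 0 family inet address 192.168.1.1/24
root# set interfaces fxp1 unit 0 family inet address 192.168.2.1/24
```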
[edit]
root# set system backup-router address
Choose a router that is directly connected to the local routing platform by way of the
management interface.
9. Configure the IP address of a DNS server. The routing platform uses the DNS name
server to translate hostnames into IP addresses.
[edit]
root# set system name-server address
10. Set the root password, entering a clear-text password that the system will encrypt,
a password that is already encrypted, or an SSH public key string.
[edit]
root# set system root-authentication plain-text-password
New password: type password
Retype new password: retype password
[edit]
root# set system root-authentication encrypted-password encrypted-password
[edit]
root# set system root-authentication ssh-rsa key
11. Commit the configuration, which activates the configuration on the routing platform:
[edit]
root# commit
After committing the configuration, you see the newly configured hostname appear
after the username in the prompt; for example, user@host#.
If you want to configure additional Junos OS properties at this time, remain in the CLI
configuration mode and add the necessary configuration statements. For more
information about how to configure additional properties, see the Junos System Basics
Configuration Guide. You will need to commit your configuration changes to activate
them on the routing platform.
[edit]
root@host-name# exit
root@host-name>
13. Issue the request system snapshot command to back up the configuration to the
/altconfig file system on the hard drive.
If you do not issue the request system snapshot command, the configuration on the
alternate boot device will be out of sync with the configuration on the primary boot
device. The request system snapshot command causes the root file system to be
backed up to /altroot, and /config to be backed up to /altconfig. The root and /config
file systems are on the routing platform’s flash disk, and the /altroot and /altconfig
file systems are on the routing platform’s hard disk.
NOTE: After you issue the request system snapshot command, you cannot
return to the previous version of the software, because the running copy
and the backup copy of the software are identical.
1. Connect to the console port on the Routing Engine that is assigned to the PSD you
want to configure.
2. At the login prompt on the console, log in with the username root.
Initially, the root user account requires no password. You can see that you are the root
user, because the prompt on the routing platform shows the username root@%.
root@% cli
root@>
cli> configure
[edit]
root#
5. Configure a hostname and the IP addresses and prefix lengths for one or both of the
router management Ethernet interfaces (fxp0 and fxp1) on each Routing Engine.
If both interfaces are configured (for JCS switch module redundancy), we recommend
that the IP address for each interface be on a separate subnet. The fxp0 interface
connects to port 6 on the JCS switch module in bay 1, whereas the fxp1 interface
connects to port 6 on the JCS switch module in bay 2.
[edit]
root# edit groups
[edit groups]
root# set re0 system host-name router1
root# set re0 interfaces fxp0 unit 0 family inet address 10.10.10.1/24
root# set re1 system host-name router2
root# set re1 interfaces fxp0 unit 0 family inet address 10.10.10.2/24
root# set re0 system host-name router1
root# set re0 interfaces fxp1 unit 0 family inet address 10.20.20.1/24
root# set re1 system host-name router2
root# set re1 interfaces fxp1 unit 0 family inet address 10.20.20.2/24
[edit]
root# set system domain-name domain-name
[edit groups]
root# set re0 interfaces lo0 unit 0 family inet address 2.2.2.1/32
root# set re1 interfaces lo0 unit 0 family inet address 2.2.2.2/32
[edit groups]
root# top
[edit]
root# set apply-groups [re0 re1]
[edit]
root# set chassis redundancy routing-engine 0 master
root# set chassis redundancy routing-engine 1 backup
root# set chassis redundancy routing-engine graceful-switchover
[edit]
root# commit synchronize
[edit]
root# set system backup-router address
Choose a router that is directly connected to the local routing platform by way of the
management interface.
12. Configure the IP address of a DNS server. The routing platform uses the DNS name
server to translate hostnames into IP addresses.
[edit]
root# set system name-server address
13. Set the root password, entering a clear-text password that the system will encrypt,
a password that is already encrypted, or an SSH public key string.
[edit]
root# set system root-authentication plain-text-password
New password: type password
Retype new password: retype password
[edit]
root# set system root-authentication encrypted-password encrypted-password
[edit]
root# set system root-authentication ssh-rsa key
14. After you have installed the new software and are satisfied that it is successfully
running, issue the request system snapshot command to back up the new software
on both master and backup Routing Engines.
{master}
user@host> request system snapshot
The root file system is backed up to /altroot, and /config is backed up to /altconfig.
The root and /config file systems are on the routing platform’s flash disk, and the
/altroot and /altconfig file systems are on the routing platform’s hard disk.
NOTE: After you issue the request system snapshot command, you cannot
return to the previous version of the software, because the running copy
and backup copy of the software are identical.
Interfaces Hierarchy
To configure shared interfaces, you must be familiar with the [edit interfaces] hierarchy
in the Junos configuration command-line interface (CLI).
For detailed information about all other configuration statements under the [edit
interfaces] hierarchy, see the Junos OS Network Interfaces Configuration Guide.
interfaces {
ge-fpc/pic/slot {
vlan-tagging;
shared-interface;
unit logical-unit-number {
vlan-id number;
peer-interface interface-name;
interface-shared-with psdn;
family family {
address ip-address;
}
}
}
so-fpc/pic/slot {
encapsulation frame-relay;
shared-interface;
unit logical-unit-number {
dlci dlci-identifier;
peer-interface interface-name;
interface-shared-with psdn;
family family {
address ip-address;
}
}
}
xe-fpc/pic/slot {
shared-interface;
unit logical-unit-number {
peer-interface interface-name;
vlan-id number;
interface-shared-with psdn;
family family {
address ip-address;
}
}
}
ut-fpc/pic/slot {
unit logical-unit-number {
peer-interface interface-name;
}
}
}
On the PSD, you configure the physical interface as well and identify it as a shared
interface. Then configure the assigned logical interfaces under it and bind each one to a
peer interface on the Tunnel PIC owned by the PSD.
When you configure shared interfaces, the values for several parameters configured on
the RSD and the PSD must match:
• On the physical Gigabit Ethernet interface, VLAN tagging must be configured in both
the RSD and the PSD.
• On the physical SONET interface, the same maximum transmission unit (MTU) size
must be used in both the RSD and PSD. For example, in both the RSD and PSD, do not
include any MTU configuration to allow the default MTU size to be applied to the
physical interface. Or, in both the RSD and PSD, configure the same MTU size. For
example, in both the RSD and PSD configuration, include the mtu 5000 statement at
the [edit interfaces so-0/0/1] hierarchy level.
• The same logical unit number must be specified on the physical shared interface
(so-fpc/pic/slot.logical-unit-number, ge-fpc/pic/slot.logical-unit-number, or
xe-fpc/pic/slot.logical-unit-number) and on the physical uplink tunnel interface
(ut-fpc/pic/slot.logical-unit-number) owned by the PSD. For example, in both the RSD
and PSD configuration, specify so-0/0/0.1 as the logical SONET interface. In the PSD
configuration, configure ut-0/0/0.1 as the logical peer tunnel interface.
• On the logical SONET interface, the same DLCI must be configured in both the RSD
and the PSD. For example, at the [edit interfaces so-0/0/0.1] hierarchy level, include
the dlci 101 statement in both the RSD and PSD configuration.
• On the logical Ethernet interface, the same virtual LAN (VLAN) identifier must be
configured in both the RSD and the PSD. For example, at the [edit interfaces ge-0/0/0.2]
hierarchy level, include the vlan-id 102 statement in both the RSD and PSD configuration.
• For VLAN tagging, use the vlan-tagging statement at one of the following hierarchy
levels: [edit interfaces ge-fpc/pic/slot] or [edit interfaces xe-fpc/pic/slot].
3. Configure logical interfaces under the physical interface using the unit
logical-unit-number statement at the [edit interfaces so-fpc/pic/slot] hierarchy level,
the [edit interfaces ge-fpc/pic/slot] hierarchy level, or the [edit interfaces
xe-fpc/pic/slot] hierarchy level.
4. For each logical SONET interface, include the following statements at the [edit
interfaces so-fpc/pic/slot.logical-unit-number] hierarchy level:
For each logical Gigabit Ethernet interface, include the following statements at one
of the following hierarchy levels: [edit interfaces ge-fpc/pic/slot.logical-unit-number]
or [edit interfaces xe-fpc/pic/slot.logical-unit-number].
In the following example, so-0/0/0.0 and so-0/0/0.1 belong to PSD1, whereas PSD2
owns so-0/0/0.2:
interfaces {
so-0/0/0 {
encapsulation frame-relay;
unit 0 {
dlci 100;
interface-shared-with psd1;
}
unit 1 {
dlci 101;
interface-shared-with psd1;
}
unit 2 {
dlci 102;
interface-shared-with psd2;
}
}
}
In the following example, ge-1/0/0.1 and ge-1/0/0.2 belong to PSD1, whereas PSD2 owns
ge-1/0/0.3:
interfaces {
ge-1/0/0 {
vlan-tagging;
unit 1 {
vlan-id 100;
interface-shared-with psd1;
}
unit 2 {
vlan-id 101;
interface-shared-with psd1;
}
unit 3 {
vlan-id 102;
interface-shared-with psd2;
}
}
}
interfaces {
xe-5/0/0 {
vlan-tagging;
unit 0 {
vlan-id 209;
interface-shared-with psd4;
}
unit 1 {
vlan-id 200;
interface-shared-with psd4;
}
}
}
1. Configure the physical interface at the [edit interfaces] hierarchy level by doing one
of the following:
• Configure the physical Gigabit Ethernet interface using the ge-fpc/pic/slot statement.
• For VLAN tagging, use the vlan-tagging statement at one of the following hierarchy
levels: [edit interfaces ge-fpc/pic/slot] or [edit interfaces xe-fpc/pic/slot].
4. Configure logical interfaces under the physical interface using the unit
logical-unit-number statement at one of the following hierarchy levels: [edit interfaces
so-fpc/pic/slot], or [edit interfaces ge-fpc/pic/slot], or [edit interfaces xe-fpc/pic/slot].
The values for logical-unit-number must match the values set in the RSD configuration.
• For SONET interfaces, include the dlci dlci-identifier statement at the [edit interfaces
so-fpc/pic/slot unit logical-unit-number] hierarchy level to assign a data-link
connection identifier (DLCI) for the point-to-point Frame Relay connection between
the RSD and the PSD. The value for dlci-identifier must match the value set in the
RSD configuration for the specified logical SONET interface.
• For Gigabit Ethernet interfaces, include the vlan-id number statement at one of the
following hierarchy levels: [edit interfaces ge-fpc/pic/slot unit logical-unit-number]
or [edit interfaces xe-fpc/pic/slot unit logical-unit-number] to bind an 802.1Q VLAN
identifier tag to the logical interface. The value for number must match the value
set in the RSD configuration for the specified logical Gigabit Ethernet interface.
7. For each logical interface, include the family family statement to configure the protocol
family for the logical interface.
8. Configure the IP address of the logical interface using the address address statement
at one of the following hierarchy levels: [edit interfaces so-fpc/pic/slot unit
logical-unit-number family family], or [edit interfaces ge-fpc/pic/slot unit
logical-unit-number family family], or [edit interfaces xe-fpc/pic/slot unit
logical-unit-number family family].
9. Configure the physical tunnel interface using the ut-fpc/pic/slot statement at the [edit
interfaces] hierarchy level.
10. Configure the logical tunnel interfaces using the unit logical-unit-number statement
at the [edit interfaces ut-fpc/pic/slot] hierarchy level.
The logical unit number must match the value of the logical unit number for the
physical shared interface. For example, if the shared interface logical unit is 1 (as part
of so-0/0/0.1), configure ut-0/0/0.1 as the logical peer tunnel interface.
11. For each logical tunnel interface, specify the logical peer interface on the SONET,
Gigabit Ethernet, or 10-Gigabit Ethernet PIC using the peer-interface statement at the
[edit interfaces ut-fpc/pic/slot unit logical-unit-number] hierarchy level.
As described in Step 10, the logical unit number for the shared interface and the uplink
tunnel interface must match.
SONET (PSD1) In the following example, logical SONET interface so-0/0/0.0 is peered with logical
tunnel interface ut-1/0/0.0 and so-0/0/0.1 is peered with ut-1/0/0.1.
interfaces {
so-0/0/0 {
encapsulation frame-relay;
shared-interface;
unit 0 {
dlci 100;
peer-interface ut-1/0/0.0;
family inet {
address 10.10.10.1/24;
}
}
unit 1 {
dlci 101;
peer-interface ut-1/0/0.1;
family inet {
address 10.10.11.1/24;
}
}
}
ut-1/0/0 {
unit 0 {
peer-interface so-0/0/0.0;
}
unit 1 {
peer-interface so-0/0/0.1;
}
}
}
SONET (PSD2) In the following example, logical SONET interface so-0/0/0.2 is peered with logical tunnel
interface ut-2/0/0.2.
interfaces {
so-0/0/0 {
encapsulation frame-relay;
shared-interface;
unit 2 {
dlci 102;
peer-interface ut-2/0/0.2;
family inet {
address 10.10.12.1/24;
}
}
}
ut-2/0/0 {
unit 2 {
peer-interface so-0/0/0.2;
}
}
}
Ethernet (PSD1) In the following example, logical Gigabit Ethernet interface ge-1/0/0.1 is peered with
logical tunnel interface ut-3/0/0.1, and ge-1/0/0.2 is peered with ut-3/0/0.2.
interfaces {
ge-1/0/0 {
vlan-tagging;
shared-interface;
unit 1 {
vlan-id 100;
peer-interface ut-3/0/0.1;
family inet {
address 10.10.13.1/24;
}
}
unit 2 {
vlan-id 101;
peer-interface ut-3/0/0.2;
family inet {
address 10.10.14.1/24;
}
}
}
ut-3/0/0 {
unit 1 {
peer-interface ge-1/0/0.1;
}
unit 2 {
peer-interface ge-1/0/0.2;
}
}
}
Ethernet (PSD2) In the following example, logical Gigabit Ethernet interface ge-1/0/0.3 is peered with
logical tunnel interface ut-4/0/0.3.
interfaces {
ge-1/0/0 {
vlan-tagging;
shared-interface;
unit 3 {
vlan-id 102;
peer-interface ut-4/0/0.3;
family inet {
address 10.10.15.1/24;
}
}
}
ut-4/0/0 {
unit 3 {
peer-interface ge-1/0/0.3;
}
}
}
10-Gigabit Ethernet (PSD4) In the following example, logical 10-Gigabit Ethernet interface
xe-5/0/0.0 is peered with logical tunnel interface ut-2/0/0.0 and xe-5/0/0.1 is peered
with ut-2/0/0.1.
interfaces {
xe-5/0/0 {
vlan-tagging;
shared-interface;
unit 0 {
vlan-id 209;
peer-interface ut-2/0/0.0;
family inet {
address 10.1.1.2/30;
}
family inet6 {
address ::10.1.1.2/126;
}
}
unit 1 {
vlan-id 200;
peer-interface ut-2/0/0.1;
family inet {
address 11.1.1.2/30;
}
family inet6 {
address ::11.1.1.2/126;
}
}
}
ut-2/0/0 {
unit 0 {
peer-interface xe-5/0/0.0;
}
unit 1 {
peer-interface xe-5/0/0.1;
}
}
}
Whereas the RSD controls the physical shared interface and allocates a logical interface
on it to the PSD, the PSD controls the configuration under the logical interface, including
the protocol family. The shared interface on the RSD is not aware of the protocol family
information associated with the logical interface. Therefore, on the PSD, the firewall filter
must be configured under the [edit firewall family any] hierarchy level and the filter applied
to the entire logical interface (as opposed to a protocol family under the interface). With
Junos OS Release 9.4, only output filters are supported.
To configure a firewall filter on the PSD, create the filter conditions and apply the filter
to the logical interfaces:
1. Create the firewall filter:
a. Include the filter filter-name statement at the [edit firewall family any] hierarchy
level.
b. Include the term term-name statement at the [edit firewall family any filter
filter-name] hierarchy level.
c. Include the from match-conditions statement at the [edit firewall family any filter
filter-name term term-name] hierarchy level.
d. Include the then action statement at the [edit firewall family any filter filter-name
term term-name] hierarchy level.
e. Include the then action-modifiers statement at the [edit firewall family any filter
filter-name term term-name] hierarchy level.
2. Apply the firewall filter to the logical interface on the shared interface by including
the filter output filter-name statement at the [edit interfaces interface-name unit
logical-unit-number] hierarchy level.
Starting with Junos OS Release 10.1, firewall filters on logical interfaces can be configured
on the RSD. Filtering is performed on the PSD, but logical interface filters configured on
the RSD are applied automatically by the PSD.
To configure a logical interface filter on the RSD, apply the firewall filter to the logical
interface on the shared interface by including the filter output filter-name statement at
the [edit interfaces interface-name unit logical-unit-number] hierarchy level on the RSD.
Filters configured on the RSD can co-exist with filters configured on the PSD. Counter
statistics related to PSD filtering are available on the RSD.
In the following example, term 1 and term 2 of the firewall filter-out provide per-class
policing and term 3 provides logical interface-based policing. The filter is applied to the
so-4/5/6.0 logical interface.
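The original example configuration is not reproduced here. The following sketch illustrates the structure just described; the policer names and forwarding-class names are assumptions, and the policer definitions themselves are omitted:

```
firewall {
    family any {
        filter filter-out {
            /* term 1 and term 2: per-class policing (hypothetical classes and policers) */
            term 1 {
                from {
                    forwarding-class ef;
                }
                then policer ef-policer;
            }
            term 2 {
                from {
                    forwarding-class af;
                }
                then policer af-policer;
            }
            /* term 3: logical interface-based policing */
            term 3 {
                then policer if-policer;
            }
        }
    }
}
interfaces {
    so-4/5/6 {
        unit 0 {
            filter {
                output filter-out;
            }
        }
    }
}
```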
For more information about firewall filters, see the Junos OS Policy Framework
Configuration Guide.
• Random early detection (RED) drop profiles and scheduler maps that are bound to
physical shared interfaces must be configured on the RSD.
• Classifiers and rewrite rules that are bound to logical shared interfaces must be
configured on the PSD.
• CoS queues and forwarding classes must be configured identically on both the RSD
and on the PSD that owns the logical shared interfaces.
For example, the following CoS forwarding classes need to be configured on both the
RSD and the PSD:
class-of-service {
forwarding-classes {
queue 0 be priority high;
queue 1 ef priority high;
queue 2 af priority high;
queue 3 nc priority high;
queue 4 fc4 priority high;
queue 5 fc5 priority high;
queue 6 fc6 priority high;
queue 7 fc7 priority high;
}
}
To view queue statistics on a shared interface, you must issue the show interfaces queue
so-fpc/pic/slot command or the show interfaces queue ge-fpc/pic/slot command on the
RSD. If you issue the command on the PSD, the system displays this message: “Egress
queue statistics are not applicable to this interface.”
For more information about CoS features, see the Junos OS Class of Service Configuration
Guide.
Interface Hierarchy
For detailed information about all other configuration statements under the [edit
interfaces] hierarchy, see the Junos OS Network Interfaces Configuration Guide.
interfaces {
xt-fpc/pic/slot {
unit logical-unit-number {
dlci dlci-number;
encapsulation frame-relay;
peer-interface interface-name;
peer-psd psdn;
}
}
}
1. Use the xt-fpc/pic/slot statement at the [edit interfaces] hierarchy level to configure
cross-connections with the other PSDs.
2. Configure logical interfaces under the cross-connect interface using the unit
logical-unit-number statement at the [edit interfaces xt-fpc/pic/slot] hierarchy level.
The values for logical-unit-number must match values set in the Root System Domain
(RSD) configuration.
3. For each logical interface, include the peer-psd, peer-interface, encapsulation, and
dlci statements shown in the hierarchy above.
4. Repeat this procedure for each PSD that you want to include in inter-PSD forwarding.
In the example illustrated in Figure 12 on page 109, a cross-connect using a tunnel interface
transports packets between the logical interfaces configured on each PSD.
Table 16: Example: Inter-PSD Forwarding
PSD     Interface     Addresses
PSD 5   xt-4/3/0.1    10.0.0.2, 2121:2121::2/64
        xt-4/3/0.2    10.0.1.2
PSD 7   xt-3/3/0.1    10.0.0.1, 2121:2121::1/64
        xt-3/3/0.2    10.1.1.1
PSD 3   xt-2/3/0.1    10.0.1.1
        xt-2/3/0.2    10.1.1.2
On PSD5, configure the cross-connect interface xt-4/3/0:
interfaces {
xt-4/3/0 {
unit 1 {
peer-psd psd7;
peer-interface xt-3/3/0.1;
encapsulation frame-relay;
point-to-point;
dlci 1;
family inet {
address 10.0.0.2/32 {
destination 10.0.0.1;
}
}
family inet6 {
address 2121:2121::2/64;
}
}
unit 2 {
peer-psd psd3;
peer-interface xt-2/3/0.1;
encapsulation frame-relay;
point-to-point;
dlci 2;
family inet {
address 10.0.1.2/32 {
destination 10.0.1.1;
}
}
}
}
}
On PSD7, configure the cross-connect interface xt-3/3/0:
interfaces {
xt-3/3/0 {
unit 1 {
peer-psd psd5;
peer-interface xt-4/3/0.1;
encapsulation frame-relay;
point-to-point;
dlci 1;
family inet {
address 10.0.0.1/32 {
destination 10.0.0.2;
}
}
family inet6 {
address 2121:2121::1/64;
}
}
unit 2 {
peer-psd psd3;
peer-interface xt-2/3/0.2;
encapsulation frame-relay;
point-to-point;
dlci 2;
family inet {
address 10.1.1.1/32 {
destination 10.1.1.2;
}
}
}
}
}
On PSD3, configure the cross-connect interface xt-2/3/0:
interfaces {
xt-2/3/0 {
unit 1 {
peer-psd psd5;
peer-interface xt-4/3/0.2;
encapsulation frame-relay;
point-to-point;
dlci 1;
family inet {
address 10.0.1.1/32 {
destination 10.0.1.2;
}
}
}
unit 2 {
peer-psd psd7;
peer-interface xt-3/3/0.2;
encapsulation frame-relay;
point-to-point;
dlci 2;
family inet {
address 10.1.1.2/32 {
destination 10.1.1.1;
}
}
}
}
}
control-plane-bandwidth-percent
Description Allocate a percentage of the bandwidth that exists on the JCS switch modules and the
T Series Control Boards (T-CBs) to the specified Protected System Domain (PSD).
Allocating bandwidth prevents potential overutilization by one PSD over another.
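For example, the following fragment reserves half of the control-plane bandwidth for a PSD (the PSD name and percentage shown are illustrative):

```
chassis {
    system-domains {
        protected-system-domains psd1 {
            control-plane-bandwidth-percent 50;
        }
    }
}
```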
control-slot-numbers
Description Configure the slot numbers for the Routing Engines on the JCS1200 platform that are
part of the specified Protected System Domain (PSD).
Options slot-numbers—Slot numbers for the Routing Engines on the JCS1200 platform to be
assigned to the PSD.
Range: 1 through 12
NOTE: The slot numbers for the Routing Engines for the specified PSD must
match the REP (primary Routing Engine) and REB (backup Routing
Engine) values set through the JCS management module baydata
command. In the absence of any Junos OS CLI configuration that affects
mastership, the Routing Engine in the slot indicated by REP will boot as
the master, and the Routing Engine in slot REB will boot as the backup.
The baydata command assigns the corresponding PSD through the PSD
parameter.
control-system-id
Description Configure the identifier of the JCS chassis whose Routing Engines are assigned
to the specified Protected System Domain (PSD).
description
Description Provide a description for the specified Protected System Domain (PSD).
fpcs
Description Assign Flexible PIC Concentrators (FPCs) to a Protected System Domain (PSD).
interface-shared-with
Description Assign a logical interface under a shared physical interface to a Protected System Domain
(PSD).
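On the RSD, the statement appears under a logical unit of the shared physical interface, as in this fragment adapted from the shared-interface examples in this guide:

```
interfaces {
    so-6/0/0 {
        unit 0 {
            interface-shared-with psd5;
        }
    }
}
```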
peer-interface
Description Configure a peer interface on a Protected System Domain (PSD) for PSD-to-PSD
communication over internal tunnel PICs.
peer-psd
Description Configure a peer Protected System Domain (PSD) for inter-PSD forwarding.
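The peer-psd statement is configured together with peer-interface under a logical cross-connect (xt-) interface, as in this fragment adapted from the inter-PSD forwarding example in this guide:

```
interfaces {
    xt-4/3/0 {
        unit 1 {
            peer-psd psd7;
            peer-interface xt-3/3/0.1;
        }
    }
}
```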
protected-system-domains
root-domain-id
NOTE: This value must match the value of the SD (Root System Domain)
parameter set using the baydata command.
shared-interface
Syntax shared-interface;
system-domains
Syntax system-domains {
protected-system-domains psdn {
control-plane-bandwidth-percent percent;
control-slot-numbers [ slot-numbers ];
control-system-id control-system-id;
description description;
fpcs [ slot-numbers ];
}
root-domain-id root-domain-id;
}
Description Configure Root System Domain (RSD) and Protected System Domain (PSD) parameters.
Configuration Examples
• Example: Configuring a JCS1200 Platform and a Single T Series Router on page 123
• Example: Configuring a JCS1200 Platform and Multiple T Series Routers on page 128
• Example: Configuring Shared Interfaces (SONET) on page 136
• Example: Configuring Shared Interfaces (Ethernet) on page 147
• Example: Configuring Route Reflection—Roadmap on page 157
• Example: Configuring the JCS1200 Platform as a Route Reflector on page 157
• Example: Configuring Client-to-Client Reflection (OSPF) on page 166
• Example: Consolidating a Layer 2 VPN Network on page 177
In this configuration example, the JCS1200 platform is connected to a single T640 router.
The configuration is described in the following sections:
Requirements
This configuration example requires the following hardware and software components:
Overview
This example configures the JCS1200 platform and one connected T640 router. For this
example, you need to configure a single Root System Domain (RSD), create Protected
System Domains (PSDs), and assign Routing Engines in the JCS chassis and Flexible PIC
Concentrators (FPCs) on the T640 router to each PSD as follows:
• PSD1—Routing Engines in slots 1 and 2 on the JCS chassis and FPCs in slots 0, 1, and 2
on the T640 router
• PSD2—Routing Engines in slots 3 and 4 on the JCS chassis and the FPC in slot 3 on the
T640 router
Configuration
First, configure the Routing Engines on the JCS1200 platform using the management
module command-line interface (CLI). Then, configure the T640 router using the Junos
OS CLI.
Step-by-Step Procedure To configure the parameters required for the Routing Engines in the JCS chassis:
1. Log in to the JCS management module.
2. Assign the Routing Engines in slots 1 (primary) and 2 (backup) to RSD1 and PSD1:
3. Assign Routing Engines in slots 3 (primary) and 4 (backup) to RSD1 and PSD2:
The baydata command specifies the target as a bay blade (-b), identifies the blade
(Routing Engine) slot, and specifies the following parameters:
• V—Product version.
• SD—RSD identifier.
• PSD—PSD identifier.
system> baydata
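The full command arguments are not shown above. Based on the bay data strings listed later in this guide, the baydata command for the Routing Engine in slot 1 might take the following form; the -b and -data option syntax is an assumption and should be verified against the JCS management module documentation:

```
system> baydata -b 1 -data "V01-JCS01-SD01-PSD01-REP01-REB02-PRDT640"
```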
Step-by-Step Procedure To configure the RSD and create the PSDs on the T640 router:
1. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 1 statement to identify the RSD.
a. Include the fpcs 0 fpcs 1 fpcs 2 statement to assign the FPCs in slots 0, 1, and 2
to PSD1.
system-domains {
root-domain-id 1;
protected-system-domains {
psd1 {
description "psd for customer1";
fpcs [ 0 1 2 ];
control-system-id 1;
control-slot-numbers [ 1 2 ];
}
psd2 {
description "psd for customer2";
fpcs [ 3 ];
control-system-id 1;
control-slot-numbers [ 3 4 ];
}
}
}
Verification
Verify the status of the RSD and PSDs:
Purpose Verify that the PSDs configured under the RSD are online.
Purpose Verify that each PSD has been assigned the correct FPCs on the T640 router and the
appropriate Routing Engines on the JCS1200 platform.
Hardware inventory:
Item Version Part number Serial number Description
Chassis S19068 T1600
Midplane REV 04 710-002726 AX5666 T640 Backplane
psd1-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis 740-023156 SNJCSJCSAC00 JCS1200 AC Chassis
Routing Engine 1 REV 01 740-023157 SNBLJCSAC005 RE-JCS1200-1x2330
Routing Engine 2 REV 01 740-023158 SNBLJCSAC006 RE-JCS1200-1x2330
rsd-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis S19068 T1600
Midplane REV 04 710-002726 AX5666 T640 Backplane
CIP REV 05 710-002895 HC0474 T Series CIP
PEM 0 Rev 06 740-017906 TE27806 Power Entry Module 3x80
SCG 0 REV 04 710-003423 HF6042 T640 Sonet Clock Gen.
SCG 1 REV 11 710-003423 HW7765 T640 Sonet Clock Gen.
Routing Engine 0 REV 04 740-014082 1000660098 RE-A-2000
Routing Engine 1 REV 01 740-005022 210865700324 RE-3.0
CB 0 REV 06 710-007655 WE9377 Control Board (CB-T)
CB 1 REV 06 710-007655 WE9379 Control Board (CB-T)
FPC 3 REV 01 710-013560 JE4851 E2-FPC Type 3
CPU REV 05 710-010169 HX8637 FPC CPU-Enhanced
MMB 0 REV 04 710-010171 HX7130 MMB-5M3-288mbit
MMB 1 REV 04 710-010171 HX9460 MMB-5M3-288mbit
SPMB 0 REV 10 710-003229 WE9582 T Series Switch CPU
SPMB 1 REV 10 710-003229 WE9587 T Series Switch CPU
SIB 0 REV 05 710-013074 DB2624 SIB-I8-SF
SIB 1 REV 05 710-013074 DE7881 SIB-I8-SF
SIB 2 REV 05 710-013074 DE7889 SIB-I8-SF
SIB 3 REV 05 710-013074 DE9972 SIB-I8-SF
SIB 4 REV 05 710-013074 DE7937 SIB-I8-SF
Fan Tray 0 Front Top Fan Tray
Fan Tray 1 Front Bottom Fan Tray
Fan Tray 2 Rear Fan Tray
psd2-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis 740-023156 SNJCSJCSAC00 JCS1200 AC Chassis
Routing Engine 3 REV 01 740-023160 SNBLJCSAC007 RE-JCS1200-1x2330
Routing Engine 4 REV 01 740-023161 SNBLJCSAC008 RE-JCS1200-1x2330
Meaning On PSD1, under the rsd-re0 heading, you can see that PSD1 owns the FPCs in slots 0, 1,
and 2 in the T640 chassis. Under the psd1-re0 output field, the output indicates that the
Routing Engines in slots 1 and 2 in the JCS chassis are assigned to PSD1.
On PSD2, under the rsd-re0 heading, you can see that PSD2 owns the FPC in slot 3 in the
T640 chassis. Under the psd2-re0 output field, the output indicates that the Routing
Engines in slots 3 and 4 in the JCS chassis are assigned to PSD2.
Requirements
This configuration example requires the following hardware and software components:
Overview
This example configures the JCS1200 platform and three connected T Series routers.
For this example, you need to configure a Root System Domain (RSD) for each connected
T Series router. Within each RSD, create Protected System Domains (PSDs) and assign
Flexible PIC Concentrators (FPCs) and Routing Engines to each PSD.
Configuration
First, configure the Routing Engines on the JCS1200 platform using the management
module CLI. Then, configure each T Series router using the Junos OS CLI.
Step-by-Step Procedure To configure the parameters required for the Routing Engines in the JCS chassis:
1. Log in to the JCS management module.
2. Refer to the data presented in Table 17 on page 129 for Routing Engine assignments.
RSD   PSD   Primary RE Slot   Backup RE Slot   Router
1     1     01                02               T320
1     2     03                04               T320
2     3     05                06               T640
2     4     07                08               T640
3     5     09                10               T1600
3     6     11                12               T1600
3. Assign the Routing Engines in slots 1 (primary) and 2 (backup) to RSD1 and PSD1,
which are associated with the T320 router. Assign the Routing Engines in slots 3
(primary) and 4 (backup) to RSD1 and PSD2, which also belong to the T320 router.
4. Assign the Routing Engines in slots 5 (primary) and 6 (backup) to RSD2 and PSD3,
which are associated with the T640 router. Assign the Routing Engines in slots 7
(primary) and 8 (backup) to RSD2 and PSD4, which also belong to the T640 router.
5. Assign the Routing Engines in slots 9 (primary) and 10 (backup) to RSD3 and PSD5,
which are associated with the T1600 router. Assign the Routing Engines in slots 11
(primary) and 12 (backup) to RSD3 and PSD6, which also belong to the T1600
router.
system> baydata
1 Supported V01–JCS01–SD01–PSD01–REP01–REB02–PRDT320
2 Supported V01–JCS01–SD01–PSD01–REP01–REB02–PRDT320
3 Supported V01–JCS01–SD01–PSD02–REP03–REB04–PRDT320
4 Supported V01–JCS01–SD01–PSD02–REP03–REB04–PRDT320
5 Supported V01–JCS01–SD02–PSD03–REP05–REB06–PRDT640
6 Supported V01–JCS01–SD02–PSD03–REP05–REB06–PRDT640
7 Supported V01–JCS01–SD02–PSD04–REP07–REB08–PRDT640
8 Supported V01–JCS01–SD02–PSD04–REP07–REB08–PRDT640
9 Supported V01–JCS01–SD03–PSD05–REP09–REB10–PRDT1600
10 Supported V01–JCS01–SD03–PSD05–REP09–REB10–PRDT1600
11 Supported V01–JCS01–SD03–PSD06–REP11–REB12–PRDT1600
12 Supported V01–JCS01–SD03–PSD06–REP11–REB12–PRDT1600
Step-by-Step Procedure To configure the RSD and create the PSDs on the T320 router:
1. Log in to the T320 router.
PSD   FPC Slots         Routing Engine Slots
1     0, 1, 2, and 3    1 and 2
2     4, 5, 6, and 7    3 and 4
3. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 1 statement to identify the RSD.
a. Include the fpcs 0 fpcs 1 fpcs 2 fpcs 3 statement to assign the FPCs in slots 0, 1,
2, and 3 to PSD1.
a. Include the fpcs 4 fpcs 5 fpcs 6 fpcs 7 statement to assign the FPCs in slots 4, 5,
6, and 7 to PSD2.
system-domains {
root-domain-id 1;
protected-system-domains {
psd1 {
description "psd for customer1";
fpcs [ 0 1 2 3];
control-system-id 1;
control-slot-numbers [ 1 2 ];
}
psd2 {
description "psd for customer2";
fpcs [ 4 5 6 7];
control-system-id 1;
control-slot-numbers [ 3 4 ];
}
}
}
Step-by-Step Procedure To configure the RSD and create the PSDs on the T640 router:
1. Log in to the T640 router.
PSD   FPC Slots         Routing Engine Slots
3     0, 1, 2, and 3    5 and 6
4     4, 5, 6, and 7    7 and 8
3. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 2 statement to identify the RSD.
a. Include the fpcs 0 fpcs 1 fpcs 2 fpcs 3 statement to assign the FPCs in slots 0, 1,
2, and 3 to PSD3.
a. Include the fpcs 4 fpcs 5 fpcs 6 fpcs 7 statement to assign the FPCs in slots 4, 5,
6, and 7 to PSD4.
system-domains {
root-domain-id 2;
protected-system-domains {
psd3 {
description "psd for customer3";
fpcs [ 0 1 2 3];
control-system-id 1;
control-slot-numbers [ 5 6 ];
}
psd4 {
description "psd for customer4";
fpcs [ 4 5 6 7];
control-system-id 1;
control-slot-numbers [ 7 8 ];
}
}
}
Step-by-Step Procedure To configure the RSD and create the PSDs on the T1600 router:
1. Log in to the T1600 router.
PSD   FPC Slots         Routing Engine Slots
5     0, 1, 2, and 3    9 and 10
6     4, 5, 6, and 7    11 and 12
3. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 3 statement to identify the RSD.
a. Include the fpcs 0 fpcs 1 fpcs 2 fpcs 3 statement to assign the FPCs in slots 0, 1,
2, and 3 to PSD5.
a. Include the fpcs 4 fpcs 5 fpcs 6 fpcs 7 statement to assign the FPCs in slots 4, 5,
6, and 7 to PSD6.
system-domains {
root-domain-id 3;
protected-system-domains {
psd5 {
description "psd for customer5";
fpcs [ 0 1 2 3];
control-system-id 1;
control-slot-numbers [ 9 10 ];
}
psd6 {
description "psd for customer6";
fpcs [ 4 5 6 7];
control-system-id 1;
control-slot-numbers [ 11 12 ];
}
}
}
Verification
• Verifying Configured PSDs on page 134
• Verifying PSD Ownership of FPCs on page 135
• Verifying PSD Ownership of Routing Engines on page 135
Purpose Verify that the PSDs configured under each RSD are online.
Meaning RSD1 owns PSD1 and PSD2. RSD2 owns PSD3 and PSD4. RSD3 owns PSD5 and PSD6.
All PSDs are online.
Purpose Verify that each PSD is assigned the correct FPCs on the T Series router.
Action For each PSD, issue the show chassis fpc command. For example:
rsd-re0:
--------------------------------------------------------------------------
Temp CPU Utilization (%) Memory Utilization (%)
Slot State (C) Total Interrupt DRAM (MB) Heap Buffer
0 Online 34 3 1 256 12 50
1 Online 52 4 0 2048 3 24
2 Online 34 3 1 256 12 50
3 Online 52 4 0 2048 3 24
Meaning In this example, PSD1 owns the FPCs in slots 0, 1, 2, and 3 on the T Series router.
Purpose Verify that each PSD owns the correct Routing Engines on the JCS chassis.
Action On each PSD, issue the show chassis routing-engine command. For example:
CPU utilization:
User 0 percent
Background 0 percent
Kernel 0 percent
Interrupt 0 percent
Idle 100 percent
Model RE-JCS1200-1x2330
Serial ID SNBLBCD001
Start time 2008-09-03 13:49:00 PDT
Uptime 27 days, 2 hours, 50 minutes, 9 seconds
Last reboot reason 0x49:power cycle/failure power-button hard
power off thermal shutdown
Routing Engine status:
Slot 1:
Physical Slot 4
Current state Backup
Election priority Backup (default)
DRAM 13312 MB
Memory utilization 10 percent
CPU utilization:
User 0 percent
Background 0 percent
Kernel 0 percent
Interrupt 0 percent
Idle 100 percent
Model RE-JCS1200-1x2330
Serial ID SNBLBCD002
Start time 2008-09-24 17:04:01 PDT
Uptime 5 days, 23 hours, 35 minutes, 18 seconds
Last reboot reason 0x49:power cycle/failure power-button hard
power off thermal shutdown
Load averages: 1 minute 5 minute 15 minute
0.00 0.00 0.00
Meaning In this example, PSD2 owns the Routing Engines in slots 3 and 4 on the JCS chassis as
indicated by the values in the Physical Slot fields. The Routing Engine in slot 3 is the
master, whereas the Routing Engine in slot 4 is the backup.
In this configuration example, two Protected System Domains (PSDs) share a single
interface on a Flexible PIC Concentrator (FPC) that is owned by the Root System Domain
(RSD).
Requirements
This configuration example requires the following hardware and software components:
• Two Tunnel PICs—one installed on the FPC in slot 1 and the other installed on the FPC
in slot 7
Overview
With this example configuration, PSD5 and PSD6 can both transport packets using a
single SONET PIC owned by RSD3.
As illustrated in Figure 13 on page 137, RSD3 owns the physical interface so-6/0/0. PSD5
owns logical interfaces so-6/0/0.0, so-6/0/0.1, and so-6/0/0.2. A cross-connect using
tunnel interface ut-1/0/0 transports packets between the logical interfaces configured
on the PSD and the physical SONET interface on RSD3. Similarly, PSD6 owns logical
interface so-6/0/0.3 and uses ut-7/0/0 to transport packets between so-6/0/0.3 and
the physical interface on RSD3.
Configuration
First, configure the Routing Engines on the JCS1200 platform using the management
module command-line interface (CLI). Then, configure each T Series router using the
Junos OS CLI.
JCS1200 Configuration
Step-by-Step Procedure To configure the parameters required for the Routing Engines in the JCS chassis:
1. Log in to the JCS management module.
2. Assign the Routing Engines in slots 5 (primary) and 6 (backup) to RSD3 and PSD5.
Assign the Routing Engine in slot 7 to RSD3 and PSD6.
system> baydata
RSD Configuration
2. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 3 statement to identify the RSD.
a. Include the fpcs 1 fpcs 2 fpcs 3 statement to assign the FPCs in slots 1, 2, and 3
to PSD5.
a. Include the fpcs 4 fpcs 5 fpcs 7 statement to assign the FPCs in slots 4, 5, and
7 to PSD6.
7. At the [edit interfaces] hierarchy level, include the so-6/0/0 statement to configure
the physical SONET interface.
9. At the [edit interfaces so-6/0/0 unit n] hierarchy level, include the following
statements:
system-domains {
root-domain-id 3;
protected-system-domains {
psd5 {
description customerA;
fpcs [ 1 2 3 ];
control-system-id 1;
control-slot-numbers [ 5 6 ];
control-plane-bandwidth-percent 50;
}
psd6 {
description customerB;
fpcs [ 4 5 7 ];
control-system-id 1;
control-slot-numbers 7;
control-plane-bandwidth-percent 50;
}
}
}
interfaces {
so-6/0/0 {
no-keepalives;
encapsulation frame-relay;
unit 0 {
interface-shared-with psd5;
dlci 16;
}
unit 1 {
interface-shared-with psd5;
dlci 17;
}
unit 2 {
interface-shared-with psd5;
dlci 18;
}
unit 3 {
interface-shared-with psd6;
dlci 100;
}
}
}
PSD5 Configuration
2. At the [edit interfaces ut-1/0/0] hierarchy level, include the unit 0, unit 1, and unit 2
statements to configure the logical tunnel interfaces.
3. At the [edit interfaces ut-1/0/0 unit n] hierarchy level, include the peer-interface
so-6/0/0.logical-unit-number statement to bind the tunnel and SONET interfaces
together. Use so-6/0/0.0 for unit 0, so-6/0/0.1 for unit 1, and so-6/0/0.2 for unit 2.
5. At the [edit interfaces so-6/0/0] hierarchy level, include unit 0, unit 1, and unit 2
statements to configure logical interfaces.
6. At the [edit interfaces so-6/0/0 unit n] hierarchy level, include the following
statements:
• dlci dlci—Configure the DLCI for the point-to-point Frame Relay connection. Use
dlci 16 for unit 0, dlci 17 for unit 1, and dlci 18 for unit 2.
• family inet address address—Configure the IP version 4 (IPv4) suite protocol family
on the logical SONET interface. Use address 10.70.0.1/30 for unit 0, 17.17.17.1/30 for
unit 1, and 18.18.18.1/30 for unit 2.
interfaces {
ut-1/0/0 {
unit 0 {
peer-interface so-6/0/0.0;
}
unit 1 {
peer-interface so-6/0/0.1;
}
unit 2 {
peer-interface so-6/0/0.2;
}
}
so-6/0/0 {
encapsulation frame-relay;
shared-interface;
unit 0 {
peer-interface ut-1/0/0.0;
dlci 16;
family inet {
address 10.70.0.1/30;
}
}
unit 1 {
peer-interface ut-1/0/0.1;
dlci 17;
family inet {
address 17.17.17.1/30;
}
}
unit 2 {
peer-interface ut-1/0/0.2;
dlci 18;
family inet {
address 18.18.18.1/30;
}
}
}
}
PSD6 Configuration
2. At the [edit interfaces ut-7/0/0] hierarchy level, include the unit 0 statement to
configure the logical tunnel interface.
3. At the [edit interfaces ut-7/0/0 unit 0] hierarchy level, include the peer-interface
so-6/0/0.logical-unit-number statement to bind the tunnel and the SONET interfaces
together.
5. At the [edit interfaces so-6/0/0] hierarchy level, include the unit 3 statement to
configure the logical interface.
6. At the [edit interfaces so-6/0/0 unit 3] hierarchy level, include the following
statements:
• dlci 100—Configure the DLCI for the point-to-point Frame Relay connection.
interfaces {
ut-7/0/0 {
unit 0 {
peer-interface so-6/0/0.3;
}
}
so-6/0/0 {
encapsulation frame-relay;
shared-interface;
unit 3 {
peer-interface ut-7/0/0.0;
dlci 100;
family inet {
address 10.10.10.1/24;
}
}
}
}
Verification
• Verifying Shared Interfaces on RSD3 on page 143
• Verifying Shared Interfaces on PSD5 on page 145
• Verifying Shared Interfaces on PSD6 on page 146
Enquiries received : 0
Full enquiries received : 0
Enquiry responses sent : 0
Full enquiry responses sent : 0
Common statistics:
Unknown messages received : 0
Asynchronous updates received : 0
Out-of-sequence packets received : 0
Keepalive responses timedout : 0
CoS queues : 8 supported, 8 maximum usable queues
Last flapped : 2008-08-11 10:51:51 PDT (1w1d 04:47 ago)
Input rate : 0 bps (0 pps)
Output rate : 0 bps (0 pps)
SONET alarms : LOL, PLL
SONET defects : LOL, PLL, LOF, SEF, AIS-L, AIS-P
Meaning Under the Physical interface section of the output, the Shared-interface field displays the
value Owner, meaning that the RSD owns the physical shared interface so-6/0/0. In the
Shared interface fields for each logical interface, you see that so-6/0/0.0, so-6/0/0.1,
and so-6/0/0.2 are shared with PSD5, whereas logical interface so-6/0/0.3 is shared
with PSD6.
Meaning Under the Physical interfaces section of the output, the Shared-interface field displays a
value of Non-owner, indicating that the shared physical interface so-6/0/0 is not owned
by PSD5. The Shared interface field for each logical interface provides the name of its
peer uplink tunnel (ut-) interface. For example, for so-6/0/0.0, the peer interface is
ut-1/0/0.0.
Meaning Under the Physical interfaces section of the output, the Shared-interface field displays a
value of Non-owner, indicating that the shared physical interface so-6/0/0 is not owned
by PSD6. The Shared interface field for so-6/0/0.3 indicates that its peer interface is
ut-7/0/0.0.
In this configuration example, two Protected System Domains (PSDs) share a single
interface on a Flexible PIC Concentrator (FPC) that is owned by the Root System Domain
(RSD).
Requirements
This configuration example requires the following hardware and software components:
• Two Tunnel PICs—one installed on the FPC in slot 1 and the other installed on the FPC
in slot 7
Overview
With this example configuration, PSD5 and PSD6 can both transport packets using a
single Gigabit Ethernet PIC owned by RSD3.
As illustrated in Figure 14 on page 148, RSD3 owns the physical interface ge-6/0/0. PSD5
owns logical interfaces ge-6/0/0.0, ge-6/0/0.1, and ge-6/0/0.2. A cross-connect using
tunnel interface ut-1/0/0 transports packets between the logical interfaces configured
on the PSD and the physical Gigabit Ethernet interface on RSD3. Similarly, PSD6 owns
logical interface ge-6/0/0.3 and uses ut-7/0/0 to transport packets between ge-6/0/0.3
and the physical interface on RSD3.
Configuration
First, configure the Routing Engines on the JCS1200 platform using the management
module command-line interface (CLI). Then, configure each T Series router using the
Junos OS CLI.
JCS1200 Configuration
Step-by-Step Procedure To configure the parameters required for the Routing Engines in the JCS chassis:
1. Log in to the JCS management module.
2. Assign the Routing Engines in slots 5 (primary) and 6 (backup) to RSD3 and PSD5.
Assign the Routing Engine in slot 7 to RSD3 and PSD6.
system> baydata
RSD Configuration
2. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 3 statement to identify the RSD.
a. Include the fpcs 1 fpcs 2 fpcs 3 statement to assign the FPCs in slots 1, 2, and 3
to PSD5.
a. Include the fpcs 4 fpcs 5 fpcs 7 statement to assign the FPCs in slots 4, 5, and
7 to PSD6.
7. At the [edit interfaces] hierarchy level, include the ge-6/0/0 statement to configure
the physical Gigabit Ethernet interface.
• Include the unit 0, unit 1, unit 2, and unit 3 statements to configure the logical
interfaces.
9. At the [edit interfaces ge-6/0/0 unit n] hierarchy level, include the following
statements:
• vlan-id vlan-id—Configure the virtual LAN (VLAN) identifier to bind the 802.1Q VLAN
tag ID to the logical interface:
system-domains {
root-domain-id 3;
protected-system-domains {
psd5 {
description customerA;
fpcs [ 1 2 3 ];
control-system-id 1;
control-slot-numbers [ 5 6 ];
control-plane-bandwidth-percent 50;
}
psd6 {
description customerB;
fpcs [ 4 5 7 ];
control-system-id 1;
control-slot-numbers 7;
control-plane-bandwidth-percent 50;
}
}
}
interfaces {
ge-6/0/0 {
vlan-tagging;
unit 0 {
interface-shared-with psd5;
vlan-id 16;
}
unit 1 {
interface-shared-with psd5;
vlan-id 17;
}
unit 2 {
interface-shared-with psd5;
vlan-id 18;
}
unit 3 {
interface-shared-with psd6;
vlan-id 100;
}
}
}
PSD5 Configuration
2. At the [edit interfaces ut-1/0/0] hierarchy level, include the unit 0, unit 1, and unit 2
statements to configure the logical tunnel interfaces.
3. At the [edit interfaces ut-1/0/0 unit n] hierarchy level, include the peer-interface
ge-6/0/0.logical-unit-number statement to bind the tunnel and Gigabit Ethernet
interfaces together. Use the following logical-unit-number values:
4. At the [edit interfaces ge-6/0/0] hierarchy level, include the vlan-tagging statement
to match the configuration on the RSD, and the shared-interface statement to
identify the physical interface as the shared interface.
5. At the [edit interfaces ge-6/0/0] hierarchy level, include unit 0, unit 1, and unit 2
statements to configure logical interfaces.
6. At the [edit interfaces ge-6/0/0 unit n] hierarchy level, include the following
statements:
• vlan-id vlan-id—Bind the 802.1Q VLAN tag ID to the logical interface. Use the following
vlan-id values:
• family inet address address—Configure the IP version 4 (IPv4) suite protocol family
on the logical Gigabit Ethernet interface. Use the following address values:
interfaces {
ut-1/0/0 {
unit 0 {
peer-interface ge-6/0/0.0;
}
unit 1 {
peer-interface ge-6/0/0.1;
}
unit 2 {
peer-interface ge-6/0/0.2;
}
}
ge-6/0/0 {
vlan-tagging;
shared-interface;
unit 0 {
peer-interface ut-1/0/0.0;
vlan-id 16;
family inet {
address 10.70.0.1/30;
}
}
unit 1 {
peer-interface ut-1/0/0.1;
vlan-id 17;
family inet {
address 17.17.17.1/30;
}
}
unit 2 {
peer-interface ut-1/0/0.2;
vlan-id 18;
family inet {
address 18.18.18.1/30;
}
}
}
}
PSD6 Configuration
2. At the [edit interfaces ut-7/0/0] hierarchy level, include the unit 0 statement to
configure the logical tunnel interface.
3. At the [edit interfaces ut-7/0/0 unit 0] hierarchy level, include the peer-interface
ge-6/0/0.logical-unit-number statement to bind the tunnel and the Gigabit Ethernet
interfaces together.
4. At the [edit interfaces ge-6/0/0] hierarchy level, include the vlan-tagging statement
to match the configuration on the RSD, and the shared-interface statement to
identify the Gigabit Ethernet interface as the shared physical interface.
5. At the [edit interfaces ge-6/0/0] hierarchy level, include the unit 3 statement to
configure the logical interface.
6. At the [edit interfaces ge-6/0/0 unit 3] hierarchy level, include the following
statements:
interfaces {
ut-7/0/0 {
unit 0 {
peer-interface ge-6/0/0.3;
}
}
ge-6/0/0 {
vlan-tagging;
shared-interface;
unit 3 {
peer-interface ut-7/0/0.0;
vlan-id 100;
family inet {
address 10.10.10.1/24;
}
}
}
}
Verification
• Verifying Shared Interfaces on RSD3 on page 154
• Verifying Shared Interfaces on PSD5 on page 155
• Verifying Shared Interfaces on PSD6 on page 156
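The Meaning sections that follow interpret output from the Junos OS show interfaces command, run once in each domain. Assuming the hostnames below are placeholders, the verification commands look like this:

```
user@rsd3> show interfaces ge-6/0/0
user@psd5> show interfaces ge-6/0/0
user@psd6> show interfaces ge-6/0/0
```

On the RSD, the Shared-interface field reports Owner; on each PSD it reports Non-owner, and each logical interface lists its peer ut- interface.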
Meaning Under the Physical interface section of the output, the Shared-interface field displays the
value Owner, meaning that the RSD owns the physical shared interface ge-6/0/0. In the
Shared interface fields for each logical interface, you see that ge-6/0/0.0, ge-6/0/0.1,
and ge-6/0/0.2 are shared with PSD5, whereas logical interface ge-6/0/0.3 is shared
with PSD6.
Meaning Under the Physical interfaces section of the output, the Shared-interface field displays a
value of Non-owner, indicating that the shared physical interface ge-6/0/0 is not owned
by PSD5. The Shared interface field for each logical interface provides the name of its
peer uplink tunnel (ut-) interface. For example, for ge-6/0/0.0, the peer interface is
ut-1/0/0.0.
Encapsulation: ENET2
Shared-interface: Non-owner
Peer interface: ut-7/0/0.3
Tunnel token: Rx: 14.538
Input packets : 13
Output packets: 7774
Output Filters: filter-safari
Protocol inet, MTU: 1500
Addresses, Flags: Dest-route-down Is-Preferred Is-Primary
Destination: 173.16.254.0/30, Local: 173.16.254.1, Broadcast: 173.16.254.3
Meaning Under the Physical interfaces section of the output, the Shared-interface field displays a
value of Non-owner, indicating that the shared physical interface ge-6/0/0 is not owned
by PSD6. The Shared interface field for ge-6/0/0.3 indicates that its peer interface is
ut-7/0/0.3.
This section includes examples of how to configure the JCS1200 platform as a standalone
route reflector.
Related Documentation
• Example: Configuring the JCS1200 Platform as a Route Reflector on page 157
• Example: Configuring Client-to-Client Reflection (OSPF) on page 166
In this configuration example, a T640 router and four Routing Engines on the JCS1200
platform are configured for route reflection.
Requirements
This configuration example requires the following hardware and software components:
[Figure 15 (g017296): Route reflection topology. RR connects to R2 (RR fxp1.1 10.12.100.14, R2 ge-2/0/1 10.12.100.13) and to R3 (RR fxp0.1 10.12.100.18, R3 ge-0/1/0 10.12.100.17). R1 connects to R2 (R1 ge-1/1/1 10.12.100.6, R2 ge-2/0/2 10.12.100.5) and to R3 (R1 ge-1/1/0 10.12.100.10, R3 ge-0/2/3 10.12.100.9). R2 connects to R3 (R2 ge-2/0/3 10.12.100.1, R3 ge-0/1/7 10.12.100.2).]
Figure 15 on page 158 shows a T640 router (T-route) and four Routing Engines (RR, R1,
R2, and R3) on the JCS1200 chassis (bcg) configured for route reflection.
Each router is configured as a separate PSD. Each PSD has an associated Routing Engine
assigned on the JCS chassis and an associated FPC assigned on the T640 router. Table
21 on page 158 provides the chassis parameters required for the JCS1200 platform and
for the T640 router for each PSD.
Configuration
The configuration of route reflection is described in the following sections:
JCS1200 Configuration
Step-by-Step Procedure
To configure the parameters required for the Routing Engines in the JCS chassis:
1. Log in to the JCS management module CLI.
2. Configure the Routing Engine that is part of PSD15. This Routing Engine is located
in slot 1 of the JCS chassis and acts as the route reflector.
The baydata command specifies the target as a bay blade (-b), identifies the Routing
Engine (blade) in slot 01, and specifies the following parameters:
• REB00—Slot in which the backup Routing Engine resides. 00 indicates that there
is no backup Routing Engine.
3. Configure the Routing Engine that is part of PSD11. This Routing Engine is located in
slot 2 of the JCS chassis and acts as a standalone router (not a route reflector).
The baydata command specifies the target as a bay blade (-b), identifies the blade
(Routing Engine) in slot 02, and specifies the following parameters:
• SD1—RSD identifier, which is 1. This router is in SD1; the route reflector is in SD2.
• REB00—Slot in which the backup Routing Engine resides. 00 indicates that there
is no backup Routing Engine.
4. Configure the Routing Engine that is part of PSD12. This Routing Engine is located
in slot 3 of the JCS chassis and acts as a standalone router (not a route reflector).
The baydata command specifies the target as a bay blade (-b), identifies the blade
(Routing Engine) in slot 03, and specifies the following parameters:
• SD1—RSD identifier, which is 1. This router is in SD1; the route reflector is in SD2.
• REB00—Slot in which the backup Routing Engine resides. 00 indicates that there
is no backup Routing Engine.
5. Configure the Routing Engine that is part of PSD13. This Routing Engine is located
in slot 4 of the JCS chassis and acts as a standalone router (not a route reflector).
The baydata command specifies the target as a bay blade (-b), identifies the blade
(Routing Engine) in slot 04, and specifies the following parameters:
• SD1—RSD identifier, which is 1. This router is in SD1; the route reflector is in SD2.
• REB00—Slot in which the backup Routing Engine resides. 00 indicates that there
is no backup Routing Engine.
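A sketch of the four assignments: the -b and -data option syntax is an assumption, but the data strings match the baydata listing that follows:

```
system> baydata -b 1 -data "V01-JCS1-SD2-PSD15-REP01-REB00-PRDSCE"
OK
system> baydata -b 2 -data "V01-JCS1-SD1-PSD11-REP02-REB00-PRDT640"
OK
system> baydata -b 3 -data "V01-JCS1-SD1-PSD12-REP03-REB00-PRDT640"
OK
system> baydata -b 4 -data "V01-JCS1-SD1-PSD13-REP04-REB00-PRDT640"
OK
```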
system> baydata
Bay Status Definition
1 Supported V01-JCS1-SD2-PSD15-REP01-REB00-PRDSCE
2 Supported V01-JCS1-SD1-PSD11-REP02-REB00-PRDT640
3 Supported V01-JCS1-SD1-PSD12-REP03-REB00-PRDT640
4 Supported V01-JCS1-SD1-PSD13-REP04-REB00-PRDT640
5 No blade present
6 No blade present
7 No blade present
8 No blade present
9 No blade present
10 No blade present
11 No blade present
12 No blade present
RSD Configuration
2. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 1 statement to identify the RSD associated with Routers 1, 2,
and 3.
a. Include the description “bcgcpu2 SWLab R1” statement to describe the PSD.
a. Include the description “bcgcpu3 SWLab R2” statement to describe the PSD.
a. Include the description “bcgcpu4 SWLab R3” statement to describe the PSD.
chassis {
system-domains {
root-domain-id 1;
protected-system-domains {
psd11 {
description "bcgcpu2 SWLab R1";
fpcs 1;
control-system-id 1;
control-slot-numbers 2;
}
psd12 {
description "bcgcpu3 SWLab R2";
fpcs 2;
control-system-id 1;
control-slot-numbers 3;
}
psd13 {
description "bcgcpu4 SWLab R3";
fpcs 0;
control-system-id 1;
control-slot-numbers 4;
}
}
}
}
The RSD configuration on the T640 router does not include a root-domain-id 2 statement
for domain 2 (the route reflector domain) or any protected-system-domains PSD15
statements for the route reflector PSD. This is because the route reflector is self-contained
within the JCS chassis and does not require configuration on the T640 router.
2. At the [edit interfaces fxp0] hierarchy level, include the unit 1 statement to configure
the logical interface.
3. At the [edit interfaces fxp0 unit 1] hierarchy level, include the family inet address
10.12.100.18/30 statement to configure the IP version 4 (IPv4) suite protocol family.
4. At the [edit interfaces] hierarchy level, include the fxp1 statement to configure the
internal Ethernet interface.
5. At the [edit interfaces fxp1] hierarchy level, include the unit 1 statement to configure
the logical interface.
6. At the [edit interfaces fxp1 unit 1] hierarchy level, include the family inet address
10.12.100.14/30 statement to configure the IPv4 suite protocol family.
interfaces {
fxp0 {
unit 1 {
family inet {
address 10.12.100.18/30;
}
}
}
fxp1 {
unit 1 {
family inet {
address 10.12.100.14/30;
}
}
}
}
2. At the [edit interfaces ge-1/1/0] hierarchy level, include the unit 0 statement to
configure the logical interface.
3. At the [edit interfaces ge-1/1/0 unit 0] hierarchy level, include the family inet address
10.12.100.10/30 statement to configure the IPv4 suite protocol family.
4. At the [edit interfaces] hierarchy level, include the ge-1/1/1 statement to configure
the Gigabit Ethernet interface.
5. At the [edit interfaces ge-1/1/1] hierarchy level, include the unit 0 statement to
configure the logical interface.
6. At the [edit interfaces ge-1/1/1 unit 0] hierarchy level, include the family inet address
10.12.100.6/30 statement to configure the IPv4 suite protocol family.
interfaces {
ge-1/1/0 {
unit 0 {
family inet {
address 10.12.100.10/30;
}
}
}
ge-1/1/1 {
unit 0 {
family inet {
address 10.12.100.6/30;
}
}
}
}
2. At the [edit interfaces ge-2/0/1] hierarchy level, include the unit 0 statement to
configure the logical interface.
3. At the [edit interfaces ge-2/0/1 unit 0] hierarchy level, include the family inet address
10.12.100.13/30 statement to configure the IPv4 suite protocol family.
4. At the [edit interfaces] hierarchy level, include the ge-2/0/2 statement to configure
the Gigabit Ethernet interface.
5. At the [edit interfaces ge-2/0/2] hierarchy level, include the unit 0 statement to
configure the logical interface.
6. At the [edit interfaces ge-2/0/2 unit 0] hierarchy level, include the family inet address
10.12.100.5/30 statement to configure the IPv4 suite protocol family.
7. At the [edit interfaces] hierarchy level, include the ge-2/0/3 statement to configure
the Gigabit Ethernet interface.
8. At the [edit interfaces ge-2/0/3] hierarchy level, include the unit 0 statement to
configure the logical interface.
9. At the [edit interfaces ge-2/0/3 unit 0] hierarchy level, include the family inet address
10.12.100.1/30 statement to configure the IPv4 suite protocol family.
interfaces {
ge-2/0/1 {
unit 0 {
family inet {
address 10.12.100.13/30;
}
}
}
ge-2/0/2 {
unit 0 {
family inet {
address 10.12.100.5/30;
}
}
}
ge-2/0/3 {
unit 0 {
family inet {
address 10.12.100.1/30;
}
}
}
}
2. At the [edit interfaces ge-0/1/7] hierarchy level, include the unit 0 statement to
configure the logical interface.
3. At the [edit interfaces ge-0/1/7 unit 0] hierarchy level, include the family inet address
10.12.100.2/30 statement to configure the IPv4 suite protocol family.
4. At the [edit interfaces] hierarchy level, include the ge-0/2/3 statement to configure
the Gigabit Ethernet interface.
5. At the [edit interfaces ge-0/2/3] hierarchy level, include the unit 0 statement to
configure the logical interface.
6. At the [edit interfaces ge-0/2/3 unit 0] hierarchy level, include the family inet address
10.12.100.9/30 statement to configure the IPv4 suite protocol family.
7. At the [edit interfaces] hierarchy level, include the ge-0/1/0 statement to configure
the Gigabit Ethernet interface.
8. At the [edit interfaces ge-0/1/0] hierarchy level, include the unit 0 statement to
configure the logical interface.
9. At the [edit interfaces ge-0/1/0 unit 0] hierarchy level, include the family inet address
10.12.100.17/30 statement to configure the IPv4 suite protocol family.
interfaces {
ge-0/1/7 {
unit 0 {
family inet {
address 10.12.100.2/30;
}
}
}
ge-0/2/3 {
unit 0 {
family inet {
address 10.12.100.9/30;
}
}
}
ge-0/1/0 {
unit 0 {
family inet {
address 10.12.100.17/30;
}
}
}
}
Requirements
This example requires the following hardware and software components:
[Figure 16 (g017297): Client-to-client reflection topology. The route reflector RR connects to client routers R1 (10.12.1.2), R2 (10.12.1.3), and R3 (10.12.1.4).]
The example configuration shown in Figure 16 on page 167 contains one route reflector
(RR) and three client routers (R1, R2, and R3). The three routers (R1 through R3) and the
route reflector (RR) are configured as PSDs that include Routing Engines on the JCS
chassis.
• RR—10.12.1.1
• R1—10.12.1.2
• R2—10.12.1.3
• R3—10.12.1.4
With this configuration example, a route added to client router R1 is propagated to the
route reflector (RR) and to the other client routers (R2, R3). This example uses OSPF as
the IGP and enables BFD for the connections from the route reflector.
Configuration
First, configure protocols for the route reflector (RR), then configure protocols for the
routers (R1, R2, and R3).
2. At the [edit protocols] hierarchy level, include the bgp statement to enable BGP on
the router.
3. At the [edit protocols bgp] hierarchy level, include the group int statement to define
the routing group.
A BGP system must know which routers are its peers (neighbors). You define the
peer relationship explicitly by configuring the neighboring routers that are the peers
of the local BGP system. After peer relationships have been established, the BGP
peers exchange update messages to advertise network reachability information.
b. Include the local-address 10.12.1.1 statement to specify the address of the local
end of a BGP session. This address is used to accept incoming connections to
the peer and to establish connections to the remote peer.
c. Include the cluster 1.2.3.4 statement to specify the cluster identifier (an IPv4 address)
to be used by the route reflector cluster in the internal BGP group.
d. Include the neighbor 10.12.1.2, neighbor 10.12.1.3, and neighbor 10.12.1.4 statements
to specify which routers (Router 1, Router 2, and Router 3) are peers (neighbors)
of the route reflector.
5. At the [edit protocols] hierarchy level, include the ospf statement to enable OSPF
on the router.
6. At the [edit protocols ospf] hierarchy level, include the overload statement to prevent
other routers from attempting to route data traffic through the route reflector. This
option is set in the route reflector (RR), but not in Router 1, 2, and 3.
7. At the [edit protocols ospf] hierarchy level, include the area 0.0.0.0 statement to
specify the area identifier for this router to use when participating in OSPF routing.
All routers in the area must use the same area identifier to establish adjacencies.
8. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface fxp1.1
statement to configure the internal Ethernet interface in the backbone area.
9. At the [edit protocols ospf area 0.0.0.0 interface fxp1.1] hierarchy level, include the
bfd-liveness-detection statement to specify bidirectional failure detection timers.
10. At the [edit protocols ospf area 0.0.0.0 interface fxp1.1 bfd-liveness-detection] hierarchy level,
include the minimum-interval 333 statement to specify 333 milliseconds as the
minimum interval at which the local router transmits a hello packet and then expects
to receive a reply from its BFD neighbor.
11. Repeat Steps 8 through 10 for the fxp0.1 internal Ethernet interface:
a. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
fxp0.1 statement to configure the internal Ethernet interface in the backbone
area.
b. At the [edit protocols ospf area 0.0.0.0 interface fxp0.1] hierarchy level, include the
bfd-liveness-detection statement to specify bidirectional failure detection timers.
routing-options {
autonomous-system 2;
}
protocols {
bgp {
group int {
type internal;
local-address 10.12.1.1;
cluster 1.2.3.4;
neighbor 10.12.1.2;
neighbor 10.12.1.3;
neighbor 10.12.1.4;
}
}
ospf {
overload;
area 0.0.0.0 {
interface fxp1.1 {
bfd-liveness-detection {
minimum-interval 333;
}
}
interface fxp0.1 {
bfd-liveness-detection {
minimum-interval 333;
}
}
}
}
}
2. At the [edit protocols] hierarchy level, include the bgp statement to enable BGP on
the router.
3. At the [edit protocols bgp] hierarchy level, include the group int statement to define
the routing group.
b. Include the local-address 10.12.1.2 statement to specify the address of the local
end of a BGP session. This address is used to accept incoming connections to
the peer and to establish connections to the remote peer.
c. Include the export nh-self statement to apply the nh-self policy to routes being
exported from the routing table into BGP.
d. Include the neighbor 10.12.1.1 statement to specify the route reflector (RR) as
peer (neighbor) of Router 1. You do not need to include neighbor statements for
Router 2 or Router 3.
5. At the [edit protocols] hierarchy level, include the ospf statement to enable OSPF
on the router.
6. At the [edit protocols ospf] hierarchy level, include the area 0.0.0.0 statement to
specify the area identifier for this router to use when participating in OSPF routing.
All routers in the area must use the same area identifier to establish adjacencies.
7. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface ge-1/1/1.0
statement to configure the Gigabit Ethernet interface in the backbone area.
8. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
ge-1/1/0.0 statement to configure the Gigabit Ethernet interface in the backbone
area.
9. At the [edit policy-options] hierarchy level, include the policy-statement nh-self
statement to define the nh-self policy.
A routing policy contains one or more terms. The nh-self policy you are defining
includes three terms (term a, term b, and term c). This policy is applied to routes
exported from the routing table into BGP.
10. At the [edit policy-options nh-self] hierarchy level, include the term a statement to
define the first term for the nh-self policy.
11. At the [edit policy-options nh-self term a] hierarchy level, include the following
statements to specify that any static route with destination prefix 0.0.0.0/0 is
rejected:
from {
protocol static;
route-filter 0.0.0.0/0 exact;
}
then reject;
12. At the [edit policy-options nh-self] hierarchy level, include the term b statement to
define the next term for the nh-self policy.
13. At the [edit policy-options nh-self term b] hierarchy level, include the following
statements:
These statements specify that for all remaining static routes, the next-hop address
is replaced by the local IP address used for the BGP adjacency. The route is then
accepted with the new next-hop value.
14. At the [edit policy-options nh-self] hierarchy level, include the term c statement to
define the next term for the nh-self policy.
15. At the [edit policy-options nh-self term c] hierarchy level, include the then reject
statement to reject all other routes.
routing-options {
autonomous-system 2;
}
protocols {
bgp {
group int {
type internal;
local-address 10.12.1.2;
export nh-self;
neighbor 10.12.1.1;
}
}
ospf {
area 0.0.0.0 {
interface ge-1/1/1.0;
interface ge-1/1/0.0;
}
}
}
policy-options {
policy-statement nh-self {
term a {
from {
protocol static;
route-filter 0.0.0.0/0 exact;
}
then reject;
}
term b {
from protocol static;
then {
next-hop self;
accept;
}
}
term c {
then reject;
}
}
}
2. At the [edit protocols] hierarchy level, include the bgp statement to enable BGP on
the router.
3. At the [edit protocols bgp] hierarchy level, include the group int statement to define
the routing group.
b. Include the local-address 10.12.1.3 statement to specify the address of the local
end of a BGP session. This address is used to accept incoming connections to
the peer and to establish connections to the remote peer.
c. Include the export nh-self statement to apply the nh-self policy to routes being
exported from the routing table into BGP.
d. Include the neighbor 10.12.1.1 statement to specify the route reflector (RR) as
peer (neighbor) of Router 2. You do not need to include neighbor statements for
Router 1 or Router 3.
5. At the [edit protocols] hierarchy level, include the ospf statement to enable OSPF
on the router.
6. At the [edit protocols ospf] hierarchy level, include the area 0.0.0.0 statement to
specify the area identifier for this router to use when participating in OSPF routing.
All routers in the area must use the same area identifier to establish adjacencies.
7. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
ge-2/0/2.0 statement to configure the Gigabit Ethernet interface in the backbone
area.
8. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
ge-2/0/3.0 statement to configure the Gigabit Ethernet interface in the backbone
area.
9. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
ge-2/0/1.0 statement to configure the Gigabit Ethernet interface in the backbone
area.
10. At the [edit protocols ospf area 0.0.0.0 interface ge-2/0/1.0] hierarchy level, include
the bfd-liveness-detection statement to specify bidirectional failure detection timers.
11. At the [edit protocols ospf area 0.0.0.0 interface ge-2/0/1.0 bfd-liveness-detection]
hierarchy level, include the minimum-interval 333 statement to specify 333
milliseconds as the minimum interval at which the local router transmits a hello
packet and then expects to receive a reply from its BFD neighbor.
12. At the [edit policy-options] hierarchy level, include the policy-statement nh-self
statement to define the nh-self policy.
A routing policy contains one or more terms. The nh-self policy you are defining
includes three terms (term a, term b, and term c). This policy is applied to routes
exported from the routing table into BGP.
13. At the [edit policy-options nh-self] hierarchy level, include the term a statement to
define the first term for the nh-self policy.
14. At the [edit policy-options nh-self term a] hierarchy level, include the following
statements to specify that any static route with destination prefix 0.0.0.0/0 or
destination prefix 10.12.1.1/32 is rejected:
from {
protocol static;
route-filter 0.0.0.0/0 exact;
route-filter 10.12.1.1/32 exact;
}
then reject;
15. At the [edit policy-options nh-self] hierarchy level, include the term b statement to
define the next term for the nh-self policy.
16. At the [edit policy-options nh-self term b] hierarchy level, include the following
statements:
These statements specify that for all remaining static routes, the next-hop address
is replaced by the local IP address used for the BGP adjacency. The route is then
accepted with the new next-hop value.
17. At the [edit policy-options nh-self] hierarchy level, include the term c statement to
define the next term for the nh-self policy.
18. At the [edit policy-options nh-self term c] hierarchy level, include the then reject
statement to reject all other routes.
routing-options {
autonomous-system 2;
}
protocols {
bgp {
group int {
type internal;
local-address 10.12.1.3;
export nh-self;
neighbor 10.12.1.1;
}
}
ospf {
area 0.0.0.0 {
interface ge-2/0/2.0;
interface ge-2/0/3.0;
interface ge-2/0/1.0 {
bfd-liveness-detection {
minimum-interval 333;
}
}
}
}
}
policy-options {
policy-statement nh-self {
term a {
from {
protocol static;
route-filter 0.0.0.0/0 exact;
route-filter 10.12.1.1/32 exact;
}
then reject;
}
term b {
from protocol static;
then {
next-hop self;
accept;
}
}
term c {
then reject;
}
}
}
2. At the [edit protocols] hierarchy level, include the bgp statement to enable BGP on
the router.
3. At the [edit protocols bgp] hierarchy level, include the group int statement to define
the routing group.
b. Include the local-address 10.12.1.4 statement to specify the address of the local
end of a BGP session. This address is used to accept incoming connections to
the peer and to establish connections to the remote peer.
c. Include the export nh-self statement to apply the nh-self policy to routes being
exported from the routing table into BGP.
d. Include the neighbor 10.12.1.1 statement to specify the route reflector (RR) as
peer (neighbor) of Router 3. You do not need to include neighbor statements for
Router 1 or Router 2.
5. At the [edit protocols] hierarchy level, include the ospf statement to enable OSPF
on the router.
6. At the [edit protocols ospf] hierarchy level, include the area 0.0.0.0 statement to
specify the area identifier for this router to use when participating in OSPF routing.
All routers in the area must use the same area identifier to establish adjacencies.
7. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
ge-0/2/3.0 statement to configure the Gigabit Ethernet interface in the backbone
area.
8. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
ge-0/1/7.0 statement to configure the Gigabit Ethernet interface in the backbone
area.
9. At the [edit protocols ospf area 0.0.0.0] hierarchy level, include the interface
ge-0/1/0.0 statement to configure the Gigabit Ethernet interface in the backbone
area.
10. At the [edit protocols ospf area 0.0.0.0 interface ge-0/1/0.0] hierarchy level, include
the bfd-liveness-detection statement to specify bidirectional failure detection timers.
11. At the [edit protocols ospf area 0.0.0.0 interface ge-0/1/0.0 bfd-liveness-detection]
hierarchy level, include the minimum-interval 333 statement to specify 333
milliseconds as the minimum interval at which the local router transmits a hello
packet and then expects to receive a reply from its BFD neighbor.
12. At the [edit policy-options] hierarchy level, include the policy-statement nh-self
statement to define the nh-self policy.
A routing policy contains one or more terms. The nh-self policy you are defining
includes three terms (term a, term b, and term c). This policy is applied to routes
exported from the routing table into BGP.
13. At the [edit policy-options nh-self] hierarchy level, include the term a statement to
define the first term for the nh-self policy.
14. At the [edit policy-options nh-self term a] hierarchy level, include the following
statements to specify that any static route with destination prefix 0.0.0.0/0 is
rejected:
from {
protocol static;
route-filter 0.0.0.0/0 exact;
}
then reject;
15. At the [edit policy-options nh-self] hierarchy level, include the term b statement to
define the next term for the nh-self policy.
16. At the [edit policy-options nh-self term b] hierarchy level, include the following
statements:
These statements specify that for all remaining static routes, the next-hop address
is replaced by the local IP address used for the BGP adjacency. The route is then
accepted with the new next-hop value.
17. At the [edit policy-options nh-self] hierarchy level, include the term c statement to
define the next term for the nh-self policy.
18. At the [edit policy-options nh-self term c] hierarchy level, include the then reject
statement to reject all other routes.
routing-options {
autonomous-system 2;
}
protocols {
bgp {
group int {
type internal;
local-address 10.12.1.4;
export nh-self;
neighbor 10.12.1.1;
}
}
ospf {
area 0.0.0.0 {
interface ge-0/2/3.0;
interface ge-0/1/7.0;
interface ge-0/1/0.0 {
bfd-liveness-detection {
minimum-interval 333;
}
}
}
}
}
policy-options {
policy-statement nh-self {
term a {
from {
protocol static;
route-filter 0.0.0.0/0 exact;
}
then reject;
}
term b {
from protocol static;
then {
next-hop self;
accept;
}
}
term c {
then reject;
}
}
}
In this configuration example, a Layer 2 VPN network topology is reduced and simplified
by replacing two M320 routers at the provider edge (PE) of the network with a single
platform. The configuration for the consolidated Layer 2 VPN topology is described in
the following sections:
Requirements
This example requires the following hardware and software components:
Figure 17 on page 178 illustrates a typical Layer 2 VPN topology, with T640 routers as P
routers, M320 routers as PE routers, and MX Series Ethernet Services routers as CE routers.
The service provider uses a separate PE router for each customer.
[Figure 17 (g016912): Layer 2 VPN topology. Two M320 PE routers and two T640 P routers are joined in an IBGP mesh across the MPLS core.]
By replacing the two M320 routers with the JCS1200 chassis interconnected with the
T640 router, the service provider simplifies and consolidates the network at the provider
edge. One platform supports both customer networks through the creation of PSDs, as
shown in Figure 18 on page 178.
Configuration
Table 22 on page 178 provides the chassis parameters required for the JCS1200 platform
and the T640 router for each PSD.
PSD2: Routing Engines in slots 5 and 6; FPC in slot 5 (with PICs supporting
Fast Ethernet and SONET interfaces)
The configuration for the consolidated Layer 2 VPN topology is described in the following
sections:
Step-by-Step Procedure
To configure the parameters required for the Routing Engines in the JCS chassis:
1. Log in to the JCS management module CLI.
2. Configure the Routing Engine that is part of PSD1. The Routing Engine in slot 4 of
the JCS chassis is the master Routing Engine. There is no backup Routing Engine.
OK
The baydata command specifies the target as a bay blade (-b), identifies the blade
(Routing Engine) in slot 04, and specifies the following parameters:
• REP04—Slot in which the primary (or master) Routing Engine resides, which is
04.
• REB00—Slot in which the backup Routing Engine resides. 00 indicates that there
is no backup Routing Engine.
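As a sketch, the slot-4 command might look like the following; the -b and -data option syntax is an assumption, and the SD01/PSD01 field values are inferred from the parameters above and from the string format used for slots 5 and 6:

```
system> baydata -b 4 -data "V01-JCS01-SD01-PSD01-REP04-REB00-PRDT640"
OK
```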
3. Configure the baydata command parameters for the Routing Engines that are part
of PSD2. The Routing Engine in slot 5 is the master, whereas the Routing Engine in
slot 6 is the backup Routing Engine.
OK
The baydata command specifies the target as a bay blade (-b), identifies the
blade (Routing Engine) in slot 05, and specifies the following parameters:
• V01—Product version. Junos OS Release 9.1 supports only the value of 01.
• JCS01—JCS platform identifier of 01. Junos OS Release 9.1 supports only the
value of 01.
• REP05—Slot in which the primary (or master) Routing Engine resides, which
is 05.
OK
The baydata command specifies the target as a bay blade (-b), identifies the
blade (Routing Engine) in slot 06, and specifies the following parameters:
• REP05—Slot in which the primary (or master) Routing Engine resides, which
is 05.
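As a sketch, the commands for slots 5 and 6 might look like the following; the -b and -data option syntax is an assumption, but the data strings match the baydata listing that follows:

```
system> baydata -b 5 -data "V01-JCS01-SD01-PSD02-REP05-REB06-PRDT640"
OK
system> baydata -b 6 -data "V01-JCS01-SD01-PSD02-REP05-REB06-PRDT640"
OK
```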
system> baydata
5 Supported V01-JCS01-SD01-PSD02-REP05-REB06-PRDT640
6 Supported V01-JCS01-SD01-PSD02-REP05-REB06-PRDT640
7 No blade present
8 No blade present
9 No blade present
10 No blade present
11 No blade present
12 No blade present
RSD Configuration
Step-by-Step Procedure
To configure the RSD and create the PSDs on the master Routing Engine in the T640
router:
1. At the [edit chassis system-domains] hierarchy level of the Junos OS CLI, include
the root-domain-id 1 statement to identify the RSD.
chassis {
system-domains {
root-domain-id 1;
protected-system-domains {
psd1 {
fpcs 4;
control-system-id 1;
control-slot-numbers 4;
}
psd2 {
fpcs 5;
control-system-id 1;
control-slot-numbers [5 6];
}
}
}
}
PSD1 Configuration
Step-by-Step Procedure
The configuration for PSD1 is much the same as the configuration that was running on
the T640 router in the original VPN network topology before the consolidation of two
routers into a single platform.
The key difference is the management configuration. To configure the unique parameters
for PSD1:
1. Configure the following statements at the [edit groups re0] hierarchy level:
re0 {
system {
host-name customer-a;
backup-router 192.168.71.254 destination [ 172.16.0.0/12 192.168.0.0/16
207.17.136.192/32 10.9.0.0/16 10.10.0.0/16 10.13.10.0/23 10.84.0.0/16 10.5.0.0/16
10.6.128.0/17 192.168.102.0/23 207.17.136.0/24 10.209.0.0/16 10.227.0.0/16
10.150.0.0/16 10.157.64.0/19 10.204.0.0/16 ];
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 192.168.66.240/21;
}
}
}
}
}
interfaces {
fe-0/2/3 {
unit 0 {
family inet {
address 10.6.1.1/30;
}
family iso;
family mpls;
}
}
so-4/0/3 {
encapsulation frame-relay-ccc;
unit 2 {
encapsulation frame-relay-ccc;
dlci 512;
}
}
fe-4/3/0 {
unit 0 {
family inet {
address 10.5.1.2/30;
}
family iso;
family mpls;
}
}
}
routing-options {
autonomous-system 65299;
confederation 702 members [ 65299 65235 65240 65269 ];
}
protocols {
mpls {
interface all;
}
bgp {
group ibgp {
type internal;
local-address 10.255.171.124;
import match-all;
family l2vpn {
signaling;
}
export match-all;
neighbor 10.255.171.125;
}
}
isis {
interface fe-0/2/3.0 {
level 2 metric 10;
level 1 disable;
}
interface fe-4/3/0.0 {
level 2 metric 10;
level 1 disable;
}
interface all;
interface fxp0.0 {
disable;
}
interface lo0.0 {
passive;
}
}
ldp {
interface all;
interface lo0.0;
}
}
policy-options {
policy-statement frame-relay-vpn-export {
term a {
then {
community add frame-relay-vpn-comm;
accept;
}
}
term b {
then reject;
}
}
policy-statement frame-relay-vpn-import {
term a {
from {
protocol bgp;
community frame-relay-vpn-comm;
}
then accept;
}
term b {
then reject;
}
}
policy-statement match-all {
then accept;
}
community frame-relay-vpn-comm members target:65299:400;
}
routing-instances {
frame-relay-vpn {
instance-type l2vpn;
interface so-4/0/3.2;
route-distinguisher 10.255.171.124:4;
vrf-import frame-relay-vpn-import;
vrf-export frame-relay-vpn-export;
protocols {
l2vpn {
encapsulation-type frame-relay;
site 2 {
site-identifier 2;
interface so-4/0/3.2 {
remote-site-id 1;
}
}
}
}
}
}
PSD2 Configuration
Step-by-Step Procedure
The configuration for PSD2 is much the same as the configuration that was running on the T640 router in the original VPN network topology before the consolidation of two routers into a single platform.
The key difference is the management configuration. To configure the unique parameters
for PSD2:
1. Configure the following statements at the [edit groups re0] hierarchy level:
a. Include the host-name customer-b statement to configure the hostname for the
master Routing Engine (re0) on PSD2.
b. Include the address 192.168.66.242/21 statement at the [edit interfaces fxp0 unit
0 family inet] hierarchy level to configure the fxp0 management interface.
re0 {
system {
host-name customer-b;
backup-router 192.168.71.254 destination [ 172.16.0.0/12 192.168.0.0/16
207.17.136.192/32 10.9.0.0/16 10.10.0.0/16 10.13.10.0/23 10.84.0.0/16 10.5.0.0/16
10.6.128.0/17 192.168.102.0/23 207.17.136.0/24 10.209.0.0/16 10.227.0.0/16
10.150.0.0/16 10.157.64.0/19 10.204.0.0/16 ];
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 192.168.66.242/21;
}
}
}
}
}
routing-options {
autonomous-system 65299;
confederation 702 members [ 65299 65235 65240 65269 ];
}
protocols {
mpls {
interface all;
}
bgp {
group ibgp {
type internal;
local-address 10.255.171.125;
import match-all;
family l2vpn {
signaling;
}
export match-all;
neighbor 10.255.171.124;
}
}
isis {
interface fe-5/1/1.0 {
level 2 metric 10;
level 1 disable;
}
interface fe-5/1/2.0 {
level 2 metric 10;
level 1 disable;
}
interface all;
interface fxp0.0 {
disable;
}
interface lo0.0 {
passive;
}
}
ldp {
interface all;
interface lo0.0;
}
}
policy-options {
policy-statement frame-relay-vpn-export {
term a {
then {
community add frame-relay-vpn-comm;
accept;
}
}
term b {
then reject;
}
}
policy-statement frame-relay-vpn-import {
term a {
from {
protocol bgp;
community frame-relay-vpn-comm;
}
then accept;
}
term b {
then reject;
}
}
policy-statement match-all {
then accept;
}
community frame-relay-vpn-comm members target:65299:400;
}
routing-instances {
frame-relay-vpn {
instance-type l2vpn;
interface so-5/3/0.1;
route-distinguisher 10.255.171.125:4;
vrf-import frame-relay-vpn-import;
vrf-export frame-relay-vpn-export;
protocols {
l2vpn {
encapsulation-type frame-relay;
site 1 {
site-identifier 1;
interface so-5/3/0.1 {
remote-site-id 2;
}
}
}
}
}
}
Verification
Verify that the two PSDs are configured and operating properly:
Purpose Verify that PSD1 and PSD2 are configured and online.
{master}
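The sample output itself is not reproduced here. From the RSD CLI, the configured PSDs and their states can be displayed as follows; the show chassis psd command name is an assumption and the prompt is a placeholder:

```
user@rsd> show chassis psd
```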
Meaning The example shows that the PSDs are configured and online.
Purpose Display information about the FPCs and Routing Engines that are part of each PSD.
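The inventory below corresponds to the show chassis hardware command issued on each PSD. This is a sketch of the invocation; the prompt is a placeholder:

```
user@psd1> show chassis hardware
```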
psd1-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis 740-023156 SNJCSJCSAC00 JCS1200 AC Chassis
Routing Engine 0 REV 01 740-023157 SNBLJCSAC004 RE-JCS1200-1x2330
rsd-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis S19068 T640
Midplane REV 04 710-002726 AX5666 T640 Backplane
FPM GBUS REV 02 710-002901 HE3251 T640 FPM Board
FPM Display REV 02 710-002897 HE7860 FPM Display
CIP REV 05 710-002895 HC0474 T Series CIP
PEM 1 Rev 03 740-002595 MH15367 Power Entry Module
SCG 0 REV 04 710-003423 HF6042 T640 Sonet Clock Gen.
SCG 1 REV 11 710-003423 HW7765 T640 Sonet Clock Gen.
Routing Engine 0 REV 04 740-014082 1000660098 RE-A-2000
Routing Engine 1 REV 01 740-005022 210865700324 RE-3.0
CB 0 REV 06 710-007655 WE9377 Control Board (CB-T)
CB 1 REV 06 710-007655 WE9379 Control Board (CB-T)
FPC 5 REV 01 710-010233 HM4187 E-FPC Type 1
CPU REV 01 710-010169 HS9939 FPC CPU-Enhanced
MMB 1 REV 01 710-010171 HR0833 MMB-288mbit
SPMB 0 REV 10 710-003229 WE9582 T Series Switch CPU
SPMB 1 REV 10 710-003229 WE9587 T Series Switch CPU
SIB 0 REV 05 750-005486 HV8445 SIB-I8-F16
SIB 1 REV 05 750-005486 HW2650 SIB-I8-F16
SIB 2 REV 05 750-005486 HW7041 SIB-I8-F16
SIB 3 REV 05 750-005486 HV4274 SIB-I8-F16
SIB 4 REV 05 750-005486 HV8464 SIB-I8-F16
Fan Tray 0 Front Top Fan Tray
Fan Tray 1 Front Bottom Fan Tray
Fan Tray 2 Rear Fan Tray
psd2-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis 740-023156 SNJCSJCSAC00 JCS1200 AC Chassis
Routing Engine 0 REV 01 740-023157 SNBLJCSAC006 RE-JCS1200-1x2330
Routing Engine 1 REV 01 740-023157 SNBLJCSAC005 RE-JCS1200-1x2330
Meaning In the command output, the FPC that belongs to the PSD is displayed under the rsd-re0:
field heading. The Routing Engines on the JCS chassis that belong to the PSD are displayed
under the psd2-re0: heading.
Purpose Display detailed information about the Routing Engines assigned to each PSD.
The following example displays detailed information about the Routing Engine assigned
to PSD1.
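The detail below is produced by the show chassis routing-engine command (a sketch; the prompt is a placeholder):

```
user@psd1> show chassis routing-engine
```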
CPU utilization:
User 0 percent
Background 0 percent
Kernel 0 percent
Interrupt 0 percent
Idle 100 percent
Model RE-JCS1200-1x2330
Serial ID SNBLJCSAC004
Start time 2008-03-30 03:19:49 PDT
Uptime 11 hours, 46 minutes, 24 seconds
Load averages: 1 minute 5 minute 15 minute
0.00 0.00 0.00
The following example displays detailed information about the Routing Engines assigned
to PSD2.
Meaning The Physical Slot field displays the JCS chassis slot number in which each Routing Engine
is installed.
Action On each PSD, issue the show chassis ethernet-switch statistics command.
The following example displays information about the Ethernet switch statistics for PSD1:
RX Discards 1936
RX Errors 0
RX Unknown Protocol 0
Link State Changes 231
Statistics for switch[2] port EXT1 connected to RSD 1:
TX Octets 3175062867
TX Unicast Packets 4961873
TX Multicast Packets 509882
TX Broadcast Packets 5622328
Tx Discards 0
TX Errors 0
RX Octets 478994886
RX Unicast Packets 49004
RX Multicast Packets 251
RX Broadcast Packets 7419709
RX Discards 12
RX Errors 6
RX Unknown Protocol 0
Link State Changes 129
Statistics for switch[2] port EXT6 connected to external management:
TX Octets 778154
TX Unicast Packets 4244
TX Multicast Packets 73
TX Broadcast Packets 769
Tx Discards 0
TX Errors 0
RX Octets 723965940
RX Unicast Packets 43991082
RX Multicast Packets 91767677
RX Broadcast Packets 15655861
RX Discards 24442638
RX Errors 3331664
RX Unknown Protocol 0
Link State Changes 1
The following example displays information about the Ethernet switch statistics for
PSD2:
RX Octets 2392164387
RX Unicast Packets 26421668
RX Multicast Packets 226
RX Broadcast Packets 6026725
RX Discards 8
RX Errors 4
RX Unknown Protocol 0
Link State Changes 113
Statistics for switch[1] port EXT6 connected to external management:
TX Octets 1528692454
TX Unicast Packets 7591911
TX Multicast Packets 112
TX Broadcast Packets 6027
Tx Discards 0
TX Errors 0
RX Octets 510609656
RX Unicast Packets 30209066
RX Multicast Packets 3037753
RX Broadcast Packets 12229518
RX Discards 18650
RX Errors 6
RX Unknown Protocol 0
Link State Changes 1
Statistics for switch[2] port INT6 connected to fxp1:
TX Octets 3938845805
TX Unicast Packets 27450378
TX Multicast Packets 90293642
TX Broadcast Packets 35156025
Tx Discards 0
TX Errors 0
RX Octets 2016108068
RX Unicast Packets 9832240
RX Multicast Packets 448
RX Broadcast Packets 1897002
RX Discards 1844
RX Errors 0
RX Unknown Protocol 0
Link State Changes 195
Statistics for switch[2] port EXT1 connected to RSD 1:
TX Octets 3175192403
TX Unicast Packets 4961873
TX Multicast Packets 510063
TX Broadcast Packets 5624171
Tx Discards 0
TX Errors 0
RX Octets 479053702
RX Unicast Packets 49004
RX Multicast Packets 251
RX Broadcast Packets 7420628
RX Discards 12
RX Errors 6
RX Unknown Protocol 0
Link State Changes 129
Statistics for switch[2] port EXT6 connected to external management:
TX Octets 778154
TX Unicast Packets 4244
TX Multicast Packets 73
TX Broadcast Packets 769
Tx Discards 0
TX Errors 0
RX Octets 732481038
• INT4 provides the internal connection between the Routing Engine in slot 4 and the
JCS switch module.
• EXT1 provides the connection between the JCS switch module and the RSD.
• EXT6 provides the connection between the JCS switch module and the management
ports on each Routing Engine in the JCS chassis.
• INT6 provides the internal connection between the master Routing Engine in slot 6
and the JCS switch module.
• EXT1 provides the connection between the JCS switch module and the RSD.
• EXT6 provides the connection between the JCS switch module and the management
ports on each Routing Engine in the JCS chassis.
Table 23 on page 199 lists some JCS management module verification tasks useful for
monitoring JCS operations.
Table 23: Summary of Commonly Used JCS Management Module Verification Tasks
Vital product data: Display hardware part numbers, system component counts, and software versions. See "Displaying Vital Product Data" on page 200.
Event log: Display (or clear) the contents of the event log, including user access events. See "Displaying the Event Log" on page 202 and "Clearing the Event Log" on page 202.
System component status: Display the status of all system components. See "Displaying System Component Status" on page 204.
System configuration: Display the system configuration list. See "Displaying a List of Components" on page 205.
Temperature: Display component temperature values and ranges. See "Displaying Temperature Information" on page 206.
Voltage: Display voltage information for system components. See "Displaying Voltage Information" on page 207.
Action Display identification and configuration information using the JCS management module
CLI info command.
The following sample output appears when the info command is targeted for the entire
JCS1200 platform:
system> info
UUID: 597A 6B81 C99F 333 9DE7 A3D4 52F9 95D1
Manufacturer: ZX1234
Manufacturer ID: 20301
Product code: System Enclosure/CHAS-BP-JCS1200-S
Serial number: 02
Part no.: 740-025747
Component serial no.: ZX001
CLEI: Not Available
AMM slots: 2
Blade slots: 12
I/O Module slots: 10
Power Module slots: 4
Blower slots: 4
Media Tray slots: 2
...
The sample output shows relevant hardware information about the JCS1200 platform,
including the JCS chassis serial number (Manufacturer field) and JCS midplane serial
number (Component Serial No. field). It also includes the number and type of chassis
slots supported.
The following sample output appears when the info command is targeted for the JCS
management module:
The sample output shows relevant hardware information about the JCS management
module including the product code and part number. It also includes the build ID and
release date of the management module firmware.
The following sample output appears when the info command is targeted for the JCS
switch module:
The following sample output appears when the info command is targeted for a Routing
Engine:
Rev: 1.04
Blade sys. mgmt. proc.
Build ID: BCBT42B
Rev: 1.11
Local Control
KVM: Yes
Media Tray: Yes
SCOD: Unknown
Power On Time: 5 days 20 hours 35 min 12 secs
Number of Boots: 3
Action Clear the event log using the JCS management module CLI clearlog command.
The following sample output appears when the event log for the JCS management
module is cleared:
The following sample output shows information that is returned if the displaylog command
is run after the event log has been cleared.
system:mm[1]> displaylog -f
1 I SERVPROC 01/28/08 19:50:15 System log cleared.
(There are no more entries in the event log.)
You can display the event log to monitor JCS1200 platform operations and to diagnose
and troubleshoot problems.
Action Display the event log using the JCS management module CLI displaylog command:
The following sample output appears when the displaylog command is targeted for the
most recent events:
The sample output shows the event log for the JCS management module. By default,
the first time the command is executed, the five most recent log entries are displayed.
Each subsequent time the command is issued, the next five log entries are displayed.
The following sample output shows the complete event log for the JCS management
module. All events since the last time the log was cleared are shown.
system:mm[1]> displaylog -a
1 I Audit 01/28/08 19:47:01 Remote logoff successful for user 'gdickey'
2 I Blade_03 01/28/08 19:46:14 (bcgcpu3) Blade reboot
3 I SERVPROC 01/27/08 19:45:57 Login ID:''USERID' CLI telnet authenticated.
4 E SERVPROC 01/27/08 19:42:58 Failure reading I2C device. Check bus 4.
5 I SERVPROC 01/27/08 19:41:54 Login ID:''USERID' from WEB browser.
6 E SERVPROC 01/27/08 19:41:53 Blower 2 Fault Multiple blower failures.
7 E SERVPROC 01/27/08 19:41:53 Blower 1 Fault Single blower failure.
8 I SERVPROC 01/27/08 19:41:48 Ethernet[1] Link Established at 100Mb.
9 I SERVPROC 01/27/08 19:41:48 Ethernet[1] configured to do 100Mb/Full Duplex.
10 I SERVPROC 01/27/08 19:41:48 Ethernet[1] MAC Address: 0x00-09-6B-CA-0C-81
11 I SERVPROC 01/27/08 19:41:48 Ethernet[0] Link Established at 100Mb.
12 I SERVPROC 01/27/08 19:41:48 Ethernet[0] configured to do Auto Speed/Auto.
13 I SERVPROC 01/27/08 19:41:48 Ethernet[0] MAC Address: 0x00-09-6B-CA-0C-80
14 I SERVPROC 01/27/08 19:41:48 Management Module Network Initialization.
15 I SERVPROC 01/27/08 19:41:46 ENET[1] IP-Cfg:HstName=MM00096BCA0C81.
Action Display power domain information using the JCS CLI fuelg command.
The following sample output appears when power domain information is displayed:
system> fuelg
Note: All power values are displayed in Watts.
Power Domain 1
--------------
Status: Power domain status is good.
Modules:
Bay 1: 2880
Bay 2: 2880
Power Management Policy: Basic Power Management
Power in Use: 907
Total Power: 4000
Allocated Power (Max): 1921
Remaining Power: 2079
Power Domain 2
--------------
Status: Power domain status is good.
Modules:
Bay 3: 2880
Bay 4: 2880
Power Management Policy: Basic Power Management
Power in Use: 116
Total Power: 4000
Allocated Power (Max): 800
Remaining Power: 3200
• Ok
• Warning
• Critical
Action Display health status for the JCS1200 platform using the JCS management module CLI
health command.
The following sample output appears when health status is displayed for all components
installed in the JCS1200 platform:
system> health -l a
OK
mm[1] : OK
mm[2] : OK
blade[1] : OK
blade[2] : OK
blade[3] : OK
blade[4] : OK
blade[5] : Minor
blade[6] : OK
power[1] : OK
power[2] : OK
power[3] : OK
power[4] : OK
blower[1] : OK
blower[2] : OK
blower[3] : OK
blower[4] : OK
switch[1] : OK
switch[2] : OK
The following sample output appears when health status is displayed for a JCS Routing
Engine:
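The original output is not reproduced here. A check of this kind can be sketched as follows; the slot number matches the Routing Engine discussed next, and the -f option displays active alerts:

```
system> health -f -T system:blade[5]
```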
In this example, a minor warning appears for the Routing Engine in slot 5. The voltage
level has risen, causing a temperature increase.
Action Display a list of components in the JCS chassis using the JCS management module CLI
list command.
The following sample output appears when the list is displayed for all components
installed in the JCS chassis:
system> list -l a
system
mm[1] primary
mm[2] standby
power[1]
power[2]
power[3]
power[4]
blower[1]
blower[2]
blower[3]
blower[4]
switch[1]
switch[2]
blade[1] bcgcpu1
sp
cpu[1]
blade[2] bcgcpu2
sp
cpu[1]
blade[3] bcgcpu3
sp
cpu[1]
blade[4] bcgcpu4
sp
cpu[1]
blade[5] bcgcpu5
sp
cpu[1]
blade[6] bcgcpu6
sp
cpu[1]
mt[1]
mt[2]
tap
mux[1]
mux[2]
In this example, two management modules (mm) are installed and mm[1] is the primary
management module. There are also four power supplies (power), four fan assemblies
(blowers), and two JCS switch modules (switch). There are six Routing Engines (blade)
and two media trays (mt).
Action Display temperature information (in degrees Fahrenheit) for components in the JCS
chassis using the JCS CLI temps command.
The following sample output appears when temperature information is displayed for a
JCS management module:
The following sample output appears when temperature information is displayed for a
JCS Routing Engine:
Action Display voltage information for components in the JCS chassis using the JCS CLI volts
command.
The following sample output appears when voltage information is displayed for a JCS
management module:
The following sample output appears when voltage information is displayed for a JCS
Routing Engine:
boot
Description (JCS management module CLI) Perform an immediate reset and restart of a specified
Routing Engine (blade).
Options -T system:blade[x]—Specify the Routing Engine to boot. Replace x with the Routing
Engine slot number (1 through 12).
Output Fields When you enter this command, you are provided feedback on the status of your request.
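As a sketch, rebooting the Routing Engine in slot 3 (the slot number is illustrative; OK is the typical success response):

```
system> boot -T system:blade[3]
OK
```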
clearlog
Description (JCS management module CLI) Clear the JCS management module event log to remove
existing events.
Output Fields When you enter this command, you are provided feedback on the status of your request.
The command prompt changes to reflect the new command target.
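As a sketch, clearing the event log of the primary JCS management module (the target is illustrative):

```
system> clearlog -T mm[1]
OK
```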
displaylog
Description (JCS management module CLI) Display the JCS management module event log entries.
-a—(Optional) Display all entries in the JCS management module event log. By default,
the displaylog command displays only the first five entries in the log.
-date date-filter—(Optional) Display all entries in the JCS management module event
log that meet the date filter criteria. Replace date-filter with a list of dates in mm/dd/yy
format. Use the pipe symbol (|) to separate dates in the list. For example, specify a
-date 03/17/08|03/18/08 filter to show all events in the log that occurred on March
17, 2008, and March 18, 2008.
-sev severity-filter—(Optional) Display all entries in the JCS management module event
log that meet the severity filter criteria. Replace severity-filter with a list of severities. Use
the pipe symbol (|) to separate severities in the list. Severities you can specify include:
• I—information
• E—error
• W—warning
For example, you can specify a -sev E|W filter to show all error and warning events in the
log.
-src source-filter—(Optional) Display all entries in the JCS management module event
log that meet the source filter criteria. Replace source-filter with a list of event sources.
Use the pipe symbol (|) to separate sources in the list. Sources you can specify include:
Output Fields Table 24 on page 213 lists the output fields for the displaylog command. Output fields are
listed in the approximate order in which they appear.
Index Number: Log entry number. The most recent entries have the lowest numbers.
Entry Type: Type of log entry. Log entries can be informational (I), warnings (W), or errors (E).
System: System where the entry occurred; for example, Blade_03 (Routing Engine in slot 3).
Date and Time: Date and time the entry was logged.
fuelg
Description (JCS management module CLI) Display or configure power domain information for power
supplies on the JCS1200 platform.
Options -T system:mm[x]—Specify the JCS management module as the target of the command.
Replace x with a value of 1 or 2.
The JCS1200 platform has two power domains. Power domain 1 supports all JCS modules
and slots (bays) 1 through 6. Power domain 1 uses power supply modules 1 and 2. Power
domain 2 supports slots 7 through 12 and uses power supply modules 3 and 4.
• on—Fans will remain at fixed speed. Power is throttled (for components that support
power throttling) to reduce power consumption and heat.
Output Fields Table 25 on page 214 lists the output fields for the fuelg command. Output fields are listed
in the approximate order in which they appear.
Domain Number: Status information for the power domain (power domain 1 or power domain 2).
Bay x: Bay (slot) number and power value (in watts) for the power supply.
Power Budget: Total amount of power (in watts) allocated to the domain.
Remaining Power: Amount of power (in watts) available to the domain (remaining power = power budget minus reserved power).
Power in Use: Amount of power (in watts) currently being used by the power supplies in the domain.
Power Domain 2
--------------
Status: Power domain status is good.
Modules:
Bay 3: 1800
Bay 4: 1800
Power Budget: 2880
Reserved Power: 0
Remaining Power: 2880
Power in Use: 0
health
Description (JCS management module CLI) Display the current health status of a device on the
JCS1200 platform.
-l depth—(Optional) Display health status for a hierarchy of devices (starting with the
device specified as the command target). Replace depth with one of the following values:
• 2 | all | a—Health status for the full hierarchy of devices (starting at the command target
level). You can enter a as an abbreviation for all.
-f—(Optional) Display health status and active alerts for the device specified as the
command target.
Output Fields When you enter this command, you are provided feedback on the status of your request.
power[3]: OK
power[4]: OK
blower[1]: OK
blower[2]: OK
blower[3]: OK
blower[4]: OK
switch[1]: OK
switch[2]: OK
history
Syntax history
<!n>
Description (JCS management module CLI) Display the last eight commands entered. You can use
this list to reenter commands. To reenter a command, use the history command to display
a list of recent commands, then type an exclamation point (!) followed by the number
of the command you wish to reenter.
Options !n—(Optional) Reenter a command from the history list. Replace n with a value of 0
through 7 to indicate the number of the command you wish to reenter.
Output Fields When you enter this command, you are provided feedback on the status of your request.
0 dns
1 dns -on
2 dns
3 dns -i1 192.168.70.29
4 dns
5 dns -i1 192.168.70.29 -on
6 dns
7 history
info
Description (JCS management module CLI) Display information about JCS hardware components
and component configuration.
• system:mt[x]—Specify a JCS media tray as the command target. Replace x with the
media tray number (1 or 2).
• system:mux[x]—Specify a JCS MUX card as the command target. Replace x with the
MUX card number (1 or 2).
Output Fields Table 26 on page 220 lists the output fields for the info command. Output fields are listed
in the approximate order in which they appear.
Component Serial Number: (system keyword only) JCS midplane serial number.
AMM Firmware: (JCS management module only) Build ID, filename, release date, and revision number of the firmware installed on the JCS management module.
AMM Slots: (system keyword only) Number of JCS management module slots.
Blade Slots: (system keyword only) Number of Routing Engine (blade) slots.
I/O Module Slots: (system keyword only) Number of I/O module slots.
Power Module Slots: (system keyword only) Number of power module slots.
Media Tray Slots: (system keyword only) Number of media tray slots.
Name: bcgcpu1
UUID: 7393 CA1C 00C3 3A97 AC4C 6EE7 608B CA0D
Manufacturer: ZX1234
Manufacturer ID: 20301
Product code: 4 X86 CPU Blade Server/JCS Routing Engine
Serial number: 02
Part no.: 740-023157
Component serial no.: ZX0014
CLEI: Not Available
MAC Address 1: 00:1A:64:32:E4:D8
MAC Address 2: 00:1A:64:32:E4:DA
BIOS
Build ID: LJE104BUS
Rel date: 12/11/2007
Rev: 1.00
Diagnostics
Build ID: BCYT24AUS
Rel date: 08/27/2007
Rev: 1.04
Blade sys. mgmt. proc.
Build ID: BCBT42B
Rev: 1.11
Local Control
KVM: Yes
Media Tray: Yes
SCOD: Unknown
Power On Time: 5 days 20 hours 35 min 12 secs
Number of Boots: 3
list
Description (JCS management module CLI) List devices on the JCS1200 platform. This information
is useful for determining how many Routing Engines and JCS management modules are
installed and which JCS management module is primary.
Options -T target—Specify a command target. Valid targets for this command include:
If no command target is specified, all devices on the JCS1200 platform are listed.
-l depth—(Optional) List a hierarchy of devices (starting with the device specified as the
command target). Replace depth with one of the following values:
• 2 | all | a—List the full hierarchy of devices (starting at the command target level). You
can enter a as an abbreviation for all.
Output Fields When you enter this command, you are provided feedback on the status of your request.
mm[1] primary
mm[2] standby
power[1]
power[2]
power[3]
power[4]
blower[1]
blower[2]
blower[3]
blower[4]
switch[1]
switch[2]
blade[1] bcgcpu1
sp
cpu[1]
blade[2] bcgcpu2
sp
cpu[1]
blade[3] bcgcpu3
sp
cpu[1]
blade[4] bcgcpu4
sp
cpu[1]
blade[5] bcgcpu5
sp
cpu[1]
blade[6] bcgcpu6
sp
cpu[1]
mt[1]
mt[2]
tap
mux[1]
mux[2]
power
Description (JCS management module CLI) Power on or power off a specified Routing Engine (blade)
or JCS switch module. Alternatively, display the power setting for a specified Routing
Engine or switch module.
Options -T target—Specify the target of the power command. Valid targets for this command are:
-cycle—Cycle power for the specified Routing Engine or JCS switch module. If the Routing
Engine or JCS switch module is off, it will turn on. If the Routing Engine or JCS switch
module is on, it will turn off.
-state—Display the current power state (on or off) for the specified Routing Engine or
JCS switch module.
Output Fields When you enter this command, you are provided feedback on the status of your request.
OK
Off
On
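As a sketch, querying the power state of the Routing Engine in slot 5 (the slot number is illustrative):

```
system> power -state -T system:blade[5]
On
```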
read
Description (JCS management module CLI) Restore the JCS management module configuration
from an image previously saved to the JCS chassis with the write command. This
command is useful for restoring a backup copy of the JCS management module
configuration.
Options -config chassis—Specify the location within the chassis from which the configuration is
restored.
-config file (-i, -l, -p)—Specify the location outside the chassis and the name of the file
from which the configuration is restored.
-i—Specify the IP address of the TFTP server where the configuration file is located.
-l—Specify the file name of the configuration file to read (the default filename is asm.cfg).
-p—Specify the quote-delimited passphrase that is required when the file was saved with
encryption enabled (passphrases have a maximum of 1600 characters).
Output Fields When you enter this command, you are provided feedback on the status of your request.
When you enter this command, the amm.cfg file will be loaded from the TFTP server
that corresponds with the IP address you entered.
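As a sketch, restoring the configuration from a file on a TFTP server (the IP address and filename are illustrative; the filename matches the note above):

```
system:mm[1]> read -config file -i 172.17.59.183 -l amm.cfg
OK
```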
reset
Description (JCS management module CLI) Reset a specified Routing Engine (blade), JCS switch
module, or JCS management module.
Options -T target—Specify the target of the reset command. Valid targets for this command are:
Output Fields When you enter this command, you are provided feedback on the status of your request.
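As a sketch, resetting the Routing Engine in slot 4 (the slot number is illustrative):

```
system> reset -T system:blade[4]
OK
```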
shutdown
Description (JCS management module CLI) Shut down the operating system on the Routing Engine
(blade).
Options -T system:blade[x]—Specify a Routing Engine as the target of the command (the Routing
Engine to be shut down). Replace x with a value of 1 through 12.
Output Fields When you enter this command, you are provided feedback on the status of your request.
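As a sketch, shutting down the operating system on the Routing Engine in slot 5 (the slot number is illustrative):

```
system> shutdown -T system:blade[5]
OK
```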
temps
Description (JCS1200 platform only) Show temperature information (in degrees Fahrenheit) for
components in the JCS chassis. This information is useful for viewing current temperature
values and temperature threshold settings.
Options -T target—Specify the target of the temps command. Valid targets for this command are:
Output Fields Table 27 on page 229 lists the output fields for the temps command. Output fields are
listed in the approximate order in which they appear.
Hysteresis The amount the temperature must decrease below the Warning
threshold before the warning is cleared.
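As a sketch, displaying temperature information for the primary JCS management module (the target is illustrative; output is omitted here):

```
system> temps -T system:mm[1]
```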
volts
Description (JCS1200 platform only) Show voltage information for components in the JCS chassis.
This information is useful for viewing current voltage values and voltage threshold settings.
Options -T target—Specify the target of the volts command. Valid targets for this command are:
Output Fields Table 28 on page 231 lists the output fields for the volts command. Output fields are listed
in the approximate order in which they appear.
Hysteresis The amount the voltage must decrease below the Warning threshold
before the warning is cleared.
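As a sketch, displaying voltage information for the primary JCS management module (the target is illustrative; output is omitted here):

```
system> volts -T system:mm[1]
```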
write
Description (JCS management module CLI) Save the management module configuration to the
chassis of the JCS1200 platform (in the midplane NVRAM). This command is useful for
creating a backup copy of the JCS management module configuration.
Options -config chassis—Specify the location within the chassis where the configuration is saved.
-config file (-i, -l, -p)—Specify the name of the configuration file and location where it is
saved outside the chassis.
-i—Specify the IP address of TFTP server to save the configuration file to.
-l—Specify an optional filename to save the configuration file as (the default filename
is asm.cfg).
-p—Specify the quote-delimited passphrase that is required when saving to a file with
encryption enabled (passphrases have a maximum of 1600 chars).
Output Fields When you enter this command, you are provided feedback on the status of your request.
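The success message shown next follows a command of this form (a sketch; the target prompt is illustrative):

```
system:mm[1]> write -config chassis
```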
OK
Configuration settings were successfully saved to the chassis.
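Saving to a TFTP server instead can be sketched as follows; the IP address and filename match the note that follows:

```
system:mm[1]> write -config file -i 172.17.59.183 -l amm.cfg
```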
When you enter this command, the configuration file will be named “amm.cfg” and saved
on the TFTP server at 172.17.59.183.
As a Root System Domain (RSD) administrator, if you have the appropriate access
privileges, you can log in to a Protected System Domain (PSD) from the Junos OS
command-line interface (CLI) on the RSD. The PSD being accessed must be specified
under the RSD configuration.
In the following example, the RSD administrator logs in to the master Routing Engine on
PSD1:
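The login command can be sketched as follows; the exact command form is an assumption, and the prompt is a placeholder:

```
user@rsd> request routing-engine login psd 1
```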
{master}
Table 29 on page 238 lists tasks that are commonly used to verify information that is
specific to RSDs and PSDs.
• Configured PSDs: Display PSDs configured under the RSD. See “Displaying Configured
PSDs” on page 241.
• Routing Engine information: Display Routing Engine information. See “Displaying
Routing Engine Information” on page 242.
• Ethernet switch statistics: Display information about the receive and transmit packets
traveling between PSDs and the respective RSD. See “Displaying Ethernet Switch
Statistics” on page 244.
On the PSD, you can display information about the Routing Engines, FPCs, and PICs that
are assigned to the PSD and about hardware that is shared with the RSD, such as Switch
Interface Boards (SIBs), the Switch Processor Mezzanine Board (SPMB), Power Entry
Modules (PEMs), and fans.
Action Display hardware information using the show chassis hardware command.
On the RSD The following example provides output from the show chassis hardware command issued
from the RSD.
Hardware inventory:
Item Version Part number Serial number Description
Chassis S19068 T640
Midplane REV 04 710-002726 AX5666 T640 Backplane
FPM GBUS REV 02 710-002901 HE3251 T640 FPM Board
FPM Display REV 02 710-002897 HE7860 FPM Display
CIP REV 05 710-002895 HC0474 T Series CIP
PEM 1 Rev 03 740-002595 MH15367 Power Entry Module
SCG 0 REV 04 710-003423 HF6042 T640 Sonet Clock Gen.
SCG 1 REV 11 710-003423 HW7765 T640 Sonet Clock Gen.
Routing Engine 0 REV 04 740-014082 1000660098 RE-A-2000
Routing Engine 1
CB 0 REV 06 710-007655 WE9377 Control Board (CB-T)
CB 1 REV 06 710-007655 WE9379 Control Board (CB-T)
FPC 0 REV 01 710-013560 JE4851 E2-FPC Type 3
CPU REV 05 710-010169 HX8637 FPC CPU-Enhanced
PIC 0 REV 05 750-007141 HG2427 10x 1GE(LAN), 1000 BASE
On the PSD The following example provides output from the show chassis hardware command issued
from a PSD.
psd1-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis 740-023156 SNJCSJCSAC00 JCS1200 AC Chassis
Routing Engine 0 REV 01 740-023157 SNBLJCSAC006 RE-JCS1200-1x2330
Routing Engine 1 REV 01 740-023157 SNBLJCSAC005 RE-JCS1200-1x2330
Meaning On the RSD, information about all the FPCs in the chassis (which are located in slots 1,
2, and 4 through 7) is displayed.
On the PSD, at the beginning of the output, the rsd-re0 field displays all of the information
pertaining to the components on the T Series router that are assigned to or shared by
the PSD. For example, only information about the FPCs in slots 1, 2, and 4 is displayed.
At the end of the output, the psd1-re0: field provides information about the JCS1200
chassis and the Routing Engines assigned to PSD1.
Action From the RSD, issue the show chassis psd command as shown in the following example:
{master}
Meaning Two PSDs are configured under the RSD: PSD1 and PSD2. Both are online.
Action Display information about the Routing Engines that are part of the RSD or PSD using the
show chassis routing-engine command.
On the RSD When the show chassis routing-engine command is issued on the RSD, the Slot field
indicates the slot on the T Series router that holds the Routing Engine. In the following
example, the master Routing Engine is in slot 0, whereas the backup Routing Engine is
in slot 1.
Background 0 percent
Kernel 0 percent
Interrupt 0 percent
Idle 99 percent
Model RE-A-2000
Serial ID 1000688746
Start time 2008-08-07 18:37:53 PDT
Uptime 12 days, 21 hours, 10 minutes, 57 seconds
Last reboot reason 0x1:power cycle/failure
On the PSD When the show chassis routing-engine command is issued on the PSD, the Physical Slot
field indicates the slot on the JCS platform that holds the Routing Engine. In the following
example, psd2 owns the Routing Engines in slots 5 and 6 in the JCS chassis. The master
Routing Engine is in slot 6, whereas the backup Routing Engine is in slot 5.
Action Display information about receive and transmit packets traveling between all PSDs and
the RSD using the show chassis ethernet-switch command. In the following sample
output:
• INT6 provides the internal connection between the master Routing Engine in slot 6
and the JCS switch module.
• EXT1 provides the connection between each JCS switch module (switch[1] and
switch[2]) and the RSD.
• EXT6 provides the connection between each JCS switch module (switch[1] and
switch[2]) and the management ports on each Routing Engine in the JCS chassis.
Action Using the show interfaces so-fpc/pic/slot or show interfaces ge-fpc/pic/slot command,
display logical interfaces configured on a shared physical interface.
The following fields in the output from the command display information about shared
interfaces:
• Shared-interface—Located under the Physical interface: section of the output, this field
indicates whether the routing domain is the owner or non-owner of the shared interface.
If the routing domain is the RSD, the value is Owner. If the routing domain is a PSD
under the RSD, the value is Non-owner.
• Shared interface—Located under the Logical interface: section of the output, this section
includes these fields:
• shared with—(RSD only) Provides the identity of the PSD that owns the shared
interface; for example, psd3.
• peer interface—(PSD only) Lists the logical tunnel interface that peers with the logical
interface; for example, ut-2/1/0.2.
• tunnel token—Specifies the receive (RX) and transmit (TX) tunnel tokens. For
example, Rx: 5.519, Tx: 13.514.
NOTE: When you issue this command on the PSD for SONET interfaces, the
following information about the physical interface is not provided:
• Media status
• SONET mode
On the RSD (SONET Interface) In the following sample output, rsd1 is the owner of the physical SONET interface so-7/2/0,
and the logical SONET interface so-7/2/0.0 is shared by psd5.
On the PSD (SONET Interface) The following sample output shows that so-0/3/0 is not owned by the PSD. The logical
SONET interface so-0/3/0.0 is configured on a shared physical interface and ut-1/0/0.0
is its peer tunnel interface.
On the RSD (Gigabit Ethernet Interface) In the following sample output, rsd1 is the owner of the physical Gigabit Ethernet interface
ge-7/2/0, and the logical Gigabit Ethernet interface ge-7/2/0.0 is shared by psd5.
On the PSD (Gigabit Ethernet Interface) The following sample output shows that ge-0/3/0 is not owned by the PSD. The logical
Gigabit Ethernet interface ge-0/3/0.0 is configured on a shared physical interface and
ut-1/0/0.0 is its peer tunnel interface.
Encapsulation: ENET2
Shared-interface:
Peer interface: ut-1/0/0.0
Tunnel token: Rx: 14.538
Input packets : 13
Output packets: 7774
Output Filters: filter-safari
Protocol inet, MTU: 1500
Addresses, Flags: Dest-route-down Is-Preferred Is-Primary
Destination: 173.16.254.0/30, Local: 173.16.254.1, Broadcast: 173.16.254.3
Action Using the show interfaces xt-fpc/pic/slot command, display logical interfaces configured
on the cross-connect interface. In the following example, the interface type is
Inter-PSD-tunnel and there is one logical interface (xt-5/0/0.1).
Appendix
• Troubleshooting on page 253
• Glossary on page 255
Troubleshooting
Solution Manually load the Junos OS on the Routing Engine in the JCS chassis using the media
tray.
NOTE: This procedure requires that you issue commands on the JCS
management module CLI and interactively respond to prompts from the
Junos OS through a console port session on the Routing Engine.
CAUTION: When you manually reload the Junos OS, the hard disk and
CompactFlash card are erased.
To manually load the Junos OS on a specific Routing Engine in the JCS chassis:
1. Obtain the Junos OS package from the Juniper Networks support Web site and transfer
the software onto a USB device. For more information, contact your Juniper Networks
support representative.
2. Insert the USB device with the Junos OS into either USB port on the media tray on the
JCS chassis.
3. To select the Routing Engine, either press the CD button on the Routing Engine or
issue the following command using the JCS management module CLI. In this example,
the Routing Engine to be reloaded is in slot 1 on the JCS chassis.
system> mt -b 1
4. To restart the Routing Engine and begin loading the software, issue the following
command:
5. Type y and press Enter when the system issues the following prompt during the console
session on the Routing Engine:
WARNING: The installation will erase the contents of your disks. Do you
wish to continue (y/n)?
6. When the system issues the following prompt on the console port session:
a. Using the JCS management module, issue the following command to deselect the
media tray:
system> mt -b 0
b. On the console port session on the Routing Engine, press Enter to reboot the system.
7. When the reboot completes, log in as root when the system displays the login prompt:
Amnesiac (ttyd0)
Login: root
8. You can now load an existing configuration file onto the Routing Engine or configure
the system with basic system properties.
• From the PSD, you can use the restart chassis-control command (or the restart
jcs-control command) in the Junos OS CLI to restart the Routing Engine. For example:
• From the JCS management module, you can use the reset command in the JCS
management module CLI to restart the Routing Engine. For example:
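As a sketch of the two methods described above (the PSD prompt and blade slot number are illustrative, and the -T target syntax for the reset command is an assumption):

user@psd1> restart chassis-control

system> reset -T blade[5]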
B
blade bay data (BBD) 60-byte text string stored in the JCS management module NVRAM that conveys configuration
information to the Routing Engines (blades) in the JCS chassis.
F
Flexible PIC Concentrator (FPC) Interface concentrator on which PICs are mounted. An FPC is inserted into a slot in a
Juniper Networks router. See also PIC.
I
inter-PSD forwarding A configuration that enables PSDs on the JCS1200 platform to communicate on a peer-to-peer
basis without requiring external links. Inter-PSD forwarding is achieved by using tunnel PICs
that reside on each PSD. The PSDs communicate over logical interfaces configured on the
tunnel PICs.
J
JCS management module (MM) Chassis management hardware and software used to access and configure the
Juniper Control System (JCS) platform.
JCS switch module Hardware device that connects Routing Engines in the Juniper Control System (JCS) chassis
to a Juniper Networks router and controls traffic between the two devices. For redundancy,
the JCS chassis can include two JCS switch modules.
Juniper Control System (JCS) OEM blade server customized to work with Juniper Networks routers. The JCS chassis holds
up to 12 single Routing Engines (or 6 redundant Routing Engine pairs). The JCS1200 chassis
connects to up to three T Series routers, enabling the control plane and forwarding plane of
a single interconnected platform to be scaled independently.
P
PIC Physical Interface Card. A network interface-specific card that can be installed on an FPC in
the router.
Protected System Domain (PSD) One or more Flexible PIC Concentrators (FPCs) on a Juniper Networks router matched with a
Routing Engine (or redundant pair) on the JCS1200 platform to form a secure, virtual hardware
router.
R
Root System Domain (RSD) A pair of redundant Routing Engines on a Juniper Networks router connected to the switch
fabric on the Juniper Control System (JCS) platform. The configuration on the Routing Engines
on a single Juniper Networks router provides the RSD identification and the configuration of
up to eight Protected System Domains (PSDs).
S
shared interface A physical interface that is owned by the Root System Domain (RSD) on which logical interfaces
can be shared by multiple Protected System Domains (PSDs). Each individual logical interface
is assigned to a different PSD. On the PSD, each assigned logical interface is configured and
peered with an uplink tunnel interface (ut-fpc/pic/slot), which transports packets between
the PSD and the shared interface on the RSD.
Indexes
• Index on page 259
• Index of Statements and Commands on page 265
protected-system-domains statement.....................118
    usage guidelines.....................................84
PSD administration view..................................19
psd statement
    usage guidelines.....................................84
PSDs
    basic properties, configuring
        redundant Routing Engines........................89
        single Routing Engine............................87
    benefits.............................................13
    configuring..........................................84
    defined...............................................5
    displaying configured PSDs.....................188, 242
    displaying hardware for.............................189
    displaying information...............................20
    Ethernet switch statistics, displaying..............191
    management tasks.....................................20

R
read command............................................226
redundancy, configuring..................................90
request routing-engine login command....................237
request system snapshot command.....................89, 91
reset command...........................................227
restoring default configuration
    JCS management module................................40
root password, configuring...........................88, 91
Root System Domain See RSD
root-authentication statement.......................88, 91
root-domain-id statement................................119
    usage guidelines.....................................84
route reflection
    defined...............................................8
    examples............................................157
Routing Engines
    blade data, configuring..............................51
    blade name, configuring..............................52
    redundancy, configuring..............................90
RSD
    configuring..........................................84
    defined...............................................4
    management tasks.....................................19
    managing PSDs.......................................238
    operational mode command options....................238
    system information, displaying.......................19
RSD administration view..................................18

S
shared interfaces
    benefits.............................................13
    concepts..............................................6
    configuring
        CoS.............................................104
        firewall filters................................102
        on the PSD.......................................97
        on the RSD.......................................95
        task overview....................................94
    defined...............................................5
    matching RSD and PSD parameters
        DLCIs............................................95
        Frame Relay encapsulation........................95
        logical unit numbers.............................95
        MTU size.........................................95
        VLAN IDs.........................................95
        VLAN tagging.....................................95
    supported PICs........................................7
    supported platforms...................................7
    traffic flow..........................................6
    tunnel PICs...........................................7
shared-interface statement..............................119
    usage guidelines.....................................98
show chassis ethernet-switch command....................244
show chassis ethernet-switch statistics command.........191
show chassis hardware command...........................238
    example.............................................189
show chassis psd command................................242
    example.............................................188
show chassis routing-engine command.....................242
    example.............................................190
show interfaces (Gigabit Ethernet)......................245
show interfaces (SONET/SDH).............................245
shutdown command........................................228
SIBs, shared hardware................................19, 20
snmp command.............................................74
    usage guidelines.....................................44
SNMP community
    configuring on JCS management module.................44
    configuring on JCS switch module.....................47
SNMP monitored alerts, configuring.......................45
SNMP trap alert recipients
    configuring on JCS switch module................44, 47
SNMP traps
    configuring on JCS management module.................43
    configuring on JCS switch module.....................47

T
T-CBs (T Series Control Boards)..........................11
target path for JCS modules..............................26
technical support
    contacting JTAC...................................xxiii
temperature information, displaying.....................206
temps command.......................................206, 229
time zone, configuring...................................43
troubleshooting a Routing Engine........................253

U
unit statement
    shared interfaces
        PSD..............................................98
        RSD..............................................96
user accounts, configuring...............................42
users command............................................78
    JCS management module................................46
    usage guidelines.....................................42

V
virtual LAN identifier See VLAN ID
vital product data, displaying..........................200
VLAN ID..................................................95
A
alertentries command.....................................54
apply-groups statement...................................90

B
backup-router statement.............................88, 90
baydata command..........................................56
boot command............................................210

C
clear command............................................58
clearlog command........................................211
cli command.........................................87, 89
clock command............................................59
commit command...........................................88
commit synchronize command...............................90
config command...........................................60
configure command...................................87, 89
control-plane-bandwidth-percent statement...............113
control-slot-numbers statement..........................114
control-system-id statement.............................115

D
description statement...................................115
displaylog command......................................212
domain-name statement...............................87, 90

E
env command..............................................62
exit command........................................63, 88

F
fpcs statement..........................................116
fuelg command...........................................214

H
health command..........................................216
help command.............................................64
history command.........................................218
host-name statement.................................87, 90

I
ifconfig command
    JCS management module................................65
    JCS switch module....................................67
info command............................................219
interface-shared-with statement.........................116
interfaces statement (management ports).............88, 90

L
list command.......................................205, 222

M
monalerts command........................................69
mt command...............................................71

N
name-server statement...............................88, 91
ntp command..............................................72

P
peer-interface statement.......................98, 99, 117
peer-psd statement......................................117
power command...........................................224
protected-system-domains statement......................118

R
read command............................................226
request routing-engine login command....................237
request system snapshot command.....................89, 91
reset command...........................................227
root-authentication statement.......................88, 91
root-domain-id statement................................119

S
shared-interface statement..............................119
show chassis ethernet-switch command....................244
show chassis ethernet-switch statistics command.........191
show chassis hardware command...........................238
show chassis psd command................................242

T
temps command.......................................206, 229

U
users command............................................78

V
volts command.......................................207, 231

W
write command...........................................233