Monday, April 22, 2013

Cisco VSS


Cisco VSS configuration using primarily Catalyst 6509 switches with Sup 720-10G supervisors.
Here are some useful Cisco docs on the subject; all are in the Documentation area of Cisco's Web site under: Products – LAN Switches – Cisco Catalyst 6500 Virtual Switching System 1440
Catalyst 6500 Release 12.2SXH and Later Software Configuration Guide

http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/vss.html

Cisco Catalyst 6500 Virtual Switching System Deployment Best Practices


http://www.cisco.com/en/US/products/ps9336/products_tech_note09186a0080a7c837.shtml

Replace Supervisor Module in Cisco Catalyst 6500 Virtual Switching System 1440


http://www.cisco.com/en/US/products/ps9336/products_configuration_example09186a0080a64891.shtml

Hardware Requirements/Restrictions
Chassis and Supervisor Requirements
(2) 6500 chassis capable of running VS-S720-10G supervisor engines and WS-X670X-10GE switching modules (6704, 6708, and 6716).
(2) Sup 720s. They must both be the same, so either (2) VS-S720-10G-3C or (2) VS-S720-10G-3CXL.
This is important. The supervisors must completely match, down to the PFCs.
Line Cards
Only 67xx line cards of interface module class CEF720 are supported. If they have a Distributed Feature Card, it must be a DFC3C or DFC3CXL.
Classic, CEF256, and dCEF256 cards are not supported and will remain powered off in a chassis running VSS. Any line card with a DFC3A/3B/3BXL will also remain powered off in a chassis running VSS.
3C or 3CXL
As stated above, both will work. However, if the Sups and line cards are not all the same, there can be issues.
If the Sups are 3C and the line cards are 3CXL, the line cards will operate as 3C.
If the Sups are 3CXL and the line cards are 3C, the system will come up in RPR (Route Processor Redundancy) mode instead of SSO (Stateful Switchover) mode. This can be confirmed with the show redundancy command. To correct this, use the “platform hardware vsl pfc mode pfc3c” command to tell VSS to run the Sups as 3C.








Here is what we’ll be configuring.
The switches running VSS are 6509s with a VS-S720-10G supervisor in slot 5, a WS-X6708-10GE blade in slot 1, and a WS-X6748-GE-TX in slot 2.
For the Virtual Switch Link (VSL), the 10G ports on the supervisor cards will be used.

Later we will add an upstream switch connected to a MultiChassis EtherChannel (MEC) on the VSS pair.


! Switch 1
! Note:
! The switch ID is stored as a variable in
! ROMmon, not in the config
! Once VSS is up you can see this with
switch read switch_num local
! The switch virtual domain number should
! be unique across the network.
! The priority tells which will begin as the
! active supervisor.
! Higher number gets priority
switch virtual domain 9
switch 1
switch 1 priority 110
switch 2 priority 100
exit
! Set Up the VSL link
! port-channel IDs must be unique
! on each chassis to form the VSL
! We will be using 1 and 2.
interface port-channel 1
no shut
desc VSL to switch 2
switch virtual link 1
! The etherchannel mode must be set to on.
! Best practice for etherchannel is normally
! desirable (PAgP) or active (LACP).
! But this is not a normal etherchannel.
! This is a special type of etherchannel and
! requires mode on.
! For our lab, we will use the 10G ports
! on the supervisor.
interface range TenGigabitEthernet 5/4 - 5
no shutdown
channel-group 1 mode on
! NOTE: After VSS is enabled on both
! switches, the switches will need to be
! converted to virtual switch mode
switch convert mode virtual
! You’ll be asked if it is OK to save the
! running config and reboot the switch.
! Answer yes and then be patient.
! It takes a while for the switch to reboot.
! On the Active Switch Only.
! This command gets executed only once.
switch accept mode virtual


! Switch 2
! Note:
! The switch ID is stored as a variable in
! ROMmon, not in the config
! Once VSS is up you can see this with
switch read switch_num local
! The switch virtual domain number should
! be unique across the network.
! The priority tells which will begin as the
! active supervisor.
! Higher number gets priority
switch virtual domain 9
switch 2
switch 1 priority 110
switch 2 priority 100
exit
! Set Up the VSL link
! port-channel IDs must be unique
! on each chassis to form the VSL
! We will be using 1 and 2.
interface port-channel 2
no shut
desc VSL to switch 1
switch virtual link 2
! The etherchannel mode must be set to on.
! Best practice for etherchannel is normally
! desirable (PAgP) or active (LACP).
! But this is not a normal etherchannel.
! This is a special type of etherchannel and
! requires mode on.
! For our lab, we will use the 10G ports
! on the supervisor.
interface range TenGigabitEthernet 5/4 - 5
no shutdown
channel-group 2 mode on
! NOTE: After VSS is enabled on both
! switches, the switches will need to be
! converted to virtual switch mode
switch convert mode virtual
! You’ll be asked if it is OK to save the
! running config and reboot the switch.
! Answer yes and then be patient.
! It takes a while for the switch to reboot.

! You now have a single switch with a single configuration file.

! A console connection to switch 1 will show the active switch. A connection to switch 2 will show it to be the standby switch.
! The two switch configs have been merged into 1. In truth, the config on switch 1 is maintained while anything (other than VSS) from switch 2 is lost.
! For example, had you given both switches a hostname, the hostname of the merged switch would be that of switch 1.

Interfaces are now referenced by switch/module/port. So T1/1 on switch 1 is now T1/1/1. T1/1 on switch 2 is now T2/1/1.


To reference the modules on switch 1 or switch 2, the command is now show module switch 1 or show module switch 2.


show run will show the entire running config.
show run switch 1 will show the part of the config that is specific to switch 1.

show run switch 2 will show the part of the config that is specific to switch 2.

! The following commands can be used to verify the status of the VSS.
! Notice the reference to the switch number – 1 or 2.
show switch virtual
show switch virtual link
show switch virtual role

! The following command is used to synchronize mac-address tables across forwarding
! engines on the 2 switches. If a WS-X670x-10GE line card is present in the VSS system,
! MAC synchronization is turned on automatically. Otherwise, it has to be enabled manually.
! It certainly doesn’t hurt to always include this command.

mac-address-table synchronize
! The following command sets the redundancy mode to SSO.
! However, it should be SSO by default.
redundancy
mode sso
exit
! Do show redundancy to see that it is SSO. If it comes up RPR,
! chances are the Sups are 3CXL and the line cards are 3C.
! If that is the case, you’ll need to execute platform hardware vsl pfc mode pfc3c
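! As a rough sketch of that correction (verify the exact syntax against your
! software release; a reload is generally needed for the change to take effect):
show redundancy
configure terminal
platform hardware vsl pfc mode pfc3c
end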

Configuring a MultiChassis EtherChannel (MEC)




The upstream switch is a 6509 with (2) VS-S720-10G sups, one in slot 5 and one in slot 6. This might be one of a pair of data center distribution switches, with the VSS pair being a server switch. The second distribution switch would also be connected using a standard etherchannel back to a MEC on the VSS pair. And of course the distribution pair would be connected to each other.
Notice the port designations on the VSS pair. They are now in the form of switch/module/port.

Configuring  the VSS pair for connectivity to the upstream switch

Here etherchannel is configured as a layer 3 etherchannel. However, it can just as easily be configured as a layer 2 etherchannel or even an access port etherchannel.
! The layer 3 etherchannel gets configured just as it would on any other switch.
interface port-channel 10
no switchport
ip address 172.16.0.1 255.255.255.252
no shut
! What makes it a MEC is the fact that it includes ports from both chassis of the VSS domain.
interface range TenGigabitEthernet 1/1/1,  TenGigabitEthernet 2/1/1
no switchport
channel-group 10 mode desirable
no shut
exit

Configuring  the upstream switch
interface port-channel 10
no switchport
ip address 172.16.0.2 255.255.255.252
no shut
! Note: The etherchannel on the upstream switch is not a MEC.
! The MEC resides on the VSS pair.
interface range TenGigabitEthernet 5/4 -5
no switchport
channel-group 10 mode desirable
no shut
exit
From here you’ll want to confirm the etherchannel is up and you can ping across it.
show etherchannel summary
ping 172.16.0.1
At this point you can do anything you want from a simulation perspective. Configure loopbacks with addresses and configure a routing protocol. Configure a local DHCP scope and use one of the Gig interfaces on the VS-S720-10G supervisor to connect a computer. If you do configure a routing protocol, you'll want to make certain to include the nsf command. VSS will take advantage of both SSO and NSF.

! For OSPF
router ospf 1
nsf
exit
! If using EIGRP
router eigrp 1
nsf
exit
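
Here is a minimal sketch of that kind of simulation config; the loopback addresses and OSPF process number are arbitrary examples I've chosen, not part of the lab above.

! On the VSS pair
interface loopback 0
ip address 10.255.0.1 255.255.255.255
router ospf 1
nsf
network 10.255.0.1 0.0.0.0 area 0
network 172.16.0.0 0.0.0.3 area 0
exit
! On the upstream switch
interface loopback 0
ip address 10.255.0.2 255.255.255.255
router ospf 1
nsf
network 10.255.0.2 0.0.0.0 area 0
network 172.16.0.0 0.0.0.3 area 0
exit
! Verify with show ip ospf neighbor, then ping the remote loopback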

Through all of this I used 10 Gig interfaces to connect the switches. However, the reason I showed a WS-X6748-GE-TX in slots 1/2 and 2/2 of the VSS pair is that, being a server switch, I'd expect it to be connecting to servers at 1G. A MEC can be built on the 6748 ports and used to connect to servers. If the server supports LACP, the MEC can be configured as active and negotiate the etherchannel with the server. Otherwise you'll have to configure the etherchannel as on.
For a server connecting to a single VLAN, the etherchannel would be configured as an access port. However, for VM servers, it would be reasonable to connect using 802.1Q tagged frames. In that case the MEC could be configured as a trunk (a sketch follows below). All of that is pretty well documented in the Cisco docs I referenced.
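
As a rough sketch only (the port numbers, port-channel ID, and VLAN list here are my own assumptions, not from the lab above), a server-facing trunked MEC on the 6748 ports might look like this:

! One 1G port from each chassis toward the server
interface range GigabitEthernet 1/2/1, GigabitEthernet 2/2/1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 10,20
switchport mode trunk
! Use mode on instead of active if the server NIC team cannot do LACP
channel-group 20 mode active
no shut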








Wednesday, April 10, 2013

Cisco 6500 and 4500


Multilayer Switch Feature Card (MSFC)



The Multilayer Switch Feature Card is the Layer 3 switching engine that sits on the Catalyst Supervisor as a daughter card. The MSFC is an integral part of the Supervisor Engine, providing high-performance multilayer switching and routing intelligence. The route processor (RP) is located on the MSFC itself. Equipped with high-performance processors, the MSFC runs layer 2 protocols on one CPU and layer 3 protocols on a second CPU. These include routing protocol support, layer 2 protocols (Spanning Tree Protocol and VLAN Trunking Protocol, for example), and security services.

The control plane functions in the Cisco Catalyst 6500 are processed by the MSFC and include handling Layer 3 routing protocols, maintaining the routing table, some access control, flow initiation, and other services not implemented in hardware. Performance of the control plane depends on the type and number of processes running on the MSFC. The MSFC3 can support forwarding rates of up to 500 Kpps. The MSFC provides the means to perform Multilayer Switching (MLS) and inter-VLAN routing.

The MSFC builds the Cisco Express Forwarding Information Base (FIB) table in software and then downloads this table to the hardware application-specific integrated circuits (ASICs) on the PFC and DFC (if present) that make the forwarding decisions for IP unicast and multicast traffic.
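
To see both sides of that relationship on a live chassis, a couple of show commands help (illustrative only; output formats vary by software release):

! FIB built in software by the MSFC
show ip cef summary
! FIB as programmed into the PFC/DFC hardware
show mls cef summary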

Role of MSFC

  1. Provide IOS based multi-protocol routing using a variety of routing protocols.
  2. Work with the PFC for implementing layer 3 switching and traditional router-based input/output ACLs. Note that the PFC can implement ACLs without requiring an MSFC.
  3. Provide other software-based features (like NAT, policy routing, and encryption) that are not supported in PFC hardware.


Policy Feature Card (PFC)

The PFC3 is the ASIC-based forwarding engine daughtercard for the Sup720; the DFC3 is the ASIC-based forwarding engine daughtercard for various fabric-enabled linecards (CEF256, CEF720). It contains the ASICs that are used to accelerate Layer 2 and Layer 3 switching, store and process QoS and security ACLs, and maintain NetFlow statistics.

The PFC3/DFC3 generation is built upon a forwarding architecture known as EARL7. Within this generation, there are three different versions - 'A', 'B', and 'BXL' - that are all based on the same fundamental technologies but that each have incremental functionality. 'A' is the standard offering; 'B' is the intermediate option, and 'BXL' is the high-end option.

The PFC contains a Layer 2 and a Layer 3 forwarding engine.

Role of PFC Layer 2 engine

  1. Layer 2 MAC address lookups into the Layer 2 CAM table.
  2. Looking into the packet headers to determine if this switching operation will be a Layer 2 or a Layer 3 operation. If it is going to be a Layer 3 operation, then it will hand off the packet to the Layer 3 engine for further processing.

Role of PFC Layer 3 Engine

  1. NetFlow Statistics collection.
  2. Hardware based forwarding of IPv4, IPv6 and MPLS tagged packets.
  3. QoS mechanism for ACL classification, marking of packets, and policing (rate limiting).
  4. Security mechanism for validating ACL rules against incoming packets.
  5. Maintaining Adjacency entries and statistics.
  6. Maintaining Security ACL counters.

The PFC3 supports hardware based Layer 2 and Layer 3 switching, processing security and QoS ACLs in hardware and the collection of NetFlow statistics.

There are five versions of the Policy Feature Card in use today. The PFC3A , PFC3B, and PFC3BXL are integrated into the Supervisor 720-3A, Supervisor 720-3B and Supervisor 720-3BXL respectively. The PFC3B is the only option for the Supervisor 32, while the PFC3C and PFC3CXL are integrated into the Supervisor 720-10G-3C and Supervisor 720-10G-3CXL.


Distributed Forwarding Card (DFC)

The Catalyst 6500 architecture supports the use of Distributed Forwarding Cards (DFC). The Distributed Forwarding Card is a daughter card, built on the same forwarding engine complex as the PFC, that allows a fabric-enabled Cat6500 linecard to perform distributed switching. DFCs are located in linecards, not in Supervisors.

A DFC is used to hold a local copy of the forwarding tables (constructed by the MSFC) along with security and QoS policies to facilitate local switching on the linecard. The DFC3A is available as an option on CEF256 and CEF720 based linecards. The DFC3B and DFC3BXL were introduced for linecards to operate with the Supervisor 720 equipped with the PFC3B and PFC3BXL. The last generation of DFC, the DFC3C, is available as an option on the CEF720 based linecards and is integrated on the latest generation of linecards, the WS-X6708 and WS-X6716.

It is important to note that there are some operational considerations that can impact the ability of the Catalyst 6500 system to provide specific QoS features. This can happen when you mix different generations of PFCs and DFCs together. The rule is that the system will operate at the lowest common feature denominator.

The primary MSFC3 will calculate and then push down a FIB table (Forwarding Information Base), giving the DFC3x its layer 3 forwarding tables. The MSFC3 will also push down a copy of the QoS policies so that they are also local to the line card. Subsequent to this, local switching decisions can reference the local copy of any QoS policies, providing hardware QoS processing speeds and yielding higher levels of performance through distributed switching.

Benefits of DFC

Performance is the biggest and most obvious reason to implement DFCs. You move from a 30 Mpps centralized forwarding system anywhere up to a 400 Mpps distributed forwarding system. This forwarding performance is for all L2 bridging, L3 routing, ACLs, QoS, and Netflow features, i.e., not just L3.

The performance benefit of a DFC is most applicable when you use the 67xx series modules. This is because these modules have enough ports and bandwidth to generate much more than the 30Mpps centralized forwarding engine has available. A 67xx-series module without a DFC is subject to the same centralized performance characteristics of all other centralized forwarding modules.

DFCs also minimize the impact that a classic module has in a system. Classic modules affect the centralized forwarding performance of a system, limiting the maximum centralized forwarding rate to 15 Mpps. Modules enabled with DFCs have their own forwarding engine and are not subject to this performance degradation. If a classic module is used, the inclusion of a DFC mitigates any performance issues/concerns. Any non-DFC modules are still subject to the 15 Mpps of forwarding available when a classic module is present.

Packet Forwarding

Packet Forwarding is done on the ingress forwarding engine. Therefore, packets coming into the ports on the Sup720-3B will have forwarding done on the PFC3B of the Supervisor. Packets coming into ports of line cards with DFC3s will have the forwarding done on the DFC3. Packets coming into ports of line cards with CFCs will have the forwarding done on the PFC3B of the Supervisor. The MSFC3 only does forwarding in the cases where the PFC3 or DFC3 cannot make the forwarding decision. Some of these cases include when traffic has IP Options set, when ACLs are applied to an interface but the ACL is not programmed into the ACL TCAM for some reason, when packets have TTL expiration, when packets hit an ACE with the "log" keyword, and others.

Centralized Forwarding Card (CFC)

The CFC is a centralized forwarding card for switching modules whose IPv4 routing is performed by the PFC. The CFC does not do local forwarding; the forwarding is done by the PFC in the Supervisor. Because the forwarding is centralized, the PFC performance, FIB entries, and ACL labels are shared among the line cards that use the Supervisor PFC for forwarding. The WS-F6700-CFC is the CFC card used on WS-X67xx Ethernet modules. This daughter card is supported only by the Supervisor Engine 720.

Note: The CFC, or Centralized Forwarding Card, was introduced along with the CEF720 modules. It provides centralized connectivity to the supervisor for look-ups and results. The switch fabric carries the data, but the CFC is responsible for sending the look-up request to the Supervisor and getting the results back.



DFCs are on board the line card itself, and since the DFC does the look-ups, the forwarding decision is made locally on the card. CFC is *centralized* forwarding: as the name implies, the forwarding decision is made centrally on the Sup's PFC and not locally on the line card.
The difference between DFC versions is their forwarding capability.








Misc Stuff

BASIC IP SLA SETUP




The aim here is to send all traffic generated on the policy router (PR) through ISP1 for as long as ISP1 is up. IP SLA will be used as a proactive probe to check whether ISP1 is up or down.

Here is the configuration:

PR(config)#ip sla monitor 1
PR(config-sla-monitor)#type echo protocol ipIcmpEcho 200.1.1.2
PR(config-sla-monitor-echo)#timeout 1000 (in ms)
PR(config-sla-monitor-echo)#frequency 3

PR(config)#ip sla monitor schedule 1 start-time now life forever
PR(config)#track 1 rtr 1 reachability

PR(config)#ip access-list extended ROUTER
PR(config-ext-nacl)#permit ip any any
PR(config)#route-map ROUTER_TRAFFIC permit 10
PR(config-route-map)#match ip address ROUTER
PR(config-route-map)#set ip next-hop verify-availability 200.1.1.2 10 track 1
PR(config-route-map)#set ip next-hop 201.1.1.2

Now apply the route-map to the router's own locally generated traffic:

PR(config)#ip local policy route-map ROUTER_TRAFFIC

Now to test, we ping ISP1 and ISP2; the ping to ISP1 should succeed and the ping to ISP2 should fail. Then shut down the interface to ISP1 and ping ISP2 again; this time the ping to ISP2 should succeed.
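
While testing, a couple of show commands confirm what the probe and the tracking object are doing (a quick sketch; the exact syntax varies slightly between releases that use the older ip sla monitor form and the newer ip sla form):

PR#show ip sla monitor statistics 1
PR#show track 1
PR#show route-map ROUTER_TRAFFIC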





  

Tuesday, April 9, 2013

Virtual Private Networks



SPLIT TUNNEL



Split tunneling is a function where the VPN gateway admin decides which traffic the client pushes through the VPN tunnel while the rest goes straight to the Internet (non-tunneled).  As a simple example, if your corporate network uses the following IP space:

10.1.1.0 /24
10.1.2.0 /24
10.1.3.0 /24

and the VPN gateway admin decides that the VPN client should talk to these networks, but everything else (such as Internet-destined traffic) doesn't need to route through the corporate network via the VPN tunnel, at connect time the VPN client receives a policy which updates the client's routing table to route traffic destined for these three networks through the tunnel.  Any traffic destined for networks outside of this range goes directly out of the physical interface.
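
For illustration only, here is a minimal sketch of how such a policy might be defined on a Cisco ASA; the ACL name and group-policy name are hypothetical, and the networks are the example space above:

! Define which destinations should be tunneled
access-list SPLIT-TUNNEL standard permit 10.1.1.0 255.255.255.0
access-list SPLIT-TUNNEL standard permit 10.1.2.0 255.255.255.0
access-list SPLIT-TUNNEL standard permit 10.1.3.0 255.255.255.0
! Apply the list to the group policy the VPN clients land in
group-policy CORP-GP internal
group-policy CORP-GP attributes
 split-tunnel-policy tunnelspecified
 split-tunnel-network-list value SPLIT-TUNNEL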

There are advantages and disadvantages to split tunneling. An advantage is that Internet connections (or traffic to any non-defined networks) go direct, which increases performance, and there's less overhead since less traffic needs to go through the tunnel and thus through the corporate network. The disadvantage is that the IT admin can't perform packet inspection, scrub web traffic, or filter potentially harmful traffic (assuming the devices which perform those functions exist on the corporate network). Another disadvantage is that you can't force users to route through an internal proxy which might authenticate users to get to the Internet, thus controlling the user's Internet experience through a defined company security policy.

Wireless Communication

802.11n Wireless Standard



Spatial multiplexing requires multiple transmit and receive antennas working in parallel (MIMO). Spatial multiplexing is a mandatory component of the 802.11n standard and is the primary means by which 802.11n delivers its increased throughput.
Spatial multiplexing involves multiple antennas separately sending different flows of separately encoded signals over the air at the same time. By multiplexing the signals over a wireless path, more data gets through. Simplistically, N transmitting antennas send to N receiving antennas, and each receiver detects a unique stream, resulting in an N-fold increase in throughput.
The numbers in the “NxN” jargon represent, respectively, the number of transmitting (Tx) antennas and the number of receiving (Rx) antennas involved in the MIMO-based, spatially multiplexed transmission.
To date, there are systems on the market using 2x2 MIMO supporting 2 spatial streams, as well as those using 2x3 MIMO, which also support two spatial streams. How does having a different N at the Tx and Rx ends affect the transmission?
When you have more receive antennas, you get what is called "combining gain": the receiver has more copies of the same signal, which raises the signal-to-noise ratio and strengthens the received signal.
Simulated and real-world performance tests show about a 20% increase in average performance when moving from a 2x2, 2-stream system to a 2x3, 2-stream system in the uplink direction in a 20MHz channel. Average uplink throughput rates when transmitting across 40MHz channels (two bonded 20MHz channels, which the 11n standard allows) can be up to 40% greater in the 2x3 configuration than in the 2x2 configuration over distances of 30 to 40 feet, and 20% greater in the 60- to 100-foot range.

Vlan Trunking Protocol

Vlan Trunking Protocols - Message Types



Once the VLAN Trunking Protocol (VTP) is configured on the switches, the switches start advertising VTP information between them on their trunk ports. The main information the switches advertise is the management domain name, the configuration revision number, and the configured VLANs. VTP advertisements are sent as multicast frames, and all neighbor devices receive the frames.
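
As a minimal configuration sketch (the domain name, password, and modes here are arbitrary examples, not from any particular network):

! On the switch that will originate VLAN changes
vtp domain LABDOMAIN
vtp mode server
vtp password cisco123
! On the other switches in the same domain
vtp domain LABDOMAIN
vtp mode client
vtp password cisco123
! Verify the domain name and configuration revision
show vtp status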

Three types of VLAN Trunking Protocol (VTP) advertisement messages are


1 Client advertisement request: A client advertisement request is a VTP message which a client generates to request VLAN information from a server. Servers respond with both summary and subset advertisements.


A switch needs a VTP advertisement request in these situations:
  • The switch has been reset.
  • The VTP domain name has been changed.
  • The switch has received a VTP summary advertisement with a higher configuration revision than its own.
Upon receipt of an advertisement request, a VTP device sends a summary advertisement. One or more subset advertisements follow the summary advertisement.

2 Summary advertisement: Summary advertisements are sent out every 300 seconds (5 minutes) by default, or when a configuration change occurs, and carry the summarized VLAN information.


By default, Catalyst switches issue summary advertisements in five-minute increments. Summary advertisements inform adjacent Catalysts of the current VTP domain name and the configuration revision number.
When the switch receives a summary advertisement packet, the switch compares the VTP domain name to its own VTP domain name. If the name is different, the switch simply ignores the packet. If the name is the same, the switch then compares the configuration revision to its own revision. If its own configuration revision is higher or equal, the packet is ignored. If it is lower, an advertisement request is sent.

3 Subset advertisement: Subset advertisements are sent when a configuration change takes place on the server switch. Subset advertisements are VLAN-specific and contain details about each VLAN.

When you add, delete, or change a VLAN in a Catalyst, the server Catalyst where the changes are made increments the configuration revision and issues a summary advertisement. One or several subset advertisements follow the summary advertisement. A subset advertisement contains a list of VLAN information. If there are several VLANs, more than one subset advertisement can be required in order to advertise all the VLANs.