2.0 Network Implementations

2.3 Given a scenario, configure and deploy common Ethernet switching features

Last edited 388 days ago by Makiel [Muh-Keel]

What is Layer-2 Ethernet Switching?

Layer-2 switching is the process of using the hardware addresses of devices on a LAN to segment a network. This is important because the very purpose of a switch is to separate collision domains: network segments where two or more devices share the same bandwidth.
Switches truly have changed the way networks are designed and implemented. If a pure switched design is properly implemented, it will result in a clean, cost-effective, and resilient internetwork.
Routing protocols like RIP, which you learned about, employ processes for preventing network loops from occurring at the Network layer. This is all good, but if you have redundant physical links between your switches, routing protocols won't do a thing to stop loops from occurring at the Data Link layer.
That's exactly the reason Spanning Tree Protocol was developed: to put a stop to loops taking place within a layer 2 switched network.
Layer 2 switches and bridges are faster than routers because they don't take up time looking at the Network layer header information. Instead, they look at the frame's hardware addresses before deciding to forward, flood, or drop the frame.
Layer 2 switching provides the following benefits:
Hardware-based bridging (ASIC)
Wire speed
Low latency
Low cost
Switches create private, dedicated collision domains and provide independent bandwidth on each port, unlike hubs.
The image below shows five hosts connected to a switch, all running 100 Mbps full duplex to the server. Unlike with a hub, each host gets full-duplex, dedicated 100 Mbps communication to the server. Common switch ports today can pass traffic at 10/100/1000 Mbps depending on the connected device; by default, switch ports are set to auto-configure.
Bridging vs. LAN Switching
Bridges are software based, whereas switches are hardware based because they use ASIC chips to help make filtering decisions.
A switch can be viewed as a multiport bridge.
There can be only one spanning-tree instance per bridge, whereas switches can have many. (I'm going to tell you all about spanning trees in a bit.)
Switches have a higher number of ports than most bridges.
Both bridges and switches forward layer 2 broadcasts.
Bridges and switches learn MAC addresses by examining the source address of each frame received.
Both bridges and switches make forwarding decisions based on layer 2 addresses.

3 Switch Functions at Layer-2

There are three distinct functions of layer 2 switching—you need to know these! They are as follows:
Address Learning
Layer 2 switches and bridges are capable of Address Learning; that is, they remember the source hardware address of each frame received on an interface and enter this information into a MAC database known as a forward/filter table.
When a device transmits and an interface receives a frame, the switch places the frame's source address in the MAC forward/filter table, which allows it to remember the interface on which the sending device is located. The switch then has no choice but to flood the network with this frame out of every port except the source port because it has no idea where the destination device is actually located.
If a device answers this flooded frame and sends a frame back, then the switch will take the source address from that frame and place that MAC address in its database as well, thereby associating the newly discovered address with the interface that received the frame. Because the switch now has both of the relevant MAC addresses in its filtering table, the two devices can make a point-to-point connection. The switch doesn't need to flood the frame as it did the first time because now the frames can and will be forwarded only between the two devices recorded in the table
This is exactly the thing that makes layer 2 switches better than hubs, because in a hub network, all frames are forwarded out all ports every time—no matter what. This is because hubs just aren't equipped to collect, store, and draw upon data in a table as a switch is.
Empty MAC Forward/Filter Table
MAC Forward/Filter Table being Filled with hardware addresses
In this figure, you can see four hosts attached to a switch. When the switch is powered on, it has nothing in its MAC address forward/filter table (just as in Figure 11.5). But when the hosts start communicating, the switch places the source hardware address of each frame in the table along with the port that the frame's address corresponds to.
Let me give you a step-by-step example of how a forward/filter table becomes populated:
Host A sends a frame to Host B. Host A's MAC address is 0000.8c01.000A, and Host B's MAC address is 0000.8c01.000B.
The switch receives the frame on the E0/0 interface and places the source address in the MAC address table, associating it with the port it came in on.
Because the destination address is not in the MAC database, the frame is forwarded (flooded) out of all interfaces—except the source port. Host B receives the frame and responds to Host A. The switch receives this frame on interface E0/1 and places the source hardware address in the MAC database, associating it with the port it came in on.
Host A and Host B can now make a point-to-point connection, and only the two devices will receive the frames. Hosts C and D will not see the frames, nor are their MAC addresses found in the database, because they haven't yet sent a frame to the switch.
Oh, by the way, it's important to know that if Host A and Host B don't communicate to the switch again within a certain amount of time, the switch will flush their entries from the database to keep it as current as possible.
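The step-by-step example above can be sketched in a few lines of Python. This is a toy model, not switch firmware; the MAC addresses and interface names follow the Host A / Host B example, and aging (the flush-after-a-timeout behavior just mentioned) is left out to keep it short:

```python
# Minimal sketch of how a switch populates its MAC forward/filter table.

mac_table = {}  # MAC address -> switch port (empty at power-on)

def receive_frame(src_mac, dst_mac, in_port, all_ports):
    """Learn the source, then decide where the frame goes."""
    # Step 1: address learning -- record which port the sender lives on.
    mac_table[src_mac] = in_port
    # Step 2: if the destination is known, forward out that one port only.
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]
    # Step 3: unknown destination -- flood out every port except the source.
    return [p for p in all_ports if p != in_port]

ports = ["E0/0", "E0/1", "E0/2", "E0/3"]

# Host A (on E0/0) sends to Host B: B is unknown, so the frame is flooded.
out = receive_frame("0000.8c01.000A", "0000.8c01.000B", "E0/0", ports)
# Host B replies from E0/1: A is now known, so the frame goes to E0/0 only.
reply = receive_frame("0000.8c01.000B", "0000.8c01.000A", "E0/1", ports)
```

After these two frames, the table holds both hosts and the switch never needs to flood traffic between them again.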

Forward/filter decisions
When a frame arrives at a switch interface (comes from a host), the destination hardware address is compared to the forward/filter MAC database and the switch makes a forward/filter decision. In other words, if the destination hardware address is known (listed in the database), the frame is only sent out the specified exit interface or port.
The switch will not transmit the frame out any interface except the destination interface (If it’s already known, that is). Not transmitting the frame to unnecessary ports will preserve bandwidth on the other network segments and is called frame filtering.
But if the destination hardware address isn't listed in the MAC database, then the frame is flooded out of all active interfaces except the interface on which the frame was received. Whenever the desired device answers the flooded frame, the MAC database is updated with the device's location—its particular interface or port (Ex. fa/07).
In the figure above, you can see Host A sending a data frame to Host D. What will the switch do when it receives the frame from Host A?
The switch will save Host A's source hardware address to the forward/filter table, since Host A's hardware info has not yet been added. Then the switch will check the table to see if it knows the destination hardware address of Host D.
Looking at the table, the switch does in fact know Host D's destination hardware address and destination port number, so it's going to send the data frame directly from Host A to Host D.
In a different but still very plausible scenario where the switch did not know the destination hardware address, it would send the data frame out every port on the switch except for Host A's port (which would be Fa0/3).
At that point, Host D would respond, the switch would add its hardware address and port to its forward/filter table, and from then on the two hosts would have point-to-point communication.
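Here is the same forward/filter decision as a sketch, assuming a table that already knows Host D but not yet Host A. The MAC addresses and port numbers are invented for illustration:

```python
# Forward/filter decision: filter to one port if the destination is
# known, flood out all active ports except the source if it isn't.

table = {"000D.000D.000D": "Fa0/4"}  # Host D already learned

def switch_frame(src_mac, dst_mac, in_port, active_ports):
    table.setdefault(src_mac, in_port)        # learn Host A's location
    if dst_mac in table:
        return ("forward", [table[dst_mac]])  # known: frame filtering
    flood = [p for p in active_ports if p != in_port]
    return ("flood", flood)                   # unknown: flood the frame

action, out_ports = switch_frame("000A.000A.000A", "000D.000D.000D",
                                 "Fa0/3", ["Fa0/1", "Fa0/2", "Fa0/3", "Fa0/4"])
```

Because Host D is already in the table, the frame is filtered straight to Fa0/4, and Host A gets learned on Fa0/3 as a side effect.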

Another Example of Forward/Filter Table
Loop avoidance
Redundant links between switches can be a wise thing to implement because they help prevent complete network failures in the event that one link stops working.
But it seems like there's always a downside: even though redundant links can be extremely helpful, they often cause more problems than they solve. This is because frames can be flooded down all redundant links simultaneously, creating network loops as well as other evils. The endless flooding these loops produce is called a Broadcast Storm.
Here are a few of the other problems you can be faced with:
If no loop avoidance schemes are put in place, the switches will flood broadcasts endlessly throughout the internetwork. This is sometimes referred to as a Broadcast Storm. A device can receive multiple copies of the same frame because that frame can arrive from different segments at the same time.
The server in the figure sends a unicast frame to another device connected to Segment 1. Because it's a unicast frame, Switch A receives and forwards the frame, and Switch B provides the same service—it forwards the unicast. This is bad because it means that the destination device on Segment 1 receives that unicast frame twice, causing additional overhead on the network.
One of the nastiest things that can happen is having multiple loops propagating throughout a network.
You may have thought of this one: The MAC address filter table could be totally confused about a device's location because the switch can receive the frame from more than one link. Worse, the confused switch could get so caught up in constantly updating the MAC filter table from the constant barrage of network loops and source hardware address locations that it might fail to forward a frame! This is called thrashing the MAC table.

“Always remember: Switches break up collision domains. Routers break up broadcast domains.”

Spanning Tree Protocol (STP) is a layer-2 protocol that is used to maintain a loop-free switched network. It achieves this feat by vigilantly monitoring the network to find all links and making sure that no loops occur by shutting down any redundant ones. STP uses the Spanning Tree Algorithm (STA) to first create a topology database and then search out and destroy redundant links.
Understand that the network in Figure 11.11 would actually sort of work, albeit extremely slowly. This clearly demonstrates the danger of switching loops. And to make matters worse, it can be super hard to find this problem once it starts!
Spanning Tree Port States
The ports on a bridge or switch running STP can transition through five different states:
Blocking A blocked port won't forward frames; it just listens to BPDUs and will drop all other frames. The purpose of the blocking state is to prevent the use of looped paths. All ports are in a blocking state by default when the switch is powered up.
Listening The port listens to BPDUs to make sure no loops occur on the network before passing data frames. A port in listening state prepares to forward data frames without populating the MAC address table.
Learning The switch port listens to BPDUs and learns all the paths in the switched network. A port in learning state populates the MAC address table but doesn't forward data frames. Forward delay is the time it takes to transition a port from listening to learning mode. It's set to 15 seconds by default.
Forwarding The port sends and receives all data frames on the bridged port. If the port is still a designated or root port at the end of the learning state, it enters the forwarding state.
Disabled A port in the disabled state (administratively) does not participate in the frame forwarding or STP. A port in the disabled state is virtually nonoperational.
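The five states above follow a fixed forward path (blocking → listening → learning → forwarding), with the 15-second forward delay spent in each of the listening and learning states. This sketch models just that path; it ignores the separate max-age timer that governs how long a port actually sits in blocking, so treat it as a simplification:

```python
# Toy model of STP port states and the default forward-delay timer.

FORWARD_DELAY = 15  # seconds, the default

NEXT_STATE = {
    "blocking":  "listening",   # port is selected to participate
    "listening": "learning",    # after one forward-delay interval
    "learning":  "forwarding",  # after another forward-delay interval
}

def time_to_forwarding(state):
    """Seconds of forward delay until a port in `state` passes frames."""
    delay = 0
    while state != "forwarding":
        if state == "disabled":
            raise ValueError("a disabled port never forwards")
        if state in ("listening", "learning"):
            delay += FORWARD_DELAY
        state = NEXT_STATE[state]
    return delay
```

This is why a freshly connected port on a classic STP switch takes roughly 30 seconds of forward delay before it starts passing traffic.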

Virtualization In The Networking World

In a virtual environment such as you might find in many of today's data centers, not only are virtual servers used in the place of physical servers, but virtual switches (software based) are used to provide connectivity between the virtual systems. These virtual servers reside on physical devices that are called hosts. The virtual switches can be connected to a physical switch to enable access to the virtual servers from the outside world.
How do we break up broadcast domains in a pure switched internetwork? By creating a Virtual Local Area Network (VLAN).
A VLAN is a logical grouping of network users and resources connected to administratively defined ports on a switch. When you create VLANs, you gain the ability to create smaller broadcast domains within a layer 2 switched internetwork by assigning the various ports on the switch to different subnetworks.
A VLAN is treated like its own subnet or broadcast domain, meaning that frames broadcast onto the network are only switched between the ports logically grouped within the same VLAN. It allows you to logically group hosts to resources regardless of their location.
Virtual means it's not tied down to physical limitations or proximity. You can assign any host on your network to a VLAN regardless of the location of the actual physical host machine (computer), and it will have access to that VLAN's resources.
VLAN 1 is an administrative VLAN, and even though it can be used for a workgroup, Cisco recommends that you use it for administrative purposes only.
Example of a Normal LAN

Examples of a VLAN
Communication between VLANs still must go through a layer 3 device. Whether it's a VLAN or a regular LAN, traffic still has to go through a router to reach hosts on a different VLAN/LAN because, by default, each one is its own broadcast domain.
A voice VLAN lets an access port carry both data and voice VLANs. This allows you to connect both a phone and a PC to one switch port but still have each device in a separate VLAN. If you are configuring voice VLANs, you'll want to configure quality of service (QoS) on the switch ports to give voice traffic higher precedence than data traffic, which improves sound quality.
Static VLANs are when a network admin or network engineer manually configures each switch port. Static VLAN configuration is pretty easy to set up and supervise, and it works really well in a networking environment where any user movement within the network needs to be controlled.
Dynamic VLANs determine a host's VLAN assignment automatically. Using intelligent management software, you can base VLAN assignments on hardware (MAC) addresses, protocols, or even applications.
For example, let's say MAC addresses have been entered into a centralized VLAN management application and you hook up a new host. If you attach it to an unassigned switch port, the VLAN management database can look up the hardware address and both assign and configure the switch port into the correct VLAN. Needless to say, this makes management and configuration much easier because if a user moves, the switch will simply assign them to the correct VLAN automatically.
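The centralized lookup described above can be sketched as a simple table from MAC address to VLAN. The MACs, VLAN numbers, and the fallback-to-VLAN-1 choice here are all invented for illustration:

```python
# Sketch of a dynamic VLAN lookup: a centralized database maps known
# hardware addresses to VLANs so the switch port is configured
# automatically when a host is plugged in.

vlan_db = {
    "0000.8c01.000A": 10,   # e.g. an engineering host
    "0000.8c01.000B": 20,   # e.g. a sales host
}

DEFAULT_VLAN = 1  # used here as the fallback for unknown hosts

def assign_vlan(mac):
    """Return the VLAN a port should be placed in for this host."""
    return vlan_db.get(mac, DEFAULT_VLAN)
```

If a user moves to a different switch port, the lookup result is the same, so they land in the correct VLAN automatically.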

Port Configurations

Access Ports belong to and carry the traffic of only one VLAN. Anything arriving on an access port is simply assumed to belong to the VLAN assigned to that port.
You can only create a switch port to be either an access port or a trunk port—not both. So you've got to choose one or the other, and know that if you make it an access port, that port can be assigned to one VLAN only.
Trunk Ports are 100 Mbps or 1000 Mbps point-to-point links between two switches, between a switch and a router, or even between a switch and a server, and they carry the traffic of multiple VLANs, from 1 to 4,094 at a time.
The term trunk port was inspired by the telephone system trunks that carry multiple telephone conversations at a time. So it follows that trunk ports can similarly carry multiple VLANs at a time.
Trunking can be a real advantage because with it, you get to make a single port part of a whole bunch of different VLANs at the same time. Another benefit of trunking comes into play when you're connecting switches.
Check out Figure 11.17. It shows how the different links are used in a switched network. All hosts connected to the switches can communicate with all ports in their VLAN because of the trunk link between them. Remember, if we used an access link between the switches, only one VLAN could communicate between the switches. As you can see, the hosts use access links to connect to the switch, so they communicate in one VLAN only, but the switches use a trunk link to carry all the VLANs between each other!
Example of Trunking
VLAN Identification Methods are what switches use to keep track of all those frames as they traverse the switch fabric, the fabric through which all of our hosts connect together in a switched network topology. It's how switches identify which frames belong to which VLANs.
“The basic purpose of ISL and IEEE 802.1Q frame-tagging methods is to provide inter-switch VLAN communication.”
Port Tagging/IEEE 802.1Q works by inserting a field into the frame to identify the VLAN. This is one of the aspects of 802.1Q that makes it your only option if you want to trunk between a Cisco switched link and another brand of switch. In a mixed environment, you've just got to use 802.1Q for the trunk to work!
If you want to communicate to different VLANs on a different switch, you’re going to have to make one of the ports on your switch a Trunk Port. Once you have a trunk port in place, you can then use this port as a communication vessel between VLANs on different switches.
Ex. With a trunk port, you allow the RED, BLUE, and GREEN VLANs to travel across one switch port and onto a different switch. In order for the switch to know what information belongs to which VLAN, a VLAN identification method is used at the trunk port (either ISL or IEEE 802.1Q).
The VLANs' frames are identified by having either ISL or IEEE 802.1Q "tag" or "label" each frame, for example as a "Red VLAN frame" or a "Blue VLAN frame". They tag frames with VLAN info so that whenever it's time for those frames to traverse the trunk port, the switches know where they're going.
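Concretely, an 802.1Q tag is a 4-byte field (the 0x8100 tag protocol identifier, then 3 bits of priority and a 12-bit VLAN ID) inserted right after the source MAC address. A rough sketch of that tagging, using a dummy frame rather than real captured bytes:

```python
# Sketch of 802.1Q tagging: insert a 4-byte tag after the two MAC
# addresses (bytes 0-11) of an untagged Ethernet frame.
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Return a copy of `frame` with an 802.1Q tag inserted."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN IDs run from 1 to 4094")
    tci = (priority << 13) | vlan_id          # PCP (3 bits) + DEI + VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)     # TPID, then tag control info
    return frame[:12] + tag + frame[12:]      # dst MAC + src MAC | tag | rest

# Dummy frame: 12 bytes of MAC addresses, an IPv4 EtherType, a payload.
untagged = bytes(12) + b"\x08\x00" + b"payload"
tagged = tag_frame(untagged, vlan_id=100)
```

The receiving switch reads the 12-bit VLAN ID out of the tag, delivers the frame only to ports in that VLAN, and strips the tag before sending it out an access port.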

What is Port Aggregation?
Port Aggregation, or Port Channeling, between two devices allows them to treat multiple Ethernet links as if they were a single link. Combining, or bundling, network connections for LAN aggregation lets you increase network bandwidth and provides redundancy between your router and a client device if one link fails.
LACP 802.3ad (Link Aggregation Control Protocol) is a non-proprietary port aggregation method that works between multivendor networks.
All links in the bundle must have matching parameters (speed, duplex, VLAN info). The bundle is then added to STP as a single bridge port. At this point, LACP's job is to send packets every 30 seconds to manage the link: checking consistency, handling link additions and modifications, and detecting failures.
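The "all parameters must match" rule can be sketched as a pre-bundle check. The field names and example values here are invented for illustration, not an LACP implementation:

```python
# Sketch: before forming a bundle, verify every candidate link shares
# the same speed, duplex, and VLAN configuration.

def can_bundle(links):
    """True only if all links have matching speed, duplex, and VLANs."""
    first = links[0]
    return all(
        (l["speed"], l["duplex"], l["vlans"]) ==
        (first["speed"], first["duplex"], first["vlans"])
        for l in links
    )

a = {"speed": 1000, "duplex": "full", "vlans": (10, 20)}
b = {"speed": 1000, "duplex": "full", "vlans": (10, 20)}
c = {"speed": 100,  "duplex": "full", "vlans": (10, 20)}  # speed mismatch
```

A real switch would refuse to add link c to the channel (or suspend it) because its speed doesn't match the rest of the bundle.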

Network Qualities

Explain Duplex and Speed?
There are generally three duplex settings on each port of a network switch: full, half, and auto.
In order for two devices to connect effectively, the duplex setting has to match on both sides of the connection. If one side of a connection is set to full and the other is set to half, they're mismatched. More insidiously, if both sides are set to auto but the devices are different, you can also end up with a mismatch because the device on one side defaults to full and the other one defaults to half.
Duplex mismatches can cause lots of network and interface errors, and even the lack of a network connection. If you have all switches and no hubs, feel free to set all interfaces to full-duplex, but if you've got hubs in the mix, you have shared networks, so you're forced to keep the settings at half-duplex.
Leaving the speed and duplex setting to auto (the default on both switches and hosts) is the recommended way to go.
Speed and duplex:
• Speed: 10 / 100 / 1,000 / 10 Gigabit
• Duplex: half/full
• Automatic and manual configuration
• Needs to match on both sides of the link
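The hidden auto-negotiation mismatch described above can be sketched like this. The per-device "default duplex" fallback is a deliberate simplification of real autonegotiation:

```python
# Sketch: resolve each side's duplex setting, then compare. A side is
# (setting, device_default); "auto" falls back to the device default.

def effective_duplex(setting, device_default):
    return device_default if setting == "auto" else setting

def duplex_matches(side_a, side_b):
    """True if both sides end up running the same duplex mode."""
    return effective_duplex(*side_a) == effective_duplex(*side_b)

# Explicit full on both sides: fine.
ok = duplex_matches(("full", "full"), ("full", "half"))
# Both set to auto, but the two devices default differently: mismatch.
bad = duplex_matches(("auto", "full"), ("auto", "half"))
```

The second case is the elusive one: both configs say "auto", yet one end runs full and the other half, producing errors that look like a bad cable.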

Explain Flow Control?
Flow control provides a means for the receiver to govern the amount of data sent by the sender. It prevents a sending host on one side of the connection from overflowing the buffers in the receiving host—an event that can result in lost data.
Buffers are used when a machine receives a flood of datagrams too quickly for it to process. It stores them in this section of memory until it can process them.
Reliable data transport employs a connection-oriented communications session between systems, and the protocols involved ensure that the following will be achieved:
The segments delivered are acknowledged back to the sender upon their reception.
Any segments not acknowledged are retransmitted.
Segments are sequenced back into their proper order upon arrival at their destination.
A manageable data flow is maintained in order to avoid congestion, overloading, and data loss.
During fundamental, reliable, connection-oriented data transfer, datagrams are delivered to the receiving host in exactly the same sequence they're transmitted. So, if any data segments are lost, duplicated, or damaged along the way, a failure notice is transmitted. This error is corrected by making sure the receiving host acknowledges it has received each and every data segment, and in the correct order.
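The receiver-side rules above (sequence, acknowledge, request retransmission) can be sketched in miniature. This is a toy reassembly routine, not a real transport protocol:

```python
# Sketch: put received segments back in sequence order, drop duplicates,
# and report missing sequence numbers so they can be retransmitted.

def reassemble(segments, expected_count):
    """Return (in-order data, list of missing sequence numbers)."""
    received = dict(segments)   # duplicate seq numbers collapse to one copy
    missing = [s for s in range(expected_count) if s not in received]
    ordered = [received[s] for s in sorted(received)]
    return ordered, missing

# Segments arrive out of order and segment 2 is lost in transit.
data, missing = reassemble([(1, "b"), (0, "a"), (3, "d")], expected_count=4)
```

The receiver would acknowledge segments 0, 1, and 3 and ask the sender to retransmit segment 2 before delivering the stream to the application.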

Switch Port Protection

Explain Port Mirroring?
Port-Mirroring allows you to place a port in span mode so that every frame from the source device (Host A) is captured by both the desired destination (Host B) and the sniffer, as shown in Figure 11.28. This would also be a helpful option to take advantage of if you were connecting an IDS or IPS to the switch as well. It can be used to monitor any traffic going to the switch before it actually gets to its destination device.
Be careful when using port mirroring because it can cause a lot of overhead on the switch and possibly crash your network. Because of this, it's a really good idea to use this feature at strategic times, and only for short periods if possible.

Explain Port Security?
Port security on a switch port restricts port access by MAC address. By using port security, you can limit the number of MAC addresses that can be assigned dynamically to a port, set static MAC addresses, and set penalties for users who abuse the policy!
It also prevents someone from simply plugging a host into one of our switch ports—or worse, adding a hub, switch, or access point into the Ethernet wall port in their office.
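The MAC-limiting behavior can be sketched as a simple per-port check. The limit of 2 and the idea of flagging a violation (rather than, say, shutting the port down) are illustrative choices, not a vendor's exact defaults:

```python
# Sketch of port security's MAC limit: learn addresses dynamically up
# to a maximum per port, then treat any new address as a violation.

MAX_MACS = 2  # illustrative limit of secure MACs per port

def check_port(port_macs, new_mac):
    """Return 'allow' or 'violation' for a frame from new_mac."""
    if new_mac in port_macs:
        return "allow"             # already a known secure address
    if len(port_macs) < MAX_MACS:
        port_macs.add(new_mac)     # learn it as a secure address
        return "allow"
    return "violation"             # e.g. someone plugged in a hub

macs = set()
first = check_port(macs, "aaaa.bbbb.0001")
second = check_port(macs, "aaaa.bbbb.0002")
third = check_port(macs, "aaaa.bbbb.0003")
```

The third address trips the violation, which is exactly what catches an unauthorized hub, switch, or access point hanging off a wall port.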

Other Network Topics

Explain Jumbo Frames?
We have all been taught that bigger is always better. And while this can be true, it can also create some problems for us. Jumbo Frames are Ethernet frames bigger than 1,500 bytes! The Ethernet specification clearly calls out that the maximum transmission unit (MTU) of an Ethernet frame is 1,500 bytes. Well, with all the enhancements and the natural evolution of Ethernet, frames have slowly grown all the way up to 9,000 bytes!
The problem arises when a Jumbo frame larger than 1500 bytes tries to cross an interface that is set at the standard MTU of 1500. This clearly does not work!
There are two things we can do here: either configure the sending device down to the standard MTU size or change the configuration on your network interfaces to accept a larger MTU. If you have giant or jumbo frames on your network, you need to make sure that all devices in the data path are configured to accept the larger size.
Failure to do so can cause strain and congestion in your network.
• Ethernet frames with more than 1,500 bytes of payload
– Up to 9,216 bytes (9,000 is the accepted norm)
• Increases transfer efficiency
– Larger per-packet size
– Fewer packets to switch/route
• Ethernet devices must support jumbo frames
– All switches and interface cards in the path
– Not all devices are compatible with others
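The "every device in the data path" rule boils down to this: the usable frame size is the smallest MTU along the path. A quick sketch with made-up hop values:

```python
# Sketch: a frame only survives the trip if it fits the smallest MTU
# of every hop in the data path.

def path_mtu(mtus):
    """Largest frame that can cross every hop without being dropped."""
    return min(mtus)

def frame_fits(frame_size, mtus):
    return frame_size <= path_mtu(mtus)

# A 9,000-byte jumbo frame survives an all-jumbo path...
ok = frame_fits(9000, [9216, 9000, 9216])
# ...but dies at a single interface still set to the standard 1,500.
blocked = frame_fits(9000, [9216, 1500, 9216])
```

One forgotten interface at 1,500 bytes is enough to break jumbo-frame traffic for the whole path, which is why the configuration has to be end to end.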

What is MDI-X?
MDI-X automatically detects the cable type required (straight-through or crossover) and adjusts the interface as needed.

What is PoE?
Power over Ethernet (PoE and PoE+) technology describes a system for transmitting electrical power, along with data, to remote devices over standard twisted-pair cable in an Ethernet network. This technology is useful for powering IP telephones (Voice over IP, or VoIP), wireless LAN access points, network cameras, remote network switches, embedded computers, and other appliances—situations where it would be inconvenient, expensive, and possibly not even feasible to supply power separately.
