Modern data center networking divides the network into three sections called tiers or layers. Each layer has a specific function in the data center for various connectivity types.
Image of Three-Tier architecture
What is the Access/Edge Layer?
Starting at the outside of the network and working toward the middle is the access layer, which is also referred to as the edge. This is where all of the devices and users in the data center attach to the network. This could include servers, Ethernet-based storage devices, printers, PCs, mobile devices, etc.
The access layer consists of a large number of switches. Access switches feature high-speed uplink ports, from 1G all the way up to 100G, to connect to the rest of the network at the distribution layer.
In today’s data centers, much of the traffic flows between servers, sometimes called East-West traffic. Since this data often stays inside the data center and is server to server, the access switches provide high-speed, low-latency local interconnections between the servers.
Analogy: This is the suburbs of the City, where people live.
What is the Distribution/Aggregation Layer?
The task of the distribution layer is to provide redundant interconnections for all of the access switches, connect the access layer to the core switches, and implement security, access control, and Layer 3 routing.
The Distribution layer provides network redundancy; if one switch should fail, the other can assume the traffic load without any downtime.
The Distribution layer also acts as the midpoint, connecting the access layer (users) to the core layer (resources).
All of the access switches have high-speed uplinks to each of the distribution switches and the distribution switches are all interconnected.
Analogy: These are the highways and interstates of the city that decrease the amount of time it takes to get to downtown.
What is the Core Layer?
Also referred to as the network backbone, this layer is responsible for transporting large amounts of traffic quickly and providing users with the resources they need. The core layer provides interconnectivity between distribution layer devices and usually consists of high-speed devices, like high-end routers and switches with redundant links.
The Core Layer does not implement advanced features such as security, since the job of the core is to move traffic to resources with minimal delay.
This layer contains all of the network resources: Web servers, databases, applications.
Analogy: This is the Downtown of the city; This is where all of the jobs and resources are located.
What is SD-Networking?
Software-Defined Networking
Why use SD-Networking?
As modern networks grow in size and complexity, it becomes increasingly difficult to configure, manage, and control them. There has traditionally been no centralized control plane, which means that to make even the simplest change you have to individually access many switches.
SD-Networking allows a centralized controller to implement and manage all of the networking devices as a complete set and not individually.
This greatly reduces the time and configuration work required and allows the entire network to be managed as a single entity rather than a bunch of individual routers and switches.
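The "one change, applied everywhere" idea can be sketched in a few lines of Python. This is purely illustrative; the device names and the intent format are hypothetical, not any real controller's API.

```python
# Illustrative sketch: a centralized "controller" applying one intent
# to every device at once, instead of logging into each switch individually.
# Device names and the intent format are made up for this example.

def push_intent(devices, intent):
    """Apply a single desired-state change to every managed device."""
    for device in devices:
        device["config"].update(intent)  # controller programs each device
    return devices

network = [
    {"name": "access-sw1", "config": {}},
    {"name": "access-sw2", "config": {}},
    {"name": "dist-sw1", "config": {}},
]

# One change, defined once, applied network-wide by the controller.
push_intent(network, {"mgmt_vlan": 99})
```

Without the controller, the same change would mean three separate login-and-configure sessions; with hundreds of switches, the difference is dramatic.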
What are the SD-Networking Layers?
The Three layers in an SDN architecture are:
Application: the applications and services running on the network
Control: the SDN controller or “brains” of the network
Infrastructure: switches and routers, and the supporting physical hardware
Application Layer
Contains the standard network applications or functions, like intrusion detection/prevention appliances, load balancers, proxy servers, and firewalls, that explicitly and programmatically communicate their desired network behavior and requirements to the SDN controller.
Control Layer
Also known as the control plane; this is where the SDN Controller resides, making this layer the brains of the network.
Carries out any instructions or requirements received from the application layer devices and configures the SDN-controlled network devices in the infrastructure layer.
The SDN Controller sits in the control layer and processes configuration, monitoring, and any other application-specific information between the application layer and infrastructure layer.
Infrastructure Layer
Also called the forwarding plane, this layer consists of the actual networking hardware devices that handle forwarding and processing for the network; this is where switches and routers are located.
The Infrastructure Layer is responsible for collecting network health statistics such as traffic, topology, usage, logging, errors, and analytics, and for sending this information to the control layer.
Image of SD-Networking on Hardware.
Image of SD-Networking Data Flow.
What is a Spine and Leaf?
Spine + Leaf
Data Center networks are evolving into two-tier fabric-based networks that are also referred to as a Spine + Leaf architecture.
Spine Switches
Spine switches are extremely high-throughput, low-latency, port-dense switches with direct high-speed (10, 25, 40, or 100 Gbps) connections to each leaf switch.
The Spine switches are used for routing and forwarding, acting as the backbone for the network.
These serve the same role as the core components in the Data Center 3-Tier Architecture, in the sense that they are responsible for providing all of the network resources in an extraordinarily fast manner.
Leaf Switches
The Leaf Switches are the equivalent of the access layer in the Data Center 3-Tier architecture; they’re the same in the sense that they connect the internal servers to the network resources (Spine Switches).
The leaf layer consists of access switches that aggregate (combine) traffic from servers and connect directly into the spine or network core.
Advantages of Two-Tier architectures
Resiliency
Offers full redundancy due to each Leaf Switch connecting to every Spine Switch.
STP is not needed, and because Layer 3 routing protocols are used, every uplink can be used concurrently.
Latency
There is a max of 2 hops for any East-West packet flow over very high-speed links, so ultra-low latency is standard.
Performance
True active-active uplinks enable traffic to flow over the least congested high-speed links available, making performance top-tier.
Scalability
You can increase leaf switch quantity to your desired port capacity and add spine switches as needed for uplinks; This makes increasing your network access points relatively easy.
Adaptability
Multiple spine-leaf networks across a multitenant or private cloud design are often managed from SD-Networking controllers.
Because of this, any changes that have to be made are simple and concise due to the centralized SD-Networking controller.
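The resiliency and scalability points above follow from simple arithmetic: every leaf connects to every spine, so the two tiers form a full mesh. A quick sketch, with illustrative switch counts (not a sizing recommendation):

```python
# Back-of-the-envelope spine-leaf fabric math.
# Every leaf connects to every spine, so any server-to-server path is at
# most leaf -> spine -> leaf: 2 switch hops.

def fabric_links(spines: int, leaves: int) -> int:
    """Total leaf-to-spine uplinks in a full-mesh fabric."""
    return spines * leaves

def links_added_per_leaf(spines: int) -> int:
    """Scaling out: each new leaf needs one uplink per spine."""
    return spines

# Example fabric: 4 spines, 12 leaves.
total = fabric_links(4, 12)        # 48 uplinks in the fabric
growth = links_added_per_leaf(4)   # a 13th leaf adds 4 uplinks
```

This also shows the trade-off discussed under Top-Of-Rack switching below: adding leaves is easy, but every new leaf needs a cable and a port on every spine.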
Top-Of-Rack Switching
Each leaf switch sits physically at the top of a 19” rack in your physical data center, directly above the servers it connects.
Advantages
Simple Cabling: This setup allows for very simple cabling layouts
Redundant: Since every leaf and spine are connected to each other, every server that’s connected to a leaf has a path through every spine on the network
Fast: Since every leaf and spine are connected, communication is very efficient and very swift
Disadvantages
Expensive to Scale: Adding a leaf switch means you’ll have to buy and cable additional connections to all of the spine switches on the network; this can rapidly increase the cost.
Image of Top-Of-Rack Switching
What are the backbones?
Data center backbones are going to be either Spine Switches or Core Switches, depending on your topology (Three-tier or Two-tier).
Backbone switches have very high-speed interfaces and are used to interconnect the access or leaf switches. The backbone does not have any direct server connections; it only connects to other networking devices (switches/routers).
It is common for backbones to have 10G+ interface speeds while also being highly redundant.
What are Traffic Flows?
Traffic Flows
In a data center there is server-to-server communication and also communication with the outside world. These flows are called North-South and East-West. It’s important to understand your network and make sure it is designed properly so there are no congestion points that could cause bottlenecks or slowdowns.
North → South
Traffic typically indicates data flow that either enters or leaves the data center from/to a system physically residing outside the data center, such as user → server.
Northbound Traffic is data leaving the data center.
Southbound traffic is data entering the data center from the outside, such as from the internet or a private network. Usually the border network consists of a router and firewall to define the border of the data center network with the outside world.
East → West
Refers to the data sent and received between devices inside of a data center.
Thanks to modern decentralized app designs, virtualization, private clouds, and converged and hyper-converged infrastructures, East-West traffic volume is usually greater than its North-South counterpart.
This traffic source benefits from high-speed interconnections for low-latency transfers of large amounts of data that the spine-leaf architecture provides.
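One way to make the North-South vs. East-West distinction concrete: a flow is East-West when both endpoints sit inside the data center's address space, and North-South when one endpoint is outside. A minimal sketch, assuming the data center uses 10.0.0.0/8 (an example prefix, not a rule):

```python
import ipaddress

# Classify a flow by whether both endpoints fall inside the data center's
# prefix. The 10.0.0.0/8 prefix is an assumed example for illustration.
DATA_CENTER = ipaddress.ip_network("10.0.0.0/8")

def classify_flow(src: str, dst: str) -> str:
    src_in = ipaddress.ip_address(src) in DATA_CENTER
    dst_in = ipaddress.ip_address(dst) in DATA_CENTER
    if src_in and dst_in:
        return "East-West"    # server to server, stays inside the DC
    return "North-South"      # traffic enters or leaves the data center

classify_flow("10.1.1.5", "10.2.3.9")     # server to server: East-West
classify_flow("203.0.113.7", "10.1.1.5")  # entering from outside: North-South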
Image of Traffic Flows
What is the difference between the different locations of a datacenter?
Branch Office vs. On-Premise Datacenter vs Co-location
Branch Office Datacenter
Having a data center in the branch office close to the end users can be both beneficial and detrimental in some cases.
Beneficial in the sense that network speeds and uptime are both improved due to the datacenter being so close to the people that use the resources the most.
Detrimental in the sense that it can be difficult to manage and administer smaller datacenters spread out over a region instead of one large centralized datacenter.
Redundancy will play a role too. If you have lots of smaller datacenters for every office belonging to the enterprise, then it’s going to be expensive keeping backups for each separate site as well.
On-Premise Datacenter
The traditional approach is to maintain one or more private on-premise data centers; This puts everything under your control by having it In-House.
You’ll be able to place your own staff and security while possessing a backup datacenter that’s miles away from your original.
The only con is that you’ll have to provide your own cooling, security, and monitoring.
Colocation Datacenter
Lower Costs – When comparing the costs of a colocation data center with the option of building your own facility, it is an obvious choice. Unless your equipment requires a huge amount of room, the costs will be far lower when using a colocation option.
Fewer Technical Staff – You don’t need to worry about things like running cables, managing power, installing equipment, or any number of other technical processes. In many cases, the colocation data center will even be able to replace components or perform other tasks as needed. This means you don’t need to have a large IT staff employed to handle this work.
Exceptional Reliability – Colocation data centers are typically built with the highest specifications for redundancy. This includes backup power generators, excellent physical security, multiple network connections through multiple telcos, and much more.
Geographic Location – You can choose the location of your data center so that it is near your users.
Predictable Expenses – The costs associated with a colocation data center will be very predictable. You can typically sign contracts that last one or more years, so you know exactly how to budget your IT needs.
Easy Scalability – When your business is growing, you can quickly have new servers or other equipment added to the facility. When you have your equipment in a small local data center or server closet, it can be much more difficult to expand.
What are storage area networks?
Storage Area Connection Types
SANs comprise high-capacity storage devices that are connected by a high-speed private network (separate from the LAN) using a switch dedicated to the SAN.
SANs take care of the collection, management, and use of data. To do this SANs have their own private dedicated network; Because of this, SANs require a ton of bandwidth.
Below are the protocols that can be used to access the data and the client systems that can use those various protocols.
Fibre Channel over Ethernet (FCoE)
You can use Ethernet cables to connect your devices to the Fibre Channel network, but in this case you’d be integrating with an already existing Fibre Channel infrastructure.
This is a switched technology, meaning FCoE is not routable.
Fibre Channel
Allows you to connect servers and storage together in a very high-speed network; it’s very common to see 2, 4, 8, or 16 Gbps.
It’s supported over fiber and copper using a Fibre Channel switch.
You would connect your servers (initiators) to the Fibre Channel switch, and the storage (target) communicates using SCSI, SAS, or SATA commands.
Internet Small Computer Systems Interface (iSCSI)
An IP-based network storage standard: a method of encapsulating SCSI commands within IP packets. This allows the use of the same network for storage as is used for the rest of the network.
It looks and acts as if it’s a local drive on your system, meaning it can be managed virtually through software.
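The encapsulation idea can be sketched as nested layers. This is a conceptual model only; the field names are simplified illustrations, not the actual iSCSI wire format (the well-known iSCSI TCP port 3260 is real).

```python
# Conceptual sketch of iSCSI encapsulation: a SCSI command wrapped in an
# iSCSI PDU, carried over ordinary TCP/IP. Field names are simplified
# illustrations, not the real wire format.

def encapsulate(scsi_command: bytes) -> dict:
    return {
        "ip": {                              # ordinary IP packet on the LAN
            "tcp": {
                "dst_port": 3260,            # well-known iSCSI port
                "iscsi_pdu": {
                    "opcode": "SCSI Command",
                    "payload": scsi_command, # the SCSI command rides inside
                },
            }
        }
    }

packet = encapsulate(b"\x28...")  # illustrative stand-in for a SCSI read
```

Because the SCSI command is just payload inside normal TCP/IP, it can traverse the same switches and routers as any other traffic, which is exactly why iSCSI doesn't require a dedicated Fibre Channel fabric.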