Routing and Switching

1.4 Explain the purpose and properties of routing and switching

Today's networks are not “your father's network.” Networks continue to evolve, and what we want to do on them continues to evolve. We are placing very fast computers on our networks now and expecting to receive reports, email, chat, music, videos, games, and so forth—often all at once! Because of these challenges, network administrators have to rely on newer and better technologies both to control traffic and to provide security for the network. However, the two major components that we use for our networks are the same two that we used many years ago, namely, routers and switches. In this section, I will discuss the many protocols that have evolved over time that control and enhance our use of these two main components.

EIGRP

Enhanced Interior Gateway Routing Protocol (EIGRP) is a Cisco proprietary protocol that combines the ease of configuration of distance vector routing protocols such as RIP or RIPv2 (discussed later in this chapter) with the advanced features and fast convergence of link state protocols. It is said to be a distance vector routing protocol with link state attributes. It can also be considered an advanced distance vector routing protocol or a hybrid routing protocol.
EIGRP uses a much more sophisticated metric than RIP or RIPv2. This metric includes the bandwidth of a connection and the delay, which is an experiential factor of how long it takes to pass traffic over the path of the network. It can also be tweaked by an administrator with load and reliability factors. Because of its more sophisticated metric, EIGRP is well suited for small, medium, and even large networks. The only possible disadvantage to EIGRP is that it is Cisco proprietary and therefore operates only on Cisco routers and Cisco layer 3 switches.
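To make the EIGRP metric concrete, here is a minimal sketch of its default composite calculation. With the default K-values, the metric reduces to 256 times the sum of a scaled bandwidth term (based on the slowest link) and a scaled delay term; the link bandwidths and delays below are hypothetical example values, not output from a real router.

```python
# Sketch of EIGRP's default composite metric (K1=K3=1, K2=K4=K5=0):
#   metric = 256 * (10**7 / min_bandwidth_kbps + sum_of_delays_in_tens_of_usec)

def eigrp_metric(bandwidths_kbps, delays_usec):
    """bandwidths_kbps: bandwidth of each link on the path, in kbps.
    delays_usec: delay of each outgoing interface, in microseconds."""
    scaled_bw = 10**7 // min(bandwidths_kbps)         # slowest link dominates
    scaled_delay = sum(d // 10 for d in delays_usec)  # tens of microseconds
    return 256 * (scaled_bw + scaled_delay)

# Two hypothetical paths: one T1 hop vs. two Fast Ethernet hops.
t1_path = eigrp_metric([1544], [20000])
fe_path = eigrp_metric([100000, 100000], [100, 100])
print(t1_path, fe_path)  # the lower metric (the two-hop Fast Ethernet path) wins
```

Notice that the two-hop path wins because its links are faster; a pure hop-count protocol like RIP would have chosen the single slow T1 hop instead.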

OSPF

Open Shortest Path First (OSPF) is by far the most common link state routing protocol in use today. OSPF is so named because it is an “open” protocol. In other words, it's not proprietary, and it uses the Shortest Path First (SPF) algorithm developed by Dijkstra.
The principal advantages of this protocol include that it is quiet on the network—not “chatty” like some of the protocols that preceded it—and that it converges very rapidly when there is a change in the network. In other words, when the tables need to be changed to control network traffic, it makes that happen very fast—usually within a few seconds. Because of these advantages, OSPF can be used on small, medium, and large networks.

RIP

Routing Information Protocol (RIP) is one of the first routing protocols. As you can imagine, being first in regard to technology does not necessarily mean being the best. In fact, RIP is now considered obsolete and is being replaced by more sophisticated routing protocols, such as RIPv2, OSPF, and IS-IS.
The principal reasons for RIP's demise are that it is a “chatty” protocol: every 30 seconds, each router broadcasts everything it knows about the network. In addition, RIP uses a “hop count” metric that doesn't take into account the bandwidth of a connection. Finally, RIPv1, commonly referred to simply as RIP, is classful, which means it does not provide the means to advertise the true subnet mask of a network. In today's varied networks, this type of routing protocol does not have the intelligence needed to route packets efficiently.
RIPv2 solves some of the problems associated with RIPv1 but not all of them. It does not broadcast every 30 seconds but instead uses multicast addressing for its advertisements. This provides for much more efficient use of network bandwidth. In addition, it can be configured to be classless, which means it can carry the true subnet mask of a network and can therefore be used on more complex networks.
RIPv2, however, still uses only a hop count metric. Because of this limitation, it cannot be used effectively in today's networks that provide redundant and sometimes varied speed connections from point to point. It is therefore also considered by today's standards to be a legacy routing protocol.

Link state vs. distance vector vs. hybrid

As I discussed each of the most common routing protocols, I classified them into categories such as link state, distance vector, and hybrid. This is an area of confusion for some, so I want to make very clear the differences between these categories of protocols. You may use one or more of these categories in your network.
Link state identifies and describes one of the most common categories of routing protocols in use today. Link means interface, and state means the attributes of that interface: where it is, what is connected to it, how fast it is, and so forth. Link state routing protocols send all this interface information in the form of link state advertisements (LSAs). From these LSAs, the routers build a map of the network. Each router in the same area will have the same map and will therefore be able to make decisions as to how to forward a packet. The two most common link state routing protocols are OSPF and IS-IS.
Distance vector routing protocols are also exactly what they say they are. Distance, as you know, is “how far.” Vector, as you may know, is “which direction.” Distance vector routing protocols make decisions by examining these two factors against their routing tables. The most common distance vector routing protocols in use today are RIP and RIPv2. Interior Gateway Routing Protocol (IGRP) was also a distance vector routing protocol, but it is considered “retired” and is no longer in use today.
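The distance vector idea can be sketched in a few lines of code. This is a simplified, hypothetical model (router names, link cost of 1 hop, and the route entries are all made up for illustration): each router keeps a table of (metric, next hop) per destination and replaces an entry whenever a neighbor advertises a shorter path.

```python
# Minimal distance-vector (Bellman-Ford style) update sketch.
# table maps destination -> (metric, next_hop).

def dv_update(table, neighbor, advertised, link_cost=1):
    """Merge a neighbor's advertised {destination: metric} into our table."""
    changed = False
    for dest, metric in advertised.items():
        candidate = metric + link_cost  # one more hop to go through the neighbor
        if dest not in table or candidate < table[dest][0]:
            table[dest] = (candidate, neighbor)
            changed = True
    return changed

table = {"10.1.0.0/16": (1, "S0")}
dv_update(table, "RouterB", {"172.16.0.0/16": 1, "10.1.0.0/16": 3})
print(table)  # 172.16.0.0/16 learned via RouterB at 2 hops; existing 10.1.0.0 route kept
```

Real RIP adds timers, split horizon, and a 15-hop limit on top of this basic relaxation step, but the "believe a neighbor if its path is shorter" core is the same.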
There is only one hybrid routing protocol with which you must be familiar, EIGRP. It is said to be a hybrid because it is actually a distance vector routing protocol that works like a link state routing protocol. EIGRP is one of the most commonly used routing protocols today, especially on networks that contain exclusively Cisco devices.

Static vs. dynamic

Along with all of this discussion of dynamic routing protocols (such as EIGRP, OSPF, and RIPv2), we should also mention that it's entirely possible for you to configure routing table entries yourself; this is known as static routing. The method you use depends on the vendor of the router, but the general principle is the same. Although it would likely not be to your advantage to reconfigure the tables manually with every network change, there are times when a specific static configuration might be advantageous. These static configuration tweaks are usually for the purpose of enhancing security, ensuring the reliability of a link, or forcing the system to do something that it otherwise would not do.

Routing metrics

Some routing protocols are much “smarter” than others. By this I don't mean that you are smarter if you use one or the other but that the routing protocol itself makes more intelligent decisions. The data that every routing protocol uses to make decisions is referred to as its routing metric. Different routing protocols use different routing metrics. There are four routing metrics used by routing protocols today. These are as follows:
Hop Counts    A hop is actually the process of a packet passing through two router interfaces and therefore into a new network or subnet. It's just more fun to say that it “hopped” over the router and into the next network. Routing protocols that only use hop counts, such as RIP and RIPv2, are of limited intelligence because they don't take into account the bandwidth of each link or the traffic currently on it. One hop is equal to any other, regardless of the bandwidth of each option.
MTU, Bandwidth, Delay    Maximum Transmission Unit (MTU) is a metric that is carried by EIGRP but not actually used in the calculation of the best route. It can be considered a legacy metric that once signified the largest packet size that could be sent over the entire route. With today's modern networks, it is no longer needed. The two most common metrics used by EIGRP are bandwidth and delay. Bandwidth is defined as the lowest configured bandwidth of any interface in a proposed route. This is similar to the idea that “The weakest link in a chain determines its strength!” Delay, as I mentioned, is an experiential factor of how long it takes to pass data over the link. These types of metrics offer greater intelligence and usually better routing decisions than hop counts can.
Costs    Whereas EIGRP uses bandwidth and delay to make decisions, link state routing protocols such as OSPF use a metric referred to as cost. With OSPF, cost is calculated as 10 to the power of 8 (100,000,000) divided by the bandwidth in bits per second. By this calculation, a connection with a bandwidth of 100Mbps has a cost of 1. Cost is a relatively simple metric, but since it is calculated for all possibilities, it can be resource intensive in a complex and dynamic network.
Latency    Latency is a metric that is very similar to delay when used with respect to routing. It defines the amount of time that it takes for a packet to travel from a source to a destination. The difference is that while delay is specifically a routing metric, latency is a term that is also used outside of routing, such as in hard drives or memory. The assumption is that something else is waiting for the data to arrive and that the less time it waits, the faster everything else can move.
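The metric definitions above can be tied together with a short sketch. The OSPF cost formula is the one just described (10^8 divided by bandwidth in bps, never rounded below 1); the two candidate paths, with their 56 kbps and 100 Mbps link speeds, are hypothetical examples chosen to show why hop count alone can mislead.

```python
# OSPF cost: 10**8 / bandwidth_bps, with a floor of 1.
def ospf_cost(bandwidth_bps):
    return max(1, 10**8 // bandwidth_bps)

# Hypothetical paths given as lists of link bandwidths in bps:
slow_direct = [56_000]                    # one hop over a 56 kbps link
fast_detour = [100_000_000, 100_000_000]  # two hops over 100 Mbps links

# Hop count prefers the single slow link; OSPF cost prefers the fast detour.
print(len(slow_direct), sum(ospf_cost(b) for b in slow_direct))  # 1 hop, cost 1785
print(len(fast_detour), sum(ospf_cost(b) for b in fast_detour))  # 2 hops, cost 2
```

RIP would send traffic over the 56 kbps link simply because it is one hop closer; a cost-based protocol sees that the two-hop path is nearly 900 times cheaper.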

Next hop

Generally speaking, routers couldn't care less where a packet comes from when they make a routing decision. What they care about is where the packet wants to go. In other words, they are concerned with the destination address in the header of the packet. Based on the destination address, they can determine whether they can deliver the packet themselves or whether they need to send it to another router. If they cannot deliver the packet themselves, then they will consult their routing table to determine the next step. As I mentioned earlier, the routing table will give them the information about the next interface that they can get to, which would be the appropriate place to send the packet. This interface is referred to as the next hop interface. This is because going from one network to another is like hopping over a router in the network diagram. As I mentioned, it's really just going through two consecutive interfaces, but isn't it a lot more fun to say “hop”?

Spanning Tree Protocol

In today's networks, switches are often connected with redundant links to provide for fault tolerance and load balancing. Unfortunately, these redundant links can also create physical loops in the network. If these physical loops were allowed to be seen by data traffic as logical loops, the result could be broadcast storms, multiple copies of the same frame sent to hosts, and MAC database instability on devices. To prevent the logical loops from occurring while still maintaining physical redundancy, modern network switches use the Spanning Tree Protocol (STP).
The original STP is defined by the IEEE as 802.1D. Many other faster and more sophisticated spanning tree protocols have been developed over the past 10 years, including Rapid Spanning Tree Protocol (RSTP), Multiple Spanning Tree Protocol (MSTP), and Per-VLAN Spanning Tree Protocol (PVSTP). Each of these protocols has the same goal in mind: to provide multiple viable paths for data fault tolerance and load balancing without creating loops and the problems they cause.
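The first step STP takes in breaking loops is electing a single root bridge, and that election is easy to sketch. In 802.1D, every switch has a bridge ID made up of a configurable priority followed by its MAC address, and the lowest bridge ID wins; the priorities and MAC addresses below are hypothetical examples.

```python
# Sketch of 802.1D root-bridge election: lowest bridge ID wins.
# A bridge ID is (priority, MAC); Python tuple comparison conveniently
# compares priority first and falls back to the MAC as the tiebreaker.

def elect_root(bridges):
    """bridges: list of (priority, mac) tuples; returns the root bridge."""
    return min(bridges)

switches = [
    (32768, "00:1a:2b:3c:4d:5e"),
    (32768, "00:0c:29:aa:bb:cc"),  # same default priority, lower MAC -> root
    (40960, "00:00:00:00:00:01"),  # lowest MAC but higher priority -> loses
]
print(elect_root(switches))
```

Because the default priority is the same on every switch out of the box, the oldest switch (lowest MAC) often wins by accident; administrators typically lower the priority on the switch they actually want as root.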

VLAN (802.1q)

A virtual local area network (VLAN) is a subnet created using a switch instead of a router. Because of this fact, VLANs have many advantages over subnets created by routers. One of the main advantages of VLANs is that the logical network design does not have to conform to the physical network topology. This gives administrators much more flexibility in network design and in the subsequent changes of that design.
The problem with subnets created by a router is that they are, by definition, local to the interface from which the subnet was created. In addition, all the hosts off each router interface are in the same subnet. This might be fine if all the hosts in a specific geographic area were always in the same department or security group of the organization, but often this is not the case. This means that an administrator cannot set up security policies for resource use by department and use the subnet address to control the policy, because many departments might be mixed into the same subnet.
VLANs solve this problem by creating the subnets using a switch or even groups of switches. Ports on the switches are assigned to a specific VLAN and therefore in a specific subnet. Now here is the important difference, so pay attention—all ports that are assigned to the same VLAN are logically in the same subnet regardless of where those ports are located in the organization. Because of this fact, the administrator can manage the network and its resources by departments represented by subnets, regardless of where each of the users actually resides. This offers a tremendous advantage to an administrator.
Now, you may be wondering how all the switches know about all the VLANs. Well, the administrator will assign some ports on a switch to carry all the VLAN information to the other switches. These ports, which allow all VLANs to pass through them, are referred to as trunks. A VLAN switch that is connected to other VLAN switches will have at least one trunk port. Switches that are central to a topology may have multiple trunk ports. While other trunking protocols exist, the most common trunking protocol by far is IEEE 802.1q.
Another advantage of VLANs is that the traffic that is communicated within the interfaces of the VLAN is only on the interfaces of that VLAN and on the trunks. This increases the security in an organization. Furthermore, connecting one VLAN's traffic to another VLAN requires a centrally located (logically) layer 3 device such as a router or a multilayer switch. The administrator can place access lists on this device that will control all traffic between the VLANs. This represents a tremendous improvement over placing separate access lists on all the routers in the organization. Because of these advantages, VLANs are commonly used in many of today's networks.
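To see what 802.1Q actually adds to a frame on a trunk, here is a minimal sketch of parsing the 4-byte tag. The layout is from the standard: a Tag Protocol Identifier of 0x8100, then 3 bits of priority (PCP), 1 drop-eligible bit (DEI), and a 12-bit VLAN ID. The sample tag bytes below are a made-up example, not a capture.

```python
# Sketch of parsing an IEEE 802.1Q tag: TPID (2 bytes) + TCI (2 bytes).
import struct

def parse_dot1q(tag_bytes):
    tpid, tci = struct.unpack("!HH", tag_bytes)  # network byte order
    assert tpid == 0x8100, "not an 802.1Q tag"
    pcp = tci >> 13          # top 3 bits: priority code point
    dei = (tci >> 12) & 1    # next bit: drop eligible indicator
    vid = tci & 0x0FFF       # low 12 bits: VLAN ID (0-4095)
    return pcp, dei, vid

# Hypothetical tag for VLAN 100 at priority 5: TCI = (5 << 13) | 100 = 0xA064
print(parse_dot1q(bytes.fromhex("8100a064")))  # (5, 0, 100)
```

The 12-bit VLAN ID field is why a single trunk can carry up to 4,096 VLANs, and the PCP bits are what 802.1p quality-of-service markings ride in.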

Port mirroring

Some devices, such as the sensor on an IDS/IPS system, require the ability to monitor all network traffic. Since the VLANs separate the traffic for security reasons, monitoring all traffic sometimes requires getting a copy of network packets from one switch port sent to another switch port, strictly for the purpose of monitoring and logging them. This process, called port mirroring, is becoming a more common practice as organizations continue to install more IDS/IPS systems. It is referred to as Switched Port Analyzer (SPAN) on Cisco switches and as Roving Analysis Port (RAP) on HP switches.

Broadcast domain vs. collision domain

In general, routers and other layer 3 devices create additional broadcast domains, while switches create additional collision domains. Now you may be thinking, “Why do I want more of either one of them in my network?” Well, let's take a look at what each one does.
Broadcast domains determine a boundary for messages sent as a broadcast. Many protocols, such as DHCP, use broadcasts to perform their service. Broadcasts do not negatively affect a network as long as they are contained so that they cannot reach all devices in a large network. Generally speaking, routers and other layer 3 devices stop broadcasts from getting from one network or subnet to another one. This applies whether the subnets were created by router interfaces or by VLANs on a switch. Additional broadcast domains mean less broadcast traffic on each domain and greater control.
Collision domains control which devices can “see” each other through the network. If two devices put data on the network at the same exact time and can sense each other, it results in a collision. Collisions can cause resending of data and slow the network down. (We will discuss collision detection and prevention methods in Chapter 3.) The ironic thing is, the more collision domains that we have, the less the possibility for collisions. This is because there will be fewer devices in each collision domain. Now you may ask, “Why don't we just put each communication into its own collision domain?” Well, in essence that is exactly what modern switch designs do!

IGP vs. EGP

All the routing protocols I've discussed thus far have been Interior Gateway Protocols (IGPs). Border Gateway Protocol (BGP) is an Exterior Gateway Protocol (EGP). Understanding the difference relies upon your knowledge of an autonomous system. An autonomous system is a group of devices under the same administrative domain. If a routing protocol works within one autonomous system, it is considered to be an IGP. If it works across autonomous systems, in effect connecting them, then it is considered to be an EGP. That's all there is to it, so don't make it any harder than it really is. The only EGP that you should be concerned with today is BGP; all of the rest are IGPs.

Routing tables

Simply put, routers really do only two things: either they deliver a packet to its intended destination host, if that host is on one of the subnets for which they have an active interface, or they consult their routing table to determine what to do next. Table 1.5 is a simple illustration of a RIPv2 routing table using hop count. This is actually a “Reader's Digest” version of what you might see in a Cisco router, but you get the point. As you can see, the router that contains this table knows how to get to other networks by virtue of the table. In other words, a packet that comes into this router that is destined for the 10.1.0.0 network will be sent out of a different interface from one that is destined for the 192.168.1.0 network.

Table 1.5: RIPv2 hop count

Destination network    Subnet mask      Interface    Metric (hop count)
10.1.0.0               255.255.0.0     S0           1
192.168.1.0            255.255.255.0   S1           1
172.16.0.0             255.255.0.0     S1           2
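The lookup a router performs against a table like Table 1.5 can be sketched briefly: mask the destination address against each entry's network, collect the entries that match, and prefer the most specific (longest) prefix. The routes below are the three entries from Table 1.5; the destination addresses are hypothetical examples.

```python
# Sketch of a routing-table lookup with longest-prefix match,
# using the entries from Table 1.5.
import ipaddress

routes = [  # (network, exit interface, hop count)
    (ipaddress.ip_network("10.1.0.0/16"),    "S0", 1),
    (ipaddress.ip_network("192.168.1.0/24"), "S1", 1),
    (ipaddress.ip_network("172.16.0.0/16"),  "S1", 2),
]

def lookup(dest):
    """Return the matching route entry, or None if no route exists."""
    matches = [r for r in routes if ipaddress.ip_address(dest) in r[0]]
    if not matches:
        return None  # no route: drop the packet (or use a default route)
    return max(matches, key=lambda r: r[0].prefixlen)  # most specific wins

print(lookup("10.1.4.7"))       # matches 10.1.0.0/16, exits interface S0
print(lookup("192.168.1.200"))  # matches 192.168.1.0/24, exits interface S1
```

With only non-overlapping networks in the table, the longest-prefix step never has to break a tie, but real routers rely on it constantly, for example to prefer a /24 route over a covering /16.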

Convergence (steady state)

Convergence means that everything is in agreement again after change has taken place. In other words, let's say you have a network that is all settled and in a steady state. All routers know the best interface to send a packet out based on the destination address of the packet. Now let's say you add a new interface to a router and thereby create a new path on which traffic could flow. This would cause the routing protocols to acknowledge and examine the new path and determine whether it is a more efficient path than the one they are currently using. In fact, each router that has the intelligence required would need to examine the new path against its current path for each network in its table. It would then make a decision as to whether to make a change. This could temporarily create quite a flurry of activity on a network in regard to routing protocol information exchange. If a router does not have this capability, then you would need to make the changes to the tables manually.
Once all the options are considered and the decisions are made, then the activity will settle down again. A network that has settled back down is said to have converged, so the process of moving through this unsettled state to the settled state is referred to as convergence. Some routing protocols offer much faster convergence than others. As I discussed earlier, routing protocols such as EIGRP and OSPF are “smarter” and thus are not normally chatty, but they become very chatty for a short burst of time when something changes on the network. Their ability to move very quickly from an unsettled state to a settled state is referred to as fast convergence. This means that a change on an interface that affects the routing tables will have minimal effect on the user data that is traversing the network.
