Chapter 8: Ethernet Switching

Overview

Shared Ethernet works extremely well under ideal conditions. When the number of devices trying to access the network is low, the number of collisions stays well within acceptable limits. However, when the number of users on the network increases, the increased number of collisions can cause intolerably poor performance. Bridging was developed to ease the performance problems that arose from increased collisions. Switching evolved from bridging to become the key technology in modern Ethernet LANs.

The concepts of collision domains and broadcast domains are concerned with the ways that networks can be designed to limit the negative effects of collisions and broadcasts. This module explores the effects of collisions and broadcasts on network traffic and then describes how bridges and routers are used to segment networks for improved performance.

Students completing this module should be able to:
- Define bridging and switching.
- Define and describe the content-addressable memory (CAM) table.
- Define latency.
- Describe store-and-forward and cut-through switching modes.
- Explain the Spanning-Tree Protocol (STP).
- Define collisions, broadcasts, collision domains, and broadcast domains.
- Identify the Layer 1, 2, and 3 devices used to create collision domains and broadcast domains.
- Discuss data flow and problems with broadcasts.
- Explain network segmentation and list the devices used to create segments.

8.1. Ethernet Switching

8.1.1. Layer 2 bridging

As more nodes are added to an Ethernet physical segment, contention for the media increases. Ethernet is a shared medium, which means only one node can transmit data at a time. The addition of more nodes increases the demand on the available bandwidth and places additional load on the media. As the number of nodes on a single segment grows, the probability of collisions increases, resulting in more retransmissions. A solution to the problem is to break the large segment into parts and separate it into isolated collision domains.

To accomplish this, a bridge keeps a table of MAC addresses and their associated ports. The bridge then forwards or discards frames based on the table entries. The following steps illustrate the operation of a bridge (a code sketch of this logic follows the list):

1. The bridge has just been started, so the bridge table is empty. The bridge waits for traffic on the segment and processes traffic as it is detected.
2. Host A pings Host B. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame.
3. The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on port 1, the address is associated with port 1 in the table.
4. The destination address of the frame is checked against the bridge table. Since the address is not in the table, the frame is forwarded to the other segment, even though the destination is on the same collision domain. The address of Host B has not been recorded yet, because only the source address of a frame is recorded.
5. Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host A and the bridge receive the frame and process it.
6. The bridge adds the source address of the frame to its bridge table. Since the source address was not yet in the bridge table and the frame was received on port 1, the address is associated with port 1 in the table.
7. Host A now pings Host C. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame. Host B discards the frame, as it was not the intended destination.
8. The bridge checks the source address of the frame against its bridge table. Since the address is already in the table, the entry is simply refreshed.
9. The destination address of the frame is checked against the bridge table. Since the address is not in the table, the frame is forwarded to the other segment. The address of Host C has not been recorded yet, because only the source address of a frame is recorded.
10. Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host D discards the frame, as it was not the intended destination.
11. The destination address of the frame is checked against the bridge table. The address is in the table and is associated with port 1, so the frame is forwarded to the other segment.
12. The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on port 2, the address is associated with port 2 in the table.
13. When Host D transmits data, its MAC address will also be recorded in the bridge table.

These are the steps that a bridge uses to forward and discard frames received on any of its ports.
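The learning and forwarding logic just described can be captured in a few lines of code. The sketch below is a minimal, simplified model of a transparent learning bridge, written for illustration only; the port numbers and the shorthand MAC strings in the usage lines are hypothetical, and a real bridge also ages out its table entries.

```python
class LearningBridge:
    """Minimal sketch of transparent-bridge learning and forwarding logic."""

    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2]
        self.table = {}             # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: associate the source MAC with the port the frame arrived on.
        self.table[src_mac] = in_port

        # Broadcast frames are always flooded out every other port.
        if dst_mac == "ff:ff:ff:ff:ff:ff":
            return [p for p in self.ports if p != in_port]

        out_port = self.table.get(dst_mac)
        if out_port is None:
            # Unknown destination: flood to all ports except the one it came in on.
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:
            # Destination is on the same segment: filter (discard) the frame.
            return []
        # Known destination on a different segment: forward out that port only.
        return [out_port]


# Usage with hypothetical addresses: Hosts A and B on port 1, Hosts C and D on port 2.
bridge = LearningBridge(ports=[1, 2])
print(bridge.receive(1, "AA", "BB"))   # -> [2]  (unknown destination, forwarded)
print(bridge.receive(1, "BB", "AA"))   # -> []   (A is known on port 1, filtered)
print(bridge.receive(2, "CC", "AA"))   # -> [1]  (A is on the other segment)
```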
8.1.2. Layer 2 switching

Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions made by a bridge are based on MAC (Layer 2) addressing and do not affect the logical (Layer 3) addressing. Thus, a bridge divides a collision domain but has no effect on a logical or broadcast domain.

A switch is essentially a fast, multi-port bridge that can contain dozens of ports. Rather than creating two collision domains, each port creates its own collision domain. In a network of twenty nodes, twenty collision domains exist if each node is plugged into its own switch port. If an uplink port is included, one switch creates twenty-one single-node collision domains. A switch dynamically builds and maintains a content-addressable memory (CAM) table, which holds all of the necessary MAC information for each port.

8.1.3. Switch operation

A switch is simply a bridge with many ports. When only one node is connected to a switch port, the collision domain on the shared media contains only two nodes: the switch port and the host connected to it. These small physical segments are called microsegments.

In a network that uses twisted-pair cabling, one pair is used to carry the transmitted signal from one node to the other, and a separate pair is used for the return, or received, signal. It is possible for signals to pass through both pairs simultaneously. The capability of communicating in both directions at once is known as full duplex. Most switches are capable of supporting full duplex, as are most network interface cards (NICs). In full-duplex mode there is no contention for the media, so a collision domain no longer exists. Theoretically, the bandwidth is doubled when using full duplex.

In addition to faster microprocessors and memory, two other technological advances made switches possible. Content-addressable memory (CAM) is memory that essentially works backwards compared to conventional memory: entering data into the memory returns the associated address. Using CAM allows a switch to find the port associated with a MAC address directly, without using search algorithms. An application-specific integrated circuit (ASIC) is a device consisting of undedicated logic gates that can be programmed to perform functions at logic speeds. Operations that might otherwise be done in software can be done in hardware using an ASIC. The use of these technologies greatly reduced the delays caused by software processing and enabled a switch to keep pace with the data demands of many microsegments and high bit rates.
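To make the CAM idea concrete, the following sketch models a switch's MAC table as a dictionary keyed by the MAC address, so presenting the "content" returns the associated port in a single lookup, and adds simple entry aging. This is an illustration of the concept only; the 300-second aging time and the example MAC addresses are assumptions, not values given in this chapter.

```python
import time

class CamTable:
    """Sketch of a switch MAC table: the MAC address keys the lookup and the
    'answer' is the port, mimicking how CAM returns an address for presented
    data. The aging time is an assumed value."""

    def __init__(self, aging_seconds=300):
        self.aging_seconds = aging_seconds
        self.entries = {}                     # MAC -> (port, time last seen)

    def learn(self, mac, port):
        # Refresh the timestamp every time the source MAC is seen.
        self.entries[mac] = (port, time.monotonic())

    def lookup(self, mac):
        entry = self.entries.get(mac)
        if entry is None:
            return None                       # unknown: the switch must flood
        port, seen = entry
        if time.monotonic() - seen > self.aging_seconds:
            del self.entries[mac]             # stale entry has aged out
            return None
        return port

table = CamTable()
table.learn("00:11:22:33:44:55", port=3)      # hypothetical MAC and port
print(table.lookup("00:11:22:33:44:55"))      # -> 3
print(table.lookup("66:77:88:99:aa:bb"))      # -> None (frame would be flooded)
```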
8.1.4. Latency

Latency is the delay between the time a frame first starts to leave the source device and the time the first part of the frame reaches its destination. A wide variety of conditions can cause delays as a frame travels from source to destination:
- Media delays, caused by the finite speed at which signals travel through the physical media.
- Circuit delays, caused by the electronics that process the signal along the path.
- Software delays, caused by the decisions that software must make to implement switching and protocols.
- Delays caused by the content of the frame and by where in the frame switching decisions can be made. For example, a device cannot forward a frame to its destination until the destination MAC address has been read.

8.1.5. Switch modes

How a frame is switched to the destination port is a trade-off between latency and reliability. A switch can start to transfer the frame as soon as the destination MAC address is received. Switching at this point is called cut-through switching and results in the lowest latency through the switch. However, no error checking is available.

At the other extreme, the switch can receive the entire frame before sending it out the destination port. This gives the switch software an opportunity to verify the frame check sequence (FCS) to ensure that the frame was reliably received before sending it on. If the frame is found to be invalid, it is discarded at this switch rather than at the ultimate destination. Since the entire frame is stored before being forwarded, this mode is called store-and-forward. A comparison of the two modes is sketched in code below.

When cut-through switching is used, both the source port and the destination port must operate at the same bit rate in order to keep the frame intact. This is called synchronous switching. If the bit rates are not the same, the frame must be stored at one bit rate before it is sent out at the other bit rate. This is known as asynchronous switching, and store-and-forward mode must be used for it.

Asymmetric switching provides switched connections between ports of unlike bandwidths, such as a combination of 100 Mbps and 1000 Mbps. Asymmetric switching is optimized for client/server traffic flows in which multiple clients simultaneously communicate with a server, requiring more bandwidth dedicated to the server port to prevent a bottleneck at that port.
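The trade-off between the two modes can be sketched as follows. This is a conceptual model only: it assumes a simplified frame layout (destination MAC in the first 6 bytes, FCS in the last 4), uses zlib's CRC-32 as a stand-in for the Ethernet FCS calculation, and replaces the CAM lookup with a hypothetical forward() function; a real switch performs all of this in hardware.

```python
import zlib

def cut_through(frame: bytes, lookup):
    """Begin forwarding as soon as the 6-byte destination MAC has been read.
    Lowest latency, but damaged frames are passed along unchecked."""
    dst_mac = frame[:6]
    return lookup(dst_mac)

def store_and_forward(frame: bytes, lookup):
    """Buffer the entire frame and verify the trailing 4-byte FCS before forwarding.
    Invalid frames are discarded at this switch instead of at the destination."""
    body, fcs = frame[:-4], frame[-4:]
    computed = zlib.crc32(body).to_bytes(4, "little")  # CRC-32 stand-in for the FCS
    if computed != fcs:
        return None                                    # drop the damaged frame
    return lookup(frame[:6])

def forward(dst_mac):        # stand-in for a CAM-table lookup; always answers port 2
    return 2

# Hypothetical 64-byte frame whose FCS we compute the same way the checker does.
body = bytes.fromhex("ffffffffffff") + bytes(54)
frame = body + zlib.crc32(body).to_bytes(4, "little")
print(cut_through(frame, forward))                     # -> 2 (no FCS check performed)
print(store_and_forward(frame, forward))               # -> 2 (FCS verifies)
corrupted = frame[:10] + b"\x01" + frame[11:]          # flip one payload byte
print(store_and_forward(corrupted, forward))           # -> None (discarded here)
```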
8.1.6. Spanning-Tree Protocol

When multiple switches are arranged in a simple hierarchical tree, switching loops are unlikely to occur. However, switched networks are often designed with redundant paths to provide reliability and fault tolerance. While redundant paths are desirable, they can have undesirable side effects, and switching loops are one such side effect. Switching loops can occur by design or by accident, and they can lead to broadcast storms that rapidly overwhelm a network.

To counteract the possibility of loops, switches are provided with a standards-based protocol called the Spanning-Tree Protocol (STP). Each switch in a LAN that uses STP sends special messages called Bridge Protocol Data Units (BPDUs) out all of its ports to let other switches know of its existence and to elect a root bridge for the network. The switches then use the Spanning-Tree Algorithm (STA) to resolve and shut down the redundant paths.

Each port on a switch that uses the Spanning-Tree Protocol exists in one of the following five states (sketched in code after the list):
- Blocking
- Listening
- Learning
- Forwarding
- Disabled
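The five states can be represented directly in code. The sketch below simply enumerates them and the usual progression toward forwarding; the one-line comments summarize classic 802.1D behavior and are not drawn from this chapter.

```python
from enum import Enum

class StpState(Enum):
    """The five STP port states listed above."""
    BLOCKING = "blocking"       # receives BPDUs only; blocks data to break loops
    LISTENING = "listening"     # participates in root/designated-port election
    LEARNING = "learning"       # populates the MAC table but does not yet forward
    FORWARDING = "forwarding"   # passes user data
    DISABLED = "disabled"       # administratively shut down

# Typical progression for a port that ends up forwarding
# (Disabled is an administrative state outside this sequence).
NORMAL_TRANSITION = [
    StpState.BLOCKING,
    StpState.LISTENING,
    StpState.LEARNING,
    StpState.FORWARDING,
]

print(" -> ".join(state.value for state in NORMAL_TRANSITION))
```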
8.2. Collision Domains and Broadcast Domains

8.2.1. Shared media environments

Understanding collision domains requires understanding what collisions are and how they are caused. To help explain collisions, Layer 1 media and topologies are reviewed here. Some networks are directly connected, so that all hosts share Layer 1. Examples include:
- Shared media environment – occurs when multiple hosts have access to the same medium. For example, if several PCs are attached to the same physical wire or optical fiber, or share the same airspace, they all share the same media environment.
- Extended shared media environment – a special type of shared media environment in which networking devices extend the environment so that it can accommodate multiple access or longer cable distances.
- Point-to-point network environment – widely used in dialup network connections and the most familiar to the home user. It is a shared networking environment in which one device is connected to only one other device, such as a computer connected to an Internet service provider by modem over a phone line.

It is important to be able to identify a shared media environment, because collisions only occur in a shared environment. A highway system is an example of a shared environment in which collisions can occur because multiple vehicles use the same roads. As more vehicles enter the system, collisions become more likely. A shared data network is much like a highway. Rules exist to determine who has access to the network medium, but sometimes the rules simply cannot handle the traffic load and collisions occur.

8.2.2. Collision domains

Collision domains are the connected physical network segments where collisions can occur. The types of devices that interconnect the media segments define collision domains. These devices are classified as OSI Layer 1, 2, or 3 devices. Layer 1 devices do not break up collision domains; Layer 2 and Layer 3 devices do. Breaking up, or increasing the number of, collision domains with Layer 2 and 3 devices is also known as segmentation.

Layer 1 devices, such as repeaters and hubs, serve the primary function of extending Ethernet cable segments. By extending the network, more hosts can be added. However, every host that is added increases the amount of potential traffic on the network. Since Layer 1 devices pass on everything that is sent on the media, the more traffic that is transmitted within a collision domain, the greater the chance of collisions. The final result is diminished network performance, which will be even more pronounced if all the computers on the network demand large amounts of bandwidth. Simply put, Layer 1 devices extend collision domains, but a LAN can also be overextended in length and cause other collision issues.

The four repeater rule in Ethernet states that no more than four repeaters or repeating hubs can be between any two computers on the network. To ensure that a repeated 10BASE-T network functions properly, the round-trip delay calculation must be within certain limits; otherwise, not all workstations will be able to hear all the collisions on the network. Repeater latency, propagation delay, and NIC latency all contribute to the four repeater rule. Exceeding the four repeater rule can lead to violating the maximum delay limit. When this delay limit is exceeded, the number of late collisions increases dramatically.

The 5-4-3-2-1 rule requires that the following guidelines not be exceeded:
- Five segments of network media
- Four repeaters or hubs
- Three host segments of the network
- Two link sections (no hosts)
- One large collision domain

The 5-4-3-2-1 rule also provides guidelines for keeping round-trip delay time in a shared network within acceptable limits.

8.2.3. Segmentation

One important skill for a networking professional is the ability to recognize collision domains. Connecting several computers to a single shared-access medium that has no other networking devices attached creates a collision domain. This situation limits the number of computers that can use the medium, also called a segment. Layer 1 devices extend but do not control collision domains.

Layer 2 devices segment, or divide, collision domains. They perform this function by controlling frame propagation using the MAC address assigned to every Ethernet device. Layer 2 devices, that is, bridges and switches, keep track of the MAC addresses and which segment they are on. By doing this, these devices can control the flow of traffic at Layer 2. This function makes networks more efficient by allowing data to be transmitted on different segments of the LAN at the same time without the frames colliding. By using bridges and switches, the collision domain is effectively broken up into smaller parts, each becoming its own collision domain.

Layer 3 devices, like Layer 2 devices, do not forward collisions. Because of this, the use of Layer 3 devices in a network has the effect of breaking up collision domains into smaller domains. Layer 3 devices perform more functions than just breaking up collision domains; their functions are covered in more depth in the section on broadcast domains.

8.2.4. Layer 2 broadcasts

To communicate with all the hosts in all collision domains, protocols use broadcast and multicast frames at Layer 2 of the OSI model. When a node needs to communicate with all hosts on the network, it sends a broadcast frame with the destination MAC address 0xFFFFFFFFFFFF. This is an address to which the network interface card (NIC) of every host must respond.

Layer 2 devices must flood all broadcast and multicast traffic. The accumulation of broadcast and multicast traffic from each device in the network is referred to as broadcast radiation. In some cases, the circulation of broadcast radiation can saturate the network so that there is no bandwidth left for application data. In this case, new network connections cannot be established and existing connections may be dropped, a situation known as a broadcast storm. The probability of broadcast storms increases as the switched network grows. The three sources of broadcasts and multicasts in IP networks are workstations, routers, and multicast applications.
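The broadcast address and the flooding behavior described above are easy to express in code. The sketch below is illustrative only; the multicast test (the least-significant bit of the first octet) is standard Ethernet addressing rather than something stated in this chapter, and the port numbers are hypothetical.

```python
BROADCAST_MAC = bytes.fromhex("ffffffffffff")   # the all-ones address, 0xFFFFFFFFFFFF

def is_broadcast(dst_mac: bytes) -> bool:
    return dst_mac == BROADCAST_MAC

def is_multicast(dst_mac: bytes) -> bool:
    # A set least-significant bit in the first octet marks a group (multicast)
    # address; broadcast is the special all-ones case.
    return bool(dst_mac[0] & 0x01)

def ports_to_flood(all_ports, in_port):
    """A Layer 2 device must send broadcasts and multicasts out every port
    except the one the frame arrived on."""
    return [p for p in all_ports if p != in_port]

print(is_broadcast(BROADCAST_MAC))                    # True
print(is_multicast(bytes.fromhex("01005e000001")))    # True (an IP multicast MAC)
print(ports_to_flood([1, 2, 3, 4], in_port=2))        # [1, 3, 4]
```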
A broadcast domain is a grouping of collision domains that are connected by Layer 2 devices. Breaking up a LAN into multiple collision domains increases the opportunity for each host in the network to gain access to the media. This effectively reduces the chance of collisions and increases the available bandwidth for every host. But broadcasts are forwarded by Layer 2 devices and, if excessive, can reduce the efficiency of the entire LAN. Broadcasts have to be controlled at Layer 3, as Layer 2 and Layer 1 devices have no way of controlling them. The total size of a broadcast domain can be identified by looking at all of the collision domains that process the same broadcast frame; in other words, all the nodes that are part of the network segment bounded by a Layer 3 device. Broadcast domains are controlled at Layer 3 because routers do not forward broadcasts.

Routers actually work at Layers 1, 2, and 3. Like all Layer 1 devices, they have a physical connection to, and transmit data onto, the media. They have a Layer 2 encapsulation on every interface and perform just like any other Layer 2 device. It is Layer 3 that allows the router to segment broadcast domains. In order for a packet to be forwarded through a router, it must already have been processed by a Layer 2 device and the frame information stripped off. Layer 3 forwarding is based on the destination IP address, not the MAC address. For a packet to be forwarded, it must contain an IP address that is outside the range of addresses assigned to the LAN, and the router must have a destination in its routing table to which it can send the packet.

8.2.6. Introduction to data flow

Data flow, in the context of collision and broadcast domains, focuses on how data frames propagate through a network. It refers to the movement of data through Layer 1, 2, and 3 devices and to how data must be encapsulated to make that journey effectively. Remember that data is encapsulated at the network layer with an IP source and destination address, and at the data link layer with a MAC source and destination address.

Layer 1 devices do no filtering, so everything that is received is passed on to the next segment. The frame is simply regenerated and retimed, returning it to its original transmission quality. Any segments connected by Layer 1 devices are part of the same domain, both collision and broadcast.

Layer 2 devices filter data frames based on the destination MAC address. A frame is forwarded if it is going to an unknown destination or to a destination outside the collision domain.
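The encapsulation relationships described here can be sketched with two nested data structures and one decision function per device layer. This is purely illustrative: the field names and example addresses are invented for the sketch, and the router lookup uses an exact-match dictionary where a real routing table would use longest-prefix matching.

```python
from dataclasses import dataclass

@dataclass
class Packet:                 # Layer 3 PDU: IP source and destination addresses
    src_ip: str
    dst_ip: str
    payload: bytes

@dataclass
class Frame:                  # Layer 2 PDU: MAC addresses, carrying the packet
    src_mac: str
    dst_mac: str
    packet: Packet

def layer1_repeat(frame):
    """A repeater or hub regenerates and retimes the bits; nothing is filtered."""
    return frame

def layer2_forward(frame, mac_table):
    """A bridge or switch decides using only the destination MAC address."""
    return mac_table.get(frame.dst_mac)       # None means the frame is flooded

def layer3_forward(frame, routes):
    """A router strips the frame and decides using the destination IP address.
    (Exact-match dict here; a real router performs a longest-prefix match.)"""
    return routes.get(frame.packet.dst_ip)

frame = Frame("0000.0c12.3456", "0000.0c65.4321",          # hypothetical MACs
              Packet("192.168.1.10", "10.0.0.5", b"data"))
print(layer2_forward(frame, {"0000.0c65.4321": 4}))        # -> 4
print(layer3_forward(frame, {"10.0.0.5": "next-hop A"}))   # -> 'next-hop A'
```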