Do-it-yourself local network: General rules for building a home network and its main components. What is a switch and where is it used

When building local computer networks, the question often arises of which equipment is better suited to a particular task: a switch or a router. In this article we will look at the difference between a switch and a router and try to explain their purpose and principles of operation in an accessible way.

To begin with, to understand the difference between these two devices, you need to know that the router and the switch belong to different classes of equipment for building local networks. To see their differences, we will give each of them a definition and briefly describe how it works.

Router - purpose and principle of operation

A router is a higher-class device than a switch: it is a specialized network computer designed to work with multiple network segments. That is, it can interconnect several computers and simultaneously give them access to the Internet.

The main difference between a router and a switch lies in the principle of operation: the router works at the network layer of the OSI model, using the TCP/IP protocols (Transmission Control Protocol and Internet Protocol), which together form the protocol stack of that model. TCP breaks the data into blocks, establishes a virtual connection, and controls their delivery; IP, in turn, is responsible for addressing and transmitting the individual blocks (datagrams).

The use of these protocols in IP networks allows wired and wireless networks to interoperate smoothly. Therefore, using a Wi-Fi router to build a home LAN makes it easy to join all your digital devices into a single network for viewing and exchanging all kinds of information, including over the Internet.

In addition, routers have more advanced hardware, with enough memory to serve a large local network. Some models can handle local traffic at 1 Gbit/s. The software should not be forgotten either: routers are often equipped with security features such as network firewalls.

Switch - purpose and principle of operation

A switch is designed to connect several network nodes, but, unlike a router, only within a single segment. The difference in the principle of operation is that a switch works at the data link layer of the OSI model rather than at the network layer, as routers do. Accordingly, a switch forwards frames based on the hardware (MAC) addresses of the sender and recipient hosts of the local network, while a router relies on their IP addresses.
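As a rough illustration of that difference in forwarding keys (the tables, addresses and interface names below are invented for the example, and real devices implement this in hardware), a layer-2 switch looks up an exact MAC address, while a router does a longest-prefix match on an IP address:

```python
# Illustrative sketch, not any vendor's implementation: a switch keys its
# forwarding decision on MAC addresses learned per port, while a router
# keys on IP prefixes. All table contents here are invented examples.
import ipaddress

# Switch: flat MAC-address table, MAC -> output port.
mac_table = {
    "aa:bb:cc:00:00:01": 1,   # host on port 1
    "aa:bb:cc:00:00:02": 2,   # host on port 2
}

def switch_forward(dst_mac):
    """Return the output port, or None to flood (unknown destination)."""
    return mac_table.get(dst_mac)

# Router: longest-prefix match over IP networks.
route_table = [
    (ipaddress.ip_network("192.168.1.0/24"), "LAN"),
    (ipaddress.ip_network("0.0.0.0/0"), "Internet uplink"),  # default route
]

def route(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    # Pick the matching network with the longest prefix.
    best = max((net for net, _ in route_table if addr in net),
               key=lambda n: n.prefixlen)
    return next(iface for net, iface in route_table if net == best)

print(switch_forward("aa:bb:cc:00:00:02"))  # 2
print(route("192.168.1.7"))                 # LAN
print(route("8.8.8.8"))                     # Internet uplink
```

Note that the switch either knows the exact address or floods, whereas the router always finds some route as long as a default route exists.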

Thus, Internet access for all computers joined into a single local network through a switch alone is conditionally impossible. What does "conditionally impossible" mean? It means that Internet access for all local PCs through a switch alone can in principle be configured, but only according to a particular scheme. To do this, you plug the Internet cable into one computer (call it the main one) and set up the Internet connection on it. Then, through the switch, you share Internet access from it to all the other PCs on the local network.

The disadvantage of this scheme is that configuring Internet access for all local PCs through the switch may prove complicated. In addition, for all computers to have Internet access, the first (main) PC must be turned on. Otherwise, you will have to purchase a router and connect all local computers according to the scheme: computers → switch → router → Internet. In this case the switch acts as a link between the PCs and the router, which in turn is connected to the Internet.

In theory, you could do without the switch in this scheme entirely, provided the router connected to the Internet has enough ports for all the local computers.

The advantages of switches over routers include faster data transfer within a local network. So if the goal is not to give all local computers Internet access, you can get by with just a switch: the data exchange speed between PCs will be noticeably higher.

There is no need to delve further into the technical details of how routers and switches operate; the difference between them, I think, has already become clear.



The book "Building Switched Computer Networks" appeared thanks to long-term cooperation between D-Link and the country's leading technical university, Bauman Moscow State Technical University (BMSTU). The book aims at an in-depth presentation of theory and the formation of practical skills. It is based on D-Link's training materials, as well as on practical exercises conducted at the joint D-Link - BMSTU training center and the Department of Computer Systems and Networks.

The book contains a complete description of the fundamental technologies of switched local area networks, examples of their use, and their configuration on D-Link switches. It will be useful to students in the field of "Informatics and Computer Engineering", graduate students, network administrators, enterprise specialists introducing new information technologies, and anyone interested in modern network technologies and the principles of building switched networks.

The authors would like to thank everyone involved in consulting, editing and preparing the drawings for the course. They express their gratitude to the heads of the D-Link International PTE Ltd representative office and of BMSTU; to D-Link specialists Pavel Kozik, Ruslan Bigarov, Alexander Zaitsev, Evgeny Ryzhov, Denis Evgrafov and Alexander Schadnev for technical advice; to Olga Kuzmina for editing the book; and to Alesya Dunaeva for help in preparing the illustrations. Great help in preparing the manuscript and testing the practical classes was provided by BMSTU lecturers Mikhail Kalinov and Dmitry Chirkov.

Conventions Used in the Course

Throughout the course text, pictograms are used to denote the various types of network devices.

Command syntax

The following characters are used to describe how to enter commands, expected values and arguments when configuring the switch through the command line interface (CLI).

Symbol - Purpose
< angle brackets > - Enclose a variable or value that must be specified
[ square brackets ] - Enclose a required value or set of required arguments; one value or argument can be specified
| vertical bar - Separates two or more mutually exclusive items in a list, one of which must be entered/specified
{ braces } - Enclose an optional value or set of optional arguments
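For example, a command description written in this notation might look as follows (the command itself is invented for illustration and is not taken from a real switch manual):

```
config ports <portlist> speed [auto | 10_half | 10_full | 100_half | 100_full]
    {flow_control [enable | disable]}
```

Here `<portlist>` is a variable the user must supply, one of the bracketed speed keywords must be chosen (the vertical bars separate the alternatives), and the braces mark the flow_control argument as optional.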

Evolution of local area networks

The evolution of local area networks is inextricably linked with the history of the development of Ethernet technology, which to this day remains the most common technology for local area networks.

Initially, LAN technology was seen as a time-saving and cost-effective way of sharing data, disk space and expensive peripherals. The falling cost of personal computers and peripherals led to their widespread adoption in business, and the number of network users grew dramatically. At the same time, application architectures (the "client-server" model) and their demands on computing resources changed, and distributed computing emerged. Downsizing - the transfer of information systems and applications from mainframes to network platforms - became popular. All this shifted the emphasis in how networks were used: they became an indispensable business tool, providing the most efficient processing of information.

The first Ethernet networks (10Base-2 and 10Base-5) used a bus topology, in which each computer was connected to the other devices via a single coaxial cable used as the shared transmission medium. The medium was shared, and devices had to make sure it was free before starting to transmit data packets. While these networks were easy to install, they had significant drawbacks in terms of size, functionality and extensibility, lacked reliability, and could not cope with the exponential growth of network traffic. New solutions were needed to improve the efficiency of local networks.

The next step was the development of the 10Base-T standard with a "star" topology, in which each node was connected by a separate cable to a central device, a hub. The hub worked at the physical layer of the OSI model and repeated the signals arriving on one of its ports to all other active ports after restoring them. The use of hubs improved network reliability, since a break in any one cable no longer brought down the entire network. However, although hubs simplified network management and maintenance, the transmission medium remained shared (all devices were in the same collision domain). In addition, the total number of hubs and the segments they connect was limited by signal delays and other factors.

The task of network segmentation, i.e. dividing users into groups (segments) according to their physical location in order to reduce the number of clients competing for bandwidth, was solved by a device called a bridge. The bridge, developed by Digital Equipment Corporation (DEC) in the early 1980s, was a data link layer device of the OSI model (usually with two ports) for connecting network segments. Unlike a hub, a bridge did not simply forward data packets from one segment to another, but analyzed them and transmitted them only when the transfer was really necessary, that is, when the destination address belonged to the other segment. The bridge thus isolated the traffic of one segment from the traffic of the other, shrinking the collision domain and increasing overall network performance.

However, bridges were effective only as long as the number of workstations in a segment remained relatively small. As soon as it grew, networks suffered congestion (overflow of the receive buffers of network devices), which led to packet loss.

The growing number of networked devices, the increasing processing power of workstations, and the emergence of multimedia and client-server applications demanded more bandwidth. In response to these growing demands, Kalpana launched its first switch, dubbed EtherSwitch.


Figure: 1.1.

The switch was a multiport bridge and also operated at the data link layer of the OSI model. The main difference from the bridge was that the switch could establish several connections simultaneously between different pairs of ports. When a packet passed through the switch, a separate virtual (or, depending on the architecture, real) channel was created inside it, over which the data was sent directly from the source port to the destination port at the highest speed available to the technology in use. This principle of operation is called "micro-segmentation". Thanks to micro-segmentation, switches were also able to operate in full-duplex mode.

Ethernet segment switching technology was introduced by Kalpana in 1990 in response to the growing need to increase the bandwidth of high-performance servers to workstation segments.

The block diagram of the EtherSwitch proposed by Kalpana is shown in Fig. 4.23.

Figure: 4.23. The structure of the EtherSwitch by Kalpana

Each of the 8 10Base-T ports is served by one Ethernet Packet Processor (EPP). In addition, the switch has a system module that coordinates the work of all EPP processors. The system module maintains the general address table of the switch and provides management of the switch using the SNMP protocol. To transfer frames between ports, a switching fabric is used, similar to those found in telephone switches or multiprocessor computers, connecting multiple processors with multiple memory modules.

Switching matrix works on the principle of switching channels. For 8 ports, the matrix can provide 8 simultaneous internal channels at half-duplex port operation and 16 at full-duplex, when the transmitter and receiver of each port operate independently of each other.

When a frame arrives at a port, the EPP buffers the first few bytes of the frame in order to read the destination address. Having received the destination address, the processor immediately decides how to forward the frame, without waiting for the remaining bytes to arrive. To do this, it looks in its own cache of the address table and, if it does not find the required address there, turns to the system module, which works in multitasking mode, serving the requests of all EPP processors in parallel. The system module scans the general address table and returns the found row to the processor, which buffers it in its cache for later use.
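The two-level lookup described above can be sketched as follows. This is an illustrative model only, not Kalpana's actual design: the class names, cache size and eviction policy are assumptions made for the example.

```python
# Sketch of a two-level address lookup: each port processor (EPP) keeps a
# small cache of the switch's shared address table and falls back to the
# system module on a miss. Names and sizes are illustrative.

class SystemModule:
    """Holds the shared address table: MAC -> output port."""
    def __init__(self, table):
        self.table = table

    def lookup(self, mac):
        return self.table.get(mac)

class PortProcessor:
    def __init__(self, system_module, cache_size=4):
        self.system = system_module
        self.cache = {}
        self.cache_size = cache_size

    def resolve(self, dst_mac):
        # 1. Try the local cache first.
        if dst_mac in self.cache:
            return self.cache[dst_mac], "cache"
        # 2. Miss: ask the system module, then keep the row for later use.
        port = self.system.lookup(dst_mac)
        if port is not None:
            if len(self.cache) >= self.cache_size:
                self.cache.pop(next(iter(self.cache)))  # evict oldest entry
            self.cache[dst_mac] = port
        return port, "system module"

sm = SystemModule({"aa:01": 3, "aa:02": 5})
epp = PortProcessor(sm)
print(epp.resolve("aa:01"))  # (3, 'system module') - first lookup misses
print(epp.resolve("aa:01"))  # (3, 'cache') - now served from the local cache
```

The point of the cache is that repeated traffic to the same destination is resolved inside the port processor, without contending for the shared system module.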

After finding the destination address, the EPP processor knows what to do next with the incoming frame (while viewing the address table, the processor continued to buffer the frame bytes arriving at the port). If a frame needs to be filtered, the processor simply stops writing frame bytes to the buffer, clears the buffer, and waits for a new frame to arrive.

If the frame needs to be transmitted to another port, then the processor contacts the switching matrix and tries to establish a path in it that connects its port with the port through which the route to the destination address goes. The switching fabric can only do this if the destination port is free at that moment, that is, not connected to another port.

If the port is busy, then, as with any circuit-switched device, the matrix fails the connection. In this case, the frame is fully buffered by the processor of the input port, after which the processor waits for the release of the output port and the formation of the desired path by the switching matrix.

Once the required path has been set up, the buffered frame bytes are sent into it and received by the output port processor. As soon as the output port processor gains access to the attached Ethernet segment using the CSMA/CD algorithm, the frame bytes are immediately transmitted to the network. The input port processor always keeps several bytes of the received frame in its buffer, which allows it to receive and transmit frame bytes independently and asynchronously (Figure 4.24).

Figure: 4.24. Frame transmission through the switch fabric

When the output port was free at the moment the frame was received, the delay between the reception of the first byte of the frame by the switch and the appearance of the same byte at the output of the destination port was only 40 μs for the Kalpana switch, much less than the frame delay of a bridge.

The described method of transmitting a frame without fully buffering it is called "on-the-fly" or "cut-through" switching. It is, in essence, pipelined processing of the frame, in which several stages of its transmission partially overlap in time (Fig. 4.25):

Figure: 4.25. Saving time in pipelined frame processing: a - pipelined processing; b - normal processing with full buffering

1. Reception of the first bytes of the frame by the processor of the input port, including the reception of the destination address bytes.

2. Finding the destination address in the switch address table (in the processor cache or in the general table of the system module).

3. Switching of the matrix.

4. Reception of the remaining bytes of the frame by the processor of the input port.

5. Reception of the bytes of the frame (including the first) by the processor of the output port through the switching matrix.

6. Gaining access to the medium by the output port processor.

7. Transfer of frame bytes by the processor of the output port to the network.

Stages 2 and 3 cannot be combined in time, since without knowing the number of the output port, the matrix switching operation makes no sense.

Compared to the full frame buffering mode, also shown in Fig. 4.25, the savings from pipelining are tangible.
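The savings can be put in rough numbers. The sketch below assumes a 1500-byte frame and uses the 40 μs cut-through figure quoted above; the store-and-forward estimate deliberately ignores lookup and switching overhead, so it is a lower bound.

```python
# Back-of-envelope comparison of cut-through vs store-and-forward latency.
# 10 Mbit/s Ethernet transmits 1 bit in 0.1 us, i.e. 0.8 us per byte.
# Frame size is an assumed illustrative value; the 40 us figure is from
# the text.

BYTE_TIME_US = 0.8     # time to transmit one byte at 10 Mbit/s
FRAME_BYTES = 1500     # a large Ethernet frame (illustrative)

cut_through_delay = 40                                # us, per the Kalpana figure
store_and_forward_delay = FRAME_BYTES * BYTE_TIME_US  # must buffer whole frame first

print(f"cut-through:       {cut_through_delay} us")
print(f"store-and-forward: {store_and_forward_delay:.0f} us")  # 1200 us
```

For a full-size frame, cut-through switching thus starts the frame on its way roughly thirty times sooner than a device that buffers the entire frame before forwarding.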

However, the main reason for improving network performance when using a switch is parallel processing of multiple frames.

This effect is illustrated in Fig. 4.26. The figure shows a situation that is ideal from the performance standpoint: four of the eight ports transmit data at the Ethernet maximum of 10 Mbit/s, and they transmit it to the other four ports of the switch without conflicts - the data flows between network nodes are distributed so that each receiving port has its own output port. If the switch manages to process the input traffic even at the maximum rate of frame arrival on the input ports, then the total switch throughput in this example is 4 × 10 = 40 Mbit/s, or, generalizing to N ports, (N/2) × 10 Mbit/s. A switch is therefore said to provide each station or segment connected to its ports with the dedicated bandwidth of the protocol.

Naturally, the situation in the network does not always develop as in Fig. 4.26. If two stations, for example those connected to ports 3 and 4, need to write data to the same server connected to port 8 at the same time, the switch cannot allocate a 10 Mbit/s stream to each station, because port 8 cannot transmit data at 20 Mbit/s. The stations' frames will wait in the internal queues of input ports 3 and 4 until port 8 becomes free to transmit the next frame. A good solution for this distribution of data flows would clearly be to connect the server to a higher-speed port, such as Fast Ethernet.
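Both cases above reduce to simple arithmetic, sketched here for the 8-port, 10 Mbit/s example from the text:

```python
# The throughput arithmetic from the text: N/2 non-conflicting 10 Mbit/s
# streams through an N-port switch, versus two stations sharing one
# 10 Mbit/s output port.

PORT_RATE = 10  # Mbit/s, half-duplex Ethernet

def ideal_aggregate(n_ports, rate=PORT_RATE):
    """Best case: every input port has its own free output port."""
    return (n_ports // 2) * rate

print(ideal_aggregate(8))  # 40 Mbit/s, as in the 8-port example

# Contention case: ports 3 and 4 both send to the server on port 8.
offered = 2 * PORT_RATE          # 20 Mbit/s offered to one output port
carried = min(offered, PORT_RATE)  # the port can carry only 10 Mbit/s
print(f"offered {offered} Mbit/s, carried {carried} Mbit/s -> frames queue up")
```

The `min()` in the contention case is exactly why the text recommends putting a busy server on a faster port: raising the output rate raises the carried traffic without changing anything else.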

Figure: 4.26. Parallel frame transmission by the switch

Since the main advantage of the switch - the one that has won it such a strong position in local networks - is its high performance, switch developers strive to produce so-called non-blocking switch models.

A non-blocking switch is one that can forward frames through its ports at the same rate at which they arrive on them. Naturally, even a non-blocking switch cannot resolve, over a long period of time, situations like the one described above, where frames are blocked by the limited speed of an output port.

What is usually meant is the sustained non-blocking mode of the switch, in which it transmits frames at the rate of their arrival over an arbitrary period of time. To ensure such a mode, the frame flows must obviously be distributed over the output ports so that the ports can cope with the load and the switch can always, on average, pass as many frames to its outputs as arrive at its inputs. If the input flow of frames (summed over all ports) exceeds, on average, the output flow (also summed over all ports), frames accumulate in the switch's buffer memory and, once its volume is exceeded, are simply discarded. To ensure non-blocking operation, a fairly simple condition must be met:

Ck = (∑ Cpi) / 2,

where Ck is the switch performance and Cpi is the maximum performance of the protocol supported by the i-th switch port. The total performance of the ports counts every passing frame twice, as an incoming frame and as an outgoing one; and since in steady state the incoming traffic equals the outgoing traffic, the minimum switch performance sufficient to support non-blocking mode is half the total performance of the ports. If a port operates in half-duplex mode, for example 10 Mbit/s Ethernet, its performance Cpi is 10 Mbit/s; if in full-duplex mode, its Cpi is 20 Mbit/s.
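The condition above is easy to evaluate numerically. A minimal sketch for the 8-port example, assuming every port runs the same protocol rate:

```python
# Check of the non-blocking condition Ck = (sum of Cpi) / 2 from the text.
# Port rates are the per-port protocol performances Cpi; full duplex
# doubles each of them.

def required_capacity(port_rates_mbps):
    """Minimum switch performance Ck for sustained non-blocking mode."""
    return sum(port_rates_mbps) / 2

# 8 half-duplex 10 Mbit/s ports: each Cpi = 10.
half_duplex = [10] * 8
print(required_capacity(half_duplex))  # 40.0 Mbit/s

# The same ports in full duplex: each Cpi = 20.
full_duplex = [20] * 8
print(required_capacity(full_duplex))  # 80.0 Mbit/s
```

The division by two reflects the double counting noted in the text: each frame contributes once to the input sum and once to the output sum.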

The switch is sometimes said to support instantaneous non-blocking mode. This means that it can receive and process frames from all of its ports at the maximum protocol speed, regardless of whether a stable balance between ingress and egress traffic is ensured. The processing of some frames may, it is true, be incomplete: when the output port is busy, the frame is placed in the switch's buffer. To support instantaneous non-blocking mode, the switch must have a higher intrinsic performance, namely one equal to the total performance of its ports: Ck = ∑ Cpi.

It is no coincidence that the first LAN switch appeared for Ethernet technology. Besides the obvious reason, the great popularity of Ethernet networks, there was another, no less important one: this technology suffers more than others from increased waiting time for medium access as segment load grows. Therefore, Ethernet segments in large networks were the first to need a means of unloading bottlenecks, and that means became the switches, first from Kalpana and then from other companies.

Several companies began developing switching technology to improve the performance of other LAN technologies, such as Token Ring and FDDI. These switches supported both transparent bridging and source-route bridging. The internal organization of switches from different manufacturers sometimes differed greatly from that of the first EtherSwitch, but the principle of parallel frame processing on every port remained the same.

The widespread use of switches was undoubtedly facilitated by the fact that the introduction of switching technology did not require replacement of equipment installed in networks - network adapters, hubs, and cabling systems. The ports on the switches operated in normal half-duplex mode, so it was possible to transparently connect both an end node and a hub that organizes an entire logical segment.

Since switches and bridges are transparent to network layer protocols, their appearance on the network did not have any effect on the network's routers, if any.

The ease of use of a switch also lies in the fact that it is a self-learning device: if the administrator does not burden it with additional functions, it need not be configured at all - you just have to connect the cable connectors to the switch ports correctly, and it will then work on its own, effectively performing its task of improving network performance.




Switch

A switch is a device that selects one of the possible directions for data transmission.

Figure: 9.1 Layout of Switch 2000

In a communication network, a switch is a relay system (a system designed to transfer data or to convert protocols) that has the property of transparency, i.e. switching is carried out without any processing of the data. Such a switch has no buffers and cannot accumulate data, so the signal transmission rates in the connected data channels must be the same. The channel processes implemented by the switch are performed by special integrated circuits; unlike other types of relay systems, software is usually not used here.

Figure: 9.2 Switch structure

At first, switches were used only in wide area networks. Then they appeared in local networks, for example as private branch (office) exchanges. Later, switched local networks emerged, with LAN switches at their core.

A switch can connect servers into a cluster and serve as the basis for combining several workgroups. It forwards data packets between LAN nodes. Each switched segment gets access to the transmission channel without contention and sees only the traffic directed to it. The switch must give every port a maximum-speed connection free of competition from the other ports (unlike a shared hub). Switches typically have one or two high-speed ports and good management facilities. You can replace a router with a switch, add one to a stackable router, or use a switch as a base for connecting several hubs. A switch can be an excellent device for directing traffic between workgroup LAN hubs and busy file servers.

LAN switch (local-area network switch) - a device that ensures the interaction of segments of one or a group of local networks.

A LAN switch, like an ordinary switch, provides interaction between the local networks connected to it (Figure 9.8). In addition, it performs interface conversion when segments of different LAN types are connected - most often Ethernet networks, IBM Token Ring networks and FDDI (Fiber Distributed Data Interface) networks.

Figure: 9.1 Scheme of connecting local networks to switches

The list of functions performed by a LAN switch includes:

Providing end-to-end switching;

Routing facilities;

Support for the Simple Network Management Protocol (SNMP);

Emulation of a bridge or router;

Organization of virtual networks (VLANs);

High-speed relaying of data blocks.
