About storage networks. SAN building blocks. Fiber Channel Protocol

Ethernet switches of various classes, from those designed for home networks and small workgroups to equipment for the distributed networks of large companies, are used as the main “building block” for corporate data networks. The choice of specific products, their functionality, and the options for building the network infrastructure depend on the problem being solved and on the requirements for bandwidth, scale, network reliability, user mobility, and application support.

A switch is a device designed to connect several nodes of a computer network within one or more of its segments.

To choose the right switch, you need to understand the network topology, know the approximate number of users, the required data transfer rate for each network section, the security requirements, and much more, as well as understand the specifics of this class of network equipment.

Switches vary in the number and type of ports, architecture, design, functionality, reliability, performance, and price.

Introduction to Switching Technology

What is a switch and why is it needed

A switch combines various network devices, such as PCs and servers connected to the storage network, into a single network segment and enables them to communicate with each other. It determines which recipient the data is addressed to and sends it directly to that addressee. The exceptions are broadcast traffic, which goes to all network nodes, and traffic from devices for which the outgoing switch port is not yet known.

This improves network performance and security by relieving other network segments of the need to process data not intended for them.

The switch transmits information only to the addressee.


A switch works at the data link layer (the second layer, L2, of the OSI model). To connect several networks at the network layer (the third OSI layer, L3), routers are used.

Switch Principles

A switching table is stored in the switch's memory; it records the MAC addresses of the devices connected to the ports, that is, it maps the MAC address of each network node to a switch port. When data arrives on one of the ports, the switch analyzes it, determines the destination address, and selects from the table the port to which the data should be forwarded.

When the switch is powered on, the table is empty, and the switch works in learning mode: data arriving at any port is transmitted to all other ports. Meanwhile, the switch analyzes the frames and, having determined the MAC address of the sending host, enters it into the table. Subsequently, if one of the switch ports receives a frame destined for a host whose MAC address is already in the table, the frame is transmitted only through the port specified in the table. If the MAC address of the recipient host is not yet associated with any port, the frame is sent to all ports except the source port.

Formation of the switching table. The MAC addresses of network devices are mapped to specific switch ports.


How does switching work once the table is populated? Suppose the subscriber with address A sends a frame to the recipient with address D. From the table, the switch determines that the station with address A is connected to port 1 and the station with address D to port 4. Based on this, it establishes a virtual connection between ports 1 and 4 to transfer the message. After the transfer, the virtual connection is torn down.
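The forwarding logic described above can be expressed in a short sketch (a simplified Python model for illustration only, not vendor firmware; the MAC strings and port numbers are invented):

```python
class LearningSwitch:
    """Simplified model of L2 forwarding with a MAC address table."""

    def __init__(self, num_ports):
        self.ports = range(1, num_ports + 1)
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        # Learning: remember which port the sender is reachable on.
        self.mac_table[src_mac] = in_port
        if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in self.mac_table:
            # Broadcast or unknown destination: flood to all ports
            # except the one the frame arrived on.
            return [p for p in self.ports if p != in_port]
        # Known destination: forward only to the port from the table.
        return [self.mac_table[dst_mac]]

sw = LearningSwitch(num_ports=4)
print(sw.receive(1, "aa:aa:aa:aa:aa:aa", "dd:dd:dd:dd:dd:dd"))  # flooded: [2, 3, 4]
print(sw.receive(4, "dd:dd:dd:dd:dd:dd", "aa:aa:aa:aa:aa:aa"))  # learned: [1]
```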

Switching modes

For all the variety of switch designs, the basic architecture of these devices is determined by four components: ports, buffers, an internal bus, and a packet forwarding mechanism.

The packet/frame forwarding mechanism can work in several ways. With store-and-forward switching (intermediate buffering), the switch does not pass a packet on until it has completely read all the information it needs. It not only determines the recipient's address but also checks the checksum, so it can discard defective packets. This makes it possible to isolate an error-generating segment; the mode is therefore oriented toward reliability rather than speed. With cut-through switching, the switch reads only the address of the incoming packet and transmits it onward regardless of errors. This method is characterized by low latency.

Some switches use a hybrid method called threshold or adaptive switching. Under normal conditions, they perform cut-through switching while still checking checksums. If the number of errors reaches a predetermined threshold, they enter store-and-forward mode, and when the number of errors drops, they return to cut-through mode.
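The threshold logic might look like the following sketch (a hypothetical model; the 1% threshold and the counters are illustrative, real switches implement this in hardware):

```python
def choose_mode(error_count, frames_seen, current, threshold=0.01):
    """Pick the switching mode from the observed error rate.

    cut-through:       forward after reading only the destination address
    store-and-forward: buffer the whole frame and verify its checksum
    """
    error_rate = error_count / max(frames_seen, 1)
    if current == "cut-through" and error_rate >= threshold:
        return "store-and-forward"  # too many bad frames: start filtering them
    if current == "store-and-forward" and error_rate < threshold:
        return "cut-through"        # errors subsided: favor low latency again
    return current

mode = choose_mode(error_count=42, frames_seen=1000, current="cut-through")
print(mode)  # store-and-forward: a 4.2% error rate exceeds the 1% threshold
```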

One of the important parameters of a switch is its performance. It is determined by three main indicators: the data transfer rate between ports, the total throughput (the highest rate at which data is delivered to recipients), and the latency (the time between receiving a packet from the sender and transmitting it to the recipient). Another key feature is management capabilities.
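For a sense of scale, the forwarding rate a port needs for wire speed can be derived from the frame size. A minimal calculation for 64-byte frames, counting the mandatory 8-byte preamble and 12-byte inter-frame gap:

```python
def wire_speed_pps(link_bps, frame_bytes=64, preamble=8, ifg=12):
    """Frames per second required to saturate a link with minimal frames."""
    bits_per_frame = (frame_bytes + preamble + ifg) * 8
    return link_bps / bits_per_frame

print(round(wire_speed_pps(1_000_000_000)))   # ~1,488,095 pps for 1 GbE
print(round(wire_speed_pps(10_000_000_000)))  # ~14,880,952 pps for 10 GbE
```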

Types and features of switches

Managed and Unmanaged Switches

Ethernet switches are usually divided into two main types: unmanaged and managed. Unmanaged switches allow no configuration changes or any other settings. These are simple devices, ready to work right after power-on. Their advantages are low price and autonomous operation that requires no intervention; their drawbacks are the lack of management tools and lower performance.

Simple unmanaged switches are most common in home networks and small businesses.

Managed switches are more advanced devices that also work automatically but additionally offer manual control. Manual control allows you to configure the switch: for example, it provides the ability to set network policies, create virtual networks, and manage the device fully. The price depends on the switch's functionality and performance.

Switching can be managed at the data link (second) and network (third) layers of the OSI model; the devices are called, respectively, L2 and L3 managed switches. Management can be done through a web interface, a command line interface (CLI), Telnet, SSH, RMON, the Simple Network Management Protocol (SNMP), etc.

A managed switch allows you to configure bandwidth, create virtual networks (VLANs), and so on.

It is worth paying attention to SSH access and SNMP. The web interface facilitates the initial configuration of the switch, but almost always has fewer functions than the command line, so its presence is appreciated, but not required. Many models support all popular control types.

So-called smart switches, devices with a limited set of configuration options, are also a form of managed switch.

Unmanaged, smart switches and fully managed switches. Smart switches can provide web-based management and basic settings.

Sophisticated corporate switches have a complete set of management tools, including CLI, SNMP, a web interface, and sometimes additional functions, such as backing up and restoring configurations.

Many managed switches support additional features, such as QoS, port aggregation and/or mirroring, and stacking. Some switches support clustering, MLAG, or virtual stacking.

Stackable Switches

Stacking is the ability to combine several switches using special (or standard) cables so that the resulting assembly works as a single switch. Typically, a stack is used to connect a large number of nodes on a local network. If the switches are connected in a ring, the stack continues to work even if one switch fails.

Why build such a stack? First, investment protection: if the number of users or devices on the network grows and there are not enough ports, another switch can be added to the stack. Second, a stack is easier to manage; from the point of view of monitoring and control systems, it is one device. Third, the stack's switches share a single address table and a single IP and MAC address.

A stackable switch has special ports (interfaces) for connecting to the stack, often with physical integration of the internal buses. As a rule, the stack connection has a data transfer rate several times higher than that of the other switch ports. And in switches with a non-blocking architecture, traffic is not blocked during exchanges between the stack's switches.

Stackable managed switches can be combined into one logical device - the stack, thereby increasing the number of ports.

Proprietary stacking technologies are typically used; cables with SFP, GBIC, and similar connectors are sometimes employed. As a rule, up to 4, 8, 16, or 32 switches can be stacked. Many modern stackable switches are fault tolerant and, along with stacking, support the full set of L2 and L3 functions and many specialized protocols.

There are also virtualization technologies for switches, such as the Cisco Virtual Switching System (VSS) and the HPE Intelligent Resilient Framework (IRF). They can also be classified as stacking technologies, but unlike “classic” stacking (StackWise, FlexStack, etc.), ordinary Ethernet ports are used for communication between the switches. Thus, the switches can be located at a relatively large distance from each other.

Redundancy and Resiliency

Modern stack architectures include N-1 redundancy, distributed L2/L3 switching, link aggregation across the entire stack, failover of links in the event of a fault, and replacement of the active device in the stack without service interruption. In addition to the traditional STP, RSTP, and MSTP protocols, switches can support advanced technologies such as Smart Link and RRPP, perform protective link switchover at the millisecond level, and guarantee reliable network operation.

Some models support SEP (Smart Ethernet Protection), a ring network protocol that provides continuous service delivery. Another protocol, ERPS (Ethernet Ring Protection Switching), uses the Ethernet OAM features and automatic ring protection switching mechanism - also in milliseconds.

Many vendors use proprietary ring redundancy technologies that provide faster recovery than standard STP / RSTP protocols. One example is shown below.

Primary and backup ports for data transfer around the ring are selected. The switch blocks the backup port, and transmission takes place along the main route. All switches in the ring exchange synchronization packets. If the connection is lost, the backup port is unblocked and the backup route is activated.
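That behavior can be modeled in a few lines (a deliberately simplified sketch; real protocols such as ERPS and RRPP add control messages, timers, and guard intervals):

```python
class RingMaster:
    """Simplified model of ring protection on the switch that owns the ring."""

    def __init__(self):
        self.backup_blocked = True  # backup port starts blocked to break the loop

    def on_sync_packets_lost(self):
        # Failure detected on the primary route: unblock the backup port.
        self.backup_blocked = False

    def on_ring_restored(self):
        # Primary route healthy again: re-block the backup port to avoid a loop.
        self.backup_blocked = True

node = RingMaster()
print("backup blocked:", node.backup_blocked)  # True: traffic uses the main route
node.on_sync_packets_lost()
print("backup blocked:", node.backup_blocked)  # False: backup route activated
```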

To increase reliability, hot-swapping and/or redundancy of power supplies and cooling elements can be provided. Thanks to the optical ports available on some models, a switch can be connected to the core switch at a distance of up to 80 km. Such equipment makes it possible to create a high-performance fault-tolerant switching cluster, to build a modern L2 topology spread over several tens of kilometers, and to obtain a fault-tolerant stack of hundreds of ports with a single point of control, which greatly simplifies administration.

Switches in Network Architecture

The place and role of the switch in the network

Switches and routers play a critical role, especially in the enterprise environment. Switching is one of the most common networking technologies. Switches have pushed routers to the periphery of local networks, leaving them the role of organizing communications over the wide area network.

Due to microsegmentation, they allow you to increase network performance, make it possible to organize connected devices into logical networks and regroup them when necessary.

The traditional corporate network architecture includes three levels: access, aggregation / distribution, and core. On each of them, the switches perform specific network functions.

Switches can serve as the primary switches in medium-sized branches and organizations, function as local access switches in large organizations, and be used to join small groups into a single Layer 2 network. They are widely used in data centers and in the network core, in provider networks at the access and aggregation layers, and, with the spread of Ethernet technology, in a number of vertical applications, for example in industry and in building automation systems. Despite the proliferation of wireless technology, such networking equipment is also growing in popularity in the SMB and SOHO segments.

Many developers focus on improving information protection and traffic management mechanisms, in particular for voice or video. The growing volumes of traffic dictate the introduction of 10-gigabit and even higher speeds.

Modern switches can support numerous security mechanisms, including packet filtering at layers L2-L7 and a full set of ARP protection features, as well as dynamic routing with all the protocols needed to find the shortest paths. A highly competitive market offers ample choice among products from well-known Western brands, manufacturers from Asian countries, and Russian products.

Global Switch Market and Key Vendors

The main contribution to the 3% growth in the global market for switches and routers in 2015 was made by the corporate equipment segment: it accounted for almost 60% of sales. The world's largest manufacturers of Ethernet L2 / L3 switches are Cisco (over 62%), HPE, Juniper, Arista, Huawei. There is a growing demand for equipment for data centers, 10 and 40 Gigabit Ethernet switches, switches for large providers.

Sales of the five leading Ethernet switch vendors worldwide over recent quarters (according to IDC).

In the EMEA region, the Ethernet switch segment showed a 6.7% decline in the first half of 2016. An IDC report says Cisco remains the largest switch manufacturer in the EMEA market. Cisco and HPE accounted for over 68% of switch sales in the region. The leaders also included Arista and Huawei.

According to Dell Oro Group forecasts, the switch segment for data centers will grow at the fastest pace. The transition to the cloud model should also contribute to the implementation of SDN and switch sales for cloud data centers, while reducing demand for enterprise-class switches.

Features and varieties of switches

Core, distribution, and access switches allow you to create network architectures of different topologies, levels of complexity, and performance. These platforms range from simple switches with eight fixed ports to modular devices consisting of more than a dozen “blades” and numbering hundreds of ports.

Workgroup switches typically have a small number of ports and supported MAC addresses.

Backbone switches are distinguished by a large number of high-speed ports, the presence of additional management functions, advanced packet filtering, etc. In general, such a switch is much more expensive, more functional, and more productive than switches for workgroups. It provides efficient network segmentation.

The main parameters of a switch are the number of ports (when choosing, it is better to leave a margin for network expansion), switching speed (much lower for entry-level devices than for enterprise-class switches), bandwidth, automatic MDI/MDI-X detection (recognition of how the twisted-pair cable is wired), the presence of expansion slots (for example, for SFP interfaces), the size of the MAC address table (chosen with network growth in mind), and the form factor (desktop/rack-mount).

By design, there are switches with a fixed number of ports, chassis-based modular switches, stackable switches, and modular-stackable switches. Switches for service providers are divided into aggregation switches and access-layer switches: the former aggregate traffic at the network edge, while the latter include functions such as application-level data control, integrated security, and simplified management.

The data center should use switches that provide scalability of the infrastructure, continuous operation and flexibility of data transport. In Wi-Fi networks, a switch can play the role of a controller that controls access points.

Switches and Wi-Fi Networks

Depending on the design and deployment scenario of a Wi-Fi network (WLAN), the role of switches in it also changes. For example, it can be a centralized/managed architecture or a converged architecture (a combination of wired and wireless access). Most medium- and large-scale Wi-Fi networks are built on a centralized architecture with a switch acting as the Wi-Fi controller. All major manufacturers of high-end Wi-Fi solutions (Cisco, Aruba (HPE), Ruckus (Brocade), HPE, Huawei, etc.) have such offerings.

A simple WLAN does not need a controller, and the switch performs its basic functions.

The controller manages software loading / changing, configuration changes, RRM (dynamic radio resource management), communication with external servers (AAA, DHCP, LDAP, etc.), user authentication, QoS profiles, special functions, etc. Controllers can be grouped together to seamlessly roam clients between access points in the coverage area.

The controller centrally manages the devices of a wireless network and is designed for campus, branch, and SMB networks. A centralized Wi-Fi network architecture allows you to build large networks and manage them from a single point.

In a small corporate Wi-Fi network covering part of a floor, a floor, or a small building, switch-based controllers designed for a small number of access points (up to 10-20) can be used. Large corporate Wi-Fi networks spanning campuses, factory sites, ports, etc., require powerful, feature-rich controllers (for example, the Cisco 5508, Aruba A6000, or Ruckus ZoneDirector 3000). Sometimes a module-based solution for switches or routers is offered, for example the Cisco WiSM2 module in Cisco Catalyst 6500/6800 family switches, the Huawei ACU2 module in Huawei S12700, S9700, and S7700 switches, or the HPE JD442A module in the HPE 9500 switch.

In the latest edition of Gartner's “magic quadrant” (August 2016) for vendors of wired and wireless LAN infrastructure, the only leader besides Cisco was HPE, which had absorbed Aruba.

Automatic discovery of access points and centralized management save on configuration costs. Controllers can also provide protection against potential attacks, while self-optimization and self-healing functions keep the wireless network running smoothly. PoE support simplifies WLAN deployment.

Functional and design features of switches

Ethernet Switch Features and Supported Protocols

Traffic-handling functions can include flow control (Flow Control, IEEE 802.3x), which coordinates sending and receiving under high load to avoid packet loss. Jumbo Frame support (enlarged packets) improves overall network performance. Traffic prioritization (IEEE 802.1p) identifies more important packets (for example, VoIP) and sends them first. It is worth paying attention to this function if you plan to carry audio or video traffic.

VLAN support (IEEE 802.1Q) is a convenient tool for partitioning the enterprise network between departments, etc. The Traffic Segmentation function, which separates domains at the data link layer, allows you to configure the ports or port groups used to connect servers or backbone networks.
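An 802.1Q tag is four bytes inserted into the Ethernet header: the 0x8100 EtherType, then 3 priority bits (the IEEE 802.1p field), 1 DEI bit, and a 12-bit VLAN ID. A minimal sketch of packing and parsing such a tag:

```python
import struct

TPID = 0x8100  # EtherType value that marks an 802.1Q-tagged frame

def pack_vlan_tag(vlan_id, priority=0, dei=0):
    """Build the 4-byte 802.1Q tag: TPID plus the PCP/DEI/VID control field."""
    assert 0 <= vlan_id < 4096 and 0 <= priority < 8
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

def unpack_vlan_tag(tag):
    tpid, tci = struct.unpack("!HH", tag)
    return {"tpid": hex(tpid), "priority": tci >> 13,
            "dei": (tci >> 12) & 1, "vlan_id": tci & 0x0FFF}

tag = pack_vlan_tag(vlan_id=100, priority=5)  # e.g., voice traffic in VLAN 100
print(unpack_vlan_tag(tag))
# {'tpid': '0x8100', 'priority': 5, 'dei': 0, 'vlan_id': 100}
```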

Mirroring (duplication) of traffic (Port Mirroring) can be used to provide security within the network, monitor or verify the performance of network equipment. The LoopBack Detection function automatically blocks the port when looping (especially important when choosing unmanaged switches).

Link Aggregation (IEEE 802.3ad) improves throughput by combining multiple physical ports into a single logical port. IGMP Snooping comes in handy when broadcasting IPTV. Storm Control lets a port continue forwarding all other traffic during a broadcast/unicast “storm”.
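Link aggregation keeps all frames of one flow on one physical link by hashing header fields, which preserves frame ordering within the flow. A simplified distribution sketch (real switches hash combinations of MAC, IP, and port fields in hardware; CRC32 over MAC strings is only an illustration):

```python
import zlib

def pick_member_link(src_mac, dst_mac, num_links):
    """Map a flow onto one member link of the aggregated group."""
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % num_links

# Two flows across a 4-link aggregated port: each flow sticks to one link.
print(pick_member_link("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:01", num_links=4))
print(pick_member_link("aa:aa:aa:aa:aa:02", "bb:bb:bb:bb:bb:01", num_links=4))
```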

Switches can support dynamic routing protocols (e.g., RIP v2, OSPF) and Internet group management (e.g., IGMP v3). With BGP and OSPF support, a device can be used as a switching router for the domains and subdomains of a local network. Some models support overlay networks (TRILL), which reduces the load on MAC address tables and evenly loads channels along equal routes, significantly increasing the speed of access to network resources. This network equipment also differs in its mode of operation.

Switches L1-L4

The higher the level of the OSI model at which a switch operates, the more complex and expensive the device, and the more developed its functionality.

Layer 1 switches (hubs and repeaters) operate at the physical layer and process not data but electrical signals. Such equipment is practically no longer produced.

Layer 2 switches work at the data link layer with frames; they can analyze frames and determine the sender and recipient. They operate only with MAC addresses and cannot distinguish IP addresses. Such devices include all unmanaged switches and some managed models.

  • RMON (4 groups: Statistics, History, Alarm, and Event)
  • Two password levels: a user password and a backup password
  • Access and traffic prioritization profiles
  • Traffic segmentation
  • Bandwidth control
  • Port Security functions (limiting the number of MAC addresses on a given port)
  • Port/MAC-address-based IEEE 802.1x access control
  • Event logging with Syslog
  • TACACS, RADIUS, and SSH support
  • Software updates and saving the configuration file to external media
  • IEEE 802.1Q VLAN support (tag-based)
  • IEEE 802.1p packet prioritization with 4 queues
  • Spanning Tree protocol (IEEE 802.1D)
  • Rapid Spanning Tree protocol (IEEE 802.1w)
  • Broadcast Storm Control
  • Port trunking support: Link Aggregation (IEEE 802.3ad static mode)
  • Port mirroring (traffic from multiple ports to one selected port)
  • TFTP/BOOTP/DHCP client
  • Telnet support, built-in web server
  • CLI (command line interface)
  • IGMP to restrict broadcast domains in VLANs
  • SNMP v1/v3

General features of L2 switches.

L2 switches build switching tables and support IEEE 802.1p (traffic prioritization), IEEE 802.1Q (VLAN), IEEE 802.1D (Spanning Tree Protocol, STP), used to increase network resiliency, IEEE 802.1w (Rapid Spanning Tree Protocol, RSTP), which offers higher resilience and shorter recovery time, the more modern IEEE 802.1s (Multiple Spanning Tree Protocol, MSTP), and IEEE 802.3ad (Link Aggregation) for combining multiple ports into one high-speed port.

Layer 3 switches work at the network layer. These include a number of managed switch models and routers. They can route network traffic and forward it to other networks, work with IP addresses, and establish network connections.

Thus, they are effectively routers that implement logical addressing and route selection using routing protocols (RIP v1 and v2, OSPF, BGP, and proprietary protocols). Traditionally, L3 switches are used in local and metropolitan networks to carry traffic for the large number of devices connected to them, unlike routers, which provide access to the wide area network (WAN).

Layer 4 switches operate at the transport layer, support working with applications, and have some intelligent functions. They can identify applications by their TCP/UDP ports, recognize the SYN and FIN bits that mark the beginning and end of sessions, and read information in message headers. The designs of switches also vary.

Fixed Configuration Ethernet Switches and Modular Switches

Modular switches provide scalable performance, configuration flexibility, and phased expansion. Fixed-configuration switches allow you to build a network infrastructure for a wide range of tasks, including networks for building complexes, branches of large enterprises, medium-sized organizations, and SMB enterprises.

Fixed-configuration switches typically support up to 48 ports. Sometimes it is possible to install additional SFP/SFP+ ports.

Using SFP + uplinks, many switches can be connected to the upper level — the core of the network, providing high performance and load balancing across all channels. The high port density allows for more efficient use of limited space and power.

Modular switches are usually high-performance platforms that support a wide range of L3 protocols, a flexible set of interfaces, service virtualization and application optimization, network clusters (SMLT, SLT, RSMLT). They can be used in the core of large and medium-sized networks, in data center networks (network core and concentration of server connections).

Typical Modular Switch Features

Modular switches can have very high port densities by adding expansion modules. For example, some support more than 1000 ports. In large corporate networks, to which thousands of devices are connected, it is better to use modular switches. Otherwise, many fixed configuration switches are required.

The Cisco Catalyst 6800 is a modular switch for campus networks with 10/40/100G support. The extensible 4.5 RU platform holds from 16 to 80 1/10GE ports and supports BGP and MPLS.

Ethernet Switch Features

The main characteristics of a switch that determine its performance are switching speed, bandwidth, and frame transmission delay. These indicators are affected by the size of the frame buffer(s), internal bus performance, processor performance, and the size of the MAC address table.

General characteristics also include the ability to install in a rack, the capacity of RAM, the number of ports and uplinks / SFP ports, the speed of uplinks, support for working in the stack, and management methods.

Some vendors offer convenient configurators on their sites for selecting switches by characteristics: the number and type of ports (1/10/40GbE, optics/copper), the type of switching/routing (L2/L3, basic or dynamic), the speed and type of uplinks, the presence of PoE/PoE+, support for IPv6 and OpenFlow (SDN), FCoE, redundancy (power supplies/fabric/fans), and stacking options. Energy Efficient Ethernet (IEEE 802.3az) reduces power consumption by automatically adjusting it to the switch's actual network traffic.

Less expensive and less productive switches can be used at the access level, while more expensive high-performance switches are better used at the distribution and core levels of the network, where the performance of the entire system depends on the switching speed.

Types and Density of Ports

The group of switch ports for connecting end users traditionally consists of ports for twisted-pair cable with RJ-45 connectors. The signal transmission range in this case is up to 100 meters of total line length, which in most cases is enough for offices.

Ethernet ports (1/10 Gbit/s) for copper cables with RJ-45 connectors.

Choosing the type of uplink port, intended for communication with higher-level network nodes, is more difficult. In many cases, optical cables, which do not have the length restrictions of twisted pair, are preferable. Such ports often take pluggable SFP (Small Form-factor Pluggable) modules. The height and width of an SFP module are comparable to those of an RJ-45 slot.

SFP optical module.

The popular SFP+ and XFP interfaces can provide 10 Gbit/s transfer rates over distances up to 20 km. The slot for SFP+ modules has the same dimensions as SFP; the difference lies in the data transfer protocols between the module and the switch. XFP modules are larger than SFP+. Switches with SFP and SFP+ ports are often used at the network's aggregation layer. Meanwhile, data centers widely use not only Ethernet switches but also other types of switching equipment.

In a large enterprise network or a large data center with thousands of ports, port density matters more: how many ports of the required speed can fit in 1U (or one rack), taking into account expansion slots and additional modules. One should keep in mind the growing need to transfer large amounts of data and, accordingly, consider the density of ports of the required speed in the switches under consideration.

As for office networks, PoE and EEE support can be valuable qualities in a switch.

Network Power - PoE

Power over Ethernet (PoE) technology allows the switch to power the device through an Ethernet cable. This feature is commonly used by some IP phones, wireless access points, CCTV cameras, etc.

Power over Ethernet is a convenient alternative way to power network devices.

PoE provides flexibility in installing this type of equipment: it can be placed wherever there is an Ethernet cable. But make sure PoE is really needed, because switches that support it are considerably more expensive.

According to the IEEE 802.3af (PoE) standard, a direct current of up to 400 mA with a nominal voltage of 48 V is provided through two pairs of conductors in a four-pair cable with a maximum power of 15.4 watts.

The IEEE 802.3at (PoE+) standard provides for increased power (up to 30 W) and a new mechanism for mutual identification (classification) of devices, which lets devices identify each other when connected.
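When planning PoE, the per-port limits of the standards (15.4 W for 802.3af, 30 W for 802.3at) matter less in practice than the switch's total power budget. A small budgeting sketch (the device list and the 370 W budget are invented for illustration):

```python
POE_LIMITS_W = {"802.3af": 15.4, "802.3at": 30.0}  # per-port power at the source

def check_poe_budget(devices, budget_watts):
    """Verify each device fits its standard's limit and the total fits the budget."""
    total = 0.0
    for name, watts, standard in devices:
        if watts > POE_LIMITS_W[standard]:
            raise ValueError(f"{name} exceeds the {standard} per-port limit")
        total += watts
    return total <= budget_watts, total

devices = [
    ("IP phone",   6.5,  "802.3af"),
    ("Wi-Fi AP",   12.9, "802.3af"),
    ("PTZ camera", 25.0, "802.3at"),
]
ok, total = check_poe_budget(devices, budget_watts=370)
print(ok, f"({total} W of 370 W used)")  # True (44.4 W of 370 W used)
```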

Network Evolution and Switches

Data Center Switches: Ethernet, Fiber Channel, InfiniBand

For high-performance switching of servers and storage systems today, a wide range of technologies and devices is used - Ethernet switches, Fiber Channel, InfiniBand, etc.

In virtualized and cloud data centers, where “horizontal” traffic between servers and virtual machines prevails, the “leaf-spine” configuration comes to the rescue. Sometimes this configuration is called a “distributed core”. The term “Ethernet fabric” is also often used.

The spine layer can be considered a distributed core, except that instead of one or two core switches it is formed from a larger number of “spine” switches with high port density.

The advantages of this configuration are as follows: horizontal traffic between the “leaves” is guaranteed to pass in one hop through the “spine”, so latency is predictable; when equipment fails, performance suffers less; and such a configuration is easier to scale.
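The predictable latency follows directly from the topology: every leaf connects to every spine, so any leaf-to-leaf path is exactly two hops. A sketch that builds such a fabric and checks that property (the switch counts are arbitrary):

```python
import itertools

def build_leaf_spine(num_spines, num_leaves):
    """Full mesh between layers: every leaf has a link to every spine."""
    return {(f"leaf{l}", f"spine{s}")
            for l in range(num_leaves) for s in range(num_spines)}

links = build_leaf_spine(num_spines=4, num_leaves=8)

# Any two leaves can communicate in exactly two hops: leaf -> spine -> leaf.
for a, b in itertools.combinations(range(8), 2):
    assert any((f"leaf{a}", f"spine{s}") in links and
               (f"leaf{b}", f"spine{s}") in links for s in range(4))
print("all leaf pairs reachable in 2 hops,", len(links), "links total")  # 32 links
```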

The need for higher data rates keeps growing. Over the previous years, six Ethernet standards were created: 10 Mbit/s, 100 Mbit/s, 1 Gbit/s, 10 Gbit/s, 40 Gbit/s, and 100 Gbit/s. In 2016, the Ethernet community worked hard on new speed standards: 2.5 Gbit/s, 5 Gbit/s, 25 Gbit/s, 50 Gbit/s, and 200 Gbit/s. The recently adopted IEEE 802.3 specifications (including subgroups) cover a range from 25 Gbit/s per port to a total channel throughput of 400 Gbit/s. Work on the 400GbE standard (802.3bs) is planned to be completed in March 2017; it will use several lanes of 50 or 100 Gbit/s.

Cisco Systems dominates the world market for data center Ethernet switches (according to IDC, 2015).

Along with 40/100GbE, InfiniBand is gaining wider adoption in the data center. InfiniBand (IB) technology is mainly used in high-performance computing (HPC), multi-node clusters, and grid computing. Manufacturers of modular servers use it in internal connections (backplanes) and crossbar switches. On InfiniBand EDR (Enhanced Data Rate) 12x switches, the port speed reaches 300 Gbit/s.

Modular server with an integrated InfiniBand switch.

Storage Area Networks (SANs) are traditionally built on the basis of the FC (Fiber Channel) protocol, which provides fast and reliable transport for transferring data between disk arrays and servers. FC provides guaranteed low latency, high reliability and performance of the disk subsystem.

An FC switch (in a redundant fabric) is a key element of a SAN.

FC traffic can also be transmitted over Ethernet while maintaining the predictability and performance of Fiber Channel (FCoE). For this, the Converged Enhanced Ethernet (CEE) protocol was developed.

It is believed that combining SAN and LAN traffic in the same network segment using FCoE offers a number of advantages when building data centers, including reducing the initial cost of equipment and the operational costs of support, maintenance, power, and cooling. However, this approach has not been widely adopted.

An FCoE switch provides SAN and LAN convergence.

A dedicated SAN (FC- or iSCSI-based) remains the best choice for high-speed data access. Its traditional Fiber Channel protocol was originally designed for fast transfers of large blocks with low latency. An important driver of SAN market growth will be the transition to next-generation equipment: Fiber Channel Gen 6 switches and directors (32 Gbit/s). That transition has already begun.

Changes in data transfer rates in deployed FC, InfiniBand, and Ethernet networks, according to Mellanox.

It is important to choose equipment that suits the network's current requirements but leaves performance headroom for further growth.

Ethernet Fabric Technology

Fabric technology from Fiber Channel SANs has also found use in Ethernet networks. Along with virtual routing platforms and SDN controllers, Ethernet fabrics pave the way for SDN/NFV, relying on open, automated, software-configurable components, which contributes to flexibility and lower costs.

Ethernet fabrics, along with the complementary TRILL and Shortest Path Bridging (SPB) technologies, are an alternative to complex and inefficient three-tier networks built on Spanning Tree.

Switching fabrics now cover storage networks, campus networks, and data center networks. They reduce operating costs, increase network utilization, accelerate application deployment, and support virtualization. The evolution of switching fabrics continues.

White-box, Bare-metal, and Open Networking Switches

Recently, the Open Networking concept has been gaining ground; its goal is to “separate” the switch's operating system from the hardware platform and give customers the freedom to combine network operating systems and hardware. Unlike traditional switches that come with a preinstalled OS, a bare-metal switch can be purchased from one manufacturer and its software from another.

Bare-metal means that no network OS is installed on the switch; there is only a bootloader for installing one.

Such equipment is produced, for example, by Taiwanese and Russian manufacturers. A number of vendors also offer white-box switches: bare-metal switches with a preinstalled network OS. Such switches provide greater flexibility and a degree of customer independence from the equipment manufacturer. Their price is lower than that of products from large vendors; according to Dell'Oro Group, they are 30-40% cheaper than traditional branded models. The network OS typically supports all standard L2/L3 protocols and, in some cases, the OpenFlow protocol.

Traditional switches (left) and White box switches (right).

The main target segment of the white-box switch market is the data center. These switches allow the network OS to be tailored to specific problems. However, whether they make sense in campus or distributed corporate networks depends on how many switches are in the network, how often the configuration changes, and whether the company has specialists who can support open-source network operating systems. On small campus networks, the benefits are dubious.

According to an Infonetics Research forecast, by 2019 bare-metal switches will account for almost 25% of all switch ports shipped to data centers around the world.

Virtual switches

With the growth in computing power of x86 processors, a software-based virtual switch can handle the role of a switch quite well. It is convenient, for example, for providing network access to virtual machines running on a physical server. Logical (virtual) Ethernet ports are created on the virtual machines (or in containers, such as Docker), and the VMs connect to the virtual switch through these ports.

The three most popular virtual switches are VMware Virtual Switch, Cisco Nexus 1000v, and Open vSwitch. The latter is an open source virtual switch, licensed under the Apache 2.0 license, and designed to run on Linux-based hypervisors such as KVM and Xen.

Open vSwitch is a multilayer open-source virtual switch designed to work in hypervisors and on computers running virtual machines. It supports the OpenFlow protocol for controlling switching logic.

Open vSwitch (OVS) supports a wide range of technologies, including NetFlow, sFlow, Port Mirroring, VLAN, and LACP. It can work in virtual environments and can also serve as the control plane for hardware switches. Network OSes based on OVS are widely used on white-box and bare-metal switches. OVS has many areas of application, from SDN networks to switching traffic between virtual network functions (NFV).
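For orientation, OVS is typically administered with the standard ovs-vsctl utility. The add-br and add-port commands and the VLAN tag setting below are standard; the Python wrapper and the names br0, eth1, and vm1 are just an illustrative sketch, and running it requires Open vSwitch to be installed:

```python
import subprocess

def ovs(*args):
    """Run an ovs-vsctl command and return its output."""
    return subprocess.run(["ovs-vsctl", *args], check=True,
                          capture_output=True, text=True).stdout

ovs("add-br", "br0")                     # create a virtual switch
ovs("add-port", "br0", "eth1")           # attach a physical NIC to it
ovs("add-port", "br0", "vm1", "tag=10",  # attach a VM-facing port in VLAN 10
    "--", "set", "interface", "vm1", "type=internal")
print(ovs("show"))                       # inspect the resulting configuration
```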

SDN / NFV Switches

As equipment functionality expands, networks are becoming faster and more intelligent. The performance of modern network core switches reaches 1.5 Tbit/s and higher, and the traditional development path involves further increases in capacity. The expansion of functionality is accompanied by increasing specialization of core devices and the network periphery. Corporate customers have new requirements in areas such as information security, flexibility, reliability, and cost-effectiveness.

The concept of SDN (Software-Defined Networking) is now widely discussed. The essence of SDN is the physical separation of the network control plane from the data (forwarding) plane, moving the switches' management functions into software running on a separate server (the controller).

The goal of SDN is a flexible, manageable, adaptive and economical architecture that can adapt effectively to the transmission of large flows of heterogeneous traffic.

SDN switches typically use the OpenFlow control protocol. Most SDN switches support standard network protocols at the same time. Currently, the scope of SDNs is mainly server farm data centers and niche solutions where SDN successfully complements other technologies. In the Russian market, SDN technology is most in demand by public cloud operators.
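The control/data plane split can be illustrated with a minimal match-action table in the spirit of OpenFlow (a conceptual sketch, not the actual protocol encoding):

```python
# The controller (control plane) installs rules; the switch (data plane)
# only matches packets against them and applies the stored actions.
flow_table = []  # list of (priority, match_fields, action)

def install_flow(priority, match, action):
    """What the controller would do over a southbound protocol such as OpenFlow."""
    flow_table.append((priority, match, action))
    flow_table.sort(key=lambda flow: -flow[0])  # highest priority first

def forward(packet):
    """Pure data-plane lookup: no decision logic beyond the installed table."""
    for _priority, match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send-to-controller"  # table miss: ask the control plane

install_flow(200, {"dst_ip": "10.0.0.5"}, "output:port2")
install_flow(100, {"vlan": 10}, "output:port7")
print(forward({"dst_ip": "10.0.0.5", "vlan": 10}))  # output:port2
print(forward({"dst_ip": "10.0.0.9"}))              # send-to-controller
```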

Network Functions Virtualization (NFV) aims to optimize network services by decoupling network functions (for example, DNS, caching, etc.) from the hardware they run on. It is believed that NFV makes software more universal, accelerates the introduction of new network functions and services, and does not require abandoning the already deployed network infrastructure.

According to a CNews Analytics survey (2015), Russian customers are generally optimistic about the prospects for SDN and NFV technologies, which can reduce capital costs and accelerate the introduction of new services.

Forecasts for SDN and NFV in Russia are so far contradictory. According to J'son & Partners, the Russian SDN segment will amount to $25-30 million in 2017. The main users of SDN and NFV will be the owners of large data centers and federal telecom operators.

Meanwhile, switch manufacturers for the enterprise market segment are offering high-speed equipment with lower cost of ownership, flexible networking capabilities, support for various classes of applications, and advanced security features.

In this article, we will consider what types of data storage systems currently exist, as well as some of the main storage components: external connection interfaces (interaction protocols) and the drives on which data is stored. We will also make a general comparison of the capabilities they provide. For examples, we will refer to the storage line offered by DELL.

  • DAS Model Examples
  • NAS Model Examples
  • SAN Model Examples
  • Types of Storage Media and Interaction Protocols
  • Fiber Channel Protocol
  • iSCSI Protocol
  • SAS Protocol
  • Storage Protocol Comparison

Existing Storage Types

In the case of an individual PC, the storage system can be understood as the internal hard disk or a disk system (RAID array). When it comes to data storage at different enterprise levels, three technologies for organizing data storage can traditionally be distinguished:

  • Direct Attached Storage (DAS);
  • Network Attached Storage (NAS);
  • Storage Area Network (SAN).

DAS (Direct Attached Storage) devices are a solution in which the storage device is connected directly to a server or workstation, usually through an interface using the SAS protocol.

Network Attached Storage (NAS) devices are free-standing integrated disk systems, essentially NAS servers with their own specialized OS and a set of useful functions for quickly launching the system and providing file access. The system connects to a regular computer network (LAN) and is a quick solution to the shortage of free disk space available to the users of that network.

A Storage Area Network (SAN) is a dedicated network that integrates storage devices with application servers, usually built on the Fiber Channel protocol or the iSCSI protocol.

Now let's take a closer look at each of the above types of storage systems and their pros and cons.

DAS (Direct Attached Storage) Storage Architecture

The main advantages of DAS systems include their low cost (compared to other storage solutions), ease of deployment and administration, and the high speed of data exchange between the storage system and the server. This is precisely why they have gained great popularity in the segment of small offices, hosting providers, and small corporate networks. At the same time, DAS systems have their drawbacks, including non-optimal resource utilization, since each DAS system requires a dedicated server and allows a maximum of 2 servers to be connected to the disk shelf in a specific configuration.

Figure 1: Direct Attached Storage Architecture

Advantages:

  • Fairly low cost. In essence, this storage system is a disk enclosure with hard disks moved outside the server.
  • Easy to deploy and administer.
  • High-speed exchange between the disk array and the server.

Disadvantages:

  • Low reliability. If the server to which the storage is connected fails, the data becomes unavailable.
  • Low degree of resource consolidation: the entire capacity is available to one or two servers, which reduces the flexibility of distributing data between servers. As a result, it is necessary to purchase either more internal hard drives or additional disk shelves for other server systems.
  • Low utilization of resources.

DAS Model Examples

Among the interesting devices of this type, the DELL PowerVault MD series stands out. The entry-level disk shelves (JBOD), the MD1000 and MD1120, allow you to create disk arrays of up to 144 disks. This is achieved through the modular architecture: up to 6 enclosures can be connected to the array, three disk shelves per channel of the RAID controller. For example, a rack of 6 DELL PowerVault MD1120 units yields an array with an effective capacity of 43.2 TB. Such disk shelves are connected with one or two SAS cables to the external ports of RAID controllers installed in Dell PowerEdge servers and are managed from the server's own management console.
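The 43.2 TB figure is easy to reproduce with a one-line check (assuming the 24-slot MD1120 enclosures are filled with 300 GB drives; the drive size is our assumption for the example, not stated above):

```python
# 6 enclosures x 24 drives each (DELL PowerVault MD1120) x 300 GB drives
enclosures, drives_per_enclosure, drive_size_tb = 6, 24, 0.3
total_tb = enclosures * drives_per_enclosure * drive_size_tb
print(f"{enclosures * drives_per_enclosure} drives, {total_tb:.1f} TB")  # 144 drives, 43.2 TB
```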

If you need an architecture with high fault tolerance, for example for a fault-tolerant MS Exchange or SQL Server cluster, the DELL PowerVault MD3000 is suitable. This system has active logic inside the disk shelf and is fully redundant thanks to two integrated RAID controllers operating in an active-active scheme and holding mirrored copies of the data buffered in cache memory.

Both controllers process read and write streams simultaneously, and if one fails, the second “picks up” the data from the neighboring controller. The connection to the low-level SAS controllers inside the 2 servers (the cluster) can be made through several interfaces (MPIO), which provides redundancy and load balancing in Microsoft environments. To increase disk space, the PowerVault MD3000 can attach 2 additional MD1000 disk shelves.

Network Attached Storage (NAS) architecture

NAS (Network Attached Storage) technology has developed as an alternative to universal servers that carry many functions (printing, applications, fax server, email, etc.). In contrast, NAS devices perform only one function: file server. And they try to perform it as well, as simply, and as quickly as possible.

NAS devices connect to the LAN and provide data access to an unlimited number of heterogeneous clients (clients with different operating systems) or to other servers. Currently, almost all NAS devices are designed for Ethernet networks (Fast Ethernet, Gigabit Ethernet) running TCP/IP. NAS devices are accessed using special file access protocols; the most common are CIFS, NFS, and DAFS. Inside these servers run specialized OSes, such as MS Windows Storage Server.

Figure 2: Network Attached Storage Architecture

Advantages:

  • Low cost and availability of resources not only to individual servers but to any computer in the organization.
  • Ease of resource sharing.
  • Easy to deploy and administer.
  • Versatility for clients (one server can serve MS, Novell, Mac, and Unix clients).

Disadvantages:

  • Access to information through “network file system” protocols is often slower than access to a local disk.
  • Most low-cost NAS servers do not provide the fast, flexible block-level access inherent in SAN systems; they work only at the file level.

NAS Model Examples

Classic NAS solutions currently available include the PowerVault NF100/500/600. These are systems based on mainstream single- and dual-processor Dell servers, optimized for rapid deployment of NAS services. They allow you to create file storage of up to 10 TB (PowerVault NF600) using SATA or SAS disks and to connect the server to the LAN. There are also higher-performance integrated solutions, such as the PowerVault NX1950, which holds 15 disks and expands to 45 by connecting additional MD1000 disk shelves.

A major advantage of the NX1950 is the ability to work not only with files but also with data blocks via the iSCSI protocol. The NX1950 can also act as a “gateway” that provides file access to iSCSI-based (block-access) storage systems, such as the MD3000i or Dell EqualLogic PS5x00.

Storage Area Network (SAN) Architecture

A Storage Area Network (SAN) is a special dedicated network that combines storage devices with application servers; it is usually built on the Fiber Channel protocol or on the increasingly popular iSCSI protocol. Unlike a NAS, a SAN has no notion of files: file operations are performed on the servers connected to the SAN. A SAN operates with blocks, like a kind of large hard drive. The ideal result of a SAN is that any server under any operating system can access any part of the disk capacity located in the SAN. The endpoints of a SAN are the application servers and the storage systems (disk arrays, tape libraries, etc.); between them, as in an ordinary network, lie adapters, switches, bridges, and hubs. iSCSI is a more “friendly” protocol because it is based on standard Ethernet infrastructure: network cards, switches, and cables. Moreover, iSCSI-based storage systems are the most popular choice for virtualized servers because the protocol is easy to configure.

Figure 3: Storage Area Network Architecture

Advantages:

  • High reliability of access to data located on external storage systems. Independence of the SAN topology from the storage systems and servers used.
  • Centralized data storage (reliability, security).
  • Convenient centralized management of switching and data.
  • Offloading of intensive I/O traffic to a separate network, relieving the LAN.
  • High speed and low latency.
  • Scalability and flexibility of the logical SAN structure.
  • The ability to organize backup and remote storage as well as remote backup and disaster recovery systems.
  • The ability to build fault-tolerant cluster solutions at no additional cost on the basis of the existing SAN.

Disadvantages:

  • Higher cost.
  • Difficulty in setting up FC systems.
  • The need to certify specialists in FC networks (iSCSI is a simpler protocol).
  • More stringent requirements for component compatibility and validation.
  • The appearance of DAS “islands” in FC-based networks due to the high cost, when individual servers with internal disk space, NAS servers, or DAS systems appear at enterprises for lack of budget.

SAN Model Examples

Currently, there is a fairly large selection of disk arrays for building a SAN, ranging from models for small and medium-sized enterprises, such as the DELL AX series, which allows you to create storage of up to 60 TB, to disk arrays for large corporations, the DELL/EMC CX4 series, which supports storage of up to 950 TB. There is also an inexpensive iSCSI-based solution, the PowerVault MD3000i: it allows you to connect up to 16-32 servers, install up to 15 disks in one device, and expand the system with two MD1000 shelves, creating an array of up to 45 TB.

Of particular note is the iSCSI-based Dell EqualLogic. It is positioned as enterprise-wide storage and is comparable in price to the Dell|EMC CX4 systems with their modular port architecture supporting both the FC and iSCSI protocols. EqualLogic is a peer-to-peer system: each disk shelf has active RAID controllers. When these arrays are combined into a single system, the performance of the disk pool grows smoothly as the available storage capacity increases. The system allows arrays of more than 500 TB to be created, is configured in less than an hour, and does not require specialized administrator knowledge.

The licensing model also differs from the rest: the initial price already includes all the options for snapshots, replication, and integration tools for various OSes and applications. This system is considered one of the fastest in MS Exchange tests (ESRP).

Types of storage media and protocol of interaction with storage

Having decided on the type of storage system most suitable for your tasks, you need to move on to choosing the protocol for interacting with the storage system and the drives it will use.

Currently, SATA and SAS disks are used to store data in disk arrays. Which drives to choose depends on the specific tasks. A few facts are worth noting.

SATA II drives:

  • Available single-drive capacities up to 1 TB
  • Rotational speeds of 5400-7200 RPM
  • I/O speeds up to 2.4 Gbit/s
  • MTBF roughly half that of SAS drives
  • Less reliable than SAS drives
  • About 1.5 times cheaper than SAS drives

SAS drives:

  • Available drive capacities up to 450 GB
  • Rotational speeds of 7200 (NearLine), 10,000, and 15,000 RPM
  • I/O speeds up to 3.0 Gbit/s
  • MTBF twice that of SATA II drives
  • More reliable drives

Important! Last year, industrial production began of SAS disks with a reduced rotational speed of 7200 RPM (Near-line SAS drives). This made it possible to increase the capacity of a single drive to 1 TB and to reduce the power consumption of drives with a high-speed interface. The cost of such drives is comparable to that of SATA II drives, while reliability and I/O speed remain at the SAS level.

Thus, at the moment it is really worth thinking seriously about the data storage protocols you are going to use in your corporate storage.

Until recently, the main protocols for interacting with storage systems were Fibre Channel and SCSI. The iSCSI and SAS protocols have now come to replace SCSI, expanding its functionality. Let's look at the pros and cons of each protocol and the corresponding connection interfaces below.

Fiber Channel Protocol

In practice, modern Fiber Channel (FC) offers speeds of 2 Gbit/s (Fiber Channel 2 Gb), 4 Gbit/s (Fiber Channel 4 Gb), or 8 Gbit/s, full duplex; that is, such speed is provided simultaneously in both directions. At these speeds, connection distances are practically unlimited: from the standard 300 meters on the most “ordinary” equipment to several hundred or even thousands of kilometers with specialized equipment. The main advantage of the FC protocol is the ability to combine many storage devices and hosts (servers) into a single storage area network (SAN). Devices can be distributed over long distances, channels can be aggregated, access paths can be made redundant, equipment can be hot-plugged, and noise immunity is high. On the other hand, FC means high cost and high complexity of installing and maintaining disk arrays.

Important! Two terms must be distinguished: the Fiber Channel protocol and the fiber-optic interface. The Fiber Channel protocol can run over different interfaces: over fiber-optic connections with different modulation as well as over copper connections.

Advantages:

  • Flexible scalability of the storage system;
  • The ability to build storage systems over significant distances (though shorter than with iSCSI, where, in theory, the entire global IP network can act as a carrier);
  • Extensive redundancy capabilities.

Disadvantages:

  • High cost of the solution;
  • Even higher cost when building an FC network over hundreds or thousands of kilometers;
  • High complexity of implementation and maintenance.

Important! In addition to the advent of the 8 Gbit/s FC protocol, the FCoE (Fiber Channel over Ethernet) protocol is expected to appear, which will allow standard IP networks to be used for exchanging FC packets.

iSCSI Protocol

The iSCSI protocol (encapsulation of SCSI packets in the IP protocol) allows users to create storage networks based on IP, using Ethernet infrastructure and RJ-45 ports. Thus, iSCSI overcomes the limitations of direct-attached storage, including the inability to share resources across servers and the inability to expand capacity without shutting down applications. The transfer speed is currently limited to 1 Gbit/s (Gigabit Ethernet), but this is sufficient for most medium-sized business applications, as numerous tests confirm. Interestingly, single-channel transfer speed matters less than the RAID controller algorithms and the ability to aggregate arrays into a single pool, as with DELL EqualLogic, where three 1 Gbit/s ports are used on each array and the load is balanced among the arrays of a group.
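The essence of iSCSI is layering: a SCSI command travels inside an iSCSI PDU, which travels in an ordinary TCP stream (port 3260 by default). A deliberately simplified sketch of that layering (a toy header, not the 48-byte Basic Header Segment defined in RFC 7143):

```python
import struct

def wrap_scsi_in_iscsi(scsi_cdb: bytes) -> bytes:
    """Toy illustration of encapsulation: a fake marker header plus the SCSI CDB."""
    header = struct.pack("!4sI", b"iSCS", len(scsi_cdb))  # marker + payload length
    return header + scsi_cdb

# SCSI READ(10): opcode 0x28, read 1 block at logical block address 16.
read10 = bytes([0x28, 0, 0, 0, 0, 0x10, 0, 0, 1, 0])
pdu = wrap_scsi_in_iscsi(read10)

# The PDU would then be sent over a plain TCP connection to the target portal:
#   sock = socket.create_connection(("192.168.1.50", 3260))  # address invented
#   sock.sendall(pdu)
print(len(pdu), "bytes on the wire for a", len(read10), "byte SCSI command")
```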

It is important to note that iSCSI-based SANs offer the same benefits as SANs using Fiber Channel, but they simplify network deployment and management and significantly reduce the cost of storage.

Advantages:

  • High availability;
  • Scalability;
  • Ease of administration, since Ethernet technology is used;
  • Lower cost of an iSCSI SAN than an FC one;
  • Easy integration into virtualization environments.

Disadvantages:

  • There are certain restrictions on using iSCSI storage with some OLAP and OLTP applications, with real-time systems, and when working with a large number of HD video streams;
  • High-end iSCSI storage systems, like FC storage systems, require fast, expensive Ethernet switches;
  • It is recommended to use dedicated Ethernet switches or VLANs to separate the data streams. Network design is no less important a part of the project than it is when designing FC networks.

Important! Soon, manufacturers promise to mass-produce iSCSI SANs with data transfer rates up to 10 Gbit/s. The final version of the DCE (Data Center Ethernet) protocol is also in preparation; devices supporting DCE are expected to appear in volume by 2011.

In terms of interfaces, the iSCSI protocol uses 1 Gbit/s Ethernet interfaces, which can be either copper or fiber-optic for long-distance operation.

SAS Protocol

The SAS protocol and the interface of the same name are designed to replace parallel SCSI and to achieve higher throughput than SCSI. Although SAS uses a serial interface, in contrast to the parallel interface of traditional SCSI, SCSI commands are still used to control SAS devices. SAS provides a physical connection between a data array and several servers over short distances.

Advantages:
  • Acceptable price;
  • Ease of storage consolidation - although SAS-based storage cannot connect to as many hosts (servers) as SAN configurations using FC or iSCSI, organizing shared storage for several servers over SAS requires no additional equipment;
  • Higher throughput thanks to 4-channel connections within a single interface: each channel provides 3 Gb/s, giving a data transfer rate of 12 Gb/s (currently the highest rate for storage; see the sketch after this list).

Disadvantages:
  • Limited reach - the cable length cannot exceed 8 meters, so SAS-attached storage is optimal only when the servers and arrays sit in the same rack or the same server room;
  • The number of connected hosts (servers) is usually limited to a few nodes.
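The 12 Gb/s figure quoted in the list above is plain lane arithmetic: four 3 Gb/s channels aggregated into one wide port. A tiny illustrative sketch (raw line rate only; encoding overhead is deliberately ignored):

```python
# Back-of-the-envelope arithmetic for a SAS wide port.
LANE_RATE_GBPS = 3.0   # per-lane rate of first-generation SAS
LANES = 4              # lanes in a typical SAS wide port

aggregate_gbps = LANE_RATE_GBPS * LANES
print(f"{LANES} lanes x {LANE_RATE_GBPS} Gb/s = {aggregate_gbps} Gb/s")  # 12.0 Gb/s
```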

Important! SAS technology with a per-channel data transfer rate of 6 Gb/s is expected in 2009, which will significantly increase the attractiveness of this protocol.

Comparison of storage connection protocols

The following is a summary comparison of the capabilities of the various protocols for interacting with storage systems (iSCSI, SAS, and Fiber Channel).

Architecture:
  • iSCSI - SCSI commands are encapsulated in IP packets and transmitted over Ethernet;
  • SAS - serial transfer of SCSI commands;
  • Fiber Channel - serial transfer of SCSI commands over a switched network.

Distance between the disk array and the node (server or switch):
  • iSCSI - limited only by the reach of IP networks;
  • SAS - no more than 8 meters between devices;
  • Fiber Channel - up to 50,000 meters without the use of specialized repeaters.

Scalability:
  • iSCSI - millions of devices (with IPv6);
  • SAS - 32 devices;
  • Fiber Channel - 256 devices; up to 16 million devices with the FC-SW (fabric switches) architecture.

Performance:
  • iSCSI - 1 Gb/s (development up to 10 Gb/s is planned);
  • SAS - 3 Gb/s per channel, up to 12 Gb/s with 4 channels (up to 6 Gb/s per channel expected in 2009);
  • Fiber Channel - up to 8 Gb/s.

Level of investment (implementation cost):
  • iSCSI - minor, since Ethernet is used;
  • SAS - average;
  • Fiber Channel - significant.
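For illustration, the same comparison can be encoded as data and used for a first rough shortlisting of protocols by required distance. The figures merely mirror the comparison above; this is not a sizing tool.

```python
# The comparison above, encoded as data (figures taken from the summary).
PROTOCOLS = {
    "iSCSI": {"max_distance_m": None,    # limited only by the IP network
              "speed_gbps": 1, "cost": "minor"},
    "SAS":   {"max_distance_m": 8,
              "speed_gbps": 12, "cost": "average"},
    "FC":    {"max_distance_m": 50_000,  # without specialized repeaters
              "speed_gbps": 8, "cost": "significant"},
}

def candidates(distance_m: float) -> list[str]:
    """Protocols whose reach covers the required distance."""
    return [name for name, p in PROTOCOLS.items()
            if p["max_distance_m"] is None or distance_m <= p["max_distance_m"]]

print(candidates(5))      # ['iSCSI', 'SAS', 'FC']
print(candidates(1_000))  # ['iSCSI', 'FC']
```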

Thus, at first glance the solutions divide quite clearly according to customer requirements. In practice, however, things are not so simple: additional factors come into play, such as budget constraints, the dynamics of the organization's growth (and of the growth of the stored data volume), industry specifics, and so on.

In the simplest case, a SAN consists of storage systems, switches, and servers connected by optical communication channels. Besides disk storage systems proper, a SAN can also include disk libraries, tape libraries (tape drives), devices for storing data on optical disks (CD/DVD and others), and so on.

An example of a highly reliable infrastructure in which servers are connected simultaneously to a local area network (left) and to a storage network (right). Such a scheme provides access to data located on the storage system in case of failure of any processor module, switch, or access path.

Using SAN allows you to provide:

  • centralized resource management of servers and storage systems;
  • connection of new disk arrays and servers without stopping the operation of the entire storage system;
  • use of previously purchased equipment in conjunction with new data storage devices;
  • quick and reliable access to data storage devices located at a great distance from the servers, without significant performance losses;
  • acceleration of the backup and recovery process (BURA).

  History

The development of network technologies led to two network solutions for storage: Storage Area Networks (SAN) for exchanging data at the block level, supported by client file systems, and Network Attached Storage (NAS) servers for serving data at the file level. To distinguish traditional storage from networked storage, another retronym was proposed: Direct Attached Storage (DAS).

DAS, SAN, and NAS appeared on the market in sequence, reflecting the evolving chain of links between the applications that use data and the bytes on the medium that contains that data. Once upon a time, application programs read and wrote blocks themselves; then drivers appeared as part of the operating system. In modern DAS, SAN, and NAS the chain consists of three links: the first is the assembly of RAID arrays, the second is the processing of metadata that interprets binary data as files and records, and the third is the services that supply data to the application. The approaches differ in where and how these links are implemented. With DAS, the storage is "bare": it only provides the ability to store and access data, and everything else is done on the server side, starting with interfaces and drivers. With the advent of SAN, RAID provisioning moves to the storage side; everything else remains as with DAS. NAS differs in that metadata processing also moves to the storage system to provide file access; there the client only needs to support the data services.

The emergence of SANs became possible after the Fiber Channel (FC) protocol was developed in 1988 and approved by ANSI as a standard in 1994. The term Storage Area Network dates back to 1999. Over time, FC partially gave way to Ethernet, and IP SANs with iSCSI connectivity became widespread.

The idea of a NAS network storage server belongs to Brian Randell of the University of Newcastle and was implemented on UNIX-server machines in 1983. The idea proved so successful that it was picked up by many companies, including Novell, IBM, and Sun, but in the end NetApp and EMC emerged as the leaders.

In 1995, Garth Gibson developed the principles of NAS further and created Object Storage Systems (OBS). He began by dividing all disk operations into two groups: one included the more frequently performed operations, such as reads and writes; the other the rarer ones, such as operations with names. He then proposed, in addition to blocks and files, another container, which he called an object.

OBS is distinguished by a new type of interface, called the object interface. Client data services interact with metadata through the Object API. OBS not only stores data but also supports RAID, stores metadata related to objects, and supports the object interface. DAS, SAN, NAS, and OBS coexist in time, but each type of access best suits a particular kind of data and application.
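To make the idea of an object interface concrete, here is a toy sketch: unlike a block device, which is addressed by offset, an object store hands out an identifier and keeps the object's metadata next to its data. The put/get API below is invented for illustration and does not correspond to any real OBS product.

```python
import uuid

class ObjectStore:
    """A toy object store: data plus metadata, addressed by object ID."""

    def __init__(self):
        self._objects: dict[str, tuple[bytes, dict]] = {}

    def put(self, data: bytes, metadata: dict) -> str:
        """Store data together with its metadata; return the object's ID."""
        oid = str(uuid.uuid4())
        self._objects[oid] = (data, metadata)
        return oid

    def get(self, oid: str) -> bytes:
        return self._objects[oid][0]

    def get_metadata(self, oid: str) -> dict:
        return self._objects[oid][1]

store = ObjectStore()
oid = store.put(b"payload", {"owner": "app1", "type": "log"})
print(store.get_metadata(oid))   # {'owner': 'app1', 'type': 'log'}
```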

  SAN Architecture

Network topology

SAN is a high-speed data network designed to connect servers to storage devices. The variety of SAN topologies (point-to-point, Arbitrated Loop, and switched) replaces traditional server-to-storage bus connections and provides greater flexibility, performance, and reliability. The SAN concept rests on the ability to connect any server to any storage device that uses the Fiber Channel protocol. The principle of node interaction in a SAN with point-to-point or switched topologies is shown in the figures. In a SAN with an Arbitrated Loop topology, data passes sequentially from node to node; to start a transfer, the transmitting device arbitrates for the right to use the transmission medium (hence the name of the topology).

The SAN transport is based on the Fiber Channel protocol, using both copper and fiber-optic device connections.

  SAN Components

SAN components are divided into the following:

  • Data storage resources;
  • Devices that implement the SAN infrastructure;
  • Host bus adapters (HBA).

  Data storage resources

Storage resources include disk arrays, tape drives, and Fiber Channel libraries. Many of their capabilities are realized only once they join a SAN: top-class disk arrays can replicate data between arrays over Fiber Channel networks, and tape libraries can move data to tape directly from disk arrays with a Fiber Channel interface, bypassing the network and the servers (serverless backup). The most popular disk arrays on the market have been those from EMC, Hitachi, IBM, and Compaq (the StorageWorks family, inherited by Compaq from Digital); among tape library manufacturers, StorageTek, Quantum/ATL, and IBM deserve mention.

  SAN Infrastructure Devices

SAN infrastructure devices include Fiber Channel switches (FC switches), hubs (Fiber Channel hubs), and routers (Fiber Channel-SCSI routers). Hubs are used to combine devices into a Fiber Channel Arbitrated Loop (FC_AL). Hubs allow devices to be connected to and disconnected from the loop without stopping the system, since the hub automatically closes the loop when a device is turned off and automatically opens it when a new device is connected. Every change to the loop is accompanied by a complex initialization process; the process is multi-stage, and until it completes, data exchange in the loop is impossible.

All modern SANs are built on switches, which allow a full-fledged network connection to be assembled. Switches not only connect Fiber Channel devices but also delimit access between them, for which so-called zones are created on the switches. Devices placed in different zones cannot exchange information with each other. The number of ports in a SAN can be increased by connecting switches to each other. A group of connected switches is called a Fiber Channel Fabric, or simply a fabric. The links between switches are called Inter-Switch Links, or ISLs for short.

  Software

The software provides redundant server access paths to disk arrays and dynamic load balancing between those paths. For most disk arrays there is a simple way to determine that ports reachable through different controllers belong to the same disk. Specialized software maintains a table of access paths to devices, disconnects paths in the event of a failure, dynamically attaches new paths, and balances the load between them. As a rule, disk array manufacturers offer this kind of specialized software for their own arrays. VERITAS Software produces VERITAS Volume Manager, designed to organize logical disk volumes from physical disks, to provide redundant access paths to disks, and to balance the load between them for most known disk arrays.
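As a rough illustration of what such path-management software does, here is a minimal sketch: it keeps a table of access paths to one disk, hands out healthy paths in round-robin fashion, and drops a failed path from rotation. Real products are far more elaborate, and the path names below are invented.

```python
class MultipathDevice:
    """A toy model of a multipath table for a single disk."""

    def __init__(self, paths: list[str]):
        self.paths = {p: True for p in paths}   # path name -> is healthy

    def mark_failed(self, path: str):
        self.paths[path] = False

    def mark_restored(self, path: str):
        self.paths[path] = True

    def next_path(self) -> str:
        """Round-robin over healthy paths; fail if none remain."""
        healthy = [p for p, ok in self.paths.items() if ok]
        if not healthy:
            raise IOError("all paths to the device have failed")
        path = healthy[0]
        # Rotate: move the chosen path to the back of the table.
        self.paths.pop(path)
        self.paths[path] = True
        return path

dev = MultipathDevice(["ctrl-A/port-0", "ctrl-B/port-1"])
print(dev.next_path())            # ctrl-A/port-0
dev.mark_failed("ctrl-B/port-1")
print(dev.next_path())            # ctrl-A/port-0 again: the only healthy path
```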

  Protocols Used

Storage networks use the following low-level protocols:

  • Fiber Channel Protocol (FCP), SCSI transport over Fiber Channel. The most commonly used protocol at the moment; it exists in 1 Gbit/s, 2 Gbit/s, 4 Gbit/s, 8 Gbit/s, and 10 Gbit/s variants.
  • iSCSI, SCSI transport over TCP/IP.
  • FCoE, transport of FCP/SCSI over pure Ethernet.
  • FCIP and iFCP, encapsulation and transmission of FCP/SCSI in IP packets.
  • HyperSCSI, SCSI transport over Ethernet.
  • FICON, transport over Fiber Channel (used only by mainframes).
  • ATA over Ethernet, ATA transport over Ethernet.
  • SCSI and/or TCP/IP transport over InfiniBand (IB).

  Benefits

  • High reliability of access to data located on external storage systems; independence of the SAN topology from the storage systems and servers used.
  • Centralized data storage (reliability, security).
  • Convenient centralized management of switching and data.
  • Offloading intensive I/O traffic to a separate network (LAN offload).
  • High speed and low latency.
  • Scalability and flexibility of the logical structure of the SAN.
  • The geographic size of a SAN, unlike that of classic DAS, is practically unlimited.
  • The ability to quickly redistribute resources between servers.
  • The ability to build fault-tolerant cluster solutions at no additional cost on top of the existing SAN.
  • A simple backup scheme: all data is in one place.
  • Availability of additional features and services (snapshots, remote replication).
  • High SAN security.

Sharing storage systems typically simplifies administration and adds a good deal of flexibility, since cables and disk arrays do not need to be physically moved and re-cabled from one server to another.

Another advantage is the ability to boot servers directly from the storage network. With this configuration, a failed server can be replaced quickly and easily.

When it came to learning about SANs, I ran into a certain obstacle: the inaccessibility of basic information. With the other infrastructure products I have dealt with, it is easier - there are trial versions of the software, ways to install them on a virtual machine, piles of tutorials, reference guides, and blogs on the topic. Cisco and Microsoft churn out very high-quality textbooks; Microsoft has at least tidied up that infernal attic pantry of its called TechNet; there is even a book on VMware, albeit only one (and in Russian at that!), and with nearly 100% usefulness. On the storage devices themselves you can get information from seminars, marketing events and documents, and forums. On the storage network itself - silence of the grave. I found two textbooks but did not dare to buy them. One is "Storage Area Networks For Dummies" (it exists, it turns out; the English-speaking "dummies" in its target audience must be a curious crowd) for 1,500 rubles; the other is "Distributed Storage Networks: Architecture, Protocols and Management", which looks more solid, but costs 8,200 rubles even with a 40% discount. Along with that book, Ozon also recommends "The Art of Masonry".

I don't know what to advise someone who decides to learn at least the theory of storage networks from scratch. As practice has shown, even expensive courses can yield precisely nothing. When it comes to SANs, people divide into three categories: those who have no idea what it is, those who merely know that such a thing exists, and those who respond to the question "why have two or more fabrics in a storage network?" with the same bewilderment as if they were asked "why does a square need four corners?"

I'll try to fill the gap that I myself was missing: describe the basics, and describe them simply. I will consider a SAN based on its classic protocol - Fiber Channel.

So, a SAN - Storage Area Network - is designed to consolidate the disk space of servers on dedicated storage arrays. The point is that disk resources used this way are more economical, easier to manage, and offer better performance. And for virtualization and clustering, when several servers need access to the same disk space, such storage systems are downright irreplaceable.

By the way, SAN terminology suffers from some confusion due to translation into Russian. SAN translates as "storage area network", which in Russian abbreviates to SHD. However, in Russia SHD classically stands for "data storage system", that is, a disk array (Storage Array), which in turn consists of a control unit (Storage Processor, Storage Controller) and disk shelves (Disk Enclosure). In the original, though, a Storage Array is only part of a SAN, if sometimes the most significant part. So in Russian it turns out that one SHD (the data storage system) is part of another SHD (the data storage network). For this reason storage devices are usually called storage systems, and the storage network is called a SAN (which gets confused with "Sun", but that is a trifle).

Components and Terms

  Technologically, a SAN consists of the following components:
1. Nodes
  • Disk arrays (data storage systems) - storages (targets)
  • Servers - consumers of disk resources (initiators).
2. Network infrastructure
  • Switches (and routers in complex and distributed systems)
  • Cables

Features

Without going into details, the FC protocol resembles Ethernet, with WWN addresses in place of MAC addresses. Except that where Ethernet has two layers, FC has five (of which the fourth is not yet defined, and the fifth is the mapping between the FC transport and the high-level protocols carried over it - SCSI-3, IP). In addition, FC switches host specialized services whose analogues in IP networks usually live on servers. For example: the Domain Address Manager (responsible for assigning Domain IDs to switches), the Name Server (which stores information about connected devices, a kind of WINS analogue within the switch), and so on.
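Since WWNs play the role that MAC addresses play in Ethernet, they are customarily written as colon-separated hex, only longer: a WWN is a 64-bit (8-byte) identifier. Below is a small sketch of a helper that validates and normalizes such addresses; the sample WWN is fictitious.

```python
import re

_WWN_RE = re.compile(r"^[0-9a-f]{16}$")

def normalize_wwn(raw: str) -> str:
    """Strip separators, lowercase, validate length, re-insert colons."""
    hexstr = raw.replace(":", "").replace("-", "").lower()
    if not _WWN_RE.match(hexstr):
        raise ValueError(f"not a valid 16-hex-digit WWN: {raw!r}")
    return ":".join(hexstr[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwn("50060160-3B20419D"))  # 50:06:01:60:3b:20:41:9d
```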

For a SAN, the key parameter is not only performance but also reliability. If a database server drops off the network for a couple of seconds (or even minutes) - well, it is unpleasant, but survivable. If at the same moment the hard drive holding the database or the OS falls off, the effect is far more serious. Therefore, all SAN components are usually duplicated - ports in storage devices and servers, switches, links between switches and, a key feature of SAN compared to LAN, duplication at the level of the entire network infrastructure - the fabrics.

A fabric (the English word literally means woven cloth; the term symbolizes the interwoven connection scheme of network and end devices, and it has long since stuck) is a set of switches interconnected by inter-switch links (ISL - InterSwitch Link).

Highly reliable SANs necessarily include two (and sometimes more) fabrics, since a fabric is itself a single point of failure. Anyone who has ever watched the consequences of a loop in a network, or of a deft keystroke that put a core or distribution switch into an unfortunate firmware or command, understands what this is about.

Fabrics may have an identical (mirror) topology or differ. For example, one fabric may consist of four switches and the other of just one, to which only highly critical nodes are connected.

Topology

  The following types of fabric topologies are distinguished:

Cascade - the switches are connected in series. If there are more than two of them, this is unreliable and unproductive.

Ring - a closed cascade. It is more reliable than a simple cascade, although with a large number of participants (more than 4) performance suffers. And a single failure of an ISL or of one of the switches turns the ring back into a cascade, with all the consequences.

Mesh - comes in two variants. Full Mesh, where every switch connects to every other, is characterized by high reliability, performance, and price: the number of ports required for inter-switch links grows quadratically as switches are added, and past a certain size there are simply no ports left for nodes - all of them are taken by ISLs. Partial Mesh is any chaotic combination of switches.
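The port appetite of a full mesh is easy to check with a little arithmetic: n switches need n*(n-1)/2 inter-switch links, so every switch spends n-1 ports on ISLs alone. A quick sketch:

```python
def full_mesh_isls(n_switches: int) -> int:
    """Number of ISLs in a full mesh of n switches."""
    return n_switches * (n_switches - 1) // 2

for n in (3, 5, 8, 12):
    print(f"{n} switches: {full_mesh_isls(n)} ISLs, "
          f"{n - 1} ISL ports per switch")
```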

Center/Periphery (Core/Edge) - close to the classic LAN topology, but without the distribution layer. Storage is often connected to the Core switches and servers to the Edge switches, although storage may be given an additional layer (tier) of Edge switches. Storage and servers may also be connected to one switch to improve performance and reduce response time (this is called localization). The topology is characterized by good scalability and manageability.

Zoning

  Another SAN-specific technology. Zoning defines initiator-target pairs - that is, which servers may access which disk resources - so that it does not turn out that every server sees every possible disk. It works as follows (a minimal sketch follows the list):
  • the selected pairs are added to zones created on the switch beforehand;
  • the zones are placed into zone sets (zone set, zone config) created in the same place;
  • the zone sets are activated in the fabric.
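A minimal model of this workflow, to make the mechanics concrete. All names and WWPNs below are invented; real zoning is, of course, configured on the switches themselves.

```python
class Fabric:
    """A toy model of zones and zone sets in an FC fabric."""

    def __init__(self):
        self.zones: dict[str, set[str]] = {}       # zone name -> member WWPNs
        self.zone_sets: dict[str, list[str]] = {}  # zone set name -> zone names
        self.active_zone_set: str | None = None

    def create_zone(self, name: str, members: set[str]):
        self.zones[name] = members

    def create_zone_set(self, name: str, zone_names: list[str]):
        self.zone_sets[name] = zone_names

    def activate(self, zone_set: str):
        self.active_zone_set = zone_set

    def can_talk(self, wwpn_a: str, wwpn_b: str) -> bool:
        """Two ports may talk only if some zone of the active set holds both."""
        if self.active_zone_set is None:
            return False
        return any(wwpn_a in self.zones[z] and wwpn_b in self.zones[z]
                   for z in self.zone_sets[self.active_zone_set])

fab = Fabric()
fab.create_zone("z_srv1_array1", {"10:00:00:00:c9:aa:bb:01",   # server HBA
                                  "50:06:01:60:3b:20:41:9d"})  # array port
fab.create_zone_set("prod_config", ["z_srv1_array1"])
fab.activate("prod_config")
print(fab.can_talk("10:00:00:00:c9:aa:bb:01",
                   "50:06:01:60:3b:20:41:9d"))  # True
```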

For an introductory post on SANs, I think that is enough. I apologize for the motley pictures - there is no way to draw my own at work yet, and no time at home. There was an idea to draw on paper and photograph it, but I decided it was better this way.

Finally, as a postscript, I will list the basic guidelines for designing a SAN fabric.

  • Design the structure so that there are no more than three switches between any two end devices.
  • It is desirable for a fabric to consist of no more than 31 switches.
  • Set the Domain ID manually before introducing a new switch into the fabric: this improves manageability and helps avoid duplicate Domain ID problems when, for example, a switch is reconnected from one fabric to another.
  • Have several equivalent routes between each storage device and each initiator.
  • When performance requirements are undefined, proceed from a ratio of Nx ports (for end devices) to ISL ports of 6:1 (the EMC recommendation) or 7:1 (the Brocade recommendation). This ratio is called oversubscription (see the sketch after this list).
  • Zoning recommendations:
       - use informative names for zones and zone sets;
       - use WWPN zoning, not port-based zoning (based on device addresses, not on the physical ports of a particular switch);
       - one initiator per zone;
       - keep the fabric clean of "dead" zones.
  • Have a reserve of free ports and cables.
  • Have a reserve of equipment (switches): mandatory at the site level, and possibly at the fabric level.
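To make the oversubscription guideline concrete, here is a small sketch of the arithmetic; the 24-port switch in the example is hypothetical.

```python
def oversubscription(nx_ports: int, isl_ports: int) -> float:
    """Ratio of node (Nx) ports to ISL ports on a switch."""
    if isl_ports == 0:
        raise ValueError("a fabric switch needs at least one ISL")
    return nx_ports / isl_ports

# e.g. a 24-port switch with 3 ports given over to ISLs:
ratio = oversubscription(nx_ports=21, isl_ports=3)
print(f"{ratio:.0f}:1")                               # 7:1
print("within EMC's 6:1 guideline:", ratio <= 6)      # False
print("within Brocade's 7:1 guideline:", ratio <= 7)  # True
```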

SN6000 - a Switch for Growing the Storage Network

  • Hewlett Packard Enterprise Blog

Today we will talk about the new StorageWorks SN6000 stackable switch with 20 eight-gigabit Fiber Channel ports. The device is intended primarily for building a SAN in a small company, where the IT professional usually has no experience configuring Fiber Channel equipment.

The HP StorageWorks SN6000 ships standard with the Simple SAN Connection Manager (SSCM) utility, whose graphical wizards help SAN newcomers correctly configure the SAN devices, including the switch itself, the server HBAs, and an HP StorageWorks MSA or EVA disk array (if the customer has one, of course).

Typically, each of these SAN components uses its own Fiber Channel configuration utility, and SSCM replaces them all with one universal tool. As a result, SAN deployment is greatly simplified and the risk of configuration errors is reduced. SSCM automatically recognizes Fiber Channel switches, servers, and HP StorageWorks disk arrays attached to the storage network. Using the utility's convenient graphical interface, you can also divide the storage network into zones and distribute disk resources between them.

The capabilities of SSCM do not end there: the utility lets you monitor the state of SAN components from its graphical console and change the configuration when new equipment is added to the storage network. It automates SAN maintenance processes such as status monitoring, LUN distribution, and device microcode updates; it displays the network topology, maintains an event log, and tracks SAN configuration changes.

To reduce cost, the SN6000 can be purchased in an initial eight-port configuration. For companies that want to move to an external disk array and build their first SAN, HP also offers a ready-made SAN Starter Kit. The kit consists of the new HP StorageWorks P2000 G3 FC MSA array with two RAID controllers, two SN6000 switches, four HP 81Q Single-Port PCI-e FC server HBAs, 12 HP 8Gb Short Wave FC SFP+ modules, and eight five-meter Fiber Channel cables. With this kit, even a newcomer to Fiber Channel technology can easily deploy a small storage network with four hosts.

As the SAN grows and new devices are connected, the remaining SN6000 ports can be activated by purchasing licenses for four additional ports at a time. In addition, to increase the fault tolerance of the switch on which the whole SAN depends, a second power supply can be installed, with hot replacement of a faulty supply.

If all 20 SN6000 ports are in use, switch stacking is used to expand the SAN further. The SN6000 differs from other entry-level Fiber Channel switches in having four dedicated ten-gigabit Fiber Channel stacking ports (Inter-Switch Link, ISL); when combining switches into a stack, there is therefore no need to free up ports to which the SAN's servers and storage systems are connected.

Thanks to this, stacking happens in hot mode (without disrupting normal SAN operation) and there is less risk of miscabling between the switches. Note that stacking ports have long been standard on modular Ethernet switches but have only recently appeared in Fiber Channel equipment. The SN6000 stacking ports run ten-gigabit Fiber Channel with an option to switch to a 20-gigabit interface, and after moving to the faster interface the cables connecting the ISL ports do not need to be replaced.

Up to six switches, for a total of 120 ports, can be combined into a stack, and SSCM manages the whole stack as a single device. In addition, up to five SN6000 stacks can be interconnected.

Compared with combining non-stackable Fiber Channel switches in a mesh topology, an SN6000 stack reduces the number of ports and cables spent on linking individual switches: building an 80-port configuration, for example, requires four SN6000s with 6 cables versus five non-stackable 24-port switches with 20 cables. Moreover, with non-stackable switches you must also buy SFP modules for the ports serving as ISLs, while the SN6000 stacking ports provide higher bandwidth than the switch's main eight-gigabit ports.

To optimize the operation of the SN6000 stacking ports, the Adaptive Trunking function automatically redistributes traffic between the several ISL paths of the stack. Another function, I/O StreamGuard, guarantees uninterrupted data streams across the storage network for mission-critical applications (for example, backup to tape) when one of the servers connected to the SAN reboots.

The SN6000 is also suitable for expanding a large enterprise's existing SAN. Because of compatibility issues between Fiber Channel switches, customers usually try to use equipment from a single manufacturer when building and expanding SANs. The SN6000 allows a heterogeneous network to be built thanks to its Transparent Routing function, which transparently connects it to large Fiber Channel switches (so-called directors, for example HP StorageWorks B-Series and C-Series); as a result, the storage systems and servers attached to the SN6000 are added to the existing SAN, while the stackable switch itself remains invisible to the old SAN.

Such an SN6000 deployment scenario can be used when these switches build an additional SAN for backup (with tape libraries installed in it) or a separate departmental SAN connected to the enterprise's main storage network, as well as for a gradual migration of the SAN from 2 or 4 Gb/s technology to the eight-gigabit version of Fiber Channel.
