The Ethernet Alternative: InfiniBand


Over the past few months I’ve written about different ways to access data over a network: storage area networks (SAN) and network-attached storage (NAS) in February, and iSCSI in March. This month the focus is on InfiniBand and on better understanding this re-emerging technology.

Ethernet has dominated the computer interconnect market for over two decades. The first Ethernet standard was approved in 1983; “Fast Ethernet” appeared in 1995; Gigabit Ethernet followed four years later; and most recently, in 2002, the 10 Gigabit Ethernet (10GbE) standard was approved. 10GbE is supposed to be the end-all of networking, but its adoption has been slow.

InfiniBand was announced in 2000 but has also seen slow adoption. For a new technology, rather than just a faster version of an established solution, that is to be expected. In fact, InfiniBand all but disappeared a few years ago.

Over the past few years, it has made a remarkable comeback. It is now backed by large companies using it to develop strategic products (grid solutions, blade servers, storage, cluster interconnects, etc.), and many smaller InfiniBand companies are being bought by industry giants who want to enter the market quickly.

Many companies are considering an upgrade to their existing Gig-E infrastructures, and most are looking only at 10GbE as an option. InfiniBand may be a viable alternative.

Performance

InfiniBand has always been associated with high-performance computing environments. While it is an excellent fit for those who need maximum performance and minimum latency, it is also a great solution for today’s general computing environment.

Ethernet, the most prevalent interconnect available today, is a mature technology. Like most mature technologies, Ethernet has been slow to advance: 10GbE came out two years after 10Gb InfiniBand was announced.

Since its introduction, InfiniBand has jumped to 60Gb performance, and 120Gb is planned. There is no talk of 120Gb Ethernet. InfiniBand latency, the amount of time it takes data to travel from source to destination, is also well below that of 10GbE.

It is not uncommon for a server to have multiple network connections. Each server will have a primary connection into the main network (usually Gig-E), another connection to a private backup network (again, Gig-E) and a connection for storage (either Gig-E for iSCSI or Fibre Channel for a SAN); many will have two or more of each of these connections for redundancy. Servers that are part of a cluster will also have a dedicated network connection to maintain the cluster.

How can InfiniBand help here? Simple. With three Gig-E connections (primary, backup and cluster) and a redundant pair of 2Gb FC connections for storage, that one server has a total of five connections and 7Gb of aggregate bandwidth. A single InfiniBand connection, with 10Gb of bandwidth, will easily outperform the five connections that were previously needed. The lower latencies of InfiniBand will also improve the overall performance of the server.
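To make that arithmetic concrete, here is a minimal sketch (in Python) that tallies the ports and aggregate bandwidth of the two designs, using only the link speeds assumed in this article (Gig-E = 1Gb, FC = 2Gb, InfiniBand = 10Gb); the figures are illustrative, not benchmarks.

```python
# Per-server links in the traditional design: three Gig-E plus two 2Gb FC.
legacy_links_gb = [1, 1, 1, 2, 2]          # primary, backup, cluster, FC x2

legacy_ports = len(legacy_links_gb)        # five switch/fabric ports consumed
legacy_bandwidth = sum(legacy_links_gb)    # 7Gb aggregate

ib_ports = 1                               # a single InfiniBand connection
ib_bandwidth = 10                          # 10Gb of bandwidth

print(f"Legacy:     {legacy_ports} ports, {legacy_bandwidth}Gb aggregate")
print(f"InfiniBand: {ib_ports} port,  {ib_bandwidth}Gb aggregate")
```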

Cost

The obvious result of decreasing the number of connections to a server is a drop in cost. To obtain maximum performance, each of the Gig-E connections should have a TOE (TCP/IP Offload Engine) card and each FC connection should have a FC HBA. All of these can be replaced with a single InfiniBand host channel adapter (HCA). Many servers will have an onboard HCA, which further drops the cost of the solution.

Now that each server doesn’t need five cards for connectivity, many solutions will fit comfortably in a blade-based server. Using blade servers can significantly reduce the space needed in a rack and in the datacenter and decrease the amount of cooling and power needed to keep a system running.

As just discussed, InfiniBand can dramatically decrease the cost associated with the servers in a solution. It can also decrease the cost of the networking infrastructure. Again, a server with five connections into the infrastructure consumes five ports across various switches. With InfiniBand, only 20 percent of those switch ports are needed (one port per server instead of five).
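Scaled across many servers, the same ratio holds. The sketch below assumes a hypothetical rack of 32 servers; only the five-to-one port ratio comes from the design described above.

```python
# Switch-port consolidation: five-connection legacy design vs. one HCA per server.
servers = 32                  # hypothetical server count

legacy_ports_per_server = 5   # 3x Gig-E + 2x FC
ib_ports_per_server = 1       # single InfiniBand connection

legacy_switch_ports = servers * legacy_ports_per_server   # 160 ports
ib_switch_ports = servers * ib_ports_per_server           # 32 ports

print(f"Legacy switch ports:      {legacy_switch_ports}")
print(f"InfiniBand switch ports:  {ib_switch_ports}")
print(f"Fraction needed with IB:  {ib_switch_ports / legacy_switch_ports:.0%}")
```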

Even if the servers in question can perform adequately with a 10GbE connection, there are still cost savings with InfiniBand. 10GbE TOE cards and switches are priced above the InfiniBand alternatives, so in a comparison of like solutions (the same number of switches and adapters) the InfiniBand solution will be more economical.

There are some solutions that will thrive in an InfiniBand environment. Two of them are data replication and clusters.

Clusters. Clusters were initially implemented to provide protection against the loss of a server. Now, clusters can be implemented to provide scalable performance as well. As mentioned above, these clusters can grow very large without taking up a lot of space.

A server now fits on a blade as long as there is no need for multiple network cards; InfiniBand eliminates that need. A major function of a cluster is the ability to take multiple servers and have them act as one. In order to do this, there must be a common set of data. This data sharing has been managed by the application or by using NAS.

Recently, the concept of a shared block-based file system (implemented with a SAN) has become more popular. Some of the drawbacks of these solutions have been the latency and the overhead of the locking mechanism needed to maintain data integrity.

With today’s faster servers and InfiniBand, the overhead is minimal and these shared file systems can run close to the speed of unshared file systems.

Replication. There are many reasons to replicate data from one storage array to another: disaster recovery, remote backup and analysis are just a few. Copying data from one storage array to another has always had its challenges.

One of the biggest challenges has been keeping the primary copy and the remote copy of the data completely in sync without impacting the performance of the applications too much. With the low latencies of an InfiniBand infrastructure, replication can occur with minimal impact on the applications.
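As a back-of-the-envelope illustration of why fabric latency matters for this kind of in-sync replication, the sketch below adds the interconnect round-trip time to every acknowledged write; the local write time and round-trip figures are hypothetical placeholders, not measurements of any 10GbE or InfiniBand product.

```python
# Simple model: a synchronously replicated write completes only after the
# remote array acknowledges it, so the fabric round-trip is added to each write.
local_write_us = 200.0        # hypothetical time to commit a write locally

def replicated_write_us(rtt_us: float) -> float:
    """Total write latency when the remote copy must acknowledge each write."""
    return local_write_us + rtt_us

for fabric, rtt_us in [("10GbE (assumed RTT)", 50.0),
                       ("InfiniBand (assumed RTT)", 5.0)]:
    total = replicated_write_us(rtt_us)
    overhead = (total - local_write_us) / local_write_us
    print(f"{fabric:26s} total {total:6.1f}us  (+{overhead:.0%} over local write)")
```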

Jim McKinstry is senior systems engineer with the Engenio Storage Group of LSI Logic, an OEM of storage solutions for IBM, Teradata, Sun/StorageTek, SGI and others.