InfiniBand Group Sharply, Evenly Divided

Months after half of the founding members of the InfiniBand Trade Association put aside InfiniBand because they deemed it impractical at this juncture in IT, the other half has rallied around the once-ballyhooed technology and pledged to bring products to market next year.

Systems vendors IBM, Sun Microsystems and Dell said Thursday they remain the major torch-bearers for the high-speed switch fabric architecture, which speeds communications between servers and networked devices at 10 gigabits per second, while Microsoft, Intel and HP have taken themselves out of the mix, at least for now.

The latter trio has said it does not see adoption rates high enough to warrant a full embrace, arguing, among other things, that customers will not rip out old servers built around the PCI architecture and replace them with InfiniBand servers in this era of incredibly shrunken budgets. The former trio said hardware and software products using the technology are forthcoming.

With the promise of juiced-up application performance having once lured many infrastructure firms, InfiniBand offers features for I/O interconnects, including a mechanism for sharing I/O components among many servers. The architecture is designed to provide a more efficient way to connect storage, communications networks and server clusters, and to integrate with existing Ethernet and Fibre Channel infrastructure.
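The article describes the fabric only at this architectural level. For a rough, purely illustrative sense of how an InfiniBand adapter is exposed to host software, the C sketch below assumes the OpenFabrics libibverbs interface, which postdates this story and is not mentioned in it; the program does nothing more than list the host channel adapters a server can see and report a few of their capabilities.

    /* Illustrative sketch only: enumerate InfiniBand host channel adapters (HCAs)
     * via the OpenFabrics libibverbs API (an assumption, not the article's subject).
     * Build with: gcc hca_list.c -o hca_list -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices) {
            fprintf(stderr, "No InfiniBand devices found\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devices[i]);
            if (!ctx)
                continue;

            /* Query basic adapter attributes: port count and queue-pair limit. */
            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0)
                printf("%s: %d port(s), max queue pairs %d\n",
                       ibv_get_device_name(devices[i]),
                       attr.phys_port_cnt, attr.max_qp);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devices);
        return 0;
    }

Each adapter listed this way corresponds to the host channel adapter piece of the HCA/switch/fabric-management combination the vendors describe below.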

IBM, Sun and Dell said they will add the technology to various aspects of their server lines. Dr. Tom Bradicich, chief technology officer of IBM’s xSeries servers, said his Armonk, N.Y.-based firm will enable an InfiniBand switched network, including a host channel adapter, switch and fabric management, on its eServer xSeries line. Moreover, for its next generation of midrange and high-end Unix servers, IBM is developing a common clustering interconnect and interprocess communication (IPC) fabric using InfiniBand I/O. With this, Bradicich said, Big Blue’s customers will be able to satisfy application requirements for high-performance computing and server clustering.

Sun and Dell see InfiniBand as a vital part of their future server technologies. Round Rock, Texas-based Dell will fit its PowerEdge modular blades with the architecture and is currently testing InfiniBand cluster solutions in its labs, as well as working with its myriad hardware and software partners to increase support for the technology.

Sun might be leaning on InfiniBand the most, according to some analysts, as it plans to use the technology as the linchpin of its N1 strategy for making many computers work together. Subodh Bapat, CTO of volume systems products at Sun, said his Santa Clara, Calif.-based firm plans to plaster InfiniBand technology across its server platforms, application environments, switches and storage. Future InfiniBand-based platforms are expected to include Sun’s blade servers in 2004 and its enterprise servers, and the company will also add the technology to storage virtualization and aggregation products and controllers. On the software front, Sun plans to use InfiniBand to enhance Sun Open Net Environment (Sun ONE) products for Web services.

The bulls and the bears of InfiniBand

Why are IBM, Sun and Dell speaking up now? In a nutshell, InfiniBand has suffered a deluge of bad publicity in the several months since the other half put aside their InfiniBand endeavors. To be sure, Enterprise Storage Group Senior Analyst Arun Taneja said he was asked six months ago whether the low-latency technology was dead. Not in the least, Taneja said.

“I think it’s a big mistake for Microsoft to back away from InfiniBand, or put it on the back-burner as they have,” Taneja said. “Microsoft, as well as all of its partners, probably has the most to win from InfiniBand because fundamentally it is a technology that will allow standard Intel-based servers to be ganged up and produce performance that the largest Intel server is not capable of producing.”

Microsoft submitted this statement:

“In the current economic climate, IT managers are gravitating towards evolutionary technologies that leverage existing infrastructure and staffing,” the company said. “The emphasis today is on efficiency not expansion, incremental growth not wholesale replacement. Ethernet is ubiquitous from the desktop to the server. As we do with other leading edge technologies, we will monitor the industry interest in InfiniBand and apply necessary resources to meet customer and partner demand.”

Karl Walker, vice president of technological development/CTO of hardware for HP’s Industry Standard Servers unit, echoed Microsoft’s philosophy that InfiniBand is just not practical at this point and said it is simply too risky a proposition.

“We saw InfiniBand go through the hype curve,” Walker said. “Two years ago at this time, it was supposed to be the be-all, end-all of fabric and storage interconnect. I can’t speak in detail about why the others [Microsoft and Intel] dropped away, but we don’t see a mass market in the near term. Who knows where it might go in the future? But we pulled back when we learned it wasn’t going to hit the volume level of our expectations. We do see it as something that could be successful in the specialized markets, such as in high-performance clustering and as [an architecture] for replacing proprietary interconnect technologies. We’re taking a wait-and-see kind of attitude.”

Walker said HP is playing it safe by using other fabrics, such as TCP/IP and Fibre Channel.

“We see customers who already have investments in technologies as augmenting their infrastructure with InfiniBand, but not replacing it entirely,” Walker said. “As far as making InfiniBand the interconnect on specific systems such as blade servers, that is extremely risky, because you need to make a multi-year, multi-generational bet.”

Some firms are confident in InfiniBand’s ability to flourish. Yankee Group, a firm bullish on InfiniBand’s prospects, predicts that 42 percent of all servers shipped will be InfiniBand-enabled by 2005, with the market increasing from $32 million in 2002 to more than $1.53 billion in 2006.

Indeed, Enterprise Storage Group’s Taneja sees only benefits for HP and others to take up InfiniBand.

“The hardware is relatively inexpensive and you get practically a supercomputing type of performance, with low latency,” Taneja said. “That’s where the biggest win is. For the life of me I can’t understand why HP would back away from it, especially since they want to make a larger dent in the high-end enterprise. For Intel-based folks such as Microsoft and HP, this is a mechanism to get the big performance that they’ve not been able to deliver so far.”

As for Sun, Taneja said it is a bit more complicated, but perhaps more important because the firm is banking heavily on InfiniBand to shore up N1.

“The defensive part is if Intel scales into their space [Unix market] they would lose some of that,” Taneja said. “The offensive part of it is that they are already operating at this level, so now they are ganging up [with IBM and Dell] to operate at a higher level.”

As for the strategy overall, Taneja said IBM’s, Sun’s and Dell’s embrace could hold nothing but positives for the future. Moreover, their involvement and success could force Microsoft, Intel and HP to “wake up.”

“The value prop in InfiniBand is much too great to be thrown away,” Taneja said. “You can kick it and harass it, but you can’t kill it.”

Cautionary tales

Another analyst, Enterprise Management Associates’ Anne Skamarock, wasn’t so sure the dissenters were wrong to sit out the colossal InfiniBand hug with IBM, Sun and Dell. There are risks, she said, and it takes time to put the proper infrastructure in place to commit to InfiniBand.

“All major transitions in technology are risky,” Skamarock said. “Why does it seem more so now? All businesses hope to have a fairly quick return on investment. In this economy, investing in major technology changes doesn’t seem to provide the ROI most companies (and stockholders) hope to receive.”

Skamarock said using InfiniBand for blade servers that can scale as business requirements grow is a powerful architecture.

“However, to become a reality in an IT department, a great deal of work must be done at all levels of the ‘system’ to make it easier to deploy and manage. If the customer expects an application to participate (that is, scale with the addition of processing power), the application must be made ‘aware’ of this capability. The number of applications that can take advantage of this today is fairly small (Oracle Parallel Server and scientific apps that have parallel processing capabilities come to mind off the bat). To make applications work in a parallel fashion usually takes a complete architecture overhaul… not something most application vendors are willing to do.”
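Skamarock’s point that applications must be made “aware” of added processing power is where most of the work lies. The C sketch below is a loose illustration only and assumes MPI, a message-passing interface commonly used in the kind of scientific apps she cites but not named anywhere in this article; it shows the explicit work-splitting an application must already contain before a faster interconnect such as InfiniBand can help it scale.

    /* Illustrative sketch only: a trivially "cluster-aware" computation written
     * with MPI (an assumption for the example; the article names no programming
     * interface). Build with: mpicc part_sum.c -o part_sum
     * Run with:   mpirun -np 4 ./part_sum */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which node am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many nodes in total? */

        /* Each rank sums its own slice of a long series; adding nodes
         * automatically shrinks every slice. */
        const long n = 100000000L;
        double local = 0.0;
        for (long i = rank; i < n; i += size)
            local += 1.0 / ((double)i * i + 1.0);

        /* The cluster interconnect (InfiniBand, Ethernet, etc.) carries this
         * final reduction of the partial sums back to rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum across %d ranks: %f\n", size, total);

        MPI_Finalize();
        return 0;
    }

Running the same binary on more ranks shrinks each node’s share of the loop, which is exactly the scaling behavior Skamarock says most commercial applications were never architected for.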

The analyst said another environment in which InfiniBand-powered blades could be used is server consolidation.

“The servers are now tightly coupled and scalable. This environment requires significant integrated management software capabilities to gain the greatest benefit (primarily automation, so the customer doesn’t have to hire an army to manage the entire infrastructure: servers, storage network [SAN] and LAN). What SANs have taught us is that flexibility and scalability come at a complexity price… the only way to manage complexity is through automation. We’re not quite there yet.”

Skamarock’s final answer?

“Do I support a move to IB by server vendors? Yep. I think the IB protocol solves a lot of I/O issues in a clean way. Do I think the IB evolution is to a drop-in state? Nope. It’s going to take time to evolve the applications and management software to be able to leverage the technology.”