Wednesday, January 17, 2007

Predictions for the future of low-latency computing: it's not where you think it is (Part 1)

Low-latency cluster interconnects have always been esoteric. Digital Equipment Corporation (DEC) used its reflective-memory Memory Channel (along with many other interconnects) in its VAX clustering. Sequent used the Scalable Coherent Interconnect (SCI) for its NUMA-Q. SGI used HIPPI to connect its Power Challenge and, later, Origin servers. IBM had its SP interconnect for its RS/6000 clusters. Sun Microsystems flirted with its Fire Link technology for its high-end SPARC servers. In the late 1990s, Myrinet ruled the day for x86 high-performance computing clusters.

Today, the low-latency interconnect of choice is InfiniBand (IB). Unlike those earlier technologies, IB is both an industry standard and offered by multiple vendors. SCI, while standards-based, was a single-vendor implementation. Similarly, Myricom submitted Myrinet as a standard, but it remains a single-vendor technology. A multi-vendor market drives down prices while forcing increased innovation.

Another key aspect of IB is that its software is also proceeding down a standards path. The OpenIB Alliance was created to drive a standardized, open-source set of APIs and drivers for InfiniBand.
An interesting thing happened on the path to OpenIB. The emergence of 10 Gigabit Ethernet, and the TCP/IP Offload Engine (TOE) Network Interface Cards (NICs) it requires, offered another standards-based, high-performance interconnect. As a result, OpenIB was rechristened the OpenFabrics Alliance and became “fabric neutral”.
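
To make the "standardized APIs" point concrete, here is a minimal sketch (illustrative only, not taken from any vendor's documentation) that uses the OpenFabrics verbs library, libibverbs, to enumerate the RDMA-capable devices on a host. The same code runs unchanged against any adapter that ships a verbs provider, which is exactly the fabric-neutral behavior the OpenFabrics Alliance is after.

    /* list_devices.c: enumerate RDMA devices via the OpenFabrics verbs API.
       Build with something like: gcc list_devices.c -o list_devices -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;

        /* Ask the verbs library for every RDMA-capable device it can see. */
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices) {
            perror("ibv_get_device_list");
            return EXIT_FAILURE;
        }

        printf("Found %d RDMA device(s)\n", num_devices);
        for (int i = 0; i < num_devices; i++) {
            /* The same call works whether the adapter speaks InfiniBand or
               another fabric, as long as it has a verbs provider. */
            printf("  %s\n", ibv_get_device_name(devices[i]));
        }

        ibv_free_device_list(devices);
        return EXIT_SUCCESS;
    }

Nothing in this sketch names a particular vendor's hardware; that is the whole point of pushing the API through a standards body rather than leaving it to each adapter maker.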

My next post will look at some of the mistaken assumptions about the progress of computer networking.

Part 2 | Part 3
