Thursday, January 25, 2007

Is this a crazy idea?

The Linux BIOS project seeks to put a small Linux image into a ROM to manage PC-type hardware.

Many have developed "boot from thumbdrive" operating systems, which put the whole OS into a few hundred megabytes, similar to a "Live CD".

Xensource has developed bare-metal hypervisors, including one that targets Windows-only environments. I assume these use a locked-down Linux or BSD kernel to provide the Xen "Domain 0" function. Xensource also provides a Xen Live CD for evaluation.

Meanwhile, as virtualization platforms like Xen and VMware continue to mature, as CPUs evolve to support virtualization (Intel VT, AMD-V), and as PCI Express I/O virtualization arrives, x86 virtualization will become more robust and approach native performance. It is likely that virtualized x86 servers will become the norm for production environments.
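
As a rough, hypothetical illustration (my own sketch, not anything from Intel or AMD documentation beyond the published feature flags), an operating system or hypervisor can detect these CPU extensions with the CPUID instruction; the VMX flag indicates Intel VT and the SVM flag indicates AMD-V:

    /* Hypothetical sketch: detect hardware virtualization support via CPUID.
       Intel VT-x is reported by the VMX bit (leaf 1, ECX bit 5);
       AMD-V is reported by the SVM bit (leaf 0x80000001, ECX bit 2).
       Compile with GCC or Clang on an x86 machine. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            printf("Intel VT (VMX) supported\n");

        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            printf("AMD-V (SVM) supported\n");

        return 0;
    }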

What would happen if the Linux BIOS, boot-from-flash, and bare-metal Xen Live CD ideas merged? Imagine a "XenBIOS" project, using a few hundred megabytes of on-board flash memory to hold a live image. It would mean virtualization sedimenting into hardware, not into operating systems as most are currently predicting.

The effect of free, hardware-based virtualization that is automatically there would make for very interesting x86 servers. Even more so with a few on-board, fully virtualized, multi-fabric I/O channels. Kind of a baby mainframe.

Maybe it's a crazy idea. Maybe it's a vision of the future of computing.

Wednesday, January 24, 2007

Predictions for the future of low-latency computing, it's not where you think it is (Part 2)

Many people make Mistaken Assumptions when speaking on the history of computer networking.

One assumption made is that 100Mb, then 1Gb Ethernet replaced many other protocols. Certainly, in some cases, this is true. But people claiming this often overstate the facts.

Ethernet is first and foremost a LAN protocol, not a specialized, high-performance cluster interconnect for connecting multiple large shared-memory systems or vector supercomputers. To put it simply, I doubt anyone can point to a case of Ethernet replacing HIPPI. HIPPI was used both as a storage connection and as a cluster interconnect for supercomputers. Clearly, what killed HIPPI in storage interconnects was Fibre Channel. In cluster interconnects, one of the only vendors using HIPPI was SGI, for clustering multiple multi-hundred-CPU Origin systems together. SGI used InfiniBand for clustering its Altix follow-ons to the Origin.

Certainly 10Mb shared Ethernet killed Token Ring. But think about it: how prevalent were Token Ring networks? For how many people was Ethernet the first LAN technology they experienced?

What is important is not that Ethernet killed Token Ring, but that Ethernet, by being standardized and multivendor, drove local area networking prices down low enough to become ubiquitous, which enabled the emergence of LAN email and the client-server revolution in the early to mid 1990s.

Certainly 100Mb switched Ethernet with QoS killed ATM to the desktop (a very small, niche market). MPLS was probably the key technology replacing ATM in the Wide Area Network and FDDI in the Metro Area Network, but Metro Ethernet is now often run over MPLS connections.

What is important is not that 100Mb Ethernet killed ATM's promise in the LAN (primarily network video), but that 100Mb Ethernet, by being standardized and multivendor, drove high-performance local area networking prices down low enough to become ubiquitous and, along with inexpensive Ethernet routers, enabled campus-wide LANs, which in turn enabled the emergence of web-based computing using Java and other technologies in the late 1990s. As for network video, that came in a highly compressed form, primarily over the Internet via 1.5Mb down/256Kb up DSL and cable modem connections, not via bidirectional 100Mb Fast Ethernet connections.

Now what was 1Gb Ethernet supposed to kill? Answer: Fibre Channel. Many predicted back in 1997-1998 that GigE would kill Fibre Channel. I remember first it was going to be NFS over GigE, then DAFS over GigE, then iSCSI over GigE, as if to blame the protocol for the failure instead of the true reason: TCP/IP overhead in the pre-TOE, pre-GHz-class-CPU era. Instead, GigE enabled the easy clustering of applications between servers, such as Java application servers like BEA WebLogic and IBM WebSphere, and databases such as Oracle 9i RAC, rather than connecting servers to storage. Oddly enough, iSCSI has reemerged in the last couple of years not as a replacement for Fibre Channel, but for storage replication and as a remote-boot technology for centrally managed client PCs.

Do you see a trend here? It is not the technology that is superseded which determines the success of the new technology; it is the new innovation the new technology enables. Those who predicted uses of GigE by looking at other 1Gb networks (i.e., Fibre Channel), instead of seeing it as a faster Ethernet, were wrong, just as those before them were.

So the truth is that while it is wrong to bet against Ethernet, it is also wrong to assume Ethernet has killed every other networking protocol that came before it. It is also the antithesis of innovative thought to look at a technology by what old things it can kill, rather than what new things it can enable.

But few people experience the latest speed on their desktop and laptop computers, and this may be part of the reason some people naturally look at current high-performance networking to estimate where 10Gb Ethernet will make its impact. The problem is, as we have seen, higher-performance Ethernet's impact is always somewhere other than where the preceding high-performance networking technology was. Often, higher-performance Ethernet solves a different, unforeseen problem than the preceding high-performance networking technology of similar bandwidth.

My next post will look at 10 gigabit Ethernet and some predictions on the future of computer networking.

Part 1 | Part 3

Monday, January 22, 2007

Delta to acquire 124-seat Boeing 737-700s

Delta's recent decision to acquire 10 Boeing 737-700s configured with 124 seats each confirms my theory about the value of a 110-130 seat aircraft for the major airlines.

Previous Delta aircraft in this category were the Boeing 737-300 and 737-200, both of which were gained through the acquisition of Western Airlines. Delta later acquired more 737-300s from other carriers. Prior to the 737-200/300s, Delta flew over a hundred DC-9-30s.

While Delta may never again fly hundreds of 110-130 seat airliners, Delta's recent decision shows the gap between the 70-seat regional jets and the 150-seat 737-800 is too significant to ignore.

I expect that if Delta successfully exits bankruptcy and avoids US Air's hostile takeover attempt, more 737-700s will be ordered in the future.

Wednesday, January 17, 2007

Predictions for the future of low-latency computing, it's not where you think it is (Part 1)

Low-latency cluster interconnects have always been esoteric. Digital Equipment Corporation (DEC) used reflective memory channel (along with many other interconnects) in its VAX clustering. Sequent used Scalable Coherent Interconnect (SCI) for its NUMA-Q. SGI used HIPPI to connect its Power Challenge, and later Origin servers. IBM had its SP interconnect for its RS/6000 clusters. Sun Microsystems flirted with its Fire Link technology for its high-end SPARC servers. In the late 1990s, Myrinet ruled the day for x86 high-performance computing clustering.

Today, the low-latency interconnect of choice is InfiniBand (IB). Unlike the other technologies, IB is both an industry standard and offered by multiple vendors. SCI, while standards-based, was a single-vendor implementation. Similarly, Myricom submitted Myrinet as a standard, but it remains single-vendor. A multi-vendor environment drives down prices while forcing increased innovation.

Another key aspect of IB is that its software is also proceeding down a standards path. The OpenIB Alliance was created to drive a standardized, open source set of APIs and drivers for InfiniBand.

An interesting thing happened on the path to OpenIB. The emergence of 10 gigabit Ethernet, and its required TCP/IP Offload Engine (TOE) Network Interface Cards (NICs), offered another standards-based, high-performance interconnect. As a result, OpenIB was rechristened the OpenFabrics Alliance and became “fabric neutral”.
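
To make the idea of a fabric-neutral, standardized API concrete, here is a minimal sketch of my own (not part of any vendor documentation) that uses the OpenFabrics verbs library, libibverbs, to list the RDMA-capable devices on a host:

    /* Minimal sketch: enumerate RDMA-capable devices through the
       fabric-neutral verbs API provided by the OpenFabrics stack.
       Compile with: gcc list_devices.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);

        if (!devices) {
            fprintf(stderr, "no RDMA devices found, or verbs stack unavailable\n");
            return 1;
        }

        /* The same call works regardless of which RDMA-capable fabric
           the OpenFabrics drivers expose underneath. */
        for (int i = 0; i < num_devices; i++)
            printf("device %d: %s\n", i, ibv_get_device_name(devices[i]));

        ibv_free_device_list(devices);
        return 0;
    }

The point is not the specific calls, but that the same application code can target whatever fabric the standardized driver stack presents.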

My next post will look at some of the mistaken assumptions about the progress of computer networking.

Part 2 | Part 3