Sunday, September 30, 2007

Youth and Innocence Beat Age and Guile

Urban Meyer's "Lucy Moment"

One was born in July of 1964. The other in March of 1989. There can be little doubt the elder had more guile, demonstrated with three seconds left in the game.

In a scene reminiscent of Lucy holding the football for Charlie Brown to kick, Urban Meyer, like Lucy Van Pelt, proved he is a bitch.

But the younger man showed innocence. The kind of childlike innocence that is oblivious to the malicious meddling of the grown-ups.

Urban, in the game of college football brinkmanship, you were beaten by a man less than half your age.

Wednesday, August 08, 2007

I Told You So

Eight months ago I postulated a crazy idea, one of those things I predicted would happen within the next three to five years: embedding a Xen hypervisor into the BIOS of an x86 server.

Today I read that Dell (yes, Dell) is planning to embed a hypervisor into an x86 computer. The hypervisor is likely to be VMware ESX based. This makes sense, because Dell already has a relationship with EMC, the owner of VMware. It also makes sense because Dell knows it must differentiate its products rather than simply being a low-cost provider.

More at:

The Register: Dell to stuff hypervisors in flash memory

The Inquirer: Dell plans to embrace virtualisation

ZDNet Blogs: Speculation about embedded hypervisors

ZDNet Blogs: VMware prepping embedded 'ESX Lite' hypervisor

Special hat tip to Timothy Prickett Morgan:

The UNIX Guardian: The X Factor: Virtualization Belongs in the System, Not in the Software

Related Post:

Is this a crazy idea?

Monday, June 11, 2007

That was leadership

It was 20 years ago today:

"There is one sign the Soviets can make that would be unmistakable, that would advance dramatically the cause of freedom and peace.

"General Secretary Gorbachev, if you seek peace, if you seek prosperity for the Soviet Union and Eastern Europe, if you seek liberalization: Come here to this gate! Mr. Gorbachev, open this gate! Mr. Gorbachev, tear down this wall!"

Thursday, May 31, 2007

Are there any pragmatic libertarians?

The emergence of the Fair Tax as a viable, well-researched, consumption-based alternative to the income tax should have libertarians (both small "l" and large "L") ecstatic. It is the first proposal in years that tries to return power to individuals and away from the Federal government and Washington lobbyists. However, many self-described libertarians seem opposed to the Fair Tax because of their unrealistic vision of a taxless American society. Or perhaps it is the natural conflict between well-researched pragmatism and political purism. After all, if you research something, you might actually be asked to do it.

Which brings us to the crux of the problem. Every journey begins with a single step. If you live in New York but want to fly to California, you probably have to take some other form of transportation (train or car) to get to the airport. Many libertarians are like someone who believes that traveling by land from their New York residence to JFK somehow violates their vision of "flying" to California.

Because of this, many libertarians seem more comfortable as political outsiders who would rather criticize from a position of political purity than roll up their sleeves and start the hard work of solving problems. They are like the proverbial dog chasing the car. What will the dog do with the car if he actually catches it?

This can also be seen in the Libertarian Party's intolerance of those who do not toe the LP line 100%. It is eerily similar to what the Democrats did to Bob Casey Sr. Do we really need such political muttawas in American political parties? And how on earth could such a puritan party ever lead a multi-party government?

While widely varying views on many issues are the norm in the Republican party, and even to some extent in the Democratic party, the LP would rather be small, pure, insignificant, and whiny than actually lead. It's tiring, unproductive, and immature.

And I would add, I firmly believe the Libertarian Party (and many self-described small "l" libertarians) is simply jealous of the success Americans for Fair Taxation has had rallying tens of thousands of people to its Fair Tax rallies. I would not be surprised if more people have attended Fair Tax rallies in the last year than have attended LP national conventions over the last 10 years.

So if you are smug, you think you are better than everyone else, you like to complain, you like to champion the impossible, you like to bitch, and you really, really have no desire whatsoever to actually lead, the Libertarian Party needs you.

Tuesday, May 29, 2007

The Return of In-Flight Broadband

I stumbled onto this purely by accident.

Like the mythical phoenix, in-flight broadband Internet access may soon rise from the ashes of Connexion by Boeing. Panasonic Avionics Corporation has improved upon Connexion's original concept with its eXconnect offering.

eXconnect improves on Connexion by leasing satellite Internet connectivity, reducing the break-even point for the service. It also improves speed using newer technology. The airplane antenna is more compact and lighter, saving fuel costs. And Panasonic is looking to partner with existing airport WiFi vendors, so you could pay for your in-flight connection and your airport departure-lounge WiFi under a single package. Finally, the goal is to start the service at a price similar to Connexion by Boeing, and to get the price down to about $20 for the duration of a long-distance international flight within a year of launching.

It makes total sense that the in-flight entertainment companies are bringing back in-flight broadband, and as with any second attempt, it should be faster, smaller, and cheaper.

Now if they would just bring back Concorde.


Panasonic May Relaunch Connexion

Aircraft Interiors: Panasonic plans broadband launch in fourth quarter

Tuesday, April 03, 2007

Rhetorically Brilliant

So Sun Microsystems is bringing back a dedicated microelectronics division.

Why? Or more specifically, why now?

Here are my thoughts as to why.

First, the obvious. Sun once had a solid OEM SPARC business, primarily in the low-end embedded and SPARC clone workstation market, as well as in low-end servers and specialized systems such as telco products and hardened systems for the military. These included workstations and servers where customers built their own systems based on UltraSPARC processors OEMed from Sun, as well as systems where the entire motherboard was OEMed from Sun.

Back when the technical desktop was ruled by UNIX/RISC, and Sun UltraSPARC II processors were solid desktop performers, this made a lot of sense. It also made sense with the initial UltraSPARC IIIi systems. However, the MHz race between Intel and AMD, along with the rise of Linux, caused this specialized OEM business to shift towards x86, and Sun's OEM business shrank accordingly.

However, it never entirely went away, as Tadpole still sells UltraSPARC IIi and UltraSPARC IIIi based laptop computers, and the mil spec SPARC systems business still exists.

So why bring back a dedicated microelectronics OEM business? One reason is that Sun has a very good OEM server chip in the UltraSPARC T1 "Niagara" processor. Its system-on-a-chip architecture makes it easy for companies to build compute solutions around, and there are companies focused on markets where the US-T1 fits well, such as telco, security, and networking. Another is that Sun has Niagara 2 in the works, which could allow Sun to reposition the original Niagara as more of an embedded play while also offering Niagara 2 to OEM customers where it fits. Also, Sun now has a 10Gb Ethernet ASIC it wishes to offer to the OEM market. And as you know if you follow Sun, volume matters. Sun knows it cannot drive its network ASIC into the larger market by itself.

But I think that is only part of the story. It explains the "Why". It does not explain the "When", or more precisely, the "Why Now".

As you may know, Sun will soon announce the servers which are part of the Sun-Fujitsu "Advanced Product Line" (APL) project. These systems will use Fujitsu's SPARC64-VI processors. Sun has had to face a lot of FUD around the future of its SPARC processors due to its decision to partner on these midrange and high-end systems.

What better preemptive action to take against the obvious FUD and annoying reporter questions than to create an Executive Vice President of SPARC? Who is the obvious leader of the SPARC processor business? People in the industry know who David Yen is. Does anyone know who runs Fujitsu's SPARC64 business?

Sun has grabbed the leadership of the SPARC industry in a very visible way just before the announcement of a product line which uses third-party SPARC processors.

All I can say is, this is rhetorically brilliant.

It has Jonathan Schwartz's fingerprints all over it.

Was Margaret Thatcher the last real man in Britain?

A disgusted Ralph Peters asks the question:

Where's Winston?
It's Iran 15, Brits 0 In The Gulf

If you are not familiar with Ralph Peters, he is a retired Army Intelligence officer, novelist, columnist, and television commentator.

Thursday, March 29, 2007

10 Gigabit Ethernet "Crossover"

I just saw this article comparing Fibre Channel, 10Gb Ethernet, and InfiniBand, and I thought it was interesting.

It points out data from the market research firm Dell'Oro Group showing Gigabit Ethernet ports first outshipped Fast Ethernet ports in 2004, some seven years after GigE was introduced, and five years after the 1000BASE-T spec was introduced in 1999. (There is a good article here on the 1000BASE-T PHY.)

So it took five years from the 1000BASE-T release until 1000BASE-T ports exceeded 100BASE-T ports, despite backwards compatibility.

"Crossover", as this point is known, is very important for a replacement technology. Once a new standard or product reaches crossover, the second half of market penetration typically occurs quickly. Backwards compatibility helps, but realize that a compatible 100/1000BASE-T port does not mean a switch blade with 48 of those ports will be compatible with an older blade chassis. The increased cost of the new technology also creates some resistance.

Today, there is a much bigger challenge for 10GBASE-T: Power consumption. From the article on the 1000BASE-T PHY I linked to earlier:

"Because of the complexity of the signal-processing task, a 10/100/1000Base-T copper PHY is the dominant consumer of power in essentially all gigabit switch designs supporting copper media. First-generation 1000Base-T copper PHYs introduced in 1999 in 0.35-micron CMOS consumed well over 5 watts of power, too high for widespread use in high-density Gigabit Ethernet switch form factors."
With 10GBASE-T, the power required is much higher. Chelsio's new 10GBASE-T NIC requires 24 watts of power, and can only drive a signal over 50 meters of the 100-meter distance of the 10GBASE-T spec. Some of the NIC's power goes to the supporting TCP Offload Engine (TOE) and other circuitry, but it is probably safe to say each 10GBASE-T switch or NIC port currently requires about 15 watts.

So while some say the promise of consolidated I/O will drive the transition from Gigabit Ethernet to 10 Gigabit Ethernet faster than the transition from Fast Ethernet to Gigabit Ethernet, power consumption will likely slow this transition significantly.

So indeed the transition from Gigabit Ethernet to 10 Gigabit Ethernet may follow the transition from Fast Ethernet to Gigabit Ethernet. In that case, if 1000BASE-T is a gauge, and the 10GBASE-T spec was just approved in 2006, it could take until 2011 before 10GBASE-T ports outnumber 1000BASE-T ports.
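As a sanity check, the timeline and power arithmetic above fit in a few lines of Python. All figures are the estimates quoted in this post, not measurements:

```python
# 1000BASE-T history: spec released in 1999, port crossover reached in 2004.
gige_spec_year = 1999
gige_crossover_year = 2004
lag_years = gige_crossover_year - gige_spec_year  # 5 years

# Apply the same lag to 10GBASE-T, whose spec was approved in 2006.
ten_gbase_t_spec_year = 2006
projected_crossover = ten_gbase_t_spec_year + lag_years
print(projected_crossover)  # 2011

# Power: at roughly 15 W per 10GBASE-T port, a 48-port switch blade
# would need about 720 W for the ports alone, a serious density problem.
watts_per_port = 15
ports_per_blade = 48
print(watts_per_port * ports_per_blade)  # 720
```

If the lag turns out longer than 1000BASE-T's, as the power numbers suggest it might, the crossover date simply slides out with it.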

What is good about this is that the software (iWARP, iSER, NFSoRDMA, pNFSoRDMA, etc.) and supporting networking protocols for this new generation of Ethernet will have plenty of time to catch up. This means once crossover does happen, it should have a very strong impact on the market.

Meanwhile, it appears there continues to be plenty of opportunity for Fibre Channel for storage and InfiniBand for low-latency IPC.

Related posts:
Predictions for the future of low-latency computing, it's not where you think it is
Part 1 | Part 2 | Part 3

Wednesday, March 28, 2007

Big Sky Theory

If you are over the Pacific Ocean halfway between Chile and New Zealand, and you get killed by a falling satellite, guess what? It's not your day.

But for these people, as the falling satellite missed them, the "Big Sky" theory continues to hold:

Space junk falls around airliner: report

Tuesday, March 27, 2007

I wonder why Microsoft has not bought Adobe

Adobe, with its new Creative Suite 3, is all the buzz right now. But Photoshop is not what makes Adobe interesting. It is Adobe's ability to establish two of its formats (PDF and Flash) as de facto standards.

Microsoft loves de facto standards. And Microsoft hates the fact that Adobe owns THE web animation standard (which is becoming the streaming media standard), and that Adobe owns THE print-formatted document standard.

Microsoft tried to create an alternative to PDF, but has had zero success, despite the fact that most PDFs originate as documents created in Microsoft Office applications.

Which raises the question: Microsoft has a market cap of about $275 billion, compared to Adobe's $25 billion. Why Microsoft has not attempted a takeover of Adobe, hostile or otherwise, is beyond me.

Perhaps it is all the bad blood between Microsoft and Adobe in the past. Maybe it is Microsoft's failed acquisition of SoftImage in the 1990s. But we are in a new decade, make that a new millennium.

Tuesday, March 13, 2007

The Economy is Booming

Don't pay attention to the Dow's recent correction. Just check vacancy rates of hotels, airline load factors, and rental car availability.

Every flight I have booked recently has been nearly full, forcing my employer to pay higher fares. I have also had serious problems finding hotel vacancies, and have been forced to stay in hotels on the outskirts of town. And for my next trip, not only are those first two points a factor, but there are no rental cars available.

Granted the airlines have reduced flights. And perhaps rental car companies have reduced their fleets. But other than the Stardust Hotel in Vegas last night, I don't recall hotels being imploded to reduce capacity.

I don't remember air travel, rental cars, and hotels being this tight since 1999 and 2000, at the peak of the dot-com era.

The airplane that refuses to die

The Boeing 767, that is.

This is an interesting article:

Boeing considering new 767 freighter to counter A330-200F

And an excellent complement at Randy's Blog here:

Year of the 767

Sidebar: Besides Sun CEO Jonathan Schwartz, Randy Baseler, Vice President of Marketing for Boeing Commercial Airplanes, is probably corporate America's highest ranking and best known blogger.

Boeing is considering enhancements to its 767-300 freighter to make it more competitive with the Airbus A330 freighter. Now the 767 is over a decade older than the A330, and the freighter variant of the A330 is a brand-new model, designed to replace the A300 freighter, which is itself almost a decade older than the 767.

But it is amazing the 767 is even able to challenge the A330. The A330 is larger, and can carry both more volume and weight. The A330's lower deck is wider, and can carry more and larger containers. That the 767 is still competitive against the larger A330 points to much deeper problems with Airbus. But it also points to a company, Boeing, which is a model of aggressive competitiveness.

The way I describe it is that Boeing is able to play chess while simultaneously being in a street fight. Playing chess is Boeing accomplishing its strategic plans. The street fight is the daily tactical selling of what Boeing has "on the truck". The chess part is the future: the 787 and the deft move to offer the derivative 747-8. The street fight is today: the numerous 737 variants, the phenomenal 777, and now the possibility of an enhanced 767F. What is interesting is that an enhanced 767F could eat into production capacity for the 787, which will be produced in the same Everett, Washington facility.

I think Boeing will cross that bridge when they reach it. Today it is about winning.

There are many companies who could learn a thing or two from Boeing's commercial airplane division.

Tuesday, March 06, 2007

Anarchists Flock to Join Rioters

Shouldn't somebody have predicted this?

European anarchists flock to join rioters in Copenhagen

I originally saw this story in a local paper with a similar headline.

Of course, I personally don't believe anarchists "flock". A flock is far too organized for anarchists.

And do European anarchists live in Europe? It seems too organized for them. If anarchists like anarchy, it seems they would be more comfortable somewhere already in anarchy: Lebanon, Somalia, or Iraq, perhaps.

Wednesday, February 28, 2007

"When he woke up [he] had a huge headache"

After five bottles of vodka "one straight after the other", I would think so. It hurts my head just reading this story:

The father of all hangovers

Where on Earth do the British get off ...

... in any way criticizing any kind of food?

Prince Charles says ban McDonald's food

Am I the only one to see the humor in this? Okay, I get that Prince Charles was concerned about the health aspects of McDonald's, but please, ask the Italians to take up this fight.

YRH, go back to your bangers and mash, and leave food commentary to the continentals.

Friday, February 23, 2007

What Boeing Needs To Do (Part 4)

In previous posts, I proposed Boeing split its proposed Y1 project into two aircraft: one in a smaller, 100-150 seat category, and another targeting the 180-250 seat market. This article suggests Boeing might do just that:

Boeing may offer two 737 replacement solutions

I also said Boeing needs to act now. Why?

One thing I have learned in business is, companies have the ability to shape the market and drive the behavior of competitors. They also have the ability, through inaction or the wrong actions, to be held hostage to the decisions of competitors.

Airbus is struggling. It is under severe pressure from Boeing in the long-range widebody airliner market currently defined by the Boeing 777 and Airbus A330/A340, and the future Boeing 787 and Airbus A350. The success of Boeing's 787 program has put extreme pressure on Airbus. It not only forced Airbus to create a response (the A350), but to scrap the original A350 design and completely redesign it as the A350XWB. Now Airbus has internal problems as well, as this article covers:

Ex-Airbus boss says EADS structure doomed to fail

Despite these problems, Airbus is quite successful in the medium-range narrow-body market, where its A320 family outsells the Boeing 737 about two to one. But larger aircraft are more profitable.

But Boeing has a unique opportunity. Not necessarily to vanquish a competitor (I for one despise monopolies), but to further define the future airliner market. Boeing can force its primary competitor to either cede a major market segment or further dilute its efforts.

This is why Boeing needs to act now. An aircraft targeting the Boeing 757, Boeing 767-200, and A310 markets would give Boeing a "hammer and anvil" strategy against Airbus. It would limit Airbus' markets or spread the company too thin. And it would put Boeing firmly in the driver's seat of the commercial aviation industry.

Part 1 | Part 2 | Part 3

Monday, February 05, 2007

Predictions for the future of low-latency computing, it's not where you think it is (Part 3)

Where is the future of general-purpose computer networks heading? What could 10Gb Ethernet to the desktop enable? Good questions. For the last three years, I have carried a laptop with a 1Gb connection. Only twice in that time has it connected at above 100Mb. The truth is most LAN switches are still 100Mb, and 100Mb is more than enough bandwidth to support both computing and VoIP phones. An MPEG-4 HDTV signal requires only about 4Mb/sec. You can put a lot on a 100Mb connection.
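A quick illustration of that headroom, using the rough 4Mb/sec HDTV figure above (an estimate, not a measured rate):

```python
# How many ~4 Mb/sec MPEG-4 HDTV streams fit on a 100 Mb/sec desktop port?
link_mbps = 100
hdtv_stream_mbps = 4  # rough figure for a compressed MPEG-4 HD signal

concurrent_streams = link_mbps // hdtv_stream_mbps
print(concurrent_streams)  # 25

# A single VoIP call needs well under 1 Mb/sec, so ordinary desktop use
# (email, web, a phone call, even an HD stream) leaves the port mostly idle.
```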

So back to 10Gb to the desktop. I recently built a new PC. I chose an Nvidia GeForce 7600 GS-based fanless video card. It is based on an older video chip, but leverages semiconductor process shrinks to deliver a much lower power solution. Yet this card easily surpasses the performance of a state-of-the-art, high-end UNIX workstation graphics card of four years ago. The cards for RISC/UNIX workstations of that era were PCI based, as those workstations did not have the Intel AGP slot. A 64-bit, 66MHz PCI slot provided about 500MB/sec of bandwidth. The idea behind these cards was that all graphics processing was done on the card, which had very high bandwidth memory, and only instructions and small amounts of data were passed through the 500MB/sec PCI bus. Hold that thought.

At about the same time, there was interest in creating high-end visualization solutions using these high-performance PCI graphics cards in small servers interconnected with a low-latency network such as Myrinet. The idea was that these "graphics grids" could replace high-end visualization solutions from SGI and Evans & Sutherland.

So what would happen if 10Gb Ethernet, at roughly 1000MB/sec, replaced the PCI bus? Take the low-power graphics card, add a TOE-enabled NIC, and you have a high-performance networked display without the need for an entire computer behind it. But what about Ethernet's latency? Those earlier visualization clusters used Myrinet for latency as well as bandwidth. That is where iWARP comes in: one can run the protocol over a low-latency RDMA connection. This could fundamentally change computing, as 10Gb Ethernet replaces the system bus and the graphics card becomes an add-on device to the display. Add a keyboard and mouse interface, and you have an engineering thin client, perfectly suited to a virtualized desktop running in a VM on a larger server.
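The bandwidth substitution at the heart of this idea is easy to check with raw line rates (a sketch only; framing and protocol overhead are ignored):

```python
# 64-bit, 66 MHz PCI: 8 bytes per transfer, 66 million transfers/sec peak.
pci_mb_per_sec = 8 * 66_000_000 / 1_000_000  # 528.0 MB/sec peak

# 10 Gb Ethernet raw line rate, in MB/sec.
ten_gbe_mb_per_sec = 10_000_000_000 / 8 / 1_000_000  # 1250.0 MB/sec

# The Ethernet link comfortably exceeds the PCI bus those workstation
# graphics cards were designed around; iWARP addresses the latency side.
assert ten_gbe_mb_per_sec > pci_mb_per_sec
print(pci_mb_per_sec, ten_gbe_mb_per_sec)
```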

Speaking of iWARP, the reduction in latency iWARP offers will be more important than the increase in bandwidth 10Gb Ethernet offers. You read it here first: low-latency Ethernet will have a far greater impact than higher-bandwidth Ethernet. Why? Simple. Lower latency offers more potential for innovation than more bandwidth does. There are plenty of options for bandwidth today, such as EtherChannel for IP and 4Gb Fibre Channel for storage.

There is a key trend in computing over the last few years. Some call it commoditization. Some call it the "trend to free". Operating systems "went to free" with the emergence of Linux. Some have said all software is "going to free" with the emergence of open source. Others have spoken of "free bandwidth". Certainly, one can look at Web 2.0 as an example of what happens to Internet sites when it is assumed everyone has a broadband connection. The emergence of AMD's Opteron and Intel's EM64T 64-bit extensions to the x86 architecture means 64-bit memory addressing is now "free" with the purchase of an x86 system, and no longer requires an expensive RISC/UNIX platform. And with the emergence of Xen as a standard option for major Linux distributions, and multiple free options from VMware (VMware Player, VMware Server), virtualization is "going to free".

What happens when something like this becomes “free”? New innovation is enabled at a level above the free layer. And that is why low-latency Ethernet will be so empowering to innovation.

Low-latency computing has always been a very high cost technology. For decades it has been limited to the realm of supercomputers, mainframes, and their logical follow-ons, HPC clusters and high-end RISC/UNIX systems. As a result, most of the battle against latency has occurred in software. The application clustering enabled by 1Gb Ethernet required proprietary software to manage state among many cluster nodes. Replication, caching, and specialized protocols were required to make it all work. In fact, in the clustered Java appserver space, the clustering technology became a key differentiator. But the truth is, had a ubiquitous low-latency interconnect and intelligent operating system clustering been available at the time, much less work would have been required on the part of the ISV. Simply put, if BEA had been building a clustered Java appserver for a DEC VAXcluster, it would have been much easier, and they would have come to market much faster.

One can look at ISVs currently developing on InfiniBand as early adopters of low-latency computing. The basic "80/20" rule would suggest that for every player investing in IB, there are four who could benefit but are not. Or it could be a 90/10 rule: it is not hard to believe that for every ISV currently trying to gain competitive advantage with IB, there are nine others who don't feel it is currently worth the effort to pursue high-performance, low-latency networking as an enabler. This is much like early ISV support of Linux. Some felt it was a good fit for their product; others waited for more market acceptance and maturity.

So when low-latency networking becomes free, that is, when all x86 servers come with iWARP-ready on-board 10Gb Ethernet with TOEs, operating system and application developers will have new assumptions about cluster latency. It could open up initiatives for true single-system-image clustering in Linux, and perhaps even Windows. Applications which previously were not clusterable may become so, which may disrupt existing applications. The promise of grid/utility computing becomes much more viable with a unified fabric. Blade server backplanes will probably be RDMA Ethernet based. Perhaps a shared-storage clustered database alternative to Oracle RAC will emerge. Fundamental changes in the world of real-time computing, such as electronics and data capture, are very likely. Radical changes to client computing are certainly possible, with thin clients offering far more potential than before. Basically, every form of computing which was weird or expensive because it required highly specialized, high-performance interconnects will be commoditized.

This is my prediction of what 10Gb iWARP Ethernet will enable: shared-system-image clustering will emerge as the de facto form of clustering, global filesystems will emerge as the de facto server filesystems, and "grid computing" (shared-resource clustering) will become the normal method of deploying multiple servers. My guess for a target date for this becoming the norm in computing is around 2015.

Fortunately, we have an opportunity to observe in real time what happens when a high-end computing technology becomes commoditized. That technology is cheap, high-performance 3D graphics. Once an industry unto itself, then an optional feature of an expensive, high-performance workstation, and now standard equipment on an ordinary desktop PC, 3D graphics hardware is only now being assumed by an x86 desktop operating system (Windows Vista). 3D displays are officially commoditized. Watch what happens in the graphical user interface space over the next few years. It will be a good benchmark for the innovation which occurs around a technology that has been commoditized.

Part 1 | Part 2

Thursday, January 25, 2007

Is this a crazy idea?

The LinuxBIOS project seeks to put a small Linux image into a ROM to manage PC-type hardware.

Many have developed "boot from thumbdrive" operating systems, which put the whole OS into a few hundred megabytes, similar to a "Live CD".

XenSource has developed bare-metal hypervisors, including one which targets Windows-only environments. I assume these use a locked-down Linux or BSD kernel to provide the Xen "Domain 0" function. XenSource also provides a Xen Live CD for evaluation.

Meanwhile, as hypervisors like Xen and VMware continue to mature, as CPUs evolve to support virtualization (Intel VT, AMD-V), and with PCI Express I/O virtualization coming soon, x86 virtualization will become more robust and approach native performance. It is likely virtualized x86 servers will become the norm for production environments.

What would happen if the LinuxBIOS, boot-from-flash, and bare-metal Xen Live CD ideas merged? Imagine a "XenBIOS" project, using a few hundred megabytes of on-board flash memory to hold a live hypervisor image. It would mean virtualization sedimenting into hardware, not into operating systems, as most are currently predicting.

The effect of free, hardware-based virtualization which is automatically there would make for very interesting x86 servers. Even more so with a few on-board, fully virtualized, multi-fabric I/O channels. Kind of a baby mainframe.

Maybe it's a crazy idea. Maybe it's a vision of the future of computing.

Wednesday, January 24, 2007

Predictions for the future of low-latency computing, it's not where you think it is (Part 2)

Many people make mistaken assumptions when speaking about the history of computer networking.

One assumption is that 100Mb, then 1Gb, Ethernet replaced many other protocols. Certainly, in some cases, this is true. But people claiming this often overstate the facts.

Ethernet is first and foremost a LAN protocol, not a specialized, high-performance cluster interconnect for connecting multiple large shared-memory systems or vector supercomputers. To put it simply, I doubt anyone can point to a case of Ethernet replacing HIPPI. HIPPI was used both as a storage connection and as a cluster interconnect for supercomputers. What killed HIPPI in storage interconnects was clearly Fibre Channel. In cluster interconnects, one of the only vendors using HIPPI was SGI, for clustering multiple multi-hundred-CPU Origin systems together. SGI used InfiniBand for clustering the Altix follow-ons to the Origin.

Certainly 10Mb shared Ethernet killed Token Ring. But think about it: how prevalent were Token Ring networks? For how many people was Ethernet the first LAN technology they experienced?

What is important is not that Ethernet killed Token Ring, but that Ethernet, by being standardized and multivendor, drove local area networking prices low enough to become ubiquitous, which enabled the emergence of LAN email and the client-server revolution in the early to mid 1990s.

Certainly 100Mb switched Ethernet with QoS killed ATM to the desktop (a very small niche market). MPLS was probably the key technology replacing ATM in the wide area network and FDDI in the metro area network, though now Metro Ethernet is often run over MPLS connections.

What is important is not that 100Mb Ethernet killed ATM's promise in the LAN (primarily network video), but that 100Mb Ethernet, by being standardized and multivendor, drove high-performance local area networking prices low enough to become ubiquitous and, along with inexpensive Ethernet routers, enabled campus-wide LANs, which in turn enabled the emergence of web-based computing using Java and other technologies in the late 1990s. As for network video, it came in a highly compressed form, primarily over the Internet via 1.5Mb down/256Kb up DSL and cable modem connections, not via bidirectional 100Mb Fast Ethernet connections.

Now what was 1Gb Ethernet supposed to kill? Answer: Fibre Channel. Many predicted back in 1997-1998 that GigE would kill Fibre Channel. I remember first it was going to be NFS over GigE, then DAFS over GigE, then iSCSI over GigE, as if to blame the protocol for the failure instead of the true reason: TCP/IP overhead in the pre-TOE, pre-GHz-class-CPU era. Instead, GigE enabled the easy clustering of applications between servers, such as Java application servers like BEA WebLogic and IBM WebSphere, and databases such as Oracle 9i RAC, rather than connecting servers to storage. Oddly enough, iSCSI has reemerged in the last couple of years not as a replacement for Fibre Channel, but for storage replication and as a remote boot technology for centrally managed client PCs.

Do you see a trend here? It is not the technology that is superseded which determines the success of the new technology; it is the new innovation the new technology enables. Those who predicted uses of GigE by looking at other 1Gb networks (i.e., Fibre Channel), instead of at a faster Ethernet, were wrong. Just as those before them were.

So the truth is that while it is wrong to bet against Ethernet, it is also wrong to assume Ethernet has killed every other networking protocol before it. And it is the antithesis of innovative thought to judge a technology by what old things it can kill rather than by what new things it can enable.

But few people are yet experiencing the latest speed on their desktop and laptop computers, and this may be part of the reason some naturally look at current high-performance networking to estimate where 10Gb Ethernet will make its impact. The problem is, as we have seen, higher-performance Ethernet's impact is always somewhere other than where the preceding high-performance networking technology was. Often, higher-performance Ethernet solves a different, unforeseen problem than the preceding high-performance networking technology of similar bandwidth.

My next post will look at 10 gigabit Ethernet and some predictions on the future of computer networking.

Part 1 | Part 3

Monday, January 22, 2007

Delta to acquire 124-seat Boeing 737-700s

Delta's recent decision to acquire 10 Boeing 737-700s configured with 124 seats each confirms my theory about the value of a 110-130 seat aircraft for the major airlines.

Previous Delta aircraft in this category were the Boeing 737-300 and 737-200, both of which were gained through the acquisition of Western Airlines. Delta later acquired more 737-300s from other carriers. Prior to the 737-200/300s, Delta flew over a hundred DC-9-30s.

While Delta may never fly hundreds of 110-130 seat airliners again, Delta's recent decision shows the gap between the 70-seat regional jets and the 150-seat 737-800 is too significant to ignore.

I expect that if Delta successfully exits bankruptcy and avoids US Airways' hostile takeover attempt, more 737-700s will be ordered in the future.

Wednesday, January 17, 2007

Predictions for the future of low-latency computing, it's not where you think it is (Part 1)

Low-latency cluster interconnects have always been esoteric. Digital Equipment Corporation (DEC) used reflective memory channel (along with many other interconnects) in its VAX clustering. Sequent used Scalable Coherent Interconnect (SCI) for its NUMA-Q. SGI used HIPPI to connect its Power Challenge, and later Origin servers. IBM had its SP interconnect for its RS/6000 clusters. Sun Microsystems flirted with its Fire Link technology for its high-end SPARC servers. In the late 1990s, Myrinet ruled the day for x86 high-performance computing clustering.

Today, the low-latency interconnect of choice is InfiniBand (IB). Unlike the other technologies, IB is both an industry standard and offered by multiple vendors. SCI, while standards-based, was a single-vendor implementation. Similarly, Myricom submitted Myrinet as a standard, but it remains single-vendor. A multi-vendor environment drives down prices while forcing increased innovation.

Another key aspect of IB is that the software is also proceeding down a standards path. The OpenIB Alliance was created to drive a standardized, open source set of APIs and drivers for InfiniBand.

An interesting thing happened on the path to OpenIB: the emergence of 10 gigabit Ethernet, and its required TCP/IP Offload Engine (TOE) Network Interface Cards (NICs), offered another standards-based, high-performance interconnect. As a result, OpenIB was rechristened the OpenFabrics Alliance and became “fabric neutral”.

My next post will look at some of the mistaken assumptions about the progress of computer networking.

Part 2 | Part 3