Sunday, January 17, 2016

"True" Private Clouds

Wikibon is talking about "True" Private Clouds. I think their definition is too narrow and gets into the weeds. It misses the true customers of a "true" private cloud, of which there are two. The first is the organizational customer that purchases a private cloud. The second is the internal end-consumer of cloud services.

To Wikibon's credit, the definition of "Private Cloud" is an issue that needs to be addressed. In my career I have seen too many organizations overuse the term "Private Cloud". I have seen a VMware cluster deployed on disparate hardware with no upper level cloud management platform called a private cloud. I have seen converged infrastructure, acquired but managed identically to non-converged infrastructure (as discrete components each managed by their functional staff) called private clouds.

Converged infrastructure plays a role in a private cloud, but even that term is abused. I have seen disparate servers and storage, purchased separately at different times, cobbled together and called converged infrastructure after the fact. I have also seen single-SKU converged infrastructure broken apart, support contracts for its components separated, and individual components upgraded on different life-cycles.

From an operations perspective, I have seen mature IT organizations in large enterprises provide levels of managed services similar to those of traditional managed service providers. I have also seen the converged infrastructure single-support model dramatically fail organizational customers, providing no better single support than that provided by a reseller or managed service provider.

If the goal of a "true" private cloud is to provide internal end-consumers a level of service similar to what they would receive from a public cloud, but with higher levels of compliance and data sovereignty, then many of the detailed requirements Wikibon mentions are not necessary. As long as the organization can provide an offering to internal end-consumers that is competitive (on cost, ease of consumption, and reliability), it should meet the definition.

Here is what I believe is required of a "True" Private Cloud:
  • Acquired in consolidated units of management, virtualization, compute, network, and storage with common amortization, and common life-cycle management.
  • Components supported as an integrated whole, with a single number, first-call support model, and escalated support abstracted from the internal end-consumer.
  • Compute, storage, network, and virtualization managed as a single entity by a single, cross-functional team.
  • Provisioned and managed via a cloud management platform (CMP).
  • Consumed by internal end-consumers as a shared resource in logical, not physical, increments (i.e., VMs and GBs).
  • End-consumer offerings include multiple performance and data protection SLAs.
  • Provides charge-back to internal end-consumers.
  • Provides the Private Cloud operator performance, capacity, and licensing budgeting of the infrastructure; performance metering and capacity measurement to manage over-subscription, prevent over-consumption (especially of performance), and allow for elastic performance and capacity scaling; and built-in performance and capacity planning for predictable infrastructure growth.
  • Managed by high-IT-maturity organizational customer IT staff, or optionally delivered as part of a managed services offering that does not require organizational customer IT staff to manage.
  • Financed for the organizational customer through capital purchase, capital lease, operational lease, capacity lease, or a pay-per-use offering.
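
The charge-back and SLA-tier requirements can be made concrete with a small example. Below is a minimal sketch, assuming hypothetical SLA tiers, rates, and consumption figures (all names and numbers are invented for illustration, not taken from any real offering):

```python
# Hypothetical charge-back for a private cloud: internal end-consumers are
# billed on logical units (VMs and GBs), with rates varying by SLA tier.
# Tier names and rates are illustrative assumptions only.

SLA_RATES = {
    # tier: (rate per VM per month, rate per GB per month)
    "gold":   (50.00, 0.20),
    "silver": (30.00, 0.10),
    "bronze": (15.00, 0.05),
}

def monthly_chargeback(consumption):
    """Bill a department from its per-tier VM and GB consumption.

    consumption: dict mapping tier -> (vm_count, gb_used)
    """
    total = 0.0
    for tier, (vms, gbs) in consumption.items():
        vm_rate, gb_rate = SLA_RATES[tier]
        total += vms * vm_rate + gbs * gb_rate
    return round(total, 2)

# A department running 10 gold VMs with 500 GB and 20 bronze VMs with 2 TB:
print(monthly_chargeback({"gold": (10, 500), "bronze": (20, 2048)}))  # 1002.4
```

The same per-tier rate table could back both the end-consumer price list and the operator's capacity and licensing budgeting.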

Some organizational customers will want to capitalize the "True" Private Cloud and manage it themselves. Others will want to basically rent the whole stack, including the software, and have it managed for them. But the common denominator should be how the internal end-consumer consumes the offering. It should look, feel, and cost as much like the public cloud as possible.

Wednesday, September 30, 2015

U.S. Cyber Command's Requirements Demand Warrant Officers

Yesterday there was a hearing in front of the House Armed Services Committee, "Outside Perspectives on the Department of Defense Cyber Strategy".

Some of the points brought up were about the personnel management of military cyber warriors. This is a challenge, because "cyber" (BTW, I HATE the term) is both an infrastructure (i.e., IT infrastructure) and a domain (Information Warfare). It is an area where both warriors and janitors walk, more akin to urban warfare than other historic domains.

Since the late 1980s, the military has treated IT as an area where COTS technologies should rule, to both increase the productivity of the military, and to reduce operating costs. At the same time, the PC and client-server boom of the 1990s drew skilled IT technicians from the military to the higher paying civilian sector.

Through the 1980s, the military had its own uniformed data processing specialists. The father of a high-school friend was a Technical Sergeant in the Air Force and a Burroughs mainframe programmer. In the 1990s, most of the programming positions were either converted to civil service or outsourced to contractors.

The second wave occurred in the 1990s with the decentralization of IT acquisition, management, and support from central service commands (i.e., Air Force Communications Command) to the local military bases. This was followed by A-76 studies converting many base-level IT jobs to a combination of civil service management and contractor work forces.

The result of all of this is that the military lost its uniformed expertise in information technology.

Fast forward to today, and information infrastructure is as much a domain in warfighting as the seas and the air, yet the military is left without the skills in uniform which correlate not only to captains of ships and pilots of airplanes, but also to the technicians, operators, and maintainers. As a result, the military has once again centralized IT acquisition, management, and support, and is once again filling positions with uniformed personnel.

However, IT skills are unique in several ways. They are perishable: old skill requirements (e.g., Novell NetWare, UNIX) become obsolete and unneeded, and are replaced with new skill requirements (e.g., Windows Server, Linux). To ensure quality, they require validation (e.g., IT certifications). Because they are COTS-based, they are inexpensive compared to unique military skills. And they are fungible, readily transferable to the civilian sector.

Another unique aspect of the military is ab initio training. The military will take someone out of high school with the appropriate aptitude, enlist them, and train them to a reasonable, beginner's level of productivity. It then uses on-the-job training and continuing education to build expertise. In the case of an in-demand skill set, this creates retention issues. And it is a bigger problem than with, say, a military turbine engine mechanic--only a handful of airlines need turbine mechanics, but almost every organization needs a Windows administrator.

Then there are the challenges. The military needs smart, highly skilled problem solvers for day-to-day operation of the IT infrastructure. The military information infrastructure is more likely to be attacked in both peace and wartime, and rapid recovery is critical in wartime. Poor retention hurts this need. The military needs deeply skilled, highly experienced IT technicians. But the need for operational managers is not that great, so the college-educated, commissioned officer corps is not the appropriate career path for an IT technician. Something else is needed.

The military position of Warrant Officer is that of a technical specialist. Historically, the technical expertise came from experience serving in the enlisted corps. In modern times, the Army uses Warrant Officers as helicopter pilots and trains them to the appropriate level of technical expertise.

Warrant Officers can serve as highly skilled individual contributors or as first level managers. It would seem a perfect career path for an enlisted military IT specialist. Tie it to certifications, and perhaps an Associate Degree, along with a service commitment and a retention bonus.

On the commissioned officer side, the career plan should focus more on IT architecture, Information Warfare, and advanced academic education. College-educated officers would start by focusing on both supervisory and architectural roles. The focus would then shift to an advanced degree in the appropriate field of study, followed by a move to an Information Warfighting planning role and the appropriate mid-career professional military education. Cross-flow between related fields such as military intelligence would also be appropriate; however, this should be treated with care, as military intelligence often recruits from liberal arts studies such as history, foreign language, and political science. A cross-flow program should not disrupt either the military intelligence corps or the information warfare corps. Finally, the Joint Forces Staff College should create a dedicated Command and Staff school for information warfighters, with the goal of creating a cadre of information warfighting leaders for all of the services.

Ultimately, the combination of a cadre of commissioned information warfighting leaders, combined with a corps of highly skilled warrant officer information warfighting specialists, would go a long way towards developing the cyber warrior force our nation requires.

Wednesday, July 01, 2015

A Reply to Chris M. Evans' "The NetApp Conundrum"

Storage blogger Chris M. Evans wrote a recent post on LinkedIn entitled "The NetApp Conundrum".

As a NetApp employee, and long-time member of the IT vendor industry, I have provided the following response.

I am having trouble resolving two points you made, Chris. One is that Data ONTAP is old (23 years, to be exact), and that storage architectures only last 20-odd years. The second is that clustered Data ONTAP is not Data ONTAP (the 23-year-old one), but a new and different product created in 2009 by merging some of Spinnaker's technology (acquired in 2003) with some of NetApp's. By my math, that makes clustered Data ONTAP six years old, and by your own calculation, it has 14 years of longevity left.

A few other points:

It is impressive that HDS VSP's SVOS can run on a laptop. I can run a four-node clustered Data ONTAP cluster on mine.

The debate over the HA-pair construct vs. a multi-node HA construct is an engineering and design debate, based on customer requirements, performance, time to market, predictable failure characteristics, and trade-offs--not ideology or perceived elegance. I would note VMAX engines are failover pairs for the same reason we use failover pairs in clustered Data ONTAP. It is also worth noting EMC changed the cache mirroring approach in Isilon, with its Endurant Cache, to a logical cache-pair construct to maximize performance. Similar to clustered Data ONTAP, EMC XtremIO uses a cluster constructed of failover pairs, and Pure Storage uses a failover-pair scale-up architecture similar to NetApp 7-Mode or EMC VNX.

True scale-out, distributed storage is interesting, but it presents challenges in developing fast, reliable, predictable failover. It is also very difficult to implement highly efficient data protection schemes, such as parity, double/triple-parity, and erasure coding, in such an architecture. There is a reason Hadoop clusters, VSAN clusters, and Nutanix clusters use mirroring and triple mirroring for data protection. Nutanix's just-announced erasure coding is only for cold data.
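
The efficiency gap behind that trade-off is easy to quantify. A minimal sketch (the stripe geometries below are generic illustrations, not tied to any specific product):

```python
# Usable-capacity fraction for common data protection schemes, illustrating
# why parity and erasure coding are more space-efficient than mirroring.

def usable_fraction(data_units, protection_units):
    """Fraction of raw capacity left for data in a data+protection stripe."""
    return data_units / (data_units + protection_units)

schemes = {
    "2-way mirroring":        usable_fraction(1, 1),   # 50% usable
    "3-way mirroring":        usable_fraction(1, 2),   # ~33% usable
    "single parity (8+1)":    usable_fraction(8, 1),   # ~89% usable
    "double parity (8+2)":    usable_fraction(8, 2),   # 80% usable
    "erasure coding (10+4)":  usable_fraction(10, 4),  # ~71% usable
}

for name, frac in schemes.items():
    print(f"{name}: {frac:.0%} usable")
```

The catch is that the parity and erasure-coding schemes must read surviving stripe members and recompute data on failure, which is exactly what is hard to make fast and predictable across a widely distributed cluster.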

What is happening today is almost all of the new all-flash array start-ups (XtremIO, Pure, Kaminario, Whiptail/Cisco, and Nimbus Data) and hybrid array start-ups (Nimble, Tintri, and Tegile) use log-structured filesystems, non-volatile memory and write coalescing, writes to free space, and parity RAID algorithms as the basic underlying technologies for their arrays. These concepts are more than 20 years old. NetApp built WAFL and Data ONTAP on these concepts more than 20 years ago because they worked. And they still work today, especially for NAND flash media. That is why NetApp continues to improve and develop Data ONTAP. Because the alternative to Data ONTAP looks an awful lot like Data ONTAP. Don't take my word for it--just look at the recent hybrid and all-flash storage players out there.
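
To make that shared design pattern concrete, here is a toy sketch of write coalescing with writes to free space. The class and method names are my own invention; a real array would buffer in battery-backed NVRAM and perform segment cleaning (garbage collection), both of which this sketch omits:

```python
# Toy log-structured store: random logical writes coalesce in a buffer
# (standing in for NVRAM), then flush together as one sequential segment
# to free space. A map tracks each logical block's latest physical location.

class LogStructuredStore:
    def __init__(self, flush_threshold=4):
        self.buffer = {}          # pending logical_block -> data ("NVRAM")
        self.log = []             # sequential on-media log of segments
        self.block_map = {}       # logical_block -> (segment_index, offset)
        self.flush_threshold = flush_threshold

    def write(self, logical_block, data):
        # Overwrites to the same block coalesce in the buffer; nothing is
        # ever rewritten in place on media.
        self.buffer[logical_block] = data
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One sequential write of the whole segment to free space.
        segment = sorted(self.buffer.items())
        seg_idx = len(self.log)
        self.log.append(segment)
        for offset, (block, _) in enumerate(segment):
            self.block_map[block] = (seg_idx, offset)
        self.buffer.clear()

    def read(self, logical_block):
        if logical_block in self.buffer:          # still in "NVRAM"
            return self.buffer[logical_block]
        seg_idx, offset = self.block_map[logical_block]
        return self.log[seg_idx][offset][1]

store = LogStructuredStore()
store.write(7, "old")
store.write(3, "a")
store.write(7, "new")    # coalesces with the earlier write to block 7
store.write(9, "b")
store.write(12, "c")     # fourth distinct block triggers one sequential flush
```

The point is that scattered logical overwrites become one sequential write of a full segment to free space, behavior that suits both spinning disk and NAND flash.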

Tuesday, October 07, 2014

Thoughts on the HP Split

Too many people equate the PC business side of the current HPQ with Compaq, and the enterprise side of the current HPQ with the old HP. The truth is the old HP was nearly dead as an enterprise computing products company after spinning out Agilent and before acquiring Compaq. A quick look at HP's current technology portfolio shows much of it came in through acquisition. Much original HP technology has faded away. What is worse, much of HP's acquired technology has been neglected and allowed to atrophy.

All of HP's current x86 server technology is former Compaq technology. The HP c-Class Blade System is a Compaq design which was in the works prior to the acquisition. HP's rack-mount x86 server technology is former Compaq. Engineering for HP x86 servers is done at the former Compaq facility in Houston.

Prior to the Compaq acquisition, HP's x86 server business was struggling to compete with IBM and Compaq's x86 server offerings. HP's x86 servers suffered from product quality issues, and little innovation.

HP's enterprise storage portfolio was a joke prior to the Compaq acquisition. Their organic mid-range system was sub-par, and they relied on an OEM relationship with EMC for their high-end solution.

Through the Compaq acquisition HP acquired the most sophisticated mid-range SAN platform of its time, the Enterprise Virtual Array (EVA). This was developed by Digital's StorageWorks division, which was working on the EVA prior to Compaq's acquiring them.

Within a decade, HP failed to innovate the EVA, and had to acquire 3PAR (and pay three times its market price due to a bidding war with Dell), to reinvigorate its mid-range storage line. HP also acquired LeftHand Network's SMB iSCSI systems to address the low end of its portfolio. HP still relies on an OEM relationship for the high-end, but now with HDS.

HP divested itself of the microprocessor business, ceding its HP-WideWord VLIW design to Intel to become the Itanium EPIC processor.

In the enterprise server space, HP's acquisition of Convex Computer gave it the SuperDome system, which originated as Convex's next-generation Exemplar. While HP has iterated and evolved Convex's NUMA interconnect several times, there has been no net-new high-end server design from HP. The SuperDome 2 simply marries the "Convex Exemplar++" interconnect with the Compaq c-Class I/O backplane. And the coming "x86 SuperDome" will be nothing more than a niche system.

In operating systems, other than its "Ignite" bare-metal provisioning technology, HP-UX has lagged technologically behind Solaris and AIX for two decades now. HP's "innovations" were to OEM Veritas filesystem and volume management technology.

In automation, HP acquired OpsWare, the best technology out there in 2007. But now all of the oxygen in data center automation is being sucked up by either VMware or OpenStack.

HP had an excellent managed services organization (it used to be headquartered here in Atlanta), but this organization was subsumed into whatever is left of the former EDS post acquisition.

So the only organic components of HP I see still having value are the 30% of HP Services which was not part of the EDS acquisition and HP's printing division. Hewlett-Packard Enterprise is little more than a publicly traded private equity fund, a holding company of various technology brands (Tandem/DEC/Compaq/3PAR/OpsWare/EDS), in the mold of CA Technologies. In that way, they are similar to IBM, which also has acquired and failed to maintain many technologies. The difference is, IBM's organic enterprise technology (Mainframe, POWER, DB2, etc.) is aggressively maintained.

I honestly think the HP PC/Printer spin-out will never happen as envisioned. Instead, HP will likely sell off HP PC/Printer to a private equity company that wants the printer division as a cash-flow business and sees the PC division as something it has to buy in order to get the printer business. The private equity firm will then likely sell the PC business to an ODM seeking a branded entry.

Sunday, September 11, 2011

Where was I?

Where was I? It seems everyone is answering this question.

I worked for Sun Microsystems at the time and was in the King and Queen building complex in Atlanta in a sales training class. There were no TVs, so we only got the news via cell phones and the Internet. But the Internet had ground to a halt. As soon as it happened, I knew it was Bin Laden. I was convinced Bin Laden (not Iranian Hezbollah) was behind the Khobar Towers bombing, which killed five of my 71st Rescue Squadron mates in 1996. I felt it odd to be explaining Bin Laden (who I described that day as the closest thing to a James Bond super villain who actually existed on this earth), continuance of government, and SCATANA to my coworkers. It was like I was in on everything that was happening and everyone else was blind. Somewhere in there I called my Reserve unit in Alabama and let them know that if they needed me I could be ready and down there in four hours.

At some point someone said Sun's New York sales office was in the World Trade Center (floors 25 and 26 of the South Tower, the second tower hit, and the first to collapse). That realization changed the dynamic of our class. Within about an hour we got word the entire Sun office evacuated after the North Tower was hit, and everyone in the office made it out safely. Crazily enough, we pressed on with our class. We wandered like zombies to HoneyBaked Ham for lunch, came back, and I presented my portion of the training class.

That evening, after a couple of Jacks on the rocks at Joey D's Oak Room with my colleagues, I drove home. On the drive, I called a former 71ster (Darryle Grimes) stationed at the Pentagon. He had been in the Pentagon during the attack, but was far enough away to not actually feel the impact. He told me the Pentagon had gone to 24-hour operations and he had to be back there in about an hour.

The other thing I remember is that I was not able to sleep that night. That is one of only two nights in my adult life I was not able to sleep at all.

Monday, July 26, 2010

Politicizing Everything

The July 12, 2010 letter from five of the members of the Columbia Accident Investigation Board to Senator Barbara Mikulski is a piece of political, not scientific work, and can only be seen as an attempt to offer a fig leaf to an otherwise naked policy. I do not believe these five people just spontaneously decided to write this letter without being solicited to do so. First, it is a letter from only five of the thirteen CAIB members. Second, those members are claiming to speak on behalf of the CAIB:
"We would be glad to answer any questions that you or other members of Congress may have concerning the CAIB report and its application to today’s space policy issues."

Third, one of the five authors, Sheila Widnall, was a Democratic political appointee, and three others, Steven Wallace, Douglas Osheroff, and John Logsdon, were all Obama campaign contributors. Without knowing the opinions of the other eight members of the CAIB, these are just the opinions of individuals, and more accurately, potentially biased individuals. Fourth, the letter misrepresents some of the conclusions of the CAIB, specifically the following:
"The design of the system should give overriding priority to crew safety, rather than trade safety against other performance criteria, such as low cost and reusability, or against advanced space operation capabilities other than crew transfer."
"This conclusion implies that whatever design NASA chooses should become the primary means for taking people to and from the International Space Station, not just a complement to the Space Shuttle. And it follows from the same conclusion that there is urgency in choosing that design, after serious review of a "concept of operations" for human space flight, and bringing it into operation as soon as possible. This is likely to require a significant commitment of resources over the next several years. The nation must not shy from making that commitment."

Abandoning Ares I and Orion is being done for cost reasons, not for safety reasons. The primary means of taking people to the ISS will be the Russian Soyuz. Abandoning Ares I and Orion abandons urgency and does not bring a system into operation as soon as possible. It specifically abandons the significant commitment of resources over the next several years. It is shying away from the needed commitment.

Furthermore, the letter misrepresents the Ares I when it compares it to current EELV boosters. The first stage of the Ares I is based on the man-rated Space Shuttle SRB, of which 262 have flown successfully, a fact which escapes the "CAIB Five", perhaps because it makes the 34 EELV launches pale in comparison. The J-2X Ares I upper-stage engine is based on the man-rated J-2 engine, which had a 96% success rate and, despite a handful of engine failures, a 100% mission success record.

The Orion spacecraft is simply a scale-up of the Apollo Command Module. Scaling up an existing design is a proven cost- and risk-mitigation strategy, and was the same strategy used to develop the highly successful Gemini spacecraft. The Gemini capsule was based on an enlarged Mercury capsule, which allowed engineers to focus on the advanced features of Gemini rather than the capsule itself. This is no different from Orion. Much of the original aerodynamic work done on the Apollo Command Module still applies, so it means a safer, quicker, less costly solution.

Additionally the CAIB noted:
"It is the view of the Board that the previous attempts to develop a replacement vehicle for the aging Shuttle represent a failure of national leadership. The cause of the failure was continuing to expect major technological advances in that vehicle."

Ares I / Orion, by leveraging existing boosters, engines, and spacecraft designs, avoids the expectation of technological advances. Even the decision to move to a splashdown water landing was done to reduce risk and cost.
"With the amount of risk inherent in the Space Shuttle, the first step should be to reach an agreement that the overriding mission of the replacement system is to move humans safely and reliably into and out of Earth orbit. To demand more would be to fall into the same trap as all previous, unsuccessful, efforts."

While the Constellation project encompassed more than simply transporting astronauts to orbit, the Ares I / Orion system was focused only on this. The only additional demand was that a future uprated version of Orion, carrying four astronauts rather than six astronauts, be capable of flying to lunar orbit, be parked unmanned in orbit, and later return to Earth. Most of these capabilities would impact Orion's service module, not the manned capsule.
"Continued U.S. leadership in space is an important national objective. That leadership depends on a willingness to pay the costs of achieving it."

It is clear President Obama does not have the will desired by the CAIB, and Obama's decision represents another failure of national leadership. It also seems the "CAIB Five" no longer agree with the importance of U.S. leadership in space. This letter can only be seen as a dissent from Chapter 9 of the original CAIB report. The authors should be vigorously challenged not only on their statements in this letter, but also on their support of the original CAIB report's conclusions.

Wednesday, December 23, 2009

x86 Rises, Part 4: The emergence of Linux as a viable datacenter OS

Several years ago I drafted a white paper I called "x86 Everywhere". I started it in the fall of 2004, let it sit, and updated it in April 2005. It remains unfinished, but with the release today of Intel's Nehalem processor, I took a look at it again. Here it is:

Three trends could allow what I call "x86 Everywhere" to happen.

The third trend necessary for "x86 Everywhere" is the possibility of the emergence of Linux as a viable datacenter OS.

This seems less likely than high-end x86 servers at this point, but it is certainly possible in several years' time, if the efforts of the Datacenter Linux project bear fruit. Windows on 32-bit x86 systems did not penetrate the datacenter, in part because the hardware was neither 64-bit nor scalable, and customers did not trust Windows with their critical data.

Today, the hardware is 64-bit, AMD Opteron scales to eight sockets today, Intel is pursuing efforts that will likely address the scalability limitations of Xeon, both AMD and Intel are aggressively pursuing multicore chip strategies, and customers trust Linux in places they formerly only trusted UNIX. The result is a very real, industry-standard ABI/ISA platform combination that scales from embedded systems, to an inexpensive developer platform (the PC), to midrange enterprise datacenter computers. This could be enough to cause a tipping point, creating a fundamental driver for the Datacenter Linux initiative. Such a change in the primary enterprise compute platform from RISC/UNIX to x86/Linux would likely be highly disruptive to the industry, and would rival the move of commercial computing in the early 1990s from proprietary minicomputers to SMP RISC/UNIX servers. Once established in the datacenter as a viable midrange enterprise platform, as with SPARC/Solaris, it becomes a straightforward scaling exercise for x86/Linux to establish itself as a high-end platform.

Finally, while not trends driving large-scale x86 adoption, there are other developments to consider. Intel has a virtualization technology, called Vanderpool on desktops and Silvervale on servers, that will help provide partitioning on its systems. AMD has also stated it intends to offer a virtualization layer, called Pacifica. AMD also plans to improve the RAS features of Opteron, and it is likely Intel will do the same with Xeon, using features it already offers on Itanium. Both of these key technology areas will improve adoption of x86 servers in the enterprise market.

How will this play out?

First, Dell's strategy is to enter only established markets, and to do so with a superior fulfillment system. For markets that are not at that point, Dell has used partnerships, such as its existing partnership with EMC. Dell also partners with Unisys to resell Unisys' 8-way Intel Xeon systems. Therefore the most likely path for Dell is to continue the status quo, assuming four-socket-and-below x86 systems represent the lion's share of the server market. If there is a need to address the greater-than-eight-socket x86 server market, Dell could expand the Unisys agreement beyond 8-way. If Dell expands into the Opteron market and needs to address the greater-than-eight-socket x86 server market, it could partner with Newisys (also an Austin, TX company).

IBM is already a player with its Enterprise X Architecture (EXA) for Intel systems. However, IBM has close ties to Newisys (the founder is ex-IBM, and the Horus chipset is based on principles similar to EXA), IBM sold its North Carolina-based PC server manufacturing plants to Sanmina-SCI, IBM has a strong presence in Austin, TX, Newisys' home, and IBM has strategic agreements with AMD around CPU fabrication technology. It is possible IBM could offer the Newisys system in addition to its own EXA systems.

HP is committed to x86 in the four-socket-and-below space, and is a strong backer of Linux. If the x86/Linux platform gains momentum, it would simultaneously weaken Itanium sales. This would require a strategy change for HP, but such a change would be necessary to remain a viable datacenter systems vendor. To address a short-term requirement, HP could OEM a solution, as it did with NEC's high-end Itanium system before HP adapted its Superdome system to accept Itanium processors. Here the most likely partner would be Newisys, with Texas roots similar to those of Compaq, whose former Texas offices serve as headquarters for HP's x86 division in the post-merger HP. Longer term, HP's relationship with Intel could produce a high-end x86 system, especially given the common chipset Intel promises for Itanium and Xeon. In fact, HP's “Arches” system, the follow-on to Superdome, could easily accept future Xeon processors, given the common Itanium chipset. HP could also acquire a solution, but the most likely acquisition in this case would be Unisys. A Unisys acquisition would also be defensive if Unisys had, or was considering, a significant Dell agreement.

Sun has some of the closest ties to AMD, and Sun has the technology to build large systems. Sun already plans eight-socket Opteron systems. If a significant market for larger-than-eight-socket x86 servers emerges, Sun will have to decide how to address it. However, balancing the high-end SPARC and x86 businesses would be a challenge for Sun. If the scalable x86 market shows great promise, the best technical solution for Sun could be an even tighter AMD partnership, with technology sharing to allow common systems to be built with either AMD or SPARC processors. The potential for Sun to leverage common technologies such as coherent HyperTransport for SPARC systems as well as Opteron could offer considerable economies of scale. This could make the most sense in the post-APL timeframe. A secondary solution, which also offers a near-term option, would be an OEM deal with Newisys. Sun has relationships with Sanmina-SCI: it OEMs Newisys' two-socket and four-socket Opteron servers as the V20z and V40z, and it contracts with Sanmina-SCI to manufacture low-end UltraSPARC servers. A deal with Newisys around higher-end systems would also serve to more strongly establish Sun in the Texas information technology community, clearly one of the top IT centers in the world, and the most important in the x86 business.

AMD's best interests are served if it does not depend on other vendors' chipsets for scalability. Therefore, offering a higher-end Opteron processor with more coherent HyperTransport links, allowing greater glueless SMP scalability, is the most likely path for AMD.

Similarly, Intel's best interests are served if it can offer everything needed to build a scalable server directly to the distributor. This is the shift needed to move high-end servers into the commodity space, and allow Dell to enter the market with superior logistics.

Based on all of this, a two-phased industry approach is likely. The first being server-vendor based proprietary scalable solutions (such as IBM's EXA, Unisys' CMP, and Newisys' Horus), followed by processor vendor solutions based on in-chip features.

Who is threatened most by x86 Everywhere? One could say Sun, who relies on SPARC systems for the vast majority of its revenues. However, if x86 Everywhere happens, SPARC's installed base is still very large, and will not be replaced overnight. The bigger victim is likely IBM, who is trying to repeat Sun's SPARC success with its POWER architecture. In fact, assuming a Sun/AMD partnership could allow Sun to build SPARC or Opteron systems from common technology (i.e., memory controllers and memory subsystems, coherent HyperTransport MP interconnects, and common HyperTransport I/O bridges), SPARC systems could be continued as long as customer demand supported the design of SPARC processors.

The big loser in this appears to be Newisys. Sanmina-SCI's business model is two-fold: contract manufacturing and OEM manufacturing. Newisys' low-end systems fit well in the OEM model, and Sanmina-SCI has had success selling these systems to its OEM partners. However, the high-end Horus systems do not fit the OEM model. Several have tried OEMing datacenter servers, and few have succeeded. In the late 1990s, Unisys OEMed its x86 CMP system to both Dell and Compaq. The Dell OEM lasted only months; Dell realized a 32-way datacenter server did not fit its direct business model. Compaq's deal lasted a little longer, but it too abandoned the OEM arrangement. Other OEM deals include HP's OEMing of NEC's first-generation Itanium system, which delivered few sales. The most successful OEM deal of datacenter servers appears to be Bull Worldwide's OEMing of IBM's pSeries servers, but this arrangement created significant channel conflict for IBM in Europe, and seems to always be in danger whenever IBM announces a new generation of RISC/UNIX servers. Fujitsu's deal with Siemens is not considered an OEM deal here because it is really more of a partnership. The Fujitsu-Siemens model is worth considering by Newisys, as it is a successful model of a business relationship between a high-end server manufacturer and an IT solutions provider. The most likely target customers for Newisys' Horus system are IT integrators such as EDS. IBM has a high-end x86 server in its product portfolio; EDS does not. IT integrators can provide the professional services required in selling such systems. Also, because this would be an OEM arrangement, there is the opportunity for greater margins and services for the IT integrator, compared to deals which involve simply reselling a server vendor's product.

x86 Rises, Part 3: x86 Grows in Performance and Scalability

x86 Rises, Part 2: Decreasing Value of Big UNIX

x86 Rises, Part 1: The Background