Several years ago I drafted a white paper I called "x86 Everywhere". I started it in the fall of 2004, let it sit, and updated it in April 2005. It remains unfinished, but with the release today of Intel's Nehalem processor, I took a look at it again. Here it is:
Three trends could allow what I call "x86 Everywhere" to happen.
The third trend necessary for "x86 Everywhere" is the emergence of Linux as a viable datacenter OS.
This seems less likely than high-end x86 servers at this point, but it is certainly possible in several years' time, if the efforts of the Datacenter Linux project bear fruit. Windows on 32-bit x86 systems did not penetrate the datacenter, in part because the hardware was not 64-bit, in part because the hardware did not scale, and in part because customers did not trust Windows with their critical data.
Today, the hardware is 64-bit, AMD Opteron scales to eight sockets, Intel is pursuing efforts that will likely address the scalability limitations of Xeon, both AMD and Intel are aggressively pursuing multicore chip strategies, and customers trust Linux in places they formerly trusted only UNIX. The result is a very real, industry-standard ABI/ISA platform combination that scales from embedded systems, to an inexpensive developer platform (the PC), to midrange enterprise datacenter computers. This could be enough to cause a tipping point, creating a fundamental driver for the Datacenter Linux initiative. Such a change in the primary enterprise compute platform from RISC/UNIX to x86/Linux would likely be highly disruptive to the industry, rivaling the move of commercial computing in the early 1990s from proprietary minicomputers to SMP RISC/UNIX servers. Once established in the datacenter as a viable midrange enterprise platform, x86/Linux faces a straightforward scaling exercise to establish itself as a high-end platform, just as SPARC/Solaris did.
Finally, while not trends driving large-scale x86 adoption, there are other developments to consider. Intel has a virtualization technology, called Vanderpool on desktops and Silvervale on servers, that will help provide partitioning on its systems. AMD has also stated it intends to offer a virtualization layer, called Pacifica. AMD has also stated it plans to improve the RAS features of Opteron, and it is likely Intel will do the same with Xeon, using features it already offers on Itanium. Both of these key technology areas will improve adoption of x86 servers in the enterprise market.
How will this play out?
First, Dell's strategy is to enter only established markets, and to do so with a superior fulfillment system. For markets that are not at that point, Dell has used partnerships, such as its existing partnership with EMC. Dell also partners with Unisys to resell Unisys' 8-way Intel Xeon systems. Therefore the most likely path for Dell is to continue the status quo, assuming four-socket and smaller x86 systems represent the lion's share of the server market. If there is a need to address the greater-than-eight-socket x86 server market, Dell could expand the Unisys agreement beyond 8-way. If Dell expands into the Opteron market and needs to address the greater-than-eight-socket x86 server market, it could partner with Newisys (also an Austin, TX company).
IBM is already a player with its Enterprise X Architecture (EXA) for Intel systems. However, IBM has close ties to Newisys (its founder is ex-IBM, and the Horus chipset is based on principles similar to EXA), IBM sold its North Carolina-based PC server manufacturing plants to Sanmina-SCI, IBM has a strong presence in Austin, TX, Newisys' home, and IBM has strategic agreements with AMD around CPU fabrication technology. It is possible IBM could offer the Newisys system in addition to its own EXA systems.
HP is committed to x86 in the four-socket-and-below space, and is a strong backer of Linux. If the x86/Linux platform gains momentum, it would simultaneously weaken Itanium sales. This would require a strategy change for HP, but such a change would be necessary to remain a viable datacenter systems vendor. To address this, HP could OEM a solution if needed to address a short-term requirement. HP did this with NEC's high-end Itanium system before HP adapted its Superdome system to accept Itanium processors. Here the most likely partner would be Newisys, with Texas roots similar to those of Compaq, whose former Texas offices serve as headquarters for HP's x86 division in the post-merger HP. Longer term, HP's relationship with Intel could produce a high-end x86 system, especially given the common chipset Intel promises for Itanium and Xeon. In fact, HP's “Arches” system, the follow-on to Superdome, could easily accept future Xeon processors, given the common Itanium chipset. HP could also acquire a solution, but the most likely acquisition in this case would be Unisys. A Unisys acquisition would also be defensive if Unisys had, or was considering, a significant Dell agreement.
Sun has some of the closest ties to AMD, and Sun has the technology to build large systems. Sun already plans eight-socket Opteron systems. If a significant market for larger-than-eight-socket x86 servers emerges, Sun will have to decide how to address that market. However, balancing the high-end SPARC and x86 businesses would be a challenge for Sun. If the scalable x86 market shows great promise, the best technical solution for Sun could be an even tighter AMD partnership, with technology sharing to allow common systems to be built with either AMD or SPARC processors. The potential for Sun to leverage common technologies such as coherent Hypertransport for SPARC systems as well as Opteron could offer considerable economies of scale. This could make the most sense in the post-APL timeframe. A secondary solution, which also offers a near-term option, would be an OEM deal with Newisys. Sun has relationships with Sanmina-SCI, OEMing Newisys' two-socket and four-socket Opteron servers as the V20z and V40z, and Sun contracts with Sanmina-SCI to manufacture low-end UltraSPARC servers. A deal with Newisys around higher-end systems would also serve to more strongly establish Sun in the Texas information technology community, clearly one of the top IT centers in the world, and the most important in the x86 business.
AMD's best interests are served if it does not depend on other vendors' chipsets for scalability. Therefore, offering a higher-end Opteron processor with more coherent Hypertransport links, allowing greater glueless SMP scalability, is the most likely path for AMD.
Similarly, Intel's best interests are served if it can offer everything needed to build a scalable server directly to the distributor. This is the shift needed to move high-end servers into the commodity space, and allow Dell to enter the market with superior logistics.
Based on all of this, a two-phase industry approach is likely: first, server-vendor proprietary scalable solutions (such as IBM's EXA, Unisys' CMP, and Newisys' Horus), followed by processor-vendor solutions based on in-chip features.
Who is threatened most by x86 Everywhere? One could say Sun, which relies on SPARC systems for the vast majority of its revenues. However, even if x86 Everywhere happens, SPARC's installed base is still very large and will not be replaced overnight. The bigger victim is likely IBM, which is trying to repeat Sun's SPARC success with its POWER architecture. In fact, assuming a Sun/AMD partnership could allow Sun to build SPARC or Opteron systems from common technology (i.e., memory controllers and memory subsystems, coherent Hypertransport MP interconnects, and common Hypertransport I/O bridges), SPARC systems could be continued as long as customer demand supported the design of SPARC processors.
The big loser in this appears to be Newisys. Sanmina-SCI's business model is two-fold: contract manufacturing and OEM manufacturing. Newisys' low-end systems fit well in the OEM model, and Sanmina-SCI has had success selling these systems to its OEM partners. However, the high-end Horus systems do not fit the OEM model. Several have tried OEMing datacenter servers, and few have succeeded. In the late 1990s, Unisys OEMed its x86 CMP system to both Dell and Compaq. The Dell OEM lasted only months; Dell realized a 32-way datacenter server did not fit its direct business model. Compaq's deal lasted a little longer, but it too abandoned the OEM arrangement. Other OEM deals include HP's OEMing of NEC's first-generation Itanium system, which delivered few sales. The most successful OEM deal for datacenter servers appears to be Bull Worldwide's OEMing of IBM's pSeries servers, but this arrangement created significant channel conflict for IBM in Europe, and seems always to be in danger whenever IBM announces a new generation of RISC/UNIX servers. Fujitsu's deal with Siemens is not considered an OEM deal here because it is really more of a partnership. The Fujitsu-Siemens model is worth considering by Newisys, as it is a successful model of a business relationship between a high-end server manufacturer and an IT solutions provider. The most likely target customers for Newisys' Horus system are IT integrators such as EDS. IBM has a high-end x86 server in its product portfolio; EDS does not. IT integrators can provide the professional services required in selling such systems. Also, because this would be an OEM arrangement, there is the opportunity for greater margins and services for the IT integrator, compared to deals that involve simply reselling a server vendor's product.
x86 Rises, Part 3: x86 Grows in Performance and Scalability
x86 Rises, Part 2: Decreasing Value of Big UNIX
x86 Rises, Part 1: The Background
Wednesday, December 23, 2009
Friday, October 02, 2009
x86 Rises, Part 3: x86 Grows in Performance and Scalability
Several years ago I drafted a white paper I called "x86 Everywhere". I started it in the fall of 2004, let it sit, and updated it in April 2005. It remains unfinished, but with the release today of Intel's Nehalem processor, I took a look at it again. Here it is:
Three trends could allow what I call "x86 Everywhere" to happen.
The second trend is the prospect of several vendors offering scalable 64-bit x86 systems large enough to meet most customers' workloads.
The desktop megahertz wars of the late 1990s and early 2000s between Intel and AMD drove x86 performance at a rate exceeding Moore's law. This directly benefited Intel x86 server performance, making x86 servers suitable for larger workloads. At the same time, enterprise applications were being rearchitected as multi-tier web-based applications, requiring deployment of additional web and application servers. RISC still had advantages over x86 in this environment, as running Microsoft Windows on x86 servers required the purchase of client access licenses (CALs) for each discrete user. This was extremely expensive for emerging self-service web-based ERP and CRM applications, and outright impossible for B2C e-commerce applications. Enter Linux. In the late 1990s, Linux became established as an entry server operating system which, unlike Microsoft Windows, did not require per-user CALs. Linux quickly became the web server OS of choice. The result was a positive feedback loop. Application server ISVs aggressively ported their J2EE appservers to Linux and improved their clustering so their appservers would work well on clusters of low-cost entry x86 servers. ERP vendors quickly followed, porting their application tiers to Linux on x86. The low purchase cost of the Linux/x86 architecture was driven home by the dot-com bust and worldwide recession of the early 2000s.
At the same time as the desktop megahertz war, the smaller x86 chip manufacturers each tried to establish their products in a niche. Via acquired Cyrix and focused on the “system on a chip” market for very low-cost desktops. Transmeta focused on very low-power chips for low-end laptops and embedded markets. AMD, long a player in the budget desktop market, decided to focus on the server market by designing a new x86 architecture, code-named “Hammer”, which addressed the weaknesses of Intel's existing Xeon x86 server processor, primarily the latter's lack of 64-bit memory addressing. The release of Hammer, branded as Opteron, forced Intel to follow suit with its own 64-bit x86 technology, long rumored under the codename “Yamhill” and branded as EM64T.
The emergence of a truly competitive x86 server processor marketplace is driving new innovation in x86 processors, as AMD tries to stay one step ahead of Intel, and as Intel tries to leapfrog AMD. Dual-core processors, improved power management, virtualization technologies, and other improvements are announced on a regular basis.
After the emergence of 64-bit x86 technology in 2004, dual-core x86 processors followed in 2005. These two technologies have strong synergies: 64-bit addressing increases the size of the workload that can run on an x86 server, and dual cores increase the size of the server that can be built with x86 processors.
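As a back-of-the-envelope illustration of that synergy (the address widths, socket counts, and core counts below are illustrative assumptions of the era, not any vendor's specification):

```python
# Back-of-the-envelope arithmetic for the 64-bit/dual-core synergy.
# All figures are illustrative assumptions, not vendor specifications.

KIB = 1024

# 32-bit x86 can directly address 2**32 bytes; 64-bit x86 lifts that
# ceiling (early 64-bit x86 parts exposed roughly 40 physical address bits).
addr_32bit = 2**32   # 4 GiB
addr_40bit = 2**40   # 1 TiB

print(f"32-bit address limit: {addr_32bit / KIB**3:.0f} GiB")
print(f"40-bit address limit: {addr_40bit / KIB**4:.0f} TiB")

# Dual-core doubles compute per socket, so the same chassis doubles in size:
for sockets in (4, 8):
    for cores in (1, 2):
        print(f"{sockets} sockets x {cores} core(s)/socket = {sockets * cores} cores")
```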
With dual-core 64-bit x86 processors now shipping, and four-core 64-bit x86 processors possible in two to three years, four- to eight-socket servers may provide the capacity required for most customers' workloads. Beyond that, workloads requiring large, single-system-image servers (HPTC, large data warehouses, etc.) may be relegated to a niche market. Ordinarily, such a niche market could still justify large, scalable RISC/UNIX systems. But the market for large, single-system-image servers is not limited to RISC/UNIX. For some time, the scalable x86 market has been targeted by some system vendors.
In the mid-1990s, Sequent, with its NUMA-Q system, was one of the first vendors of large, scalable x86 systems. Data General offered a very similar NUMA system during the same period. Both of these systems provided very limited performance because of their architectures. Data General's system failed to gain significant market share, and was end-of-lifed not long after EMC acquired Data General. Sequent targeted decision support and data warehouse workloads with its NUMA-Q system and had some success. Sequent was acquired by IBM, and IBM released a more advanced x86 NUMA system offering greater node-to-node bandwidth and large L4 caches to better manage inter-node latencies. In 2005 IBM released its third generation of x86 NUMA systems.
In the late 1990s, Unisys built a large, scalable SMP x86 system using a technology it calls cellular multiprocessing, or CMP. This technology was derived from Unisys' ClearPath mainframe systems. In fact, Unisys offers a version of its x86 CMP system that runs the ClearPath mainframe OS ported to the x86 architecture. Despite the mainframe heritage and mainframe variant of Unisys' x86 CMP systems, sales have not been strong. These systems were limited by the lack of scalability of Intel's x86 architecture, as well as x86's lack of 64-bit memory addressing. Unisys now offers a second-generation CMP design, with simpler eight-socket entry systems as well as large 32-socket systems.
Both IBM and Unisys offer 32-socket Intel Xeon systems, but both of these systems continue to be limited by the inherent lack of scalability in Intel's Xeon architecture.
The limits of x86 scalability changed with AMD's Opteron, the first scalable x86 processor architecture. By virtue of its high-performance, coherent Hypertransport MP interconnect, Opteron is scalable in SMP design. Because of its 64-bit memory addressing, Opteron is scalable in memory capacity, with memory addressing balanced against processor performance. Four- to eight-socket x86 servers are no longer crippled by saturated SMP buses or inadequate memory capacity. Intel has followed suit with 64-bit memory addressing for Xeon, and a unique dual front-side bus (FSB). But the dual FSB, while providing temporary relief for Xeon's saturated SMP bus, is actually designed for the soon-to-be-released dual-core Xeon processors. Dual-core Xeons will likely once again saturate the SMP buses. Better SMP interconnects will be required for efficient scaling of Xeon systems to four sockets and above.
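A rough sketch of why a shared front-side bus saturates as cores multiply while point-to-point links do not; the bandwidth figures here are hypothetical round numbers, not product specifications:

```python
# Hypothetical comparison of a shared front-side bus vs. point-to-point links.
# Bandwidth numbers are made-up round figures, not product specifications.

def fsb_bw_per_core(bus_gbps: float, sockets: int, cores_per_socket: int) -> float:
    """A shared bus divides a fixed bandwidth across every core attached to it."""
    return bus_gbps / (sockets * cores_per_socket)

def p2p_bw_per_core(link_gbps: float, links: int, cores_per_socket: int) -> float:
    """Point-to-point links add bandwidth with every socket added."""
    return (link_gbps * links) / cores_per_socket

for cores in (1, 2):
    fsb = fsb_bw_per_core(6.4, sockets=4, cores_per_socket=cores)
    p2p = p2p_bw_per_core(4.0, links=3, cores_per_socket=cores)
    print(f"{cores} core(s)/socket: shared bus {fsb:.1f} GB/s/core vs. "
          f"point-to-point {p2p:.1f} GB/s/core")
```

Moving from one core to two per socket halves the shared bus's per-core bandwidth, while the point-to-point design keeps adding aggregate bandwidth with every socket.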
Over the next several years, x86 systems with eight sockets and more will become more prevalent. Newisys, a division of Sanmina-SCI, a major OEM manufacturer of AMD Opteron systems, is planning a 32-way Opteron chipset called Horus. Intel has promised that future Itanium and Xeon processors will support a common chipset, allowing a next-generation scalable Itanium server architecture to also serve as a scalable Xeon platform. This means the traditional large scalable Itanium system vendors, HP, SGI, and NEC, could enter the large scalable Xeon system market. The other possibilities are a higher-end AMD Opteron chip with more Hypertransport links, allowing more scalable glueless MP topologies similar to the Compaq Alpha EV7's architecture, or Intel introducing a scalable glueless chip-to-chip interconnect. It is important to note that Intel has access to the design of the EV7 interconnect and now employs the developers of the EV7's interconnect, through an agreement made with Compaq before Compaq was acquired by HP. Regardless, increased SMP scalability of x86 servers seems likely in the next few years.
Related Posts:
x86 Rises, Part 2: Decreasing Value of Big UNIX
x86 Rises, Part 1: The Background
Tuesday, June 16, 2009
x86 Rises, Part 2: Decreasing Value of Big UNIX
Several years ago I drafted a white paper I called "x86 Everywhere". I started it in the fall of 2004, let it sit, and updated it in April 2005. It remains unfinished, but with the release today of Intel's Nehalem processor, I took a look at it again. Here is Part 2:
Three trends could allow what I call "x86 Everywhere" to happen.
The first trend is the decrease in value of large, partitionable, RISC/UNIX systems.
All major commercial RISC/UNIX systems vendors offer large systems that can support large workloads, or can be partitioned to support many medium-sized workloads. The primary reasons for deploying a medium-sized workload in a partition on a large server are expected growth beyond the capacity of typical midrange servers, higher system resource utilization, system management efficiencies of server consolidation, and customer politics and preferences. Each of these reasons is coming under assault by the advancement of Moore's law, and as a result, the value proposition of large, partitionable datacenter servers is declining.
The performance improvements brought about by Moore's law over the last several years have outpaced customer workload growth, allowing midrange systems to handle the expected growth of most customer workloads. In addition, the price of midrange RISC/UNIX systems has declined significantly over the last several years, starting with Sun's UltraSPARC III-based V880, whose price point was then met by IBM with the POWER4-based p650, and HP's strategy of offering standard configurations of PA-RISC and Itanium midrange systems at very aggressive prices. Moore's law has also caused system utilization to drop, as today's processors are so powerful that typical workloads leave them underutilized.
Traditional physical partitioning, such as Sun's Dynamic System Domains and HP's Node Partitions (nPars), does not provide adequate granularity given the performance of today's processors. The result is the rise of software-based partitioning, logical partitioning, and virtual machine technology, which are portable to smaller, less expensive RISC/UNIX systems. Purely software-based partitioning technology is even portable to other ISAs, such as x86 platforms. For example, virtual machine technology is primarily being used on x86 systems via VMware's products. These shifts in server partitioning technology are also decreasing the value proposition of large and midrange RISC/UNIX servers.
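A toy model of the granularity argument, with hypothetical cell sizes and workload demands, shows how much capacity physical partitioning strands once processors outgrow typical workloads:

```python
# Toy model of stranded capacity under physical vs. logical partitioning.
# Cell size and workload demands are hypothetical illustrative numbers.
import math

CELL_CPUS = 4                            # assume a hardware cell is 4 CPUs
demands = [0.5, 1.2, 2.0, 3.5, 6.0]      # CPU demand of five workloads

# Physical partitioning must round every workload up to whole cells;
# logical/software partitioning can track demand at (sub-)CPU granularity.
physical = sum(math.ceil(d / CELL_CPUS) * CELL_CPUS for d in demands)
logical = sum(demands)

print(f"Aggregate demand:      {logical:.1f} CPUs")
print(f"Physical partitioning: {physical} CPUs allocated "
      f"({physical - logical:.1f} CPUs stranded)")
```

With these numbers, physical partitioning allocates 24 CPUs to serve 13.2 CPUs of demand; the faster processors get, the smaller each workload's demand and the worse the rounding penalty.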
The recent industry emphasis on provisioning and system management solutions, along with policy-based computing solutions for managing large numbers of discrete servers, has yet to significantly change the industry; however, improvements in this area have increased the system management efficiency of distributed servers. This, along with some of the inherent provisioning and management efficiencies of software-based partitioning technologies (shared network and disk resources), has resulted in a decrease in the relative value of large partitionable systems.
One should note that this decrease in value is real; it is not simply a customer perception. First, physical partitioning is simply too expensive a way to achieve partitioning in a server. Markets define prices, not vendors. Costs define margins, not prices. Given two otherwise equivalent servers, one using physical partitioning and the other using logical partitioning, the logically partitionable server will offer the vendor greater margins. Similarly, a server designed with physical partitioning offering the same granularity as logical partitioning would likely be abandoned for having too high a cost. Second, customers really are moving workloads from previous-generation large servers to smaller servers of the current generation, rather than to partitions on larger current-generation servers. In 1998 a Sun customer might consider paying the 50% price premium of an E10K over multiple E4500s; the value that premium represented, primarily in growth capacity, justified it. Today the premium an E20K carries over multiple V890s or V490s is so much higher (around 150%) that few customers can justify it.
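To make that arithmetic concrete, here is the premium calculation with made-up list prices; the dollar figures are hypothetical and chosen only to reproduce the percentages above:

```python
# Premium of one large server over equivalent aggregate small-server capacity.
# Dollar figures are hypothetical, chosen to match the percentages in the text.

def premium_pct(big_price: float, small_price: float, small_count: int) -> float:
    """Percentage premium of a big server over N small servers of equal capacity."""
    aggregate = small_price * small_count
    return (big_price - aggregate) / aggregate * 100

# 1998: an E10K priced at a 50% premium over several E4500s.
print(f"E10K vs. 4x E4500: {premium_pct(3_000_000, 500_000, 4):.0f}% premium")

# Today: an E20K at roughly a 150% premium over several V890s/V490s.
print(f"E20K vs. 4x V890:  {premium_pct(2_500_000, 250_000, 4):.0f}% premium")
```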
The effect of this is a leveling of the playing field between RISC/UNIX servers and x86 servers. Midrange RISC/UNIX servers are becoming simpler and cheaper. Midrange x86 servers have become more robust. RISC ISAs versus the x86 ISA is becoming a "Coke versus Pepsi" decision: a flavor choice.
Related Post:
x86 Rises, Part 1: The Background
Monday, June 15, 2009
Three types of people
I have come to the conclusion that there are three types of people in the world:
Process people.
Idea people.
People people.
Tuesday, March 31, 2009
x86 Rises, Part 1, The Background
Several years ago I drafted a white paper I called "x86 Everywhere". I started it in the fall of 2004, let it sit, and updated it in April 2005. It remains unfinished, but with the release today of Intel's Nehalem processor, I took a look at it again. Here it is:
What is “x86 Everywhere”? x86 Everywhere is the idea that the dominance the x86 instruction set architecture (ISA) currently enjoys in the desktop and entry server markets will expand into the midrange and high-end datacenter server markets, eventually reaching a tipping point and displacing most RISC/UNIX platforms. Over time, the x86 ISA establishes a monopoly in the datacenter similar to its current monopoly on the desktop.
The drivers for such a scenario are purely economic, but this does not refer to server acquisition costs. Instead it refers to the economic advantages a single, dominant ISA would bring to system vendors and independent software vendors. This is not the first time such a scenario has been proposed. In the early and mid-1990s, when Microsoft announced Windows NT as a portable, multiplatform operating system for both RISC and x86, many speculated Windows would become the dominant operating system and programming application binary interface (ABI) from the desktop to large datacenter servers. A few years later, many speculated Intel's IA-64 “Merced” (later branded Itanium) ISA would dominate all computers, displacing RISC from the datacenter. Desktop PCs, entry and midrange servers running Microsoft Windows and Novell Netware, and high-end datacenter servers running UNIX would all use the IA-64 architecture. Despite this speculation, few put two and two together and predicted a Windows/IA-64 monopoly platform combination. The latest domination scenario, proposed a few years ago, was that Linux would displace all UNIX variants. In this scenario, system vendors with their own UNIX variants would simply abandon their UNIX distributions and instead port Linux to their RISC architectures. This scenario is remarkably similar to the speculation about Microsoft Windows in the mid-1990s, when experts suggested RISC vendors would abandon their UNIX variants to embrace Windows.
There is a huge difference with x86 Everywhere: the current installed base of x86 systems, and the current willingness of customers to use x86 systems for critical tasks. This is not to say other ISAs will cease to exist. While RISC/UNIX established dominance in the datacenter in the 1990s, mainframes still exist, and while x86 is dominant on the desktop, the Apple Macintosh continues to be successful as an alternative platform. In this scenario, however, traditional RISC/UNIX systems are relegated to a smaller, niche market.
Three trends could allow what I call "x86 Everywhere" to happen.
I will cover those three trends in my next post.
Monday, February 23, 2009
On Power and Journalism
Another great quote. This time from Jonah Goldberg:
"But it’s worth remembering that government and corporations aren’t the only institutions that can abuse power. Factions, to borrow a word from the Federalist Papers, have a power all their own. When governments cave to that power, they become mere tools of bullies. And when journalists go along for the ride, there’s no one left to speak truth to power when that is what’s needed most."
An Excellent Observation on the Financial Bailout
Great comment from Mark Steyn on the Hugh Hewitt show last Thursday:
" ... what the government has been trying to do since October has been to re-inflate a credit bubble, to say that people should be able to get spectacular returns on mediocre assets as a permanent feature of life. And that is simply unsustainable. And my objection to what started back in mid-September is that no matter how much you pump into it, you cannot re-inflate a credit bubble, and you shouldn’t try. And that is something that if necessary, people have to take a bit of temporary pain ... "Steyn is exactly right. We should be trying to ensure a soft landing on a reasonable bottom (i.e., preventing the crashing through a reasonable bottom into a worse situation), we should not be trying to reinflate a balloon with a huge gash in its side. That money is lost, and worse, takes with it more.
UPDATE: Another great comment by Steyn, where he calls the press "eunuchs to the PC sultans". That one's going to leave a mark (pun intended).
Friday, February 20, 2009
Thoughts on "Great Depression 2.0"
When I was in junior high (I think it was 8th grade), my Social Studies teacher (Mr. James) made everyone in the class interview someone who had lived through the Great Depression (that would be "Great Depression 1.0", or "Great Depression 29"). For Generation X, that generally meant interviewing a grandparent.
Mr. James gave us a list of questions to ask in our interview.
So, I took my Radio Shack monaural cassette tape recorder and interviewed my grandmother, born in 1909 (or maybe it was 1908). The one thing I remember from that interview was one question: "What ended the Great Depression?" I still remember my grandmother's answer: "The wower." That would be "The War" for those who cannot translate a southern accent. "The War" referred to World War II. Now, "The War" didn't happen for America until twelve years after the stock market crash, and nine years after the election of FDR.
Wasn't there a New Deal? What about the WPA? The CCC? What, building the Hoover and Grand Coulee Dams didn't pull us out of the depression? Rural Electric Administration and the TVA? Nope.
Now, after we all did our interviews, we had to listen to them. For a week, we all listened to each of the interview tapes. And the one thing I remember is almost every subject answered the "What ended the Great Depression?" question the same way: World War II.
Is there a lesson in this? Perhaps. Perhaps the lesson is that spending billions of dollars of taxpayers' money on make-work will not pull you out of a depression, but spending billions of dollars of taxpayers' money on trucks, tanks, airplanes, and ammo will. Perhaps the lesson is that economic problems are rarely solved quickly from the bottom up (i.e., jobs programs, consumer-focused programs, tax cuts to individuals, etc.). Does it mean economic problems can be solved faster at the top (money supply, business lending, business taxes, etc.)? One could say the war spending was a direct subsidy to large American industrial companies, like GM and Boeing, and the jobs were a byproduct; that is, it was top-down. Certainly the decade-long economic downturn of 1973-1984 never adequately responded to the demand-side economic efforts of Nixon, Ford, and Carter, and only recovered after Reagan's tight money supply and supply-side efforts.
One thing I can say is, I can lean on that CCC-built rail at the Grand Canyon. I can use electricity from the TVA. They might not have pulled the U.S. out of depression, but one could argue we got something for the money, and some people were employed for some period of time.
But can we say the same thing about the current stimulus plan?