Monday, April 24, 2017

The Big Payoff

The big payoff for driverless vehicles is in driverless trucks, not driverless cars, and especially not driverless "Ubers". By driverless trucks, I specifically mean long-haul trucks.
 
A typical long-haul trucker drives 10 hours a day, meaning the truck is idle the other 14. Some downtime is needed for refueling, weigh stations, etc., but it is reasonable to expect driverless long-haul trucks will roughly double the productivity of human-driven trucks, quite literally overnight.

There has been a shortage of people willing to work as long-haul truckers, even though the job pays a middle-class income without requiring extensive education or training. This shortage has caused labor costs to rise.

There are currently over 1.5 million long-haul truckers, and estimates suggest the need will approach 2 million within the next five years.

There are about 250,000 taxi and limo drivers, and they make less than long-haul truckers. Uber and Lyft have shown there is much greater demand for car services than originally expected, and the capital-light model of ride-sharing works well for that demand. The flood of ride-shares has depressed wages for both taxi drivers and ride-share drivers. But more importantly, self-driving Ubers will be a capital-intensive model, with all the rigidity of a taxi fleet and none of the flexibility of a ride-sharing service.

The long-haul truck driver replacement market is a $100 billion addressable market, about 10 times that of the taxi driver replacement market.
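A quick back-of-the-envelope check of those numbers, in Python. The driver counts come from the figures above; the fully loaded cost-per-driver figures are my own illustrative assumptions:

```python
# Rough sizing of the driver-replacement markets.
# Driver counts are from the post; cost-per-driver figures are assumed for illustration.
long_haul_drivers = 1_500_000        # "over 1.5 million long-haul truckers"
trucker_cost_per_year = 65_000       # assumed fully loaded wages + benefits

taxi_limo_drivers = 250_000          # "about 250,000 taxi and limo drivers"
taxi_cost_per_year = 35_000          # assumed; the post notes they earn less

truck_market = long_haul_drivers * trucker_cost_per_year   # ~$98B
taxi_market = taxi_limo_drivers * taxi_cost_per_year       # ~$9B

print(f"Trucking: ${truck_market / 1e9:.0f}B, "
      f"taxis: ${taxi_market / 1e9:.1f}B, "
      f"ratio: {truck_market / taxi_market:.0f}x")
```

With those assumptions, the trucking figure lands near $100 billion and roughly 10 times the taxi figure, which is the comparison above.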

Follow the money.

Tuesday, April 18, 2017

When Did Expertise Die?

I saw a recent Facebook post of Dr. Tom Nichols's commentary on PBS about The Death of Expertise.

For some unknown reason, Nichols blocked me on Twitter, so I cannot provide this opinion directly. That is his loss.

But Nichols accurately posits that the rise of the public Internet has had the side effect of everyone thinking they individually are an expert. However, individuals believing themselves to be experts is only half of the equation. The other half is the discrediting of the true experts, and I believe that happened a decade or more before the rise of the public Internet. There is also a third point: the rise of the well-known pseudo-expert, and in some cases the celebrity pseudo-expert, such as Jenny McCarthy in the Anti-Vaxxer movement and Rosie O'Donnell in the 9/11 Truther movement. Celebrity pseudo-experts lend credibility to lay pseudo-experts such as the producers of the original "Loose Change" 9/11 Truther film.

But back to the second point, the discrediting of true experts, or "when expertise died".

In 1989, while in college, I had a roommate who was a journalism major. At that time, journalism students were being taught that expertise in a subject was inherently biasing, and that the opinions of an expert in a subject must be balanced with the opinions of someone who was not an expert in the subject.

He later worked on a story on management and interviewed an expert in the subject, who happened to be a management professor I worked for as a graduate assistant. He then had to find rebuttal information, not from another management professor, but from someone completely unrelated to the field. To me, this was surreal, because I knew both the interviewer and the interviewee and had no reason to question the good intentions of either.

But later it all made sense to me. I grew up watching expert reporters: Jules Bergman, ABC's science reporter, and Irving R. Levine, NBC's economics reporter. I also noticed those expert reporters completely disappeared in the 1980s. Except for the doctors the networks use as medical correspondents and the aviation expert they bring in for airplane crashes, there are no expert reporters anymore. I also remember that every time we launched a Space Shuttle in the 1980s, the various national news anchors would recite the Soviet Union's publicly stated opinion of the purpose of the mission, as if it were as valid as NASA's stated mission objectives, or as if NASA's stated mission were as invalid as the Soviets' opinion. This latter point goes straight to my original point about what my roommate was taught: NASA is the expert on its own space missions, so its opinion had to be balanced. Was the Soviet statement credible? Was it valid? Was it simply propaganda? It didn't matter. Was NASA's statement credible? Was it valid? Was it simply propaganda? It didn't matter. To the media, the Soviet position was just as valid as NASA's position. Propagandists at the Kremlin were just as valid as rocket scientists in Houston.

From a purely pop-culture standpoint, I think we tended to believe Jules Bergman on science issues because his name sounded similar to that of science fiction writer Jules Verne. I think we believed the bespectacled and bow-tied Irving R. Levine because he fit our mental image of what a college economics professor should look like. They were journalists, not scientists or economists, and they fit a persona, but they were experts in their fields as far as journalism went. They had connections, they could get a meeting with the real experts, they had developed a working expertise in their subjects, and they had credibility with the public. But they are gone now, and have been for about 40 years.



So before we blame the general public, driven by curiosity and enabled by the Internet (be it WebMD, Wikipedia, or "FakeNews"), we need to remember that nature abhors a vacuum, and recognize that the television media created that vacuum when it cut out those quirky expert reporters and promoted skepticism and outright distrust of expertise.

Thursday, March 16, 2017

Everything I need to know about NetApp’s All-Flash Storage Portfolio I learned from watching College Football

Okay, silly title. I got the idea when Andy Grimes referred to NetApp’s all-flash storage portfolio as a “Triple Option”. To me, when I hear triple option, I think of the famous Wishbone triple-option offense popular in college football in the 1970s and 1980s. And that got me thinking about how NetApp’s flash portfolio has similarities to the old Wishbone offense.

The Wishbone triple-option is basically three running plays in one. The first option is the fullback dive, an up-the-middle run with no lead blocker; it is up to the fullback to use his strength and power to make yardage. The second option is the quarterback running the ball. While most quarterbacks are not great runners, the real threat of the quarterback in running offenses is the play-action pass, where a running play is faked but the quarterback instead passes the ball. In today’s college football, while the Wishbone may have faded, option football remains popular, and many of the most exciting players are “dual-threat” quarterbacks who can both run and pass well. But, back to the Wishbone. The third option is the halfback, an agile, quick running back who often depends on his ability to cut, make moves, and change direction to make the play successful.

In considering this analogy, I wanted to find the right pictures or videos of Wishbone football to make the comparisons to NetApp’s flash portfolio, but found the older pictures and videos from the 1980s to be not that great. So I decided to take the three basic concepts, the powerful fullback, the dual-threat quarterback, and the agile halfback, and look at more recent examples. I just happened to use examples from my alma mater, Auburn University, because I knew of a few plays that visually represent the comparisons I am about to make.

So first up is the fullback. The fullback is all about power, not finesse. The position is not glamorous; the fullback has to have the strength to face the defense head-on. To me, the obvious comparison in the NetApp flash portfolio is the EF-Series. The EF is all about performance: low latency and high bandwidth, without the extra bells and whistles that can slow other platforms down.

While I don’t have a good fullback example, I have a similarly powerful running back to demonstrate the comparison I am trying to make. Here we see Rudi Johnson, on a power play from the 2000 Auburn-Wyoming game, break eight tackles and drag defenders 70 yards for a touchdown.

Rudi Johnson great 70 yard TD against Wyoming 2000



The next comparison is to the dual-threat quarterback. The dual-threat quarterback can run or pass with equal effectiveness. In NetApp’s flash portfolio, the obvious comparison is the All-Flash FAS (AFF), the only multi-protocol (SAN and NAS) all-flash storage array from a leading vendor. The multi-protocol capability of AFF (Fibre Channel, iSCSI, and FCoE SAN; NFS and SMB NAS) allows storage consolidation, and truly brings the all-flash data center to reality.

The play that best demonstrates the dual-threat quarterback’s potential is the run-pass option (RPO), where the quarterback rolls out and can either keep the ball and run with it or pass it to a receiver if the receiver is open. Here we see Nick Marshall on an RPO play that tied the 2013 Iron Bowl with 33 seconds left in the game. The reason the play worked is that Nick Marshall, a gifted runner, had already run for 99 yards, including a touchdown.

2013 Iron Bowl: Marshall to Coates




That brings us to the halfback, also known as the tailback, or just the running back. For the sake of this discussion, and keeping with the original Wishbone concept, I will use the term halfback. The handful of teams that still run a variation of the Wishbone (Georgia Tech, Navy, Army, Air Force, and a few others) tend to use smaller, more agile athletes at halfback. These running backs usually get the ball on the outside and leverage their agility to make defenders miss. When I think of agility in flash storage, I think of SolidFire. Agility is a key feature of SolidFire: it scales with agility, provisions with agility, adapts with agility, and is the best storage for agile infrastructures like private clouds, especially private clouds built on OpenStack. The best recent example I have seen of a running back leveraging agility to make a play is this run by Kerryon Johnson against Arkansas State.

Watch Kerryon Johnson's incredible touchdown against Arkansas State





So enough fun for now. But if you have a dedicated application that needs performance acceleration, such as a performance-critical database, NetApp’s EF-Series might be your tackle-breaking fullback, powering through spaghetti code and getting the job done despite the challenge. If you are looking to move to an all-flash data center and need consolidated flash storage to accelerate iSCSI MS-SQL databases and NFS VMware datastores on the same infrastructure, AFF is your dual-threat quarterback. And if you are looking to deploy a private cloud with the agility to grow with your workload, SolidFire is your agile halfback.

Wednesday, December 21, 2016

On Disruption

A few months ago, there was an email thread at my employer asking whether All-Flash Storage is a “disruptive” technology. Disruptive, in the business sense, refers to Clayton Christensen’s definition of the term from his book, “The Innovator’s Dilemma”.

In this piece from a year ago, Christensen revisits his concept:

What Is Disruptive Innovation?

However, I think this is a narrow, and perhaps obsolete, definition. He says Uber is not disruptive because it did not originate in the low-end or new-market segments. But while Uber did not disrupt the car-for-hire service itself, it did disrupt the capital model of cars for hire, and it did disrupt the medallion licensing model. The article also notes that Netflix, in its original format (DVDs by mail), attacked an underserved periphery of the market, not the low end and not a new segment.

If we use the pure Christensen definition, All-Flash Arrays (AFAs) are not disruptive, but HyperConverged Infrastructure meets the definition. But perhaps we should look more broadly at the definition.

“The Innovator’s Dilemma” is 20 years old. It was written during the Dot-Com boom. Business books are not canonical; if they were, there would never be revisions and follow-ons.

I think we need to take a wider view of disruptive technologies. Uber disrupted the car-for-hire capital and licensing models. Driving an Uber is much less expensive than buying a taxi medallion, so the cost of entry was disrupted.

So how does that apply to AFAs? We know the cost of IOPS is much lower with AFAs. We also know the costs of sizing and performance management dramatically decrease. One can argue the TCO of AFAs is lower. While AFAs did not enter at the low end or in a new-market segment, they did enter at a periphery, at a market segment (high transactional performance storage) where they offered a lower cost. AFAs disrupted a segment of the overall frame storage market: not the mainframe-attach segment, and not the extreme-reliability segment, but the assured-high-performance segment.

But here is another aspect of AFAs I am seeing: they mandate changes to a customer’s operational model. AFAs were made cost effective in part by using data reduction technologies (deduplication and compression). While some hard-drive-based storage arrays leveraged data reduction (NetApp FAS, EMC Celerra, Sun/Oracle ZFS-based arrays), it was not available on high-end frame storage (EMC Symmetrix/DMX/VMAX, HDS USP/VSP, IBM ESS/DS8000). These data reduction technologies worked well for certain workloads: virtual machines benefited from deduplication, and OLTP databases benefited from compression.

This meant AFAs with built-in data reduction, targeting small, peripheral workloads (VDI, high-transaction OLTP), were set up for easy success.
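It is easy to see why VDI and other cloned-VM workloads deduplicate so well with a toy block-level sketch: hash fixed-size blocks, store each unique block only once, and identical guest images collapse onto a shared set of blocks. This is a minimal illustration of the general technique, not any particular vendor’s implementation:

```python
import hashlib
import os

def dedup_ratio(volumes, block_size=4096):
    """Toy block-level dedup: hash fixed-size blocks; each unique block is stored once."""
    total_blocks = 0
    unique_blocks = set()
    for data in volumes:
        for offset in range(0, len(data), block_size):
            block = data[offset:offset + block_size]
            total_blocks += 1
            unique_blocks.add(hashlib.sha256(block).hexdigest())
    return total_blocks / max(len(unique_blocks), 1)

# Ten "VMs" cloned from the same golden image, each with a few blocks of unique data.
golden_image = os.urandom(4096 * 64)                      # shared OS image blocks
vms = [golden_image + os.urandom(4096 * 4) for _ in range(10)]
print(f"Logical-to-physical ratio: {dedup_ratio(vms):.1f}:1")   # roughly 6.5:1 here
```

Every clone contributes the same 64 image blocks, so only the per-VM unique blocks add physical capacity; that is the effect an AFA’s inline deduplication exploits against VDI farms.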

However, at the same time, other trends were occurring. To more effectively leverage the expensive high-end frame storage, some DBAs were turning on compression within their database software. Yes, this increased the number of CPUs needed to run the database, and therefore the licensing cost, but often that licensing was a sunk cost. It was also possible to compress at the OS/filesystem level. In organizations where IT departments charged back storage capacity, it was not unusual for users to turn on compression on their servers to reduce their chargeback.

The second trend over the last five years has been the fear of a data breach, which has driven the need to encrypt data at rest. While storage arrays offer this capability through self-encrypting drives, encryption boards, or software encryption running on the array’s controllers, enabling storage encryption often required upgrading the storage array to a new model. As a result, turning on encryption at the application level (i.e., in the database), at the OS level (encrypting file systems), or at the VM level (using products like HyTrust) was a much faster path to security for many customers. Also, customers were assured that only host-level encryption protected data “over the wire” in addition to at rest.

The result of either of these practices is that it eliminates the ability of the storage array’s data reduction to provide any benefit, and it returns the cost per gigabyte of flash storage to what it was with the early-generation, non-efficient architectures, which ultimately lost out to AFAs with built-in data reduction.

The only way to benefit from an AFA’s data reduction features is to ensure applications and operating systems are not running host-level compression or encryption. It may mean ripping out products like HyTrust and Vormetric. It may mean internal battles with DBAs. It may mean new terms and conditions in internal SLAs and storage chargebacks. The All-Flash Data Center sounds innovative on paper, but implementing it means working across the traditional IT divides of applications, servers, security, and storage.

There are also some data types that are natively compressed. For example, all the current Microsoft Office file formats are compressed, as are most image files. Traditional file shares full of PowerPoint files are not going to benefit from AFA data reduction. Generally these workloads have never rated high-performance storage, and because of the lack of reducible data, it will take more time for the cost per gigabyte of all-flash storage to come down to a point where it provides the payback needed to justify migrating these workloads to flash.
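The reason host-level encryption and natively compressed formats defeat array-side data reduction comes down to entropy: by the time the data reaches the array, it looks like random bytes, and random bytes do not compress. A quick standard-library sketch, using random bytes as a stand-in for ciphertext (the point is the entropy, not the cipher):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Logical size divided by compressed size; ~1.0 means no savings."""
    return len(data) / len(zlib.compress(data))

plaintext = b"SELECT * FROM orders WHERE status = 'OPEN';\n" * 2000  # repetitive, DB-like
ciphertext_standin = os.urandom(len(plaintext))   # stand-in for host-encrypted data
precompressed = zlib.compress(plaintext)          # stand-in for Office/image formats

print(f"plaintext        {compression_ratio(plaintext):5.1f}:1")           # compresses well
print(f"'encrypted' data {compression_ratio(ciphertext_standin):5.1f}:1")  # ~1:1
print(f"pre-compressed   {compression_ratio(precompressed):5.1f}:1")       # ~1:1
```

Whatever the array’s deduplication and compression engines could have saved on the plaintext is gone once the host has already encrypted or compressed it.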

Why did I go down this path? It was to point out the potential limits of a disruptive technology. When AFAs were narrowly applied to certain workloads, there was a cost-benefit which accelerated their adoption. When they are applied more broadly, they hit organizational barriers to adoption. Perhaps these barriers mean AFAs do not fit the definition of a disruptive technology. However, in IT I see many “disruptive technologies” which ultimately force significant operational changes on IT organizations. That was true for UNIX, Storage Area Networks, Windows, Linux, and VMware. It will likely be true for All-Flash Storage, Software Defined Networking, and the adoption of Cloud Computing.

Friday, October 07, 2016

Why is There Not More Skepticism of Climate Science?

I continue to be surprised at how many people, especially Millennials (who are supposed to be skeptical), take "Climate Change" as gospel, despite evidence of highly questionable, and in some cases fraudulent, science, such as the math used in Mann's "Hockey Stick" formula and other questionable science revealed in the East Anglia email leaks.

Here are the questions I pose to anyone on the topic:
  • What percentage of warming is due to CO2 emissions due to the burning of fossil fuels?
  • What percentage of warming is due to other man-caused reasons?
  • What percentage of warming is due to changes in solar activity?
  • What percentage of warming is due to changes in other natural reasons?

Given the many observably questionable surface temperature measurement stations, and a noticeable difference between surface station temperatures and atmospheric temperatures, does Climate Scientists' heavy dependence on surface temperature measurements lead to unreliable results?

Source: New study shows half of the global warming in the USA is artificial

Source: 7 questions with John Christy and Roy Spencer: Climate change skeptics for 25 years

Given many Climate Scientists claim solar activity plays no significant role in Climate Change, but other Climate Scientists claim the significant pause in global warming is due to a decline in solar activity, how trustworthy is the climate science regarding solar activity?

Source: New study claims low solar activity caused "the pause" in global temperature – but AGW will return!

Source: Tiny Solar Activity Changes Affect Earth's Climate

Given one can insert random numbers into Michael Mann's equation and still produce a "Hockey Stick" output, how trustworthy should Dr. Mann's science be considered?



Source: Michael Crichton - On Michael Mann's Climate Temperature Graph
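The "random numbers" claim refers to the McIntyre and McKitrick critique of the short-centered (decentered) principal component step: when each proxy series is centered on only the late calibration period rather than its full mean, series that happen to trend during that window dominate the first principal component, so even pure red noise tends to yield a hockey-stick-shaped PC1. Below is my own toy sketch of that critique, with made-up sizes and parameters; it is not Mann's actual code or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies, calib = 600, 70, 100    # toy sizes; "calibration" = last 100 years

# Red-noise (AR(1)) proxy series containing no climate signal at all.
proxies = np.zeros((n_years, n_proxies))
for t in range(1, n_years):
    proxies[t] = 0.9 * proxies[t - 1] + rng.standard_normal(n_proxies)

def pc1(X, short_center):
    # Short-centering subtracts only the calibration-period mean (the critiqued step);
    # conventional PCA subtracts each series' full mean.
    mean = X[-calib:].mean(axis=0) if short_center else X.mean(axis=0)
    U, S, _ = np.linalg.svd(X - mean, full_matrices=False)
    return U[:, 0] * S[0]

def blade_vs_shaft(pc):
    # How far the calibration-period mean sits from the rest of the series ("blade" size).
    return abs(pc[-calib:].mean() - pc[:-calib].mean()) / pc.std()

print("short-centered PC1 blade:", round(blade_vs_shaft(pc1(proxies, True)), 2))
print("full-centered  PC1 blade:", round(blade_vs_shaft(pc1(proxies, False)), 2))
```

With the short-centered version, the blade statistic tends to come out noticeably larger, which is the shape of the objection: the method can manufacture a hockey stick from noise.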

Given evidence that scientist Keith Briffa selectively picked evidence to support his desired outcome, and discarded evidence which did not, how trustworthy should Dr. Briffa's science be considered?

Source: YAD06 – the Most Influential Tree in the World


Given evidence that scientist Philip Jones stated he used Michael Mann's "trick" to "hide the decline" of late-20th-century cooling and overstate warming in the industrial era, how trustworthy should Dr. Jones's science be considered?

Source: Climategate reveals 'the most influential tree in the world'

Source: IPCC and the "Trick"

Given climate scientists refused to allow critical peer review of their research, and only allowed it to be peer-reviewed within their tight circle of fellow climate scientists who believed the same way they did, how trustworthy should their science be considered?

Source: The tribalistic corruption of peer review – the Chris de Freitas incident

Given climate scientists working at government organizations refused FOIA requests for details of their research, how trustworthy should their science be considered?

Source: Climategate: James Hansen Finds Complying with FOIA To Be Too Much of a Burden


So there it is. Why is there not more skepticism, not of whether temperatures are rising, but of the science itself? I have said repeatedly that Climate Science is a Social Science, not a Physical Science: it is more about computer methods and curated data, and less about measurement. And other Social Sciences are held to much greater skepticism than Climate Science.

UPDATE:

Now there is this: the data used to dispute the "pause" in Global Warming is itself in dispute. By definition, science based on disputed data cannot be considered "settled".

Exposed: How world leaders were duped into investing billions over manipulated global warming data

Sunday, January 17, 2016

"True" Private Clouds

Wikibon is talking about "True" Private Clouds. I think their definition is too narrow and gets into the weeds. It misses the true customer of a "true" private cloud, and there are actually two customers. The first is the organizational customer that purchases a private cloud. The second is the internal end-consumer of cloud services.

To Wikibon's credit, the definition of "Private Cloud" is an issue that needs to be addressed. In my career I have seen too many organizations overuse the term "Private Cloud". I have seen a VMware cluster deployed on disparate hardware with no upper level cloud management platform called a private cloud. I have seen converged infrastructure, acquired but managed identically to non-converged infrastructure (as discrete components each managed by their functional staff) called private clouds.

Converged infrastructure plays a role in a private cloud, but even that term is challenged. I have seen disparate servers and storage, purchased separately at different times, cobbled together and called converged infrastructure after the fact. I have also seen single-SKU converged infrastructure broken apart, support for the component infrastructure separated, and individual components upgraded on different life cycles.

From an operations perspective, I have seen mature IT organizations in large enterprises provide similar levels of managed services as traditional managed service providers. I have also seen the converged infrastructure single-support model dramatically fail organizational customers, providing no better single support than that provided by a reseller or managed service provider.

If the goal of a "true" private cloud is to provide internal end-consumers a level of service similar to what they receive from a public cloud, but with higher levels of compliance and data sovereignty, then many of the detailed requirements Wikibon mentions are not necessary. As long as the organization can provide an offering to internal end-consumers which is competitive (on cost, ease of consumption, and reliability), it should meet the definition.

Here is what I believe is required of a "True" Private Cloud:
  • Acquired in consolidated units of management, virtualization, compute, network, and storage with common amortization, and common life-cycle management.
  • Components supported as an integrated whole, with a single number, first-call support model, and escalated support abstracted from the internal end-consumer.
  • Compute, storage, network, and virtualization managed as a single entity by a single, cross-functional team.
  • Provisioned and managed via a cloud management platform (CMP).
  • Consumed by internal end-consumers as a shared resource in logical, not physical, increments, i.e., VMs and GBs.
  • End-consumer offerings include multiple performance and data protection SLAs.
  • Provides charge-back to internal end-consumers.
  • Provides the Private Cloud operator with performance, capacity, and licensing budgeting of the infrastructure; performance metering and capacity measurement to manage over-subscription, prevent over-consumption (especially of performance), and allow for elastic performance and capacity scaling; and built-in performance and capacity planning for predictable infrastructure growth.
  • Managed by the organizational customer's high-maturity IT staff, or optionally delivered as a managed services offering that does not require the organizational customer's IT staff to manage it.
  • Financed to organizational customer either through capital purchase, capital lease, operational lease, capacity lease, or pay-per-use offering.

Some organizational customers will want to capitalize the "True" Private Cloud and manage it themselves. Others will want to essentially rent the whole stack, software included, and have it managed for them. But the common denominator should be how the internal end-consumer consumes the offering. It should look, feel, and cost as much like the public cloud as possible.
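To make the consumption side of those requirements concrete (logical increments, SLA tiers, chargeback), here is a minimal sketch of a tiered chargeback calculation. The tier names and rates are invented for illustration, not taken from any real offering:

```python
# Toy chargeback model: internal end-consumers are billed in logical units
# (VMs and provisioned GBs) against SLA tiers. Rates below are made up.
TIERS = {
    #           per VM / month   per GB / month
    "gold":   {"vm": 95.0, "gb": 0.45},   # highest performance and protection SLA
    "silver": {"vm": 60.0, "gb": 0.25},
    "bronze": {"vm": 35.0, "gb": 0.10},   # capacity tier, basic protection
}

def monthly_chargeback(consumption):
    """consumption: iterable of (business_unit, tier, vm_count, gb_provisioned)."""
    bill = {}
    for unit, tier, vms, gbs in consumption:
        rates = TIERS[tier]
        bill[unit] = bill.get(unit, 0.0) + vms * rates["vm"] + gbs * rates["gb"]
    return bill

usage = [
    ("finance",   "gold",    40, 20_000),
    ("marketing", "silver",  25,  8_000),
    ("dev-test",  "bronze", 120, 50_000),
]
for unit, amount in monthly_chargeback(usage).items():
    print(f"{unit:10s} ${amount:,.2f}")
```

The point is simply that the end-consumer sees VMs, GBs, and a tiered price, the same way they would on a public cloud bill, regardless of how the operator finances the underlying stack.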

Wednesday, September 30, 2015

U.S. Cyber Command's Requirements Demand Warrant Officers

Yesterday there was a hearing in front of the House Armed Services Committee, "Outside Perspectives on the Department of Defense Cyber Strategy".

Some of the points brought up were about the personnel management of military cyber warriors. This is a challenge, because "cyber" (BTW, I HATE the term) is both an infrastructure (i.e., IT infrastructure) and a domain (Information Warfare). It is an area where both warriors and janitors walk, more akin to urban warfare than other historic domains.

Since the late 1980s, the military has treated IT as an area where COTS technologies should rule, to both increase the productivity of the military, and to reduce operating costs. At the same time, the PC and client-server boom of the 1990s drew skilled IT technicians from the military to the higher paying civilian sector.

Through the 1980s, the military had its own uniformed data processing specialists. The father of a high-school friend was a Technical Sergeant in the Air Force and a Burroughs mainframe programmer. In the 1990s, most of those programming positions were either converted to civil service or outsourced to contractors.

The second wave occurred in the 1990s with the decentralization of IT acquisition, management, and support from central service commands (e.g., Air Force Communications Command) to the local military bases. This was followed by A-76 studies converting many base-level IT jobs to a combination of civil service management and contractor workforces.

The result of all of this is the military lost its uniformed expertise in information technology.

Fast forward to today, and information infrastructure is as much a domain in warfighting as the seas and the air, yet the military is left without the skills in uniform which correlate not only to captains of ships and pilots of airplanes, but also to the technicians, operators, and maintainers. As a result, the military has once again centralized IT acquisition, management, and support, and is once again filling positions with uniformed personnel.

However, IT skills are unique in several ways. They are perishable: old skill requirements (e.g., Novell NetWare, UNIX) become obsolete and unneeded, and are replaced with new skill requirements (e.g., Windows Server, Linux). To ensure quality, they require validation (e.g., IT certifications). Because they are COTS-based, they are inexpensive compared to unique military skills. And they are fungible and readily transferable to the civilian sector.

Another unique aspect of the military is ab initio training. The military will take someone out of high school with the appropriate aptitude, enlist them, and train them up to a reasonable, beginner level of productivity. It then uses on-the-job training and continuing education to build expertise. In the case of an in-demand skill set, this creates issues with retention. This is a bigger problem than it is for, say, a military turbine engine mechanic; only a handful of airlines need that skill. But almost every organization needs a Windows administrator.

Then there are the challenges. The military needs smart, highly skilled problem solvers for day-to-day operation of the IT infrastructure. The military information infrastructure is more likely to be attacked in both peace and wartime, and rapid recovery is critical in wartime. Poor retention hurts this need. The military needs deeply skilled, highly experienced IT technicians. But the need for operational managers is not that great, so the college-educated, commissioned officer corps is not the appropriate career path for an IT technician. Something else is needed.

The military position of Warrant Officer is that of a technical specialist. Historically, that technical expertise came from experience serving in the enlisted corps. In modern times, the Army uses Warrant Officers as helicopter pilots and trains them to the appropriate level of technical expertise.

Warrant Officers can serve as highly skilled individual contributors or as first level managers. It would seem a perfect career path for an enlisted military IT specialist. Tie it to certifications, and perhaps an Associate Degree, along with a service commitment and a retention bonus.

On the commissioned officer side, the career plan should focus more on IT architecture, Information Warfare, and advanced academic education. College-educated officers would start by focusing on both supervisory and architectural roles. Then the focus should be on an advanced degree in the appropriate field of study. From there, they would move to an Information Warfighting planning role, followed by the appropriate mid-career professional military education. Cross-flow between related fields such as military intelligence would also be appropriate; however, this should be treated with care, as military intelligence often recruits from liberal arts studies such as history, foreign languages, and political science, and a cross-flow program should not disrupt either the military intelligence corps or the information warfare corps. Finally, the Joint Forces Staff College should create a dedicated Command and Staff school for information warfighters, with the goal of creating a cadre of information warfighting leaders for all of the services.

Ultimately, the combination of a cadre of commissioned information warfighting leaders, combined with a corps of highly skilled warrant officer information warfighting specialists, would go a long way towards developing the cyber warrior force our nation requires.