Monday, June 29, 2009

HPQ & CSCO: Analysis of New Blade Environments

I've been spending some significant time analyzing new entries into the blade computing market, and poking around in the corners where the trade rags and analysts have failed to investigate. And, as the line goes, "some of the answers may surprise you."

The two big recent entrants/announcements were Cisco's Unified Computing System (made this past March) and HP's BladeSystem Matrix (made in June). Both are implicitly or explicitly taking aim at each other as they chase the enterprise data center market. Both are also teaming with virtualization providers, and both are hoping for success in cloud computing. Each takes a different technology approach to blade repurposing, and each differs in the type (and source) of its management software. But how revolutionary and simplifying are they?

HP's BladeSystem Matrix architecture is based on VirtualConnect infrastructure, bundled with a suite of mostly existing HP software (Insight Dynamics - VSE, Orchestration, Recovery, Virtual Connect Enterprise Manager) which itself consists of about 21 individual products. As Paul Venezia cautioned in his InfoWorld review:
“The setup and initial configuration of the Matrix product is not for the faint of heart. You must know your way around all the products quite well and be able to provide an adequate framework for the Matrix layer to function.”
From a network performance perspective, Matrix includes 2x10Gb ‘fabric’ connections, 16x8Gb SAN uplinks, and 16x10Gb Ethernet uplinks. The only major things missing from the "Starter Kit" suite HP offers are VMware itself - not cheap if you choose to purchase it - and a blade (or two) to serve as the system's controllers.

From Cisco, the UCS system is based on a series of server enclosures interconnected via a converged network fabric (which plays a role somewhat analogous to HP's VirtualConnect in repurposing blades). The UCS Manager software bundled with the system provides core functionality (see diagram, right). Note that at the bottom of their "stack," Cisco turns to partners for "higher-level" value - such as BMC for high availability and VMware for virtualization management. As sophisticated as it is, this software is essentially "1.0," and in contrast to HP, full integration with third-party software is probably a bit more nascent.

As you would expect, the system has pretty fast networking; Cisco's system includes 2x10Gb fabric interconnects, 8x4Gb SAN uplink ports, and 8x10Gb Ethernet uplink ports. (But as the system scales to hundreds of blades, you can't get true 10Gb point-to-point fabric bandwidth; the back-of-envelope sketch below shows why.)
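To make the oversubscription point concrete, here's a quick back-of-envelope sketch. The per-chassis figures are my own illustrative assumptions, not Cisco's published specs:

```python
# Back-of-envelope oversubscription math for a converged-fabric blade system.
# All figures are illustrative assumptions, not published Cisco UCS specs.

BLADE_LINK_GBPS = 10   # each blade wants a full 10Gb path
UPLINK_GBPS = 10       # per fabric-extender uplink

def oversubscription(blades, uplinks):
    demand = blades * BLADE_LINK_GBPS   # aggregate blade bandwidth wanted
    supply = uplinks * UPLINK_GBPS      # aggregate uplink bandwidth available
    return demand / supply

# One chassis: assume 8 half-width blades sharing 4 uplinks per fabric.
# 80Gb of demand over 40Gb of supply -> 2:1 before traffic leaves the chassis.
print(oversubscription(blades=8, uplinks=4))         # 2.0

# Scale to ~100 blades (13 chassis) against a fixed pool of interconnect
# ports, and the ratio only worsens -- hence no true 10Gb point-to-point.
print(oversubscription(blades=13 * 8, uplinks=40))   # 2.6
```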

But really, how simple?

What I continue to find surprising is how both vendors boast about simplicity. True, both have made huge strides in the hardware world to allow for blade repurposing, I/O, address, and storage naming portability, etc. However, in the software domain, each still relies on multiple individual products to accomplish tasks such as SW provisioning, HA/availability, VM management, load balancing, etc. So there's still that nasty need to integrate multiple products and to work across multiple GUIs.

A little comparison chart (at right) shows what an IT shop might have to do to accomplish a list of typical functions. Clearly there are still many 3rd-party products to buy, and many GUIs and controls to learn.

Still, these systems are - believe it or not - a major step forward in IT management. As technology progresses, I would assume both vendors will attempt to more closely integrate (and/or acquire?) technologies and products to form more seamless management products for their gear.

Thursday, June 11, 2009

RTI Fabrics... not just a networking play

Pete Manca, Egenera's CTO, posted an excellent blog explaining RTI (Real-Time Infrastructure) architectures (a term coined by Gartner some time ago), and does a nice job of taking a pretty objective approach to three types:

"A converged fabric architecture takes a single type of fabric (e.g. Ethernet) and converges various protocols on it in a shared fashion. For example, Cisco’s UCS converges IP and Fiber Channel (FC) packets on the same Ethernet fabric. Egenera’s fabric does the same thing on both Ethernet fabrics (with our Dell PAN System solution) and on an ATM fabric (on our BladeFrame solution)...

"Dynamic Fabrics are not converged, but rather separate fabrics that can be have their configuration modified dynamically. This is the approach that HP uses. Rather than utilize a converged fabric, HP has separate fabrics for FC and Ethernet. These fabrics can be dynamically re-configured to account for server fail-over and migration. HP’s VirtualConnect and Flex10 products are separate switches for Fiber Channel and Ethernet traffic, respectively."

"The 3rd type of fabric is a Managed Fabric. In this architecture there is no convergence at all. Rather, the vendor programs the Ethernet and Fiber Channel switches to allow servers to migrate. This is a bit like the Dynamic Fabric above, however, these typically are not captive switches and there is no convergence whatsoever."

I'll take some liberty here, and emphasize a pretty important point:

Converged/managed fabrics aren't attractive just because they simplify networking. They're attractive because they're also a perfectly complementary technology for managing server repurposing - and that's for *both* physical servers and virtual hosts.

It's no wonder that IBM (with their Open Fabric Manager), HP (with their Matrix bundle), Cisco (with UCS) and Dell/Egenera (with the Dell PAN System) are all pushing in this area.

Why? Because once you have control over networking, I/O, and storage connectivity, you've greatly simplified the problem of repurposing any given CPU. That means scaling out is easier, failing over is easier, and even recovering entire environments is easier. You don't have to worry about re-creating IPs, MACs, WWNs, etc., because it's all taken care of.
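A minimal sketch of that idea, assuming a hypothetical "server profile" abstraction of my own invention (each of the real products has its own equivalent), might look like this:

```python
# Hypothetical sketch of identity-portable server repurposing. The
# ServerProfile structure and fabric interface are my own illustration
# of the general idea, not any vendor's actual object model.

from dataclasses import dataclass

@dataclass
class ServerProfile:
    """All the identity that normally pins a workload to one physical box."""
    ip_addr: str
    mac_addr: str
    wwn: str           # Fibre Channel World Wide Name used for SAN zoning
    boot_target: str   # the LUN or image the blade boots from

def fail_over(profile, failed_blade, spare_blade, fabric):
    """Re-point the fabric so a spare inherits the failed blade's identity.
    Nothing about the OS image, SAN zoning, or IP config has to be rebuilt,
    because the addresses travel with the profile, not the hardware."""
    fabric.unbind(failed_blade)
    fabric.bind(spare_blade, profile)   # spare now answers on the same
                                        # IP/MAC/WWN and boots the same LUN

web01 = ServerProfile("10.0.0.5", "00:1c:ab:cd:00:01",
                      "50:01:43:80:02:aa:00:01", "san-lun-7")
# fail_over(web01, failed_blade="bay3", spare_blade="bay8", fabric=pan_fabric)
```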

So, if you can combine Fabric control with SLA management and then with server (physical and virtual) provisioning, you've got an elegant, flexible compute environment.

Tuesday, June 2, 2009

CA's Acquisition of Cassatt - Hindsight & Foresight

Today I read the press release, and Gordon Haff's analysis, reporting that Computer Associates has acquired Cassatt -- a former employer of mine.

CA probably appreciates that they have a real gem. But like all things tech, even the coolest products are not "build it and they will come." However, I can say that Bill Coleman (Cassatt's CEO) and Rob Gingell (Cassatt's CTO, and a former Sun Fellow) really have a break-the-glass vision. Now let's see if the new lease on life for the vision (and product) will take shape.

Vision vs. speedbumps
Cassatt's vision - led by Rob - is still out in front of the current IT trends... but not by too far. As much as 3 years ago, the company was anticipating "virtualization sprawl", the need for automating VMs, the expectation that IT environments will have both physical and virtual machines, and the fact that "you shouldn't care what machines your software runs on, so long as you meet your SLA". That last bit, BTW, presaged all of our current 'hype' about cloud computing!

The instantiation of these observations was a product that put almost all of the datacenter on "autopilot" -- servers, VMs, switches, load-balancers, even server power controllers and power strips. The controller was managed/triggered by user-definable thresholds, and could build/re-build/scale/virtualize servers on-the-fly - doing just about anything needed to ensure SLAs were being met. And it worked, all the while making the most efficient use of IT resources and power. As Rob would say, "We don't tell you what just happened - like so many management products do. We actually take action and tell you what we did." Does that sound like Amazon's recent CloudWatch, Auto Scaling, and Elastic Load Balancing announcement? Yep.
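In control-loop terms, the concept looks something like the sketch below. This is my own pseudo-implementation of the idea, with invented names and interfaces - not Cassatt's actual code:

```python
# Sketch of an SLA-driven "autopilot" control loop, in the spirit of what
# Cassatt built. All names, thresholds, and interfaces are illustrative.

import time

def autopilot(services, pool, poll_seconds=60):
    """Watch each service against its SLA and *act*, rather than just alert."""
    while True:
        for svc in services:
            load = svc.measure()    # e.g. response time or utilization
            if load > svc.scale_up_threshold and pool.has_spare():
                node = pool.allocate()   # bare metal or a VM -- the
                svc.add_node(node)       # service shouldn't have to care
            elif load < svc.scale_down_threshold and svc.node_count() > svc.min_nodes:
                node = svc.remove_node()
                pool.release(node)       # idle gear can even be powered
                                         # off to save energy
        time.sleep(poll_seconds)

# The report back isn't "your service is slow" -- it's "your service was
# slow, so we added two nodes; here's what we did."
```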

Finally, the coup the company had -- and what the industry still has to appreciate -- is that the product takes a "services-centric" view of the data center. Rather than focusing on *servers*, the GUI focuses on *services*. This scales more easily, and gives the user a more intuitive sense of what they really care about -- service availability... not granular stuff like physical servers or how they're connected.

Unfortunately for Cassatt, there is an inherent tension between how ISVs develop products and how IT customers buy them. ISVs are always looking for the next leap-frog, while IT customers almost always play the conservative card by purchasing incremental, non-disruptive technology.

So the available market of real leap-frog CIOs is still small... but growing. I would expect the first-movers to adopt this won't be traditional enterprises -- but rather Service Providers, Hosting Providers and perhaps even IT Disaster Recovery operations looking to get into the IaaS and/or Cloud Computing space.

What it could mean to CA
So why would CA buy Cassatt? Unfortunately, it's not to acquire Cassatt's customers; it's much more likely to acquire technology and talent.

Given that CA seems to be a tier-2 player in the data center management space, Cassatt would help them legitimize their strategy and pull together a cloud computing play that CA's competitors are already moving down the road on. Cassatt's product ought to also complement CA's "Lean IT" marketing initiative.

The other good news is that CA has a number of infrastructure management products that ought to complement Cassatt technology. There is Spectrum (infrastructure monitoring), Workload Automation (more of an RBA - run-book automation - solution that might get partially displaced by Cassatt), Services Catalog, and Wily's APM suite. BTW, there's a pretty decent white paper available on CA's website on Automating Virtualization.
Per Donald Ferguson, CA’s Chief Architect: “Cassatt invented an elegant and innovative architecture and algorithms for data center performance optimization. Incorporating Cassatt’s analysis and optimization capabilities into CA’s world-class business-driven automation solution will enable cloud-style computing to reliably drive efficiencies in both on-premises, private data centers and off-premises, utility data centers. We believe the result will be a uniquely comprehensive infrastructure management approach, spanning monitoring, analysis, planning, optimization and execution.”
I could see CA now beginning to target large enterprises as well as xSPs with Cassatt technology, as its engineering teams build bridges to other CA suite products. It will also take CA's sales and support organizations some time to digest all of this, and then bring it to market through their channels.

But Cassatt brings them a bunch of sharp technical and marketing minds. Stay tuned: CA's a new player now.