What You Need To Know About Blade Systems

Blade systems are compact, efficient compute servers that save space and power and simplify infrastructure and compute management. Blades also cut costs. Here's how.

Heat, space, and compute power. That's what blades are all about. "Blades," "blade systems," "blade servers," and similar iterations refer to a computer hardware design ("form factor") that is wired once and racked up once: users merely push a blade into a chassis and get all the power and networking they need. This frees data centers from having to cable central processing units (CPUs), power units, data communications, and other IT devices each time they add or replace compute power. "It's about integration. The more you integrate, the more you save," says Ishan Sehgal, IBM's program director of BladeCenter Marketing (Research Triangle Park, NC; www-1.ibm.com/servers/eserver/bladecenter/blade_servers/index.html).

Blades are still relatively new, and their adoption is still in its early stages. Nobody goes out and does a wholesale replacement of a complete data center; instead, data centers replace some number of servers at a time. Here's why that next server might very well be a blade system.

 

BOXES WITHIN A BOX

About three years ago, when blades first hit the market, compute servers sat in racks. Taking up physical space in and around those racks were switches, power units, fans, storage units, and all the other devices that make up a typical server installation. Cables connected all of these together with the CPU. Lots and lots of cables. One estimate is that the cabling for a conventional 40-CPU rack-mount server weighs a ton. According to the research firm Giga Group (now Forrester Research, Inc.), up to 25% of a system administrator's time is spent on cable management. By the way, cable failures are a prime cause of downtime.

The blade approach, by contrast, effectively packages an entire compute server—1, 2, or 4 processors, memory, storage, network controllers, and operating system—into a "pizza box." Each of these pizza boxes is an independent server that slides into a bay in some sort of chassis or enclosure and plugs into a mid- or backplane. Once plugged in, the server shares power, fans, switches, ports, and so on with other blade servers. The cabling that connects all of these components together is already done within the enclosure. So at the very least, blades consolidate the space required for multiple compute servers. That's important; data centers are always trying to reduce the "footprint" of computers, storage, power units, and the like.
It's like this, explains Barry Sinclair, Houston-based Hewlett-Packard platform marketing manager for HP BladeSystem (http://h71028.www7.hp.com/enterprise/cache/80316-0-0-0-121.aspx): "Blades consolidate the space required for multiple compute servers; they reduce the overall cost of the sum of all those subsystems; and [the consolidated system] is easier to manage. You're managing a system instead of discrete components in racks. This reduces a tremendous amount of one-by-one labor-intensive setup and ongoing management of all the devices." Equally nice is that as a data center's computing needs grow, the IT department can literally just slide another blade server into an empty slot and immediately have more compute capacity.*
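One way to picture the split described above is a rough sketch in code: each blade carries its own compute, while the enclosure supplies the shared power, cooling, and switching. The component names and counts below are purely illustrative assumptions, not any vendor's specifications.

from dataclasses import dataclass, field

@dataclass
class Blade:                      # the "pizza box": an independent server
    processors: int
    memory_gb: int
    local_storage_gb: int
    nics: int = 2

@dataclass
class Enclosure:                  # supplies everything the blades share
    power_supplies: int = 2
    fans: int = 6
    ethernet_switches: int = 2
    bays: int = 16
    blades: list = field(default_factory=list)

    def add_blade(self, blade: Blade):
        """Sliding in a blade is all it takes; power and networking are already cabled."""
        if len(self.blades) >= self.bays:
            raise ValueError("Enclosure full: install another chassis.")
        self.blades.append(blade)

chassis = Enclosure()
chassis.add_blade(Blade(processors=2, memory_gb=8, local_storage_gb=72))
print(f"{len(chassis.blades)} blade(s) sharing {chassis.power_supplies} power supplies")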

A blade's biggest savings probably comes from its reduced power-related requirements. Instead of power units on each of those pizza-box servers, a blade enclosure has a power supply and fan distribution system to support all of the servers within. Goodbye redundant power systems and redundant fans. Goodbye also to unnecessary items needing power (read: "generating heat"), possibly failing, or needing repair. According to IBM's Sehgal, power budgets can be the limiting factor in how many servers a data center can accommodate. In fact, reduced power demands could well surpass the savings in floor space and cabling. Better still is the other savings related to reduced power: Less cooling is needed because less heat is generated. Get this: Cooling alone for a 30,000-ft² data center can cost $8 million a year.

 

IT'S NOT ALL JUST HARDWARE

Blades come with management software that automates the initial setup, provisioning, and reprovisioning of the multiple blades within a blade system, which, adds Sinclair, is "all about saving time and labor." Actually, the software does more than that, including infrastructure discovery and monitoring, provisioning, and reprovisioning; change and patch management; dynamic recovery and scaling; and remote management. HP blade management software also reports thermal, power, and fuse events to all server blades within an enclosure; provides asset and inventory information; and lets each server blade communicate with other server blade enclosures. The software also consolidates events pertaining to shared infrastructure components so that IS administrators receive a single message about the affected enclosure rather than a message from each component within that enclosure.
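To give a flavor of what that event consolidation looks like, here is a minimal sketch in Python. The event fields and grouping logic are illustrative assumptions for this article, not HP's actual management software.

from collections import defaultdict

def consolidate_events(events):
    """Group component-level events so administrators see one message per enclosure."""
    by_enclosure = defaultdict(list)
    for event in events:
        by_enclosure[event["enclosure"]].append(event)
    messages = []
    for enclosure, evts in by_enclosure.items():
        components = sorted({e["component"] for e in evts})
        messages.append(f"Enclosure {enclosure}: {len(evts)} events "
                        f"affecting shared components: {', '.join(components)}")
    return messages

# Example: a fan fault and a power-supply warning in the same enclosure produce
# one consolidated message rather than two separate alerts.
sample = [
    {"enclosure": "ENC-01", "component": "fan-3", "type": "thermal"},
    {"enclosure": "ENC-01", "component": "power-supply-2", "type": "power"},
]
print(consolidate_events(sample))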

Because everything is integrated in a blade system, the management software provides a single point of access for systems administration. The savings from this approach become abundantly clear in the following example: Blades can be reprovisioned automatically to replace failed blades or to rebalance data traffic across more blades within the same enclosure. All this happens with no loss in sessions, no interruption in data processing. In the past, such rebalancing required additional external hardware.
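Here is a simplified, hypothetical illustration of that reprovisioning scenario: when a blade fails, its workload moves to a spare bay in the same enclosure. Real blade management software does this at the firmware and OS-image level; the data structures below are assumptions made purely for illustration.

def reprovision(enclosure):
    """Move workloads off failed blades onto available spares in one enclosure."""
    spares = [b for b in enclosure if b["status"] == "spare"]
    for blade in enclosure:
        if blade["status"] == "failed" and blade["workload"] and spares:
            spare = spares.pop(0)
            spare["workload"], spare["status"] = blade["workload"], "active"
            blade["workload"] = None
            print(f"Moved {spare['workload']} from bay {blade['bay']} to bay {spare['bay']}")

enclosure = [
    {"bay": 1, "status": "failed", "workload": "web-frontend"},
    {"bay": 2, "status": "active", "workload": "database"},
    {"bay": 3, "status": "spare",  "workload": None},
]
reprovision(enclosure)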

 

LIFETIME CONSIDERATIONS

The lifecycle for most servers is three years or less: each new generation of higher-performance chips brings higher wattage requirements and, therefore, higher cooling requirements. Blades are no different. Explains Sinclair, each enclosure has a limited range of power (kilowatts) and cooling (BTU) capacity. Vendors, therefore, have to design their enclosures to handle next-generation blades, specifically the newer processors on those blades. (Not surprisingly, blade enclosures are unique to each vendor; i.e., the blade server from one vendor won't fit into the enclosure from another vendor.)

To stave off obsolescence, HP planned for four generations of processors in its blade system, banking on its blade enclosures powering the higher and higher wattage for each new generation of processor. IBM offers power module upgrades that are blade independent; a data center need only sum up the power requirements for all the blades within an enclosure, and then order a different power supply as required.
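That sizing exercise is simple arithmetic, as the sketch below shows: add up the power draw of every blade in the enclosure and pick the smallest power module that covers the total plus some headroom. The per-blade wattages and module ratings here are illustrative assumptions, not IBM specifications.

BLADE_WATTS = {"2-socket": 250, "4-socket": 400}          # assumed per-blade draw (W)
POWER_MODULES = [1200, 1800, 2400, 3200]                  # assumed module ratings (W)

def required_module(blades, headroom=0.2):
    """Return total draw and the smallest module rating that covers it plus headroom."""
    total = sum(BLADE_WATTS[b] for b in blades)
    needed = total * (1 + headroom)
    for rating in POWER_MODULES:
        if rating >= needed:
            return total, rating
    raise ValueError("Enclosure power budget exceeded; split blades across chassis.")

total, module = required_module(["2-socket"] * 6 + ["4-socket"] * 2)
print(f"Total draw {total} W -> order the {module} W power module")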

Nevertheless, as with all things compute-like, time marches on. Sehgal points out that while IBM BladeCenters are backwards compatible—processors and power modules can be moved from chassis to chassis—customers typically fill a chassis up with blades in a year or less (if not immediately). Those customers then install a new chassis and new blades in other parts of the data center as the need arises. "We haven't really seen applications move from one set of blades to another," says Sehgal.

 

ADDING RESOURCES; INTEGRATING MANAGEMENT

Blades are suitable in a number of compute environments. First, blades may be better where yet another server just might not fit because of physical space, power distribution, or cooling constraints within the data center. Second, blades can fill a gap where some extra compute resources are needed immediately, and more may be needed later. Third, blades let a data center dynamically add and subtract computing resources fairly regularly, such as when it needs to "repurpose" or expand compute resources for new or existing services and applications, for the end-of-month close, or for massive compute projects like computational fluid dynamics.

WHERE BLADES CUT COSTS
Here is a comparison pitting rack-mounted servers against the HP BladeSystem in a data center with 100 servers that need updates and changes four times a year and that adds 25 new servers a year. Keep in mind that for 16 servers, the data center would be replacing 18 1U conventional rack-mounted servers with one 9U BladeSystem, a 50% savings in rack space.
                                                            RACK-MOUNTED       HP BLADESYSTEM
                                                            SERVERS            (using infrastructure automation)
Initial setup and provisioning*
  (average person-hours per server)                         12 hours           30 minutes
Cost per hour of administrators**                           $43 per hour       $43 per hour
Cost of initial provisioning per server                     $516               $21.50
Annual costs for 25 servers (added or reconfigured)         $12,900            $538
Implementing changes, updates, and reconfigurations*
  (average person-hours)                                    4 hours            30 minutes
Costs for change management 4 times per year per server     $688               $86
Annual costs for 100 servers                                $68,800            $8,600
* Once the blade infrastructure is in place, adding a new server takes significantly less time to rack, cable, provision the operating system, and configure VLAN and storage connections. Similarly, changes also take less time.

** Based on an annual cost of $125,000 per administrator. (Source: Hewlett-Packard)
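The table's figures follow directly from hours multiplied by the $43 hourly rate; the quick check below reproduces them (the $538 figure reflects rounding of 25 x $21.50 = $537.50).

RATE = 43.0  # administrator cost per hour (from the table above)

def costs(setup_hours, change_hours, new_servers=25, managed_servers=100, changes_per_year=4):
    """Reproduce the per-server and annual figures from the sidebar table."""
    setup_per_server = setup_hours * RATE
    change_per_server = change_hours * RATE * changes_per_year
    return {
        "initial provisioning per server": setup_per_server,
        "annual cost, 25 new servers": setup_per_server * new_servers,
        "change management per server per year": change_per_server,
        "annual cost, 100 servers": change_per_server * managed_servers,
    }

print("Rack-mounted:", costs(setup_hours=12, change_hours=4))
print("BladeSystem: ", costs(setup_hours=0.5, change_hours=0.5))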

There's actually another place where blades fit. Sinclair considers blade servers a "catalyst for the adoption of next-generation management tools that allow higher productivity in terms of the number of devices that can be managed by each administrator." That is, blades help change the way IT has traditionally managed IT. Most IT shops, Sinclair continues, have "segregated server management from network management from storage management from facilities/data center infrastructure management." That's four teams. Often, because of internal politics and just the way they're organized, tasks have to be handed off from one team to another. With meetings and emails and everything else, those hand-offs are often productivity killers. So, posits Sinclair, many data centers look at blades as the catalyst to integrate, both organizationally and technologically, the four heretofore independent management functions, with the result being consolidated management of the whole data center infrastructure and improved productivity overall.

*Grid computing is another way to add compute resources; however, grid systems are "really a style of computing, a way of managing workloads across multiple compute resources," explains Peter ffoulkes, director of marketing for High-Performance & Technical Computing, Network Systems Group, for Sun Microsystems, Inc. (Menlo Park, CA; www.sun.com). Grid computing is independent of form factor and of the type of compute resources involved (desktop workstations, minicomputers, mainframes), whether they sit in one building, are scattered around a campus, or are scattered worldwide. Blades, continues ffoulkes, are a compute form factor with a specialized technique for cabling, cooling, and managing compute resources. There's nothing stopping a grid from having a mix of blades and traditional servers.
