Hewlett Packard, er, I mean, the other HP.....
blaisepascal
Done.

I suspect sixapart, technorati, netflix, et alia, will be looking carefully at their datacenter contracts to see what was, or wasn't, promised about backup power.

This is what I don't get

When the idiot threw the fire switch a year or so ago, they added UPSes to all their racks.

I was wondering where the generators were.

I have a link to the story about the outraged employee's drunken sabotage in my current post.

Very interesting. Was it poor design, or an out-of-control employee, or something else? Will we ever really know?


I saw a posting by someone who was in the data center when the power went out. He reported that all of a sudden the lights went out and it got real quiet. No report of a rampaging disgruntled employee.

I also saw a description of the backup power system at the data center. It has sets of electric motors connected to flywheels and generators. The data center is powered from the generator side of the system, and the flywheels both store energy and smooth out the power. The flywheels hold about 60 seconds' worth of power, so they can carry the system smoothly through short brownouts or blackouts.

For longer power disturbances, they have gas-powered engines that are designed to kick in after 15 seconds or so. That way, there is enough energy in the flywheels to last until the engines can get up to speed.

In this case, there were six short power failures in a row, each probably long enough to bleed a lot of energy off the flywheels, but too short to trigger the gas engines. So after six of these, the flywheels were drained and didn't have enough energy left to last until the engines could kick in.

That's one theory, at least.
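To make that theory concrete, here's a little Python sketch with numbers I'm making up (I have no published figures for this plant): roughly 60 seconds of flywheel ride-through, a 15-second engine-start delay, and a recharge rate that's slow compared to the gaps between dropouts. A run of short dropouts, each too brief to start the engines, bleeds the reserve down until the long outage finally hits and there isn't enough left to bridge the engine start.

# Rough sketch of the "six short dropouts" theory, all numbers assumed.
FLYWHEEL_SECONDS = 60.0     # assumed ride-through at full charge
ENGINE_START_DELAY = 15.0   # assumed time before the engines take the load
RECHARGE_RATE = 0.05        # assumed: seconds of reserve regained per second on utility

def simulate(events):
    """events: list of (seconds on utility power, seconds of dropout)."""
    reserve = FLYWHEEL_SECONDS
    for uptime, dropout in events:
        reserve = min(FLYWHEEL_SECONDS, reserve + uptime * RECHARGE_RATE)
        # A long dropout starts the engines, so the flywheels only have to
        # bridge the start delay; a short one never triggers them at all.
        needed = ENGINE_START_DELAY if dropout >= ENGINE_START_DELAY else dropout
        if reserve < needed:
            return "Load dropped: needed %.0f s of reserve, had %.0f s." % (needed, reserve)
        reserve -= needed
    return "Survived with %.0f s of reserve left." % reserve

# Six ~10 s dropouts about 30 s apart, then the long outage:
print(simulate([(0, 10)] + [(30, 10)] * 5 + [(30, 600)]))

With those made-up numbers it prints "Load dropped: needed 15 s of reserve, had 9 s." -- the flywheels survive every short dropout but can't bridge the engine start when the real outage arrives.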

Whose brilliant idea was it to use flywheels, instead of proven-reliable battery backup? Doesn't anybody actually test their inventions under real-world conditions before marketing them as the greatest thing since flush toilets? (I just typed, and then deleted, a long description of the emergency power setup at any given radio station in NYC, as I remember it from working in radio 35 years ago. And it was already old, well-known technology then. But an equivalent system would have kept those servers from ever going down in the first place.)

There is nothing new about flywheel systems. They provide a number of significant benefits over battery-based UPS systems -- higher energy density, lower maintenance needs, lower TCO, higher efficiency, better output power quality and isolation, etc. As near as I can tell from reading about the system and what happened, the system worked as designed. The trouble was that this particular incident happened to be the one-in-a-million incident that the system wasn't designed, or specified, to handle.

The entire purpose of an emergency back-up power system is to be able to keep the machinery running no matter what happens. Short interruptions at short intervals before the power goes down completely are a very common pattern during power outages -- in fact, it's very rare for the electricity to just up and quit, with no fluctuations beforehand. At the very least, the parameters designed into the flywheel system were inadequate; the stored energy ought to be able to keep the load running, seamlessly, for anywhere from five minutes to half an hour before the diesels kick in.
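To put a rough scale on that (these are purely my own assumed numbers -- a 1 MW load is a guess, not anything known about this facility): sixty seconds of ride-through at 1 MW is about 60 MJ of stored energy, so five to thirty minutes means storing five to thirty times that.

LOAD_MW = 1.0   # assumed load, in megawatts; pick your own number

for seconds in (60, 5 * 60, 30 * 60):
    megajoules = LOAD_MW * seconds   # MW x seconds = megajoules
    kwh = megajoules / 3.6           # 1 kWh = 3.6 MJ
    print("%5d s of ride-through at %.0f MW = %4.0f MJ (about %3.0f kWh)"
          % (seconds, LOAD_MW, megajoules, kwh))

At 1 MW that works out to roughly 17 kWh for one minute, 83 kWh for five minutes, and 500 kWh for half an hour -- which is why ride-through time, not just power rating, is the specification that matters.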
