How to reduce downtime with better cooling systems
Thursday, Jan 5th 2017

In a data center, downtime is expensive. An early 2016 report from the Ponemon Institute found that:

  • Each minute of an unplanned data center outage costs $8,851, up substantially from $7,908 in 2013 and $5,617 in 2010.
  • Both the average and the maximum total cost of downtime have risen sharply this decade: the average now exceeds $740,000 and the maximum is $2.4 million, increases of 38 percent and 81 percent respectively since 2010.
  • Downtime has many possible causes. The leading one is failure of uninterruptible power supply (UPS) systems, a factor in one quarter of all outages. Cybercrime is a growing concern, as is mechanical failure caused by overheating.
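The growth in these figures is easy to verify. As a quick sanity check, the per-minute numbers quoted above can be run through a simple percentage-increase calculation (a minimal sketch using only the figures from the report):

```python
# Ponemon Institute per-minute outage cost figures quoted above (USD)
per_minute_cost = {2010: 5617, 2013: 7908, 2016: 8851}

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Per-minute cost has climbed roughly 58 percent since 2010
growth = pct_increase(per_minute_cost[2010], per_minute_cost[2016])
print(f"Per-minute cost growth, 2010-2016: {growth:.1f}%")
```

That per-minute growth of roughly 58 percent sits between the 38 percent rise in average total cost and the 81 percent rise in maximum cost cited by the report.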

Minimizing downtime requires a multi-pronged approach. Data center operators have to strengthen their cybersecurity systems and implement environmental monitoring solutions to keep tabs on conditions such as temperature, humidity and airflow around their facilities. While oversight of UPS equipment is especially important, organizations also have to pay attention to their cooling systems.

The cooling issue: How overheating can lead to downtime

"Cooling equipment is complex and energy-intensive."

In general, cooling equipment is complex and energy-intensive: it has many moving parts and draws a great deal of power to run properly. Moreover, its utilization is uneven: it may sit idle for long periods of time, then be called into action suddenly when a heavy workload causes a server rack to heat up rapidly. These characteristics make it particularly prone to failures, which in turn can trigger broader outages.

Even if the equipment doesn't fail outright, it may waste a great deal of electricity in the attempt to keep everything cool. This problem is particularly pronounced in legacy data centers, where hardware replacement cycles may be long and the infrastructure may not support modern solutions such as liquid-based cooling. These facilities often have to get by with old-fashioned air-based systems that gobble up electricity, water and maintenance effort.

"Data centers are required under standards set by [the American Society of Heating, Refrigerating, and Air-Conditioning Engineers] to run a temperature up to 81 degrees Fahrenheit," explained Raymond Acciardo in a post on LinkedIn. "Data centers with aging cooling equipment are not only at risk of equipment failure, but they may also struggle to capture the efficiency entitlements that these temperature standards were meant to create, making the data center potentially less efficient and wasting money through unnecessary energy costs."
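In practice, a monitoring system enforces that envelope by comparing each sensor reading against its limits. A minimal sketch (the 81-degree ceiling comes from the quote above; the 64.4-degree lower bound is an assumption, based on the ASHRAE recommended minimum):

```python
# Temperature envelope for server-inlet air, in degrees Fahrenheit
ASHRAE_MAX_F = 81.0   # upper limit cited in the quote above
ASHRAE_MIN_F = 64.4   # assumed lower bound (ASHRAE recommended minimum)

def inlet_temp_ok(temp_f: float) -> bool:
    """Return True if an inlet-temperature reading falls within the envelope."""
    return ASHRAE_MIN_F <= temp_f <= ASHRAE_MAX_F

print(inlet_temp_ok(75.0))  # True: safely inside the envelope
print(inlet_temp_ok(88.0))  # False: overheating, an alert is warranted
```

Running near the top of the envelope, rather than overcooling to far below it, is where the efficiency gains the standard was meant to create come from.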

Overheating equipment can be disastrous for a data center.

Formulating an effective cooling strategy

To move past these issues, data center operators have several options in front of them. For starters, environmental monitoring systems from ITWatchDogs help you address a wide range of possible downtime causes:

  • Humidity monitors protect your equipment from corrosion as well as static electricity complications.
  • With voltage sensors, you can track the performance of your UPS devices and service providers by logging how many brownouts you experience.
  • Electrical monitoring helps you stay on top of how much current is coming into the data center, so that you can prevent potential server shutdowns and failures.
  • Smoke alarms, dry-contact door sensors and video surveillance systems provide additional protection from threats to your equipment.
  • All of these mechanisms can be set up to provide timely notifications to technicians whenever a potential issue comes up.
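Tying these pieces together, the notification logic itself can be quite simple: poll each sensor, compare the reading against a threshold, and raise an alert on any breach. A minimal sketch, where the sensor names, threshold values and sample readings are all hypothetical stand-ins for what an actual monitoring appliance would report:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One reading from an environmental sensor."""
    sensor: str
    value: float

# Hypothetical alert thresholds, one per monitored condition
THRESHOLDS = {
    "temperature_f": 81.0,   # the temperature ceiling discussed above
    "humidity_pct": 60.0,
    "current_amps": 30.0,
}

def check(readings: list[Reading]) -> list[str]:
    """Return an alert message for every reading above its threshold."""
    alerts = []
    for r in readings:
        limit = THRESHOLDS.get(r.sensor)
        if limit is not None and r.value > limit:
            alerts.append(f"ALERT: {r.sensor} = {r.value} exceeds {limit}")
    return alerts

# Example: one hot rack, humidity in the normal range
sample = [Reading("temperature_f", 84.2), Reading("humidity_pct", 45.0)]
for msg in check(sample):
    print(msg)  # prints a single temperature alert
```

A real deployment would feed `check` from the monitoring hardware on a schedule and route its alerts to technicians by email or SMS rather than printing them.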

A comprehensive set of environmental sensors, in tandem with a regular maintenance schedule, is ideal for effective cooling. Precise monitoring allows cooling resources to be deployed as efficiently as possible. Find out more about your options at ITWatchDogs.