
SaaS and Green Data Centers

As the executive sponsor for data centers in the US and UK, I've conducted data center tours for sales prospects, customers and ISO auditors for about a decade. In the last few years, however, I've received far more questions regarding environmental friendliness and going green. I'm a bit skeptical – and even a bit cynical – when I hear corporations' claims of going green. Nonetheless, as an advocate of environmental friendliness, I'm quite pleased to share my experience and lessons learned with customers or pretty much anybody willing to listen. I'm going to use this blog post to show how going green in the data center can save you some money and Mother Earth at the same time. While this post is most relevant to IT folks who manage their own data centers, many of the recommendations can be applied throughout the organization.

The Software as a Service (SaaS) or cloud delivery model delivers a one-two punch to carbon dioxide emissions. The first punch comes purely from the economies of scale realized from centralized processing and a shared services model. Instead of hundreds or thousands of customers individually operating tens of thousands of servers and the power-hungry facilities needed to support them, the SaaS multi-tenant model consolidates and centralizes data center operations to use less equipment and a small fraction of the supporting facility costs. Once you recognize that the supporting facility costs outweigh the cost and emissions of the servers and related computer equipment, you get a handle on how big this savings really is. The second punch is achieved by data center operators through thoughtful decision making and a drive for improved power consumption, cooling efficiency and equipment density.

A green data center is one which maximizes energy efficiency and minimizes environmental impact. IT leaders are in an ideal position to make a difference as operational reforms and rethinking the ways you use equipment can have as big an impact as purchasing newer more energy-efficient computing products. Here are some suggestions you can consider to achieve lower costs, increased power availability and reduced emissions.

  • First, build your power consumption baseline. You need to understand where you consume power and how that consumption is trending. You can then apply costs to each material power-consuming device. Recognize early that IT computing equipment does not consume the bulk of the power in the data center. According to a well-researched APC study, the average data center power usage allocation is as follows:

[Chart: Data Center Power Allocation]
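
For the baseline itself, a back-of-the-envelope model is often enough to get started. The sketch below is a minimal illustration of annualizing cost per device class; the inventory, utility rate and PUE figures are my own assumptions, not from the APC study.

```python
# Hypothetical device inventory: (device class, count, watts each).
# All figures here are illustrative assumptions, not measured data.
inventory = [
    ("1U servers",       120, 350),
    ("blade chassis",      6, 4500),
    ("SAN arrays",         4, 1800),
    ("network switches",  20, 150),
]

RATE = 0.10  # assumed utility rate, $ per kWh
PUE = 2.0    # assumed power usage effectiveness: total facility draw / IT draw

for device, count, watts in inventory:
    it_kw = count * watts / 1000.0   # IT load in kW
    facility_kw = it_kw * PUE        # add cooling, UPS and lighting overhead
    annual_cost = facility_kw * 24 * 365 * RATE
    print(f"{device:18s} {it_kw:7.1f} kW IT load  ${annual_cost:>10,.0f}/yr")
```

Even this crude model surfaces the point above: with a PUE of 2.0, half of every power dollar goes to the supporting facility rather than the computing equipment.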

  • Turn off your mystery equipment and servers. Oracle's Sun estimates that between 8% and 10% of all data center servers have no identifiable function. Over a typical server's life, its power costs exceed its procurement cost. The best thing you can do is turn off unnecessary servers and related equipment. If you can't determine a server's purpose after an investigation, turn it off and wait for the help desk call. If that call hasn't come after 90 days or so, remove the server from the power grid.
  • Virtualize your servers for greater efficiency. It's been my experience that many IT managers don't know how underutilized their servers really are. Server virtualization divides a single physical server into multiple, smaller virtual servers without additional increments of power consumption. Tapping into unused processing capacity without drawing additional power delivers savings straight to the bottom line. While virtualization provides energy savings, recognize that it needs to be complemented with new centralization tools in order to provide proactive management, real-time visibility, high availability and load balancing across your virtual assets. While virtualization facilitates network and system management in many ways (e.g. you can configure a new VM in minutes as opposed to hours for a physical server), it also adds complexity: the number of virtual assets will exceed the prior number of physical assets, managing them requires new tools and skills, and the relationships between virtual and physical assets must be well understood. My experience is that the learning curve is not long and the tools are not expensive – some of the best are open source. However, like anything else in the data center, the processes must be thought through for implications, documented, scheduled in advance, performed according to change management and integrated with other related SOPs (such as security, redundancy, performance, scalability, fault tolerance and disaster recovery).
  • Don't forget about storage. Power and density efficiency improvements are often limited to servers, yet storage systems, while fewer in number, are generally growing capacity at 50%-plus per year (source: IDC) and are among the highest energy consumers in the data center. Storage equipment is often ideally suited for improvement: it consumes 13 times more power than server processors and often runs at very low utilization (less than 20%). Storage virtualization can drive utilization from 20% to 50% or higher. I've also found energy benefits from hot-swappable components such as disk drives, power supplies and fans, as well as modular components such as blades, power supplies and network cards.
  • Consider network equipment virtualization. I’ve been reading blogs and talking with some data center veterans on this savings opportunity, but quite frankly, haven’t really figured out how to achieve a measurable savings without compromising HA (high availability), or N+1 redundancy. I’ll probably post an update to this entry if I make some progress on this opportunity.
  • Take a look at data deduplication. In addition to being a good systems management practice, this reduces redundant data processed and maintained in storage assets. The new twist is relatively new technology that calculates a hash for each data block; if a hash matches that of a block already stored, the system doesn't store the redundant block. Unfortunately, I've not yet found many good products for this at our company. While I have seen some products oriented toward file storage, there seems to be a lack of deduplication products that work with block LUNs (which is where we store the most data). Deduplication technologies provide an added boost when combined with virtualization: each virtual machine duplicates the operating system image, so deduplication tools can reduce storage space requirements and speed access to data residing in the filer's cache.
  • Consider thin provisioning. This storage management practice lets storage administrators limit the allocation of physical storage based on levels of need and then automatically adds incremental capacity only when it is actually required. The capability is generally a SAN or iSCSI hardware function that allocates disk blocks to a given application only when the blocks are actually written. This removes the initial allocation guessing game and the over-allocation of storage just in case you need it down the line.
  • Implement IT charge-backs. Pay-for-use formulas are more flexible and can assist in improving equipment accountability and utilization.
  • Don't bypass supplemental cooling if you're using high density equipment. I've seen this mistake made several times. Many data centers are already well past the cooling capacity available from raised floors, which is generally about 4 kW to 7 kW per cabinet. While I've heard claims of higher capacity, I've never witnessed a raised floor cooling system remove more than 7 kW per cabinet. Cabinets with blade servers are often above 12 kW – I've seen some at over 30 kW – and should get supplemental, localized cooling.
  • Hot and cold aisles. I initially took this age-old practice as a given; however, I have engaged in a few debates with data center staff who question the savings. Our US and UK data centers use hot and cold aisles and I have clearly witnessed the energy efficiency benefits. If you choose to implement this practice, make sure all tenants in the colo follow the configuration. I've been in a few colos where the exhaust sides of equipment don't always face each other, which means one cabinet's heated exhaust becomes somebody else's incoming airflow.
  • Consider data center segmentation. Servers and most network appliances operate well at higher ambient air temperatures than storage devices. The one caveat is 1U servers, which in my experience overheat much more quickly than 2U+ servers; I suspect, but haven't verified, that 1U servers have insufficient airflow and fewer fans because they are so small. Because storage equipment requires approximately 15% to 25% cooler airflow, isolating it in a separate location would permit the ambient air temperature for the rest of the data center to increase to 78 to 80 degrees F, resulting in substantial power savings.
  • Cabinet glass doors look cool, especially with blinking lights, but they turn cabinets into ovens. Avoid them.
  • Good cable management, both in the cabinet and under the raised flooring, will result in better airflow and more efficient cooling. Be sure to keep cables neat, labeled and bundled – and don't overstuff cables into limited space.
  • Don't over-refrigerate the data center. I've heard experts indicate that data center ambient air temperature can be as high as 78 degrees F. As we use Fibre Channel SANs, which require a bit more cooling than the rest of the equipment, I've not quite reached that benchmark, although we do maintain temperatures of 72 +/- 1 degrees and humidity at 52% +/- 2. Cooling much below 70 degrees adds cost with little to no associated benefit. I've heard that every degree of ambient temperature increase yields at least a few percentage points of cooling system energy savings. I've not measured this myself, and I'm sure the correlation is not exactly linear, but the direction seems justified.
  • See if you can use 208V/30A power to reduce energy costs. Each cabinet that runs on higher-voltage 208V power instead of standard 110V/120V loses less energy through wire resistance and transformation, which permits the hardware to operate more efficiently.
  • Don't forget to use blank plates in your cabinets. This is a very small effort which prevents hot spots and hot air from blowing into cold aisles.
  • Don't use perforated tiles on raised floors in a hot aisle. I don't see this much anymore, but I did recently see it in a San Jose data center so I've added it to my list.
  • Turn the lights off. Just like your mother told you. Our data centers use motion detection lights so convenience is not lost and savings are achieved.
  • Ask your hardware suppliers for advice. Every reputable hardware and equipment vendor has staff that can provide advice and design assistance for energy savings. We recently reached out to our SAN vendor (EMC) and found them to be very informative and helpful.
  • For smaller UPSs, consider an upgrade. A UPS's inverter circuitry, which turns DC power from batteries into AC power for IT equipment, traditionally draws a base level of power irrespective of load; newer models are far more efficient at partial load.
  • For big UPSs, consider replacing battery UPSs with flywheels. This is a significant capital expenditure, so for most readers it is more likely something to consider when evaluating data center colo space. Battery UPSs are messy and an environmental liability. I've often witnessed their inherent management weaknesses: a lack of battery standardization, loose battery maintenance procedures and no well-defined schedule for battery replacement. The best battery replacement I've seen is flywheels. I first saw these years ago at the NAP of the Americas (in Miami) and I'm happy to report they are being adopted at an increasing pace. Flywheels completely replace chemical batteries by spinning a wheel which provides temporary power during disturbances until generator power kicks in. These devices are still expensive; however, they consume less floor space than batteries, so they may initially make the most sense for data centers in high-dollar real estate locations.
  • Check out utility incentives. There are more than 200 state and local utility energy conservation programs which provide rebates or incentives for energy efficiency improvements. For example, California’s Pacific Gas & Electric reimburses a portion of the hardware, software and consulting costs, up to $4 million, for server and storage consolidation programs.
  • Make sure IT has visibility into the monthly electric bill. I've seen a number of facilities where the electric bill is approved by the CFO or facilities management without IT review. IT staff should be accountable for electricity utilization in the data center and given incentives for operational savings. For this to be most meaningful, data center electrical draw should be separated from the rest of operations (via sensors or a separate power meter).
  • Watch for future power supply ratings and changes. I believe the EPA is about to deliver new efficiency ratings on power supplies. According to the EPA, an additional 1 kWh to 1.5 kWh is saved upstream in power distribution and cooling for every 1 kWh saved at the plug – so this could result in big savings. The EPA is working with the Climate Savers Computing Initiative on new power supplies which are roughly 90% efficient and could reduce greenhouse gas emissions by 54 million tons per year, resulting in an energy cost savings of $5.5 billion.
  • I recognize most IT managers are more likely to upgrade their data centers than build new, however, if capital expenditures and new construction are in your future, make sure to consider things like catalytic converters on generators, alternative energy supplies such as photovoltaic electrical pumps and water cooling technologies.
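
The block-level deduplication idea from the storage bullets above can be sketched in a few lines: hash each fixed-size block and store a block only when its hash has not been seen before. This is a toy illustration of the technique, not any vendor's implementation; the block size and hash algorithm are my own assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size; real products vary

def dedup_store(data: bytes):
    """Split data into fixed-size blocks, keeping one copy per unique hash."""
    store = {}    # hash -> block: the unique block store
    recipe = []   # ordered hashes needed to reconstruct the original data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:   # only previously unseen blocks are stored
            store[digest] = block
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Rebuild the original byte stream from the recipe."""
    return b"".join(store[d] for d in recipe)

# Ten identical "OS images" collapse to a single stored block
data = b"A" * (BLOCK_SIZE * 10)
store, recipe = dedup_store(data)
print(len(store), "unique block(s) for", len(recipe), "logical blocks")
```

Running this shows one unique block standing in for ten logical blocks – the same effect that makes deduplication so effective against repeated virtual machine operating system images.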

The savings from any one or a combination of these suggestions are material and sustainable. As a quick reference example, a single dual-socket, quad-core server with sufficient memory can replace 30 older, lightly used single-processor systems. The power savings are in the range of 12 to 15 kilowatts and could easily translate to $15,000 or more annually.
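
The consolidation arithmetic above is easy to reproduce. The per-server wattages and utility rate below are my own assumed figures, chosen to land in the range the example describes.

```python
OLD_SERVERS = 30
OLD_WATTS = 450   # assumed draw of one older single-processor system
NEW_WATTS = 500   # assumed draw of one dual-socket quad-core replacement
RATE = 0.13       # assumed blended $ per kWh, cooling overhead included

kw_saved = (OLD_SERVERS * OLD_WATTS - NEW_WATTS) / 1000.0
annual_savings = kw_saved * 24 * 365 * RATE
print(f"{kw_saved:.1f} kW saved -> ${annual_savings:,.0f} per year")
```

With these assumptions the savings come to 13 kW and roughly $14,800 per year, consistent with the 12 to 15 kW and $15,000 figures above.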

For another perspective, below are the 'Ways To Go Green' results from an InfoWorld research project surveying 472 IT professionals.

[Chart: Data Center Energy Savings]

Data center power growth is a significant and growing challenge. According to Gartner, 50% of current data centers already lack the power and cooling capacity to meet the needs of high density equipment. A survey of 369 IT professionals performed by OnStor reported that 63% of respondents have run out of space, power or cooling without warning.

So what are some of the industry leading technology companies doing?

  • As part of its Greenfield initiative, HP is consolidating 85 global data centers into six, which will leverage low-power/high-capacity blade servers, upgraded power and cooling devices and extensive virtualization.
  • Sun has finished a construction project which delivered a new 76,000 square foot data center in Santa Clara which uses less than half of the electricity of its prior data centers.
  • IBM announced it will consolidate 3,900 servers into 33 virtualized System z mainframes resulting in about 80% power reduction and a $250 million savings in energy, software and system support costs over five years.
  • SAP's Sybase reduced its number of servers by about 45%. The company expects to achieve $1 million to $2 million per year in operational savings and has opened up so much space that it canceled a plan to build out another data center to accommodate growth.
  • Intel announced its plan to consolidate 133 IT facilities into 8 data center regional hubs. According to Brently Davis, manager of Intel's data center efficiency initiative, Intel forecasts cost savings of as much as $1.8 billion over the next seven years.

    Additionally, each of these new data centers will power down idle servers, provision new servers more intelligently and increase storage utilization to the 70% to 90% range.

So what is the U.S. government doing?

The U.S. federal government has commissioned a number of smart people to provide a macro-level data center baseline. According to the Department of Energy (DOE) and Environmental Protection Agency's (EPA) report to Congress, U.S. data centers consumed 61 billion kilowatt hours in 2006 – which equates to 1.5% of all electricity consumed in the country, at a cost of approximately $4.5 billion. The DOE forecasts that data center energy consumption will grow 12% annually through 2011. So if we do nothing, data center power usage will nearly double by 2011; if we instead implement the EPA's recommendations, we could lower overall data center power usage to 2001 levels by 2011 – a savings difference of 90 billion kilowatt hours.
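
The DOE/EPA figures quoted above can be cross-checked with a couple of lines of arithmetic; note that compounding 12% annually for five years gives about a 1.76x multiple, slightly short of a full doubling.

```python
consumption_2006 = 61e9   # kWh, per the DOE/EPA report to Congress
cost_2006 = 4.5e9         # dollars, per the same report
growth = 1.12             # DOE's 12% annual growth forecast

implied_rate = cost_2006 / consumption_2006       # average $ per kWh
projected_2011 = consumption_2006 * growth ** 5   # business-as-usual projection

print(f"implied average rate: ${implied_rate:.3f}/kWh")
print(f"2011 projection: {projected_2011 / 1e9:.0f} billion kWh "
      f"({projected_2011 / consumption_2006:.2f}x 2006)")
```

The implied average rate of about $0.074/kWh is in line with mid-2000s US commercial electricity prices, which is a useful sanity check on the reported numbers.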

Several IT industry leaders and the 92-member Green Grid Alliance (made up mostly of IT professionals) completed a memorandum of understanding with the DOE that solidifies the process to establish metrics for data center measurement. Making the data center the initial focal point was viewed as strategic for two key reasons. First, data centers are extremely large energy consumers, their consumption is rising at an aggressive rate, and any offset to this increase would have a big impact. Second, their operations can be isolated for efficient measurement, modeling and extrapolation, and lessons learned can be shared for broader involvement.

Most importantly, the memorandum of understanding sets an objective of using the research and analysis to improve overall data center energy efficiency by 10% in 2011. The next action items include the DOE publishing more metrics specifications and completing a request-for-comments period with the participating community. Want to get involved and compare your data center metrics with others? The DOE is publishing a web site with the metrics and a suite of tools. Anybody is free to use the tools and encouraged to submit comments on the metrics. I can assure you I will be performing a comparison analysis and suspect I'll submit several comments. For example, I'm a believer that corporate America won't achieve widespread adoption until it makes financial sense. If the feds were to match state enticements or provide even modest additional financial or tax incentives, I suspect it would tip the balance for many companies to act and benefit the economy for all.

So what can you do?

I recommend you begin with executive sponsorship and an inquisitive mind. From there, numerous opportunities are likely to avail themselves. Here are a few examples to stimulate your creative thinking.

  • Perform an internal IT investigation and demonstrate a visible, vocal and vigorous management commitment.
  • Solicit broad participation and encourage staff to identify and suggest practices that save energy.
  • Create a plan backed with measurable goals for energy savings, power efficiency and carbon reduction.
  • Update your purchase policies and practices to seek out and favor energy efficient vendors and systems.
  • Have IT management perform a baseline power definition; be sure to measure power consumption for IT equipment and the data center infrastructure.
  • If you don't already, begin routing the electric and utility bills to IT management and coordinate financial incentives among IT management, facilities management and possibly your CFO for future cost savings.
  • Schedule periodic power consumption audits to verify monthly reporting and trend analysis.
  • Create a program to recycle obsolete or discarded equipment and recycle technology consumables (e.g. paper and printer cartridges).
  • Evaluate videoconferencing as a substitute for travel.
  • Consider telecommuting where it makes sense.
  • Consider replacing servers more than three years old with newer energy-efficient models. Also review CPU performance-stepping technology, which dynamically adjusts processor energy consumption based on load.
  • Minimize storage equipment by using SANs and other networked storage hardware which consolidate storage space and save power.
  • Push for multi-threaded software applications which take advantage of multicore-processor machines.
  • Participate in an industry group in order to share knowledge and collaborate with other like minded professionals.

Have other suggestions? I'd like to hear them.

Filed in: Sustainability
Tags: Green Data Centers
Author: Chuck Schaeffer
Comments (4)

Jerry Vanders
  I've heard that data center air temperature can be raised without equipment compromise. However, I've detected exhaust-area hot spots of over 100 degrees F. Any comment on how to remove hot spots without restructuring? Seems like a simple question but I've asked a few times and can't get an answer.
  Chuck Schaeffer
    I'm assuming you are using a hot and cold aisle approach. The return air in this configuration often hits highs of 100 to 115 degrees F, so your hot spots may not be an issue at all. Verify that your supply air is cooled to around 65 to 70 degrees F and that equipment ambient temperature remains in your target range. If these items check out, I wouldn't be too concerned about your return air hot spots.

Dennis Isaacs
  So what do you think data centers will look like in five years? Have you heard of container data centers?
  Chuck Schaeffer
    While I don't have a crystal ball, I suspect data center advancements will be more evolutionary than revolutionary. I believe the current generation of blade servers will become much more efficient and that changes to power at the plug will have a big impact on all electrical equipment. I also think that data centers in new rural locations may have a profound and positive impact on the national power grid. Intel publicized the material savings achieved from its data centers in Oregon and New Mexico; these states have repealed sales tax on capital purchases and offer some of the lowest electric rates in the country (up to 30% lower than states such as California). Google has built its newest data centers in remote areas such as Oregon and Iowa, and I suspect other major electrical consumers will follow the lead. I have read a few articles and studies on containers, and while I like creative ideas I'm not sure this one makes long-term economic sense. For the short term, placing high density blade and rack servers into containers similar to those on trucks and ships can probably achieve the claimed 40% to 50% capital expenditure savings relative to brick and mortar facilities – and provide flexibility for disaster recovery. However, I'm less clear on the alleged cooling savings from the contained environment, and I suspect the savings could be negated by the necessary security, maintenance and environmental requirements of a container compound.



A Passion for Sustainability
  • IDG Computerworld twice named Chuck's company one of the top Green IT Organizations based on his data center Global Reach for Energy and Efficiency Next Generation (GREEN) program, which reduced carbon emissions through several sustained efforts including cooling efficiency, re-engineered power distribution, server virtualization and new footprint schemas for improved equipment density.
  • Chuck received a 2009 American Business Award for his company's Green Approach to Software Product Development.
  • Based in part on his demonstration of how sustainability objectives can spur global economic benefits, Chuck was appointed an Examiner for the 2007 Malcolm Baldrige National Quality Award.


