Cool Solutions

Save The Planet: Virtualize


October 12, 2010 12:00 am





A couple of years ago, I was researching an article for EContent Magazine on green publishing. I assumed that if you moved content online, it would naturally be more environmentally friendly than paper products, which are made from trees and moved around by trucks, with a correspondingly large environmental footprint. I was wrong.

As I spoke to sources, I came to realize that there was an environmental price to pay for online content too, and just how high that price was depended on factors such as how efficiently the data center hosting the content ran and what type of power source it was closest to.

If the data center was near a coal-burning power plant, for instance, its environmental impact was larger than that of one near a cleaner energy source. But that was only part of the story.

There are many factors inside the data center that can affect its environmental impact. If the center itself has been built efficiently, it can significantly reduce its impact on the environment. How much?

According to Rahul Singh, principal at Pace Harmon, as quoted in this article, using virtualization can increase efficiency because instead of dedicating a server to a single application, you run many applications on shared hardware. This results in far less server idle time. Singh said:

The increased utilization can significantly reduce the power, cooling, network infrastructure, storage infrastructure and real estate requirements — resulting in significant decreases in energy consumption (50 to 70 percent) and the carbon footprint of enterprise data centers.
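To see where numbers like Singh's 50 to 70 percent come from, here is a back-of-the-envelope sketch. The figures (idle and peak wattage, utilization levels, fleet sizes) are my own illustrative assumptions, not measurements from the article; the only premise taken from it is that consolidation raises utilization per box.

```python
# Illustrative consolidation math. All numbers below are assumptions
# for the sketch, not data from the article or from Pace Harmon.

def fleet_power(n_servers: int, idle_w: float, peak_w: float, util: float) -> float:
    """Total draw in watts for a fleet, assuming each server's power
    scales linearly from idle to peak with CPU utilization."""
    return n_servers * (idle_w + (peak_w - idle_w) * util)

# Before: 10 dedicated servers, one application each, ~10% busy.
before = fleet_power(10, idle_w=120, peak_w=200, util=0.10)

# After: the same work consolidated onto 2 virtualized hosts at ~50% each.
after = fleet_power(2, idle_w=120, peak_w=200, util=0.50)

savings = 1 - after / before
print(f"before={before:.0f} W, after={after:.0f} W, savings={savings:.0%}")
# → before=1280 W, after=320 W, savings=75%
```

With these made-up numbers the saving lands at 75 percent, in the same ballpark as the 50 to 70 percent range quoted above; the key driver is that an idle server still burns most of its peak power, so fewer, busier boxes win.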

That could explain why Google, owner of vast server farms and always looking for an edge, recently joined forces with Good Energies, a Park Avenue energy investment firm, to build a 350-mile underwater spine, which, according to a Boston Globe article, “could remove some critical obstacles to wind power development.”

Of course, it’s in Google’s business interests to come up with more efficient ways to run its enormous data centers, and as I stated earlier, while it can control how efficiently it runs the inside of the building, it can’t always control how clean the closest power source is.

I’ve heard people suggest that Google was getting involved in the wind turbine plan as a publicity stunt, but I tend to believe it’s far more practical than that. As a company that uses colossal amounts of power, and pays the corresponding bills, anything that lowers those costs will increase its profits, especially when you consider that Google provides the majority of its services for free.

It just makes good business sense. And what’s good for Google could be good for you too. If you can use virtualization and the cloud to generate greater efficiencies in your data centers and reduce power consumption, you’ll be helping the planet, for which you’ll probably collect feel-good and PR points, but you’ll also be saving money, and from a business perspective that should please your shareholders and other investors.

Whatever your reasons or motivation, virtualizing can help you run a leaner, cleaner data center, and that’s a goal we should all be able to get behind.




Disclaimer: This content is not supported by Micro Focus. It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test it thoroughly before using it in a production environment.

1 Comment

  1. By: FlyingGuy

    Hey Ron,

    So in keeping with the subject line here I gotta say that confusing the two is not good.

    OK, so for the sake of argument, application X runs on a machine at 95% to 100% utilization. I think we could both agree that the application efficiently uses the hardware, yes?

    Now let’s say that hardware is a modern single-processor, dual-core machine, and when running at the load we have assumed (don’t EVEN go there) this machine draws 100 watts of power, and assume that when running at those utilization numbers it satisfies all of its consumers.

    Now, having said that, we cannot possibly throw a VM under whatever OS this machine is running, because it will not have the power to sustain the VM PLUS the application and still satisfy all its consumers.

    As we both know, things never scale linearly; that’s just the way life is. So to virtualize this application we need to boost the processing power of this hardware by X so that it can support both the VM AND the application. But if we do that, we will want to add other services to the hardware.

    So the big problem is: how does one keep the hardware running at maximum throughput with n applications running in VMs without introducing a huge amount of slack and therefore waste?

    How do we continuously shuffle “workloads” (I hate marketing speak) around the hardware so we can keep everything running as close to 100% utilization as possible?

    I think this will turn into a pretty big cluster *&#$ when you deliberately try to reduce your carbon footprint to the greatest extent possible. You see it all the time in the form of messages from an ISP like “We are moving your account from server X to server Y,” and boy, let me tell you, that is always a huge PITA, because it is rare that it works the first time out of the gate.

    I think Google is a special case in this regard, because they designed from the get-go so that processing comes from very small, very purpose-built machines. It will be interesting to see how this model holds up as folks continue to push things onto external servers. This will invariably lead to specialized servers that take those efficiencies away, as server X becomes dedicated to a certain application and cannot be VM’d, since the application it is running has peak demand that only THAT server can satisfy.
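The scheduling question the comment raises (keeping n applications running close to full utilization without introducing slack) is essentially a bin-packing problem, which is NP-hard in general. A minimal sketch using a first-fit-decreasing heuristic and made-up integer CPU-share numbers follows; this is not how any real hypervisor scheduler works, just the shape of the problem.

```python
# Sketch of packing n workloads onto as few hosts as possible while
# keeping each host under a utilization ceiling. First-fit decreasing
# is a classic greedy heuristic; demands are hypothetical CPU shares
# out of 100, not data from any real system.

def pack_workloads(demands, capacity=90):
    """Greedy first-fit-decreasing: returns a list of hosts, each a
    list of workload demands whose sum is at most `capacity`."""
    hosts = []
    for d in sorted(demands, reverse=True):
        for host in hosts:
            if sum(host) + d <= capacity:
                host.append(d)
                break
        else:
            hosts.append([d])  # no existing host has room; start a new one
    return hosts

# Twelve workloads that would otherwise each get a dedicated server.
demands = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60]
hosts = pack_workloads(demands, capacity=90)
print(f"{len(demands)} workloads packed onto {len(hosts)} hosts")
# → 12 workloads packed onto 5 hosts
```

The catch FlyingGuy points at is real: demands are not static numbers, so a packing that is tight on average leaves no headroom for peaks, and re-packing means exactly the kind of "we are moving your account" migrations the comment complains about.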