
Server Rack Strategies for Energy Efficient Data Centers

Many data centers historically didn't put much thought into their deployment of server racks beyond basic functionality, airflow, and the upfront cost of the rack itself. These days, the widespread adoption of high-density applications is causing major hot-spot and capacity problems. These factors, along with the high cost of energy, make it essential to understand how closely your server rack deployment is tied to your overall server room and data center efficiency strategy.

IDC estimates that for every $1.00 spent on new data center hardware, an additional $0.50 is spent on power and cooling, more than double the amount of five years ago. Keeping operating costs down is a key concern of every CTO and data center manager, and because cooling is the single biggest energy cost in running a data center, a sound server rack strategy is critical to overall data center energy consumption and operating costs. Over the coming years most medium to large organizations will be adopting virtualization and higher-density servers, and as energy costs continue to rise and data centers grow and expand, companies will look to their facilities and data center managers for sound strategies on how best to address those rising costs.
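
To put numbers like these in context, the sketch below estimates an annual facility energy bill from an assumed IT load, PUE, and electricity price; every input figure is an illustrative assumption rather than a number from this article.

    # Rough annual energy-cost estimate for a server room.
    # All inputs are illustrative assumptions for the sake of the example.
    it_load_kw = 100.0        # total IT (server, storage, network) load in kW
    pue = 2.0                 # power usage effectiveness = total facility power / IT power
    hours_per_year = 8760
    price_per_kwh = 0.12      # electricity price in USD/kWh

    total_kwh = it_load_kw * pue * hours_per_year
    annual_bill = total_kwh * price_per_kwh
    infrastructure_share = annual_bill * (1 - 1 / pue)   # power and cooling overhead

    print(f"Facility energy use:        {total_kwh:,.0f} kWh/year")
    print(f"Annual energy bill:         ${annual_bill:,.0f}")
    print(f"Spent on power and cooling: ${infrastructure_share:,.0f}")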

In a recent survey conducted by the Uptime Institute, 39% of enterprise data center managers said they expected their data centers to run out of cooling capacity within the next 12-24 months, and another 21% expected to run out within 12-60 months. The power required to cool IT equipment in a data center far exceeds the power required to run that equipment. Overall power in the data center is fast approaching capacity as well, and an obvious response is to implement cooling best practices wherever possible and to use in-row cooling to address hot spots. In the same Uptime survey, 42% of data center managers expected to run out of power capacity within 12-24 months and another 23% within 24-60 months. Greater attention to energy efficiency and consumption is critical.

The server enclosure is a staple of the data center. Despite many different monikers and manufacturers, the makeup of a server enclosure remains consistent: folded and welded steel, configured to secure servers, switches, and connectivity, the lifeblood of any on-demand organization. The enclosure comes in a variety of dimensions (height x width x depth) and is often customizable to a user's individual needs, with provisions for cable management and PDU installation.

At the simplest level, the server enclosure is a well-engineered box. Yet that description doesn't do it justice. Perhaps no accessory in the data center is more numerous or, given the equipment it holds and the data it safeguards, more important. Though it consumes no electricity and contains no moving parts, the enclosure's orientation has a significant impact on a data center's ability to become an energy-efficient enterprise.

Hot Aisle / Cold Aisle Server Rack Configuration

 
Hot Aisle/Cold Aisle
  • Conceived by Robert Sullivan of the Uptime Institute, hot aisle/cold aisle is an accepted best practice for cabinet layout within a data center. The design uses air conditioners, fans, and raised floors as a cooling infrastructure and focuses on separation of the inlet cold air and the exhaust hot air.

    In this scheme, the cabinets are adjoined into a series of rows, resting on a raised floor. The front of each row becomes a cold aisle, due to the front-to-back heat dissipation of most IT equipment. Air conditioners, positioned around the perimeter of the room, push cold air under the raised floor and through the cold aisle, where it's ingested by the servers. As the air moves through the servers, it's heated and eventually dissipated into the hot aisle. The exhaust air is then routed back to the air handlers.
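
    The front-to-back airflow in this layout comes down to a simple heat balance: the temperature rise across a server equals its heat load divided by the mass flow of air times the specific heat of air. Below is a minimal Python sketch; the rack power and airflow figures are illustrative assumptions, not values from the article.

        # Temperature rise of air passing front-to-back through a rack:
        #   delta_T = heat_load / (air_density * specific_heat * airflow)
        # All input values below are illustrative assumptions.
        air_density = 1.2        # kg/m^3, roughly room-temperature air
        specific_heat = 1005.0   # J/(kg*K) for air
        heat_load_w = 5000.0     # a 5 kW rack
        airflow_m3_s = 0.4       # total server airflow through the rack

        delta_t = heat_load_w / (air_density * specific_heat * airflow_m3_s)
        print(f"Exhaust air is about {delta_t:.1f} K warmer than the cold-aisle supply")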

    Early versions of server enclosures, often with "smoked" or glass front doors, became obsolete with the adoption of hot aisle/cold aisle; perforated doors are necessary for the approach to work. For this reason, perforated doors remain the standard for most off-the-shelf server enclosures, though there is often debate about the amount of perforated area needed for effective cooling (server manufacturers like HP use 65% perforation on their server cabinets, while other cabinet manufacturers tout doors with more than 80% perforation).
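
    What the perforation percentage actually changes is the free area available to incoming air: for a given server airflow, a smaller open area means a higher face velocity and a larger pressure drop (which grows roughly with the square of velocity). A minimal sketch, assuming hypothetical door dimensions and airflow:

        # Compare the free (open) area and face velocity of two perforated doors.
        # Door dimensions and airflow are illustrative assumptions.
        door_width_m = 0.6
        door_height_m = 2.0
        airflow_m3_s = 1.5                      # total airflow drawn through the door

        for perforation in (0.65, 0.80):        # 65% vs. 80% open area
            open_area = door_width_m * door_height_m * perforation
            velocity = airflow_m3_s / open_area # average velocity through the holes
            print(f"{perforation:.0%} perforation: {open_area:.2f} m^2 open, "
                  f"{velocity:.2f} m/s face velocity")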

    While doors are important, the rest of the enclosure plays an important role in maintaining airflow. Rack accessories must not impede air ingress or egress. Blanking panels are important as are side "air dams" or baffle plates, for they prevent any exhaust air from returning to the equipment intake (Blanking panels install in unused rackmount space while air dams install vertically outside the front EIA rails). These extra pieces must coexist with any cable management scheme or any supplemental rack accessories the user deems necessary.

    Server Rack with Plexiglas Door vs. Perforated Door

    The planning doesn't stop at the accessory level. Hot aisle/cold aisle forces the data center staff to be especially detailed with spacing, sizing each aisle to ensure optimal cooling and heat dissipation. To maintain spacing, end users must establish a consistent cabinet footprint, paying particular attention to enclosure depth. Legacy server enclosures were often shallow, ranging from 32" to 36" in depth. As both the equipment and the requirements grew, so did the server enclosure. A 42" depth has become common in today's data center, with many manufacturers also offering 48" deep versions. While accommodating deeper servers, this extra space also allows for the previously mentioned cable management products, rack accessories, and rack-based PDUs.
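
    One way to keep that spacing consistent is to work out the row pitch (the repeat distance of a cabinet row plus its share of the adjacent aisles) and the floor area attributable to each cabinet. The aisle widths and cabinet dimensions below are illustrative assumptions:

        # Floor area attributable to one cabinet in a hot-aisle/cold-aisle layout.
        # All dimensions (in inches) are illustrative assumptions.
        cabinet_depth = 42.0
        cabinet_width = 24.0
        cold_aisle = 48.0      # shared by the two rows that face it
        hot_aisle = 36.0       # shared by the two rows that back onto it

        # Repeat distance of one row plus its share of both aisles.
        row_pitch = cabinet_depth + cold_aisle / 2 + hot_aisle / 2
        area_sq_ft = row_pitch * cabinet_width / 144.0   # 144 sq in per sq ft
        print(f"Row pitch: {row_pitch:.0f} in, floor area per cabinet: {area_sq_ft:.1f} sq ft")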

    Though hot aisle/cold aisle is deployed in data centers around the world, the design is not foolproof. An Uptime Institute whitepaper, written in 2002 on the design, regards heat loads of 50 watts/sq ft as significant (a far cry from the 1500 watts/sq ft touted by SuperNAP in August of 2008). Medium to high density installations have proven difficult for this layout, because it often lacks precise air delivery. Even with provisions at the enclosure (blanking panels, air dams), bypass air is common as is hot air recirculation. Thus, more cold air is thrown at the servers to offset the mixing of the air paths, requiring excess energy at the fan and chiller levels.
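
    Those heat-load figures are easier to compare when expressed per rack. Using roughly 14 sq ft of floor per cabinet (the illustrative figure from the layout sketch above), 50 watts/sq ft works out to well under 1 kW per rack, while even modest modern loads far exceed it:

        # Convert per-rack power into watts per square foot of data center floor.
        # The per-cabinet floor area is the illustrative figure computed above.
        area_per_rack_sq_ft = 14.0

        for rack_kw in (0.7, 5.0, 10.0):
            density = rack_kw * 1000.0 / area_per_rack_sq_ft
            print(f"{rack_kw:4.1f} kW per rack ~= {density:6.0f} watts/sq ft")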

    Despite this limitation of hot aisle/cold aisle, its premise of separation is widely accepted. Some cabinet manufacturers are taking this premise further, making the goal of complete air separation (or containment, if you will) a reality for the data center space.

    Cold Aisle Containment Diagram

     

    Cold Aisle Containment (CAC)
    Cold Aisle Containment augments the hot aisle/cold aisle arrangement by enclosing the cold aisle. The aisle then becomes a room unto itself, sealed with barriers made of metal, plastic, or Plexiglas. These barriers prevent hot exhaust air from re-circulating, while ensuring the cold air stays where it's needed at the server intake. With mixing out of the equation, users can afford to set thermostats higher and return very warm exhaust air to the air handlers.
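
    The payoff of returning very warm air is direct: at a fixed airflow, the sensible heat an air handler can remove scales with the difference between return and supply temperature. A minimal sketch, with assumed temperatures and airflow:

        # Sensible cooling capacity of an air handler at fixed airflow:
        #   capacity = air_density * specific_heat * airflow * (return_T - supply_T)
        # Temperatures and airflow are illustrative assumptions.
        air_density = 1.2        # kg/m^3
        specific_heat = 1005.0   # J/(kg*K)
        airflow_m3_s = 5.0       # airflow through one air handler
        supply_t = 18.0          # degrees C

        for return_t in (24.0, 35.0):   # mixed room return vs. contained hot exhaust
            capacity_kw = air_density * specific_heat * airflow_m3_s * (return_t - supply_t) / 1000.0
            print(f"Return air at {return_t:.0f} C: about {capacity_kw:.0f} kW of sensible cooling")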

    With the prevalence of hot aisle/cold aisle in existing facilities, Cold Aisle Containment seeks to leverage that arrangement. The cabinets, already loaded with equipment and likely secured to the floor, should not need to be moved or altered. The bulk of the activity centers on the piece parts that create the containment; they must be sized accordingly and attached firmly to the ends of the rows and the tops of the enclosures.

    Over the last year, Cold Aisle Containment has gone from sparsely known to widely marketed by a number of cabinet manufacturers. Its emergence was highlighted by the recent Silicon Valley Leadership Group Data Center Energy Summit, which studied CAC among a number of air management strategies. While results are always relative to individual data center setups, many parties view cold aisle containment as a viable, efficient high-density solution. As the green movement becomes more pervasive, the dialogue among users, manufacturers, and scholars is certain to continue.


    Hot Aisle Containment Diagram

     

    Hot Aisle Containment (HAC)
    Hot Aisle Containment takes the opposite approach. The same barriers now encase the hot aisle in the hopes of returning the warmest possible air to the air conditioners, which have changed in form and location. These air handlers, more compact, are now embedded within the actual row of enclosures. From this location, the air conditioner captures exhaust air, cools it, and returns it to the cold aisle, where the process recurs. The efficiency, in this case, is related to distance. Neither the exhaust air nor the cold air has far to travel.

    The complexity of deployment depends on the facility. The actual server enclosure shouldn't change with regard to form and function. The HAC design, however, is predicated on the use of a row-based air conditioner, itself a newer product and not as established as perimeter CRAC units. A new facility can plan this configuration from the outset. An existing facility with hot aisle/cold aisle in place would have to create ducting systems into a false ceiling or rework entire rows to incorporate the air conditioner. That work may involve the running of new chilled water pipes to the row location.

    Like its counterpart, Hot Aisle Containment has attracted interest from many organizations and parties, for the layout recognizes that complete air separation "improves the predictability and efficiency of data center cooling systems" (Niemann). Though it stopped short of endorsing either approach, the Silicon Valley Leadership Group envisioned a 75% reduction in fan energy, capacity gains at the CRAC level, and reduced energy consumption at the chiller level through containment.
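
    That projected fan-energy figure follows from the fan affinity laws: fan power varies roughly with the cube of fan speed (and airflow), so containment that allows fans to slow down produces disproportionate savings. A quick sketch, with the speed fractions chosen purely for illustration:

        # Fan affinity law: power scales roughly with the cube of fan speed/airflow.
        # The speed fractions below are chosen purely for illustration.
        for speed_fraction in (1.0, 0.8, 0.63, 0.5):
            power_fraction = speed_fraction ** 3
            savings = 1.0 - power_fraction
            print(f"Fans at {speed_fraction:.0%} speed -> {power_fraction:.0%} of full fan power "
                  f"({savings:.0%} saved)")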

    Close-Coupled Cooling Solution

    Close-Coupled Cooling
    With a close-coupled cabinet orientation, containment has fully evolved: both the hot aisle and the cold aisle sit within the same cabinet footprint. The equipment cabinet is adjoined to a row-based, chilled-water air conditioner, which, like the hot aisle containment design, achieves efficiency through proximity. The difference: close-coupled cooling all but removes the room from the equation. The server enclosure and the air conditioner work exclusively with one another, and no air, hot or cold, is introduced into the space. The combination of proximity and isolation allows for extremely dense installations of ~35 kW per rack.
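
    A rough sense of what a ~35 kW rack asks of the chilled-water loop comes from the water-side heat balance: flow equals heat load divided by the specific heat of water times the water temperature rise. The temperature rise below is an illustrative assumption:

        # Chilled-water flow required to absorb one rack's heat load:
        #   mass_flow = heat_load / (specific_heat_water * delta_T)
        # The water-side temperature rise is an illustrative assumption.
        heat_load_w = 35000.0        # ~35 kW rack
        specific_heat_water = 4186.0 # J/(kg*K)
        delta_t = 6.0                # water temperature rise across the coil, K

        mass_flow = heat_load_w / (specific_heat_water * delta_t)   # kg/s, roughly litres/s
        gpm = mass_flow * 15.85                                     # litres/s to US gallons/min
        print(f"About {mass_flow:.1f} L/s ({gpm:.0f} GPM) of chilled water per rack")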

    The enclosure design is a throwback to legacy models. Glass front doors have returned, as have solid rear doors to seal the rack from the room. The cabinets, however, have adopted a deeper footprint (consider the 51" deep HP Modular Cooling Solution G2) to provide ample space for cold air delivery and hot air return. Blanking panels and side baffles are still present, maintaining the air separation.

    Like Hot Aisle Containment, close-coupled cooling can be planned in advance for greenfield data centers, ensuring that chilled water piping and the necessary infrastructure are in place. Existing data centers with spare mechanical capacity could use the product for a thermally neutral expansion, deploying new high-density blades, for instance, in an isolated cluster. This new equipment would not strain the existing cooling plant.

    For an existing space without the infrastructure or space to expand, there's little recourse for using the already populated enclosures with close-coupled cooling.

    For those with the infrastructure and space, the efficiency gains from close-coupled cooling are compelling, most notably at the mechanical chiller plant and in the potential for free cooling with water-side economizers.

    Conclusion
    The server enclosure, once an afterthought in data center planning, has become a pertinent talking point, for no cooling strategy can exist without it. Many data center authorities stress that new cooling approaches are essential to achieving energy efficiency. The process begins, according to The Green Grid, with airflow management: an understanding of how air gets around, into, and through the server enclosure. That management is attainable through any of the cabinet configurations discussed herein.

     
    How much can your organization save by having a more energy efficient Data Center?

    As much as 50% of a data center's energy bill goes to infrastructure (power and cooling equipment). Try our interactive data center efficiency calculator to see how reducing PUE translates into significant energy and cost savings!

    Data Center Energy Savings Calculator
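
    For a back-of-the-envelope version of such a calculator, the savings from a PUE improvement at a constant IT load can be estimated directly. The load, electricity price, and PUE values below are illustrative assumptions, not figures from any particular calculator:

        # Estimate annual savings from improving PUE at a constant IT load.
        # All inputs are illustrative assumptions.
        it_load_kw = 500.0
        hours_per_year = 8760
        price_per_kwh = 0.12       # USD
        pue_before = 2.0
        pue_after = 1.6

        kwh_saved = it_load_kw * (pue_before - pue_after) * hours_per_year
        dollars_saved = kwh_saved * price_per_kwh
        print(f"Energy saved: {kwh_saved:,.0f} kWh/year (${dollars_saved:,.0f}/year)")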


