Over the years, there has been a rapid increase in large, stand-alone data centers housing computer systems, hosting cloud computing servers, and supporting telecommunications equipment. These facilities are critical to the global IT operations of nearly every company.
For IT equipment manufacturers, increased computing power and improved computing efficiency are critical. With the proliferation of data centers housing large numbers of servers, these facilities have become major consumers of power. All stakeholders, including equipment manufacturers, data center designers, and operators, have been working to reduce the power consumed by the non-IT portion of the overall load, and a major contributor is the cooling infrastructure that supports the IT equipment.
Too much or too little humidity can make people uncomfortable, and computer hardware tolerates these extremes no better than we do. Too much humidity creates condensation, and too little creates static electricity; both conditions can cause significant damage to computers and other equipment in the data center.
Therefore, ideal environmental conditions must be maintained and controlled. Humidity and temperature must be accurately measured using temperature and humidity transmitters to improve energy efficiency while reducing data center energy costs. ASHRAE's Thermal Guidelines for Data Processing Environments gives the industry a framework to follow and helps it better understand the impact of cooling on information technology equipment.
Why Do I Need to Measure Temperature and Humidity?
1. Maintaining data center temperature and humidity levels can reduce unplanned downtime triggered by environmental conditions and can save companies thousands or even millions of dollars each year. A previous Green Grid white paper ("Updated Airside Natural Cooling Map: Impact of ASHRAE 2011 Allowable Ranges") discusses the latest ASHRAE recommended and allowable ranges in the context of natural cooling.
2. The humidity ratio in the data center should be kept between roughly 0.006 and 0.011 kg of moisture per kg of dry air (6 g/kg to 11 g/kg).
3. Controlling temperature within 20°C to 24°C is the best choice for ensuring system reliability. This range provides a safety buffer for equipment operation if air conditioning or HVAC equipment fails, while making it easier to maintain a safe relative humidity level. In general, IT equipment should not be operated in data centers where the ambient temperature exceeds 30°C, and ambient relative humidity should be maintained between 45% and 55%.
In addition, a real-time temperature and humidity monitoring system is needed so that data center operations and maintenance managers are alerted to abnormal changes in temperature and humidity levels.
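The threshold checks above can be sketched as a simple alert function. This is a minimal illustration using the recommended bands from this article (20°C to 24°C, 45% to 55% RH); the function and variable names are illustrative, not part of any real monitoring product's API.

```python
# Illustrative threshold-based alert check for data center monitoring.
# Bands taken from the recommended ranges discussed in this article;
# names are assumptions, not a real monitoring API.

RECOMMENDED = {
    "temp_c": (20.0, 24.0),   # recommended temperature band, degrees C
    "rh_pct": (45.0, 55.0),   # recommended relative humidity band, percent
}

def check_reading(temp_c: float, rh_pct: float) -> list[str]:
    """Return alert messages for any reading outside the recommended bands."""
    alerts = []
    lo, hi = RECOMMENDED["temp_c"]
    if not (lo <= temp_c <= hi):
        alerts.append(f"temperature {temp_c:.1f} C outside {lo}-{hi} C")
    lo, hi = RECOMMENDED["rh_pct"]
    if not (lo <= rh_pct <= hi):
        alerts.append(f"relative humidity {rh_pct:.1f}% outside {lo}-{hi}%")
    return alerts

print(check_reading(22.0, 50.0))  # [] -- both readings in range
print(check_reading(31.5, 40.0))  # two alerts: over-temperature and low humidity
```

In practice such a check would run periodically against live sensor readings and feed a notification channel, so that out-of-range conditions reach operations staff before equipment is affected.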
The Importance of Cabinet-level Temperature Monitoring
A "hot spot" in news coverage means an important event; a "hot spot" inside a data center rack means a potential risk. Rack-based temperature monitoring uses temperature sensors in server racks so that conditions can be adjusted, manually or automatically, to maintain optimal levels. If you don't have rack-based temperature monitoring in your data center, here are a few reasons to consider it.
1. Out-of-Range Temperatures Can Damage Equipment
Computer systems and servers are designed to work best within a specific temperature range, typically no higher than 24 degrees Celsius. At the same time, the equipment itself releases heat, so if the temperature around it is not actively controlled and maintained, it can damage itself. High temperatures create a risk of equipment failure and thermal shutdown, which can in turn lead to unexpected downtime.
2. The Cost of Downtime is Expensive
Uncontrolled temperature is the second most common environmental factor contributing to unplanned data center downtime. Between 2010 and 2016, a roughly six-year period, data center downtime costs soared 38 percent, and the trend is likely to continue in the coming years. With the average outage lasting about 90 minutes, every minute of downtime adds significantly to costs, including lost employee productivity at the data center's customer companies. One of the main reasons downtime costs are so high is that more and more enterprises now run their business entirely in the cloud: one minute of downtime at a company with 100 employees represents 100 person-minutes of lost work. In addition, with the COVID-19 pandemic making remote work the norm, downtime can have a huge impact on productivity and revenue.
3. Air Conditioning is Not Enough
Of course, your data center is equipped with HVAC systems, heat exhaust, and other cooling elements. While these air conditioning systems do maintain optimal ambient temperatures, they cannot detect or correct thermal problems that occur within the confines of the server racks. By the time the heat released by the equipment is high enough to change the overall ambient temperature, it may be too late.
Since temperatures vary from rack to rack within the same data center, rack-level temperature monitoring is the most effective way to prevent the risk of overheating damage to IT equipment. The effective collaboration of intelligent PDUs and temperature and humidity sensors within the racks will bring continuous value to the high availability of the data center infrastructure.
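As a rough sanity check on the humidity guideline mentioned earlier, a rack sensor's temperature and relative humidity readings can be converted to a humidity ratio using the Magnus approximation for saturation vapor pressure. This is a sketch only: the function name is my own, and standard atmospheric pressure is assumed.

```python
import math

def humidity_ratio(temp_c: float, rh_pct: float,
                   pressure_hpa: float = 1013.25) -> float:
    """Approximate humidity ratio (kg water per kg dry air) from
    temperature and relative humidity. Illustrative sketch, not
    sensor firmware; assumes standard atmospheric pressure."""
    # Magnus approximation for saturation vapor pressure over water (hPa)
    es = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e = (rh_pct / 100.0) * es              # actual vapor pressure (hPa)
    return 0.622 * e / (pressure_hpa - e)  # humidity ratio, kg/kg

w = humidity_ratio(22.0, 50.0)
print(f"{w * 1000:.1f} g/kg")  # 8.2 g/kg, inside the 6-11 g/kg band
```

A reading at 22°C and 50% RH, near the middle of the recommended envelope, lands comfortably inside the 0.006 to 0.011 kg/kg humidity-ratio band, which is the kind of consistency check a monitoring system can apply across racks.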
Hengko's temperature and humidity transmitters can help you monitor and control temperature and humidity changes in your lab or data center.
You can also email us directly at ka@hengko.com.
We will reply within 24 hours. Thank you for your patience!
Post time: Sep-26-2022