In-Rack Cooling is a great technology for cooling data center server racks energy-efficiently. But it’s a bit like relish at a hot dog stand: not everyone has it. Why is that?

That’s because – as Don Harris, Director, Mission Critical puts it – it’s not exactly a ‘known quantity’. No, that would be ketchup. The trick is convincing more industry stakeholders that you can, and probably should, have both ketchup and relish. Encouraging the adoption of emerging technologies is the best weapon we have at present in the battle for speedily delivered, sustainable data centers.

We spoke to Don about the upcoming tech that excites him the most, as well as the logistical realities of delivering data centers in today’s supply-constrained market. 

Which emerging technologies for data centers excite you the most and why?

In-Rack Cooling. The average power draw per server rack at a data center today is between 8 and 12 kilowatts (kW). That’s a pretty good density compared to when I first started building centers 20 years ago, when we aimed for 4 or 5 kW. I was with a client at a data center recently where they achieved 50 kW per rack, and that was all because of In-Rack Cooling. That power density allowed them to have a smaller footprint, a smaller data hall, and a smaller data center overall.

There are other exciting prospects, of course, like hydrogen power generation, complemented by solar, wind, and so on. While those are all exciting from an environmental perspective, the best way we can be kinder to the environment is to use less energy at these data centers and do more with less. If we can quadruple our power density with In-Rack liquid cooling, we can support almost quadruple the IT load from the same facility and supporting infrastructure.
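To make the arithmetic behind those two points concrete, here is a minimal back-of-envelope sketch. The ~10 kW and 50 kW rack densities are Don’s figures from above; the 2 MW total IT load and the white-space area per rack are assumptions chosen purely for illustration.

```python
# Back-of-envelope comparison: conventional air cooling vs. In-Rack Cooling.
# The ~10 kW and 50 kW rack densities come from the interview; the 2 MW
# total IT load and ~2.5 m^2 of white space per rack (rack plus aisle
# share) are assumed values for illustration only.

IT_LOAD_KW = 2_000        # assumed total IT load (2 MW)
AREA_PER_RACK_M2 = 2.5    # assumed white space per rack, incl. aisle share

for label, kw_per_rack in [("air-cooled, ~10 kW/rack", 10),
                           ("In-Rack Cooling, 50 kW/rack", 50)]:
    racks = IT_LOAD_KW / kw_per_rack
    area_m2 = racks * AREA_PER_RACK_M2
    print(f"{label}: {racks:.0f} racks over ~{area_m2:.0f} m^2")

# Output:
#   air-cooled, ~10 kW/rack: 200 racks over ~500 m^2
#   In-Rack Cooling, 50 kW/rack: 40 racks over ~100 m^2
# Same IT load, one fifth the racks and white space -- or, flipped around,
# the same building shell could host roughly five times the IT load.
```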

What does In-Rack Cooling mean for the costs of developing a data center?

In terms of capital expenditure and the upfront costs of a data center, In-Rack Cooling is not necessarily more expensive than conventional heating, ventilation, and air conditioning (HVAC).

With In-Rack Cooling, you’re cooling and conditioning air precisely, in close proximity to the servers, and in doing so you’re eliminating a significant amount of floor space. From a design standpoint, this change is easy to accommodate because data centers are architecturally benign. There’s nothing special about them; they’re essentially a big concrete box full of mechanical and electrical equipment, which comprises roughly 65% of the cost of a data center in most cases (as opposed to 35% for a standard commercial building). Less floor space means lower construction costs, less mechanical and electrical equipment to purchase, and so on. That’s where the cost savings lie.
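As a hedged sketch of how that cost structure plays out: the 65%/35% equipment-versus-shell split is Don’s figure; the total project cost, the footprint reduction, and the assumption that shell cost scales with floor area are illustrative guesses, not data.

```python
# Illustrative capex model for a data center build.
# The 65% mechanical/electrical share is from the interview; the total
# project cost, the 40% footprint reduction, and the assumption that the
# building-shell cost scales linearly with floor area are all assumed.

TOTAL_CAPEX = 100_000_000    # assumed project cost, USD
MEP_SHARE = 0.65             # mechanical & electrical equipment (per interview)
SHELL_SHARE = 1 - MEP_SHARE  # the "big concrete box" and architectural works

FOOTPRINT_REDUCTION = 0.40   # assumed white-space reduction from In-Rack Cooling

shell_cost = TOTAL_CAPEX * SHELL_SHARE
shell_saving = shell_cost * FOOTPRINT_REDUCTION  # shell scales with area (assumed)

print(f"Shell cost:   ${shell_cost:,.0f}")
print(f"Shell saving: ${shell_saving:,.0f} "
      f"(~{shell_saving / TOTAL_CAPEX:.0%} of total capex)")

# Shell cost:   $35,000,000
# Shell saving: $14,000,000 (~14% of total capex)
```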

More important than cost, solutions like this significantly speed up delivery to market. In this very dynamic and aggressive market, speed to market is the number one concern for our clients. We do a lot at Gleeds in terms of cost control, scheduling, and project management, but the best thing we can do for our clients in this market to satisfy their business needs is to deliver the data center as fast as possible. Yesterday is too late in most cases. That’s where these technologies come in quite handy.

Why has cooling become such a core focus for data centers?

It all comes down to the steady rise in power requirements. Everything runs in the cloud these days. The more web-based applications and programs become, the more we need complex, sophisticated hardware, as well as brick-and-mortar facilities to house that hardware.

That exponentially increases the amount of electrical distribution needed. We now have to buy larger and more complex equipment to adequately power the server racks and, subsequently, larger and more efficient HVAC equipment to cool them. Again, this is where In-Rack Cooling becomes useful. The power density of individual server racks has a knock-on effect across the entire mechanical, electrical, and plumbing (MEP) and architectural solution offered in a data center.

If In-Rack Cooling is so useful, what’s stopping its wider adoption by the industry?

We first have to understand what makes one data center better than another. The answer is reliability. That’s the number one thing data center providers stake their marketability on.

So where do you get reliability? You get that out of using known quantities. It’s the fundamental thing driving our decision-making. It’s the reason people repeatedly buy one brand of peanut butter over another. That brand they like so much is a known quantity. By the same token, people come to me because I know how to build data centers, not nuclear reactors (I wouldn’t know where to start). That’s my known quantity. 

The same thing applies to data center design, construction methods, and strategies. Right now, the known quantity is, for example, an uninterruptible power supply (UPS) system that provides backup power for the facility. There are tried-and-true strategies that we know are reliable because they work.

In-Rack Cooling is relatively new in terms of data center tech, so it does not yet have the reputation of being a known quantity. It’s still perceived as a “nice to have.” It’s like relish being sold at a hot dog stand. Not everyone has relish, but they do have ketchup, the tried-and-true safe option. Ideally, the best sales driver is to have both, but at the moment a lot of data center providers are saying, “We don’t know if relish tastes good; you should stick with tried-and-true ketchup.”

It’s important to remember that this industry is run by human beings, and people are the same no matter what industry they’re in. You have those human characteristics driving the business, and until we can change our thought processes around construction and design strategies, it’s going to continue to be that way. That’s the biggest drawback for emerging technologies like In-Rack Cooling and hydrogen generation.

What I’m saying is we need more relish.

Considering how supply-constrained the market is now, how can data center providers navigate this challenge?

The most pragmatic and productive way we address that is through transparency with the client. A lot of consultants will say, “We’ll be happy to do our best, we promise.” The reality, however, is that you can’t change market conditions; you can only work around them.

As our clients’ trusted advisor and advocate, we do the market study and the supply chain analysis, review all the other issues, predict and forecast, and then work within those parameters. Sometimes that means advising our clients that they have to re-tool their business model to meet those new conditions.

If they want to deliver sooner, they can redesign their facilities around the alternative MEP equipment available at the time (off-the-shelf rather than custom-built equipment, for example). If the client is specifically looking for bespoke solutions, then we need to advise them that schedules must be adjusted, which has knock-on effects on contractor procurement. It doesn’t make sense to sign agreements with MEP contractors if the equipment won’t be delivered on time. So the whole strategy has to be adapted to mitigate the risk.

At the end of the day, our motto is simple: “don’t tell the client what they want to hear; tell them the truth.”


“Solutions like In-Rack Cooling not only save on construction costs for data centers, but significantly speed up delivery to market.” – Don Harris, Director, Mission Critical