by Andy Fenton, VP Sales and Marketing, Telehouse Canada; Ken Miyashita, Managing Director, Telehouse Thailand; and Sami Slim, CEO, Telehouse France

AI is no longer confined to the innovation lab. It is now central to real-world deployments, reshaping the way data centres operate. This shift is visible across Asia, Europe, and North America, where organisations want inference engines positioned as close as possible to users and data sources. As a result, data centre design, buyer priorities, and long-term infrastructure strategy are all evolving to meet these demands.

Often, the priority is moving this capability closer to where demand resides. Retailers, for example, deploy micro-regions at suburban edge sites so that recommendation engines can answer in milliseconds. Trading desks, meanwhile, run live language translation on liquid-cooled servers in racks with power densities exceeding 100 kW, inside the same metro fibre ring as the exchange's matching engine. If those GPUs sat farther away, WAN latency would erase the speed gained from the extra compute.
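A simple back-of-envelope calculation makes the trade-off concrete. The sketch below is illustrative only: the inference times, per-hop costs, and distances are assumed figures, not measurements from any Telehouse facility, but they show how network round trips grow with distance and can swallow the milliseconds saved by denser compute.

```python
# Hypothetical latency budget: why placing GPUs farther from users can erase
# the gains from faster hardware. All figures below are illustrative assumptions.

def round_trip_ms(distance_km: float, per_hop_ms: float = 0.5, hops: int = 4) -> float:
    """Rough network round trip: ~0.005 ms per km of fibre each way,
    plus a fixed cost per switching/routing hop."""
    propagation = 2 * distance_km * 0.005
    return propagation + hops * per_hop_ms

BASELINE_INFERENCE_MS = 12.0   # assumed model response time on a standard rack
DENSE_RACK_INFERENCE_MS = 7.0  # assumed response time on a denser GPU rack

for site_km in (5, 80, 400):   # metro edge, regional site, distant hub
    network = round_trip_ms(site_km)
    print(f"{site_km:>4} km away: network {network:4.1f} ms, "
          f"end to end {network + BASELINE_INFERENCE_MS:5.1f} ms -> "
          f"{network + DENSE_RACK_INFERENCE_MS:5.1f} ms with denser compute")
```

At metro-edge distances the compute saving dominates the end-to-end figure; at several hundred kilometres the network adds roughly as much as the faster rack removes.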

This drive to bring AI workloads closer to where data is generated is prompting data centre operators and their customers alike to rethink how infrastructure is designed, connected, and managed to keep pace with AI’s next phase of growth.

Building future-proof services

With AI evolving so quickly, infrastructure must scale in months, not years. That means designing GPU zones for high-density racks, preparing dark-fibre routes that bypass congested exchanges, and specifying layouts that can switch from air to liquid cooling without full rebuilds. If a provider can't raise thermal and power ceilings quickly, AI projects will stall.
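The air-to-liquid decision ultimately comes down to per-rack heat load against a practical air-cooling ceiling. The sketch below uses assumed thresholds and server power draws, not vendor specifications, purely to show how quickly dense GPU servers push a rack past what air can handle.

```python
# Minimal sketch, with assumed figures, of the air-vs-liquid cooling decision
# in a high-density GPU zone. Thresholds are illustrative, not vendor specs.

AIR_COOLING_CEILING_KW = 40.0   # assumed practical limit for an air-cooled rack
GPU_SERVER_KW = 10.5            # assumed draw per multi-GPU server under load

def rack_plan(servers_per_rack: int) -> str:
    load_kw = servers_per_rack * GPU_SERVER_KW
    cooling = "air" if load_kw <= AIR_COOLING_CEILING_KW else "liquid (direct-to-chip or rear-door)"
    return f"{servers_per_rack} servers -> {load_kw:5.1f} kW per rack -> {cooling}"

for n in (2, 4, 8, 10):
    print(rack_plan(n))
```

On these assumptions, a rack tips past the air-cooling ceiling at around four dense GPU servers, and a fully populated rack lands in the 100 kW territory described above.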

Connectivity is another key constraint, though it is often underestimated. Recent research by S&P Global Market Intelligence 451 Research, commissioned by Telehouse, surveyed more than 900 senior IT executives globally and found that more than 90% of businesses see cloud on-ramps as important to their AI/ML architecture, while 55% of respondents said they had experienced significant network issues with AI.

In practice, these concerns are driving a shift to carrier-neutral data centre campuses with route-optimised links to hyperscale clouds, together with installation timelines measured in days and bandwidth that scales as the model grows. AI-native businesses are now prioritising interconnection density and speed of provisioning over raw floor space. They arrive with precise technical requirements and expect campus ecosystems to meet them from day one.

Demand for specialised services, such as GPU-as-a-service or tailored environments for different types of AI workloads, continues to rise.

Some data centres now offer multi-megawatt blocks of liquid-cooled capacity dedicated to GPUs. Others are helping cross-border AI providers navigate regulatory requirements on data residency. Data centre providers must understand the workload, not just the hardware.

That means offering engineering expertise, transparent power budgets, and a willingness to retrofit when economics demand it.

Key considerations for planning AI infrastructure

As AI momentum builds, it is increasingly important to ask the right questions when assessing data centre capabilities. It starts with understanding the workload profile – what systems, users, or data sources need to be connected. Defining the use case at this stage is critical, as training and inference place very different demands on location, cooling, and interconnection.

Buyers must ask for the current rack-power ceiling and the precise milestones that trigger an upgrade, then probe the cooling roadmap to understand how and when liquid cooling will be introduced. They also need to know which carriers and cloud gateways are live on day one and what round-trip latency can truly be guaranteed, while confirming what support exists for data-sovereignty rules and live carbon reporting.
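One way to keep that due diligence consistent across sites is to capture the answers in a structured form. The sketch below is a hypothetical checklist: the field names and example values are assumptions for illustration, not a standard assessment template or Telehouse requirements.

```python
# Hypothetical site-assessment checklist covering the questions above.
# Field names and example values are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class SiteAssessment:
    rack_power_ceiling_kw: float           # current per-rack power ceiling
    upgrade_trigger_milestones: list[str]  # what unlocks the next ceiling
    liquid_cooling_available_from: str     # cooling roadmap date, or "now"
    day_one_carriers: list[str]            # carriers live at contract start
    day_one_cloud_onramps: list[str]       # cloud gateways live at contract start
    guaranteed_rtt_ms: float               # contractual round-trip latency
    data_residency_regions: list[str]      # supported sovereignty boundaries
    live_carbon_reporting: bool            # granular, real-time reporting available?

example = SiteAssessment(
    rack_power_ceiling_kw=70.0,
    upgrade_trigger_milestones=["liquid loop commissioned", "substation phase 2"],
    liquid_cooling_available_from="now",
    day_one_carriers=["CarrierA", "CarrierB"],
    day_one_cloud_onramps=["CloudX direct connect"],
    guaranteed_rtt_ms=2.0,
    data_residency_regions=["EU", "UK"],
    live_carbon_reporting=True,
)
print(example)
```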

Scoping out the next 18 months

Three forces will shape requirements through 2026.

  • First, mainstream software vendors are embedding generative assistants into office suites and customer-service platforms, multiplying inference transactions inside city cores.
  • Second, sustainability metrics are moving from annual reports to live dashboards. Heat-recovery loops, granular energy metering, and machine-learning-driven building optimisation are on course to become baseline clauses rather than progressive extras.
  • Third, budget pressure is encouraging collaborative ecosystems in which cloud operators, telecom carriers, and data centre specialists share risk and margin to deliver integrated AI platforms that scale with demand.

The road ahead

Looking further ahead, AI will anchor inference workloads closer to users, drawing data centre investment into well-connected metropolitan and edge locations. At the same time, modular, liquid-cooling-ready designs will continue to prove their value, offering a practical path to higher rack densities without wholesale rebuilds, while interconnection quality now weighs as heavily in site selection as kilowatt price or cooling efficiency. Sustainability metrics and partnership models are moving from aspiration to expectation, signalling that efficiency and collaboration will define competitiveness for the rest of the decade.

Enterprises planning their next move need to understand the workload, quantify current limits, and select partners who can scale power, switch cooling regimes, and activate new network paths at the pace the AI roadmap demands.

Discover more about the key infrastructure decisions shaping AI success in 2025 and beyond in our webinar, featuring in-depth insights from Telehouse experts around the world.