
Cloud Networking Applications, Deployment Models, and Trends

May 24, 2023
By Fady Masoud
Sr. Director, Solution Marketing

Part 3: Trends

In two previous blog posts (Part 1, Part 2), we covered cloud networking applications and their different deployment models. In this blog, we’ll take a quick look at the major trends shaping this industry.

Secure DCI

In 2016, cybercrime cost the global economy $450B, according to Cybersecurity Ventures. By 2021, this number was expected to reach $6T. Many security mechanisms exist today to protect data while it is stored in the data center (at rest), such as controlling physical access to the data center, securing access to storage and compute equipment, and encrypting data clusters. Equally important is securing data while in flight, as it is transmitted between data centers. Whether cloud networking is used as part of private enterprise connectivity or data center interconnect (DCI) as a service, data must be protected from intruders and cyber threats.

Numerous data center operators are now deploying secure DCI links using encryption, and DCI service providers are likewise building their value propositions and competitive differentiators around secure connectivity. To protect mission-critical data from intruders and hacking tools, several of Infinera’s optical transport platforms are FIPS 140-2 Level 2 compliant and offer security at multiple levels, including tamper-proof hardware, centralized authentication and authorization, stringent access procedures, a secured control plane and management plane, wire-speed Layer 1 and Layer 2 data plane encryption, and many other features. Encrypting transmitted data can be performed without external boxes (a risk exposure) or special network engineering procedures.
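To make the in-flight encryption idea concrete, here is a minimal Python sketch of authenticated encryption using the third-party cryptography library’s AES-GCM mode. This is purely illustrative: Infinera’s Layer 1/Layer 2 encryption runs in hardware at wire speed, and the key exchange is assumed to have already happened out of band.

```python
# Minimal sketch of authenticated encryption for data in flight.
# Illustrative only -- real DCI encryption runs in hardware at wire
# speed; this just shows the AES-GCM principle end to end.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # assume keys are exchanged out of band
aead = AESGCM(key)

def encrypt_frame(plaintext: bytes, frame_id: bytes) -> bytes:
    nonce = os.urandom(12)                          # unique per frame
    ct = aead.encrypt(nonce, plaintext, frame_id)   # frame_id is authenticated, not encrypted
    return nonce + ct

def decrypt_frame(blob: bytes, frame_id: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return aead.decrypt(nonce, ct, frame_id)        # raises if the frame was tampered with

frame = encrypt_frame(b"replicated database page", b"frame-0001")
assert decrypt_frame(frame, b"frame-0001") == b"replicated database page"
```

Note that AES-GCM provides integrity as well as confidentiality: a tampered frame fails authentication on decryption rather than yielding corrupted plaintext.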

Dynamic Data Center Fabrics

As data center interconnect solutions become more flexible and scalable, interconnected data centers can operate as a single entity across varying geographical areas, from small metropolitan data center fabrics to larger regional ones. In densely populated cities with heavy demand for data, workloads can be shifted from one data center to another nearby to accomplish certain tasks and maintain end-user performance. Heavy workloads can also be dynamically “migrated” from one region to another where electricity is cheaper during the night. This fluid movement of workloads enables data center operators to enhance performance, maximize resource utilization, and reduce costs. It also enables carrier-neutral providers to evolve their business models beyond just space and power and gain a competitive edge by offering dynamic, flexible, and high-performance DCI services.
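As a toy illustration of this cost-driven placement logic, the Python sketch below picks the cheapest region with spare capacity. The region names, tariffs, and capacities are invented for illustration and bear no relation to any real operator’s scheduler.

```python
# Toy illustration of cost-driven workload placement across a data
# center fabric. Regions, prices, and capacities are hypothetical.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    price_per_kwh: float   # current electricity price (USD)
    free_racks: int        # remaining capacity

def place_workload(regions: list[Region], racks_needed: int) -> Region:
    """Pick the cheapest region that can still host the workload."""
    candidates = [r for r in regions if r.free_racks >= racks_needed]
    if not candidates:
        raise RuntimeError("no region has enough capacity")
    return min(candidates, key=lambda r: r.price_per_kwh)

regions = [
    Region("metro-east", 0.14, 3),
    Region("metro-west", 0.09, 10),   # cheaper overnight tariff
    Region("regional-north", 0.11, 6),
]
print(place_workload(regions, racks_needed=5).name)  # -> metro-west
```

In practice such a scheduler would also weigh migration cost, latency to end users, and data gravity, but the core trade-off is the same: move the work to where the watt is cheapest.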

SLTE at the Data Center

Data center-to-data center traffic from internet content providers and cloud service providers accounts for most of the traffic on submarine cables. Historically, submarine cables were terminated at a cable landing station (CLS) with submarine line terminating equipment (SLTE), whereby traffic coming from the “wet” plant was digitally terminated on a platform such as an OTN or Ethernet switch. But a typical CLS is chosen for its location as a safe harbor, not for space and power reasons. The latest generation of optical engines, such as Infinera’s ICE6, features significantly better optical performance, offering more capacity at longer reach, so the optical path of submarine links can be extended to inland data centers, where power and space are typically less constrained. Extending SLTE to the data center reduces space and power costs, scales far better as higher-capacity subsea cables come online, and enhances reliability by eliminating back-to-back connections.

Automation, Automation, and Automation

Automation is not new to data centers. In fact, automation is at the heart of every data center and has been for a while. Data center operators use many software automation tools on servers and storage equipment to accelerate code generation and DevOps, streamline application deployment, simplify configuration management, automate operations and upgrades, and conduct many other day-to-day tasks. To put things in perspective, a Tier 1 internet content provider at a past OFC conference shared some of the operational challenges of running a global data center network:

  • More than 30,000 circuits to manage
  • 30,000 configuration changes per month
  • 40,000 submarine fiber pair miles
  • 4 million lines of code in configuration files alone
  • 8 million monitoring variables refreshed every five minutes
  • Collaboration with 12+ vendors

Needless to say, automation is crucial for data center operators. While automation is widely used in the storage and compute parts of the data center, it is now ramping up in DCI by extending to the network element (NE) level. Numerous ongoing projects and initiatives aim to further automate DCI, focusing on the simplification and redesign of network operations with zero-touch provisioning, introducing intent-based declarative configuration management to networking equipment, defining and enhancing real-time streaming telemetry for active monitoring and failure prevention, and leveraging containers and microservices to speed up feature introduction and simplify upgrades.
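As a rough sketch of the intent-based, declarative pattern mentioned above, the Python below compares a desired state against observed state and emits only the changes needed to reconcile them. All port and field names are invented, and a real system would push the resulting changes to the network elements via a protocol such as NETCONF or gNMI.

```python
# Minimal sketch of intent-based (declarative) configuration: the
# operator states the desired state, and a reconciliation loop computes
# the diff to apply. Device fields here are hypothetical.
desired = {"port-1": {"admin": "up", "speed": "400G"},
           "port-2": {"admin": "down"}}

observed = {"port-1": {"admin": "up", "speed": "100G"},
            "port-2": {"admin": "up"}}

def reconcile(desired: dict, observed: dict) -> list[tuple]:
    """Return the minimal set of changes that moves observed to desired."""
    changes = []
    for port, intent in desired.items():
        current = observed.get(port, {})
        for field, value in intent.items():
            if current.get(field) != value:
                changes.append((port, field, value))
    return changes

for port, field, value in reconcile(desired, observed):
    print(f"apply {port}.{field} = {value}")
# apply port-1.speed = 400G
# apply port-2.admin = down
```

The key design point is that the operator never scripts imperative steps; the loop continuously drives the network toward the declared intent, which is what makes zero-touch provisioning possible.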

Liquid/Immersive Cooling

In data center jargon, the power usage effectiveness (PUE) factor represents the ratio of the total amount of energy used by a data center facility (IT equipment + cooling) to the energy delivered to the IT equipment (servers, storage, and networking). To put things in perspective, a data center PUE of 2.0 means that the power consumption of the IT equipment is equal to the power consumption of the cooling systems required to maintain the IT equipment at operating temperature. In other words, for every 1 watt consumed by the IT equipment, another 1 watt is used by the cooling system. A PUE of 3.0 represents highly inefficient power utilization, where twice as much electricity goes to cooling as to the IT equipment itself. On the opposite end of the scale, a PUE of 1.2 represents a highly efficient cooling design, where cooling is only 20% of the IT equipment’s power consumption. (A quick sketch of this arithmetic follows the cooling list below.)

Over the years, lowering PUE has been a major target for every data center operator in order to maximize the utilization of every watt and hence reduce operating costs and environmental impact. While major improvements were achieved prior to 2012, progress since then has been baby steps, almost flat. Data center operators have been leveraging different technologies to reduce power consumption, such as automating cluster sizes based on workloads and dynamically deactivating/reactivating servers.

Nonetheless, the ever-increasing demand for content and high-performance computing requires supporting power consumption well above what racks were initially designed for (10 kW/rack is the typical design). As long as rack power requirements remained well below 20 kW, data centers could rely on air cooling to maintain safe operating temperatures. But today’s high-performance racks can easily exceed 20 kW, 30 kW, or more, so data center operators have started to adopt liquid cooling beyond just mainframes and supercomputers, as water and other liquids are far more efficient at transferring heat than air. There are many data center cooling approaches, including:

  • Evaporative cooling: Air is funneled through the equipment; excess heat dissipates into a water tank; cooler air is then recirculated to the equipment.
  • Rear-door water cooling: Water or special liquid runs through tubes attached to the rear of the rack.
  • Waterborne cooling: Data center is built on a barge and is cooled by water from the ocean/lake.
  • Underwater cooling: Data center is built in a sealed container and laid down on the ocean floor (e.g., Microsoft’s Project Natick).
  • Immersive cooling with non-conductive fluids: Servers are immersed in non-conductive fluids.
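Returning to the PUE discussion above, here is the arithmetic in a few lines of Python; the wattages are made-up examples chosen to match the ratios in the text.

```python
# PUE arithmetic from the examples above: total facility power divided
# by the power delivered to IT equipment. Wattages are illustrative.
def pue(it_kw: float, cooling_and_overhead_kw: float) -> float:
    return (it_kw + cooling_and_overhead_kw) / it_kw

print(pue(1000, 1000))  # 2.0 -> cooling matches the IT load watt for watt
print(pue(1000, 2000))  # 3.0 -> highly inefficient
print(pue(1000, 200))   # 1.2 -> overhead is only 20% of the IT load
```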

The relentless demand for bandwidth, the fast-paced migration to the cloud, and the proliferation of edge computing are fueling demand for data centers and their interconnect. Innovative technologies are helping data center operators overcome operational challenges while paving the way for a new era of hyperconnectivity.

Check out Infinera’s Cloud Networking Applications website and blogs for more details and insights on how you can optimize your data center interconnect and cloud networking deployment.