By Tim Doiron
Sr. Director, Marketing
As many of you know, I spent the last 3.5 years as a Principal Analyst at ACG Research, where I loved the industry collaboration and the ability to dive into timely topics of personal and professional interest. However, I also found that I missed developing and marketing products and being a more integral part of the product ideation process. So, as of May 13, I joined Infinera as Sr. Director of Solutions Marketing. Thank you to everyone who has supported me on this journey. Thank you for your trust and your kinship. Thank you to ACG Research for letting me be a part of the team and thank you to Infinera for inviting me to join your own transformational success story. I am incredibly excited about the intersection of deep fiber architectures, distributed computing, open networking, intelligence, and automation.
Along with Infinera colleagues, I attended this year’s NGON & DCI World in Nice, France, where I chaired the Wednesday afternoon Data Center Interconnect (DCI) track and moderated two panels. I left the conference with four key takeaways:
- 400G deployments inside and outside the data center are interdependent.
- The edge will remain foggy without a systematic approach that includes the concept of “latency zones” or something similar.
- Africa holds promise for optimized networks with on-demand consumption models.
- 5G transport requires software-defined networking (SDN) enablement and intelligent automation.
400G Inside and Outside the Data Center Are Interdependent
The Hyperscale Roadmap panel included Brad Booth of Microsoft, Mike Hollands of Interxion, and Herve Fevrier of Facebook. Capacity demand, network expansion, and growth in interconnections are the top challenges facing all three companies. I was a little surprised to learn that deployment of 400 gigabit (400G) connectivity inside the data center remains quite limited. In Microsoft’s case, upgrading metro DCI connectivity with 400G ZR is the gating factor for broader use of 400G inside the data center. Between data centers, 100G/200G wavelengths remain common, and Microsoft’s metro-distributed data center architecture will stay that way until 400G ZR pluggable coherent optical modules become commercially available, which now looks like late 2019 or early 2020. Brad noted that it typically takes a year to operationalize a new technology like 400G ZR once vendors ship it, which puts mass commercial deployment closer to 2021. While not every data center architecture matches Microsoft’s, the timeline for 400G ZR deployments should be similar for others planning to use the pluggable solution.
Latency Zones Help Clear the Fog on Edge Computing
Edge computing was discussed as something already occurring in some ways with the deployment of edge data centers that bring content closer to consumption. Companies like Interxion, with its interconnection and colocation facilities, are benefiting as Microsoft and other hyperscale providers use them to expand their footprints and get closer to customers more quickly. LinkedIn (now owned by Microsoft) was cited as an example application in which some content is cached at an edge data center so the browser screen can partially update while additional information is fetched from a more centralized data center. The definition and location of “the edge” was debated. While definitions vary among service providers and vendors, panel participants were receptive to the idea that we need a systematic approach to edge computing built around the concept of “latency zones.” With latency zones, we set desired or required application latency targets to create a common framework for networking deployments and discussions. For example, latency zone A might be defined with a 1 millisecond (ms) maximum latency, while latency zone B might allow up to 10 ms. With latency zones, we can talk quantitatively about where and how to deploy distributed computing in the network and how to support distributed application performance requirements.
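To make the idea concrete, here is a minimal sketch of how latency zones could be expressed in code. The zone names, thresholds, and site latencies are illustrative assumptions, not an industry standard or anything proposed at the conference:

```python
# Hypothetical latency-zone framework: each zone is a name plus a
# maximum acceptable latency in milliseconds, ordered tightest first.
LATENCY_ZONES = [
    ("A", 1.0),   # e.g., on-premises or cell-site edge
    ("B", 10.0),  # e.g., metro edge data center
    ("C", 50.0),  # e.g., regional data center
]

def zone_for(latency_ms: float) -> str:
    """Return the tightest zone that can satisfy a measured latency."""
    for name, max_ms in LATENCY_ZONES:
        if latency_ms <= max_ms:
            return name
    return "core"  # beyond all defined edge zones

# Example: which assumed sites could host an app with a 10 ms budget?
sites = {"on-prem edge": 0.5, "metro edge": 4.0, "regional DC": 18.0}
eligible = [site for site, ms in sites.items() if ms <= 10.0]
```

A shared table like this is the point of the framework: once operators and application owners agree on the zone definitions, "deploy in zone B" becomes a quantitative statement rather than a debate about where the edge is.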
Africa Holds Promise for Optimized Solutions with On-demand Consumption
In the second panel, Stijn Grove, Dutch Data Center Association; Helen Xenos, Ciena; and Ayotunde Coker, Rack Centre, discussed how business models and infrastructures are evolving to support “Data Center 2.0.” Ayotunde’s perspective from Africa was particularly striking. The opportunity for data center growth in Africa is significant. The African continent is home to 1.3 billion people, yet its total data center capacity is only half that of Amsterdam, or a quarter that of London, and the data centers that do exist are concentrated in South Africa. Rack Centre, by contrast, is in Nigeria – a country of 200 million people with data center capacity just 2-3% of that found in Amsterdam or London. Listening to Tunde, it was clear that power, space, and ease of operation (automation, self-installation) are critical attributes for his network, which seeks to minimize electricity, cooling, footprint, and operational costs.
In Africa, prepaid mobile phone services were embraced because they enabled people to granularize and level their cash flow with daily/weekly consumption and payments versus a single sizable and indeterminate monthly bill. That same granularized, as-needed consumption model is now playing out with data center resources. African enterprises can consume and pay for “right-sized” compute/storage/networking resources as and when they need them, flexibly expanding and contracting data center resources and costs to match their needs and their revenues. Africa also has a young population – a demographic that has grown up with feature-rich smartphones and a propensity to consume data. This demographic makes Africa’s areas of dense population “latency spots” that are ripe for highly connected, carrier-neutral edge data centers. The population and the on-demand consumption model are two of the reasons why Tunde is so bullish on Rack Centre’s prospects in sub-Saharan Africa. Listening to Tunde, I also thought about Infinera’s Instant Bandwidth, which enables easy, on-demand transport and connectivity service consumption with a portable licensing scheme. Instant Bandwidth can enable Rack Centre to obtain the transport capacity it needs, when and where it needs it, in service of its customers and commensurate with its own revenue generation.
5G Transport Needs SDN Enablement and Automation
Other top topics from conference presentations and moderated panels included coherent dense wavelength-division multiplexing (DWDM) evolution to 800 gigabits per second (Gb/s) per wavelength and the challenges as we approach the Shannon limit, disaggregation, 5G transport, edge computing, network intelligence, and closed-loop automation. Infinera’s own Geoff Bennett delivered a presentation showing how second- and third-generation coherent solutions extend optical reach over first-generation technology. Next-generation 400G was offered as a proof point, delivering over 1,000 kilometers of reach versus hundreds of kilometers for first-generation solutions. Service providers expressed a diverse set of views on 5G mobile transport, with a representative from Orange promoting Optical Transport Network (OTN) across mobile fronthaul, midhaul, and backhaul. Others, myself included, think OTN is not appropriate for fronthaul. While there is debate about the optimal tools for implementing 5G transport network slicing, there was general agreement that 5G mobile transport will need to be SDN-enabled and highly automated – in concert with the 5G RAN and NG Core – to instantiate and dynamically manage network slices. Without intelligent automation, the benefits of programmable, individualized networking performance over a shared physical network will be lost.
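For readers curious why the Shannon limit looms over the 800G discussion, here is a small sketch of the linear-channel capacity bound, C = B · log2(1 + SNR). The bandwidth and SNR figures are illustrative assumptions, not numbers from any conference presentation, and real fiber is nonlinear, so practical limits sit below this ceiling:

```python
import math

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    """Ideal AWGN channel capacity per polarization: C = B * log2(1 + SNR).
    This is an upper bound; nonlinear fiber impairments push real systems
    below it, which is why per-wavelength gains are getting harder."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear ratio
    return bandwidth_ghz * math.log2(1 + snr_linear)

# Assumed example: a 100 GHz spectral slot at 15 dB SNR caps out around
# 500 Gb/s per polarization, roughly 1 Tb/s with dual polarization.
per_pol = shannon_capacity_gbps(100, 15)
dual_pol = 2 * per_pol
```

The takeaway is that once spectral width and achievable SNR are fixed, the bound is fixed too; squeezing out further capacity per wavelength means fighting for fractions of a dB, which is why reach-versus-rate trade-offs dominated the coherent DWDM discussion.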
See you in Barcelona for NGON 2020.
Safe travels everyone!