AI, Optical Networking, and Indium Phosphide

July 11, 2023
By Paul Momtahan
Director, Solutions Marketing
With the advent of OpenAI’s ChatGPT, Microsoft’s Bing Chat, and Google’s Bard, artificial intelligence (AI) and associated terms such as machine learning, neural networks, and deep learning have moved into the mainstream and become key topics of public and political discourse. Machine learning (ML) is one approach to AI that leverages mathematical models to enable computers to learn without direct instruction. Neural networks are a type of machine learning in which computers learn to process data in a way inspired by the human brain, and deep learning refers to neural networks with many layers. If you are still unsure of the terminology, this blog from IBM provides a nice explanation. AI/ML has applications in a wide range of industries, from transportation to defense. In this blog, I’ll look at the symbiotic relationships between AI/ML, optical networking technology, and indium phosphide (InP).
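To make the “depth” part of that terminology concrete, here is a minimal sketch of a deep neural network in PyTorch (my own illustration, not taken from the IBM blog): the network is “deep” simply because data passes through several stacked layers, and every layer size below is an arbitrary placeholder.

```python
import torch
from torch import nn

# A toy "deep" network: the depth comes from stacking multiple hidden layers.
# All sizes here are arbitrary and purely illustrative.
model = nn.Sequential(
    nn.Linear(16, 32),   # input layer: 16 features in, 32 out
    nn.ReLU(),
    nn.Linear(32, 32),   # hidden layer 1
    nn.ReLU(),
    nn.Linear(32, 32),   # hidden layer 2 -- more layers = "deeper"
    nn.ReLU(),
    nn.Linear(32, 4),    # output layer, e.g., scores for 4 classes
)

x = torch.randn(1, 16)   # one example with 16 input features
print(model(x).shape)    # torch.Size([1, 4])
```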
AI/ML will have a big impact on optical networking
AI/ML is expected to have a big impact on multiple aspects of both optical networking and telecommunications. Applications range from customer service chatbots, network report compilation, outage ticket response, and the prediction of performance, faults, and traffic patterns to, eventually, fully autonomous networks. Embedding neural networks in the digital signal processing (DSP) chips of coherent optical engines for tasks such as nonlinear compensation has also been a hot topic at optical industry conferences such as OFC and ECOC.
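As a purely illustrative sketch of that last point (not a description of any particular vendor’s DSP), a neural-network nonlinear equalizer can be framed as a small model that maps a window of received, distorted symbol samples to a corrected estimate of the center symbol. The PyTorch fragment below assumes a simple feed-forward architecture, an 11-symbol window, and placeholder layer widths.

```python
import torch
from torch import nn

# Illustrative-only sketch of a neural-network nonlinear equalizer:
# it takes a sliding window of received (distorted) symbols, represented
# as interleaved I/Q values, and predicts the corrected I/Q of the
# center symbol. Window size and layer widths are assumptions.
WINDOW = 11                      # symbols of channel memory considered

equalizer = nn.Sequential(
    nn.Linear(2 * WINDOW, 64),   # 2 * WINDOW = one I and one Q per symbol
    nn.Tanh(),
    nn.Linear(64, 64),
    nn.Tanh(),
    nn.Linear(64, 2),            # corrected I/Q of the center symbol
)

rx_window = torch.randn(1, 2 * WINDOW)   # stand-in for received samples
corrected_iq = equalizer(rx_window)      # shape: (1, 2)
```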
On the other hand, AI/ML requires massive connectivity both inside the data center and between data centers and is therefore hugely dependent on optical networking technology. Optical networking also enables the connectivity between cloud AI services and end users. All this connectivity is provided by photons over fiber optic cables, with the vast majority of these photons generated by InP lasers.
AI/ML drives change inside the data center
AI/ML is having a huge impact inside the data center. According to a 650 Group presentation at OFC 2023, the vast majority of data center server growth is being driven by AI/ML. AI/ML clusters now include up to 10,000 graphics processing units (GPUs). AI/ML traffic inside the data center is doubling year over year, with 650 Group also predicting that one in five Ethernet ports inside the data center will be for AI/ML by 2027. Moreover, AI/ML traffic inside the data center must be lossless and low latency, while also being very high bandwidth, cost-effective, and power-efficient. AI/ML will also inevitably drive increased bandwidth between data centers as well as to and from data centers, for example, large data sets transferred from IoT devices to the AI cloud.
Indium phosphide optics and AI/ML
InP is already a key enabler of optical communications both inside data centers and over the metro, long-haul, and submarine networks that connect data centers to one another and to the government bodies, businesses, and consumers that need access to the data and services hosted within them. In addition to being the key material for the lasers used in single-mode fiber and WDM transmission, InP is also the ideal material for photonic integrated circuits (PICs), as it enables the most optical functions (laser, modulator, photodetector, amplifier, etc.) on a single chip compared to alternative materials such as silicon (no laser, no amplifier) and lithium niobate (modulator only).
Furthermore, InP has some additional advantages that make it ideally suited for scaling lossless, low-latency networking capacity for AI/ML inside data centers. Its superior modulation effect enables faster changes in the refractive index. This in turn enables InP to scale to very high symbol rates (e.g., 200+ Gbaud), which will be critical for the evolution to pluggables beyond 800 Gb/s, for example, 1.6 Tb/s. InP also requires less voltage to achieve a given phase change than alternative materials such as silicon and lithium niobate. This gives InP a crucial advantage when it comes to reducing power consumption inside data centers, which will be a key challenge when scaling AI/ML. In addition, fast retuning of the laser’s wavelength can enable connectivity to be rapidly reconfigured for bulk data transfers.
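For a rough sense of why 200+ Gbaud matters, the raw line rate of a coherent signal is approximately the symbol rate multiplied by the bits carried per symbol on each polarization and by the number of polarizations. The back-of-the-envelope calculation below is my own illustration; real engines subtract FEC and framing overhead and may use techniques such as probabilistic constellation shaping, so net rates will differ.

```python
# Back-of-the-envelope line-rate calculation (illustrative assumptions only).
def raw_line_rate_gbps(symbol_rate_gbaud, bits_per_symbol_per_pol, polarizations=2):
    """Raw bit rate before FEC and framing overhead, in Gb/s."""
    return symbol_rate_gbaud * bits_per_symbol_per_pol * polarizations

# 200 Gbaud with PM-16QAM: 4 bits/symbol per polarization, 2 polarizations.
print(raw_line_rate_gbps(200, 4))   # 1600 Gb/s, i.e., ~1.6 Tb/s before overhead
```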
What about indium phosphide AI processors?
Digital electronics is dominated by silicon for multiple reasons, including its abundance, its stable native oxide that can act as an insulator, and its large, high-yield wafers. However, electrons move relatively slowly in silicon, and silicon electronics also become unstable at high temperatures and therefore require lots of cooling, which consumes a lot of power. As we reach the limits of shrinking chip geometries, these disadvantages are driving the industry to look at compound semiconductors, with their higher electron mobility and superior temperature stability, as potential successors. And while InP and gallium arsenide (GaAs) are used today for some niche applications such as high-frequency RF electronics, silicon carbide (SiC) and gallium nitride (GaN) are the primary compound semiconductor R&D candidates to succeed silicon as the mainstream semiconductor material.
Figure 1: InP PIC for photonic neuromorphic computing (Eindhoven University of Technology, 2020)
However, another area of research where indium phosphide could have a big impact on AI/ML is neuromorphic computing: building computer chips with neural networks implemented in the hardware itself, as opposed to running neural network software on conventional processors. Internally, these chips require massively parallel interconnections, making InP PICs a strong candidate, with optical waveguides replacing the metallic interconnects of an electronic chip. For example, Eindhoven University of Technology demonstrated this type of device in 2020, as described in this paper, “Deep neural network through an InP SOA-based photonic integrated cross-connect,” with the InP PIC from the paper shown in Figure 1. New applications for AI/ML that could be enabled by photonic neuromorphic processors include high-energy particle physics, fusion reactor control, and nonlinear optimization for robotics and autonomous vehicles.
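Conceptually, each layer of such a photonic neural network performs the same weighted-sum-plus-nonlinearity operation as a software neural network layer; the difference is that the weights are realized as optical gains and attenuations in the cross-connect (for example, SOA gain settings) and the signals travel as light in waveguides. The NumPy snippet below is a hedged sketch of that core operation only, not a model of the Eindhoven device.

```python
import numpy as np

# The core operation a photonic neuromorphic layer implements: a weighted
# sum of its inputs followed by a nonlinearity. In an InP PIC, the weights
# would correspond to optical gain/attenuation settings and the signals
# would be light in waveguides; here everything is ordinary floating-point
# math for illustration only.
rng = np.random.default_rng(0)

x = rng.random(4)          # input activations (notionally, optical powers)
W = rng.random((3, 4))     # 3x4 "weight" matrix (notional gain settings)

layer_output = np.maximum(W @ x, 0.0)   # weighted sum + simple nonlinearity
print(layer_output)
```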
Summary
To summarize, AI/ML will drive huge changes in many industries, including telecommunications, networking, and more specifically optical networking. At the same time, optical networking is a key enabler of AI/ML. InP, in addition to its continued role in enabling the highest-performing coherent optical engine technology for metro, long-haul, and subsea networks, also has an increasingly important role to play inside the data center, providing high-performance AI/ML clusters with scalable, lossless, low-latency connectivity that is also extremely power efficient. Longer term, InP potentially has an important role to play in enabling photonic neuromorphic computing for advanced machine learning.