How SANET Created a Different Kind of Network Backbone: A discussion between Marian Ďurkovič, SANET and Geoff Bennett, Infinera
Marian Ďurkovič is the network architect for the Slovak Republic’s National Education and Research Network, SANET. Last year SANET selected Infinera during a public tender for a new network backbone, and Marian is now taking full advantage of the capabilities of the Infinera Cloud Xpress platform to create a new and highly cost-effective backbone architecture. Here Geoff Bennett, Director of Solutions and Technology at Infinera, interviews Marian about this novel network architecture.
Geoff Bennett: Marian, welcome to the Infinera Blog. Just to set the scene perhaps I can summarize for our readers that SANET has recently deployed a national research and education transport network backbone across the Slovak Republic. Here’s a diagram of that network, which has seventeen 100 gigabit per second (100G) point of presence (PoP) locations, and supports packet-based services for the academic and research community in the Slovak Republic. The reason that I wanted to talk to you about this network is that it uses a rather innovative combination of technologies, and I’d like to ask you to explain why it is so unique.
Marian Ďurkovič: Yes, I think it is really quite unique. So what we did was to take a step back from a typical transport network architecture, which I think we can characterize as having dense wavelength-division multiplexing (DWDM), optical transport network (OTN) and packet layers, and ask how much of this traditional set of technologies do we really need. After all, the distances for each hop are not very far, and we can define the set of services we support as packet-based. So this avoids the need to support legacy time-division multiplexing (TDM) service types, and gives us the opportunity to optimize the functionality.
Geoff Bennett: Excellent – so what does this architecture actually look like?
Marian Ďurkovič: Here is a diagram with the traffic flow for a client connection shown as the green dotted line, and I have numbered each stage in the data flow. At each of the 100G PoPs we have at least two Infinera Cloud Xpress boxes. Because we built the backbone mostly as a ring, you can think of one box as the westbound and the other as the eastbound unit in each PoP. To connect the two boxes we deployed a scalable Transparent Interconnection of Lots of Links (TRILL) switch platform, which carries both add/drop packet services and express [transit] traffic. We can extend the PoP architecture to a multi-way design if we need to, and there are already some examples of this in the network.
The Cloud Xpress gives us the ability to turn up very high capacity over our fibers, and because unnecessary complexity has been stripped out, it is very cost-effective and has very low latency.
We literally use the Cloud Xpress as a way to deliver a large amount of capacity into the TRILL switches.
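To make the east/west ring idea concrete, here is a minimal sketch (not SANET's actual software, and the PoP numbering is purely illustrative) of how a flow picks the shorter direction around a 17-PoP ring, and which intermediate TRILL switches carry it as express traffic:

```python
# Hypothetical sketch of the east/west ring design: each PoP's TRILL
# switch either adds/drops a flow locally or expresses it onward via
# the westbound or eastbound Cloud Xpress toward the destination PoP.

NUM_POPS = 17  # 100G PoP locations, as described in the interview


def shortest_direction(src: int, dst: int, num_pops: int = NUM_POPS):
    """Return (direction, hops) for the shorter way around the ring."""
    east = (dst - src) % num_pops
    west = (src - dst) % num_pops
    return ("east", east) if east <= west else ("west", west)


def transit_pops(src: int, dst: int, num_pops: int = NUM_POPS):
    """PoPs whose TRILL switch carries this flow as express traffic."""
    direction, hops = shortest_direction(src, dst, num_pops)
    step = 1 if direction == "east" else -1
    return [(src + step * i) % num_pops for i in range(1, hops)]


# A flow from PoP 0 to PoP 5 expresses through four intermediate switches:
print(shortest_direction(0, 5))  # ('east', 5)
print(transit_pops(0, 5))        # [1, 2, 3, 4]
```

This also illustrates Geoff's later point: every intermediate PoP on the path switches the flow in the packet layer, so express traffic load on the TRILL switches grows with path length.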
As a final touch we use software defined networking protocols to create and manage connections through the switches, and we also use Infinera’s Instant Bandwidth to expand capacity on the backbone links without sending out engineers to install new transponders.
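The interview does not disclose which SDN interface SANET uses, so the following is a purely hypothetical sketch of what assembling a provisioning request for a point-to-point packet service might look like; the field names, PoP names, and payload shape are all assumptions, not a real API:

```python
# Purely hypothetical sketch: the controller endpoint, payload shape,
# and PoP names below are illustrative assumptions, not SANET's API.
import json


def build_service_request(service_id, src_pop, dst_pop, vlan, rate_gbps):
    """Assemble a provisioning request for a point-to-point packet service."""
    return {
        "service_id": service_id,
        "endpoints": [
            {"pop": src_pop, "vlan": vlan},
            {"pop": dst_pop, "vlan": vlan},
        ],
        "rate_gbps": rate_gbps,
        "protection": "ring",  # east/west paths give built-in resilience
    }


request = build_service_request("svc-001", "pop-a", "pop-b", 100, 10)
print(json.dumps(request, indent=2))
```

In a controller-driven design like the one Marian describes, a declarative request of this kind is what lets individual client services be added or reconfigured in seconds rather than through manual per-switch configuration.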
Here is a picture of the PoP itself, and you can see how compact and simple the installation is in each PoP.
Geoff Bennett: Yes – that was something that struck me, actually. To think this is a terabit scale interconnect for a national backbone housed in such a small space is really quite impressive.
But let me ask this. To me it seems like you could be putting a lot of express traffic through those TRILL switches. In other network architectures we’d expect either OTN switching to handle router offload and keep express traffic out of the packet layer, or, at higher capacity scales, reconfigurable optical add/drop multiplexers (ROADMs) to provide express routing. Am I missing something?
Marian Ďurkovič: No, I will first say that this network architecture may not work for everyone. We are fortunate that as a research network we are permitted – in fact encouraged – to deploy network designs that are new and different.
You have commented on the express traffic through the switches, and this is absolutely something that must be carefully designed. Our architecture is feasible if both backbone capacity and packet switching can be delivered at low cost, and the latest technology enables exactly this: the Cloud Xpress does it for transmission capacity, and the TRILL switches are considerably less expensive than either an external OTN switch or a classic router.
I cannot reveal exact numbers because competing bids for this network were confidential, but I can tell you that the cost of this network was a fraction of the price we were quoted for either a conventional transport network or an Internet Protocol (IP)-over-DWDM router network with embedded colored optics.
For a fraction of the cost we are able to deliver an extremely high-performance, highly resilient and high-functionality national backbone, with state-of-the-art service provisioning.
Geoff Bennett: Of course our Cloud Xpress Family was designed for simple, point-to-point connectivity between data centers. Have you found any challenges in using it in this kind of network?
Marian Ďurkovič: No, quite the opposite. This entire network was deployed in only eight weeks, including the commissioning of the inter-city fibers, and we have been extremely impressed with how simple it is to bring up capacity using Cloud Xpress. It literally takes just a few minutes to bring a pair of boxes online, and to start plugging in the packet switches. Individual client services are also added or reconfigured in seconds.
We already have plans to enhance the functionality and scalability of the network. As we scale capacity we can turn on new Instant Bandwidth licenses. And of course we can install additional Cloud Xpress units on each site – right up to multi-terabit capacity. Today we’re using wavelength muxes on the fibers for connecting small regional PoPs. And if we start to see a significant level of transit traffic emerging, we still have the option to build super-channel express lanes using ROADMs.
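As a rough illustration of the Instant Bandwidth scaling model Marian mentions, the sketch below shows capacity being activated in software-licensed increments on already-installed hardware; the increment size and per-unit ceiling are assumed numbers for illustration only:

```python
# Illustrative sketch (assumed numbers): Instant Bandwidth-style
# scaling, where line capacity is activated in licensed increments
# on installed hardware, up to the unit's full line capacity.

INCREMENT_GBPS = 100        # assumed license increment
UNIT_CAPACITY_GBPS = 500    # assumed per-unit line capacity


def activate(current_gbps: int, extra_licenses: int) -> int:
    """Return activated capacity after turning on extra licenses."""
    target = current_gbps + extra_licenses * INCREMENT_GBPS
    return min(target, UNIT_CAPACITY_GBPS)


print(activate(100, 2))  # 300: two more licenses on existing hardware
print(activate(400, 3))  # 500: capped; beyond this, add another unit
```

The point of the model is that scaling within a unit is a software action rather than a truck roll; only once a unit's line capacity is exhausted does growth require installing an additional Cloud Xpress at the site.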
This really is a very expandable network, and I hope that your readers will be interested in our experience with what can be a very cost-effective and functional network architecture.
Geoff Bennett: I’m sure they will. Marian Ďurkovič, thank you very much!
Marian Ďurkovič is Network Architect at SANET. Geoff Bennett is Director of Solutions and Technology at Infinera.
- Video: Unleashing Cloud Networks
- Video: Get Up To Speed With Cloud Xpress
- Lippis Report: Infinera and Arista Demo Low-Latency, High-Capacity DCI
- Analyst Report: ACG Research Report on Data Center Interconnect