
NETWORKING

Advanced Interconnects Yield Line Rate Speeds for 5G MEC


The holy grail for purveyors of live entertainment and sporting events is creating more immersive, virtual experiences for audiences and fans. And 5G will play an important role in making it happen, but new network infrastructure will be required.

In many cases, this new infrastructure will be based on multi-access edge computing (MEC), an ETSI standard that moves networking, compute, storage, security, and other resources closer to the edge. With 5G’s orders-of-magnitude increase in bandwidth, plus the ability to tap real-time radio access network information, MEC is poised to deliver lower-latency service than any equivalent degree of cloud processing.

The key advantage of an MEC architecture is that it removes edge applications’ reliance on distant clouds or data centers. In the process, it reduces latency, improves the performance of high-bandwidth applications, and minimizes data transmission costs. According to Benjamin Wang, Director of Marketing at AEWIN Technologies, a networking platforms provider, that’s not the full scope of the potential cost savings.

“MEC sits really close to the 5G base stations,” says Wang. “If you have a powerful-enough system, you can actually integrate the 5G DU (distributed unit) directly along with the MEC stack. That reduces the network infrastructure requirement, saving money for everybody.”

I/O Bound: Improving Interconnects for Edge Network Enablement

So what do these “powerful-enough” systems look like, and how will they differ from traditional networking equipment to meet the demands of 5G MEC?

With MEC equipment expected to deliver the capabilities of a small data center, multicore, multiprocessor systems like the AEWIN Technologies SCB-1932 network appliance are a must. The AEWIN appliance, which can pair with other platforms in a telco server rack, supports dual 3rd generation Intel® Xeon® Scalable processors (code-named Ice Lake-SP) with up to 40 CPU cores.

But with all of the data passing back and forth between 5G vRAN workloads or virtualized network functions like firewalls, aggregators, and routers on an MEC platform, the bottleneck usually isn’t raw compute. More often, it is the chip-to-chip and rack-to-rack interconnects used to shuttle information between processors that share virtualized workloads or execute interdependent processes and applications.


Until 2017, the Intel® QuickPath Interconnect (Intel® QPI) was used in multiprocessor systems to transfer data between sockets. But today’s 100 Gigabit Ethernet (100 GbE) and faster network modules can quickly overwhelm QPI’s available bandwidth. The result is a system that is “I/O bound,” where ingress data is squeezed by interface limitations before it can reach the processor, adding latency.

In response, newer Intel processor microarchitectures include an upgraded socket-to-socket interface, the Intel® Ultra Path Interconnect (Intel® UPI). The low-latency, cache-coherent UPI raises per-link speed from QPI’s 9.6 GT/s to as much as 11.2 GT/s on 3rd generation Xeon Scalable processors, and with up to three UPI links per socket it delivers substantially more aggregate socket-to-socket bandwidth than QPI.
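To put those figures in perspective, here is a rough, back-of-the-envelope sketch in Python. It assumes commonly quoted link counts and rates (two QPI links at 9.6 GT/s versus three UPI links at 11.2 GT/s, with 2 bytes of data moved per transfer in each direction) and a worst case in which every bit arriving on the platform’s 100 GbE ports has to cross the socket interconnect; actual behavior depends on the processor SKU, link topology, and traffic pattern.

```python
# Rough comparison of aggregate socket-to-socket bandwidth (QPI vs. UPI)
# against worst-case ingress from 100 GbE network modules. Link counts,
# rates, and the 2-bytes-per-transfer figure are commonly quoted values,
# not the specification of any particular SKU.

BYTES_PER_TRANSFER = 2  # 16 data bits per direction per transfer


def interconnect_gbps(links: int, gt_per_s: float) -> float:
    """Aggregate per-direction bandwidth across all links, in Gb/s."""
    return links * gt_per_s * BYTES_PER_TRANSFER * 8


def headroom(links: int, gt_per_s: float, ports_100gbe: int) -> float:
    """Gb/s left over if N 100 GbE ports of traffic must cross sockets."""
    return interconnect_gbps(links, gt_per_s) - ports_100gbe * 100


if __name__ == "__main__":
    for name, links, rate in [("QPI (2 links @ 9.6 GT/s)", 2, 9.6),
                              ("UPI (3 links @ 11.2 GT/s)", 3, 11.2)]:
        for ports in (2, 4):
            print(f"{name}, {ports} x 100 GbE: "
                  f"{headroom(links, rate, ports):+.0f} Gb/s headroom")
```

With four 100 GbE ports of cross-socket traffic, the QPI configuration comes up roughly 90 Gbps short, while the UPI configuration still has headroom, which is exactly the kind of gap that leaves an older system I/O bound.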

But Intel UPI solves only the chip-to-chip data transfer problem. It does not address shelf-to-shelf or rack-to-rack communications. That is the role of PCI Express (PCIe) 4.0, with its 16 GT/s signaling rate per lane (roughly 2 GB/s of usable bandwidth per lane in each direction). After putting the technology through its paces, Intel designed PCIe 4.0 into its chips starting in 2020, including the 3rd generation Xeon Scalable processors.

As a result, data in systems like the SCB-1932 can be channeled over PCIe Gen 4 to any one of eight front-access expansion bays for plug-in NVMe storage, application-specific accelerators, or network expansion modules (Figure 1). The platform’s eight PCIe 4.0 x8 links give each of those modules enough bandwidth to sustain 100 Gbps of traffic.

Figure 1. PCI Express Gen 4 support allows the AEWIN SCB-1932 to communicate with network expansion modules at speeds up to 100 Gbps. (Source: AEWIN Technologies)
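As a quick sanity check on that 100 Gbps figure, the minimal Python sketch below works out the usable bandwidth of a PCIe 4.0 x8 link, assuming the nominal 16 GT/s signaling rate and 128b/130b line encoding and ignoring packet-level protocol overhead (which shaves off a few more percent in practice).

```python
# Sketch of why a PCIe 4.0 x8 link can feed a 100 GbE module at line rate.
# Assumes the nominal 16 GT/s signaling rate and 128b/130b encoding; it
# ignores TLP header and flow-control overhead, so real usable throughput
# is slightly lower.

PCIE4_GT_PER_S = 16.0            # PCIe 4.0 raw signaling rate per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding


def pcie4_gbps(lanes: int) -> float:
    """Usable per-direction bandwidth of a PCIe 4.0 link, in Gb/s."""
    return lanes * PCIE4_GT_PER_S * ENCODING_EFFICIENCY


if __name__ == "__main__":
    x8 = pcie4_gbps(8)
    print(f"PCIe 4.0 x8: ~{x8:.0f} Gb/s per direction ({x8 / 8:.1f} GB/s), "
          f"vs. 100 Gb/s for a 100 GbE module at line rate")
```

At roughly 126 Gbps per direction, a single x8 Gen 4 link has comfortable headroom over a 100 GbE module’s line rate.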

These modern interfaces allow systems like the SCB-1932 to keep pace with the requirements of MEC deployments, delivering near-line-rate performance in demanding applications like security and video processing.

Keep the Data Flowing, From the Bottom Up

5G performance improvements aren’t just for your average consumer: Private businesses from stadiums to medical centers to manufacturing facilities are embracing these lower latencies to support new applications and services. Use cases range from high-speed video streaming to augmented and virtual reality to real-time biometric authentication. All of these make life more efficient or more enjoyable, but they all require new levels of edge network performance.

With MEC installations on the horizon, we can expect these and other new application classes to emerge so long as equipment like the SCB-1932 can keep data flowing at low latencies. That all starts with the most fundamental connectivity technologies—processor interfaces—and works its way up.

About the Author

Brandon is a long-time contributor to insight.tech going back to its days as Embedded Innovator, with more than a decade of high-tech journalism and media experience in previous roles as Editor-in-Chief of electronics engineering publication Embedded Computing Design, co-host of the Embedded Insiders podcast, and co-chair of live and virtual events such as Industrial IoT University at Sensors Expo and the IoT Device Security Conference. Brandon currently serves as marketing officer for electronic hardware standards organization, PICMG, where he helps evangelize the use of open standards-based technology. Brandon’s coverage focuses on artificial intelligence and machine learning, the Internet of Things, cybersecurity, embedded processors, edge computing, prototyping kits, and safety-critical systems, but extends to any topic of interest to the electronic design community. Drop him a line at techielew@gmail.com, DM him on Twitter @techielew, or connect with him on LinkedIn.
