As Communications Service Providers (CoSPs) invest in 5G, they’re starting to consider the part that public cloud might play in their success.
Many CoSPs already have a relationship with public cloud service providers: they host their enterprise applications in the cloud, and use hyperscale cloud service providers for their public-facing websites and software development projects. Now, the question is what role public cloud might play in network services.
CoSPs are on a journey to virtualize and software-define the network. Instead of dedicated hardware appliances, new deployments are done with software running on general-purpose servers. At the same time, CoSPs are also transitioning to cloud-native technologies. The strategy enables them to have a telecommunications cloud with the elasticity and scalability of the public cloud, but in the CoSP’s private centralized data centers and edge locations.
Virtualization was a stepping stone toward the cloud for enterprise workloads across all industries: a virtual machine could run almost anywhere. Could the virtual network functions (VNFs) in the telecommunications cloud migrate to the public cloud?
Strategic Considerations
First, CoSPs must consider what is strategically desirable.
CoSPs have a huge responsibility to their customers and their regulators to ensure that their critical infrastructure operates, even under extreme duress. They will be looking for strong service-level agreements (SLAs) from cloud service providers before they are ready to host functions in the public cloud.
At the same time, CoSPs need to watch their costs. They need a cost-effective way to roll out the 5G infrastructure and expand their edge computing footprint for new services. They will need a cloudified environment and end-to-end network automation to ensure service delivery. Cloud service providers have strong competencies in this web scale domain, but are unlikely to take on the responsibilities of providing a full consumer telecommunications service. That means there is potential for a collaboration model, combining the cloud service providers’ cloud experience and the CoSPs’ network experience.
The business support system (BSS) already runs in the public cloud for many CoSPs. It’s not latency critical, and public cloud can help to reduce costs. However, other CoSPs want to keep tighter control over any systems that touch billing and finance. They won’t be putting BSS in the cloud soon. Control plane functions might be easier to migrate to the public cloud than data plane functions. The control plane is also closely related to the revenue stream, though. Some CoSPs will hesitate to entrust it to the public cloud.
Financial Considerations
When migrating VNFs to the public cloud, there is also a question of what makes sense financially. The appeal of public cloud is that customers, in this case CoSPs, don’t have to invest in their own hardware and data center infrastructure. Instead, they pay as they go, moving away from capital expenditure in favor of operational expenditure (OPEX). The OPEX bills can add up, though. Cloud service providers charge a lot for outgoing traffic and data-intensive VNFs could quickly become expensive.
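To see how quickly egress charges add up for a data-intensive VNF, consider a back-of-the-envelope calculation. The sketch below uses purely hypothetical figures; actual egress pricing and traffic profiles vary by provider and contract.

```python
def monthly_egress_cost(gbps, usd_per_gb, days=30):
    """Estimate monthly egress cost for a sustained traffic rate.

    gbps       -- sustained outgoing traffic in gigabits per second
    usd_per_gb -- egress price in US dollars per gigabyte (illustrative)
    """
    gb_per_second = gbps / 8            # gigabits -> gigabytes
    seconds = days * 24 * 3600
    return gb_per_second * seconds * usd_per_gb

# Hypothetical example: a data plane VNF sustaining 10 Gbps of
# outgoing traffic at an assumed $0.05/GB egress rate.
print(f"${monthly_egress_cost(10, 0.05):,.0f} per month")  # → $162,000 per month
```

Even at a modest per-gigabyte rate, a sustained data plane workload generates a six-figure monthly egress bill, which is why traffic-heavy VNFs are the ones to scrutinize most closely before migration.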
Migration projects are significant undertakings, too. VNFs that are architected to work on carrier-grade servers would need to be modified to work on public cloud servers. Independent Software Vendors (ISVs) and CoSPs may need to invest significantly to make today’s VNFs work in the public cloud.
Technical Considerations
Most importantly, which VNFs can feasibly work in public cloud?
The radio access network (RAN), for example, has strict latency requirements that mean baseband processing must sit at or near the cell site. The existing data center networks of the hyperscale cloud providers aren't distributed at that granularity, so running RAN workloads there will be challenging. Cloud providers typically have no more than a handful of regions within a country, and it's unlikely to be cost effective to build their edge estate out to the neighborhood level that RAN requires.
RAN is also an example of a workload that benefits from accelerators, such as the Intel® vRAN Dedicated Accelerator ACC100. This accelerator enables the general-purpose processor to offload the forward error correction process to dedicated hardware. In this way, CPU cores are freed up to increase the channel capacity. Similarly, Intel® QuickAssist Technology (Intel® QAT) can be used to accelerate the IPsec Gateway for the S1 interface. Accelerators such as these may not be available in the public cloud, where the focus is on homogeneous hardware.
Although VNFs run on general-purpose servers, those servers are highly tuned for telecommunications protocols. Public cloud servers are unlikely to be a one-for-one replacement for the servers CoSPs currently use for network workloads. They may not be performant enough for some VNFs. Public cloud platforms are designed for Information Technology workloads and are not optimized for packet processing. Nor do they offer the granular server telemetry that CoSPs need to be able to tune the platform and optimize the performance of their current VNFs.
We created the Telecom Workload Placement and Affinity Model to help CoSPs decide which network functions could work in public cloud. We considered:
- How important the physical location is for the VNF, with regard to both throughput and latency requirements. Workloads requiring high throughput and low latency will be hard to migrate to the public cloud.
- The integration requirements for the VNF. Functions related to fixed services have fewer integration requirements than 3rd Generation Partnership Project (3GPP) mobile service functions. Greater integration is easier to achieve within the CoSP's own architecture.
- The security requirements for the VNF. Functions such as Home Subscriber Server (HSS) and Home Location Register (HLR) have sensitive customer data and are also essential to the functioning of the network. VNFs are harder to protect in the public cloud and data sovereignty requirements are harder to meet there.
- The performance requirements of the VNF. Data plane functions are more demanding than signaling functions, for example, and so are a poor fit for current instances in public cloud.
We scored each of these factors equally, but CoSPs may prefer to weight one factor more highly than another. There might also be scenarios where a threshold value for any single parameter requires the workload to be hosted in the telecommunications cloud. HSS/HLR, for example, sits in the middle of the model; if security is weighted more heavily, a CoSP might take the view that these workloads must stay in the telecommunications cloud for security reasons.
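The scoring approach described above can be sketched in code. The following is a minimal illustration, not the published model: the 0–10 affinity scale, the cut-off values, the threshold rule, and the example scores for an HSS/HLR-like function are all hypothetical.

```python
# Factors from the Telecom Workload Placement and Affinity Model.
# Scores run from 0 (strong public cloud affinity) to 10 (must stay
# in the telecommunications cloud); the scale is illustrative.
FACTORS = ("location", "integration", "security", "performance")

def placement(scores, weights=None, telco_threshold=9):
    """Return 'telco', 'hybrid', or 'public' for a set of factor scores.

    weights         -- optional per-factor weights (default: equal)
    telco_threshold -- if any single factor meets or exceeds this value,
                       the workload is pinned to the telecom cloud
    """
    if weights is None:
        weights = {f: 1.0 for f in FACTORS}

    # Threshold rule: one extreme requirement alone can pin the workload.
    if any(scores[f] >= telco_threshold for f in FACTORS):
        return "telco"

    weighted = sum(weights[f] * scores[f] for f in FACTORS)
    normalized = weighted / sum(weights.values())  # back onto the 0-10 scale

    if normalized >= 7:
        return "telco"
    if normalized <= 3:
        return "public"
    return "hybrid"

# Hypothetical scores for an HSS/HLR-like function in the middle of
# the model, with security its most demanding factor.
hss = {"location": 5, "integration": 6, "security": 8, "performance": 5}
print(placement(hss))  # equal weights → 'hybrid'
print(placement(hss, weights={"location": 1, "integration": 1,
                              "security": 5, "performance": 1}))  # → 'telco'
```

As the second call shows, weighting security more heavily tips the same workload from the hybrid zone into the telecommunications cloud, which is the behavior described above for HSS/HLR.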
VNFs on the right of the model, including virtual RAN (vRAN), virtual deep packet inspection (vDPI), and the broadband network gateway (BNG), are a strong match for the telecommunications cloud and are more challenging to implement in public cloud. Workloads such as software-defined wide area networks (SD-WAN), unified communications as a service (UCaaS), and universal customer premises equipment (uCPE) already run in the public cloud.
Those functions in the middle of the model may have some technical affinity for public cloud, if the performance, security, and integration requirements can be met.
In a hybrid model, some functions remain in the telecommunications cloud and some migrate to the public cloud, with platform ownership split between the CoSP and the cloud service provider. That split could create challenges if technical problems arise or end-to-end performance is diminished.
This year, we expect to see CoSPs continuing to talk to cloud service providers about how they could work together in the network. It’s worth remembering that when public cloud first emerged many businesses were nervous about adopting it, even for enterprise workloads that routinely run there today. Telecommunications has unique performance and proximity requirements, so any reservations about running network workloads in the public cloud are justified and understandable. If there is enough demand from CoSPs, though, we might see hyperscale cloud providers offering tuned platforms and more local data centers in the future.
Moving to the Edge
One place where CoSPs and cloud service providers can work together today is at the edge of the network. 5G reduces latency, increases bandwidth, and enables massive connectivity for applications such as the Internet of Things. Edge computing helps to take full advantage of 5G, by eliminating the latency and cost associated with backhaul to the cloud.
Cloud service providers and CoSPs both have strengths at the edge, but neither offers a complete solution alone:
- CoSPs have the network connection, and the ability to host workloads inside the network with guaranteed performance. However, many lack an edge computing infrastructure, including a software stack, and building one would be costly.
- Hyperscale cloud service providers have a software stack that developers know, but lack the in-country, high-bandwidth, low-latency network infrastructure.
Hyperscale cloud service providers are now making their cloud stack available to CoSPs on dedicated hardware or as software platforms on volume server OEM hardware. This hardware can be hosted in the CoSP’s network data center.
As a result, CoSPs can offer cloud-based services within their network. There’s no need for developers to adopt a new software stack: they can use the public cloud infrastructure and services they are familiar with. At the same time, they benefit from the low latency only the CoSP can provide.
Amazon Web Services Wavelength Zones are infrastructure deployments where a CoSP hosts an Amazon Web Services server in its data center. Amazon Web Services is working with Verizon in the US, KDDI in Japan, and SKT in South Korea. Microsoft Azure Edge Zones with Carrier is a similar offering, which launched with AT&T in the US. Google Anthos for Telecom is an example of a software platform that makes it easier to build hybrid cloud environments at the network edge.
For CoSPs, there are a number of business models they can use with this infrastructure. They could simply host the cloud stack in the network, for the benefit of the cloud service provider’s existing client base. Alternatively, the CoSP could build their own infrastructure that runs platform services from cloud service providers, or offer their own Platform as a Service to developers.
CoSPs could also deliver user-facing services on the cloud stack. For example, they could license computer vision software, run it on a cloud platform in the network, and then offer services based on computer vision. End customers wouldn’t know what platform the solution was using, but CoSPs could benefit by tapping into a rich ecosystem of existing software and developers. Running the software at the edge would enable new time-sensitive use cases, while keeping the “as a service” simplicity of public cloud for users.
When CoSPs offer these services, they take on new responsibilities for SLAs: the hyperscaler offers the cloud stack with a less strict service-level agreement, or sometimes none at all, when the data center, connectivity, and operations staff are outside the hyperscaler's control. CoSPs are highly experienced in delivering on SLAs, though, so the new responsibilities are a good fit for them.
In time, CoSPs may find that they need to work with multiple cloud service providers to deliver alternative cloud stacks at the edge. Hybrid cloud is the norm for most companies today, with an average of 3.4 public clouds (2.2 in production) at each organization1. Organizations choose different cloud stacks for different features and applications. It will be easier to migrate their applications to the network edge if they can avoid rearchitecting for different cloud service provider platforms.
CoSPs hosting public cloud infrastructure in their data centers could also run their own workloads on that infrastructure. Doing so would enable them to more easily burst to the public cloud at times of high demand. This approach may require significant software development, so a rigorous cost analysis would be a required first step.
The presence of identical cloud infrastructure hardware instances across multiple locations minimizes the effort required for application migration, validation, and support of hybrid cloud environments. Consistent hardware also simplifies operations. Intel offers an unmatched portfolio for the unique requirements of cloud and edge implementations, and we enable the open-source ecosystem, including Linux and Kubernetes, so developers can take advantage of that innovation.
Looking to the Future
Over the next few years, we expect to see increasing cooperation between cloud service providers and CoSPs. Cloud platforms may become more performant to meet the requirements of demanding network workloads. Cloud service providers and CoSPs will help to drive growth in edge computing together, so that their customers can enjoy the speed, bandwidth, and massive connectivity of 5G.
Find Out More
- Telecom Workload Placement and Affinity Model
- Using Server Telemetry to Streamline Telecommunications Network Management
Acknowledgments
Thanks to Mitch Koyama, Jay Vincent, and Paul Mannion from Intel; and to Anit Lohtia and Rowland Shaw from Dell who contributed ideas for this blog. Eslam Kandiel and Ahmed Ibrahim cowrote the Telecom Workload Placement and Affinity Model with Petar Torre.