
SP360: Service Provider - Daniel Etman - October 19, 2017

Written by Alon Bernstein, Distinguished Engineer, Cable Access Business Unit

Web-scale giants like Amazon, Google and Facebook design their infrastructure, applications and deployments to be cloud native because this is the ideal environment for supporting their core applications, including search, e-commerce, and storage. But can a system designed to handle Christmas shopping support a CMTS? Yes, absolutely. Cloud native's benefits are so compelling that it makes sense to extend the architecture with what a CMTS needs, namely data forwarding and a networking control plane. Let's look at the gaps that exist today in the cloud native architecture, how they could be addressed to support a CMTS, and how a cloud native system relates to the existing network virtualization framework, namely ETSI-NFV.

Cloud Native vs. Virtual Machine

First, a quick refresher on what cloud native is, and why it's becoming the standard for the software industry. The Cloud Native Computing Foundation defines the nature of cloud native as 'containers, dynamic management, and micro-service oriented.' For many, it's the container component that causes confusion and generates the debate of containers vs. virtual machines (VMs). Let's clear up that debate. In reality, the VM vs. containers question boils down to the packaging option. Cloud native advocates micro-services, so it makes sense to use lightweight packaging (i.e. containers). By contrast, the main benefit of a VM is hardware virtualization: a VM enables a 'lift-and-shift' by taking an existing code base and emulating a server that looks like the original platform hardware. Because cloud native already requires a massive re-write of the software to remove all hardware dependencies (per the twelve-factor app rules), and virtualizing the hardware with a VM only adds overhead, a VM in a cloud native environment slows performance and adds no benefit.

Some might ask, is the software re-write worth it? Well, it's all about velocity, availability and scale. And these features are all enabled by the inherent design of the cloud native architecture: it's a modular and highly distributed system. Because of this modularity and distribution, the system is not sensitive to faults. This reduces the risk of deploying a new software version or new configuration, because even if a new deployment happens to cause a fault, the service is not interrupted. Now you have a virtuous cycle: because updates are less risky, they can be made more frequently, which means the delta between updates is smaller. Instead of doing a massive update every six months, we can do tiny, incremental updates daily, eventually supporting a continuous integration/continuous delivery (CI/CD) approach that increases feature velocity. None of these benefits is achieved by packaging a software monolith in a VM.

The distributed nature of a cloud native system also results in higher system availability. Because the system is distributed, there are no single points of failure by design. In fact, the cloud native design rules are so effective at providing high availability that you can build a system with 99.999% availability from components that are only 99.9% available, resulting in cost savings without compromising service.
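To see why those numbers work out, here is a minimal back-of-the-envelope check (my own illustration, not from the original post), assuming replicas fail independently and failover is instantaneous:

```python
# If each replica is independently 99.9% available and the service stays up
# as long as at least one replica is up, then
#   unavailability = (1 - 0.999) ** n
replica_availability = 0.999          # "three nines" per component
for n in (1, 2, 3):
    downtime_fraction = (1 - replica_availability) ** n
    print(f"{n} replica(s): {(1 - downtime_fraction) * 100:.4f}% available")
# 1 replica(s): 99.9000% available
# 2 replica(s): 99.9999% available  -- already beyond "five nines"
# 3 replica(s): 100.0000% available (rounded)
```

Real deployments fall short of these idealized numbers because failures correlate and failover takes time, but the compounding effect is the point of the claim above: redundancy built from modest components can exceed the availability of any single expensive one.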

Adding Support for Packet Forwarding

What's missing from existing cloud native systems? At the moment, they don't handle packet forwarding. Cloud native systems are oriented around transaction handling, and a common design is to place a load balancer in front of several stateless containers. Here's a simple example (sketched in code below): say we have a micro-service (in a container) that adds two numbers, X+Y. As customers request more additions, we spin up more containers and load-share the demand across them. As a side note, you can see how this design pattern helps availability: if one container fails, the load balancer simply diverts the traffic to a container that is still running.
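As an illustration (my own sketch, not from the post), here is what such a stateless adder micro-service might look like; the port, path and JSON response shape are assumptions made for the example:

```python
# Minimal stateless X+Y micro-service using only the Python standard library.
# Because the handler keeps no state between requests, any number of identical
# copies can run behind a load balancer.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class AddHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests like GET /add?x=2&y=3
        params = parse_qs(urlparse(self.path).query)
        try:
            result = float(params["x"][0]) + float(params["y"][0])
        except (KeyError, ValueError):
            self.send_error(400, "expected numeric x and y query parameters")
            return
        body = json.dumps({"sum": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each container would run one of these; the load balancer spreads
    # requests across however many replicas are currently up.
    HTTPServer(("", 8080), AddHandler).serve_forever()
```

Because the handler holds no state, losing a replica loses nothing; the load balancer simply stops sending it traffic.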

Unfortunately, the load balancer design does not work for data forwarding, because the load balancer itself becomes a performance bottleneck. This is not an issue when load balancing transactions, because processing the transaction takes longer than redirecting the HTTP request (well, maybe not for our X+Y example, but serving a web page definitely takes more time than an HTTP redirect). The way to maintain the cloud native approach for packet forwarding is to stitch traffic flows directly to a path, based on the scaling and availability demands of the system at any given point in time, instead of using a load balancer; a rough sketch follows.
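The following is a minimal sketch of that flow-stitching idea, under assumptions of my own (a 5-tuple flow key and instance names like dp-0, which are not from the post): the flow is hashed once to select a forwarding instance, so no load balancer sits in the per-packet path.

```python
# Illustrative flow pinning: hash each flow's 5-tuple once to pick a data-path
# instance, so every packet of that flow follows the same path without a
# per-packet load balancer in the middle.
import hashlib
from typing import List, Tuple

Flow = Tuple[str, int, str, int, str]   # src IP, src port, dst IP, dst port, protocol

def pick_instance(flow: Flow, instances: List[str]) -> str:
    """Deterministically map a flow to one forwarding instance."""
    key = "|".join(str(field) for field in flow).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return instances[digest % len(instances)]

forwarders = ["dp-0", "dp-1", "dp-2"]    # containers doing packet forwarding
flow = ("10.0.0.5", 49152, "203.0.113.9", 443, "tcp")
print(pick_instance(flow, forwarders))   # every packet of this flow -> same instance
```

A naive modulo hash reshuffles many flows whenever an instance is added or removed; a production system would more likely use consistent hashing or an explicit flow table so existing flows stay pinned while capacity scales up and down.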

Another issue is that many networking protocols are not HTTP based, so even for control plane transactions, where load balancing across containers does fit, a system like Kubernetes can't load balance them 'out of the box'. Fortunately, in the Kubernetes environment it's fairly easy to write custom load balancers based on any protocol, along the lines sketched below.
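As a hedged illustration of such a custom balancer (again my own sketch, not Cisco's code), suppose a UDP-based control-plane protocol carries a 4-byte session ID at the start of each message, an assumption made purely for this example; a small dispatcher can then key on that field so every message of a session lands on the same backend:

```python
# Minimal protocol-aware UDP dispatcher: forwards datagrams to a backend
# chosen from the (assumed) 4-byte session ID at the start of each message.
import socket

BACKENDS = [("10.1.0.11", 5000), ("10.1.0.12", 5000)]   # control-plane pods (example addresses)

def run(listen_port: int = 5000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", listen_port))
    upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data, _client = sock.recvfrom(65535)
        if len(data) < 4:
            continue                                      # ignore malformed messages
        session_id = int.from_bytes(data[:4], "big")
        backend = BACKENDS[session_id % len(BACKENDS)]    # same session -> same backend
        upstream.sendto(data, backend)

if __name__ == "__main__":
    run()
```

A real deployment would also relay replies and discover backends dynamically rather than hard-coding addresses; the point is only that protocol-specific balancing logic is a small amount of code.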

Cloud Native and the ETSI-NFV model

Another question many ask is how cloud native fits the ETSI-NFV model, which is very much a lift-and-shift architecture, with the networking functions encapsulated in VMs running in an OpenStack environment. To gain traction in this area, ETSI-NFV has started a work item on cloud native; however, it will take time for that work to achieve greater acceptance.

One simple way to fit cloud native into the ETSI-NFV framework is to state that the NFVI (network function virtualization infrastructure) is cloud-native based and that all the management functions on top of it are ETSI-NFV. This is a bit of an over-simplification, as NFVs are frequently re-designed and changes in the infrastructure tend to percolate upwards. As a simple example, with cloud native we don't use the active/standby model (because cloud native is all about dynamic scaling and is closer to an active/active model), so the whole management of availability is different. Still, as a mental model one can think of cloud native as a form of NFVI, and it's very likely that as we go up the manageability stack, e.g. to the definition of services, fewer and fewer of the cloud native specifics will impact the higher management layers.

In summary, creating a cloud native system or environment is more than just throwing an existing function onto a general-purpose server, and it's certainly more than re-packaging a software monolith in a VM. Cloud native is about re-thinking scaling, availability, software upgrades, deployment (DevOps) and organizational structure. It is a tall order, but it's in line with how complex software systems are built, delivered and deployed in the 21st century.

Come see us in Denver, Colorado, October 17-20, at the SCTE Cable Expo, booth #987 for a live Cloud Native CMTS demonstration.



Original document: https://blogs.cisco.com/sp/bringing-the-benefits-of-cloud-native-to-network-function-virtualization
