Open Network Edge Services Software (OpenNESS)

Enterprise customers expect to take full advantage of network function virtualization (NFV) on communications service provider networks. Communications service providers want to support cloud and IoT developers to be able to build, deploy, and manage a new wave of 5G and edge services that can be readily migrated from the cloud to the edge network and the on-premise edge. The Open Network Edge Services Software (OpenNESS) platform addresses these concerns. It abstracts the complexity of the network and provides an easy-button approach to "network-as-a-service" capabilities for the developer. In this timely webinar, you will learn:
- What is OpenNESS?
- What are the edge requirements?
- What edge solutions are being deployed today?

Our speakers include Chris Reece (Technologist, Award Solutions), Bob Pike (CEO, Smart Edge), and Prakash Kartha (Segment Director, Network and Custom Logic Group, Intel).

Transcript

Hello and welcome to OpenNESS and the Enterprise Edge webinar, presented by Intel. My name is Chris Reece, Technologist at Award Solutions, and I'll be hosting the webinar today. Joining me are Bob Pike, CEO at Smart Edge, and Prakash Kartha, Segment Director at Intel.

Our agenda today: we'll start with a discussion of what the enterprise edge is. We'll then transition into a discussion of what OpenNESS is. And then we'll move into looking at some edge solutions with Smart Edge. You'll see on your screen there is a Q&A panel. If you have any questions during the webinar, please ask them using the Q&A panel.

We have saved some time at the end of the discussion to answer Q&A. Let's start with a discussion of, what is the enterprise edge? So let's look at the motivations for evolving our computing platform. Now the first question that you might ask is, why?

Why do we need to go through a transition of the edge? There are three key contributors to the motivation to evolve our computing platforms. These are workload placement, utilization of existing resources, and the needs of services that we're trying to access.

When we think about the evolution of computing platforms, we need to think about workload placement. A workload is just software, or an application that a user is trying to access. Workload placement is the placement of that software on an appropriate computing resource in a specific geographical location.

So when we think about workload placement, we need to think about the needs of the service the user is trying to access. Two key demands of a number of services today are the need for lower latency as well as the demand for higher bandwidth. Now there's always a motivation for a communication service provider to capitalize on their strengths.

One of the key strengths of a communication service provider is that they have a large number of points of presence, or PoPs, throughout their network. They already have space at these geographical locations that they can put to use. So being able to exploit that with the deployment of computing resources could lead to a competitive advantage.

As services become more and more complicated, there is a need for the devices to be smaller and more battery friendly. If we can move some of these computationally complex workloads out of the end user device itself and into a cloud environment, that could facilitate a longer battery life for the device. And some devices, like wearables, have a need for a very small form factor.

And they require help from a cloud environment to support all sorts of workloads. At the end of the day, the motivation for the evolution of our computing platforms is to improve the quality of experience of the user. So today, there are typically two places where we will deploy workloads: either in a public cloud or in a private environment.

The public cloud environment is represented by the hyperscale cloud in this graphic. The hyperscale cloud typically has high computing resources and the ability to easily scale as workload demands increase. The second place we typically see workload deployments today is within a private cloud that has been established by a communication service provider.

This cloud is represented by the telco cloud in this graphic. The telco cloud typically supports workloads required by the communication service provider themselves, including workloads for core networking functions or operational functions, just to name a couple. The challenge of both of these cloud locations is distance: the distance between the user who's trying to access some service and the workload that supports that service.

The farther the distance, the longer the delay on a packet-by-packet basis. As well, the farther the distance, the more transport costs there will be as we carry this information from the user to these more centralized locations.
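
To put rough numbers on that: light in optical fiber covers roughly 200 km per millisecond (about two thirds of c), so round-trip propagation delay scales directly with distance. The sketch below, in Go, makes that arithmetic concrete; the distances are illustrative assumptions, not figures from the webinar.

```go
package main

import "fmt"

func main() {
	// Light in fiber covers roughly 200 km per millisecond (~2/3 c).
	const kmPerMs = 200.0

	// Illustrative distances; not figures from the webinar.
	hops := []struct {
		name string
		km   float64
	}{
		{"nearby edge PoP", 50},
		{"regional telco cloud", 500},
		{"distant hyperscale region", 2000},
	}

	for _, h := range hops {
		rtt := 2 * h.km / kmPerMs // propagation only; queuing and processing add more
		fmt.Printf("%-26s %5.0f km  ~%4.1f ms round trip\n", h.name, h.km, rtt)
	}
}
```

Even before any processing time, a workload 2,000 km away starts roughly 20 ms behind one hosted 50 km away.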

Now let's take a look at that same architecture, but let's add an edge cloud environment to it. The edge cloud is geographically located closer to the users. It will be deployed in the communication service provider's points of presence and will be able to support workloads for the services that users want to access. And since it's geographically closer to the users, it will deliver lower latency and allow us to carry a lower amount of transport traffic throughout our network.

But there is a challenge to this. How do we create an edge cloud environment that will easily facilitate the deployment of these workloads with high quality-of-service demands? How can we have an edge computing environment deployed so that when someone writes a piece of software, it can be easily deployed and utilized by the users? To answer this question, I want to turn things over to Prakash Kartha from Intel. Prakash, take it away.

Thank you, Chris. Today I'm going to talk about OpenNESS, or Open Network Edge Services Software. OpenNESS is an open source initiative for the edge that we kicked off earlier this year. There has been significant progress on this since we announced it at MWC in Barcelona in February.

Today I'll provide an update on the solution, as well as talk about our partnership with Smart Edge as the commercial solution for OpenNESS. To kick things off, let's start with some context and definitions for the edge from an OpenNESS perspective. So we think about the edge in the context of both the on-premise edge and the network edge.

And the reason we do that is that the characteristics of the edge differ quite a bit between these two modalities. The on-premise edge is dictated a lot by IoT-type environments. The network edge is dictated a lot more by the telco network equipment type of environment.

And the on-premise edge platforms have the ability to not just connect IoT devices; they also have a networking component to them, such as a fronthaul and a backhaul network. However, we are seeing a transition, a convergence of these network-aware on-premise edge devices with what was traditionally known as an IoT gateway.

So we do consider on-premise edge within scope for OpenNESS. At the same time, the network edge started out with the notion of mobile edge computing, but it has expanded beyond the original idea of mobile edge computing to also involve all types of networking devices, appliances, racks, platforms.

That may go anywhere from base stations to wireless aggregation points to central offices. And we believe that there is a tremendous amount of commonality between these two edge environments, but at the same time quite a bit of uniqueness between the two as well.

The characteristics, from the perspective of improved service capabilities, improved TCO, and the ability to do data locality, are all still very pertinent to both the network edge and the on-premise edge. Next I will talk about the problem statements that we are trying to address with OpenNESS.

And I'll discuss this in the context of, again, both the on-premise edge and the network edge. So let's first talk about the on-premise edge. The on-premise edge, again governed so much by what's been happening in the IoT space, has a lot to do with integrating OT and IT, and now networking technology. It's all coming together.

There's always a factor to be considered for on-premise, in particular, around legacy equipment. If you are going into a factory floor or into a retail store, there's always existing equipment that needs to be considered when edge computing comes into play. So that's a big consideration for a new edge platform to be deployed in an existing Brownfield or sometimes Greenfield environment.

Similarly, on the network edge there is this notion of legacy, but there is also this notion of network function virtualization. And NFV, as this foundational platform, has been taking hold in the networking space. And that's opening up opportunities for edge computing, because now, in addition to the networking, the virtualized network functions, you can also land end-user workloads, or edge computing.

At the same time, NFV is moving very fast to cloud native. So the ability for edge applications to run on not just a virtualized but also a cloud native, containerized environment becomes very, very important and a big consideration from a business perspective. And all these edge applications are still running on a mobile or wireline network, which means that existing services still need to be maintained, and SLAs still need to be met, in addition to being able to support these new services.

And common across both the on-premise edge and the network edge are some common challenges, the prime one being cost: capex and opex reduction. Edge computing has to be delivered in a way that you're not, over time, increasing your capex and opex. There is the transition to 5G, which is going to be a big catalyst for edge computing. But at the same time, there are existing edge computing deployments that are happening on LTE.

So the world is not waiting for 5G. Edge computing is starting right away. And then there's the notion of workload convergence, workload consolidation, where the real value comes when you are able to combine ecosystems: ecosystems from media, from AI and analytics, with networking. So that's when the true power of edge computing comes to bear.

From a technical perspective, the challenges are slightly different. And again, it really varies between on-premise edge and network edge. The one thing that we have observed when we first started out with OpenNESS is the diversity of the edge. And the diversity comes from the different types of use cases that have to be deployed on the edge. The different deployment profiles, the different legacy environments that have to be considered.

So the original idea of multi-access edge computing, in our frame, has moved beyond multi-access into multi-everything computing. As in, everything is multi. Your access networks obviously continue to be multi, as in you have to have the ability to support wireline, Wi-Fi, LTE, and now 5G. And within LTE there are different frameworks supported, whether you support an on-premise CUPS-type access termination versus a deeper-in-the-network SGi-type termination when it comes to LTE.

Now, 5G is a lot more mature from an edge computing standards perspective. There are frameworks within 5G that support user plane termination and forwarding of traffic. So all these underlying frameworks have to be considered so that an application, or an edge application, does not have to worry about what specific access termination methods have to be used.
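
As a sketch of what "not worrying about the access termination" can look like from the application side: the rule below names only the traffic and the destination application, and it would be the platform's job to translate that into an LTE CUPS rule or a 5G user-plane forwarding rule underneath. Written in Go; the type and field names are hypothetical illustrations, not the OpenNESS schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SteeringRule is a hypothetical, access-agnostic steering request.
// The application never mentions CUPS, SGi, or a 5G UPF.
type SteeringRule struct {
	Description string `json:"description"`
	DstSubnet   string `json:"dst_subnet"` // traffic to match
	TargetApp   string `json:"target_app"` // edge app that should receive it
	Priority    int    `json:"priority"`
}

func main() {
	rule := SteeringRule{
		Description: "send camera feeds to the local analytics app",
		DstSubnet:   "10.20.0.0/24",
		TargetApp:   "video-analytics",
		Priority:    10,
	}
	out, _ := json.MarshalIndent(rule, "", "  ")
	// The same rule applies whether LTE or 5G sits underneath.
	fmt.Println(string(out))
}
```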

Similarly, data planes. There are different data planes that need to be considered for this diverse edge. Certain applications that run on an on-premise, universal CPE type platform may have a certain type of data plane, or may not even have a data plane if it's not required. But a platform sitting on the network edge, such as a base station or a central office, will have a different type of data plane.

As an application developer, you should not have to worry about what data plane you're running on and should not have to refactor your code to work on different data planes. The controller mechanism is very different for these data planes. And again, as an application developer, you should not have to worry about it. Orchestration is a big deal. In the network edge, orchestration is a big part of the conversation.

However, at the on-premise edge, some of these standard orchestrators, or even some of these virtualization infrastructure managers like OpenStack and Kubernetes, have issues when deployed on-premise. So having the ability to write applications that can work on all these types of environments becomes super critical.
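
A minimal Go sketch of that write-once idea, assuming a single interface that each deployment's data plane implements; all names here are hypothetical, for illustration only, not OpenNESS types.

```go
package main

import "fmt"

// DataPlane is the one interface the application codes against.
type DataPlane interface {
	Steer(dstSubnet, targetApp string) error
}

// kernelDataPlane stands in for a lightweight on-premise uCPE path.
type kernelDataPlane struct{}

func (kernelDataPlane) Steer(dst, app string) error {
	fmt.Printf("kernel path: route %s -> %s\n", dst, app)
	return nil
}

// nicDataPlane stands in for a central-office or base-station path.
type nicDataPlane struct{}

func (nicDataPlane) Steer(dst, app string) error {
	fmt.Printf("nic path: program flow %s -> %s\n", dst, app)
	return nil
}

// deploy holds the application logic; it is identical on every data plane.
func deploy(dp DataPlane) {
	_ = dp.Steer("10.20.0.0/24", "video-analytics")
}

func main() {
	deploy(kernelDataPlane{})
	deploy(nicDataPlane{})
}
```

The application's code never changes; only the implementation handed to it does.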

If not 90%, then most of the applications that will move to the edge are coming either from the IoT side or from the cloud side, and it's pretty much guaranteed there's going to be some element of cloud computing as part of these applications. Whether for data analytics or data storage, there's always a cloud component associated with them. So if you have an application vendor who has already invested in a certain go-to-market strategy, they've involved a cloud in that.

For them to go to the edge, they cannot simply rip themselves away from the cloud. If they've invested in a certain cloud vendor, they've also tied themselves to a certain set of algorithms, for example, for that cloud vendor. So the ability to move an application from a specific public cloud to an edge, that framework needs to be available. So all in all, this is the problem space.

The ability for an application vendor to write their edge service once and run it in any of these frameworks. Access network, data plane, control environment, orchestration, multicloud: these are all considerations that we've taken into account for OpenNESS. So what is OpenNESS? OpenNESS is an open source software toolkit to enable easy orchestration and management of edge services across diverse network platforms and access technologies in multicloud environments.

So all the aspects that we talked about earlier are considered in the design of the Open Network Edge Services Software. To dive a little bit deeper into OpenNESS, we'll look at a few different aspects. The first part is, OpenNESS is not a monolith; it's a set of microservices. And that was a design that was considered with a lot of deliberation, because what we understand is there are platforms already in existence.

You're not looking to go replace an existing platform, you're looking to add value to existing platforms. And the best way to do that is to be able to deliver a modular and flexible architecture through microservices. So every component within OpenNESS is expressed as a microservice, which means it's sitting in some kind of a container environment with north, south, east, west APIs. So different microservices can be combined in different forms to solve certain problems.

Each microservice has a specific job. For example, the application agent microservice is basically a service mesh. It's based on the idea of applications being able to talk to each other through a pub/sub bus using a producer/consumer concept. The data plane agent is a microservice that lets you connect to different data planes and abstract yourself from the data plane, going back to the previous discussion.
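
To illustrate the producer/consumer pattern just described for the application agent, here is a toy in-process pub/sub bus in Go. It shows the concept only; it is not the OpenNESS service-mesh API.

```go
package main

import (
	"fmt"
	"sync"
)

// Bus is a toy pub/sub bus: producers publish to a topic,
// consumers subscribe to it.
type Bus struct {
	mu   sync.Mutex
	subs map[string][]chan string
}

func NewBus() *Bus {
	return &Bus{subs: make(map[string][]chan string)}
}

// Subscribe registers a consumer and returns its receive channel.
func (b *Bus) Subscribe(topic string) <-chan string {
	ch := make(chan string, 8)
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subs[topic] = append(b.subs[topic], ch)
	return ch
}

// Publish fans a message out to every subscriber of the topic.
func (b *Bus) Publish(topic, msg string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[topic] {
		ch <- msg
	}
}

func main() {
	bus := NewBus()
	updates := bus.Subscribe("location")          // a consumer app
	bus.Publish("location", "ue-42 entered zone") // a producer service
	fmt.Println(<-updates)
}
```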

The virtualization agent helps you connect to different virtualization environments: whether you want to use an OpenStack-based virtualization environment, or you prefer to go to a Kubernetes-based virtualization environment, or you decide not to have a virtualization environment and go completely direct to the platform in, let's say, an IoT or on-premise type deployment. Similarly, going from LTE into 5G, the access termination methods are different.

5G is a lot more standardized compared to 4G, but there are methods in 4G to do traffic steering and access termination using a capability known as CUPS, or control plane/user plane separation. The core network configuration agent that's listed here is a microservice that lets you connect to different types of networks, whether that's LTE or 5G.

Similarly, on the platform side you have a lot of diversity between what kind of platform sits in a central office versus a base station versus the customer premise. And the hardware ingredients will look very different. They'll all have CPUs, for sure, but depending on where you land your platform, you may have certain types of accelerators for, let's say, media or AI. Or you may have an FPGA for wireless acceleration, for example.

And you need software that can abstract out all the hardware complexity and then present a uniform face to the applications. So the OpenNESS software logically is split into what sits on the edge, or the network element, and what sits on the control function, or the element manager. And at the same time, it's able to work with different types of industry frameworks, different types of ecosystems, and so on.

With that, I'm going to go one level of detail further and talk about the architecture. So the OpenNESS microservices, like I mentioned earlier, live within the ecosystem of an existing set of platforms. This is not intended to replace Kubernetes, for example. It's intended to work with Kubernetes for the network edge, or to work without Kubernetes for a low-footprint on-premise type environment.

So all the APIs are bundled through these microservices. And these APIs are all open; they're expressed as either gRPC or REST APIs. For example, the controller microservice has northbound APIs that can be integrated with a service orchestrator, and southbound APIs that can be integrated with a virtualization infrastructure manager.

The controller has APIs that can be used to interoperate with a core network, whether it's a 5G core network or a 4G/LTE core network, and gives you the ability to not have to change your application depending on which core network you use. Similarly, the data plane agent has APIs to do traffic steering without having to worry about what the ingress traffic looks like, whether it's coming from an LTE eNodeB, a 5G gNodeB, or a wireless access point.
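
As a hedged sketch of what driving such an API could look like from a client, here is an HTTP POST of a traffic rule to a controller, in Go. The host, path, and payload fields are invented for illustration; the actual REST and gRPC interfaces are documented with the OpenNESS release linked from openness.org.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical rule payload, echoing the earlier SteeringRule sketch.
	body := []byte(`{
	  "description": "steer camera traffic to local analytics",
	  "dst_subnet": "10.20.0.0/24",
	  "target_app": "video-analytics"
	}`)

	// The controller address and route here are invented for illustration.
	resp, err := http.Post("http://controller.example:8080/api/v1/traffic-rules",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("controller replied:", resp.Status)
}
```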

Similarly, there are APIs for a specific, enumerated set of public cloud frameworks. Some examples are Amazon Greengrass, Azure IoT, and Baidu Intelliedge. Now in all of these cases, we expect applications that are specifically written for these cloud frameworks to be able to be containerized and then land on an OpenNESS platform, use the OpenNESS service mesh, or the application agent, and still be able to communicate back through proprietary cloud APIs to the public cloud of choice.

The advantage of this kind of approach is that your application vendor does not have to abandon the investment they've already made in a cloud platform, but can still land that application on the edge. Now, the cool thing about this architecture is that you can still enable multicloud, which means as an edge platform you can still connect to your Amazon cloud or your Azure cloud or a different cloud and basically have a multi-tenant environment where different cloud applications are all landing on a single edge.

So that's the beauty of this architecture. So moving on to the users. So we'll talk about two types of users. The service providers and, of course, the enterprises who are customers of the service providers. And then we'll talk about developers next. From a service provider standpoint, being open source is critically important because there is no one size fits all when it comes to the edge.

We want to make sure that all the software is available in a form that can be extended and improved upon by the ecosystem. We want to move together with the ecosystem and not create silos. So it was very important for OpenNESS, from the very beginning, to have the code be available out in the open, and, in fact, the code has already been posted on GitHub.

We'll talk about that in a couple of minutes, how to go get access to it. So the code is open source and it makes it easy to do rapid prototyping. It makes it easy to commercialize with commercial partners. After I'm done speaking, we'll have Bob Pike from Smart Edge talk about how they've taken open source, OpenNESS, and partnered with us.

Not just in the development of OpenNESS from an API and microservices perspective, but also in looking at a specific set of use cases for enterprise on-premise and how that open source framework, with additional value-added features, can go solve specific enterprise problems. The flexibility is critically important: being able to add features when customers need them, and not just relying on a closed platform.

From a developer standpoint, we look at two types of developers: the application developers and the platform developers. The application developers are the ones we talked about earlier, who have typically coded for the cloud or for IoT and now want to move their application to the edge. And the ability to not have to refactor their code is supremely critical.

And that's one of the design parameters that we've instilled into OpenNESS: the ability to write once and deploy on any edge. Similarly, for the platform developer, the ability to connect and run their platform on different types of edge modalities, whether that's, let's say, a uCPE-based edge, a wireless base station based edge, or a central office based edge. That ability for the platform to migrate between different edge environments is very, very important.

So these are the reasons application and platform developers are considering OpenNESS today. I'll close with some notes on how to get access to OpenNESS. Like I mentioned earlier, OpenNESS was announced at Mobile World Congress in February. Since then, we've made the first release of OpenNESS, and it's available via GitHub.

So if you go to openness.org-- and that is basically the English word, openness.org-- you will find links to download the OpenNESS software directly from GitHub. You will also find a lot of application notes, guides, and white papers that should prove very useful. We are also going to be making available a training program through the Intel Network Builders University.

That training program will be available soon, and you will be able to get a much deeper view of the different aspects of OpenNESS as part of that. And finally, we have commercial partners for OpenNESS. And these commercial partners are enabling operators and enterprises to be able to take the goodness of OpenNESS and go to market today.

And with that, I'd like to introduce Bob Pike from Smart Edge. He is our lead commercial partner, taking some of the software from the OpenNESS perspective and bringing it to market for specific enterprise use cases. Bob, take it away.

Thanks, Prakash. Yes, just to reinforce Prakash's comments: we're the commercial rendition of OpenNESS. It's been a wonderful collaboration with Intel. Intel's a very gifted software company, as well as, obviously, the silicon company we all know.

We at Smart Edge have focused on the enterprise portion of the opportunities associated with the network edge. We're focused on what we would call the enterprise edge, or any kind of commercial deployment. So one of the things that we started with when we were working on Smart Edge is that we began working from the use case back into the technology, or the plumbing.

Chris foreshadowed this: one of the things that we emphasized is, what could we do to improve the experience of a user, as well as create monetization capabilities both for the enterprise client and for the operator? So as we began to work on these things, working from the use case in, we quickly realized we needed to work with the industry application suites and align with the industries that I'll speak to a little bit later.

As we talked to the enterprises, they also said that when they consume LTE, it needs to be simplified or abstracted so that the enterprise can consume it in a way where they don't have to change anything. Something that came from everyone is that security had to be foundational. It had to be, let's say, built in. If it was overly complex, then it was going to be a challenge for them.

As we began to look forward into the next phases, we looked into some of the applications that we were supporting. So we had to support both VMs and containers at the edge, as well as the VNFs, in a single platform. So I think we've already communicated this fairly effectively: we're focused on the premise, or enterprise, edge. Think about a demarc, right: the demarcation point between an operator and the enterprise.

That's where we have focus. That's where we continue to focus. It's an area where there's so much that we need to do to ensure that this is, let's say, as simple and as operationally solid as possible. So this is the area that we're focused on. This is the slide that I'll spend the most time on, and it really relates to where we realized we needed to put our energy.

We picked four industries to put our efforts around. And it's logical why we picked them: retail, health care, transportation, and manufacturing/industrial. We picked these because there's typically a security element. All of them have, let's say, use cases that involve a lot of content. They have a lot of mobile devices and/or IoT devices.

So an accounting firm wasn't necessarily where we wanted to focus. But these four came out both from our own team as well as from the folks who consulted with us on where we should focus. So we began to assemble the different resources that knew these industries. One of the things that Prakash mentioned, and that we definitely learned together, is that you have to have an awareness of the OT environments and control systems that exist in, let's say, manufacturing.

You really needed to have a good awareness of what the use cases were in each of these industries in order to architect the platform in a way that can be consumed simply, right, without a lot of changes. So one thing that was interesting: with one client, we were communicating about AR and VR, and they came back and said, well, it's important that you help us with our core business process platforms, because that's typically where the budgets exist.

And remember, we wanted to make sure that this could be, let's say, monetized. And that's where most of the devices are present. And if you think about each of these industries: point of sale in retail, patient records in health care, inventory in transportation. And in manufacturing/industrial, anything associated with production.

So we did adjust our strategy, not that we bailed on some of the more interesting use cases that really leverage the benefits of being at the edge. We went aggressively at understanding these core business processes and how we can make them better. The first thing that we saw out of the use cases was that folks really appreciated the ability to keep the traffic local and to add a different, let's say, access method. LTE has been well received.

It isn't that the MEC doesn't support Wi-Fi; it does. But it was interesting: there seems to be a real need for both. We're not seeing a drop-off in content or demands on experience. So the additional access method and control of it, and the ability to cache at the edge in a way that could be consumed by both Wi-Fi and LTE. Although it sounds simple, there was high value in that.

The next area, which we saw in manufacturing, was really the connecting of IoT, because there are IoT islands. Just simply establishing a connection point for those IoT islands was considered high value. Interestingly, these first two elements were fairly straightforward and simple, but they added value and they also had solid TCO or business cases. The value enhancers seemed to be best implemented underneath these core processes.

So as you added analytics or deep learning into the core process, it was just a logical step. So we're definitely beginning to see more of that presently: doing some analytics on video streams, doing some more, let's say, local analytics associated with sensor data. So we are beginning to see some exciting, more advanced, let's say, capabilities being introduced in the deployments.

As for the carrier-grade policy and security elements, we'll get into security a little bit more as we move along. Initially, we were predominantly working through the UIs, but as Prakash foreshadowed in his discussion, we are now seeing the power of these open APIs, and we're actually getting additional input on what the clients are expecting out of them. So more and more, we're seeing clients use the API construct as opposed to the UI.

The TCO, let's say, has been very positive in most instances. The part that has been the challenge is some of the operational things that really need to occur to support this. Because think about it: this is truly converged. So you'll have multiple departments involved, because this is network and this is compute. So you will have the server teams involved, and this is OT and connectivity in nature.

So one of the areas that we are working hard on with the Intel team is to make sure we build things so that this fits into the existing enterprise support systems in a graceful way. With the next slide, what we want to reflect-- and Prakash did a wonderful job on this-- is where the alignment between the Intel OpenNESS team and our developers has been fabulous.

Because we're able to, as Prakash said, simplify the use of acceleration at the edge. So it is really way more, let's say, easy for the developer, or for an application running in any one of the hypervisors or containers we support, to be able to use that resource. And now you can get, let's say, performance enhancements much more simply.

And it becomes, let's say, less of a DevOps challenge or an integration challenge when you want to implement something. And we're pretty excited about some of the new things: there's some new memory that we're beginning to test at the edge, and there are some new enhancements in elements such as SGX, which can help with secure enclaves.

So we're pretty excited about the use of both the Intel software and silicon, specifically at the premise edge or an enterprise edge to enhance the experience, reduce the cost and simplify how use cases can be enabled. This slide is a simpler reflection of a slide that was communicated earlier on.

You'll see on the slide at the top, one of the things the controller is built to do is manage one to many. An enterprise will have hundreds of locations; they can manage them in a, let's say, simple view, with a straightforward, common control point. The platform is built to be multi-tenant in nature. So if, let's say, a venue or an operator requires some multi-tenancy, that is native in the platform. It's something where we do leverage a lot of the security work that we've collectively done.

If you look at the edge node there, you'll see the API, or that line; that's the east/west, pub/sub framework. Underneath that is where the VNFs live, and that's what I think Prakash just described extremely well. We are definitely looking at a lot of deployments where it's multi-access. That has been exciting, and there are also lots of things you have to learn there as we begin to connect other, new access methods.

This slide is one to reflect on. It was interesting: as we got into these clients, they began to bring their software vendors and their OT vendors to the table to work with us on integration. So this really drove the point home for us that we needed to simplify the interaction with the existing tools. As you look at the slide, you'll see the common platforms that a client would have.

We need to be able to connect to those things, or even enable maybe a container microservice on the platform, in a way that simplifies it, such that there is no lost effort. The comment that Prakash made on Brownfield just resonated. We really had to work hard to make sure that we created something that was easily integrated into the existing environment. You were rarely going to find a Greenfield setting.

If you look at the device management, as you think about this, there's tools that already exist inside these enterprises for device management and also user management. We needed to create in the platform the ability to integrate with these things so that there wasn't duplication of work for the enterprise. That we followed or fit into their systems in a very, let's say, logical and supportable manner.

I think Prakash did a good job talking about Cloud Federation, that is definitely something that's going on. Most of the folks that we've dealt with will have multiple cloud elements, and they'll want to maybe even share content between them. So it's something that we're definitely seeing out in the field.

This next slide will take us a little bit deeper into some of the use cases that we're seeing. I've already talked about the gateway consolidation. We're definitely seeing some activity with computer vision, and that's where taking advantage of some of the acceleration is important. We are just beginning to look at things like TSN or industrial ethernet or PROFINET kinds of elements.

We've begun working in the third pillar there. As you think about a smaller retail location, someone will say, can you add in things like SD-WAN? Can you add in these other elements? So uCPE and this MEC and this application, there is a convergence of them. It's kind of fun right now, because we're just now in the midst of that journey. And it's exciting to see what the opportunities are for simplifying a store deployment and reducing the cost.

Now there's complexity that we all need to continue to work through. This is a nascent market, but it is fun and exciting to be able to be a part of something in its early phases. No question in my mind, we're going to solve all these things, but this gives you where we are in our journey. The third pillar there, you see, I've addressed most of that. One area where there is a lot of work going on right now is the idea of presence, or location-based services. There are so many ways of doing that.

That is definitely being, let's say, worked on now. I don't think there's really an end-all solution. When you think about presence and location, it really is a combination of different tools that are out there in order for you to give the presence or location data that the application is looking for. But one of the things that was clear to us: we need to make it simple for the applications to be able to consume presence and location data.

So one of the areas that we've worked a lot with the teams on is the idea of introducing utility to the platform, such as presence data. We introduced the idea of a virtual tap for various DPI engines. Initially, we put in a dedicated API function, but most clients said, hey, I already have my own; can you just enable it so I can use my own? That was an area of development. And then, finally, telemetry data was something that the security organizations wanted to have.

So there's a security module now. So the platform is beginning to become richer from the input that we're getting from the enterprise clients. The other piece, the theme that we're learning, is: hey, we already have a lot of tools; just help us use them at the edge in a simplified way that lets us do things now that would have been difficult, and costly quite frankly, before.

This slide here is really around retail. I think I've covered most of this. One thing I will call out here, though, is that we're seeing very much an integration of using both Wi-Fi and LTE. This is fun and interesting: you're seeing LTE being consumed for the backend systems. It does a nice job with video; there are some things we see from an LTE perspective. And then freeing up Wi-Fi and only having one SSID facing the consumer.

And then being able, in a secure and privacy-protecting manner, to interact with that individual's digital persona. One use case: as long as we protect their privacy and we're secure, there's no reason we can't interact with that user in a way that reflects their persona, in a way that's graceful, right. If they've said yes and opted in, we'll know what apps they have.

It allows us now to have a little bit more of a first-person element to the interaction. We can be more precise in the ads and the marketing that goes on. We get a lot of good data out of the backend. The one thing we're hearing, especially from a younger generation, is that they expect the technology to react to them. In my day you just got up and turned the channel. They're expecting the technology to react to them.

And that's part of what the power of edge can do, because you definitely have [INAUDIBLE] work context. You know that UE or that item is attached. You know a lot more about it, as long as you have permission to know about it. I want to finish with this: the area where we've had the most effort, work, and technical interaction among the teams and with our clients is the idea of security at the edge.

What was clear is that the edge could either be, let's say, a weak point or a point of strengthening. So we have a fairly substantial group working on making sure what we build at the edge is a strength point. Because theoretically you have a virtual demarc here, and an extension of keys, key management, certificates.

We use a zero-trust model, and everything is untrusted, even our own systems, even our own microservices between each other. One area that was reflected, especially from certain industries, is explicit auditability of any traffic. So you're seeing a lot of work in this, because we all collectively have to make the edge a strength point. Security's a journey; it's not something that you ever complete.

But it is something that we, both from an Intel perspective and a Smart Edge standpoint, are taking very seriously and working hard to make sure this is considered what we would call best practice or an optimal, let's say, reference architecture for the edge. Obviously, there's a lot of other folks in this space, but we're excited about what we've collectively done here. So thank you for your time everyone, and I'll turn it back to Chris.

Thank you so much, Bob. There are a few key takeaways I hope you've taken away from our time together today. First, workloads are changing and require a new network to support them. Edge computing is the placement of data center-grade network, compute, and storage closer to the endpoint devices. Edge computing use cases include video, healthcare, manufacturing, transportation, retail, as well as smart cities. And OpenNESS is an open source software toolkit to enable easy orchestration and management of edge services across diverse network platforms and access technologies in multicloud environments. We have a little bit of time left and we've gotten a couple of questions. So I want to ask the first question to Prakash. Prakash, what are some of the early use cases that you've seen for edge applications based on OpenNESS?

Sure, Chris. We have seen a number of use cases, but they tend to cluster around, I would say, three or four categories. Some of them are what we would consider kind of horizontal, and some of them are more vertical. So from a horizontal perspective, we see a cluster of use cases around digital security, primarily driven from a camera, computer vision, and AI perspective.

That seems to be the initial focus for a lot of end-user edge applications, because of, obviously, the ability to deliver low-latency reaction. And also the idea that data tends to be heavy and you don't always want to move it to the cloud; there is a lot of value in doing analytics for video locally. So we see a lot of that as a horizontal use case.

The other big horizontal use case we see is around media, again driven by some of the KPIs that edge services deliver, like, again, better reliability and better latency. So from a media perspective, immersive media, gaming, these are all applications that would benefit from the location and placement of edge platforms. Especially in an immersive media type scenario, you want to have the rendering happen as close to the point of delivery as possible.

CDN is another use case that we see a lot. And that goes back to the idea of the edge being a fairly broad definition: even if CDN was delivered more at the edge of the internet earlier, it's now moving into a virtualized CDN application, moving closer and closer. And CDN is going beyond just media caching, right.

It's going into all kinds of business use cases where businesses are also looking to cache all kinds of business-critical information. From a vertical use case standpoint, we tend to see a lot of use cases in two specific verticals: industrial and retail. Again, utilizing some of the horizontal use cases. So with industrial, it's fault detection in an industrial factory.

Again, using cameras in some instances. In a retail location, we see frictionless payment. A lot of times using cameras, but even the ability to do payment processing faster. And basically providing more customer intimacy, right. So those are the types of use cases that we've encountered. So with OpenNESS, we tend to do a lot of trials with partners and customers, and they tend to fall into these categories.

Excellent. Very exciting times indeed. Bob, a question for you. What were some of the early challenges that you saw with respect to trying to deploy edge services?

Thanks, Chris. Yes, I think I mentioned some of those in the dialogue. One of the challenges was getting all the groups together, communicating what was going to occur, and working through some of the operational elements up front. We have definitely developed discipline around that in this converged state.

I think our original API construct didn't take into consideration some of the requests we have, but that's advanced extremely nicely. Most of the other challenges are the normal things you see when you introduce new elements into an environment. And then, obviously, we needed to develop more of a history with all of the different types of hypervisors, applications, and use cases.

And that helped the team in working with the business units on the client side. But that would reflect the challenges I think we saw.

Well, excellent. I'd like to thank both of you so much for your time today. I hope this has just whetted your appetite to learn more. To find out more, please go to Smart Edge's website at https://smart-edge.com. If you want to learn more about OpenNESS, as Prakash said, you can go to openness.org, or, my suggestion, just download the software and start playing with it at github.com/open-ness.

And if you'd like to learn a little bit more about Smart Edge, engage with them, please feel free to send an email to sales@smart-edge.com. Again, my name is Chris Reece. I want to thank Bob and Prakash for their time today. And until next time, thank you so much.