Evolution of SASE Architecture

You might have heard about multiple architecture principles in the context of network security in general and SASE in particular. Some software architecture principles used by the industry in the SASE context are:

  • Single-pass Architecture
  • One-proxy Architecture
  • Run-to-completion Architecture
  • Scale-out Architecture
  • Single-pass parallel processing Architecture
  • Bring your own security function Architecture
  • Cloud-native Architecture
  • Isolation Architecture
  • API first Architecture
  • Slicing (E2E segmentation) Architecture

Many of the above architectural principles are not new. Single-pass, One-proxy, Single-pass-parallel-processing, and Run-to-completion architectural principles have been popular since the UTM (Unified Threat Management) days in the early 2000s, though they were known by different names before.

The main purpose of these architectural principles is to achieve:

  • Higher throughput with efficient usage of resources
  • Lower end-to-end latency
  • Lower jitter
  • Higher elasticity (scale-out) and resiliency, even in the case of DDoS attacks on security services
  • Faster introduction of new security functions (agility)
  • Integration of multiple vendors' security functions without introducing inefficiencies (integration of best technologies)
  • Adaptability (run anywhere)
  • Single pane of glass

Evolution

One great thing about the network security industry is that it is dynamic. It adopts newer software and deployment architecture principles in every new generation of products, and it keeps pace with attackers' growing sophistication.

That said, the network security market is traditionally fragmented, with multiple security vendors delivering different security functions. This is good from an innovation perspective and should be encouraged; after all, no single vendor is good at all security functions.

As you will see further below, if each vendor provides a complete stack for its security functions, there are enormous inefficiencies and, thereby, huge cost implications. In addition, it can introduce more latency, which could be a challenge for some applications. Hence the need for newer architecture principles and best practices: the software architecture should remove these inefficiencies while still enabling the integration of multiple vendors' technologies for the best cybersecurity.

Before Convergence

Figure 1 and Figure 2 are two representative examples of network security before SASE. Figure 1 depicts Secure Internet Access, and Figure 2 depicts an example of Secure Private Access. Note that the order of security functions in these pictures is arbitrary; the order of execution is normally deployment-specific.

Secure Internet Access is traditionally achieved using multiple security solutions, as listed in Figure 1. It is the Enterprise that buys these security functions and puts them in a service chain. Since many security functions require proxying the connections, this chaining is also called proxy chaining by the industry.

Since each discrete security solution is self-sufficient and comes from a different vendor, the common functionality is repeated across these solutions. Common functionality includes traffic policing on ingress, traffic shaping at egress, traffic filtering to ensure that only relevant traffic goes to the proxies, TCP termination of the client connection, a new TCP connection towards the destination, SSL/TLS interception (which itself requires on-demand certificate generation emulating the destination certificate) with its TLS decryption and re-encryption, proxy authentication (Kerberos, NTLM, OIDC), and HTTP/1.1 and HTTP/2 decoding. By one estimate, this common functionality in each box/virtual appliance takes up 50% of the security solution's total CPU resources.
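
To make the duplication concrete, below is a minimal sketch (in Go) of one hop in such a proxy chain: it terminates TLS, decodes HTTP, and opens a new encrypted connection towards the next hop, all before it runs its own security function. The next-hop address and the runSecurityFunction placeholder are purely illustrative; every chained solution repeats essentially the same boilerplate.

    // Hypothetical sketch of ONE hop in a proxy chain. Every chained security
    // solution repeats the same common work before running its own function.
    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "net/http/httputil"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        next, _ := url.Parse("https://next-hop.example.internal") // next security solution in the chain (hypothetical)

        handler := &httputil.ReverseProxy{
            Director: func(r *http.Request) {
                // Common work repeated at EVERY hop:
                //  - TCP termination of the client connection (net/http below)
                //  - TLS interception/decryption (ListenAndServeTLS below)
                //  - HTTP/1.1 / HTTP/2 decoding (net/http below)
                //  - proxy authentication, traffic policing/shaping, filtering ...
                //  - new TCP + TLS connection towards the next hop (Transport below)
                r.URL.Scheme = next.Scheme
                r.URL.Host = next.Host
                // Only this step is the hop's actual value-add, e.g. URL filtering:
                // verdict := runSecurityFunction(r)   // hypothetical
            },
            Transport: &http.Transport{TLSClientConfig: &tls.Config{}}, // re-encrypt towards the next hop
        }

        // Terminate TLS again here, just as every other hop in the chain does.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", handler))
    }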

Secure Application Access is achieved in a similar way by chaining discrete security solutions, as shown in Figure 2. Here too, the common functionality is repeated across the multiple security solutions, and it is almost identical to the common functionality of the solutions used for Secure Internet Access. The few differences include SSL/TLS termination instead of interception and user authentication via traditional methods instead of proxy authentication.

The average overhead of this common functionality could be as much as 50% of the total security solution, and end-to-end latency can increase by a few milliseconds due to the chaining of functions.

Another big challenge faced by Enterprises with physical and virtual appliances is scaling. Originally, scaling was taken care of by using bigger appliances or assigning more CPU and memory resources to VM-based security solutions. This is called scale-up. If the traffic volume is constant, it makes sense for Enterprises to spend money on bigger physical/virtual appliances. But if the traffic is bursty, Enterprises don't like to see money being wasted. Think of cases where traffic is higher only on a few days in a year; why would an Enterprise pay for those bumps for the whole year? That led to the next evolution, where security vendors started to support scale-out architecture via cloud-delivered services. This is similar to the reasoning for IaaS in the Cloud applications world. In a scale-out architecture, more instances of the security solution are brought up automatically upon detecting higher traffic or a DDoS attack, and they are brought down when demand goes down. Figure 3 depicts this scale-out architecture.
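
The following is a rough sketch of the scale-out decision itself, assuming hypothetical currentLoad and scaleTo helpers standing in for real metrics and orchestrator APIs; in practice, a cloud provider's autoscaler performs this loop for you.

    // Hypothetical autoscaling loop: add security-solution instances when load
    // rises (including DDoS-induced spikes) and remove them when demand subsides.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        maxSessionsPerInstance = 10000 // assumed capacity of one instance
        minInstances           = 2
        maxInstances           = 50
    )

    // currentLoad and scaleTo stand in for metrics and orchestrator APIs.
    func currentLoad() int { return 42000 /* active sessions, hypothetical */ }
    func scaleTo(n int)    { fmt.Println("scaling security solution to", n, "instances") }

    func main() {
        for {
            sessions := currentLoad()
            want := sessions/maxSessionsPerInstance + 1
            if want < minInstances {
                want = minInstances
            }
            if want > maxInstances { // cap growth, e.g. during a volumetric DDoS
                want = maxInstances
            }
            scaleTo(want)
            time.Sleep(30 * time.Second)
        }
    }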

There is no doubt that scale-out functionality is required. However, it needs a load balancer at each security solution to distribute sessions across the multiple instances of that solution. Multiple load balancer traversals require more resources and increase the end-to-end latency of traffic sessions.

The next evolution of network security architecture addresses the challenges of

  • The need for a higher number of compute resources
  • Higher latency

with

  • One-proxy architecture
  • Single-pass architecture

Single-pass and One-proxy architecture (Converged Architecture)

As shown in Figure 4 below, all the security functions are bundled together, with the proxy and other common functions coming into the picture only once. All security functions are called one at a time in run-to-completion fashion. As a result, this architecture consumes compute resources efficiently. Since all the functions are called in the same user-space context, memory copies are avoided. A single user-space process also reduces operating system context switches dramatically. A single proxy instance can process multiple sessions at once via multi-threading, with each thread processing a subset of sessions, thereby leveraging multi-core CPUs effectively. Multi-threading also allows some security functions to be executed in parallel instead of serially for a given traffic session, thereby improving latency. This architecture is called 'Single-Pass Parallel Processing.'
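
A minimal sketch of that idea is shown below: the session is decoded once, independent security functions run in parallel in the same user-space process, and order-dependent ones run to completion serially. The Session fields and function names (urlFilter, antiMalware, dlp) are placeholders, not any vendor's actual API.

    // Single-pass sketch: decode once, then run all hooked security functions
    // in the same user-space process without copying the session data again.
    package main

    import (
        "fmt"
        "sync"
    )

    // Session holds the already-decoded traffic; it is built exactly once.
    type Session struct {
        User    string
        URL     string
        Payload []byte
    }

    type SecurityFunc func(*Session) error

    // Placeholder security functions (hypothetical).
    func urlFilter(s *Session) error   { fmt.Println("url filtering", s.URL); return nil }
    func antiMalware(s *Session) error { fmt.Println("scanning payload"); return nil }
    func dlp(s *Session) error         { fmt.Println("checking for data leaks"); return nil }

    func processSession(s *Session, parallel, serial []SecurityFunc) error {
        // Independent functions run in parallel for lower latency.
        var wg sync.WaitGroup
        errs := make(chan error, len(parallel))
        for _, f := range parallel {
            wg.Add(1)
            go func(f SecurityFunc) { defer wg.Done(); errs <- f(s) }(f)
        }
        wg.Wait()
        close(errs)
        for err := range errs {
            if err != nil {
                return err // block the session on the first failing verdict
            }
        }
        // Order-dependent functions run to completion, one at a time.
        for _, f := range serial {
            if err := f(s); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        s := &Session{User: "alice", URL: "https://example.com", Payload: []byte("...")}
        if err := processSession(s, []SecurityFunc{urlFilter, dlp}, []SecurityFunc{antiMalware}); err != nil {
            fmt.Println("session blocked:", err)
        }
    }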

To address unexpected load and DDoS scenarios, an auto scale-out architecture is adopted, with only one load balancer instance in the path to distribute sessions across proxy instances.

In this architecture, the proxy exposes an interface API, and all security functions hook into this API. The proxy calls the relevant hooked security functions during traffic session processing. Not all security functions are implemented by the SASE solution developers themselves; SASE solution developers also work with technology suppliers, integrating the suppliers' engines and feeds with the proxies. That way, the customers of SASE providers get the best of both worlds: high-performance SASE with the best security implementations.
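
The hook interface itself could look roughly like the following sketch, where SecurityHook, Register, and the vendorURLFilter wrapper are hypothetical names used only for illustration; a real proxy would expose a much richer set of events (connection, request, response, file, and so on).

    // Hypothetical hook API: the proxy defines the interface once, and each
    // technology supplier's engine registers against it.
    package main

    import "fmt"

    // SecurityHook is the contract the proxy exposes to security functions.
    type SecurityHook interface {
        Name() string
        OnRequest(url string, headers map[string]string) error // return an error to block
    }

    var hooks []SecurityHook

    func Register(h SecurityHook) { hooks = append(hooks, h) }

    // A third-party engine wrapped to satisfy the proxy's hook interface.
    type vendorURLFilter struct{}

    func (vendorURLFilter) Name() string { return "vendor-url-filter" }
    func (vendorURLFilter) OnRequest(url string, _ map[string]string) error {
        // In practice this would call the supplier's SDK or consult its feed.
        if url == "https://blocked.example" {
            return fmt.Errorf("blocked by policy")
        }
        return nil
    }

    func main() {
        Register(vendorURLFilter{})
        for _, h := range hooks {
            if err := h.OnRequest("https://blocked.example", nil); err != nil {
                fmt.Println(h.Name(), "verdict:", err)
            }
        }
    }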

That said, some technology suppliers may not deliver their engine in SDK form for developing security functions as part of the proxy; they may offer it only as a cloud service. DLP is one example where many technology providers offer a cloud service. Also, in some cases, the SASE solution provider may not want to integrate certain security functions in the same user-space process context as the proxy, for reasons such as memory constraints, license incompatibility, or avoiding making the user-space process fragile.

Unified Architecture & Bring Your Own Security Functions

The next evolution of SASE architecture is shown in Figure 6 below. It has three salient points.

Support for security services via ICAP (Internet Content Adaptation Protocol): Enterprises may be used to, or may prefer to use, security services from other security vendors. The ICAP specification from the IETF (RFC 3507) specifies how proxies can talk to external content adaptation services, including security services. By allowing external security services, SASE solution providers enable Enterprises to use the security services of their choice.
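
For illustration, the sketch below shows what an ICAP exchange looks like on the wire per RFC 3507: the proxy opens a TCP connection to the ICAP service (port 1344 by default), sends an OPTIONS request to discover its capabilities, and would then send REQMOD/RESPMOD requests carrying the encapsulated HTTP messages. The service address is hypothetical.

    // Minimal ICAP client sketch (RFC 3507): ask an external security service
    // for its capabilities with an OPTIONS request. Address is hypothetical.
    package main

    import (
        "bufio"
        "fmt"
        "net"
    )

    func main() {
        conn, err := net.Dial("tcp", "icap.example.net:1344") // 1344 is the standard ICAP port
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        // An ICAP request line looks like HTTP but uses the icap:// scheme.
        // REQMOD/RESPMOD requests would additionally carry the encapsulated
        // HTTP request/response for the service to inspect or modify.
        fmt.Fprint(conn, "OPTIONS icap://icap.example.net/reqmod ICAP/1.0\r\n"+
            "Host: icap.example.net\r\n"+
            "Encapsulated: null-body=0\r\n\r\n")

        // Print the ICAP status line and headers returned by the service.
        r := bufio.NewReader(conn)
        for {
            line, err := r.ReadString('\n')
            if err != nil || line == "\r\n" {
                break
            }
            fmt.Print(line)
        }
    }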

Bring your own security functions: According to the IBM "Cost of a Data Breach Report 2022", organizations take 277 days on average to detect and contain a data breach. Though SASE provides very good policy configuration, some data breach containment may require programmatic rules. The organizations that analyze the data breach are in the best position to develop these programs. Depending on SASE providers or security service vendors for them can take time, as product releases need to go through a complete software development lifecycle. To avoid these delays, new SASE architectures provide a way for Enterprises' security teams to develop programmatic rules themselves and deploy them. Since these rules must not cause instability in the proxy, SASE architectures provide WASM runtimes that let the programmatic rules be created as WASM modules. Because the WASM runtime acts as a sandbox, any problem in a WASM module does not cause the hosting user-space process and other security function plugins to crash.
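
As a rough illustration of the sandboxing idea, the sketch below loads an Enterprise-supplied module with the wazero WebAssembly runtime (one of several WASM runtimes available for Go). The module file name and its exported inspect function are assumptions; a real SASE proxy would define its own ABI for passing session data in and verdicts out.

    // Sketch: load an Enterprise-supplied WASM module and call its exported
    // "inspect" function inside a sandbox; a crash or trap in the module does
    // not take down the hosting proxy process.
    package main

    import (
        "context"
        "fmt"
        "os"

        "github.com/tetratelabs/wazero"
    )

    func main() {
        ctx := context.Background()

        // wasmBytes would be uploaded by the Enterprise security team.
        wasmBytes, err := os.ReadFile("byo_rule.wasm") // hypothetical module
        if err != nil {
            fmt.Println("read module:", err)
            return
        }

        r := wazero.NewRuntime(ctx) // the sandbox
        defer r.Close(ctx)

        mod, err := r.Instantiate(ctx, wasmBytes)
        if err != nil {
            fmt.Println("instantiate:", err)
            return
        }

        // Hypothetical ABI: inspect(sessionID) returns 0 = allow, 1 = block.
        inspect := mod.ExportedFunction("inspect")
        if inspect == nil {
            fmt.Println("module does not export inspect")
            return
        }
        results, err := inspect.Call(ctx, 42)
        if err != nil {
            // A trap or bug in the module is contained here; the proxy keeps running.
            fmt.Println("module error (contained):", err)
            return
        }
        fmt.Println("verdict:", results[0])
    }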

Unified proxy: One unified proxy for both Secure Internet Access and Secure Application Access can reduce memory requirements. Engines and feeds of some security functions, such as Anti-Malware and DLP, are memory hoggers, so bringing up two different proxy instances for each Enterprise site can be costly. Moreover, having one proxy support all three modes (forward, reverse, and transparent) is good development practice from a developer productivity point of view too.

Additional architecture principles followed in the newer generation of SASE architectures are:

Cloud native: As discussed in the Universal SASE section of this blog post, SASE services are required on-prem, in clouds, and at edges. Hence, new-generation SASE architectures follow 'cloud native' principles to make SASE work anywhere. Though cloud native and Kubernetes are not synonyms, solutions that claim to be cloud native are often Kubernetes based. The reason for making solutions work on Kubernetes is that all cloud/edge providers offer Kubernetes-as-a-service and, more importantly, keep its API interface intact. Due to the consistent Kubernetes API across cloud/edge providers, K8s-based SASE solutions work seamlessly across them.

Isolation architecture: Current SASE architectures, for efficiency reasons, let multiple tenants' sessions go through shared user-space processes. Clever methods are used to maintain some level of isolation even though there are overlapping IP addresses across tenants. However, shared user-space processes for multiple tenants are not a good thing from a performance and security isolation perspective. With shared resources, any misbehavior, such as a DDoS attack on one tenant, can cause performance issues for other tenants' traffic. Also, any exploit of the shared user-space process can expose the secrets/passwords/keys of all tenants. Keeping the above in mind, newer-generation SASE architectures are increasingly going for dedicated proxy instances, either via dedicated user-space processes or dedicated containers.
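
A minimal sketch of the dedicated-instance approach follows: a thin front end maps each tenant to its own proxy endpoint (a dedicated process or container) and forwards traffic there. The tenant registry and the header-based tenant lookup are purely illustrative; real deployments derive the tenant from the tunnel, client certificate, or site identity.

    // Sketch: route each tenant to its own dedicated proxy instance so that
    // overload or compromise of one tenant's instance cannot affect the others.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // Hypothetical registry of per-tenant proxy instances (dedicated
    // user-space processes or containers, one per tenant).
    var tenantProxies = map[string]string{
        "tenant-a": "http://tenant-a-proxy.internal:8080",
        "tenant-b": "http://tenant-b-proxy.internal:8080",
    }

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Illustrative only: the tenant would normally be derived from the
            // tunnel, client certificate, or site identity, not a client header.
            tenant := r.Header.Get("X-Tenant-ID")
            target, ok := tenantProxies[tenant]
            if !ok {
                http.Error(w, "unknown tenant", http.StatusForbidden)
                return
            }
            u, _ := url.Parse(target)
            httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
        })
        log.Fatal(http.ListenAndServe(":8443", nil))
    }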

API-first architecture: Current SASE solutions provide a CLI and a Portal to configure and observe. Some SASE solutions use APIs between the CLI/Portal and the backend management systems, but those APIs are not advertised, and in some cases they are not clean. New SASE solutions are adopting an API-first architecture, where APIs are defined not only for the CLI and Portal implementations but also for third parties to develop external programmatic entities. APIs enable SecOps-as-code via Terraform and other tools, from simple script development to complex workflow development.

It is general practice to go with RESTful APIs and JSON payloads documented with OpenAPI, but some solutions also expose Kubernetes custom resources to configure security and networking objects and policies.
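
As an illustration, the sketch below exposes a hypothetical /api/v1/policies resource as a RESTful JSON API; in a real product the same API would be described in an OpenAPI document and consumed by the Portal, the CLI, and automation tools such as Terraform alike.

    // API-first sketch: a policy object exposed over a RESTful JSON API that
    // the Portal, CLI, and third-party automation all consume.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Policy is a hypothetical SASE policy object.
    type Policy struct {
        Name   string   `json:"name"`
        Action string   `json:"action"` // e.g. "allow" or "block"
        URLs   []string `json:"urls"`
    }

    var policies = []Policy{} // in-memory store for the sketch

    func handlePolicies(w http.ResponseWriter, r *http.Request) {
        switch r.Method {
        case http.MethodGet: // list policies
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(policies)
        case http.MethodPost: // create a policy
            var p Policy
            if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            policies = append(policies, p)
            w.WriteHeader(http.StatusCreated)
        default:
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        }
    }

    func main() {
        http.HandleFunc("/api/v1/policies", handlePolicies)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }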

API-first architecture is also expected to separate the actual backend logic from the implementation of RBAC (Role-Based Access Control). Industry experience is that a good number of vulnerabilities arise when RBAC and business logic are combined. As a result, the industry is moving towards separating the RBAC functionality out to external entities and leaving applications to focus on the business logic. The same logic is being applied to SASE management systems: they focus on SASE policy/objects and observability and leave the RBAC functionality to external entities such as ingress proxies and API gateways. The ingress proxies take care of all authentication, authorization, and API routing. The good thing is that one ingress proxy can front multiple applications, so admin users only need to get familiar with one RBAC entity across many applications.

For this separation of business logic from RBAC, API-first solutions are expected to follow some guidelines. For example, many ingress proxies and API gateways expect the URI to point to the resource used for RBAC. In the case of workflow-based automation, there is an expectation that the management systems can tag the configuration and restore older tags to enable saga patterns. Any API-first architecture is expected to follow such industry best practices.
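
A minimal sketch of that separation is shown below: the ingress proxy authorizes by role and URI and then forwards to a backend that contains only business logic. The role header and the role-to-URI table are hypothetical stand-ins for what a real ingress proxy or API gateway would derive from a verified token.

    // Sketch: the ingress proxy authorizes by URI and role, then forwards to
    // the backend, which contains only business logic and no RBAC code.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "strings"
    )

    // Hypothetical role table: which roles may touch which URI prefixes.
    var allowed = map[string][]string{
        "/api/v1/policies":  {"admin", "secops"},
        "/api/v1/telemetry": {"admin", "secops", "auditor"},
    }

    func authorize(role, path string) bool {
        for prefix, roles := range allowed {
            if strings.HasPrefix(path, prefix) {
                for _, r := range roles {
                    if r == role {
                        return true
                    }
                }
                return false
            }
        }
        return false
    }

    func main() {
        backend, _ := url.Parse("http://sase-mgmt.internal:8080") // business-logic-only backend (hypothetical)
        proxy := httputil.NewSingleHostReverseProxy(backend)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // In a real gateway the role comes from a verified OIDC/JWT token,
            // not from a plain header as in this sketch.
            role := r.Header.Get("X-Role")
            if !authorize(role, r.URL.Path) {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            proxy.ServeHTTP(w, r)
        })
        log.Fatal(http.ListenAndServe(":443", nil))
    }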

Summary

Network security architectures are evolving from physical/monolithic entities to virtual appliances and containers. Many cloud-native principles that became popular in the application world are being adopted in SASE solutions to get cloud-like benefits, including scale-out/in, agility, multi-cloud/edge readiness, and efficient use of resources across multiple tenants. Please watch this space to learn about new architecture principles and how Aryaka is leveraging cloud-like technologies.

  • CTO Insights blog

    The Aryaka CTO Insights blog series provides thought leadership for network, security, and SASE topics. For Aryaka product specifications refer to Aryaka Datasheets.

About the author

Srini Addepalli
Srini Addepalli is a security and Edge computing expert with 25+ years of experience. Srini has multiple patents in networking and security technologies. He holds a BE (Hons) degree in Electrical and Electronics Engineering from BITS, Pilani in India.