

Reverse Proxy Server Advances AI Cybersecurity

""

AI models rely on constant streams of data to learn and make inferences. That’s what makes them valuable. It’s also what makes them vulnerable. Because models are shaped by the data they are exposed to, they are susceptible to data that has been corrupted, manipulated, or compromised.

Cyberthreats can come from bad actors who fabricate inferences and inject bias into models to disrupt their performance or operation. The same outcome can be produced by Distributed Denial of Service (DDoS) attacks that overwhelm the platforms models run on, as well as the models themselves. These and other threats can expose models and their sensitive data to IP theft, especially if the surrounding infrastructure is not properly secured.

Unfortunately, the rush to implement AI models has resulted in significant security gaps in AI deployment architectures. As companies integrate AI with more business systems and processes, chief information security officers (CISOs) must work to close these gaps and prevent valuable data and IP from being extracted with every inference.

AI Cybersecurity Dilemma for Performance-Seeking CISOs

On a technical level, there is a simple explanation for the lack of security in current-generation AI deployments: performance.

AI model computation is a resource-intensive task and, until very recently, was almost exclusively the domain of compute clusters and supercomputers. That’s no longer the case, thanks to platforms like the 4th Gen Intel® Xeon® Scalable processors that power rack servers like the Dell Technologies PowerEdge R760, which is more than capable of efficiently hosting multiple AI model servers simultaneously (Figure 1).

Figure 1. Rack servers like the Dell PowerEdge R760 can host multiple high-performance Intel® OpenVINO toolkit model servers simultaneously. (Source: Dell Technologies)

But whether hosted at the edge or in a data center, AI model servers require most, if not all, of a platform’s resources. This comes at the expense of computationally demanding functions like security, almost regardless of the deployment paradigm:

  • Deployment Model 1—Host Processor: Deploying AI model servers and security functions like firewalls or encryption/decryption on the same processor pits the workloads against each other in competition for CPU resources, network bandwidth, and memory. This slows response times, increases latency, and degrades performance.
  • Deployment Model 2—Separate Virtual Machines (VMs): Hosting AI models and security in different VMs on the same host processor introduces additional overhead and architectural complexity, and ultimately impacts system scalability and agility.
  • Deployment Model 3—Same VM: With both workload types hosted in the same VM, model servers and security functions are exposed to the same vulnerabilities. This can amplify the impact of data breaches, unauthorized access, and service disruptions.

CISOs need new deployment architectures that provide both the performance scalability AI models require and the ability to protect the sensitive data and IP residing within them.

Proxy for AI Model Security on COTS Hardware

An alternative would be to host AI model servers and security workloads on different systems altogether. This gives each workload sufficient resources to avoid unwanted latency or performance degradation in AI tasks while also offering physical separation between inferences, security operations, and the AI models themselves.

The challenge then becomes physical footprint and cost.


Recognizing the opportunity, F5 Networks, Inc., a global leader in application delivery infrastructure, partnered with Intel and Dell Technologies to develop a solution that addresses these requirements in a single commercial-off-the-shelf (COTS) system. Building on a Dell PowerEdge R760 rack server featuring a 4th Gen Intel Xeon Scalable processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 (Figure 2).

Figure 2. The Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 offloads security operations from a host processor, freeing resources for other workloads like AI training and inferencing. (Source: Intel)

The Intel IPU Adapter E2100 is an infrastructure acceleration card that delivers 200 GbE of bandwidth, 16 lanes of PCIe 4.0 connectivity, and built-in cryptographic accelerators that combine with an advanced packet processing pipeline to deliver line-rate security. The card’s standard interfaces allow native integration with servers like the PowerEdge R760, and the IPU provides ample compute and memory to host a reverse proxy server like F5’s NGINX Plus.

NGINX Plus, built on the open-source NGINX web server, can be deployed as a reverse proxy that intercepts, decrypts, and re-encrypts traffic flowing to and from a destination server. This separation not only helps mitigate DDoS attacks but also means cryptographic operations can take place somewhere other than the AI model server host.
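
To make the pattern concrete, here is a minimal sketch of what a TLS-terminating reverse proxy looks like in NGINX configuration. The hostname, certificate paths, and backend address are illustrative placeholders, not details of the F5 deployment:

    # Sketch of a TLS-terminating reverse proxy (placed inside the http {} context).
    # The hostname, certificate paths, and backend address are placeholders.
    server {
        listen 443 ssl;
        server_name inference.example.com;

        # TLS terminates here, at the proxy, not on the AI model server host
        ssl_certificate     /etc/nginx/certs/proxy.crt;
        ssl_certificate_key /etc/nginx/certs/proxy.key;

        location / {
            # Decrypted requests are forwarded to a model server REST endpoint
            proxy_pass http://192.0.2.10:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

Because decryption happens at the proxy, the model server host never spends cycles on TLS handshakes and is never directly exposed to unauthenticated traffic.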

The F5 Networks NGINX Plus reverse proxy server provides SSL/TLS encryption as well as a security air gap between unauthenticated inferences and Intel® OpenVINO toolkit model servers running on the R760. In addition to operating as a reverse proxy server, NGINX Plus provides enterprise-grade features such as security controls, load balancing, content caching, application monitoring and management, and more.
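
Load balancing follows the same pattern. As a rough sketch, and again with placeholder addresses, an upstream block lets the proxy distribute inference requests across multiple model server instances:

    # Sketch of load balancing across several model server instances.
    # Addresses are placeholders; least_conn picks the least-busy backend.
    upstream model_servers {
        least_conn;
        server 192.0.2.10:8000;
        server 192.0.2.11:8000;
        server 192.0.2.12:8000;
    }

    server {
        listen 443 ssl;
        server_name inference.example.com;
        ssl_certificate     /etc/nginx/certs/proxy.crt;
        ssl_certificate_key /etc/nginx/certs/proxy.key;

        location / {
            proxy_pass http://model_servers;   # distribute across the pool above
        }
    }

From the client’s perspective nothing changes; the proxy simply spreads decrypted requests across the available instances.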

Streamline AI Model Security. Focus on AI Value.

For all the enthusiasm around AI, there hasn’t been much thought given to potential deployment drawbacks. Any company looking to gain a competitive edge must rapidly integrate and deploy AI solutions in its tech stack. But to avoid buyer’s remorse, it must also be aware of the security risks that come with AI adoption.

Running security services on a dedicated IPU not only streamlines deployment of secure AI but also enhances DevSecOps pipelines by creating a distinct separation between AI and security development teams.

Maybe we won’t have to spend so much time worrying about AI security after all.


This article was edited by Georganne Benesch, Editorial Director for insight.tech.

About the Author

Brandon is a long-time contributor to insight.tech going back to its days as Embedded Innovator, with more than a decade of high-tech journalism and media experience in previous roles as Editor-in-Chief of electronics engineering publication Embedded Computing Design, co-host of the Embedded Insiders podcast, and co-chair of live and virtual events such as Industrial IoT University at Sensors Expo and the IoT Device Security Conference. Brandon currently serves as marketing officer for electronic hardware standards organization, PICMG, where he helps evangelize the use of open standards-based technology. Brandon’s coverage focuses on artificial intelligence and machine learning, the Internet of Things, cybersecurity, embedded processors, edge computing, prototyping kits, and safety-critical systems, but extends to any topic of interest to the electronic design community. Drop him a line at techielew@gmail.com, DM him on Twitter @techielew, or connect with him on LinkedIn.
