Cross-Platform AI Kit Simplifies Embedded Vision
The rapid emergence of AI is creating an urgent need for new ways to deploy computer vision, deep learning, and other analytics into embedded applications.
Developers have many options for training their models, with popular frameworks such as TensorFlow, MXNet, and Caffe ready to do the heavy lifting. But once it’s time to deploy that model, the options multiply further, and so do the problems.
One of the biggest problems is the mismatch between embedded hardware and AI tooling. Because AI started in high-performance data centers, the tooling was designed to target uniform, high-performance resources. In contrast, embedded systems are highly cost- and power-constrained, and use a wide variety of architectures—often including heterogeneous mixes of CPUs, GPUs, FPGAs, and other processors.
To deploy AI into the messy world of embedded systems, developers need embedded-specific tools that can work with a variety of architectures. These tools should use existing embedded APIs and frameworks to maximize portability, flexibility, and scalability. And the tools should produce inference engines optimized for the resource-constrained realities of embedded design.
Taking Deep Learning to the Edge
For developers looking to embed AI capabilities, the solution to these challenges may lie in the OpenVINO (Open Visual Inference & Neural network Optimization) toolkit. Formerly known as the Intel® Computer Vision SDK, OpenVINO is a multi-platform toolkit that scales across Intel vision products, providing a common API for Intel processors, Intel integrated graphics, Intel FPGAs, and the Intel® Movidius™ Neural Compute Stick.
OpenVINO includes computer vision, deep learning, and pipeline optimization. It is based on the convolutional neural networks (CNNs) widely used for pattern recognition in computer vision. The kit also builds on existing software, including a variety of Intel computer vision libraries and the Intel Deep Learning Framework.
The kit provides broad processor support through three APIs: the Deep Learning Deployment Toolkit, as well as optimized functions for both OpenCV and OpenVX (Figure 1).
With this wide base of hardware support, plus support for the Windows and Linux operating systems, developers can bring deep-learning inference to embedded systems using an optimized combination of hardware architectures.
Intelligent Development
The first step in using OpenVINO is training the model. This can be done in the cloud with popular frameworks such as Caffe, TensorFlow, or MXNet. Intel even has its own open-source deep-learning framework, called neon™.
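To make that first step concrete, here is a minimal, hypothetical sketch of training a model, assuming TensorFlow (Keras) as the framework; the architecture, data, and output path are placeholders, and any of the supported frameworks could be used instead.

```python
# Minimal, hypothetical training sketch assuming TensorFlow/Keras.
# The architecture, data, and output path are placeholders only.
import numpy as np
import tensorflow as tf

# Tiny CNN for 64x64 RGB images with two classes (illustrative only)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random data standing in for a real labeled training set
x = np.random.rand(32, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1)

# Save the trained model so it can be handed to the deployment tooling next
model.save("saved_model")
```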
The next step is to import the trained model into the OpenVINO Deep Learning Deployment Toolkit, which comprises two main components: the Model Optimizer and the Inference Engine (Figure 1, again).
The Model Optimizer is a Python-based command-line tool that imports the trained model and performs the analysis and adjustments needed for optimal execution of the static, trained model on the edge device. It then serializes the adjusted model into OpenVINO’s intermediate representation (IR) format, consisting of an .xml file describing the network topology and a .bin file holding the weights. The IR is imported into the Inference Engine, which provides a common API across all of the supported processing hardware and handles optimized execution on each.
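As a rough illustration of this flow, the sketch below loads an IR and runs it using the Inference Engine Python API (IECore); the file paths, device name, and the Model Optimizer invocation in the comments are assumptions for illustration, not taken from the article.

```python
# Sketch of loading an IR into the Inference Engine, assuming the OpenVINO
# Inference Engine Python API (IECore). Paths and the target device are
# placeholders.
#
# The IR itself would be produced beforehand by the Model Optimizer, e.g.
# (typical invocation, paths hypothetical):
#   python mo.py --input_model frozen_model.pb --output_dir ir/
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# The IR consists of an .xml topology file and a .bin weights file
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")

# The same call targets CPU, GPU, MYRIAD, or FPGA by changing the device name
exec_net = ie.load_network(network=net, device_name="CPU")

# Run one synchronous inference on a dummy input shaped like the network input
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
result = exec_net.infer({input_name: np.zeros(input_shape, dtype=np.float32)})
```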
For example, some layers can be executed on a GPU while others, such as custom layers, are executed on a CPU. Developers can also take advantage of asynchronous execution to improve frame-rate performance while limiting wasted cycles. Inference can be further optimized using a C++ API that works on IR files.
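The heterogeneous and asynchronous options described above can be sketched in the same Python API (the C++ API offers equivalent calls); the HETERO device string and request handling below are illustrative assumptions.

```python
# Sketch of heterogeneous plus asynchronous execution, assuming the Inference
# Engine Python API. Device string, paths, and shapes are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")

# HETERO places supported layers on the GPU and falls back to the CPU for the
# rest (e.g. custom layers); num_requests allows pipelined async inference
exec_net = ie.load_network(network=net, device_name="HETERO:GPU,CPU",
                           num_requests=2)

input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
frame = np.zeros(input_shape, dtype=np.float32)  # stand-in for a camera frame

# Start an asynchronous request; the host is free to fetch and preprocess the
# next frame while this one is in flight
exec_net.start_async(request_id=0, inputs={input_name: frame})
if exec_net.requests[0].wait(-1) == 0:  # 0 indicates success
    outputs = exec_net.requests[0].output_blobs
```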
Features such as access to OpenCV and OpenVX let users create custom kernels and use those libraries’ functions as well as Intel’s computer vision libraries. Developers can also use a growing repository of OpenCL starting points in OpenCV to add customized code.
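One hedged example of that OpenCV integration: OpenCV’s dnn module can load the same IR and route execution through the Inference Engine backend, provided the OpenCV build was compiled with that support. Paths, input size, and target below are placeholders.

```python
# Sketch of running an IR through OpenCV's dnn module with the Inference
# Engine backend. Paths, input size, and target are placeholders, and this
# assumes an OpenCV build with Inference Engine support enabled.
import cv2
import numpy as np

net = cv2.dnn.readNet("ir/model.xml", "ir/model.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)  # or DNN_TARGET_MYRIAD, etc.

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
blob = cv2.dnn.blobFromImage(image, size=(224, 224))
net.setInput(blob)
output = net.forward()
```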
It’s worth noting that while the capabilities of OpenVINO and its Deep Learning Deployment Toolkit are already extensive, Intel is constantly updating them to improve development and hardware acceleration of CNN deep-learning algorithms. Even so, performance to date already shows marked improvement over standard Caffe and OpenCV (Figure 2).
Fast Path to Smart Vision
To help developers get a smart-vision application off the ground more quickly, Intel partnered with a number of ecosystem members to develop accelerator development kits supporting OpenVINO. For example, the UP Squared AI Vision Development Kit from UP! Bridge the Gap is ideal for conceptualization and early development (Figure 3).
The kit is based on an Intel Atom® x7 processor with integrated graphics. It also supports a range of additional hardware options, including the Intel® Movidius™ Myriad™ 2 VPU on a Mini PCIe card that comes with the kit.
The kit is fully configured for OpenVINO. Developers can prototype with the cloud-based Arduino® Create IDE and optimize performance using the Intel® System Studio tool suite. This software comes pre-installed on a custom Ubuntu Desktop OS configured for computer vision development out of the box.
For developers with a few cycles to spare, note that UP is a brand from AAEON Europe, which is actively looking for members to join its AAEON Community Beta Program. Benefits include early access to AAEON products, free samples of boards, the option to participate in its community content development, and discounts.