Embedded Systems: Balancing Power, Performance, and AI
In the embedded systems industry, it’s all about finding the balance: edge versus cloud; software versus silicon; raw performance versus power consumption and efficiency. And don’t forget cost! Many options are available to power many different applications in many industries, especially with edge and AI becoming more prevalent.
We asked Alex Wood, Global Marketing Director at Avnet and Tria Technologies (formerly Avnet Embedded), about all things embedded systems. He spoke about the wide range of industries that depend on embedded systems, the importance of AI to the whole industry, and finding the right balance of processor power to outcome (Video 1). Just because a company can have the newest, fastest processor, should it? And just because that company deploys it to create some shiny new technology or app, is it something its customers will want to use?
What are the current technology trends in the embedded systems space?
I think we’re at a nexus point in the industry. With AI there’s a lot of emphasis on putting things into the cloud; and then there’s a lot of pushback from people who want to put things on the edge. And both of them have their own challenges and potential setbacks. Customers are saying to us, “We want to leverage this, but we’re not entirely sure how to leverage it.”
What are some of the challenges of driving AI at the edge?
I think power is the key thing—that’s going to be the make-or-break for AI. AI consumes a vast amount of data and is super power hungry. It’s making Bitcoin look almost power efficient right now. And a lot of businesses don’t realize how much power those applications consume at the edge. They’ve outsourced the demand to a data center so they don’t see the challenges firsthand.
So I think reducing the power requirements of performing these applications is going to be a key challenge. That’s going to determine whether or not AI sticks around in this hype cycle—depending on how you define AI and how it works. As these applications access and absorb large data models and process things in real time, they will all require more energy-efficient and more heat-efficient processing.
What sorts of applications are your customers building these days?
There’s loads of different things we’re working on with customers at the moment. One example is in new farming applications, where artificial intelligence is being used as an alternative to things like putting dangerous forever chemicals into the soil or just for more efficient farming.
You can train an AI robot to identify weeds in the field and to tell weeds and pests apart from crops and non-harmful animals. Otherwise a human has to walk through the fields taking photos of the different plants and then educating the people working in those fields. You can create an AI application that does the crop checking for you.
You want to be able to program the robot at the edge to be able to do that edge-based AI recognition; you don’t necessarily want to put all of that content into a data center. You don’t necessarily have a reliable cell data connection in that circumstance either. And vision is where the jump is in terms of the processing requirements—live-vision AI that is able to identify what it’s looking at as quickly as possible and then act on that identification in a short amount of time, instead of having to send signals back to a data center for crunching.
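The on-device loop Wood describes can be pictured in a few lines. This is a minimal, hypothetical sketch (the classifier and labels are stand-ins, not a real product): each camera frame is classified locally and acted on immediately, with no round trip to a data center.

```python
# Hypothetical edge decision loop: classify each frame on-device and act
# immediately, rather than sending imagery back to a data center.

def classify_frame(frame):
    """Stand-in classifier: a real deployment would run a trained,
    quantized vision model here on the edge module."""
    return frame.get("label", "unknown")

def act_on_frame(frame):
    """Decide an action locally, without any network call."""
    label = classify_frame(frame)
    if label == "weed":
        return "remove"   # e.g. engage a mechanical weeding tool
    if label == "crop":
        return "skip"     # leave the plant untouched
    return "flag"         # uncertain: mark the location for review

frames = [{"label": "weed"}, {"label": "crop"}, {"label": "thistle?"}]
actions = [act_on_frame(f) for f in frames]
print(actions)  # ['remove', 'skip', 'flag']
```

Because the decision happens in the loop itself, the robot keeps working even when the cell connection in the field drops out.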
At the opposite end of the spectrum there are things like automatic lawnmowers for people at home so they can map out the best path around the lawn. One is a big, future-facing altruistic solution; the other one is a more practical, real-life solution. But it’s those practical challenges in the real world that really put the technology to the test.
What should users consider when it comes to high-performance processors?
A lot of our customers will have different tiers of their product that they’re creating for different markets. In mass-scale agriculture—say in America with the giant fields—they want to be able to do things at speed, and they’ll have a top-of-the-range solution to cover a huge amount of distance on a giant farm. They also have the ability and the money to invest in that. There will also be a slightly slower, slightly cheaper, mid-range application and then a lower-range option as well.
For me, the industry is driven forward by the actual application. I’m always reminded of the picture that does the rounds on the internet: it’s of a field and the manufactured path that leads around its outer corner. And then there’s a trodden path that goes diagonally across the field where people have just chosen to walk. It’s design versus user experience.
We’ve seen that in the AI/IoT space recently, where there was all this exciting talk about what was possible, but at the end of the day what was successful was defined by people actually using it and finding it useful. I recently upgraded my aging fridge to a semi-IoT model that tells me if the door is open or if the temperature is too high or too low. I don’t need one with a screen in it that gives me information about the weather—I’ve got a separate display in my kitchen for that—and I don’t need a camera in there. But I do like it if it warns me if the door’s been left open. Those real-life applications are what stick around.
How important is the processor-RAM combination?
It connects up with what I said before about power efficiency. If I’m building a gaming PC, I want to have a higher frame rate so videos will render faster. But the last time I upgraded my graphics card, I had to get a PSU that was twice the size of the previous one. I was pushing a thousand watts to run a proper PC rig when it used to be that 300 watts was a lot. There’s all of the innovation, the excitement, the things you could add. But then, realistically, you have to run it with a certain amount of power draw in order to get what you want. You’ve got to sacrifice something to get something else.
Think about an electric car: You add loads of bells and whistles to it, so it gets heavier and heavier, to the point that the range drops. And then if you want a long-range model, you’ve got to increase the aerodynamics, and that means stripping out things like power seats in order to reduce the weight. So you’ve got to find that middle space, that sweet spot in these sorts of applications.
For most customers, it’s not so much about getting a more powerful processor or the most powerful processor; it’s about balancing consumption, longevity, and capability that’s specific to the application. Of course, for other customers there is a marketing element to it: They want to buy the absolute top of the range, the flagship processor, when they might not need it. Sometimes, though, they actually do—it depends on the application. I would rather sit down with the customer and say, “Tell me what you’re actually building.” Rather than, “You need the top of the range. You need the i9 immediately.”
How does Avnet/Tria Technologies meet users’ range of requirements?
I think we’ve got a pretty good range, one that goes from tiny little low-power compute applications all the way up to the COM-HPCs with server-grade Intel processors in them. Those are designed for edge-based image processing and AI applications, but they’re larger as well. So you have to have that balance between size and power consumption and then what they’re capable of.
A lot of the larger modules, the COM-HPC modules, are motherboard-sized, which means you’ve got to put them inside a dedicated case. You couldn’t just embed them directly into a product unless it was a really big product. Public transportation is one big-product application we’re working on at the moment. For things like that, being able to take data from a huge number of sensors on a train or in a train station, analyze it all, and react to it all in real time pretty much requires an on-location server. And sometimes you can’t count on the data network being available, either.
Can you talk about the partnership between Avnet/Tria and Intel?
One example of what we’re working on with Intel is cobotics—cooperative robotics—with one of our customers: building real-time image sensors into a cobotics environment so that a robot can operate safely in the same space as a human. If a human moves into the robot’s space, the robot arm stops moving; if the human picks something up, the robot knows where that thing is and can take it from the human again.
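The safety behavior described here boils down to a simple control rule per tick: halt the arm whenever a detected human is inside its workspace. A minimal sketch, with hypothetical names and an axis-aligned zone standing in for real sensor fusion:

```python
# Hypothetical cobotics safety rule: halt the arm when a human is
# detected inside its workspace; otherwise let it run. The zone and
# position data are stand-ins for real camera-based sensing.

def human_in_zone(human_pos, zone):
    """Axis-aligned check: is the detected human inside the zone?"""
    x, y = human_pos
    return (zone["x_min"] <= x <= zone["x_max"]
            and zone["y_min"] <= y <= zone["y_max"])

def arm_command(human_pos, zone):
    """Command sent to the arm for this control tick."""
    if human_pos is not None and human_in_zone(human_pos, zone):
        return "halt"
    return "run"

zone = {"x_min": 0.0, "x_max": 1.0, "y_min": 0.0, "y_max": 1.0}
print(arm_command((0.5, 0.5), zone))  # halt: human inside workspace
print(arm_command((2.0, 0.5), zone))  # run: human outside workspace
print(arm_command(None, zone))        # run: no human detected
```

In the real system this check runs continuously against live image data, which is why the vision pipeline has to keep up with the robot in real time.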
We demonstrated an early example of that at embedded world in Nuremberg this year. The image processing was built around a combination of Intel-based SMARC modules and then our Intel-based COM-HPC modules. Those two things communicate with each other to analyze the signals from the cameras, and then they communicate with the robot in real time as well.
The processor we use for our customers depends on the size and the shape of the module that it needs to go into. We typically offer the Intel Atom® and the Intel® Core™ series, and the Intel® Xeon® series at the server end. It’s really cool to see what the product team does, putting things into such a small space. I’ve been working with motherboards and processors for motherboards for years, so to see this sort of computing application in such a small package with all the thermal management—it’s a fine art.
And then it’s a fascinating challenge for us to develop applications in the environment that the product’s going to be used in. Being able to deploy an Intel processor and its capabilities—and the new AI-based processes we’re working on as well—to bake those into a small product to use at the edge is pretty exciting.
How is the latest AI technology helping the embedded systems industry advance?
I was at a recent Intel AI event, and the examples of how AI can accelerate applications at the edge were fascinating. There were things like supermarket-checkout applications that automatically recognize the product you’re holding, as well as supermarket queue-management automation.
Dell was up on the stage at the event showing the laptops they’re going to be releasing with built-in AI applications—so it’s an AI device instead of a computing device and something that’s really leaning into that collaborative AI-application environment. Intel showed a case study video of scouting athletes in Africa for their future Olympic potential based on an image-processing platform. That was really cool and really captured my imagination.
I think that AI is at a nexus point at the moment, and I think edge computing is at a nexus point as well—being able to take AI applications away from the cloud and put them onto the device. It’s a really exciting time to be working in computing on a small-form factor with AI in this space.
Related Content
To learn more about the latest edge AI innovations, see what Intel partners across the globe are doing in their industries, and listen to our podcast on Beyond the Hype: Real-World Edge AI Applications. For the latest innovations from Tria Technologies, follow them on LinkedIn.
This article was edited by Erin Noble, copy editor.