Q&A: Is Time-Series AI the Next Megatrend for Industry 4.0?

What are the most important trends in industrial AI for 2021, and beyond? What happens when AI meets IT/OT? And what role will time series AI play in the factory of the future?

Kenton Williston, Editor-in-Chief of insight.tech, put these questions to Dr. Felizitas Heimann, Head of Product Line for Embedded Industrial Computing at Siemens. Here are her ideas on the ways systems integrators and machine builders can set themselves up for success in the increasingly complex world of AI, deep learning, and machine learning.

Trending on the Shop Floor

Kenton Williston: Felizitas, tell me a little bit about your role at Siemens.

Dr. Felizitas Heimann: I am responsible for our product line of embedded industrial computing, which basically means our box-and-panel portfolio. That means leading the product management, and also leading the technical project management.

Kenton Williston: What do you see as the biggest trends in shop floor AI for 2021?

Dr. Felizitas Heimann: I see three of them in the near future. First, concerning applications and use cases. The thing that is closest to being applied on a big scale is probably AI-assisted machine vision—for optical quality inspection, for example. And that’s for several reasons: AI-based image analysis is already quite well understood; it’s comparatively easy to apply; and the quality of the results is easier to control than for other AI applications.

Second, progress in hardware. We’ll see more and more dedicated or adapted shop floor AI hardware—mainly PC-based—as almost all industrial microprocessor companies have started to offer AI accelerators, which are now getting applied in devices like industrial PCs. And that’s one important step to enable the deployment and the application of AI models on the shop floor.

And third—and this is a very important one—shop floor computing architecture. We’re seeing IT/OT architectures at scale recently, where the most prominent example may be industrial Edge computing. That means connecting the shop floor assets to central management, and providing additional computing power resources.

But, compared to cloud computing, it keeps as much performance as necessary at the Edge, meaning close to the machine or the line. And due to the nature of how AI works, this is a mandatory prerequisite.

Kenton Williston: Like you said, visual AI has really taken off. But there are many other kinds of AI that matter—particularly time series AI. Can you tell me more about this?

Dr. Felizitas Heimann: There is a phenomenon, especially in AI: what looks easy is often the most complicated thing; and what looks complicated in the beginning, like machine vision, eventually turns out to be the easier thing.

The reason I believe that time series is actually the most rewarding use case of AI in the long term is the ubiquitous availability of the data, and the massive untapped potential to gather insights from it—I mean, non-optical data are everywhere.

The main challenge for the time series application is: the more complex the input gets, the harder it becomes to understand the model and its dependencies. You can have a vast number of sensors, so it becomes more important to test the model, and also to permanently monitor its KPIs to see if the model is still acting as it is supposed to.

IT + OT + AI

Kenton Williston: How do you see IT/OT convergence factoring into the evolving role of AI?

Dr. Felizitas Heimann: I would like to narrow down the benefit of IT technologies in the OT world to just a simple statement, and that is: increase options while improving resource efficiency. And, as you have just mentioned, both sides have tremendous potential in combination. And—if we’re bringing that back to our AI topic, especially for a neural network AI application—you have to differentiate between two kinds of activities.

Namely, training a model with historical data, where the result is already known; and inference, which means using the model to judge new data sets. And the training accounts for only a small percentage of the overall usage time of the model, but during that limited time it requires orders of magnitude higher computing resources.

The inference, on the other hand, is running permanently, with a much lower, but continuous, performance requirement. So you need to calculate this continuous computing workload for inference, and run it on computing resources close to where the data are generated.

So one of the core elements of IT/OT convergence is the industrial Edge management—its core domain is to connect these two layers of computing availability and make them convenient to use for an industrial automation engineer in a way that he or she does not have to be an IT expert for it. Otherwise, cost-efficient deployment of AI would not be reasonably possible.

Kenton Williston: How do you think advances in computing hardware have changed the AI landscape?

Dr. Felizitas Heimann: To answer that, we have to jump into what AI is about in terms of computing. The nature of the neural network is all about parallelization, computing speed, and accuracy. For parallelization, you’re talking about the number of computing cores. And for computing accuracy, you need to consider which kinds of data you need to handle.

Vision usually works with integer values, all normalized in the same manner by the RGB value spectrum. So all calculations can happen in the integer range, on a high number of compute cores in a GPU.

Multisensory time series applications, on the other hand, take dynamic inputs from sensors with a variety of data ranges. All input data need to be normalized as a pre-processing step, making the inference calculation much more complex.
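Editor’s note: the per-channel normalization Dr. Heimann describes can be sketched in a few lines of Python. This is a minimal illustration, not Siemens code; the sensor names and readings are invented for the example.

```python
def z_score_normalize(channels):
    """Normalize each sensor channel to zero mean and unit variance.

    channels: dict mapping sensor name -> list of raw readings.
    Returns a dict with the same keys and normalized readings.
    """
    normalized = {}
    for name, values in channels.items():
        n = len(values)
        mean = sum(values) / n
        # Population standard deviation; guard against constant channels.
        var = sum((v - mean) ** 2 for v in values) / n
        std = var ** 0.5 or 1.0
        normalized[name] = [(v - mean) / std for v in values]
    return normalized

# Sensors with very different value ranges (hypothetical readings).
raw = {
    "temperature_C": [20.0, 22.0, 21.0, 23.0],
    "pressure_Pa": [101000.0, 101500.0, 102000.0, 101250.0],
}
scaled = z_score_normalize(raw)
```

After this step every channel lives on a comparable scale, which is what allows a single model to consume temperature and pressure side by side.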

You will probably not be able to handle that with GPUs. So CPU-based, multi-core computing resources are what you will go for in these kinds of use cases.

To scale efficiently, you need to choose your hardware based on two factors: the performance required per single inference, and the necessary number of inferences in a given time. To choose the right computing hardware, you’d want to look into the number of sensors, the data rates, the normalization technique, the algorithm to be used, and the KPIs with which you intend to control the quality of the model.
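Editor’s note: the two sizing factors mentioned here lend themselves to a back-of-envelope calculation. The sketch below assumes a sliding-window model; the sampling rate, stride, and per-inference cost are hypothetical numbers, not Siemens figures.

```python
def inference_rate(samples_per_s, stride_samples):
    """How often the model must run when a sliding window
    advances by `stride_samples` over a stream sampled at
    `samples_per_s` samples per second."""
    return samples_per_s / stride_samples

def fits_on_cpu(rate_per_s, ms_per_inference, num_cores):
    """Compare the continuous core-seconds needed per second
    of operation against the cores available."""
    return rate_per_s * ms_per_inference / 1000.0 <= num_cores

# Hypothetical line: sensors sampled at 1 kHz, model advanced
# every 100 samples, 15 ms per inference on a single core.
rate = inference_rate(1000, 100)   # model must run 10 times per second
ok = fits_on_cpu(rate, 15.0, 4)    # 10 * 0.015 = 0.15 core-seconds/s
```

The same arithmetic in reverse tells you how much Edge hardware a given sensor count and data rate will demand.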

The Most Important Tool—Humans

Kenton Williston: What kinds of platforms and tools do systems integrators and machine builders need to create and deploy time series AI?

Dr. Felizitas Heimann: I have a personal opinion about tools, and it comes from experience: for any tool, it’s garbage in, garbage out—and this is also true for AI models. So the key thing to get right or wrong is the data preparation at the beginning, before thinking about any tool, and also being able to think from the perspective of what the training process will do with your data.

A true anecdote highlights that quite well. We had a case where the model capability seemed excellent—it distinguished good from bad training pictures with 100% accuracy. Unfortunately, it could not reproduce this in real operation at all.

So, what had happened? All the good sample pictures had been taken at the same time, and—especially—with the same light settings. And the bad samples had been taken at any time, under different optical conditions. So, in the end, the machine learning model had not trained itself to identify the defects, but to identify whether the light settings were those that had apparently corresponded to good or bad quality.
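Editor’s note: the lighting anecdote is a classic data-leakage failure, and it can be reproduced with a toy model. In the sketch below, all data are synthetic: a “model” that looks only at image brightness separates the biased training set perfectly, then collapses to chance in production.

```python
import random

random.seed(0)

# Toy training set mimicking the anecdote: every "good" sample was
# shot under bright, constant light; every "bad" one under varied,
# dimmer light.
train = [("good", 0.9 + random.uniform(-0.02, 0.02)) for _ in range(50)] + \
        [("bad", random.uniform(0.2, 0.7)) for _ in range(50)]

def predict(brightness, threshold=0.8):
    """A 'model' keyed only on brightness, not on the defect itself."""
    return "good" if brightness > threshold else "bad"

train_accuracy = sum(predict(b) == label for label, b in train) / len(train)

# In production, good and bad parts appear under the same light,
# so the brightness shortcut no longer carries any information.
production = [("good", 0.5), ("bad", 0.5)]
prod_accuracy = sum(predict(b) == label
                    for label, b in production) / len(production)
```

The 100% training score is exactly the misleading signal Dr. Heimann describes: the model learned the capture conditions, not the defects.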

So, to be successful, you need to know what you are doing, and what you can reasonably expect. And for that, the most important tools are humans. For a systems integrator, I would definitely recommend bringing in engineers from the automation domain who are familiar with OT requirements, and who want to adopt the new technology.

Kenton Williston: So, having someone who understands intuitively what the data means?

Dr. Felizitas Heimann: Exactly. And I believe that in the midterm things will get a little bit easier for the, say, general automation public to handle, because we will see more ready-to-use applications evolving. Things will get easier to apply and to handle even if you’re not a machine learning specialist.

Both on the hardware and the software level, there’s a great advantage in open architecture. And our industrial computers, for example—they enable the customer to use our devices either with their own software, with third-party software, or to go with our offering.

And, for the AI modeling tools, luckily it’s the same—they’re getting reasonably exchangeable via the ONNX standard. So you can sort of let your AI engineers work with their preferred tool set, and then port it into your preferred industrial environment, including the operating system.

Getting the Data You Need

Kenton Williston: I think an important thing for machine builders and systems integrators to keep in mind is how they can leverage existing infrastructure. What are the critical considerations for working in brownfield environments?

Dr. Felizitas Heimann: Well, depending on the intended application, of course, you need to start at the assets and field devices level. And you will have to ask yourself: are you able to collect all the data you need for your use case from the existing sensors or other information? Or do you have to add more equipment?

And what will you have to invest in a brownfield application? There will usually not be excess headroom in computing capacity to run the inference at the desired speed. So, in most cases, an investment needs to be made to add additional Edge computing performance close to the application. Luckily, an industrial computer can be easily connected to the system via standard connectors and protocols.

Then, especially, our industrial Edge management software enables convenient server connection, remote deployment, and remote monitoring. And, again, we take great care to develop it in a way that it blends smoothly into existing environments.

Kenton Williston: How can machine builders and systems integrators engage with Siemens and your partners to simplify the design process?

Dr. Felizitas Heimann: Well, there are many ways. For example, we’re building a network of industrial digitalization specialists all over the globe—for AI, for industrial Edge, and for new PLM software tools such as digital twins.

We can direct you to specialists who can support you in finding the hardware and software solutions when you just need the suitable components—or even arrange a consulting service to go on the journey together with you, supporting you with deep industry and domain know-how.

And part of the Siemens customer-specific offering can also include, for example, model monitoring—to be aware if parameters start to drift, and whether retraining is needed. And we’re continually enriching our portfolio on the hardware and software side as well. It’s really exciting to see how quickly things are moving in this field.
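Editor’s note: the model monitoring mentioned here can be illustrated with a minimal rolling-KPI check. This is a generic sketch, not a Siemens product feature; the window size, baseline, and thresholds are invented for the example.

```python
from collections import deque

class KpiMonitor:
    """Track a rolling window of a model KPI (e.g. prediction
    confidence) and flag when it drifts below an alert threshold."""

    def __init__(self, window=100, baseline=0.95, max_drop=0.05):
        self.values = deque(maxlen=window)
        self.baseline = baseline
        self.max_drop = max_drop

    def record(self, value):
        self.values.append(value)

    def rolling_mean(self):
        return sum(self.values) / len(self.values)

    def needs_retraining(self):
        """True once the rolling KPI has dropped more than
        `max_drop` below the baseline set at deployment time."""
        return self.rolling_mean() < self.baseline - self.max_drop

monitor = KpiMonitor(window=5)
for v in [0.96, 0.95, 0.94, 0.93, 0.92]:
    monitor.record(v)
healthy = monitor.needs_retraining()   # rolling mean 0.94, still fine

for v in [0.85, 0.84, 0.83, 0.82, 0.81]:
    monitor.record(v)
degraded = monitor.needs_retraining()  # rolling mean 0.83, below 0.90
```

In practice such a check would run alongside the inference workload at the Edge and trigger the retraining workflow upstream.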

Related Content

To learn more about industrial AI and the importance of time series AI, listen to our podcast: Is Time-Series AI the Next Megatrend for Industry 4.0?

About the Author

Kenton Williston is an Editorial Consultant to insight.tech and previously served as the Editor-in-Chief of the publication as well as the editor of its predecessor publication, the Embedded Innovator magazine. Kenton received his B.S. in Electrical Engineering in 2000 and has been writing about embedded computing and IoT ever since.
