
ROBOTICS

The Power of Location Intelligence with AI and Digital Twins


Tracking and monitoring of the environment—or location intelligence—is pretty ubiquitous in our lives nowadays, from security cameras to backup assist in the car. But businesses are just starting to understand the possibilities inherent in matching location intelligence with technologies like AI and digital twins. And if the idea of the digital twin still seems a little fantastical, fasten your seatbelt. We’re about to take it into the next dimension: time. Because when a particular asset is understood within a particular moment, monitoring and spatial awareness can affect way more than just security or defect detection.

Of course, with great technological advancement comes the need for great technology behind it. So Tony Franklin, General Manager and Senior Director of Federal and Aerospace Markets at Intel, is well positioned to explain the whole concept of location intelligence: the challenges it can solve, both now and in the future, as well as the technology designed to help make it all happen, including the Intel® SceneScape platform (Video 1).

Video 1. Tony Franklin, General Manager and Senior Director of Federal and Aerospace Markets at Intel, explains the importance of location intelligence and the use of AI and digital twins.

How are you seeing the concepts of digital twins and location intelligence being used?

I think we’re all really used to location intelligence without even knowing it’s there. Everyone has Google Maps on their phone. Anyone who’s had children knows about the Life360 app: You know exactly where someone is, how long they’ve been there, how fast they’re moving.

But on the business side, we’re just starting to understand how impactful location intelligence can be from a financial point of view. So for a shipping company like UPS, if their location data isn’t accurate in getting from point A to point B, it could cost them many millions of dollars. It’s also important for things like sustainability. I read recently that 27% of greenhouse gas emissions in the US come from transportation.

And, in addition to location intelligence, I think what we’re starting to really understand is time-based spatial intelligence. It’s not just about location; it’s whether we really understand what’s going on around us, or around that asset or object or person, in a particular moment. Digital twins allow you to re-create the space and then also understand the particular time—both real time and, if you need to hit the rewind button and do analysis, you can do that also.

What’s also valuable about digital twins is that there’s a naturally created abstraction. We know that it’s a digital replica of the real world, and so analysis is being done on the replica, not on the actual data coming in. And that digital replica can then make the data available to multiple applications.
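
To make that abstraction concrete, here is a minimal sketch in Python of a time-indexed scene store: detections are appended as timestamped snapshots, and multiple applications query the replica, live or historical, instead of the raw sensor feeds. All names and structures here are illustrative assumptions, not SceneScape’s actual API.

```python
from bisect import bisect_right
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    object_id: str
    category: str    # e.g., "person", "vehicle", "bag"
    position: tuple  # (x, y) in scene coordinates

@dataclass
class SceneStore:
    # Parallel lists: timestamps[i] is the capture time of snapshots[i].
    timestamps: list = field(default_factory=list)
    snapshots: list = field(default_factory=list)

    def record(self, t: float, objects: list):
        """Append one snapshot of the scene at time t (t must be increasing)."""
        self.timestamps.append(t)
        self.snapshots.append(objects)

    def at(self, t: float) -> list:
        """'Rewind': return the most recent snapshot at or before time t."""
        i = bisect_right(self.timestamps, t)
        return self.snapshots[i - 1] if i else []

# Two different "applications" reading the same replica:
store = SceneStore()
store.record(0.0, [SceneObject("p1", "person", (2.0, 3.0))])
store.record(1.0, [SceneObject("p1", "person", (2.5, 3.0)),
                   SceneObject("b1", "bag", (2.5, 3.1))])

live_view = store.at(1.0)    # what is in the scene right now?
replay_view = store.at(0.5)  # hit the rewind button for analysis
print([o.category for o in live_view], [o.category for o in replay_view])
```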

You do need to use standards-based technology when there are multiple applications and different types of AI, because you may need one type of AI to identify certain animals or people or assets, and another to identify different cars or weather or more physics-like models.

What challenges are businesses facing that location intelligence can address?

I think one of the biggest challenges is siloed data coming from different applications. For example, we have a ton of applications that work together on our phones, but that doesn’t mean the data in those apps works together.

In the business world there might be an app to monitor physical security, but another app to monitor, say, the robots in a factory. They all have cameras, they all have sensory data, but they’re not connected—all the data is sitting in different silos. So how do you connect that data to increase situational awareness and make better decisions? And better decisions ideally mean either saving money, having an opportunity to make money, or creating some other value like a safer environment.

“AI and the integration of these technologies and sensor data is so important. It allows these systems to be more intelligent and to actually understand the environment” – Tony Franklin, @intel via @insightdottech

Another challenge is just the need for a mental shift. A lot of the technology we’re already using comes from games. Video games are so realistic these days, and in games you can see everything in your 3D environment. You know location; you have multiple kinds of sensory data coming in—sound or environmental. And all of that is integrated into the experience. So more and more we are starting to want to incorporate that into our day-to-day lives and into business as well.

How is Intel helping businesses implement digital twins and AI?

There’s always a ton of data involved that needs to be labeled to make it available, and we have lots of tools to connect this all together. If we’re talking streaming data in real time, there’s the Intel® Distribution of OpenVINO™ toolkit, which allows you to apply inference and also to pick the best compute technology for the particular data coming in.
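
As a hedged sketch of that streaming loop, the snippet below compiles a model with OpenVINO’s "AUTO" device plugin, which leaves the choice of compute (CPU, GPU, and so on) to the runtime. The model file name and input shape are placeholders; substitute a real OpenVINO IR model and its expected input.

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("person-detection.xml")  # placeholder IR model file
compiled = core.compile_model(model, "AUTO")     # runtime picks the best device

# Stand-in for one preprocessed video frame; the real shape is model-specific.
frame = np.zeros((1, 3, 544, 320), dtype=np.float32)
results = compiled(frame)                 # run inference on the frame
detections = results[compiled.output(0)]  # raw output tensor to post-process
print(detections.shape)
```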

So you’re bringing this data in, applying inference, continuing a loop. Then the Intel® Geti™ platform allows you to train models on your data. And it allows you to do it quickly, instead of needing thousands and thousands of images if we’re talking about computer vision. And no one needs a PhD in data science, either. That’s what Geti is for.

In the middle we have something called Intel® SceneScape. Like Geti, SceneScape is intended for end users. Think of it as a software framework sitting between OpenVINO and Geti to really simplify the creation of the digital twin, to make sense of the data you have, and to make that data available and usable in an impactful way. It allows the end user to easily implement AI technology in an open, standards-based way and to leverage the best compute technology underneath it.

So, the sensor data comes in. OpenVINO will then apply inference for object detection or classification, for example. You can use the Open Model Zoo, a range of models from all the partners we work with, and implement that model with SceneScape. Then you use Geti to train the model.
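
A sketch of what that loop might pass along: inference output becomes a timestamped scene-update message that a scene-management layer such as SceneScape could consume. The message schema below is invented for illustration and is not SceneScape’s actual wire format.

```python
import json
import time

def detections_to_scene_update(camera_id, detections, score_threshold=0.5):
    """Convert raw [x_min, y_min, x_max, y_max, score, class_id] rows into
    a timestamped update describing one camera's view of the scene."""
    objects = []
    for x0, y0, x1, y1, score, class_id in detections:
        if score < score_threshold:  # drop low-confidence detections
            continue
        objects.append({
            "class_id": int(class_id),
            "confidence": float(score),
            "bbox": [float(x0), float(y0), float(x1), float(y1)],
        })
    return {"camera": camera_id, "timestamp": time.time(), "objects": objects}

update = detections_to_scene_update(
    "terminal-a-gate-2",
    [[0.10, 0.20, 0.30, 0.80, 0.91, 1],   # confident detection, kept
     [0.50, 0.55, 0.60, 0.70, 0.35, 2]],  # low confidence, filtered out
)
print(json.dumps(update, indent=2))
```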

SceneScape also allows you to use any sensor for any application to monitor and track any space. We’re so used to video, but there are other sensors that allow you to increase situational awareness for your environment. You could have LiDAR—all the electric and autonomous vehicles have that—or environmental, temperature, radiation, or sound sensors, as well as text data.
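
One way to picture that "any sensor" point is a common observation record that normalizes very different modalities before the scene layer fuses them. The field names here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str
    modality: str   # "camera", "lidar", "temperature", "radiation", "sound", ...
    timestamp: float
    payload: dict   # modality-specific measurement

def from_lidar(sensor_id, t, points):
    return Observation(sensor_id, "lidar", t, {"points": points})

def from_thermometer(sensor_id, t, celsius):
    return Observation(sensor_id, "temperature", t, {"celsius": celsius})

observations = [
    from_lidar("roof-lidar-1", 12.0, [(1.0, 2.0, 0.5)]),
    from_thermometer("gate-temp-3", 12.1, 21.5),
]
for o in observations:
    print(o.sensor_id, o.modality, o.payload)
```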

Can you share any case studies of Intel® SceneScape in action?

One commonality among the customers that have been using SceneScape is the need to understand more about their environment, whether the one they’re in or the one they’re monitoring, and to connect the sensors and the data and make that data available. They want to increase the use of that data and gain more situational awareness from it.

So think about an airport. There’s a need to track where people are congregating, to track queue times, etc. When we were in the early stages of Covid, there was a need to track body temperature with forehead sensors. Airports have spaces that are already being monitored, but now they need to connect the data. The sensor that’s looking at the forehead generally isn’t connected to the cameras that are looking at the queue line. Well, now they need to be.

It builds relationships between data points: You see this person and see that they’ve been in line for 30 minutes, but you also see that they have a high temperature and they’re not socially distanced. Or you see that this person was with a bag and was moving with the bag, and now the bag is sitting stationary, but the person kept moving.
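
Rules over those relationships can be very simple once the data is connected. Below is a sketch with invented thresholds: flag a person whose queue dwell time and temperature are both high, and flag a bag that is now far from the person it arrived with.

```python
def flag_queue_risk(dwell_minutes, temperature_c,
                    max_dwell=30.0, fever_c=38.0):
    """True if someone has waited too long AND reads a high temperature."""
    return dwell_minutes >= max_dwell and temperature_c >= fever_c

def bag_abandoned(bag_pos, owner_pos, max_separation_m=10.0):
    """True if a tracked bag is now far from the person it arrived with."""
    dx = bag_pos[0] - owner_pos[0]
    dy = bag_pos[1] - owner_pos[1]
    return (dx * dx + dy * dy) ** 0.5 > max_separation_m

print(flag_queue_risk(dwell_minutes=32, temperature_c=38.4))     # True
print(bag_abandoned(bag_pos=(5.0, 5.0), owner_pos=(40.0, 5.0)))  # True
```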

And you’re not just looking at Terminal A, Gate 2, if you will. You need all the terminals and all the gates, and you need to see it in a single pane of glass. That’s one of the benefits that SceneScape provides.

How does Intel® SceneScape address privacy concerns?

Privacy is absolutely important. But we’re just looking at detecting the actual object: Is it a person, is it a thing, is it a car? We want to identify what that is, we want to identify distance, we want to identify motion. We don’t actually do facial recognition or anything like that. We’re inferring the data but then allowing the customers to implement what they choose for their particular application.

Where do you think this space is going next?

One of the use cases I’m waiting for is the patient digital twin. Right now you’ve got different medical records in different places. Historical data isn’t being used with real-time data, or against the reams and reams of medical history across many patients that could apply to me. So I would love to see a patient digital twin that’s constantly being updated; that would be ideal.

But how about just tracking medical instruments? Before surgery there are 10 instruments, and you want to make sure that there are still 10 instruments when the surgery is over, and that none was inadvertently left somewhere it shouldn’t be.

So there are immediate applications that can help with business operations today, as I’ve already talked about. And then there are the future-state ones that I think we’re all waiting for, where I want my patient digital twin.

I think as companies start to realize that they can de-silo their data and create relationships between the data and the systems they have across a range of applications (not just in one room, one floor, or one building, but maybe across a campus), they can start to get real value that impacts their bottom line: They can make more money, and they can save more money.

Are there any final thoughts or key takeaways you want to leave us with?

Think about traffic as a use case; location intelligence could help save lives. And we are seeing customers look at SceneScape with this application. Many cars today have camera sensors—backup sensors or front cameras—and most intersections have cameras. But today they don’t talk to each other.

Well, what if there’s a car that’s coming up at speed, and there’s also a camera that can see a pedestrian coming around a blind corner? I want the car to know that and to start braking automatically. Right now most cars coming up on another car too fast will automatically start braking. But they can’t do that with a person if they don’t know that the person is coming around the corner, because they can’t see them. Or, if the camera can see the person, it doesn’t necessarily know how far away that person is or how fast the car is going.
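
The distance-speed-time reasoning here can be made concrete with a simple stopping-distance check. The numbers below (reaction time, braking deceleration) are illustrative assumptions, not values from any real system.

```python
def should_brake(distance_m, car_speed_mps, reaction_time_s=0.5,
                 brake_decel_mps2=6.0):
    """Brake if stopping distance (reaction + braking) meets or exceeds the gap."""
    reaction_distance = car_speed_mps * reaction_time_s
    braking_distance = car_speed_mps ** 2 / (2 * brake_decel_mps2)
    return reaction_distance + braking_distance >= distance_m

# A car at 50 km/h (about 13.9 m/s), pedestrian 20 m past a blind corner:
print(should_brake(distance_m=20.0, car_speed_mps=13.9))  # True: start braking
```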

As humans, we get into a car and we know how fast it’s going; we know if somebody’s coming. And we take the way our brains understand that for granted. But cameras don’t understand that. So that’s an application that can be applied today, and some cities are actually looking at those types of applications.

And that’s why AI and the integration of these technologies and sensor data is so important. It allows these systems to be more intelligent and to actually understand the environment. Again, time-based spatial intelligence: distance, time, speed, relationships between objects.

And that’s exactly what we’re working on—working with the large ecosystem Intel has to make it easy for companies to implement this technology. It’s an exciting time, and we’re looking forward to helping companies make a difference.

Related Content

To learn more about the importance of location intelligence, listen to Gaining Location Intelligence with AI and Digital Twins and read Monitor, Track, and Analyze Any Space with Intel® SceneScape. For the latest innovations from Intel, follow them on Twitter @intel and on LinkedIn at Intel Corporation.
 

This article was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
