
AI • IOT • NETWORK EDGE

Unlocking New Possibilities with 3D LiDAR

Conceptual image of a busy city crosswalk that has pedestrians, vehicles, and street signs highlighted in different colors.

We’re all familiar with the concept of radar. But did you know that the word is actually an acronym, “radio detection and ranging,” that has shed its original uppercase look and become a common noun and a common idea? (As well as a beloved character on the TV show M*A*S*H.) What, then, is the related but more techie-looking “LiDAR”? It stands for “light detection and ranging,” and it’s not actually a new technology, but it has been gaining a lot more interest lately, particularly in autonomous vehicles and terrestrial mapping, though its uses go well beyond self-driving cars and archeology.

Recently we spoke with Gerald Becker, VP of Market Development and Alliances at AI-powered 3D LiDAR solution provider Quanergy Solutions. He has seen the technology advance beyond automotive and across many different industries and businesses. He talks about how LiDAR improves operational efficiency and workflows, the benefits of moving from 2D to 3D, and the challenges of persuading people to adopt new technologies (Video 1). And maybe one day soon, LiDAR will be so much a part of our lives that we’ll see it in the dictionary as “lidar.”

Video 1. Gerald Becker, VP of Market Development and Alliances at Quanergy, talks about the rise of, and advancements in, 3D LiDAR on the “insight.tech Talk.” (Source: insight.tech)

How does LiDAR go beyond autonomous vehicles?

LiDAR has been around for decades, but it wasn’t until the past 10 years or so that we’ve really seen what it can do. Everybody knows about LiDAR being used for automotive—that’s been the holy grail—and robotics and terrestrial mapping, but there are a lot of other applications for it.

At Quanergy, we’ve pivoted and gone after a different market, where we’ve aligned with a who’s who of players from physical security, integration-management platforms, video management, software solutions, cameras, business intelligence, and physical-access control systems. They’ve integrated our sensors into their platforms to provide all kinds of event-to-action workflows. It gives end users the ability to explore how to solve old problems in different ways and to reach levels of accuracy they’ve never been able to achieve before, as well as to solve new problems.

I head up the physical-security, smart space, and smart city sectors at Quanergy, and there’s so much 3D LiDAR applicability in those three markets because until now they’ve been confined to cameras or other types of IoT sensors that are 1D or 2D technologies. The advent of 3D technologies and the integration ecosystem we’ve developed in the past few years provide so much more flexibility to see beyond two dimensions, beyond what has long been the standard way of sensing in this space.

How can that new depth of dimension benefit businesses?

In security, for example, we’re doing some very, very big things. The industry has predominantly used radar, cameras, and video analytics; our 3D sensors can now provide depth and volume in 360º with centimeter-level accuracy. That increases the TCO advantage over legacy technologies and decreases the number of false alarms.

With legacy technologies, anytime there’s movement or an analytic flags a potential breach, it automatically starts triggering events. That’s a big problem when there are thousands and thousands of alarms just because the analytic can’t work out that it’s only an animal walking by. Our sensors are able to provide 98% detection, tracking, and classification accuracy in 3D spaces.
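To make that concrete, here is a minimal sketch of classification-gated alarming, assuming a hypothetical detection feed where each track carries a class label and a confidence score. The names and thresholds are illustrative, not Quanergy’s API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int
    label: str          # e.g., "person", "vehicle", "animal"
    confidence: float   # classifier confidence, 0.0 to 1.0

# Only confidently classified people and vehicles should ever raise an alarm.
ALARM_CLASSES = {"person", "vehicle"}
MIN_CONFIDENCE = 0.9

def should_alarm(det: Detection) -> bool:
    """Suppress nuisance events: an animal walking by never triggers."""
    return det.label in ALARM_CLASSES and det.confidence >= MIN_CONFIDENCE

detections = [
    Detection(1, "animal", 0.97),   # coyote near the fence: ignored
    Detection(2, "person", 0.95),   # intruder: alarm
    Detection(3, "person", 0.40),   # low-confidence noise: ignored
]
alarms = [d for d in detections if should_alarm(d)]
print(f"{len(alarms)} alarm(s) out of {len(detections)} detections")
```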

From the business-intelligence side, we’re able to provide a higher-level, deeper understanding of what’s going on within a space. Take retail. We can follow a consumer through their journey: what path they’re taking, what products they’re touching, how long the queue lines are.

And instead of sticking a camera here, here, here, and stitching them all together, you put in one LiDAR sensor that gives you a full 360º, and you’re able to see that whole space and how people interact in it. We’re able to provide so many cool outcomes that have simply never been possible with 2D-sensing technology.
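As a rough illustration of the metrics such tracking enables, here is a minimal sketch, assuming a hypothetical feed of (track_id, timestamp, x, y) samples from the LiDAR; the zone geometry and numbers are invented for illustration:

```python
from collections import defaultdict

# Hypothetical tracked positions from a 360º LiDAR: (track_id, t_sec, x_m, y_m)
samples = [
    (1, 0.0, 1.0, 2.0), (1, 5.0, 1.2, 2.1), (1, 30.0, 6.0, 2.0),
    (2, 0.0, 5.8, 1.9), (2, 40.0, 5.9, 2.0),
]

# Axis-aligned checkout-queue zone, in meters: (x_min, y_min, x_max, y_max)
QUEUE_ZONE = (5.0, 1.0, 7.0, 3.0)

def in_zone(x, y, zone):
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

# Dwell time per shopper: time between first and last sample inside the zone.
dwell = defaultdict(list)
for track_id, t, x, y in samples:
    if in_zone(x, y, QUEUE_ZONE):
        dwell[track_id].append(t)

for track_id, times in dwell.items():
    print(f"track {track_id}: queued for {max(times) - min(times):.0f}s")
print(f"shoppers seen in the queue zone: {len(dwell)}")
```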

What are some of the challenges that LiDAR is up against in terms of adoption?

I think that with LiDAR, some people may be a little nervous about adopting a new technology if it’s out of their comfort zone. When I explain what LiDAR sees, I always come back to my favorite movie of all time, The Matrix. Remember when Neo saw the ones and zeros dropping from the sky when he saw Agent Smith down the hall? That’s how we see. We don’t see like cameras do, where you could tell that I have on a blue polo shirt. To us, everything looks like a 3D silhouette with depth and volume in 360º.

There is also cost. You have to look at it from a high level. I always use this analogy that I heard when I was young from more senior sales guys—the whole iceberg theory. You can’t just look at the top of the iceberg when comparing what different solutions will cost. A camera may be only a few hundred dollars, while LiDAR may be a few thousand—plus software, et cetera, et cetera.

But the underlying cost is beneath the iceberg, right? What is it going to take to install seven to eight cameras on the one side versus one device? Look at labor; look at the cost of conduit, cable, licensing, the maintenance that’s required to deploy those cameras. So that’s when LiDAR becomes really cost-effective, when you understand the complexity of installation of legacy technology versus new technology in that area.
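To make the iceberg point concrete, here is a back-of-the-envelope comparison. Every figure below is a placeholder assumption for illustration, not a quote from Quanergy or any camera vendor:

```python
# All figures are illustrative placeholders for an iceberg-style TCO comparison.
CAMERA = {"unit": 400, "install_per_unit": 900,   # labor, conduit, cable
          "license_per_unit": 200, "annual_maint_per_unit": 100}
LIDAR  = {"unit": 5000, "install_per_unit": 1200,
          "license_per_unit": 500, "annual_maint_per_unit": 200}

def tco(costs, units, years=5):
    """Total cost of ownership: hardware + install + licensing + maintenance."""
    per_unit = (costs["unit"] + costs["install_per_unit"] +
                costs["license_per_unit"] + years * costs["annual_maint_per_unit"])
    return per_unit * units

# One LiDAR covering the area of eight cameras (the seven-to-eight figure above).
print(f"8 cameras over 5 years: ${tco(CAMERA, 8):,}")   # $16,000
print(f"1 LiDAR over 5 years:   ${tco(LIDAR, 1):,}")    # $7,700
```

Under these assumed numbers, the single sensor wins once installation, licensing, and maintenance surface from below the waterline.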


How can companies leverage their existing infrastructure for 3D LiDAR?

A layered approach to any solution is probably the best route. There’s not one single technology in the world that can solve all use cases. Is someone trying to sell you on that? Please turn around and run, because it just can’t be done. But when you put the best-of-breed solutions together in your deployment, you’re going to get the best outcomes.

We have a large ecosystem of technology partners that we’ve integrated with. For example, we partner with 2D-imaging technologies: cameras, like your Bosch, your Axis, your Hanwha. If you need to identify something—there’s a bad guy wearing a blue polo shirt that’s potentially going to break through that fence! The camera helps us see that. But when you need to actually detect, track, and classify, that’s when LiDAR opens up new outcomes that you can’t get with just a camera.

Let’s say you use traditional pan-tilt-zoom auto tracking on an embedded camera. The issue with traditional 2D technology and auto tracking is that when Mr. Blue Polo goes behind an object or into another area, the camera doesn’t know what’s happening.

But if you have enough of our lasers shooting throughout the space, seeing up and down aisles, halls, and parking spaces, they’re able to accurately detect the object or person. With our solution, we can tell the camera, “Hey camera, stay focused on this wall. We know the person is behind the wall.” Then when the person comes out from behind the wall, we’re still telling the camera to track Mr. Blue Polo.
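A minimal sketch of that occlusion-aware handoff logic might look like the following, with an invented aim_camera stand-in for whatever PTZ control call a video management system actually exposes:

```python
from typing import Optional, Tuple

def aim_camera(position: Tuple[float, float]) -> None:
    """Stand-in for a real PTZ control call (e.g., through a VMS integration)."""
    print(f"camera aimed at {position}")

last_known: Optional[Tuple[float, float]] = None

def on_lidar_update(position: Optional[Tuple[float, float]]) -> None:
    """Called each frame with the tracked person's position, or None while
    the person is occluded from the camera's point of view."""
    global last_known
    if position is not None:
        last_known = position      # person visible: follow them
        aim_camera(position)
    elif last_known is not None:
        aim_camera(last_known)     # occluded: hold aim where they disappeared

# Mr. Blue Polo walks behind a wall and re-emerges.
for p in [(3.0, 1.0), (4.0, 1.0), None, None, (5.0, 1.5)]:
    on_lidar_update(p)
```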

The other beautiful thing about the solution is that we provide a mesh architecture. If you have enough LiDARs in a space, as long as the lasers overlap with one another, it creates this massive digital twin. It gives you a flexibility that has never been possible with other technologies. You can literally zoom in and pan around up and down corridors, up and down hallways, other sides of walls, around a tree, around whatever it may be.
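Behind such a digital twin is a registration step: each sensor’s points are transformed into one shared world frame using its calibrated pose. Here is a minimal sketch with NumPy; the poses and points are invented for illustration:

```python
import numpy as np

def to_world(points, rotation, translation):
    """Rigid transform from a sensor's local frame into the shared world frame."""
    return points @ rotation.T + translation

# Two overlapping sensors with calibrated poses (values are illustrative).
cloud_a = np.array([[1.0, 0.0, 0.5], [2.0, 0.0, 0.5]])   # sensor A frame
cloud_b = np.array([[0.5, 1.0, 0.5]])                     # sensor B frame

R_a, t_a = np.eye(3), np.zeros(3)
theta = np.pi / 2   # sensor B rotated 90º about Z, mounted 10 m down the corridor
R_b = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
t_b = np.array([10.0, 0.0, 0.0])

# Stacking the registered clouds yields one continuous model of the space.
world = np.vstack([to_world(cloud_a, R_a, t_a), to_world(cloud_b, R_b, t_b)])
print(world)
```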

Can you talk about some of your customer use cases?

There’s a global data-center company that came to us with a very specific problem. Within a 33-week period of testing at one of their sites, they were generating 178,000 alarms. It was by definition a needle-in-a-haystack situation: only two of those alarms were real. Think of the operation to acknowledge an alarm within a security practice: Click. Review. That isn’t it? Delete. Try doing that 178,000 times to catch the rare occasion when a disgruntled employee who got fired for something and shouldn’t be at the property at all comes in with a USB drive, plugs into the network, and takes down a billion-dollar organization.

The people at this company knew they had a problem, and they tested everything under the sun—AI, radar, fact-checking technology, underground cable. They finally landed on our solution, and they did a shootout: one of their best sites against our site. Their best site came up with 22,000 alarms; our site generated five actual alarms. It saved them 3,600 hours of pointless investigation work.
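That 3,600-hour figure is easy to sanity-check. Assuming a hypothetical ten minutes to click, review, and dismiss each alarm (the per-alarm time is an assumption; the interview gives only the totals):

```python
MINUTES_PER_ALARM = 10              # assumed triage time; not from the article

false_alarms_avoided = 22_000 - 5   # best legacy site vs. the LiDAR site
hours_saved = false_alarms_avoided * MINUTES_PER_ALARM / 60
print(f"~{hours_saved:,.0f} hours of triage avoided")   # ~3,666 hours
```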

Here’s another interesting one. In Florida there are a lot of drawbridges. They go up and they go down, and they’re susceptible to liability issues if people or vehicles accidentally fall into the waterway while the bridge is in transition. Some initial tests were done with our LiDAR solutions positioned on both sides of the bridge to track whether an object—a person or a vehicle—came into the scene. If anything did, the system could either hold the bridge from going up or notify the bridge tender in the kiosk and say, “Do not let the bridge up.” They had very high success with that POC using LiDAR, and they’re now deploying it across several bridges in Florida.
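A minimal sketch of that hold-the-bridge interlock might look like this; the zone, coordinates, and hooks are invented, and a real deployment would live in the bridge-control and notification systems:

```python
# Hypothetical danger zone on the span, in meters: (x_min, y_min, x_max, y_max)
DANGER_ZONE = (0.0, 0.0, 30.0, 8.0)

def object_in_zone(tracks, zone):
    """True if any tracked person or vehicle is inside the zone."""
    x0, y0, x1, y1 = zone
    return any(x0 <= x <= x1 and y0 <= y <= y1 for (x, y) in tracks)

def check_bridge(tracks):
    if object_in_zone(tracks, DANGER_ZONE):
        # Interlock: hold the span and alert the bridge tender.
        print("HOLD BRIDGE: object detected on span; notifying tender")
        return False
    print("Span clear: bridge may raise")
    return True

check_bridge([(12.0, 3.5)])   # pedestrian still on the deck -> hold
check_bridge([])              # clear -> raise
```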

Tell me more about the ecosystem of partners you work with.

Most LiDAR is heavily focused on GPU processing, with a ton of data that needs to be crunched; we’re a little bit different. Our sensors are purpose-built for flow management and security applications, so they don’t need to gather and push a ton of data through the pipe. That lets us use a CPU-based architecture, which is more cost-effective. It’s also highly scalable, even more so since we align with Intel.

Our partnership with Intel also means that we find out new use cases on a daily basis. Right now we’re exploring brick-and-mortar and warehouse automation with them, where we could provide 3D sensing beyond the traditional way of looking at those types of spaces. The partnership with Intel is really valuable to us as we continue to scale and grow.

How do you anticipate that this space will evolve going forward?

There’s the advent of AI and what’s going on with large language models. There’s a ton of work being done right now with computer vision to understand much more about what’s being captured within a scene, more generalities that can create different outcomes and tell a different story that ultimately gets you to the end result. Is it a good guy or a bad guy? Is it a good workflow or is it not?

So there’s much more that can be done with LiDAR as we marry it with AI technologies, providing additional outcomes that are just not being done yet. We’re still in the very early stages, but there’s really just a massive opportunity in this space.

We’re past that early, kick-the-tires phase with LiDAR, and there are so many people now talking about how it has improved their workflows and provided additional value. So I think, now more than ever, it’s time to act and start testing, to start asking the question: What can LiDAR do for me that I haven’t been able to do before? Look at your existing use cases and ask yourself: If I had depth, if I had volume, if I had centimeter-level accuracy, how could that improve my day-to-day workflow, my job, and provide more value to the organization as a whole?

Related Content

To learn more about 3D LiDAR, watch See the Bigger Picture with 3D LiDAR Applications. For the latest innovations from Quanergy, follow them on X/Twitter at @quanergy and LinkedIn.


This transcript was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
