See the Bigger Picture with 3D LiDAR Applications
Video imaging has become a cornerstone of business operations, but its limitations are clear. Challenges like low light, obstructions, and tracking multiple targets hinder its effectiveness. Enter 3D LiDAR: a solution known for its unparalleled situational awareness and accuracy.
While 3D LiDAR isn’t a new technology, its capabilities are becoming more affordable and accessible to businesses. In this episode, we explore how to unlock the full potential of 3D LiDAR in your business: real-world applications, ways to overcome implementation hurdles, and how this innovative technology can drive your business forward.
Listen Here
Our Guest: Quanergy
Our guest this episode is Gerald Becker, VP Market Development and Alliances at Quanergy, an AI-powered 3D LiDAR solution provider. Prior to joining Quanergy, Gerald served as Senior Director of Sales and Business Development at SAFR by RealNetworks, an AI and computer vision apps company. At Quanergy, he leads the identification and development of strategic channel partnerships in the security, smart city, and smart spaces markets.
Podcast Topics
Gerald answers our questions about:
- 1:51 – Bringing 3D LiDAR beyond autonomous vehicles
- 6:18 – Making 3D LiDAR more accessible to businesses
- 7:42 – Implementing 3D solutions into existing infrastructure
- 11:47 – Gaining actionable insights and decision-making
- 14:48 – Real-world 3D LiDAR application results
- 19:32 – The partnerships and technology behind the solutions
- 22:01 – Emerging trends and technologies to look out for
- 23:58 – Final thoughts and key takeaways
Related Content
To learn more about 3D LiDAR, read Unlocking New Possibilities with 3D LiDAR. For the latest innovations from Quanergy, follow them on Twitter at @quanergy and LinkedIn.
Transcript
Christina Cardoza: Hello, and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network-technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking to Quanergy’s Gerald Becker about advancements in 3D LiDAR. Hey, Gerald, thanks for joining us.
Gerald Becker: Thanks for having us.
Christina Cardoza: Before we jump into the conversation, if you could just give us a brief background of yourself and the company, that would be great.
Gerald Becker: So, a little bit about myself, Gerald Becker. I’ve been in the security space for a couple of decades now, a little over 20 years, and I’ve been in every facet of the industry: end user, systems integrator, and most recently the manufacturing side, which is where I really found my stride. I’ve been with Quanergy just shy of four years now.
Quanergy is a manufacturer of 3D LiDAR–sensing hardware, and we’re also a developer of software. We were founded in 2012, and we were one of the first LiDAR companies to actually come out in the space commercially, originally targeting autonomous vehicles, right? The holy grail. So we were one of the first to be able to offer a turnkey solution to various markets, which we’ll go into here in a bit.
So, really looking forward to this discussion. Thank you, Christina.
Christina Cardoza: Yeah, absolutely. And excited to jump in. You mentioned the company’s been around since 2012, so obviously LiDAR—3D LiDAR—it’s not a new technology, but I feel like it’s been gaining a lot more interest lately. And you said it started in automated driving, but now it’s spanning across different industries and different businesses.
So I’m just curious if we could start off talking about what 3D LiDAR is, exactly. How does it go beyond automated cars, and what are the pain points that businesses are trying to solve with it today?
Gerald Becker: This is the fun stuff, right? So, there are a lot of applications for LiDAR. Predominantly everybody knows LiDAR being used for automotive, robotics, and things like that. Also terrestrial mapping: putting these on drones and mapping environments to see whether there’s a pyramid hidden beneath a rainforest, and so on. A lot of cool applications that have been out there for years and years. So LiDAR is absolutely not a new technology. It’s been around for decades, a very, very long time. It’s not until, I would say, the past 10 years that we’ve really started going beyond the comfort zone of what LiDAR can do.
In my role within the organization, I head up the physical-security, smart space, and smart city market sectors. And with that said, there’s so much applicability for 3D LiDAR in those three markets, because they’ve always been confined to a 2D space, like what we’re seeing on this camera. Those spaces have predominantly used radar, cameras, and other types of IoT sensors that have always been either 1D or 2D technologies.
But now, with the advent of 3D technologies and the integration ecosystem we’ve developed in the past few years, we provide so much more flexibility: to see beyond the norm, beyond two dimensions, beyond what’s been the common custom of sensing in this space.
So, for security we’re doing some very, very big things. Because security has predominantly relied on radar, cameras, and video analytics, 3D sensing now provides additional capabilities: depth, volume, and full 360 coverage with centimeter-level accuracy. For security applications, that increases the TCO advantage compared to legacy technologies, and it decreases the number of false alarms, so operators can actually activate, track, and see what is real and what isn’t, right?
So with these legacy technologies, anytime there’s movement or an analytic flags a potential breach, it automatically starts triggering events and sends them to the alert office: “Hey, there’s an alarm, there’s an alarm!” That’s a big problem when there are thousands and thousands of alarms coming in, because the AI, the analytic, the intelligent video doesn’t know how to decipher: “Hey, that’s just an animal walking by. It’s not a perpetrator coming up to the fence.”
So with our sensors we’re able to provide 98% detection, tracking, and classification accuracy in 3D spaces. When we marry up with other technologies, such as a PTZ camera, where the camera may be focused on one specific zone, our 3D LiDAR sensor sees the whole space. When we detect an object, we tell the camera, “Hey, camera: move over here and keep tracking this object in this space.” Once again, with centimeter-level accuracy. We’re able to slew-to-cue cameras and provide that intelligence to the security operations.
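Editor’s note: for readers curious what this slew-to-cue handoff can look like in practice, here is a minimal sketch in Python. The function names, the camera placement, and the numbers are our own illustrative assumptions, not Quanergy’s actual API; the core idea is simply converting a LiDAR track’s 3D position into the pan/tilt angles a camera should move to.

```python
import math

# Hypothetical sketch of a LiDAR-to-PTZ "slew-to-cue" handoff:
# a LiDAR perception system hands a tracked object's 3D position
# to a pan-tilt-zoom camera so the camera can point at it.

def lidar_track_to_ptz(track_xyz, camera_xyz):
    """Convert a tracked object's world position (meters) into the
    pan/tilt angles (degrees) a PTZ camera should slew to."""
    dx = track_xyz[0] - camera_xyz[0]
    dy = track_xyz[1] - camera_xyz[1]
    dz = track_xyz[2] - camera_xyz[2]
    pan = math.degrees(math.atan2(dy, dx))                   # heading in the ground plane
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # elevation above horizon
    return pan, tilt

if __name__ == "__main__":
    # Object detected 20 m east and 5 m north of a camera mounted 6 m up a pole.
    pan, tilt = lidar_track_to_ptz((20.0, 5.0, 0.0), (0.0, 0.0, 6.0))
    print(f"Cue camera to pan={pan:.1f}°, tilt={tilt:.1f}°")
```

In a real deployment the resulting angles would be sent to the camera through whatever control interface it exposes, for example a VMS or the camera’s own API.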
From the flow-management side, which is more on the business-intelligence side, we’re able to provide a deep understanding of what’s going on within spaces such as retail. We can understand where consumers are going on their journey, what path they’re taking, what products they’re touching, how long the queue lines are. And we do it much more easily than you could with traditional camera-based or stereoscopic approaches. We’re able to eliminate up to seven cameras with one sensor of ours and give you quite a bit of coverage in that space, once again in 3D.
So instead of sticking a camera here, here, and here and stitching them all together, you put up one LiDAR sensor that gives you full 360, and you’re able to see that whole space and how people interact in it: at a theme park, to understand what a person is doing in line, or at a museum, to see how they’re interacting with a digital exhibit. We’re able to provide so many cool outcomes that you’ve just never been able to get with 2D-sensing technology. So when you ask what’s new and what you can do with 3D: we’ve barely started tapping into its capabilities.
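Editor’s note: the business-intelligence outcomes Gerald describes reduce to geometry once a LiDAR pipeline emits anonymous tracked positions. As a hedged illustration (the zone shape and track data below are invented), here is how a queue-length metric could be computed from tracks with a standard point-in-polygon test:

```python
# Illustrative only: queue length as "tracked people inside a zone polygon".

def point_in_polygon(pt, polygon):
    """Ray-casting test: is the (x, y) point inside the polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray from pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

queue_zone = [(0, 0), (10, 0), (10, 2), (0, 2)]   # a 10 m x 2 m queue lane
tracks = {"t1": (1.5, 1.0), "t2": (4.0, 0.5), "t3": (12.0, 1.0)}  # anonymous track IDs

queue_length = sum(point_in_polygon(p, queue_zone) for p in tracks.values())
print(f"People currently in queue: {queue_length}")  # -> 2
```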
Christina Cardoza: Yeah, and everything that you’re saying, obviously that depth of dimension and that 360 view, that is something that’s going to benefit businesses, and that complete picture is something you really want to strive for. But, like you said, it’s not something we’ve seen businesses really utilize until now.
So, what’s been happening in this space? Have there been any recent advancements or innovations that make this a little bit more accessible to these types of businesses?
Gerald Becker: I think the biggest wall to adoption was the ecosystem of technology integrations, right? As I stated, a lot of these companies have predominantly been going after automotive, the holy grail, and that has typically meant selling to OEMs: people who take the sensor, develop custom integrations, and stick it into the hood or fender of a vehicle.
Now, that’s not what we’ve done. We’ve pivoted and gone after a different market, where we’ve aligned with the who’s who of physical security: integration-management platforms, video management software solutions, cameras, business intelligence, physical-access control systems. They’ve integrated our sensors into their platforms to provide all these event-to-action workflows, all these different outcomes that have just not been available in the past, right?
So this is opening a whole new level of understanding and all-new capabilities to solve old problems, and even more so, new problems. Now that we’ve got integrations with all these tier-one partners in these spaces, end customers and end users have the ability to explore how to solve old problems in different ways and get levels of accuracy they’ve never been able to achieve before.
Christina Cardoza: Now, you’ve mentioned a lot of different technologies. For the companies that have been doing 2D sensing with their cameras and other sensors, how can they leverage their existing infrastructure and add 3D LiDAR on top of it, or work with Quanergy within that infrastructure? Or does it take more investment in hardware and tooling to integrate these solutions and get the benefits?
Gerald Becker: Yes and no. As I stated in the previous question, we have a large ecosystem of technology partners that we’ve integrated with. I would say that nine times out of ten, off the shelf, we can integrate with a lot of the stuff that’s already out there. But we’re very fluid in how we work with partners. You can integrate with us directly through your camera, through a VMS platform, through our open API, or through third-party GPIO boxes, which are basically nothing more than Ethernet boxes we can push a command to directly, to activate a siren, an alarm, or whatever it may be.
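Editor’s note: the GPIO-box path is worth a sketch because it shows how simple that last hop can be. Many network relay boxes accept short ASCII commands over TCP; the address, port, command syntax, and event shape below are hypothetical placeholders, not any specific vendor’s protocol.

```python
import socket

RELAY_ADDR = ("192.168.1.50", 17123)  # placeholder IP/port for the relay box

def trigger_relay(channel: int, on: bool) -> None:
    """Push a one-line ASCII command to a network relay box whose
    contact output is wired to a siren or strobe."""
    cmd = f"RELAY {channel} {'ON' if on else 'OFF'}\n".encode("ascii")
    with socket.create_connection(RELAY_ADDR, timeout=2.0) as sock:
        sock.sendall(cmd)

def on_lidar_event(event: dict) -> None:
    """Hypothetical event-to-action hook: a perimeter breach closes relay 1."""
    if event.get("type") == "zone_breach":
        trigger_relay(channel=1, on=True)

# Example (would sound the siren if a relay box were listening):
# on_lidar_event({"type": "zone_breach", "zone": "perimeter_east"})
```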
The other side of it, too, is that we’re not trying to go completely greenfield, right? I’m not trying to discount a lot of the technologies out there, but I will say a layered approach is probably your best route, because there’s not a single technology in the world that can solve all use cases. If someone tries to sell you on that, please turn around and run, because it just can’t be done. But when you put best-of-breed solutions together in your ecosystem or in your deployment, you’re going to get the best outcome from every sensor.
Case in point with cameras: we don’t see like cameras do; we don’t capture any personally identifiable information. When I explain what LiDAR sees, I always revert back to my favorite movie of all time, The Matrix. Remember when Neo saw the ones and zeros dropping from the sky when he saw Agent Smith down the hall? That’s how we see. We don’t see like cameras do, where I could tell, Christina, that you have glasses and a white blouse on, or that I have a black polo shirt on. We can’t see that. To us everything looks like a 3D silhouette with depth and volume in 360.
Now, that’s where we partner with 2D-imaging technologies, such as cameras: your Bosches, your Axises, your FLIRs, your Hanwhas, big companies that help us see. When we do need to identify, “Hey, there’s a bad actor with a black polo that’s potentially going to break through this fence,” that camera helps us decipher that. But when you need to actually detect, track, and classify, and you marry those technologies, that’s when you open up new outcomes that you can’t get with just a camera.
So for instance, when you use traditional pan-tilt-zoom auto tracking that’s embedded on a camera, it’ll put a bounding box around the person and track that person in the scene. The issue with traditional 2D technology and auto tracking embedded on the camera is that when that person goes behind an object or into another area, the camera doesn’t know what’s happening; it doesn’t see what’s going on in that environment.
But if you have enough of our lasers shooting throughout the space, and we’re seeing up and down aisles, halls, parking spaces, whatever that obstruction may be, we’re able to accurately detect the object, and we tell the camera, “Hey camera, stay focused on this wall, because we know the person is behind the wall.” Then when the person comes out from behind the wall and into the view of the camera, we’re still telling the camera to keep tracking that person: that’s Mr. Bad Guy. So we go from the wall to the guy with the black shirt on, and we’re tracking him all throughout.
That’s the beautiful thing about the solution, too: we provide a mesh architecture, right? Unlike having to stitch multiple cameras together and track from scene to scene to scene, tile to tile, if you have enough LiDARs in a space, as long as the lasers overlap with one another, it creates this massive digital twin. So you could literally zoom in, pan around, and track all throughout: up and down corridors, up and down hallways, on the other side of walls, around a tree, around whatever it may be. That’s the power of our mesh architecture: it gives you flexibility that you’ve just never had with other technologies.
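Editor’s note: the “mesh” behavior Gerald describes comes from registering every sensor into one shared world frame, so overlapping sensors report the same object once. This toy sketch (not Quanergy’s software; poses and detections are invented) shows the two essential steps: transforming local detections to world coordinates and merging duplicates.

```python
import math

def to_world(detection_xy, sensor_pose):
    """Rotate/translate a sensor-local (x, y) detection by the
    sensor's world pose (x, y, yaw in radians)."""
    sx, sy, yaw = sensor_pose
    x, y = detection_xy
    return (sx + x * math.cos(yaw) - y * math.sin(yaw),
            sy + x * math.sin(yaw) + y * math.cos(yaw))

def merge(points, radius=0.5):
    """Greedily merge world points within `radius` meters, so two
    overlapping sensors seeing one person yield one track, not two."""
    tracks = []
    for p in points:
        if not any(math.dist(p, t) < radius for t in tracks):
            tracks.append(p)
    return tracks

# Two sensors 30 m apart facing each other, both detecting the same person.
poses = [(0.0, 0.0, 0.0), (30.0, 0.0, math.pi)]
local_detections = [[(15.2, 0.1)], [(14.9, 0.0)]]
world = [to_world(d, pose) for pose, dets in zip(poses, local_detections) for d in dets]
print(merge(world))  # -> one merged track near (15, 0)
```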
Christina Cardoza: I love this whole idea of partnering with other organizations and experts in this space and being able to get the best outcomes from each sensor and utilize all this technology. But how do you make sure that, now that you have this all together, it’s not information overload? That you’re getting data that makes sense and that you can act on and make decisions with?
Gerald Becker: We’re working with a global data-center company who came to us with a very specific problem. They told us that within a 33-week period of testing at one of their sites, they were generating over 178,000 alarms. Now, this is by definition a needle in a haystack when I tell you that only two of those were real alarms. When you think of the operation to acknowledge an alarm within a security practice, it’s: click, next, review. That isn’t it? Delete. Click, next, review, delete. Try doing that 178,000 times to find the one time when a disgruntled employee who got fired comes onto the property when he shouldn’t be there, plugs a USB drive into the network, and takes down a billion-dollar organization, right?
They knew they had a problem. So they tested everything under the sun: AI, radar, fence-detection technology, underground cable. They finally landed on our solution. They did a shootout between one of their best sites and our site over the same timeframe of testing. Their best site came up with 22,000 alarms; our site generated five actual alarms. And, again, I get goosebumps when I tell you this: they told us that saved them 3,600 hours of pointless investigation work that they can reallocate to other capital expense, other operational expense. “We’re buying more solutions from you guys,” more LiDARs from us, right? There’s just so much that they’re able to see.
Now, the idea is that we dramatically decrease the operational burden of those legacy technologies and make operators aware only of what’s important to them, right? That was a key value proposition there. But even more so, tying into all those other technologies made them more effective. When we did track those five alarms, we actually cued the camera to decipher: is that a good guy or a bad guy? Is that a real alarm? Absolutely. So we’re able to decrease the operational expense of someone having to click, next, review thousands and thousands of times, so they only work on what’s important. There are so many different positive outcomes and effects that I could go on and on.
Christina Cardoza: That’s great. And I’m sure when you’re looking at hundreds of thousands of different alerts, you can make mistakes. You’re probably just going through the motions, and something could be happening while you’re thinking: all right, click next, click next; I just want to get through all of these alarms and alerts. So that’s great that you guys are able to pinpoint exactly what’s happening.
You’ve talked a lot about infrastructure surveillance, and you talked about customer behavior within shopping aisles and things like that. I’m curious if you could provide us with any more customer use cases or examples of how you’ve helped somebody: what problem the business was having, and what the result was of using 3D LiDAR and working with Quanergy?
Gerald Becker: We deal with various markets. In fact, one of our bigger markets is the flow-management, smart space, smart city market. We just did a webinar with one of our customers, YVR, Vancouver International Airport, where they talked about their application of LiDAR and how it was able to give them the accuracy levels they needed to better manage the guest journey: that curb-to-gate experience across airside and landside operations. Even more so, it’s about getting the flow of people in, through, and out to their final destination.
There are a lot of bottlenecks, a lot of choke points: as you get dropped off by your family, by taxi, or by Uber; as you go to check in and get your ticket; as you go through CATSA or TSA for security. Then as you go to duty-free or a restaurant to get your food. And finally when you get to the boarding gates, right? There are a lot of areas with choke points that create friction in the experience and the journey that one takes through that environment.
Now, as I mentioned earlier, I don’t want to talk down other sensing technologies, but let’s just say we were able to replace up to seven cameras in that environment with one LiDAR sensor. And unlike cameras in that space, which had to be overhead looking straight down, giving them a limited field of view, we gave them so much coverage. One of our long-range sensors alone can do 140 meters in diameter of continuous detection, tracking, and classification. That’s equivalent to about three US football fields side by side, right? So that’s quite a bit of coverage.
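Editor’s note: that football-field comparison holds up as a rough area calculation, assuming a US football field of about 110 m × 49 m including the end zones:

$$
A_{\text{sensor}} = \pi r^2 = \pi\,(70\ \text{m})^2 \approx 15{,}400\ \text{m}^2,
\qquad
\frac{A_{\text{sensor}}}{110\ \text{m} \times 49\ \text{m}} \approx 2.9
$$

So one long-range sensor covers roughly three fields’ worth of area.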
Now, when you look at it from the TCO advantage that we provide airports, data centers, theme parks, casinos, ports (the list goes on and on), we dramatically decrease the overall cost of the deployment. When you look at it at a high level, I always use this analogy I heard when I was very young from more senior sales guys: the whole iceberg theory, right? You can’t just look at the top of the iceberg and compare sensor to sensor on what this costs. A camera may be only a few hundred dollars while a LiDAR may be a few thousand plus software, et cetera.
But the underlying cost is beneath the iceberg, right? What’s it going to take to install seven to eight devices on this side versus one device? You look at labor; you look at the cost of conduit, cable, licensing, the maintenance required for that deployment. That’s when it becomes really cost effective: when you understand the complexity of installing legacy technology versus new technology in that area. Hence why Vancouver decided to start deploying. They’ve got over 28 sensors in one terminal, and they’re expanding to other terminals now. So there’s quite a bit of growth we’re doing with that airport, but we’re also currently deployed in over 22 international airports.
Now here’s another interesting one as well. Here in the States, in Florida, there are a lot of drawbridges that go up and down, up and down. And they’re susceptible to liability issues where people or vehicles may fall into the waterways, and unfortunately there have been fatalities, which is a horrible thing. So they did initial tests with our LiDAR solutions, using LiDAR on both sides of the bridges to basically track whether an object comes into the scene, in this case a person or a vehicle. If a person or a vehicle comes into the scene, hold the bridge from going up and notify the bridge tender in the kiosk: “Do not let the bridge up.” Which ultimately brings down the liability concerns they had in that area.
Now, having come out of that POC confidently, with very high success, they’re deploying these across several bridges in Florida. So when you look up at a drawbridge in Florida now, you’ll see our sensors deployed. That’s helping bring down the liability concerns and the potential for fatalities, or, God forbid, a vehicle falling into the waterway, which could happen quite a bit.
Christina Cardoza: Yeah, and I’m sure that not only benefits the operator who’s operating those drawbridges, but also the comfort of the people driving over them. My husband absolutely hates driving over bridges; it’s one of his biggest fears. So I’ll have to let him know next time we’re in Florida that he has nothing to worry about. There’s 3D LiDAR, and I’ll explain all that. I’ll have him listen to this podcast on the—
Gerald Becker: For sure, for sure.
Christina Cardoza: —drive over there. But I’m curious, because you mentioned this whole ecosystem of partners that you’re able to work with to do all of this. When you’re talking about some of these examples (and I should mention that insight.tech Talk and insight.tech as a whole are sponsored by Intel), I’m curious how partnerships, especially the Intel partnership and Intel technology, help you be successful in these use cases and customer examples?
Gerald Becker: Spot on. So let me start off by saying this: unlike the herd of LiDAR companies that are heavily focused on GPU processing, with a ton of data they need to process, we’re a little bit different. Our sensors are purpose built for flow-management and security applications. They don’t need to go into the fender of a vehicle, shoot tons of lasers all over the place, and push a ton of data through the pipe in terms of throughput requirements. Our sensors are purpose built, which means we have the best angular resolution for capturing objects within the space. But ultimately we have a CPU-based architecture, which means it’s more cost effective and highly scalable. And as we align with Intel, we provide a best-of-breed solution for cost, accuracy, and deployment capabilities in the space.
That’s where we stand apart from a lot of the other Tom, Dick, and Harrys in LiDAR: it really is a solution you can take off the shelf now and deploy. There is no custom integration you’re going to need to work on for six months to a year to get it where you need it. As I explained earlier, there are four ways to work with us: at the camera level, at the VMS level, through our API, or through a third-party GPIO or Ethernet box.
And then with our partnership with Intel we’re finding new use cases on a daily basis. I just finished a call with the retail team literally 30 minutes ago where we were exploring brick-and-mortar and warehouse automation, places where we could provide 3D sensing beyond the traditional way of looking at those types of spaces with other sensors. So there’s so much to unfold there, and even more so, the partnership with Intel is valuable for us as we continue to scale and grow in this space.
Christina Cardoza: That’s really exciting, especially with all these different industries you’ve been talking about. We’ve been writing on insight.tech a lot about how they’re using computer vision, AI, and other automated technologies to improve their operations, efficiencies, and workflows, but I’m excited to see how 3D LiDAR comes into the fold and transforms these industries even further.
So I’m curious, since we talked at the beginning about how we’ve really only hit the beginning of the use cases and where we could go with this, how do you anticipate this space evolving? Are there any emerging trends or technologies coming out that you’re excited about?
Gerald Becker: There are quite a few use cases we’ve already tapped, but there’s so much more still to be explored, right? At the very beginning I talked a little bit about orchestration, where we’re able to marry with multiple sensors to create different outcomes. That’s going to continue to grow and expand with additional sensor integrations. So we integrate with license plate recognition: if there’s a hit, boom, we can then continue to track that vehicle within a parking lot.
But then there’s the advent of AI: what’s going on with large language models and all the other stuff that’s coming out. And then cloud, right? There’s just so much there that hasn’t been touched. On the AI side there’s a ton of work being done right now in computer vision: understanding much more about what’s being captured within the scene, understanding more generalities that can create different outcomes and tell a different story that ultimately gets you to the end result. Is it a good guy? Is it a bad guy? Is it a good workflow or is it not?
I think there’s so much more that can be done with LiDAR as we marry it with other AI technologies that will provide additional outcomes that just aren’t being achieved yet. So we’re still in very early stages, I would say, for LiDAR in the AI arena. But as it pertains to a lot of the physical-security applications and the BI stuff, it’s already been proven and deployed globally with quite a few different customers around the world. So, definitely excited about that, but there’s just so much more to peel back in terms of what we can do with cloud and AI. It’s really just a massive opportunity in this space.
Christina Cardoza: Yeah, I’m excited to see where else this goes, and I encourage all of our listeners to follow along as Quanergy leads this space, to see what else you come up with and how else you’re transforming our industries.
Before we go, Gerald, is there anything else that you wanted to add? Any final thoughts or key takeaways you wanted to leave our listeners with?
Gerald Becker: I’ve always been the kind of guy who adopts new platforms only once I hear from other people; I’ll be the last one to create a new social media account, and I’ll wait to see what everyone thinks, and stuff like that. Similarly, with LiDAR, some people may be a little nervous about adopting new technology, even more so going with something out of their comfort zone. But I think now, more so than at any other time, is the time to start testing.
We’re past that early, kick-the-tires phase. There are so many deployments, so many reference accounts, so many people now talking about the value: how this has improved their workflows, provided additional value, decreased their false alarms, and increased their operational effectiveness.
I think now more than ever is the time to act and start testing, start asking the questions: What can LiDAR do for me that I haven’t been able to do before? How can I use LiDAR in my current operations or my current deployments to see what I’ve just never been able to see with these other technologies? And look at your existing use cases or your existing business cases and ask: if I had depth, if I had volume, if I had centimeter-level accuracy, how could that improve my day-to-day workflow, my job, and provide more value to the organization as a whole?
So I would say, if that’s where you’re at now, reach out to me. You can find me on LinkedIn, Gerald Becker, or reach out to me directly by email, gerald.becker@quanergy.com. I’d love to have a chat with you. Even if it’s a 10- or 15-minute conversation, I’m sure it will lead to a lot more fruitful discussion after that.
Christina Cardoza: Yeah, absolutely. And we’ll make sure to link out to your LinkedIn and the company’s accounts, so that if anybody listening wants to get in touch or learn more about the 3D LiDAR space, we’ll make it easy for you to access.
So, just want to thank you again, Gerald, for joining us today. And thank you to our listeners. Until next time, this has been the “insight.tech Talk.”
The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.
This transcript was edited by Erin Noble, copy editor.