Hot AI Trends for 2021
2020 was an eventful year—and AI played a major role. Whether guarding against overcrowding or helping factories ramp up mask production, AI truly showed its value.
So what’s next for AI and its cousins, deep learning (DL) and machine learning (ML)? We put that question to Ray Lo, an OpenVINO evangelist at Intel. Join us for a lively discussion of the state of the industry and the big trends ahead in 2021. We explore:
- Why AI applications like natural language processing (NLP) will be hot in 2021
- How developers can strengthen their skills in AI, ML, and DL
- How to create ethical AI applications
Transcript
Ray Lo: I always find people are too ambitious about AI. That’s the pitfall I see. I come from an engineering background. We have to be realistic about exactly what this can do and what it’s good at.
Kenton Williston: That was Ray Lo from Intel. And I’m your host, Kenton Williston, the editor-in-chief of insight dot tech. Every episode I talk to a different expert about the latest ideas and trends that are pushing IoT innovation forward. Today’s show is a look back at the ways AI changed in 2020, and a look forward to what’s ahead in 2021. There’s a lot to talk about, so let’s get to it!
So, Ray, I just want to welcome you to the show. Could you tell me a little bit about who you are and what you do at Intel?
Ray Lo: Great. Yes. Hi Kenton. So, my name is Raymond. I’m an Intel software evangelist for OpenVINO. OpenVINO stands for Open Visual Inference and Neural network Optimization. It’s a big name, but what it means is: when you have an Intel CPU and you want to run the fastest possible neural network on it, you use this toolkit called OpenVINO. And that’s what I do. I’ve been spreading that news to many people at Intel.
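[Editor’s note: For readers who want a concrete picture of what Ray describes, here is a minimal, hypothetical sketch of loading a model and running inference with the 2021-era OpenVINO Python Inference Engine API. The model and image file names are placeholders, not a specific Intel sample.]

```python
# Hypothetical sketch: run a model in OpenVINO IR format on an Intel CPU.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR files
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Resize and reorder an image to the NCHW layout the network expects.
n, c, h, w = net.input_info[input_name].input_data.shape
image = cv2.imread("input.jpg")  # placeholder image
blob = cv2.resize(image, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]

result = exec_net.infer(inputs={input_name: blob})
print(result[output_name].shape)
```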
Kenton Williston: Very cool. And how long have you been in this role?
Ray Lo: Pretty recently. I joined about… Let’s see. Hold on. I’m doing my finger math. Oh, four months ago. And I’ve been giving talks at Intel and all that.
Kenton Williston: Well, given your background, one of the first things I wanted to ask you is what exactly AI is, and to put some context around that. On the insight.tech program we’ve been doing a lot of work around the OpenVINO platform and its applications in everything from machine vision to predictive analytics. So, there’s a pretty broad scope of things people mean when they say AI. And of course, there are related terms, deep learning and machine learning. Oftentimes all these things get conflated, and it’s a little confusing as to which is which. So, do you want to give us your primer on what in the world AI is and how it differs from these other ideas?
Ray Lo: Sure. Maybe I’ll give one line about my background to put my perspective in context. I did my computer science degree in Toronto, and then I did my PhD there as well, in computer engineering. So, my thinking starts with what AI stands for: artificial intelligence, right? We always think about emulating or simulating, making a brain that behaves like a human: things like prediction, object recognition, and all that.
But what I always see people confuse is that there’s a part called machine learning and a part called deep learning, and those three categories get thought about in a mixed way. I think of AI as a big umbrella that covers all of it. Within that, you have machine learning. And within machine learning, you have deep learning, the neural network category, which has become a lot more popular recently because today’s computation power allows us to do it. Back when I was starting school about 15 years ago, this kind of math might take a year before the training was finished. But today we talk about weeks, maybe days. And if you’re very smart about it, you can get some results done in a couple of hours.
Kenton Williston: Yeah. It’s amazing how much progress has been made, which leads me to my next question. In this podcast I want to talk about the trends you’ve been seeing in 2020. So, an open-ended question for you: beyond the amazing, continuing increase in processing power, what do you think some of the biggest trends of the year were? Not only in AI, but in deep learning, machine learning, and all the related areas.
Ray Lo: Right. In the last year you heard a lot of podcasting about the computer vision side, which is my background too. But I’ve started to see a trend beyond vision. We’re going to see applications like NLP, natural language processing, which has matured a lot recently. For example, one trend I saw was something called BERT, a new model people created for doing natural language processing. And the results are astonishing. What they can do is optimize and fine-tune it for tasks like SQuAD, the Stanford Question Answering Dataset. It can literally answer questions better than humans. If I took that test today, I don’t think I could win. It’s things like that.
So, there are certain tasks that machines can now do so fast and so much better than humans. One trend I saw is in call centers. Especially this year, such a crazy year, we’ve seen a lot of disasters, a lot of bad things happen. Call centers are now automated a lot better than before. They have machine learning behind them to answer the call, translate what you said, direct you into the right system, or sometimes even answer questions for you.
Kenton Williston: Yeah. For sure. Happily I haven’t been in an emergency situation where I needed a quick response from a call center, but even in my own daily experience, I’ve got an iPhone and Apple Watch and all the rest. When Siri first came out, it was really a joke. You could ask it to set a timer, maybe, and maybe it would get that right, but it was pretty bad. And now it has gotten to be very perceptive.
Like, the other day, I happened to be reading my daughter a book that talked about the design called the fleur de lis. I tried to make a drawing of it to show her what it was, and I thought, “Well, this looks terrible. Let me just see if Siri can help me out.” So, I raised my wrist to my mouth and asked Siri to show me a fleur de lis, and sure enough there was an image of a fleur de lis on my watch. It’s gotten to be very good at answering broad questions. Same for Alexa and all the rest of those, too. They’re much, much better than they used to be, even just, say, a year ago.
Ray Lo: Exactly. I even forget how to set an alarm manually sometimes. I just ask Google or Alexa instead of digging into the menus and all that. So, a lot of tasks like the ones we talked about have become a lot more natural for humans. And behind the scenes, you can see all the data centers crunching all this data for us and doing the heavy lifting. That’s why I find this year really cool.
Kenton Williston: Yeah. For sure. You mentioned the difficulty we’ve had this year, and everything that’s happened around the pandemic has, of course, really dominated not just the tech industry but the world at large. But as difficult as the situation has been, there’s also a lot to be excited about in terms of how all these smarter technologies helped the world respond to COVID.
Ray Lo: That’s correct. I was at Google before Intel, so I was looking at some of the case studies they did on how they scaled call centers. That was really lifesaving, because with all those emergencies they take millions of calls in a day. Where do you get millions of people, right? Especially people at that level of stress, who just want a simple answer. I think that’s really the future. We always worry about the jobs, but these are jobs we couldn’t even scale humans to, and sometimes they’re essential for us. So, I find that something very new to us.
Kenton Williston: Yeah. For sure. I think you’re making a really good point there about the longstanding concern that the robots are going to come and take all of our jobs, and there is some merit to that. Certainly automation has changed the job landscape broadly speaking, but I think AI is really poised to do jobs that just weren’t possible before, and also to free people up from the really ugly, bad jobs to do things that are more pleasant. One example, which again touches on the pandemic situation we’ve been in, is the many different kinds of machine vision applications that do things like scan crowds for fevers or…
One nice one that I was just reading about is a simple digital display that’s paired with a vision system to tell you, “Hey, there are X number of people in the store.” So, it’s a really non-invasive, non-confrontational way of telling you whether you should feel safe entering the store, and whether entering meets the occupancy regulations. And that’s a job that would be pretty unpleasant for a human being to do.
Ray Lo: Right. I work with partners a lot at Intel. For example, I talk with ADLINK. They released tools for logistics warehouses. It’s Christmastime, right? We’re getting a lot of gifts, millions and millions, maybe billions of packages being sent around the world. They scan and double-check those packages for you before you receive them. And I think reducing the error rate by even 0.1% is such a huge deal. You can imagine all the gas and energy you waste delivering something wrong. Having those checks in place is such a great thing that’s happening in the industry.
And the same goes for inspections, right? Safety. For example, it will do the check for you and flag anything defective. For a car, for example, it can inspect the wheels. Those are lifesaving to me, and I find that’s the type of job where, even if you give it to a human, you can’t afford that 0.1% error.
Kenton Williston: Yeah. Absolutely. It goes back to the point you were making about how, in many applications, machines are doing much better than a human could ever hope to do. It’s not even a question of whether you’re replacing a human. It’s something a human simply could not do.
I’m curious though. We’ve talked about a couple of key areas. I think one of the key areas in 2020 for sure was machine vision. There was a lot going on there, whether in an industrial setting like you were describing, inspecting packages and parts, or in the more public sphere, doing things like telling whether people are wearing masks or not. And of course, you talked a little bit about language processing, which I think has also been really, really important. What other areas have you seen some important movement in?
Ray Lo: For AI, right? I’ll build some suspense, because I see a lot of things happening in the industry. I want to talk about AR as well, something I personally worked on. Before Intel, I was the CTO of a company building augmented reality headsets. More recently, you may have seen various companies like Facebook releasing augmented reality headsets. Behind the scenes, we’re starting to realize a lot of machine learning will go into things like recognizing places and landmarks. A lot of the decisions will be made for you; it’s not going to be a human behind the scenes saying, “Okay. Trigger this. Show you this.” So, companies are starting to look into a lot of those efforts, and it’s quite amazing.
For example, six or seven years ago I worked on SLAM tracking… And once you have the landmarks, I always had to ask, “Now what?”
Kenton Williston: Right. Now what? Right.
Ray Lo: “I have a landmark. Now what?” Right? So often the next layer is: how do you take this data and generalize it, or create methodologies, so that people can utilize it? One way I’ve seen is: now you have a scene, you recognize the chair, you recognize the table, and you build scene information you can use for content. I had an application at one point that generated a workspace. When it sees a desk and a chair, it automatically generates a virtual screen, and it recognizes everything, the whole setup, as if it’s in the real world. I find that super cool because it’s like a sci-fi movie, but I worked on that research for many years, and it’s fascinating.
Kenton Williston: Absolutely. And it strikes me that that’s also a really great example of the different kinds of machine intelligence, because there are, I’m sure, elements of deep learning and machine learning in recognizing the scene, and then some AI to decide, “Well, what should I do now that I recognize the scene?” I think it really illustrates how all these different concepts play together.
Ray Lo: Mm-hmm. Yeah, if you ask me, I never say it’s one application. I see a set of tools that work together and turn into a new experience for humans. It’s like today: you go shopping, right? You may pull up your phone to scan the barcode and look for the discount, et cetera. But think about automating the entire process. You just walk into the store, pick up the best thing, and the coupon is automatically applied. You focus on the shopping instead of going through that painful experience. That’s what we’ve been seeing in retail. A lot of automation is happening, and behind the scenes it’s really machine learning driven. Some of it, of course, is the tracking and helping people that we’ve talked about.
Kenton Williston: Absolutely. I did a podcast series recently talking about retail, and there are so many interesting examples there. One that really made me laugh was an application where they used RFID to analyze theft that was happening in a store. They discovered that one of their biggest sources of loss was actually people taking products from one floor, then going up to another floor and saying, “Oh, I need to return this. And I don’t happen to have the receipt,” et cetera. So, there was theft happening without anything actually leaving the facility. Lots of interesting applications, for sure.
And that makes me think, with all these concepts becoming so prevalent across pretty much every industry, would you say that getting skilled in machine learning and AI is becoming a real requirement for developers?
Ray Lo: I will say… okay, I feel like today, machine learning and AI are like calculus and linear algebra were back when I was studying: fundamentals. If you don’t use those tools, you may be missing out on a lot of potential applications. Of course, you don’t have to use them for everything. If you just want to print “Hello World” on the screen, you don’t have to pull out your machine learning textbook tonight. That’s not what it’s designed for; it’s just doing something simple, right? But I see that momentum a lot.
I did some research on the machine learning trend, I think published by Stanford, too. In the last 10 years, the growth was close to exponential. The number of conference attendees has been doubling every year, and so have the publications in Europe, China, and America, and the patents filed related to machine learning and deep learning. It’s just like back when we talked about the internet; it’s pretty much happening again. Today, if your phone doesn’t have a camera or an internet connection, it’s like it doesn’t work. That’s how I feel now: if you try to get into this field today without some fundamentals, it may block your creativity.
Kenton Williston: Yeah. That makes sense to me. But I think, on the other side, when someone who’s new to this field starts looking at diagrams of convolutional neural networks and things like that, it can be a little overwhelming.
Ray Lo: Hmm. That’s exactly why we have OpenVINO. I’m not trying to sell it, but… well, that’s why we have OpenVINO: to encapsulate a lot of the optimization steps, because I don’t think you want to get a whole PhD on that problem. It’s really hard. Just getting the quantization problem right is very difficult. That’s why at Intel, on OpenVINO, we have a lot of engineers focused on those big problems, like how to get the performance I talked about, or just putting the tooling together so that you don’t have to learn everything. You do still have to know, fundamentally, what the math is and what it does.
But that’s from the deployment and development perspective, not the engineering perspective. When I think about development, it’s like: copy and paste code, make something quick and easy first, and prove your concept. A prototype. Today we’ve had a couple of hackathons where, in a week, people built something I spent six years on in my PhD. I was like, “Oh, that’s not fair.” But that’s the reality, right? That’s what’s happening.
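[Editor’s note: Ray mentions quantization as one of the hard problems OpenVINO’s engineers handle for you. As a rough, generic illustration of the idea (converting a model’s weights to lower-precision integers so it runs smaller and faster, at a small cost in accuracy), here is a minimal PyTorch dynamic-quantization sketch. This is not the OpenVINO workflow itself, and the tiny model is invented for the example.]

```python
# Generic illustration of quantization (not OpenVINO's tooling):
# convert the weights of the Linear layers to 8-bit integers.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface as the original model
```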
Kenton Williston: Well, that seems to me, broadly speaking, how AI and deep learning platforms are evolving in general. Like you said, even just a couple of years ago, developing some of these applications would be a huge amount of work. Now there are so many platforms that offer pre-packaged models, or even things like the SLAM you were talking about: you get a little developer kit that has a mobile robot, the ROS operating system, and SLAM already built in. It gives you a tremendously advanced foundation to start from. You still need to do the work to implement whatever your specific application requires, but you don’t have to get bogged down in all the fundamentals, as it were.
Ray Lo: Exactly. SLAM took me millions of dollars to build. It was no joke. On my journey I had to find a professor, then set up a collaboration, then sign a contract from that collaboration, then get the source code, maintain the source code, and debug it. We went back and forth for half a year. That was the reality I was facing. But today, you download a package that’s been tested and calibrated, hardware and software all working together. That’s the new reality we’re facing.
Kenton Williston: Yeah. Exactly. Much better.
Ray Lo: Much better. I’m so happy.
Kenton Williston: So, if you are a developer looking to get into this field, what would you suggest is a way to get started?
Ray Lo: I would definitely recommend people start by looking at existing tools, because we’ve put a lot of time and effort into them; not only OpenVINO, but TensorFlow and all the open tools on the market. Get familiar with the framework, and then get an understanding of the mathematics. I still think, math-wise, you have to go for it. Even if you don’t have the math background, there are a lot of good lessons on Coursera, and even OpenVINO has open courses you can take to get that understanding. Once you have that understanding, you see the possibilities.
Then you get into the nitty-gritty details. We have a lot of demo code you can try, so try the demos. I love demos because they open up the imagination, right? When I work with developers, surprisingly many of them students from India, they come up with new ideas that I never even thought about. I ask, “How did you think of those?” “Oh, I tried this demo, and this demo, and this demo. If I combine all these demos together, I get a new demo.” I was like, “Wow. That reminds me of Legos.” Right?
Kenton Williston: Yeah.
Ray Lo: Yeah. So, having that understanding, having that flexibility, having things working in a modular way, and putting them together is the new trend. I think that’s where a lot of people should focus in the beginning. Don’t get bogged down in one technical detail now; instead, think bigger. See if you can solve a world problem. Once people understand it and love it, and you get a team together, the resources will come to you, because now you’re proving your point. I think it’s much better than before, when you’d be in research for four years on one particular small problem, and that’s it. That’s how I see it differently.
Kenton Williston: Yeah. One question that leads me to: you’re painting a picture here of almost a blue-sky environment where you can really be creative and put all kinds of new ideas together in ways people haven’t thought of. But obviously anything you’re doing has to fit within constraints, and not just the dollars: you’ve got power constraints, or some kind of rugged environment where you might have a thermal constraint or whatever. So, where do you see the state of the art in hardware now? I’m wondering in particular if there are advances that the broad developer audience might not know about that would raise the ceiling on what’s possible inside these constraints.
Ray Lo: That’s a very fundamental problem when I work with my partners, right? It comes up in every use case. Now I’d say the sky, or really space, is the limit, because we had one success story where someone put a Movidius VPU on a satellite. That has much, much harsher requirements than anything else, because beyond thermal they have to think about radiation. They’re going up to space. When we’re building products today, we have a lot more flexibility. Back then, you were constrained to an extremely power-hungry GPU, or maybe a CPU that wasn’t powerful enough or wasn’t optimized for the code, or you’d be stuck on something extremely low power and low performance, like a Raspberry Pi at one point. Now it’s a lot better.
But today we have a lot of hardware accelerator platforms available. Just recently, OpenCV released a project called OAK, the OpenCV AI Kit. Now you have a camera with a built-in Intel hardware accelerator processor in it, and it changes the landscape of how we think about processing. We always think about processing as a device: a processor, maybe an extra processor like a GPU, maybe something on top, and then a cable that connects everything. But with these newer approaches, everything is in one chip. You have the Intel chip next to the image sensor, and you may even have a slightly underpowered CPU there just to do some easy crunching, and you can connect that to a host for the heavier lifting. That’s how I see the hardware architecture converging a little bit. Back then it was duct tape, I call it duct tape. You just had something hanging off a USB cable.
Kenton Williston: Yes.
Ray Lo: A USB 3.0 cable. It was a horrible thing to me. Latency was a huge problem, and you had so many issues with powering. Today you see a lot more condensed into one single element, and I see that as one of the next things.
Kenton Williston: Yeah. And I think it’s fair to say that basically any hardware you look at these days is starting to acquire some AI capabilities. Like the most recently released Intel Atom processors, which you wouldn’t really think of as super high-performance processors, have some AI acceleration built into them. So, even at that level, there’s a lot you can do.
Ray Lo: Exactly. That’s the one that went on the satellite.
Kenton Williston: Ah okay.
Ray Lo: There we go. You picked the right one. Given all these choices and platforms, now even on a space program people are able to think, “Okay, now I have a little more compute available. What can I do?” Because all they have is a solar panel. And the problem is not just power, right? They have bandwidth limits. It takes so much time to transmit one image, so every image is so important. But they can crunch a couple of images onboard, because they have enough power from the sun. So, they actually process the image and make sure it’s not a garbage image, that it’s a nice satellite image, right? When you take a picture of a cloud, what do you see? Cloud, right? You want to see houses. You want to see landscape.
Kenton Williston: Yeah.
Ray Lo: Now, because of that onboard processing, they’ve saved… I don’t remember the exact number, but it changed the whole dynamic of the efficiency there. I think that’s the innovation people are working on now: readjusting the problem statement.
Kenton Williston: Yeah. Exactly. I’m glad you said that, because that’s exactly what I was thinking: it’s not just, “Oh, you can do all these new things,” it’s that you can come at the problem from a totally different angle than you would have before. So, it’s good to rethink your architecture. A very simplistic example would be the way all this machine learning and deep learning is very often split up: train in a big, power-hungry data center or cloud, and then deploy the inferencing at the edge on something really, really lightweight. Put the right processing, the right smarts, in the right place.
And to your point, all the other things you can do too. What can you do to rethink where the data flows? So, maybe you do processing in a location that previously would have just been a transmitter of data, et cetera.
Ray Lo: Exactly. And people still get confused between training and deployment. They always think AI must be extremely power hungry. Yes, the training phase is, because you’re trying to teach the neural network. But once you have the network ready, deployment is where we have to think twice: deployment is a different problem than training. Of course, there are types of machine learning problems that may require real-time training. But for most detection tasks, like what we’ve talked about with detecting the clouds, once you train it, the neural network will be able to detect those very quickly, and we can deploy it very differently.
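[Editor’s note: Here is a minimal sketch of the “train heavy, deploy light” split discussed above: a model is trained (or downloaded pre-trained) in a full framework, then exported to a portable format such as ONNX and handed to a lightweight inference runtime at the edge. The model choice and file name are illustrative assumptions.]

```python
# Export a (pre-)trained PyTorch model to ONNX for inference-only deployment.
import torch
import torchvision.models as models

model = models.mobilenet_v2(pretrained=True)  # stand-in for a trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example NCHW input
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["output"],
)
# The resulting .onnx file can be loaded by an edge inference runtime
# (for example OpenVINO or ONNX Runtime) with no training machinery at all.
```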
Kenton Williston: I do think it’s useful to explore where the biggest challenges lie in AI and machine learning. What are some of the common pitfalls, and what can developers do to avoid them?
Ray Lo: Yeah. I always find people are too ambitious about AI. That’s the pitfall I see. I come from an engineering background, so we have to be realistic about exactly what this can do and what it’s good at. I ran a challenge on image classification and gave it to many candidates. I said, “Okay, run this code, put in your own image, and see what it can do.” Even at an amazing 90-something percent, or 80%, accuracy, that 20% error is hilarious. So, if you think about deploying a tool for a use case, you have to really understand the use case and align it with your expectations on accuracy. Is 80% acceptable? A lot of times it’s a no, right? It’s amazing, but it’s a no. A big no. And people have to learn that early, before they deploy.
And why it’s funny: we did that challenge and people put a Tesla in it. It’s really funny. The Cybertruck is not part of the dataset, so it came up with the answer that it’s a jeep plus a beach wagon. I was like, “That’s correct, but I don’t think the marketing team will appreciate that.” So, think about things like that. You have to really learn what the tool is doing and make sure it aligns with your use case.
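[Editor’s note: A hedged sketch of the kind of exercise Ray describes: run a pretrained ImageNet classifier on your own photo and inspect the top predictions and their confidence, rather than trusting the top-1 label blindly. The image path is a placeholder, and the model here (a torchvision ResNet) is a generic stand-in, not the code used in Ray’s challenge.]

```python
# Run a pretrained classifier and print the top-5 class indices with confidence.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(pretrained=True)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("my_photo.jpg")).unsqueeze(0)  # placeholder photo
with torch.no_grad():
    probs = torch.softmax(model(image)[0], dim=0)

top5 = torch.topk(probs, 5)
for score, idx in zip(top5.values, top5.indices):
    print(f"class {idx.item()}: {score.item():.1%}")
```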
Kenton Williston: Yeah. Absolutely. I’ve seen some pretty funny examples of AI trying to classify whether what it was looking at was a muffin or a dog. They look so similar.
Ray Lo: Exactly.
Kenton Williston: And that’s pretty funny. But of course, if you’re doing something like…
Ray Lo: Medical.
Kenton Williston: … or predicting when a very expensive machine is going to fail, or anything that has those kinds of ethical implications, all of a sudden it’s much less funny. You need to be very, very thoughtful. And I think there were some big lessons learned this year about being ethical with AI, conversations that really needed to happen.
Ray Lo: At Intel, we actually formed a group just on that topic. I think it’s extremely important to understand what you’re doing: does it hurt people? Does it cause any damage? Is it ethical, right? That term is so important. It’s like when you run sudo on Linux, right? With great power comes, we think, great responsibility. Yes, it sounds a little old, but it’s happening. That’s something I feel we all have to look into very carefully.
Especially with medical. Think about this: you’re doing diagnosis, right? Is that 1% error okay? Is it ethical to say, “I can accept a 1% error”? Is it going to do something harmful to people? Those kinds of applications have to go through a lot of rigorous testing and approval to make sure things are right.
Kenton Williston: For sure. For sure. Going back to an earlier point you made, there are some things machines can now do far better than humans, but there are definitely times when you really need a human in the loop.
Ray Lo: Mm-hmm.
Kenton Williston: And that’s true even in the training part. I just mentioned, for example, monitoring expensive machinery. It would be unwise for a developer who’s not familiar with that equipment to think they could just go out, collect some data, and interpret it. You really need the human being who’s been operating that machine to help you understand what the data really means.
Ray Lo: Mm-hmm. This is very important, because we’ve seen data biases create problems all the way down the line: systems that become racist or manipulative. Bad things can happen to a system when it’s not carefully reviewed and monitored.
That’s one thing I think we’ve got to be really careful about. And I think as long as everyone has a good heart, it’ll be okay.
Kenton Williston: Right. Absolutely. So, zooming back out to the big picture, I wanted to recap what we’ve seen happen in 2020. So far we’ve talked about the advancement of platforms on both the hardware side and the software development side, the ways people are coming at problems differently, and how important this has all been to the pandemic response. Any other big-picture trends you’re keeping your eye on?
Ray Lo: It’s open source. I think that’s one thing we always overlook. The whole OpenVINO effort, all of the TensorFlow effort, so much of the AI effort is open source. That was not very common back then with a lot of the corporations I worked with in the past. Oftentimes you’d have a one-off solution, you’d have to pay a license fee, or you wouldn’t even see anything, and there was no way to adopt it and evolve it with the rest of the community.
So, open source and community. That’s why I talk about the “Open” in OpenVINO. I find it very empowering, because I’ve seen a lot of use cases done by the community that I’d never have seen otherwise. For example, OpenCV is our partner, and they have their own open community. Within that community, they take both tools and create new tools. That’s something that will keep happening over the next two to three years: we’ll see those open source tools mature and get to the point where they become the new standard. Open standards and open source for AI are the new big thing for me.
Kenton Williston: So, what do you think that will enable? Is it just a matter of increasing the ability to come up with these creative ideas and put things together in new ways? Or do you foresee something more beyond that?
Ray Lo: I see it as two or three phases, right? It’s like Linux: in the beginning it was, “Oh, it’s a small community,” but eventually it became the standard for all the servers we run today. It became a thing, right? The gold standard. I see that happening in many of these areas. It will change the way we approach things, and because of that openness, advancement happens at exponential speed, because all those blockers are gone. That’s why I care about it and am so interested in it. It’s literally viral: one to two, two to three, and so on. So much faster than before.
Kenton Williston: Yeah. I agree with that, because again, you want to be focusing on innovative ways to tackle a problem, not the basics of the technology. More and more gets contributed to this community. As a very simple example, just think of all the pre-trained models that are now out there. Boy, that gives you so much faster a start and makes it so much easier to focus on whatever is unique about what you’re doing.
Ray Lo: That’s correct. The pre-trained models especially are a big deal, because not everyone has a powerful GPU and can train everything from scratch. A lot of people are interested in the outcome, the use cases. For example, with BERT, I don’t have the dataset, I don’t have all of that, but I can turn it into a cooking-recipe assistant, which I built as a demo. Now, instead of reading the recipe, you can ask questions about it, like how many eggs you need, things like that. And you can run that in real time. That’s very different, because before, when I thought about that problem, I thought, “Oh dear. I’d be stuck collecting all the recipes in the world. I’d have to think about a language model. I’d have to think about who to hire. And I don’t even have a dollar in the bank yet.” So, that’s a huge difference.
Kenton Williston: So, just to make sure we haven’t missed anything important, could you tell me what BERT is? That’s B-E-R-T.
Ray Lo: It’s a language model published by Google. It stands for Bidirectional Encoder Representations from Transformers. Back then, when we did machine learning for natural language processing, to understand what the language means and all that, the effort was often limited to a single task the model could do: find the noun, find the verb, things like that. This one, published by Google, has a feature we all love called fine-tuning. You can fine-tune it to do something specific, like the question answering I talked about, without retraining the entire model.
So, you can think of it as a new language processing model that Google came up with together with a lot of researchers. It’s really popular now because it’s a new gold standard. Because of it, when you do a Google search and all that, you get much better accuracy. You wonder, “Why is this so good now?” Behind the scenes, it’s actually one of the models they use.
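[Editor’s note: A minimal sketch of the kind of recipe question-answering demo Ray mentions, using a BERT-style model fine-tuned for question answering via the Hugging Face Transformers pipeline. Ray’s actual demo was built with OpenVINO; this generic stand-in just shows the fine-tuned-QA idea, and the recipe text is invented.]

```python
# Ask questions against a recipe using a question-answering pipeline.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-style model

recipe = (
    "Beat 3 eggs with a pinch of salt. Melt butter in a pan over medium "
    "heat, pour in the eggs, and cook for two minutes before folding."
)

result = qa(question="How many eggs do I need?", context=recipe)
print(result["answer"], round(result["score"], 3))
```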
Kenton Williston: Got it. Makes sense. So, before we go, I want to get your thoughts on the coming year. This episode will go live in January, so we’ll take a little risk here and see whether, by the time folks are listening, any of our predictions are already coming to pass. What are some of the main trends you foresee happening in this domain in 2021?
Ray Lo: Mm-hmm. So, to summarize: I think NLP will be a big thing in the next couple of years. It will change the way we interact with devices. We saw early signs of it, but now, with call centers and the rest, things are really happening.
The deployment of IoT will kick in very soon. You’ll see it in all the warehouses, all that automation. You’ll see machine learning in every bit of our industry.
And last but not least, I think the growing trend of augmented reality and virtual reality. We talk about it a lot. It seemed like hype back then, but today, when I look at the maturity of the technology, the integration of AI with AR and VR will happen. I crave good content all the time from virtual reality and augmented reality headsets, and I think once we put in those elements we talked about, recognizing things and creating content that’s relevant to your life and your surroundings, it’ll be a killer app for many of the things we’re doing today.
Kenton Williston: Nice. Well, with that, let me just say thank you so much for joining us today. Really enjoyed this conversation.
Ray Lo: Thank you. And I have as well.
Kenton Williston: And thanks to our listeners for joining us. If you enjoyed this podcast, check out insight.tech for more innovative IoT ideas. This has been the IoT Chat podcast. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.
The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.