
Evangelizing AI: The Key to Accelerating Developers’ Success


AI is increasingly built into mission-critical applications: defect detection in manufacturing, customer-behavior analysis in retail, even traffic detection in smart cities. AI powers it all. But making these capabilities possible, from training AI models to translating them into business value, can take a lot of time and effort.

Fortunately for developers, year after year Intel keeps making advances that render AI more accessible. This year it celebrates the fifth anniversary of the OpenVINO toolkit, as well as the release of OpenVINO 2023.0. And it just released the Intel® Geti solution, specifically designed to make it easier for developers to work with the business side of the equation. Yury Gorbachev, OpenVINO Architect at Intel, and Raymond Lo, AI Software Evangelist at Intel, tell us all about it (Video 1).

Video 1. Intel’s Yury Gorbachev and Raymond Lo discuss the evolution of AI and the role OpenVINO continues to play. (Source: insight.tech)

What recent trends have you seen in the progress of AI?

Yury Gorbachev: AI is mainstream now. Quite a lot of use cases are already being solved through it: customer monitoring, road monitoring, security, patient-health monitoring. All of those things are already in the main line.

But I think what we are seeing now in the past year is a dramatic change in how AI is perceived and what it is capable of solving. I'm talking about generative AI, and the popularity that we are seeing now with ChatGPT, Stable Diffusion, and all those models. We are seeing image generation. We are seeing video generation. We are seeing video enhancements. We are seeing text generation. All of those things are evolving very rapidly right now. If we look back 10 years or so, there was an explosion in the adoption of deep learning; now the same thing is happening with generative AI.

What can you tell us about developer advancements?

Raymond Lo: To work with developers, I have to be a developer myself. Maybe 10, 12 years ago I built my first neural network with my team. I was trying to figure out how to track a fingertip—just making sure that my camera could understand what I was doing in front of it. It took us three months just to understand how to train the first model. Today, if I give it to Yury, two days later maybe it’s all done. But at that time, building just a very simple neural network took me forever.

Of course, it worked in the end; I learned how it all works. But after many years of evolution, the frameworks are now available; TensorFlow and PyTorch are so much easier to use. Back then I was doing all the computation in my own C++ program. Pretty hard core, right? Today developers have OpenVINO.

Today when I talk to developers in the community, it's OpenML, GPT, everything is in there. You don't have to worry as much, because when you make a mistake, guess what? Ba boom: it won't run anymore, or it'll give you the wrong results. What is valuable today is that I have a set of tools and resources, so that when people ask me, I can give them a quick and validated answer. Today, at Intel, we are giving people this validated tool.

How do you work with developers in building these types of solutions?

Raymond Lo: As I speak with young developers, I listen, right? “What do you need to make something run the way that you need it to?” Let’s say, hypothetically speaking, someone is trying to put a camera setup in a shopping mall. They need to think about privacy; they need to think about heat, if they’re running it on a very power-hungry device and they want to hide it. Some use cases require a very unique system. The users want it to be in a factory and they want it to be on the edge. They don’t want to upload this data; they want to make sure everything happens on-site.

So we think about portfolio, and that's what Intel has. The more we work with our customers, the more we try to collect these kinds of use cases together and create packages of solutions for them. And you don't need ultra-expensive supercomputers to do inference.

Yury Gorbachev: I think you’re totally right. The most undervalued platform, I would say, is something that you have on your desk. Most developers actually use laptops and desktops that are powered by Intel. And OpenVINO is capable of running on them and delivering quite good AI performance for the scenarios we are talking about. You don’t need a data center to process your video, to perform style transfer, to detect vehicles, to detect people. That’s something we’ve been trying to show our customers and developers for years.

From the business standpoint, the exact same platform runs in the cameras and the video-processing devices and things like that. And it all starts with the very basic laptops that each and every developer has.
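
As a rough illustration of that point, here is a minimal sketch of what running a detection model locally with OpenVINO’s Python API can look like. The model file name, input shape, and device choice below are placeholders for this example, not details from the interview.

```python
# Minimal sketch: running an OpenVINO model on an ordinary Intel laptop.
# "person-detection.xml" and the input shape are placeholders for any IR-format model.
import numpy as np
import openvino.runtime as ov

core = ov.Core()
print(core.available_devices)                 # e.g. ['CPU', 'GPU'] on a typical Intel laptop

model = core.read_model("person-detection.xml")
compiled = core.compile_model(model, "AUTO")  # let OpenVINO pick the best local device

frame = np.zeros((1, 3, 320, 544), dtype=np.float32)   # dummy input; real shape depends on the model
result = compiled(frame)[compiled.output(0)]
print(result.shape)
```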

“What we are seeing now in the past year is a dramatic change in how #AI is perceived and what it is capable of solving.” – Yury Gorbachev, @intel via @insightdottech

How have you seen OpenVINO advance over the past couple of years?

Yury Gorbachev: Originally we started by developing OpenCV, so we borrowed a lot from OpenCV’s paradigms and philosophy. With OpenCV we were dealing a lot with computer vision, and that’s why we initially focused on computer-vision use cases with OpenVINO. Then we started to develop this open-source toolkit to deploy AI models as well.

Then, as the years passed, we saw the growth of TensorFlow and the explosive rise of PyTorch, so we had to follow that trend. We’ve seen the evolution of scenarios like image classification, then object detection and segmentation. We initially made just the runtime; then we started working on optimization tools, and eventually we added training-time optimization tools.

So, initially we started with computer vision, but then a huge explosion happened in the NLP space, the text-processing space. We had to change quite a lot about how we handle inference in our APIs, and we changed a lot in our ecosystem to support those use cases. And now we are seeing the evolution of, as I mentioned, generative AI: image generation, video generation. So we adapt to those as well.

We work a lot with partners and across teams to power those technologies, so that we always have the best-performing framework on Intel. We were looking recently at how much we improved generation over generation, and it wasn’t 5% or 10%; sometimes it was two or three times better than the generation before.

Can you talk about how OpenVINO and Intel® Geti work together?

Raymond Lo: It’s really about having a problem statement that you want to solve. Geti fills in the training gap in between, where you can provide a set of data that you want the algorithm to recognize. It can be a defect; it can be a classification of an object. Today we provide that interface; we provide people the tool. And the tool has fine-tuning parameters, so you can really figure out how you want to train it.

You can even pair it with the data set, so that every time you train it, you can annotate more data. We call it an active-learning approach: after you give it enough examples, the AI will figure out the rest for you. So that’s what Geti is really about. Now you have a way to tackle this problem: getting a model that is deployable on OpenVINO.
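
Because Geti produces models that OpenVINO can deploy, the handoff at the edge looks roughly like the hypothetical sketch below, which uses the OpenVINO Python runtime. The file name, input shape, and preprocessing are assumptions for illustration; a real Geti export defines its own model files and metadata.

```python
# Minimal sketch: running a hypothetical Geti-exported model (OpenVINO IR) on-site, at the edge.
# "defect_detection.xml" and the input shape are placeholders.
import numpy as np
import openvino.runtime as ov

core = ov.Core()
compiled = core.compile_model("defect_detection.xml", "CPU")  # inference stays on the local device

frame = np.random.rand(1, 3, 512, 512).astype(np.float32)     # stand-in for a preprocessed camera frame
predictions = compiled(frame)[compiled.output(0)]
print(predictions.shape)
```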

What do you envision for the future of AI?

Yury Gorbachev: It’s hard to really predict what will happen in a year, what potential scenarios will be possible through AI. But one thing I can say for sure: I think we can be fully confident that all of those use cases we are seeing now with generative AI (image generation, video, text, chatbots, personal assistants, things like that) will be running on the edge at some point, mostly because there is a desire to have them on the edge.

There is a desire to, say, edit documents locally, or to have a conversation with your own personal assistant without sending your request to the cloud, to have a little bit of privacy. At the same time, you want to do this fast, and doing things on the edge is usually faster than doing them in the cloud. This is where OpenVINO will play a huge role, because we will be trying to power these things on a regular laptop.

Initially, that performance on laptops will not be enough. Obviously, at first there will be some trade-offs between optimization and the performance you can reach. But eventually the desire will be so high that laptops will have to adapt.

Raymond Lo: Like Yury says, it’s very hard to model something today because of the speed of change. But there’s something I can always model: anytime there’s a successful technology, there’s always an adoption curve, right? It’s called a bound-to-happen trend. “Bound to happen” means everyone will understand what it is. With the 2023 OpenVINO release we hit a million downloads. That is a very important number. It shows that the market is adopting this, rather than it being something that is nice to have but that no one revisits.

I can tell you, a year from today we will have better AI. 

What is significant about OpenVINO’s five-year anniversary and the latest release?

Yury Gorbachev: In this release there are continuous improvements in terms of performance. We are working on generative AI, improving generative-AI performance on multiple platforms. But most notably, we are starting to support dynamic shapes on GPU. We’ve done a lot of work to make it possible to run quite a lot of text-processing scenarios on the GPU, including integrated GPU and discrete GPU. We’re looking at capabilities like chat, and I think even those will be running on integrated GPU. There is still some work we need to do in terms of improving performance and things like that, but in general, things that were not entirely possible before will now be possible.
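
To make the dynamic-shapes point concrete, here is a minimal sketch, assuming a hypothetical BERT-style IR file and input names; a value of -1 marks a dimension as dynamic so batch size and sequence length can vary at runtime.

```python
# Minimal sketch: marking a text model's inputs as dynamic and compiling it for the GPU.
# "bert-like.xml" and the input names are placeholders; real models define their own inputs.
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("bert-like.xml")

# -1 makes batch size and sequence length dynamic, so they can differ per request.
model.reshape({name: ov.PartialShape([-1, -1]) for name in ["input_ids", "attention_mask"]})

# "GPU" targets an integrated or discrete Intel GPU; "GPU.0"/"GPU.1" selects one explicitly.
compiled = core.compile_model(model, "GPU")

for inp in compiled.inputs:
    print(inp.any_name, inp.get_partial_shape())
```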

The second major thing is that we are streamlining our quantization and model-optimization experience a little bit. We are making one tool that does everything, and it does this through the Python API, which is friendlier for data scientists. And one feature that I would say is a bit of a preview at this point is that we are starting to support converting PyTorch models directly. It’s not production ready, but the team is very excited about it.
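
A minimal sketch of that streamlined flow, assuming NNCF’s Python post-training quantization API and the preview PyTorch conversion path; the torchvision model and calibration data here are stand-ins chosen only for illustration.

```python
# Minimal sketch: convert a PyTorch model directly, quantize it with NNCF, then compile it.
import torch
import torchvision
import nncf
import openvino.runtime as ov
from openvino.tools.mo import convert_model

# 1. Convert a PyTorch model directly (preview functionality).
torch_model = torchvision.models.resnet18(weights=None).eval()
ov_model = convert_model(torch_model, example_input=torch.zeros(1, 3, 224, 224))

# 2. Quantize through NNCF's Python API, using a small stand-in calibration dataset.
calibration_images = [torch.rand(1, 3, 224, 224) for _ in range(10)]
calibration = nncf.Dataset(calibration_images, lambda x: x.numpy())
quantized = nncf.quantize(ov_model, calibration)

# 3. Compile and run as usual.
compiled = ov.Core().compile_model(quantized, "CPU")
```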

Related Content

To learn more about AI development, listen to Accelerating Developers’ AI Success with OpenVINO and read Development Tools Put AI to Work Across Industries. Learn more about the latest release of OpenVINO. For the latest innovations from Intel, follow them on Twitter and LinkedIn.

This article was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and the IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her career.
