Ep. 093
Enabling machine vision on the edge
Orr Danon, CEO, Hailo
Friday, July 02, 2021

In this episode we discuss the importance of specialized processors for enabling machine vision on the edge. We also explore ways that edge processing can lead to cost reduction and improved performance for tasks such as inspection and quality assurance.

Orr Danon is the CEO of Hailo. Hailo specializes in artificial intelligence processors that deliver data center-class performance to edge devices.

IoT ONE is an IIoT-focused research and advisory firm. We provide research to enable you to grow in the digital age. Our services include market research, competitor information, customer research, market entry, partner scouting, and innovation programs. For more information, please visit iotone.com.

Transcript.

Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE, the consultancy that specializes in supporting digital transformation of operations and businesses in Asia. Our guest today is Orr Danon, CEO of Hailo Technologies. Hailo specializes in artificial intelligence processors that deliver data center-class performance to edge devices. In this talk, we discuss the importance of specialized processors for enabling machine vision on the edge. We also explore ways that edge processing can lead to cost reduction and improved performance for tasks such as inspection and quality assurance.

If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Finally, if you have an IoT research, strategy, or training initiative that you'd like to discuss, you can email me directly at erik.walenza@IoTone.com. Thank you. Orr, thank you for joining us today.

Orr: Thank you for having me, Erik.

Erik: This is a really interesting topic to me because we've been doing a lot of work in edge computing lately. You worked with the government, the Israeli military, for about 10 years or so, and then in 2017, together, I suppose, with some colleagues or people in your network, you set up Hailo. What's the backstory? How did you make the leap from the military to setting up an edge computing processor company?

Orr: How are the two topics related? The answer is that they're not really related, but it does fit in eventually, as I'll try to explain. I was with the army for quite a long while, over a decade. Throughout my career there, I went through lots of areas, from analog electronics to digital, and in the last couple of years, cyber and embedded, so pretty much all over. And that's how I found out what I like to do: I like to do interdisciplinary things.

At some point, I decided I wanted to leave the big organization and its benefits and go found something that could be more my own. That's why I teamed up with my cofounders, Rami and Avi [inaudible 03:04]. Actually, we have quite a story there. Rami, who was a cofounder, was actually my commander in the intelligence unit. He brought us all together around this super exciting question of what we were going to do. And Avi, my CTO, was then with Texas Instruments; he was the CTO of the IoT group at Texas Instruments. He told me about this whole new big trend of AI, which I had no idea about until that point.

Back in those years, I was mostly in software, and it seemed like a really big paradigm shift was coming. We tend to look at AI, and deep learning specifically, in terms of the applications it enables, and that is the most important part. But you can also think about how we develop this kind of technology. The whole approach is data-driven, both in development and at runtime, instead of decision-driven, which is the classical way to design a computer program and the thing a classical processor is designed around: how to make decisions very effectively.

Here, when developing models, it is about gathering data, and the data defines the solution you're developing; at runtime, it is about flowing the data through your deep neural network. So it was pretty clear back then that a really big paradigm shift was coming. And it was clear that there was a huge efficiency gap in processing technology at that time, a big gap in the numbers between what could theoretically be achieved and what was actually being achieved at that point.

So we decided we would explore this direction of building processors for AI specifically. It really appealed to me because it was clear it was going to be a highly multidisciplinary project. It's about algorithms, about software, about build systems, about chip design. And it gives you a lot of insight into the applications, which are so widespread; it's amazing to see the things people are doing today.

So all in all, it really appealed to me as something with so many aspects to it, which, based on my experience in the army, was what I had realized I like to do. And that's why we started the company. Actually, I mentioned Rami; he passed away a few months after we started. It was quite a tragedy; he drowned in the Mediterranean Sea. It was a shocking experience to begin a startup with, but we pulled through and moved on, in some senses with even more energy to make this project work. That's the founding story behind the company.

Erik: Every company has a founding story, and that's quite a powerful one. I was going to ask: it looks like on your LinkedIn profile you were VP of R&D, and then 10 months in you became CEO, so that explains the transition there. But now you certainly have a legacy to fight for in growing the company. I think a lot of our listeners are becoming more familiar with the concept of edge computing and why we would need processors that are specialized for the task of doing computing on edge devices.

But for a lot of people it's still a little bit of a vague concept: why don't we do things in the cloud, or why can't the same processors that we use for data centers, or for our laptop computers or mobile phones, also work for edge devices? If you can put it very simply, what would be the primary value proposition, or the primary problem that you're aiming to solve here for the industry?

Orr: So, basically, what we are trying to achieve as a company is to take the AI processing capabilities that are currently available in the cloud and bring them closer to the source, or the sink, of the data. This is the edge computing or IoT paradigm. The thing is that when you go to the edge, you have constraints. Contrary to the data center, where you have lots of electricity and lots of compute power available, you're looking at small, power-constrained, cost-constrained, area-constrained devices, so you need to make really efficient processors. And the more efficient you make them, the more capabilities or features you can bring to the end device. This has to be a different type of processor.

Moreover, what has been happening in recent years, especially around vision sensors, is that the main requirement for a processor that processes these signals is to do AI. Everything in visual processing has moved in recent years to AI, or deep learning, which for our purposes amounts to the same AI-based methods. And the compute requirement, in terms of capacity, just to give numbers: to process a video stream at a significant resolution, let's say one megapixel, in real time, typical applications need about 10 trillion operations per second [inaudible 09:16]. These are crazy numbers if you think about processors being designed in the gigahertz range; that's three orders of magnitude more.
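As a rough illustration of where a figure like 10 trillion operations per second comes from, here is a back-of-envelope sketch; the per-pixel operation count is an assumed value for illustration, not a number quoted in the episode:

```python
# Back-of-envelope estimate of compute needed for real-time vision inference.
# ops_per_pixel is an illustrative assumption, not a quoted figure.
resolution_pixels = 1_000_000      # ~1 megapixel frame
frames_per_second = 30             # near-real-time video
ops_per_pixel = 300_000            # assumed deep-network work per pixel

ops_per_second = resolution_pixels * frames_per_second * ops_per_pixel
print(f"~{ops_per_second / 1e12:.0f} TOPS")   # ~9 TOPS, the same order as the ~10 trillion ops/s quoted
```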

And the way to overcome this is to design computer architectures that specifically aim to do this type of AI computation efficiently. It's not as if we've been doing computers all wrong for decades and now we'll suddenly make them two orders of magnitude better; maybe that will happen tomorrow, maybe not, but for now it's not the case. The case is that you gain this increase in efficiency by being domain-specific. So on one hand we're narrowing the scope to AI, and on the other hand we want, of course, to leave as much room for flexibility as possible, since AI on its own is a very fast-moving field. Bottom line, we need processors that rely on efficient architectures to do AI at the edge.

Erik: So it's a case of specializing to get superior performance out of technology that, to some extent, already exists. If I look at the industries that you're serving, it's automotive, industry 4.0, smart cities, smart retail, and smart home. And if I look at the products you have, for automotive you have ADAS electronic control units and front-facing cameras, and then for the others you have an edge AI box and an intelligent camera. So is it a case of customizing the processor for each of these specific use cases? Is that how you maximize performance? Or is it primarily the same processor behind all of these use cases, and you're just putting it in different form factors for these different situations?

Orr: I think this is actually one of the most reassuring aspects of the business. We have some variants around it, but there is mainly one product, called the Hailo-8, which is our first-generation processor. And we actually found out that customers throughout all these industries you've mentioned, and actually many more than that, are interested in the same core technology. Of course, you need to do some specialization, some customization, whether it's tweaking the hardware or the software, but eventually, based on the same product, you're able to serve a wide variety of verticals.

The fundamental reason this is possible is that the disruption is coming from the same direction, and that is video processing capabilities. If the previous wave of revolution in the IoT was connectivity, and that is what defined the IoT, or edge computing in general, then AI is the next defining feature. Specifically, in all these verticals you've mentioned, we're talking about processing video in real time and using similar tools throughout. The data sets vary, as do the [inaudible 12:26] implementation details, but what we see in reality, from the customers coming to work with our product, is that they're all using the same core technology.

Erik: So you're emphasizing machine vision here, and I would classify machine vision as one of the top three clusters of use cases. You could use it for autonomous driving, for facial recognition for pigs, for quality detection on a production line; there are a lot of machine vision use cases. Is this the primary cluster of use cases you're focused on, processing data from cameras, or do you also address other edge computing scenarios where you might have not such heavy data from a single source, but data from thousands of sensors, for example in an industrial environment, that you need to process?

Orr: The mission we set out on is to solve the most pressing need of the industry, and that is by far vision processing, because of the amount of data you're trying to crunch. There are a few billion vision sensors being deployed each year in the world outside mobile phones; together with mobile phones it's about double that number. They're creating tons of data, and that means tons of processing needs to be done in order to make some sense and some use of the data. So although we do have the ability, and customers, to mix signal types, I would say the main strength is around vision or vision-like signals, where the amount of processing required is quite vast.

Erik: And I guess if you can process vision data, you can also process a temperature sensor, so that's not a significantly challenging problem at that point. How much customization is needed? Let's say on the one hand you have an automotive OEM, and on the other hand you have a company building cameras for smart cities. Are you working with each of these companies to design customized solutions for specific needs, or is it basically: okay, we have our product, we give you the toolkit, you can customize it as needed, and you're selling the product without an R&D project built around it? To what extent are you also providing the service of customizing and supporting project development?

Orr: So, it's more of the latter. We have one main product line, and around that product we provide lots of support. We're not a service company, and we do not provide AI engineering services. But what we do have is a wide accumulated experience of how people are deploying AI. So we will never know better than our customer the data that customer is using for its specific use case, but we will know what the [inaudible 15:17] is and what the best way to deploy it is.

I will say there is one exception to the general statement I've given, and it's actually a clear example: we have gone through additional extensions which are specifically relevant to the automotive industry. There are two main things that are relevant. One is quality, both in processes and mainly in the physical quality of the device. One of the value propositions we bring to the table is very low power consumption, which allows us to work at very high ambient temperatures. That is really helpful in the automotive industry; the device can operate at up to 105 degrees Celsius.

The other is functional safety mechanisms, which are a big concern in mission-critical systems. What is the failure rate of your device? Is it standardized to ISO 26262? Since we understand the tremendous potential of the automotive industry for AI applications, we have baked safety mechanisms into the architecture that enable you to detect errors that might happen randomly on the [inaudible 16:27].

Usually, the naive way of detecting errors, and of course acting accordingly, is to just replicate all the computations, so just do everything twice. Now, if you're a brake system, that's fine: you do the calculations, and then you do them again. But if we go back to the number I gave about [inaudible 16:51] of operations, it doesn't make sense to duplicate them. You can, however, make use of some fundamental properties of neural networks, which is what we've done, to dramatically reduce the overhead needed to make sure you have no mistakes in your calculations. So this is something we did specifically for the automotive industry.
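For readers who want to picture the naive baseline Orr is contrasting against, here is a minimal duplicate-and-compare sketch; it illustrates only the brute-force approach described above, not Hailo's reduced-overhead mechanism, and the model and fault handling are hypothetical placeholders:

```python
import numpy as np

def checked_inference(model, frame, tolerance=1e-5):
    """Naive fault detection: run the same inference twice and compare.

    This doubles the compute cost, which is exactly why it becomes impractical
    at trillions of operations per second; it only illustrates the baseline.
    """
    out_a = model(frame)
    out_b = model(frame)              # full re-computation of the same work
    if not np.allclose(out_a, out_b, atol=tolerance):
        raise RuntimeError("Random hardware fault detected; discard this result")
    return out_a
```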

Erik: And primarily because it's a mission-critical activity, the heavy processing needs to be done right the first time, basically. So I understand that the Hailo-8 AI processor is the primary product, and then you also have the M.2 acceleration module, the Mini PCIe acceleration module, the evaluation board, and the dataflow compiler. What are the roles of the acceleration modules? I guess the evaluation board and the compiler support R&D, while the accelerator modules are maybe higher-power capability?

Orr: So, the modules are basically the same device. What we found out as we started working with customers is that many of them have a similar use case of existing compute platforms which have PCIe extension slots, whether it's M.2 or Mini PCIe, and M.2 is a newer evolution of Mini PCIe. We are providing Hailo-branded modules that incorporate our device in these form factors, and this really accelerates time to market for customers, since they don't need to go through a hardware design cycle; they just plug it in. It could be a slot that was originally planned for a hard drive, a WiFi module, or a GSM module.

So many platforms tend to have these extension slots, and for a customer that wants to add AI to its product, it's something that can happen really fast. It saves at least the hardware design cycle; of course, you still need to design your software, but in terms of hardware, you get something that is almost out of the box. We found this very helpful and very popular with customers.

Erik: So now I think we have a good understanding of the use cases, the industries you're serving, and the products. Let's get a bit more into the tech stack. You mentioned that for you this is a very interesting problem to solve because it integrates a lot of different technologies. How do you look at the tech stack? What elements of the tech stack are you building internally, and which elements are on the market and you're integrating them into the full solution?

Orr: The core technology we're building is the neural network processing core. This is our own proprietary architectural design, both in terms of hardware and software, and it's actually our core differentiation if you look across the market. This part of the stack is a really interesting problem because of the way we build our device: instead of looking at the problem of calculating the graph of a neural network sequentially, node by node, what we are doing is doing it in parallel. We distribute the compute across the core. We have very small virtual cores that we build dynamically at runtime, and each and every one of these little cores is in charge of a different part of the neural network.

So this actually means that calculations propagate through the network as their inputs become available, and this is a very different way to describe the compute problem than sequentially going over all the calculations you need to do and doing them one by one on a very powerful parallelized core like a CPU or GPU. And this goes back to what I said about a paradigm shift. For this paradigm of doing things in a spatially distributed way, you can even think of a sort of analogy to deploying lambda functions. It mainly works when we do it with neural networks; it doesn't work with general-purpose code like an operating system or something of that sort.
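As a loose software analogy to the dataflow idea described here (a simplified sketch, not a model of Hailo's hardware), the snippet below evaluates a small graph by firing each node as soon as all of its inputs are ready, instead of stepping through the whole computation one operation at a time:

```python
from collections import deque

def run_dataflow(graph, inputs):
    """Evaluate a DAG of operations dataflow-style.

    graph: {node: (fn, [dependency names])}; dependencies may also be keys of `inputs`.
    Each node 'fires' as soon as all of its dependencies have produced values,
    mimicking results propagating through a network as they become available.
    """
    results = dict(inputs)
    pending = deque(graph.keys())
    while pending:
        node = pending.popleft()
        fn, deps = graph[node]
        if all(d in results for d in deps):
            results[node] = fn(*(results[d] for d in deps))
        else:
            pending.append(node)      # inputs not ready yet; revisit later
    return results

# Tiny example: two independent branches feed a fusion node.
graph = {
    "branch_a": (lambda x: x * 2, ["input"]),
    "branch_b": (lambda x: x + 3, ["input"]),
    "output":   (lambda a, b: a + b, ["branch_a", "branch_b"]),
}
print(run_dataflow(graph, {"input": 5}))
# {'input': 5, 'branch_a': 10, 'branch_b': 8, 'output': 18}
```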

And this requires a really extensive software stack that abstracts all of this complexity from the user, who just wants to describe a graph in a popular framework like TensorFlow or PyTorch, which people mostly use to describe neural networks, and have all the translation done automatically to the primitives our hardware implements. These are core competencies, and this is where we invest a very significant part of our efforts, both in terms of hardware design and software design, and, of course, the competency of deciding what needs to be done in hardware and what in software. This is one aspect of the interdisciplinarity we're working very hard on.

Actually, for me as the CEO, it's very important to find people who are well versed in all these fields and are not afraid to jump between the algorithmic part, the software part, and the hardware implementation part, and to keep a very well-balanced architecture. Other than that, from a technology stack point of view, we are building a full product, so we have the usual aspects of embedded programming; the chip is a big embedded computing system. Chip design is a big [inaudible 22:18] that we are holding, and all the embedded firmware, drivers, computer vision, and all the integrations that need to go into a full-fledged, well-designed system are things we also do internally. We're not reinventing the wheel; we're using technologies that are available off the shelf, but fitting them into a full system is something we do internally.

Erik: You have visualized very well on your website the innovative approach you have here. If you look at the neural network graph, you're moving from layer one, layer two, layer three, layer four, and so on, and then, at least as I understand it, you physically have different areas on the processor that are processing each of these layers.

Two questions here. As we look from layer 1, 2, 3, 4, and so on, what does each layer include? Is the first layer saying we understand lines, the second that we understand connections between lines, and then sub-configurations, and so on? Or what would be the features that you're moving through? And then maybe a second, somewhat different question is where this concept came from. Was it a clear concept that maybe your CTO had in mind that he wanted to test out, or was it a bunch of experimentation until you eventually arrived at this as the best solution to the problem? But maybe the first question is just: can you explain a bit what each layer actually indicates?

Orr: You described very well the way the hardware is structured. It's actually a reflection of the way neural networks are described in abstract programming frameworks like TensorFlow and PyTorch. When you look at the description of a neural network, it is usually composed of layers, and each layer is supposed to represent a deeper representation of our understanding of the image. For the first layer, the receptive field is the pixels of the image, so let's think of the pixels as the first layer.

And as we move on to the second, third, and fourth layers, we are aggregating combinations of pixels and patterns together to create a more and more complex representation of the image, conceptually transitioning from raw data into insights on the other side of the pipeline. This is just how neural networks operate; it is not specific to our architecture.
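To make the layer-by-layer idea concrete, here is a minimal PyTorch sketch of such a description; it is an illustrative toy network, not one of Hailo's models, and the comments on what each stage "sees" reflect the usual intuition rather than a guarantee:

```python
import torch
import torch.nn as nn

# Toy convolutional network: each stage builds a deeper representation of the image,
# moving from raw pixels toward an "insight" (here, a class score).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: local edges and lines from pixels
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: combinations of edges, simple patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # layer 3: larger, more abstract structures
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),                            # final layer: task-level output, e.g. 10 classes
)

scores = model(torch.randn(1, 3, 224, 224))       # one RGB frame in, class scores out
print(scores.shape)                               # torch.Size([1, 10])
```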

What's nice about our architecture is that it reflects this structure in the hardware, and that is what makes it so efficient. That is also pretty much how we got there. When we started, we didn't really have an idea of how to do it. We did realize there was big potential to improve, but we didn't know exactly how. It's easy to design a processor that can run one task very efficiently; I mean, it's not trivial, but you can relatively easily pull it off.

It's different when you start looking at this as a [inaudible 25:14] device and think about having, on one hand, the capacity and the efficiency and, on the other hand, the flexibility to run different workloads. We've been working since 2017; from 2017 up until now, how many things have happened in the world of AI? How fast does this field evolve? So you also need to keep the architecture flexible enough in terms of what the future holds, how this field will evolve, and what that means I will need to support in my architecture, once again while being efficient and providing high capacity.

In the first year of our existence, we did about four iterations on the architecture. We got a concept for a processor, we implemented it, we started trying to program it, and then asked: what are the issues, what are the problems, what are the bottlenecks? Then we threw everything aside and did three more of those, until we reached a point where we said, okay, this looks good, now let's make a product out of it. So it took us about a year to get to that concept, and then we started running towards a product.

Erik: [inaudible 26:33] use simulation basically as you move through this process to determine whether a design works well, or do you actually have to build and test it in the physical world?

Orr: Yeah. I'm a big believer in doing end-to-end cycles. So we actually used FPGA technology, which some of the listeners might be familiar with, to do these cycles. It doesn't give you the full picture of the whole device, but it does give people a very powerful tool. Simulations tend to be very lengthy and very limited in what they describe, because you're only able to sample a very small subset of use cases. My belief is that the main thing that will help you understand whether what you're doing is right or wrong is to run as much software as possible on it. And that can be more easily done with something that might be less representative in terms of hardware, but more representative in terms of the ability to scale software on it. So we started by using FPGAs, and afterwards moved to more hardware-oriented, ASIC-oriented approaches.

Erik: When did you first build a physical chip?

Orr: We had a prototype device that we sampled back in the summer of 2019. Just recently, at the end of last year, we got our production device, which we started sampling, more precisely, early this year to a very large number of customers.

Erik: Are you able to say who's producing this with you or is that confidential?

Orr: One of the leading foundries.

Erik: There's been quite a significant disruption to the supply chain recently, especially around chips for automotive. Is this impacting you? I guess a lot of it is hitting more of the older generation of chips. But is this a disruption to your business? Are you still able to work through it fairly well?

Orr: We have been able to work through this, really, whether or not it's [inaudible 28:44]. As you mentioned, there has been a really big shortage, specifically on the older process nodes. It's the cost of over-optimization. I hear that many people are reconsidering their just-in-time policies, meaning, if you're a supplier, bring me the components only when I need them. The risk, of course, is that if the supplier doesn't have supply at that point, you're stuck without being able to sell your product.

On one hand, it's more capital-efficient; on the other hand, it is much riskier for your ability to operate. So I think everybody will come out with a bit of a different approach as we walk out of this current stretch in the supply chain. We've been relatively lucky; we've managed to get through it. Hopefully, it will stay that way.

Erik: We'll get through this eventually as an industry, at least this current challenge. There are probably a number of different areas of the tech stack that we should be touching on here, but there's one in particular that I think is a significant concern for industries like automotive, and also for a lot of manufacturers, which is the security of edge devices. This at least appears to be one of the bottlenecks to wider adoption in the short term. Is that a topic you're addressing, or is it basically a separate set of companies working on the challenges related to cybersecurity, and you're integrating best practices as you build your solution?

Orr: Our main focus is on AI. In terms of standard processing, we are following best practices and taking the established solutions, which I'm a strong believer in. One of the aspects in which we encounter cybersecurity is that for some customers, the motivation for doing things at the edge in some sense comes from security or privacy concerns. Keeping data away from a centralized database is something they find structurally better at the edge, instead of doing it in a centralized location where all the sensitive data is kept.

Being able to process on the fly at the edge enables you not to save the data at all, not even at the edge. So it doesn't directly mitigate cyber or privacy risks, but it reduces the attack surface, or the amount of data that can be compromised by this kind of risk.

Erik: Right, not transporting the data also reduces at least part of the risk landscape. And a lot of manufacturers are also quite hesitant about moving to the cloud for any type of critical solution, so edge computing could be a good fit in those situations. Do you have one or two key accounts, you don't have to mention the name of the company, where you could discuss what you've built with them and how it has been deployed in the world today?

Orr: Okay. One major thing we see is the push toward video analytics. There is a variety of players working with us on this. It's being addressed by solution providers, vertical integrators, and ODMs that are building either cameras or aggregation points, what people now like to call edge AI boxes, and running analytics on them. The main thing we see people struggling with is processing in real time at reasonable quality.

They're coming at it from two directions. You say, okay, I'm going to make a reasonable edge solution, meaning take a ruggedized PC, for example, that aggregates eight feeds from cameras; there are multiple projects like this. It has to be something that is reasonable to handle, so it has to be fanless, which means it consumes less than 15-20 watts. And if you want to detect events that you're seeing on your cameras, it has to work at at least near-real-time performance, which is 20-30 frames per second, otherwise you'll just be missing events. Combining all these constraints together gives you quite a tight envelope.
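To put rough numbers on that envelope, here is a simple budget sketch for the kind of box described above; the per-frame compute cost is an assumed figure for illustration, not a number from the episode:

```python
# Rough feasibility check for a fanless multi-camera edge AI box.
# tops_per_frame is an assumed per-frame inference cost, for illustration only.
num_cameras = 8
fps_per_camera = 25                 # within the 20-30 fps near-real-time target
power_budget_watts = 15             # fanless envelope mentioned above
tops_per_frame = 0.3                # assumed trillions of operations per processed frame

required_tops = num_cameras * fps_per_camera * tops_per_frame
print(f"Required compute: ~{required_tops:.0f} TOPS")                          # ~60 TOPS
print(f"Needed efficiency: ~{required_tops / power_budget_watts:.0f} TOPS/W")  # ~4 TOPS/W
```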

And what we're seeing from customers is that they're struggling. Either they go to the obvious solution, which is adding an NVIDIA-style GPU, which immediately takes you to a completely different form factor, price point, power consumption, and deployment scenario, or they take something that is available off the shelf, mainly from Intel or from Google today. The problem there is the limited performance: you can't get real time, or even close to it, if you use standard neural networks. You have to use very reduced mobile versions, which have much lower accuracy.

The result is that you're bringing low value to the customer, because your error rates are very high and you're giving false alarms. So, bottom line, it's limited. This is the typical scenario we see in video analytics, and it spans all the use cases people have for these boxes. The main ones we see are security, surveillance, retail analytics, access control, and smart city, which is a combination of all of the above. They're all facing the same challenges and looking for solutions that can bridge the gap between high performance, reasonable cost, reasonable form factor, and reasonable power consumption.

Erik: And across your industries you have two major form factors: one is the edge AI box, the other is the intelligent camera. Is the primary difference here that the intelligent camera is a single-sensor device that has embedded compute to process the information coming out of that device, while the AI box takes inputs from multiple sensors and does sensor fusion to process a larger data set? Is that how we should look at these, or is there a different way they are differentiated?

Orr: Actually, by definition, this is a common architectural dilemma that you see across industries: do you want to process directly at the sensor, or do you want to do it at the aggregation point? There are pros and cons to each. For instance, if you already have an established sensor suite, whether it's deployed in a building or on your robot, you don't want to touch the cameras. That could be a reason to do things at the aggregation point.

On the other hand, in some cases you say, okay, we have a customer doing smart cities, putting sensors on high poles and doing all kinds of monitoring and violence detection, things like this. In that case, it's a camera. The aggregation point is in the cloud, but they don't want to rely on connectivity to the cloud, because that would make the system significantly more complex and also mean recurring expenses on processing in the cloud. So they want to close the system in one sensor; there is a motivation to go to in-sensor data processing.

So we see both deployment options. I think the aggregation point is easier for most customers. By the way, the same applies to cars, which, if you think about it, are exactly the same architecture. For many customers it's easier today, as a first step, to incorporate it into the aggregation point. But I do think that a year or two from now we'll see much more intelligence going into the sensors themselves.

Erik: I guess it's also a question of how many different sensors you need data from in order to achieve the desired insight. Is that a reasonable way to think about this? If you need data from multiple inputs, then you need something like an aggregator box, but if you can achieve the result from one integrated sensor, or a device that might have a few sensors integrated on the same hardware, then that would be a solution. Is that also a way to think about it: what diversity of inputs is required to achieve the desired effect?

Orr: You can look at it from a system integration perspective. When you fuse the data, you can fuse at the raw level; obviously, that contains the most information, but it is the most complex integration to do. Or you might want to talk at a higher level of abstraction, saying each sensor, with its own peculiarities, angle of view, lighting, maybe even coming from different vendors, just provides a standardized object list. Then you fuse the data: say, if three cameras saw the same thing, then the object is there; if only one of the three saw it, it's probably a false alarm.

And this is a choice that has to be made by the system engineer: do you prefer the simplicity of the integration at the expense of making the individual components more complex? Or do you say, okay, I want to concentrate the complexity at one point; I'll have a relatively complex integration there, but first, everything will be at one point, and second, I will have all the information? So it's really use-case dependent.

Erik: So you might have a situation where you prefer to have multiple sensors that all say object detected, object detected, object detected, object detected, just reporting that an object, or a cat, has been detected, and then the central aggregator just knows that four sensors have all detected an object and can make a decision based on that, as opposed to moving all the raw data to the aggregator and running the processing there?

Orr: Yeah.
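A minimal sketch of that object-list voting idea might look like the following; the label names and the two-vote threshold are illustrative assumptions, not details from the episode:

```python
from collections import Counter

def fuse_object_lists(detections_per_camera, min_votes=2):
    """Fuse per-camera object lists by simple voting.

    detections_per_camera: one set of detected labels per camera.
    A label is accepted only if at least `min_votes` cameras reported it,
    so a single-camera detection is treated as a likely false alarm.
    """
    votes = Counter(label for labels in detections_per_camera for label in set(labels))
    return {label for label, count in votes.items() if count >= min_votes}

# Example: three of four cameras report a person; only one reports a cat.
cameras = [{"person"}, {"person", "cat"}, {"person"}, set()]
print(fuse_object_lists(cameras))   # {'person'}  (the lone 'cat' report is dismissed)
```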

Erik: Let's talk a little bit about the future. This is a really dynamic area of innovation right now for Hailo, but also across the industry. What do you see as the big innovation levers, or the areas we should be paying attention to, over the next five years as this market develops?

Orr: First of all, we are already starting to see deep learning expertise proliferating from the cloud players and the really sophisticated full-tech-stack companies to everybody who is making electronics. That is one trend that the democratization and commoditization of deep learning capabilities will bring. We're already starting to see it: tons of innovation in product value, all kinds of products that we use in our day-to-day lives or throughout industry being reimagined using AI, and this is not only being done by the Googles and Microsofts of the world. So this is one thing that is going to happen.

We're also already starting to see signs that the cloud players are much more invested in edge computing. I think everybody today understands, especially in this field of edge AI, and especially for vision, that it's not practical to do everything in the cloud, so I think we'll see more involvement of cloud players at the edge. It will be interesting to see who prevails eventually: the typical vendors for the edge, or the cloud players, whose full suite of offerings will include management and, as you've touched upon, security and provisioning, and may be more appealing to customers. [inaudible 41:25] interesting trend to watch.

Erik: There have been a couple of companies I've talked to that are quite interested in the prospect of 5G leading to a kind of localized server. So let's say you have a 5G hub that services a large refinery or a neighborhood in a city, maybe a half-kilometer radius, and you aggregate data and then process it there, an edge cloud to some extent. Do you see this as an interesting architecture, or is it a bit too early to tell whether this becomes widely adopted?

Orr: I think it's a bit too early, though people are moving in this direction. Whether the market will accept it or not is harder to say, because it does require tailoring the solution end-to-end. And it has always been the challenge, from the telco perspective, to add value by providing the full application. If it remains just a free-floating compute resource, I don't think it will be very successful. But if it integrates well with use cases like the ones you've mentioned, it could be a real value proposition.

Erik: Maybe on this point, a last question. We talk about the Internet of Things, but I think the reality is that in many cases you don't really have an internet; you have a lot of relatively isolated devices collaborating with a few other devices, which actually does the job quite fine in many cases.

But if we look at smart cities, for example, it's going to be increasingly important to have thousands of devices that are really coordinated in some way with each other. What do you see as the status of this from a technical standpoint? There's a whole set of business, legal, and privacy-related challenges, but just from a technical perspective, what do you see as the status today? And do you feel this is an important area to develop, or do you feel we can get the job done and solve most practical problems without this system-of-systems level of integration?

Orr: The system of systems is an absolutely crucial component in enabling this whole ecosystem. I think the right balance of what is done in each system, and how they coordinate or consolidate into something a customer or an operator can work with, is crucial. I don't think we've nailed it down yet, but I think we're going that way, and as an industry we have to solve it. This whole new set of concepts like building management, safety management, and smart access control: if you want to make them a reality, this service level that aggregates and coordinates all the smaller system components has to happen, because I don't believe in standalone systems.

Erik: We'll all be busy for the next decade at least. Is there anything we haven't covered today that's important for everybody to know?

Orr: No, I think we’ve done [inaudible 44:39]. Thank you very much, Erik.

Erik: Yeah. Well, thank you. Just one last question: if one of our listeners is interested in learning more about your solutions, what's the best way for them to get in touch with the company?

Orr: You're more than welcome. Just come to our website, www.hailo.ai; all the information is there, and you can contact us through it. We'll be happy to help.

Erik: Thank you, Orr. Appreciate your time.

Orr: Thank you, Erik.

Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.
