Ep. 058
A conversation on AI, social responsibility, and future disruptions
Neil Sahota, Master Inventor, IBM
Monday, March 02, 2020

In this episode of the IIoT Spotlight Podcast, we discuss the current state of AI in practice, the integration of AI and IoT, and the importance of new data sources designed for machine use. On the human side, we highlight the importance of measuring social impact while deploying disruptive technology.

Neil Sahota is an IBM Master Inventor, United Nations Artificial Intelligence subject matter expert, and on the Faculty at UC Irvine. With over twenty years’ experience, Neil works with clients and business partners to create next generation products/solutions powered by emerging technologies. He is also the author of Own the A.I. Revolution, a practical guide to leveraging A.I. for social good and business growth. 

Transcript.

Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today will be Neil Sahota. Neil is an IBM Master Inventor, an AI subject matter expert with the United Nations, and a member of the faculty at UC Irvine. He's also the author of the new book “Own the A.I. Revolution”, which explores how companies can integrate AI into their strategies to realize new competitive advantages. Together, we discussed the current state of AI in practice, the integration of IoT and AI technologies, and the importance of new data sources that are designed for use by machines rather than humans. We also explored the importance of assessing social impact while developing and deploying disruptive technologies such as AI. I hope you find the conversation valuable. And I look forward to your thoughts and comments. Neil, thank you so much for taking the time to speak with us today.

Neil: Yeah, I'm excited to be here, Erik. Thanks for having me on.

Erik: So Neil, I've been thinking about what a good way to start our conversation is. We're going to be talking today about AI generally, but that's a huge topic. Before we get into it, I want to give our listeners a bit of background into who you are, what you're about, and why you're with us today. You recently published a book, “Own the A.I. Revolution”. So maybe you can give us a little bit of background on your path to this point, and then what led you to put in the tremendous amount of hours required to publish a book like this. I think this would help our listeners understand where you're coming from in this conversation.

Neil: I will say that I kind of got into all this by accident; I definitely didn't plan to be one of the pioneers of the AI wave. I've always been the kind of guy to try and solve problems, but try and solve them generally rather than for a specific situation. And so as a result, I developed a lot of IP and other types of things. And about 14 years ago, when we saw the take-off in business intelligence, because we could collect, store, slice and dice, and report on data, people said, well, it's amazing what computers can tell us. And I'm thinking to myself, well, they're not actually telling us anything; we're just manipulating information that we have. But could the machine do that? And so I went and developed a series of patents around what we call machine learning today.

And lo and behold, there was a project going on at IBM. Turns out, there was some synergy with my work there, and the next thing I know, I'm involved in IBM Watson. It was a great experience; I saw a lot of opportunities and was one of the champions for creating the IBM Watson ecosystem, the whole AI ecosystem, to try and basically add new capabilities and services for people. And in doing that, what I really realized was that it's not so much about the technology. The technology is a tool, but I felt like people would struggle to understand these new capabilities.

AI is called the third generation of computing with good reason. It's very different from just executing a program; a machine doing low-level tasks with some cognition is something we're not used to. It's a different way of thinking. So working with global Fortune 500 companies, nonprofits, startups, government agencies, and the United Nations, I realized they had a lot of the same questions, the same concerns. But one of the big things was: I know I should be doing something. How do I figure that out? And how do I get started?

And so that's why I actually decided to write my book on the AI revolution. Because rather than working through everything one-on-one, I saw this as an opportunity to share this knowledge so people could actually get a leg up on figuring out how to answer these questions.

Erik: You have, I've got to say, one of the more interesting LinkedIn profiles that I've come across. So you're a lecturer at UC Irvine, but you're also involved as a board advisor, or an advisor of some kind, at quite a number of organizations and companies. Maybe you can help me understand the logic behind this. There seem to be two big themes here. One is AI, or companies that are somehow using this set of technologies. And the second is social responsibility, or companies that are trying to do good while also running a successful business. How does this reflect your time right now? How active are you in these companies? And how does this align with your more general academic or intellectual interests?

Neil: I know it may seem hard to believe, but they're actually all connected together. I really believe we have a chance to shape the future. I know a lot of people are always concerned, and most of us are afraid. But I actually believe that each of us has the opportunity to shape the future. And where I see the most innovation or disruption is actually with the startup community: they're willing to challenge the assumptions and come up with not what I call automation, finding ways to make an existing process, system, or product faster, cheaper, and with fewer errors, but a different product, process, or system to do the work.

And because of that, that's why I've been so engaged in helping startups, being an advisor, actually trying to help these people figure out how to build a viable venture to make that a reality. At the same time, my goal in life is to try and leave the world at least as good as I found it, if not better. I really feel fortunate for the opportunities I've had in life and some of the mentors I've had. And I want to try and pay that back and encourage future generations and future leaders to do the same thing. And that's why I'm really big about social good, to champion social enterprise and social entrepreneurship, because there's nothing wrong with making money, but I think there's an opportunity to make money and help our community, help society, and the planet as a whole.

We just usually don't get that kind of mindset or that kind of exposure. I think cultivating that is really important. And both of these things really tie a lot to education. I really feel it's important that we keep our curriculum fresh to get people ready for the future of work, and that we give students and future generations the knowledge and the skills, along with the tools, so they can make this future a reality. So, from my standpoint, I'm able to juggle these three things because I really see them as interconnected, and part of a much larger ecosystem to help make the world at least as good as, if not better than, I found it.

Erik: This would actually be an interesting machine learning or AI challenge, this issue of properly evaluating a company's value to society, because right now we have a fairly simple mechanism for doing this. How much will somebody pay for their core offering? And then we have some costs associated with running that business. But externalities are really not factored in very well. So, some companies or some industries have a lot of negative externalities that never show up on their bottom line, and others have a lot of positive externalities that they're, for some reason, not able to monetize.

And if we had a system or a solution that was able to more granularly evaluate the impact or the value of a particular company to society, this might actually make social enterprises, or let's say companies that are trying to do good as well as make money, appear much more successful. So I think this externality challenge has been a key challenge for governments and intergovernmental agencies for hundreds of years, ever since they started trying to regulate. And it's always been people, lawmakers, trying to set up laws, so kind of this Byzantine structure of laws to try to regulate what makes sense and what doesn't.

But they always seem to be lagging a little bit behind the industry, because they're always somewhat reactive, and they react at the speed of Congress. And at the speed of Congress, you're always going to be five years behind any particular trend. I'm curious whether you've ever thought about the application of AI to these more societal-level challenges of how we, for example, evaluate and price externalities?

Neil: Ironically, it's actually something that I have thought about, because we're now dealing with a space where machines are making recommendations, and in some cases, potentially decisions. We can't afford to be reactive. We have to try to think through what may happen. And that is actually a big challenge, especially when you talk about people that may not fully understand the potential, the possibilities, or the technology. And I've actually thought about this: could we create an AI system to help us figure out what people might do, an AI philosopher that anticipates, so we can create good regulation and policy?

The challenge, though, is we don't know how to teach AI imagination. We don't know how to teach AI creativity. That's the realm of what we call AGI, Artificial General Intelligence, which we haven't cracked yet. AI is really good at what we can teach it to do. That's a major constraint in trying to make this happen.

The other thing is the variability. When we teach AI, we have to be able to give it tons of data to learn from and bring in the subject matter experts. But if we ourselves can't anticipate, if we can't teach imagination, it's unfortunately a really tough nut to crack. However, I always feel like there's a path forward. And one of the things I've been thinking about is, if we can't create that kind of AI philosopher, can we create an AI questioner? Can we create an AI that will at least bombard us with questions and go through, call it critical thinking or thought exercises, to flesh out, no matter how random they might be, different things that might occur? That might actually be possible.

And I think that's actually a good example in that I know a lot of people are worried about whether AI is going to replace them. It's not really about that; it's about human-machine collaboration and using AI as a tool so we can do more complex, more value-added work. And I think an AI questioner that could pepper us with very relevant, sometimes maybe really far-out questions to flesh this out might actually be a useful tool, because it's sometimes hard for us to think about the extremes that might occur.

Erik: I think a lot of people want to be able to develop an algorithm that spits out a specific number and say, okay, this is the number, let's go execute on this. But there's this thought process of figuring out what type of output is actually feasible and would provide value. And it might not be a specific number at all. It might be directional, it might be a set of questions or a set of impulses, and that is actually more feasible than trying to develop a system that will tell us the answer.

And this is probably the right way in many situations to actually think about the use. But I think it's not intuitively how people think about AI. People tend to think about AI as, you put in a bunch of data, and then it says this company has a score of this and that company has a score of that, and go implement on this. I'm curious, Neil, in your work with the UN and other governmental or intergovernmental agencies, how receptive are they to adopting this type of solution? Are they actively seeking solutions? Are they actively engaged in this topic? Are they somewhat hesitant, or maybe not particularly informed about what is possible today? What's your experience in working at the interface of intergovernmental agencies and AI?

Neil: Well, I have to give the typical MBA answer: it depends. It actually, unfortunately, depends on the agency and the leadership themselves. I've seen a lot of agencies in the US and around the world where they're really hyper-engaged. They see this as a phenomenal opportunity to enhance or even provide new public services for people, and to do some things more cost effectively as well. And they're actively pursuing that. They actually have projects, and in some cases, products out there.

For example, Singapore is leveraging this technology to help direct traffic to reduce congestion, which also has the benefit of reducing carbon emissions, so fantastic stuff. I know that organizations like, say, the California Department of Defense and Department of Agriculture are actively working on projects and initiatives to enhance public services. But you also have agencies out there that are slow to move for the right reasons: they might be concerned about bias within the AI, or trust in the technology, or the disparate impact.

For example, would one group, unfortunately, be disproportionately harmed by it? And then you have some agencies where they're not really doing anything. It's not really on the radar. For some of them, it might be for right reasons, some of it might be cost, but some might just feel like AI couldn't add anything, and in some cases, it may not. They don't believe the machine can do something better than a person. I'm not advocating that machines do everything better than people. There are quite a few things people do better than machines.

But some of the things that machines do better than us are surprising. I think our own limitations, or our own perceived constraints, sometimes inhibit our ability to do that. For example, we've actually found that people are more honest talking to an AI like a chatbot. It doesn't matter if it's your doctor of 20 years, your lawyer of 20 years, your accountant of 20 years; people will actually tell the AI something they would never tell that person.

Think about your doctor of 20 years doing a routine health assessment. They might ask the question, well, on average, how many drinks do you have a week? And people will be like, hey, I have a glass of wine socially, that kind of stuff. But when they talk to the AI chatbot, they're like, oh yeah, three whiskies at lunch. And it's surprising, because why wouldn't they tell the person? But we realized they feel like when they talk to the machine, there's no judgment involved. As a result, they can actually be more honest and provide better information, and better information means better insight, better ways to try and help the person with whatever their needs are. It's tough for us to accept that sometimes another human being will be more honest with a machine than with us.

Erik: So let's lay a little bit of a foundation first with the question: what is AI? Because I think we can get into the habit of just using this term AI around a lot of different use cases. But for many of our listeners, this is not their area of expertise. And so it's useful to know, when we say AI, what are we actually talking about? What is the suite of technologies? How do you frame this? When you think about this, how do you structure the concept of AI in a useful way, so that you can say, here I'm talking about AI but I mean this, and here I mean that alternative set of technologies or use cases?

Neil: The definition of AI is a bit of a moving target. As we develop more capabilities, it kind of shifts. But at the end of the day, AI is about a machine being able to essentially mimic human thinking, so that it can do tasks that require some level of cognition without direct human intervention, so to speak. And this typically means that there are three things involved with AI. The first is machine learning.

The machine actually learns by reading, by watching videos, by hearing, and by doing, much like we do. You basically give the AI a bunch of data, it draws a lot of conclusions, and then it works with human trainers as it tries to do things. As it does things, you have subject matter experts who say: this is right, that's great; this is wrong, and here's the reason why, and this is the right answer and why; this is a good answer, but that one would have been better, and here's the reason why; you got three of the four things here, and here's the fourth. But the thing is, it learns really fast. And it's not about programming. It's not about giving it instructions. You just give it rules for how to make decisions. You give it a whole slew of information. Data is the fuel for AI. That's how it winds up working. That's how it learns.

The second thing is the ability to understand natural language. We don't realize how difficult that is for a machine, because think about how we actually talk. We use a lot of slang. We use a lot of jargon, a lot of idioms. If I say, hey, you know what, I'm feeling blue because it's raining cats and dogs, guess what the machine thinks? The machine is like, wait a second, you're physically the color blue because it's raining small animals from the sky? That doesn't make any sense; that doesn't compute. But with AI, it's not looking at the keywords, it's looking at the intent, the context of the conversation, the connotation. It's not looking at the literal meaning; it's trying to derive the information, the subtext, if you will, about what's being discussed. And that's a powerful tool within AI.
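To make that contrast concrete, here is a minimal illustrative sketch (not something discussed in the episode): a literal keyword lookup next to a tiny intent classifier trained on example phrases. The phrases, intent labels, and model choice are all hypothetical placeholders, not any specific product's approach.

```python
# Toy contrast between literal keyword matching and learned intent classification.
# All phrases and intent labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def keyword_bot(utterance: str) -> str:
    # Literal matching: "blue" is always a color, "raining" is always weather.
    if "blue" in utterance:
        return "color"
    if "raining" in utterance:
        return "weather_report"
    return "unknown"

# A learned model instead maps whole phrases to the speaker's likely intent.
training_phrases = [
    "I'm feeling blue today",
    "I'm feeling pretty down lately",
    "it's raining cats and dogs out there",
    "the forecast says heavy rain tonight",
    "paint the wall a nice shade of blue",
    "is the sky more blue or turquoise",
]
intents = ["mood", "mood", "weather_report", "weather_report", "color", "color"]

intent_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
intent_model.fit(training_phrases, intents)

utterance = "I'm feeling blue because it's raining cats and dogs"
print(keyword_bot(utterance))                # -> "color": the literal reading
print(intent_model.predict([utterance])[0])  # -> inferred from phrase patterns, e.g. "mood"
```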

And the third component is the ability for it to act like another human being, so that we can interact with it that way. For better or worse, search engines like Google have taught us to rely on keywords. But it's not efficient. Think about Alexa: I could say, hey, Alexa, turn on the lights, and it'll turn on the lights. But if I say, hey, Alexa, it's awfully bright in here, it doesn't realize that I'm really implying it's kind of bright, can you dim it down a bit?

So with AI, it could come back and say, oh, hey, do you think it's too bright, do you want to turn down the lights? And I'd say yeah. Or if I'm looking to buy a bicycle, I can go to an AI that knows about bicycles and say, hey, I want to buy a bicycle. Which one should I get? It's like having a best friend that knows bikes. It'll come back and say, hey, Neil, why do you want one? I want to get back in shape. Great, how often do you think you'll ride it? Probably four or five times a week, maybe 45 minutes at a time. Awesome, Neil. And where would you ride your bike? I'll ride around my neighborhood. And it'll come back and say, Neil, here's the right bike for you.

So it knows about bikes, it's seeking more information, but it might also know things about me. That's why I say best friend. It probably knows that, you know what, Neil doesn't like to spend a lot of money, so I'm going to exclude expensive bikes. It also knows that Neil says four or five times a week, but he's a pretty busy guy, so it's probably more like once a week. It factors all these things in to come up with a good personalized recommendation, a best fit for my needs, rather than just a blanket type of thing.

So it's these abilities combined together that allow the AI to actually answer questions or solve problems we don't have the answers to. It's not a giant search engine just spitting back things we know. It's actually trying to solve problems, to help us find answers.

Erik: The first one, machine learning, is fundamentally a method of writing, to some extent, a general algorithm with some really fundamental premises and then turning it loose on data and going through this training process. But you're not building a lot of logic into that algorithm upfront. You're building in some fundamental logic, but then you're allowing the training process to build the algorithm as you go.

The other two, how would you build these up? Would it be a combination of machine learning, where we have Siri or Amazon Echo in the home, and through interaction with people it starts to understand that when you say something, it typically gets the answer wrong the first time, and then you rephrase it using other terminology and it understands you, so it starts to associate those two terms through this learning process of just being in millions of homes and having these daily interactions? Is it still the machine learning process that's training it, but then maybe there is also some more traditional AI or programming under the hood of, okay, we'll also connect this to your bank account so it knows how much money you actually have, so if you want to make a purchase, it knows what's financially viable for you, and then we connect this to your Facebook account so it knows who your friends are and what they tend to do, and so forth?

And then we also have more traditional AI processes of saying, okay, I'm going to connect these different databases and give it some more specific direction in terms of what data to assess in a particular situation. So how would you go about building the third capability you were talking about, where you have a conversational application around, say, purchasing a bike?

Neil: A lot of this has to do with what we call classification and concepts. Classification is just trying to order some of the data to help the AI understand. But what's actually more powerful is concept training. With concept training, we're trying to teach the AI the associations. So, I used the example earlier about I'm feeling blue because it's raining cats and dogs. Blue could mean a color, but it can also mean an emotional state; it can also mean a reference to music, and so forth.

To use a different example, think of France. If you were to ask a regular machine, what's France, it would spit back a bunch of facts. Okay, it's a country in the European Union, the population is 50 million people, 30 million square kilometers, whatever it might be. But if you ask a person what France is, they may not quite say the same thing. Yeah, it's a country in Europe, but it's also red wine. It's the Louvre. It's the Rhone River. It's the Eiffel Tower. And this is the important thing to teach the AI, this conceptual training, so it can make those associations.
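As a purely illustrative sketch (not any specific IBM or Watson tool), you can picture the difference between bare facts and concept associations as two small data structures, plus a crude disambiguation rule that picks whichever sense of an ambiguous word best overlaps the surrounding context. Every entry below is made up.

```python
# Toy picture of fact lookup versus concept associations; all entries are invented.
FACTS = {
    "France": {"type": "country", "region": "Europe"},
}

CONCEPTS = {
    # A concept carries the associations people make, not just attributes.
    "France": ["red wine", "the Louvre", "the Rhone River", "the Eiffel Tower"],
    # An ambiguous term carries several senses, each with contextual cue words.
    "blue": {
        "color": ["paint", "sky", "shade"],
        "mood": ["feeling", "sad", "down"],
        "music": ["blues", "jazz", "guitar"],
    },
}

def describe(term: str) -> str:
    """Combine bare facts with learned associations, which is what concept training aims at."""
    return f"{term}: facts={FACTS.get(term, {})}, associations={CONCEPTS.get(term, [])}"

def disambiguate(term: str, context: str) -> str:
    """Pick the sense whose cue words overlap the surrounding context the most."""
    senses = CONCEPTS.get(term, {})
    if not isinstance(senses, dict):
        return "unambiguous"
    context_words = set(context.lower().split())
    scores = {sense: len(context_words & set(cues)) for sense, cues in senses.items()}
    return max(scores, key=scores.get)

print(describe("France"))
print(disambiguate("blue", "I'm feeling blue and a bit sad today"))  # -> "mood"
```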

So to be able to have a conversation and to give it a sense of personality, it has to kind of understand the way conversation works: the intent, the context, how to have the banter. It's not just asking a series of questions, but making a natural flow. We might teach the AI that when you engage in a conversation, you don't just dive straight into the information. You might start with, hey, great, how's it going? Maybe a greeting, and then, oh, cool, you want to get a bike?
 

For lack of a better word, it's the psychology of how conversation actually works. So there's definitely an element of machine learning to all this. But what we take for granted, what's second nature to us, is actually really complicated for a machine. Because think about it: how many different ways could a conversation go? You walk up to somebody you may not know, and you say, hi, how's it going? How many different possibilities are out there? For a machine to understand and learn how to properly react to the situation, that's complex. That's why we definitely need machine learning, but we have to be able to teach those concepts to the machine at the same time.

Erik: Yeah, it's interesting that we're almost on opposite poles in terms of what is easy and what is difficult for humans and machines, right? We have a lot of specialized modules built into our brains after a couple hundred thousand years of evolution that make it fairly intuitive for us to have conversations and read people's emotions, and so forth. And we don't have specialized modules for how to do algebra; it just hasn't been around long enough and been useful long enough for that hardware to be built into us. But obviously, for machines, it's quite different.

If we use this example and maybe dig into it a little bit more, there are some things there that are probably fairly easy for a machine. It's able to grab all of the relevant information, whatever is available online, and it can do calculations if they're needed. So the population of France or anything like this, that information is relatively available.

So today, we have a balance when building a solution, which is mixing some degree of machine learning and some degree of human coding of just saying, okay, I want to produce some specific result, I want a functional technology that is conversational and human in its approach. And so there's some degree of coding. What does that look like today? If you were working with a startup to build this, to what degree would they be building a basic structure and then just pumping in millions of conversations, or somehow using large datasets to train, and what else? In what areas would they be manually coding in order to help the solution accomplish goals that brute-force training is not going to accomplish at this stage in the development?

And I ask this specifically because it's also a challenge for me. If I'm doing a project, it's unclear where we can trust the algorithm and just say, okay, we've got these big datasets, we're going to put them into the algorithm, and it gives me something out. But then there are places where we have to step in and really get quite heavily involved in trying to craft the logic. So maybe we can use this example to illustrate a little bit where we should focus on machine learning, and where we should focus on developers being deeply engaged in designing the AI solution.

Neil: Training is one of the big challenges around AI, and people tend to underestimate the effort around it. AI, unfortunately, is not just a magic box that does whatever we want. What we really focus on is what we call the ground truth. And the ground truth is rules on how to make decisions: they're not the decisions themselves. And basically, that becomes the literal truth for the AI.

So think about it as if the AI is like a three-year-old kid, and you're trying to explain to a three-year-old kid right versus wrong behavior. You try and give the child rules, like don't hurt people. It's not a specific behavior, it's that hurting people is bad. That's a rule to follow, a guideline on good versus bad behavior.

So we try to do the same thing with AI. And we try to use a cross-functional team to teach the AI. As we give it information, depending on how much variability or complexity there is around it, we might annotate some of it to call certain things out, like good behavior versus bad behavior. So that's partly the work of a data scientist, partly the work of a developer, but also partly the work of a domain expert. So when it comes to AI projects, it's not just that we get requirements and the tech team goes out and does the design and build. We need everybody involved along the way. And then we just let the AI actually try things and we see how well it does.

We, essentially, are looking for a level of confidence in how well it performs. And so many AIs actually will tell you, I'm 50% confident in my answer, I'm 70% confident in my answer. We keep going through this cycle of training it on more and more complex things until it gets to a level of confidence that we're comfortable with for general use. Now, it just depends on what the impact might be.
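A minimal sketch of that train-and-evaluate cycle, with everything stubbed out as placeholders: the "model", its learning curve, and the confidence targets are all invented, and the two thresholds simply echo the kind of use-case-dependent bar Neil describes next.

```python
# Placeholder sketch of training until a use-case-dependent confidence target is met.
# The "model", its learning curve, and the targets are invented for illustration.

CONFIDENCE_TARGETS = {
    "customer_service_chatbot": 0.80,   # a wrong answer is recoverable by a human rep
    "clinical_decision_support": 0.98,  # lives are at stake, so the bar is much higher
}

def train(model_state, labeled_batch):
    """Stand-in for a real training step on a batch of expert-labeled examples."""
    return {"examples_seen": model_state["examples_seen"] + len(labeled_batch)}

def evaluate_confidence(model_state):
    """Stand-in for measuring average confidence on held-out validation cases."""
    # Fake learning curve: confidence rises with the amount of curated data.
    return min(0.99, 0.5 + 0.05 * model_state["examples_seen"] ** 0.5)

def train_until_ready(use_case, expert_labeled_batches):
    target = CONFIDENCE_TARGETS[use_case]
    model_state = {"examples_seen": 0}
    confidence = evaluate_confidence(model_state)
    for batch in expert_labeled_batches:
        model_state = train(model_state, batch)
        confidence = evaluate_confidence(model_state)
        print(f"{use_case}: {model_state['examples_seen']} examples, confidence {confidence:.2f}")
        if confidence >= target:
            return model_state, confidence
    # Still below target: go back to the subject matter experts for more labeled data.
    return model_state, confidence

# Each batch stands in for one round of subject-matter-expert annotation.
batches = [[f"labeled case {i}-{j}" for j in range(20)] for i in range(10)]
train_until_ready("customer_service_chatbot", batches)
```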

If we're talking about a chatbot trying to help out with customer service, we might be okay at 80%. It will keep learning and get better, and if it runs into a problem, we have a human customer service rep. When you look at health care, where people's lives are at stake, 80% is probably not sufficient. So we have to invest a lot more time to get it to probably something like 98%. When I was working with IBM, one of the first things we did was health care, talking about how Watson could help with cancer research. We had to teach Watson about cancer. And there are a lot of different forms of cancer. There are different stages. There's a lot of different information, a lot of variability involved, a lot of complexity.

Then you add to the mix that the people who are really the subject matter experts here are extremely busy. We just can't pull them off their work, because they're working to try and save lives. And so between the combination of limited subject matter expert time and the vast variability in the world of cancer, it took several months for Watson to get up to a decent proficiency around that. And that's one key thing to learn: AI out of the box is not going to just know stuff; it's not going to be able to do things. We have to invest the time and effort to actually make that happen. The more complex it is, the longer it takes, which is why I always recommend to people, it's great to think big, but start small, because you start with a small subset. Could you pick one form of cancer, like pancreatic cancer? Could you pick one stage, like maybe stage one, and focus on teaching that really well, and then build upon that with more and more layers?

Erik: You mentioned earlier that some agencies are heavily engaged and some are not. I imagine a lot of agencies simply don't have the specialized resources, the data scientists and personnel, to approach this, or at least they don't feel that they do; whereas if you're talking about the Department of Defense or Bank of America, of course they have the resources to invest. What is the state of off-the-shelf solutions?

Because there are a lot of medium-sized companies and also a lot of medium-sized organizations, say a police force in a small town, or a medium-sized manufacturer, who have a lot of deep expertise in their particular domain. So they have half the equation, but they don't have the ability to go off and hire five data scientists. What's the best practice if they wanted to get involved and they had a limited budget? Would they be able to purchase something off the shelf that would help them, with maybe just one or two developers who are not PhDs but have 5-10 years of experience, to begin to build up solutions? Or does it really require a pretty robust effort at this stage to build something up?

Neil: It really depends on what you're trying to do. A lot of the big tech companies, the Googles, the Alibabas, the IBMs, have been really good about creating APIs for people to use. Each API gives you a capability, and you mix and match whatever makes sense. But what I've seen in my own experience is there's no perfect solution; you can't just mix a bunch of APIs and get exactly what you want. There's always some level of proprietary or customized work for exactly what you're trying to do. The APIs might get you 70 to [inaudible 36:53] there; you still need those resources.

Now, the good thing is there are a lot of people working to try and create these ecosystems. Google, IBM, Alibaba, all these guys provide help, support, guidance, but sometimes that's not enough. And you can't just go hire five data scientists like you said, but there are other organizations out there, like medium-sized or boutique consulting firms, that can be supplemental. Or you have organizations like [inaudible 37:22] that give you the opportunity to sponsor a challenge or a program at a much reduced cost compared to hiring one of the big tech giants.

And what they're doing is they'll help you solve that. But at the same time, they have some good people, and they have people who want to upskill, especially in underserved communities. And so they see the opportunity: yeah, we can help you, and we can also leverage these people who want to give back and teach people these new skill sets. And so I'm seeing more of this ecosystem type of play to actually create value across the board. So if you're a medium-sized company, or an agency, or a startup, there are a lot more of these opportunities, especially on a global scale, to get the help that you need.

Erik: You said they have communities that they're trying to skill up. Why is that? Are these people internal to their organization that they want to upskill? Or are they a nongovernmental organization that's specifically trying to help particular populations develop these skillsets?

Neil: It's very much the latter. They're nonprofits, and they're trying to take the opportunity and say, look, the future of work is changing. We know where the skillset demand is. People in a rural community or an underserved community may not have the same access to education or experts. We want to create the community, but also give them the opportunity to learn hands-on by connecting them with real projects. So [inaudible 39:03] goal is to actually help these populations get those skillsets.

Erik: One thing that I've seen that is maybe not as benign, but might actually be effective in some cases, and I'm interested in your perspective on this, is companies that are willing to take on some of the workload or the cost of developing AI solutions in return for access to the data. So I see more companies coming out with productized AI solutions where they'll go in and do some system integration work that's probably not profitable. But by doing this, they're able to get more data into their system and better train their system.

I know from the perspective of the end user, the company that's employing them, this is always a bit of a concern. Okay, maybe on the one hand data is this beautiful thing where it can provide value to me, and I can transfer it to somebody else, and there's no cost to me and there's additional value for them. So it has this wonderful property of being able to provide value to multiple people with a really negligible transfer cost.

But there's also this uncertainty around, if I do that, am I losing control of something that I don't understand? Maybe I don't properly understand the value of my datasets, or the value of having proprietary access to this data, and am I then giving this up to another organization? Or even giving up the competence to understand the data, so I'm just a receiver getting the output, but I don't really understand the underlying logic? This is maybe a complicated question that gets a bit into business strategy, of where a company wants to go with their approach to managing data.

But how do you look at this situation where companies have data, they want to extract value from that data, and they have somewhat of a tradeoff of potentially giving up access to that data to a third party, who would then also have access to it, but in return, they would actually get a functioning solution? I'm not sure if you advise companies on or come upon this question of when do I give up access to my data? How should I value proprietary access to a particular dataset?

Neil: Data is the new oil. And if you're a young entrepreneur, where it's really hard to get money and resources, it sounds like a really good deal. And it might be. I mean, you have to really try and weigh it. But the challenge that I think everybody has is, what's the real value of the data? Just being able to collect it, store it, and slice and dice it up doesn't mean that we're necessarily collecting things that are meaningful. It's actually something I teach my MBA students. This is the real challenge.

But everyone just knows they want to essentially hoard the data. If I have more data, something good will happen. That's not necessarily the case. But I think the bigger challenge is, if you do this, what will that other organization do with the data? Are they suddenly putting you in a liable position?

Interestingly enough, just this morning I was talking with a rather large company, I won't name names, and they were working with one of the very big management consulting companies. And they were asking me, look, they're willing to help us out with some AI stuff. They're willing to do things at a very cut rate. But they're very insistent that they get the data; that we have to share the data and they can use the data. And we don't understand why they would want the data. And I was just telling them, data is perceived to have value: the more you have, the more it's worth, even if you don't know what you could do with it.

And I was just telling them, the concerns around data security and privacy are a big thing. And maybe they see something that you don't, or maybe they don't, but they want to take advantage of it. But by the same token, this data hoarding has inhibited our ability to actually make advancements and innovations in a variety of fields like health care, where we actually know now, beyond a reasonable doubt, that a lot of the clinical researchers, the hospitals, medical schools, academia, and pharmaceutical companies are replicating other people's research, in some cases four or five times, without realizing it's a dead end, because people don't share information.

We can't make strides in medical diagnosis, and treatment, and potentially even cures, because we don't have large enough datasets, because nobody wants to share their data. Even when it gets stripped of PHI, Personal Health Information, they just think their data is so valuable that they don't want to contribute to an aggregate. Think about a public repository, again stripped of PHI, where anybody could access it and potentially accelerate the ability to cure cancer, cure AIDS. So if nothing else, we've placed so much value on data that it might be a bubble market, so to speak; we're actually overvaluing the data that we have.

So I get the concern and agree in a lot of cases that you shouldn't just instantly give things away: make sure it's a good trade-off. But in some cases, we're impeding our ability to actually make meaningful, life-impacting changes or benefits for society.

Erik: Well, it's a lack of transparency. What is this data worth? I have no idea; it could be nothing, it could be a lot. I've seen a couple of startups in the past couple of years that have, in particular segments, been building up something that I look at as a Google for IoT data. One is, I think, focused more on transportation. They're aggregating data from primarily public transportation sources. But I think their intent is also to get private data sources, and then allow people to access particular datasets through an API for some fee, and part of that fee would go back to the original data source.

So then you would have a mechanism to actually be incentivized to share this, because you're monetizing data that otherwise would just be sitting in some lake. And as long as there's no particular risk to this, and in this case it would be, I think, quite risk-free, this could be quite beneficial for all the government agencies that are investing a lot of time to collect data. They'd then be able to monetize this and sell it to, for example, automotive manufacturers who want to figure out how to design better cars.

Another company that's doing this is in the retail space, and they have a device that connects between a point-of-sale computer and the device that prints out the receipt. And so every time a transaction goes through, they're able to collect that transaction-level data, whereas before, maybe once a week you'd have an output report from a store. They then provide this back to the company, but then potentially, depending on the contract, they could also aggregate this data and have a dataset of what all the malls in China look like in terms of consumption patterns.

And so I've been seeing more companies experiment with this. It's going to be very interesting to see what the market accepts. But at some point, there seems to be a huge amount of potential value in solving this problem of how we share access to data in a convenient way and properly value it. I guess people are not universally satisfied with the solutions we came up with for internet data with Google and Facebook and so forth. So hopefully, we come up with solutions that we're more or less satisfied with in this case.

Maybe this can bring us to AI and IoT data, because I think you mentioned earlier in our talk that a lot of the data we've been working with wasn't created for machines, it was created for people. And now we're kind of repurposing this for machines. But for the past several years we've had a very significant increase in sensors that are collecting data primarily for the purpose of machine processing. What do you see in terms of the AI-IoT interaction? And I guess we can talk about this in terms of use cases, but also maybe at a bit of a higher level, just in terms of how this is forcing us to rethink what data is, what data should be, and how we structure and capture data?

Neil: Well, I really believe that IoT and AI go hand in hand. Data is the fuel for AI. And IoT is all about generating data, especially for machine consumption. One thing we've learned about machines like AI is they think differently than we do, and they can obviously process much larger sets of data, all these things.

To try and think of a simple example, we can look at self-driving cars. As people, we rely a lot on our eyes, so we're very visual drivers. And when the foray into self-driving cars started, there was probably an overreliance on camera data. There was the Tesla that was on autopilot. I heard the driver was watching a Harry Potter movie, but the driver didn't notice that 300 yards ahead a truck had turned over and the truck bed was blocking the highway. It was very obvious to see. But the Tesla never stopped. In fact, the car ran into the truck and it ripped the top of the car off. And people asked, how on Earth did this happen? Well, the truck bed was white, and it was a cloudy day. And so in the camera's eyes, the truck bed blended into the background.

But we know that machines don't need to rely on sight. If the Tesla had been using radar, or LiDAR, or even auditory sensors, it would have detected it in a heartbeat and stopped with plenty of time, which is why self-driving cars today actually have these capabilities. In fact, it's actually known that you'll hear the little kid about to run across the street before you see the little kid.

So the fact that we can take these different sensory inputs, we can take IoT devices in the road, on traffic lights, on other vehicles, on landmarks, wherever it might be, and process that information in real time for an AI system; the AI can process, we're seeing, thousands of inputs per nanosecond while it's driving. That's one of the reasons why we think that self-driving vehicles might actually be safer than human drivers. It's that power. All that data is being generated, and we as people can't consume it all.

We, unfortunately, can't use radar or LiDAR information. Auditory really depends on how well our car is soundproofed and whether we're listening to the radio or talking with somebody. But for a machine, it's a huge advantage. And I think it's these types of things, these IoT devices that supply this vast amount of data for an AI to process, that will make the AI more effective in some areas than a human is. I think that's the true advantage, the true benefit that we'll be able to reap.
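As a minimal, made-up sketch of why that multi-sensor redundancy matters: an obstacle a camera misses in a low-contrast scene can still trip radar or lidar, and even a deliberately simple fusion rule catches it. The readings, confidences, and threshold below are invented for the example; they are not how any real autonomous driving stack works.

```python
# Illustrative-only sensor fusion: brake if any sufficiently confident sensor sees an obstacle.
# The readings, confidences, and threshold below are invented for the example.
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor: str             # "camera", "radar", "lidar", "audio"
    obstacle_detected: bool
    confidence: float       # 0.0 - 1.0

def should_brake(readings: list[SensorReading], threshold: float = 0.6) -> bool:
    """Deliberately simple fusion rule: any confident detection triggers braking."""
    return any(r.obstacle_detected and r.confidence >= threshold for r in readings)

# The camera is fooled by a white truck bed against a bright sky; radar and lidar are not.
frame = [
    SensorReading("camera", obstacle_detected=False, confidence=0.90),
    SensorReading("radar",  obstacle_detected=True,  confidence=0.95),
    SensorReading("lidar",  obstacle_detected=True,  confidence=0.92),
]

print("Brake" if should_brake(frame) else "Path clear")  # -> "Brake", thanks to radar/lidar
```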

Erik: As a last topic to touch on here: AI obviously has the potential to be extremely beneficial, but there are a lot of, I would personally say, valid concerns with AI around its potential impact on society. Maybe we can set aside the existential concern around actually being able to control general-purpose AI, and look at the nearer term, the impact on society.

And I know that this is an area where you're personally thinking quite a lot and talking to different agencies. If we look at it in terms of impact on our relationship to work, do you already see significant impacts? And what would you foresee in the coming decade in terms of how AI impacts our relationship to work, and then what we might be able to do in response to make sure that we evolve or adapt ourselves in order to stay relevant, and that we're not sacrificing some portion of the population to technological advancement and making them somewhat obsolete as functioning members of society?

Neil: I'll say that throughout human history, we've always been worried about new technology replacing people. Even with the tractor, a lot of people felt that might be the end of farmers. But artificial intelligence, like all technology I think, is a tool. It's all about how we use it. We can use it for good. We can use it for bad. I don't think most people are just trying to replace people. I think they're trying to give them the tools so they can actually free up their time to do more complex, more value-added work.

And very much so, the future of work, the hot jobs 10 years from now, are being incubated right now. You think about 10 years ago, 2009: we didn't have Uber drivers, we didn't have social media marketers. That's definitely the opportunity. The downside is we know that some jobs will absolutely go away. You think about autonomous vehicles: we probably won't need taxi drivers, we probably won't need truck drivers. So what do we do with these people? There's probably some percentage of them that we could retrain for a new type of job. But if you're a 50-year-old taxi driver with a spouse and two kids, you can't really tell that person you've got to go back to school for four to six years and learn these new skills; that's not really going to fly.

And I think it's a real concern that we have to figure out what to do with these people. But more so, we need to act quickly. Because a lot of people that I talk with think some of these changes are 10-20 years out; they're not. Change is happening faster and faster. We think we have more time, and we don't. With self-driving vehicles, Singapore already has self-driving buses and taxis. So, we have a small window to help those that we can. And every day we delay, more and more people are going to fall into the bucket where we can't just retrain them. We have to figure out what to do and how we help these people. And there are ideas like universal basic income and other things.

But the one thing I think hopefully everyone can agree on is we cannot delay. It's not just that, wholesale, millions of people are going to be out of work. It's that we have this opportunity right now to help a lot of people get ready for that future. Are we actually taking advantage of that? Are companies trying to look at new skills and knowledge and retrain their workforce? Are we looking at trade schools?

Are we looking at colleges and universities, high schools, to get them ready for that? I can tell you that at UC Irvine, the Dean of the law school has been very much a forward thinker. She was thinking about legal tech, and maybe tools that law students should learn to make them more attractive to firms. When she saw some of the things that are out there, like Legal Nation, who's created an AI associate lawyer, she actually realized that fundamentally, what law students are going to do in their first four or five years is going to be radically different in like six years. So she's actually working to change the curriculum, actually working with law firms and organizations to define what that future of work will be.

So I think inaction is not an option. And if we're really concerned about this and want to try to help as many people as possible, we should all be working towards that goal of defining the future of work and getting people ready for it.

Erik: Now, that's a fascinating topic: law, accounting, and a lot of the areas that are most intellectually demanding are really the areas that are most under threat here, because they are based on the ability to memorize a lot of information, and that's something AI tends to be better at than humans. So in law, for example, there's a lot of unmet need maybe around advising, counseling, understanding the human requirements behind a case, that maybe we're not meeting so well today.

Is that the type of solution that you might be looking at in a situation such as this: a lot of the work that was done in the past, which is looking up relevant regulations and laws, will be done by the AI, but counseling individuals who are in a legal case will still be done by a person, and maybe before we were only devoting 10% of our time to this, but we can actually increase that to 30 or 40% of our time and give a much higher service level? What would be examples of areas in law where we can employ the millions of young lawyers that are entering the market?

Neil: People will say, well, legal research, and the management around the case, what's billable, what could be expensed. But even reading court documents, sharing court documents, going through discovery information, a lot of it is very similar case to case; it might be 60-80% similar. So you could have a machine basically take up that work and get you 68% of the way there.

I mentioned Legal Nation, which was started by three lawyers. They're doing some of this work, where they're able to take what normally takes an associate lawyer 6-10 hours to do, and do it in two minutes. The goal is not to get rid of associate lawyers. You still need them. You still need partners, that kind of stuff. But by freeing up their time from this, so they're doing more of the review and filling in some of those 20% gaps, it frees up their time to actually talk more with clients, to think more about case strategy, to think more about jury selection if necessary. So, more of the complex tasks, more of the things that are probably going to be more influential for a case; now you're freeing up their time to actually work on that.

And I know there are probably some people out there thinking, well, even then you don't need as many associate lawyers. I think it's now on firms to think, well, we've got to get more business. If we can do more work at higher quality and lower cost, you probably get more clients, get more efficiencies. And so there's actually more work than there was before for the lawyers to do.

Erik: Despite the fact that only 1% of Americans are farmers, we produce a lot of food, and it seems to work reasonably well. So hopefully, we can evolve here as well, though I think the change is going to happen a little bit faster here than it did in past industrial revolutions. Neil, I have a couple of closing questions here, which are a little bit lighter. But I think these are areas where you'll have some interesting answers for us. Before we go there, are there any big topics that we didn't cover today that you'd like to make sure we do cover?

Neil: I know we've been talking about a lot of AI and the power, the opportunity, and the fear, the concerns. I definitely want people to realize that AI is not just a tool for making money and commercialization. There's actually a lot that it can do for social good. One thing I did with the UN was help them start their AI initiative. There are 17 Sustainable Development Goals, like zero hunger, no poverty, access to healthcare, and so forth. And AI can very much be a tool to enable some of these goals to become reality.

And I'm not saying making money is bad. But I think there are a lot of opportunities for social enterprise and social entrepreneurship, where even on the AI side we can get into the mindset of thinking, I'm going to create a new venture, I'm going to make money, but are there any opportunities where, as I do this, I can also promote social good? And I think that's a really important mindset to get into. Because when it comes to AI in particular, we're really good at thinking about the bad stuff, what we call scary AI, or weaponization, or loss of jobs.

But we're not so good at thinking about the upside beyond making money. Are there other things we can actually do to help society? Like, can we actually use AI as a career coach to help people get better jobs? So I just want to call out that everything is a bit of a duality.

Erik: And there are probably some cases, coming back to this question of the value of data, where companies are sitting on data that could actually be quite useful for some of the 17 objectives from the UN, the global development goals. So it would be interesting to see whether there is data that's been underutilized, data that companies, if they were properly informed, would be open to sharing. Because that's somewhat different from sharing it with a company that's going to be a potential competitor; here, we're really talking about making use of data that has some commercial purpose but can also maybe be used to inform better governance. That would be an interesting area to explore.

So Neil, just a few wrap up questions here. Is there a young company that's kind of under the radar that's doing something particularly interesting that you'd like to put a spotlight on?

Neil: There's probably a few, but if I had to pick one, I would actually call out a company called Cyrano.io, that’s like Cyrano like [inaudible 01:03:36]. It was actually started by two guys, one who's a therapist, and one who's a neuro-linguist. And their goal was to try and help depressed and suicidal teens. So just as an aside, loneliness is actually the biggest illness in the world. About 40% of people suffer from some level of moderate to severe loneliness.

And so what they've actually done is they've created an AI engine that, through conversation, can actually assess the intent and commitment of a person, through word choice and those types of things. They're doing this to try and obviously help depressed and suicidal teens. What they realized, though, is that's probably a dangerous place to test their technology. And so they got into more innocuous areas like car sales, because they figured if they failed, the worst thing that happens is a car doesn't get sold. But it's actually been incredibly effective.

And what they realized is, as a whole, they'd actually built a communication tool for people, where not only do you gather the intent and commitment, but the AI gets a better read on empathy. So you could actually now talk to a person and use the language and words and arguments that are going to resonate. You may be a very fact-focused person who throws out a lot of facts, but they might be a very nurturing, emotional person who needs more reassurance. So they've effectively built a tool where, whether you're a parent talking to a child, or you're trying to help out a suicidal teen, or you're trying to sell a car, you now have a communication tool that makes you a more effective communicator, because it can coach you and help you understand the language and words and information to focus on that will most strongly resonate with that person.

Erik: Is there a technology that's not yet widely adopted but that you see as being potentially very disruptive? And by this, I mean more of a specific technology. Because if we think about AI, AI is a large concept with a lot of components underlying it. There are a lot of different technologies that might somehow be useful. Is there anything that you see as being potentially very impactful in the coming 5 or 10 years? It could be general; it doesn't have to be related to AI in particular.

Neil: I'll actually go in a different direction on this one: I think neuromorphic chips could be a huge game changer for us. With neuromorphic chips, the machine learning, the AI intelligence, is basically built into the hardware, so you don't actually need an internet connection to use it. So you don't have the latency; you get a quick response time.

But one of the reasons I'm actually excited about this is that for the past 30 years, we put a lot of focus on software, and not so much on hardware, and we've actually kind of hit the limit. We actually have hardware constraints now on what we're able to do. I hear a lot about quantum computers and 5G and even 6G telecommunications. We always talk about the holy grail of having your own personal digital assistant. Imagine that you don't really need an internet connection to be able to use that; you literally have it with you, whether it's on your phone, or integrated into your body somehow, or you can access it with your mind. That will become reality because of neuromorphic chips.

And I will tell you, DARPA has spent a lot of money, I think probably close to half a trillion dollars, to try to make that happen. And you have a lot of companies, big and small, like Intel and IBM and HP, as well as several smaller startups like [inaudible 07:15], that have made great strides. They are only, I think, at stage three or stage four of the DARPA challenge to make this a reality. And so, you think about all the sci-fi, the movies, the books, or if you watch Black Mirror, about having a little AI assistant that's your personal buddy and knows everything; neuromorphic chips will be a big step in trying to make that a reality.

Erik: Do you have a viewpoint on the timeline for when these might be on the market?

Neil: Oh, man, Erik, I wish I knew, I really wish I did. If I had to guess, I'd say we're probably 12-15 years out. But one thing I've learned is this technology develops much faster than we realize. So, maybe it's 12-15 years, maybe it's 8. I don't know. But let's stay tuned and see what happens.

Erik: Well, Neil, really been a pleasure to talk to you today. I really do appreciate your time. What would be the best way for people to get in touch with you if they wanted to have a conversation?

Neil: There are a couple of different ways. People are very welcome to go to my website, neilsahota.com. There's contact information there to reach out to me, or you can connect with me on LinkedIn or Twitter. My Twitter handle is at Neil Sahota. I'm constantly posting, sharing information, checking my messages. So if you want to chat or want to ask some questions about some ideas, I'm always happy to help.

Erik: Awesome. We'll put those in the show notes. Neil, thanks so much.

Neil: My pleasure. Thanks for having me on, Erik.

Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IoTONEHQ and to check out our database of case studies on IoTone.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.
