
The Future of Autonomous Systems

Very often, the terms “automated” and “autonomous” are used interchangeably, and while they’re similar, they’re not the same. Automated systems are built to perform specific tasks within narrow, predefined parameters, while autonomous systems use ML and AI to adapt and learn within dynamic environments. Autonomous systems are more intelligent and can even be likened to the human brain in their decision-making and in their ability to work seamlessly with humans as well as other machines. These sophisticated systems can address many modern-day problems: they can streamline supply chain management, speed up production, create new job opportunities, and even be trained to assist in healthcare and in personal use by people with disabilities.

While enjoying the benefits of autonomous systems across various industries, humanity must also grapple with the ethics of developing these systems, from legal questions and operational safety to transparency. Although there’s still work to be done in developing these systems, the future of autonomous systems over the next decade or two shows a lot of promise.

In this episode of “The Future Of,” Jeff is joined by Gurdeep Pall, Corporate Vice President, Head of Product Incubations at Microsoft, and Steve Yin, Principal Software Engineer at Fresh Consulting. They talk about AI and autonomous systems, the use of robots in our daily lives, the training of systems to mimic the human brain, and the retraining of humans to adapt to these systems, to name a few topics.

Steve Yin: I like to think that in the future we will face the reality that either we have to work with the machines––we are already doing this nowadays––or we have to live with the machines, because they will be everywhere.

Jeff Dance: Welcome to The Future Of, a podcast by Fresh Consulting, where we discuss and learn about the future of different industries, markets, and technology verticals. Together, we’ll chat with leaders and experts in the field and discuss how we can shape the future human experience. I’m your host, Jeff Dance.

***

In today’s episode, we have two interviews with experts on autonomous systems. For the first half, we’ll hear from Gurdeep Pall, Corporate Vice President, Head of Product Incubations at Microsoft. We’ll discuss the evolution of autonomous systems, today’s expansion, and a future where autonomous systems really help us solve global problems. For the second half, we’ll hear from Steve Yin, Principal Software Engineer at Fresh Consulting. In this interview, we’ll talk about the application of autonomous systems to our daily lives, some of the ethics, and some insights on the simulation and training of autonomous systems.

Welcome, Gurdeep. It’s a pleasure to have you with me on the episode focused on the future of autonomous systems.

Gurdeep Pall: Thank you very much, Jeff. Thanks for having me on the show.

Jeff: Awesome. I’m really excited to talk about this topic. I’ve listened to you talk about the future of AI at Microsoft’s Executive Briefing Center, and I was really blown away by your vision, and I continue to find your name––if I’m just searching for autonomous systems and the future, you’re there talking to other leaders on the topic. Really grateful to have you here with us. Can you spend a minute giving the listeners a bit more about your background?

Gurdeep: Great. Absolutely. Jeff, I’m a longtime Microsoft employee. I joined in 1990. I’m the kid who got to go back into the candy store about three times at least. The ’90s, for me, was really about working on operating systems. I was part of the Windows NT founding team. We shipped Windows NT 3.1. Then I worked on the Windows operating systems all the way to Windows XP. In particular, I was focused mostly on networking areas, but I also contributed a little bit towards the Core OS as well.

After that, I moved on to work on real-time communications and started that business for Microsoft, which today is Teams, but went through Lync and Skype for Business. During that time, I also ran Skype after Microsoft’s acquisition of Skype a few years later. Then my third chapter, if you will, has really been AI. I got that in two parts. One was before deep learning had really happened, so we were still in the world of machine learning. Still data-driven, but machine learning. Then, for the last six years, I’ve been working on AI pretty much with deep learning as a core engine in a variety of different ways. That’s my background.

My specific focus is to look at emerging technologies, emergent AI, and to see how we can create new categories for the company. Autonomous systems is one of the categories that we have created and we’re doing more and more right now.

Jeff: Thanks for that background. Yes, I definitely see Microsoft as a world leader in so many ways. It’s cool that you’ve seen so much of that evolution. You’ve been there for the majority of Microsoft’s life, so you’ve been able to see all that growth and all that evolution, and now you’re working on what’s coming tomorrow. It’s pretty influential, I think, to be able to shape where the world is going. I see you as a key leader in that aspect.

I also recognize you’re on several boards of other companies that are working on some of these things for the future, like quantum computing. I think that’s impressive as well because it seems like there’s a convergence of so much technology that’s coming together to shape the future right now. Obviously, you have a huge, deep background in tech. Being a world leader at a world-leading company in so many of those aspects, and being at the early stages of some of these inventions, it’s impressive to think about the wisdom that you have. One curious question I have is, what do you do for fun? You’ve been so deep in tech. What does Gurdeep do for fun?

Gurdeep: This is the scary part: I really enjoy what’s happening in tech a lot, so I do read a lot. That has really been one of the joys of especially the last seven, eight years, where I’ve been able to focus on new things and not just run very large businesses and so on––I have time to learn. Outside of that, if it’s winter, I love to ski. I have two dogs. I like to do things with my family, travel. Those kinds of things.

Jeff: Now that we have a bit of the formality out of the way on the topic, I want to talk a little bit about the current landscape. Just give people a little bit of, “Hey, here’s where we’re at today.” Then I want to jump to the future and then get some of your advice at the end. If we start with just autonomous systems, it’s a big term. It’s obviously connected to AI. Can you unpack that for us a little bit, maybe in the context of how autonomous systems are different than the automated systems or software of today?

Gurdeep: Absolutely. Autonomous systems are really systems that can operate in the real world. They can deal with all the variations in the real world. They can make decisions, they can plan, and they can operate safely, at least to the expectations that we have. The biggest distinction between autonomous systems and automated systems is that automated systems are designed to do a specific task.

When everything is lined up a particular way, they can perform that task very well. For example, if you are in the assembly line of cars, you have a robotic arm. The robotic arm has one task. It has to screw in the door handles when the car comes to that particular stage. Basically, the way they do that is with very high precision. They’ll have lasers, which will go through these two holes, which indicate that things are now aligned, and then the robotic hand will just go into place with high precision, very quickly perform its task, and move on.

Now, in that model, it works great, except that it takes a long time to set that system up in place, so if you wanted to, let’s say, have three different cars going through the same assembly line, it is pretty much impossible. For each car, you’d have to do so much distinct new work. Then, if anything goes wrong, anything, this thing is not going to work and the whole line stops. So fragile, expensive to set up, and then very expensive to repair and get back on track.

Jeff: Great. You mentioned this notion about the human. You made a connection to humans there. I wanted to double-click into that just a little bit. How are autonomous systems tied more closely to humans and the human brain than what we just described?

Gurdeep: The thing about autonomous systems is that we can all agree they’ve been underdelivering on the expectation and the fantasy of autonomous systems today. You look at science fiction and you look at flying cars and the Jetsons. Frankly, we are all ready for that world. It’s just not here yet. Then you ask the question, why is it not here? That’s where I think the world has come to the conclusion that using these classical approaches in autonomous systems is not going to get us there. If you write deterministic code, it’s not going to happen. You need to now start using new kinds of methods.

What better inspiration for these new kinds of methods could there be than the human brain? The human brain is just incredible. We are just scratching the surface of how much we understand it, but we are starting to learn a lot more and we are heavily inspired. For example, even before you get into the mechanics of the human brain and understand how neurons wire to each other and how they optimize that path and so on, you can look at and study the human brain from the outside and say, “Well, how do you teach children? How do you teach children to do something?”

Well, it turns out we teach children and they seem to learn tasks pretty quickly and very autonomously. They don’t need to align by a micrometer. They can figure it out: [if] you throw the ball three times, the kid is throwing not only that ball fine, they’ll probably pick up the next ball with two hands and still be able to throw it, even though they’ve never seen a ball picked up with two hands before. Our ability to generalize, our ability to learn things step by step, our ability to really develop these deep notions of common sense––of gravity––and pretty soon figure out that when you throw anything up, it comes down. To be able to learn those concepts: that is what we are now tapping into.

Jeff: One thought I had related to that: if it’s a child that’s learning, well, we’ve been working on autonomous systems for a little while, and in the course of time there’s definitely been an uptick recently. So if that child is advancing, there’s an element of nature versus nurture. We’re actually trying to teach the child. There’s the concept of machine teaching, machine learning. I was curious, what “age” would you give the autonomous systems child? Now, I know there are lots of aspects of autonomous systems, but generally, are we at age 12 or age 5? An abstract question, but what would you respond? What are your thoughts?

Gurdeep: I would say it’s an unfair comparison in the sense that this is the evolutionary side of the human brain. The child really comes into the world primed with “priors,” as we call them in AI. It’s like a machine that is optimized to operate in this real world, and that machine has been evolving for at least 500 million years, is what we understand now––since the Cambrian era. Maybe a slightly different way to think of it is this: if it started 500 million years ago in biological forms that don’t look anything like humans, and today we are humans with these very evolved brains, where are we on that journey, in that timeline?

I would say we are starting to see more complex organization. Definitely not where the human brain is, but we are starting to see more complex organization. For example, if you imagine a neural net as being literally a cluster of neurons performing a set of tasks, the human brain at this point has specialized parts––these clusters of neurons, if you will––which actually do many things, so it’s able to reason across those different things and bring them to bear.

I think we have not gotten to those next layers yet. That’s why one of the biggest things you’ll hear about AI right now is this notion of common sense. We have such deep language models now, but we still don’t seem to have this notion of common sense, where, given a new data point that we’ve never trained on, the system would be able to make as much sense of it as a human who encounters a totally new situation is able to do.

Jeff: Awesome. Again, I see you as a world leader in autonomous systems. What are you seeing out there right now where you see things advancing? We were working together with your team years ago, and so much has happened since then. Where do you see autonomous systems starting to advance today? That could be industries or use cases, but we’re obviously at the beginning of a journey that you’re helping shape.

Gurdeep: Yes. The big thing I’m starting to see is acceptance that this AI-based approach is really the path out. There’s no other path. Everyone has tried the different paths and so on, but this is the path and everybody needs to line up on that. We are starting to see the acceptance of that across the board, whether it be arm robots, wheeled robots, or winged robots––we’re starting to see that. That is great to see.

Then you see people at different stages of adoption of this vision. Of course, we’re saying, “Well, we can make much more robust sensors if we put a lot of AI around the sensors.” You’re starting to see the vision side and other kinds of sensors. A few leaders are now starting to go deeper than that. We call it the perception-action loop where that entire loop can largely be done inside AI. That has not been the case.

Even if you look at a lot of the advanced self-driving cars and so on, the way they do it is they’re using AI, let’s say for the cameras and LiDAR and radar, then they immediately fall back to code and then say, “Okay, great. If I see this, then I’m going to go do this.” Then they go into the actuation logic, which also may or may not have much AI in it and so on. If you can take that entire thing end-to-end and make it happen, I think that is a tremendous opportunity. We’re starting to see leaders starting to do that.

Now, are they doing it for the entire vehicle, for all the tasks? No, but they’re looking at, let’s say, you need to land the eVTOL vehicle and you see the target, which is the landing pad. From a pretty big height, like 200 feet, you can hit a button and this thing is going to land by itself. That entire thing is being done with AI, because at that point we’ve taken that section of the task and we can do it really, really well.

We’ve seen that you not only see success in a lot of the trained use cases, but because we are using methods which have a level of generalizability, even in all kinds of unseen situations––where the landing pad is partly occluded because of fog or rain or snow––this thing will still land. If it’s dense fog, a human cannot operate in it, for example. We’re starting to see that.
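
(Editor’s note: as a concrete picture of the perception-action loop described above, here is a minimal Python sketch. The policy, the perception step, and the observation sizes are illustrative assumptions, not Microsoft’s implementation; a real system would put trained deep networks behind each function.)

```python
import numpy as np

# Minimal perception-action loop (hypothetical stand-ins throughout).
# In the "classical" pattern, perception may be learned but decisions fall back
# to hand-written rules; here the whole loop runs through learned components.

class LearnedPolicy:
    """Stand-in for a trained network mapping observations to actions."""
    def __init__(self, obs_dim: int, act_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(obs_dim, act_dim))

    def act(self, observation: np.ndarray) -> np.ndarray:
        # A real system would run a deep network here.
        return np.tanh(observation @ self.w)

def perceive(raw_sensors: np.ndarray) -> np.ndarray:
    """Stand-in for learned perception (e.g. camera/LiDAR/radar fusion)."""
    return (raw_sensors - raw_sensors.mean()) / (raw_sensors.std() + 1e-8)

def actuate(action: np.ndarray) -> None:
    """Stand-in for sending commands to the actuators."""
    print("actuator command:", np.round(action, 3))

policy = LearnedPolicy(obs_dim=8, act_dim=2)
for step in range(3):                                  # runs continuously in practice
    raw = np.random.default_rng(step).normal(size=8)   # fake sensor readings
    observation = perceive(raw)                        # learned perception
    action = policy.act(observation)                   # learned decision/planning
    actuate(action)                                    # actuation closes the loop
```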

Jeff: That’s a good example. Also, with that new technology––taxis we can take to and from work––there’s an amount of technology they would hope would make that safe. Safety becomes a critical component of that. That’s where hearing that you guys are using autonomous systems makes sense. Also, obviously, cars. What are some other areas that you see expanding with autonomous systems in the near term?

Gurdeep: We’re starting to see a really horizontal exploration and adoption. My favorite example is Pepsi. You may have heard about PepsiCo Cheetos. This one, everybody likes Cheetos or knows about Cheetos and has had orange fingers at some point or the other. We have been working with PepsiCo to take the entire production of Cheetos and make it autonomous.

Now, PepsiCo is a very, very well-run organization. They have tuned and optimized their systems really, really well, but they’re hitting a plateau. Whether it’s the waste––because they do a lot of quality control, if that’s an issue––or the reliance on experts: if the experts who operate those machines are sometimes not available, what happens to productivity? If there are varying levels of expertise between the operators, then some will have higher loss than others. They’ve hit this plateau, which they just couldn’t get past, and then along come autonomous systems and AI.

We’ve taken that entire process and we are able to control all the different controllables in that entire manufacturing line to deliver at a level that they’ve not been able to achieve before. In fact, they were so excited about it that their global CEO actually tweeted about it about a year ago. They’re moving globally to using the brains that are built with that autonomous systems chain. That just tells you that there is absolutely no limit. In fact, I would go as far as saying that any process or any system that has so many parameters that we’ve not been able to control them properly can be done better now with AI and with autonomous systems.

***

Jeff: Let’s jump to the future. If we imagine what autonomous systems look like 20 years from now, what are some of the biggest problems you see? You said they could apply everywhere, but as we think about the world and where we’re going, there’s been a lot of concern recently. How do you envision autonomous systems solving some of these big problems?

Gurdeep: I expect that autonomous systems will run large parts of the world in the next 20 years. I’m convinced of that. I think we as a society, as a generation, have seen COVID and we’ve seen what devastation it had on global production and global supply and so on. That’s one big problem. The other big problem is that climate change is very quickly creating very novel problems for humanity. I expect autonomous systems to really, really play an incredible role.

I’ll give you a couple of very tangible examples. We’ve seen the fires recently. In fact, right now, in New Mexico, there are these fires going on, and in California. Some of the fires we know were started by power lines. Some were, of course, human-caused. Inspecting power lines across the state of California is an intractable task for the power companies. It’s just not feasible. If you had drones which could do that for you––and not just once; they could do that, like, every month––you could have drones inspecting every inch of power line. That’s an example of how you could really impact safety.

The other is firefighting. We believe firefighting is a task that a swarm of autonomous systems working in a collaborative manner can actually do really, really well without any risk to human life and so on. I think that there is a whole––it’s not just the efficiency of it; I think there are the new parameters that humans are dealing with. Then there are some more things that we’ve known for a while.

If you look at Japan, they have an aging workforce problem. For them, it is existential. They have to rely on autonomous capability. Otherwise, they cannot keep their offices running, they cannot keep their factories running and so on. I’m not a believer in this concept that the world––nature creates the problem and the solution. I think that in some ways that nature is putting both these things in front of us. “Hey, the world is changing, and it’s getting crazy, but you know what? You have the antidote for that.” I believe autonomous systems is it.

Jeff: Where else do you see autonomous systems solving problems? You hit on some of the climate change and stuff like that and you briefly mentioned some of the supply chain issues we’ve been having. Any other core topics come to mind as far as the future and how you see this impacting the world?

Gurdeep: Yes. Absolutely. Supply chain already––we’re starting to see movement on that front. I think robots in particular: back office, working the factory floor. Those things I expect to all be there. I think maybe the hardest thing that we will eventually get to is when you have these autonomous systems literally working around us, in homes, in schools. I think that requires just a level of polish and completeness, because we’ve seen, even in the early days of computing, that until you solve those things, you will never be able to penetrate the mass population, and you should not.

I think that will take a lot of work, and that’s where a lot of the human factors come in and safety goes to a whole different level. No one’s going to wear a hard hat, right? In their homes and so on. That, I believe, is going to be “the Jetsons” moment when autonomous systems have really penetrated our lives.

Jeff: There are so many applications for bringing in autonomous systems, but as we think about our personal lives, you mentioned this notion of having robots around us. Can you speak to that a little bit more about how autonomous systems will play a role in robots?

Gurdeep: Absolutely. I believe that the areas that we should focus on most are where the biggest need is. If you take that lens and apply it to consumer scenarios, I think assisted living is a huge place where robots can have a tremendous impact. In fact, one of the greatest privileges of my work is to work with some incredible people. There’s a scientist and roboticist called Dr. Katsu Ikeuchi who works with me, who is focused on assisted living. It is really amazing because the approach that he and his team are taking is not that you need a robot, you go buy a robot, the robot comes home, and the robot knows what to do and everything––because, at some level, that is super hard anyway.

It’s that you can so easily teach this robot end-to-end tasks in your own way, the way you like them. It’s like when you get somebody to assist you––a human––you always tell them, “Hey, I like doing it this way. I like to sit this way and put my table here and put my coffee here because I can reach it easily.” They’re taking exactly that approach for teaching these kinds of systems. Assisted living, I believe, is a real high-need scenario. There are some other scenarios like safety in the home and security. I think those are also super interesting.

Jeff: It’s good to hear that. It seems like if we can get that right and do it gracefully, we’re setting a high bar. The human nature of that, and how we care for and support our elderly––there’s some virtue there that I think will have a waterfall effect. That’s awesome. I have just two more questions, and I’m grateful for all the insights so far. One is just around, again, this concept of technology changing faster than humans.

We have many things coming together right now. We saw it in the last decade: things changed really quickly and we didn’t quite realize it, and yet it still seems like we’re just at the beginning of things really coming together and being able to make fast change. What advice do you have for people who are caught in the middle, who haven’t really been prepared for some of the changes of this technology innovation?

Gurdeep: I do believe that with autonomous systems there will be a level of reskilling, and people will end up focusing on different kinds of tasks. They will focus on more executive tasks. They will focus on the implications of these kinds of systems and on solving new kinds of problems––like when the whole bioethics field exploded, when suddenly there were test-tube life forms or people were creating genetic replicas and so on. In the same way, I think with autonomous systems there are new kinds of jobs that are going to emerge. I think we need to stay open to that.

The thing about progress and technology is that it’s got its own journey. In some cases, we can choose not to adopt something just because it is possible––we decide intentionally not to adopt it. For example, there have been cases where there’s surveillance, or there are databases where every face in the world is there and you can look it up. Then we say that’s maybe not something we accept as a society, but it is there. The fact that you can clone things is there, but are we cloning humans all over the place? No. We’ve decided we will hold back on this.

To some extent, we’ll hold things back, but other things we will not be able to hold back, or at least not for a long period of time. You will not be able to hold back the fact that a business owner who has a factory is going to make some parts of that factory autonomous. You can’t hold that back. The economy and economics and everything is going to drive some of those factors. In which case, reskilling––leaning into that and being much more proactive about it as a society––is the answer.

I’ll leave you with this anecdote: after the first Industrial Revolution, suddenly there were these mechanical machines, steam engines, and the world was going to change. The big factories were set up, but the economy actually went sideways for about 40 years. That is called the Engels’ Pause. Engels was this philosopher, this thinker. In fact, Karl Marx was influenced a lot by him. He writes about that. That happened because the workforce didn’t exist. We had an agrarian society, and now suddenly people were supposed to work with these machines. The old Charlie Chaplin image on screen––you hadn’t even reskilled people to do that.

We can’t afford to do that. Again, we should learn from these past things and apply them to what the next 15, 20 years are going to be and how things are going to change.

Jeff: Thank you for your wisdom, and I’m excited for how you continue to shape the future. I think 32 years at Microsoft––that’s amazing. How are you going to keep shaping the next 32 years? It seems like that’s the journey taking us into 2050, which a lot of people are talking about, and I’m really excited to watch this and think about how we can design and build it with intent, with people like you who care about the technology but also about the human experience.

Gurdeep: And people like you, Jeff.

Jeff: Thank you. Good to have you on the show. Really appreciate the time in your busy schedule. Again, grateful.

Gurdeep: Great to be here. Great questions, and yes, an exciting future.

***

Jeff: Steve, grateful to have you here with us as a leader in automated and autonomous systems with a deep history in hardware and software and algorithms. Can you give the audience a bit more about your background?

Steve: Yes. It’s good to be here with you. I started with electrical engineering training, all the way from college to graduate school, and got my PhD in ECE from the University of Illinois at Urbana–Champaign. It was a good process in that I got very solid training on the electrical engineering front, and then later my whole career path has been dealing with signal processing: acquiring signals, analyzing those signals, and using the information in the features we extract to control the system and get the job done. It’s a closed-loop control process with certain autonomous [elements].

Jeff: As I understand it, you have two PhDs, you’ve done 20 external publications, and you’re an inventor of 15 patents. Is that accurate?

Steve: Only one PhD, but I did have postdoc experience with Harvard Medical School.

Jeff: Postdoc?

Steve: Yes.

Jeff: Okay. It sounds like you have a deep set of experience on the educational side, but then on the practical side you’ve actually run a business and also worked for some world-leading companies. I learned a little bit about your experience at Philips. Can you speak to some of your experience there with algorithms and autonomous systems and what you guys were working on?

Steve: Yes. The Philips Lab is actually a very good place to do that kind of fancy research. We were targeting technology trends about 5 to 10 years out in the lab, trying to think about what’s coming next, especially for any technologies related to Philips products––the medical products. There I encountered one interesting project, which was to use ultrasound imaging to guide a high-intensity ultrasound wave to cause coagulation of internal tissue. It’s a minimally invasive type of surgery, but it has to be done without human intervention.

Consider that this was back in 2005 to 2006. That was quite a bold initiative to start this kind of project. It was sponsored by DARPA. Of course, they are always looking for some wild, off-the-wall things. The interesting part is that the algorithm really plays a key role in the whole process. You first need to look at the image, and from the image you extract the bleeding site inside the tissue, then feed that information to the energy delivery side, using the ultrasound phased array to deliver energy into the tissue. Raise the temperature locally, not in other places, and then cause the tissue coagulation. It should all be done autonomously.

We achieved a certain level of success by delivering a testbed system. It’s a prototype system with which we can stop the bleeding within 90 seconds without harming other tissues. That was a good project. It’s the kind of process where you can look forward and think about what can be improved down the road: what do we need to do to further develop this prototype into a viable product for other similar cases?
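
(Editor’s note: as a rough, hypothetical illustration of the closed loop Steve describes––image in, bleeding site out, focused energy back in––here is a minimal Python sketch. The detection, focusing, and heating models are toy stand-ins, not the algorithms used in the DARPA-sponsored system.)

```python
import numpy as np

# Toy closed loop: locate the "bleeding site" in a synthetic ultrasound frame,
# focus energy there, and stop once the local temperature target is reached
# or the 90-second budget runs out. All models here are illustrative stand-ins.

rng = np.random.default_rng(42)

def acquire_image(true_site, shape=(64, 64)):
    """Fake ultrasound frame: bright blob at the bleeding site plus speckle noise."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    blob = np.exp(-((yy - true_site[0]) ** 2 + (xx - true_site[1]) ** 2) / 30.0)
    return blob + 0.1 * rng.random(shape)

def detect_bleeding_site(image):
    """Stand-in detector: pick the brightest pixel as the target."""
    return tuple(int(v) for v in np.unravel_index(np.argmax(image), image.shape))

def deliver_energy(temperature_map, target, joules=25.0):
    """Stand-in for phased-array focusing: heat only near the target."""
    yy, xx = np.mgrid[0:temperature_map.shape[0], 0:temperature_map.shape[1]]
    focus = np.exp(-((yy - target[0]) ** 2 + (xx - target[1]) ** 2) / 8.0)
    return temperature_map + 0.1 * joules * focus

true_site = (40, 22)                    # ground-truth bleeding location (unknown to the loop)
temperature = np.full((64, 64), 37.0)   # body temperature, degrees C
coagulation_temp = 65.0                 # assumed threshold for coagulation

for elapsed in range(0, 90, 5):         # one control step every 5 seconds
    target = detect_bleeding_site(acquire_image(true_site))    # perception
    temperature = deliver_energy(temperature, target)          # action
    if temperature[true_site] >= coagulation_temp:
        print(f"coagulation reached at t={elapsed + 5}s, target={target}")
        break
else:
    print("time budget exhausted without reaching coagulation temperature")
```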

Jeff: That’s great to hear. Now, at Fresh you’re involved in robotics, and we’re trying to pave the way for robots to connect to robots, and robots to connect to humans. That’s going to involve autonomous systems. Can you tell us more about some of the work you’re doing and how that can shape the future?

Steve: Yes. This is an even more interesting part––I think this is going to be version 3.0 of my career here. We are diving into scenarios where we are trying to lower the deployment cost of robotic systems by implementing not only the robot solution but also other data analytics solutions to facilitate the cooperation between robots, and even between robots and humans. This is a huge undertaking. If it’s successful, I think it will open another big chapter for everything in the automation space.

Jeff: One of the things you had mentioned was how you build those systems with intent. I think your experience building algorithms and software for the classroom and thinking about students––the interaction there, I think, is really applicable to this notion of building autonomous systems with intent. One of the things I heard you say was, “How do you keep the humans part of the loop when they need to be?” I’m interested in more of your thoughts around that. When does it make sense to make something truly autonomous, and at what point do you build in the human connection and keep humans in the loop?

Steve: It’s an interesting question. Actually, thinking about humans in these use cases, first of all, they can be the object to be interacted with. Second of all, they can be some type of operator, controller, or administrator in the system. These are two different roles. For the first type, from an engineering standpoint, we are treating the human as a typical object in the environment. We’re collecting its status and monitoring its behaviors and trying to understand what it does. What he or she does, not it.

It’s a typical engineering solution for that case, but the human in the operator or administrator role definitely requires a lot more work. You have to present the data to that operator. The operator needs to use human wisdom and business logic to make a decision, then participate in the control part of the whole process.

Jeff: For those that are newer to this topic, there’s a lot of terms that are associated with autonomous systems. There’s machine teaching, there’s simulated environments, there’s digital twin, there’s reinforcement learning. Can you walk through some of those terms and explain in plain English what some of those mean?

Steve: Yes, sure. Definitely, the digital twin is a buzzword right now. It’s not a very new concept for engineers because we have always used simulation and visualization to help us understand the process. I think what’s even better is that the digital twin offers us an opportunity to simulate edge cases which cannot be duplicated in daily usage. That lets you test the engineering limits of the system, or try to identify any potential mechanism that can improve the system.

We often use simulations to do a lot of work in our daily projects. I think that’s where digital twin technology has great value in all the robotics and automation work. You cannot do everything in the real lab. That’s going to be very costly and time-consuming. Yes, it’s a good way to use the technology for our work. Talking about the machine learning part, especially reinforcement learning and deep learning and all those technologies, we have seen quite good adoption of those technologies for solving our problems. Sometimes they show much better potential value compared with conventional methodologies.

This is the part I’m especially interested in because all my previous college training was on conventional methodologies. I still remember that my thesis advisor, Professor Bill O’Brien, once told us that you should graduate from school within five years because by the time you graduate, your first-year education is going to be outdated. Yes, I think that’s the case: everything evolves at a much faster pace nowadays, so we have to be prepared to adapt to that change.
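
(Editor’s note: to make the digital-twin idea concrete, here is a small, hypothetical Python sketch of the kind of edge-case sweep Steve describes: a toy braking model stands in for a much richer twin and is swept over combinations that would be costly to stage in a real lab. The physics and the 1.5 m safety budget are assumptions for illustration; the same twin could also serve as the environment a reinforcement learning agent trains against before touching hardware.)

```python
import itertools
import numpy as np

# Hypothetical "digital twin" of a mobile robot's braking behaviour, used to
# sweep edge cases (payload, slope, floor friction) instead of staging them physically.

def stopping_distance(speed_mps, payload_kg, slope_deg, friction):
    """Toy physics model standing in for a much richer simulator."""
    g = 9.81
    decel = friction * g * np.cos(np.radians(slope_deg)) - g * np.sin(np.radians(slope_deg))
    decel *= 50.0 / (50.0 + payload_kg)          # heavier payload -> weaker braking (toy assumption)
    return np.inf if decel <= 0 else speed_mps ** 2 / (2 * decel)

speeds    = [0.5, 1.0, 2.0]    # m/s
payloads  = [0, 25, 50]        # kg
slopes    = [0, 5, 10]         # degrees
frictions = [0.2, 0.6]         # wet vs dry floor

for v, p, a, mu in itertools.product(speeds, payloads, slopes, frictions):
    d = stopping_distance(v, p, a, mu)
    flag = "FAIL" if d > 1.5 else "ok"           # 1.5 m safety budget (assumed requirement)
    print(f"v={v} m/s payload={p} kg slope={a} deg mu={mu}: stops in {d:.2f} m [{flag}]")
```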

Jeff: This notion of mapping the real world so that you can simulate it and train in it before a system goes out into the real world is a fascinating concept––we’re trying to mimic the real world so that we can train in it and teach the system before we deploy it. Tell us more about that. As we think about autonomous systems, how are people using these simulated environments?

Steve: Yes. Well, simulation definitely helps a lot in that case, especially when we talk about mapping the world. Let’s think about that as the first step when people interact with the world: the first thing we do is perceive what’s surrounding us. Once we get that information in a digital form, it becomes the input to our system block. In the system block, we can then apply all kinds of methodologies to develop the algorithms, test feasibility, validate them, and also do some parametric studies. Everything can be done there.

One good example is that Boeing has already moved its entire design process into a digital pipeline. That’s a great way for a big enterprise to use the technology, to use simulation to speed up their design and lower the cost of R&D. In our case, I will say, for the perception side, I will use SLAM as a simple example. SLAM means Simultaneous Localization and Mapping of your surroundings. It’s basically using a camera to shoot around your surroundings. The camera is smart enough to take note of each pose, each orientation, of its shots. Then we stitch together all those images, and we can reconstruct the 3D spatial information around the camera.

This is going to be very essential for any mobile robot or any automation system. You need to know the surroundings. That’s the case. We are using the technology and the simulation to get the job done.
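
(Editor’s note: as a minimal sketch of the SLAM idea Steve outlines––estimate the camera’s pose between shots, then reconstruct 3D structure––here is a hypothetical two-view example using OpenCV. The file names and intrinsic matrix are placeholders, and a full SLAM system adds tracking over many frames, mapping, and loop closure.)

```python
import cv2
import numpy as np

# Two-view sketch: match features between overlapping frames, recover the
# relative camera pose, and triangulate a sparse 3D point cloud.
# "frame_000.png"/"frame_001.png" and K are placeholder assumptions.

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two shots.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the camera motion (pose change) between the two frames.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the matched points into a sparse 3D reconstruction.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print("relative rotation:\n", R, "\nrecovered", len(pts3d), "3D points")
```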

Jeff: What are you excited about from a perception perspective because we know sensors are a big part of the equation?

Steve: Yes. Well, on the perception side, there’s a huge community working on that. I think on the research front, there are a lot of new ideas popping up every day. If you go to those conferences, you will see quite amazing results happening every day, mostly from young students. I would say, on the implementation side, we have seen a turning point where, by down-selecting and screening all the previous achievements from the research community, we can pick some useful ones for our implementations. Those can be deployed onto edge devices, which can be plugged into a very energy-efficient mobile chassis for use in the automation process.

This is a big step because previously, when we talked about those models, especially computer vision models using deep learning and everything else, it always felt like the cost-effectiveness was not that great when you tried to use those technologies. They were useful, but they were not very valuable in terms of implementation. Nowadays, with computing power boosting up, with all those very nice, lightweight algorithm models becoming available, and with the hardware––the compute and camera hardware––added together, they make the implementation feasible.

I view that as a product-ready stage coming up, because five years ago we would say it’s technology-ready, but nowadays I see the product-ready moment coming. Then if we talk about the next five years, I would say the commercially available, commercially ready state will be there, with our effort adding to everybody’s effort.
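
(Editor’s note: as a hedged illustration of the edge-deployment step Steve describes, here is a minimal sketch of running a lightweight vision model on an edge device with ONNX Runtime. The model file, input size, and synthetic frame are placeholders, not any specific product’s pipeline.)

```python
import numpy as np
import onnxruntime as ort

# Minimal edge-inference sketch: load a small, pre-exported classifier and run it
# on one frame. "model.onnx" and the 224x224 input shape are placeholder assumptions.

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def preprocess(frame_hwc_uint8: np.ndarray) -> np.ndarray:
    """Scale pixels to [0, 1], move channels first, add a batch dimension (NCHW)."""
    x = frame_hwc_uint8.astype(np.float32) / 255.0
    return x.transpose(2, 0, 1)[np.newaxis, ...]

# A fake camera frame stands in for a real capture from the robot's camera.
frame = np.random.default_rng(0).integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

logits = session.run(None, {input_name: preprocess(frame)})[0]
print("predicted class index:", int(np.argmax(logits)))
```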

***

Jeff: You hinted at the future a little. Let’s talk a little bit more about the future. Here we are today. You’ve been deep in this space for probably the last 30 years, from school to your experience in the world of the algorithms, software, and hardware behind autonomous systems. As we look to the future, 20 years from now, what do you see autonomous systems doing?

Steve: I believe autonomous systems will be ubiquitous. They will be like normal everyday items in everybody’s life, not only on the factory floor but also in some consumer use cases. Think about how in China some companies are already deploying robot taxis on the open road. In the US, we have a couple of companies doing this too. This is the beginning. I think with all those technologies penetrating our daily life, sooner or later we will see a lifestyle change for everybody. This is what’s happening.

Jeff: The thought is that we’re building things that are designed for us not to control, but we very much still want to control and have intent in autonomous systems. Elaborate a bit more on that. What are your thoughts?

Steve: Well, I wouldn’t use the word control per se because, in the end, you cannot control anything. I don’t believe we can control a robot in all aspects as the technology evolves. What we can do is find a way to live in harmony with those devices and robots. Robots need to respect human beings. We also need to respect the robots, because there’s quite a lot of debate, whether in science fiction circles or in the public space, about whether we should set up some fundamental rules for robots. First of all, robots should not hurt a human being, right?

The same thinking applies if people are using AI or perception and computer vision technologies to try to control people’s lives. In some places, they use surveillance cameras in a very abusive way, trying to probe your privacy or ensure that you behave well. That’s not the right way to use AI. Also, in educational use, some teachers believe that AI is interfering with the natural process of education, so that’s the fear, but looking at this, inevitably we have to use those technologies down the road.

How do we reach common ground among people, especially a mutual understanding of what the boundary is for us to use these technologies? The boundary can change over time, because nowadays we think that’s the boundary, but in 10 or 20 years, we may think the boundary should move up or move down. That’s all debatable, and it should all be negotiable.

Jeff: What are some of the bigger problems you see autonomous systems helping with in the future?

Steve: The bigger problems I can see so far––we already have this working in smart factories, and we have logistics applications. Also, down the road, if we plug in the human factors, I see the lifestyle space having some opportunities, because it’s going to be an easy implementation for one small problem, but if you add up all those small problems being solved, then finally it becomes a large-scale lifestyle change in our daily life.

Again, I’m using Alexa as an example, and also the Amazon Go store. I believe business-wise they are still not making money by deploying that many stores everywhere, but that’s the trend. It’s a combination of the supply chain management and the logistics and the shopping experience and everybody’s daily lifestyle factors being mixed together in that model. I really hope that model could be successful sooner than later, but we’ll have to see.

Jeff: It’s interesting that you bring up Amazon Go, because it’s an example where they brought together a lot of technology to try to make someone autonomous in their shopping experience, where they’re not gated by––there’s no friction there for checking out, right? I think that’s a parallel for autonomous systems in general. Today you’re gated or impeded or slowed down, but really autonomous systems should serve the human experience by providing more autonomy and hopefully less monotony and less friction.

I think that’s probably when we get it right: when we’ve enabled ourselves to be more autonomous. The machines and robots, or the technology that Amazon is building––Amazon Go is an example––are all around that same thread, right? Autonomy is a human virtue that’s important to humans.

Steve: Yes. Exactly. I think of our previous work, when we deployed those motion quantification technologies into classrooms for Chinese students. They already saw that the measurement––the quantification of the motion indexes and everything else––is valuable and is done without humans noticing. It fits quite naturally into the loop, so thinking down the same pipeline, when we talk about robots doing the work, if we still have a human being involved in the loop, it should be the same philosophy.

We will have the perception done in a very natural manner, and we will also establish communication between the human and the robot in a more efficient and pervasive way to get the job done.

Jeff: As far as things going wrong with AI and autonomous systems, where do you see us making mistakes?

Steve: Well, somehow we have higher expectations for the short-term results. I’m citing Amara’s Law: we often overestimate the short-term effects of those technologies, but we also often underestimate their long-term value. Anything that starts new, from scratch––especially if you are trying to establish a new business operating model using new technologies––will not be easy. In the e-commerce space, we have seen capital driving this force for quite a while, pushing it to the widespread acceptance it has nowadays.

In autonomous systems, the process may be a little slower than that, partly because development is still not very cheap nowadays. We are trying to integrate all kinds of state-of-the-art sensors and actuators into the system. When it comes to the down selection and all the engineering work, it’s still quite brainpower-intensive and manpower-intensive too. We would not expect a very quick and snappy result in just one iteration. There may be many iterations to reach a state that you can use in your business operation.

I would suggest that business decision-makers be open to and optimistic about this technology trend: think about automating your process down the road and be a little forward-looking instead of just staying where you are, because if you stay where you are, sooner or later you will be left behind everybody else. On the practical side, in the implementation, you really have to set a goal but reach the goal in steps––one step at a time, making it controllable and cost-effective. Especially, you have to control the risk bottom line for your business.

Jeff: Any other advice or thoughts on the future as it relates to AI and autonomous systems that we haven’t discussed so far?

Steve: Yes. I think for the future, technology-wise, people are working diligently on different kinds of ideas and everything else. Especially, we should be prepared for a new kind of computation workflow, because nowadays, whether we are talking about CPU computing or GPU computing, we are dealing with data in either scalar form or matrix form.

We often talk about tensor computing on the GPU because it is very efficient for dealing with heavy loads of matrix data, since everything is discretized into the linear algebra space. But down the road, I would say that if we want to achieve much better precision and much better prediction based on the collected data, we will need to use time series at a high frame rate for analysis. In that sense, we have to think about changing the way we deal with data computation right now.

I have seen much progress happening right now. Take the camera as an example, because everybody is familiar with that kind of technology. Nowadays, all those cameras have a shutter, with which you are doing a time-averaged projection of the pixel intensity changes into an image. If you think about how you can deal with each single pixel in a time-series manner, you can track the intensity change of each pixel. That way, you are turning this problem––the time-averaged projection into an image––into a multichannel data processing problem.

That will enable a lot of other opportunities for engineering work. It would mean a lot of new algorithm implementations, which could be more computationally efficient, with lower GPU cost and lower power consumption requirements. Those are all very important, especially for autonomous systems. You need to think about how you deploy a much more efficient and much lower-cost system for the data usage. That’s the essential part of making these widely accepted technologies down the road.

The newer camera technology people typically refer to as an event camera: the event camera is able to track the intensity time history of each pixel. That’s a great start. I think right now the spatial resolution is not that great, but with silicon technologies advancing, I believe sooner or later we will have very high definition in space and also very high time-definition signals provided to us for further processing.

I think that’s a very interesting and amazing part, and we are very much looking forward to that area. I think as human beings we are naturally fitted to the loop of identifying a problem, solving the problem, then identifying new problems and solving them again. It’s an endless loop. Right now, everybody is very enthusiastic in this space. I think that’s a very good sign to move forward.
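
(Editor’s note: to make the per-pixel time-series idea concrete, here is a toy Python sketch of how an event-style representation differs from a shutter’s time-averaged frame: each pixel emits an event only when its log intensity changes by more than a threshold. The synthetic data and threshold are assumptions; real event cameras do this in hardware.)

```python
import numpy as np

# Treat each pixel as its own time series: emit an "event" whenever a pixel's
# log intensity changes by more than a threshold, instead of averaging over a
# shutter interval. Synthetic data stands in for a real sensor stream.

rng = np.random.default_rng(0)
T, H, W = 100, 4, 4                       # 100 time steps of a tiny 4x4 sensor
frames = rng.random((T, H, W)) * 0.05     # background noise
frames[40:, 1, 2] += 0.8                  # one pixel brightens at t=40

threshold = 0.2
log_i = np.log1p(frames)
reference = log_i[0].copy()               # last intensity at which each pixel fired
events = []                               # (t, y, x, polarity)

for t in range(1, T):
    diff = log_i[t] - reference
    fired = np.abs(diff) >= threshold
    for y, x in zip(*np.nonzero(fired)):
        events.append((t, int(y), int(x), int(np.sign(diff[y, x]))))
        reference[y, x] = log_i[t, y, x]  # reset the reference only where an event fired

print(f"{len(events)} events; first few:", events[:5])
```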

Jeff: One last question before we close out. I’ve appreciated all your insights so far, Steve. Any advice for those who might be thinking about the future and the changes that are coming, and who are facing some career choices? There’s a lot changing right now, so any advice for the young student who’s thinking about, “Where should I focus”?

Steve: Yes. That’s quite a personal choice, but from my experience, I’ve been very fortunate so far to be able to tag along with all the technology trends, and I still enjoy the work. Sometimes, actually right now, I’m still doing some coding for some clients’ projects. It’s interesting that if I use the old tools, I can get the job done, but I like to search for some new tools to do the job. For anybody’s career, especially for the young generation, my first advice would be to follow your heart––stick with what you are interested in instead of what could make you big money in the beginning.

I think as long as you build yourself up and establish yourself in your career, especially on the engineering side, and become an expert in one domain, even if it’s a very narrow domain, all the rewards will come back to you. You don’t need to worry about the rewards. Rewards could be money, the cash, the prize, anything else, but I think the true pride for me is still that you have the opportunity to dig more, to learn more every day. That’s actually very satisfying to me.

Jeff: Appreciate that advice and the time you’ve spent with us, Steve, and your experience and leadership, so great to have you on the show.

Steve: Thank you very much for giving me this opportunity to participate. Also, I’m glad that we can trigger more thoughts among the community to let everybody participate in the conversation.

***

Jeff: Thank you. The Future Of podcast is brought to you by Fresh Consulting. To find out more about how we pair design and technology together to shape the future, visit us at freshconsulting.com. Make sure to search for The Future Of in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found. Make sure to click subscribe so you don’t miss any of our future episodes. On behalf of our team here at Fresh, thank you for listening.