Podcast

The Future Of Robotics Safety

Nathan Bivans, Chief Technology Officer at Fort Robotics, joins Jeff Dance to dive into the future of robotic safety, including ethical concerns, generative AI’s impact on robotics, and how to ensure that robotics safety regulations accommodate rapid technological advancements without compromising human safety and ethics.

Nathan Bivans – 00:00:01: The most successful Customers that we’ve had, the thing that they do is they realize that they need to move quickly, so they can’t just stop to do safety. If they take a little bit of time, generally upfront, and do things like a hazard analysis, it doesn’t have to take too long.

 

Jeff Dance – 00:00:19: Welcome to the Future Of, a podcast by Fresh Consulting, where we discuss and learn about the future of different industries, markets, and technology verticals. Together, we’ll chat with leaders and experts in the field and discuss how we can shape the future human experience. I’m your host, Jeff Dance. In this episode of The Future Of, we’re joined by Nathan Bivans, Fort Robotics CTO, former CTO of Humanistic Robotics, and a certified machine safety expert, to explore the future of robotic safety. Nathan, thanks for being here with us.

 

Nathan Bivans – 00:00:59: Alright Jeff, thanks for having me.

 

Jeff Dance – 00:01:00: Good stuff. For those who don’t know you, if we could get some quick insight into your experience and your journey in hardware and Robotics, tell us more about yourself.

 

Nathan Bivans – 00:01:10: Well, I think it’s an interesting path. I got a couple of degrees in electrical engineering and started out at Apple, really hardware-focused, doing hardware design on what were the mobile products of the early 2000s, on the laptop team. Did a lot of really cool stuff there, spent some time in China, in Asia, doing support and manufacturing of the products we were working on. I helped debug issues with the original iPod, things that are ancient now, and I was on the team doing the original hardware development of the first Intel-based products. It was a pretty cool time to be there, but I got pulled back to the East Coast and worked for a company called Lutron that does lighting controls, building these hybrid wired-wireless devices, what would now be called IoT devices, for lighting and load control in very large buildings. Then I ended up at Motorola, working on these big networking boxes doing hybrid IP-to-RF communication. So I’d done a whole bunch of different things up to that point: hardware, software, architecture, some network development. A little bit of everything. And then, I guess I get bored easily, because I started looking around and found an ad, on Craigslist of all places, for a company that was looking for an engineer and worked on landmine clearance. It’s like, well, gee, that’s interesting. Not something you run into every day. That’s how I met Samuel, one of the founders of Humanistic Robotics, where I got into the world of Robotics with probably the most obvious mission for a Robot, which is to clear landmines. Of the dull, dirty, and dangerous, you probably can’t get any more dull, dirty, or dangerous than clearing landmines. And I was the first engineer at the company doing electrical, software, hardware, anything other than mechanical engineering.
So we were kind of audacious, thinking we could build a Robot to clear landmines when really none of us had any experience with Robotics.

 

Jeff Dance – 00:03:10: That’s how most companies start. Keep going.

 

Nathan Bivans – 00:03:14: I guess you have to be naive enough to think, oh yeah, I could do that. We took on the task of converting a five-ton track loader from Terex into a Robot. This was in 2012, when the software and the hardware were a lot less mature than they are today. So we developed a remote control system, did teleoperation, and then started adding some autonomy to it, and very quickly realized that this was a very dangerous machine. Something that weighs five tons, drives by itself, and can go over 15 miles an hour can be very, very scary when you’re operating around it. So we developed a safety solution, literally just for self-preservation, that we could trust. That basic work we did in wireless safety for that machine, plus some governance on the AI, was really the very beginning of what became Fort several years later. So HRI, Humanistic Robotics, developed two things: this Robot with its safety systems, and the landmine clearance tools themselves, which ended up being deployed with the UN all over Africa and the Middle East in support of peacekeeping missions. But it was quite clear that the UN was not interested in the Robot. They were moving from a world where people were doing landmine clearance, and really IED clearance, with shovels and sticks. Moving to something vehicle-based was already such a huge improvement in both safety and efficiency that saying, okay, now you’re going to deploy Robots, was just a bridge too far. But we did find that other companies doing mobile Robot development had the same problems. So we started selling our wireless safety technology to other folks who were peers in the fledgling industry at the time. Then in late 2018, early 2019, we realized the convergence between landmine clearance and Robotics just wasn’t going to happen anytime soon.
That’s when we formed Fort, and Fort’s been growing ever since, beyond those first basic pieces of wireless safety, into more of a platform for safety and security in modern Robotics. It’s been a long journey.

 

Jeff Dance – 00:05:30: No, it’s great. I appreciate the journey. It is interesting, especially when we start thinking about clearing landmines with Robots. And obviously, with everything going on in Ukraine right now, I would think that could be really useful.

 

Nathan Bivans – 00:05:42: Yeah. Sadly, there’s always more work to be done in that field. I hope it’s one of those things you could just knock out, we could move on, but that never seems to be the case.

 

Jeff Dance – 00:05:52: Obviously, the convergence of all these technologies probably is helping in that regard, but it’s still super challenging when you get into those real-world applications. I love where your company’s focused, and I’m grateful to have you here. I did notice you have six patents as well. I was just curious, where are those focused? Are they also in Robotics? Are they more hardware-engineering oriented?

 

Nathan Bivans – 00:06:13: They’re all pretty much Robotics. There are a few different things in there around governance of autonomy and safety communications, some newer things around safety protocols, optimization, and latency characterization of links, and a lot of the deep technical pieces of how you do safety reliably over wireless. But they also get into security: how you mix safety and security, and how security is such an integral part of a robotic safety solution, because you’re releasing these complex machines into environments that are largely uncontrolled. We don’t have fences and walls so much anymore with these machines, so security is now a bigger problem.

 

Jeff Dance – 00:06:58: Thank you. Let’s talk about that more. Actually, before we jump there: you recently spoke at RoboBusiness, as I understand it, and the topic was robotic safety and cybersecurity. What were some of the key takeaways from that talk? If I were to ask you in an elevator, we don’t have time for the whole thing, but what were the key takeaways?

 

Nathan Bivans – 00:07:19: I guess I gave myself a good segue, because it really is the fact that safety and security are joined at the hip in modern Robotics, especially when you go outside the factory or the warehouse. Indoors, we can contain the problem and say, well, I’m on nice little private networks, I have walls or fences, and I have controlled access to the site. But our Customers are doing work in agriculture Robotics or construction Robotics, or in transportation, which is the ultimate one, right? These are largely uncontained applications. The networks are sometimes completely public, but at the very least, it’s harder to control access. So the idea is that safety and security are tied together; you really can’t have one without the other. Sometimes the proper response to a security violation might actually be what we would traditionally consider a safety response. If you detect a critical intrusion into a Robot, you might need to shut the Robot down in a functionally safe way. Whereas if that happened in a financial system, obviously, it’s a different type of response. So we end up integrating these two things more intimately. And from a development-process standpoint, anybody who’s done safety development, or really any rigorous engineering, is going to be familiar with the good old V-model: you start with your requirements on the top left, you work through design and development, then validation and verification up the right side. That’s a well-tried process for safety, but there’s another layer you add when you bring in security. Instead of starting with a hazard analysis, you’re starting with a threat model, and you work through requirements and architecture and design the same way you do for safety. So there’s a way to unify safety and security. It’s really just those parallels. This is not a brave new world. It’s a new dimension to ensuring you can trust the behavior of any system, but Robots in particular.

 

Jeff Dance – 00:09:22: Thanks for that depth of insight. I would have loved to hear the entire thing, so maybe I’ll have to grab the deck from you, if you don’t mind. Tell us, as we think about today, what are the top concerns and top problems in Robot safety? You’ve just alluded to some of them, but what do you see as the top problems? I think safety is probably forgotten a little bit, but I’d love your perspective.

 

Nathan Bivans – 00:09:43: I’d say that’s a big one. People get into building Robots not because they love safety and security, right? They get into building Robots because they want to solve a problem. They’re focused on the application, and rightfully so. So often during development, safety is… well, we sell a lot of wireless e-stops to people who are just like, hey, I need some way to stop this machine, because I’m still working on the software and the sensors, so who knows what it could do? I need to maintain control. We sell a lot of those. But then, when they think they’re ready to deploy, they’ve validated their software and say, oh great, I can trust it now. But from a safety standpoint, they haven’t tested it nearly enough to actually show that it’s safe to the six, seven, eight nines of reliability that are required for most safety applications. So it’s really about how you enable those Customers, the people who want to build the Robots, to move as fast as they want to, and, if they’re VC-funded, really need to, to keep the investors happy, while still having a safe application. We’re never going to be able to just test our way out of this problem, so we have to have designed-in safety measures, and in the same sense, designed-in security measures, because you’re never going to ensure either just by testing forever. That’s a big problem that we see. I’ve seen a lot of Customers who don’t start with safety in mind, and then they go to make that big sale, and their customer starts asking hard questions they don’t have good answers to. At best it’s a significant delay; at worst it could kill a deal. On one hand, of course, I love hearing from Customers and I love being helpful, but I hate when they’re in that situation, because obviously it’s a difficult place to be.

 

Jeff Dance – 00:11:37: Any other thoughts on balancing that? You’ve alluded to it, but think about the speed of innovation: everything seems to be converging on the need for more Robotics and automation, Robots that can solve some of the labor shortage issues we have and handle the dull, dirty, and dangerous situations. And yet most Robotics companies fail, and most people don’t know that. There’s a lot of pressure to succeed, and it’s really hard to create a successful Robot company, with the capital requirements, the intensive engineering and software requirements, and the fact that you’re on the frontier with user experience and with regulation you’re trying to navigate that hasn’t really been laid down yet. So how do you balance the need for speed and innovation with the need for something safe? I think that’s why Fort exists, right? It’s like, hey, we’re going to handle this for you. But tell me if you have any more thoughts on getting that balance right.

 

Nathan Bivans – 00:12:42: I mean, you’re absolutely right. That’s what we built Fort for. But if I take a step back, and I’ll try not to make this a sales pitch: the thing the most successful Customers we’ve had do is realize that they need to move quickly, so they can’t just stop to do safety. But if they take a little bit of time, generally upfront, and do things like a hazard analysis, it doesn’t have to take too long, just enough to understand the scope of the safety challenges they face. Then often you can look at it and say, well, there are some high-priority things I do need to do, and often they’re not that hard to solve, and there’s some stuff a little lower on the priority list that maybe you knock out later. You get your first Robots in the field and you keep improving. So a lot of it is just understanding the scope of the problem, which is highly dependent on the application and the environment you’re operating in. Nobody else can answer that question other than the people who know the application, but if you take the time to do that hazard analysis upfront, it can really help you get a sense of the scale of the problem you have. And you can probably build in some things upfront that don’t take as much time as you were afraid they might have, the fear that would scare you off from even thinking about safety. You get that stuff done upfront, and you’ll probably end up better off later. Then you can also answer the hard questions you might get from a customer down the line, because you at least thought about it. You might say, no, I don’t have a solution to that problem yet, but at least you recognized that the problem exists, and hopefully you solved the big ones. And honestly, it’s the same for security. If you don’t think about it, it’s going to bite you.
But if you spend some time thinking about it, even if you don’t have a solution for everything, you’re much better off just having a scope of what the problem is.
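The lightweight hazard analysis described here can be sketched in a few lines of code: list the hazards, score each one, and sort by risk so the high-priority items get designed in upfront. The hazards, the 1–5 scoring scale, and the numbers below are invented for illustration, not Fort’s actual process:

```python
# A minimal hazard-analysis sketch: score severity x likelihood and rank.
# All entries and scales are illustrative examples only.
from dataclasses import dataclass


@dataclass
class Hazard:
    description: str
    severity: int    # 1 (minor injury) .. 5 (fatal)
    likelihood: int  # 1 (rare) .. 5 (frequent)

    @property
    def risk(self) -> int:
        # Simple risk-priority number, as in a basic risk matrix
        return self.severity * self.likelihood


hazards = [
    Hazard("Loss of wireless link while machine is moving", severity=5, likelihood=3),
    Hazard("Person enters work area undetected", severity=5, likelihood=2),
    Hazard("Hydraulic fluid leak", severity=2, likelihood=2),
]

# Highest-risk items first: these are the ones to design for upfront
for h in sorted(hazards, key=lambda h: h.risk, reverse=True):
    print(f"risk={h.risk:2d}  {h.description}")
```

Even a table this small forces the team to enumerate what can go wrong and decide, explicitly, which mitigations ship with the first robots and which can wait.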

 

Jeff Dance – 00:14:31: A little bit of planning, go slow, just a tiny bit upfront so you can go fast later on.

 

Nathan Bivans – 00:14:36: Absolutely, yeah, that’s a good succinct way of putting it.

 

Jeff Dance – 00:14:40: Tell us more about the players in robotic safety right now. There are companies, but there are also regulatory bodies, and there are people shaping policy. All of this is emerging because Robotics is evolving and changing dramatically. Tell us more about the big players in the space.

 

Nathan Bivans – 00:14:57: Yeah, you’re right that there are a few different facets. There are obviously the component providers and the technology advancing. For a long time, Robotics was really limited by the capability of sensors, and cost is still an issue, especially when you get into the safety-certifiable stuff. But the cost and capability of sensors, whether it’s lidars, radars, or the stuff we’re doing these days with computer vision, are advancing so rapidly that it’s becoming less of a problem.

 

Jeff Dance – 00:15:28: It’s like memory. It’s just progressively getting cheaper, cheaper, cheaper.

 

Nathan Bivans – 00:15:31: Yes, absolutely. What we can do today seems miraculous compared to two years ago, and I can only imagine where it will be two or three years from now. So that’s less of a problem, and it’s why I think we’re seeing more of our customers moving from R&D to actually deployable products: their BOM costs are coming down and their capabilities are rapidly advancing. And, as you alluded to, regulation is this interesting problem that’s a different story depending on where you are in the world. In the US, we often say it’s the Wild West, a very tort-driven society. Whereas in Europe, it tends to be more like they come up with the regulations, and you don’t really sue anybody, because the regulations tell you what you have to do. Unless you’re completely negligent, you’re not going to find yourself in court. In the US, we let the courts figure it out. We can argue all day about which is better; ultimately, I think there’s a happy medium with regulation or guidelines, at least so people have an idea of what to do. We’ve spent years in safety trying to figure out where that line is. We go through safety certification with outside auditors, and we have a similar process for security. It’s difficult and rigorous, but a lot of the value we provide is having that done; it just comes with the job. From what I understand, going through that can be a real impediment for customers trying to advance their solutions. So finding that balance between regulation and the Wild West is a difficult thing.

 

Jeff Dance – 00:17:15: You want to have enough regulation to guide the industry, but you don’t want to kill the industry at the same time, right, and you need to let it flourish and figure things out in some sense too.

 

Nathan Bivans – 00:17:24: Yeah, and honestly, I think we’re relatively early on in a lot of markets. So if you went and tried to write regulation now, you’d almost certainly be wrong, and you’d probably end up killing some really good ideas that would have worked out, just because you didn’t consider them properly in the regulation. As an example: for years, when we first started doing wireless emergency stops, at every trade show we went to, we would invariably have a number of people come up and say, you can’t do that. And we’d say, well, no, you can; it just depends what standard you look at. We were completely compliant, and I could prove all of the requirements of the standards for core functional safety, the calculations for undetected error rates, and the communication requirements for safety communication networks. All of that we could prove. But then there was, I think, an RIA standard for Robotics that said e-stops can’t be wireless. And why did it say that? Well, at the time, people were doing things like just putting a button on Wi-Fi and calling it an e-stop. That’s not good enough. So they took the big-hammer approach and said, oh, you can’t do this wirelessly, when in reality you just have to do it properly. That’s obviously been changed now, but it’s easy to overreach, because you just don’t know enough when you’re writing those regulations. So there’s a balance there, and I don’t know that you ever strike it perfectly, but I do see the right discussions going on right now, so I’m hopeful that we’re going to land in the right place.
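To give a flavor of the undetected-error-rate math mentioned here, the sketch below uses a common worst-case rule of thumb: an r-bit checksum (CRC) misses roughly 2^-r of corrupted messages. The bit error rate, message size, and message rate are invented for illustration and have nothing to do with Fort’s actual certification calculations:

```python
# Back-of-the-envelope residual (undetected) error rate for a
# CRC-protected safety message over a noisy wireless link.
# Worst-case assumption: an r-bit CRC misses ~2**-r of corrupted messages.
def residual_error_rate(bit_error_rate: float, message_bits: int, crc_bits: int) -> float:
    # Probability at least one bit in the message is flipped
    p_corrupted = 1.0 - (1.0 - bit_error_rate) ** message_bits
    # Fraction of those corruptions the CRC fails to detect
    return p_corrupted * 2.0 ** -crc_bits


# Example: noisy link (BER 1e-3), 128-bit message, 32-bit CRC
rate = residual_error_rate(bit_error_rate=1e-3, message_bits=128, crc_bits=32)
messages_per_hour = 100 * 3600  # 100 safety messages per second

print(f"undetected errors per message: {rate:.3e}")
print(f"expected undetected errors per hour: {rate * messages_per_hour:.3e}")
```

Numbers like the per-hour figure are what get compared against the dangerous-failure-rate targets in functional safety standards, which is why “just put a button on Wi-Fi” doesn’t cut it but a properly designed protocol can.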

 

Jeff Dance – 00:19:02: Thank you. AI and Robotics go hand in hand; some intelligence might be part of the definition of what a Robot is in the future, and I know that gets bundled with AI. But consider the rapid rise of Generative AI and how it can impact Robotics: you can have not just a library of images or a text-based model, but visual models that help Robots see and identify things even better, plus a library of control movements that could combine with the visual model when you start thinking about a series of steps. It just seems like Generative AI is going to keep pushing Robots forward. How do you see that affecting safety? Are you more concerned? Are you just like, okay, now this is really going to raise the need for more safety? What are your thoughts?

 

Nathan Bivans – 00:19:56: I think it ultimately does increase the need for a focus on safety, but it’s really about balance. Again, you could use safety to kill all that great technology, and there’s no question that AI is, in a lot of senses, the future of Robotics. Even just using AI and ML for simulation work: you can run through a lot more simulations in a lot less time to prove out that even your non-AI-based automation software covers all the bases, using systems that are a little better at coming up with scenarios, rather than having to collect that much data in the real world. There are so many applications that you can’t really cover them all. But when you’re running Generative AI, things that are non-deterministic, in the field, to hopefully improve the behavior of the system over time, it really means that because the system’s not deterministic, there’s no way to test your way to determining the trustability or predictability of that system. If we think about the way people work together, you trust somebody because you can predict their behavior. You have a set of rules that you operate by. If you’re working next to somebody in an excavator, you’re going to trust that they’re not going to hit you, that they’re watching for you. There are rules, whether spoken or unspoken, that you’ve established. And our approach, and I think it’s the right one, is that it’s really about having those rules established. It doesn’t mean you’re heavily constraining the behavior of the autonomy, but you’re making sure that it stays in a well-understood safe operating area, and that there are the right levels of oversight.
I guess maybe I’ve read too much science fiction and Isaac Asimov, but there was something to Asimov’s three laws of Robotics: the right people need to stay in control, Robots shouldn’t harm people, and they should understand how to prioritize different levels of harm. That’s out there a little bit, but…
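The “safe operating area” idea can be made concrete with a tiny rule-based governor that sits between a non-deterministic planner and the drivetrain: whatever the AI requests, a simple deterministic layer clamps it to well-understood limits. The thresholds, names, and linear speed-scaling rule below are invented for the sketch; they are not Fort’s design:

```python
# A minimal "safety governor" sketch: deterministic rules that bound a
# possibly non-deterministic planner's output. All numbers are illustrative.
from dataclasses import dataclass


@dataclass
class SafetyLimits:
    max_speed_mps: float = 4.0    # absolute cap, regardless of planner request
    stop_distance_m: float = 2.0  # inside this range: stop the machine
    slow_distance_m: float = 10.0 # inside this range: scale speed down


def govern_speed(requested_mps: float, nearest_person_m: float,
                 limits: SafetyLimits = SafetyLimits()) -> float:
    """Clamp the planner's speed request to a well-understood safe envelope."""
    if nearest_person_m <= limits.stop_distance_m:
        return 0.0  # reactive fallback: functionally safe stop
    speed = min(requested_mps, limits.max_speed_mps)
    if nearest_person_m < limits.slow_distance_m:
        # Proactive rule: scale speed linearly with separation distance
        scale = ((nearest_person_m - limits.stop_distance_m)
                 / (limits.slow_distance_m - limits.stop_distance_m))
        speed *= scale
    return speed


print(govern_speed(6.0, nearest_person_m=1.5))   # person inside stop zone
print(govern_speed(6.0, nearest_person_m=6.0))   # person in slow zone
print(govern_speed(6.0, nearest_person_m=20.0))  # clear: just capped
```

The key property is that this layer is simple enough to verify exhaustively, so the AI above it can remain complex and adaptive without the overall machine becoming unpredictable.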

 

Jeff Dance – 00:22:03: It’s out there a little bit, and yet we’re seeing a lot of use of drones in modern warfare. Modern warfare already involves Robotics, and it’s not a far stretch to think it will to a greater extent, along with the fears the everyday human has about that, at least here in the US. I’ve heard that in Asia there’s maybe a one-to-a-hundred or one-to-ten ratio of Robots that people are integrating with, because there’s a different culture around Robotics there.

 

Nathan Bivans – 00:22:38: Probably different risk tolerances, which is all part of the calculation.

 

Jeff Dance – 00:22:41: Right. Tell us more about your thoughts on the future. What would be some of the top concerns for robotic safety going forward? Today we have creating guidelines, creating standards, making sure you’ve thought in advance about safety; you have a framework for that. I think about driving: if I go drive my car, there are lanes, stoplights, rules that guide my driving, even though I’m driving a very big vehicle, and there are still a lot of accidents and deaths in a year just driving around in the US alone. It’s dangerous, but we have guidelines that guide it. As you think about the future of Robotics, I don’t think it’s in question that we might have a hundred times more Robots or IoT devices connected in the next ten years. So what will be some of the big concerns for Robotics safety in the future?

 

Nathan Bivans – 00:23:33: I think there’s always the concern of what kind of oversight there is, and then, when bad things do happen, how much disclosure we get. That’s something we see in the autonomous vehicle world today. When there is an accident, it obviously gets a lot of press. I honestly think that in a lot of cases, some of the companies doing autonomous vehicle research really do themselves a disservice by not being more transparent about what happened. In a lot of cases there’s a good explanation. It’s like, oh, somebody got hit; well, they jumped out between two parked cars, and a human driver probably would have hit them as well. I’ve seen those cases, but the companies tend to hold the information around those accidents really close to the vest and only pick and choose what they release. That causes people to be really skeptical. And I think you’re right: as Robots become more commonplace, accidents are going to happen, statistically. It’s just going to happen. The more transparent companies are with the information around those accidents, the better off they’re going to be, because people are going to trust them and say, well, you admit when you’re wrong, so I can trust you when you say you weren’t wrong and it was someone else’s fault. I think that’s going to be critical. What goes along with that is that companies have to be confident in the quality of their solutions, so they feel okay releasing this information and saying, hey, look, we did everything right, and something happened anyway. Statistically, that’s just the way the world works; nothing’s perfect. If you have that confidence, because you know you’ve done the safety and security and design work with the proper level of rigor, then I would hope we can establish a culture that’s not necessarily about blame, but about learning and improving.
And then there’s always a level of litigiousness that we’re going to have to deal with, but hopefully it’s more the typical product liability stuff that everybody deals with, and less about just going after people.

 

Jeff Dance – 00:25:41: We’re definitely litigious here in the US. I’ve heard we have roughly one trained lawyer for every 20 people or something like that, whereas in other countries it might be more like a one-to-four-hundred ratio.

 

Nathan Bivans – 00:25:54: I’d take a few more engineers and a few fewer lawyers, I guess.

 

Jeff Dance – 00:25:57: You know, part of our really light research suggested that there really aren’t many deaths related to Robots. If we look at the last twenty-plus years, it’s only been something like one and a half deaths per year related to Robots. But I’m aware of a situation where someone lost an eye, so there are deaths and there are accidents, and when you’re dealing with machines, like cars with fifty thousand deaths in the US per year, you’re going to have issues, right? We’re human beings. But as we contemplate the future, Robots will be more integrated than they are now. Historically, they’ve been in cages, and we see more and more use cases, companies, and investment in human-Robot integration at scale. So as we think about safety layers and how we design the future with intent, what thoughts do you have, or maybe you can articulate more of those layers we need to be thinking about as Robots get more integrated into our work? I mentioned Asia because we’re seeing a lot more Robots there, in restaurants and the workplace, and not so much in the US so far. What thoughts do you have on that?

 

Nathan Bivans – 00:27:06: Well, it’s actually worth mentioning that we haven’t seen a lot of accidents with Robots directly, at least not reported that way. But you’re absolutely right that the landscape is changing; the fences are coming down. So we are going to see more interaction between people in vulnerable situations and Robots, and I think it’s something to be concerned about. In a lot of the places where we see our customers pushing their Robots, though, they are taking people out of situations that are inherently dangerous. So you also have to consider harm and risk reduction. Maybe the Robot itself is potentially more dangerous than the human operator, but now the human operator isn’t sitting in a piece of construction equipment for eight hours a day for thirty years and ending up with nerve damage from the constant vibration. The computer doesn’t care; the electrons will shake around just fine. So we have to look at the relative risk. This is where understanding your risk as you develop your solution comes in. You can often say, yes, this is risky, but the alternative is actually really risky, so I’m willing to accept it. As long as you can look at those things and balance them out, it can work out just fine. On the advances front, there’s some stuff I think is really interesting. I don’t know if this is a common concept, but I’ll give him a shout-out because I’ve heard him talk about it: Riccardo Mariani, who’s now head of industrial safety at NVIDIA. He talks a lot about three kinds of safety; I hadn’t heard anybody else say this, so I don’t know if it’s his thing or not. There’s predictive safety, which is really simulation: how can we predict when things are going to happen? It’s generally offline. There’s proactive safety, which is watching sensors and trying to avoid the dangerous situation before it happens.
This is almost like ADAS: I’m going to keep you in the lane so that you don’t hit the other car. And then there’s reactive safety, which is where most safety has traditionally been. It’s like an e-stop: something happens and the system reacts to it. Most safety is largely reactive today. What we’re seeing now, with advances in sensors, advances in processing, and the ability to have high-performance, functionally safe platforms, is a move to push safety up that stack. Simulation is continuing to grow, so that’s great; we gather a lot of data from the field and hopefully improve our algorithms and sensing platforms over time so they get better at detecting things. But we’re also adding these proactive safety layers that are more sophisticated than traditional safety, without doing full AI: nudging things in the right way, keeping the machines within their safe operating area, so hopefully you don’t have to hit the e-stop button as often, or ever. Ultimately, you end up with a more nuanced and sophisticated view of what safety is as part of the total machine. And I think that will hopefully enable safer machines overall, and probably better-functioning machines with less downtime, because you don’t have to hit the e-stop button that often.
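The predictive/proactive/reactive split can be illustrated as three small functions: an offline check run in simulation, an online speed rule that avoids the hazard, and a last-resort stop. The simplified kinematics, thresholds, and names are invented for this sketch:

```python
# Toy illustration of the three safety layers described above.
# Physics is simplified to constant-deceleration stopping distances.

def predictive_check(braking_decel_mps2: float, max_speed_mps: float,
                     sensor_range_m: float) -> bool:
    """Predictive (offline/simulation) layer: verify the machine can always
    stop within its sensor horizon in the worst case."""
    stopping_distance = max_speed_mps ** 2 / (2.0 * braking_decel_mps2)
    return stopping_distance < sensor_range_m


def proactive_speed(obstacle_distance_m: float, braking_decel_mps2: float,
                    margin_m: float = 1.0) -> float:
    """Proactive (online) layer: cap speed so the machine can stop short of
    the nearest detected obstacle, so the e-stop is rarely needed."""
    usable = max(obstacle_distance_m - margin_m, 0.0)
    return (2.0 * braking_decel_mps2 * usable) ** 0.5


def reactive_estop(contact_or_fault: bool) -> bool:
    """Reactive (last-resort) layer: traditional e-stop behavior."""
    return contact_or_fault


print(predictive_check(braking_decel_mps2=3.0, max_speed_mps=7.0, sensor_range_m=12.0))
print(round(proactive_speed(obstacle_distance_m=9.0, braking_decel_mps2=3.0), 2))
```

In this framing, each layer exists so the one below it fires less often: good simulation reduces surprises in the field, and a good proactive cap means the reactive stop becomes the rare exception rather than the routine.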

 

Jeff Dance – 00:30:32: I was thinking about a car: how often are you slamming on the brakes or pulling the e-brake? Not that often. Accidents happen, but you’re doing a lot of things beforehand, because you understand the rules and regulations; you’ve been trained, et cetera. I don’t know if that’s a good analogy, but I really like the predictive, proactive, and reactive classification of safety as we think about designing the future with more intent. It’s a good framework. As far as where you’re focused, it seems like the evolution has been from reactive devices toward the predictive and the proactive. Is that the progression as we think about not just hardware but also software playing together for the future of safety? Is that where Fort is going, and other companies as well?

 

Nathan Bivans – 00:31:24: I would definitely say so. We’re headed that way. We started in reactive, and we still have a lot of work to do there. Everybody starts with a one-to-one deployment, right? You have one person monitoring one machine. Now we’re starting to see customers with larger, more dynamic networks, where the interaction between the human and the machine is different. It’s a one-to-ten deployment, with the human operator bouncing machine to machine. That’s how you multiply the effectiveness of the people you have. You’re not getting rid of the people; they’re more productive, because the dull work is being done by the machine and the human does more oversight. Enabling those interactions while maintaining safety, that’s the stuff we’re doing right now. The forward-looking work that I’m doing as CTO is definitely more about enabling proactive safety. It’s enabling dynamic connections to more sophisticated sensors on machines and in the environment, sharing data between machines, all while maintaining safety, to avoid the problem before you get there. Seeing around the corner, because there are other machines there, or there’s a camera in the environment I can get data from and use to make safety decisions. To know that, oh yeah, there’s somebody coming, I’m going to slow down and synchronize my crossing of this intersection with that other forklift, and make sure that, whether it’s human-operated or a robot, we don’t run into each other. Those kinds of things optimize behavior but also avoid these safety scenarios, so nobody has to slam on the brakes. And we’re building that in a way that’s deployable by a robotics company without them having to spend years developing it themselves.

 

Jeff Dance – 00:33:16: Thank you. A few other questions on this. One is: are there regulatory bodies you’re working with right now that are part of designing this future with a bit more intent? I know you said you’re balancing, but are you actually working with any bodies, or are there other bodies that are really influencing standards of safety for the future?

 

Nathan Bivans – 00:33:35: So, we’re on the RIA committee; in the US they’re really the ones guiding the standards for robotics, and they’ve focused mostly on AMRs. In the world of outdoor robotics, construction, agriculture, mining, transportation to an extent, there’s less focus on regulation. I think it’s just less mature. There are some regulations in Europe; we’re not on those committees, although we’ve been talking to folks in a lot of the industry-related groups about it. We’re in a close circle of folks who are worried about it, but not sitting on the committees. And, as you mentioned before, some of them may be a little ahead of the game, and I think it’s making life difficult for the companies trying to deploy solutions. We have a bunch of customers in those worlds who are just saying, we’re not going to deploy in Europe yet because we can’t deal with those regulations. We’re going to figure it out in North America. And the hope is that by the time they have that figured out, either the regulations have adapted or they have enough confidence that they can meet the regulations in Europe. So it’s that balancing act of dealing with the regulations, knowing you have a safe, trustworthy solution, but also being able to move fast. And I tell you, the regulation world is a tough one, because on one hand regulation really helps us, because it makes you do things the right way. On the other hand, it can make it difficult to deploy some of these solutions, because things move so fast that the second you write a regulation and get it through committee, it’s already outdated. Now somebody’s saying, hey, wait a minute, why can’t I do this new thing? So I do think there’s a danger in moving too fast, and we’re too small, I think, to keep up with all the different regulations, at least beyond what we can influence.

 

Jeff Dance – 00:35:34: Well, I definitely see you as a main player in the future of robotic safety, one of the serious companies that’s getting the funding and also helping customers move the needle. So that’s awesome. When we talk about dangers, there’s always this fear of robotics. If we’re going to have many times more robots and IoT devices in the next ten years, can you speak to any of those worst-case scenarios? A lot of people jump there really quickly. They’re like, oh, we have all these robots connected in a network. And given that cybersecurity is the silent warfare of today, one of the major forms of warfare, and that safety and security go hand in hand, as you mentioned upfront, should human beings fear this mix of AI and robots on broad networks? Can you speak to that?

 

Nathan Bivans – 00:36:30: Well, I wouldn’t say we should fear them, but we should definitely be cautious. And you’re right to point out that traditional safety is certainly a concern: is the robot just going to miss something and run you over? But if you look at the scale of risk, that’s generally a small risk, right? It’s one injury. In the worst-case scenario, let’s say you had a thousand delivery robots all over a city, and somebody hacked into them and was able to take control of them all. You could do major economic, if not physical, damage with an army of machines that you now control. So the impact of a security breach could actually be significantly more dire than a safety issue. I think you’re right that as we get to mass deployment, especially in less controlled environments where maybe there aren’t any fences, it becomes even more critical to take a really hard look at security, and to understand that security is an ever-evolving threat. Just because you were secure a year ago doesn’t mean you are today. You have to have the ability to do updates in the field; you’ve got to be able to push updates. You have to have an active threat management and threat monitoring process. And one of the things that’s really come to light, and there’s even been some guidance, I think, from NIST and the White House at this point, is software bill of materials (SBOM) management. Robots are built on complex stacks that pull software from a lot of different places, some of it open source, some of it closed source. You know, there are-

 

Jeff Dance – 00:38:11: Merit comes from all over the world.

 

Nathan Bivans – 00:38:12: Yeah, but you need to have an understanding of where it’s coming from, so that when some vulnerability in your stack inevitably pops up, and it will pop up, you know whether you’re affected by it. And that has to be almost instantaneous. That level of rigor and management of your software stack needs to be built into your systems, because otherwise you’ll have vulnerabilities you don’t even know exist, and that’s really scary. And, as I mentioned before, there’s this idea that sometimes a security vulnerability might result in a safety behavior. You need to have live monitoring at the endpoint; network monitoring alone won’t solve this problem. The robots need some level of monitoring themselves to watch for anomalous behavior. In the worst-case scenario, that might mean: hey, I’ve got to shut this thing down, because something funny is going on and I don’t want to take the risk that maybe I’m infected, that some bad actor got in somehow.
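The SBOM practice described here, knowing almost instantly whether a newly disclosed vulnerability affects your stack, amounts to a cross-check of your component inventory against advisory feeds. The sketch below is a minimal illustration only: the component names, versions, and CVE identifiers are made up, and a real pipeline would consume standard SBOM formats such as SPDX or CycloneDX and live vulnerability feeds.

```python
# Minimal SBOM cross-check sketch: given a bill of materials and a list
# of published advisories, flag affected components. All data is
# hypothetical, for illustration only.

sbom = [
    {"name": "ros-comm", "version": "1.15.9"},
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "zlib", "version": "1.2.11"},
]

advisories = [
    {"cve": "CVE-XXXX-0001", "name": "openssl",
     "affected_versions": {"1.1.1k", "1.1.1l"}},
    {"cve": "CVE-XXXX-0002", "name": "log4j",
     "affected_versions": {"2.14.1"}},
]

def affected_components(sbom, advisories):
    """Return (cve, component, version) for every match in the stack."""
    hits = []
    for adv in advisories:
        for comp in sbom:
            if (comp["name"] == adv["name"]
                    and comp["version"] in adv["affected_versions"]):
                hits.append((adv["cve"], comp["name"], comp["version"]))
    return hits

if __name__ == "__main__":
    for cve, name, version in affected_components(sbom, advisories):
        print(f"{cve}: {name} {version} is affected")
```

Because the check is a pure lookup over an inventory you already maintain, it can run automatically every time a new advisory lands, which is what makes the near-instantaneous answer Nathan describes feasible.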

 

Jeff Dance – 00:39:12: Some bad actor that could hang out there for a few years and then trigger something, much like with computers. That really happens today, right? You could have a bad actor hanging out for a couple of years until they decide to light something up.

 

Nathan Bivans – 00:39:23: Yeah, and you can hope it’s just something like a DDoS attack, but maybe they’re stealing your IP. Who knows?

 

Jeff Dance – 00:39:30: It’s interesting. If I could summarize: the fear is probably overplayed right now, if you look at the actual accidents and incidents we’re aware of, but the risk is still high in the future. And that’s why we need to be proactive about safety and cybersecurity, and really design with intent given where things are going.

 

Nathan Bivans – 00:39:51: I think that’s fair. There’s a lot of science fiction around this. Everybody’s afraid RoboCop’s going to go crazy, but the risk of that today, and the impact, is pretty small. We need to be careful, though, because we could expose ourselves to a lot of risk in the future if it’s not done right.

 

Jeff Dance – 00:40:09: That’s why we need leaders like you, and conversations like this, to really make sure that we’re all planning for that future together. To wrap up, I just have a few more questions, but I’ve really appreciated the insights so far. What advancements in robotics are you most excited for?

 

Nathan Bivans – 00:40:25: There’s something that’s been popping up a lot recently and has been on my mind: this move toward the software-defined robot. I’m stealing that from the software-defined vehicle work that’s going on. What really enables that is something called mixed-criticality systems. Instead of having, you know, I buy a safety box, I buy a sensor platform, they’re getting converged. This comes from the general increase in maturity of the platforms we’re building on, and it enables a lot of really interesting interactions between different levels of software, like sharing sensors, so we don’t have a dedicated safety sensor platform and then a separate perception platform. They have pretty much the same data; why can’t we use them together? We’re getting to that point. There are a lot of advances that I think are going to enable new applications and new use cases, because hopefully some of the cost and complexity of designing a robot and making it work in the real world can come down. And then there’s just the sheer performance and power we can get at the edge on relatively small platforms. I get excited by it, and I’m not the one doing perception, so I can only imagine the stuff the perception engineers are going to come up with in the not-too-distant future with all that power. It’s really exciting.

 

Jeff Dance – 00:41:48: We’ve been in the robotics space for the last six years, and the costs have come down while the capabilities have gone up. It’s been a significant ramp in both directions: capabilities up and costs down at the same time. That’s why we see the future evolving a lot more rapidly in the next ten years than it has in the last ten.

 

Nathan Bivans – 00:42:12: Yeah, I would definitely agree. It’s a very exciting time.

 

Jeff Dance – 00:42:15: The future of robotics being software is also a reality, right? Especially as we elevate AI and standardize hardware availability and capability.

 

Nathan Bivans – 00:42:26: Yeah. I’m a hardware guy, I go way back in hardware, but I fully admit that the future is software.

 

Jeff Dance – 00:42:32: It’s good to align on that. You had mentioned an industry leader that you were learning from, from a safety perspective, the fellow at NVIDIA. Any other industry leaders you look to for insights as it relates to robotics and robotic safety?

 

Nathan Bivans – 00:42:47: There are the old stalwarts, the Rockwells, the SICKs, the Siemens. They’re all working on some interesting stuff, and you’ve got to look at them and respect the work they’ve done in the industrial space. The people building the new software platforms are interesting too. You’ve got Eliot Horowitz doing Viam, which is a different approach to robotic software platforms. And then obviously ROS, which has been revolutionary in making robotics a little more accessible. You can get off the ground and build something in ROS pretty quickly, and it can also extend into a pretty sophisticated, very capable system. So there’s stuff happening everywhere. There are some startups I’ve been talking to doing some really interesting predictive modeling software, and a company working on what basically rolls up to be ADAS for a robot, more of that proactive safety layer. The innovation is coming from every corner, and that’s what’s so interesting right now. There’s so much going on, so many interesting technologies coming out, that the world’s going to be completely different in two or three years. That’s both a scary place to be, because you never know where you’re going to end up, and really exciting. I’m eager to see how far we get, where we are in three years. It makes my job really challenging, making sure we’re building the right products and following the right trends. It’s stressful, but it’s also super exciting.

 

Jeff Dance – 00:44:25: Yeah, I think Bill Gates said we overestimate what’s going to happen in a year, but we underestimate what’s going to happen in ten years. If we apply that to robotics, I would say we halve those numbers: we overestimate what’s going to happen in the next six months, but we underestimate what’s going to happen in five years in this space.

 

Nathan Bivans – 00:44:44: I think that’s probably a fair set of gauges to use, yeah.

 

Jeff Dance – 00:44:49: Awesome. Any other thoughts? Part of this podcast is to look at where technology is going, to try to design the future with more intent, and to have these conversations help us collectively think about the future and how we shape it. Any other thoughts on being more proactive in this space versus reactive? You mentioned that continuum of robotic safety, but do you have any other thoughts on the future related to that?

 

Nathan Bivans – 00:45:13: Well, I’d probably reiterate some of the things we’ve already hit on: you have to think about this stuff upfront. Don’t be afraid that you can’t solve all your problems and just turn a blind eye and say, I’m not even going to look at it. The most important thing is simply to have a gauge of where your problems are most likely, especially if you bring in somebody who understands how to do a hazard analysis. Quite often we see people thinking, oh, I don’t want to do a hazard analysis because the results are going to be too scary. Bring in somebody who knows how to do these things and has some experience; it probably won’t be as scary as you think, and you’ll actually feel more confident in your solution. And then don’t ignore security, and don’t assume that just because you use TLS on your connections, or some established secure communications protocol, you have security taken care of. There’s a lot more to it than that. If you do those things as a robot builder, you’ll be much better off as your solution matures, and you won’t be blindsided, because that’s the worst case. You think you’ve got something ready, you’re trying to get it shipped and into customers’ hands, or you’re at the point where you’re trying to scale, and if these things hit you as a surprise, they can cause huge upsets to your business. So spend a little bit of time upfront to give yourself that predictability and that confidence in the future.

 

Jeff Dance – 00:46:45: Thank you. One thought I had as you were talking: we have HR and people experience for people, but I’d envision a future, maybe ten to twenty years from now, where we have a lot of positions centered around the HR or people experience of robotics. That could be a robotic safety director, for example. If you have as many robots in your workforce as you have people, that might be an equivalent title or role.

 

Nathan Bivans – 00:47:11: Yeah, that’s actually a good point. Because we do so much safety, part of our onboarding for everybody is an understanding of what functional safety is and what cybersecurity is, at a slightly more technical level than the average person would get. And I think you’re right that in the future workplace, you’ll probably have to take that little online course and go through a few quizzes just to understand what it is to interact with robots, what to expect out of them and what not to expect. Yeah, it’s going to be a different workplace.

 

Jeff Dance – 00:47:45: Just like driving a car: you go through a lot of training before you direct that large machine. But on that note, it sounds like we can continue to look to you for robotic safety thought leadership. Any other resources you would recommend for companies when it comes to robotic safety training? There’s not just the building of the robots; there’s also those interacting with the robots. Does anything come to mind there?

 

Nathan Bivans – 00:48:11: Well, there are the trade organizations; A3 has some interesting training. The TÜVs all have their own training. Exida is another safety certifier we work with that does training. There are even Udemy courses and that sort of thing, if you’re not looking to become a certified safety engineer but just want an understanding of what this safety thing is and how to approach it. Like, these guys keep talking about a hazard analysis, what the heck is that? I’d encourage folks getting into robotics to just do a little research, watch a couple of videos, just to get a sense of what it is. It’s a lot less scary than a lot of people think it is, and that understanding will go a long way. You don’t have to be an expert, people spend years on that, but just having a basic understanding can allay a lot of fears.

 

Jeff Dance – 00:49:01: I appreciate you repeating that. I say repetition is a law of learning. Even though you’ve said this a few times, I’m sure the next time we dive in as engineers, we’ll catch ourselves three months, or hopefully not three years, down the road, when we’re looking for some cash for our startup or having an issue with a customer, and go, oh, Nathan told me about this. He told me to put a framework together, to do that risk analysis, and to do a little bit of planning so we can go faster later on. I appreciate that emphasis. Last question. You’ve been at a lot of different companies, have a great background, and seem really deep in your space. What’s been one of the most rewarding experiences you’ve had in industry so far?

 

Nathan Bivans – 00:49:45: Ooh, that’s a tough one. There are a lot of interesting experiences I’ve been lucky enough to be part of. Honestly, I would say the process of starting where we started, not even thinking about building a company, and then really coming up with something, finding a need, and building a business around it has been really rewarding. I’m somebody who’s driven by solving problems, as most engineers are, and I love seeing the solutions I come up with actually make a difference in the real world. It’s hard to beat that from a rewarding standpoint. I love seeing our customers using our products and our technologies to help them move faster and build better robots. And then ultimately they’re the ones selling and deploying those robots and making a difference in people’s lives, keeping them safer, allowing them to work faster, whatever the solution happens to be.

 

Jeff Dance – 00:50:43: Thank you. I would say you’re working in a really cool space, so it’s cool to hear that your most recent experience is maybe your most meaningful one. As we think about the future, even at Fresh we’re working on the future as well; our goal is to do meaningful work like you’re doing and align with partners and customers that want the same. But you’re at the intersection of the future of robotics and real meaning. If you’re helping customers deploy faster but also safer, there’s meaning there: you’re saving lives and improving lives as a result of that technology work. And that’s harder to find. There are a lot of cool companies doing consumer gadgets, this or that, but something that really benefits human beings and shapes the future is awesome.

 

Nathan Bivans – 00:51:32: Yeah, I think that’s why I’ve stuck around for a little over eleven years now doing this. I think I found something with a unique intersection that has kept me interested.

 

Jeff Dance – 00:51:42: Keep going. Well, we’re looking forward to partnering more in the future. Thanks for joining us and sharing your insights as an engineering and robotics leader. Grateful to have you on, and I know our listeners will appreciate your insights.

 

Nathan Bivans – 00:51:55: Great, thanks. It’s been a great conversation. I can talk about this stuff forever.

 

Jeff Dance – 00:52:00: Thanks, Nathan. The Future Of podcast is brought to you by Fresh Consulting. To find out more about how we pair design and technology together to shape the future, visit us at freshconsulting.com. Make sure to search for The Future Of on Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and click subscribe so you don’t miss any of our future episodes. On behalf of our team here at Fresh, thank you for listening.