Podcast
The Future Of Decentralized Robotics
This episode explores The Future Of Decentralized Robotics with Jan Liphardt, founder of OpenMind. The discussion delves into the critical role of decentralized systems for building trustworthy, collaborative robots and highlights the challenges of machine identity, transparency, and security. Jan emphasizes open source, modular robotics software, and the rapid evolution necessary for responsible robot coordination in homes and cities. Key use cases in healthcare, education, and accessibility demonstrate the social impact and potential of intelligent robotic companions. The conversation also addresses the need for proactive discourse, regulation, and responsible innovation to ensure robotics technology broadly benefits humanity.

Jeff Dance: All right, in this episode of The Future Of, we’re joined by Jan Liphardt, founder of OpenMind, to explore the future of decentralized robotics. By way of background, OpenMind offers an open-source OS for intelligent robots and a decentralized AI control layer for large-scale robotic coordination. Their AI-native software stack gives robots the ability to think, learn, and work together.
Jan received a PhD from Cambridge University and serves as an associate professor of bioengineering at Stanford and a professor in residence at StartX Accelerator. He’s passionate about the intersection of data, machines, and decision-making. So am I, so I’m excited to go deeper with you. His work has been supported by various US government entities. He’s also a Searle Scholar (a program named for the Searle family, whose pharmaceutical company developed “the pill”), a Sloan Research Fellow, and a Hellman Fellow. So he has a lot of good background for our topic today. Jan, welcome to the show. Grateful to have you.
Jan Liphardt: Thank you so much, Jeff. Glad to be here.
Jan’s journey in robotics
Jeff Dance: Great. For those who don’t know you, I’d like to go a little deeper. You’ve worked in the fields of physics and biotech, then AI, and now we’re talking about robotics. What led you to this path where you wanted to go deeper into robotics?
Jan Liphardt: Well, like a lot of people, I’ve loved the notion of being surrounded by intelligent machines since I was a little kid. I’m sure all of us, when we were little, read books about the future and saw movies about the future—whether it was Star Wars, 2001: A Space Odyssey, or Isaac Asimov’s books.
For many of us, as we grew up, the idea of being surrounded by robots that can talk, think, plan, help, and work with us was fascinating.
When I started out in academia, I was a physics professor. Part of that meant building instrumentation for optical trapping: lasers, actuators, quadrant photodetectors, CCDs, and software to move large amounts of data back and forth while dealing with thermal drift and stability issues. That was my first exposure to connecting hardware using software and needing everything to work together really well. Then I got drawn into healthcare and medically relevant data and decision-making, which sensitized me to questions about privacy and computing on sensitive data, primarily healthcare data.
So it was a small step to put those things together—my fascination with building things and an appreciation for how valuable data is, and all the useful things you can do for humans if you’re able to connect data, privacy, and advanced physical manipulation capabilities.
Jeff Dance: Thank you. I can see how that all comes together in robotics, especially when you think about robots that are more human-like or that could solve problems. That’s great. We heard that you live with a humanoid and two robot dogs. Is that true?
Jan Liphardt: Yes, it’s actually three robot dogs. So that’s Bits, Bites, and Frenchie—the dogs—and Iris, the humanoid. Yeah, of course. Come on, these are not just inert pieces of plastic. Even roboticists tend to think of robots as something they work with in the lab.
Jeff Dance: Okay. They have names. This is good.
Jan Liphardt: But I encourage everyone who works on humanoids: bring one home with you and see what it’s like to have a humanoid in your home. One of the retrospectively obvious things is that in the lab, you can have big cooling fans running to keep the electronics cool. In a home, that’s absolutely the last thing you want. You don’t want a humanoid running a powerful cooling fan in the middle of the night, making noise at three in the morning. Those are the little things you begin to appreciate when you’re surrounded by machines at home.
Living with robots
Jeff Dance: That’s fascinating. You mentioned that if we’re going to live with machines, we should know how they think and help them think better. Is that why you’ve tried to incorporate them into your home and daily life, to get into that mindset?
Jan Liphardt: Yes, it’s to learn all the little things you only see when you have normal interactions in your household. For example, my 13-year-old son, Ben, who’s in middle school, just asked one of the quadruped dogs for help with his math homework. That’s not something I prompted. I didn’t say, “Hey, Ben, there are four LLMs, one of which can win the Math Olympiad, that you can query through this quadruped dog.” He just said, “Hey, Bits, help me solve these equations,” and the dog was perfectly happy to do that. That’s something that happens spontaneously when you have the technology around you and your kids. It sensitizes you to all the gaps in robotic software and hardware.
Jeff Dance: That’s fascinating. So the robot dog was connected to an LLM, and your son was asking for help with his homework. The dog wasn’t necessarily moving around, but it was close to him, so he was able to do that. It was really that AI interface—AI as the new UI—that let him request help because the machine was close by. He could have done it on his computer, but the dog was right there, so he asked for help with his math.
Jan Liphardt: Exactly. You’re absolutely right that you can interact with AI through a phone or computer. But what I’ve noticed is that people are naturally drawn to movement and to things that look at them, look at their face and eyes, and are more expressive. There’s something very basic about that, especially with little kids. When the dogs go to the park, all the little kids come running.
Their faces light up, and they ask the dogs to jump, sit, and bark. They’re infinitely more engaged than with a computer screen.
Jeff Dance: Interesting. So you have more engagement and interaction via a moving form factor, which is very human-like. We’re at the intersection of design, software, and hardware, so we do robotics. Our belief is that even design concepts should move, as though they were living things. That’s great.
Machine-to-machine communication
I want to talk more about the current state and then get to the future. You mentioned decentralization, describing robots moving from individual smart agents to coordinated, distributed systems. Why is decentralization such a critical evolution for robotics?
Jan Liphardt: There’s a lot going on in your question. If you look at people—the fact that we’re interacting now through technology—we’ve developed amazing systems for connecting: Zoom, telephones, mail, messaging. Humans have found it incredibly useful to make it convenient to reach out to millions or billions of other humans, and that allows us to do amazing things. There’s every expectation that autonomous, intelligent, creative machines will want to do the same.
So, if you have a bunch of quadrupeds and humanoids running around in your home, how do they talk to one another? How do they talk to other machines? For example, if Iris wants to help with the groceries and the groceries are delivered by Waymo, how does Iris interact with Waymo? Does she use her three fingers—she doesn’t have five—to use an iPhone? That seems incredibly labored. Is Iris able to interact directly with the Waymo API, or does she use specialized software originally built for blind people to interact with Waymo?
How does Iris open the trunk of the Waymo when it shows up outside? When you start thinking about how machines connect, you immediately get into the territory of which machines you trust. For example, if you have a humanoid in your home, and someone attacks that humanoid and is able to command actions, someone could unlock your front door from the inside. That’s exactly what happened when one robotics lab—whose name I won’t mention—gave access to others to their robots inside the lab. One of the first things that happened was someone commanded the robot to push the door open. That’s why, when you start thinking about machine-to-machine communications, you land on questions about which machines you trust and how you identify a machine.
There are no passports or birth dates—no “Is this an American machine or a Singaporean machine?” So all the ways humans have invented to help with trust do not necessarily scale one-to-one to robots.
Why is decentralization critical to robotics?
Jeff Dance: As you think about that problem and opportunity, the idea is that for the future, where robots collaborate, we need systems and protocols to enable trust for actions that are performed. So, as you think about distributed systems and decentralization—you’ve talked about the problem, but why is decentralization so critical? Is it so we enable trust systems for this robot-to-robot collaboration? Decentralization seems counterintuitive to trust. Can you go deeper there?
Jan Liphardt: Sure. Decentralization is a loaded term. There’s the computer science perspective, about distributed or decentralized systems, and then there’s blockchains and so forth. That gets us into public immutable ledgers. What’s important when you talk about blockchains are characteristics like immutability. Imagine you want to establish what actually happened and where things are. Then immutability can turn out to be important. What we’re seeing, practically, is that most countries and governments are way behind—almost like they’re asleep. As we deploy robots in cities, we need good systems—not in three, ten, or twenty years, but right now—to figure out where these machines are, who they are, whether we trust them, and who we allow to interact, coordinate, and communicate with which other parties.
Very pragmatically, we’ve found some of the infrastructure developed around blockchains to be easy to repurpose for machine identity and communications. Of course, no one would use a blockchain for machine-to-machine communications directly because of cost, latency, and other technical issues. But the identity and coordination part can be handled by a blockchain, and the actual data would move off-chain.
The main observation is that, for example, when I walk around my town with the quadrupeds, some people ask, “Jan, how come you’re not scared?” I say, “First, I wrote the software and it’s open source. If you have concerns about what these machines are thinking, go to GitHub and see how it’s architected.” That’s why we care about open source. Second, we wrote Asimov’s laws of robotics onto Ethereum. When the machines boot, they’re getting their guardrails from an immutable, global, public source of truth.
So, Jeff, imagine you say, “Jan, I don’t actually trust you. Who are you to claim that?” Then I can say, “Jeff, go to this address and look up Asimov’s laws. That’s what the robots are reading.” We’re not unique in that. Anthropic has done an amazing job thinking about constitutional AI. Google is doing a great job with constitutional robotics—they’ve baked Asimov’s laws into Gemini Robotics, their new robotics-focused model. Across the industry, there’s more appreciation that there needs to be a system of rules, and it would be dumb to hide those rules. If I tell you, “Jeff, we have this all under control, the rules are baked in,” your natural question would be, “Prove it.” So the rules need to be public. There’s a lot going on in this area, but the main highlights are that many human-focused systems for governance seem antiquated, and governments are moving very slowly.
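A minimal sketch of the boot-time guardrail check Jan describes. Everything here is invented for illustration (the address, the simulated ledger, and the pinned hash); a real deployment would read the rules from a public Ethereum contract rather than a local dictionary:

```python
import hashlib

# Simplified text of Asimov's three laws, standing in for the
# published rule set.
ASIMOV_RULES = (
    "1. A robot may not injure a human being or, through inaction, "
    "allow a human being to come to harm.\n"
    "2. A robot must obey orders given by human beings, except where "
    "such orders would conflict with the First Law.\n"
    "3. A robot must protect its own existence as long as such "
    "protection does not conflict with the First or Second Law.\n"
)

# Simulated public ledger: address -> stored content. On a real chain
# this mapping is immutable; here it is just a dict.
CHAIN = {"0xGUARDRAILS": ASIMOV_RULES}

# Hash pinned into the robot's firmware at build time.
PINNED_HASH = hashlib.sha256(ASIMOV_RULES.encode()).hexdigest()

def load_guardrails(address: str) -> str:
    """Fetch the rules from the (simulated) ledger and refuse to boot
    unless their hash matches the pinned value."""
    rules = CHAIN[address]
    digest = hashlib.sha256(rules.encode()).hexdigest()
    if digest != PINNED_HASH:
        raise RuntimeError("guardrail mismatch: refusing to boot")
    return rules

rules = load_guardrails("0xGUARDRAILS")
print(rules.splitlines()[0])
```

The point of the pattern is that anyone can recompute the hash of the publicly posted rules and confirm they match what the robot refuses to boot without.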
Jeff Dance: Mm-hmm.
Jan Liphardt: There’s a vacuum—a tech, regulatory, and coordination vacuum. A lot of companies, like us and others such as Anthropic and Google, are trying to be ahead of the curve and anticipate some of these issues.
OpenMind’s Approach
Jeff Dance: That’s great. I love that you’re getting down to the root of trust, given that people generally have a fear of change, and of robots in particular. I don’t know if that comes solely from the movies they’ve watched, but there is a fear—of how robots might behave, or at minimum of robots replacing some of the work people do. Historically, people feared the computer in a similar way. It did a lot of great things, but it also changed us dramatically. I think we can expect the same from intelligent machines. I love that you’re focusing on identity, trust, capability, and transparency. If we’re transparent and open, that can breed trust for those interested in going deeper. OpenMind is building a modular, open robotics platform for collaboration across machines. Can you share more about what you’re doing and some of your progress?
Jan Liphardt: Sure. There are two basic approaches to building software for robots. One is based on the notion of an end-to-end AI. A great example is software for a wheeled robot—traditionally called a car. The software looks at the world and decides when to accelerate, stop, turn left, or turn right. For that, an end-to-end AI is awesome. You collect vast amounts of data and spend a lot of money to perfect the AI. Then, when you’re done, you have great capability.
But what happens when you add a wheel, remove a wheel, add a sensor, add wings, or change the form factor? Then the end-to-end AI approach gets difficult. Another issue is that when debugging end-to-end AI, you’re peering at an enormous array of numbers, and it’s not obvious where decisions are coming from, where bugs are, or how to upgrade or fix the system. We’re much more on the side of many small models. We like the idea of having 10 to 20 AIs working together, using natural language to discuss what they’re seeing, what information is important, what needs to be acted on now, and what choices are possible. Then those decisions flow into the hardware abstraction layer, which governs movement, speech, and so forth.
We’re not unique in coming down on the side of many small models—many people are seeing that if you try to own the entire stack as one monolithic model, it’s slow and expensive. Because the architecture is composed of many individual pieces, you can imagine a robot that has learned something transmitting that skill as a modular skill to another robot. For example, we’re one of the first robotics companies developing a hugging motion policy for humanoids. We expect a humanoid to be able to safely hug people. That’s not normally something humanoids do, but we think it’s important. Once we have that policy and one robot is good at it, you can imagine that robot sharing it with all its friends. That gets us into the territory of good systems for moving skills from one machine to another.
The same argument can be made for data, situational awareness, language skills, and knowledge about individuals. That leads us to what we’re calling Fabric, a coordination and governance system for many connected machines.
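The architecture described above (10 to 20 small models exchanging short natural-language messages, with decisions flowing into a hardware abstraction layer) can be sketched roughly as follows. All model and class names are hypothetical stand-ins, and the canned responses take the place of real model calls:

```python
# Toy version of the "many small models" pattern: specialist models
# post short natural-language messages to a shared transcript, a
# planner picks an action, and a hardware abstraction layer (HAL)
# turns it into actuator commands.

def vision_model(state: dict) -> str:
    # A real system would run a perception model on camera frames.
    return "I see a person waving near the door."

def safety_model(state: dict) -> str:
    # A real system would check proximity sensors.
    return "Path is clear; no humans within 0.5 m."

def planner_model(state: dict, transcript: list) -> str:
    # A real system would query an LLM over the transcript;
    # here we pick a canned action.
    if any("person waving" in msg for msg in transcript):
        return "ACTION: wave_back"
    return "ACTION: idle"

class HardwareAbstractionLayer:
    """Maps symbolic actions onto actuator commands."""
    def execute(self, action: str) -> str:
        commands = {
            "wave_back": "raise_arm; oscillate_wrist",
            "idle": "hold_pose",
        }
        return commands[action.removeprefix("ACTION: ")]

def tick(state: dict) -> str:
    # One decision cycle: gather messages, decide, act.
    transcript = [vision_model(state), safety_model(state)]
    decision = planner_model(state, transcript)
    return HardwareAbstractionLayer().execute(decision)

print(tick({}))  # raise_arm; oscillate_wrist
```

Keeping each model small and its messages human-readable is what makes the debugging story different from an end-to-end network: the transcript shows where a decision came from.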
Skill-sharing across robots
Jeff Dance: It’s standard for a lot of roboticists to work on a single machine, but things get more complex when you start thinking about robot-to-robot communication, task coordination, and collaboration. The notion of a fabric of skills and capabilities that could be shared, if the robot has the capabilities, is really interesting. Is that part of the decentralization vision—that there’s sharing among robots, and that becomes decentralized because you’re not instructing it to go share with another robot?
Jan Liphardt: That’s precisely right. Just like people can share and sell skills—imagine online math tutoring—robots are even better at that. You can imagine one robot acquiring a new skill and being able to share it with millions of other machines almost instantaneously.
Of course, that’s fascinating, but it also requires a lot of thought and responsibility regarding what skills are being transmitted and who gets to access them. Some of these concepts are relatively new in the commercial, home-focused, and consumer-focused humanoid market. But people have been tackling these kinds of questions for decades in the defense and military realm, where you face challenges like coordinating many sensors—satellites, airplanes, submarines—all collecting information.
Ideally, you’d be able to assemble those assets into a coordinated team to accomplish specific missions. That requires knowing who’s who and what their capabilities are, and it involves dynamically optimizing the combination of assets to accomplish those missions. That’s the standard defense and military formulation of this problem. It’s not a new problem, but it’s underexplored when it comes to robots for non-defense settings—homes, families, schools, hospitals, logistics, and so forth.
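The skill-transfer idea can be sketched as a registry with integrity checks. This is an illustrative simplification, not OpenMind's actual Fabric design: a real system would use public-key signatures rather than a shared HMAC key, and the payload would be a motion policy rather than a toy dictionary:

```python
import hashlib
import hmac
import json

# Placeholder shared fleet key; a real deployment would use per-robot
# public-key identities, not a shared secret.
FLEET_KEY = b"fleet-shared-secret"

def publish(registry: dict, name: str, policy: dict) -> None:
    """One robot publishes a learned skill with an integrity tag."""
    payload = json.dumps(policy, sort_keys=True).encode()
    tag = hmac.new(FLEET_KEY, payload, hashlib.sha256).hexdigest()
    registry[name] = {"payload": payload, "tag": tag}

def install(registry: dict, name: str) -> dict:
    """A peer verifies the tag before installing the skill."""
    entry = registry[name]
    expected = hmac.new(FLEET_KEY, entry["payload"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, entry["tag"]):
        raise ValueError("skill failed integrity check")
    return json.loads(entry["payload"])

registry = {}
# Hypothetical hugging policy parameters, invented for illustration.
publish(registry, "hug_v1", {"max_force_n": 15, "approach_speed": 0.2})
skill = install(registry, "hug_v1")
print(skill["max_force_n"])  # 15
```

The check matters because a skill is executable behavior: a peer should refuse any policy whose payload does not verify, for the same trust reasons discussed earlier.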
Jeff Dance: That’s where things get exciting, right? Where there’s collaboration and you can do much more complex tasks that involve coordination. We’re excited about that future here at Fresh Consulting as well. You talked about decentralization requiring more transparency and about open source bringing more trust, but it also exposes everything you’re using. If you know exactly what’s being used, is there potential for more hacking? I’m curious about your thoughts, especially since your platform uses Unitree, which comes out of China. How are you thinking about preventing the compromise of security, especially in a coordinated system? You see it in movies, right? The fleet of robots goes bad.
Jan Liphardt: Yes, it’s unfortunately true that horror movies sell a lot more tickets than boring movies where things work and people are happy. No one would ever go see that movie.
An infrastructure of trust
Jeff Dance: Right, fair. But you mentioned warfare, and today we have countries using swarms of robots with agency to kill people—they’re sent on missions and act autonomously, which is a new realm of AI: agentic AI. When we talk about embodied AI, we’re giving machines these human-like capabilities. Any other thoughts on security, how we can feel good about the systems we’re building, and what we need to do to be sure we can trust them?
Jan Liphardt: One of the basic lessons of cryptography is that there is no such thing as security through obscurity. If you try to invent your own cryptographic techniques and keep them secret, that makes you look foolish. That just doesn’t work, and it’s been proven over and over again. The best way to build systems that are safe and secure is to be as open as possible. Most of the internet today runs on open source software for a very good reason: we need the internet to work.
The same is true of robotics. We want this stuff to work, and we want to be able to see what’s going on and trust it. That’s why I have strong convictions about this. Humanity has a dramatic motivation and justification for having a lot of this infrastructure be completely open and transparent.
Of course, there are certain capabilities that might become more broadly available to everyone. So there are downsides, and that has to be navigated with caution. But the critical thing is for everyone to know what’s going on. There will be many opinions, but the most important thing is to make sure people know what’s happening.
The future of robots
Jeff Dance: That resonates. So we can’t have a secret code language like the Navajo code talkers in World War II. Let’s talk more about the future. You’re working on the future, and we believe in the future of humans and machines. I’ve discovered, talking to many roboticists, that the more we do this right, the more it helps humans be human. We’ll do our best work when robots do the dull, dirty, dangerous, and mundane tasks that humans aren’t meant for, even though we’ve gotten used to them. Where do you see things going in the next 10 to 20 years? You said cities are way behind. Give us some vision for where you see things going, especially with how quickly AI is accelerating robotics.
Jan Liphardt: There are many misconceptions about humanoid robots. One misconception is that the point of having a humanoid robot in your home is to wet wipe your floor, fold your laundry, or pick up your socks. That’s an incredibly myopic perspective on what thinking machines can do for humans.
Generally, we spend less time looking at situations where, after interacting with a humanoid, humans have a big smile on their face, have learned something new, have connected with people they haven’t talked to in a long time, or are healthier. We’re much more interested in those kinds of use cases compared to just wet wiping your floor. The last thing you want is a $100,000 humanoid in soapy water in your kitchen. That doesn’t make sense.
Jeff Dance: So the companion nature—the intelligent, companion-like nature of a machine that has AI embodied in it—is that what you’re talking about?
Jan Liphardt: Yes, exactly. Educational use cases, health companions for people who live alone or have various health conditions. For example, we just had blind people reach out and say, “Hey, there’s a national shortage of seeing eye dogs, and they cost $200,000 each. They’re incredibly difficult for blind people to use, among other things, because dogs poop on the sidewalk, and if you’re blind, it’s hard to clean up after your seeing eye dog.” That’s a situation where it’s a win for everyone. Imagine having a quadruped that can navigate a city, talk to you and others, and narrate the world around you with voice. It’s always there for you. It’s not about making seeing eye dogs extinct; there’s a real gap, and blind people have asked us, “When can I get one?” So that seems like a win for everyone.
Another example is education. As someone who teaches, it breaks my heart when I look at 300 students in a lecture room. My ability to even know their names is hard—I may know the top 20, 30, 40, or 50 students, but not all 300. How am I, as an educator, supposed to be aware of what every student knows and doesn’t know, and reformulate the material to challenge them and address their knowledge gaps? The way we do things now, especially in education and healthcare, seems utterly medieval. We’re not doing a particularly great job. We’re spending most of our time looking at situations where current systems are not performing well, and where technology gives an immediate benefit.
This isn’t about one-to-one replacement of humans. It’s about better outcomes—in education, healthcare, or just making it more convenient for blind people to navigate a city.
Jeff Dance: That’s fascinating. I appreciate that perspective. The CEO of Google, Eric Schmidt—or is he still the CEO?
Jan Liphardt: I don’t follow his current titles. I’m more familiar with other things he’s doing, but our audience can Google that for us.
Interoperability
Jeff Dance: No worries. Eric Schmidt said that the AI revolution isn’t overhyped; it’s actually underhyped. We’re only beginning to understand what intelligent machines can truly do. I think that’s the next wave of the future as we put intelligence on top of machines. We talk about “smart” devices, but our machines haven’t really been smart—they haven’t had intelligence, agency, and collaboration. We’re at the beginning of that. As we look to the future, with more interoperability between robots, how do you see commercial partners in the robotics space catching up? Do you see them bolting onto a platform like yours? Many robotic systems are still closed; some are open for educators, but the mass market is still a closed ecosystem. How do you see the market opening up? What will help robotics become more interoperable?
Jan Liphardt: Fundamentally, it’s going to be consumers asking for capabilities. We see that play out repeatedly. The four robotics companies we work with came to us because they had customers complaining. They wanted the robots to do more—spatial navigation, reasoning over physical environments, supporting different dialects, and HIPAA-compliant multi-agent robotics endpoints.
If you want to deploy robot hardware into US hospitals, for example, it’s convenient if the software and data are developed in the US and stay on HIPAA-compliant American servers. What we’re seeing is that everyone wants to be the Apple of robotics. Everyone wants the hardware-cloud-app store for robots. Everyone would love to monopolize humanoid robotics. That’s exactly what I do not want as a parent, someone in the medical field, or someone with elderly parents.
If you ask, “Jan, are you just being naïve?” my response is, “Look at Android.” The majority of phones on Earth run Android. There’s a good reason: if you’re a phone hardware company, you want to sell lots of phones and have people like your software. So you’re not generally going to roll your own operating system; you’ll focus on features that differentiate your product. So you end up with Android. Everything we do here is designed to help realize that future for robotics, not just cell phones.
Jeff Dance: That’s a good example. I noticed your website mentions Android. Is there an underpinning of your platform that’s Android-based?
Jan Liphardt: No, it’s all Ubuntu, with a lot of Python code that allows about 10 to 20 different AIs and models to coexist and communicate. Out of those conversations arise the motions, speech, facial expressions, and actions of the robot.
Jeff Dance: Great. How many robots do you have coordinating right now that you’ve tested? I noticed several standard libraries set up for collaboration. Have you gone wide-scale yet?
Jan Liphardt: We have five different humanoid platforms we work with from different manufacturers—Unitree, Engine AI, UB Tech, Deep Robotics, and open source robotics hardware like K-scale.
We want our software to be easy to deploy on many different types of hardware—two legs, four legs, wheels, no wheels, and so forth. That’s why it’s important for us to have many different types of robots running out in San Francisco. We’re close to South Park, so if you want to hang out with lots of robots, come visit South Park. Most days at lunch, one or more of our robots will be running around the park, hopefully not being too annoying or taking your seat at lunchtime. We try to have our robots defer to humans when they want to sit down.
Insurance and regulatory obstacles
Jeff Dance: Nice. You mentioned cities are far behind. What sort of regulatory frameworks do you think we need to accelerate the movement you want to see? In San Francisco, did you need unique permits? Are they ahead of the times or behind? Tell us more about the regulatory side, especially as we start putting intelligent machines with agency outside. Cities will be scared—some are more forward, others are not. Any thoughts on the regulatory or legal side for the future?
Jan Liphardt: Most companies want to stay in business, and we’re aware that if one of our robots hurts a child, we shut down the next day—no question. The main thing for us is to avoid any kind of catastrophe or problem.
I would like to think most companies operate the same way. In terms of rules and regulations, there really aren’t many specifically for humanoid robots, but there are many general rules—don’t do dumb stuff as a company or you’ll be liable. One gap we’ve noticed is it’s very difficult to get insurance. If you wanted to deploy a hundred humanoid robots into people’s homes, good luck finding an insurance company to write you a simple policy today. If anyone wants to start an amazing company in a trillion-dollar vertical, build a humanoid insurance company—we’ll be your first customer.
Jeff Dance: Robotic insurance is definitely a white space that isn’t yet filled, but it’s a big opportunity. We’re running out of time, but I wanted to ask: as robots accelerate with the advent of AI, where do you see some of the big breakthroughs coming in the next few years?
Jan Liphardt: One breakthrough will be people rethinking the potential roles robots can play in society. We need to get out of the mindset that robots are just for putting golf balls into boxes or wet wiping floors. If we begin to appreciate the real opportunities, I suspect it will be a win for society, especially in healthcare and education.
The best story I’ve heard lately is about an Australian company, Andromeda, deploying humanoids into memory care facilities. There, 40% of patients haven’t been visited by a relative or friend in years—they’re almost completely isolated and told not to get up because they might fall. When a humanoid is deployed in that setting, looks at them, listens, and makes them smile, the human nurses have to wipe lipstick off the head of the humanoid in the evening.
Imagine being 10 or 20 years into memory loss, and suddenly you have something in front of you that listens and makes you laugh. That may sound dystopian, but who am I to criticize a situation where someone who hasn’t been visited by a family member in years smiles and laughs?
Designing for more good and less harm
Jeff Dance: Connect that person to the world’s intelligence, give them empathy, have that humanoid remember everything that person said—I can see how that could be powerful. I’m seeing your vision for the future, where robot companions are not just workers but intelligent and social companions as well.
How do we design this future with more intent? We’ve seen some of the harm that smartphones have done, changing who we are. If the future involves swarms of robots collaborating in a new paradigm, can you think of principles beyond openness, transparency, and intent for those designing these systems? How do we assure more good and less harm? Eric Schmidt said the race for AI dominance isn’t just about innovation—it’s about power, sovereignty, security, and ultimately human dignity. How we navigate this will define the next century and the kind of world we build, one that either empowers or erodes humanity. Technology has a life of its own and does both good and bad. How do we make sure it stays on the good side and benefits humanity?
I think the computer has brought more good. The smartphone—you could say it’s the computer in our pocket, and there’s a lot of good, but also a lot of negatives.
Jan Liphardt: Every parent can sympathize as they watch their kids at dinner, silently staring at their phones instead of talking about their day. The single most important thing—this is a whole podcast topic in itself—is for everyone to inform themselves and develop an opinion. Try to build better technology, talk to your friends, neighbors, and relatives, and engage in debate about the technology and where you want it to go. The more people just sit back and let things happen, the less likely we are to arrive at a great outcome. If you’re an educator, technologist, regulator, or parent, make sure you dig into this, pay attention, and engage with others.
Jeff Dance: Thank you. I agree—the discourse and attention to this are probably the most important things. If we just consume things and let them affect us, we don’t know the outcome. But if we create a plan, discuss it, and create principles around it, there’s a chance for a much better outcome—one that benefits all of humanity. I’m excited about that future and feel responsible as a creator to talk with people like you to figure out how we can make it best for the next generation.
Jan Liphardt: I’ve been a little disappointed by most humans. For example, when I give a lecture about privacy and ask, “Who here cares about privacy?” everyone raises their hand. Then I ask, “Who here uses Gmail? It’s free, right?” Everyone raises their hand. Then I ask, “Have you ever thought about what the business model of Gmail is and why you get free email?” Some people say, “That’s a good point,” but it’s so convenient and free. So there are many choices humans make that aren’t as informed as one would like.
Jeff Dance: Makes sense. Well, Jan Liphardt, thank you so much for your insights and the depth of conversation. I appreciate your passion for where you’re trying to take the future. I’m grateful to have you on the show.
Jan Liphardt: Jeff, it’s been a delight. Thank you for the awesome questions.