Podcast

The Future Of AI Agents

In this episode, Jeff Dance, Host and founder of Fresh Consulting, is joined by Jason Thane, Co-founder and CEO of GenUI, and Elisha Terada, Technical Innovation Director at Fresh Consulting and Co-founder of brancher.ai, to discuss the evolution and future of AI agents. Together, they highlight the shift from traditional AI bots to agentic AI, which involves more autonomous decision-making. The conversation covers the implications of decentralized AI technology and its potential to enhance human creativity and productivity.


Jeff Dance: In this episode of The Future Of, we’re joined by GenUI co-founder and CEO Jason Thane and Fresh’s Technical Innovation Director and Brancher AI co-founder Elisha Terada to discuss the future of AI agents. Guys, I’m so excited to be talking about this topic given that we’re at the beginning of such a big transformation. Welcome to the show.

Jason Thane: Thanks for having me.

Elisha Terada: Thanks for having me too.

Jeff Dance: Awesome. I want to give a quick background to Jason and Elisha. Jason, as I mentioned, is the co-founder and CEO of GenUI, a Seattle leader in creating innovative software products for the last 16 years. GenUI provides an end-to-end product focus through experience design and software development, serving clients like Microsoft Research, Nordstrom, Seattle Children’s Hospital, and recently the Allen Institute for AI. We actually met skiing about 10 years ago and have been chatting about big topics like this ever since.

Jason Thane: Can’t believe it was 10 years. Yeah.

Journey into AI: “Compression of knowledge”

Jeff Dance: That’s crazy. Elisha is, as I mentioned, the Technical Innovation Director at Fresh Consulting, also the co-founder of Brancher AI, which has over 150,000 users. Elisha combines over 14 years of experience in software product development with a deep passion for emerging technologies. He’s also helped here at Fresh over 100 businesses create impactful digital products and guide them through the strategic adoption of new emerging technologies like generative AI, no-code solutions, and rapid prototyping. So, so grateful to have you both. Your deep experience in the software space, but also the AI space. I’d love to hear any other insights you have into your experience just with AI for the audience so they can understand your background. Jason, start with you.

Jason Thane: Yeah, thank you, Jeff. As you mentioned, I’ve been running GenUI for the past 16 years, which I co-founded with Jason Greer. We’ve been engaged in innovation services, giving leverage to visionaries and helping them bring their visions into reality. That’s a heavy lift. The devil is in the details, and there are a lot of details. We exist to get behind that vision with the visionary and bring it to reality, bring it to market. In the early days, it was all about mobile app development and computing in your pockets, having a great computer connected to the internet with you all the time. But it’s transformed over the years. Now, we’re seeing so much innovation activity happen around AI. I think it’s a new era of computing when we’re able to use these models and indeed develop agentic workflows to get things done. I want to mention that we are on the cusp of releasing, with the Allen Institute for AI, an app called OLMoE. This is an on-device language model. It runs on your phone. You can ask it questions about anything in the world in airplane mode, and it will give you a reasonable answer. It’s a four-gigabyte file that you download. It’s compatible only with iPhone 15 Pro and above, but it’s incredible how much knowledge is compressed into that four-gigabyte file. It’s really made me realize that that’s what this stuff is. It’s compression of knowledge. Because of that compression of knowledge, we can do things that were never possible before. So that’s kind of an example of what we do.

Jeff Dance: Carrying the world’s information in your pocket without the internet, I mean, how game-changing is that?

Jason Thane: Yeah, and it’s really part of the debate first about open-source versus closed-source AI models, but also local models running on your own device on the edge or in completely private mode. These things are important. And it’s definitely not decided that all the AI is going to happen on a big AI lab’s cloud server.

Jeff Dance: Amazing. Elisha, how about you? Tell us more about your journey into this AI sphere.

Elisha Terada: Yeah, I think pre-generative AI, and when we say AI, it’s such a big umbrella and includes so many different terms in one. But pre-generative AI, I got really interested in machine learning. I think it was around 2016 or 2017, when every company seemed to come up with their own machine learning algorithm, whether it was banks sifting through transactions to find something that didn’t look right, or companies like Expensify releasing products where you just send an image of a receipt, and they’ll figure out how to automatically categorize it and bring in the numbers. Turns out, I think 10% or 20% was still done by humans behind the scenes. But the whole hype led me to be really interested in the field of AI. I started going to conferences hosted by Microsoft in Redmond, I think it was PyData, and got to know a lot of people in the space and got really interested in it. I myself created a model trained on sticky notes, where I would train it to recognize multiple sticky notes, even if they were overlapping each other, and then crop them at the right coordinates and get the information out of them. I think, Jeff, you might remember our Invent Value project, where we wondered how we could tie the physical information we capture during workshops into the digital space automatically, simply by taking a photo of sticky notes and turning that into digital data. I created a machine learning model to handle that, running on-device on iOS, because that’s right when iOS started to support ML capabilities right on the device, where you can deploy models and have your iOS mobile application work with them.

We also had some fun projects like training to recognize ingredients on a pizza by training on the pizzas we get at our office for lunchtime. I would annotate individual ingredients and say, “Hey, please tell me what’s on the pizza.” I think those days are gone now that all these things are built into the device and you just take a photo and it’ll tell you anything you want about the world. So that’s such an amazing world.

We’ve shifted from the effort and manual labor we used to put into training models on our own data to general models that have become great general-purpose tools for solving many problems. I continued my journey at Fresh Consulting, creating applications like Brancher, which lets users create AI-powered applications with no coding. It used to be that I had to download Python code and run training notebooks on my device, and it would take two hours to train. Now it’s just something you click and type, “Hey, this is what I want,” and you just deploy and make it work. So I’m amazed at how fast things have moved, and I continue to invest in learning about these advancements.

Jeff Dance: Thank you. I look at you guys both as experts and students. And I think we’re all students of AI right now. No matter how deep you are, it’s changing so fast that we’re all learning from each other, given how fast things are going. And truly we’re at the beginning of another 10-year transformation, like software as a service or cloud or smartphones. It’s the biggest thing since the internet, right? And there’s a real aspect of intelligence.

Jason Thane: Absolutely.

Jeff Dance: We know it’s artificial, but there’s intelligence that’s growing in leaps and bounds. And it’s been amazing since the transformer came out, and then GPT, putting us in a position to vectorize information, just how quickly things have changed. We’re looking at human-like performance across 20 different dimensions that once seemed 20 or 30 years away. And I don’t think people realize how fast things have pivoted. There’s really been a small bang, preparing for a big bang moment in time. And it’s real. It’s now. And we’re at the beginning.

Jason Thane: The future is going to be remarkably different from the past. That’s the one thing I know. The difference for us looking 10 years into the future is going to be so much more dramatic than anyone could have said 10 years ago. And thinking of all the change that’s happened in the past 10 years, it’s amazing. But we’re right on the cusp.

What are AI agents?

Jeff Dance: We’re on the cusp. We’re at the beginning. We want to drill in, as we think about where we are, we know that agentic AI is this next step. We’ve started to experience sort of generative AI. Half the world is using it in one way or another. It’s sitting on the devices in our pocket now. But what are AI agents and how are AI agents different than AI bots? We’ve had AI for a long time. So just to dive a little deeper there as we think about the current state and then we’ll shift to the future state. But Elisha, what’s your kind of definition of an AI agent and how are these different than the things we’ve had in the past?

Elisha Terada: Yeah, I think overall the industry is still trying to throw terms at something that never existed before, and those terms end up meaning different things. Like when we say agent, what is an agent? There are so many articles you can look up online; type “AI agent” and, based on which article you read, they mean completely different things. And sometimes it’s also vague and broad. A term like AI chatbot meant one thing when ChatGPT came out, but five years from now our understanding of what an AI chatbot is capable of will have shifted, so the definition of what it can do will shift as well. Given that, a chatbot, as I understand it best, the way I think we understand it in our minds, is a visual interface that you can go into, whether on your mobile phone or a website application, and you type something, “Hey, how do I cook eggs?” hit enter, and then you wait for a little bit. Within five or 10 seconds, you get the answer. And then you’ll interact with it again and again. And when you’re done, you quit the application.

The rise of agentic AI demands a little less of your attention to how the AI is thinking and how the AI is asking you to interact. It’s a new era where I ask an AI agent to figure something out, given a goal. So I say, “Hey, my goal is to find people in Seattle who are software developers and capable, and I’m interested in working with them. So please go find who I should hire.” Unlike a chatbot, where I ask it for something, the chatbot asks me the next question, and I say, “Okay, now go to LinkedIn, scrape the website,” and ChatGPT says, “Okay, what’s the next step?”, the agent will go and do its own thing without me giving further instruction. And it might even take 30 minutes for the AI agent, the agentic AI, to think and refine, review its own work, and then come back to the different steps and iteratively loop through the tasks it comes up with on its own.

Jeff Dance: Thanks for that explanation. Jason, what else would you add?

Jason Thane: Yeah, I mean, I think that’s right on. Moving from chatbots to agents is really an advancement, and it’s a qualitative change. I would define an agent as having agency, which is like the specific empowerment to complete a specific task, where a chatbot might just provide information.

Jeff Dance: Yeah.

Jason Thane: And that requires that an agent can generally be trusted, although we have to verify, right? You can’t just blindly trust these things. You have to generally trust the agent to be able to do the right thing, even in unanticipated ways. So you don’t know exactly how it’s going to complete the job, but it’s going to. You know, we expect an agent to continue attempting until it’s either successful or is stopped somehow. So yeah, the current phase of innovation in language model-powered agents means that these agents can really understand our need and our intent. They’ll be capable of providing a lot more assistance. They’ll be good at conforming to policy and compliance requirements.

Jeff Dance: Love that.

What is agentic AI?

Jason Thane: That’s all part of building the agentic app is that you have to verify what they’re doing. You have to verify compliance, and you have to verify that they’re following policy. One other thing I would add is that it’s important to understand that when we’re talking to one of these agents, we’re actually talking to an orchestration of multiple agents. So it’s a group of agents that’s charged with the task. This architecture is highly effective when each agent is given a little slice of the task. Some of those tasks are to check the output of other agents. Some of those tasks are to classify the input or classify the problem. Some of those tasks are to move the process along. Others are to guarantee compliance, effectiveness, guarding against hallucination or abuse or negative actions. So there’s really this new kind of software engineering that’s emerging. It’s like the level beyond prompt engineering, which is prompting a chatbot. There’s multi-agent orchestration. And there are a lot of parts that can go into that. There’s this coordination of layers of agents. There’s prompt engineering. There’s RAG, retrieval augmented generation. And there’s even sort of fine-tuning and training on maybe proprietary data that’s not available to the outside world. There’s still a lot of work to be done to make these agents really effective, trustworthy, and guarantee their level of quality and effectiveness.
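The orchestration Jason describes, one agent per slice of the task, with checker agents verifying output and guarding compliance, can be sketched roughly in code. This is a minimal, hypothetical Python sketch: every function name is an illustrative stand-in for a model call, not any real API.

```python
# Hypothetical multi-agent orchestration: each "agent" owns one narrow
# slice of the task, and a compliance agent checks output before release.

def classifier_agent(request: str) -> str:
    """Classify the incoming request so it can be routed (stubbed)."""
    return "billing" if "invoice" in request.lower() else "general"

def worker_agent(request: str, category: str) -> str:
    """Draft an answer for the classified request (stubbed)."""
    return f"[{category}] draft answer for: {request}"

def compliance_agent(draft: str) -> bool:
    """Guard against policy violations (stubbed as a keyword check)."""
    return "password" not in draft.lower()

def orchestrate(request: str) -> str:
    """Coordinate the agents: classify, draft, verify, then release."""
    category = classifier_agent(request)
    draft = worker_agent(request, category)
    if not compliance_agent(draft):
        return "Escalated to a human reviewer."
    return draft

print(orchestrate("Please resend my invoice"))
```

In a real system each function would wrap a prompted model call, and the verification layer is what makes the whole pipeline trustworthy, as described above.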

Elisha Terada: I would love to add on top of that, if anybody is still confused about what exactly an agent is: I think it’s essentially mimicking how we work. I can ask someone that I trust to go achieve a goal, like, “Hey, please find someone we should contact for the next deal we’re interested in selling with our software services.” You still would want that person to have a supporting mechanism. Maybe the person has a security officer they can talk to, or maybe a CIO, to make sure we’re not violating some HR code of conduct. It’s really mimicking the human organizations we’ve come to form, where we don’t trust one individual to do the right thing all the time. We want people to verify that you did the right job. The managers might still need to supervise. And if we think in those terms, I hope people can understand that we’re trying to achieve something similar, in this maybe black-box thing where you don’t quite get to see how it works, but mimicking how organizations form in human society, in the form of a computer.

Jason Thane: Absolutely, that’s totally right.

Jeff Dance: Amazing. A couple of thoughts came to mind as you guys were talking. One is something adjacent, but I think it may help from an understanding perspective: robotics. We do tasks with robotics, and they’re often simple tasks. But as soon as you accomplish a simple task well, it begets the next task. True work automation will involve multiple tasks for a robot to be successful. So the history of robotics has been like, yeah, we’re doing these very programmed things well in some highly automated routines. But now that robots are smarter, we’re trying to get into more complex tasks. And I see a parallel with AI agents: yeah, we’ve been doing these tasks that have been more defined. But now, once you solve a simple task, we can add on an orchestration of different things. So instead of narrating 10 or 20 steps to accomplish something, we could say to a robot, go get the lawnmower. And in the same way, we could say, hey, maybe book me a trip to Germany for Christmas if it’s under 500 bucks, and put together that trip itinerary and make it happen, knowing who I am, or something like that. Trying to think of some examples, but what other examples come to mind for you guys that could help this come to life for the listeners?

Jason Thane: I think that’s a good metaphor. The trouble with automating a robot is that you have to predict everything the robot could need to do. That’s if you’re writing plain code with no intelligence baked in. But what’s happening in robotics right now with AI is the same as what’s happening with this sort of agentic process automation: by baking in more intelligence, and my paradigm is that these models are compressed knowledge, making that knowledge available to the task and giving it some agency, you can then handle all the edge cases without having to code for them specifically. It does mean that you have to manage the model just like you might need to manage a person. And that’s what these orchestration layers are about. But it really means that we can go a lot farther, to your point, Jeff, of chaining together the actions and creating a really holistic process or assembly line. If you tried to write all of that in code, it’s an infinitely complicated task. But using intelligence, compressed knowledge, and agent orchestration, we can actually make things work.

Jeff Dance: Nice. The one example, I mean, we’ve been experiencing Siri and Alexa for some time now, right? And could we say that they are, while they have large bodies of knowledge, they’ve been kind of dumb or basic or kind of like, Alexa does a lot of party tricks and that’s cool. That’s fun or whatever, but we’re not really getting work done, right? We’re retrieving tidbits of information. We’re not doing complex tasks. And so, as we think about agentic AI, we’re thinking of something that’s doing much more complex work, correct?

Jason Thane: Right. It’s just like when we call the bank and we use their phone tree and you find that they try to be intelligent and try to handle your request with a phone tree and with a chatbot that you’re talking to. But really, it can’t get anything done. And I think that’s the same problem that we’ve had with Siri and Alexa. And the issue is not that we can’t give them access to our computers, for example. We can do that. The issue is that they haven’t been trustworthy. They haven’t been really intelligent enough to sort of be given agency. They haven’t been agents. They’ve been chatbots. And I think that’s the difference. Yeah, Elisha.

Elisha Terada: Yeah. Maybe they have been more like a qualifier of whether a human should be involved at this point, a filter like the phone tree: okay, you’re in the category of customer who needs this help, and maybe at this point you need to talk to a human. But that could completely change, where the agent takes it all the way to resolving my problem. Because for a lot of problems, the human didn’t really need to do anything other than click a button, like, “Okay, yep, your modem is reset and hopefully your internet is back.” That could have just been done by a computer deciding to do that for me, right?

Jeff Dance: Interesting. Elisha, as we think about this next version of agents, what other examples could come to your mind or how do you see this moving towards this term, AGI, the artificial general intelligence? Any thoughts there?

OpenAI’s new Deep Research tool

Elisha Terada: Yeah, we have started to see quite a bit coming out in just the last month or two. I think this month, OpenAI released what’s called Deep Research. Compare that to something like Perplexity, where you ask a research question about something in the scientific field, and it takes maybe a minute or less for Perplexity to look through its own indexed knowledge and try to summarize the findings based on the information it already has. It’s nice and snappy and quick, like, great, this is all useful. And we thought that was already useful and the end of the research. But Deep Research by OpenAI, which is not to be confused with the Google version, takes 30 minutes to actually do the research. And it doesn’t just look things up on the internet and try to summarize them in a human-readable way. It does research and thinks and pauses, like, “Hmm, is this really answering the question?” Then it goes through an iterative loop until it thinks it’s done enough work. It’s as if I asked someone to go do the research, but don’t just hand me the first 10 links you find in the Google results as, like, “Hey, I did the research.” No, I want you to read it. And then, like, “Huh, this information I read in this first article is leading me to think about this other thing I should go research.” It goes deeper and deeper. I do that manually when I read articles from big firms like McKinsey and KPMG, where I read a 10-page paper, and I can summarize it, but it leads me to think, “Huh, there must be some other thing I should be paying attention to, and this term looks interesting,” and I go do more research, more research. It’s like a graph of knowledge that I end up using my brain to follow and find and summarize and synthesize results from.
But that’s basically what Deep Research by OpenAI is doing: it goes off on its own, not just doing quick research and spitting the knowledge back, but thinking on its own, “Is this really answering the question?”
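The iterative loop Elisha describes, read, follow the new leads the reading suggests, and keep going until the leads run out or a budget is hit, can be sketched as a simple worklist over a toy knowledge graph. This is purely illustrative: the dictionary stands in for model-driven search and reading, and the names are hypothetical.

```python
# Toy sketch of an iterative "deep research" loop: follow leads from
# each finding instead of stopping at the first answer.

def research(question: str, knowledge: dict, max_steps: int = 5) -> str:
    notes, frontier = [], [question]  # topics still to investigate
    steps = 0
    while frontier and steps < max_steps:
        topic = frontier.pop(0)
        finding, leads = knowledge.get(topic, ("", []))
        if finding:
            notes.append(finding)
        frontier.extend(leads)  # follow-up topics the reading suggested
        steps += 1
    return " ".join(notes)    # synthesize everything gathered

# Each topic yields a finding plus follow-up leads (stand-in for search).
KNOWLEDGE = {
    "ai agents": ("Agents pursue goals autonomously.", ["orchestration"]),
    "orchestration": ("Multiple agents check each other's work.", []),
}

print(research("ai agents", KNOWLEDGE))
```

The `max_steps` budget plays the role of the 30-minute time box: the loop stops either when no leads remain or when the budget runs out.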

Jeff Dance: That’s interesting. So an agent of the future could take some time to do its research. It’s not going to be instantaneous. Hey, as we’re doing some complex tasks, it may do these iterative loops. It may fire up an orchestration of agents, and then it’ll come back when it’s done.

Jason Thane: And it’s still orders of magnitude faster than a human. You could pay a human for a week to do that, right? It might take 20 minutes or 30 minutes.

Anthropic & Claude

Elisha Terada: The other example I wanted to highlight is Computer Use, by Anthropic. And the competing solution from OpenAI is Operator. Essentially, they work with your computer or browser and act on your behalf. I’ve actually installed a program on my computer that would take over my mouse and keyboard. And then I ask, “Hey, look at my screen, look at my spreadsheet. Look at column A with all the technologies I’m interested in, and then look at column B, which contains a description I want you to fill in. Go look at column A and fill in the description of each tech in column B.” And I hit enter, and then hands off. It starts to look at my screenshot, goes to the cell, “Okay, it looks like I need to left-click,” left-click. Then, “Okay, it looks like I need to do research,” and it starts to fill in the information. And it takes a pretty long time between each task because it needs to take a screenshot and confirm that it’s on the right path. But it keeps going and going and going. And I was really amazed at how the agentic model, the agentic capability, goes beyond just me interacting with the chat and getting text results as the output. It literally could do work for me on the computer, through my computer, as if I’m working. An IT team could probably not even tell whether I’m doing the work or a computer is doing the work on my computer.
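The behavior Elisha describes boils down to an observe/decide/act loop: take a screenshot, decide the next UI action, perform it, and repeat until the task is reported done. A heavily stubbed sketch, with hypothetical helpers standing in for real screen capture and model calls:

```python
# Stubbed observe/decide/act loop behind computer-use style agents.
# Every helper here is a stand-in; a real agent captures pixels and
# asks a model which click or keystroke to perform next.

def take_screenshot(state: dict) -> dict:
    return state  # real agents capture the screen as an image

def decide_action(screenshot: dict, remaining: list) -> tuple:
    # Stand-in for the model choosing the next UI action from the screen.
    return ("fill", remaining[0]) if remaining else ("done", None)

def perform(action: tuple, state: dict) -> None:
    state["filled"].append(action[1])  # e.g. type into the next cell

def run_agent(rows: list) -> list:
    state = {"filled": []}
    remaining = list(rows)
    while True:
        shot = take_screenshot(state)      # observe
        action = decide_action(shot, remaining)  # decide
        if action[0] == "done":
            return state["filled"]
        perform(action, state)             # act, then loop again
        remaining.pop(0)

print(run_agent(["Python", "Rust"]))
```

The screenshot-before-every-action step is why each cycle is slow, as Elisha notes: the agent re-observes the screen to confirm it is still on the right path before acting again.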

Jeff Dance: Wow, that’s fascinating. So you mentioned, Elisha, some bigger players like Anthropic and OpenAI. Any other big names come to mind, from either of you, that are really moving the needle right now with agentic AI?

Startups & a distributed future

Jason Thane: I would just put a shout-out there for all the startups that are doing it. You know, there are sort of two competing trains of thought. One is that AGI and artificial superintelligence are all going to happen in the big AI labs. But I’m definitely not anywhere near ready to, you know, kind of wave my hands and say that a centralized artificial superintelligence is going to do everything for us. I think the future, and the reality of the present as well, is very distributed. And I think that means that startup innovation is not going anywhere. In fact, I think it’s going to be more important and powerful than ever, particularly as Elisha was describing: having an agent, delegating tasks to an agent, having it get things done. I think that those who are building new companies and new products are going to have incredible acceleration, incredible power tools to get the hard work of innovation done. And that’s really exciting. I think these companies are going to solve specific problems for humanity, and they’re going to be very focused. The AIs they use, the agents they develop, the application layer of AI in these agents, is going to be very specific to those problems. And I think we’re going to see a lot of abundance. I mean, it’s just really, really exciting to think of what’s going to happen from all that innovation. The vision that Sam Altman famously put out there was that OpenAI wants to enable a startup to have a billion dollars of revenue and one employee. Right, because there’s just one person owning it, running it, and providing the motivation, and then a whole lot of agents doing all the work. And I think that’s not as far off as it might sound.

What industries use AI agents?

Jeff Dance: Amazing. What about industries that are kind of leveraging AI agents the most right now that we can look to, for example, Elisha or Jason, any thoughts come to mind as far as businesses that are doing it now and that are kind of paving the way?

Jason Thane: Again, I really think about these transformer models as highly efficient compressions of knowledge. And so, you know, where it leads me is that there are really a few criteria that define which industries, and which activities within industries, are going to be affected the most. The first is wherever it’s critical that high-quality knowledge is available instantly, where you can bring this compressed knowledge to bear on a job or a task that’s got to be done. And this really means most knowledge work industries. The second criterion is where knowledge tasks are time-consuming, laborious, repetitive, or error-prone, where they need to go better, faster, and less expensively. That applies to much knowledge work as well. The third is where the possible world of business conditions or environmental conditions is really hard to model with traditional automation code. So, back to the same idea we talked about a minute ago: you can’t really write code for every edge case that’s out there in the world. It would take an infinite amount of time. So wherever those three conditions apply, I think you can really make AI relevant, and agentic workflows in particular. We’re really seeing software development, our own privilege as software developers, being accelerated first. We’re the first kind of knowledge workers to really take off. And it’s absolutely incredible. I mean, we’re seeing agentic pair software development be at least 3X to 10X as productive and as effective as coding alone. And so I think that in software development we are just the bellwether for all kinds of knowledge work, all kinds of creative technology work, but all of that is going to follow. Another really important one is healthcare. There’s a huge staffing shortage in healthcare, and there’s lots of knowledge work. Most of healthcare is knowledge work.
And there is an incredible amount of work that has to be done in the margins to enable the work of actually providing care. So I think that healthcare is going to be transformed pretty drastically. Then there are things like cybersecurity: agents are going to be great at compliance and security audits, things like that. That kind of leads to legal: agentic legal work is super interesting. Again, not to replace the attorneys, but to superpower them and give them the ability to do much, much more, more effectively and with higher quality. And then the last one that I’m personally very passionate about is education. Sal Khan’s Khan Academy has introduced an AI tutor called Khanmigo. And although I’m the last person in the world who would ever want to replace the notion of human teachers and human instruction, I think that humans can be made more powerful by using tutors that can meet the students exactly where they are, that can clear those hurdles of knowledge that stand in the way of any student trying to learn long division and struggling with the algorithm. I’ve seen firsthand that these tutors can be highly effective at this agentic job of understanding right where the student’s level of understanding is and what they need to know or unlock in order to proceed to the next level. And so I really think that we’re going to see a lot of human growth made possible through advances in education. And I hope that that can be complementary with the traditional models of both public and private education.

AI agents in education

Elisha Terada: I wish I had a personal TA in college when I took calculus, because I could not understand the material from a professor who has a Ph.D., and of course they’re smart, they know what they’re talking about, but I didn’t understand it at the student level. And the outcome was that I just failed tests, and there was no additional help I could get, unless, I guess, you have a private tutor you can hire. But I wish an agent could just tell me step by step how to understand it.

Jason Thane: I personally have been learning so much outside of my field. I find that I can go into different fields with the help of an agent. And that’s super exciting. It kind of makes me want to do nothing but learn forever.

Jeff Dance: That’s amazing. A couple came to my mind: Khan Academy is enabling some of that, to Jason’s point. But I’ve also seen a write-up recently, maybe from Google Next, about the time it takes to conduct research on topics for learning, and how AI can shrink that. So that’s pretty fascinating as I think about agents in this space. You could empower teachers with agents to do some research on who their students are and to prepare the right curriculum. And you could pair learners with agents to do a deeper-dive discovery on a topic. And if we can customize it to them, how amazing could that be? So much is evolving right now. It would be interesting to hear your perspective on startups, but any other industries come to mind that seem relevant?

Elisha Terada: I do want to add how it could affect things cross-industry, because I think Jason already did such a great job highlighting different industries that can be impacted. There is a common thread where we do a lot of work on top of what we call cloud applications, where you go in and fill in forms. If you think about Salesforce, there are like a million forms you could fill in about the deal that’s going on. Or maybe you’re using a spreadsheet or something like that. I think the future is less about me having to be onboarded into how to use Salesforce and navigate through pages of inputs. I just have a goal: I want to log my conversation with the lead, and I don’t care how it’s inputted or how it’s stored. And I want someone else to be able to ask, “Hey, what’s Elisha working on this week?” and get the answer. Replacing the need for these clunky interfaces we have to deal with, in favor of just accessing an agent who can store the knowledge, access the knowledge, analyze the knowledge, and give us insight from the knowledge.

The future of work

Jason Thane: Elisha, I think you hit the nail on the head. That’s what these agents are all about. You know, the name of my company is GenUI, or General UI, because we were founded on this question: how do we bring the bounty of computing to humans more effectively? How do we make it easier for humans to use computing? And this agentic layer is really the application layer of AI, of the current generation of compute. And you’re absolutely right. It’s going to reduce the friction in using the tools we need to use, because we’re going to have a much more natural user interface: dealing with an agent. We like to say AI is the new UI. So instead of learning that complicated Salesforce user interface, we can talk to an agent that can do what we need to do. And that means we can focus more on our core activity. It can up-level human work to be something that’s a lot more fulfilling and a lot more valuable. And I think that’s gonna be a lot more fun for people.

Jeff Dance: A lot of my conversations about the future with people who are deep in technology, including some of the most sophisticated roboticists, come back to this idea: “Hey, this will help humans be more human,” and let us leverage what really are our biggest skill sets instead of getting lost in the complication of screens. I’ve said for a while that I think we’ve gotten narrower and narrower with our screens. We’re all developing this neck thing from looking down at our mobile phones. But my hope for the future is that, you know, we can look up more. Typing might be a little clunky; we can talk to these interfaces and have more 3D and spatial computing, where it’s more natural, more human-like. And I think agentic AI is a step in that direction, toward the sort of interfaces that Elisha is describing for one specific use case.

Jason Thane: Definitely. When you work with an agent, you can focus more on what you want to get done instead of spending so much time, attention, and effort on how, right? But there is a danger in skipping over the how. If you don’t understand it, or if you don’t go on the journey of the how while getting the what done, you can become irrelevant, or you can really miss the point. So it doesn’t mean we can fall asleep at the wheel. It means we have to be aware of what’s happening and what the agent is doing, but we can get a lot more done a lot faster and be more self-actualized, to your point. We can be more human. Absolutely.

Jeff Dance: Yeah, move up Maslow’s hierarchy of needs, which changes work. And I’ve reminded people that we all feared the computer, much like we fear robots or we fear AI. And the reality was that it changed work. We now have four knowledge workers for every manual laborer. We will experience change, and I think the hard part for humans is when we get caught at the crossroads; we don’t change as fast, and reskilling is necessary. We don’t have a lot of wagon wheel makers anymore, but this is something that’s happened with every phase of the industrial and digital movements. So we will experience change, and we will have to change with it. One analogy I heard yesterday came from a change management consultant who came to Fresh to train us to make sure we’re on the cutting edge. I didn’t love the analogy, but I thought it made sense. She mentioned that we should be like slushies. I was like, “Like slushies? What do you mean, like slushies? Do I want to be like a slushie?” She was referring to how we respond to technology: you don’t want to be frozen like ice, immovable, and just get sidelined by something new. 52% of the Fortune 500 from the year 2000 no longer exists. Companies that don’t adapt are dying, and if we’re not changing, we’re really not growing. So you don’t want to be ice. You don’t want to be frozen. You don’t want to be set in your ways as you encounter new technology. But the other extreme is water. Water is fluid; it goes with whatever change is in the wind: “This is the new thing.” You don’t want that either. The point was to be like a slushie: you’re able to move, there’s a liquid aspect to it, but there’s also some firmness. So I thought it was a good metaphor for processing change. Yeah, we need to experiment. We need to explore.
There might be timing to something as well, but we need to anticipate change if we want to progress from a business perspective.

Jeff Dance: We also need to change. And that’s just been a time-proven principle over the last 50 years. It’s just accelerated because technology has accelerated in the last 20.

Jason Thane: Yeah, it’s a good middle path. My friend Shane likes to say you want to be open-minded, but not so open-minded that your brain falls out. Yeah, you’re right. I think this stuff is fun, and hopefully people will realize that it’s fun: getting more self-actualized, having a co-pilot that takes care of the menial aspects of your tasks, takes care of the red tape.

Jeff Dance: Nah. I love it.

Jason Thane: Lets you do what you care about, what you pride yourself on, what you’re good at, what really makes an impact. Hopefully, people see it as fun and will embrace the change. But yeah, it will not be without growing pains for sure.

AI agents helping humans “be more human”

Jeff Dance: Let’s talk a little bit more about the future. You know, I mentioned at the beginning that we’re kind of having this small bang with generative AI, with the change we’re already seeing and anticipating toward human-level performance. But it seems like agentic AI, and AGI as we get there, is the big bang: a point in time that is significant. So as we think about that future, 10 to 20 years from now, how do we design and be a part of good? How do we design things with intent? What are some of your thoughts about what the future might look like as we jump forward another 10 years or so?

Elisha Terada: I think being more human is such a good way to put it. It would be great if I could collaborate with my team members, have one-on-ones, or have time to chat with the clients I’ve worked with, versus getting really busy and heads-down in the computer or smartphone trying to execute tasks: doing the research, doing the busy work of entering data and reporting, putting together a spreadsheet to show, “Hey, I’m doing great work and we’re doing a great job.” That would actually bring me back into more human collaboration and interaction. I could just ask my agent assistant, “Hey, can you put together a spreadsheet and figure out how to best represent how we’re doing this quarter?” I can review it later, of course, with human oversight, but I don’t need to be heads-down hunting through 10 different documents so I can put together one nice document that says, “Here’s the state of the project.” If that can be automated, I can attend to the needs of the team members I work with as a human, not to the spreadsheet.

Jeff Dance: I like that example.

Jason Thane: Yeah, the thing I’m most excited about, Jeff, is the creativity. I think we’re going to enter a really awesome era of creativity for human work. And I think it’s going to be a lot of fun. I think we’re going to see an incredible amount of acceleration in innovation at all levels, startups and big companies alike. This is going to deliver profound, wonderful benefits for mankind. People worry about the alignment problem, and they think, “Well, is AI gonna get so good it will just obliterate humanity?” And I really see the alignment problem instead as an opportunity. I think this symbiosis that’s gonna happen between humans and AI is gonna be really magnificent. We’re working with agentic AI in this way already, and we’re seeing that pairing, that partnership of humans and AI, be more effective than any other model right now. I think that’s going to continue for quite some time. Maybe there will be a super villain AI created by somebody who wants to destroy humanity, but at the same time, there will probably be a lot more beneficial AIs that might counter that. So I’m excited. I think we’re going to see a lot of the problems that plague mankind be solved.

Jeff Dance: That’s amazing. I know we’re talking big right now, and I want to add some thoughts to that. One is, with technology, there’s always a good and a bad; there are always people on both sides of good and bad. With almost any technology, you’ll see things that are less ideal, but that should not, I think, overshadow all the good. Think about what the computer has done for our health as human beings: there have been some transformational things, whether we’re talking diseases or disabilities. Some of the AI technology we’re already seeing for disabilities blows my mind. One of the things you mentioned was creativity, and I’m just overjoyed that you said that as we think about the future: amplifying our creativity. There’s been some debate about whether we’re going to destroy our creativity or amplify it, and I think it depends on whether we are a consumer or a creator. If we just consume media all day long, we’ve seen the problems with that. If we’re on our phones all day long, just consuming media, we’re filling our own LLM, essentially. And then our brain processes that, much like an agent processing over an LLM, right? You get out whatever you put in. So if we’re just filling it and filling it, comparing ourselves against these images, et cetera, it’s no wonder we have mental health crises now. I would say that’s some of the bad that comes out of just consuming. But when we’re creating, that’s the best of humanity. We’re all creators. We were born with these innate gifts of creating that we see in the youngest of children as they develop, and sometimes we lose some of that as we grow up and go through more regimented systems like our modern-day schooling.
But this notion of amplifying our creativity, getting back to what humans do best, and moving up Maslow’s hierarchy of needs: that is the opportunity. If we focus on it as humans, we’ll find that the good will outweigh the bad. Those are some of my high-level thoughts.

Jason Thane: Absolutely.

Jeff Dance: I’d love to hear any other thoughts on the future before I transition to the next question. Anything else come to mind?

Will AI agents take jobs?

Jason Thane: You know, back in the Industrial Revolution, I think it was Karl Marx who predicted that with industrialization and the factories being built, humans would have so much free time on our hands that we’d be able to, you know, work two hours in the morning and then go fishing and hunting and reading the rest of the day. But that’s not what happened, right? The more industrial automation we created, the more we wanted to get done.

Jeff Dance: Great example.

Jason Thane: And there’s this thing called Jevons paradox: the more productive a technology is, the more the demand for that technology increases. And we are absolutely seeing this play out with AI acceleration. The more we accelerate, the more we see opportunities to accelerate. And there’s no sign on the horizon that that’s going to end. We still have a lot of problems we can solve. We still have a lot of things we can accomplish as a species. And the fact that we’re doing our work in collaboration with AI is not going to remove the work. It’s going to create different work, to your point earlier, transform the work, and it’s going to be much more meaningful. But because of Jevons paradox, we’re going to continue consuming as much of this as possible. But again, I think it’s going to be fun, and it’s going to be super weird. The only thing I know about the future is that it’s going to be dramatically different from the past. And we really don’t know what to expect, but there’s a lot of cause for optimism. And I think the pessimists will be proven wrong.

Jeff Dance: The World Economic Forum mentioned that they expect more jobs created than jobs lost: jobs will be lost, but more jobs will be created as a result of AI and also robotics. That example from the Industrial Revolution is just poignant; it captures what happens with change when we understand that humans are creators and builders, and we like to be productive. So it just changes the nature of the work, like you said. We have higher orders of creation than before. And we see this with some of the billionaires going, “Okay, maybe I feel like I’ve conquered what I can on the Earth; now I want to go to the moon. I want to go to Mars.” You see some of those aspirations change.

Jason Thane: Yeah, yeah, these people building rocket companies have so much money that, you know, they would never have to do anything they don’t want to do in their lives. And what do they do? They do the hardest thing imaginable. And that’s part of the human condition.

Jeff Dance: Exactly.

Jason Thane: Yeah, we don’t have a lot of lamplighters or chimney sweeps anymore, right? But do we miss that job? I don’t know.

Jeff Dance: Yeah, exactly. And I think we saw this through the pandemic in a deep, deep way. The labor shortage existed, or existed significantly, in construction, where we couldn’t find enough workers; it was like a government emergency. But all these other industries that had dull, dirty, dangerous, more mundane work couldn’t get the workers back either, because people didn’t want to come back to that work after they’d had a break for a couple of years. Not as many did. It didn’t fulfill them; it wasn’t meaningful to them. So if they had a choice, they were looking for something else after they got out of that routine. I would say that’s descriptive of what could happen in the future. We all want to do more meaningful things. Work itself is meaningful, but if we have a preference, a choice, our agency, then I think as human beings we’ll choose something of a higher order. And part of that higher order is being able to be more creative, because that’s part of our human nature. It’s even been shown that creativity tracks with a higher order of intelligence: highly creative people are often of higher intelligence, but it doesn’t always go the other way.

Jason Thane: I had a professor in college who convinced me that at least for me, the most rewarding activity is creativity. And I think that that probably applies to a lot of people. And hopefully, we’ll get to see that blossom for everyone.

Elisha Terada: Yeah.

Elisha Terada: Yeah. Didn’t Henry Ford say something like, “If you ask people what they want, they’ll say faster horses”? I think that describes the LLMs that exist today, at least. Maybe it will change in five or 10 years and they’ll be smarter than us. But we invented cars, right? We thought up a new mode of transportation. If you train an LLM on knowledge from a world where the car didn’t exist, it wouldn’t come up with the need or want or desire for an alternative mode of movement. And to invent something, I think, is still, hopefully, where human beings have the advantage. Maybe innovating is something an LLM can do really well: mixing and matching what we already know and what already exists. If you combine A plus B plus C, an LLM can probably do it faster than humans could with existing things. But to invent something, to create something new that didn’t exist, I think this is where we still have an edge and have joy. How do we get to Mars? Maybe we need to invent something that was never done before, something an LLM can’t come up with by just combining existing things.

Multimodal models

Jeff Dance: That makes sense. It makes me think of the multimodal aspect of how these new agents can be helpful, when we talk about multiple sequences of steps and orchestration. One of the things I’m seeing is the combination, and you both alluded to this: we’ve been talking a lot about text, and then we’ve been seeing images, which is amazing. But when you combine text, images, code, and video, and then start to mix in robots, all the componentry that intelligence can sit on top of, you start to be able to do a lot more complex things that can extend you as a creator.

Jason Thane: The real-world aspect of multimodal models is maybe the most interesting and the most challenging. You know, Andrej Karpathy, who’s a really great person to follow on Twitter, both for diving deep into how these things work and for predicting the future, suggested that we might not be able to make a multimodal model as capable as a human without having it really understand the real world, the physical space we exist in. Things like occupancy networks, which predict which voxels of space will be occupied by an object in the future and which are how some self-driving cars work, are such an important mode for creating a holistic intelligence. And to your point, Elisha, I think we will probably be limited in the advancements and the novel reasoning these models can do until they really have all of those modes, especially a world model, a real-world understanding, incorporated. But I would contend that we’re on a path that will lead to AI models making discoveries and advancements on their own, not just regurgitating what’s been given to them. I think there’s plenty of evidence now showing that they are doing, you know, real reasoning, and that it’s not that different from how the human brain works.

Jeff Dance: Hmm. Let’s see.

“Hey, what’s out there?”

Jeff Dance: I know we’re about out of time, but I want to ask either any other thoughts you have on the future or anything you’re most, as we think about AI agents, what are you kind of personally most excited for as we look forward?

Elisha Terada: I think, as someone who likes to create, I’m still curious. I’m still interested in putting pieces together. At a personal level, I want to keep exploring what’s possible with AI. How could it help with my personal life? How could it help with my business and professional life? As a services company providing services for clients, how can we be better at bringing great solutions to our clients, making them feel like they’re accelerating on the progress they really want to make and not being blocked by some technical difficulty? So I’m still maybe open-ended in my mind, open-minded, I guess, without letting my brain fall out, and really curious: “Hey, what’s out there?” I want to keep exploring. Maybe I don’t have a definitive “this is what’s going to happen” or “I need to go to Mars.” Maybe nothing so definitive in my mind, but every day I’m really intrigued to learn and experiment a lot.

Jeff Dance: Ha ha.

The decentralization of AI

Jason Thane: I’d say I’m really excited about, you know, the decentralization of AI. We’ve really been in this mode of centralized labs releasing models and having us use them in the cloud, and that’s been partially out of necessity, given the compute resources required. But just recently, in the last few weeks, there have been a number of breakthroughs showing that we might not have to use as much compute as we thought, at least for inference. One of those is DeepSeek coming out and reporting, if we believe it, an amount of compute orders of magnitude below what we thought was necessary. The other is this notion of open source, and especially on-device, on-edge AI: empowering the developer community with the ability to create and customize models and run them on devices right where we need them, when we need them. I think that’s a real revolution in the software industry. It’s something that is tremendously exciting. Now, I mentioned the Allen Institute’s Ai2 OLMoE project. That is a soup-to-nuts open pipeline for training AI. And, you know, we got to add the little cherry on top, which is a ready-to-go iOS application as an open-source repository. Any developer can download that and build their own iOS app based on an on-device model, with the features of an on-device agent architecture. That’s available today. And I think it’s really exciting to see what’s going to happen from all the innovators and creators and hackers of the world who are going to take this and run with it.

Jeff Dance: Guys, thanks for your wisdom and your thoughts here. It’s been my pleasure to have this conversation. I don’t always do this, but this is an episode I want to go back and listen to. I definitely learned from you both, and I’m grateful for your perspective, your excitement, and your dedication to this space, and for doing it with human intent, thinking about how we create good and do what’s good for humanity. When you have that creative combination with those good intentions, for clients, like Elisha said, for ourselves, for the businesses we work in, it’s an exciting place to be.

Jason Thane: Thanks for your mindful hosting, Jeff. It’s a pleasure to be on the podcast.

Jeff Dance: Awesome. It’s good to have you both. All right, that’s a wrap.

Elisha Terada: Likewise.