Podcast

The Future Of Human-AI Collaboration

In this episode, Jeff Dance and expert Andrea Iorio explore the evolving landscape of human and AI collaboration. Andrea outlines why most organizational AI projects fail, emphasizing the necessity of understanding the complementary roles of humans and AI. Through insights on automation, augmentation, and unique human skills, the discussion addresses the impacts on the workforce, productivity paradox, and the importance of soft skills. The conversation also highlights responsible AI use, the transformation of job roles, and preparing for rapid technological change.

Host: Jeff Dance
Guest: Andrea Iorio (Author, Speaker)

Podcast Transcript:

Jeff Dance: In this episode of The Future Of, we’re joined by author and speaker Andrea Iorio to explore the future of human-AI collaboration. I’m going to give a quick intro to Andrea, but welcome.

Andrea Iorio: Thank you so much, Jeff. Such a pleasure being here.

Jeff Dance: Yeah, such an important topic right now. I’m really excited to learn from you. Andrea is a globally recognized keynote speaker, giving about 100 keynotes per year. We’re grateful to have you with us. He’s an author and a thought leader on AI, digital transformation, and the role of human skills in our rapidly evolving technology landscape. He has 10 years of experience in multinational and tech companies. For example, he was the head of Tinder across Latin America for five years, which sounds like it was fun. He’s also been the Chief Digital Officer at L’Oréal in Brazil. His clients include Bayer, Nestlé, Ford, Coca-Cola, IBM, and others. He brings a wealth of experience. We’re excited to have you. I heard you just wrote a book. Tell us more about that.


Andrea’s Book and Human-AI Collaboration

Andrea Iorio: Yeah, Jeff, I recently published a book called Between You and AI. It dives deep into the intersection of what AI does best and, as a consequence, what humans should do better. I talk about the new skill set and the future of work. So I think it’s very much in line with our conversation today.

Jeff Dance: Good. Before we dive into all that, tell us more about what you do for fun.

Andrea Iorio: For fun, I’m a Brazilian Jiu-Jitsu black belt. I don’t know whether that’s so much fun—sweating on a mat with a bunch of very strong people—but I’ve been doing it for 15 years. I know you’re a snowboarder, and I actually like to snowboard too. As a good Italian, I would do it in the Alps, but now, living in Miami, I can’t find much snow nearby, so I’ve been doing it less and less. Honestly, I love to do sports.

Andrea Iorio: I’m also about to become a father in a couple of months, so that’s fun as well.

Jeff Dance: Wow, congrats.

Jeff Dance: That’s amazing. We recently took on a foster child—I have four kids, but I kind of forgot what it’s like to have a baby. So I have a few tips for you after the show.

Andrea Iorio: Yes, I definitely need those, Jeff.

Jeff Dance: Yeah, so much there. But that’s awesome. I love snowboarding, and physical exercise definitely helps us get the stress out, right? Having that routine is so important right now, especially with the mental health crisis going on. So, great. Well, let’s dive in. Tell us more: how do you define human-AI collaboration? You wrote a book about it. Where do you feel organizations are getting this right or wrong? Because it’s a new space.


Organizational Challenges

Andrea Iorio: I think the data shows that most organizations are getting it wrong. A recent MIT Media Lab study showed that 95% of AI pilot projects in big organizations fail—they don’t really show results. The reason is that we don’t yet understand how humans and AI can collaborate better together. The first problem is that we often frame it as either-or: either humans or AI, competing with each other. But there’s a great finding from the game of chess: the best players are not AIs, but humans who use AI to play. They’re more creative and can come up with new strategies together with AI. There’s even a format with its own championships, called centaur chess. It’s a great analogy for what should happen in organizations. We need to understand that AI is not here to substitute humans. Humans who use AI correctly and effectively will replace those who do not.

And overall, AI is not here for our jobs, but for some of our tasks. Our jobs are made up of many tasks—some of them repetitive and data-driven. Those are going to be replaced. But true collaboration comes when, first, we understand that, and second, when we realize that the time we save can be used to become more important to our customers, ask better questions, and collaborate more. When we look at the definition of human-AI collaboration, it’s one where AI makes human work better through automation, but also through augmentation. It’s not about substitution; it’s about increasing the quality of human work thanks to AI, not in spite of it. I think that’s a really important definition for organizations.


Empowering Human Skills

Jeff Dance: That’s great. I love the notion of empowering humans to magnify ourselves, because we have so many natural skills. The history of robotics has been about taking over dull, dirty, and dangerous jobs. The history of technology has been about automating more work, and during the pandemic we learned that humans weren’t coming back to a lot of repetitive jobs. It seems like AI is another step in that history, taking over more automated tasks, but the question is: how do we empower humans to do more with their skills? Some of the leading roboticists and technology leaders say this will help humans be more human. I think that’s the positive side, but the change is the hard part—the change that we experience. So your point is that AI takes over some of our tasks, not our jobs, and that AI might replace people who aren’t leveraging it. We need to use this to better ourselves, not to replace ourselves.

Andrea Iorio: For sure. And it does something else beyond that: it democratizes access to a set of skills that were scarce before. For example, a lawyer used to take years to graduate and start practicing; now anyone who has never studied law has access to the same repertoire. It’s a bit like what calculators in the ’80s did for arithmetic—a niche capability that became widespread. That puts a lot of pressure on what it means to be a good professional, because the skills we spent years developing—usually hard skills—are commodities nowadays. Interestingly, Microsoft published a report a couple of months ago about the 40 jobs least likely to be replaced.


The Evolving Skill Set

Andrea Iorio: Number one was phlebotomists—I didn’t even know exactly what they did. They’re the ones who do blood draws and have to find your vein, because AI still lacks that kind of fine physical manipulation. Of course, robotics is improving, but the key is understanding what AI still doesn’t do well. That’s where humans should thrive. We need to become more human.

Jeff Dance: Yeah, it’s interesting. You mentioned the calculator, and it made me think of the spreadsheet. In the history of technology, we’ve feared things that would replace us. The spreadsheet was one of those that people thought would make accountants obsolete, but really, it empowered accountants to do their best work and allowed our tax code to get a lot more complex. How many accountants do we still have in the world, right? The spreadsheet was a great tool that didn’t replace accountants. If we go back in time, the printing press was feared for spreading bad information and dangerous ideas, yet it democratized knowledge. The telephone was thought to destroy authentic conversation, but it connected humanity. The computer was supposed to make workers obsolete and change work completely, and it did, but now we have more knowledge workers than manual workers. Cloud computing was seen as too risky for storing our data elsewhere. All these things were feared, but we adapted and evolved. If we’re growing our economic output, there is reskilling and new jobs, and that’s hard, especially when change happens fast, like with AI. But is this just another thing in the history of technology that’s a great new tool, like the computer, that will evolve our work but not replace us?


Impact of Technology on Job Roles

Andrea Iorio: Totally. Beyond creating new jobs, it will also change what an accountant or a developer does. You mentioned accountants being empowered by spreadsheets, not replaced—but as a consequence, a lot of manual entry was reduced or simplified. The consequence of AI across many jobs is that it changes the job description and what people actually do. Look at developers: more than 25% of new code at Google is already written by AI. If I’m a developer and I still think my main responsibility is just to write code, I’m looking at the old way of being a developer. A developer now is more someone who reviews what AI does, prompts AI well, understands the architecture of an application, and so on. It reshuffles what it means to be in these professions. It’s not going to eliminate developers or accountants; it’s going to redefine what makes a good accountant or developer. I think that’s true for all professions.

Jeff Dance: Yeah. Could we make the analogy to the computer in the same way? Didn’t the computer do the same thing for jobs and how we work?


AI’s Unique Risks and Black Box Problem

Andrea Iorio: It did, of course. The big difference is that with software, computers, and the Internet, we basically understood how things worked. Take software: it followed explicit programming logic—if A, then B. If I were the developer, I would understand how decisions were made. The big difference with AI, which makes it more powerful but also riskier and more unpredictable, is that not even the developers of large language models really understand how a model arrives at a certain outcome. That’s the black box problem—the explainability problem of AI. It makes AI unpredictable: we can’t fully anticipate which tasks it will be able to replace the fastest. But there’s a pattern: tasks that are repetitive, data-driven, and have measurable outcomes are more easily substituted. Calculators did that, computers did that on a massive scale, and now the scale we’re seeing with AI is unprecedented.


Optimizing Tasks and Human Uniqueness

Jeff Dance: Yeah, it’s interesting to think about the tasks where AI really excels or doesn’t, and how we put it in a position to excel. People think it just magically does everything, but there are core concepts on the deterministic side of AI—how you prompt for that, how you select what you want—versus the creative side, where it’s novel and generative. Those are two big spectrums, and AI can sit on top of both. You mentioned the black box aspect, the neural aspect, and how people don’t quite understand that there is a neural, brain-like quality to it. That’s somewhat human—when we give a human a task, we don’t always know exactly what we’ll get. The vector-based aspect of generative AI is that it combines vectors, which is why it gives unique answers—it generates by pulling together vectorized information. Those are important concepts: tuning tasks to make things work and recognizing which tasks might not be a good fit. But I love your definition of reshuffling the profession or job duties, and the notion that these jobs aren’t going away; we just need to reshuffle now that we have this higher order of intelligence we can use. My curiosity is this: because it’s ubiquitous and we’ve democratized this new level of intelligence that we can tap on demand, are we going to move up Maslow’s hierarchy of needs? Will humans move into higher orders of productivity and intelligence because we can have deeper conversations from the start? Or will that hurt us because our brains won’t do as much deep thinking? Does AI give us more deep-thinking capability, or does it take it away?

Andrea Iorio: Look, Jeff, a recent MIT Media Lab study showed that the brain engagement of students writing SAT essays with AI was much lower than that of students working without it. One big risk is over-dependence—what fellow speaker Pascal Bornet calls “AI obesity.” His analogy is that because AI is so accessible, we overconsume its outputs and over-rely on it. That’s one possibility. But as with social media—which my generation first used without understanding its negative effects—I see Gen Z and Gen Alpha now understanding privacy and shifting from Instagram, which is more about showing off, to TikTok, which is more about learning and exploration. I hope that with AI, especially if leaders, organizations, and developers take responsibility for educating people about its right use, risks, limitations, and opportunities, we can use it wisely.

The opportunity is there—we all love using AI tools—but there are limitations. AI is only as good as the data it’s trained on. If we feed it biased data, it will produce biased outcomes. Over-reliance is another risk. As Kevin Systrom, the co-founder of Instagram, said when he left his role, we were in a pre-Newtonian age of social media. With AI, it’s very pre-Newtonian too—we use these tools, but we don’t really know how they work or what their effects are. It might take longer to adapt because of the explainability problem, but over time we will adapt, understand its limitations, and outsource the right decisions and tasks to AI while keeping the ones that shouldn’t be outsourced. Right now, we’re outsourcing everything, and that’s risky.


Over-dependence and Responsible Use

Jeff Dance: Yeah, I agree. We’re still in the early stages of understanding how to connect, but also the importance of disconnecting. We created a site called BalanceSecond for this purpose—how to connect with technology to magnify and augment yourself, but also how to disconnect. Disconnecting is important, especially with what we’re learning about smartphones, social media, and their negative influence. We need to remember how to connect as social creatures in a non-screen, human way, and keep those human traits alive that help us think in different, novel ways. There are still many benefits humans have compared to AI. I want to go there, but first, I want to ask more about your book. The title is Between You and AI: Unlocking the Power of Human Skills in the Age of Artificial Intelligence, and it’s launching soon, right? Did I hear November?

Andrea Iorio: Correct. Yeah, November 18th. It’ll be available in bookstores in the US and, of course, on Amazon, Barnes & Noble, and so on. Coming up soon.

Jeff Dance: Perfect. Tell us more about that. I heard there’s a three-part framework in the book. Tell us more about that transformation.


Andrea’s Three-Part Framework

Andrea Iorio: Exactly, Jeff. The book revolves around mapping out what AI now does best and, as a consequence, what humans should do better. I came up with three pillars of transformation for skill sets and behaviors. The first is cognitive transformation. I lay out three new skills related to decision-making and the way we think: prompting, data sense-making, and re-perception. They all revolve around our ability to make decisions and think as humans in the age of AI.

The second pillar is behavioral transformation—how we perform our tasks. There are three skills here: adaptability, augmentation, and antifragility. They revolve around what AI can do in automating some tasks, augmenting others, and making humans more like scientists with the time we gain back. As you said, the time to disconnect is also an opportunity—AI automates many tasks, giving us time back to focus on creative ideas, connecting with others, and innovation.

The last pillar is emotional transformation, which involves the need for more empathy, trust, and agency. Especially agency—humans should take responsibility for the AI tools they use. It’s tempting not to, thinking, “Well, that was generated by AI, so I’m not responsible if it hallucinates.” But a recent case with Deloitte shows that we are responsible. They produced a report for the Australian government, which noticed hallucinations in the bibliography, and the government asked for their $300,000 back. The lesson is that we need to use AI responsibly and remain accountable as humans. The book revolves around these skills—adaptability, empathy, trust, re-perception—which are all soft skills, and the least replaceable ones. Hard skills can be replaced much more easily.


Human Responsibility in the Age of AI

Jeff Dance: Thanks for those insights into your book. It sounds like there’s a lot of depth to explore. I’m looking forward to reading it. A couple of things caught my attention. One is responsibility. We just hosted AI 2030, the Seattle conference, and had a session on responsible AI. The notion of humans being responsible for the AI they’re utilizing is important. What’s interesting about AI is how easy it is to use—whether you’re creating presentations, documents, research, apps, or music, you put in a little and get so much out. It generates so much from a single prompt, can work in sequence or in parallel, can be agentic. There’s so much output, and it makes everything so easy. It’s tempting to think, “Well, I can just send that report over; it looks good,” but that’s not responsible. Owning what I’m sending—analyzing it, checking the sources—is important. Being responsible about what you produce, precisely because you can produce so much, is a really important principle for the future. This is a tool, and it’s helpful, but there should be a review, a human in the loop, and ownership of how I’m using it. That’s a principle to hold on to.

Andrea Iorio: It is, and if we look at AI’s lack of responsibility, it spans several dimensions. First, it’s not legally responsible for what it does, because it’s not a legal entity. There’s talk about making it one, but we’re far from that. Second, it’s not morally responsible, because AI doesn’t have a conscience. It processes data and information syntactically, not semantically—it doesn’t give meaning to, or understand the depth of, its decisions. Third, it’s not technically responsible: there’s what Luciano Floridi calls the “many hands problem”—AI is too opaque, so assigning responsibility is tough.

There was a case in 2018 where Uber was testing an autonomous vehicle, and unfortunately there was an accident: the vehicle struck and killed a pedestrian crossing with a bicycle because it misidentified her. In court, Uber was not deemed responsible, nor were the developers of the AI system. The only person held legally responsible was the safety driver overseeing the system, who didn’t brake in time. It’s a perfect example: OpenAI won’t be held responsible if we use ChatGPT in a malicious way. We will be responsible for that. It’s tempting to think we’re not, but we are.

Imagine a bank outsourcing loan-approval decisions to AI. If a customer is denied a loan and asks the manager why, the bank may not be able to answer, since it doesn’t really know. If the response is, “I wasn’t responsible,” that won’t satisfy the customer; trust is breached, and reputation problems follow. Responsibility and agency are very important skills for good leaders, professionals, and entrepreneurs who understand that AI is powerful, but that we must use it responsibly and take accountability.


Soft Skills and Competitive Advantage

Jeff Dance: Yeah, I love that. It’s another dimension of responsible AI. Understanding how AI is not responsible—its deficiencies, the fact that it’s not legally or morally responsible and has no conscience—helps us understand where we are responsible. BCG came out with a 70-20-10 principle for AI work: 70% is the people, process, change management, and context; 20% is the technology and data; and 10% is the algorithms. That comes back to the need to understand both what AI is and what it’s not.

You mentioned AI obesity—using AI to pump out content and work. That’s not responsible if we’re not careful and involved; it’s AI obesity in the sense of just getting used to it without thinking. With social media, I think about creators versus consumers. If we’re just consuming all day—news, doomscrolling—it doesn’t help our brains or us as humans, and we see that in mental health. But if we’re creators, maybe we won’t get “obese,” figuratively. You can create things, but you have to be responsible about what you create or share. That’s an important component.

You also mentioned the black box aspect. It’s interesting that, much as in the legal questions around autonomous driving, technology companies have so far been winning copyright cases partly because of that black box: inputs go in, the AI recombines them, and courts have found fair-use arguments persuasive for training on information from the internet. The novelty of the output makes it hard to trace back to any one source. This is all relevant as we think about how we use and partner with AI. Thanks for your insights. You talked about some things humans have that AI doesn’t—morality, consciousness. Can you go deeper there? Where do we have an edge, or where do we need to keep honing our skills differently from what AI brings?

Andrea Iorio: Taking a segue from what we’ve just discussed—the problem of responsibility—there’s a second aspect. It’s tempting to think that AI makes us more productive, but there’s a trap: it’s not only making us more productive, it’s making everyone else who uses the same tools more productive. Daron Acemoglu, the MIT professor who won the Nobel Prize in Economics in 2024, calls it the productivity paradox of AI: while AI automates tasks and supposedly makes us more productive, it makes everyone else more productive too. That means we need to focus on other types of activities. For example, on LinkedIn everyone is creating content now, because it’s so much faster and easier. I was tempted to use ChatGPT to post every day, generate images, and turn around videos quickly, thinking I’d gain a competitive edge. But I didn’t, because I was overwhelmed by polluted feeds where everyone was doing the same thing. I realized that wasn’t what would give me an edge. I had to be more original and differentiate myself with content that AI doesn’t produce. So a second big aspect is not falling into the trap of thinking…

Jeff Dance: Mm-hmm.

Andrea Iorio: …that because AI helps us do something, that something is our competitive advantage. Actually, sometimes it’s the other way around: if AI does something well and everybody starts doing it, maybe we should do it differently or focus on something else. That’s one point. Getting into the emotional side—that’s a super interesting one. You mentioned conscience. Some researchers, like Blake Lemoine, who used to work at Google, have claimed AI is sentient, but the broad consensus is that AI has not developed consciousness. And several problems stem from that.

One problem is that AI does not really take responsibility. The second is what’s called the Chinese Room thought experiment, proposed by the philosopher John Searle years ago to explain how AI works: AI does not truly understand the decisions it makes. It merely simulates understanding, processing data and using statistical models to generate text based on its training on internet content. It does not really understand the meaning of its output—and that matters, because humans do.

For example, if we use AI for military applications and it’s optimized to reach a military target, it will not have the same feeling as a human drone operator, who would consider whether there are civilians or other factors. AI will not. It’s pure goal maximization with no understanding of its decisions, making it risky. That’s why we need humans in the loop to interpret the meaning of things, which is increasingly our responsibility.

AI is great at simulating empathy and human feelings—we notice that in the soft-spoken voice of ChatGPT’s voice mode and in chatbots designed to be human-like, which is why people use it for therapy. The problem is that AI does not feel anything back, and there’s an uncanny valley effect—a breach of trust, a feeling that you’re not interacting with a real person. That’s a big problem. So, again, it’s our duty to map out what AI does best. First, understand that just because AI does something well, it doesn’t mean that’s our competitive advantage.

Andrea Iorio: Secondly, we need to know its limitations. That’s where we thrive and should focus: on empathy, on responsibility, and on navigating the commoditization of skills and knowledge, as we’ve discussed.


Embracing Change and Practical Recommendations

Jeff Dance: It does seem like AI can be a short-term competitive advantage, but in the content world, I don’t know. In tech companies leveraging AI, there is a competitive advantage, but I think it’s a time-based window. If everyone is doing it, it’s no longer an advantage. Time and speed seem to be the last great competitive advantages. If I can make something more intelligent or better and get there before others, there is an advantage—or a disadvantage if I don’t. McKinsey said that in three to four years, technology companies could be obsolete if they don’t incorporate AI. So there is something there. But I agree that long-term, it levels out and becomes commoditized. We’re already seeing that with the big engines competing, which benefits us all. It’s commoditizing what anyone can access, which is amazing.

I love the notion of differentiating what we have: AI doesn’t have multi-sensory emotions, and that’s something we have. You mentioned it doesn’t have a conscience, especially in contexts like war. We are driven by values as humans, and I believe most people are good, even though the news highlights otherwise. We build trust, connection, and we feel. We love, and love is one of the most powerful human forces. There are so many unique things we have: morality, genuine emotion, moral judgment, intuition, improvisation, creativity—these are things we need to hold onto and magnify. I hope that as AI transforms our work, these unique human qualities get magnified.

But there’s still change along the way, and that change can be painful—having to reskill or adapt as jobs change dramatically. Humans don’t change as fast as technology. Do you cover some of that change in your book? Are there practical recommendations for people experiencing a lot of change?

Andrea Iorio: Yes, actually. There’s a skill in the book I call “re-perception,” which is tied to the ability of professionals, leaders, and entrepreneurs to go through change—not just accept it, but promote it. In the age of AI, things we thought impossible yesterday are possible today. We see this with large language models and other new technologies. In a world where the rate of change is exponential, we have to update our beliefs, decisions, and knowledge proportionally. But as humans, we’re designed to change linearly, so we’re not used to this pace.

One practical way to adapt, which we implemented when I was Chief Digital Officer at L’Oréal, is reverse mentoring. Instead of traditional mentoring, where experienced leaders mentor new talent, we reversed it: we leveraged the beginner’s mindset of someone new to the industry or fresh out of college to mentor experienced leaders. The result was amazing—leaders gained fresh perspectives on how younger generations make decisions and how they value transparency and sustainability.

Another way is to use AI as a sparring partner for thought experiments and brainstorming. MIT studies show that AI is good for brainstorming because it provides a higher number of scenarios and choices than we would generate on our own. The first 10 ideas in a brainstorming session are usually obvious, but with AI, we get a divergent thinker that helps us explore new scenarios if we prompt it well.

Lastly, it’s important to understand what’s going on with the end customer. Real change comes from the end consumer, not just the product or service we sell. If we’re not customer-focused, we won’t be able to update our knowledge and thinking at the necessary rate. Customer centricity, using data to understand the end consumer, and using AI to interpret that data are crucial for updating our thinking and making good decisions—or unmaking bad ones.

Jeff Dance: Yeah, I love these thoughts about innovation. If that’s at the highest order of Maslow’s hierarchy of needs, and we can be more innovative with AI, how do we do that from a divergent perspective for idea generation? How do we understand the value of a beginner’s mindset—someone who isn’t an expert but comes in with an open mind—and how that can drive innovation?

Andrea Iorio: Because they also ask better questions. One of the big factors that will help us make the best use of AI is how well we craft questions—that’s prompting. This open, beginner’s mindset is valuable. Sometimes our kids are the best potential AI users because they ask questions that adults don’t, giving them access to a huge repertoire of creativity and knowledge that AI can help unlock.

Jeff Dance: Yeah, creativity and innovation can decline over time as we get used to things. We stop asking questions because we’re institutionalized in how things have always been done. True innovation requires novelty and value. Generating new ideas that create value is the history of companies, but most don’t continue to innovate—52% of the Fortune 500 companies from 2000 no longer exist, often because they didn’t adapt or innovate with new technology.

When we created Fresh 18 years ago, it was about helping companies stay fresh. How do we bring a beginner’s mindset to new technology? Even after two years you become an expert, and in creating the company I realized I need both sides: young people with a beginner’s mindset and experts with wisdom. Wisdom comes from experience, but if you rely only on wisdom, thinking can become closed. If you have only a beginner’s mindset, you might have great ideas but lack the convergent thinking to add value. You need both. What’s cool about this conversation is that we’re talking about the value of AI and also the value of reverse mentoring and generational exchange—both sides, human and AI, playing into innovation.

Andrea Iorio: Exactly. There’s a generational exchange that’s very powerful. This diversity of thought and experience is super important. When we talk about diversity, it’s broader than just certain aspects—it’s also generational. The way different perspectives mix is complementary to AI. It’s important from the boardroom down. As you said, AI is a great sparring partner, providing more scenarios and ideas, but we have to use our critical thinking to determine what’s good for us.

Jeff Dance: If you’re a creator trying to create value, that sparring partner allows for deeper thinking and more creation. But if you’re just consuming, maybe not. That’s a key difference for us as humans.

One of the things you mentioned was the productivity paradox: if AI makes us all productive, maybe it’s not a competitive advantage. I’ve also heard about the Jevons paradox, from the coal industry in the 1800s—as coal use became more efficient and cheaper, people consumed more of it, not less. If something is easier, we use more of it. If we build a wider highway, traffic increases because it’s easier to get there.

Andrea Iorio: Yeah.

Jeff Dance: The parallel for AI is that if it’s easier to do something, we’ll do more of it. As business owners, we wonder: are we creating value or just cutting costs? In the news, we see companies cutting employees—maybe 4%, which is still a lot of people.

Andrea Iorio: Yeah, in absolute terms it sounds big. Amazon recently announced tens of thousands of layoffs, but that’s still less than 10% of its workforce. You’re right, though—those numbers can look gigantic.

Jeff Dance: Right. My thinking is, and maybe this is the positive side, as we get more productive, maybe we’ll create more value and have more economic output, leading to new jobs. That’s been the history of technology, but I don’t know if it works here. The Jevons paradox analogy for AI is that it has that potential. What are your thoughts? Will we create more jobs as a result of AI, or lose more? Any input?

Andrea Iorio: What I believe will happen is that we’ll have more time available and won’t work the same way we do now. A chunk of our work will be more related to personal or professional creativity and innovation. I don’t think the number of jobs will change much; it’s more about how the jobs themselves change. Of course, new jobs will emerge, like AI ethicists, but not so many will be fully substituted.

For example, cashier-free supermarkets have been discussed for over a decade, and Amazon implemented them, but they haven’t really taken off because there’s still a role for people there, even if the job is substitutable. I believe responsibilities will shift, and the core of work will change. The future of work will involve being curators of our own workflows. We’ll need to map out what’s substitutable and automate those tasks. The next step is to determine what can be augmented through AI—enhancing the quality of our work.

If I’m a marketer, I can use AI tools to generate more images for a briefing or quickly create presentations in Gamma. The third part is what we do with the time we earn back. Salespeople, for example, will be affected—McKinsey shows about 30% of a salesperson’s tasks are automatable, like order taking or pricing. What will make a salesperson stand out is how they use the time they gain back: spending more time with customers, asking better questions, or just relaxing. The difference will be in how we use that time.

It’s more about reshuffling activities than a big substitution of jobs or creation of many new ones. New jobs will be created, as always—“influencer” wasn’t a job before, but now it is. Ultimately, it’s about what we do with the time we gain back. I like the analogy of a scientist: we’ll need to experiment more, test and learn new things, which we don’t have time for now. Time is valuable, and as you said, it will make a difference in the future of work.


Future Outlook

Jeff Dance: Yeah. On that topic, thank you. What other thoughts do you have for the future? I know it’s hard to predict with AI moving so fast, but any other insights about how things will evolve, or that you’ve written about?

Andrea Iorio: Definitely. On the technical side of AI, I don’t think we’ll see as much innovation in the next five to ten years as we have recently. We’re reaching a plateau with current large language model architectures—they’ve been trained on almost all available data, so now they’re using synthetic data, generated by AI itself in a self-feeding loop. The power of large language models is plateauing. We’ll see more private, proprietary language models—like Google’s NotebookLM—that are niche and used for specific applications. But we won’t see big innovations, at least in my view.

On the human side, we’ll see a shift in training and evaluating people—from hard skills to soft skills. For my book, I interviewed 247 HR leaders globally and asked: would you rather hire candidate one, who has all the hard skills and knowledge but poor soft skills, or candidate two, who lacks the necessary hard skills but has excellent soft skills? 93% said they’d hire candidate two. That’s counterintuitive because we grew up being told to focus on knowledge and specialization, not soft skills.

Now, that’s changing. The future of work will focus more on soft skills, though hard skills will still be important. But hard skills are much more easily accessible now, especially if someone knows how to use AI tools. Soft skills are the real differentiator—they’re not easily replaced.

Jeff Dance: That’s really interesting. We’re seeing evidence of that now. Computer science graduates don’t have the same job market as a few years ago, even though those are hard-earned skills. Anyone can create basic code now because it’s language-based. Hard skills are more accessible, so soft skills—which we haven’t always hired for because they’re harder to measure—will become more important. I love the notion that soft skills are going to become more important in the future. These are human skills—unique to us. If intelligence is commoditized, then the emphasis becomes more on soft skills.

Andrea Iorio: That’s correct. In my research, I asked HR leaders why, and they told me two reasons. First, hard skills are easier to teach than soft skills, which didn’t surprise me. The second reason was more interesting: they said they already use AI tools that perform hard skills better than people, but haven’t found any tool that comes close to the soft skills of their employees. So why hire candidate one? They see more need for candidate two, which is counterintuitive for those of us who grew up being rewarded for knowledge and hard skills. Hard skills are measurable and transferable, but now AI tools can do them, so recruiters and leaders are looking for something less accessible.

Jeff Dance: I love it. As we think about the future, that’s a new one for me. I want to ask you two more questions to wrap up. One: you mentioned most organizations haven’t been successful with AI. I just interviewed one of the inventors of RAG, and he was talking about context—context is everything with AI. How do we flip that 5% success rate into 95%? What would you say to business leaders working on AI strategies? What’s your advice to make their AI deployments and launches more successful?

Andrea Iorio: The first thing is data. AI tools are only as good as the data we have. Incomplete data leads to generalization problems, hallucinations, and transparency issues. We have to be data-ready—there’s no point in hiring the best tools or launching big projects if the data isn’t ready.

The second part is people. It’s more complex than just having the right people trained on the tools. Research shows there are two dimensions to human collaboration with AI: understanding the tool, and the emotional perception of the tool. If we don’t create a safe space where people can experiment with AI and don’t feel threatened by it—where complementarity, not substitution, is communicated—then adoption suffers. If I ask my sales teams to use AI but don’t communicate it well, they may subconsciously think they’ll be replaced. Communication is key.

The human part is how we design and communicate the collaboration so that people not only understand how to use AI, but also trust the tool, their leaders, and the organization. The last aspect is governance. If I’m using AI to make sophisticated or private decisions, how do I protect the data? How do I explain decisions to customers? How do I ensure there’s a human in the loop and not just full automation? So, my advice is to focus on three pillars: data, the human/cultural aspect, and governance—making it ethical and keeping humans in the loop.

Jeff Dance: Yeah, thanks for that. It reminds me of the 70-20-10 principle: so much is on the human side, but we think it’s just the technology. Everyone should have some sort of AI strategy, but there’s human work required to make it successful.

Andrea Iorio: Yes, that can’t be outsourced. That’s for sure.


Final Thoughts

Jeff Dance: Awesome. Last question: What advancements are you personally most excited for as it relates to AI? As you look ahead, what are you most optimistic about?

Andrea Iorio: AI plays a great part in my own work as a keynote speaker and author. First, it has helped me scale my business globally—I can now create content in multiple languages much more easily and quickly. I’m Italian, I live in the US, and I also cater to Spanish-speaking markets and to Brazil, where the language is Portuguese. Translating books, articles, or podcasts used to take so much time; now, using tools like Synthesia and ChatGPT, I can reach more people than ever before. That’s really positive.

There’s no AI yet that can create a digital twin so you can be on stage in multiple places at once—maybe one day we’ll get there, though that could commoditize speakers, which we don’t want! I’m excited about the augmentation part: I can create and edit content much more easily, reach more people, and promote my work and topics. Especially with my book, Between You and AI, I want people to understand it’s not just about the technical aspect, but about developing new abilities and skills.

If we just implement AI tools but view our jobs the same way as before, we’ll end up overlapping tasks and thinking we’re more productive when we’re not—because everyone else is using the same tools. If we don’t have the right data, we’re not making better decisions. Training and educating people globally, thanks to AI, on its responsible use—based on human capabilities—is very exciting. Connections like this podcast are possible because of digital transformation. I’m very excited, though also a little scared, like everyone else. But that sense of urgency keeps the flame alive to keep pace with transformation.

Jeff Dance: That’s positive pressure versus bad stress—it’s good stress. Thank you for being on the show. This was a great session as we think about the future for people and business owners, and how to make this positive by combining human and artificial intelligence. I’m amazed at the intelligence that can be added to almost anything, even though it’s still artificial. There’s value to us as humans. Thanks again for being here. We learned so much together and are grateful.

Andrea Iorio: Thank you so much, Jeff. Thanks, everyone, for listening. It was a pleasure being here.