Is technology helping us thrive or tearing us apart? Explore how rethinking innovation could lead to a more connected and human future.
What if we’ve been asking the wrong question about technology all along? Not “What can it do for us?” but “What is it doing to us?”
In this WhoWhatWhy podcast, we talk with Chris Colbert, former managing director of the Harvard Innovation Labs, to explore a provocative idea: that technology as we know it may be dead — and that’s exactly what humanity needs to thrive.
Drawing from his new book, Technology is Dead: The Path to a More Human Future, Colbert talks about how our obsession with innovation and productivity has eroded our connections, amplified our vulnerabilities, and left us more isolated than ever. But this isn’t a doomsday manifesto — it’s a clarion call to reimagine how we design, adopt, and interact with the tools shaping our lives.
From AI and social media to the unintended consequences of centuries-old innovations, Colbert challenges us to rethink our relationship with technology — and with ourselves.
Colbert’s insights will make us question not just where we’re going, but who we’ll be when we get there.
Full Text Transcript:
(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to a constraint of resources, we are not always able to proofread them as closely as we would like and hope that you will excuse any errors that slipped through.)
Jeff Schechtman: Welcome to the WhoWhatWhy podcast. I’m your host, Jeff Schechtman. In the grand tapestry of human progress, technology has always been both our paintbrush and our paradox. Today, we stand at a crossroads, with the digital and the human intertwined in ways both exhilarating and unsettling.
My guest, Chris Colbert, former managing director of the Harvard Innovation Labs, offers us a compass for this terrain in his new book, Technology is Dead: The Path to a More Human Future. Colbert’s vision is not of unplugging but of reimagining, a clarion call to infuse our innovations with intention and our algorithms with empathy.
Yet as we explore his ideas, we’re reminded that technology, like life itself, often defies our best-laid plans. From the humble light bulb to the ubiquitous smartphone, history whispers a familiar refrain: progress comes packaged with unintended consequences. The cotton gin that revolutionized agriculture also entrenched slavery. The automobile that connected continents also reshaped our cities and our climate. So as we stand on the precipice of AI, quantum computing, and marvels yet unimagined, Colbert invites us to dream of a future where technology amplifies our humanity. But we must also ask: can we truly shape this digital destiny, or are we merely riding a whirlwind of our own making?
In a world where we often marvel at how far we’ve come, Colbert dares us to question not just where we’re headed but who we will be when we arrive. He challenges us to envision a renaissance of the human spirit in the digital age, a noble pursuit, yet one that must contend with the unpredictable nature of progress itself.
Colbert argues that we must chart a course toward a future where technology doesn’t just connect us but elevates us, where the end of technology as we know it might just be the beginning of our next great human adventure, one that we will shape as much as it shapes us. It is my pleasure to welcome Chris Colbert to the program to talk about Technology is Dead: The Path to a More Human Future. Chris, thanks so much for joining us here on the WhoWhatWhy podcast.
Chris Colbert: Jeff, thanks for having me on the show. And I have to say, that is “the” best introduction and, I think, the most succinct and compelling capture of what I have attempted to do with the book. There was a little voice in my head saying to me while you were speaking, “Jeff should have written this book.” It was really, really wonderfully done, and I think you nailed it. I really do.
Jeff: Well, thank you. One of the things you talk about and that we discussed in the introduction is this notion of the humanity of technology. It seems that we forget sometimes that this technology was human-created, that there is a human hand in the creation of this technology, and in most cases, it is to solve what is perceived as a human problem. Talk about that.
Chris: We have created the, I won’t call it a monster, but– a lot of people hear me speak or hear these interviews and think I’m anti-technology. I’m not anti-technology at all. Technology has had a profound positive effect on society, on our species, on the way the world works, but it has also, as you pointed out, created some unintended consequences. I think the makers of technology, the innovators going back centuries, before bits and bytes, when technology took other forms, were primarily focused on productivity as the measure of its effect. And I talk in the book about the idea of speed being the golden calf that every innovator since the beginning of innovation has really focused on.
The presumption being that speed begets productivity, productivity begets economic growth, and that all ends up being really positive for everybody. But that’s not, in fact, true. Economic growth comes joined with unintended consequences. And the implicit boo-boo, if you will, of innovation, going back centuries and fast-forwarding to today and AI, is what I see as the inability of the innovators to dimensionalize the problem state, or the opportunity state, that they’re trying to solve with their innovation through the lens of human truth: human needs, human wants, human frailties, human vulnerabilities.
I’m a big believer in, and I’m sure you know, Abraham Maslow’s work from the 1950s and his hierarchy of needs. I talk about it in the book, but I also believe there’s a dark side to that pyramid, to that hierarchy, and the dark side is the seven deadly sins. Pride, sloth, envy, greed, and gluttony, those are all within us too. And I think the innovators of the past and the innovators of today need to recognize that those vulnerabilities can be, and are, easily preyed on by technology.
It’s this whole idea of bad actors. Well, we all, in a way, have a bad actor within us. And so the advances that are happening are, I won’t say targeting, but they are very readily connected to the dark side of our needs, and out of that, they create troubling consequences at an individual level and at a societal level, for all of us.
Jeff: One of the things to think about, it seems, as technology evolves, is that there isn’t always the opportunity, or even the desire, in many cases, to create a cost-benefit analysis of where a technology is going; that it happens mostly out of the emotional sense of solving a particular problem. That it is emotionally generated.
Chris: I think that’s absolutely true. One of the questions I asked myself when writing the book was: do humans have the capacity to dimensionalize the problem that they’re trying to solve, and to dimensionalize the solution to that problem, and from that, to see whether the solution begets unintended negative consequences? Let me go back to social media. Could we have understood 15 years ago that the advent of social media would carry with it division, would carry with it mental health issues, would carry with it user-generated content that would fuel things like conspiracy theories? I’m not sure we could have, but I don’t think that means we shouldn’t try.
And I think what’s interesting about AI is that it has advanced my point of view tremendously. A year and a half ago, I wasn’t going to publish this book because, literally, I thought nobody cared. It was a theoretical conversation that nobody wanted to have. And then AI, and specifically ChatGPT, reared its head, and all of a sudden, people are saying, “We need to put guardrails on this thing. We need criteria. We need safety valves. We need all sorts of things.” And I think that’s good news. I also still question whether we have the capacity to figure that out. And so, yes.
Jeff: Isn’t the dimensionalization that you’re talking about something that comes, though, through iteration? The nature of technology is such that the first attempts, the initial beta attempts, are never the final product, so it becomes harder, except through the process of iteration, to really dimensionalize this.
Chris: Yes. I think that’s really fair. But I’m going to take this up a couple of levels, and I talk about this in the book. One of my beliefs is that everything is a system with inputs and outputs. Dinner is a system: we make the dinner, we eat the dinner, it gives us energy, et cetera. Inputs and outputs. And innovations, they’re all components in the system called the human race, or the species, or society. And so the question is, at a macro level, what is the intention of all these innovations? What is the intention of the human-species system?
And historically, the intention has been nothing but economic growth, GDP, more recently. And I argue that that’s a flawed intention. That’s a myopic intention. That’s a one-dimensional intention. And if we as a species all agree that the intention of the system is bigger than that, is more dimensional, and I call out well-being, human well-being in every facet of what that means, as the intention, and we then put all the innovations that exist, and those to come, through the filter of, “Does this contribute to collective well-being?” then I think that serves as a pretty effective guiding light. Early on, in the first or second iteration of these innovations, we will know pretty quickly whether they have the capacity to positively contribute to that outcome.
But the boo-boo of the human species is that we don’t agree on the intention of any of this, other than economic growth. And there’s been a presumption for decades that economic growth would– that a rising tide will lift all boats, and the data does not support that. It creates division. It creates separation. It creates a world where 0.1% of the population controls 90% of the wealth. So my thing is, we have the wrong measure for the system called the human species. And I’m not the only one in this. There are many organizations, including the United Nations, that are trying to reset the orientation of the planet, and of the nation-states that make up the planet, around the question, “What is the goal of all this?” Right now, I think most innovation has the wrong goal.
Jeff: Sometimes, though, the goal is not necessarily economic outcomes; it’s to solve a particular problem that has the benefit of creating economic abundance later on. Facebook is a good example. The original intent of Facebook was for people to get dates on the Harvard campus, close to where you are. It turned into something else.
Chris: Okay, I appreciate that, but I could argue, no, the– okay. But that’s a form of productivity. It was a way to get dates faster, and, oh, by the way, it was also a way to make a lot of money. And there’s a great quote from Sean Parker, Facebook’s founding president, who ended up turning on the whole social media thing. He’s now anti-social media. But one of the things he said is, “We knew we had a multi-billion-dollar business on our hands when we realized the entire platform was about validation,” which is back to the seven deadly sins; pride and ego are alive and well in all of us, and that thing preys on them. But the original idea, the original innovation, was about the productivity of hooking up boys and girls.
Jeff: But Sean Parker’s quote, which is great, and actually, I pulled it out in my notes, it’s a great quote, but his point is that “we realized.” That wasn’t the intent. It was a realization that came later, after trying to solve this dating problem.
Chris: Right. So, again, the motivation for that innovation, and this is all subjective opinion, but I’m going to argue, was a productivity play. Like, how do we reduce the distance between a person and finding a significant other? How do we use technology to do that? Early on, in the first iterations, they realized the power of the platform was really in the dimension of the human need for validation, which is a dark, sad truth. And that realization– and I’m going to argue that in a future state, which may be Pollyanna on my part, the innovators would pull back from the innovation, saying, “That is unhealthy for collective well-being.”
We’re now on to something that is ultimately problematic. Feeding on the need for people to feel better about themselves is a terrible business.
Jeff: But it also brings up this issue that you raise, which is where the complexity lies. Is humanity more complex, or is technology more complex? And arguably, we’re getting to a point where that may be changing.
Chris: Yes. It’s funny. I’m of two minds. One is, I think we’re way more complicated. The code of humans is way more complicated than the code of technology. On the other hand, I think within our code are some very simple, call them motivators, drivers, and I think, as I said, Maslow’s hierarchy of needs is very real in all of us. We need food, water, shelter. We need belonging, we need safety, we need comfort, we need self-esteem. We need the opportunity to actualize our potential, or whatever, and the seven deadly sins are within us.
So in some senses, we are very clear on who we are. I think the complexity comes in the course of engagement, whether the engagement is innovation, where I’m trying to build something, or it’s me trying to work out an issue with my wife. Our emotions are so wrapped up in our capacity to analyze the other that we get confused really quickly. And the biases in our brain and the little voices in our brain telling us, “Don’t believe her, she’s–” whatever, whatever. It gets messy really quickly. And I think because of that, we opt not to go down that path.
We opt not to put the human factor, human understanding, at the center of the engagement. Again, whether it’s innovating a new thing or negotiating with a significant other, it’s just icky. It’s mucky. And so we ignore it, and we just plow forward and say, “The goal here is to build the thing, and then we’ll worry about that other stuff later.”
When I was at Harvard overseeing scores of startups, student startups, alumni startups, I taught a workshop called Human Factors, and this was for all the founders of these startups. Each semester, we had 200 startups in the building, and it was the worst-attended workshop. I taught other workshops that were the best-attended, but none of these founders wanted to come to a workshop on understanding the human.
And my theory is, it’s because it’s mucky. It’s complicated. They don’t want to look the truth in the eyes on that one, and they’d rather go back and work on the technological code and building the thing. But the problem with that, from an innovation standpoint, is you can build the thing, but if you don’t build the thing to fit where the human is, the human is not going to adopt the thing.
Jeff: And you talk about that, that adoption, that acceptance, really is a key factor.
Chris: Yes. So it’s weird. On one hand, most innovations fail. 95% of corporate innovation fails; something like 90% of startup innovation fails. And my theory is that it fails because of the lack of human understanding. Then on the other hand, a lot of it, at least the consumer-facing technology that has succeeded, has succeeded in its adoption, succeeded in its productivity, in the enhancement of our life, in our ability to do many things at the same time, whatever, whatever. But it’s also failed, in that it is, in a way, eroding us, eroding our human-to-human connection.
One of my fears is this atomization and anonymization of society. We are all effectively becoming strangers to each other. Even people who live in the same building now don’t acknowledge each other. Most of them walk the halls with earbuds in their ears. There’s just this erosion of human connection. So my point there is, even the stuff that appears to be successful, the innovation that appears to be a positive, does carry these costs that I don’t think we have fully calculated.
Jeff: One of the battles that seems to be going on, and maybe it lies at the core of some of this, is the battle between the emotionality we’ve been talking about, the fact that emotion plays such a large role, and the fact that this is happening at the same time that critical thinking skills seem to be in decline.
Chris: Oh, yes. You nailed it, man. It’s a troubling combination. Now the techno-utopians would argue, “Well, don’t worry about any of that, because technology will save the day. Technology will create a world where we won’t have to think critically, and all of our emotions will be accommodated by this set of tools and resources and virtual worlds that we live in that will just envelop us in love and affection and everything will be great.” And I think honestly that’s crap.
There was actually a piece in The Times yesterday about some guy who used AI to create a virtual representation of himself, a voice, basically. And he called his friend and used the tool. Anyway, the gist of the whole article is, functionally, yes, you can use AI to create a version of yourself, and you can have a conversation with one of your friends, but really, what the article got to was that all we’re doing in that artifice of connection is creating a sense of loneliness.
We are feelers, and when you start removing the capacity to feel, the authenticity of feeling, and replace it with an artifice of feeling, I think we can’t help but translate that into sadness. Yes, the slope of the loss of critical thinking, combined with the emotionality of us, it’s– I’m not a doomsdayer, but we have to find a different path. We have to get on a different trajectory.
And one of the questions I’ve been asked in these conversations is, well, what exactly do we do? There’s a systemic view that I call out, which is maybe a little wonky for some readers to contemplate, but there’s a much more fundamental view, which is: just stop being in service to technology. There’s a great quote from Ash Carter in the book. I spoke at a conference with him at Harvard. He was Obama’s Secretary of Defense, and he said, “We cannot slow technology, but we must steer it.” And I argue that that applies at an individual level. He was talking about doing it at a systems level or a society level, but I think at an individual level, for the listeners: are we making the right choices vis-à-vis our relationship with technology? And, for those of us who are parents, are we making the right choices vis-à-vis our kids’ relationship with technology?
And I’ll share a quick anecdote on that. I was talking to a guy a couple of weeks ago, someone I didn’t really know, but he, of his own volition, brought it up. He basically said, “I’m really worried about my two kids. I have a 9-year-old and an 11-year-old daughter. The 9-year-old seems to be really, really anxious, and the 11-year-old appears to be going through bouts of depression. I’m pretty convinced it’s screen time. They’re on their phones 24/7. Even when they’re home with their mother and me, they’re not really at home; we’re all disconnected.”
And I said, “What are you going to do about that?” And he goes, “I can’t do anything.” And I said, “Why?” And he said, “Well, if I reduce their screen time, they will hate me.” And then I said, “Well, what’s worse: them hating you for some short period of time, because eventually it’ll dissipate, or them ending up with lifelong mental health issues or something worse?” And my point is, the guy was abdicating his ability to steer technology. Honestly, I think that’s a slippery slope to hell.
Jeff: Of course. And if you carry that metaphor a little further and talk about steering it, whether it’s institutional or individual, as you’re saying, then it requires some kind of map, some kind of direction to steer by, and I’d argue that we don’t have that at the moment.
Chris: No, and I think this goes back to my point about intention. I’m a big fan of intentionality in everything. I work at a company and I’m the guy in the room in the meetings constantly saying, “What is the intention of this work that we’re talking about? Why are we doing this work? What is the expected outcome beyond economic growth? It has to be more dimensional than that.”
And I think the weird thing is– I’ve encouraged– I’m lucky to have a lot of friends, including a lot of young people, much younger than me, and many of them are having kids for the first time, including my daughter; my son also has a daughter. And I say, one thing you should talk about at home with your significant other is, “What is the intention of our family? What is the intention of our parenting? What do we seek as the intention for our children’s lives?” Because there is an intention; it’s just that, in most cases, we don’t talk about it, or the presumption is that the intention is economic growth and longevity.
So: I want my children to live long, and I want them to be moderately wealthy, or at least have enough money to live okay. And I think that’s a really primitive– well, as I said, one-dimensional definition. So if that father and his wife or partner had a conversation about, “What is the intention for our children? How do we want them to feel? How do we want them to think about themselves? How do we want them to grow up? How do we want them to realize their lives?” that intention would help motivate a desire, and a potential capacity, to steer technology within that household. But there is no intention.
Jeff: Arguably, though, that gets back to the hierarchy of needs once again; it depends on what the needs of the household are. And in most cases, the goal of today is just tomorrow.
Chris: It’s survival. Yes, but even there, and I’ve had that discussion with others, part of that is because we are in obeisance to technology, and we now perceive that the goal of the day is to keep up with the volume of transactions. I was on the subway last night watching people on their phones. And it’s funny, theoretically, they’re on their phones for pleasure, but it kind of felt anxious: “I have to constantly be plugged into everything. I’m texting here, I’m looking here, I’m looking at that,” da, da, da, da, da, da, da, da. And you’re like, “Does that all translate into better? Does that translate into well-being? Does that translate into happy?”
Jeff: It translates into productivity.
Chris: Yes, it’s productivity. I’m doing a lot. “How was your day, honey?” “Oh, it was amazing. I did a lot of shit.” And you’re like, “Well, is that good?” And I see this in the corporate world, where volume is the measure of output. And I’m like, “No, no, no. Volume is not the goal.” Productivity, particularly in the white-collar world, is a thinking function, not a doing function. But because of technology and its profound impact on literal, transactional productivity, we have now determined that volume is the goal. And I’m like, “Ah.” I don’t know. I don’t know.
Jeff: We’re just about out of time, but I guess the question is, that new direction, that leadership, the new ideas, even the guardrails that you alluded to earlier, where does that come from?
Chris: You are so smart. Your questions are great. I really, really appreciate you and your brain. The sad truth is, and I write about it in this book, it’s not going to come from traditional leadership. It’s not going to come from governments; it’s not going to come from the United Nations. They’re doing what they can, but they’re emasculated. They don’t have the authority to make this happen. Plus, the nationalization of the– we’re a globe. We’re a globalized entity as a species, but we still operate in nationalistic buckets. That doesn’t make any sense.
The macro solution, I write about it in the book, but I don’t really know if it will ever happen until we get to D-Day. My whole ending point here is it’s on us. And so if you’re the dad of those two little girls, take action. Do something. If you’re in a company and the entire intent of the company is economic growth and the employees are miserable, do something. If you’re in the community and you’re walking down the street and you see somebody coming towards you, say hello. If you’re in a gym, take out your earbuds and talk to the person next to you.
I think one of the great ironies of the disconnection of society is that if, all of a sudden, there were an asteroid heading our way, we would suddenly be more mindful of the other, we would all– and this happened after 9/11, of course.
Jeff: After 9/11, right?
Chris: Yes, people in New York were hugging for the first week, and then the second week they were still saying hello. And then by week three, they’d forgotten the whole thing and they went back to flipping each other off. Why do we need the threat? Why do we need adversity to remind ourselves that our humanity is actually the only thing that matters? And that being human first in every action is the way.
Jeff: Chris Colbert, his book is Technology is Dead: The Path to a More Human Future. Chris, I thank you so much for spending time with us here on The WhoWhatWhy podcast.
Chris: You’re a brilliant man, and I’m grateful to know you.
Jeff: Thank you so much. And thank you for listening and joining us here on The WhoWhatWhy podcast. I hope you join us next week for another WhoWhatWhy podcast. I’m Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.