
Episode Summary
People often say that history repeats itself—so often, in fact, that the phrase has become a cliché. Yet when it comes to technology, that sentiment holds a lot of truth. We don’t just reinvent the tools of our ancestors; we also recycle the same debates, challenges, and controversies that have surrounded earlier innovations, often without realizing it.
That is especially true with today’s artificial intelligence technologies. While AI may feel like a bold leap into the future, its foundations rest on decades-old ideas, and the controversies it stirs up—about ethics, economics, and control—echo the same fundamental arguments that once surrounded everything from the introduction of calculators in schools to the chaotic rise of the early internet.
Technology and the social debates it provokes are as old as humanity itself. And as technological development continues to accelerate, these conversations will only become more urgent. In this episode, I discuss all this with Dave Karpf, a political scientist and associate professor at George Washington University, where he specializes in technology's political history.
The video of this episode is available, and the transcript is below. Because of its length, some podcast apps and email programs may truncate it. Access the episode page to get the full text. You can subscribe to Theory of Change and other Flux podcasts on Apple Podcasts, Spotify, Amazon Podcasts, YouTube, Patreon, Substack, and elsewhere.
Related Content
Large language models are unleashing the power of mediocrity
Big finance and corporate monopolies have blocked the original promise of the internet
How libertarianism bifurcated into neoliberalism and corporate authoritarianism
Discussing the famous ‘Californian ideology’ essay 30 years after the fact with its co-author
Grok’s ‘Mecha Hitler’ meltdown and MAGA’s broken epistemology
The strange nexus of Christian fundamentalism and techno-salvationism
The political history of Bitcoin is not what you may think
Why Elon Musk and other technology investors have become so politically extreme
Audio Chapters
00:00 — Introduction
12:21 — ‘Satisficing,’ mediocrity, and large language models
17:42 — Corporations used interns to kill jobs before they used AI
23:46 — AI as a technology isn’t a problem, it’s how it’s used
26:58 — Escaping bad epistemology is the ‘first singularity’ for humanity
33:24 — Societal elites ignored reactionism and now are shocked that it's monstrous
38:46 — View of the world's complexity as the ultimate dividing line of politics
41:54 — ‘Crypto is libertarian, AI is communist’
47:56 — Is the U.S. left ignoring technology to the peril of democracy?
50:48 — How to use AI responsibly as a regular person
55:59 — Student AI use in assignments is a response to larger problems
57:57 — Left and center-left leaders seem to fantasize about ideal policies as the right burns
Audio Transcript
The following is a machine-generated transcript of the audio that has not been proofed. It is provided for convenience purposes only.
MATTHEW SHEFFIELD: And joining me now is Dave Karpf. Hey Dave, welcome to Theory of Change.
DAVE KARPF: Thanks for having me.
SHEFFIELD: Yeah. So, you have been somebody who has been trying to get the larger center-left to have more coherent thoughts, I think, about technology and policy in a way that I wish was more common. So congratulations on that, first of all.
KARPF: Thank you.
SHEFFIELD: And one of the things that you've done that I think is really important is that in order to have coherent conversations about technology, you have to understand the history and that so many of the problems that exist in the current moment really do trace back to things that happened decades ago.
We keep rehashing the same controversies, like content moderation and banning, and these are things people were haggling about when Usenet was a thing. I say that now, and a lot of the listeners are like, what the hell is Usenet? Look it up, guys. I won't bother you with it.
KARPF: I feel like my life motto for the past seven or so years has been: history doesn't repeat itself, but it rhymes. I'm a political scientist by training, not an actual historian, but my approach to history, and all the work that I do on tech now, originated from reading the entire back catalog of Wired magazine, just to get a sense of what people were saying the digital revolution was going to look like as it has been arriving for my entire adult life.
And as you look at that, what you start to notice is, oh, [00:04:00] the things that we're saying about, for instance, AI today sound a lot like what we were saying about big data 15 years ago. We can maybe learn some lessons from how that worked out. Not to say that history will always repeat itself, but we can at least have some informed skepticism by noticing that we keep saying the same things and making the same promises.
SHEFFIELD: So for people who weren't paying attention earlier to the big data hype, give us a little overview of what happened with that.
KARPF: Sure. So big data, this is like the late aughts into the mid-teens, was the digital future. The idea essentially was that there is now all this data being shared online. Web 2.0, in the early and mid-aughts, is when everybody starts contributing content online. And then you have the rise of data scientists saying, oh, we can analyze all of this to figure out patterns that you never could have understood before,
'cause we didn't have the data. So the rise of big data was promising to revolutionize economics, but also dating life. The guy who founded OkCupid wrote a whole book about how, based on the data that we have from OkCupid, we can finally solve for romance. And that was a fun book to read.
I enjoyed reading that book, but there was a while where people really believed that now that we have all of this data, we will apply basically fancy regression analysis and we'll crack the code of humanity. Within a few years, what we find out is that almost none of this panned out, 'cause humanity is more complicated than we thought. Fifteen years ago we thought, we've cracked romance, we will now figure all that out. And now I'm reading culture stories about how young people these days don't date and don't have sex anymore. So something went south. That's not to say that it's OkCupid's fault, but it is to say that the ambitions of 15 years ago didn't work out.
And it's basically because the world turned out to be more complicated than we thought. But then you hear Sam Altman and other people essentially saying, we're gonna do big data, but without the data [00:06:00] scientists; the AI will do regression analysis on itself, and we will crack the code of humanity. I think we can take a step back and say: we keep hearing the promise that now that we have all the data, we can regression-analyze our way to the promised land, and every time it's turned out that human society is more complicated than that. That's probably true here as well.
SHEFFIELD: Yeah. And cognitive science certainly does. One of my biggest frustrations with so much of the larger tech discourse, or even academic discourse, is that everything is siloed. No one knows anything about anything else, and you're not allowed to. In fact, if you try to import some ideas from another field into your PhD, people are like, no, stop it.
This is wrong. It's out of scope. You need to stop this. To me, that's what is so horribly ironic: there's this drive toward artificial general intelligence, and yet human general intelligence is hated in academia. You're not supposed to be a generalist.
You're not supposed to have deep knowledge of two or three fields. It's not allowed. I don't know. Am I wrong?
KARPF: I wanna push on that a little bit, because
my whole career has been based on people saying, actually, Dave, we're gonna let you do that. My dissertation project was somewhere between social movements and political communication and interest groups. It didn't fit anywhere. I was studying MoveOn.org and how organizations like that differ from older advocacy groups like the Sierra Club. And I remember having one of my dissertation committee members say to me, hey, this is really good. You're not gonna get hired into a political science department. And my answer was, I didn't know that until you just told me right now. Do you think I'll get hired somewhere?
And he was like, oh, we'll see. Which was, as an aside, not a great thing to say to young me. It freaked me out. When that came out as a book, I was quite worried about exactly what you're talking about, [00:08:00] that the book was borrowing from too many literatures and therefore didn't fit in any of them. And mostly just as luck, the reaction from each of those different disciplinary communities was, oh, he's not one of us, but he is talking about things that we care about, so we're gonna chat with him as an interesting visitor. Sociologists don't treat me as a sociologist, but they do treat me as somebody they would be happy to hang out with. Likewise on this Wired project, historians have repeatedly been like, hey, that's really cool. They don't fool themselves. They don't think I'm a historian, but they think that I'm somebody from a different field that they're happy to chat with.
I do think it's very hard to do that, and you need to get very lucky in the way that I've gotten. But honestly, I think the reason for that is that throughout your and my adult lives, we have been starving the academy for resources, and as resources get more scarce, we end up falling back into the silos as the only thing that's still around. So, to make it all about me for a second: I finished my dissertation in spring of 2009 and was going on the academic job market as the worldwide economic crash was happening. It was not a great time to do it. And one of the things that happened is that people who do weird interdisciplinary work were left out to dry first, because we're the oddball weirdos, and when most of the jobs go away, those jobs definitely go away. So I caught a postdoc, got lucky, and surfed for a couple of years until I could land somewhere. The academic job market now is so much worse than it was even back then. So I think you're right about the siloing, but I don't think that's inherent to the way the academy behaves; it's just the way that we end up reverting when we starve it for resources.
So I think that'll get worse until we decide that we value higher education as a society and we're gonna fund it like we did back in the 1970s. I think then good times could arrive again.
SHEFFIELD: Yeah, that's a great point. It is really incredible, though, when you [00:10:00] look at the early 1970s: all that investment that was put in, we're still reaping the benefits of it, more than 50 years later. But that's not an argument that has any currency with the Trump administration.
They're trying to cut it systematically, and then also censor it in a really horrible and disgusting way, with no regard to the repercussions. And to me, this is just natural reactionary social theory: they really believe that knowledge must be stopped
from the moment that I arrived on the scene, and we can't learn anything else. There is no history. There is no future. There is only what I am interested in, and nothing else.
KARPF: As an aside, did you see the Marc Andreessen comments from a week or two ago about higher ed?
SHEFFIELD: Only briefly, but go ahead and summarize it.
KARPF: The short version is that Marc Andreessen is an asshole, and I know that's gonna be a huge shocker to your listeners. Andreessen's whole career, his life, was made by being part of a lab at a public university.
SHEFFIELD: Yep.
KARPF: He went to the University of Illinois at Urbana-Champaign, where, thanks to Al Gore's information superhighway bill,
he was part of a lab that got to work on internet stuff, and that's where he built Mosaic, which they then just copied over and made private for Netscape. That's where he started out. But he has now
SHEFFIELD: Yeah.
KARPF: decided that, let's see, I forget the exact quote, but he basically said that he thinks MIT and Stanford are just lobbying shops for the baddies now, and that we need to destroy all of higher education in order to save it. Because now higher education is just investigating things that Marc Andreessen himself doesn't like anymore.
SHEFFIELD: Yeah.
KARPF: He climbed up the ladder, and now the people who are lower down the ladder aren't clapping for him loudly enough, so we must burn the whole thing down. Fuck that guy. [00:12:00]
SHEFFIELD: Yeah. And it's classic. This is the reactionary epistemology: an arrested psychological development in which everything is egocentric. It is literally a child mentality, in which other minds don't exist, don't have intentionality, and if I can't understand something, then it's worthless.
KARPF: Yeah.
'Satisficing,' mediocrity, and large language models
SHEFFIELD: One of the things that you've been writing about recently, or you wrote a little squib on it on your blog, is the idea of the existing large language model AIs as satisficing. And yes, I did say satisficing, not satisfying, so for everybody who's listening, that was not an error on my part.
And you'll get into that, but I found it very related to the guest before you on the show, which I'm sure you haven't seen yet: Venkatesh Rao, who has this idea of mediocrity. I thought there's an interesting parallel between what Venkat was talking about and what you were talking about with satisficing.
Tell the audience what you meant by that in that context, if you would, please.
KARPF: So satisficing is not a term that I made up. It's a term that Herbert Simon made up; he actually won a Nobel Prize in economics for the concept, in either '76 or '78. I learned about it in grad school. My wife's a lawyer, and no lawyers that I have ever found have heard this term, even though grad students like me back in the day were obsessed with it. Satisficing is the combination of satisfy and suffice. It was originally proposed as an alternative to rational maximizing. The theory of satisficing is that if you were searching for information, rather than obsessively searching for the perfect information, which would take a lot of time and therefore carry opportunity costs, the more efficient thing to do is to first establish the threshold at which you've reached good-enough information. Then you search, using your time and energy, [00:14:00] until you've reached that threshold, and then you stop. In practice, the reason why I find that useful for AI is that I often hear AI promoters talking about how we are building superintelligence. We're building digital God; we're building a thing that is going to be better than the best authors at writing texts, better than the best scientists at doing science. For actually existing AI, that strikes me as actively not being the case. What actually existing AI is quite good at is arriving at averages, right? The thing we've known ever since this stuff came out is that it's tremendously good at producing, I'm forgetting the term now 'cause I should have had more coffee before we chatted. Cliches, that's the word. It'll give you cliche-laden text, 'cause cliches are cliches for a reason. They're the thing that people always say. And that means that these large language models, which are foundationally about guessing the likely next word in a sequence of words, are going to fall toward that average.
That means that they're not going to produce the very best novel, but they can produce a pretty average novel. And if you wanna mash up different genres, you can do the average of all the genres, which can be surprising, delightful, cool. It can also hallucinate and have all sorts of errors. So you wouldn't want to use AI to treat your cancer, holy crap. But you might wanna use AI like one of the early examples from 2022 of how cool this stuff was: somebody using AI to plan their trip to Disneyland. Critics pointed out, yeah, it has you going on a ride that is currently out of operation. Ha ha, it's wrong. But if you've ever been to Disneyland with kids, you know that the goal of a parent is to survive that trip. So that's a great example of the type of case where what you really need to do is satisfice. You do not need to plan the perfect Disneyland vacation for you and your small children. You need to plan one that'll get you through the day. We've set the threshold at: I just [00:16:00] need something that's okay, something that is average and maybe cliched, because I'd be happy to take my kids on a cliched Disney vacation. That's the point. That's the type of task that AI is quite good for, or at least the actually existing AI that we have today. There are a lot of tasks where that is useful, but we wanna separate those out from other tasks. I don't think that we're going to see AI writing speeches for major political candidates, 'cause major political candidates are willing to spend a lot of money to believe that they're getting the very best possible speeches. That's not a space where you would expect AI to do better than the best speechwriters, so we're not gonna see it there. Where we are gonna see it is in places where really what we need to do is produce the 50-page report
for an audience that was never planning to read the report; they just wanted to know that it was produced.
That's the type of thing you use it for. This then leads to the second point that I was making in that post: part of the tension right now with actually existing AI is that my personal satisficing threshold as, say, a writer is very different from the satisficing thresholds of the people who own major news publications, right?
Because private equity bought a lot of these things out. They do not care about the product; they just wanna skim out the money. Many of us critics called it a couple of years ago: what these technologies are going to get used for, at least in the near term, is lowering the quality of the outputs while cutting the costs. That is exactly what's been happening, and it's unsurprising, not because it's something fundamental to AI, but because it's something fundamental to the structure of a lot of these industries. Because if the
SHEFFIELD: Yeah.
KARPF: owners of these industries don't care about the product, then their satisficing threshold will be so low that of course they're gonna fire the journalists, fire the workers, and just have a shitty product instead. They don't mind having a shitty product.
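Simon's satisficing procedure, as Karpf describes it (fix a "good enough" threshold in advance, search only until something clears it, then stop rather than hunting for the optimum), can be sketched in a few lines of code. The plan list, scores, and threshold below are invented purely for illustration, not anything from the episode.

```python
def satisfice(options, score, threshold):
    """Return the first option whose score meets the threshold
    (satisficing), instead of scanning every option for the best
    one (maximizing). Search stops as soon as 'good enough' appears."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing cleared the bar

# Illustrative: pick a "good enough" Disneyland plan, not the perfect one.
plans = [
    ("skip lunch, ride everything", 3),  # ambitious but exhausting
    ("relaxed morning, six rides", 7),   # clears the bar; search stops here
    ("the mythical perfect day", 10),    # never even evaluated
]
good_enough = satisfice(plans, score=lambda p: p[1], threshold=6)
```

The key design point is the early return: a maximizer would pay to score every option, while the satisficer's cost scales only with how long it takes to find something acceptable.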
Corporations used interns to kill jobs before they used AI
SHEFFIELD: Yeah, and that's a good point. But this happened before AI,
so these
publications, the big city dailies, the large magazines, [00:18:00] and other news organizations, replaced their senior writers with interns about 15 years ago.
And that's why people sometimes ask, why are all my favorite writers now on Substack or Ghost or whatever?
It's because they were fired by the greedy corporations who were paying their salaries, instead of those corporations cutting executives, which is what they should have done, because media companies are just full of wasteful executives who do nothing.
Having worked for several of them, I can say that they could very easily be great examples of a flat organizational structure,
in which the senior producers of the content are the ones who do most of the management. But that's not what they do. They're stuffed with executives who produce nothing, mostly get in the way,
and have outdated knowledge. So what they did instead was say, okay, this woman is making $120,000 a year, we're gonna ax her. It doesn't matter that she's beloved by our community. It doesn't matter that she gets us these mega links every month or so. We don't care, because we're just
gonna flood the website with intern copy.
And in a lot of senses, that's what these quote-unquote big companies are using AI for: it's what they used to do with interns.
KARPF: Yeah. So Megan Greenwell used to run Deadspin before private equity destroyed Deadspin. She wrote a book that just came out a month or two ago called Bad Company, which is all about this, about private equity eating America. It's a great book, and listeners should check it out. One of the things that stands out is that when they are buying these companies, not only are they firing the good writers and replacing them with interns, they're also layering the companies with debt, wrecking them while selling them off for parts, and then just claiming bankruptcy so the company has to cover the debt. So it would be [00:20:00] one thing if they were ruthless businessmen who were making the thing actually more efficient. What they're actually doing is just socializing the risk while privatizing the gains, never understanding the industries that they're destroying, because they just see them as profit centers. And that's happening not just to journalism; Greenwell writes about how it's happening to real estate, it's happening to healthcare. This is just an absurdity of the tax code that a few very rich people have figured out how to wreck things with. And of course AI is gonna make that worse, because if all they're interested in doing is maximizing funds while exploiting vagaries of the tax code, AI will be very good for that.
It doesn't have to be that way. It's not inherent in the technology, but it's pretty predictable that that's how things are gonna end up.
SHEFFIELD: And yet on the other side, AI also has allowed people, from the bottom up, to do a lot more things than they were able to before. An immediate, obvious example is that a lot of people are now using AI-generated images for their books or their articles, people who may not have a lot of artistic talent but who have an aesthetic sense.
And if they give really detailed prompts, they have come up with some really neat stuff. Now, whether it is satisficing to an artist who's looking at that stuff and saying, oh, I can see this looks like so-and-so, and this literally ripped me off, hey, those are fair points.
But on the other hand, I think it is very arguably true that it has enabled a lot of different types of creativity from people who had no access or ability to do it before. And I think we have to say that.
KARPF: So I think that's right, and I have used AI images in my Substack occasionally. I haven't done it in a year or so because it started to feel gross. And my thinking at the time was much like what you're saying: it's a free Substack, I was not gonna pay an artist 'cause there's no money in that Substack, and I'm not an artist myself. So I would write an essay for [00:22:00] Substack and then I would think, what image fits this? And I would fiddle around with some free AI tools until I got one that was good enough. And again, this to me points out the gap between what the technologies can do and what their broad social effects are likely to be if things go the way they normally go. I think that would be basically fine if we lived in a world that funded the arts the way we funded the arts in the United States just post-New Deal, right?
If we still had a Works Progress Administration and a bunch of public funding, where people who are serious artists can make a living doing their art, and we also had a set of technologies that made it really easy for me to give a prompt and get an image, I feel like people would be pretty fine with that, right? But the crisis of AI, when we talk about AI fair use and copyright, is happening in a context of, God, how are artists going to be able to make a living in the world we were already in, and with the way this stuff is probably gonna get used? If we were to build a big safety net, then I think a lot of that stuff would end up being fine. But it's all in the backdrop of, wow, the tech billionaires, like Sam Altman, are probably gonna say a few nice words about the safety net, and then go off and violate some more copyrights while the safety net does not materialize.
And then he ends up supporting Trump because he feels like he needs to do that to get his company ahead. So in that context, I think a lot of the copyright critiques are for the actual world that we live in today and how things are likely about to get worse. But yeah, if we could fix the safety net, then hell, let's also democratize the ability to create images. That'd be fun.
AI as a technology isn't a problem, it's how it's used
SHEFFIELD: Yeah, and that's why I think the bigger challenges of AI, or we'll say of LLMs, not all AI, are not [00:24:00] what they themselves can do, because they have no volition. There is no rise of the machines coming from ChatGPT, because it has no somatic grounding. So the biggest problem is what the people who have the money use them for.
That's actually the real problem, and we're seeing a lot of bad stuff with that. Just a few weeks ago there was a piece that Ars Technica published, which we'll link in the show notes, lamenting how a lot of hiring managers were feeling overwhelmed by a deluge of AI-generated resumes and of AI-generated responses to their questions and whatnot.
And if you go into the comments of that piece, it's just filled with people saying, you know what? You assholes in HR started all of this.
This is your fault.
You are the ones who screen people based on supposed qualifications that you don't even know about. And your entire industry is based on
knowing about the process of checking accounting boxes and insurance things. That's what you do. You don't actually know anything about hiring and who's a qualified applicant. So you guys did this to yourselves, and now it's just people fighting back with the same tools as you,
the ones who started it. It was pretty hilarious reading all those comments, I thought.
KARPF: Yeah.
SHEFFIELD: Did you want to riff on that, though?
KARPF: Yeah, it's again that intersection. I have a line that I've used on the blog, and I have a book coming out next year, so it'll be in the book: the trajectory of any emerging technology bends toward money. The point there is that when we're imagining the various digital futures that could come to pass, the one that is likely to happen, if there is not regulation, if there is not public resistance, if there is not pressure to alter the course of the future, [00:26:00] is gonna bend toward wherever the money is. So yeah, we've built up these big systems where hiring managers say, how do I manage this so that I can have more efficiency and less work for myself? Under that system, what did you think was gonna happen? Of course people are going to try to figure out, if there's a game with rewards, how do I try to win it? Particularly when the stakes are: will I have a job or not? What did you think was gonna happen, guys? And yeah, there's some good comedy in it, because you thought AI was only gonna be useful for saving you time?
No, everybody else is gonna use it too. This was again one of the earliest things that critics were worrying about with AI: we're gonna use AI to produce 10 times more emails that will then get summarized by AI. That's the Red Queen's race: just moving faster and faster to stay in the same place. This is not the grand social transformation being promised. It's just more and more text that fewer and fewer people have time to read.
Escaping bad epistemology is the 'first singularity' for humanity
SHEFFIELD: Yeah. And within the technology world, there's often this vague, almost messianic notion of the technological singularity and artificial general intelligence. And look, I'm not gonna say that's never gonna happen, 'cause it probably will,
but when, who knows? The stuff we have now, that's definitely not it. But let's say it does happen at some point, fine. The first singularity that has to happen is: how do we not destroy ourselves with the shit we already have?
KARPF: Right.
SHEFFIELD: 'Cause that's the real disinformation problem.
Because I see people saying, oh, the AI is gonna be used to flood the internet with bullshit. And it's like, have you used the internet?
KARPF: Right.
SHEFFIELD: So that's the first singularity: bad epistemology on the part of humans. And a lot of that
really goes back to a failure, a refusal, to invest in not just college education but also adult [00:28:00] education. 'Cause
I think that's
where we really need a lot of money coming in, because no one has any access to this stuff once they're out of college, and most people never go to college or never get a bachelor's degree.
The government has no use for them, doesn't want to talk to them, doesn't want to help them grow, doesn't wanna help them think better. And we're really facing the problems of that, because a lot of people went to a fundamentalist Christian school, K through 12, and that was it.
And so their education taught them that Ken Ham is a real scientist, that evolution is fake, and that Jesus is coming back next week. That's what their education taught them.
KARPF: Correct.
SHEFFIELD: have nothing to say to them when they're adults and when they're voting.
And then we wonder, and we limit, oh gosh, these people are so stupid. what the hell is wrong with them? Look how smart I am. And it's like, are you that smart when you don't try to help anyone else learn? I don't think so. And like science, to me, the obligation of science not isn't just to explore, the world and to understand how it works.
It's also about telling the public how, what you've learned and why it fucking matters.
Because if you don't, then you're just building an ivory tower that the barbarians are gonna topple. That's what's gonna happen.
KARPF: So I think that's right. This reminds me of a piece that I wrote, God, 2019 or 2020, a while back, so pre-AI universe, more on misinformation and disinformation and their effects. This was for the Social Science Research Council; it was called "On Disinformation and Democratic Myths."
And one of the bits that I tried to build out there was this idea of myths in a democratic society. The reason I'm thinking of this is, [00:30:00] you're right about the public's lack of knowledge today. But also, we've never had a public that was as well informed as our democratic ideals assume. This was one of the crises of democracy that came out in, God, I think it was the thirties and forties, right when they
SHEFFIELD: Yeah.
KARPF: public polling. One of the first things that they found out was that the mass public really does not know much about public affairs. How can they live up to their democratic ideals, their democratic commitments, if they're not so well informed according to our polling?

And the argument in the piece is essentially: we've never had a well-informed public, but we did have this myth of the attentive public amongst political elites, largely because they didn't wanna get yelled at by Walter Cronkite. They imagined that if they got yelled at on the nightly news, then the People, capital P, would know, and they might toss you out of office. And that led to us having political elites who behaved as though there was a well-informed and attentive public, even if there wasn't.

And again, if we go back to the late 20-teens, when we were mostly focusing on misinformation and disinformation as opposed to AI stuff, I've actually never really seen misinformation and disinformation as a large direct threat to democracy. But the indirect threat, of elites coming to realize, oh, you can just say shit, nobody is paying attention, so you can engage in rampant graft and corruption and nothing bad will happen to you, starts to happen. Certainly during Trump one that starts to happen, where they realize there are no consequences to behaving as though the public doesn't pay attention. And that's where democracy really starts to collapse. Because we've never had a mass public that lived up to the ideals, but we did have political elites who behaved as though we did. If they have those norms that lead them to behave as though people are paying attention, you don't get a perfect democracy, but you have a barely functioning one. Now what we're seeing, and I think this accelerates with AI, is they're realizing, hey, we can really [00:32:00] just lie about things and no one will kick you out of office. Nothing bad will happen.

Now you can engage in massive corruption. People will move on. They won't pay attention, and they won't realize it to begin with. So you can just do that and no consequences will occur. And that's part of how you end up in authoritarianism, which is frankly what's happening right now.
SHEFFIELD: Yeah.
KARPF: That got dark real quick, didn't it? I just ended up going there.
SHEFFIELD: Yeah. But democracy is always inherently in a crisis, because it's based on the voices and opinions of people who aren't paying attention. That is the paradox of democracy. And I would push back the timeline on some of that discourse becoming really pervasive within American elite culture to, like, the rollout of public education by people like Horace Mann. Because he and the education reformers of that era had this idea: look, we're gonna have this system here.
We have to let at least some of the public know what's going on. We have to teach them literacy. And the world has just become so much more complex that the obligation and responsibility of the government should have increased over time. But instead, what's happened is it's decreased.
Societal elites ignored reactionism and now are shocked that it's monstrous
SHEFFIELD: And we're at this moment where so many administrators of universities and other places like that, or elite news organization executives or editors, just took everything for granted. They believed the myth of the attentive public, but also the myth of the honest politician. Which is the paradox, because the public themselves don't believe in the honest politician, but the American governmental system [00:34:00] seems to have been structured in that way.
And that's why Trump can do all these things. Because the norms don't save you. Because the norms are nothing, as it turns out.
KARPF: Or at least the norms are not self-enforcing.
SHEFFIELD: Yeah. Ultimately, yeah.
KARPF: Like, if we look at higher education, I personally find I can't be too mad at university administrators who weren't prepared for this assault on higher education, because part of the immediate reaction is: wait, higher education is one of our comparative advantages in the world. What fucking idiot would wanna destroy this? You're running the country, don't you want things to go well? And
SHEFFIELD: The answer is no.
KARPF: I think everyone should have expected that they would apply some pressure, given things that had already happened with New College in Florida and elsewhere, but not this all-out assault. In the same way, I was pretty skeptical about the fate of American democracy when Trump won the second term, but I did not think that they were gonna demolish the NIH and NSF. I didn't think that they would decide we are just not going to do cancer research anymore. I didn't think that they would be on team cancer. I didn't think they would outlaw vaccines and try to bring all that back, 'cause that's just so stupid.
But they are. And if you look at public opinion polling, 60% of the country says, we wish you were still trying to deal with cancer. But not in any way that's actually gonna take power away from them. So they will stay in power. They'll continue enriching themselves while they do things that are demonstrably awful.
SHEFFIELD: yeah.
KARPF: It's just bad. But it's also bad in a way that is so absurd that I do find myself routinely just thinking: this isn't just the worst timeline, this is the dumbest timeline.
Like, how? Why are things like this?
SHEFFIELD: And I would say that's [00:36:00] my whole shtick actually, Dave. This is a movement that is run by people who hate modernity. They hate science.
They hate independent administration. They hate competence, because all of these things are liberally biased in their viewpoint.
They want a spoils system. They want to be able to appoint their friends and have as much corruption as they want, because they think they're entitled to it. And any other institution that they cannot directly control, they think it's bad.
And in fact, you see this really a lot in the work of that Curtis Yarvin guy. He really believes that an independent public civil service is a threat to democracy. He actually believes that, in some weird, twisted way. And I guess in a very simplistic sense, that's true.
You could say that it's not directly accountable to the voters. Sure, that's true. But on the other hand, Donald Trump is doing things the voters don't want. No one asked him to do any of these things that you're talking about. Cutting the NIH, no one asked for any of these things.
So they don't apply their own critique to themselves. And it's because, again, reactionaries have dumber ideas. That's really what it comes down to. And they're mad that everyone knows it. That's really why they're rushing around to tear everything down.
Because, and I can say this as a former Republican activist myself, there's this inherent idea that people are unfair to me and to us. And they have no interest in examining their own ideas and whether they're actually provable, whether they can effect anything positive.
'Cause they don't actually have a positive agenda. Their view is: this [00:38:00] world sucks, life is unfair, and you know what, if you don't like it, get over it. And maybe I'll give you a gun. That's basically their worldview. And we saw that most particularly, I think, with Trump's COVID policies. Essentially, what the right wing literally was asking him to do was do nothing.
Let it rip, let people die. That is literally their viewpoint. And it's so ghastly and so horrible that it is almost unbelievable that your fellow humans could have this viewpoint. But they do.
KARPF: Yeah. So I've written on this a bit, and I always end up talking about it in one of my classes.
View of the world's complexity as the ultimate dividing line of politics
KARPF: I've often thought that the real dividing line isn't a clear left versus right, but a theory of politics and society where one side says the world is very simple and the other says the world is very complicated. The nice thing about believing that the world is really quite simple is that it gives way to this very comforting authoritarian messaging, or populist authoritarian messaging: the problems that you see in your lives are because crooks and idiots are in charge. Put me in power, I'll get rid of the crooks and idiots, and then everything will be fine. And the side that is standing up for liberal technocracy is saying: no, the world is very complicated. Put us in charge and we will have people with good values try very hard to make things better at the margins. Also, mistakes will be made. And also there are some crooks and idiots, and we'll manage that as we go.
But things will get a little better if we work very hard for a long period of time. And I teach political communication. That latter message sucks. It is ghastly. It's also true. But the authoritarian message, particularly as [00:40:00] the world gets objectively worse, becomes more appealing. This is one of the things that keeps me up at night: as the effects of climate change get worse, I suspect we will have more and more authoritarians seize power with their message of, no, this isn't a baked-in problem that we failed to address decades ago and now have to manage while living a reduced life as a society. It's just crooks and idiots. And we saw this with Trump a few months ago, where he was looking at California and saying, oh no, it's just 'cause the crooks and idiots didn't turn the water on. As if the LA wildfires would've gone away if they had just turned the water on. But that's very appealing.
Like, if that were true, that would be so comforting. Oh, we are not now living in a world where your homes might just be destroyed. We're just living in a world where in the future we'll turn the water on. That would be so nice. It's not fucking true.
SHEFFIELD: No.
KARPF: So that distinction stands out to me a lot.
'Cause of course we do also have people on the left who believe the world is very simple, that it's only crooks and idiots. And unfortunately they're wrong too. Sometimes the people they point to as crooks and idiots, I'm like, yes, that is a crook. Yes, I think Curtis Yarvin is a fucking idiot, and therefore I think we should make fun of him. I also don't think the entire world will be fine once we disempower the guy.
SHEFFIELD: Yeah.
KARPF: That's only one of the problems. There's a lot more, because the world is complicated. But that divide, and the unlevel playing field that results from one side getting to have more compelling messaging because they're lying both to the public and to themselves, that is, I think, one of the defining dynamics that has led us to the world we're in right now.
SHEFFIELD: Yeah, I definitely agree with that. And that to me is the first singularity: how do we overcome our own irrationality? Because we will destroy ourselves independent of whatever AI does. We will have destroyed ourselves before then.
KARPF: That's a worry.
‘Crypto is libertarian, AI is communist’
SHEFFIELD: And on that point, with this whole AI focus that the right has had [00:42:00] in general: they don't want to improve their knowledge or their ideas, they want to stupefy everyone else's.
And we see that just recently with the AI dominance plan that the Trump White House put out a little while ago, in which they claim that they want to force LLM systems, or AI systems, to be objective and free from top-down ideological bias. But then later on they also say that things talking about unconscious bias, intersectionality, and systemic racism are not neutral.
And so they want to have some top-down bias to get rid of the top-down bias, essentially. And there is a real concern that the right wing does have about AI. In fact, they've even formalized it into a slogan, where they say that crypto is libertarian, AI is communist.
And I think that phrase might be a little fascinating or confusing to people who have more progressive viewpoints and generally a negative viewpoint about AI. But there's something there, I think.
Anyway, I said a lot of stuff there, so feel free to pick whichever one you wanna start with.
KARPF: So one thing that just comes to mind: there is some obvious comedy in them saying there will be no top-down decisions about what the AI will do, and this is an order that we're giving from the top down.
SHEFFIELD: Yeah.
KARPF: It brings to mind, I recently read this book, it's called Adventure Capitalism by Raymond Craib, which is about the several-decades experience of libertarians trying to build their own cities, or their own seasteading operations.
SHEFFIELD: Oh, fun.
KARPF: And create their own societies, right? Like actual Galt's Gulches that they're just gonna go and build. And it never works out. But he's got a really great line in there where he points out that all of these libertarian paradises start by insisting everyone here will be completely [00:44:00] free, so long as they agree with the rules that we'll lay out in advance. Also, we, the libertarian billionaires who wanna build this, will be in charge of setting out those rules. It's pure freedom for everyone, so long as they follow my dictates. Man, that just means you wanna be a fucking king. This is not libertarianism in any deep way.
You just decided to be the king, and then everyone has the right to do what you said they'll do. It's the same with AI here. It's all very compelling so long as it's only at the sentence level, but if you form it into paragraphs, then it suddenly doesn't make any sense. We are going to declare from the top down that AI cannot be top down.
Okay guys, sure. That will last right up until the AI says another thing that you don't like. This is what Elon Musk has been doing with Grok for months now, where anytime it's producing a result he doesn't like, he says, oh, we're gonna go and change that. And that's how it ended up turning into racist Mecha Hitler last week. This stuff doesn't work all that well.
SHEFFIELD: Yeah.
KARPF: I wanted to get to the capitalism-communism thing as well, but I just got derailed from that.
SHEFFIELD: Oh, okay. No, go for it. Go for it. Yeah.
So crypto. Yeah, crypto is libertarian, AI is communist.
Supposedly.
KARPF: I certainly buy the libertarian politics of crypto, for one.
SHEFFIELD: yeah.
KARPF: I've seen a couple examples of how crypto in theory could be used for progressive ends. But they're always so strung out that they don't quite work out. What really comes to mind is, if you read the book The Ministry for the Future, it has got a bit in it that imagines a future where something blockchain-based ends up being useful for taxing billionaires. Though even there, I think six months later the author was like, I looked into it more.
It was Kim Stanley Robinson. He looked into it and was like, yeah, actually I don't think blockchain would really work for that. So again, all technologies have politics. And also, I don't think we should be purely tech determinist here. There's a world where you could use blockchain for non-pure-right-libertarian ends, but it's libertarian [00:46:00] technology.
SHEFFIELD: Yeah.
KARPF: I think.
SHEFFIELD: And certainly the history literally is that, yeah.
KARPF: Now, when we talk about the ideals of AI, having a system that has access to all the world's knowledge and needs to produce truth, I know you like to say that reality has a left-leaning bias. Yeah, if we had that ideal system, I think that would end up working out. What I'm less sure of is actually existing AI today. The biases of actually existing AI are mostly going to be toward the things that make money for the AI companies. Apparently GPT-5 is supposed to come out next month. I just saw a headline; we'll see. But if GPT-5 is a meaningful advance, is that going to be a step towards communism? I'm gonna guess no, because I don't think Sam Altman is gonna actually build out a technology that leads us towards communism. Sam Altman just believes in trying to make money. The guy wants there to be trillionaires, and he wants to be one of them. And that means that any tendencies it would have against that are just going to end up getting cut down. In the same way, anytime Grok ends up saying a thing that Elon Musk doesn't like, he says, we need to re-engineer this so it fits my ideals. Now, in a world where we didn't have a few tech billionaires shaping the future of their technologies, would we end up having AI that leads us towards communism? Yeah, maybe.
But in the world we have right now, I think all these technologies further empower the people who control them. The way to solve that is governments. However, look at our governments, and then I just feel depressed and feel like we're screwed.
SHEFFIELD: Yeah. And that is why I really do want to encourage European governments to invest in a lot of this stuff more. And to some extent they've done that, like France with Mistral AI,
which has done some interesting things. And obviously China's trying to do their own thing for their own purposes, and obviously they're not the good guys over there.
Is the U.S. left ignoring technology to the peril of democracy?
SHEFFIELD: This does militate toward an idea that I [00:48:00] see a lot of: technological Luddism on the left. And the problem with that is, look, I'm in favor of making sure that executives are accountable and that businesses are regulated appropriately to protect people's rights.
But at the same time, if you refuse to take part in discussions about things that will be important, and that already matter a great deal, then you are going to lose those discussions by default. Your opinion will not matter. Within lobbying in DC, there's always that phrase: if you're not at the table, you're on the menu.
KARPF: Correct.
SHEFFIELD: And that's, to me, the age that we're in. I think a lot of people on the left don't understand: this stuff is not going anywhere. LLMs are here for the future, and you have to figure out how to use them for yourself in some way. Otherwise you are gonna become obsolete, in the same way as if you refused to drive a car. Maybe not this moment right now, but
like, you have to figure out a way.
Especially if you're somebody who's trying to get a job, you should use it in a way that is helpful to you. And it sucks, maybe. You can lament that that's how it is, but why disarm yourself unilaterally?
KARPF: So I wanna push back on
SHEFFIELD: is what I would say.
KARPF: that a little bit. So the first thing I would note: Brian Merchant wrote my favorite book on the Luddites, called Blood in the Machine. It came out a couple years ago. And his articulation of Luddism is one that I think we should embrace.
The Luddite revolution was effectively about labor power, back in an era where unions were illegal. So the reason why they were smashing the machines isn't because they believed no one should use technology. It's because they correctly analyzed that a small number of owners were going to use these machines to put the workers out of jobs and instead hire child laborers.
And the workers don't have a seat at the table unless they fight back. And so they engaged in collective [00:50:00] action, and they used the tools that were available then. From that perspective, Luddism today isn't necessarily saying, hey, no one should ever touch AI. It's saying, hey, we should really be focused on, like we were talking about private equity earlier, how will actually existing power structures make use of this to make my professional or personal life better or worse?
And if I think it's worse, then I ought to be organizing and I ought to be applying pressure. That includes getting a seat at the table. But it includes getting a seat at the table to say: no, you will not fucking do this, or we will fight against you. We will counter-pressure. Related to that, and this is kind of on a different note: I got to give the commencement address this past year, not to the big university, but to my school, the GW School of Media and Public Affairs.
How to use AI responsibly as a regular person
KARPF: And I wanted to say something to my students about AI. The thing that I raised with them, and these are political communicators and journalists who were in the audience, was that the thing that bothers me about AI as it exists today, for writers, is that when you lean on AI, it leads you to sound like everybody else.
Again, that's what LLMs are for: predicting the next word, and cliches, and sounding like people normally sound. And the work of figuring out, as a writer, how to sound like yourself, that is the craft. That is the work that we're always doing. I'm still trying to do that. And that's what I worry about in this moment, and the thing that I think is a comparative advantage for people who work it out. I would encourage writers not to use LLMs, not from a Luddite, let's-reject-technology standpoint, but from a professional standpoint. Figuring out how to sound just like yourself and not like everybody else is probably, in the medium term, how you succeed. If everybody else is leaning on the thing that's gonna do the early work for them, I actually think that there's real value in rejecting that and doing the hard work of building up your skills, so there's something that helps you to stand [00:52:00] out. So for that reason, I've never used AI in my writing, and I don't think I will. Not because it can't produce an okay paragraph for me, but because I don't want my paragraphs to be just okay.
I do worry, certainly, teaching in a program that trains political communicators and journalists. I worry not that they're gonna get washed out of the market because other people are using AI and they're not. I'm worried they're gonna get washed out of the market because private equity decides, whatever AI produces is good enough, I don't wanna pay anybody. But then the people who are gonna actually get jobs are gonna be the ones who are able to do something that rises above that. So I actually think leaning on that crutch is, at least for my students, not a good first step to begin with. I'm with you: if you're not at the table, you're on the menu.
But it's a form of Luddism that actually embraces that and says, yeah, and when we get a seat at the table, we should also be real clear on what is gonna be on the menu, and fight for that.
SHEFFIELD: Yeah, I agree with that. And I would say, for students it's a different thing, because if you're in school, the whole point is to actually do the work yourself and not to have something else do it for you. But I guess I see it as more similar to a calculator in math.
Like when calculators came out, beginning with just simple arithmetic calculators and then later scientific calculators, graphing calculators, equation-solving calculators, they all were proclaimed by some math teachers to be the end of math, that no one would know how to do anything.
And that certainly has been the case in part. I think people are less able to do math in their head now than they used to be. I think that's a fact. But on the other hand, has it made people less numerate? I don't think that it has. So with technological advancements, you have to figure out how they work for you. So if you are [00:54:00] working on a calculus problem, and you're a professional, and you already know how to do calculus, and you know how to run the derivative, and you don't want to have to bother with third-order derivatives because that's a fucking waste of your time. It is a waste of your time.
Put that thing into your MATLAB and do it. Don't waste your time on that. But don't think that the computer is going to give you the original concept. So that's my general take on it: when it comes to telling your HR person what they want to hear, hey, do it. Don't waste your time trying to figure out what kind of nonsense is going on in their head.
But when it comes to having your own ideas, you need to do that work yourself.
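The calculator point above can be made concrete with a small sketch. This uses Python's SymPy as a stand-in for the MATLAB symbolic workflow mentioned in the conversation; the function being differentiated is an arbitrary example chosen for this illustration, not anything from the episode:

```python
# A professional who already understands derivatives can offload the
# mechanical grind of a third-order derivative to a symbolic engine.
import sympy as sp

x = sp.symbols("x")
f = sp.sin(x) * sp.exp(x)   # arbitrary example function

third = sp.diff(f, x, 3)    # d^3 f / dx^3, computed mechanically

# By hand this works out to 2*exp(x)*(cos(x) - sin(x)); the machine
# just saves you the bookkeeping, not the understanding.
print(sp.simplify(third))
```

The tool does the rote symbol-pushing; knowing why a third derivative answers the question remains the human's job, which is exactly the division of labor being described.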
KARPF: That again brings me back to satisficing, right? If what you need to do is a box-checking exercise, use the tools that make it very easy to check those boxes so you can focus on the other things.
What worries me, particularly in writing, and what I really have in mind since I just finished drafting this book: an awful lot of the work of the book is writing bad versions, then figuring out what is bad about them, and then making them better. If I had the AI write the bad versions for me, that early creative work of figuring out where am I getting stuck and how do I make this better wouldn't actually be occurring. This all brings to mind John Warner, who's written a really good book called More Than Words, which is about AI and writing. He previously wrote a book attacking the five-paragraph essay, saying we should destroy the thing, it's bad. So he is not saying, oh no, education is over 'cause AI can write your five-paragraph essay. He's instead talking about what we're actually doing. Writing is thinking. Let's think about what we can use this stuff well for, and also what it fills in. So if AI is a calculator, we should also figure out what are the things that we want a calculator for, and where is a calculator actively not helpful.
Student AI use in assignments is a response to larger problems
SHEFFIELD: [00:56:00] Yeah, I fully agree there. And I see a lot of teachers or professors complaining that their students are using AI to do the work. And my response to them is: then your assignments suck and you're probably not a good teacher. That's what I would say, because, again, going back to satisficing, your assignments are trash, and so you're getting trash in response. Garbage in, garbage out.
KARPF: Yes, though again, something I wanna raise in defense. I think that's right, and anytime I have that conversation, that comes up. I have tenure at a nice university, so I have the time and luxury to put in the work to make sure my assignments are good. There's an alternate universe where I didn't luck into this job, where instead I'm an adjunct teaching five or six classes every semester, and the class that I'm teaching is whatever some university needed. In that world, I'm probably not as good at teaching, and I certainly don't have the luxury to constantly readjust all my assignments. Again, at the top, there was a decision to cut off funding to higher education and just lean on adjuncts for everything, and they'll be poorly paid.
And we don't care that that means they won't have the luxury of time to make their assignments really good.
SHEFFIELD: Yeah.
KARPF: That's a trade-off that was made well above all of us. And the downstream effect is, when we then say, okay, all of you adjuncts need to radically overhaul all of your syllabi, with the low pay and no healthcare that you're giving them?
No, they're not gonna do that. So you're right. And also, that just makes me want to yell at the university administrators and the people even above them who decided, fuck it, let's just monetize the hell out of higher education and chase dollars and not make the learning experience a good one. 'Cause we
SHEFFIELD: Yeah.
KARPF: underfunded that. Of course now things are falling apart.
The center-to-left needs to stop obsessing over policy fantasies
SHEFFIELD: Yeah. No, that's a fair point. [00:58:00] And it reminds me of the last thing that I wanted to talk about here today, and that is: a few months ago you wrote a review of the book Abundance by Ezra Klein and Derek Thompson.
And you argued, and I agreed a hundred percent with what you said,
that this is a book for a different timeline,
and it's solving problems that are not the problems that we have right now. And to me, I think that's the biggest problem of neoliberalism right now. And not just neoliberalism. I would say if you look at a lot of socialist discourse, they're doing the same stuff.
They're building intricate sandcastles in the air while Trump is burning down the government.
KARPF: Yeah.
SHEFFIELD: Like, guys, this policy stuff, it's fun to think about. Look, I can do this with you all you want, but we have to stop the bleeding now too. To mix my metaphors: we have to put out the fire, and you have to be concerned about that.
That should be your number one goal here. Not saying, oh, what's this fantasy economy thing I can come up with? We don't need that.
KARPF: And to be clear, I meant it as a harsher critique of their book tour than of their book.
Books take a while. I do not blame them for writing a book, published just after Trump took office, that isn't imagining Trump taking office.
Like,
SHEFFIELD: Correct.
KARPF: that's how publishing timelines work.
SHEFFIELD: Yeah.
KARPF: It is a more interesting book in the world where Kamala Harris gets elected and Democrats have to figure out: what do we do? How do we govern? What's our broad policy agenda? So the fact that it's a book that's not imagining, oh, RFK Jr. is being the worst version of RFK Jr. and destroying scientific research and medicine, the fact of that stood out to me while reading the book. [01:00:00] I don't mind that. It's sad, but I don't blame them for it. But then, when you're doing a publicity tour, you need to explain why this is actually the perfect book for this moment.
It felt like they had to stretch so hard that they were probably pulling a muscle.
SHEFFIELD: Yeah. And so much of the discourse, even now, more than 10 years after Trump first came down his brass escalator, and it was brass, people, not gold, get it straight. More than 10 years after he came down and launched his campaign,
we're still having the same discourse in so much of the most elite center and left places. It's almost like people imagine that he's just gonna go away, and that the people who voted for him will come to their senses.
Like that phrase that Obama used, which people still believe: the fever will break.
They fucking believe this. I think a lot of them do. Or if they don't believe it, sorry, if they don't believe it,
KARPF: Yeah.
SHEFFIELD: then they just have no plan to counter it. So,
like, yeah.
KARPF: That's what I wanted to get at, because I agree with you. And the question I end up getting asked in those spaces is, okay, so what do we do? What's your plan? And my answer is some version of: I think we're fucked, at least for a little while.
I think it takes 10 or 20 years to rebuild the administrative state and undo a lot of the immediate harms being caused right now, at a minimum, unless you have a constitutional convention, which at the moment wouldn't go well, because you have 20 states that are run by Republicans. So unless we fully reformat democracy after a massive crash, which I would prefer to avoid but which seems more and more unavoidable, I can't see any way that we aren't stuck with this Supreme Court for the duration of its majority's lifetimes, [01:02:00] preventing any Democratic or progressive administration from achieving anything. They have just decided Trump gets to be king, and if you are a Democrat, then actually you're not allowed to do anything. Those are unelected kings in robes, and they're not going anywhere unless we have massive constitutional reform. And if somebody says, Dave, realistically, do you think we're going to get constitutional reform as things stand right now?
My answer is, fuck no. So I end up at some version of, yeah, I don't know, man, I think we're fucked. Which might be true, but it's not productive.
So part of the reason it's 10 years later and nobody has a good idea is that actually there aren't any good ideas, right?
I started to smile when you mentioned it, because people think he'll just go away, and actuarially, he will.
SHEFFIELD: He will.
Yeah.
KARPF: But, to be clear, I'm not advocating for anything. I will note that when the man travels abroad, a McDonald's apparently goes with him. He's now the oldest president we've ever had, I think. And a world in which the hamburgers go through his system and run their course, and we're left with JD Vance trying to run this coalition, is a world that I think goes comically badly for JD Vance, because I don't think he can pull off what Trump has been pulling off in terms of scaring people and holding things together. But you can't plan for that, for what happens after Trump has a heart attack because he eats too much red meat for his age.
SHEFFIELD: Yeah.
KARPF: If we're trying to imagine the likely paths forward, at some point in the next 20 years the hamburgers and the red meat will probably hit the guy. And there isn't a plan within the Republican Party for after Trump, in the same way that, and I'm not a Russia specialist, I don't think there's a plan for Putin's party post-Putin. So there's this question of living under an authoritarian who does not have a real second in command. At [01:04:00] some point, actuarially, that will mean something, but nobody can plan for it. It's just a thing that will happen, and then you'll react to it. That's not a plan, and I'd like to have a plan, but
SHEFFIELD: Yeah.
KARPF: that kind of is the answer: shit's going to be really bad and chaotic for a long time, in ways that it shouldn't be and that are bad for democracy. Then hopefully we work our way out after an awful lot of chaos that I wish we had avoided.
SHEFFIELD: Yeah. And I think the conclusion that the neoliberal establishment of the Democratic Party has been trying to avoid for decades is that, in a crisis of democracy and inequality, there will either be democratic socialist restraints on capital or there will be fascistic restraints on capital. There won't be any other alternative. That's it.
And so you have to accept that. Europe has proven very definitively that you can still have a business, you can still make shitloads of money and have a fancy life and get all the things you want,
but also have a society that's not being burned to the ground by religious zealots. They've shown that this works and that people can have healthcare. That's what's so bizarre about American politics: people are always saying, oh, this is not possible, it's not possible. It is, and people already did it.
KARPF: Yeah.
SHEFFIELD: Just overcome your fear a little bit. It's okay to have a little bit less.
KARPF: And it's not even fully automated luxury communism that we're actually talking about here. I sometimes think: you give me a time machine and a magic wand, let me go back to the 1980s and just either add a wealth tax or prevent a bunch of the tax cuts. Still the rest of capitalism, just with higher tax rates. And I'm pretty sure, certainly within the internet stuff, we still have an internet, all the things that we like about the internet, we still get, just with fewer tech billionaires, or tech billionaires that aren't [01:06:00] centibillionaires, and some modest restraints and regulations and competition policy. That's not even overthrowing capitalism.
That's just capitalism within Keynesian limits, and that works okay. You just have to accept those limits. And in the 1980s we stopped accepting those limits and said, what if we let markets do everything? But we're putting "markets" in quotes, because it's really just whatever a few rich guys want. We've run that experiment for forty years. It doesn't fucking work very well. You can have a little capitalism, as a treat; there just also needs to be tax policy and some regulations. Deal with it.
SHEFFIELD: Yep. Very reasonable. All right, so Dave, for people who want to keep up with your work, what are your recommendations?
KARPF: I would love for people to read my Substack. I don't love that it's on Substack, but it would be expensive to go elsewhere. So it's davekarpf.substack.com; I write once or twice a week there. And I'll have a book coming out in the spring or summer of 2026.
SHEFFIELD: All right, sounds good. Thanks for being here.
All right, so that is the program for today. I appreciate everybody joining us for the conversation, and you can always get more if you go to theoryofchange.show, where we have the video, audio, and transcript of all the episodes. And my thanks especially to everybody who is a paid subscribing member on Patreon or Substack.
Thank you very much for your support. You have unlimited access to the archives, and I am really grateful that you are doing that.
And if you can't afford a paid subscription right now, I understand that. But if you can share one of your favorite episodes with your friends or family, or post it on social media, that would be really appreciated.
Thank you very much for that as well. And if you're watching on YouTube, please click the like and subscribe button so you can get notified whenever we have a new episode or clip. Thanks very much, and I will see you next time. [01:08:00]