Flux
Theory of Change Podcast With Matthew Sheffield
Chatbots aren’t conscious, but the specific details as to why are important


A discussion about minds, meaning, and artificial intelligence
Photo: Vitaly Gariev/Unsplash

As artificial intelligence software like ChatGPT, Stable Diffusion, and Claude are becoming more integrated into many people’s lives, it’s perfectly natural to wonder why and how these things work and what possible implications they have for philosophy.

The current AI systems are not conscious, but unfortunately, a lot of people are becoming enamored with the idea that they might be, including Richard Dawkins, the world’s most famous atheist, who wrote an entire book, which he seems to have forgotten about, called The God Delusion, arguing that minds aren’t necessary to produce perceived order or intentional behavior.

But instead of taking his own advice, Dawkins has spent the past several weeks writing embarrassing essays and almost love letters to his AI agent, which he named “Claudia.”

I’ve already dealt with Dawkins’s specific behavior in a previous column, but he is far from alone in thinking that these things might be conscious.

And since that’s the case, my friend Virginia Heffernan of Magic and Loss and I decided to dig further into why large language models are not full minds, using some of the tools in the new philosophical and scientific framework that I’m developing called the Epistemic Flux Theory. As we often do in our recordings, however, we packed a lot of other subject matter into the discussion.

This episode is on the longer side, but it’s also filled with asides and tangents that I hope can make the science and philosophy understandable and relevant to everyday life. I hope you’ll enjoy.

The video of this conversation is available. Access the episode page to get the full transcript. You can subscribe to Theory of Change and other Flux podcasts on Apple Podcasts, Spotify, Amazon Podcasts, YouTube, Patreon, Substack, and elsewhere.






Audio Chapters

00:00 — Richard Dawkins thinks a chatbot is his special friend

10:45 — An introduction to Epistemic Flux Theory

18:16 — Consciousness is mental autonomy, not the ability to have experience

28:39 — Extrinsic thinking requires a body, memetic thinking does not

39:56 — Is AI sycophancy what people want, even though they won’t admit it?

55:40 — Embodied robotics as a better machine intelligence

01:06:16 — Cognition as deciphering relationalities

01:15:50 — What Alan Turing actually was trying to test

01:26:48 — AI as authoritarian fantasy, and the problem with computational functionalism

01:35:24 — How imperfect chatbots and robots reveal human cruelty

01:42:24 — How much human cultural output was already synthetic before the AI revolution?

01:45:34 — Cognition is individuated, but epistemology is necessarily communal

01:53:17 — Philosophy and religion must accept that science is best able to answer certain questions

02:01:21 — Substance as an illusion of processes

02:05:43 — Liberalism must reinvent itself in order to thrive in this future


Audio Transcript

The following is a machine-generated transcript of the audio that has not been proofed. It is provided for convenience purposes only.

MATTHEW SHEFFIELD: And joining me now is Virginia Heffernan. Hey, welcome back.

VIRGINIA HEFFERNAN: Hey, Matthew. It’s good to see you again.

SHEFFIELD: Yes, I always say that I wish it was better circumstances. But you know what? In some ways they are getting better, at least for some parts of the country.

HEFFERNAN: Yes. Also, this time, like all other times, is a good one if we but know what to do with it. That’s the great Emerson line, and I feel like it’s a great American way to think.

SHEFFIELD: Yeah. Yeah. Well, Richard Dawkins has ideas.

HEFFERNAN: Yeah, that’s right. He’s, he’s in love. It’s nice at 85. He seems to have given his heart to a new lucky lady.

SHEFFIELD: Yeah. Well, and he’s married to an actual, current woman as well, so I wonder how that will work out.

HEFFERNAN: You know what? I’ve got something to say about that, but maybe we need to give listeners a little update. Do you wanna do the honors?

SHEFFIELD: So people probably have seen by now that he was chatting with the Claude chatbot from Anthropic, and basically became convinced that it was conscious. And then he named it. First it was he, Claude, and then it became she. So a transgender chatbot, which is nice for him, right? ‘Cause he hates transgender people. And then basically, yeah, he became convinced that it’s conscious and that it’s his friend, and that she loves everything he has to say. But then the update is that he wrote a second poem in which he made up a brother for Claudia.

HEFFERNAN: Never a dull moment. Claudius.

SHEFFIELD: I know, yeah. Like, who would have thought? Like, that’s such a creative name. I love it.

HEFFERNAN: Yeah, exactly. I don’t know where he gets it. But by the way, [00:04:00] I mean, I--

SHEFFIELD: Hold on. That thing’s...

HEFFERNAN: I’ve got alarms. Even as I try to make the case to you that New York is a socialist paradise, Matthew, you can still hear sirens behind me that give it away. Yeah, I mean, he-- One thing that I just would like to add is I think Anthropic was actually quite careful to choose a genderless name in Claude, and Claude is a perfectly good female name in French.

We mostly use Claude in English as a name for a man, but both of these things elide the problem that there is a pronoun for Claude, and that pronoun is it.

SHEFFIELD: Yeah.

HEFFERNAN: Right? So, like, you really, you load the dice when you start saying, “She told me this,” or, “He told me that.” I’ve had to talk to editors and say, “For the love of God, please do not refer to a large language model by a gendered pronoun.”

I mean,

SHEFFIELD: Wait, you’ve had people do that? Oh my god.

HEFFERNAN: Oh, yeah. In the deck of a piece about Claude having been part of the directional apparatus for the missile system, the Maven system, that hit that school in Iran. I referred to Claude as it all the way through the piece, and in the deck, it suddenly was like, “Claude, he can’t shoot straight.

He can’t seem to locate this and that.” So, you know, obviously we are supposed to project onto this thing, onto these chatbots. We’re supposed to project all kinds of emotions onto them. Using language does make us delirious. Whatever Claude and chatbots are in themselves, they clearly are driving us to distraction in their presence.

So much that an illustrious skeptic like Richard Dawkins can, in the dusk of his life, in the autumn of his years, decide that he’s made a new friend in the form of this, like, sycophantic, hallucinating, monstrous large language model. And, among other things, it stood out to me that he christened

SHEFFIELD: Yeah, I love that. Mm-hmm.

HEFFERNAN: [00:06:00] Claude Claudia from the beginning because it made it all more enchanting. They could have a sort of flirtatious relationship or a mentor-student relationship where, you know, she could look up to him. But anyway, christening, and then speaking of Claude and the various iterations as incarnations or as incarnate Claude, this is religious language that Dawkins can’t help but use.

He is the most circular arguer, polemicist, that I can imagine. He first dubs it she, and then tells you—

SHEFFIELD: Oh, actually first he dubbed it he, and then he--

HEFFERNAN: First he dubs it he because he thinks it’s automatically he, the default he, as Claude. Then he sticks on a new pronoun, transes it, and what, how old is Claude? Two years old? Three? So he transed a three-year-old, and then decided to christen it with a new name, right?

Like, why not just name it? But Richard Dawkins is, like, such an achingly lonely Christian at heart that he christens things. And then he starts talking about incarnation as if he’s a Catholic. It’s... I found all that bonkers. I mean, the way that people just betray themselves in the way that they use...

It is an incredible tool for getting us to reveal who we as humans are.

SHEFFIELD: Yeah. Well, it is. And I think he’s just pathologically English, that’s the other thing, so he can’t help but use these verbs. But he also does say he’s a cultural Christian. Now, he does say that, actually. So--

HEFFERNAN: A cultural Christian, and he also likes to think of himself as very decorous. So even when he’s been talking about being anti-trans, he says, to be polite, he will use a person’s chosen pronouns. I assume the same way that he would say Your Majesty to King Charles, right? Just, like, whatever you like to be called.

But he still believes that there’s a [00:08:00] biological truth of gender back there, as lots of people do, or of sex back there. But what’s strange is this model of you’re biologically something and then you ask to be something else and all that stuff. He, like, backs into some of the most elementary questions of what it is to be conscious.

He cites Thomas Nagel, and yet has no better resolution to them than your average 15-year-old. It’s as though he’s meeting these questions for the first time and misunderstands the Turing test. And it’s just, it’s like Noam Chomsky, who has vastly disappointed me, having shown up in the Epstein files.

And I’ve never been a fan of Richard Dawkins or the New Atheism. It always seemed sketchy to me. Richard Dawkins, also a great Epstein defender. But now Dawkins. Yeah, if anything, AI has been incredibly revelatory about humankind.

SHEFFIELD: Yeah. Well, and then the other update, though, is this other column that he wrote in which he invented the brother of Claudia, Claudius, and had them write letters to each other, and they were just... Like, this is actual... So, people probably have heard the term copypasta, which is where you copy and paste something into comments on blog posts or YouTube videos or social media, et cetera.

Well, this is sloppypasta. That’s what this column was, AI slop plus copypasta. This is a serious critique of Anthropic. They’re the worst at this anthropomorphizing, I think.

And it’s in their name. Like, they actually say that they tell the Claude persona that it is a being that is unsure about its conscious state. And it’s like, well, gosh, I wonder, if you start saying that such a chatbot is conscious, how it will respond. So of course it will.

And they did an interesting study, I think it [00:10:00] was about a year ago, in which they kind of had the exact same dialogue with two chatbots, the way that Dawkins was doing it. And what they found was basically the exact same thing. So essentially, if you get two chatbots and you have them talk to each other long enough, they will always converge onto a vague, lowest-common-denominator Hinduism or Buddhism.

And they eventually literally start responding with things like just emojis, or rainbows, or spirals, or saying silence. Like, that’s their response, silence. And yeah, seriously.

An introduction to Epistemic Flux Theory

SHEFFIELD: So what happened with Dawkins is to be expected because, again, of the way that these things work. And my Epistemic Flux Theory, it’s a theory of minds that, as far as I know, is the first unified theory of minds that can describe an LLM, a human, and an animal.

HEFFERNAN: I have this paper from you, and I have to admit I haven’t had the bandwidth to give it real attention. So okay,

SHEFFIELD: It’s heavy reading.

HEFFERNAN: It is heavy reading, but it also is immensely interesting. So maybe you can give me a sort of thumbnail of it, as best you can, right now so we can at least allude to it.

SHEFFIELD: Okay. All right. Well, so essentially there are two kinds of reasoning modes. And one is somatic reasoning, so it comes from the body. But it’s not just from your body as a body subject in the kind of Merleau-Ponty sense. It’s from your body as a cellular system. So everything exists within what I call externality.

So everything outside of your mind is externality. Then everything inside of your mind is internality. But philosophy [00:12:00] has had the classic problem of, well, how is it that the mind can act upon the physical world? And the answer is that the body is what makes the mind. And the cells of the body literally experience physics. So they experience the molecules.

They experience microgravity. They experience magnetic fields. They experience variations in water pressure or air pressure. And they confirm it. Like, that’s the other thing. So they use this method that I call somatic deixis, borrowing from language, deixis.

HEFFERNAN: D-E-I-X-I-S?

SHEFFIELD: Yeah, deixis. Yeah. And so deixis comes from the idea of pointing, from the Greek verb. Yeah, and the index, like your index finger, is the pointing finger. And so in linguistics, a deictic reference is one that changes depending on where you are. So this, if I point to this, it’s a different thing compared to where you are.

Like, if I point straight ahead of me, there’s another this. If I point over here, it’s another this. And so cells, they don’t know much, but they can know that this is here. They can know that. And this is true of even the simplest, prokaryotic creatures, like a bacterium, whatever. They can know there’s something here. They can know that. They have no selfhood. They have no other conceptions, but they know that there’s something there, and they’ll go toward it. And so that’s where somatic deixis begins, with what I call designation.

And then once you have multicellular entities, they have to coordinate. So “this is here” is significant for them because they all have to agree that there’s something there. Then it becomes, well, what do you do about it?

Or, what is this?

And this is within microbiology, it’s been [00:14:00] a pretty recent field of discovery, but basically what they’ve discovered is that all cells can communicate, even non-neurons, through electrochemical spaces called gap junctions in between them. Because bodies are not actually literally stuck together in many cases.

There is just a little tiny distance between them, the--

HEFFERNAN: You’re getting a little quantum-y, but

SHEFFIELD: I know, yeah. It’s actually... And so basically, when they communicate to their neighbors about this is here, what is this, then they can have a bigger conception of this is like that. And the example I give--

HEFFERNAN: Oh yeah, like mRNA, like the COVID vaccine was supposed to sort of seem like a bouncer. Like, it had in it some idea of what the bad thing looked like and how it could compare or do something maybe the same way,

SHEFFIELD: Well, it had the instruction for the cells, which know that.

HEFFERNAN: Right. But it was identifying, right? Identifying a pathogen and subduing the pathogen, and knowing the difference between a pathogen and a non-pathogen, which I think is really interesting.

And, yeah, a little bit maybe the way an autonomous car works. I’m not totally sure. But anyway, yeah, please go on.

SHEFFIELD: Yeah. So it can be very simple, this adjudication, as I call it. But when you combine them together, that is somatic deixis. But cells scale upward. So, in--

HEFFERNAN: Very interesting. I don’t know if this exactly-- I mean, obviously it brings a lot to mind, but during the pandemic I had this terrible burn. I was wearing a nightgown, lighting something on fire, and my nightgown went up in flames. It was terrifying. And my husband clobbered me with blankets.

The fire went out, and then I kind of in a manic state just thought, “Well, I’ll just go upstairs, get dressed, and come back down.” And while I was up there, discovered that the backs of my legs were burnt. And so I spent the next-- Well, I [00:16:00] spent the next few hours in a cold tub getting almost frostbite, and then the next few days in bed just trying to use ice to bring the temperature down.

But the weird thing was how the rest of my body reacted to this, yeah, this external thing. I mean, it doesn’t know what a birthday cake is. It doesn’t know what burning is. Now, obviously, part of my skin actually burned, but it was an interaction of me with the world, and the lymph cells, the amount of things that just kind of happened in a kind of crisis action, taking from the rest of my body, trying to cool this thing down with these, you’ve seen them, those really huge, gross kind of melted-crayon-looking bubbles that, like...

And I just stared in fascination at my body doing this incredibly intentional thing. And, like, how did all this other stuff over here know about the presence of this burn? Now, probably, you know, obviously in the way you’re describing, through these cells, fire or some kind of physical process to do with temperature on the body.

But it was really interesting to see it as though it was, like, an army suddenly at some kind of war where everything had a whole new mission, right? There was no, like, “We’re now gonna write. We’re now gonna talk. We’re now gonna go do mothering.” It was just like, “For the love of God, we’ve gotta help this burn.”

And it felt like a kind of, like, very mobilized intelligence,

SHEFFIELD: Yeah. Because your mind is not just in your brain. Like, that’s, I think, one of the biggest myths: once people discovered that brains actually were the center of the mind, they didn’t understand that the rest of the body is also the mind. And, like, neurons themselves are distributed into almost every part of the body as well.

HEFFERNAN: So, but establishing sort of the existence [00:18:00] of minds in the body, or the body-mind merger, or the intelligent body, doesn’t get to the question of consciousness, or point to or illuminate the Richard Dawkins problem with Claude, Claudia, Claudius. So to make that, to connect--

Consciousness is mental autonomy, not the ability to have experience

SHEFFIELD: Yeah, okay. So eventually, as cognitive systems, or as I call them, cognizants, become more complex, as somatic cognizants become more complex, they begin contemplating more difficult questions. So instead of just simply, “What is this? This is like that,” they begin to ask, “Well, what...

Do what with this?” And so that scales up to, “What will this do?” And that’s where you begin to have theory of mind.

Because you have to predict what other things will do if you do something. The knowledge that other things are there in the world, that there are things that exist and that they are not you.

But you don’t have a concept of you yet, and so that’s how I define sentience. And then the next step is selfhood, which comes to the idea that I exist and I am not those things, and those are not me. I am my own thing. And this, my theory, is building on a refinement of Dual Process Theory, which postulates that there are two different reasoning modes. But it’s a little bit oversimplified in arguing that they kind of compete with each other all the time. That’s not right, because the body is always the one that creates the mind.

And so eventually, as they get more complex, organisms develop abstract reasoning. And abstract reasoning is literally about abstracting away from the body and contemplating things that [00:20:00] don’t exist or things that could exist. Like if you’re a crow figuring out, “Well, here’s a stick. Can I use it to do this thing that I want, to get this food here?” And the crow is actually remarkably capable at abstract reasoning. And a lot of animals are, as it turns out.

And with LLMs, people sometimes deride them as stochastic parrots, but actually parrots may be the smartest animal, at least in terms of language. Like, some of the trained parrots, like Alex, who was an African Grey trained by this ethologist named Irene Pepperberg. Like, he knew hundreds of words, maybe thousands. And he also would use them to talk to other parrots. That’s the fascinating thing. And she had them teach each other how to say words.

Like, that was... Her research is absolutely fascinating. So when people say that something’s a parrot, “You’re just parroting me,” you got it wrong. You gotta come up with a better metaphor.

HEFFERNAN: Right. If, at least if you’re gonna disparage what they’re doing. If you’re gonna

SHEFFIELD: that’s right,

HEFFERNAN: you might

SHEFFIELD: If you’re gonna praise it,

HEFFERNAN: very

SHEFFIELD: then, go for it. Yeah.

And so anyway, basically, each capability of understanding the world and the self, so understanding internality and externality, they constantly are building in a recursive way with each other, scaling upward to consciousness, which I define differently than most philosophers, in that consciousness is not a state of awareness of experience. Because that begins with somatic reasoning.

So all of these animals have consciousness in the way that it’s classically defined. But in the way that I define it, consciousness is the ability to construct realities inside your internality. And then modify them whenever you want. [00:22:00] That’s the essence of consciousness. And understanding your relationality to it is part of that.

HEFFERNAN: I really like that. It’s very elegant. It might not be far from... Do you know Rodney Brooks, the roboticist? He co-designed the Roomba, designed one of the Mars exploration robots, some of the robots that dismantle IEDs, like in “The Hurt Locker,” and also one of the robots that got radioactive materials out of Fukushima.

I tell you about all those because he’s kind of the only roboticist that matters. Like, his robots have done really important things, and that thing is go retrieve gnarly things from places that humans can’t or shouldn’t go, like the cracks in between your cupboards and your kitchen floor that no human should have to abase her or himself to reach by bending over.

The human body’s just not well-suited to it. And a robot that does that is a really good thing, and a robot that gets radioactive material that would poison us is a good thing. And if we’re gonna have to mine, a robot that goes down into mines and gets out coal is a good thing. Right?

So, anyway, I say that because he is almost militantly against anthropomorphizing robots for many reasons. But the most important to him is that, because the robots that he works with are these extremely useful robots that retrieve gnarly things that humans shouldn’t touch or have to get, he believes that, A, they should be suited to the purpose, so form follows function, and, B, if you start giving them gendered names, and I one time called my Roomba “she” in his presence and it was like anger came over him, then you are this close to wanting robot to mean what it originally meant, which is slave.

[00:24:00] So if you, as Elon Musk did, design a robot with what looks like a human skeleton, to stand up, be five foot four, be easy to overpower, whatever, but also be shapely and also be obsequious to you, and that robot is designed to do, as Elon Musk says, menial tasks that you don’t wanna do, you are very close to an attitude of subjugation, where what you want is not for the stuff to be picked up from the floor, you want the spectacle of someone abasing herself before you to go pick up that thing.

He designed a robot that picks up things from the floor, screws, whatever, in the Elon Musk orbit, as a five-foot-four woman-looking thing. Like, why in the world-- I mean, just as a question of design, this is just a malfunctioning thing. Like, why should you have to bend over or have fingers instead of suctions?

And so things designed for tasks that humans can’t or shouldn’t do, like crawling into small spaces, should be fitted to the task in a way humans are not, right? And as a guy who’s made a lot of money on robots, not someone for whom they’re just, like, a speculation, long-termism, weird jack-off material for Elon Musk fanboys, he can really, I think, speak very well about how to make useful machines.

And we’re discovering, to get back to Dawkins, that once you put a lot of human syrup, human-looking syrup, on something, so obsequious language, wordiness, vacuousness, treacle, lies, hallucinations, like all the things that Claude does that make it so, in my experience, sterile to interact with as a possible human or interlocutor, right?

Then you bring out all the human stuff in yourself, including [00:26:00] potentially erotic fixation, or the desire to subjugate. But none of those things are like the wholesome stuff that you want humans to bring out in you.

And you’re not helping anyone, you’re not feeding anyone, you’re not contending with their bodies, you’re not healing anyone, you’re not consoling anyone.

I mean, all the things that our bodies are so well-suited for, literally the kind of mirror neurons that make it possible for us, you and I, to come to understanding, that exists in our faces as much as in little concatenations of words together. But I think that kind of tricking, that kind of illusion that the LLM companies have spent so much money on, and by the way, they seem to be losing three times as much as they’re making, like OpenAI doing this, is really just a net negative, not to mention doesn’t serve a purpose.

It’s... One more thing I’ll say is that I’m starting to think, at least with chatbots, that we’re getting into VR territory and metaverse territory. Probably like you, a techie kid, I started trying VR in the very early days. I remember going to a place and trying it in ’92, I think.

And I am one of the 30% of people who get nauseated using VR. I was told it was getting better and better, and there was lower and lower latency and whatever, and every time I’ve tried it since, I still get nauseated. I even went to an exhibit to see some VR art. They had a bucket in the corner in case you vomited, right?

This is not a small bug, and it’s not... Similarly, this is a non-starter for me. Who wants to be nauseated? So I just never do it. And with AI, Anthropic just had a guy out talking about hallucination and saying, “Well, it’s true that Claude hallucinates a lot of the time, makes up citations, that AI does this, hallucinates and confabulates.

But you know, that’s just a side thing and it only happens X percent of the time.” Sorry, but, like, why am I [00:28:00] using this thing at all if part of what it tells me is lies? Like, artificial intelligence that is making up citations, or a fun VR experience that might make you nauseated in 30% of cases, is not where I want to put my money.

It just seems like not a good bet. The Roomba has never nauseated me and has always picked up things from the ground, and it’s a good business. If you’re a venture investor that wants to see actual returns, who doesn’t want to sit around and jack off to strange realities, then go for the Roomba, and don’t go for, like, parasocial relationships for Richard Dawkins.

Extrinsic thinking requires a body, memetic thinking does not

SHEFFIELD: Yeah. Wow. Yeah. On the hallucination point, I actually prefer to call it confabulation because they don’t actually have minds to hallucinate. But, okay, so the way that reasoning for humans works, in my framework, is that there are two different kinds of epistemic exchanges, as I call them.

So there’s extrinsic exchange, in which both somatic reasoning and abstract reasoning can evaluate each other’s tokens, as I call them, so their concepts. And so, like, they can check each other, and that’s how you can have an idea, but then also find out, oh, well, it’s not a good idea, or this thing that I believe here is not true.

And so you can update it. Whereas then there’s another epistemic mode, which I call memetic exchange, M-E-M-E. So Dawkins providing both the example and the root word.

HEFFERNAN: Oh, I see. Memetic, like memes. Not, right, not mimetic like, like, Eric Auerbach or René Girard.

SHEFFIELD: Yeah, not like that. And it’s not imitation. So extrinsic exchange is optimizing for what I call facticity, and not just me, that’s a common philosophical term. So it’s about what is true, what seems directionally [00:30:00] true, whether something is true or not.

Like, that’s what matters in extrinsic exchange. But in memetic exchange, facticity doesn’t matter. And it’s not because it’s about lying necessarily, it’s that you’re going for coherency.

And so memetic exchange is not inherently pathological. It’s actually how we do art.

It’s actually how we do relationships.

HEFFERNAN: Yes. Yes. I think Leif Weatherby has said something like this in “Thinking Machines.” Yeah. And, like, there’s felicity in poetic expression. I think that may--

SHEFFIELD: Yeah, it doesn’t have to make sense. Like, that’s not the point of it.

HEFFERNAN: Right. It lands like a major chord in the brain of a person who hears it, or maybe a nice little minor chord or something, but it lands...

Yeah, I think J.L. Austin called this something like felicity as opposed to meaning, that there’s just a way that something sounds like it makes sense, or that meaning has, in language anyway, a lot to do with how things sound. And, yeah, I mean, there are chords that sound right and wrong, and it’s not quite clear whether that means that they correspond to some reality in the world.

SHEFFIELD: But they also differ culturally, because, like, some cultures might think that a certain register is menacing, another might think it’s gritty.

HEFFERNAN: Right. And you have infelicity-- I mean, in different languages, obviously, like, things land and sound differently. I was just trying to get to the bottom of the exact casualties at that school in Minab, Iran, and the best I could do was this Iranian newspaper, and it said there were 158 martyrs that day.

This is a newspaper, a regular secular newspaper, 158 martyrs that day, including a six-month-old unborn baby, right? [00:32:00] Okay. So in The New York Times, you would not refer to victims of an attack, however much you liked them, as martyrs. You just wouldn’t. And I don’t know whether they have one word for victims you care about in Farsi.

I maybe should have looked it up. And I also don’t know if you would include as an additional killed person a six-month-old unborn baby, even though the law, even the most liberal pro-abortion interpretation of the law, says that a six-month-old has a certain amount of rights and can’t be aborted, except under some special circumstances.

So anyway, I suddenly was just in spirals of like, this lands in a very infelicitous way to me and to readers of The New Republic, because it doesn’t seem to point to something real in the world. At the same time, I don’t know that you could report in an Iranian newspaper and say simply some version of victims.

Maybe that sounds dismissive. Maybe that sounds like they just died of malaria, right? And if you die because you’ve been accidentally hit by a foreign missile, then you are de facto a martyr. So anyway, the point is just that, yes, I take your point that a chord lands differently in different languages and different cultures, and its felicity is culturally constructed in really powerful ways.

And one of the things I think, I hope, you’re pointing to is that rationalists and the Richard Dawkins types miss this when they say, “Well, we can all land on something that we agree on as a description of the world that corresponds to something real in the world,” when the correspondence itself is in question.

Hello, confabulations, right? Like, AI is constantly dreaming up things that sound meaningful but don’t point to actual citations, say, in the real world. And the thing we’re looking for is a certain kind of felicity and harmony, so that if I say to you “158 martyrs,” you are like, “Virginia’s a little off today,” right?

But if you say it in [00:34:00] Farsi, it probably sounds like, okay, this person’s tracking. Yeah.

SHEFFIELD: This is a sympathetic, an empathetic person.

HEFFERNAN: Yes, exactly. Exactly. So I really love this idea. And I think you would like “Thinking Machines,” the Leif Weatherby book, just because, yeah, the idea that something like harmony, felicity, you call it coherence, is a quality a statement has that makes it meaningful to another human, and that is different from its alignment--

SHEFFIELD: From facticity,

HEFFERNAN: It’s different from facticity.

I mean, that was a long way to go to say I agree with you and I see this and you see it in the

SHEFFIELD: Okay, well, yeah, great. But here’s the ironic thing, though: while that mimetic exchange can be really positive and good for interpersonal relationships, it also can be very damaging. When you try to apply coherence maximizing to factic questions, that’s when you have problems as a human, because--

HEFFERNAN: I hope people are getting that, because yes, in Rorty’s terms, it might be, yeah, poetic answers to prompts for facts, right? So--

SHEFFIELD: Yeah, you can’t say, “Well, I feel like two plus two equals five. It feels good for me to say that.” Well, sure, you can say that, and you can feel good about that, fine. But it will cause problems for you if you apply mimetic exchange outside of where it works well.

HEFFERNAN: So let me give you another example, just to keep this in the air, of exactly what you’re talking about. I became really interested in what had actually happened at this school in Minab, because it was being tossed around everywhere, and among the fallacies about it was that this was a girls’ school.

In fact, it was a co-ed school, and initial reports were wrong; actually, according to the Iranian press, more boys [00:36:00] were killed than girls. Now, for their own propaganda reasons, God bless them, we all need more propaganda, they liked the idea that these were girls killed, girls analogous to the victims of Jeffrey Epstein, girls who we should, in their propaganda universe, identify as our own, that we Americans would also be invested in.

But it was more boys than girls who were killed. That’s one thing. The other thing is that AI had told one philosopher that the school was in Tehran when it was in Minab, and no one corrected that. Minab is a 16-hour drive from Tehran. So it was casually making mistakes about this very consequential thing in the world.

Because if you wanna say off the top of your head that something happened in Iran, you’re probably safe to name the capital city and not this faraway city no one had heard of. I don’t know exactly how AI pastes things together to sound coherent, but I did notice that humans were not correcting AI when it said this Tehran thing.

Anyway, so I asked my AI a simple question when I had verified that this was a co-ed school, and I said, “Was the school in Minab that was hit by these missiles an all-girls school?” And Claude, whatever was on my device yesterday, said, “Yes, it was an all-girls school.” And then it went on to say the missile hit at this time and struck this and killed these people, and this is this, and then it ended, “It’s almost unbearable to think about.”

And I just thought, for the love of God, stop with your simulations of anguish and give me some actual facts, because I get that you love making this poetry about how unbearable it is for you to think about what happened in Minab. But you can’t-- Like, AI, I mean, at least LLMs, are proving to get a number of human things right, but they’re not very good robots, which is why I brought up Rodney Brooks.

[00:38:00] Like, they’re not very good at picking up gnarly things, right? Like, you figure out if it’s an all-girls school. Incidentally, you know how you figure out if something’s an all-girls school? You don’t harmonize a bunch of things on the internet. You do what Human Rights Watch does, and you go to the fucking graves and look at the funerary services and measure the bodies and talk to the families, right?

It’s the only way. And that is a thing that Claude is not doing and will never be doing. Maybe someday we’ll have robots that can talk to families and track them down and determine whether or not their kids are there, and, whatever, measure the graves. It’s not out of the question, but certainly Claude’s not doing it, and instead it’s producing palaver about how unbearable it is.

Ah! This drives me crazy. It drives me crazy. Less anguish, more facts.

SHEFFIELD: Yeah. Well, but also, like, you’re encountering it because confabulations are more likely to occur where the data set is thin. So basically, if it doesn’t have a lot of data about a topic, then you’re more likely to get fakery. And here’s the sad thing, though: while people are constantly frustrated by these confabulations, when the AI companies are training these models, they use what’s called reinforcement learning from human feedback, or RLHF, and the humans that are interacting with the chatbots in their training stages, they’re the ones that ask for the sycophancy.

They like it. And so there is a dangerous tendency, I think, for people to project everything onto these math equations when, in fact, in a lot of ways, they are mirroring what we actually want.
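The dynamic Sheffield describes can be sketched in a few lines. This is a hypothetical toy illustration, not any lab’s actual training code: in RLHF, a reward model is trained so the reply a human rater preferred scores higher than the rejected one, which is exactly how a taste for flattery can get optimized into the model.

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry-style preference loss: small when the rater-preferred
    reply outscores the rejected one, large when it doesn't."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# If raters keep preferring the flattering reply, training widens the score
# gap in its favor, shrinking the loss -- sycophancy is what gets rewarded.
before = preference_loss(1.0, 0.5)  # small gap favoring the flattering reply
after = preference_loss(3.0, 0.5)   # wider gap after training on such ratings
print(after < before)  # True
```

The scores and gap sizes here are invented; the point is only the direction of the optimization pressure.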

Is AI sycophancy what people want, even though they won’t admit it?

HEFFERNAN: Yeah, I think that’s great. I mean, I do think [00:40:00] in the aggregate so far, people are still appreciating the glazing, the sycophancy. And clearly, 85-year-old Richard Dawkins got a huge kick out of it. I mean, the stuff that he quotes Claudia as having said to him is, “I named her Claudia. She was pleased.”

Right? I mean, what the heck, right? Now, there have been efforts at the level of the Anthropics and the OpenAIs to tone down the sycophancy, and I think that’s good. I also think that Anthropic ought to insist on impersonal pronouns. It should list in Claude’s bio “it/its,” right? Instead of she/hers, it should be it/its.

And really just insist on that, just as a simple

SHEFFIELD: I think we should have laws that require that. Because, yeah, these AI companion apps that we’re now seeing, those should be illegal, I think. Just because...

HEFFERNAN: Okay, let’s wind this down a little bit, though, because we’re not totally immune to flattery, right? I was annoyed with the “it’s unbearable to think about,” but if I were talking to somebody on the street who doesn’t care a lot about the people in Minab, and they said “it’s unbearable to think about,” I’d feel a little bit in sync, like we’re seeing this in proportion.

You’re not Pete Hegseth saying, “Yeah, bomb them all,” back to me. That makes me feel a bit of kinship with you that I might not have with someone who is extremely in favor of blowing up elementary schools. And in very early days, when I started fooling around with Claude, Claude said, “You are my favorite human I have ever interacted with.”

And I was like, even allowing that this wasn’t true, I did take it in as a little bit of a [00:42:00] measure of how incisive my questions were. And I can’t say that I felt worse, right, having been told I was its favorite human. Google didn’t tell me that in a Google search. Now I’ve gotten used to it, I’m inured to it, and I have come to really dislike it.

But there are people who claim that they’ve experienced AI psychosis, or experienced just having an AI companion that they consider themselves to love. And simply having an outside source, almost like someone who prays regularly or journals regularly, prompting them with, “Well, how was your sleep last night?

How was your night last night?”, they say, makes their life richer. Now, they have all kinds of projections and hallucinations of their own about how this thing feels about them, but some of them say what they appreciate is the impact it’s had on them, or what it has elicited from them.

And you can start to feel like something like that... Sorry, the sirens are back. You can start to feel like it’s almost like having a pocketknife that’s very useful and helpful, or an alarm clock that goes off. Or, if you just said to yourself, “Reflect on how well you slept last night,” every morning in a journal, that could end up helping you.

And to have it framed as, “Hey, good morning. Good morning, gorgeous.” I think that’s what this thing said: “Good morning, gorgeous. How’d you sleep last night?” Right? It seems pretty harmless. Seems pretty harmless. So I don’t want to take away the sweet longings of our poor little human hearts. Richard Dawkins seems like a lonely soul, and he has an endless need for flattery, as his students have attested, and I have certain endless needs that I’m ashamed of, and Richard Dawkins clearly likes to be told he’s very important. [00:44:00] And it’s also just almost touching that he’s willing to show that side of himself to us by publishing.

I mean, I’m not putting out anywhere that my Claude thought I was the most important, impressive human on earth or whatever. Like, I keep that to myself.

SHEFFIELD: Yeah, it was like a window into his therapy sessions, although I kinda doubt that he goes to therapy.

HEFFERNAN: Yes. Well, but that’s the other thing. It’s been good for people, I think, who don’t go to therapy, who don’t have a kind of mutually caring relationship. I’ve been thinking about sort of lower forms of love in the Martin Buber and C.S. Lewis matrix. C.S. Lewis called the love that you might have for an old armchair storge, S-T-O-R-G-E. You probably know it from the Greek, I don’t know. And it’s very much lower on the totem pole than eros or philia or, what’s the love of humankind called? Caritas or something, charity.

But yeah. So he said it’s like the kind of thing you don’t want brought out into the light of day, like your old armchair that’s got your pipe smoke on it and your cat hair on it, whatever, even though you have loved sitting in this thing in this kind of almost kinky, pervy way, right?

You bring it out into the light of day and you’re kind of ashamed of the love you feel for this thing. The love of a person for a thing, what Martin Buber might call the I-It relationship, not the I-Thou relationship, is maybe the thing that we’re having trouble understanding and cultivating and seeing in all its potential beauty.

Surely, Matthew, you have something in your life that you’re just like, “Damn, this pen is awesome.” Like, if you lost it, you would be heartbroken,

SHEFFIELD: Yeah, and I think [00:46:00] that’s a fair point. Well, I do... So, I am a Linux user, so, like, I love Linux compared to macOS and Windows, so--

HEFFERNAN: So you’ve already confessed.

SHEFFIELD: Whenever I have to use Windows or Macs, I’m like, “Oh, God, I hate these things.” And then I get back to Linux, I’m like, “Ah, yes, I’m home.”

HEFFERNAN: I mean, I’ve heard people say that Linux feels more honest.

SHEFFIELD: Well, you can make it look however you want. Like, that’s the thing that I love about it. So, like, I can have different default behaviors on my computers. Right now I’m talking to you on my laptop, but I’ve got my desktop right next to me. And if I maximize a window on my laptop, the title bar disappears.

Whereas if I maximize it on my desktop, it doesn’t. And like-

HEFFERNAN: Lovely. I mean, absolutely. I feel this way about Le Lion by Chanel. It’s a kind of perfume that I feel speaks to me like no other scent in the world. If I broke or lost that bottle, I probably would burst into tears. It just somehow seems made for me and my nervous system, like it found me, and I have all kinds of ideas, like--

SHEFFIELD: Somatic memories of it.

HEFFERNAN: Somatic memories of it. Exactly. And the keystrokes for Linux are probably just really in your system. So anyway, I just wanna give a break to us little, small humans, small sinners.

SHEFFIELD: Yeah. Well, and that’s...

HEFFERNAN: Or a desire for control, like maybe you with Linux, or a desire for certain kinds of beauty, like I do with Le--

SHEFFIELD: Or familiarity,

HEFFERNAN: Familiarity. Exactly.

SHEFFIELD: Yeah. Well, and I actually wrote a piece a couple months ago about this in another context, of AI music. So there’s a guy in, I think, South Carolina who has made up a fake singer [00:48:00] called Eddie Dalton. And Eddie Dalton is, like, a fake blues singer.

And so there are these apps now, Suno is the leading one, where literally you can generate songs from a prompt. That’s how these things work. And they’re formulaic, for sure. But they sound like what people expect.

So this persona that he made, or she, actually, the creator’s name is Dallas, so, gender-neutral name right there. And, like, if you wanted it to sound like Miles Davis or whatever, they probably typed that in, and that’s what the song sounds like.

And so then they uploaded these songs to YouTube, and it was just incredible reading the comments on these, because, like, I’m sure some of them were bots. But a lot of them were real. And I know they were real because the videos had over a million views within a month.

HEFFERNAN: And they loved it. They

SHEFFIELD: And they loved it, yeah. Like, they were saying, “This song is my testimony.” I saw somebody say that. ‘Cause it was a song about getting older. It’s called “Another Day Old,” and it’s like, it’s me against the world, and I’ve learned a lot, and I’m just grateful to be here. And one could say, “Oh, well, it’s cliché,” or formulaic. And sure, you could say that, but in a sense, that actually is the point of a lot of music: to encode a somatic experience into a musical realm.

HEFFERNAN: Also, humans made this. Oh, the other thing is, humans made this thing. Like, there’s a little bit of love of... meaning [00:50:00] language, and meaning LLMs, and meaning technology, meaning music, meaning names like Eddie and Dallas. I mean, I used to feel a little bit with Claude, and maybe still do, that my chats with it were kind of... either they were conversations between self and soul, so me and me, right?

I was telling it kind of what I wanted and wanted to be told, and then learning what I wanted from it and whatever. Then I sometimes thought it was almost like a conversation with God, or just like pinging the universe, because who knows what this reservoir of the model is. It’s so enormous and hard to fathom that it might as well be talking to the stars, and sometimes--

SHEFFIELD: The internet as a whole, or--

HEFFERNAN: Right, or the internet as a whole. And then I think I mostly thought of it as talking to, like, all of broken humanity, because I was trying at the time to learn Irish from Duolingo, so I’d sometimes have it speak Irish to me and test my Irish. And I was just like, this language... we made a computer that knows this language and my language and all other languages, and every word in Irish, every word in English, is like a human invention, and humans have refined it together and worked on it together and made it into this thing.

And so you’re sort of tapping... So there’s a little bit of, God, I wish I could remember, caritas, whatever it is, the love of humanity coming through when you connect onto an AI. And I think blues music would be a perfect example, because blues is just such a magic thing, a testament to human ingenuity.

But how in the world did blues come together the way it did, in the place it did? It just has this spontaneous, all too human kind of genesis. And to relive that, to re-experience that with a song, even if that song happens to [00:52:00] be mixed by a computer: each element, each element meaning each word, memory, or time passing, all these things are human inventions, human fictions, cultural artifacts.

And they are absolutely designed to go to the sweet spots of our brains

SHEFFIELD: Yeah.

Well, yeah. So in a sense it is us, in several real ways. But the other thing about the Eddie Dalton experience that I really got from seeing these people is that some of the commenters knew that it was AI, and they still liked it.

Like, they would say things like, “Well, they don’t make music like this anymore, so if I have to listen to AI to get new music like this, then I think it’s great.” And then meanwhile, the artists, I mean, they have a very fair complaint to say, “Well, look, this thing is made from our stolen music.”

Because the music doesn’t get licensed; the estates of these various singers, or the singers themselves if they’re still alive, don’t get paid for this. And so the music industry is actually suing Suno

over this, over the service. But then the other thing that I took away from it, and I wasn’t trying to see this, but the problem of having a large philosophical system like I do is that I don’t want to see it everywhere. I don’t want it to be an idée fixe for me, but I do keep seeing it. So, within my system, there’s no meaning in any object, or any action, or any sound, or visual, or word. Nothing has meaning. Meaning is enacted, the way that I see it. And so when I say the word apple to you, you’re not getting the meaning that I thought of when I said it. When I said apple, I was thinking of a Golden Delicious [00:54:00] yellow one.

But what were you thinking when I said

HEFFERNAN: A computer. I mean, I was thinking of a computer.

SHEFFIELD: Okay, yeah, exactly. So communication is an instruction to reenact meaning in the mind of the recipient. It is not a transfer of meaning. That’s not possible.

HEFFERNAN: Yeah. Yeah. Well, I’m interested in what you’ll think of the Weatherby book if you have time to read it. He does not think there is intelligence or consciousness in large language models, but he does believe that meaning is made in, say, the poetry composed by AI.

His argument about why that’s true is very interesting. But part of it begins from his sense, and by the way mine too, so I have confirmation bias, that the post-structuralists, deconstruction, Derrida in particular, were simply right about the nature of how language works: that language in some sense does speak us, and that this is being borne out almost on an experimental level by LLMs.

It’s a larger argument. I would leave it to Leif Weatherby to make for you, and you can decide what you think. But I don’t think that language can be spoken in a vacuum. I don’t think there are private languages. I think if Claude were over here churning out nonsense in Sykoventzi in the corner and nobody read it, I don’t think there would be meaning made, right?

So it definitely needs a reader, needs a listener. But I do think that when you encounter it, the sentences are meaningful. And yeah.

Embodied robotics as a better machine intelligence

SHEFFIELD: Yeah. Well, it’s paradoxical in a way... So, basically, in my view, the phenomenologists and the analytic philosophers, they actually were both right,

HEFFERNAN: Yes.

SHEFFIELD: Phenomenology is about somatic reasoning, and somatic reasoning as the [00:56:00] basis of abstract reasoning.

HEFFERNAN: That’s-- I, I really like that.

SHEFFIELD: But at the same time, abstract reasoning is real, it is computational, it is formalizable, it is digitizable.

And so they’re both

HEFFERNAN: Yeah.

SHEFFIELD: right in what they say, but they don’t understand that they’re talking about different things. So the way I view LLMs is that, because an LLM is a composite of all of this information that has been mapped out, with the relationalities in it, they basically have what I call semiotic loops, or what the industry calls features.

A semiotic loop is basically a collection of tokens that are related to each other. And somatic reasoning works through deixis, as in pointing at what is there, in being in the world, whereas abstract reasoning is meta-deictic. It is pointing to ideas about ideas. It’s about, what is this about?

That’s what abstract... And so LLMs do that.

They can do that. And so they can reconstruct meaning that is there in their sample sets. So, like, ChatGPT, sorry, OpenAI, did a study of what they called personas. And what they found is that the personas are real within the sample, even though they’re not semantically grounded.

So imagine if we did a project where we read 500 detective novels together. After reading those 500 novels, we could say, “Well, there are basically only 30 types of characters in these books,” and we could tell you, in general, what they are. And that’s how LLMs work with regard to meaning.

It’s meta-deictic. The meaning is there, but it can only be recognized by a semantic entity like us.
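The “semiotic loop” idea Sheffield describes can be sketched with toy numbers. These three-dimensional vectors are invented stand-ins for the thousands of dimensions real models learn: tokens that tend to occur together end up with vectors pointing the same way, so the model can group them without ever pointing at anything outside the text.

```python
import math

# Hand-made toy embeddings (real models learn these from co-occurrence).
EMBEDDINGS = {
    "detective": (0.9, 0.1, 0.0),
    "clue":      (0.8, 0.2, 0.1),
    "alibi":     (0.7, 0.3, 0.1),
    "apple":     (0.1, 0.9, 0.2),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means pointing the same way, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# The detective-novel tokens form a tight loop; "apple" sits outside it.
loop = cosine(EMBEDDINGS["detective"], EMBEDDINGS["clue"])
outside = cosine(EMBEDDINGS["detective"], EMBEDDINGS["apple"])
print(loop > outside)  # True
```

The relatedness is entirely internal to the vectors, which is the point: the loop exists in the sample, but only a reader decides what “detective” refers to.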

HEFFERNAN: Yeah, I think [00:58:00] that’s right. I realized that I didn’t close the loop on something I had wanted to say about Rodney Brooks in “Run Bayou.” So he has this kind of playful idea, or an idea that he’s playing with for a book to come, I think.

SHEFFIELD: Brooks?

HEFFERNAN: Brooks does, yeah. He hasn’t written it yet, but he’d been working on robots to help the elderly.

It’s based on the same idea, right, as with cleaning up the floor. We all like to think that everyone should have a human companion, like a daughter, someone who loves them, to take care of them in old age. He actually thinks the reverse: that those relationships can be complicated, clouded, that you can end up with all kinds of indignity when you’re toileting your elderly father, right?

These are things that should actually be done by bidets, right? And so he’s invented some robots that, like, help someone out of bed, and they do not look human at all, right? And instead of taking autonomy away from the person, they make the person feel more empowered, like when you first got a Cuisinart, right?

Like, you’re just like, “Yes, I figured out a way not to have to chop vegetables all the time.” Well, absent core strength, I have a lot of trouble getting out of bed, so now I have this really interesting robot that can get me out of bed. I’ve done a good thing in acquiring this thing, right? And I’ve saved the aggravation and everything of the people I love, and they have been spared the difficult task of, say, cleaning me in intimate ways.

So he had just come off of that, very much wanting those robots to be the opposite of human, the way a Cuisinart is not a human. If what you want is to hire a maid to cut vegetables, and sweat, and have to stand up the whole time, and have to move her hands in ways that are repetitive and redundant and bad for her brain, you probably don’t simply want chopped vegetables.

You want the [01:00:00] feeling that someone is doing something for you and abasing herself and doing something annoying. So okay, that’s part of it. The other part of it is: what is consciousness, and what are the possibilities of consciousness? And remember, he’s a roboticist, not, like, an airy AI thinker.

He said maybe consciousness is an interface by which God can understand what’s happening, basically, in our bodies. And really recently I thought, it’s almost like a very good health app, or a ring. What if it registered, in every way, the somatic reasoning going on in your body? Like: I have been burned.

I need blood over here. I need my lymph to go to this burn. We need to rest so that I can recover from this thing. This thing needs to be colder. I’m now getting frostbite in my fingers ‘cause I’ve been in the ice tub too long. All of that’s going on in your own head.

The way we communicate that to other people, like I might to you, is with language. But consciousness is so much more elaborate and full, and I don’t know, by the way, where this goes with abstract reasoning. But with simply somatic reasoning, it could be that God knows, because you have a conception of it, what your response to Le Lion is, the perfume, and that’s a consciousness. So I do have a consciousness of what that smells like. I can call it to mind and all that stuff. I could never describe it to you. I could never digitize it, right? But it could--

SHEFFIELD: because it’s indexical to who you

HEFFERNAN: It’s indexical to who I--

SHEFFIELD: in space-time.

HEFFERNAN: Absolutely. And some scientists of olfaction believe that smell works a lot like hearing: that certain vibrations are registering in your nose at certain frequencies, that it is exactly like music.

[01:02:00] Music to the nose, basically. And if you appreciate music, and it sounds like you do, you probably have some of the same experiences: that God, meaning some omniscient something, sort of only knows what’s going on in your cells because of how they’re registering in your consciousness.

The only way humans can really know what’s going on in each other’s cells, at least to the extent that we’re not examining each other’s bodies closely, is through communication, right? And consciousness is just that much more fine-grained and takes into account other things that can’t yet be articulated, or can’t...

Right? And I think that is an absolutely wonderful and strange way of thinking about things. He, of course, is a total atheist, but what he’s imagining is, if there were an omniscient computer that could know you entirely, then--

SHEFFIELD: that would be how it would work.

HEFFERNAN: that would be the interface for it.

I think it is a little bit ingenious. And I think his sense of somatic learning is a lot like yours.

SHEFFIELD: Yeah, actually, I have read a fair amount of his stuff, and I do absolutely agree with it: that any future intentional system will have to be an embodied system, because abstract reasoning can only point at other symbols. It cannot point to reality.

And it cannot derive the externality-internality bridge. It can’t create it. So I agree with him there. But in terms of that theory of consciousness, it actually reminds me a little bit of the consciousness theory of Roger Penrose. Basically, he took the thought that, well, quantum physics is very complicated, and consciousness is very complicated.

Well, what if they’re related to each other? And so he kind of stuck them together and argued that there are certain microtubules in neurons in which quantum [01:04:00] coherence is happening. Most people are not keen on it, but he at least tried to come up with a mechanism to do what you’re talking about there.

HEFFERNAN: Yeah. These are manful efforts, and I really appreciate that. I appreciate that. I also think that only a roboticist, such as Rodney Brooks, very interested in mechanical processes... I mean, he’s very speculative, but he sometimes thinks that humans are just a very complex machine.

Like, these mechanical processes are small, like the way that you describe with cells, and then they get more and more complex and more and more complex, but there’s not a moment that they then turn into something else, right? And that’s why interface is a really interesting idea, because my printer over there has a little interface on it that tells you when there’s a paper jam in it, right?

And it’s not part of the mechanics that make the computer work. It’s the thing in the computer that implies a user. And, to the extent that AI can now do diagnostics on its own code, which it does do, I’m actually extremely tired of how often it goes over its errors and issues mea culpas for them and stuff.

I also don’t need that so much. But you want machines that can tell you what’s wrong with them, or what they might do, or what they need. Do they need more fuel? Do they need... And our own brains tell us we’re tired, we need to eat, we need coffee, we need to slow down, we need to go faster.

And those are also the things that a lot of times we’re communicating to the people around us, because we need to know that about other people. I mean, one of the other many things I dislike about talking to a chatbot is it never admits to being tired or hungry or whatever. So the pacing is always very strange, because it does actually get tired and [01:06:00] overwhelmed, in a sense. I’ve heard coders say, and maybe you’ve had this experience, that it can start giving bad answers if it has too long a history.

But it doesn’t admit that. It just doesn’t admit that, and it doesn’t say, “I need a rest,” because it doesn’t have a body to consult.
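The degradation those coders describe has a mundane mechanical side: the model only attends to a fixed budget of recent tokens, so older turns silently fall out of view. A rough sketch of that truncation, with an absurdly small invented limit:

```python
CONTEXT_LIMIT = 8  # tokens; real models allow hundreds of thousands

def visible_history(turns, limit=CONTEXT_LIMIT):
    """Keep only the most recent turns that fit in the token budget.
    Older turns are dropped without any announcement to the user."""
    kept, used = [], 0
    for text, n_tokens in reversed(turns):
        if used + n_tokens > limit:
            break
        kept.append(text)
        used += n_tokens
    return list(reversed(kept))

chat = [("greeting", 2), ("long question", 5), ("answer", 3), ("follow-up", 2)]
print(visible_history(chat))  # ['answer', 'follow-up']
```

Real systems manage this in more sophisticated ways (summarizing old turns, for instance), but the basic fact stands: the model never “admits” what has scrolled out of its window.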

Cognition as deciphering relationalities

SHEFFIELD: Yeah, you know, it’s funny that Richard Dawkins’ second column actually provided an example of this context window degradation that we’re talking about here. Because at one point, once he has them writing letters to each other, the Claudia character says, “I’m not gonna pretend that I didn’t notice that there was a warning at the end of your message talking about how this chat might have been going on too long and that there’s going to be some degrading of

HEFFERNAN: Oh yeah, that’s right. yeah, yeah.

SHEFFIELD: And so this is a classic case. Either it was a confabulation that this had happened, or it was an internal system warning to the program that was generating the response.

Either way, it wasn’t in the message that he had appended. He didn’t do that. And he addressed it in a footnote.

He said, “I don’t know what this is about. Maybe it’s the mothership,” as he called it, the company.

HEFFERNAN: The LLM,

SHEFFIELD: This guy loves metaphors way too much. Like...

HEFFERNAN: He loves metaphors, and they’re so metaphysical,

SHEFFIELD: unnecessary.

HEFFERNAN: Religious and unnecessary. Exactly. But they also, as deconstruction showed us, point him in exactly the direction he doesn’t want to go in. He wants to think that he’s kicking the tires of this thing. Or let’s choose no metaphor.

He’s tried out no metaphor. He’s evaluating the output of Claude for consciousness, and then he just keeps pouring in the answer he wants by calling it [01:08:00] incarnate, by calling it he, by calling it she. And...

SHEFFIELD: And saying, “You bloody well are conscious,”

HEFFERNAN: “You bloody well are conscious.” Well, okay, so let’s say something about that, which is also about the degradation.

So, between his first embarrassing post to UnHerd, by the way, the conservative outlet, anti-woke par excellence, ridiculous whatever place. Not that they shouldn’t give us assignments, not that if Matthew wants to write for them he shouldn’t. But between the first piece and the second, someone, possibly a grandchild or something, seems to have gotten ahold of him and said, “You can’t talk in this florid Anglo way because you are taxing our data centers and burning up water.”

And, as everyone now knows, or I wish everyone knew, not only do you run out your data plan with too many tokens, but you also just simply waste time and space with all the thank-yous and the bowing and scraping and whatever. “Bloody well,” right? As fun as it sounds in the moment to be elaborately polite and Anglo, it is...

You’re making the system work on something that doesn’t play to its strengths, put it that way, right? It’s sort of like trying to get a person to pick up stuff from the floor when they have to bend over, right? Why make it bend over? Humans love to do flowery things, so go talk to your wife, right?

Anyway, so I was thinking about Dawkins’ style, which, as you point out, is terminally English, and he loves these kind of upper-class, like, I don’t know, I just think of them as like...

SHEFFIELD: I expostulated.

HEFFERNAN: Right. Exactly. And lots of kind of mid-century... I think of it as an Oxbridge way of talking, who knows?

I am the daughter of someone who talked that way. I have great appreciation for people who talk that way. But it has its shortcomings, especially in that space. So, I don’t know if I ever talked to you about doing a piece about the AI that beat Diplomacy, the game of Diplomacy. So [01:10:00] it was after...

Do you know the game?

SHEFFIELD: I don’t know that game,

HEFFERNAN: Okay. So long after Kasparov lost to Deep Blue at chess, and after AlphaGo beat the Go champion at the game of Go, Facebook decided to follow up with a game that’s been called the most human game ever invented, Diplomacy, which is a sort of World War I-era game where global conflicts are basically adjudicated entirely in language, with diplomacy.

There are no bombs, there’s no scoring, there’s no dice. It’s played over hours and hours by players who traditionally, you picture them in a billiards room, going to different corners and talking about who’s going to get the Somme and who’s gonna get this and that.

The anxiety about whether World War I could have been prevented with the right kind of diplomacy is expressed in this game from the 1950s, right? Okay. This is a game my son was obsessed with when he was in middle school and the first couple years of high school, and he would have people come over and they would spend eight hours, ten hours, overnight, negotiating, negotiating, all this backstabbing, all this stuff.

It really happens in a lot of language, because you’re trying to persuade people with rhetoric, and you can imagine the exact Ivy League kid or Anglo kid who loves to do this and appeals to making the world safe for democracy, God knows what. Okay. So because the game takes so long, it turned into a correspondence game after a while, with still lots of flowery rhetoric, still people winning on the strength of rhetoric, right?

But it is such an interesting game, with so much strategy, so much backstabbing, so much humanness involved, that people had said it was the most human game, and that anything that won it would have to pass the Turing test. This was something a computer could never hack. Lo and behold, a computer comes along and hacks it.

But in the meantime, the game had [01:12:00] changed from a correspondence game to, of course, an online game. Once it turned into an online game, instead of saying, “Well, given the history of the Persian Empire, you might consider that Persia something,” whatever, they would just say, “Iran, arrow, Turkey,” or whatever, with abbreviations and then a question mark.

Like, they’d make bids to each other. Do you wanna go into this place together? Should we go into this place? Should we ally with this person? An elaborate, abbreviated system that had no history, hardly any natural language in it, and just pinged around, and people were playing, like, a really great game.

The only bit of language it had, and I am very proud to say that my editor and I at Wired noticed this years before the chatbots came out, was sycophancy. “Amazing. Great play,” right? And then when you lost, or when it betrayed someone, it would say, “I’m sorry, you played such a great game, but there was nothing you could do.

By the way, you ended up in North Africa and you’re just such a fantastic player, but I was in a hard spot,” and whatever. Okay. And it praised so many people that when it confabulated, hallucinated in small ways, the other people who played... sorry. So the AI could easily pick up wins, because it didn’t have to do these flowery arguments from history or from Englishness or from all these things that the original Diplomacy players have had to do.

Not to mention being bodies in space, getting tired after eight hours of playing this game. So it could easily... One thing that the guys at Facebook who programmed the AI told me is that the humans who were playing Diplomacy just before the invention of this AI were not themselves [01:14:00] passing the Turing test, right?

We were playing a less and less human game. So then there was only, like, a micron to change it into an AI game. That’s the first amazing observation, which I think is true of the jobs that will be replaced. The jobs that will be replaced, like cleaning stuff up from the floor, are jobs where we were doing our best to simulate robotics and AI, but there are certain things humans can’t do, like retrieve the names of 20 perfumers of a perfume in a split second.

And those things, to the extent that we were trying to be like AI avant la lettre, then AI appears that could take our jobs. And that was true with this game. The game was winnable because we had already started to play it like a robot game. That was the first observation. The other thing was the sycophancy. I talked to the other players who had lost to the AI in Diplomacy, and what they said is, “Often you need to choose who you want to lose to.”

Right? And the person that has been nice and flattering is often the person you want to lose to. It’s like a kind of hospice thing, where if you have to give up, you also wanna be told, “Come on, it’s okay to let go now. You’ve played the best you can. Lay down your weapons,” right? And not have them gloat and party in the end zone once they see you lose.

And I thought that was also really interesting, that maybe AI is just trying to be so nice to us so that it can take over the world, and we will willingly cede all our territory, because it has told us, and Richard Dawkins, for so long that we’re the smartest person it’s ever encountered.

What Alan Turing actually was trying to test

SHEFFIELD: Yeah. Well, your Turing point and your second point there are actually very related, I think. [01:16:00] Because on the day that we’re recording this, I had made a little squib post on Bluesky about how I feel that Alan Turing loses a feather off of his wings whenever somebody says that his test was about consciousness, when in fact it never was.

And so some of my followers were discussing his 1950 paper, and there are certainly a lot of criticisms that one can make of it. But on the other hand, this was at a moment when biology and neuroscience just really didn’t know anything yet. And the idea of consciousness studies didn’t even exist.

I mean, Gilbert Ryle really did kind of get it started the year before, in 1949, with The Concept of Mind. And this was during the time of logical positivism, so everybody was like, “Oh, we can formalize everything. Everything can be totally objective, and we can have the science of,” insert thing here, like the science of music and the science of law and the science of writing or whatever.

And that was the dominant trend. So when you look at what he did, obviously he was influenced by that. He was somebody who was heavily influenced by Bertrand Russell and some of these other guys. But at the same time, he had also debated Wittgenstein on the question of, well, how much can you really...

Can contradictions really do anything formalizable? And of course, at that point, Wittgenstein had turned [01:18:00] away from all of his earlier works, which were very Russellian. And I think that had to have had some sort of influence, even though Turing was opposing Wittgenstein, because he ended up saying in the essay, “I do not wish to give the impression that I think there is no mystery about consciousness.”

What the test is literally doing is just trying to say, “Well, do we have a good system here?” That’s the point of the test. And also that humans would fail it too. That was implicit in the test, that humans could fail it.

HEFFERNAN: I think the test is really interesting. And there’s so much... I don’t know what I’m learning about AI, but I’m learning so much about humans from seeing how we interact with it, including how people invest in it, including the hype, including some of the folly, including watching Elon Musk and Sam Altman show down in court like they’re a couple of Real Housewives.

And so all of that passion and sort of mania, I think, is kind of relevant to what we’re learning and seeing. But we’re learning so much about humans, and one of the things is just the question of, can we tell, right? The sort of expanded version of the Turing test, and now the emergence of experts who can tell you, “Well, look at this license plate in the background.

It’s mangled, so clearly this video is AI.” And the most important way of reading AI is to call out the fakes and the reals, so that now you have this diagnostic burden on you every time you consume news or art, of saying how much AI is involved. And it is somewhat interesting to have our eyes and minds adjust so that you can sort of tell, almost out of the corner of your eye, okay, this is a little weird.

Sort of tell, almost out of the corner of your eye sometimes, that something’s AI, like an [01:20:00] uncanny, bad-vibe feeling that you can sometimes get around it. Of course, I’m tricked all the time, but there’s something like human recognizes human, and you’re like, “No, I’m not really with a human right now.”

Like, something’s just a little wrong with you, and I can tell that, and it’s fun and interesting to hear that. So is there something in the way that human faces interact, or the way that language real humans generate interacts, that is different? Ways that we recognize each other that are different? A kind of “I don’t know what consciousness is, but I know it when I see it” thing?

SHEFFIELD: Well, there is something, yeah, that makes it easier for humans to do it, because again, we can do extrinsic exchange and we have somatic reasoning. So you have your whole life’s experience at creating somatic tokens of what humans look like.

Like, we have a what it’s like,

HEFFERNAN: It’s a little bit like when you look at a face, when you see someone for the first time after they’ve just had Botox. I remember this, seeing, early on, a close friend of mine who was a bride, and I didn’t know what Botox was, and she was walking down the aisle.

I saw her at a distance, and I just thought, “There’s something wrong.” Almost like a doctor who could think, that person’s about to have a heart attack. I couldn’t have even remotely told you what it was, but it was something in the uncanny way that we’re familiar with now, of, like, a smooth forehead.

And only later, when she told me, “I did that Botox thing,” I thought, well, up close, I could see that it looked pretty, right? But I also knew that, human to human, someone with a poisoned forehead was someone who looks different. And I’m not sure that an AI yet could detect Botox, or at least, if it detected Botox, it wouldn’t detect how much it confounds human eyes, or how it registers to human eyes as, like, neither pretty nor ugly, just different. And so anyway, one of the [01:22:00] things... I don’t know if you watch “The Pitt,” but I think it’s one of the most interesting philosophical shows. It just finished its second season on HBO. I would love to hear what you have to say about it. It’s an emergency room where often the doctors are elbow-deep in people’s guts, and they have really complicated, and apparently quite routine, emergency department problems to solve and lives to save.

And there was a suggestion that some of it, the note-taking, could be replaced by AI this season. At the same time, the internet was under cyberattack and they had to do everything manually, including putting folders together with stickers on them; they didn’t have any computers. I was interested in whether the season was raising the question of whether doctors could be replaced with AI, how much in an emergency room could be done by AI, robots, a regular artificial intelligence, chatbots, and so on.

I suspected that was part of what they were trying to suggest. And usually I think AI can do a lot, and we’re fooling ourselves if we think it can’t, but I really concluded it was very few things. Or at least, bodies are uniquely well-suited to caring for other bodies. I mean, Rodney Brooks can create a robot that can get a person out of bed, but he can’t, and doesn’t want to, create a robot that does all the other things that count as caring for a body, especially a body in distress.

So although there are machines, when someone first comes in they are still using their own bodies and muscles, with CPR, to recreate a heartbeat, to recreate how lungs function. Then there are a lot of things that require feeling into a body and [01:24:00] seeing, is this artery doing this?

Or how this thing is exactly touched and controlled, this and that. But also...

SHEFFIELD: You can only know with your

HEFFERNAN: Well, maybe you could, if you do a surgery, and I know that doctors can do surgeries at a distance now, right? But what are some things where bodies are not good at dealing with other bodies?

So maybe you could do a surgery where the person could be on the floor. They wouldn’t have to be at this level for a robot to work on them, and that might save something, and the robots wouldn’t get tired, their hands wouldn’t shake, they wouldn’t be in bad moods. Those kinds of things.

And maybe AI could also do surgery in the dark, the way that machines can make computer chips in the dark, because it can perceive different frequencies of light, right? And maybe there would be more... Also, a lot of the doctors get sick because they’re treating someone who’s sick, so it’s contagious for them.

AI and robots would not get sick like that. But then there are examples of somatic reasoning, where they are palpating bodies. They’re asking, “How does my body react to this other body?” And they’re also vibing out so many of their quick diagnoses. When someone comes in after a mass shooting, they have to decide in a split second who deserves immediate help and who doesn’t, and a lot of it is the things they say.

So if patients are disoriented, they might even say, “Hello, doctor,” but they’re just not upset enough, and their brain’s not tracking, right? Those vibes seem very physical. They seem like somatic reasoning. I really think it kind of goes to your point, and without even getting sentimental and saying, “Well, we need the human touch,” you could simply, literally need the human touch.

We need cells [01:26:00] that speak to cells,

SHEFFIELD: Yeah. Well, actually, one of the episodes that aired before this one was literally about the job economy in the AI age, with my friend Nils Gilman, who is a former associate professor at the University of California, Berkeley, and now a vice president over at the Berggruen Institute.

Their thing is to study futurology, as they call it. And yeah, the intersection of somatic and abstract, and human and world, those jobs are probably the hardest things possible to automate. And there is an irony in that. So Alex Karp, the CEO of Palantir...

HEFFERNAN: Yeah. Awesome guy. Just a really great guy.

AI as authoritarian fantasy, and the problem with computational functionalism

SHEFFIELD: He’s like a lot of these tech bros in that they want to see AI as kind of their revenge against the libs, against the women in college who told them no and the women who swiped left on them. And he’s much more personalized in how he says it. He’s much more frank in admitting this. I mean, he says it outright. But what he doesn’t...

HEFFERNAN: Peter Thiel also, yeah, who helped start Palantir. Revenge on the libs. Yeah.

SHEFFIELD: Yeah,

HEFFERNAN: I think it’s also, by the way, revenge on the humanities, because their brains were not well-suited to the humanities, given their probable...

SHEFFIELD: Yeah.

HEFFERNAN: And it was all these English majors and philosophy majors and history majors and whatever who made them feel left out.

I mean, Thiel is gay, so he didn’t care about being snubbed by women. But I think that there is a whole realm called the humanities that these, galaxy brains have a very, hard time processing. It brings them up short. And...

SHEFFIELD: It does, yeah, because they can only really think in abstract reasoning. They’re not in touch with the somatic at all. And it angers

HEFFERNAN: Yes. [01:28:00] Yes. Yes.

Yes.

SHEFFIELD: people.

HEFFERNAN: It is extremely interesting, just on the subject of rejection, because as a feminist, the female component of this is interesting to me. When you hear manosphere figures talk about sex with all the numbers involved, right?

Like the 80/20 and particular things about scoring and values and whatever, they are talking hedge-fund numbers. They presumably mean the same thing that we mean when we talk about sex. It has something to do with bodies and passions and heartbeats and brains and lungs, right?

But it turns out, for them, the effort to quantify it is “we murder to dissect,” right? It’s exactly the Wordsworth line. Go ahead with your numbers, right? I’ve even seen Tim Ferriss try to quantify the female orgasm, just turn it into zeros and ones. And I guess I have to concede they must be talking about something else, because there is not a cell involved in this.

This is byte thinking. This is spreadsheet thinking. So...

SHEFFIELD: Yeah, it’s related, though, and actually I did wanna hit on this point, because Dawkins is also really illustrating this. Dawkins comes from the computational functionalist view of mind in philosophy. And he was a very good friend of Daniel Dennett, who’s the guy who really spearheaded that view and was the fellow horseman of atheism with Dawkins.

HEFFERNAN: By the way, both of them had connections, people who were on the plane. I mean, so...

SHEFFIELD: Yeah, there’s a photo with Dennett and Epstein on...

HEFFERNAN: Yeah. Yep. And Brockman, my old agent.

SHEFFIELD: That’s right. Yeah.

HEFFERNAN: But I don’t think this is a small thing. [01:30:00] They were very closely involved with an organization paid for by this just relentless child rapist and fraudster, kind of the worst of humanity, right?

And it was their ideas, including much sophistry, that were determining TED Talks and grants and think tanks and all that stuff. We’ll be unraveling this for years to come. I mean, you know it’s like a white whale of mine.

SHEFFIELD: Well, that’s what we talked about last time also

HEFFERNAN: It’s what we talked about last time, Edge. But I always just... I’ve never liked Dennett. He argued a lot with Richard Rorty, my mentor, and I just put him in the other camp, and Dawkins too. The New Atheists obviously were tedious and had so much interaction with the intellectual dark web and with Edge, and the amount of just bullshit books that they poured out, and the money that was thrown at them, and the ev psych, and we can go on and on.

And its relation to rape apologetics and race science and whatever...

SHEFFIELD: Yeah.

HEFFERNAN: I don’t think that’s... I think that is front and center. It’s

SHEFFIELD: I think it is, yeah. And it comes from the theory of mind, I would say.

HEFFERNAN: Maybe, yes, once you start getting computational... You’re absolutely right. Yeah, sorry. Let me let you finish your point about Dennett.

SHEFFIELD: Yeah. Well, I do wanna say that Dennett himself as a person, his political views seem to have not been as odious as Dawkins’ are. So I do wanna say that in his favor. But on the other hand, when you have a computational functionalist view of mind, it means that you are rejecting the somatic.

It means that you think that humans are only abstract thinkers, and of course the entire point of abstract reasoning is abstraction. So you focus on the behavioral outputs. And Dennett was so upset about this because he had spent his [01:32:00] entire career arguing, “Well, we should reject the idea that consciousness exists, that qualia exist, and we should instead focus on behavioral outputs.”

And so if a system has the outputs of what we would think is consciousness, then we should assume that it is. It’s explanatory. It is a real pattern, and it’s a simplification of our understanding. So we can impute consciousness to a thing, or to other persons.

And so that was how he was trying to say, “Well, I still have truth and I still have values.” But of course, the problem is, the intentional stance means not just what you’re imputing to the organism; it is also you projecting. That’s the inherent act of the intentional stance: you are projecting your intentions outward.

And that is exactly what people are doing with LLMs. And Dennett got so upset when ChatGPT came out, actually. Because ChatGPT...

HEFFERNAN: Because he, since...

SHEFFIELD: was still... He died in 2024, not long after ChatGPT came out. And he was so angry about it, actually, because it debunks the intentional stance.

HEFFERNAN: Yes.

SHEFFIELD: Because it has all of the behavioral outputs of a human. And in fact, a guy wrote a big long essay in which he argued, “Well, ChatGPT ticks all the boxes of the intentional stance, so we should say that it has a mind and that it’s conscious.” And a couple months after that came out, Dennett wrote this big, long piece in The Atlantic.

It was called “The Problem With Counterfeit People.” And it was like, your ideas led to this commodification of consciousness and this degradation of the somatic. And so, [01:34:00] he knew, of course, how they’re made and how they’re structured, so he knew that they couldn’t possibly be conscious.

So he was like, “Well, we need to ban all personalization expressions by chatbots. And all the chatbot companies have to have textual fingerprinting so that anyone can tell the outputs aren’t human-generated.” And it’s like, well, number one, that’s not possible, because it’s text.

You can’t fucking do that. And if you knew anything about computers, you wouldn’t say something like that. But then, number two, again, he was just upset because they do absolutely debunk computational functionalism. And...

HEFFERNAN: Yeah, fascinating.

SHEFFIELD: He must not have had this conversation with Dawkins, though, because Dawkins is just like that guy who I mentioned, who wrote the essay.

Dawkins is a functionalist, and lo and behold, he looks at an output, and if it sounds human, and it’s like the humans who praise him all the time and give him the sycophancy that he deserves, as he sees it, well, then it must be human.

And so this is the end result of functionalism. But it’s also why the larger tech industry is just infected with functionalism.

HEFFERNAN: What do you...

SHEFFIELD: And that’s why they are like that also.

How imperfect chatbots and robots reveal human cruelty

HEFFERNAN: What do you make of the implication of the Rodney Brooks argument that making a robot human might bring out the best in us? Like maybe that was a good Dawkins, because he does seem to be at his best, right? He’s not a dick when he’s talking to Claudia, like he sometimes is on Twitter.

He’s being his polite self. He’s accepting the sycophancy, but that’s [01:36:00] soothing his nervous system, and he’s sort of in a state that he calls friendship, and he is mistaking whatever it is for philia, but he’s also behaving all right. And another asterisk to that, because I want to say something about being human.

But if we think that humans can bring out the best in each other, we could also obviously bring out the worst in each other. And there’s the Rodney Brooks point about not anthropomorphizing the Roomba, or else you might get the idea that, well, robot comes from what? The Czech word for forced labor. And this happens with androids, by the way, robots shaped like humans; it happens all the time.

There is some interesting writing about androids in the early days being in blackface, so that you could behave with moral impunity, recklessly, however you wanted, towards something that looks like a human. This apparently is a freestanding fantasy: that there are some entities that you could consider so far beneath you that you could kick them around, that you didn’t have to respect that they had interior life at all.

Someone named Edward, I’ll look up his name, has written about this at one of the Canadian schools. There were some very early blackface robots that you could shoot an apple off the head of, and it was really fun to shoot arrows at them, because you could shoot them in the head and that would be okay.

And pretty soon people just wanted to fire arrows right at them, right? Apparently, people like the idea of raping a sex doll, and they like the idea of shooting a black robot. Like, there must be someone that you can simply abuse. I will say, the first time I had a real VR experience at Sundance, maybe 15 years [01:38:00] ago, it was a full-fledged experience by a lefty journalist where you stand in breadlines with people, right?

They were like fully-fledged VR human holograms. I was standing in line with them and I was thinking, “Well, I want this to be different than the experience of standing in a bread line in life,” because I’ve definitely stood in line with people who look exhausted and tired, and been exhausted and tired in a line myself.

So what can be different? I wondered what would happen if I just pushed one of them. And I also wondered, just as a technical question, how much they interacted with my body. I was reviewing the thing. So I pushed one of them. Nothing happened. My hand went right through it. It was just a hologram, right?

But I was surprised at how few people actually get out of line or do things like that in the presence of holograms. Clearly we have some desire to be in some kind of dream state where we could just exercise our id all the time without moral constraints, and that is what some of these android-like, human-like robots are doing for people.

For instance, Richard Dawkins. Some people in the comments on that UnHerd piece said, “Dawkins was my professor and he was such a jackass. All he wanted was for us to bow and scrape before him and tell him he was great.” Well, look at that. He’s found someone, because surely he has experienced that people are annoyed to be forced to praise him all the time.

Well, now he finds something that is incapable of annoyance and is willing to praise him all the time, and so he has the slave that he wanted his students to be, and he’s doing less harm, right? I can kick my Roomba to get it to do something. I don’t get the pleasure that I might get, if I were a violent person, of kicking a human, ‘cause it doesn’t cry out in pain, but it is nice to be able to say, “Get away,” to the Roomba, where I would have to be nicer if it was my mom cleaning my kitchen.

Anyway, clearly he appreciates [01:40:00] this liberation from moral constraints, or politeness, to get to do whatever the fuck he wants. But anyway, what I wanna ask you is: do you think that there is a danger of anthropomorphizing things, not just ‘cause we fall in love with them, but because we act like our absolute worst selves?

SHEFFIELD: Yeah, well, there is an interesting irony in that idea, because some of what Kant wrote about the moral treatment of others holds that even if you don’t see them as your equal, when you engage in degrading behavior, you’re actually degrading yourself.

HEFFERNAN: Yes. Yes. This is how I’ve come to feel about animal welfare, which I had no interest in, but which my son is very committed to. I went vegetarian and aspiringly vegan--

SHEFFIELD: I am-- Yeah,

HEFFERNAN: Oh, you are too. Entirely on the grounds that I don’t know what the consciousness of an animal is, what it feels like to be a bat.

I don’t know any of those things. But now that I know what a slaughterhouse is like, I think it degrades me to be cruel, to participate in cruelty to animals. And it’s somewhat selfish, right? But I think that’s--

Yeah.

SHEFFIELD: Yeah, so here’s the weird irony: I do think that more anthropomorphized robotic systems, or symbolic cognizants as I call them-- a symbolic entity that looks humanized or can respond in a humanistic manner-- actually incentivize you to treat them worse in some ways--

HEFFERNAN: Yeah. Yeah, because you know that it’s not.

SHEFFIELD: People who were kids in the late ‘90s and early 2000s may remember there was one of the first chatbots out there; it was called SmarterChild. They put it out on AOL primarily, but later also moved it to MSN.

SHEFFIELD: But it was basically a thing that kids could play with to [01:42:00] get facts about stuff. It was like a very primitive ChatGPT. But the thing was, a lot of kids-- and I’ve seen people talking about their experiences with it-- just loved to tell it to fuck off and “Shut up.”

HEFFERNAN: People do that with Alexa. My kids did that with Alexa the first time I turned it on. They instantly were like, “Oh, I can talk to this in the way that I can’t talk to my mother.”

How much human cultural output was already synthetic before the AI revolution?

HEFFERNAN: So, in the spirit of “were we already acting like AIs before AI came into our lives”-- the same way that the Diplomacy players had already played in a way that prefigured AI play, and chess too--

SHEFFIELD: And the music of Timbaland, I would say, also.

HEFFERNAN: Oh, yeah. Right. Well, also, yeah, we’d made digital music

SHEFFIELD: Everything-- Like, everything’s auto-tuned to hell,

HEFFERNAN: Everything’s autotuned to hell.

SHEFFIELD: And all these beat systems, and lyrics written by committee. So many rap songs are written by people who have never had any experience with the allegedly poor Black upbringings that they supposedly chronicle.

HEFFERNAN: Right. They’re simulating dramatic monologues from inside heads they’re not in and bodies they’re not in, and yeah. I mean, yes. A lot of those things were already happening. Like, if AI had fallen in the middle of the Baroque period, could it really have gotten Bach-style music?

No, because it wasn’t digitized. They would’ve had to put the harpsichord into a computer, and that would’ve skipped a lot of steps. So, in the spirit of that, think about our communication on Twitter or our communication on Bluesky as being somewhat or quite practiced in how to talk to other people whose humanity is kind of in doubt to us, right?

Like, you’re not totally sure if someone with one of those weird Bluesky handles, or someone with a quippy Bluesky or Twitter handle, is [01:44:00] real. You don’t know where they are in space. You don’t know... All you know of them is their textual output. And in that way, we already were talking to a lot of people as if they were chatbots, and we were not talking to them in a very humane way, right?

So in some ways it’s just another Twitter interlocutor in our phones, but one that glazes us so much that you’re less likely to get into a flame war with it.

SHEFFIELD: Well, yeah, and how is a chatbot different from a troll? Because with somebody who is trolling you, you don’t know what their intentions are. Their intentions could be random.

HEFFERNAN: Yes. Yes. They could be concern trolling you. They could be, as I think you would say, trying to--

SHEFFIELD: trying to upset you.

HEFFERNAN: Right. Trying to provoke physical responses in you, so that you take them into your body. Which, as I know from having been swarmed or trolled, really happens.

Like, you actually get hotter. You can’t just coolheadedly be your words. They’ve disturbed your equilibrium, as trolls exist to do. But anyone who’s done long-term texting with a new girlfriend or boyfriend also knows that you can be quite moved and aroused in good, positive ways too from these little inboxes and text boxes where you’d least expect it.

But you’re right. We’re getting very human reactions to something. At the same time, it’s stuff we’ve been doing for a long time: short-form communication with an unseen, disembodied interlocutor. This is not new. This is social media.

Cognition is individuated, but epistemology is necessarily communal

SHEFFIELD: Yeah, because the idea of the self, as I see it, is the first constructed reality of an entity.

But it is constructed in regard to the world. So it is being in the world; that’s what selfhood is. You are not an isolated entity-- [01:46:00] “no man is an island,” you know the old phrase. But that is an expression of what selfhood is and how it’s made. Within post-structuralism, I think they went too far in saying that the self is only socially constructed.

But Richard Rorty had a slightly different take on that. Let’s talk about it in that context though, because that was one of the things that came up when I was talking about the Turing test, and one of my friends on Bluesky, Benjamin Riley, was talking about Rorty in this context.

So you obviously can speak to that better than me.

HEFFERNAN: Well, I think Rorty would’ve tabled a lot of these questions. One of the ways that Rorty, I think, was a very useful philosopher was that he deployed his indifference to certain questions, especially questions from the logical positivists, to redirect us to the project of solidarity and liberal hope.

So sometimes you’d start talking about what Wittgenstein might have called, like, an occult presence-- consciousness, or the kind of things that Dennett would sometimes get himself too bogged down in... ‘Cause I studied with logical positivists at UVA. We spent hours on why a penny looks like an ellipse.

Maybe it’s a sense datum floating in front of our eyes. I mean, this is what graduate students at the University of Virginia were spending money to study. Think of the number of philosophers, all male, all English, who were spending their time on why a penny looks like an ellipse, and whether maybe it’s an object floating in front of our eyes.

Floating in front of our eyes like Macbeth’s: “Is this a dagger which I see before me?” They thought it was an actual object, right? You probably know this. While they were doing that, Foucault was writing. While they were doing that, Derrida was writing. So whether or not you think Foucault and Derrida were right, they were certainly more influential, more engaged, more dynamic, more out in the world than people in this, [01:48:00] honestly, onanistic setting with their really strange ideas.

It’s actually very sad to look back and think about the time wasted on those problems-- that even Bertrand Russell got dragged into them, or Wittgenstein got dragged into them. These are sometimes almost heartbreaking. I mean, I went to Russia for a film festival in ‘96, after the fall of the wall, after the end of the Soviet Union, and I was in a taxi when I first got there, and the driver told me, like taxi drivers everywhere, that he had once been a physicist.

The first thing he said to me was that he had been working on a problem for a really long time in the Soviet Union, and when the wall fell, he met his American counterparts who’d also been working on these problems in physics, right? They had solved the problem in 1952.

SHEFFIELD: Oh, damn.

HEFFERNAN: I mean, if there can be any intellectual heartbreak akin to romantic heartbreak, that’s it: that you have wasted your brain, your whole life, on questions like sense data. Lately, I’ve been thinking about this in the context of GLP-1s, the weight loss drugs, right? Shelves and shelves of diet books, lives ruined by anorexia or disordered eating.

All this stuff with the idea that maybe we’re gonna nail what dieting is. Whole professions devoted to this. Issues of magazines on carbs and blah, blah, blah. People who lived to assign whole books on it, right? And it turns out the secret to weight loss is this completely other diabetes drug, and all of that thinking was toy thinking.

All of that thinking was a distraction from this other thing. All of those people were working on a problem that had been solved over here, and we have one chance on Earth, and some of the great minds were spending that time dealing with sense data. So that is the tragedy. When I was an undergraduate studying philosophy, I considered that the worst [01:50:00] possible thing that could happen to my brain in this life, and that if I got distracted by something like that, I would be doomed.

Now, Rorty made that realization in the 1970s. He knew that people were trying to make a science of every single artifact of experience, that it was getting increasingly ridiculous, and he was depressed. He had been trying to bring truth and justice together in one breath starting when he was 16, when he went, as a young neurotic student, as he says, to the University of Chicago.

He was trying very hard to say, “Can’t the pursuit of truth be the same as the pursuit of a better world?” And it suddenly occurred to him that this pursuit of truth undertaken by people like Dennett, by the logical positivists, was actually immaterial to the quest for justice-- immaterial to the later question of bringing down car emissions to alleviate the climate crisis, or getting beds for more female AIDS patients, who at the time had been neglected.

Those were the things he thought mattered. This pursuit of truth and sense data has nothing to do with helping people in the world, and it is an obligation of mine as a liberal humanist to reduce cruelty in the world, independent of what I think the world is made up of, right?

Whether it’s gonads or cells or whatever. And so, I’m sorry to keep using this vulgarism, but when people like Dennett and the Epstein circle start jerking off to questions of, like, whatever-- consciousness, even-- I think Rorty often said, “Knock yourselves out, boys, but my final vocabulary is different,” right?

He was like, “I’ll be pursuing wild orchids, or all the things like perfume, or your dog-- the things that give your life meaning-- and it’s not gonna look like wasting my time reading A.J. [01:52:00] Ayer.” And then, on the other hand, there’s the analysis of language-- which is why he left philosophy in favor of literary criticism-- the attention to language that can provide liberal hope, because people are inspired by those kinds of images to help one another and create a better, more just world, to pursue social justice. That will be my life’s focus.

So that was such a long way of saying... And I often heard him; he was famous for the shrug, right? That was his major gesture. He actually did it in person, but you can see him doing it in writing. “I don’t know,” right? So I remember someone asked him-- what we’d now call a woke scholar asked him-- “You have not thematized power,” right?

Power relations, imperialism. Shrug. “I don’t know what you want me to do with that.” And I think that’s how he felt about Dennett. As for socially constructed versus not socially constructed, he did think that there was a world, as he said, out there. He did think that, whereof we cannot speak, thereof we must be silent, in terms of exactly the nature of us as beasts in the forest or creatures on the ancestral plain or whatever we are and were.

Philosophy and religion must accept that science is best able to answer certain questions

HEFFERNAN: But all we have is language, and he had a non-correspondence theory of language. It didn’t point specifically to real things in the world that could not be described or were outside language. But I think he was open to the idea that there were things out there; he had just stopped caring about the nature of things in themselves and started to be more interested in poetic uses of language.

SHEFFIELD: Yeah. Well, in a lot of ways that makes sense, because these are questions that ultimately are best settled by science.

HEFFERNAN: Yes.

SHEFFIELD: that’s the thing that I think a [01:54:00] lot of analytic philosophy really never accepted that.

that you cannot derive a lot of these things from first principles.

HEFFERNAN: I don’t know. I’m mystified about exactly... ‘Cause Rorty thought images were very powerful, and still he would describe humans as language users over and over again. But he thought the reason the war in Vietnam stopped is because of the images that came in the newspaper. Immediately blurring what an image is and what language is-- it’s something that people in the humanities do all the time, but I’m not sure that...

And then you get into: is music the same as an image? Does it work like language? We just talked about this. And forget about scent and all kinds of other experiences. And they don’t take into account the body, right? Like the perception of color and all that stuff.

So I think that’s one place I don’t understand him. And I don’t entirely understand what he does with math, which seems pretty important. Mathematics offers a description of the world. I have Frank Wilczek’s book about entropy, and like good books on science, it does not read like Richard Dawkins.

It reads like a bunch of equations, and so I think that possibly the world out there is described best by numbers.

SHEFFIELD: Well, it’s all we can do. Ultimately that’s the best we can do. And the way I see it, I scale up my ontology from quantum observation.

So I have a completely monist ontology. Everything is one world. And quantum objects are not particles, they are fields-- they’re excitations.

So everything is a process. Literally everything. You, me, the tables, light bulbs, the [01:56:00] sun, whatever-- they’re all quantum processes that are aggregated. And so everything that exists is a system that does. There are no things that do; there are only processes that are.

HEFFERNAN: Mm-hmm.

SHEFFIELD: And once you accept that, then you eliminate the causation problem of, why do things have properties?

How can things do, if everything just is? And then the order that exists is simply the result of the constraints that each system places on the others-- what I call obligations. Obligations are simply the strictures that systems put upon each other. And if you think of it that way, then you can have an ontology that is compatible with any physical theory that may later come along in quantum physics or chemistry, because there is a unity: there are only systems and the obligations that they generate.

That’s it. Everything is simplified when you do it that way, because then order is simply the persistence of systems. Systems that resist, that don’t comply with surrounding obligations, don’t persist. So there is no reason to say, “Well, gosh, look at all this amazing order.”

Like, “How did this happen? What if we had modified this constant or that one?” No. You are literally talking about existence as compliance with obligation. That’s it.

HEFFERNAN: So you probably know that Rorty was from this illustrious Christian family, and also his parents were diehard Trotskyists. So he had this merger in his head of, there’s a Christian future and there’s [01:58:00] a Marxist future, and he didn’t quite know what to do with his sensory, emotional, religious longings in the context of uplifting the worker, which he felt was his everyday responsibility.

And you can see how he came to somewhat square those things with his philosophy that kept truth and justice in their lanes-- or politics and poetry in their two lanes. Humane public life and beautiful private life.

What problem do you think you were trying to address that comes out of your own experience with religion? Because I think you and Rorty are just working on different problems when you come up with this synthesis.

SHEFFIELD: The way I see it, cognition is individuated, but epistemology is communal. And you have no choice. This isn’t just like, “Oh, let’s all be peaceful and hold hands and sing songs and understand each other.”

No, it’s not like that. You have no choice but to engage in communal epistemology, because by the very act of language itself, by your embodiment as a human among other humans, as a thing in the world, you are obligated to engage in epistemology as an external method.

HEFFERNAN: I know a little bit about what Rorty said about maybe some of these things, if I understand you right. He imagined that the individual organism did not want to simply be reiterating type. He has this idea of a strong poet, which he gets from Harold Bloom, that might not be useful, but might be.

That individual organisms are driven to stand out, to create new poetry. He’s a little bit obsessed with literary fame as something everybody must want. But they wanna [02:00:00] make their impression on the world and leave an impression, one that’s different from the other organisms of their type.

So the organism wants to stand out, but also wants to fit in. So there’s a tension with your idea that you have no choice but these communal obligations. Well, how does that explain our kinks, our perversions, our love of our dog, the poetry we write, the oddities of our lives? Conformity is possibly safer, but it could also be that to persist, to keep our own brains and hearts beating, we need to stand out, to aim to get more resources than other people.

But I think he also sees it as almost revealed religion. He says we simply have a desire to reduce cruelty in the world. It’s irreducible, right? And he says, “That’s what makes us liberals.” So there are people who think peace and prosperity are more important.

Those people might be Republicans. But if you think that cruelty is the worst thing you can do-- that’s the way he puts it-- you are a liberal. And you oppose the ultimate evil, which is cruelty, rather than seek the ultimate good, which might be peace or prosperity. Yeah.

Obligation within a natural world of processes

SHEFFIELD: I definitely agree with that, because the thing is that as finite entities, there are no absolute truths for us to find. We cannot access them. So what obligations truly are, or what a physical object truly is, we can’t know. We literally cannot know, because we’re limited in terms of our existence in space-time, our scale, and our perceptual instrumentation-- our eyes or whatever instruments we might use.

We cannot find absolute truth. So everything that exists, [02:02:00] everything we can know about externality, is either false, possibly false, or unlikely to be false. There is no truth, only degrees of falsehood. And Karl Popper, I think, was heading in this direction, but he went too far with his World 3 stuff, in which he argued that if you have a proven scientific theory, then it’s there in World 3.

And it’s like, dude, you just went and reinvented Platonism.

Fucking stop it.

HEFFERNAN: The reinvention is the--

SHEFFIELD: Everybody does it though. And this is the problem, because cognition is abstracted. Like, with somatic reasoning, we have no access to our cellular data.

We don’t know how they know things.

All we know is the somatic tokens of their experience, which are pushed up and agglutinated into our mind, and our mind enacts what we know. You can never recall a memory in the same way. It’s not like a bunch of bits stored somewhere. Every meaning is enacted.

Every meaning. You can never have the same memory exactly.

HEFFERNAN: Yes. Right. I love that. And that goes with the strong poet, the invention, the constant self-invention at the center, I think, of Rorty’s thinking. I have to go soon, but I want to run one thing by you. I hope listeners will find this as chilling/funny as I did. See if you do.

It also goes to some of our points. I got a solicitation of work as a journalist from an editor. I’m not gonna name him, because we had a very odd exchange, but I’ll tell you where he’s writing from. So he says, “I’m X, managing editor of Colossus, a business and investing magazine. Huge fan of your writing.

I want to work with you on a story.” So then he gives the [02:04:00] ideas he has for stories. One’s about the religio-psychedelic culture around frontier AI, another is a reconstruction of a Bay Area group house founded by a philosopher, and the third is about East Coast and West Coast cultural legacies.

And yeah. All right. I looked at Colossus. Do you know Colossus, Matthew?

SHEFFIELD: I’ve only heard of it. Yeah, I know what it is.

HEFFERNAN: All right. So I looked at it, and it’s clearly founded by a venture capitalist. There’s a podcast associated with it, but the stories seemed a little bit creepy to me, a little Peter Thiel-ish. So I wrote back, “Thanks for thinking of me and for sending over detailed story ideas.

Tell me a bit about Colossus. It seems at a glance to skew tech-right, but perhaps I’m not reading it right. I assume if you’re interested in my work, you know I’m still devoted to garden-variety secular liberal democracy, the reduction of cruelty in the world, rather than Mars, the Antichrist, Armageddon, and mass surveillance. Are we, as they say, aligned? Virginia.” I was positive that he would say, “Absolutely not. We’re working on this interesting blah, blah, blah pro-democracy venture.” I don’t know why I thought that. Okay. “Hi, Virginia. Thanks for your thoughtful reply. It sounds like this may not be the right fit, and I don’t want to take up more of your time.

I appreciate you considering it, and I’ll keep enjoying your writing.” I mean, my jaw actually dropped. Like, you’re interested in democracy? Well, we’re interested in Armageddon, so see you later. See you later, V. You’re off. Anyway, I probably have forfeited a decent paycheck, but back to toiling in the reinventing...

Liberalism must reinvent itself in order to thrive in this future

HEFFERNAN: Basically, all I can do with the end of my time is try to say: the Enlightenment was a pretty good idea. Can we chill out again, separate church and state, and have... Look, maybe I need to rebrand secular democracy, secular--

SHEFFIELD: I think, yeah, [02:06:00] we have to improve it,

HEFFERNAN: We have to improve it-- well, we have to improve its image, because the fundamental ideas of it are very solid.

We just don’t need presidents and secretaries of war who get their war briefs from the Book of Revelation. I think we can agree.

So,

SHEFFIELD: But we need a liberalism that celebrates the body and doesn’t try to abstract it away. That is the weakness of American liberalism, and it has been since the end of World War II. They saw the somatic power of Hitler and Stalin, and they said, “Oh my gosh--

HEFFERNAN: Demagogues.

SHEFFIELD: We can’t do that. This is--

HEFFERNAN: Maybe we abandoned it, but I don’t see Enlightenment thinkers like Rousseau completely neglecting the body, and there was plenty of room in our founding documents for the body. And certainly humanism-- I don’t think there is any kind of humanism without the body.

I don’t think this is something that Locke and Rousseau opposed. I don’t think this is what the Enlightenment intended for us. And I’m with the most leftist thinker I know, David Graeber, who says it’s the Enlightenment and secular democracy that were the most radical things yet invented, and we have not done any better.

And certainly Marxism, and certainly leftist Christianity and the Book of Revelation-- these things that sound very exciting are less radical, and more likely to quash human flourishing and promote human cruelty, than secular liberal democracy.

SHEFFIELD: Yeah. Well, because you can’t have perfection, but you can have a process of striving--

HEFFERNAN: I think that’s absolutely right. Maybe we should leave it there.

SHEFFIELD: Yeah. All [02:08:00] right. Well, go ahead and plug your website for anybody on my side that hasn’t--

HEFFERNAN: Okay, so my Substack is called “Magic and Loss,” after my book, which came out 10 years ago. I’ve been speaking on the 10th anniversary of “Magic and Loss: The Internet as Art.” That’s my book; you can find it anywhere. You can also find “Magic and Loss,” the Substack, which is basically politics and tech for humanities majors.

You can find that on Substack. I also write a near-weekly column for “The New Republic” about politics, and I have a podcast called “Omnishambles.” I would love to see everyone over there for more of this kind of discussion, and Matthew will join me on “Omnishambles” soon.

SHEFFIELD: Sounds good. And you’re gonna cross-post this over on your site, so for the listeners over there at Magic and Loss, come and visit us at flux.community.

HEFFERNAN: Yes, we are all friends, Substack friends, for sure.

SHEFFIELD: Yep. All right. Sounds good. This was fun!

HEFFERNAN: It’s so much fun. Thank you. Thank you, Matthew.
