Flux
Theory of Change Podcast With Matthew Sheffield
In the AI-powered job market, knowing what truth looks like will matter most


Nils Gilman on why a classic liberal arts education is a safer bet in an age of machine-augmented intelligence

Since the public release of ChatGPT in late 2022, large language model artificial intelligence systems have become the most rapidly adopted technology in human history. Last March, ChatGPT’s website had 5.7 billion visits, while its competitors Claude and Gemini combined for another 3 billion.

Despite how much people are using these services, however, AI still has many critics who argue that these systems are nothing more than simplistic pattern-matchers that are vastly overhyped.

While the critics are underestimating what you can do with these systems, they do indeed have a point. LLMs excel at many abstract reasoning tasks, but because they have no somatic, embodied connection to reality, there is still a lot that today’s models struggle with. Full cognition depends upon the ability to designate “this” in the world and to compare “what it’s like” based on lived experience.

Love it or hate it, this technology has already changed the economies of every country, and this process is only just beginning. No one can say what will happen everywhere, but one thing seems evident: As abstract knowledge of facts becomes commodified, human somatic adjudication will become more valuable than ever before. The future will belong to people who can think across multiple disciplines and who understand what truth looks like, both broadly and in particular.

All of this is the topic of an essay about future-proofing your career in the age of AI that my friend Nils Gilman, the former associate chancellor at the University of California–Berkeley and deputy editor of Noema magazine, recently published. It is the focus of today’s discussion.

The video of our conversation is available, and the transcript is below. Because of its length, some podcast apps and email programs may truncate it. Access the episode page to get the full text. You can subscribe to Theory of Change and other Flux podcasts on Apple Podcasts, Spotify, Amazon Podcasts, YouTube, Patreon, Substack, and elsewhere.



Protecting and supporting democracy is a team effort! We need your help to keep going. Please support my work with a paid or free subscription!



Audio Chapters

00:00 — Introduction

06:56 — Large language models’ limitations are where future jobs will flourish

15:41 — AI supplementation and the human role in improvement

26:14 — Analogies for AI adoption and disruptive technology

34:50 — Art, reproduction, and the value of authenticity

41:11 — The jobs of the future will be at the intersection of somatic and abstract reasoning

46:44 — Liberal education and metacognitive skills

54:14 — Porting knowledge from within time and other disciplines


Audio Transcript

The following is a machine-generated transcript of the audio that has not been proofed. It is provided for convenience purposes only.

MATTHEW SHEFFIELD: And joining me now is Nils Gilman. Hey, Nils. Welcome back.

NILS GILMAN: Glad to be here again.

SHEFFIELD: Yes. And your article is about a very important topic that will only become more important, I think, in the intervening months and years especially. But it has a premise that I think some people, perhaps many people on the political left, would strongly disagree with. A lot of people seem to think that large language models are all just a big scam, that they’re not capable of anything.

GILMAN: Yeah. Look, I think it’s worth noting that there’s no technology that’s been adopted this quickly ever in history. And there’s a reason for that. The post-ChatGPT 3.5 models that have been rolling out over the last three years are capable of things that are really, really extraordinary.

Things that were for a long time seen as almost impossible holy grails of achievement: pattern recognition [00:04:00] activities, and most notably, with the most recent generations of large language models, the creation of text, whether that’s code, the whole vibe coding trend, prototyping, but also writing for many purposes.

I’m not sure that LLMs have yet created a great piece of literature. That requires some imaginative additions, and we can talk a little about what those are. But for things like answering emails, various kinds of agentic purposes, drafting boilerplate for legal purposes or for regular corporate communications, things like this.

These are really extraordinary tools that are rapidly accelerating the ability of people to produce content. Not necessarily the most elegant or creative content, but a lot of the content we need to create does not need to be elegant or creative. And for that kind of stuff, it’s massively increasing productivity and output.

And so I think anybody who says these are just stochastic parrots or mediocrities may [00:05:00] be on one level correct, but it may be irrelevant, because for many purposes these technologies are going to be more than good enough for what people are using them for.

SHEFFIELD: Yeah, I think that’s right. And in a lot of ways, just from a calculation standpoint and some other text-processing standpoints, software was already capable of doing this before LLMs. But of course, the only people who really had access to that were computer programmers.

So if you knew various programming languages, you could do a lot of this stuff. Whereas what we’re seeing with the large language model is an expansion of capability to regular people. Because most people are not wizards at Perl or have a lot of experience in PHP or some other language.

And these chatbots can write that code also. So I think, to some degree, people are judging them on the initial [00:06:00] ChatGPT 3.5 that they had heard about, which was remarkably less capable.

GILMAN: Yeah. And look, people have talked a lot about AI hallucinations, and those things are very real. I personally, in my own practice, use AI a lot to do research, and you always have to double-check the work. Because sometimes, though less often than a couple of generations ago, they either make up articles or citations from whole cloth or don’t necessarily have the best take on the article or book they’re citing.

So you always need to check your work. But I will just note that, insofar as this might be a substitute for an undergraduate or graduate research assistant, those things can happen with human research assistants as well. You always have to check the work of anybody you’re outsourcing a function to, whether it’s a machine or a human being.

LLM limitations and cognitive science

SHEFFIELD: And there is still some truth, though, of course, as you touched on, [00:07:00] that a large language model is inherently limited in certain things. And that’s what the focus of the discussion here will be.

But within the cognitive science framework that I’m developing, which is based on the dual process theory of Daniel Kahneman and others, they lack what I call somatic reasoning.

They are not embodied, and so there are certain things that they cannot have reference to. But also they do not have a stake in the world, and so their ability to both visualize the world and model it, especially for illustration or conceptual purposes, is limited.

But most of the text that people are generating in their own lives isn’t really about, well, which thing is above this one in the picture? Or where is the red handlebar on the bicycle? Those are not questions that people have to deal with for a lot of purposes, [00:08:00] unless you’re an artist or something like that.

GILMAN: Right. I mean, look, one of the terms that people throw around in computer science to describe this is that the current generation of large language models lack a world model. That is, an ability to understand the broader context in which they’re producing the texts they generate in response to prompts.

Melanie Mitchell, the CS researcher, has described this as a lack of embodied knowledge. That’s one way of explaining why these machines lack a model of the world: they don’t have a body that places them in a specific phenomenological space. And so they create strings of words or tokens that are coherent in themselves but may not actually be in direct correspondence with whatever they allegedly are describing in the outside world, because they have no way of verifying whether the thing they purportedly are describing or working on actually [00:09:00] is the way the textual stream that prompted them to produce this content suggests.

And that is one major source of the mistakes and hallucinations and stylistic infelicities that these machines continue to produce. But again, I think you and I are in agreement that even though they have these kinds of limitations, they still can be very useful for a great number of purposes.

And they clearly are going to be changing the way people do their jobs, because many jobs involve rote production of text in one way or another, and those things are going to become rapidly commodified as these technologies are rolled out into workplaces and into people’s everyday lives.

SHEFFIELD: Yeah. And the other thing is that, originally, the technology was primarily just about statistical relations among the lexical tokens within the model. That’s mostly what it was. But now there’s a lot [00:10:00] of supplementation to that core technology, using things like retrieval-augmented generation, where they go out and search the web for the specific topics, or relying very heavily on training where they interact with humans who correct outputs.

And a lot of credit has to go to the people who are doing those corrections, because that’s really where the core of the improvement has been made.

And there is some interesting, promising research out of a new company started by one of the early pioneers of AI, Yann LeCun, who is working on world model generation. Although it’s not tied to robotics, so I don’t know if there may be limitations on that as well.

But on the other hand, it sure looks like they’re going in the right direction there.
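The retrieval-augmented generation that Sheffield mentions can be sketched in a few lines. This is a toy illustration only, not any real product’s pipeline; the function names and documents are invented for the example, and real systems retrieve via web search or embedding similarity rather than word overlap:

```python
# Toy sketch of retrieval-augmented generation (RAG): instead of
# answering from model weights alone, the system first retrieves
# relevant documents and pastes them into the prompt.

def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retrieval. Real systems use web search
    or vector similarity over embeddings, not raw word overlap."""
    q_words = set(query.lower().split())
    def overlap(doc):
        return len(q_words & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def build_prompt(query, documents):
    # The model then answers grounded in retrieved text rather than
    # purely in statistical relations among its training tokens.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Knowledge was a memorization test for London cabbies.",
    "Airbnb began as a couch-surfing style service.",
    "Early automobiles had little horsepower but were adopted rapidly.",
]
print(build_prompt("What was The Knowledge test for London cabbies?", docs))
```

The point of the pattern is the grounding step: the answer is conditioned on freshly retrieved text, which is why retrieval reduces, though it does not eliminate, the made-up citations Gilman describes.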

GILMAN: Yeah. Yann LeCun’s a very interesting example. Your listeners will probably know that he used to be the head of AI for Meta, [00:11:00] Facebook, and recently left to start his own company, specifically because he feels like the current generation of large language models, because they lack this idea of a world model we were referring to, are going to hit some kind of limitation in terms of their capacity.

And so he wants to think about a really different kind of architecture. Yann is a brilliant guy, and if there’s anybody who can accomplish this, it’s probably him. But it is experimental research at this point, so we don’t know; I think he would be the first to admit this.

We don’t know for a fact that this is going to work, or what it actually would mean to build new generations of artificial intelligence that did have a world model. How exactly that will be instantiated, I think, remains to be seen.

SHEFFIELD: But in any case, I think perhaps the comparison is the early automobile. In a lot of ways, early cars were unreliable, and they had a lot of limitations in terms of how far they could go. They didn’t have a lot of horsepower, but you know what?

They were still incredibly useful, and the automobile was a rapidly [00:12:00] adopted technology. And that’s where I see we’re at right now.

GILMAN: Whenever I think about a new technology, I always like to make a car comparison, because everybody kind of understands what cars are and what they do and how they have radically changed the way we live our lives. And obviously it’s an analogy, so you don’t want to exaggerate it.

But I think there are a number of things that the analogy actually helps us to understand. One is that, was there a lot of technological disemployment? Well, yeah. People who were breeding horses, a lot of those jobs went away. The number of horses in New York City fell from a couple of million to a couple tens of thousands in the course of the first two decades of the 20th century.

Obviously, that was a dramatic transformation. If your business was horse breeding, you were going to be put out of business. But lots of other jobs were created: auto mechanics, gas station attendants, obviously automobile manufacturing workers, the commodity supply chains to produce all of that.

So new things came along. [00:13:00] That’s one thing that’s worth noting: there will be some technological disemployment from certain categories of work. But the other thing that I think the automobile example really highlights is that it’s not just that the automobile, with the internal combustion engine, changes the way individuals move around. It ends up changing everything: our economies, where we work, the kinds of jobs we have, the morphology of our cities, the rise of suburban living, people’s sex lives. The rise of the automobile changed a great many things beyond just the direct employment implications of changing mobility services, if you want to put it that way.

And I think there’s every reason to believe that LLMs are likely to be similar. They’re likely to change the way we work, the way we relate to each other, our sex lives. There are lots of things that are going to be changed as a result of this technology. And this brings me to my third point, [00:14:00] which is a general point that I always want to underscore when anyone tries to forecast the implications of a technology.

And that is that what a technology does in the lab, and the way an individual, particularly an early-adopting individual, uses it, doesn’t necessarily tell you very much about what the larger social implications of that technology are going to be when it’s rolled out at scale. So let me give a different example that can show you what I mean.

Airbnb was originally dreamed up as a sort of couch-surfing application, a way to meet people when you’re traveling, so that people who, after the pandemic, wanted to be able to travel but couldn’t necessarily afford to stay in hotels could do so. And I think it worked great, and there was a lot of early adoption for precisely that kind of reason.

But as it scaled up, it started to have all sorts of implications that went beyond what anybody at Airbnb had even contemplated. At scale, it suddenly meant that many, many [00:15:00] apartments were being taken off the market in desirable central city locations, because the people who owned those apartments figured they could make more money with a series of short-term rentals than they could renting to long-term tenants.

So this ended up hollowing out the residential structures of many central cities. And that’s had deleterious effects, particularly in smaller cities and tourist-heavy cities. It’s been quite malignant, which has then required more kinds of legislation to deal with those sorts of things.

So in general, what I would just say on that point is that it’s really important not to think that the way something gets used initially is going to tell us directly what the implications are when it’s rolled out at scale.

AI supplementation and the human role in improvement

SHEFFIELD: Good point. And it’s also a reason why people who are concerned about the abuses of this technology need to be involved in how it is conceived, how it is regulated, and how it’s discussed in the public mind. [00:16:00] But yeah. Specifically, though, we don’t know for sure, as you’re saying, what kind of changes the much broader application of LLMs is going to bring within society.

There will be many applications that are not even being done right now. For sure, that’s the case. And it raises a question that I think is worth considering in terms of the personal applications, which is kind of the focus of what we’re going to be talking about here today: some people, I think very rightfully, refer to AI not as artificial intelligence but as intelligence augmentation.

That is, you should think of it in that way. This is not some alien intelligence that’s going to take over the world. No, this is just a way for people to augment their own minds and to do a lot more things with their own thinking. And that’s probably something you agree with, I presume, right?

GILMAN: I largely agree with that. Another way to think about it is as a [00:17:00] prosthesis. I think there are two implications of that that are worth teasing out a little bit, though, right? One is that the augmentation will allow you, all of us, to do things much more quickly. Just think of a thing like a calculator, right?

A calculator allows us to do, if you’ve got a scientific calculator, quite advanced things in terms of crunching numbers, things that used to require long, laborious working out by hand if you wanted to multiply or divide large numbers or take a cosine or a sine or what have you.

These were relatively laborious calculations that now can be done literally with the push of a couple of buttons. And so it can rapidly increase the rate at which one does these kinds of calculations, which can accelerate all sorts of processes, right?

But there is a downside to this anytime you’re talking about the ability of technology to augment a particular capacity. And that is that it often means that the native capacity, if you want to call it that, that humans [00:18:00] had will atrophy, perhaps quickly within an individual, and certainly over time as the social or maybe even biological affordances for dealing with the pre-technological situation no longer exist.

And I’ll just give an example that everybody who is, let’s say, 35 or older will remember. We didn’t use to have Google Maps, right? And so all of us, when we lived in a place, had to have some kind of mental map of the city we were living in or the city we were visiting.

Maybe we had to have a physical map in order to look things up if it was a new place. But we all began to make mental maps as we walked around a city. I moved to several new cities in the 1990s after I finished college, and in each case, it wasn’t something I even really thought about, but I just naturally created a mental map of cities when I moved to them or when I visited them.

I don’t really do that anymore, because I have the map in my pocket, and I’m not even sure I could do it with the same facility that I had in my 20s, because I haven’t had to do it in so long, right? So [00:19:00] there is this risk whenever you create an extension of a particular human capacity: if you automate the technology that allows it to be done with relatively low effort, you’re going to lose the native capacity to do it.

Now, is that a bad thing? Maybe, maybe not, right? The need for the kinds of strength that other primates have declined as humans developed tools for all sorts of physical things, right? That’s why human beings are much less strong than a gorilla or a chimpanzee or our other near-neighbor primates, evolutionarily speaking.

Did that make us worse? No. We figured out other ways to use tools and to socially cooperate in order to achieve the ends we wanted as social primates, right? But it did mean that over time we lost some of the physical force that our ancestors probably had a couple million years ago.

So I think those are all things we do need to think about whenever we roll out a technology: the technology that augments [00:20:00] or extends some particular capacity can also, over time, erode that native ability within a particular human being, or certainly within a community that comes to rely on that technology.

SHEFFIELD: Yeah, that’s definitely true. And that’s extremely relevant in the context of primary education, because you see so many students who are just farming out their assignments to a chatbot rather than doing them. Although, on the other hand, that raises another question, which is: maybe that assignment wasn’t a very good one to begin with.

Because not just in education but in a lot of professional certifications, they rely on the memorization of things that are of absolutely no relevance to anyone. Just as an example, from my background in computer technology, there are some [00:21:00] certifications where they would require you to memorize some obscure flag on a command which you do use frequently, but you would almost never use that particular flag.

And so what value have you gained by memorizing that flag? Not really anything. Especially because most people don’t even use the command in that way. And there’s just a variety of things where that is the case. And then you’ve also had what one could call a cartelization of a number of different professions, such as the legal profession.

Many states don’t require you to go to law school, and I think that’s the right attitude. But a lot of states do. Most states do.

GILMAN: Right. I mean, look, let’s use the mapping example for this sort of forced-memorization credentialization requirement. Time was that London [00:22:00] cabbies... London is this enormously vast city, right? Scores of villages that grew together. And it’s very complicated figuring out how to drive around in London.

It used to be that if you wanted to be certified to drive a cab, a black cab in London, you had to pass a test of what was known as The Knowledge, which is the ability to drive from any one place in London to any other by the shortest possible route, and you would be tested on it in order to be certified.

And because London is so big, this often took years. It typically took two to three years for somebody who wanted to become a taxi cab driver in London to basically have the entire map, with the shortest route between any two spots within London, memorized inside their head. And it’s actually a really interesting classic example of neuroplasticity, because the part of the brain that does that kind of mapping would actually physically grow in these London cabbies.

The posterior hippocampus, I believe, is the part of the brain that is affected, and it would actually grow. And there was a reason for this originally, right? Before you had mapping apps, [00:23:00] you wanted to be able to rely, if you got in a cab in London, on the cab taking you across town in the most efficient possible way, so that they wouldn’t ring up extra charges or what have you.

There’s a reasonable quality to that requirement. With the rise of mapping apps, anybody can drive an Uber and it’ll tell you the route. Google has solved that problem, and now people don’t have that kind of knowledge. I wonder how many people will ever have that knowledge again.

Now, is that a human loss, that we no longer have black cabbies in London who have The Knowledge? I wouldn’t say so. I would say that was two or three years of their life where they weren’t making any money. They were investing in growing their posterior hippocampus because it was a job requirement.

It was a real job requirement. But we don’t need that anymore, and that’s going to save several years. You can become a taxi cab driver who can efficiently get across town in London overnight with the technology. That seems to me a straightforward improvement in the productivity of taxi cab driver recruitment in London.

[00:24:00] And similar things, I think, are going to happen as a result of LLMs in all sorts of other fields. There are going to be much lower barriers to entry, because you don’t need to have that kind of knowledge. I’m not sure I totally agree about the law example, though, because in the case of a law degree, the stakes are really high.

It’s not just that you’re going to get across London more slowly if the LLM-driven mapping app doesn’t give you the shortest route across town. You may incur tremendous amounts of civil or criminal liability if you hire a lawyer who’s not qualified for the job.

And because there is in fact a lot of specialty knowledge that one needs in order to be an effective litigator, or lawyer in general, I would think it would be rather risky to rely entirely on LLMs. On the other hand, I think many of us, before we go to a lawyer now, or before we go to a doctor, or before we go to a therapist, may start by asking an LLM, “Give me the basic outlines of this.

What do I think [00:25:00] this contract ought to look like? What are typical pieces of boilerplate that I should probably discuss with my lawyer about whether I need to have this in the contract?” So that you can go in as a more informed consumer when you’re dealing with a professional lawyer or a doctor or what have you.

So again, I think this is not going to displace the doctors or the lawyers or other kinds of people who have specialty knowledge, so much as it’s going to change the relationship that clients have to those practitioners, and also change the way those practitioners mobilize the knowledge that they have, right?

I remember something my mother used to say to me when I was a kid. She said, “The second best thing to knowing something is knowing where to look it up.” It’s sort of a quaint phrase at this point, but now we all know where to look things up. You start by going to an LLM, and you always gotta be mindful that maybe there’s going to be some sort of hallucination going on.

But again, could you really always rely on Encyclopedia Britannica to tell you what was what about a particular subject? It was pretty good, but there’s been a lot of evidence now that it’s not as good as the [00:26:00] crowdsourced Wikipedia in many cases, right? So I would say that these technologies are radically going to reconfigure the way we relate to various knowledge bases, but we shouldn’t assume that they’re going to wholesale displace those things overnight.

Analogies for AI adoption and disruptive technology

SHEFFIELD: Yeah. Well, the encyclopedia context is another good comparison, because I remember when Wikipedia was first coming online, and I was indirectly in the orbit of Jimmy Wales, its co-founder. And it was controversial when Wikipedia first came along. People thought, “No, this is wrong.

An encyclopedia that anyone can edit? This is a way that the world’s going to be filled with misinformation. It’s going to be filled with lies and inaccuracies and trolling.” And to an extent, that certainly does happen on Wikipedia, but the community is now large enough that it has developed protocols and methods to really cut down on that.

And so at this point, while [00:27:00] nobody’s going to be out there citing a Wikipedia article in an academic study or something like that, it is the starting point if you are unfamiliar with something. People have been going to it now for more than 20 years, since it became mainstream, and it’s changed the world in a lot of really positive ways, in ways that I don’t think its critics ever fully admitted they were wrong about, about what you could do with it.

GILMAN: People rarely admit that they’re wrong in general, Matt. That’s my observation. Occasionally you get people who admit that they made a big call wrong; we have some people doing that in politics these days. But usually, if it turns out they were wrong, people just turn the page and pretend that they didn’t actually believe those things.

SHEFFIELD: Yeah.

GILMAN: I don’t expect a lot of mea culpas coming out of the AI doomer or boomer crowd when we achieve neither doom nor cornucopian [00:28:00] plenitude.

SHEFFIELD: Yeah. Well, there’s the phrase, and I forget who coined it, so I can’t credit them: this is a normal technology. This is what it is. So to that end, though, as productivity is increasing, there’s still, going back to the inherent lack of capacity that it does have in some ways, the fact that certain jobs, and this is what the article you recently published is about, cannot really be done by an LLM.

Because they have no physical stake in the world, they also are not accountable. And so someone always is going to have to be there as the endpoint. So walk us through a bit of your argument here.

GILMAN: Yeah, let me say that I think one of the things that’s really important to note is that the kinds of tasks that LLMs are very [00:29:00] good at at this point are typically not a whole job anywhere. A computer programmer, right, is not just typing code all day, right?

Most of the things where you have to type with your fingers, those are the kinds of things that I think LLMs are going to be largely replacing over time. But that’s not the only part of a job, right? The rest of the job is, just to give examples from computer science, collecting feature requirements from customers, prioritizing those things, deciding what order one wants to do things in.

All the sort of meta processes associated with developing code still aren’t going away quite yet. I mean, I think those things are likely to be commodified over time. Or take the lawyer example we were discussing. It may be that the LLM can help you write your brief, but figuring out your legal strategy with a client, figuring out the business risks they want to mitigate, if we’re talking about commercial litigation, figuring out how risk-tolerant they are about taking a case to trial as [00:30:00] opposed to settling.

Those are all things that require complex human negotiations and typically I think are not going to be going away. And I think those functions are actually going to become even more relatively valuable, right? So this is some basic economic theory, right? If you have, two inputs into producing some good and one becomes a lot cheaper, then the other one becomes relatively more valuable, right?
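A toy calculation, with numbers of my own invention rather than anything from the conversation, makes the relative-value point concrete: if a job requires both drafting (which LLMs are commodifying) and judgment (which they are not), cheaper drafting raises judgment's share of the job's total cost, and hence its relative value.

```python
# Toy illustration of the two-input point: a job needs "drafting" and
# "judgment". When drafting gets 10x cheaper, judgment's share of the
# total production cost rises, even though its own price is unchanged.

def judgment_share(drafting_cost: float, judgment_cost: float) -> float:
    """Fraction of the total production cost attributable to judgment."""
    return judgment_cost / (drafting_cost + judgment_cost)

before = judgment_share(drafting_cost=100.0, judgment_cost=100.0)  # 0.5
after = judgment_share(drafting_cost=10.0, judgment_cost=100.0)    # ~0.91

print(f"judgment share before: {before:.2f}, after: {after:.2f}")
```

The specific prices here are arbitrary; the direction of the effect is the point.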

So if we think that objective reasoning is being largely commodified by LLMs, and that the production of words, whether computer code or written language, is also being rapidly commodified, the question is: what remains? And I think that for most jobs, those remaining parts are not going to completely go away.

Your job is going to be highly reconfigured, though. You're going to be expected to produce a lot more, for example, or interact a lot more with clients, or go to more meetings, and so on. And that, I think, is where the value in a lot of jobs is going to migrate: to the ability [00:31:00] to do the kinds of things that require emotional intelligence, creating social consensus, ethical judgment, and questions of taste.

All of those things, I think, are going to become relatively more valuable as the actual execution of things becomes relatively easy.

SHEFFIELD: The irony is that the conferences and the conference calls and Zoom meetings that everybody hates about their jobs are, in a lot of ways, actually the most essential things, even though they are often regarded with infamy. And a chatbot, of course, can be in the meeting; Zoom has already integrated those types of features.

But that type of integration of judgment, of presence, of sensing other people's responses and ideas and feelings-- they can't really do that.

GILMAN: Right. Let me give a [00:32:00] personal example from yesterday. I was talking to somebody who is potentially going to do some contracting work on my house. And I wanted to hear what her idea was for doing this work, but really the thing I was sitting there judging was: do I trust this person?

Do I think this person is going to have the taste and the judgment to do the things I want done when I'm traveling and she's working on the project and I can't be there to oversee it every single second? That judgment of her was one I would not have trusted to outsource to a machine, because ultimately I have to look her in the eye.

I have to have some confidence that when I give her the keys to my house, it's going to look better after she's done with it than worse, right? And that's a judgment issue that, to this point, I don't think people are yet willing to give up on, and I think it may become even more valuable.

Likewise, for her, it's not just about whether she can execute this. She's trying to sell herself to me in the course of that [00:33:00] conversation. And that's again something she can't do just by writing a bunch of stuff down. She's got to do it partly by having a meeting with me and making me feel that I would be wise to put my trust in her, right?

So those kinds of things, I think, are not going away. And there are lots of other things that are also not going away, things that involve convening and human bonding of various sorts. Those things are also, I think, going to become relatively valuable, relatively common descriptors of jobs.

So the irony is, there was a bit of a meme, four or five years ago, you'll probably remember better than me, Matt, of "wordcels" versus "shape rotators" that was going around Silicon Valley: two kinds of minds. Shape rotators were engineering mentalities who like to think about things in very linear, structured ways, versus wordcels, who pride themselves on their facility with language. This was initially developed as a kind of joke and then turned into a serious thing. If we take it semi-seriously, maybe more seriously [00:34:00] than it should be, what's actually turning out is that the kinds of things shape rotators are particularly expert at are the things that are relatively commodifiable by LLMs, whereas the kinds of things wordcels typically pride themselves on, the facility with which they use language, whether in written or oral form, are actually harder to commodify away.

What I think is going to be really threatened in all of this is people who are mediocre at either thing, because mediocrity, a fast but mediocre outcome, is the thing these technologies are currently really great at. Achieving something truly special that connects in complicated human ways with a variety of stakeholders, that's a frontier they haven't yet reached, is the way I would put it.

Art, reproduction, and the value of authenticity

SHEFFIELD: Yeah. Another comparison that might be interesting in this context is art. [00:35:00] Before image generators came along, art had already been commodified. The reproduction of paintings, that was done by a computer decades ago.

If you wanted a Van Gogh in your house, or a Da Vinci or whatever, you could do it by just having a printout of that picture. And at the same time, the formulaic artistry, paintings, sculptures, or whatever, that wasn't original, if you wanted those things, you could easily get them.

And it did, unfortunately, make it harder for people to make a living as artists, because you could now have high-quality or mediocre versions, whatever you wanted, of those works in your house. So that did decrease the number of people who could make a living off of that.

But image generation, at this point, I don't see [00:36:00] as having a major impact on visual art, because we were already there. And the same thing with web design: I used to work as a web designer, and that industry basically got almost entirely destroyed before the large language models, because of websites like Squarespace and services like that. People realized, "Oh, well, I don't have to have a great website.

I can have a mediocre website that costs 50 bucks. I'm going to do that. Or I can even have one that's even shittier and have it for free." That was very dismaying to me, needless to say, but it was not something that AI did. And so a lot of industries that people might be saying, "Oh, well, the chatbots are going to ruin the economy for these..."

Well, it was already ruined, sorry to say.

GILMAN: Yeah. One essay that I read many, many years ago in college and have come back to again and again is this famous essay, maybe the [00:37:00] most famous essay in art criticism of the 20th century, entitled "The Work of Art in the Age of Mechanical Reproduction," by Walter Benjamin, a German critical theorist.

He published this essay in the mid-1930s. And it's not a coincidence that when he published it, he'd been busy putting together a big project collecting unbelievable amounts of information about Paris in the middle decades of the 19th century, about 75 or 80 years before he was working on the project, including a huge number of photographs of old Paris.

So he reflected a lot on photography in particular and how it changed art. And he notes in the essay that there used to be, as you were alluding to, Matt, a whole industry of people who worked as portraitists for middle-class families who wanted a family portrait.

The family would sit, and an oil painter would create a painting of the family that they could then hang on their wall or pass down from one generation to the next. When photography comes in, daguerreotypes [00:38:00] initially and then photography proper, it does two things.

One is it massively expands the market, the number of people who can do this. Now anybody can sit for a family portrait in a few seconds, and it becomes much, much cheaper to produce. So a lot of these painters go out of business, or they have to become photographers.

It also changes the nature of painting, because painting is no longer exclusively or primarily about trying to create verisimilitude to real life, which is what portraitists, particularly not very good portraitists, would typically try to do. Now you begin to realize that painting is applying oil to a two-dimensional canvas, and the explosion of creativity within painting in the second half of the 19th century and into the 20th is really without precedent in the history of European art.

So there is a way in which the commodification of one kind of thing [00:39:00] sets the stage for another flowering of creativity. And it's worth noting the other big concept Walter Benjamin has in this essay. He asks: what, then, in the age of mechanical reproduction, is the difference between a picture you have of the "Mona Lisa" and the actual "Mona Lisa"?

And he has this term that he uses, aura, an almost metaphysical or mystical quality that he says people ascribe to the original, right? When you stand in the Louvre in front of the original "Mona Lisa," with a huge crowd of other people all snapping photos of it, you feel like you're in the presence of Leonardo in some sense, as he created that painting, right?

Whereas when you see the reproduction, even if it's the same size as the actual original, it doesn't have that same quality. And it's not just because it doesn't have the same textural quality. Even if you produce an almost identical forgery, once you know it's a forgery, a very [00:40:00] close facsimile that Matt Sheffield or Nils Gilman has painted as opposed to Leonardo, it just doesn't have the same quality for people, right?

And I do think that as LLMs and other kinds of AIs are able to produce vast amounts of slop, as people like to say, the value people ascribe to an authentic, in-person meeting, or to seeing a play performed by human beings live on stage, I think those things will become increasingly valuable.

And I think that's borne out by the fact that the rate of inflation for live events has been far outstripping the baseline rate of inflation. How much does it cost to go to a ball game now compared to when we were kids? Or how much does it cost to see Taylor Swift play a concert compared to what it would have cost to see Madonna in the 1990s, right?

There's just been this escalating value of things that allow you to feel an authentic bond with the particular [00:41:00] art and artist of the moment. And I think those things are going to continue to be accelerated by the increasing acceleration of mechanical reproduction in the sense that Walter Benjamin talked about.

The jobs of the future will be at the intersection of somatic and abstract reasoning

SHEFFIELD: I think that's right. And ultimately, what we're talking about here, just to go back to the cognitive modes, is that we have abstract reasoning and somatic reasoning. Essentially, the value in this new idea economy, or cognition economy, is at the intersection of the somatic and the abstract.

That's where the value is created, and that's where it was created in the examples we were just talking about. Because with painting, the act of verisimilitude was already done. The purely somatic contact with reality, that was done.

But the internal contact with reality, [00:42:00] that is not something a photograph can do, or it's severely limited in what it can do. And so that's where the value was being created. In the same way, while the industry of web design has shrunk massively, the types of designs we're seeing now are just incredible, what people are able to do.

So, I don't want to get too technical, but Cascading Style Sheets is a technology that was invented in the early days of the web. Well, now it's powerful enough that you can make straight-up games in CSS that require no programming language, just pure CSS. And so this idea that everything's going to come to an end and jobs are going to be wholesale eliminated--

Yes, many will, but many will not. And it's worth keeping that in mind.

GILMAN: I think the idea that there's going to be no work left is absurd. [00:43:00] Look out the window: there's a lot of work to be done out there, as far as I can tell. There are potholes to be filled, houses to be built, meals to be cooked and served and enjoyed. There are a lot of things that need to be done.

Old people need to be cared for; young people need to be born and educated. Some of that can be facilitated by technology, but there's not a shortage of work. We have lots of things that need to be done. What I think is under threat is professions that have relied on various barriers to entry, and they may actually double down on that, right?

So look, I've got a couple of kids in college right now, and I've been talking to them a lot about what you should be studying in this context. What are the kinds of skills you want to be acquiring? I've always been of the opinion that the actual content of what one learns in college probably doesn't matter that much for one's career success, just to take that as the dependent variable we're thinking about.

Mainly because [00:44:00] even if you get some very technically specific degree, say you major in CS and learn some particular programming language, within 5 or 10 years of graduating, the particular things you learned are not going to be that relevant from a content perspective.

The question of whether you're a well-educated person, the kind of person who is going to thrive in the new, post-LLM economy, is whether you've been educated in a way such that your brain is a kind of machine tool that can reinvent itself as different kinds of tools, right?

So you can do different things over time. As the job market and the economy evolve, as different sectors rise and fall, you can surf from one area to another and learn how to retrain yourself to do new things. And I think all of us, in the face of LLMs and the way they are going to radically transform all jobs, or at least a great many jobs, are going to need to retool ourselves.

And so the real question is whether you've learned, [00:45:00] one way or another, what I would call metacognitive skills. I don't think this is something you can only learn in college; it's something you really should be learning from day one and should continue learning your entire life, but college is a particularly important moment for it. These are skills like learning to think about one's own thinking, learning how to identify what mode of reasoning I'm engaged in to solve a particular problem, and asking whether that's the right mode of reasoning.

What are alternative modes of reasoning I might apply to a particular challenge I'm trying to solve, in the workplace or in my personal life, for that matter? So: being aware of what one is doing, and knowing that any particular way of thinking about a problem is going to be partial, right?

It's going to create blind spots, and you want a diversity of perspectives on whatever problem you're working on. Therefore, you want a diversity of perspectives on the team of people working on these things. These are all truisms; nothing I'm saying is anything more than a cliché.

But [00:46:00] I do think it implies something not so obvious about the way you should seek out an education that will augment that capacity over time. And as one continues to learn, as one goes through one's career and one's life, one should continuously be thinking about learning new ways of thinking about one's own thinking.

Improving one's metacognition continuously over time, I think, is going to be the most important thing. And one can learn those kinds of skills studying anything one wants. I don't think it matters whether one studies physics or comparative literature or modern dance. Any one of those, if you get good at it, can help you develop these metacognitive skills, which I think are the most important ones to have if you want to sustain a career over the course of decades.

Liberal education and metacognitive skills

SHEFFIELD: I think that's right. And that is really where the value of the classical liberal education, I think, is coming back. Because in the information-age economy, as we've been saying, a lot of the [00:47:00] jobs were simply people who had arcane knowledge, applying it to the real world in ways that might not have been anything other than mediocre.

And people instinctively have that idea, that concept of mediocrity as inherent to so much of white-collar work. Think of the stereotype of the paper pusher, or the stamping bureaucrat, or the accountant who does nothing but count beans. These are all concepts people intuitively know are true, because the metaphor keeps recurring across so many professions.

And so, ultimately, that's why I like to say that the manufacturing age and the information age were the [00:48:00] domains of the economist. But now, in the AI age, it is the domain of the philosopher. Not just in terms of, well, are these things conscious or not? No, they're not.

What matters is how you can relate things to other things, and how you can relate yourself to all of these other ideas, and to other people's ideas as well, their thoughts and feelings.

GILMAN: I think that's exactly right. You talked about a liberal education, or a liberal arts education. Let me dive in and double-click on that for a second, because I think it's worth it. First of all, the phrase "liberal arts" doesn't mean liberal in the political sense, or at least it's only vaguely related to the idea of liberalism, particularly as it's understood in the United States.

It's not a kind of left orientation. It means liberal in the Latin sense of libertas: becoming free. And the idea of a liberal arts education is that you get a broad-based education that will free your mind [00:49:00] from the shackles of prejudice and various other kinds of poor metacognitive capacity.

And sometimes, when people hear the words liberal arts or liberal education, they think, and sometimes people do use it this way, that we're talking about the humanities as opposed to STEM: science, technology, engineering, and math.

I actually think that's exactly the wrong way to understand what a liberal arts education, properly understood, is. A good liberal arts education will give you a basic understanding of a variety of different things, right? You should know something about science.

You should know something about the arts. You should know something about literature. You should know something about engineering. Et cetera. It's really a broad-based ability. And if you get a good education with that kind of broad-based skill set, it gives you the capacity you [00:50:00] were just referring to, Matt: it will help you relate to different kinds of people and different kinds of ideas.

It'll help you say, "Oh, here's a framework from one domain that perhaps is useful in another domain." It'll help you see similarities and differences in thinking across different fields, different disciplines, different areas of expertise. And to me, that ability to helicopter up and down, from very specific, in-the-weeds knowledge to the 30,000-foot view, and to see connections between things across different levels, that is arguably the definition of a certain kind of human intelligence.

I'm not claiming LLMs will never be able to do that themselves, but if you can do it, then you can reinvent yourself over time and future-proof your career for an age of LLMs. And so I think it's precisely as you say: those abilities to see connections across different domains and to ask, what's [00:51:00] important about all of this?

Those are fundamentally philosophical questions, about meaning, about purpose, and they will only become more important and more central to the kinds of things that are put to us, both in a professional context and in our personal lives, I believe.

SHEFFIELD: And that's where the role of primary education, I think, is really going to be important. Because so much of primary education, but I guess post-secondary as well, is too much about memorization and not enough about how to think and how to understand truth. What does truth look like?

Because that, ultimately, I think, was the biggest mistake before the internet age: schools didn't teach epistemology sufficiently. And so now you have tens of millions of people, maybe hundreds of millions, who don't know [00:52:00] what makes something a good idea.

And that knowledge is going to become even more important in the age we're entering now. Because if you don't know what makes something sound reasoning, then you will fall for the hallucination. Then you will outsource everything to the LLM and not be able to think independently on your own.

And that's obviously not what you should be doing.

GILMAN: Yeah, for sure not. I think teaching epistemic humility, knowing the limits of one's own knowledge, understanding what one doesn't know, being unashamed to admit that one doesn't know something and needs to understand better what's going on before making a decision or rendering a judgment--

I think those are all really important qualities of a good education. And again, I totally agree with you: this is not something that should be deferred to college. It should start at a very young age, teaching kids the ability to make those kinds of judgments. And we could have a long conversation [00:53:00] about the history of primary and secondary education.

Obviously, indoctrination has traditionally been a big part of it: enforcing a certain kind of discipline onto young people so that they can be conformists in society, docile and effective workers. That's part of the socialization aspect of education that has long existed.

With that said, if we leave that part of the story aside and just think about the intellectual side of things, I also strongly agree with you, Matt, that memorization in itself is not helpful. However, let me give an example from my own field. I studied history: I got an undergraduate degree in history and then a graduate degree in history.

I was always interested in history as a kid, in junior high school and high school and so on. And the history exams I was given then were often very much about whether you had memorized the facts of what exactly happened during the Thirty Years' War in Central Europe, or what have you.

You were expected to answer what are known as identification [00:54:00] questions: have you memorized all the names and dates relevant to a particular thing? That, to me, is not really what history is about, certainly not when one is a professional historian.

Now, you have to have fidelity to those facts.

Porting knowledge across time and disciplines will matter in the future

GILMAN: But ultimately, what makes a good historian is the interpretation they give of the facts from the past, which facts they choose to highlight, and whether they tell a story in the present that is compelling about some episode or era from the past, right?

That's what makes a historian successful in terms of gaining a readership, whether an academic readership or a popular one: do you tell stories about the past that help make sense of the present and that engage people in the present? Honestly, it's narrative-making to a very large extent.

Now, you have to know a lot of facts, and I think the reason it often takes a while for a person to become a really excellent historian is that saying something original about the past is hard; people have been writing about the past for a very long time. If you want to say [00:55:00] something original about the Thirty Years' War, people have been writing about that for 400 years at this point, right?

So coming up with something original requires really immersing yourself in a lot of facts, so that you begin to have a chance to see a pattern that none of the other historians over the last 400 years have seen. And part of that is about understanding what the Thirty Years' War was about in its moment.

Well, nowadays we might tell a story about the rise of new technology as a driver of that conflict of religions in Central Europe, right? Because we're in a moment where technological disruption seems very relevant. In other moments, people might emphasize a different set of facts about the Thirty Years' War:

The rise of the Swedish state, the aggression of the French monarchy, the fragmented nature of the Holy Roman Empire, and so on and so forth as driving causes. During the middle of the 20th century, when Europe was engaged in all sorts of fragmentation, those were the main stories people told about the Thirty Years' War.

And those stories weren't wrong, right? But the point is, they were telling a story about the [00:56:00] Thirty Years' War that was trying to make sense of what was going on in the 1920s, not in the 2020s. Why do we care about this episode from the past? Not just because we need to memorize facts about it, but because by understanding what took place, we believe we can understand something about ourselves differently.

Now, this is an example of what historians do. I think the same thing applies to economists and computer scientists, maybe not to theoretical physicists or number theorists, but even there I would guess that, given how the kinds of questions people ask change over time, it may well apply.

SHEFFIELD: I’ll tell you how.

GILMAN: These are not fields I know well. Okay, tell me.

SHEFFIELD: Because basically, mathematics as a field is constantly generating fictional models where the mathematician has no thought whatsoever about how they apply to reality. And that's basically how you get noticed and regarded as a great mathematician: being able to generate a new [00:57:00] field.

That's what makes you great. But the interesting thing is that physics is constantly looking into mathematics to say, "Well, here's this concept I want to model, but I have no idea how to do it, so let me just go shopping in the annals of mathematics." And in fact, that is what happens.

That's where quantum physics came from. And Riemannian geometry was not something that had any application to reality when Riemann invented it, but Einstein plucked it out of obscurity and did exactly what you said: he took something that was not relevant to people in the past and made it relevant to people in the present.

GILMAN: Well, that's a great example. I love that. And it raises another issue, which is that something I think is going to continue to be valued, and maybe become more valuable over time, is the ability to port ideas from one domain to another. A lot of what people [00:58:00] describe as intellectual creativity is exactly that. To give a classic example: you were referencing Danny Kahneman at the beginning of this podcast.

Danny Kahneman eventually won a Nobel Prize in economics for basically inventing the new field of behavioral economics. But Kahneman wasn't trained as an economist; he was trained as a psychologist. And what he did, working initially with Amos Tversky in the 1970s, was begin to systematically catalog the ways in which people are non-rational in their decision-making, the various kinds of biases.

And this led to the development of what he called prospect theory, right? People have identifiable patterns of miscognition, which throws into question the entire rational-actor hypothesis that lay at the core of a great deal of microeconomic theory at the time. So an idea that initially comes out of [00:59:00] close observation in psychology labs and experiments eventually migrates over to economics, as it were, in the heads of Tversky and Kahneman, and then revolutionizes the field of economics as a result.
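Prospect theory's departure from the rational-actor model can be sketched numerically. The value function below uses the parameter estimates Tversky and Kahneman published in 1992 (exponents of 0.88 for gains and losses, loss-aversion coefficient 2.25); the code itself is my illustration, not anything from the conversation.

```python
# A sketch of prospect theory's value function, with the parameter
# estimates from Tversky and Kahneman (1992): gains are valued as
# x^0.88, losses as -2.25 * (-x)^0.88. Losses loom larger than
# equivalent gains, one of the "identifiable patterns of miscognition"
# a pure rational-actor model misses.

ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss-aversion coefficient

def value(x: float) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0) from a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# A $100 loss hurts more than a $100 gain feels good:
print(value(100.0))   # ~57.5
print(value(-100.0))  # ~-129.5
```

The asymmetry between the two printed numbers is the loss-aversion result; a rational-actor model would treat a $100 gain and a $100 loss as equal and opposite.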

There are so many examples of this, of ideas taken from one domain and moved over to another. Complicated ideas in symbolic theory that end up revolutionizing linguistics, for example. There's one example after another of people who take ideas from one domain and apply them to another.

I've been giving academic examples here, but the same thing applies in lots of other fields. Think about the way food is remixed over time, where some chef will take an idea from one cuisine and port it over to reinvent something going on in another cuisine.

Or music is another great example: musical traditions undergo various transformations as they pass through various dispensations. So you have the music [01:00:00] of the Anatolian Greek diaspora, displaced in the 1920s, that becomes a kind of Greek blues and eventually comes to America and becomes the basis for surf rock, right?

These kinds of evolutions over time, I think, are the basis of creativity: the ability to port things from one domain to another in order to create new insights. And again, those things might be facilitated by LLMs over time, where you say, "Hey, where's an idea from this other field that I might apply to help think about this problem?" right?

But you need to think to ask that question and give that prompt in order for the LLMs to do that, at least at this stage. And I keep saying "at least at this stage" because we don't know exactly how these technologies are going to develop over time. Will they be able to auto-suggest those kinds of creative leaps?

I think there's always going to be another level to it, and another level, and another level. And so I think that's where a lot of the value-add is going to happen over time.

SHEFFIELD: Exactly. All right, well, this has been a great discussion, Nils. [01:01:00] And hopefully it will be useful to the audience. But if people want to keep up with you outside of this conversation, what is your advice for that?

GILMAN: Well, I've got a Substack that I contribute to intermittently. I've also been writing a lot. And I've got a book out, "Children of a Modest Star," which came out two years ago, about planetary governance, if you're interested in the intersections between political theory and global ecological concerns.

That's what that book was written to do. I hesitate to encourage people to follow me on social media, but I'm on there too, if people want to find me.

SHEFFIELD: Okay. Although not on X, we should point out.

GILMAN: Yeah, I've decided that platform's not for me.

SHEFFIELD: Yeah. Okay, great. All right, well, good to have you back again.

GILMAN: Thank you.
