Everywhere in the news, it seems, people are talking about artificial intelligence. Executives at the various companies keep saying that they're just a few months away from a program that can think as well as or better than a human. On the opposite side, a legion of critics says that AI is a giant scam with no value at all.
But underneath this debate is an even larger question. What are minds? And do we even know what it means to think like a human?
No one has final answers to these questions, but some are better than others. Psychology and computer science have plenty to say about the capacity to do things, but if we want to understand minds better, it makes sense also to look at biology, because biology has been studying living systems, behavior, and cognition for a lot longer than computers have been around.
I've been working behind the scenes on a lot of this material recently, and as I continue to roll out some of my ideas publicly, I wanted to bring some people onto the show to discuss their ideas as well. These are really important questions that are worth taking seriously regardless of your position on them, because they are ideas that don't just stay in the lab. They shape how we build our technologies, how we write our policies, and how we understand ourselves.
On today's program, I'm joined by Johannes Jaeger. He's a biologist and philosopher who has published extensively in cognitive science, and he advocates what's sometimes called an enactivist approach to mind: the view that minds are something our bodies are doing, not a magical spirit or a piece of software that can be popped out and into some other device.
The video of our conversation is available, the transcript is below. Because of its length, some podcast apps and email programs may truncate it. Access the episode page to get the full text. You can subscribe to Theory of Change and other Flux podcasts on Apple Podcasts, Spotify, Amazon Podcasts, YouTube, Patreon, Substack, and elsewhere.
Related Content
Experience creates minds, not the reverse
What’s going on with Pete Hegseth’s jihad against Anthropic?
Chatbots are more likely to give bad answers because they’re trained to provide an answer, no matter how incorrect
The reality of other people’s minds is the root of so many political conflicts
AI content is not going to go away, we should have some realistic norms for how to use it
Mediocrity and ‘satisficing’ are what complex systems do
The strong link between wanting to defy social norms and belief in disinformation
Audio Chapters
00:00 — Introduction
06:15 — Cognition is mostly an unknown unknown
16:48 — The return of behaviorism
30:28 — Reality is always mediated by experience which makes it not externally computable
39:28 — The accidental dualism of mind-as-software
44:19 — Cargo cult philosophy and Jeffrey Epstein
52:34 — Meta-modernism and technology for life
01:00:44 — The real singularity is whether humanity can learn to live together
Audio Transcript
The following is a machine-generated transcript of the audio that has not been proofed. It is provided for convenience purposes only.
MATTHEW SHEFFIELD: And joining me now is Johannes Jaeger. Hey, Yogi, welcome to the show.
JOHANNES JAEGER: Hi Matt. Thanks for having me on.
SHEFFIELD: Yeah, this is going to be a really good discussion. I've written and published things on these topics, but I haven't done a lot of podcasting on them. So you're kind of the first one to get my podcast audience into these cognitive science topics that I've been writing about.
So let's maybe start with this: you were trained as a biologist, and that's your academic certification, but that's not where your heart lies.
JAEGER: I’ve probably always been more of a philosopher, but I did start my career as an experimental lab biologist studying developmental and evolutionary biology, and then moved on to become a mathematical modeler. And I was always interested in the kind of methods that I was using and to sort of reflect on them.
So I guess I was always a bit more of a philosopher, a conceptual thinker. And what I'm doing right now is a bit weird, because I think I'm still doing biology, but I'm doing it using philosophical methods. So I'm interested in concepts, in conceptual problems in biology, and in thinking about how we do biology and how we think about life at the moment.
SHEFFIELD: Yeah, and that's really important at this point in human history, I think, [00:04:00] because philosophy as a discipline is kind of the origin of all-- I mean, literally, this is true: philosophy is the origin point of all the sciences.
They came out of it, going back all the way to Plato's Academy and all the other various places that people started up afterwards.
And so now we have this new discipline, or meta-discipline if you will, called cognitive science. And we don't fully know how minds work, or how brains work, or even how we can know anything. A lot of this is so unclear experimentally, because it's hard to quantify.
Because first you have to know what you're quantifying before you can quantify something. That's really what it comes down to. And so biology, and even computer science and psychology, are all having to become a lot more philosophical, I think, because as we're starting to get more serious about trying to build things that can be more autonomous,
we have to figure out: well, what makes something autonomous? That's really what it comes down to.
JAEGER: I totally agree. I mean, the problem is that we don't even know what life is, and we don't know what minds are. And in some ways, it's a bit provocative, but I joke sometimes that we know less about that right now than we did a hundred years ago, because we have these ideas about minds and bodies being machines, and computers in particular, that are extremely misleading.
I guess we're going to talk about this in particular. So we have ideas that can actually put us further from the truth, even though we have amazingly improved technologies and techniques to probe into what life is, in minute detail. But we've kind of lost the forest for the trees there a bit.
And I think [00:06:00] if we want to make sense of all the data we're producing, and also of course of AI, which we're going to talk about, and the differences between living systems and machines, then we need to zoom out and look at the big picture again.
Cognition is mostly an unknown unknown
SHEFFIELD: Yeah, absolutely. And we'll come back to this repeatedly as a theme, but overall there's this idea that-- and I hate to quote him here, but Donald Rumsfeld, the former US Defense Secretary, had one good idea, which is that when you're going into a situation, there are the known unknowns and then there are the unknown unknowns.
And that's the thing about science: the paradox of science is that it actually increases ignorance at the same time that it increases knowledge. I mean, that's really-- and this is also, I think, why we see a lot of proliferation of conspiracy theories. Like, there were no conspiracy theories of aliens abducting people until people theorized: well, what if there are planets out there?
And what if there are beings that live on those planets that could come here? So there were no alien abduction ideas before the idea of aliens existed. But even in a more scientific sense, you know, people try to figure out: well, how does this chemical induce this type of behavior, and what would happen if you did this?
And, you know, the more you know, the more you know that you don't know.
JAEGER: I mean, Rumsfeld-- I use those quotes in my philosophy course as well, funnily enough, because they're really good for showing that what's really important at this frontier of what we know is the question of how you set up your experiment. It is extremely important to realize that this is not just some sort of automatic process; it's something that requires creativity and judgment, which we're also going to come back to later on.
So this is the part of science where you [00:08:00] need to use your own intuition, your schooled intuitions, and there's no way around that. So it's not right to see everything we do in science, and the subjects that we study, as pure algorithms or rule-based systems. This is just not how nature works, because it's not how our experience works.
And this is where I think the work that you've shared with me in cognitive science and my work on something called relevance realization really overlap strongly: the first step that a living being has to take to get to know its world is to identify what in that world is important, what is relevant to it.
And that is not a computational problem. This is something we can go into in detail. But this is huge, because it means that the intelligence of a living being, no matter how simple, is fundamentally different from what we can achieve in machine intelligence at the moment, no matter how sophisticated, or even impressively similar to what we can do with language or images, the output of those machines may be.
So there are underlying differences that really count, because they are also connected, in the end, to taking responsibility for our actions. And this is another thing that machines obviously can't do. So we need to think much harder about the application of those technologies and about how we are going to attribute responsibility for things that happen because of them.
SHEFFIELD: Yeah, absolutely. And the context in which we're having this discussion is that we have seen the proliferation of a bunch of different large language models and other artificial intelligence systems, as they're called. And some people don't like that term.
JAEGER: I have two suggestions very quickly there. First of all, if it's properly used, it should be not AI but IA: intelligence augmentation. A technology that augments our own intelligence. And second, I call it algorithmic mimicry. This is not something that's going to catch on, [00:10:00] but it's the algorithm mimicking, imitating, what human beings can do.
But it's a simulacrum; it's not the real thing. And we can go into what that means as well. But it's just superficial. And then some of the AI bros have turned this around and said, oh, our brain is not that sophisticated either. But if you actually understand the nature of a living being, that is very likely not true.
SHEFFIELD: Yeah. That's right. And just to give an overview for people who want to get a bit up to speed, or who never read the articles: essentially, a large language model is a computer program that is trained on a whole bunch of data that is put into it as files, and it classifies everything in the relationships between the words.
And it says: these words belong to this broader topic. These are often called features, or vector-space relationships. Then, essentially, when you type in a question, what it does is break down your query into what are called tokens, which are like sub-words, and analyze the relationships in all kinds of different ways.
And then it says: okay, statistically speaking, this question is about these topics. And then: I'm going to respond using these statistically correlated words in these topic areas, constrained by these alignment rules of grammar or facticity, et cetera, et cetera.
But these rules are externally imposed. And I think that's an important thing for people to get. So they do have this concept of alignment, and it's good, and it's the only reason why you can make any sense of the stuff these systems say.
But these are externally imposed requirements, [00:12:00] set by humans in order to make the outputs make sense, because otherwise they would not make sense.
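The picture Sheffield sketches here, words as points in a vector space whose statistical closeness stands in for "relatedness," can be illustrated with a toy example. The words and the three-dimensional vectors below are invented for illustration only; real models learn embeddings with thousands of dimensions from enormous corpora, not hand-written numbers like these.

```python
import math

# Hypothetical, hand-written 3-dimensional "embeddings" for three words.
# Real LLMs learn such vectors from training data; these are toy values.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Geometric closeness of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In this toy space, "king" sits much closer to "queen" than to "apple".
# That geometric closeness is the only notion of "relatedness" the model
# has to work with; there is no understanding behind it.
sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
```

The point of the sketch is only that "meaning" inside such a model reduces to geometry over learned numbers, which is exactly the statistics-constrained-from-outside picture being described here.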
JAEGER: Yeah, and that's really important no matter how complicated they are, or even if those models are post-trained into the so-called reasoning models. That's another really misleading name. What the model does, in the end, is reproduce patterns that it has recognized in a dataset, or in a reasoning exercise after the main training step.
So basically there is no semantics, there is no understanding. It's just patterns; we can call that syntax. And then, of course, there is also no action from such a model. The software and the hardware remain strictly separated, just as in a traditional algorithm: the software runs on the hardware, but it doesn't change the hardware.
And if you compare just these aspects to a living system: all of the meaning, the semantics, comes from inside the organism, or better put, from the interaction the organism has with its environment. While in the algorithm, it's put in, first of all, through the way the training data set is set up. That's done by humans; it's curated, and there's a lot of human meaning that goes into the formatting of that training data set.
Second of all, through the way the target functions are set. And third, of course, through the prompt that the human gives the algorithm when interacting with it. So this is where the meaning of the answers that you get from an LLM comes from. Everything internal is pattern, very complex pattern reproduction.
And sometimes people use this term, stochastic parrots. I don't think it's a very good term. What I think is a better way to think about it is as a very complex tool that you can use to make sense for yourself. But you, as the human user, have to be there for sense to arise from the interaction you have with the machine.
The other way around, it's not the same. There's no person in there. There is no ChatGPT between prompts, right? It just exists as patterns of magnetic bits on a hard disk. It doesn't really have a process state. While, as you also point out in your own work, a human mind, or any living being, is a process that constantly updates its state in relation to the environment.
And that's where experience comes from. So basically, what that means is that none of these algorithms can experience anything. They are, in that sense, not true selves. They don't have subjective experience; it just doesn't make sense to ascribe that to them. And the next question is then: so basically this is a pattern producer, a very complex pattern producer, that's put in a very complex environment with people.
Meaning is put into it in the training data set, in the prompt, et cetera, et cetera. And then it works in an environment on the internet. It interacts with other algorithms; it interacts with people. So this is not traditional computation, but it is still the execution of rule-based instructions, one by one, in the end, even if that happens in a massively parallel way.
And there is hardware, there is a code base, and these rules are set from the outside. There's a training set. Everything is pre-given and supplied from the outside. While the organism, and you also have a beautiful account of this in your work, creates its own self through experience, through itself.
So you cannot make an organism. The organism has to make itself, and that is the very definition of a living being: it is a physical system that manufactures itself. That means it produces all the parts that it needs to function, and relates them and assembles them in a way that is functional, that is conducive to its existence, its further existence.
So as an organism, you are basically always working towards staying alive. [00:16:00] If you sleep, if you're in a coma, your cells still work to keep you alive. While it's obvious that not even the most complex algorithmic system we've created does that. You can just save it on a hard disk and then restart it.
It's just fundamentally not the same thing. So everything that's human-like about these algorithms doesn't come through some internal interactions; it comes through these constraints, these alignment constraints you were mentioning before, that we put in to begin with. But we put them in in such an indirect way.
There's such a big gap between the person who creates the training dataset and the person who uses the algorithm that we don't see these things, and it seems lifelike to us. We're fooling ourselves if we think that.
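Jaeger's point that there is "no ChatGPT between prompts" can be made concrete with a small sketch. The function names below are hypothetical stand-ins, not any vendor's actual API: chat-style model endpoints are typically stateless, so the feeling of an ongoing conversation is produced by the client resending the whole message history with every request.

```python
def fake_model(messages):
    """Hypothetical stand-in for an LLM endpoint: a pure function of
    its input that keeps no memory whatsoever between calls."""
    return f"(reply based on {len(messages)} messages)"

history = []  # all the "memory" lives here, on the user's side

def send(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)  # the full history is resent each time
    history.append({"role": "assistant", "content": reply})
    return reply

send("What is a mind?")
send("Say more about that.")
# On the second call, the model "remembered" the first exchange only
# because the client resent it; a fresh call starts from nothing.
```

Between the two calls, nothing persists on the model's side; delete `history` and the "conversation" is gone, which is exactly the no-process-state point being made.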
The return of behaviorism
SHEFFIELD: Yeah. Well, and that raises an idea that used to be very common in psychology: the school of behaviorism, of B.F. Skinner, which basically said, well, let's not bother trying to hypothesize about what's going on inside of minds.
Let's just look only at the outputs of human action. What are people doing? What are they saying? Because nothing else is measurable; nothing else is ultimately real, perhaps. People are just machines. That mentality was quite popular for a while in the mid-20th century, through Skinner and other people like him.
And eventually people realized that it couldn't explain enough, in part because you can have the same behavioral outputs with totally different intentions. A perfect example of that would be if you live in a totalitarian dictatorship where you are required to praise the leader.
And so lots of people had that reality, [00:18:00] so they would praise the leader and say that he was great. And it was always a he, notably. But they didn't mean it. Yet they had the same behavioral output.
Eventually most of psychology moved beyond behaviorism, but now we're seeing a return to it with this idea of computational functionalism, which holds that the only thing that really matters is the outputs of a system. The so-called Turing test is a really bad example of that, unfortunately.
JAEGER: No, it's true. But there are a few things happening here. First of all, whenever you speculate about what's behind the behavior of a machine nowadays, people say you're making a metaphysical argument, and metaphysics has been a sort of bad word for a hundred years already.
And that's something we don't want. But the funny thing is that the very assumption that the human body and mind are machines is itself metaphysical. It's completely unproven. It's just an assumption, which, if you look into the history, is actually quite funny and recent.
The whole idea that human beings and the world itself are machines is only about 400 years old. We can date it to about 1642, when Descartes published two essays that stated these two things exactly. He declared all living beings automata, and he declared the world a machine.
And the machine, of course, at the time, the high tech, was the clock. They had all these really fancy clocks in the cathedrals, so people could see them. That was like the computer technology of the time. And they said, okay, of course the universe is like a clockwork.
And the same thing is happening again right now. It's only about 30, maybe 40, years old by now, and not more, this idea that the world is a computer. Which is really funny, because the theory of computation [00:20:00] is about a human activity: making calculations with pen and paper according to fixed rules.
That is the definition of what computation is. And based on this, a guy called Alan Turing managed to devise a universal machine that could basically solve all the logical problems posed to it that were solvable. That's the universal Turing machine. So that's a model of a universal machine, a universal problem solver.
And notice: this is about problem solving. Okay? So then World War II came along, and after that, we somehow switched to the idea that our own thinking is like that, is computation. Because we built all these computers, they became an everyday technology. It was the best technology we had ever developed.
And they were built to emulate the human capacity for problem solving. But problem solving is a tiny part of what you do. I mean, we're not talking about motivations and emotions, which need to arise from inside your body; they can't be programmed into you. And the other thing is, we're not talking about what we discussed at the very beginning of our conversation.
That you have to first point out what is important to you. That is not a problem to be solved. That's something you need to do as a motivated being, a being that is motivated to survive. Then things become important and unimportant and relevant to you, and that is not a computational problem. The idea that a living being is capable of judgment and of reframing problems--
That's what we call creativity, and that is outside what we understand by computation. So we've come up with a model of something humans do, and we mistake this model, which is more a model of how we logically explain the world, for how the world actually works. Or you can think of this as the ultimate mistaking of the map for the territory.
Okay? Somebody, I think it was a computer scientist, once said: the problem with computer science is that its territory is a map. It studies [00:22:00] a theoretical subject. And people are only in the last few decades coming around to this idea that everything in the world is computation. And this is crazy, because your experience,
your subjective experience, your motivations, your drives, your ability to judge, your ability to be creative, are fundamentally not computational in nature.
SHEFFIELD: No, they're not. And that's the thing: saying that everything is computable, or should be, is focusing on just one aspect of human activity, one activity, which is serialized, formalized logic, and saying, well, that's all we do. But everybody knows that's not all you do as a person, or what anyone else does.
We are so much more than that. But, you know--
JAEGER: I wish everybody knew that’s the problem. Yeah.
SHEFFIELD: Yeah. Well, I mean, I think instinctively everybody thinks of themselves that way. And even the tech bros, I would say, if you took them out of the context of computing, they would admit that.
But this distinction, this idea of computation or computability, kind of bifurcated Western philosophy between what ended up being called, and these are bad terms, frankly, analytic philosophy and phenomenology. Computer science largely became reliant on analytic philosophy. And the phenomenological people, a lot of them became kind of anti-technology, almost Luddite, or even got into [00:24:00] mystical stuff, some of them in very bad ways, like Martin Heidegger, as an example.
So both sides were kind of missing what the other one got right. I would say they both had good points, but they also had bad points. And that's kind of where I think Western philosophy went wrong: it tried to split these two things off.
JAEGER: Here's the weird thing, right? Everything we know about the world comes out of the experience that some human being, or maybe one of our ancestors, had. And in the case of humans, because we have language and we're social beings, we can share those views of the world as well. So we have a collective sort of imagination about the world.
But everything we know comes out of this subjective experience, which we have a really hard time understanding with our abstracted theories, because of the act of abstraction. To make knowledge objective, to go from subjective to objective, we have to put it into language; we then have to put the theories into numbers, into testable statements. And those are huge steps of abstraction.
And then the next step is that we confuse those theories, these abstractions that describe the world, with the world itself, which is just that experience that we have. Right? And so I side here with the phenomenologists, who say experience is primary. And we also have to examine Eastern meditation practices that try to get through the conceptual layer that we have.
We are very strange creatures on this planet because we have this massive reliance on language, in both of these traditions of philosophy. Of course, philosophy itself depends on language. So Wittgenstein, the famous Viennese philosopher, once said: whereof one cannot speak, thereof one must be silent.
But that is a huge problem, because as you and I explore in our work, all we do at the abstract level is deeply grounded in a [00:26:00] lot of stuff that's going on underneath, beneath the conceptual level, the abstract level: direct experience. The idea that we cannot directly experience anything without language is absurd.
We do that all the time. But what we are aware of as self-reflective human beings is at the abstract level. So if you want to understand where this really basic level comes from, it's actually useful to go much lower, to simpler organisms. And there's a great book by Kevin Mitchell called Free Agents.
It argues exactly that you can't easily understand all these sorts of experience by starting from the human experience, because it's very complicated. So let's look at what a bacterium, the simplest living cell on Earth, experiences itself.
Behaviorism and computational functionalism
JAEGER: Funnily enough, it has the ability to judge in a very simple way.
It's not sitting around; there's no bacterial philosopher or anything like that. But it can go for the sugar and avoid the toxins. So it has, of course, evolved to do that, and it does it very mechanistically. But every once in a while, those preferences, those value systems, those interactions with the environment, change, because we evolved from something that probably looked very much like a simple bacterium.
So at some point in its career, it must have been able to do something unexpected. I mean unexpected in the sense of completely not formalizable in advance. This is the work of biologist Stuart Kauffman, one of my co-authors, and he calls this the adjacent possible. Evolution, and life in general, the behavior of organisms, is always going into new spaces that we haven't been able to imagine before; they reformulate problems.
It's a truly creative process that you cannot just put into a bunch of equations and play like you play an algorithm on a computer. And that's the whole point of evolution and life: it is to break the rules. Of course it still follows the rules most of the time, but it is able [00:28:00] to break them, and that is what makes living systems alive.
And they can only do that, and this is where it becomes a bit complicated, because they are self-manufacturing systems. They build themselves, and so they can, in a way, decide whether they build themselves in this way or in that way. Okay? Only if we have mistaken our abstractions, our theories about the world, for the real thing can we think this is not real.
Can we think this is not real? So there have been several places in history of science. Famously Lala was a, lala was a guy in the very early clockwork stage of our science that said, okay, if the world is like a clockwork, everything has to be predetermined. And he called this the he called up this demon that could look into the universe from the outside and sort of see the universe and then predict its whole future.
And this idea is coming back now with the idea that the whole universe is a computer. It's the same thing again: a demon who sits outside the universe can predict everything, and so can manipulate everything, and we can then engineer the whole future of the universe. But there are two problems.
One is that this demon is not part of the world itself. So it's basically God; it's not a scientific or a natural entity, right? And the other thing is that, of course, what the people who believe the world is a computer and the mind is a computer want to do is to control it from within. They think they can control their own minds, their own world.
Although we are only this tiny part of the universe, and we certainly don't understand it well enough to manipulate it in this way. And we see that there's evidence for this; this is not just speculation. Every time we interfere in a complex system, there are unintended consequences. And I mean every time. This is one of the most robust empirical findings that science has made over the last 400 years: you interfere, and something goes wrong. Okay? We know that from everyday life as well. Unexpected things happen all the time. And this is only [00:30:00] possible if you let go of this idea that the universe is somehow calculable, is a computation, is controllable, is predictable, which is, and I want to come back to that, a purely metaphysical assumption.
There is no evidence that the universe is like that. Not a single shred. But that's always glossed over, and this whole view is sold as the only reasonable view there is, right? So that's how that works.
Reality is always mediated by experience which makes it not externally computable
SHEFFIELD: Yeah, absolutely. And this is why, in my own work, I think it's important to structure a philosophy around access to the external world. So in my view, there is an externality, and it exists regardless of what we do or where we are or who we are, even whether we exist at all.
But we don't have direct access to it. All we can access is our local externality, and then, within that, only what is perceptible to us. Like, if you're a bee and you see a flower, you see lines that show you where you can get the nectar. But if you're a human and you look at that flower, there are no lines on it.
It's just a red rose. So that's outside of our perceptible externality. And nested even further inside is our percepted externality: that is, what we can sense that we actually register in our minds and say, okay, this is here, and this is like that.
And so, you know, that’s but a lot of this, this worldview that we’re talking about here, this computational functionalism, it doesn’t draw any of these distinctions. It, it thinks, no, there’s an objective reality. And we can, when we have scientific laws Yeah. That we can model it and we know what it is.
And, and, and yet this [00:32:00] is despite the entire history of science, show you that’s not true. That is not true. You know, and, and, and, and that it’s not just quantum physics, you know, talking about how ev everything is literally solid objects do not exist. So there’s that. But it is, it’s even beyond that, you know, like every, every single fundamental scientific field shows that there, there are, there are always new discoveries that completely upend everything.
And yet we still have people with this sensibility that no, there is an objective reality, and I can find it because I'm so ...
JAEGER: Yeah. And people are often afraid of a slippery slope that leads into that postmodern idea we have nowadays, especially also on the political right, that anything goes, that whoever shouts the loudest has the right view. And that is extremely dangerous.
But that's not what we're saying here. What we're saying is that our knowledge of the world is grounded in millions of years of interactions between us and our ancestors and an externality, what you called the perceptible externality. I call it an arena. It's also called the Umwelt, which is just German for environment.
It basically means the perceived environment, the things you can see and experience. And that is beyond your control. You can't just claim that it's like this or that. It is a certain way, and you interact with it: you go out and try things and find out, and that's still how science works.
And it's very robust, but it never, ever gives you an infallible view of the world, meaning a complete or perfect one. So this assumption that the whole universe could be a simulation, for example, that we just live in a simulation, leaves two questions hugely unanswered. First of all, who is the simulator? And that's just God again. I'm sorry, that's a supernatural being.
So it's a religious idea, not a scientific idea. And the other thing is, of course, how do you get experience in a [00:34:00] simulation? I want a scientific explanation for why I experience speaking to you right now. I am me, and this is where it starts. And from that, I make abstractions once again.
And this is called The Blind Spot; Adam Frank, Marcelo Gleiser, and Evan Thompson wrote a really good book about it. It's a strange loop, a really weird thing: we go from our subjective experience to these abstract theories, and then we suddenly mistake those theories for the real thing. Like physicists who believe that their equations, that the Schrödinger wave equation, are the only real thing there is in the world. That's just upside down. That's map, not territory. And the same goes for computation. Computation is a way to describe the world; it's not the way the world is. Take a famous example, I think it was the philosopher Hilary Putnam who came up with it first: the waterfall.
Does it compute something? You can make it compute something. You can make the water run in different ways and do computations for you. Or you can simulate it in a computer. But you won't get wet standing under that simulation. And that's something that is so often forgotten, which is amazing.
People say, it walks like a duck, it quacks like a duck. But you can't make canard à l'orange from it, because it's just a simulation. It isn't real. So the question I'm really interested in right now is: why do our theories fail to describe that difference? I think we have a really fundamental problem here, and again, this is philosophy.
We don't understand how an organism causes itself, because this is a mathematical problem, right? Nothing is supposed to be its own product. So you have this circularity. I think it was Aristotle, 2,500 years ago, who already outlawed this kind of circularity in logical arguments, rightly so, because it's literally a circular argument.
And it doesn't make any sense. But the problem is that nature doesn't stick to that logic of ours, okay? [00:36:00] It makes circular arguments all the time. And they don't go around in a circle; they construct themselves. They go up in a spiral, right? They spiral off in new directions.
And this is how you can imagine living beings: processes that work together to construct each other and maintain each other's existence, spiraling up in these different directions. That is what we call evolution in the end. And it is extremely unlike any machine we've ever built.
So the world is not like a machine. And the machines we've built are something really strange; they don't have anything to do with how the world out there really works. This is something we've forgotten, and it's why I joke that we understand the mind and the body less nowadays than we did in the past.
Because a hundred years ago, nobody would have come up with this idea that everything is a computation. Even the most rational people, Charles Babbage, or Condorcet before him, who thought about the nature of rationality and intelligence, said that intelligence and rationality are mainly about judgment, and only secondarily about rule-based computation: you have to follow rational arguments once you've decided what the problem is that you want to solve. That view was always there until about World War II and the development, a little before that, of computation theory, which led to us forgetting it and thinking that thinking is computation.
That's a bad sentence, but you know what I mean. First of all, your LLM does not think the way a human being thinks, not at all. There's a fundamental difference, and no matter how many data points you add to the training set, no matter how much more complex you make the model itself, it will not be able to think.
It will never, and you can quote me on that, be able to think as long as we stay in this paradigm of algorithms, of software running on hardware of the specific architecture [00:38:00] we're running on right now. And that's just something that is never heard in public conversation about these problems.
So all these claims that we have conscious AI, or will have it soon, are completely overhyped and mostly also completely delusional. A good example is Epstein's favorite, Joscha Bach, who's been claiming that emotions and consciousness are a secondary consequence of computation.
Again, if you look at his work, this is one of the most obvious map-and-territory confusions, and it turns his entire work upside down. You can create machines that act as if they have emotions. But the funny thing is, a programmer always has to program the personality in. Take OpenClaw and Moltbook, which have been in the news with these agents.
They have a soul file. I really like that: the thing is actually called a soul file, and it's where you have to write in the personality. So the agent has to bootstrap itself from that thing in which you, a human being, with human-defined words, define the soul of the algorithm. And then it goes out and acts in autonomous ways.
And we say, oh look, that's what you meant by the alignment constraints before. We basically made it act in an intelligent way. We programmed that into it, and now it acts in a seemingly intelligent way, and we say, oh, it can do that on its own. No, it can't. We designed it so it can do it, basically.
The accidental dualism of mind-as-software
SHEFFIELD: Yeah.
Yeah, exactly. And this idea of mind as software, I think, is such a pernicious and wrong idea. It also completely undermines what at least a lot of the people who came up with it were trying to do. So Daniel Dennett, the late philosopher and cognitive scientist, was the one who really put this into computational functionalism, this mind-as-software idea. He called the mind a [00:40:00] virtual machine made out of your neurons. But I would say he didn't understand how virtual machines work, because I deal with them; I'm a cybersecurity professional as well.
And that's not what a virtual machine is like. They are separate from the other software on the computer; the whole point is that they're not interfacing with the lower-level processes, whereas your mind, of course, is. So this doesn't work.
But the other problem is that when you have this metaphor of mind as software, instead of mind as something enacted, as the interaction of beliefs and of body, what you're inadvertently doing is creating metaphysical dualism.
And we see this. Probably the biggest example of how mind-as-software really creates dualism is Daniel Dennett's former collaborator, Michael Levin, the biologist, who has done a lot of incredible cell biology research. It really does show how a lot of cellular entities can, in fact, discriminate within their environment, understand in a rudimentary fashion how to navigate, structure themselves, and respond to things. He's done a lot of great research on that.
But he's taken this idea of mind as software, which he got from Dennett and wrote several pieces with Dennett about, and is now saying, well, actually, mind as software is a kind of Platonism and dualism. The entire point of computational functionalism was supposed to be: we're against metaphysical stuff.
We're against spiritualized stuff. And now here it is being used to support the idea [00:42:00] of supernatural substances and entities.
JAEGER: So this is completely crazy. And it's a wonderful example, because you start with logical-sounding premises and then come to completely bizarre conclusions. Before this Platonic domain of minds that impinges on our domain as patterns in your brain, Levin came up with the idea that sorting algorithms are thinking, have experience, and so on and so forth.
This is what we said right at the beginning, what we forget nowadays: we think science is just a bunch of people doing some experiment that came out of nowhere, that was rationally decided on, and they find out the objective truth. That's not how it works. The way we do science is: we have a model.
We have an imagination, an expectation of what's going to happen, so we ask specific questions. We use specific concepts to address those questions and do experiments. It's all an interdependence between thinking about the things we're experimenting on and doing the experiments.
So if your framework of concepts gives you absurd interpretations like that, shouldn't you go back and think, okay, maybe my basic assumptions are wrong? But these people were indoctrinated with the idea that it's science all the way down, that there is no metaphysics, no metaphysical assumption underneath this idea that everything is computational.
With this computationalist, or computational functionalist, idea, they no longer see that it was also just made up. It's a map, an abstract map that comes out of the philosophy underneath the science. Funnily enough, it was Dan Dennett himself who said there is no science without metaphysical assumptions. There is only science that is aware of those assumptions, or science that hasn't taken those assumptions on board. And Levin is a perfect example of someone who is absolutely clueless that his basic assumptions are completely inconsistent. So when he starts going off on these tangents, he gets absurd results.
And you think: why would a [00:44:00] rationalist empiricist like him not balk at this? But dualism is fashionable again, because suddenly we have a lot of very rich people who are very religious again. So it's a good thing to say these things. I call it burner science, but I think Feynman called it cargo cult science.
Cargo cult philosophy and Jeffrey Epstein
JAEGER: So what's being done here is cargo cult philosophy. It looks like philosophy, but it doesn't have any of the essential ingredients that good philosophy actually has. That sounds a little harsh, but the whole thing is really borderline fraudulent, because it's a way to tell a story to rich sponsors who then, funnily enough, sponsor that kind of research.
You can see that with Nick Bostrom and the simulation hypothesis. With the whole Epstein files, people say, oh, he was just interested in special scientists, special thinkers. Well, you can see one bias: he mostly paid men, very few women. And the other thing is that all of those men sponsored by Epstein were working in certain directions, right?
And it's what we've just been discussing: this idea that everything is computation, that you can control everything, that you can engineer everything, that you can become immortal through longevity and uploading your brain into the cloud. This is not just Epstein. It's now carried on by his also probably not-quite-clean successors, Peter Thiel and others, Elon Musk, who are sponsoring the same people who were sponsored by Epstein.
And it's always the same pattern. It's transhumanism, basically: building a better humanity, always in their own image, of course. Who wakes up in the morning and thinks everybody in the world should be like me? That would be absolutely horrific, right? But that's the kind of thing.
And then it's about genetic engineering of humans. It's about longevity research; they're obsessed with it at the moment, and it's also psychopathological to want to live forever. And it's about uploading, about [00:46:00] creating machines that are better than us, superintelligent, to use Nick Bostrom's term.
So it's fundamentally eugenicist. That's eugenics; he wants to create ...
SHEFFIELD: Well, in Epstein's case, he literally was a eugenicist. And he tried to inseminate women; it was horrible, if you read into the files. But these ideas of biohacking, and what's going on in these free cities, like Próspera, people are experimenting on themselves.
JAEGER: Which I don't care about, as long as they don't use other people. But this is all driven by this ideology that is supposedly rational, okay? That's why they think they have this superiority. It's completely cultish. It's a cult. It's a religion. And so I call this Trumpism in science.
First of all, you make up a view of the world that you just believe in, and you pretend it's true. And then you invest so much money that enough people believe it's true. And as we may both imagine, it's not going to go well, because of reality. There's this book by the philosopher David Chalmers called Reality Plus, where he argues that virtual reality is just as real as real reality, which is true in some ways: virtual reality can affect the physical world. But real reality has this one characteristic: it will kill you if you ignore it long enough. Whereas with virtual reality, hey, if you finally pull the plug, you will be much better off in your real life than in virtual reality. So that's the difference. And David Chalmers is another great example of, by now I have to say, a grifter pandering to these people with the money. And the people with the money, they want.
What's coming out of the Epstein scandal isn't restricted to the files. They want humanity 2.0, right? Because we're not good enough for some [00:48:00] reason. And for me, science has a completely opposite purpose: making our human lives better.
Okay? It’s very old
SHEFFIELD: And doing it together. Doing it together.
JAEGER: Collectively improving everyone's life, okay? That's always been a naive vision, I know, and reality falls short of it. But it is blatantly not the case here. So it's a really creepy thing. And I'm not saying these people are ill-intentioned.
Sometimes they're quite anxious people, because they think, again, that everything they do is scientifically justified all the way down, that there is no philosophy, just rational thinking. And that's crazy. It completely forgets those aspects of intelligence like judgment and creativity, but also emotional aspects, compassion, things like that, which are not computational.
And that should be driving you. It's not a compassionate project at all. You can see that also in the reactions of Joscha Bach, for example, to the horrific things he said in the files, where he just says, oh, poor me, my career is now threatened, and I'm the one who's going to develop conscious AI.
He believes that his network framework is the thing that's going to give us conscious AI, but it's a completely mistaken and inconsistent framework. So he's going to be disappointed, and they're anxious about this. That's why you see a really hard push for this at the moment.
I think it's all going to disappear in smoke, to be honest, in the next few years or decades, because people will realize it's hubris: assuming we can do things that we can't, at least not without creating really devastating unintended consequences. And isn't the situation we're in right now just a bunch of unintended consequences? From climate change to the mass extinctions we're creating to geopolitical breakdown to social media disrupting society, not because we intended it to. Everything we see at the moment is unintended [00:50:00] consequences. So why, by switching that to turbo, by going hypermodern, not just modern, should we be able to solve that problem?
By accelerating everything, we're just going to create more unintended consequences. And one of those is eventually going to off us completely, I'm sure.
SHEFFIELD: And that would be before any actually intelligent computer system even exists.
JAEGER: Maybe, who knows? But I think so. And why would you create an actually intelligent artificial agent? That's the other question I have here. Why don't we ask ourselves why we do something? An intelligent agent like that would have to be treated no longer like a machine, but like a being.
And if it's actually smarter than us, isn't that a really bad idea? I mean ...
SHEFFIELD: certainly could be, well, especially if you don’t. develop a, you know, a fully res, you know, a fully respecting theory of mind that would you know, w would be able to show, look, this is why humans still have value even if we’re not as smart as, as you, or whatever you is, or alien or whatever.
And I think that is worth doing, and we should do that philosophical work, and that's part ...
JAEGER: Yeah, no, I agree.
SHEFFIELD: But you
JAEGER: But if I may say, what's also important is to design the interface between us and the machine better, so that in the end the machine serves not as your usual hammer-type tool, but as a tool for you to think better and make better choices, and not the other way around.
So there's this idea of the reverse centaur: the computer starts using you instead of you using the computer. It's this figure with human legs and a horse's head, which is not ideal, of course. It's the metaphor for our technology taking care of us, not because it wants to take over the [00:52:00] world as a superintelligence.
There is no self, there is no will, there is no motivation. It's because we human beings give our agency away to a machine that has none, that has no creativity, no judgment, no ability to take responsibility.
SHEFFIELD: Well, and is owned by people who are that way also.
JAEGER: Yeah, no, totally. That's the other thing we haven't talked about: the combination of the current style of capitalism that we have, especially in the US, and this technology is probably extremely unfortunate. And in China as well.
Meta-modernism and technology for life
SHEFFIELD: Well, and that is why my personal view is that, look, these are useful technologies in many ways, but they're limited in what they can do. In some ways they are incredible: I have seen that they do work for computer code in some settings, and they can be useful for that.
And for other things, like analyzing X-rays. But ultimately they're not autonomous, and the way they're architected, they won't be. And that's why it's important for governments and for people who support democracy to do more than just say, well, this is just stupid stuff, it's nonsense, we should just get rid of it, we should ban it. You are not going to ban this stuff, that's number one. Even if you could get your own country to ban it, people will just go to another country. So it's not going to achieve anything, and you certainly won't get a global treaty to do it.
So let's just take that off the table right now and understand that we need to figure out how to deploy these things in a way that is humane. Because ultimately, as you were saying, the science should be for humanity, and not the other way around.
JAEGER: Yeah. So this is where the second part of this conversation has to come in, and [00:54:00] that is: we need these kinds of thoughts that we've been exchanging right now, these theories that we are both developing in amazingly parallel ways. I love your approach, by the way. It's a deep recognition of the difference between the living and the artificial at the moment.
What's important is that I'm not saying it's impossible to create a real agent. I think it's going to come out of a biology lab, and it's going to be a disaster, but it is possible to do this. I have two requests for humanity right now. One: if we develop a new technology, can we stop the accelerationist bullshit and sit down for a second and think, why are we doing this? What is the purpose? I really think we've lost that completely. We have to go somewhere, and we're in a race to the bottom because of that. And the second thing: if we come to understand the nature of the living versus the non-living much better, then we need an attitude change.
Again, that's philosophy. We really need a different attitude towards ourselves, towards technology, and towards the social systems we're embedded in. And we need to recognize that the ecological and social systems we rely on are part of the equation. We're not doing that right now; it's this entire crazy spiral. And again, it's a constructive process. It's funny, it's so human. Only a living system can create this kind of disastrous situation. The computer by itself, I repeat, the technology itself, is not bad. It would never have done this by itself. It's just the way it's employed.
So first of all, we have these constructive processes at the base: the cell. Then we have multiple cells. Then this happens in your brain, right? Your brain is constructing the personality that you are, the individual that you are, through your experiences, in the same way that a cell constructs itself.
And societies, too. They're not quite as integrated as organisms and minds, but they also have this sort of [00:56:00] constructive aspect to them. And we are the ones with the agency to change the direction of that construction. So I also don't want to hear any predictions that superintelligence is inevitable and we're going to be replaced.
I don't want to hear that resistance is futile. You mentioned the Luddites before. The Luddites are much maligned, but they were a social movement that actually wanted a different kind of model for the possession of the means of production. They were not just stupid people breaking machines instead of going after the bosses.
They couldn't go after the bosses; that's why they broke the machines. So we have to find better ways, not just break machines. I saw a talk at the Chaos Communication Congress that showed how to poison AI data sets. There's maybe a certain satisfaction to that in such a situation, but it's not very productive.
We need a better way, a constructive way. What's happening right now? We're deconstructing our societies, deconstructing our relationships with each other, through this technology. There's always talk about disruption. The right has become incredibly postmodern, and they will hate to hear that.
It's this idea that everybody's entitled to their opinion, that you can just say something and it'll become true. But also the fragmentation of everything. It's a deconstruction. Disruption is the word, right? That's what all the Silicon Valley people use: disrupt what you will.
But you have to construct something. Society has to get back to a coherence where we're constructing something together. This is what you learn from studying the mind and the organism: we have to find a kind of organization for society that's constructive again. What we have right now is pure cancer growth.
You can compare it one-to-one to cancer. It's out of control. Accelerationism is out of control. We need to slow down. How is that going to happen? I think it's going to take a major breakdown of systems for this to hit the awareness of enough people. [00:58:00] As you say, I am not against going ahead.
I want us to go ahead carefully. Because in a complex system where you create unintended consequences, you need to test every step and see what consequences come up. If you just rush through, those unintended consequences are going to fall on your head and kill you in the end. And that's what we're doing, and it's a fundamental misunderstanding, not just of our own nature and our relations with each other, but of the world itself.
We misunderstand the nature of the world we live in, and we have rarely been so far out of alignment between what we can actually do and what is actually working. And this may be surprising for people to hear, because they think it's an amazing time to live through, technological progress is so fast. But it's very limited in most areas that are actually useful to people. Are we making progress in how to live together, in how to provide basic needs for most people? Are we making progress in those kinds of things? No. We have no way to value that. We just value breakneck innovation, because we have this stupid system that is venture capitalism right now, capitalism on steroids, that needs to make a profit.
And by now it's the same thing in science. We idolize people. Let's go back to our friend Mike Levin. He's a person who, even before AI, was already publishing about 30 scientific papers a year. It's probably more like 50 right now. Why is that something we admire? There's no way that this stuff is well done, well curated, well controlled. And now, with OpenClaw and these autonomous AI agents going around, the production of unreliable vibe-coded stuff is going to mean that nothing can be trusted anymore. We're building software infrastructure that can't be trusted. We're building a scientific literature that can't be trusted anymore.
Almost all submissions to computer science conferences now contain made-up [01:00:00] references, and that's a clear sign they're written with AI. So science is getting into this mode where we write publications with AI and read them with AI. Why don't we just go have a beer instead? There is no point to this.
What is the point? Again, I want to ask: what is the point of what we're doing? I don't know anymore. I want to stop and think and breathe and ask, what are we doing? This is a moment where humanity should really, urgently do that. And of course, given the way we've set up our societies, this is the moment in our entire history where we're least likely to actually be able to do it, which leaves me a little clueless, to be honest.
But I guess the political guests on your podcast have better insights on that than I may have.
The real singularity is whether humanity can learn to live together
SHEFFIELD: Well, yeah, these are real questions. And that is why I sometimes think the political challenges and the societal epistemic challenges we have are the real singularity: how can humanity have a globally connected informational space and survive? We have to do that first, and anything else that comes after it, we'll be able to handle if we can get through this one.
And this is really what matters: understanding how we can take care of each other, and how we can help each other know what truth looks like, or at least what falsehood looks like. Because that's ultimately one of the other fundamental scientific principles that tends to get ignored.
And Karl Popper was very good in that regard. He basically said: look, you can't know anything with absolute certainty. So in that sense, the postmodernists were right that nothing is [01:02:00] absolutely true, because to claim otherwise is to say that your model of something is that thing.
So that's not right. But at the same time, we can know what falsehood is, because it contradicts many other observations. And getting that to be a scalable societal belief and practice, that's how we can set humanity on the right path.
It isn't in imagining some fanciful future of a computer that does all our work for us. Yeah, sure, look, that would be nice.
JAEGER: It's a hard problem. I mean, there's Brandolini's law, which says it's always ten times easier to produce bullshit than to refute it. But as you just said, we have to construct again after deconstruction. There's a philosophy called metamodernism that says we need to move on from deconstructing all our knowledge.
That was important in the 20th century; we were too sure of ourselves. And it's still important today, because what we described before, the accelerationism, all of that, could be called hypermodernity. It tries to solve the problems we've created with our technology with more technology.
And as I just said, I don't believe that's going to work. What we need is a rethink of how we can establish ourselves in reality again. And there's a project called metamodernism, which is both a political philosophy, not very well known yet, and a principle for doing a different kind of science that doesn't treat the world as if it were a machine.
I'm writing a book at the moment called Beyond the Age of Machines, about the kind of science we would need beyond those actually unreasonable assumptions. Now, you will always have some assumptions beneath your science, and you don't have to claim they're a hundred percent certain or solid, but you do have to say they're solid enough, they're trustworthy.
And they also give us a much more humane and useful and fun world to live [01:04:00] in. I'm sometimes attacked with: oh, you're building your philosophy just to build a world that you want to live in. And I say, yeah, why would I want to build a world that I don't want to live in? I think, paradoxically, that's what's happening a lot.
And it also has something to do with the kind of nerdiness of this Silicon Valley movement, that these people have a lot of grievances towards other people, and so I suspect they are sometimes even a bit resentful, and they do this deliberately. And again, from the Epstein files, and sometimes from other symptoms like Peter Thiel’s Antichrist lectures and things like that, you realize that they are actually afraid of the crisis that’s going to come.
And they’re planning for it. They know it. They don’t actually see the world as just progressing any further. And then you can see all of this.
SHEFFIELD: Yeah.
JAEGER: In a much more sinister light. And you can say that the control these people are working towards also includes other people, because they basically treat the rest of humanity as machines, which is not good philosophy, obviously, not just for logical reasons but for ethical reasons.
So this is really leading to some nasty outcomes that could be much worse than anything we have ever experienced before. And I’m not saying this is willful destruction. I think these people are, in a lot of cases, truly deluded about how the world works, and they overestimate their own ability to judge their own situation.
SHEFFIELD: Yeah, absolutely. And in Thiel’s case, this is explicitly religious delusion. I’m sure much of the audience has read some of the pieces on this, but we’ll put a link to at least one of them, and if you haven’t, this stuff is seriously religious. But you’re right, Yogi, that there has to be an alternative.
You can’t [01:06:00] just simply criticize. And I think that’s the loop the progressive left has been stuck in for so long: a lot of them only know what they’re against. So they’re against racism.
They’re against sexism, they’re against capitalism, or exploitative capitalism, however you want to say it. They’re against those things, and they’re right to be against extractive capitalism and, to quote Cory Doctorow again, enshittification. It’s great to be against those things.
But you do have to have an affirmative vision, because if you don’t, then essentially the incompetence, the corruption, and the malignancy of people like Donald Trump actually become an argument in their favor if you can’t present an alternative. Because they can turn around and say, oh, well, the reason why your life is terrible, why you can’t get a job, or why you’re addicted to drugs or whatever, is that these people did it to you.
I didn’t do it. They did it. And if there’s no affirmative vision, then you can’t really defend yourself, and more importantly, you cannot move forward in a positive way and have a future that is bright in your own mind. Because if you don’t have a guiding star, you won’t get anywhere.
JAEGER: I mean, I still do think it’s hard to change things in the state we are in right now, because everything in this society has become a sort of immature popularity contest. And I think this is a symptom of universal capitalistic, neoliberal principles being applied where they shouldn’t be, in science, in education, outside the areas where they actually work and are useful.
And that creates a very unhealthy dynamic of races to the bottom, where everybody just has to go somewhere, even [01:08:00] if they don’t know where they’re going. And also, I mean, these are hard problems. So if you want a really difficult problem, if you’re one of those nerds out there, then work on those societal problems.
They’re actually much harder than even flying to Mars, which is hard enough. And you don’t want to live there, believe me. So why don’t you concentrate your efforts on actually understanding social dynamics? These are hard problems. You can’t solve them with your usual engineering mindset.
But even go through that challenge of going beyond your engineering mindset, of trying to sometimes acknowledge your limitations and say, maybe we shouldn’t do this, but then still boldly go where no one has gone before, just a little more carefully, or a lot more carefully, than we’re going right now.
So that is a worthwhile sort of project, because it not only requires entirely new ways of thinking, it requires new ways of doing science, new methods and forms of collaboration. Which is something I’m also interested in working on, where we have to work together and also harvest the differences between us.
There is no single solution to the kind of problems we have right now. So we have to try out many different things with tolerance, but also good boundaries. Because what’s happened right now is that the boundaries have gone out the window. Anything goes. And we need to reestablish structure and organization for our science, for our freedoms in society.
And that’s the metamodern project. It says you can only be individually free if there is a supportive and robust societal and environmental structure around you that allows you to be free. And I think that’s the basic insight we have to relearn on the political stage, not just to reform our politics, but everything from education to how we deal with health to science itself.
And that’s also one of the main theses of my book: we can learn from the organism how it survives. The organism is basically a physical system [01:10:00] that shows us how you can extend your lifespan. So the most ironic thing about this whole craze about the survival of humanity, going to the stars and living forever, is that the people who drive it are the ones most likely to jeopardize the future of humanity right now. And I’m sure they don’t intend to do that, but they are severely misguided, they are severely shortsighted, and, I have to say, very often a lot less intelligent than they think they are and than they’re constantly told they are by the people around them, just because they’re rich. And that’s a huge problem. I mean, these people live in a bubble. I’m trying to remember, I think it was Nate Hagens who said that if you could only change the minds of the 1,500 richest individuals on earth and make them really engage the problems that we have with all their riches, then we would have solved most of the problems we have within a few years.
But the complex problem here, again, is the societal problem: how are we going to make this work in practice, with real people, in the way we’re dealing with each other right now? These are the real challenges that the most intelligent people on earth should be tackling.
But again, we’re measuring intelligence based on what, IQ tests, problem solving? So you have these people who score high on an IQ test, and they’re sometimes the most incredibly stupid people in the sense of not being able to read the room, not being able to anticipate unintended consequences, and not knowing what to do in any given situation.
So these are all forms of knowledge, of intelligence, that humans have that algorithms don’t have. So again, why are these people so obsessed with artificial intelligence? Because it’s most like what they know as intelligence, and they want to see that as a good thing for the future of humanity.
I think it’s very limited. We have to step out of that narrow-minded, narrow-focused thinking. Sometimes it’s called left [01:12:00] hemisphere thinking, although I don’t think the neuroscientific evidence that it’s really in the left hemisphere is very good. But we have to do more wide-boundary thinking again, sort of scan for consequences and tread carefully, instead of just rushing ahead with this ultra-rational mode that is in the end, as I’ve told you several times during this podcast, irrational at the bottom, in its metaphysical assumptions.
SHEFFIELD: Yeah, that is the unfortunate irony with that. All right, well, Yogi, I think we’re going to have to do a separate episode just on cognitive science and minds, because we got a lot more meta-political here, which is good, and I liked it.
But for people who might have been expecting us to go more into minds, we’ll do that in a separate episode.
JAEGER: Oh, I’d love to come back. This was great. Thanks. Yes.
SHEFFIELD: Awesome. All right, so what websites do you want people to check out if they want to keep up with you?
JAEGER: My personal website is just johannesjaeger.eu, all in one word, except for the .eu, of course. And the scientific results are on a website called expandingpossibilities.org. And I have an art-science project called The Zone. It’s almost impossible to Google, so it’s the-zone.at, because I live in Austria.
That’s that.
SHEFFIELD: Okay, sounds good. And you’ve got shirts too, I see you’ve got one on.
JAEGER: Yeah.
SHEFFIELD: All right. Thanks for joining me today.
JAEGER: All right. Thanks a lot, Matt. It was great talking to you.
SHEFFIELD: All right, so that is the program for today. I appreciate you joining us for the conversation, and you can always get more if you go to theoryofchange.show, where we have video, audio, and transcripts of all the episodes. And if you’re a paid subscribing member, you have unlimited access to all the archives.
You can get a paid subscription on Patreon or on Substack: you can go to patreon.com/discoverflux, or you can go to flux.community for that. And we do have free subscriptions as well, so if you can’t afford a paid one, do stay in touch anyway. And if you’re watching on YouTube, please do click the like and subscribe buttons so you can get notified whenever there’s a new episode.
Thanks a lot for your support. All right, I’ll see you next time.