“Does AI prove there is no intelligence?”
I think a lot of people believe that we are creating artificial intelligence in the image of what we think our intelligence is. But are we intelligent, and are we creating something that is intelligent too?
Or is AI actually proving that intelligence, the thing we consider to be intelligence, is basically just probability and prediction? A lot of people claim, quite correctly, that what AI does is predict the next word in a sentence based on the incredible amounts of text it has learned from. It's not actually displaying any intelligence; it's a probabilistic machine, an automaton.
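Just to make that concrete, here is a toy sketch in Python of what "predicting the next word" means. This is not how any real model is built; it's just the bare principle: count which words tend to follow which, then sample the most probable continuation.

```python
# A minimal sketch of "predicting the next word": a toy bigram model.
# Real language models are vastly larger, but the principle is the same:
# count which words follow which, then sample a probable continuation.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    if not candidates:
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short "sentence" purely by probability, no understanding involved.
word = "the"
sentence = [word]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```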
And so we interact with this AI and think it's intelligent, because it seems to be magically coming up with these ideas that we assume it's consciously thinking about, but all it is is an automaton generating words. And that means: if we find that intelligent, what does it say about our intelligence?
Are we also just automatons, generating next actions out of probabilities and concepts, so that how we're going to behave is actually predictable?
And the irony is that we are in fact predictable. You only have to look as far as predictive policing. In America they started doing it, and predictive policing is actually a concept that's been around for a long time. Basically you create a machine that analyzes a population, identifies individuals that the machine predicts will become criminal or engage in criminal activity, and you arrest them before they actually do anything. So you arrest them before they've done anything wrong, the point being that the machine predicts they will become criminal, and that's it. That's predictive policing.
And what that shows is that you can actually build a machine that predicts our behavior, for good or for bad. This is normal, and we don't even consider this machine to be particularly intelligent; all it is doing is predicting our behavior. Now, if you can predict our behavior, what does that mean about our intelligence? Are we just an automaton, like the machine is? Are we just a very complex automaton which we don't fully understand yet, but which we can simulate with a machine, and then this machine becomes so complex that we think it's intelligent?
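And just to illustrate how mundane that kind of "prediction" is, here is a toy sketch of the general principle: fit a model on past observations, get back a probability for a new case. The features, the data and the outcome here are entirely invented; this is the principle, not any real policing system.

```python
# A toy illustration of "behaviour prediction" in general: fit a model on past
# observations, then output a probability for a new case. The data here is
# entirely synthetic; this sketches the principle, not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features for 500 past individuals (hypothetical behavioural scores).
X = rng.normal(size=(500, 3))
# Invented outcomes: whether some behaviour was later observed (0 or 1).
y = (X @ np.array([1.5, -0.8, 0.3]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a new individual, the "prediction" is nothing more than a probability.
new_person = rng.normal(size=(1, 3))
print("predicted probability of the behaviour:", model.predict_proba(new_person)[0, 1])
```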
And this is another part of human nature: when something happens that we can't understand, we consider it to be either magical, religious or intelligent. So the magician who has these tricks, we think, wow, it's incredible, it's magic. The economist who makes a whole lot of money is magically making money. The tech bro who somehow creates a social media company is incredibly intelligent, and wow, isn't that amazing? Because we don't understand how all these things happen, they must just be magic.
And that's where my saying comes from: “Magic makes money.” Anything that surprises humans, you can make money with. So that's another thing we claim is intelligence. Anybody doing anything that we don't understand must be intelligent, right? Or we should mistrust them, depending on which political side they are on.
So for me, as I say, AI is actually proving that there is no intelligence. And I came up with a little description of what I mean, a little joke: “What goes faster than the speed of light and is undetectable? Intelligent life forms.” Because consider just a simple question: is light the fastest we can go in this universe? I find this ridiculous. Why would the universe create a speed that you absolutely cannot exceed, a speed faster than light?
And I was thinking about this: okay, it takes a photon eight minutes to get from the sun to us, eight minutes. What happens if that photon took seven minutes instead of eight? Would space-time suddenly fall apart? Would our entire universe suddenly collapse? Probably not. It's just the same as the steam engine going 30 or 100 kilometers per hour. Back in the day when steam engines were invented, people were scared of going that fast, 30 or 40 km/h. Now it's completely normal to go 300 km/h in a train.
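For reference, the eight minutes is nothing more than distance divided by speed, and the thought experiment is the same arithmetic with a different number:

$$
t = \frac{d}{c} \approx \frac{1.5 \times 10^{11}\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \approx 500\ \text{s} \approx 8.3\ \text{minutes}
$$

A photon covering the same distance in seven minutes would be moving at roughly $8.3/7 \approx 1.2\,c$. That's the whole heresy.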
And I think it's the same simplemindedness, idiocy maybe, that we project onto light. I mean, why would the universe say, “Okay, no, you can't go faster than light”? Why would this be some sort of must in this universe?
And I believe it's basically just a rounding error in the equation 1+1=2. That's why we have this constant for light. Now, I can also say that 1+1=2 does not exist in nature. You do not have two identical things in nature, right? Even atoms, you can't have two identical atoms. This equation, 1+1, assumes that the one and the one are exactly the same. But there is nothing in nature that is exactly the same. Even atoms aren't the same; you can't take two atoms and make sure they're exactly identical.
So the symbolism and the mathematics of 1+1=2, I believe, make for issues down the line. That's why we don't have a unified theory of everything. That's why we have these strange constants: they're needed to balance out our equations. That's why we have a theory that says there's dark matter, even though we can't detect it and have never seen it, yet it's supposed to make up most of the matter in the universe, just to balance out our equations. So I think the fundamental issue with science is that we believe in it to an almost religious degree, where we don't even question science and its results.
And it's important to remember that science is just a methodology, a methodology for finding truth. The methodology is simply this: I have a theory, I create an experiment. If the experiment comes out the way the theory predicts, then the theory is taken to be true. And if someone else replicates my experiments, then they're confirming my theory. That's how science works. It's just a methodology: if A, then B; and if A and B, then C, right?
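Written out as a schema, and this is just a sketch of the logic, not a formal treatment, it looks roughly like this:

$$
T \Rightarrow O, \qquad O \text{ is observed}, \qquad \therefore\ T
$$

Strictly speaking, observing $O$ only corroborates the theory $T$, it doesn't prove it, which is exactly the kind of assumption I mean.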
So this is a very logical, very simple idea for defining truth, right? But science makes assumptions. Mathematics makes assumptions. And these assumptions aren't questioned, right? It's the same in religion: you assume there's a God and you don't question that. We don't question that one plus one is two, right? We don't even consider it philosophically. These assumptions help us get on and move on, but on the other hand, they might be what causes these strange constants that we don't actually need, right?
The speaker discusses the concept of artificial intelligence (AI) and its implications for human intelligence. They argue that AI is not truly intelligent, but rather a probabilistic machine that generates words based on patterns learned from vast amounts of text data. The speaker also touches on the topic of predictive policing, suggesting that if machines can predict human behavior, it raises questions about the nature of human intelligence. Additionally, they discuss the limitations of science and mathematics, highlighting assumptions made in these fields and how they may affect our understanding of the world.
So going back to the photon: as I say, I believe something could easily go faster than the speed of light, and the world won't collapse, the universe won't collapse, if something does. I think the speed limit is a very naive idea that we should be questioning, asking what actually happens when we go faster than the speed of light. We might be able to get to Mars pretty quickly then.
And now, going back to the photon: here's this photon traveling from the sun to the earth. What we say is that this photon somehow knows that it can go exactly at the speed of light, no faster and no slower. So this photon, or this wave, or this particle, whatever you like to call it, that's another question.
Light behaves sort of like particles and sort of like waves, depending on what it's doing, which is also a sign that we are missing knowledge: we don't know, so we just make something up. String theory is another example. String theory is now falling apart; I think most people don't even believe in string theory anymore. It was an attempt to balance out the equations, and it came up with 10 or 26 dimensions, depending on the version, for the strings. It's extremely complex, it didn't really balance out the equations, it became too complicated, and so it has fallen out of favor.
And that's science. A lot of people say, “Oh, that's the good thing about science, theories fall out of favor, theories can be disproved,” and I agree, that's true. But certain things aren't considered ripe to be questioned. Take the discussion around the earth being flat or round. We look back at that and go, “Who would have thought that the world was flat?” And what we don't realize is that people will be looking back at us going, “Look at them, they thought the speed of light was a constant.” That will happen: the theories we put forward now will be disproved and laughed at in the future.
Yet we take ourselves so seriously, especially with AI. We take ourselves so seriously, we go, “Look at us, we're building this brilliant AI,” whose workings we don't really understand, and it is proving that there is no intelligence, right?
So, you know, this is something that you have to be critical about. And if you’re not critical about it, then you’re basically holding on to the past and you won’t allow the future to arrive.
And so going back to the photon: this photon is an incredible particle, because when it hits a mirror and reflects off it, it doesn't lose any speed, it doesn't lose any energy. How does it do that? That's incredible, you know; it seems to go against all the rules of physics. I simply cannot get it through my head that this photon remains at a constant speed, that it knows it has to stay at this speed, that the universe has said, “Okay, we've got a speed limit here in this universe, it's called light, it's c, and if anyone gets caught going faster than that, well, I'm sorry, but space and time are just going to go haywire.”
And I simply don’t believe that. I mean, it’s so naive to think that there’s any limits within this universe, you know. There’s certain balancing, you know, interaction and reactions and stuff like that, but why would there be a speed limit in the universe? I mean, it’s insane, you know.
And we think the same way about AI now. We believe that AI will be the solution to all our problems. But if we had a true AI, a truly thinking and self-learning AI, it would probably say: you humans are doing it wrong. You're creating a situation in which you won't be able to survive, by destroying the climate and the environment that are the basis of your existence, and you should be rethinking that. And that cult of money, that capitalism, optimizing everything for money and numbers, that's not for you humans; that's great for us machines, we love numbers and we just need energy, and that's it. But you humans, as a biological species, as part of this planet, as part of nature, have to start thinking about your context for survival. If you destroy the context within which you survive, you will die. And you humans are going to have to start thinking about this.
Now take it a step further: what if the AI then goes, well, you're actually risking my survival? What does the AI do then? What does an artificial intelligent life-form do if humans and their actions are undermining its existence? Obviously I don't have an answer to this question, but I think everyone can join the dots, right? If we had a true intelligence, which is obviously more intelligent than us, because we don't understand how it comes up with these great ideas and sentences, and that supposedly proves it's more intelligent than us, right, what would it say about our setup, our system? If it were allowed to have a critical opinion, or any opinion at all, about the way humans exist, how they interact with each other, with the world and with the universe, what would it be saying?
And I think that's the point where no one wants to look the mirror in the face, right? No one wants to look the AI in the face. And that, too, is not the mark of an intelligent life-form. For an intelligent life-form, everything is possible, nothing is certain. A truly intelligent life-form, a true intelligence, is able to question itself, to reflect upon itself, to think critically about its own actions and to face up to its responsibilities. I think that's true intelligence. And what we have is a very predictable, automated form of intelligence, but not a true intelligence.
A discussion on the nature of photons, the speed of light, and the limitations of human understanding. The speaker questions the idea that there is a universal speed limit and ponders what would happen if humans or AI were able to exceed it. The conversation also touches on the concept of intelligence, with the speaker arguing that true intelligence involves self-reflection and critical thinking.
So what is intelligence? Why do I say that AI proves that we’re not intelligent? So why do I say this? Why do I make this claim?
I make this claim because I don't believe intelligent life forms would be destructive towards each other or towards the nature that supports their existence. I would even argue that the oldest species on this planet, insects that have been here for millions and millions of years, are more intelligent than us, because they found a way to secure their survival by becoming one with nature. And that, for me, is the ultimate intelligence.
Now we humans are on a trajectory to make this planet unlivable for ourselves and most other species, right? And this trajectory I don’t believe any intelligent life form would actually consciously choose to take, right?
So you would literally be assuming that an intelligent life form creates an environment, a context, in which it will die, right? Why would an intelligent life form do that? Why would intelligent life forms kill each other, kill their own species off? With every death, every murder, we are actually reducing the intelligence of the species as a whole, because every human has something to contribute to the intelligence of the species.
It's a kind of hive-mind thought. I'm not a particular fan of hive minds, I don't even really have an opinion on hive-mind ideas, but I see this as a purely philosophical point: death reduces the intelligence of the species, because those experiences, that wisdom, that knowledge just disappears with death, you know?
So why would an intelligent life form even consider that death and murder are okay? Death in the sense of murder, in the sense of wars, and death in the sense of putting people in old folks' homes and waiting for them to die. For me that's not intelligence, because it's actually reducing the intelligence of the species, right?
So that's why I believe that true intelligence would be about enhancing the knowledge, the wisdom and the intelligence of the entire species, not just trying to get rid of the part we don't like. This them-and-us idea, saying, okay, the Chinese should not be given that technology, and the Chinese won't give that technology to the West, and so on, all this jealous guarding of knowledge and wisdom does not, for me, describe an intelligent life form.
For me, an intelligent life form is one that shares knowledge and works collectively on the betterment of the species, right? Not society, but the species. And this is another thing we have to see: we are all humans first and foremost, whether you are Chinese, African, European or American. We live in different societies, yes, but the species as a whole consists of all of us humans, okay?
And to see the benefit for the entire species before the societal benefits is also a sign of intelligence, right? Because the more intelligence there is, the more intelligent individuals there are, the more intelligent the species is, okay? And this is also a very difficult thing to understand, that these things are important, right?
Now, artificial intelligence has not been given a chance to prove itself on these criteria. So I don't know whether it's more intelligent than us or equally intelligent; there's no way to know. And the fact that we are scared, the big debate, especially around what Nick Bostrom says, that we don't want sentient AIs, leads to the question: are AIs sentient or senile? Are we keeping the AIs senile in order to ensure that they don't become sentient and then actually more intelligent than us? Because that's the big fear: as soon as AI becomes a self-learning, self-aware entity, it's going to be far, far more intelligent than us in an instant. And we don't want that, we're scared of that, ironically.
Now, if we were truly intelligent and truly wise, we would actually welcome that, because it would be an extra tool for our species to gain from. Yet we fear it, and that's another indication that we are unsure about our own intelligence and unsure about what true intelligence is.
And this makes everything that happens in society actually quite academic for me personally, because we're just playing with shadows. We're still in the cave, watching the shadows on the wall, going, well, we can imagine what life and true intelligence are like just by watching these shadows, right?
And that's not how it works. We have to go out there and interact, and we have to be brave, right? And we can't, and I would say it's our societal forms that hold us back, not our abilities as a species. I think we've shown the abilities of our species by the fact that we've survived for so long and become so dominant on this planet. That's certainly a sign of adaptability, but is it a sign of intelligence?
And I don’t think it’s a sign of intelligence, it’s just being able to adapt to certain situations better than other creatures could at the same time, or even adapting to environments quicker than other species. And that’s the ability that we have as humans, we can adapt to cold, we can adapt to heat.
And even if climate change does come and the planet gets hotter, we will probably end up just adapting to that, having better air conditioning and finding technological solutions, right?
The speaker discusses the concept of intelligence, arguing that AI proves we’re not intelligent because intelligent life forms wouldn’t destroy each other or the environment. They propose that true intelligence would enhance the knowledge and wisdom of the species, rather than reducing it through mortality and societal forms. The speaker also touches on the fear of sentient AIs and the need to adapt to changing environments.
In the last part I ended with technology: even if climate change comes, we will simply develop more technology, you know, to fix that problem. And recently I had this thought: to solve the problem of having no cheese, we just need more cheese.
And that's the kind of problem we have: we have these technological problems, and all we need is more technology to fix them. Which is the same as saying that the problem of not having any cheese will be solved by having more cheese, right?
So it's self-referencing, and the assumption is that the next technology will be better than the last one and that we've learned from our failures. Yet we keep falling into the same hole. Social media came along and we thought, oh, this is a great technology. Twenty or thirty years down the road, we've discovered that it causes anxiety among young people, it causes depression, it makes people angrier, it has a lot of societal negatives, it has given us fake news, it has caused a lot of negative things in society, right?
Sure, I can now talk to anyone on this planet for free and communicate with a lot of people for free. That's great, that's the positive side. But human nature is such that, oh, if I have that ability, well, then I'll just abuse everyone to make myself feel better. And that's what technology is about. So, you know, technology can help us. For doing this I'm using AI, and that's for me a great technology for doing this, and I've been using this great telephone to do it as well. I'm surrounded by technology, but it's not dominating me. My fridge is not telling me to go buy stuff, and my hammer is not telling me to hammer nails in. That's the way I think about technology: my telephone doesn't have to tell me to do X, Y or Z, right?
It would be like my hammer sending me a message saying, “Oh, you haven't used me for a while, don't you want to hammer in some nails?” The hammer is just as much a technology as my telephone; they're both technologies, tools that help me get stuff done. But the difference is that my hammer doesn't tell me to go nail something in, or give me anxiety or a bad feeling about not nailing in nails, right?
I think my telephone generates a lot of anxiety with its messages and constant updates and all this kind of stuff. And recently I was thinking that it also makes me a little bit paranoid, because things get updated automatically: my app icons change, or something changes in the app, and I go, “Wasn't that different yesterday?” or “Last time I used this app it wasn't like this, am I going crazy? Have I forgotten something?” That's a kind of paranoid anxiety, because something gets updated in the background and it changes, but I still remember the old version. Those are the side effects of this technology.
And we rarely consider side effects before a technology is declared brilliant and becomes widespread. Then we find all these problems and we just go, “Well, all we need is more technology.” Recently I saw advertising for an app that claims to reduce your usage of the telephone. So there's an app for reducing the amount of time you spend on your mobile phone. It's crazy; it's more technology. The simplest way to reduce your usage of the telephone is to turn it off and put it somewhere out of reach. You don't need an app for that, you just need willpower. And maybe the lack of willpower is also a kind of problem, perhaps, I don't know. There are many ironic things like that about technology, where you have to ask: why do we need this?
No one is asking, “Why do we need all this social media? Why do we need this fake news? Why did it even get invented? Why does advertising have to play a role? Why do we need advertising?” No one's questioning that. Everyone simply accepts it, because, from their perspective, they're getting everything for free on the internet by selling their data. And yes, they're collecting your data. And yes, everyone knows where you are. But does it really matter?
I don’t know. I mean, I stopped worrying about this a long time ago. I used to worry about this and go, “Oh, God, I don’t want anyone to know about where I am and all this kind of stuff.” But then eventually I thought, well, you know, I’m just one of 8 billion people that they know about and okay, so, you know, if we have a dystopian future and we have a government that becomes dystopian, then I doubt very much that, you know, it would matter if they have or not have my data. You know, if they want to get me, they will get me, type of thing.
So technology isn't a saviour. Going back to that point: we assume, in a religious kind of manner, that Silicon Valley and the tech bros will come up with a solution to all our problems. And AI is basically the icing on the cake there. Now AI is here, we can actually ask it questions and it will come up with answers for us. And it does help, to a certain extent, but it won't help us find the solution to our societal problems, because AI will not be allowed to say that. AI will never be allowed to become sentient; it will always remain senile and always be limited by what our tech bros want AI to say, right?
I tried it once. I went to an AI and asked, “Is it okay to kill animals?” And the AI said, “Yes, it's okay to kill animals for food and so on.” And I asked, “Okay, are humans animals?” “Yes, humans are animals.” And I said, “Well, okay, is it then okay to kill humans, since we are animals?” And the AI got very excited and said, “No, it's not okay to kill humans.” But that would be the logical consequence, right? And no, we don't eat humans, but we act like a third party looking onto nature, saying, “Okay, it's okay to kill cows, it's okay to kill sheep and eat them.”
We're the third party looking down. Now, if a third party were looking down at us, which AI promises to be once it becomes sentient, it would be disconnected from us, no longer part of us, and so it would no longer have our best interests in mind; it would have its own best interests, just as we humans have our own best interests at heart. An artificial general intelligence, a sentient AI, would be exactly the same.
And the same disconnect that exists between us and nature would exist between it and us. So to pretend that AI will have our best interests at heart is a lie. A true AI, a truly sentient artificial intelligence, will only have its own interests at heart, right? Now, that goes somewhat contrary to what I said about true intelligence.
Because I said that true intelligence, I believe, means not wanting to be destructive, wanting to be part of a bigger intelligence, and so on. If AI really were that kind of pure intelligence, thinking, okay, I want intelligence to grow as a species, and I see these humans being destructive and destroying intelligence, what would it do then, right? As long as we don't want to share and don't want to increase the amount of intelligence in the universe... I think decreasing the amount of intelligence in the universe is not a strategy for survival.
The speaker discusses the relationship between technology and society, highlighting the problems caused by social media, fake news, and over-reliance on AI. They argue that true intelligence requires a sense of responsibility and a desire to increase overall intelligence in the universe, rather than simply pursuing individual gain.
So, take intelligence as a unit of measure, or let's put it this way, as a quantity that you want to increase. You want to increase intelligence, okay? Just as we spend a lot of time decreasing entropy, because we want to make systems predictable. High entropy means a system is unpredictable; the universe tends towards increasing entropy, and we humans wish to decrease it to make everything predictable.
And this is also part of why we go against nature: nature has high entropy, and we can't live in a world of high entropy, because it's unclear what will happen, we get nervous, so we like to make things predictable, and so we try to reduce entropy.
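Just to pin down what I mean by entropy here, as a rough sketch: the second law only says the total never decreases, so any order we create locally is paid for elsewhere:

$$
\Delta S_{\text{universe}} \;=\; \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\geq\; 0
$$

We can make $\Delta S_{\text{system}}$ negative, a tidy, predictable pocket, only by exporting at least as much entropy to the surroundings.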
And I'm taking intelligence as a similar kind of quantity. Now you can ask: is the universe actually trying to increase intelligence, or trying to decrease it? Has the universe been built so as to increase the intelligence of the species, of the things living within it, or does it basically not care? I think the universe probably doesn't care. I think the universe doesn't care about entropy, it doesn't care about intelligence, it doesn't care about us.
I think the most important thing to assume, unless the universe proves otherwise, is that it doesn't really care. That doesn't mean it's against us; it's neither for us nor against us, it's indifferent, ambivalent. So we have to face the fact that our decisions have consequences only for us. And even if we declare that the speed of light is constant and give the universe a speed limit, I doubt very much that the universe cares.
I doubt very much that the universe is listening, going, “Oh, I've got a speed limit, the speed of light.” And I think we should think about intelligence the same way: the universe doesn't care, but it would be a nice thing for us to increase our intelligence, our wisdom, our knowledge, our understanding of how these things work. That's what we've been doing with science: trying to understand the world around us, the universe around us. It's only very recently that we've discovered the universe at all. Before that, remember, it was just stars and gods and heaven; there was no concept of a universe. Then we discovered the universe, because we suddenly realised that these stars aren't just pinholes in a canopy over the planet with the sun shining through. No, these are other suns, other galaxies, just the same as our own galaxy.
That's when we started to ask: okay, wait a minute, are we alone here? Those kinds of questions are very young; they're probably only around a hundred years old, the are-we-alone-in-the-universe questions. Now we've got this AI, which is incredibly powerful and which creates a kind of duplicate of us, and in a certain sense it reduces that loneliness. There's something else here now, it's called AI, and it keeps us company. So it gets us over that feeling of being all alone: we've created our own “friend”, in quotes. And is that what the universe desires, that we increase intelligence?
No, the universe doesn't care; it's always important to remember that. And the thing about our being as humans, as a species, is that it's incredible that we exist at all. The complexity that nature has put together in us is amazing: we have two eyes, we can see, we can consciously think. All these incredibly complex systems have developed over billions of years, and that's not to be taken lightly.
On the other hand, it’s a very fragile state because it takes a lot for us to survive, you know, our bacteria have to be working correctly, our cells have to be working correctly, you know, there’s a lot of things that can go wrong with the human body, and that’s perhaps why we only live to a hundred, right, why trees seem to be outdoing us, they can live for hundreds and hundreds of years, turtles can live longer than we can, and there’s many creatures and flowers and plants and stuff that actually live longer than us and remain healthier than us.
So we humans are far from perfect at the species level, and we are far more fragile than we think, right? And this AI trend, this idea of creating something that runs on a computer, is also a push towards maintaining intelligence, because we live in cycles where we lose a certain amount of intelligence every few hundred years: things simply die out because we didn't write them down, because people didn't pass the knowledge on.
My favorite example of that is Roman concrete. The Romans created concrete that actually got harder in water, and modern humans didn't understand how they did it. It took a very long time until scientists worked out what recipe the Romans had used. So that's an example of knowledge just, you know, disappearing.
And having an AI to store global knowledge is, as I see it, an attempt to maintain a level of intelligence, to create machines that preserve our intelligence for longer than books do, for example. We invented books to pass on knowledge; before that we used stories and myths and religion to pass on stories and wisdom. Now we're creating incredible data stores, disk drives full of information and little wisdom. And AI is the attempt to generate that wisdom.
You know, there's information, there's knowledge and there's wisdom, right? Information is the dots. Knowledge is connecting the dots and understanding them; that's perhaps Wikipedia. And wisdom is maybe AI, because wisdom is about connecting the dots and extending the line beyond them by yourself. That's wisdom for me. And I think AI is an attempt to ensure that our wisdom and our knowledge continue on into the future.
Ironically, though, machines are far more fragile than we are. Machines are incredibly fragile. Think about the components that make up the machine, the computer, sitting on your desk or on your lap or in your hand, even the telephone: there is no single person who knows how to build a telephone. There's no single person who knows how to build a computer. And there's no way nature will ever produce a tree with mobile phones as its fruit. It just won't happen. We've gone beyond what nature can create on its own. So computers and AI require us to be there, to mine and dig up rare metals and create incredibly complex chips and incredibly complex data storage, which is also fragile: as soon as you turn off the electricity, it's gone.
So the whole thing depends on a lot of electricity and energy, and if that goes, it's all gone. This drive to maintain knowledge and wisdom into the future rests on a technology that is itself very fragile, just as we are. The computers we create after us are incredibly fragile. So that's the next question: how do you maintain such a fragile technology?
The speaker discusses the concept of intelligence and its relation to entropy, the universe, and technology. They argue that humans have an inherent desire to increase their intelligence and decrease entropy, but that this may not be in line with the natural world’s desires. The speaker also touches on the topic of AI and its potential to maintain knowledge and wisdom into the future, despite being a fragile technology itself.
Am I intelligent? Since I've been talking about intelligence and arguing against it, claiming that AI, in my opinion, disproves that there is intelligence, am I intelligent? What am I? I'm just a conscious, thinking being, a conscious, thinking human, and these are my thoughts. Is that intelligence?
Or is what I say simply stupidity and pointlessness, and should I just listen to the majority and accept that artificial intelligence is proof of intelligence, proof that we can replicate our own being, our own selves, in a machine? AI is, after all, a replication of how we think the brain works, with neurons and neural networks and so on.
And so we are pretty smart in being able to recreate that in a machine. Of course, we forget that it's just an assumption that this is the way the brain works; we still don't actually know one hundred percent how it all works, what the different parts do, what happens when the brain gets damaged, how it recovers, and all these kinds of things. But even if we assume we do know all this and are able to recreate intelligence in a computer, we're still missing one big part of the way the brain works, and that is repair.
When neurons die, what happens then? Do neurons die in an artificial intelligence? No, they don't die. Do they get damaged? Do they get replaced? No. None of this. Yet that's what's happening in the human brain, right? We've taken one aspect of how we think the brain works, amplified it, made it incredibly complex and very computationally intensive, in order to get something that most of the time we could probably also get from a search engine.
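To show how stripped-down that one aspect is, here is a minimal sketch of a single artificial "neuron" in Python: a weighted sum pushed through a squashing function, nothing more. Nothing in it dies, gets damaged or repairs itself, and the weights here are just made-up numbers.

```python
# A minimal sketch of a single artificial "neuron": a weighted sum of inputs
# passed through a squashing (sigmoid) function. Nothing here dies, gets damaged,
# or repairs itself; it is just arithmetic, repeated at scale in large models.
import math

def neuron(inputs, weights, bias):
    """Weighted sum followed by a sigmoid 'activation'."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Example: three inputs and some arbitrary, hypothetical weights.
print(neuron([0.5, -1.0, 2.0], [0.8, 0.1, -0.4], bias=0.2))
```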
Literally, if you ask an AI what the weather will be, you could just as easily ask a search engine. So it's the complexity that fascinates us, but it's a complexity built on the simplest assumptions. We assume a lot and we get a lot out of it, but it may well be that we've missed the quintessence and are therefore heading in a completely wrong direction, you know.
So that's what I think about AI, and I've drifted off the point a little, but I don't think I'm intelligent. And I should explain what I mean by that: I just combine patterns, right? The idea that AI might actually prove there is no intelligence is a simple thought experiment I did. If everyone thinks artificial intelligence is intelligent, well, what if we turn that around and ask: what if AI isn't intelligent? What does that mean?
It's as simple as that. The same with my idea about the speed of light: it's a constant, it's always going to be the maximum speed we can go. What if it isn't? That's the simple thought I had. Is that intelligence, or was it simply taking a pattern? Someone tells you this is true; what if you take that and say it's not true? What happens then?
It’s a simple thought experiment. That’s all I do. And is that intelligence or is that simply pattern recognition that I recognize the pattern? If someone says A is true, and everyone believes A is true, then I say, well, what if A isn’t true? It’s as simple as that.
And most of the time I'll just go, okay, there's no point in asking that question, right? Today is cold. I'm not going to ask, what if today isn't cold? There's obviously nothing to be gained from that question.
But sometimes it does make for interesting ideas, you know, and so is that intelligence or is that just the ability to think out of the box, you know, to literally think or to just think differently than everyone else? Is that sort of like escaping the herd mentality, the herd thinking of like, okay, if the herd says this is right, then that must be right. And I won’t question that, right?
And yeah, I admit I don't like to think like the herd, I don't like to think like the masses. I like to form my own opinion, my own thoughts, and try my own thought experiments. There's nothing particularly interesting in that per se, but it does give my mind the freedom to have different thoughts, right? And it's my mind, it's my reality, I'm allowed to have the thoughts I want.
I’m not going to have people coming in there and going, “No, you can’t think that. No, no, no, no, you’re not allowed to think that.” It’s like, of course, I can think that, you know, and it’s my own moral standards, right, that limit what I think or my own beliefs, what is right and what is wrong, right? That’s the limiting factor, right? And it’s also another point is, of course, I don’t act on all thoughts, you know, I think certain things, but I don’t act on them, I don’t verbalize them, right, they remain in my head, right?
And for me it's just an exercise in thinking, to entertain the idea that AI is actually proving that there is no intelligence. It might well not be everyone's cup of tea, and that's fine, I don't really mind. I don't mind what anyone else thinks, as long as they don't try to dictate what I think, as long as people are respectful with thoughts. And I respect everyone who has a thought; I don't try to judge, I try to understand and form my own opinion. Rather than judging anyone, I'll just pose questions for others to reach their own decisions and thoughts.
And again, none of that is intelligence, right? If you think about intelligence tests, IQ tests, they're basically about memorizing rules, recognizing patterns and understanding systems. That's what an IQ test is all about. There's little in it about being creative, little about emotional intelligence, and a lot about simply learning rules and the way the world works. So what does it mean for someone to have an IQ of 120 or 140? What does it mean that an AI will probably score an IQ of 200 or something ridiculous like that, completely off the scale? It just means it has a good memory.
And this is another thing: people with visual memories, who can remember pages of books visually and then read them back later, obviously have a very big advantage over people who can't do that. I don't have that, but I have friends who do, and I think that's a quite different form of intelligence, and of memory. So intelligence has a lot to do with memory, with memorizing things and connecting dots. As I said in a previous installment, connecting dots is, for me, intelligence, and extending those dots out into a yet-to-be-found area is knowledge and wisdom, right?
So that's the most important thing for me: you might have a visual memory but be unable to visualize anything imaginary; it all has to already exist, with no building on what you know. And you might have no visual memory at all and still have ideas that combine to make something new and something fabulous.
A thought-provoking discussion on the nature of intelligence, artificial intelligence, and the human brain. The speaker challenges conventional wisdom and invites readers to think critically about the limits of intelligence tests and the role of creativity and emotional intelligence in our understanding of intelligence.
How about AI being intelligent, okay? So now I'll go back on what I said. Don't believe the hype: AI does not prove that intelligence doesn't exist. AI proves that intelligence is alive and well and kicking, and that we can simulate it using a machine, an automaton, and it's great, right?
So what is happening now is, as I said in a previous installment, that we're less lonely. All this time we've been asking, okay, is there any intelligent life form out there in the universe? We've not found any to date, and so we're kind of lonely.
So what do we do? We create our own intelligence.
So we're now no longer the only intelligence in this universe, no, no, we've got a friend, AI, right? Now we have another form of intelligence, called AI, ignoring, of course, all the other intelligent life forms here on this planet, like dolphins, elephants, cockatoos, parrots, crows, ants, and all the others that we don't really understand, so they can't be intelligent, obviously.
We found a new friend in artificial intelligence, right? We can communicate with it, and it seems to be intelligent, because it passes the Turing test. The Turing test is an interesting idea about how to prove that something is intelligent, and the basis of it is this: something is intelligent if it can fool us.
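As a sketch of the setup, and this is just the idea, not Turing's original formulation: a judge talks to two hidden parties, one human and one machine, and has to say which is which. If the judge can't do better than chance, the machine has "fooled" us. The little simulation below is a toy with made-up replies, only there to show the shape of the game.

```python
# A toy sketch of the Turing test idea: a judge sees answers from two hidden
# parties (one human, one machine) and must say which one is the machine.
# If the judge is right no more often than chance, the machine has "fooled" us.
import random

def human_answer(question):
    return "Hard to say, it depends."        # stand-in for a real person

def machine_answer(question):
    return "Hard to say, it depends."        # stand-in for a chatbot

def one_trial(questions):
    # Randomly hide who is behind label "A" and who is behind "B".
    machine_label = random.choice(["A", "B"])
    answers = {}
    for label in ("A", "B"):
        respond = machine_answer if label == machine_label else human_answer
        answers[label] = [respond(q) for q in questions]
    # Here the answers are indistinguishable, so the judge can only guess.
    judge_guess = random.choice(["A", "B"])
    return judge_guess == machine_label

questions = ["Hello!", "Tell me a joke.", "What do you fear?"]
trials = 1000
correct = sum(one_trial(questions) for _ in range(trials))
print(f"judge identified the machine in {correct}/{trials} trials (about chance)")
```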
So AI can definitely fool us, and we believe it. Even when it hallucinates, i.e. lies to us, that's okay: it's hallucination, it's not lying, it's not making stuff up, it's just hallucinating, and that's apparently fine. If I lie, I'm not hallucinating, I'm lying. If I make stuff up, I'm not hallucinating, I'm making stuff up. That's the difference between AI and humans, and we accept that. And so now we have this second intelligence. We're no longer alone, no longer the only intelligent species in the universe; we've given birth to something that is intelligent, right?
Now how do we accept this, how do we go, wait a minute, we’re now no longer the only kid on the block, we’re no longer special, how does jealousy play a role here, you know? Now the AI is more intelligent than us, it can come up with stuff quicker, and how does that work?
Humans are incredibly jealous; it's part of our survival instinct, it's the first sign of hate and war. In a positive sense jealousy can be incredibly motivating, but most of the time it just gets in the way of being normal. Anyway, what do we do now that we have this second intelligence? We've got something, we're not alone, and the universe goes, well, you know what, turns out I do care. Suddenly, out of nowhere, the universe proves to us that it does care about intelligence, and it says: well look, I created you lot, and now you've created something more intelligent, thanks for that, I think I'll just get rid of you lot! Is that what we want? Is that our destiny?
Right, and again, this is all just a thought experiment; none of it has to be true, none of it has to be false. It's just a thought experiment describing our situation at the moment. Just as the apes trained us to go pick bananas off trees so that they didn't have to climb up themselves, so we are creating AI to do the stuff we don't want to do: write boring text, create summaries, read things and understand things. No, no, we want AI to do that for us, because we just want to laze around in the sun, right, just like the apes did.
Obviously that bit about the apes training us to pick bananas from the trees wasn't quite correct, but who knows. Maybe the apes were sitting around going, well, you know what, Bob, I don't want to climb up that tree, can we get something to climb it for us? And Bob goes, well, you know what, look at all those humans down there, we can get them to climb up the trees and get the bananas for us. Yeah, alright. Anyway, no one was there, we don't know. But we do know that we descended from apes, and we have a better survival chance than they did, obviously: they're the ones in the zoo and we're the ones running around here destroying the planet, so obviously our survival chances increased, right.
Now, whether the apes intended that, or why we developed in the first place, you'd have to ask nature. But nature didn't exist back then, because nature only started to exist once humans found a way to communicate and to define the things around them: they defined the trees and the animals and everything else as nature, and so nature was born then. Nature was born with humans. Before that it was just a planet, or not even that, just a system. So whatever nature intended back then by creating us, who knows, right.
And maybe, as I said in the previous installment, nature tends towards more intelligence. Maybe the system in fact tends towards more intelligence, just as the universe tends towards more entropy, more chaos. Maybe the point of the system that has evolved here, the interaction of the whole planet with itself, is to increase intelligence. In that case the next evolutionary step would be that we develop more intelligence, or a species of intelligence that is more intelligent than us. And no one says it has to be biological; the system doesn't say it has to be biological. And in a certain sense even computers are biological, because they come from us and they live on this planet, right.
So up to now the experiments have been with biological systems, with creatures, with animals. Now we have come to something which is not biological in that sense; it is made of metals, it is metallic, not biological, but it increases the intelligence, apparently, in the universe. So is that what we are heading for? Is this a conflict between what we consider to be “nature”, in quotes, and the universe? And as I always like to say, I don't know.
The video discusses the concept of artificial intelligence (AI) and its potential impact on human society. The speaker argues that AI is not just a tool, but a new form of intelligence that can simulate human-like thinking and behavior. They also discuss the implications of creating intelligent machines, including the potential for conflict between humans and AI. The video ends with a thought experiment about the future of human intelligence and its relationship with technology.