Richard Dawkins: Robots Will Become Conscious (ft. Hamza Tzortzis)

Mohammed Hijab

Date:

Channel: Mohammed Hijab

File Size: 23.49MB



WARNING!!! AI generated text may display inaccurate or offensive information that doesn’t represent Muslim Central's views. Therefore, no part of this transcript may be copied or referenced or transmitted in any way whatsoever.

AI Generated Summary ©

The speakers discuss Richard Dawkins' claim that AI robots could become conscious and be owed rights. They examine what consciousness involves, whether it can be reduced to physical processes, and the role of intentionality. They explain the difference between manipulating symbols (syntax) and attaching meaning to them (semantics), illustrate it with John Searle's Chinese room thought experiment, and conclude that humans and computers differ fundamentally in this respect.

AI Generated Transcript ©


00:00:05--> 00:00:11

Assalamualaikum warahmatullahi wabarakatuh how are you guys doing? I'm joined with the esteemed

00:00:13--> 00:00:14

with the real man

00:00:16--> 00:00:19

with the champ with the heavyweight champion

00:00:24--> 00:00:29

Hamza Tzortzis. It was chilly. Yeah. Well, he couldn't be here today.

00:00:37--> 00:00:46

Salaam, how are you? Good to hear from you, good to see you. We're coming back to the new atheists. You delivered to one of them, maybe seven years ago,

00:00:47--> 00:00:48

A pretty good spanking.

00:00:51--> 00:00:54

Yeah, Lawrence Krauss, wasn't it?

00:00:55--> 00:00:57

Yes, Lawrence Krauss.

00:00:59--> 00:01:01

How do you feel about that debate?

00:01:03--> 00:01:03

You know,

00:01:08--> 00:01:26

it was good, given the circumstances and my learning at that time, given my pathway of development at that time. Obviously, there are many things that I think I could have said that were more articulate,

00:01:27--> 00:02:08

that would have been better in terms of the way the audience would understand it, and to really portray that Islam is true and that atheism is false. But generally speaking, I think it was a very, very positive debate on balance. But I'm going to do a video, insha'Allah, with our beloved brother Zeeshan of Smile 2 Jannah, seven years on from the debate, and we're going to do an assessment and analysis of it. I think that was probably the most popular atheist-Muslim debate in the last 30 to 50 years. That I know of, I think it is, yeah, from a numbers point of view. And, oh, there's no doubt about that. So it will be good to get your insight on that. Absolutely. But talking about famous atheists, we wanted to speak

00:02:08--> 00:02:15

about something that Richard Dawkins has been coming out with, this idea that he keeps coming up with, you know, he's got a title.

00:02:18--> 00:03:01

And what he's speaking of is AI, artificial intelligence. Let's just watch quickly some of the stuff that he said about AI and giving AI robots rights, and then we can comment on it. We reach a profound philosophical difficulty. I am a philosophical naturalist. I am committed to the view that there is nothing in our brains that violates the laws of physics, nothing that could not, in principle, be reproduced in technology. It hasn't been done yet; we're probably quite a long way away from it. But I see no reason why in the future we shouldn't reach the point where a human-made robot is capable of consciousness

00:03:02--> 00:03:13

and of feeling pain. This is profoundly disturbing, because it kind of goes against the grain to think that a machine made of metal and silicon chips

00:03:14--> 00:03:33

could feel pain, but I don't see why it would not. And so this moral consideration of how to treat artificially intelligent robots will arise in the future, and it's a problem which philosophers and moral philosophers are already talking about. So you can see,

00:03:34--> 00:03:37

he obviously thinks that on his worldview, materialism

00:03:39--> 00:03:43

or philosophical, metaphysical naturalism, he thinks that

00:03:44--> 00:04:10

robots or AI can have consciousness, yes, and because of this they should be given rights. What is your response to that? Well, I don't want to go into the whole rights and morality stuff, okay? The reason being that that depends on your understanding of AI being conscious, and what you mean by AI being conscious. Yes. And that's why I have my phone out; it was not to be rude. I have notes on this. Yeah.

00:04:12--> 00:04:19

The first thing we need to address, I think, is that Richard Dawkins says he's a philosophical naturalist. And I find that very interesting, bro, because

00:04:20--> 00:04:43

philosophical naturalism is not really scientific, per se. And he comes across as someone who promotes some kind of public scientism, right, that science is the only way to render the truth about the world and reality. Okay, maybe that's a hard form of scientism; let's make it a bit softer. Maybe he says science is one of the best ways to render the truth about the world and reality.

00:04:45--> 00:04:59

However, philosophical naturalism is more of a philosophy than anything scientific. So he says he's a philosophical naturalist. What does that mean? It means that there is no divine, there is no supernatural; everything can be explained

00:05:00--> 00:05:03

by physical processes or reduced to physical things in some way.

00:05:04--> 00:05:22

Now, that is very interesting, because that's a faith. Remember, he says, 'I am committed to'. Now the prominent atheist philosopher Michael Ruse says, you know, if you want a concession, naturalism is a faith, because you have to believe it as a lens

00:05:23--> 00:06:06

to see through in order to understand the world and reality. So he already comes with the presupposition that there is no God; he already comes with the presupposition that everything can be explained by physical processes. That's his starting point. Those are the lenses through which he sees in order to understand reality. So he's admitted something here: this is my faith, I'm a philosophical naturalist, so therefore, even though I know nothing about AI, I'm going to assume that AI is going to be conscious, and I'm going to assume that it must be given rights. That is a really quite ridiculous way of starting a video about AI. The discussion

00:06:06--> 00:06:46

shouldn't presume philosophical naturalism to be true. And when you watch the whole video, you see that really they're presuming philosophical naturalism to be true, and in the context of AI and consciousness, they're presuming a physicalist understanding of consciousness, which basically means that consciousness can be reduced to, or is identical in some way to, physical processes. So how would you define consciousness, and what needs to be in place for consciousness to work, in your view? Because I think we need to, you know, cover this ground. Okay, this is a big question, but let's apply it to the AI scenario. So I think what they're trying to say is that

00:06:47--> 00:07:02

AI machines or computer programs or robots, whatever the case may be, are going to be indistinguishable from human beings in some way, okay? When it comes to consciousness, when it comes to intelligence, when it comes to

00:07:03--> 00:07:34

interaction, to the point where Richard Dawkins even says so concerning pain, right? And this is the point that we need to zoom in on. We don't have a problem with certain aspects of consciousness, such as thoughts or, you know, cognition or intelligence, right? Because these are connected to consciousness as well. What we're talking about here is: can artificially intelligent machines have inner subjective conscious states?

00:07:36--> 00:08:27

Can they have something called intentionality? Okay, this is quite broad in the philosophy of mind, but generally speaking intentionality is connected to meaning; it means that your reasoning is about, or of, something else. Okay? So say I am reasoning, say I'm reading about Mohammed Hijab, I'm talking about Mohammed Hijab, and I'm reasoning about you. I know that I'm reasoning about something other than myself, and other than just the sounds and the words that I'm using; it is of, and about, something external to me. Now, we can safely say that robots or AI machines don't have an ability to do that, because really robots and machines are

00:08:28--> 00:08:59

just rearranging symbols, right? The symbols don't know that those symbols are about, or of, something external to the symbol itself, right? Because fundamentally computer programs are based on zeros and ones, right? Fundamentally. So the zeros and ones, do they know that they are addressing an entity, a conscious sentient entity called Mohammed Hijab? Do they have intentionality? No, it's just zeros and ones, and this is an arrangement of zeros and ones. The zeros and ones are not about

00:09:01--> 00:09:37

Mohammed Hijab, or rather, the zeros and ones don't know they are referring to something called Mohammed Hijab that's external to them, right? So this, generally speaking, is intentionality, and it relates to meaning. That's a really good point. That is a good point, but the thing is, it's very vast and there are lots of discussions about it. So that might be a good way of putting it, because you're saying that robots will only be able to interact with symbols directly, but wouldn't be able to give meaning to those symbols. Good. So this is the point here: computer systems just manipulate symbols; they can't attach meaning to the symbols. So this is syntax and

00:09:37--> 00:09:50

semantics. So let me give an example of the difference between syntax and semantics. So we have here three sentences, right? Yeah. One in Greek, one in English, and let's do one in Turkish, right? So it's: I love you,

00:09:52--> 00:10:00

s'agapo, which is Greek for I love you, and you have seni seviyorum, which is I love you in Turkish. Now, as you can see, the three sentences

00:10:00--> 00:10:38

have the same semantics, they have the same meaning, but they have different symbols. So what do we learn from this? Well, take this: if I were to give you all of the symbols of Greek and teach you how to arrange them in the correct way, with the right spaces, right, in the right kind of grammatical formula, whatever the case may be, by virtue of you doing that, would you know the meaning? No, exactly. So that shows there's a difference between merely rearranging symbols and understanding the meaning connected to the symbols, attaching meaning to the symbols, right? So there's an interesting argument that Professor John Searle developed, and I've adapted

00:10:38--> 00:11:28

it here. Number one: computer programs are syntactical, they are based on syntax. Number two: minds have semantics. Number three: syntax by itself is neither sufficient for, nor constitutive of, semantics. Four: therefore, computer programs by themselves are not minds. For example, just imagine an avalanche, bro. There's an avalanche on some famous mountain, say in the Alps in France, right? And when the avalanche basically creates its mess, all of a sudden you see rocks that are arranged, and it says, you know, my name is Muhammad Hijab, and I'm over six foot five, and I love wrestling, and I'm a debater. Right. So now, the mere arrangement of those symbols, right? So the

00:11:28--> 00:11:33

mere arrangement of those symbols: does the avalanche know the meaning?

00:11:34--> 00:12:14

No, exactly. So the mere arrangement of the symbols itself doesn't give rise to the meaning. Or take the sea, right: if the tide was coming in and out, and as a result of the tide moving you see an arrangement of sand that says 'I love my mother, I love my parents', does the sea know the meaning of those symbols? No. So the mere arrangement of those symbols in a particular way doesn't necessarily give rise to meaning, because the sea doesn't know how to attach meaning to the symbols, and the avalanche doesn't know how to attach meaning to the arrangement of rocks that, for us, has meaning, right? Does that make sense? Okay, so this is good. So you can't ever prove that? Do you

00:12:14--> 00:12:25

think there's ever a chance? No. So here's the point: the point is AI machines are just complex syntactical arrangements. Never? You're saying that it's not possible for them to attach meaning to the symbols? Yeah. Why not?

00:12:27--> 00:12:42

Because of what we just discussed. So for example, if an avalanche were to come and somehow arrange a bunch of symbols that says 'I love Smile 2 Jannah, it's the best channel in the world, please subscribe now', yeah, right, it doesn't know the meaning of that. That's meaningless anyway.
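As a rough sketch of the point being made here (a hypothetical few lines of Python, not anything from the discussion itself): a program can swap the three sentences for one another by pure string lookup, and nowhere does it hold anything that counts as the meaning of what is being said.

```python
# A lookup table pairing equivalent symbol strings in three languages.
# Nothing here encodes what the sentences are about; the program only
# matches and returns character sequences (syntax), never meaning (semantics).
EQUIVALENTS = {
    "I love you": {"greek": "s'agapo", "turkish": "seni seviyorum"},
    "s'agapo": {"english": "I love you", "turkish": "seni seviyorum"},
    "seni seviyorum": {"english": "I love you", "greek": "s'agapo"},
}

def translate(sentence: str, target: str) -> str:
    """Return the 'correct' symbols in the target language by pure lookup."""
    return EQUIVALENTS[sentence][target]

print(translate("s'agapo", "english"))     # I love you
print(translate("I love you", "turkish"))  # seni seviyorum
```

The outputs are always right, yet the table could just as well pair nonsense strings; nothing in the program would change.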

00:12:46--> 00:12:56

But do you see my point? Yeah. So let's break this down further. Your question really has opened the door to Professor John Searle's famous Chinese room experiment.

00:12:57--> 00:13:03

Okay, have you heard of this before? Shall I just go through it again? Yeah. So say this is a room, this pillow, right?

00:13:04--> 00:13:30

Can you see this pillow? Say this pillow is a room. You are in this room, Hijab, okay? Inside. You're inside? Yeah. Right. Okay, good. So you're in this pillow, but it's a room, and we call it the Chinese room. In this room there is a rulebook, but it's only in the English language. Yeah. And the rulebook says: when you see this Chinese symbol and this Chinese symbol, then you

00:13:31--> 00:14:08

give out this Chinese symbol. You don't know what the symbols mean; it's just giving you a symbolic representation, right? Here are the Chinese characters: when you see this Chinese character and this Chinese character, then give out, outside of the room, this Chinese character. Yeah? Okay. Outside of the room are all Chinese speakers, for example. Yeah. And they give you questions. Okay, this is an adapted version of the thought experiment, but it still works. They give you questions in Chinese. Yep. They don't know who you are, but you take the questions in Chinese and you read the English rulebook, and you say: okay, I've seen this Chinese character, I have no idea what it means,

00:14:08--> 00:14:22

but I've seen this Chinese character and this Chinese character, and the rulebook says I have to give out this Chinese character. So you're giving all the right answers out. So for the people outside of the room, do they think you know Chinese? Yes, exactly.
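Here is a minimal Python sketch of that rulebook idea, with made-up placeholder symbols standing in for the Chinese characters: the lookup hands back the 'right' symbol for every question while containing nothing that could count as understanding.

```python
# The "rulebook": when you see this input symbol, hand out that output symbol.
# The symbols below are placeholders, not real questions and answers; the point
# is that whoever follows the rules never needs to know what any symbol means.
RULEBOOK = {
    "符号A": "符号B",  # hypothetical question symbol -> hypothetical answer symbol
    "符号C": "符号D",
}

def person_in_room(question_symbol: str) -> str:
    # Pure pattern matching: look the incoming symbol up, pass the listed symbol out.
    return RULEBOOK.get(question_symbol, "符号?")

# From outside the room the answers look fluent; inside there is only lookup.
print(person_in_room("符号A"))  # 符号B
```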

00:14:23--> 00:14:59

But do you know Chinese? No, exactly. So this Chinese room thought experiment represents what happens with the AI machine: they just have syntactical arrangement; it is the manipulation of symbols, not meaning. Now, there is a response to this called the systems reply; John Searle calls it the systems reply. Some people say: yeah, you as Mohammed Hijab may not know the meaning, but the system itself knows the meaning. And John Searle replies and says: well, how can that be the case? Because there is no way of the system attaching meaning to the symbols in the first place. Yeah. And you could even extend the thought experiment by saying that this whole system could just be in your

00:14:59--> 00:14:59

brain.

00:15:00--> 00:15:17

Mr. Mohammed Hijab. Yeah, you could know how to manipulate all the symbols and always give the right answer, but does that mean you know the meaning of the language? No, you just know how to basically put different things together. For example, I could teach you right now Greek, right? So if someone says,

00:15:20--> 00:15:23

'pos eisai', okay? Yeah, 'pos eisai', fine.

00:15:29--> 00:15:33

Someone says 'pos eisai', yeah, you should reply:

00:15:34--> 00:15:39

yada, yada, yada, yada, yada.

00:15:41--> 00:15:42

yada, yada.

00:15:44--> 00:15:50

So I've kind of done f carry stuff. Okay, so this is pretty bossy, sir. Yeah.

00:15:52--> 00:15:55

There you go. Do you know what I'm saying? Yes, yes.

00:15:58--> 00:16:05

You don't know what I'm saying. So you just said to me 'how are you?' and 'I'm very good'. How do you know? I just

00:16:10--> 00:16:12

for the audience, you know what I'm saying? Yeah, so I

00:16:14--> 00:16:54

guess. Okay, yes, exactly. So the point here is, I'm just giving you symbols, but in the form of sounds, and I'm teaching you what sound to give me back. Just because you know the kind of syntactical, symbolic arrangement, whether it's in written format or in sound waves, whatever, it doesn't mean you know the meaning, right? So I could train you to come to my house, my mom's house, and she may give you, like, five sentences, right? And I could train you to respond in a particular way that may make her think that you know Greek, to the point where we could manipulate the whole thing and say,

00:16:54--> 00:17:28

after those five sentences, once you've responded so well, you can say in Greek: oh, I need to go, my mum's calling me. Yeah. So you could escape the room, or, you know, she doesn't question you any further. So the point I'm trying to make is we can train you to come across as knowing Greek, but you have no idea what's going on, you know, just by virtue of the fact that you've arranged the symbols; you just know the programme. Do you see my point? Yes. So is that what you said, intentionality? Yeah. So let me just go back into my notes, because there's another response to the Chinese room experiment which is very important for us to

00:17:30--> 00:17:54

discuss. Yeah, so Searle concludes: having the symbols by themselves, just having the syntax, is not sufficient for having the semantics; merely manipulating symbols is not enough to guarantee knowledge of what they mean. Okay. So obviously there is lots of discussion concerning this issue in the philosophy of mind. This is really interesting; in the Quran, yeah, yeah,

00:17:55--> 00:18:05

when the mala'ika, when the angels... Allah says: wa 'allama Adama al-asma'a kullaha thumma 'aradahum 'ala al-mala'ikati fa-qala anbi'uni bi-asma'i ha'ula'i in kuntum sadiqin,

00:18:06--> 00:18:54

which says that He taught Adam all the nouns, or the names, and then Adam reflected it back to the angels. But the words are 'anbi'uni bi-asma'i': give me news, literally from naba', give me news of what these are, if you are truthful. So it's not just regurgitation; it's telling me what this is about. It's about meaning as well. Yes, well, some of the exegetes even said that this is not just labels or terms and nouns, this is also the concept of things, which is about meaning, which is very, very interesting, including abstract nouns. Yeah, but ya'ni, the kind of exegesis of the ayah here is that it wasn't just that; it was about the concept behind these things. Yeah. Which is very

00:18:54--> 00:19:24

deep. So, you know, if you want more information, then go to SapiensInstitute.org, go to the 'Read' section, and you've got the essays; there's an essay called 'Does AI Undermine Religion?'. We'll put it in the description, yeah, and you've got more information on what we just spoke about. But just to summarise: computer programs don't really have intentionality. So AI machines don't have intentionality; the symbols that they have in their programming don't know that they are about something, or of something, outside of the symbol itself, right? It's just a symbol.
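To make the 'just zeros and ones' point concrete, here is a small illustrative Python snippet (an assumption-free demonstration of encoding, not part of the conversation): printing the bit patterns that actually sit in memory for a name shows only byte values, none of which are about the person the name refers to.

```python
# Inside the machine, the name is only a sequence of bytes, i.e. zeros and ones.
# Nothing in those bits is "about" the person the symbols pick out for us.
name = "Mohammed Hijab"

bits = " ".join(format(byte, "08b") for byte in name.encode("utf-8"))
print(bits)
# 01001101 01101111 01101000 ... (one 8-bit pattern per character)
```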

00:19:25--> 00:19:59

And that is connected to meaning. And we know that computer programs manipulate symbols, not semantics; it's syntactical arrangement, not semantics. And we gave the example of the three different ways of saying I love you: I love you, s'agapo, seni seviyorum. It has one meaning but different symbols, and if I gave you all the right symbols of, say, the Greek language to put them in the right way, just by virtue of arranging them in the correct way to produce the words equivalent in that language to 'I love you', would you ever know it means 'I love you'? No, because you just know the symbols; there's no way of you attaching meaning to the symbols, right?

00:20:00--> 00:20:40

Then you asked me: what does it mean? What is meaning? Which is a very, very deep question, and that opened the door to the Chinese room thought experiment. And the Chinese room thought experiment actually shows that computers can manipulate the symbols, but it doesn't necessarily mean that they have the semantics; just by virtue of being able to manipulate the symbols doesn't mean that you now know the meaning. And this differentiates between weak AI and hard AI. So weak AI is: yes, computer programs can be very intelligent, they may even pass the Turing test. The Turing test is basically this kind of test that was developed,

00:20:40--> 00:21:12

like a game where there's a computer and a human being having a discussion, and there's an outsider who basically needs to try and differentiate which one's the computer and which one's the human, something like that. But that's not a very good test, and there's a lot of contention around it anyway. But the point is, it may even pass the Turing test: you may talk to an AI machine and think it's a normal human. That doesn't mean that the AI machine is conscious in the way that we just spoke about; it just means that it's manipulating the symbols really, really well. Even when they're talking about things like machine learning in AI, right, machine learning is just complicated

00:21:12--> 00:21:48

syntactical arrangements built on more complicated syntactical arrangements, and that reduces fundamentally to zeros and ones, which don't have intentionality, right? So yeah, this is the difference between hard and weak AI. So weak AI could be very intelligent, more intelligent than human beings, and we've seen it in popular media. And there's a really good YouTube video by Smile 2 Jannah on, you know, the conspiracy of AI and stuff like that. And you see computer programs can be far more intelligent than human beings, but that doesn't mean they're conscious in the way that we just discussed, that they have inner subjective conscious states, that they can feel pain, that they now

00:21:48--> 00:22:25

can attach meaning to symbols. So if they can't have that type of consciousness, why are you even talking about rights in the same way that you talk about human rights? That's my point. So don't come across with the physicalist, philosophical naturalistic presupposition here that, yes, one day they are going to have rights. You know, I did say to Zeeshan a few hours ago, I said, in 10 to 20 years you're going to have people marrying robots. Yeah. And when you turn a robot off, even with the intention to turn it back on, kind of thing, it's going to be like murder. That's the gist of our conversation. So I think that's going to happen. You had a Japanese, well, nothing wrong with Japanese

00:22:25--> 00:22:29

people. But there was a Japanese person. He married a manga cartoon,

00:22:30--> 00:22:32

manga cartoon, bro.

00:22:37--> 00:23:13

He didn't marry a manga; it was a cartoon, an anime. Yeah, anime. Would you believe it? So there's a difference between hard AI and weak AI. So weak AI is: computer programs and AI can be very, very intelligent, and may even pass a Turing test; it could sound like a human being, it could sound conscious. But it will never truly be conscious, because for AI machines there is no such thing as hard AI, just from a philosophical point of view, because of intentionality, because of the syntax and semantics discussion that we just had. And yes, this discussion continues, there are lots of debates on this; you know, we're not giving you the full picture. But it's a good

00:23:13--> 00:23:45

starting point for you to understand that, fundamentally, you know, when you watch videos like this, they come with some philosophical lenses and presuppositions, as Michael Ruse says, the faith of philosophical naturalism. They have to presume that physicalism is true, they have to presume that philosophical naturalism is true, and therefore say, yeah, maybe they can become conscious in the way that we were discussing, and therefore we should give them rights. But before we get there, let's have the real philosophical discussion: can they really be conscious in the way that you're thinking? Is philosophical naturalism true from this point of view, and is

00:23:45--> 00:23:59

physicalism true as an approach to the philosophy of mind? And those are the debates that are continuing, and hopefully we're giving you some insights for you to do further research and go to the link below, or wherever, you know, to read a little bit further concerning 'Does AI Undermine Religion?'.

00:24:00--> 00:24:03

With that, guys, thank you for watching.

00:24:05--> 00:24:09

With that, guys, make sure you do what the man says.

00:24:10--> 00:24:36

And go, yes, go down to the description on the channel; there's lots of things there. The book that I've written, 'The Scientific Deception of the New Atheists', is also free of charge on the Sapiens Institute, so make sure you go and download that; it's free of charge. I'll put another link in the description for that. There are things coming out from our webinars that we're doing on a weekly basis that are free of charge; everything there is going to be, once again, free of charge. So make sure you go and visit that website.