Consciousness & Artificial Intelligence

Hamza Tzortzis

Channel: Hamza Tzortzis

Episode Notes

ICNA Dawah Conf 2020

AI: Summary © The speaker discusses whether artificial intelligence can ever be fully conscious. He distinguishes weak AI, which is highly intelligent but lacks inner subjective conscious states, from strong AI, which would also attach meaning to symbols, and uses John Searle's Chinese Room thought experiment to argue that computer programs are purely syntactical and therefore cannot undermine religious narratives or Islam.
AI: Transcript ©

00:00:04--> 00:00:50

Alhamdulillah wa salatu salam ala Rasulillah sallAllahu alayhi wa sallam. I'd like to welcome you all in. This is your brother Sama Akbar, and inshallah I will be moderating this session, titled The Struggle Within. Next up we have brother Hamza Tzortzis joining us. Hamza Tzortzis embraced Islam in 2002 and has gone on to become an international public speaker, writer, lecturer and debater on Islam. Brother Hamza is also the author of the book The Divine Reality: God, Islam and the Mirage of Atheism. He is also the former CEO of iERA, the dawah organization, and he has done some great work through the affiliate of iERA, Sapiens Institute. With that I give the floor to brother Hamza, inshallah, who is

00:00:50--> 00:00:55

going to address the topic of consciousness and artificial intelligence. So thank you for being with us.

00:00:56--> 00:01:40

Bismillah ar-Rahmanir-Rahim. Alhamdulillah wa salatu was salam ala Rasulillah. Salam alaykum warahmatullahi wabarakatuh, brothers and sisters and friends. First and foremost, I praise Allah subhanahu wa ta'ala and I thank Him for giving us an opportunity to share some ideas today. I also thank Allah subhanahu wa ta'ala for the previous talk by our ustadha; it was a great reminder. May Allah subhanahu wa ta'ala bless her and her family and grant her and her family the best in this life and the akhirah. So I'm going to be talking about a very complex topic. In philosophy, there's an area called the philosophy of mind. And even in the domain of science, there's a sub

00:01:40--> 00:01:50

domain called neuroscience, and all of this is interlinked in a very complex way with very complex topics. However, what I want to do is give you a key

00:01:51--> 00:02:42

step to be able to navigate the contemporary space in the dawah. Because as you know, when we try to share Islam with rahmah, with compassion, with hikmah, with wisdom, with intelligence, we have to be able to navigate different questions, especially if they try to undermine the ontology of Islam, the basis of Islam, the foundations of Islam. So today's question is going to be: can AI be fully conscious? Meaning, can artificial intelligence be fully conscious? Now, one would argue: if artificial intelligent machines are essentially physical machines that have a computer program, can these machines be fully conscious like human beings? If they can be fully conscious, this can

00:02:42--> 00:03:28

undermine the foundations of Islam, from the point of view that we as conscious beings are no different than a machine, and therefore we're just blind, non-rational, non-conscious electrons whizzing around; we're an amalgamation, a collection, of all of these different electrons whizzing around. So one would argue this would undermine what it means to be human from the Islamic discourse perspective. Also, it may undermine the understanding of the role of the soul in Islam, because we have a general position that the ruh, the soul, is not physical, from the point of view that it is not made up of the physical stuff that we experience in the world, but is a metaphysical reality. So from this

00:03:28--> 00:04:18

perspective, can AI, can artificial intelligence, undermine religious narratives? Can it undermine Islam? Now, what's very important to understand here is that there is a difference between strong artificial intelligence and weak artificial intelligence. So let me explain. Weak artificial intelligence is the artificial intelligence that we're experiencing today, and we may experience even more of it in the future: a computer or a machine or a robot that has a computer program and can be extremely intelligent. For example, it could play chess with 500 chess players, and it could beat them all in five minutes. Okay, so that's the kind of weak artificial

00:04:18--> 00:04:59

intelligence. I know it sounds funny because you're calling such intelligence weak, but it's called weak from the point of view that it is very smart, and it can undergo these rational processes, but it's not fully conscious, meaning it does not have the ability to attach meaning to the computer program, to the computer symbols. It doesn't have inner subjective conscious states, otherwise known as phenomenal states in the language of the philosophy of mind. So there is a difference between weak AI and strong AI. Weak AI is that computers and artificial intelligent machines

00:05:00--> 00:05:45

are very intelligent, but they're not going to have inner subjective conscious experiences; they're not going to have these inner subjective conscious states, also known as phenomenal states. Strong AI is that computers and artificial intelligent robots and machines are not only intelligent, but can also have inner subjective conscious experiences. They can also have meaning and attach meaning to symbols. So there is a difference between weak AI and strong AI. Weak AI does not undermine religious discourse; strong AI has the potential to undermine religious discourse. So what I'm going to show you today, in sha Allah, is a thought experiment by

00:05:45--> 00:06:32

Professor John Searle that really shows you how to distinguish between weak AI and strong AI. So we can affirm as Muslims that weak AI can definitely exist. They could be very smart, they could be super smart; in 100 years, they could be even smarter than we can ever imagine. But would they ever have consciousness, from the point of view of having inner subjective conscious states? Would they have the ability to attach meaning to symbols? No. And this is why. Now, before I get into the thought experiment, the first main reason is, well, artificial intelligence is an extension of our own intelligence. It's a protraction of our own intelligence. So from this point of view, there's

00:06:32--> 00:07:19

nothing that we should be surprised about. And it doesn't mean that all of a sudden artificial intelligence is going to be conscious in the way that we think, that they would have feelings and inner subjective conscious states; they're just mainly a mirroring of our own intelligence. As William Hasker says: "computers function as they do because they have been constructed by a human being endowed with rational insight. A computer, in other words, is merely an extension of the rationality of its designers and users. It is no more an independent source of rational thought than a television set is an independent source of news and entertainment." The second reason that artificial

00:07:19--> 00:08:10

intelligence won't be fully conscious in the way that we discussed is that artificial intelligence does not have intentionality. This is a complex word in the philosophy of mind; intentionality is related to meaning. So what do we mean by intentionality? Intentionality means that the thinking process, or the system, or the human being, or the cognitive faculties are about something; they are about something other than themselves. For example, I'm a human being and I have intentionality. I know that my thoughts, and the representation that's happening in my mind, whether in a visual format or in the form of thinking, are about something other than the thoughts. It's

00:08:10--> 00:08:54

about, for example, the computer screen that I'm looking at. I have intentionality, which means that I have thoughts, and these thoughts and ideas are about something other than me, about something other than the abstract thought itself; they represent something outside, so to speak. Like, I am looking at this computer screen: I have intentionality, because my thoughts about the computer screen, I know they're about something outside of the thought itself. This is intentionality. Now, we would argue that computer programs, or artificial intelligent robots, are machines with a computer program; they don't have intentionality. In other words, they don't

00:08:54--> 00:09:44

have meaning. They cannot put meaning, or connect meaning, to the abstract program symbols that they have. Let me give you an example. To understand this, you have to understand the difference between syntax and semantics. Syntax is the rearrangement of symbols, just like what you have in a computer program; fundamentally, all computer programs are rearrangements of zeros and ones. Semantics is about meaning. Computer programs and artificial intelligent robots cannot attach meaning to the symbols; they don't "know", if you want to use such a word. They just know how to rearrange the symbols in complex ways; in no way can they attach meaning to the

00:09:44--> 00:10:00

symbols. Let me give you another thought experiment to understand this. Take these statements as an example, and say we were to write them down. Statement number one: I love my family. Statement number two: Agapo tin oikogeneia mou.

00:10:00--> 00:10:43

So the first two statements, these two statements, say "I love my family"; that's the semantic content. It's saying I love my family, but that meaning, that semantics, is arranged in different symbols. Number one, you have the English language: I love my family. Number two, you have the Greek language: Agapo tin oikogeneia mou. Now, for those of you who don't know any Greek: if I were to give you all the letters and teach you how to put them in the correct order with the correct spaces, and you were to put all of that together and spell out "Agapo tin oikogeneia mou", which means I love my family, you would never be able to understand that it

00:10:43--> 00:11:27

means I love my family, because all you are doing is arranging the symbols in the correct way. You have the correct program, if you like; you know how to arrange the symbols in the correct way, but you don't know what the symbols mean. In order for you to understand the arrangement of the Greek letters, in this case Greek symbols, you would have to know what the words actually mean. And that's why there's a difference between syntax and semantics; there's a difference between an arrangement of symbols and attaching meaning to the symbols. So the following argument can be developed from this. Number one: computer programs, or artificial intelligent programs, are

00:11:27--> 00:12:17

syntactical; basically, they're based on syntax, based on symbols. Number two: minds, human minds, have meaning; they have semantics. Number three: syntax by itself is neither sufficient for nor constitutive of semantics. Conclusion: therefore, computer programs, artificial intelligent programs, are not minds. So let me just finally end this discussion with the famous thought experiment by Professor John Searle. This is called the Chinese Room thought experiment, and I'll paraphrase the experiment for you. Imagine all of you watching this are Chinese speakers: you read Chinese and you understand Chinese. And I'm in a room, and you're all watching this room. And inside the

00:12:17--> 00:12:38

room, you can't see what's happening, but you know I'm in the room. And in the room I have a rule book, and the rule book is in the English language, and the rule book is teaching me how to manipulate the Chinese symbols. So if I have a Chinese symbol here, and a Chinese symbol here,

00:12:39--> 00:13:26

I read the rule book to find the appropriate Chinese symbol, and then I put it together and give you more Chinese symbols from my basket, to give you an answer to maybe a question or to a conversation. Now, imagine you all were giving me written Chinese symbols that you know the meaning of, in the form of questions. I receive those questions in the room, and I don't know what they mean, but I go to the rule book that's in English, and I look at all the symbols, and I happen to give you, outside of the room, the right answers. I do this all the time, because I follow the English rule book properly. Now, all of you outside of the room will be saying to me: Hamza knows Chinese, Hamza

00:13:26--> 00:14:14

understands Chinese. But in actual fact, I know nothing about Chinese; I was just following a rule book in English that was teaching me how to manipulate the Chinese symbols. Now, this thought experiment really tells us what artificial intelligence is about. What it is, is a complex way of rearranging Chinese symbols, or in this case rearranging zeros and ones. Even if you study artificial intelligence and you talk about machine learning, all of those complex things are fundamentally reduced to zeros and ones that are combined in very complex ways. It's just symbols; it's a syntactical arrangement, a symbolic rearrangement. But there is no way for the

00:14:14--> 00:14:56

computer program to attach meaning (remember the intentionality we spoke about), to attach meaning to those symbols the way we do as human beings. Now, there is a reply to this argument; it's called the systems reply. And some would say: yes, Hamza, you don't know Chinese, but the whole system understands Chinese: the room, the basket with the symbols, and the rule book, all of that system understands Chinese. But this is false, because there's no way for the system to attach meaning to the symbols. Even if the whole system were just in my mind, even if the system and the rule book were just in my mind, and I manipulated the symbols and gave you the answers,

00:14:57--> 00:14:59

I would still not be able to know Chinese

00:15:00--> 00:15:40

because there's no way for me to attach meaning to the symbols. This thought experiment is a very powerful thought experiment that shows there is a difference between strong AI and weak AI. Yes, weak artificial intelligence exists; it's always going to exist, and it's going to progress much, much further; it's going to be super intelligent. However, it will never be strong AI, meaning it will never be akin to a human being, from the point of view of having inner subjective conscious states and being able to attach meaning to the symbols in the way that we discussed. And the computer program will never have intentionality. Computer programs, the symbols, don't know that

00:15:40--> 00:16:19

these symbols are about something other than themselves. That's what intentionality is about. I have a thought, I'm looking at the door; my thought is about something other than the thought, it's about the door. Symbols don't have that intentionality. So from that point of view, artificial intelligence will never be fully conscious, and therefore it doesn't undermine Islamic narratives. Brothers and sisters and friends, jazakum Allahu khayran for your time. As salamu alaykum warahmatullahi wabarakatuh. And in the future we're going to be developing some essays and some webinars and articles on this issue, so stay posted. May Allah bless you all, keep up the great work. Salaam.
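[Editor's note] The rule-book picture from the talk can be sketched in a few lines of code. Everything here is a hypothetical illustration (the Chinese phrases, the replies, and the `chinese_room` function are invented, not taken from the talk): the program returns fluent-looking answers by lookup alone, while nothing in it represents what any symbol means.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is an invented lookup table: it pairs input symbols
# with output symbols without representing what either side means.

RULE_BOOK = {
    "你好吗": "我很好",            # "How are you?" -> "I am well"
    "你叫什么名字": "我没有名字",  # "What is your name?" -> "I have no name"
}

FALLBACK = "请再说一遍"  # "Please say that again"

def chinese_room(symbols: str) -> str:
    """Return whatever reply the rule book pairs with the input.

    This is syntax only: the function compares and copies character
    sequences; no semantics (meaning) is attached anywhere.
    """
    return RULE_BOOK.get(symbols, FALLBACK)

# To an observer outside the room the replies look competent:
print(chinese_room("你好吗"))  # -> 我很好

# And, as the talk notes, even these characters reduce to zeros and
# ones inside the machine:
print(" ".join(f"{byte:08b}" for byte in "你好".encode("utf-8")))
```

On this sketch the systems reply fares no better: bundling the table, the fallback string, and the function into one "system" adds nothing that attaches meaning to the characters; it only enlarges the lookup machinery.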

00:16:22--> 00:16:54

JazakAllahu khayran, brother Hamza, that was quite a nice talk. May Allah azza wa jal accept your efforts, and may Allah azza wa jal reward you. Next, insha Allah, we'll have two questions for brother Hamza Tzortzis; both of these we received from the audiences tuning in, and please, I ask the audiences to keep the questions rolling in, sha Allah. First: would a strong AI or humanoid AI basically be accepted as potentially possessing intentionality? And if I can find the second question too,

00:16:56--> 00:17:01

would an atheist using AI pose a danger of defeating religion or Islam?

00:17:02--> 00:17:37

Okay, let me just take the second question. Yeah, no, it can't, because Allah subhanahu wa ta'ala is Al-Hakim and Al-Alim, the Wise and the All-Knowing. He has the totality of knowledge and wisdom: Allah has the picture, we just have a pixel. So there is no intelligence on this planet, whether it's AI or otherwise, that can even match the power and majesty and wisdom and knowledge of Allah subhana wa ta'ala. So don't worry: if you connect to Allah, if you connect to the revelation, with His guidance there's nothing to fear. In terms of the first question,

00:17:39--> 00:18:23

you're postulating whether someone could convincingly say that weak AI can potentially have intentionality. Well, you can't really argue that, because this is more of an ontological question, meaning it's about the soul and the nature of that thing. So it's like saying about a human being: can a human being potentially be a giraffe? Right? Now, I know we all have this postmodernism going on, and you might think yes, but let's be real: the point is a human being can never potentially be a giraffe, because from an ontological perspective their souls and their nature are fundamentally different. So this question that you're asking can only be applied if there is a

00:18:24--> 00:19:12

potential to actually be that thing. So from an AI point of view: can weak AI potentially be strong AI? Can it potentially have intentionality? I would argue no; you can't even apply that question in this context, because it's about the source and the nature of that thing. The nature of computer programs, the nature of AI that has a computer program inside it, is just about syntactical arrangements. It has no way of attaching semantics to the syntax, to the symbols, and it doesn't have intentionality, by virtue of the system, by virtue of the program, by virtue of what it is. So the question doesn't apply, because it's not like

00:19:12--> 00:19:38

saying: can a human being, this kind of human being, potentially be faster at running? That question applies within their ontology. But when it comes to this question on AI, it doesn't apply, because it's an ontological question; it is outside of its ontology. It'd be equivalent to saying: can a human being potentially be a giraffe, which we know can't be the case. So hopefully that answers the question.

00:19:39--> 00:19:40

JazakAllahu khayran, brother Hamza.

00:19:41--> 00:19:52

The next question is, again, from the audience for brother Hamza: what if someone claims that science and technology will eventually figure out how to give consciousness to AI?

00:19:54--> 00:19:59

Okay, so now this is a wider, more complex topic, but generally speaking, again,

00:20:00--> 00:20:38

this also relates to the idea of the hard problem of consciousness. So the hard problem of consciousness, in a nutshell, is based on two main things. I know what it's like for me to have an inner subjective conscious state; for example, if I have a chocolate, I'm undergoing an inner subjective conscious state. However, I don't know what it's like for you to have a chocolate; I don't know what it's like for you to have an inner subjective conscious state. So the first point of the hard problem of consciousness is: well, what is it like for a particular organism to have an inner subjective conscious state? We may know if we have the same

00:20:38--> 00:21:03

experience, but we don't know what it's like for them, or for it, even though they may use language such as: it's tasty, it's creamy, it's sweet. Those words, those vehicles, have meaning, and meaning is a representation of the inner subjectivity. And if we were to map out everything in one's brain, all the neurochemicals firing, we'll never be able to know what it's like for someone else to have an inner subjective conscious state, such as having a chocolate

00:21:05--> 00:21:48

or something sweet. So that's the first part of the problem. The second part of the problem is: well, how did these inner subjective conscious states arise from seemingly non-conscious, blind physical processes? Now, this is an ontological question; it's about the source and nature of reality. Now, for scientists to even begin to say that they can, in the future, gain the knowledge and the science to be able to say that computers can have consciousness, they have to solve this problem. And the way to solve the problem is actually not scientific, because science, neuroscience, has an assumption; it has to have the philosophical assumption of what is called

00:21:48--> 00:22:34

physicalism. physicalism is the broad category of the philosophy of the mind that basically says that consciousness can be reduced to or is identical in some way to physical processes. Now, there's different types of physicalism reductive materialism, by the way, physicalism materialism are used synonymously in literature, so reductive materialism, eliminative, materialism, functionalism, and major materialism, strong emergence, weak emergence, I don't have time to go into these. However, in a nutshell, these approaches to the philosophy of the mind physicalism In other words, cannot solve the hard problem of consciousness. And therefore science won't be able to solve the problem of the

00:22:34--> 00:22:45

hard problem of consciousness, because science, in this case neuroscience, has a philosophical assumption that it requires, and that philosophical assumption is physicalism.

00:22:46--> 00:23:14

So in a nutshell, there's more to it than that; that's how I'd answer the question, Allahu a'lam, off the top of my head. JazakAllah, brother Hamza; again, thank you for joining us. May Allah protect you and bless you, your efforts and your families, and inshallah make you a source of much khayr for all the audiences, inshallah, where we apply what you taught us individually and at the same time take that to our communities and societies to make them better, in the dunya and the akhirah. Salaam aleikum wa rahmatullah.