Can Artificial Intelligence Undermine Religion?

Hamza Tzortzis

Date:

Channel: Hamza Tzortzis

File Size: 67.19MB


Episode Notes

Seminar


WARNING!!! AI generated text may display inaccurate or offensive information that doesn’t represent Muslim Central's views. Therefore, no part of this transcript may be copied or referenced or transmitted in any way whatsoever.

AI Generated Summary ©

The speaker discusses whether artificial intelligence can undermine religion. He explains what AI, machine learning, deep learning and neural networks are, reducing them to layered inputs and outputs built on binary code, and distinguishes strong AI from weak AI. He then argues, using the hard problem of consciousness and the syntax-semantics problem, illustrated by Searle's Chinese Room thought experiment, that computers only simulate cognition and cannot attach meaning to symbols, so religion is not undermined. The session ends with audience questions.

AI Generated Transcript ©


00:00:01--> 00:00:59

Assalamu alaykum wa rahmatullahi wa barakatuh, brothers and sisters and friends. And welcome to this online live seminar: can artificial intelligence undermine religion? Now, brothers and sisters, this is a good topic, because currently there is a lot of drama, or there is a conversation going on, concerning AI, artificial intelligence, robots, computer systems, advanced computer systems, and so on and so forth. And you have academics talking about this, you have people in the popular online sphere talking about this, you see this in the movies, you see this on social media, on videos that you can find on YouTube and many other places. And there has been this kind of

00:00:59--> 00:01:03

conversation going on concerning

00:01:04--> 00:02:03

consciousness. And can robots be fully conscious, or is it just a simulation of consciousness? And there have been subsequent questions or conversations concerning religion, theism, and the idea that there is something special about human beings: we have a soul, in the Arabic Islamic tradition a ruh, we have a spirit, we have consciousness, and the reality of consciousness is specific to human beings. Yes, animals can have consciousness too, but we have rational insights, we have the ability to attach meaning to symbols, we have an ability to attach complex meaning to many symbols. And so the conversation has gone down the path of: well, maybe in the future you can have a robot or a

00:02:03--> 00:02:53

computer program or a piece of hardware, whether it's a robot or not is almost irrelevant, that can have a program that can not only simulate consciousness but can be truly conscious. This is the idea of strong AI, strong artificial intelligence: that it really is conscious from the point of view that it has awareness and cognition, just like a human being. Now, that's strong AI. There is also weak AI, which basically says: well, no, computer programs can never be fully conscious, they only simulate consciousness, they only simulate cognition from the point of view that they can just manipulate symbols in a very advanced and complex way. But there is no way of

00:02:53--> 00:03:42

attaching meaning to the symbols. And that's the summary of today's seminar, basically. But we want to break this down further to empower you, because the whole point of Sapiens Institute is that we're here to empower you to be able to share and defend Islam academically and intellectually, and for you to develop others to do the same. And this means we have to address contemporary questions, and inshallah, God willing, we'll be doing this until Allah gives us our last breath. So let's go through the slides. Okay, can artificial intelligence undermine religion? Well, the first thing we need to talk about is: what on earth is AI? According to the late computer scientist,

00:03:42--> 00:04:30

John McCarthy, who was based at Stanford University: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable." So from this perspective, the idea is that artificial intelligence is a science and engineering that tries to make machines, smart machines, intelligent, and it's related to the task of using computer programs, or using computers, to understand human cognition.

00:04:31--> 00:04:59

Now, according to IBM, and you can find this on their website, and by the way this has been taken from the website, and you will have the links and all the books and references and bibliography at the end of the seminar on the relevant slide. Now, IBM basically articulate AI in the following way: at its simplest form, artificial intelligence is a field which combines computer science and robust datasets to enable

00:05:00--> 00:05:47

problem-solving. It also encompasses subfields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data. So you're getting the idea here that artificial intelligence is a domain of knowledge, if you like, it's a field which takes the understanding of computer science and data, and there is the development of complex algorithms that can basically use that data in a functional way, meaning that they can use it from the point of view

00:05:47--> 00:06:32

that it gives us solutions, that it makes predictions, or it classifies that data, or it interprets that data. So AI from this perspective, especially when it's related to machine learning and deep learning, is essentially based on datasets and algorithms. So continuing with IBM's understanding of this, and this has been taken literally from the IBM website, and the links are provided at the end of the seminar. Now, yes, there is a kind of connection between artificial intelligence, machine learning and deep learning; they are related, so let's unpack some of these concepts so you get to understand them, at least on a basic level. So deep learning is actually comprised of neural

00:06:32--> 00:07:04

networks. Now, this sounds quite funny, because this is emulating, or mirroring, our understanding of the brain, the understanding that we have neurons. You have to really understand that "neural networks" is just a term for something that tries to mimic the way biological neurons signal to one another. So neural networks, in other words artificial neural networks, are just comprised of node layers, containing an input

00:07:05--> 00:07:46

layer, one or more hidden layers, and an output layer. So basically, it's just a kind of relation between inputs and outputs, inputs and outputs in a very complex way, and there are many layers of these inputs and outputs. And that's what it really means. So this kind of understanding of neural networks and deep learning is nothing more, fundamentally, than node layers containing an input layer, one or more hidden layers, and an output layer, and there is a kind of connection between all of these things, a kind of relation between these inputs and these outputs. So the "deep" in deep learning refers to a neural network comprised of more than three layers, which

00:07:46--> 00:08:28

would be inclusive of the inputs and outputs, and can be considered a deep learning algorithm. So essentially, all it is, is just layers of inputs and outputs. That's what it is: related in complex ways, connected in complex ways. From that perspective, it has inputs and outputs, and many layers of these. So, as already explained, neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. The name and structure, as I said, are inspired by the human brain, mimicking the way that biological neurons signal to one another. So essentially, artificial

00:08:28--> 00:08:52

neural networks are comprised of node layers containing an input layer, one or more hidden layers, and an output layer. And all of these, essentially, are just inputs and outputs, a complex relation between different inputs and outputs and many layers of these. So for it to be "deep", it basically means there are more than three of these layers. Okay, so hopefully that makes sense.
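To make the "layers of inputs and outputs" point concrete, here is a minimal, purely illustrative Python sketch of a forward pass through a small network, assuming NumPy is available; the layer sizes, random weights and ReLU activation are arbitrary choices for illustration, not anything taken from the seminar slides or from IBM's materials.

```python
# A minimal sketch: a "deep" network is just layers of numbers going in
# and numbers coming out, connected by weighted sums and a simple squash.
import numpy as np

def layer(inputs, weights, biases):
    # Each layer: multiply, add, apply a non-linearity (ReLU), pass numbers on.
    return np.maximum(0, inputs @ weights + biases)

rng = np.random.default_rng(0)

x = rng.random(4)                            # input layer: 4 numbers
w1, b1 = rng.random((4, 8)), rng.random(8)   # hidden layer 1
w2, b2 = rng.random((8, 8)), rng.random(8)   # hidden layer 2
w3, b3 = rng.random((8, 3)), rng.random(3)   # output layer: 3 numbers

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
output = h2 @ w3 + b3                        # e.g. scores for 3 classes
print(output)
```

However the weights are chosen or trained, the whole thing remains arithmetic on numbers passed from layer to layer.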

00:08:53--> 00:08:58

So the understanding of machine learning and deep learning is connected to artificial intelligence,

00:09:00--> 00:09:44

as we've just explained. So if you want to break this down fundamentally, brothers and sisters, all it is, is zeros and ones. That's it, okay. And this has been taken from a very basic website explaining how computers work, okay, and I've given the link at the end of this seminar on the relevant slide. But I have put this in place because, if you break it down to its nuts and bolts, it's just fundamentally zeros and ones, and those zeros and ones are basically an electronic version of on and off switches. That's it, simple as that. And computers use zeros and ones, which is the binary system, okay, so it's just two digits, one and zero. And this

00:09:44--> 00:09:59

system is called binary, and our computers use it all the time. And, you know, just like atoms make up everything around us in the real world, everything in the digital world can be broken down into binary, in other words zeros and ones, even though we can't see them. It's all a bunch of ones and

00:10:00--> 00:10:47

zeros, and that is really explained by an electronic version of an on-off switch. So when you use complex computer programs such as C++ or JavaScript, or complex computer algorithms, when you talk about machine learning and deep learning, fundamentally they're stored on the computer as zeros and ones, which are fundamentally the kind of electronic version of an on and off switch. I hope that makes sense. Okay, I'm going to break this down as simply as possible. So as I alluded to earlier, there is a difference between strong versus weak AI, strong AI and weak AI. And the type of AI that may be a problem for religion is strong AI. So let's understand what these are.
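Before moving on, here is a small Python sketch of the zeros-and-ones point just made, assuming only the standard library; the example text and number are arbitrary.

```python
# Any digital data, here the text "AI" and the number 42, is ultimately
# stored as binary digits: patterns of on/off states.
text = "AI"
bits = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)                 # 01000001 01001001 -> the letters 'A' and 'I' as bits

number = 42
print(format(number, "b"))  # 101010 -> the same number written in base 2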

00:10:47--> 00:11:32

Professor John Searle, a philosopher of mind, describes strong artificial intelligence as the view that "the appropriately programmed computer really is a mind, in the sense that computers, given the right programs, can be literally said to understand and have other cognitive states." So from this perspective, and there's nothing controversial about this description, strong AI holds that computer programs literally have cognitive states. And that means they literally have cognition like human beings; they have awareness and subjective conscious experiences. Because when we are undergoing cognition, and we're thinking, and we're having rational insights, we

00:11:32--> 00:11:44

also have a subjective field, a stream of consciousness concerning these rational insights, this cognition, this thinking process, we are aware that we're having it, and there is a,

00:11:45--> 00:12:29

"what it's like" to be in that cognitive state. So we have a sense of subjective experience concerning what it's like to have a rational insight, or to be able to think about something deeply. So strong AI basically says that a computer system or computer program really is a mind. It has a cognitive state, and not only does it have a cognitive state like a human mind, but, just like a human mind, it is aware of that state, and there is a subjective element concerning that too. Now, weak AI is kind of different, because weak AI says that computers do not have consciousness, and in this sense they do not have cognition; they only simulate cognition. In other words,

00:12:29--> 00:13:12

they only simulate thought and understanding, they only simulate having a rational insight; they don't really have a rational insight, they don't really have cognition like we do, they only simulate it. So our problem, or our question today, is not about weak AI, it's more about strong AI. So this raises the question: is strong AI a problem for religion? Can it be a problem for religion? Well, if you think about it, if artificial intelligence becomes fully conscious, from the perspective of strong artificial intelligence, it is going to support the view that consciousness is based on physical functional processes and interactions. In simple language, it's going to support the kind of

00:13:12--> 00:13:35

philosophical naturalist perspective that everything can be reduced to and understood by physical processes. This means, therefore, that human consciousness could have emerged without anything external to the natural world. In other words, there is no need for a soul, no need for divine interaction. And this would be problematic from the point of view of religion, as you know.

00:13:36--> 00:13:41

However, one could argue, even if it were the case, even if

00:13:42--> 00:14:05

AI could be fully conscious, it doesn't necessarily mean that religion is undermined. Okay? Obviously, there's more to unpack here, but it doesn't necessarily mean that religion is undermined. Why? Because it could be argued that God used these physical functional processes, these physical processes that He put in place, that He created, in order to bring about consciousness.

00:14:06--> 00:14:45

And therefore, it did not require something non-physical to ensure the emergence of consciousness. So once God created the system and put it in place, and the physical processes within the system, there was nothing required outside of the system in order for consciousness to emerge. But fundamentally, obviously, the whole system had to be created, and the physical processes within that system had to be created by God in the first place. And that's why, obviously, from a more fundamental theistic perspective, you know, one would argue that all phenomena that we observe are contingent, including physical processes, computer programs, the hardware, the software, any type of artificial

00:14:45--> 00:14:59

intelligence; these are contingent in nature. And obviously, if you refer to the argument from contingency, all contingent things derive their existence from a necessary, independent Being, so you wouldn't deny God's existence per se, and you wouldn't necessarily deny religion,

00:15:00--> 00:15:47

unless, of course, a particular religion actually, you know, highlighted exactly how the ensoulment took place, exactly how consciousness emerged. But that is a different discussion for another time. But generally speaking, you know, even though we can philosophically manoeuvre away from strong AI, if strong AI were to become a reality it would provide the philosophical naturalist with, you know, a good argument to say: look, it seems that this closed system, this natural system, the universe itself and the physical processes within the universe, can give rise to consciousness, cognition, and inner subjective

00:15:47--> 00:16:21

conscious experiences; therefore there is no need for anything external to this system. Because, just to remind every single one of you if you've been following our work, philosophical naturalism is the understanding that there is no divine, there is no supernatural, and everything can be explained by physical processes. In other words, there is no non-physical, and if there were something external to the universe, it doesn't interact with the universe or affect the universe. So they would say: look, you know, philosophical naturalism is supported, because strong AI, if it were to be a reality of course, would support their worldview, the

00:16:21--> 00:16:59

philosophical lenses that these philosophical naturalists put on their eyes in order to understand themselves and reality. But notwithstanding all of that, there is still some space, philosophically speaking, or intellectually speaking rather, for religion to manoeuvre and say: well, religion is not necessarily undermined, because even if strong AI were a possibility, it could be argued that God created this system with these physical processes, and the combination of these physical processes in complex interactions, if you like, could give rise to consciousness. That may not be adequate, but that's not the main purpose of our discussion today. Anyway, we could unpack that

00:16:59--> 00:16:59

another time.

00:17:01--> 00:17:43

So how do we address strong AI from this perspective? How do we address it? The previous slide assumed that strong AI is a possibility and asked how we could respond. But now we're questioning: is strong AI even a possibility? And I would argue, of course not. And this is the consensus, from my understanding and reading of philosophers of the mind, because there are problems for strong AI. The first problem is what you may know as the hard problem of consciousness. There are two main questions concerning the hard problem of consciousness, and we're focusing on the second main question, which is: can inner subjective conscious experience

00:17:44--> 00:18:21

arise from physical processes? It's a fundamental question. So let's unpack the hard problem of consciousness. Now, the hard problem of consciousness, brothers and sisters, is concerned with the nature and source of our conscious experience. And by the way, conscious experience is also known as, or referred to as, phenomenal experience in the language of the philosophy of the mind. And the hard problem of consciousness raises two key questions. Number one: what is it like for a particular organism to have a phenomenal conscious experience? In other words, what is it like for a particular organism to have an inner subjective conscious experience? Okay. This is a very key

00:18:21--> 00:18:36

question. In other words, what is the nature of someone's subjective conscious experience? And this is more of an epistemic question. There is a knowledge gap. I know what it's like for me to have a hot chocolate on a Sunday morning. But I don't know what it's like for someone else to have a hot chocolate on a Sunday morning.

00:18:38--> 00:19:16

And even if I were to map out all the neurochemical firings in their brain and correlate them to that experience, it would still not give me that knowledge of what it's like for them to have a hot chocolate on a Sunday morning. Even if I know what it's like for me, it doesn't necessarily follow that I know what it's like for them to have a hot chocolate on a Sunday morning, right? Even if I use words like sweet and creamy, these are just utterances that echo or describe my inner subjective conscious state; they're vehicles to meaning, and meaning is a representation of what's happening inside, subjectively. So my understanding of creamy and sweet could be totally different to that

00:19:16--> 00:19:57

person's understanding, even if they use the same descriptions when they're having a hot chocolate on a Sunday morning. So, you know, there is an epistemic gap, a knowledge gap, of what it's like for a particular conscious organism to have an inner subjective conscious experience. Now, the second question, which is really the question that we're focusing on concerning artificial intelligence, strong AI, is: why and how do phenomenal experiences, in other words inner subjective conscious experiences, arise from physical processes? And you could raise the question: what is the ultimate source of these experiences? Now, why is this important? Why do we need to challenge

00:19:57--> 00:19:59

strong AI? Well,

00:20:00--> 00:20:42

It's important because strong AI is supposed to really be conscious. It has human cognition, and it has awareness, and it has inner subjective conscious states. If it has inner subjective conscious states, then how on earth can we explain that these inner subjective conscious states arise from some hardware and software, which are fundamentally physical processes? Because physical processes are blind and cold, and we're going to unpack that in a few moments. But considering the hard problem, the epistemic question, Professor David Chalmers, who is known to have coined the term "hard problem", or at least he

00:20:42--> 00:21:21

was the one who popularized it. He says, the really hard problem of consciousness is the problem of experience, when we think and perceive there is aware of information processing. But there's, there's also subjective aspect, what unites all these states is that there is something it is like to be in them, all of them are states of experience. If any problem qualifies as the problem of consciousness, it is this one. In this central sense of consciousness, an organism and a mental state is conscious. If there is something it is like to be in that state. So he's really kind of summarizing the first question of the heart problem. Now, philosopher coin outta he's summarizing

00:21:21--> 00:22:03

the second question, which, by the way, is not an epistemic one but an ontological one, about the source and nature of our inner subjective conscious experiences. He says: "How does my brain's activity generate those experiences? Why those and not others? Indeed, why is any physical event accompanied by conscious experience? The set of such problems is known as the hard problem of consciousness. Even after all the associated functions and abilities are explained, one might reasonably wonder why there is something it is like to see letters appear on a computer screen." So the main part of what he's saying is: why is any physical event accompanied by conscious

00:22:03--> 00:22:44

experience? How does my brain's activity generate those experiences, especially when we understand that physical processes are blind and cold? AI processes are blind and cold, the hardware is blind and cold, whether it's a computer, a chip or a robot; these are fundamentally based on physical processes. Now, from a philosophical naturalistic perspective, they are blind and cold; to say otherwise is to defy philosophical naturalism and to give them some kind of magical properties, which, from my understanding, no philosophical naturalist worth their salt would even mention. And we know physical processes,

00:22:45--> 00:23:28

whether it's AI processes, a hardware or software program, a robot, a chip, whatever the case may be, the zeros and ones, the electronic on-off switches, these are blind and cold. Zeros and ones are blind and cold, brothers and sisters, especially when we're talking about strong AI, because fundamentally strong AI can be reduced to zeros and ones, and they're blind and cold. And zeros and ones, by the way, are just the kind of electronic representation of an on and off switch. So these zeros and ones, these physical processes, are blind and cold, meaning they do not have an intentional force directing them anywhere, and they are not aware of themselves or of anything outside

00:23:28--> 00:24:07

of themselves. So given the fact that the physical processes, the zeros and the ones in the context of strong AI, and the phenomenal reality, meaning subjective consciousness, the fact that we have awareness, are completely different, we can easily say, at least from a metaphysical point of view, that our metaphysical intuitions tell us they're not the same, they're totally different. So remember the key question that we want to consider from the hard problem: why and how do these phenomenal experiences, why and how does this awareness, this inner subjective conscious experience, arise from physical processes? And in the context of strong AI, if

00:24:07--> 00:24:49

it were to be the case, how does strong AI, true consciousness with awareness and cognition, arise from blind zeros and ones, cold zeros and ones? There is no intentional force directing them anywhere, and these zeros and ones are not aware of themselves or of anything outside of themselves. So to claim that the more we know about these physical processes, the more we know about the computer system and the program, and the more we develop from a knowledge perspective, the closer we will get to an answer, is mistaken; that is not going to give you knowledge about how inner subjective conscious experiences arise, because our metaphysical intuitions tell us that inner subjective

00:24:49--> 00:24:59

conscious experience, awareness, cognition from a human point of view is totally different than blind physical processes in the context of strong AI, blind zeros and ones. So

00:25:00--> 00:25:43

you know, to claim that if we know more about this physical stuff, we're going to know how it arises, is false, because it's the equivalent of saying that knowing more about the wall in front of you will give rise to knowledge about the moon. And that's not the case: we know that knowing more about the wall in front of you will not give rise to knowledge about the moon, because the wall and the moon are fundamentally different, just as inner subjective conscious experiences are fundamentally different to blind physical processes and blind zeros and ones. Okay. So one would argue against strong AI: how could it ever be the case that fully conscious artificial intelligence

00:25:43--> 00:26:26

can arise from non-conscious, cold, blind physical processes, blind zeros and ones? It doesn't make sense. It's like saying, let's believe in magic; that's what it's basically saying. And we know something cannot give rise to something else if it does not contain it in the first place or have the potential to give rise to it. Something cannot give rise to something else if it does not contain it in the first place, or have the potential to give rise to it. Likewise, blind, cold zeros and ones do not have inner subjective conscious experience, so how can they give rise to it? Blind zeros and ones, even if they interact in causally complex ways, cannot give rise

00:26:26--> 00:26:29

to inner subjective conscious experience from that perspective.

00:26:31--> 00:26:41

Now, one would argue: well, you've made some broad claims here, but there are physicalist approaches in the philosophy of the mind that try to explain consciousness.

00:26:42--> 00:26:58

I agree. There's something like eliminative materialism, reductive materialism, functionalism, emergent materialism, strong emergent materialism, weak emergent materialism. And there are many other empirical theories that are based on these fundamental approaches to the philosophy of the mind.

00:26:59--> 00:27:36

We don't have time to explain these, and they are not needed with regards to explaining strong AI and dealing with today's question; they're not needed. However, I will briefly introduce functionalism, which, broadly speaking, is the approach that underpins the computational approach to the mind, which is important for today's discussion. But if you want more unpacking concerning some of the isms I've just mentioned, and an unpacking of the different approaches to the philosophy of the mind, the physicalist approaches at least, there is going to be a forthcoming seminar on the hard problem of consciousness, philosophical naturalism, theism and the physicalist

00:27:36--> 00:27:38

approach to the mind in detail inshallah. Okay.

00:27:40--> 00:28:24

So functionalism, which really underpins the computational approach to the mind. So what is functionalism? Well, a functionalist basically defines consciousness as the functions or roles it plays, emerging from a set of relations within the organism, within a system; it doesn't matter whether it's a lump of grey matter or a computer system. And really these relations are relations between inputs, mental states and outputs. For example, I see my bus arriving, and that's the input; I experience the internal mental state of worrying or being anxious that I'm going to be late, which is the mental state; and then I run towards the bus

00:28:24--> 00:29:04

stop, which is the output. So you see there is a relation between the input, the bus arriving, the mental state, oh my God I'm going to be late, and then running towards the bus stop, which is the output. Now, what's very important to understand from a functionalist point of view, and this has been well understood if you read the works of Ned Block and others, is that just because you could understand or figure out the relations between the inputs, the mental states and the outputs, it doesn't mean that you now have knowledge of what it's like to be in a particular mental state, or what it's like for someone else to be in a particular mental state, which is really the first part

00:29:04--> 00:29:42

of the hard problem of consciousness, or of why these inner subjective conscious states, these mental states, arise from physical stuff. Knowing the relations between the inputs, the mental states and the outputs is not giving you an understanding of how these mental states, how this awareness, how this inner subjective conscious reality, arises from this kind of physical system. So it doesn't really address it at all; functionalism doesn't address it at all. So just to give you an example: I can understand that when someone sees a dangerous dog running towards them, which is the input, they would experience fear, which is the mental state, the

00:29:42--> 00:29:59

inner subjective conscious state, and then they'll run for safety, which is the output. Now, just because I understand the relation between the input, the dog running towards them, the fear, which is the mental state, and the output, which is running for safety, just because I understand that relation, it doesn't make me

00:30:00--> 00:30:19

understand what it's like for someone else to be in that conscious state, as we mentioned with examples previously. But fundamentally, from the point of view of today's seminar, it doesn't give us any understanding or any explanation of how these mental states arise from physical processes, how they arise from blind, cold physical processes.
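As a rough sketch of what a purely functionalist description looks like, here is a small, purely illustrative Python snippet; the stimuli and labels are invented for this example. It captures only the input-to-output relations and, as argued above, says nothing about what it is like to be in the state.

```python
# A functionalist-style table: a posited mental state is characterised only
# by its relations to inputs and behavioural outputs. Nothing in this mapping
# touches the subjective, first-person character of the state.
FUNCTIONAL_ROLES = {
    # input               -> (posited internal state,        behavioural output)
    "bus arriving":          ("anxiety about being late",    "run to the bus stop"),
    "dog running at me":     ("fear",                        "run to safety"),
}

def respond(stimulus):
    state, behaviour = FUNCTIONAL_ROLES[stimulus]
    return f"input: {stimulus} -> state: {state} -> output: {behaviour}"

print(respond("dog running at me"))
```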

00:30:20--> 00:30:53

Just to remind you: blind, meaning there is no intentional force directing these physical processes anywhere; and cold, meaning these physical processes are not aware of themselves or of anything outside of themselves. And from the point of view of strong AI, fundamentally, the zeros and ones, which are really a representation of an electronic on and off switch, are not aware of themselves or of anything outside of themselves, and also, brothers and sisters and friends, there is no intentional force directing these zeros and ones anywhere, from that perspective, right. So that was the first main problem.

00:30:54--> 00:31:21

It's the first main problem, the fundamental problem: the second question of the hard problem of consciousness. How does awareness, how does this mental state, this inner subjective conscious experience, this phenomenal state, arise from physical processes, when these physical processes are blind and cold, when there is no intentional force directing them anywhere, and they are not aware of themselves or of anything outside of themselves? It just doesn't make any sense.

00:31:22--> 00:31:58

Because remember, something cannot give rise to another thing if it does not contain it, or have the potential to give rise to it. And these things, these physical processes, whether it's a computer system, whether it's reduced to the electronic version of an on and off switch, the zeros and ones, they don't have any intentional force directing them anywhere, and they're not aware of themselves or of anything outside of themselves. So how can you combine these things to produce something that has awareness? To make such a claim is like saying, "I believe in magic", right? So that's the first main problem. The second main problem, which is the key problem, is what you call

00:31:58--> 00:32:08

the syntax-semantics problem. It can be formulated in the following question: can artificial intelligence attach meaning to symbols?

00:32:09--> 00:32:11

The first thing to understand is this.

00:32:12--> 00:32:58

Artificial intelligence is really an extension of us. It's an extension of our own awareness, an extension of our own consciousness, an extension of our own cognition. Remember, artificial intelligence from this perspective is not an independent system with the ability to engage in real cognition. AI, artificial intelligence, was designed, developed and made by human beings, who can attach meaning to symbols. Human beings can attach meaning to symbols; we can do that. AI is a protraction of our ability to engage in real cognition, in this case our ability to attach meaning to symbols. We can do that; computer systems can't do that. They just

00:32:58--> 00:33:02

simulate our real cognition.

00:33:03--> 00:33:50

And William Hasker really eloquently summarizes this point. He says: "Computers function as they do because they have been constructed by human beings endowed with rational insight. A computer, in other words, is merely an extension of the rationality of its designers and users; it is no more an independent source of rational thought than a television set is an independent source of news and entertainment." All computers do is mirror our ability to attach meaning to symbols; they don't have that ability themselves. For them, it's just an arrangement of symbols. It is syntax, it is not semantics.

00:33:50--> 00:33:52

Okay, let me break this down further for you.

00:33:54--> 00:34:26

When we're talking about syntax and semantics, we're talking about this concept in the philosophy of the mind called intentionality. Okay? So when we say AI cannot have any real cognition, that it only simulates our cognition, what we're saying is that it doesn't have intentionality, because human beings are only really intelligent from that perspective; we have intentionality. So what is intentionality? Intentionality is that our reasoning, our cognition, is about or of something, and that is associated with meaning, okay?

00:34:27--> 00:34:59

Conversely, computer programs are not characterized as having meaning. All they do is manipulate symbols, remember the zeros and ones; they manipulate zeros and ones, that's all they do, really. Fundamentally, they're electronic on and off switches. All they do is sequence many, many thousands and millions, if you like, of electronic on and off switches; that's all it is. It could be on, off, on, on, off, on, off, whatever the case may be. It is a complex arrangement of those; that's what it is. And those things

00:35:00--> 00:35:42

do not have meaning; it's just syntax, just an arrangement of symbols, okay, and they cannot attach meaning to those symbols. Now, for the computer system, for the AI system if you like, the symbols are not about or of anything; all the computer can "see", in inverted commas, are the symbols it manipulates, irrespective of what we may think the symbols are about or of. So computer systems, or the way computer systems manipulate these symbols, do not have the feature of intentionality; they're not about something or of something, right. In other words, you could even extend it to say they're not aware of themselves or of anything outside of themselves. But from the point of view of intentionality, it is just

00:35:42--> 00:35:54

on and off switches; it's not about something or of something. But when human beings have cognition, it is about or of something, and that relates to meaning.

00:35:55--> 00:36:35

So, let's break this down a bit further to really get you to understand the difference between meaning and symbols, in other words semantics and syntax. Syntax is like symbols, and semantics is meaning. So take these two sentences, okay, the following sentences: "I love my family", which is English, and "Agapo tin oikogeneia mou", which is Greek. Now these two sentences have the same semantics, the same meaning, in other words "I love my family", and this refers to the semantics, the meaning of the sentences. But the syntax is different; in other words, the symbols are not the same, they are unalike. If you never knew anything about the Greek language, and

00:36:35--> 00:36:48

I told you to put the alpha there, the gamma there, the alpha next to the gamma, and so on and so forth, and gave you the right spacing, and gave you the right symbols, and made you arrange them in the correct way, it wouldn't give rise to meaning.

00:36:50--> 00:37:32

It wouldn't give rise to meaning. You could use any language. For example, you could say "I love you" in English, and then you could say in Turkish "Seni seviyorum", which means I love you, and you could say in Greek "Se agapo". These three sentences have the same semantics, the same meaning, I love you, but they have different syntax. Now, if you only knew the Turkish language, and you knew how to spell "Seni seviyorum", when you spelt it you could attach the meaning to those symbols, because you know the meaning in that language. But if you didn't know English, and you didn't know Greek, even if I gave you all the right combinations of the symbols, in other words the letters, it would not give

00:37:32--> 00:38:14

rise to the meaning of "I love you" for you. And this is very, very important to understand. And therefore, from this perspective, the following argument can be developed. Computer programs are syntactical, based on syntax, in other words based on the manipulation of symbols. Minds have semantics; minds have meaning. Syntax by itself is neither sufficient for nor constitutive of semantics; so symbols, arrangements of symbols, by themselves are neither sufficient for nor constitutive of meaning. Therefore, computer programs by themselves are not minds. Therefore they don't have full consciousness, and you can't have strong AI. And if you can't have strong AI, then religion

00:38:14--> 00:38:14

is not undermined.

00:38:16--> 00:38:42

Now, this leads us to the Chinese Room thought experiment of Professor John Searle. Brothers, this is just a phenomenal, phenomenal thought experiment. And I'm going to use my book, The Divine Reality, the newly revised edition, to read John Searle's Chinese Room thought experiment. It's quoted in the book, and the references you have at the end of this seminar on the appropriate slide. So

00:38:43--> 00:39:27

let me read it for you. What is the Chinese Room thought experiment? Now listen to it very carefully, and I'll explain it further as well, but just listen to it. "Imagine you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you, like me, do not understand a word of Chinese, but that you are given a rule book in English for manipulating the Chinese symbols. The rules specify the manipulation of the symbols purely formally, in terms of their syntax, not their semantics. So a rule might say: take a squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two. Now suppose that

00:39:27--> 00:39:59

some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that, unknown to you, the symbols passed into the room are called questions by the people outside the room, and the symbols you pass back out of the room are called answers to the questions. Suppose, furthermore, that the programmers are so good at designing the programs, and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable

00:40:00--> 00:40:32

from those of a native Chinese speaker. There you are, locked in your room, shuffling your Chinese symbols and passing out Chinese symbols in response to incoming Chinese symbols. Now, the point of the story is simply this: by virtue of implementing a formal computer program, from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you do not understand a word of Chinese." So, brothers and sisters, from this perspective,

00:40:34--> 00:41:09

this is similar to what's happening with a computer program or an artificial intelligence system: you have zeros and ones, and just because the zeros and ones are combined in a particular way to produce what seems to be thinking, what seems to be cognition, it doesn't mean it's real cognition like a human being's, with awareness and with a particular conscious mental state, an inner subjective conscious mental state. It just simulates it, just like the person in this room simulates understanding Chinese but has no understanding of Chinese. They just have a rulebook, just like a computer program, that is able to take these symbols and manipulate and combine them in a

00:41:09--> 00:41:29

particular way and to produce other symbols. Even if it's deep learning or machine learning, all of that really is just inputs and outputs, the manipulation of complex symbols, which are fundamentally reduced to zeros and ones, which are fundamentally electronic versions, or electronic manifestations or representations, of on-off switches.

00:41:30--> 00:42:12

So that's what it is. It's a complex program that puts symbols together, that simulates cognition, simulates rational insight, simulates cognition with apparently some kind of awareness, right? All it is, is the manipulation of syntax; there is no way that the system can attach meaning to the symbols. And this thought experiment is brilliant, absolutely brilliant, because you have, in effect, a complex computer program in the English language. Someone is passing Chinese symbols into the room; you don't know the meaning of the Chinese symbols, but you have the program in place, which is in the English language, and it says: if you see this squiggle-squiggle, put this

00:42:12--> 00:42:35

squoggle-squoggle next to it. And you do that, and you produce the right answers. Now, people outside the room think that you know Chinese, that you understand Chinese, that you understand the meaning behind these symbols. But you don't; you are just able to combine these Chinese symbols or characters in the right way. That's all you can do. And that's what a computer program does.
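As a rough illustration of that last point, here is a minimal, purely illustrative Python sketch of a Chinese-Room-style program; the symbol strings are arbitrary placeholders (borrowing Searle's "squiggles"), not real Chinese, and the rulebook is invented for this example.

```python
# A program that "answers questions" purely by matching incoming symbols
# against a rulebook and passing other symbols back out. At no point does
# any meaning get attached to the symbols; it is syntax all the way down.
RULEBOOK = {
    # incoming symbols      -> symbols to pass back out
    "squiggle squiggle":       "squoggle squoggle",
    "squoggle squiggle":       "squiggle squiggle squiggle",
}

def chinese_room(incoming):
    # The "person in the room" only pattern-matches against the rulebook.
    return RULEBOOK.get(incoming, "squiggle")  # default symbol if no rule matches

print(chinese_room("squiggle squiggle"))
```

To an outside observer the outputs may look like understanding, but the lookup itself never involves what any symbol means.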

00:42:37--> 00:43:02

Now, brothers and sisters, there is something called the systems reply, because there is an objection, and Professor Searle understood this objection quite well. The objector might respond to this by arguing that although the computer program doesn't know the meaning, the whole system does. Now, Searle calls this the systems reply, and he answers it very well. Basically, think about it.

00:43:04--> 00:43:08

Why is it that the program does not know the meaning?

00:43:09--> 00:43:49

The reason the program does not know the meaning is because it has no way of assigning meaning to the symbols, correct? But since the computer program cannot assign meaning to the symbols, how can a computer system which relies on the program understand the meaning? You cannot produce understanding just by having the right program; that's the point. And Searle even extended the Chinese Room thought experiment to show that the system as a whole does not understand the meaning. He says: "Imagine that I memorize the contents of the baskets and the rule book." So imagine the contents of the baskets, the Chinese characters: you memorize them, and you memorize the rulebook,

00:43:49--> 00:44:28

and you do all of the calculations in your head, right? You can even imagine, as Searle puts it, that "I work it out in the open; I might even write it out and work it out in the open. There is nothing in the system that is not in me, and since I don't understand Chinese, neither does the system." So what he's basically saying is: imagine you had the baskets full of Chinese characters in your head, and the rulebook in English telling you how to manipulate the Chinese characters to produce the right answers. Even if all of that was in your head, and you were to work it out on a blackboard, for example, or a whiteboard, even if you were to do that, you still won't

00:44:28--> 00:44:49

understand the meaning of those Chinese symbols; you would just be able to manipulate them in the right way. So the whole system comes into your head, and even though the whole system is in your head, you still don't have any way of attaching meaning to the symbols; you can only manipulate the symbols, in this case manipulate the Chinese characters, to produce the right answers for the observer, who actually already knows the meaning.

00:44:52--> 00:44:59

But as with everything in the philosophy of the mind, there are further contentions. Now, Lawrence Carleton postulates that

00:45:00--> 00:45:43

Searle's Chinese Room thought experiment commits the fallacy referred to as the denial of the antecedent. Carleton maintains that Searle commits the fallacy because we are given no evidence that there is only one way to produce intentionality, and he claims that Searle is assuming that only brains have the processes to manipulate and understand symbols, in other words intentionality, and computers do not. And so Carleton presents the fallacy in the following way: to say "certain brain process equivalents produce intentionality, and X does not have these equivalents, therefore X does not have intentionality" is to commit the formal fallacy of denying the antecedent, which is a logical

00:45:43--> 00:46:29

fallacy. However, Dale Jacquette maintains that Searle does not commit the formal fallacy if an interpretation of Searle's argument is: if X is intrinsically intentional, then X has certain brain process equivalents. And I don't really find this objection satisfying in any shape or form; it's a pointless objection. But nevertheless, what Jacquette continues to argue on behalf of Searle is, well, you know, that this would be a concession to functionalism. And he argues that functionalists maintain that there is nothing special about protoplasm, the brain, such that any properly organized matter instantiating the right input-output program would

00:46:29--> 00:46:52

duplicate the intentionality of the mind. So from this perspective, Searle seems to admit that machines could potentially have the ability to understand Chinese if these machines were arranged with the correct input-output program that duplicates the intentionality of the mind. Right? However,

00:46:53--> 00:47:35

Searle states: "I do see very strong arguments for saying that we could not give such a thing", in other words intentionality, "to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements", in other words arrangements of symbols, the syntax. So he was saying, hypothetically, from a functionalist point of view: if, hypothetically, we could get the right input-output program, whether it's in protoplasm or a computer system or anything else, it's irrelevant for the functionalist, as long as there is properly organized matter that has the right input-output program and it duplicates intentionality, that this

00:47:35--> 00:48:04

input-output program can somehow be about or of something. Yes, if it can duplicate the intentionality of the mind, then maybe we can have a system that is not a human that can be fully conscious. But does such a system exist? No. And also, the way we understand computers today, especially with the binary system, zeros and ones, and the arrangement of symbols, in other words syntax,

00:48:05--> 00:48:28

how can you ever have intentionality? Because all computer programs, all computer systems, a computing machine, are defined solely in terms of computational processes over formally defined elements. In other words, just an arrangement of symbols, a complex arrangement of symbols, and symbols alone do not give rise to meaning, do not give rise to semantics.

00:48:29--> 00:49:13

And that's why, you know, even if the functionalist may have a point, well, the question here is: are conscious machines possible? Can we have such a machine in place? Well, in order to have such a machine in place, the robot or the machine would have to be able to attach meaning to symbols. But that would require something other than computational processes over formally defined elements, meaning it would have to have something other than the ability just to manipulate symbols; it would have to have the ability to attach meaning to the symbols. Do such machines exist? No. Could they exist? If they could, they would probably have to be able to attach meaning to the

00:49:13--> 00:49:49

symbols. And could they do that? Well, such a thing just doesn't exist. The computer programs of today, the computer programs of tomorrow, the way deep learning is being developed in machine learning and AI, are fundamentally just a complex arrangement of syntax, in other words symbols, fundamentally zeros and ones, fundamentally the electronic version of on and off switches combined in complex ways. Even with many, many layers of these, they are fundamentally just

00:49:50--> 00:49:59

symbols arranged in a particular way, syntactical arrangements; the system itself, or the program itself, has no way of attaching meaning to

00:50:00--> 00:50:44

those symbols. It's just a simulation of human consciousness; it's not like human consciousness at all, because humans can attach meaning to the symbols. The program, just as William Hasker said, is just a kind of mirroring of that, a simulation of that. So even if one were to argue with me that, from a functionalist point of view, you can have a system in place that has the right input-output type of program, that has intentionality, well, where is it? Because we're talking about strong AI in the form of robots, in the form of computers, in the form of the hardware that we know today. And the programs that we have today are just based on syntax, not

00:50:44--> 00:51:15

semantics, symbolic arrangements, the arrangement of symbols. So you're talking about a hypothetical that just doesn't exist. That's the point here. And even if you were to adopt a functionalist understanding, and that's what we addressed earlier in the seminar, remember: functionalism is just the relations between inputs, mental states and outputs, and even if you know the relations, it doesn't explain why these mental states arise from seemingly cold, blind physical processes.

00:51:17--> 00:51:48

So from this perspective, religion is not undermined, brothers and sisters. According to Rocco Gennaro, many philosophers agree with Searle's view that robots cannot have phenomenal consciousness. In other words, they cannot have inner subjective conscious experience, they cannot have the ability to attach meaning to symbols, and they are not aware of that process either. And that's why some philosophers argue that to build a conscious robot, qualitative experience must be present, and that's something that they're really pessimistic about.

00:51:49--> 00:52:31

To explain consciousness is to explain how the subjective internal appearance of information can arise in the brain, and so to create a conscious robot would be to create a subjective internal appearance of information inside the robot. A robot, no matter how advanced, would likely not be conscious, since the phenomenal internal appearance must be present as well. Why? Because AI cannot attach meaning to symbols; it just manipulates them in very complex ways, unlike the human mind. Therefore consciousness is only simulated, not actualized. Therefore there will never be a strong version of AI. Therefore religion is not undermined; you can have the Islamic view that

00:52:32--> 00:53:20

consciousness, or if you want to connect it to the concept of the ruh, the soul, was given to us from a supernatural perspective, and it's not based on blind, cold physical processes. So brothers and sisters, this is your bibliography: the websites that I referred to, very basic websites to understand machine learning and neural networks, and for you to understand basic computer science like the binary system, zeros and ones. And here are the references that I mentioned: Lawrence Carleton, Rocco Gennaro, William Hasker, Dale Jacquette, and the references of John Searle's various works. So brothers and sisters, I hope you enjoyed that. Let's now have some questions.

00:53:23--> 00:53:34

I realize I may have been turned the wrong way most of the time; I do apologize, I was looking that way when I should have been looking this way. But inshallah the important thing was that you listened to me and you went through this presentation, so

00:53:35--> 00:53:38

Bismillah, let's have some questions.

00:53:52--> 00:53:56

let's have some questions brothers and sisters

00:54:02--> 00:54:06

Okay, let's see if you guys have any awesome questions.

00:54:11--> 00:54:13

Bear with me. Scroll from the beginning.

00:54:16--> 00:54:17

Okay.

00:54:23--> 00:54:28

People are giving the salam. Wa alaykum salam wa rahmatullahi wa barakatuh.

00:54:39--> 00:54:41

Okay, so

00:54:45--> 00:54:53

I'm just looking through these questions to find a question. A lot of these things are comments, some of which are just reflecting what I've said.

00:55:08--> 00:55:19

Someone says: Salam alaykum. AI is simply a tool which has been mythologized by modernists; machines can never think. Yeah, they can't think like humans, that's for sure; they can only really simulate thinking.

00:55:24--> 00:55:25

I can't really

00:55:30--> 00:55:32

I can't really

00:55:34--> 00:55:38

I can't really see any questions.

00:55:40--> 00:55:45

There are some discussions that are not relevant or related to the topic as per usual.

00:55:59--> 00:56:00

Ah, this is a very good question.

00:56:02--> 00:56:09

Someone asks: I don't get it, aren't the electrical signals, considering the zeros and ones, the same signals that are produced by the brain?

00:56:10--> 00:56:21

But even if that were to be the case, so you have neurons firing, for example, here's a problem. You are making inner subjective conscious experience identical to those

00:56:22--> 00:57:02

electrochemical firings. And that is a problem, because you'll be assuming a physicalist ontology, or you'll be assuming a physicalist understanding of the mind; for example, you may be assuming eliminative materialism or reductive materialism, that you could reduce consciousness, in this case inner subjective conscious experience, to these neurochemical firings. But you would be assuming that, you wouldn't be proving that. Because remember the two main questions of the hard problem of consciousness. The first one is: what is it like for a particular conscious organism to have a specific conscious experience? I know what it's like to have a hot chocolate on a Sunday, but I

00:57:02--> 00:57:44

don't know what it's like for you to have a hot chocolate on a Sunday, even if we describe it as the same; I don't know what it's like for you. The second question is: well, how does inner subjective conscious experience arise from seemingly blind, cold physical processes? These two questions cannot be answered with the assumption behind your question, which in this case we could say is reductive materialism, or reductive physicalism: that you can reduce inner subjective conscious experience to neurochemicals firing. That's the question itself: can we? You're assuming it to be true without any evidence. Yes, neuroscience is the science of correlation; there may be a

00:57:44--> 00:58:01

correlation between neurochemicals firing and the inner subjective conscious experience, but it doesn't mean it is the same as the inner subjective conscious experience. And that's where you have these two main questions of the hard problem that you can't answer with reductive materialism or reductive physicalism.

00:58:02--> 00:58:40

And even other aspects of physicalism. And obviously, today's seminar wasn't about that; we are going to have a seminar specifically unpacking eliminative materialism, reductive materialism, functionalism, emergent materialism, and so on and so forth, in light of the hard problem. So we will address that. But hopefully the way I've addressed this question is enough, because you're assuming that the electrical signals in the brain, or the neurochemicals firing, just like the zeros and ones in the computer, are identical to inner subjective conscious experience. But that's not the case. Or that they give rise to inner subjective conscious experience in some way. But

00:58:40--> 00:59:17

you have to prove that; you can't just assume it, because that's the very thing in question. And when we focus on those two questions of the hard problem, we realize, oh, hold on a second: physicalism, especially in the conception of eliminative materialism and reductive materialism... oh, by the way, physicalism and materialism, these two terms in the philosophy of mind are used synonymously; they have different histories and slightly different meanings, but they are used synonymously. Okay. Anyway, so the physicalist understanding of eliminative materialism and reductive materialism, or reductive physicalism, can't answer the two questions of the hard problem of

00:59:17--> 00:59:17

consciousness

00:59:19--> 00:59:32

unless you want to assume them to be true. But then you're not proving your assumption; you're just assuming your assumption on faith. I hope that makes sense. If you want me to expand further on that point, let me know.
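To make the point about symbol manipulation a bit more concrete, here is a minimal, hypothetical Python sketch (the rule table and function name are invented for illustration, they are not from the seminar): the program swaps one set of symbols for another by rote lookup, which is all that is happening at the level of the zeros and ones, and nothing in it understands or experiences anything.

    # Hypothetical sketch: rote symbol manipulation with no understanding.
    # The program only matches and swaps strings; it attaches no meaning to them.
    RULES = {
        "ni hao": "hello",
        "xie xie": "thank you",
    }

    def respond(symbols: str) -> str:
        # Pure lookup: emit the paired output symbols if the input matches a
        # stored pattern, otherwise a fixed fallback. No awareness involved.
        return RULES.get(symbols, "???")

    print(respond("ni hao"))   # hello
    print(respond("xie xie"))  # thank you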

00:59:43--> 00:59:59

So Hamza, are you saying that this is a wholly frivolous pursuit, not worth investigating? No, I'm not saying that; I'm neutral concerning that question. I'm just saying it's not a problem for religion, or for a kind of theistic worldview, or a

01:00:00--> 01:00:03

specific religious worldview, with this understanding of consciousness and the soul.

01:00:36--> 01:00:58

Good question, Ramin, or good point. I think the real question is: is learning the same as consciousness? Because we can create systems that learn. Well, we create systems where you have an algorithm that works upon an existing data set, and even if that data set is not categorized, yes, it can still derive something.

01:00:59--> 01:01:25

It can categorize that data set and provide solutions. If you want to call that learning, then so be it. But is it learning the way we learn, with real cognition, with an element of awareness and an inner subjective conscious state or conscious quality to it? No. And that's where you're right: we can create systems that can learn in that limited sense, but then people call it AI, and it's a false equivalency; agreed, it's not the same, as I just mentioned.
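As a rough sketch of the kind of "learning" being described, an algorithm working on an uncategorized data set and deriving categories from it, here is a small, self-contained Python example (the numbers are made up for illustration): a toy one-dimensional k-means that groups the points by blindly minimizing distances, with no awareness of what the groups mean.

    # Illustrative sketch: unsupervised "learning" as blind categorization.
    # A toy 1-D k-means: the data are uncategorized numbers and the algorithm
    # derives two categories purely by shrinking distances. Nothing is aware.
    data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
    centroids = [data[0], data[-1]]  # crude initial guesses

    for _ in range(10):  # a few passes are plenty for this toy data
        groups = [[], []]
        for x in data:
            # assign each point to its nearest centroid
            nearest = min(range(2), key=lambda i: abs(x - centroids[i]))
            groups[nearest].append(x)
        # move each centroid to the mean of its group
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]

    print(centroids)  # roughly [1.0, 9.53]: two "learned" categories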

01:01:41--> 01:01:41

Go

01:01:54--> 01:01:58

let me just find a few more questions brothers and sisters bear with me.

01:02:02--> 01:02:10

Some of your comments are really funny, man. Honestly, it's like, you know, what do you guys eat? Or sometimes I want to say, what do you guys smoke?

01:02:13--> 01:02:14

When he killed it,

01:02:15--> 01:02:16

um,

01:02:18--> 01:02:24

Bear with me. Bear with me, let me just increase the size of this.

01:02:45--> 01:02:50

Okay, Rayhaan Khan says: should Muslims be against AI? No, AI is just a tool.

01:02:51--> 01:02:53

And you can use that tool for good or for bad.

01:02:54--> 01:03:08

It's as simple as that. I mean, I would argue, and I haven't done much research on this, but social media is a kind of interesting tool, because people think that they're free, but the algorithms,

01:03:09--> 01:03:29

they almost have the effect of social influence. Yeah, and that has been shown. So, you know, cognitive scientists say, if you want a healthy psychology or sense of self, don't scroll, or don't use Instagram.

01:03:31--> 01:03:40

Professor John Vervaeke from the University of Toronto, I consider him a friend; I've engaged with him a few times, twice, and I did a podcast with him.

01:03:41--> 01:03:53

And he's a great human being. In one of our interactions in Canada at the University of Toronto, he basically said: get off Instagram, it's literally not good for you.

01:03:54--> 01:04:11

So yeah, you know, algorithms can be used for good or bad. Consider an algorithm like a knife: you can cut a mango and share it with your brothers and sisters in humanity, or you can kill someone, right? So you can use it for good or for bad.

01:04:35--> 01:04:38

Okay, let's take one more question.

01:04:45--> 01:04:46

So bear with me

01:05:00--> 01:05:02

Sorry, interesting. So

01:05:04--> 01:05:39

This is the final question: why can't consciousness just be seen as something that emerges from an absurd number of computations that we can't track? This is usually what atheists say, but not all atheists see it this way. And this really is assuming the physicalist approach to the mind known as emergent materialism. Okay? Now there is a problem with emergent materialism. There are two forms of emergent materialism: the weak form and the strong form. And by the way, I'm just going to summarize this, but we're going to have a seminar in detail concerning emergent materialism.

01:05:41--> 01:05:52

The weak form of emergent materialism, brothers and sisters, basically says that there are many physical processes that are causally connected in complex ways.

01:05:53--> 01:06:41

And when we understand how they are causally connected in complex ways, then we will be able to understand how consciousness emerges. Because what does emergence mean? It basically says that there is a property that emerges that you cannot find in the individual processes: because there are many of these processes causally connected in complex ways, you have a property emerging that cannot be found in the individual parts of the system, or in the individual physical processes. This unique property only emerges as a result of these many physical processes being causally connected in complex ways,

01:06:41--> 01:07:18

okay? And they say, once we understand how it happens, then we'll basically have a true understanding. But that really just assumes reductive materialism or reductive physicalism to be true, which basically is another way of saying: well, yes, consciousness is identical to, or reduced to, or explained in some way by, physical processes. But reductive materialism can't answer the hard problem of consciousness, right? Or either of its questions.

01:07:19--> 01:07:54

Because remember the two questions of the hard problem of consciousness: number one, what is it like for a particular conscious organism to have an inner subjective conscious state? And number two, why and how do these inner subjective conscious states arise from seemingly cold, blind physical processes? Reductive materialism, this type of physicalism, can't answer those questions. Because to the first question it says: oh well, you know, it's a physical process in the brain, it's neurochemicals firing. Okay, but if I were to map out the person in question, his or her

01:07:56--> 01:08:11

neurochemicals and the neurochemical firings in their brain, the electrochemical signals in their brain, map those out and correlate them with the inner subjective conscious experience, that doesn't allow me to know what their conscious experience is. That's the first point.

01:08:12--> 01:08:19

Even if that mapping was the same as my mapping, and I had a similar experience, all we're doing is

01:08:21--> 01:08:54

using my descriptions of the experience and correlating them with their descriptions; even if we use the same descriptions, we're assuming that we're having exactly the same inner subjective conscious experience, which could well be false. So the point is, it can't answer the first question, because when you map out all the electrochemical processes and all the electrochemical firings that are happening in one's brain, it doesn't follow that you know exactly what it's like for them to be in a particular conscious state. Even if that electrochemical map is similar to mine when having the same type of experience, say eating a banana,

01:08:55--> 01:09:21

right? And I'm using certain descriptions, and they use the same descriptions too; it doesn't mean they're having the same experience. So is the first question of the hard problem answered? No. Is the second question answered, which is the ontological question: how do inner subjective conscious experiences arise from neurochemicals firing, from physical processes that are seemingly blind and cold?

01:09:22--> 01:09:34

They have no intention or force directing them anywhere, and they're not aware of themselves or of anything outside of themselves, so how does that happen? Because remember,

01:09:35--> 01:09:59

something cannot give rise to another thing if that thing is not contained within its source, or its source doesn't have the potential to give rise to it. Take something non-conscious, cold and blind, plus something non-conscious, cold and blind, causally connected in complex ways: it's still going to give you something that's non-conscious, cold and blind, unless you really believe in magic, right? So it doesn't answer those two questions. So the weak

01:10:00--> 01:10:12

form of emergent materialism actually just assumes reductive materialism, a form of physicalism, and it doesn't answer the two questions of the hard problem of consciousness: the epistemic one and the ontological one.

01:10:14--> 01:10:21

Now, the strong form is quite interesting. And actually, let me read the strong form for you because the strong form

01:10:22--> 01:10:39

doesn't even attempt to answer the questions of the hard problem of consciousness at all. The strong form basically says that it's too complicated; we're never going to know. We're just never going to know.

01:10:40--> 01:11:17

So let me just read it from my book. The strong form of emergent materialism argues that subjective consciousness is a natural phenomenon; however, any physicalist theory that attempts to address this reality is beyond the capacity of the human intellect. This form of emergence argues that we can get a new phenomenon X from Y without knowing how X emerges from Y. Strong emergent materialism maintains that we can get something new from the complex physical processes, but the gap in understanding of how this thing emerges will never be closed. Now, this approach doesn't explain the hard problem of consciousness; it doesn't answer the

01:11:17--> 01:11:57

two questions. And, you know, in my view it's no different to saying it just happens, it's so complex that no one knows how, which really is similar to what they accuse theists of doing: oh, God did it, we don't know how, but God did it, right? And Ravenscroft argues, in his introduction to the philosophy of mind, that strong emergent materialism will never be able to address subjective consciousness, and even if we were to be given the correct theory, and I quote, it would be equal to what hamsters could make of Charles Darwin's Origin of Species if a copy was placed in their cage. And then I continue to say: since we're trying to explain the hard problem of

01:11:57--> 01:12:16

consciousness, or answer its two questions, dismissing subjective consciousness as a mystery does nothing to prevent a rational person from accepting an approach that actually does clearly explain it. And theism coherently explains it, but that's for another seminar, brothers and sisters and friends.

01:12:18--> 01:13:00

And on that point, brothers and sisters, I pray you're all well, and may Allah bless every single one of you and grant you all the best in this life and the life to come. I've just seen a question here by Rohan Khan: does Islam have a dualist perspective? We're going to address that in another seminar, brothers and sisters. JazakumAllahu khayran for giving me the opportunity to share this information with you. If you found it useful, please share it with other people. Anything good has come from Allah subhanahu wa ta'ala, the source of all goodness. Anything bad, inaccurate or wrong has come from myself, my nafs, my ego, and shaytan. Assalamu alaykum warahmatullahi wabarakatuh.