Shadee Elmasry – NBF 167 All You Need to Know About Artificial Intelligence
Welcome everybody to the Safina... to the Safina Society live stream, Nothing But Facts. I
totally butchered that this time around. What is it? Welcome
everybody to the Nothing But Facts... what do I usually say?
Oh, welcome everybody to this, just, Nothing But Facts, the Safina
Society live stream. And when I say nothing but facts, I
mean facts are what's going to set us free from conjecture. All right. But today,
finally, we got some snow, and alhamdulillah it's a day in which
the snow has come down. It's not sticking yet, but it is coming down
in huge flakes. Such a gorgeous thing. Such a romantic thing, the
snow. Right? You gotta love the snow.
And on a Sunday... or a Monday morning, or Monday afternoon, in which
the month of Rajab had come in last night.
And the month of Rajab is one of those things where you want to
make sure you take advantage of the du'a on that night, because it
is related in many books of hadith that du'a
on the first night of Rajab is maqbul, accepted.
Next announcement is that
we're shifting things a little bit. We're moving Shama'il to Mondays,
because that's what makes sense. Shama'il is to be on Mondays
because everything that is sirah-related
is to be done on Mondays, just because the Prophet sallallahu
alayhi wa sallam was asked, should I fast on Monday, and the Prophet said,
wulidtu fih, I was born on it. So therefore things related to the Prophet
sallallahu alayhi wa sallam happen on Monday. We're on the age of the Prophet,
sinn Rasulillah sallallahu alayhi wa sallam: Bab ma ja'a fi
sinni Rasulillah sallallahu alayhi wa sallam. That's going to
be segment one. Segment two, we're going to have Mueen coming on to
talk about AI, and updates on AI. What are the facts on the
ground? What do we know? As investigators and police officers
like to say,
investigators, so what do we know? What are the facts? And then what
are the concerns or the theories surrounding these facts? So
that'll be segment two. Segment three, we'll take your questions
and answers as usual on a Monday, and that'll take us right up to
three o'clock. So let's get going straight up. Bab ma ja'a fi
sinni Rasulillah sallallahu alayhi wa sallam.
Haddathana..., haddathana Rawh ibn 'Ubadah, haddathana
Zakariyya ibn Ishaq, haddathana 'Amr ibn Dinar, 'an
Ibn 'Abbas, qala: makatha an-Nabiyyu sallallahu alayhi wa sallam
bi-Makkata thalatha 'asharata sanatan yuha ilayhi, wa bil-Madinati
'ashran, wa tuwuffiya wa huwa ibnu thalathin
wa sittin. This is the dominant opinion on the matter. Because remember,
in those days, people didn't always keep track of things.
People kept track... we know when the Prophet was born because Allah
made it on the year in which a massive event, known by tawatur, occurred,
which is al-Fil. And so we know that, and then they used to
count how many months, or whatever; everyone would count differently,
how many winters, whatever people would count. Well, that
actually wouldn't work in the lunar calendar, right? So you
would just know that year, and how many Ramadans have passed, or things
like that.
Yep.
Some of you are putting questions up on Instagram. I will
take the questions after we finish this segment. And between this and getting
Mueen on, I'll take some questions, and then after Mueen
we'll take questions again. Mueen is the guest; he's a person who has
a great interest in AI, and he's going to talk to us about it.
Have fun with that.
Haddathana Muhammad ibn Bashshar, haddathana Muhammad ibn Ja'far, 'an
Shu'bah, 'an Abi Ishaq, 'an 'Amir ibn Sa'd, 'an Jarir, annahu
sami'a Mu'awiyata yakhtubu, qala: mata Rasulullahi
sallallahu alayhi wa sallam wa huwa ibnu thalathin wa sittin,
wa Abu Bakr wa 'Umar, wa ana ibnu thalathin wa sittin.
So Abu Bakr and Umar also died at the same age as the Prophet
sallallahu alayhi wa sallam. Sayyiduna Abu Bakr was two years
younger
than the Prophet, and he died two years after the Prophet. Sayyiduna
Umar was thirteen years younger than the Prophet sallallahu alayhi wa
sallam and passed away some thirteen years after him.
Sayyiduna Uthman, however, lived a much longer life.
He lived into his 80s.
Next hadith: Haddathana al-Husayn ibn Mahdi al-Basri, haddathana
'Abd ar-Razzaq, 'an Ma'mar, 'an az-Zuhri, 'an 'Urwah,
'an 'Aisha,
anna an-Nabiyya sallallahu alayhi wa sallam mata wa huwa
ibnu thalathin wa sittina sanah. So why are they bringing the same
narration over and over, different narrators saying the same thing? Showing you
that many have said this,
not just one person who said this, or two people;
many, many people have said this. Okay, and that's the point here:
many have said this.
Next: haddathana..., 'an 'Ammar, mawla Bani Hashim, qala:
sami'tu Ibn 'Abbas yaqulu: tuwuffiya Rasulullahi sallallahu alayhi
wa sallam wa huwa ibnu khamsin wa sittin. So Ibn
'Abbas says 65. And Ibn 'Abbas is Turjuman al-Qur'an. He is the
chief interpreter of the Qur'an, from the companions of the Prophet,
who named him that: Turjuman al-Qur'an, the translator, meaning the
interpreter, of the Qur'an. Yet nonetheless, on this matter, we
say that he's not the one who's right.
Okay. And the answer that is right is that
he was 63 years of age, not 65.
Somebody can be the most noble person, you know; they could make
a mistake, they could say something wrong. Right? Ash-Shafi'i
honored Malik, right or wrong? He then went on to author a whole
different philosophy of Islamic law, usul, completely
different. Or I should say methodology: the philosophy is a bit
different, but the methodology of deriving rulings, completely
different. Was that supposed to be, like, offensive or something? No.
Some say that he waited until Malik passed away first, so that
he could maintain the adab and not hurt his feelings,
or not affect Malik in that way. Maybe, maybe not. But there
is nothing wrong with that.
So nobody should... so when people have a shaykh, and that shaykh has
an opinion, and you have a better one, and you're qualified (now, Shafi'i
was qualified), then you feel free to formulate it. I was just
talking to Shaykh Rami, and he said that Murabit al-Hajj...
one of his students wrote up a response to what Murabit
al-Hajj said. And we were talking
about how there are some people that have an ideological
groupthink: one of their shaykhs will say
something that's out of bounds, it happens because he's a human being,
and nobody will say anything.
Right? That's a problem right there. That means that all your
other refutations are insincere. In all your other refutations
the sincerity is not yet there. If you're truly sincere to the Book
and the Sunnah, that's what we follow, that's it, then you have
to admit when you're wrong.
You have to admit when your shaykh is wrong.
Because if the Book and the Sunnah can be
known, then the accuracy of your shuyukh can be known, right or wrong.
You never follow a shaykh because, oh, he has a secret source of
knowledge. That's nonsense, always. Okay? Legislation of
aqidah and fiqh, political matters, worldly matters, is never
ever going to be based upon somebody having a secret source of
knowledge, right? Oh, he sees visions that we don't know,
he knows the future that we don't know, so he has wisdoms that we
don't know... that may be the case, but it will never be the
source of our legislation. It will never be the source of a fatwa,
it will never be the source of a political action.
Okay, that stuff will never be the source. We have to understand
this. Secondly, if we can know the Qur'an and the Sunnah and the
Shari'ah, then we can know the accuracy of our own shuyukh.
Eventually you will come to that point, right?
You can study with somebody, and in the beginning you
really have no knowledge at all; he is the only source of knowledge
that you have.
Okay, you can bring in Mueen whenever... Mueen can even
pitch into this discussion before we get to the AI.
Good. Okay, good. No problem. So we're twenty minutes out until we have
the AI discussion. But for now, this concept is extremely
important in Islam: the difference between a student and a disciple.
A student will eventually come to learn the sources of his teacher.
The student will eventually come to learn, and if he advances,
he will be able to assess his teacher. But the
disciple always remains with his head bowed, and he never looks at his
teacher. That's the difference. That's the difference between
student and disciple. When we're studying aqidah and fiqh, anyone who
advances can look at the
methodology of his teacher and the conclusion and make an assessment,
and that does not decrease the teacher's rank one
iota. What decreases his rank is if he does a moral wrong,
like he does something haram, for example. Or what decreases his
rank is another situation, where he becomes arrogant and refuses to
admit his error. Or a decrease in his rank if he deviates from
something that is what we call qat'i, something that should never
even be a question. Then we'd say, yeah, okay, that's something that
we...
that's a deviation. But if you ever have a teacher, and that
teacher taught you everything you know about a certain subject, and
you advance to another teacher, then another teacher, and then you come
back to your original teacher and say, okay, well, he actually made
a mistake here, how does that decrease his rank? Did you
take him as a teacher because he was ma'sum?
There are groups out there
where, in effect, the right and the wrong is not determined by the
Book and the Sunnah; it's really determined by their teacher,
and that's a problem. And that's where you get... you have to close
yourself off from the Ummah.
And anytime you see, even among Sunnis, this concept of a Sunni cult:
yes, they're upon the Sunnah, but the treatment they
give to their teacher, versus the rest of the Ummah, forces them to
cut off everyone else, and they have to cut off everybody else.
Right? That type of ideological, cultish behavior, that
is a problem. Okay, a major problem. Right? So we all have to
realize: when you take on a teacher, you do not take him on as
being ma'sum.
You take him on because he's willing to teach you; he went out
of his way, he learned something that you didn't learn, and
he benefited you. And you owe him respect for life, unless he
deviates, completely goes crazy. But otherwise, if he makes some
mistake on a small matter, a political decision, a practical
decision, or he just made a mistake in fiqh, or even in aqidah in
the furu'...
No... so what? People make mistakes.
And as he said,
as Shaykh Rami said, Murabit al-Hajj is strict about being
respected by his
students, yet one of his students wrote a response
saying that the fatwa of my shaykh was wrong.
That doesn't take anything away from him at all, Shaykh Rami
told me that.
And two shuyukh I know go at it all the time on fatwa, on machine-slaughtered
chicken; they just differ all the time on machine-slaughtered
chicken.
They differ all the time on the subject; what's wrong with that?
And one goes, okay, good, there's more chicken for me. And the other
one goes, enjoy your maytah. They're even joking about it.
Because it's a matter of zann, it's a matter of speculation, you're
thinking: if I'm thinking and you're thinking, then my thought
and my methodology is correct, and your methodology is correct, and
you arrive at different conclusions. Obviously, some
perceptions are going to be different along the way, so you're
going to arrive at different conclusions. You're not
blameworthy.
And we're saying all of this... you wonder why we're saying this.
We're saying this because here we have Ibn 'Abbas saying the
Prophet was 65. Previous to that, it's said that he was 63.
And another narration from Ibn 'Abbas says 63. So what probably
happened?
Ibn 'Abbas probably changed his position.
When you have two narrations from Ibn 'Abbas, one saying 65, one saying 63,
then without a doubt he changed his position; obviously he changed his
position, right? It's okay to change your position.
Mueen, is it alright... let's finish this chapter real quick.
Haddathana Muhammad ibn Bashshar, haddathana Mu'adh ibn Hisham,
haddathani abi, 'an Qatadah, 'an al-Hasan, 'an Daghfal ibn
Hanzalah, anna an-Nabiyya sallallahu alayhi wa sallam qubida
wa huwa ibnu khamsin wa sittin. There you go: another one, a Sahabi,
said 65.
Good.
Let's see the next one.
'An Anas ibn Malik, who spent a lot of time with the Messenger, peace
be upon him, and lived long after him, so we can gather information.
What does he say? Kana sallallahu alayhi wa sallam
laysa bit-tawili l-ba'in: the Prophet, peace be upon him, was not
too tall; wa la bil-qasir: nor was he short; wa laysa bil-abyadi l-amhaq:
nor was he pale white; wa la bil-adam: nor was he black, like very dark;
so his skin was between the two. So
whatever two sides you have, he was in the middle. Wa la bil-ja'di
l-qatat: his hair neither was very curly; wa la bis-sabt: nor was it perfectly
straight. Ba'athahu Allahu ta'ala 'ala ra'si arba'ina sanah: Allah
sent him while he was forty years old, at the head of his fortieth year.
Fa-aqama bi-Makkata 'ashra sinin: he stayed in Mecca for ten
years; wa bil-Madinati 'ashra sinin: and ten years in
Medina. Wa tawaffahu Allahu 'ala ra'si sittina sanah: so he
concluded sixty years. What did Anas miss there? The three years of the
secret da'wah. Right? Wa laysa fi ra'sihi wa lihyatihi 'ishruna
sha'ratan bayda': there was not in his hair or his beard
more than twenty white hairs; the number of white hairs in
his hair and his beard was less than twenty.
Okay.
So what do we have about that? Anas ibn Malik giving another
number, and what do we say about that? That's a mistake he made: he
missed the three years of the secret da'wah that was just to the family.
Okay.
Isn't... there's nothing wrong with saying your shuyukh have made a mistake,
if we're here saying it about a Sahabi.
A Sahabi... the Sahaba are not at their rank because everything they ever
said was right. The Sahaba are not what they are because every single
thing they said or action they took was correct. It was not
because they lived in the desert that they're Sahaba, no;
not because they lived in the desert, because they were hungry, because of
that, no. It's because of their sidq and their istiqamah on the path
of the Prophet sallallahu alayhi wa sallam; that's
what they were upon. So therefore one says: okay, no, that's his
narration, that's not what we're going by. If you go to Umar ibn
Abdul Aziz, that is what he told...
who was it, the one Umar ibn Abdul Aziz told?
Subhanallah, his name is escaping my mind.
What was his name? Subhanallah, I don't know how I'm skipping his
name; because I haven't eaten. So
Umar ibn Abdul Aziz is the one who said: gather the hadith of
the Prophet; leave off the strictness of Ibn
Umar, the leniency of Ibn Abbas, and the odd statements of
Ibn Mas'ud.
Okay.
Subhanallah... Muhammad ibn Muslim, what's he known by? How is it
skipping my mind right now? Muhammad ibn Muslim.
Okay, anyway, it's skipping my mind.
That's what he said about the Sahaba, about their fatawa.
They had fatawa, they had statements; they had fatawa on matters that
were
up for discussion.
So they concluded... and he's a Tabi'i,
and he's saying this one was too strict, this one was too
lenient. Strict and lenient relative to what? Relative to the
opinion of the other scholars, right, not strict and
lenient relative to, like, some culture.
So if they're saying that about the Sahaba, and that's not
anywhere near a wrong action for them to do (they're speaking about
the judgments), then what about your ulama? When they say
something that should be altered, it should be corrected. And if 99% of
the Ummah is on one wavelength, and your shaykh on this wavelength is here,
and you think that you follow him because he's ma'sum, you've got
issues. Alright, let's go to segment two. Before that:
what I just said about the secret da'wah means that the da'wah of the
Prophet sallallahu alayhi wa sallam, for the first period of
time, was just his family and whom he selected. It was not an open
announcement to everybody. Okay, so he was gathering first; the
Prophet sallallahu alayhi wa sallam was gathering first the initial core
of Sahaba.
All right.
Okay, is there some big deal here? Kevin Lee, do we know him? UFC
fighter entered Islam. Very good. Kevin Lee: "Since being public about
my conversion as a Muslim, I've had a lot of people reach out to
give support, with messages and calls, and I feel the love. Allah
always had a plan, and I'm glad I'm on the right path." And he is
friends with another UFC fighter,
a convert to
Islam:
"...really accepted my place in life as a Muslim, and just that
alone kind of brought me into the brotherhood of speaking with more
Muslims, and
me forming, like, more of a bond with these people."
"I have accepted Islam, right, I've converted over to Islam, and
really accepted my place in life as a Muslim, and
just that alone..." No.
All right, very good. That's nice. Next, let's go to Mueen. We have
Mueen here today.
It is good. Mueen is
looking there sort of like he's coming out of Star Wars.
This is technical: raise your mic volume. Can you hear
me? Well... I will say that we got you. Yes. All right. Mueen,
talk to us. What do you think? What is going on in terms of AI?
First, I would like to ask you about the facts about AI.
Ibn Shihab! Thank you, Mohammed. I'm sorry, Ryan had
the name. The name that was escaping me: the famous legend of his time, Ibn Shihab
az-Zuhri, Muhammad ibn Muslim. Umar ibn Abdul Aziz said:
gather the Sunnah; leave off the excess strictness
of Ibn Umar, the leniency of Ibn Abbas, and the odd statements of
Ibn Mas'ud. So he's talking about their fatawa, their judgments, and he is
assessing their judgments. Oh, how can he assess their judgments?
Because we're a religion of sources, textual sources, which we
can all understand. If you understand them well, you can
eventually come to even assess your teacher.
So there's nothing wrong with that in our religion. We don't honor
people because they're sinless, or they never make mistakes
academically.
We honor people for their piety, their longevity, right, things
like that. Alright, so let's now move back to AI. So first, I
would like to ask you, Mueen: the facts.
Number two, the concerns. Alright.
A'udhu billahi min ash-shaytan ir-rajim. Bismillah ir-Rahman ir-Rahim.
Allahumma salli 'ala Sayyidina Muhammad wa 'ala alihi wa sallim. Bismillahi
alladhi la yadurru ma'a ismihi shay'un fil-ardi wa la fis-sama',
wa huwa as-Sami'ul-'Alim. So, to get started:
I think you've been brought on to talk about AI, right? But AI is
broad, right? It's like trying to bring on a doctor to talk about,
you know, medicine in general; there's a lot in medicine. So we
have to dive deeper, right? And I'm not an expert on AI, but I do
work with
technology, and I've been a technologist, I suppose you could
say, for most of my career. And there's a lot of chatter that's
been going on about AI over the last...
I mean, in the tech world it has been going on for a while, but in
the lay community, among people who are not aware, it's been going on for
the last few months, especially with the advent of ChatGPT,
which was announced by OpenAI back in December. And
this was... this is, and was, a free tool that's open to the
public. And it really shocked people, especially people who've
never seen
what some of the advances in technology have been over the
last,
you know, fifteen years. If you're coming fresh to this, and all of a
sudden you log on to ChatGPT, or you log on to Midjourney,
which I'll talk about in a bit, or you log on to any of these
newer AI generation tools, as we can call them,
it's surprising, and it's shocking to a lot of people.
Now, your next question was, you know, what are the facts? Okay, so the facts are
that I'm very hesitant to say that AI is something that we
should be worried about, but simultaneously I'm also hesitant
to say it's something that we shouldn't be worried about.
Because at the end of the day, right, Allah subhanahu wa ta'ala is the
one who controls all things, and even the invention and creation of
AI is from humans, and, you know, Allah subhanahu wa ta'ala has allowed
this to happen, right? So it's there in the world, so now we
need to contend with it, right?
Now, how do we need to worry? Right? Because there are two types
of worries. One is this sort of existential worry: what's
going to happen to me, what's going to happen to my family?
Look, even in a nuclear, you know, war, if you're meant to be taken
care of, you're going to be taken care of. So let's not even
discuss the possibility of what's going to happen to you; the answer
is probably nothing. Right? You know, even in the worst of times,
if nothing's gonna happen to you, nothing's gonna happen to you,
right? If Allah has decreed nothing to happen to you, nothing will happen
to you. So let's just leave that off the discussion, right? Let's
assume that nothing will happen to you. Now let's talk about what are
the practical things that may be impacted in your personal life and
with your family and the world with the growing use of artificial
intelligence technologies. So I'll stop there and then see if you
want to ask any questions. I just want you to separate between AI
and self-learning, machine learning.
So AI is a broad category, and there's a subcategory...
when we say AI, it's an easy term to use, because what is
artificial intelligence, right? Because intelligence, if we're
defining it, is the ability to process and think and make
conclusions, right? So, for example, is a computer intelligent
the way that a human being is? Not yet, right?
Because we would say that a human being is able to reason
at a much more complex level, right, at a capability that is far
beyond a computer. Now, someone may say, and this is
the belief of many people, such as Sam Harris, who believe that human
beings are just neurons being fired, and it's all just chemical
reactions, right, that there is no separation between a human being
and a machine: all you need to do is map all of the data points and
beliefs and behaviors that a human being has, and you
can map it all back to data, and you could simulate a human
being, right? This is the
belief amongst most materialist naturalists, especially in the AI
community. So that's why it's called artificial intelligence,
because the overarching idea is that
eventually we'll reach this point of intelligence where we can
essentially simulate a human being. Now, it's artificial in
that it's not real intelligence, right? And so machine learning is
a subcategory within artificial intelligence; machine learning is
different types of algorithms that are used in order to
do AI, you can say. So, for example,
there are different types of algorithms out there, machine
learning algorithms. We don't need to go into all the nuances, that
doesn't really matter.
But people use those techniques and algorithms in order to do
certain things. Now, someone might say, well, what's the difference
between regular programming and machine learning? Isn't it just
advanced programming? Well, yes, of course, it's just
advanced programming. But that's almost as simple as
saying, what's the difference between,
you know, bits on a, you know, motherboard and, you know, going
on to Gmail? It's like, well, yeah, they're both
programming, but one is obviously something that has much more
impact in the real world and is far more complicated and complex,
right? So it's not the same thing. So saying that, oh, AI is just
advanced programming, it's like, well, yeah, sure, so is the cell
phone, right? But it obviously makes a difference.
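To make that distinction concrete (a minimal sketch, not something discussed on the stream: the tiny email dataset and the spam example are invented, and the scikit-learn calls are just the standard fit/predict pattern): in regular programming a person writes the rule, while in machine learning the rule is fitted from labeled examples.

```python
# Minimal sketch: a hand-written rule vs. a rule learned from examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = ["win a free prize now", "meeting moved to 3pm",
          "free money click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (made-up examples)

# "Regular programming": a person writes the rule explicitly.
def is_spam_by_rule(text):
    return "free" in text or "prize" in text

# "Machine learning": the rule (the weights) is fitted from the examples.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)            # turn each email into word counts
model = LogisticRegression().fit(X, labels)     # learn which words indicate spam

test = "free lunch with the team"
print(is_spam_by_rule(test))                            # the fixed rule fires on the word "free"
print(model.predict(vectorizer.transform([test]))[0])   # the learned model weighs all the words
```

Both are "just programming", but in the second case nobody typed the rule; it came out of the data.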
So let me look at it this way. So ChatGPT,
made by OpenAI.
Yes, made by OpenAI, which is Elon Musk's company. Okay. Yeah.
That's irrelevant, I'm just pointing that out. They have a
database of information. Okay? Where they got that, we don't know.
Okay. And they've basically been able to program this thing to
simulate human writing.
Correct. So
AI is simply
mimicking, basically, what it sees already,
and finding solutions,
or gathering the data and then fashioning it together. So ChatGPT
is one level up from Google, essentially. Because with
Google, you can see the information, but you have to put it
together yourself; you see the information a lot slower, at the
pace of you clicking and reading, right, and you then have to synthesize
the information on your own. So ChatGPT took this one level up. So
that's why it is, in fact, a Google killer, really, because it now is
gathering far more information than you could have ever, and it's
synthesizing it in a way that you can read, and you don't have to
worry, right? So
Facebook arguments and Twitter arguments caused a lot of people
to study, right? A lot of people would not read or study unless
someone bothered them with a post. They go, then they read the
research, they come back with a really fancy answer. Okay, but
they had to research it themselves, they had to synthesize
it themselves. ChatGPT now has done that for them.
Is that an accurate summary of what's going on? Right.
It's accurate in that it does synthesize information, whether
that information... so whether that information is accurate or not, it...
But I can filter it. I could say: only tell me what the Encyclopedia
Britannica says about rhinoceroses, right? I could do
that on ChatGPT. Yes, you can filter stuff, yeah, depending. So
it doesn't... right now it doesn't have access to the internet,
right. It doesn't have access; it has access to what they fed it,
right? Which is correct. And it's been trained, essentially,
on the internet and massive amounts of data.
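One way to picture the "filter it to a source" idea (purely a toy sketch; the corpus, the keyword matching, and the prompt wording below are invented, and this is not how ChatGPT is wired internally): retrieve passages only from an approved corpus, and instruct the model to answer from those passages alone.

```python
# Toy sketch: only answer from an approved corpus of sources.
APPROVED_CORPUS = {
    "rhinoceros": "Rhinoceroses are large herbivorous mammals with one or two horns.",
    "fox": "Foxes are small omnivorous canids found on most continents.",
}

def retrieve(question, corpus):
    """Return only the passages whose topic word appears in the question."""
    words = set(question.lower().replace("?", " ").split())
    return [text for topic, text in corpus.items() if topic in words]

def build_prompt(question, corpus):
    """Wrap the retrieved passages in an instruction to use nothing else."""
    passages = retrieve(question, corpus)
    if not passages:
        return None  # nothing in the approved sources; better to refuse than to guess
    joined = "\n".join(passages)
    return ("Answer using ONLY the passages below. If the answer is not in them, say so.\n\n"
            "Passages:\n" + joined + "\n\nQuestion: " + question)

print(build_prompt("What is a rhinoceros?", APPROVED_CORPUS))
```

The prompt then goes to whatever model you use; the point is only that the sources are pinned down before the model ever sees the question.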
All right, yeah. So one thing: when we look at every AI, we have
to ask the methodology of the AI, and the sources, the
sourcing and the method. So with ChatGPT, we don't know what the
founders deemed knowledge worthy of feeding this thing. Because if
I came and I got a reliable source that said: here, we're
going to put all of, let's say, Shafi'i fiqh
in a database, and a human being came out and said,
yes, we got the Shafi'i fiqh, it was reliable, we tested it, we
made sure that the PDFs and the books and everything were
reliable, here it is, now you can go use it as a search engine...
wonderful, we all accept that. And scholars have been using these
search engines forever. Okay.
Do you all hear the construction?
It's very light. Okay, good. So all that this AI does is gather
that information faster than you ever gathered it, and then
it synthesizes it. Now here's my next question: what is the
methodology of synthesizing? So if I said to
ChatGPT, or to my hypothetical...
my example of a Shafi'i chatbot: what things break wudu in
the Shafi'i school according to the dominant opinion of an-Nawawi and
ar-Rafi'i?
Okay,
it's going to go to Nawawi and Rafi'i. What if I didn't put that? What if
I didn't put a filter? So that's where I'm asking: we
need to know, for every chatbot, the methodology, the sources, and
then the methodology of how it's synthesizing things. So this
depends a little bit on the model that it's been trained on,
right?
So for example, you mentioned, you know, if a specific book or a
specific text is not in this model, then it won't be able to
reference it; it's just not something that it will be able to
do. Now, ChatGPT is known to do something called hallucination,
right, which is that it will imagine that that thing is there,
and it will just hallucinate what that answer is and give it back
to you. Right. And other models... I know Google has one that they've
done that might come out called Sparrow. Sorry, not Google:
DeepMind has one called Sparrow, and Google has one called
LaMDA, and they also do similar sorts of hallucinations. Right.
And this is a problem that the AI community is dealing with,
right? How do you deal with this hallucination aspect of how
this information is presented? Because the way it really presents
information is it reads character by character, right? Okay: I read
the letter A, and then I match it up to what that A is supposed to
mean, and then I understand from that what the next letter
is that comes, and then from that it understands what a word is; it's
similar to how a human being would understand the word. And then, based
on all that, it's now able to go look up... so let's say it's
the sentence "the fox jumped over the log." It reads "the fox,"
and then it goes and tries to
understand what a fox is; it looks in its database of all these
things. And then from that, it's like, okay, "the fox jumped." Okay,
what does "jumped" mean? So then it attaches it back to that fox. And
so this is how it would construct its entire understanding of the
sentence.
fit, I want this thing. It's going to understand that question based
on those words, and then it'll go try to look up based on its
understanding, now it may understanding correctly, that's
why
in the modern iteration of these intelligent these models,
you have to be pretty specific in what you're looking for. Because
otherwise, if you're not, you're going to cause you know it to
hallucinate or give you something that's exactly that's why you need
to be able to the methodology, the sources and the methodology.
Right? The the methodology that you can add filters, that's really
great. Yeah, so But besides that, the sources of the other question.
So, right now Chatbot is just like a general somebody's saying my
thesis here. Oh, there you go. Now we got Zen, we got the guy we had
now we have the real guy, you know, he's the he's the guy who
actually could talk about the details of how these things work.
That's your mic.
Okay.
So that's how these things work. All that's fun. Let's
go into the science fiction stuff.
AI develops and grows and starts to manage bigger and bigger
things, but there has to always be a manual override to AI, right or
wrong? Okay.
The science fiction part of this thing asks the question: can
AI override the manual override?
Can it realize that this is a stopgap,
right, and override that manual override? And then you have all
your Netflix movies after that. Is that something that's pure
fiction, or is it even possible?
At the moment... so that first one: can it do that, is it able to?
Right now, it's not feasible. But at some point,
I can see some scenarios where it will become... it may become
plausible. So right now it is not even a scenario? Not even a
scenario. Okay. So cancel that right away. Right away, yes. Good.
Now, the next question is...
Now, next question is,
by the way, just for, can we be introducing the fees as well, just
so that the viewers know, some of these is out of Harvard? What did
you do in Harvard? I did research with artificial intelligence and
computational biology. Okay. As for what program an MA or PhD,
that is my postdoctoral work. postdoctoral, postdoctoral. So
after a PhD, this is one of these. Yes. He's a guy. He's the guy you
want to bring on? Yes.
Oh, Frenchie. Yes. Princeton. So we got postdoc at Princeton and AI
part of our crew. We got postdoc from Harvard and our crew. So we
got to mesh a lot really good crew here. And so in the fees, so where
did you do doc? I stayed in university. Okay. But then you
went to Harvard and your study and you said,
What conclusions you have that could benefit regular people from
my postdoctoral work? Yeah. So basically, I worked
with CRISPR, which is genome editing. So you can you can genome
So with genome editing, you can change people's DNA, within the cell,
in yourself.
Like, before they're born? Or after they're born? Now... yeah, after they're born. So
mainly, mainly we were focused on diseased cells.
So, for example, for leukemia: you might have seen the news
recently that there was a girl, seven years old, okay, who was
terminal with leukemia. Then they used genome editing to change
the one DNA part that was responsible for that leukemia. Now
she is completely cancer free. Amazing. Without any radiation,
without any chemo. I think the technique is called base editing. Base
editing. Everyone here in this audience, we're going through it.
Yeah. Subhanallah, that is amazing. Okay, keep going. Yeah. So my work
was basically how we can use these latest AI methods to make genome
editing more efficient. Where people are gonna go and make
themselves redheads one day and blondes the next day.
How does that work? Yeah. So there's... that's where the
ethics part can come in. Well, yeah, because I'm going straight
to human instincts. Okay, say I'm a consumer now: let's use this
technology for a consumer purpose, for just a personal,
whimsical purpose: let me make myself blond today. Yeah. So
it's basically... it's a big discussion in the scientific
community. Yeah. So for example, you can go into a woman's body
while she is pregnant and change the embryo's DNA to make it
not susceptible to some diseases. So the discussion is whether we
should even do it or not. For example, there was a Chinese scientist who
went rogue, and he changed the babies' DNA while they were still
in their mother's body.
And it sparked a big controversy, and later that scientist, like, he
really vanished, because it was so controversial. The Chinese
government, like... they disappeared him. Disappeared.
Okay, can you tell me something: what did he change? Gender? No. So
it was... I forgot the specific disease, but there's, there's...
Oh, it's for their health. For their health. So, okay, so a
specific part is responsible for the disease, so he went there and
changed that. But the thing is, you don't know if changing that
particular part permanently would render what side effects later in
their lives. That's the problem. So that's where the ethics... you've got to do
this on lambs and monkeys and cats and dogs. Yeah. And even that's an
ethical question, right? Even that's an ethical question, and
also, they don't even always transfer to humans. Of course, it
doesn't even transfer. And now, is the mapping of genetics complete?
The genome is complete, so you know exactly where the DNA is for
nails, for hair, for skin, for bone? Many of the... we
call them phenotypic traits, we know, like, what they are. Okay, so
that makes the editing... the possibilities are endless.
Endless, yes.
Okay, and you could do this... what do you mean, you could do this while the
person is alive? That means you could transform their hair color, their
hair thickness,
their skin color? Yeah... I'm not sure if you can change it while
they have already developed those, like, they have reached a certain
age; I'm not sure about that. Okay.
Now, how does this connect to AI? So this connects because, when you're sending
this particular... we call it a delivery vehicle, yep... that
the human being takes in, and then it goes into the cell. And it
has kind of a signature with which it understands where to go
and attach to the DNA, yeah, within those 3 billion letters.
Okay, yeah. And
so what happens is, it's not really foolproof. So sometimes...
a lot of the time, it will have off-target effects; like, it will also
change some other parts of the DNA. That's always the problem,
right? That's always the problem. So we were trying to find out if,
through machine learning, we can design the sequences in such a way
that they won't do this off-target editing; they will just
stick to that one particular target. So curing cancer in the future
will have nothing to do with clinical testing and everything
to do with DNA and AI? Right, yeah. I mean, that's what it sounds like.
The idea of "come in and walk in and let me look at you and
let me test your temperature," all that stuff is extremely
rudimentary; go straight to the cause now. And now they have these
sorts of even more advanced techniques where you can also
insert sequences, yeah, into your own DNA. Yeah. So what
they're doing is basically training these large models, just
like they trained ChatGPT: they are training it on the DNA
sequence, 3 billion letters. And then they are trying to generate
sequences which, if you can insert them into your DNA,
can permanently change some trait, things like that.
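To give a flavor of the machine learning step being described (a toy sketch only: the features, labels, and sequences are invented, and real off-target prediction tools use far richer sequence encodings and much more data): represent each candidate guide sequence with simple numeric features, label past experiments by whether they stayed on target, and fit a classifier that scores new candidates.

```python
# Toy sketch: score candidate guide sequences for staying on target (illustrative only).
from sklearn.linear_model import LogisticRegression

def featurize(seq):
    """Crude features: GC fraction and length (real tools use far richer encodings)."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return [gc, len(seq)]

# Invented training data: candidate sequences and whether they stayed on target (1) or not (0).
train_seqs = ["ATGCGT", "GGGCCC", "ATATATGC", "GCGCATAT"]
on_target  = [1, 0, 1, 0]

X = [featurize(s) for s in train_seqs]
model = LogisticRegression().fit(X, on_target)

candidate = "ATGCATGC"
prob = model.predict_proba([featurize(candidate)])[0][1]   # probability of class 1 (on target)
print(f"{candidate}: estimated on-target probability {prob:.2f}")
```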
Okay, okay, good. So these are all the, like, functional
uses of AI that are not consumer-based; this is only
within a specific field, right? So now let's shift over, let's
shift back to the consumer.
In your view, what is the number one thing that a regular person
needs to be aware of, a heads-up on how life is going to change with
AI, regular everyday life? When 2007 came, the smartphone
came around,
and then six months later apps were downloadable onto the
smartphone, and amongst them were social media apps; then the world
changed in one year. Right? Before that, a big revolution was
YouTube: anybody could broadcast themselves to the internet
through YouTube. That was 2005. These are massive technological
jumps. I mean, it's not even an invention, it's just the development
of a technology, a jump that many people weren't aware of, that life
would change drastically because of it. So if I'm a...
if I'm a regular guy, I don't want to get caught off
guard like I was last time with the smartphone.
Give me the lowdown: what should I expect? How's it going to change
life?
I think immediately, right now, what people will see is that lots of
things will become much easier.
That's what they will see: like writing an email. You just have to
tell it, "I want to reply to this email with this sentiment, in such
a way that this person doesn't get offended even though I am
replying negatively to him," and then it will generate this
email for you. Yeah, your shopping experience might get better.
That's because AI will learn... like, I'm almost certain
that Google will use this to
target their advertising more efficiently; right now it's very
inefficient.
So most of these changes right now will be mundane.
But the bigger shifts
will happen much later, in my opinion. That's because
the data that it is being trained on right now is just
internet data. And they are doing a lot of filtering on that data.
So, for example, the data that OpenAI trained on: they
filtered it through cheap labor from Kenya to remove, like, text
data about, like, you know,
* *, like, all these things that exist on these
forums. Who's doing this? OpenAI. OpenAI is doing it. So they are
removing this text data through this cheap labor
from Kenya.
From the dataset, so that ChatGPT doesn't train on that.
Otherwise, it will also generate that kind of data again, because it's
being trained on it. I see. Okay, so you mean manual removal? Manual
removal. And what happens is, these people are suffering from PTSD
from doing this work. Which means what? You mean, like, the nine-hour-a-day
exposure... exposure to these things? So what does that
actually mean? When they're removing the data, like, they see a website,
they hit X? Like, what does that mean, physically? Physically
speaking, what is the guy doing on the computer? So... I don't
know specifically what they are doing, but my guess would be: they
are finding this text, they are reading it, and then... The text, or
websites, texts from where? From the internet, all of this
internet data. Okay, it's probably been compiled on, like, some
UI that OpenAI created for them to read and review the
information. And then they're going through and, you know,
clicking X when they see certain keywords, and they delete that.
They couldn't create a software for that? It might be... it might be
a little bit more nuanced than just seeing the keyword and
deleting it; maybe this is probably why it's manual, right?
Because what if it's a paper that's talking about the problems
of *, right, or the problems of *? So they need people
to do it manually, and this is why it's cheap labor. These aren't PhDs who
understand, you know, the nuances of all these discussions. It's just:
okay, does this sound like something that's bad about *?
Then, you know, let's exclude this. Or is this
controversial? And I'm guessing there's probably a weighting
factor to it; it's not just, like, some binary yes-or-no answer. It's
like, okay, this is more problematic or less problematic,
and it's weighted based on something like that.
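The weighting factor is the guest's guess about how such a pipeline might work, not a documented OpenAI process. As a toy illustration of the idea, per-category severity ratings (from reviewers or an upstream model) could be combined with weights and thresholds instead of a single yes/no flag; the categories, weights, and thresholds below are invented.

```python
# Toy sketch: weighted severity scoring instead of a binary keep/delete decision.
CATEGORY_WEIGHTS = {"violence": 0.6, "hate": 0.8, "sexual": 1.0}   # invented weights

def severity_score(ratings):
    """Combine per-category ratings in [0, 1] into one weighted score."""
    return sum(CATEGORY_WEIGHTS[cat] * level for cat, level in ratings.items())

def triage(ratings, exclude_above=0.7, review_above=0.3):
    score = severity_score(ratings)
    if score >= exclude_above:
        return "exclude from training data"
    if score >= review_above:
        return "flag for human review"
    return "keep"

print(triage({"violence": 0.2, "hate": 0.0, "sexual": 0.0}))   # low score, so keep
print(triage({"violence": 0.9, "hate": 0.4, "sexual": 0.0}))   # high score, so exclude
```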
And these people are getting traumatized by exposure to *, * and
*, nine hours of their working day. This happens
similarly even in other content moderation, like
in Facebook, Instagram, all these places, because they hire,
like, people to manually go through many, like, controversial posts. So
for example, let's say some big person,
I don't know, Donald Trump, posts something, right? It's not, like,
some automatic, you know, approve or deny. As we know, it's
actually manually reviewed by somebody, human beings, who say
whether that post is controversial or not controversial,
and if it should be allowed. So yeah, let me ask
you this. Let's take a shift: who's in the lead? I think what
I'm hearing is that Microsoft is in the lead, because they
purchased ChatGPT.
Microsoft... and Google is actually sweating; they're behind for the
first time.
Before we get to that question... I actually was in
la-la land, yeah. Before we get to that question, I think he'll
know this a little bit better, but I want to answer the
last question as well, which is: what are some of the functional
issues that will occur, yeah, in day-to-day life.
Yeah, a lot of people that I talk to,
especially who don't understand
product development, right... a lot of people might understand
technology, right? Like, so for example,
GPT,
which is,
you know, it's the back end of ChatGPT, right?
The technology for this existed almost a year ago. What you're seeing in
ChatGPT, okay, yeah, it's a little bit more advanced, it's GPT-3.5,
but it was still
available for people to consume via their API for, you know, a
year now. But the normal public found out because some product team
decided that, hey, we need to take this technology, put a nice
interface on it, and be able to show it to the people. So there's a
difference between understanding things from a technological
perspective, right, which is, okay, here's what the
engineers understand, here's what all the geeks and the nerds and
stuff are building, okay. And then there's this
other level of understanding, which is, okay, how is this going
to functionally impact things, and how do we bring this to society? So I
think what people are a little bit myopic about, and short-sighted
about, is this idea that the growth of AI is exponential. Many people
are very forgetful about when the first iPhone came out. I don't
know if you remember, it didn't have an App Store, yeah, right? It
was just, like, the
iPhone. That doesn't mean that the first iPhone didn't
revolutionize technology. It was the first device that came out
that combined the iPod, the camera and the phone. Nothing had ever
done that before. It blew people's minds, right. And as soon as they
added the App Store, it blew people's minds again, right. And
in the beginning, I don't know if everybody remembers, the apps
were very rudimentary. You could do things like, you know, turn on a
flashlight, or you could do that lighter thing, and people would
be like, oh look, I can turn on the lighter, it's so cool, right?
But that was something that you couldn't do before. And this is
where we are in the stage of AI development.
For example, there are advancements in AI... there's GitHub,
which is a repository... you know, it's a code repository
management application. It's used by most coders.
They've, you know, as part of their suite, they've now
included a new
technology called Copilot. It allows you to have AI read your
code, understand it... and it's based off of the same fundamental
technology that ChatGPT is. It reads your code, understands it, and it
codes with you. And I started using it, you know, two months
back, and it's already increased my productivity by 30 to 40%. And
I know many people who are already using it. There are people who are
writing... you know, I saw a bunch of posts of people saying,
you know, I didn't have to go to my lawyer to write such-and-such
things, because I just had ChatGPT write it for me; I didn't have to
do X, Y, and Z things, because, you know, I was able to outsource
it to the AI. Let's assume that they've done it right. Well, it's
irrelevant whether they've done it right or
not, because people are still using it, right?
And so this is where the issue comes in, in terms of, like,
practical life.
People keep saying that, well, what ChatGPT gives you isn't
reliable, but I don't think anybody remembers when people were
using Wikipedia, and you would go to school, or, you know, your
college, and your professor would say, hey, don't use
Wikipedia, but people still used Wikipedia anyway, right? And then
they would just cross-verify what was on Wikipedia to make sure that
it was accurate. But almost every student that came after Wikipedia
existed used Wikipedia and said, oh, here's the list of the sources
that Wikipedia cites, let me go cross-reference it, make sure it's
accurate. But it was the starting point to begin your journey of
analysis. Yeah. Right. And similarly, ChatGPT will be the
same thing. You might say, oh, well, ChatGPT's not
accurate. Well, that depends on whether you know the subject or not. If
you know the subject, then it becomes a very good tool for you
to use to be able to do research and do many other things.
So it's a matter of...
Let's look down the line, because this version of ChatGPT, I mean,
it's supposed to be updated, according to OpenAI, to the next
version of GPT in the next quarter or two, right? And I'm sure it already
exists; people have used the beta, and it's far more advanced than
the current version. So what happens when we begin this
iterative phase, and we get to more advanced versions of these?
We need to think, like, two, three years down the line. And that's
pretty, pretty fast, right? If anybody has looked at Midjourney,
which is the image generation tool, and you go look... there's a lot of
videos out there now of what Midjourney was six months ago and
what it is today, right? Six months ago, it wasn't able to, you
know, recreate a human being or anything as good as it is now. Now
you look at it, and there's people... there's artists, many
artists, complaining that, well, this kind of just eradicates, you
know, a lot of the work that we were doing, because people are
able to take all this stuff. There's a lot of legal issues that
are happening now. For example, nobody could, in the past, take a
painting, take an imaginary
idea, say, like, a house on a hill, and say, hey, I want this to
look like how Hayao Miyazaki or
Pixar Studios or,
you know, some other artist has, you know, made it look. You
would have to hire somebody to do that. Now Midjourney can go and
make it; it can build that. So now there are all these legal troubles of,
you know, is Pixar going to start coming down on you and
saying, hey, you're not allowed to use this? Because
all of Midjourney was based on a catalog of images, and in those
images there were Pixar images, and that's how it's able to create the Pixar look.
Yes. Then let me ask you this. Every painter,
when he makes a painting: he walks through the Louvre, he looks at
stuff, he gets inspired by 500 images, he produces an image; that
image is based upon the 500 paintings he saw at the Louvre.
What's the difference? There is a difference, but...
do you want to take that one? Like, seriously, what is the difference?
You mean, looking through all of your work,
letting it settle in my mind, then pushing it aside, then making my
own thing? Clearly it's gonna look 25% like yours, 10% like
yours, 25% like yours. What's the difference?
Well, I would say that... see, when we talk about this stuff, right,
we have an epistemology of what good is, what truth is, what
morality is, et cetera, et cetera, et cetera. But the argument that's
used by pro-AI people... sorry, the argument that's used by anti-AI
people is: well, a human being works differently, right? A human
being, when they walk into the Louvre and they see all these
images and they see all these things... we don't have recall
memory the way that a computer does, right? When we recall
something, we recall it based on events, based on
interactions, based on things. So, for example, if you told me, hey,
we had a podcast with Alex in the
past, and this is what we were talking about: okay, I'm gonna
remember the smell, the day, I'm going to remember what happened
that day, what we were talking about. And then, based on the
context of that conversation, I'm gonna remember... I remember one
thing. And you know what? I'm not going to remember it exactly; it's
going to be a little bit different. It's never going to be
exactly like what it was before. A human being doesn't recall, a
human being remembers, right; a machine recalls information. Right?
That's the difference, right? And so when, legally, like... yeah, and so
when the machine, the algorithm, looks at your prompt
that says, hey, I want you to make me an image of myself in the style
of da Vinci...
It's now going to go and recall every single one of da Vinci's
paintings; it's going to recall how that style of, you know,
whatever, Baroque art or whatever it was at the time, you know, was
done, that style of... Hold on: don't different artists do the same? If I told an
artist, "paint me the way da Vinci would have painted me," he's gonna
click on da Vinci images, look at all the da Vinci images: okay, he looks
like he's doing this stroke here, he uses this color palette, he
uses this background, blah, blah, blah, and then he's commissioned and he
draws that, makes the painting. The access is the same. So
for these apps to have access to the internet, there is no way for
an artist to say that you're ripping me off, and you're
basing it on that. Any artist that I would have hired
would have done the same thing. Yes, but it's different for the
artists, because somebody could argue that the artist took a
lifetime to learn how to design and draw in the style of da Vinci.
If somebody has a legal... Oh, yeah. Yeah. Well, so if a person...
it's not a legal argument, it's an ethical argument. Right? If a
person can... this is why I think they're gonna lose, they're gonna
lose to the AI side, right?
Because the AI argument is a very naturalist, materialist argument;
they're gonna win. But somebody could argue that this person who
learned how to draw in the style of da Vinci... if you're a da Vinci
impressionist painter, right? Yeah, you are good at what you do,
because I can't do that, you can't do that, random people can't do
that. But if I can go in and write a prompt into Midjourney, it
allows this power of creating impressionist paintings to
everyone, right? Before, you had to spend a lifetime learning da Vinci,
studying da Vinci, studying the art, you know, the
paint strokes, and how all of these things are constructed. And
you had to learn that, and you had to practice it, and you had to do
it for years and years and years until you became very good. So this
is a sympathy lawsuit, to not put people out of a job. All right.
Probably people would say AI is just more efficient than a
human, yeah; it's not the AI's fault. That's true. Like... so, I
mean, I may have had to practice dunking for five years; then my
neighbor comes along, because he had better genetics, and he's a
foot taller, right? Or then another guy comes along, and he's
a foot shorter than me, and so I domineer over him. Is there unfairness
here? What's the difference? There is no way to win that argument, yeah,
as an anti-AI person, without bringing in another version of
epistemology. I will tell you where every technology ends up: the
society, just out of mercy for the previous generations, you know,
works on slowing the technology down until you guys find another source of income.
Right? So when the typewriter came out, what do you
think the Ottomans did? They flipped out; all those scribes
flipped out. They said, this should be banned: there's no way
an idiot can come in and type it, and it comes out in a
beautiful script, when it took me 30 years to be able to
write that same exact thing. That's actually why the Ottomans didn't
bring in the typewriters. Right? Because it's not that they said
that there's something inherently wrong with the typewriters; it's that
we're now going to put all of these people out of jobs, right?
These are the scribes that we have. And the scribes are so
emotionally affected by that, they come up with these arguments,
where the only real basis of it is that, basically,
essentially, all your years of training got wiped out in one
second by me buying a typewriter and typing the book in the same
exact script that you'd write it in. And I put in zero effort. Literally.
It seems unfair, but that's sort of what life is. Right? I'm
putting in zero effort. That's what technology does, and that's
what they call disruptors and all these, you know, these things out
on the West Coast: zero effort, and I'm doing the same
exact thing. So now, personally speaking, I don't like
it, but I don't see a legal basis for it being illegal. There's no
legal basis. I mean, look, I don't like it. I'd rather go with the
natural guy,
the human being who put in the effort. Well, I mean, the
reason you don't like it is based on your epistemology of, like,
well, what is truth? What is goodness? What are all of these
things? Right? And they don't... they're not bringing those
things to the fore when it comes to legal argumentation, right?
There's none of that there, right?
Like, we're not going to go sit there and say, hey, you know, we
believe that, you know, the effort and the work of these
people matters, right, and we have to favor
humanity over the robot, right? We're not going to go around saying
that in any legal argument. If it comes, it comes from
the ownership of the data it is being trained on.
So, for example, Microsoft has already been sued because of its
Copilot. So it released Copilot, which helps software developers
generate code automatically, right? But it was trained on all
of the public code on GitHub repositories, and all that public
code was written by other people who haven't given
explicit permission to use it for training. So that's why it's there.
But you put it out in public! It's like walking in a mall. If I
walk in the mall, right, if I walk in the street
for inspiration, and I get inspired, and I produce something,
and I sell it: do those people who are in the public, who put up their
own shops, who put their own faces, who put their own stuff in public,
have a right to that money? No, they don't. So if I go and I
take my thing and I say, go onto the internet, go on Safari,
do whatever, Google, and get all the information you can, and then I
produce a product with that information, and I sell it...
I'm using the public. I'm like walking in the
street; I'm using the public. How is that different? So what's the
basis of their argument?
So there is some basis, there is some basis for their argument.
Right. And I think the anti-AI people will hop on this
argument because it suits their interests, right? Which is that,
hey... and here's the flaw in the argument.
The problem with this argument is... let's say they
say: well, Disney... the only way you were able to get all of
these images from Disney is because you were able to scrape
the web and use copyrighted images that were not permissible for you
to use; you didn't get explicit permission from Disney to use them. What? No,
I just used it for inspiration. No, no, but they would say that
when you're generating an image, it's directly going in and using
the data model, and the data model itself is based on
copyrighted images. And so they would say that the data model
itself is wrong, like, you can't use it at all. And so any image
that's created from the data model is illegal. Now, here's
the problem with this thing: let's say they won on
this, which they might.
it now just allows people who have large amounts of data to collude amongst one another and make their own AI. So a big company like Warner Brothers, or Comcast, or Disney, one of these larger media houses, could get together and collude with OpenAI or Google and say: listen, we're willing to work with you, as long as we have proprietary rights to this API, and if there's any ad revenue that comes from it, ten percent has to go to us, and we'll allow you to use our enormous dataset that includes our films, our data.
Well, what about stuff they've taken from my website? Something from your website, from his website, doesn't matter. You can't sue, because you're not as big as Disney. Disney just has the manpower and the ability and the money to tell you to be quiet. So it's not like these people are principled in this idea of "we're not going to allow it for anybody." Look, an armistice on AI is not happening. This whole idea is like the gun debate: people say people shouldn't have guns. Well, people have guns; what do you want to do about it? You're going to take everybody's guns? It's not going to happen. You can sit here and argue all day.
And I'm telling you, the argument, as I said earlier, is an appeal to sympathy to keep their jobs. And by the way, the Shariah would recognize this: don't put people out of business. Of course the Shariah would recognize that. It doesn't matter what your technology is, you're not allowed to just put someone out of business overnight; you have to slow this down somehow. Let's shift from this. Who are the leaders right now?
Microsoft.

Zuckerberg, Facebook. Apple's in la-la land. And the main research organizations are: Microsoft, because of its partnership with OpenAI, the makers of ChatGPT. They had a partnership with Elon Musk. Well, Musk was previously in OpenAI, but he's not anymore. Fine. So then Google has DeepMind, and Facebook has FAIR. FAIR? Yeah. What a terrible name.
So the thing is, there are also large open-source models. For example, there's Stability AI, which has Stable Diffusion. If you've heard of Stable Diffusion, the image generator, that's from Stability AI. It's open source. I mean, I guess there's a company that runs the open-source project, but it's not proprietary; anybody can use it. And so there are always going to be these open-source competitors, and they're competing with Midjourney and the other top-level tools now. So it's slowly going to become the case that it doesn't matter how big you are: if some open-source team comes together and starts building things, they have the same models, and they can build the same things.
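To make the "anybody can use it" point concrete, here is a minimal sketch of generating an image with an openly published Stable Diffusion checkpoint through the Hugging Face diffusers library; the checkpoint name and prompt are only illustrative assumptions, not anything discussed on the stream.

```python
# Minimal sketch: running an open-source Stable Diffusion checkpoint locally.
# Assumes the Hugging Face `diffusers` library and a CUDA GPU are available;
# the checkpoint name and prompt below are illustrative, not prescriptive.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # an openly published checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # runs on a single consumer GPU

image = pipe("a watercolor painting of a snowy street").images[0]
image.save("snowy_street.png")
```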
Let's go to the questions. Two questions came up: personal data, and deep fakes. Who wants to take the first one? First, your personal data: how is AI going to affect this?

It's going to take our personal data being out there to another level. It's already at another level with what we have now, but it's going to go further. Most people use this example to say that in the future your personal data and your information will be used against you. No, that already happened ten years ago. This whole idea that you're going to be targeted and your data is going to be used? That already happened to you. Many people are on this stream because they were scrolling through Facebook or Instagram and saw the live icon, and it was targeted to them because they're Muslim and they're followers of Dr. Shadee, or followers of me, or of Nafis. Based on that, they were targeted and shown: hey, these are the people on this live stream, you should join in. They already have your data; they already have your information.
And look, there are a lot of people who say, I don't put my data out there. You think you don't, but your metadata is, and a profile of you can be composed pretty easily. For example, I can already filter down the types of people that are in a specific area. A person who lives in New Brunswick is going to be distinctly different from a person who lives in Guatemala; we just know this, most of the time. Now you're telling me that this person is also Muslim, this person is also friends with X, Y, and Z people, this person is a male or a female or non-binary or whatever they are nowadays, and this person has ABC and XYZ metadata. We have all this information, and none of it is "personal" information. People say you can gather all of this about almost anybody on the internet: if you just know their name and where they live, you can probably find a lot of it. And if I can find it, then I can guarantee you that the agencies that are looking for this data can most definitely find it.
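As a toy illustration of the point about metadata (the records and attribute names here are made up, not anything from the stream), a few coarse attributes are often enough to shrink a huge population down to a short list of candidates:

```python
# Toy sketch of profiling from metadata. All records here are invented;
# the point is only that stacking a few filters narrows a crowd very fast.
people = [
    {"name": "A", "city": "New Brunswick",  "religion": "Muslim",   "follows": {"safina"}},
    {"name": "B", "city": "Guatemala City", "religion": "Catholic", "follows": set()},
    {"name": "C", "city": "New Brunswick",  "religion": "Muslim",   "follows": {"safina", "other"}},
    # ...imagine millions more rows scraped or bought from data brokers
]

def matches(record, **attrs):
    """True if the record satisfies every attribute filter passed in."""
    for key, value in attrs.items():
        field = record[key]
        if isinstance(field, (set, list)):   # set-valued fields, e.g. accounts followed
            if value not in field:
                return False
        elif field != value:
            return False
    return True

candidates = [p["name"] for p in people
              if matches(p, city="New Brunswick", religion="Muslim", follows="safina")]
print(candidates)  # the whole dataset collapses to a short, targetable list
```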
Yeah. I think what the internet actually did was make most of the FBI search teams irrelevant, because people put their own information on Facebook when they check in somewhere. This is why it was so silly when people were posting that stupid picture of Greta Thunberg and saying that Andrew Tate got caught because of the pizza box. No, they didn't need the pizza box. You think these agencies needed a pizza box to find where somebody is? No. They know where you are; they know what you're doing. They have tons of data already out there. (Go ahead, by the way; I can't see you guys, so there are no visual cues to tell who wants to jump in.)

It goes to the level that the police can even search through your DNA. Basically, if they have your DNA, they can start searching these 23andMe databases. That's how they caught some serial killers from the 70s and 80s recently. So you think your data is not out there, but your innermost data is already out there. So the question just becomes: who's using it, and how? Because when I say something like, I need to get some batteries, and then I find batteries in my Amazon suggestions, you did me a favor, to be honest with you.
They did me a favor. I don't even have a problem with that. We're going to have a problem when it goes to other levels, and that's what people are always nervous about.

Well, you can get real philosophical on this really quickly. For example, I'm pretty anti-tech, even though I work in tech; I do tech, I know all these things. But I don't have any smart devices in my house. I don't have Alexa, I don't have a smart thermostat, I don't have smart printers. I have no such devices in my house other than my laptop, and that's probably the most advanced thing I own; I have a laptop and an iPad. But I don't have other things that are listening, and I don't have Siri turned on on my phone. Somebody might say, aren't you being contradictory? You just said they know everything, so why are you trying to hide everything? I'm not trying to hide everything; it's my right to privacy. Don't you use curtains on your house? Do you want everybody to see what's going on in your life? It's your right to privacy. I don't want people listening in on my stuff. And I assure you, people do listen. I work on this stuff, I build this stuff. Regular, normal employees are listening in on your personal conversations at home.
Most definitely. Oh, you're saying humans? So it's not just being gathered; even humans are listening? Let alone the AI; the AI is definitely listening, but the humans are also listening. I didn't realize that. I didn't realize that even humans have access to this; I knew it all goes into some data center. They definitely have access. Really, humans? Yeah. So tell us more about that.

So basically, they have different levels of classification in these companies. Some people have access to class one or two; some have access to higher-level classes. And for the top accounts on the social media platforms, they'll have a certain number of employees assigned just to handle those top accounts, and those employees have access to all of that information. They can even listen in to their phones. That's actually how Saudi Arabia did it: they bribed two Twitter employees to take down their opposition on Twitter, and those employees supplied them with all of the internal Twitter information they had. And they later got arrested by the FBI, I think. Oh, really?
Well, I remember an experiment one time: he started talking about luxury cars. Supercars, was it? No, it was luxury bags, women's luxury bags, you know, these thousand-dollar handbags, just chit-chatting about it with his phone off.

It only took about a day or two, three, four days, and the ads started showing up. So I tried to figure out why this happened, because this was about seven years ago, when this stuff wasn't as prevalent, and at that time I didn't have Siri turned on. Anybody who knows me knows I don't use Siri; I don't have any of that turned on, and my wife doesn't have Siri or anything turned on either. I had no application that was listening. The only application I had was WhatsApp, and you actually explicitly grant permission for the microphone on WhatsApp to do voice notes. Now, WhatsApp, Facebook, claims that they don't listen outside of that, and that it's only when you press that button that they have access. But I can no longer trust these organizations and what they say they do.
Now, could you say that that's illegal? Could you sue Facebook if you found out? Yeah, sure, if you found out and you could make a case against it and sue them, then yes. But until somebody brings that up and actually does it... Do you think an application like TikTok is listening? Anybody who's listening to this stream: if you have TikTok installed on your phone, please delete it. If you have Facebook Messenger on your phone, please delete it. If you want to use it, go on a desktop; on the phone they are most definitely spying on you.

TikTok, you should remove it. You should make them remove it, yeah. TikTok is the number one enemy right now of any kind of human decency, in terms of the content and in terms of taking your information. In terms of the content, and it's also Chinese spyware. So basically they are targeting the American youth and trying to bring in all sorts of degeneracy; they are using TikTok to convert these children and teenagers to all these, you know, LGBTQ ideas. It's one of the number one converters. And in China? TikTok is banned in China; they banned it themselves.
Let's go to deep fakes now. Deep fake technology is, in a sense, amazing, but it's actually terrible at the same time. If you don't know what a deep fake is, for those listening: a deep fake has the ability to mash up a person's voice, mouth, and facial expressions and make a full video of the person saying something they never said. Now, this has been around for a while. I remember Key and Peele made one of Obama trashing Trump, which was hilarious. Obama was saying stuff, and in the beginning it's fine, they get you slowly, then eventually they start making Obama say things he would never say. It was hilarious, but it was scary in that the video looked so real. And that was the starting point of the saying that very soon, video evidence will have to have supporting evidence; video by itself should mean nothing to people.
Let me bring you another situation. This one isn't a deep fake, but there was a kid one time who, it seemed, was harassing an old Native American man who was beating his drum. It seemed like the kid was staring him down, whereas the truth was the exact opposite: the kid was giving his speech, which was his right to give, and the Native American man came up to him and started beating the drum. So there are two aspects. One aspect is the false clippings, the misleading, out-of-context clips. But this is another level; we're not even talking about misleading or out-of-context clips. We're talking about literally, the person never uttered a single word of this. And the technology for doing this is eventually, someday, going to be one of those fun apps that they release, and everyone will just type in: Barack Obama uttering the shahada, or so-and-so cursing somebody else. They're already there; it just hasn't trickled down to the everyday user. So talk to us about that. Let's go to Nafis first, then we'll come to you.
So the main point of discussion here is not the technology, but how quickly people are getting used to it. So suppose right now a clip comes out of Biden saying something, an anonymous clip on the forums, and he's saying something really, really controversial. Most people right now will believe it, even if they later claim that it was a deep fake.

I don't think that claim would fly, because the technology is just not out there yet. Not enough people know about it, whereas everyone knows about Adobe, about photoshopping something. Yes.
Yeah. So once these models are productionized, once they're in different apps where you're using them for innocent, innocuous purposes, then it will be much more difficult to decipher them. Yeah. And then you're going to need forensic AV guys to be able to tell us what's a deep fake and what's real; you're going to need forensic editors. Forensic means going down to the bare, granular, smallest possible identifiable trait in something. You have forensic accountants, forensic scientists, forensic everything.
On the other hand, people will also come up with reverse models, where they will be able to predict whether a video was a real clip or was generated by another model. So we have to see how those two phases play out. Could you repeat that? So it would be a reverse model, where the model takes in a video and then predicts whether it was a genuine clip or whether it was generated by another AI model. So that's what we're going to need.
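As a rough sketch of what such a "reverse model" could look like (an assumption about the general shape, not a description of any specific detector mentioned here), a frame-level binary classifier can be trained to score video frames as real versus generated and average those scores over a clip:

```python
# Rough sketch of a deepfake "reverse model": a per-frame real-vs-generated
# classifier. The backbone choice and frame-averaging are illustrative
# assumptions; real detectors are far more involved and need a large labelled
# dataset of genuine and generated clips to train on. Assumes a recent torchvision.
import torch
import torch.nn as nn
from torchvision import models, transforms

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single "fake" logit
        self.backbone = backbone

    def forward(self, frames):                 # frames: (N, 3, 224, 224)
        return torch.sigmoid(self.backbone(frames)).squeeze(1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_video(detector, pil_frames):
    """Average per-frame 'fake' probability; closer to 1.0 means likely generated."""
    detector.eval()
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in pil_frames])
        return detector(batch).mean().item()
```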
For example, ChatGPT: kids have already gotten around detection. There's an app that can identify whether your essay was written by ChatGPT, but the youth have already gotten around it by throwing the essay into Google Translate, translating it into Spanish, copying that, translating it back into English, which washes all of that out, and then throwing in a couple of their own words. The kid probably ended up spending another hour of work to avoid forty minutes of work, right?
Anyway. Your take on this, on deep fakes? Deep fakes mean that soon a video will not be sufficient evidence in a legal court, but in the court of public opinion it's going to be a disaster for a lot of people. Yeah. So I think let's go back a little bit into the history of deep fakes.
Because I think there's something even more nefarious here than just the idea of legal, criminal-incrimination evidence and all that; I think Nafis covered that pretty well. Deep fakes originally started with pornography. It started off, I think, on some Reddit forum or some 4chan forum where somebody made a deep fake video of a celebrity, and it grew from that; the growth of deep fake technology actually happened through pornography. People wanted to put celebrities into various videos, and based on that, they wanted to build it. There's actually a saying in technology that if you really want to see the most advanced technology out there, go look at that industry, because it has the most money and the most people working on it. I mean, you have to be a little bit degenerate to want to work on it, but they have a lot of funding and a lot of money to build this stuff.
But what becomes very nefarious is this idea of deep fakes being used to place people in any sort of situation. It might not even be legally incriminating, but would you want a picture of yourself in some sexual escapade that somebody has made? Would you want a picture of your children, your daughters, your mothers, your sisters, in any of these types of things? By the way, one of the most disturbing things I've ever seen in my life: there was a secret subreddit of people who had made fake images using random Muslim sisters' pictures as pornographic material; they "unclothed" them and put them onto different bodies. The most disturbing thing I've seen. They used face-swap technology and a bunch of these other AI tools to make it look very realistic; you wouldn't be able to tell.
And there was actually a version of Stable Diffusion called Unstable Diffusion. Stable Diffusion is the open-source image generation tool, like Midjourney or DALL-E; Unstable Diffusion was the version built to create pornography, to just create naked images. You could say, I need a naked image of X actress or Y actor, I want them in this position, and it would give it to you. Now, this project got shut down and you can't find it anymore; I'm sure it probably exists somewhere and people are working on it. But there was an effort to let you upload your own category of images: here's this entire folder I have of these women, can you generate an image of these women and myself doing such-and-such a thing? And it would generate it for you. This is something even more nefarious, and it might not even be legally incriminating; you could use it to place anybody anywhere at any time. You need forensics to undo this. Now, this is where there can be a legal claim made, if this causes harm to specific individuals: do you need to get consent before you can use somebody's picture online inside of this database? Yeah, I don't know.
Right. So one of the reasons this technology actually sort of needs to speed up, in terms of trickling down, is that the court of public opinion is still not there yet. Not enough people know about deep fakes. You could easily destroy someone's life with a deep fake right now, because not enough people know what a deep fake is, and saying "it's a deep fake" wouldn't even be plausible or believable. So it's in everyone's best interest that at least the knowledge of deep fakes trickles down to everybody, so that everyone knows about it, and eventually certain types of videos will be cast aside and not treated as evidence.
Unfortunately, I think that's only going to happen for a short period of time, and this is me being a little bit pessimistic. You can see that the Overton window on degeneracy has shifted a lot over the last fifteen years. What was considered pornographic in 1980 is now considered PG-13 material; it's considered nothing. You have decent people, even people like you and me, who have probably watched a normal TV show with explicit content that would have been considered pornographic in 1995. The Overton window on what is acceptable has shifted so far that I'm afraid that even if something like this comes out, and there are deep fakes of regular people, most people will just say: well, that's obvious, there are going to be deep fakes of me out there, somebody making images and videos of me and using them. I'm afraid it's going to be: well, that's obvious, that's what you get with technology. I'm pretty sure people are just going to capitulate to that stuff.
But at the same time, we're still at the point where a video clip can destroy someone's life. Yes. And we haven't yet seen a situation where someone responded and said, no, that's a deep fake; we haven't had a scandal that stopped in its tracks because the person said it's a deep fake. The reason for that is just that the technology is not really at the point where it's convincing yet; it still has glitches. But you know that with much more rudimentary things, such as a tweet, you can easily fake something: photoshop text onto someone else's tweet, then share that on a different platform altogether, and you'll have people for years attributing that fake tweet to the other person. Oh yeah, leftist websites do this all the time.
So there's actually something here; I don't want to get bogged down in philosophy, but there's a postmodernist philosopher by the name of Jean Baudrillard, and the Matrix movies were actually based on some of his philosophy: that you have these levels of simulation in a world, and you can reach a point beyond them. He brings a story: you have an empire, and in this empire the king decided to hire some cartographers, and they decided to make a map of the empire. Eventually they made this map, and it was so large that it covered the entirety of the empire; that's how big the map was. And eventually what happens is that the terrain, the empire the map was based on, withered away and disappeared, and now all you're left with is the map itself. So when people a hundred years later say, well, this is what the empire is, they're not looking at the empire; they're looking at the map itself. They're looking at what he calls a simulacrum, a representation of reality that is not reality. And that's what the Matrix movies were kind of about: you have this reality, but it's not really the real thing.
And so I don't think this thing you're talking about is going to become plausible until we reach a stage of hyperreality, where certain things are treated as more real than the real thing: where people trust the image, or the video, or the data that's there more than they trust the real person. Because they're going to say, well, how can we trust you? You're just the guy. But we can trust the data, we can trust the machine, we can trust the whole history of everything that's there in your data, and we can verify it. Here's what your doctor said, here's what your mother said, here's what X and Y said, here's where you were, here's what's happening. You're telling me you're not in the video, but all the evidence shows that this is you in the middle of it.

Well, we already have that, in a sense, through DNA. The most honest person in the world could tell you, I wasn't there, and then: well, your DNA is all over the doorknobs, your DNA is all over the pillow. And we're going to choose the DNA; most people will believe the DNA over the person. So what are you going to do in the situation where someone says, I wasn't there, but they say, well, the data shows that you were?
Well, let's say, for example, somebody's making a fake video. I imagine that along with the fake video, they've probably faked all the other things too. Because in the future, if somebody's trying to make a fake video to impersonate someone or incriminate someone, they're also changing where that person was, at what location, and all the other data points; they're messing with those as well. So somebody could look at it and say: the evidence is there, it says you were here, and all the data points to the fact that you were there. So your word, and the word of your witnesses, is not as strong as the data.
So this is an issue. So: we covered some of the basic facts on how AI works; we covered who's in the lead with AI; we covered some of the medical side, which I think is going to be the biggest, since curing leukemia and using AI models, generative models, for that type of work is probably going to be the biggest and most important advance. And then we talked about some of the practical aspects of life, including our personal data being out there, and deep fakes eventually not being sound evidence for anything. So I think we're going to get into another chat,
another discussion we need to have: we need to talk about the metaverse, how that's going to affect people psychologically, and how it's going to affect their view of actual reality in the world. So we need to have these tech conversations like once every two months to stay up to date, because you can't be behind the eight ball on these things; you've got to be ahead of the curve. And so, with that, you guys can hang out and stay on. Let's turn it to the audience here, to everybody who has a comment or question on Instagram. If you're on Instagram, you can still listen in, but you can also watch the full video on YouTube. And let's start going through any questions that we have. All right.
We have Nafis Hamid here, but we also have a Nafisa Hamid in the comments, and in relation... oops, I misread it, okay. Oh, your sister, mashaAllah. So your parents named you Nafis and named her Nafisa. Okay, is it a play on the name? Okay, so.
All right, let's go to the comments and questions here. Melody 21 asks about depression. Just wait. When everyone's on the metaverse and you take that headset off, this world will seem to you less colorful, less everything. Especially when they bring out the haptic body suit, where you'd be able to shake someone's hand in the metaverse and feel it; they'd tap you on the back and you'd feel it; they'd touch you in a pleasurable way and you'd feel it. Where's depression going to go then, when you take that off of somebody? That's going to be a crack addiction, basically a heroin addiction.
Well, let me ask you a question. Okay. If I asked you to draw me a princess, what would you draw? Repeat that? If I asked you to draw me a princess, what would you draw? Probably a Disney princess is what comes to mind, right? Cinderella. So your idea of what a princess is has already been influenced. What you think about when you think of a princess: you're unable to even comprehend that there could be some other version of a princess that isn't like Belle, or Cinderella, or whatever these Disney princesses are. So people think that they're going to go into the metaverse and enter this simulation and start believing all these things. No, no: you already believe all those things. This is the whole point of understanding today's world. You've already been fed all of these beliefs and these ideas; this is just another level of the simulation, you could say. You're already in the simulation. You believe what a princess is, what love is. If I asked you what love is, you'll tell me some X, Y, and Z movie: Gone with the Wind, this and that, Romeo and Juliet, Shakespeare. You're saying we've been influenced from the outside in?
Yeah, really: by strangers, by ideologies that have kind of taken hold of people, and now they believe all these things. But anyway, I don't want to get too into that. An immediate example would be: if you are a heavy Facebook or Twitter user, and you have friends there whom you haven't met in a long time, you already have a certain image of them, of what they are, what their favorite things are, how they act. But that will probably be completely different from what they really are in real life. Yeah, that's good.
And the more separation there is between the online person and the in-person person, the more weirdness is going to develop. For example, there are a lot of people who are out there in the real world and online at the same time; for them there's probably going to be more consistency between their online self and their real-world self. But once you have someone with no real-world footprint and a ton of online presence, that online presence becomes less and less reliable. So the question is the reliability of assessing character, of assessing the person. If there's a big gap between your online presence and your real-world presence, we're going to say your online presence is not reliable; whereas when there's a lot of overlap between your online and your real-world presence, your online presence is probably a reliable reflection of you.
The one piece of advice I can give most people with regards to AI is this: it's coming, it's inevitable, whether it's going to affect your job or not. I could give you a plethora of advice, like, if you're in this field, you can do this; you can probably find all of that on the internet. But one unconventional piece of advice that you're not going to find on the internet, which I've understood just from our understanding of the world as Muslims, is this: the way we understand the world now is so convoluted and impacted by all these random ideologies. The one thing you can do to prepare for the next version of the simulation, as we could call it, when the AI comes in and the world is impacted by all this stuff, is to prepare yourself mentally, spiritually, and emotionally for the things that are coming. So, for example, I'll give you a very dangerous example.
Someone starts putting questions into a fiqh bot. Let's say there's a fiqh bot out there; we could probably make one now, and I'm sure some organization like Yaqeen is already working on it: an AI fiqh bot where you can go ask your questions. And in this fiqh bot, as part of the data, the chain of one of the hadiths is missing. Then over time, people stop memorizing the chains, and all they remember is that they're getting this information from this fiqh bot. People start learning from the bot, and eventually it comes to the point that people don't actually know the original chain for this hadith. You're going to enter a world where it's going to be really difficult to differentiate between what is true and what is false. That's where clinging on to the ulama is critical. That's good, that's a good point.
Here's another point. There's going to be a massive disconnection. We think a 17-year-old today is disconnected from a 60-year-old, right? That 60-year-old says to, let's say, his grandson: man, when I was young, we used to ride our bikes and knock on our friends' doors to get them to come out and play. And the 17-year-old, or the 12-year-old, is like, what is this? Am I hearing stories from ancient times? That gap, we have to be ready for. If you don't want to go on VR and live on VR, there will be a generation that does live on VR. That generation is coming; I really believe that generation is coming. Because Facebook is not going to rename its operation Meta without knowing full well that they're headed to the metaverse, where you can go, and they're going to create a whole metaverse. And with the haptic technology suit, give that another twenty years, and people will spend five, six, eight hours a day at a time in there. Now, if you choose not to become a crack addict, because that world will be addictive, and you choose not to be on it, well then be ready to have a very massive gap between you and the next generation.
Right. That's something we have to think about: to what degree is that gap okay, to what degree is that gap dangerous, and to what degree do we literally just check out of life? It's easy to check out of life; it's much harder when someone says that your grandson needs you. And that's where I think people who have families are going to get pulled; they're always pulled into adapting. People who don't have families don't have to adapt, because they don't need to worry about other people's kids. And adapting means that you're going to be someone in their 30s and 40s and 50s, almost looking like a child, putting on these goggles
and learning what this world is all about. But I don't really look at it from this standpoint. I'm a person who's going slowly towards the afterlife; do I really want to do that? Let's say I'm sixty years old. Do I really want to? Do I have the energy and the temperament? Right, to give the grandson a pep talk in the metaverse. Yeah, exactly. Do I have the temperament to learn a brand-new, life-altering technology at that age? Those are the questions that are going to come up. And the second thing is, I really think it is going to be like a drug. If the cell phone is a drug right now, and you still have all your peripheral vision around you, imagine the metaverse. And then imagine the next generation, when the next Elon Musk puts the metaverse in a little chip in your brain,
and you see it without goggles. I pray the Mahdi comes before we get to that. But it's important; I think it's critical to actually be ready, because a lot of people's lives get disrupted just because technology came in and took the next generation away from them, and they didn't know how to manage it. All right.
So, a lot of questions. Go ahead. Okay. Can you read this? All right, so he asks: for Nafis, could we potentially use CRISPR to end neurodivergence, as in, change how the brain develops once the symptoms of autism and ADHD come up?
So the critical part is finding out what to change; that's the most difficult part. The problem with these diseases, especially neurodegenerative diseases, is finding what is really causing the disease. Most of the time it's not really a single gene; there will be many, many genes that work together to cause these conditions. So then it becomes a problem: what do you change? You have lots of genes, so where do you change? You can't ideally change all of them, because those genes have their own functions too. So the hard part of the research is: what is causing it? And for conditions like ADHD and schizophrenia, we have something like twenty years of research, and still most of the drugs haven't worked, just because we haven't really found a causal connection between a gene and those conditions. So, good question.
Yeah. So you have to have certainty on the causal gene and on the side effects; you can't just pull out one domino and replace it and imagine there will be no side effects. A human being is all one interconnected whole. So that's why playing around with this stuff does have issues. And probably the people who are about to die are going to be the ones who put themselves forward; they're going to grasp at that last thread. They're the ones who are going to be willing to be part of these experiments. Actually, recently a paper came out where they studied the cells of six patients who had already died, Alzheimer's patients, and from that study they found a particular gene that was producing extra protein in the diseased patients' brains.
So, I'm going to close out by saying that I think it's extremely important for everyone to go get a book called Virtues of Seclusion. Virtues of seclusion: seclusion, for us in our world, levels out your brain, levels out your heart, levels out your priorities. And seclusion is no longer about me not going out to hang out with people. Seclusion now is seclusion from stimuli, tech stimuli. That is the seclusion of our day and age: cutting off technological stimuli. In the old days, going out and chit-chatting with people was your stimulation; today that has been so downgraded, and all we have is the constant digital stimulation. Cutting that off is our version of seclusion. And I believe every person, at a certain time, let's say you make it a practice at seven or eight p.m., should just go: I don't care who calls, who texts, what happens in the world. If a meteor hits the earth, I don't care. I'm going to put all my phones in the car, and my computer and my iPads in the car, and I'm just going to enjoy the rest of the evening. We also need to know, as human beings, that what diseases us, what makes us upset, and what throws us off balance is not having a direction.
I remember there was a shaykh who used to have a casket in his house. In his room, he had a prayer room with a casket. When he would go in there, it would level him out, because he feels: that's where I'm headed, and nothing else matters except the scoreboard at that moment. And that actually used to wash away a lot of his concerns, anxieties, and fears, past, present, and future. It washes them away, because this is the only thing that matters. This is the one thing that's a worldwide guarantee; no human being will ever dispute that you're going there. And now let's assess who has something to tell us what's going to happen when we go down there. And the only real answers are: nothing, or Heaven and Hell. Is there really a third answer? No one really believes the Hindu stuff, right? No one's buying into that, that you come back in a different form. I don't think so.
And even that is based on one's righteousness, I think. Right. So that, to me, is the great balancer, the great stabilizer: that casket right there. All your decisions eventually have to go through that filter. It's binary: am I going to do this or not? Does it benefit my afterlife or not? And that's the stabilizer. So when we talk about this stuff, it's a bombardment of information, a bombardment of stimuli, and seclusion and the remembrance of death, ultimately, are the great wiper-away, the washing away, of all the excess that can make a person dizzy, lose focus, and get addicted to these things. And this practice will dislodge all those addictions, in sha Allah ta'ala.
So we will stop here today, in sha Allah, and tomorrow we will give more time for open Q&A, because today we spent time on open AI. So get the book Virtues of Seclusion in the Times of Confusion from Mecca Books, and support the live stream at ... forward slash Safina Society. And with that, we will see you all tomorrow, in sha Allah. Subhanaka Allahumma wa bihamdika. Wal-'asr, innal-insana lafi khusr, illa alladhina amanu wa 'amilu as-salihati wa tawasaw bil-haqqi wa tawasaw bis-sabr. Was-salamu alaykum wa rahmatullah.