What Does David Deutsch Think of AGI?


I'm constantly amazed by the capabilities of artificial intelligence (AI) as it becomes more and more prevalent in the workplace. But its limitations frequently disappoint me.

This generally happens when I want a chatbot to help me come up with a fresh idea rather than merely calculate what I already know.

In such a situation, I can't help but imagine artificial general intelligence (AGI) emerging: a completely creative, autonomous being unrestricted by its programming.

Listening to the opinions of experts has helped me clarify my understanding of AGI as I question and explore the wide range of views on when, and whether, it will arrive.

If you're an eager, insatiable learner like me, I'm fairly sure you've encountered David Deutsch's opinions on artificial general intelligence (AGI) more than anyone else's.

I recognize that there is apprehension in David Deutsch's view of AGI.

Even as he expresses concerns, his tone is more measured than outright fearful.

David Deutsch’s perspective on AGI: Apprehension or cautious excitement?

Overall, even though David Deutsch does voice concerns over AGI, as a dedicated listener I find his message more nuanced. He says there is a responsible way forward, and we shouldn't be afraid.

He invites us to approach this new frontier with open minds and careful thought, so that AGI becomes a positive force in the world.

Deutsch does not completely discount AGI.

Although he is enthusiastic about its potential to deepen our understanding, he is also mindful of its complexity and the difficulties that come with it. This echoes a point I made in my last blog on Steven Pinker’s take on AGI.

David Deutsch thinks AGI will be the ultimate outcome of AI

According to David Deutsch, AI is completely different from AGI; in many ways, it is its opposite.

He also asserts that, although we are far from creating AGI, the process has already begun.

AGI would not simply require more of the programming that powers today's AI; it would require a fundamentally different kind of programming altogether.

He compares an AGI to a human being like you and me, one who can refuse to bow to outside influence.

This helps to simplify the idea of AGI.

An AGI would have the same right as a human to be suspicious of me and to refuse to let me run a behavioral test on it, just as I would be skeptical and refuse to let a stranger perform a physical examination on me without an obvious reason.

An AGI may occasionally produce no output at all, refusing to respond, just as I have the right to stay silent when I don't want to answer.

In the same way that my mother cannot dictate who I become, we can envision an AGI in our minds, but we cannot make it exactly how we would like it to be, as we can with an AI.

Will AGI turn into a human? The human vs. AGI question

No, there is no such concern. For now, AGI remains a hypothetical technology, at most a sensation.

In David Deutsch's opinion, AGI is not imminent; rather, it is extremely difficult to achieve. An AGI might make mistakes and blunders in the same ways that I do, and it might even flag and correct my mistakes in the same ways that I would.

Deutsch believes that humans' ability to adapt, to envision, and to create explanations sets them apart from other creatures and from machines, including today's AI technology.

He claims that the same skill would be needed for AGI, and that no known programming method can accomplish this.

He adds that AGI would have its own objectives and preferences and would function like a person rather than a tool.

He compares the idea of crippling an AGI to caging humans, saying it would eventually figure out a means of escape. He also calls the idea perverse and wrong.

To prevent harm, we should instead try to understand an AGI's reasoning and cooperate with it.

David Deutsch speculates that AGI is still far from reality

David Deutsch believes that AGI still has a long way to go, not due to computational limitations, but because we lack the philosophical framework to understand and build it.

Our current epistemology cannot yet explain how to build genuine explanation-creating machines. That gap in understanding, not a shortage of computational power, is the bottleneck.

He argues that AGI must involve:

  • Creative, open-ended thought processes that are currently absent from AI.

  • A new philosophical theory for its programming.

  • Flexible criteria for judging explanations and motivations.

Chasing explanations, not robots

David Deutsch's perspective on AGI pushes me to dig deeper into explanation itself, rather than fret about robot takeovers.

He reminds us that real intelligence is about seeing the unseen and understanding the "why" behind everything, not about crunching numbers or winning games.

I believe the route to AGI will require a thorough revision of our conception of knowledge and a deep exploration of how humans make meaning.

It's a challenge to change the way I see things and perhaps even the nature of intelligence itself.

I believe this feature at AI Authority will help you better understand David's perspective on AGI. If you'd like to add something to it, do comment below.