
Nick Bostrom on AI & The Future of Humanity

Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test.

2020-11-21 19:00:00 - Science Time

Artificial Superintelligence (ASI), sometimes referred to as digital superintelligence, is a hypothetical agent that possesses intelligence far surpassing that of the smartest and most gifted human minds.


AI is a rapidly growing field of technology with the potential to bring huge improvements in human wellbeing. However, the development of machines with intelligence vastly superior to our own would pose special, perhaps even unique, risks.


Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when or how this will happen.


One only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:

- Intelligence is a product of information processing in physical systems.

- We will continue to improve our intelligent machines.

- We do not stand on the peak of intelligence or anywhere near it.


Philosopher Nick Bostrom has raised the question of what values a superintelligence should be designed to have.

A superintelligent AI of any kind could pursue its programmed goals rapidly, with little or no distribution of power to others. It might not take its designers into account at all, and the logic of its goals might not be reconcilable with human ideals.


Such an AI’s power might lie in making humans its servants rather than the other way around. If it were to succeed in this, it would “rule without competition under a dictatorship of one”.


Elon Musk has also warned that the global race toward AI could result in a third world war.

To avoid the ‘worst mistake in history’, it is necessary to understand the nature of an AI race and to steer development away from paths that could lead to unfriendly Artificial Superintelligence.


To secure the friendly nature of artificial superintelligence, world leaders should work to ensure that any ASI is beneficial to the entire human race.
