Nick Bostrom
==Biography==
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University, with a background in physics, computational neuroscience, mathematical logic, and philosophy. He is the founding director of the Future of Humanity Institute, a multidisciplinary research center at the University of Oxford which enables leading researchers to use mathematics, philosophy, and science to explore big-picture questions about humanity. Recently, the focus of the institute has been on questions regarding existential risks and the future of machine intelligence. The Future of Humanity Institute works closely with the Centre for Effective Altruism <ref name="1">Bostrom, N. Nick Bostrom’s home page. Retrieved from http://nickbostrom.com/</ref> <ref name="2">Future of Humanity Institute. Mission. Retrieved from https://www.fhi.ox.ac.uk/about/mission/</ref> <ref name="3">Adams, T. (2016). [[Artificial intelligence]]: ‘We’re like children playing with a bomb’. Retrieved from https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine</ref>.
In early 1998, the World Transhumanist Association was founded by Nick Bostrom and David Pearce. Its objective was “to provide a general organizational basis for all transhumanist groups and interests, across the political spectrum. The aim was also to develop a more mature and academically respectable form of transhumanism, freed from the ‘cultishness’ which, at least in the eyes of some critics, had afflicted some of its earlier convocations.” The association has since changed its name to Humanity+. There were two founding documents of the World Transhumanist Association: the Transhumanist Declaration and the Transhumanist FAQ. The first was a concise statement of the basic principles of transhumanism; the FAQ was a consensus document, more philosophical in its scope <ref name="4">Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1).</ref>.
Bostrom also suggested the concept of “Maxipok”, which he describes as an effort to “maximize the probability of an ‘OK outcome’, where an OK outcome is any outcome that avoids existential catastrophe.” This concept should be taken as a rule of thumb rather than a principle of absolute validity; its usefulness is as an aid to prioritization <ref name="5"></ref>.
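Read as a decision rule, maxipok can be stated compactly. Using notation introduced here purely for illustration (it is not Bostrom’s own), with <math>A</math> the set of available actions, the rule recommends

:<math>a^{*} = \arg\max_{a \in A} P(\mathrm{OK\ outcome} \mid a)</math>

that is, all weight is placed on avoiding existential catastrophe, in contrast to rules that maximize expected value across all possible outcomes.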
==Anthropic principle==
Another topic that Nick Bostrom has studied extensively is anthropic bias, or observation selection effects. A selection effect is a bias introduced by constraints in the data collection process, for example the limitations of a measuring device. An observation selection effect is one that arises from the precondition that there is some observer properly positioned to examine the evidence. Bostrom has studied how to reason when it is suspected that evidence is biased by such observation selection effects <ref name="8">Manson, N. (2003). Anthropic bias: observation selection effects in science and philosophy (Review). Retrieved from http://ndpr.nd.edu/news/23266/</ref> <ref name="9">Bostrom, N. (2002). Anthropic bias: observation selection effects in science and philosophy. New York, NY: Routledge.</ref>. Some questions that involve reasoning from conditioned observations are, for example, “is the fact that life evolved on Earth evidence that life is abundant in the universe?”, “why does the universe appear fine-tuned for life?”, or “are we entitled to conclude from our being among the first sixty billion humans ever to have lived that probably no more than several trillion humans will ever come into existence, that is, that human extinction lies in the relatively near future?” <ref name="8"></ref>.
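The last of these questions is a version of the Doomsday argument, and its structure can be illustrated with a schematic Bayesian calculation (the numbers here are illustrative, not Bostrom’s). Under the Self-Sampling Assumption discussed in Bostrom (2002), an observer reasons as if randomly sampled from all humans who will ever exist, so the probability of having birth rank <math>r</math> given that <math>N</math> humans ever live is <math>1/N</math> for <math>r \le N</math>. Comparing a hypothesis of <math>N_1 = 200</math> billion total humans with one of <math>N_2 = 200</math> trillion, the likelihood ratio for a birth rank of about sixty billion is

:<math>\frac{P(r \mid N_1)}{P(r \mid N_2)} = \frac{1/N_1}{1/N_2} = \frac{N_2}{N_1} = 1000,</math>

so the observation shifts credence strongly toward the smaller total population, whatever the prior probabilities of the two hypotheses.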
According to Bostrom (2002), anthropic reasoning seeks to detect, diagnose, and cure such biases. He describes it as a philosophical field rich in empirical implications, touching on many important scientific questions, posing intricate paradoxes, and containing generous quantities of conceptual and methodological confusion that need to be sorted out.
The term “anthropic principle” was coined by the cosmologist Brandon Carter in a series of papers in the 1970s. According to Bostrom, “anthropic” is a misnomer, since reasoning about observation selection effects is not tied to any particular species (in this case Homo sapiens) but concerns observers in general. There is some confusion in the field, with several anthropic principles being formulated and defined in different ways by various authors. In Bostrom (2002), the author writes that “some reject anthropic reasoning out of hand as representing an obsolete and irrational form of anthropocentrism. Some hold that anthropic inferences rest on elementary mistakes in probability calculus. Some maintain that at least some of the anthropic principles are tautological and therefore indisputable. Tautological principles have been dismissed by some as empty and thus of no interest or ability to do explanatory work. Others have insisted that like some results in mathematics, though analytically true, anthropic principles can nonetheless be interesting and illuminating. Others still purport to derive empirical predictions from these same principles and regard them as testable hypotheses.” <ref name="9"></ref>.
More recently, Bostrom and colleagues introduced the concept of the “anthropic shadow”: an observation selection effect that prevents observers from detecting certain extreme risks in their recent geological and evolutionary past. The anthropic shadow is cumulative with the “normal” selection effects that apply to any sort of event. Correcting for this type of bias can affect the probability estimates for catastrophic events, and recognizing it might also help avoid errors in risk analysis <ref name="10">Cirkovic, M. M., Sandberg, A. and Bostrom, N. (2010). Anthropic shadow: observation selection effects and human extinction risks. Risk Analysis, 30(10): 1495-1506.</ref>.
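A minimal sketch of the bias, much simplified relative to the full model in Cirkovic et al. (2010): if catastrophic events of a given severity occur at a true rate <math>\lambda</math>, and <math>q</math> is the probability that observers and their records survive such an event, then the rate of events visible in our past record is only

:<math>\lambda_{\mathrm{obs}} = \lambda \cdot q,</math>

so a naive frequency estimate should be divided by the survival probability <math>q</math> to recover <math>\lambda</math>. The more lethal the event class, the smaller <math>q</math> and the deeper the shadow.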
==Superintelligence==
The development of [[artificial intelligence]] (AI) could advance rapidly, possibly becoming an existential threat to humankind. Bostrom, in his book Superintelligence (2014), compares humanity developing AI to small children playing with a bomb. He also considers it “the most important thing to happen… since the rise of the human species”. There is no reason why human psychology should be projected onto artificial minds, assuming that they would have the same emotional responses that humans developed during the evolutionary process. Expecting human characteristics from an AI could impede our understanding of what it might be like <ref name="11">Silverman, A. In conversation: Nick Bostrom. Retrieved from http://2015globalthinkers.foreignpolicy.com/#!advocates/detail/qa-bostrom</ref>. This area of study has received some attention, with Elon Musk investing $10 million to fund research on keeping AI friendly <ref name="12">Mack, E. (2015). Bill Gates says you should worry about artificial intelligence. Retrieved from http://www.forbes.com/sites/ericmack/2015/01/28/bill-gates-also-worries-artificial-intelligence-is-a-threat/#b2a52b93d103</ref>.
==Simulation argument==
The simulation argument is arguably Bostrom’s best-known work. It comes from a 2003 paper published in The Philosophical Quarterly <ref name="13">Stricherz, V. (2012). Do we live in a computer simulation? UW researchers say idea can be tested. Retrieved from https://www.washington.edu/news/2012/12/10/do-we-live-in-a-computer-simulation-uw-researchers-say-idea-can-be-tested/</ref> <ref name="14">Bostrom, N. (2003). Are we living in a computer simulation? The Philosophical Quarterly, 53(211): 243-255.</ref>. Although the full argument requires some probability theory, the basic idea can be grasped without mathematics <ref name="6"></ref> <ref name="15">Bostrom, N. (2006). Do we live in a computer simulation? New Scientist, 192(2579): 38-39.</ref>. It begins with the assumption that future civilizations will have so much computing power that they will be able to create ancestor simulations: detailed simulations of their forebears, replicating reality down to the smallest detail and allowing minds in the simulation to be conscious. Given that enormous computing power, it is assumed that they would also run many such simulations. According to Bostrom (2003), “then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race”, making it likely that we are among the simulated minds rather than the original biological ones. Conversely, if we do not believe that we are in a computer simulation, then we cannot assume that our descendants will run a great number of simulations of their ancestors <ref name="14"></ref> <ref name="15"></ref> <ref name="16">Solon, O. (2016). Is our world a simulation? Why some scientists say it’s more likely than not. Retrieved from https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix</ref>.
Another assumption that needs to be made is substrate independence, a position in the philosophy of mind. It holds that mental states can occur in different classes of physical substrates, not only biological ones: silicon-based processors in a computer could, in principle, be capable of generating consciousness. The assumption is that subjective experience can be generated if the computational processes of a human brain are replicated in sufficiently fine-grained detail, down to the level of individual synapses. At present there is neither sufficiently powerful hardware nor the necessary software to develop conscious minds in computers, but it is expected that if technological progress continues these problems will be overcome <ref name="14"></ref> <ref name="16"></ref>.
The simulation argument tries to demonstrate that at least one of three propositions is true: first, that almost all civilizations like ours go extinct before reaching technological maturity; second, that almost all technologically mature civilizations lose interest in creating ancestor simulations; and third, that we are almost certainly living in a computer simulation <ref name="6"></ref> <ref name="14"></ref>.
If the first proposition is false, then a significant portion of civilizations like ours reach technological maturity. If the second is false, then a significant fraction of those civilizations run ancestor simulations. If both are false, there would be a great number of simulations, and almost all observers with our types of experiences would be living in them. The simulation argument does not show that we are living in a simulation; it states only that at least one of the three propositions is true, without saying which one <ref name="6"></ref> <ref name="15"></ref>.
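The quantitative core of the argument can be summarized in a single expression from the 2003 paper (the notation follows Bostrom’s). Let <math>f_P</math> be the fraction of human-level civilizations that survive to reach a posthuman stage, <math>\bar{N}</math> the average number of ancestor simulations run by such a civilization, and <math>\bar{H}</math> the average number of individuals that have lived in a civilization before it reaches that stage. The fraction of all observers with human-type experiences who are simulated is then

:<math>f_{\mathrm{sim}} = \frac{f_P \bar{N} \bar{H}}{f_P \bar{N} \bar{H} + \bar{H}} = \frac{f_P \bar{N}}{f_P \bar{N} + 1}.</math>

If the first two propositions are false, <math>f_P \bar{N}</math> is very large and <math>f_{\mathrm{sim}}</math> is close to one, which is what forces the third proposition.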
==Transhumanism==
In Bostrom (2005), transhumanism is described as “a loosely defined movement that has developed gradually over the past two decades, and can be viewed as an outgrowth of secular humanism and the Enlightenment. It holds that current human nature is improvable through the use of applied science and other rational methods, which may make it possible to increase human health-span, extend our intellectual and physical capacities, and give us increased control over our own mental states and moods. Technologies of concern include not only current ones, like genetic engineering and information technology, but also anticipated future developments such as fully immersive virtual reality, machine-phase nanotechnology, and artificial intelligence.” <ref name="17">Bostrom, N. (2005). In defense of posthuman dignity. Bioethics, 19(3): 202-214.</ref>. This outlook arises from the human desire to acquire new capabilities; since ancient times, humanity has sought to expand the boundaries of its existence <ref name="4"></ref>.
Transhumanism advocates that human enhancement technologies should be widely available, and that individuals should have the option to choose which technologies to apply to themselves. It also promotes the view that parents should decide which reproductive technologies to use when having children. Transhumanists believe that the benefits of human enhancement technologies will outweigh their potential hazards. The development and implementation of these future technologies could lead to our descendants being “posthuman”: beings with indefinite health-spans, greater intellectual faculties, new sensibilities, or possibly the ability to control their emotions <ref name="17"></ref>.
===Cognitive enhancement===
Cognitive enhancement is “the amplification or extension of core capacities of the mind through improvement or augmentation of internal or external information processing systems.” Already, for example, external hardware and software give human beings effective cognitive abilities that in some respects surpass those of biological brains. To improve cognitive function, interventions can be directed at the core faculties of cognition: perception, attention, understanding, and memory <ref>Bostrom, N. and Sandberg, A. (2009). Cognitive enhancement: methods, ethics, regulatory challenges. Science and Engineering Ethics, 15(3): 311-341.</ref>.
==Bibliography==
===Selected articles===
* Bostrom, N. (1998). How long before superintelligence? International Journal of Future Studies, 2.
* Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology, 9(1).
* Bostrom, N. (2003). Are we living in a computer simulation? The Philosophical Quarterly, 53(211): 243-255.
* Bostrom, N. (2003). Human genetic enhancements: a transhumanist perspective. The Journal of Value Inquiry, 37(4): 493-506.
* Bostrom, N. (2005). In defense of posthuman dignity. Bioethics, 19(3): 202-214.
* Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1).
* Bostrom, N. (2005). Transhumanist values. Journal of Philosophical Research, 30: 3-14.
* Bostrom, N. and Ord, T. (2006). The reversal test: eliminating status quo bias in applied ethics. Ethics, 116(4): 656-679.
* Bostrom, N. and Sandberg, A. (2009). Cognitive enhancement: methods, ethics, regulatory challenges. Science and Engineering Ethics, 15(3): 311-341.
* Bostrom, N. (2012). The superintelligent will: motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2): 71-85.
* Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1): 15-31.
==References==