Nick Bostrom
==Biography==
Nick Bostrom is a Professor in the Faculty of Philosophy at Oxford University, with a background in physics, computational neuroscience, mathematical logic, and philosophy. He is the founding director of the Future of Humanity Institute, a multidisciplinary research center at the University of Oxford that enables leading researchers to use mathematics, philosophy, and science to explore big-picture questions about humanity. Recently, the focus of the institute has been on questions regarding existential risks and the future of machine intelligence. The Future of Humanity Institute works closely with the Centre for Effective Altruism <ref name="1"> Bostrom, N. Nick Bostrom's home page. Retrieved from http://nickbostrom.com/</ref> <ref name="2"> Future of Humanity Institute. Mission. Retrieved from https://www.fhi.ox.ac.uk/about/mission/</ref> <ref name="3"> Adams, T. (2016). [[Artificial intelligence]]: 'We're like children playing with a bomb'. Retrieved from https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine</ref>.
In early 1998, the World Transhumanist Association was founded by Nick Bostrom and David Pearce. Its objective was "to provide a general organizational basis for all transhumanist groups and interests, across the political spectrum. The aim was also to develop a more mature and academically respectable form of transhumanism, freed from the 'cultishness' which, at least in the eyes of some critics, had afflicted some of its earlier convocations." The association has since changed its name to Humanity+. The World Transhumanist Association had two founding documents: the Transhumanist Declaration, a concise statement of the basic principles of transhumanism, and the Transhumanist FAQ, a consensus document that was more philosophical in scope <ref name="4"> Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1)</ref>.
==Superintelligence==
The development of [[artificial intelligence]] (AI) could advance rapidly, possibly becoming an existential threat to humankind. Bostrom, in his book Superintelligence (2014), compares humanity's development of AI to small children playing with a bomb. He also considers it "the most important thing to happen… since the rise of the human species". There is no reason to project human psychology onto artificial minds, or to assume that they would have the same emotional responses that humans developed during the evolutionary process. Expecting human characteristics from an AI could impede our understanding of what it might be like <ref name="11"> Silverman, A. In conversation: Nick Bostrom. Retrieved from http://2015globalthinkers.foreignpolicy.com/#!advocates/detail/qa-bostrom</ref>. This area of study has received some attention, with Elon Musk investing $10 million to fund research on keeping AI friendly <ref name="12"> Mack, E. (2015). Bill Gates says you should worry about [[artificial intelligence]]. Retrieved from http://www.forbes.com/sites/ericmack/2015/01/28/bill-gates-also-worries-artificial-intelligence-is-a-threat/#b2a52b93d103</ref>.
==Simulation argument==