
Basic concepts

Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence, and argue that it is difficult or impossible for present-day humans to predict what a post-singularity world would be like, owing to the difficulty of imagining the intentions and capabilities of superintelligent entities.[6][7][9] The term "technological singularity" was originally coined by Vinge, who drew an analogy between the breakdown in our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole.[9]

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[10][11][12] although Vinge and other prominent writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[7] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[11][13]

The concept of a technological singularity includes that of an intelligence explosion, a term coined in 1965 by I. J. Good.[14] Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[15] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than any human.[16] If a superhuman intelligence were invented, whether through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than current humans are capable of. It could then design an even more capable machine, or rewrite its own source code to become more intelligent still. This more capable machine could in turn design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[17][18][19]
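The recursive self-improvement loop described above can be made concrete with a small numerical sketch. The Python snippet below is purely illustrative: the growth rule (each generation improves in proportion to its own capability), the parameter values, and the hard cap standing in for physical limits are assumptions of this example, not a model proposed by Good, Vinge, or the cited sources.

# A toy sketch of recursive self-improvement. Illustrative only:
# the growth rule, parameters, and cap are assumptions of this example.

def intelligence_explosion(initial=1.0, gain=0.1, generations=30, cap=1e12):
    """Each generation designs a successor whose improvement is
    proportional to the designer's own capability, so progress
    compounds until a stand-in physical limit (cap) is reached."""
    level = initial
    history = [level]
    for _ in range(generations):
        # Smarter designers make larger improvements per step.
        level = min(level * (1 + gain * level), cap)
        history.append(level)
        if level >= cap:  # stand-in for limits of physics/computation
            break
    return history

for generation, level in enumerate(intelligence_explosion()):
    print(f"generation {generation:2d}: capability {level:.3g}")

Under this toy rule, capability creeps upward for many generations and then blows up within a few steps, which is the qualitative pattern the intelligence-explosion argument appeals to.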

The exponential growth in computing technology suggested by Moore's Law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's Law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[20] that the exponential growth curve could be extended back through earlier computing technologies that preceded the integrated circuit. Futurist Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[21]) increases exponentially, generalizing Moore's Law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology, and others.[22]

Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[23]

Like other authors, though, Kurzweil reserves the term "singularity" for a rapid increase in intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[24] He believes that the "design of the human brain, while not simple, is nonetheless a billion times simpler than it appears, due to massive redundancy".[25] According to Kurzweil, the brain has a messy and unpredictable quality because, like most biological systems, it is a "probabilistic fractal".[26] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence".[27]
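As a worked check of the doubling times quoted above, growth over a fixed period follows the standard exponential rule: factor = 2 ^ (elapsed time / doubling time). The Python sketch below applies this to the 1986-2007 window (252 months); the doubling times are the figures from the cited study,[23] while the variable names and rounding are this example's own.

# Worked arithmetic for the 1986-2007 doubling times quoted above.
# growth factor = 2 ** (elapsed_months / doubling_time_months)

ELAPSED_MONTHS = 21 * 12  # 1986 to 2007 = 252 months

doubling_times_months = {
    "application-specific compute per capita": 14,
    "general-purpose compute per capita": 18,
    "telecommunication capacity per capita": 34,
    "storage capacity per capita": 40,
}

for quantity, months in doubling_times_months.items():
    factor = 2 ** (ELAPSED_MONTHS / months)
    print(f"{quantity}: grew roughly {factor:,.0f}-fold")

A 14-month doubling time thus implies roughly a 260,000-fold increase over those 21 years, versus about an 80-fold increase for the 40-month storage figure, which is why the assumed doubling time dominates any such extrapolation.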

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[28][29] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[30][31] an issue most artificial general intelligence researchers have not addressed, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Singularity Institute for Artificial Intelligence, which is now the Machine Intelligence Research Institute.[28]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's Law is often cited in support of the concept.[32][33]
