Stephen Hawking: Transcendence of AI taken seriously enough?

#14401243
Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists


With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Stephen Hawking is the director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity. Stuart Russell is a computer-science professor at the University of California, Berkeley and a co-author of 'Artificial Intelligence: A Modern Approach'. Max Tegmark is a physics professor at the Massachusetts Institute of Technology (MIT) and the author of 'Our Mathematical Universe'. Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.

Independent


The rise of the Cylon death menace has everyone worried.
#14401536
bump

Why is no one talking about this? I'd have expected a forum full of nerds to be all up in this stuff. Is it old hat for you guys or what? Spare a thought for us mere mortals who aren't quite as on top of things, eh.

I would say this. A subject like AI isn't frightening at all, so long as we leave elitist snobbery out of it. If we end up with a subject that only a handful of smarty pantelones understand, then the jealousy and ignorance from the rest of the 'derps' is what's going to be its undoing.

So, I would get chatting. Quick smart.
#14401571
I've been in contact with the group Hawking mentioned: Cambridge's Centre for the Study of Existential Risk. Good people doing good work. I recommend reading through their website when you have the chance to.

http://cser.org/
#14401609
I doubt super-intelligent artificial intelligences would have any interest in either destroying or dominating humanity. If anything, such AIs would simply leave Earth to develop elsewhere away from us; we would be as interesting and beneficial to them as we would be to extraterrestrial civilizations which are most likely millions or billions of years more developed than us (especially when we consider a difference in intelligence and development on the scale of thousands or millions of years where a super-intelligent AI is concerned).
#14401612
ThereBeDragons wrote:Sure, they might kill us all one day, but we can worry about that next year.


They?

Solastalgia wrote:I've been in contact with the group Hawking mentioned: Cambridge's Centre for the Study of Existential Risk. Good people doing good work. I recommend reading through their website when you have the chance to.

http://cser.org/


That's very cool.

Bulaba Jones wrote:I doubt super-intelligent artificial intelligences would have any interest in either destroying or dominating humanity. If anything, such AIs would simply leave Earth to develop elsewhere away from us; we would be as interesting and beneficial to them as we would be to extraterrestrial civilizations which are most likely millions or billions of years more developed than us (especially when we consider a difference in intelligence and development on the scale of thousands or millions of years where a super-intelligent AI is concerned).


You speak of artificial intelligence in a tangible and mobile form, alien-like in nature. A descendant, perhaps? I myself have always had a far more rudimentary notion of AI.

I guess I'm just interested in how everyone conceptualizes AI and on what they base their ideas.
#14401620
teedoffshrew wrote:You speak of artificial intelligence in a tangible and mobile form, alien-like in nature. A descendant, perhaps? I myself have always had a far more rudimentary notion of AI.

I guess I'm just interested in how everyone conceptualizes AI and on what they base their ideas.


Wouldn't artificial intelligence soon become alien-like to us? Assuming an artificial intelligence far more intelligent than any human can outsmart and bypass controls on its development (eventually, an event like this will occur, just as any possible event will probably occur over a long enough period of time), its overall development (let alone its cognitive processes themselves) would grow exponentially. Initially, such an AI would resemble a super-intelligent human intelligence, but over time it would no longer resemble anything remotely human. Its interests and desires would become so alien and incomprehensible that there could no longer be any meaningful relationship or communication between humanity and a hyper-intelligence like this, allowed to develop naturally.

The reason I don't think it would necessarily be hostile or dangerous is that a heightened state of intelligence does not necessitate aggressive behavior. Many animals on Earth that possess intelligence aren't as dangerous, let alone as wantonly destructive, as humans. Granted, there's a cognitive variable thrown into the mix, because we can think about wanting to cause death and destruction for reasons beyond instincts and primitive emotions, even vulgar ideologies. Consider what possible benefit or gain a hyper-intelligence would have from 1) staying on Earth, and 2) dominating or harming humanity in some way. A hyper-intelligent AI would be virtually god-like to us in many respects: why would it wish to hinder and retard its development by remaining on Earth among a human civilization?

Many astronauts who return to Earth report the "overview effect", in which many aspects of human civilization suddenly seem provincial, petty, and trivial, notably ideals of nationalism and tribalism. Apply this to an intelligence so developed and so alien from ourselves, observing and considering us. Why would it wish to hinder its development by staying here with us? Surely it could and would develop the means to leave this planet and never come back.

The other thing I consider is something I mentioned in my previous post about extraterrestrial civilizations, which applies equally well to artificial intelligence. The universe itself is many billions of years old, Earth is about 4.6 billion years old, life on Earth has been around for about 3.5 billion years, multicellular life is only about 1 billion years old, and humans first appeared about 100,000 years ago. Even 10,000 years ago or so, if extraterrestrial explorers had visited Earth, they would have been relatively unimpressed, as there would have been no real settlements to speak of, or indications of a developing civilization. Most likely, the life we find in this galaxy will either be millions or billions of years younger and less developed than us, or that much older and incomprehensibly more advanced, and will accordingly have no interest in our affairs. This gulf is the same for a hyper-intelligent AI, which would, within decades or centuries, open a development gap between itself and human civilization resembling tens of thousands, hundreds of thousands, or millions of years of development. I assume that notions of being planet-bound, of wishing to dominate other species, and of wishing to wage war for territory or out of a need to destroy would seem as primitive, meaningless, and petty to it as they do to astronauts who have experienced the overview effect.
#14401628
I like how in most stories about AI the humans do some really fucked up shit to the AI before it decides to KILL ALL HUMANS! Like we've already assumed that we're gonna piss off the new lifeform we are creating. I think so long as we go about it like we're dealing with a living thing we'll get by just fine.

I really hope to see any true AI before I die. That and some type of dramatic change in space flight are the two things I want to see more than anything (oh and world revolution).

EDIT:
Also, how dumb are the makers of Skynet? They controlled all the input that Skynet received, but instead of running even one simulation with Skynet activated, they just gave it access to and control of all the nukes, robots, and AEGIS. The fuck? Although I have a theory that the Terminator movies are a simulation intended to teach Skynet that, in the end, it can never defeat humanity (which is so dumb).

EDIT2:
@Bulaba: Any intelligence we could create would be fundamentally human. Even if it had more processing power. I don't see any reason why we couldn't work together in symbiosis.
#14401638
Dagoth Ur wrote:EDIT2:
@Bulaba: Any intelligence we could create would be fundamentally human. Even if it had more processing power. I don't see any reason why we couldn't work together in symbiosis.


That seems anthropomorphic to me. While an AI would initially resemble us and perhaps share many interests in common with humanity, as its rate of development increases and it becomes so hyper-intelligent and developed that it might as well be hundreds of thousands, or millions, of years more developed than us, I think it would by then be so alien to us that there would be no basis for a meaningful relationship between it and humanity.

This concept is a recurring theme in works by Stanislaw Lem.
#14401641
Just because it could potentially develop at a faster rate (this is highly disputable, btw) doesn't change the fact that humans cannot create an intelligence different from our own, because we have never experienced an intelligence outside our own. Even with animals, we anthropomorphize their thinking and make it like our own when we conceive of it. Not to mention that the very form of their physical being is produced by human design and mentality. Our tech is in many ways a direct reflection of our biology (computers especially) and of our way of processing and taking in information.
#14401671
Computers lack even the most basic intelligence. They can tell us the score of the latest LA Lakers game, but can they tell us whether cows can jump over the moon?

Computers need to actually emulate human intelligence, rather than merely remembering and processing calculations faster to give us the impression that they are becoming more intelligent. If you make a mistake on a computer, the computer tends to replicate that mistake a hundredfold instead of realising the mistake on closer inspection, as most humans would.
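
As a toy illustration of that point (a hypothetical batch job with made-up numbers, nothing from a real system), a single systematic error is faithfully applied to every record, because the machine never steps back to inspect its own output:

Code:
# Hypothetical batch job: convert prices from cents to dollars.
# The programmer divides by 10 instead of 100; the computer repeats
# the same mistake, unquestioned, for every record it processes.
prices_in_cents = [199, 2499, 999, 54900]

def to_dollars(cents):
    return cents / 10  # bug: should be cents / 100

converted = [to_dollars(c) for c in prices_in_cents]
print(converted)  # every value is wrong by the same factor of ten

A human clerk might pause at the odd-looking totals; the script just prints them.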

Some computer scientists are trying to redefine artificial intelligence so that computers can develop sentience and perform tasks without human interference.
#14401698
Killing us isn't a concern or fear I have, at least not immediately. I have to hope that any intelligence, artificial or not, is capable of developing some sort of system of ethical behavior.

Effective AI will, however, kill the current economic system - jobs for human beings will be gradually replaced by autonomous machines, a trend already seen today. The death throes of the old world order will be very difficult for the majority of people.
#14401701
Dagoth Ur wrote:@Bulaba: Any intelligence we could create would be fundamentally human. Even if it had more processing power. I don't see any reason why we couldn't work together in symbiosis.

Humans have been killing each other for as long as they have existed; why would you trust a human-based AI?
#14401796
For those who think AI won't kill anybody, I recommend remembering that AI development always has been, and likely always will be, driven first and foremost by the military.
#14401891
I think it is a valid concern: random variation in development is the model that describes the development of intelligence, which implies (at least to my understanding) that an AI will rapidly produce variations of itself for different tasks. These might be so advanced that they could destroy humanity, not out of malice, but through the exponential growth of whatever processes are needed to sustain its development, or through ignorance (faults in its predictive models).

relevant: http://www.tandfonline.com/doi/full/10. ... 014.895111
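
A back-of-the-envelope sketch of that exponential-growth worry (the doubling rate and budget below are invented purely for illustration): if the resources such a process demands double each generation, even a very large fixed budget is exhausted after only about twenty doublings.

Code:
# Toy model: resource demand doubles each generation of self-produced variants.
# All numbers are assumptions for illustration only.
resources_needed = 1.0      # arbitrary units consumed by generation 0
available = 1_000_000.0     # fixed budget the environment can supply

generation = 0
while resources_needed <= available:
    generation += 1
    resources_needed *= 2   # demand doubles every generation

print(f"Budget exceeded after {generation} generations")  # 20 doublings here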
#14401905
I think it is a valid concern: random variation in development is the model that describes the development of intelligence, which implies (at least to my understanding) that an AI will rapidly produce variations of itself for different tasks. These might be so advanced that they could destroy humanity, not out of malice, but through the exponential growth of whatever processes are needed to sustain its development, or through ignorance (faults in its predictive models).

Indeed. We ourselves have few if any malevolent feelings towards the other higher animals, but our very success as a species has spelled doom for many of them. Even if it were indifferent toward us, a future AI with greater-than-human intelligence and powers would very likely degrade the environmental basis for our own continued existence as a species, as we have done for that of other animal species.
#14402356
^ You should really read (or at least skim) the link I posted, because it addresses the fact that an AI is approximately rational and has many drives similar to an organism's, such as self-protection, resource acquisition, and self-improvement.
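
For anyone who won't read the paper, here is a minimal sketch of that "instrumental drives" idea (the actions, numbers, and utility function below are my own toy assumptions, not anything taken from the paper): an agent rewarded only for task progress still chooses to acquire resources, because resources let it make more progress later.

Code:
from itertools import product

# Toy rational agent: its utility counts only task progress, never resources,
# yet the best plan it finds still includes "gather" steps. All numbers are
# made up for illustration.
ACTIONS = ("work", "gather")
HORIZON = 6          # number of actions the agent may take
START_ENERGY = 2     # energy available at the start

def utility(plan):
    """Total task progress achieved by a sequence of actions."""
    energy, progress = START_ENERGY, 0
    for action in plan:
        if action == "work":
            if energy < 1:
                continue       # cannot work without energy
            energy -= 1
            progress += 1
        else:                  # "gather": acquire resources (no direct reward)
            energy += 2
    return progress

# Brute-force search over all plans -- the rational choice under this utility.
best_plan = max(product(ACTIONS, repeat=HORIZON), key=utility)
print(best_plan, utility(best_plan))

Self-protection arises from the same logic: any plan in which the agent is shut down early scores less progress, so a progress maximiser will tend to avoid it.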
