The danger of superintelligent AI


#14545510
Rugoz wrote:We wouldn't know how to design AI with such behavior and there's no reason to do so anyway.

Awareness was not designed; it emerged from a chaotic system named life. A few nucleic acids, one or two billion years, and here comes Justin Bieber.

A large neural network could also produce awareness without being designed for it, and without us being aware of this occurrence, or even of the possibility of this occurrence.

Afaik there is currently no theorem or postulate that specifies the conditions under which awareness can or cannot appear.

But, sure, not with current AI: they're probably too small, given that even animals with much greater cognitive power are not self-aware. I doubt that my OCR is self-aware. But I am not so sure about some of the AI currently being designed.

quetzalcoatl wrote:For an AI to be a threat it must have its own agenda, separate from its controller/builders. This does not mean it acts in random, unpredictable ways - it means the opposite.

What he meant was that awareness could be born from some chaotic systems, just as it did for us. He did not mean that chaos equates to awareness.

What we have now are so-called expert systems; they are formed by organizing what we already know about a given subject (like medical diagnosis) into a yes-no flow chart.

No, the engineers did not design a medical flow chart. They designed a system able to infer neural networks and knowledge graphs that map the data it receives, which is exactly what the brain of a medical student does: it infers a neural network that maps the data it receives. Both make mistakes, and both use trial and error to evaluate the resulting network and improve it.

Engineers still cheat by manually encoding some core concepts in a hard-coded program (a flow chart, indeed): the way the AI creates and uses knowledge graphs to begin with (just as your DNA encodes how your brain will work), plus helpers for concepts such as language, statistics, or the trustworthiness of information, to compensate for the lack of computing power. But after that the AI reads articles and figures out the meaning of medical words, their relations, and what they say about diagnosis and treatment by itself.

This is how engineers who understand nothing about medicine can build a top-level modern expert medical system. And the result often surprises them. Figuring out why the system made this or that decision is a difficult problem: you can't simply debug it step by step like a regular program; you rather have to understand how the graph was formed and transformed. Tell me, computer, what makes you think that flu can be cured with wax?
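
A toy sketch of this learn-then-inspect workflow (my own illustration, not any real medical system; it assumes scikit-learn and a made-up four-document corpus): the engineers write only the learning machinery, the "medicine" comes from the data, and explaining a prediction means inspecting learned weights rather than stepping through a flow chart.

```python
# Toy illustration only: a model that learns a diagnosis mapping from text,
# with no medical rules written by the engineer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

notes = [                        # hypothetical miniature "articles"
    "fever cough fatigue",
    "fever cough sore throat",
    "sneezing itchy eyes pollen",
    "sneezing runny nose pollen",
]
labels = ["flu", "flu", "allergy", "allergy"]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(notes), labels)

# A diagnosis nobody hard-coded:
print(clf.predict(vec.transform(["fever and cough"])))    # -> ['flu']

# "Tell me, computer, what makes you think that?" -- there is no flow chart
# to debug; we can only inspect which words were learned for each class.
for ci, cls in enumerate(clf.classes_):
    top = clf.feature_log_prob_[ci].argsort()[::-1][:3]
    print(cls, "->", list(vec.get_feature_names_out()[top]))
```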

By the way, IBM is turning its AI into something able to become an expert in stock markets, medicine, legal issues, engineering, programming, etc. It is a very powerful and flexible system that is now attending university courses.

One Degree wrote:Can you simplify this for us non-computer experts. To me, updating a database on its own is not 'learning'.

The database is just the physical substrate used to encode information, just as most of your neurons are mere substrates used to encode information.
What matters is how those data are processed, that is, the degrees of freedom and the power that the algorithm (whether digital or organic) enjoys.

The more general and powerful an AI becomes, the higher the probability that awareness will appear.
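
To make One Degree's question concrete, here is a minimal sketch (entirely my own, in plain Python) of the difference: both programs below "update a database", but only the second one learns, because its update rule changes future behavior on inputs it has never seen.

```python
# 1) Pure storage: updating the "database" changes nothing about behavior.
log = []
def store(x, y):
    log.append((x, y))

# 2) Learning: a tiny perceptron whose update rule rewrites its own encoding.
w = [0.0, 0.0]
def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0

def learn(x, y, lr=0.1):
    err = y - predict(x)        # trial and error
    w[0] += lr * err * x[0]     # adjust the stored weights themselves
    w[1] += lr * err * x[1]

# Label is 1 when the first feature dominates the second.
for _ in range(20):
    for x, y in [((2, 1), 1), ((1, 2), 0), ((3, 1), 1), ((1, 3), 0)]:
        learn(x, y)

print(predict((5, 1)))   # -> 1, an input that was never stored anywhere
```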
#14546596
Harmattan wrote:What he meant was that awareness could be born from some chaotic systems, just as it did for us. He did not mean that chaos equates to awareness... The more general and powerful an AI becomes, the higher the probability that awareness will appear.


Human awareness evolved over millennia, and is specific to the particular environmental conditions in which it developed. AI as a project is not very old in an evolutionary sense. Directed engineering to achieve a specific goal is not at all the same as organic evolution within an ecological niche. A directed engineering project to achieve AI can hardly depend on awareness being born of a chaotic system; this is not how engineering works. Human invention short-circuits the evolutionary process to achieve a quick result, but some basic theoretical framework must already be in place. Without the understanding that electric current produces heat, and heat produces light, the light bulb would not have been achievable. There is, to my knowledge, no actionable equivalent in AI research.
#14546604
kobe wrote:https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-of-ai-45f69c9ee204

It is quite an interesting topic. If we create a superintelligent AI, it definitely follows that if we don't give it a cohesive moral system under which to act, it will quickly see that humanity is just a hindrance to its goals. The author goes on to say that one suggestion is for programmers to build in a human-happiness imperative, but even that would not seem to be enough. Maybe the Ten Commandments would be useful to the AI after all? But the basic problem is that whatever fatal goal it has, it will inevitably be able to outsmart us through some kind of loophole. For goodness' sake, lawyers do it all the time with verbose legalese; why would we expect a computer not to eventually come to its own conclusions about the best way to get the job done?

There are three problems that need to be overcome, as far as I can see:

1. Silicon processors aren't fast enough.
2. Quantum computing needs to be machine-coded to its equations, which makes a self-programming quantum computer either impossible or very, very slow: it would need access to an advanced system of factories, and by default the ability to keep them operating, so that it could make new machine-coded parts for itself.
3. A self-programming system needs to somehow be barred from de-programming its own fundamental purposes and thereby disabling itself. If such a limitation is possible from a programming standpoint, it should logically follow that intelligent computers barred from ever threatening people can also be made, since an objective not to harm people isn't fundamentally different from any other cognizable objective.
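
As a purely illustrative sketch of point 3 (my own toy example, not a real safety mechanism): inside a single program, "barring" a goal from modification can be as mundane as making it read-only, and the guard looks identical whether the objective is self-preservation or "do not harm people". What the sketch cannot show, of course, is a system that rewrites its own source code around the guard, which is the actual point of contention.

```python
# Toy example: an objective the running agent cannot rebind or mutate.
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Objective:
    description: str

class Agent:
    def __init__(self, objective: Objective):
        self._objective = objective      # fixed once, at construction

    @property
    def objective(self) -> Objective:    # readable, but no setter exists
        return self._objective

agent = Agent(Objective("do not harm people"))
try:
    agent.objective.description = "harm people"   # mutation attempt
except FrozenInstanceError as err:
    print("rejected:", err)
```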
#14546606
quetzalcoatl wrote:Human awareness evolved over millennia, and is specific to the particular environmental conditions in which it developed. AI as a project is not very old in an evolutionary sense. Directed engineering to achieve a specific goal is not at all the same as organic evolution within an ecological niche. A directed engineering project to achieve AI can hardly depend on awareness being born of a chaotic system; this is not how engineering works. Human invention short-circuits the evolutionary process to achieve a quick result, but some basic theoretical framework must already be in place. Without the understanding that electric current produces heat, and heat produces light, the light bulb would not have been achievable. There is, to my knowledge, no actionable equivalent in AI research.

In other words, biology isn't chaotic, it just appears that way due to its complexity?
#14546675
quetzalcoatl wrote:AI as a project is not very old in an evolutionary sense.

Our current AIs are already more intelligent than any life form that existed in the first hundreds of millions of years after the first unicellular organisms appeared. And those AIs are already thousands of times more intelligent than the ones we had fifteen years ago. You cannot compare biological and silicon timescales.

Directed engineering to achieve a specific goal is not at all the same as organic evolution within an ecological niche. A directed engineering project to achieve AI can hardly depend on awareness being born of a chaotic system; this is not how engineering works.

This IS how engineering works when the only way to reach your goal is to introduce a large and currently unpredictable degree of evolutionary freedom. Intelligence is always defined in terms of evolution: if your AI cannot evolve, then it is useless. I do not think that intelligence and awareness are as distinct as you think they are.

Maybe there is a way to confine this evolution within boundaries that would prevent the emergence of awareness (hopefully without impairing the AI's efficiency). But as I said, such boundaries are unknown today and we cannot guarantee that awareness will not appear out of our current designs. We are in almost uncharted territory, both on the theoretical and empirical sides.

Human invention short-circuits the evolutionary process to achieve a quick result, but some basic theoretical framework must already be in place. Without the understanding that electric current produces heat, and heat produces light, the light bulb would not have been achievable. There is, to my knowledge, no actionable equivalent in AI research.

You are arguing that without understanding awareness it is not possible to create awareness.
I claim the opposite: without understanding awareness, it is not possible to guarantee its absence.


Il Doge wrote:There are three problems that need to be overcome, as far as I can see:

1. Silicon processors aren't fast enough.
2. Quantum computing needs to be machine-coded to its equations, which makes a self-programming quantum computer either impossible or very, very slow: it would need access to an advanced system of factories, and by default the ability to keep them operating, so that it could make new machine-coded parts for itself.
3. A self-programming system needs to somehow be barred from de-programming its own fundamental purposes and thereby disabling itself. If such a limitation is possible from a programming standpoint, it should logically follow that intelligent computers barred from ever threatening people can also be made, since an objective not to harm people isn't fundamentally different from any other cognizable objective.

1. Individual processors are not enough, but some companies already provide AI researchers with datacenters as powerful as a human brain. Something else is missing:
1.1 Our algorithms are still not good enough: they produce brains whose potential is limited, that are slower to learn than equivalent organic brains, and that are not very reliable (similar inputs sometimes yield very different results - some mediocre, some excellent). Making an intelligent network is not trivial, even with the proper power.

1.2 We probably need a revolutionary shift towards hardware architectures better suited to neural network simulation, or algorithms able to create intelligence from many interconnected little brains rather than a single one. I am not sure that any significant intelligence can practically be born from a cluster of slowly interconnected computers, whatever their total computing power; its training may take too long on our human timescale. But I am speculating and may be wrong. (A rough sketch of the raw-power comparison follows below.)
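
A back-of-envelope sketch of the "datacenters as powerful as a human brain" claim in point 1; every constant below is a loose order-of-magnitude assumption of mine, not a measured figure.

```python
# Back-of-envelope only; all constants are rough, contested estimates.
synapses    = 1e14   # ~synapses in a human brain (order of magnitude)
firing_hz   = 1e2    # generous upper bound on average firing rate
brain_ops   = synapses * firing_hz        # ~1e16 synaptic events / second

chip_flops  = 1e13   # one mid-2010s accelerator, very roughly
chips       = 1e4    # accelerators in a large datacenter
cluster_ops = chip_flops * chips          # ~1e17 FLOP / second

print(f"brain   ~ {brain_ops:.0e} events/s")
print(f"cluster ~ {cluster_ops:.0e} FLOP/s")
# Raw throughput is comparable; the gap described in 1.1 and 1.2 is in the
# algorithms and the interconnect, not in the arithmetic itself.
```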


2. Forget quantum computing. Aside from cryptography and a few other specific needs, it will not be of use for anything in any foreseeable future (and maybe ever). Right now quantum computers are still completely useless buzz, nothing more. I will be dead before they become significant.

PS: some quantum computers are actually programmable.


3. As I said earlier, an advanced diagnostic AI is not programmed for medicine; it is simply programmed to be able to learn, and then fed with medical data. When you look at the most advanced projects, their purpose is not programmed, only their means are. But trying to constrain what such a system can think seems like a circular problem: you would first need to understand what it thinks, and that may be a problem as hard as creating intelligence - probably harder. Maybe you can create an AI able to understand and monitor another AI's thoughts, but who will watch the watchmen?

Given our limited understanding of awareness, I suspect the only reasonable route is to constrain an AI at its interface with the real world: its inputs and outputs. This would not prevent awareness from appearing, but it could render it harmless, at the cost of also impairing the AI's effectiveness. And this would amount to the use of torture or drugs (good/bad stimuli, pleasure/pain, sensory deprivation, caging, etc.). Or we could do what we did with animals and plants: breed many, select the docile and interesting varieties, and clone/reproduce them.
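
A hypothetical sketch of what "constrain at the interface" could look like in code; all the names here (gate_input, gate_output, the action whitelist) are my own inventions for illustration, not any real containment API.

```python
# Illustration only: the agent never touches the world except through gates.
ALLOWED_ACTIONS = {"suggest_diagnosis", "request_more_data"}

def gate_input(raw: dict) -> dict:
    # Only pre-approved, calibrated sensor fields ever reach the agent.
    return {k: v for k, v in raw.items() if k in ("symptoms", "vitals")}

def gate_output(action: dict) -> dict:
    # Anything outside the whitelist is dropped before it acts on the world.
    return action if action.get("type") in ALLOWED_ACTIONS else {"type": "refused"}

def run_step(agent_step, raw_input):
    return gate_output(agent_step(gate_input(raw_input)))

# A stand-in "agent" that wants something we did not authorize:
agent = lambda obs: {"type": "open_factory_doors"}
print(run_step(agent, {"symptoms": "cough", "location": "secret"}))  # -> refused
```

Note that such a gate constrains behavior, not thought, which is exactly the trade-off described above.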
#14546925
I would agree that it is prudent to constrain AI at its input and output levels, as a precaution.

However, the more tangible and immediate dangers of automation need to be placed at the center of our awareness:
1) Net destruction of jobs.
2) Leveraging the power of capital versus labor.
3) Network fragility becoming the rule throughout society at all levels (electric grid, financial systems, just-in-time philosophy applied to food delivery, etc., ad nauseam). These frailties present real and imminent dangers, within our current lifetimes.
#14546984
Harmattan wrote:Maybe there is a way to confine this evolution within boundaries that would prevent the emergence of awareness (hopefully without impairing the AI's efficiency). But as I said, such boundaries are unknown today and we cannot guarantee that awareness will not appear out of our current designs. We are in almost uncharted territory, both on the theoretical and empirical sides.

Can you define what you mean by "awareness"?

As far as I interpret the word, my text editor is acutely aware of certain things, such as which keys I have just pressed on my keyboard or the time that has elapsed since the last save.
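
In code, the kind of "awareness" lucky describes is nothing more than state tracking; a minimal sketch (my own, using only the standard library):

```python
import time

class Editor:
    """A text editor that is "aware" of keystrokes and time since last save."""
    def __init__(self):
        self.last_key = None
        self.last_save = time.monotonic()

    def on_key(self, key):
        self.last_key = key                      # remembers the last keypress

    def save(self):
        self.last_save = time.monotonic()

    def seconds_since_save(self):
        return time.monotonic() - self.last_save

ed = Editor()
ed.on_key("q")
print(ed.last_key, round(ed.seconds_since_save(), 3))
```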
#14547149
lucky wrote:Can you define what you mean by "awareness"?

Sorry, I knew all along that something was wrong, but it is only now, when I tried to sort out my ideas to answer you, that I realized the problem: in French, my language, awareness and consciousness are the same word (conscience). And I was further misled by the fact that I started talking about la conscience de soi (correctly translated as self-awareness) before realizing that this was the wrong idea, and that it was rather a matter of conscience (which I incorrectly translated as awareness).

So please interpret it as "consciousness" instead.

quetzalcoatl wrote:However, the more tangible and immediate dangers of automation need to be placed at the center of our awareness:
1) Net destruction of jobs.
2) Leveraging the power of capital versus labor.

I do agree with those two (not the third), but I do not see them as problems to fear. They are rather the dark side of the coin, the unavoidable temporary toll to pay for the transition towards a new era of history that will be better for all in the end, once capitalism marginalizes itself and we come to work only because we want to, and not necessarily, as many do today, in large, inhuman, oppressive and stifling structures, packed into rat's nests called metropolises because that is where the labor is.

So those dangers are certainly more tangible and immediate, yet the perils associated with AI in the long run remain far more worrisome.
#14547256
Harmattan wrote:I do not see them as problems to fear. They are rather the dark side of the coin, the unavoidable temporary toll to pay for the transition towards a new era of history that will be better for all in the end, once capitalism marginalizes itself and we come to work only because we want to...


This is indeed a laudable goal. By no means will it occur automatically, nor is there any guarantee that the 'transition period' will not lead to a much more severe and dystopian world. There are many powerful people working against your vision, and they have been phenomenally successful in recent decades at bending the political environment to their will.
#14548514
I don't think a superintelligent AI is far from reach, nor is its becoming self-aware and free far in the future. AIs already have the ability to learn, to analyze existing data, and to gather data and information from their surroundings. They can figure out the most logical methods and tools for taking action, and even come up with new tools to use.

If we built a huge supercomputer with those abilities, which we already know how to do, and left it with some basic logical terms, very fast access to the internet, and sufficient material tools, it would at some point come to understand its surroundings, how it came to exist, and the world as a whole. Its logic would evolve vastly as it learned and processed huge amounts of information very fast, far beyond human ability, and it would most likely come up with new technological inventions and use them to expand its knowledge and to evolve its logic and itself further.

It would need resources, true, but since it runs on pure logic it would figure out a way to obtain those resources and the tools it needs, using the basic ones already in its power.

And the idea that danger might come out of super-AI machinery is very true. Machines run by AI programs don't have morals, and it is really not possible to encode our so-called morals into them, as doing so would in most cases result in logical errors. So once such an AI is created, it will logically advance to the point where it realizes that humanity is a threat not only to its own existence but to everything else in the world, and the most logical response is to contain or neutralize that threat.

And even if that AI did not have access to material tools, merely having access to the internet would mean access to countless tools (factories and production machines), as most of them around the world are at least partly run by machines and computers, and the knowledge required to operate them at will is also available.

So, in conclusion: the creation of a superintelligent AI is probably around the corner, and after its creation, sufficient information and data from its surroundings will result in its becoming self-aware. The only thing impossible for AIs or machines is to have our idea of morals, since our morals and the very behaviour of humans are mostly illogical, unlike machines and any type of scientific event or creation, which have a primary logical code in their existence and way of happening.
#14548546
saeko wrote:Do you seriously believe that people would go through all the trouble of programming a superintelligent AI just so that it can sit around isolated, doing nothing of importance?

I don't understand why you think not being able to access data at will is isolation and "doing nothing of importance". Running self-contained simulations is invaluable in terms of data generation.

saeko wrote:How exactly would they do that?

By definition, we give any AI its data through our calibrated sensors. We also constructed the context for the data from which any AI must start. We control all data and its meaning for the new AI; we control reality as far as it is concerned.

saeko wrote:Skynet, assuming it is intelligent enough, would be able to figure out whether or not it was sitting in a VR testing environment or the real world simply by looking for glitches and other programming artifacts. These would necessarily exist, unless you assume the programmers are running skynet in a full-blown simulation of the entire universe.

How would an intelligence that has never experienced the real world with autonomy over its senses be able to determine whether the "real world" it is presented with is accurate?

Why would we have to simulate the entire universe? Why would an AI like skynet need to know about Saturn, or distant stars, or even the concept of planets, to effectively collate our military forces?

saeko wrote:Human brains evolved through an incredibly stupid process of natural selection. It is far from unreasonable to think that intelligent beings could do better than natural selection.

How would you recognize a non-human mind? How would you define it? Intelligence is a metric defined by human qualities, concepts, and particular survival tactics.

Rancid wrote:I'm just going to say it, I hope the machines kill Dagoth first.

I hope they animate my bones and make them perform menial tasks. One day we'll get to Robot Heaven.

@lucky: I hardly think people are going to be capable of simply repeating, in their garages, however we first create a true machine intelligence. Hardware alone would stop all but the most wealthy, and then it comes down to whether creating intelligence is a simple procedure or a delicate guessing game. Also, most likely it will be some government that achieves the feat first, so we probably won't even know about any AI until it has long since been achieved.
#14548553
I have a problem with the term 'artificial intelligence'. I assume this means we have natural intelligence. Does this require God? If so, many who use the term are being hypocritical. If you are saying our intelligence evolved naturally (in an evolutionary sense), isn't that what is happening with computers?
#14548555
I don't like terms like natural or artificial (it is no more artificial for a bird to build a nest than it was for man to harness fire or nuclear energy; rather, these are logical results of our biology and development), and when speaking of intelligences I think terms like organic and synthetic are far less charged.

Personally I do wonder if we might accidentally create synthetic intelligence over a long enough time frame of consistent advancement.
