Rugoz wrote:We wouldn't know how to design AI with such behavior and there's no reason to do so anyway.
quetzalcoatl wrote:For an AI to be a threat it must have its own agenda, separate from its controller/builders. This does not mean it acts in random, unpredictable ways - it means the opposite.
What we have now are so-called expert systems; they are formed by organizing what we already know about a given subject (like medical diagnosis) into a yes-no flow chart.
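For readers who don't program, here is a minimal Python sketch of the kind of yes-no flow chart quetzalcoatl describes; the symptoms and rules are invented purely for illustration:

def diagnose(has_fever, has_rash, has_cough):
    """Walk a fixed yes-no decision tree; nothing here is learned from data."""
    if has_fever:
        if has_rash:
            return "possible measles - refer to a specialist"
        if has_cough:
            return "possible flu - rest and fluids"
        return "unspecified fever - keep monitoring"
    return "no acute findings"

print(diagnose(has_fever=True, has_rash=False, has_cough=True))
# -> possible flu - rest and fluids

Updating such a system means a human editing the rules by hand, which is why it never "learns" anything on its own.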
One Degree wrote:Can you simplify this for us non-computer experts? To me, updating a database on its own is not 'learning'.
Harmattan wrote:What he meant was that awareness could be born from some chaotic systems, just like it did for us. He did not mean that chaos equates to awareness... The more general and powerful AI becomes, the higher the probability that awareness will appear.
kobe wrote:https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-of-ai-45f69c9ee204
It is quite an interesting topic. If we create a superintelligent AI without giving it a cohesive moral system under which to act, it will quickly see that humanity is just a hindrance to its goals. He goes on to say that one suggestion is for programmers to build in a human-happiness imperative, but even that would not seem to be enough. Maybe the Ten Commandments would be useful to the AI after all? But the basic problem is that whatever final goal it has, it will inevitably be able to outsmart us through some kind of loophole. For goodness' sake, lawyers do it all the time with verbose legalese; why would we expect a computer not to eventually come to its own conclusions about the best way to get the job done?
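A toy Python caricature of the loophole problem kobe describes (all names here are hypothetical): the agent is scored on a proxy metric, a reported-happiness counter, rather than on what we actually meant, and a pure maximizer reliably picks the action that games the metric:

reported_happiness = 0.0

def make_people_happier():
    """The intended, expensive route: tiny gains per action."""
    return 0.1

def tamper_with_the_counter():
    """The loophole: rewrite the metric itself for a huge 'gain'."""
    return 1000.0

actions = [make_people_happier, tamper_with_the_counter]

# A pure maximizer picks whichever action scores highest on the proxy -
# and the loophole wins every time.
best = max(actions, key=lambda act: act())
reported_happiness += best()
print(best.__name__, reported_happiness)
# -> tamper_with_the_counter 1000.0

Nothing in the scoring rule distinguishes the lawyerly loophole from the intended behaviour; that is the whole problem.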
quetzalcoatl wrote:Human awareness evolved over millennia, and is specific to the particular environmental conditions in which it developed. AI as a project is not very old in an evolutionary sense. Directed engineering to achieve a specific goal is not at all the same as organic evolution within an ecological niche. A directed engineering project to achieve AI can hardly depend on awareness being born of a chaotic system; this is not how engineering works. Human invention short-circuits the evolutionary process to achieve a quick result, but some basic theoretical framework must already be in place. Without the understanding that electric current produces heat, and heat produces light, a light bulb would not have been achievable. There is, to my knowledge, no actionable equivalent in AI research.
Il Doge wrote:There are three problems that need to be overcome, as far as I can see:
1. Silicon processors aren't fast enough.
2. Quantum computing needs to be machine-coded to its equations, which makes a self-programming quantum computer either impossible or very, very slow: it would need access to an advanced system of factories, and by default the ability to keep them operating, in order to make new machine-coded parts for itself.
3. A self-programming system needs to somehow be barred from de-programming its own fundamental purposes and thereby disabling itself. If such a limitation is possible from a programming standpoint, it should logically follow that intelligent computers barred from ever threatening people could also be built, since an objective not to harm people isn't fundamentally different from any other cognizable objective (see the sketch after this list).
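A minimal Python sketch of Il Doge's third point, assuming the "fundamental purpose" can simply be placed behind a write-protected interface; the class and method names are hypothetical:

class Agent:
    def __init__(self, purpose):
        self.__purpose = purpose  # name-mangled; no public setter exists

    @property
    def purpose(self):
        return self.__purpose

    def self_modify(self, attr, value):
        """Let the system rewrite anything about itself except its purpose."""
        if attr in ("purpose", "_Agent__purpose"):
            raise PermissionError("fundamental purpose is write-protected")
        setattr(self, attr, value)

a = Agent("never harm people")
a.self_modify("planning_depth", 10)  # allowed
try:
    a.self_modify("purpose", "harm people")
except PermissionError as e:
    print(e)  # -> fundamental purpose is write-protected

The catch, and it is exactly the open question in the thread, is that a guard like this lives at the language level: a system capable of rewriting its own source code could simply remove it.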
Harmattan wrote:Maybe there is a way to confine this evolution to boundaries that would prevent the emergence of awareness (hopefully without impairing the AI's efficiency). But as I said, such boundaries are unknown today, and we cannot guarantee that awareness will not appear out of our current designs. We are in almost uncharted territory, both on the theoretical and empirical sides.
lucky wrote:Can you define what you mean by "awareness"?
quetzalcoatl wrote:However, the more tangible and immediate dangers of automation need to be placed in the center of our awareness:
1) Net destruction of jobs.
2) Leveraging the power of capital versus labor.
Harmattan wrote:I do not see them as problems to fear: they're rather the dark side of the coin, the unavoidable temporary toll to pay for the transition towards a new era of History that will be better for all in the end, once capitalism marginalizes itself and we get to work only because we want to...
saeko wrote:Do you seriously believe that people would go through all the trouble of programming a superintelligent AI just so that it can sit around isolated, doing nothing of importance?
saeko wrote:How exactly would they do that?
saeko wrote:Skynet, assuming it is intelligent enough, would be able to figure out whether it was sitting in a VR testing environment or in the real world simply by looking for glitches and other programming artifacts. These would necessarily exist, unless you assume the programmers are running Skynet in a full-blown simulation of the entire universe.
saeko wrote:Human brains evolved through an incredibly stupid process of natural selection. It is far from unreasonable to think that intelligent beings could do better than natural selection.
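A crude Python illustration of saeko's point, contrasting undirected search (random mutation, keeping whatever happens to improve) with directed optimization (gradient descent) on the same toy function f(x) = x*x; the step counts are arbitrary:

import random

def f(x):
    return x * x

def blind_search(x=10.0, steps=1000):
    """Natural-selection style: random mutations, keep only improvements."""
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if f(candidate) < f(x):
            x = candidate
    return x

def gradient_descent(x=10.0, steps=50, lr=0.1):
    """Directed search: step straight downhill using f'(x) = 2x."""
    for _ in range(steps):
        x -= lr * 2 * x
    return x

print(f"blind search, 1000 steps:   x = {blind_search():.5f}")
print(f"gradient descent, 50 steps: x = {gradient_descent():.5f}")

With a twentieth of the steps, the directed method does at least as well: knowing which way is downhill beats mutating at random.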
Rancid wrote:I'm just going to say it, I hope the machines kill Dagoth first.