The inevitability of a super AI?

By Prosyscom
March 23, 2018

We know very little about how our brain works, how it creates consciousness, and how it allows us to be intelligent; therefore, we have no clue how to teach or program a machine to be as intelligent as a human.

The current approach is to build computers with massive processing power and algorithms structured in layers of nodes connected in ways similar to our neural system (so-called neural networks), feed them massive amounts of data, and expect them to learn by trial and error how to make sense of it.
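
As a toy illustration of that idea, the sketch below trains a tiny two-layer network of nodes by trial and error: it repeatedly nudges its connection weights to reduce its error on the XOR problem. This is a minimal sketch only; real systems like AlphaGo use far deeper networks and far more sophisticated training.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a simple problem that a single layer of nodes cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden layer (4 nodes)
W2 = rng.normal(size=(4, 1))   # hidden layer -> output node

losses = []
lr = 1.0
for step in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: adjust weights to reduce the error (trial and error).
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After training, the "knowledge" lives entirely in the numeric weights W1 and W2, which carry no human-readable explanation of what was learned.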

Even to their programmers, these systems are “black boxes”.

However, while AlphaGo learned to play Go with human assistance and data, AlphaGo Zero learned entirely from scratch, with no human data (besides Go’s rules), through so-called reinforcement learning: playing countless games against itself. It ended up beating AlphaGo.
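
The self-play idea can be sketched in miniature. The toy program below is an illustration only, not DeepMind’s actual method (which combines deep networks with Monte Carlo tree search): it uses simple tabular Q-learning to master a tiny Nim game, given nothing but the rules, purely by playing against itself.

```python
import random

random.seed(0)

# Toy Nim: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins.
PILE, ACTIONS = 10, (1, 2, 3)
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, PILE + 1)}
alpha, eps = 0.5, 0.2

def best(s):
    """Greedy move: the action with the highest learned value."""
    return max(Q[s], key=Q[s].get)

for episode in range(20000):
    s = PILE
    while s > 0:
        # Explore occasionally, otherwise play the current best move.
        a = random.choice(list(Q[s])) if random.random() < eps else best(s)
        s2 = s - a
        if s2 == 0:
            target = 1.0                       # taking the last stone wins
        else:
            target = -max(Q[s2].values())      # the opponent (itself) replies optimally
        Q[s][a] += alpha * (target - Q[s][a])  # trial-and-error value update
        s = s2

# Optimal play leaves a multiple of 4 stones for the opponent.
print(best(10))  # -> 2
```

No human strategy is ever supplied; the winning policy (leave a multiple of 4) emerges from self-play alone, which is the essence of what AlphaGo Zero did at vastly greater scale.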

Moreover, the same algorithm, AlphaZero, learned to play chess on its own in 4 hours and then beat the best chess engine, Stockfish, 28 wins to 0, with 72 draws, using less computing power than Stockfish.

A grandmaster, seeing how these AIs play chess, said that “they play like gods”.

Then it did the same thing with the game of shogi.

AlphaZero is more or less a general AI, ready to learn anything with clear rules by itself and then beat every one of us.

So, since no one knows how to teach machines to be intelligent, the goal is to create algorithms that can figure out, by trial and error, how to develop a general intelligence comparable to ours.

Or algorithms that will create different, better, more intelligent algorithms.

That is precisely what Google did with its AI AutoML: AutoML created an AI, NASNet, that is better at image recognition than any previous AI.

If a computer succeeds and becomes really intelligent, we most probably won’t know how it did it, what its real capacities are, how we can control it, or what we can expect from it (“even their developers aren’t sure exactly how they work”).

All of this is being done by greedy corporations and by optimistic programmers trying to make a name for themselves.

This seems a recipe for disaster.

Perhaps we will be able to figure out, afterwards, how they did it, and learn a lot about ourselves and about intelligence from them.

But in the meantime we might have a problem with them.

AI development should be overseen by an independent public body (as Musk recently argued) and internationally regulated.

One of the first regulations should cover deep learning and self-learning computers: not necessarily on specific tasks, but on general intelligence, including language and abstract reasoning.

And forget about open-source AI. In the wrong hands, it could be used with very nasty consequences (check this 7-minute video).

I had hoped that a general human-level AI couldn’t be created without a new generation of hardware. But AlphaZero can run on less powerful computers (a single machine with four TPUs), since it doesn’t have to check 80 million positions per second (as Stockfish does), but only 80 thousand.

Since our brain uses much of its capacity running basic functions (the beating of our heart, the flow of blood, the work of our organs, the control of our movements, etc.) that an AI won’t need, perhaps current supercomputers already have enough capacity to run a human-level GAI.

If that is the case, this whole matter depends solely on software.

And, at the current pace of AI development, there probably won’t be time to adopt any international regulations, since that normally takes at least 10 years.

Without international regulations, governments won’t stop or seriously slow AI development by imposing safety measures, for fear of being left behind on this decisive technology.

Therefore, it seems that a general AI comparable to humans, and thus much better, since it would be much faster, is inevitable in the short term, perhaps in less than 10 years.

There is a raging debate about what AlphaZero’s achievements imply in terms of the speed of development towards a GAI.

Are we going to keep up the recent pace of development, or will the pace slow now that we are entering the really tricky issues?

The truth is that nobody can know, since we are talking about human-level intelligence, which no one really understands or has a clue how to replicate in a comparable system.

Many things can happen in 10 years, and we simply aren’t prepared for them, especially if AI takes over the direction of these developments.

Will we accept the risk?

Human nature suggests that we are going to take this risk. We are an “all or nothing” species, blinded by ambition to overcome our limits.

We are probably already on a path of no return towards the development of a super AI, and no one can do anything about it. If American corporations won’t do it, the Chinese or someone else will.

After a human-level GAI, the step towards a super AI will most likely follow shortly after, and the probability of humans keeping control over it is zero.

OpenAI Wants to Make Safe AI, but That May Be an Impossible Task

“I met with Michael Page, the Policy and Ethics Advisor at OpenAI. (…) He responded that his job is to “look at the long-term policy implications of advanced AI.” (…) I asked Page what that means (…) “I’m still trying to figure that out.” (…) “I want to figure out what we can do today, if anything. It could be that the future is so uncertain there’s nothing we can do.”

The reasoning behind my conclusion that humans will have zero control over a super AI can be found here:

AI and Greediness
