AI explosion: A perspective

Intelligence has been at the core of human evolution. Had it not been for our innate ability to use our cognitive gifts, improve on results, and constantly better ourselves in science and technology, we would still be hunter-gatherers. What lies at the heart of evolution, and for that matter of science too, is the recursive ability to build upon previous results, test them against the whetstone of established standards, and keep improving.

Computers have been the torchbearers of human technological prowess and advancement. With each passing year we see dozens of new, highly advanced systems in virtually every conceivable space, executing mammoth tasks at speeds far beyond those of a decade ago. In 1997, IBM’s Deep Blue beat the reigning world chess champion, proving its mettle, and there has been no stopping the AI juggernaut since. The US Congress declaring that a staggering 33% of America’s ground systems must be robotic by 2025, and that by 2030 the US Navy expects to field a crop of bird-sized flying objects able to operate semi-autonomously for weeks, speaks to the level of AI advancement we are attaining. Specialized hardware platforms and innovative architectures, namely neural network processing units (NNPUs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and a whole family of such microtechnologies collectively called neurosynaptic architectures, are fuelling these highly intelligent systems. Every tier of the cloud-to-edge ecosystem feeds on this intricate web of components.

The point is that computers may still fall short of true human intelligence, but what is noteworthy is the pace at which AI design, and the tools adding muscle to it, is progressing. Extrapolating into the future, we may arrive at a moment where an AI system starts designing itself, with less and less human intervention and ever more recursive methods. Are we inching close to this moment of singularity and the subsequent explosion, or more precisely the “intelligence explosion”? Will it outwit even the smartest human mind on the planet? Will this onslaught of “superintelligence” spell doom for the human race? In 2015, 29% of AI researchers affirmed that an explosion was “likely” or “highly likely”.
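To make the recursion at the heart of this speculation concrete, here is a deliberately crude toy sketch, not taken from any real AI system and resting entirely on made-up numbers: a system “redesigns” itself once per generation, and we compare compounding improvement (the explosion scenario) with diminishing returns (the plateau scenario). The growth rate and generation count are arbitrary illustrative assumptions.

```python
# Toy model of recursive self-improvement (illustrative only; the rates
# and generation counts are arbitrary assumptions, not measurements).

def simulate(generations: int, gain: float, diminishing: bool) -> list[float]:
    """Return capability after each self-redesign step, starting at 1.0."""
    capability = 1.0  # arbitrary baseline ("human-level")
    history = [capability]
    for g in range(1, generations + 1):
        if diminishing:
            # Each redesign yields a smaller absolute gain than the last.
            capability += gain / g
        else:
            # Each redesign improves capability in proportion to what the
            # system already has: compounding, i.e. exponential, growth.
            capability *= 1.0 + gain
        history.append(capability)
    return history


if __name__ == "__main__":
    runaway = simulate(generations=30, gain=0.25, diminishing=False)
    plateau = simulate(generations=30, gain=0.25, diminishing=True)
    print(f"Compounding returns after 30 generations: {runaway[-1]:.1f}x baseline")
    print(f"Diminishing returns after 30 generations: {plateau[-1]:.1f}x baseline")
```

The contrast between the two trajectories is the debate in miniature: whether returns to self-improvement compound or taper off is precisely what the two camps described below disagree about.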

Genesis of the idea

The term “intelligence explosion” caught everyone’s fancy after it gained salience through a didactic exposition by the statistician I. J. Good in 1965, who expressed the idea in these words: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” Since then the idea has been viewed in the context of an ever-evolving AI ecosystem, and it has stoked both fear and skepticism. This has led to a great deal of public debate centred on AI, what it aims to create, and whether we are on a collision course with an impending explosion of some sort.

Conflicting opinions

Clearly the camp is split into two opposing groups, with a majority of industry insiders and AI researchers leaning towards a serious possibility of an AI explosion. Their opinion rests on the premise that a “seed AI” will, through its recursive, self-improving loops, assume invincible proportions of intelligence and dwarf humans in the days to come. We have been designing AI systems that will over time need no human intervention and become self-sustaining; with their power needs met, they could raise themselves to a stature of near-immortality. Ray Kurzweil, the noted computer scientist and futurist who has authored books like “The Age of Intelligent Machines” and “The Singularity Is Near”, predicts that by 2045 “The technological singularity will occur as artificial intelligences surpass human beings as the smartest and most capable life forms on the Earth. Technological development is taken over by the machines, who can think, act and communicate so quickly that normal humans cannot even comprehend what is going on. The machines enter into a ‘runaway reaction’ of self-improvement cycles, with each new generation of A.I.s appearing faster and faster. From this point onwards, technological advancement is explosive, under the control of the machines, and thus cannot be accurately predicted (hence the term ‘Singularity’).” He goes on to predict that this might be the most disruptive thing ever, with the potential to change the course of human history. Justin R. Rattner, the former director of Intel Labs, thinks we are heading towards “a point when human and artificial intelligence merges to create something bigger than itself” by 2048. AI researcher Eliezer Yudkowsky is of the opinion that an intelligence explosion could be a reality by 2060.

It is at the same time surprising to note that at AI@50, formally the “Dartmouth Artificial Intelligence Conference: The Next Fifty Years” held in 2006, some 41% of the participants held the opinion that machines would “never” reach the level of human intelligence. This group sees the explosion as more of a myth propagated over time, based on flawed reasoning and a misreading of the term intelligence itself. For them, intelligence is not restricted to machines accomplishing a set of complex tasks; rather, it is rooted in the context and environment in which cognitive abilities develop. Advocates of this view hold that superintelligence is not simply a measure of greater problem-solving ability. Had it been so, the world wouldn’t be reeling under massive socio-economic issues, given that we have roughly 50,000 humans with a jaw-dropping IQ of 170 or higher. Proponents of this view claim to see through the inflated notion of an explosion and consider all evolving AI advancements as mere external tools at our disposal, which at best can be used as prosthetics to leverage the potential of existing human knowledge, gathered by human minds in their immediate milieu. They argue that we are certainly solving problems at greater speeds and expanding the capabilities of AI, but they see no reason to be worried about, or carried away by, the explosion myth.

The ethical question

One dilemma for researchers who treat an intelligence explosion as a near certainty is how to bring the elusive concepts of morality and ethics into the discussion, that is, how to harness the abilities of AI systems for friendly purposes and create a win-win situation. They are struggling with the inherently ambiguous definition of what may be considered right and what qualifies as wrong. Because the notion of right and wrong is itself relative, an AI system may at times take decisions detrimental to human existence. Consortia of like-minded businesses are pitching for a development trajectory that is accountable. Predicting the future has always been a tricky thing, as it is the outcome of competing world events and changing societal patterns and behaviour. It will be interesting to witness what the future holds for us humans amid this clamour about an intelligence explosion.
