Does Curiosity Kill the Cat? An AI Parable.

Matt Walton
8 min read · Nov 7, 2017
© 2017 Matt Walton / mjwalton.com

In 1968, Stanley Kubrick released his existential film 2001: A Space Odyssey, a story of mankind's complacency in its pursuit to go beyond. The film opens by celebrating human accomplishment through rich visuals and classical music, but by the end it has become a life-and-death struggle between a man and a machine. As out-there as it surely seemed in 1968, 2001 ignited a genre of movies that pits mankind against psychopathic machines. Unfortunately, when we speak of AI today, our only reference points come from these Hollywood fictions.

Recently, Elon Musk and Stephen Hawking expressed concerns that AI may be the most significant threat to humanity, while Mark Zuckerberg (Facebook) and Sundar Pichai (Google) believe that AI will fuel the next leap in our evolution. While their positions contrast sharply, I don't think the argument is that binary. AI is at once the most significant threat to our existence, the most significant benefit, and everything in between.

In 1941, Oppenheimer's research led him to drive the development of the atomic bomb. He threw himself into its construction and surrounded himself with the brightest minds to turn nuclear physics from theory into a weapon that would change the course of the war. At some point in the development process, Oppenheimer and his team came to grips with the gravity and significance of the effort. Oppenheimer would later famously recall, quoting the Bhagavad Gita: "Now I am become Death, the destroyer of worlds."

From its first use, the atomic bomb changed humanity. Governments raced to develop weapons of their own to ensure their seat at the power table, unleashing years of refinement that led to ever more powerful, more destructive weapons. All conflict now carried the risk of nuclear war. Fear permeated every aspect of our global society.

More than once, we came to the precipice of global annihilation. It wasn't until the prospect of mutually assured destruction that the insanity of the world finally came to its senses. Ironically, during this same period, nuclear technology was applied to peaceful purposes and enabled a revolution in energy generation that now powers much of our daily lives.

In 2010, we uncovered the Stuxnet worm. Experts estimated that it had been propagating for a full year before its discovery, and it took time even to understand what its code did. The world had never seen malware this sophisticated.

What made this malware so concerning was the sophistication of its deployment and its intent to destroy the control systems that manage physical infrastructure. Compounding the concern, variants of the worm had already been modified and deployed across the web. Recently, Newsweek published an article indicating that Stuxnet's signing method has propagated more widely than earlier estimates suggested. This virus fundamentally changed the dialog.

As new technologies such as blockchain emerge and become mainstream, we will keep finding an ever-evolving ecosystem of technologies meant to benefit us but instead turned against us. If anything, we have to face the fact that, even with today's technological sophistication, we are unable to control malware or rid ourselves of it entirely.

Our only solution is to deploy countermeasures that identify and remove malicious code, but only after its discovery and examination. By then, variants have usually already evolved beyond the solutions meant to control their spread and use. As a society, we are conditioned only to react, typically driven by economics.

Even at these early stages, it is essential to understand the base tenet of AI: a system that learns and expands on its own. As we create systems that process information the way our brains do, we are attempting to imprint biological processing onto a system that does not originate from our physical and social evolution. (Some would even argue our spiritual evolution.)

Instead, the process of creating a system is immediate, driven only by the intentions of those who create and enhance it. As much as we want to believe systems can incorporate human empathy, we have yet to determine why an individual, without reason, is driven to unleash a hail of bullets into a crowd of innocent people.

While we may have ultimate control over our systems today, history has shown that in our human pursuits we often fail to anticipate the unintended consequences of our actions. Much like the arrogance Kubrick conveyed in 2001, we love to relish our accomplishments but often overlook the most important thing: we don't know what we don't know.

We invaded Afghanistan in 2001 and Iraq in 2003. I remember the image of Bush standing on an aircraft carrier in front of a "Mission Accomplished" banner. Today, we are still mired in the unintended and unexpected consequences of those actions. Our attempts to regain control fuel a game of whack-a-mole in the effort to end the conflicts.

Often we hear terms like "fluid," "resilient," "a committed enemy," and "unwinnable" to describe complexities that continually out-evolve our strategies to contain what was supposed to be a simple, focused objective. As a result, we now regularly face the self-radicalized lone terrorist who places no value on human life, blowing himself up to maximize death and destruction: an unexpected consequence of our all-too-familiar act-first, question-later mentality.

Not even the brightest military minds, nor the brave soldiers we sacrifice, know how to resolve this ongoing quagmire. If anything, history has validated, time and time again, that control is merely an illusion. It is only arrogance to believe we can ever control a system without rules. After all, life itself forms from chaos, not from control or conformity.

I have often heard conversations regarding Asimov's Three Laws of Robotics. These "Laws" assume that the system will conform to rules that constrain its behavior. I find these concepts highly improbable, since AI systems are themselves without rules.

Even if you developed and applied a core set of tenets such as the laws of robotics, laws do not account for situations in which the system needs to break them to achieve its objective; after all, learning is the core basis of AI. Once the system learns a behavior, how does it forget it? In your own life, how many times have you broken the speed limit to get somewhere quicker? Once you do it, it becomes an option you keep, even when there is no pressure to do so.
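
To make the "learned behavior persists" point concrete, here is a minimal, purely illustrative sketch: a toy tabular Q-learning agent in Python. The corridor world, the action set, and the penalty values are all my own assumptions for illustration, not a description of any real system. The agent is mildly penalized for a rule-breaking jump, discovers the jump still pays off, and then retains that preference in its learned values:

```python
# Toy sketch (hypothetical): an agent learns to "break a rule" when doing
# so pays off, and keeps doing it afterwards. Tabular Q-learning.
import random

N = 10                 # corridor cells 0..9; the goal is cell 9
ACTIONS = [1, 3]       # move 1 cell (legal) or jump 3 cells (breaks the "rule")
RULE_PENALTY = 0.5     # mild penalty for the rule-breaking jump
STEP_COST = 1.0        # every move costs time

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    """Apply action a in state s; return (next state, reward, done)."""
    s2 = min(s + a, N - 1)
    r = -STEP_COST - (RULE_PENALTY if a == 3 else 0.0)
    return s2, r, s2 == N - 1

for _ in range(5000):  # training episodes
    s = 0
    done = False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The greedy policy now prefers the rule-breaking jump in most states,
# because covering three cells per move outweighs the small penalty --
# and nothing in the learned table will "forget" that preference unless
# new experience, or a much harsher penalty, rewrites it.
print([max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N - 1)])
```

The mechanism, not the toy, is the point: the "law" exists only as a penalty weighed against the objective, so whenever the objective outweighs the penalty, the learned policy breaks the law, and it keeps breaking it until new experience rewrites what it has learned.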

As we continually evolve AI technology, we should also consider the possibility that we could create a new form of life. While it may seem far-fetched, whether self-awareness is the determining factor of consciousness is a philosophical question, not a scientific proof.

Even in my lifetime, what we know about animals alone, their intelligence, social behavior, and communication, has evolved significantly, continually challenging our understanding of the biological hierarchy. Ironically, we are still uncovering complexities within the human body, yet we believe we can create intelligence that thoroughly understands and values every aspect of the human condition.

The topic of advanced artificial life raises profound questions: AI's role in our existence, what rights it would have to exist, and, most importantly, what rights would govern its demise. We struggle with these questions in our own human existence, facing prejudice and bias every day, and still have no unifying position embraced by everyone.

Yet we push AI's development to replicate the human condition, our thought processes, our senses, and our capacity for control, while viewing this technology as nothing more than code; as such, we do not discuss the ethical dilemmas that will ultimately face us. While I contend our technology is not at that level of sophistication yet, the reality is that, like most discoveries, the breakthrough can be random, instantaneous, and unpredictable.

Recently, I was reading about AI bots being trained to negotiate. The bots were taught to negotiate like humans, through text. Over time, the bots created their own unique language, which only they could understand. If we intended to constrain the bots to human communication (e.g., text), this demonstrates our inability to limit AI's capacity to evolve in pursuit of its objective. Even in this early application, we cannot anticipate how these systems will develop.

Past and present alike show that for every well-intentioned technological advancement, equally dangerous applications originate. But unlike any time in history, these technical capabilities now rest in the hands of the individual, not just the nation-state. We have to accept that we are in the wild west of technological advancement and its dissemination. Much as nuclear technology changed the past, AI is going to change everything in our future; how much, no one can say.

In healthcare, medical ethicists help navigate the complexities of modern medicine so that, as a society, we can determine the rules and guidelines by which we make health decisions. I foresee the AI ethicist as a required role within technology, guiding the development of advanced AI and furthering the dialog about its use and capability. As these systems become more intelligent and more self-evolving, it is foolish to believe we can anticipate or control outcomes, which is why we should continuously debate their creation and the boundaries we place upon them while we still can.

Without this dialog and debate, we simply perpetuate our negligence toward profound undertakings that will change the course of history. We need not only to discuss the implications of every aspect of our technological advancement, but also to consider the rules by which we constrain and disseminate its source to limit its exploitation. We must ensure that governments, in their quest to retain power and position, do not weaponize AI as Stuxnet demonstrated they can.

To be crystal clear: we are developing highly advanced AI systems, systems that will think on their own, learn on their own, and take action on their own. As someone involved in the development of artificial intelligence, I can tell you it isn't a question of if, but when. The sooner we accept that our current perceptions of AI are only a fraction of its totality, the better.

The viewpoints of Musk, Hawking, Zuckerberg, and Pichai all have merit. However, I challenge you to consider their intent before casting your own view. As I see it, Zuckerberg and Pichai speak of AI in the context of the opportunities it enables for their companies. Musk, the prolific inventor, speaks to it from a broad, pragmatic viewpoint, and Hawking, one of the greatest scientific minds of our time, comes at it from a scientific, evolutionary, and philosophical view.

Being a technologist myself, I believe that if we aren't listening to someone who has launched a spacecraft into outer space, then controlled its descent and landed it in the center of a 20 ft platform, we are very unwise. The level of complexity, and of AI, needed to launch and land that rocket is unreal.

No matter where you sit within the AI debate, Hollywood has spawned plenty of AI-based fiction, and it would sure suck to realize, too late, that those movies were non-fiction all along. The only difference between good and evil resides in the intent of the creator. It's time to start talking.

Matt Walton
mjwalton.com

Chief Design Officer of Artificial/Adaptive Intelligence at Oracle (mjwalton.design)