Artificial Intelligence (AI) research grabbed headlines in October 2017, when Google’s AI company DeepMind unveiled an AI that learns as humans do.
The AI, called ‘AlphaGo Zero’, has mastered the ancient Chinese strategy game of ‘Go’. AIs beating human grandmasters at chess and Go is already old hat, but this week AlphaGo Zero beat the previous world-champion Go AI, called AlphaGo.
The difference between these two AIs is vast. The now-outdated AlphaGo learned the game by studying the moves in thousands of recorded matches between humans.
AlphaGo Zero started with nothing but the rules of the game. A blank slate.
By playing millions of games against itself and analysing each result, AlphaGo Zero learned from each experience and developed strategies to win. It worked surprisingly well: AlphaGo Zero beat AlphaGo by 100 games to nil.
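The core idea, learning a game from nothing but its rules by playing against yourself, can be sketched at a much smaller scale. The toy below is not AlphaGo Zero’s actual machinery (which uses deep neural networks and tree search); it is a minimal self-play learner for the simple game of Nim, where players alternately take 1–3 stones and whoever takes the last stone wins. All names and parameters here are invented for the illustration.

```python
import random

random.seed(1)

# Blank slate: a table of values Q[(stones_left, action)], always scored
# from the point of view of the player about to move. It starts empty.
Q = {}
ACTIONS = (1, 2, 3)

def legal(s):
    return [a for a in ACTIONS if a <= s]

def best(s):
    return max(legal(s), key=lambda a: Q.get((s, a), 0.0))

alpha, eps = 0.5, 0.2          # learning rate and exploration rate
for episode in range(20000):   # millions of games for Go; thousands suffice here
    s = 15                     # start each self-play game with 15 stones
    while s > 0:
        # Mostly play the best known move, sometimes explore a random one.
        a = random.choice(legal(s)) if random.random() < eps else best(s)
        s2 = s - a
        if s2 == 0:
            target = 1.0       # took the last stone: a win for the mover
        else:
            # The opponent moves next, so their best outcome is our worst.
            target = -max(Q.get((s2, b), 0.0) for b in legal(s2))
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + alpha * (target - q)
        s = s2
```

With enough self-play episodes the table tends to converge on Nim’s known optimal strategy: always leave your opponent a multiple of four stones.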
The Threat of AI
For decades, science-fiction movies like ‘The Terminator’ have featured a doomsday scenario of AIs with the ability to observe and draw conclusions. In these movies, the AI concludes that the world’s problems can all be solved by destroying the human race: no more pollution, climate change, resource scarcity, famine or war.
To be fair, humans have told stories about our creations rising up and destroying us for centuries: the Sorcerer’s Apprentice, Frankenstein’s monster, the homunculi of the 15th century, the golems of ancient Hebrew legend. The consistent problem is that once our creations gain the ability to act, and the AIs the ability to think, instructions given by humans are taken literally, with devastating results.
The problem is that human communication rests on so many shared assumptions and unspoken rules of behaviour that we can’t possibly spell them all out for a golem or an AI. Stripped of context and meaning, communication is a desert.
Wrong Conclusions
In the 1980s, early researchers allegedly created an AI (then called a ‘neural net’) that could identify enemy tanks in reconnaissance photographs. The AI was fed hundreds of photos of trees with camouflaged tanks behind them, and photos of just trees. With each photo the AI was given the correct answer and left to develop its own criteria for deciding between pass and fail.
It worked beautifully on the researchers’ photos, both those it had seen in training and a held-out control group.
However, when tested on military-supplied photos it had never seen, its answers were no better than random, and the AI failed dismally.
Eventually, the scientists figured out that their original set of photos contained a pattern they hadn’t spotted. The tank photos were all taken on cloudy days, and the photos without tanks on sunny days. The AI had learned that the ‘correct’ difference was simply the weather in the photo, and had paid no attention to the tanks at all.
From the AI’s side, the problem was that it could not explain its conclusions: it had no concepts of ‘sky’ or ‘tank’ with which to do so, just patterns to compare.
It looks like the worst kind of communication problem between humans and AIs. Some high-profile forward thinkers, such as Elon Musk, the founder of SpaceX and Tesla, believe this will always be an issue, and that disaster is waiting to happen the moment one assumption is left unspoken.
However, the same issue of unspoken assumptions occurs in the way humans think, too.
Unconscious Assumptions
Just like AlphaGo Zero, babies start as blank slates as far as conscious thought is concerned. We are wired to learn from experience and draw conclusions, forming a map of how our world operates and of how we can best survive in it.
For example, a baby learning to walk falls down many times, learns, makes adjustments and tries again. Eventually, they hit upon the right combination that allows them to walk. Continued practice over many years improves the skill, until successful walking becomes completely automatic, including catching ourselves when we trip.
There is an assumption buried deep within the experience of walking that goes completely unchallenged – that gravity is constant. Our whole technique of walking is based on unchanging gravity. A great illustration is a scene 20 minutes into the 2012 movie ‘John Carter’, where he tries to walk normally on Mars. Under different gravity, his Earth-gravity techniques don’t work, and he experiments and fails many times before finally learning how to move around.
This is not a flaw. We are designed to make assumptions and build belief systems about how the world works, such as gravity being constant. If we tried to consciously process all of the sensory information we receive each day, our thought processes would be overwhelmed by the effort. By putting things on automatic, using our conclusions about how the world works, we free our conscious mind to focus on more important things: anything that has changed and might be a danger, or thinking creatively and planning. A mind operating on assumptions is inherent in human intelligence.
Just like the AIs, we can only rarely describe these assumptions, because we don’t see them.
Beliefs We Invent
We also use unconscious belief systems in our interactions with people, interpreting the behaviour of others and what it means for us. Just like the tank-seeking neural net, we sometimes draw the wrong conclusions, and this can have disastrous consequences in our lives.
In the 1950s, Dr Carl Rogers, a founding father of clinical psychology, identified a wrong assumption we make during childhood that is a huge source of stress and stops us from reaching our fullest human potential. One estimate is that only 1 in 100,000 people escapes making this unconscious assumption.
The assumption is that our worth as a person depends on what we do. Things that we do make us bad or good.
When we believe this, our future actions are all based on that conclusion, whether we are aware of it or not. We are driven to prove our worth to others and to ourselves. If we fail or make mistakes, that means we’re not good enough. We become afraid of people’s opinions of us, looking to others for proof of our worth.
But criticism is only painful if we think it means something negative about us.
Imagine if you developed the belief as a child that you were unconditionally worthwhile (good and lovable). Then someone else’s opinion of you would be just that – an opinion, which they are perfectly entitled to have. Without the fear that criticism makes you bad, you would become free to try more things. Failure becomes just feedback that your attempt didn’t work, and never means that you are flawed, weak or stupid. You feel calm, fearless and invincible and live life from curiosity, inspiration and joy.
I don’t know the solution to bringing awareness to AI assumptions. Hopefully someone very smart will figure that one out.
However, the significant assumption we humans make – that our worth depends on our actions – makes the difference between a life of stress and struggle and one of calm and happiness.
Luckily for us, we have the ability to challenge that assumption. And once we are aware of the false belief, we can take steps to change it.