Humanity’s Last Invention
Computers are stupid. They are nothing but dumb, dead pieces of metal without any sense or understanding1. But ask one a question:
Me: How are you?
Google Assistant: I’m feeling positively tip top. Thanks.
Me: Hope you’re having fun.
Google Assistant: It’s always fun to chat.
Me: What else makes you happy?
Google Assistant: Knowing that Tasmanian devils are born as small as a grain of rice makes me happy. So does talking to you.
Yet they can impress upon us a sense of life, with electric current flowing like blood in their veins.2 But are they alive?
The Beginning
The journey of computers as we know them today truly began in 1936, when Alan Turing, the mathematician who would later break the German Enigma code and help the UK win WWII3, presented the notion of a universal computer, now known as the Turing Machine. With just 5 instructions, Read, Write, Move Left, Move Right and Halt, a Turing machine can compute anything that is computable. A system capable of these 5 operations is said to be Turing Complete. All computers to this day are based on the same principle.
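Those five primitives are concrete enough to run. The following is an illustrative sketch (the rule-table format and the toy machine are my own construction, not Turing's notation): a machine that flips every bit on its tape and then halts.

```python
# A minimal Turing machine: a tape, a head, a state, and a rule table built
# from the five primitives (read, write, move left, move right, halt).
def run_turing_machine(tape, rules, state="scan"):
    tape = dict(enumerate(tape))   # position -> symbol; "_" means blank
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")                 # read
        state, write, move = rules[(state, symbol)]
        tape[head] = write                           # write
        head += {"L": -1, "R": 1, "N": 0}[move]      # move left / right / stay
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# (state, symbol_read) -> (next_state, symbol_to_write, head_move)
flip = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("halt", "_", "N"),   # past the end of the tape: halt
}

print(run_turing_machine("1011", flip))  # -> 0100
```

The same simulator runs any machine you can phrase as such a rule table, which is exactly what makes the construction universal.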
The pivotal moment came with a mathematical understanding of information itself, the oil of the 21st century, pioneered by Claude Shannon in the early 1940s. It launched one of the three great revolutions of modern times4. Shannon's constructs could break any object of interest down into long sequences of 0s and 1s, later known as bits, the simplest form of information. A moment in time (pictures, visual recordings) or the elements of life (DNA) can be captured, stored, and manipulated in bits. The world would never be the same.
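Breaking an object down into bits is easy to see in miniature. A two-letter word, for instance, becomes sixteen bits through its standard byte encoding (the helper name here is mine, for illustration):

```python
# Reduce a piece of text to the sequence of 0s and 1s a machine stores.
def to_bits(text):
    # Each character becomes one or more bytes; each byte is eight bits.
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_bits("Go"))  # -> 0100011101101111
```

The same idea, scaled up, covers pictures, recordings, and genomes: everything becomes a longer run of the same two symbols.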
The Frontier
We have invaded the domain of the gods. We have technology to bend the weather to our whims. We grow biological robots, ride in autonomous vehicles, conduct trials of interplanetary spaceships, teleport information, run quantum computers, mine asteroids, and create stars in labs.
Is there no limit to technological advancement? And if there is, how close are we to it, and can we reach it? At what point do technological possibilities go beyond our capacity to comprehend them? Computers only do as we program them to, so are we restricting them? What happens when computers program themselves? What could they achieve?
A New Era
Machine learning is the science of making computers learn without explicitly programming them. Take handwritten digit recognition: you would not specify the pattern of the number 5. You would instead provide examples of 5 and allow the program to learn for itself whatever patterns detect it. The set of digits (the training data) and the procedure (the training algorithm) work together in a trial, error and correction loop. First, the program makes a guess and checks whether it is correct. Based on the answer, the system adjusts itself, learning through a minute improvement in every cycle. Repeated thousands of times, with a small correction in every loop, the process ultimately makes the system able to decide, for any given image, whether it is the number 5 or not. This is called Supervised Learning.5
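The trial, error and correction loop fits in a few lines. This toy sketch (my own construction, not a real digit recogniser) reduces each "image" to two made-up features and trains a single linear unit to separate 5s from non-5s:

```python
import random

# Each "image" is two made-up features; the label is 1 for a 5, 0 otherwise.
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

w1, w2, bias = 0.0, 0.0, 0.0          # the system's adjustable "knobs"
random.seed(0)                        # deterministic, for illustration
for _ in range(1000):                 # thousands of trial/error cycles
    (x1, x2), label = random.choice(examples)
    guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0   # try a guess
    error = label - guess                              # check the answer
    w1 += 0.1 * error * x1                             # correct, a little
    w2 += 0.1 * error * x2
    bias += 0.1 * error

# After training, the system classifies every example correctly.
print([1 if w1 * x1 + w2 * x2 + bias > 0 else 0 for (x1, x2), _ in examples])
```

Each pass either changes nothing (the guess was right) or nudges the knobs slightly toward the correct answer, which is the whole loop the paragraph above describes.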
This ability to apply the trained system to new data and still achieve a reasonable level of accuracy is called generalisation. The primary goal of a machine learning system is to give accurate outcomes for data different from what it saw during training.
Machine learning was first described in the second half of the 20th century, but it has thrived only in the last decade. In 2012, AlexNet, a machine learning system6, achieved accuracy in image recognition never seen before7. The industry was shaken. Interest in the field surged and the pace of development rose sharply. In March 2016, DeepMind, a Google-owned artificial intelligence research organisation, pitted their new system AlphaGo against the world champion Lee Sedol in Go, a Chinese board game. Considered the most complicated game developed by humanity, with more than 10^172 possible board configurations - a number beyond comprehension8 - it is impossible for any system to exhaust the possibilities by trial and error to find the best move forward (Silver et al. 2016). It was believed that computers, with their deterministic, formula-based software, could never surpass the human intuition that develops through decades of practice. This was computers against humans. To lose meant there was nothing we could do better than computers. To lose was to forever grant superiority to computers. AlphaGo beat Lee Sedol 4-1, and the world was taken by storm9.
“I don’t know how to start or what to say today, but I think I would have to express my apologies first. I should have shown a better result, a better outcome, and better content in terms of the game played, and I do apologize for not being able to satisfy a lot of people’s expectations. I kind of felt powerless.” - Lee Sedol, Go World Champion
Things only got worse, or better, for computers. To train AlphaGo, its developers had used 150,000 historic game records played over decades by the best players in the world. It was, in a way, built on human knowledge.
AlphaGo Zero, the successor to AlphaGo, is fundamentally different. It was given nothing except the rules of the game and allowed to improve by playing against itself. It makes random moves at first and, based on the outcome of each game, adjusts itself, assessing which moves worked and which didn't. It repeats this process hundreds of thousands of times, always playing against itself (Silver et al. 2017). The growth is drastic because it always has an equal opponent. The following is the result DeepMind observed during the training of AlphaGo Zero:
AlphaGo Zero, learning entirely on its own, with no human knowledge, beat AlphaGo 100-0.
Interestingly, the algorithm used for AlphaGo Zero, referred to as the General Reinforcement Learning Algorithm, works on any game - Chess, Shogi, or, for that matter, any task that can be similarly structured (Silver et al. 2018). Today almost all games have been mastered by computers, and the world champions, rather than trying to overcome them, learn from them, studying the computer's moves for months because it is too advanced for them. AlphaGo Zero, in the truest sense, gives us a glimpse of what computers are capable of when not restricted by human knowledge.
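A flavour of this self-play loop can be sketched on a far smaller game. The following is my own toy tabular sketch on the game of Nim, nothing like the scale or the neural-network machinery of AlphaGo Zero: given only the rules, the agent improves purely by playing against itself.

```python
import random

# Nim: 10 sticks, players alternately take 1-3, whoever takes the last wins.
Q = {}  # (sticks_left, sticks_taken) -> estimated value for the mover

def choose(sticks, epsilon):
    moves = [t for t in (1, 2, 3) if t <= sticks]
    if random.random() < epsilon:                            # explore randomly
        return random.choice(moves)
    return max(moves, key=lambda t: Q.get((sticks, t), 0.0)) # exploit the best

def play_and_learn(episodes=20000, alpha=0.1, epsilon=0.2):
    for _ in range(episodes):
        sticks, history = 10, []
        while sticks > 0:                      # one full game of self-play
            move = choose(sticks, epsilon)
            history.append((sticks, move))
            sticks -= move
        reward = 1.0   # whoever moved last took the final stick and won
        for state, move in reversed(history):  # credit moves backwards
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward   # the other player's outcome is the opposite

random.seed(0)
play_and_learn()
# Inspect the greedy opening move from 10 sticks; in Nim, optimal play takes 2,
# leaving the opponent a multiple of 4.
print(max((1, 2, 3), key=lambda t: Q.get((10, t), 0.0)))
```

Because the agent's opponent is always its current self, every improvement immediately raises the bar for the next game, which is what makes self-play such a powerful curriculum.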
And it was just the beginning.
The Uncharted
One of the biggest challenges in modern computing has been developing systems that understand language. Human communication is vague. Words derive their meaning from the context they are used in, and human interaction consists of gestures, expression and tone at least as much as the actual words. Natural Language Processing (NLP) is the branch of machine learning that studies human-to-computer interaction through natural language. It is one of the most aggressively researched fields in machine learning.
In 2018, OpenAI presented GPT, a natural language model based on a neural network with 117 million parameters. Trained on large textual datasets, including a digital book corpus, GPT was capable of processing long text and answering basic questions.
A year later, in 2019, GPT-2 was announced, with 1.5 billion parameters, ten times larger than the original GPT. Its release was initially delayed for fear of misuse: it was capable of writing plausible, context-sensitive articles that could flood the media as fake news.
The most powerful natural language model yet was released just last year, in June 2020. GPT-3, the biggest system ever trained, with 175 billion parameters, is a dramatic improvement over its predecessors. It can answer trivia questions, write blogs, pen poetry, summarise emails, and even code other programs. The following was written by GPT-3 when asked to write in the voice of Jerome K. Jerome, given only the single-word prompt "it".
The importance of being on Twitter
“It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.”
You can read the full creation here.
Here’s another excerpt. A conversation with GPT-3 impersonating Albert Einstein:
Manuel: Albert Einstein. Nice to meet you. I’m Manuel Araoz, and I’ve come from the future to meet you. smiles
Albert: laughs Oh, really. How is the future?
Manuel: It’s amazing, actually. What year is this?
Albert: It’s 1947. It’s January. This is Princeton, NJ. You are in America.
Manuel: Your name is Albert Einstein, right?
Albert: That’s correct.
Manuel: So, how would you describe your contributions to science so far?
Albert: Well, I’ve already made some pretty important contributions to physics.
Manuel: grabs coffee from table and takes a sip Tell me more.
Albert: Well, first of all, my work with the photoelectric effect and quantum mechanics was very important for the development of quantum physics.
Manuel: What does quantum physics mean? Sorry, I’m no physicist myself.
Albert sits back in his comfortable chair, and after taking a slow sip of tea, explains.
Albert: You see, a hundred years ago, our understanding of the Universe, and how it all works, was still fairly immature. There were many loose ends, which we were starting to try and unravel. For example, things like what the atom is made of, or what light actually is.
The full interview can be read here.
Now, can you take a look at these pictures and try to guess which ones are real?
Yes, you would have guessed it: the answer is none. None of these people exist in the real world. You can find more examples here. These are all creations of StyleGAN, a machine learning system developed just two years ago, in 2019, that can generate hyper-realistic human faces matching any criteria or characteristics you want.
Alan Turing’s Imitation Game, popularly known as the Turing Test, is a test of a machine’s ability to express intelligent behaviour that would be indistinguishable from that of a human. He began his paper, published in 1950, with a question - “Can Machines Think?” (Turing 1950). I would ask now, what do you think?
Conclusion
Machines are powerful, and while what they produce means nothing to them, their actions can have serious consequences for us. Just as they are capable of doing good, they are equally capable of doing harm. To computers, it is all the same.
Technology is giving life the potential to flourish like never before—or to self-destruct. - The Future of Life Institute
What values to hard-code into such systems is a critical question facing humanity, the most important conversation of our time (Tegmark 2017). Superintelligence, as defined by Nick Bostrom, is the point at which an agent surpasses the brightest and most gifted humans at virtually all tasks and domains of life. It could come without any warning. It would be the last thing that humanity would invent.
Just two decades ago, you could not have imagined a piece of metal in your pocket capable of encapsulating your whole life. We can only wonder what the next two decades will bring. We are breaking barriers which until now seemed unbreakable. Technological development is inevitable, and it is growing faster than any one individual can guide or govern. Nobody knows where it is going. These are exciting times. You must be ready.
–
References and Footnotes
A set of characters such as ‘#98f414’ or ‘#ffffff’ is no different to a machine, but the former is the colour of life, the chlorophyll green, and the latter a shade of deep space covered in darkness. The character combination ‘rm -rf’ would be meaningless to you, but used as an instruction,
rm -rf /
it is enough to wipe your entire system for good.↩︎
Computers are capable of mindless actions at ferocious speed. Their speed and accuracy more than make up for their inability to understand. On an average system, a thousand sums are calculated in less than 1 microsecond (~806 ns). In comparison, a blink of an eye takes 350,000 microseconds.
In [1]: import numpy as np
In [2]: a = np.random.rand(1000)
In [3]: b = np.random.rand(1000)
In [4]: %timeit a+b
806 ns ± 13.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)↩︎
His story and contribution to cryptography is captured in “Alan Turing: The Enigma” by Andrew Hodges, and was adapted into the Academy Award winning film The Imitation Game, starring Benedict Cumberbatch and Keira Knightley.↩︎
Walter Isaacson (2021), in his recent book The Code Breaker, talks about three great revolutions of modern times. The first, which transformed the 20th century, was the revolution in physics, led by Einstein and his theory of relativity, which led to the creation of atom bombs, nuclear power, spaceships, and transistors. The second revolution, which we currently witness and are a part of, is the information technology revolution, in which all information can be stored in bits and all computation can be done with switches. This created computers, the internet, artificial intelligence, and the shaken social fabric of today. The third transformation, which will change everything and is still in its nascent stages, will be the transformation in biology, led by new discoveries in the life sciences. This would be the revolution of gene editing and modification using CRISPR, a technology discovered just a few years ago, for which the Nobel Prize was awarded just recently.↩︎
In Unsupervised Learning, the system tries to learn patterns by itself, without any indication of what they actually mean. Reinforcement Learning is yet another approach to machine learning. Each approach has its own set of challenges, advantages, designs, and limitations. Apart from the methods of learning, there are multiple implementation models for the different types of learning; Logistic Regression, Neural Networks, and Support Vector Machines are some of the most powerful learning algorithms.↩︎
A Convolutional Neural Network, a type of Artificial Neural Network.↩︎
AlexNet competed in the ImageNet Large Scale Visual Recognition Challenge and defeated every other competitor by a huge margin; the runner-up's error rate was 10.8 percentage points higher.↩︎
For comparison with 10^172: there are ~10^80 atoms in the universe, and chess has ~10^123 possible game variations (also known as the Shannon number). The difference may not look like much, but it is roughly 10^49 times, ten trillion trillion trillion trillion times, as many configurations as chess!↩︎
In Game 2, AlphaGo played the now famous (or infamous?) move 37, which experts claim no human would have thought to play. It left the commentators dumbstruck; they had to repeatedly check that a mistake had not been made. Lee Sedol, rattled by the move, left the room and took 15 minutes for his next move. That move later turned out to be critical in AlphaGo's win.↩︎