“AI could spell the end of the human race.”
“AI presents profound risks to society and humanity.”
Prominent leaders in science and technology such as Stephen Hawking and Elon Musk have issued warnings about the potential risks of AI. But what exactly is AI, and what are the imminent threats we face from it?
Artificial Intelligence or, in short, AI, is widely recognized as the driving force behind the so-called Fourth Industrial Revolution (4IR) and is transforming the world and human society before our eyes. AI, however, did not appear out of nowhere. It is the result of centuries of scientific and philosophical inquiry, drawing on fields such as philosophy, mathematics, logic, biology, neuroscience, psychology, linguistics, engineering (computer, electronic, and mechanical), and economics. In fact, AI first appeared in computer science over 70 years ago. The first Artificial Neural Network or, in short, Neural Net, was built in 1951 by two young researchers, Marvin Minsky and Dean Edmonds. However, the term “Artificial Intelligence” was coined by John McCarthy for a 1956 conference held at Dartmouth College. The goal was to find ways to make machines that “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
AI can be understood as an academic discipline that aims to create machines that mimic human intelligence, and human behavior in particular contexts. But AI, at least at present, cannot simulate the complexity or nuance of human intelligence and behavior. It is important to note that the quest for human-enhancing technology is not new. From simple tools like sticks to complex inventions like airplanes and computers, humans have always sought to boost their capabilities by imitating the natural world around them or pursuing innovations to fulfill human needs or increase productivity. Steve Jobs, the co-founder of Apple, once compared the computer to a bicycle, as both are accelerators of human ability, allowing us to move faster and, in the case of the computer, to process information more effectively. Therefore, while AI may seem like a revolutionary and disruptive technology, it is ultimately just another step in the ongoing evolution of human knowledge, technological innovation, problem-solving, and progress.
One of the most important tools of AI is the algorithm. An algorithm is a set of instructions for performing a task or solving a problem. It may be as simple as sorting data or as complex as controlling and coordinating the movements of robots by interacting with the environment, processing sensory data, and making decisions in real time. Interestingly, the term “algorithm” derives from the Latinized version of the name al-Khwarizmi, a Muslim mathematician. Around 820 AD, he was the first to generalize the solving of algebraic equations using the principles of equality, as detailed in his book, “The Book on Calculation by Restoration and Reduction.” Thus, he is often credited as the inventor of algorithms. Models are another important tool of AI. Representing the relationship between different variables in a dataset, a model allows the AI to make predictions or decisions based on the data.
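To make these two ideas concrete, here is a minimal sketch in Python (the function names are illustrative, not taken from any particular AI library): an algorithm as a fixed recipe of steps, shown as a simple sort, and a model as a learned relationship between variables, shown as a straight line fitted to data and then used to make a prediction.

```python
# An algorithm: a fixed set of instructions. Insertion sort orders a
# list by inserting each element into its proper place, one at a time.
def insertion_sort(values):
    result = list(values)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger elements one slot to the right.
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result

# A model: a relationship between variables learned from data. Here a
# line y = slope * x + intercept is fitted to (x, y) pairs using the
# classic least-squares formulas, then used to predict a new value.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(slope, intercept, x):
    return slope * x + intercept

print(insertion_sort([5, 2, 9, 1]))     # [1, 2, 5, 9]
slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
print(predict(slope, intercept, 5))     # 10.0
```

The difference is the point: the sorting routine behaves the same way every time, while the model's behavior depends on the data it was fitted to.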
The Rebirth of AI
AI has many branches, such as Computer Vision and Perception Processing, Natural Language Processing, Decision Making, and Robotics, among others. Combined, these branches make up a general AI solution, capable of performing complex tasks in diverse domains. Whether narrow or general, AI has applications in many fields including healthcare, finance, the military, engineering, transportation, education, scientific research, and entertainment.
However, until the 1990s, computer scientists were constrained by having to manually train Neural Net models. Such training couldn’t go beyond hundreds or thousands of nodes and parameters because it was not practically or economically possible to handcraft large Neural Nets and tweak their values by hand. Then the breakthrough came. With the advent of Machine Learning (ML), scientists were able to economically train AI models (Neural Nets and others) on huge datasets. This paradigm shift was the rebirth of AI. Now, a Neural Net model is built with millions or billions of nodes and parameters and trained on terabytes of data. As an example, consider the most recent sensation in AI, ChatGPT. While its parameter count and training dataset are not disclosed by OpenAI (the company and research laboratory which created and maintains ChatGPT), it is similar to the GPT-3 language model, which has 175 billion parameters and was trained on hundreds of billions of words of text.
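The shift from hand-tweaking parameters to learning them from data can be illustrated with a deliberately tiny sketch, a single artificial neuron with just three parameters, trained by gradient descent to compute logical OR. This is a toy under simplifying assumptions (squared-error loss, one neuron), not how systems like GPT-3 are actually built, but the principle of automatically adjusting parameters from examples is the same one that scales to billions of parameters.

```python
import math
import random

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Training examples: the logical OR function as (inputs, target) pairs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Three parameters: two weights and a bias. Instead of setting them by
# hand, we start from random values and let training adjust them.
random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
rate = 1.0  # learning rate: how big a step each correction takes

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error, pushed back through the neuron.
        grad = (out - target) * out * (1 - out)
        w1 -= rate * grad * x1
        w2 -= rate * grad * x2
        b -= rate * grad

# After training, the neuron reproduces OR without anyone having
# hand-picked w1, w2, or b.
for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

With three parameters and four examples, the loop above finishes in a fraction of a second; the "rebirth" described in this section came from applying exactly this kind of automatic adjustment to models and datasets billions of times larger.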
Where Might This Lead?
So it should be fine to embrace AI with open arms as a hi-tech accelerator, just like the computer, right? Actually, it’s not that black and white, and many alarm bells are going off now. With the advent of cheap computing power and the abundance of data gathered on the internet, it became possible to scale those early Neural Nets into today’s complex and powerful systems, raising the prospect that AI could one day exceed human intelligence in ways that humans cannot control, with just as much potential for harm as for benefit.
It’s potentially the most transformational technology to date, far more so than the combustion engine or general-purpose computing. As the saying goes in science, “the sky’s the limit.” What we’re witnessing today with AI is just the tip of the iceberg. However, we are currently operating in the realm of Artificial Narrow Intelligence or ANI, also known as “Weak AI,” which is purpose-built for tasks such as face recognition, email spam detection, search, language translation and conversation, autonomous driving, and the like.
The real breakthrough will occur when Artificial General Intelligence or AGI, also known as “Strong AI,” becomes widely available (Boston Dynamics’ Atlas and Spot robots offer glimpses of what such systems might achieve). Once that happens, it will be like having another entity living side-by-side with us, capable of doing what we can do and perhaps even surpassing our capabilities (that would be Artificial Super Intelligence or ASI). The possibilities are endless and thus come with substantial risks. The biggest concern is that AI may eventually make human beings seem obsolete once it begins improving itself. Some say that the creation and evolution of AI cannot be undone. Fearing where all this may lead is a natural reaction. There are those who warn that AI, as it becomes more sophisticated, might itself turn against humanity and intend to do it harm. To anyone who suggests that AI is inherently evil, however, I would argue that there is nothing in AI that makes it inherently evil or harmful. From an Islamic perspective, some things, such as an interest-based economy, gambling, alcohol, and drugs, are inherently harmful to humans and create many subsequent evils. AI, by contrast, is neutral in and of itself; it can be put to good or ill use, and it is up to humans to choose which.
The second part of this article will focus on the impacts AI will have on human beings and society, and ways to prepare for the enormous power that AI will bring, including safeguarding against the human use of AI for harmful or evil purposes. It will touch on the need to put controls in place to steer this technology for the good of humanity.