Is AI generally intelligent, or is it just “behaving generally intelligent”?
Artificial Intelligence has become a buzzword in recent times, with discussions about it taking place across the internet, on TV channels, and during conventions. As AI continues to integrate more deeply into our lives, many are curious about how the impact of this technology is evolving.
Because I was curious to know how AI defines itself, I asked ChatGPT, the well-known AI developed by OpenAI, to provide a definition of “Artificial Intelligence”. ChatGPT replied:
“Artificial Intelligence (AI) is a field of computer science and engineering that aims to create machines capable of performing tasks that would typically require human intelligence”.
So, according to ChatGPT, AI aims to perform tasks that would typically require human intelligence. But what does that mean? Human intelligence refers to the cognitive abilities of the human brain, including the ability to learn, reason, and understand complex concepts. It encompasses a wide range of abilities, including perception, memory, problem-solving, and creativity, and it is often characterized by a capacity for self-awareness and adaptability.
However, this definition needs some qualification: several types of AI exist, and they embody very different kinds of intelligence.
The most basic type is reactive machine AI. These machines can only react to their environment based on the information available at the moment; they have no ability to remember past experiences. A common example of this type of AI is the industrial robot. Widely used in manufacturing to complete repetitive tasks, industrial robots are programmed to respond to changes in their environment, such as the arrival of a new component, and react accordingly.
Unlike reactive machine AI, limited memory AI has the advantage of being able to use past experiences to inform its current decisions. However, it is still far from human-level intelligence.
Self-driving cars exemplify how reactive machine AI and limited memory AI can be combined. Yes, it is possible! These vehicles use limited memory to store gathered information such as the location of other vehicles, crosswalks, and traffic signals. This allows the AI to decide when to stop, accelerate, or turn based on the route it has already taken. At the same time, the vehicle’s AI processes information gathered in real time through cameras and sensors and reacts accordingly, using reactive machine technology to safely handle unanticipated elements and avoid accidents.
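To make the split between the two concrete, here is a deliberately toy Python sketch. Every class and function name in it is hypothetical, and it is nothing like a real autonomous-driving stack: the “limited memory” part simply stores what the car has already seen along a route, while the “reactive” part responds immediately to the current sensor reading.

```python
from dataclasses import dataclass, field

@dataclass
class DrivingMemory:
    """Limited memory: recent observations the car keeps to inform decisions."""
    known_signals: dict = field(default_factory=dict)  # e.g. {"5th_and_main": "traffic_light"}

    def remember(self, location: str, obj: str) -> None:
        self.known_signals[location] = obj

def decide(memory: DrivingMemory, location: str, sensor_reading: str) -> str:
    """Combine remembered context (limited memory) with live sensor data (reactive)."""
    # Reactive part: respond immediately to what the sensors see right now.
    if sensor_reading == "pedestrian_ahead":
        return "brake"
    # Limited-memory part: use what the car already learned about this location.
    if memory.known_signals.get(location) == "traffic_light":
        return "slow_down_and_check_light"
    return "continue"

memory = DrivingMemory()
memory.remember("5th_and_main", "traffic_light")
print(decide(memory, "5th_and_main", "clear"))             # slow_down_and_check_light
print(decide(memory, "5th_and_main", "pedestrian_ahead"))  # brake
```

A real vehicle fuses far richer data, of course; the point is only to show the two roles side by side.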
However, can we trust an AI to drive in everyday life? If your answer is yes, it means you trust AI enough to make decisions and react in a way that keeps you safe.
You have probably already heard about the Tesla Model S, a car equipped with Tesla’s “Full Self-Driving” mode, which has been making headlines due to accidents in places such as San Francisco and China. According to police reports, these incidents were caused by a malfunction of the full self-driving mode. It is important to note, however, that the driver should have taken manual control in these situations. Tesla has repeatedly said that this technology requires constant, active supervision from the driver: these vehicles are not fully autonomous!
According to statistics from the National Highway Traffic Safety Administration (NHTSA), self-driving cars were involved in 522 crashes in the United States within a 10-month period, and 52% of those crashes involved Tesla models. That number seems high for a technology that is supposed to improve safety. However, compared with traditional vehicles, the number of accidents involving self-driving cars is still lower.
Additionally, while self-driving technology aims to reduce the traditional risks drivers face on the road, it also introduces new ones. What if the software has a bug? What if the car is targeted by a cyberattack? Or what if the car’s sensors suddenly stop working?
The third type of AI is theory of mind AI, which can understand other agents’ beliefs, desires, and intentions.
For instance, a theory of mind AI playing a strategic game such as chess or poker would be able to model the mental states of its opponents.
Reading this description, many people will immediately think of AlphaGo, the AI developed by Google DeepMind, right?
Not exactly. AlphaGo is certainly a highly advanced system, specifically created to play the strategic Chinese board game Go, but it cannot understand or interpret the mental states of its opponents. AlphaGo is not a theory of mind AI!
Which raises a question: does theory of mind AI actually exist today?
Although research in this field is ongoing and promises breakthroughs, the technology is still under development. While virtual personal assistants and game-playing AI can exhibit some characteristics of this type of AI, no current system is considered to be a genuine theory of mind AI. Indeed, successfully creating one would require breakthroughs in many fields, such as natural language processing, cognitive psychology, and computer vision (image recognition, object detection, image segmentation, image generation, 3D reconstruction…).
The closest technology to exhibit aspects of theory of mind AI is the Theory of Mind net, also called ToMnet. Created by Neil Rabinowitz and his team at DeepMind, ToMnet is composed of artificial neural networks (ANNs), machine learning systems loosely inspired by the way human brains learn. These networks observe the behavior of other AI agents and make predictions about their future behavior. However, the goal of a complete theory of mind AI system has not yet been achieved.
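ToMnet itself is a neural-network architecture, and the sketch below is not it. It is only a drastically simplified, hypothetical illustration of the general idea of observing another agent’s tendencies and predicting its next move, using a plain frequency count instead of a neural network.

```python
from collections import Counter
import random

class ToyObserver:
    """Hypothetical stand-in for an 'observer' model: it watches another agent's
    past actions and predicts the action it is most likely to take next."""
    def __init__(self) -> None:
        self.history = Counter()

    def observe(self, action: str) -> None:
        self.history[action] += 1

    def predict_next(self) -> str:
        # Predict the observed agent's most frequent action so far.
        return self.history.most_common(1)[0][0]

def observed_agent() -> str:
    # A simple observed agent that prefers turning left 70% of the time (pure assumption).
    return "turn_left" if random.random() < 0.7 else "turn_right"

observer = ToyObserver()
for _ in range(100):
    observer.observe(observed_agent())

print(observer.predict_next())  # most likely: "turn_left"
```

The real ToMnet replaces this simple counting with trained neural networks, but the underlying goal, predicting another agent’s behavior from observation, is the same.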
The last AI category is the most advanced one: self-aware AI, often discussed together with Artificial General Intelligence (AGI). In theory, it would be capable of consciousness, emotions, and self-awareness, but, as you might have understood, this technology does not currently exist. Today’s AI systems can only perform specific tasks or make decisions based on the information provided during their training (machine learning), which is why they are referred to as Artificial Narrow Intelligence or, for short, narrow AI.
There is no consensus on whether it will ever be possible to create self-aware AI and the question is highly debated due to ethical and societal concerns.
Nick Bostrom, a philosopher and researcher at Oxford University, is the author of the book “Superintelligence: Paths, Dangers, Strategies”, often considered a reference work in the AI field.
He states that sentient machines (hypothetical machines exhibiting behavior at least as skillful and flexible as humans) “are a greater threat to humanity than climate change. We, humans, are like small children playing with a bomb […] if we hold the device to our ear, we can hear a faint ticking sound”.
Stuart Russell, electrical engineer and professor of computer science at the University of California, Berkeley, stated: “A super-intelligent machine would be a tool that we could use to increase the power and capabilities of our civilization”.
However, Russell also pointed out an important fact in this debate:
“There are still major breakthroughs that have to happen before we reach anything that resembles human-level AI. Once we have that capability, you could then query all of human knowledge and AI would be able to synthesize and integrate and answer questions that no human being has ever been able to answer”.
Some might think of the transhumanist movement, which aims to optimize our physical and mental capacities until, one day, we can go beyond the limits of our mortality.
Bostrom sees this as the best scenario for the development of Artificial Intelligence in the future:
“You must remember I am a transhumanist. I want my life extension pill now. And if there were a pill that could improve my cognition by 10%, I would be willing to pay a lot for that”.
To make progress in this research, experts need to overcome at least the following two obstacles:
The first is the Noisy TV Problem, which arises in reinforcement learning. Reinforcement learning consists of learning to make better decisions by performing actions in a given environment and receiving rewards or penalties as feedback. This is where the Noisy TV Problem shows up: an agent rewarded for seeking out surprising observations can get stuck in front of a “noisy TV” showing random static, because the static can never be predicted and therefore never stops being “interesting”. The underlying tension is the exploration-exploitation trade-off: the agent must choose between an action it already knows is beneficial and will bring high rewards, and another action it has never tried yet but that could potentially bring an even higher reward.
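To give a minimal sense of that trade-off (not of the curiosity-driven setups in which the Noisy TV Problem actually arises), here is a toy epsilon-greedy sketch on a hypothetical two-armed bandit; all names and reward probabilities are made up for illustration.

```python
import random

# Hypothetical reward probabilities for a toy two-armed bandit (pure assumption).
TRUE_REWARD_PROB = {"familiar_action": 0.6, "untried_action": 0.8}

def pull(action: str) -> float:
    """Return a reward of 1 with the action's hidden probability, else 0."""
    return 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0

def epsilon_greedy(steps: int = 1000, epsilon: float = 0.1) -> dict:
    """Balance exploitation (pick the best action so far) with exploration (try a random one)."""
    counts = {a: 0 for a in TRUE_REWARD_PROB}
    values = {a: 0.0 for a in TRUE_REWARD_PROB}  # running average reward per action
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(list(TRUE_REWARD_PROB))   # explore
        else:
            action = max(values, key=values.get)             # exploit
        reward = pull(action)
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]  # incremental mean
    return values

print(epsilon_greedy())  # with enough exploration, "untried_action" is discovered to be better
```

With epsilon set to 0, the agent would keep exploiting whichever action happened to look best early on and might never discover that the other one pays off more, which is exactly the trap the exploration side of the trade-off is meant to avoid.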
The second obstacle is the Chinese room argument, put forward by John Searle in 1980 [Minds, Brains, and Programs].
This thought experiment addresses the following question:
Is an AGI actually generally intelligent, or is it just “behaving generally intelligent”?
Searle considers the case of a person sitting alone in a room who knows only English, following instructions written in English for manipulating strings of Chinese characters. Can we assert that this person understands Chinese? From outside the room it may appear so, because the answers that come out look perfectly appropriate. From inside the room, however, it is clear that the person cannot actually speak Chinese.
Searle used this image to argue that human intelligence cannot simply be “copy-pasted” into programs, since programs only follow instructions. Therefore, artificial intelligences are not intelligent in the same way humans are.
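As a toy illustration of “only following instructions”, here is a hypothetical miniature “room” in Python: a program that maps a few Chinese inputs to scripted Chinese replies by pure rule-following, with no understanding anywhere in the code.

```python
# A rulebook that maps incoming Chinese strings to scripted Chinese replies.
# The program "answers" correctly without understanding a single character.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message: str) -> str:
    """Follow the instructions in the rulebook; no understanding involved."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # looks fluent from outside the room
```

From outside, the replies look fluent; inside, there is nothing but a lookup table.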
Several competing arguments try to explain why we do not have this technology yet.
David Deutsch, physicist and professor at Oxford University, stated:
“This is the lack of an adequate philosophy of mind to sufficiently definitely refute the Chinese room, and that leads us to a theoretical understanding of why brains are intelligent and how to make programs that emulate the key relevant properties of brains”.
Ray Kurzweil, a well-known author in this field, objects. He explains, first, that “our hardware is far weaker than the human brain. It may be possible to create human-level AGI on current computer hardware, or even the hardware of five or ten years ago. But the process of experimenting with various proto-AGI programs runs slowly, and current software tools, engineered to work around the limitations of current hardware, are complex to use”.
Then he cites the AGI funding situation as another reason: “Compared with the amount of resources society puts into computer chip design, cancer research, or battery development, AGI gets a teeny tiny fraction. Software companies devote hundreds of person-years to creating products like word processors, video games, or operating systems. An AGI is much more complicated than any of these things, yet no AGI project has ever been given anywhere near the staff and funding of projects like those”.
In my opinion, one theory that could explain this lack of investment in AGI research is people’s concerns about privacy and surveillance, bias and discrimination, and the future role of human intelligence.
[Those concerns will be discussed in my next article. Thank you so much for reading. Share your opinions in the comments section 😊.]