Artificial intelligence – or AI – is receiving increasing attention for its rapid development and potential to change society. Researchers are working hard to develop its capabilities, while regulators are racing to ensure it is managed and governed properly. But what do we mean by AI, and how can we define such a complex term? In a recent paper, Professor Pei Wang at Temple University argues that the lack of an agreed definition makes it difficult for policymakers to assess what AI will be capable of in the near future, or even which kinds of AI are desirable. To combat this, he discusses what makes a robust definition, and suggests his own.
AI is fundamentally changing our world, and we are working out how to navigate its opportunities and risks. The business world is strategizing, while regulations and laws are being suggested in many countries. To be able to confront these challenges successfully, we need to be able to clearly define ‘AI’. However, the term is currently used in many different ways, and there is no widely accepted definition.
Given the complexity of what it means to be ‘intelligent’, it makes sense that there is currently no consensus. However, while we cannot postpone research until a consensus is reached, working towards one is still a vital endeavor. At the same time, it is important to recognize the differences in the current usages of the term.
The field known as AI today was mainly founded by the scholars McCarthy, Minsky, Newell, and Simon. They proposed intuitive but vague conceptions of intelligence. As the field has grown, scholarship has lacked a common theoretical foundation. Consequently, there are many disagreements over evaluation criteria, progress milestones, and benchmark problems.
The different working definitions of AI correspond to various facets of human intelligence and different research goals. In his recent article, Professor Pei Wang identifies five major perspectives from which AI is viewed.
The first is Structure AI, which assumes that intelligence is the mental capability of the human brain and aims to faithfully simulate the brain's structure. Secondly, Behavior AI associates intelligence not with a system's internal structure but with its external behaviors. The third is Capability AI, in which interest in AI comes mainly from its potential applications and problem-solving abilities.
Fourth, Function AI distinguishes AI from the other branches of computer science, defining it by abstracted cognitive functions, such as searching, reasoning, learning, planning, perceiving, acting, or communicating. Finally, researchers in the field of Principle AI attempt to find fundamental principles that can uniformly explain relevant phenomena.
To work towards a more formalized definition of AI, we need to understand exactly what a definition is. In its most common sense, a definition specifies the meaning or significance of a word or phrase. It can relate to either the usage of a word or the content of the concept expressed. When it comes to AI, the debate is about the latter.
There are at least two types of definition within scientific discussions. A dictionary definition summarizes the existing usage of the term, while a working definition introduces a proposed usage of the term.
With respect to AI, the dictionary definition is relatively clear, and this can be used by a journal or conference reviewer to decide whether a submission is in scope. However, a working definition of AI may be different and can have a diverse range of uses. It is the working definition that Professor Wang seeks to clarify.
So, what makes a good working definition?
Professor Wang draws on the work of Carnap, who sought to create a solid foundation for probability theory by defining the term ‘probability’.
Firstly, the working definition should be similar to its common usage, and therefore to its dictionary definition. In AI, interpreting the word ‘artificial’ usually isn’t a problem, but defining ‘intelligence’ is a complex challenge. However, a working definition does not need to cover all common usages of a concept. For example, in the commercial world, the label ‘intelligent’ is often used simply to mean ‘more powerful’, but this usage can be neglected for current purposes, since it is not part of the core meaning of the concept.
Secondly, agreeing on a working definition of a concept is intended to resolve ambiguity. Therefore, it should provide necessary and sufficient conditions for deciding the applicability of the concept in any situation. This requirement can only be satisfied approximately, as ambiguity can never be completely removed from a definition.
Thirdly, a definition should be useful. When a researcher defines AI, it is not usually something that already fully exists, but something to be built. To serve the role of being a research objective, a working definition should set a clear goal for the research.
Finally, it is widely agreed that a scientific concept should be as simple as possible. This requirement does not deny the complexity of intelligence. Here, the hope is to identify certain essential features of intelligence, from which many other features can be implied.
Using these considerations, Professor Wang suggests his own definition of intelligence. He says that ‘intelligence is the capacity of an information-processing system to adapt to its environment while operating with insufficient knowledge and resources’.
‘Information-processing system’ is used in a broad sense to include all computer systems and robotic devices, as well as many animals. ‘Insufficient knowledge and resources’ specifies the normal working condition of the system. In this context, ‘sufficient knowledge’ means that the system knows the proper algorithm for solving the problem, and ‘sufficient resources’ means that the system has the time and space to apply the algorithm to the problem. In many situations, neither of the two is available, and this is where intelligence is needed.
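The condition of insufficient resources can be illustrated with a small sketch. This is our own hypothetical example, not from the paper: an "anytime" search that must commit to the best answer found within a fixed time budget (insufficient time), rather than running a guaranteed algorithm to completion (insufficient knowledge).

```python
import time

def anytime_best(candidates, score, budget_s=0.05):
    """Return the best-scoring candidate found before the time budget
    expires. An answer is given even if not every candidate was
    examined -- the system adapts to insufficient resources."""
    deadline = time.monotonic() + budget_s
    best, best_score = None, float("-inf")
    for c in candidates:
        if time.monotonic() >= deadline:
            break  # out of time: commit to the best answer so far
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best

# Toy usage: find the integer whose square is closest to 2023,
# scoring by negative distance so that higher is better.
closest = anytime_best(range(1, 10_000), lambda n: -abs(n * n - 2023))
```

Under a generous budget the search finishes and returns 45 (45² = 2025); under a tighter budget it still returns its best partial answer rather than failing.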
He suggests this definition as the foundation of a framework called the Non-Axiomatic Reasoning System. This is a system capable of reasoning and learning in a way that mimics human cognitive processes. Unlike traditional logic-based systems, it emphasizes adaptability. It can revise its conclusions in light of new evidence without invalidating previous ones. It also attaches a truth-value to each statement, representing the degree to which there is evidence for a proposition. This means it can handle uncertainty.
Overall, the Non-Axiomatic Reasoning System can reason effectively, learn from experience, and adapt to changing circumstances without relying on pre-defined algorithms.
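In the published Non-Axiomatic Reasoning System literature, a truth-value is presented as a (frequency, confidence) pair derived from counts of positive and total evidence, and the revision rule pools evidence from independent sources. The sketch below follows that scheme (with the evidential horizon k set to its usual default of 1); the function and variable names are our own.

```python
K = 1.0  # evidential horizon; k = 1 is the customary default

def truth(w_plus, w_total):
    """Map evidence counts to a (frequency, confidence) truth-value."""
    frequency = w_plus / w_total           # share of positive evidence
    confidence = w_total / (w_total + K)   # approaches 1 as evidence grows
    return frequency, confidence

def revise(t1, t2):
    """Pool two truth-values from independent evidence sources by
    recovering their evidence counts and summing them."""
    def counts(f, c):
        w = K * c / (1 - c)  # invert confidence back to total evidence
        return f * w, w
    w1p, w1 = counts(*t1)
    w2p, w2 = counts(*t2)
    return truth(w1p + w2p, w1 + w2)

# Two observations, 3-of-4 and 1-of-2 positive, pool into 4-of-6:
pooled = revise(truth(3, 4), truth(1, 2))
```

Note how revision strengthens confidence without discarding either input: the pooled value (frequency 4/6, confidence 6/7) reflects all six pieces of evidence, which is the sense in which new evidence refines rather than invalidates earlier conclusions.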
The working definition of AI matters. Different choices lead research in different directions and will impact its future development and regulation. Currently, there is no single correct working definition of AI; each has its own theoretical and practical value, but none subsumes the others. We should work towards a future where AI and its different subtypes are defined clearly and robustly. Until then, we should take greater care to understand the definitions we use and how they shape research and action. For instance, a regulatory approach that is appropriate for one type of AI may be entirely unsuitable for another.