What Is AI (Artificial Intelligence)?

Jack Krupansky
14 min read · Jan 5, 2018


Artificial Intelligence, commonly known as AI, is everywhere these days. Or so it seems. Or so they say. But what is AI, really? This short, informal paper will provide the casual reader with a very brief explanation that should be readily digestible for those without the patience to read my full, 150-page paper that explores this topic in much greater depth — Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning.

First, the obvious. AI means that the following elements are involved:

  1. A machine.
  2. A computer.
  3. Computer software.
  4. Some degree of intelligence that is suggestive of the intelligence of a human.

The operative definition of AI is fairly simple:

  • AI is the capacity of a computer to approximate some fraction of the intellectual capacity of a human being.

What about robotics, so much of which is merely mechanical and seemingly unrelated to any intellectual activity — is it really AI per se? There is a section on Robotics later in this paper that explores this question a little more deeply. The short answer is that it’s a fielder’s choice how much of robotics should be considered AI: if it enables or supports intellectual activity or the carrying out of intellectual intentions, then it’s fair to consider it under the rubric of AI.

The operative word there is suggestive — meaning that AI doesn’t require achieving the full range of human cognitive and behavioral capabilities, but merely a large enough fraction of that range to at least hint at or give the appearance of human-level intelligence.

What fraction of human-level intelligence is required to merit the AI label? There is no gold standard. It’s a matter of debate. And it’s very subjective.

Traditionally, AI has drawn a two-level distinction about that fraction:

  1. Weak AI. Only a relatively small, limited fraction of human intelligence.
  2. Strong AI. Much closer to, if not at or above, human intelligence.

You can also read articles about superintelligence, far beyond even human intelligence. But that’s more the realm of science fiction and speculation at this stage.

In fact, even strong AI remains far beyond our technological reach at this stage.

What we have to settle for today is a variety of levels of weak AI.

In my longer paper I settled on five levels of intelligence, captured in a short code sketch after this list:

  1. Weak AI or Light AI. Individual functions or niche tasks, in isolation. Any learning is limited to relatively simple patterns.
  2. Moderate AI or Medium AI. Integration of multiple functions and tasks, as in a robot, intelligent digital assistant, or driverless vehicle. Possibly some relatively limited degree of learning.
  3. Strong AI. Incorporates roughly or nearly human-level reasoning and some significant degree of learning.
  4. Extreme AI. Systems that learn and can produce even more capable systems that can learn even more capably, in a virtuous spiral.
  5. Ultimate AI. Essentially Ray Kurzweil’s Singularity or some equivalent of superhuman intelligence. Also called superintelligence.
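For reference, the five levels above could be captured as a simple enumeration. This is only a labeling convention from this paper, not a standard taxonomy; the AILevel name and numbering below are purely illustrative.

```python
from enum import Enum

class AILevel(Enum):
    """Five informal levels of AI intelligence, as used in this paper."""
    WEAK = 1      # Light AI: isolated niche tasks, learning limited to simple patterns
    MODERATE = 2  # Medium AI: multiple integrated functions, limited learning
    STRONG = 3    # roughly human-level reasoning and significant learning
    EXTREME = 4   # systems that build ever more capable learning systems
    ULTIMATE = 5  # superintelligence, Kurzweil's Singularity or an equivalent

# Example: current intelligent digital assistants sit at a minimal MODERATE level.
print(AILevel.MODERATE, AILevel.MODERATE.value)
```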

Weak AI is generally categorized as task-specific or domain-specific. The AI system must be preprogrammed with task-specific or domain-specific knowledge and skill, and has little ability to learn on its own beyond simple patterns, even with so-called machine learning.
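As a rough illustration of that preprogrammed, niche-bound style, here is a minimal sketch of a hypothetical “smart” thermostat assistant. The keywords and responses are invented for illustration, not drawn from any real product.

```python
# Minimal sketch of Weak AI: a "smart" thermostat assistant whose knowledge is
# entirely preprogrammed for one narrow niche. The rules below are hypothetical.
RULES = {
    "too cold": "Raising the target temperature by 2 degrees.",
    "too hot": "Lowering the target temperature by 2 degrees.",
    "away": "Switching to the energy-saving away schedule.",
}

def handle_request(utterance: str) -> str:
    """Match a request against hard-coded keywords for this one domain."""
    text = utterance.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    # Anything outside the preprogrammed niche is simply not understood.
    return "Sorry, I can only help with thermostat requests."

print(handle_request("It feels too cold in here"))
print(handle_request("Can you book me a flight?"))  # outside the niche
```

Everything the sketch “knows” is hard-coded for a single narrow domain; anything outside that domain simply fails, which is the hallmark of Weak AI.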

Current intelligent digital assistants have achieved a minimal level of Moderate AI, but they still fall far short of Strong AI.

Many current consumer and industrial products have some level of Weak AI, occasionally bordering on minimal Moderate AI. It is common now to use the adjectives intelligent or smart to indicate the presence of Weak AI in a product, system, service, or feature, such as:

  • Intelligent digital assistants.
  • Smart appliances.
  • Smart devices.
  • Smart homes.
  • Smart vehicles.

Again, these systems and devices exhibit some fraction of a human-level function, but usually only in some relatively modest sense. And certainly nothing approaching human-level Strong AI.

My longer paper also discusses levels of competence, or how robust and capable a given implementation is in any particular area of function, relative to a fully functional human. I call this competent AI. The levels of automation competence range from nothing at all to super-expert:

  1. Nothing. No automation capabilities in a particular area. User is completely on their own.
  2. Minimal subset of full function. Something better than nothing, but with severe limits.
  3. Rich subset. A lot more than minimal, but with substantial gaps.
  4. Robust subset. Not complete and maybe not covering all areas, but close to complete in areas that it covers.
  5. Near-expert. Not quite all there, but fairly close and good enough to fool the average user into thinking an expert is in charge.
  6. Expert-level. All there.
  7. Super-expert level. More than an average human expert.

That’s it, the starting point for an understanding of AI. Continue reading if you need a little more depth.

My longer paper also discusses the spectrum of functional behavior, to categorize how functional a system is overall. The point of this model is that:

  1. Behavior of both human and digital systems, as well as animals, can be classified based on level of function.
  2. Functional behavior spans a broad spectrum of levels.
  3. Functional behavior must reach the level of being highly functional or high function in order to be considered comparable to human-level intelligence or behavior.
  4. Integration and coordination of functions are required for high function and true, human-level intelligence.

The levels of function in this spectrum are:

  1. Non-functional. No apparent function. Noise. Twitches and vibrations.
  2. Barely functional. The minimum level of function that we can discern. No significant utility. Not normally considered AI. Automation of common trivial tasks.
  3. Merely, minimally, or marginally functional, tertiary function. Seems to have some minimal, marginal value. Marginally considered AI. Automation of non-trivial tasks. Not normally considered intelligence per se.
  4. Minor or secondary function. Has some significance, but not in any major way. Common behavior for animals. Common target for AI. Automation of modestly to moderately complex tasks. This would also include involuntary and at least rudimentary autonomous actions. Not normally considered intelligence per se.
  5. Major, significant, or primary function. Fairly notable function. Top of the line for animals. Common ideal for AI at the present time. Automation of complex tasks. Typically associated with consciousness, deliberation, decision, and intent. Autonomy is the norm. Bordering on what could be considered intelligence, or at least a serious portion of what could be considered intelligence.
  6. Highly functional, high function. Highly notable function. Common for humans. Intuition comes into play. Sophisticated enough to be considered human-level intelligence. Characterized by integration of numerous primary functions.
  7. Very high function. Exceptional human function, such as standout creativity, imagination, invention, and difficult problem solving and planning. Exceptional intuition.
  8. Genius-level function. Extraordinary human, genius-level function, or extraordinary AI function.
  9. Super-human function. Hypothetical AI that exceeds human-level function.
  10. Extreme AI. Virtuous spiral of learning how to learn and using AI to create new AI systems ever-more capable of learning how to learn and how to teach new AI systems better ways to learn and teach how to learn.
  11. Ray Kurzweil’s Singularity. The ultimate in Extreme AI, combining digital software and biological systems.
  12. God or god-like function. The ultimate in function. Obviously not a realistic research goal.

What is intelligence?

Unfortunately, there is no concise, crisp, and definitive definition of intelligence, especially at the human level. But a number of elements of intelligence are readily identified.

When we refer to human intelligence we are referring to the intellectual capacity of a human being.

See my longer AI paper for a lot more depth, but at a superficial level intelligence includes a significant variety of mental functions and mental processes:

  1. Perception. The senses or sensors. Forming a raw impression of something in the real world around us.
  2. Attention. What to focus on.
  3. Recognition. Identifying what is being perceived.
  4. Communication. Conveying information or knowledge between two or more intelligent entities.
  5. Processing. Thinking. Working with perceptions and memories.
  6. Memory. Remember and recall.
  7. Learning. Acquisition of knowledge and know-how.
  8. Analysis. Digesting and breaking down more complex matters.
  9. Speculation, imagination, and creativity.
  10. Synthesis. Putting simpler matters together into a more complex whole.
  11. Reasoning. Logic and identifying cause and effect, consequences and preconditions.
  12. Following rules. From recipes to instructions to laws and ethical guidelines.
  13. Applying heuristics. Shortcuts that provide most of the benefit for a fraction of the mental effort.
  14. Intuitive leaps.
  15. Mathematics. Calculation, solving problems, developing models, proving theorems.
  16. Decision. What to do. Choosing between alternatives.
  17. Planning.
  18. Volition. Will. Deciding to act. Development of intentions. When to act.
  19. Movement. To aid perception or prepare for action. Includes motor control and coordination. Also movement for its own sake, as in communication, exercise, self-defense, entertainment, dance, performance, and recreation.
  20. Behavior. Carrying out intentions. Action guided by intellectual activity. May also be guided by non-intellectual drives and instincts.

Communication includes:

  1. Natural language.
  2. Spoken word.
  3. Written word.
  4. Gestures. Hand, finger, arm.
  5. Facial expressions. Smile, frown.
  6. Nonlinguistic vocal expression. Grunts, sighs, giggles, laughter.
  7. Body language.
  8. Images.
  9. Music.
  10. Art.
  11. Movement.
  12. Creation and consumption of knowledge artifacts — letters, notes, books, stories, movies, music, art.
  13. Ability to engage in discourse. Discussion, conversation, inquiry, teaching, learning, persuasion, negotiation.
  14. Discerning and conveying meaning, both superficial and deep.

Recognition includes:

  1. Objects
  2. Faces
  3. Scenes
  4. Places
  5. Names
  6. Voices
  7. Activities
  8. Identities
  9. Intentions
  10. Meaning

Only a Strong AI system would possess all or most of these characteristics. A Weak or Moderate AI system may only possess a few or a relatively narrow subset.

The measure of progress in AI in the coming years will be the pace at which additional elements from those lists are ticked off, as well as improvements in the level of competence in these areas of function.

Artificial intelligence is what we don’t know how to do yet

From the dawn of computing, the essential purpose of a computer has been to automate some task that people normally do. Since such tasks always involve information, some degree of intelligence has always been required.

When capabilities seem beyond what a computer can easily do, it is easy to ascribe them to a matter of intelligence. As if the tasks we have already automated didn’t require intelligence.

Once we do manage to figure out how to automate some seemingly difficult task, we assert that this is artificial intelligence, at least until it becomes widely accepted that computers can obviously do that particular task and do it quite well. Only then do we gradually and quietly cease using the AI label for tasks we no longer have a need to refer to explicitly.

Maybe the issue is that we have already automated so much of the low-hanging fruit that we are finally bumping into the knee of the difficulty curve, where it takes increasingly intense effort and resources to advance up the intelligence spectrum, so each advance comes more slowly and therefore seems that much more spectacular.

Robots, driverless cars, and even intelligent digital assistants certainly seem spectacular right now, but once they get all the wrinkles worked out and they become common and mundane rather than rare and special, the urge to label them AI will quickly fade.

Anti-lock brakes, optical character recognition, spelling and grammar checkers and correctors, and auto-focusing cameras were once quite unusual and exceptional, and hence noteworthy as AI, but these days they are assumed, unremarkable common features, no longer warranting the AI label.

Emotional intelligence

Usually, when people discuss AI or even Strong AI they are referring to relatively mechanical operations and calm, dispassionate reasoning: the non-emotional side of intelligence. Emotional intelligence, including the ability to read the emotional state of another intelligent entity, whether human or machine, is usually left out of the picture.

Although there have been experimental efforts to imbue machines with some sense of emotive capabilities, that is still more of a science fiction fantasy than current or imminent reality. It may exist in some relatively narrow or specialized niches, but not in any broad and general sense.

Yes, someday AI systems will have at least some emotive capabilities, so-called emotional intelligence, but not in the near future.

My longer paper delves into this matter a bit more.

Autonomy, agency, and assistants

The true test of Strong AI is a robot or AI system that operates completely on its own, without any human supervision or control. This is called autonomy.

Short of full autonomy, agency is the capacity for an AI system to pursue a goal on behalf of another entity, whether it be a human or some other digital system. An AI system with agency is free to decide on its own how to achieve its given goal, but is not free to set goals of its own, other than in a manner that is subsidiary to its assigned goal.

Note: In philosophy and sociology, agency is used roughly the way autonomy is used here, while computer science and AI use the alternative meaning of agency — acting on behalf of another.

A driverless vehicle would fit the definition of agency but not true, full autonomy. The vehicle might choose between alternative routes, but wouldn’t have the autonomy to choose its own destination or to decide whether or not to do as it is told by its owner or operator.
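As a rough sketch of that split between choosing how and choosing what, consider a hypothetical route-picking agent. The destinations and travel times below are invented for illustration.

```python
# Minimal sketch of agency without full autonomy: the agent is handed a
# destination (the goal) and is free only to choose among routes to it.
# The routes and travel times (in minutes) are hypothetical.
ROUTES = {
    "airport": {"highway": 25, "surface streets": 40, "toll road": 20},
    "office": {"highway": 15, "surface streets": 22},
}

def choose_route(destination: str) -> str:
    """Decide how to reach the assigned goal, never whether or where."""
    options = ROUTES[destination]
    return min(options, key=options.get)  # pick the fastest known route

# The owner sets the goal; the agent only plans within it.
destination = "airport"
print(f"Driving to the {destination} via the {choose_route(destination)}.")
```

The owner supplies the goal; the agent’s freedom is confined to planning within it, which is agency without full autonomy.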

An assistant has even more limited autonomy and agency, being given a specific task and instructions and having very little freedom.

An intelligent digital assistant fits this definition for assistant.

For more on autonomy, agency, and assistants, see my companion papers on those topics.

Although it would be technically feasible to have a truly, fully autonomous AI system, the human race is not ready to have robots and AI systems running around completely independent of human control. Check out the Skynet AI computer network in the Terminator movies or the HAL 9000 AI computer in the 2001: A Space Odyssey movie — once these AI systems take charge, things don’t end well. Agency or semi-autonomous operation is the more practical and desirable mode of operation relative to full autonomy for the foreseeable future.

AI areas and capabilities

This is not an exhaustive or ordered list, but illustrates the range of capabilities pursued by AI researchers and practitioners:

  1. Reasoning
  2. Knowledge and knowledge representation
  3. Optimization, planning, and scheduling
  4. Learning
  5. Natural language processing (NLP)
  6. Speech recognition and generation
  7. Automatic language translation
  8. Information extraction
  9. Image recognition
  10. Computer vision
  11. Moving and manipulating objects
  12. Robotics
  13. Driverless and autonomous vehicles
  14. General intelligence
  15. Expert systems
  16. Machine learning
  17. Pattern recognition
  18. Theorem proving
  19. Fuzzy systems
  20. Neural networks
  21. Evolutionary computation
  22. Intelligent agents
  23. Intelligent interfaces
  24. Distributed AI
  25. Data mining
  26. Games (chess, Go, Jeopardy)

Most of these are covered in my longer paper.

Neural networks and deep learning

This paper won’t delve into any detail on neural networks and the related concept of deep learning, but will simply note that they relate to machine learning: the limited degree to which AI systems can seem to learn, mostly relatively simple patterns, images, and even some rules, largely by correlating lots of examples. That is in contrast with even human children, for whom seeing a single cat or dog is enough to sense how to recognize similar creatures.
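To make that contrast concrete, here is a minimal sketch of the learn-by-correlating-many-examples style, using a toy nearest-neighbor classifier in Python rather than an actual neural network. The feature values and labels are purely hypothetical; real deep learning is vastly more sophisticated, but shares the reliance on many examples rather than one.

```python
# Toy "learner" that can only classify by comparing new inputs against many
# stored examples. Features are (weight in kg, ear length in cm); the data
# below is invented purely for illustration.
from math import dist

training_examples = [
    ((4.0, 7.0), "cat"), ((5.0, 6.5), "cat"), ((3.5, 7.5), "cat"),
    ((30.0, 10.0), "dog"), ((25.0, 12.0), "dog"), ((40.0, 9.0), "dog"),
]

def classify(features):
    """Label a new input by its single closest stored example."""
    _, label = min(
        ((dist(features, example), label) for example, label in training_examples),
        key=lambda pair: pair[0],
    )
    return label

print(classify((4.5, 7.2)))    # -> "cat"
print(classify((28.0, 11.0)))  # -> "dog"
```

With only one or two stored examples this classifier would be brittle; it needs many examples to cover the variation a child handles after a single encounter.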

A little more detail is provided in my longer paper.

Animal AI

We tend to focus on human intelligence when discussing AI, but AI can be applied to the animal world as well, such as a personable robot dog, a robotic bird, or a robotic flying insect. In these cases the focus is far less on higher-level cognitive abilities such as reasoning, mathematics, and creativity, and more on the physics, physiology, sensory perception, object recognition, and motor control of biological systems.

Robotics

Much of robotics revolves around sensors and mechanical motions in the real world, seeming to have very little to do with any intellectual activity per se, so one could question how much of robotics is really AI.

Alternatively, one could say that sensors, movement, and activity enable acting on intellectual interests and intentions, thus meriting coverage under the same umbrella as AI.

In addition, it can be pointed out that a lot of fine motor control requires a distinct level of processing that is more characteristic of intelligence than mere rote mechanical movement.

In summary, the reader has a choice as to how much of robotics to include under the umbrella of AI:

  1. Only those components directly involved in intellectual activity.
  2. Also sensors that provide the information needed for intellectual activity.
  3. Also fine motor control and use of end effectors. Including grasping delicate objects and hand-eye coordination.
  4. Also any movement which enables pursuit of intellectual interests and intentions.
  5. Any structural elements or resource management needed to support the other elements of a robotic system.
  6. Any other supporting components, subsystems, or infrastructure needed to support the other elements of a robotic system.
  7. All components of a robotic system, provided that the overall system has at least some minimal intellectual capacity. That’s the point of an AI system. A mindless, merely mechanical robot with no intelligence would not constitute an AI system.

In short, it’s not too much of a stretch to include virtually all of robotics under the rubric of AI — provided there is at least some element of intelligence in the system, although one may feel free to be more selective in specialized contexts.

Artificial life

Technically, someday scientists may be able to create artificial life forms in the lab that have many of the qualities of natural biological life, but with possibly rather distinct chemical bases, structures, and forms. Such artificial life could conceptually be imbued with some form of intelligence as well — artificial intelligence for artificial life.

But, for now, such artificially intelligent artificial life remains the realm of speculation and science fiction. Still, it would be very interesting and potentially very useful. Granted, there might be more than a few ethical considerations.

An exception is virtual reality (VR), where even the laws of physics can be conveniently ignored, if desired. Traditional chemistry and biology present no limitations to the creativity of designer worlds in the realm of VR. In fact, one could say that all forms of life in a VR world are artificial, by definition. One can even imbue otherwise inanimate objects with any degree of life one chooses.

Ethics

Consult my longer paper for a discussion of ethical considerations for AI.

Historical perspective by John McCarthy

To get a sense of the roots and evolution of AI, consult AI pioneer John McCarthy’s own response to the question of What is AI?

Can machines think?

An AI system may indeed possess a fraction of the cognitive abilities of a human, but is that enough to claim that the machine is indeed thinking?

I have some comments and questions on that topic in a companion paper, Can Machines Think?

Diving even deeper, I have a longer list of questions designed to spur thought on this matter in another companion paper, Questions about Thinking.

What’s the IQ of an AI?

Next question. Seriously, there is no clarity as to how the human concept of IQ could be adapted to machines. Some people have ideas about how to do it, but there is no consensus. It’s almost moot until we actually achieve Strong AI or something fairly close.

Besides, given the malleable nature of software, the code of an AI system could be quickly revised to adapt to whatever new test came along so that an AI would score significantly higher than if the code hadn’t been tuned to the test.

But that’s the nature of AI today — it is relatively easy to identify specific and relatively narrow niche cases and code up heuristics that work fairly well for those narrow niches, making the software appear quite intelligent, even while it is far more difficult or even near impossible with today’s technology to achieve true, full, Strong AI which works equally well for all niches.

Still, it would be good to have a more objective measure of the level of intelligence of an AI than simply weak or strong, or even my moderate level or my spectrum of functional behavior and levels of competence.

Turing test

In theory, the so-called Turing test (also called The Imitation Game) can detect whether a machine or AI is able to interact in such a human-like manner that no human observer, asking a finite set of questions, could tell that it was a machine.
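As a rough illustration of the test’s structure only (not a claim about how any real evaluation is run), here is a minimal sketch in Python. The respond functions, questions, and judge are purely hypothetical placeholders.

```python
import random

# Minimal sketch of the imitation game protocol. The respond functions are
# hypothetical stand-ins; a real test would put a human and a candidate AI
# behind the same text-only interface.
def human_respond(question: str) -> str:
    return "Let me think about that for a moment."

def machine_respond(question: str) -> str:
    return "Let me think about that for a moment."

def imitation_game(questions, judge) -> bool:
    """Run one round; return True if the judge correctly spots the machine."""
    # Hide the identities behind shuffled labels "A" and "B".
    pairing = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(pairing)
    labels = dict(zip("AB", pairing))
    # The judge sees only question/answer transcripts, never the identities.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, (_, respond) in labels.items()
    }
    guess = judge(transcripts)  # the judge names the label it believes is the machine
    return labels[guess][0] == "machine"

# A judge with nothing to go on can only guess, which is the point: if the
# answers are indistinguishable, the machine is caught only half the time.
questions = ["Describe your earliest memory.", "What is 17 times 23?"]
print(imitation_game(questions, judge=lambda t: random.choice(sorted(t))))
```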

There is some significant dispute about both whether the test is indeed a valid binary test of intelligence (one that always arrives at the correct conclusion about whether the test subject has human-level intelligence) and whether claims to have passed the test are truly valid.

The real bottom line is that as a thought experiment the test highlights the great difficulty of definitively defining human-level intelligence in any deeply objective and measurable sense.

That’s really only an issue for defining and testing for Strong AI. Weak AI has no such strong testing requirements — even if a system has only a fraction of human-level capability, or seems only partially human-like, that’s good enough for many applications.

More details can be found in my longer paper.

And so much more

There is a lot more to AI than offered here. My longer paper — Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning — dives down a few more levels for those who want more than is covered here but aren’t prepared to invest the time, energy, attention, and money in a shelf full of dense text books and academic papers.

For more of my writings on artificial intelligence, see List of My Artificial Intelligence (AI) Papers.
