Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning

Jack Krupansky
Jun 13, 2017


The goal of this informal paper is to help mere mortals comprehend the meanings of terms associated with artificial intelligence (AI) such as machine intelligence and machine learning, among many others. The goal is not to explore AI in great depth and detail, but to stick to a fairly high level, with just enough detail to convey an accurate sense of what AI is roughly all about, without resorting to the comforting but confusing and misleading metaphors that are so popular in the general media.

This informal paper covers the following:

  1. Basic definitions for the big three key terms: artificial intelligence, machine intelligence, and machine learning.
  2. Basic definitions for the major concepts of AI.
  3. Exploration of the nature of intelligence, both human and machine.
  4. Definitions and discussion of many supporting terms for intelligence and AI.
  5. Issues that advanced AI needs to address.
  6. Many interesting questions concerning AI.
  7. Pondering the limits and future of AI.

Some areas this paper does NOT cover:

  • General history of AI.
  • Specific AI algorithms.
  • Very detailed AI technical terms. Okay, there will be some, but defined in plain language.
  • Specific AI applications. Okay, maybe a few, but not intending to be exhaustive or even comprehensive.
  • Commentary on AI in science fiction. Okay, there will be a little.
  • Roadmap for the future of AI — progress will be all over the map.

This informal paper is not intended to be a full treatment of AI or a full introduction to AI per se, but for many people it should suffice, be a good starting point, or be a good supplement.

Although this paper does indeed explore a number of questions, I have two other informal papers with very detailed questions related to intelligence and AI: Questions about Thinking and Can Machines Think?.

Besides basic definitions, this paper will attempt to provide a fairly complete treatment of intelligence in general, at least in terms of considering what we might ask of an ultimate AI system.

If this extensive 150-page paper is too much to digest, a much briefer 10-page paper is also available: What Is AI (Artificial Intelligence)?, although it won’t answer as many questions as this paper.

The problem

The whole point of this informal paper is to address three glaring problems:

  • People have oversimplified AI.
  • People have exaggerated AI.
  • People have over-promised on the potential for AI.

So, this informal paper has three goals:

  • To provide more depth and detail on AI — but still comprehensible to mere mortals.
  • To strip away the exaggeration.
  • To debunk the over-promising.

Executive summary

As a general proposition, artificial intelligence (AI) is any software which approximates some significant fraction of some aspect of human intelligence. The catch is the wide range of complexity and difficulty in moving that fraction up to a close approximation of human intelligence. Strong AI refers to a larger fraction; Weak AI refers to a smaller fraction.

Just to be clear, AI does not strictly or even commonly imply Strong AI. A driverless car does use AI even if it is unable to handle all of the non-driving problems that even an average person can solve effortlessly.

What is most important is that AI is utilized to provide better solutions to problems than if AI were not used.

Machine intelligence is commonly used as a synonym for AI, but can also refer to forms of intelligence which are beyond or different from human intelligence, such as finding complex patterns in very large amounts of data.

Machine Learning is essentially the process by which machines acquire knowledge. It generally focuses on analyzing data for patterns and relationships.

Deep Learning goes much further and attempts to analyze the nature of the phenomena that the data represents, including discovery of rules of behavior, interactions, and strategy.

Terms and concepts to be defined

It is important to recognize that precise definitions are a matter of dispute, and that many of these terms are vague and ambiguous. Terms frequently have multiple senses, as with many words in natural language. This informal paper will endeavor to catalog the many senses commonly in use, as well as indicate the more common or preferred usage.

The specific terms and concepts to be defined are:

  1. Artificial intelligence
  2. Machine intelligence
  3. Machine learning
  4. Levels of Artificial Intelligence
  5. Competent AI
  6. Spectrum of Functional Behavior
  7. Mind vs. brain
  8. Computational theory of mind
  9. Hardware and software analogy to brain and mind
  10. Popper’s three worlds
  11. Intelligence
  12. Intelligent entity
  13. Artificial intelligent entity
  14. Beyond thinking
  15. AI system
  16. Turing test
  17. Knowledge, information, belief, bias, meaning, and understanding
  18. Basic facts, facts, conclusions
  19. Fact pattern
  20. Propositions, statements, assertions, opinions, and views
  21. Justified true belief — belief vs. knowledge
  22. Strong vs. weak knowledge
  23. Truth
  24. Eternal truth or universal truth
  25. Veracity, credibility, misinformation, disinformation, propaganda, and fake news
  26. Computational propaganda
  27. Data cleaning
  28. All knowledge is provisional
  29. Conjectures
  30. Thought experiments
  31. Literal, shallow, or surface meaning vs. deep meaning
  32. Objective vs. subjective knowledge
  33. Consensus
  34. Knowledge artifacts
  35. Knowledge representation
  36. Semantic Web
  37. Evidence and proof
  38. Skepticism
  39. Certainty
  40. Notion
  41. Matter
  42. Situation
  43. Concept
  44. Abstraction
  45. Taxonomy
  46. Category
  47. Class
  48. Ontology
  49. Intuition
  50. Sentience
  51. Sapience
  52. Higher-order intellectual capacity
  53. Consciousness
  54. Conscious
  55. Belief, desire, intention (BDI)
  56. Motivation
  57. Volition
  58. Free will
  59. Subconscious and unconscious mind
  60. Wisdom
  61. Reason and rationality
  62. Rational, irrational, and nonrational
  63. Judgment
  64. Sound judgment
  65. Confidence
  66. Reason and logic
  67. Generalization and induction
  68. Rationale
  69. Reasoning, Formal reasoning, and Informal reasoning
  70. Argument
  71. Assumptions
  72. Definitions
  73. Fair inference
  74. Active knowledge
  75. Linear systems
  76. Nonlinear systems
  77. Chaotic systems
  78. Indeterminate systems
  79. Complex adaptive systems
  80. Chaos
  81. Knowledge base
  82. Cognition
  83. Attention
  84. Perception
  85. Symbols and symbol processing
  86. Language
  87. Linguistics
  88. Natural language processing (NLP)
  89. Speech processing — speech recognition, speech generation
  90. Text recognition
  91. Text extraction from images
  92. Images, image processing, and image recognition
  93. Machine vision
  94. Identification and identifiers
  95. Identity
  96. Memory
  97. Prediction of the future
  98. Patterns, Trends, Extrapolation, Speculation, Guessing, Approximation, Estimation
  99. Machine perception
  100. Sentiment analysis
  101. Search
  102. Trial and error
  103. Heuristics and rules of thumb
  104. Search engines
  105. Matching
  106. Human intelligence
  107. Human-level intelligence
  108. General intelligence
  109. Human-level artificial intelligence / Artificial human-level intelligence
  110. Strong AI
  111. Weak AI / Narrow AI
  112. Task-specific AI
  113. Domain-specific AI
  114. Artificial general intelligence (AGI)
  115. Augmented intelligence
  116. Group mind
  117. Distributed AI
  118. Self-organization, self-organizing systems
  119. Swarm intelligence
  120. Hierarchical social organizations
  121. Emergence and evolution
  122. Emergent phenomenon
  123. Genetic and evolutionary computing
  124. Deep learning
  125. Neural networks
  126. Hidden variables and observed variables
  127. Data analysis, analytics, and data science
  128. Knowledge representation
  129. Intelligent agents
  130. Rich semantic infrastructure needed for intelligent agents to thrive
  131. Rich semantic infrastructure for group mind and distributed AI
  132. Intelligent personal assistants
  133. Chatbots
  134. Life Agents — Software Agents to Help You Live a Better Life
  135. Commonsense knowledge
  136. Learning
  137. Human-level learning
  138. Teaching
  139. Training
  140. Programming
  141. Algorithms
  142. Uncertainty
  143. Fuzzy logic
  144. Quantum mechanical effects at the macro level
  145. Goals vs. tasks
  146. Problem solving
  147. General problem solving
  148. Constraint satisfaction
  149. Data flow
  150. Data structures
  151. Data types
  152. Objects
  153. Metadata
  154. Reference data, entity data, and transactional data
  155. Schemas and data models
  156. Data modeling
  157. Databases and DBMSes
  158. Distributed data
  159. Federated data
  160. Crowdsourced data
  161. Graphs and graph databases
  162. Knowledge webs
  163. Turing machines
  164. Computability
  165. Analog vs. digital
  166. Roger Penrose’s model of the human mind
  167. Algorithmic complexity
  168. Big O notation
  169. Brute force
  170. Combinatorial explosion
  171. N-body problems and quantum computing
  172. Conway’s Law
  173. Generative AI
  174. Creative AI
  175. Computational creativity
  176. AI and science
  177. AI science
  178. Emotional intelligence and affective computing
  179. Drives and goals
  180. Social intelligence
  181. Values — human, artificial, and machine
  182. Knowledge of society
  183. Knowledge of human nature
  184. Cultural-awareness
  185. Human nature and machine nature
  186. Artificial nature
  187. Safety and adventure
  188. Behavior
  189. Reaction
  190. Robotics
  191. Driverless cars and autonomous vehicles
  192. Advanced driver-assistance systems
  193. Superintelligence
  194. Artificial life (A-Life)
  195. Cybersecurity
  196. Privacy and confidentiality
  197. Cyber warfare
  198. Data governance
  199. Knowledge governance

A number of other supporting and related terms and concepts will be defined and discussed in the process, but the main focus is on the big, most-urgent three:

  • Artificial Intelligence
  • Machine Intelligence
  • Machine Learning

Other questions and topics

A variety of questions and other topics related to AI will also be explored in this paper:

  • What is a machine?
  • What is a platform?
  • Is anything unknowable?
  • Is anything knowable with certainty?
  • Standing on the shoulders of giants
  • Philosopher kings and artificial philosopher kings
  • Psyche, soul, and spirit
  • Bots and botnets
  • Liability
  • Ethics
  • Regulations
  • Moral and ethical dimensions of decisions
  • Rights of artificial intelligent entities
  • Asimov’s Three Laws of Robotics
  • Is (human-level) intelligence computable?
  • What can’t a machine do?
  • But can robots bake bread?
  • Science fiction
  • Modest breakthroughs
  • Breathtaking breakthroughs
  • Speculation and forecasts for the future
  • Will AI put us all out of work?
  • Is AI an existential threat to human life?
  • Roadmap
  • Education for AI
  • Resources


Although the material is somewhat technical in nature by definition, the intent is to reach a more general audience, especially nontechnical managers and executives as well as policymakers, but not necessarily the mass market.

Plain language will be used. Any jargon will be carefully defined. No technical background or experience is presumed, other than a general exposure to consumer electronic devices.

This paper is not intended for AI practitioners per se, but more for those non-practitioners who have reason to interact with AI practitioners and their handiwork.

Novices embarking on or thinking of embarking on a career in AI may also find it quite helpful.

Think of this paper as being positioned midway between the lightness of AI treatments in the popular press and the heavy treatments of AI technical papers and books.


This paper can be used to a significant degree as a glossary of terms related to AI, but unfortunately is not organized alphabetically or with hyperlinks for all terms.


This informal paper is organized a little counterintuitively if not in an outright stream of consciousness, but is designed to be read linearly from start to finish, although jumping ahead to explore specific concepts is reasonable and encouraged. Quite a few terms are used before they are formally defined, but enough plain language is used so that meaning should be fairly obvious from context.

The goal is to introduce the main, important topics first and then gradually fill in the details.

Warning: This is a rather long paper, so few people are expected to read the whole thing. (131 pages as authored first in Google docs.)

The suggestion is to read the first few sections, to get the basic definitions and as much detail as carries your interest, then maybe jump to the end and read some of the more interesting discussion sections on the limits and future of AI.

And be sure to check out the conclusion.

Skimming through the entire document to scan the many section headers is a great way to get a quick sense of the content, and then focus on reading sections of particular interest.

The intent is for people to use the search feature in their browser to zoom in to terms and topics of particular interest. Unfortunately, Medium does not have a feature for linking to sections within a document.

Should this paper be a book? Maybe. I’m thinking about it.

Why no illustrations or diagrams? Great idea, but I’m a text guy — graphics is not my forte, and my goal is to focus on plain language definitions and discussion.

The big three definitions

Without further ado, here are the big three term definitions, for artificial intelligence, machine intelligence, and machine learning, followed by definitions and discussion of many of the more detailed terms relevant to intelligence, both artificial and human.

Artificial Intelligence

  1. Capacity of a computer to approximate some fraction of the intellectual capacities of a human being.
  2. Attempts to approximate the level of intelligence of a human being in a computer.
  3. The intellectual activity portion of artificial life (A-Life).
  4. All but the physical portions of artificial life (A-Life).
  5. All aspects of artificial life (A-Life) needed for a machine to mimic human or animal life.
  6. The intellectual activity portion of robotics.
  7. All but the physical portions of robotics.
  8. All aspects of robotics needed for a machine to mimic human or animal life.
  9. Synonym for robotics.
  10. Machine intelligence that achieves human-level intelligence.
  11. Machine intelligence of some significant complexity.
  12. Machine intelligence of some complexity that draws a strong parallel to some interesting subset of human intelligence.
  13. Synonym for human-level artificial intelligence.
  14. “Artificial intelligence is what we don’t know how to do yet.” — Alan Kay
  15. Synonym for Strong AI. Approaching human-level intelligence.
  16. Synonym for Weak AI. Small fraction of human-level intelligence.

Subsequent sections will delve into these terms, especially human-level intelligence.

Machine Intelligence

  1. Synonym for Artificial Intelligence.
  2. Artificial intelligence specific to electronic digital machines (computers), as opposed to, say, an artificial biological system.
  3. Artificial intelligence that exceeds human intelligence.
  4. Artificial intelligence that may have elements of intelligence that have no counterpart in humans.
  5. Any complex or sophisticated algorithm that accomplishes some task that impresses a human as being a task that humans are good at and that seems like it would be difficult for a machine.
  6. Forms of intelligence that a machine is capable of but are beyond, different from, or difficult for humans to accomplish.
  7. Some nuance or difference from AI that will have to be determined from the context of usage.

Generally, unless context dictates otherwise, it is safe to assume that machine intelligence is a reference to AI on a computer or an electronic device or in an object containing an embedded computer.

The main issue with the use of this term is whether it is being used to reference capabilities that are a significant fraction of the level of intelligence of a person or simply complex or sophisticated capabilities that merely warrant a label much more weighty than mere computer software.

Machine Learning

  1. Knowledge acquisition. With deep meaning.
  2. Information acquisition. Only literal, shallow, surface meaning.
  3. Sophisticated data analysis and analytics.
  4. Pattern recognition and model building that requires training.
  5. Data clustering, classification, and partitioning of large quantities of data.
  6. So-called Deep Learning — typically using neural networks that require training.
  7. Human-level learning by a machine.
  8. Any degree of learning by a machine.
  9. Synonym for directed machine learning. The most common usage.
  10. Synonym for undirected machine learning. Only for Strong AI.

In the context of AI, machine learning typically means one of those first six senses.

Deep meaning will be defined and discussed in a bit. The basic intent is to achieve a level of meaning much closer to that of a human. Deeper than a basic dictionary meaning.

Generally speaking, most references to machine learning are not referring to human-level learning.

Deep Learning can legitimately claim to be at least a credible fraction of human-level learning, but the other alleged forms of so-called machine learning should not even be classified as learning per se. They are really no more than machine perception, the stage up through knowledge acquisition, excluding deep meaning.
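The clustering sense of machine learning (sense 5 above) can be illustrated with a minimal sketch. This is a toy one-dimensional k-means, with made-up data and a hypothetical function name, not a reference to any production algorithm:

```python
import random

def kmeans_1d(values, k, iterations=20, seed=42):
    """Cluster 1-D values into k groups by iteratively refining centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iterations):
        # Assignment step: attach each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Update step: move each centroid to the mean of its cluster,
        # keeping the old centroid if a cluster ended up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groupings hidden in unlabeled data.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3, 1.1, 10.0]
print(kmeans_1d(data, k=2))  # two centroids, near 1.0 and 10.1
```

Note that the program discovers a pattern (two groupings) but attaches no meaning to it, which is precisely the distinction drawn here between mere information acquisition and learning with deep meaning.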

Now more depth

Now that we have those basic big three terms defined, in at least a rough sense, we can explore AI and intelligence in a lot more depth.

Levels of Artificial Intelligence

I see artificial intelligence (AI) as having five levels:

  1. Weak AI or Narrow AI or Light AI. Individual functions, niche tasks, or a specific domain, in isolation. Any learning is limited to relatively simple patterns. Generally, a lot of the core, conceptual intelligence is preprogrammed, even hardwired.
  2. Moderate AI or Medium AI. Integration of multiple functions and tasks, as in a robot, intelligent digital assistant, or driverless vehicle. Possibly some relatively limited degree of learning.
  3. Strong AI. Incorporates roughly or nearly human-level reasoning and some significant degree of learning.
  4. Extreme AI. Systems that learn and can produce even more capable systems that can learn even more capably, in a virtuous spiral.
  5. Ultimate AI. Essentially Ray Kurzweil’s Singularity or some equivalent of superhuman intelligence. Also called superintelligence.

There have been many examples of weak or light AI over the decades. Call this traditional AI.

Strong AI is still off in the distance.

Moderate AI is where a lot of the real excitement is these days, as well as more robust capabilities in the Weak AI area.

I have a separate informal paper on Extreme AI — if you want more depth than is provided in this paper.

Competent AI

While Strong AI seeks to achieve near-human-level intelligence, that goal seems a good ways off over the horizon. Meanwhile, a vast array of applications lie before us that could simply benefit from much more robust implementations of weak or moderate AI.

I call this competent AI — robust implementations of weak or moderate AI.

The goal is that the user will be able to rely on AI and not have to be cautious or skeptical about whether AI is doing the right thing. Anti-lock brakes and auto-focusing cameras have achieved a great level of robustness or competence, while spelling and grammar checkers and speech recognition have not.

Levels of competence include:

  1. Nothing. No automation capabilities in a particular area or level of function. User is completely on their own.
  2. Minimal subset of full function. Something better than nothing, but with severe limits.
  3. Rich subset. A lot more than minimal, but with substantial gaps.
  4. Robust subset. Not complete and maybe not covering all aspects of a level of function in an area, but close to complete in all aspects that it covers.
  5. Near-expert. Not quite all there, but fairly close and good enough to fool the average user into thinking an expert is in charge.
  6. Expert-level. All there.
  7. Elite expert-level. Best of human experts.
  8. Super-expert level. More than even the best human experts.

Spectrum of Functional Behavior

How functional must a computer be to claim human-level intelligence? Alternatively, how functional must a computer be to claim human-level artificial intelligence (AI)?

The goal here is to provide a firm basis for discussing the level of function of AI systems, including whether a given digital system has the level of function worthy of being described as intelligence.

The thesis here is fourfold:

  1. Behavior of both human and digital systems, as well as animals, can be classified based on level of function.
  2. Functional behavior spans a broad spectrum of levels.
  3. Functional behavior must reach the level of being highly functional or high function in order to be considered comparable to human-level behavior or intelligence.
  4. Integration and coordination of functions is requisite for high function and true, human-level intelligence.

There is no scientific significance at this stage for the various informal levels of function described herein, but these informal levels do serve to help comprehend the nature of levels of function.

The proposed but informal level of function characterizations are:

  1. Non-functional. No apparent function. Noise. Twitches and vibrations.
  2. Barely functional. The minimum level of function that we can discern. No significant utility. Not normally considered AI. Automation of common trivial tasks.
  3. Merely, minimally, or marginally functional, tertiary function. Seems to have some minimal, marginal value. Marginally considered AI. Automation of non-trivial tasks. Not normally considered intelligence per se.
  4. Minor or secondary function. Has some significance, but not in any major way. Common behavior for animals. Common target for AI. Automation of modestly to moderately complex tasks. This would also include involuntary and at least rudimentary autonomous actions. Not normally considered intelligence per se.
  5. Major, significant, or primary function. Fairly notable function. Top of the line for animals. Common ideal for AI at the present time. Automation of complex tasks. Typically associated with consciousness, deliberation, decision, and intent. Autonomy is the norm. Bordering on what could be considered intelligence, or at least a serious portion of what could be considered intelligence.
  6. Highly functional, high function. Highly notable function. Common for humans. Intuition comes into play. Sophisticated enough to be considered human-level intelligence. Characterized by integration of numerous primary functions.
  7. Very high function. Exceptional human function, such as standout creativity, imagination, invention, and difficult problem solving and planning. Exceptional intuition.
  8. Genius-level function. Extraordinary human, genius-level function, or extraordinary AI function.
  9. Super-human function. Hypothetical AI that exceeds human-level function.
  10. Extreme AI. A virtuous spiral of learning how to learn, using AI to create new AI systems that are ever more capable of learning how to learn and of teaching new AI systems better ways to learn.
  11. Ray Kurzweil’s Singularity. The ultimate in Extreme AI, combining digital software and biological systems.
  12. God or god-like function. The ultimate in function. Obviously not a realistic research goal.

For more detail on this spectrum, see my informal paper entitled Spectrum of Functional Behavior.

Mind vs. brain

Whether it is technically fully accurate or not, it is certainly popular to think of the mind and brain as somewhat different although clearly integrated into one organ. We tend to think of higher-order mental functions, activities, or processes as part of the mind, and relegate more primitive, lower-level functions such as processing of sensory data and biological processes to the brain organ itself.

In truth, some of the real magic of seemingly high-level mental functions may in fact be occurring in, controlled by, or at least dramatically influenced by neuron-level and intra-neuron brain activity.

The exact division or form of integration of mind and brain is beyond the scope of this paper.

For our purposes here, it does seem and feel safe to associate most mental functions with mind and relegate the brain to a supporting role.

Mind

  1. Seat of intelligence for an intelligent entity.
  2. Seat of all intellectual activity.
  3. The mental functions performed within a brain.
  4. The mental functions performed within an AI system.
  5. Synonym for brain.

Traditionally we haven’t referred to an AI system as having a mind per se, but for a sufficiently advanced AI system with advanced mental functions, it makes sense to do so.

Brain

  1. The organ in which mental function occurs.
  2. Organ responsible for sensory processing, intellectual activity, and regulation of nervous activity and bodily functions.
  3. The portion of a machine which performs mental functions.
  4. Synonym for mind.

The internal structure of the brain is beyond the scope of this paper.

Computational theory of mind

  1. Theory that the mind works using reason, proceeding in rational steps.
  2. Theory that the human mind can be simulated using digital computers and techniques such as Turing machines.
  3. Hardware and software analogy to brain and mind.

There are pros and cons and great debates on the matter, and although it may need to be resolved to achieve true human-level intelligence, weaker, moderate-level intelligence appears to be very achievable using a computational model of mind.

Hardware and software analogy to brain and mind

Whether it is technically fully accurate or not, it is certainly popular to draw an analogy between brain and hardware on one hand and mind and software on the other. Alternatively, brain is to mind as hardware is to software. In short, the software of an AI system is comparable to the human mind.

One technical difficulty with this analogy is that the dividing line between hardware and software is somewhat arbitrary — some seemingly software functions may be implemented in hardware while some functions which at least conceptually could be implemented in hardware are instead implemented or simulated in software.

Delving into nuances of limitations of this analogy is beyond the scope of this paper.

One ultimate question for this analogy is whether there is some biological basis of human intelligence that cannot be achieved in a non-biological machine.

But that simply raises the question of whether we will eventually be able to implement computers using organic materials that allow a more complete modeling of the human brain.

Popper’s three worlds

Philosopher Karl Popper has postulated a three-world model of reality and knowledge:

  1. World 1 — the world of physical reality, with objects and phenomena, including plants, animals, people. The real world.
  2. World 2 — the mental world, all the thoughts, knowledge, and mental processes we have in our heads. Our image or perception of the real world, plus any imaginary worlds we create in our heads.
  3. World 3 — the products of our mental efforts, such as language, books, theories, art, performances, and objective knowledge. To be clear, World 3 is the knowledge (including meaning) that is embedded in these artifacts, not the physical objects themselves, which remain part of World 1.

That’s my personal interpretation and summary. You can read Popper’s lecture materials for the detail of his model in his own words.

Artificial intelligence is what we don’t know how to do yet

From the dawn of computing, the essential purpose of a computer has been to automate some task that people normally do. Since such tasks always involve information, some degree of intelligence has always been required.

When capabilities seem beyond what a computer can easily do, it is easy to ascribe it to being a matter of intelligence. As if the tasks we have already automated didn’t require intelligence.

Once we do manage to figure out how to automate some seemingly difficult task, we assert that this is artificial intelligence, at least until it becomes widely accepted that computers can obviously do that particular task and do it quite well. Only then will we gradually and quietly cease using the AI label for those tasks that we no longer have a need to refer to explicitly.

Maybe the issue is that since we have already automated so much of the low-hanging fruit, we are finally bumping into the knee of the difficulty curve, so that it takes increasingly intense levels of effort and resources to advance up the intelligence spectrum, and each advance comes more slowly and therefore seems so much more spectacular.

Robots, driverless cars, and even intelligent digital assistants certainly seem spectacular right now, but once they get all the wrinkles worked out and they become common and mundane rather than rare and special, the urge to label them AI will quickly fade.

Anti-lock brakes, optical character recognition, spelling and grammar checkers and correctors, and auto-focusing cameras were once quite unusual and exceptional, and hence noteworthy as AI, but these days they are assumed, common features that are no longer so notable and no longer warrant a label of AI.

AI areas and capabilities

This is not an exhaustive or ordered list, but illustrates the range of capabilities pursued by AI researchers and practitioners:

  1. Reasoning
  2. Knowledge and knowledge representation
  3. Optimization, planning, and scheduling
  4. Learning
  5. Natural language processing (NLP)
  6. Speech recognition and generation
  7. Automatic language translation
  8. Information extraction
  9. Image recognition
  10. Computer vision
  11. Moving and manipulating objects
  12. Robotics
  13. Driverless and autonomous vehicles
  14. General intelligence
  15. Expert systems
  16. Machine learning
  17. Pattern recognition
  18. Theorem proving
  19. Fuzzy systems
  20. Neural networks
  21. Evolutionary computation
  22. Intelligent agents
  23. Intelligent interfaces
  24. Distributed AI
  25. Data mining
  26. Games (chess, Go, Jeopardy)

A number of these are covered in this paper.

Human-level intelligence

  1. Intelligence matching, exceeding, or approximating human intelligence.
  2. Synonym for human intelligence.
  3. Synonym for general intelligence.
  4. Synonym for Strong AI.
  5. Synonym for artificial general intelligence.

Human-level artificial intelligence / Artificial human-level intelligence

  1. Artificial intelligence that achieves human-level intelligence.
  2. A machine that can perform virtually all intellectual tasks of a human.
  3. Synonym for Strong AI.
  4. Human-level artificial intelligence and artificial human-level intelligence are synonyms.

General intelligence

  1. AI that is not restricted to narrow sub-domains of human intelligence.
  2. Synonym for human-level artificial intelligence.
  3. Synonym for intelligence.
  4. Synonym for human intelligence.
  5. Synonym for Strong AI.
  6. Synonym for artificial general intelligence.

This is a more traditional AI term to refer to Strong AI — AI that is comparable to human intelligence.

Knowledge acquisition

  1. Transformation and storage of raw sensory data into knowledge — structured information combined with associated deep meaning.
  2. The end result of cognition, beginning with perception.

Learning

  1. Knowledge acquisition. With deep meaning.
  2. Information acquisition. Only literal, shallow surface meaning.
  3. Synonym for perception.
  4. Synonym for cognition.

The distinction between information and knowledge will be discussed in a bit.

Technically, learning includes deep meaning, but even for humans, the initial learning process can be rather shallow and literal, with deeper meaning coming from subsequent experience.

Learning can occur in a number of ways:

  1. Exploration, experimentation, thought, speculation, and reasoning based on arbitrary whim of the intelligent entity.
  2. Recommended course of study from some knowledgeable authority.
  3. Structured instruction or training. Education.
  4. Experience in general.

General learning

  1. The intelligent entity focuses its attention for learning as it sees fit, unconstrained and uninfluenced by any other intelligent entities.

See also: Machine general learning.

Directed machine learning

  1. A user directs the AI system to focus its attention to learn some matter of interest to the user.
  2. Synonym for supervised learning.
  3. Opposite of undirected machine learning.
  4. Opposite of unsupervised machine learning.

Generally speaking, virtually all machine learning of today’s AI systems is directed machine learning.

Undirected machine learning

  1. Without any user intervention, the machine decides for itself how to focus its attention for learning any matters of interest to the machine itself.
  2. Opposite of directed machine learning.
  3. Synonym for general learning, for a machine.
  4. Synonym for machine general learning.

Generally speaking, there are no known AI systems today which engage in undirected machine learning.

Generally speaking, undirected machine learning is associated only with Strong AI.

Machine general learning

  1. The machine focuses its attention for learning as it sees fit, unconstrained and uninfluenced by the user.
  2. Synonym for undirected machine learning.

Generally, machine general learning is characteristic of Strong AI.

Supervised machine learning

  1. The machine engages in learning but only as supervised by the user.
  2. Synonym for directed machine learning.
  3. Opposite of unsupervised machine learning.

Unsupervised machine learning

  1. The machine engages in learning without the need for supervision by the user.
  2. Synonym for undirected machine learning.
  3. Opposite of supervised machine learning.
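The supervised/unsupervised distinction can be pictured with a tiny sketch in plain Python, with no ML libraries; the functions, data values, and method choices here are purely illustrative assumptions, not any standard algorithm. The supervised function is handed labeled examples by the user, while the unsupervised one groups unlabeled data on its own (a miniature one-dimensional k-means):

```python
# Minimal sketch contrasting supervised and unsupervised learning.
# All names and values are illustrative; no ML library is used.

def supervised_fit(samples, labels):
    """Supervised: the user supplies labeled examples; learn a
    threshold separating two classes on a 1-D feature."""
    lo = max(x for x, y in zip(samples, labels) if y == 0)
    hi = min(x for x, y in zip(samples, labels) if y == 1)
    return (lo + hi) / 2  # decision boundary between the classes

def unsupervised_fit(samples, iterations=10):
    """Unsupervised: no labels; the machine groups the data itself
    (a tiny 1-D k-means with k = 2; assumes both clusters non-empty)."""
    c0, c1 = min(samples), max(samples)  # initial cluster centers
    for _ in range(iterations):
        a = [x for x in samples if abs(x - c0) <= abs(x - c1)]
        b = [x for x in samples if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return c0, c1

data = [1.0, 1.2, 0.8, 5.1, 4.9, 5.3]
print(supervised_fit(data, [0, 0, 0, 1, 1, 1]))  # boundary near 3.05
print(unsupervised_fit(data))                    # centers near 1.0 and 5.1
```

Note that the two functions consume the same data; the only difference is whether the user directs the learning by supplying labels.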

Intelligence

  1. Intellectual capacity of any degree, but typically of a significant degree of the abilities listed below in elements of intelligence.
  2. Everything that a human being can do in their head with their brain and mind.
  3. The ability of the human mind to employ knowledge to observe the world and take action.
  4. The ability of any intelligent entity (human or AI) to employ knowledge to observe the world and take action.
  5. Synonym for human-level intelligence.
  6. Synonym for intellectual capacity.

Intellectual capacity

  1. The collection of mental functions and mental processes of an intelligent entity. See elements of intelligence.
  2. The capabilities of an intelligent entity to perceive, think, reason, decide, plan, and act.
  3. A single mental function or mental process of an intelligent entity.
  4. A subset of the mental functions and mental processes of an intelligent entity.
  5. Synonym for intelligence.

Intellectual task

  1. Any task performed by a mental function or mental process of an intelligent entity.
  2. Any task that a mental function or mental process of an intelligent entity is capable of.
  3. Synonym for mental function or mental process.
  4. Synonym for intellectual activity.
  5. Synonym for intelligence.

Intellectual activity

  1. A mental function or mental process of an intelligent entity in action.
  2. The capacity of an intelligent entity to perform mental functions and mental processes.
  3. Synonym for mental function or mental process.
  4. Synonym for intellectual task.
  5. Synonym for intelligence.

Elements of intelligence

The elements or capabilities of intelligence or intellectual capacity are known as mental functions or mental processes.

At a high level, intelligence or intellectual capacity consists of these major elements:

  1. Perception. The senses or sensors. Forming a raw impression of something in the real world around us.
  2. Attention. What to focus on.
  3. Recognition. Identifying what is being perceived.
  4. Communication. Conveying information or knowledge between two or more intelligent entities.
  5. Processing. Thinking. Working with perceptions and memories.
  6. Memory. Remember and recall.
  7. Learning. Acquisition of knowledge and know-how.
  8. Analysis. Digesting and breaking down more complex matters.
  9. Speculation, imagination, and creativity.
  10. Synthesis. Putting simpler matters together into a more complex whole.
  11. Reasoning. Logic and identifying cause and effect, consequences and preconditions.
  12. Following rules. From recipes to instructions to laws and ethical guidelines.
  13. Applying heuristics. Shortcuts that provide most of the benefit for a fraction of the mental effort.
  14. Intuitive leaps.
  15. Mathematics. Calculation, solving problems, developing models, proving theorems.
  16. Decision. What to do. Choosing between alternatives.
  17. Planning.
  18. Volition. Will. Deciding to act. Development of intentions. When to act.
  19. Movement. To aid perception or prepare for action. Includes motor control and coordination. Also movement for its own sake, as in communication, exercise, self-defense, entertainment, dance, performance, and recreation.
  20. Behavior. Carrying out intentions. Action guided by intellectual activity. May also be guided by non-intellectual drives and instincts.

Communication includes:

  1. Natural language.
  2. Spoken word.
  3. Written word.
  4. Gestures. Hand, finger, arm.
  5. Facial expressions. Smile, frown.
  6. Nonlinguistic vocal expression. Grunts, sighs, giggles, laughter.
  7. Body language.
  8. Images.
  9. Music.
  10. Art.
  11. Movement.
  12. Creation and consumption of knowledge artifacts — letters, notes, books, stories, movies, music, art.
  13. Ability to engage in discourse. Discussion, conversation, inquiry, teaching, learning, persuasion, negotiation.
  14. Discerning and conveying meaning, both superficial and deep.

Recognition includes:

  1. Objects
  2. Faces
  3. Scenes
  4. Places
  5. Names
  6. Voices
  7. Activities
  8. Identities
  9. Intentions
  10. Meaning

Only a Strong AI system would possess all or most of these characteristics. A Weak or Moderate AI system may only possess a few or a relatively narrow subset.

The measure of progress in AI in the coming years will be the pace at which additional elements from those lists are ticked off, as well as improvements in the level of competence in these areas of function.

At a more detailed level, the mental functions and mental processes of intelligence or intellectual capacity include:

  • Sentience — to be able to feel, to be alive and know it.
  • Sapience — to be able to think, exercise judgment, reason, and acquire and utilize knowledge and wisdom.
  • Ability, capability, and capacity to pursue knowledge (information and meaning.)
  • Sense the real world. Sight, sound, and other senses.
  • Observe the real world.
  • Direct and focus attention.
  • Experience, sensation.
  • Recognize — objects, plants, animals, people, faces, gestures, words, phenomena.
  • Listen, read, parse, and understand natural language.
  • Identification after recognition (e.g., recognize a face and then remember a name).
  • Read people — what information or emotion are they expressing or conveying visually or tonally.
  • Detect lies.
  • Take perspective into account for cognition and thought.
  • Take context into account for cognition and thought.
  • Adequately examine evidence and judge the degree to which it warrants beliefs to be treated as proof of strong knowledge.
  • Compare incoming information to existing knowledge, supplementing, integrating, and adding as warranted.
  • Understand phenomena and processes based on understanding evidence of their components and stages.
  • Assess whether a new belief is strong knowledge or weak knowledge.
  • Judge whether fresh knowledge in conjunction with accumulated knowledge warrant action.
  • Learn by reinforcement — seeing the same thing repeatedly.
  • Significant degree of self-organization of knowledge and wisdom.
  • Form abstractions as knowledge.
  • Form concepts as knowledge.
  • Organize knowledge into taxonomies and ontologies that represent similarities and relationships between classes and categories of entities.
  • Acquire knowledge by acquaintance — direct experience.
  • Acquire knowledge by description — communication from another intelligent entity.
  • Commit acquired knowledge to long-term memory.
  • Conscious — alert, aware of surroundings, and responsive to input.
  • Feel, emotionally.
  • Cognition in general.
  • Think — form thoughts and consider them.
  • Assess meaning.
  • Speculate.
  • Conjecture.
  • Theorize.
  • Imagine, invent, and be creative.
  • Ingenuity.
  • Perform thought experiments.
  • Guess.
  • Cleverness.
  • Approximate, estimate.
  • Fill in gaps of knowledge in a credible manner consistent with existing knowledge, such as interpolation.
  • Extrapolation — extend knowledge in a sensible manner.
  • Generalize — learn from common similarities, in a sensible manner, but refrain from over-generalizing.
  • Count things.
  • Sense correspondence between things.
  • Construct and use analogies.
  • Calculate — from basic arithmetic to advanced math.
  • Reason, especially using abstractions, concepts, taxonomies, and ontologies.
  • Discern and discriminate, good vs. bad, useful/helpful vs. useless, relevant vs. irrelevant.
  • Use common sense.
  • Problem solving.
  • Pursue goals.
  • Foresight — anticipate potential consequences of actions or future needs.
  • Assess possible outcomes for the future.
  • Exercise judgment and wisdom.
  • Attitudes that affect interests and willingness to focus on various topical areas for knowledge acquisition and action.
  • Intuition.
  • Maintain an appropriate sense of urgency for all tasks at hand.
  • Sense of the passage of time.
  • Sense of the value of time — elapsed, present value, and future value.
  • Understand and assess motivations.
  • Be mindful in thought and decisions.
  • Formulate intentions.
  • Decide.
  • Make decisions in the face of incomplete or contradictory information.
  • Sense of volition — sense of will and independent agency controlling decisions.
  • Exercise free will.
  • Plan.
  • Execute plans.
  • Initiate action(s) and assess the consequences.
  • Assess feedback from actions and modify actions accordingly.
  • Iterate plans.
  • Experiment — plan, execute, assess feedback, and iterate.
  • Formulate and evaluate theories of law-like behavior in the universe.
  • Intentionally and rationally engage in trial and error experiments when no directly rational solution to a problem is available.
  • Explore, sometimes in a directed manner and sometimes in an undirected manner to discover that which is unknown.
  • Ability and willingness to choose to flip a coin, throw a dart, or otherwise introduce an element of randomness into reasoning and decisions.
  • Discover insights, relationships, and trends in data and knowledge.
  • Cope with externalities — factors, the environment, and other entities outside of the immediate contact, control, or concern of this intelligent entity.
  • Adapt.
  • Coordinate thought processes and activities.
  • Organize — information, activities, and other intelligent entities.
  • Collaborate, cooperate, and compete with other intelligent entities.
  • Remember.
  • Assert beliefs.
  • Build knowledge, understanding (meaning), experience, skills, and wisdom.
  • Assess desires.
  • Assert desires.
  • Exercise control over desires.
  • Be guided or influenced by experiences, skills, beliefs, desires, intentions, and wisdom.
  • Be guided (but not controlled) by drives.
  • Be guided (but not controlled) by emotions.
  • Be guided by values, moral and ethical, personal and social group.
  • Adhere to laws, rules, and recognized authorities.
  • Selectively engage in civil disobedience, when warranted.
  • Recall memories.
  • Recognize correlation, cause and effect.
  • Reflection and self-awareness.
  • Awareness of self.
  • Know thyself.
  • Express emotion.
  • Heartfelt sense of compassion.
  • Empathy.
  • Act benevolently, with kindness and compassion.
  • Communicate with other intelligent entities — express beliefs, knowledge, desires, and intentions.
  • Form thoughts and intentions into natural language.
  • Formulate and present arguments as to reasons, rationale, and justification for beliefs, decisions, and actions.
  • Persuade other intelligent entities to concur with beliefs, decisions, and actions.
  • Judge whether information, beliefs, and knowledge communicated from other intelligent entities are valid, true, and worth accepting.
  • Render judgments about other intelligent entities based on the information, beliefs, and knowledge communicated.
  • Render judgments as to the honesty and reliability of other intelligent entities.
  • Act consistently with survival — self-preservation.
  • Act consistently with sustaining health.
  • Regulate thoughts and actions — self-control.
  • Keep purpose, goals, and motivations in mind when acquiring knowledge and taking action.
  • Able to work autonomously without any direct or frequent control by another intelligent entity.
  • Adaptability.
  • Flexibility.
  • Versatility.
  • Refinement — make incremental improvements.
  • Resilience — able to react, bounce back, and adapt to shocks, threats, and the unexpected.
  • Understand and cope with the nature of oneself and entities one is interacting with, including abilities, strengths, weaknesses, drives, innate values, desires, hopes, and dreams.
  • Maintain a healthy balance between safety and adventure.
  • Balance long-term strategies and short-term tactics.
  • Positive response to novelty.
  • Commitment to complete tasks and goals.
  • Respect wisdom.
  • Accrue wisdom over time.
  • Grow continuously.
  • Tell the truth at all times — unless there is a socially-valid justification.
  • Refrain from lying — unless there is a socially-valid justification.
  • Love.
  • Dream.
  • Seek a mate to reproduce.
  • Engage in games, sports, and athletics to stimulate and rejuvenate both body and mind.
  • Engage in humor, joking, parody, satire, fiction, and fairy tales, etc. to relax, release tension, and rejuvenate the mind.
  • Seek entertainment, both for pleasure and to rejuvenate both body and mind.
  • Selectively engage in risky activities to challenge and rejuvenate both body and mind.
  • Experience excitement and pleasure.
  • Engage in music and art to relax and to stimulate the mind.
  • Day dream (idly, for no conscious, intentional purpose) to relieve stress and rejuvenate the mind.
  • Seek to avoid boredom.
  • Engage in disconnected and undirected thought, for the purpose of seeking creative solutions to problems where no rational approach is known, or simply in the hope of discovering something of interest and significant value.
  • Brainstorm.
  • Refrain from illegal, immoral, or unfair conduct.
  • Resist corruption.
  • Maintaining and controlling a healthy level of skepticism.
  • Maintaining a healthy balance between engagement and detachment.
  • Accept and comprehend that our perception and beliefs about the world are not necessarily completely accurate.
  • Accept and cope with doubt.
  • Accept and cope with ambiguity.
  • Resolve ambiguity, when possible.
  • Solve puzzles.
  • Design algorithms.
  • Program computers.
  • Pursue consensus with other intelligent entities.
  • Gather and assess opinions from other intelligent entities. Are they just opinion, or should they be treated as knowledge?
  • Develop views and positions on various matters.
  • Ponder and arrive at positions on matters of politics and public policy.
  • Decide how to vote in elections.
  • Practice religion — hold spiritual beliefs, pray, participate in services.
  • Respond to questions.
  • Respond to commands or requests for action.
  • Experience and respond to pain.
  • Sense to avoid going down rabbit holes — being easily distracted and difficult to get back on track.
  • Able to reason about and develop values and moral and ethical frameworks.
  • Be suspicious — without being paranoid.
  • Engage in philosophical inquiry.
  • Critical thinking.
  • Authenticity. Thinking and acting according to a strong sense of an autonomous self rather than according to any external constraints, cultural conditioning, or a preprogrammed sense of self.

Clearly some of those capabilities are especially human and we wouldn’t normally insist that an intelligent machine be able to replicate all of them.

A normal, healthy human being would of course possess all of these abilities.

A machine may have any number of those capabilities. Exactly which fraction or subset constitutes strict intelligence per se is a matter of dispute — nobody really knows or even has a good idea of how to know, although there is no shortage of opinions.

For more reflections on intelligence, see my Questions about Thinking and Can Machines Think?.

Intelligent entity

  1. Either a person or an AI system, anything that embodies intelligence.

Beyond thinking

The main focus of AI is thinking and intelligence in general, which is great, but a lot of the real action these days is in… action, such as robots, driverless cars, etc., where thinking and intelligence are just the starting point, or at least not the end point.

Humans can do a lot more than just think:

  • Senses — access to experience the whole world.
  • Ability to communicate.
  • Ability to move.
  • Ability to manipulate.
  • Ability to build.
  • Ability to reproduce.
  • Ability to feel emotions.
  • Ability to sense social cues.

Granted, humans engage in a lot of other activities, but those are the ones relevant to AI systems.

Where exactly does intelligence begin and end? We’ll explore that a little, but ultimately it is an open or at least debatable question.

Mental processes

  1. All of the capabilities of the mind of an intelligent entity.
  2. All of the capabilities of intelligence.
  3. The mechanisms by which the capabilities of intelligence manifest themselves.
  4. Synonym for mental functions.

Mental functions

  1. Synonym for mental processes.

Take your pick which term you wish to use — they mean the same thing.

Artificial intelligent entity

  1. An intelligent entity that is not a person.
  2. Synonym for an AI system.

AI application

  1. A computer program that exhibits artificial intelligence.
  2. A computer program employing AI algorithms.
  3. A computer program exhibiting at least some of the capabilities of human intelligence.

AI system

  1. Any machine or object that is configured with an application that exhibits artificial intelligence.
  2. A computer program employing AI algorithms.
  3. A computer program or machine exhibiting at least some of the capabilities of human intelligence.
  4. Synonym for artificial intelligent entity.
  5. Synonym for AI application.

Turing test

In 1950, British mathematician and computer scientist Alan Turing published a paper on AI entitled Computing Machinery and Intelligence, introducing his famous Turing Test, which he referred to as the Imitation Game.

His formulation of the test/game differs a bit from what shows up in the popular media or even the folklore of computing, but the essence is the same: what questions would you ask to determine whether you are communicating with a real person or an AI machine?

Various strategies have been proposed, as well as counter-strategies to disguise the machine so that it imitates many of the non-rational qualities that we normally associate only with people.

There is no single, uniform Turing test. Even Turing discussed variations in his paper.

There have been various claims of machines passing a Turing Test, and indeed some have, but only for specific, narrow scenarios, not in any truly general sense. Every claim of passing brings a new wave of criticism.

A brief discussion of whether the Turing test has been passed can be found in Has The Turing Test Been Passed?.

In any case, the point is that it is an ongoing challenge to describe and assess what it really means to be intelligent, whether for a machine or even a person.

To be clear, the Turing Test is not a test of the quality or utility of an AI system per se, but strictly of whether the machine reaches human-level intelligence, which is a very high bar and not a requirement for AI systems to be quite useful in many everyday situations.

John Searle’s Chinese Room

Philosopher John Searle has argued that a computer program cannot have a mind, understanding, or consciousness in any human sense. His thought experiment, known as the Chinese room argument, asserts that a program accepting questions in Chinese and producing responses in Chinese could seem intelligent, and thus pass the Turing test, without actually understanding the Chinese language; it would only be simulating intelligence rather than actually having a mind.

It’s a reasonable argument, but it may rest on a semantic distinction that is not definitive. It may simply be that such a program cannot give full and meaningful responses to all questions unless it actually does understand the Chinese language. As such, the argument may really offer a test of whether a given program is full Strong AI, rather than of whether full Strong AI is a valid measure of full human-level intelligence.

And it may all be moot until such a day as somebody actually fields an AI system with full Strong AI.

Knowledge, information, belief, bias, meaning, and understanding

Information

  1. Raw data, details, measurements, or facts that have some literal association with some characteristic of the source of that information, exclusive of any deeper meaning or deeper association or relationships as with knowledge.
  2. Experience. All aspects of an experience.
  3. Synonym for fact.

Meaning

  1. The association between information and what it represents.
  2. Literal, surface, or shallow meaning, such as dictionary meaning or association between a symbol and one or more objects, concepts, people, or phenomena.
  3. Contextual significance. Deeper meaning.
  4. Personal significance. Deeper meaning.
  5. Emotional significance. Deeper meaning.
  6. Social significance. Deeper meaning.
  7. Connotations. Deeper meaning.
  8. Relationships, associations, or connections to other information or knowledge.

Knowledge

  1. Information and its associated deeper meaning and relationships to other knowledge.
  2. A conclusion drawn from analyzing information.
  3. A belief that rises to the level of strong justification and empirical validation. Justified True Belief — JTB.
  4. A belief accepted as truth even though any justification is weak or even nonexistent.
  5. A belief accepted as truth as a result of deeper meaning that resonates significantly with the intelligent entity even though any justification is weak or even nonexistent.
  6. Know-how such as processes, steps, sequences, connections, and rules needed to accomplish some task, to reach some goal, or to cope with some phenomenon.
  7. Assumptions.
  8. Definitions.
  9. Experience. All aspects of an experience, especially deeper meaning and context.
  10. Synonym for concept.
  11. Synonym for abstraction.
  12. Synonym for information.

The essential distinction here between information and knowledge is that true knowledge includes deeper meaning, beyond the superficial, literal meaning of information.
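One way to picture that distinction is as a hypothetical data structure in which knowledge wraps information with deeper meaning and links to related knowledge; the class and field names here are purely illustrative, not any established representation:

```python
# Illustrative sketch: knowledge = information + deeper meaning
# + relationships to other knowledge. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Information:
    fact: str  # literal, surface-level content

@dataclass
class Knowledge:
    info: Information   # the underlying information
    meaning: str        # deeper, contextual significance
    related: list = field(default_factory=list)  # links to other Knowledge

boiling = Information("water boils at 100 C at sea level")
k = Knowledge(boiling, "cooking times change at altitude")
print(k.info.fact)     # the shallow information survives inside
print(k.meaning)       # the deeper meaning is what makes it knowledge
```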

Belief

  1. A proposition in which one has at least some fair level of confidence.
  2. A conclusion drawn by reasoning about information or thought.
  3. A conclusion drawn by casual and possibly loose or sloppy thought, devoid of significant and sound reasoning, about information or thought.
  4. A conclusion accepted as a matter of faith or trust in some external intelligent entity.
  5. A conclusion accepted due to bias.
  6. A mental construct with a highly-variable level of confidence formed as a result of thinking, frequently but not exclusively in conjunction with sensory perception.
  7. Gossip.

The point here is that a belief may or may not be true or carefully reasoned.

Belief can be useful, but falls short of true knowledge. As indicated, a belief can be treated as knowledge, but it would be a weaker form of knowledge.

Bias

  1. A willingness to accept a belief as knowledge for nonrational reasons such as prejudice and personal preference even though there is no robust evidence or other rational justification for doing so.

Understanding

  1. Possessing sufficient knowledge to fully comprehend a phenomenon, entity, matter, or topic.
  2. Knowledge possessed about a phenomenon, entity, matter, or topic.
  3. The degree of comprehension of a phenomenon, entity, matter, or topic.

Information (and knowledge) can be discrete facts or complex images, sounds, smells, textures, and any other sensory data, as well as conjectured beliefs that are supported by strong reasoning and empirical validation.

The goal is certainly that knowledge be as robust as possible, a full understanding of a phenomenon, entity, matter, or topic, but sometimes dubious beliefs are tolerated due to lack of sufficient information or to the degree that they agree with or confirm existing knowledge or bias.

Basic facts, facts, and conclusions

Basic fact:

  1. Simple information or knowledge, primarily from direct observation, measurement, relatively simple calculation, or a shared defined truth, that is considered rather obvious and uncontroversial.
  2. Opposite of conclusion.
  3. Opposite of reasoning. Not inconsistent with reasoning, but doesn’t require reasoning.

Fact:

  1. Synonym for basic fact.
  2. A conclusion that is considered relatively obvious and uncontroversial.
  3. A consensus truth. A belief that is considered very important.
  4. Synonym for conclusion.

Alas, not every group or individual will concur with the view that a purported fact is obvious or agree with the analysis or reasoning that produced it.

Conclusion:

  1. A belief based on some reasoning, rationale, or for some important purpose.
  2. Synonym for fact.

A conclusion may be treated as fact, but is not exactly the same as a fact, and is certainly not a basic fact.

There will generally be some level or degree of confidence in a conclusion. Zealots may presume absolute confidence, while realists will be able to express their level of confidence.

AI systems should assess the degree of confidence in every conclusion. This depends on:

  • Quality of input data. Potential for flukes, fluctuations, anomalies, sensitivity of sensors, and bad sensors.
  • Quality of assumptions.
  • Quality of reasoning.
  • Potential for bugs in software.
  • Potential for hacking.

In the final analysis, an AI system should be able to assess a conclusion (or any fact as well) as:

  • Weak. Some concern about uncertainty.
  • Moderate. Reasonably certain.
  • Strong. Very certain.
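As a hypothetical sketch, the quality factors above could be combined into a single score and mapped to those three levels; the factor names, equal weighting, and numeric thresholds here are illustrative assumptions, not a standard:

```python
# Illustrative sketch: map combined quality factors to the three
# confidence levels. Weights and cutoffs are assumptions.

def assess_conclusion(factors):
    """factors: dict of quality scores in [0, 1] covering input data,
    assumptions, reasoning, software, and security."""
    score = sum(factors.values()) / len(factors)  # simple equal weighting
    if score >= 0.8:
        return "Strong"    # very certain
    if score >= 0.5:
        return "Moderate"  # reasonably certain
    return "Weak"          # some concern about uncertainty

example = {
    "input_data": 0.9, "assumptions": 0.8, "reasoning": 0.85,
    "software": 0.9, "security": 0.8,
}
print(assess_conclusion(example))  # Strong
```

A real system would likely weight the factors unequally (a single bad sensor or a hacked input could dominate), but the shape of the assessment is the same.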

Some basic facts, especially in science, may require complex calculation, modeling, or some degree of reasoning to produce a result. Technically, such complex results should probably not be considered basic facts per se, but… it gets complicated, and can get very confusing.

The developers of AI systems should have a clear model in their heads of these three distinct but related concepts, and how their AI system will assess beliefs, information, and knowledge relative to them.

Fact pattern

  1. An abstract collection of facts representing a variety of similar matters or situations.
  2. A generalization of a number of specific fact patterns.
  3. A collection of all of the facts relevant to a particular matter or situation.
  4. The particular details of an instance of a general fact pattern.

A fact pattern can be general or specific.

A specific fact pattern represents every detail of interest in a particular matter or situation. The specifics.

A general fact pattern expresses the ways in which a variety of matters or situations are similar. In other words, what common pattern is present in all of the matters or situations under consideration.

The degree of similarity required for a specific fact pattern to match a general fact pattern is a matter of judgment. It could be:

  • Literal. An exact match.
  • Semi-literal. A close match.
  • Fuzzy match. A vague but moderately close match, such as in spell check.

Matching fact patterns can be a very complex process. Matching is discussed elsewhere in this paper.
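For illustration, the three matching levels might be approximated with the standard-library difflib module, which computes a similarity ratio between two sequences (the kind of measure a spell checker might use); the cutoff values here are assumptions, not established standards:

```python
# Illustrative sketch of literal, semi-literal, and fuzzy matching
# using difflib. The 0.9 and 0.6 cutoffs are arbitrary assumptions.

from difflib import SequenceMatcher

def match_level(specific, general):
    if specific == general:
        return "literal"       # an exact match
    ratio = SequenceMatcher(None, specific, general).ratio()
    if ratio >= 0.9:
        return "semi-literal"  # a close match
    if ratio >= 0.6:
        return "fuzzy"         # vague but moderately close
    return "no match"

print(match_level("dog bites man", "dog bites man"))  # literal
print(match_level("dog bites men", "dog bites man"))  # semi-literal
print(match_level("dog bit a man", "dog bites man"))  # fuzzy
```

Real fact patterns are structured (entities, relationships, context), not flat strings, so real matching is far more complex, but the graded levels carry over.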

Proposition

  1. Synonym for statement. Independent of its truth.
  2. Synonym for conjecture.
  3. Synonym for proposal.
  4. Synonym for assertion.

Generally, a proposition is a proposal to be evaluated as to whether it might be true.

Statement

  1. Synonym for proposition.
  2. Synonym for assertion.

Assertion

  1. Proposition held to be true, possibly without reasoning being given.
  2. Proposition that is assumed.
  3. Proposition that is required to be true.

Opinion

  1. A proposition or preference that has some appeal or subjective value to an intelligent entity.
  2. An alleged fact or conclusion. Without some degree of consensus, it is just… opinion.
  3. Personal perspective on some matter.
  4. Speculation on some matter or about a proposition.
  5. Synonym for personal preference.

Generally, an AI system will not have an opinion on any matter, only facts and available knowledge, but more advanced AI systems may have algorithms to produce speculative opinions and views on matters of interest.

View

  1. Elaboration for an opinion.
  2. Synonym for opinion.

Justified true belief (JTB) — knowledge vs. belief

As a general matter, intelligence is based on knowledge, that which is believed to be true. The problem is that beliefs are not always true. An intelligent entity needs to progress confidently, with some sense of certainty that its beliefs are true.

Successfully making the leap from mere belief to robust knowledge is an ongoing challenge.

Justified true belief (JTB) is a traditional approach to making that leap. It is not a perfect process, but it is the heart of knowledge acquisition. The belief part is easy. The two catches are to have a sufficient justification to believe in the belief and some sort of real-world validation that the belief is in fact true.

The two big difficulties with JTB are that confidence in the justification can come from unwarranted bias and the difficulty of validating the belief in the real world in a robust manner.

The bottom line is that although the goal is to work with sound knowledge, there is rarely absolute certainty whether knowledge is truth or merely belief.

In any case, a belief is considered truth within a given intelligent entity if sufficient evidence and reasoning warrants such a belief.

In other words, belief becomes knowledge when it has a warrant — sufficient justification.

JTB as a tool

The main point about JTB is not that it validates knowledge in some absolute sense per se, but that it is a tool to facilitate evaluation of information and knowledge.

Strong knowledge vs. weak knowledge

A key task for any intelligent entity is to form some judgment as to confidence in every belief.

If confidence in the justification is high enough, a belief could be considered strong knowledge.

If confidence in the justification is not so high, a belief could be considered weak knowledge.
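
As a rough illustration, this confidence judgment can be sketched in code. The threshold values here are purely hypothetical assumptions, not standards from the AI literature:

```python
# Hypothetical sketch: labeling a belief as strong or weak knowledge
# based on confidence in its justification. The thresholds are
# illustrative assumptions only.

def classify_belief(confidence: float) -> str:
    """Label a belief by the confidence in its justification."""
    if confidence >= 0.9:        # high confidence: strong knowledge
        return "strong knowledge"
    if confidence >= 0.6:        # moderate confidence: weak knowledge
        return "weak knowledge"
    return "conjecture"          # low confidence: mere conjecture

print(classify_belief(0.95))  # strong knowledge
print(classify_belief(0.70))  # weak knowledge
print(classify_belief(0.30))  # conjecture
```

Where exactly the thresholds sit is a judgment call for the designers of the system.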


Truth:

  1. The truth of existence — reality as it exists, regardless of our knowledge or perception.
  2. Truth of propositions — is a statement or belief true or not.
  3. Ultimate or universal truth — the actual truth of any matter, regardless of our knowledge or perception.
  4. Consensus or social truth — what a group believes to be true.
  5. Religious truth. Beliefs and dogma for a specific religion or even a sect of a religion.
  6. Objective truth. Synonym for ultimate truth.
  7. Subjective truth. Each individual or group is entitled to their own perception of the truth of a matter.
  8. Defined truth. Mathematical systems, laws, government, rules, definitions.
  9. God’s knowledge, God’s truth. The ultimate truth, presuming the deity exists and is truly omniscient.

For a more complete list of the various forms or domains of truth, see my paper entitled Domains of Truth.

A key challenge for any AI system, as well as any human social system, is to decide the standards of justification that should be used to judge the truth of any matter.

Eternal truth or universal truth

  1. A truth that is true in all places, for all situations, for all time.
  2. The great quest of philosophers.
  3. The opposite of subjective truth.
  4. In contrast to the provisional nature of knowledge.
  5. Synonym for objective truth, in the extreme.

AI is more focused on practical knowledge relevant to the tasks at hand.

Veracity, credibility, misinformation, disinformation, propaganda, and fake news


Veracity:

  1. Information consistent with facts and accuracy.
  2. Synonym for honesty. A reliable source of information.


Credibility:

  1. Confidence in the veracity of information.
  2. Degree of confidence in the veracity of information.
  3. Sense of trust and believability in a source of information.
  4. Degree of trust and believability in a source of information.


Misinformation:

  1. False or inaccurate information. Independent of any intention to deceive or mislead.
  2. Synonym for disinformation.
  3. Synonym for false information.


Disinformation:

  1. False or inaccurate information that is deliberately intended to deceive or mislead.
  2. Synonym for propaganda. But disinformation is not strictly political in nature.


Propaganda:

  1. Information presented in pursuit of a political objective, frequently but not necessarily false or at least misleading or biased.
  2. See also disinformation. Similar, but propaganda is usually strictly political in purpose.
  3. See also computational propaganda.

False information:

  1. Synonym for misinformation.
  2. Synonym for disinformation.

Fake news:

  1. Disinformation presented in the form of news.

The issue for AI systems is that incoming data, information, and knowledge must be evaluated as to its veracity. The bad news is that it may not be possible to robustly and reliably verify the veracity of information. The intelligent entity may simply have to settle for assigning a probability or range of probabilities for how truthful the information might be.

Depending on the purpose for which incoming information will be used, it may or may not be useful to assess how intentional any lack of veracity might be. For example, in addition to merely rejecting any false information, the intelligent entity might wish to build up a knowledge base which assesses the credibility of a source of information.
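
One way to sketch such a credibility knowledge base is to update a per-source credibility estimate with Bayes' rule as the source's claims check out or fail. The prior and likelihood values here are illustrative assumptions:

```python
# Hypothetical sketch: updating the credibility of an information source
# with Bayes' rule as its claims are verified or refuted. The default
# likelihoods are illustrative assumptions, not empirical values.

def update_credibility(prior: float, claim_true: bool,
                       p_true_if_credible: float = 0.9,
                       p_true_if_not: float = 0.4) -> float:
    """Posterior probability that the source is credible."""
    if claim_true:
        likelihood_credible = p_true_if_credible
        likelihood_not = p_true_if_not
    else:
        likelihood_credible = 1 - p_true_if_credible
        likelihood_not = 1 - p_true_if_not
    numerator = likelihood_credible * prior
    return numerator / (numerator + likelihood_not * (1 - prior))

credibility = 0.5                                          # start undecided
credibility = update_credibility(credibility, claim_true=True)   # a claim verified
credibility = update_credibility(credibility, claim_true=False)  # a claim refuted
print(round(credibility, 3))  # 0.273
```

Each verified claim nudges credibility up; each refuted claim pushes it down, faster when the two likelihoods are far apart.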

Computational propaganda

  1. Use of algorithms and automation to produce and disseminate propaganda.

Rather than manually writing and posting misleading content on the Internet, algorithms and automation can be used to generate carefully crafted propaganda messages and then distribute them widely and intelligently, in a way that belies the automated nature of the process, so that people are fooled into believing that they are receiving legitimate, valid, and even personalized messages.

AI systems can be involved with computational propaganda in three ways:

  1. Generation and dissemination of propaganda messages, including creativity in the generation process as well as personalizing based on the target.
  2. Detecting and highlighting or discarding messages identified as being propaganda.
  3. Analyzing computational propaganda or even traditional propaganda to determine the nature and sentiment of its content, and to even identify its source and author.

One would hope that AI systems can be used to enlighten the world, but any technology can be used for negative purposes as well.

Data cleaning

Data and information coming into an AI system cannot simply be blindly assumed to be valid and correct. A process known as data cleaning is needed to ensure that data and information are as valid as possible. Issues to be addressed include:

  • Misinformation
  • Disinformation
  • Values outside of acceptable range
  • Business rules for acceptable values
  • Outdated
  • Improperly categorized
  • Improperly or poorly formatted
  • Inaccurate
  • Spelling errors
  • Missing
  • Corrupted
  • Incomplete
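
A few of these checks can be sketched as a simple cleaning pass. The field names and acceptable ranges are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch of a data-cleaning pass covering a few of the
# issues above: missing values, out-of-range values, and bad formatting.
# Field names and business rules are illustrative assumptions.

def clean_record(record: dict) -> tuple[dict, list[str]]:
    """Return a cleaned copy of the record plus a list of problems found."""
    problems = []
    cleaned = dict(record)

    # Missing or incomplete data
    if not cleaned.get("name"):
        problems.append("missing name")

    # Improperly or poorly formatted data
    if isinstance(cleaned.get("name"), str):
        cleaned["name"] = cleaned["name"].strip().title()

    # Values outside of acceptable range (a business rule)
    age = cleaned.get("age")
    if age is not None and not (0 <= age <= 130):
        problems.append("age out of range")
        cleaned["age"] = None

    return cleaned, problems

record, issues = clean_record({"name": "  ada lovelace ", "age": 250})
print(record)   # {'name': 'Ada Lovelace', 'age': None}
print(issues)   # ['age out of range']
```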

If an AI system fails to adequately clean incoming data and information, a GIGO condition may occur within the AI system and any results or output of the AI system. GIGO stands for Garbage In, Garbage Out. Not good.

In addition to cleaning incoming data and information, it may be periodically necessary to re-clean the system’s knowledge base since the rules for clean data may have changed since the data and information was originally input or since the last check of the knowledge base.

All knowledge is provisional

In contrast to eternal or universal truth, knowledge is inherently provisional — we may have every reason to believe that something is true today, but tomorrow or 100 years from now the situation may have changed and there may be good reason to challenge the old rationale in favor of a new and improved rationale.

It may be too bold to assert that absolutely all knowledge is inherently provisional, but it is fair and safe to observe that it is not uncommon for knowledge to be revised from time to time and that there is significant risk in assuming that any given knowledge is eternal or universal truth.

The catch is that we can never know when and how knowledge will need to be revised.

This presents a great challenge for AI. The good news is that knowledge tends to be revised only slowly, possibly even more slowly than the entire lifetime of a typical AI system, such as the product lifetime for a typical model of smart phone or the upgrade release cycle for a web site.


Conjecture

If confidence in some belief is uncertain, the belief could be considered a conjecture, neither necessarily true nor false but simply a possibility.

One reason for considering conjectures as knowledge is that a situation may later develop where confidence in the conjecture dramatically improves due to the availability of additional information. But until such information becomes available, the conjecture is essentially useless in terms of utility for factoring into decisions.

Conjectures can also be tested using thought experiments.

Thought experiments

  1. An experiment, such as evaluating a conjecture, carried out entirely in the mind and imagination of an intelligent entity.

The point of a thought experiment is to be able to test a conjecture without the cost, energy, time, or risk involved with carrying out the experiment in the real world.

Even traditional AI systems have employed thought experiments, such as evaluating moves in a game before formally making the move.

More advanced AI systems will perform thought experiments at a much grander level, even to the point of simulating elaborate actions in entire virtual worlds.
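
The game-move case can be sketched as a minimal lookahead: simulate each candidate move mentally, score the resulting state, and only then commit. The toy state, simulation, and scoring function here are assumptions for illustration:

```python
# Hypothetical sketch of a game-playing "thought experiment": evaluating
# candidate moves by simulating their outcomes before committing to one.
# The game state, moves, and scoring function are illustrative assumptions.

def best_move(state: int, moves: list[int]) -> int:
    """Mentally apply each move and pick the one with the best outcome."""
    def simulate(state: int, move: int) -> int:
        return state + move          # stand-in for a real game simulation

    def score(state: int) -> int:
        return -abs(state - 10)      # prefer states close to a goal of 10

    return max(moves, key=lambda m: score(simulate(state, m)))

print(best_move(4, [1, 5, 9]))  # 5, since 4 + 5 lands exactly on the goal
```

No move is actually made until every alternative has been tried out in imagination — which is the essence of a thought experiment.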

Literal, shallow, or surface meaning vs. deep meaning

Just to reemphasize a subtle distinction in the definition of meaning given above, there is an almost categorical distinction between literal, surface, or shallow meaning and deep meaning.

Literal meaning can be as simple as the association between the name of an object and the object itself.

A dictionary meaning is rather shallow, telling you the definition of a word, but not its full significance.

Deep meaning focuses on the significance of something:

  1. Contextual significance
  2. Personal significance
  3. Social significance
  4. Emotional significance
  5. Connotations
  6. Implications
  7. Consequences
  8. Relationships to other concepts

Objective vs. subjective knowledge

Objective knowledge should be true across all intelligent entities.

Subjective knowledge can vary between intelligent entities.

Generally, objective knowledge should agree with the real world. Obviously that is the goal, but achieving that goal is problematic. Perception can be deceptive and reasoning can be misleading.

Generally, a given AI system need not concern itself with whether its own knowledge agrees with that of other intelligent entities.

Generally, use of subjective knowledge should be limited to matters related to personality, personal preferences, and personal abilities.

It can be very problematic when collaborating or cooperating intelligent entities communicate with subjective knowledge when the entities use different standards of objectivity.

Clearly, users will be rather dismayed if an AI system reports information or takes actions that do not reflect the real world.

An AI system should endeavor to maintain an assessment or record of the degree of subjectivity or objectivity for every bit of knowledge that it maintains or handles.


Consensus:

  1. More than one intelligent entity coming to agreement on some matter of beliefs, knowledge, decision, or course of action.

Perceptions and novel beliefs and theories can be illusory. Having more than one intelligent entity consider the same basic facts and reasoning increases the probability that any flaws in perception or reasoning can be overcome. No guarantee, but less risk.

In science they call this peer review.

The downside for AI is that two AI systems with the same software and training will likely agree and are unlikely to bring anything new to the table from what they can do alone.

The real potential for consensus with AI comes when distinct AI systems go through independent training and learning such that there is real diversity in their knowledge bases.
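
A crude analogy from machine learning is ensemble voting: several independently developed systems can outvote one system's error, but only if they are genuinely diverse. The stand-in "models" below are hypothetical functions:

```python
# Hypothetical sketch: consensus among independently trained AI systems
# via majority vote. The "models" here are stand-in functions; the point
# is that diverse systems can outvote an individual system's error.

from collections import Counter

def consensus(models, question):
    """Return the answer most models agree on."""
    answers = [model(question) for model in models]
    return Counter(answers).most_common(1)[0][0]

# Three stand-in systems with diverse (and imperfect) knowledge bases.
model_a = lambda q: "yes"
model_b = lambda q: "yes"
model_c = lambda q: "no"    # a flawed perception, outvoted by the others

print(consensus([model_a, model_b, model_c], "is the sky blue?"))  # yes
```

If all three models shared the same software and training, they would share the same errors, and the vote would add nothing — which is the downside noted above.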

Knowledge artifacts

Knowledge is an abstract construct, devoid of any material existence. Yes, we can write, speak, record, perform, illustrate, and play various forms of knowledge, but these are artifacts or representations of the knowledge rather than the knowledge itself.

An AI system must work with these artifacts of knowledge, reverse engineering them to deduce or infer the knowledge they contain.

Similarly, an AI system must transform knowledge into representations or artifacts in order to communicate with the real world.

Knowledge representation

Knowledge is the heart of the matter, both for AI and human intelligence. How knowledge is represented is a key issue. This informal paper will not explore this issue in great depth, but simply highlight the matter.

Clearly, knowledge in the human mind is represented by connected neurons in some sense. Experts have some knowledge of the matter, but mysteries remain.

Developers of AI software have a wide range of software data structures to choose from, including databases with various indexing and searching capabilities.

A variety of data formats and markup languages (HTML, RDF, XML) are available for transferring knowledge between computer programs, whether they are AI or not.

In truth, many of these representations are more information representations than true knowledge representations, and any deeper meaning will have to be represented as additional information.

Semantic Web

The Semantic Web is an extension of the Web to permit explicit representation of knowledge and relationships. While HTML is a very decent language for text and visual content, it lacks the semantic power needed for structured information. The Semantic Web provides that semantic description power.

In truth, the Semantic Web is focused on information rather than knowledge per se — it has no concept of human-level meaning. But, it’s a step forward and does a very decent job for traditional structured information.

There are numerous components to the Semantic Web, but the most significant are:

  • XML — markup language for information. Also XHTML, combining HTML and XML.
  • RDF — model for information of arbitrary complexity.
  • Resource — the fundamental unit of information in RDF.
  • Triples — the fundamental unit of structuring information and relationships in RDF.
  • Graphs — interconnected relationships between triples used to represent information structures of arbitrary complexity.
  • Triplestore — a specialized database optimized for storing, accessing, and querying RDF triples (graphs).
  • RDF/XML — the format for expressing RDF triples in XML.
  • RDFa — technique for embedding RDF information in traditional HTML web pages.
  • OWL — markup language for expressing ontologies or relationships between entities.
  • SPARQL — SQL-like query language for information expressed in RDF triples.

Entities in RDF are referred to as resources, which could be documents, objects, persons, places, or any concept you can imagine.

RDF resources are uniquely identified by URI identifiers, which are very similar to traditional Web URLs.

An information triple is a unit of information consisting of three components, referred to as S, P, and O. The subject and predicate are RDF URI identifiers, while the object may be a URI or a literal value:

  • Subject (S) — the identity of the object whose information is being represented.
  • Predicate (P) — the name of the object attribute or operator (predicate) for this piece of information.
  • Object (O) — the value or target of the attribute or relationship being represented.

For example, if my car is blue, the three components of the (oversimplified) triple would be:

  1. Subject (S): my car
  2. Predicate (P): color
  3. Object (O): blue

The graph model of RDF permits a triple to be referenced as the object in another triple. This model supports simple and complex lists, hierarchies, and all manner of relationships between entities. This may not be enough for human-level knowledge, but is sufficient for all traditional computing information structures.
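
A toy in-memory triplestore can make the model concrete. This sketch uses plain strings where a real RDF store would use URIs, and a None wildcard where SPARQL would use query variables:

```python
# Hypothetical sketch of a minimal in-memory triplestore: triples are
# (subject, predicate, object) tuples, and a query matches patterns in
# which None acts as a wildcard. Real systems use URIs and SPARQL engines.

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the (s, p, o) pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

triples = [
    ("my_car", "color", "blue"),
    ("my_car", "type", "sedan"),
    ("your_car", "color", "red"),
]

print(query(triples, s="my_car"))          # both facts about my_car
print(query(triples, p="color", o="blue")) # [('my_car', 'color', 'blue')]
```

Querying with a subject alone plays the role of asking the graph for everything known about that resource.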

The great, but as yet unrealized, hope had been that the Semantic Web would spawn a wide variety of intelligent agents, all communicating and collaborating with each other through the medium of the Semantic Web. There is still hope, but progress has been slow and in niches rather than uniformly across all of computing.

To be clear, the Semantic Web is an approach to representing information in a semantically rich format, rich in a traditional computing sense, but well short of the semantic richness of human knowledge. Still, it does fall into the category of knowledge representation.

The Semantic Web is most relevant to Weak AI and Moderate AI. It may have some role in Strong AI, but is not sufficient for that task at the present time.

Evidence and proof


Evidence:

  1. Any information that purports or appears or seems to support a belief that some assertion or proposition is true.


Proof:

  1. Any evidence which supports some assertion so strongly as to defy any efforts to reject the assertion.

Discerning exactly where that line between mere evidence and absolute proof lies is of course one of the central tasks of intelligence. Frequently there is no easy answer, even for experts.

This is a major challenge for all but the most simple AI systems.


Skepticism:

  1. Resistance to over-eager acceptance of information as truth and knowledge.
  2. Measured approach to accepting a thought, idea, concept, or belief as knowledge.
  3. Philosophy that there cannot be any absolute certainty about any knowledge.

Even AI systems and robots need to have a sense of skepticism, to be aware that data and images can be flukes, aberrations, shadows, mistakes, illusions, or even deliberate efforts to deceive.

Nobody wants a gullible robot.


Certainty:

  1. Degree of confidence or comfort in a proposition, belief, information, or knowledge.
  2. Firm conviction in a proposition, belief, information, or knowledge.
  3. Degree of technical accuracy for a measurement or observation.
  4. Synonym for margin of error.
  5. Defined truth. Defined systems such as law, government, nomenclatures, and mathematical systems.
  6. Synonym for absolute certainty. Must check context to assess whether certainty is used in a general sense or in absolute form.

Assessing certainty is an important function in an AI system.

Degree of certainty may be a technical estimate or a psychological degree of comfort.

The designers of an AI system will have to make a judgment call as to what level of certainty will be required for belief and truth.


Experts

Intelligent entities of mere average intellect may be excused if they don’t know everything about everything. Experts are individual intelligent entities who are expected to have a virtually encyclopedic knowledge of some relatively narrow domain of expertise.

As smart and qualified as an expert may be, there are two difficulties:

  • There may be gaps in their knowledge.
  • They may be wrong about some particular matter.

Expert systems are attempts to develop and program AI systems to mimic the knowledge and expertise of selected human experts. This will be discussed more later.

Is anything unknowable?

This is more of a philosophical question. From a practical perspective, it is merely a matter of what information can be obtained with a practical expenditure of resources. Something would be considered unknowable if:

  • It is not known how to obtain the information.
  • It is not known how to assess the information.
  • It would be beyond our intellectual capacity to grasp the meaning of the information.
  • It would consume too many resources or too much time.
  • Pursuing it would distract attention from matters which have a higher priority.
  • It is beyond our ability, capability, or capacity to obtain the information.
  • Even if known, it may be beyond our ability to communicate it.

Is anything knowable with certainty?

It is an open question and matter of debate whether anything can be known with absolute certainty. Generally, other than trivial basic facts, I would say no, but in some specialized cases, such as mathematical proofs or personal preferences, we can certainly know with no sense of doubt.

Short of absolute certainty, we know many things with varying degrees of certainty.


Notion:

  1. Informal reference to a thought, idea, or concept. No sense of formality or specificity.
  2. Synonym for something.
  3. Anything that could be present in the mind.


Matter:

  1. A situation or proposition under consideration, such as for a decision.
  2. Physical object or substance.

This paper focuses on that first sense.


Situation:

  1. The context and details of a circumstance under consideration.

The primary focus is on the context rather than any object of focus which is situated in that context.


Concept:

  1. A shorthand to refer to the essential meaning or purpose of a class of entities or phenomena.
  2. A shorthand to refer to a larger whole as representing the assembly of its constituent parts.
  3. An organized and formalized notion with associated deep meaning developed from thought. Beyond an unorganized thought or rough idea.
  4. Synonym for abstraction.

Note the three distinct meanings of concept:

  • Class of similar entities.
  • Parts of a larger whole.
  • A formalized notion.


Abstraction:

  1. The process of identifying common qualities of distinct entities to form an umbrella concept, generalization, or category that represents all entities which share those qualities.
  2. The process of formulating concepts from concrete instances of entities or phenomena.
  3. A category or generalization that represents a class of entities or phenomena that are similar in some way.
  4. The process of creating the concept of a larger whole as representing the assembly of its constituent parts.
  5. The association of parts with a larger whole.
  6. Synonym for concept, except for the case of a single notion.

Note the two distinct meanings of abstraction:

  • Class of similar entities.
  • Parts of a larger whole.

Abstraction can be used to refer to the process of producing concepts or abstractions and the concepts and abstractions themselves. Basically as both a verb and a noun.

Thought vs. idea vs. concept

The notions of thoughts, ideas, and concepts are closely related. They are three stages in the process of forming knowledge from purely mental processes rather than based on sensory input from the outside world.


Thought:

  1. An unorganized notion that suddenly pops into the conscious mind from somewhere in the subconscious mind, generally prompted by either thinking or sensory awareness.


Idea:

  1. A partially organized notion that results from considering a thought. Recognition that a thought has some potential value.
  2. Synonym for thought, loosely.


Concept:

  1. A fully organized notion that results from fully and carefully considering a thought. Belief that the thought has real value.
  2. Synonym for abstraction.

Current AI systems generally need to be pre-populated with all relevant concepts and have no conception of the kind of deep thinking required for ideation and development of concepts. Deep Learning is an exception, but even there a substantial base of concepts must be present before training or learning begins.


Taxonomy:

  1. A hierarchical system of organizing concepts, entities, or phenomena based on their degree of similarity or common characteristics.
  2. A nested hierarchy of classes or categories that represents the degrees of similarity or common characteristics of entities in those classes or categories.


Category:

  1. Organizing concepts or entities based on common characteristics.
  2. Synonym for abstraction.
  3. Synonym for class.

For knowledge in general, there is no significant distinction between category and class.


Class:

  1. Organizing concepts or entities based on common characteristics.
  2. Synonym for abstraction.
  3. Synonym for category.

For knowledge in general, there is no significant distinction between category and class.


Ontology:

  1. A system for representing the functional relationships between concepts, entities, or phenomena.
  2. A system for representing the interconnected relationships of all concepts, entities, and phenomena that exist, either in the entire world or universe or some smaller area of interest.

Taxonomy recognizes similarities while ontology recognizes functional relationships between entities.
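
That distinction can be sketched with two toy data structures, one for is-a similarity and one for functional relationships. The particular classes and relations are illustrative assumptions:

```python
# Hypothetical sketch contrasting a taxonomy (an is-a hierarchy based on
# similarity) with an ontology (functional relationships between entities).
# The classes and relations are illustrative assumptions.

# Taxonomy: nested classes capture degrees of similarity.
taxonomy = {
    "animal": ["mammal", "bird"],
    "mammal": ["dog", "cat"],
    "bird": ["sparrow"],
}

def is_a(child: str, ancestor: str) -> bool:
    """True if child falls under ancestor anywhere in the hierarchy."""
    children = taxonomy.get(ancestor, [])
    return child in children or any(is_a(child, c) for c in children)

# Ontology: functional relationships between entities, as triples.
ontology = [
    ("dog", "chases", "cat"),
    ("cat", "hunts", "sparrow"),
]

print(is_a("dog", "animal"))                 # True: dog -> mammal -> animal
print(("dog", "chases", "cat") in ontology)  # True
```

The taxonomy only says what kind of thing an entity is; the ontology says what it does in relation to other entities.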


Intuition:

  1. The ability to arrive at a conclusion without the use of explicit, conscious reasoning.

Intuition allows people to respond quickly to situations where there may not be sufficient time or information to carefully reason to a conclusion.

The exact mental process behind intuition remains a mystery.

Someday machines could be capable of intuition, but that is not the current situation.


Sentience:

  1. Ability to feel.
  2. To be alive.
  3. To be alive — and know it.
  4. To be responsive to environmental stimulus.
  5. Not to be confused with sapience.

Nonhuman animals can be considered sentient.

Not all AI systems are sentient, but those that depend on awareness of their surrounding environment, such as through sensors, can be said to be sentient, at least to some degree.


Sapience:

  1. Ability to think, exercise judgment, reason, and acquire and utilize knowledge and wisdom.
  2. Wisdom.
  3. Wise.
  4. Special quality of Homo Sapiens.
  5. Frequently confused with sentience.

The species of man is Homo sapiens, meaning wise man in Latin.

Whether advanced AI systems possess sapience will be a matter of debate. Most weaker AI systems clearly do not possess wisdom in any significant sense.

Mere possession or use of knowledge does not qualify as sapience either. It is the ability to take mere knowledge and combine it with significant judgment and reason, verging on outright wisdom, that qualifies an intelligent entity as being sapient.

Technically, one could argue that an entity is not a truly intelligent entity unless it possesses the quality of sapience.

Sapient entity

  1. An intelligent entity, capable of wisdom — sapience.
  2. A person or an intelligent machine, robot, or AI system.

Higher-order human intelligence

  1. Higher-order human-level intelligence specifically limited to human beings, excluding non-human sapient entities, such as robots or AI systems.

Higher-order human-level intelligence

  1. Synonym for higher-order intellectual capacity.

Higher-order intellectual capacity

  1. Human-level intelligence. Beyond animal intelligence. Includes wisdom, reasoning, planning, creativity, speculation, intuition, judgment, critical thinking, natural language, and storytelling.
  2. Limited to the higher-order capacities of humans, such as wisdom, reasoning, planning, creativity, speculation, intuition, judgment, critical thinking, natural language, and storytelling. Excludes the more mundane basic human intellectual capacities such as basic perception, basic communication, basic language skills, simple information transfer, simple transactions, basic planning, basic reasoning, and basic decision-making.
  3. Synonym for sapience.
  4. Synonym for higher-order human-level intelligence.

Higher-order intellect

  1. Individual possessing higher-order intellectual capacity.
  2. Synonym for higher-order intellectual capacity.

Higher-order intellectual activity

  1. Higher-order intellectual capacity in action.
  2. Synonym for higher-order intellectual capacity.


Self:

  1. The physical and mental qualities of an intelligent entity, as categorically distinct from the rest of the world.
  2. The opposite of other.

Whether the entity is aware of itself as being distinct from the rest of the world is another matter — see self-awareness.

An AI system has a self, but in a rather primitive sense.


Awareness:

  1. Ability to perceive and consider the state of the world.
  2. Able to acquire knowledge about the real world.
  3. Able to contemplate existing knowledge, possibly creating new knowledge.
  4. Possessing knowledge about some situation in the world.

Reasoning requires awareness of existing knowledge.

The ability to acquire and process data from sensors endows AI systems with a sense of awareness.


Self-awareness:

  1. The awareness by an intelligent entity of itself as being distinct from the rest of the world.
  2. Sense of self.
  3. Awareness of self.
  4. Sense that there is a me, an I.
  5. Awareness of an intelligent entity of its own subjectivity as distinct from the objectivity of the rest of the world.
  6. A sense of self as being special.

An AI system has a self and has awareness, but may or may not possess self-awareness. For example, an AI system focused on diagnosing failures of mechanical systems would not possess self-awareness, while a driverless car or robot by definition needs to be self-aware to reason about how it is situated in the real world.

Self-awareness in the human sense of being special and distinct is itself different from mere awareness of self as an object to be observed and manipulated.


Alertness:

  1. Ability of an intelligent entity to watch carefully for any unusual or particular circumstance and focus attention should it be warranted. Usually coupled with awareness as well as responsiveness.

Degree of alertness can vary, commonly as a conscious or explicit decision to establish the conditions that will warrant triggering a heightened sense of awareness and thought.


Responsiveness:

  1. Ability of an intelligent entity to react reasonably promptly to some circumstance. Usually coupled with awareness, alertness, and action.

May be an involuntary reaction or a conscious, explicit choice.


Conscious:

  1. The quality of an intelligent entity being aware, alert, and responsive to its surroundings and internal state, including thoughts, feelings, and knowledge.
  2. Being able to exercise intelligence.
  3. Being able to engage in thought, imagination, and reason.
  4. Synonym for consciousness.

Being conscious doesn’t necessarily imply alertness or even responsiveness. Awareness alone is sufficient.

We haven’t traditionally thought of AI systems as being conscious per se, but they increasingly have the awareness, alertness, and responsiveness that are hallmarks of consciousness.


Consciousness:

  1. Being conscious — alert, aware, and responsive to surroundings and internal state.
  2. Being able to exercise intelligence.
  3. Synonym for conscious.
  4. Being alert, aware, and responsive to surroundings, internal state, and sense of self.

Consciousness does not strictly imply self-awareness in the sense of a self that is special, but some may use it that way.

Belief, desire, intention (BDI)

More sophisticated AI systems must not only work with knowledge and beliefs, but also with desires and intentions, collectively referred to as BDI.

The theory is that without all three, an intelligent entity cannot make truly intelligent and responsive decisions.

This holds true for both the BDI of intelligent entities that the AI system is attempting to work with and the AI system itself.

Desires and intentions can be thought of as parts of goals.


Motivation

Intelligent entities have some motivation for deciding or acting as they do. Beliefs, desires, intentions, goals, and drives can all be sources of motivation.


Volition:

  1. Sense of voluntary will behind a decision.
  2. Synonym for free will.
  3. Synonym for agency.

Volition is more than simply intentions and motivations. It gives an entity a true sense of agency, or acting for itself.

Free will

  1. The ability to freely choose between different courses of action.
  2. Freedom from control or excessive influence by another intelligent entity.
  3. Synonym for volition.
  4. Synonym for agency.

We don’t normally speak of AI systems as having free will per se, but they do certainly have and exhibit the capacity to consider alternatives and choose between them. Volition seems a more appropriate term for an AI system.

Free will is an important characteristic of human beings and human intelligence, but to date there has been no real discussion of a similar concept for machines and AI systems, other than in science fiction or speculation about the more distant future.

Should free will be considered part of Strong AI? Interesting question.

Can an intelligent entity be truly intelligent without free will? Another interesting question.

For now and the foreseeable future, we will be in control of the machines (we hope… most of the time), so free will is a non-issue, for now.

Subconscious and unconscious mind

  1. The portions of the mind of an intelligent entity which are not directly accessible to the conscious mind.

The subconscious or unconscious mind of an intelligent entity is involved in intuition. And dreams. Probably imagination as well to at least some extent.

AI software will frequently be organized in modules or levels of processing so that a lot of processing is occurring in parallel, providing at least a rudimentary analogy to the conscious, subconscious, and unconscious mind.


Wisdom:

  1. Experience, knowledge, and judgment, which enable an intelligent entity to act with intelligence.

Wisdom is at the apex of the DIKW knowledge pyramid:

  1. Data. The starting point — sensory input, measurements, counts.
  2. Information. Organize, structure, and contextualize the data.
  3. Knowledge. Add meaning, significance.
  4. Wisdom. True understanding and ability to apply information and knowledge with a sense of intelligence.

Current AI systems are not expected to experience wisdom. That may come some day, but not in the foreseeable near-term future.

One could argue that the more advanced AI systems have a primitive form of wisdom, in the sense of rules, logic, and other forms of hard-coded judgment for deciding how to act, but the mere fact that this judgment is hard-coded or pre-programmed precludes an AI system from acting wisely, as opposed to merely following rules and fixed instructions.

Part of wisdom is understanding when prior beliefs must be changed in response to a changing world.

Reason and rationality

The primary hallmark of an intelligent entity is its ability to reason about knowledge. Rationality is generally an acceptable synonym for reason.

Characteristics of reason include:

  • Use of logic.
  • Sensible.
  • Based on facts.
  • Consistent with available knowledge.
  • Not based on or significantly influenced by passion or emotion.
  • A conscious process.
  • Sound judgment.
  • Able to be influenced by intuition, emotions, and drives, but only to the degree that they are compatible with sound reason.

Reason and rationality stand in contrast to:

  • Unbridled emotion and passion.
  • Irrationality.
  • Chaotic thinking devoid of reasoning.
  • Conclusions, bias, and strong beliefs that stand in contrast to logic.
  • Intuition.

This is not to say that intuition or emotion are bad per se, but simply that they don’t constitute reason.

Rational, irrational, and nonrational


Rational:

  1. Consistent with the use of reason, logic, and sound judgment.


Irrational:

  1. Inconsistent with or in contradiction to the use of reason, logic, and sound judgment.


Nonrational:

  1. A matter that is outside the reach of reason and logic.
  2. Sometimes a synonym for irrational.

Examples of matters that are nonrational:

  • Personal taste.
  • Personal preference.
  • Arbitrary, non-functional qualities such as color, shape, size, or material.
  • Art.
  • Entertainment.
  • Aesthetics.
  • Any matter where there are no significant facts or reasons to guide action.
  • Any situation where the facts, reason, and judgment concerning choices are equally balanced but a decision is required.
  • Any situation where a coin-flip feels like the sensible or even only thing to do.
  • Affairs of the heart.
  • Family affairs.
  • Maintaining peace in a community where conflicting parties each have their own (to them) good reasons that the other parties strongly disagree with, provided that no laws or rules are being broken, which would be irrational.
  • Values — moral and ethical. There is typically some sort of logic and reason behind them, but if they were pure reason and logic they wouldn’t be cordoned off as distinct from reason and logic.
  • Rules — can sometimes be arbitrary rather than driven by pure rationality.

Of course we expect AI systems to be rational and of course we would be very disappointed if they were irrational, but we do need to recognize the prospect of situations and matters where reason and logic alone are not the optimal answer.


Judgment

  1. The ability to discern what reasoning and action are warranted in a given situation.

Quality of judgment can vary greatly. Weak and poor judgment occur more frequently than desired. Sound judgment is rarer than desired.

Sound judgment

  1. Judgment that has proven over time and across a range of situations and matters to produce beliefs, positions, decisions, and actions that are considered reasonably sound.
  2. Opposite of poor judgment.
  3. Judgment that inspires confidence.

AI systems are expected to act with very, very sound judgment.


Confidence

  1. Degree of intensity of belief and trust in some matter, such as reasoning and judgment.

Confidence in the work of an AI system is critical.

Confidence applies to both people and machines. AI systems must assess their own confidence in the knowledge and actions of other AI systems.

Difficulties with acquiring and working with information can result in information, beliefs, and knowledge for which the intelligent entity has less than absolute confidence. Modest to moderate confidence is frequently the best that can be achieved.

An AI system should endeavor to assess and track the confidence for every piece of information it processes or creates and every decision it makes.

It should be possible for a person, control software, or another intelligent entity to query the confidence in any belief, information, knowledge, or decision.

Confidence can be expressed either in qualitative terms or in quantitative terms such as a confidence interval and probability.
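
The idea of tracking and querying confidence per belief can be sketched in a few lines. This is a minimal, hypothetical design; the class name, thresholds, and qualitative labels are all assumptions for illustration, not part of any real AI system.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A piece of knowledge paired with the system's confidence in it."""
    statement: str
    confidence: float  # 0.0 (no confidence) to 1.0 (certainty)

    def qualitative(self) -> str:
        """Express the quantitative confidence in qualitative terms."""
        if self.confidence >= 0.9:
            return "high"
        if self.confidence >= 0.5:
            return "moderate"
        return "low"

# Any other component (or person) can query the confidence behind a belief.
b = Belief("the road ahead is clear", confidence=0.72)
print(b.qualitative())  # moderate
```

The same record could carry a confidence interval or probability distribution instead of a single number; the point is that every belief is queryable for its confidence.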

Reason and logic


Logic:

  1. Fairly strict, rigid, mechanical, and methodical rules for reasoning, making no allowance for emotion, passion, drives, interests, or biases, unless they are objects of the reasoning.
  2. Informally, the reasoning for a conclusion or decision.
  3. Synonym for reason.

Mathematics and formal logic are the extremes of logic.

Logic can involve deduction, inference, and induction.

Generally, reason is more than only logic, but contextually the two can be used as synonyms.


Reason:

  1. A sound, consistent, and credible process for supporting a belief, position, proposition, conclusion, or decision, most typically based on a foundation of logic, sound judgment, and fact.
  2. Logic and judgment for contemplating a matter.
  3. Loosely, an excuse when sound reason is not readily available.

Intuition can play a role, but only a subsidiary role, with reason needing a sound, non-intuitive basis for accepting intuition.

Logic and reason produce a conclusion, the result. This may be a simple choice between options, a decision on whether to accept a proposition as true, or knowledge itself.

Generalization and induction


Generalization:

  1. Producing a rule that extrapolates expectations for a collection of entities based on observing common characteristics.


Induction:

  1. Generalization to a larger collection of entities from a smaller collection.

Generalization and induction are generally considered risky unless it can be confirmed that the requirements for the generalization can be robustly met for all entities to be covered by the generalization. This may be commonly true for mathematical systems, but can be problematic if not done carefully for real-world systems.

Many AI systems are pre-programmed with proven (or assumed) rules, so that this is not a problem (unless the assumptions are wrong), but advanced AI systems which are attempting to learn and reason about the world in a dynamic manner will confront issues concerning generalization and induction.
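
A toy sketch of induction makes the risk concrete. The function below generalizes a rule from the attributes shared by every observed entity; the swan data is purely illustrative.

```python
# Observed entities, each described by a few attributes (illustrative data).
observed_swans = [
    {"species": "swan", "color": "white", "region": "Europe"},
    {"species": "swan", "color": "white", "region": "Asia"},
]

def induce_common(entities):
    """Induce attribute values shared by every observed entity."""
    common = dict(entities[0])
    for e in entities[1:]:
        # Keep only the attributes on which all entities so far agree.
        common = {k: v for k, v in common.items() if e.get(k) == v}
    return common

rule = induce_common(observed_swans)
print(rule)  # {'species': 'swan', 'color': 'white'}
# The induced rule "all swans are white" is risky: a single black swan
# from an unobserved region would break it.
```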


Rationale

  1. Elaboration of the reasoning behind a belief, position, proposition, conclusion, or decision.
  2. Synonym for justification.
  3. Synonym for excuse.
  4. Synonym for opinion.

A rationale may or may not constitute sound reasoning. The designers of an AI system will have to decide what level of stringency to assign to rationale.

Reasoning, Formal reasoning, and Informal reasoning


Reasoning:

  1. Detail of the logic, facts, knowledge, and judgment behind a belief, position, proposition, conclusion, or decision.
  2. Synonym for justification.
  3. Synonym for rationale.
  4. Synonym for formal reasoning.
  5. Synonym for informal reasoning.

Rationale may be weaker than reasoning. Reasoning is expected to be quite strong.

Formal reasoning:

  1. Reasoning that has a significant degree of rigor, intensity, and formality.
  2. In contrast to informal reasoning.
  3. Synonym for reasoning.

Informal reasoning:

  1. Reasoning that lacks the rigor, intensity, or formality of formal reasoning.
  2. In contrast to formal reasoning.
  3. Synonym for reasoning.


Argument

  1. A carefully considered set of reasons supporting a position, proposition, conclusion, or decision.
  2. Informally, an unruly exchange of views, opinions, and possibly even sound facts.

It is important with AI systems to have some way to query or record the specific reasoning that the AI system used to arrive at some result.


Assumption

  1. Assumed truth. With only informal reasoning rather than robust formal reasoning or proof given.
  2. Knowledge that may have been proved elsewhere, but assumed in the current context.

Assumptions will commonly have some sort of reasoning given, but it typically won’t have the level of robust formal reasoning that can be proved or validated — otherwise it wouldn’t be called an assumption.

As sound and formal as reason and logic may be, ultimately they are no sounder than the assumptions on which they are based.

The strength of reasoning or an argument rests both in the strength of the logic and the strength of the assumptions — reasoning cannot be stronger than its assumptions.


Definition

  1. Arbitrary, accepted, but reasoned assumptions about the meaning of terms or concepts.
  2. Accepted truth.
  3. Defined truth.

The main distinction between a definition and an assumption is that an assumption is a statement about something that is expected to be true while a definition is merely a shorthand, a convenience, rather than being a statement about the fundamental nature of the matter at hand.

Fair inference

  1. Reasoning in which the justification for the inference is sufficiently robust to warrant confidence in a high degree of certainty of the conclusion being valid.

Inference is generally an acceptable method of reasoning, but it has its limits. It becomes problematic when information is incomplete, vague, or uncertain in some way. Then it becomes a matter of judgment whether the justification for the inference is sufficiently robust to qualify as a fair inference.

Linear systems

  1. A phenomenon which has linear, law-like behavior, such that once you figure out how things work in one situation you can apply that knowledge in all other situations.
  2. Opposite of nonlinear or complex adaptive systems.

This is great when it happens, but so many phenomena in the real world are more complex and not so linear.

Active knowledge

  1. Knowledge that triggers thought processes and actions in response to incoming stimulus or thought.
  2. Alternative to explicit conscious thought.

Normally, an intelligent entity might perceive incoming stimulus and consciously and explicitly decide how to respond.

But with active knowledge, a response would be automatic, merely upon the fact pattern of incoming stimulus matching a similar fact pattern of existing knowledge. Action would not necessarily be automatic, but triggering of related thought processes would be automatic.

Active knowledge would normally consist of two parts:

  • The trigger fact pattern.
  • Related knowledge.

The incoming stimulus would match the trigger, and then the related knowledge would enter the conscious thought process.

Even absent presence of the trigger pattern in input stimulus, the trigger pattern might appear in conscious thought as other pieces of existing knowledge are contemplated, thus triggering the process of raising the related knowledge into consciousness.

This is all rather speculative. We don’t yet have a firm enough conception of the operation of the human mind. But, this is still relevant to conceptualizing how an advanced AI system could store, organize, and process knowledge, input stimulus, and thought.
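
As speculative as it is, the two-part structure described above (trigger fact pattern plus related knowledge) can be sketched simply. Here fact patterns are represented as sets of features; the representation and the example triggers are hypothetical.

```python
# Active knowledge entries: (trigger fact pattern, related knowledge).
# Feature sets and knowledge strings are illustrative.
active_knowledge = [
    (frozenset({"smoke", "heat"}), "possible fire: check sensors, alert"),
    (frozenset({"wet_road", "braking"}), "reduced traction: increase distance"),
]

def stimulus_triggers(stimulus: set) -> list:
    """Return related knowledge whose trigger pattern matches the stimulus."""
    return [knowledge
            for trigger, knowledge in active_knowledge
            if trigger <= stimulus]  # trigger pattern is a subset of stimulus

print(stimulus_triggers({"smoke", "heat", "night"}))
# → ['possible fire: check sensors, alert']
```

The same matching step could also run against patterns appearing in conscious thought, not just input stimulus, as the text suggests.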

AI systems are much easier to develop for linear systems.

Nonlinear systems

  1. A phenomenon where there is a significant divergence in the rules of behavior from one situation to another, such that knowledge from one situation cannot always be reliably applied to another situation or requires a more complex, nonlinear set of rules.
  2. A phenomenon where the rules for different situations are more complex than simply a linear relationship.
  3. In contrast to a linear system, a chaotic system, an indeterminate system, or a complex adaptive system.
  4. Synonym for complex adaptive system. Not technically true, but common usage.

The annoying thing about nonlinear systems is that sometimes knowledge from one situation can be applied to other situations, but knowing when this can happen is problematic.

The main difficulty with a nonlinear system is to discover the rules for how exactly behavior changes between situations. There may or may not be a mathematical relationship that can be deduced or discovered. But ultimately there will be some discernible method to the madness, because if there is no method, then there is no system per se, just chaos.

AI systems are much more difficult to develop for nonlinear systems.

Chaotic systems

  1. A phenomenon where the rules for different situations cannot be determined.
  2. A phenomenon where the rules for different situations are unknowable, ever.

Indeterminate systems

  1. A chaotic system.
  2. A phenomenon where the rules for different situations are not known, at present.
  3. In mathematics, a system of equations with more than one solution.

Complex adaptive systems (CAS)

  1. A phenomenon where the rules of behavior for a given situation will change over time, evolving due to feedback loops in the system and impacts of external phenomena.
  2. In contrast to linear and nonlinear systems.

The good news is that as complex as an adaptive system may be, evolution does not necessarily happen at a rapid rate, so that the system may appear to operate in a linear or nonlinear manner for extended periods of time. For example, weather can evolve very rapidly while climate evolves much more slowly and geology evolves far more slowly than even climate, although localized geological effects such as erosion, earthquakes, and volcanos can occur much more frequently.

So, the twin challenges of complex adaptive systems for AI are:

  1. To take advantage of short-term linear and nonlinear behavior of the system, while it lasts.
  2. To be prepared to radically or incrementally evolve the knowledge base as unexpected evolution occurs.


Chaos

  1. Complete disorder such that no method can be discerned in the madness.
  2. Perceived disorder as a result of limited comprehension of the phenomenon. Apparent chaos or perceived chaos.

Most of the real world has some degree of order in some sense, whether linear, nonlinear, or complex adaptive systems, but sometimes there really is no discernible order.

Sometimes the lack of order is more a lack of perception and lack of comprehension of some hidden, underlying order — call it apparent chaos.

Chaos is equally problematic for people and machines.

Sometimes you can develop strategies and techniques for coping with disorder and sometimes the only thing a person or AI system can do is go with the flow, ride out the storm, hope for the best, and wait until some sense of order reappears.

Knowledge base

  1. Sum total of knowledge available to an intelligent entity.
  2. Synonym for memory.

Be warned that some or much of the so-called knowledge may simply be beliefs that are not fully justified or fully validated in the real world.


Cognition

  1. Subset of mental functions of intelligence concerned with acquisition of knowledge from perception of sensory data, including parsing of natural language (both spoken and written) and recognition of gestures and body language.
  2. Process by which knowledge is acquired by an intelligent entity.
  3. Synonym for intelligence. Loosely speaking.
  4. Synonym for human cognition.
  5. Synonym for machine cognition.

Generation of knowledge associates deeper meaning with the surface appearance of information, such as parsing natural language or gestures and body language to distill the essential meaning deeper than the surface syntax or appearance.

There is some fuzziness as to whether cognition includes all mental functions of intelligence, including reasoning, imagination, speculation, planning, and decision, or is strictly limited to acquisition of knowledge from perception of sensory data, and whether memory is strictly a function of cognition or whether cognition merely interacts with memory.

Clearly recognition (re-COGNITION) utilizes recall of memory to aid perception.

Recall of memory is also utilized to build larger, composite forms of knowledge, integrating what is already known with newly acquired information.

So, clearly memory is integrated with cognition, but that still raises the question of whether memory is strictly part of cognition or should be considered a distinct portion of the mind.

In terms of AI, it seems to make sense for memory to be a shared resource, shared between knowledge acquisition and the functions that process knowledge.

How much sense does it make to consider memory as being outside of cognition per se? Consider the thought experiment of entering a quiet, dark room — knowledge acquisition from perception of sensory data ceases, but your mind is now free to explore and work with all of your previously acquired knowledge.

Your mind can also create new knowledge from reasoning about previous knowledge. Whether reasoning is part of cognition per se is unclear.

And your imagination can conjure up imaginary images and knowledge, at will, unaided by fresh perception of sensory data, but fueled by past acquired knowledge from memory.

The position espoused in this paper is that cognition represents the frontend of the process of knowledge acquisition, with thought, memory, and reasoning occurring after cognition on the raw knowledge acquired during cognition.

An alternative view is that cognition produces information and context, and it is up to the conscious mind to assess that information and sort out meaning, producing the final knowledge based on raw information, context, and derived meaning.

It is important to make clear from context whether cognition is referring to cognition by machines or by people.


Attention

A primary feature of cognition is the requirement to focus attention for perception and knowledge acquisition.

This is true whether the intelligent entity is a person or an AI system.

A sense of priorities is generally required. There may be more than one area of significant interest that is competing for attention, but these interests must be prioritized, although priorities can shift over time or based on available resources.


Perception

Perception is the stage of cognition where raw sensory data is processed and turned into information that is suitable for recognition and knowledge acquisition.

Attention is required to focus the processing of sensory data.

The processed sensory data can be compared to memories to recognize shapes, images, colors, smells, tastes, textures, and sounds, including the symbols and sounds of language, as well as gestures.

There is only simple, literal, surface meaning at the recognition stage, nothing deep.

Deeper meaning will have to be supplied based on what was recognized, context, meaning recalled from memory, and meaning and interpretation by the conscious mind.

Symbols and symbol processing

Much of the higher-order processing in any intelligent entity is performed using symbols.

A symbol is merely a shorthand for referring to some concept, object, phenomenon, or knowledge which permits mental processes to be performed without all of the details of whatever is being referred to.

A symbol may be a word, a name, or some other kind of identifier.

Symbol processing is one of the keys to AI and was one of its main focuses in traditional AI dating back to the 1950’s and 1960’s. It remains quite important and central to AI.
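
A minimal sketch of the idea: a symbol table maps a short identifier to the detailed concept it stands for, so that reasoning can manipulate the symbol without carrying all of the details. The names and attributes here are purely illustrative.

```python
# A symbol table: each symbol is shorthand for a richer concept.
symbol_table = {
    "Fido": {"kind": "dog", "color": "brown", "owner": "Alice"},
    "Alice": {"kind": "person", "age": 34},
}

def resolve(symbol: str) -> dict:
    """Look up the concept a symbol refers to."""
    return symbol_table[symbol]

# Facts and rules can be stated purely over symbols...
fact = ("owns", "Alice", "Fido")
# ...and the details retrieved only when needed.
print(resolve(fact[2])["kind"])  # dog
```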

Natural language

  1. The language used between people such as in their daily lives, excluding specialized languages such as mathematics or scientific notation. Also excluding languages designed for communications with machines.

Natural language is a key capability for communication between people. That makes it a natural method for communicating between a person and a machine or a machine and a person.


Language

  1. Any textual or oral representation of knowledge or intentions.
  2. Synonym for natural language.

Generally, we do not use the term language for communications between machines. Protocols and data formats such as markup languages (HTML, XML) are the traditional terms.

Language is used to express:

  • Statements of information
  • Expressions of feelings
  • Questions
  • Commands or statements of need for action

Natural language processing (NLP)

The ability of a machine to parse and deduce meaning from natural language as well as the ability to express knowledge and intentions from a digital system into natural language is known as natural language processing or NLP.

NLP generally involves natural language in the form of text. Speech is a different but related matter.

Speech processing

Since natural language can be expressed orally as well as in written form, speech processing is an important AI function, including:

  • Speech to text — speech recognition.
  • Text to speech — speech generation.

Speech recognition

  1. Transforming speech from audio form to text or parse trees.

Natural language processing (NLP) can then be used to process meaning in text form.

Alternatively, speech recognition can directly produce parse trees so that the text does not have to be parsed separately.

Parse trees

  1. Hierarchical representation of knowledge or information after being parsed from some linear representation such as natural language.

Information and knowledge may be represented in a wide variety of languages. Expressions in language can be parsed, either by a person or a machine, and represented in a tree or hierarchical format, similar to a traditional sentence diagram.

The point of a parse tree is to facilitate processing of language within a machine.
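
A parse tree can be represented quite simply, for example as nested tuples of the form (label, children...), much like a traditional sentence diagram. The grammar labels below are illustrative.

```python
# A parse tree for "the dog chased the cat" as nested (label, children...)
# tuples; leaves are plain strings.
sentence = ("S",
            ("NP", ("Det", "the"), ("N", "dog")),
            ("VP", ("V", "chased"), ("NP", ("Det", "the"), ("N", "cat"))))

def leaves(tree):
    """Recover the original words from the tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    words = []
    for child in tree[1:]:  # tree[0] is the label; the rest are children
        words.extend(leaves(child))
    return words

print(" ".join(leaves(sentence)))  # the dog chased the cat
```

A machine can walk such a tree directly, which is far easier than reparsing the raw linear text at every step.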

Speech generation

  1. Transforming knowledge or expressions to speech in audio form in natural language.

Expressions could be in raw text or as structured knowledge or even a parse tree.

Hints may be required to be embedded in text to assure proper tone and inflection.

Text recognition

In addition to processing text which is input in digital form as a string of so-called character codes, software is also capable of converting images of text into digital form as well.

Text recognition has three distinct forms:

  1. Optical character recognition of machine-printed text.
  2. Recognition of handprinted text.
  3. Recognition of cursive script text, handwriting.

These functions are useful even for non-AI applications.

Whether these functions are considered AI per se is a matter of debate and definition. It’s a fielder’s choice.

Text extraction from images

There is a fourth form of text recognition which is more commonly associated with AI: extraction of text from images. Some examples are:

  • Reading signs in photos.
  • Reading street signs from video.
  • Reading whiteboards from photos or video.

Automatic language translation

AI has been used to varying degrees of success for translating from one natural language to another, but the process remains problematic.

Machine translation

  1. Synonym for automatic language translation.


Linguistics

  1. The science and study of the elements and structure of language.
  2. The science and study of how intelligent entities process language, both to express thoughts and intentions and to extract knowledge and intentions from expressions.

In the case of an AI system we refer to computational linguistics.


Parsing

Software developers refer to the processing of language input text as parsing, whether it is natural language or some specialized computer language such as programming languages and markup languages (XML). Parsing involves several steps:

  • Defining the language — the vocabulary and overall structure.
  • Defining the grammar for the language — the detailed rules for structure.
  • Lexical analysis — recognizing distinct words, punctuation, and special characters referred to as operators.
  • Parsing — recognizing the structure of words, punctuation, and operators according to the rules defined by the grammar.
  • Parse trees — representing the syntactic structure of the input text in a form that is easier for the computer to directly process, replacing raw words, punctuation, and operators with a digital representation of the referenced concept, which may be ambiguous until semantic processing.
  • Semantic processing — overlaying meaning onto the parse trees, whether from existing knowledge, context from the source of the input text, or declarations of hints of meaning from the parsed input text itself. This step resolves ambiguities between raw words and the various concepts that words might represent.

In the case of speech input, the raw audio data may be converted to natural language text for convenience or directly converted to parse trees for efficiency.
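
The steps above can be sketched for a toy grammar. Everything here is illustrative: a hypothetical two-word command language (VERB NOUN), tiny vocabularies, and a nested-tuple parse tree.

```python
import re

# Defining the language and its grammar: a command is VERB NOUN.
VERBS = {"open", "close"}
NOUNS = {"door", "window"}

def lex(text: str) -> list:
    """Lexical analysis: split input text into distinct words."""
    return re.findall(r"[a-z]+", text.lower())

def parse(tokens: list):
    """Parsing: check structure against the grammar and build a parse tree."""
    if len(tokens) == 2 and tokens[0] in VERBS and tokens[1] in NOUNS:
        return ("Command", ("Verb", tokens[0]), ("Noun", tokens[1]))
    raise SyntaxError(f"does not match grammar: {tokens}")

print(parse(lex("Open door")))
# → ('Command', ('Verb', 'open'), ('Noun', 'door'))
```

A real parser adds semantic processing on top of this, resolving each word to the concept it refers to, but the lexing and structure-checking stages look much the same.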

Images, image processing, and image recognition

As important as symbols and symbol processing are to intelligence, images and image processing are equally important.

The ability to recognize objects, animals, people, and phenomena is critical to intelligence.

The ability to classify and categorize images is itself an interesting capability.

Facial recognition is a hot topic these days. Gesture and facial expression recognition is important to intelligence as well.

Image recognition in general is an interesting capability.

Many forms of image processing have been developed over the years, but it is not completely clear which of them are necessarily AI per se. Object identification and feature extraction are typically considered AI. Whether optical character recognition (OCR) is considered AI is unclear.

Image processing may operate in real time, on images that were captured at some time in the past, or even on images generated by computer software. Real-time image processing would be classified as machine vision (computer vision).

Machine vision

  1. Mimicking the capabilities of biological vision using a machine.
  2. Adding real-time processing to image processing and image recognition.
  3. Synonym for computer vision.

Machine vision enables a lot of AI applications, such as:

  • Manufacturing automation
  • Driverless cars
  • Autonomous vehicles
  • Advanced sensing and alerting

Computer vision

  1. Synonym for machine vision.

Identification and identifiers

Beyond recognition, cognition includes retrieving or storing the identification or identifiers for an entity, such as a name, location, role, relationship, assigned number, or other identifying characteristics that allow an intelligent entity to easily and quickly refer to something without the need to provide all details about that something.

Identifiers are an essential part of most forms of computer software, and AI is no different.

People have a fascination with identifiers as well, such as a person’s name, their nickname, user name, handle, etc.


Identity

  1. Unique identifier for an entity.
  2. Identifiers for an entity.
  3. Identifying characteristics for an entity.
  4. Important characteristics for an entity.
  5. Incidental but distinguishing characteristics for an entity.
  6. All characteristics for an entity, including current, past, and propensity for future behavior.

Identity is a fairly ambiguous term, unlike identification and identifiers, requiring context to determine meaning.

A unique identifier is the most robust identity, but frequently not practical.

More commonly, non-unique identifiers or identifying characteristics are more readily available.

Identity can generally be categorized as one of two main meanings:

  1. Identification of an entity. Which entity is being referred to.
  2. All aspects of the entity. Everything that makes the entity what it is.

Or more simply, who vs. what, or name vs. details.

Advanced AI systems will have to deal with both.

For example, if a driverless car senses a person crossing the street, the main question is not what their name is, but how likely is it that they will finish crossing the street before the vehicle approaches them.


Memory

  1. The accumulated knowledge of an intelligent entity.
  2. Short-term memory.
  3. Long-term memory.
  4. Synonym for knowledge base.

The distinctions between short-term memory, long-term memory, and any other forms of memory are important, but beyond the scope of this informal paper. Generally, memory refers to long-term memory unless explicitly or contextually specified otherwise.

Prediction of the future

No one expects machines to be able to predict the future per se, but we do have an expectation of being able to establish reasonable plans for accommodating likely scenarios.

There are various techniques for essentially trying to predict the future, including:

  1. Patterns
  2. Trends
  3. Extrapolation
  4. Speculation
  5. Guessing


Patterns

  1. Analyzing past patterns and how they evolved in an attempt to deduce how the current situation will evolve in the near future.

There is no guarantee that past patterns will hold in the future, but frequently they do.

Advanced AI systems should certainly detect and exploit patterns, but also need to cope with situations where past patterns break down for some reason.


Trends

  1. Patterns where there is a reasonably simple mathematical relationship that has reliably shown progress in some direction, such as growth.


Extrapolation

  1. Identifying a reasonable mathematical relationship in a data series, so that future data points can be reliably predicted.
  2. Synonym for trend.
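
The simplest case of extrapolation is fitting a linear trend to a data series and projecting it forward. A minimal sketch, using plain least-squares with no libraries (the data is illustrative):

```python
def fit_trend(ys):
    """Return (slope, intercept) of the least-squares line through (i, ys[i])."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def extrapolate(ys, steps=1):
    """Predict the value `steps` points beyond the observed series."""
    slope, intercept = fit_trend(ys)
    return slope * (len(ys) - 1 + steps) + intercept

print(extrapolate([2.0, 4.0, 6.0, 8.0]))  # 10.0
```

Of course, this only works as long as the underlying trend really is linear and really does continue, which is exactly the risk the text describes.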


Speculation

  1. Forming a theory, conjecture, or expectation for an outcome that is not yet known.

Speculation is risky, but is an essential technique for forming expectations of the unknown.

A speculated outcome may not necessarily come to pass, but even if it doesn’t, one can frequently learn from the experience.

Speculation is a very distinctly human characteristic for intelligence.

Advanced AI systems will have to possess robust speculative capabilities to have any chance of achieving human-level intelligence.


Guess

  1. Estimate without any firm, rational basis.
  2. Synonym for estimate.
  3. Synonym for educated guess.

Technically, a guess can indeed be as extreme as a shot in the dark, a random choice, but generally guessing suggests making an educated guess.

Educated guess

  1. A guess that has a somewhat firmer and somewhat rational basis.


Approximation

  1. Synonym for estimate.
  2. Similar but not exactly the same as something else.


Estimate

  1. Deduce a proposed outcome based on available information.
  2. Heuristic for proposing an outcome without waiting or expending the full level of effort to fully deduce the outcome.
  3. Synonym for approximation.

SWAG — Scientific Wild-Assed Guess

  1. Very rough approximation based on some amount of information, experience, and intuition.

The theory is that with enough experience, over time an expert, professional, or AI system will become more proficient at guessing.

Machine cognition

  1. Cognition by machines, in contrast to people.
  2. Subset of functions of machine intelligence concerned with acquisition of knowledge from perception of sensory data, including parsing of natural language (both spoken and written) and recognition of gestures and body language.
  3. Process by which knowledge is acquired by a machine.
  4. Process by which sensor data is processed by a machine.

Human cognition

  1. Cognition by people, in contrast to machines.
  2. Synonym for cognition.

Human-level cognition

  1. A level of machine cognition comparable to human cognition.

Machine perception

  1. Perception as implemented in a machine.
  2. Transformation of raw data from machine sensors into digital information, meaning, and knowledge that can be stored and processed by software on a machine.

Machine perception would include:

  • Recognizing the content of images and sounds.
  • Parsing natural language to produce a semantic representation of incoming expressions, both oral and written language.
  • Recognizing objects, animals, people, and faces.
  • Recognizing gestures, facial expressions and body language.
  • Sentiment analysis — discerning writer or speaker’s attitude, such as positive, negative, or neutral.

Generally, machine perception leans towards no more than superficial, shallow, surface meaning rather than deeper meaning such as social or emotional meaning.

Sentiment analysis

  1. Discerning writer or speaker’s attitude, such as positive, negative, or neutral.

I’m not so sure that today’s very limited forms of sentiment analysis, such as the use of trigger words, constitute true AI, but certainly they could be considered Weak AI.

More advanced sentiment analysis could analyze speech for tone, pitch, cadence, breathing, etc. Or possibly even lie detection. Or at least to detect and measure stress.
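
To make the trigger-word idea concrete, here is a minimal sketch in Python. The tiny positive and negative word lists are made-up placeholders, not a real sentiment lexicon:

```python
# Trigger-word sentiment analysis: count hits against tiny positive and
# negative word lists (illustrative placeholders, not a real lexicon)
# and report the dominant polarity.

POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def sentiment(text):
    # strip simple punctuation so "great," still matches "great"
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("What a great, happy result!"))   # positive
print(sentiment("This is a terrible idea"))       # negative
```

Even this toy version illustrates the limitation noted above: it sees only surface trigger words, with no grasp of negation, sarcasm, or context.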

Human intelligence

  1. Intelligence or intellectual capacity of an average, mature, educated, experienced, sane, sober, awake human.

Exactly which traits or capabilities of intelligence might be strictly human or distinctly human is a matter of debate.

Natural intelligence

  1. Synonym for human intelligence.
  2. Intelligence of any natural, sentient creature, including most higher-order animals, especially primates but mammals as well.

Animal intelligence

  1. Intelligence of any natural, sentient creature, including most higher-order animals, especially primates but mammals as well, including humans.
  2. Intelligence of a non-human animal. The characteristics of intelligence shared by man and animals, excluding characteristics exclusive to humans.

Detailing animal intelligence as distinct from human intelligence is beyond the scope of this paper.

Brilliant intelligence

  1. Intelligence that is well above that of an average person.

Genius intelligence

  1. Intelligence that is at or above the level of a genius.

Superhuman intelligence, superintelligence

  1. Intelligence well beyond the most gifted of geniuses.
  2. Intelligence beyond Strong AI.

Any deep consideration of superintelligence is beyond the scope of this paper.

Risks of superintelligence

There seems to be a conundrum in thinking about potential risks of superintelligence:

  1. Superintelligence could raise uncontrollable risks to humanity.
  2. But wouldn’t a so-called superintelligence be smart enough to more than fully comprehend any such risks and be very skilled at evading and mitigating them?

So, which is it — is a superintelligence really smart, in all senses, or simply really, really dumb, or really evil? But can evil be considered intelligent?

BTW, I think Terminator was a really cool movie and Skynet was an intriguing concept, but… they were both science fiction. That’s one of the first things I would want a superhuman superintelligence to be able to do — discern and keep reality and fiction separate.

What is beyond human intelligence?

What exactly does it mean to be beyond human intelligence? What exactly could a superhuman intelligence or superintelligence accomplish?

The simple answer may be that the question is beyond our capacity to even contemplate. But that’s exactly what we humans do: attempt whatever somebody tells us we can’t do.

We can speculate on some obvious improvements:

  • Remember and recall much more knowledge.
  • Think much faster.
  • Perform arithmetic and mathematics more rapidly and accurately.
  • Prove more mathematics theorems.
  • Discover more relations in knowledge.
  • Expanded sensory capacity.
  • Speculate much more grandly.
  • Design and excel at games far more complex than chess and Go.
  • (Figure out why Go is capitalized but chess is not!)
  • Design and speak a language that mere mortals can no longer comprehend.
  • Perform grander thought experiments that allow more mysteries of the universe to be solved with less physical experimentation.
  • Figure out how to communicate with intelligent life elsewhere in the universe.
  • Talk to the animals.
  • Design intelligent plants.
  • Design and create more successful forms of social organizations.
  • Either design better religions and spiritual practices, or transcend and eliminate them.
  • Have much more successful relationships.
  • Dramatically reduce or even eliminate marital problems.
  • Guarantee success at finding a mate.
  • Reduce or eliminate many or all forms of stress.
  • Figure out how to travel in time.
  • Figure out how to transmit matter.
  • Figure out how to travel galactic distances faster than the speed of light.
  • Figure out how to read minds.
  • Develop ESP.
  • Develop clairvoyance.
  • Figure out whether the multiverse exists or not.
  • Contact and explore other universes.
  • Develop games and puzzles that no human could play or master.
  • Add new senses and expand existing senses.

Some questions that a superintelligence might be able to answer:

  • Origin of the universe.
  • What if anything existed before t=0 for the universe?
  • Will the universe cease expanding?
  • What will happen if the universe ceases expanding?
  • How will the universe end, if ever?
  • When and how did life begin?
  • Is there some ultimate life form?
  • Is string theory valid?
  • What is the unified theory of everything?
  • Is there intelligent life elsewhere in the universe?
  • Does God exist?
  • What is God’s ultimate plan, if any?
  • Is there life after death?

Strong AI

  1. Synonym for human-level artificial intelligence.
  2. AI that rises to the level of human intelligence.
  3. Opposite of Weak AI.
  4. Levels of intelligence well above Weak AI.
  5. Synonym for artificial general intelligence.

There are no hard metrics for measuring or even declaring that a given AI system is Strong AI.

Strong AI can be viewed as either the ultimate end goal of human-level intelligence, or a range or spectrum, from well above Weak AI, incrementally stronger, until human-level intelligence is reached.

Weak AI / Narrow AI

  1. AI that falls short of Strong AI or human-level intelligence.
  2. Task-specific or domain-specific AI.
  3. AI that is focused on relatively discrete tasks that are only a small subset of human-level intelligence.
  4. Opposite of Strong AI.
  5. Levels of intelligence well short of Strong AI.
  6. Any function that gives the appearance of some aspect of human-level intelligence.
  7. Weak AI and Narrow AI are synonyms.

There is no hard metric for measuring either a minimum or maximum of Weak AI.

Unless the proponents of an AI system are robustly asserting that their system has Strong AI, the assumption should be that the system has only Weak AI.

Task-specific AI

  1. AI that is focused on only a discrete task or small subset of intelligence.
  2. Use of AI to automate a discrete task.
  3. Synonym for traditional AI.
  4. Synonym for Weak AI.

Domain-specific AI

  1. AI that is focused on only a specific domain, possibly or generally for a limited subset of intelligence.
  2. Use of AI to automate tasks for a specific domain.
  3. Synonym for Weak AI.

The constraints of the domain provide opportunities to optimize, tailor, or otherwise focus the AI algorithms to act more intelligently for that one domain than if the AI had to act across a more general set of domains.

Domain

A subset of life and the world in general that is relatively constrained, such as:

  1. A region or locale.
  2. A profession.
  3. A field, such as a field of science.
  4. A class of problems.
  5. A class of activities.

Artificial general intelligence (AGI)

  1. Synonym for Strong AI.
  2. Synonym for human-level artificial intelligence.
  3. AI that is focused on much more than relatively discrete tasks that are only a small subset of human-level intelligence.

Augmented intelligence

  1. The use of technology to supplement intelligence.

I like to tell people that Google is now half of my brain. The ability to quickly access data, information, knowledge, and answers to fairly complex questions and issues sure makes it feel like I am a lot smarter and more capable than if I didn’t have an Internet connection.

We’re a long way from implanting chips in our brains, but the prospect is credible.

We’re still a good distance from using electronics to read minds or stimulate sensory perception electronically, but progress is actually being made on those fronts.

Is a smart phone augmented intelligence? Yes, at least to a limited degree, if used properly, but it can also be misused to the detriment of overall intelligence.

A deeper exploration of augmented intelligence is beyond the scope of this informal paper.

Group mind

  1. The ability of communicating, cooperating, and collaborating groups of intelligent entities to pool their efforts to act more intelligently than if they acted alone.

Collaboration over the Internet using email and discussion groups, at least for disciplined and sincere groups of individuals, can result in accomplishments exceeding those possible by a single individual.

AI has great potential for group mind, but little work has been done even on the research front.

Group mind incorporating both people and machines has great potential. An AI system could easily participate in an email discussion group, for example. Easy being a relative concept. Maybe believable is a better characterization. Just not in the very near-term future.

Distributed AI

  1. Group mind composed of interacting AI systems.

This has not yet occurred in any significant manner with current AI technology, but has real potential.

Skynet of Terminator movie fame with its neural net-based AI was a distributed AI system. But that was science fiction.

Self-organization

  1. Capacity for elements of a system to spontaneously come into some sense of order without any external command, control, or intentional influence.
  2. Capacity of an intelligent entity to acquire and organize knowledge.
  3. Capacity of individual intelligent entities to come together into organizations.
  4. Opposite of a designed system.

Swarm intelligence

A variation on group mind and distributed AI is the concept of swarm intelligence, in which large numbers of independent AI systems come together in a self-organizing manner and collectively form a larger intelligence.

This is more of an advanced research area at this time, but has some interesting promise. It was also a topic explored in Michael Crichton’s book Prey, but again, that was science fiction.
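
A toy flavor of self-organizing swarm behavior can be sketched as gossip averaging: each independent agent repeatedly averages its estimate with a random peer’s, and the group converges on a shared value with no central controller. The agents and their values here are invented purely for illustration:

```python
# Self-organizing consensus via pairwise "gossip" averaging: no leader,
# no global coordination, only repeated local interactions.
import random

random.seed(0)
estimates = [random.uniform(0, 100) for _ in range(8)]   # one per agent

for _ in range(500):                       # repeated local interactions
    a, b = random.sample(range(len(estimates)), 2)
    avg = (estimates[a] + estimates[b]) / 2
    estimates[a] = estimates[b] = avg      # both agents adopt the average

print(max(estimates) - min(estimates) < 1e-3)   # True: the agents agree
```

Collective agreement emerges from purely local rules, which is the essence of the swarm idea, though real swarm intelligence aims far beyond simple averaging.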

Hierarchical social organizations

Humans are famous for their penchant for forming hierarchical organizations to pursue larger goals than can be achieved by a single individual.

Hierarchy is exploited within software systems to delegate tasks and coordinate results, using modules, processes, and even separate machines.

Hierarchy is even used within an AI system to organize and coordinate work within the system.

But to date, there doesn’t appear to have been much interest in exploiting social hierarchy among groups of AI systems. Still, the concept shows promise.

Group mind is more a group of peers at the same level, rather than a hierarchy per se.

Leadership among AI systems

Although there is usually some form of executive control within an AI system, there hasn’t yet been much interest expressed in the concept of an AI system taking and exercising leadership within a group of AI systems.

These days, distributed systems designers prefer to focus on peer-oriented, leaderless system architectures, but usually that simply means that the leader is dynamically chosen or elected in a more democratic sense than having a selected or appointed leader who remains leader for an extended period of time.
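
The dynamic election just mentioned can be sketched very simply. In the spirit of the classic bully algorithm, the reachable peer with the highest ID becomes leader, and a fresh election is held when the leader goes offline (the node IDs are hypothetical):

```python
# Dynamic leader election among peers: the reachable node with the
# highest ID wins, loosely following the bully algorithm.

def elect_leader(peers):
    """peers maps node ID -> is the node currently reachable?"""
    alive = [node for node, up in peers.items() if up]
    return max(alive) if alive else None

cluster = {1: True, 2: True, 3: True}
print(elect_leader(cluster))   # 3
cluster[3] = False             # the leader fails...
print(elect_leader(cluster))   # ...and a new leader is elected: 2
```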

As AI systems become more advanced and groups of AI systems begin cooperating, leadership issues will begin to come to the forefront.

Social contract

Any parties interacting in a social manner need to make some presumption that some sort of social contract is in force. In the context of AI, we have to consider social contracts:

  • Between AI systems.
  • Between AI systems and people as individuals.
  • Between AI systems and society.

At present, AI social contracts tend to be hard-coded or even presumed rules for interacting entities.

In more advanced AI systems, social contracts could or should be negotiated between entities, possibly even dynamically.

Legal or regulatory rules are likely to constrain social contracts for AI systems.

Intelligent machine

  1. A machine possessing machine intelligence.
  2. A machine that exhibits a substantial fraction of human-level intelligence.
  3. A machine that exhibits some modest fraction of human-level intelligence.

What is a machine?

Generally, when talking about computers, software, the digital world, and computation in general the term machine has any of a number of contextual meanings or senses:

  1. The net effect that the user sees when hardware and software are combined to produce an application running on a computer system. For example, your smartphone or tablet and its apps, your laptop or desktop computer and its apps, a wearable computer, a robot, a smart vending machine, a smart kitchen appliance, an airport check-in kiosk, or a driverless car.
  2. The application software running on a computer. Sure, the hardware and the operating system provide significant capabilities, but it is the software that really provides the intelligence, in much the same way that the human mind is the real seat of human intelligence even though the neurons in the brain are the underlying enabler of that intelligence.
  3. A computer system minus application software. The bare hardware and operating system alone.
  4. The bare hardware of a computer, without even an operating system.
  5. The CPU or central processing chip (and any auxiliary processing chips such as a GPU) of a computer as well as memory and permanent storage for data. All of the other hardware is there only to enable the CPU and storage to function.
  6. The CPU alone. This is where the real, intellectual work of a machine is done. Memory and storage correspond more to human knowledge, while the CPU corresponds to human consciousness and thought.
  7. The CPU plus any auxiliary processing chips (such as a GPU). Again, the focus is on the chips where the real logic of the computer systems is being performed.
  8. A Turing machine.
  9. Any hardware designed to implement the principles of a Turing machine.
  10. A mathematical computing model simpler and less functional than a Turing machine, such as a finite state machine or pushdown automaton.
  11. Any mechanical device that utilizes moving parts.
  12. A system which acts according to internal goals and state and in response to external input data, according to rules or other instructions.
  13. Any object which contains an embedded computer, such as a kiosk, a robot, a stuffed toy that talks or responds to touch, a smart kitchen appliance, or a driverless car.

For the purposes of this paper, the first and last senses of machine are generally intended unless context makes it clear otherwise, although many of the senses will apply in many situations; it is more a matter of what level or area of detail is of interest in the context under discussion.
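
To make one of the simpler mathematical senses concrete, here is a minimal finite state machine (sense 10): a set of states and a transition table, with no memory beyond the current state. This one accepts binary strings containing an even number of 1s:

```python
# A finite state machine: states plus a transition table. Each input
# symbol moves the machine to its next state; the string is accepted
# if the machine ends in an accepting state.

TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd", ("odd", "1"): "even"}

def accepts(bits):
    state = "even"                      # start state
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
    return state == "even"              # "even" is the accepting state

print(accepts("1001"))   # True  (two 1s)
print(accepts("1011"))   # False (three 1s)
```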

Machine hardware

Hardware of a machine includes:

  • Central processor — the brain of a computer
  • Memory
  • Storage — disk or flash drive
  • Input/output electronics
  • Display
  • Input devices such as keyboard, mouse, stylus, touch screen, and microphone
  • Output devices such as printers and speakers (besides display)
  • Sensors
  • Effectors (think hands for a robot)
  • Network connectivity
  • GPU — specialized chip to accelerate graphics and image processing and similar data-intensive tasks
  • Clock
  • Physical enclosure
  • Cooling
  • Power supply and/or batteries
  • Blinking lights
  • Switches and buttons

Types of hardware/machines

  • Desktop computer
  • Laptop computer
  • Tablet
  • Smart phone
  • Wristband devices
  • Servers
  • Robots
  • Driverless vehicles
  • Robotic manufacturing (industrial robots)
  • Digital vending machines
  • Digital appliances
  • Internet of Things (IoT)

Internet of Things (IoT)

The basic idea is that every object has some combination of environmental data that it can sense or actions or effects that it can perform. By attaching or embedding a very small machine to each object, we can then develop applications that access data from each object and cause the object to do something, all enabled by connecting these objects to the Internet, frequently using a wireless networking connection. This arrangement of objects and distributed computing is referred to as the Internet of Things, or IoT for short.

AI is significant for IoT in two ways:

  • IoT gives AI a lot more resources to work with.
  • AI is needed to fully exploit the vast amounts of data and capabilities of IoT.

AI can interact with IoT in two ways:

  • AI embedded in IoT devices communicating with networked systems and services, which may or may not be AI systems.
  • AI systems communicating with more primitive, traditional software in IoT devices.

Application software

A user generally interacts with a discrete piece of computer software known as an application or app. Each application consists of code and related data. The code is the logic of the application.

User interface

Since human beings are unable to communicate directly using electrons, pulses of light, or even zeros and ones, a combination of hardware and software is needed to bridge the gap between the computer and the human senses. This hardware and software is collectively known as a user interface.

Commonly in modern computing this hardware consists of a graphical display, keyboard, pointing device, stylus, touch screen, and speakers and a microphone. There may be other devices and sensors such as discrete buttons, switches, lights, and so-called haptic or kinesthetic sensors.

The software side of the user interface consists of two major parts: the logic for how to present digital information on a display or speaker or other output device and specialized software to convert the complex electronic signals of input devices into digital information that an application can more easily process.

Middleware

Operating systems provide only a very basic level of software services for applications. In addition, as different as each application is, they can be divided into categories such that all applications in a given category have a significant level of logic and features in common. That common application logic can be factored out into a shared body of code. These shared bodies of code are collectively called middleware.

Middleware could be packaged as a shared library or a web service.

The point is that the code of each application is significantly smaller since it will not have to replicate the code that has been centralized in the middleware.

The software to support a modern digital user interface is a primary example of middleware.

Framework

A framework is a form of middleware which simplifies applications by allowing them to share logic of applications which have a similar structure.

Toolkit

A toolkit is a form of middleware which embodies significant chunks of application logic, thought of as individual tools, so that the application developer can focus simply on using the tools in the toolkit rather than reinventing tools from scratch.

Library

A library is simply the underlying software technology used to implement frameworks and toolkits. Alternatively, library is frequently used simply as a synonym for a toolkit.

Web services

Middleware can reside in one of two places: on the device running the application or on a separate machine connected over a network. A web service is middleware that resides on a web site on the Internet such that an application can communicate with the middleware using a network protocol such as TCP/IP or HTTP.

Cloud computing

  1. Computing services provided by a vendor, such as Amazon, Google, or Microsoft that take care of housing the physical machines in data centers, maintaining network connectivity, and manage the software and security of the machines, eliminating the need for application developers to spend the time and resources on those tasks.
  2. A collection of machines which provide the services needed for an application or collection of related applications. May be hosted by a cloud vendor, or may be hosted on-premise.
  3. Opposite of on-premise computing, where an organization maintains its own machines in its own building, which requires more staff and expense, but offers more control.

Proprietary software

  1. Software whose source code is kept hidden and whose usage is strictly controlled, typically for a license fee.
  2. Software produced by paid, professional staff.
  3. Opposite of open source software.

The only potential benefit of proprietary software is that the financial incentives may be sufficient to encourage dramatic improvements that might come at a slower or less-directed pace with OSS, which depends on the efforts of volunteers.

Open source software (OSS)

  1. Software whose source code is freely available, can be copied, modified, and distributed at will and without restriction, with no licensing fee.
  2. Software produced by unpaid volunteers.
  3. Opposite of proprietary software.

OSS is the preferred mode of developing software.

A key advantage of OSS is that a developer can easily and cheaply exploit improvements made by other developers.

Although participation is strictly voluntary, management at paid professional organizations may authorize their paid staff to work on OSS as part of their paid work. In this case, OSS projects will require that such paid work be explicitly donated to the OSS project at no charge.

Several organizations are major proponents and provide support for open source software:

  • Apache Software Foundation
  • SourceForge
  • GitHub

What is a platform?

  1. The machine and software environment in which an application or AI system operates.
  2. A software environment or service which supports the operation of an application.
  3. An AI system which can be used as a service to enable any number of applications to operate.

Emergence and evolution

Although people of a religious persuasion may sincerely believe that human existence, and human intelligence in particular, was the result of intentional design by God, many non-believers accept that intelligence is an emergent phenomenon: the product of Darwinian evolution, the combination of random mutation and a fitness function that determines the survival value of each mutation, repeated over and over across a broad and diverse population.

Meanwhile, most of the attention in AI today is on how to shortcut, jump-start, or otherwise leapfrog the slow process of evolution and achieve a significant fraction of human-level intelligence without the long, tortuous wait.

Evolution and emergence have great promise for the long-term future of AI, but not so much in the near future. Still, breakthroughs could occur at any moment — that’s how it is with evolution and emergence.

Emergent phenomenon

  1. A phenomenon produced through evolution (emerged from evolution) whose characteristics and behavior cannot be predicted in advance.

This is probably the most powerful capability sought by AI. Beyond mere learning, the ability to change in ways that can’t necessarily even be imagined before it actually occurs.

Biological life is considered an emergent phenomenon.

Intelligence is considered an emergent phenomenon.

We might have ideas about designing a superintelligence, but a true superintelligence would need to be an emergent phenomenon.

Genetic and evolutionary computing

A variety of computing strategies have been developed to enable AI systems to evolve in a Darwinian sense, with mutations which are evaluated with a fitness function.
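
A minimal sketch of the mutation-plus-fitness loop, in the style of a simple (1+1) evolutionary strategy: fitness here is just closeness to a target string, a toy stand-in for survival value:

```python
# Minimal sketch of Darwinian evolution in code: random mutation plus a
# fitness function, repeated over and over, keeping the fitter variant.
import random

random.seed(42)                      # fixed seed for reproducibility
TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly replace one character (the mutation step)."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)   # random start
for _ in range(20000):
    child = mutate(best)
    if fitness(child) >= fitness(best):   # selection: keep the fitter
        best = child
print(best)   # "intelligence"
```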

Details are beyond the scope of this paper.

Deep learning

Superficially, deep learning sounds really impressive. I mean, isn’t that exactly what we want the most advanced AI to do? (See my informal paper on Extreme AI.) Well, yes, but in truth so-called deep learning is far less than meets the eye.

Deep learning is more of a marketing term. More specifically, it is simply a rebranding of the concept of a neural network. Granted, neural networks are a perfectly valid and valuable AI technique, but there is so much more to learning than today’s neural network technologies are currently capable of. Put more simply, today’s neural network technologies implement a subset of what we think we understand about how interconnected neurons work in the human brain, but there is still so much more to do on the neuroscience front before we can even come close to claiming that we have mastered deep learning in the sense of human-level intelligence.

Researchers have had some notable successes with neural networks and the purported deep learning, but the bottom line is that the research is still at a very early stage.

Deep learning using neural networks has two fundamental problems today even for the use cases where the technology is in hand:

  1. Need for great attention to manually training the machine, carefully selecting the input training data.
  2. Need for great sophistication on the part of the human trainer, to carefully select the training data and carefully test that training has been successful.

That directed training is the opposite of the kind of undirected or self-directed truly deep learning that we would expect from a truly intelligent machine — at least in any decent science fiction story.

Still, neural networks are indeed a very valuable and useful AI technique.

Artificial neural network (ANN)

  1. A data structure in an AI application that mimics the function of interconnected neurons in the human brain.
  2. An approach to Deep Learning.
  3. Synonym for neural network, presuming the context is machine learning.

Technical details of artificial neural networking are beyond the scope of this informal paper.
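
Still, the basic building block is simple enough to sketch: a single artificial neuron computing a weighted sum plus a bias, passed through a step activation. The weights below are hand-picked so the neuron computes logical AND; a real network would learn its weights from training data:

```python
# A single artificial neuron: weighted inputs, a bias, and a step
# activation function. Networks of such units form neural networks.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0   # step activation

# weights/bias chosen by hand so the output is 1 only for (1, 1)
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], AND_WEIGHTS, AND_BIAS))
```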

Skynet of Terminator movie fame was an artificial neural network — “neural net-based artificial intelligence.” Science fiction, though.

Neural network

  1. The interconnected neurons in a brain.
  2. The capabilities of intelligence that arise from neural networks.
  3. Synonym for artificial neural network, when the context presumes machine learning.

Hidden variables and observed variables

The basic strategy in Deep Learning is that by analyzing enough data sets an algorithm can discern the rules governing the behavior exhibited by the data. The actual data is referred to as observed variables, while the underlying but not directly visible rules are referred to as hidden variables.

Deep Learning is essentially the process of discovering the hidden variables and their relationships to the observed variables, the data itself.

Learn the rules and you have unlocked the meaning in the data.
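
A minimal illustration of the idea: the (x, y) pairs below are the observed variables, and the slope and intercept of the rule that generated them are the hidden variables, recovered here by ordinary least squares rather than a neural network:

```python
# Observed vs. hidden variables: recover the hidden generating rule
# y = 2x + 1 from the observed data alone, via ordinary least squares.

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # generated by the hidden rule y = 2x + 1

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(slope, intercept)   # 2.0 1.0, the hidden rule, learned from data
```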

Data analysis, analytics, and data science

If there is one thing that computers can do much better than humans, it’s processing large volumes of data, performing complex calculations, and searching for patterns in the data, so-called data analysis or analytics. The current buzzword is Data Science.

Generally speaking, Data Science is the work of a Data Scientist rather than a capability of the software itself. The software is a tool for a Data Scientist.

I wouldn’t say that data analysis or analytics or Data Science are AI per se, but the concept can be employed as part of a larger AI effort.

If the complexity of the calculations is significant enough and appears surprising or magical in some sense, people may come to perceive that the analytics software is intelligent, in at least some minimal sense.

At best such techniques could be classified as Weak AI, task-specific AI, or domain-specific AI.

Granted some analytics may indeed rise to the level of AI.

None of this is meant to demean or belittle data analysis, analytics, or data science in any way, but the focus of this informal paper is artificial intelligence — attempts to approximate the level of intelligence of a human being.

Data mining

Scanning large bodies of information for useful information, especially complex patterns and relationships, is an interesting and fruitful application of AI.

That is not to say that traditional data mining rises to the level of being considered AI, but it could.

Signal processing

As a specialized use case of data analysis, a stream of data from a single source such as a sensor or a collection of such sources can be analyzed or processed to produce a sequence of digital events that could be of significance to an application.

As with IoT (itself a possible instance of signal processing), AI is significant for signal processing in two ways:

  • Signal processing gives AI a lot more data to work with.
  • AI is needed to fully exploit the vast amount of data coming from signal processing.
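
A minimal sketch of the transformation described above: a threshold detector that turns a raw stream of (made-up) sensor readings into discrete events, one per upward crossing:

```python
# Signal processing sketch: convert a raw sensor stream into digital
# events by detecting each upward crossing of a threshold.

THRESHOLD = 30.0

def detect_events(readings, threshold=THRESHOLD):
    events = []
    above = False
    for t, value in enumerate(readings):
        if value > threshold and not above:
            events.append(t)          # rising edge: signal crossed up
        above = value > threshold
    return events

temps = [21.5, 22.0, 31.2, 33.0, 24.9, 29.8, 35.1]   # invented samples
print(detect_events(temps))   # [2, 6]: two threshold crossings
```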

Statistics and statistical analysis

Statistical analysis of data was one of the earliest applications of computers. Generally speaking, we don’t commonly refer to statistics as being AI, but AI can indeed use statistics.

Population, sample

A full set of data is referred to as the population in statistics. Although computers can certainly handle large amounts of data, sometimes it is too difficult to obtain or quickly access the full population for the subject of interest, in which case we collect a sample or small subset of the data for the total population.

Sophisticated AI algorithms can frequently act intelligently with only a sample of data, provided that the sample is chosen wisely. Sometimes the AI algorithm is actually able to intelligently determine how to select the optimal or acceptable sample by itself, but in many cases a human must step in and direct the AI to the desired sample.
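
A quick sketch of why sampling works: even a 1% random sample can yield a usable estimate of a population statistic such as the mean:

```python
# Population vs. sample: estimate the population mean from a small
# random sample instead of processing all of the data.
import random

random.seed(7)
population = list(range(1, 100001))          # full population: 1..100000
sample = random.sample(population, 1000)     # 1% random sample

population_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(population_mean)   # 50000.5
print(sample_mean)       # close to the population mean, from 1% of the data
```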

Planning, operations, supply chains, scheduling, and optimization

Running any nontrivial organization or even a household can involve a lot of information, decisions, and juggling that can overwhelm mere mortals. Sophisticated software can ease the burden. Whether such systems are AI per se is a matter of debate.

But as AI systems begin to get involved in more complex tasks, such as driverless autonomous vehicles and intelligent agents for helping people simplify their lives, the connection to AI seems more obvious, not so much for the discrete operations themselves, but in a coordinating role.

As things stand now, software for these operations is integrated only to a limited degree and requires carefully selected, carefully trained, very diligent, and expensive staff to operate these systems. That’s where AI can come in, in theory.

Business intelligence

No, business intelligence (BI) is not AI for business. Although, I suppose you could say that as a marketing message. Rather, the main point of BI is data analysis and analytics to gain insight into business matters such as sales, products, marketing, customers, etc.

Generally, BI consists of tools that business analysts can use to tease insights (intelligence) from raw data.

BI tools don’t do anything intelligently by themselves — it’s up to the user to configure and use the tools in thoughtful and creative ways to get intelligent results.

That said, it is very possible to apply AI to BI, although that’s not the norm at the present time.

Autonomy

The ability to work unattended and make decisions independent of any external intelligent entity over an extended period of time is a hallmark of more advanced AI systems. This is autonomy. Weaker AI systems work at the behest of or in close cooperation with some superior intelligent entity. That’s less autonomy.

Agency

Unfortunately there are two distinct, divergent definitions of agency:

  1. Acting on the behalf of another entity.
  2. A sense that one is acting in one’s own interests.

The latter form comports more with philosophy and social science. It is more in line with autonomy.

Autonomy is important, but for the foreseeable future, however autonomous artificially intelligent entities may be, they will still be working at the behest of some superior intelligent entity. They are agents of that superior entity. That autonomous vehicle has somebody instructing it as to its destination or purpose.

It would be more accurate to refer to semi-autonomous systems, but the current usage is what it is. Ditto for agency.


Agent

  1. A simple, independent computer program which senses its environment, does some amount of processing, maintains state or memory, and produces some amount of output.
  2. Functionally equivalent to a neuron of a human brain (or of any other animal, for that matter).
  3. Synonym for software agent.
  4. Synonym for intelligent agent.

The general idea is not that a single agent is very intelligent, but that a significant collection of relatively simple agents would collectively exhibit some significant degree of intelligence.
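
To make that idea concrete, here is a toy sketch (all names invented for illustration): each agent senses a single simple feature of its input and keeps a little state, and a "society" of them tallies votes into one collective judgment. No single agent is intelligent, but the collection produces a useful answer.

```python
class FeatureAgent:
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate   # the one thing this agent can sense
        self.memory = []             # minimal internal state

    def sense(self, observation):
        vote = self.predicate(observation)
        self.memory.append(vote)     # remember what we saw
        return vote

def society_judgment(agents, observation):
    """Tally the votes of many simple agents into one collective answer."""
    votes = [agent.sense(observation) for agent in agents]
    return sum(votes) > len(votes) / 2

# A toy "is this text shouting?" society: none of these checks is smart
# on its own, but together they give a reasonable answer.
agents = [
    FeatureAgent("caps", lambda s: s.isupper()),
    FeatureAgent("bangs", lambda s: s.count("!") >= 2),
    FeatureAgent("short", lambda s: len(s) < 40),
]

print(society_judgment(agents, "STOP RIGHT THERE!!"))       # True
print(society_judgment(agents, "a calm, quiet sentence."))  # False
```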

Agents have not yet been effectively utilized by existing AI systems, but they have great potential for more intelligent (or at least more functional) AI systems, especially for distributed AI.

See also Marvin Minsky’s book Society of Mind.

Software agent

A software agent is a computer program which works toward goals (as opposed to discrete tasks) in a dynamic environment (where change is the norm) on behalf of another entity (human or computational), possibly over an extended period of time, without continuous direct supervision or control, and exhibits a significant degree of flexibility and even creativity in how it transforms goals into action.
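
The definition above can be sketched in a few lines of Python (a hypothetical thermostat agent, invented purely for illustration): the agent is given a goal, not a task list, and decides its own next action each step as it senses a changing environment.

```python
def thermostat_agent(environment, goal_temp, max_steps=50):
    """Work toward a goal state; the exact actions taken are up to the agent."""
    actions = []
    for _ in range(max_steps):
        temp = environment["temp"]          # sense the dynamic environment
        if abs(temp - goal_temp) < 0.5:     # goal reached, not "task completed"
            break
        action = "heat" if temp < goal_temp else "cool"
        actions.append(action)
        environment["temp"] += 1 if action == "heat" else -1  # act on the world

    return actions

env = {"temp": 17}
taken = thermostat_agent(env, goal_temp=21)
print(taken)          # ['heat', 'heat', 'heat', 'heat']
print(env["temp"])    # 21
```

Note that nothing supervises the agent step by step; it is handed a goal and left to transform it into actions on its own.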

Intelligent agents

  1. Model for AI, especially the interaction of multiple, independent AI systems.
  2. Synonym for software agent.
  3. Synonym for intelligent personal agent.
  4. A software agent that utilizes AI.
  5. A software agent that exhibits intelligence.
  6. A software agent supporting the beliefs, desires, and intentions model for behavior.

Intelligent agents don’t technically need to use AI, but commonly they do, at least a relatively weak form of it.

Rich semantic infrastructure needed for intelligent agents to thrive

Making intelligent software agents both powerful and easy to construct, manage, and maintain will require a very rich semantic infrastructure.

Without such a rich semantic infrastructure, the bulk of the intelligence would have to be inside the individual agents, or very cleverly encoded by the designer, or even more cleverly encoded in an armada of relatively dumb distributed agents that offer collective intelligence, but all of those approaches would put intelligent software agents far beyond the reach of average users or even average software professionals or average computer scientists.

The alternative is to leverage all of that intellect and invest it in producing an intelligent semantic infrastructure that relatively dumb software agents can then feed off of. Simple-minded agents will effectively gain intelligence by being able to stand on the shoulders of giants. How to design and construct such a rich semantic infrastructure is an open question.

The richness of the semantic infrastructure has two dimensions, first to enable a single agent to act as if it were intelligent without the need to hard-wire hard AI into each agent, and also to enable multiple agents to communicate, cooperate, and collaborate, again as if they were intelligent but without requiring hard AI in each agent.

Some of the levels of richness that can be used to characterize a semantic infrastructure:

  • Fully Automatic — intelligent actions occur within the infrastructure itself without any explicit action of agents
  • Goal-Oriented Processing — infrastructure processes events and conditions based on goals that agents register
  • Goal-Oriented Triggering — agents register very high-level goals and the infrastructure initiates agent activity as needed
  • Task-Oriented Triggering — agents register for events and conditions and are notified, much as database triggers
  • Very High-Level Scripting — agents have explicit code to check for conditions, but little programming skill is needed
  • Traditional Scripting — agents are scripted using scripting languages familiar to today’s developers
  • Hard-Coded Agents — agents are carefully hand-coded for accuracy and performance using programming languages such as Java or C++
  • Web Services — agents rely on API-level services provided by carefully selected and coded intelligent web servers
  • Proprietary Services — Only a limited set of services are available to the average agent on a cost/license basis
  • Custom Network — a powerful distributed computing approach, but expensive, not leveraged, difficult to plan, operate, and maintain
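
As a minimal sketch of the Task-Oriented Triggering level (all class and event names here are invented for illustration): agents register interest in events with the infrastructure, which notifies them when a matching event occurs, much as database triggers fire on updates.

```python
class SemanticInfrastructure:
    def __init__(self):
        self.registrations = {}   # event name -> list of agent callbacks

    def register(self, event_name, agent_callback):
        """An agent declares interest in a kind of event or condition."""
        self.registrations.setdefault(event_name, []).append(agent_callback)

    def publish(self, event_name, payload):
        """Fire every agent that registered for this kind of event."""
        for callback in self.registrations.get(event_name, []):
            callback(payload)

infra = SemanticInfrastructure()
log = []
infra.register("price_drop", lambda p: log.append("buy " + p["item"]))
infra.register("price_drop", lambda p: log.append("notify user about " + p["item"]))

infra.publish("price_drop", {"item": "flight to Paris"})
print(log)  # ['buy flight to Paris', 'notify user about flight to Paris']
```

The point of the richer levels above this one is that ever more of the logic migrates out of the agents and into the infrastructure itself.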

This is really only one dimension of richness, a measure of how information is processed.

Another dimension would be the richness of the information itself, such as data, information, knowledge, wisdom, and various degrees within each of those categories. In other words, what units of information are being processed by agents and the infrastructure.

The goal is to get to some reasonably high-level form of knowledge as the information unit. The Semantic Web uses URIs, triples, and graphs, which is as good a starting point as any, but I suspect that a much higher-level unit of knowledge is needed to achieve a semantic infrastructure rich enough to support truly intelligent software agents that can operate at the goal-oriented infrastructure level and be reasonably easy to conceptualize, design, develop, debug, deploy, manage, and maintain, and to do all of that with a significantly lower level of skill than even an average software professional. End-users should be able to build and use such intelligent agents.

What would such a rich semantic infrastructure actually look like and how would it be engineered? Ah, there’s the rub — nobody really knows. It’s a research area, except for the fact that nobody is actually doing any significant research in this area yet. I have a lot of fragments of ideas in my head, but there is still a lot more work needed.

My motivation here is simply to describe my vision, in the hope that it might inspire others to invest some effort and enthusiasm in the essential concepts of the vision.

Rich semantic infrastructure for group mind and distributed AI

The same concept of a rich semantic infrastructure that is needed for intelligent agents to thrive applies to group mind and distributed AI.

The richer the infrastructure, the greater the intellectual leverage between the distributed components of the AI system.

Intelligent personal assistants

  1. An intelligent agent focused on providing personal assistance to an individual person.
  2. Synonym for chatbot.

An intelligent personal assistant doesn’t have to be as interactive and chatty as a chatbot, but that is common.

Amazon Echo (Alexa), Apple Siri, and Microsoft Cortana are intelligent personal assistants.


Chatbot

  1. An intelligent personal assistant that responds to voice or typed commands of a person, responding with voice or text, possibly in conjunction with some action.
  2. An interactive question-answer program.

Amazon Echo (Alexa), Apple Siri, and Microsoft Cortana are chatbots.

ELIZA was one of the earliest chatbots, created at the MIT AI Lab in the mid-1960s.

Many web sites utilize chatbots for online customer service. Many sites also use specialized instant messenger features for the same purpose, but with real, live people on the other end, while a true chatbot is a 100% automatic, synthesized virtual intelligence — AI.

Not all chatbots can perform actions beyond simply engaging in conversation, whereas intelligent personal assistants are expected to do something useful (functional) besides communicate.

Life Agents — Software Agents to Help You Live a Better Life

Life Agents are a hypothetical category of intelligent software agents whose purpose is simply to help you live a better life without you having to spend time figuring out how to do all of this on your own. Life agents don’t yet exist, but their prospect is very exciting.

Some life agents may interact with you on occasion, but the primary focus is on agents which are constantly running in the background, autonomously, taking care of details of your life without your intervention generally being required.

Life agents are not simply to deal with specific tasks which are right in front of you in the here and now, but primarily to deal with the entire flow of your life, all of the stages of your life, from cradle to grave and before and beyond without your needing to plan out the acquisition, control, and management of all of these agents.

As transitions are detected in your life, by the life agents and the rich semantic data infrastructure in which they operate, fresh life agents can be automatically activated to assist you with your new stage of life.

This is not about designing one single mega-agent with ultimate AI intelligence, but rather about designing a very deep and very rich semantic data infrastructure that encompasses every facet of your life, in digital form, so that a vast swarm of personal software agents can each tackle one small bit of the many details of your life, both today and in all of its many stages.

Life mentors would be software agents which have been programmed with knowledge about your career and life plans and can offer guidance along the way. These life mentors can offer advice and assistance with the many forms of planning that occur in our lives, including nutrition, health, education, housing, financial affairs, career, family, etc.

A life agent is closely related to a life mentor, but the key difference is that a life mentor is more of an intelligent assistant that gives you feedback and suggestions and advice, but life agents can also directly do useful things for you that you may not even know or care about.

Put a different way, a life mentor would address tough, growth-oriented conscious decisions, whereas a life agent can also address details of your life that may otherwise be subconscious or even unconscious.

Lifelong learning is a key goal of life agents. Software agent technology enables a richer and deeper semantic modeling for the learning process which can provide a more robust level of support for people as they transition through the many stages of learning throughout their lives. Lifelong learning will become a concept that is directly supported by software agent applications rather than a concept which must be implemented manually and explicitly by people outside of the life agent software system.

Organization of mind

There is no shortage of debate or theories about how the human mind is organized, or how an AI system should be organized for that matter.

All we can say for certain is that there are a variety of mental functions or mental processes. How they are organized, how they interact, where they are located, and where the lines that separate them lie is beyond the scope of this paper, remains a matter of dispute, and may stay unresolved for the foreseeable future.

That said, clearly our genes and biological processes are quite adept at resolving the matter.

And software developers are becoming ever more adept at deriving their own resolutions as well. There may not be any single best organization.


Training

  1. Facilitating the learning of relatively rote mechanical steps or patterns, without regard to their deeper semantic meaning.
  2. Presentation of selected samples of data to an intelligent entity (person or AI), from which the intelligent entity is expected to discover patterns that will be interpreted as knowledge that can be applied to similar future tasks such as recognition.

Training data can range from being very modest and narrowly tailored, all the way up to very large and very diverse, depending on the complexity of the future tasks to be supported.

Training of AI systems requires a relatively sophisticated technical professional, careful attention to selection and presentation of the training data, and careful monitoring of the training process to assure that the AI system has successfully learned the right lessons from the training and training data. Even then, the recognition process can be problematic if a situation arises where the data is categorically distinct from any of the training data.
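
As an illustrative sketch of definition 2 above (not any particular AI system's method): selected samples are presented, and the system must generalize from them to similar future inputs. A 1-nearest-neighbor rule is about the simplest possible case.

```python
def train(samples):
    """'Training' here is simply remembering the presented samples."""
    return list(samples)

def recognize(model, point):
    """Label a new point by the closest training sample."""
    def distance(sample):
        (x, y), _label = sample
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    _, label = min(model, key=distance)
    return label

# Carefully selected training data for two categories.
model = train([((0, 0), "cold"), ((1, 1), "cold"),
               ((9, 9), "hot"), ((10, 10), "hot")])

print(recognize(model, (0.5, 0.2)))   # 'cold'
print(recognize(model, (8, 9)))       # 'hot'

# A point far from all training data still gets *some* label, which is
# exactly the "categorically distinct from the training data" problem
# noted above.
print(recognize(model, (100, -100)))
```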

Teaching would be preferable for AI systems, giving the system just enough training to begin learning on its own, but that is beyond the capability of current AI technology. We’ll have to stick with training for the foreseeable short-term future.


Teaching

  1. Imparting knowledge and skills from one person to another, from a teacher to a student. Traditional teaching.
  2. Imparting knowledge from a human to a machine. Teaching a machine.
  3. Imparting knowledge from a machine to a human. Machine as a teacher.
  4. Imparting knowledge from a machine to a machine. Machines teaching machines.

The primary capabilities we are concerned with here relate to facilitation of learning:

  1. Presentation of knowledge.
  2. Guided opportunities to learn by observing.
  3. Exercises or problems to solve.
  4. Explanation of underlying mechanisms, reasoning, meaning, and significance.
  5. Guidance on application.

Whether it is a human or a machine on either side of the equation ultimately doesn’t matter, although it is a fair point that humans are likely to be more forgiving and tolerant of poor or mediocre teaching and learning, while machines will require both great teaching and great learning.

Teaching a machine

Training works as an approach to Weak AI, but at some point the tedium, complexity, and expense of manually training each AI system will begin to overwhelm even the most patient and enthusiastic developers of AI systems.

At some stage in the future we will finally have the rudiments of a significant fraction of human-level learning available in practical AI systems. At that point, teaching of AI systems will become practical.

Machine as a teacher

Computer-Aided Instruction (CAI) was once a thing. In fact, I personally took an introduction to computing as a CAI course at my local community college when I was in high school, back in 1970. Really leading edge stuff — only five of us took it. It was online in a sense — a remote timesharing terminal connected to a mainframe computer using something called an acoustic coupler, but course delivery and interaction was 100% automated. It was amazing stuff.

Back in the 1990’s I worked with a small startup to produce an authoring and delivery system for corporate training using hardware-assisted video on PCs. The general model was to present a sequence of short video clips with PowerPoint-style text and bullet points next to them (including build lists) and then quiz the user about each clip and offer remediation (alternate clips) before allowing the user to advance to the next clip. Great stuff for regulatory compliance and critical safety training. The only real downside was the expense and tedium of authoring quality and effective content.

Online courses are popular today, but frequently they are simply traditional lectures, readings, and problem sets, just situated online. YouTube videos are popular as well.

Interactive courses comparable to CAI and my corporate training system are available to some degree, but they are incredibly expensive to develop and deploy.

Still, there is great potential for applying AI to teaching raw content. The big opportunity is effectively assessing whether the student has learned the material and then appropriately remediating when that is not the case. Sure, eventually a human may need to intervene for extreme cases, but if you can handle 99% or 95% or 90% of the students with the automated AI teacher, that’s a huge win.

This does not in any way denigrate the many other roles that human teachers play in the classroom, but it does free them to shift their attention from basic knowledge acquisition to those higher-value roles.

Machines teaching machines

Initially, teaching by clever and sophisticated humans will be required, but the goal is to develop AI technology that is capable of teaching, not just people but other machines as well. At that stage we will be capable of producing machines which can teach other machines, without the need for a human in the loop.

Combining machines teaching machines with machines able to learn by themselves will allow us to produce a virtuous spiral in which each successive generation of machine learners has the potential to be more intelligent than its predecessor. I call this closing the loop and opening the spiral or Extreme AI. See my paper: Extreme AI: Closing the Loop and Opening the Spiral.

Tabula rasa

Would it be possible for a machine to learn everything or even anything if it started only with a completely blank slate (tabula rasa)? Maybe, or maybe not.

It’s an interesting question, but the more interesting practical question is the minimum or optimal amount of initial pre-seeded knowledge that a machine needs to have to become productive as quickly as possible.

Common sense

Beyond domain-specific knowledge, it is generally believed that anything beyond a Weak AI system needs human-level common sense knowledge, reasoning, and judgment to do any significant level of thinking or reasoning, especially if the AI system will have to interact with humans and their artifacts.

Commonsense knowledge

  1. The body of common sense knowledge for an intelligent entity.
  2. Synonym for commonsense knowledge base.
  3. Synonym for common sense.

Commonsense knowledge base

  1. The collection of knowledge in an intelligent entity which constitutes its commonsense knowledge.
  2. The ability for an intelligent entity to utilize common sense in its reasoning.

Expert system

  1. An AI system that endeavors to mimic the knowledge and skills of one or more human experts.

Knowledge engineering is required to endow the expert system with its knowledge.

Expert systems do not tend to do any learning on their own.

An expert system is only as good as the quality and completeness of the knowledge engineering that goes into it.

Knowledge engineering

  1. The process of collecting and organizing the knowledge of human experts for input into an expert system.
  2. The preparation of organized knowledge for presentation to an expert system.
  3. The overall process of endowing an expert system with the knowledge necessary for it to mimic the desired expert.

There are several difficulties with knowledge engineering of expert systems:

  • The experts on whom the system is based may have incomplete knowledge.
  • The experts may be wrong on some matters.
  • The knowledge engineers may misunderstand the experts.
  • The knowledge engineers may incorrectly encode the knowledge as it is presented to the expert system.
  • The knowledge engineers may not perform an adequate testing process to assure the quality of the knowledge imparted to the expert system
  • The experts may possess tacit knowledge — knowledge that they possess but are unable to express to the knowledge engineers.

Society of mind

Society of Mind is a theory of natural intelligence developed by MIT AI researcher Marvin Minsky which posits that the mind is composed of a large number of independent agents which cooperate as a society to produce the effects that we call mind.

Kurzweil’s Singularity

Ray Kurzweil asserts that technological advances in computers, robotics, AI, and genetic engineering are accelerating and will soon converge in what he calls a technological Singularity that will produce a superintelligence which will transcend human existence.

To the best of my knowledge, this is the most extreme prophecy for the future of AI, so far.

Somewhere along the line, he predicts that machines will finally achieve human-level intelligence. And, that they will keep going, greatly exceeding human intelligence.

But, clearly we are not there yet, not even close, so real people at real organizations seeking to deploy AI in the next 5–10 years won’t need to consider the ramifications of Kurzweil’s Singularity from a practical perspective.

Still, it is an intriguing proposition. The incorporation of biological processes into computing is especially intriguing.


Programming

  1. Careful, laborious preparation of code or data sequences sufficient for a human or machine to mechanically follow the sequences to achieve a result, calculate a quantity, process an input, and/or produce an output.
  2. Basic coding of software for execution on a machine.
  3. Any hard-coded logic or knowledge that an AI system is expected to know from the beginning.

Generally, it would be preferable to use a training process to seed the knowledge base of the intelligent entity, but in some cases this can be too difficult, too time consuming, or involve some sort of perceived magic that the intelligent entity cannot be expected to be able to learn on its own.

A person who programs is a programmer, also known as a software developer. Or simply a coder.


Code

Code (or coding) is the sequence of instructions that a programmer gives to a system to cause it to accept data, process it, store it, and output it.

The development of code is variously called:

  • Coding
  • Programming
  • Software development
  • Software engineering

A person developing code is variously known as a:

  • Coder
  • Programmer
  • Software developer
  • Software engineer

Software development and software engineering involve a lot more than just coding, such as:

  • Preparation of requirements specification for what the software should do.
  • Architectural design of the software.
  • Detailed functional specification of how users will use the software.
  • Detailed design of algorithms, data structures, and modules.
  • Finally, the coding.
  • Debugging.
  • Testing.
  • Documentation.
  • Packaging for distribution.

Source code

Code is written as text in some programming language, such as C++, Java, Python, Go, LISP, etc.

The text of the code as written by a programmer is referred to as source code.

Machine code

In general, a machine cannot directly execute source code. As with natural language, the machine must first parse the source code text into a parse tree, determine its meaning, and then transform that parsed meaning into machine code, which can then be directly executed by the machine.

This translation process is performed by special software known as a compiler.

Computer program

The code for a software application or system is most commonly organized and packaged as a computer program, or possibly a collection of interacting computer programs.

The source code must first be compiled into machine code.


Design

Design is the blueprint for code — the overall approach to what the code or computer program is trying to do.


Algorithm

An algorithm is a detailed, methodical, step by step sequence of instructions for calculating, manipulating, or generating a desired output from some specified input.

Algorithms are usually designed using some notation other than a programming language, such as a set of mathematical equations coupled with structured natural language statements about the steps required. Or, a specialized language called pseudo-code or a graphical diagram called a flowchart, or many similar techniques, may be used to represent an algorithm, always focusing on the higher-level, more abstract nature of the algorithm rather than the much more detailed coding that will eventually have to be performed to put the algorithm into a form that a computer can actually execute.
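
For a tiny example of that distinction, consider Euclid's greatest-common-divisor algorithm. It is usually stated abstractly ("while b is not zero, replace the pair (a, b) with (b, a mod b); when b reaches zero, a is the answer") and only later transformed into code in some particular language:

```python
def gcd(a, b):
    """Direct translation of the abstract algorithm into Python."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # 6
```

The abstract statement is the algorithm; the function above is merely one of many possible codings of it.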

To be clear:

  • Code is based on an algorithm.
  • An algorithm must be transformed into code to be run.

Data structures may be used to store intermediate results of operations within the algorithm.

The input to an algorithm may be a data structure of arbitrary complexity that was output by a previous algorithm.

Similarly, an algorithm can output a data structure of arbitrary complexity with the intent that it will be processed by other algorithms.

Generally, a single algorithm computes a single function, but a single function could be comprised of an arbitrarily complex combination of other functions.

A single computer program may employ a fairly large number of algorithms.

Algorithms combine discrete operations with control flow logic, permitting sequences of operations to be conditional or repeated based on the values of the data being processed.

Technically, an algorithm must eventually complete its processing steps, known as returning or halting, but some algorithms, such as for an operating system, applications such as a word processor or smart phone app, or user interfaces are designed to essentially run forever, only stopping if the user requests it to stop.

Coders do not generally develop their own algorithms. Typically they borrow an algorithm from:

  • A textbook
  • Some existing code
  • A fellow coder
  • Their boss
  • The architect who designed the software they are coding
  • An online web site

Not even software developers, the more elite of programmers, develop all or even most of the algorithms in their programs. Many of the more sophisticated algorithms were originally designed by mathematicians, computer scientists, or other scientists, and the job of the developer (coder) is simply to translate the algorithm into the specific programming language they are using to develop their program.

That said, most software developers have fairly regular need to develop their own algorithms or to adapt existing algorithms to meet new or different requirements.

Deterministic algorithm

  1. An algorithm which will produce the same results if rerun with the same input data.
  2. Characteristic of most traditional, non-AI applications and algorithms.
  3. Opposite of a nondeterministic algorithm.

The catch for practical applications is that input data must be interpreted as including all environmental data that the algorithm might be aware of or depend on.

For example, an algorithm which utilizes a web service to retrieve the current air temperature will technically be nondeterministic since the external temperature may change, even though the code of the algorithm has no nondeterministic elements itself.
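
A minimal sketch of the distinction, with a random surcharge standing in for the external, changing data source (the function names are invented for illustration):

```python
import random

def deterministic_total(prices):
    """Same input data, same result, on every rerun."""
    return sum(prices)

def nondeterministic_total(prices):
    """Depends on something outside its input data, so reruns can differ."""
    surcharge = random.uniform(0, 1)   # stand-in for an external service
    return sum(prices) + surcharge

# Deterministic: two runs with the same input agree exactly.
print(deterministic_total([1, 2, 3]) == deterministic_total([1, 2, 3]))  # True

# Nondeterministic: two runs with the same input may not agree.
print(nondeterministic_total([1, 2, 3]))
print(nondeterministic_total([1, 2, 3]))
```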

Nondeterministic algorithm

  1. An algorithm which may produce different results if rerun with the same input data.
  2. Common characteristic of many, but not all, AI applications and algorithms.
  3. Opposite of a deterministic algorithm.

Nondeterminism can crop up in many ways, such as:

  • Use of random numbers to influence pathways through the algorithm.
  • Dependency on external data sources and services beyond the direct input data to the algorithm.
  • Dependency on environmental data such as current date and time.
  • Uninitialized variables which are equivalent to additional input data.
  • Algorithms designed for genetic mutation and emergence.

We can define two forms of nondeterminism:

  • Intentional, by design.
  • Unintentional. May or may not be a bug or problematic.

Nondeterminism is a very powerful capability, but must be used carefully, and can easily be misused.


Uncertainty

Uncertainty is the norm for many AI systems. You can exert as much effort and diligence as you can muster to get accurate data and knowledge, but ultimately there will be gaps or deviations, sometimes small, sometimes large.

Various techniques have been developed for coping with uncertainty, such as fuzzy logic.

Fuzzy logic

Traditional algorithms prefer data and conditions that are either precisely true or precisely false, but not maybe true or maybe false. Similarly, quantities and measurements are preferably exact and precise, not approximate, nearly, almost, or a little more or a little less.

AI algorithms on the other hand almost always have to deal with such fuzzy or uncertain scenarios. In fact, that may be one of the most common reasons that AI is needed for a given application.

Fuzzy logic is the answer to uncertainty about data and conditions.

Details of fuzzy logic are beyond the scope of this paper. The point here is simply to be aware that an AI system either has it or needs it.
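
That said, a single membership function gives a taste of the idea (a toy sketch, not a full fuzzy-logic system): instead of a hard true/false "is it warm?", membership is a degree between 0 and 1.

```python
def warm_membership(temp_c):
    """Degree to which a temperature counts as 'warm' (0.0 to 1.0)."""
    if temp_c <= 15:
        return 0.0                  # definitely not warm
    if temp_c >= 25:
        return 1.0                  # definitely warm
    return (temp_c - 15) / 10       # linear ramp between the two anchors

print(warm_membership(10))   # 0.0
print(warm_membership(20))   # 0.5 (maybe warm)
print(warm_membership(30))   # 1.0
```

A traditional algorithm would be forced to pick a single cutoff; the fuzzy version can reason with the in-between degrees directly.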

Quantum mechanical effects at the macro level

It is not known for certain whether quantum mechanical fluctuations at the atomic or subatomic level have some discernible impact on the visible and measurable world. The issue is whether knowledge, especially in an AI system, must take into account the prospect of quantum effects when reasoning about phenomena in the real world.

Generally, quantum effects won’t affect most real-world processes, but they may impact exactly how accurate or precise predictions of an AI system can be.

The bottom line is that AI systems will need to be prepared for the fact that precise and accurate predictions for real-world phenomena can be problematic. The solution or workaround is for AI systems to tolerate at least some degree of deviation from predicted values.

Data structures

  1. A structured arrangement for organizing a collection of data for convenient and efficient access by code.
  2. A companion to algorithms and code.

Some algorithms may dictate the overall structure of a data structure, but generally a data structure is dictated by the design and code which implements one or more algorithms.

A given data structure may be used strictly for a single algorithm, or multiple algorithms may operate on the data stored in a particular data structure.

Common data structures are arrays, lists, sets, maps, and collections.

Data structures can range in complexity from very simple, with just a handful of data items, up to very complex with very intricate arrangements of many data items.

Data structures tend to be transient, meaning they only exist while a particular computer program is running.

Data can be made persistent by writing it to a file or storing it in a database, so that it can later be read from that file or retrieved from that database.
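
A quick sketch of that transient/persistent distinction (the file name is invented for illustration): the dict exists only while the program runs; writing it to a file makes it persistent so a later run can read it back.

```python
import json
import os
import tempfile

profile = {"name": "Alice", "visits": 3}          # transient data structure

path = os.path.join(tempfile.gettempdir(), "profile.json")
with open(path, "w") as f:
    json.dump(profile, f)                          # persist it to a file

with open(path) as f:
    restored = json.load(f)                        # later: read it back

print(restored == profile)  # True
```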

Data types

Whatever the complexity of a data structure, it is always a composite of smaller data structures, with the smallest data structures being discrete data values such as those defined by the architecture of the machine and its software.

Individual data values will have a type or data type. Modern computer hardware and programming languages commonly support such data types as:

  • Integers
  • Real numbers (floating point)
  • Boolean truth values
  • Character strings, which can be natural language text, words, names, symbols, and other identifiers and codes
  • Images, typically comprised of pixels, which may be black and white, levels of gray, or full color
  • Sound — speech, music, natural sounds, and artificially generated sounds
  • Video
  • Arbitrary binary data — zeroes and ones as far as the eye can see

Data of any greater complexity can be accommodated by creating data structures with any number or arrangement of data values.


Software objects

Real-world objects can be represented in an AI system as software objects in the form of data structures which contain all of the perceived characteristics of the real-world object, including any images or other media or content.

The raw content may be stored in addition to any symbolic equivalent of the content since AI algorithms tend to want to work from the symbolic content.
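
A hypothetical sketch of such a software object (the field names and fake pixel data are invented for illustration): it holds both the raw content and the symbolic equivalents that AI algorithms prefer to reason over.

```python
stop_sign = {
    "kind": "traffic_sign",                 # symbolic content
    "label": "STOP",
    "color": "red",
    "sides": 8,
    "raw_image": [[255, 0, 0]] * 4,         # stand-in for raw pixel data
}

# Symbolic reasoning works on the attributes, not the raw pixels.
def must_halt(obj):
    return obj["kind"] == "traffic_sign" and obj["label"] == "STOP"

print(must_halt(stop_sign))  # True
```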


Metadata

  1. Information about data, information, or knowledge.

Metadata provides contextual information about data, information, or knowledge, such as:

  • Source — location, date, time, and other parameters
  • Owner
  • Tagging provided at time of capture or at some later stage of processing
  • Notes to aid users of the data

Metadata can have two distinct relationships to the information:

  • Internal — embedded within the data, such as an image or video.
  • External — maintained separately, in parallel with the information to which it applies.

Traditionally AI systems dealt with the raw information itself, but more advanced AI systems will gain advantages by understanding and reasoning about the metadata as well.
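
A sketch of the two relationships above (all field names invented for illustration): metadata embedded inside the data item itself, versus metadata kept in a separate store keyed to the data it describes.

```python
# Internal: embedded within the data, as with an image or video file.
photo = {
    "pixels": [[0, 0, 0]],                  # the data itself
    "meta": {"source": "camera-1", "taken": "2017-06-13"},
}

# External: maintained separately, in parallel with the information.
documents = {"doc-42": "Quarterly sales report"}
external_meta = {"doc-42": {"owner": "finance", "tags": ["sales", "Q2"]}}

print(photo["meta"]["source"])              # camera-1
print(external_meta["doc-42"]["owner"])     # finance
```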

Reference data, entity data, and transactional data

Traditional data or information can be categorized into three general categories:

  1. Reference data. Relatively static. Shared across many domains. Rarely changes.
  2. Entity data. Mostly static with some dynamic content. Occasional changes.
  3. Transactional data. Very dynamic. Individual transactions may not necessarily be updated, but new transactions tend to flow in at a significant rate.

Reference data:

  1. Data, information, or knowledge that is relatively static and true across a wide range of applications.
  2. Distinct from transactional and entity data.

Examples of reference data include:

  • Lists of states and countries and their names, 2-character codes, capitals, etc.
  • List of vendors that staff of an enterprise are permitted to work with.
  • Maps.
  • Lists of cities and towns.
  • Lists of companies.

Entity data:

  1. Data, information, or knowledge relevant to a particular entity.

Such as information about a user, customer, vendor, or product.

Transactional data:

  1. Data, information, or knowledge relevant to a particular matter, situation, or event.
  2. Distinct from reference and entity data.

Such as information about a specific trip, sale, court case, law enforcement investigation, or interaction.

Static is a relative term. Occasionally new states or countries may come into existence or even change their names. Population is very dynamic, but may only be updated on a somewhat infrequent basis.

True AI is generally not needed to process reference data or even most traditional entity or transactional data, but reference data and traditional entity and transactional data is very valuable to AI systems, especially more advanced AI systems. The volume and complexity of transactional data is amenable to fairly sophisticated AI processing.

Many AI systems may view entity data as if it were static reference data. For example, a driverless vehicle will produce a significant stream of transactional data, but little if any change to entity data.
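The three categories above can be made concrete with a small Python sketch (the records and field names are hypothetical):

```python
# Reference data: relatively static, shared across many domains.
reference_data = {"US": {"name": "United States", "capital": "Washington"}}

# Entity data: mostly static, with occasional changes.
entity_data = {"customer-17": {"name": "Acme Corp", "status": "active"}}

# Transactional data: new records flow in at a significant rate.
transactional_data = [
    {"trip_id": 1001, "customer": "customer-17", "distance_km": 12.4},
    {"trip_id": 1002, "customer": "customer-17", "distance_km": 3.1},
]
```

Note how the transactions reference the entity, which in turn could reference the static reference data.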

Schemas and data models

  1. Formal description or model of the structure and type of data in a database.
  2. Data model for data in a database.
  3. Synonym for data model.

A typical database, such as an SQL database, will be comprised of any number of tables, each table having any number of rows, with each row comprised of any number of columns, each column having a name, data type, and other attributes.

The database software (the DBMS) uses the schema to structure and organize raw information, both for efficient storage and for convenient and efficient access.
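A minimal sketch of such a schema, using Python's built-in sqlite3 module (the table and column names are illustrative, not prescriptive): each column has a name and a data type, and the DBMS uses that description to store and retrieve the rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The schema: a table whose columns each have a name and a data type.
conn.execute("""
    CREATE TABLE customers (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        balance REAL DEFAULT 0.0
    )
""")

# The DBMS structures storage and access according to the schema.
conn.execute("INSERT INTO customers (name, balance) VALUES (?, ?)",
             ("Acme Corp", 125.50))
row = conn.execute("SELECT name, balance FROM customers").fetchone()
```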

Data modeling

  1. Development of schemas (data models) to represent data related to a problem being solved.

There are three key factors or stages in data modeling:

  1. Understanding the nature of the phenomenon or problem to be modeled in the real world.
  2. Identifying the real-world observable qualities that can be measured, recorded, and otherwise sensed by the software.
  3. Synthesizing a data model (schemas and queries) that can organize, store, and access a representation of the observable and measurable qualities.

Data modeling also involves:

  1. Identifying the data type and attributes of each observable quality.
  2. Identifying the approximate cardinality of all data: the range of values for a specific observable quality and the total number of entities that will be observed, measured, or otherwise captured and stored in the database. This allows sufficient computing resources, both processing and storage, to be provisioned in advance of actually deploying the application.

Databases and DBMSes

Storage and access for data is an essential issue for applications. A collection of stored data is commonly referred to as a database. Once again, rather than each application having its own logic for storing and accessing data, middleware is used to encapsulate the logic so that it can be factored out of the application.

A number of database systems or DBMSes (Database Management Systems) are available to accomplish such tasks. Oracle, IBM, Microsoft, and other vendors offer such DBMSes.

SQL is the query language of the most popular form of database, the relational database, which organizes data in tables with rows and columns.

NoSQL is a popular form of database as well, focused on much higher capacity, much faster access, and much greater resiliency in the face of machine and network outages.

A DBMS is typically so large and complex and resource intensive that the middleware for a DBMS will require its own machine or even a number of machines, called a cluster, communicating with the application over a network.

Distributed data

Traditional AI systems maintain all data locally, in a single data store or database. That’s fine for niche systems and smaller AI systems, but fails in three situations:

  1. Where the sheer amount of data is significantly greater than will fit on a single machine.
  2. For distributed AI applications where the data needs to be available on any number of networked machines.
  3. For replicated AI applications which share the same data but use multiple machines to service a much larger number of simultaneous requests from users.

Modern distributed databases, such as NoSQL systems, provide such a capability.

Federated data

A federated database is a technique for combining data from different databases so that it gives the appearance of being from a single database. The data from each database may have an overall similar structure, but differ in format, access protocols, or details of structure, so that a software module (middleware, again) can mediate these differences, providing the application with a single access method that obscures any differences.

More advanced AI systems will increasingly be required to cope with data from many more sources than in the past.
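A toy sketch in Python of the mediation idea (all names and formats here are hypothetical): two stores hold similar data in different formats, and a middleware function presents a single access method that hides the differences.

```python
# Two federated stores with different internal formats.
store_a = {"u1": {"full_name": "Ada Lovelace"}}      # format A: dict of dicts
store_b = [("u2", "Alan", "Turing")]                 # format B: list of tuples

def get_user(user_id):
    """Uniform access method that mediates the format differences."""
    if user_id in store_a:
        return store_a[user_id]["full_name"]
    for uid, first, last in store_b:
        if uid == user_id:
            return f"{first} {last}"
    return None
```

The application simply calls `get_user` and never sees which underlying database answered.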

Crowdsourced data

  1. Data which is derived from a wide range of individuals, primarily via social media or the Semantic Web.

In addition to reading data from fixed, predefined data sources, an AI application could also derive data from the social media data streams of a wide range of individuals.

This is a relatively new concept, so there are no clearly defined patterns of usage.

Using Twitter as just one example, an AI application could:

  • Read data from a specific Twitter user.
  • Read data from a Twitter search — all user tweets using a specific keyword pattern.
  • Read all tweets directed to a specific Twitter username.
  • Read all tweets referencing a specific Twitter hashtag.

An AI application could also crawl the web looking for certain types of documents, particularly those containing data of interest.

Data can also be read from the Semantic Web, particularly Linked Open Data (LOD) or using the results from a SPARQL query of a linked open data triplestore.
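As a rough illustration, a SPARQL query against a linked open data triplestore might look like the following (the prefix and property names are assumptions for illustration, not a prescription for any particular endpoint):

```python
# Hypothetical SPARQL query for a linked open data triplestore,
# asking for countries and their capitals.
sparql_query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?country ?capital
WHERE {
    ?country a dbo:Country .
    ?country dbo:capital ?capital .
}
LIMIT 10
"""
```

The query would be sent to the triplestore's SPARQL endpoint, which returns matching triples as result rows.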

Graphs and graph databases

  1. Data structure which represents entities and their connections or relationships.
  2. Contrast with a linear, tabular, sequential representation of data.

In mathematics and computer science, the entities in a graph are referred to as nodes or vertices (plural of vertex) and the connections or relationships are referred to as edges, arcs, or lines.

Connections or relationships can be:

  • Undirected.
  • Directed.
  • Bidirectional (directed in both directions).
  • Labeled.
  • More than one between any two nodes.

Nodes can have attributes, although attributes can also be represented as additional nodes with connections that are labeled by the name of the attribute.

Nodes can represent simple information, objects, or knowledge.

Generally, a single node or connection or relationship represents only a small fragment of knowledge.

Generally, a collection of nodes and connections or relationships will be needed to represent arbitrary knowledge of arbitrary complexity. This is where graphs really shine.

Graphs can be stored as:

  • Data structures in memory.
  • Graph databases in permanent storage.
  • Flattened into a linear format in a non-graph database, including SQL.

A wide range of algorithms can be used to traverse a graph, performing such operations as:

  • Counting nodes and connections based on specified criteria.
  • Searching for specific nodes or connections.
  • Searching for patterns of nodes, attributes, and connections or relationships.
  • Extracting portions of a graph to treat as a standalone graph.
  • Inserting standalone graphs into another graph, by connecting nodes between the two.
  • Inserting new nodes and new connections into an existing graph.
  • Removing nodes or connections from an existing graph.
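A minimal sketch of a directed, labeled graph as an in-memory data structure, with a few of the operations above, using plain Python dictionaries (the node and label names are illustrative):

```python
# Each node maps to a list of (label, target) edges:
# a directed graph with labeled connections.
graph = {
    "Alice": [("knows", "Bob"), ("works_at", "Acme")],
    "Bob":   [("knows", "Alice")],
    "Acme":  [],
}

def count_edges(g):
    """Count all connections in the graph."""
    return sum(len(edges) for edges in g.values())

def find_connected(g, node, label):
    """Search for nodes connected to `node` by an edge with `label`."""
    return [target for (lbl, target) in g.get(node, []) if lbl == label]

def add_edge(g, source, label, target):
    """Insert a new connection (and any missing nodes) into the graph."""
    g.setdefault(source, []).append((label, target))
    g.setdefault(target, [])
```

The same structure could be persisted to a graph database, or flattened into rows of (source, label, target) for a non-graph database.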

Knowledge webs

  1. An arbitrary number of fragments of knowledge and connections between those fragments in a web or graph of arbitrary complexity.
  2. The interconnected graph of all knowledge possessed by an intelligent entity.
  3. The interconnected graph of all knowledge possessed by a communicating collection of intelligent entities.
  4. The interconnected graph of knowledge relevant to some particular matter or area of interest.

A major key to intelligence is the ability to comprehend and exploit the connections between fragments of information or knowledge.

Intelligent communication such as using natural language involves transferring significant portions of a knowledge web between two or more intelligent entities.

A knowledge web is inherently a graph, so it can be stored as:

  • A data structure in memory.
  • A graph database in permanent storage.
  • Flattened into a linear format in a non-graph database, including SQL.
  • Or even flattened into natural language text, such as a book, a paper, or spoken language.

See graphs for a description of how a knowledge web can be used.
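One common way to sketch a knowledge web is as subject-predicate-object triples, where following connections between fragments is a simple lookup (the facts shown are illustrative):

```python
# Fragments of knowledge and the connections between them, as triples.
web = [
    ("water", "is_a", "liquid"),
    ("water", "boils_at", "100C"),
    ("liquid", "is_a", "state_of_matter"),
]

def facts_about(triples, subject):
    """Follow the connections out of one fragment of knowledge."""
    return {(pred, obj) for (subj, pred, obj) in triples if subj == subject}
```

Chaining such lookups (water is a liquid, a liquid is a state of matter) is one simple way an intelligent entity can exploit the connections between fragments.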

Brilliant algorithms

Most algorithms are rather unexceptional, computing fairly mundane and unexciting results, but some algorithms, especially in AI, are far more interesting, computing results that are surprising if not absolutely spectacular, even seemingly magical, to seasoned software professionals.

In truth, even a lot of exciting AI results come not from particularly brilliant algorithms, but through combining some number of fairly simple algorithms in an interesting manner.

But occasionally incredibly exotic and exceptional algorithms are needed to make the kind of giant leaps expected of AI.

Generally, any really interesting AI system will have a core of a relatively small number of brilliant algorithms surrounded by a relatively large number of more mundane supporting algorithms.

Clever algorithms

Rather than being truly brilliant per se, it is not uncommon for algorithms to at least be very clever, accomplishing something, or accomplishing it in a way, that is at least a little surprising, nonobvious, inspiring, or impressive, even to other seasoned AI or non-AI professionals.

The purpose of cleverness is not for superficial appeal, but to actually accomplish some significant feat or to accomplish it in a way that is especially efficient in the use of resources. The point is that the user does not see the algorithm itself, but they do see the effects, such as a computer performing a complex task very quickly.

Heuristics are a common technique for clever algorithms, taking shortcuts that have a significant impact on algorithmic complexity.

Executive control

It is not uncommon for an AI system to have more than one computer program or algorithm running at the same time, working on different aspects of the same problem. Some degree of coordination is typically required, known as executive control.

Executive control in an AI system may commonly be in the form of a designated or special computer program which other programs communicate with to coordinate activities.
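A toy sketch of that pattern in Python (the component names are hypothetical): a designated coordinator object that the other programs report to, so their activities can be sequenced.

```python
class Coordinator:
    """Designated executive-control component that others report to."""

    def __init__(self):
        self.results = {}

    def report(self, component, result):
        # A component communicates its result to the coordinator.
        self.results[component] = result

    def ready(self, required):
        # True once every required component has reported in,
        # so the next stage of work can be coordinated.
        return all(name in self.results for name in required)

ctrl = Coordinator()
ctrl.report("vision", "obstacle ahead")
ctrl.report("planner", "slow down")
```

In a real system the components would be separate programs communicating over some channel; the coordination logic is the same.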

Programming languages for AI

In theory, our ability to model and solve computing problems is heavily shaped and limited by our programming languages.

The history of computing has shown that this is true only to a limited extent. Never underestimate the ability of creative and motivated developers to find constructive workarounds for the limitations of almost any language.

LISP has certainly been a favored language for AI applications over the years, with various derivatives in recent years.

But as newer generations of programming languages, libraries, and frameworks have introduced powerful features for working with objects, symbols, text, images, databases, graphs, and signal processing, AI developers aren’t forced to fall back on LISP as their first choice.

LISP was certainly a great choice for symbol processing, but it no longer has a monopoly in that area. And it never had exceptional support for images, structured data, and databases.

There is no ultimate best AI programming language at this time. It’s quite a fielder’s choice, based on factors such as:

  • Preference of the developer.
  • Availability of libraries and frameworks for the tasks and domain of interest.
  • Experience of the pool of developers from which a team is to be selected.
  • Performance and capacity requirements.

Generative AI

So much of the classic effort in AI has been focused on knowledge acquisition and reasoning, with the results produced as a simple output function that in itself seems anticlimactic, as in chess, where the emphasis is on determining the next move and outputting which piece to move where is quite trivial. (Granted, robotics is an exception, where movement and activity in the real world are difficult and exceptional.) In contrast, generative AI focuses almost exclusively on the output function, seeking to make the output medium itself the exceptional part of the process.

Generative AI can generate just about anything, including:

  • Art
  • Music
  • Sculpture — using 3-D printing or instructions for producing the physical elements to be assembled
  • Drawings
  • Parts for products — again, with 3-D printing or instructions
  • Stories
  • Scripts
  • Poetry
  • Speech
  • Choreography
  • Textile patterns
  • Graphic design
  • Web design

Generative AI is also referred to as computational creativity.

Creative AI

  1. Synonym for generative AI, emphasizing a significant degree of novelty in the generation process.

Computational creativity

  1. Synonym for creative AI.
  2. Synonym for generative AI.

AI and Science

Science comes into play for AI in four distinct ways:

  1. Science of AI
  2. AI science
  3. Science as an application of AI
  4. AI as scientists

Science of AI

AI has traditionally been an area of study within the field of computer science.

Other fields that have a stake in AI include:

  • Electrical engineering
  • Mechanical engineering (think robots)
  • Cognitive science
  • Cognitive psychology
  • Psychology
  • Neuroscience
  • Law
  • Philosophy
  • Ethics

Unfortunately most of the science of AI has been ad hoc in nature, more craft than rigorous and formalized science per se. This is not so different from computer software and computer science in general.

AI science

Eventually, AI will need to advance to being a full-blown science rather than the ad hoc craft that it is today. Weak AI and even Moderate AI are feasible with ad hoc craft, but more intensive Moderate AI and Strong AI will definitely require a full AI science.

A precursor for AI science will be a more complete, robust, and detailed model of the human mind and consciousness, as well as a Strong AI equivalent of the Turing machine.

Science as an application of AI

There are plenty of tasks performed in science that can be automated to at least some degree and performed more intelligently using AI techniques.

Lab technician level science is most amenable to automation using AI.

We are still some distance away from having fully autonomous robots in science labs, but AI is certainly applicable as smart tools to be used by lab workers and even scientists themselves.

AI support for the creative aspects of science, such as speculation and discovery, and intelligent assistants for such processes, are possible but as yet unexplored.

AI as scientists

Beyond mere technical level automation, smart tools, and intelligent assistants, it is an open question as to how long it will be before AI systems are capable of fully automating:

  • Entry level scientists. Freshly-minted PhDs.
  • Average experienced scientists.
  • Genius level scientists.
  • Beyond human-level genius science.

This is all unexplored territory. In fact, I’m not aware of science fiction in this area. Curious.

Technically, Kurzweil’s Singularity would cover this area.

Emotional intelligence and affective computing

At least at present, AI systems don’t have their own emotions to deal with, but they do have to cope with the emotional state of users.

Affective computing addresses the emotional parts of the equation for human-machine interactions. There are two sides:

  • More advanced AI systems could sense the user’s emotional state and incorporate it into reasoning about the user’s state of mind.
  • Advanced AI systems could also take care to present information in a way that respects its probable emotional impact on a user.

Some limited knowledge of human nature is needed for reasonably effective emotional intelligence.

Will robots ever have feelings and emotions? Even with Kurzweil’s Singularity? Interesting question, but beyond the scope of this informal paper.

Social intelligence

  1. The capacity to form and respond to structure, relationships, roles, rules, and values with other intelligent entities in some organizational structure, whether formal or informal.
  2. How individual intelligent entities interact to form a healthy and productive society or organization.
  3. A sense of respect, benevolence, kindness, and responsibility for the common good when interacting with other intelligent entities in a society or organization.
  4. Communication, cooperation, collaboration, and competition between intelligent entities that results in a healthy and productive society or organization.
  5. The capacity to engage in teamwork.
  6. The capacity to participate as a member of a community.

Emotional intelligence is generally required for social intelligence, though not absolutely required in all situations; it facilitates social intelligence. Social intelligence is more than emotional intelligence, focused more on groups than on one-on-one interaction.

AI systems today generally work in isolation or in very constrained interactions so that they have limited social interaction in any larger sense.

More advanced AI systems in the future will at some point require more advanced social intelligence abilities.

Social intelligence involves interactions at all levels:

  • Between two intelligent entities.
  • Within small groups of intelligent entities.
  • Within defined relationships between intelligent entities.
  • Between complete strangers.
  • Between superior and subordinate intelligent entities.
  • Within large groups of intelligent entities.
  • Within organizations of intelligent entities.
  • Within enterprises.
  • Between enterprises.
  • Within communities.
  • Between communities.
  • Across all of a society.

Socially-aware AI systems

  1. AI systems which possess a significant degree of social intelligence.

Values — human, artificial, and machine

Values come into play in AI in three ways:

  1. Enabling the machine to cope with human values.
  2. Enabling machines to possess or mimic human values — human-level values or artificial values or artificial human-level values.
  3. Development and evolution of values that only machines would possess — machine values.

Affective computing and social intelligence would certainly require the ability to deal and cope with human beings as they are at the emotional and affective level. This is needed in the near-term.

For reference see my Master List of Values in America.

Longer term, there could be significant value to having machines which can possess, express, and act according to human-level values.

Much longer term, as machines achieve levels of intelligence well beyond human-level intelligence, there would be the potential for machines to develop and evolve values that mere mortals would not directly relate to.

Initially, machines might be pre-programmed with many of the human values, but the goal in the longer term would be for machines to be able to learn human values on their own, and even evolve new values of their own.

There is the issue that we humans may wish to place limitations on the ability of machines to evolve their own values, or even which human values would be permitted.


Communication

As important as cognition, thinking, and reasoning are to intelligence, communication can be the weak link. Ideas, thoughts, information, and knowledge in general, not to mention emotions, feelings, concerns, and priorities, need to be communicated efficiently, completely, and effectively.

Communication tends to be a weak link for AI systems.

One of my favorite jokes is the general who asks a military AI computer whether to attack or retreat and the AI computer answers “Yes.” The exasperated general barks “Yes, WHAT?!” to which the AI computer dutifully (and enthusiastically) responds “Yes, SIR!”

Cooperation, collaboration, and competition

Just to emphasize these three aspects of social intelligence, which are so important in human society and will eventually be front and center for AI systems.

At present, AI systems tend to work in isolation, interacting with their environment as physical objects, not recognizing any other intelligent entities as such.

And of course it almost goes without saying that communication is fundamental to social intelligence.

Knowledge of society

For a machine to be truly socially intelligent, it must possess deep knowledge about society, all of society, at all levels, including relationships, families, neighborhoods, communities, countries, and governments. That’s a tall order.

Granted, encyclopedic knowledge of all of society won’t be needed for all applications or any time soon, but to the extent that we wish to progress towards general intelligence rather than niche or domain-specific Weak AI applications, the scope widens dramatically.

I wrote an informal but long paper on Elements of Society. Part of my motivation was contemplating what a machine would have to know to comprehend a human society.

Knowledge of human nature

In addition to knowledge of society as a whole, a socially-intelligent machine will need to have a very good grasp on human nature. That topic is covered in the above mentioned paper on Elements of Society.

Once again, not all socially-aware AI applications will require encyclopedic knowledge of human nature, but progress towards general intelligence greatly broadens the scope.

Benevolence, kindness, and compassion

We can’t expect a machine to actually feel benevolent, kind, and compassionate, but if we can program AI systems to have a degree of emotional intelligence, we can at least simulate a sense of benevolence, kindness, and compassion, at least in theory.

For now and the foreseeable near-term future, it may be too much to ask to expect AI systems to act with any significant degree of benevolence, kindness, or compassion, but at least we have our marching orders if we expect future AI systems to have any sense of emotional or social intelligence.

Drives and goals

AI systems do not have the same sorts of drives that a biological system has, but there can be rough analogs.

In essence, a drive is a strong motivation that is innate, below the level of conscious thought, and biological in nature (for biological systems.)

Even for people, a drive is still simply a goal, just a goal that is not consciously created through reason.

For people, there are several levels of goals:

  • Biological or genetic imperatives, such as survival and perpetuation of the species.
  • Cultural programming. What members of society are expected to do.
  • Family pressure and expectations.
  • Peer pressure.
  • Personal values.
  • Arbitrary and flexible choices.

Even for people, goals or even drives can be overridden, to at least some degree, albeit with some potentially significant cost.

Generally, AI systems are given goals by their designers and users. These goals effectively act as drives, especially in the sense that the AI system does not choose to pursue a goal. More advanced AI systems may have such choice, but that is not common today.

Eventually autonomous AI systems may acquire a sense of volition that allows them to choose arbitrary goals or even override programmed or declared goals in much the same way that people can, but that is not the reality of AI systems today.

Goals vs. tasks

Much of traditional Weak AI has been very task oriented — the AI system is given specific tasks to accomplish. More advanced AI systems are more goal oriented — the AI system is expected to take a larger, overarching goal and break it down into sub-goals in an iterative manner until a series of smaller tasks can be performed, collectively achieving the goal.

The distinction is that you perform a task but work towards a goal.

Stronger AI systems will be more goal-oriented while weaker AI systems will tend to be more task-oriented.

Problem solving

  1. Achieving a goal through other than a direct, obvious process.

Goals to be met by intelligent entities can be divided into three categories:

  1. Solvable by a simple one-step process. It may be a difficult process, but has no significant complexity or uncertainty.
  2. Requires a more complex, multi-step, but known process. No creativity is required, just diligence and effort.
  3. Lacks a clear, known, and proven process. Requires creativity and possibly trial and error.

A person could discover a solution by themselves, but commonly they will be taught or trained in advance or can look up a solution on the Internet.

Task-oriented or domain-specific AI systems commonly have problem solution strategies hard-coded so that the system can use a direct, pre-coded solution to meet goals.

General problem solving

  1. The iterative process of seeking to arrive at a goal by examining the requirements for achieving the goal and then working backwards to achieve those requirements as sub-goals.

In contrast to task-oriented or domain-specific AI systems, advanced AI systems would attempt to solve problems or meet goals which do not have obvious and direct solutions, requiring instead iterative if not creative solutions.

A general problem solver approach as outlined above is required to meet such goals.
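The iterative, work-backwards process can be sketched as simple backward chaining over sub-goals (the rules and facts here are illustrative, not from any particular system):

```python
# To achieve a goal, recursively achieve the sub-goals its rule requires.
rules = {
    "make_tea": ["hot_water", "tea_bag"],
    "hot_water": ["water", "kettle"],
}
facts = {"water", "kettle", "tea_bag"}   # directly achievable requirements

def achievable(goal):
    if goal in facts:            # requirement is directly available
        return True
    subgoals = rules.get(goal)
    if subgoals is None:         # no known way to achieve this goal
        return False
    # Work backwards: the goal is achievable if all sub-goals are.
    return all(achievable(sub) for sub in subgoals)
```

A full planner would also record the sequence of tasks discovered along the way, but the recursive decomposition is the heart of the approach.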

Constraint satisfaction

  1. Specialized form of general problem solving where a set of rules and values (the constraints) are specified by the user which enable the AI system to deduce solutions that meet the specified constraints.
  2. Synonym for goal seeking.

Goal seeking in a spreadsheet application is a simple example of constraint satisfaction.

Data flow

  1. Software architecture based on components which can operate in parallel, allowing data to flow through the interconnected components, maximizing performance and throughput.

Turing machines

Mathematician and computer scientist Alan Turing devised a mathematical model of a universal computing machine, aptly called a Turing machine. All modern digital computers are essentially Turing machines.

The essence of a Turing machine is that it can compute any function that can be reduced to an algorithm.
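As a toy illustration of the idea, here is a minimal Turing machine simulator whose transition table inverts the bits on its tape (the machine definition is illustrative):

```python
# Transition table: (state, symbol) -> (new state, symbol to write, head move).
transitions = {
    ("scan", "0"): ("scan", "1", 1),   # flip 0 to 1, move right
    ("scan", "1"): ("scan", "0", 1),   # flip 1 to 0, move right
    ("scan", "_"): ("halt", "_", 0),   # blank cell: halt
}

def run(tape):
    """Run the machine on the given tape until it halts."""
    tape = list(tape) + ["_"]          # "_" marks the blank end of the tape
    state, head = "scan", 0
    while state != "halt":
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")
```

Despite its simplicity, this read-write-move loop is the whole model: any algorithm can, in principle, be expressed as such a transition table over a (sufficiently long) tape.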


What can an AI system do? What can’t an AI system do? Great questions. The technical answer is the same as what a computer or Turing machine can or can’t do — computability or what problems can be reduced to algorithms that can be implemented using Turing machines.

That doesn’t directly answer the question of what an AI system can or can’t do, but it at least frames the nature of the question.

Can an AI system do everything that a human can do? That’s an even more difficult question. As noted elsewhere, there are two unresolved issues:

  1. We don’t yet have a full and complete understanding of the function within a single neuron, how neurons interact, or what happens when many neurons are interacting.
  2. Are the functions of neurons, interacting neurons, and masses of neurons in fact computable in the sense of a Turing machine? In other words, can they be reduced to an algorithm?

See the commentary on analog vs. digital computing below.

Some would assert that we do know how neurons work to at least some degree and that a Turing machine is sufficient to model neurons and neuron interactions, but the jury is still out on that question. And Turing himself wasn’t making that claim, positing the need for a hypothetical u-machine, or B-type unorganized machine, for human-level intelligence.

Meanwhile, we still appear to have plenty of runway available with existing Turing machines for AI applications.

Analog vs. digital

Some have proposed or asserted that digital logic alone is insufficient to simulate the capabilities of neurons and the human brain.

Some believe that analog circuitry or at least a hybrid of analog and digital circuitry are needed to simulate neurons.

There is ongoing research in this area, such as neuromorphic engineering.

But for now and the foreseeable near-term future, it is digital all the way.

Deeper use of fuzzy logic will help to extend the life of digital logic and defer the need for true analog computing for intelligence, if it is in fact needed.

Roger Penrose’s model of the human mind

British physicist Roger Penrose has interesting speculation about intelligence, consciousness, and the human mind in his books The Emperor’s New Mind and Shadows of the Mind, such as quantum effects, microtubules in the neurons of the human brain, and his assertion that Turing machines are insufficient to replicate the functions of the human mind.

Not everyone agrees, and I personally haven’t dug into his work, but I wouldn’t be so quick to write it all off just yet. As noted, I have an inkling that some degree of analog function is needed to fully account for the most extreme forms of human intellectual capacity. It wouldn’t surprise me if there were some quantum effects as well.

The disputes over the matter simply highlight the limits of our knowledge about how the human mind really works.

Although this matter must be resolved to achieve true, human-level Strong AI, it presents no problems for Weak AI or even fairly robust Moderate AI for the foreseeable future.

Is (human-level) intelligence computable?

Recapping the previous sections, one of the biggest questions for artificial general intelligence is whether human-level intelligence is computable, either using digital Turing machines, some analog-augmented machine, or some machine architecture beyond even what has been speculated to date.

The answer right now is that we simply don’t know, and are unlikely to know anytime soon.

That said, there are plenty of people who are not persuaded by the analog argument and who will insist that with enough computing power digital can simulate anything. At least they’re willing to try.

Meanwhile we will certainly push forward with digital computing as fast and hard and as far as we can.

And as long as we remain well short of Strong AI, our weak AI will remain very computable on digital machines, for the foreseeable near-term future.

Algorithmic complexity

  1. A rough sense of how much computing resources will be required for an algorithm to handle input data of various sizes or values, particularly time and memory.
  2. The rough measure of resources required for an algorithm in Big O notation.
  3. The visual complexity of the source code for an algorithm, including length, nesting of logic, obscurity of intentions for logic and formulas, and lack of comments or documentation. The degree to which the true, mathematical algorithmic complexity may be hidden or obscured due to the difficulty of reading and comprehending the source code.
  4. The degree to which the resource requirements of an algorithm are not known or unclear due to dependence on external functions or invocations of networked services whose algorithmic complexity and resource requirements may not be known.
  5. The degree of uncertainty for an algorithm due to latency (response time) for external services which may be delayed by arbitrary amounts of time due to load on external systems which are hosting those services.

Many algorithms run so quickly and efficiently that their algorithmic complexity is negligible and of no real concern, but quite a few algorithms can be very problematic, taking too long to run or running out of memory.

Oh, and were you wondering why good software developers are paid a lot of money? In short, they are paid roughly in proportion to the algorithmic complexity that they must master and deal with on a daily basis.

Big O notation

  1. A simple mathematical formula which roughly approximates or places an upper bound on the resource requirements for an algorithm.

Algorithmic complexity is couched in the language “on the order of”, indicating the rough or approximate cost of the algorithm in terms of resource requirements. How rough? Within an order of magnitude. Precision is not attempted. A rough estimate is usually good enough.

This paper will spare non-technical readers the details. The main takeaway is that algorithmic complexity can get out of hand very quickly, especially for advanced AI algorithms or any algorithm handling lots of data or performing complex calculations, so great care is needed to try to minimize the algorithmic complexity, or to at least realize the costs being incurred.

A short summary of the common forms of Big O notation, in increasing order of cost:

  • Constant — great, easy, fastest, trivial, simple — O(1)
  • Logarithmic — non trivial but very manageable — O(log n)
  • Linear — still manageable and predictable — O(n)
  • Quadratic — now it’s getting expensive — O(n²)
  • Exponential — try to avoid this — O(cⁿ)
  • Factorial — only if you really need it and don’t need real-time performance — O(n!)
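The growth rates above can be made concrete with a tiny sketch (a hypothetical helper, not from any library) that tabulates operation counts for each complexity class, using c = 2 for the exponential case:

```python
import math

def operations(n):
    """Rough operation counts for common Big O classes at input size n."""
    return {
        "O(1)": 1,
        "O(log n)": max(1, int(math.log2(n))),
        "O(n)": n,
        "O(n^2)": n ** 2,
        "O(2^n)": 2 ** n,        # exponential, with c = 2
        "O(n!)": math.factorial(n),
    }

for n in (4, 8, 16):
    print(n, operations(n))
```

Even at n = 16, the factorial row dwarfs everything else, which is why algorithmic complexity can get out of hand so quickly.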

Brute force

  1. An algorithm which exhaustively evaluates all possible solutions without any heuristics to intelligently guide its selection of candidate solutions to evaluate.
  2. Proceeding sequentially through all data or all possible solutions, starting at the first and continuing until the last, without any heuristics or intelligence to change or guide the order of processing.

Generally brute force is the least attractive approach to finding a solution, except in three situations:

  1. The data size or prospective solution count is small enough that the time and resources to process all possible solutions is reasonably small or acceptable.
  2. There is no known algorithm to achieve a solution other than exhaustive processing.
  3. Known algorithms are not guaranteed to achieve a sufficiently optimal or accurate solution.
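As a minimal illustration (hypothetical helper name, not from any library), here is a brute-force search that exhaustively tests every pair of values for one summing to a target, with no heuristics to guide it:

```python
from itertools import combinations

def brute_force_pairs(values, target):
    """Exhaustively test every pair of values -- O(n^2), no heuristics."""
    return [pair for pair in combinations(values, 2) if sum(pair) == target]

print(brute_force_pairs([1, 3, 5, 7, 9], 10))  # [(1, 9), (3, 7)]
```

For a handful of values this is perfectly acceptable (situation 1 above); for millions of values the quadratic cost would demand a smarter approach.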

Combinatorial explosion

  1. Any problem in which the number of possible solutions is the product of the number of possibilities for each parameter, so that even a relatively small number of parameters yields a very large product.
  2. Any problem whose solution has exponential algorithmic complexity. Or worse.
  3. A problem, algorithm, or application that requires a very large amount of computing resources despite seeming to be relatively simple on the surface.

The basic idea is that a problem or application may seem relatively simple or manageable, but because multiple parameters multiply into an exponential or factorial degree of algorithmic complexity, the amount of computing required is likely to be far greater than might seem apparent at first blush to an innocent bystander.

There are three points here:

  1. Avoid problems, algorithms, or applications that involve a combinatorial explosion.
  2. Be prepared to allocate sufficient computing resources to handle the combinatorial explosion.
  3. For some applications, despite the fact that the logic of the algorithm is known it may be necessary to hold off and wait until hardware advances to the point where the combinatorial explosion can be more readily handled.
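The multiplication at the heart of a combinatorial explosion is easy to sketch (hypothetical helper name, toy numbers):

```python
def solution_count(choices_per_parameter):
    """Total candidate solutions: the product of each parameter's choices."""
    total = 1
    for choices in choices_per_parameter:
        total *= choices
    return total

# A mere 10 parameters with 10 choices each: 10 billion candidates.
print(solution_count([10] * 10))  # 10000000000
```

Three parameters with a handful of choices each are trivial; ten parameters are already beyond casual brute force.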

N-body problems and quantum computing

Specialized heuristics and indexes and distributed computing can be used to optimize the performance of AI algorithms for search and matching — delivering very impressive gains — but ultimately there is a limit to such attempts to cheat on the complexity of the real world.

Many real-world problems whose solutions have exponential algorithmic complexity are simply beyond the ability of even our fastest computers, biggest memories, largest networks, and cleverest heuristics and indexing techniques for all but the simplest cases. Sure, we keep innovating and expanding on all of those fronts, but we’re still only tackling relatively modest real-world problems.

Back in 1981, physics professor Richard Feynman noted quite simply in his paper Simulating Physics with Computers that as difficult as it was to use a traditional computer to simulate a relatively simple quantum mechanical system, a quantum mechanical system was able to perform the task effortlessly and instantaneously, so he suggested that a computer based on quantum mechanics would be a great way to simulate physics.

As he noted, it is not impossible to simulate a quantum mechanical system using traditional computers, but the algorithmic complexity is exponential, so that simulating even a system of moderate size would be exceedingly difficult or expensive. And at some point it simply wouldn’t be practical at all using all available resources.

Similarly, so-called N-body problems, such as simulating the Newtonian mechanical motion of N celestial bodies, are extremely expensive for traditional computing in all but trivial cases. Quantum computing has been suggested as a reasonable approach to N-body problems.

The underlying issue is that the system is dynamic: like a juggler, you have to keep too many balls in the air at once to get anything done, which works only for relatively small systems. And when you cannot determine the location of a ball with any precision or reliability, traditional juggling simply doesn’t work at all.

There are plenty of real-world problems that have such complexity, such as:

  • Driverless cars in a region coordinating their own movement without the use of a central traffic controller.
  • Patrons at a large performance venue wishing to exit as quickly as possible. Or to be seated as rapidly as possible. All without any central traffic control or local control either.
  • Protein folding.
  • Optimizing travelling salesman problems.
  • Predicting complex adaptive systems.
  • Managing large distributed networks of Internet of Things devices when the devices may interact, without central control.
  • Simulating physical systems at the level of quantum mechanics.
  • Simulating large celestial systems at the cosmological level.

Quantum computing is still new, relatively unproven, and in need of significant development, but has significant potential for solving computationally hard problems.

Conway’s law

An interesting observation about traditional software systems is encapsulated in software folklore as Conway’s Law, which basically says that the structure of a software system will parallel the structure of the organization which created it.

There are differences between the nature of traditional software systems and AI systems, so it is worth asking whether Conway’s Law is applicable at all or needs modification to apply to AI systems.

It may depend on the AI system. For example, with machine learning and training, the structure of the training data will certainly impact the behavior of the resulting AI system.

But then again, it could well be that the degree of flexibility and ability to learn in fact reflect the organization.

Foreign languages

All too commonly, basic research and implementation of AI systems will occur primarily in the context of the English language.

Being able to handle one or more foreign languages is certainly feasible, though with modestly to significantly more effort. Technologists refer to this process as internationalization, or I18N for short.

Globalization is the portion of I18N that makes software capable of supporting multiple languages. This is a generalization process that can take a fair amount of effort, but only needs to be done once, regardless of how many languages are to be supported.

Localization is the portion of I18N to adapt the software to support a specific foreign language. This process must be repeated for every language to be supported.

Support for foreign languages is an issue in several areas of AI and computer software in general:

  • Parsing of natural language input.
  • Generation of natural language output.
  • Diagnostic and information messages that the AI system may produce in addition to its primary output.

Generally, knowledge will be stored and manipulated in a language-neutral data format or data structure. In other words, the essence or meaning is stored independent of any natural language.

Foreign cultures and cultural-awareness

Knowledge and meaning can have subtly, moderately, or even wildly different interpretations in different cultures, whether different countries, regions of the world, or ethnicities within a single country.

This can make knowledge representation such as concepts and their meanings a tricky matter.

In truth, a lot of traditional and current AI systems tend to presume a single, homogeneous monoculture.

More advanced AI systems will gradually become more culturally aware, but that will be a slow, gradual process.

Generally, distinct cultures will need to be siloed or kept separate, except for shared knowledge where the concepts and meanings are closely similar.

Matters that can vary between cultures include:

  • Laws
  • Social conventions
  • Social norms
  • Environmental differences such as climate which enrich language and meaning
  • Acceptable modes of discourse
  • Acceptable forms of interaction


Behavior

Intelligence per se is concerned with mental functions, from cognition, thinking, deciding, and planning, up to initiating actions and monitoring their results, but not with actions themselves, whether they be physical actions or communications.

Behavior concerns itself with the action side of the equation.

AI systems don’t directly concern themselves with behavior, other than to treat it as part of the real world to observe with sensory devices.

Clearly systems like robots and driverless cars which have AI software embedded in them are concerned with the process of carrying out actions.


Reactive systems

Intelligent entities are reactive systems — they respond to activity in their environment. That reaction may be limited to acquiring new knowledge, or extend to taking action in response to the new knowledge.


Robotics

Robotics requires AI, but is so much more. Just to mimic the movement capabilities of simple animals or even insects is a lot of effort, most of it not directly associated with the kind of intellectual effort associated with the human mind.

A lot of robotics, particularly those aspects related to physical structure, movement, sensors, and manipulation of real-world objects, would be more appropriately referred to as artificial life (A-Life) than AI per se. In fact, robots could be designed to mimic animals such as reptiles and even insects which would not normally be considered intelligent per se. Robotic dogs are popular, but their intellectual capacity is minimal.

Motion, movement, and manipulation of objects in the real world are a real challenge. They require sophisticated software, artificial life, but whether to consider them AI per se is a matter of debate. The mental aspects for sure (what do you wish to move, to where, and why), but the physical aspects not so much.

A key distinction of robotics from traditional, non-robotic AI systems is the fact that the robotic system is continuously monitoring and reacting to the environment on a real-time basis.

Much of robotics revolves around sensors and mechanical motions in the real world, seeming to have very little to do with any intellectual activity per se, so one could question how much of robotics is really AI.

Alternatively, one could say that sensors, movement, and activity enable acting on intellectual interests and intentions, thus meriting coverage under the same umbrella as AI.

In addition, it can be pointed out that a lot of fine motor control requires a distinct level of processing that is more characteristic of intelligence than mere rote mechanical movement.

In summary, the reader has a choice as to how much of robotics to include under the umbrella of AI:

  1. Only those components directly involved in intellectual activity.
  2. Also sensors that provide the information needed for intellectual activity.
  3. Also fine motor control and use of end effectors. Including grasping delicate objects and hand-eye coordination.
  4. Also any movement which enables pursuit of intellectual interests and intentions.
  5. Any structural elements or resource management needed to support the other elements of a robotic system.
  6. Any other supporting components, subsystems, or infrastructure needed to support the other elements of a robotic system.
  7. All components of a robotic system, provided that the overall system has at least some minimal intellectual capacity. That’s the point of an AI system. A mindless, merely mechanical robot with no intelligence would not constitute an AI system.

In short, it’s not too much of a stretch to include virtually all of robotics under the rubric of AI — provided there is at least some element of intelligence in the system, although one may feel free to be more selective in specialized contexts.

See also: Artificial Life (A-Life).

Driverless cars and autonomous vehicles

This paper won’t delve deeply into driverless cars or autonomous vehicles, except to note the significant AI content of such systems.

Driverless cars need to be human-rated to protect their occupants, innocent bystanders, and property from harm.

Autonomous vehicles must be human-rated to protect innocent bystanders and property from harm.

That said, driverless cars and autonomous vehicles to date are much closer to Weak AI than Strong AI. They are what I call Moderate AI — some degree of integration of individual Weak AI features, but still not approaching even close to human-level intelligence.

Advanced driver-assistance systems

Well short of driverless cars, so-called driver-assist features do make it seem that vehicles are a lot more intelligent than non-assisted vehicles, but even at best they could only be considered Weak AI. They don’t even approach the Moderate AI level of integration, since each assistance feature works independently of the others, with no automatic executive control coordinating them to any significant degree.

Examples of advanced driver-assistance systems features include:

  • Lane departure warning
  • Automatic parking
  • Adaptive cruise control
  • Automatic braking
  • Collision avoidance
  • Automatic lighting

Although we aren’t even close to endowing a vehicle with human-level intelligence, we do appear to be building towards a critical mass of assistance features where a significant degree of coordination may soon become necessary, at least approaching the integration expected of Moderate AI. But it could be that Moderate AI is reached only when the assistance features are combined with actual driverless driving.


Search

  1. An iterative implementation strategy for discovering solutions to problems in AI systems.
  2. Synonym for trial and error.
  3. Synonym for search engine.

Google-style Internet search engines will be described shortly, but the concept of search in the context of AI is a general approach to iteratively searching for solutions to problems.

In truth, search is just a fancy term for exhaustive trial and error enumeration of all possible solutions, but using a variety of tricks and shortcuts, otherwise known as heuristics, to speed up the process by eliminating large numbers of possibilities which are unlikely to be successful solutions.

There are a wide range of algorithms, strategies, and techniques for performing search in AI, such as covered in Stuart Russell and Peter Norvig’s book Artificial Intelligence: A Modern Approach.

Clever and creative choice of data structures and knowledge representations can greatly impact the speed and quality of the search process.
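As a tiny illustration of how the choice of data structure affects search cost, here is a sketch (toy data, using Python’s standard bisect module) contrasting a brute-force linear scan with a binary search that exploits sorted order:

```python
import bisect

# Toy dataset: every third integer below one million, kept sorted.
data = sorted(range(0, 1_000_000, 3))

def linear_search(xs, target):
    """O(n): scan every element, brute force."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """O(log n): exploit the sorted structure to halve the space each step."""
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

assert linear_search(data, 999) == binary_search(data, 999)
```

Both find the same answer; the binary search simply touches a tiny fraction of the data to do so.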

Trial and error

  1. An iterative approach to finding solutions to problems.
  2. Synonym for search.

If machines are doing it we call it search, but if humans are doing it we call it trial and error.

One qualitative difference is that a machine would be fairly methodical and orderly in its trials, while a human may be tempted to try seemingly randomly-selected possibilities, hoping to get lucky. Sometimes getting lucky works, but usually it just makes us feel good rather than offering any real degree of efficiency.

Heuristics and rules of thumb

Heuristics and rules of thumb have a similar purpose, to take shortcuts to achieve results much more quickly than by laboriously computing a 100% accurate result or using laborious trial and error.

Rules of thumb are commonly used to approximate a result using a simpler calculation.

Heuristics are approaches that are not guaranteed to be optimal or perfect, but sufficient for the problem at hand.

In the case of AI, heuristics are common for achieving results that appear to approximate a significant fraction of human intelligence with a modest amount of effort. Not that just a little more or even a lot more effort would necessarily achieve true human-level intelligence.

A qualitative difference between heuristics and rules of thumb, at least in the area of AI, is that rules of thumb are usually merely used for convenience or efficiency, while heuristics are generally employed because they are the only known approach to achieving human-level intelligence. I say usually, because there are many heuristics whose primary benefit is efficiency.

The adage “Don’t let the perfect be the enemy of the good” is a great example of a heuristic.

For example, in software we frequently accept the first solution to a problem rather than expend the significant additional resources to find the best solution.
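To make the idea concrete, here is a minimal sketch of the classic nearest-neighbor heuristic for the travelling salesman problem (toy distance matrix with made-up numbers): it accepts a good-enough tour quickly rather than paying the factorial cost of examining every possible tour.

```python
def nearest_neighbor_tour(distances, start=0):
    """Greedy heuristic: always visit the closest unvisited city.
    Runs in O(n^2), but the tour is not guaranteed to be optimal."""
    unvisited = set(range(len(distances))) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda city: distances[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Symmetric toy distance matrix for 4 cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(nearest_neighbor_tour(dist))  # [0, 1, 3, 2]
```

The heuristic trades guaranteed optimality for speed, which is exactly the “good rather than perfect” bargain described above.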

Search engines

Modern keyword Internet search engines are a great example of heuristics. So often you can just enter a keyword or two or three and seemingly like magic the search engine comes up with results that are pretty darn good if not outright excellent.

This isn’t AI per se in the Strong AI sense — the search engine doesn’t understand the deep meaning of either your keywords, the documents that contain them, or your intentions, but it can actually feel that way to us mere mortals. This is the seeming magic of heuristics. A handful of well-chosen heuristics are able to approximate what an average person would have thought only the human-level intelligence of Strong AI would be capable of.

Okay, sometimes the search results are downright mediocre, bordering on useless, or outright awful, offensive, or laughable. That’s the downside of heuristics — they are approximations that have the promise of working reasonably well a fair amount of the time or even most of the time, at the expense of working poorly the rest of the time.

Technically, a search engine is performing matching rather than search. A variety of specialized data structures called indexes are maintained so that the engine can very quickly identify documents containing specified keywords rather than having to search laboriously through all of the data one by one for each keyword. But that’s part of the trick for search in AI, figuring out how to organize the data for optimal processing for problems that would otherwise be exhausting.
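The inverted index idea can be sketched in a few lines (toy documents, nothing like a production engine): build the index once, then answer keyword queries by set intersection instead of scanning every document.

```python
from collections import defaultdict

docs = {
    1: "machine learning is a subfield of artificial intelligence",
    2: "search engines use inverted indexes",
    3: "machine intelligence and machine learning",
}

# Build the inverted index once: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def match(*keywords):
    """Documents containing ALL keywords -- set intersection, no scanning."""
    result = set(docs)
    for kw in keywords:
        result &= index.get(kw, set())
    return sorted(result)

print(match("machine", "learning"))  # [1, 3]
```

The expensive work happens at indexing time; each query then runs in time proportional to the matching sets, not the whole collection.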

Search engines also have a number of heuristics to look for certain patterns in questions and to produce special results for those, typically in a box above normal search results, such as:

  • Recognizing math expressions to calculate
  • Request for the weather
  • Flight information
  • Requests to translate text from one language to another
  • Single words likely to be a request for a dictionary definition rather than a query for all documents containing that word

Is this all AI? That’s a great after-work drinking debate. In short, it’s AI if somebody claims it’s AI. Or, to the point of this paper, it’s AI if it is a significant fraction of what human-level intelligence could accomplish. I would say it borders on Weak AI.


Matching

  1. A specialized form of search that is focused on finding data, information, or knowledge that satisfies specified criteria.

Technically, a typical search engine is really a matching engine — we aren’t asking the machine to solve a problem per se, but simply looking for documents that match the specified keywords.

Most recognition algorithms are matching algorithms rather than search algorithms, such as finding images or faces that match a sample image.

A key goal with matching is to build and maintain specialized data structures that index important qualities of the data so that the matching data can be very quickly identified without requiring a laborious exhaustive search of all data.

Examples of matching where AI can assist include:

  • Dating
  • Job search
  • Recommendations
  • Collaborative filtering

Hardware for AI

There are several aspects to viewing hardware for AI:

  1. The extent to which commercial, off-the-shelf, consumer-grade hardware is sufficient for many or most AI applications.
  2. The extent to which commercially available specialized hardware such as GPU chips is sufficient for more intensive AI applications.
  3. The extent to which expected improvements in hardware performance over the next few years to a decade will enable breakthroughs that current hardware cannot.
  4. The potential for specialized neural networking hardware to fuel breakthroughs.
  5. The question of whether quantum computing might unleash AI capabilities that would not have been possible without quantum computing.
  6. The open research question of whether Turing machines are sufficient for Strong AI and human-level intelligence, or whether some alternative form of machine is necessary, more in line with Turing’s concept of an unorganized machine (his B-type machine, not to be confused with his a-machine concept, the classic Turing machine on which the universal Turing machine is based).

The good news is that plenty of AI applications are quite feasible with current hardware. Granted, these are mostly Weak AI applications, such as niche and domain-specific applications, but that’s still quite impressive.

Quantum AI

Besides the question of whether quantum computing might unleash AI capabilities that would not have been possible without quantum computing, there is a separate question of whether a quantum approach has potential for the modeling of knowledge and behavior of AI systems, based on uncertainty and inherent fluctuations.

It is already quite common for AI systems to have to deal with uncertainty about the real world. Fuzzy logic is a great example. The question is whether a quantum uncertainty model can or should become the central model for AI rather than a peripheral, ad hoc consideration.

Regardless of the potential, this remains an unresolved research question.

Human-rated for safety

In the early days of the space program they talked of a rocket being man-rated, meaning that there must be a very high probability that no harm would come in any attempt to launch the rocket with one or more people onboard. Similarly, any computer system on such a launch system would need to be man-rated to ensure a safe launch. Today the more proper term is human-rated.

The FDA classifies medical devices so that Class III is the riskiest in terms of potential harm to a person if something goes wrong.

Not all AI systems or applications have severe negative consequences for people if something goes wrong, but if and when AI is used in situations where a mistake could mean serious harm to person or property or even loss of human life, the software needs to be human-rated.

More work is needed in this area. Not all AI systems will be equally risky to human health and safety.

Measuring intelligence

How do we measure intelligence? Great question, but with weak answers.

Everybody is familiar with the concept of an IQ test for intelligence, but that measure has various difficulties and to the best of my knowledge hasn’t been proposed or utilized for measuring the intelligence of an AI system.

There is something called a g factor (general intelligence factor) as well.

The main difficulty with measuring AI systems may simply be that since each system implements only a small fraction of total human-level intelligence, and each system implements a different fraction, there is very little to compare in a statistically meaningful manner.

The bottom line is that for all the intensity of interest and progress in AI, we have no good methodology for measuring and comparing the intelligence of the many systems.

Virtual reality

Despite the strong interest in making AI systems capable of operating in the real world, it is also very possible to have AI systems operate within the context of virtual reality worlds.

Virtual worlds would not have to follow all of the rules of our real world, so the knowledge and behavior of an AI system may have to be radically different. Even the laws of physics might be different, which could have dramatic effects on systems like driverless vehicles or even walking across a room.

Even virtual worlds that precisely mimic our real world can be quite useful, such as for simulation without the risk of any undesirable effects in the real world.

This concept will not be explored further in this paper, but its potential is rather intriguing.

Simulation and testing

As noted, virtual reality worlds can enable an AI system to run as a simulation.

A primary use of simulation would be for testing.

A key benefit is that any damaging malfunctions during a test will not have any negative consequences in the real world. Nor will any real world resources be required or consumed.


Randomness

People occasionally like to introduce an element of randomness into their lives, even if only to break up the monotony and boredom of routine.

A person might flip a coin, throw a dart, close their eyes and point their finger at a map or list or menu, or otherwise choose something unexpected.

A machine or AI system can do the same, using a random number generator.
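In code, such a coin flip or dart throw is a one-liner; a minimal sketch using Python’s standard random module:

```python
import random

# An unpredictable pick from a menu, the machine equivalent of a dart throw.
options = ["pizza", "sushi", "tacos", "salad"]
print(random.choice(options))
```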


Determinism and predictability

Philosophers can wring their hands over whether life, the universe, and each of our lives are deterministic or even predestined, but from a practical perspective all we care about is whether a system, process, person, or outcome is predictable, or how predictable it is.

AI systems are much more likely to be predictable or deterministic, but this can change as:

  • The AI system learns from past behavior.
  • The AI system learns from fresh inputs.
  • Elements of randomness are introduced into the AI algorithms.
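The interplay of randomness and predictability shows up even in pseudo-random number generators: seeded identically, they are perfectly deterministic. A minimal sketch:

```python
import random

rng = random.Random(42)              # fixed seed -> reproducible sequence
a = [rng.randint(0, 9) for _ in range(5)]

rng = random.Random(42)              # same seed -> the identical sequence
b = [rng.randint(0, 9) for _ in range(5)]

assert a == b   # "random" yet fully predictable given the seed
```

This is why an AI system with elements of randomness can still be made reproducible for testing and debugging.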

Statistical influence on AI behavior

It is not uncommon for modern applications to use at least some degree of statistical processing to influence or control the underlying processing of an algorithm.

This could be due to:

  • Heuristics.
  • Desire to adapt to input data.
  • Elements of randomness to distribute resource usage more uniformly.
  • To participate in games, such as using dice or cards.

Whether the statistical influence is AI per se will vary between applications and algorithms.

Non-AI applications may use statistical effects as well.

Spelling correction and autosuggest (you type a few letters and see the possibilities) use statistical processing. Is that AI? Great debate question.
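To see how purely statistical processing can feel intelligent, here is a toy autosuggest sketch (made-up word frequencies, not a real corpus) that ranks completions of a prefix by how common each word is:

```python
# Hypothetical frequency table standing in for counts from a real corpus.
word_counts = {"the": 500, "then": 120, "there": 200, "theory": 80, "cat": 90}

def autosuggest(prefix, k=3):
    """Return up to k completions of prefix, most frequent first."""
    candidates = [w for w in word_counts if w.startswith(prefix)]
    return sorted(candidates, key=word_counts.get, reverse=True)[:k]

print(autosuggest("the"))  # ['the', 'there', 'then']
```

No understanding of meaning is involved, just counting and sorting, which is precisely what makes the “is it AI?” question debatable.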

Human nature and machine nature

Whether human or machine, an intelligent entity must accept and cope with its own nature, which includes:

  • Abilities
  • Strengths
  • Weaknesses
  • Limits
  • Drives
  • Innate values
  • Hopes
  • Dreams
  • Desires
  • Personality

AI systems which seek to interact with other intelligent entities must understand and work within the constraints of those entities.

Artificial nature

  1. The designed nature of an AI system.

Beyond the essential nature of machines, the designers of an AI system are free to decide what nature they wish to endow the AI with.

Granted, they cannot exceed the capabilities of the machine itself, but they can certainly shape the desired nature of the AI in terms of how to blend human nature and machine nature, such as:

  • Abilities
  • Drives
  • Innate values
  • Personality

A human observing an AI system will be able to assess the nature of the system.

How friendly or formal or adventurous do you want your robot, car, or refrigerator to be?


  1. Intelligence significantly exceeding human-level intelligence.

Alas, superintelligence is more of a marketing buzzword at this stage. I do have to admit that it’s a great name for a book.

Granted, this paper itself has placeholders for Extreme AI and even Ultimate AI, but anything beyond human-level intelligence is simply idle speculation at this stage.

God-level intelligence

How far can Ultimate AI, Kurzweil’s Singularity, or superintelligence go? What’s the limit of intelligence? A simple answer to a difficult question is God-level intelligence — an omniscient and omnipotent deity would possess all that is knowable and possible. Kind of hard to surpass that.

Not that this prospect is worth discussing at this stage, but at least it frames the conception of an upper bound for intelligence.

What would it take to know everything?

Just for the record, I will posit that a God-level intelligence would not be possible in this universe.

Omniscience or knowledge of everything means knowing the state of every last tiny bit of matter, energy, space, and time in the universe, including mass, energy, position, momentum, etc.

Since it would take more than a single bit of mass or energy to model the state of each bit of mass and energy, only a fraction of the full universe could be modeled or known using all of the matter and energy in the universe to represent that information.

At most, some ultimate intelligence in the universe could know only less than a single billionth of everything.

How many bits or neurons would it take to know the absolute position of even a single atom?

So, I don’t see a superintelligence or even Kurzweil’s Singularity ever having God-level intelligence.

You could posit a superintelligence that exists outside of the universe, but then it doesn’t exist… here.

Bummer. I guess we’ll have to settle for partial-God level intelligence as our ultimate goal.

Safety and adventure

Of course one would expect an AI system to be as safe as possible, but even safety can be carried too far. Driving down the highway or crossing the street carries risk. And people hate getting bored. At least some sense of adventure is welcomed by most people at some point.

AI systems should be able to accommodate varying degrees of adventure, based on a combination of:

  • Safety limits of the AI system.
  • Environmental factors impacting safety.
  • Weather conditions.
  • Interests and tolerance of the user.
  • Social situation — presence of others who have differing or conflicting senses of safety and adventure.

The beauty of AI is that it can exert more rational control than a human might be inclined to do in any particular situation that might have heightened safety issues.

AI in a skateboard or a motorcycle? Will the world be safe?

Maslow’s hierarchy of human needs

Psychologist Abraham Maslow proposed a hierarchy of human needs, usually represented as a pyramid, working upwards from the most basic needs:

  1. Physiological needs
  2. Safety needs
  3. Social belonging
  4. Esteem
  5. Self-actualization
  6. Self-transcendence

As AI systems become more advanced and focus more on social intelligence, a similar hierarchy will likely be appropriate for AI. Esteem doesn’t seem to have any direct analog in current AI systems, but some sense of self-worth may eventually be appropriate for AI.

On the flip side, AI systems could be programmed to recognize and support users according to their needs and interests within the hierarchy, possibly guiding and helping them to move up the pyramid.
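A speculative sketch of that idea, with invented names: an assistant that identifies the lowest unmet level of the hierarchy, so it addresses basic needs before higher ones.

```python
# Speculative sketch (all names invented): map a user's satisfied needs to
# the lowest unmet level of Maslow's hierarchy.
HIERARCHY = ["physiological", "safety", "social belonging",
             "esteem", "self-actualization", "self-transcendence"]

def lowest_unmet(met_needs):
    """met_needs: set of levels the user currently has satisfied."""
    for level in HIERARCHY:
        if level not in met_needs:
            return level
    return None  # all needs met

print(lowest_unmet({"physiological", "safety"}))  # social belonging
```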

Intelligent design

Religious conceptions of the nature of the universe aren’t usually considered when evaluating AI, but the concept of intelligent design is interesting. Although many scientists and technologists would reject the religion-based metaphor of intelligent design of the universe, life, and human life, all of our AI systems to date have only the abilities that their designers carefully designed or trained into them.

That said, the concepts of evolution and self-organizing systems will likely be applied to much more advanced AI systems so that the intelligent design of the designers will become less significant.

Mental illness and dysfunction

Although we readily accept that individual human beings are not always fully functioning and sometimes very dysfunctional, we simultaneously hold our artificial systems to a much higher standard — perfection. In fact, if a machine malfunctions for any reason, we tend to get very upset.

Unfortunately, machines have a wide range of potential malfunctions, such as:

  • Hardware failure due to wear.
  • Hardware failure due to electronic component or connection failure.
  • Design failure in software, known as a bug.
  • Poor design that confuses users.
  • Hacking that disrupts normal operation.
  • Overload of networking and shared resources that disrupts normal operation.
  • Improper configuration of hardware and software settings.
  • Mistakes by users that cause unexpected effects.
  • And the list goes on…

And that’s just for traditional systems before we throw AI into the mix.

In fact, the many ways that AI can malfunction have yet to be cataloged since the experience with more advanced AI systems is so limited, but some examples from recent years include:

  • Lack of comprehension due to limited vocabulary or range of non-English languages.
  • Lack of comprehension due to accents and inability to cope with speech impediments.
  • Inability to handle relatively reasonable requests due to narrow niche of focused AI applications.
  • Inability of image recognition to cope with visual oddities that a human can readily handle correctly.
  • Inability to provide desirable response due to limited range of action.
  • Inability to respond reasonably to hand and facial gestures or body language.

From the perspective of the AI system designer these may be design limitations rather than malfunctions per se, but the user sees the matter differently, expecting perfection, and even performance significantly better than that of a mere human, not a mere fraction of human-level intelligence and behavior.

Beyond bugs and design limitations, we have yet to see the level of complexity of intelligence where malfunctions approximating significant mental dysfunction if not outright mental illness may begin to appear. In particular, once AI systems begin to evolve and adapt on their own, their behavior can shift very far from the expectations of the original system designers.

On the flip side, AI should be able to help a lot more with diagnosis and treatment of mental illness in real people. I actually haven’t heard of any efforts, but the potential is there, especially when people are so prolific in writing about their thoughts and feelings online and in social media.

An intelligent mental health assistant on smartphones and other personal devices or even home and workplace appliances (and vehicles) could help to alert family and professionals to looming problems and possibly even offer at least limited assistance.


Life

  1. Natural life.
  2. Artificial life.
  3. Synonym for natural life.

Context will be required to determine whether life is inclusive of artificial life or exclusive to natural life.

Natural life

  1. Biologically-based organisms, capable of reproduction.
  2. Including the natural environments in which organisms live.

Artificial life (A-Life)

  1. Engineered biological systems, similar or comparable to natural organisms. Including any intelligence.
  2. Engineered electromechanical systems (machines), similar or comparable in function to natural organisms. Including any intelligence.
  3. AI systems that exhibit significant characteristics of living biological organisms and systems.
  4. Robotic systems intended to exhibit significant characteristics of living biological organisms and systems.
  5. Including artificial environments in which artificial life operates.
  6. Limited to individual artificial organisms or interacting individual artificial organisms, without mating or reproduction.
  7. Expanded to include mating and reproduction of individual artificial organisms, enabling them to create new individual artificial organisms without external intervention.
  8. Individual artificial organisms, interacting individuals, small collections of individuals, or large collections or even societies of individuals.
  9. Includes social behavior of artificial organisms.
  10. Synonym for robot or robotics.
  11. Abbreviated as A-Life.

The elements of an organism or collection of organisms, natural or artificial, include:

  1. Anatomy. Structure and form. Delineation of the parts of the whole.
  2. Physiology. Function of organs or parts of the whole. Both body and mind.
  3. Sensory perception of the environment.
  4. Interpretation of sensory perception.
  5. Reaction to sensory perception.
  6. Intellectual activity. Animal-level intelligence. Optional.
  7. Higher-order intellectual activity. Human-level intelligence. Optional.
  8. Decision to act in the environment, commonly in response to sensory perception or internal metabolic state.
  9. Movement, positioning, and orientation in the environment either in preparation for activity, to facilitate sensory perception, or for self-defense.
  10. End effector activity to achieve changes in the environment. Such as grasping with fingers.
  11. Gross and fine motor control to achieve movement, positioning, and orientation.
  12. Acquiring resources from the environment to sustain future activity. Eating, breathing, drinking, ingestion of energy resources.
  13. Interaction with other individual organisms. Communication, to some degree.
  14. Social structure of organisms.
  15. Mating and reproduction. Optional in the realm of artificial life, where individual artificial organisms can be created by external means, literally from nothing and for no reason or cause discernible in the artificial environment.

Loosely speaking, there is a body and a mind, the body governing form and the mind governing intentional activity.

A robot would seem to be a good exemplar for artificial life.

Virtual worlds would provide environments to support artificial life, as would virtual reality (VR). There is no physical body, although a virtual body may be governed by real-world physics.

Artificial life can also operate in the real world — like robots.

Artificial life is not limited to higher-order intelligent life. Animals, small animals, rodents, reptiles, fish, amphibians, and even insects and microbes can be modeled and mimicked.

Can include wholly imaginary life forms as well.

Artificial life, such as robotics, would include the physical, non-intellectual aspects, including:

  • Form.
  • Shape.
  • Size.
  • Appearance.
  • Body and torso.
  • Limbs.
  • Means of movement, gross and fine.
  • Means of motor control, gross and fine.
  • Physical sensors.
  • Hands or feet.
  • End effectors (e.g., fingers.)
  • Head.
  • Source of energy and energy storage.
  • Means of physical strength.
  • Means of transferring signals, information, and energy from one component to another.

Artificial life would include AI as well.


Alive

  1. Life that is not dead.
  2. Natural life that is not dead.
  3. Aware or asleep.

Can an AI system be considered alive?

Normally, we would associate being alive only with natural life.

But since AI systems can exhibit a degree of awareness and responsiveness, it is difficult to assert that the mere lack of biological function necessarily indicates that a system is not alive.

One could reasonably assert that the ability to reproduce is a mandatory requirement of life. Does that mean that a computer virus would qualify as life?

One could also reasonably assert that having a basis in organic matter is essential to qualify as life. Of course, one could reasonably imagine a computer in which logic and memory elements are engineered from organic compounds. So, it gets complicated.

In short, one could reasonably argue the matter either way.

It is easier to simply relegate AI systems to artificial life than life per se.


Security

Security for AI systems is not in principle any different from security for traditional, non-AI software or services.

There may be a heightened sense of concern to the extent that more advanced AI systems will likely have a deeper role in the operations of enterprises and government, not to mention the daily lives of normal citizens.

Security for human-rated AI systems will be a special and heightened concern, but in principle no more heightened than for non-AI human-rated systems and devices.

Cyber warfare

  1. Use of software in support of conventional warfare.
  2. Use of cybersecurity to defend military systems.
  3. Use of cyberattacks against adversary military systems.
  4. Initiation of cyberattacks against non-military targets as authorized by national command authority (i.e., the president).
  5. Defense of non-military targets against cyberattacks by an adversary state. Not to be confused with non-state hacking.

AI could have a role in conventional warfare, such as:

  • A source of knowledge for commanders, analysts, and soldiers.
  • A source of guidance for commanders, analysts, and soldiers.
  • Control of non-combat systems.
  • Limited control of combat systems.

AI could have a role in cyber operations of the military engaged in conventional warfare:

  • Detection, reporting, defense, and mitigation for cybersecurity attacks against military (non-civilian) systems.
  • Initiation of cyberattacks against military (non-civilian) targets as authorized by national command authority (i.e., the president.)

AI could also have a role in cyberattacks against non-military (civilian) targets:

  • Detection, reporting, defense, and mitigation for cybersecurity attacks against non-military (civilian) targets by an adversary state. Not to be confused with non-state hacking.
  • Source of knowledge for initiation of offensive cyberattacks.
  • Source of guidance for initiation of offensive cyberattacks.
  • Source of plans or assistance with planning for offensive cyber warfare operations.
  • Initiation of cyberattacks against non-military targets as authorized by national command authority (i.e., the president.)

Non-military cyber targets might reasonably include:

  • Military bases and infrastructure, as opposed to military forces in the field.
  • Power grid.
  • Transportation infrastructure.
  • Communications infrastructure.
  • Commercial and industrial structures critical to military operations.

I would imagine that a variety of non-military, civilian targets would be explicitly off-limits, at least for modern, western-style governments:

  • Hospitals.
  • Schools.
  • Food production and distribution.
  • Water systems, including dams and pipelines.
  • Residential structures.

There may or may not be a distinction between non-military government facilities and non-government facilities, so there may be a hierarchy of targets:

  1. Military forces in the field.
  2. Military bases.
  3. Non-military government facilities.
  4. Industrial facilities and infrastructure.
  5. Commercial facilities and infrastructure.
  6. Civilian facilities.

Cyber warfare is a new thing, so much of this is speculation.

Privacy and confidentiality

AI systems will of course need to respect the privacy of individuals and confidentiality of organizations and relationships. This comes into play in three ways:

  1. Data should not be collected unless it is needed. Unwarranted intrusions should not be made into private or confidential matters.
  2. Any data collected from individuals or organizations should not be disclosed unless they have agreed to such disclosure.
  3. Retention policies should be put into place, disclosed to all parties, and followed scrupulously so that any collected data will be purged from systems promptly when the retention period expires.

Systems must be especially careful with personally identifiable information (PII).

Whenever possible, data should be encrypted in such a way that anyone gaining physical access to the data will still be unable to access the unencrypted information.

Data governance

  1. Management of information as a critical asset across an enterprise to assure technical excellence, business effectiveness and business value, security, confidentiality, privacy, and regulatory compliance.
  2. Foundation for knowledge governance.

Data governance should probably be referred to as information governance, but it evolved as data governance historically.

Data/information includes knowledge, such as in an AI system.

Information can include:

  • Traditional structured data as found in a traditional SQL database
  • Text and documents
  • Images
  • Video and other media
  • Presentations (PowerPoint, et al)

In addition to information used within applications, other forms of information include:

  • Configuration data and parameters
  • Source code
  • Binary code
  • Scripts
  • Training and test data
  • Legal documents

Rather than treating information as merely a technical detail within a given project, the goal is to assure that information is managed as a valuable enterprise asset for six goals:

  • To achieve technical excellence across all projects that use that information.
  • To achieve technical excellence within a single project when resources might not otherwise have been made available to assure the proper handling of information within that project in isolation.
  • To fully and properly exploit its value to the enterprise.
  • To assure that all information is properly kept secure and confidential, both against external threats and insider threats.
  • To assure that privacy of information is maintained.
  • To assure regulatory compliance.

Data governance will include issues such as:

  • Maintaining consistency across all projects that use particular information.
  • Rules and policies about what information can be collected.
  • Rules and policies about how information should be stored.
  • Rules and policies about how various categories of information can be used.
  • Rules and policies about sharing information.
  • Rules and policies about how long information can be retained.
  • Rules and policies about where information can and must be stored, including maintaining copies and backups to preclude loss.
  • Rules and policies for coping with multiple legal jurisdictions and across borders where distinct laws and regulations may apply.
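Such rules and policies can be expressed as shared data that every project consults, rather than re-implemented per project. A minimal sketch, with invented categories and fields:

```python
# Hypothetical sketch: governance rules expressed as data, so every project
# checks the same enterprise-wide policy. Categories and fields are invented.
POLICY = {
    "customer_pii": {"may_share": False, "retention_days": 90,  "regions": {"EU", "US"}},
    "telemetry":    {"may_share": True,  "retention_days": 365, "regions": {"US"}},
}

def check_storage(category, region):
    """Return True if this category of information may be stored in this region."""
    rule = POLICY.get(category)
    return rule is not None and region in rule["regions"]

print(check_storage("customer_pii", "EU"))  # True
print(check_storage("telemetry", "EU"))     # False: policy restricts it to US
```

Centralizing the policy table is what lets a small AI project comply with jurisdiction and retention rules it could never afford to research on its own.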

As AI systems become more sophisticated and pervasive, the information that they contain and use will become an increasingly valuable enterprise asset, and increasingly difficult to manage properly.

Individual AI projects would not typically get the resources on their own to deal with all of these governance issues.

Knowledge governance

  1. Management of knowledge as a critical asset across an enterprise to assure technical excellence, business effectiveness, security, confidentiality, privacy, and regulatory compliance.
  2. Extension of data governance.

Knowledge governance can be seen as an extension of data governance, but recognizing that the purpose, structure, representation, use, and significance of knowledge can present distinct challenges that are not as obvious when knowledge is looked at as mere data, such as:

  • Emergent enterprise and business value.
  • Social significance of knowledge.
  • Moral and ethical dimensions of knowledge and intelligent entities.
  • Structuring, representation, and usage issues.

Standing on the shoulders of giants

An essential characteristic of the advancement of human knowledge has been the ability to build on knowledge established by others. In essence, we get to stand on the shoulders of those who came before us, allowing us to see further than if we had to stand on our own two feet.

Advanced AI systems will need to exploit this same technique for advancing knowledge.

Philosopher kings

The ultimate in knowledge and power would be comparable to the ancient Greek concept of a philosopher king, one who can rule based on reason.

Whether humanity can or will ever achieve such a level of rationality is very unclear.

Artificial philosopher kings

If and when AI significantly transcends human-level intelligence, an artificial version of the ancient Greek concept of a philosopher king would be one possible next step.

Whether or when humanity would be ready for such a prospect is exceedingly unclear.

It may be left for Kurzweil’s Singularity or at least a significant fraction of it.


Bot

  1. Short for robot.
  2. A program that accesses services across a network.
  3. A program that accesses services across a network for malevolent purposes, such as to overwhelm a target system for a distributed denial of service attack.

Generally a bot has nothing to do with AI, although in theory an AI system could be implemented using bots to facilitate accessing services over the network.

Psyche, soul, and spirit

The human concepts of psyche, soul, and spirit are completely irrelevant to weak and even moderate AI systems.

Whether they will necessarily become relevant for even Strong AI systems is unknown at this time. Possibly, but not necessarily.

One could argue that a software system can have a spirit in the sense that one could take a snapshot of the state and data of a system, store it away, transport it somewhere, and then later reinstantiate a virtual clone of the original system. Not exactly the human conception of spirit, but interesting nonetheless.

One could similarly argue that such a snapshot of the state and data of an AI system would constitute the artificial analogue of its soul.

Psyche is a little more applicable to an AI system, roughly corresponding to mind: the mental or intellectual capacity, knowledge, and ability to respond to stimulus from the external environment.

In any case, once AI systems begin to approach human-level intelligence, these three concepts will become more relevant. Until then, they are simply vague speculations about the future.
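The snapshot-as-spirit idea can be sketched concretely in a few lines. The AgentState class here is invented purely for illustration; real AI system state would be far richer:

```python
import pickle

# Playful sketch of the "snapshot as spirit" idea: capture a running
# system's state, store or transport it, and later reinstantiate a clone.
# AgentState is an invented stand-in for real AI system state.
class AgentState:
    def __init__(self, knowledge):
        self.knowledge = knowledge

original = AgentState({"greeting": "hello"})
snapshot = pickle.dumps(original)   # the portable "spirit" (bytes)
clone = pickle.loads(snapshot)      # a virtual clone, reinstantiated later
print(clone.knowledge == original.knowledge)  # True
```

The clone is a distinct object with identical state, which is exactly the philosophical puzzle: is it the same entity, or merely a copy?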


Botnet

  1. Any number of bots that operate in unison to accomplish a common purpose.
  2. A significant collection of bots deployed with malevolent intentions, such as a distributed denial of service attack.

AI systems would not normally utilize a botnet, but it is conceptually feasible to either implement a distributed AI system using one or more botnets, as well as to use the botnet concept to test a distributed AI system, simulating any number of users.
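As a sketch of the testing idea, here is a minimal simulation of many concurrent "bot" users exercising a stub service. The service here is a placeholder function, not a real AI endpoint:

```python
import threading

# Illustrative sketch: simulate many concurrent users ("bots") exercising a
# service, in the spirit of load-testing a distributed AI system.
results = []
lock = threading.Lock()

def stub_service(query):
    # Placeholder for a real (possibly remote) AI service call.
    return f"answer to {query}"

def bot(user_id):
    reply = stub_service(f"question from user {user_id}")
    with lock:  # results list is shared across threads
        results.append(reply)

threads = [threading.Thread(target=bot, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # all 100 simulated users got a reply
```

A real test harness would distribute the bots across machines and measure latency and error rates, but the structure is the same.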


Liability

We won’t explore the matter further here, but the owners, vendors, and users of AI systems will need to concern themselves with the legal and financial ramifications of liability for the actions of an AI system.

Concerns include:

  • Actions due to the AI algorithms which could result in harm or loss.
  • Mistakes in E-commerce transactions.
  • Any susceptibility to hacking which could result in harm or loss.


Ethics

We won’t explore the matter in depth here, but the designers, developers, vendors, and users of AI systems will have to consider a host of ethical concerns, including:

  • Social effects of automation that eliminates jobs.
  • Distribution of wealth from benefits of AI.
  • How do machines impact our sense of humanity?
  • How do machines impact human behavior and interactions?
  • How much testing of AI systems must developers and users of AI systems do to feel comfortable and responsible for their use?
  • Unintended biases.
  • Safety.
  • Security. Especially when improper access and improper use of an AI system could have dramatic negative consequences.
  • Unintended consequences. Especially for AI that can learn, grow, and evolve in unpredictable ways.
  • Who’s in control. How do we maintain control, especially for autonomous systems?
  • Does a sufficiently advanced AI system with near-human level capabilities have rights?
  • Free from corruption and incorruptible.
  • Not prone to collusion or any form of joint action that is illegal, suspicious, or even unfair.
  • No cheating — but clear definitions of cheating vs. competition are problematic.
  • Validity of AI system as witness or evidence. Can law enforcement or a court question, interrogate, or use an AI system as a witness vs. evidence?
  • Compliance with laws, regulations, rules, and recognized authorities.
  • Consideration of when civil disobedience might be warranted.
  • Compliance with contractual commitments. And coping when commitments cannot be met.


Regulation

Regulation of AI could cover several areas:

  1. Any activity for which a non-AI device is currently regulated.
  2. Activities which permit autonomous movement which could interact with people or cause harm to property.
  3. Advanced medical devices.
  4. Enhancements to the human body.
  5. Enhancements to the human brain.
  6. Use of AI embedded in larger systems, such as finance, investment, stock trading, security surveillance, vehicle and building safety systems, infrastructure, defense systems.
  7. Privacy implications of AI which aggregates personal data or requires access to personal data.
  8. Liability which may be assigned to the AI as opposed to the owner or user of the machine possessing AI capabilities.
  9. Rights or obligations assigned to the AI as opposed to the owner or user of the machine possessing AI capabilities.

Presently, only the first category is fully covered, and the second and third categories only to a limited extent.

Regulation in the other areas await further development of AI technology.

The more urgent need at the present is to rapidly begin educating legal experts, lawmakers, and policymakers in:

  1. What AI is currently.
  2. What AI is not currently.
  3. What AI is likely to develop in the near future.
  4. What AI is not likely to develop in the near future.
  5. What AI is likely to develop a few years down the road.
  6. What AI is not likely to develop a few years down the road.
  7. What AI is likely to be coming in the next ten to twenty years.

Moral and ethical dimensions of decisions

As AI systems become more advanced and more involved with decisions that would have required people to incorporate moral and ethical considerations, these systems will have to be programmed to take moral and ethical codes into account.

How will this be accomplished? Good question. Initially rules can be hard-coded into AI software, but over time the software will need to become more flexible so that moral and ethical behavior can be learned. Even then, AI systems will have to be trained by professionals.

Eventually, AI systems will evolve to be capable of learning and developing moral and ethical behavior on their own, but that is not in the near future.

Already with driverless cars, people are discussing the ethical considerations that arise when an AI system must make a choice among outcomes that are all ethically undesirable. How the AI system might decide which outcome is least worst is currently a matter of debate.
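One crude way such a hard-coded "least worst" rule could look, purely as a sketch: the harm categories and weights below are my own invented assumptions, not any accepted standard, and real systems face far murkier inputs.

```python
# Hypothetical sketch: choose the "least worst" outcome when every available
# outcome carries some harm. HARM_WEIGHTS is an invented assumption.
HARM_WEIGHTS = {
    "injury_to_pedestrian": 10.0,
    "injury_to_passenger": 10.0,
    "property_damage": 1.0,
}

def least_worst(outcomes):
    """outcomes: list of (name, dict of harm-type -> expected count)."""
    def score(harms):
        return sum(HARM_WEIGHTS[k] * v for k, v in harms.items())
    return min(outcomes, key=lambda o: score(o[1]))

choice = least_worst([
    ("swerve left", {"property_damage": 3}),
    ("brake hard",  {"injury_to_passenger": 1}),
])
print(choice[0])  # swerving risks only property damage, so it scores lower
```

The debate is precisely over who gets to choose those weights, and whether any fixed weighting can be ethically defensible.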

Rights of artificial intelligent entities

Can an artificial intelligent entity claim that it has rights? Rights tend to be a matter of law, a legal issue.

Technically, modern constitutions defer to so-called natural rights, but that leaves the issue dangling. Sure, you can argue that natural obviously refers only to real live people, but as more advanced AI looks and acts more… natural with every passing year, the distinction narrows and will eventually get rather blurry.

The good news is that for now the distinction between us and them is so obvious and glaring that rights is a complete non-issue, for the foreseeable future.

Asimov’s Three Laws of Robotics

Speaking of social contracts for AI, science fiction writer Isaac Asimov proposed Three Laws of Robotics governing the commitments of robots:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He later added a zeroth law, taking precedence over the other three: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Granted, he was a science fiction writer, but so far nobody has challenged these so-called laws.

What can’t a machine do?

I’m reluctant to assert too strongly that there are elements of human intelligence that are by definition now and always beyond the capacity of any imaginable machine, but there are mental capacities that we wouldn’t normally associate with machines or AI, such as:

  • Relax
  • Enjoy games, sports, athletics
  • Enjoy humor, joking, parody, satire, etc.
  • Enjoy or even need entertainment
  • Experience excitement and pleasure
  • Engage in music and art to relax and to stimulate the mind
  • Dream
  • Day dream
  • Hope
  • Love
  • Sense of consciousness, the way people do
  • Sense of self as being special and distinct from the rest of the world
  • Feel heartfelt compassion
  • Empathy — never had any of the experiences of a person
  • Reproduction
  • Bring life into existence
  • Contemplate its own death
  • Lie — requires intent to deceive. Well, okay, you could program a computer to lie, but it wouldn’t feel guilty about it!
  • Enjoy fiction and fairy tales
  • Get bored
  • Experience joy
  • Experience depression — okay, there is Marvin, the depressed android of “The Hitchhiker’s Guide to the Galaxy”, but… that’s science fiction
  • Experience distress
  • Experience desperation
  • Experience melancholy
  • Fear death
  • Experience death
  • Have a near-death experience
  • Feel pain
  • Be tortured
  • Worry about its health — although a machine can take actions in response to technical machine health issues
  • Be clever — although intricate algorithms and their effects could seem clever
  • Be sarcastic — not by the machine’s own nature, but the effect could be programmed
  • Vote
  • Worship in religious ceremonies
  • Experience the raw joy of freedom
  • Experience the oppressive feeling of being trapped or enslaved
  • Experience the thrill of victory or the despair of defeat
  • Experience guilt
  • Experience sorrow
  • Experience moral or ethical dilemmas
  • Be suspicious — although rules can be developed to simulate skepticism such as for loan applications or lie detection

But can robots bake bread?

There is a famous quip or dig at philosophy that suggests that philosophy has little utility since philosophers can’t even bake bread. We can ask the same question about robots (or any other device that contains embedded AI), demanding that they deliver some sort of useful social value, at least as useful as baking a loaf of bread.

Science fiction

AI has been common in science fiction for many decades — or 200 years if you count Frankenstein.

Sometimes the distinction between fact and fiction gets very blurry. Not uncommonly, people attribute to some real AI system capabilities, qualities, and features that aren’t really there, yet.

The distinction between forecasts and speculation and science fiction gets even blurrier. Even to the point that fiction is not infrequently more credible than speculation.

A full treatment of AI in science fiction is beyond the scope of this paper.

Modest breakthroughs

The many breakthroughs for AI over the years have been relatively modest, typically mastering either fairly structured activities such as games (Chess, Go, Jeopardy) or narrow niches of limited depth within broader domains.

Driverless cars are certainly interesting, but I would classify them as “shows promise” rather than mastered per se.

Simple question/answer or control systems such as Alexa, Siri, and Cortana are also interesting, but fall into the category of niches and limited depth rather than coming close to true human-level intelligence.

Automatic language translation, and natural language processing in general, are also quite interesting, but still problematic.

Modern search engines perform quite well, to the point where we can consider Google to be an adjunct for our own minds, quickly giving us recall to vast amounts of information that we never knew or forgot or are having trouble remembering.

Alas, even the mighty Google search engine succeeds using rules of thumb and heuristics rather than anything remotely resembling true human-level intelligence. The simple fact is that as valuable as rapid search results are to us, Google has little inkling as to the deep meaning of these results to us — its comprehension of meaning (to us) is merely literal, superficial, and rather shallow, constituting information rather than true, human-level knowledge.

Breathtaking breakthroughs

As exciting as any of the modest breakthroughs of recent years have been, we’re still waiting for the kinds of truly breathtaking breakthroughs that AI will need to advance a lot further towards the goal of human-level intelligence, Strong AI.

It will be interesting to see what the coming decades bring us on the AI front. Certainly an endless stream of modest and even moderate breakthroughs, but imagine that fateful day when a truly breathtaking breakthrough makes its appearance.

Speculation and forecasts for the future

Speculation and forecasts for the future of AI alternate between buoyant goodness and ominous gloom. Will AI make all of our lives easier, or put all of us out of work? Take your pick, or maybe both, to some degree.

Speculation and forecasting of the future is generally beyond the scope of this paper, although a little of it has crept in here and there, but more in the sense of obvious linear extrapolations and statements of requirements rather than trying to set expectations of any great leaps this year or next.

The real goal of this paper is to focus on describing and explaining where we are at present.

Will Kurzweil’s Singularity transpire? Will Skynet of Terminator movie fame transpire? This paper won’t render a judgment.

Will AI put us all out of work?

Yeah, I know, the media and critics are always looking for an angle to sow anxiety and despair, and AI is no exception. Between expert systems, robots, and driverless vehicles, the assertion is that before you know it there will be no more jobs available that can’t be done better and more cheaply by a machine.

A full treatment of this matter is far beyond the scope of this informal paper.

I would simply note three points:

  1. It isn’t going to happen anytime soon, other than in a smattering of niches.
  2. People are always coming up with new things to do or new ways to do things, so the machines (well, the people designing the machines) will have their hands full trying to keep up with us.
  3. There is a knee in the complexity curve, so that even as many simpler tasks are easy to automate, many of the harder tasks where humans excel remain well beyond the meager capabilities of even the best AI today and for the foreseeable near-term future.

If you want to review an early contemplation of the risks of automation, check out Kurt Vonnegut’s 1952 novel Player Piano, a vision of a dystopian near-future as envisioned 65 years ago.

Is AI an existential threat to human life?

Some seemingly rational people really do believe that unchecked AI does indeed pose an existential threat to human society and human life itself.

A discussion on this matter is far beyond the scope of this particular paper.

Entrepreneur Elon Musk has set up a nonprofit organization, OpenAI, dedicated to research to make AI safe, friendly, and beneficial to society.


Roadmap for AI

Sorry, but there is no clear, well-identified roadmap for AI going forward. Rather, it is more of a starburst, with progress proceeding on all fronts in all directions simultaneously.

Or in military terminology, everybody is pursuing targets of opportunity, automating where they sense a conjunction of need, opportunity, and available technology.

Education for AI

Courses and training in both the development and use of AI are widely available, although uneven and spotty.

Education for such an emerging and rapidly evolving technology presents a significant challenge.

It is a fascinating topic area, but education for AI is beyond the scope of this informal paper.


Resources for AI

A comprehensive list of resources for AI is beyond the scope of this informal paper.

The Internet is probably the best resource. Just do a Google search for “artificial intelligence” and many resources will be quickly listed.

If you want to get really technical, Stuart Russell and Peter Norvig’s book Artificial Intelligence: A Modern Approach will give you a deep dive, but even that can be very daunting.

In truth, my main reason for writing this informal paper was to provide a one-stop resource for everybody who needs a broader and deeper understanding of AI, far beyond the coverage in the popular media, but far more comprehensible than raw technical material.

Historical perspective by John McCarthy

To get a sense of the roots and evolution of AI, consult AI pioneer John McCarthy’s own response to the question of What is AI?:


Key points to take away:

  • There is no single, monolithic form of artificial intelligence.
  • AI can range on a spectrum from weak to strong.
  • Current AI is closer to the weak end of the spectrum than the strong end.
  • AI is a very long way from human-level intelligence.
  • AI is not magic or mystical in any way.
  • AI is not trivial.
  • Machines can excel beyond mere mortals in various forms of intelligent activity.
  • People will have a significant role in activities requiring intelligence for quite some time to come.
  • Even the most intelligent of machines will have rather severe limits for some time to come.
  • We know even less about the limits of human intellect than we do about what machines can do now and for quite some time to come.
  • The emphasis for the near term will be on doing weak to moderate intelligence better and much more competently, rather than on investing too much effort in stronger, human-level intelligence.
  • There will be some fairly amazing advances in AI over the next few years.
  • There will be a significant level of frustration over limitations of AI over the next few years.
  • Kurzweil’s Singularity may be coming, but not so soon.

For more of my writings on artificial intelligence, see List of My Artificial Intelligence (AI) Papers.



Jack Krupansky