Some Brief Notes on Artificial Intelligence (AI)

Jack Krupansky
Jul 6, 2023


Artificial intelligence (AI) is a very rich and complex field, but a few brief notes can help to dramatically simplify navigation of the field, or at least allow the reader to feel that they can navigate it and feel a lot less intimidated by it.

These notes were prepared to facilitate a philosophy group discussion of AI and humanity — What should be the relationship between humanity and artificial intelligence? — to ensure that participants have a basic, common foundation for discussion. But they should be of interest to anyone interested in AI. These notes focus on the AI portion of that question, not the humanity or socio-political and economic policy aspects, although there are a few tidbits here and there.

Brief notes

  1. No commonly accepted definition for AI.
  2. Heuristics.
  3. AI systems vs. robotics.
  4. Strong vs. weak AI.
  5. Artificial general intelligence (AGI).
  6. Automation vs. AI.
  7. Nomenclature for levels of AI.
  8. Beyond human intelligence.
  9. Animal intelligence.
  10. Autonomy and agency.
  11. Intelligent digital assistants.
  12. Generative AI.
  13. Large language models.
  14. Emotional intelligence vs. intellectual intelligence.
  15. AI is not a one-size-fits-all technology.
  16. Asimov’s three laws of robotics.
  17. Seminal books and movies about AI.
  18. The precautionary principle for coping with risk.
  19. Pascal’s wager for coping with risk.

No commonly accepted definition for AI

Sorry, that’s the way it is at this juncture.

But, I’ll offer my own simplified definition for AI:

  • Artificial intelligence (AI) is the use of a computer system to mimic at least some fraction of the mental capacity of a human being.

That’s the essence of what AI is endeavoring to do.

Technically, the concept of AI can be applied to animals as well, since they commonly have at least some primitive level of mental capacity, and they definitely have brains.

Heuristics

True, full, human-level AI is really, really hard. So, how is it that researchers have been able to do so well in recent years, such as with large language models (LLMs)? That’s an easy question… in one word: heuristics.

Even human beings use heuristics — mental shortcuts — to approximate very complex processes with a minimum of effort.

Heuristics are rarely perfect, but they are usually good enough, and they save lots of time and effort.

The same technique can be utilized in computer software, including AI systems.

Oversimplifying, and by analogy to the Pareto principle, a heuristic lets you get 80% of the benefit with only 20% of the work. Actually, it’s not uncommon with computer software heuristics to get even 99% or more of the benefit with a mere 1% or less of the effort.

Search engines use lots of heuristics to enable them to usually give very decent results with just a few keywords.
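As a concrete illustration of the idea, here is a minimal sketch of a keyword-overlap heuristic in Python. To be clear, this is my own toy example, not how any real search engine works: it ignores grammar, meaning, and context entirely, yet it often surfaces a reasonable result at a tiny fraction of the cost of deeper analysis.

```python
# Toy keyword-overlap heuristic for ranking documents by relevance.
# Real search engines layer far more refined heuristics on top of
# this idea, but the spirit is the same: a cheap shortcut that is
# usually good enough.

def score(query: str, document: str) -> int:
    """Count how many of the query's keywords appear in the document."""
    keywords = set(query.lower().split())
    words = set(document.lower().split())
    return len(keywords & words)

documents = [
    "Heuristics are mental shortcuts that save time and effort",
    "Large language models are one approach to generative AI",
    "Asimov proposed three laws of robotics in his fiction",
]

query = "shortcuts that save effort"
# Rank documents by descending keyword overlap with the query.
ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
print(ranked[0])  # the heuristic's best guess at the most relevant document
```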

The catch is that heuristics are very tricky, very finicky, and devilishly difficult to discover or invent. They take a lot of effort, discipline, care, and persistence, but once you do all of that, boy do they deliver great results (even if only usually, or even just sometimes).

So the key to large language models is not that vast numbers of careful and complex rules have been manually pre-programmed, but simply that researchers have fiddled around and contrived a relatively small number of heuristics which deliver great results based on information gleaned from vast amounts of data.

Simpler AI systems and robots also rely on clever heuristics to deliver very impressive (even if not perfect) results much of the time.

AI systems vs. robotics

The concepts of AI can be implemented in two main forms:

  1. AI system. An application or function embedded in a computer system to provide artificial intelligence capabilities.
  2. Robot. A physical machine capable of moving around, physically communicating, and physically interacting with objects and people in its environment. It contains embedded AI capabilities, but communication and interaction are with the robot at a human-like level rather than with the embedded computer system itself. A robot generally doesn’t feel like a computer.

Strong vs. weak AI

It is common to see AI classified as Weak AI vs. Strong AI:

  1. Weak AI. Offers only a very limited subset of the mental capabilities of a human being. Maybe even a single mental function. Minimal integration of functions.
  2. Strong AI. Offers a substantial fraction of the mental capabilities of a human being. Or at least some number of capabilities. At least a fair degree of integration of those functions. Generally still well short of the full mental capabilities of a human being — that would be classified as artificial general intelligence (AGI).

Artificial general intelligence (AGI)

Artificial general intelligence (AGI) is intended to refer to the full mental capabilities of a human being.

This would be well beyond even strong AI.

For more detail, see my informal paper:

No, even the best of current AI systems, such as ChatGPT or Bing Chat, are still not full AGI. Yes, they are strong AI, but they still lack a lot of human capabilities.

Automation vs. AI

There’s no clean, clear, and meaningful distinction between traditional automation and AI. It’s a subjective matter; it’s all in the eye of the beholder.

Just about anything you can do with a computer that a human can do could be called AI.

In the past, people may have been reluctant to call some computer operations AI since there was traditionally a stigma associated with AI.

Or, maybe the computer operations seemed too simple to be worthy of being called AI.

But these days, even a lot of relatively simple automation is being labeled as AI.

Nomenclature for levels of AI

People use the term AI or artificial intelligence in a lot of confusing ways, so we can never be sure what they are talking about. I think of AI as a number of levels of intensity of function and integration of intelligence capabilities.

For details, see my informal paper (note) which attempts to lay out a somewhat more comprehensible nomenclature for the term AI:

Beyond human intelligence

For at least a brief discussion of levels of intelligence well beyond human intelligence, including superintelligence (or artificial superintelligence, ASI) and Kurzweil’s Singularity, see my informal paper:

Animal intelligence

There’s no question that animals have their own levels of intelligence, even if it is very far from the capabilities of humans.

In fact, it’s important and worth noting that some fraction of what we call intelligence in humans exists in animals as well.

So, human intelligence is a combination of animal intelligence and human-only intelligence.

And with robotics, a significantly higher fraction of a robot’s capabilities is almost identical to what is found in animals: moving around, some form of communication with others, and interacting with objects in the environment.

Autonomy and agency

Autonomy refers to the degree of freedom that an intelligent entity has to choose how to act in any situation.

Humans generally have autonomy, unless they are slaves, prisoners, children, students in school, or serving in the military.

Whether or not an AI system or robot has autonomy or what degree of autonomy it has will vary greatly and be a matter of debate. Generally, at least for now, they have fairly limited and very constrained autonomy.

Animals generally have a fair degree of autonomy, unless they are held captive in a zoo or kept as pets.

Agency refers to the degree to which an intelligent entity is obligated to act relative to the goals and constraints set for it by another intelligent entity, its principal, for which it is an agent.

A person can act as an agent, such as an insurance agent, real estate agent, sales person, purchasing agent (corporate buyer), lawyer, etc.

An AI system may or may not have a degree of agency.

Robots will tend to have some non-trivial degree of agency, consistent with their need for mobility and direct interaction with real-world objects.

An assistant is an intelligent entity with a very limited but still meaningful degree of agency.

The distinction between an agent and an assistant is that an agent is generally given one or more goals and is free to decide for itself how to pursue those goals, while an assistant is given only narrow, well-defined tasks and has no authority to decide for itself how to achieve those tasks.
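A rough sketch in code may make the distinction more concrete. All of the class and method names here are hypothetical, invented purely for illustration; they are not from any real framework:

```python
# Illustrative sketch of the assistant vs. agent distinction.
# These classes are hypothetical, purely for illustration.

class Assistant:
    """Performs only narrow, well-defined tasks it is explicitly given."""

    def handle(self, task: str) -> str:
        # No discretion: the task is either recognized or refused.
        known_tasks = {
            "turn on the light": "light turned on",
            "order a pizza": "pizza ordered",
        }
        return known_tasks.get(task, "sorry, I can't do that")

class Agent:
    """Given a goal, decides for itself how to pursue it."""

    def pursue(self, goal: str) -> list[str]:
        # Discretion: the agent plans its own sequence of actions.
        return [
            f"research options for: {goal}",
            f"select the best option for: {goal}",
            f"act to achieve: {goal}",
        ]

print(Assistant().handle("turn on the light"))      # narrow task, no discretion
print(Agent().pursue("get dinner for the family"))  # goal, self-chosen plan
```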

For more detail on autonomy and agency, see my informal paper:

And for deeper detail, see my informal paper:

Intelligent digital assistants

The purpose of an intelligent digital assistant is for a computer system to perform relatively discrete tasks comparable to those of a human assistant, such as answering questions or carrying out relatively discrete activities such as making a phone call, turning a light or piece of equipment on or off, or ordering a pizza. Some assistants can be relatively simple while others can be significantly more complex and sophisticated.

For more detail on intelligent digital assistants, see my informal paper:

Since I wrote that in 2017, six years ago, large language models such as ChatGPT, Bing Chat, Google Bard, et al. have introduced a whole new level of capability (a wider range of activities) for intelligent digital assistants with generative AI, but the basic concept is still the same: providing assistance.

Generative AI

Although currently most commonly associated with large language models (see the next section), generative AI is a general concept for AI systems (generally not robots) which can generate media content, whether text, images, audio, or video, in contrast with more simplistic AI systems which simply answer a question, retrieve data, or perform specific actions that don’t require any sophisticated media.

Text can include traditional natural language, formatted text, structured documents, and programming language source code. Any form of text.

Generated text can be traditional prose, stories, essays, poems, structured text, and computer program source code. Many qualities of the text can be selected.

Despite some claims to the contrary, generative AI alone does not imply full artificial general intelligence (AGI) or even necessarily strong AI. There’s much more to human intelligence than simply generating media content.

Large language models

A large language model is simply one approach to generative AI: it focuses on ingesting very large quantities of textual and image material and then constructing a mathematical (statistical) model of the words (or pieces of words, called tokens) and their relationships. That model can then be used to generate media content based on a user’s input.

ChatGPT, Bing Chat, and Google Bard are examples of generative AI using large language models.
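As a minimal sketch of that statistical idea, here is a toy bigram model in Python. This is my own illustration; real large language models use deep neural networks with billions of parameters over tokens, not simple counts, but both boil down to modeling word relationships statistically and then sampling from the model:

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which,
# then generate text by sampling from those observed counts.

corpus = (
    "heuristics are shortcuts . heuristics are rarely perfect . "
    "heuristics are usually good enough"
).split()

# Build a table: word -> list of words observed to follow it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly sampling a statistically likely next word.
word = "heuristics"
output = [word]
for _ in range(6):
    candidates = following.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g. "heuristics are usually good enough"
```

The toy model captures only which word tends to follow which, but that is already enough to generate plausible-looking fragments, which hints at why scaling the same basic idea up to vast amounts of data works so well.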

Emotional intelligence vs. intellectual intelligence

Intellectual intelligence (the world of ideas, concepts, language, logic, reason, decision, planning, and initiating and managing activities) generally gets the lion’s share of attention when we talk about AI. But human intelligence includes emotional intelligence as well: the world of emotions, feelings, concerns, anxieties, fears, compassion, empathy, sensitivity, responsibility, joy, anger, and despair, and the ability to identify, respond to, and manage all of these, both internally and externally.

At least at present, emotional intelligence is not present in most AI systems or robots, or present in only a minor, superficial, or canned manner.

AI is not a one-size-fits-all technology

Artificial intelligence is not a monolithic, uniform, one-size-fits-all type of technology. It comes in all shapes and sizes.

In particular, any characteristics that apply to one particular shape, size, form, or packaging of AI technology might not apply to some other shape, size, form, or packaging of AI technology.

Some concepts might transfer from one form of AI to another, but that is not guaranteed.

AI that applies to one application problem (use case) might not apply to some other application problem.

Not all AI systems or robots have identical capabilities, limitations, or issues. Generally, there will be at least some similarities, but the degree of similarity will vary greatly.

Be careful not to over-generalize: perceptions, characterizations, conclusions, or opinions about one AI system cannot be blindly presumed to be true of all other AI systems or robots.

Asimov’s three laws of robotics

While people (primarily critics) complain about the risks of robots and AI systems taking over, they rarely call any attention to science fiction writer Isaac Asimov’s three laws of robotics, which would reduce or eliminate a lot of the concerns about AI taking over. His proposed laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He later added a zeroth law:

  • A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

That final addition, the zeroth law, is very relevant to a discussion of the relationship between humanity and AI.

Actually implementing these laws could be very challenging or even problematic, but at least they would provide a firm foundation for discussing the relationship between humanity and AI, and its risks.
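To make the priority ordering concrete, here is a naive sketch of my own in Python. Note how all of the real difficulty is hidden inside hypothetical boolean flags; actually judging whether an action harms a human (or humanity) is the unsolved part:

```python
# Naive sketch of Asimov's laws as prioritized constraints.
# The 'action' flags are hypothetical; deciding whether an action
# actually harms a human is exactly the hard, unsolved problem.

def permitted(action: dict) -> bool:
    # Zeroth and First Laws: never harm humanity or a human being
    # (harm through inaction is not modeled in this toy sketch).
    if action.get("harms_humanity") or action.get("harms_human"):
        return False
    # Second Law: obey human orders (unless blocked above);
    # obedience outranks the robot's own self-preservation.
    if action.get("ordered_by_human"):
        return True
    # Third Law: otherwise, protect your own existence.
    return not action.get("endangers_self")

print(permitted({"harms_human": True, "ordered_by_human": True}))    # False
print(permitted({"ordered_by_human": True, "endangers_self": True})) # True
print(permitted({"endangers_self": True}))                           # False
```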

Granted, he was a science fiction writer, but so far nobody has challenged these so-called laws.

By the way, Asimov was the luncheon speaker when I graduated from college. My only recollection is of how impressed he was with the fact that our civil engineering majors were designing, building, and racing canoes made out of… concrete.

Seminal books and movies about AI

This list of AI-related books and movies is not intended to be comprehensive, but at least represents a starting point for discussion and covers a lot of the issues.

  1. Player Piano. Novel about industrial automation by Kurt Vonnegut. 1952.
  2. Colossus: The Forbin Project. Movie about a military computer taking control. 1970. Based on the book Colossus by Dennis Feltham Jones. 1966.
  3. 2001: A Space Odyssey. Movie about the HAL 9000 computer taking control. By Stanley Kubrick. 1968.
  4. The Terminator. Movie series about a world in which an AI computer network, Skynet, has taken over. By James Cameron. 1984.

The precautionary principle for coping with risk

This has nothing specifically to do with AI per se, but one general approach to dealing with risk is the so-called precautionary principle, which basically says that we must scientifically prove and fully verify that a technology or policy is truly harmless, or causes only acceptable harm, before pursuing it.

I personally find this approach too extreme, but others believe in it.

I believe in taking reasonable steps to identify and manage risks.

For more detail, see the Wikipedia article:

Pascal’s wager for coping with risk

Another approach to risk is Pascal’s wager. It was originally developed to address the risk of being wrong about whether God exists: if you don’t believe that God exists and you’re wrong, then you will spend the rest of eternity in hell, which is clearly an unacceptable outcome. But the wager can be applied to any technology or policy for which there is even a slight concern about the potential for an extremely adverse or catastrophic outcome. Basically, it says that if the consequences of being wrong would be truly extreme and the cost of pursuing an alternative approach is fairly low, it is clearly more advisable to pursue that alternative rather than run the risk of an extremely adverse outcome.
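To make the arithmetic of the wager concrete, here is a tiny sketch with purely made-up illustrative numbers; even a very small probability of catastrophe can dominate the expected cost, favoring the cheap alternative:

```python
# Pascal's-wager-style comparison using made-up illustrative numbers.
p_catastrophe = 0.001             # assumed small chance of disaster
catastrophe_cost = 1_000_000_000  # assumed enormous cost if it happens
precaution_cost = 100_000         # assumed modest cost of the alternative

expected_cost_without = p_catastrophe * catastrophe_cost  # 1,000,000
expected_cost_with = precaution_cost                      # 100,000

if expected_cost_with < expected_cost_without:
    print("The wager favors the cheap precaution.")
```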

I also personally find this approach too extreme, but others believe in it.

I believe in taking reasonable steps to identify and manage risks.

For more detail, see the Wikipedia article:

For more basic information on AI

For more basic information on AI, see my informal paper:

For more detailed information and depth on AI

For a lot more depth on AI, see my informal paper:

For more of my writing on AI: List of My Artificial Intelligence (AI) Papers.
