How Close Is AI to Human-level Intelligence Here in April 2018?

Jack Krupansky
36 min read · Apr 28, 2018

Artificial intelligence (AI) is progressing rapidly, but how close is it to true human-level intelligence here in April 2018? Not very, in my own estimation. It’s got a long way to go.

This informal paper won’t delve so much into specific AI projects or features, but will endeavor to explore elements of a conceptual framework for judging progress of AI towards full, human-level intelligence, also known as Strong AI.

Even super-optimist Ray Kurzweil of The Singularity Is Near: When Humans Transcend Biology fame was still touting 2029 last fall as his target date for machines achieving human-level intelligence, on their way towards his technological singularity of exponential superintelligence in 2045.

So, here we are, still over a decade short of the super-optimist’s forecast for human-level intelligence.

But where are we really?

Are we on track for human-level intelligence in 2029?

Are we behind?

Or are we possibly ahead of schedule, on a path to achieve human-level intelligence in 5–7 years rather than 11 years?

First off, all bets are off, or maybe I should say that all bets are on, since nobody, not even Kurzweil, has any clue where in that 5–11 year time horizon we really are.

Generally, progress proceeds in fits and starts, with occasional short bursts of phenomenal breakthroughs, but interspersed with prolonged periods of disappointingly slow progress.

Superficially, we seem to be in one of those rare periods of rapid advance, but who’s to say how long it will last.

And who’s to say how long we will have to wait for the next burst.

And who’s to say how many bursts and breakthroughs we will need before we finally do break through to true human-level intelligence.

To recap, there are two very distinct questions:

  1. How close is AI in April 2018 to human-level intelligence?
  2. Is AI on track to achieve human-level intelligence in 2029, 11 years from now?

And, I would submit that there is another pair of questions which are more urgent:

  1. What fraction of applications which are crying out for AI capabilities have those needs met with off the shelf AI packages — or at least custom AI which could be completed within no more than a few months by no more than a few people?
  2. What fraction of off the shelf AI capabilities are truly ready for prime time deployment in production applications?

On the matter of actual progress, there are two aspects:

  1. Specific progress. Actual technical accomplishments.
  2. A more abstract framework for criteria to judge progress.

I won’t recount specific technical accomplishments myself, but I would refer curious readers to a recent post entitled Frontier AI: How far are we from artificial “general” intelligence, really? by venture capitalist Matt Turck of FirstMark Capital, which mentions quite a few of the recent accomplishments. His overall conclusion:

  • So, how far are we from AGI [Artificial General Intelligence]? This high level tour shows contradictory trends. On the one hand, the pace of innovation is dizzying — many of the developments and stories mentioned in this piece (AlphaZero, new versions of GANs, capsule networks, RCNs breaking CAPTCHA, Google’s 2nd generation of TPUs, etc.) occurred just in the last 12 months, in fact mostly in the last 6 months. On the other hand, many in the AI research community itself, while actively pursuing AGI, go to great lengths to emphasize how far we still are — perhaps out of concern that the media hype around AI may lead to dashed hopes and yet another AI nuclear winter.

As far as a more abstract framework for criteria to judge progress, I’ll defer that for a future paper, but many of the elements of such a framework will be explored in the remainder of this paper, and are already discussed to a fair degree in a companion paper, Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning.

A simplified framework for evaluating AI systems relative to strong AI has several dimensions:

  1. Areas of intelligence.
  2. Levels of function. In each area.
  3. Degree of competence. At each level of function, in each area.
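To make those three dimensions concrete, here is a minimal sketch in Python of how such an evaluation framework might be represented. All names here are hypothetical illustrations of the idea, not part of any actual library or benchmark:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-dimensional framework: a degree of
# competence (0.0 to 1.0) is recorded for each level of function within
# each area of intelligence.
@dataclass
class Assessment:
    # area of intelligence -> level of function -> degree of competence
    scores: dict = field(default_factory=dict)

    def rate(self, area: str, level: str, competence: float) -> None:
        self.scores.setdefault(area, {})[level] = competence

    def competence(self, area: str, level: str) -> float:
        # Areas or levels never rated default to 0.0 (no capability).
        return self.scores.get(area, {}).get(level, 0.0)

# A narrow "tower of intelligence": strong in one area, absent elsewhere.
go_player = Assessment()
go_player.rate("game playing", "major function", 0.95)
print(go_player.competence("game playing", "major function"))      # 0.95
print(go_player.competence("natural language", "major function"))  # 0.0
```

The sparse-by-default representation mirrors the point that follows: today's systems score well on isolated (area, level) cells while leaving nearly all other cells empty.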

Unfortunately it is impractical to characterize current AI systems according to such a framework today since the structure and competence of such systems doesn’t really parallel human-level intelligence to any significant degree.

Instead, what we have in AI systems today is either:

  1. Small chunks of intelligence embedded in larger software systems. Not really resembling a larger form of intelligence.
  2. Narrow towers of intelligence. Where the machine performs comparably to or even outperforms human-level intelligence. Such as data analysis or playing games such as Go and chess.
  3. Relatively weak forms of learning that require significant human assistance or training. Such as image or pattern recognition. Commonly called machine learning (ML).
  4. So-called deep learning. The machine does extremely well, but requires very careful setup with ground rules and preprogramming of basic logic. Formerly called neural networks.
  5. Preprogrammed intelligence. Such as natural language processing (NLP) or data analysis.
  6. Relatively thin layers of intelligence. The current crop of intelligent digital assistants are quite amazing, but only in a rather superficial sense, answering only basic questions and performing basic tasks.
  7. Just automation, billed as AI. There is no law as to what can be called AI, so many algorithms can be treated as if they were AI, even though they don’t really involve anything comparable to higher-order human intellect. Data analysis, scheduling, optimization.

So, the basic problem we have today is that we can’t even begin to compare any machine to the pathway of human intellectual development and ask the basic question:

  • If a typical AI system were a person, what age or stage of intellectual development has it achieved?

Sure, some of these towers of intelligence and preprogrammed intelligence have achieved a mature human level of skill, but at the same time they lack a lot of basic skills of human intelligence that we expect from even small children.

And true general learning — as mastered by even small children — is well beyond even the most advanced of AI systems today.

If I had to summarize today’s AI systems in a single statement, I would say:

  • The state of the art for AI today is primarily in task-specific and domain-specific AI systems.

And a follow-on statement regarding learning:

  • The best AI systems today hinge critically on some degree of preprogrammed basic intelligence and relatively narrow task or domain-specific supervised training or limited, narrow learning.

Granted, there is some preliminary research in unsupervised learning, but that is the rare exception rather than the general rule, and the whole thrust of this paper is on what is general and common today rather than fringe or atypical, or coming further down the road.

In short, AI is now quite common but still quite primitive, with rare exceptions.

If you have a problem which you wish to solve using AI, you absolutely cannot just go out to the store (or Amazon) and order an off the shelf solution, except in a relatively small number of areas.

Some of the areas where fairly sophisticated AI can be bought or downloaded for free include:

  • Basic natural language transcription. Speech recognition.
  • Basic natural language commands.
  • Basic but relatively primitive automatic natural language translation. And detection of language. Google Translate, built into Google Search.
  • Particular games. Play chess against the machine, online, for free.
  • Intelligent digital assistants. Alexa, Siri, Google.

Topics to be covered in this paper

The topics to be covered in this informal paper include:

  1. Task-specific and domain-specific AI.
  2. Learning and machine learning.
  3. Learning concepts.
  4. Robotics vs. intellectual capacities.
  5. Is it AI or just automation?
  6. Is it AI or just a heuristic?
  7. Progress on gaming.
  8. Is it AI or just machine intelligence?
  9. What fraction of strong AI is needed for your particular app?
  10. Areas of intelligence.
  11. Areas of human intelligence.
  12. Areas of AI research.
  13. Levels of function.
  14. Specific human-level functions of intelligence.
  15. Degree of competence.
  16. Common use of natural language processing (NLP).
  17. Autonomy, principals, agents, and assistants.
  18. Intelligent agents.
  19. Intelligent digital assistants.
  20. The robots are coming, to take all our jobs?
  21. How intelligent is an average worker?
  22. No sign of personal AI yet (strong AI).
  23. AI is generally not yet ready for consumers.
  24. Meaning and conceptual understanding.
  25. Emotional intelligence.
  26. Wisdom, principles, and values.
  27. Extreme AI.
  28. Ethics and liability.
  29. Dramatic breakthroughs needed.
  30. Fundamental computing model.
  31. How many years from research to practical application?
  32. Turing test for strong AI.
  33. How to score the progress of AI.
  34. Links to my AI papers.
  35. Conclusion: So, how long until we finally see strong AI?

Task-specific and domain-specific AI

Much of the recent advances in AI have been in task-specific and domain-specific AI.

And when an advance does indeed transcend multiple tasks or multiple domains, it is usually fairly narrow.

Learning and machine learning

Learning is a fundamental component of intelligence. Machine learning is all the rage, but despite impressive achievements at learning, the state of the art involves either significant, supervised training, directed training, or explicit focus on a narrow problem and preprogrammed foundation concepts.

Learning concepts

As of today, there has been no real breakthrough on the ability of a machine to independently and without human direction discover concepts, especially foundation concepts.

Granted, even genius-level humans come preprogrammed with a vast array of conceptual and intuitive knowledge, encoded in our DNA, as well as culturally programmed when we are young and in school, so the precise nature of how an adult learns does not easily or quickly translate into how a machine should learn.

Even the learning process for children is still well beyond the capabilities of even the most capable AI systems with the most advanced machine learning algorithms.

Robotics vs. intellectual capacities

One important distinction to draw in the progress of AI is robotic versus intellectual capabilities.

Physical bodies and movement in the real world have a lot less to do with human-level intelligence and more to do with the combination of:

  • Animal-level body structure and mechanics.
  • Animal-level movement.
  • Animal-level intelligence.
  • Mechanical and electrical aspects of robotics.

Granted, intellectual capacities are needed to decide where to move and what actions to take when you get there, but the basic mechanics of moving a physical body from point A to point B requires only the same capabilities as possessed by most animals. That’s the primary job of robotics.

Whether the non-intellectual aspects of robotics should be considered intelligence or AI is a matter of debate. See a companion paper, How Much of Robotics Is AI?.

This paper does not get into robotics per se, focusing more strictly on intellectual capacities.

Is it AI or just automation?

Technically, just about anything a person can do in their head can be considered intelligence, but do we really want to label relatively simple tasks such as the following as artificial intelligence:

  1. Ability to add a few numbers and compute an average.
  2. Organize a list of names, addresses, and phone numbers.
  3. Recognize printed text.
  4. Schedule or optimize deliveries for a business.
  5. Sorting or searching a very large dataset.
  6. Organizing photos based on heuristics such as colors or superficial features.

I think not.

Automation, yes, but rising to the level of higher-order human intelligence, no.

I have a full paper on this topic: Is It Really AI or Just Automation?

Yes, incredible progress has been made on automating tasks performed by people.

But I would assert that none of that constitutes progress towards Strong AI, automating higher-order intellectual capacities.

Is it AI or just a heuristic?

Computer scientists have done a great job over the years of coming up with relatively simple rules and shortcuts or heuristics which have the effect of mimicking human-level intelligence.

But merely getting comparable results to a human for a given task does not necessarily mean that the machine has human-level intelligence.

Besides, a lot of tasks performed by people are relatively simple in the first place, so that they aren’t necessarily tapping into the core of the higher-order intellectual capacities which may be present but not necessarily used.

Heuristics and mental shortcuts are highly valued and to be applauded, but they most certainly are not the same as higher-order intellectual capacities.

Progress on gaming

Some of the highest profile and most impressive advances in AI have been on the gaming front, including:

  • Chess.
  • Go.
  • Learning classic video games.

While impressive, and achieving human-level performance, there are difficulties asserting that this constitutes progress towards Strong AI. Notably:

  1. These are niches, or what I call towers of intelligence. Excellence in one of these areas implies nothing about general intelligence in disparate areas.
  2. Humans must preprogram basic knowledge and basic logic, such as ground rules. These AI systems are not strictly learning from a completely blank tabula rasa.
  3. Heuristics and statistics rather than true, higher-order human-level intellectual capacities are being exploited.

In short, such advances are significant progress in machine intelligence, but not artificial higher-order human intelligence per se.

Is it AI or just machine intelligence?

Although some (many) people treat machine intelligence and artificial intelligence as synonyms, I would strongly advise treating the terms as distinct.

We can and should strongly applaud advances in machine intelligence without any need or obligation to assert that such advances necessarily constitute advances in artificial intelligence of the higher-order human-level intellectual capacities kind.

I have another companion paper on this topic, Is It AI or Machine Intelligence?

What fraction of strong AI is needed for your particular app?

Not every application requires full human-level intelligence. A mere fraction of human-level intelligence may be sufficient.

This is not unlike the simple fact that even for jobs staffed by people, not all positions require a genius, or even more than a very modest fraction of what the staff are really capable of. Think of Einstein working as a patent clerk.

The fraction has three dimensions:

  1. Area of intelligence.
  2. Level of intelligence. In a given area of intelligence.
  3. Degree of competence. For a given level in a given area of intelligence.

Not every application requires all areas of human-level intelligence. Not every app needs to be a chess grandmaster. Not every app requires facility with quantum mechanics.

Even in a given area of intelligence, not every app requires all levels of function in that area. An auto mechanic doesn’t need to be able to design a new engine. A roadside assistance technician doesn’t need to be able to tear down and rebuild an engine.

Even for a given level of function in a given area of intelligence, a given app doesn’t require the maximum level of competence. Basic competence may be quite sufficient and readily achievable, while expert or genius level competence may be expensive, difficult, or even impractical.

Areas of intelligence

There are two rather distinct ways to look at areas of intelligence:

  1. Abstract human intelligence. Nothing to do with AI per se, but certainly applies to Strong AI.
  2. Areas of research in AI. The areas that AI researchers feel are fruitful for progress in automating human-level intelligence.

Areas of human intelligence

Even ignoring efforts in AI, human intelligence can be summarized as a variety of major distinct areas:

  1. Perception. The senses or sensors. Forming a raw impression of something in the real world around us.
  2. Attention. What to focus on.
  3. Recognition. Identifying what is being perceived.
  4. Communication. Conveying information or knowledge between two or more intelligent entities.
  5. Processing. Thinking. Working with perceptions and memories.
  6. Memory. Remember and recall.
  7. Learning. Acquisition of knowledge and know-how.
  8. Analysis. Digesting and breaking down more complex matters.
  9. Speculation, imagination, and creativity.
  10. Synthesis. Putting simpler matters together into a more complex whole.
  11. Reasoning. Logic and identifying cause and effect, consequences, and preconditions.
  12. Following rules. From recipes to instructions to laws and ethical guidelines.
  13. Applying heuristics. Shortcuts that provide most of the benefit for a fraction of the mental effort.
  14. Intuitive leaps.
  15. Mathematics. Calculation, solving problems, developing models, proving theorems.
  16. Decision. What to do. Choosing between alternatives.
  17. Planning.
  18. Volition. Will. Deciding to act. Development of intentions. When to act.
  19. Movement. To aid perception or prepare for action. Includes motor control and coordination. Also movement for its own sake, as in communication, exercise, self-defense, entertainment, dance, performance, and recreation.
  20. Behavior. Carrying out intentions. Action guided by intellectual activity. May also be guided by non-intellectual drives and instincts.

Communication includes a variety of subareas:

  1. Natural language.
  2. Spoken word.
  3. Written word.
  4. Gestures. Hand, finger, arm.
  5. Facial expressions. Smile, frown.
  6. Nonlinguistic vocal expression. Grunts, sighs, giggles, laughter.
  7. Body language.
  8. Images.
  9. Music.
  10. Art.
  11. Movement.
  12. Creation and consumption of knowledge artifacts — letters, notes, books, stories, movies, music, art.
  13. Ability to engage in discourse. Discussion, conversation, inquiry, teaching, learning, persuasion, negotiation.
  14. Discerning and conveying meaning, both superficial and deep.

Recognition includes a variety of subareas:

  1. Objects
  2. Faces
  3. Scenes
  4. Places
  5. Names
  6. Voices
  7. Activities

The measure of progress in AI in the coming years will be the pace at which additional areas from those lists are ticked off, as well as improvements in the level of competence in the levels of function in each area.

Progress in AI will likely continue to be uneven, with both strength and weakness in distinct areas, levels of functions, and degrees of competence.

Areas of AI research

Areas of research in replication of human intelligence and human behavior in which AI researchers feel they can make fruitful progress:

  1. Reasoning.
  2. Knowledge and knowledge representation.
  3. Optimization, planning, and scheduling.
  4. Learning.
  5. Natural language processing (NLP).
  6. Speech recognition and generation.
  7. Automatic language translation.
  8. Information extraction.
  9. Image recognition.
  10. Computer vision.
  11. Moving and manipulating objects.
  12. Robotics.
  13. Driverless and autonomous vehicles.
  14. General intelligence.
  15. Expert systems.
  16. Machine learning.
  17. Pattern recognition.
  18. Theorem proving.
  19. Fuzzy systems.
  20. Neural networks.
  21. Evolutionary computation.
  22. Intelligent agents.
  23. Intelligent interfaces.
  24. Distributed AI.
  25. Data mining.
  26. Games (chess, Go, Jeopardy).

For more depth in these areas, see Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning.

Levels of function

In a given area of intelligence, we can also discern levels of function — what is the AI system actually accomplishing, relative to what a human might be able to accomplish.

Here is a list of generalized, abstract, but informal levels of function that can be applied to any area of intelligence:

  1. Non-functional. No apparent function. Noise. Twitches and vibrations.
  2. Barely functional. The minimum level of function that we can discern. No significant utility. Not normally considered AI. Automation of common trivial tasks.
  3. Merely, minimally, or marginally functional. Tertiary function. Seems to have some minimal, marginal value. Marginally considered AI. Automation of non-trivial tasks. Not normally considered intelligence per se.
  4. Minor or secondary function. Has some significance, but not in any major way. Common behavior for animals. Common target for AI. Automation of modestly to moderately complex tasks. This would also include involuntary and at least rudimentary autonomous actions. Not normally considered intelligence per se.
  5. Major, significant, or primary function. Fairly notable function. Top of the line for animals. Common ideal for AI at the present time. Automation of complex tasks. Typically associated with consciousness, deliberation, decision, and intent. Autonomy is the norm. Bordering on what could be considered intelligence, or at least a serious portion of what could be considered intelligence.
  6. Highly functional, high function. Highly notable function. Common for humans. Intuition comes into play. Sophisticated enough to be considered human-level intelligence. Characterized by integration of numerous primary functions.
  7. Very high function. Exceptional human function, such as standout creativity, imagination, invention, and difficult problem solving and planning. Exceptional intuition.
  8. Genius-level function. Extraordinary human, genius-level function, or extraordinary AI function.
  9. Super-human function. Hypothetical AI that exceeds human-level function.
  10. Extreme AI. Virtuous spiral of learning how to learn and using AI to create new AI systems ever-more capable of learning how to learn and how to teach new AI systems better ways to learn and teach how to learn.
  11. Ray Kurzweil’s Singularity. The ultimate in Extreme AI, combining digital software and biological systems.
  12. God or god-like function. The ultimate in function. Obviously not a realistic research goal.

Specific human-level functions of intelligence

At a more detailed level, the mental functions and mental processes of intelligence or intellectual capacity include:

  • Sentience — to be able to feel, to be alive and know it.
  • Sapience — to be able to think, exercise judgment, reason, and acquire and utilize knowledge and wisdom.
  • Ability, capability, and capacity to pursue knowledge (information and meaning).
  • Sense the real world. Sight, sound, and other senses.
  • Observe the real world.
  • Direct and focus attention.
  • Experience, sensation.
  • Recognize — objects, plants, animals, people, faces, gestures, words, phenomena.
  • Listen, read, parse, and understand natural language.
  • Identification after recognition (e.g., recognize a face and then remember a name).
  • Read people — what information or emotion are they expressing or conveying visually or tonally.
  • Detect lies.
  • Take perspective into account for cognition and thought.
  • Take context into account for cognition and thought.
  • Adequately examine evidence and judge the degree to which it warrants beliefs to be treated as proof of strong knowledge.
  • Compare incoming information to existing knowledge, supplementing, integrating, and adding as warranted.
  • Understand phenomena and processes based on understanding evidence of their components and stages.
  • Assess whether a new belief is strong knowledge or weak knowledge.
  • Judge whether fresh knowledge in conjunction with accumulated knowledge warrants action.
  • Learn by reinforcement — seeing the same thing repeatedly.
  • Significant degree of self-organization of knowledge and wisdom.
  • Form abstractions as knowledge.
  • Form concepts as knowledge.
  • Organize knowledge into taxonomies and ontologies that represent similarities and relationships between classes and categories of entities.
  • Acquire knowledge by acquaintance — direct experience.
  • Acquire knowledge by description — communication from another intelligent entity.
  • Commit acquired knowledge to long-term memory.
  • Conscious — alert, aware of surroundings, and responsive to input.
  • Feel, emotionally.
  • Cognition in general.
  • Think — form thoughts and consider them.
  • Assess meaning.
  • Speculate.
  • Conjecture.
  • Theorize.
  • Imagine, invent, and be creative.
  • Ingenuity.
  • Perform thought experiments.
  • Guess.
  • Cleverness.
  • Approximate, estimate.
  • Fill in gaps of knowledge in a credible manner consistent with existing knowledge, such as interpolation.
  • Extrapolation — extend knowledge in a sensible manner.
  • Generalize — learn from common similarities, in a sensible manner, but refrain from over-generalizing.
  • Count things.
  • Sense correspondence between things.
  • Construct and use analogies.
  • Calculate — from basic arithmetic to advanced math.
  • Reason, especially using abstractions, concepts, taxonomies, and ontologies.
  • Discern and discriminate, good vs. bad, useful/helpful vs. useless, relevant vs. irrelevant.
  • Use common sense.
  • Problem solving.
  • Pursue goals.
  • Foresight — anticipate potential consequences of actions or future needs.
  • Assess possible outcomes for the future.
  • Exercise judgment and wisdom.
  • Attitudes that affect interests and willingness to focus on various topical areas for knowledge acquisition and action.
  • Intuition.
  • Maintain an appropriate sense of urgency for all tasks at hand.
  • Sense of the passage of time.
  • Sense of the value of time — elapsed, present value, and future value.
  • Understand and assess motivations.
  • Be mindful in thought and decisions.
  • Formulate intentions.
  • Decide.
  • Make decisions in the face of incomplete or contradictory information.
  • Sense of volition — sense of will and independent agency controlling decisions.
  • Exercise free will.
  • Plan.
  • Execute plans.
  • Initiate action(s) and assess the consequences.
  • Assess feedback from actions and modify actions accordingly.
  • Iterate plans.
  • Experiment — plan, execute, assess feedback, and iterate.
  • Formulate and evaluate theories of law-like behavior in the universe.
  • Intentionally and rationally engage in trial and error experiments when no directly rational solution to a problem is available.
  • Explore, sometimes in a directed manner and sometimes in an undirected manner to discover that which is unknown.
  • Ability and willingness to choose to flip a coin, throw a dart, or otherwise introduce an element of randomness into reasoning and decisions.
  • Discover insights, relationships, and trends in data and knowledge.
  • Cope with externalities — factors, the environment, and other entities outside of the immediate contact, control, or concern of this intelligent entity.
  • Adapt.
  • Coordinate thought processes and activities.
  • Organize — information, activities, and other intelligent entities.
  • Collaborate, cooperate, and compete with other intelligent entities.
  • Remember.
  • Assert beliefs.
  • Build knowledge, understanding (meaning), experience, skills, and wisdom.
  • Assess desires.
  • Assert desires.
  • Exercise control over desires.
  • Be guided or influenced by experiences, skills, beliefs, desires, intentions, and wisdom.
  • Be guided (but not controlled) by drives.
  • Be guided (but not controlled) by emotions.
  • Be guided by values, moral and ethical, personal and social group.
  • Adhere to laws, rules, and recognized authorities.
  • Selectively engage in civil disobedience, when warranted.
  • Recall memories.
  • Recognize correlation, cause and effect.
  • Reflection and self-awareness.
  • Awareness of self.
  • Know thyself.
  • Express emotion.
  • Heartfelt sense of compassion.
  • Empathy.
  • Act benevolently, with kindness and compassion.
  • Communicate with other intelligent entities — express beliefs, knowledge, desires, and intentions.
  • Form thoughts and intentions into natural language.
  • Formulate and present arguments as to reasons, rationale, and justification for beliefs, decisions, and actions.
  • Persuade other intelligent entities to concur with beliefs, decisions, and actions.
  • Judge whether information, beliefs, and knowledge communicated from other intelligent entities are valid, true, and worth accepting.
  • Render judgments about other intelligent entities based on the information, beliefs, and knowledge communicated.
  • Render judgments as to the honesty and reliability of other intelligent entities.
  • Act consistently with survival — self-preservation.
  • Act consistently with sustaining health.
  • Regulate thoughts and actions — self-control.
  • Keep purpose, goals, and motivations in mind when acquiring knowledge and taking action.
  • Able to work autonomously without any direct or frequent control by another intelligent entity.
  • Adaptability.
  • Flexibility.
  • Versatility.
  • Refinement — make incremental improvements.
  • Resilience — able to react, bounce back, and adapt to shocks, threats, and the unexpected.
  • Understand and cope with the nature of oneself and entities one is interacting with, including abilities, strengths, weaknesses, drives, innate values, desires, hopes, and dreams.
  • Maintain a healthy balance between safety and adventure.
  • Balance long-term strategies and short-term tactics.
  • Positive response to novelty.
  • Commitment to complete tasks and goals.
  • Respect wisdom.
  • Accrue wisdom over time.
  • Grow continuously.
  • Tell the truth at all times — unless there is a socially-valid justification.
  • Refrain from lying — unless there is a socially-valid justification.
  • Love.
  • Dream.
  • Seek a mate to reproduce.
  • Engage in games, sports, and athletics to stimulate and rejuvenate both body and mind.
  • Engage in humor, joking, parody, satire, fiction, and fairy tales, etc. to relax, release tension, and rejuvenate the mind.
  • Seek entertainment, both for pleasure and to rejuvenate both body and mind.
  • Selectively engage in risky activities to challenge and rejuvenate both body and mind.
  • Experience excitement and pleasure.
  • Engage in music and art to relax and to stimulate the mind.
  • Day dream (idly, for no conscious, intentional purpose) to relieve stress and rejuvenate the mind.
  • Seek to avoid boredom.
  • Engage in disconnected and undirected thought, for the purpose of seeking creative solutions to problems where no rational approach is known, or simply in the hope of discovering something of interest and significant value.
  • Brainstorm.
  • Refrain from illegal, immoral, or unfair conduct.
  • Resist corruption.
  • Maintaining and controlling a healthy level of skepticism.
  • Maintaining a healthy balance between engagement and detachment.
  • Accept and comprehend that our perception and beliefs about the world are not necessarily completely accurate.
  • Accept and cope with doubt.
  • Accept and cope with ambiguity.
  • Resolve ambiguity, when possible.
  • Solve puzzles.
  • Design algorithms.
  • Program computers.
  • Pursue consensus with other intelligent entities.
  • Gather and assess opinions from other intelligent entities. Are they just opinion, or should they be treated as knowledge?
  • Develop views and positions on various matters.
  • Ponder and arrive at positions on matters of politics and public policy.
  • Decide how to vote in elections.
  • Practice religion — hold spiritual beliefs, pray, participate in services.
  • Respond to questions.
  • Respond to commands or requests for action.
  • Experience and respond to pain.
  • Sense when to avoid going down rabbit holes, rather than being easily distracted and hard to get back on track.
  • Reason about and develop values and moral and ethical frameworks.
  • Be suspicious — without being paranoid.
  • Engage in philosophical inquiry.
  • Critical thinking.
  • Authenticity. Thinking and acting according to a strong sense of an autonomous self rather than according to any external constraints, cultural conditioning, or a preprogrammed sense of self.

That’s a very large amount of intellectual capacity which an AI system will need to possess to be truly classified as Strong AI.

But as this paper suggests, we can partition intelligence into distinct areas, distinct levels of function in each area, and distinct degrees of competence for each level of function in each area.

Degree of competence

A given AI system will have some degree of competence for some level of function in some area of intelligence. That AI system may have differing or even nonexistent competence for other levels of function or in other areas of intelligence.

Levels of competence include:

  1. Nothing. No automation capabilities in a particular area or level of function. User is completely on their own.
  2. Minimal subset of full function. Something better than nothing, but with severe limits.
  3. Rich subset. A lot more than minimal, but with substantial gaps.
  4. Robust subset. Not complete, and maybe not covering all aspects of a level of function in an area, but close to complete in all aspects that it covers.
  5. Near-expert. Not quite all there, but fairly close and good enough to fool the average user into thinking an expert is in charge.
  6. Expert-level. All there.
  7. Elite expert-level. Best of human experts.
  8. Super-expert level. More than even the best human experts.
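As a rough illustration, these eight degrees could be encoded as a simple enumeration. This is purely a hypothetical sketch; the names and the encoding are mine, not part of any existing system:

```python
from enum import IntEnum

class Competence(IntEnum):
    """Degree of competence for one level of function in one area
    of intelligence (hypothetical encoding of the eight degrees)."""
    NOTHING = 1         # no automation; user is completely on their own
    MINIMAL_SUBSET = 2  # better than nothing, but severe limits
    RICH_SUBSET = 3     # a lot more than minimal, substantial gaps
    ROBUST_SUBSET = 4   # near-complete in the aspects it covers
    NEAR_EXPERT = 5     # good enough to fool the average user
    EXPERT = 6          # all there
    ELITE_EXPERT = 7    # best of human experts
    SUPER_EXPERT = 8    # beyond even the best human experts

# An IntEnum orders naturally, so ranges of competence can be compared:
typical_current_ai = (Competence.MINIMAL_SUBSET, Competence.RICH_SUBSET)
```

Using `IntEnum` rather than a plain `Enum` makes the ordering explicit, so a system's competence in one function can be compared directly against another.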

Levels of competence for current AI systems are all over the map.

In some towers of intelligence current AI systems may indeed be at the expert, elite expert, or even super-expert level.

But as a general proposition, current AI systems tend to be in the minimal to rich subset range of competence for most functions.

Common use of natural language processing (NLP)

One of the brighter spots of AI in recent years has been the widespread use of fairly competent natural language processing (NLP).

Recent widespread popularity of intelligent digital assistants which focus on natural language processing exemplify this progress.

There are several unfortunate blemishes on this progress:

  1. Most intelligent digital assistants require an extremely complex service in the cloud. All of this NLP progress is still well beyond the capabilities of a typical personal computing device.
  2. Too much of the progress has been with proprietary systems rather than with open source software.
  3. Too much of the progress is beyond the reach of average software developers. Only elite AI professionals need apply.

I expect these blemishes to be overcome without too much difficulty, but it may still be another two to five years before natural language processing becomes a slam dunk and second nature for computing in general. For now, it remains more of a special feature rather than a presumed general feature.

Autonomy, principals, agents, and assistants

Autonomy is a key requirement for true strong AI. This means that the AI system would be able to set its own goals, not merely do the bidding of a human master.

I have identified three levels of autonomy:

  1. Principals. Full autonomy. Entity can set its own goals without approval or control from any other entity.
  2. Agents. Limited autonomy. Goals are set by another entity, a principal. Entity has enough autonomy to organize its own time and resources to pursue tasks needed to achieve the goals which it has been given.
  3. Assistants. No significant autonomy to speak of. Unable to set its own goals. In fact, an assistant is given specific, relatively narrow tasks to perform, with no real latitude as to how to complete each task.

A strong AI system would have full autonomy. It would be able to act as a principal.

Note that a driverless car would be an agent. It cannot decide for itself where to go, but given a destination, it is free to choose the route to get there.

Realistically, we shouldn't expect full autonomy for AI systems in the foreseeable future. That would mean citizen robots which control their own destiny rather than merely doing our bidding, which limits them to being agents. Think HAL in 2001: A Space Odyssey or Skynet in The Terminator.

Drones might be either agents or assistants, depending on whether their flight is completely automated (including scheduling) or remotely controlled at all times by a human pilot. The former constitutes an agent, the latter an assistant.
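The three levels could be sketched as a small classification, using the paper's own examples. The function and its two boolean criteria are my own hypothetical simplification:

```python
from enum import Enum

class Autonomy(Enum):
    PRINCIPAL = "sets its own goals"
    AGENT = "given goals, chooses its own tasks and methods"
    ASSISTANT = "given narrow tasks, little latitude in execution"

def classify(sets_own_goals: bool, chooses_own_methods: bool) -> Autonomy:
    """Classify an AI system's level of autonomy (illustrative only)."""
    if sets_own_goals:
        return Autonomy.PRINCIPAL
    if chooses_own_methods:
        return Autonomy.AGENT
    return Autonomy.ASSISTANT

# Examples from the text:
driverless_car = classify(sets_own_goals=False, chooses_own_methods=True)
piloted_drone = classify(sets_own_goals=False, chooses_own_methods=False)
```

Under this sketch the driverless car comes out as an agent (it picks its own route but not its destination), and the remotely piloted drone as an assistant.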

For more on autonomy, principals, agents, and assistants, see these two papers:

Intelligent agents

Although agents by definition do not have full autonomy, it is very helpful if they have a significant degree of autonomy so that the user can request that a goal be pursued without needing to expend any significant energy detailing how the agent should achieve that goal.

Intelligent agents don't really exist today. Rather, we are seeing significant activity and progress with intelligent assistants, but these AI systems are focused on narrow, specific tasks, with much less latitude as to how to achieve them, rather than on broader goals.

As mentioned previously, a driverless car is effectively an intelligent agent. It exercises a significant degree of discretion to achieve the requested goal.

Intelligent digital assistants

Although we don’t have much in the way of principals and agents (besides driverless cars), we are seeing significant activity and progress with intelligent assistants or intelligent digital assistants, such as Alexa and Siri, able to complete relatively simple tasks requested in spoken natural language.

For more on intelligent digital assistants, see What Is an Intelligent Digital Assistant?

The robots are coming, to take all our jobs?

Are massive waves of workers about to lose their jobs due to automation and intelligent robots?

Uh, in a word, no.

Yes, robots are getting incrementally more sophisticated as every year goes by, but they are still quite primitive.

We are definitely seeing significant progress in machine intelligence, but not so much progress yet on higher-order human-level intellectual capacities.

Yes, incrementally, small numbers of workers will be displaced by robots, but nothing major anytime soon.

Ongoing innovation will tend to create new forms of employment as quickly as older jobs are eliminated. Granted, training and relocation may be required, but that’s the world we live in.

How intelligent is an average worker?

Although robots are nowhere close to being capable of replacing large numbers of workers, at some point in the more distant future that may indeed be the case.

Besides, an average worker doesn’t really utilize more than a tiny fraction of their intelligence for the tasks they are commonly assigned.

So, how intelligent does a machine have to be to replace an average worker?

Not very.

But, still, probably significantly more intelligent than the current crop of robots.

But even if 90% to 99% of the tasks performed by an average worker require minimal intelligence, the other 1% to 10% of their work may require significantly more intelligence, such as:

  1. How to deal with equipment which fails.
  2. How to deal with balky equipment which behaves in an inconsistent manner on occasion.
  3. Ability to cope with special requests.
  4. Flexibility and adaptability.

Granted, even many of those areas are also significant opportunities for automation, but progress will continue to be inconsistent, undependable, and problematic.

Sure, eventually, most issues will be resolved. But not so soon.

Incrementally, more and more workers will have their full jobs automated, but that gradual process will likely be compatible with the incremental appearance of new forms of work.

Even if it is indeed theoretically possible to innovate all existing jobs away, there will always be practical reasons that this process will not occur rapidly.

And to the extent to which average workers are only using a small fraction of their intellectual capacity, they have excellent potential to be trained for new jobs.

No sign of personal AI yet (strong AI)

The appearance of the personal computer (PC) was a major revolution. Ditto for the cell phone and smartphone. We haven’t seen such a revolution for personal AI yet, in the sense of strong AI. Weak AI, yes; strong AI no.

Okay, we have Alexa and Siri and other intelligent digital assistants, but they are too minimal and too specialized to constitute a true revolution to what I would call personal AI.

Alexa and Siri remind me more of the very early personal computers, such as the Altair, Commodore, and Atari machines, which were truly mere toys, more suitable for playing games and having fun than for any serious computing. Even the Apple II was in that category.

It wasn’t until the advent of the IBM PC and the Apple Macintosh that personal computers could finally be counted on to do serious work.

That’s the kind of transition we are still waiting for for AI. From toylike, simple, single-task features to broader and richer goal-oriented activities.

More significantly, we need AI systems that can figure out our needs and automatically address them without requiring us to explicitly and carefully detail individual tasks.

AI systems and features currently provide plenty of automation, but are not yet offering any significant higher-order human-level intellectual capacities.

Another benefit of the personal computer was that it offered a fairly rich set of features right out of the box without requiring an expensive connection to an external service. Alexa and Siri are interesting, but most of their function is accomplished in the networked cloud rather than locally.

The four main qualities I am looking for in personal AI, a breakthrough comparable to the personal computer (the IBM PC and Apple Macintosh), are:

  1. Fairly rich set of features. 10X to 100X what Alexa provides. Covers a much broader swath of the average person’s daily life.
  2. Very easy for average user to set up, configure, control, monitor, and understand. No degree in rocket science required. No technical sophistication required.
  3. No network connection required. Yes, a network connection may provide additional features and power, but would not be required. Or at least not always be required.
  4. Based on open source software. Users should not be held hostage by vendors and should be able to view and even enhance their systems. Even if a user doesn’t wish to do this themselves, they can at least take advantage of the work of other users who are willing and capable of taking advantage of open source capabilities.

AI is generally not yet ready for consumers

AI systems are currently oriented more towards high-end applications than towards consumer features, beyond the most basic ones.

Sure, web sites may use lots of AI under the hood or have AI chat bots for automated customer service, but none of this is a direct benefit for the consumer.

Generally, AI is not yet ready for consumers for any higher-order human-level intellectual capacities.

Only limited, narrow niches seem particularly ripe for consumer AI, such as:

  1. Simple robotic animals. May be fun, amusing, and interesting, but offer little in the way of intellectual capacity.
  2. Task-specific AI or domain-specific features.
  3. Broader but shallow features, such as intelligent digital assistants.
  4. Photo manipulation and management.
  5. Gaming.
  6. Driving automation.

Driving automation has significant impact, but is generally weak to moderate AI, well short of strong AI, with relatively narrow, specific functions such as:

  1. Self-parking cars.
  2. Self-driving vehicles.
  3. Automated navigation.
  4. Collision avoidance. Still problematic, but showing promise.

Meaning and conceptual understanding

A major shortfall of current AI systems relative to the metric of strong AI is a complete lack of comprehension of human-level meaning and concepts.

An advanced AI system may be able to associate identities of objects, but the system has no notion of how an object is important to a person, why it is important, or even what its human-level importance is. AI systems may have mastered a lot of the details of objects but the concepts are still out of reach on anything more than a superficial basis.

Strong AI will of necessity and by definition need to comprehend human-level meaning.

Emotional intelligence

One important area where most AI systems are severely lacking is emotional intelligence.

Emotional intelligence is not even needed for many weak AI applications.

But for AI systems to operate effectively in social environments, emotional intelligence will be essential.

The mere fact that emotional intelligence rarely even gets mentioned for most AI systems underscores that these systems are not aimed at strong AI.

That will be a key indicator of true progress towards strong AI — that emotional intelligence begins to play a more significant and even essential role.

Wisdom, principles, and values

I personally subscribe to the four-level model for knowledge, DIKW:

  1. Data
  2. Information
  3. Knowledge
  4. Wisdom

The first two levels are handled very adequately by current digital computing.

Knowledge is a mixed bag, with some fairly decent advances, but still some gaps. Facts are a slam dunk for the most part. Know-how can still be problematic, depending more on hardcoded preprogramming than on true machine learning of concepts and human-level meaning.

But wisdom is a whole other category, virtually untouched by current AI systems. And considered essential for a mature human being.

Beyond basic facts and practical know-how, wisdom includes principles and values, both more abstract than concrete. And the ability to apply abstract principles and values in other domains where concrete knowledge, facts, and know-how may be minimal.

A companion paper has a proposed list of core principles of general wisdom for any AI system which wishes to qualify as Strong AI:

We may well be more than a few years away from AI systems that exhibit human-level wisdom, or even a small fraction of human-level wisdom.

Extreme AI

I refer to a concept called extreme AI, which would be a significant steppingstone to Kurzweil’s Singularity.

Extreme AI indicates that a machine has several key capabilities:

  1. It can learn how to learn. Far beyond preprogrammed knowledge and intelligence or even so-called machine learning. Not only is the machine capable of learning, but it is also capable of learning how to learn. In other words, the capacity to learn is not limited to preprogrammed capabilities.
  2. It can generate new AI systems on its own, without human intervention.
  3. It can teach another AI system how to learn. And how to learn how to learn.
  4. It can learn how to learn how to teach how to learn how to learn. That closes the loop, allowing new generations of AI systems to be significantly more powerful, without human intervention.

For more on extreme AI, see the companion paper Extreme AI: Closing the Loop and Opening the Spiral, as well as the larger paper, Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning.

Current AI systems are not even beginning to exhibit extreme AI capabilities.

Extreme AI doesn’t even appear to be on the more distant horizon.

Ethics and liability

This paper is not intended to delve into ethical and legal issues of technology, focusing primarily on the technology itself. These matters are covered a little in the companion paper Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning.

Other than to simply say: yes, lots of ethical issues will arise as the age of strong AI begins to dawn, but we're not even close yet, so not to worry.

There will be legal liability issues as well, but many of them already apply to existing software systems.

Lethal autonomous weapons (LAWs) will present interesting ethical and legal challenges, including international law and the law of war.

Fully autonomous or near-fully autonomous AI systems will present serious ethical and legal challenges as well.

I don’t want to downplay these matters, but they are somewhat beyond the scope of this paper. They warrant a separate paper or papers, but the unfortunate fact is that we won’t be able to address ethical and legal issues in any meaningful depth until we understand much better the capabilities of such systems, which we don’t.

Attempting to solve a problem which doesn’t yet exist is always problematic. Yes, we can and should anticipate potential problems, but over-anticipating could cause more problems than it might solve.

Dramatic breakthroughs needed

There is no question that a wide variety and range of dramatic breakthroughs are needed to achieve true strong AI.

But what are these breakthroughs?

And how do we get them to happen?

The unfortunate reality of breakthroughs in general is that they have:

  1. Unpredictable pace.
  2. Unpredictable timing.
  3. Unpredictable impact.

The best and most valuable breakthroughs tend to come out of nowhere, when you least expect them.

Personally, I have no faith in:

  1. Manhattan-style projects.
  2. Moonshot programs.

Those approaches do work, but only on rare occasion, when there is a critical mass and all of the fundamental elements are essentially in place.

Yes, significant research is still needed, and that means money, people, and priority, but the emphasis should always be on patient effort, not some insane belief that if we throw enough money at the problem a magical solution will appear overnight.

Patience is the watchword.

That said, when breakthroughs do eventually come, they always come fast and furiously.

But when?

That’s the eternal — and unanswerable question.

But with AI as it is today, far too few of the fundamental elements are in place, or anything close to it.

Fundamental computing model

All of our great progress in digital computing has been based on the power of Turing machines, but there is no great clarity as to whether a Turing machine has sufficient conceptual power to simulate the human brain and mind.

Turing himself hypothesized what he called a B-type u-machine which could rival the computational power of the human mind. The operations of his unorganized machine (u-machine) more closely parallel the neurons of the human brain. A B-type machine has the ability to dynamically reconfigure the connections between those computing elements.

Whether today’s neural network computing models have sufficient power to simulate the human brain is unclear, especially as many of them are simulated on Turing machines. There have been efforts to do specialized hardware for AI and neural networks in particular, but there is no clarity about the effectiveness or limitations of such efforts. They may be faster than a Turing machine, but the essential question is whether they can compute anything different than a Turing machine. Certainly there is great hope, but hope itself is not a solution.

Some researchers believe there is a lot more going on within neurons and their connections that we do not yet fully fathom.

In the end, the fundamental computing model will matter, but we are not there yet, here in April 2018.

How many years from research to practical application?

When we do achieve conceptual breakthroughs, how long will it take to get them from the lab to the hands of consumers?

Unfortunately, the answer is that the time to market is unknown, unknowable, and highly variable.

Stages of the process include:

  1. Conceptual understanding of an area of intelligence. The theoretical, conceptual essence of the breakthrough.
  2. Development of strategies for implementing that conceptual understanding.
  3. High-end research lab implementation.
  4. High-end government application.
  5. High-end business application.
  6. Average business application.
  7. General business application.
  8. High-end consumer application.
  9. Above-average consumer application.
  10. Average consumer application.
  11. General, low-cost consumer application.

Some of those stages can be slow and laborious and depend on multiple breakthroughs, while others may happen in rapid succession, in parallel, or even be skipped in some cases.

There is always the chance that some guy in his garage might engineer a breakthrough that can be taken directly to market, but that is more of a fluke than something we should depend upon. And usually the guy in his garage is building upon a lot of foundational work which was done previously for high-end applications.

Turing test for strong AI

How do we actually test, measure, or evaluate whether a given AI system does or doesn’t possess human-level intelligence?

Unfortunately, there is no great clarity for measuring intelligence.

The two main measures have been:

  1. Standardized IQ tests.
  2. The Turing test.

Human intelligence has traditionally been measured using standardized IQ tests. People commonly regard an IQ of 140 as genius-level and 160 as super-genius.

Granted, there are a variety of disputes over standardized IQ tests, but they are the gold standard.

But today’s AI systems could not even take an IQ test, let alone score high. That alone says something about where we are relative to strong AI.

Granted, a clever AI researcher could preprogram an AI system to be able to parse and answer a wide enough range of typical IQ test questions so that the AI system could not only take the test, but score fairly highly.

But, that exemplifies the state of affairs for AI today, namely that an AI system can be fairly readily preprogrammed for a specific, relatively narrow application, but that does not mean that such an AI system is capable of general intelligence applicable to a wide range of problems.

There have been some proposals for specialized IQ tests designed specifically for AI systems, but that only proves the point, that machine intelligence is rather distinct from higher-order human-level intelligence.

The so-called Turing test, which Turing himself referred to as The Imitation Game, is more of a metaphor for arriving at a true/false answer as to whether a human or a machine is on the other end of a communication link (in another room from the observer).

The basic concept of the Turing test is to construct questions such that the answer will provide a clue as to whether the responding entity is intelligent or not. So, ask a bunch of questions, evaluate the responses, and decide whether the entity might be a person or likely merely a machine which is imitating a person.

Technically, it’s a very hard problem.

I haven’t heard of any AI system that can credibly pass, consistently. There have been some claims of success, but there have also been rebuttals to such claims. So, it remains a matter of dispute.

And even if a claim to pass a given Turing test were to hold up, it merely means that the entity was intelligent for that particular test, with no guarantee that the entity would respond intelligently for other tasks.

Again, there is the risk that the AI system might be preprogrammed to pass the test rather than truly intelligent and able to learn on its own.

In short, the traditional Turing test is too weak and vague to be a technically robust test of true, higher-order human-level intelligence.

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a variation on Turing's original test which focuses more on visual perception than on intellectual challenge. The test presents a small amount of arbitrary text that is artificially distorted to make it very difficult to apply traditional optical character recognition techniques. It does indeed work fairly well, although it has some limits. A number of recent efforts do seem to be able to defeat or pass CAPTCHA tests, either automatically or by diverting the test query to a pool of users who earn a small fee for correctly responding to the challenge.

But just because an algorithm does indeed defeat particular CAPTCHA tests does not mean that it can defeat all CAPTCHA tests.

Worse, CAPTCHA was never really a test of intelligence in the sense of higher-order intellectual capacities. At best, you could claim it as a tower of intelligence which works well for a narrow range of tasks but has no real applicability to a wide range of tasks.

The main issue with algorithms that defeat CAPTCHA is that they represent a preprogrammed or trained skill rather than true human-level learning.

Worse, the CAPTCHA-defeating system doesn't even comprehend the human-level meaning of what it is doing, whereas a true human-level intelligence would in fact fathom the nature of its tasks.

There has also been some work in training AI systems to pass various college-level tests.

Once again, that is impressive as a heuristic or machine intelligence, but the AI system doesn’t comprehend the meaning and concepts behind the subject matter being tested.

The AI system may well be able to pass the test, but wouldn’t necessarily be able to succeed at applying the subject matter for solving real-world problems.

A better test would be to provide the AI system with only a raw PDF of the textbook and any related materials, and then have it take the test. Or even better, have it answer detailed questions on the subject from a sophisticated user.

Even that would not assure that the AI system actually comprehended the full depth of the meaning and concepts of the subject matter.

Maybe that’s the indicator of our status with AI, that current AI systems mostly depend on preprogrammed knowledge and limited, domain-specific learning so that they are not truly facile with the concepts and their true meaning.

In short, we are not yet in a position to have reliable tests to evaluate the intelligence of an AI system.

How to score the progress of AI

I haven’t worked out any precise, numerical scoring system for AI progress towards strong AI, but there are some possibilities.

I think it would make sense to have four levels of scoring:

  1. A score for each degree of competence in each level of function for each area of intelligence. This would be the finest grain of scoring. Very specific.
  2. An overall score for each level of function for each area of intelligence.
  3. An overall score for each area of intelligence.
  4. An overall score across all areas of intelligence. The total score. The IQ of the AI system.

How to score AI systems which are towers of intelligence, focus on particular tasks or domains, or are deficient in some areas while excelling in others is problematic. Having separate scores for each area would make it easier to tell what the real story is.
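A minimal sketch of that roll-up, assuming per-function competence scores on the 1–8 scale are stored in a nested mapping. The structure, example numbers, and simple averaging are my own illustration (collapsing the two finest grains into one), not a proposal from any existing scoring system:

```python
from statistics import mean

# scores[area][function] = competence score on a 1-8 scale
scores = {
    "language": {"parsing": 6, "dialogue": 3, "meaning": 2},
    "vision":   {"recognition": 7, "scene understanding": 2},
}

def function_score(area: str, function: str) -> float:
    """Finest grain: one function within one area of intelligence."""
    return scores[area][function]

def area_score(area: str) -> float:
    """Overall score for one area of intelligence."""
    return mean(scores[area].values())

def total_score() -> float:
    """Total score across all areas: the 'IQ' of the AI system."""
    return mean(area_score(a) for a in scores)
```

Keeping the per-area scores separate, as suggested above, means a tower-of-intelligence system with one very high area score and several near-zero ones would not hide behind a respectable-looking average.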

Links to my AI papers

I have a single master list of links to all of my AI-related papers:

The most brief introduction to AI:

The most in-depth coverage of AI:

Conclusion: So, how long until we finally see strong AI?

Okay, I’ll throw caution to the wind and plant a stake in the sand. I’ll say that we are at least ten years from widespread application of strong AI. And that’s being very optimistic.

Ten years from now would be 2028, a year short of Kurzweil’s target of 2029.

Fifteen years feels a little more comfortable.

So, I’ll say ten to fifteen years, or 2028 to 2033. That’s not grossly out of step with Kurzweil’s target.

Looking back, how much progress have we made in the past ten to fifteen years, since 2003–2008?

On the one hand, a lot of progress has been made, but on the other hand there are so many fundamental aspects of human intelligence where we have barely scratched the surface.

In truth, it wouldn't surprise me if it took another twenty to twenty-five years to finally achieve strong AI. That would be 2038 to 2043.

Equally truthful, I wouldn’t be surprised if we had some monumental breakthroughs in five to seven years, 2023 to 2025, or even at any moment in the very near future, but the simple fact is that we are not seeing even any hints of being on that trajectory.

I look forward to the advances in AI which will unfold in the coming years and decades.

Progress is indeed being made, but true, strong AI, with higher-order human-level intelligence is neither here today nor on the near or even the distant horizon.

For more of my writings on artificial intelligence, see List of My Artificial Intelligence (AI) Papers.
