Computers are getting more and more capable of seemingly human behavior every day, but can they really think? It all depends on what you mean by thinking.

In short:

  1. Yes, but in a very limited manner.
  2. At a human-level? No.
  3. Getting close to human level any time soon? No.
  4. Might machines think someday? Maybe.
  5. Theoretically possible to achieve human level? Unknown due to lack of a robust model of either the brain or mind.
  6. Is Artificial General Intelligence (AGI) or Strong AI — the level of AI needed to fully replicate the intelligence of a competent, adult human — real, now or any time soon? No. AGI simply provides a framework or goal for what AI researchers are shooting for. For now, we settle for so-called Weak AI, the more limited forms of AI that are practical today.
  7. Theoretically possible to construct a human-like brain, albeit very different from a typical digital computer? Possibly, eventually, but no time soon.
  8. Can machines think in science fiction books and movies? Yes, absolutely… but so what?
  9. Can science fiction books and movies help us better understand the question of whether machines are able to think at some human level? No, not really. They highlight the question, but don’t offer any insight.
  10. Can computers perform many human-like tasks? Yes, in at least some cases.
  11. Can even a small child outperform even the best computers for many human-like tasks? Yes.
  12. Can a machine understand what humans feel? Conceptually, maybe, eventually, but not at present, and even then only to a limited degree — THAT we feel or the label attached to what we feel, but not the feelings themselves in a human sense.
  13. Can machines replicate all aspects of a conscious human mind? Not currently, and not any time soon, but maybe eventually, though that may require a type of machine capable of the biological functions of a human brain rather than simply a typical digital computer — see #5.
  14. Can machines fully understand human language? Not fully, but approximately, to some degree.
  15. Can machines fully grasp all aspects of humanity? Not currently, but conceivably if #5 is achieved, but no time soon.
  16. Is Ray Kurzweil’s artificial superintelligence Singularity coming soon or even ever? More of a pipe dream than a certainty, not outside the realm of possibility, but we don’t appear on a path to it any time soon.
  17. Machines are great at relatively discrete tasks or activities, but less suited for open-ended goals such as living a happy and productive life, doing good, or helping make the world a better place to live in.
  18. Can a machine mimic a human mind and at least seem human? Yes, to a degree: Eliza proved that, as have several very specialized programs that appear to pass the Turing Test.
  19. Can self-driving cars think? No, they are programmed for a narrow range of activities, but don’t think for themselves or go off and drive wherever they want, independent of their human masters.
  20. Can chat bots think? No, they are programmed with a limited range of responses. Yes, they can be quite helpful — sometimes, or maybe rarely — but they have no real depth or ability to learn on their own or pursue their own interests.
  21. Can intelligent assistants think? No, they are preprogrammed for certain types of tasks, but have no sense of agency — an ability to think and act on their own for their own interests.
  22. Are chess-playing computers thinking? Not really — mostly they are simply mechanically evaluating a lot of pre-programmed rules. Still, they can seem human.
  23. Do IBM’s Deep Blue and Watson think? Did winning Jeopardy prove that machines can think? Not really — again, mostly they are simply mechanically evaluating a lot of pre-programmed rules. Still, they can seem human.
  24. Are search engines thinking? Not really — again, mostly they are simply mechanically evaluating a lot of pre-programmed rules. Still, they can seem human in their uncanny ability to quickly give you what you want with so little input. Unlike narrow question-answer programs, search engines have a very broad reach but very shallow depth, with no sense of the true meaning of what they are searching for or what they find.
  25. Are question-answer programs thinking? Not really — Eliza, chat bots, Watson, and search engines may seem intelligent, but that is the power of rules, heuristics, and pre-programmed responses, with nothing in the way of intuitive leaps, creativity, speculation, or comprehension of meaning beyond superficial syntax and symbol associations.
  26. Is machine learning, even deep learning (or artificial neural networks), evidence that machines think? Conceptually in some cases, but not necessarily, and not usually. A lot of machine learning is simply recognizing patterns and even synthesizing rules or simplistic abstractions, but so far machines have not evidenced the ability to develop higher-order abstractions or to deeply understand the true meaning or implications of the patterns they recognize. A lot of machine learning is little more than rote learning.
  27. Tactical versus strategic decisions — current systems are adept at narrow, well-defined tasks, but not capable of mastering broader strategic goals, especially those requiring creative, intuitive, or out-of-the-box thinking.
  28. Creating vs. following models — current systems do a decent job of working within models that we explicitly program, but are not yet adept at creating their own models.
  29. Messy problems requiring judgment — current systems do a decent job when crisp data can be accurately measured and acted on with relatively fixed rules, but systems are not yet adept at coping with very messy problems that require significant intuitive judgment.
  30. Feedback and adaptation — current systems can be programmed to cope with a significant level of complexity, but are not yet able to adapt themselves dramatically in response to significant feedback from the environment.
  31. Beyond individual machines, might thinking and even intelligence emerge from a cloud of interconnected machines, each machine corresponding to a single neuron or small cluster of neurons? Possibly, but that is purely speculative at this stage.
  32. Is thinking by (arbitrary) definition a process that only humans engage in? That’s a matter of debate, with both sides on equally soft ground.
  33. Is the headline question a strict boolean yes or no, or is it a matter of degree or level? Seems like a spectrum, a range with a rock or dumb computer program at zero, Nikola Tesla and Albert Einstein at 100, most people in the 30 to 70 range, and sophisticated AI programs in the 10 to 30 range, at least at present.
  34. Is self-modifying code an indication of the kind of adaptation needed for learning and thinking? Possibly. It may be necessary, but not necessarily sufficient.
  35. Are data-driven programs — where the sequencing of functions is driven by patterns in the input data, and the program may even modify that data itself — an indication of the kind of flexibility and freedom from preprogrammed rules needed for thinking? Possibly. It may be necessary, but not necessarily sufficient.
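The distinction drawn in items 18, 20, and 25 — seeming conversational without any comprehension — can be made concrete with a minimal Eliza-style responder. This is a hypothetical sketch, not the historical Eliza rule set: a handful of regular-expression rules produce plausible replies with no understanding at all.

```python
import re

# A few illustrative (pattern, response-template) rules.
# Hypothetical examples, not the rules of the original Eliza.
RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return a canned reply by pattern matching; no understanding involved."""
    t = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.search(pattern, t)
        if m:
            # Echo the captured fragment back inside the template.
            return template.format(*m.groups())
    return "Please go on."  # default when no rule matches

print(respond("I feel anxious"))       # Why do you feel anxious?
print(respond("Nice weather today"))   # Please go on.
```

The whole "conversation" is superficial syntax and symbol association, exactly the point of item 25: the program seems attentive while comprehending nothing.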
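Items 34 and 35 can likewise be sketched. Below is a hypothetical data-driven dispatcher: control flow is selected by tokens in the input, and the running program can extend its own rule table. This is adaptive plumbing of the kind those items describe as perhaps necessary, but it is plainly not thinking.

```python
# The "program" lives in mutable data: a token -> function dispatch table.
handlers = {}

def on(token):
    """Decorator that registers a handler in the dispatch table."""
    def register(fn):
        handlers[token] = fn
        return fn
    return register

@on("double")
def double(x):
    return 2 * x

@on("square")
def square(x):
    return x * x

def run(token, x):
    """Sequencing is driven by the input token, not hard-coded control flow."""
    return handlers[token](x) if token in handlers else None

# The running program rewrites its own dispatch table at run time:
handlers["cube"] = lambda x: x ** 3

print(run("double", 5))  # 10
print(run("cube", 3))    # 27
print(run("fly", 1))     # None: no rule, and no improvisation
```

The table can grow without limit, yet every behavior is still a rule somebody (or some earlier rule) put there — flexibility, but not the open-ended adaptation of item 30.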


See also Questions About Thinking for thinking in general, not just machines.

  1. What time frame are we talking about — right now, the near future, the next five years, ten years, 25 years, 50 years, 100 years, ever?
  2. Are we asking from a practical perspective — could we build a human-level thinking machine over the next few years if we put our minds to it in a drop-everything-else, Manhattan Project-style effort — or are we merely speculating about a distant future of technologies we don’t even know exist yet?
  3. How to define machine.
  4. How to define thinking.
  5. Does thinking refer to intelligence as well?
  6. Does thinking refer to all mental activities, or only those closely associated with reasoning?
  7. Is human-level thinking required?
  8. Is human-level intelligence required?
  9. What level of human intelligence is implied — average adult, genius, young child, low-IQ?
  10. Is the full range of human mental capabilities included, or is a narrow range enough?
  11. Is thinking strictly intellectual, or does it need to address emotions, feelings, and drives?
  12. Is love included in thought?
  13. Is empathy included in thought?
  14. And what of genetically-inherited programming — is that included in the definition of thinking as well?
  15. Are intuition and intuitive leaps included in thinking?
  16. Is learning included in thinking?
  17. Can there be true thinking without true learning?
  18. Can machines reflect and contemplate?
  19. Can machines meditate?
  20. Can machines recognize when they make mistakes?
  21. Can machines incorporate new knowledge that invalidates previous beliefs?
  22. Is morality part of thinking?
  23. Is thinking an emergent phenomenon not fully determined by the underlying physiology of the brain, or is it completely dependent on the specific physiology of the brain?
  24. Does thinking require neurons?
  25. Is the full range of function enabled by interconnected neurons adequately comprehended and fully simulated by (discrete-state digital) machines and computer programs?
  26. Is the continuous, non-discrete nature of neurons needed and fully supported in machines and computer programs attempting to think?
  27. How much of thinking involves only higher-order capacities for logic and reason, as opposed to responses to lower-order, more primitive, animal-like tendencies?
  28. Does the ability of a computer program to modify itself provide sufficient capacity to enable the kind of emergent behavior required for human-level thinking?
  29. Even if we accept the model of brain and mind as equivalent to computer and software, does a computer possess all of the necessary capabilities of the human brain, especially in terms of non-discrete, continuous processing?
  30. Is a sense of consciousness needed for a machine to think?
  31. Is a sense of self needed for a machine to think?
  32. Does a machine need a sense of time to think?
  33. Does a machine need a sense of connectedness to humans to think?
  34. How deeply must a machine understand to claim that it thinks?
  35. Can machines think about human conceptions such as beauty and emotions and feelings such as fear, anger, love, lust, happiness, and sadness?
  36. Are machines motivated to learn? Are they curious?
  37. How would Siri, Alexa, and Cortana answer the question? Is that human-level thinking or simply a pre-programmed response?
  38. Is the Turing Test a valid test of thinking or even intelligence? I think it merely detects most cases of a machine imitating a human, but I see no justification for it being able to judge whether question responses exhibit human-level thinking per se — maybe even expert humans do not have the faculties to fully judge the presence of human-level thinking at the 100% confidence level with zero false positives and zero false negatives.
  39. Check out the ongoing bet between Mitch Kapor and Ray Kurzweil over whether a machine will satisfy the Turing Test by 2029. The trick is that they agreed to very stringent rules, so that merely tricking a few judges for a short period is not sufficient.
  40. Check out Turing’s 1950 paper on The Imitation Game, including a lot of the objections to it.
  41. Consider Turing’s conception of an “O” machine (O for Oracle), where the primitive operations are black-box functions that are not computable even by a universal Turing machine. This is not a type of machine that we currently know how to build, but the thought is that it would be at least a step closer to a machine that actually could think.
  42. Check out John Searle’s Chinese Room Argument.
  43. Check out John Searle’s Scientific American article that disputes the notion that the human brain and mind are just a computer and a computer program.
  44. Traditional definition of AI — everything we don’t know how to do yet. Traditionally, only researchers worked on AI because nobody knew how to reduce it to practice; once it was reducible to practical code, it was no longer of interest to AI researchers. That has changed today, since people wish to tout AI as a product feature.
  45. A subtle distinction between artificial intelligence and machine intelligence — the former aims to fully replicate features of the human mind while the latter seeks to maximize the capabilities of a machine, independent of whether that happens to be less than human or maybe even much more than human. AI struggles with trying to compare humans and machines, while machine intelligence has no such baggage to drag around.
  46. Can physics help us understand the nature of thinking that machines might use?
  47. Can chemistry help us understand the nature of thinking that machines might use?
  48. Can biology help us understand the nature of thinking that machines might use?
  49. Can psychology help us understand the nature of thinking that machines might use?
  50. Can cognitive science help us understand the nature of thinking that machines might use?
  51. Can neuroscience help us understand the nature of thinking that machines might use?
  52. Can computer science help us understand the nature of thinking that machines might use?
  53. Can electrical engineering help us understand the nature of thinking that machines might use?
  54. What sciences can inform our knowledge of thinking that machines might use?
  55. Check out MIT Professor Marvin Minsky’s paper Why People Think Computers Can’t, which reviews a number of aspects of computers and thinking.
  56. Professor Stephen Hawking told the BBC that “The development of full artificial intelligence could spell the end of the human race.” The same article quotes technology entrepreneur Elon Musk as warning that artificial intelligence is “our biggest existential threat.”
  57. A number of serious AI researchers and technology experts have warned in an open letter of the potential perils of autonomous weapons systems that can indeed think for themselves without the direct human control found in drones and other remotely piloted or controlled systems.
  58. Does a machine need a personality to think?
  59. Does a machine need an ego to think?
  60. Does a machine need beliefs to think?
  61. Does a machine need knowledge to think?
  62. Does a machine need wisdom to think?
  63. Is optimism or pessimism needed for a machine to think?
  64. Does optimism or pessimism enable a machine to think better?
  65. Is skepticism needed for a machine to think?
  66. Does skepticism enable a machine to think better?
  67. To what extent are people simply projecting or anthropomorphizing, imagining that machines are more sophisticated and human-like than they actually are, treating the machines as mirrors of themselves?

For more of my writings on artificial intelligence, see List of My Artificial Intelligence (AI) Papers.
