What Are Autonomy and Agency?

Jack Krupansky
Dec 4, 2017

When considering robots, intelligent agents, and intelligent digital assistants, questions of autonomy and agency arise. This informal paper attempts to define these key concepts more clearly and explore what they are, how they differ, and how they are related.

Synthesized definitions for autonomy and agency will be provided after all the relevant aspects of these concepts have been discussed.

A companion paper, Intelligent Entities: Principals, Agents, and Assistants, will build on these concepts, but essential concepts of intelligent entities, principals, agents, and assistants will be introduced here as well since these two sets of concepts are closely related and intertwined. You can’t have one without the other.

Note that this paper is focused more on people, robots, and software agents than on countries or autonomous regions within countries (e.g., Catalonia in Spain or Kurdistan in Syria and Iraq), although the basic definition of autonomy still applies to those cases as well.

Also note that sociology, philosophy, and agent-based modeling and simulation use the terms agency and agent in the way the terms autonomy and autonomous entity are used in this paper (freedom of choice and action, unconstrained by any other entity).

For quick reference, see the Definitions section for the final definitions of these terms.

Dictionary definitions

A later section of this paper will come up with synthesized definitions for autonomy and agency that are especially relevant to the discussion of intelligent agents and intelligent digital assistants, but the starting point is the traditional dictionary definitions of these terms.

Definition entries from the Merriam-Webster definition of autonomy:

  1. self-directing freedom and especially moral independence
  2. the state of existing or acting separately from others
  3. the quality or state of being independent, free, and self-directing
  4. the quality or state of being self-governing

Definition entries from the Merriam-Webster definition of agency:

  1. the relationship between a principal and that person’s agent
  2. the capacity, condition, or state of acting or of exerting power — operation
  3. a person or thing through which power is exerted or an end is achieved — instrumentality
  4. a person or thing through which power is used or something is achieved
  5. a consensual fiduciary relationship in which one party acts on behalf of and under the control of another in dealing with third parties
  6. the power of one in a consensual fiduciary relationship to act on behalf of another
  7. general agency — an agency in which the agent is authorized to perform on behalf of the principal in all matters in furtherance of a particular business of the principal
  8. special agency — an agency in which the agent is authorized to perform only specified acts or to act only in a specified transaction
  9. the law concerned with the relationship of a principal and an agent

Three related terms are entity, principal, and agent.

Definition entries from the Merriam-Webster definition of entity:

  1. independent, separate, or self-contained existence
  2. something that has separate and distinct existence and objective or conceptual reality

There are other meanings for entity, but those are the senses relevant to this paper.

Definition entries from the Merriam-Webster definition of principal:

  1. a person who has controlling authority or is in a leading position
  2. a chief or head man or woman
  3. the chief executive officer of an educational institution
  4. one who engages another to act as an agent subject to general control and instruction
  5. the person from whom an agent’s authority derives
  6. the chief or an actual participant in a crime
  7. the person primarily or ultimately liable on a legal obligation
  8. a leading performer — star

Definition entries from the Merriam-Webster definition of agent:

  1. one that acts or exerts power
  2. something that produces or is capable of producing an effect
  3. a means or instrument by which a guiding intelligence achieves a result
  4. one who is authorized to act for or in the place of another
  5. a computer application designed to automate certain tasks (such as gathering information online)
  6. a person who does business for another person
  7. a person who acts on behalf of another
  8. a person or thing that causes something to happen
  9. something that produces an effect
  10. a person who acts or does business for another
  11. someone or something that acts or exerts power
  12. a moving force in achieving some result
  13. a person guided or instigated by another in some action
  14. a person or entity (as an employee or independent contractor) authorized to act on behalf of and under the control of another in dealing with third parties

Intelligent entities

Autonomy and agency are all about intelligent entities and their freedom to make decisions and take actions, and their authority, responsibilities, and obligations.

Generally, an entity is any person, place, or thing. In the context of autonomy and agency, an intelligent entity is a person or thing which is capable of action or operation and has at least some fraction of perception and cognition (thought and reason), coupled with memory and knowledge.

More specifically, an intelligent entity has some sense of intelligence and judgment, and is capable of making decisions and pursuing a course of action.

Whether or not an intelligent entity has autonomy or agency is not a given:

  1. Some entities may have autonomy, but not agency.
  2. Some entities may have agency but not autonomy.
  3. Some entities may have both autonomy and agency.
  4. Some entities may have neither agency nor autonomy.

Computational entities

An intelligent entity can be a person or a machine or software running on a machine.

The latter are referred to as computational entities or digital entities. They include:

  • Robots
  • Driverless vehicles
  • Smart appliances
  • Software agents
  • Intelligent agents
  • Digital assistants
  • Intelligent digital assistants
  • Apps
  • Web services

How much autonomy or agency a given computational entity has will vary greatly, at the discretion of the people who develop and deploy such entities, based on needs, requirements, desires, preferences, and available resources and costs.

Sometimes people want more control over their machines, and sometimes they value greater autonomy, agency, or automation to free themselves from being concerned over details.

Entities

As a convenience and for conciseness, this paper will sometimes use the shorter term entity as implicitly referring to an intelligent entity, either a person or a computational entity.

Actions and operations

Definitions:

  1. Action. Something that can be done by an entity. An observable effect that can be caused in the environment.
  2. Operation. Generally a synonym for action. Alternatively, an action that persists for some period of time.

For example, flipping a switch to turn on a machine is an action, while the ongoing operation of the machine is an operation. The flipping of the switch was an operation too, only of a very short duration.

If a machine operates only while a button is held down, the pressing and holding of the button, as well as the operation of the machine, would both be actions and operations.

Tasks, objectives, purposes, and goals

Definitions:

  1. Task. One or more actions or operations intended to achieve some purpose.
  2. Purpose. The reason or desired intent for something.
  3. Goal. A destination or state of affairs that is desired or intended, but without a plan for a set of tasks to achieve it.
  4. Objective. Synonym for goal.
  5. Subgoal. A portion of a larger goal. A goal can be decomposed into any number of subgoals.
  6. Motivation. The rationale for pursuing a particular objective or goal.
  7. Intentions. Desired objective or goal. What is desired, not why or how.
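To make the relationships among these terms a bit more concrete, here is a minimal sketch in Python. All of the class and field names are hypothetical, chosen only to mirror the definitions above, not to prescribe any particular design:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    """One or more actions or operations intended to achieve some purpose."""
    description: str
    actions: List[str] = field(default_factory=list)

@dataclass
class Goal:
    """A desired destination or state of affairs, with no built-in plan."""
    description: str
    motivation: Optional[str] = None                       # the rationale for pursuing the goal
    subgoals: List["Goal"] = field(default_factory=list)   # optional decomposition into subgoals

# An intention is simply the desired goal itself -- what is desired,
# not why (the motivation) and not how (the tasks a plan will identify).
deliver = Goal(
    description="Groceries delivered by 6 pm",
    motivation="Dinner guests arriving at 7 pm",
)
deliver.subgoals.append(Goal(description="Order placed with a local store"))
```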

Principals and agents

A separate paper, Intelligent Entities: Principals, Agents, and Assistants, will delve deeper into principals and agents, but here are definitions for the purposes of this paper:

  1. Principal. An intelligent entity which has the will and desire to formulate an objective or goal.
  2. Agent. An intelligent entity which has the capacity and resources to pursue and achieve an objective or goal on behalf of another intelligent entity, its principal.

A given entity may be either:

  1. A principal but not an agent. Performs all actions itself, without any delegation to agents.
  2. An agent but not a principal.
  3. Both a principal and an agent. For example, an agent acting as principal for subgoals.
  4. Neither a principal nor an agent. Possibly an assistant for specific tasks, but not for any goals.

Delegation of responsibility and authority

The essence of the relationship between principal and agent is delegation. The principal may delegate responsibility and possibly even authority for one or more objectives or goals to one or more agents.

Principal as its own agent

Some intelligent entities may act as both principal and agent, doing their own work rather than delegating it to one or more agents.

Agent as principal for subgoals

For more complex objectives, an agent may decompose a larger goal into subgoals, with each subgoal delegated to yet another agent for whom this agent acts as principal.
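As a rough sketch of this delegation chain, consider the following Python fragment. The Principal and Agent classes and their methods are hypothetical, intended only to illustrate how an agent can act as principal for subgoals:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    description: str
    subgoals: List["Goal"] = field(default_factory=list)

class Agent:
    """An entity that pursues goals delegated to it by a principal."""

    def pursue(self, goal: Goal) -> None:
        if goal.subgoals:
            # For a complex goal, this agent acts as principal for each subgoal,
            # delegating it to another agent (here, simply a fresh Agent instance).
            for subgoal in goal.subgoals:
                Agent().pursue(subgoal)
        else:
            # A simple goal is pursued directly through this agent's own tasks.
            print(f"Performing tasks for: {goal.description}")

class Principal:
    """An entity with the will and desire to formulate goals and delegate them."""

    def delegate(self, goal: Goal, agent: Agent) -> None:
        agent.pursue(goal)

# The principal delegates one goal; the agent decomposes it and delegates further.
trip = Goal("Plan a trip", subgoals=[Goal("Book flights"), Goal("Book a hotel")])
Principal().delegate(trip, Agent())
```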

Authority

The authority of an intelligent entity is the set of actions that the entity is permitted to take.

A principal would have unlimited authority.

An agent would have limited authority related to the goal(s) that the principal is authorizing the agent to pursue.

In the real world, many principals are in fact agents since they act on behalf of other principals. A company has a board of directors, investors, and shareholders. Robots have owners.

Responsibility, expectation, and obligation

The responsibility of an intelligent entity is the set of expectations and obligations of the entity in terms of actions.

A principal has no responsibility, expectations, or obligations per se. A principal may act as it sees fit.

An agent has responsibility, expectations, and obligations as set for it by its principal. An agent may act as it sees fit, provided that its actions satisfy any limitations or constraints set by its principal.

In the real world, many principals are in fact agents since they act on behalf of other principals. A company has a board of directors, investors, and shareholders. Robots have owners. So a company or robot may have responsibilities, expectations, and obligations set by somebody else.
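One simple way to model authority as the set of actions an entity is permitted to take is an explicit allow-list that is checked before every action, as in this hypothetical sketch. The names, and the choice to raise an exception on an unauthorized action, are illustrative assumptions rather than anything prescribed by the definitions above:

```python
class AuthorityError(Exception):
    """Raised when an agent attempts an action outside its authority."""

class Agent:
    def __init__(self, authorized_actions: set, obligations: list):
        self.authorized_actions = authorized_actions  # authority granted by the principal
        self.obligations = obligations                # responsibilities set by the principal

    def act(self, action: str) -> None:
        if action not in self.authorized_actions:
            raise AuthorityError(f"Not authorized to perform: {action}")
        print(f"Performing: {action}")

# The principal grants limited authority related to the delegated goal,
# along with the expectations and obligations the agent must satisfy.
shopper = Agent(
    authorized_actions={"search_catalog", "place_order"},
    obligations=["stay under budget", "report status on request"],
)
shopper.act("place_order")        # permitted by the granted authority
# shopper.act("close_account")    # would raise AuthorityError
```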

General obligations

Regardless of obligations which result from autonomy and agency, all intelligent entities will have general obligations which spring from:

  • Physics. Obey the laws of physics. Reality. The real world. Natural law. For example, gravity, entropy, and the capacity of batteries.
  • Limited resources and their cost. For example, the availability and cost of electricity, storage, computing power, and network bandwidth.
  • Laws. Obey the laws of man. Including regulations and other formalized rules.
  • Ethics. Adhere to ethical codes of conduct. Including professional and industry codes of conduct.

Ethics

Just to reemphasize the point from the previous section: intelligent entities will have to adhere to ethical considerations in the real world.

Liability

A principal may be exposed to liability to the extent that it enlists the aid of an agent and that agent causes harm or loss or violates laws or rules while acting on behalf of the principal.

Requested goals might have unintended consequences which incur unexpected liability.

An agent may be exposed to liability if it naively follows the guidance of its principal without carefully reviewing whether specified goals, expectations, or obligations might cause harm or loss or violate laws or rules when carried out.

Elements of a goal

A goal must be:

  1. Formulated. Clearly stated.
  2. Planned. A strategy developed. A plan developed. Resources allocated. Tasks identified.
  3. Pursued. Individual tasks performed. Decisions may need to be made or revised and the original plan adapted based on results of individual tasks.
  4. Achieved or not achieved. The results or lack thereof.
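These four elements suggest a simple lifecycle that a goal passes through. A minimal sketch of that lifecycle as a small state machine follows; the state names and allowed transitions are one plausible reading of the list above, not a prescribed design:

```python
from enum import Enum, auto

class GoalState(Enum):
    FORMULATED = auto()    # clearly stated
    PLANNED = auto()       # strategy and plan developed, resources allocated, tasks identified
    IN_PURSUIT = auto()    # individual tasks performed, plan adapted based on results
    ACHIEVED = auto()      # the desired results obtained
    NOT_ACHIEVED = auto()  # pursuit ended without the desired results

ALLOWED = {
    GoalState.FORMULATED: {GoalState.PLANNED},
    GoalState.PLANNED: {GoalState.IN_PURSUIT},
    GoalState.IN_PURSUIT: {GoalState.PLANNED,        # revise the plan based on task results
                           GoalState.ACHIEVED,
                           GoalState.NOT_ACHIEVED},
}

def advance(current: GoalState, target: GoalState) -> GoalState:
    """Move a goal to the next state, rejecting transitions the lifecycle does not allow."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target

state = advance(GoalState.FORMULATED, GoalState.PLANNED)
state = advance(state, GoalState.IN_PURSUIT)
```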

Relationship between principal and agent

Power, action, control, and responsibility are involved in formulating a plan for setting and pursuing objectives and goals.

  1. Power. The principal has the power to set the objectives and goals to be pursued. The agent has only the delegated power to select tasks to achieve the objectives and goals set by the principal and to pursue them through actions, but no power to change the objectives or goals themselves.
  2. Action. The agent is responsible for performing the actions or tasks needed to achieve the objectives and goals set by the principal. The agent is also responsible for deciding what tasks and actions must be performed to achieve the objectives and goals, and for coming up with a plan for performing them.
  3. Control. The principal controls what objectives and goals are to be pursued. The agent controls what tasks and actions must be performed to achieve the objectives and goals and how to perform them.

The principal is in charge. The principal is the boss.

The agent is subservient to the principal.

The principal delegates to agents or assistants.

Contracts

Generally there is a contract of some form between a principal and its agent, which clearly sets out the objectives and goals, responsibilities, expectations, and obligations of both parties, the principal and the agent.

The contract details what is expected of the agent, what the agent is expected to deliver, what the agent needs to pursue the specified goals, including resources, and what compensation the agent will receive in exchange for achieving the goals.

The contract also details any limitations or restrictions that will apply to the agent and its work.

The contract authorizes and empowers the agent.

Contracts are needed both for human entities and for computational entities.
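As a rough illustration, the elements of such a contract could be captured in a simple record like the one below. The field names are hypothetical; real contracts, whether between people or between computational entities, will vary widely:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contract:
    principal: str
    agent: str
    goals: List[str]                  # objectives and goals to be pursued
    expectations: List[str]           # what the agent is expected to deliver
    provided_resources: List[str]     # what the agent needs to pursue the goals
    compensation: str                 # what the agent receives for achieving the goals
    limitations: List[str] = field(default_factory=list)  # restrictions on the agent and its work

grocery_contract = Contract(
    principal="Alice",
    agent="shopping-agent-7",
    goals=["Groceries delivered by 6 pm"],
    expectations=["Confirm delivery", "Report any substitutions"],
    provided_resources=["Payment card", "Delivery address"],
    compensation="Monthly service fee",
    limitations=["Spend no more than $150"],
)
```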

Capacity for agency

There are really two distinct senses of agency:

  1. The capacity to act or exert power.
  2. The relationship between a principal and an agent that empowers the agent to operate on behalf of the principal.

The latter requires that there is a principal involved, doing the empowerment, the authorization to act on its behalf.

The former can exist even if there is no principal present. An intelligent entity can act in its own interests, on its own behalf, being its own principal. An entity can be self-empowering. That’s what it means for an entity to have agency in the traditional, sociological or philosophical sense.

The first sense applies in both cases: when a principal is present as an external entity and when no principal is present.

In the context of intelligent agents and intelligent digital assistants, agency usually refers to the latter sense, that the agent is acting on behalf of the principal, which is commonly a human user, but may also be some other computational entity, such as another intelligent agent or a robot.

Assistants

A separate companion paper, Intelligent Entities: Principals, Agents, and Assistants, will introduce the concept of an assistant, which is quite similar to an agent in that it is capable of performing the tasks needed to achieve goals. However, an assistant can only perform specific tasks as dictated by its principal, with no sense of any larger goal or objective that the task serves, and with much less room for discretion as to how to perform the tasks.

An assistant has limited agency in that it performs tasks on behalf of a principal but it lacks the authority or capacity to decide which tasks to perform in the context of a goal or objective.

Full autonomy of a principal

A principal has full autonomy or complete autonomy, the full freedom to formulate, choose, and pursue goals and objectives.

An agent does not have such full autonomy.

Limited autonomy or partial autonomy of agents

Generally speaking, agents do not have autonomy in the same sense as the full autonomy of a principal, but agents do have limited autonomy or partial autonomy in the sense that they are free to choose what tasks to perform to achieve the goals or objectives chosen by the principal, and how to pursue those tasks.

Assistants have no autonomy

Unlike principals and agents, assistants have no autonomy whatsoever. They don’t get to choose anything. Their only job is to perform the tasks given to them by an agent or their principal.

Okay, technically, assistants do have a modest degree of autonomy, but a very modest and very minimal one. Any system that doesn’t require a principal to directly control its every tiny movement is, by definition, delegating at least a small amount of autonomy. But not enough for the term autonomy to have any significant relevance to the freedom of action of such a system. That’s the point of distinguishing assistants from agents — to indicate the almost complete lack of autonomy.

Assistants have responsibility but no authority

A principal can delegate to both agents and assistants. Both will have responsibilities, but only agents have even a limited sense of authority, the authority to decide how to turn an objective or goal into specific tasks or actions.

An assistant has no authority, simply the responsibility for a specified task or action, as specified, with little or no room for discretion or decision.

Control

A principal always has control over agents to which it has delegated responsibility for goals, and control over assistants to which it has assigned specific tasks.

A principal can change, revise, or even cancel goals, and agents are obligated to comply with such instructions.

A principal can at any time request a status report on progress that an agent is making on a goal or objective.
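A hedged sketch of this control channel between principal and agent follows; the method names accept_goal, revise_goal, cancel_goal, and status_report are invented for illustration:

```python
class Agent:
    """An agent whose goals remain under the control of its principal."""

    def __init__(self):
        self.goals = {}     # goal id -> description
        self.progress = {}  # goal id -> status string

    def accept_goal(self, goal_id: str, description: str) -> None:
        self.goals[goal_id] = description
        self.progress[goal_id] = "not started"

    def revise_goal(self, goal_id: str, description: str) -> None:
        # The principal may change or revise a goal at any time; the agent must comply.
        self.goals[goal_id] = description

    def cancel_goal(self, goal_id: str) -> None:
        # The principal may cancel a goal outright.
        self.goals.pop(goal_id, None)
        self.progress.pop(goal_id, None)

    def status_report(self, goal_id: str) -> str:
        # The principal may request a progress report at any time.
        return self.progress.get(goal_id, "no such goal")

agent = Agent()
agent.accept_goal("g1", "Summarize this week's sales data")
print(agent.status_report("g1"))   # "not started"
agent.cancel_goal("g1")
```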

Robots

Superficially, robots would seem to be fully autonomous, but in reality they have the more limited autonomy or partial autonomy of agents. After all, robots are owned and work on behalf of their owners, performing tasks and pursuing goals as their owners see fit and dictate.

That said, as with an agent, a robot can be granted a significant level of autonomy and be given fairly open-ended goals, so that it could actually be fairly autonomous even if not absolutely fully autonomous.

Robots and computers out of control with full autonomy?

That would make for a fairly scary science fiction story, a world in which robots and computers could be granted complete autonomy and not have to answer to anybody. But I wouldn’t expect that reality anytime soon.

But it’s also possible that someone might mistakenly grant a robot complete autonomy and then find it difficult to regain control over the robot. Then again, it would be possible to make it illegal to grant a robot full autonomy.

The HAL computer in the 2001: A Space Odyssey movie and the Skynet AI network of computers and machines in the Terminator movies were in fact machines which somehow gained full autonomy — with quite scary consequences.

It would be interesting to see a science fiction movie in which fully autonomous robots have a strictly benign and benevolent sense of autonomous responsibility. But maybe that violates the strict definition of autonomy — if they act as if to serve people, then they aren’t truly autonomous.

Maybe robots would need to exist in colonies or countries or planets or space stations of their own, with full autonomy there, rather than coexisting within our human societies. Robot societies and human societies could coexist separately and could interact, but respecting the autonomy of each other, with neither in charge or dominating the other. Maybe.

Mission and objectives

A mission is a larger context than discrete goals. Think of the mission of an enterprise or organization. Its purpose. Its market or area of interest.

The mission will break down into objectives, which will break down into discrete goals.

The enterprise or organization may periodically review and adjust, revise, or even radically change its mission and objectives. At its own discretion. That’s autonomy.

An agent is given a discrete goal to pursue. A small part of a larger mission and its objectives. An agent does indeed have a mission and objective, but they are set by its principal. An agent has no control over its mission or objective.

A principal has a larger mission and associated objectives for which discrete goals are periodically identified and assigned to discrete agents. A principal sets its own mission and objectives.

For more discussion of mission and objectives, see the companion paper, Intelligent Entities: Principals, Agents, and Assistants.

Mission and operational autonomy

There are two categorical distinctions concerning the autonomy of an entity:

  1. Mission autonomy. The entity can choose and control its own missions and objectives rather than be constrained to pursue and follow a mission or objective set for it by another entity, a principal. This is closer to true autonomy.
  2. Operational autonomy. The entity can decide for itself how to accomplish operational requirements. This is independent of control of the overall mission and objectives. This is characteristic of an agent, although an autonomous entity would tend to also have operational autonomy as well.

So:

  1. Principals have mission autonomy. And generally operational autonomy as well.
  2. Agents have operational autonomy. But no mission autonomy.
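A compact way to express this two-way distinction in code is a pair of flags plus a small classification function, as in the purely illustrative sketch below (the names and the classification rule are assumptions, not definitions from this paper):

```python
from dataclasses import dataclass

@dataclass
class AutonomyProfile:
    mission_autonomy: bool      # can choose and control its own missions and objectives
    operational_autonomy: bool  # can decide for itself how to accomplish requirements

def classify(profile: AutonomyProfile) -> str:
    if profile.mission_autonomy:
        return "principal: mission autonomy, and generally operational autonomy as well"
    if profile.operational_autonomy:
        return "agent: operational autonomy, but no mission autonomy"
    return "assistant: no meaningful autonomy"

print(classify(AutonomyProfile(mission_autonomy=True, operational_autonomy=True)))
print(classify(AutonomyProfile(mission_autonomy=False, operational_autonomy=True)))
print(classify(AutonomyProfile(mission_autonomy=False, operational_autonomy=False)))
```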

Independence — mission and operational

Autonomy is roughly a direct synonym for independence.

We can speak of two categorical distinctions concerning the independence of an entity:

  1. Mission independence. The entity can choose and control its own missions rather than be constrained to pursue and follow a mission set for it by another entity, a principal. This is closer to true autonomy.
  2. Operational independence. The entity can decide for itself how to accomplish operational requirements. This is independent of control of the overall mission. This is characteristic of an agent, although an autonomous entity would tend to also have operational independence as well.

So:

  1. Principals have mission independence. And generally operational independence as well.
  2. Agents have operational independence.

Luck and Mark d’Inverno: A Formal Framework for Agency and Autonomy

Michael Luck and Mark d’Inverno published a paper back in 1995 entitled A Formal Framework for Agency and Autonomy, which examined agency and autonomy as this paper does, but focused strictly on software agents, and on multi-agent systems in particular.

The abstract:

  • With the recent rapid growth of interest in MultiAgent Systems, both in artificial intelligence and software engineering, has come an associated difficulty concerning basic terms and concepts. In particular, the terms agency and autonomy are used with increasing frequency to denote different notions with different connotations. In this paper we lay the foundations for a principled theory of agency and autonomy, and specify the relationship between them. Using the Z specification language, we describe a three-tiered hierarchy comprising objects, agents and autonomous agents where agents are viewed as objects with goals, and autonomous agents are agents with motivations.

The relevant phrases:

  • agents are viewed as objects with goals
  • autonomous agents are agents with motivations
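Read literally, that three-tiered hierarchy could be sketched roughly as the class hierarchy below. This is only a loose Python paraphrase for illustration; the cited paper uses the Z specification language, and none of these class or attribute names come from it:

```python
from typing import List

class Object:
    """An entity comprising a set of attributes and a set of actions."""
    def __init__(self, attributes: dict, actions: List[str]):
        self.attributes = attributes
        self.actions = actions

class Agent(Object):
    """In the cited framework, an agent is an object with goals."""
    def __init__(self, attributes: dict, actions: List[str], goals: List[str]):
        super().__init__(attributes, actions)
        self.goals = goals

class AutonomousAgent(Agent):
    """An autonomous agent is an agent with motivations, from which it can generate its own goals."""
    def __init__(self, attributes: dict, actions: List[str], goals: List[str], motivations: List[str]):
        super().__init__(attributes, actions, goals)
        self.motivations = motivations

# An agent need not be autonomous: its goals may simply be imposed on it,
# with no motivations of its own.
rover = AutonomousAgent({"power": "solar"}, ["move", "sample"], ["survey the crater"], ["curiosity"])
```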

I cite this reference here neither to blindly accept it nor to quibble with it, but simply to provide a published foundation which readers can reference.

That said, I’ll offer a couple of relatively minor quibbles, more along the lines of how to define terms:

  1. I’d prefer to use the term entity or even intelligent entity rather than object. To my mind, objects include trees, rocks, and mechanical machines, but generally do not include people and animals per se. Technically, intelligent entities are indeed objects, but the term object doesn’t capture the essential meaning of an intelligent entity.
  2. The cited paper defines an object as an entity that comprises a set of actions and a set of attributes.
  3. This notion of having a set of actions, or being capable of acting, is a bit more than the traditional, real-world, non-computer-science sense of the concept of an object.
  4. A machine is capable of acting in some sense, but unless it has some sort of robotic brain, it has no ability to sense its environment and make decisions about how to interact with that environment. A sense of agency is needed.
  5. A washing machine or refrigerator would fit the meaning of an object in the sense of the cited paper, although I would refer to them as assistants rather than mere objects in a real-world sense. They have no agency, with no ability to choose how to pursue a goal rather than to blindly perform a specific task. That ability to perform tasks does fit the definition of an assistant used in this paper.
  6. A driverless car would be a good fit for what I would call an intelligent entity and would fit the concept of agent used in the cited paper. You tell the car where you want to go and it figures out the rest, coming up with a plan and figuring out what tasks are needed to get you to your objective.
  7. Said driverless car would superficially seem to have a sense of autonomy, in that it can move around without a person at the controls, but it lacks the ability to set its own goals. It can pursue and follow goals given to it, but not set them. In that sense, in both my terms and those of the cited paper, said driverless car does not have autonomy.
  8. Driverless cars did not exist back in 1995, but I think even now the authors of the cited paper would likely agree that a driverless car lacks the motivation or ability to set goals that is required to meet the definition for autonomy.
  9. As the paper would seem to agree, goals are set from motivations.
  10. As the paper would seem to agree, being an agent does not automatically confer the presence of motivations. Agents don’t need to be motivated. They just need to be able to pursue and achieve goals.
  11. In the context of software agents, which was indeed the context of that 1995 paper, I’d refer to degree of autonomy, meaning the extent to which the agent is free to make its own choices, as opposed to the degree to which the agent’s principal has already made choices and has decided to constrain the choices or autonomy of the agent.
  12. An upcoming companion paper, Intelligent Entities: Principals, Agents, and Assistants, will explore this notion of principal with respect to autonomy.
  13. The cited paper uses the term motivation to essentially mean that the agent has the ability to set its own goals.
  14. I agree with the cited paper that agents are all about goals.
  15. The open issue is who sets the goals for a given agent.
  16. In my terms, it is the principal which sets goals. That could be a person, or some piece of software or even a robot. And this paper does allow for the prospect of subgoals so that an agent can act as principal for a subgoal.
  17. In the terms of the cited paper, an autonomous agent would correspond to my concept of principal.
  18. A key difference between the terminology of the cited paper and that of this paper is that this paper first seeks to ground the terms in the real world of human entities or people before extending the terms and concepts to the world of machines and software.

Motivation

Motivation is a greater factor in autonomy than in agency, but it can be relevant to agency as well.

A principal should clearly have some good reason for its choices in setting objectives and goals. Its motivation.

An agent might have some minor motivation for its choices as to what tasks to perform to pursue and achieve the goals given to the agent by its principal, but those minor motivations pale in significance to the larger motivation for why the goal should be pursued at all, something only the principal can know.

The contract between principal and agent may well express the motivation for each goal or objective, although that expression may have dubious value to the agent.

One exception is when the specification of the objectives is technically weak: too vague, incomplete, or ambiguous, leaving the agent with the job of deducing the full specification of the objectives by parsing the motivation. That’s not the best approach, but it may be the only viable or sane approach.

Sociology and philosophy

The concept of agency takes on a different meaning in sociology and philosophy — it is used as a synonym for what is defined as autonomy in this paper and in the context of robots, intelligent agents, and intelligent digital assistants.

The relevant dictionary sense is:

  • the capacity, condition, or state of acting or of exerting power

There is no mention of any principal or other external intelligent entity setting objectives for the agent to follow.

That would be more compatible with the sense of principal as agent used in this paper, where the agent is indeed setting its own objectives and goals.

That’s an unfortunate ambiguity, but that’s the nature of natural language.

For more information on these usages, consult Wikipedia.

Agent-based modeling (ABM) and agent-based simulation (ABS)

One other field in which agency is defined as being synonymous with autonomy is agent-based modeling (ABM), also known as agent-based simulation (ABS), in which agents have a distinct sense of independence, autonomy. These agents are more like the principals defined in this paper.

ABM/ABS is a hybrid field, a mix of computer science and social science, but it is not limited to either. In fact, it can be applied to other fields as well, anywhere there are discrete, autonomous entities that interact and can have some sort of aggregated effect. ABM/ABS is more of a tool or method than a true field per se.

For all intents and purposes, ABM/ABS could be considered part of social science and sociology.

Definitions

As promised, here are the synthesized definitions of autonomy and agency as used in this paper:

  1. Autonomy. Degree to which an intelligent entity can set goals, make decisions, and take actions without the approval of any other intelligent entity. The extent to which an entity is free to exert its own will, independent of other entities. Can range from the full autonomy of a principal to the limited autonomy or partial autonomy of an agent to no autonomy for an assistant. The entity can decide whether to take action itself or delegate responsibility for specific goals or specific tasks to other intelligent entities, such as agents and assistants.
  2. Agency. Ability of an intelligent entity, an agent, to plan, make decisions, and take actions or perform tasks in pursuit of objectives and goals provided by a principal. The agent has limited autonomy or partial autonomy to decide how to pursue objectives and goals specified by its principal. A contract between principal and agent specifies the objectives and goals to be pursued, authorizing action and obligations, but leaving it to the agent to decide how to plan, define, and perform tasks and actions. The agent may decompose given objectives and goals into subgoals which it can delegate to other agents for whom this agent is their principal. Note: In sociology and philosophy agency refers to autonomy or the extent to which an entity is free to exert its own will, independent of other entities.

Some derived terms:

  1. Degree of autonomy. Quantification of how much autonomy an entity has.
  2. Limited autonomy. Partial autonomy. Some degree of autonomy short of full autonomy.
  3. Weak autonomy. Entity with limited autonomy, constrained by goals set by other entities. Roughly comparable to agency.
  4. Autonomous intelligent entity. Intelligent entity that has some degree of autonomy.
  5. Autonomous entity. Synonym for autonomous intelligent entity. Or any entity which acts autonomously, even if not strictly intelligent.
  6. Full autonomy. Complete autonomy. Absolute autonomy. True autonomy. Unlimited, unrestricted autonomy. No other entity is able to exert any meaningful control over such an autonomous entity.
  7. Mission autonomy. The entity can choose and control its own missions rather than be constrained to pursue and follow a mission set for it by another entity, a principal. This is closer to true autonomy.
  8. Operational autonomy. The entity can decide for itself how to accomplish operational requirements. This is independent of control of the overall mission. This is characteristic of an agent, although an autonomous entity would tend to also have operational autonomy as well.
  9. Limited agency. Some degree of agency short of full agency. Some degree of autonomy short of full autonomy.
  10. Full agency. Unlimited, unrestricted agency, limited only by the contract between the agent and its principal. Still only a limited degree of autonomy, constrained by its contract with its principal.
  11. Degree of agency. Quantification of how much agency an entity has.
  12. Agent. Any entity with some degree of agency, but lacking full autonomy.
  13. Autonomous agent. Improper term, in the view of this paper. An agent would, by definition, not be fully autonomous. Nonetheless, the term is somewhat commonly used in computer science to indicate an agent with a relatively high degree of autonomy.

These definitions should apply equally well to human and computational entities, or at least be reasonably compatible between those two domains.

Terms used within those definitions are defined elsewhere in this paper, including:

  • Entity
  • Intelligent entity
  • Principal
  • Agent
  • Assistant
  • Objective
  • Goal
  • Task
  • Action
  • Subgoal
  • Responsibility
  • Authority
  • Delegation
  • Contract

Autonomous systems

Generally and loosely speaking, people speak of autonomous systems, whether it be a robot, a software application, a satellite, a deep space probe, or a military weapon.

This is not meant to imply that such systems are fully, completely, and absolutely autonomous, but simply that they have a high degree of autonomy, or what we call limited autonomy or partial autonomy in this paper.

The term also draws a contrast with directly or remotely controlled systems, such as drones, where every tiny movement is controlled by a human operator.

Lethal autonomous weapons (LAWs)

A very special case is what is called a lethal autonomous weapon or LAW. These weapons are of significant ethical concern since they largely take human judgment, human discretion, and human compassion out of the equation.

As noted for autonomous systems in general, even so-called lethal autonomous weapons will not typically be fully, completely, and absolutely autonomous.

They may have a significantly higher degree of autonomy, but not true, full autonomy.

There is some significant effort to assure that at least some minimal degree of human interaction occurs, what they call meaningful human control. That’s a somewhat vague term, and the concept is still in its early stages.

Even an automatic rifle or machine gun has a trigger, causing it to stop firing when a person decides to stop holding the trigger. That’s meaningful human control.

Even before we start getting heavily into artificial intelligence (AI), there are already relatively autonomous systems such as the Phalanx close-in weapon system (CIWS) gun for defense against anti-ship missiles. It is fully automated, but with oversight by a human operator. It can automatically detect, track, and fire on incoming missiles, but the operator can still turn it off.

A big ethical concern for lethal autonomous weapons is the question of accountability and responsibility. Who is responsible when an innocent is harmed by such an autonomous weapon when there is no person pulling the trigger?

A practical, but still ethical, concern is the technical capability of discriminating between combatants and civilians. Granted, even people have difficulty discriminating sometimes. Technical capabilities are evolving. They may be too primitive by today’s standards, but further evolution is likely. In fact, there may come a day when autonomous systems can do a much better job of discrimination than human operators.

The only truly fully autonomous lethal weapon I know of is the minefield. Granted, it has no AI or even any digital automation, and the individual mines are not connected, but collectively it acts as a system and is fully, absolutely autonomous. It offers both the best and worst of military and ethical qualities. It has no discrimination. It is fully autonomous. It is quite reliable. It is quite lethal. It is quite inhumane. It has absolutely no compassion. It has no accountability. No responsibility. And no human operator can even turn it off other than by laboriously and dangerously dismantling the system one mine at a time. Somebody put the mines there, but who?

Now, take that rather simple conception of a minefield and layer on robotics, digital automation, and even just a little AI, and then you have mountains of technical, logistical, and ethical issues. That’s when people start talking about killer robots and swarms.

Sovereignty

Another related term which gets used in some contexts as a rough synonym for both autonomy and independence is sovereignty.

From the Merriam-Webster definition of sovereignty:

  • freedom from external control

One can refer to an entity as being sovereign if it is autonomous or independent.

But generally, it won’t be necessary to refer to sovereignty rather than autonomy.

Summary

To recap:

Autonomy refers to the freedom of an intelligent entity to set its own objectives and goals and pursue them, either by acting directly itself or delegating goals to agents.

An autonomous intelligent entity (principal) controls its own destiny.

Agency refers to the freedom of an intelligent entity (agent) to pursue goals delegated to it by its principal as it sees fit, although subject to expectations and obligations specified by its principal in the contract which governs their relationship.

An agent owes its allegiance to its principal.

Note, though, that in sociology, philosophy, and agent-based modeling and simulation, the terms agency and agent are used and defined as the terms autonomy and autonomous entity are in this paper.

One can also refer to degree of autonomy, so that an agent has some limited degree of autonomy and so-called autonomous systems have a fair degree of autonomy even though they do not have full, complete, and absolute autonomy.

Lethal autonomous weapons? Coming, but not here yet, and not in the very near future.

For more of my writings on artificial intelligence, see List of My Artificial Intelligence (AI) Papers.
