(This document is for historical reference — I wrote it for my now defunct web site at http://agtivity.com/manifesto.htm back in 2003 and 2004, but hopefully some of the concepts are still reasonably relevant.)

My intent is that this page will eventually evolve into a full-blown Software Agent Manifesto, but for now it is simply the starting point(s) for such a “manifesto”.

This web page details my current thinking about software agents. Some of these thoughts are rather clear and I have a high level of confidence in them. Others are murky and I have no real confidence in them. Over time I hope that all of my thoughts become quite clear. Or maybe the frontiers of my thoughts will just keep moving and I’ll always still have just as many unclear thoughts.

The manifesto is designed to suggest a long-term (10–20 years) vision for software agents, not a short-term roadmap.

For the most part, the manifesto is a set of requirements that ‘need’ to be met to build a new software agent industry. Some requirements relate to encouraging additional research and others stem from the difficulties in moving software agent technologies from the research lab into industrial-grade real-world environments.

At present, there is no particular order to the points. They are merely ordered as a stream of consciousness.

(Click here for my Random Thoughts which have not yet been developed enough even for this page.)

1. Definitions

Any definitions in the domain of software agents should be based first on standard English and then on established computer science, software art, and software practice. Then, we can start getting creative with additional terminology.

I have attempted to compile a lexicon of English words that might be relevant for discourse on software agents.

I need a little more time before I come up with definitive definitions of “agent”, “software agent”, etc. First I want to elaborate a concept I have called “dimensions of agency”, which will be a long shopping list of characteristics of agents. After I organize that shopping list I hope to come up with a fairly small number of broad categories of characteristics of agents and use that as the basis for defining a number of specialized categories of agents. Then I hope to generalize and finally come up with a simple definition of agent that takes on a variety of variations as adjectives are added.

Before even beginning that whole collection of tasks, I first want to summarize the variety of usages of the term “agent” that I have run across in my research.

For my current definition of the term ‘software agent’, click here.

2. Is it a Software Agent or a Program?

My feeling is that agents are a new paradigm for software design that subsumes the existing software paradigms. First, a software agent is still a computer program. Some existing programs would be considered to be agents and the rest are “degenerate” cases of agents that would be recognized as agents if extended or “wrapped” with additional functionality. Not all agents are created equal and given the large number of possible characteristics of agents, it is possible that the overlap between two perfectly valid agents might be nil.

3. Software Agent Infrastructure

Much of the “work” in an agent should be pushed down into a common infrastructure so that agents will be as simple as possible and less error-prone.

4. Smart Software Agent Infrastructure or Agent Operating System

Rather than the developer or user having to ‘manage’ each software agent, the infrastructure needs to have sufficient knowledge of the purpose, intent, and state of the agent so that the infrastructure can assure that the agent performs as expected. This includes performing activities automatically whenever possible and detecting fault conditions in the agent. A sufficiently robust, smart software agent infrastructure could be referred to as an Agent Operating System.

5. Software Agent Characteristics

  1. Goal-directed. The user or other invoking entity specifies a goal (desired world-state) rather than simply a task or “recipe” of tasks to be performed. The idea is that the software agent (coupled with a smart infrastructure) will figure out how to accomplish the goal rather than simply mechanically carrying out pre-programmed tasks.
  2. Level of intelligence and areas of intelligence. It’s quite possible that a software agent could be quite intelligent in some respects or domains and quite ‘dumb’ in others.
  3. Ability to Learn. A software agent could come with strictly pre-planned knowledge, or could acquire knowledge, expertise, and even some aspects of intelligence based on input from its environment.
  4. Autonomy. A software agent would typically have at least some ability to work independently from the user or invoking entity, but the degree of autonomy could range from simple, immediate task execution (offering some value-added service) up to endlessly pursuing a goal once initiated.
  5. Mobility. A software agent could move (migrate) around the network, either of its own volition or as moved by the agent infrastructure to balance or optimize network or application performance.
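
A minimal, hypothetical sketch of how these five characteristics might be recorded for an agent; all of the class and field names below are illustrative assumptions, not a definition:

```python
# A minimal, hypothetical sketch of how these five characteristics might be
# recorded for an agent. All class and field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str          # desired world-state, e.g. "book the cheapest flight to Boston"
    satisfied: bool = False

@dataclass
class AgentProfile:
    goal: Goal                                                 # 1. goal-directed
    intelligence_domains: dict = field(default_factory=dict)   # 2. per-domain levels of intelligence
    can_learn: bool = False                                    # 3. can acquire knowledge at run time
    autonomy_level: int = 0                                    # 4. 0 = immediate task, higher = open-ended pursuit
    mobile: bool = False                                       # 5. may migrate between hosts

profile = AgentProfile(goal=Goal("find a weekend getaway under $300"),
                       autonomy_level=3, can_learn=True)
print(profile.goal.description, "autonomy:", profile.autonomy_level)
```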

6. Is a Daemon a Software Agent?

Well, technically yes, but not really. Typically a daemon performs a closely related set of tasks for more than one user (e.g., mail server, http server).

7. Does a Software Agent have to be Intelligent?

Yes and no, depending on how you wish to define intelligence. Certainly there are plenty of opportunities and benefits of ‘intelligent software agents’, but simply working towards an interesting goal without direct user supervision should be sufficient to classify a computer program as a software agent. There is also the issue of what level of sophisticated computing rises to the level of machine intelligence.

Certainly we want to enable software agents to exhibit levels of intelligence far beyond those exhibited by legacy software.

The need for autonomy by itself requires that a software agent be ‘smart’ enough to deal with dynamic contingencies that may not have been anticipated by the developer.

Finally, note that a significant amount of the ‘intelligence’ of a software agent will actually be embodied down in the smart infrastructure and the web services that the agent uses. And, a relatively ‘dumb’ agent could appear quite intelligent simply by utilizing the services of other, more intelligent software agents.

8. Does a Software Agent have to be Mobile?

No. Back in the old days, mobility was a required characteristic to get around the lack of sufficient connectivity, but nowadays computer programs can frequently access resources directly across the net. There are still applications and devices that may exploit mobility, but for most applications it is not required.

9. Organization of Software Agents

  1. Single Software Agent. One program accessing resources across the net.
  2. Ensemble of Software Agents or Team of Software Agents. The software developer assigns roles to each software agent and they work in concert towards a common goal. An ensemble would typically have a specific ‘objective’.
  3. Armada of Software Agents. A collection of software agents in which there are a potentially large number of roles, possibly numerous goals, possibly extensive replication or redundancy, and any given software agent may not know about the existence or roles of most of the other software agents. An armada would typically pursue a large-scale ‘campaign’.
  4. Open Community of Software Agents. An open-ended collection of software agents where neither the developer nor the software agents have knowledge of how many agents may ultimately be in the community. Additional software agents (community members) may come and go in a very dynamic manner.
  5. Swarms of Software Agents. When unrelated software agents are randomly combined, there is the prospect of swarm intelligence or emergent behavior in which the swarm can perform tasks or achieve goals that the developers of the individual software agents did not intend or expect.

10. Can a Software Agent Have a User Interface?

Although a software agent tends to operate autonomously (without interaction with the user), it’s quite acceptable for the software agent to have a user interface for initiation, reporting results, or even periodically interacting with the user to confirm decisions (e.g., “I found this great deal… Should we take it?”). Also, there should ALWAYS be a way for the user to be able to ‘recall’ the agent, either to stop it, modify its goal, or simply to query its status. The important thing is that the user does not have to feel ‘tethered’ to the software agent with an application window.

11. Connection Between Software Agents, Robots, and Nanotechnology

A robot requires two types of software: an interface to the real world and higher-level planning and control software. The latter can be quite similar to a software agent, so we can think of a robot’s software as a software agent plus the real-world interface software. In addition, there is no reason why a software agent could not migrate into a robot across a communications link in much the same way it could migrate across a network. In fact, a robot could have sufficient memory and compute power to simultaneously support a number of software agents in a single robot. They could all simultaneously process the robot’s input, but obviously there would need to be some level of control (negotiation) if more than one software agent sought to control the robot’s outputs. Also, software agents running in different robots could coordinate their computations, and thereby coordinate the operation of the robots in the real world. Note that the software agents do not actually have to physically run inside the robot, but could run on the net and communicate with a ‘stub’ of the higher-level robot software residing in the robot. But if the robot were intended to operate disconnected from the net at times, then obviously the software agent code would have to migrate (or hopefully be automatically migrated) into the physical robot.

A robot could be micro-miniaturized, even down to the level of nanotechnology (presuming you design a nano-computer with sufficient nano-memory to run a reasonable amount of software), so the comments concerning the similarity of software agents and the higher-level software of a robot can be relevant to a nano-scale robot as well. Given very simple nano-robots, it might be that a software agent does not actually control or migrate into the nano-robot, but rather goes through a process that takes a statement of the desired behavior of the nano-robot and actually builds (‘compiles’) the detailed design of the nano-robot from the specification. Conceptually, there could also be ‘helper nano-robots’ that could ‘communicate’ between the fielded nano-robot and some ‘home office’. That same ‘helper’ function could be used with macro-scale robots as well (i.e., a sophisticated version of a carrier pigeon).

In any case, there is tremendous potential for cross-fertilization between the field of software agents, robotics, and nanotechnology.

12. Distributed Artificial Intelligence

Software agents are an ideal mechanism for conceptualizing, designing and implementing distributed artificial intelligence applications.

13. Distributed Computing

Software agents are an ideal mechanism for conceptualizing, designing and implementing distributed computing applications. In fact, I would go so far as to say that software agents are the single best approach to distributed computing.

14. Virtual Machines

It would be very, very useful for agent servers to utilize the concepts of the old IBM VM operating system so that software agents (especially those with horrible bugs or even malicious viruses) could run in a 100% sand-boxed mode so that people don’t have to worry about security and ‘wild software agents’.

15. Parallel Universes for Testing Software Agents

Part of the benefit of a true VM sand-box is that it provides a ‘parallel universe’ microcosm in which the software agent can think that it has access to real resources, but is in fact dealing with virtual resources. The same concept should be provided for the net so that software agents could be tested against the ‘production’ net without the virtually impossible task of replicating the entire net in a separate sand-box. The basic idea is that the agent can access the actual net resources as long as it is not modifying them, and any attempted modification (e.g., invoking a web service to perform an action or make a commitment) would perform a ‘copy-on-write’ operation which would copy the invoked service into a new sand-box so that the copy of the invoked service would now be running in the invoking agent’s ‘parallel universe’. Transitive closure would then keep pulling invoked agents into the parallel universe on an as-needed basis.
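
To make the copy-on-write idea concrete, here is a minimal, hypothetical sketch: reads pass through to the real service, while the first attempted modification pulls a copy of that service into the agent’s private sandbox. The proxy, service, and method names are all assumptions for illustration, not any real agent platform API.

```python
# Hypothetical sketch of the 'copy-on-write parallel universe' idea:
# reads pass through to the real service, while the first attempted
# modification copies the service into a private sandbox.
import copy

class ParallelUniverse:
    def __init__(self):
        self._sandboxed = {}              # service name -> sandboxed copy

    def wrap(self, name, service):
        return _CowProxy(self, name, service)

    def _sandbox(self, name, service):
        # transitively pull the service into the parallel universe on first write
        if name not in self._sandboxed:
            self._sandboxed[name] = copy.deepcopy(service)
        return self._sandboxed[name]

class _CowProxy:
    def __init__(self, universe, name, service):
        self._u, self._name, self._real = universe, name, service

    def read(self, key):
        # reads go to the sandbox copy if one exists, otherwise to the real service
        target = self._u._sandboxed.get(self._name, self._real)
        return target.read(key)

    def write(self, key, value):
        # copy-on-write: the real service is never modified
        self._u._sandbox(self._name, self._real).write(key, value)

class ToyService:
    def __init__(self): self._data = {"price": 100}
    def read(self, key): return self._data.get(key)
    def write(self, key, value): self._data[key] = value

real = ToyService()
proxy = ParallelUniverse().wrap("booking", real)
proxy.write("price", 42)                          # lands in the sandbox copy
print(proxy.read("price"), real.read("price"))    # -> 42 100
```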

16. Autonomic Control System for Software Agents

Ala the human body and IBM’s Autonomic Computing Initiative, the underlying software agent infrastructure would have a variety of always-running monitoring and control mechanisms to facilitate operation of software agents and to allow developers and ‘the software agent police’ to suspend, interrogate, debug, restart, or shut down an entire collection of software agents that are running as part of an application, as well as to monitor and control all applications that are using particular software agents.

17. Software Agents for Parallel Computing

I believe that software agents are the single best approach to implement parallel computing on the net.

18. Software Agents for Grid Computing

The ‘grid’ is a great way for organizing compute resources on the net, but interacting software agents are the best way for organizing computations to run on the grid.

19. How is a Software Agent Different from a Software Virus?

Actually, there is no technical difference at all! It’s all a matter of intent, perception, and effect. They are both computer programs that run in a networked environment. The difference is that a software agent causes no ‘harm’ whereas a software virus does cause harm. But, if a software agent has bugs and does cause harm, should it not be thought of as a software virus? And if a software virus has bugs which prevent it from having its intended harmful effects, is it not then behaving merely as a benign software agent? The point is that the net’s software and users may not be able to tell whether they are looking at a bad software agent or a benign software virus. The other point is that the software agent infrastructure should be designed so that the net and users have excellent ‘defense mechanisms’ to protect against both software viruses and rogue software agents. Developing a much more robust net ‘autonomic monitoring and control system’ would be a single stone to kill two birds.

20. Software Agency for Legacy Applications

My simple definition of a software agent is that it is a computer program that exhibits the characteristics of software agency, with autonomy being the primary characteristic. Part of the reason for this definition is to allow for the possibility that legacy computer programs could be retrofitted to include at least some of the characteristics of software agency. A big old server-based app may not be mobile or very flexible, but it does have at least some sense of autonomy from ‘the user’. It should be possible to embed intelligence modules and agent communications protocol modules in a legacy app so that the app can look like, and talk to, software agents on the net as if the legacy app really were a newly designed software agent itself.

21. Organization of the Human Brain

I myself am certainly not in a position to expound on beliefs about how the human brain works and is organized, but I do raise the question as to whether current approaches to interacting software agents represent a subset of what the human brain is capable of or are in fact a step or leap beyond the human brain. Are we mimicking or extending? Is each agent (or small clusters of agents) simply an element of ‘thought’? Is a web full of software agents still far short of ‘intelligence’ or is it a new ‘collective, shared brain’ that allows us to each act as if we had a bigger and better brain?

Somehow, it seems that even a single software agent has the potential to be an extension of a human brain, assuming that the software agent has sufficient machine intelligence and that the communications interface is sufficiently compatible with human thought and discourse.

Could we pack a bunch of software agents in a PDA such that ‘we’ (us and our PDA working seamlessly together) are more intelligent? Maybe ‘PDA’ should be conceptually changed to ‘PBE’ for ‘Personal Brain Extension’ or is it ‘PIE’ for ‘Personal Intelligence Extension’? The terminology for brain, intelligence, knowledge, self, presence, and personality gets rather fuzzy. The point is not how to discuss the human brain itself, but the combination of a human and one or more software agents and the hardware devices needed to support those software agents.

22. Rogue Agents

Whether by design, intent, or merely bugs, a software agent may be perceived as being a rogue software agent. The goal is that the autonomous smart agent infrastructure should have enough defensive capabilities to detect and deal with rogue software agents and prevent harm to the net and communities of software agents, service providers, and users.

23. Email for Transport of Software Agents

Despite the current plague of email-based software viruses, in principle there is no good reason not to use a mechanism similar to email to transport agents and to provide a platform for their execution. A common use would be to send a smart survey to a user which would be a software agent that would then run in the user’s environment and then communicate back to the sender. Obviously there need to be defense mechanisms and a flexible permissions system so that users could decline such software agents, possibly by listing out the kinds of tasks or questions that will be entertained.

24. Software Agents Targeted at Users

In addition to interacting with web services and other software agents, a software agent could also initiate communications with real-world users, such as in my email survey example, or via instant messages. Obviously such a capability could be abused by the computing equivalent of telemarketers, but within companies, organizations, or limited communities, the capability could be extremely beneficial.

25. Software Agent Defense Mechanisms that Respect User Privacy

A robust set of defense mechanisms will be needed to protect user privacy any time a software agent is attempting to interact with a real-world user. An entire hierarchy of capabilities, by domain, will be needed so that the user can optimize the benefits of gaining services from inbound software agents while protecting their own privacy and the value of their time.

26. Roaming of Software Agents Considered Harmful

The idea that the average software agent will go around visiting host computers on the net is absurd. For a small network that might make sense, but in a web of thousands, millions, and even billions of host computers that makes little sense. The goal is that the software agent is interested in making contact with one or more services, resources, or users.

In some cases the software agent simply wishes to “select one” out of the many possibilities, so it wants to call upon the services of a broker who then consults a distributed registry of services.

In other cases, the software agent does want to interact with all (or a designated portion) of the population that match a set of criteria. Once again, a matching broker service seems more appropriate rather than each agent laboriously traipsing around the net knocking on every door like a cold-caller.

If a number of hosts match the request, then the theory should be to broadcast a request (or the agent “code” itself) to the sub-population of matches and exploit the parallel execution of the various hosts.

27. When in Doubt, Exploit Parallelism

Many real-world problems can be tackled more effectively if pursued in parallel rather than sequentially.

28. Sequential Sucks

A restatement of the merits of pursuing parallel execution of software agents. The goal is to get as much code as possible running in parallel so that more useful work can be accomplished per unit of time. In this sense, software agents can be a “time machine” enabling us to travel into the future faster than at the snail’s pace of traditional programming paradigms.

29. Programming Languages Need to Become More “What” Oriented Rather Than “How” Oriented

This is the focus on goals rather than tasks. Declare the overall requirements for the solution and let the smart agent infrastructure analyze and decompose the requirements and repackage for more efficient execution.

29a. Design of Computations vs. Programming as Packaging

Much of the work of a software developer is spent taking a conceptual design and packaging it in terms of lines of code, functions, modules, and programs and deciding how those programs will be deployed on various host computers. All of this packaging is essential for execution, but actually interferes with the ability of a smart infrastructure to analyze the design and dynamically package elements of the design to execute the overall design more effectively on a network and to do so in a way that exploits redundancy and massive parallelism.

Methods and tools are needed to permit software developers to focus more attention on the design of computations and less attention on packaging decisions that should be left to the smart infrastructure.

We could also refer to the smart infrastructure as the Computation Operating System.

30. Constraint Management and Declarative Programming

The idea is that you “code” declarations (equations) and the “compiler” generates sub-agents, tables, and code that monitor the dependencies and update variables or trigger code (or software agents) based on changes in the environment. Most of the processing to achieve constraint management should reside down in the smart software agent infrastructure.
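
A toy sketch of the idea, with all names hypothetical: the developer “codes” a single declaration, and a miniature stand-in for the smart infrastructure re-evaluates it whenever a dependency changes.

```python
# A toy stand-in for constraint management by the infrastructure. The developer
# "codes" one declaration; the store re-evaluates it whenever a dependency changes.
class ConstraintStore:
    def __init__(self):
        self._values = {}
        self._rules = []                  # (target, function, dependencies)

    def declare(self, target, fn, deps):
        self._rules.append((target, fn, deps))

    def set(self, name, value):
        self._values[name] = value
        for target, fn, deps in self._rules:
            if name in deps and all(d in self._values for d in deps):
                self._values[target] = fn(*(self._values[d] for d in deps))

    def get(self, name):
        return self._values.get(name)

store = ConstraintStore()
store.declare("total", lambda price, qty: price * qty, ["price", "qty"])
store.set("price", 10)
store.set("qty", 3)
print(store.get("total"))                 # 30, recomputed whenever price or qty changes
```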

31. Polling Considered Harmful

Programs should declare (register) to the smart software agent infrastructure what data and events are of interest so that the program can go into idle mode while awaiting the conditions of interest. In many cases, the smart infrastructure can perform responses automatically as directed by the declarations without requiring the program to gain direct control. Polling can be error-prone and consume far too many resources when the dependencies are great.
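
A minimal sketch of the declare-and-idle alternative to polling, assuming a hypothetical event bus provided by the infrastructure:

```python
# Hypothetical sketch of "declare interest and go idle" instead of polling.
# The agent registers a predicate and a handler; it does nothing until the
# infrastructure (here, a toy event bus) delivers a matching event.
class EventBus:
    def __init__(self):
        self._subscriptions = []          # (predicate, handler)

    def register(self, predicate, handler):
        self._subscriptions.append((predicate, handler))

    def publish(self, event):
        for predicate, handler in self._subscriptions:
            if predicate(event):
                handler(event)            # only interested agents are woken up

bus = EventBus()
bus.register(lambda e: e["ticker"] == "IBM" and e["change"] >= 5,
             lambda e: print("agent woken:", e))

bus.publish({"ticker": "IBM", "change": 2})   # ignored, the agent stays idle
bus.publish({"ticker": "IBM", "change": 6})   # handler fires
```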

32. Empower the User to Access Remote Resources and Services

Software agents can automate the process of seeking and accessing resources on the net. The user may not know where to look or how to behave when resources are located.

33. Empower the User to Massively Exploit the Inherent Parallelism of the Net

Manually and consciously exploiting parallelism is difficult and interferes with the user’s thought processes. Software agents and the smart infrastructure can automate and otherwise facilitate the process.

34. Respect the User’s Time and Cognitive Capacity

Information overload is all about giving the user both more than they want and more than they can handle. Software agents have the potential to trim away the excess and reformulate what remains so that a much smaller fraction of the user’s time and cognitive capacity is needed.

35. Permit Knowledge, Ideas, Concepts, and Wisdom to be Manipulated as Easily as Traditional Data

Programming languages, communications protocols, database systems, and the smart infrastructure should allow the programmer and user alike to manipulate and access knowledge, ideas, concepts, and wisdom as easily as numbers and strings are manipulated today.

36. Eschew Mere Optimization and Focus on Enabling Breakthrough Innovation

The idea is not to do the things we do today faster (or even “better”), but to eliminate the tasks we do today and to open up whole new worlds of discourse that enrich our lives.

37. Replace the Personal Computer with the Personal Computing Environment (PCE)

First, stop thinking about programs that run on the PC or a server and think of the PC as a Personal Computing Access Device (PCAD) which is a combination of a window into all of the personal computations that are running on the user’s behalf (either locally or out on the net) combined with some amount of local storage and compute power needed for interactive tasks and when temporarily disconnected from the net.

The user should be able to run a program on another PC or server as easily and transparently as locally. Software agents can run locally or elsewhere on the net. Most importantly, the smart infrastructure can morph a single agent (or collection of agents) so that some parts will run locally or on the net as seems optimal to the infrastructure. The user and the software agents should see one seamless computing environment whether running or accessing resources locally or on the net. Code and intermediate results could be cached on the local PCAD so that net disconnection merely degrades performance, but does not impede the correct function of the user’s personal computations.

38. Smart, Transparent, Automatic Caching

The user and software agents should not have to worry about how, when, and where to cache code, data, and code execution to optimize the performance of individual personal computations. Caching should be a robust, transparent, and automatic function of the smart agent infrastructure.

39. Reverse Caching for Software Agents and Data

Software agent code and execution can be pushed out towards the resources that they need more efficient access to. Also, if more than one computation on the net requires execution of the same agent (or requirements common to different agents), the common computation could be pushed to a more efficient host and then intermediate results broadcast to interested parties.

Similarly, the repository for a data item could be migrated to be closer to its users rather than be forced to live near the host where it is generated. Once again, the code that may generate the data item should not care or be dependent on where the data-item officially resides.

40. Mutability between Code and Data

The nice thing about a language like LISP is that the code and data format is identical. Programs should be able to easily query, generate, and modify code stored not as ASCII text, but in its abstract semantic form. Conversely, programs can be written to execute or interpret information stored in abstract semantic form.

There should be a bias towards programming in the form of requirements and declarations which can be both directly manipulated as if they were data, and processed and optimized on a more global level by the smart agent infrastructure.
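
Python is not homoiconic the way LISP is, but its standard ast module gives a rough flavor of treating code as data in abstract (semantic) form rather than as ASCII text. This is an illustrative sketch, not a prescription:

```python
# Illustrative only: Python's ast module lets a program treat code as data in
# its abstract (semantic) form, query it, rewrite it, and then execute it.
import ast

source = "discount = price * 0.10"
tree = ast.parse(source)

# query the code as data: which names does this declaration depend on?
deps = [node.id for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)]
print("depends on:", deps)                # ['price']

# modify the code as data: change the rate from 0.10 to 0.15
for node in ast.walk(tree):
    if isinstance(node, ast.Constant) and node.value == 0.10:
        node.value = 0.15

namespace = {"price": 200}
exec(compile(tree, "<agent-rule>", "exec"), namespace)
print(namespace["discount"])              # 30.0
```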

41. Software Agents as the New Software Paradigm

The single most important test of whether the concepts related to software agents are ready for prime time is whether they can be applied usefully to all forms of software, including interactive applications, operating systems, device drivers, BIOS code, PDAs, cell phones, embedded systems, compilers and other software development tools, etc.

Software Agents are not just for AI applications.

42. Software Agents Could be the Dominant Software Paradigm in 10 to 20 Years

  • Current software agent technology is extremely rudimentary.
  • Much more research is needed.
  • Some interesting stuff should be available in 2 to 3 years.
  • Some commercial-quality stuff should be available in 3 to 5 years.
  • The first, ‘advance’ wave of widespread use should occur in 5 to 10 years.
  • Software agents should be the mainstream, ‘preferred’ technology in 10 to 20 years.

42a. Next 2–3 years will Merely Whet the Appetite and not Satisfy

Software agent technology is not even yet really in its infancy. The next 2–3 years will be the time when people begin to grasp the vision for software agents, but research efforts are still too tentative to be the basis for the true flowering of the potential for software agents. As such, we should all be very prepared for an extended period of waves of irrational exuberance and dashed hopes. But, out of each stage of failure will come knowledge and conviction that the subsequent stages will be better and more successful.

43. Software Agents Should Be Able to Be Written in Any Programming Language

Software agents are computer programs. Java may be in vogue right now, but LISP or C++ may be ‘better’ for any number of reasons. The software agent infrastructure should not bias itself to one programming language.

44. Computer Architecture-independent Intermediate Language

There should be a machine-independent intermediate language (whether it is similar to the Java Virtual Machine (JVM), the Intel x86, or something new). The idea is that the developer can code in whatever programming language fits the application and then compile the source code into the Software Agent Intermediate Language (SAIL). An agent server may choose to either directly execute SAIL (ala the JVM), or further compile it into a machine-dependent binary code.

45. Code and Data Should Be Separated in Software Agents

  • Although people may think of software agents as being small and simple, that’s not necessarily the case. They could be quite large, say 1 to 10 MB if they are very sophisticated.
  • If the same agent (but different instance) is sent repeatedly to a given host computer, it seems completely silly to repeatedly send the same code. The host should be able to cache the code.
  • It should be possible to update the code portion of a software agent on the fly, either to fix bugs, improve performance, or otherwise improve its capabilities.

46. Software Agent Code Server

  • The software agent developer should store the code for a software agent on a registered software agent code server (SACS).
  • Whether both the source code and SAIL code are stored would be up to the developer.
  • When a software agent is to be initiated, the host computer contacts a relevant agent code server to see whether a more recent ‘required’ version of the agent code is available.
  • If a new version is desired, the local host can either request the SAIL code and interpret it (ala the JVM) or compile it locally (ala JIT) for the specific host machine, or it can request that the code server supply a machine-compiled version. The optimum overall system and net performance would likely come from the agent code server compiling the SAIL code for each particular machine architecture as needed (see the sketch below).
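
A hypothetical sketch of the host-side logic described above: check a local cache, ask the code server for the latest version, and transfer code only when the cached copy is missing or stale. The class and method names are illustrative assumptions.

```python
# Hypothetical sketch of the host-side logic: check the local cache, ask the
# software agent code server (SACS) for the latest version, and transfer code
# only when the cached copy is missing or stale. Names are illustrative only.
class AgentCodeServer:
    def __init__(self):
        self._registry = {}               # agent name -> (version, SAIL code)

    def publish(self, name, version, sail_code):
        self._registry[name] = (version, sail_code)

    def latest_version(self, name):
        return self._registry[name][0]

    def fetch_sail(self, name):
        return self._registry[name][1]

class HostCache:
    def __init__(self, server):
        self._server = server
        self._cache = {}                  # agent name -> (version, code)

    def get_agent_code(self, name):
        latest = self._server.latest_version(name)
        cached = self._cache.get(name)
        if cached is None or cached[0] < latest:
            # only transfer code when the cached copy is missing or stale
            self._cache[name] = (latest, self._server.fetch_sail(name))
        return self._cache[name][1]

server = AgentCodeServer()
server.publish("travel-planner", 2, "SAIL bytecode ...")
host = HostCache(server)
print(host.get_agent_code("travel-planner"))   # fetched once, then served from the cache
```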

47. Software Agent Code Filters

Since the local host may have special security requirements or limitations or privacy concerns, it may request that one or more standard code filters be applied to the SAIL code to verify that the code for the software agent will be ‘safe’ on the host computer. The SACS would only need to run the filters once for each compilation of the SAIL code.

48. Applications Considered Harmful

Marketers and software developers spend inordinate amounts of time and resources designing the packaging of code into computer software applications. That’s fine if your needs approximately match the packaged software’s functions, but frequently (or usually) they do not.

What we need are better methods and tools so that the underlying software design can be represented in a form that can be dynamically reconfigured to meet the specific needs of subgroups or even specific users. A typical user or class of user at a large organization may only need 15% of the functions of the so-called application, but may also need those functions to be customized to more dramatically improve the productivity and effectiveness of the users.

The actual application packaging should be more in the way of hints on how the software should be presented. System integrators could then be able to easily modify or extend those hints to cause the software to be morphed into a form that more closely approximates the end-user and organizational needs.

System integrators should be able to mix and match subsets of the overall application computation to meet specific needs.

The application packaging decisions made by the vender’s marketing and software developers should be the starting point, rather than the end point of the packaging.

49. Leapfrog then Backfill

It is difficult and indeed counterproductive to attempt radical innovation by merely incrementally extending an existing technology ad infinitum. The burden of 100% backward compatibility simply stifles creative thinking and creative energy.

It is better to focus on leapfrog advances that are completely unconstrained by current technology and market ‘needs’. Only by passing into such brave new worlds can we truly comprehend the power and potential of the new technology.

Then, and only then, can we begin to contemplate how to backfill subsets of the new technology into legacy platforms and applications.

We can then ping-pong between the old and the new worlds, allowing each to guide the other.

At some stage, the gap between the two can become conceptually small enough that legacy uses can be shifted to the new world.

In short, backfilling is important, but is still secondary to the leapfrog advances.

50. Shared Knowledge vs. Inter-agent Communication

It should be possible for a pool of software agents to share and jointly manipulate a pool of knowledge without the need for explicit communications (such as using an agent communication language or protocol). The details of how to update and control the knowledgebase would be handled autonomously by the smart infrastructure. Developers should be able to think about a collection of software agents as simply as they currently think about a collection of functions.
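
A blackboard-style toy sketch of shared knowledge: each agent reacts to facts that other agents contribute, with no explicit agent-to-agent messages. All names are hypothetical, and the “infrastructure” is just a dictionary with notification.

```python
# A blackboard-style toy: agents share a pool of facts and react to what other
# agents contribute, with no explicit agent-to-agent messages.
class Blackboard:
    def __init__(self):
        self._facts = {}
        self._watchers = []

    def watch(self, agent):
        self._watchers.append(agent)

    def assert_fact(self, key, value):
        self._facts[key] = value
        for watcher in self._watchers:
            watcher(self._facts)          # infrastructure notifies interested agents

    def snapshot(self):
        return dict(self._facts)

def flight_agent(facts):
    if "destination" in facts and "flight" not in facts:
        board.assert_fact("flight", "cheapest flight to " + facts["destination"])

def hotel_agent(facts):
    if "flight" in facts and "hotel" not in facts:
        board.assert_fact("hotel", "3-star hotel near the airport")

board = Blackboard()
board.watch(flight_agent)
board.watch(hotel_agent)
board.assert_fact("destination", "Boston")
print(board.snapshot())
```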

51. Agent-Oriented Software Engineering is Still Software Engineering

Even with the adoption of software agent technology, the needs, methodologies, and tools of software engineering are still as important as ever. Although software agents may provide a somewhat higher-level abstraction for design, there is still a need for software engineering to focus on user requirements and the process of mapping user requirements to software features.

One change would be to put a higher emphasis on matching the goals of software agents with the requirements of users and the fact that many of the ‘details’ of software development (and even software engineering) would be automated and autonomously implemented by the smart software agent infrastructure rather than laboriously massaged by the software ‘engineer’.

52. Software Agent Artifacts and Knowledge Refinement

Software agents work towards goals by manipulating raw data or information and knowledge ‘snippets’ in various stages of refinement. We could view the work of a software agent as simply knowledge refinement. A side effect of the processing of one or more software agents is the production of software agent artifacts. These might be intermediate results, a detailed knowledgebase, or final results ready to be presented to a user. Some will be of direct interest to the user and some will be only of interest to software agent developers. But they will all be of interest to someone. The smart infrastructure should treat all of them as interesting and provide tools for cataloging, accessing, and manipulating them. They should not simply be treated as bits in a communications stream that are used once and discarded. In fact, the bits in a communication stream will tend to merely be summaries of the software agent artifacts as they exist in their ‘proper’ repositories.

53. Use Software Agents to do New Things Rather Than Merely to do Old Things Better

Although software agents can undoubtedly be used to optimize and enhance existing tasks, their real promise is to act as an enabling technology to permit people to pursue goals that might have been even unimaginable with pre-agent legacy technology. Instead of building a better mousetrap, why not build a device that can harness the best abilities of the mouse and extend and exploit them in a socially beneficial manner (and hopefully enhance the mouse’s quality of life at the same time).

54. Open IP (Intellectual Property)

It is absolutely critical that as much of the technical innovation as possible be free and openly available to stimulate the vast amount of development that will be needed simply to get the software agent ‘movement’ moving. That is not to say that for-profit venders can’t retain ownership of their products, but the extent to which key technologies are kept proprietary will hinder development of the industry.

Venders should think very carefully about the extent to which their business models are based on licensing products at high fees versus open source products and earning income from providing services.

The extent to which basic software agent technologies are patented and require payments of fees is the extent to which development of the industry will be impeded.

That said, there is no reason why any particular implementation of software agent technologies can’t be patented or provided under proprietary license fees. In other words, as long as there are a wide variety of free implementations of each type of black box, there is no reason why some implementations of each type of black box cannot be proprietary.

In general, protocols and reference implementations of the handlers for those protocols should be free and open source.

But proprietary implementations that add significant value (e.g., higher performance or scalability and capacity) for high-end industrial-grade systems should certainly be encouraged. However, adding features that would tend to cause ‘lock-in’ to a vender is to be highly discouraged.

55. Agents in the Large versus Agents in the Small

Larger software agents, analogous to current personal computing applications, tend to work on goals that have apparent meaning to the user or the organization. These are macroscopic software agents.

Smaller software agents, more analogous to individual functions or objects, and sometimes referred to as sub-agents or fractional agents, tend to work on portions or fractions of goals or sub-goals that might not even be recognizable at the user or organization level. These are microscopic software agents.

Of course, there is no reason that a user-level or organization-level goal can’t be pursued by a very small agent or why a sub-goal can’t have a very large, even multi-agent, manifestation.

A macroscopic software agent may frequently have a user interface for occasionally interacting with a user to collaboratively work towards a goal, but it is far less likely that a microscopic agent would ever interact with a user.

56. [Rev 2] Redundant Computing

The goal is to avoid ‘fragility’ in our computing environments. Any node or even collection of nodes in a network could be temporarily or permanently incapacitated, so redundancy is a valuable tool for assuring the robustness of networked computations. A software agent might be cloned, not simply to double the computing bandwidth of the application, but to double the odds that that portion of the computation is performed successfully.

56a. [Rev 2] Volition

A software agent may be pre-programmed to make deterministic choices when presented with specified inputs and environmental conditions, or the agent may in fact have a significant degree of what might be called free will which would enable the agent to act on its own volition. A software agent might be a grunt which is merely expected to do as it has been ordered, or the agent may have been granted a fair degree of autonomy which enables the agent to act on its own volition. One might distinguish between the authority to choose versus the ability and resources needed to actually make a choice. Volition would require ‘all of the above’. A sophisticated (or possibly even intelligent) software agent would be enabled for volition.

57. [Rev 2] Ontologies

The concept of an ontology (a set of concepts, axioms, and relationships that describe a domain of interest) is relevant to the domain of software agents at two distinct levels: 1) for describing how a particular software agent (or class of software agents) operates and interacts with its environment, and 2) for describing the field of software agents itself and how the many flavors of software agent can be defined or classified and the concepts, axioms, and relationships that apply to all or a subset of the totality of software agents. The first is a tool for developers of specific software agents or narrow classes of software agents, defining how those agents work and interact. The second is a tool for anyone who wishes to understand the field of software agents, broad categories of software agents, or simply how to discuss the commonality or differences between two software agents. For lack of better terms, I will refer to the specific ontology of a software agent versus the general ontology of software agents. A specific ontology is like defining the behavior patterns of an animal, versus the general ontology for describing how DNA works independently of the particular animal or class of animal.

58. [Rev 2] Specific Ontology of a Software Agent

The specific ontology of a software agent involves the concepts, tools, and techniques needed to define (or specify) the operation and interaction of a specific agent (or well-defined class of agent). More simply put, an agent ontology defines the rules for working with an agent. A specific ontology relates to describing the purpose, intent, and behavior of the particular agent (or class of agents).

59. [Rev 2] General Ontology of Software Agents

Workers in the field of software agents need a structured set of concepts, tools, and techniques to discuss the similarities and differences of different agents (or different classes of agent). The general ontology of software agents relates to the work that researchers and developers (and others) do to conceptualize, design, and implement the infrastructure needed to support software agents. For example, if someone wants to implement a new software infrastructure platform for general purpose software agents, what concepts must be implemented by that platform and the related development tools.

A hierarchical categorization is needed for the various types, kinds, flavors, classes, and categories of software agents. There may be multiple hierarchies and there may be networks rather than simple hierarchies. Well, maybe this is simply the taxonomy described below.

Ultimately, there should be a null class of agent, which defines the qualities that all agents possess, regardless of their differences. Ultimately, there is the question of what the simplest agent consists of. There might be hierarchical derived classes of agents in which the derived class further specializes the base class.

60. [Rev 2] Taxonomy of Software Agents

The general ontology of software agents gives us concepts, tools, and techniques for reasoning about agents in the abstract, but ultimately the real-world development and deployment of software agents requires a practical taxonomy of software agents, which is effectively a cross-referenced, hierarchical categorization (or catalog) of all known classes of software agents, organized in a manner that makes it easy for real-world, practicing designers, developers and deployers to work with software agents.

The difference between a taxonomy and the general ontology of software agents is that the taxonomy focuses on organizing specific classes of agent by their similarities and differences, whereas the general ontology focuses on the universal pool of software agent qualities upon which specific agent classes are defined and specified. A simpler, operational analogy: a taxonomy of software agents is to the general ontology as applications are to an operating system and development tools.

A taxonomy of software agents would not be a simple, hierarchical catalog since there are a number of different qualities or characteristics by which one may want to organize or look up a software agent. Examples of these dimensions of categorization include:

  • Application domain
  • Platform

    • Operating system
    • Agent infrastructure

  • Implementation language
  • Communications protocols supported
  • Degree of communications with other agents

    • None
    • Only pre-authorized agents (relatively hard-wired)
    • A well-defined ‘community’ or class of agents
    • Totally open, but optionally subject to various security constraints

  • Research versus degree of commercialization
  • Revisions, each of which may radically change the qualities of the agent
  • Various ‘distinctions’ such as interactive, conversational, background, running on the net, roving/migrating/mobile
  • Single vs. small team vs. large ‘army’ vs. open ‘community’ vs. offering a ‘publicized’ web service
  • Degree of robustness
  • Degree of scalability
  • Whether or not the agent can be cloned
  • Degree of intelligence

Ultimately, the developer or deployer has a set of requirements in mind and really only wants to peruse the subset of the total taxonomy related to the requirements.

60a. [Rev 2] Rules and Rule Management

Rather than hand-coding explicit code for dealing with data conditions and events, it is much better to ‘code’ as much as possible of a software agent (or collection of agents) as a set of rules and to have automated rule management be performed by the software agent infrastructure. The benefits are that it is easier, less error-prone, more robust, easier to maintain, and easier to understand, and it facilitates optimization by the agent infrastructure within single agents, across a collection of agents, and across the universe of running agents.

Researchers speak of inference-enabled web applications and rule markup languages, such as RuleML. Ultimately, the software agent infrastructure needs to be able to reason about software agents, as if they were pieces of data themselves, albeit very complex ‘pieces’ of data.

61. [Rev 3] Discrete Agent vs. Continuous Agent

There is a radical difference between a discrete agent which is expected to complete a designated task within an expected interval of time (or event occurrence) and a continuous agent which endlessly evaluates a set of conditions and occasionally takes actions or reports results, but without terminating.

An example of a discrete agent would be a travel planner agent which is expected to supply a complete travel plan as quickly as possible and with a deadline. An example of a continuous agent would be one which periodically reports to the client on the ‘best’ weekend getaway deals.

From a computer science perspective, a discrete agent is a program which terminates, whereas a continuous agent is not expected to terminate (although there will tend to be a mechanism for the client or agent OS to command the agent to terminate.)
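
A rough sketch of the two run loops, with hypothetical helper functions passed in as parameters:

```python
# A rough sketch of the two run loops; plan_trip, find_deals, report, and
# stop_requested are hypothetical callables supplied by the client or agent OS.
import time

def discrete_agent(plan_trip, deadline_seconds=5):
    start = time.time()
    while time.time() - start < deadline_seconds:
        plan = plan_trip()
        if plan is not None:
            return plan                   # goal achieved, the agent terminates
    return None                           # deadline reached without a plan

def continuous_agent(find_deals, report, stop_requested):
    while not stop_requested():           # runs until the client or agent OS says stop
        deals = find_deals()
        if deals:
            report(deals)
        time.sleep(1)                     # idle between evaluations

print(discrete_agent(lambda: "weekend itinerary"))   # returns and terminates immediately
```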

The distinction between these two classes of software agents has significant implications for the design process, debugging, testing, QA, monitoring, and support features needed in the agent OS.

Note that not all agents will be purely discrete or continuous. The travel planner may in fact be technically continuous if it presents its results for review and approval and then proceeds to replan based on the client saying “not good enough, try again.” And what about a continuous agent designed to run for a person’s entire life and then terminate on notification of the person’s death? A person’s life is certainly (currently) rather discrete, but from the perspective of the Agent OS the agent is expected to run for a very long period of time. In short, we would normally talk about the agent’s predominant mode of operation and allow for the fact that any agent could behave radically differently under special circumstances.

62. [Rev 3] Process vs. Object vs. Agent

It is not yet clear where the proper boundary line and separation of capabilities is for processes (e.g., traditional computer programs), objects (usually embedded within the data space of an application), and software agents.

Should a software agent by definition be a process, or could one process implement multiple agents or at least sets of agency capabilities?

Should objects be able to exist outside a process data space as if they were processes themselves? Is it merely a matter of adding support for dramatic numbers of lightweight processes?

In theory, I could implement an object as a process and a software agent as a process, so is a process simply an implementation scheme (technology) rather than an inherently high-level computing concept? I’m beginning to think so. Maybe a process is simply another tool in the relatively low-level toolbox that includes machine instructions, memory locations, function calls, and method invocations.

63. [Rev 3 1/17/04] Agent Application State

A software agent will have a variety of internal and possibly external “state” that is needed for the agent code to “compute” its goal, but a subset known as the “application state” is the information that would need to be stored external to the agent so that the agent can be restarted if the agent “dies” or becomes inoperative for any reason (e.g., server goes down or must be restarted, earthquake destroys the server, hacker or cyber-terrorist corrupts the system, etc.). This application state can also be thought of as the “checkpoint state”. So, as the agent goes about its business, the application state should be periodically output to a “backing store” so that the agent infrastructure can automatically restart the agent if the infrastructure determines that the agent is no longer functional.
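
A minimal sketch of checkpointing application state, assuming a simple JSON file as the “backing store”; the file name and state fields are hypothetical.

```python
# A minimal checkpointing sketch, assuming a simple JSON file as the "backing
# store". The file name and the state fields are hypothetical.
import json, os

CHECKPOINT = "agent_checkpoint.json"

def save_application_state(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)               # written each time the agent makes progress

def restart_agent():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)           # resume from the last known-good state
    return {"goal": "plan vacation", "step": 0}   # otherwise start fresh

state = restart_agent()
state["step"] += 1
save_application_state(state)
```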

64. [Rev 3 1/17/04] Functional Agents

A software agent is “functional” based on it adhering to the “function rules” that are registered with the agent infrastructure when the agent is initiated. Think of it as an agent “flight plan”. The idea is that the infrastructure can continually monitor the agent’s progress to detect when the agent may have become non-functional or dysfunctional, meaning that the agent is no longer “flying” on its established “flight plan”. The flight plan can be dynamic, with the agent or client communicating “a change of plans” to the agent infrastructure. The rules in the agent flight plan can be quite open ended, but the goal is simply to support the purpose of the agent, to protect the agent from harm, and to protect the agent environment from dysfunctional agents.
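
A toy sketch of a flight plan as a set of milestone rules that the infrastructure can check; the milestones and class names are purely illustrative.

```python
# A toy "flight plan": the agent registers expected milestones and deadlines,
# reports progress, and the infrastructure can flag the agent as dysfunctional
# if a milestone is missed.
import time

class FlightPlan:
    def __init__(self, milestones):
        self.milestones = milestones      # list of (name, deadline in seconds from start)
        self.start = time.time()
        self.reached = set()

    def report(self, name):
        self.reached.add(name)            # the agent reports progress as it flies the plan

    def is_functional(self):
        elapsed = time.time() - self.start
        for name, deadline in self.milestones:
            if elapsed > deadline and name not in self.reached:
                return False              # missed a milestone: flag as dysfunctional
        return True

plan = FlightPlan([("gathered quotes", 60), ("produced itinerary", 300)])
plan.report("gathered quotes")
print(plan.is_functional())
```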

65. [Rev 3 1/17/04] Long-Life Agents

The lifetime of many software agents will be relatively brief, from conception to achievement of a goal (e.g., “plan my vacation”). But, some agents will have open-ended goals which will be ongoing for the life of the client (e.g., “advise me on how to invest my retirement account”.) Obviously no single program or process can be assured of actively running for timescales far beyond the expected mean-time-to-restart for real-world servers. In some cases a running agent can be automatically migrated to another running server, but there are any number of unexpected contingencies that could preclude such migration, such as problems with the system software, power problems, earthquakes and tornadoes, terrorist attacks, cyber-attacks, human error, etc. The solution is that any agent which is intended to have a “long life” should continually output its application state to a reliable storage server so that the agent infrastructure can automatically restart the agent whenever the infrastructure detects that the agent is dysfunctional.

There are two categories of long-life agents: 1) software agents which are active continuously (e.g., monitoring changes in temperature or stock trading activity) and 2) intermittent software agents which experience long periods of inactivity as they wait for low-frequency events to transpire (e.g., wait for the client to reach a designated age.)

66. [Rev 3 1/17/04] Low-Activity Agents

Some software agents simply monitor environmental conditions and “wait” for some configuration of conditions to occur, but otherwise the agent may have nothing to do until those conditions occur. The agent infrastructure should support the registering of “condition requirements” so that the agent can simply hibernate or remain inactive or idle until the desired conditions pop up. That period of inactivity could be relatively short (e.g., “wait for the price of stock ticker IBM to change”), medium-term (e.g., “wait for the price of IBM to rise by $5”), or relatively long (e.g., “wait until the next quarterly report for IBM”). For relatively short “waits”, traditional operating system capabilities can be used for an active process to enter an idle state. For relatively long waits, where the conditions occur on time scales of days, weeks, years, or even decades (e.g., “wait for the client to reach retirement age”), it isn’t even necessary for the agent to be an active OS process until the conditions occur. In some cases, it may not even be necessary to initially start up an OS process for an agent until the conditions specified in the agent’s flight plan warrant agent execution. So, it is possible that millions or even tens of millions of agents exist simultaneously not as active OS processes, but as stored agent application state and agent flight plan registration tables in the agent infrastructure.
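
A hypothetical sketch of condition-requirement registration: the agent exists only as an entry in an infrastructure table until an event matches its conditions, at which point the agent is launched. Names are illustrative only.

```python
# Hypothetical sketch of condition-requirement registration: the agent is not
# an active process at all; only its registration entry and stored state live
# in an infrastructure table, and the agent is launched when conditions match.
class AgentRegistry:
    def __init__(self):
        self._entries = []                # (condition, launch_fn, stored_state)

    def register(self, condition, launch_fn, state):
        self._entries.append((condition, launch_fn, state))

    def on_event(self, event):
        for condition, launch_fn, state in self._entries:
            if condition(event):
                launch_fn(state, event)   # an agent process is started only now

registry = AgentRegistry()
registry.register(lambda e: e.get("ticker") == "IBM" and e.get("rise", 0) >= 5,
                  lambda state, e: print("launching agent", state, "on", e),
                  {"goal": "rebalance portfolio"})

registry.on_event({"ticker": "IBM", "rise": 1})   # no match, the agent stays dormant
registry.on_event({"ticker": "IBM", "rise": 6})   # the agent is launched
```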

