Complexity and Cognitive Overload Are Not Your Friends

Jack Krupansky
Apr 20, 2018


Complexity and cognitive overload have a cost. Professionals pride themselves on their ability to master complexity and engage in dramatic feats of cognitive activity. Maybe that reflects their desire to be well-compensated for their efforts. Or maybe it’s just the thrill of the challenge. Pride, ego, and all of that. But there are costs to that complexity, financial and otherwise. Cognitive overload is one of those costs. And the downstream effects of complexity and cognitive overload are a never-ending stream of costs — and headaches, both figurative and literal.

Modern systems can be too complex for any individual or even a small team to comprehend the full complexity, let alone all of the additional complexity of a collection of interacting systems.

This informal paper attempts to characterize the nature of complexity and cognitive overload of systems, and to make the case for investing much more effort at controlling complexity and cognitive overload.

The primary focus of this paper is technology systems, such as computer and software systems, including artificial intelligence (AI) systems, but the risks and dangers of complexity and cognitive overload apply to systems in general, including large infrastructure projects, complex industrial plants, automated vehicles, and organizations, groups, and teams of people.

General problems with complexity:

  1. Delegation is a powerful tool and skill, but it is not a valid substitute for comprehending the full system, both as a whole and in how all of its detailed components interact.
  2. Cost. It simply costs a lot more to design, build, operate, and maintain complex systems.
  3. Time to complete tasks.
  4. Risk of mistakes in design.
  5. Risk of failures in operation.
  6. Uneven or unacceptable performance.
  7. Loss of control. Technical staff, managers, and executives are unable to control the behavior of their systems as completely and finely as they would like or expect.
  8. Liability. Open-ended and unlimited liability. Loss of control over liability.
  9. Poor knowledge of security vulnerabilities. Loss of control over cybersecurity. Getting hacked. Data breaches. More liability.
  10. Expense of staffing.
  11. Difficulty of attracting and retaining staff.
  12. Excessive dependence on key staff, whose loss or departure might cripple the organization.
  13. Expense of management.
  14. Difficulty of attracting and retaining management.
  15. Coping with complexity requires extraordinary focus.
  16. Multitasking can be fun, satisfying, and productive, but when combined with severe complexity can be as dangerous as drunk driving.
  17. Juggling tasks can be fun, satisfying, and productive, but can be quite dangerous when combined with severe complexity.
  18. Extreme risk for lethal autonomous weapons (LAWs).
  19. Artificial intelligence (AI) can be a powerful tool to manage and limit complexity, but even AI has limits, so that many combinations of AI and complexity can be extraordinarily dangerous, with unknown risk and unknown liability.
  20. Deters (significant) innovation. It’s a lot harder to modify and enhance or even replace complex systems.
  21. No longer able to exhaustively test all combinations of features and conditions for a system.
  22. Lack of visibility into the true complexity of systems. The complexity is not visible or even measurable.
  23. Need multiple architects, but multiple, siloed architects with limited visibility present a significant risk of not knowing the full complexity of the system.
  24. Silos in general. They can be a great management technique for partitioning large projects into manageable chunks, but they tend to hide overall complexity and deter staff from thinking about complex interactions between silos.
  25. Downstream effects of complexity. The impact of an overly-complex system can cause people and other systems to exert extra effort and incur extra costs, and even overload people and other systems. In other words, the impact of complexity is not guaranteed to be limited to the particular system itself.
  26. Risk of cognitive overload. Users have difficulty using the system. Staff have difficulty comprehending and managing the system.
  27. Hype and panaceas. People can be easily deluded into believing that a technology or solution is relatively easy and practical when it is anything but. There is no shortcut to coping with raw complexity. Hype doesn’t magically make complexity go away or become trivial. On the contrary, hype is ideal for masking and hiding complexity. In fact, hype is a great way to inoculate a project against too close an examination of its true complexity and risk.
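Item 21 above, the impossibility of exhaustively testing all combinations of features and conditions, can be made concrete with a toy calculation. This is my own illustration, not from the original text: assume a system with n independent on/off feature flags, so that exhaustive testing means 2^n configurations.

```python
# Hypothetical illustration of combinatorial test explosion:
# n independent boolean feature flags yield 2**n configurations.

def exhaustive_test_count(num_flags: int) -> int:
    """Number of configurations for num_flags independent on/off features."""
    return 2 ** num_flags

def years_to_test(num_flags: int, seconds_per_test: float = 1.0) -> float:
    """Wall-clock years to run one test per configuration, serially."""
    seconds_per_year = 60 * 60 * 24 * 365
    return exhaustive_test_count(num_flags) * seconds_per_test / seconds_per_year

for flags in (10, 30, 50):
    print(flags, exhaustive_test_count(flags), round(years_to_test(flags), 2))
```

At one second per test, 10 flags are testable in minutes, but 30 flags already take roughly 34 years, and 50 flags take tens of millions of years, which is why real test suites sample configurations rather than enumerate them.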

So, what is the solution, the cure, for complexity and cognitive overload?

Sorry, but there is no silver bullet, just a hodge-podge of techniques and approaches that can help to moderate, mitigate, minimize, manage, and cope with complexity, cognitive overload, and their downstream effects.

You could say that simplicity is the cure, the silver bullet, the holy grail, to eradicate complexity, but it is more the ideal than a practical destination. Generally, simplicity is more of an unfulfilled fantasy or even an unfulfillable fantasy.

Key danger of complexity

Even if we do manage to get a complex system to work, or at least appear to work, we have a real problem.

Many modern systems are too complex for any individual or even a small team to comprehend the full complexity of the system, let alone all of the complexity of a number of interacting systems, each of which is in turn too complex for a single individual or a small team to comprehend in full.

Just as with the Sorcerer’s Apprentice in Disney’s Fantasia, it can be all too easy to conjure up a complex arrangement of activities, which can then quickly get out of control, with no Easy button in sight.

Perception is not the same as knowledge

We may think or imagine that we know all there is to know about the nuances of a complex system, but our perception or belief that we know is not the same as actual, verifiable knowledge.

Overly complex systems

For the purposes of this paper, the term complex system is used to refer to any system that is overly complex, meaning any system where the individual interacting with the system possesses only a tiny fraction of the knowledge needed to comprehend the full operation of the system.

If a system is truly properly designed, with all subsystems and components smoothly and properly interacting, and virtually no chance that an individual could ever make a mistake that would cause a catastrophic problem with the system, then there wouldn’t be any need to artificially label the system as complex.

It is only when there is some nontrivial chance that the subsystems and components might fail in some relatively catastrophic manner or that a fairly trivial mistake by an individual could cause a catastrophic failure that we need to refer to the system as overly complex or simply a complex system.

A small plant, small rodent, or even a single-celled organism is technically a very complex system, but all elements of the structure tend to work so exceedingly well that there is no psychological need to refer to such simple organisms as complex.

User experience (UX) complexity and cognitive overload

Beyond the internal design of a system, complexity and cognitive overload can be visible to users in the user experience or UX of the system.

There may simply be far too many features for the user to comprehend and cope with.

Or those features may be implemented in a way that doesn’t make sense to typical users.

Or even if the user comprehends the features and they make sense, it may take too much effort to use the features effectively for typical tasks.

Or maybe everything is fine for typical tasks, but in more atypical or extreme tasks the user can become overwhelmed by the complexity and cognitive overload kicks in.

Although poorly-designed user experiences are not uncommon, it is probably more common that difficulties experienced by users are driven by excessive complexity in the underlying system design. There may be too many features and controls in the underlying system, which forces the user experience to be comparably overly-complex. And then cognitive overload kicks in again.

In general, get the complexity of the underlying system under control, and then the user experience is far less likely to be a problem in terms of complexity and cognitive overload.

And in general, efforts at the user experience level to compensate for excessive complexity in the underlying system are unlikely to result in a positive user experience and more likely to result in cognitive overload.

Our reach exceeds our grasp

For all of the technical skill and knowledge of even relatively sophisticated organizations, all too frequently our reach exceeds our grasp when it comes to complex systems.

We imagine that we can handle systems of a given complexity, but in practice, reality intrudes and proves us wrong.

It happens all of the time.

Or, just as fatal, we merely hope that we can handle the complexity, but our hopes and dreams are so easily shattered by reality.

Cognitive overload

Every individual has some capacity for cognitive activity, including thinking, planning, and reacting and responding to input from the real world.

The human brain and mind is capable of some amazing things, but it does have its limits.

Attempting to exceed the limits of the human brain and mind is known as cognitive overload.

This means that the tasks we are seeking to accomplish are beyond our ability to intellectually manage. We can only do so much.

Complexity greatly increases the chances that our brains and minds will be overloaded.

We’re so proud of our ability to multitask and juggle multiple activities, but there are limits, and the complexity of modern systems is increasingly exceeding those limits.

Cognitive overload comes in two forms with complex systems:

  1. Using the system. Our ability to monitor and respond to all of the displays, knobs, and levers, especially under pressure as systems handle increasing amounts of data.
  2. Comprehending the system. Our ability to comprehend all of the myriad components, modules, and subsystems within the system, including all of the interactions between them, as well as interactions with other systems.

A system may be built from relatively simple components, but there are so many of them, with so many interactions that cognitive overload is virtually assured.
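A toy calculation of my own (not from the original text) shows why simple components do not add up to a simple system: the number of potential pairwise interactions among n components grows quadratically, as n choose 2.

```python
# Hedged illustration: even if each component is simple, the number of
# potential pairwise interactions among n components grows as n*(n-1)/2,
# which is one way system complexity outruns human comprehension.

def pairwise_interactions(n: int) -> int:
    """Potential component-pair interactions: n choose 2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_interactions(n))
# 10 components have 45 possible pairings; 1000 components have 499,500.
```

Ten components are within one person’s grasp; a thousand components present nearly half a million possible pairwise interactions, before even counting three-way interactions or interactions with other systems.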

Complexity, cognitive overload, and faith

In truth, there are many instances where we are able to confront systems far beyond our comprehension, and instead of pulling out our hair and screaming and running away from such systems, we simply close our eyes to the complexity and accept on faith that somebody else has indeed mastered all of that complexity for us so that we simply don’t have to care about that complexity.

Some examples:

  1. Getting on a plane.
  2. Getting on an elevator.
  3. Trusting a bank or brokerage firm.
  4. Trusting a website.
  5. Trusting a computer or smartphone.
  6. Trusting a medical device implanted in our body.
  7. Trusting an x-ray machine or CAT scan or MRI machine.
  8. Trusting a driverless vehicle.

All based on raw faith rather than comprehension and mastery of complexity.

Effective complexity vs. literal complexity

Literal complexity is the kind of run-of-the-mill, routine complexity that is easily and readily dealt with using traditional, proven technical and managerial methods. We analyze the complexity and apply the indicated resources to master it. It’s a slam dunk.

With literal complexity we can know what we are getting into in advance and know (or at least feel that we know!) how to deal with it.

Effective complexity is the kind of unusual complexity that is outside the envelope of efficacy of proven technical and managerial methods. We simply have no clue what the true complexity really is, its breadth, depth, or scope. So we have no clue how to cope with and master such complexity.

Effective complexity is the kind of complexity that completely overwhelms us. We didn’t see it coming, and we have no clue how to deal with it now that it is here.

We can treat literal complexity as if it wasn’t even there since we have it handled. We make it look easy and nobody feels that the system is complex. It’s like strolling onto an airplane or pushing a button in an elevator. So simple.

But with effective complexity we can see it and feel it. It is very real to us. It is overwhelming, but it is something which we can sense. Not simple at all.

I’m not sure which is really worse, feeling overwhelmed by effective complexity, or fooling ourselves and imagining that effective complexity is really literal complexity and then misguidedly applying traditional technical and managerial methods, oblivious to their ineffectiveness.

Neither is a good thing.

We need to do a much better job of detecting and recognizing effective complexity. Even better, we need to do a much better job of understanding potential complexity in the first place and taking steps to reduce and eliminate it in advance, before it becomes a problem.

Criteria for assessing whether a system has gotten too complex

Unfortunately, there are no precise, crystal clear, technical criteria for judging when a system has gotten too complex, but some general, if a bit vague, notions of criteria include:

  1. System is too big. Or at least feels too big.
  2. Too cumbersome.
  3. Too unwieldy.
  4. Too difficult to understand.
  5. More than any mere-mortal average individual can cope with.
  6. Too expensive.
  7. Too difficult to maintain.
  8. Too difficult to enhance.
  9. Too difficult to use.
  10. Too difficult to deploy.
  11. Too many balls in the air (juggling metaphor).
  12. Too many moving parts.
  13. Too many interactions.
  14. No single individual knows all the moving parts and all of the interactions.
  15. Nobody even knows which relatively small collection of individuals collectively has full knowledge of the true complexity of the entire system.
  16. Causes more anxiety than joy.

That last one is my favorite. Technology and systems should make our lives easier and more joyous, not cause us to pull our hair out.

Simplicity is the ideal, the holy grail

Let there be no mistake, simplicity is the ideal, the holy grail for system design.

That said, it’s far easier said than done, and frequently appears and commonly actually is virtually impossible to achieve.

We should:

  1. Value it.
  2. Make clear that writing more lines of code is less valued than simplifying code and design.
  3. Give it a priority.
  4. Train for it.
  5. Pursue it.
  6. Measure it.
  7. Compensate people for it.

Everything should be made as simple as possible, but no simpler

That’s a quote attributed to Albert Einstein: “Everything should be made as simple as possible, but no simpler.”

But according to Wikiquote, the proper quote is:

  • It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.

Same sentiment.

But all evidence I see is that we are facing absolutely no risk of running afoul of this dictum.

Granted, sometimes designs are outright bad because they fail to encompass all nuances of the problem to be solved, but that’s different from a simple design that actually solves the whole problem.

Fragility, resilience, robustness, fault tolerance

Complex systems are more likely to be fragile and more prone to either outright failure or cognitive overload.

The goal is to produce systems which are resilient and robust.

Fault tolerance is essential, the ability to detect and respond to defects and problems, so that the system continues to function without significantly impacting the user.

These are key characteristics of great systems design:

  1. Minimize fragility.
  2. Maximize resilience.
  3. Maximize robustness.
  4. Maximize fault tolerance.
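The text calls for fault tolerance, detecting and responding to defects so the user is not impacted, without prescribing a mechanism. As a minimal sketch of one common pattern (my own illustration, not the article’s method): retry a flaky operation a few times, then fall back to a degraded-but-safe default.

```python
import time

# A minimal, illustrative fault-tolerance pattern: retry a flaky
# operation a few times, then fall back to a degraded-but-safe default
# so the failure never surfaces to the user.

def fault_tolerant_call(operation, fallback, retries=3, delay=0.01):
    """Try `operation` up to `retries` times; on repeated failure,
    return `fallback()` instead of raising to the caller."""
    for attempt in range(retries):
        try:
            return operation()
        except Exception:
            time.sleep(delay)  # brief pause before retrying
    return fallback()

# Usage: a hypothetical lookup that fails twice before succeeding.
calls = {"n": 0}
def flaky_lookup():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "live data"

print(fault_tolerant_call(flaky_lookup, fallback=lambda: "cached data"))
# prints "live data" (the third attempt succeeds within retries=3)
```

The design choice here is the essence of resilience and robustness: the system degrades gracefully (serving "cached data") rather than failing outright.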

Problem areas where complexity is killing us

These days, complexity is everywhere. Many previously everyday objects now contain a computer or even more than one computer.

Some of the problem areas:

  1. Cybersecurity. Networked computers.
  2. Distributed systems. Networked computers, again. Many issues.
  3. Financial systems. High value. High risk.
  4. Transportation. Vehicles and systems.
  5. Infrastructure.
  6. Defense systems.
  7. Defense threats.
  8. Counterterrorism.
  9. Complexity of software systems.
  10. Complexity of networked systems.
  11. Artificial intelligence (AI) system complexity.
  12. Social media vendors. So many users and so much data and so many interactions.
  13. Social media overload. Problems for users themselves.
  14. Matching job seekers and jobs. Lots of attempts to solve this problem, but still too many people out of work.

Transportation complexity

Transportation presents complexity on two fronts:

  1. Vehicles.
  2. Transportation systems. Infrastructure.

There is a wide variety of vehicles, most now include one or more computers:

  1. Cars.
  2. Trucks.
  3. Buses.
  4. Trains.
  5. Mass transit.
  6. Planes.
  7. Ships.
  8. Rockets.
  9. Motorcycles.
  10. Bicycles. Rental bikes.

Transportation systems and infrastructure includes:

  1. Roads, streets, and highways.
  2. Traffic lights.
  3. Toll booths.
  4. Roadway lighting.
  5. Fueling facilities.
  6. Rest facilities.
  7. Food facilities.
  8. Bridges.
  9. Tunnels.
  10. Border control.
  11. Immigration.
  12. Ports.
  13. Airports.
  14. Air traffic control.
  15. Reservation systems.
  16. Websites related to monitoring transportation systems.
  17. Traffic control.
  18. Law enforcement.

The particular downsides of transportation complexity include:

  1. Takes too long to produce vehicles, systems, and infrastructure.
  2. Costs too much.
  3. Mistakes and quality failures.
  4. Staffing requirements are excessive.

But… even with all of those downsides, transportation investments remain very attractive politically despite their complexity.

Infrastructure complexity

Infrastructure includes:

  1. Transportation systems.
  2. Power systems. Electric grid.
  3. Water collection, purification, treatment, storage, and distribution.
  4. Communications networks.
  5. Satellites.
  6. Manufacturing plants.
  7. Chemical plants.
  8. Distribution networks.
  9. Food production, distribution, and storage.
  10. Entertainment.
  11. Leisure.
  12. Hospitality.

Plenty of opportunity for complexity and cognitive overload to creep in.

Complexity of AI systems

Artificial intelligence (AI) presents a whole new level of complexity for computer software systems.

Most forms of automation are fairly straightforward, even if they sometimes involve a lot of data and some complicated mathematics.

But AI is categorically distinct. Intelligence, unlike data and math, is not an easy concept for most people to relate to.

The complexity of AI systems has these qualities:

  1. Inherently unknowable. Unless it is a relatively simple system.
  2. Inherent sophistication of AI algorithms is likely to be far beyond the grasp of even many competent but average professionals.
  3. AI to manage complexity of AI is possible, but then who can know what’s going on in that AI system except yet another AI system, and so on ad infinitum.

In any case, we need to insist and even demand that AI professionals fully characterize and quantify the complexity of their systems. We have to know what we are getting into.

Machines can quantitatively handle more complexity, as in lots of data. If we have more data then we just need more or faster machines. That’s fairly easy to understand. Although a larger number of machines can present a management complexity challenge of its own.

But qualitative complexity will be a significant challenge. Wide variety in the forms of data is categorically distinct from volume of similar forms of data.

As AI gets more advanced, fewer individuals will be able to grasp what the AI can purportedly grasp. The AI system may grasp what it does, but how many real people will?

Complexity AI

We need better AI tools for managing complexity. Call it complexity AI.

But even that will add yet another layer of complexity.

Still, if an AI tool can help us visualize the complexity of a system, that’s a huge improvement over what we have today.

If we can see something, we stand a better chance of addressing it than if we are unable to see it in the first place.

Even a simple, zero-dimensional numerical score for overall system complexity would be a huge leap over what we have today.
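As a sketch of what even a crude scalar score might look like, here is a hypothetical weighted sum over a few countable proxies. The metrics, weights, and function name are all invented for illustration; the article only argues that some such single number would be an improvement.

```python
# A hypothetical sketch of a "zero-dimensional" complexity score:
# collapse a few crude, countable size proxies into one scalar.
# The chosen metrics and weights are invented for illustration only.

def complexity_score(components: int, interfaces: int, external_deps: int,
                     w_components: float = 1.0, w_interfaces: float = 2.0,
                     w_deps: float = 3.0) -> float:
    """Weighted sum of crude size proxies. Interfaces and external
    dependencies are weighted more heavily, on the assumption that
    interactions drive cognitive overload more than raw part count."""
    return (w_components * components
            + w_interfaces * interfaces
            + w_deps * external_deps)

print(complexity_score(components=40, interfaces=25, external_deps=10))
# A single scalar is lossy, but it makes two candidate designs comparable.
```

Even such a naive score would let a team notice that a proposed change doubles the number, which is exactly the visibility the text says we lack today.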

Social media

Social media was quite simple when it first made its appearance. But complexity has quickly crept in.

Some areas in which complexity is getting out of control for social media:

  1. Extremely large numbers of users.
  2. Growing number of forms of interaction.
  3. Anti-social behavior. So much data that it is overwhelming human editors and moderators.
  4. Fake News and disinformation. Again, too much data that overwhelms traditional efforts. AI can help, but AI can be a problem of its own, and may not really be ready for all that we want and need it for at the present time.
  5. Fake identity. Are users really who they say they are? How can you tell?

Matching job seekers and jobs

Despite many jobs being available, many people remain unemployed or underemployed.

Even despite a wealth of resources for matching workers and available jobs, many workers remain without productive and fulfilling work, and many jobs remain unfilled.

Some of the issues:

  1. Long distance. People may not be aware of where work is available, and employers may not be aware of where the workers are.
  2. Mismatches in skills and stated requirements. Despite sophisticated matching systems, even AI systems are still not yet able to recognize who could do well in a position if given even a modest amount of training and assistance even if they superficially are not a match for listed requirements.
  3. Need for significant education and training, coupled with an unwillingness of employers to train. More education or training may be required. And employers may need to be more open to their own training of available workers.

In any case, this is a complex problem and despite relatively complex attempts to resolve it, it remains unresolved.

Looming complexity threats

Beyond the many areas in which complexity is already causing headaches today, looming threats include:

  1. Lethal autonomous weapon systems (LAWs).
  2. AI for military and security intelligence.
  3. AI for financial decisions.
  4. AI for healthcare and medical decisions.
  5. Push to transition from weak AI to stronger AI, with no clear path as to how to manage the dramatic rise in complexity.
  6. Cybersecurity as systems get too complex to discern all of the nuances of their security vulnerabilities.
  7. Blockchain. Whether for cryptocurrency ledgers or other applications.
  8. Complex adaptive systems (CAS). The complexity is literally unfathomable.
  9. Quantum computing. A whole new ball game. Far beyond the scope of this paper, but it is coming.

Individual vs. group cognitive overload

Cognitive overload can occur at both the individual and group level.

But it is not uncommon for both to occur simultaneously.

Although it may be more common for a subset of group members to experience greater cognitive overload even as the remainder of the group and the group as a whole experience a lesser degree of cognitive overload.

Or vice versa.

In any case, cognitive overload for both individuals and groups needs to be addressed. There may be some overlap, but adequate attention needs to be given to where they do not overlap.

Can complexity be achieved without cognitive overload?

Again, cognitive overload comes in two distinct areas:

  1. Difficulty using a system.
  2. Difficulty comprehending the internal operation and design of the system.

In theory, a system could be designed so that it is very usable, with all of the complexity hidden under the hood, so to speak, but in practice this is very difficult.

Yes, systems can automate functions so that the system is much easier to use, but this simply shifts the complexity under the hood.

Worse, the more complex a system is under the hood, the greater the risk that the complexity will eventually surface in some unexpected manner and have some undesirable impact on the user, whether as sluggish performance, more limited function, higher cost, or any number of other effects beyond the basic functions that were automated in an attempt to eliminate cognitive overload.

And that’s if all of the cognitive overload could be engineered away, which is not all that likely.

As discussed in the Solutions section, there are a variety of techniques and methods which can be used to reduce complexity for the internal structure of a system, but the net effect is that the best way to reduce cognitive overload is to reduce the overall complexity of the system.

How much complexity can be managed before cognitive overload begins to overwhelm even elite individuals and groups?

That’s the great, unknown question: what’s the threshold of complexity before cognitive overload begins to kick in?

Unfortunately, there simply isn’t any good answer.

Other than to insist that the only answer is to work ever-harder to limit and reduce complexity so that cognitive overload does not have any chance to rear its ugly head.

Just to be clear, any answer will be different for each of:

  1. Elite individuals.
  2. Well above average individuals.
  3. Somewhat above average individuals.
  4. Average individuals.
  5. Somewhat below average individuals.
  6. Well below average individuals.
  7. Elite groups.
  8. Well above average groups.
  9. Somewhat above average groups.
  10. Average groups.
  11. Somewhat below average groups.
  12. Well below average groups.

The lesson there is that if you absolutely must field a more complex system, then you have to accept the cost of requiring and appropriately resourcing above average and elite individuals and groups.


Liability

Liability for harm and damage is a real problem for complex systems.

Due to lack of understanding of the true liabilities of a given complex system, the risk is that liability is open-ended and unlimited. Unlimited. Really.

Professionals have lost the ability to even know what the true liabilities of a complex system are.

Management has lost control over liability.

Liability has a number of dimensions:

  1. Legal. Strict legal liability. Laws, courts, crime, lawsuits, judgments, legal and regulatory restrictions, regulatory violations.
  2. Moral. Not strictly a direct business issue per se, but can present a public relations disaster, loss of faith, trust, and confidence, and loss of business.
  3. Financial. Any monetary cost or loss, whether an out of pocket cash loss or loss of business or increase in expenses.
  4. Professional. Ethical issues. May make it difficult for professionals to do their jobs, or to attract and keep qualified professionals.
  5. Managerial. Loss of control over something that management is supposed to control.
  6. Ethical. General ethics and codes of conduct. What are people supposed to do in the face of excessive, unknowable, and uncontrollable liabilities?

Liability — best to avoid it in the first place by keeping tighter control over complexity and cognitive overload.

Complexity requires extraordinary focus

Complexity requires intense focus. Extraordinarily intense focus.

Juggling with a lot of balls in the air can work, for some people, for a while, but the extraordinary intensity of focus required to cope with complexity and cognitive overload can tax and even exceed the capabilities of mere mortals and even the most elite of elite professionals and groups.

Multitasking is a powerful skill, but can also pose an extraordinary risk.

Obstacles to fighting complexity

If complexity can be so problematic, why aren’t skilled professionals doing a much better job of avoiding, reducing, eliminating, and managing it?

There are powerful disincentives in play:

  1. Professionals are paid more to cope with higher complexity. Reducing complexity would reduce the need for such skilled professionals, or the need to pay them so highly.
  2. Ego and pride. Professionals take great pride in being able to cope with higher complexity.
  3. Ignorance. Oddly, academic and professional training rarely focus much attention on complexity.
  4. Incompetence. Coping with complexity requires somewhat different skills which may not be present, or trained properly.
  5. Management and executives aren’t educated and trained in avoiding, eliminating, mitigating, and managing complexity. Or even if they are, they aren’t willing or able to allocate sufficient resources, attention, and priority.

Delegation is only a partial answer not the complete solution

Delegation is the most common and most powerful technique for managing complexity.

Decompose a complex system into subsystems and components, and then assign responsibility for subsystems and components to groups, teams, and finally individuals.

This works, sort of, in a fashion, but breaks down horribly when there are complex interactions between subsystems and components. Or when the group is not staffed adequately or not managed effectively.

Professionals and managers go to great lengths to analyze, define, and document interfaces between subsystems and components, but that only works until it doesn’t work.

Sometimes, interfaces and interactions are just too complicated or ill-defined. Performance and capacity planning can be quite problematic. Especially with distributed systems.

You can’t delegate away human nature and human error.

Yes, you can review, test, and approve specifications. And do that ad infinitum, but at some point human nature and human error make their appearance.

Cognitive overload comes into play here as well. Too many interfaces, too many reviews, too many tests, and too little time or too little resources, and, presto, somebody or a bunch of somebodies find themselves cognitively overloaded and issues get overlooked and mistakes get made.

Sure, maybe if you doubled or quadrupled the people, resources, and time, the complexity could indeed be managed, but too often that simply isn’t practical.

And sometimes, managers and even diligent professionals either let their egos get the best of them, or pride gets in the way, or they are simply too embarrassed (or bullied) to say “no, I can’t do it with the time and resources available.” It happens. All too frequently.

Multitasking — boon or bane?

Is multitasking a good thing or a bad thing? How do we know?

Good questions. And subject to much and very spirited debate.

But the real question here in the context of complexity is not whether excessive multitasking when working with complex systems can cause severe cognitive overload, but when and how much.

Ability and skill with multitasking is a source of great pride for many individuals. In fact it is their preferred mode of working.

And for relatively simple tasks and relatively simple systems that is all quite true and credible.

But, when dealing with complex systems, meaning overly complex systems, multitasking can be the proverbial straw that breaks the camel’s back.

Attention, focus, and intensity of application of intellectual activity are essential when working with many aspects of (overly) complex systems.

The real danger is that even within an (overly) complex system, the complexity is not uniform. Some aspects are incredibly complex while many others seem almost trivial, so the individual is easily lured into complacency, assuming that their multitasking habits are adequate for the more trivial aspects of the system. They may not even notice when they segue into the more complex aspects, where attempts to multitask may fail horribly.

Even worse, multitasking while working with the more complex aspects of the system may in fact appear deceptively successful for a while, maybe even for an extended period of time, until, finally, under conditions not always predictable or well understood, the cognitive overload spikes, multitasking no longer works, and failure, even catastrophic failure, occurs.

Some points to keep in mind for multitasking with complex systems:

  1. Can be a source of significant risk.
  2. Potentially risky if done under pressure.
  3. Okay if strictly voluntary and done with a healthy mental state.
  4. No clarity on the threshold of higher risk.
  5. No clarity on the limits.
  6. No clarity on how much is acceptable.
  7. No clarity on how much is recommended.
  8. No great clarity on what specific conditions cause it to be extremely unacceptable and extremely hazardous.

Maybe the short answer is to categorically restrict multitasking to trivial and simple tasks that have at most mild or minimal complexity, to strictly ban multitasking for systems of even moderate complexity, and to absolutely ban multitasking for significantly complex systems.

Of course, we should probably ban overly complex systems entirely, so then multitasking is not an issue at all for such nonexistent systems, but unfortunately there will continue to be many overly complex systems designed and deployed in the years ahead, so the only solution or workaround is to severely restrict or outright ban multitasking on such systems.

Computer system complexity

The concepts in this paper are not limited to computer and software systems, but they are the main focus and of great interest.

Computer software systems can vary greatly in their complexity, from very simple, even trivial, to extremely complex, or even so complex that nobody can provide an accurate characterization of the complexity.

There are a number of dimensions over which to characterize complexity of computer software systems:

  1. Operational performance complexity.
  2. Design complexity.
  3. Code complexity.
  4. Complexity of conception, design and implementation.
  5. Complexity of internal testing.
  6. Complexity of packaging.
  7. Complexity of final testing.
  8. Complexity of deployment.
  9. Complexity of operation. How many people does it take to keep the system up and running, including helping users.
  10. Complexity of maintenance.
  11. Complexity of evolution.

Operational performance complexity is the traditional computer science notion of algorithmic complexity. This includes:

  1. How much time is needed to complete a single task. What the computer scientists call computational complexity. Usually in terms of a mathematical formula related to how much data is involved. Commonly expressed using so-called Big O notation.
  2. How much resources, such as storage or memory, are needed to complete a single task.
  3. How many tasks can be performed simultaneously.
  4. How many tasks can be completed per unit of time.
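To make the Big O idea concrete, here is a minimal Python sketch (the function names are invented for illustration) contrasting an O(n) linear scan with an O(log n) binary search over the same sorted data:

```python
from bisect import bisect_left

def linear_search(sorted_data, target):
    """O(n): examine items one by one until the target is found."""
    for i, value in enumerate(sorted_data):
        if value == target:
            return i
    return -1

def binary_search(sorted_data, target):
    """O(log n): halve the search interval at each step (via bisect)."""
    i = bisect_left(sorted_data, target)
    if i < len(sorted_data) and sorted_data[i] == target:
        return i
    return -1

data = list(range(0, 1_000_000, 2))  # sorted even numbers
# Both find the same index; the difference is how the work grows with n.
assert linear_search(data, 123456) == binary_search(data, 123456)
```

Both functions return the same answer; what differs is how the work grows as the data grows, which is exactly what Big O notation captures.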

Design complexity is a sense of how complicated the design of the software is. This can be a vague measure, such as how many pages of paper are needed to fully document the design — the specification of what the code should do.

Code complexity is a sense of how complicated the software source code is. This can include a variety of measures such as:

  1. Raw lines of code.
  2. Number of functions.
  3. Number of classes and methods for an object-oriented design.
  4. Number of modules.
  5. How simple and clean or complicated and intricate a typical function or method is.
  6. Number of subsystems.
  7. Number of processes.
  8. Number of distinct computer systems which must interact for the full system to function.
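Several of these measures can be computed mechanically. The following Python sketch (illustrative only; the function name is invented, and real static-analysis tools measure far more) counts raw lines, function definitions, and class definitions in a piece of source code:

```python
import ast

def simple_metrics(source: str) -> dict:
    """Count raw lines of code, function definitions, and class definitions."""
    tree = ast.parse(source)
    return {
        "raw_lines": len(source.splitlines()),
        "functions": sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                         for n in ast.walk(tree)),
        "classes": sum(isinstance(n, ast.ClassDef) for n in ast.walk(tree)),
    }

sample = """
class Greeter:
    def greet(self):
        return "hello"

def main():
    print(Greeter().greet())
"""
print(simple_metrics(sample))
```

Counts like these are crude, but they give a first, objective handle on code complexity before reaching for richer measures such as intricacy of individual functions.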

Complexity of conception, design and implementation is a sense of the size of the team and elapsed time needed to move the overall idea of the system from conception to packaging and final testing.

Complexity of internal testing is a sense of the number of professionals and how much time is needed to fully test all components of the system whenever changes are made and the updated system is ready to be a candidate for release. When is engineering complete.

Complexity of packaging is a sense of how many components and other deliverables must be packaged or pulled together to have a completed software system ready to deploy. And how many people are required and for how long to complete packaging to be ready for final testing.

Complexity of final testing is a sense of how many professionals and how much time is needed to fully test the fully packaged system after changes have been made and the updated system is a candidate for release.

Complexity of deployment is a sense of how much effort is needed to install, configure, check out, and roll out a new release of the system, to go live for real-world users. How many people and how much time.

Complexity of operation is a sense of how much effort is needed to keep the deployed system running smoothly. This includes monitoring and addressing any issues or anomalies that may arise, as well as routine, scheduled maintenance tasks. How many people and how much time. And how many computer systems and associated storage and networking hardware and ancillary services are needed. Also includes capacity planning and provisioning, including changes during operation as usage evolves, as well as handling peaks and spikes of load. And how much staff and resources are needed to provide support to users.

Complexity of maintenance is a sense of how much effort is needed to fix bugs and make minor changes to the system. How many people and how long it typically takes to complete a task. From initiation of the task until the change is fully tested and ready for deployment, and deployment effort as well.

Complexity of evolution is a sense of how much effort is needed to make nontrivial changes to the system, including minor, major, and radical changes. Is it fairly easy or fairly hard? How many people are needed to staff such work, and how long does it typically take to implement a single change, as well as how quickly a modified system can be packaged and tested to be ready to release. And how much effort and resources may be required to migrate operations and users from the previous release to this new release.

Collective behavior

One major source of complexity in modern systems is that the system is a collective of multiple subsystems or even separate systems which must work together.

Collective behavior increases complexity and adds technical risk.

Some of the issues with collective behavior:

  1. Basic synchronization. Getting even only two components, subsystems, or systems to work well together.
  2. Ensembles. Getting more than two components, subsystems, or systems to work well together. Lean towards cooperation and teamwork more than central control.
  3. Armadas. Getting a larger number of components, subsystems, or systems to work well together, under relatively central control.
  4. Swarms. Getting a very large number of fairly independent actors to work together, not as a result of any central control but as a result of shared purpose.
  5. Storms. Many independent actors acting independently, without any significant coordination, frequently in competition and even at cross purposes. The system must operate in the presence of storms of independent actors.
  6. Redundancy. Need for replications of components, subsystems, or systems so that bottlenecks and loss or unavailability of one does not interfere with other components, subsystems, or systems which depend on the unavailable or overloaded entity.
  7. Consensus. Getting multiple components, subsystems, or systems to agree on some data pattern, such as a contract, a transaction, or values of a collection of data.
  8. Emergence. Behavior that emerges from collective actions of components, subsystems, or systems, and is not so obvious from even a deep comprehension of the individual components, subsystems, or systems.
  9. Self-organization. Emergence that results in a super-system that has a significant level of sophistication, once again not obvious from even a deep comprehension of the individual components, subsystems, or systems.
  10. Cooperation vs. competition. It is important to know whether two or more portions of a system are cooperating or competing, although it may not be at all obvious from even a deep comprehension of the individual components, subsystems, or systems. In some cases, it may not even be possible to tell which it is, in which case that introduces a whole new level of system complexity.
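As one illustration of the consensus issue, here is a toy Python sketch of a strict-majority vote among replica values. It is emphatically not a real consensus protocol such as Paxos or Raft, which must also cope with failures, retries, and message ordering; the function name is invented:

```python
from collections import Counter

def majority_value(replica_values):
    """Return a value only if a strict majority of replicas agree on it.

    A toy sketch of the consensus idea: real protocols must also handle
    node failures, message loss, and conflicting proposals over time.
    """
    counts = Counter(replica_values)
    value, votes = counts.most_common(1)[0]
    if votes > len(replica_values) // 2:
        return value
    return None  # no consensus reached

# Three of four replicas agree, so consensus is reached on 42.
assert majority_value([42, 42, 42, 7]) == 42
# A 2-2 split is not a strict majority, so there is no consensus.
assert majority_value([1, 1, 2, 2]) is None
```

Even this trivial sketch hints at the complexity: the moment agreement must survive failures and timing, the simple vote above becomes a genuinely hard distributed-systems problem.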

Calling all polymaths

Modern systems tend to cut across multiple disciplines, such that comprehending the totality of the system increasingly requires the capabilities of a polymath.

In the old days (1970’s and 1980’s) we called them generalists, in contrast to specialists, and frequently it was a term of disparagement rather than a term of praise.

But today, we don’t have a lot of choice. Individual specialists, as valued as they still are, are insufficient to grasp and cope with the complexity of modern systems.

We need polymaths. A lot more of them.

The problem is that they are not so easy to find. And they cannot be educated and trained so easily. Education and training can impart a sense of many disciplines, but the complexity of modern systems requires a deep grasp of multiple disciplines.

In truth, the only way to get there is through long and very hard experience.

Architects, risk of multiple, siloed architects

Any nontrivial system, and even most trivial systems, requires an architect, a professional who knows all of the pieces of the system and how they fit together.

The problem is that due to the raw complexity of modern systems, a single architect is frequently not enough. Multiple architects are needed, each with their own distinct area of expertise and responsibility.

That sort of works, until it doesn’t.

Each architect has their own silo of expertise and responsibility, but complex systems involve complex interactions between silos so that no single architect is master of the complexity across all of the silos.

Yes, you can add another level of architect or technical management, but even this only goes so far and may merely paper over the essential risk that no single architect is master of all of the complexity of the entire system.

Need for a chief architect

Every system of any significance requires a chief architect, the individual who knows the entire system inside and out. Or at least should know. But with very complex systems that knowledge will tend to be limited. But at least full knowledge remains the goal, the ideal.

Conceptual integrity, coherence, and elegance

The number one task of a chief architect is to absolutely assure the conceptual integrity of both the system as a whole and in all of its details.

The chief architect needs to assure that all aspects of the system have a sense of coherence, that all components are working towards a common purpose and designed to work well together.

Elegance is a poorly understood and much maligned concept. A coherent system will almost by definition be elegant.

A common problem with larger systems is that they have too many architects, each with different goals and different values. Conceptual integrity, coherence, and elegance tend to suffer. It’s a classic problem of too many cooks in the kitchen, that too many cooks spoil the stew.

A chief architect is the only answer.

That said, such an ideal chief architect is a very rare breed.

Need for deputy architects

Every architect should have one or more deputies, for a variety of reasons:

  1. To fill in for the architect when scheduling conflicts or absences prevent the architect from being present.
  2. To handle more requests for assistance or review from team or group members.
  3. To take over very quickly if the architect should leave.
  4. To add a second set of eyes.
  5. To add some degree of diversity.
  6. To mitigate the bus factor — if or when the architect is suddenly taken out of the picture without advance warning, such as by an accident (that’s where the famed bus comes in, or a plane), severe illness, or leaving to join another organization.

The bus factor

Generally speaking, professionals are relatively replaceable. If one professional is unavailable, another professional can quickly step in and take over where the previous professional left off.

But with increasingly complex systems and increasing levels of specialization, it is not uncommon if not typical that one or more professionals on a team possess essential skills or essential knowledge such that another professional cannot quickly step in at a moment’s notice.

This means that an entire project may be placed at risk if these key professionals were suddenly to become unavailable, such as if they were hit by a bus, hence the term bus factor.

It’s not that the problem can’t be managed, but only at great cost or delay.

Of course, the single best way to manage the problem is to keep the complexity down to a manageable level where the bus factor becomes negligible since other professionals can quickly step in should the need arise.

Several key points here are:

  1. Don’t keep all eggs in one basket. Spread knowledge and responsibility among multiple individuals.
  2. Need at least several individuals who are fully knowledgeable of all aspects of the system. Or are at least capable of fully coming up to speed on all aspects if they are needed in a pinch or a crunch.
  3. Need a credible technology succession plan for how to cope with unexpected losses or departures.

Special risk for medical systems where human life is at risk

Medical systems are still relatively simple, where complexity is not so much of an issue, but as medical systems get more and more complex, coupled with potential interactions between systems, plus the potential and risk of AI, complexity will eventually rear its ugly head.

Beyond mere bugs which merely annoy people, human lives are at risk with medical systems.

In addition to risk to life and limb, quality of life is also at risk.

Examples of loss of control

Here are some examples of complex systems where staff lost control and were unable to successfully operate and control complex systems. Details are well known but beyond the scope of this paper.

  1. Apollo 13 lunar mission.
  2. Shuttle Challenger loss.
  3. Chernobyl nuclear plant loss.
  4. Three Mile Island nuclear plant loss.
  5. Titanic.

Or from the domain of fiction:

  1. HAL 9000 AI computer in 2001: A Space Odyssey.
  2. Skynet AI network in Terminator.

Great success with rockets and space missions

As a general proposition, our greatest successes at maximizing control and minimizing loss of control have been with rockets and space missions.

Yes, indeed, rockets have been some of our more spectacular failures, but the fact that we have been as successful as we have only illustrates both the difficulty of success and our ability to achieve success when we marshal sufficient focus, discipline, and resources. And sufficient time is required. Delays are not only common and to be expected, but a necessary aspect of such efforts.

And these efforts only serve to highlight the challenge of sticking to techniques and methods which are required for success.

The Coming Software Apocalypse

Some key points about the complexity of computer software systems from a September 2017 article by James Somers in The Atlantic, entitled “The Coming Software Apocalypse” and subtitled “A small group of programmers wants to change how we code — before catastrophe strikes.”


  1. “When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years.
  2. “The problem,” says MIT professor Nancy Leveson, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”
  3. “The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.”
  4. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  5. “all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.”
  6. “The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.”
  7. “You just cannot anticipate all these things.”
  8. “basically people are playing computer inside their head.”
  9. “So the students who did well — in fact the only ones who survived at all — were those who could step through that text one instruction at a time in their head, thinking the way a computer would, trying to keep track of every intermediate calculation.”
  10. “The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.”
  11. “becoming very, very complicated.”
  12. “model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules.”
  13. “Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  14. “The 80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds of and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.””
  15. “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”
  16. “model-based design, sometimes known as model-driven engineering, or MDE”
  17. “We already know how to make complex software reliable, but in so many places, we’re choosing not to.”
  18. “all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.”
  19. “some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  20. “An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.”
  21. “code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,””
  22. “Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do — and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.”
  23. “Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell”
  24. “programmers aren’t aware — or don’t believe — that math can help them handle complexity. Complexity is the biggest challenge for programmers.”
  25. “This code has created a level of complexity that is entirely new. And it has made possible a new kind of failure.”
  26. “Code will be put in charge of hundreds of millions of lives on the road and it has to work.”
  27. “Computing is fundamentally invisible,” Gérard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”

Complexity of the problem vs. complexity of the solution

We should be extremely careful not to confuse the complexity of a problem with the complexity of a solution. The two may be tightly linked, but not necessarily.

And there are commonly any number of potential solutions for a given problem, each of which has different qualities and complexity characteristics.

Sure, a complex problem may indeed require a complex solution, but a superficial analysis won’t necessarily provide the correct evaluation of that proposition. Much deeper analysis, insight, intuition, and creativity may be required.

There are four possibilities, a two by two matrix, for the combinations of complexity of the problem and the (chosen or candidate) solution:

  1. Simple problem, simple solution.
  2. Simple problem, complex solution.
  3. Complex problem, simple solution.
  4. Complex problem, complex solution.

And sometimes the superficial evaluation of a problem is an illusion:

  1. A seemingly simple problem may be far more complex than first envisioned.
  2. A seemingly complex problem may be dramatically simplified using novel and insightful techniques.

And sometimes our initial forecast of a solution can be an illusion:

  1. The proposed simple solution may have a lot of hidden complexity that we overlooked or were not aware of.
  2. A proposed complex solution may be radically simplified with a little more thought, insight, and creativity.

Beware when complexity of the solution exceeds complexity of the problem

It is all too easy to lose sight of the original problem when we are focused heads-down on the solution.

The solution doesn’t necessarily embody the original problem.

Rather, any solution embodies an approximation of the original problem, a model of the problem.

The issue is that this model of the original problem may be more generalized than the specific nature of the original problem, so that the solution may really be solving a more complex problem than the original problem.

That’s fine if the more generalized solution is somehow simpler and more elegant than a more particularized solution, or if there is some other relevant motive, but too frequently more generalized solutions have a habit of being more complicated and hence more complex.

The result can be that the implementers of this chosen solution end up having to deal with a lot more complexity than the original problem actually required.

Again, if the more generalized solution gives you some special benefit when compared to a more particularized solution, that’s great, but if the extra cost and extra effort are beyond the bounds of reason, that is not so good.

In short, system designers and managers need to be very careful when generalizing from a specific problem.

Solutions to complexity and cognitive overload

Ultimately, there is no single magic-bullet solution to complexity and cognitive overload.

Yes, simplicity is a solution, but presumably you only have to deal with complexity because you were unable to come up with a simple solution in the first place.

At best, as mentioned in the introduction, there is no magic, silver bullet, just a hodgepodge of techniques and approaches that can help to moderate, mitigate, minimize, manage, and cope with complexity and cognitive overload and their downstream effects.

That hodgepodge includes:

  1. A chief architect, focused first and foremost on conceptual integrity, coherence, and elegance. Someone who is less likely to allow complexity and cognitive overload to get out of control in the first place.
  2. Simplify the original problem as much as humanly possible at the get-go.
  3. Consider a wider range of alternative solutions.
  4. Simplify the initial solution as much as humanly possible.
  5. Belated efforts to simplify the solution. Doing this after the fact can be expensive, error-prone, and less likely to succeed.
  6. Belated efforts to simplify the original problem. Maybe (read: usually) initial efforts were overly ambitious.
  7. Focus on modularity of the solution. Easy to say but hard to do. Requires a level of technical and managerial discipline that is usually beyond the reach of average projects.
  8. Focus on testing early, before the design solution is committed. Degree of difficulty for testing should be a criterion for choosing between alternative solutions.
  9. Redesign and retrofit components, modules, and even entire subsystems when difficulty testing and using the system becomes problematic.
  10. Use commodity components, modules, subsystems, and services whenever possible to capitalize on known complexity and failure characteristics rather than introduce fresh uncertainty over complexity and failure characteristics.
  11. More appropriate staffing. Selecting the right people makes all the difference. Raw skill, raw experience, raw education, and even raw track record are not necessarily the best indicators of the kind of individual contributors and technical and nontechnical managers who are needed to master complexity for a particular project.
  12. Better education about complexity, cognitive overload, and their consequences.
  13. Better professional training about complexity, cognitive overload, and their consequences.
  14. Staff diversity. Different perspectives can help remove blinders.
  15. Tools to monitor, measure, characterize, and visualize the complexity and cognitive load of a system.

Ignorance of complexity, cognitive overload, and their effects

It’s rather amazing that after all of these years and decades, how few professionals have a significant grasp of the nature of complexity, cognitive overload, and their effects and consequences.

In fact, it’s actually mind-boggling that this state of affairs exists here in the 21st century, despite all of the amazing science and technology that surrounds us.

We need better education and better professional training.

Need for education and training on complexity and cognitive overload

It would seem rather obvious that better education and professional training about complexity, cognitive overload, and their downstream effects would be much more widely recognized and even demanded, but the sad reality is that this is not the case.

In fact, if anything, we seem headed in the opposite direction.

Rather than focusing on consequences and prerequisites, we instead take a cavalier Just Do It approach to so many systems.

We should be trying to engineer systems, but it’s more common to focus on coding and hackathons. And endless refactoring of bad code.

Sure, education and professional training programs focused on complexity, cognitive overload, and their consequences could readily be devised, but the simple fact is that there is very little demand for them. Or even interest.


Modularity to reduce complexity and cognitive overload

One large, monolithic system is much less desirable than a system architecture that is modular, built from a significant number of smaller modules and subsystems.

Each module should be:

  1. Reasonably self-contained.
  2. Equipped with very simple, clear, and well-defined interfaces to other modules.
  3. Relatively isolated from other modules.
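One way to make those properties concrete is to have modules depend only on an explicit interface type rather than on each other's internals. A minimal Python sketch, with invented names, assuming nothing about any particular framework:

```python
from abc import ABC, abstractmethod
from typing import Optional

class Storage(ABC):
    """A small, well-defined interface: other modules depend only on this."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...

class InMemoryStorage(Storage):
    """One self-contained implementation, isolated behind the interface."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def cache_user(store: Storage, user_id: str, name: str) -> None:
    # This module sees only the Storage interface, not any implementation.
    store.put(user_id, name)

store = InMemoryStorage()
cache_user(store, "u1", "Ada")
assert store.get("u1") == "Ada"
```

Because `cache_user` sees only the `Storage` interface, the implementation behind it can be swapped or redesigned without rippling complexity into other modules.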

A modular architecture should be based on:

  1. Smaller subsystems.
  2. Greater isolation between subsystems.
  3. Commodity modules.

The principle of commodity modules is reuse, which results in modules which are:

  1. Cheaper. Economy of scale.
  2. More predictable. Well characterized from extensive experience.
  3. Longer history. More of the bugs and performance and capacity issues worked out.

The longer history of experience with a commodity module results in:

  1. Proven use. It works. Less risk of failure and need to test.
  2. Failure rate is known.
  3. Failure consequences are known.
  4. Bugs have been worked out. Maybe not all of them, but more of them.
  5. Availability of staff with enough knowledge about the technology.

In short, modules, commodity modules, and modular architecture are a big win when trying to reduce the overall complexity of a system.

And this reduces the cognitive overload of comprehending the design and implementation of the system.

Staffing to reduce complexity and cognitive overload

A mediocre effort of staffing a project can result in excessive and uncontrolled complexity and cognitive overload.

There are three levels to staffing which all matter very greatly when seeking to control complexity and cognitive overload:

  1. Technical contributors. The individuals who actually do the technical work, as well as technical architects.
  2. Technical management. Directly supervising the technical contributors.
  3. Nontechnical management. Influence the resources available to the project. Some degree of control over definition of the problem to be solved.

Another dimension to staffing is functional roles:

  1. Developers.
  2. Product management. The individuals who exercise the most control over the definition of the problem to be solved.
  3. Quality assurance. Testing.
  4. Documentation.

Each of those functional roles has the same three levels listed previously.

The two main aspects of staffing are:

  1. Team organization.
  2. Selection process for team members.

A team is best organized for minimizing complexity and cognitive overload if:

  1. It is streamlined.
  2. The emphasis is on smaller size.
  3. Strong emphasis is placed on agility.
  4. Strong emphasis on conceptual integrity, coherence, and elegance.

Selection of team members needs to focus on meeting the real needs of the specific task rather than bureaucratic requirements or over-generalized, commodity, interchangeable staff members.

As indicated in an earlier section, complexity and concern for cognitive overload impacts staffing, so that the level of complexity and cognitive overload which can be managed will vary between:

  1. Elite individuals.
  2. Well above average individuals.
  3. Somewhat above average individuals.
  4. Average individuals.
  5. Somewhat below average individuals.
  6. Well below average individuals.
  7. Elite groups.
  8. Well above average groups.
  9. Somewhat above average groups.
  10. Average groups.
  11. Somewhat below average groups.
  12. Well below average groups.

The lesson there is that if you absolutely must field a more complex system which risks cognitive overload, then you have to accept the cost of requiring and appropriately resourcing above average and elite individuals and groups.

Diversity of staff

Diversity of staff can impact how a team confronts and addresses complexity, cognitive overload, and their effects. Different perspectives can help remove blinders that prevent people from seeing things that are not exactly where they are focused in their immediate task.

But, diversity is much more easily talked about than accomplished.

Closed vs. open systems

Any particular system will tend to be either:

  1. Closed. A fixed set of known components. Complexity is fixed or bounded.
  2. Open. A variable set of components, only some of which are known when the system is deployed, with additional components arriving or departing as operation of the system evolves. Complexity is variable and even unbounded.

A closed system may also have dynamic components in addition to static components. Some of those dynamic components may be mandatory and always present, while others may be optional, so that the system must be able to run without them and react in a reasonable manner when they are not configured. This can add significant complexity and cognitive overload, even though it may be very well-intentioned and necessary.
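To make the cost of optional components concrete, here is a minimal sketch of a system with mandatory and optional components, where the system must start without its optional components and degrade gracefully when they are absent. All names here are hypothetical, not taken from any real framework, and the configuration-count formula assumes the optional components are independent.

```python
# Hypothetical sketch: a closed system with mandatory and optional
# components. All names (System, "db", "cache") are illustrative only.

class System:
    def __init__(self):
        self.components = {}   # name -> component callable (or absent)
        self.required = set()  # names that must be present before start()

    def register(self, name, component, required=True):
        self.components[name] = component
        if required:
            self.required.add(name)

    def start(self):
        missing = [n for n in self.required if self.components.get(n) is None]
        if missing:
            raise RuntimeError("missing mandatory components: %s" % missing)

    def call(self, name, *args, default=None):
        comp = self.components.get(name)
        if comp is None:
            return default     # degrade gracefully: optional part is absent
        return comp(*args)

def config_count(n_optional):
    # Each independent optional component doubles the number of
    # configurations that must be reasoned about and tested.
    return 2 ** n_optional
```

For example, with an optional "cache" component left unconfigured, `call("cache", key)` simply falls back to a default instead of failing; meanwhile, just three independent optional components already mean eight distinct configurations to test.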

An open system includes, by definition, unknown components. There may be a significant number of known components for the base system, but dynamic components can come from anywhere at any time. The total number and complexity of dynamic components is both unknown and unknowable. This makes reasoning about overall system complexity and cognitive overload especially problematic. Specialized monitoring and system management tools are needed for such open systems.

In truth, specialized monitoring and system management tools are needed for all systems.

Spikes and peaks of demand and interactions

One great uncertainty for the complexity of any system is how it will behave under extremes of load.

There are three forms of excessive load:

  1. Peaks. Which are relatively predictable based on the calendar and clock.
  2. Spikes. Which are inherently unpredictable, seeming to come out of nowhere. Possibly due to unpredictable external events, but possibly just more of a random coincidence, such as a classic perfect storm.
  3. Denial of service (DOS) attacks. Hacking. To be discussed in the next section.

Peaks can occur at various time scales:

  1. Time of day. One or more hours, or even shorter intervals, when demand and load tend to be substantially higher than the rest of the day.
  2. Day of week. Some days tend to be busier than others.
  3. Day of month. There may be some special days of the month.
  4. Seasonal demand. There may be seasons or intervals of time around holidays when demand is significantly higher.
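One way to make the peak/spike distinction operational: compare the current load against a baseline built from past samples for the same calendar slot (for example, the same hour of day), so that predictable peaks are absorbed into the baseline and only genuine anomalies are flagged as spikes. A minimal sketch follows; the threshold and the baseline scheme are illustrative assumptions, not a prescription.

```python
from statistics import mean, stdev

def is_spike(load, history, threshold=3.0):
    """Flag a spike: load far above the baseline for this time slot.

    history holds past load samples for the *same* calendar slot
    (e.g. the same hour of day), which captures predictable peaks;
    anything several standard deviations above that baseline is
    treated as an unpredictable spike.
    """
    if len(history) < 2:
        return False                 # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return load > mu             # flat baseline: any rise stands out
    return (load - mu) / sigma > threshold
```

A load of 104 against a history hovering around 100 is just noise, while a load of 500 against the same history is flagged as a spike. Real systems would use a more robust baseline (e.g. rolling quantiles), but the principle is the same.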

Complexity presents a special challenge when dealing with peaks and spikes.

The response of the system to excessive load may be extremely nonlinear during peaks and spikes. And very unpredictable.

Designing systems to be responsive during spikes and peaks is always a special challenge. And introduces whole new levels of complexity and opportunities for cognitive overload.

Denial of service (DOS) attacks

Systems can be hacked. One special form of hacking is the denial of service attack or DOS.

The goal of a DOS attack is to present a system with such excessive load that it cripples the system, on the theory that most systems are poorly designed for spikes in demand.

A special form of DOS attack is the distributed denial of service or DDOS attack. That just means that a large number of computers, sometimes called bots or a botnet, attack the target system simultaneously.

The bottom line is that a DOS or DDOS attack is somewhat similar to a peak or spike in demand.

This falls into the category of cybersecurity, which is beyond the scope of this paper.

But if a system is properly designed to handle peak demand and spikes in demand, a DOS or DDOS attack is far less likely to cripple it.
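One common defensive measure against excessive per-client load, whether from a demand spike or a DOS attempt, is rate limiting. The token bucket is a standard technique for this; the sketch below is illustrative only, not a hardened DDOS defense, which in practice is applied at the network edge rather than in application code.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: sheds excessive per-client load,
    whether from a demand spike or a DOS attempt."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self, cost=1.0):
        now = self.clock()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False              # shed this request
```

A bucket with capacity 2 and a refill rate of 1 token per second permits a short burst of two requests, then rejects further requests until tokens trickle back in.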

Complex adaptive systems (CAS)

Complex adaptive systems or CAS are systems in which the components and interactions are nonlinear, dynamic, and constantly changing due to feedback effects. They are also very sensitive to initial conditions and ever-changing environmental effects.

The net effect is that the behavior of a CAS is very unpredictable.

Hence, the complexity and cognitive load of a CAS is essentially unknowable.

The maddening thing about a CAS is that its behavior can appear quite predictable for extended periods of time, but then, without warning, it can suddenly change or begin evolving in some entirely new and unpredictable direction.
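The sensitivity to initial conditions described above can be illustrated with the logistic map, a textbook toy model of chaotic dynamics (my example, not drawn from any particular system): two starting points that differ by one part in a billion track each other for a while and then diverge completely.

```python
def logistic(x, r=4.0):
    # Logistic map x -> r * x * (1 - x); at r = 4 it is fully chaotic.
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Two starting points differing by one part in a billion...
a = trajectory(0.300000000, 50)
b = trajectory(0.300000001, 50)

# ...agree early on, but decorrelate within a few dozen steps.
early_gap = abs(a[5] - b[5])
late_gap = max(abs(a[i] - b[i]) for i in range(40, 51))
```

After five steps the two trajectories still agree to many decimal places; by step forty or fifty they bear no resemblance to each other. That is exactly why long-range prediction of a CAS is hopeless even when its governing rules are simple and fully known.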

That’s the bad news.

The good news is that the vast majority of the systems that we design are not CAS.

The really bad news is that in the future, especially with AI and open systems, more of our systems will be CAS.

Ego and pride

Human nature. What can be done about it?

Well, there is always something we can try in our efforts to cope with human nature.

But, all too commonly, we end up in a Sisyphean situation. Like Sisyphus, we push a large boulder up a hill, but just as we reach the top and start patting ourselves on the back, it gets away from us and rolls back down to the bottom, where we start all over again. Rinse and repeat. We've seen this movie. We know how it ends. But still we replay it. And we can't stop replaying it. It's our nature.


Our ego. And our pride.

We can’t help ourselves.

Rolling that boulder up the hill provides us with such tremendous psychic satisfaction and sense of accomplishment that we don’t care what might inevitably come next.

Creating a complex system makes us feel like a god, a master of the universe. Consequences be damned. A classic pact with the devil.

So, not only do we not learn our lessons, but we go far out of our way to go on to larger disasters.

Larger egos. And greater pride. They are our Holy Grail.

Support from management

Much of the burden for controlling complexity and cognitive overload rests on the shoulders of technical staff, but support from management is essential.

And that support must be deep, broad, consistent, and sustained to have the desired effect.

Parts of that support can be emotional, intellectual, or technical, but a fair chunk of it must be financial.

Management commitment to fighting complexity and cognitive overload has to show up in management’s budget.

Budget for fighting complexity and cognitive overload

Management must budget sufficient resources for the eternal battle against the encroachment of complexity and cognitive overload.

But how much money, staff, and other resources are needed?

Nobody really knows.

In truth, time is usually the more critical factor — having enough time to do a more thoughtful system design, time to pursue conceptual integrity, time to pursue coherence, time to pursue elegance, time to fully test the system, time to develop better tests, but overall it simply takes time to get it all right.

That said, raw time is not the simple answer. Without the appropriate staff, all the time in the world won’t deliver the kind of conceptual integrity, coherence, and modularity needed to minimize complexity and cognitive overload.

Sometimes management just doesn’t want to pay top dollar for the more senior staff needed for architects and senior technical contributors, plus the necessary support staff to allow senior technical staff to focus on the critical tasks that constrain conceptual integrity, coherence, and modularity.

Or sometimes management is willing to pay, or at least say they are willing, but the overall organization doesn’t have the level of appeal to attract the staff who are needed.

Competition is fierce for top talent, so it is quite possible that it will be very difficult if not impossible to attract and retain the necessary talent, even if management budgets for them.

In truth, sometimes an organization is overreaching and trying to pursue a project that is beyond their ability. Sometimes that works, but commonly it doesn’t.

In any case, without sufficient resources in the budget, ambitious projects will be crippled.

Who should drive efforts to limit complexity and cognitive overload?

Should the technical staff be responsible for promoting and sustaining efforts to limit and reduce complexity and cognitive overload?

Sure, in an ideal world. But they can’t do it alone.

Should management be responsible?

To at least some degree. Without their support and possibly with their complicity, efforts of the technical staff will be undermined.

Should executive staff be responsible?

They don’t have much to say about the technical work, but management needs their support. They need to ensure that the budget and resources will be available for such efforts. And, most critically, they need to be diligent and disciplined about ensuring that the efforts will have a high enough priority, rather than being a mere afterthought or an effort that is only addressed after disaster strikes.

The efforts of all three are needed.

All three must drive the process, each in their own way.

Who should own efforts to limit complexity and cognitive overload?

It’s all well and good to have lots of people driving efforts to limit complexity and cognitive overload, but somebody has to actually own the effort.

If everybody is responsible, then nobody is responsible.

Somebody has to be the owner: the single individual to whom everybody looks for guidance at moments of extreme stress, when laser-focused guidance is most urgently needed. As well as being the individual who selects the target and sets the vision and tone for the effort.

I would say that the technical staff ultimately own the effort. They are the ones who are doing the actual work, and they are the ones who have the best grasp of all of the technical nuances.

Unfortunately, the technical staff is frequently lost down in the weeds and under so much time pressure that they aren’t able to control complexity and cognitive overload as well as they would like.

Ultimately, I would say that the chief architect of each project is the main owner of the effort to control complexity and cognitive overload. If they don’t own the problem, there is little chance that complexity and cognitive overload will be controlled.

Calling all Luddites!

Ugh. Let’s just hope it doesn’t come to that, but the rise of modern-day anti-tech, anti-complexity Luddites is a very real risk.

Some people may revel in complexity and cognitive overload, but for many it is a source of oppression. Or at least a source of annoyance.

We need to remain alert for any significant movement of the anxiety meter from minor annoyance towards unbearable oppression.

We’re not in terrible shape today, but there are already many warning signs and it wouldn’t take much of a push to send us sliding unstoppably down the slippery slope to that unbearable oppression.

What we need to be doing is marshalling efforts to control and reduce complexity.

But that’s not a priority today, yet.


Complexity and cognitive overload are real challenges. And they are rising.

It takes a lot of effort, energy, attention, focus, discipline, skill, persistence, and support from management to keep complexity and cognitive overload in check. And that includes sufficient budget — money, staff, and other resources. And controlling complexity and cognitive overload must be a clear and explicit high priority for management, including top executives.

Unfortunately, many of those essential ingredients are in woefully short supply all too often.

The situation is likely to get a lot worse before it gets any better.

There is no single, universal, magic, silver bullet to cure the problem, other than a lot of good old-fashioned hard work. And very smart work.