Complexity and Cognitive Overload Are Not Your Friends
Complexity and cognitive overload have a cost. Professionals pride themselves on their ability to master complexity and engage in dramatic feats of cognitive activity. Maybe that reflects their desire to be well-compensated for their efforts. Or maybe it’s just the thrill of the challenge. Pride, ego, and all of that. But there are costs to that complexity, financial and otherwise. Cognitive overload is one of those costs. And the downstream effects of complexity and cognitive overload are a never-ending stream of costs — and headaches, both figurative and literal.
Modern systems can be too complex for any individual, or even a small team, to comprehend in full, let alone all of the additional complexity of a collection of interacting systems.
This informal paper attempts to characterize the nature of complexity and cognitive overload of systems, and to make the case for investing much more effort at controlling complexity and cognitive overload.
The primary focus of this paper is technology systems, such as computer and software systems, including artificial intelligence (AI) systems, but the risks and dangers of complexity and cognitive overload apply to systems in general, including large infrastructure projects, complex industrial plants, automated vehicles, and organizations, groups, and teams of people.
General problems with complexity:
- Delegation is a powerful tool and skill, but is not a valid substitute for comprehension of the full system, both as a whole and how all of its detailed components interact.
- Cost. It simply costs a lot more to design, build, operate, and maintain complex systems.
- Time to complete tasks.
- Risk of mistakes in design.
- Risk of failures in operation.
- Uneven or unacceptable performance.
- Loss of control. Technical staff, managers, and executives are unable to control the behavior of their systems as completely and finely as they would like or expect.
- Liability. Open-ended and unlimited liability. Loss of control over liability.
- Poor knowledge of security vulnerabilities. Loss of control over cybersecurity. Getting hacked. Data breaches. More liability.
- Expense of staffing.
- Difficulty of attracting and retaining staff.
- Excessive dependence on key staff, whose loss or departure might cripple the organization.
- Expense of management.
- Difficulty of attracting and retaining management.
- Coping with complexity requires extraordinary focus.
- Multitasking and juggling tasks can be fun, satisfying, and productive, but when combined with severe complexity can be as dangerous as drunk driving.
- Extreme risk for lethal autonomous weapons (LAWs).
- Artificial intelligence (AI) can be a powerful tool to manage and limit complexity, but even AI has limits, so that many combinations of AI and complexity can be extraordinarily dangerous, with unknown risk and unknown liability.
- Deters (significant) innovation. It’s a lot harder to modify and enhance or even replace complex systems.
- No longer able to exhaustively test all combinations of features and conditions for a system (see the sketch at the end of this list).
- Lack of visibility into the true complexity of systems. The complexity is not visible or even measurable.
- Need multiple architects, but multiple, siloed architects with limited visibility present a significant risk of not knowing the full complexity of the system.
- Silos in general. They can be a great management technique for partitioning large projects into manageable chunks, but they tend to hide overall complexity and deter staff from thinking about complex interactions between silos.
- Downstream effects of complexity. The impact of an overly-complex system can cause people and other systems to exert extra effort and incur extra costs, and even overload people and other systems. In other words, the impact of complexity is not guaranteed to be limited to the particular system itself.
- Risk of cognitive overload. Users have difficulty using the system. Staff have difficulty comprehending and managing the system.
- Hype and panaceas. People can easily be deluded into believing that a technology or solution is relatively easy and practical when it is anything but. There is no shortcut to coping with raw complexity. Hype doesn’t magically make complexity go away or become trivial. On the contrary, hype is ideal for masking and hiding complexity. In fact, hype is a great way to inoculate a project against too close an examination of its true complexity and risk.
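As a rough sketch of the exhaustive-testing point above (illustrative arithmetic with assumed numbers, not a measurement of any real system): even a modest count of independent on/off features produces a combinatorial space that no test plan can cover exhaustively.

```python
# Sketch: how the space of feature combinations explodes.
# Assumes n independent on/off features; real systems are worse,
# since features also interact with data, timing, and environment.

def combinations_to_test(n_features: int) -> int:
    """Number of distinct on/off feature combinations."""
    return 2 ** n_features

for n in (10, 20, 40, 80):
    print(f"{n} features -> {combinations_to_test(n):,} combinations")

# 10 features -> 1,024 combinations
# 20 features -> 1,048,576 combinations
# 40 features -> 1,099,511,627,776 combinations
# 80 features -> about 1.2e24; exhaustive testing is hopeless
```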
So, what is the solution, the cure, for complexity and cognitive overload?
Sorry, but there is no silver bullet, just a hodgepodge of techniques and approaches that can help to moderate, mitigate, minimize, manage, and cope with complexity, cognitive overload, and their downstream effects.
You could say that simplicity is the cure, the silver bullet, the holy grail that eradicates complexity, but it is more an ideal than a practical destination. Generally, simplicity is an unfulfilled, or even unfulfillable, fantasy.
Key danger of complexity
Even if we do manage to get a complex system to work, or at least appear to work, we have a real problem.
Many modern systems are too complex for any individual or even a small team to comprehend in full, let alone all of the complexity of a number of interacting systems, each of which is in turn too complex for a single individual or a small team to comprehend in full.
Just as with the Sorcerer’s Apprentice in Disney’s Fantasia, it can be all too easy to conjure up a complex arrangement of activities, but then they can quickly get out of control, with no Easy button in sight.
Perception is not the same as knowledge
We may think or imagine that we know all there is to know about the nuances of a complex system, but our perception or belief that we know is not the same as actual, verifiable knowledge.
Overly complex systems
For the purposes of this paper, the term complex system is used to refer to any system that is overly complex, meaning any system where the individual interacting with the system possesses only a tiny fraction of the knowledge needed to comprehend the full operation of the system.
If a system is truly well designed, with all subsystems and components interacting smoothly and properly, and with virtually no chance that an individual could ever make a mistake that would cause a catastrophic problem, then there wouldn’t be any need to artificially label the system as complex.
It is only when there is some nontrivial chance that the subsystems and components might fail in some relatively catastrophic manner or that a fairly trivial mistake by an individual could cause a catastrophic failure that we need to refer to the system as overly complex or simply a complex system.
A small plant, small rodent, or even a single-celled organism is technically a very complex system, but all elements of the structure tend to work so exceedingly well that there is no psychological need to refer to such simple organisms as complex.
User experience (UX) complexity and cognitive overload
Beyond the internal design of a system, complexity and cognitive overload can be visible to users in the user experience or UX of the system.
There may simply be far too many features for the user to comprehend and cope with.
Or those features may be implemented in a way that doesn’t make sense to typical users.
Or even if the user comprehends the features and they make sense, it may take too much effort to use the features effectively for typical tasks.
Or maybe everything is fine for typical tasks, but in more atypical or extreme tasks the user can become overwhelmed by the complexity and cognitive overload kicks in.
Although poorly-designed user experiences are not uncommon, it is probably more common that difficulties experienced by users are driven by excessive complexity in the underlying system design. There may be too many features and controls in the underlying system, which forces the user experience to be comparably overly-complex. And then cognitive overload kicks in again.
In general, get the complexity of the underlying system under control, and then the user experience is far less likely to be a problem in terms of complexity and cognitive overload.
And in general, efforts at the user experience level to compensate for excessive complexity in the underlying system are unlikely to produce a positive user experience and more likely to produce cognitive overload.
Our reach exceeds our grasp
For all of the technical skill and knowledge of even relatively sophisticated organizations, all too frequently our reach exceeds our grasp when it comes to complex systems.
We imagine that we can handle systems of a given complexity, but in practice, reality intrudes and proves us wrong.
It happens all of the time.
Or, just as fatal, we merely hope that we can handle the complexity, but our hopes and dreams are so easily shattered by reality.
Cognitive overload
Every individual has some capacity for cognitive activity, including thinking, planning, and reacting and responding to input from the real world.
The human brain and mind are capable of some amazing things, but they do have limits.
When the demands of a task exceed those limits, the result is known as cognitive overload.
This means that the tasks we are seeking to accomplish are beyond our ability to intellectually manage. We can only do so much.
Complexity greatly increases the chances that our brains and minds will be overloaded.
We’re so proud of our ability to multitask and juggle multiple activities, but there are limits, and the complexity of modern systems is increasingly exceeding those limits.
Cognitive overload comes in two forms with complex systems:
- Using the system. Our ability to monitor and respond to all of the displays, knobs, and levers, especially under pressure as systems handle increasing amounts of data.
- Comprehending the system. Our ability to comprehend all of the myriad components, modules, and subsystems within the system, including all of the interactions between them, as well as interactions with other systems.
A system may be built from relatively simple components, but when there are so many of them, with so many interactions, cognitive overload is virtually assured.
Complexity, cognitive overload, and faith
In truth, there are many instances where we confront systems far beyond our comprehension, and instead of pulling out our hair, screaming, and running away, we simply close our eyes to the complexity and accept on faith that somebody else has mastered all of that complexity for us, so that we simply don’t have to care about it.
Some examples:
- Getting on a plane.
- Getting on an elevator.
- Trusting a bank or brokerage firm.
- Trusting a website.
- Trusting a computer or smartphone.
- Trusting a medical device implanted in our body.
- Trusting an x-ray machine or CAT scan or MRI machine.
- Trusting a driverless vehicle.
All based on raw faith rather than comprehension and mastery of complexity.
Effective complexity vs. literal complexity
Literal complexity is the kind of run-of-the-mill, routine complexity that is easily and readily dealt with using traditional, proven technical and managerial methods. We analyze the complexity and apply the indicated resources to master it. It’s a slam dunk.
With literal complexity we can know what we are getting into in advance and know (or at least feel that we know!) how to deal with it.
Effective complexity is the kind of unusual complexity that is outside the envelope of efficacy of proven technical and managerial methods. We simply have no clue what the true complexity really is, its breadth, depth, or scope. So we have no clue how to cope with and master such complexity.
Effective complexity is the kind of complexity that completely overwhelms us. We didn’t see it coming, and we have no clue how to deal with it now that it is here.
We can treat literal complexity as if it wasn’t even there since we have it handled. We make it look easy and nobody feels that the system is complex. It’s like strolling onto an airplane or pushing a button in an elevator. So simple.
But with effective complexity we can see it and feel it. It is very real to us. It is overwhelming, but it is something which we can sense. Not simple at all.
I’m not sure which is really worse, feeling overwhelmed by effective complexity, or fooling ourselves and imagining that effective complexity is really literal complexity and then misguidedly applying traditional technical and managerial methods, oblivious to their ineffectiveness.
Neither is a good thing.
We need to do a much better job of detecting and recognizing effective complexity. Even better, we need to do a much better job of understanding potential complexity in the first place and taking steps to reduce and eliminate it in advance, before it becomes a problem.
Criteria for assessing whether a system has gotten too complex
Unfortunately, there are no precise, crystal clear, technical criteria for judging when a system has gotten too complex, but some general, if a bit vague, notions of criteria include:
- System is too big. Or at least feels too big.
- Too cumbersome.
- Too unwieldy.
- Too difficult to understand.
- More than any mere-mortal average individual can cope with.
- Too expensive.
- Too difficult to maintain.
- Too difficult to enhance.
- Too difficult to use.
- Too difficult to deploy.
- Too many balls in the air (juggling metaphor).
- Too many moving parts.
- Too many interactions.
- No single individual knows all the moving parts and all of the interactions.
- Nobody even knows which relatively small collection of individuals collectively has full knowledge of the true complexity of the entire system.
- Causes more anxiety than joy.
That last one is my favorite. Technology and systems should make our lives easier and more joyous, not cause us to pull our hair out.
Simplicity is the ideal, the holy grail
Let there be no mistake, simplicity is the ideal, the holy grail for system design.
That said, it’s far easier said than done, and frequently appears to be, and commonly actually is, virtually impossible to achieve.
We should:
- Value it.
- Make clear that writing more lines of code is less valued than simplifying code and design.
- Give it a priority.
- Train for it.
- Pursue it.
- Measure it.
- Compensate people for achieving it.
Everything should be made as simple as possible, but no simpler
That’s a quote attributed to Albert Einstein: “Everything should be made as simple as possible, but no simpler.”
But according to Wikiquote, the proper quote is:
- It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.
Same sentiment.
But all the evidence I see suggests that we are at absolutely no risk of running afoul of this dictum.
Granted, sometimes designs are outright bad because they fail to encompass all nuances of the problem to be solved, but that’s different from a simple design that actually solves the whole problem.
Fragility, resilience, robustness, fault tolerance
Complex systems are more likely to be fragile and more prone to either outright failure or cognitive overload.
The goal is to produce systems which are resilient and robust.
Fault tolerance, the ability to detect and respond to defects and problems so that the system continues to function without significantly impacting the user, is essential (see the sketch after the list below).
These are key characteristics of great systems design:
- Minimize fragility.
- Maximize resilience.
- Maximize robustness.
- Maximize fault tolerance.
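To make the fault tolerance point concrete, here is a minimal sketch of one common technique, retrying a flaky operation with exponential backoff so that transient faults are absorbed rather than surfaced to the user. The names are hypothetical, for illustration only.

```python
import time

def call_with_retries(operation, max_attempts=3, base_delay=0.5):
    """Invoke operation(); on failure, wait and retry with exponential backoff.

    Minimal fault-tolerance sketch: transient faults are absorbed here
    instead of propagating to the user.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # fault persisted; let a higher layer handle it
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical usage (fetch_account_balance is assumed, not a real API):
# result = call_with_retries(lambda: fetch_account_balance("acct-42"))
```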
Problem areas where complexity is killing us
These days, complexity is everywhere. Many previously simple, everyday objects now contain a computer, or even more than one.
Some of the problem areas:
- Cybersecurity. Networked computers.
- Distributed systems. Networked computers, again. Many issues.
- Financial systems. High value. High risk.
- Transportation. Vehicles and systems.
- Infrastructure.
- Defense systems.
- Defense threats.
- Counterterrorism.
- Complexity of software systems.
- Complexity of networked systems.
- Artificial intelligence (AI) system complexity.
- Social media vendors. So many users and so much data and so many interactions.
- Social media overload. Problems for users themselves.
- Matching job seekers and jobs. Lots of attempts to solve this problem, but still too many people out of work.
Transportation complexity
Transportation presents complexity on two fronts:
- Vehicles.
- Transportation systems. Infrastructure.
There is a wide variety of vehicles, and most now include one or more computers:
- Cars.
- Trucks.
- Buses.
- Trains.
- Mass transit.
- Planes.
- Ships.
- Rockets.
- Motorcycles.
- Bicycles. Rental bikes.
Transportation systems and infrastructure include:
- Roads, streets, and highways.
- Traffic lights.
- Toll booths.
- Roadway lighting.
- Fueling facilities.
- Rest facilities.
- Food facilities.
- Bridges.
- Tunnels.
- Border control.
- Immigration.
- Ports.
- Airports.
- Air traffic control.
- Reservation systems.
- Websites related to monitoring transportation systems.
- Traffic control.
- Law enforcement.
The particular downsides of transportation complexity include:
- Takes too long to produce vehicles, systems, and infrastructure.
- Costs too much.
- Mistakes and quality failures.
- Staffing requirements are excessive.
But… even with all of those downsides, transportation investments remain very attractive politically despite their complexity.
Infrastructure complexity
Infrastructure includes:
- Transportation systems.
- Power systems. Electric grid.
- Water collection, purification, treatment, storage, and distribution.
- Communications networks.
- Satellites.
- Manufacturing plants.
- Chemical plants.
- Distribution networks.
- Food production, distribution, and storage.
- Entertainment.
- Leisure.
- Hospitality.
Plenty of opportunity for complexity and cognitive overload to creep in.
Complexity of AI systems
Artificial intelligence (AI) presents a whole new level of complexity for computer software systems.
Most forms of automation are fairly straightforward, even if they sometimes involve a lot of data and some complicated mathematics.
But AI is categorically distinct. Intelligence, unlike data volume and mathematics, is not an easy concept for most people to reason about.
The complexity of AI systems has these qualities:
- Inherently unknowable. Unless it is a relatively simple system.
- Inherent sophistication of AI algorithms is likely to be far beyond the grasp of even many competent but average professionals.
- AI to manage the complexity of AI is possible, but then who can know what’s going on in that AI system except yet another AI system, and so on ad infinitum?
In any case, we need to insist and even demand that AI professionals fully characterize and quantify the complexity of their systems. We have to know what we are getting into.
Machines can quantitatively handle more complexity, as in lots of data. If we have more data, then we just need more or faster machines. That’s fairly easy to understand, although a larger number of machines can present a management complexity challenge of its own (see the sketch below).
But qualitative complexity will be a significant challenge. Wide variety in the forms of data is categorically distinct from volume of similar forms of data.
As AI gets more advanced, fewer individuals will be able to grasp what the AI can purportedly grasp. The AI system may grasp what it does, but how many real people will?
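Here is a back-of-the-envelope sketch of the quantitative scaling point above, with invented numbers: throughput scales roughly linearly with machine count until coordination overhead, itself a form of management complexity, eats the gains.

```python
# Illustrative scaling arithmetic (assumed numbers, not measurements).
# Each machine handles 1,000 tasks/sec, but every added machine costs
# every machine an extra 2% of its time in coordination.

def effective_throughput(machines: int, per_machine=1000, overhead=0.02):
    useful_fraction = max(0.0, 1.0 - overhead * (machines - 1))
    return machines * per_machine * useful_fraction

for n in (1, 10, 25, 50):
    print(f"{n:3d} machines -> {effective_throughput(n):,.0f} tasks/sec")
#   1 machines -> 1,000 tasks/sec
#  10 machines -> 8,200 tasks/sec
#  25 machines -> 13,000 tasks/sec
#  50 machines -> 1,000 tasks/sec (coordination has eaten nearly everything)
```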
Complexity AI
We need better AI tools for managing complexity. Call it complexity AI.
But even that will add yet another layer of complexity.
Still, if an AI tool can help us visualize the complexity of a system, that’s a huge improvement over what we have today.
If we can see something, we stand a better chance of addressing it than if we are unable to see it in the first place.
Even a simple, zero-dimensional (single-number) score for overall system complexity would be a huge leap over what we have today.
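As a hedged sketch of what such a single-number score might look like, here is a naive weighted sum over a few crude measures. The inputs and weights are invented for illustration, not an established metric, but even a crude score would make complexity visible and comparable across systems.

```python
def complexity_score(lines_of_code: int, modules: int,
                     interfaces: int, external_systems: int) -> float:
    """Naive single-number complexity score: bigger means more complex."""
    return (0.001 * lines_of_code      # sheer size
            + 1.0 * modules            # number of parts
            + 2.0 * interfaces         # interactions weigh more than size
            + 5.0 * external_systems)  # cross-system coupling weighs most

print(complexity_score(lines_of_code=250_000, modules=40,
                       interfaces=120, external_systems=6))
# -> 560.0  (250 + 40 + 240 + 30)
```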
Social media
Social media was quite simple when it first made its appearance. But complexity has quickly crept in.
Some areas in which complexity is getting out of control for social media:
- Extremely large numbers of users.
- Growing number of forms of interaction.
- Anti-social behavior. So much data that it is overwhelming human editors and moderators.
- Fake News and disinformation. Again, too much data that overwhelms traditional efforts. AI can help, but AI can be a problem of its own, and may not really be ready for all that we want and need it for at the present time.
- Fake identity. Are users really who they say they are? How can you tell?
Matching job seekers and jobs
Despite many jobs being available, many people remain unemployed or underemployed.
Even despite a wealth of resources for matching workers and available jobs, many workers remain without productive and fulfilling work, and many jobs remain unfilled.
Some of the issues:
- Long distance. People may not be aware of where work is available, and employers may not be aware of where the workers are.
- Mismatches in skills and stated requirements. Despite sophisticated matching systems, even AI systems are still not able to recognize who could do well in a position, given a modest amount of training and assistance, even if they superficially do not match the listed requirements.
- Need for significant education and training, coupled with an unwillingness of employers to train. More education or training may be required. And employers may need to be more open to their own training of available workers.
In any case, this is a complex problem and despite relatively complex attempts to resolve it, it remains unresolved.
Looming complexity threats
Beyond the many areas in which complexity is already causing headaches today, looming threats include:
- Lethal autonomous weapon systems (LAWs).
- AI for military and security intelligence.
- AI for financial decisions.
- AI for healthcare and medical decisions.
- Push to transition from weak AI to stronger AI, with no clear path as to how to manage the dramatic rise in complexity.
- Cybersecurity as systems get too complex to discern all of the nuances of their security vulnerabilities.
- Blockchain. Whether for cryptocurrency ledgers or other applications.
- Complex adaptive systems (CAS). The complexity is literally unfathomable.
- Quantum computing. A whole new ball game. Far beyond the scope of this paper, but it is coming.
Individual vs. group cognitive overload
Cognitive overload can occur at both the individual and group level.
But it is not uncommon for both to occur simultaneously.
Although it may be more common for a subset of group members to experience greater cognitive overload even as the remainder of the group and the group as a whole experience a lesser degree of cognitive overload.
Or vice versa.
In any case, cognitive overload for both individuals and groups needs to be addressed. There may be some overlap, but adequate attention needs to be given to where they do not overlap.
Can complexity be achieved without cognitive overload?
Again, cognitive overload comes in two distinct areas:
- Difficulty using a system.
- Difficulty comprehending the internal operation and design of the system.
In theory, a system could be designed so that it is very usable but has all of the complexity hidden under the hood, so to speak, but in practice this is very difficult (see the sketch at the end of this section).
Yes, systems can automate functions so that the system is much easier to use, but this simply shifts the complexity under the hood.
Worse, the more complex a system is under the hood, the greater the risk that the complexity will eventually surface in some unexpected manner and have some undesirable impact on the user, whether as poor performance, sluggishness, more limited function, higher cost, or any number of other effects beyond the basic functions that were automated in an attempt to eliminate cognitive overload.
And that’s if all of the cognitive overload could be engineered away, which is not all that likely.
As discussed in the Solutions section, there are a variety of techniques and methods which can be used to reduce the complexity of the internal structure of a system, but the net effect is that the best way to reduce cognitive overload is to reduce the overall complexity of the system.
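As a minimal sketch of the “hidden under the hood” idea, here is a facade: the user-facing surface is a single simple call while the complexity sits behind it. All class and method names are hypothetical. Note that the complexity has not gone away; it has only moved, which is exactly the risk described above.

```python
# Facade sketch: one simple user-facing call over complex internals.
# All class and method names are hypothetical, for illustration.

class _Parser:  # internal subsystem 1
    def parse(self, doc):
        return doc.split()

class _Indexer:  # internal subsystem 2
    def index(self, tokens):
        return {t: tokens.count(t) for t in tokens}

class _Ranker:  # internal subsystem 3
    def rank(self, index, query):
        # exact matches first, then by descending frequency
        return sorted(index, key=lambda t: (t != query, -index[t]))

class SearchFacade:
    """The only surface the user sees: one method, no knobs."""
    def __init__(self):
        self._parser = _Parser()
        self._indexer = _Indexer()
        self._ranker = _Ranker()

    def search(self, doc: str, query: str) -> list:
        tokens = self._parser.parse(doc)
        index = self._indexer.index(tokens)
        return self._ranker.rank(index, query)

print(SearchFacade().search("to be or not to be", "be"))
# -> ['be', 'to', 'or', 'not']; the internal pipeline is invisible to the caller
```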
How much complexity can be managed before cognitive overload begins to overwhelm even elite individuals and groups?
That’s the great, unknown question: what’s the threshold of complexity before cognitive overload begins to kick in?
Unfortunately, there simply isn’t any good answer.
Other than to insist that the only answer is to work ever-harder to limit and reduce complexity so that cognitive overload does not have any chance to rear its ugly head.
Just to be clear, any answer will be different for each of:
- Elite individuals.
- Well above average individuals.
- Somewhat above average individuals.
- Average individuals.
- Somewhat below average individuals.
- Well below average individuals.
- Elite groups.
- Well above average groups.
- Somewhat above average groups.
- Average groups.
- Somewhat below average groups.
- Well below average groups.
The lesson there is that if you absolutely must field a more complex system, then you have to accept the cost of requiring and appropriately resourcing above average and elite individuals and groups.
Liability
Liability for harm and damage is a real problem for complex systems.
Due to lack of understanding of the true liabilities of a given complex system, the risk is that liability is open-ended and unlimited. Unlimited. Really.
Professionals have lost the ability to even know what the true liabilities of a complex system are.
Management has lost control over liability.
Liability has a number of dimensions:
- Legal. Strict legal liability. Laws, courts, crime, lawsuits, judgments, legal and regulatory restrictions, regulatory violations.
- Moral. Not strictly a direct business issue per se, but can present a public relations disaster, loss of faith, trust, and confidence, and loss of business.
- Financial. Any monetary cost or loss, whether an out of pocket cash loss or loss of business or increase in expenses.
- Professional. Ethical issues. May make it difficult for professionals to do their jobs, or to attract and keep qualified professionals.
- Managerial. Loss of control over something that management is supposed to control.
- Ethical. General ethics and codes of conduct. What are people supposed to do in the face of excessive, unknowable, and uncontrollable liabilities?
Liability is best avoided in the first place by keeping tighter control over complexity and cognitive overload.
Complexity requires extraordinary focus
Complexity requires intense focus. Extraordinarily intense focus.
Juggling with a lot of balls in the air can work, for some people, for a while, but the extraordinary intensity of focus required to cope with complexity and cognitive overload can tax, and even exceed, the capabilities of mere mortals and even the most elite of elite professionals and groups.
Multitasking is a powerful skill, but can also pose an extraordinary risk.
Obstacles to fighting complexity
If complexity can be so problematic, why aren’t skilled professionals doing a much better job of avoiding, reducing, eliminating, and managing it?
There are powerful disincentives in play:
- Professionals are paid more to cope with higher complexity. Reducing complexity would reduce the need for such skilled professionals, or the need to pay them so highly.
- Ego and pride. Professionals take great pride in being able to cope with higher complexity.
- Ignorance. Oddly, academic and professional training rarely focus much attention on complexity.
- Incompetence. Coping with complexity requires somewhat different skills which may not be present, or trained properly.
- Management and executives aren’t educated and trained in avoiding, eliminating, mitigating, and managing complexity. Or even if they are, they aren’t willing or able to allocate sufficient resources, attention, and priority.
Delegation is only a partial answer, not a complete solution
Delegation is the most common and most powerful technique for managing complexity.
Decompose a complex system into subsystems and components, and then assign responsibility for subsystems and components to groups, teams, and finally individuals.
This works, sort of, in a fashion, but breaks down horribly when there are complex interactions between subsystems and components. Or when the group is not staffed adequately or not managed effectively.
Professionals and managers go to great lengths to analyze, define, and document interfaces between subsystems and components, but that only works until it doesn’t work.
Sometimes, interfaces and interactions are just too complicated or ill-defined. Performance and capacity planning can be quite problematic. Especially with distributed systems.
You can’t delegate away human nature and human error.
Yes, you can review, test, and approve specifications, and do that ad infinitum, but at some point human nature and human error make their appearance.
Cognitive overload comes into play here as well. Too many interfaces, too many reviews, too many tests, and too little time or too little resources, and, presto, somebody or a bunch of somebodies find themselves cognitively overloaded and issues get overlooked and mistakes get made.
Sure, maybe if you doubled or quadrupled the people, resources, and time, the complexity could indeed be managed, but too often that simply isn’t practical.
And sometimes, managers and even diligent professionals either let their egos get the best of them, or pride gets in the way, or they are simply too embarrassed (or bullied) to say “no, I can’t do it with the time and resources available.” It happens. All too frequently.
Multitasking — boon or bane?
Is multitasking a good thing or a bad thing? How do we know?
Good questions. And subject to much and very spirited debate.
But the real question here, in the context of complexity, is not whether multitasking while working with complex systems can cause severe cognitive overload, but when and how much.
Ability and skill with multitasking are a source of great pride for many individuals. In fact, multitasking is their preferred mode of working.
And for relatively simple tasks and relatively simple systems that is all quite true and credible.
But, when dealing with complex systems, meaning overly complex systems, multitasking can be the proverbial straw that breaks the camel’s back.
Attention, focus, and intensity of application of intellectual activity are essential when working with many aspects of (overly) complex systems.
The real danger is that even with an (overly) complex system, the complexity is not uniform. Some aspects are incredibly complex while many others seem almost trivial, so an individual is easily lured into complacency, trusting that their multitasking habits are adequate for the more trivial aspects of the system, and may not even notice when they segue into the more complex aspects, where attempts to multitask can fail horribly.
Even worse, multitasking while working with the more complex aspects of the system may in fact be deceptively successful for a while, maybe even for an extended period, until, finally, under conditions that are not always predictable or well understood, cognitive overload spikes, multitasking no longer works, and failure, even catastrophic failure, occurs.
Some points to keep in mind for multitasking with complex systems:
- Can be a source of significant risk.
- Potentially risky if done under pressure.
- Okay if strictly voluntary and done with a healthy mental state.
- No clarity on the threshold of higher risk.
- No clarity on the limits.
- No clarity on how much is acceptable.
- No clarity on how much is recommended.
- No great clarity on what specific conditions cause it to be extremely unacceptable and extremely hazardous.
Maybe the short answer is to categorically restrict multitasking to trivial and simple tasks that have at most mild or minimal complexity, to strictly ban multitasking for complex systems of even moderate complexity, and to absolutely ban multitasking for significantly complex systems.
Of course, we should probably ban overly complex systems entirely, so that multitasking would not be an issue at all for such nonexistent systems. But unfortunately many overly complex systems will continue to be designed and deployed in the years ahead, so the only solution, or workaround, is to severely restrict or outright ban multitasking on such systems.
Computer system complexity
The concepts in this paper are not limited to computer and software systems, but they are the main focus and of great interest.
Computer software systems vary greatly in complexity, from very simple, even trivial, to extremely complex, even to a level where nobody can provide an accurate characterization of the complexity.
There are a number of dimensions over which to characterize complexity of computer software systems:
- Operational performance complexity.
- Design complexity.
- Code complexity.
- Complexity of conception, design and implementation.
- Complexity of internal testing.
- Complexity of packaging.
- Complexity of final testing.
- Complexity of deployment.
- Complexity of operation. How many people does it take to keep the system up and running, including helping users.
- Complexity of maintenance.
- Complexity of evolution.
Operational performance complexity is the traditional computer science notion of algorithmic complexity. This includes:
- How much time is needed to complete a single task. What computer scientists call computational complexity. Usually expressed as a mathematical formula related to how much data is involved, using so-called Big O notation.
- How many resources, such as storage or memory, are needed to complete a single task.
- How many tasks can be performed simultaneously.
- How many tasks can be completed per unit of time.
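To ground the Big O bullet above, here is a small sketch contrasting two ways to do the same task, membership search, whose computational complexity differs: a linear scan is O(n), while binary search over sorted data is O(log n).

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n): comparisons grow in proportion to the size of the list."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the search range at every step (data must be sorted)."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
# Both return the same answer, but the cost profiles differ enormously:
# linear_search may do up to 1,000,000 comparisons; binary_search about 20.
assert linear_search(data, 765_432) == binary_search(data, 765_432) == 765_432
```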
Design complexity is a sense of how complicated the design of the software is. This can be a vague measure, such as how many pages of paper are needed to fully document the design, that is, the specification of what the code should do.
Code complexity is a sense of how complicated the software source code is. This can include a variety of measures such as:
- Raw lines of code.
- Number of functions.
- Number of classes and methods for an object-oriented design.
- Number of modules.
- How simple and clean or complicated and intricate a typical function or method is.
- Number of subsystems.
- Number of processes.
- Number of distinct computer systems which must interact for the full system to function.
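Several of these code complexity measures can be collected mechanically. Here is a minimal sketch using Python’s standard ast module to count raw lines, functions, and classes in a piece of source code; how to weight and interpret such counts is deliberately left open.

```python
import ast

def crude_code_metrics(source: str) -> dict:
    """Count raw lines, function definitions, and class definitions.

    A sketch of a few of the measures listed above; it deliberately
    ignores subtler measures, such as how intricate each function is.
    """
    tree = ast.parse(source)
    return {
        "raw_lines": len(source.splitlines()),
        "functions": sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                         for n in ast.walk(tree)),
        "classes": sum(isinstance(n, ast.ClassDef) for n in ast.walk(tree)),
    }

sample = "class A:\n    def f(self):\n        return 1\n\ndef g():\n    return 2\n"
print(crude_code_metrics(sample))
# -> {'raw_lines': 6, 'functions': 2, 'classes': 1}
```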
Complexity of conception, design and implementation is a sense of the size of the team and elapsed time needed to move the overall idea of the system from conception to packaging and final testing.
Complexity of internal testing is a sense of how many professionals and how much time are needed to fully test all components of the system whenever changes are made and the updated system is ready to be a candidate for release. In other words, when is engineering complete?
Complexity of packaging is a sense of how many components and other deliverables must be packaged or pulled together to have a completed software system ready to deploy. And how many people are required and for how long to complete packaging to be ready for final testing.
Complexity of final testing is a sense of how many professionals and how much time are needed to fully test the fully packaged system after changes have been made and the updated system is a candidate for release.
Complexity of deployment is a sense of how much effort is needed to install, configure, check out, and roll out a new release of the system, to go live for real-world users. How many people and how much time.
Complexity of operation is a sense of how much effort is needed to keep the deployed system running smoothly. This includes monitoring and addressing any issues or anomalies that may arise, as well as routine, scheduled maintenance tasks. How many people and how much time. And how many computer systems and associated storage and networking hardware and ancillary services are needed. Also includes capacity planning and provisioning, including changes during operation as usage evolves, as well as handling peaks and spikes of load. And how much staff and resources are needed to provide support to users.
Complexity of maintenance is a sense of how much effort is needed to fix bugs and make minor changes to the system. How many people and how long it typically takes to complete a task. From initiation of the task until the change is fully tested and ready for deployment, and deployment effort as well.
Complexity of evolution is a sense of how much effort is needed to make nontrivial changes to the system, including minor, major, and radical changes. Is it fairly easy or fairly hard? How many people are needed to staff such work, how long does it typically take to implement a single change, and how quickly can a modified system be tested, packaged, and retested to be ready for release? And how much effort and resources may be required to migrate operations and users from the previous release to this new release.
Collective behavior
One major source of complexity in modern systems is that the system is a collective of multiple subsystems or even separate systems which must work together.
Collective behavior increases complexity and adds technical risk.
Some of the issues with collective behavior:
- Basic synchronization. Getting even only two components, subsystems, or systems to work well together.
- Ensembles. Getting more than two components, subsystems, or systems to work well together. Lean towards cooperation and teamwork more than central control.
- Armadas. Getting a larger number of components, subsystems, or systems to work well together, under relatively central control.
- Swarms. Getting a very large number of fairly independent actors to work together, not as a result of any central control but as a result of shared purpose.
- Storms. Many independent actors acting independently, without any significant coordination, frequently in competition and even at cross purposes. The system must operate in the presence of storms of independent actors.
- Redundancy. Need for replication of components, subsystems, or systems so that bottlenecks and loss or unavailability of one does not interfere with other components, subsystems, or systems which depend on the unavailable or overloaded entity.
- Consensus. Getting multiple components, subsystems, or systems to agree on some data pattern, such as a contract, a transaction, or values of a collection of data (see the sketch after this list).
- Emergence. Behavior that emerges from collective actions of components, subsystems, or systems, and is not so obvious from even a deep comprehension of the individual components, subsystems, or systems.
- Self-organization. Emergence that results in a super-system that has a significant level of sophistication, once again not obvious from even a deep comprehension of the individual components, subsystems, or systems.
- Cooperation vs. competition. It is important to know whether two or more portions of a system are cooperating or competing, although it may not be at all obvious from even a deep comprehension of the individual components, subsystems, or systems. In some cases, it may not even be possible to tell which it is, in which case that introduces a whole new level of system complexity.
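As a toy illustration of the consensus item above, here is simple majority voting over values reported by replicated components. This is a deliberate oversimplification: real consensus protocols such as Paxos or Raft must also survive crashed replicas, message loss, and reordering, which is where much of the real complexity lives.

```python
from collections import Counter

def majority_consensus(reported_values):
    """Return the value a strict majority of replicas agree on, else None."""
    if not reported_values:
        return None
    value, count = Counter(reported_values).most_common(1)[0]
    return value if count > len(reported_values) / 2 else None

print(majority_consensus(["commit", "commit", "abort"]))  # -> 'commit'
print(majority_consensus(["commit", "abort"]))            # -> None (no majority)
```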
Calling all polymaths
Modern systems tend to cut across multiple disciplines, such that comprehending the totality of the system increasingly requires the capabilities of a polymath.
In the old days (1970s and 1980s) we called them generalists, in contrast to specialists, and frequently it was a term of disparagement rather than a term of praise.
But today, we don’t have a lot of choice. Individual specialists, as valued as they still are, are insufficient to grasp and cope with the complexity of modern systems.
We need polymaths. A lot more of them.
The problem is that they are not so easy to find. And they cannot be educated and trained so easily. Education and training can impart a sense of many disciplines, but the complexity of modern systems requires a deep grasp of multiple disciplines.
In truth, the only way to get there is through long and very hard experience.
Architects, risk of multiple, siloed architects
Any nontrivial system, and even most trivial systems, requires an architect, a professional who knows all of the pieces of the system and how they fit together.
The problem is that due to the raw complexity of modern systems, a single architect is frequently not enough. Multiple architects are needed, each with their own distinct area of expertise and responsibility.
That sort of works, until it doesn’t.
Each architect has their own silo of expertise and responsibility, but complex systems involve complex interactions between silos so that no single architect is master of the complexity across all of the silos.
Yes, you can add another level of architect or technical management, but even this only goes so far and may merely paper over the essential risk that no single architect is master of all of the complexity of the entire system.
Need for a chief architect
Every system of any significance requires a chief architect, the individual who knows the entire system inside and out. Or at least should. With very complex systems that knowledge will tend to be limited, but full knowledge remains the goal, the ideal.
Conceptual integrity, coherence, and elegance
The number one task of a chief architect is to absolutely assure the conceptual integrity of both the system as a whole and in all of its details.
The chief architect needs to assure that all aspects of the system have a sense of coherence, that all components are working towards a common purpose and designed to work well together.
Elegance is a poorly understood and much maligned concept. A coherent system will almost by definition be elegant.
A common problem with larger systems is that they have too many architects, each with different goals and different values. Conceptual integrity, coherence, and elegance tend to suffer. It’s the classic problem of too many cooks spoiling the stew.
A chief architect is the only answer.
That said, such an ideal chief architect is a very rare breed.
Need for deputy architects
Every architect should have one or more deputies, for a variety of reasons:
- To fill in for the architect when scheduling conflicts or absence prevent them from being present.
- To handle more requests for assistance or review from team or group members.
- To take over very quickly if the architect should leave.
- To add a second set of eyes.
- To add some degree of diversity.
- To mitigate the bus factor — if or when the architect is suddenly taken out of the picture without advance warning, such as by an accident (that’s where the famed bus comes in, or a plane), severe illness, or leaving to join another organization.
The bus factor
Generally speaking, professionals are relatively replaceable. If one professional is unavailable, another professional can quickly step in and take over where the previous professional left off.
But with increasingly complex systems and increasing levels of specialization, it is not uncommon if not typical that one or more professionals on a team possess essential skills or essential knowledge such that another professional cannot quickly step in at a moment’s notice.
This means that an entire project may be placed at risk if these key professionals were suddenly to become unavailable, such as if they were hit by a bus, hence the term bus factor.
It’s not that the problem can’t be managed, but only at great cost or delay.
Of course, the single best way to manage the problem is to keep the complexity down to a manageable level where the bus factor becomes negligible since other professionals can quickly step in should the need arise.
Several key points here are:
- Don’t keep all eggs in one basket. Spread knowledge and responsibility among multiple individuals.
- Need at least several individuals who are fully knowledgeable of all aspects of the system. Or are at least capable of fully coming up to speed on all aspects if they are needed in a pinch or a crunch.
- Need a credible technology succession plan for how to cope with unexpected losses or departures.
Special risk for medical systems where human life is at risk
Medical systems are still relatively simple, where complexity is not so much of an issue, but as medical systems get more and more complex, coupled with potential interactions between systems, plus the potential and risk of AI, complexity will eventually rear its ugly head.
Beyond bugs which merely annoy people, human lives are at risk with medical systems.
In addition to risk to life and limb, quality of life is also at risk.
Examples of loss of control
Here are some examples of complex systems where staff lost control and were unable to successfully operate them. The details are well known but beyond the scope of this paper.
- Apollo 13 lunar mission.
- Shuttle Challenger loss.
- Chernobyl nuclear plant disaster.
- Three Mile Island nuclear plant accident.
- HealthCare.gov.
- Titanic.
Or from the domain of fiction:
- HAL 9000 AI computer in 2001: A Space Odyssey.
- Skynet AI network in Terminator.
Great success with rockets and space missions
As a general proposition, our greatest successes at maximizing control and minimizing loss of control have been in rockets and space missions.
Yes, indeed, rockets have produced some of our more spectacular failures, but the fact that we have been as successful as we have only illustrates both the difficulty of success and our ability to achieve it when we marshal sufficient focus, discipline, and resources. And sufficient time is required: delays are not only common and to be expected, but a necessary aspect of such efforts.
And these efforts only serve to highlight the challenge of sticking to techniques and methods which are required for success.
The Coming Software Apocalypse
Some key points about the complexity of computer software systems from a September 2017 article by James Somers in The Atlantic, entitled “The Coming Software Apocalypse” and subtitled “A small group of programmers wants to change how we code — before catastrophe strikes.”
Some excerpts:
- “When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years.
- “The problem,” says MIT professor Nancy Leveson, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”
- “The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.”
- “The complexity,” as Leveson puts it, “is invisible to the eye.”
- “all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.”
- “The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.”
- “You just cannot anticipate all these things.”
- “basically people are playing computer inside their head.”
- “So the students who did well — in fact the only ones who survived at all — were those who could step through that text one instruction at a time in their head, thinking the way a computer would, trying to keep track of every intermediate calculation.”
- “The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.”
- “becoming very, very complicated.”
- “model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules.”
- “Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
- “The 80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds of and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.””
- “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”
- “model-based design, sometimes known as model-driven engineering, or MDE”
- “We already know how to make complex software reliable, but in so many places, we’re choosing not to.”
- “all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.”
- “some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
- “An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.”
- “code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,””
- “Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do — and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.”
- “Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell”
- “programmers aren’t aware — or don’t believe — that math can help them handle complexity. Complexity is the biggest challenge for programmers.”
- “This code has created a level of complexity that is entirely new. And it has made possible a new kind of failure.”
- “Code will be put in charge of hundreds of millions of lives on the road and it has to work.”
- “Computing is fundamentally invisible,” Gérard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”
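Several of the quotes above stress exhaustive rather than merely thorough checking. As a much-simplified sketch of that spirit, not TLA+ itself, here is brute-force exploration of every reachable state of a toy two-process mutual-exclusion model, asserting that the two processes are never in the critical section at once.

```python
from collections import deque

# Toy model: each process cycles idle -> want -> crit -> idle, and may
# enter crit only if the other process is not already in crit.

def successors(state):
    """Yield all successor states under interleaving semantics."""
    for me in (0, 1):
        other = state[1 - me]
        nxt = {"idle": "want",
               "want": "crit" if other != "crit" else None,
               "crit": "idle"}[state[me]]
        if nxt is not None:
            s = list(state)
            s[me] = nxt
            yield tuple(s)

def check_mutual_exclusion():
    """Visit EVERY reachable state and check the safety invariant in each."""
    start = ("idle", "idle")
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        assert state != ("crit", "crit"), f"mutual exclusion violated in {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

print(f"explored {check_mutual_exclusion()} states; invariant holds in all")
```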
Complexity of the problem vs. complexity of the solution
We should be extremely careful not to confuse the complexity of a problem with the complexity of a solution. The two may be tightly linked, but not necessarily.
And there are commonly any number of potential solutions for a given problem, each of which has different qualities and complexity characteristics.
Sure, a complex problem may indeed require a complex solution, but a superficial analysis won’t necessarily provide the correct evaluation of that proposition. Much deeper analysis, insight, intuition, and creativity may be required.
There are four possibilities, a two-by-two matrix, for the combinations of complexity of the problem and the (chosen or candidate) solution:
- Simple problem, simple solution.
- Simple problem, complex solution.
- Complex problem, simple solution.
- Complex problem, complex solution.
And sometimes the superficial evaluation of a problem is an illusion:
- A seemingly simple problem may be far more complex than first envisioned.
- A seemingly complex problem may be dramatically simplified using novel and insightful techniques.
And sometimes our initial forecast of a solution can be an illusion:
- The proposed simple solution may have a lot of hidden complexity that we overlooked or were not aware of.
- A proposed complex solution may be radically simplified with a little more thought, insight, and creativity.
Beware when complexity of the solution exceeds complexity of the problem
It is all too easy to lose sight of the original problem when we are focused heads-down on the solution.
The solution doesn’t necessarily embody the original problem.
Rather, any solution embodies an approximation of the original problem, a model of the problem.
The issue is that this model of the original problem may be more generalized than the specific nature of the original problem, so that the solution may really be solving a more complex problem than the original problem.
That’s fine if the more generalized solution is somehow simpler and more elegant than a more particularized solution, or if there is some other relevant motive, but too frequently more generalized solutions have a habit of being more complicated and hence more complex.
The result can be that the implementers of this chosen solution end up having to deal with a lot more complexity than the original problem actually required.
Again, if the more generalized solution gives you some special benefit when compared to a more particularized solution, that’s great, but if the extra cost and extra effort are beyond the bounds of reason, that is not so good.
In short, system designers and managers need to be very careful when generalizing from a specific problem.
Solutions to complexity and cognitive overload
Ultimately, there is no single magic-bullet solution to complexity and cognitive overload.
Yes, simplicity is a solution, but presumably you only have to deal with complexity because you were unable to come up with a simple solution in the first place.
At best, as mentioned in the introduction, there is no magic silver bullet, just a hodgepodge of techniques and approaches that can help to moderate, mitigate, minimize, manage, and cope with complexity and cognitive overload and their downstream effects.
That hodgepodge includes:
- A chief architect, focused first and foremost on conceptual integrity, coherence, and elegance. Someone who is less likely to allow complexity and cognitive overload to get out of control in the first place.
- Simplify the original problem as much as humanly possible at the get-go.
- Consider a wider range of alternative solutions.
- Simplify the initial solution as much as humanly possible.
- Belated efforts to simplify the solution. Doing this after the fact can be expensive, error-prone, and less likely to succeed.
- Belated efforts to simplify the original problem. Maybe (read: usually) initial efforts were overly ambitious.
- Focus on modularity of the solution. Easy to say but hard to do. Requires a level of technical and managerial discipline that is usually beyond the reach of average projects.
- Focus on testing early, before the design solution is committed. Degree of difficulty of testing should be a criterion for choosing between alternative solutions.
- Redesign and retrofit components, modules, and even entire subsystems when testing or using the system becomes problematic.
- Use commodity components, modules, subsystems, and services whenever possible to capitalize on known complexity and failure characteristics rather than introduce fresh uncertainty over complexity and failure characteristics.
- More appropriate staffing. Selecting the right people makes all the difference. Raw skill, raw experience, raw education, and even raw track record are not necessarily the best indicators of the kind of individual contributors and technical and nontechnical managers who are needed to master complexity for a particular project.
- Better education about complexity, cognitive overload, and their consequences.
- Better professional training about complexity, cognitive overload, and their consequences.
- Staff diversity. Different perspectives can help remove blinders.
- Tools to monitor, measure, characterize, and visualize the complexity and cognitive load of a system.
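As a modest illustration of that last item, even a crude homegrown metric can begin to make complexity visible. Below is a minimal sketch in Python that treats import counts as a rough proxy for inter-module coupling; the function names and the threshold are illustrative assumptions, not an established metric.

```python
# A minimal sketch: score each Python module in a project by a crude
# coupling metric (its count of import statements). The threshold and
# names here are illustrative assumptions, not an established metric.
import ast
from pathlib import Path

def score_coupling(source: str) -> int:
    """Count import statements as a rough proxy for inter-module coupling."""
    tree = ast.parse(source)
    return sum(isinstance(node, (ast.Import, ast.ImportFrom))
               for node in ast.walk(tree))

def report(project_root: str, threshold: int = 10) -> None:
    """Flag modules whose coupling score exceeds the (arbitrary) threshold."""
    for path in sorted(Path(project_root).rglob("*.py")):
        try:
            score = score_coupling(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that do not parse
        flag = "  <-- review for excess coupling" if score > threshold else ""
        print(f"{score:4d}  {path}{flag}")

if __name__ == "__main__":
    report(".")
```

Real tooling would measure far more than this (cyclomatic complexity, fan-in and fan-out, dependency graphs), but even a crude score makes otherwise invisible complexity at least partially visible and measurable.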
Ignorance of complexity, cognitive overload, and their effects
It’s rather amazing, after all of these years and decades, how few professionals have a significant grasp of the nature of complexity, cognitive overload, and their effects and consequences.
In fact, it’s actually mind-boggling that this state of affairs exists here in the 21st century, despite all of the amazing science and technology that surrounds us.
We need better education and better professional training.
Need for education and training on complexity and cognitive overload
It would seem rather obvious that the need for better education and professional training about complexity, cognitive overload, and their downstream effects would be widely recognized and even demanded, but the sad reality is that it is not.
In fact, if anything, we seem headed in the opposite direction.
Rather than focusing on consequences and prerequisites, we take a cavalier Just Do It approach to so many systems.
We should be trying to engineer systems, but it’s more common to focus on coding and hackathons. And endless refactoring of bad code.
Sure, education and professional training programs focused on complexity, cognitive overload, and their consequences could readily be devised, but the simple fact is that there is very little demand for them. Or even interest.
Modularity
One large, monolithic system is much less desirable than a system architecture that is modular, composed of a significant number of smaller modules and subsystems.
Each module should:
- Be reasonably self-contained.
- Have very simple, clear, and well-defined interfaces to other modules.
- Be relatively isolated from other modules.
A modular architecture should be based on:
- Smaller subsystems.
- Greater isolation between subsystems.
- Commodity modules.
The principle behind commodity modules is reuse, which results in modules that are:
- Cheaper. Economy of scale.
- More predictable. Well characterized from extensive experience.
- Longer history. More of the bugs and performance and capacity issues worked out.
The longer history of experience with a commodity module results in:
- Proven use. It works. Less risk of failure and need to test.
- Failure rate is known.
- Failure consequences are known.
- Bugs have been worked out. Maybe not all of them, but more of them.
- Availability of staff with enough knowledge about the technology.
In short, modules, commodity modules, and modular architecture are a big win when trying to reduce the overall complexity of a system.
And this reduces the cognitive overload of comprehending the design and implementation of the system.
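To make the interface point concrete, here is a minimal sketch in Python of a module boundary expressed as a small, explicit interface. The names (Storage, InMemoryStorage, archive_report) are purely illustrative assumptions.

```python
# A minimal sketch of a narrow, well-defined module boundary. The names
# are illustrative assumptions only.
from typing import Protocol

class Storage(Protocol):
    """The entire contract that other modules may depend on: two methods."""
    def save(self, key: str, value: bytes) -> None: ...
    def load(self, key: str) -> bytes: ...

class InMemoryStorage:
    """One self-contained implementation; callers never see its internals."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def save(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def load(self, key: str) -> bytes:
        return self._data[key]

def archive_report(store: Storage, report: str) -> None:
    """Client code depends only on the narrow Storage interface, never on a
    concrete module, so implementations can be swapped without ripples."""
    store.save("latest-report", report.encode("utf-8"))

archive_report(InMemoryStorage(), "all systems nominal")
```

The narrow contract is what buys the isolation: a commodity or retrofitted implementation can be dropped in behind the interface without a ripple of changes across the rest of the system.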
Staffing to reduce complexity and cognitive overload
A mediocre staffing effort can result in excessive and uncontrolled complexity and cognitive overload.
There are three levels of staffing, all of which matter greatly when seeking to control complexity and cognitive overload:
- Technical contributors. The individuals who actually do the technical work, as well as technical architects.
- Technical management. Directly supervising the technical contributors.
- Nontechnical management. Influencing the resources available to the project, with some degree of control over the definition of the problem to be solved.
Another dimension to staffing is functional roles:
- Developers.
- Product management. The individuals who exercise the most control over the definition of the problem to be solved.
- Quality assurance. Testing.
- Documentation.
Each of those functional roles has the same three levels listed previously.
The two main aspects of staffing are:
- Team organization.
- Selection process for team members.
A team is best organized for minimizing complexity and cognitive overload if:
- It is streamlined.
- It emphasizes smaller size.
- It places a strong emphasis on agility.
- It places a strong emphasis on conceptual integrity, coherence, and elegance.
Selection of team members needs to focus on meeting the real needs of the specific task rather than bureaucratic requirements or over-generalized, commodity, interchangeable staff members.
As indicated in an earlier section, complexity and cognitive overload impact staffing, so the level of complexity and cognitive overload that can be managed will vary across:
- Elite individuals.
- Well above average individuals.
- Somewhat above average individuals.
- Average individuals.
- Somewhat below average individuals.
- Well below average individuals.
- Elite groups.
- Well above average groups.
- Somewhat above average groups.
- Average groups.
- Somewhat below average groups.
- Well below average groups.
The lesson there is that if you absolutely must field a more complex system that risks cognitive overload, then you have to accept the cost of recruiting and appropriately resourcing above average and elite individuals and groups.
Diversity of staff
Diversity of staff can impact how a team confronts and addresses complexity, cognitive overload, and their effects. Different perspectives can help remove blinders that keep people from seeing things outside the immediate focus of their task.
But, diversity is much more easily talked about than accomplished.
Closed vs. open systems
Any particular system will tend to be either:
- Closed. A fixed set of known components. Complexity is fixed or bounded.
- Open. A variable set of components, only some of which are known when the system is deployed, with additional components arriving or departing as operation of the system evolves. Complexity is variable and even unbounded.
A closed system may also have dynamic components in addition to static components. Some of those dynamic components may be mandatory and always present, while others may be optional, so that the system must be able to run without them and react in a reasonable manner when they are not configured. This can add significant complexity and cognitive overload, even though it may be very well-intentioned and necessary.
An open system includes, by definition, unknown components. There may be a significant number of known components for the base system, but dynamic components can come from anywhere at any time. The total number and complexity of dynamic components is both unknown and unknowable. This makes reasoning about overall system complexity and cognitive overload especially problematic. Specialized monitoring and system management tools are needed for such open systems.
In truth, specialized monitoring and system management tools are needed for all systems.
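To make the closed-but-dynamic case concrete, here is a minimal sketch in Python of a system with an optional dynamic component, one that runs reasonably whether or not the component is configured. The registry and component names are hypothetical.

```python
# A minimal sketch of optional dynamic components: the system runs whether
# or not a component is configured, and degrades gracefully when it is not.
# The registry and component names are hypothetical.
from typing import Callable, Optional

_components: dict[str, Callable[[str], str]] = {}

def register(name: str, handler: Callable[[str], str]) -> None:
    """Dynamic components announce themselves at configuration time."""
    _components[name] = handler

def process(message: str) -> str:
    """Run the mandatory path; apply the optional component only if present."""
    handler: Optional[Callable[[str], str]] = _components.get("enricher")
    if handler is None:
        return message  # reasonable behavior when the component is absent
    return handler(message)

# The optional component may or may not be deployed:
register("enricher", lambda msg: msg.upper())
print(process("hello"))  # "HELLO" if configured, "hello" otherwise
```

A fully open system multiplies this pattern by an unknown and unbounded number of components arriving from anywhere, which is exactly why the monitoring and management tooling matters so much.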
Spikes and peaks of demand and interactions
One great uncertainty for the complexity of any system is how it will behave under extremes of load.
There are three forms of excessive load:
- Peaks, which are relatively predictable based on the calendar and clock.
- Spikes, which are inherently unpredictable, seeming to come out of nowhere. They may be due to unpredictable external events, or may be just a random coincidence, such as a classic perfect storm.
- Denial of service (DoS) attacks. Hacking. To be discussed in the next section.
Peaks can occur at various time scales:
- Time of day. One or more hours, or even shorter intervals, when demand and load tend to be substantially higher than the rest of the day.
- Day of week. Some days tend to be busier than others.
- Day of month. There may be some special days of the month.
- Seasonal demand. There may be seasons or intervals of time around holidays when demand is significantly higher.
Complexity presents a special challenge when dealing with peaks and spikes.
The response of the system to excessive load may be extremely nonlinear during peaks and spikes. And very unpredictable.
Designing systems to be responsive during spikes and peaks is always a special challenge. And introduces whole new levels of complexity and opportunities for cognitive overload.
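One common defensive pattern, sketched below under assumed names, is explicit load shedding: above a capacity threshold, reject requests cheaply and predictably rather than letting response times degrade nonlinearly. This is a minimal sketch in Python, not a production admission controller.

```python
# A minimal load-shedding sketch: above a fixed in-flight threshold, reject
# requests cheaply and predictably instead of letting response times degrade
# nonlinearly. The threshold and names are illustrative assumptions.
import threading

class LoadShedder:
    def __init__(self, max_in_flight: int) -> None:
        self._lock = threading.Lock()
        self._in_flight = 0
        self._max = max_in_flight

    def try_acquire(self) -> bool:
        with self._lock:
            if self._in_flight >= self._max:
                return False  # shed: fast, predictable rejection
            self._in_flight += 1
            return True

    def release(self) -> None:
        with self._lock:
            self._in_flight -= 1

shedder = LoadShedder(max_in_flight=100)

def handle_request(payload: str) -> str:
    if not shedder.try_acquire():
        return "503 Service Unavailable (shedding load)"
    try:
        return f"200 OK: processed {payload}"
    finally:
        shedder.release()
```

The design choice is deliberate: a fast, predictable rejection under overload is usually far less damaging than the nonlinear collapse that unbounded queuing invites.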
Denial of service (DoS) attacks
Systems can be hacked. One special form of hacking is the denial of service attack, or DoS.
The goal of a DoS attack is to present a system with such excessive load that the system is crippled, on the theory that most systems are poorly designed for spikes in demand.
A special form of DoS attack is the distributed denial of service, or DDoS, attack, in which a large number of computers, often called bots or a botnet, simultaneously attack the target system.
The bottom line is that a DoS or DDoS attack looks somewhat like a peak or spike in demand.
This falls into the category of cybersecurity, which is beyond the scope of this paper.
But if a system is designed properly and gracefully handles peaks and spikes in demand, DoS and DDoS attacks are much less likely to cripple it.
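To illustrate why capacity designed for spikes also blunts crude attacks, here is a minimal per-client token-bucket sketch in Python. The rates and names are illustrative assumptions, and this is nowhere near a complete DoS or DDoS defense.

```python
# A minimal per-client token-bucket sketch: each client is throttled to a
# sustainable rate, so a hostile flood is bounded much like any other spike.
# Rates and names are illustrative assumptions, not a complete defense.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate: float, burst: float) -> None:
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# 5 requests per second sustained, bursts of up to 10, per client:
buckets: defaultdict[str, TokenBucket] = defaultdict(
    lambda: TokenBucket(rate=5.0, burst=10.0))

def admit(client_ip: str) -> bool:
    """Throttle each client individually; floods are bounded like spikes."""
    return buckets[client_ip].allow()
```

A real DDoS defense involves much more (upstream filtering, anycast, dedicated scrubbing services), but the overlap with ordinary spike handling is genuine.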
Complex adaptive systems (CAS)
Complex adaptive systems or CAS are systems in which the components and interactions are nonlinear, dynamic, and constantly changing due to feedback effects. They are also very sensitive to initial conditions and ever-changing environmental effects.
The net effect is that the behavior of a CAS is very unpredictable.
Hence, the complexity and cognitive load of a CAS are essentially unknowable.
The maddening thing about a CAS is that its behavior can appear very predictable for extended periods of time, but then, without warning, it can suddenly change or begin evolving in some entirely unpredictable way.
That’s the bad news.
The good news is that the vast majority of the systems that we design are not CAS.
The really bad news is that in the future, especially with AI and open systems, more of our systems will be CAS.
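The sensitivity to initial conditions is easy to demonstrate. The classic logistic map, used below purely as an illustration, shows two trajectories that start a hair apart becoming completely uncorrelated within a few dozen steps. A real CAS involves much more than this kind of simple chaos, but chaos alone already makes the point about unpredictability.

```python
# The logistic map x' = r * x * (1 - x) with r = 4.0 is chaotic: two
# trajectories that differ in the 8th decimal place become completely
# uncorrelated within a few dozen steps. Purely an illustration of
# sensitivity to initial conditions, not a model of any real system.
def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.20000000, 50)
b = logistic_trajectory(0.20000001, 50)  # differs in the 8th decimal place
for step in (0, 10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```

By around step 30 the two runs bear no resemblance to each other, which is the CAS experience in miniature: long stretches of apparent predictability, then divergence without warning.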
Ego and pride
Human nature. What can be done about it?
Well, there is always something to try, some way to cope with human nature.
But, all too commonly, we end up in a Sisyphean situation. Like Sisyphus, we strain to push a large boulder up a hill, and just as we reach the top and start patting ourselves on the back, it gets away from us and rolls back down to the bottom, where we start over. Rinse and repeat. We’ve seen this movie. We know how it ends. But still we replay it. We can’t stop replaying it. It’s our nature.
Why?
Our ego. And our pride.
We can’t help ourselves.
Rolling that boulder up the hill provides us with such tremendous psychic satisfaction and sense of accomplishment that we don’t care what might inevitably come next.
Creating a complex system makes us feel like a god, a master of the universe. Consequences be damned. A classic pact with the devil.
So, not only do we fail to learn our lessons, but we go far out of our way to move on to even larger disasters.
Larger egos. And greater pride. They are our Holy Grail.
Support from management
Much of the burden for controlling complexity and cognitive overload rests on the shoulders of technical staff, but support from management is essential.
And that support must be deep, broad, consistent, and sustained to have the desired effect.
Parts of that support can be emotional, intellectual, or technical, but a fair chunk of it must be financial.
Management commitment to fighting complexity and cognitive overload has to show up in management’s budget.
Budget for fighting complexity and cognitive overload
Management must budget sufficient resources for the eternal battle against the encroachment of complexity and cognitive overload.
But how much money, staff, and other resources are needed?
Nobody really knows.
In truth, time is usually the more critical factor: time to do a more thoughtful system design, time to pursue conceptual integrity, coherence, and elegance, time to fully test the system, and time to develop better tests. Overall, it simply takes time to get it all right.
That said, raw time is not the simple answer. Without the appropriate staff, all the time in the world won’t deliver the kind of conceptual integrity, coherence, and modularity needed to minimize complexity and cognitive overload.
Sometimes management just doesn’t want to pay top dollar for the more senior staff needed as architects and senior technical contributors, plus the necessary support staff to allow senior technical staff to focus on the critical tasks on which conceptual integrity, coherence, and modularity depend.
Or sometimes management is willing to pay, or at least say they are willing, but the overall organization doesn’t have the level of appeal to attract the staff who are needed.
Competition is fierce for top talent, so it is quite possible that it will be very difficult if not impossible to attract and retain the necessary talent, even if management budgets for them.
In truth, sometimes an organization is overreaching and trying to pursue a project that is beyond their ability. Sometimes that works, but commonly it doesn’t.
In any case, without sufficient resources in the budget, ambitious projects will be crippled.
Who should drive efforts to limit complexity and cognitive overload?
Should the technical staff be responsible for promoting and sustaining efforts to limit and reduce complexity and cognitive overload?
Sure, in an ideal world. But they can’t do it alone.
Should management be responsible?
To at least some degree. Without their support, and sometimes through their active complicity, the efforts of the technical staff will be undermined.
Should executive staff be responsible?
They don’t have much to say about the technical work, but management needs their support. They need to ensure that the budget and resources will be available for such efforts. And, most critically, they need to be diligent and disciplined about ensuring that these efforts have a high enough priority, rather than being a mere afterthought or something that is addressed only after disaster strikes.
The efforts of all three are needed.
All three must drive the process, each in their own way.
Who should own efforts to limit complexity and cognitive overload?
It’s all well and good to have lots of people driving efforts to limit complexity and cognitive overload, but somebody has to actually own the effort.
If everybody is responsible, then nobody is responsible.
Somebody has to be the owner: the single individual to whom everybody looks for guidance at moments of extreme stress, when laser-focused guidance is most urgently needed, as well as the individual who selects the target and sets the vision and tone for the effort.
I would say that the technical staff ultimately own the effort. They are the ones who are doing the actual work, and they are the ones who have the best grasp of all of the technical nuances.
Unfortunately, the technical staff is frequently lost down in the weeds and under a lot of time pressure so that they aren’t able to do the best job of controlling complexity and cognitive overload that they would like.
Ultimately, I would say that the chief architect of each project is the main owner of the effort to control complexity and cognitive overload. If they don’t own the problem, there is little chance that complexity and cognitive overload will be controlled.
Calling all Luddites!
Ugh. Let’s just hope it doesn’t come to that, but the rise of modern-day anti-tech, anti-complexity Luddites is a very real risk.
Some people may revel in complexity and cognitive overload, but for many it is a source of oppression. Or at least a source of annoyance.
We need to remain alert for any significant movement of the anxiety meter from minor annoyance towards unbearable oppression.
We’re not in terrible shape today, but there are already many warning signs and it wouldn’t take much of a push to send us sliding unstoppably down the slippery slope to that unbearable oppression.
What we need to be doing is marshalling efforts to control and reduce complexity.
But that’s not a priority today, yet.
Conclusion
Complexity and cognitive overload are real challenges. And they are rising.
It takes a lot of effort, energy, attention, focus, discipline, skill, persistence, and support from management to keep complexity and cognitive overload in check. And that includes sufficient budget — money, staff, and other resources. And controlling complexity and cognitive overload must be a clear and explicit high priority for management, including top executives.
Unfortunately, many of those essential ingredients are in woefully short supply all too often.
The situation is likely to get a lot worse before it gets any better.
There is no single, universal magic silver bullet to cure the problem, other than a lot of good old-fashioned hard work. And very smart work.