What Is Algorithmic Complexity (or Computational Complexity) and Big-O Notation?

  1. The visual complexity of the source code for an algorithm, including its length, the nesting of its logic, obscure intentions behind logic and formulas, and a lack of comments or documentation. Difficulty reading and comprehending the source code can hide or obscure the true, mathematical algorithmic complexity.
  2. The degree to which the resource requirements of an algorithm are unknown or unclear because it depends on external functions, external data such as databases, or invocations of networked services whose algorithmic complexity and resource requirements may not be known with any certainty, or may be highly variable.
  3. The degree of uncertainty for an algorithm due to latency (response time) of external services, which may be delayed by arbitrary amounts of time depending on the load on the external systems hosting those services.

Iteration and looping

  1. Processing a large amount of input data.
  2. Performing complex computations with many operations.
  3. Generating a large amount of output data. The sketch below shows how loop structure translates directly into running time.
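
To make the connection between looping and running time concrete, here is a minimal Python sketch (the function names and data are illustrative, not from the original article). A single loop over n items performs work proportional to n, while nesting a second loop inside it multiplies the work by another factor of n:

```python
def total(values):
    # One pass over the input: the loop body runs len(values) times,
    # so the running time grows linearly with input size, O(n).
    s = 0
    for v in values:
        s += v
    return s

def count_duplicate_pairs(values):
    # Two nested passes: the inner body runs roughly n * n times,
    # so the running time grows quadratically with input size, O(n^2).
    n = len(values)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if values[i] == values[j]:
                count += 1
    return count
```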

Functions and function calls

  1. To use a common code sequence in more than one place in an algorithm without having to literally repeat the full code sequence every time it is used, as in the sketch following this list.
  2. To partition the complexity of an algorithm in a modular manner so that each modular component is less cluttered, easier to follow, and easier to manage and maintain.
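
As a minimal sketch of the first purpose (the function name and the specific cleanup steps are hypothetical examples, not from the article), a common code sequence is written once as a function and then invoked from several places:

```python
def normalize(text):
    # Common code sequence, written once and reused everywhere:
    # trim whitespace, lowercase, and collapse runs of spaces.
    return " ".join(text.strip().lower().split())

# Without the function, each of these call sites would have to
# repeat the full strip/lower/split/join sequence literally.
name = normalize("  Ada   Lovelace ")
city = normalize("LONDON")
note = normalize("  First   programmer  ")
```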

Subroutines

Function recursion
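
A minimal sketch of recursion (the choice of Fibonacci here is illustrative): the function invokes itself on smaller inputs until it reaches a trivial base case.

```python
def fibonacci(n):
    # Base cases stop the recursion.
    if n < 2:
        return n
    # Recursive case: the function invokes itself twice on smaller
    # inputs. Elegant, but this naive form recomputes the same values
    # repeatedly and takes exponential time; caching (memoization)
    # would reduce it to linear.
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # 55
```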

Quantum parallelism

Hybrid quantum/classical algorithms

Secondary storage and external services

  1. If all data cannot fit or be kept in memory, it may be necessary to access secondary storage such as spinning disks, which operate much more slowly than memory.
  2. Accessing external services, such as across a network or the Internet, can dramatically impact performance.

Input size

  1. For algorithms processing a variable amount of input, the input size is the count of items or records of input data, such as the length of a list being searched or sorted. The data may be directly input or may come from an external source such as a database.
  2. For algorithms with only a fixed amount of input data, such as a set of parameters, the input size must be derived from the input parameters, as for an algorithm which computes the factorial function or factors a large number. The actual value of an input parameter may be the effective input size. The sketch below works through both cases.
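
Here is a minimal sketch of both cases (the function names are illustrative). For the search, the input size is the count of items; for the factorial, the single parameter's value is the effective input size, since the loop runs that many times. (Strictly speaking, complexity theory often measures the number of bits needed to represent the value, but the value itself is a reasonable first approximation.)

```python
def find(items, target):
    # Case 1: input size is the count of items. The loop may run once
    # per item, so running time grows with len(items).
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def factorial(n):
    # Case 2: a single fixed parameter, but the loop executes n times,
    # so the *value* of n is the effective input size, not the
    # parameter count (which is always 1).
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```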

Big-O notation

  • The general form is O(formula), where the formula expresses how resource consumption grows as a function of the input size n; for example, O(n²).
  • O(1) or O(k). Constant. Great, easy, fastest, trivial, simple.
  • O(log log n). Double logarithmic. Relatively trivial except for very large input.
  • O(log n). Logarithmic. Non-trivial but very manageable.
  • O(sqrt(n)). Square root, a quadratic speedup over linear. Faster than linear, but not as fast as logarithmic. For example, Grover’s algorithm for searching unstructured data on a quantum computer.
  • O(n). Linear. Still manageable and predictable, but not optimal for large amounts of data.
  • O(n log n). Linearithmic. Not quite as good as linear, but not so bad.
  • O(n²). Quadratic. Now it’s getting somewhat expensive, but not terrible.
  • O(n^k). Polynomial. Worse than quadratic (for k > 2), but not as bad as exponential.
  • O(2^n) or O(k^n). Exponential. Try to avoid this. This is bad, except for very small amounts of data.
  • O(n!). Factorial. Only if you really need it and don’t need real-time performance. Or your input data is very small.
  • Note: log n by itself does not say whether the logarithm is natural (base e), base 10, or base 2. Technically the distinction between the three is not critical since they differ only by a constant factor and Big-O notation is a rough approximation rather than mathematically precise. Context usually disambiguates, and in algorithm analysis base 2 is the most common default, since so many algorithms work by repeatedly halving the data. Use ln n when you explicitly intend a natural logarithm, and log2 n when you explicitly intend a base 2 logarithm.
  • For an algorithm of quadratic complexity, O(n²), an input size of n = 20 implies about 20² = 400 operations.
  • For an algorithm of exponential complexity, O(2^n), an input size of n = 20 implies about 2²⁰ operations, or roughly one million. The sketch below tabulates how these growth rates compare.
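
To put rough numbers on these growth rates, here is a small illustrative Python sketch that tabulates a few of them side by side. Note how 2^n dwarfs everything else well before n reaches 40:

```python
import math

# Rough operation counts for common Big-O growth rates. Constant
# factors are ignored, exactly as Big-O notation ignores them.
for n in (10, 20, 40):
    print(f"n={n:>3}  log2(n)={math.log2(n):5.1f}  "
          f"n*log2(n)={n * math.log2(n):7.0f}  "
          f"n^2={n ** 2:5d}  2^n={2 ** n:14,d}")
```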

Polynomial complexity

  1. O(n²)
  2. O(n³)
  3. O(n⁴)
  4. O(n³ + m²)
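
Quadratic complexity typically comes from two nested loops over the same input, as in this illustrative sketch (the distance computation is my example, not the article's). Each additional level of nesting multiplies the work by another factor of n, giving O(n³), O(n⁴), and so on:

```python
def all_pairs_distances(points):
    # Two nested loops over the same n points: the inner expression
    # is evaluated n * n times, the signature of O(n^2) complexity.
    return [[abs(a - b) for b in points] for a in points]

# A third nested loop over the points would make this O(n^3),
# a fourth would make it O(n^4), and so on.
```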

Subpolynomial complexity

Sublinear complexity

  1. O(1) or O(k). Constant. Great, easy, fastest, trivial, simple.
  2. O(log log n). Double logarithmic. Relatively trivial except for very large input.
  3. O(log n). Logarithmic. Non-trivial but very manageable. Binary search, sketched after this list, is the classic example.
  4. O(sqrt(n)). Square root, a quadratic speedup over linear. Faster than linear, but not as fast as logarithmic. For example, Grover’s algorithm for searching unstructured data on a quantum computer.
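
Binary search is the classic O(log n) example mentioned in point 3: each comparison halves the remaining search range, so even a million sorted items require only about 20 comparisons. A minimal sketch (illustrative):

```python
def binary_search(sorted_items, target):
    # Each iteration halves the remaining range, so the loop runs
    # about log2(n) times: logarithmic, i.e., sublinear, complexity.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```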

Exponential complexity

  1. O(2^n)
  2. O(3^n)
  3. O(4^n)
  4. O(3^n · 2^n)
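
Exponential complexity typically arises when an algorithm must consider every subset of its input, since n items have 2^n subsets. A minimal illustrative sketch:

```python
def all_subsets(items):
    # n items have 2^n subsets, so both the time and the space needed
    # to enumerate them grow exponentially: O(2^n).
    subsets = [[]]
    for item in items:
        # Each new item doubles the number of subsets.
        subsets += [s + [item] for s in subsets]
    return subsets

print(len(all_subsets(range(10))))  # 1024 subsets for just 10 items
```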

Polynomial complexity vs. exponential complexity

Brute force

  1. The data size or prospective solution count is small enough that the time and resources to evaluate all possible solutions are reasonably small or acceptable, as in the subset-sum sketch following this list.
  2. There is no known algorithm to achieve a solution other than exhaustive processing.
  3. Known algorithms are not guaranteed to achieve a sufficiently optimal or accurate solution.
  4. A quantum computer can, in principle, use quantum parallelism to evaluate all possible solutions in parallel in a single step.
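
As a concrete classical illustration of brute force (the subset-sum problem chosen here is my example, not the article's): exhaustively test every one of the 2^n subsets of the input until one matches the target.

```python
from itertools import combinations

def subset_sum_brute_force(values, target):
    # Exhaustively test every possible subset: 2^n candidates for
    # n values, so this is only practical for small inputs.
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum_brute_force([3, 9, 8, 4, 5, 7], 15))  # (8, 7)
```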

Random and statistical methods

Space complexity

  1. An algorithm might generate a large amount of data, even for a small amount of input data.
  2. Even a relatively simple algorithm can generate a large amount of data.
  3. Even if an algorithm doesn’t generate a large amount of data for its final results, a significant amount of data may be needed for intermediate results, which may be discarded when the algorithm completes but still consume resources while the algorithm is running. The sketch following this list contrasts two such approaches.
  4. Space or memory includes both main memory (commonly RAM, random access memory) and storage (commonly spinning disk or flash memory). Memory tends to be transient while storage tends to be permanent or persistent, although those distinctions do not always hold.
  5. Some algorithms may use very little space, while others, especially those which are data-intensive, such as data analysis, logging, real-time data, and databases, can require dramatic amounts of space.
  6. Memory and storage requirements are frequently an overlooked aspect of the design, development, evaluation, and deployment of algorithms and applications. Deployment of a poorly-characterized algorithm can lead to very unpleasant surprises.
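
To illustrate point 3, here is a hedged sketch contrasting two ways of computing the same average (the function names are mine). Both take O(n) time, but the first needs O(n) space for intermediate results that are discarded as soon as the result is returned, while the second needs only O(1) space:

```python
def mean_materialized(values):
    # O(n) space: builds a full list of intermediate prefix sums,
    # all of which are discarded once the final result is returned.
    prefix_sums = []
    total = 0
    for v in values:
        total += v
        prefix_sums.append(total)
    return prefix_sums[-1] / len(values)

def mean_streaming(values):
    # O(1) space: keeps only a running total and count, no matter
    # how large the input is.
    total = 0
    count = 0
    for v in values:
        total += v
        count += 1
    return total / count
```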

Combinatorial explosion

  1. The problem may be ripe for implementation on a quantum computer where quantum parallelism can potentially avert the combinatorial explosion.
  2. Avoid problems, algorithms, or applications that involve a combinatorial explosion.
  3. Try to simplify such problems.
  4. Be prepared to allocate sufficient computing resources to handle the combinatorial explosion. The sketch after this list gives a feel for how fast the counts grow.
  5. For some applications, even though the logic of the algorithm is known, it may be necessary to hold off until hardware advances to the point where the combinatorial explosion can be handled more readily.
  6. Consider whether a first-fit solution might be acceptable and cost a lot less than an optimal best-fit solution.
  7. Consider sub-optimal but acceptable heuristics which dramatically reduce the algorithmic complexity.
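
To get a feel for how fast a combinatorial explosion grows, this illustrative sketch counts the possible orderings (permutations) of n items, the quantity that makes exhaustive approaches to problems like the travelling salesman problem hopeless beyond small inputs:

```python
import math

# n! grows so fast that exhaustively evaluating every ordering
# becomes hopeless after a few dozen items.
for n in (5, 10, 15, 20, 25):
    print(f"{n:>2} items -> {math.factorial(n):,} orderings")
```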

N-body problems and quantum computing

  • Driverless cars in a region coordinating their own movement without the use of a central traffic controller.
  • Patrons at a large performance venue wishing to exit, or to be seated, as quickly as possible, all without any central or local traffic control.
  • Protein folding.
  • Optimizing travelling salesman problems.
  • Optimizing complex business processes, particularly those which are geographically dispersed across many sites.
  • Predicting outcomes for complex adaptive systems.
  • Managing large distributed networks of Internet of Things devices when the devices may interact, without central control.
  • Simulating physical systems at the level of quantum mechanics.
  • Simulating chemical reactions.
  • Predicting the effects of drugs and other chemicals.
  • Designing new materials.
  • Evaluating the chemical properties of new battery designs.
  • Simulating large celestial systems at the cosmological level.

What are P, NP, NP-complete, and NP-hard algorithms?

What are BQP and QMA algorithms?
