Me and IBM, a Love/Hate Relationship, Confessions of an Outsider
I’ve never worked at IBM, nor ever even wanted to, but IBM has figured rather prominently at a number of points in my life, in my adult career as a software developer, in school, and even as a child, though in a rather schizophrenic Dr. Jekyll and Mr. Hyde manner: sometimes very positive, but just as often rather negative. This informal paper chronicles my own personal journey with IBM over the past 60 years, as an outsider, of course. I can’t even begin to imagine what insiders at IBM, so-called IBMers, experience through their careers.
IBM is a maddening mix of brilliance and mediocrity. We’re thrilled, awed, and inspired by the occasional Dr. Jekyll, but we’re generally disappointed and saddened by the omnipresent Mr. Hyde. IBM can be downright maddening, providing awesome inspiration one moment, but then disheartening disappointment, boredom, desperation, and despair the next, constantly blowing hot and cold, rarely at a nice even and comfortable temperature.
On occasion IBM has flashes of true brilliance, but those flashes are interspersed with extended periods of depressing mediocrity and boneheaded stupidity.
To be sure, IBM has indeed played a pivotal role in my career and the computer industry overall.
Although this informal paper was not intended to focus on quantum computing per se, quantum computing was indeed my primary motivation: to provide the background for the lens through which I perceive IBM’s contemporary push into quantum computing, relative to my past experiences and perceptions of IBM across previous generations of general-purpose computing technology. I do follow IBM’s efforts in quantum computing quite closely, but this is much more frustrating than satisfying. This informal paper should give you a better perspective as to why that is the case for me, given my background with IBM.
Caveat: This paper is not intended to be an exhaustive, comprehensive review of all of the positives and negatives of every aspect of IBM over its entire history, but simply my own perspective, based on my interests, experiences, and exposure to IBM, and on what I’ve heard and read about IBM that has informed my sense of what IBM is all about, positives and negatives alike.
Topics discussed in this paper:
- First light
- Storied history
- Overall, IBM is a very mixed bag, with some great positives but some terrible, lousy, and simply mediocre negatives
- Summary of major points about IBM
- Maddening combination of brilliance and stupidity
- IBM set a very high bar for expectations
- Data processing and computing
- 80-column punched cards, terminals, and display screens
- IBM — business machines, me, not so much
- 10% positive vs. 90% negative/neutral/mediocre, or is it 2% and 98%, or even 1% and 99%!
- Top ten or dozen or so of awesome products and features
- IBM and Feynman at Los Alamos and the Manhattan Project
- My first punched card
- IBM machines at my college
- Invention of DRAM
- Early core memory
- 7094
- System/360 series
- 8-bit bytes for characters and addressing
- First mainframe based on integrated circuits
- IBM System/360 Principles of Operation manual
- System/360 Model 91 at Princeton
- System/360 Model 75
- Historical unit record equipment at IBM headquarters on Madison Avenue in New York City
- Selectric typewriter
- 2741 terminal
- Ocean County College and Ocean County Information Network, IBM CICS
- 3270 display terminal
- Time-sharing on the System/360
- Interactive PL/I programming on RUSH-5 from Allen-Babcock Computing
- System/360 was the peak and end of line for me and IBM… until the PC
- 1130
- FAA air traffic control (ATC)
- 1620
- Clones of the IBM System/360 instruction set
- I did have one other exposure to IBM mainframes via the Wang Data Center
- Proprietary operating systems, then UNIX
- 1403 line printer
- IBM 5100 Portable Computer — IBM’s first personal computer!
- Hijacking the 2741 for use as a PDP-10 terminal
- College interviews
- RISC, 801, ROMP, and PowerPC
- Intel 386 was a missed opportunity for IBM
- IBM PC vs. IBM PS/2
- Me and my IBM PC
- My initial impressions of the IBM PC
- Toshiba laptop PC
- Wyse and Compaq
- OS/2 and Microsoft, Windows NT
- IBM PCjr
- ThinkPad
- My Lenovo laptop — heritage from the IBM ThinkPad
- Minicomputers and the Series/1
- System 38
- System/360, 370, 303X, 309X, 390, Z family
- System/370?
- The long, slow decline of IBM — 1970 to 1990
- IBM bounces back under Louis Gerstner
- IBM back in the saddle, but with clipped horns
- Deep Blue and Watson, the Watson Health debacle
- Space Shuttle avionics computer
- Relational databases, SQL, Oracle, Datastax
- Datastax
- FORTRAN high-level programming language
- FORTRAN H
- COBOL
- PL/I
- APL
- Sierra and Summit supercomputers
- IBM Quantum — what is it, really?!
- Quantum computing research — a mixed bag
- Quantum computing commercial products — a no-show
- Quantum computing engineering — a mixed bag
- Overall for quantum, IBM has done reasonably well with the science, but not so well as far as engineering of commercial products
- Maybe IBM’s quantum computing efforts will fail but they can buy their way to success, which they’ve done before
- Twilight? Quantum computers
- AI?
- Smartphones and tablets
- THINK
- IBM. Not just data, reality.
- Wild ducks
- I’ve had only four visits to IBM
- Limited exposure to IBM research
- Adjunct Professor from IBM Research
- IBM researcher at a California database conference
- IBM researchers at IEEE chapter meeting
- Another adjunct professor, guest lecturer from IBM
- Boneheaded stupidity and PhDs
- Me and bureaucracies, a match made in… hell!
- I never worked at IBM, nor ever wanted to
- Huh… IBM has no products or services for me!
- My critical mass cloud model for innovation
- Shareholder
- Right now
- Next?
- IBM has indeed played a pivotal role in my career and the computer industry overall
- Conclusions
First light
I don’t think I had ever heard of IBM until 1964, when I was 10 years old, when they made a big splash at the 1964/1965 New York World’s Fair with this very iconic giant egg-shaped pavilion. I don’t recall much, just that it had something to do with something called a computer and three letters: “I”, “B”, and “M”. Oddly, I also indelibly remembered what those three letters stood for, International Business Machines, although I couldn’t understand what that had to do with computers, at least at age 10.
I vaguely recall that IBM had this service at their pavilion where you wrote your birth date on a card and they gave you back a card printed with historical news from that date.
Storied history
IBM had a storied history. Predating even the development of electronic digital computers. Dating back to the late 19th century with Herman Hollerith and his punched cards, the 1890 census. And then providing what today we call IT for the social safety net in the Great Depression with its unit record equipment, punched card processing equipment.
Thomas J. Watson, Sr. joined the Computing-Tabulating-Recording Company (C-T-R) in 1914. C-T-R itself was formed in 1911 by the merger of Hollerith’s company, the Tabulating Machine Company, and two other companies, the International Time Recording Company and the Computing Scale Company of America.
Watson changed the cumbersome name of C-T-R to International Business Machines (IBM) in 1924.
Unit record equipment — punched card machines — were the mainstay of IBM through the Great Depression and World War II.
The closest IBM had to a computer by the end of World War II was the 405 Alphabetical Accounting Machine (or 405 tabulator) that could add and subtract and print. Feynman used these machines to perform atomic bomb calculations at Los Alamos for the Manhattan Project.
IBM’s unit record equipment, accounting machines, were world-class devices in the pre-computer era.
IBM’s first electronic digital computer, the 701, was introduced in 1952.
And computers — electronic digital computers — have been the meat and potatoes of IBM ever since, although, technically, it took some years before revenue from computers finally overtook revenue from the old-fashioned unit record equipment which many organizations were still using.
The 704, introduced in 1954, the year I was born, was IBM’s first computer with core memory, still a very new thing at the time, and quite impressive. John McCarthy invented LISP on a 704 at MIT.
The 709, introduced in 1957, was probably IBM’s first truly successful computer, firing on all cylinders, IBM at its best. But still using old-fashioned vacuum tubes.
IBM introduced a transistorized version of the 709 called the 7090, in 1959. Computers really began to fly.
For me, the transistorized 7094, introduced in 1962, was really the point at which computers were finally worthy of our attention. And IBM had one of the best machines available with the 7094. NASA used it for the Mercury and Gemini programs.
And just three years later, in 1965 IBM delivered four new models of the spiffy new System/360 — models 30, 40, 50, and 65, with different levels of performance and capacity for a range of prices. Something for the budget of every corporation.
The Model 75 came a year later — it was the first 360 that outperformed the totally awesome 7094. Computing had really arrived in the modern era with the IBM System/360. IBM at its best.
Overall, IBM is a very mixed bag, with some great positives but some terrible, lousy, and simply mediocre negatives
It all depends on your own interests, needs, priorities, and tolerance for mediocrity.
Details to follow.
Summary of major points about IBM
These are from my own, personal experience, but I suspect that many others shared a lot of these points.
- Alternating between awe and exasperation or boredom.
- Maddening combination of brilliance and stupidity.
- Dr. Jekyll and Mr. Hyde.
- 10% positive, 90% negative/neutral/mediocre, or 5%/95%, or 2%/98%.
- Rarely exciting.
- Too many offerings, too confusing.
- Usually very capable and functional, but too often not the best or most innovative offering, though sometimes they did in fact have a brilliant one.
- Too many other, scrappier firms lapping and zooming past IBM in so many areas, even if IBM has retained some core competencies.
Maddening combination of brilliance and stupidity
It sure has seemed as though every time I turned around I might as well flip a coin as to whether the latest development from IBM was either a stroke of brilliance, or inexcusable stupidity.
The positive:
- Competence. Both technical and business.
- Technical advances.
- Brilliance. So much better than everybody else.
- Breathtaking. Sometimes.
The negative:
- Incompetence. Both technical and business. Maybe they started with a good or even great idea, but the execution or implementation was severely lacking.
- Taking an initial success and then squandering or failing to exploit that success.
- Sluggish advances or failure to see obvious technical opportunities.
- Dull and lacking any excitement or enthusiasm.
IBM has always been quite the Jekyll and Hyde story.
Overall, I would rate IBM as 10% positive but 90% negative/neutral/mediocre. Or maybe a 5%/95% split. Or at times even a 2%/98% split.
IBM set a very high bar for expectations
IBM set a very high bar for success with all of their early computers.
People like me grew up with that high bar as an expectation to have for everything that IBM did.
Little did I know that the people running and staffing IBM were mere mortals, with human frailties, not always capable of reaching such an impossibly high standard for success.
Data processing and computing
My main focus and interest with IBM has been hard-core computing, but much of IBM’s early success was in much more vanilla data processing, initially with lots of punched cards and so-called unit record equipment, and then with many of the same punched cards but with electronic digital computers supplanting the ancient electromechanical unit record equipment. The computers merely mimicked a lot of the unit record data processing, just a lot faster, eventually transitioning from punched cards to magnetic storage media such as drums, tapes, and disks.
Even today, a lot of the uses for computers are still for fairly basic data processing.
80-column punched cards, terminals, and display screens
IBM invented or at least popularized the 80-column punched-card data format.
Curiously, this 80-column format was carried over to early computer terminals such as those from Teletype (although the IBM 2741 supported 88 columns), and eventually to most display terminals, such as the ubiquitous 24 x 80 display format for time-sharing and standalone terminals.
It even carried over to the standard monitor of early IBM PCs. Okay, by then it was 25 lines, but still 80 characters wide.
It was great to have this standardization, although it was a relief to advance beyond it in later years with graphics-based text and proportionally-spaced and sized fonts.
Although, even today, having text wider than 80 columns is more the exception than the rule.
80 columns of text actually seems optimal for most people. I mean, how many people do you know who want to read a book that’s a lot wider than that?!
IBM — business machines, me, not so much
Sure, IBM — and its predecessor, C-T-R — got started with business machines alone, and commercial information processing is still a huge focus for the company, but scientific, engineering, and pharmaceutical applications have risen as well. Still, even today, IBM is most closely identified with… business machines, or at least business applications.
That said, my own early interest in computers in general, and IBM in particular was not about business machines or business applications in general.
In fact, I wasn’t really so interested in scientific, engineering, or pharmaceutical applications, either.
From my earliest interests, I was a hard-core technologist. I was much more interested in how the machines and software worked than what people might do with them.
I was interested in computer architecture, systems programming, programming languages and compilers, and software tools.
My interest was in the technology which would enable all of the business-oriented applications, not the applications themselves.
10% positive vs. 90% negative/neutral/mediocre, or is it 2% and 98%, or even 1% and 99%!
Subjectively, I estimate that only 10% of what IBM does impresses me, and 90% of what they do disappoints, depresses, bothers, or maybe simply bores me.
On further reflection, maybe it’s a 2%/98% split, or maybe even call it a 1%/99% split to dramatize the point.
So, 1% of the time IBM does things that really impress me, but the other 99% of the time everything that they do either disappoints, depresses, bothers, or simply bores me.
That sounds about right. And emphasizes the point.
Maybe over some briefer intervals IBM manages to impress me 10% of the time, but over much longer stretches they impress me only 1% of the time.
How others might feel about IBM and its accomplishments is another matter. I only speak for myself.
Top ten or dozen or so of awesome products and features
I don’t want to try to judge what the absolute top IBM products and features were, especially since that can vary based on personal or organizational interests, but here’s a representative sample:
- Unit record equipment and punched cards, 029 keypunch
- 704 and 709
- 7090/7094
- 7030 Stretch. A supercomputer.
- System/360
- 80-column punched cards
- RJE (Remote Job Entry) for batch processing of card decks. Technically, IBM may not have invented it, but they did popularize it.
- 8-bit bytes for characters and addressing
- First mainframe based on integrated circuits
- Semiconductor memory
- Invention of DRAM
- Selectric typewriter
- 2741 terminal
- 1403 line printer
- IBM PC (and XT and AT). With relatively high-quality but cheap graphics.
- ThinkPad with integrated pointing/tracking device
- FORTRAN, PL/I, and APL programming languages. Everybody had COBOL.
- Relational databases and the relational data model, and SQL. E.F. Codd, IBM researcher, and other IBM researchers.
- Removable disk drives (disk packs)
- Winchester disk drives
- Floppy disks
- Space Shuttle avionics computer
- RISC CPU concepts
- PowerPC chip
- IBM Watson AI system
- Initial research and early systems in quantum computing
Ditto for the top or I should say bottom ten or dozen or so of flops or exasperating products or decisions or policies of IBM. A representative sample:
- 701 computer. Put IBM on the map for computing, but too limited, and few shipped — the 704 was more successful since it had magnetic core memory.
- 7030. Great, but technical issues and a commercial flop.
- Blindsided by minicomputers and super-minicomputers
- Blindsided by high-end workstations and servers
- Series/1 minicomputer. Late to the game.
- System 38. Incompatible, but advanced architecture.
- IBM PCjr
- Blindsided by the PC clones
- No 386 PC
- PS/2
- Microchannel
- OS/2 and Presentation Manager
- Allowing Microsoft to gain most of the benefit from DOS and Windows for the PC
- Operating systems in general. Maybe VM/370 was sort-of okay.
- Blindsided by others stealing their thunder on database technology. E.g., Oracle.
- Very late adoption of UNIX
- Stumbles with Watson and AI. E.g., health care.
- Later quantum computer systems. Yes, some decent technical advances, but failing to put together a credible system that can tackle real world problems the way early IBM classical computer systems could. Has yet to fully capitalize on all of their years of research.
IBM and Feynman at Los Alamos and the Manhattan Project
The Manhattan Project to develop the atom bomb certainly needed a lot of calculations, just the task for a computer, but… there were no computers at the time. The closest IBM had to a computer by the end of World War II was the 405 Alphabetical Accounting Machine (or 405 Tabulator) that could add and subtract and print, using punched cards for input. Feynman used these machines to perform atomic bomb calculations at Los Alamos for the Manhattan Project.
Feynman also had so-called Marchant electromechanical calculators that could do basic arithmetic, but manually, one operation at a time. He also had rooms full of shifts of women, called “computers”, who operated the Marchant calculators.
The IBM 405 Tabulators came later, but were part of the mix.
ENIAC, the first real industrial scale electronic computer, came out a year later, in 1946 and was able to perform computations for the hydrogen bomb project (“Super”), but was too late to help with the atom bomb itself.
So, IBM did have a role at Los Alamos on the Manhattan Project, just not with a general-purpose computer as we know the term.
I, of course, was not involved with any of this, but the stories about it, the history, unit record equipment, punched cards, and all, informed my early (and later) perceptions of IBM.
My first punched card
I think it was back in 1967, we were visiting my uncle living in Massachusetts who was a nuclear engineer for Combustion Engineering in Connecticut, and he brought home a couple of punched cards which he gave us. I was bummed out when one of my much younger little cousins did all of the things you’re not supposed to do with punched cards to my card — he folded it, he spindled it, and he otherwise mutilated it. Yeah, I was very bummed out, but it was a valuable lesson in computer technology.
A couple of years later, in 1969, at the time of the Apollo 11 moon landing, when my Uncle was living in Connecticut, closer to Combustion Engineering’s headquarters in Windsor Locks (after he had returned from Germany where the family had moved for a couple of years so he could do the nuclear engineering to start up and test one of Germany’s early nuclear power plants), he took us in to see the computer that he used, a big Control Data machine. It wasn’t an IBM system, but it still used the same punched cards.
IBM machines at my college
Although the DEC PDP-10 was the main computer that all of us students and faculty used at Stevens Institute of Technology in the mid-1970s, they had used IBM mainframes as well.
I worked in the Stevens computer center all four years I was there. All on the PDP-10.
The operations manager used to tell stories about the old vacuum tube-based UNIVAC computer (likely an 1103) that he had operated in his younger days at Stevens. One story was that each morning he had to take a short stack of punched cards held together with a rubber band, go behind the machine and then give the top of each tube a whack with the card deck to make sure the vacuum tubes were firmly seated, since there was so much vibration from the cooling fans.
Eventually they replaced the ancient UNIVAC with a spiffy modern IBM System/360 Model 40. That machine was first delivered in 1965, but I don’t know when Stevens got theirs. Strictly for batch processing of punched card decks.
But by 1970 or so, they replaced the IBM machine with a DEC PDP-10 (KA-10 processor) time-sharing system with Teletype terminals.
The 360 Model 40 was quite a decent system, as batch processing systems of the 1960s went. But the modern time-sharing of the PDP-10 was a whole new level of capability, a big leap.
I don’t ding IBM for the 360 Model 40 in any way. It was a world-class machine for its time. IBM certainly deserves praise for innovating and delivering such systems.
The college also had a small IBM System/360 Model 20 for administrative data processing. But that was completely separate from the main computer center.
As wonderful as the PDP-10 was, especially with modern time-sharing, a lot of students and faculty were still doing scientific and engineering calculations using batch processing of punched cards. That meant that a lot of keypunch machines were needed, which DEC did not supply. We relied on the venerable IBM 029 Keypunch machines. There was a separate keypunch room in the computer center (basement of the new library) with a number of 029 keypunch machines. In the fall of every year, they had to temporarily bring in another dozen or so 029’s since the entire freshman class had a mandatory class in FORTRAN programming — based on punched cards.
That’s the extent of the use of IBM computers at my college.
Invention of DRAM
IBM discovered the key innovation that enabled DRAM memory.
Yes, IBM benefited from that innovation — but so did all of us, and so did all of the modern DRAM manufacturers. IBM did make its own memory chips for some time, but eventually stopped, leaving all of the big DRAM manufacturers to reap the profits.
So, IBM did this great innovation, benefited from it, even made money directly from it, and then… walked away from it.
Intel also was in the memory chip business but then also walked away from it.
Maybe they both made the wisest decision. Or maybe not.
It’s just that DRAM is such a huge market and these early pioneers aren’t profiting (directly) from it.
The flip side may be that without the investment burden of staying competitive in the DRAM market, both IBM and Intel reap indirect benefits from the DRAM market since cheap and fast memory enables higher-margin products of both companies.
Still, one has to wonder.
Early core memory
The earliest electronic digital computers used a variety of oddball technologies for their main memory, including magnetic drums, display tubes, electrostatic tubes, capacitors, and acoustic memory delay lines.
Yeah, these methods worked, in a fashion, but all had limitations and issues.
Several researchers investigated the use of magnetic cores.
One enterprising researcher was Dr. An Wang at Harvard. He had the forethought to patent his invention.
IBM licensed Dr. Wang’s method for core memory. The story they told at Wang Labs (where I worked for several years in the late 1970s and early 1980s) was that Dr. Wang wanted a lump-sum upfront payment, but IBM insisted that they would only pay a per-unit licensing fee — since they didn’t think there was that much of a market for these new computers. Dr. Wang had little choice. IBM ended up selling lots of computers, earning Dr. Wang a small fortune. IBM’s first computer using core memory was the IBM 704, introduced in 1954 (the same year I was born).
IBM did benefit handsomely from this licensed technology, but having to pay such licensing fees probably irked IBM senior executives, which still included Thomas J. Watson, Sr., so one has to wonder whether this episode may have helped to push IBM into investing a lot more heavily in its own research, including the successors to core memory, SRAM and DRAM semiconductor memory in the late 1960s and early 1970s.
Nonetheless, IBM did benefit from the licensing.
A couple of decades later, they once again benefited from licensing with the IBM PC, with a microprocessor chip from Intel, operating systems software from Microsoft, and memory chips from Intel and Texas Instruments even though IBM already made high-end memory chips of its own for its midrange and mainframe computers. The key driver for the IBM PC was that it had to be super-cheap to manufacture and didn’t need the quality, features, or performance of IBM’s midrange and mainframe computers.
Overall, memory and licensing have been a mixed bag for IBM, offering some benefit, but mixed blessings as with any pact with any devil.
7094
Although it predated my own involvement in the computer industry, the IBM 7094 computer was a very impressive system for the time, the early 1960s. I never saw or used one, but it was the backdrop for its successor systems that I did see and use.
It was the last of IBM’s 36-bit computers. It was preceded by the 7090, a transistorized version of the 709, but the 7094 added functional and performance enhancements that made it great for scientific computing.
NASA used the 7094 for Project Mercury and Project Gemini of the space program.
MIT used the 7094 for its CTSS (Compatible Time-Sharing System) project, which demonstrated the fundamentals of time-sharing, allowing many more users to directly use the system, via terminals rather than antiquated punched cards and batch processing.
It was a great example of the old IBM with bold, dramatic innovations.
Nothing to complain about here.
It was followed by the System/360 series of 32-bit computers, but reigned as a powerhouse system into the late 1960s.
System/360 series
As powerful as the 36-bit 7094 was, it was focused more on scientific computing rather than general-purpose or business applications.
Its successor, the 32-bit System/360 series, was truly general-purpose and excelled at business applications.
A key feature of the System/360 was that all of the machines in the 360 family shared the same instruction set architecture, documented in the IBM System/360 Principles of Operation manual.
A prominent feature of the System/360 was the 8-bit byte. Granted, it used a proprietary IBM character code, EBCDIC, but it was such a convenient unit of information that even today virtually all computers use it, although the ASCII character code and later 8-bit encoding for UNICODE supplanted EBCDIC for non-IBM systems. The IBM PC used a variant of ASCII.
No real complaints about the System/360 series. With it, IBM was really firing on all cylinders.
These were the golden years for IBM.
Okay, actually, it is the System/360 hardware that was such a big success and influence on the entire computer industry, but IBM’s operating systems left much to be desired and much to complain about.
I mean, if IBM had had a great operating system for the System/360, there wouldn’t have been a need for:
- Multics
- UNIX
- TOPS-10
- TENEX
- DOS
- Windows
8-bit bytes for characters and addressing
Today, 8-bit bytes for characters and for addressing are the norm. IBM introduced this innovation with the System/360 in 1964. Previously, 18- and 36-bit words with word addressing, and 6- and 7-bit characters, were the norm.
But as with so many IBM innovations, competitors quickly picked up on this innovation and ran with it, leaving IBM in the dust.
Granted, IBM had a weird-ass character set, with a weird-ass name, EBCDIC or Extended Binary Coded Decimal Interchange Code, while almost everybody else used standard ASCII (initially a 7-bit code in 8-bit bytes but then expanded to 8-bit with another 128 character codes), but the 8-bit character, byte, and byte addressing quickly became the norm for all newer computer architectures.
Sure, usage by competitors did vary, somewhat. The RCA Spectra series used EBCDIC for compatibility, while the Wang VS used 8-bit ASCII even though their data formats and the instruction set were fully compatible with the IBM 360.
These newer architectures included the 16-bit DEC PDP-11 minicomputers, and their 32-bit VAX and 64-bit Alpha follow-on super-minicomputers.
And virtually all advanced microcomputers did so as well, notably the Intel x86 architecture and Motorola 68000 microcomputer chips.
Another example of an IBM innovation that was a key competitive advantage, for a while, but then fully co-opted and superseded by competitors.
IBM may no longer control the standards for bytes, character codes, and addressing, but they deserve great credit for the initial innovation and for getting the ball rolling, even though the ball has now rolled out of IBM’s control.
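The EBCDIC/ASCII divide described above can be seen directly with Python’s built-in codecs. This is a minimal sketch; it assumes the cp037 code page as a representative EBCDIC variant (System/360 shops actually used several EBCDIC code pages), and it simply shows that the same five characters get entirely different 8-bit encodings under the two schemes.

```python
# Encode the same text in EBCDIC (code page 037, one common variant)
# and in ASCII, and compare the raw bytes.
text = "Hello"

ebcdic_bytes = text.encode("cp037")   # EBCDIC encoding
ascii_bytes = text.encode("ascii")    # ASCII encoding

print(ebcdic_bytes.hex())  # EBCDIC: uppercase H is 0xC8, lowercase e is 0x85, ...
print(ascii_bytes.hex())   # ASCII: H is 0x48, e is 0x65, ...

# Both are 8-bit-byte encodings, and both round-trip cleanly;
# only the code assignments differ.
assert ebcdic_bytes.decode("cp037") == text
assert ascii_bytes.decode("ascii") == text
```

Note how every printable character fits in a single byte either way — the 8-bit byte itself, not any particular character code, is the IBM innovation that stuck.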
First mainframe based on integrated circuits
IBM was in fact the first of the big computer companies to come out with a mainframe computer based on integrated circuits rather than discrete transistors, the System/360 series.
On the flip side, everybody else quickly switched to integrated circuits, limiting IBM’s competitive advantage.
IBM System/360 Principles of Operation manual
At some point in my high school years I got a hold of a copy of the IBM System/360 Principles of Operation manual. I studied it, cover to cover, very carefully. It fully documented the machine architecture — not the raw hardware, which differed from model to model, but the features from the perspective of how a programmer could use them. Registers, data types and representations, and instructions.
It was awesome.
To this day, it is the gold standard I adhere to for how a computer should be documented.
It was IBM at its best.
Alas, the great difficulty is that IBM is not always at its best, and too frequently it is far from it.
System/360 Model 91 at Princeton
We had an informal computer club in high school, led by one of the math teachers. One Saturday in 1970 we had a field trip to the Theoretical Division at the Forrestal Plasma Fusion Laboratory at Princeton University. Princeton had a big System/360 Model 91 which was actually considered a supercomputer at the time. It was housed at a central computer center, but we could submit programs via remote job entry (RJE) using punched cards. The Theoretical Division was a small office building next to the big lab building where they actually conducted nuclear fusion energy experiments that the theory guys designed and simulated using the Model 91 supercomputer.
We were programming in FORTRAN, for the first time. Playing, really, but it was awesome to be using such a big and impressive machine, an actual supercomputer, and we were just in high school.
And it was awesome to be coding in the shadow of a high-end physics lab where they were doing leading edge physics calculations and simulations.
At lunch time we visited the computer center and could see the console of this behemoth, this supercomputer. It was very impressive.
Once again, IBM at its best. Innovative, solid engineering, high quality, high performance, high reliability.
Little did I know that this was IBM at its literal best, and within a couple of years decline would have set in.
System/360 Model 75
IBM had a lot of very successful models of the IBM System/360 family of mainframes, but the Model 75 really stood out for me. It was a truly amazing machine for the time.
NASA used several Model 75s to support the Apollo Program. The earlier Mercury and Gemini programs had used the older, but quite amazing, IBM 7094, but the IBM System/360 Model 75 was a much more advanced and higher performance system, as the Apollo Program required.
I attended a summer computer training program for high school students in 1971 at Stevens Institute of Technology called SSTP — Student Science Training Program — focused on the DEC PDP-10 architecture, assembly language programming, and numerical analysis. We had a field trip to an IBM data center (I think they called it a Management Information Center) in Mahwah, New Jersey where they had rows and rows of Model 75s. It wasn’t quite as powerful as the Model 91 I had seen in Princeton the previous year, but still, seeing so many of them was extremely impressive. It really made the point about what a technical and business powerhouse IBM was in the 1970s.
The Model 75 was a sterling example of IBM at its best: innovation, manufacturing, marketing, sales, quality, functionality, reliability, and performance. IBM deserves great praise for the Model 75.
Historical unit record equipment at IBM headquarters on Madison Avenue in New York City
Another field trip we had at that summer training program at Stevens in 1971 was to IBM’s headquarters on Madison Avenue in New York City, where they had a significant display of antique unit record equipment — various devices for processing the ubiquitous (back then!) IBM 80-column punched cards.
That’s all very ancient history now, but at the time it was yet another reminder of what a powerhouse in data processing IBM was back in the 1970s and in the preceding decades.
Once again IBM at its best. Its glory days.
Selectric typewriter
The IBM Selectric typewriter was a truly amazing typewriter. It had a spherical “golfball” type head mechanism and a much friendlier keyboard. The print quality was fantastic.
It was the handiwork of IBM Research, not some product development group.
I actually never saw one until I got to college and started working in the computer center where they had one in the office.
A variation of the Selectric was embedded in the IBM 2741 terminal, which gave it amazing print quality, in contrast to most Teletype-style terminals at the time.
I do remember the last time I saw a Selectric. It was perched on top of a pile of garbage bags on the sidewalk on Avenue of the Americas (6th Ave) in New York City, just a block or two north of Radio City Music Hall. I was out for my lunchtime walk to Central Park, walking north on the east side of the street when I saw it. It was a rather unusual sight, even for NYC. I laser-focused on it like a hawk, cut right in front of some dude to get to the curb side of the sidewalk where the bags were, and stared at it for a few moments. It was in great condition. Sure, the thought occurred to me to carry it with me, but it was bulky and heavy and I wasn’t going to use it anyway since everybody used PC-based word processors at that stage. Just before I was about to continue my walk, I glanced up and a woman was standing in the middle of the sidewalk staring at me with her mouth wide open. At first, I thought I had met a soulmate, a woman who really appreciated amazing technology, but… the first words out of her mouth in a hushed but excited tone of voice crushed both my heart and soul, as she said… “That was HARRISON FORD!!” The dude I had cut in front of! I just kept on walking. Such is life… in NYC!
2741 terminal
When I was in high school, most of my computer access was time-sharing using the IBM 2741 terminal, connected by phone modem — acoustic coupler — to a remote computer, first an IBM System/360 Model 50 and then an RCA Spectra 70/46, using PL/I on the IBM system and BASIC on the RCA system.
The 2741 was a great terminal. It was based on the IBM Selectric typewriter for the keyboard and printing, plus a lot of electronics for remote access.
It was rock-solid, high quality, very reliable, great performance (for a terminal, back then), and very professional-looking and feeling. IBM at its best.
Lots to praise, nothing to criticize. Again, IBM at its best.
There was one time when an IBM technician came in to do some work on our 2741 terminal. I can’t recall what the issue was, but when the technician removed the back panel of the terminal, to get access to the electronics, I could see that there was this shelf inside and there was documentation for using and repairing the terminal. Hmmm… that’s interesting, I thought, and I filed that information away in the back of my head, not realizing at the time that it would come in handy some years later when I was at Stevens.
Ocean County College and Ocean County Information Network, IBM CICS
When I was in high school, most of my computing was through the Ocean County Information Network (OCIN) which was the computer center at Ocean County College in Toms River, New Jersey.
Previously, the college had been using a mid-range IBM 1401 for traditional data processing applications.
But by the time I discovered computers in the 10th grade, the college was switching over to a brand new RCA Spectra 70/46 time-sharing system, using terminals for applications rather than punched cards.
I got a summer job at OCIN after I graduated from high school. First doing data entry, but then programming — in assembler since the RCA COBOL compiler was such a hog and not oriented towards interactive applications anyway.
I also dropped out of college, temporarily, for a semester, halfway through my sophomore year, and started working full time at OCIN as an applications programmer.
Most of the work at OCIN, primarily data entry and updating records for students, was accomplished using modern CRT display terminals. Datapoint was a big-deal company back then. They had some higher-speed hardcopy terminals as well. Not IBM per se, but IBM-inspired.
One day, the local IBM sales team stopped by and gave us a presentation on CICS, their Customer Information Control System. It was interesting, but didn’t really do anything for us that we weren’t already doing. There were no added benefits or added value for us. And it would have required switching to an IBM mainframe computer, which wasn’t going to happen. The RCA Spectra was plenty powerful for the applications we were doing.
CICS was indeed a power tool for a typical big company, but we weren’t that kind of big company with that kind of transaction volume.
Still, it did establish IBM’s credentials as providing the level of software capabilities that major organizations required. This was IBM at its best for big organizations.
Whether or not CICS was a great product, an innovation, or what its quality was, I couldn’t say, since it wasn’t my bailiwick and I never personally used it or even saw it in operation.
In truth, I was never a major organization kind of guy. So, IBM was never really a fit for me, either as an employee or a user — except for the IBM PC. And the 2741, back in the day.
3270 display terminal
Although many of us were very familiar with computer terminals in the 1970s, they were mostly either ancient Teletype-style terminals or very dumb CRT terminals, displaying plain text, with no editing capabilities and only command line interfaces.
But in 1971, IBM introduced the 3270 terminal, which added a whole new level of functionality.
Designed to mimic fill-in-the-blank forms, 3270s had easy-to-use editing features and were high performance.
Granted, they had no value to us programmers, but they were ideal for corporate and business users, such as for data entry and updating data.
Again, they were IBM at its best, innovative, rock-solid, high-quality, extremely reliable, and high-performance.
I personally never used one, but when I got to Wang Labs in 1979, whose Wang VS computer was a mid-size clone of the IBM 360/370 instruction set, their display terminals were basically clones of the IBM 3270 display terminals. Rock-solid terminals, even if not very friendly for text-oriented programming and software development.
IBM deserves significant praise for their 3270 terminal.
Ironically, IBM itself killed off demand for the 3270 terminal — by introducing the IBM PC which could readily emulate all of the features of the 3270 terminal.
In fact, when I first saw the IBM PC, it reminded me a lot of the 3270 terminal.
Time-sharing on the System/360
The only real negative, for me, of the System/360 was that it had only mediocre time-sharing capabilities.
The DEC PDP-10 was so, so much better for time-sharing.
Somehow, IBM stumbled and really missed the boat on time-sharing. Oh, yes, IBM did have some time-sharing offerings, including VM, but overall, IBM’s time-sharing… sucked.
I can’t point to IBM and say how wonderful its time-sharing was. Mediocre, at best, was more like it, in general.
Interactive PL/I programming on RUSH-5 from Allen-Babcock Computing
We did have access to time-sharing on an IBM System/360 Model 50 when I was in high school, but it was with a custom operating environment and programming system, not standard IBM software.
Most of my early programming in high school was on this interactive version of the PL/I programming language (awesome!), running on a time-sharing system called RUSH-5 (Remote Use of Shared Hardware) from Allen-Babcock Computing (ABC) in northern New Jersey. It was a truly awesome experience. But that was not typical IBM.
RUSH-5 had a lot of interesting features and was actually one of the best and most programmer-friendly development environments I ever used. You could be running your PL/I program, hit the Attention key (“ATTN”), examine and modify variables or even execute some code in immediate mode, maybe make some changes to your code, do some edits, and then continue your program right where it was interrupted. You could even interrupt your program (the ATTN key), save it, and then come back the next day, reload it and continue right where you left off. Amazing! Then… and even now!
ABC’s work with PL/I and RUSH-5 was a major influence on IBM’s offering of CPS, Conversational Programming System.
Again, I can’t point to IBM and say how wonderful its time-sharing was. Mediocre, at best, was more like it, in general.
System/360 was the peak and end of line for me and IBM… until the PC
Once I finished using that IBM System/360 Model 50 running RUSH-5, that was really the end of the line with IBM systems for me, until many years later when the IBM PC came out.
And after I got out of college and went off to work at DEC, I literally paid no attention to IBM.
For me, at the time, there was a big gap covering the System/370 through AS/400, 3033, and right through the Z Series. I literally had no exposure to System/370, 3033, AS/400, or any of those similar systems after the System/360.
1130
I did have one other, minor, exposure to IBM systems when I was in college. I went to visit a high school classmate who was attending Seton Hall, also in northern New Jersey. They had an IBM 1130 system. It was a very low-end system, marketed to cost-sensitive organizations. Mostly running local batch jobs with punched cards.
A rather curious artifact of computing history, but it wasn’t of much interest to me given that I had full access to a glorious PDP-10.
But, it did represent the kind of business IBM did and was very successful at — low-end data processing.
FAA air traffic control (ATC)
My classmate from high school and Seton Hall went on to work for the FAA on their air traffic control system (ATC), which was based on a specialized version of the IBM System/360.
Actually, he was not an employee of the FAA, but of whichever company currently had the contract for the development and maintenance of the system. Every few years the FAA would switch to a new contractor, and my classmate would then become an employee of that company. Same desk… different company. I recall mention of Computer Sciences Corp. and even IBM.
At some point they were no longer able to get replacement parts for the old IBM system, but it was so old and relatively slow that they were just able to run an emulator for the old hardware on modern hardware.
There have been a number of efforts to revamp this system. Most failed, or only partially succeeded.
I hear that IBM itself now has a contract to completely rearchitect the system.
My classmate spent his entire adult career working on that old system.
1620
I heard tell of the math department at Stevens having an IBM 1620 in the early sixties. Supposedly it was installed in 1961, but it’s unclear how long they had it, or what they used it for. I never saw it or talked to anyone who had. I suspect that use of it declined or even evaporated once Stevens installed an IBM System/360 Model 40 in 1965.
The 1620 was a popular machine both in business and academia. It wasn’t a high-end mainframe, but for a lot of more basic computation and training on computers it was a good fit.
IBM should be applauded for developing and marketing this system in the early 1960s.
But, later, they dropped the ball and were eclipsed by minicomputers such as the PDP-8 and PDP-11 and Data General Nova. They did continue to offer low-end computer systems, but more focused on business applications, so-called data processing and what we call IT today, rather than the academic and technical users common in academia and commercial departments with a technical (STEM) focus. Systems such as the System/32.
Clones of the IBM System/360 instruction set
For the time, the IBM System/360 had a very decent instruction set, the machine language used at the assembly language level.
It was so good and so popular that a number of vendors cloned or copied it.
There were two types of cloning:
- Fully-compatible clones, so-called plug-compatible mainframes. They were functionally identical to the IBM hardware and could run the IBM system software and applications, unchanged. Amdahl Corporation was a prominent plug-compatible vendor. Gene Amdahl, founder and CEO of the company, had been a hardware engineer for the IBM 709 and 7090/7094, and chief hardware architect for the entire IBM System/360 family.
- Basic instruction set clones. Not designed to be fully plug-compatible, but exploited significant partial compatibility. Had their own distinctive operating systems.
I used two of the basic instruction set clones:
- RCA Spectra 70/46. Even used EBCDIC for character codes.
- Wang VS. Fully compatible with data formats and instruction set, but did use ASCII for character codes.
If you knew how to program in assembler on the IBM System/360, you could program in assembler on these machines.
It was great that IBM planted the seeds for other companies, and we all benefited, but it’s once again a great example of how IBM was unable to fully capitalize on its own great innovations and its own initial success.
Yes, the System/360, and its successor families of processors, were a great success for IBM. IBM still has the Z family of processors, which are still compatible with the basic System/360 programming architecture, data formats, and instruction set, but the entire rest of the computer industry has moved on to more modern and more efficient computer architectures.
I did have one other exposure to IBM mainframes via the Wang Data Center
Wang Labs, famed for shared-logic calculators and word processors, also bought a local data center in Massachusetts, a data center based on IBM systems. I never paid attention to exactly what systems they were. They could have been System/360s or might have been System/370s by then. I couldn’t have cared less. This was in 1979 and 1980.
But I did have to use them for a few months when we were porting a PL/I compiler to the Wang systems that was licensed from either Translation Systems (TSI) or Language Processors (LPI) — I can’t remember which, since I had dealt with both. Whichever it was, they had licensed their PL/I compiler (a derivative of the Multics PL/I compiler) to Data General, DEC, and Wang.
I needed the IBM system to bootstrap the compiler since it was written in PL/I. No time-sharing, just using RJE, Remote Job Entry, from the Wang VS systems. We would compile the TSI/LPI compiler on the IBM system, then use it to compile itself, outputting assembly language source code on the IBM system, which we then transferred back to the Wang VS system and assembled, so we could then run the compiler natively on the VS system. We then added code to generate native Wang VS object modules, after which it behaved as a normal compiler.
So, I did use an IBM system, but I knew nothing about it since the only thing I saw was the RJE interface on the Wang VS system.
Proprietary operating systems, then UNIX
One huge, glaring issue for IBM was that despite their prowess with hardware, they never really excelled with operating systems software. Sure, they had proprietary operating systems, many of them, but they never really settled on a true winner. Sure, they were functional and I’m sure many customers were very (or at least somewhat) satisfied, but from my perspective they were always rather mediocre and lackluster.
VM was maybe the closest to a real winner, but in true IBM fashion, even though it was innovative, IBM was never able to fully exploit it.
Only much, much later did IBM discover UNIX, as everyone else had much earlier, and finally abandon their own proprietary operating systems.
IBM’s adoption of UNIX was a bold, effective move. Virtually all of their earlier offerings, not so much.
It was always baffling to me how the great, all-powerful IBM, so great with hardware, stumbled so badly with operating systems. Including selecting Microsoft DOS for the PC, when UNIX, or at least a stripped-down UNIX, would have been a much better technical choice. And their brief foray into OS/2 for the PS/2, again with Microsoft, when UNIX was the obvious best choice.
1403 line printer
Everybody loved the DEC PDP-10 for academic computing, including science, engineering, mathematics, computer science, and research in general. Business applications, not so much, until later, but that wasn’t an issue for the academic and research environments.
But as wonderful and much-loved as the DEC PDP-10 was, overall, one super-glaring defect was that DEC’s line printer was super-crappy. It was slow, unreliable, the print quality was poor — and it was UPPER-CASE ONLY (yeah, really!!)
The more elite and well-funded academic environments dumped the DEC printer and bought the… IBM (yeah, you read that right!!) printer offering, the IBM 1403 line printer. It was everything that the DEC printer wasn’t — speed, print quality, reliability, and… it could print lower case letters!
My school, Stevens Institute of Technology, wasn’t quite as elite or as well-funded, so most of my years — and I worked at the computer center all four years — were plagued by the super-crappy DEC printer.
But in my senior year, we finally got a true-blue IBM 1403. I really enjoyed the lower case letters.
The 1403 was a truly awesome device. This was IBM innovation and manufacturing at its best. No complaints at all on this front.
Fast forward to today… IBM has no printer offering. Go figure!
IBM 5100 Portable Computer — IBM’s first personal computer!
When I was in college, working at the computer center as a programmer, the local IBM sales team came in one day in 1975 to give us a demo of the brand new IBM 5100 Portable Computer, arguably IBM’s first personal computer. Really!
It was rather big, bulky, and heavy, requiring a wheeled cart to move it around. Okay, it was portable in some sense, but it was a complete computer system in an integrated unit, although it had no printer.
It had a small display screen and a full professional-quality keyboard, both fully integrated into the system unit.
It had BASIC and APL hard-wired into the unit. APL? Yes, APL!
It did at least feel, sort of, like a personal computer. It didn’t look like any traditional computer, unlike the MITS Altair 8800, which had been released a year earlier, in 1974.
It didn’t offer us any features that we wanted that we didn’t already have at our fingertips from our PDP-10 (albeit via time-sharing), but it was quite impressive to see so many features packed in such a small box.
It really was an engineering marvel.
But it was emblematic of my love/hate relationship with IBM — impressive engineering innovation, but far less than desirable alignment with any real market demand.
As so common with a lot of engineering innovations, it was ahead of its time.
Just six years later, IBM introduced its IBM PC, which had far less innovation, but was an absolutely roaring market success. Go figure!
But at the time, as a technologist, I was really impressed by the 5100.
Hijacking the 2741 for use as a PDP-10 terminal
I had one other interesting IBM-related experience in college. Somebody at Stevens needed to get access to some application that was running on an IBM mainframe at some other school. It may have been Rutgers, I’m not sure. They needed an IBM 2741 terminal to do that. So, they brought in the terminal and set it up in an unused office space near our PDP-10 computer room, along with a modem and acoustic coupler so they could dial into the IBM computer.
As I mentioned, the IBM 2741 was a very impressive piece of hardware. We were all talking about it. It was especially impressive when compared to an old-fashioned Teletype terminal. Having lower case and a faster and quieter typing mechanism were much-appreciated features.
At some point, someone raised the question of whether we could connect it to our PDP-10, which primarily used old-fashioned, clunky, unreliable, and very noisy Teletype terminals — without lower-case letters.
Tom P., another student working in the computer center, was responsible for maintaining the software that ran on the PDP-8 minicomputer which acted as the front-end for connecting teletypes to the PDP-10, including both hardwired terminals in the nearby terminal room, and a large rack of modems that were used for dial-in connections.
I mentioned that, unfortunately, the 2741 used EBCDIC rather than the ASCII that the PDP-10 and PDP-8 used.
And we knew from playing around with the 2741 that it had a very odd communication protocol.
Tom had no idea what the technical details were for communicating with the 2741 terminal.
Then, it all came back to me, my experience when the technician came in to service our 2741 in high school. I told Tom about the shelf inside the 2741 with the technical manual and schematics. His eyes lit up!
So, we unscrewed the back of the terminal, and there they were. Tom — and I — scrutinized the technical manual carefully. We didn’t need to mess with the actual hardware, we just needed the communications protocol, which was really weird, as well as the need to convert between ASCII and EBCDIC.
One thorny issue was that the Teletype terminals were basically all ten characters per second and the hardware clock used by the PDP-8 software was built around that. The 2741 was a bit faster, at the very odd rate of 134.5 bits per second. The PDP-8 had a faster system clock as well. Tom was able to figure out a hybrid interrupt timer using the two clock rates to match the 2741 bit rate fairly closely, close enough.
So, Tom coded all of this up — in 12-bit PDP-8 assembly language — and we tried it out late one night after the computer center had closed for normal users. It didn’t work at all. Just gibberish in both directions.
Undaunted, Tom hooked up the oscilloscope that he borrowed from the DEC PDP-10 field service technician’s office in the computer center. And we watched the bits on the oscilloscope in both directions.
The data rate was fine. Tom pointed to one of the square waves on the oscilloscope screen and said “Okay, that’s the first bit.” But I replied “No, that’s the LAST bit.” Tom insisted “No, that’s the FIRST bit!” Then the light bulb went on over both our heads. This crazy IBM 2741 communication protocol sent the bits in the reverse order from the way a Teletype terminal sent them. Ah!!
So, Tom quickly modified his code to reverse the order of the bits for both send and receive, downloaded the update, rebooted the PDP-8 and we waited. I dialed the 2741 back in and we tried it out.
It actually worked! Mission accomplished!
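The two quirks we had to handle are easy to sketch in code. Here is a minimal, hypothetical illustration in Python of what Tom's receive path had to do: undo the reversed bit order of each character, then translate the terminal's code to ASCII. The function names and the tiny translation table are my own illustrative inventions, not the actual 2741 code chart, and certainly not Tom's real implementation, which was in 12-bit PDP-8 assembly language.

```python
def reverse_bits(value: int, width: int = 8) -> int:
    """Return `value` with its low `width` bits in reverse order.

    The 2741 sent the bits of each character in the opposite order
    from a Teletype, so received bytes had to be flipped end-for-end.
    """
    result = 0
    for _ in range(width):
        result = (result << 1) | (value & 1)  # shift out low bit, shift into result
        value >>= 1
    return result

# Tiny illustrative translation table (NOT the real 2741 code chart):
# maps terminal character codes to ASCII.
TERMINAL_TO_ASCII = {0x81: ord('a'), 0x82: ord('b'), 0xC1: ord('A')}

def decode_char(wire_byte: int) -> str:
    """Decode one received byte: un-reverse the bits, then translate to ASCII."""
    code = reverse_bits(wire_byte)
    return chr(TERMINAL_TO_ASCII[code])
```

The send path is just the mirror image: translate ASCII to the terminal code, then reverse the bits before clocking them out.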
On to the next technical issue.
The odd communication protocol was half duplex so that the keyboard was locked while the terminal was printing. You could manually unlock the keyboard by hitting the Interrupt key (“ATTN”), but usually the IBM mainframe would automatically send the special unlock code when the operating system was ready to accept input.
This time, Tom came up with a patch for the PDP-10 operating system (called “The Monitor”). It was only a temporary patch to the running system, so each day, when the system was rebooted, the patch needed to be reinstalled in the running system. Which Tom could do, but I stayed away from that kind of thing since I was more of a programming language and compiler and tools guy rather than an operating systems programmer, as he was.
It was a fun little project, although it was cumbersome to use.
I used it a little for some class writing assignments since the keyboard and printing were identical to an IBM Selectric typewriter and I could easily edit the text and then just print it again, but I later found an old Friden Flexowriter that I was able to use for writing assignments — although once again, I required Tom’s technical assistance to get it working, but that’s not a story about IBM.
In short, my experiences (and adventures) with the 2741 convinced me of IBM’s significant technical capabilities, but also that despite their significant technical prowess, IBM’s engineers could make some really bad decisions and compromises (likely aided by managers, God bless them!) that can really frustrate customers and users alike. Quite a mixed bag.
College interviews
If I had been any other random computer science graduate interviewing at college, there were two obvious relatively local candidate companies for a student from Stevens Institute of Technology, Bell Labs and IBM. I interviewed at neither.
I had already made up my mind years earlier that DEC was the place for me. I only interviewed with one company, DEC, and I had no on-campus interviews. It was kind of a tradition with Stevens and DEC that one student who worked at the computer center each year would go to work in the software engineering group for the Large Computer Group (LCG) which developed, manufactured, and marketed the PDP-10 based systems. I was that one guy my year, 1976. My boss at the computer center passed my resume directly to the manager of the language group at LCG since I was a programming language and compiler guy. They flew me up to Boston, my first and only college interview, my first plane flight, my first car rental, my first hotel rental, my first experience of the joy of driving in downtown Boston at rush hour. One job offer. One acceptance. Alas, it only lasted three and a half years, but it was a glorious place to start and a valuable experience to have.
Bell Labs? IBM? It honestly never occurred to me to even take any interest. I had had master’s-level adjunct instructors in computer science and electrical engineering from both, so I knew a little about each, but maybe DEC was more appealing since they were more of a scrappy newcomer (like me) than a storied, bureaucratic ivory tower, as Bell Labs and IBM were.
Nobody ever advised me “Oh, you have to go to Bell Labs or IBM”, so what did I really know?!
Who knows how things would have turned out for me if I had started at Bell Labs or IBM.
It’s not that I hated either, but I didn’t love either, either. All that I knew was that I had to go to DEC.
So, coming right out of school, I didn’t either love or hate IBM. IBM was simply irrelevant to me. IBM was a fact, and an annoyance.
RISC, 801, ROMP, and PowerPC
By the 1970s computer systems and their instruction sets had gotten quite complicated, making it difficult for compilers and programmers alike to fully utilize the raw computing power. Such computers were called CISC — Complex Instruction Set Computers.
IBM had one of the two groups which researched and advanced the notion of a radically simplified but still much more advanced and powerful type of central processing unit, called RISC — Reduced Instruction Set Computers. The other group was at Berkeley in California.
The initial IBM research project, the IBM 801, produced intriguing results.
Unfortunately, IBM’s first commercial foray into RISC, called the ROMP chip, and packaged as the IBM RT PC, was an abysmal failure, a commercial flop. A variety of engineering tradeoffs required to stuff it into a desktop PC left it with very poor performance. Really poor, unusable. An embarrassment.
I was at an electronic CAD/CAE startup in Boulder, Colorado at the time. We had our own high-end microprocessor-based workstation based on the Motorola 68010 16/32-bit microprocessor but were looking to put a subset of our software on standard IBM PCs. We initially focused on the 286-based IBM PC AT, which actually worked reasonably well, but were hoping that IBM would have a 386-based PC soon. IBM wanted our business and pushed the RT as the solution. We even had a visit to the IBM facility in Austin, Texas (they called it a “competency center”), saw their automated manufacturing setup and met with the systems architects for RT. But, RT was a real dog, a true non-starter (or as some would put it, “A dog that doesn’t hunt.”) We kept pestering IBM to have a 386 PC offering, but they insisted that wasn’t going to happen. Compaq actually came out with a 386 PC before IBM.
The RT and ROMP died. RIP.
Meanwhile RISC was beginning to flourish out in California, at Berkeley and Stanford, and through MIPS and Silicon Graphics.
Eventually IBM and Motorola teamed up with Apple to design a viable RISC chip architecture which became the PowerPC.
The PowerPC was a huge success for Apple, but nobody else really bought into it.
Seeing RISC rising, Microsoft decided to make their new super-Windows OS, Windows NT, portable and had it running on all four of the top chips: Intel (and AMD), MIPS, DEC Alpha (Digital Equipment Corporation), and IBM PowerPC. In fact, I visited a small IBM facility in Kirkland, Washington to port and test my object-oriented programming language (Liana) on an unreleased PowerPC running Windows NT. That was a rather interesting experience. But, once again, IBM had a success but was unable to exploit it. And with Intel and AMD continuing to make steady advances and even leaps, IBM either couldn’t or didn’t want to keep up, and the IBM PowerPC running Windows NT never saw the light of day. And since Intel-based Windows NT was doing so well, MIPS and Alpha disappeared as well.
Despite the success of the PowerPC and MIPS, Intel and AMD were well aware of the potential and threat of RISC, so they beefed up their microprocessor offerings to the point where neither the PowerPC nor MIPS (nor the DEC 64-bit Alpha) had any great advantage over an x86 chip.
So, eventually, Intel won the chip business of Apple, and that was basically the end of the PowerPC. (Granted, Apple eventually dumped Intel as well, opting for its own proprietary processor chip, but that’s not IBM’s fault.)
Once again, IBM had an incredible advance, but was unable to successfully exploit it.
Intel 386 was a missed opportunity for IBM
The IBM PC AT was a really nice machine, based on the Intel 80286.
And clones of the IBM PC based on the 286 chip were quite impressive and very successful. I had one from Wyse in 1986.
When Intel announced the much more capable 386 chip to replace the 286, people were really chomping at the bit, waiting for a 386 version of the IBM PC.
But, IBM dawdled. They were late to the party, very late to the party. They were a no-show for most of the party. This was inexcusable.
After the big success of the original IBM PC, the IBM PC XT, and the IBM PC AT, IBM was really struggling with its new IBM PS/2.
They could easily have shipped a 386-based machine promptly after Intel's announcement, but they didn't. No explanation was given.
Rather, IBM was betting the farm on the new IBM RT PC, based on their new ROMP RISC-based chip.
The IBM RT PC was a great idea, but both a technical and market flop.
IBM was one of the two major research sites for early RISC computing, but their ROMP chip in the RT was a real dud. It couldn’t compete with the Intel 386 chip.
Sure, their investment in RISC and ROMP eventually led to the PowerPC, which was a big success with Motorola and Apple, for a while, but eventually it petered out and is now just a historical footnote.
And meanwhile, IBM’s PC competitors were eating IBM’s lunch with 386-based PC clones, notably Compaq.
IBM had stumbled badly, very badly, relative to the 386.
It was sad, really sad, that IBM had missed such a golden opportunity.
Again, typical IBM. Mr. Hyde had taken over and had pushed Dr. Jekyll aside.
IBM PC vs. IBM PS/2
IBM’s great success with the IBM PC was truly amazing.
And it was a relatively open architecture that made it easy for device vendors to design, manufacture, and sell low-cost add-on hardware.
But… then… after the IBM PC AT, IBM stumbled, very, very badly. All self-inflicted wounds, own goals. I won’t go into all of the details. This next debacle was called the IBM PS/2.
The IBM PS/2 had an entirely new, completely proprietary bus architecture, the Micro Channel Architecture (MCA), so device manufacturers could add hardware only if they licensed the new bus interface from IBM. MCA did have technical advantages, but it was incompatible with the original ISA (Industry Standard Architecture) bus, and the license fee was relatively exorbitant.
PC clone manufacturers, such as Compaq, were out of luck, since licensing MCA would make their machines cost too much or leave too little profit.
The clone manufacturers rebelled and came up with their own alternative to MCA, called EISA (Extended Industry Standard Architecture), which was fully compatible with ISA and offered most of the technical advantages of MCA.
Business took off for the clone manufacturers. Not so much for the IBM PS/2. Sure, there were plenty of loyal “IBM shops” who stayed with the PS/2, but for non-IBM shops, it was a no-brainer to dump IBM and its doomed PS/2.
IBM eventually ended its PS/2 business.
I personally never even saw an actual PS/2, let alone used or bought one.
I had a Compaq Deskpro 386 with a 110MB disk drive, which was a truly amazing system for the time.
Once again, IBM had a great success, but then frittered it all away with very poor technical and management decisions, and a misreading of the market.
Me and my IBM PC
Several people I worked with at Wang Labs and I bought original IBM PCs in 1981 — at a Sears Business Center, of all places. I had no particular plans for mine other than to experiment with it to see how it worked, as I had earlier done with an Apple II.
I took it into work and had it on my desk. People would come by to play and fiddle with it, writing small BASIC programs, and even an interactive (but very primitive) video game.
Then, in 1982, I moved to Boulder, Colorado to join an electronic CAD/CAE startup. Again, I had it sitting on my desk, mostly gathering dust at that stage since it was more fun to play with the CAD workstation we were developing, based on the Motorola 68010 microprocessor. Then the VP of Finance saw it and asked if he could upgrade it and put VisiCalc on it for his finance calculations. I said sure. Another floppy disk drive and more memory. At some point he had the company buy it from me. End of that story, but the PC had served its purpose for me.
My initial impressions of the IBM PC
Overall, I was very impressed with the IBM PC.
I previously had an Apple II, to experiment with, but I donated it to my high school when I was done learning what I could from it.
I bought an early model and was amazed at how professional a system it was at such a low price. The system unit, monitor, and keyboard were all rock solid and professional quality, unlike the cheap, cheesy plastic boxes that were common for early personal computers.
Overall:
- Solid metal case.
- Rock solid, robust, and very professional keyboard.
- Very professional monitor. Crisp text, 80 characters per line, 25 lines.
- This was a real, professional computer, not a toy for hobbyists. Okay, it doubled as a toy for hobbyists, but starting from a professional base.
- This was a home run for IBM. A grand slam home run.
The IBM PC XT was a decent advance over the original PC.
The IBM PC AT was also a very decent advance over even the IBM PC XT.
At the electronic CAD/CAE startup I worked at in Boulder, Colorado, our third product was a stripped-down schematic-capture CAD software package for the IBM PC AT.
The EGA graphics card had some limitations, but it was still fairly decent and quite professional: 640 x 350 pixels and 16 colors. Very basic, but for a low-end electronic CAD station for schematic capture, reasonably impressive at the time.
IBM did upgrade the EGA card to the VGA graphics card, initially for the PS/2, but it was picked up by the PC clones. This was an even better and more professional graphics capability. Resolution was upped to 640 x 480, giving crisper graphics. Still only 16 colors, but that was enough for our needs at the low end.
Higher-resolution graphics followed, but mostly not from IBM, although IBM did introduce XGA, but only for the PS/2.
So, I give IBM credit for getting the ball rolling, but then I have to ding them for… dropping the ball and eventually even just walking off the field. Typical IBM. Mr. Hyde takes over from Dr. Jekyll.
Toshiba laptop PC
A few years later, I bought a Toshiba T1100 Plus laptop. It was an incredible machine, a huge advance over the original IBM PC. Toshiba had the laptop market to itself. Neither IBM nor Apple had the intelligence or forethought to recognize the potential for this nascent market. The ThinkPad came years later.
Once again, IBM missed a golden opportunity and was late to the party.
And, IBM has left the party, spinning off their ThinkPad business to become Lenovo. I’m typing this text on a Lenovo Yoga.
Although, it is worth noting that even Apple stumbled badly, with a few mediocre offerings before they hit a home run with the MacBook, just as IBM was hitting a home run with the ThinkPad.
Still, Apple’s screwup was no excuse for IBM to screw up.
Wyse and Compaq
After a couple of years I left the startup and went out on my own, consulting, and even trying to raise some venture capital for some ideas. Along the way, I bought a succession of IBM PCs, including an IBM PC AT-compatible clone from Wyse, and a Compaq 386-based machine with an incredible 110MB hard drive. IBM had nothing like it.
IBM could have easily scored all of this business, but… they didn’t. They excelled at sometimes having great success, but so many times completely missing or squandering opportunities.
OS/2 and Microsoft, Windows NT
When IBM transitioned from the IBM PC (and XT, AT, and jr) to the PS/2, they also intended to transition to a much more advanced operating system, OS/2 (OS/2 for the PS/2), leaving the old DOS behind.
As mentioned earlier, it’s completely baffling how IBM failed to see the value of UNIX much earlier than they later did.
And it’s also completely baffling why they decided to double down on Microsoft for operating systems, other than the simple fact that the IBM PC had done so well with the DOS operating system from Microsoft.
The initial release of OS/2 did not have a graphical user interface such as Microsoft Windows — it was purely text and command-line oriented.
IBM and Microsoft then jointly developed a Windows-like graphical user interface which IBM called Presentation Manager.
Disputes arose between IBM and Microsoft and eventually they went their separate ways.
I had developed a small C-like special-purpose object-oriented programming language and class library called Liana for early versions of Windows. I briefly considered porting it to OS/2, but the license price for the SDK was an outrageous $2,700! Another example of IBM shooting itself in the foot.
There was a brief war between Windows, which was really taking off, and OS/2 Presentation Manager which was struggling to take off.
OS/2 really was a much better operating system base than DOS, but Windows was more popular.
Windows soared, OS/2 struggled.
And then, Microsoft developed a whole new operating system, Windows NT, which was everything OS/2 with Presentation Manager should have been.
Windows and Windows NT developed and competed in parallel for a while, both with great success, Windows for consumers and low-end systems, and Windows NT for serious enterprise customers, the same ones IBM had targeted with OS/2 Presentation Manager.
And then eventually Windows and Windows NT merged as Windows XP, which was a great success for Microsoft, and users everywhere. OS/2 got buried.
Again, IBM did some serious innovations, had an early lead, and then managed to squander all of that. So typical of IBM.
IBM PCjr
IBM tried to do a stripped-down IBM PC for the low-end consumer market for home computers, called the IBM PCjr (“Junior”) but it was a complete flop. Too many technical compromises made it too unappealing.
Once again, IBM stumbles badly despite their existing success, with no good excuse for stumbling at all, let alone so badly.
ThinkPad
IBM was a little late to the laptop/notebook PC market, with Toshiba owning the market.
But, true to IBM’s innovative heritage and building on the success of the IBM PC (and despite the bad stumbles with PCjr and PS/2), IBM introduced the ThinkPad notebook form factor computer, which was an instant, huge success, especially for businessmen on the road, such as salesmen, managers, and executives. A huge win for IBM.
Even Apple was late to the laptop/notebook PC market. Steve Jobs lamented that he had to buy his college-bound daughter an IBM ThinkPad since Apple had no offering.
Eventually, Apple came out with the MacBook line, but only after a few stumbles with early portable Macintoshes.
And then… for reasons that no sane person could comprehend, IBM sold off its booming ThinkPad business (and the clunky old PC business, at that stage) to Lenovo.
Lenovo is thriving to this day. I’m typing this text on a Lenovo notebook computer, although not one branded as a ThinkPad. Some Lenovo notebook computers are still branded as ThinkPads.
Once again, IBM has a great success, but squanders it.
My Lenovo laptop — heritage from the IBM ThinkPad
I had been using Toshiba laptop computers for over two decades, but Toshiba exited the business. I guess you could say that they pulled an IBM, throwing success away! Looking around at Best Buy, I settled on a Lenovo Yoga laptop.
My machine is not a ThinkPad, but it feels like it is part of the heritage from the IBM ThinkPad line from years ago.
It’s well-built, high-quality, and reliable, so it continues the IBM reputation for engineering, quality, and reasonable cost for relatively low-end consumer products.
Nothing to blame IBM for here.
Just wondering whether IBM really made the right call by walking away from this business, other than that they wanted to move away from low-end, low-margin, low-profit, low-service hardware and stay focused on high-end, high-margin, high-profit systems and services.
Minicomputers and the Series/1
Minicomputers took the industry by storm in the 1960s and 1970s, and despite its dominance of the computer industry, including its offerings of smaller mid-range computers, IBM missed the boat, big time.
Minicomputers were simpler, smaller, cheaper, and easier to use and maintain, and IBM had no response.
IBM could easily have had a solid response, but they chose not to.
Finally, in the late 1970s IBM did introduce the IBM Series/1 minicomputer. It wasn’t a complete flop, but it wasn’t a roaring success either.
Ironically, they introduced the Series/1 just as microcomputers, which later fueled the PC industry, were beginning to take off in the mid-1970s.
Just as with a lot of activities in life, timing can be everything. And just like quite a few other products and services, as good as IBM could be with raw technical advances, their timing could really suck.
The IBM Series/1 was not a bad product per se, but the timing really sucked.
Another black eye and bloody nose for IBM.
System/38
IBM had great commercial success with smaller to mid-range departmental data processing systems, including the System/32, System/34, and System/36. Customers loved them.
But then (after the System/34, but before the System/36) IBM revamped the product technology with the System/38, which included some really advanced features that promised significant advantages, but… the System/38 was not compatible with the System/32 and System/34. Customers hated that. Oops!
The System/38 had an advanced system architecture, with 48-bit unified memory and storage addressing based on research at MIT, and an integrated database system. Very impressive engineering.
A couple of IBM staffers from the Rochester, Minnesota facility where the System/38 engineering work was being done jumped ship and joined Wang; I worked with two of them there. Disenchantment with IBM management. IBM’s loss.
Meanwhile, over at Wang Laboratories, we (including me) ginned up some conversion tools which made it very easy for disappointed IBM System/32 and System/34 users to move their applications over to the Wang VS midrange system. Problem solved!
How could the much-vaunted IBM, master of so much technology, and with so much acumen, and with such a great sales force screw up so incredibly badly? I’m sure there are some great stories that could be told inside IBM, but the net-net bottom line is that this oddly seems to be one of IBM’s superpowers, snatching defeat from the jaws of victory.
System/360, 370, 303X, 308X, 3090, 390, Z family
The IBM System/360 computer line and its successors are one of the very few consistently bright spots in IBM’s product track record.
I honestly lost track of IBM’s progress (and tribulations) after I got deeply involved with the PDP-10 and DEC systems in general, and then microprocessor-based systems after that.
IBM was effectively ancient history to me until the introduction of the IBM PC.
The extremely successful IBM System/360 family was followed by successive innovations, which I was faintly aware of, but basically ignored.
The line of succession included:
- System/360
- System/370
- 303X series
- 308X series
- 3090
- System/390
- Z family
Even today the latest systems in the IBM Z family have the same basic instruction set as the original System/360 processors, albeit with a lot of enhancements.
Hey, sometimes, even IBM gets things right!
But, true to IBM’s reputation, their Z family is not getting the respect it deserves.
Sure, they do still have a loyal base of large customers, but it’s a relatively stagnant market, without any significant growth potential. Maybe it is still a cash cow, of sorts. But it’s not what anybody would call a roaring success. And not anything that most people would talk about, let alone extol.
System/370?
As impressive as the IBM System/360 family was, and as notable a step forward as the new System/370 was, by the time the 370 finally came out I had already moved on to RCA and DEC, so I paid no attention to it.
Or to its successors, such as the 3033 and IBM Z family.
Sure, IBM was still out there, doing plenty of technical innovation, plenty of robust marketing, and selling lots of systems.
But by the early 1970s, people had already discovered that they could get a large fraction of the benefit of an IBM system at a fraction of the price from IBM’s competitors.
As impressive as the technical innovation, marketing, and sales of the 370 were, they just weren’t as relevant as they used to be for IBM, or for the computer industry overall.
The long, slow decline of IBM — 1970 to 1990
A lot of the high-end technology that IBM used and even pioneered gradually became commoditized so that in the 1970s you could buy super-minicomputers which could perform as well as IBM mainframe computers, or at least good enough for many applications.
Microprocessors, microcomputers, and personal computers themselves didn’t crush IBM, but microprocessors rapidly grew in functionality, performance, and capacity, especially with networked microprocessor-based servers, so that by the 1980s they were supplanting the super-minicomputer systems, and eventually began supplanting even mainframe computer systems for many and most applications.
By the early 1990s IBM was really struggling, no longer able to reliably depend on the cash-cow of mainframe computers.
IBM bounces back under Louis Gerstner
Louis Gerstner took the helm of IBM in 1993, restructuring and reorganizing the company and focusing more on services revenue than on mainframe sales.
IBM was on the ropes, failing and flailing. There was talk that maybe it should be broken up, but… Louis Gerstner brought it back from the dead.
IBM had lost relevance. The rise of super-minicomputers challenged low-end mainframes and IBM’s low-end and mid-range systems.
There was some concern that IBM was sacrificing revenue and investment in research in favor of short-term profit. But, somehow, Gerstner sorted it all out, sort of, but close enough for Wall Street investors — and IBM customers.
It took the better part of a decade, but by the early 2000s IBM was once again a force of nature, but to a more limited degree, although Silicon Valley, Intel, Microsoft, Apple, and PC-clone vendors such as Dell and HP were the real force of nature now.
Workstations and networked servers — and PCs — were all the rage, fueled as well by the rise of the Internet and Web. IBM was marginalized to a significant degree. Just another player rather than the hands-down leader.
IBM back in the saddle, but with clipped horns
So, yeah, by the early 2000s, IBM did bounce back, in a fashion, but as a shadow of its former self. No longer the great powerhouse feared by all.
People, and competitors, and technology companies in general were no longer waiting for IBM to lead the way.
People didn’t hate IBM per se, but they didn’t fear or revere them either. Any love for IBM was replaced with indifference.
Deep Blue and Watson, the Watson Health debacle
I honestly haven’t followed — or otherwise been interested in — IBM’s forays into artificial intelligence (AI), but Deep Blue and Watson were notable achievements.
Deep Blue focused on playing chess and was even able to defeat world champion Garry Kasparov.
Watson excelled at natural-language question answering, processing lots of text (a precursor of sorts to what we today call large language models), and was even able to win on the Jeopardy! TV game show.
I can’t personally speak to the quality of IBM’s AI offerings, but I do have to note that they have achieved significant success.
That said, in the same spirit of the overall thrust of my overall experiences with IBM, it has to be noted that not all of IBM’s Watson AI efforts have been sterling successes. Very notably, IBM Watson Health was an abysmal failure. How IBM could succeed so well and then fail so miserably is the eternal conundrum of IBM.
Space Shuttle avionics computer
IBM developed a customized computer for the avionics of the NASA Space Shuttle. It worked like a charm. IBM also developed ground support software for the Shuttle.
Proof that IBM can develop quality products and services that satisfy customer needs when they put their mind to it.
Since the computer was originally designed in the 1960s, when semiconductor memory was still new, unproven, and there were concerns about radiation, it used old-fashioned magnetic core memory. But this worked fine. A later upgrade to the Shuttle avionics in the 1990s upgraded from core memory to semiconductor memory.
One benefit of the magnetic core memory was that it maintained its state even when powered off — and even if submerged in water. When the Space Shuttle Challenger was destroyed, they were able to retrieve the avionics computer from the Atlantic Ocean and even able to successfully read the contents of the core memory. How much more impressive than that can you get!
More evidence that IBM can do amazing things… sometimes… when they have the right kind of focus and competent management.
Relational databases, SQL, Oracle, DataStax
Another great example of an amazing technical advance from IBM is the conceptualization and development of relational databases and relational database management systems (RDBMS).
This was originally just a research project, led by E. F. Codd in 1970.
A separate project at IBM then developed SQL, the Structured Query Language, as a very high-level language for accessing and manipulating structured data.
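To see why SQL was such a leap, here is a minimal sketch of the declarative style it introduced, using Python's built-in sqlite3 module rather than Db2 (the table and data are made up purely for illustration):

```python
import sqlite3

# In-memory database; table and data are hypothetical examples.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Alice", "Sales", 50000), ("Bob", "Sales", 60000), ("Carol", "R&D", 70000)],
)

# One declarative statement: average salary per department.
rows = conn.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)  # [('R&D', 70000.0), ('Sales', 55000.0)]
conn.close()
```

One GROUP BY clause replaces what would otherwise be a hand-written loop with explicit sorting and accumulation; that declarative style was the revolution.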
IBM did turn the research into a successful product, Db2.
And Db2 is very popular in IBM shops.
So far, so good.
But not everybody is an IBM shop.
Minicomputers grew up to be networked mainframe-class servers, and UNIX took off as a vendor-neutral operating system.
Oracle came along and picked up these concepts of relational databases and SQL, based on UNIX — and Windows — and the rest is history. Oracle is now the hands-down leader in database systems.
IBM and its captive users continue to benefit and profit from relational databases and SQL, but Oracle and a raft of other database vendors, including Microsoft, and a number of open-source vendors, and their users, get most of the benefit and profit, not IBM.
Yet another example of IBM being innovative, but not getting the most or best from their own innovation.
In addition, there have been many innovations in the database world that IBM has not kept up with, including:
- Open source database systems.
- High-performance and high-capacity NoSQL database systems, such as Cassandra.
- Unstructured and semi-structured database systems, such as MongoDB.
- High-performance, high-capacity, distributed, and fault-tolerant NewSQL database systems, such as CockroachDB, using traditional SQL on the frontend but completely new and more advanced backends.
So, once again, IBM innovated, took the lead, and then… got left behind. Very disappointing.
But… maybe now they are trying to catch up. Just recently they announced that they would be acquiring DataStax, a leader in NoSQL with its own proprietary additions to the open-source Apache Cassandra NoSQL database system.
This intersects with me because I did a fair amount of consulting work with DataStax a decade ago. I also worked informally with Cockroach Labs on CockroachDB.
So, maybe IBM will bounce back and reclaim leadership in database systems. Or maybe not. This is what is so frustrating about IBM — no real clarity or sense of commitment to taking leadership in so many important areas of computing.
DataStax
It just dawned on me that if I had joined DataStax as an employee, which was offered at the time, I would be an IBM employee today, now that the acquisition has completed! But I didn’t, so I’m not. Still, quite interesting.
FORTRAN high-level programming language
The development of the FORTRAN programming language in the mid to late 1950s was an absolutely amazing, watershed moment for both IBM and the computer industry as a whole.
Scientific and engineering users no longer needed to be experts in machine and assembly language to do either simple or complex calculations.
Although FORTRAN is no longer one of the major programming languages in use, it dominated computing, overall and for scientific and engineering computation in particular, for quite some time. Other languages such as COBOL, PL/I, and BASIC stole a lot of its thunder, but even today, FORTRAN (now Fortran) does have its applications.
Sure, other vendors quickly replicated IBM’s efforts and offered their own FORTRAN compilers, but that’s typical of many technological advances.
In any case, the development of FORTRAN earns IBM praise for getting things right. They didn’t screw up when it came to FORTRAN.
Still, despite their innovation, they failed to financially capture all or even most of its success.
On the flip side, it may be true that the availability of FORTRAN enabled IBM to sell a lot more computer hardware than they might have without it.
FORTRAN H
A programming language isn’t much use without a compiler. IBM had a compiler for early versions of FORTRAN, but people quickly began to perform more and more complex and data-intensive computations and just as quickly realized that they wanted and needed greater performance. Sure, IBM offered more powerful computers for users needing greater performance, but the need for greater performance was insatiable.
Enter FORTRAN H, a specialized compiler which used very advanced techniques to analyze the user’s code and discover opportunities for optimizing execution of the code.
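As a concrete illustration of the kind of analysis described above, here is a sketch of loop-invariant code motion, one of the classic techniques such optimizing compilers pioneered. It's written in Python rather than FORTRAN, with the transformation applied by hand to show the before and after; the function names are illustrative, not from any real compiler:

```python
# A sketch of loop-invariant code motion, one classic compiler
# optimization. (Hand-applied here for illustration.)

def naive(xs, a, b):
    # Recomputes the product a * b on every iteration of the loop.
    return [x * (a * b) for x in xs]

def optimized(xs, a, b):
    # The optimizer hoists the invariant product out of the loop,
    # computing it once instead of len(xs) times.
    ab = a * b
    return [x * ab for x in xs]

xs = [1, 2, 3, 4]
# Same results, fewer multiplications at run time.
assert naive(xs, 2, 5) == optimized(xs, 2, 5) == [10, 20, 30, 40]
```

Multiplied across large numeric loops, this sort of mechanical rewriting is how FORTRAN H delivered its performance gains without users changing a line of code.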
People loved it. Users because they got basically instant gratification of much greater performance, and management since they didn’t need to buy a more expensive computer or buy more computers.
FORTRAN H was a big hit, a real commercial success. And it was a real technological leap forward. IBM deserves great praise for it.
Granted, other vendors and other programming languages eventually replicated all or most of the optimization features of FORTRAN H, but that’s typical of many technological advances.
Still, it’s yet another example of IBM innovating, but not capturing the lion’s share of the financial profits. And not capitalizing on their initial success with successor products.
COBOL
IBM was one of a number of computer vendors who collaborated on a committee to design the COBOL programming language.
So COBOL wasn’t an exclusively-IBM innovation, but they played an important role in making COBOL commercially viable.
Maybe more to the point, they didn’t screw it up.
PL/I
As noted, IBM and a slew of other vendors had great commercial success with COBOL for commercial applications, but it was IBM’s conceptualization and development of the PL/I programming language, introduced in 1964, that really stood out for me. I have nothing but praise for IBM there.
It really stood out in large part because it was a general-purpose programming language, suitable for commercial/business, scientific, engineering, and systems programming alike.
The MULTICS project at MIT even chose PL/I as the systems programming language for the MULTICS operating system.
Alas, PL/I was later eclipsed by C and Pascal, and later C++ and other programming languages, and truth be told, many users were content with the features of FORTRAN, COBOL, and BASIC, but it still was such a truly glorious programming language, well ahead of its time.
Okay, Algol was also a big success, but much more limited, at least here in the U.S.
I give IBM great praise for their conceptualization and development of PL/I.
For me, most of my early programming in high school was in an interactive version of PL/I, running on a time-sharing system called RUSH-5 (Remote User-Shared Hardware) from Allen-Babcock Computing (ABC). It was a truly awesome experience. It was a real letdown when the local community college pulled the plug on the use of RUSH-5 and we had to downgrade to programming in BASIC on the college’s new RCA time-sharing system (a Spectra 70/46).
APL
IBM invented a really weird programming language, APL, that was actually rather interesting but had limited success because it wasn’t for everyone.
If they introduced it today, they’d say it excelled at data science — manipulating multidimensional arrays.
It was most commonly used on a 2741 terminal with a special typeball (the removable part of a Selectric typewriter) with all sorts of special symbols that allowed a very concise representation of your code.
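For a taste of that conciseness: the classic APL one-liner +/⍳10 sums the first ten positive integers. A rough Python equivalent (APL itself won't run here) takes noticeably more notation:

```python
# The APL expression  +/⍳10  sums the first ten positive integers:
# ⍳10 generates the vector 1 2 3 ... 10, and +/ reduces it with addition.
from functools import reduce
from operator import add

iota = list(range(1, 11))   # ⍳10  → [1, 2, ..., 10]
total = reduce(add, iota)   # +/   → sum-reduction over the vector
print(total)  # 55
```

Two APL symbols do the work of two imports and two function calls; that density was exactly what attracted its devotees and repelled everyone else.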
It was a niche thing.
Besides great, big successes, IBM also excelled at very interesting but niche products.
So, APL is a great example of IBM excelling at innovation without it leading to a great commercial success.
Sierra and Summit supercomputers
IBM was involved with the design of the Sierra and Summit supercomputers for the Department of Energy. Both were massively parallel systems with nodes based on IBM POWER9 processors, advanced descendants of IBM’s POWER/PowerPC RISC line.
These were both significant innovations and advances by IBM, but… they didn’t result in commercial product lines for IBM. It was one and done (okay, two and done).
So, this was partially a technical achievement by IBM, but not a commercial achievement.
And they don’t advance IBM’s interests in the two top frontier application areas, artificial intelligence and quantum computing.
These were certainly interesting systems, but why did IBM develop both, other than simply to find some relevance for their significant investment in the PowerPC and RISC processors? They’re effectively dead-ends, from a commercial perspective.
I constantly have trouble remembering which DOE (U.S. Department of Energy) supercomputer IBM was responsible for, Summit or Frontier. Frontier was developed by Hewlett Packard Enterprise.
IBM Quantum — what is it, really?!
As I’ve noted, I’ve spent a lot of time following IBM’s efforts in quantum computing. This is primarily — actually, virtually exclusively — a research effort at IBM. They’re also trying to get customers to experiment with the technology, but they currently don’t have commercial-grade products or services for quantum computing. Yeah, they charge users to access their quantum computers, in the cloud, but, again, this is for experimentation, not production application development or deployment.
IBM makes a big deal about IBM Quantum, but what is that really?! Nobody really knows! It’s more of a code name than anything else. Or maybe just a placeholder.
This is a perfect example of how frustrating it can be dealing with IBM.
Some of the possibilities for what IBM Quantum is:
- A business unit. But I can find no direct evidence of that.
- A division. Again, no direct evidence of that.
- A product line. Again, no direct evidence of that.
- A department within IBM Research. That seems to be the closest choice.
- A brand name. Actually, technically, it is, in the sense that it is a registered trademark. But people don’t work for a trademark, and there are people who claim to work for IBM Quantum.
- IBM itself is unable to decide whether IBM Quantum is a research project or a commercial product line. Sometimes they seem to consider it either, or even both.
- A work in progress. With nobody exactly sure what it is or will be, just yet. Just a vague feeling that it will be big, sometime, eventually, just that nobody really knows what or when that will be, for sure, yet. Clear as… mud.
I checked IBM’s SEC filings and could find no mention of IBM Quantum in terms of revenue, expenses, investment, profit, or earnings. Or risks, but that’s a story for another day.
The IBM Quantum End User Agreement, Effective date: Dec 02, 2023 3:00 AM, has this definition for IBM Quantum:
- e. “IBM Quantum” means and may include actual quantum systems and the hardware, software, and web based applications required to use them. IBM Quantum may include, but is not limited to, (i) application programming interfaces (“API”), (ii) graphical user interfaces (“GUI”) (iii) quantum software development kits (“SDK”), (iv) quantum simulators and emulators, (v) Quantum Inputs, (vi) applications to administer access to IBM Quantum, (vii) applications to consume IBM Quantum resources, (viii) applications to execute programs on quantum computers and (ix) related webpages.
That’s the closest I could find to an explicit definition for IBM Quantum. But it explicitly limits it to hardware and software, none of the possibilities I listed above, and not a business unit.
So, we’re left dangling, wondering what IBM Quantum really is.
And maybe that is the ultimate truth, at present, for how IBM itself even thinks about what IBM Quantum really is.
This is a perfect example of how maddening it can be to deal with IBM. Even just to follow them. Or to simply describe what they’re doing.
They have a lot of interesting technology, they have a lot of business acumen, and manufacturing capabilities, and they have a great sales force, and then… they have way too much of this kind of… nonsense!
Quantum computing research — a mixed bag
IBM was doing great in the earlier stages of quantum computing. It was the kind of deep research that IBM excelled at. IBM Research, in Yorktown Heights, NY, a classical ivory tower, a force of nature.
Their 5-qubit system was an amazing advance. So far, so good.
From there, IBM has consistently increased the qubit counts. Again, so far, so good.
They did decrease the error rate and increase the coherence times, for a while, but then those improvements slowed down, dramatically. Low fidelity of qubits and gates is a huge problem with IBM’s quantum computing offerings at this stage, with no sign of that changing significantly in the coming years, just modest, very meager improvements, slowly, over time.
Quantum error correction (QEC)? It’s a joke, a gag line, that just isn’t funny anymore. There’s no sign or hint of a practical solution in the coming years. Classical computing has error correction that is simple, cheap, fast, and automatic, but IBM’s promises and proposals for quantum error correction (which are constantly changing) offer none of those qualities.
They were increasing Quantum Volume steadily, which was great, but then that slowed as well, then totally stalled, and now they no longer even report it. So, once again, the familiar bad-boy, Jekyll and Hyde pattern of IBM, doing well and then not so well.
IBM has consistently failed to improve qubit connectivity. Zero improvement. None. (Okay, now as I edit this, they are promising a modest improvement with their upcoming Nighthawk system.)
IBM has put a lot of effort into tools for developers, but I view that as a bug rather than a feature since so many of the tools are simply attempts to compensate for or mitigate deficiencies of the hardware. I call it overtooling — the tools are more a symptom of underlying problems rather than a solution to a real problem other than deficiencies in the hardware. Hardware problems should have hardware solutions, not software workarounds.
In short, IBM’s quantum computing research efforts have been a mixed bag. A significant number of sterling advances, but mixed in with too many missteps and stalled efforts.
Again, this is the kind of maddening pattern that is unfortunately all too common for IBM, and not limited to their efforts in quantum computing.
Quantum computing commercial products — a no-show
Despite all of the great research in quantum computing, IBM has failed to achieve the critical mass of innovation to produce a viable commercial product for quantum computing. Research is essential, but it is not the end goal.
Sure, IBM does actually charge people to use its research systems, for experimentation, but none of this is for production-grade commercial deployment.
The IBM roadmap for quantum computing goes to years 2027, 2029, and even 2033 and beyond, but with no indication as to when an IBM quantum computer might be capable of solving production-scale real-world practical problems of real significance with a demonstrable quantum advantage worthy of all of the effort and attention required to do so.
It’s all too vague and contingent.
This is not the IBM of its glory days of the 1950s and 1960s and 1970s or its IBM PC days.
This is a real disappointment.
Quantum computing engineering — a mixed bag
A fair amount of the basic engineering for IBM quantum computers has been quite decent, but in areas such as the architecture for the programming model, not so much. A real mixed bag.
While IBM did great with the System/360 architecture and programming model, they have significantly failed to meet that same bar with their quantum computers.
With the System/360, IBM had senior, experienced computer engineers and software engineers who knew both computer architectures and programming models inside and out, but for quantum computing… not so much.
Not that IBM is alone on this front. All of the quantum computer vendors are in this same boat. Basic engineering they do just fine, but computer engineering, programming architecture, and programming models, not so much.
Overall, IBM Quantum is a real disappointment on this front, the engineering of commercial products that can solve production-scale real-world practical problems of real significance with a demonstrable quantum advantage worthy of all of the effort and attention required to do so.
Overall for quantum, IBM has done reasonably well with the science, but not so well as far as engineering of commercial products
As already noted, IBM has done quite well with the science, research, at least through their five-qubit system, but beyond that initial, brilliant success, not so much.
Oh, to be sure, they’ve done a great job with a lot of bits and pieces, but the overall whole, the net effect, leaves a lot to be desired.
IBM Quantum is not IBM at its best.
But then again, what exactly is IBM Quantum supposed to be, anyway?
This is not to say that IBM can’t and won’t achieve great success, just that at present they are not on a track to any such great success, and with no slam-dunk guarantee of ever getting onto such a track.
So, flip a coin as to whether IBM does find the right track eventually, or not.
Maybe IBM’s quantum computing efforts will fail but they can buy their way to success, which they’ve done before
Or, maybe, IBM’s own in-house efforts at quantum computing do fail or run out of steam and they end up buying some startup who has figured out the magical formula for a successful and useful quantum computer.
Like it was with Google and video, where their in-house efforts failed, but they bought a slam-dunk home run with YouTube.
Or the way IBM just bought DataStax, or Red Hat.
Twilight? Quantum computers
My most recent encounters with IBM have revolved almost exclusively around quantum computing. I’ve written a lot about quantum computing over the past eight years, and IBM has figured prominently in my thoughts and writings. True to IBM’s traditional form, sometimes very positively, but generally very negatively. Still, on average they demand and deserve notable attention, even if their efforts cannot be given an A+ letter grade overall.
But, has IBM peaked again and is once again in a period of decline? Sure, as in past declines they are still innovating and engineering products, but not with the same level of critical mass and alacrity as the past glory days of IBM.
That doesn’t mean that you can definitively count them out. But it does mean that you can’t count on them for the foreseeable future, until something changes.
AI?
AI might provide a brighter future for IBM, but again, given IBM’s Jekyll and Hyde performance over the years and decades, it is most likely to be a roller-coaster ride.
Smartphones and tablets
Apple, Google, Microsoft, and Hewlett-Packard have all had forays into phones, tablets, and other forms of mobile computing devices, beyond laptop computers, to widely varying degrees of success, but not IBM. It’s not clear if that is a good thing or a bad thing for IBM.
Technically, IBM did have a tablet device — pen-based — in the early 1990s, but it wasn’t a great success, and it was just that one time — and it was enterprise-oriented, not for the consumer market.
Overall, IBM’s primary focus has been enterprises, the business market, not the consumer market.
Again, it’s not so clear if that’s a good thing or a bad thing for IBM.
Overall, I lean towards praising IBM for its enterprise focus, the BM in their name, for Business Machines, but the questions remain.
THINK
THINK was the ubiquitous slogan for IBM. I don’t know if it still is, or not. And if it is, whether it means the same or as much as it originally did. It was invented by IBM founder Thomas J. Watson, Sr. even before he was at IBM, in 1911, while he was at the National Cash Register Company (NCR).
From the Wikipedia page for THINK:
- At an uninspiring sales meeting, Watson interrupted, saying “The trouble with every one of us is that we don’t think enough. We don’t get paid for working with our feet — we get paid for working with our heads”. Watson then wrote THINK on the easel.
- Asked later what he meant by the slogan, Watson replied, “By THINK I mean take everything into consideration. I refuse to make the sign more specific. If a man just sees THINK, he’ll find out what I mean. We’re not interested in a logic course.”
I like that thinking: “If a man just sees THINK, he’ll find out what I mean. We’re not interested in a logic course.” Amen!
I like the idea of this slogan, although I don’t think it usually translates well in the typical modern tech office environment.
Somewhere along the way in my early computing days I came into possession of an official IBM THINK standup desk plaque. I recall having it on my desk in my DEC days. I may have left it at DEC, because I don’t recall having it after that, although I may have still had it when I was at an electronic CAD/CAE startup in Boulder, Colorado, but I’m sure I didn’t have it after that.
IBM. Not just data, reality.
Back in 1971 I ran across a very impressive IBM magazine ad, maybe it was TIME Magazine, which I carried around from job to job for a number of years. 90% of the ad was a two-page spread picture of a solitary guy sitting on a rock of a rocky ocean shore with high hills in the distance (maybe California, Oregon, or Washington) with this simple line of text at the very bottom:
- No one can take the ultimate weight of decision-making off your shoulders. But the more you know about how things really are, the lighter the burden will be. IBM. Not just data, reality.
This stuck with me for a very long time. Okay, it still does!
I carried that ad around and posted it in my offices for a number of years. I last recall having it at the electronic CAD/CAE startup in Boulder, Colorado, circa 1985. The paper and Scotch tape holding the pages together were getting brittle and yellowed.
I did manage to find a copy of the ad listed on eBay.
Great slogan. I actually like it better than THINK! And I do think it works as well today as 55 years ago.
Wild ducks
I don’t recall exactly when, just that it was decades ago, but I heard that IBMers who didn’t really fit into the normal company culture at IBM were known as wild ducks.
As long as they were productive and innovative, they were both tolerated and praised.
But mostly just tolerated, at best, and never really truly accepted.
I’ve heard conflicting narratives as to whether they were encouraged or simply tolerated and not really preferred at all. One narrative is that Thomas J. Watson, Jr. encouraged wild ducks and considered them essential.
Nonetheless, they were considered valuable from an innovation perspective.
But that was many decades ago.
I can’t speak as to whether wild ducks are still tolerated, encouraged, valued, or praised at IBM, whether to the same degree as decades ago, or whether the culture at IBM has evolved much at all.
I’ve had only four visits to IBM
With all these years of exposure to IBM, it’s actually a surprise, to me, that I’ve visited IBM facilities only four times in over five decades:
- Their Mahwah, New Jersey data center when I was in high school attending a six-week intensive summer student science training program at Stevens Institute of Technology.
- The IBM headquarters on Madison Avenue in NYC, again when I was in high school for that summer training program at Stevens. To explore their display of old unit record equipment in their lobby.
- Many years later, as a professional software developer at an electronic CAD/CAE startup in Boulder, Colorado, to visit IBM’s Austin, Texas “competence center” for a briefing on the new IBM RT PC and its ROMP (RISC) chip, where they tried to convince us to use it as the basis for a CAE workstation.
- Some years later as an independent consultant and entrepreneur to visit a small IBM field office in Kirkland, Washington to port my Liana programming language to Windows NT running on the PowerPC.
That’s it.
All of the rest of my experience with IBM is from the media, reading, using their products, and folklore and rumors that I’ve heard from others.
Limited exposure to IBM research
I’ve had no direct exposure to IBM Research. Some indirect exposure though.
Three incidents stand out. But, literally, I had no other exposure to IBM Research.
Adjunct Professor from IBM Research
My first, indirect exposure to IBM Research was actually when I was in school, in college, as a senior, taking an upper-level grad seminar course in graph theory in the math department at Stevens Institute of Technology, taught by an adjunct professor from IBM Research, from the IBM T. J. Watson Research Center in Yorktown Heights, the IBM Research ivory tower itself.
He was a much older mathematician, very expert in advanced flow chart graph theory. It was a tough course and most of the students dropped out, but it was great to get so deep into theory since my entire background and interest was strictly practical in nature. The take-home final exam ended up being fifteen pages of mathematical proofs, for me. That’s as deep as I ever got in math. I did take a few computer science courses in the math department, but mostly I took computer science and electrical engineering courses in the Electrical Engineering department, where I earned my Master’s degree in Computer Science, in parallel with my Bachelor of Science degree, in four years (actually three and a half years since I dropped out for a semester to… work).
So, at least coming out of school, I had had a positive even if indirect experience with IBM Research. An initial positive impression.
I actually took this advanced grad math course for undergraduate credit since I was not permitted to take any undergraduate computer courses, having taken the grad-level equivalents of all of the undergrad computer science courses by the middle of my sophomore year, before the first elective undergrad courses were usually permitted.
IBM researcher at a California database conference
Back in the very early 1980s I attended a database conference in California, in Los Angeles, near UCLA in Westwood. I ran into a researcher from IBM, Almaden, I think. He was telling me about this application generator tool that he had developed.
He had just finished the research and had written a report as his boss requested. When he asked his boss what the next step would be, thinking it would be to develop a product plan, he was absolutely stunned when his boss simply said that was it, this was the end of the project. The boss put the report on his shelf and told this guy that it was now time for him to find a new project! When this guy, this researcher, objected, his boss simply said, “We only do research, not products.”
Really! Wow!!
IBM instantly lost a lot of credibility with me.
IBM researchers at IEEE chapter meeting
I’m not sure of the year, maybe late 80s or early 90s, when I had traveled back east from Boulder, Colorado for a year to do some consulting at another tech startup, I attended an IEEE chapter meeting held at the Holiday Inn in Marlborough, Massachusetts. The event may have actually been sponsored by IBM — IBM Research.
It was a bit strange, considering my previous encounter with a guy from IBM Research. This time, the IBM researchers were very eager, EXTREMELY eager, to see their research results get commercialized.
I don’t remember much, but there was a lot of discussion of a new chip technology they had developed using copper rather than aluminum for component interconnections on the chip.
And I do recall clearly that they were especially proud of the fact that they had implemented the first 1 GHz Intel-compatible chip — using these new copper interconnects.
It was a very interesting experience, providing another — and different — window into IBM Research.
I did come away with a renewed, somewhat positive impression of IBM Research, but I wasn’t so comfortable that they were so intensely focused on near-term commercialization.
It made me wonder if this was one of those times when senior management grew tired of investing boatloads of cash in research, not seeing revenue and profits from it after extended periods of time, and then pressured researchers to prove that their work had commercial value.
Hard to say for sure, but that was the impression I got.
Another adjunct professor, guest lecturer from IBM
In another one of my grad courses at Stevens Institute of Technology, in the math department, the adjunct professor (I can’t recall which company he worked for) brought in a guest lecturer, another adjunct professor from IBM. He was actually a relatively recent graduate of Stevens. I don’t recall which part of IBM he worked in, but he was doing systems programming, not in IBM Research.
I actually didn’t have a chance to talk with him since it was just that one class. He did a decent job, seemed competent, but… he was with IBM, so that didn’t really appeal to me, since I had already decided that I was going to work at DEC when I graduated.
His lecture gave me a fairly positive impression of IBM, as a professional organization, but not the kind of scrappy young company that DEC had a reputation for being. Not the kind of company that had any appeal for me.
Boneheaded stupidity and PhDs
A priori, one would expect individuals with advanced degrees, especially PhDs, to make exceptionally fantastic decisions, but throughout my career I’ve found that there is no strong correlation between having a PhD and making exceptionally fantastic decisions.
Generally, the PhD is irrelevant for most decisions in a high-tech business.
But, it is not so uncommon for super-smart people with PhDs to have blinders on and biases which prevent them from seeing the world and real-world considerations as they are, getting hung up on esoteric and abstract models (and math!) that are disjoint from the real world.
IBM is no exception to this ironic phenomenon.
IBM Quantum is a perfect example.
It could be that the particularly intense focus on the esoteric physics involved in quantum computing overly focuses IBM management on the role of the PhDs, even to the degree that practical, common-sense real-world considerations are denigrated in favor of esoteric physics-based models.
In short, PhDs are no great defense against boneheaded stupidity.
Me and bureaucracies, a match made in… hell!
I can actually get along with a lot of bureaucracies and individual bureaucrats just fine, at least up to a point, which is not very far, at all.
I used to say that I can put up with any work environment for six months, but then I start to feel a sense of ownership and expectation and can’t accept how badly the organization is being run (at least from my perspective).
Superficially, politely, me and bureaucracies get along just fine. At least for a brief visit. But, going beyond the superficial, once a disagreement, dispute, or misalignment occurs between me and a bureaucracy, then… WATCH OUT!! I’m the poster child for the old adage about someone who does not suffer fools gladly.
And if IBM is one thing, it’s a bureaucracy, a very large and rigid bureaucracy, not an organization which would tolerate technical staff who are the likes of me.
Maybe I could have survived at IBM for a year or two or three, maybe four, but then the honeymoon would most certainly be over.
I never worked at IBM, nor ever wanted to
Summarizing,
- I never even considered working at IBM. I never even imagined it as a possibility.
- I didn’t even consider interviewing with IBM when I got out of school.
- Even in later years, I never considered it as even a remote possibility.
- If I had gone to IBM, I would have been labeled a “wild duck” — and have considered that a badge of honor rather than a serious personality flaw!
Huh… IBM has no products or services for me!
Come to think of it, I am currently not using even a single IBM product or service.
Yeah, I did once own an original IBM PC.
And my current PC is a Lenovo laptop, but that’s not IBM either, now.
Although… I am an IBM shareholder, so I am technically a part-owner of the company. And the stock does pay a dividend, so that’s a product of sorts. But I meant IT products and services.
Sure, I have an interest in quantum computing, and I do read about IBM’s quantum computing capabilities, but I’m not a customer or user.
This is so curious, and fascinating, to me.
IBM is simultaneously so near, so close, but so far away.
And I don’t expect that I’ll ever use another product or service of IBM. That’s just the nature of IBM, more enterprise focused. The IBM PC, as a consumer product, was such an aberration.
My critical mass cloud model for innovation
Even when I was at DEC, I noticed that even DEC had quite a few notable technology, product, and market failures, despite having lots of successes.
Thinking about that a lot, I came up with my critical mass cloud model for innovation, that reconciled DEC’s successes and failures.
The model is very simple.
- Think of everything that happens within the company as happening deep within an opaque cloud.
- A lot of failed innovations fail deep within the cloud, never making it out of the cloud, never seeing the light of day. They remain totally hidden from observers in the outside world.
- Any big successes come flying out of the cloud like rockets. That’s what most people see, the successful rockets, not the failures that never make it out of the cloud.
- As long as there is a critical mass of big successes flying out of the cloud, it simply doesn’t matter how many failed innovations don’t make it out of the cloud.
I think this same model reconciles IBM’s successes and failures as well.
Much later, I realized that this is similar to the venture capital (VC) investment model. Invest in a range of companies, knowing that a fair number will fail, that a fraction will be mediocre successes, and that a very few will be ten-bagger home runs.
My model isn’t perfect since there are plenty of innovations that make it at least part way out of the cloud and then fail, so people do see a fraction of the failures. But, those early market failures, infant mortality, are really just the exceptions that prove the point. The essence of the model is how many innovations and products have a sustained existence outside of the cloud — the constellations of stars outside the cloud. All is good if a company has a critical mass of constellations of stars.
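The VC analogy can be made concrete with a toy simulation. This is purely illustrative; the probabilities, project counts, and the notion of a "star" threshold below are made-up numbers for the sketch, not data about IBM, DEC, or any real portfolio:

```python
import random

def simulate_cloud(n_projects=100, p_escape=0.10, p_star=0.30, seed=42):
    """Toy version of the critical mass cloud model: most innovations
    die invisibly inside the opaque cloud; a few escape it, and only a
    fraction of those escapees become sustained 'stars'. Outsiders only
    ever see the escapees, never the hidden failures."""
    rng = random.Random(seed)
    escaped, stars = 0, 0
    for _ in range(n_projects):
        if rng.random() < p_escape:      # innovation makes it out of the cloud
            escaped += 1
            if rng.random() < p_star:    # sustained success, not infant mortality
                stars += 1
    hidden_failures = n_projects - escaped
    return escaped, stars, hidden_failures

escaped, stars, hidden = simulate_cloud()
print(f"{escaped} of 100 projects escaped the cloud, "
      f"{stars} became stars, {hidden} failed invisibly")
```

The point of the sketch is the asymmetry: observers judge the company only on `stars`, while the bulk of the portfolio fails inside the cloud where nobody sees it, which is exactly the VC expectation of many write-offs funding a few big winners.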
Shareholder
Although my retirement account focuses primarily on younger high-flying tech stocks, I have stock in a number of older, established tech companies as well, including IBM.
Actually, I bought that stock just a few years ago when it seemed rather beaten down during the COVID-19 pandemic. It has bounced back nicely since then, but has been volatile. I honestly don’t know what the stock’s future performance will be, although it does pay a moderately decent dividend to reward patience.
Will AI or quantum computing boost IBM significantly in the coming years and decades? Hard to say, but given IBM’s Jekyll and Hyde performance over the years and decades, it is most likely to be a roller-coaster ride.
When I started writing this paper, in August 2024, the stock price was almost exactly where it was on March 8, 2013, over eleven years earlier. It has moved up a bit since then, but not so much on an annualized basis. But it has paid a semi-decent dividend over all of those years, so that may be a saving grace.
Right now
My involvement with IBM right now, here in 2025, is limited to:
- I’m a shareholder, with IBM stock in my retirement account. Doing nicely. IBM does indeed have a critical mass of commercially-successful products and service offerings, for now, at least, even if they do continue to make a lot of boneheaded decisions and squander a lot of innovative opportunities.
- I continue to follow IBM’s research and promotional efforts in quantum computing. It’s a real mess. I am not optimistic. Once again, they had a great research project, had some great initial results, but then… they’ve squandered that initial success, to the point that they are now, literally, wandering in the wilderness, with no hope of finding a good direction, let alone driving it to a great success. What a shame!
- Hmmm… there must be a third one, but, honestly, I can’t think of one!
Next?
I honestly have no idea how IBM might figure in my career (retirement) and life in the coming years.
Technically, I’m officially retired now, so I have no career future, per se.
I am continuing to follow IBM’s foray into quantum computing, for now, but I’m not so sure how much longer they’ll be able to keep my interest.
IBM has indeed played a pivotal role in my career and the computer industry overall
To be sure, IBM has indeed played a pivotal role in my career and the computer industry overall.
Granted, it has been a very mixed experience, but not an absolute bust by any measure.
Clearly, my own trajectory has benefited from IBM, its innovations, and its business successes.
Conclusions
What more is there to say, other than that we have no idea what the future will bring, other than that IBM will continue to be downright maddening, providing awesome inspiration and even brilliance one moment, but then disheartening disappointment, boredom, desperation, and despair the next, constantly blowing hot and cold, with more cold than hot from my perspective, rarely at a nice even and comforting temperature. Dr. Jekyll and Mr. Hyde, indeed! Bwahaha! Bwahahahahaha!!!
For more of my writing: List of My Papers on Quantum Computing.
