COBOL and mainframes have a reputation for superior performance – „Workhorse“ is a term often associated with them. The well-known COBOL brain drain survey from 2009 revealed that 87% of COBOL users still believed that COBOL had much better performance than other languages – or at least about the same. I do believe that at some point in the past this actually was the case, but mostly because mainframe hardware was superior at the time. The advocates just kept repeating the claim – they never re-evaluated whether it was still true.

One factor I see is that COBOL is rather low-level. In the past, being low-level was seen as an advantage for performance. When I started my academic studies, people still believed that compilers would never surpass assembly code hand-written by humans. But in the early 2000s that boundary was broken – today’s compilers produce such good assembly code that even the best human assembly gurus can’t compete anymore¹. So nowadays, low-level is a disadvantage. Low-level means that you are an obstacle to the compiler. Your interference prevents it from doing its job as well as it could.

In 2006, the University of Münster found that COBOL programs are 20 times slower than corresponding Java programs (which in turn are about 50% slower than C++ programs), but I don’t think that this is a fair comparison. COBOL programs often run on mainframes, whose processors have hardware support for some of COBOL’s features (like calculations in binary-coded decimal). Intel processors, for example, have hardware support for floating-point calculations. So CPU cycles aren’t everything here: on Intel CPUs, COBOL programs take a performance hit beyond CPU cycles, because decimal arithmetic has to be emulated in software; on mainframes, C++ programs take a performance hit beyond CPU cycles as well. So the performance is hard to compare fairly. Yet most comparisons you find online ignore exactly this.
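To illustrate why decimal arithmetic matters for business software at all, here is a minimal C++ sketch (the loop size and the amounts are arbitrary and chosen only for illustration): binary floating point cannot represent 0.10 exactly, so errors accumulate. That is why COBOL programs use decimal types like packed decimal, and why hardware support for them is a real advantage.

```cpp
// Minimal sketch: why decimal arithmetic matters for business code.
// Binary floating point cannot represent 0.10 exactly, so rounding errors
// accumulate. COBOL's packed-decimal types avoid this, and mainframe CPUs
// execute decimal arithmetic in hardware.
#include <cstdint>
#include <cstdio>

int main() {
    double       sumBinary = 0.0;
    std::int64_t sumCents  = 0;          // decimal-style bookkeeping: exact integer cents

    for (int i = 0; i < 1'000'000; ++i) {
        sumBinary += 0.10;               // accumulates a tiny binary rounding error each time
        sumCents  += 10;                 // exact
    }

    std::printf("binary double : %.10f\n", sumBinary);          // drifts slightly away from 100000
    std::printf("exact decimal : %.2f\n",  sumCents / 100.0);   // exactly 100000.00
    return 0;
}
```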

Let me tell you about my personal experience on that topic.

I have worked for a company that develops enterprise resource planning software written in COBOL. It runs in non-mainframe scenarios, so my own experience could be subsumed under the previous paragraph, but I have lots of details to expand on the reasons for the performance problems in COBOL. The software said company sells runs on Windows and has a graphical user interface (this wouldn’t even be worth mentioning if we weren’t talking about COBOL):

  • Fun fact: they switched from text mode to graphical user interfaces as late as 2004 – almost 20 years after the introduction of Windows 1.0!
  • Another fun fact: the software they sell is for 32-bit Windows, because the compiler vendor shut down their COBOL operation in 2006 – my old employer still uses that ancient compiler to this day (as far as I know). On 64-bit processors (which have been the standard since around 2010), 32-bit programs only run in a compatibility mode that further impairs performance. Also, 32-bit programs cannot address more than 4 GB of memory, and with the old file APIs they typically cannot handle files larger than 4 GB either.

What I saw at that company looked very different from „workhorse“ performance! The problem is that graphical user interfaces need to be updated frequently. For example, if you move window A in front of window B and then activate window B again, window B needs to redraw itself. Otherwise you’ll just see a gray or white rectangle where window A had been. In other programming languages, this problem was solved decades ago: you have a separate thread (called the „GUI thread“) which does nothing but wait for messages from the operating system. This thread redraws your window if and only if it receives a WM_PAINT message, i.e. you only spend time on redrawing when it is actually necessary. The business logic can work on a different processor core uninterrupted – except for waiting on a mutex here and there, which doesn’t slow it down much. The business logic doesn’t need to (and according to programming best practices shouldn’t) know that the GUI even exists.

This is what makes modern programs so smooth and responsive.
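For readers who haven’t done Windows programming, here is a minimal Win32 sketch of that pattern (simplified, error handling omitted; the class name, row count and update frequency are made up for illustration). The GUI thread only waits for messages and repaints on WM_PAINT, while the business logic runs on its own thread and merely asks for a repaint now and then.

```cpp
// Minimal Win32 sketch (simplified, no error handling): the GUI thread
// only repaints when the OS sends WM_PAINT, while the business logic
// runs undisturbed on a separate worker thread.
#include <windows.h>
#include <atomic>
#include <thread>

std::atomic<long> g_rowsProcessed{0};    // shared state, written by the worker

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    switch (msg) {
    case WM_APP:                         // the worker asked for a repaint
        InvalidateRect(hwnd, nullptr, TRUE);
        return 0;
    case WM_PAINT: {                     // redraw only when actually needed
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        char text[64];
        wsprintfA(text, "Rows processed: %ld", g_rowsProcessed.load());
        TextOutA(dc, 10, 10, text, lstrlenA(text));
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int) {
    WNDCLASSA wc{};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "DemoWindow";
    RegisterClassA(&wc);
    HWND hwnd = CreateWindowA("DemoWindow", "GUI thread demo",
                              WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                              CW_USEDEFAULT, CW_USEDEFAULT, 400, 200,
                              nullptr, nullptr, hInst, nullptr);

    // Business logic on its own thread; it never touches the window directly,
    // it only posts a message (PostMessage is safe to call across threads).
    std::thread worker([hwnd] {
        for (long row = 1; row <= 1'000'000; ++row) {
            g_rowsProcessed = row;                    // the "real" work goes here
            if (row % 10'000 == 0)
                PostMessageA(hwnd, WM_APP, 0, 0);     // cheap repaint request
        }
    });

    MSG msg;                                          // the GUI thread just waits
    while (GetMessageA(&msg, nullptr, 0, 0)) {        // for messages from the OS
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    worker.join();
    return 0;
}
```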

COBOL doesn’t provide multi-threading, so this solution doesn’t work in COBOL – GUI updates and business logic must be interwoven. Without function pointers you don’t have dependency injection, and therefore the business logic must know about the GUI and interact with it explicitly. This is a massive violation of the Model-View-Controller separation principle and in turn a massive violation of the Single Responsibility Principle – which is the first (and in my opinion most important) of the S.O.L.I.D. principles. In my experience, any violation of the S.O.L.I.D. principles causes coupling – one major cause of „dependency hell“ and software erosion.
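For contrast, here is a hedged sketch of what dependency injection looks like in a language that has function pointers (all names are invented for illustration): the business logic only knows an abstract progress callback, and the GUI, or a test harness, plugs itself in from the outside. The business logic never has to know that a GUI exists.

```cpp
// Hedged sketch of dependency injection: the business logic only knows an
// abstract progress callback and has no idea whether a GUI, a log file or
// a unit test is listening. All names here are invented for illustration.
#include <cstddef>
#include <functional>
#include <iostream>
#include <vector>

using ProgressListener = std::function<void(std::size_t done, std::size_t total)>;

void processOrders(const std::vector<int>& orders, const ProgressListener& onProgress) {
    for (std::size_t i = 0; i < orders.size(); ++i) {
        // ... the real business logic for orders[i] would go here ...
        onProgress(i + 1, orders.size());   // notify whoever is listening, if anyone
    }
}

int main() {
    std::vector<int> orders(100, 42);

    // The "view" is injected from the outside; swapping it for a GUI progress
    // bar (or a no-op in a batch run) requires no change to processOrders.
    processOrders(orders, [](std::size_t done, std::size_t total) {
        if (done % 25 == 0 || done == total)
            std::cout << done << "/" << total << " orders processed\n";
    });
    return 0;
}
```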

Apart from the architectural problems – developers want to avoid dealing with messages from the operating system, so instead of looking for WM_PAINT messages, they just redraw the window at fixed intervals – typically after handling about 50 table rows, regardless of whether a redraw is actually necessary or not! Now consider that drawing a window means explicitly setting every single pixel (that’s more than 2 million on a Full HD monitor)! This takes a lot of time, because – hardware acceleration? What is that!?
Also take into account that the absence of threads means that the drawing has to be done on the same thread as the business logic. This means the business logic has to be stopped for the entire time spent on all the (often unnecessary) redrawing! So about 99.9975% of the time is spent on drawing the GUI.
And how often should you redraw? If you don’t redraw often, the UI becomes unresponsive, and you embarrass yourself in front of your customers. If you redraw often, your business logic becomes extremely slow and the UI still feels unresponsive, because user input isn’t processed while the window is redrawing (so there is a noticeable delay between clicking a button and the button going down – or between pressing a key on the keyboard and the character appearing in a text box). I haven’t seen a single program there that felt smooth and responsive! But I have seen programs that were so slow they needed hours just to process a few hundred database entries. Even assuming a redraw in every single iteration, I can’t imagine what they might have done to make the software that slow.
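To put a rough number on this dilemma, here is a small, self-contained C++ simulation of the anti-pattern (the per-row cost, the 20 ms redraw stub and the every-50-rows interval are illustrative assumptions, not measurements from the actual product):

```cpp
// Self-contained simulation of the anti-pattern described above: drawing and
// business logic share one thread, so every forced redraw stalls the work.
// The per-row cost, the 20 ms redraw stub and the 50-row interval are
// illustrative assumptions, not measurements from any real product.
#include <chrono>
#include <iostream>
#include <thread>

volatile long long sink = 0;

void processRow() {                       // stand-in for the real business logic
    for (int i = 0; i < 1'000; ++i) sink += i;
}

void fullRedraw() {                       // stand-in for repainting every pixel
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
}

int main() {
    constexpr int rows = 10'000;
    constexpr int redrawEveryNRows = 50;

    const auto start = std::chrono::steady_clock::now();
    for (int row = 1; row <= rows; ++row) {
        processRow();
        if (row % redrawEveryNRows == 0)
            fullRedraw();                 // the business logic is stopped meanwhile
    }
    const auto totalMs = std::chrono::duration_cast<std::chrono::milliseconds>(
                             std::chrono::steady_clock::now() - start).count();

    std::cout << "total: " << totalMs << " ms, of which roughly "
              << (rows / redrawEveryNRows) * 20 << " ms were spent redrawing\n";
    return 0;
}
```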

What about mainframe scenarios?

I think I have detailed the performance problems in non-mainframe scenarios enough, but let me emphasize again: unless you have a mainframe, COBOL is the clear loser at this point.

In scenarios where you do have mainframes, a crucial question is the age of the mainframe. On average, mainframes are 17 years old – and a $35 Raspberry Pi 3 from 2016 outperforms a $1,470,000 IBM z800 from 2002. Also, the Raspbian repositories contain a .deb package for the Hercules mainframe emulator. So you could quite easily install MVS or z/OS on a Raspberry Pi and migrate your whole mainframe to it. A fitting case could even be built out of LEGO. (IBM doesn’t license z/OS for Hercules, though, as far as I know.)
Regardless of the performance – I strongly question whether such a heavy investment in a mainframe even makes sense financially, given how fast it loses its value.

In the past, a main reason for buying mainframes was VSAM – a technology that lets you manually map your database structure to physical hard-drive locations, so that you can optimize access speed on disk-based hard drives. With solid-state drives, this has become completely pointless.
Also, VSAM works with fixed-size records. So in VSAM databases, each first name, surname, company name, street name, city name etc. must have the same length in every entry. To allow long names, the records must be long. But that also means that shorter names must be padded with many, many whitespace characters – wasting a lot of disk space.
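To make the padding problem concrete, here is a hedged C++ sketch of a fixed-size record layout (the field widths are invented for illustration; real copybooks differ): every record occupies its full declared size, no matter how short the actual values are.

```cpp
// Hedged sketch of a fixed-size record layout, as a fixed-length record
// design forces it. The field widths are invented for illustration; every
// record occupies the full 120 bytes no matter how short the values are.
#include <cstdio>
#include <cstring>

struct CustomerRecord {
    char firstName[30];
    char surname[30];
    char street[30];
    char city[30];
};

// Copy a value into a fixed-width field and pad the rest with blanks.
void setField(char* field, std::size_t width, const char* value) {
    std::size_t len = std::strlen(value);
    std::memset(field, ' ', width);
    std::memcpy(field, value, len < width ? len : width);
}

int main() {
    CustomerRecord rec;
    setField(rec.firstName, sizeof rec.firstName, "Jo");        // 2 bytes used, 28 padded
    setField(rec.surname,   sizeof rec.surname,   "Li");        // 2 bytes used, 28 padded
    setField(rec.street,    sizeof rec.street,    "A-Street");  // 8 bytes used, 22 padded
    setField(rec.city,      sizeof rec.city,      "Ulm");       // 3 bytes used, 27 padded

    std::printf("record size on disk: %zu bytes, actual payload: %zu bytes\n",
                sizeof rec,
                std::strlen("Jo") + std::strlen("Li") + std::strlen("A-Street") + std::strlen("Ulm"));
    return 0;
}
```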

Speaking of hard drives – a huge problem in COBOL is that its programs are often designed for scenarios with very, VERY little RAM – a few hundred KiB, maybe (early versions of COBOL didn’t even support more than 64 KiB). Because of that, programs often don’t read data into memory and process it there – no, they process the data on the hard drive, copying it back and forth between auxiliary files. This saves a great deal of RAM – but in an era where servers come with hundreds, sometimes even thousands, of GiB of RAM, in-memory processing makes much more sense, and it is about 20,000 times faster. Virtual mainframes could pretend to write data to the hard drive but actually work on in-memory caches. This would heavily improve performance – without the slightest change to the operating system or the program.
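Here is a small, self-contained C++ sketch contrasting the two styles. The first variant round-trips every value through an auxiliary file, the way very-low-RAM designs do; the second keeps everything in memory. The data size and the temporary file name are arbitrary, and the measured factor depends heavily on the disk and the OS cache, so treat the numbers as illustrative.

```cpp
// Hedged sketch: the same aggregation done the "auxiliary file" way (write
// everything out, read it back, as very-low-RAM designs do) versus fully in
// memory. The data size and the temporary file name are illustrative.
#include <chrono>
#include <cstdio>
#include <fstream>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    constexpr std::size_t n = 1'000'000;
    std::vector<long long> data(n);
    std::iota(data.begin(), data.end(), 0);

    using clock = std::chrono::steady_clock;

    // Variant 1: round-trip every value through a file, record by record.
    const auto t0 = clock::now();
    {
        std::ofstream out("aux.tmp");
        for (long long v : data) out << v << '\n';
    }
    long long fileSum = 0;
    {
        std::ifstream in("aux.tmp");
        long long v;
        while (in >> v) fileSum += v;
    }
    const auto fileMs = std::chrono::duration_cast<std::chrono::milliseconds>(clock::now() - t0).count();

    // Variant 2: keep the data in RAM and process it there.
    const auto t1 = clock::now();
    const long long memSum = std::accumulate(data.begin(), data.end(), 0LL);
    const auto memMs = std::chrono::duration_cast<std::chrono::milliseconds>(clock::now() - t1).count();

    std::cout << "file-based: " << fileMs << " ms, in-memory: " << memMs
              << " ms, sums equal: " << (fileSum == memSum ? "yes" : "no") << "\n";
    std::remove("aux.tmp");
    return 0;
}
```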

In light of these numbers, this article about platform migration into the cloud becomes less surprising. The article is in German – it says „Zum Erstaunen vieler eingefleischter Mainframe-Verfechter ist die Performance bei leistungsstarken SDMs häufig sogar noch höher als beim klassischen Mainframe“, which translates to „To the astonishment of many die-hard mainframe advocates, the performance of powerful SDMs is often even higher than that of the classic mainframe“. It also says that they cut their code down by 77.5% and their costs by 66%.

I do not argue for platform migration. Especially not into the cloud!
But this does show that the performance of a real mainframe is already worse than that of a virtual mainframe running on a „normal“ cloud server (despite the additional virtualization!).
This would be impossible if mainframes really were the workhorses they are sold as – and it shows that the performance of a real mainframe is even worse than the factor of 20 from above.


¹ This is probably not only due to the incredible optimizations done by modern compilers (like computations being delegated to co-processors ahead of time) – I think it is also due to the heavily optimized libraries for handling performance bottlenecks (file buffering, memory management, hardware-accelerated graphics, hashing, etc.) found in the major programming frameworks.