In the past two months, the move toward IT efficiency in the U.S. federal government has gained new urgency. According to the U.S. Government Accountability Office (GAO), the federal government spends over $100 billion a year on IT systems, many of which do not speak to each other and are rife with outdated capabilities and security vulnerabilities. Updating these systems will be necessary to reach the level of efficiency the government is now targeting.
As part of this effort, and to make the government sleeker, faster, and more efficient, legacy IT issues within the federal government must be addressed. Particular attention has been paid to COBOL on the mainframe as one area needing improvement. On its face, this is an easy target, but as those of us in this game know, it's not that simple or straightforward. If you are going to talk efficiency and cost savings, nothing handles and processes large volumes of complex data as efficiently as the mainframe. That is why most of the world's largest IT shops run their most important workloads on the platform.
I have run many successful projects updating systems and modernizing application development in the private sector, so I look at this as an inflection point for government agencies and the vendors who serve them. Now that there is finally organized momentum to enhance efficiency in the government, and more specifically the IT systems that serve our citizens and government, it is incumbent upon us in the vendor community to lean into that efficiency mantra.
The world’s compute runs on the mainframe: Data and processing power must remain on the platform
Citing a Rubin Worldwide report, a recent IBM® study stated that 72 percent of the world’s compute runs on mainframes, while the platform accounts for just eight percent of IT costs. The same study found that the cloud cannot scale as efficiently as the mainframe, with 75 percent of IT executives rating the mainframe as good as or better than the cloud for total cost of ownership (TCO). In short, the mainframe platform is cheaper and more scalable than the cloud.
These data points are perfect examples of the cost and efficiency you get with the mainframe. In the 2024 BMC Mainframe Survey, we saw a platform on the rise, with 94 percent of respondents having a positive perception of the mainframe and 62 percent citing it as a platform to grow and attract new workloads. This tracks with the 62 percent who report adopting DevOps practices on the platform.
It’s no surprise, then, that two of the top five priorities cited by respondents are efforts to enhance automation and modernize applications. While it is true that some workloads will benefit by moving off the mainframe into the cloud or other distributed platforms, data and processing power will need to stay on the mainframe as it is by far the most efficient and cost-effective platform.
To make these systems more efficient, the focus needs to be on the applications running on the platform, not the platform itself, which is as modern as any system on the market today. Many mainframe applications have not been significantly changed or updated in decades; they need to be rebuilt around modern architectures and development patterns and refactored to be sleeker and more efficient. That is why you see more and more talk about artificial intelligence (AI) and graphical scanning and mapping tools that parse, map, and refactor legacy and monolithic code bases into more manageable assets.
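As a minimal, hypothetical sketch of what that scanning and mapping looks like in practice (not any particular vendor's tool; the directory layout, file extension, and simple CALL-statement pattern are assumptions for illustration), a first pass might build a static call map of a COBOL code base so the shape of a monolith becomes visible before any refactoring starts:

```python
import re
from pathlib import Path

# Hypothetical sketch: build a static call map of a COBOL code base so that
# monolithic programs and their dependencies become visible before refactoring.
# The directory name, file extension, and regexes are illustrative assumptions.
CALL_STMT = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)
PROGRAM_ID = re.compile(r"\bPROGRAM-ID\.\s+([A-Z0-9-]+)", re.IGNORECASE)

def build_call_map(source_dir: str) -> dict[str, set[str]]:
    """Map each program to the set of subprograms it calls statically."""
    call_map: dict[str, set[str]] = {}
    for src in Path(source_dir).glob("*.cbl"):
        text = src.read_text(errors="ignore")
        match = PROGRAM_ID.search(text)
        program = match.group(1).upper() if match else src.stem.upper()
        call_map[program] = {callee.upper() for callee in CALL_STMT.findall(text)}
    return call_map

if __name__ == "__main__":
    for program, callees in sorted(build_call_map("cobol-src").items()):
        print(f"{program} -> {', '.join(sorted(callees)) or '(no static calls)'}")
```

Real tools go far deeper, resolving dynamic calls, copybooks, and data flows, but even a crude map like this shows why tooling, rather than manual archaeology, is the efficient path.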
Mainframe + AI is the platform of efficiency for the future
AI gives organizations the ability to onboard new resources quickly and get them familiar with the code base so they become productive sooner. It makes large monolithic code bases more approachable for the next generation of mainframe engineers. It puts tools at developers' fingertips to cut down on the planning time needed for changes and to rapidly understand what needs to change where in the code, speeding up development. It lets site reliability engineering (SRE) teams create smart automated pipelines to deploy, monitor, and, if necessary, quickly roll back changes, boosting efficiency. And it gives operations teams faster analysis and investigation capabilities that improve mean time to repair (MTTR) and get to root cause much more quickly than ever before.
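To make the SRE point concrete, here is a rough, hypothetical sketch of the deploy-monitor-rollback logic such an automated pipeline can encode; the deployment script, release names, and health-check endpoint are placeholders, not any real product's API:

```python
import subprocess
import time
import urllib.request

# Illustrative sketch of an automated deploy/monitor/rollback step.
# The commands and health-check endpoint below are placeholder assumptions.
DEPLOY_CMD = ["./deploy.sh", "release-42"]      # hypothetical deploy script
ROLLBACK_CMD = ["./deploy.sh", "release-41"]    # previous known-good release
HEALTH_URL = "http://localhost:8080/health"     # assumed health endpoint

def healthy(url: str, attempts: int = 5, delay: float = 10.0) -> bool:
    """Poll the health endpoint; report success only if a check passes."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False

def deploy_with_rollback() -> None:
    """Deploy the new release, then roll back automatically if checks fail."""
    subprocess.run(DEPLOY_CMD, check=True)
    if not healthy(HEALTH_URL):
        print("Health checks failed; rolling back.")
        subprocess.run(ROLLBACK_CMD, check=True)

if __name__ == "__main__":
    deploy_with_rollback()
```

The value is in the pattern: the pipeline, not a person, decides within minutes whether a change stays in place or is rolled back.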
This is why IBM is leaning hard into AI on the mainframe with the IBM® z17® mainframe and its IBM Telum® II chips. The platform will come with AI capabilities and efficiency built directly into the hardware to make all of the above possible. The mainframe will be the platform of efficiency for the future.
Efficiency should be a commonly accepted word
Along with the efficiency built into the platform, efficiency in managing and operating applications on the mainframe will also need to be a focus. For years, organizations have been striving to update and automate their mainframe systems to match the automation of their distributed cousins. From adopting agile source code management (SCM) systems like the BMC AMI DevX Code Pipeline solution and the Git version control system to automating their build, deploy, scanning, and testing capabilities, organizations are investing time and money in updating their mainframe application development platforms.
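As a simple illustration of what that automation amounts to (a generic sketch under assumed stage scripts, not the interface of BMC AMI DevX Code Pipeline or any other product), a commit-triggered pipeline is essentially an ordered set of build, scan, and test stages that halts on the first failure:

```python
import subprocess

# Hypothetical sketch of the stage sequence a mainframe CI pipeline might run
# on every Git commit. The stage scripts are placeholder assumptions.
STAGES = [
    ("build", ["./ci/compile-programs.sh"]),  # compile/link the changed programs
    ("scan",  ["./ci/static-scan.sh"]),       # code quality and security scanning
    ("test",  ["./ci/run-unit-tests.sh"]),    # automated unit/regression tests
]

def run_pipeline() -> bool:
    """Run each stage in order; stop at the first failure."""
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; halting the pipeline.")
            return False
    print("All stages passed; ready to promote.")
    return True

if __name__ == "__main__":
    run_pipeline()
```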
I would argue that most mainframe shops have either completed a DevOps modernization journey, are in the midst of one, or are planning one. Mainframe engineers have been deep down the DevOps rabbit hole for a while now, making the mainframe software development lifecycle (SDLC) more efficient, eliminating manual processes, and increasing the quality and velocity of legacy applications, so none of this should be new. We as vendors need to keep building more innovative, automated capabilities to support the innovation happening on the platform. With efficiency becoming a commonly accepted word in government agencies, the message of automating everything everywhere will ring truer than ever.
In addition to addressing efficiency at the platform and application level, it also needs to be addressed in the local development environments of the engineers working on the mainframe. Making application development tools intuitive, and even delightful, for developers is essential to bringing efficiency into IT solutions for the government. If the developer experience is subpar, you will not see a boost in efficiency. Removing manual bottlenecks, reducing or eliminating context switching, streamlining archaic development processes, and adopting an agile culture are all straightforward ways to improve the developer experience.
Conclusion: Mainframe efficiency is not a new mantra
A better developer experience is in no way a new concept in IT; the distributed side of the house has been practicing it for years, and a simple Google search will bring up a baker’s dozen of articles extolling this fact. Mainframe engineers need that same level of capability and focus on the developer experience. They need tools that are integrated into a single platform, that are easy to use and agile at their core, and that create a seamless, intuitive development environment. To achieve the level of efficiency now required for doing business with the U.S. government, it is incumbent upon us vendors to provide these tools, make them open, and work with the government to get them implemented and adopted. Making teams leaner and more efficient is a mantra we have been expressing for a while, and now we are able to give it a voice.
BMC, BMC Software, the BMC logo, and other BMC marks are the exclusive properties of BMC Software, Inc. and are registered or may be registered with the U.S. Patent and Trademark Office or in other countries.
IBM, z17, and Telum II are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.
©Copyright 2025 BMC Software, Inc.