
Whether Baby Steps or Giant Steps, Cloud Is the Path to Modernize the Mainframe

4 minute read
Gil Peleg

Everyone is under pressure to modernize their mainframe environment—keeping all the mission-critical benefits without being tied to a crushing cost structure and a style of computing that often discourages the agility and creativity enterprises badly need.

Several general traits of cloud can bring to a mainframe environment capabilities that are increasingly demanded and very difficult to achieve in any other way. These are:

Elasticity

Leading cloud providers have data processing assets that dwarf anything available to any other kind of organization. So, as a service, they can provide capacity and/or specific functionality that is effectively unlimited in scale but for which, roughly speaking, customers pay on an as-needed basis. For a mainframe organization this can be extremely helpful for dealing with periodic demand spikes, such as the annual holiday sales period. Cloud resources can also support sudden and substantial shifts in a business model, such as some of those that emerged during the COVID pandemic.


Resilience

The same enormous scale of the cloud providers that delivers elasticity also delivers resilience. Enormous compute and storage resources in multiple locations, connected by vast data pipes, make data highly survivable. Cloud outages can happen, but the massive redundancy makes data loss, or a complete outage, highly unlikely.
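The effect of that redundancy can be sketched with a back-of-the-envelope calculation. The 1% per-copy loss probability below is an arbitrary illustration, not a provider figure:

```python
# Back-of-the-envelope: if each independent copy of an object has some
# annual loss probability p, keeping n copies drives the chance of losing
# all of them to p**n. The 1% figure is an arbitrary illustration.
p = 0.01  # assumed annual loss probability of a single copy

for n in (1, 2, 3):
    print(n, "copies -> probability of total loss:", p ** n)
```

With three independent copies, the illustrative one-in-a-hundred risk per copy drops to one in a million for the data as a whole, which is why multi-location redundancy matters more than the reliability of any single site.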

OpEx model

The “pay only for what you need” approach of cloud means that cloud expenses are generally tracked as operating expenses rather than capital expenditures and, in that sense, are usually much easier to fund. If properly managed, cloud services are usually at least as cost-effective as on-premises infrastructure, and sometimes much more so, though complex questions of how costs are accounted for factor into the comparison.

Unlike the mainframe pricing model, there is no single monthly peak four-hour interval that sets the price for the whole month. Nor is there any need to order storage boxes, compute chassis, and other infrastructure components, track shipments, match bills of materials, or rack and stack servers; vast infrastructure is available at the click of a button.
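The difference between peak-based and usage-based billing can be sketched with a toy calculation. All rates and workload figures below are illustrative assumptions, not actual IBM or cloud list prices:

```python
# Hypothetical comparison of peak-based mainframe pricing vs. cloud
# pay-per-use pricing. All rates and workload numbers are illustrative.

def mainframe_monthly_cost(hourly_usage, rate_per_unit):
    """Under peak-based pricing, the month's bill is driven by the
    highest sustained usage, not the average."""
    return max(hourly_usage) * rate_per_unit

def cloud_monthly_cost(hourly_usage, rate_per_unit):
    """Under pay-per-use pricing, the bill tracks actual consumption."""
    return sum(hourly_usage) * rate_per_unit

# A workload that idles at 100 units but spikes to 500 for three hours
# of a 720-hour (30-day) month.
usage = [100] * 717 + [500] * 3

print(mainframe_monthly_cost(usage, rate_per_unit=10))   # 5000: billed at the peak
print(cloud_monthly_cost(usage, rate_per_unit=0.01))     # ~732: billed on consumption
```

The point of the sketch is the shape of the curves, not the numbers: a short spike sets the whole month's bill under peak-based pricing, while pay-per-use billing barely notices it.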

Finally, cloud represents a cornucopia of potential solutions to problems you may be facing, with low compute and storage costs, a wide range of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS) options – including powerful analytic capabilities.

Experiment First

Fortunately, for those interested in exploring cloud options for mainframe environments, there are many paths forward and no need to make “bet the business” investments. On the contrary, cloud options are typically modular and granular, meaning you can choose many routes to the functionality you want while starting small and expanding when it makes sense.

Areas most often targeted for cloud experimentation include:

  • Analytics – Mainframe environments have an abundance of data but can’t readily provide many of the most-demanded business intelligence (BI) and analytics services. Meanwhile, across the business, adoption of cloud-based analytics has been growing, but without direct access to mainframe data it has not reached its full potential: data locked in the mainframe has simply not been accessible.

Making mainframe data cloud-accessible is a low-risk first step toward modernization that can quickly and easily multiply the options for leveraging key data, delivering rapid and meaningful rewards in the form of scalable, state-of-the-art analytics.
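Part of making mainframe data cloud-accessible is converting it from mainframe formats into something analytics tools can ingest. Here is a minimal sketch that turns an EBCDIC fixed-width extract into UTF-8 CSV; the record layout and field widths are hypothetical, and real copybooks involve further complications such as packed-decimal fields:

```python
# Minimal sketch: convert an EBCDIC fixed-width mainframe extract into
# UTF-8 CSV that cloud analytics tools can ingest. The field layout is
# a hypothetical example, not a real copybook.
import csv
import io

FIELDS = [("account", 8), ("region", 4), ("balance", 10)]  # name, width

def ebcdic_records_to_csv(raw: bytes, reclen: int) -> str:
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow([name for name, _ in FIELDS])
    for i in range(0, len(raw), reclen):
        text = raw[i:i + reclen].decode("cp037")  # EBCDIC code page 037
        row, pos = [], 0
        for _, width in FIELDS:
            row.append(text[pos:pos + width].strip())
            pos += width
        writer.writerow(row)
    return out.getvalue()

# Build a two-record EBCDIC sample the way a mainframe extract might look.
sample = ("ACCT0001US  0000123450" "ACCT0002EU  0000067890").encode("cp037")
print(ebcdic_records_to_csv(sample, reclen=22))
```

Once the data is in CSV (or Parquet), it can be dropped into cloud object storage and queried by any of the standard analytics services.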

  • Backup – Mainframe environments know how to do backup, but they often face difficult tradeoffs when resources are needed for so many critical tasks. Backup often gets relegated to narrow windows of time. Factors such as reliance on tape, or even virtual tape, can also make it even more difficult to achieve needed results.

In contrast, a cloud-based backup, whether for particular applications or data or even for all applications and data, is one of the easiest use cases to get started with. Cloud-based backups can eliminate slow and bulky tape-type architecture. As a backup medium, cloud is fast and cost-effective, and comparatively easy to implement.
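One reason cloud backup is easy to start with is that change detection can replace full-volume copies. The sketch below shows the incremental idea with a stand-in upload function; a real implementation would call a cloud SDK's object-store PUT instead, and dataset names here are hypothetical:

```python
# Minimal sketch of an incremental cloud backup: only datasets whose
# content hash changed since the last run are re-uploaded. The upload
# callable is a stand-in for a real cloud SDK call.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def incremental_backup(datasets, previous_manifest, upload):
    """datasets: name -> bytes; previous_manifest: name -> hash.
    Calls upload(name, data) only for new or changed data and
    returns the new manifest."""
    manifest = {}
    for name, data in datasets.items():
        digest = sha256(data)
        if previous_manifest.get(name) != digest:
            upload(name, data)  # only changed datasets go to the cloud
        manifest[name] = digest
    return manifest

uploaded = []
first = incremental_backup({"A": b"v1", "B": b"v1"}, {},
                           lambda n, d: uploaded.append(n))
second = incremental_backup({"A": b"v2", "B": b"v1"}, first,
                            lambda n, d: uploaded.append(n))
print(uploaded)  # ['A', 'B', 'A'] -- only A changed on the second run
```

Each run produces a manifest that doubles as a restore catalog, which is a simpler operational model than rotating tape pools through a backup window.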

  • Disaster recovery (DR) – The tools and techniques for disaster recovery vary depending on the needs of an enterprise and the scale of its budget, but they often include a secondary site. Of course, setting up a dedicated duplicate mainframe disaster recovery site comes with a high total cost of ownership (TCO).

A second, slightly more affordable option is a business continuity colocation facility, which may be shared among multiple companies and made available to one of them in a time of need. Emerging as a viable third option is a cloud-based business continuity and disaster recovery (BCDR) capability that provides essentially the same protection as a secondary site at a much lower cost. Predefined service-level agreements for a cloud “facility” commit the provider to rapid recovery, saving your company both time and money.

  • Archive – Again, existing mainframe operations often rely on tape to store infrequently accessed data, typically outside the purview of regular backup activities. Sometimes this is just a matter of retaining longitudinal corporate data, but heavily regulated sectors such as financial services and healthcare are required to retain data for long durations, often 10 years or more. As these collections of static data continue to grow, keeping them in “prime real estate” in the data center becomes less and less appealing.

At the same time, few alternatives are appealing because they often involve transporting physical media. The cloud option, of course, is a classic “low-hanging fruit” choice that can eliminate space and equipment requirements on-premises and readily move any amount of data to low-cost and easy-to-access cloud storage.
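In cloud object storage, the archive policy itself becomes a small piece of configuration rather than a tape-handling procedure. Below is an illustrative S3-style lifecycle rule that moves archive data to a cold storage class after 30 days and expires it at the end of the retention period; the prefix, storage class, and 10-year retention figure are assumptions for the sketch:

```python
# Illustrative S3-style lifecycle rule: transition archive objects to a
# cold storage class after 30 days, delete after the retention period.
# Prefix, storage class, and retention period are assumed examples.
import json

RETENTION_YEARS = 10

lifecycle_rule = {
    "ID": "mainframe-archive",
    "Filter": {"Prefix": "archive/"},
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}  # cheapest tier, slow retrieval
    ],
    "Expiration": {"Days": RETENTION_YEARS * 365},    # drop data once retention ends
}

print(json.dumps(lifecycle_rule, indent=2))
```

Once a rule like this is attached to the bucket, aging data migrates to cheaper tiers and eventually expires without any operator intervention.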

A painless path for mainframe administrators

If an administrator of a cloud-based data center were suddenly told they needed to migrate to a mainframe environment, their first reaction would probably be panic! And with good reason. Mainframe is a complex world that requires layers of expertise.

On the other hand, if a mainframe administrator chooses to experiment in the cloud or even begin to move data or functions into the cloud, the transition is likely to be smoother. That is not to say that learning isn’t required for the cloud but, in general, cloud practices are oriented toward a more modern, self-service world. Indeed, cloud growth has been driven in part by ease of use.

Odds are good that someone in your organization has had exposure to cloud, but courses and self-study options abound. Above all, cloud is typically oriented toward learn-by-doing, with free or affordable on-ramps that let individuals and organizations gain experience and skills at low cost.

In other words, in short order, a mainframe shop can also develop cloud competency. And, for the 2020s, that’s likely to be a very good investment of time and energy.

Access the 2023 Mainframe Report

The results of the 18th annual BMC Mainframe Survey are in, and the state of the mainframe remains strong. Overall perception of the mainframe is positive, as is the outlook for future growth on the platform, with workloads growing and investment in new technologies and processes increasing.

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.



About the author

Gil Peleg

Gil has over two decades of hands-on experience in mainframe system programming and data management, as well as a deep understanding of methods of operation, components, and diagnostic tools. Gil previously worked at IBM in the US and in Israel in mainframe storage development and data management practices as well as at Infinidat and XIV. He is the co-author of eight IBM Redbooks on z/OS implementation.