It’s me again, sharing more of my thoughts about the data from this year’s BMC Annual Mainframe Survey. This time I wanted to focus on how workloads have changed, and are continuing to change, on the platform, and the problems that creates.
A couple of years ago, one theme that came up in conversations with many customers was that, while they were not necessarily changing the sorts of applications they ran on the mainframe, the way those applications are used was fundamentally changing. For many, the move towards being a digital business has meant not only that their workloads grow, but that those workloads become ever harder to predict. Instead of an app being accessed by a member of staff during office hours, it is now accessed directly by the end user, from wherever they are, via a mobile phone or maybe even a watch, and at whatever time they choose: when they wake up, when they get home from the pub, whatever! This year the survey again shows that many people are still seeing an increase in volatility; it appears that being more changeable is the new normal. But what does this mean for the people who have to manage the system?
Well, we already know that today’s systems managers have less experience than ever before, and we see more MIPS being deployed than ever to cope with these additional workloads. That combination means the approach we have always relied on, where the people who manage the system day to day ‘know’ what is normal and what is not, and where specific alarms have been built on 20 or 30 years of accumulated knowledge, just won’t work anymore.
It is inevitable that, to keep delivering the exceptional performance and availability the mainframe has been known for over the last 50 years, the way we manage the system will have to transform. The rapidly changing nature of the workload means the tools themselves have to learn what ‘normal’ looks like; it is beyond what a human can keep track of. Embedding expertise in these management tools also means the relevance of each metric can be properly understood, so deviations can be scored in a way that makes sense, helping bring order from the chaos and turn data into real information!
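To make that idea a little more concrete, here is a minimal sketch of the kind of thing I mean: a detector that learns a rolling baseline for a single metric and scores each new sample by how far it deviates from that baseline. This is purely illustrative (plain Python, not any particular product), and the window size, minimum history, alert threshold, and CPU-utilisation feed are all hypothetical.

```python
import math
import random
from collections import deque


class BaselineDetector:
    """Learns what 'normal' looks like for a single metric from a rolling
    window of recent samples and scores each new sample by how far it
    deviates from that baseline (a simple z-score)."""

    def __init__(self, window: int = 288, min_history: int = 20):
        self.samples = deque(maxlen=window)   # e.g. a day of 5-minute samples
        self.min_history = min_history

    def score(self, value: float) -> float:
        """Return 0.0 until enough history exists, then the absolute z-score."""
        deviation = 0.0
        if len(self.samples) >= self.min_history:
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = math.sqrt(var) or 1.0       # guard against perfectly flat metrics
            deviation = abs(value - mean) / std
        self.samples.append(value)
        return deviation


# Hypothetical usage: a CPU-utilisation feed that normally sits around 45%
# and then spikes, the way an unpredictable digital workload might.
random.seed(0)
cpu = BaselineDetector()
feed = [random.gauss(45, 2) for _ in range(50)] + [93.0]
for sample in feed:
    z = cpu.score(sample)
    if z > 4.0:                               # illustrative alert threshold
        print(f"sample {sample:.1f}% is {z:.1f} sigma from the learned baseline")
```

The point is not the statistics, which are deliberately simple here, but that the baseline is learned from the data itself rather than hard-coded from decades of operator memory, which is exactly what a volatile workload demands.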
What do you think? How do you see the way you manage your mainframe evolving as the workloads and users around it change?