Using Mainframes is becoming more and more mainstream. The value-add of leveraging mainframes for quick deployment and lower costs is becoming standard practice. There are some great solutions for leveraging mainframes... sorry, corrupted sector in my brain; I meant "Virtualization" everywhere I said "Mainframe."
Seriously, while some would say we have come full circle, Virtualization is becoming a common technology in the data center for several reasons. You'll see value-add pitches promising lower hardware, space, and power costs; shorter provisioning times for new and existing solutions; automated expansion and reduction of the technologies supporting a service; reduced disaster-recovery complexity; and many more great use cases.
The idea is that you set up one of the technologies that represents a service or supports one. One example is a fully configured Operating System with a web server and the underlying configuration (HTML, CSS, JSP, etc.) that is required. Once you have verified the configuration to be accurate through ITIL processes/practices, the overall configuration can be captured as a Workload, Slice, VM, etc. and saved off. One best practice is to store the Workload within a version control system; for those on the ITIL bandwagon, the Workload should be placed in the Definitive Software Library. There are other needs as well, such as ensuring that the Workloads have the proper monitoring agents to report on health (I'd like to see the vendors get more creative in this area... sounds like another blog).
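To make the capture step concrete, here is a minimal Python sketch of registering a verified Workload in a version-controlled DSL. The image path, repo location, and manifest format are all assumptions for illustration; your virtualization platform will have its own export tooling, and this only records a checksummed metadata entry and commits it with git.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical locations: adjust for your own environment.
WORKLOAD_IMAGE = Path("/var/lib/workloads/webserver-01.img")
DSL_REPO = Path("/srv/definitive-software-library")  # assumed to be a git repo


def checksum(path: Path) -> str:
    """SHA-256 of the captured image, so the DSL entry is verifiable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def register_workload(image: Path, repo: Path) -> None:
    """Record a verified Workload's metadata in the version-controlled DSL."""
    entry = {
        "name": image.stem,
        "captured": datetime.now(timezone.utc).isoformat(),
        "sha256": checksum(image),
        "contents": "OS + web server + HTML/CSS/JSP configuration",
    }
    manifest = repo / f"{image.stem}.json"
    manifest.write_text(json.dumps(entry, indent=2))
    subprocess.run(["git", "-C", str(repo), "add", manifest.name], check=True)
    subprocess.run(
        ["git", "-C", str(repo), "commit", "-m", f"Capture workload {image.stem}"],
        check=True,
    )


if __name__ == "__main__":
    register_workload(WORKLOAD_IMAGE, DSL_REPO)
```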
For those who want to move to a more automated environment, with the proper monitoring tools in place, the Service can expand and contract automatically based on over/under usage. Tie all of the management systems, tools, CMDB, change management, etc. into a live service model with state-propagation rules, thresholds to compare against, and automated service adjustments (or adjustments via point-and-click). In this case, the monitoring tool watches CPU utilization, session count, or some other KPI, and as service usage increases, more Workloads can be automatically provisioned to reduce the stress on the service and in turn provide a more predictable end-user experience. The opposite holds true as well: when usage of the service has dropped below specific metrics, duplicate technologies supporting the service should be automatically de-provisioned to reduce heat output and cooling costs. Automated Service provisioning is the ideal; waiting for end users to complain about performance via the Help Desk after the fact is not good for IT or the Business.
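Here is a hedged sketch of that expand-and-contract loop in Python. The thresholds, the KPI source, and the provisioning calls are stand-ins: a real loop would query the monitoring agents and drive the virtualization platform's API rather than mutate an in-memory list.

```python
import random
import time

SCALE_UP_CPU = 80.0    # % utilization that triggers provisioning (illustrative)
SCALE_DOWN_CPU = 20.0  # % utilization that permits de-provisioning (illustrative)
MIN_WORKLOADS = 1      # never shrink the service below this


def get_avg_cpu(workloads: list[str]) -> float:
    """Stand-in KPI: pretend the monitoring agents reported this average."""
    return random.uniform(0, 100)


def control_step(workloads: list[str]) -> None:
    cpu = get_avg_cpu(workloads)
    if cpu > SCALE_UP_CPU:
        name = f"web-{len(workloads) + 1:02d}"
        workloads.append(name)     # provision a duplicate to relieve the service
        print(f"CPU {cpu:.0f}% -> provisioned {name}")
    elif cpu < SCALE_DOWN_CPU and len(workloads) > MIN_WORKLOADS:
        retired = workloads.pop()  # de-provision to cut heat output and cooling cost
        print(f"CPU {cpu:.0f}% -> de-provisioned {retired}")


if __name__ == "__main__":
    pool = ["web-01"]
    for _ in range(10):            # a few iterations in place of a real daemon loop
        control_step(pool)
        time.sleep(1)
```

The same loop could just as easily compare session counts or response times; the point is that the thresholds live in the service model, not in someone's head.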
Some corporations have found that there is a clearly lower cost associated with outages (lost revenue, personnel costs, etc.) in spinning up a new Workload, pulling the failed Workload out of service, sliding in the newly powered-up Workload, and getting the Service back online as quickly as possible. Once the Service is restored, start analyzing the failure point offline. Upon resolution, adjust any configurations and update the DSL (oops, DML), CMDB, etc.
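A minimal sketch of that "replace first, diagnose later" flow, again in Python: swap the failed Workload out of the service pool, slide in a fresh copy, and quarantine the failed instance for root-cause analysis. The pool, the quarantine list, and the record-keeping call are all illustrative, not a real load-balancer or CMDB API.

```python
def update_records(workload: str) -> None:
    """Stand-in for updating the CMDB and Definitive Media Library."""
    print(f"CMDB/DML updated for {workload}")


def restore_service(pool: list[str], failed: str, quarantine: list[str]) -> str:
    pool.remove(failed)            # pull the failed Workload out of service
    quarantine.append(failed)      # keep it around for offline analysis
    replacement = f"{failed}-replacement"
    pool.append(replacement)       # slide in the newly powered-up Workload
    update_records(replacement)    # keep the records in step with reality
    return replacement


if __name__ == "__main__":
    service_pool = ["web-01", "web-02"]
    offline = []
    restore_service(service_pool, "web-02", offline)
    print(service_pool, offline)   # service is whole again; web-02 awaits diagnosis
```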
In a dynamic environment, having the proper tools to build, manage, secure, and measure Workloads becomes a requirement in order to keep IT agile, compliant, and focused on its service offerings, aligning IT with the Business.