If you have a Unisys mainframe, you may be thinking that cloud computing isn't an option. You'd like to take advantage of all that cloud computing offers, but don't think it's possible because it can't handle your transaction workloads, or the architectures are just too disparate to merge, or even that it's just plain too hard to do. I used to think the same thing. And until fairly recently, it was true.
But cloud computing has quickly matured, as have the offerings of service providers like AWS, and it's now proving itself to be a viable option for running mainframe application workload. But before I go into what changed my mind, let's set the stage with a little history on Unisys mainframes and the evolution of cloud computing.
Unisys can trace its roots all the way back to 1886 and the American Arithmometer Company, which later became the Burroughs Corporation. The Unisys Corporation of today was formed in 1986 when Burroughs combined with the Sperry Corporation, which was originally founded in 1910 as the Sperry Gyroscope Company. Unisys and its various predecessor companies are credited with developing the first general-purpose computers, known as BINAC and UNIVAC, truly amazing achievements of the late 1940s and early 1950s that changed the world forever.
When Burroughs and Sperry merged to form Unisys, each company had its own line of mainframe computers, each with its own loyal customer base. There were attempts to unify the two architectures and their respective technologies, but the two distinct systems survive to this day.
Unisys ClearPath Libra mainframes originated from the Burroughs line, while ClearPath Dorado's heritage is Sperry. Even though there are distinct differences between these two sets of mainframe technologies, they are in many ways very similar and share many of the same basic characteristics.
Little did anyone know at that time that advancements in computing architectures and miniaturization would lead to the ubiquitous nature of computers today. They're everywhere you look: cars, smartphones, tablets, even kitchen appliances like refrigerators, microwave ovens, and yes, even toasters. The average smartphone today makes the early mainframes look like children's toys; its computing power and speed far exceed those of its ancestors, at a mere fraction of the cost. It's an incredible evolutionary pace in a relatively short span of time.
Flash-forward to around 1996. We began to hear about a new, revolutionary form of data processing called cloud computing. Many of us, including myself, were somewhat skeptical. Although we found it an interesting idea, we wondered whether it was just another trend du jour, and we took a wait-and-see attitude. Many of us figured it would fizzle out like other revolutions as soon as the next shiny computing trend garnered attention. Certainly, those of us with mainframes thought it would have little impact on mainframe computing.
By the early 2000s, it became apparent that cloud computing had legs. It did not disappear into obscurity like so many failed IT trends before it. Still in its infancy, it had nonetheless proven to be a reliable, cost-effective computing paradigm that had merit—for some. Companies like Salesforce.com began offering business functions like customer relationship management (CRM) as a service.
No longer did you need to invest in hardware and software to track customer data and interactions, or to manage and automate sales activities. For a reasonable price, you could set up an account to do all this through any standard Internet browser. No servers to buy and manage, no software licenses to track, no upgrades to plan and manage. Cloud computing was real, and it had real benefits.
In 2006, Amazon introduced its Amazon Web Services (AWS) Elastic Compute Cloud (EC2) service, followed quickly by Microsoft Azure in 2008. This seemed to solidify cloud computing's standing as a solution for more than just specific business needs, like CRM; it could also be used to run virtually any kind of Windows, Linux, or Unix application.
Using a pay-for-use model, you could migrate existing application workloads and development activities to AWS or Azure cloud environments. No more expensive data centers or co-location facilities to maintain, no more costly hardware upgrades to meet demands of your expanding business, and through cloud replication you could even satisfy your back-up and disaster recovery needs without an expensive remote mirroring facility! How awesome was that?
But again, those with mainframes felt the needs being met by their big-iron couldn't possibly be matched by cloud computing. I mean, the high-volume transaction requirements alone were enough to make it a non-starter for enterprise needs. Not to mention the security implications of letting your sensitive data reside somewhere in the cloud. I admit I'm guilty of having had the same view. Until late one night in 2010.
I was shopping on Amazon, marveling at being able to buy pretty much anything at any time from the comfort of my couch, with just my laptop and a secure Wi-Fi connection -- like millions of other people around the globe. And then it hit me. Never had I not been able to purchase something on Amazon -- it was always available. They were satisfying millions of transactions reliably, quickly, and securely. Twenty-four hours a day, seven days a week. And not a mainframe in sight.
It was a revelation. It was my "ah-ha moment" that cloud computing was ready for mainframe workload. And, it made me feel like an idiot because it was so obvious—it had been in front of me the whole time. I'd been in the business of migrating mainframe applications to open-systems for the better part of a decade, and what was AWS? A huge, distributed, open-systems environment.
In fact, it was probably more stable and secure than many companies squeezing the last bit of life out of their existing networks and hardware to avoid the inevitable cost, complexity, and risk of upgrading. Migrating Unisys mainframes to open-systems was a proven solution for reducing costs and risks, as well as opening valuable business functions and data to modern technologies, like mobile devices. Now if only the mainframe market would see the cloud as a viable alternative.
By 2015, I began to see a shift in the attitude of Unisys mainframe shops. They'd seen the numerous successes and benefits of cloud computing, understood that data security was manageable and that high-volume transaction throughput was there, and we began to get more and more inquiries from Unisys shops looking to slowly dip their toes into cloud computing waters. While they may not have been ready just yet, they understood that it was something they had to look at -- the benefits were just too compelling to ignore.
In the meantime, Unisys developed x86-compatible underpinnings that enabled mainframe customers to run existing ClearPath Libra (Burroughs) and Dorado (Sperry) workloads on standard open-systems hardware. While this could potentially be taken to the next step and ported to AWS, the applications are still running in an antiquated, proprietary OS and database platform with a rapidly dwindling pool of skilled programming resources.
Additionally, this approach doesn't take full advantage of the AWS platform. Even though other applications can access DMS, DMSII, or RDMS data via replication or other just-in-time translation, doing so requires an added layer of complexity that increases both costs and single-point-of-failure risks. The same is true for integrating legacy business functions with other applications and processes as stateless services.
The most effective method to exploit the value of Unisys mainframe applications and data is a transformative migration to modern systems frameworks in AWS, reusing as much of the original application source as possible. A least-change approach like this reduces project cost and risk (compared to rewrites or package replacements), and reaps the benefits of integration with new technologies to exploit new markets—all while leveraging a 20 or 30 year investment.
The best part is that once migrated, the application will resemble its old self enough for existing staff to maintain its modern incarnation; they have years of valuable knowledge they can also reuse and pass on to new developers. The problem is most Unisys shops, having been mainframe focused for a very long time, don't know where to start or how to begin. But don't let that stop you. The rest of this article will give you some guidance.
I mentioned above that I've been in the business of moving mainframe applications to open-systems for quite some time. It's a proven solution, proven technology, and a proven methodology that's been fine-tuned over decades. It's not hard to extend this process to AWS, which is based on open-systems technology. It's really the same basic recipe with a few new ingredients:
The first thing you need to do is catalog and analyze all applications, languages, databases, networks, platforms, and processes in your environment. Document the interrelationships between applications and all external integration points. Use as much automated analysis as possible, and feed everything into a central repository. For my projects, I use all this data to establish migration rules in an automated transformation engine. These rules get updated and refined throughout the project.
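To make the cataloging step a little more concrete, here is a minimal sketch of an automated inventory pass in Python. The directory layout, file extensions, and SQLite "repository" are hypothetical stand-ins; a real engagement would use a purpose-built analysis toolset and capture far richer metadata (call graphs, screen definitions, WFL/ECL job dependencies, and so on).

```python
import os
import sqlite3

# Hypothetical mapping of file extensions to legacy languages; adjust to
# however your site exports COBOL, Algol, MASM, WFL, ECL, etc.
LANGUAGE_BY_EXT = {".cbl": "COBOL", ".cob": "COBOL", ".alg": "Algol",
                   ".msm": "MASM", ".wfl": "WFL", ".ecl": "ECL"}

def catalog_sources(root_dir: str, repo_db: str = "inventory.db") -> None:
    """Scan exported mainframe source and record an inventory in SQLite."""
    conn = sqlite3.connect(repo_db)
    conn.execute("""CREATE TABLE IF NOT EXISTS artifacts
                    (path TEXT PRIMARY KEY, language TEXT, line_count INTEGER)""")
    for dirpath, _dirs, files in os.walk(root_dir):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext not in LANGUAGE_BY_EXT:
                continue
            path = os.path.join(dirpath, name)
            with open(path, "r", errors="replace") as handle:
                line_count = sum(1 for _ in handle)
            conn.execute("INSERT OR REPLACE INTO artifacts VALUES (?, ?, ?)",
                         (path, LANGUAGE_BY_EXT[ext], line_count))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    catalog_sources("./mainframe-export")   # placeholder export directory
```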
Analyze all the source code, data structures, end-state requirements, and AWS cloud components to design and architect the solution. The design should include details such as types and instances of AWS components, transaction loads, batch requirements, programming language conversions and replacements, integration with external systems, 3rd-party software requirements, and planning for future requirements.
You'll want to select which mainframe migration tools you want to use; choosing ones that require you to make the least amount of change is best since it greatly reduces project costs and risks. However, you will need to design custom-developed solutions to meet requirements that aren't met by emulation tools. COBOL is almost always migrated, but Algol, MASM, AB Suite (aka LINC), BIS (aka MAPPER), and the like will need to be replaced.
Some functions may be replaced by the target operating system or other target-platform components, so do some analysis to find the gaps. This is also where you'll need to define your data migration strategy. You can keep flat files in their same flat form, but it's probably best to convert them to relational. Hierarchical data should be converted to relational data using conversion tools or extract-transform-load (ETL) programs.
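As an illustration of that data conversion idea, the following Python sketch reads a hypothetical fixed-width flat-file extract and loads it as relational rows. The record layout, table, and SQLite staging database are assumptions for the example; in practice the layouts come from your copybooks or DASDL definitions, and the target would be your RDS database.

```python
import sqlite3

# Hypothetical fixed-width layout for one extracted record:
# columns 0-9 customer id, 10-39 name, 40-49 balance with an implied 2 decimals.
def parse_record(line: str) -> tuple:
    cust_id = line[0:10].strip()
    name = line[10:40].strip()
    balance = int(line[40:50]) / 100.0   # apply the implied decimal point
    return (cust_id, name, balance)

def load_flat_file(path: str, db: str = "staging.db") -> None:
    """Minimal ETL pass: read fixed-width records, load them as relational rows."""
    conn = sqlite3.connect(db)   # stand-in for the real target database
    conn.execute("""CREATE TABLE IF NOT EXISTS customer
                    (cust_id TEXT PRIMARY KEY, name TEXT, balance REAL)""")
    with open(path, "r", encoding="utf-8") as src:
        rows = [parse_record(line.rstrip("\n")) for line in src if line.strip()]
    conn.executemany("INSERT OR REPLACE INTO customer VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load_flat_file("CUSTOMER.EXTRACT.dat")   # placeholder extract file name
```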
The code migration itself is an iterative, automated process that makes mass changes to source code. If the modified code compiles, it's ready for unit test. If it doesn't, developers review the errors, find a fix, update the migration rules, and run the program(s) through again. Many times, error fixes in one program can be applied en masse to fix the same errors in other programs—economies of scale begin to come into play here. As you go through the modernization process, it gets faster and more accurate.
This is also when developers write source to replace those legacy components that will not migrate to AWS, and data specialists build out and validate the new databases. Once validated, static data can be migrated to the target database and file systems in parallel with code migration and development. Dynamic data—data that changes frequently—will be migrated during cutover to Production.
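To give a feel for the rule-driven transformation loop described above, here is a deliberately tiny Python sketch. The two rules, the file handling, and the use of GnuCOBOL's cobc compiler as the "does it compile?" check are all illustrative assumptions; a production transformation engine carries thousands of data-driven rules and plugs into the real target build chain.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical transformation rules, for illustration only; each maps a legacy
# construct to its target-platform equivalent.
RULES = [
    (re.compile(r"\bSTOP\s+RUN\b"), "GOBACK"),              # return to caller instead of ending the run
    (re.compile(r"CALL\s+'LEGACYIO'"), "CALL 'MIGRATEIO'"),  # swap a hypothetical platform utility for its replacement
]

def transform(src_path: Path, out_dir: Path) -> Path:
    """Apply every migration rule to one source member and write the result."""
    text = src_path.read_text(errors="replace")
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    out_path = out_dir / src_path.name
    out_path.write_text(text)
    return out_path

def compile_ok(out_path: Path) -> bool:
    """Compile with GnuCOBOL as a stand-in check; failures feed back into
    new or refined rules and the member is run through again."""
    result = subprocess.run(["cobc", "-x", str(out_path)], capture_output=True)
    return result.returncode == 0
```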
The good news about testing is that you only need to focus on the code that's been changed. I've written previously on this topic, stating that there is no need to test every line of code since most of it hasn't changed. Testing should focus on data accesses, sorting routines that may be affected by using ASCII vs. EBCDIC, code modifications to accommodate data type changes, newly developed code, etc.
The bad news is that most legacy applications have few, if any, test scripts. Nor is there much documentation. So just because you don't have to test as much doesn't mean it's easy. It's likely you'll need to spend time and resources to develop test scripts. However, this is a solid investment since they can be re-used for testing the applications going forward in AWS. You'll also need to perform load and stress tests to ensure your applications are prepared to handle high volumes.
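To illustrate the collating-sequence point mentioned above, the short Python example below sorts the same keys under ASCII and under EBCDIC (code page 037). Any routine that depends on byte ordering—sorts, key-range checks, "greater than" comparisons on character keys—is a candidate for a targeted test case.

```python
# In EBCDIC (CP037), letters sort before digits and lowercase sorts before
# uppercase; in ASCII it is the reverse. Programs that rely on the collating
# sequence can silently change behavior after migration.
keys = ["ABC", "abc", "123", "A1B"]

ascii_order = sorted(keys)                                    # ASCII/UTF-8 byte order
ebcdic_order = sorted(keys, key=lambda s: s.encode("cp037"))  # EBCDIC byte order

print("ASCII order: ", ascii_order)    # ['123', 'A1B', 'ABC', 'abc']
print("EBCDIC order:", ebcdic_order)   # ['abc', 'ABC', 'A1B', '123']
```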
When migrated applications have been tested, verified, and optimized, the process of deploying those applications may begin. In reality, many deployment activities are initiated in parallel with earlier phases—things like creating and configuring AWS component instances, installing and configuring mainframe emulation software, migrating static data, and other infrastructure or framework activities.
In some cases, environments may be replicated to achieve this, or existing environments may be re-purposed. The specifics of this may depend upon application and data characteristics and any company standards or preferences you might have. After dynamic data is migrated and validated, cutover to Production mode can be completed.
Describing an implementation in words is one thing. But, if you're anything like me, a visual representation of before and after states makes things a lot clearer. The image below depicts how a Unisys ClearPath Libra system maps to AWS:
Similarly, the image below depicts how a Unisys ClearPath Dorado system maps to AWS:
Since every system is unique and every shop has unique requirements and standards, the images above should be viewed as a general guideline. To give you a closer look at the AWS components and how Unisys mainframe components map to them, they are described below at a high level. Keep in mind that I'm not digging into the gory details here; I'm just providing high-level descriptions. There are too many possible configurations to cover them all in one article.
Amazon Virtual Private Cloud (VPC) lets you provision a logically isolated section of AWS where you launch and manage AWS resources in a virtual network that you define. It's your private area within AWS. You can think of this as the fence around all the systems you have in AWS. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.
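As a rough idea of what standing up that network boundary looks like, here is a minimal sketch using boto3, the AWS SDK for Python. The CIDR ranges, Availability Zones, and two-subnet layout are placeholder assumptions; your own addressing plan and security design govern the real configuration.

```python
import boto3

# Assumes AWS credentials and a default region are already configured.
ec2 = boto3.client("ec2")

# The "fence" itself: a VPC with a private address range of your choosing.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# One subnet for the application tier and one for public-facing pieces such
# as load balancers; adjust Availability Zones and ranges to your design.
app_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                               AvailabilityZone="us-east-1a")
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                                  AvailabilityZone="us-east-1b")

print("VPC created:", vpc_id)
```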
Elastic Compute Cloud (EC2) provides secure, resizable compute capacity in AWS. It serves as the foundation upon which your application sits. It's the container that holds the operating systems, mainframe emulators, application executables, and other supporting software that make up your application. Depending on your specific circumstances, you may separate some pieces into their own EC2 instances, or you may run everything in one instance—it depends on your unique requirements. Maybe you'll have an EC2 instance dedicated to batch COBOL and another dedicated to online processing. You may even segregate EC2 instances by application. Again, it really depends on your specific circumstances.
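A hedged sketch of launching one such instance with boto3 is shown below. The AMI, instance type, subnet, key pair, and tag values are placeholders; actual sizing should come out of the transaction-load and batch-window analysis done during design.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values throughout: substitute an AMI that carries your chosen
# runtime/emulation software, plus your own subnet and key pair.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="r5.2xlarge",            # sized from workload analysis
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # private app subnet from the VPC step
    KeyName="migration-keypair",
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "libra-online-tier"}],
    }],
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```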
Simple Storage Service (S3) can be thought of as a hard drive for storing data. Lots and lots of data. S3 serves as the primary storage "device" for cloud-native applications, bulk data repositories, or "data lakes" for analytics. It is designed to deliver 99.999999999% durability, and scale past trillions of objects worldwide. If your legacy system has a tremendous amount of flat file data that you want to preserve in its flat file format, you may use S3 for this purpose. There are other uses as well, which you can find on the AWS site.
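If you do keep flat files flat, loading them into S3 can be as simple as the sketch below. The bucket name, local export directory, and .dat extension are assumptions for illustration, and the bucket is assumed to already exist.

```python
import boto3
from pathlib import Path

s3 = boto3.client("s3")
bucket = "my-legacy-flat-files"   # hypothetical bucket name

# Preserve extracted flat files as-is, keyed by their original directory layout.
for path in Path("./flat-file-export").rglob("*.dat"):
    key = str(path.relative_to("./flat-file-export"))
    s3.upload_file(str(path), bucket, key)
    print("uploaded", key)
```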
Amazon's Relational Database Service (RDS) is where all legacy relational data will reside. This includes any flat file data that's been converted to relational. All your DMS and DMSII data would be converted to relational and migrated to RDS. RDMS data would also be migrated here. This container is optimized for database performance. It's cost-efficient, has resizable capacity, and is designed to reduce time-consuming database admin tasks.
RDS is available in several familiar database engines, including Microsoft SQL Server, Oracle, PostgreSQL, MySQL and MariaDB. However, you may want to consider migrating your relational data to Amazon Aurora, a MySQL-compatible database that has been optimized for AWS and can perform up to 5 times faster than MySQL. An analysis of your existing legacy databases and application will reveal all the changes required to migrate your data to Aurora or any other RDBMS running in AWS.
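Provisioning the relational target itself is straightforward; the boto3 sketch below creates a single PostgreSQL instance as a home for converted DMS, DMSII, or RDMS data. The identifier, sizing, engine choice, and credentials are placeholders, and in practice you would pull the password from a secrets store rather than hard-coding it.

```python
import boto3

rds = boto3.client("rds")

# A minimal sketch: one Multi-AZ PostgreSQL instance as the migration target.
rds.create_db_instance(
    DBInstanceIdentifier="legacy-app-db",
    DBInstanceClass="db.r5.xlarge",        # sized from database analysis
    Engine="postgres",
    AllocatedStorage=500,                  # GiB; size from your data volumes
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please", # placeholder; use a secrets store
    MultiAZ=True,                          # standby replica for availability
)
```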
Applications with a high volume of transactions require something to balance the workload. Amazon Elastic Load Balancing (ELB) does just that. It automatically distributes incoming application traffic across multiple EC2 instances to achieve fault tolerance in your migrated applications. It provides the load balancing capability needed to route traffic evenly among your applications and keep them performing efficiently.
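A minimal boto3 sketch of wiring this up follows: an Application Load Balancer, a target group, and two registered instances. All identifiers are placeholders, and a listener rule (omitted here for brevity) is still needed to actually route traffic.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Subnets come from the VPC step, instance IDs from the EC2 step (placeholders).
lb = elbv2.create_load_balancer(
    Name="online-tier-alb",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    Scheme="internet-facing",
)

tg = elbv2.create_target_group(
    Name="online-tier-targets",
    Protocol="HTTP",
    Port=8080,                            # placeholder port for the online tier
    VpcId="vpc-0123456789abcdef0",
)

# Spread incoming traffic across the migrated transaction-processing instances.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)
```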
In the AWS environment, you'll be using Lightweight Directory Access Protocol (LDAP) for accessing and maintaining distributed directory information services. While there are other possibilities, this is most likely where you'll map your legacy application user IDs, passwords, permissions, etc. Hosting LDAP services on a smaller separate EC2 instance often makes it easier to maintain independently of applications.
However, a full analysis of your legacy security environment is required to determine how to best architect and configure security in the migrated system. AWS Identity and Access Management (IAM) enables you to create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. This is for AWS infrastructure security rather than application-level security.
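For that infrastructure side, here is a small boto3 sketch that creates an operations group, attaches a managed policy, and adds a user. The group, user, and policy choice are illustrative only; application end users would remain in LDAP rather than IAM.

```python
import boto3

iam = boto3.client("iam")

# Hypothetical group for the staff who operate the migrated environment.
iam.create_group(GroupName="legacy-app-operators")
iam.attach_group_policy(
    GroupName="legacy-app-operators",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # example managed policy
)

# Add an operator to the group; application end users stay in LDAP, not IAM.
iam.create_user(UserName="ops-jane")
iam.add_user_to_group(GroupName="legacy-app-operators", UserName="ops-jane")
```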
Every IT system needs to be monitored. CloudWatch is a monitoring service for AWS cloud resources running the legacy applications you deployed to AWS. You use this tool to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. This data is used to resolve problems quickly and keep your migrated applications running smoothly—much like you do on the mainframe today. Other cloud-ready monitoring tools are available from 3rd parties as well.
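As an example, the sketch below creates a CloudWatch alarm that fires when CPU utilization on one instance stays above 80% for ten minutes. The instance ID and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on sustained high CPU for the online tier; notify an ops SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="online-tier-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # five-minute periods
    EvaluationPeriods=2,        # two consecutive periods = ten minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```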
Just as you have products and processes to control your application sources and manage application releases on your mainframe today, you need to have a similar set of tools in AWS. AWS CodeCommit is a fully-managed source control service providing secure and private Git repositories. It eliminates the need to operate your own source control system or worry about scaling its infrastructure. CodeCommit is where you'll store your migrated application source code and binaries, new sources and binaries, and anything else you want to archive.
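Creating a repository is a one-call operation; the sketch below does so with boto3 and prints the HTTPS clone URL for your developers. The repository name and description are hypothetical.

```python
import boto3

codecommit = boto3.client("codecommit")

# One repository per migrated application is a common starting convention.
repo = codecommit.create_repository(
    repositoryName="libra-billing-app",
    repositoryDescription="Migrated COBOL sources, copybooks, and build scripts",
)
print("Clone URL:", repo["repositoryMetadata"]["cloneUrlHttp"])
```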
Migrating Unisys mainframe applications to AWS might seem like a daunting, impossible task. It's definitely a challenge. But when carefully planned, managed and executed, the rewards are numerous. Besides the cost savings of the pay-for-use model, once your mainframe application set has been fully deployed on AWS, you'll have the freedom to integrate proven business logic with all the latest technologies (like mobile and augmented reality), expanding your business to new markets, customers, and partners.
Markets and technology don't stand still; they constantly change. Using technologies and services provided by cloud vendors like AWS, smart businesses can adapt to market demands at dizzying speeds and outpace their competition. With that in mind, migrating mainframe applications to the cloud seems more like a necessity than a luxury.
Have a Unisys mainframe? Are you looking to replatform to the AWS Cloud? Have a look at this white paper.
Get in touch with our experts and find out how Astadia's range of tools and experience can support your team.