
How Unisys Transitioned from Proprietary to Open Architecture 

As EnterpriseTech recently reported, Unisys just completed a decade-long transition of the architecture of our flagship ClearPath systems. We moved the two constituent branches of the family – the Sperry Univac-legacy Dorado and the Burroughs-legacy Libra systems – from proprietary CMOS processor technology to a software-based fabric architecture running on Intel x86 processors.

As Unisys’ chief engineer, I guided the teams responsible for this transition from start to finish. At the risk of sounding presumptuous, I must say it was a monumental achievement – with all kudos due to our world-class engineering team. They’re the ones who figured out how to move two operating environments – OS 2200 for the Dorado line and MCP for the Libra systems – from proprietary to open technology while delivering consistent and eventually superior system performance.

Many said it couldn’t be done, including a job applicant with a strong background in IBM mainframes who denied – midstream and despite clear evidence of progress – that we could actually pull it off. This is a moment I reflect on regularly – the epiphany being that you have to believe you can succeed in order to do so.

But we’ve done it. Moreover, I believe our process provides instructive lessons for systems engineers faced with a similar challenge.

The Core of the Challenge

The primary reason we initiated the transition was economic. The cost of developing proprietary processor chips and the related hardware for the two discrete ClearPath product lines was substantial and, given the need to provide ever-increasing power to our clients, certain to be never-ending.

We made the decision to undertake the transition in 2005. Fortunately, our own prior experience and the industry landscape made us optimistic about our prospects for success. We had already transitioned both ClearPath lines to a largely common hardware platform following the Burroughs-Sperry merger in 1986. Moreover, we had gotten out of the power supply, cabling, disk, and ASIC businesses because we realized we could procure those from reliable partners and invest the R&D funds elsewhere. The processor transition would be the last and most complicated step of them all.

Still, we were optimistic in undertaking the processor transition. We were confident we could source our processors from Intel, leveraging the company’s massive investment in processor power and continual improvement. (After all, Moore’s law originated there.) Intel spent more on developing individual processor generations than we did to develop entire systems.

Plus, we were used to working with Intel. We had already licensed their bus design for our systems. In fact, as far back as 1989 we offered a low-end ClearPath Libra predecessor system, called the MicroA, which included a platform with an Intel processor that we leveraged to control the system’s I/O. So we were familiar with the power of Intel’s products and the quality of the company’s engineering.

After a long deliberation in which we evaluated processor designs from various suppliers, we selected Intel’s Xeon over its Itanium as our horse. This was a difficult choice to make in 2005: the Itanium offered a lot of power, but we believed the Xeon would carry the day because it offered the most flexible and cost-effective roadmap for our purposes.

While going with the Intel processor freed us from the necessity of hardware redesign, it required us to re-implement the two architectures in software – some of it new and some adapted from our existing microcode – on top of the Intel processor environment.

The Scope of the Transition

We began the transition in earnest in 2006. We had a good start for the Libra product line because of the work we had previously done with the MicroA and its successor products. It enabled us to run a modest configuration with performance suitable for the entry-level Libra client early in the process. The Dorado product would prove more challenging because the engineering team for that family didn’t have the head start the Libra team enjoyed.

The first major issue to address was long-term processor performance, because we knew it would be a challenge for both product lines.

As economically and logistically advantageous as it was to use the Intel Xeon to anchor future platforms, the processor itself provided no inherent performance advantage over our CMOS technology. While the Intel processors offered faster clock rates than our CMOS chips, other performance requirements meant that raw speed alone was not the controlling advantage.

We knew that gaining parity in overall system performance would be challenging, and we understood we would need to roll out both proprietary and open systems in parallel for an extended period.

We also knew we had to take a stepwise approach, making the new architecture functional at the lower end of the product line first and then optimizing performance over time in mid-range and top-of-the-line systems. And, ultimately, it took the better part of a decade to attain processor parity between proprietary and Intel processors at the high end of our product line.

Initially Intel was making major clock rate improvements in their processors generation over generation, but then that rate of improvement slowed. We’re clock-rate-sensitive, so we compensated by taking advantage of improvements in the Intel microarchitecture. But that alone wasn’t enough.

We used multiple processor cores in the Intel architecture to compensate for the lack of intergenerational clock-rate improvement. That was imperative for acceptable pipeline processing. We used different cores for instruction pre-fetch and execution so as many operations as possible could occur in parallel.
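To make that idea concrete, here is a minimal sketch in C of the general technique: one thread pre-fetches and decodes instructions ahead of a second thread that executes them, with a bounded ring buffer between the two. The instruction format and names are invented for illustration – this is the textbook producer-consumer pattern, not our actual emulator internals.

```c
/* Illustrative sketch: splitting an emulation pipeline across two cores.
 * One thread pre-fetches and decodes; another executes. The "instruction
 * set" here is fake -- the point is the parallel structure. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE   64
#define PROGRAM_LEN 1000

typedef struct {           /* a decoded, ready-to-execute operation */
    uint32_t opcode;
    uint32_t operand;
} decoded_op_t;

static decoded_op_t ring[RING_SIZE];   /* bounded queue between the cores */
static int head = 0, tail = 0, count = 0, done = 0;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Producer: runs on one core, fetching and decoding ahead of execution. */
static void *prefetch_decode(void *arg)
{
    (void)arg;
    for (uint32_t pc = 0; pc < PROGRAM_LEN; pc++) {
        decoded_op_t op = { .opcode = pc % 4, .operand = pc }; /* fake decode */
        pthread_mutex_lock(&lock);
        while (count == RING_SIZE)
            pthread_cond_wait(&not_full, &lock);
        ring[tail] = op;
        tail = (tail + 1) % RING_SIZE;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    pthread_mutex_lock(&lock);
    done = 1;                          /* tell the executor we're finished */
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Consumer: runs on another core, executing decoded operations. */
static void *execute(void *arg)
{
    (void)arg;
    uint64_t accumulator = 0;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&not_empty, &lock);
        if (count == 0 && done) {
            pthread_mutex_unlock(&lock);
            break;
        }
        decoded_op_t op = ring[head];
        head = (head + 1) % RING_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        accumulator += op.operand;     /* stand-in for real execution */
    }
    printf("executed %d ops, accumulator=%llu\n", PROGRAM_LEN,
           (unsigned long long)accumulator);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, prefetch_decode, NULL);
    pthread_create(&c, NULL, execute, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```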

For me, the realization that we should take that approach was a “back to the future” moment – a return to first principles in large-systems design. In the end, and going forward, our ability to continue providing increased performance will similarly rely on multiple factors. Hardware, processor, memory, and I/O channels all contribute, but it is the creative use of multiple cores that makes the difference for us.

Necessary Tradeoffs

Our commitment to both data and code compatibility between the proprietary and open architectures posed another key challenge. We wanted to ensure our clients didn’t have to reformat data to go from a proprietary to an open ClearPath system. We understood that any pain in the transition would encourage clients to consider other migration strategies. It was 100 percent binary code and data compatibility or nothing.

To achieve that compatibility, we had to emulate everything that happened on the old system and do data translation at the same time. On the Dorado platform this was very difficult, because we didn’t expect clients ever to recompile their programs. So binary program executables compiled since 1972 – multiple memory-addressing architectures ago – still needed to execute on the new Intel-based platforms. For Libra systems we have a convention of forcing recompilation every three to five years, so that practice – coupled with an architecture that has no assembler – made the process a bit more manageable.
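As an illustration of what “no reformatting” means in practice, here is a small C sketch that reads legacy-format data in place. It assumes, purely for illustration, that each 36-bit OS 2200 word is stored right-justified in a 64-bit host word and that text is packed as four 9-bit quarter-word characters; the idea is that the emulator works with the legacy layout directly rather than converting the client’s data.

```c
/* Sketch: reading legacy data in place, without reformatting.
 * Assumes each legacy 36-bit word sits right-justified in a 64-bit
 * host word -- a common emulation convention, not necessarily the
 * one Unisys chose. OS 2200 packs four 9-bit characters per word. */
#include <stdint.h>
#include <stdio.h>

/* Extract quarter-word n (0 = leftmost) from a 36-bit word. */
static unsigned quarter_word(uint64_t word36, int n)
{
    return (unsigned)((word36 >> (27 - 9 * n)) & 0x1FFu);
}

int main(void)
{
    /* A 36-bit word holding the 9-bit ASCII characters 'D','A','T','A'. */
    uint64_t w = ((uint64_t)'D' << 27) | ((uint64_t)'A' << 18) |
                 ((uint64_t)'T' << 9)  |  (uint64_t)'A';

    for (int i = 0; i < 4; i++)
        putchar((int)quarter_word(w, i));  /* prints DATA */
    putchar('\n');
    return 0;
}
```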

The two architectures were at different stages of execution in a pure Intel environment, and we leveraged that fact by applying lessons from the Libra transition to the Dorado effort, even though the disparate architectures meant some methods and techniques differed. The fundamental dissimilarity between the systems, however, effectively meant we had to do the whole transition twice, once for each product line.

For example, in MCP arithmetic is done in octal (base 8 math) using a non-IEEE floating point format. We had to emulate that on the Intel processor, building the floating point capability in software so it delivered the same results on the Intel processor as it did on the proprietary chip. There was no need for the same process with OS 2200.
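A simplified sketch of the kind of software emulation involved: decoding a 48-bit word in an octal-exponent floating point format (value = mantissa × 8^exponent) into a host double. The field layout below follows the classic Burroughs large-systems format, but treat it as illustrative only – the production emulation must reproduce every rounding and overflow behavior of the original hardware, which this sketch does not attempt.

```c
/* Sketch of decoding a non-IEEE, octal-based floating point format on
 * an x86 host. Field widths follow the classic Burroughs large-systems
 * layout (39-bit mantissa, 6-bit exponent, value = mantissa * 8^exp),
 * but this is an illustration, not the production emulator. */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Decode a 48-bit word held in the low bits of a uint64_t. */
static double decode_octal_float(uint64_t w)
{
    uint64_t mantissa = w & 0x7FFFFFFFFFULL;          /* bits 38..0  */
    unsigned exp_mag  = (unsigned)((w >> 39) & 0x3F); /* bits 44..39 */
    int      exp_neg  = (int)((w >> 45) & 1);         /* bit 45      */
    int      man_neg  = (int)((w >> 46) & 1);         /* bit 46      */

    double value = (double)mantissa *
                   pow(8.0, exp_neg ? -(double)exp_mag : (double)exp_mag);
    return man_neg ? -value : value;
}

int main(void)
{
    /* mantissa = 5, exponent = -1: value = 5 * 8^-1 = 0.625 */
    uint64_t w = (1ULL << 45) | (1ULL << 39) | 5ULL;
    printf("%g\n", decode_octal_float(w));  /* prints 0.625 */
    return 0;
}
```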

However, there were distinct challenges with OS 2200. Its memory architecture isn’t a flat memory model. We had to emulate a completely different memory management model to run on the Intel processor, which doesn’t use the same kind of paged memory. That dissimilarity would have had a huge impact on performance if we hadn’t compensated for it.
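In rough terms, that emulation looks like the C sketch below: every legacy memory reference is a (bank, offset) pair translated through a per-bank descriptor with bounds checking. The descriptor here is hypothetical and far simpler than OS 2200’s actual banking mechanism, but it shows why the translation path must be kept extremely short – it runs on every memory access, which is exactly where the performance compensation mattered.

```c
/* Sketch of emulating a banked (non-flat) memory model on a flat x86
 * address space. The descriptor layout is hypothetical; OS 2200's
 * actual bank mechanism is considerably richer. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint8_t *base;    /* host address where the bank's storage lives */
    uint32_t limit;   /* bank size, in emulated units */
} bank_desc_t;

#define NUM_BANKS 16
static bank_desc_t banks[NUM_BANKS];

/* Translate (bank, offset) to a host pointer, enforcing bank bounds.
 * This path runs on every legacy memory access, so it must stay short. */
static uint8_t *translate(unsigned bank, uint32_t offset)
{
    if (bank >= NUM_BANKS || offset >= banks[bank].limit) {
        fprintf(stderr, "addressing exception: bank %u offset %u\n",
                bank, offset);
        exit(1);
    }
    return banks[bank].base + offset;
}

int main(void)
{
    banks[0].base  = malloc(4096);
    banks[0].limit = 4096;

    *translate(0, 100) = 42;             /* store through the bank */
    printf("%d\n", *translate(0, 100));  /* prints 42 */
    return 0;
}
```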

We chose different open implementations for each, too. In MCP, we had been doing cooperative processing between native applications and those running in Windows, so it made sense to continue doing so. The OS 2200 environment was biased toward Linux, so we created a purpose-built version of Linux, changing the kernel to accommodate the transition. We couldn’t take a similar tack for MCP because we couldn’t change the core of Windows. Each was the right thing to do in the context of the specific platform at that point in time, but the solutions were very different.

New Secure Partitioning Technology Drove a Performance Breakthrough

The way we traditionally handled I/O for each line was quite different as well, so we decided to unify the approach for both platforms and streamline the number of components involved. We concluded the best way to do that was through software – by creating a stack that enabled us to put processor and I/O functions on the same physical platform. This all-software representation of the I/O subsystem constituted “the last hard bit” for the Dorado team.

We came up with a software technology we call secure partitioning (s-Par). It implements “shared-nothing” virtualization, assigning each application workload running on the system its own secure partition. Unisys s-Par relies on the same Intel Virtualization Technology (VT) hardware that standard “shared everything” virtualization environments use.

We couple s-Par with high-speed, software-based interconnect technology to directly link all the partitions and allied resources in an architecture we call “the fabric.” Today’s fabric implementation is based on InfiniBand technology that we borrowed from the high-performance computing (HPC) space. This allows us to couple multiple x86 platforms into one cooperative computing complex.

Each s-Par partition is a software-defined blade with dedicated processing, memory, and I/O resources for its workload. This eliminates the “noisy neighbor” syndrome – that is, cross-partition leakage among workloads – that almost invariably accompanies “shared everything” virtualization. That leakage can significantly degrade application performance.
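Conceptually, the shared-nothing property can be sketched in C as follows: each partition owns an exclusive set of cores, a private memory region, and a dedicated I/O queue, and no two partitions may overlap on any of them. The structure and field names are invented for illustration – the real s-Par implementation sits on Intel VT hardware and is considerably more involved.

```c
/* Conceptual sketch of "shared-nothing" partitioning: each partition
 * holds exclusive cores, private memory, and a dedicated I/O queue.
 * All names and fields here are hypothetical illustrations. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *workload;  /* e.g. "MCP", "OS 2200", "Linux" */
    uint64_t cpu_mask;     /* bitmask of cores owned exclusively */
    uint64_t mem_base;     /* start of the private physical region */
    uint64_t mem_size;
    int io_queue_id;       /* dedicated interconnect/I/O queue */
} partition_t;

/* Verify the shared-nothing property: no core, memory, or queue overlap. */
static int disjoint(const partition_t *a, const partition_t *b)
{
    int cores_ok = (a->cpu_mask & b->cpu_mask) == 0;
    int mem_ok   = (a->mem_base + a->mem_size <= b->mem_base) ||
                   (b->mem_base + b->mem_size <= a->mem_base);
    return cores_ok && mem_ok && a->io_queue_id != b->io_queue_id;
}

int main(void)
{
    partition_t mcp    = { "MCP",     0x0Full, 0x000000000ull, 1ull << 32, 0 };
    partition_t os2200 = { "OS 2200", 0xF0ull, 0x100000000ull, 1ull << 32, 1 };

    printf("shared-nothing holds: %s\n",
           disjoint(&mcp, &os2200) ? "yes" : "no");
    return 0;
}
```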

The s-Par containment eliminates resource contention and enables fast, predictable application performance with exceptional security among OS 2200, MCP, Linux and Windows applications residing in different parts of the fabric.

We delivered the first Libra/MCP systems based on the s-Par technology in late 2010 and followed up with the first s-Par-enabled Dorado/OS 2200 systems in mid-2012. We achieved our initial goal within four years of starting out.

In September 2013, the Libra team achieved its greatest performance milestone by eclipsing all previous Libra designs for single-thread performance and single-image capacity. A year later, it completed the journey with delivery of the first fabric-based Libra system. In May 2015, the Dorado team accomplished the same feat in delivering the fabric-based Dorado 8300, which surpasses all previous Dorado designs in terms of performance and capacity.

Success Is Transparent

However, I believe the delivery history of the systems is less important than our clients’ reaction to the architectural transition: They haven’t noticed. For example, a client running a benchmark from the proprietary Libra 690 on a fabric-based Libra 6300 says they can’t tell the difference. To give a real-world example that reinforces the point even more: In 2013 a Unisys client that processes transactions totaling $2 trillion each day transitioned seamlessly, overnight, from a proprietary Libra system to one based on s-Par and Intel Xeon technology – and nobody noticed.

Naturally, we view that transparency as a measure of our success – the ultimate compliment, in fact. We did what our clients expected, and the monumental effort behind it is completely transparent to them.

The journey is far from over, though. Clients in both system families tell us they will require higher levels of performance and capacity in the near future. We also know we won’t get processor clock-rate improvements from Intel that will completely satisfy those clients, so the Unisys engineering team must be creative to continue doing what we’ve been told cannot be done.

I take additional satisfaction in knowing we had no roadmap for this transition other than our own. We embarked on the evolutionary process with the conviction that Intel and the IT industry would develop technologies and components that would help us achieve our objective. More importantly, Unisys’ senior leadership has consistently shown unshakeable confidence in our engineering team – in its capacity for innovation and commitment to carry out a transparent revolution that would advance our flagship product line and greatly benefit Unisys’ clients.

About the Author:

Jim Thompson is chief engineer at Unisys Corp. Follow him on Twitter @unisys_chiefeng or @unisyscorp
