We at SAP have learned over the years, now 36 years after delivering our first packaged application, that successful large-scale enterprise software follows essentially the same lineage. It solves fundamental problems that businesses face every day, across generations of business change and technological change, and, in doing so, it continuously evolves in a constant cycle of renovation. I call this Timeless Software, and I want to write here about some of its fundamental characteristics, and how it will help define our software through the next several generations of change to come.
Where We Are
SAP’s software today covers a massive breadth of business activities. Functionality in the Business Suite covers a large spectrum of business processes, from finance and human resources to sales and service, from planning and procurement to manufacturing and logistics, from managing supply chains to managing business strategy, decision-making and compliance, and more. In addition, its functionality spans variations on these processes across 100+ countries and 25+ industries. Despite this massive reach, customers expect a fundamental degree of coherence, stability, reliability and integration across the various elements of such software. The expectation of stability, given the mission-critical nature of many of these business processes, coupled with the fundamental ways in which businesses deploy and use the software to mirror their own business and its uniquenesses, means that our software, and our relationships with customers, are very long-lived and often last decades. Over this long lifespan, customers simultaneously expect the software to contribute to their two fundamental metrics:
- Costs, by ensuring that the software is integrated and comprehensive, and easy to reliably operate and cheaply administer, and
- Growth, by ensuring that the software addresses differentiated areas and is easy to evolve, change, and integrate into others as necessary
So this, then, is the essential duality that our customers expect from their IT landscape: Deliver operational efficiency via coherence and stability, while enabling business growth and managing change necessary to survive and grow. And this becomes our prime requirement: Enable evolution of our software without disruption; provide a large breadth of stable functionality, over generations of change. And it is around this requirement that we seek to design and architect the evolution of our software.
What is the nature of this change dynamic? Business requirements change all the time: markets evolve, circumstances governing customers’ purchases of products change constantly, businesses are bought and sold, regulations change, and just the day-to-day challenges of competing require a constantly shifting and evolving IT landscape. But change occurs at other layers as well. People’s behavior evolves constantly. There are now millions of BlackBerry-carrying business users worldwide who carry out quite a bit of their tasks outside their offices. This year we estimate that nearly a billion people worldwide will conduct some business activity or other on a mobile device. The technological layers change as well. Every year we see roughly two new major UI paradigms. Just in the last 3 years, we have seen the iPhone, and Google’s work on Google Maps and the highly interactive web applications enabled by Ajax.
As I look to the future, evolving our products for the next generation, this becomes our essential challenge: How do we build applications to serve the needs of every user, and every activity, in every business around the world? And how do we do so effectively, efficiently and with maximum coherence? And how do we evolve these applications, their ongoing change, consumption, delivery and integration, as well as their connection to the present, across generations of change? How do we deliver software that is always reliable, and yet always modern? In other words, how do we build timeless software? As I think about this question, I see three fundamental elements:
- Content, i.e. the application content, the UI content, the integration content, to represent and serve the activities of users
- Containers, i.e. the runtime(s) that this content inhabits, and
- Change, i.e. the ongoing operation and evolution of both the content and the containers over the lifecycle of a solution while maintaining a continuous link with the past
There are other aspects, to be sure, but these are the three basic ones and I want to share my view on their evolution next.
The Evolution of Content Creation
But lots of languages and dialects also exist for other reasons: there are many different domains and problem characteristics within enterprise systems, and for each domain, unique combinations of syntax, tooling conveniences and programming models emerge over time. From Jon Bentley’s “little languages” to the modern-day notion of “domain-specific languages”, there are many variations on essentially the same exercise: expressing meaning in convenient, specialized ways. There are programming models and domain-specific languages around user interfaces, for instance. Data has lots of variations too: modeling and querying business data, languages for reporting and analytics, for search and large-scale data processing (as Google showed with their map/reduce programming model), for managing XML-based or other hierarchical data, and others. Describing flows, events, rules, software lifecycle, and other aspects each brings its own variations, and the same thing happens in specific application areas and in particular industries. Over time, with successful adoption, these abstractions and conveniences increase. Our own ABAP, for instance, saw several programming models integrated within a general-purpose language: abstractions and extensions for data access, for reporting, for UI, even object-oriented programming within ABAP, in the form of ABAP Objects. Java, similarly, grew over the years into lots of domains, and ultimately the JSR process served to systematize the inclusion of extensions and programming models within the language. And there are similar examples in other domains, in hardware design for instance. Even cliques of teenagers invent their own unique DSLs for texting.
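To make the "little languages" idea concrete, here is a minimal sketch, in the spirit of Bentley's little languages, of a tiny query dialect for business records embedded inside a general-purpose host language. All names here (`Query`, `where`, `select`) are illustrative inventions, not any SAP or ABAP API:

```python
# A tiny embedded "little language" for querying business records.
# The DSL builds the query up as data, then interprets it.

class Query:
    def __init__(self):
        self._filters = []
        self._projection = None

    def where(self, field, op, value):
        ops = {"=": lambda a, b: a == b, ">": lambda a, b: a > b}
        self._filters.append(lambda rec: ops[op](rec[field], value))
        return self  # chaining gives the dialect its fluent feel

    def select(self, *fields):
        self._projection = fields
        return self

    def run(self, records):
        rows = [r for r in records if all(f(r) for f in self._filters)]
        if self._projection:
            rows = [{k: r[k] for k in self._projection} for r in rows]
        return rows

orders = [
    {"id": 1, "country": "DE", "amount": 1200},
    {"id": 2, "country": "US", "amount": 300},
    {"id": 3, "country": "DE", "amount": 80},
]

big_german_orders = (Query()
                     .where("country", "=", "DE")
                     .where("amount", ">", 100)
                     .select("id")
                     .run(orders))
# big_german_orders == [{"id": 1}]
```

The point is not the particular syntax but the pattern: the domain expert writes in terms of countries and amounts, while the host language supplies the machinery underneath.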
Another key source of diversity in programming stems from the nature of the programmers. Programmers bring different degrees of training/understanding in computer science concepts, in business, and in particular domains. So languages and language constructs, as well as specific abstractions emerge for different programmer segments, be it system programmers, business analysts, administrators, or others.
This diversity is great, insofar as it enables abstractions and separation of concerns, so different classes of problems are dealt with uniquely. After all, the world does not speak one language, as any visit to the UN assembly hall would demonstrate. But the challenge is the resulting complexity that these isolations create. The various abstractions/specializations lead to islands of diverse, non-interoperable languages, language run-times and software lifecycles. Like barnacles attaching themselves to a host, these variations often lead to increased landscape complexity and dramatically higher costs of operation.
So my sense is we need an enterprise programming model that is deeply heterogeneous yet integrated. One that enables expression of meaning in a wide variety of simple and convenient ways, including ways yet to be invented, without losing coherence. One that:
1. Enables developers across lots of domains and specializations to use their native abstractions and conveniences
2. Uses a family of integrated domain-specific languages and tooling conveniences to build software artifacts with maximum efficiency and productivity
3. Has a powerful glue that binds these diverse elements together
4. Can be extended by communities and developers of various sorts in lots of different ways, and
5. Can integrate the next great languages, including languages yet to be invented, and can itself be renovated and embedded in other programming models
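The "glue" in point 3 can be sketched in miniature. In this hypothetical example (the dialects, the instruction set, and all names are invented for illustration), two tiny front-end dialects lower to one shared intermediate form that a single common runtime executes:

```python
# Shared glue: a tiny program of (operation, arguments) tuples,
# executed by one common runtime regardless of source dialect.
def run_glue(program, env):
    for op, *args in program:
        if op == "set":
            name, value = args
            env[name] = value
        elif op == "add":
            target, a, b = args
            env[target] = env[a] + env[b]
    return env

# Front-end 1: an assignment dialect, e.g. "total = net + tax".
def compile_calc(src):
    target, expr = [s.strip() for s in src.split("=")]
    a, b = [s.strip() for s in expr.split("+")]
    return [("add", target, a, b)]

# Front-end 2: a keyword dialect, e.g. "LET net 100".
def compile_let(src):
    _, name, value = src.split()
    return [("set", name, int(value))]

# Both dialects interoperate because they share the glue.
program = (compile_let("LET net 100")
           + compile_let("LET tax 19")
           + compile_calc("total = net + tax"))
env = run_glue(program, {})
# env["total"] == 119
```

Adding "the next great language" then means adding one more compiler to the glue, not one more isolated runtime and lifecycle.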
The Evolution of Containers: Next Runtimes
More than a decade ago, I examined one aspect of this issue in my own Ph.D. work, looking at how meaning, specified in highly generic logic-based languages, could be executed optimally using specialized procedures that cut through the layers of abstraction to achieve significant optimization compared to a generic logical reasoning engine. The principle underneath this is the same one: by separating meaning from optimization, a system can provide both the efficiency and generality of specification in a wide variety of specialized dialects interoperating over a common glue, and a very efficient implementation of that glue, pushed down to the lowest layer possible in the stack, across the layers of abstraction.
There are examples of this principle at work in other areas of our industry. The OSI stack implements seven very clean layers of abstraction in the network, and yet a particular switch or router optimizes across these layers for extreme runtime efficiency. Hardware designers, similarly, use a variety of languages to specify various hardware functions, e.g. electrical behavior, logical behavior or layout, and yet when a chip is assembled out of these specifications, it is an extremely lean, optimized implementation, baked into silicon. Purpose-built systems can often dictate their requirements to the platform layers below them; general-purpose systems often do not know in advance how they will be used, and so tend to be suboptimal compared to purpose-built systems, but more widely applicable.
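A toy illustration of cutting across layers, with all names invented for the example: a pipeline is specified as clean, separate layers, but an optimizing runtime fuses them into a single pass rather than faithfully materializing each layer's intermediate result. The meaning stays layered; the execution does not:

```python
# A pipeline specified as clean, separate layers of abstraction.
layers = [
    ("filter", lambda x: x % 2 == 0),   # keep even values
    ("map",    lambda x: x * 10),       # scale them
    ("filter", lambda x: x < 100),      # keep small results
]

def run_layered(data, layers):
    # Faithful to the abstraction: one full intermediate list per layer.
    for kind, fn in layers:
        data = [fn(x) for x in data] if kind == "map" else [x for x in data if fn(x)]
    return data

def run_fused(data, layers):
    # Same meaning, optimized execution: one pass, no intermediates.
    out = []
    for x in data:
        keep, v = True, x
        for kind, fn in layers:
            if kind == "map":
                v = fn(v)
            elif not fn(v):
                keep = False
                break
        if keep:
            out.append(v)
    return out

data = list(range(20))
assert run_layered(data, layers) == run_fused(data, layers)
```

The developer sees the layered specification at design time; the runtime is free to collapse it, which is exactly the separation of meaning from optimization described above.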
But beyond crossing the layers of abstraction, run-times have an additional burden to overcome. In enterprise systems, we are often faced with tradeoffs in managing state across boundaries of processes and machines. There are three key building blocks in computing: networks, i.e. moving data around; processors, i.e. transforming data; and state, i.e. holding data, in memory or on disk. And different types of applications lend themselves to differing optimizations along these three dimensions. Several years ago, when dealing with some difficult challenges in advanced planning and optimization, our engineers did some pioneering work in bringing applications close together with main-memory based data management in our LiveCache technology. The result, implemented successfully in our advanced planning applications, showed the value of optimizing across these three dimensions.
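The network/processing/state tradeoff can be made concrete with a toy sketch (purely illustrative, not LiveCache itself): shipping raw rows across a boundary to the application, versus pushing the computation down to where the data lives, which main-memory data management makes cheap:

```python
# Toy dataset: (country, amount) rows living "in the data layer".
rows = [("DE", 120), ("US", 80), ("DE", 50)] * 1000

def total_ship_rows(country):
    # Every matching row crosses the process boundary,
    # and the aggregation work happens application-side.
    shipped = [amt for c, amt in rows if c == country]
    return sum(shipped), len(shipped)   # (result, items moved)

def total_pushdown(country):
    # The aggregate runs next to the data; a single number
    # crosses the boundary.
    return sum(amt for c, amt in rows if c == country), 1

# Same answer, radically different traffic across the boundary:
# total_ship_rows moves 2000 items, total_pushdown moves 1.
assert total_ship_rows("DE")[0] == total_pushdown("DE")[0]
```

Which side of the tradeoff wins depends on the application, which is why the next runtimes need to make these optimizations flexible rather than fixed.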
So to summarize, I believe the next great run-times will implement the glue at lowest levels possible in the stack, cutting across the layers of abstractions that make developers’ lives easy at design-time but are not needed at run-time. These runtimes will flexibly enable various different application-oriented optimizations across network, state and processing and will enable execution in specialized containers or consolidated containers, in elastic, dynamically reconfigurable ways. This deployment elasticity will take virtualization several layers higher in the stack, and will open new ways for customers to combine flexibility and optimization under one unified lifecycle management, the final piece of the puzzle.
The Evolution of Change: Lifecycle Management
Perhaps the most important piece of this trichotomy is the third one: Change, i.e. managing the lifecycle of a system over the continuous change in its contents and containers.
One of the fundamental prerequisites of lifecycle management is the ability to precisely describe and document existing or legacy systems. This documentation, whether it describes code or system deployment, is a critical link across a system’s life. ABAP systems have well-defined constructs for change management, software logistics, versioning, archiving, etc., as well as metadata for describing code artifacts, which makes it easier to manage change.
Consuming legacy software often means understanding what is on the “inside”. Well-defined wrappers, or descriptors, of software can help with this. But it is also often necessary to carve well-defined boundaries, or interfaces, in legacy code. Such firelaning, which has long been a practice in operating systems for evolving code non-disruptively, is essential to managing code’s evolution over the long haul. Service-oriented architectures are a step in this direction, but having legacy code function side-by-side with “new” code often requires going far beyond what the SOA institution has considered so far. It requires data interoperability, especially for master data; it requires enabling projections, joins and other complex operations on legacy code; and it requires lifecycle, identity, security, and versioning information about the legacy code, along with policies to manage its run-time behavior, and other aspects. Most of these steps today are manual, and enterprises pay significant integration costs over a system’s lifetime to manage them. Over time I see this getting significantly better. But it starts with provisioning, or enabling, existing code to behave in this manner, carving nature at her joints, as Alan Kay once told me the Greeks would say. I also see incumbents with an existing enterprise footprint as having a significant advantage in getting here. It is often far easier to carve a lane out of existing code than it is to replace it.
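A minimal sketch of the firelaning idea, with every name and descriptor field invented for illustration: a stable, described interface is carved in front of a legacy routine, so that new code can call it, and tooling can reason about it, without knowing its insides:

```python
def legacy_price_calc(p, q, d):
    # Imagine decades-old code: terse names, implicit contracts,
    # too valuable and too entangled to rewrite.
    return p * q * (1.0 - d)

class PriceService:
    """A well-defined boundary (a 'lane') in front of the legacy routine."""

    # A machine-readable descriptor: the kind of metadata that
    # lifecycle tooling needs about what sits behind the lane.
    descriptor = {
        "name": "PriceService.quote",
        "version": "1.0",
        "inputs": {"unit_price": "float", "quantity": "int", "discount": "float"},
    }

    def quote(self, unit_price, quantity, discount=0.0):
        # The lane makes the implicit contract explicit.
        if not 0.0 <= discount < 1.0:
            raise ValueError("discount must be in [0, 1)")
        # Delegate to the untouched legacy implementation.
        return legacy_price_calc(unit_price, quantity, discount)

svc = PriceService()
# svc.quote(10.0, 3, 0.1) gives 27.0, up to float rounding.
```

The legacy code is not replaced; it is provisioned to participate, which is the whole point of carving the lane rather than rebuilding the road.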
Great lifecycle management is the essential change management mechanism. My sense is that next-generation lifecycle management will enable systems that can easily be tried, consumed, extended, added to, removed from, projected on, integrated with, and so on. This will be achieved by enabling every artifact in a system to be measured, managed, and tested. We will see existing and legacy code being instrumented for administration, for documentation, as well as for integration. A key ingredient will be precise, mechanizable specification and documentation of all important aspects of the system. The specification of a system’s behavior, its usage, and the service levels and policies describing its operation, especially for security, access and change, will be fundamental to this. We already see efforts towards precise, mechanized specifications of system behavior, and we will see more of this. SAP has already taken some steps in this direction with our enhanced enterprise support offering, which enables a business to manage the lifecycle of its entire system landscape from one console.
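One way to picture "mechanizable specification" in miniature (the landscape, artifact names, and schema below are all invented for illustration): each artifact declares its version and requirements, and a small checker verifies the whole landscape before a change is applied, so a disruptive change is caught mechanically rather than discovered in production:

```python
# A toy landscape: each artifact declares its version and what it requires.
landscape = {
    "finance-core":  {"version": (7, 2), "requires": {}},
    "reporting":     {"version": (1, 4), "requires": {"finance-core": (7, 0)}},
    "mobile-portal": {"version": (2, 0), "requires": {"reporting": (1, 3)}},
}

def check_landscape(landscape):
    """Return a list of violated requirements; empty means consistent."""
    problems = []
    for name, art in landscape.items():
        for dep, minimum in art["requires"].items():
            if dep not in landscape:
                problems.append(f"{name}: missing {dep}")
            elif landscape[dep]["version"] < minimum:
                problems.append(f"{name}: needs {dep} >= {minimum}")
    return problems

assert check_landscape(landscape) == []

# Simulate a disruptive downgrade and catch it before it ships:
landscape["reporting"]["version"] = (1, 2)
assert check_landscape(landscape) == ["mobile-portal: needs reporting >= (1, 3)"]
```

Real lifecycle management covers far more (state, data, policies, operations), but the principle is the same: once the specification is machine-readable, the change check becomes automatic.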
Deep interoperability between design-times, run-times and lifecycle management, will enable us to combine deployment options in ways that were not possible before. For the foreseeable future we see customers employing some parts of their processes as on-demand services, but deploying most of their processes on-premise. Our lifecycle management frames will ensure that customers can make such deployment choices flexibly.
The Evolution of Our Products Along Timeless Software
Our portfolio of products, starting with the Business Suite, including Business Objects and NetWeaver and Business ByDesign, will continually evolve along these principles of timeless software.
As the picture above illustrates, we will continue to enhance our massive yet coherent breadth of functionality, to reflect ever-increasing business activities across industries, geographies, and roles. This functionality will be built and extended using an evolving programming model, often in languages that have not yet been invented. And it will be deployed in new ways: in the cloud, as appliances, on-premise, and all of the above. This functionality will be exposed for wide varieties of consumption, across consumers, business-user workplaces, and devices, rendered via a wide variety of specialized client-side technologies, built by SAP as well as others. And yet all of this functionality will live under the same lifecycle frame, the backbone that will support the constant evolution and constant optimization of our customers’ landscapes. Our products will therefore reflect these principles. We will continually carve new lanes, and deliver new functionality, even deep new technologies. The applications will evolve continuously, and piecewise, as nature does: bringing new things, renovating others, adding here and retiring there, and doing so without breaking their essential qualities: reliability, integrity, integration, seamless administration, change and lifecycle management. Just as every few years we humans shed most of our cells, acquire new memories and lessons, decisions and beliefs, and evolve, yet stay essentially who we are, I believe it is possible for software to renovate itself completely, and yet continuously.
So as excited as I am looking ahead to the innovations on the horizon and beyond, with tons of new technologies, new capabilities, and new functionality to be delivered in our software, it is perhaps most reassuring that none of this will break the essential promises at the heart of timelessness: reliability, integrity, coherence and continuous evolution.
On that reassuring thought, it is time to press the bed button on my seat and try out the fancy new lie-flat bed, to end a day that began 20 hours and three time zones ago. And as I browse through the 80 movies on board, and notice the flight monitor displaying the plane’s airspeed of 567 miles per hour, things that passengers 40 years ago could not have imagined, I find myself thankful to be in the comfort of a well-engineered timeless system.