A brief history of digital transformation

Transformation through technology can be traced back dozens (the internet), hundreds (the printing press), or even thousands (the wheel) of years. Creating software applications, building IT infrastructure, and rolling out business processes are not new—every tech publication from Business Insider to Wired has touted digital transformation, and the MIT Sloan School of Management has an entire initiative on the digital economy.

But exponential technological growth does have an origin story. It began with infrastructure, which became the foundation for applications that redefined business processes. And in that way, everything is connected: The infrastructure determined what kinds of apps were used, which determined what kinds of processes worked best.

  1. IT infrastructure is the primary digital transformation disruptor. Mainframes led to servers, which led to networks, which led to cloud hosting, which led to today’s hybrid environments. But companies didn’t need to adopt or adapt to the latest infrastructural breakthrough all at once. It started with mainframes. Those jurassic metal machines redefined data processing by doubling the number of computations that could be processed in a minute. Still, the technology wasn’t adopted everywhere overnight. Governments were the first to put mainframes to work, cutting census processing time from a dozen years to just a year and a half. Once servers connected networks—particularly when a 2 GB server began hosting the World Wide Web in 1991—businesses had to change their infrastructure approach or get left behind. Every business that wanted a place on the internet needed a server. And now, servers host intranets that support private clouds, connect to the internet to support public clouds, and support both via hybrid clouds.
  2. Applications are how business gets done today, but this wasn’t always the case. Applications began disrupting the market around the same time servers became the popular infrastructural tool. (Think about that for a minute: New IT infrastructure wasn’t even fully mature when a different technological disruptor began shaking things up.) Monolithic applications came first: 1 application to 1 server. Want a new application? You’ll need a new server. Input, output, and processing were often handled by a single piece of hardware. The breadth of an application’s disruption was limited by a business’s literal footprint—you had to have room for more servers if you wanted more (or better) applications. Some monoliths gave way to n-tier architecture, which breaks the functional pieces of an application apart so that 1 server can handle the needs of more than 1 application. Using a client-server model, processing was split across 2 tiers: client systems (tier 1) that connected back to servers (tier 2). Today, some n-tier applications have been replaced by microservices, which break apps down into even smaller, independently deployable components (a minimal sketch of one such service follows this list). Many businesses now revolve around a single app, and many integral business processes—logistics, manufacturing, research, development, management—depend on apps. Applications’ growing prevalence in business was driven first by servers’ market disruption and then by the evolution of application architecture. That architecture is still evolving today, and it’s affecting business processes.
  3. Business processes may not seem transformative (How can a process—an inherently abstract workflow—be digital?), but they’re made so by the systems the processes depend on. The waterfall approach allowed 1 group of researchers, developers, or operators to use a machine at any given time. It was a slow process with only a few code releases per year, because it took an entire mainframe to run (for example) the very complex mathematical calculations required to determine the orbital entry of astronauts. And at a calculation rate of 2,000 processes per minute, compared to today’s 1,000,000,000,000,000—yep, that’s a 1 with 15 zeros at the end—such a calculation could take days. On top of the time it took for a computer to process inputs and output results, each mainframe was larger and more expensive than today’s machines. So even if you could afford multiple mainframes, you may not have had anywhere to put them, since a single mainframe took up more than 350 square feet of floor space. With so few systems in such high demand, there weren’t many process options besides the waterfall method. Multitier processing allowed for more agile development processes, but development and operations teams still worked separately. This wasn’t a bad thing; each team simply required different workflows and environments. But it did lead to some speed bumps. Consider this: a developer might create a new app with great features. Those great features also hog a ton of resources, but that’s not something the developer thinks about, because implementation is operations’ responsibility. On the other hand, the operations team might need to measure resource use by app tier, which means the developer has to shoehorn additional code into the app, which may or may not fit properly. These are the kinds of hiccups that microservices and containers—which enable DevOps processes—alleviate. They make tighter collaboration possible, where teams work iteratively on components using a consistent set of tools, with code that can be migrated between teams and environments as needed (see the second sketch after this list).
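To make the monolith-versus-microservice contrast concrete, here is a minimal sketch of a single-purpose service written against Python’s standard library only. The service name, port, route, and inventory data are all hypothetical illustrations, not part of the history above; a real microservice would add packaging (typically a container image), an API contract, and service discovery.

```python
# Minimal sketch of a hypothetical single-purpose "inventory" microservice.
# In a monolith, this lookup would be one function among hundreds sharing a
# single codebase and a single server; here it is the entire service, so it
# can be built, deployed, and scaled on its own.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sample data standing in for a real datastore.
INVENTORY = {"widget-a": 120, "widget-b": 34}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose one narrow capability: GET /inventory/<item-id>
        if self.path.startswith("/inventory/"):
            item_id = self.path.rsplit("/", 1)[-1]
            if item_id in INVENTORY:
                body = json.dumps({"item": item_id, "count": INVENTORY[item_id]}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    # Each microservice runs in its own process (commonly its own container),
    # so operations can scale or replace it without touching other services.
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```

A client, or another service, would reach it with a plain HTTP request such as GET http://localhost:8080/inventory/widget-a, which is what lets teams swap or scale this one component independently.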
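The resource-measurement speed bump described in item 3 also gets easier when each service reports on itself. The sketch below is a hypothetical illustration, not a standard interface: the endpoint name and the metrics shown are assumptions, but the idea is that operations can observe a service without the developer shoehorning tier-specific measurement code into a shared application.

```python
# Hypothetical sketch: a service exposes its own /metrics endpoint so the
# operations team can observe resource use per service, rather than asking
# the developer to wedge tier-specific measurement code into a shared app.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START_TIME = time.monotonic()
REQUEST_COUNT = 0  # in practice, incremented by the service's real handlers

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            # Illustrative metrics only; a real team would agree on what to
            # expose (memory, latency, queue depth, and so on).
            body = json.dumps({
                "uptime_seconds": round(time.monotonic() - START_TIME, 1),
                "requests_served": REQUEST_COUNT,
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9090), MetricsHandler).serve_forever()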

[Source: Chris Bradley & Clayton O’Toole, An Incumbent’s Guide to Digital Disruption, McKinsey & Company]
