How Did We Get Here?
A few years ago, a friend of mine who runs the DBA group at a large automotive corporation commented, “I remember when I was in charge of a mainframe database. Everything was so much simpler. Security was centralized, the storage system was all right there, access was clear. Why did they have to go and split this into so many boxes, layers and systems?”
I was struck by his complaint mainly because it was so different from the then-accepted norm. This was well before the current trends of server consolidation and cloud computing became self-evident.
Has the decentralization pendulum run its course and is now simply swinging back?
Are these moves technology driven? Is now the right time, given that the internet is light years ahead of where it was when Netscape and Larry Ellison wanted to replace the (Microsoft-powered) PC with a network computer (NC, an acronym widely used for a while)?
Or are we simply seeing economics and control plays?
Old questions revisited
Let me take you back to the early ’80s. Our jungle had several species of mainframes fighting for supremacy (do the names CDC, Burroughs, or Honeywell ring a bell?). A slew of smaller creatures, minicomputers, were threatening the hegemony of mainframes. Smaller still, microcomputers (later named “Personal Computers”) were starting to emerge.
The minicomputers of the ’80s were no slouches. They boasted operating systems like UNIX and VAX/VMS and were manufactured by giants such as DEC and IBM (even AT&T jumped in for a while). When a company needed to decide where to house its next application, on a brand new minicomputer or on the (perhaps upgraded) mainframe, how did it make that decision?
An economic definition of a minicomputer that was popular among department managers was: “A mini is a computer you can hide in your departmental budget.”
Rather than base the definition on technology parameters, the computer was defined in terms of the conflict between centralized control and local flexibility and power.
The technical advantages of centralization did not trump the promised local control and flexibility of minicomputers. They survived and flourished until PCs became more powerful and more useful. You can find many articles from around 1990 debating a similar battle to the one we’re seeing waged now. “What is better,” the argument ran, “a powerful minicomputer with central storage and control, or a LAN with multiple PCs?”
Similar arguments were used, but the old supply and demand curves won again. Low cost PC LANs with new operating systems (do the names LAN Manager and Netware ring a bell?) found acceptance among users who could not afford the more expensive minicomputers.
That demand goes up when the price goes down doesn’t always mean you buy two for one. It often means that new customers come in at a more affordable price point.
In parallel, companies with minicomputers also bought PC-based LANs for other applications. We now had three species roaming the IT jungle as processing platforms: mainframes, minicomputers (later renamed servers), and PCs.
Those were joined by a fourth species: networks. The networks did not have their own processing or storage but allowed access and data sharing. The plethora of network-attached storage devices and their high-speed brethren appeared later. The new core idea that emerged from these networks was that of centralized storage not “owned” by any particular processing computer. A computer dedicated to managing access to storage was given a name, the file server, and a new species was born.
Remember the computer you could hide in your department’s budget? You were in full control. No need to wait in line for IT, no need to compete with other departments. Well, the same departments that got file servers, later got web servers, application servers, you-name-it servers. Decentralization reached its zenith.
It is somewhat ironic that virtualization technology was initially adopted to help cope with testing and deployment in decentralized environments. Virtual machines “simulated” the real world out there and QA teams were able to subject applications to real world conditions. Virtualization graduated from that environment into production and provided the force needed to swing the pendulum back towards centralization.
A second force, the high speed internet, joined to provide another centralization alternative – the Cloud. Whether used as a central platform offering virtual machines as “servers on demand” or as centralized application or storage services, the Cloud is a new alternative to be evaluated.
The Network Computer, after a brief disappearance, seems to be re-emerging under a new name: the netbook. As usual, it is not the network computing ideology but rather “mundane” elements such as form factor and price point that fuel the netbook’s acceptance. Later, people will ask, why can’t we use it in the office?
Evolution: Mutation or Survival of the Fittest?
As any student of evolution knows, mutation and survival of the fittest are not mutually exclusive. They join together to determine the set of species that can survive in a given environment. A key factor is the pressure the environment is putting on the species (or lack thereof).
It seems prudent to realize that what we are seeing is not real centralization. Rather, it’s the emergence of new categories of computing alternatives. From smart devices (such as the iPhone) to the Cloud, along with traditional desktops, servers, clusters, virtualized servers, private networks, and VPNs, all are in contention.
Who will win? Well, when mainframes become extinct, call me and I will venture a prediction.
Till then, the IT jungle seems to be hospitable enough to a variety of species that might continue to mutate and proliferate.