Oh The Places We Have Been: The Evolution of Enterprise IT

Inside the Glass House – 1980s IBM Mainframe

Back in the days before the personal computer, when IT was called MIS (Management Information Systems), we had these giant behemoth mainframe computers. Everything lived in the mainframe: files, early databases, programs, transaction processing systems…everything. Distributed throughout a building or corporate campus were a number of “dumb” terminals connected via control units and coaxial cables. While the old IBM green-screen terminals were quite large and rather heavy, today they might be called “thin clients”.

As personal computers (PCs) worked their way into the corporate enterprise, some of the processing that had taken place on the mainframe could now be done at the desktop on a PC. Throughout the 1980s PCs ran the DOS operating system, and PC screens really did not look much different than mainframe terminals. The initial desktop applications centered around word processing and early spreadsheets. At first, the lucky few who got a PC still had to have that dumb terminal on the desk. It wasn’t long before they were asking why they had to have two “terminals” on their desk. Eventually network cards and terminal emulation applications became available that allowed a PC to connect to the mainframe and be used as a terminal. It would take several years, however, before it was possible for the mainframe to “serve” data to a “fat client” application on the PC. This new “client/server” architecture was my first experience with a “Discontinuous Innovation,” as best described by author Geoffrey Moore in his 1991 book Crossing the Chasm.

Prior to the invasion of the PC, large companies had huge MIS departments with teams of developers writing custom “programs” for the exclusive use of their employer. MIS was the gatekeeper, and if you wanted a new application, or just wanted a title on a report changed, it was a long and arduous process. In time, software companies like SAP developed vertically aligned applications that they would sell directly to the business units. This was attractive because it was now much easier for end users to get new applications developed and changes made without having to deal with MIS. MIS fought this as long as they could, but it was clearly a losing battle. In the 1990s the Windows operating system began to displace DOS as the predominant desktop operating system in the enterprise. With the Windows Graphical User Interface (GUI) and development platforms such as PowerBuilder, programmers now had access to tools specifically designed for building client/server applications. Eventually MIS (a department within a company that controlled everything) became the Information Technology (IT) department, as it now had to support a broad set of IT applications and equipment across the enterprise.

While companies had been sharing data between mainframe systems and certain “mini-computers,” client/server applications were the first true form of distributed computing. This changed everything.


Interesting Note: In the world of centralized computing, a system outage was felt across the entire organization, and you did not want to be responsible for taking the system down. You had to be very methodical in planning software updates or maintenance operations. Since all of the devices inside the building(s), such as dumb terminals and printers, were connected via coaxial cable, there was no internal “network” for us to consider. The network was primarily outside of the building and managed by companies like Bell Atlantic and AT&T. Since we didn’t fully understand network communications, and it wasn’t our responsibility, the most common representation of the network was a cloud floating across the top of systems architecture diagrams. Hence the term “Cloud Computing.”