The IoT Podcast – How We Got Here – Chapter 1

“The Internet of Things” takes the concept of distributed computing to a level that is exponentially more complex than anything that could have been imagined back when PCs were first introduced into the enterprise.  In this podcast we review the early days of distributed computing and how the introduction of client-server architectures meant that MIS would forevermore be known as IT.

What I Learned in My (short) Time in Telco and How This Applies to IoT

Just as the Internet bubble was about to burst, I was lured away (those were the days of signing bonuses) from a successful post-IPO startup to work for a telecommunications software company that was building a new set of tools for developing new telecommunications “services” (this was before apps).  Up until then, and for some time after, companies like Lucent and Northern Telecom had a lock on the market for developing new service offerings (e.g. texting, web surfing with WAP browsers, etc.) and would charge telco service providers enormous sums of money to develop even the most basic of new offerings for their customers.

This particular company was deeply involved in supporting the guts of the telecommunications networks based on a standard called SS7.  Supporting the emergence of a new standard for developing telco services would enable the company to move up the software stack with their telco customers.  Thanks to the Telecommunications Act of 1996, the opportunity seemed huge.

After a few months of meeting with the internal development teams and traveling the world to meet with current and potential customers, it became apparent to me that the market was not yet ready for such a dramatic change.  I reported my findings to the CEO, who was none too pleased since he had already placed a pretty big bet on this new market without really doing his homework.  I did what I could to help them salvage their investment and started looking for another job.

As it turns out, a co-worker at the company who had worked in telco for many years agreed with my assessment of the non-emergent market.  Soon after I delivered the bad news to the CEO, he approached me about joining him at a new telco-centric startup…which I did.

By this time the internet bubble had fully burst and the market for new tech took a major hit.  This new company had taken in a good amount of funding at exactly the wrong time and was under a great deal of pressure from investors.  They did their best to salvage what they could by entering markets different from those originally planned, until they ultimately closed the doors.

My primary takeaway from this brief tour through the telco sector was that the software that runs our telecommunications networks is incredibly complex.  The software developers and engineers who understand this world are few and far between…and they are really smart.  99.999% (aka five nines) availability of the telecommunications network is an absolute requirement, as any downtime would wreak havoc on customers and commerce and could potentially compromise national security.  Heady stuff to be sure, which is why this world is incredibly hesitant to adopt change.  But alas, they must.

IoT takes John Gage’s quote “The Network is The Computer” to a whole new level; without ubiquitous and dynamic mobile connectivity there is no IoT.  Consider the following:

  • The major telcos (Verizon, AT&T, etc.) own the keys to the kingdom and they want an increasingly larger percentage of the enterprise IT and cloud services business.
  • Established systems integrators will increasingly find themselves in co-opetition with their telco partners and will need to develop partnerships and tools to protect their positions in the enterprise.
  • Alliances between SIs and Cloud/Hosting companies will continue to grow…but they still will not own the network.

Companies that can provide software development tools that can abstract the complexities of the telecommunications components of Enterprise IoT will be amongst the first to cash in on the multi-kabillion dollar IoT market.  I will be watching this segment with great interest…stay tuned.

RFID, NFC, Beacons, Sensors & Other “Things”

In the emerging “Internet of Things” (IoT) market there are numerous references to automated identification (AutoID) and sensor technologies such as Radio Frequency Identification (RFID), NFC, and Beacons.  I have heard statements such as “RFID is going to be really important” or “NFC will take over the payment industry” or “smart wearables are blowing up” from well-intentioned individuals who appear to have limited experience with these technologies.  Since I do have a bit (ok, a lot) of experience in this area, I thought it would be helpful to share among the masses how these technologies work and how I believe they may fit into the emerging IoT market.

RFID Overview
While Radio Frequency Identification (RFID) was actually developed by the British in WWII, it became most widely known by IT professionals as a result of a mandate Walmart placed upon their suppliers in 2003.  This mandate required that certain consumer packaged goods companies apply passive RFID “smart labels” to cases and pallets of products that were supplied to Walmart.  Walmart did a great job of telling suppliers how Walmart would benefit by using RFID to streamline supply chain operations and maintain optimal inventory levels in stores.  The suppliers, who were being forced to invest in RFID and assume the cost of the RFID tags, were left to their own devices on how they might get some internal benefit from using RFID.  While the suppliers put up a good fight, everyone ultimately fell in line and the Walmart mandate continues to hum merrily along.

How RFID Works
An RFID tag is a fairly simple piece of technology.  It includes a small silicon chip capable of storing a serial number or other limited identifying data, along with a small antenna.  The overall package can be quite small and can be integrated into adhesive labels for cases and pallets, highly durable tags for on-metal performance in harsh environments, security cards, or implanted in pets, for example.  Such RFID tags are passive, meaning that they have no onboard power.  Passive tags are brought to life by the RF energy sent wirelessly by a suitable RFID reader.  The read distance for passive RFID tags is determined by the technical implementation (LF, HF, UHF), the RFID reader power and antenna design, and the RFID tag design.  Over the past decade the most rapidly growing segment of the RFID market has been UHF RFID technology that supports the EPCglobal Generation 2 (EPC Gen2) standard.

The primary components that make RFID “work” are Tags and Readers (aka Interrogators).  From an IT perspective, RFID readers are little more than peripheral devices that read/write data from/to RFID tags and transfer this data to back-end IT systems.  Fixed Readers are mounted at a specific location (e.g. a dock door or similar portal) and require one or more external antennas and associated cabling for an increased read area.  Less powerful Mobile Readers offer greater flexibility but with a decreased read range compared to fixed readers.
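
To make the “peripheral device” point concrete, here is a minimal sketch of what consuming reads from a fixed reader and forwarding them to a back end might look like.  The event fields, the dedupe window, and the polling interface are illustrative assumptions, not any particular vendor’s API:

```python
# Hypothetical sketch of a fixed RFID reader acting as a simple peripheral.
from dataclasses import dataclass

@dataclass
class TagRead:
    epc_hex: str      # EPC read from the tag, as a hex string
    antenna: int      # which antenna on the fixed reader saw the tag
    rssi_dbm: float   # signal strength of the read
    timestamp: float  # seconds since epoch when the read occurred

def forward_to_backend(read: TagRead) -> None:
    """Stand-in for posting the read event to a back-end IT system."""
    print(f"{read.timestamp:.0f}: tag {read.epc_hex} seen at antenna {read.antenna}")

def run_reader_loop(poll_reader) -> None:
    """Consume raw reads and forward them, suppressing the rapid
    re-reads a portal reader produces while a tag sits in the field."""
    last_forwarded = {}
    for read in poll_reader():  # poll_reader yields TagRead events
        key = (read.epc_hex, read.antenna)
        if read.timestamp - last_forwarded.get(key, 0.0) > 5.0:
            forward_to_backend(read)
            last_forwarded[key] = read.timestamp
```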

RFID tags fall into three primary categories: Passive, Battery Assisted Passive (BAP), and Active.  Passive RFID tags operate by harvesting energy from electromagnetic waves provided by the RFID reader to power up an integrated circuit in the tag, which then transmits and receives information using either inductive coupling or backscatter:

  • Inductive coupling (Near Field) works when a tag and reader transfer energy through a shared magnetic field.  Near Field RFID tags have a relatively short read range of just a few inches.  This is how the increasingly promoted NFC (Near Field Communications) technology operates.
  • Backscatter (Far Field) works by reflecting electromagnetic waves back in the direction from which they came.  Far Field tags such as EPC Gen2 passive RFID tags can support much longer read distances depending on tag size, reader configuration and power, and environmental conditions (a simplified range calculation follows this list).
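
The far-field behavior is also what sets the theoretical read range.  The sketch below is a simplified, forward-link-limited calculation based on the Friis free-space equation; the power, gain, and sensitivity numbers are illustrative, and real-world ranges will be shorter due to reflections, absorption, and tag orientation:

```python
import math

def max_passive_read_range_m(reader_eirp_w: float, tag_gain_dbi: float,
                             tag_sensitivity_dbm: float, freq_hz: float) -> float:
    """Free-space read range where the limit is powering the tag chip:
    P_tag = EIRP * G_tag * (wavelength / (4 * pi * d))^2, solved for d."""
    wavelength_m = 3.0e8 / freq_hz
    tag_gain = 10 ** (tag_gain_dbi / 10)                          # dBi -> linear
    tag_sensitivity_w = 10 ** (tag_sensitivity_dbm / 10) / 1000   # dBm -> watts
    return (wavelength_m / (4 * math.pi)) * math.sqrt(
        reader_eirp_w * tag_gain / tag_sensitivity_w)

# Illustrative numbers: 4 W EIRP (the US limit), a 2 dBi tag antenna,
# a -18 dBm chip sensitivity, and 915 MHz operation -> roughly 16 m (~54 ft)
print(round(max_passive_read_range_m(4.0, 2.0, -18.0, 915e6), 1), "meters")
```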

Passive RFID tags fall into three subcategories defined by their operating frequency:

  • Low Frequency (LF) passive tags operate at 125 – 131 kHz and communicate with LF readers using inductive coupling.  Read distance for LF RFID is typically no more than a few inches.  LF is unique among passive RFID technologies in its ability to transmit through thin metallic substances and items with a high liquid content.  Typical applications for LF RFID include access control and animal tagging.
  • High Frequency (HF) passive tags operate at 13.56 MHz and typically follow the ISO 14443 or ISO 15693 standards to communicate with an HF reader using inductive coupling.  Similar to LF tags, HF tags typically have a read distance of no more than a few inches and are commonly used to support transit ticketing and library check-in/check-out applications.
  • Ultra-High Frequency (UHF) passive tags (e.g. EPC Gen2 tags) harvest energy from the RFID reader, which stimulates the tag antenna to power up the chip.  The tag then uses backscatter to send data from the chip back to the reader.  The read distance on UHF passive tags can vary a great deal, from just a few inches to over 80 feet, depending on reader power and tag design.  UHF is rapidly becoming the de facto standard for supply chain, retail inventory, and asset management applications.

Tags that require an on-board power source (battery) include:

  • Battery Assisted Passive (BAP) tags operate in a way similar to UHF passive tags with the exception that they use battery power to significantly boost the read range of the tags (over 100 feet).  BAP tags must first be contacted by the RFID reader (a.k.a. “Reader Talks First”) before the battery is engaged to power up the chip and broadcast data back to the reader.  The downside to BAP tags is that the batteries on these tags typically only last a few years, after which they operate as standard passive UHF tags with a much shorter read distance.
  • Active RFID Tags use battery power to effectively act as a beacon that is picked up by any active reader within a range of approximately 300 feet.  Active tags operate in a manner where the tag broadcasts its unique identifier, and in some cases sensor data, which is then picked up by an antenna.  For Real Time Location Systems (RTLS) the active tags need to be picked up by more than one antenna so the software can identify the location of the tags through proprietary triangulation algorithms (a toy sketch of the idea follows this list).  Active tags suffer an even harsher limitation than BAP tags: when the batteries run out, they stop working.  Active RFID tags are typically proprietary and subject to vendor lock-in, and as a result, can be significantly more expensive.
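
The production RTLS algorithms are proprietary, but the geometric idea behind them is straightforward: intersect distance estimates from three or more antennas at known positions.  Here is a toy 2D trilateration sketch with made-up coordinates:

```python
def trilaterate(anchors, distances):
    """2D position from three known (x, y) antenna positions and estimated
    tag-to-antenna distances.  Subtracting the first circle equation from
    the other two linearizes the system, which is then solved directly."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the three antennas are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A tag at (3, 4) measured from antennas at (0,0), (10,0), and (0,10)
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65 ** 0.5, 45 ** 0.5]))
# -> (3.0, 4.0)
```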

EPC Gen2 UHF Passive RFID Tags – The Global Standard
With the exception of transit ticketing and payment applications, which use HF RFID, UHF tags that support the EPC Gen2 standard are by far the most common around the globe.  The reasons behind this are as follows:

  • Major retailers and the US DoD mandate that suppliers apply EPC Gen2 labels to cases and pallets of products shipped to their respective distribution centers.
  • EPC Gen2 has been adopted by ISO (ISO 18000-6) and is truly a global standard supported by all leading RFID technology manufacturers.
  • The costs of UHF RFID tags and readers continue to drop as the industry matures, making it easier for more organizations to justify adopting the technology.

Because different regions of the world use dissimilar frequencies in the UHF range, many UHF tags are tuned to perform best in a particular region (e.g. 915 MHz for the US, 868 MHz for the EU).  However, a growing number of tag providers are developing tags that perform equally well in all regions.

In the world of EPC standards, no personal consumer information is stored on RFID tags.  The tag itself stores only a limited identifier (96 bits, stored in hex); the data referenced by that EPC code is maintained in one or more secure databases (e.g. manufacturers’, distributors’, retailers’, etc.) so that these organizations can efficiently track the movement of goods through the supply chain.  If someone were to hack into a system where this information was stored they may learn who manufactured the product, when it was made, the product ID number, etc. – pretty boring stuff and nothing that you can’t read on the packaging.  Reading UHF Gen2 tags happens in milliseconds; encoding the tags takes a good bit longer.  The EPC Gen2 standard requires support for re-encoding tags, but in practice, once they are encoded for a specific purpose they are rarely re-encoded.
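
For the curious, here is a rough sketch of what actually lives in those 96 bits, using the common SGTIN-96 (“serialized trade item”) layout from the EPC Tag Data Standard.  The sample EPC is made up for illustration:

```python
# Partition value -> (company prefix bits, item reference bits); these
# widths come from the published SGTIN-96 partition table.
SGTIN96_PARTITIONS = {0: (40, 4), 1: (37, 7), 2: (34, 10), 3: (30, 14),
                      4: (27, 17), 5: (24, 20), 6: (20, 24)}

def decode_sgtin96(epc_hex: str) -> dict:
    bits = int(epc_hex, 16)
    if bits >> 88 != 0x30:                   # 8-bit header; 0x30 = SGTIN-96
        raise ValueError("not an SGTIN-96 EPC")
    filter_value = (bits >> 85) & 0b111      # 3 bits: item vs. case vs. pallet
    partition = (bits >> 82) & 0b111         # 3 bits: sets the widths below
    prefix_bits, item_bits = SGTIN96_PARTITIONS[partition]
    company_prefix = (bits >> (82 - prefix_bits)) & ((1 << prefix_bits) - 1)
    item_reference = (bits >> 38) & ((1 << item_bits) - 1)
    serial = bits & ((1 << 38) - 1)          # 38-bit serial number
    return {"filter": filter_value, "company_prefix": company_prefix,
            "item_reference": item_reference, "serial": serial}

print(decode_sgtin96("30340242200DE0C000001234"))  # made-up example tag
```

As the decoded fields show, the tag carries a company prefix, an item reference, and a serial number – nothing personal about the person carrying the item.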

RFID – Facts, Myths, & Innovation
The easiest way for someone to understand RFID technology is to suggest it’s like an electronic barcode.  The primary difference is that barcode readers require “line of sight”; the barcode scanner must “see” the lines of the barcode in order to read the data, which also means that barcode scanners can only read one barcode at a time.  RFID does not require line of sight, as tags can be read through a variety of materials, and RFID readers can read multiple tags in milliseconds.  For supply chain applications the key benefit of RFID is its ability to provide granular data on unique items, cases, and pallets of products as they move from manufacturing through consumption.  This differs from linear barcode systems that apply codes that are often not unique (e.g. every widget of a given type has the same barcode).

A great deal of attention has been paid to the use of RFID for supply chain applications ever since Walmart announced their RFID Supplier Mandate.  The value proposition for consumer goods manufacturers continues to be elusive even when the RFID labels are as cheap as $0.10.  An area that gets much less attention, and has significantly more value for enterprise organizations, is using RFID for managing high value assets including IT assets, healthcare equipment, specialty tooling, shipping containers, and aircraft parts.  Tags for asset management applications can cost several dollars each, but the value proposition to the end users easily justifies the added cost.

In the early days of the RFID mandates, privacy advocates were concerned that RFID tags would enable companies to track consumers to influence purchasing habits and invade their privacy.  The physics of passive UHF RFID alone dictate that it is a really lousy way to track someone, let alone steal their personal information.  It would be much easier for a thief to simply steal your wallet.  The truth is that if you have a mobile phone, surf the web, use a free email service like Gmail or Yahoo, and make purchases with credit cards, you have already agreed to trade off a great deal of your privacy.  For individuals concerned about the privacy of an RFID chip in a credit card that you willingly hand over to strangers in restaurants, RFID is the least of your worries.

RFID tag designs vary based on the type of item being tagged and the operational environment.  For cases and pallets of consumer goods, or as hang tags on apparel items, the most inexpensive RFID labels will work just fine.  Based on volume, these tags are now priced under $0.10 per tag.  These inexpensive RFID labels, however, will not work in more demanding applications and will not work when applied to metal objects.  For asset management applications where most items are metal, a tag designed for on-metal applications must be used.  On-metal tags come in a wide variety of shapes and sizes.  Construction of the tags also varies based on the environmental conditions in which the tag must operate.  Simple on-metal tags for IT assets in office environments are less expensive than tags designed to survive on construction equipment in Alaska.  Determining which tags are best for your applications requires the type of expertise provided by companies like RFID TagSource – which is a blatant plug for our RFID company.

It is important to note that passive RFID tags have no inherent location/GPS capability – location information is determined strictly by a reader “checkpoint” (e.g. tag ABC passed reader 123 at 10:40 AM…).  It is also important to note that any RFID chips embedded into credit cards, transit cards, NFC stickers, etc. are all Near Field tags that can only be read when the tags share the same magnetic field with a reader and are limited to a read distance of a few millimeters.  Regardless of the claims of self-appointed privacy advocates…passive RFID tags can NOT be tracked by satellites, and thieves will NOT be able to use a handheld RFID reader to steal your identity while you hand your credit card over to a teenager you’ve never met at your local coffee shop.
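
A concrete way to think about the checkpoint model: the tag itself has no idea where it is, so a tag’s “location” is simply the last reader that saw it.  A minimal sketch, with made-up tag and reader names:

```python
from datetime import datetime

# Each checkpoint read is just (tag, reader, time) - there is no GPS involved.
read_events = [
    ("TAG-ABC", "dock-door-1", datetime(2014, 9, 2, 10, 40)),
    ("TAG-ABC", "stockroom-3", datetime(2014, 9, 2, 14, 5)),
    ("TAG-XYZ", "dock-door-1", datetime(2014, 9, 2, 11, 15)),
]

def last_known_location(tag_id, events):
    """A passive tag's 'location' is the most recent checkpoint that read it."""
    reads = [(ts, reader) for tag, reader, ts in events if tag == tag_id]
    if not reads:
        return f"{tag_id}: never seen"
    ts, reader = max(reads)
    return f"{tag_id} last seen at {reader} at {ts:%H:%M}"

print(last_known_location("TAG-ABC", read_events))
# -> TAG-ABC last seen at stockroom-3 at 14:05
```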

Contactless Cards and NFC
Most common payment systems (e.g. credit cards, ATM cards, transit tickets, etc.) require physical contact between a magnetic stripe on the back of a card/ticket and a magnetic head in a reader/terminal.  Over time, card reader machines need to be repaired or replaced due to wear and tear.  For a small restaurant this is not such a big deal.  For a large transit system with thousands of ticket dispensing machines and turnstiles, it represents a huge expense.  Transit systems in Europe and the Asia-Pacific region long ago adopted High Frequency (HF) RFID “contactless smart card” systems, eliminating a great deal of the expense of maintaining equipment.

NFC builds upon the previous “contactless smart card” standards by adding more robust and secure two-way contactless communications.  The benefits of using NFC for contactless transit tickets by daily users of public transit systems are understandable.  Pushing NFC into credit cards and mobile phones to replace traditional credit cards is less understandable.  I am already on record stating NFC is DOA in the US, and the recent announcements regarding NFC and Apple Pay do little to alter my opinion.  Mag stripe credit cards are very easy to use and credit card terminals are ubiquitous.  NFC is currently very difficult to use as NFC terminals are few and far between – and even if you can find one, chances are the retailer does not know how to process an NFC transaction.

I recently met several NFC proponents at a local IoT meetup who were espousing that NFC is going to be “HUGE”.  When I asked them about their personal experiences paying with NFC, it was as if I had called their baby ugly.  I rest my case.

Beacons
In a consumer sense, IoT “Beacons” are a fairly recent development primarily being promoted by Apple via their iBeacons™.  “Beacons” are Bluetooth Low Energy devices that operate by broadcasting a unique identifier to a specific local area.  Devices with applications that support receiving beacon notifications will then be made aware of a specific event (e.g. secret sale on funky socks for beacon users) along with an estimate of how far away the user is from the beacon.  The only examples I have heard for beacon applications come from the world of retail sales.  I am admittedly at the end of my knowledge of beacons, but if you are concerned about your privacy and fearful technology could be used to influence your purchasing decisions…let the beaconed beware.
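
For a sense of how an app turns a beacon advertisement into that distance estimate, here is a sketch using the log-distance path loss model.  The payload fields mirror the iBeacon format (UUID / major / minor / calibrated TX power), but the identifiers, RSSI value, and path-loss exponent are all illustrative:

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float,
                        path_loss_exponent: float = 2.0) -> float:
    """tx_power_dbm is the beacon's calibrated RSSI measured at 1 meter,
    so distance = 10 ^ ((tx_power - rssi) / (10 * n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

advertisement = {
    "uuid": "12345678-1234-1234-1234-1234567890ab",  # which deployment
    "major": 4,              # e.g. which store
    "minor": 17,             # e.g. which shelf or display
    "tx_power_dbm": -59,     # calibrated signal strength at 1 m
}
observed_rssi_dbm = -72      # what the phone actually measured
print(round(estimate_distance_m(observed_rssi_dbm,
                                advertisement["tx_power_dbm"]), 1), "m")
# -> about 4.5 m (in practice the estimate is noisy and only approximate)
```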

Sensors
Sensors require power, which pretty much eliminates all forms of passive RFID tags.  Active RFID tags have had sensor capabilities such as monitoring shock, vibration, temperature, humidity, etc. for many years.  A limited number of battery assisted passive (BAP) tags are also adding sensor support.  The downside of these active RFID sensors is that they are often proprietary and very expensive.  A better option would be to look at Bluetooth Low Energy (BLE) sensors that are increasingly available at more attractive price points and do not require proprietary hardware/software to gather the sensor data.
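
As an example of that last point, reading a BLE sensor can be as simple as parsing a few bytes out of the advertisement the device broadcasts.  The 4-byte payload layout below is entirely hypothetical; real devices document their own manufacturer-data format:

```python
import struct

def parse_sensor_advert(manufacturer_data: bytes) -> dict:
    """Hypothetical layout: little-endian uint16 sensor id followed by an
    int16 temperature in hundredths of a degree Celsius."""
    sensor_id, temp_raw = struct.unpack("<Hh", manufacturer_data[:4])
    return {"sensor_id": sensor_id, "temp_c": temp_raw / 100}

print(parse_sensor_advert(bytes([0x01, 0x00, 0x0A, 0x09])))
# -> {'sensor_id': 1, 'temp_c': 23.14}
```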

So there you have it – a basic primer on RFID, NFC, & Beacons and my view of how these technologies may fit in the emerging IoT market.  Keep in mind that RFID, NFC, Beacons, etc. are enabling technologies; the real value comes from developing integrated solutions that may include RFID, Mobile, WiFi, Bluetooth, Cloud, and legacy enterprise applications.  Do not make the mistake of looking for “An RFID Solution”.  Do your homework, develop your requirements, and combine the best-of-breed technologies that best support your specific needs.

Comments and questions are always welcome and appreciated.  I can be reached at [email protected]

NFC is DOA in the US.

NFC is DOA in the US.  There, I said it.  More specifically, NFC is dead on arrival in the emerging space called “The Internet of Things” in the US market.  Regardless of how wonderful a specific technology may be, it still comes down to value: who pays and who benefits.  The value of NFC to consumers in everyday use for payments, transferring files, pairing, etc., is limited at best.  The people who stand in line for hours to get the new “i Thingy” will jump on this as soon as Apple says it’s cool.  But trying to force this technology on everyday consumers, and more importantly retailers, who just want to buy stuff and move along is like pushing on a rope.  Regardless of how much time and effort you put behind it, the result is ultimately futile.

First a little background:

NFC stands for Near Field Communication, which is actually an implementation of high frequency radio frequency identification (RFID) technology.  HF RFID works through a process called inductive coupling, which means that for objects to communicate they must share the same magnetic field.  That magnetic field is rather small, therefore the objects in question, like NFC phones or chips in credit cards, must be in very close proximity, often measured in millimeters.  This is why fears of identity thieves scanning your credit cards from a distance are completely unfounded.  Thieves would have to be so close that it would be much easier to simply steal your wallet, or worse yet, your phone.

The real world basis of what we now know as NFC comes from what had been known as “contactless” technology.  Legacy technologies such as the magnetic stripes found on credit cards and transit tickets require physical contact to complete a transaction.  For your local retail store this is not an issue.  For transit systems handling hundreds of thousands of commuter transactions per day, the cost of maintaining those turnstiles that lie between you and your train is very expensive.  Contactless technology such as RFID requires no moving parts to complete the transaction.  The cost associated with refitting a transit system with contactless ticketing systems is an expensive proposition, but the cost/benefit is increasingly attractive.  The primary expense had been in maintaining proprietary hardware controlled by a single vendor.  With standards-based contactless technology the primary cost item is now the smart card…and they pass that expense along to the commuters.

Another form of low-cost, standards-based “contactless” technology has been widely adopted for decades and, thanks to low-cost imaging systems (e.g. cameras on smartphones), is virtually ubiquitous: the barcode (more on this later).

So now we understand how the technology works and have identified a great application for contactless transactions that operate using near field communication technology.  NFC as it is now being marketed adds another layer of security and capability into the chips.  The physical interaction with the outside world, however, is exactly the same…and that’s the problem.

Contactless technology is a great fit for closed loop applications where a single organization controls the entire system (e.g. transit, security, immigration control).  However, NFC is being heavily marketed to address a not so clearly defined need for more secure retail transactions or data transfer between smartphones in an open environment with divergent technologies.  Mag stripe credit card machines and barcode readers are ubiquitous in retail environments.  Consumers are very familiar with the process of scanning a credit card and signing the little screen on the machine.  Self service checkout combines this process with the good old barcode technology and before you know it you are on your way.

Let’s try this with NFC.  First you need to know ahead of time that the retailer can handle an NFC transaction (good luck with that).  You would then need to launch the payment application for that specific retailer and hold your phone up to the reader.  All the while the cashier is looking at you and waiting for you to scan your credit card, and the people behind you in line are wondering why you are taking so long.

By the way, many of those payment applications can also dynamically display a unique barcode on the screen of your smart phone that you simply place under the retailer’s barcode scanner.  The airlines already do this with electronic ticketing and it works great.

The value proposition for NFC in consumer retail is completely backwards.

The costs associated with NFC must be borne by the consumer and retailers.  Adding NFC chips to phones and credit cards is not free, and those costs are passed along to you, the customer.  The bigger problem is the cost to retailers, who will be required to upgrade their payment terminals and train in-store associates on how the new systems work.  Retailers would also need to be prepared to provide on the spot technical support to anyone holding up the line because they can’t figure out how to open the payment app on their phone.  The first time I see someone complete a retail transaction using an NFC equipped smart phone I will stand in line and pay close attention (I may even time it so I can see how it compares to a standard credit card swipe).  I fully expect the second time it happens I will move to another line.

It’s fairly obvious that companies that sell NFC chips to mobile phone manufacturers and companies that make credit cards for banks stand to benefit near term.  What is not so obvious is the real reason behind the push for NFC in the consumer payment business.  The simple truth is that the payment processing industry is very lucrative, and tech companies and network operators want in on that business.

Every time a credit card transaction is processed, the retailer pays a fee that is typically 2-4%.  Visa and MasterCard control just about everything that happens in the payment processing ecosystem, from the plastic in your pocket to the point where money is exchanged with your bank.  With the advent of ubiquitous wireless connectivity and ever more capable smart phones, network operators can completely bypass the legacy payment processing infrastructure.  Adding NFC into the mix implies that this process is more secure than swiping a credit card, completely ignoring the fact that hacking massive credit card databases is much more attractive than hacking smartphones at your local coffee shop.

Assuming that appropriate security and consumer protection mechanisms are in place, circumventing the legacy payment processing ecosystem may have value.  Companies like Square are doing just that in a way that is elegant in its simplicity.

I recently took my daughter to her first “real” concert.  Of course she wanted a t-shirt to mark the occasion.  Once she selected the t-shirt, the gentleman behind the merchandise table pulled out his iPhone, used the camera to scan a barcode, slid my credit card through a Square dongle on the iPhone, and handed me the iPhone to enter my email address for the electronic receipt, which was in my inbox before we walked away from the table.  The entire transaction was done in seconds and we didn’t even have to bump phones.

Square has created a discontinuous innovation that has value.  Retailers are now free of the traditional hard-wired credit card processing machine.  For consumers the “process” of paying is pretty much the same, but the convenience of paying by credit card in areas that used to require cash has real value.  NFC is a discontinuous innovation that does not have value to consumers or retailers.  Oh, and the purported problem with carrying all of those loyalty cards…no problem, what’s your phone number?

So why do I have an issue with NFC?  I’ve been in the tech industry for a long time and I am the co-founder of an RFID company.  My expertise spans the area in tech where NFC is trying to find a home.  I have NFC manufacturers who are familiar with our RFID business and a growing customer base who want us to push their products.  I also have friends and colleagues who have heard that NFC is “the next big thing” and want me to explain how it works.  In this scenario I could spend a great deal of time and money on something that delivers zero value.  For me it’s easier to write an article/blog that I can refer to and politely move on.

To sum up:

  • NFC is essentially a more secure implementation of RFID with a marketing budget.
  • NFC can deliver value in certain closed loop environments (e.g. transit systems) but adds cost and complexity and delivers no added value to consumers for payment or data transfer applications.
  • While NFC has found a home in Europe and the Asia Pacific region (primarily transit applications) there is no such demand in the US.

No value, no market, DOA.

Enterprise IoT – Flashback to the 1990’s

While researching the emerging “Internet of Things” (IoT) market I am reminded of the early .com days.  VCs were throwing money at anyone who could write “Business Plan” on the back of a cocktail napkin, online companies like CompuServe, Prodigy and AOL were blindsided by the world wide web and Netscape Navigator, and intranets were just being adopted by a select group of enterprise visionaries.  This was a time of “discontinuous innovation” as best described by Geoffrey Moore in “Crossing the Chasm”.

This is where we are now with IoT.

Source: Gartner – Hype Cycle for Emerging Technologies, 2014

Yup, “Internet of Things” is officially at the top of the Peak of Inflated Expectations.  It is also important to note that “Machine-to-Machine” communications and “Mobile Health Monitoring” are just entering the Trough of Disillusionment, sandwiched between “Cloud Computing” and “Hybrid Cloud Computing”, while “Consumer Telematics” is firmly entrenched on the Plateau of Productivity.

I have a great deal of respect for the folks at Gartner and believe they are the class of the field among industry analysts.  But I would argue that machine-to-machine communications is well established in the enterprise and mobile health monitoring is already on the rise.  If you add intelligent devices (e.g. smart phones and sensors), secure network connectivity, and cloud computing, you are pretty darn close to what I consider Enterprise IoT (EIoT).  If I am correct this would put EIoT in the 2-5 year window, which I believe is on target.

During the “.com” years I worked for one of the first true web-centric enterprise middleware companies (Bluestone Software, successful IPO in 1999, eventually acquired by Hewlett-Packard).  Where client-server was a two-tier architecture, the web added a third tier: the browser.  Introducing the concept of three-tier architectures and Java application servers to old school IT executives proved interesting.  The new kids developing web based applications on UNIX boxes were completely foreign to IT folks who rarely ventured outside the world of IBM, and to a lesser degree, Microsoft.  For production enterprise applications the three-finger salute of “ctrl-alt-del” was not an option.  “Enterprise Class” reliability was a new concept to the kids slinging web scripts and Java code.  Old school IT folks who may have felt threatened by this “cool new stuff” were looking for reasons to “just say no”.

Many lessons were learned at the time, the most important being: the Enterprise rarely wants to buy “cool stuff”.  They are tasked with providing and supporting the technologies that address identified business needs with a quantifiable ROI.  The truth is that many EIoT components are already in place; all that remains is stitching these “things” together in a way that delivers even greater value to the organization.

If the three-legged stool I proposed in The Third Tier and the promise of “Write Once, Run Anywhere” holds true, then I believe we are well on our way to EIoT:

√  Hardware Infrastructure & Networks
√  Software Development Tools & Platforms
√  Manageable Runtime Environments (e.g. JVM)

So my guess is 2-5 years for EIoT.  The visionaries have already started…it’s only a matter of time.

The Third Tier and the promise of “Write Once, Run Anywhere”.

With the advent of the World Wide Web, HTML, and browsers such as Netscape Navigator, application developers unknowingly reverted back to a thin-client model that looked awfully similar to the days of the IBM green screen.  Yes, we now had a graphical user interface, but the idea was the same.  The browser had become the thin client, the work that in a client-server architecture occurred on the desktop had now been shifted to the middle tier, and the database and enterprise applications (e.g. SAP) were still managing the heavy lifting on the back end.  It is important to note that the middle tier, which came to be known as “application servers”, moved back into the glass house.

Application servers proved to have a great deal of value to the enterprise.  They provided an environment in which to run business logic and could act as an enterprise integration engine.  Changing anything on the mainframe was still as difficult as ever, but developing web-centric applications and deploying them in an application server environment that occasionally interfaced with back end data and applications was getting increasingly easy.  An added benefit was that application servers could also act as integration engines across enterprise applications, an area that had been the exclusive domain of enterprise integration platforms such as TIBCO and IBM’s MQSeries.

Another key enabler of the growth of the three-tier architecture was the introduction of the Java programming language.  The introduction of internet technologies brought UNIX systems into the enterprise, which meant that developers had to design and compile their applications separately for each runtime environment where they might reside.  A Windows application would not run on UNIX.  Applications developed for Sun Solaris would not run on HP-UX.  The Java promise of “Write Once, Run Anywhere” tackled this problem head on.

Key to the success of the Java programming language was adoption of the concept of the Java Virtual Machine (JVM).  This meant that each platform provider (e.g. IBM, HP, etc.) had to develop a runtime “container” that adhered to the Java standard and abstracted the platform specific interfaces.  Any manufacturer that wanted to play (survive) in the world of web-centric three-tier architectures had to develop a JVM for their platform.  Java developers no longer had to worry about developing for a specific platform, which removed another significant barrier to adoption.

The Java centric application server also brought enterprise class management tools to the middle tier.  Application servers could be replicated and scaled with relative ease.  Enterprise class failover and fault tolerance were absolutely required before an application server would be allowed to run mission critical applications in the enterprise.  The tools were now in place for the three-tier architecture to be broadly adopted in the enterprise.

In a previous post on client-server computing I identified two key components that were required before the Enterprise would adopt the client-server architecture.  The three-tier architecture added a third component.

The key components now included:

  • Hardware Infrastructure & Networks
  • Software Development Tools & Platforms
  • Manageable Runtime Environments (e.g. JVM)

This is my version of the three-legged stool of the enterprise “Internet of Things” (IoT).  If any one of these is missing, IoT-centric vendors will have a very difficult time breaking into the enterprise.

Client-Server – The Birth of Distributed Computing

In The Evolution of Enterprise IT I described how what had been known as MIS became Enterprise IT.  The catalyst for this change was when the personal computer replaced dumb terminals on enterprise desktops.  Subsequent steps along the evolutionary cycle would require a stable foundation upon which to build new and exciting applications that moved further and further away from the mainframe in the glass house.

The key components include:

  • Hardware Infrastructure & Networks
  • Software Development Tools & Platforms

PCs running DOS and connected to the mainframe via IRMA cards and coax cables had been in place for years.  It wasn’t until Ethernet cables, routers, switches, etc. were in place that the PC could truly be considered “connected”.  And it wasn’t until Windows 3.1 came along and tools like PowerBuilder became available that programmers could start developing client-server applications.  When these key components came together it changed the course of enterprise computing.  It also introduced a whole new set of challenges.

Prior to the onset of client-server computing, IBM mainframes absolutely dominated the enterprise.  Since the mainframe was the system, any unplanned outage brought company productivity to a screeching halt.  IBM developed very sophisticated systems for applying software patches and fixing bugs that allowed guys like me to spend weekends updating the mainframe software while the rest of the company was actually enjoying their weekends.  While applying software updates to mainframes took a great deal of planning, the process worked extraordinarily well.

90’s Hub and Spoke Client-Server Architecture

As client-server computing grew it became apparent that it was much more difficult to manage operating system software and applications across hundreds of PCs.  With the mainframe you had one central location to update and apply changes.  In this new world the IT people had to go from one PC to the next with large stacks of diskettes to get everyone up to the same level of Microsoft Windows or Novell network drivers.  We jokingly referred to this as “sneakernet”, and this was long before you could actually wear sneakers to work.

In time, tools became available to manage the deployment of software updates across the network and establish virtual runtime environments (à la the mainframe) that were manageable from…wait for it…within the glass house.

And so it went until…

Oh The Places We Have Been: The Evolution of Enterprise IT

Inside the Glass House – 1980’s IBM Mainframe

Back in the days before the personal computer, when IT was called MIS (management information systems), we had these giant behemoth mainframe computers.  Everything lived in the mainframe: files, early databases, programs, transaction processing systems…everything.  Distributed throughout a building or corporate campus were a number of “dumb” terminals connected via control units and coaxial cables.  While the old IBM green screen terminals were quite large and rather heavy, today they might be called “thin clients”.

As personal computers (PCs) worked their way into the corporate enterprise, some of the processing that had taken place on the mainframe could now be done at the desktop on a PC.  Throughout the 1980’s PCs ran the DOS operating system and PC screens really did not look much different than mainframe terminals.  The initial desktop applications centered around word processing and early spreadsheets.  At first the lucky few that got a PC still had to have that dumb terminal on the desk.  It wasn’t long before they were asking why they had to have two “terminals” on their desk.  Eventually network cards and terminal emulation applications became available that would allow a PC to connect to the mainframe and be used as a terminal.  It would take several years, however, before it was possible for the mainframe to “serve” data to the PC “fat client” application.  This new “client/server” architecture was my first experience with a “Discontinuous Innovation” as best described by author Geoffrey Moore in his 1991 book Crossing the Chasm.

Prior to the invasion of the PC, large companies had huge MIS departments with teams of developers writing custom “programs” for the exclusive use of their employer.  MIS was the gatekeeper, and if you wanted a new application or just wanted a title on a report changed it was a long and arduous process.  In time, software companies like SAP developed vertically aligned applications that they would sell directly to the business units.  This was attractive as it was now much easier for end users to get new applications developed and make changes without having to deal with MIS.  MIS fought this as long as they could but it was clearly a losing battle.  In the 1990’s the Windows operating system began to displace DOS as the predominant desktop operating system in the enterprise.  With the Windows Graphical User Interface (GUI) and software development platforms such as PowerBuilder, programmers now had access to software development tools specifically designed for developing client/server applications.  Eventually MIS (a department within a company that controlled everything) became the Information Technology (IT) department, as they now had to support a broad set of IT applications and equipment across the enterprise.

While companies had been sharing data between mainframe systems and certain “mini-computers”, the “client/server” application was the first true form of distributed computing.  This changed everything.


Interesting Note:  In the world of centralized computing a system outage was felt across the entire organization, and you did not want to be responsible for taking the system down.  You had to be very methodical in planning software updates or maintenance operations.  Since all of the devices such as dumb terminals and printers located inside the building(s) were connected via coaxial cable, there was no internal “network” for us to consider.  The network was primarily outside of the building and managed by companies like Bell Atlantic and AT&T.  Since we didn’t fully understand network communications, and it wasn’t our responsibility, the most common representation of the network was a cloud that floated across the top of systems architecture diagrams.  Hence the term “Cloud Computing”.