Long before Kevin Ashton first uttered “The Internet of Things” in 1999, a gentleman named Mark Weiser coined the term “ubiquitous computing” (aka “Pervasive Computing”) in 1988.
…The underlying technologies to support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile protocols, location and positioning, and new materials.
In the world of pervasive computing, all devices are network connected and constantly available. Throw in cloud computing, smartphones, beacons, aaaaannnnndddd…
Sound familiar?
OK, here’s where I stick my neck out…
Short of IoT platforms, which are effectively next-generation enterprise middleware, management, and application development tools, there is no such thing as an IoT "Product". Whether it's consumer products marketed as "IoT cameras" or industrial "IoT sensors", the "IoT" designation can (and should) be replaced by the word "connected".
Since any electronic device worth buying must be connected, we can drop that designation altogether.
Hence, there is no such thing as an IoT camera or an industrial IoT sensor…they are cameras and sensors, just as they have been all along.
That is not to say that IoT is not hugely important; it is. I just think some of the most important components are getting lost in the noise. Here are a few personal observations:
IoT is the embodiment of ubiquitous/pervasive computing.
IoT is a journey, it is not a destination.
IoT is distributed computing on a mass scale.
And this is important because…
Mark Weiser’s vision is now a reality we call “The Internet of Things”.
The Internet of Things is the Foundation for Digital Transformation.
Consider the following. Before Mark Weiser, there was John Gage. Gage, the twenty-first employee of Sun Microsystems, stated in 1984 that "The Network is the Computer". In 2011, Marc Andreessen, co-author of the Mosaic browser, co-founder of Netscape, and Silicon Valley venture capitalist, wrote an essay called "Why Software Is Eating The World".
NFV and SDN sit at the confluence of these two prophetic statements. Furthermore, with SDN, the historically closed telecommunications industry is adopting the tools, standards, and best practices that the enterprise has been touting for years. Enter NetDevOps.
Enterprise DevOps has broken down the walls that existed between software development and operations to facilitate rapid development, testing, and deployment. NetDevOps extends those same benefits to network architects and operators, and a software-based approach to networking allows for the more dynamic, scalable, manageable, agile, and innovative networks required to support enterprise IoT initiatives.
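To make that concrete, here is a rough sketch of what NetDevOps looks like in practice, using the open-source netmiko library to push a switch configuration the way a developer pushes code: scripted, repeatable, and easy to review in version control. The device address, credentials, and VLAN are made up for illustration:

```python
# A minimal NetDevOps sketch using the open-source netmiko library.
# The device address, credentials, and VLAN below are hypothetical.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",   # netmiko driver name
    "host": "10.0.0.1",           # hypothetical switch
    "username": "netops",
    "password": "example-only",
}

# Push a small config change as code rather than console keystrokes.
with ConnectHandler(**device) as conn:
    output = conn.send_config_set([
        "vlan 42",
        "name iot-sensors",
    ])
    print(output)
```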
I’ve never really considered myself much of a techno geek, but as long as this stuff supports a business objective I think it’s pretty cool.
When speaking with some of my more experienced colleagues we often drift into the "what's old is new again" conversation (e.g. "people think virtualization is new"). The key difference for me now is that, as an enterprise guy, the telco industry is coming my way…which is kinda cool. For those of us who remember a time when a system outage could be a career-limiting event and "ctrl-alt-delete" was not an option, these are exciting times for sure.
For the past several months I have been engaging with Enterprise IT executives talking about the “Internet of Things”. In many cases the eye rolls start before I’m through the first sentence. As the conversation continues they “get it” but still see IoT as more hype than substance and it is not currently on their radar screen. Their primary challenges include things like managing smart mobile devices, cloud computing, and converged networks. “Yes”, I say, “That’s IoT”, but I can tell they are still thinking about Fitbits and iPhones and even perhaps recalling the RFID pilot project they were considering a few years ago. Hmmm!
While I love driving, and I am a huge music fan, most times during long drives the inside of my car is silent…this is thinking time. Reflecting on those conversations, I understand completely. Analyst blogs, trade journals, and tech news websites are filled with stories about IoT (good and bad), many of which ultimately point to the infamous Gartner Hype Cycle, where IoT is featured prominently at the top of the peak of inflated expectations. As a long-time industry veteran who has seen many a technology slide into the trough of disillusionment and ultimately perish in Mr. Moore's chasm, I get it. But this is different; many of the technologies and methodologies that now fall under the banner of IoT are not new and may even be older than the individuals who just rolled their eyes.
So I ponder. Driving along a relatively empty highway (a rarity on the East Coast), it occurred to me that we need to re-think how to convey the IoT message in terms that apply directly to the near-term challenges IT executives face on a daily basis. Because of the aforementioned hype, "IoT" is a fairly well-known acronym. The beauty of an acronym is that it can be co-opted to suit one's own purpose. As the miles passed the gray matter churned…not only was I on a journey but I was on a mission to save IoT from a painful slide through the trough of disillusionment.
And then I recalled a few awkward moments when I tried to explain to friends and family what I do for a living:
“The Internet of Things is based on the concept that everything is connected to a network which means now you can control things like lights, television, and air-conditioning with your smartphone. I help companies do that in office buildings and big factories except you don’t need a smart phone because…have you ever seen the Cisco commercial about the girl that loved the cat that drank the milk?”
In an enterprise environment, the true value of IoT lies at the intersection of Information Technology (IT) and Operational Technologies (OT) where data from the physical world (e.g. machine data, sensors) can be analyzed and used to drive processes and improve operational efficiency. While quite broad, the term “Enterprise Integration” is fairly well understood in both IT and OT circles. So while driving along in the general vicinity of the posted speed limit, it hit me:
IoT = Integration of Operational Technologies
OT may appear to be a new concept, but those of us who have been around for a while know that this is where things actually get done. OT includes things like machine-to-machine communications on a factory floor, building management systems, energy and lighting, safety and security, and fleet management systems. Smart mobile devices are increasingly moving into the OT world on factory floors and in the field in the energy and communications industries. Wireless connectivity is virtually ubiquitous, and "systems" are now more distributed than ever before, with application components running on smart mobile devices in hybrid cloud environments across multiple converged networks.
Does any of this sound familiar? These are exactly the types of things the eye rollers were saying were their top priorities! And now we can clearly articulate…
“Yes, this is Enterprise IoT; The Enterprise Integration of Operational Technologies!”
In the early days of my career I worked as a systems engineer for Fortune 100 companies running IBM mainframe systems. These were the days when the mainframe “was” the system, and if it went down the entire company was down. The mainframe operating system and the various subsystems like transaction management, databases, batch processing, etc., all needed to be monitored constantly. If we got any indication there was a problem we immediately looked to tools like IBM’s OMEGAMON to identify the problem and begin working on a resolution. Sometimes that meant re-routing workloads to another logical partition or recycling a CICS region. The largest monitors in the data center were reserved for these tools and they were invaluable. At the time the company “data center” was in a single physical location so all of the “systems” were in one building. In today’s world that is rarely an option.
It has been a long time since I have had my hand in such matters, but as I write this I am having flashbacks of running through a cubicle farm to get to operations to start researching a problem. I cannot imagine how today's IT professionals get the same level of visibility in highly distributed hybrid-cloud environments as we had so long ago. Adopting a cloud strategy is in many ways a prerequisite to IoT. However, along with concerns about security and vendor lock-in, lack of visibility into system operations is a major concern among IT organizations when making the inevitable move toward introducing cloud computing into their enterprise architecture.
I recently stumbled upon a company called ScienceLogic that might just have this problem licked. The ScienceLogic platform discovers all IT elements, whether in the cloud or on-premises, providing a holistic view of your entire enterprise in one place. The co-founder and CEO comes from the service provider industry and held senior management positions within IBM's Software Division. The CTO and other members of the senior management team come from the telco industry, where high availability is an absolute must. In short, these guys have been around long enough to truly understand the problem. Alliances with companies such as VMware, NetApp, Intel, and Cisco indicate they have visibility into platforms that are well established in the enterprise IT market. What is unique is that they have also formed deep partnerships with Amazon Web Services (AWS), Microsoft, and other major cloud providers, giving them similar access to processes running outside the enterprise's own four walls. What this means is that enterprise IT professionals can now have full visibility at a granular level across on-premises and multi-cloud environments. This is VERY cool.
One of the major advantages of cloud computing is the dynamic nature of the resource. Historically, IT would sit down with their corporate business units, prioritize projects and hammer out a budget, meet with vendors, buy stuff, etc. In the cloud world, if you need more computing power you just fire up a few more instances and away you go. Going back and shutting down resources that are no longer needed is often forgotten until the bill arrives. As cloud computing moves deeper into the enterprise, business units have more autonomy than ever, and for many "Shadow IT" is a way of life. Regardless of how the money changes hands, it all rolls up into the corporate IT budget. Management tools that provide visibility across both internal and external computing environments and help enterprises manage their IT budgets will become increasingly popular as hybrid-cloud architectures become the norm.
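For illustration, the kind of housekeeping that catches forgotten resources before the bill arrives can be a few lines of script. Here is a minimal sketch, assuming AWS and the boto3 SDK; the 30-day cutoff and the "Owner" tag convention are made up:

```python
# A housekeeping sketch using the AWS boto3 SDK: flag long-running
# instances so someone asks whether they are still needed.
# The 30-day cutoff and "Owner" tag convention are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        if inst["LaunchTime"] < cutoff:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            owner = tags.get("Owner", "unknown")
            print(f'{inst["InstanceId"]} running since '
                  f'{inst["LaunchTime"]:%Y-%m-%d} (owner: {owner})')
```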
In simple terms, Enterprise IoT is little more than distributed computing on a massive scale. It took several years for the enterprise to adopt web-based architectures in the early 2000s, and this will be no different. However, in an admittedly overhyped world of inflated expectations it is refreshing to see a real solution that addresses an identified market need. It's worth noting that Intel Capital led a $15 million investment round in ScienceLogic in 2012 and Goldman Sachs led a $43 million round earlier this year – as a fan, apparently I am in good company.
If the early money in Enterprise IoT will be in the picks and shovels of infrastructure, as I believe it will, then companies like ScienceLogic might just be the foundry. If you are contemplating (or already struggling with) a cloud migration you may want to give these guys a look…they are really on to something.
This falls under the banner “IoT? We’ve been doing that for years.”
Like many cash strapped cities across the country, The City of Philadelphia was trying to plug holes in the annual budget by any means necessary. In addition to the standard process of targeting wasteful spending, they were also looking for areas where people were skirting the law by not paying taxes and licensing fees. One of the areas where they discovered they had an opportunity was in stepping up their efforts related to regulating dumpsters within city limits.
The Dumpster Problem in The City of Philadelphia
In Philadelphia, dumpsters in the public right of way require an annual, renewable license from the Department of Licenses and Inspections. Based on dumpster size and use, the annual fees range from one to five hundred dollars per year for each dumpster. According to city records, there were only 2,231 dumpsters in the entire city. Given that almost every restaurant, business, and apartment building has a dumpster, it was pretty safe to assume that there were a significant number of unregistered dumpsters. A quick inspection found thirty-two dumpsters on a single block. That would mean that one block alone accounted for nearly 1.5 percent of all the registered dumpsters in Philadelphia, which seems highly unlikely.
Enforcement of dumpster laws had been an extremely cumbersome process. To check whether a dumpster was properly registered, inspectors from the Streets Department had to call back to their home office from the field. The city had evaluated several options but determined the "best fit" solution would be hand-held mobile Radio Frequency Identification (RFID) readers with durable RFID tags (they call them "Medallions") affixed to the dumpsters. Dumpster licensees were provided with instructions on where to get the tags and how to affix them to a dumpster, and Streets Department personnel were provided with hand-held RFID readers. Once the system was in place, inspectors could instantly check the registration status of a dumpster by pointing an RFID gun at the tag. If a ticket needed to be issued for a violation of a dumpster law, such as a dumpster overflowing with trash, the violation would be recorded instantly with the RFID gun, the information would be sent back to a processing center, and a ticket would be mailed to the violator. Since most RFID guns can capture images, pictures could be taken in case the violator claims that their dumpster was not overflowing with trash.
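For the curious, the inspection logic boils down to a simple lookup. The sketch below is purely hypothetical (the city's actual system is not public); the registry, medallion IDs, and violation record are invented to illustrate the flow:

```python
# Hypothetical sketch of the field-inspection flow described above.
# The registry, medallion IDs, and violation record are illustrative only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Stand-in for the licensing database the RFID gun queries remotely.
REGISTRY = {
    "MEDALLION-0001": {"licensee": "Example Deli", "expires": date(2016, 12, 31)},
}

@dataclass
class Violation:
    tag_id: str
    reason: str
    photo_path: str  # image captured by the reader, if any

def inspect(tag_id: str, today: date, photo_path: str = "") -> Optional[Violation]:
    """Scan a medallion, check registration, and record any violation."""
    record = REGISTRY.get(tag_id)
    if record is None:
        return Violation(tag_id, "unregistered dumpster", photo_path)
    if record["expires"] < today:
        return Violation(tag_id, "expired license", photo_path)
    return None  # properly registered; nothing to do

ticket = inspect("MEDALLION-0001", date(2017, 3, 1), "overflow.jpg")
if ticket:
    print(f"Mail ticket for {ticket.tag_id}: {ticket.reason}")
```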
To date over 20,000 dumpsters have been tagged with “Dumpster Medallions” to support this initiative.
Green Incentives
Now that the tag system has been deployed, the city is looking at leveraging that investment in other ways. Estimates indicate that the city as a whole generates about 300,000 tons of organic waste each year, about half of which is food waste. The city is now moving forward with a plan that encourages restaurant owners to turn their food waste into compost rather than just throwing it into a garbage disposal. Restaurants can now use a separate dumpster just for food waste to be turned into compost. Typically those dumpsters are aerated, meaning they are self-contained and don't allow odors to escape. Some of the food waste dumpsters also have capacity gauges, so the restaurant would potentially have to empty them less often, saving money on disposal fees. As an added incentive, the price of the medallion for a compost dumpster is lower than that of a regular dumpster.
So how is this IoT?
As indicated in the first line of this post, many projects that are years old can now fall under the IoT banner. This particular application includes RFID tags, mobile RFID readers, legacy systems, and the potential for analytics. The city now has much greater visibility of how and where dumpsters are being used around the city and is now able to move forward with programs such as the composting incentives mentioned above. It is an example of connecting the real world with the virtual world and, at this point, is fully buzzword compliant.
So if you happen to be walking around the city of Philadelphia and spot a dumpster, which is actually pretty easy, then take a look for the RFID tag. That’s a real world production IoT application at work.
There is obviously a great deal of hype around "Internet of Things" (IoT) technologies. Ever the realist, I am determined to sort through the IoT hype and find out what is really happening in the world of Enterprise IoT. Since PTC/ThingWorx is rapidly becoming the Big Gorilla in the market for IoT platforms, LiveWorx was a great forum for engaging with the major players in the industry and networking with early adopters of Enterprise IoT.
I was not disappointed.
Initial Impression
I went into LiveWorx with no other agenda than to take it all in and keep my finger on the pulse of the industry. The focus of the conference was almost exclusively on enterprise IT infrastructure: application development tools, IoT platforms, wireless communications solutions, and analytics. This is all the "Stuff" that needs to be in place before you can start connecting your "Things". After spending time walking around the expo floor and speaking with attendees, the phrase that kept coming to mind was "This is where the grown-ups go to talk about IoT."
At first I spoke with several of the ThingWorx partners/vendors in attendance. As can be expected, it can be hard to get beyond the sales pitch in this type of environment, but I found most did a decent job. In my opinion the guys at Stream Technologies did the best job of telling me what they do (abstract wireless connectivity for ThingWorx applications…got it) and the value they provide to the customer.
What really got my attention was the genuine interest from several Fortune 500 executives in attendance including some heavy hitters from the manufacturing and healthcare sectors. These folks were engaged, listening intently to the presentations and networking with peers during breaks. This is a very good sign for the emerging Enterprise/Industrial IoT market.
Flashback to the 1990’s
As much as the emerging IoT market reminds me of the birth of enterprise middleware in the late 1990s, what really surprised me was that the vernacular du jour goes back even further. I cannot remember hearing the term "client/server" spoken so much in the past fifteen years. Speaking with a few gentlemen of similar experience in enterprise IT (translation: longer than we care to remember) over lunch, we all came to the same conclusion: it's all been done before. While some of the names may have changed, the core architectures are fairly similar. What IoT brings to this world is (hopefully) simplified enterprise integration, the flexibility to dynamically scale, distribute, and manage applications across what I expect to be hybrid cloud environments, and ubiquitous communications that make it easier than ever to reach out and touch sensors and gateways at the network edge. Toss in mobile devices and a variety of lightweight operating systems for good measure and you're on your way. In a way similar to the move to the three-tiered architecture of web-based applications, IoT represents a paradigm shift in how applications are developed and how data is collected and analyzed. Identifying the business case for adopting IoT is the easy part. Ultimately it will be the ability to abstract the "newish" stuff from developers and turn data into value that will determine the rate of adoption of IoT in the enterprise.
It’s about the data, stupid
Understandably, analytics was on the mind of just about everyone at LiveWorx. The real benefit of an enterprise IoT implementation will come from the ability to harness sensor data and turn it into actionable information close to the source. The fact is that the massive amount of data currently available to enterprise organizations is severely underutilized. The combination of IoT architectures and advanced analytics can make it significantly easier to extract value from the data that companies spend so much to produce.
There were a handful of analytics software vendors in the expo center, but the company that impressed me the most was ParStream with their geo-distributed edge analytics server. By moving analytics processing close to the edge (e.g. a remote manufacturing facility) you can better manage your network bandwidth and virtually eliminate latency. It is going to be interesting to see where PTC's newest analytics acquisition, ColdLight, fits as ThingWorx implementations continue to grow.
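To illustrate the edge analytics idea in miniature: summarize raw readings locally and ship only the summary (and any alert) upstream. The readings and threshold below are made up:

```python
# Sketch of edge analytics: reduce raw sensor readings to a compact
# summary locally, so only actionable information crosses the network.
# The readings and the alert threshold are made up for illustration.
import json
import statistics

raw_readings = [71.2, 71.4, 98.6, 71.1, 71.3]  # e.g. one minute of temperatures

summary = {
    "count": len(raw_readings),
    "mean": round(statistics.mean(raw_readings), 2),
    "max": max(raw_readings),
    "alert": max(raw_readings) > 90.0,  # hypothetical threshold
}

# Instead of five readings (or five million), the enterprise sees one
# small JSON document, and latency-sensitive alerting stays local.
print(json.dumps(summary))
```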
What’s Old is New Again…Again
In the early days of client/server architectures there was no such thing as "integration". To share data across applications we used shared databases. One or more applications would push data into a shared database, and other systems might then query joined database tables to pull shared data into their own realm. Fast forward twenty-five years and we have cloud-based data service exchanges like wot.io. Message-oriented middleware is now a cloud-based asynchronous integration engine. Analytics dashboards that were once only available to senior executives (who rarely used them) are now available to a much broader collection of users, but the concept is still the same. Just as Y2K brought mainframe COBOL developers out of retirement, IoT will keep individuals who have been down this path in high demand for many years to come.
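To make the contrast concrete, here is a sketch of the two integration styles side by side. The table, topic, and broker address are hypothetical, and the message example assumes the open-source paho-mqtt client library:

```python
# Two integration styles, side by side. The table, topic, and broker
# below are hypothetical; the second half assumes the paho-mqtt library.
import json
import sqlite3

# Old way: application A pushes rows into a shared database and
# application B polls or joins against the same table later.
db = sqlite3.connect("shared.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT, status TEXT)")
db.execute("INSERT INTO orders VALUES (?, ?)", ("A-100", "shipped"))
db.commit()

# New way: publish an event asynchronously and let any interested
# system subscribe, with no shared schema to coordinate.
import paho.mqtt.publish as publish

publish.single(
    "orders/status",
    json.dumps({"id": "A-100", "status": "shipped"}),
    hostname="broker.example.com",  # hypothetical broker
)
```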
Security and Standards
For obvious reasons security is a major concern. When you consider the myriad device types, operating systems, and wireless communications platforms involved, this will be particularly challenging. Where should security be implemented? The device? The network? The platform? The app? All of the above? It is going to be a while before this is sorted out, but I do not believe it will hold up pilot projects. However, a robust, standards-based security model will be absolutely essential to the growth of the IoT market.
On the flip side, there did not appear to be any concerns about the myriad IoT standards currently being developed by groups such as the Industrial Internet Consortium. If IoT plays out like the enterprise middleware market in the early 2000s, the market will select a de facto standard and eventually all of the industry leaders will get on board.
Missed Opportunities
While there was genuine interest from end-user companies, there was also genuine concern. Big companies make safe buying decisions. When you consider that a healthcare equipment manufacturer would have to make a large commitment to integrate IoT capabilities into their products long before they can leverage that investment, they want to make damn sure the vendor they select will be there to support them for the life of their products. The trouble is that in emerging markets it's the small, nimble startups that develop the innovative solutions. Catch-22.
The best way to address these customer concerns is to work through larger integrators (Accenture, CSC, etc.) who can not only help identify the visionary customers willing to step into the world of IoT but also provide a security blanket in case the startup goes under. Unfortunately there was a dearth of integrators in attendance, and this is an area I would suggest PTC work to address for next year's LiveWorx. Missed opportunity for sure.
I was also a bit surprised by what I saw as mixed messages from some of the keynote presentations. The rapid application development video/demo drove home the message that application development could be done much faster at a cost well below current industry norms. In contrast, the message conveyed during the "Rise of Connected Products" presentation was that companies need to hire a Chief IoT Officer and create an IoT group out of a cost center…and you're already falling behind your competition. That is a pretty tough sell for attendees to take back to their respective organizations, and I'm pretty sure that message did not go over so well with the LiveWorx audience.
The “Woz”
I would be remiss if I did not point out that one of the featured speakers at LiveWorx was none other than Apple co-founder Steve Wozniak (aka Woz). Rumor has it (ok, I Googled it) that Woz’s speaking fee is $50,000 which hints at the value PTC places on customer engagement. It wasn’t necessary but it was very cool and much appreciated.
In Summary
LiveWorx was a refreshing change from the typical conference where the speaking slots are thinly veiled promotional events for the companies that have the biggest booths on the expo floor. Nothing fancy, no dramatic trade booths, and not a ton of hype. It was just a bunch of experienced IT industry professionals looking at how emerging technology can help improve their business operations to either save or make money.
Through our RFID business we have a fairly large, well-established network of partners. Most of these partners are small to medium systems integrators…exactly the type of companies that are in an excellent position to do well when IoT technologies and application development practices start catching on in the enterprise. Recently IoT has been working its way into conversations with these partners, and as can be expected, the primary question is "Is anyone really doing anything?"
The answer is yes, and they have been for some time…even before it was called IoT.
Mainspring Healthcare Solutions is a long-time customer of RFID TagSource. The company has a great deal of subject matter expertise in managing specific types of healthcare assets from procurement through end-of-life (equipment, not patients). Examples include beds, IV pumps, and personal health monitors so small they can easily get caught up in bed linens. This equipment is expensive, must be properly maintained, and, most importantly, must be available at a moment's notice to provide the highest level of patient care. Investing in a solution that can dramatically improve the process of managing and maintaining this critical equipment has provided significant benefits to Mainspring customers.
Mainspring has been using both active and passive Radio Frequency Identification (RFID) technology for tracking healthcare assets for several years. With Mainspring, data from RFID readers is transmitted to a hosted application that provides near real-time visibility of critical assets throughout a healthcare network. The real value of this system lies in its ability to rapidly combine asset location information and operational status, driven by business rules, to turn data into actionable information that is embraced by end users.
The infrastructure required for the Mainspring system is fairly typical of traditional asset management solutions that use RFID. All of the assets are tagged and a reader infrastructure is installed to track the equipment as it moves throughout the facility. The bulk of the processing is handled by a cloud-based application, and the user interface is an iOS application running on an iPod Touch. A subset of the users also have a handheld RFID reader that communicates with the iOS application via a Bluetooth connection. The solution is elegant in its simplicity and uses "cool" tech that makes people want to use the system.
With the Mainspring solution in place, healthcare workers place requests for equipment through the iOS application. The request is processed and transmitted to the equipment management group, whose team is also equipped with iOS devices to respond to the request. After the equipment has been used, it is placed in a room near the point of care designated for equipment that needs to be decontaminated. The asset ID information stored in the RFID tags on the assets is then read by readers installed in these rooms and transmitted to the equipment management group. These individuals can now efficiently gather the assets as they make their rounds through the hospital and return them to the decontamination area. The decontamination room also has RFID readers that support a check-in/check-out process to make sure the equipment is properly cleaned before being transferred to the clean inventory. Once the equipment is in clean inventory it can be put back into service or removed from circulation for maintenance. Once it is released back into circulation the process begins all over again.
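For readers who like to see the moving parts, the workflow above is essentially a state machine driven by reader events. This is a hypothetical sketch, not Mainspring's code; the states and asset IDs are invented:

```python
# Hypothetical sketch (not Mainspring's code) of the asset lifecycle
# the RFID reads drive: each read at a known location moves an asset
# through the states described above.
VALID_TRANSITIONS = {
    "clean_inventory": {"in_use"},
    "in_use": {"awaiting_pickup"},
    "awaiting_pickup": {"decontamination"},
    "decontamination": {"clean_inventory", "maintenance"},
    "maintenance": {"clean_inventory"},
}

assets = {"IV-PUMP-0042": "clean_inventory"}

def on_read(asset_id: str, location_state: str) -> None:
    """Apply a reader event; reject reads that skip a required step."""
    current = assets.get(asset_id)
    if current is None:
        print(f"unknown asset {asset_id}")
    elif location_state in VALID_TRANSITIONS[current]:
        assets[asset_id] = location_state
        print(f"{asset_id}: {current} -> {location_state}")
    else:
        print(f"{asset_id}: ignored out-of-order read ({location_state})")

on_read("IV-PUMP-0042", "in_use")           # requested and delivered
on_read("IV-PUMP-0042", "awaiting_pickup")  # left in the soiled-equipment room
on_read("IV-PUMP-0042", "decontamination")  # checked in for cleaning
```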
RFID industry proponents might suggest this is an RFID solution; it's not. RFID is an enabling technology, not a "solution". While the mobile application component is a big part of the overall solution, it is dependent on back-end systems and could easily be transferred to another platform. The core Mainspring applications are designed to run in a hosted environment but could also be installed on-premises, so it is not a "cloud" solution either. What Mainspring delivers to their customers is a medical equipment asset utilization solution, one that leverages Internet of Things technologies including RFID tags, iOS devices, and cloud hosting, and that has proven to address an identified need in the healthcare industry. As time goes on, any one of the enabling technologies (RFID, mobile, or cloud) could be swapped out without having a significant impact on the overall value of the solution. Furthermore, the system can be remotely monitored and easily updated to support added features. To me that is a true Internet of Things application.
So, the next time you are speaking with someone about IoT and they ask “is anyone doing anything?” don’t get tied up in technical jargon about the latest standards and specifications…just point them here.
I recently read an article by Dana Blankenhorn entitled "The Internet of Things Needs a Channel". Dana's article was exclusively focused on the consumer market, but his theory is sound. Whenever a new technology enters the market, the early adopters can expect to feel a little pain. With consumer IoT this pain will primarily be related to configuring devices from multiple manufacturers to operate on your home automation platform of choice. The individuals best suited to figure this stuff out are tech enthusiasts who will combine their skills with a good bit of patience to be the first on the block who can control their lights from an iPhone.
In the late 1990’s consumer adoption of the PC really started to take off. At the time there were a combination of small businesses and stores like CompUSA where you could buy what you needed, take classes, and purchase technical support services to learn how to install things like modems and device drivers. Best Buy offers something similar with Geek Squad but unfortunately they may not be around much longer. Apple does a great job at this as long as you buy into the Apple ecosystem. With the popularity of the iPhone and overall ease of use of their iPads, iMac, and Macbooks, I fully expect Apple to win the battle for the consumer IoT market.
The other side of the IoT coin is the enterprise market. Fortunately the enterprise channel is well established and always looking for new ways to deliver value to their customers. From small vertically aligned resellers to large systems integrators who serve broad markets like manufacturing, healthcare, and aerospace and defense, chances are a potential partner is directly in the path between you and your first enterprise customer.
Breaking Down the Enterprise Sales Channel
The amount of time and effort you spend recruiting channel partners will be highly dependent on your ability to understand their business and articulate where your technology fits in their solution portfolio. You also need to make sure you understand the role they play in the sales process to determine how you should prioritize your partner recruiting efforts.
Channel partners fall into three primary categories:
Value Added Resellers (VARs) – These organizations typically combine hardware and packaged software (e.g. QuickBooks) from one or more manufacturers and primarily sell to end users. These companies may also offer technical services that are limited to hardware installation and software configuration. In most cases they are dealing with commodity products with slim margins, and they are not considered strategic in enterprise accounts.
Solution Providers – These companies tend to specialize in one or more industry verticals and combine hardware, software, and services to deliver a complete solution for their customer. Examples include companies that provide asset management solutions for the healthcare industry or work-in-process (WIP) solutions for industrial manufacturing. These organizations are considered strategic but in larger enterprise accounts they often play a secondary role to systems integrators.
Systems Integrators (SIs) – These organizations are the most strategic partners for their enterprise customers. They combine industry expertise and technical resources and often manage the core of their customers' enterprise systems. They maintain legacy systems and integrate new technologies and solutions that extend the value of the systems that run their customers' businesses. SIs primarily generate revenue through consulting and professional services. They are often part of the selection process for new technology products and/or solutions and manage sub-contractor relationships. While they may identify specific hardware/software platforms to support a customer initiative, they would prefer customers buy from manufacturers or Value Added Resellers.
Depending on where your products fit in the industry you may need to consider more than one path to market. Manufacturers of small sensors or beacons would sell primarily to VARs and Solution Providers. Higher-level hardware providers may sell through Solution Providers but will also spend time influencing Systems Integrators to make sure their offerings are specified as part of an enterprise solution. Cloud companies may sell through Solution Providers or partner with Systems Integrators to deliver cloud services to one or more of the SI's customers.
It is critically important that you establish the right channel. Channel sales follow the 80/20 rule, so it's important to focus resources on the channel partners in the best position to drive revenue. Pay attention to their current partners and research joint success stories. If a Solution Provider has established relationships with several large Systems Integrators, that is a very good sign. A Value Added Reseller who is selling a lot of complementary products (e.g. networking equipment) may also be a good channel partner. It is better to have a smaller number of high-quality partners than a large number of resellers who require a lot of hand holding.
Channel Conflict and Co-Opetition
Channel conflict occurs when companies are "partnering" on an opportunity and at some point one of those partners either circumvents the channel or brings in a competitor to the original partner.
If your company has decided on a channel-based sales strategy, it should become part of the DNA of your organization. Situations may occur where a customer decides they would prefer to work directly with a manufacturer rather than a reseller. When these things occur, speak with your partner and collectively determine how best to move forward and support the customer. Maintaining channel integrity is critical, and any hint of potential conflict should be avoided. In many ways the channel is your lifeline and should be treated as such.
Co-opetition occurs when two organizations cooperate in certain customer engagements while competing in others. There may be times when one of your competitors introduces your partner into an opportunity. At other times multiple partners may receive a request for proposal (RFP) from the same end-user customer. There may be times when the winning partner brings in the losing partner as a sub-contractor. What's important is that everyone is completely transparent so potential damage to the relationships can be avoided.
Making The Channel Work for You
An important point that is often left out of discussions about sales channels is how partners can help determine how your technology evolves. They have deep insight into the market and can help you identify unmet needs in the market that your product may be able to uniquely address. If they feel like they have contributed to your product direction then they are bought in and your technology will become an integral part of their solutions.
When it comes right down to it, the only thing that matters is that you generate revenue. Enterprise buyers make safe buying decisions and invest in technology through established vendors. It is your responsibility to make sure these vendors know who you are and how they and their customers will benefit from including your products in their solution portfolios. Make sure you put the proper legal agreements in place and establish rules of engagement. Good fences make good neighbors, and getting things in writing can help reduce the potential for conflict as the market evolves.
Establishing a network of respected channel partners that can bring your technology into their customer accounts is a great way for a startup to gain traction. It will take a great deal of time and energy but in time it will be well worth the investment.
While the IoT market is in its infancy the hype is off the charts. Will the market be huge? Absolutely! For whom? Ah, that is a much more difficult question. Having been down this road in the past my suggestions for IoT startups are as follows:
If you are going to be an entrepreneur your most important job is to make money. Along the way you need to conserve resources (primarily money) to make sure you are still around when customers finally have money to spend on whatever it is you are selling. You also need to have a pretty good idea of how much those customers are willing to pay for your products and how/if your business will be profitable.
This is greatly oversimplified, but I am trying to make a simple point. It is very easy to get swept up in frothy predictions referencing the number of connected devices over the next "X" years. There are plenty of analyst predictions you can reference that will look impressive in a business plan or pitch to investors. What is difficult, and your business depends on this, is understanding who is going to buy your product(s), when they are going to buy, why they are going to buy, and whom they are going to buy from (even if it's your product, don't assume they will buy it from you). Most importantly, will you be cash-flow positive, or able to bring in additional investment, before your money runs out? You do not want to be the guy who has the best (potentially over-engineered) widget that everyone thinks is super cool…but not so cool that they start pulling out their checkbooks.
Finally, beware of customer science projects. Potential investors will want to know that you have at least one marquee customer. Visionary buyers inside marquee accounts may be able to acquire enough budget for a pilot project but there are no guarantees that even a hugely successful project will move past pilot stage. Keep in mind that your marquee customer is getting a paycheck every two weeks and can afford to take their time – chances are you may not have that luxury. You need to be comfortable having open, honest, difficult conversations with customers and investors (including your significant other) and be prepared to walk away from “Super-Mart” if you can’t see a light at the end of the tunnel.
That’s all for now. Now go to the top of this post…rinse and repeat.
In Living on the Edge – An IoT Architecture for The Enterprise I provided some history to support what I thought was the most logical architecture for Enterprise IoT. In this post I intend to point to a few examples that support my position.
As a result of a consulting engagement in 2004 I found myself smack dab in the middle of the chaos caused by the Wal-Mart RFID initiative. Wal-Mart had recently announced that their suppliers would be required to affix passive Radio Frequency Identification (RFID) "smart" labels to cases and pallets of products shipped to Wal-Mart distribution centers. In simple terms, it is easiest to describe RFID as an electronic barcode. The primary difference is that barcodes require direct line of sight to the scanner and can only be read one at a time, while RFID requires no line of sight between the reader and tags, and hundreds of tags can be read simultaneously. At the time these readers were fairly basic data collection devices that sat at the edge of the network and passed raw data on to other applications for processing.
Wal-Mart would occasionally hold meetings for their suppliers where they could learn about new technologies and speak with vendors who could help them meet Wal-Mart's requirements. A significant portion of the individuals in the RFID industry hail from the world of barcoding. A small percentage, the people who make tags and readers, have their roots in RF. Many have a solid understanding of software, but only a limited few have any experience in the enterprise. What this means is that Wal-Mart imposed a mandate on hundreds of suppliers that required them to invest in immature technology from vendors with limited experience in enterprise IT. This was going to be interesting.
While attending one of these Wal-Mart supplier meetings I first heard the term "RFID middleware". What these "middleware" vendors were actually doing was aggregating data from RFID readers and sending it to a flat file or database. I distinctly remember speaking with a vendor who explained that their SAP "interface" was based on a CSV file. I'm fairly certain SAP would have taken a dim view of that claim. There were several stand-alone offerings that enabled suppliers to meet minimum Wal-Mart requirements, but nothing I would consider enterprise class.
A few years later I was at a conference where the mythical "RFID middleware" kept popping up. There were a few true enterprise software companies (e.g. IBM, BEA Systems) that stuck their toe in the water, but they moved on when they realized the market had yet to develop. Eventually I stumbled upon a company called Transcends, which was promoting an open source middleware platform called Rifidi. What these folks had done was build web services interfaces for leading RFID readers that enabled Java or .NET developers to do much more than simply parse raw reader data. In my opinion it was the first true enterprise-class solution designed to support edge operations in RFID implementations. Better yet, they offered this solution pre-installed on a low-cost Linux box that was damn near plug-and-play. For enterprise organizations interested in expanding their projects beyond the minimum Wal-Mart requirements this was an attractive option.
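To show why that mattered to developers, compare parsing raw reader output with calling a clean web service. The endpoint and payload below are invented for illustration and are not Rifidi's actual interface:

```python
# Illustration of the web-services idea: instead of parsing raw reader
# output, a developer calls an HTTP interface exposed by the middleware.
# The endpoint and payload below are invented, not Rifidi's actual API.
import requests

resp = requests.get("http://edge-server.local:8080/readers/dock-door-1/tags")
resp.raise_for_status()

for tag in resp.json()["tags"]:   # e.g. [{"epc": "...", "rssi": -48}, ...]
    print(tag["epc"], tag["rssi"])
```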
Fast forward several years to the emerging market for “Internet of Things” (IoT) technologies.
While a great deal of the hype surrounding IoT is based on consumer applications, I am much more interested in the potential for IoT in the enterprise. The primary challenge is figuring out how to securely integrate machine-to-machine and/or sensor data with applications distributed across multiple locations and platforms that may include mainframe, Windows, Linux, and mobile operating systems. To further complicate things you need to consider cloud computing and a variety of communication paths, including wireline, wireless, cellular, and satellite, across multiple service providers. Oh, and it all needs to be cheap, secure, and easy to deploy.
From the earliest days of distributed applications, integration has always provided the biggest challenges. Early message-oriented middleware solutions from companies like TIBCO, and IBM's MQSeries, were the first to provide truly robust integration platforms. With the advent of Java application servers, web services, and rapid application development platforms, enterprise middleware was born. The promise of enterprise IoT lies in extending the tried-and-true distributed computing model closer to the point of sensing, where data can be turned into actionable information in near real-time. In an effort to be fully buzzword compliant, this is being referred to as "Fog Computing".
The early market leader for a rapid application development platform that abstracts interfaces across an enterprise IoT ecosystem appears to be ThingWorx (now a PTC company). I am not in a position to offer an opinion on their technology, but I am hugely impressed by the list of partners supporting their platform. One of these companies, Camgian, has developed a highly capable enterprise fog computing appliance (software pre-installed on COTS hardware) called "Egburt". Egburt is designed to be installed on-premises to collect and process data close to the point of sensing, minimizing the potential impact of network latency and helping avoid what could be significant costs associated with cloud computing. That's a nice value proposition.
What companies like Transcends and Camgian have done is pragmatically apply proven technologies and decades of practical experience to provide enterprise-class solutions to early adopters of emerging technologies. In both cases an edge server was the most logical choice. The "software as hardware" model keeps the total cost of ownership down (no software licensing "gotchas"), facilitates early adoption, and greatly simplifies the process of scaling applications across multiple locations. This approach is elegant in its simplicity and is exactly what the industry needs to help promote adoption of IoT solutions in the enterprise.
The world of enterprise IT has historically architected networks and applications in a hub-and-spoke model. The reasons for this are simple: it's all about the data, and the data is still kept in a glass house that is owned and managed by corporate IT. This could mean Oracle databases, SAP applications, etc. These environments take a great deal of care and feeding, must be backed up and have disaster recovery plans in place, require a unique skill set to manage, etc. If these systems are compromised in any way heads will roll, so the teams responsible for the integrity of these systems play an integral role in the design of the network infrastructure.
First, a brief history lesson…
Prior to the introduction of the IBM PC in the enterprise, the network communications groups were fairly small and had unique skills in technologies like IBM's Virtual Telecommunications Access Method (VTAM) and Systems Network Architecture (SNA). The number of external connections was minimal and generally limited to connecting with other enterprise locations that were dependent on the company "system".
As the need for external communications grew, the responsibility for designing and maintaining network connections fell to these same centrally located individuals. MIS teams frowned upon external network connections and made sure they kept a tight rein on things. In the early days the only way to "dial in" to the system was using a preconfigured PC and modem provided by the MIS team. They had a very limited number of dial-in phone numbers that were shared across the company. If you tried to dial in and got a busy signal you simply tried one of the other numbers.
The Edge of the network had now extended to PCs that were still configured and managed by MIS.
In the late 1990s the advent of Internet Service Providers (ISPs) and the "world wide web" meant IT could begin to move away from managing dial-in applications and get a handle on communications costs and scaling. When the first true web-centric application development tools enabled previously static web pages to connect to back-end systems, the important components such as the database and network management systems were still managed within the glass house. Even then, the first web-centric, browser-based applications were rolled out as intranets that were only accessible from behind corporate firewalls. Chances are, if people were in a position to pass through the corporate firewall they were still using a company-supplied PC pre-configured by the IT group.
At this point the Edge of the network was the corporate firewall.
In the late 90s and early 2000s, when the enterprise was broadly adopting a three-tier architecture, innovative vendors helped create a broad set of tools to develop and deploy scalable, enterprise-class web-based applications. The network groups still kept control of everything behind the firewall, but the application development teams now needed to make sure their new web-based applications supported the popular browsers of the day. This was especially important now that individuals were using equipment that had not been provided by the company.
The first true web browser accepted by the enterprise was Netscape Navigator. Microsoft had missed the boat with the rise of the internet, and it took years for IE to become a viable alternative. In true Microsoft fashion, they attempted to dominate the browser market by embedding their own "standards" and not supporting the de facto standards and technologies that had already been accepted by early web developers. Application testing now included browser testing, down to the version level, to make sure the end users understood the requirements to access the application. Ironically enough, it was not uncommon for IT teams to fall back to the old trusty "sneakernet" to install "thin client" browsers across an enterprise prior to rolling out a new intranet application.
While not as cut-and-dried as before, the Edge of the network was the web-facing application server instances.
The first supported "mobile" applications were really thin-client browsers based on the Wireless Application Protocol (WAP). WAP microbrowsers were initially supported on a limited number of mobile devices and were essentially very thin, stripped-down web browsers that provided basic access to email and news headlines. Support for WAP was weak, and the key application, email, was quickly dominated by the proprietary BlackBerry devices and ecosystem that had been so successfully deployed in the enterprise.
Mobile fat-client hand-held computers, such as those developed by Symbol Technologies and Intermec, had been in place for many years before what we know today as smartphones. These devices typically communicated with enterprise applications through Windows PCs via a limited sync process. Broader deployment of these systems required added capabilities that took the manual sync process out of the hands of end users. In most instances these were silo applications (e.g. asset management) that were not considered mission critical and almost exclusively stayed within the four walls of the enterprise.
The introduction of the iPhone in 2007, quickly followed by the Android operating system in 2008, was the catalyst for consumer adoption of smartphones. These devices were still dependent on the thin-client, browser-based model, and communications were very slow (and expensive); 3G wireless networks were just being rolled out. The enterprise was still deeply committed to the BlackBerry infrastructure, which meant that non-BlackBerry smartphone access to company email and applications was not supported.
As consumer smartphones became more ubiquitous and enterprise IT was effectively forced to support non-BlackBerry devices, it became obvious that the thin-client model was not going to work; the fat-client smartphone app was born. This was very similar to the process the enterprise went through when going from mainframe to client/server. The problems started to escalate when senior executives started showing up at the door of the IT group asking when their latest iThingy would be supported. The fact that they showed up with a budget helped move things along.
The old problems began to arise again: apps still needed to be tested across multiple platforms, versions of client apps needed to be managed, people still wanted to access apps through web browsers so those needed to be supported, and companies needed to develop mobile versions of web apps to limit the amount of data and improve performance on mobile devices. In time, Mobile Device Management (MDM) companies appeared on the scene to provide the type of tools required to manage deployment of mobile applications in the enterprise.
At this point in time, the mobile devices (limited to the legacy hand-held computers and MDM-managed devices) were the Edge of the network.
Whew…
In 2003 Wal-Mart issued a mandate to their suppliers requiring that cases and pallets of products shipped to Wal-Mart have a passive RFID tag affixed that met the industry standards being developed at the time. As an early player in the RFID industry I saw a great deal of thrashing among companies trying to bridge the gap between RFID readers and the enterprise. Many of the "solutions" put forth were little more than a collection of batch data synchronization tools that were often referred to as "platforms". However, when companies like Wal-Mart are driving the RFID industry to support enterprise-class, near real-time visibility of supply chain operations, those "platforms" were not going to cut it.
The challenge for customers was that they had dozens, and in a few cases hundreds, of fixed and hand-held RFID readers spitting out reams of data with no real way to parse and sort the data before sending it back to the enterprise. A limited set of companies began using traditional Java application server technology and developed interfaces to select RFID readers (typically Windows or Linux devices). This still presented a different set of challenges, since many of the RFID projects were pilot projects driven by customer mandates and support from enterprise IT was not forthcoming. To further complicate things, many of these customer organizations had already standardized on a specific application server platform (e.g. IBM WebSphere), and the responsibility to get in line with the corporate standards fell to the platform vendor. With a very few limited exceptions, the solutions that eventually were developed were silo applications that at best were loosely coupled with back-end enterprise systems.
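A concrete example of the parsing problem: a portal reader reports the same tag hundreds of times as a pallet rolls through a dock door, while the enterprise only wants one arrival event per tag. Here is a minimal sketch of the kind of read smoothing these platforms had to perform; the tag ID and timings are illustrative:

```python
# Sketch of edge-side read smoothing: collapse bursts of duplicate raw
# reads into a single arrival event per tag. Timings are illustrative.
SMOOTHING_WINDOW = 5.0  # seconds of silence before a tag counts as new

last_seen = {}  # epc -> timestamp of the most recent raw read

def on_raw_read(epc: str, timestamp: float) -> None:
    """Emit an arrival event only on the first read in a window."""
    previous = last_seen.get(epc)
    last_seen[epc] = timestamp
    if previous is None or timestamp - previous > SMOOTHING_WINDOW:
        print(f"ARRIVED {epc} at {timestamp:.1f}")  # forward to enterprise
    # otherwise: duplicate read inside the window, dropped at the edge

for t in (0.0, 0.1, 0.2, 0.3, 12.0):   # five raw reads of one tag
    on_raw_read("EPC-3000-1234", t)     # yields two events, not five
```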
A few visionary companies saw this as an opportunity and started developing network appliances, essentially Linux boxes with pre-configured software installed, specifically designed to manage broad deployments of auto-ID devices, including RFID readers, and to manipulate and parse data from those devices before shipping it to the enterprise. These appliances could also communicate peer-to-peer and be configured to share workloads and support automated load balancing and fail-over. In my experience these devices were the first "physical" edge servers. They were also very expensive and provided much more capability than companies needed or were willing to pay for. Unfortunately some of these companies took too much money from investors too early in the market and ran out of cash before the market was ready for the solution they were offering.
At approximately the same time I came across a company that was developing an open source RFID middleware solution called Rifidi. They were also providing a software-as-hardware option of the Rifidi server on a very low-cost Linux appliance. While deployments of these systems are still fairly rare, this type of edge server appliance based architecture has been well received.
At this point the potential exists for the Edge of the network to be an edge server appliance managing peripheral devices (e.g. RFID readers) and sensors.
If you buy into the idea that the value of IoT in the enterprise lives at the "Edge" (I certainly do), then you also recognize that the edge can be quite a mess. Consider that you have smartphones, sensors, proprietary M2M solutions, device controllers, switches, power systems, infrastructure installations, etc., none of which was designed to work together: a tangled mess of highly capable devices that are incapable of communicating with one another. Furthermore, the traditional legacy enterprise hub-and-spoke architecture simply will not work if an organization is to get the true value of IoT. These devices must be able to communicate peer-to-peer and operate asynchronously with the enterprise through multiple communications methods and one or more cloud instances. They must support forward recovery, similar to enterprise transaction systems and message queuing solutions, and seamlessly plug into enterprise environments.
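The forward-recovery requirement is easier to see in code. Here is a minimal store-and-forward sketch in the spirit of enterprise message queuing: persist events locally first, then drain the queue whenever a path to the enterprise is available. The send_upstream transport is a stand-in:

```python
# Sketch of store-and-forward at the edge, echoing enterprise message
# queuing: events are persisted locally, then drained when a link is up.
import json
import sqlite3

db = sqlite3.connect("edge-queue.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, body TEXT)")

def enqueue(event: dict) -> None:
    """Persist the event locally before any transmission is attempted."""
    db.execute("INSERT INTO outbox (body) VALUES (?)", (json.dumps(event),))
    db.commit()  # survives a power cycle: forward recovery

def drain(send_upstream) -> None:
    """Deliver queued events in order; delete each only after delivery."""
    rows = db.execute("SELECT id, body FROM outbox ORDER BY id").fetchall()
    for row_id, body in rows:
        send_upstream(body)  # raises if the link drops; row stays queued
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()

enqueue({"sensor": "pump-7", "vibration": 0.8})
drain(print)  # stand-in transport; real code would POST or publish upstream
```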
In a strange way it appears as if everything has come full circle. The legacy on-premises "glass house" model is slowly moving toward "the cloud." Instead of business-unit-specific purchased applications maintained by IT, we now have cloud-based subscription software licensing models. We have gone from thin-client terminals which required very little support from IT, to fat-client PCs, to three-tier architectures with thin-client browsers, to thin-client mobile devices, to fat-client consumer mobile devices…which interestingly enough require very little support from IT. All the while, what was seen as the "edge" of the network has morphed in response to the needs of users, with an eye on enterprise stability and security.
And the winner is…
In my opinion the best near-term architecture to support IoT in the enterprise would consist of software-as-hardware edge server appliances that communicate with either on-premises or cloud-based applications offering enterprise-class scalability, fail-over, fault tolerance, and standards-based integration modules. These devices would need to manage a multitude of device types, communicate peer-to-peer, and support a variety of communication methods including Ethernet, Wi-Fi, Bluetooth, cellular, and satellite, switching dynamically and securely based on availability and pre-defined business rules. They must support standard runtime environments (e.g. Java) for hosting edge-centric applications (e.g. analytics) and support deployment in a way that lets enterprise developers use existing tools and methodologies. Most importantly, they must be easy to deploy.
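As a thought experiment, the communications fail-over piece might look something like the sketch below: try links in business-rule priority order and take the first viable one. The availability checks and cost flags are stand-ins:

```python
# Sketch of rule-driven connectivity fail-over for an edge appliance:
# try links in priority order and take the first one that is up.
# The availability checks and the "bulk allowed" flags are stand-ins.
from typing import Callable, List, Tuple

# (name, is_up() check, allowed for bulk data?) in business-rule order
LINKS: List[Tuple[str, Callable[[], bool], bool]] = [
    ("ethernet",  lambda: True,  True),
    ("wifi",      lambda: False, True),   # simulated outage
    ("cellular",  lambda: True,  False),  # metered: alerts only
    ("satellite", lambda: True,  False),  # last resort: alerts only
]

def pick_link(bulk: bool) -> str:
    """Return the highest-priority link that is up and permitted."""
    for name, is_up, bulk_ok in LINKS:
        if is_up() and (bulk_ok or not bulk):
            return name
    raise RuntimeError("no viable uplink; queue locally and retry")

print(pick_link(bulk=True))    # ethernet
print(pick_link(bulk=False))   # ethernet; falls through if it drops
```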
As someone with a long history in enterprise IT, first as an early techie and later as a sales/marketing/biz dev guy, I find this space particularly interesting. I have stumbled upon a few select vendors from outside the traditional IT market that seem to be very well positioned to deliver exactly this type of solution. I'll be digging into this a bit more and will let you know what I find…stay tuned.