July 10, 2014

Three Reasons to be Scared of the Internet of Things

Dana Sandu

We know the Internet of Things forecasts: 50 billion connected devices by 2020. Apparently, there’s huge money to be made from fridges reordering groceries, wearables speaking to doctors, and home systems chatting with cars.

Most technologists, investors, and consumers are floating in a Fata Morgana dream, peppered with cool applications, automated living, and saving the Earth in real time. All very inspiring, but how do we make it happen and, most importantly, what are the side effects?

Theoretically, things are simple. The Internet of Things requires:

  • A. Sensors and devices: our world’s new kind of nervous system;
  • B. Connectivity: how fast, accurately, and cheaply we can digitize all the information, and how quickly, securely, and completely networks can gobble it up;
  • C. People & Processes: bi-directional systems that can integrate data, people, processes, and systems for better decision-making.

While all the shiny numbers refer to the rapid evolution of sensors and devices, little attention is being paid to connectivity, real-time data management, integrated processes, and scalable applications. With IoT being a Big Data problem, and with 48.6% of organizations having no Big Data strategy (according to the financial services Big Data Heat Map), isn’t it about time we started worrying?

DATA

When industry applications start developing horizontally (think transportation + smart cities = smart parking; healthcare + smart home = independent elder care; smart buildings + mobility = smart energy), integrations have to become more tightly coupled across time, location, and services. What results is MORE IoT data—large in volume, multi-modal, and varying in quality, format, and rate of change. All of it needs to be collected, processed, aggregated, and analyzed, and then integrated with enterprise systems and visualized. In real time.
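To make those stages concrete, here is a minimal Python sketch of one "collect, normalize, aggregate" step, assuming events arrive roughly in timestamp order. The source names, field layouts, and sample feed are hypothetical stand-ins, not any particular product's API.

from collections import defaultdict

def normalize(event):
    # Map heterogeneous source formats onto one (ts, key, value) shape.
    if event["source"] == "parking_sensor":        # hypothetical format A
        return event["ts"], event["lot_id"], 1.0   # count occupancy events
    if event["source"] == "traffic_camera":        # hypothetical format B
        return event["time"], event["zone"], float(event["vehicles"])
    return None                                    # drop unknown formats

def windowed_totals(events, window_seconds=60):
    # Aggregate over tumbling windows, yielding (window_start, key, total)
    # each time a window closes. Assumes roughly time-ordered input.
    windows = defaultdict(float)
    current = None
    for event in events:
        record = normalize(event)
        if record is None:
            continue
        ts, key, value = record
        start = ts - (ts % window_seconds)
        if current is not None and start > current:
            for k, total in sorted(windows.items()):
                yield current, k, total
            windows.clear()
        current = start
        windows[key] += value
    for k, total in sorted(windows.items()):
        yield current, k, total

# Toy feed mixing two formats; a real deployment would read from a broker.
feed = [
    {"source": "parking_sensor", "ts": 10, "lot_id": "A"},
    {"source": "traffic_camera", "time": 30, "zone": "A", "vehicles": 4},
    {"source": "parking_sensor", "ts": 70, "lot_id": "A"},
]
for window_start, key, total in windowed_totals(feed):
    print(window_start, key, total)   # -> (0, A, 5.0) then (60, A, 1.0)

A real pipeline would add out-of-order handling and fan-out to enterprise systems, but the shape of the problem, continuous windows over heterogeneous feeds, is the same.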

The problem is that while the volumes grow (Stanford University says we’ll go from generating 1,200 exabytes per year today to 35 zettabytes per year by 2020), systems also need to maintain low latency, which, unfortunately, is not what happens as databases expand. Worse, most big data technologies lack standards. Proprietary languages are on the rise, and while some are acceptable, most face the same issues: integration with existing systems is slow, and acquisition and maintenance costs are exorbitant.

COST

With all this data comes the cost equation for monetizing the Internet of Things—a tricky, double-edged sword with untested margins for error. Getting it right will separate the successful from the delusional, though the gap between the two is not yet as dramatic as expected. On the cost side sits the price of delivering a service: the raw infrastructure costs, like hardware and connectivity. With mobile backhaul costs declining, particularly with the increasing use of older 3G and even 2G network capacity where available, the pressure is on keeping the data management infrastructure affordable. Big data solutions built on traditional RDBMS or Hadoop platforms are expensive, given the combination of low-latency and high-throughput requirements. Storage costs are already a major concern, so the cost of real-time scalability for storage-based technologies is prohibitive: simply too much data to store, and too much hardware required to deliver the performance.

PRIVACY AND SECURITY

The ethics of owning and handling personal data come up frequently, as courts have yet to rule on what counts as private. Connectivity, if accomplished, will dissipate responsibility even further and make it very hard to ensure that all the mining models respect the parties involved in an IoT transaction.

Big data has always been an asset. For example, the White House is building exhaustive databases of private data, the FBI is working on a face library, the Treasury Department is scanning government databases, and the Education Department is tracking student performance across geographies and ages. The Defense Department is even digging into commercial databases, which are increasingly difficult to control when it comes to the privacy of the data being stored.

Launch-date pressure, the push to make devices more affordable, and hacker threats make security an issue too big to handle. Add in the serene “that won’t happen to me” attitude plaguing buyer behavior, and we get a pretty grim picture. Device security, network security (depending on VPNs or encryption), and data security (device and transient data) can all be compromised.

The next generation of big data is set to exacerbate the open wounds of real-time data processing. We will no longer be able to gloss over privacy, real-time processes, and data volumes too large to physically store. The current technologies supporting the revenue dream bubble (managed services, enablement hardware, networks) have some fearing—and planning—to do.

One option is to forget storage and start thinking about in-memory and stream processing. Incremental computation could allow staggering volumes of data to be analyzed in very little time; a minimal sketch of the idea follows below. Another good bet would be to look for standards-based technologies; without them, the field threatens to become an oligopoly.
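Here is that incremental idea in Python, assuming a feed of (sensor_id, value) readings; the sensor names and the reading generator are hypothetical stand-ins for a real stream. Each reading updates a few bytes of running state and is then discarded, so memory stays constant no matter how large the stream grows.

import random

class RunningStats:
    # Incrementally tracks count, mean, min, and max in O(1) memory.
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.minimum = float("inf")
        self.maximum = float("-inf")

    def update(self, value):
        # Welford-style running mean: no history is retained.
        self.count += 1
        self.mean += (value - self.mean) / self.count
        self.minimum = min(self.minimum, value)
        self.maximum = max(self.maximum, value)

def sensor_readings(n):
    # Hypothetical stand-in for an IoT feed of (sensor_id, value) tuples.
    for _ in range(n):
        yield random.choice(["thermostat", "meter"]), random.gauss(20.0, 5.0)

stats = {}
for sensor_id, value in sensor_readings(100_000):
    stats.setdefault(sensor_id, RunningStats()).update(value)

for sensor_id, s in stats.items():
    print(f"{sensor_id}: n={s.count} mean={s.mean:.2f} "
          f"min={s.minimum:.2f} max={s.maximum:.2f}")

The same pattern extends to sliding windows and richer aggregates; the point is that the raw readings never need to be stored in order to be analyzed.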

As the IoT evolves, big data technology will undoubtedly evolve to meet it. But rather than ignore the pressure (which would force some unforeseen adaptations of yesterday’s solutions), maybe the technologies need to be designed for—and in parallel with—the applications driving the change in the first place.

Related Items:

Postal Service Eyes ‘Internet of Postal Things’

Can the Internet of Things Help Us Avoid Disasters?

How Fast Data is Driving Analytics on the IoT Superhighway

About the author: Dana Sandu is a Market Evangelista for SQLstream, Inc., a stream processing company powering smart services for the Internet of Things.
