April 11, 2018

A Modern Way to Think About Your Next-Generation Applications

Anoop Dawar


Many organizations are on the path to digital transformation. This transformation is imperative to survive and thrive in this new digital and intelligent world.

According to a McKinsey report, “Seventy percent of executives say that over the next three years, they expect digital trends and initiatives to create greater top-line revenues for their businesses, as well as increased profitability.”

As data continues to grow at an unprecedented rate, organizations are struggling to keep pace. Last-generation technologies, and even many of today's, cannot meet the data requirements for architecting, developing, and deploying applications that attract, engage, and retain customers, or that automate business processes to help reduce costs. We call these applications transformative applications.

The Age of Data-Intensive Applications

Transformative applications must be able to interact with all types of data from anywhere and everywhere, from IoT to on-premises to multi-cloud to the edge, in order to achieve greater business results.

In addition to deriving value from new data types, organizations must stay ahead of the competition and the market by moving core business applications from after-the-fact insights and processing to acting "in the moment," leveraging machine learning and artificial intelligence to impact the business now.

As you can see, data is the critical component when developing these next-generation applications. These modern apps are data-intensive and have demanding requirements that I call the three Cs.



Enterprises want to attract, engage, and retain their customers by providing personalized user experiences at precisely the right time, leveraging analytics, machine learning, and artificial intelligence. Pontis, a leading provider of telecommunications solutions, built digital engagement applications for telecom service providers to help them establish personal relationships with their subscribers, turning them into long-term loyal customers.


With data growing at a rapid pace from everywhere, it is challenging for organizations to ensure their customers have a seamless user experience across all channels and touch points. Aadhaar, the unique identification project for the government of India, operates the largest biometric database in the world. More than 1.3 billion people in India use this system to access a variety of services and benefits.

The project enables residents in India to receive food coupons and cooking gas deliveries, open checking accounts, and apply for loans, insurance, pensions, property deeds, and other services. In addition, the program makes it possible for the Indian government to make sure that welfare benefits go directly to the right person, as it gathers data from multiple sources to deliver a flawless end-user experience.


Today, all data is mission-critical. Enterprises and organizations need to quickly build applications, run them 24/7 without interruption or downtime, and scale effortlessly up or down as the data, users, and business require.

Transformative Applications Require a Data Fabric

Building modern applications requires a new way of thinking. Contrary to how it may sound, we must stop starting with application requirements and instead prioritize data requirements. Here's why.

Data fabrics hide underlying complexity (photographyfirm/Shutterstock)

Today's data is complex and siloed, making it difficult to create applications that deliver contextual experiences. Contextual experiences are very different from personalized experiences: they are customized to an individual's specific requirements, so that the right information is delivered to the right user, at the right time, on the right device.

To create such experiences, organizations require massive amounts of data, big and small, which sit in hundreds, if not thousands, of disconnected applications, each on its own database infrastructure, across the entire organization.

Traditionally, databases were not built for running multiple applications simultaneously, let alone making multiple connections between applications. This is not to say you cannot have workload management on traditional databases; it just means that each "database" is typically focused on a single app. As a result, organizations have hundreds of databases for hundreds of apps, and these databases are not talking to each other directly and in real time. Organizations often compromise on speed, scale, and reliability when making their database selection, and traditional RDBMS systems are rigid and have limited scalability.

Most NoSQL databases, meanwhile, lack the enterprise-grade features that organizations require to run their system of record for business-critical applications. And databases are just one key component of an application architecture.

Over the years, companies have taken on a “build-your-own” data infrastructure approach, using last-generation and even newer “big data” technologies; none of these technology solutions alone, or even as an integrated approach, has helped organizations achieve their digital transformation goals. In a build-your-own environment, analytical applications are separated from operational applications. The workload requirements between the two are different and require separate data silos running on different technologies.

Customer demands are increasing and becoming more individualized, data continues to grow, and business strategies are ever-changing. A new way of thinking about developing next-generation applications is needed to avoid this complexity and to minimize the risk of running, managing, and maintaining multiple systems.

A data fabric brings together the key technologies that make up a modern data architecture, including a distributed file system, a multi-model NoSQL database, a publish/subscribe event streaming engine, ANSI SQL, and a broad set of open source data management and analytics technologies.
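To make one of these components concrete, a publish/subscribe event streaming engine decouples the applications that produce events from the ones that consume them, so many apps can react to the same data without direct connections. The sketch below is a minimal, hypothetical in-memory illustration of the pattern; real fabrics use distributed, persistent event logs.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory publish/subscribe sketch (illustrative only)."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(event)

# Usage: two independent applications react to the same order event
# without knowing about each other.
bus = EventBus()
analytics_log, fulfillment_log = [], []
bus.subscribe("orders", analytics_log.append)
bus.subscribe("orders", fulfillment_log.append)
bus.publish("orders", {"order_id": 1, "amount": 42.0})
```

The point of the pattern is that the producer publishes once, and any number of downstream applications (analytics, fulfillment, fraud detection) consume the event independently.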

A data fabric supports multiple data types at vast scale and spans multiple locations, whether edge, on-premises, or cloud. It provides applications a single, secure view of all the data across all applications, enabling them to inject intelligence into operations.
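One way to picture that "single view" is a global namespace that maps logical paths to whatever location actually holds the data, so applications read everything the same way. The sketch below is a hypothetical illustration of the idea, not any vendor's API.

```python
class GlobalNamespace:
    """Hypothetical sketch: one logical path space over many locations."""

    def __init__(self):
        # logical prefix -> (location label, reader function)
        self._mounts = {}

    def mount(self, prefix, location, reader):
        self._mounts[prefix] = (location, reader)

    def read(self, logical_path):
        # Resolve the longest matching prefix, then delegate to its reader.
        for prefix in sorted(self._mounts, key=len, reverse=True):
            if logical_path.startswith(prefix):
                location, reader = self._mounts[prefix]
                return reader(logical_path[len(prefix):])
        raise FileNotFoundError(logical_path)

# Applications read "/sensors/..." and "/archive/..." identically,
# even though one is served from the edge and the other from a cloud store.
ns = GlobalNamespace()
ns.mount("/sensors/", "edge", lambda p: f"edge-data:{p}")
ns.mount("/archive/", "cloud", lambda p: f"cloud-data:{p}")
print(ns.read("/sensors/plant1/temp"))  # prints "edge-data:plant1/temp"
```

The application never encodes where the data lives; moving data between edge and cloud only changes the mount, not the application code.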

Injecting intelligence into operations is critical because, as competition heats up, the insights you build will soon be available to your competitors as well, and once that happens, the insight becomes commoditized. It is essential to capitalize on an insight before it perishes.

A modern data fabric should have these characteristics:

  1. A single platform that runs analytics and applications together.
  2. Complete data management, from big to small, structured and unstructured: tables, streams, and files, all data types from any source.
  3. A modern database that runs rich, data-intensive applications and in-place analytics and supports document formats.
  4. A global cloud data fabric that brings together all data from across all clouds to ingest, store, manage, process, apply, and analyze data as it happens.
  5. Diverse compute engines to take advantage of analytics, machine learning, and artificial intelligence. These help inject intelligence into business operations.
  6. Cloud economics, supported by operating on any and every cloud of your choice, public or private.
  7. No lock-in, supporting open APIs.
  8. DataOps-ready, to champion the new process of creating and deploying intelligent modern applications, products, and services.
  9. Trusted with security built in from the ground up.
  10. Streaming-first and edge-first for all data in motion, from any data source, as it happens, with native enablement of microservices.
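Characteristic 10, streaming-first, means updating results as each event arrives rather than re-scanning a batch after the fact. A minimal sketch of that idea, a running mean maintained in place per event (hypothetical, for illustration):

```python
class RunningMean:
    """Update an aggregate in place as each event arrives (streaming-first)."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value):
        # Incremental mean: no need to store or re-scan historical events,
        # which matters at the edge where storage and bandwidth are limited.
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return self.mean

# Usage: three sensor readings arrive one at a time.
stream = RunningMean()
for reading in [10.0, 12.0, 14.0]:
    latest = stream.update(reading)
# After three readings the in-place mean is 12.0.
```

The same incremental style generalizes to counts, sums, and windowed aggregates, which is what lets insight be applied "as data happens" instead of after a batch job completes.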

Today, transform or fail is the new mantra. In this new world, where data is the foundational enabler for all next-generation applications, a new way of thinking about data-intensive applications is necessary, one free of complex trade-offs and compromises.

About the Author: Anoop Dawar is senior vice president of product marketing and management at MapR Technologies, where he is responsible for worldwide product, solutions, and services marketing. Prior to this role, Anoop was VP of Product Management at MapR, leading the core data platform as well as the analytics stack. Anoop comes to MapR with over a decade of experience leading product management and development teams at Aerohive (HIVE) and Cisco (CSCO). His scientific approach to marketing problems stems from his deep background in both business and technology as a practitioner and a student. Anoop received an MS in Computer Science from the University of Texas at Austin and an MBA from The Wharton School of the University of Pennsylvania.

Related Items:

Big Data Fabrics Emerge to Ease Hadoop Pain

Hiding Hadoop: How Data Fabrics Mask Complexity and Empower Users