
May 15, 2011


New Tools for New Times – Primer on Big Data, Hadoop and “In-memory” Data Clouds

by Ravi Kalakota

Data growth curve: Terabytes -> Petabytes -> Exabytes -> Zettabytes -> Yottabytes -> Brontobytes -> Geopbytes. It is getting more interesting.

Analytical Infrastructure curve: Databases -> Datamarts -> Operational Data Stores (ODS) -> Enterprise Data Warehouses -> Data Appliances -> In-Memory Appliances -> NoSQL Databases -> Hadoop Clusters

———————

In most enterprises, public or private, there is typically a mountain of structured and unstructured data that contains potential insights about how to serve customers better, engage with them more deeply and run processes more efficiently. Consider this:

  • Online firms, including Facebook, Visa and Zynga, use Big Data technologies like Hadoop to analyze massive amounts of business transaction, machine-generated and application data.
  • Wall Street investment banks, hedge funds, and algorithmic and low-latency traders are leveraging data appliances such as EMC Greenplum hardware with Hadoop software to do advanced analytics in a “massively scalable” architecture.
  • Retailers use HP Vertica or Cloudera to analyze massive amounts of data simply, quickly and reliably, resulting in “just-in-time” business intelligence.
  • New public and private “data cloud” software startups capable of handling petascale problems are emerging to create a new category: Cloudera, Hortonworks, Northscale, Splunk, Palantir, Factual, Datameer, Aster Data and TellApart.

Data is seen as a resource that can be extracted, refined and turned into something powerful. It takes a certain amount of computing power to analyze the data and pull out and use those insights. That’s where new tools like Hadoop, NoSQL, in-memory analytics and other enablers come in.

What business problems are being targeted?

Why are some companies in retail, insurance, financial services and healthcare racing to position themselves in Big Data and in-memory data clouds while others don’t seem to care?

World-class companies are targeting a new set of business problems that were hard to solve before: modeling true risk, customer churn analysis, flexible supply chains, loyalty pricing, recommendation engines, ad targeting, precision targeting, PoS transaction analysis, threat analysis, trade surveillance, search-quality fine-tuning, and mashups such as location + ad targeting.

To address these petascale problems, an elastic/adaptive infrastructure for data warehousing and analytics is converging, one capable of three things:

  • the ability to analyze transactional, structured and unstructured data on a single platform
  • low-latency in-memory or solid-state (SSD) storage for super-high-volume web and real-time apps
  • scale-out on low-cost commodity hardware, with distributed processing and workloads

As a result, a new BI and analytics framework is emerging to support public and private cloud deployments.

The excitement is that Big Data capabilities fundamentally change the core premise of BI and analytics: the ability for end users (and even machines) to perform ad-hoc analysis and reporting over large and continuously growing amounts of structured and unstructured information, such as log files, sensor data, streaming data, sales transactions, emails, research data and images, collectively known as “big data.”

Technology Innovation around Big Data

Big Data is a hot topic because it represents the first time in about 30 years that people are rethinking databases and data management. Since about 1980, the enterprise database market has consolidated around three vendors: Oracle, IBM and Microsoft.

So a tremendous amount of innovation is taking place around streaming databases, low-latency OLTP, NoSQL, in-memory, columnar and cloud databases.

Innovation is in multiple categories:

  • Data Volume management (and parallel pipeline processing)
  • Data Structures
  • Data Dimensionality
  • Hardware architectures: people want to scale horizontally, like Google

Innovation around Big Data is also happening on other fronts from the core (e.g., analytics and query optimization), to the practical (e.g., horizontal scaling), to the mundane (e.g., backup and recovery).

New Tools

So if you have not heard of these tools – Hadoop, NoSQL, MongoDB, Cassandra, HBase, Columnar databases, Data Appliances – then it’s time for a quick primer.

NoSQL stands for Not Only SQL. NoSQL databases do not use the popular SQL (Structured Query Language) to create tables and insert, delete or update data. Many NoSQL deployments handle data that simply can’t be handled well by a relational database, such as sparse data, text and other forms of unstructured content. Unstructured content includes social media/networks, Internet text and documents, call detail records, photography and video archives, and web logs. Industry-specific unstructured data includes RFID, large-scale eCommerce catalogs, sensor networks; astronomy, atmospheric science, genomics, biogeochemical, biological and other complex and/or interdisciplinary scientific research; military surveillance; and medical records.

Cassandra was developed by Facebook and open sourced in 2008. Cassandra is influenced by the Google BigTable model but also uses concepts from Amazon’s Dynamo distributed key-value store. Eventually, Cassandra became an Apache project; it falls under the NoSQL category described above. Cassandra is used by Facebook, Digg and Twitter.

HBase – an open-source, column-oriented NoSQL database modeled on Google’s BigTable. HBase is an Apache project and part of the Hadoop ecosystem. See this presentation on how Facebook uses HBase in production.
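
To make the BigTable-style data model concrete, here is a minimal sketch against the standard HBase Java client; the table name (webtable) and column family (contents) are hypothetical, echoing the BigTable paper’s running example. Each cell is addressed by a row key, a column family and a column qualifier:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("webtable"))) {

            // Write one cell: row key + column family + qualifier, BigTable-style.
            Put put = new Put(Bytes.toBytes("com.example/index.html"));
            put.addColumn(Bytes.toBytes("contents"), Bytes.toBytes("html"),
                          Bytes.toBytes("<html>...</html>"));
            table.put(put);

            // Read the cell back by row key.
            Result result = table.get(new Get(Bytes.toBytes("com.example/index.html")));
            byte[] html = result.getValue(Bytes.toBytes("contents"), Bytes.toBytes("html"));
            System.out.println(Bytes.toString(html));
        }
    }
}
```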

Hadoop – Apache Hadoop is a popular open-source software framework for distributed/grid-computing environments that enables applications to analyze very large data sets. Relational database systems are good at queries and data retrieval but struggle to ingest new data at high rates. Hadoop and other tools get around this and allow data ingestion at incredibly fast rates.

Hadoop, built initially by Doug Cutting while he was at Yahoo!, first became prominent in unstructured data management and cloud computing.

Hadoop is designed to process terabytes and even petabytes of unstructured and structured data. It breaks large workloads into smaller data blocks that are distributed across a cluster of commodity hardware for faster processing. But Hadoop requires additional tools such as Pig or Hive to write SQL-like queries that retrieve the data.
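
As a sketch of what those SQL-like queries look like from application code, here is a minimal Java example using Hive’s JDBC driver; the connection URL, credentials and the weblogs table are hypothetical, and the exact driver class name varies by Hive version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuerySketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // HiveServer2 JDBC driver

        // HiveQL reads like SQL, but Hive compiles it into MapReduce jobs
        // that run over files stored in HDFS.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT page, COUNT(*) AS hits FROM weblogs GROUP BY page")) {
            while (rs.next()) {
                System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
            }
        }
    }
}
```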

Technically, Hadoop, a Java-based framework, consists of two elements: reliable, low-cost storage of very large data sets using the Hadoop Distributed File System (HDFS), and a high-performance parallel/distributed data-processing framework called MapReduce.

HDFS is self-healing, high-bandwidth clustered storage. MapReduce is essentially fault-tolerant distributed computing.

Hadoop builds on the MapReduce algorithm. MapReduce, first introduced by Google in 2004, consists of two functions: Map and Reduce. Map takes a large computational problem, breaks it into smaller subproblems and distributes those to worker nodes, which solve the subproblems and pass the answers back to the master node. The Reduce function consolidates the answers from the Map phase to produce the final output. Search algorithms (in the public cloud) are often designed in this fashion.
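
The canonical example is word count. The sketch below, written against Hadoop’s Java MapReduce API, is a minimal (and slightly simplified) version of the classic tutorial program: the Map function emits a (word, 1) pair for every token in its input split, and the Reduce function sums the counts that arrive for each word:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: runs on worker nodes, each over one block of input;
    // emits a (word, 1) pair per token.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: consolidates all counts emitted for the same word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-aggregates on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```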

Hadoop runs on a collection/cluster of commodity, shared-nothing x86 servers. You can add or remove servers in a Hadoop cluster (from 50 or 100 up to 2,000+ nodes) at will; the system detects and compensates for hardware or system problems on any server. Hadoop is self-healing: it can deliver data, and run large-scale, high-performance batch-processing jobs, in spite of system changes or failures.

Columnar databases. Examples include SAP/Sybase IQ, HP/Vertica and ParAccel. Because analytical queries typically read only a few columns from very wide tables, a column-oriented layout scans far less data (and compresses far better) than a row-oriented database, which is why columnar engines excel at analytical querying.
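
A rough sketch of the intuition in plain Java follows. Real columnar engines layer compression, late materialization and vectorized execution on top of this storage layout, but the core idea is simply that the values of one column sit contiguously in memory or on disk:

```java
// Row-oriented layout: each sale is one object, so summing revenue also
// drags ids and regions through memory and cache.
class Sale {
    int id;
    String region;
    double revenue;
}

// Column-oriented layout: each attribute is a contiguous array, so an
// aggregate like SUM(revenue) scans one cache-friendly array and never
// touches the other columns.
class SalesColumns {
    int[] ids;
    String[] regions;
    double[] revenues;

    double totalRevenue() {
        double sum = 0;
        for (double r : revenues) {
            sum += r;
        }
        return sum;
    }
}
```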

Data Appliances

Purpose built solutions like Teradata, IBM/Netezza, EMC/Greenplum, SAP HANA  (High-Performance Analytic Appliance) and Oracle Exadata are forming a new category.

Data appliances are one of the fastest-growing categories in Big Data. Data appliances combine database, processing and storage in an integrated system optimized for analytical processing and designed for flexible growth. The architecture is based on the following core principles:

  • Processing close to the data source
  • Appliance simplicity (ease of procurement, limited consulting)
  • Massively parallel architecture
  • Platform for advanced analytics
  • Flexible configurations and extreme scalability

A number of vendors are going down the path of appliance and quasi-appliance offerings, which have some preconfiguration of hardware and software, cloud-supporting deployments, and reference configurations.

A leading example is the Oracle Exadata Database Machine. Exadata is Oracle’s fast-selling appliance that bundles its database software and hardware for optimized performance. Oracle Exadata deployments mostly involve replacing data warehousing solutions for much better performance, via compression and by dropping overhead like old indexes and partitions. See Oracle’s Analytics-as-a-Service strategy for a more in-depth discussion.

SAP HANA is Exadata’s rough equivalent and debuted at Sapphire 2011. HANA is based on a fundamental computer-science principle: when operating on large data sets with fast response-time requirements, do not move data from disk unless absolutely necessary. Separate OLAP (BI data) from OLTP (transaction data), keep the OLAP data in memory, and the dashboards, reporting and analytics speed up.

MongoDB

MongoDB is an open-source database combining scalability, performance and ease of use with traditional relational database features such as dynamic queries and indexes. It has become the leading NoSQL database choice, with downloads exceeding 100,000 per month. Thousands of customers, including Fortune 500 enterprises and leading Web 2.0 companies, are developing large-scale applications and performing real-time “Big Data” analytics with MongoDB. For more information, visit www.mongodb.org or www.10gen.com; 10gen develops MongoDB and offers production support, training and consulting for the database.
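
As a minimal sketch of those dynamic queries and indexes, here is an example using the official MongoDB Java driver; the database name, collection name and field names are hypothetical:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class MongoSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> events =
                    client.getDatabase("analytics").getCollection("events");

            // Schemaless insert: no table definition is needed up front.
            events.insertOne(new Document("user", "u42")
                    .append("action", "click")
                    .append("ts", System.currentTimeMillis()));

            // A secondary index plus an ad-hoc (dynamic) query,
            // much as you would in a relational database.
            events.createIndex(Indexes.ascending("user"));
            for (Document d : events.find(Filters.eq("user", "u42"))) {
                System.out.println(d.toJson());
            }
        }
    }
}
```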

There are many new database directions appearing on the landscape today. These include nonschematic DBMSs (“NoSQL”), cloud databases, highly distributed databases, small-footprint DBMSs and in-memory databases (IMDB). Their business applications are driven by high performance, low latency and efficiency of deployment. All of these are driven by the premise that insight into data requires more than tabular analysis.

Google’s LevelDB – NoSQL

In May 2011, Google open-sourced a BigTable-inspired key-value database library called LevelDB under a BSD license. It was created by Jeff Dean and Sanjay Ghemawat of the BigTable project at Google; a recent blog post from Google made the project more widely known. LevelDB is available for Unix-based systems, Mac OS X, Windows and Android.

According to the announcement: “LevelDB may be used by a web browser to store a cache of recently accessed web pages, or by an operating system to store the list of installed packages and package dependencies, or by an application to store user preference settings. We designed LevelDB to also be useful as a building block for higher-level storage systems. Upcoming versions of the Chrome browser include an implementation of the IndexedDB HTML5 API that is built on top of LevelDB. Google’s Bigtable manages millions of tablets where the contents of a particular tablet are represented by a precursor to LevelDB.”

Big Data Use Cases

E-tailing – E-Commerce – Online Retailing

  • Recommendation engines — increase average order size by recommending complementary products based on predictive analysis for cross-selling.
  • Cross-channel analytics — sales attribution, average order value, lifetime value (e.g., how many in-store purchases resulted from a particular recommendation, advertisement or promotion).
  • Event analytics — what series of steps (golden path) led to a desired outcome (e.g., purchase, registration).

Retail/Consumer Products

  • Merchandizing and market basket analysis.
  • Campaign management and customer loyalty programs.
  • Supply-chain management and analytics.
  • Event- and behavior-based targeting.
  • Market and consumer segmentations.

Financial Services

  • Compliance and regulatory reporting.
  • Risk analysis and management.
  • Fraud detection and security analytics.
  • CRM and customer loyalty programs.
  • Credit risk, scoring and analysis.
  • High-speed arbitrage trading.
  • Trade surveillance.
  • Abnormal trading-pattern analysis.

Web & Digital Media Services

  • Large-scale clickstream analytics.
  • Ad targeting, analysis, forecasting and optimization.
  • Abuse and click-fraud prevention.
  • Social graph analysis and profile segmentation.
  • Campaign management and loyalty programs.

Government

  • Fraud detection and cybersecurity.
  • Compliance and regulatory analysis.
  • Energy consumption and carbon footprint management.

New Applications

  • Sentiment analytics.
  • Mashups: mobile user location + precision targeting.
  • Machine-generated data, the exhaust fumes of the Web.

Health & Life Sciences

  • Health insurance fraud detection.
  • Campaign and sales program optimization.
  • Brand management.
  • Patient care quality and program analysis.
  • Supply-chain management.
  • Drug discovery and development analysis.

Telecommunications

  • Revenue assurance and price optimization.
  • Customer churn prevention.
  • Campaign management and customer loyalty.
  • Call Detail Record (CDR) analysis.
  • Network performance and optimization.
  • Mobile-user location analysis.

Smart meters in the utilities industry. The rollout of smart meters as part of Smart Grid adoption by utilities everywhere has resulted in a deluge of data flowing at unprecedented levels. Most utilities are ill-prepared to analyze that data once the meters are turned on.

Big Data Startup and Existing Companies to Watch

  • Emerging players: Cloudera, DataStax, Northscale, Splunk, Palantir, Factual, Kognitio, Datameer, TellApart, ParAccel, Hortonworks
  • Established players: EMC Greenplum, HP Vertica, IBM/Netezza, Microsoft, Oracle Exadata, SAP HANA, Teradata (which acquired Aster Data)

All these firms are going after two distinct opportunities:

  • Big Data in the Public Cloud
  • Big Data in the Private Cloud

As I speak to customers, it is becoming clearer to me that there is going to be a growing push toward an elastic/adaptive infrastructure for data warehousing and analytics. With the increasing focus on mobility and faster decision making, the business is going to push for this faster than corporate IT can react.

Bottomline

What’s next? That’s a simple question to ask, but it’s not so simple to answer.

Big Data is an umbrella phrase for a set of technologies, skills, methods and processes, some new, some not, for gaining insight from mountains of data. It is essentially characterized by the combination of the three V’s: volume, velocity and variety.

I am seeing the following trends:

  • The enterprise IT roadmap is going to divide into a Compute Cloud and Data Clouds.
  • The Compute Cloud (private/public/hybrid) is being driven from the virtualization/resource side.
  • The Data Cloud (in-memory, data appliances) is being driven from the mobility and decision-making side.
  • A prediction from some circles: half of the world’s data will be stored in Apache Hadoop within five years.
  • The opportunity startups like Cloudera are pursuing: grow the Apache Hadoop ecosystem by making Apache Hadoop easier to consume, and profit by providing training, support and certification.

Notes and References

  1. History of Hadoop till 2009

Other Sources:

Also check out these articles for more coverage:

  1. Big data’s potential for businesses: Financial Times
  2. Hadoop World 2010: conference presentations
  3. The Structure Big Data conference: GigaOM conferences
  4. The Vendor Landscape of BI and Analytics: a list of Big Data vendors
  5. McKinsey Big Data Report: http://www.mckinsey.com/mgi/publications/big_data/index.asp