
What’s in store for Big Data in 2017: Tableau predicts


2016 was a landmark year for big data with more organizations storing, processing, and extracting value from data of all forms and sizes. In fact, a few months ago, IDC revealed that 53% of Asia Pacific (excluding Japan) organizations consider big data and analytics crucial for their business.

The same study also found that enterprises in the region are in the early stages of big data analytics adoption, and that the growing volume of data, along with mobility and the Internet of Things (IoT), will continue to drive this shift.

In 2017, systems that support large volumes of data will continue to proliferate. The market will demand platforms that help data custodians govern and secure big data, while empowering end users to analyse that data more easily than ever before. These systems will mature to operate well inside enterprise IT systems and standards. Furthermore, demand for big data analytics skills will continue to grow as the discipline becomes more central to enterprises across industries.

Each year at Tableau, we start a conversation about what’s happening in the industry, and that discussion drives our list of the top big data trends for the following year. In Singapore specifically, the Economic Development Board (EDB) has predicted that the data analytics sector will add $1 billion in value to the economy by 2017.

These are our predictions for 2017:

  1. A smarter everything, with big data skills

In 2016, the Singapore government spoke about the growth of big data analytics in the nation and the demand for employees with such skills.

As countries, cities, and communities continue to get smarter, the need for skilled talent in the big data analytics space will only grow. Employers and governments will continue to focus on this, preparing the current and future workforce for jobs in the field.

In Singapore itself, we have already seen the government launch several incentives to encourage the workforce to develop these skills, while more academic institutions offer relevant courses to their students. This will continue to take centre stage in 2017.

  2. Variety drives big-data investments

Gartner defines big data in terms of the three Vs: high-volume, high-velocity, and high-variety information assets. While all three Vs are growing, variety has become the single biggest driver of big-data investments, as seen in the results of a recent survey by New Vantage Partners. This trend will continue as firms seek to integrate more sources and focus on the “long tail” of big data.

Data formats are multiplying and connectors are becoming crucial. In 2017, analytics platforms will be evaluated based on their ability to provide live direct connectivity to these disparate sources.
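To make the connector idea concrete, here is a minimal Python sketch (the file names and columns are hypothetical, and pandas with a Parquet engine such as pyarrow is assumed) that reads the same logical records from three different formats and unions them for analysis:

```python
import pandas as pd

# Hypothetical scenario: "orders" records arriving in three formats.
orders_csv = pd.read_csv("orders_2016.csv")              # legacy export
orders_json = pd.read_json("orders_api.json")            # web-service feed
orders_parquet = pd.read_parquet("orders_lake.parquet")  # data-lake extract

# A thin "connector" layer normalises column names so the sources line up.
for frame in (orders_csv, orders_json, orders_parquet):
    frame.columns = [c.strip().lower() for c in frame.columns]

# Union the disparate sources into one frame for analysis.
orders = pd.concat([orders_csv, orders_json, orders_parquet],
                   ignore_index=True)
print(orders.groupby("region")["amount"].sum())
```

The normalisation step is the part a good connector hides from the end user; the more formats a platform can line up this way, the less glue code analysts have to write themselves.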

  3. The convergence of IoT, cloud, and big data creates new opportunities

It seems that everything in 2017 will have a sensor that sends information back to the mothership. In smart cities and nations like Singapore, analysts expect IoT products to feature ever more prominently. A year ago, Frost & Sullivan also projected that the number of connected devices will reach 50 billion units globally within five years – roughly seven connected devices for every person on the planet.

Across the region, IoT is generating massive volumes of structured and unstructured data, and an increasing share of this data is being deployed on cloud services. The data is often heterogeneous and lives across multiple relational and non-relational systems, from Hadoop clusters to NoSQL databases. While innovations in storage and managed services have sped up the capture process, accessing and understanding the data itself still pose a significant last-mile challenge.

As a result, demand is growing for analytical tools that seamlessly connect to and combine a wide variety of cloud-hosted data sources. Such tools enable businesses to explore and visualize any type of data stored anywhere, helping them discover hidden opportunity in their IoT investment.
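As a hedged illustration of that last-mile join (the hosts, database names, and fields below are assumptions, not references to any particular deployment), the following Python sketch pulls sensor readings from a cloud-hosted NoSQL store and enriches them with device metadata from a relational database:

```python
import pandas as pd
from pymongo import MongoClient
from sqlalchemy import create_engine

# Hypothetical endpoints: readings land in a cloud NoSQL store,
# while device metadata lives in a managed relational database.
mongo = MongoClient("mongodb://analytics.example.com:27017")
readings = pd.DataFrame(list(
    mongo.iot.readings.find({}, {"_id": 0, "device_id": 1,
                                 "temperature": 1, "ts": 1})
))

engine = create_engine("postgresql://user:pass@warehouse.example.com/meta")
devices = pd.read_sql("SELECT device_id, site, model FROM devices", engine)

# The "last mile": combine heterogeneous sources into one analysable view.
enriched = readings.merge(devices, on="device_id", how="left")
print(enriched.groupby("site")["temperature"].mean())
```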

  4. Self-service data prep becomes mainstream

Making Hadoop data accessible to business users is one of the biggest challenges of our time. The rise of self-service analytics platforms has improved this journey. At the beginning of 2016, IDC predicted that, through 2020, spending on self-service visual discovery and data preparation tools would grow more than twice as fast as spending on traditional IT-controlled tools with similar functionality.

Now, business users want to further reduce the time and complexity of preparing data for analysis, which is especially important when dealing with a variety of data types and formats.

Agile self-service data-prep tools not only allow Hadoop data to be prepped at the source but also make the data available as snapshots for faster and easier exploration.
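A minimal PySpark sketch of that “prep at the source, then snapshot” pattern (the paths and field names are assumptions) might look like this:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("selfservice-prep").getOrCreate()

# Hypothetical raw events sitting in Hadoop as newline-delimited JSON.
raw = spark.read.json("hdfs:///data/raw/events/")

# Light-touch prep at the source: type the timestamp, drop malformed
# rows, and keep only the columns analysts actually explore.
clean = (raw
         .withColumn("event_time", F.to_timestamp("event_time"))
         .dropna(subset=["user_id", "event_time"])
         .select("user_id", "event_time", "event_type", "value"))

# Persist a columnar snapshot that visual tools can scan quickly.
clean.write.mode("overwrite").parquet("hdfs:///data/snapshots/events/")
```

The columnar Parquet snapshot is what makes the downstream exploration fast: visual tools scan only the columns a view actually needs instead of re-parsing raw JSON on every query.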

  5. Big data grows up: Hadoop adds to enterprise standards

We’re seeing a growing trend of Hadoop becoming a core part of the enterprise IT landscape. And in 2017, we’ll see more investments in the security and governance components surrounding enterprise systems. Apache Sentry provides a system for enforcing fine-grained, role-based authorization to data and metadata stored on a Hadoop cluster. Apache Atlas, created as part of the data governance initiative, empowers organisations to apply consistent data classification across the data ecosystem. Apache Ranger provides centralized security administration for Hadoop.

These capabilities are moving to the forefront of emerging big-data technologies, thereby eliminating yet another barrier to enterprise adoption.
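To make Sentry’s role-based model concrete, here is a sketch of the kind of grants an administrator issues through Hive on a Sentry-secured cluster, shown via the PyHive client (the host, role, and group names are hypothetical):

```python
from pyhive import hive

# Hypothetical HiveServer2 endpoint on a Sentry-secured cluster.
conn = hive.connect(host="hive.example.com", port=10000, username="admin")
cursor = conn.cursor()

# Sentry's model: privileges are granted to roles, and roles to groups,
# so access control follows the organisation chart rather than individuals.
cursor.execute("CREATE ROLE sales_analyst")
cursor.execute("GRANT SELECT ON DATABASE sales TO ROLE sales_analyst")
cursor.execute("GRANT ROLE sales_analyst TO GROUP analysts")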

  6. Rise of metadata catalogs finds analysis-worthy big data

For a long time, companies threw away data because they had too much to process. With Hadoop, they can process that data at scale, but it generally isn’t organised in a way that makes it easy to find.

Metadata catalogs can help users discover and understand data worth analysing with self-service tools. Companies like Alation and Waterline are filling this gap, using machine learning to automate the work of finding data in Hadoop. They catalog files using tags, uncover relationships between data assets, and even provide query suggestions via searchable UIs. This helps both data consumers and data stewards reduce the time it takes to find, trust, and accurately query the data. In 2017, we’ll see more awareness of and demand for self-service discovery, which will grow as a natural extension of self-service analytics.
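The catalog idea itself is easy to sketch. The toy Python example below (our own illustration, not Alation’s or Waterline’s actual product API) infers column-based tags from data files so they can be searched later:

```python
import os
import pandas as pd

def catalog_directory(root):
    """Build a tiny metadata catalog: one entry per CSV file."""
    entries = []
    for name in os.listdir(root):
        if not name.endswith(".csv"):
            continue
        path = os.path.join(root, name)
        sample = pd.read_csv(path, nrows=100)  # peek, don't load it all
        entries.append({
            "path": path,
            "columns": list(sample.columns),
            # Crude "tags": column names double as searchable keywords.
            "tags": {c.lower() for c in sample.columns},
        })
    return entries

def search(catalog, keyword):
    """Return files whose inferred tags match a keyword."""
    return [e["path"] for e in catalog if keyword.lower() in e["tags"]]

catalog = catalog_directory("/data/landing")  # hypothetical landing zone
print(search(catalog, "customer_id"))
```

Real catalog products go much further – profiling values, learning joins between assets, and ranking results – but the core value is the same: an index over data about the data.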

Jonah Kim
Jonah Kim is Product Manager, APAC, Tableau. He recently relocated to Singapore to manage Tableau's APAC Product Consulting team. He joined Tableau over three years ago as a Product Consultant supporting the US Commercial team from Seattle. He had the opportunity to support the EMEA team out of London just prior to returning to the US and becoming the US Enterprise Product Consulting manager. He holds a B.A. in English from University of California, Irvine and a J.D. from University of California, Hastings.