This year’s Big Data Innovation Summit 2017 in San Francisco brought together leading data experts who shared use cases, best practices, the challenges they faced and the solutions they developed in response.
With the tagline “Cultivate the Data, Yield the Profit,” the summit tackled weighty themes, helping attendees avoid the costly mistakes that stem from inaccurate data, adopt best practices for harvesting high-potential data and future-proof their current models, tools and predictive capabilities, to name a few. Top discussion areas included:
Market research and advisory firm Ovum estimates the big data market will grow from $1.7 billion in 2016 to $9.4 billion by 2020. As the market grows, enterprise challenges will shift, skills requirements will change and the vendor landscape will morph. As the biggest disruptor in big data analytics, machine learning is increasingly used to create a true 360° view of anything (customers, employees, products and suppliers). However, it requires a reliable data foundation that brings together data from all internal, external and third-party sources. This blending requires careful matching and merging of the data. Machine learning within modern data management platforms can help derive the matching rules automatically from the data and from active-learning training by data stewards. With a single click, stewards can show the machine learning system how to treat the data and determine new match rules, and the system adapts to the customer data and user behavior.
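To make the matching-and-merging idea concrete, here is a minimal sketch of a rule-based record match in Python. It is illustrative only: the field names, thresholds and the use of `difflib` for string similarity are assumptions, not the mechanism of any particular data management platform (real platforms would learn such rules from steward feedback rather than hard-code them).

```python
# Illustrative sketch: matching two customer records from different sources.
# Field names and thresholds are hypothetical, chosen for the example.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in the range 0.0-1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_match(rec_a: dict, rec_b: dict, name_threshold: float = 0.85) -> bool:
    """Example match rule: identical email, or highly similar names.
    In an ML-driven platform, rules like this would be derived from
    steward-labeled examples instead of being written by hand."""
    if rec_a.get("email") and rec_a["email"] == rec_b["email"]:
        return True
    return similarity(rec_a["name"], rec_b["name"]) >= name_threshold

# Two records for the same customer, arriving from different systems:
crm_record = {"name": "Jon Smith", "email": "jon@example.com"}
web_record = {"name": "Jonathan Smith", "email": "jon@example.com"}
print(is_match(crm_record, web_record))  # exact email match -> True
```

Once a pair is judged a match, the records would be merged into a single master profile; the point of the ML layer described above is to tune rules and thresholds like these automatically as stewards confirm or reject candidate matches.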
Organizations that have decided to build data lakes are advised to pay attention to data governance, quality and security to keep them from becoming data swamps. Even if enterprises use sophisticated tools to examine and interpret patterns in their structured and unstructured data for predictive analytics and machine learning, without proper metadata and quality assurance the data in lakes becomes unusable over time. Without correlation back to accurate master profiles and operations, there is no guarantee that the answers are relevant or reliable. With existing big data projects recognizing the need for a reliable data foundation, and new projects being combined into a holistic data management strategy, data lakes may finally fulfill their promise.
The use of predictive models and big data is transforming how we reach complex decisions such as consumer credit risk, personalized retail marketing and insurance pricing. Effective predictive modeling helps organizations figure out where to look for problems, how best to invest scarce resources and how to anticipate needs, instead of constantly playing catch-up. The consumer intelligence a predictive model produces is only as good as the quality of the data collected for analytical customer relationship management. Starting from a reliable data foundation, business teams benefit from recommended actions, allowing them to confidently apply that intelligence to personalized customer engagement.
Successful businesses know that data is the new currency and the lifeblood of the entire organization. It should enlighten every function of the business, including customer experience, operations, marketing, sales, service and finance. In the age of the customer, everyone within the organization should be using a personalized contextual source of truth (not just a single/golden source) of information across all of the operational applications and channels needed to support a customer’s journey to deliver great customer experiences. Therefore, a data management strategy is critical for providing business functions with quick and complete access to the data and analytics that they need, both now and in the future.