Ankur Gupta, Sr. Product Marketing Manager, Reltio
Value from pharma should be measured in terms of clinical outcomes, patient satisfaction, and cost reduction. Using data, pharma companies can enhance value for patients along the entire lifecycle of a drug, from drug discovery to commercialization to end of exclusivity.
From the perspective of business strategy, value delivery can be seen as a three-step process as illustrated by David Ormesher, CEO of closerlook, in his PharmExec.com post.
Value Creation (discovery)
Value Capture (commercialization)
Value Extraction (end of exclusivity)
It is important to capture unique customer insight to inform drug innovation. The drug should be relevant (to an urgent disease burden) as well as differentiated (relative to alternative therapies). These two factors will largely determine market access, provider endorsement, and patient acceptance for a new drug. However, departmental silos between medical affairs and the commercial side of the business, along with a lack of access to quality data, lead to an incomplete understanding of the competition and the market.
A Self-Learning Data Platform goes beyond a traditional master data management (MDM) offering and brings together patient, provider, payer, and plan data from internal, third-party, and public sources to cleanse, match, merge, un-merge, and relate in real time. The platform's multi-domain data organization capability supports deeper analysis to better understand the needs of patients, providers, and payers, and the relationships among these players. A Self-Learning Data Platform breaks down silos among medical affairs, marketing, business intelligence, and manufacturing, and helps develop a common understanding of customer data and market insight across all departments.
Research indicates that 81% of future drug sales performance is determined by actions taken during clinical development and the early commercialization phase. This is even more critical for a pre-commercial pharma planning to bring its first drug to market. Early adoption of a Self-Learning Data Platform helps a pre-commercial pharma build a future-proof commercial infrastructure and put in place the business processes to launch its first drug with safety, efficacy, and the desired formulary placement. Read the pre-commercial pharma success stories about how they launched their first drugs with the help of a Self-Learning Data Platform.
A new product's commercial performance during the first six months after FDA approval is often considered a strong indicator of how the product will do over the course of its patent life. During the Value Capture (commercialization) phase, the purpose of data is to build trust and respect via data-driven personalization and engagement. However, pharma companies are often unable to recognize prescribers and patients consistently across multiple channels and touchpoints, and they fail to increase content speed to market in their customers' preferred channels. This leads to a negative Net Promoter Score (NPS), increased defection to competitors, and loss of revenue and market share.
The more you know about your customers – the physicians who can write prescriptions for the product – and what they care about, the more you are able to build an effective campaign around a new product. What you need is an out-of-the-box, data-driven affiliation management application, with built-in MDM, for managing all relationships within and across HCOs and HCPs to support commercial operations, identify the right key opinion leaders (KOLs), and understand their influence.
A Self-Learning Data Platform helps you organize a launch as a micro-battle (see the infographic "Make Your Drug Launch Truly Take Off," Bain Insights, September 2017), gather continuous front-line feedback from sales reps before, during, and after the launch, and make rapid adjustments to the launch strategy as needed. It helps you make quick decisions on messaging, targeting, and marketing investments. Such a platform powers reliable advanced analytics by enabling master data profiles and graph relationships to be seamlessly combined with real-time interactions and analyzed in Spark. For example, when a new drug is launched, it helps track sales performance against projections so that you can adjust strategies whenever needed.
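To make that concrete, here is a minimal PySpark sketch of the kind of launch-tracking analysis described above. The dataset paths, column names, and the 80%-of-plan threshold are illustrative assumptions, not Reltio APIs or a prescribed implementation.

# Minimal PySpark sketch: join mastered HCP profiles with prescription
# interactions and compare weekly actuals against launch projections.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("launch_tracking").getOrCreate()

profiles = spark.read.parquet("s3://example/master_profiles/")       # hcp_id, specialty, region
interactions = spark.read.parquet("s3://example/interactions/")      # hcp_id, week, rx_count
projections = spark.read.parquet("s3://example/launch_projections/") # region, week, projected_rx

# Aggregate actual prescriptions by region and week using the mastered profiles.
actuals = (
    interactions.join(profiles, "hcp_id")
    .groupBy("region", "week")
    .agg(F.sum("rx_count").alias("actual_rx"))
)

# Compare against plan; regions running well below plan are candidates for a strategy adjustment.
variance = (
    actuals.join(projections, ["region", "week"])
    .withColumn("pct_of_plan", F.col("actual_rx") / F.col("projected_rx"))
)
variance.filter(F.col("pct_of_plan") < 0.8).show()

In practice the mastered profiles and relationships would come from the platform itself; the point is simply that reliable, joined data makes variance-to-plan questions straightforward to answer in Spark.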
Read the success story of a French multinational pharmaceutical company that built Customer 360 on top of a Self-learning Data Platform to support their account-centric field operations and personalized engagement.
At the point when a drug loses its patent protection, its price typically drops quickly as generic competitors enter the market. During this phase, there is often enormous pricing pressure from competitive products and health insurers. In addition to these external pressures, there is also internal competition for attention and resources, usually from a promising new product.
The business strategy during Value Extraction is to increase efficiency via operational excellence. The main cost now is sales and marketing. This is where digital can play a very strategic role. Digital sales and marketing through non-personal promotion can become an effective substitute for sales rep promotion. By replacing expensive personnel costs with lower cost digital channels, we can reduce overhead costs but still maintain market share.
Read the success story of one of the oldest and largest global pharma companies, which consolidated customer profiles across all business functions to improve customer experience across all digital touchpoints and better engage high-value customers.
Successful pharma companies use data as a competitive weapon to develop new sources of differentiation, focus on building superior customer experiences and treat drug launches as a micro-battle. How did your last launch perform vs. expectations, and what were the reasons for under-performance or over-performance? Which interactions matter most for your target physicians, and do you provide a superior customer experience? What are the three largest internal challenges your launch team faces, and what would it take to eliminate them?
Ankur Gupta, Sr. Product Marketing Manager, Reltio
Given the vast volume and variety of data that CPG companies manage, ensuring the accuracy and reliability of data is critical. All digital transformation and personalization efforts will fail if the data underneath is of poor quality, siloed, or delayed. Using machine learning within a modern data management platform not only helps determine and improve data quality but also enriches the data with relevant insights and provides intelligent recommended actions for data quality and operational improvements. For example, if you are running a campaign for a major product launch, you can exclude consumer profiles with low data quality (DQ) scores.
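As a rough illustration of that kind of DQ-based filtering, here is a minimal Python sketch; the profile structure and the 0.7 threshold are illustrative assumptions rather than a product API.

# Minimal sketch: exclude consumer profiles with low data-quality (DQ) scores
# from a launch campaign audience.
from dataclasses import dataclass

@dataclass
class ConsumerProfile:
    consumer_id: str
    email: str
    dq_score: float  # e.g. produced by an ML model scoring completeness/accuracy

def campaign_audience(profiles, min_dq=0.7):
    """Keep only profiles reliable enough to target."""
    return [p for p in profiles if p.dq_score >= min_dq]

profiles = [
    ConsumerProfile("c1", "ana@example.com", 0.92),
    ConsumerProfile("c2", "unknown@example.com", 0.41),  # incomplete or stale record
]
print([p.consumer_id for p in campaign_audience(profiles)])  # ['c1']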
Legacy tools built on relational databases are too rigid and inflexible, making it difficult to support the dynamic needs of a modern business. For example, adding new data sources or attributes to customer profiles can result in costly data migration projects. Another challenge is the inability to manage the relationships between various data entities, such as people, products, organizations, and places. Modern data-driven CPG brands prevent big data indigestion by using a multi-model, polyglot storage strategy to store and efficiently manage the right data in the right storage. This helps them deliver faster and higher business value from their varied data assets.
With "single domain" Master Data Management (MDM), each data entity type has its own unique data store and business logic. A Modern Data Management Platform, on the other hand, manages multi-domain (customer, product, store, supplier) master data along with transaction and interaction data, as well as third-party, public, and social data. Its graph technology makes it easy to describe and visualize complex, many-to-many relationships among customers, products, stores, and locations for faster and more reliable decision-making. For example, with the help of a graph, CPG brands can rapidly traverse links between consumers, products, purchases, and ratings to make personalized recommendations. They can also tell whether the visitors and shoppers browsing their website are from the same household.
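To illustrate the idea (not the product's implementation), here is a minimal sketch using the networkx library with hypothetical consumer, product, and household nodes.

# Minimal graph sketch: traverse consumer-product links to suggest products,
# and check whether two site visitors share a household.
import networkx as nx

g = nx.Graph()
g.add_edge("consumer:alice", "product:espresso_pods", relation="purchased")
g.add_edge("consumer:bob", "product:espresso_pods", relation="rated_5_stars")
g.add_edge("consumer:bob", "product:milk_frother", relation="purchased")
g.add_edge("consumer:alice", "household:42", relation="member_of")
g.add_edge("consumer:bob", "household:42", relation="member_of")

def recommend(graph, consumer):
    """Products bought or rated by consumers who share a product with this one."""
    recs = set()
    for product in graph.neighbors(consumer):
        if not product.startswith("product:"):
            continue
        for other in graph.neighbors(product):
            if other.startswith("consumer:") and other != consumer:
                recs.update(p for p in graph.neighbors(other) if p.startswith("product:"))
    return recs - set(graph.neighbors(consumer))

def same_household(graph, a, b):
    shared = set(graph.neighbors(a)) & set(graph.neighbors(b))
    return any(n.startswith("household:") for n in shared)

print(recommend(g, "consumer:alice"))                       # {'product:milk_frother'}
print(same_household(g, "consumer:alice", "consumer:bob"))  # True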
“Servitization” of products is commonly seen in consumer categories such as music (iTunes and Spotify) and books (Amazon Kindle) but also in business services such as Xerox moving from photocopiers to document services. Historically, CPG companies have been resistant to the move from products to services. Their relationship with their consumers has often been mediated via retailers. Modern data-driven CPG brands often bypass retailers and sell directly to customers (DTC). For example, Dollar Shave Club is offering a monthly subscription to deliver razors and other personal grooming products by mail. This gives them the opportunity to engage directly with their customers, to collect interaction data, and to expand their digital footprint.
Data is an enabler of innovation. To keep up with the rapid pace of digital transformation, CPG brands need to develop a culture of collaboration and pursue intra and extra-industry partnerships. They need to recognize that many new entrants are not simply additional competitors. Instead, they represent possibilities for completely new types of business models that over time will blur traditional distinctions between retailers and manufacturers.
Data-driven CPG companies look at AI through the lens of three business capabilities: automating business processes, gaining insight through data analysis, and engaging with customers and employees. They constantly innovate and disrupt by embracing new technologies to meet the high expectations of consumers. A Modern Data Management Platform coupled with Machine Learning enables contextual information and helps consumer brands answer high-impact business questions such as – Will my customer buy this product or not? Is this review written by a customer or a robot? Which category of products is most interesting to this customer? And so on.
CPG brands will be required to be more transparent about how they use consumer data. New regulations like GDPR and increased oversight have important implications for regulatory compliance, product development, and marketing messages. Moreover, consumers increasingly demand transparency about how companies perform on sustainability and corporate social responsibility, as well as where products are made. A Modern Data Management Platform as a Service (PaaS) helps you create a complete consumer profile with full data lineage, governance, and workflows to continuously manage consumer rights and consents.
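As a rough sketch of what consent management with lineage can look like at the data level, here is a minimal Python example; the field names and the "latest decision wins" rule are illustrative assumptions, not Reltio's data model or legal guidance.

# Minimal sketch of a consumer consent record with lineage, so each profile
# attribute can be traced back to its source system and consent status.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Consent:
    purpose: str          # e.g. "email_marketing", "analytics"
    granted: bool
    source_system: str    # where the consent signal originated (lineage)
    captured_at: datetime

@dataclass
class ConsumerProfile:
    consumer_id: str
    consents: list = field(default_factory=list)

    def record_consent(self, purpose, granted, source_system):
        self.consents.append(
            Consent(purpose, granted, source_system, datetime.now(timezone.utc))
        )

    def may_contact(self, purpose):
        """Latest consent decision wins for a given purpose."""
        relevant = [c for c in self.consents if c.purpose == purpose]
        return relevant[-1].granted if relevant else False

p = ConsumerProfile("c-123")
p.record_consent("email_marketing", True, "web_signup_form")
p.record_consent("email_marketing", False, "preference_center")  # later opt-out
print(p.may_contact("email_marketing"))  # False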
Consumer brands are facing unsteady growth, tightening profit margins, complex regulations, and growing competition from lower-cost private label brands. Adopting these seven habits will help them reverse the digital curse, achieve hyper-personalized customer engagement, and stay ahead of the competition.
Ramon Chen, Chief Product Officer, Reltio
According to Wikipedia:
“Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media. Fake news is written and published with the intent to mislead in order to damage an agency, entity, or person, and/or gain financially or politically, often with sensationalist, exaggerated, or patently false headlines that grab attention.”
“Dirty data, also known as rogue data, is inaccurate, incomplete or erroneous data, especially in a computer system or database. Dirty data can contain such mistakes as spelling or punctuation errors, incorrect data associated with a field, incomplete or outdated data, or even data that has been duplicated in the database. It can be cleaned through a process known as data cleansing.”
While Fake News is well known to the general public due to its wide-reaching impact, arguably even influencing a Presidential election, Dirty Data in the form of unreliable, duplicate, or fraudulent information may have an even larger impact: as much as 3 trillion dollars! Whether those numbers are accurate is debatable. Closer to home for businesses, Experian estimates that, on average, U.S. organizations believe 32 percent of their data is inaccurate.
And that's just the perception and impact of basic data quality (DQ). Even more critical, business decisions are made every day on uncorrelated data that may not be "dirty," but that is missing key information which might have resulted in a better decision and outcome.
So naturally, Dirty Data is a major concern for advanced analytics and machine learning. The Verge's article titled "The biggest headache in machine learning? Cleaning dirty data off the spreadsheets" contains a humorous but no doubt close-to-home quote:
"There's the joke that 80 percent of data science is cleaning the data and 20 percent is complaining about cleaning the data."
Of course, the two are not mutually exclusive; Fake News can be used to promote Dirty Data!
For example, you've just read this blog and perhaps agreed with the data that I sourced from other articles on the web. I readily admit that I did not have time to verify the accuracy of the data references in each story. In fact, the 3 trillion dollar number comes from this Saleshacker.com article, which cites a post that dates back to 2011!
But you can trust me 🙂
What's your most egregious example of Fake News or Dirty Data? Please share your horror stories or funny ones in the comments below.
After all these years, master data management (MDM) has finally emerged from its awkward teenage years as a pimply-faced young adult, not quite sure if it’s ready to take on the world. A few industry analysts have even said that MDM is officially in the “trough of disillusionment,” confirming that while MDM is no longer in diapers, it is not quite mature enough to get a real job or get married.
Having worked in data management for the past 23 years, with most of that time in MDM, I thought I had seen it all.
Traditional 20th century MDM has certainly seen its ups and downs throughout its short history, but what excited me about joining Reltio was the idea of starting with a clean slate and building a 21st century Modern Data Management solution from the ground up. A solution that not only revolutionizes MDM, but goes beyond the basic single version of the truth.
Fortunately, Reltio doesn't have any legacy 20th century pieces and parts to "Frankenstein" together into a next-generation MDM offering. That's a luxury legacy MDM vendors typically cannot afford.
As part of redefining not just the MDM market, but data management in general, Reltio decided to focus on refining one of the key capabilities of MDM–data matching. Although matching algorithms and techniques haven’t changed much over the years, the way these algorithms and techniques are applied could certainly be improved.
A modern approach, including an ongoing emphasis on leveraging machine learning to improve how matching is done, allows companies to be flexible in the early phases of development.
At Reltio, we are about being right faster. Our ability to tune and re-match all of your company's key business entities faster enables your organization to be more agile and accurate, in a way that's a clear departure from today's MDM norm.
One example is being able to fire off all match rules at once, versus the traditional approach of traversing match rules one at a time and stopping once a match is found.
In another example, a life sciences customer of ours defined over 100 match rules with a non-Reltio MDM solution. When they deployed Reltio Cloud, they were able to reduce the number of match rules to just 16. Reltio Cloud is a clear departure from the norm that provides key stakeholders with a modern, agile and simplified approach to data matching.
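To illustrate the difference conceptually, here is a minimal Python sketch that evaluates every match rule against a candidate pair and returns all matches, rather than stopping at the first hit; the rules and record fields are invented for illustration and are not Reltio's actual rule set.

# Minimal sketch: fire every match rule against a candidate pair at once,
# instead of walking rules one at a time and short-circuiting on the first hit.

def exact_npi(a, b):
    return a.get("npi") and a.get("npi") == b.get("npi")

def name_and_zip(a, b):
    return (a.get("last_name", "").lower() == b.get("last_name", "").lower()
            and a.get("zip") == b.get("zip"))

MATCH_RULES = {"exact_npi": exact_npi, "name_and_zip": name_and_zip}

def match_all_rules(a, b):
    """Return the names of every rule that matched, not just the first."""
    return [name for name, rule in MATCH_RULES.items() if rule(a, b)]

rec1 = {"npi": "1234567890", "last_name": "Smith", "zip": "94065"}
rec2 = {"npi": "1234567890", "last_name": "SMITH", "zip": "94065"}
print(match_all_rules(rec1, rec2))  # ['exact_npi', 'name_and_zip']

Seeing every rule that fires, rather than just the first, makes it much easier to spot redundant or overlapping rules, which is how rule sets get consolidated.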
When you distill all of this information down, you'll find that today's traditional MDM solutions suffer from the same fatal flaw: a relational database used to manage and store the data in the match process.
Today's MDM requirements go beyond yesterday's repository of simple "common" master data in the thousands of records, and necessitate a modern solution that is able to integrate millions of transaction and interaction records across multiple systems.
Trying to manage relational database cross-reference tables, joins, intersection tables, and more across newly mastered entities, including millions of transaction and interaction relationships, creates a relational "spaghetti" mess that just won't scale.
In the end, what business users need today is a single place where they can find reliable data and relevant insights that drive recommended actions across their entire enterprise.
Ramon Chen, Chief Product Officer, Reltio
Today, organizations are considering newer approaches to data management to understand their customers better. Evolving from legacy MDM to Modern Data Management helps lines of business and IT teams be right faster by providing reliable data, relevant insights, and intelligent recommended actions. Lines of business get immediate value in the form of data-driven applications built on the foundation of a Commercial Graph. Modern Data Management not only helps manage master data, but also brings in omnichannel transactions and interactions at big data scale to create 360° views of everything. While rethinking your MDM platform and strategy, consider the following:
Rapid Time to Business Value: How fast can IT connect to all data sources to create a reliable data foundation? How fast can you add new attributes or onboard new data sources? How fast can you connect to third-party data providers to enrich your data? And ultimately, how fast can lines of business get data-driven applications and solve problems unmet by traditional applications?
Empowering Lines of Business: Can lines of business get consumer grade, Facebook and LinkedIn-style, data-driven applications quickly? Can they collaborate with each other to make informed decisions or improve data quality?
Easy and Intelligent Recommended Actions: How easy is it to provision master data AND omnichannel transactions and interactions to your advanced analytics platforms, and bring insights back to operational and data-driven applications? How easy is it to maintain the shared data models between MDM and analytics?
Big Data Scalability: Is your data platform a multi-tenant platform as a service with big data scalability and high availability for your enterprise-class applications? Can you blend transactions and interactions with master data for a complete view? Does your data platform provide the elasticity to help you evolve with confidence, at the speed of your business? Are you worried about over- or under-investment?
With Reltio Cloud, business users get a new breed of enterprise data-driven applications, combining both operational and analytical capabilities for a true 360° view of everything, including many-to-many relationships across people, products, places, organizations and activities.
Go beyond your imagination of what MDM is and evolve to Modern Data Management. We have a guide to help you get there. Click here to get your copy today.
This year’s Big Data Innovation Summit 2017 in San Francisco included leading data experts offering use cases, best practices, challenges they faced and the solutions they established in response.
With the tagline "Cultivate the Data, Yield the Profit," the summit tackled weighty themes to help attendees avoid costly mistakes from inaccurate data, adopt best practices for harvesting high-potential data, and future-proof their current models, tools, and predictive capabilities, to name a few. Top discussion areas included:
Data Science:
Market research and advisory firm Ovum estimates the big data market will grow from $1.7 billion in 2016 to $9.4 billion by 2020. As the market grows, enterprise challenges will shift, skills requirements will change, and the vendor landscape will morph. As the biggest disrupter for big data analytics, the use of machine learning is growing to create a true 360° view of anything (customers, employees, products, and suppliers). However, it requires a reliable data foundation, bringing together data from all internal, external, and third-party sources. This blending requires careful matching and merging of the data. Machine learning within modern data management platforms can help derive the matching rules automatically from the data and from active-learning training by data stewards. With a single click, data stewards can show the machine learning system how to treat the data and determine new match rules. The system adapts to the customer data and user behavior.
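Here is a minimal sketch of the active-learning idea using scikit-learn: train a match classifier on steward-labeled record pairs, then surface the most uncertain candidate pairs for the next round of steward review. The features and data are illustrative assumptions, not how any particular platform implements it.

# Minimal active-learning sketch: uncertainty sampling for match-rule training.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row = similarity features for a candidate pair: [name_sim, addr_sim, email_match]
labeled_pairs = np.array([[0.95, 0.90, 1.0],
                          [0.20, 0.10, 0.0],
                          [0.85, 0.40, 1.0],
                          [0.30, 0.80, 0.0]])
labels = np.array([1, 0, 1, 0])  # 1 = steward confirmed match, 0 = not a match

model = LogisticRegression().fit(labeled_pairs, labels)

unlabeled_pairs = np.array([[0.60, 0.55, 0.0],
                            [0.97, 0.92, 1.0],
                            [0.15, 0.20, 0.0]])

# Pairs whose match probability is closest to 0.5 are the most ambiguous,
# so they are the most valuable ones to send to stewards next.
proba = model.predict_proba(unlabeled_pairs)[:, 1]
review_order = np.argsort(np.abs(proba - 0.5))
print("Pairs to send to data stewards first:", review_order.tolist())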
Data Governance:
Organizations that have decided to build data lakes are advised to pay attention to data governance, quality, and security to keep them from becoming data swamps. Even if enterprises use sophisticated tools to examine and interpret patterns for predictive analytics and machine learning from their structured and unstructured data, without proper metadata and quality assurance the data in the lakes becomes unusable over time. The lack of correlation back to accurate master profiles and operations means there are no guarantees that the answers are either relevant or reliable. With existing big data projects recognizing the need for a reliable data foundation, and new projects being combined into a holistic data management strategy, data lakes may finally fulfill their promise.
Predictive Analytics:
The use of predictive models and big data is transforming how we reach complex decisions such as consumer credit risk, personalized retail marketing and insurance pricing. Effective predictive modeling helps organizations figure out where to look for problems, how best to invest scarce resources and how to anticipate needs, instead of constantly playing catch-up. The consumer intelligence from a predictive model is only as good as the quality of the data collected for analytical customer relationship management. Starting with a reliable data foundation, business teams can ultimately benefit from recommended actions that in turn will allow them to confidently leverage the information for personalized customer engagement.
Data Strategy:
Successful businesses know that data is the new currency and the lifeblood of the entire organization. It should enlighten every function of the business, including customer experience, operations, marketing, sales, service and finance. In the age of the customer, everyone within the organization should be using a personalized contextual source of truth (not just a single/golden source) of information across all of the operational applications and channels needed to support a customer’s journey to deliver great customer experiences. Therefore, a data management strategy is critical for providing business functions with quick and complete access to the data and analytics that they need, both now and in the future.
Ramon Chen, Chief Product Officer, Reltio
The big data conundrum is one that bedevils most industries, but none more than life sciences. Because of the high stakes of healthcare, there is a great responsibility to get things right and to pursue continual improvement, ideally with the proper use and analysis of data. In addition, life sciences companies are under more scrutiny and regulation than ever, and they have more data to collect than any other industry: disease states, scientific studies, individual patient info, clinical results, and more. All of that data needs to be turned into actionable information in an increasingly complex digital world.
Undertaking a big data strategy really means that a company is ready to become more "data-driven." Simply put, that means using more sources of data in order to gain relevant insights, make better decisions, and take actions that yield better outcomes. Most successful companies are right more often than not – that's why they are able to thrive and grow their business. However, the competitive and regulatory landscape dictates that companies need to "be right faster," and a strategy that incorporates big data requires a serious look at their data management technologies and practices, and how they can truly leverage new sources of data.
Many companies think big data is a research project and mistakenly create a team tasked with testing and evaluating the latest "free" open source technologies such as Hadoop. It's a cliche, but you can't select a big data technology and then go find a problem to solve. To be successful, you must be aligned to a business problem that needs to be addressed. If that problem has a hard deadline, even better! Nothing spells focus like working towards a milestone and the expectations of frontline business teams that want value immediately. Fortunately, with the right modern data management platform, many business challenges can be solved in weeks, not months or years.
Data governance and stewardship, for example, are a must. Most companies forget the discipline of trusted, secure, and reliable data when they embark on their big data strategy. The refrain that "the type of data we are capturing in big data projects doesn't need governance" is a slippery slope. Big data lakes will turn into big data swamps if rigor, process, and data quality are not applied. Worse, the insights derived from unreliable data are worth less than having no data at all.
At Reltio, we've seen our customers use data from traditional third-party vendors, and also bring together public data sources from CMS.gov, Pubmed, and clinicaltrials.gov, as well as social media data from LinkedIn and Facebook. IoT (internet of things) data, which brings extreme big data volumes at high velocity, has so far been less of a concern for life sciences companies. In healthcare, health monitoring devices such as the Apple Watch will start to deliver information from patients to physicians that may eventually become part of their care. The trick is to bring these capabilities together in a single platform, where data can be correlated and made reliable, and insights can be derived.
For the most part, companies have been "playing" with big data technologies, using Hadoop, NoSQL databases, and data science visualization tools. A lot, and I mean a lot, of money has been spent on pilots and trials. While there have been some successes, many companies are still immature in their use of these technologies. There are many reasons for this, including the IT skills and expertise required to implement new big data tools, and the complexity of integrating data with traditional approaches and applications. Without a singular focus on the desired business outcome, and actual data-driven business applications that are mobile, collaborative, and easy to use by frontline sales, marketing, and compliance teams, companies will continue to see limited success with big data.
Insights gained run the gamut across healthcare and life sciences and include true 360-degree views and inter-relationships between HCPs, HCOs, IDNs, ACOs, MCOs, plans, payers, products, patients, and all of their interactions. There are many macro-level conclusions that can be drawn about overall operating efficiency (in the case of commercial operations), and additional data for clinical trials (in the case of R&D). But ultimately the insights derived are only as valuable as what can be done with them, and that use is relative to the role and business goals of each user.
For all of the data management technology and visualization tools invested in bringing together and processing big data, companies are typically left to their own devices to draw their own conclusions from the insights, and then to act upon them. Data-driven applications that synthesize that information and provide actionable suggestions or recommended actions to frontline business users already occur daily in the consumer world. Take LinkedIn, for example. It brings together vast quantities of data and delivers suggestions to you. LinkedIn suggests jobs that are relevant to you and your experience. It doesn't just say "here's a pool of jobs" and make you go filter and search for the ones that are relevant to you. It understands complex connections and relationships, and shows you the best path to connect to people you don't know. Business teams, such as sales and account managers in life sciences, need similar help in their day-to-day operations. But they are saddled with legacy CRM and process-driven applications that capture data, but do not offer recommendations and suggestions gleaned from processing large amounts of data and relating them together.
In another simple example, a data-driven application for a pharma sales rep should provide a recommended best path to connect with a key influencer in a formulary committee. Or it might guide a marketing professional to the best candidates for key opinion leaders (KOL) for events. As data-driven applications become more mainstream in our everyday lives as consumers, business users are coming to expect the same degree of capabilities in their day-to-day applications.
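A minimal sketch of that "best path" idea, again using networkx with an invented relationship graph; the nodes and edge labels are purely illustrative.

# Minimal sketch: suggest the shortest introduction path from a sales rep's
# existing contacts to a formulary committee influencer.
import networkx as nx

g = nx.Graph()
g.add_edge("rep:jones", "hcp:dr_patel", relation="frequent_calls")
g.add_edge("hcp:dr_patel", "hcp:dr_kim", relation="co_authored_paper")
g.add_edge("hcp:dr_kim", "hcp:dr_lee", relation="same_idn")
g.add_edge("hcp:dr_lee", "committee:formulary_x", relation="member")

path = nx.shortest_path(g, source="rep:jones", target="committee:formulary_x")
print(" -> ".join(path))
# rep:jones -> hcp:dr_patel -> hcp:dr_kim -> hcp:dr_lee -> committee:formulary_x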
Contrary to popular belief, big data is about more than just size. We've all heard about the 3 Vs of volume, velocity, and variety, but one key "V" not discussed often enough is veracity. Simply put, that means data quality. Data that is not cleansed and continuously managed cannot be related together for insights. For people, not seeing data in a shared central pool is often a problem: siloed data, no matter the size, causes issues, and different perspectives of the same customer, product, or organization mean collaboration is not possible. Shared insight is as valuable as insight derived from the sheer volume of big data that is now available. From a process perspective, companies need to manage and secure their new-found big data. Having valuable insights is a competitive advantage, yet many companies simply do not have the compliance and regulatory controls to protect their own data or meet mandated guidelines such as HIPAA.
In conclusion, whether big data in life sciences is a blessing or a curse depends on whether the organization repeats the same mistakes or proceeds down the wrong path to obtain much-desired insight. Mistakes to avoid include:
Not ensuring the data is reliable as a foundation, either ignoring it or making it someone else's responsibility. This is why master data management (MDM) is a siloed billion-dollar industry that hasn't yielded the expected results. Most new data-driven apps have MDM built in.
Using visualization tools and business intelligence to analyze big data to derive one-time high-level macro insights, but not having an integrated strategy or technology to execute on those insights
Forgetting about the end business user. It’s great to get lots of data and process in, but time to value and putting it into the hands of the business user in mobile, easy to use applications is often last on the list, when it should be the first
Gathering all the data they can, just because they can. Relevant data and insights that yield recommended actions don’t mean capturing the entire universe. A data sourcing plan is critical to determining what data is relevant and how you are going to leverage it. It’s okay to start small, then increase to big data volumes. A modern data management platform offers the ability to incrementally add data sources, without having to re-architect and start over.
Not closing the loop or measuring the benefits once you obtain insights, and take action. Continuous monitoring and correlation of insights to action to measure ROI and to allow machine learning systems to use historical context to predict trends is one of the biggest gaps in siloed, disparate tools today. Modern data management platforms provide a complete integrated loop that delivers reliable data, relevant insights and recommended actions that support IT and deliver data-driven applications to business users.
========== April 18, 2016 ==========
And the final results are in! The winning business initiative or technology of the 2016 Data-driven Madness tournament is …
Reliable High Quality Data, winning easily over Master Data Management in the technology region. Final results and full bracket below. Thank you everyone for participating. In summary, this tournament indicated that Reliable High Quality Data is a business imperative that is crucial as the foundation for any predictive analytics or machine learning for relevant insights. It also showed that Master Data Management, a stalwart technology and discipline for over 10 years, cannot stand alone as the only offering to meet business demands. In fact, for many, Reliable High Quality Data now goes beyond just MDM.
We will be publishing a full interactive infographic of the tournament shortly. Contact us to indicate your interest in receiving a copy.
The winner of the Apple Watch is Ryan McCormick, with a narrow 3-point advantage over ericbless, who receives a $50 Amazon gift card for finishing second. So does Charles Joseph, who won the random drawing among all participants. Thank you all for taking part in Reltio Data-driven Madness 2016!
========== April 11, 2016 ==========
The championship game is here!
Anyone can still enter, because correct Championship selections score 10 points each! Also, as a bonus, a random drawing for another mystery prize will be held among everyone who votes for the championship winner.
Here is the near-final look at the bracket, the voting percentages, and the participant leaderboards. Scroll down for the highlights of the round and instructions on how to play. Click here to enter your selections for the Championship game.
And the updated leaderboard: it's a 3-horse race, with Ryan McCormick taking the lead. ericbless is in second, only 3 points behind. Big Dan scored 0 in the Final Four and drops to 3rd.
Highlights from the Final Four round (Feel free to provide your thoughts in comments below)
Business Region 1 vs Region 2
Technical Region 1 vs Region 2
Preview of Championship game
It's all about data quality and reliability in the final match-up. The question remaining to be answered: if business expects #1 Reliable High Quality Data, is #1 Master Data Management the only, and the best, way to deliver it in this modern data management era?
Results will be published and the champion crowned at the end of the week. Voting closes Friday 12am PT.
========== April 6, 2016 ==========
Even though March Madness is in the books, Reltio Data-driven Madness 2016 continues and for the first time in history, all #1 seeds have made it through to the final four!
Anyone can still enter because Final Four correct selections score 5 points each.
Here is the latest look at the bracket, the voting percentages, and the participant leaderboards. Scroll down for the highlights of the round and instructions on how to play. Click here to enter your selections for the Final Four.
And the updated leaderboard, with "Big Dan" still holding serve with a slim 2-point advantage over Ryan McCormick. Fernando drops to 4th with a DNP in the last round, while ericbless gains ground and is only 5 points behind. Charles Joseph is still mathematically in the running for the Apple Watch!
Some highlights from the Elite Eight round (Feel free to provide your thoughts in comments below)
Business Region 1
Business Region 2
Preview of Final Four Business Region 1 vs Region 2
The battle of #1 seeds in the business region begs the question: Is it enough to get #1 Relevant Insight? Or does the data have to be #1 Reliable and High Quality for the insights to be correct and matter?
Tech Region 3
Tech Region 4
Preview of Final Four Tech Region 1 vs Region 2
In almost a carbon copy of the Business Region semi-final, the battle of Tech Region #1 seeds begs the question: Is it enough to use #1 Predictive Analytics? Or does the data have to be based on a #1 MDM foundation of reliable data quality for the insights to be correct and matter?
========== April 1, 2016 ==========
It’s down to the Elite Eight in Reltio Data-driven Madness 2016. Surprisingly all #1 seeds have made it through so far. Now it’s crunch time.
Anyone can still enter because Round 4 correct selections score 4 points each.
Here is the latest look at the bracket, the voting percentages, and the participant leaderboards. Scroll down for the highlights of the round and instructions on how to play. Click here to enter your selections for the Elite Eight.
And the updated leaderboard, with "Big Dan" still holding serve with a slim 1-point advantage over Fernando. Ryan McCormick has dropped down to 5th (UPDATED: due to a scoring/identification error, Ryan's Sweet Sixteen score has been corrected and he is still in the running). ericbless and Charles Joseph are also within striking distance of the Apple Watch!
Some highlights from the Sweet Sixteen round (Feel free to provide your thoughts in comments below)
Business Region 1
The chance for a final four berth pits #1 vs #2 in a fascinating conundrum. Is access to data anytime, anywhere more important? Or does that data have to be reliable in order for it to matter?
Business Region 2
The #1 vs #6 match-up here is almost too close to call. Do you need relevant insight first, before you get recommended actions? Or will you trust recommended actions to be based on relevant data and insight?
Tech Region 3
In another #1 vs #6 game, the analysts feel that #1 MDM has the edge because all modern data management platforms have built-in graph capabilities, meaning that #6 Graph Database can't possibly win. However, those who are on legacy MDM systems might feel they have to go for #6, though they may encounter scalability issues with typical graph DB technology when they hit higher volumes.
Tech Region 4
This final Tech Region 4 game is a doozy. Again, who you favor depends on your perspective. Do you want just analytics and the #1 seed? Or #2 Data-driven Apps that are both analytical and operational in nature? This is one of the most anticipated games of the tournament.
========== March 24th, 2016 ==========
It’s Sweet Sixteen time in Reltio Data-driven Madness 2016. Not surprisingly all #1 seeds have made it through so far. Now it becomes a true test.
Anyone can still enter because Round 3 correct selections score 3 points each.
Here is the latest look at the bracket, the voting percentages, and the participant leaderboards. Scroll down for the highlights of the round and instructions on how to play.
And the updated leaderboard, with "Big Dan" holding a slim 1-point advantage over Fernando. Ryan McCormick, ericbless, and Charles Joseph are all within striking distance of the Apple Watch!
Some highlights from the second round (Feel free to provide your thoughts in comments below)
Business Region 1
Business Region 2
Tech Region 3
Tech Region 4
========== March 20th, 2016 ==========
Round 1 of Data-driven Madness is complete! While there were some tough matchups, there were relatively few upsets. Most of the seeded teams moved on to the second round, where competition will now prove to be tougher.
You can still enter because Round 2 correct selections score 2 points each.
Here is the latest look at the bracket, the voting percentages, and the participant leaderboards. Scroll down for instructions on how to play.
And here is the leaderboard …
Some highlights from the first round (Feel free to provide your thoughts in comments below)
Business Region 1
Business Region 2
Tech Region 3
Tech Region 4
========== Challenge kickoff and instructions March 12th, 2016 ==========
It’s that time of the year. Just like March Madness, Data-driven Madness has brackets, and an ultimate champion. Some of the head-to-head match ups are “lay-ups” (pun intended), and others are a little “I’m on a desert island and I have to choose one” hard.
To play:
1. Fill in your selections here, picking the winner of each head-to-head through to the championship round. Your selections also cast a vote for each match-up, and the option with the highest number of votes in each will move on to the next round.
2. You get a point each time your selection moves on. After each round is completed, you will be given the chance to select the winners of the next head-to-head rounds. Second round winners count for 2 points and so on.
3. The winner of the Apple Watch will be the person with the most number of points at the end of the tournament. Unlike March Madness, you can join at any time until the end of the tournament when all the results will be tabulated.
4. Tie breaker on points will be determined by the person who selected the Champion, and then further tie breakers will be by the number of correct entries in the final four and previous rounds.
B2B marketing today, as they say, is "Not Your Father's Marketing." Compared to 10 years ago, marketing tools have become exceedingly sophisticated and specialized. Customer relationship management (CRM) is just the meat and potatoes. In addition to CRM, organizations deploy marketing automation, email management, content management, social engagement, events management, survey, webinar, and many other tools. The list is extensive. Then there are the communication channels that organizations require: websites, call centers, email, mobile apps, chat, virtual agents, and social media.
Every piece of the "marketing stack" generates data, every second. But marketing brings in even more data to get a better understanding of the customer. Marketing groups subscribe to third-party data sources like Dun and Bradstreet, Discover.org, Netprospex, and Data.com to augment and further enrich B2B account and contact data. Marketing also requires access to information from internal sources like billing, services, and support to understand customer value, purchase recency and frequency, and satisfaction levels.
Now it looks like we have too much data! How do we use this data? How do we get better insights about the customer? How do we run real data-driven, evidence-based marketing? Sure, let's put analytics in place. Marketing requires in-depth customer understanding, tracking the effectiveness of campaigns, and measuring revenue contributions. Businesses utilize multiple analytics solutions for such insights, ranging from tools built into the CRM application to web analytics, social listening, funnel tracking, channel effectiveness, content performance, and many others.
Multiple sources of data, duplicate and inconsistent information across sources, and various analytics tools running on incomplete, disconnected customer data: this is not the best-case scenario. This environment leads to poor segmentation, ineffective marketing campaigns, inconsistent and irrelevant messages to customers, and ultimately a poor customer experience.
Here are five steps B2B marketers can take to improve marketing impact and get relevant, actionable insights from their data.
1. Create a reliable data foundation
Use a Modern Data Management solution to blend data from all internal, third-party, and external sources at big data scale. Modern Data Management provides tools to match, merge, de-duplicate, and clean data, creating a reliable, single-source-of-truth foundation for all B2B accounts and customers. Now you have all customer information in a single data-driven application and can leverage it for accurate segmentation, campaign design, or any predictive analytics model such as customer value or churn propensity. Unified customer information also facilitates consistent messaging across all channels, improves channel effectiveness, and lowers channel operating costs. (A minimal data-blending sketch appears after these five steps.)
2. Collaboratively curate data
Data clean-up is not a one-time job. It is an ongoing undertaking and requires collaboration from all operational and customer-facing teams. Data-driven marketing applications offer data change request workflows as well as ad hoc collaboration features, so any user of the application can flag suspected bad data for review and updates. If a salesperson in the field finds that an address has changed, she can start a discussion thread and get the data corrected.
3. Visualize relationships
This aspect is quite important, especially in account-based marketing for B2B. Marketers must know the organization structure of the account, the key influencers, locations and products of interest. Data-driven applications leverage graph technology to uncover relationships between all entities like contacts, accounts, products and places. With this information, marketers can design very specific campaigns and offers for the accounts.
4. Bring relevant insights into marketing applications
Relevant insights, delivered in the context of role and objective, are essential for marketing success. Running analytics on incomplete data, disconnected from operational applications, only provides macro-level insights. For actionable insights, first establish a reliable data foundation, and then incorporate the findings within the marketing application, in the context of the role. Modern Data Management provides capabilities for near-real-time analytics and also recommends next best actions using predictive analytics and machine learning. You can determine trends in customer preferences, the relative contribution of marketing programs, the business value of an account, and the influence of a contact. Armed with this information, marketers can provide relevant information and offers to any customer, through the right channel of engagement, at the right time, delivering a superior customer experience.
5. Close the loop
Data-driven, closed-loop marketing is the “Nirvana” that every marketer aspires to reach. Here all relevant insights and recommended actions are collected and processed for continuous closed-loop feedback into systems for improvements in campaign performance and customer experience. The insights are used to make customer profiles better and improve the efficiency and effectiveness of marketing operations.
These steps will guide B2B marketers towards better customer understanding. Blending data from all sources, cleansing and curating it, and discovering the all-important relationships and hierarchies will provide complete customer understanding. Finally, bringing all relevant insights within operational applications will make an immediate business impact and close the loop, demonstrating marketing's contribution.
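As promised under step 1, here is a minimal pandas sketch of blending and de-duplicating B2B account records from two sources. The column names and the crude name normalization are illustrative assumptions; real matching would rely on richer rules or machine learning.

# Minimal sketch for step 1: blend account records from CRM and marketing
# automation, then de-duplicate on a normalized company-name + domain key.
import pandas as pd

crm = pd.DataFrame({
    "company": ["Acme Corp.", "Globex LLC"],
    "domain": ["acme.com", "globex.com"],
    "source": ["crm", "crm"],
})
marketing = pd.DataFrame({
    "company": ["ACME Corporation", "Initech"],
    "domain": ["acme.com", "initech.com"],
    "source": ["marketing_automation", "marketing_automation"],
})

combined = pd.concat([crm, marketing], ignore_index=True)

# Simple normalization: lowercase, strip punctuation and legal suffixes.
combined["match_key"] = (
    combined["company"].str.lower()
    .str.replace(r"[^a-z0-9 ]", "", regex=True)
    .str.replace(r"\b(corp|corporation|llc|inc)\b", "", regex=True)
    .str.strip()
    + "|" + combined["domain"].str.lower()
)

golden = combined.drop_duplicates(subset="match_key", keep="first")
print(golden[["company", "domain", "source"]])  # Acme appears once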
Earlier this week I presented a 3-hour workshop at the MDM EU Summit in London titled "MDM Comes of Age with Big Data and Data-driven Applications," covering best practices, case studies, and technology considerations, and discussing the following topics and more:
The next day I also presented a session, Leveraging MDM for M&A – The "Ultimate" Master Data Challenge, with a case study discussing how to:
Please email me at ramon.chen@reltio.com if you’d like a copy of either presentation.
Both presentations were extremely interactive, with lots of great questions from the audience. During the workshop, and throughout the conference via Twitter and direct emails, I conducted an anonymous poll of the MDM EU Summit attendees. The latest results are summarized in the table below. If you would like to add to the survey results, feel free to take the poll at www.reltio.com/fullpoll:
@azornes #MDMDG "No single MDM vendor does it all well" pic.twitter.com/8V2z8JfKc4
— Ramon Chen (@RamonChen) May 20, 2015
@azornes “it may take 1 to 2 years before you see value from IBM, Oracle or SAP MDM implementations.” #mdmdg tough2justify #timetovalue
— Ramon Chen (@RamonChen) May 20, 2015
@azornes #mdmdg "graph databases help uncover relationships. A university uses it to find relationship between donors"
— Ramon Chen (@RamonChen) May 20, 2015
The mention of and focus on graph databases continued in a panel hosted by Aaron, which selected graphs as the #1 topic for discussion. This is not a surprising trend, since Aaron himself tweeted the following out at NYC's MDM Summit last year.
@FORR_Mgoetz key takeaways from keynote @ #MDM & #DataGovernance Summit NY #MDMDG = 4th keynote to call out GRAPH DB! pic.twitter.com/i4h6befhkt
— Aaron Zornes (@azornes) October 7, 2014
My workshop contained an in-depth discussion around graphs and their impact in the world of MDM:
@RamonChen speaking about Data Driven Applications #Reltio at #MDMDG. Notice graph model prop! pic.twitter.com/66Ga3TS7QD
— Vasu (@vasu_reltio) May 18, 2015
It was a great event, well organized, with lots of excellent presentations and interactions. I concluded that many companies are just starting their MDM journey, and thanks to the pioneers, they can avoid the mistakes of years past. Judging by the interest level at the Summit in technologies beyond MDM, such as graph databases, cloud, and machine learning, they are focused on doing so.