Harnessing Generic Big Data for Reliable Relevant Insights

Originally published at Eyeforpharma at http://social.eyeforpharma.com/market-access/harnessing-generic-big-data-reliable-relevant-insights

Data-driven applications are pointing the way to a new era of modern data management.

At this point, the continuing angst over Big Data has almost become passé. We all know there are massive volumes of data coming in. We know the broadening adoption of wearable technologies, not to mention the Internet of Things, will bring in far more. We know that this data, hard as it is to manage, will continue to transform every segment of our industry, just as it has every corner of retail, financial services, travel and so on. And we know there’s no single “this is it” moment to be reached—this is a journey, not a destination.

So why does it still feel we could be doing so much more, and doing it so much better?

First, let’s acknowledge that we know the challenges too. There are definite signs of budgets being pared down—in public hospitals, at insurers, and of course in government. Reimbursements at private facilities are also declining, just as industry consolidation inevitably brings belt-tightening. Finally, the specter of regulatory mandates always looms large.

Perhaps most importantly—since this seems more in our control—many of our critical processes haven’t changed enough. Despite the wealth of choices and relatively low barriers to entry, it’s remarkable how many legacy systems are still in place. Undoubtedly, it’s hard to make changes quickly in an industry that is in the life-saving business, and that’s so heavily regulated. So that still leaves plenty of room for improvement.

That’s why it’s instructive to look at some of the trends currently affecting pharma. As might be expected, they are fundamentally related and overlapping, and each in turn directly shapes the data-related initiatives launched by individual companies in this critical sector. Collectively, they are enhancing the quality of the data being used to fuel market-facing programs, and optimizing their usage.

Relating big to reliable

First, Big Data is only helpful when it’s the right data, and that can be a struggle for a whole host of reasons. The generic data streams now pouring into each enterprise come from a variety of sources (both inside and outside the company), and in a variety of formats. They feature different degrees of regulation and cost, and many different levels of relevance and context. Harnessing big data and distilling it into relevant insights requires correlating it with the right people, products, organizations and devices to make the information valuable and actionable.

Some techniques for identifying the right data have seen moderate success in the past—the Master Data Management (MDM) tools in place consolidated data across a range of siloed applications, from CRM to ERP, to offer a cleansed, single view of relevant information, but at much smaller volumes. That inherent accuracy helped companies generate better reports for everything from sales to compliance.
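To make the MDM idea concrete, here is a deliberately minimal sketch of that consolidation step: records for the same physician arrive from CRM and ERP silos with slightly different spellings, and a normalized match key lets us merge them into a single “golden record.” All names, fields and the matching rule are illustrative, not a production matching engine.

```python
def match_key(record):
    """Build a crude match key from a normalized name plus zip code."""
    name = "".join(record["name"].lower().split())
    return (name, record["zip"])

def consolidate(*sources):
    """Merge records from multiple silos into one golden record per key."""
    golden = {}
    for source in sources:
        for record in source:
            merged = golden.setdefault(match_key(record), {})
            for field, value in record.items():
                # Keep the first value seen; later sources fill in gaps.
                merged.setdefault(field, value)
    return list(golden.values())

# Hypothetical silo extracts: same physician, different spellings/fields.
crm = [{"name": "Dr. Jane Smith", "zip": "02115", "specialty": "oncology"}]
erp = [{"name": "dr. jane  smith", "zip": "02115", "npi": "1234567890"}]

records = consolidate(crm, erp)
print(records)  # one merged record carrying both specialty and npi
```

Real MDM platforms use far more sophisticated probabilistic matching and survivorship rules, but the shape of the problem—normalize, match, merge—is the same.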

But things have changed. For example, many healthcare providers are now connected to integrated delivery networks, and do more than just write prescriptions—they might be on industry committees, or have other positions of influence. So, more than just topline stats and details, it’s important to understand the affiliations and hierarchies among people, products and organizations, a web that is constantly shifting.

This is precipitating the move to cloud, as many organizations seek a more flexible and agile environment that scales elastically with the growing volume and complexity of the data being acquired. To gain a 360-degree view across multi-domain master data, transaction and interaction data, third-party/public/social data, clinical trials and eventually devices, a new breed of data-driven applications powered by modern data management platforms is maintaining veracity (aka reliability) on pace with volume, variety and velocity.

Connecting and visualizing relationships

Given the high level of complexity inherent in optimal use of Big Data, it might seem that the data-driven applications described above are intended primarily for IT specialists. In reality, it’s quite the opposite: they deliver the relevant data with easy-to-use interfaces, backed up by new features and functionality that most enterprise-class applications can’t match. In fact, the use of easily comprehensible graphs makes this vortex of information astonishingly easy to digest.

This trend makes perfect sense—there’s a new generation of tech-savvy users, weaned on tools such as LinkedIn and various Google apps, who expect dense information to be easily accessible. To be clear, these aren’t just pretty pictures: the entities in graphs form nodes that can be continuously added to, with connections to other nodes (as opposed to traditional relational databases, which require schemas that conform to fixed columns and tables). In this dynamic, patterns emerge with startling clarity: people-to-people, people-to-products, products-to-organizations, and more.
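The graph model described above can be sketched in a few lines: entities become nodes, relationships become labeled edges, and new connections can be added at any time without a schema migration. The entity names and relationship labels here are made up for illustration.

```python
from collections import defaultdict

class Graph:
    """A toy property-graph: nodes linked by labeled edges."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, node), ...]

    def connect(self, a, relation, b):
        """Add a labeled edge; no schema change needed for new relations."""
        self.edges[a].append((relation, b))

    def neighbors(self, node, relation):
        """Follow edges of one relation type out of a node."""
        return [b for rel, b in self.edges[node] if rel == relation]

g = Graph()
g.connect("Dr. Smith", "affiliated_with", "General Hospital")
g.connect("Dr. Jones", "affiliated_with", "General Hospital")
g.connect("Dr. Smith", "co_authored_with", "Dr. Jones")
g.connect("Dr. Smith", "prescribes", "Drug X")

# Patterns emerge by walking edges: people-to-people, people-to-products.
print(g.neighbors("Dr. Smith", "co_authored_with"))  # ['Dr. Jones']
```

A production system would use a dedicated graph database, but the key contrast with a relational schema—adding a new relationship type is just another edge label—shows up even at this scale.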

In the world of pharma, such big data graphs are invaluable in helping sales and marketing teams to gauge the ultimate influence of physicians on their peers, as well as their affiliations with hospitals and other institutions. The accessible data makes it much easier to identify the key players and issues that determine which drug or device might receive preferential placement or even approval within a healthcare organization.

Pre-aligning third party data

One additional means of augmenting and improving data reliability is to procure third-party data: Healthcare Professional (HCP) lists, Healthcare Organization (HCO) lists, and increasingly big volumes of scripts, plan and even patient/consumer data.

Lists are traditionally acquired from vendors based on detailed selection criteria. Once the data is acquired, technology teams upload it using ETL (Extract, Transform, Load) tools. That sounds right, but the process is carried out infrequently, which in this era of real-time information leaves considerable room for error.
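The ETL step described above can be sketched in miniature: extract an HCP list (here, an in-memory CSV standing in for a vendor file), transform it lightly, and load it with an upsert so that repeated runs refresh stale rows instead of duplicating them. Table and column names are illustrative assumptions.

```python
import csv
import io
import sqlite3

# Hypothetical vendor feed; in practice this would be a delivered file.
vendor_feed = io.StringIO(
    "npi,name,specialty\n"
    "1234567890,Jane Smith,Oncology\n"
)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hcp (npi TEXT PRIMARY KEY, name TEXT, specialty TEXT)"
)

for row in csv.DictReader(vendor_feed):           # extract
    row = {k: v.strip() for k, v in row.items()}  # transform
    conn.execute(                                 # load (upsert)
        "INSERT INTO hcp VALUES (:npi, :name, :specialty) "
        "ON CONFLICT(npi) DO UPDATE SET name=excluded.name, "
        "specialty=excluded.specialty",
        row,
    )

print(conn.execute("SELECT * FROM hcp").fetchall())
```

The upsert is what makes the load safe to run frequently—exactly the cadence the infrequent batch processes above lack.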

Just as importantly, we need to understand the context in which the data is applied—it’s usually blended with information already in-house from social media content, data built into applications, and the legacy data resident in internal networks. Making it all work coherently while uncovering those critical affiliations and relationships can be a huge challenge.

That’s why it might be wise to have a comprehensive checklist in place before finalizing a deal with third-party data providers. Questions here might include:

  • How can I provide updates and corrections to the data? Since this is a fairly new phenomenon, some vendors might be sensitive to getting critical feedback. However, the potential benefits far outweigh the downside.
  • Is the data accessible directly from within data-driven applications?
  • Can I filter and search through the data by the criteria I select?
  • How can we track what data belongs to the provider, and what attributes we own?
  • Is the data pre-aligned/cross referenced to other related data?

Making reliable more relevant

When there are so many ongoing consolidations shaped by a blur of mergers and acquisitions, it’s important to keep sight of the actual customer, and how that role might be changing.

Targeting physicians alone doesn’t cut it anymore—there’s now a complex web of constituencies, running the gamut from payers, providers and hospital chains, to IDNs and HCOs. That’s the force behind the push toward key account management (KAM): This process focuses on nurturing a deeper understanding of each target market’s core drivers, needs and objectives, and building long-term relationships.

It entails a comprehensive approach that encompasses strategy, processes, people, data/insights, and tools and infrastructure. Again, that last point is relevant because it represents a shift from legacy systems— and in this swirling ecosystem, most CRM applications can’t be depended on for full-force KAM programs, since they date back to a time when brand-focused programs were the main weapon in the market-facing arsenal.

Here, too, there’s a strong current of innovation. The newest generation of applications builds directly on the data to go far past CRM, and enables most professionals within the company to collaborate cross-functionally with the target market while adhering to the brand. Furthermore, these applications deliver insights through built-in analytics. Unlike CRM applications, they are role aware, narrowing the data and insights to what’s relevant to the user. This is juxtaposed with the use of separate business intelligence tools that require not only that the user understand how the data is constructed and stored, but also how to execute queries to pull out information. In short, they deliver answers but only if you know the questions to ask.
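The “role aware” idea above amounts to filtering one pool of insights down to what each user should see. A minimal sketch, with roles and insight tags invented for the example:

```python
# Hypothetical insight pool; each insight is tagged with relevant roles.
INSIGHTS = [
    {"text": "KOL engagement up 12% in Northeast",
     "roles": {"marketing", "sales"}},
    {"text": "Formulary change at General Hospital IDN",
     "roles": {"sales"}},
    {"text": "Safety signal flagged in an ongoing trial",
     "roles": {"medical"}},
]

def insights_for(role):
    """Return only the insights tagged as relevant to this role."""
    return [i["text"] for i in INSIGHTS if role in i["roles"]]

print(insights_for("sales"))    # sales sees two insights
print(insights_for("medical"))  # medical sees one
```

The contrast with a generic BI tool is that the user never writes a query; the application already knows which slice of the data answers their questions.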

Converting relevance into intelligent recommendations

In contrast, data-driven applications not only provide relevant insights but also recommended actions, based on an understanding of the user’s goals. These suggestions could be as simple as identifying the best contact to ask for a warm introduction to a key influencer, much like LinkedIn, or as complex as ranking and rating Key Opinion Leaders (KOLs).

Even without the looming specter of regulatory guidelines, identifying and engaging with critical thought leaders is a significant challenge. In many pharma organizations, the selection process ranges from Byzantine complexity to evaluating potential candidates based on anecdotes and dated associations. Relevant data—from authorship in trade publications to participation in clinical trials, affiliations, credentials and associations, blended with data from internal sources—represents a trove of insights to sift through. Data-driven applications harness all of this data, small or big, to offer recommended actions based on machine learning: continuously refining a closed feedback loop that factors in what has worked in the past to predict the best possible selections for the future.
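That closed feedback loop can be illustrated with a deliberately simple model: KOL candidates are scored on weighted features (publications, trials, affiliations), and the weights are nudged after each engagement depending on whether it succeeded. The feature names, numbers and update rule are illustrative, not a production recommender.

```python
# Starting feature weights; an illustrative assumption, not learned values.
weights = {"publications": 0.5, "trials": 0.3, "affiliations": 0.2}

def score(candidate):
    """Weighted sum of a candidate's feature values."""
    return sum(weights[f] * candidate[f] for f in weights)

def feedback(candidate, succeeded, lr=0.05):
    """Nudge weights toward the profile of successful engagements."""
    sign = 1 if succeeded else -1
    for f in weights:
        weights[f] += sign * lr * candidate[f]

# Hypothetical candidates with normalized feature values in [0, 1].
kols = {
    "Dr. A": {"publications": 0.9, "trials": 0.2, "affiliations": 0.5},
    "Dr. B": {"publications": 0.3, "trials": 0.8, "affiliations": 0.6},
}

feedback(kols["Dr. B"], succeeded=True)  # record one past win
ranked = sorted(kols, key=lambda k: score(kols[k]), reverse=True)
print(ranked)
```

Each engagement outcome feeds back into the weights, so the ranking gradually tilts toward the profiles that have actually worked—the essence of the loop described above, however much more sophisticated the real models are.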

Being right faster

It’s clear that the current levels of change roiling the industry won’t fade anytime soon. Similarly, the mountains of data coming in every minute will only get higher. Making sense of this data will spell the difference between survival and success. Fortunately, the tools to help achieve that goal are now available, and they deserve attention and adoption to help everyone across a life sciences organization be right faster.