
Can Data as a Service (DaaS) be the Magic Blend for a Healthier Life Sciences?

In my 12 years consulting for PwC and subsequently IBM, I estimate that I participated in life sciences strategic consulting projects totaling tens of millions of dollars in revenue. The vast majority of these projects were commercial IT and data architecture assessments to address key business pain points, or to determine how best to support new requirements in an ever-changing life sciences landscape. Each architecture was inevitably complex and resulted in unhealthy complications, including low ROI, unmet business requirements, long times to value, and ultimately significant tension between business and IT.

Developing a good IT vision, and implementing the governance and enforcement needed to support it, is a major challenge for life sciences companies. Usually, architectures evolve organically from point solutions built to satisfy individual business requirements. These dynamics are typical, driven by corporate budget cycles, lack of experience, the need to deliver business value quickly, and, frankly, office politics that do not favor successful governance or changes in direction.

This is why modern data management and architectures now focus on the data first, and only then on enabling data-driven applications to be rapidly created to solve any business problem. The ability to bring data sources of any kind into a simple, low-cost, big data cloud platform in order to manage the information, understand it, query it, share it, and make decisions from it isn’t trivial. The quality of, and latency of access to, reliable data is often “the” bone of contention between IT and business, as is well documented in the post “Bridging the Gap between IT and Business” by Dr. Tom Redman (aka the Data Doc himself).

So what will help life sciences companies cut to the chase, and truly allow them to have data-driven applications that both IT and business can agree on? I passionately believe that Data as a Service (DaaS) is a key component in this equation.

What is DaaS? Simply put, it’s the ability for data to be delivered regardless of the geographic or organizational separation between provider and consumer. Today, data from third-party providers is still often delivered through batch files and IT-dependent ETL uploads, with laborious comparisons of what has changed between updates, difficulty gauging the quality and value of the data provided, and slow, manual communication back to the provider about corrections uncovered by field teams that could be applied to improve data quality.
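
To make the contrast concrete, here is a minimal sketch of what on-demand DaaS consumption and a closed feedback loop could look like. The endpoint, field names and API shape are hypothetical illustrations, not any real provider’s interface:

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical DaaS endpoint and key -- illustrative only.
DAAS_URL = "https://daas.example.com/api/v1/hcp"
API_KEY = "your-api-key"

def fetch_updated_records(since_iso_timestamp):
    """Pull only the records that changed since the last sync,
    instead of reloading a full batch file and diffing it locally."""
    response = requests.get(
        DAAS_URL,
        params={"updatedSince": since_iso_timestamp},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["records"]

def report_correction(record_id, field, corrected_value):
    """Close the feedback loop: push a field-team correction back
    to the provider instead of emailing a spreadsheet."""
    response = requests.post(
        f"{DAAS_URL}/{record_id}/corrections",
        json={"field": field, "value": corrected_value},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()

# Example: an incremental pull, then a correction uncovered in the field.
for record in fetch_updated_records("2015-03-01T00:00:00Z"):
    print(record["id"], record.get("name"))
report_correction("hcp-42", "address", "9 Oak Ave, Springfield")
```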

At Reltio we believe that Data as a Service, integrated into a modern data management foundation and fully accessible to data-driven business applications, is a game changer. But just sourcing and having on-demand access to data is not enough. Data has to be converted into reliable information by seamlessly cleansing and blending together related data from third-party vendors, from social and public data sources, and from departments across the enterprise. As a bonus, it should also be flexible enough to serve as the backbone technology for data monetization.

This is why I joined Reltio, and why I’m focused on recruiting and enabling leading data providers through Reltio DaaS. In life sciences we are fortunate to have companies such as MedPro, DarkMatter2BD, Healthcare Management Systems and Enclarity (now LexisNexis) as data partners through our Delivered by Reltio program. Reltio DaaS benefits both pharma companies and data providers by enabling two-way, on-demand consumption of data and a closed feedback loop that can reduce the cost of improving data quality and delivery.

This is truly an exciting time, and I believe fully integrated DaaS is an integral offering that will enable life sciences companies to achieve the agile, data-driven architecture they need to succeed.

Explaining Data-driven, One Slice of Pizza at a Time

While a lot has been written about what it means to be data-driven, until now relatively little has been written about enterprise data-driven applications and how they relate to other types of applications and to data management infrastructure.

I thought it might be fun to extend the pizza analogy that was previously blogged about by Albert Barron, IBM Software Client Architect, in which he cleverly showed the differences between on-premises systems and “as a service” concepts through the various components that make up a pizza meal.

If you will forgive me, I’m taking his “pizza as a service” concept to an even cheesier level. Starting with a baseline of Software as a Service (SaaS) and a modern data management Platform as a Service (PaaS), data-driven applications add the following capabilities (a toy sketch of how the layers compose follows the list):

  • Data as a Service – to provide the raw multi-domain, structured and unstructured data from third-party, social and transaction sources (such as through our Delivered by Reltio partner program)
  • Master Data Management – to create reliable data by cleansing, verifying, matching and merging, ultimately uncovering affiliations and relationships between siloed data sets
  • Analytics – to deliver in-context, relevant insights with visual cues specific to the role of the frontline business user
  • Machine Learning – to give recommended actions that can be acted upon immediately within the data-driven application by the user, or executed autonomously on their behalf, with closed-loop feedback to continuously improve outcomes
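
To tie the slices together, here is a deliberately toy sketch of how these layers might compose in code. Every class, method and record below is a hypothetical illustration, not the actual API of any platform:

```python
class DataAsAService:
    """Raw multi-domain data from third-party, social and transaction sources."""
    def fetch(self):
        return [{"name": "Dr. Jane Smith", "source": "third_party"},
                {"name": "Jane Smith MD", "source": "social"}]

class MasterDataManagement:
    """Cleanse, match and merge records into reliable entities."""
    def merge(self, records):
        # Naive normalization-based matching; real MDM uses far more
        # sophisticated match rules and survivorship logic.
        merged = {}
        for r in records:
            key = (r["name"].lower()
                   .replace("dr. ", "")
                   .replace(" md", "")
                   .strip())
            merged.setdefault(key, []).append(r["source"])
        return merged

class Analytics:
    """Deliver in-context insight for the frontline user."""
    def insight(self, entities):
        return {name: f"seen in {len(sources)} sources"
                for name, sources in entities.items()}

class MachineLearning:
    """Recommend a next best action; feedback would refine future output."""
    def recommend(self, insights):
        return [f"Review profile: {name} ({note})"
                for name, note in insights.items()]

# The data-driven application is the whole pizza, layers composed together.
entities = MasterDataManagement().merge(DataAsAService().fetch())
actions = MachineLearning().recommend(Analytics().insight(entities))
print(actions)  # -> ['Review profile: jane smith (seen in 2 sources)']
```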

 

So there you have it, the secret “sauce” of data-driven applications, delivered as a pizza for your enjoyment. Sorry I didn’t squeeze beer into the post, but beer can be taken as foundational, like the big data infrastructure and cloud underpinning today’s modern data management Platform as a Service.

If you’d like a more technical view of what a data-driven application entails, please check out Phil Russom, PhD’s first-of-its-kind checklist for data-driven applications.

Better data, better pizza, #berightfaster

Funding Data-driven Innovation

Seed funding is more plentiful and easier to raise today than I’ve ever seen during my career. What that means, ironically, is that this makes everything much harder. It sets an expectation — especially for young, first-time founders — that something they expected to be challenging is relatively easy, and this sets strong expectations for the next time they do it. The problem is that the number of A rounds hasn’t changed. That amount of Series A capital HAS NOT increased. So, if you have 4x the number of companies with seed funding, that’s 4x the players competing for the same money… making it 4x harder to raise an A round than it was five years ago.

 

That excerpt, from the excellent article “What the Seed Funding Boom Means for Raising a Series A” by First Round Partner Josh Kopelman, was a very interesting read for us here at Reltio.

Mostly because we were excited to announce our very own Series A funding today, from Crosslink Capital and 406 Ventures. Even though we hadn’t read the article until now, it turned out that we had the good fortune of following Josh’s guidance pretty much to a tee.

Specifically we:

  • Raised a modest seed round (this is the only place where we deviated from the recommendation in the article which encouraged raising a larger seed amount)
  • Gained significant traction with early customers and partners, generating revenue and even showing profitability
  • Didn’t do serious pitching until we hit the right milestones and took the time to set up a fundraising strategy
  • Avoided the “shopped deal” mentality by focusing on a few key VCs that we believed would be aligned with our vision
  • Did our research and found Crosslink Capital and 406 Ventures who have a tremendous track record of working jointly with their startups to help them succeed
  • Kept our funding requirements at exactly the same level as our original pitch
  • Showed “mastery in our numbers” during the pitch and process
  • Positioned our capabilities in line with one of the hottest business-user-focused segments in the market, “data-driven applications,” while also appealing to IT teams at companies that need to modernize their data management
  • Ensured that we could demonstrate traction and hit proof-points that represent real step-change
  • Kept tight fiscal management in place and laid out a well thought out plan to use the capital
  • Proved that we have a product in production that is loved by Fortune 500 customers, not only for day-to-day business operations but also for M&A
  • Showed that our addressable market was in the multi-billions, while demonstrating commitment and focus for the healthcare and life sciences industry
  • Highlighted that our data-as-a-service can not only enrich data for internal operations, but also enable data monetization strategies for CDOs

Did we skin our knees? We sure did; not all of the VCs understood or believed that we could do what we set out to achieve. But as Josh points out in his article, that’s not a bad thing. We learned, we adapted, and ultimately we ended up with two superb VC firms that we are in lock-step with.

We don’t take the process or the funding lightly; it’s a small step in a long journey, and we have our sights set high. On this exciting day for us, we’d like to extend best wishes to all companies looking to raise their own Series A. Please reach out to us if you feel we can offer any insight.

P.S. If you feel like exploring Reltio for career options, we have plenty of openings too 🙂

How Life Sciences Mergers & Acquisitions (M&A) Can Turn Into an Even Bigger Deal

In 2014, U.S. healthcare and life sciences mergers and acquisitions (M&A) hit a record $236.6 billion. According to an EY report, Actavis owned the two largest deals among the top 14.

The first quarter of 2015 saw its share of big-deal mergers: Pfizer Inc.’s intended purchase of Hospira Inc. for about $15 billion, Canada’s Valeant Pharmaceuticals International Inc.’s intention to purchase Salix Pharmaceuticals Ltd. for about $10 billion, and last night’s announcement that beats them all, AbbVie Inc.’s intended acquisition of Pharmacyclics Inc. for about $21 billion.

While the main prize of these acquisitions is undoubtedly the drug pipeline and huge potential for future sales, merging together companies of such size and scale, with a myriad of people, processes, data and systems can be a lengthy, costly and challenging endeavor.

First there are pre-merger activities. A major step involves bringing together clean, reliable, relevant data from the IT systems of both parties, in a timely fashion, into a “clean room” where auditors can assess synergies and overlaps and respond to any regulatory objections and hurdles that may need to be overcome.

Surprisingly, this complex task is often accomplished through significant manual effort, with little more than the almighty spreadsheet as the tool of choice for M&A reconciliation and analysis. This makes the process resource-intensive, rather imprecise and potentially extremely costly.

Furthermore, upon completion of a successful merger, any work performed pre-merger is often discarded. Post-merger integration then has to start anew, placing further stress on IT and business teams who should be focusing on deriving value from the combination of the two businesses.

So given the significant risk involved, why aren’t multi-million-dollar enterprise-class systems, such as traditional master data management (MDM) offerings, leveraged in M&A more often? Unfortunately, on-premises MDM systems simply can’t be stood up fast enough, cost too much in infrastructure to implement, and aren’t flexible enough to deliver the multi-dimensional, multi-domain analysis that is needed. Moreover, a closer look at each company in a multi-billion-dollar merger might reveal multiple siloed MDM systems already deployed within its own walls, each fulfilling a point-in-time business need, with no opportunity for consolidation.

Fortunately, there is a new and better way to bring together critical data from both parties in a secure and controlled environment, in the timeframe needed, and to achieve tremendous cost savings, faster pre-merger analysis, accelerated post-merger integration and something even more valuable.

Cloud-based modern data management, which brings master data management discipline to a big data foundation and utilizes graph technologies similar to those employed by LinkedIn, Google and Facebook, allows data to be analyzed efficiently regardless of format or source. A hybrid of columnar and graph technology provides flexibility that traditional relational, row-and-column databases cannot match, making it possible to quickly reveal the multi-dimensional relationships and correlations across multi-domain datasets that are crucial to planning and executing an M&A transaction and beyond.
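
As a minimal illustration of why a graph representation helps, the sketch below walks a tiny, made-up relationship graph to surface an overlap between the two merging companies, the kind of multi-dimensional question that is awkward to ask of rows and columns. The data and structure are hypothetical:

```python
from collections import defaultdict

# Toy relationship graph drawn from both merging companies' systems.
# Every node, edge and source label here is made up for illustration.
edges = [
    ("Dr. Adams", "affiliated_with", "Mercy Hospital", "company_a_crm"),
    ("Dr. Adams", "prescribes", "Drug X", "company_a_sales"),
    ("Dr. Adams", "affiliated_with", "Mercy Hospital", "company_b_crm"),
    ("Dr. Adams", "prescribes", "Drug Y", "company_b_sales"),
]

graph = defaultdict(list)
for subject, relation, obj, source in edges:
    graph[subject].append((relation, obj, source))

# Surface overlap: entities present in both companies' data are exactly
# what pre-merger synergy and overlap analysis needs to find quickly.
for node, rels in graph.items():
    sources = {src for _, _, src in rels}
    if (any(s.startswith("company_a") for s in sources)
            and any(s.startswith("company_b") for s in sources)):
        print(f"{node} appears in both companies' data:")
        for relation, obj, source in rels:
            print(f"  {relation} -> {obj} (from {source})")
```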

Granular security and visibility controls allow each company to have its own cloud workspace, while information is easily combined into a “clean room” cloud for auditors to do their work. Prior to this convergence, all the data is cleansed, enhanced, deduplicated and linked, as you would expect.
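
A minimal sketch of the clean-room idea, assuming hypothetical roles and a record-level visibility flag; real platforms enforce far richer policies:

```python
# Hypothetical record-level visibility filter for a "clean room".
records = [
    {"id": 1, "owner": "company_a", "cleared_for_clean_room": True},
    {"id": 2, "owner": "company_a", "cleared_for_clean_room": False},
    {"id": 3, "owner": "company_b", "cleared_for_clean_room": True},
]

def visible_to(role, records):
    """Each company sees only its own workspace; auditors see only
    what both parties have cleared into the clean room."""
    if role == "auditor":
        return [r for r in records if r["cleared_for_clean_room"]]
    return [r for r in records if r["owner"] == role]

print(visible_to("company_a", records))  # ids 1 and 2
print(visible_to("auditor", records))    # ids 1 and 3
```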

Once the merger is approved, the converged and consolidated data in the cloud forms the foundation for new enterprise data-driven applications, and can also be pushed immediately to operational divisions of the merged company, jump-starting the integration or systems retirement process.  

Such a modern data management platform also provides compliance and governance features through deep auditability: the history of every change to every attribute in the combined repository can be inspected, at any point in time, to see how the data has grown and evolved.
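
One way to picture that auditability is an append-only change log that can be replayed to any point in time. The sketch below is a simplified assumption of how such a log might work, not the platform’s actual mechanism:

```python
import datetime

# Append-only audit log: every change to every attribute is recorded,
# so a record's state can be reconstructed as of any moment.
audit_log = []

def set_attribute(entity_id, attribute, value, source):
    audit_log.append({
        "entity": entity_id,
        "attribute": attribute,
        "value": value,
        "source": source,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    })

def value_as_of(entity_id, attribute, as_of_iso):
    """Replay the log to see an attribute's value at a past moment."""
    value = None
    for entry in audit_log:  # log is in chronological order
        if (entry["entity"] == entity_id
                and entry["attribute"] == attribute
                and entry["timestamp"] <= as_of_iso):
            value = entry["value"]
    return value

set_attribute("hcp-42", "address", "1 Main St", "company_a_crm")
set_attribute("hcp-42", "address", "9 Oak Ave", "company_b_crm")
print(value_as_of("hcp-42", "address", "9999-01-01"))  # latest value
```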

But the jewel in the crown, and a by-product of the M&A, might well be the accelerated path to enterprise data-driven applications. These can be used to solve business problems in ways not previously achievable with legacy, process-driven applications. Once the combined company is on the path to being data-driven, the possibilities are limitless: operating efficiencies and new business agility allow the new entity to reap even bigger rewards than just the physical products and patents it has acquired, and the cost savings and efficiencies of a faster pre-merger process.

A complex M&A presents an opportunity for a company to transform itself into a data-driven juggernaut, and that may prove to be an even bigger deal.

The Groundbreaking Book on how Enterprises can be Data-driven Today

A Review of “Data Driven: Profiting from Your Most Important Business Asset”

Being data-driven is a hot topic. The term is mostly used to describe the latest in big data processing and analytics, and as the Holy Grail for companies seeking greater efficiency and profitability. However, if you perform a Google search for data-driven applications, there’s surprisingly little written about the use of these types of applications. So what does it really take to be data-driven?

Dr. Thomas Redman, also known as “the Data Doc,” is the author of the groundbreaking book “Data Driven: Profiting from Your Most Important Business Asset,” which describes the critical techniques, processes and organizational changes a company must undertake to make the leap to becoming a data-driven enterprise.

The book delves into many fascinating topics relevant to today’s agile enterprise:

  • The special properties that make data such a powerful asset
  • The hidden costs of flawed, outdated, or otherwise poor-quality data
  • How to improve data quality for competitive advantage
  • Strategies for exploiting your data to make better business decisions
  • The many ways to bring data to market
  • Ideas for dealing with political struggles over data and concerns about privacy rights

Dr. Redman’s passion for and focus on data quality is a key differentiator compared with other books that focus more on the visualization, discovery and analytical value of data. Of course, data quality and master data management, a $1B+ software market, is not a new topic to many of us. What’s different is Dr. Redman’s description of how data can be improved. He describes how data quality is everyone’s business, not just an IT back-office function, and outlines how business users can own and contribute to the quality of data.

As a clear indication of how far ahead of his time he was, Dr. Redman described in his book the concept of data lakes (long before big data popularized the term) and the need for a chief data officer (one of the hottest jobs on the market today).

Meanwhile, our company Reltio was in parallel developing a modern data management platform that enables data-driven applications to be delivered to frontline users to solve any business challenge. Having only now read Dr. Redman’s book, we were amazed to see how directly his concepts correlate to core offerings within our Reltio Cloud.

So it is with great excitement that we announce that in partnership with Dr. Redman we will publish posts revisiting his best work, together with his latest thoughts on what it means to be data driven today.

Within these posts Dr. Redman has graciously offered to answer any questions you may have.

We look forward to engaging with you and highlighting the work of Dr. Thomas Redman who has rightly earned his title “the Data Doc”.

Dr. Thomas C. Redman, “the Data Doc,” is President of Navesink Consulting Group. He helps leaders craft programs to get in front on data quality, learn to compete with data, and build the organizational capabilities to do so. His most recent article, “Data’s Credibility Problem,” appeared in the December 2013 issue of Harvard Business Review, and his fourth book, Data Driven: Profiting from Your Most Important Business Asset (Harvard Business Press), was a Library Journal Best Business Book of 2008. Prior to forming Navesink in 1996, Tom started and led the Data Quality Lab at Bell Labs. He holds two patents.

Your Salespeople are Your Data-driven Advantage

Most enterprises understand the need to provide their commercial teams with timely and accurate information. We’ve all seen the metrics where a wrong address or incorrect email can lead to wasted time, inefficient operations and poor customer satisfaction.

It’s why billions of dollars are spent every year acquiring information from third-party data sources to enrich internal information, and even more is spent on IT efforts to reconcile and master that data across countless internal data sources. In many cases there are even outsourced services with legions of data stewards who “dial for accurate data.”

Interestingly, your field teams, such as sales reps and account managers who are in regular contact with customers and potential prospects, encounter the most up-to-date information in the course of their normal activities. Those who decide to update that information in their CRM systems typically make a mess of the database, creating duplicates and inconsistencies that master data management solutions are then tasked to clean up.

More importantly, beyond the accuracy of name, address and phone number, these field teams encounter interesting affiliations and relationships that are almost never available through third-party vendor data. Again, those who are willing to feed that information back into their CRM applications hit a dead end, because those applications are simply not equipped to handle complex relationships, which may be as diverse as person-to-person, person-to-organization, product-to-organization, location, price and other real-world elements.

And even if they are able to capture some of the details, the affiliation is often considered “soft” (hearsay rather than fact) and cannot be verified via back-office data stewarding.

Here’s where a new breed of data-driven applications with built-in modern data management can help. These applications give frontline business users a way to contribute new and updated data at their fingertips, using the mobile devices of their choice, including their smartphones. The information can be routed to data stewards through workflow, or simply left as a “soft contribution” in the system, meaning that no facts are changed but valuable data is captured. The value of that information is then subject to the power of social, collaborative voting, much like a business or restaurant review on Yelp: the crowd self-governs the accuracy and effectiveness of the information provided.
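
A minimal sketch of how a “soft contribution” governed by crowd voting might be modeled; the structures, threshold and workflow here are hypothetical illustrations:

```python
# Hypothetical "soft contribution": a field-reported affiliation that
# is not yet a verified fact, governed by crowd votes, Yelp-style.
contribution = {
    "claim": "Dr. Adams now practices at Lakeside Clinic",
    "contributor": "rep_117",
    "status": "soft",   # not merged into the golden record
    "upvotes": 0,
    "downvotes": 0,
}

def vote(contribution, agree):
    contribution["upvotes" if agree else "downvotes"] += 1

def crowd_score(contribution):
    total = contribution["upvotes"] + contribution["downvotes"]
    return contribution["upvotes"] / total if total else 0.0

# Several colleagues weigh in; past a threshold the claim could be
# routed to a data steward for promotion to a verified fact.
for agree in (True, True, True, False):
    vote(contribution, agree)

if crowd_score(contribution) > 0.7:
    contribution["status"] = "pending_steward_review"
print(contribution)
```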

This form of collaborative curation delivers a combination of high-quality, up-to-date facts and softer, yet still valuable, self-governed data. With some enterprises having tens of thousands of feet on the street, the data those teams contribute back to the mothership can be better than information purchased from a third-party source. In those scenarios, companies may even have the option of monetizing their data without really trying.

As companies move to reduce costs by laying off salespeople, owing to reduced demand for face-to-face interactions, those who continue to have ongoing contact with customers and prospects become even more valuable, and not just for selling product and closing deals. Armed with a new breed of data-driven applications, they just might be your most valuable data-driven asset and competitive advantage.

How to Monetize Your Enterprise Data … Without Really Trying

Data monetization, as described on Wikipedia, is “… generating revenue from available data sources or real-time streamed data by instituting the discovery, capture, storage, analysis, dissemination and use of that data. It is the process by which data producers, data aggregators and data consumers, large and small, exchange, sell or trade data.”

There have been some very interesting articles on this topic, primarily focused on data gathered from the Internet of Things (IoT), as described in Cap Gemini’s presentation Extracting Value from the Connectivity Opportunity, or from mobile devices, as in Accenture’s piece Monetization in the Age of Big Data.

But did you know that any enterprise can turn the data it manages and uses to run its business into a recurring revenue stream? In order to do so, the data must be, among other things:

  • Reliable
  • Relevant
  • Segmented
  • Secure and anonymized if necessary

There should also be an easy way to distribute the data and make it available for purchase, either in batch or by-the-click, “Amazon style.” And it doesn’t stop there: as with any form of commerce, customers of the data need to be able to provide feedback and rate its value and quality.
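
To make the checklist above concrete, here is a toy sketch of preparing a data slice for sale: segmenting the records and anonymizing direct identifiers before publication. The field names and pseudonymization scheme are assumptions for illustration:

```python
import hashlib

# Made-up internal records; "email" stands in for any direct identifier.
customers = [
    {"name": "Alice Jones", "email": "alice@example.com",
     "segment": "oncology", "prescribing_volume": 120},
    {"name": "Bob Lee", "email": "bob@example.com",
     "segment": "cardiology", "prescribing_volume": 45},
]

def anonymize(record):
    """Replace direct identifiers with a stable pseudonymous key."""
    key = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    return {"id": key,
            "segment": record["segment"],
            "prescribing_volume": record["prescribing_volume"]}

def publish_segment(records, segment):
    """Sell by the slice: only the requested, anonymized segment."""
    return [anonymize(r) for r in records if r["segment"] == segment]

print(publish_segment(customers, "oncology"))
```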

Better still, wouldn’t it be great if the users of the data could contribute back more accurate information in exchange for discounts on future purchases, making it a win-win for all?

While getting there may seem daunting, the first steps are simply to improve the reliability and relevance of your internal data in order to improve your business operations. Using Data as a Service (DaaS) to bring in third-party data assets that enrich information for your data-driven applications, and allowing your employees to collaboratively curate data, optimizes the efficiency and cost of your internal operations.

From there you can turn your own data into an asset and even begin profiting from it. The technology you previously used to bring in third-party data can be used to distribute and license your own data externally, effectively making you a Data as a Service provider. The caveat, of course, is that any technology you use must provide full audit and lineage showing where each piece of data originated, so that licensing rights remain clear.
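
A small sketch of attribute-level lineage under that caveat: each value carries its source, so vendor-licensed attributes can be excluded before redistribution. The record shape and source labels are hypothetical:

```python
# Hypothetical lineage-aware record: every attribute keeps the source
# it originated from, so third-party licensed values can be excluded
# before the data is redistributed.
record = {
    "name":    {"value": "Dr. Jane Smith", "source": "internal_crm"},
    "npi":     {"value": "1234567890",     "source": "third_party_vendor"},
    "address": {"value": "9 Oak Ave",      "source": "field_team"},
}

# Only sources whose licensing permits external distribution.
LICENSABLE_SOURCES = {"internal_crm", "field_team"}

def redistributable(record):
    """Keep only attributes whose lineage permits external licensing."""
    return {attr: cell["value"] for attr, cell in record.items()
            if cell["source"] in LICENSABLE_SOURCES}

print(redistributable(record))  # drops the vendor-licensed NPI
```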

It can be done, and chief data officers (CDOs) everywhere are starting to think not just about using data to improve operational efficiency within their enterprises, but about monetizing data as a significant revenue stream. To do this, they are selecting cloud-based modern data management Platforms as a Service (PaaS) that include Data as a Service and data-driven applications supporting collaborative curation.