Struggling to get your pharma data AI-ready? You are not alone. 


In the first of a series of insights exploring the AI adoption barriers faced by pharma companies, Chih Han Chen, Chief Technology Officer, explores the challenges of preparing AI-ready data and how partnering with Virtual Science AI can make the process quicker and easier. 

A large pharma client's medical affairs team recently approached us looking for more effective ways to derive insights from a data lake repository. 

The patient-related data was disorganised and spread across a wide range of formats, including text, video and audio files, making it difficult for their team to analyse it quickly and gain actionable insights. 

Our solution was to implement our digital insight management platform, which was able to collect the data, whatever format it was in, then clean it, label it, standardise it, filter out noise and analyse it.  

As a result, the client was able to:  

  • Save huge amounts of time looking for insights and answers to research questions.  
  • Enjoy a better user experience when searching through a standardised filing system. 
  • Save money and resources by not having to spend on costly in-house data cleansing.  

This successful partnership highlights how pharma companies can use vendors to overcome one of the biggest barriers to AI adoption: having high-quality, AI-ready data. 

Why this matters now 

A recent survey by Bessemer Venture Partners found that the overwhelming majority of pharma, payer and provider organisations see AI as a strategic priority and are investing heavily – with 95% believing GenAI will be transformative and 60% reporting AI budgets now outpacing IT spend. 

While different pharma companies are at different stages of their AI journey, all are lured by the technology’s ability to create efficiencies and unlock insights by automating repetitive tasks, producing reports and summarising large volumes of data. 

But it’s the readiness of that data which underpins the success of all AI tools. If it’s inaccurate, incomplete or hard to find and analyse, it can lead to outputs that are misleading, unreliable and biased. As a result, AI becomes untrustworthy, ineffective and inefficient – at best obscuring much-needed insights and at worst putting you at regulatory and reputational risk. 

What do we mean by AI-ready data? 

To get the best, most reliable outputs from AI tools, data needs to be (a short illustrative sketch follows this list): 

  • Clean: Free from errors, inconsistencies and duplication.  
  • Well-labelled: Consistent, meaningful labels and clear categorisation provide greater context and more relevant outputs.  
  • Standardised: With data increasingly coming from different sources – like with our previously mentioned client – it needs to be turned into standardised, uniform formats.  
  • Accessible and integrated: Too often data ends up in organisational silos, not only spread out across different systems, functions and geographies, but also hidden away in individual laptops and files. Standardised methods of data collection and centralised storage are essential if you want to realise your data’s full potential.  
  • Compliant: The collection, storage and use of data is a complex legal area to navigate – even more so in the highly regulated pharma industry. Companies must take into account legal, regulatory, ethical and privacy concerns in the regions in which they operate. 
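
To make these criteria more concrete, here is a minimal sketch in Python (pandas) of what the standardise, clean and label steps can look like when merging two mismatched data sources. The column names, values and label rules are invented purely for illustration – this is not Virtual Science AI's actual pipeline, and a real pharma workflow would add validation, audit trails and compliance checks on top.

```python
# Illustrative only: hypothetical records pulled from two differently
# formatted sources. Requires pandas 2.0+ (for format="mixed").
import pandas as pd

source_a = pd.DataFrame({
    "record_id": [1, 2, 2],                    # note the duplicate row
    "event_date": ["2024-01-05", "05/01/2024", "05/01/2024"],
    "channel": ["Email", "phone ", "phone "],
})
source_b = pd.DataFrame({
    "id": [3, 4],
    "date": ["2024-02-10", None],              # note the missing date
    "contact_channel": ["WEB", "email"],
})

# Standardise: map each source onto one shared schema before merging.
source_b = source_b.rename(columns={
    "id": "record_id", "date": "event_date", "contact_channel": "channel",
})
records = pd.concat([source_a, source_b], ignore_index=True)

# Clean: drop duplicates, normalise text, parse mixed date formats,
# and remove records whose dates could not be recovered.
records = records.drop_duplicates(subset="record_id")
records["channel"] = records["channel"].str.strip().str.lower()
records["event_date"] = pd.to_datetime(
    records["event_date"], format="mixed", dayfirst=True, errors="coerce"
)
records = records.dropna(subset=["event_date"])  # or route to manual review

# Label: attach a consistent category so downstream AI tools get context.
channel_labels = {"email": "written", "phone": "verbal", "web": "digital"}
records["channel_type"] = records["channel"].map(channel_labels)

print(records)
```

Even in this toy example, the payoff is visible: two incompatible schemas become one table with deduplicated rows, uniform dates and a consistent label an AI tool can filter and reason over.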

The problem with preparing AI-ready data in-house 

Despite the interest, ambition and investment in AI, many companies are still cautiously feeling their way. The Bessemer report highlights that pharma companies are still largely in the experimentation phase, running numerous proofs of concept, of which fewer than a quarter make it into production.

There are several reasons for this, including lack of in-house expertise and security concerns – topics we’ll explore in future insights. But almost half (47%) of the pharma companies surveyed said that preparing AI-ready data was one of the biggest hurdles to implementing in-house developed AI tools.   

It’s easy to see why. Most pharma companies lack the tech resources to undertake the mammoth task of cleaning and organising their data, while some are spending millions of dollars to transform systems and launch data integration projects.   

It would be much easier and more cost-effective to partner with a third-party vendor whose AI tools are specifically designed for pharma, meet regulatory requirements and, importantly, can do a lot of the data cleansing for you, so you don’t have to. 

As our CEO Tom Hughes recently wrote in Forbes, we're starting to see a shift in mindset away from lengthy, costly in-house AI development, with more strategically enlightened companies moving towards a partnership approach and the benefits of speed, efficiency and cost-effectiveness that brings. 

Unlocking an exciting future, faster 

Artificial intelligence is evolving at an incredible pace. Today, AI is largely seen as a support tool, using past data to offer information on which a human can act. But in the future it has the potential to become much more of a strategic partner, fully integrated with our activities and working side-by-side with us in real time – almost like another employee.  

In the near future, we could see an AI avatar moderating our meetings, offering advice through intuitive interactions, or joining in during a doctor-patient consultation to ask questions and make treatment recommendations.  

Before we reach that point, pharma companies must remove as many barriers to successful adoption as they can. Partnering with experts who can help you do that quickly and at scale will accelerate AI adoption, bolster your commercial advantage and help more patients get the life-changing treatments they need.

In our next insight, we’ll look at the challenges pharma companies face in terms of building in-house AI expertise. You can find out more about Virtual Science AI’s solutions here.
