When it comes to collecting and analyzing data, many biopharma companies are still in the digital dark ages. They process data with general-purpose tools such as Microsoft Excel, which is capable but not tailored to biopharma. Those that do undergo a digital transformation often install fragmented software tools that generate data in silos, requiring a great deal of manpower to manually collate, consolidate, format and chart data from disparate systems.
As the volume of data generated by the biopharma industry explodes, this fragmented approach simply won’t cut it. Imagine a room full of bioreactors generating process-monitoring data every minute, with cell culture samples taken several times a day, and a team trying to compare those bioreactors for performance and efficiency. That comparison alone would involve hundreds of thousands of data points. Today, a growing number of biopharma companies are looking to accelerate their digitalization efforts with technologies that continuously and automatically pull in data from the vast network of instruments in their laboratories, making reliable data available for innovation sooner. One application of growing interest is the “digital twin,” which pulls in data from multiple sensors and systems to model a process in silico, analyze it and provide feedback that scientists can use to optimize the process in situ.
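To put rough numbers on that scenario, here is a minimal back-of-envelope sketch in Python; the counts of bioreactors, sensors, samples and run length are illustrative assumptions, not figures from any particular facility.

```python
# Back-of-envelope estimate of process-monitoring data volume.
# All parameters below are illustrative assumptions.
bioreactors = 6               # vessels running in parallel
online_sensors = 5            # e.g. pH, DO, temperature, agitation, pressure
readings_per_day = 24 * 60    # one reading per sensor per minute
run_length_days = 14          # assumed duration of a single culture run

offline_samples_per_day = 3   # manual cell culture sampling events
offline_assays = 8            # e.g. viable cell density, titer, metabolites

online_points = bioreactors * online_sensors * readings_per_day * run_length_days
offline_points = bioreactors * offline_samples_per_day * offline_assays * run_length_days

print(f"Online monitoring points: {online_points:,}")    # 604,800
print(f"Offline sampling points:  {offline_points:,}")   # 2,016
print(f"Total data points:        {online_points + offline_points:,}")
```

Even this modest setup lands in the hundreds of thousands of data points for a single campaign, well beyond what can be collated by hand in a spreadsheet.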
It’s easy to see how biopharma companies could benefit from establishing a “digital data backbone.” A digital data backbone is designed to enable an organization to collect, structure and organize all data from all operational activities, and to facilitate timely, intelligent analysis within a single platform. A fully optimized digital backbone can automatically take data from a diverse set of instruments and contextualize it with experimental and scientific metadata for analysis – all without the need for human intervention. It can be implemented across all stages of drug development, facilitating smooth handoffs of process and product data. For example, the otherwise laborious task of creating a cell-line history report that spans teams, systems, scientists and experiments can be streamlined because all related data is available, accessible and contextualized within the same platform.
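As a rough illustration of what “contextualizing” instrument data can mean in practice, the sketch below joins raw readings to experimental metadata via a shared batch identifier; the record fields and names are hypothetical, not taken from any specific platform.

```python
from dataclasses import dataclass, asdict
from datetime import datetime

# Hypothetical records: a raw instrument reading and the experimental
# metadata a digital backbone would attach to it automatically.
@dataclass
class Reading:
    batch_id: str
    timestamp: datetime
    parameter: str
    value: float
    unit: str

@dataclass
class ExperimentContext:
    batch_id: str
    cell_line: str
    scientist: str
    process_stage: str   # e.g. "seed train", "production bioreactor"

def contextualize(readings, contexts):
    """Attach experiment metadata to each raw reading via the batch ID."""
    by_batch = {c.batch_id: c for c in contexts}
    return [{**asdict(r), **asdict(by_batch[r.batch_id])} for r in readings]

# Example: one pH reading becomes a self-describing, analysis-ready record.
readings = [Reading("B-042", datetime(2024, 5, 1, 9, 30), "pH", 7.02, "")]
contexts = [ExperimentContext("B-042", "CHO-K1 clone 7", "J. Doe", "production bioreactor")]
print(contextualize(readings, contexts)[0])
```

In a real backbone this joining would happen continuously and automatically as instruments stream data, rather than in a one-off script.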
The rapid rise in the development of cell and gene therapies makes the digital backbone all the more valuable. Nearly 3,000 cell and gene therapies are currently in development, according to the American Society of Gene and Cell Therapy. Some of these advanced therapies – particularly those that are personalized to individual patients – can be developed and launched in about a month. That development process alone can generate millions of data points very quickly. With its emphasis on accurate data transfer, high-risk material touchpoints and speed of development, cell and gene therapy manufacturing demands a platform that can centralize data and transfer information seamlessly and automatically – something that archaic information management and analysis systems simply cannot provide.
The rise of automation has sparked questions about how the role of scientists will evolve. No doubt, with data more readily available, scientists will no longer run from machine to machine to collect data and then figure out how to stitch everything together in a spreadsheet. They will have all related data at their fingertips, with confidence that the datasets comply with data integrity rules such as the ALCOA+ principles, and that the datasets are complete, including failures and terminated experiments. Capturing failures along with successes provides a fuller picture of every experiment, allowing researchers to trace the sources of poor performance trends and to gauge the true success rate of their experimental work. Ultimately, scientists will be able to use these more complete, better-calibrated datasets with artificial intelligence tools that can help them predict trends and optimize their processes.
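As a small, hypothetical illustration of why keeping terminated runs matters, the sketch below compares the success rate computed from a “clean” log with the rate over all runs; the outcomes are made up for the example.

```python
# Hypothetical experiment log; "terminated" runs are often lost when data
# lives in ad hoc spreadsheets, which inflates the apparent success rate.
runs = [
    {"id": "R1", "outcome": "success"},
    {"id": "R2", "outcome": "success"},
    {"id": "R3", "outcome": "terminated"},   # contamination, stopped early
    {"id": "R4", "outcome": "failure"},      # titer below specification
    {"id": "R5", "outcome": "success"},
]

recorded_only = [r for r in runs if r["outcome"] != "terminated"]

def success_rate(rows):
    return sum(r["outcome"] == "success" for r in rows) / len(rows)

print(f"Rate excluding terminated runs: {success_rate(recorded_only):.0%}")  # 75%
print(f"True rate over all runs:        {success_rate(runs):.0%}")           # 60%
```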
In short, scientists will be able to spend their time doing more cutting-edge science, and the digital backbone will empower them to do exactly that. With correctly structured and contextualized metadata, product data and process data in one place, they will be able to realize the full potential of advanced analytics tools and generate more powerful insights sooner.
How can companies make the switch to the digital backbone? This is not something that IT departments can drive alone – it must be championed and led by scientists and their leaders. Working in isolation, IT experts may not be fully versed in the company’s therapeutic goals, making it hard for them to envision how a digital technology could best deliver the necessary scientific and business outcomes. True digital transformation initiatives need to be driven company-wide, with IT and scientists working together toward the optimal outcomes. This cannot be taken on as a side project. It requires a coordinated, harmonized, global effort to assess the incumbent digital landscape and implement tools in a way that positions the organization at the forefront of scientific and digital advances.
This is an exciting time for patients, as genomic discoveries and advances in AI and automation converge to accelerate the discovery and development of novel therapies. Let’s embrace the digital backbone so the biopharma industry can make the most of this opportunity.