Analytics is expanding research capabilities in life sciences and solving major data-integration challenges.
FREMONT, CA: Translational medicine is an area of life-sciences research that focuses on connecting early-stage research and development with downstream clinical results to better understand how drugs and therapies affect real-world patient outcomes. The area has been viewed as a silver bullet of life sciences for years, because solving it effectively would make the delivery of new medicines from the laboratory to the clinic seamless.
To accomplish these outcomes, researchers must draw on a plethora of data from different research and clinical areas. These data sources are usually spread across firms, exist in incompatible formats, and are often inconsistently labeled, yet integrating them is essential for translational medicine to be effective. Conventionally, software development in clinical research meant that large amounts of data had to be migrated, transformed, and mapped into a common repository so that complex searches and analytics could be performed.
The first stage of the data transfer process is building a pipeline through which the data is extracted, migrated, and loaded into the new system, a process that can take months or years to complete. Once built, these pipelines are rigid and brittle: they connect data in unique ways for specific purposes. When the research questions change, the ETL pipeline often must be rebuilt or extended. Such a system struggles to remain flexible and ad hoc enough to keep up with translational medicine, where new topics and patterns of interest continually emerge as promising research avenues.
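The rigidity described above comes from the transform step, which hard-codes one mapping between source schemas and the common repository. A minimal sketch, using hypothetical source formats and column names purely for illustration:

```python
import sqlite3

# Hypothetical example: two lab systems export the same assay result
# (ALT, a liver enzyme) under different column names and units.
source_a = [{"patient": "P1", "alt_u_l": 45}]       # units: U/L
source_b = [{"subject_id": "P2", "alt_ukat": 0.9}]  # units: µkat/L

def transform(record):
    """Map each source schema onto one common shape (ALT in U/L).

    The mapping is hard-coded for one research question; a new
    question usually means rewriting it -- the rigidity at issue.
    """
    if "alt_u_l" in record:
        return (record["patient"], float(record["alt_u_l"]))
    # 1 µkat/L = 60 U/L
    return (record["subject_id"], round(record["alt_ukat"] * 60.0, 1))

# Load into a common repository so queries can span both sources.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE alt_results (patient TEXT, alt_u_l REAL)")
db.executemany("INSERT INTO alt_results VALUES (?, ?)",
               [transform(r) for r in source_a + source_b])
rows = db.execute(
    "SELECT patient, alt_u_l FROM alt_results ORDER BY patient").fetchall()
print(rows)  # [('P1', 45.0), ('P2', 54.0)]
```

If a new study needs a third source, or the same data keyed by visit instead of patient, both the `transform` function and the repository schema have to change, which is why such pipelines are rebuilt so often.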
Newer connector-based approaches avoid this rebuilding. The connectors are driven by semantic metadata and machine learning, which scan the sources and automatically present the original data schemas to the engine. Users can connect directly to data sources through a cloud-based portal, regardless of their physical location. The approach also provides nearly immediate access to a wide variety of sources and formats, sharply reducing the time and expense that other search and analytics systems spend building data pipelines and complicated mapping strategies.