October 8, 2013

Big Data to Give Clinical Trials a Big Boost

Isaac Lopez

According to the National Institutes of Health, over 80% of clinical trials fail because of enrollment delays and poor patient retention. These problems lead to insufficient data sets, stalled drug development, and ultimately billions of dollars in wasted spending by pharmaceutical companies.

One of the chief challenges of the clinical trial process is recruitment – finding the right patient at the right time. The process can often take over a year, with teams of people combing through patient records to find candidates who qualify for a trial without tripping exclusion criteria such as pre-existing conditions or conflicting medications.
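To make the matching problem concrete, here is a minimal, hypothetical sketch in Python. The field names, trial criteria, and patient records are illustrative assumptions, not anything drawn from M2Gen's actual systems; the point is simply that eligibility screening reduces to checking inclusion criteria and exclusion filters against structured patient data.

```python
# Hypothetical sketch: screening patient records against a trial's inclusion
# and exclusion criteria. All field names and values are illustrative only.

patients = [
    {"id": "P001", "age": 62, "diagnosis": "NSCLC", "conditions": ["hypertension"],
     "medications": ["metformin"]},
    {"id": "P002", "age": 45, "diagnosis": "NSCLC", "conditions": [],
     "medications": ["warfarin"]},
]

trial = {
    "min_age": 18,
    "required_diagnosis": "NSCLC",
    "excluded_conditions": {"hepatitis", "renal failure"},
    "excluded_medications": {"warfarin"},  # e.g., anticoagulants conflict with the protocol
}

def is_eligible(patient, trial):
    """Return True only if the patient meets inclusion criteria and trips no exclusion filter."""
    if patient["age"] < trial["min_age"]:
        return False
    if patient["diagnosis"] != trial["required_diagnosis"]:
        return False
    if set(patient["conditions"]) & trial["excluded_conditions"]:
        return False
    if set(patient["medications"]) & trial["excluded_medications"]:
        return False
    return True

candidates = [p["id"] for p in patients if is_eligible(p, trial)]
print(candidates)  # ['P001'] -- P002 is excluded by a conflicting medication
```

Done by hand against paper charts, this same check is slow and error-prone; done against a normalized repository, it becomes a query.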

“Traditional recruitment and enrollment into clinical trials is unfocused, laborious and expensive,” Lisa Griffin Vincent, president of PatientPoint Outcomes Research Solutions recently told InformationWeek. “Recruitment methods typically include clinic staff poring through paper clinical charts page by page to identify patients that may meet the trial eligibility criteria. Access to big data can transform and accelerate the clinical trials process if leveraged in the right ways.”

This, however, is a challenge because of the way that data is siloed behind the walls of various healthcare providers, not to mention the wildly divergent formatting of such data from organization to organization. Companies like M2Gen are popping up to tackle this challenge by systematically collecting clinical data on consented patients, normalizing the data, and transforming it into a highly searchable data repository that researchers and clinicians can use for their discovery processes.
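The normalization step is where much of the work lies. The sketch below, again using made-up record formats rather than any real provider's schema, shows the general idea: map each source's divergent field names and units into one common schema so that the pooled records can be searched together.

```python
# Hypothetical sketch: normalizing patient records from two providers with
# divergent field names and units into one common, searchable schema.
from datetime import datetime

provider_a_record = {"patient_name": "Jane Doe", "dob": "1961-04-02",
                     "dx_code": "C34.1", "weight_lb": 150}
provider_b_record = {"name": "John Roe", "birth_date": "04/02/1958",
                     "diagnosis": "C34.1", "weight_kg": 80}

def normalize_a(rec):
    return {
        "name": rec["patient_name"],
        "birth_date": datetime.strptime(rec["dob"], "%Y-%m-%d").date(),
        "diagnosis_icd10": rec["dx_code"],
        "weight_kg": round(rec["weight_lb"] * 0.4536, 1),
    }

def normalize_b(rec):
    return {
        "name": rec["name"],
        "birth_date": datetime.strptime(rec["birth_date"], "%m/%d/%Y").date(),
        "diagnosis_icd10": rec["diagnosis"],
        "weight_kg": float(rec["weight_kg"]),
    }

# Once every source maps into the same schema, records from many institutions
# can be indexed and queried together, e.g. all patients with ICD-10 code C34.1.
repository = [normalize_a(provider_a_record), normalize_b(provider_b_record)]
matches = [r["name"] for r in repository if r["diagnosis_icd10"] == "C34.1"]
print(matches)
```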

“This certainly isn’t something that one system can do effectively,” says Mark Hulse, CIO of the Moffitt Cancer Center, the parent organization of the for-profit M2Gen. “There aren’t a sufficient number of patients in total. You need to combine data from many institutions,” he told InformationWeek.

Along with the Moffitt Cancer Center, M2Gen has founded what it calls the Total Cancer Care (TCC) Consortium, which seeks to gather the consent of patients and add their data to the repository. The consortium has nearly 20 major cancer clinics and hospitals in its network, including the Carolinas Medical Center, the Center for Cancer Care & Research/Watson Clinic in Lakeland, Florida, and St. Vincent Hospital in Indianapolis, Indiana. At the time of this report, M2Gen boasts 100,371 consented patients in its database.

While data collection and matching is a chief function of the company, it’s not all about bits and bytes. M2Gen goes granular by collecting and preserving tissue samples from the cancer patients in its program. These samples are analyzed, and data on specific genetic characteristics is added to the database, helping drug companies match candidates against far more precise molecular criteria.

“There’s a lot of time wasted in clinical trials,” said Hulse. “If we can mine the data in the right way, it’s good on the patient side because it can get drugs to the market much faster, and it’s good on the business side because it saves the pharmaceutical companies money.”

It also shows that the market can react even when culture and regulations aren’t flexible enough to meet the technology halfway. While there are reservations in both the United States and Europe over sharing health data, efforts like M2Gen’s are making progress in pooling the data needed to combat the complexities of cancer.

Related items:

Healthcare IT Activist Releases 30 Million Edge Prescription Dataset 

Open Source Graph Tool STEMs the Battle Against Malaria 

Doctors Look to Medical Informatics for Novel Cures 
