Article | May 5, 2020
Source: Medidata Solutions
COVID-19 has forced a reckoning in how we conduct all aspects of our lives, and clinical trials are no exception. Patient safety is the foundational principle of any trial, and suddenly we have a situation where having patients come into clinical sites for their scheduled visits poses a significant risk to their health and well-being. Vital data on the patient's experience in the trial would typically be gathered during these visits, giving essential insight into the safety and effectiveness of treatments. We must now quickly determine how this data might instead be captured in the safety of the patient's own home.
The entire industry is wrestling with this issue, but it is also working together to find viable paths forward. Regulators have already expressed a level of pragmatism and flexibility that is traditionally unheard of in the ultra-conservative realm of experimental medicines. Outcomes research and electronic data capture groups are challenged to be equally pragmatic and flexible. Medidata has presented solutions for changing how patient data is captured in live studies running on its platform, but the broader industry has also been working on ways to address this issue.
Medidata is a member of the Critical Path Institute ePRO and PRO Consortiums—pre-competitive, collaborative groups of eCOA providers and sponsors that work together to advance the science of electronic data capture. Today we released our joint assessment and guidance for those using eCOA in clinical trials. The documentation includes:
Core principles that should underlie any potential solution
An overview of the current regulatory guidance
Suggestions for managing copyrighted questionnaires
Considerations for IRBs
A risk mitigation table
A decision tree outlining potential solutions with pros and cons
The Consortiums have been forced to challenge some of the sacred cows of outcomes research and eCOA, giving serious consideration to changing modes of administration mid-study, administering questionnaires in ways they were never developed for, and implementing processes that only four weeks ago would have been considered suboptimal. None of the available solutions is perfect, but we are faced with a decision of whether some imperfect data is better than no data at all. If we can clearly identify that potentially imperfect data and document how it was captured, we can make informed decisions down the road about how we use it.