One issue that emerges is the insidious effect such payments may have on the “nuances” and “creative license” a physician might exercise in reporting the data.
In fact, the entire clinical trials process becomes suspect, perhaps even more so than in the pharmaceutical industry. Here is why.
The following story appeared today in my local paper, the Philadelphia Inquirer:
“After hip replacements, a lawsuit: Implant company paid Penn surgeon consulting fees.”
Fed up with the constant pain in her hips, Katrina McKenzie took her surgeon’s advice and had them replaced with experimental implants. [Civil docket report here – ed.]
The 31-year-old from Galloway, N.J., who agreed to participate in a clinical study, knew there was a risk that her new hips could fail.
But she didn’t know that the manufacturer financing the study, Smith & Nephew, was also paying her surgeon tens of thousands of dollars a year as a consultant.

In recent years, such payments to doctors from medical implant manufacturers and drug companies have become increasingly controversial.
Some leading orthopedic surgeons receive six- and seven-figure payments annually, in the form of royalties, consulting deals and speaking fees from the makers of artificial hips and knees.
… Garino and Penn responded, in court filings, that McKenzie received good care and that the payments had no effect on her treatment.

“On the merits of the medicine, we are going to vigorously defend this case,” said Susan E. Phillips, spokeswoman for the Penn health system. …
In his deposition, Garino said that while Smith & Nephew sponsored the study, he did not receive any direct financial benefit. The company paid Penn for the surgeon’s time and expenses.
However, under questioning from one of McKenzie’s lawyers, Garino acknowledged that he was being paid for other work by Smith & Nephew – which he did not disclose.
At the time of McKenzie’s surgery, Garino estimated, he was making “$20,000 to $50,000 annually” as a Smith & Nephew consultant.
Note the statement made by the treating surgeon that “the payments had no effect on this patient’s treatment.”
Let me show how such a statement, if it is being reported correctly, is disingenuous at best and downright sleazy at worst.
I shall do this via a series of questions. First, however, read this case study on the difficulties of building specialized information systems to evaluate complex new treatments and technologies: link.
My questions are:
- Who builds the datasets and analytics used in the evaluation of new medical devices such as the implants mentioned in the Inquirer article? This is an extremely complex process, filled with nuance and “devil in the details” issues due to the complexity of biomedical science.
- Is it the device vendor/merchant?
- Could that be a problem in terms of the dataset elements, definitions, terminologies, data quality, and other factors affecting the transparency of results?
- Are medical informatics professionals – that is, experts formally trained in this activity – or others with comparable expertise involved at the vendor shop or at the healthcare organization then deploying the new devices?
- If not, why not?
In fact, a comprehensive dataset to study invasive cardiology alone ran to several hundred data elements. It took months of effort working with a team of committed invasive cardiologists to develop a dataset matching the clinical realities of the field, as well as to teach quality recordkeeping (i.e., each clinician needed to have the same understanding of what was meant by each term, down to a fine-grained level).
Each case was also reviewed by a neutral “data quality” evaluator before being entered into a robust database with advanced metrics, again developed and refined over several months by the same team.
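To make the idea of a data dictionary with fine-grained, shared term definitions concrete, here is a minimal sketch of the kind of dictionary-driven validation such a team might build. The field names, controlled vocabularies, and ranges below are illustrative assumptions, not the actual invasive-cardiology dataset:

```python
# Hypothetical data dictionary: each element either has a controlled
# vocabulary (so every clinician checks the same box for the same finding)
# or is numeric with a plausibility range. Names are illustrative only.
DATA_DICTIONARY = {
    "lesion_type": {"A", "B1", "B2", "C"},
    "complication": {"none", "dissection", "perforation", "abrupt_closure"},
    "ejection_fraction": None,  # numeric; range-checked below
}

NUMERIC_RANGES = {"ejection_fraction": (5.0, 80.0)}

def validate_case(case: dict) -> list[str]:
    """Return a list of QC problems found in one case record."""
    problems = []
    for field, allowed in DATA_DICTIONARY.items():
        if field not in case:
            problems.append(f"missing field: {field}")
            continue
        value = case[field]
        # Categorical fields must use the agreed vocabulary exactly
        if allowed is not None and value not in allowed:
            problems.append(f"{field}: '{value}' not in controlled vocabulary")
        # Numeric fields must fall in a clinically plausible range
        if field in NUMERIC_RANGES:
            lo, hi = NUMERIC_RANGES[field]
            if not (lo <= value <= hi):
                problems.append(f"{field}: {value} outside plausible range")
    return problems
```

The point is not the code but the discipline it encodes: every element has one agreed definition, and a neutral check runs before a record ever reaches the analytic database.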
As the medical informaticist, I steered the development based on my knowledge of biomedical information science, relational database technology, and of medicine. That raises more questions:
- What training do the surgeons who implant orthopedic devices get in recording their data?
- Who does this training? What are their backgrounds?
- What motivations might they have to “go easy on the reporting” – i.e., to blur their findings and shade “which box they check on the data collection form” when a potentially bad thing happens? In medical data this is easy to do, even innocently, let alone when one has motivation – e.g., when one is being paid handsomely by the device maker or seller and consciously or unconsciously seeks not to harm the gravy train.
- Do neutral QC person(s) check for this possibility?
- If not, why not?
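One way a neutral QC reviewer can probe for the “which box they check” problem is to cross-check a soft coding choice against harder, less deniable fields elsewhere in the record. A hypothetical sketch of such a consistency rule follows; the field names are illustrative assumptions:

```python
# Hypothetical QC cross-check: a "complication: none" coding is a judgment
# call, but a recorded revision surgery or early readmission is not.
# Disagreement between them is a flag for neutral review, not proof of spin.
def flag_possible_underreporting(case: dict) -> list[str]:
    flags = []
    if case.get("complication") == "none":
        if case.get("revision_surgery"):
            flags.append("complication coded 'none' but revision surgery recorded")
        if case.get("unplanned_readmission_days", 999) <= 30:
            flags.append("complication coded 'none' but readmitted within 30 days")
    return flags
```

Rules like this are exactly what a vendor-funded trial has little incentive to build, and what a neutral data-quality evaluator would insist on.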
I suggest to malpractice attorneys that they seek answers to these questions. It is negligent for a device manufacturer and medical center not to employ the very best “standard of care” regarding clinical datasets, in my opinion.
There are certainly many, many issues surrounding the quality of pharmaceutical clinical trials data, which to a certain extent are done in a secretive manner. Device trials are likely worse.
Who polices the clinical trials data for, say, the new implants that caused the problems for the patient in the Philadelphia Inquirer article?
The answer likely will not be pretty.
In summary, payments to doctors not only may affect their judgment on which devices to use and how often, but also on reporting the issues (using datasets that themselves may be compromised by vendor involvement). “Reportable” issues are often very subtle and amenable to being masked by “spin”, “blur” and “invisibility.”
I believe this issue is under-represented in considering the effects of payments to doctors, and in litigation when bad things happen.
This needs to change.