Failing to Report Adverse Effects of Treatments

We have frequently advocated the evidence-based medicine (EBM) approach to improve the care of individual patients and to improve health care quality at a reasonable cost for populations. Evidence-based medicine is not just medicine based on some sort of evidence. As Dr David Sackett and colleagues wrote [Sackett DL, Rosenberg WM, Muir Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312: 71-72]:

Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

One can find other definitions of EBM, but nearly all emphasize that the approach is designed to appropriately apply results from the best clinical research, critically reviewed, to the individual patient, taking into account that patient's clinical characteristics and personal values.

When making decisions about treatments for individual patients, the EBM approach calls for using the best available evidence about the possible benefits and harms of treatment, so that the treatment chosen is most likely to maximize benefits and minimize harms for that patient. The better the evidence about the specific benefits and harms applicable to a particular patient, the more likely a decision based on that evidence is to produce the best possible outcome.

A new study in the Archives of Internal Medicine focused on how articles report adverse effects found in clinical trials [Pitrou I, Boutron I, Ahmad N, et al. Reporting of safety results in published reports of randomized controlled trials. Arch Intern Med 2009; 169: 1756-1761]. The results were not encouraging.

The investigators assessed 133 articles reporting the results of randomized controlled trials published in 2006 in six English-language journals with high impact factors, that is, among the most prestigious journals, including the New England Journal of Medicine, Lancet, JAMA, British Medical Journal, and Annals of Internal Medicine. They excluded trials with less common designs, such as randomized cross-over trials. The majority of trials (54.9%) had private funding, or private funding mixed with public funding.

The major results were:
15/133 (11.3%) did not report anything about adverse events
36/133 (27.1%) did not report information about the severity of adverse events
63/133 (47.4%) did not report how many patients had to withdraw from the trial due to adverse events
43/133 (32.3%) had major limitations in how they reported adverse events, e.g., reporting only the most common events, even though most trials do not enroll enough patients to detect important but uncommon events (see the sketch below).
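
To make the sample-size point concrete, here is a minimal sketch (my own illustration, not an analysis from the Pitrou paper): if an adverse event occurs with probability p per patient, the chance that a trial of n patients observes at least one case is 1 - (1 - p)^n. This yields the familiar "rule of three": roughly 3/p patients are needed to have about a 95% chance of seeing even a single case.

```python
# Hypothetical illustration: probability that a trial of n patients
# observes at least one case of an adverse event with per-patient rate p.
# P(at least one event) = 1 - (1 - p)^n

def prob_at_least_one(p: float, n: int) -> float:
    """Probability that n independent patients yield at least one event of rate p."""
    return 1.0 - (1.0 - p) ** n

for n in (300, 1000, 3000):        # plausible trial sizes (assumed, not from the paper)
    for denom in (100, 1000):      # event rates of 1 in 100 and 1 in 1,000
        p = 1 / denom
        print(f"n={n:4d}, rate 1/{denom}: P(>=1 event) = {prob_at_least_one(p, n):.2f}")

# For a 1-in-1,000 event, even 1,000 patients give only about a 63% chance
# of observing a single case; roughly 3,000 patients (about 3/p) are needed
# to reach ~95%.
```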

The authors concluded, "the reporting of harm remains inadequate."

An accompanying editorial [Ioannidis JP. Adverse events in randomized controlled trials: neglected, distorted, and silenced. Arch Intern Med 2009; 169: 1737-1739] raised concerns about why the reporting of adverse events is so shoddy:
Perhaps conflicts of interest and marketing rather than science have shaped even the often accepted standard that randomized trials study primarily effectiveness, whereas information on harms from medical interventions can wait for case reports and nonrandomized studies. Nonrandomized data are very helpful, but they have limitations, and many harms will remain long undetected if we just wait for spontaneous reporting and other nonrandomized research to reveal them. In an environment where effectiveness benefits are small and shrinking, the randomized trials agenda may need to reprogram its whole mission, including its reporting, toward better understanding of harms.

Pitrou and colleagues have added to our knowledge of the limitations of the publicly available evidence about treatments on which physicians and patients must rely when making treatment decisions. Even reports of studies with the best designs (randomized controlled trials) in the best journals seem to omit important information about the harms of the treatments they test.

It appears that the majority of the reports that Pitrou and colleagues studied received "private" funding, presumably meaning most were funded by drug, biotechnology, or device companies and were likely meant to evaluate the sponsoring companies' products. However, note that this article did not analyze the relationship of funding source to the completeness of information about adverse effects.

Nonetheless, on Health Care Renewal we have discussed many cases in which research has been manipulated in favor of the vested interests of research sponsors (funders), or in which research unfavorable to those interests has been suppressed. Therefore, it seems plausible that sponsors' influence over how clinical trials are designed, implemented, analyzed, and reported may reduce the information about the adverse effects of their products that appears in journal articles. Trials may be designed not to gather information about adverse events. Analyses of some adverse events, or of some aspects of these events, may not be performed, or if performed, not reported. The evidence from clinical research available to guide treatment decisions consequently may exaggerate the ratios of certain drugs' and devices' benefits to their harms.

Patients may thus receive treatments which are more likely to hurt than to help them, and populations of patients may be overtreated. Impressions that treatments are safer than they actually are may allow their manufacturers to overprice them, so health care costs may rise.

The article by Pitrou and colleagues adds to concerns that we physicians may too often really be practicing pseudo-evidence-based medicine when we think we are practicing evidence-based medicine. We cannot judiciously balance the benefits and harms of treatments to make the best decisions for patients when evidence about harms is hidden. Clearly, as Ioannidis wrote, we need to "reprogram." However, what we need to reprogram is our current dependence on drug and device manufacturers to pay for (and hence de facto run) evaluations of their own products. If health care reformers really want to improve quality while controlling costs, this is the sort of reform they need to start considering.

NB - See also the comments by Merrill Goozner in the GoozNews blog.

Who Should Sponsor Comparative Effectiveness Research?

We have tried to argue why comparative effectiveness research is a good idea. To cut and paste what I wrote in a previous post:

Physicians spend a lot of time trying to figure out the best treatments for particular patients' problems. Doing so is often hard. In many situations, there are many plausible treatments, but the trick is picking the one most likely to do the most good and least harm for a particular patient. Ideally, this is where evidence based medicine comes in. But the biggest problem with using the EBM approach is that often the best available evidence does not help much. In particular, for many clinical problems, and for many sorts of patients, no one has ever done a good quality study that compares the plausible treatments for those problems and those patients. When the only studies done compared individual treatments to placebos, and when even those were restricted to narrow patient populations unlike those patients usually seen in daily practice, physicians are left juggling oranges, tomatoes, and carburetors.
Comparative effectiveness studies are simply studies that compare plausible treatments that could be used for patients with particular problems, and which are designed to be generalizable to the sorts of patients usually seen in practice. As a physician, I welcome such studies, because they may provide very useful information that could help me select the optimal treatments for individual patients.

Because I believe that comparative effectiveness studies could be very useful to improve patient care, it upsets me to see this particular kind of clinical study get caught in political, ideological, and economic battles.

In particular, we have discussed a number of high profile attacks on comparative effectiveness research, which often have featured arguments based on logical fallacies. While some of the people making the attacks have assumed a conservative or libertarian ideological mantle, one wonders whether the attacks were more driven by personal financial interests. For example, see our blog posts here, here, here, and here. On the other hand, we discussed a clear-headed defense of comparative effectiveness research by a well-known economist most would regard as libertarian here.

Comparative effectiveness research has been discussed as an element of health care reform in the US. It turns out that the current version of the health care reform bill in the US Senate has a provision to create a Patient-Centered Outcomes Research Institute, which presumably would become the major organization sponsoring comparative effectiveness research.

This institute, however, would not be a government agency (despite the name that makes it sound like it would be part of the National Institutes of Health). Moreover, here is a description of the Board of Governors who would run the institute, from the current version of the bill:

BOARD OF GOVERNORS.—
(1) IN GENERAL.—The Institute shall have a Board of Governors, which shall consist of 15 members appointed by the Comptroller General of the United States not later than 6 months after the date of enactment of this section, as follows:
(A) 3 members representing patients and health care consumers.
(B) 3 members representing practicing physicians, including surgeons.
(C) 3 members representing private payers, of whom at least 1 member shall represent health insurance issuers and at least 1 member shall represent employers who self-insure employee benefits.
(D) 3 members representing pharmaceutical, device, and diagnostic manufacturers or developers.
(E) 1 member representing nonprofit organizations involved in health services research.
(F) 1 member representing organizations that focus on quality measurement and improvement or decision support.
(G) 1 member representing independent health services researchers.


Thus, only 3/15 members of the governing board would represent the patients who ultimately reap the benefits or suffer the harms produced by medical diagnosis and treatment. Further, 6/15 members would represent for-profit corporations that stand to make more or less money depending on how particular comparative effectiveness studies come out. Also, 3/15 members would be physicians, some of whom may get paid more to deliver particular treatments (e.g., procedures) than others (e.g., providing advice about diet and exercise).
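
As a trivial check on that arithmetic, here is a short Python sketch tabulating the seat counts quoted from the bill above. The grouping of payers and manufacturers as having direct financial stakes is my reading, not the bill's language:

```python
# Seats on the proposed Board of Governors, from the bill text quoted above.
seats = {
    "patients and consumers": 3,
    "practicing physicians": 3,
    "private payers": 3,
    "manufacturers/developers": 3,
    "nonprofit health services research": 1,
    "quality measurement/decision support": 1,
    "independent researchers": 1,
}

total = sum(seats.values())  # 15

# My grouping (an assumption): payers and manufacturers have direct
# financial stakes in how comparative effectiveness studies turn out.
for_profit = seats["private payers"] + seats["manufacturers/developers"]

print(f"patients: {seats['patients and consumers']}/{total} "
      f"({seats['patients and consumers'] / total:.0%})")
print(f"for-profit corporations: {for_profit}/{total} ({for_profit / total:.0%})")
```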

We often discuss how clinical research sponsored by organizations with a vested interest in the research favoring their products or services may be manipulated to serve those interests, and sometimes suppressed when it does not. In the US, there are few unconflicted sources of funding for comparative effectiveness research, and the funds they offer are sparse. (The most significant current source is the Agency for Healthcare Research and Quality, AHRQ. For full disclosure, I have been an ad hoc reviewer of grants for that agency.)

The current draft legislation would create the largest potential sponsor of comparative effectiveness research, but would make that organization report to representatives of for-profit companies whose profits may be affected by the results of such research. In my humble opinion, this is not much of an advance. Comparative effectiveness research controlled by corporations that stand to profit or lose depending on its results will forever be suspect.

If the government is going to support comparative effectiveness research, it ought to make sure such research is not run by people with vested interests in the outcomes coming out a certain way. I may be biased myself, but why not let the research be sponsored by AHRQ, an agency with relevant experience and no axe to grind vis-à-vis any particular product or service?
