VIRUSMYTH HOMEPAGE


THE ROLES OF SCIENCE AND OF POLICY ANALYSIS IN HIV/AIDS RESEARCH

By George Kent

Jan. 1999


Science has provided a shaky foundation for modern understandings of the HIV/AIDS phenomenon. The problem is not always bad science. Often good work has been devoted to seeking answers to the wrong questions. As more of a policy analyst than a scientist, I want to suggest that it is important to put science—even good science—in its place. How should we understand the role of science, and where should its questions come from? My premise is that science is about asking what is so and why it is so, while policy analysis is about asking what should be done.

Applied Science

The core concern here is applied science, not pure science. People doing pure science do not justify their work in terms of immediate practicalities. Applied science, however, is the science that helps us to make difficult and important decisions, and thus is designed to support policy analysis. In the health field, applied science helps us to decide what course of action to take to help make people healthier. In medicine in particular, it helps us to decide what medical interventions will help to make people healthier.

If applied science is to help us make difficult decisions, we need to know what alternative courses of action are being contemplated. Thus, good applied science has its questions framed within the context of a policy analysis that identifies some action alternatives. The policy analyst then asks, what is it that I need to know to make a wise choice among these alternatives?

For example, a nutritionist may want to know whether to recommend a low salt diet for a particular population. The required research would then involve recommending a low salt diet to a test group and determining whether that factor makes any significant difference in terms of morbidity and mortality outcomes, in comparison with another similar group that did not receive this recommendation.

The idea for this recommendation may have come out of observations on different populations and noting that the population that had lower salt intake had fewer or less intense health problems. However, that observation on an existing population does not show the impact of an intervening action. The intervention—recommending changes in the diet—would still need to be tested to determine its effects.

In medicine, applied science is needed when some specific medical intervention is contemplated, and research must be undertaken to determine whether that intervention in fact leads to desired health outcomes. From the point of view of the policy analyst, it is nice, but not essential, to know how and why the intervention works. As a policy analyst, I am interested in evidence that shows, for example, that aspirin makes headaches go away, or that the Salk vaccine in fact helps to limit the spread of polio in actual field situations. While I may be curious about how and why these things work, I don’t really have to know that. In my role as a policy analyst, the first thing I want to see is evidence of the effectiveness of proposed therapies in producing net health benefits in actual field situations. The science explaining that effectiveness is secondary.

The standard model of applied medical research is that we target a particular health condition, establish a definition of what constitutes an improvement in that condition, and then propose an intervention that is hypothesized to lead to improvement in that condition. The reasoning that led to the proposal may have relied on good science, poor science, or no science, but once the hypothesis has been formulated that reasoning is no longer important. The question is: does it work?

There must be a specified marker or indicator for the targeted condition that is widely accepted. If the definition keeps changing, we have no hope of counting cases, explaining them, or remedying them. Science based on ever-changing definitions is weird science.

In the AIDS discourse, we cannot meaningfully debate how to explain why AIDS occurs or how it might be cured unless we come to an agreement on what "it" is. What are its markers?

Suppose, for the moment, that we accept that AIDS is a particular kind of deficiency, and imagine that a valid, universally accepted indicator for it is, say, low "Q" counts. The acceptance of this indicator presumably would be based on clear evidence that people who have low Q counts tend to be less healthy than those who do not.

The task then is to find a medical intervention that will increase Q counts, and thus presumably lead to having healthier people. Initial research on some proposed intervention—call it "ABC"—might then indicate that it does in fact reduce Q counts. This would be a useful finding, but further research would still be needed. It would now be important to find out if the ABC treatment does in fact lead to having healthier people under a broad variety of field conditions. Of course in making this assessment we must take account of possible undesired effects as well, and not look only for the desired effects.

If a positive net effect is convincingly demonstrated I, as a policy analyst, no longer care much about the role of Q counts. That is only an intervening variable. It is something inside the "black box" that we call the human body. As a policy analyst, I am concerned only with how inputs relate to outputs, and not with the mechanics of what goes on in between. You as a scientist may have needed to know about the internal workings in order to arrive at the proposal that ABC might work, but once it is proposed, the critical test has to do with outcomes, not with the internal mechanism.

Breastfeeding

These concerns can be illustrated by reference to the question of how an HIV-positive mother should feed her infant.

Many people, concerned about the risk of virus transmission through breastmilk, are advocating formula feeding. They tend to neglect the fact that formula feeding has its own risks. Researchers have been preoccupied with assessing the mechanics and the likelihood of transmission of the virus through breastmilk, but there is practically no discussion of the consequences of that transmission. In the absence of explicit information, people tend to assume the worst.

For the purposes of formulating feeding advice, however, it is not necessary to know the likelihood of virus transmission via breastfeeding. To guide policy as to whether an HIV-positive mother should breastfeed or use some other specific feeding procedure, we need to know and compare the consequences, in terms of the infant's health, that are likely from taking each of these courses of action. HIV transmission via breastfeeding is an intervening variable that need not be visible in the analysis. For policy purposes, the research needs to focus on likely consequences of the proposed action for the infant (and possibly the mother), and not on the intervening mechanisms and likelihood of virus transmission. What needs to be known is how the health prospects for breastfed infants of HIV positive mothers differ from the health prospects of those who follow some specific alternative feeding strategy.

Predictor Markers

Suppose it has been shown that intervention ABC can improve health outcomes for those who have a particular condition, or who can be predicted as likely to get that condition. To whom should that intervention be provided? It may be impractical or unwise to provide ABC for the entire population as a kind of prophylactic. Arguably, it should be administered only to either (a) those who have the condition, or (b) those who predictably are going to have that condition. In the second category, the objective presumably is to prevent the fulfillment of that prediction or, if the condition does occur, to lessen its impact.

Some marker on which we have already agreed, such as a low Q count, presumably identifies those who already have the condition. How do we identify people in the second category, those who are going to have the condition? Hopefully we can find another marker, a predictor marker—call it "XYZ"—which reliably identifies those who will at some later time have the condition in question.

You may have fascinating arguments as to why you think XYZ predicts the condition. However, as a stern policy analyst, I can be satisfied only with a clear demonstration of that linkage, based on field studies. Following the supposed norms of my narrow-minded profession, I don’t care much why they are linked if you can in fact demonstrate that they are linked.

In my view, it is only when you can demonstrate that linkage that application of the ABC intervention to XYZ individuals would be warranted. However, even then I would want trials to test the hypothesis that intervention ABC applied to XYZ-marked individuals in fact leads to better health outcomes. Until that evidence is provided, the research remains incomplete, and the untested intervention should not be used in the field.

On the basis of this policy analytical perspective, we have the following critical questions to ask about HIV/AIDS research:

  1. What is AIDS? Is this the key health outcome that needs to be managed? If so, what are its definitive markers? Is there consensus on this, and does that consensus remain fixed over time?
  2. What interventions are recommended for dealing with AIDS?
  3. What evidence, in terms of actual health outcomes, is there to support this intervention? Is there evidence not only from controlled clinical trials, but also from practice in the field?
  4. What is HIV? What are its definitive markers?
  5. What is HIV’s demonstrated relationship to AIDS?
  6. What interventions are recommended for those with HIV?
  7. What evidence is there to support those interventions in terms of practice in the field?

In addressing these questions I, in the role of narrow-minded policy analyst, do not want to hear about any hypotheses or explanations about how things work internally, within the human body. I don’t want to hear about laboratory work. I want to see evidence of strong associations between specific interventions and specific health results out in the field. On the basis of this demanding standard, I would like to see clear answers to these questions, and thus know that we are looking at sound applied science, and not weird science.
