What Role Do Values Play in Scientific Inquiry?

The idea that science is a “value-free” enterprise is deeply entrenched. “Under standard conditions, water boils at 100°C.” This and countless other facts about nature are mind-independent; that is, they do not depend on what you or I think or feel. And the procedures by which we discover such facts are available to and respected by a diverse public, man or woman, black or white, rich or poor. It may seem, then, that the activities and results of science are inherently insulated from racism, sexism, political agendas, financial interests, and other value-laden biases that permeate the larger social context. Some even vigorously insist on keeping values out of science.

Do you agree? Many philosophers of science do not. Indeed, the idea that science is or should aim to be value free — even as an ideal — has been widely challenged in recent decades, with some arguing that values in fact influence scientific practice in all manner of ways. I would go so far as to say that this is a feature of science that we cannot afford to ignore.

Getting Beyond the Value-Free Ideal
Value judgments are clearly involved in decisions about the application of scientific findings, about how we ought to use the insights of scientists. But almost everyone acknowledges that values also influence certain aspects of scientific practice itself. For instance, values shape a scientist’s choices about whether to inquire into nature in the first place, which phenomena are significant enough to study, what methods to adopt, and how limited resources should be allocated in conducting research. There are, furthermore, questions about moral responsibilities in practicing science with integrity, about the ethical constraints we impose on research, and about how the personal biases and background assumptions of scientists can affect their work in subtle ways. Values might also influence the choices scientists must make about how to frame their research questions, which concepts to employ in their theories, and how to describe their observations.

If we accept that values play at least some role in scientific practice, a case can be made for not treating science as if it were a value-free enterprise. Rather than trying to eliminate values from science, we should attend self-consciously to the particular values that exert an influence on scientific practice and facilitate public discussion about them. Being mindful of the potential influences of social, political, moral, and religious values, we should squarely face questions about the most effective ways both to critique bad or illegitimate influences and to decide which values should play a role at those points where value judgments are appropriate or even necessary.

One might argue that, far from compromising the objectivity of science, recognizing the many ways in which aspects of scientific practice are value-laden is an important step to securing it. Ensuring that the empirically grounded methods and the many other processes on which science relies are subject to intense public scrutiny from a diverse range of interests is at least part of what allows science to yield results that are more, rather than less, objective than other types of inquiry.

The Nature of Scientific Reasoning
What about the content of science? If the reasoning involved in formulating scientific conclusions or in appraising scientific theories — deciding which theory is correct — were even partially shaped by values, wouldn’t we end up with lousy science, with subjective opinions rather than scientific facts?

According to an influential picture of science, scientists put forward theories with testable predictions and then attempt to refute — or “falsify” — those theories through experiments. Theories that withstand rigorous attempts at falsification are the ones we should accept. This falsificationist picture, formulated by the philosopher Karl Popper, seems to follow a clear-cut logical pattern, the rule logicians call “modus tollens”: “If H, then O; not O; therefore, not H.” Consider, for example, a specific prediction entailed by Maxwell’s equations for electricity and magnetism: “If Maxwell’s equations are correct, then a test charge will respond in such and such a manner in the presence of this electric field.” If our observations fail to match the prediction, it might seem that Maxwell’s equations are in trouble. Even more to the point, it might seem that pure logic and observational evidence together can decisively falsify a scientific theory, so that values play no necessary role in judging which scientific theories we should accept.

However, cases of real-world experimental testing are typically more complex than this, and not only because scientists often continue to work with, rather than discard, theories that face contrary evidence. The deeper reason is that scientific hypotheses are not tested in isolation from other theories and background assumptions. When a scientist tests whether water boils at 100°C, say, she must make a number of assumptions, many of which remain implicit. For example, in addition to assuming that her laboratory conditions are sufficiently stable and standardized, that the sample water is sufficiently pure and that the quantity of air dissolved in it falls within a specified range, and that her instruments for measuring temperature and air pressure are reliable, she also assumes that the theories underwriting these claims and explaining how her instruments work — thermodynamics and physical chemistry, in this case — are correct. To return to the logical characterization above, the relevant antecedent is not “H” by itself, but rather “H” together with a host of other, “auxiliary” assumptions. (Logically, we would represent this not by “If H, then O,” but by “If H and A1 … and An, then O.”) Thus, when a scientific theory yields a prediction that conflicts with observations — say, the water does not boil at 100°C — all the scientist can deduce (assuming the observations are reliable and the interpretation of the evidence is satisfactory) is that something is amiss in that total package of theory and background assumptions. Something is wrong; but the observation does not tell the scientist precisely what. Is it her hypothesis or one of her many assumptions? Does water actually boil at some other temperature? Is there perhaps a degree of indefiniteness in the phenomenon she is trying to characterize? Were her laboratory conditions inadequate? Or, less plausibly, is the theory of thermodynamics itself wrong?
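
Put schematically, the contrast is this; the block below is only a compact restatement of the logic just described, not an addition to it:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Naive falsification (modus tollens): if H then O; not O; therefore not H.
\[
  \bigl( H \rightarrow O \bigr), \quad \neg O \;\;\therefore\;\; \neg H
\]
% With auxiliary assumptions A_1, ..., A_n in play, the failed prediction
% refutes only the total package, not H by itself:
\[
  \bigl( (H \wedge A_1 \wedge \cdots \wedge A_n) \rightarrow O \bigr), \quad \neg O
  \;\;\therefore\;\; \neg (H \wedge A_1 \wedge \cdots \wedge A_n)
  \;\equiv\; \neg H \vee \neg A_1 \vee \cdots \vee \neg A_n
\]
\end{document}
```

The disjunction on the right is all that the failed prediction licenses; which disjunct to blame is left to the scientist’s judgment.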

In principle, there might be many ways to adjust the theoretical package to restore the fit with the evidence. But because, strictly speaking, observation and logic alone are not enough to tell us what is wrong, this task falls to the good sense of the practicing scientist. For instance, it would not make good sense to throw out the theory of thermodynamics based simply on the observation that the water in the scientist’s lab didn’t boil at 100°C. The important point is that in making this decision about which part of the theoretical package to reject, the scientist is making a kind of value judgment: “It is more plausible that my laboratory conditions were inadequate or that my own hypothesis is false than that thermodynamics is wrong.”

In practice, it is often easy enough to see where the problem lies, and the role that values play in theorizing differs from one area of research to another. We shouldn’t expect values to exert the same level of influence in every branch of inquiry or at every level of theorization. For example, the way that empirical data constrain the role that values play in appraising a given theory may differ in theoretical physics or molecular biology compared with the social or environmental sciences. But the scientist might well find herself faced with competing theories, each just as compatible with all the available observational data as the others. In such a situation, the choice between rival theories may remain unsettled — whether only temporarily and in particular cases, or perhaps even in the long run with respect to theories of the world at large. In the choice between competing theories, value judgments will play a greater role than we often realize in determining which scientific theories we accept as true.

Which Values?
What considerations are relevant to the “good sense” by which scientists choose between theories in such circumstances? As our discussion has already supposed, adequacy to observations — what philosophers call empirical adequacy — is an essential criterion when judging the success of scientific theories. Indeed, one might even argue that the construction of theories that match all of the observable phenomena is the chief aim of natural science. Other criteria that are often called upon in the choice between theories include internal consistency, fit with other well-established theories, scope, explanatory power, and a track record of successful new predictions. “Don’t forget simplicity!” fans of Ockham’s razor will remind us.

But the degree to which a given theory fits these rather vague criteria is often itself a matter of dispute. What counts as simple, and to whom? Moreover, these values can stand in tension: What if the simplest theory is the least adequate to empirical observations? This raises questions about the relative weight to assign to these criteria. Should we value explanatory power over simplicity or predictive power? Here, again, it appears that certain value judgments will play a significant role in selecting one theory as overall better than another.

A number of questions lurk in these deeper philosophical waters. In ordinary usage, to “value” something is to take it to have importance or worth — to care about it. Leave aside questions about the clarity of this notion, or about whether the common distinction between facts and values is tenable. How are values to be justified as criteria when formulating and appraising scientific theories? And who is to say that some values are permitted to play a role but not others? And if appealing to values is permitted in science, why not also appeal to those that fit with our liberal democratic sensibilities, or other social, moral, and religious values, as criteria for theory choice? Is this not the path to an “anything-goes” relativism that would undermine the very objectivity that makes science a paradigm of rational inquiry?

One way to make sense of the idea that criteria such as simplicity, predictive power, and internal consistency play a role in science is to consider the role they play in helping us to achieve our goals. For instance, one reason for valuing the internal consistency of scientific theories is that, in science, we are aiming at truth; that is, we are trying to come up with true theories, and a logically incoherent theory cannot possibly be true. Thus “values” such as internal consistency or a track record of successful predictions might be included among our criteria for theory choice because we have learned over time that theories that fail to exhibit these properties are less likely to be true.

Some philosophers see a contrast between more truth-oriented values — what they call “epistemic values” — and so-called “non-epistemic values,” for example the interests of social justice. Precisely which values are to be regarded as epistemic is a matter of ongoing dispute, as are questions about how both kinds of values should or should not influence theory appraisal. Is there any reason to think, for example, that the fact that a theory conflicts with the interests of social justice makes it less likely to be true? Or, are simple scientific theories really more likely to be true than complex theories, or are simple theories just more useful, or easier to revise when they fail to fit the data?

Which kinds of values are relevant to theory choice will depend on what sort of a choice we are talking about. Is the choice about whether to accept or reject a scientific theory best seen as a question about what to think or how to act? If our aim is to form an opinion concerning the likely truth or falsity of a theory, non-epistemic values will not be directly relevant. For instance, if we are aiming for truth, we might come to believe a theory because of its explanatory power and fit with the available data, even though it appears to conflict with the interests of social justice.

But the distinction between “what to think” and “how to act” is not always so clear either. Most of us are good “fallibilists” these days, allowing that even our best scientific theories are, in principle, open to revision. If certainty is never fully within reach, how much evidence is sufficient for flat-out acceptance? Practical consequences and non-epistemic values may be relevant to where we set this threshold. While in some cases it won’t matter much if scientists make a mistake, sometimes the risks of being wrong are considerable. For example, even if we are ninety-five percent confident that a new chemical is not carcinogenic, we might, given the potentially serious moral and practical consequences of being mistaken in this case, think it wise to gather more evidence and hold ourselves to a higher standard before classifying it as non-carcinogenic, particularly if we know that such a classification will likely lead to the chemical’s approval for use as a widely distributed food preservative.
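
One way to make the threshold idea concrete is a simple decision-theoretic sketch. The loss values below are purely illustrative assumptions, not figures drawn from any actual risk assessment:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Let H be "the chemical is not carcinogenic" and p = P(H | evidence).
% Suppose wrongly accepting H (approving a carcinogen) carries loss L_FA,
% and wrongly withholding acceptance (delaying a safe product) carries loss L_FR.
% Accepting is the lower-risk act only when (1 - p) L_FA <= p L_FR, i.e.
\[
  \text{accept } H \iff p \;\ge\; \frac{L_{\mathrm{FA}}}{L_{\mathrm{FA}} + L_{\mathrm{FR}}}
\]
% Illustrative numbers: if L_FA = 99 and L_FR = 1, the threshold is 0.99,
% so ninety-five percent confidence falls short and more evidence is warranted.
\end{document}
```

On this way of putting it, the evidence fixes the probability, but where the acceptance threshold sits depends on how heavily we weigh the two kinds of error — and that weighting is exactly where non-epistemic values enter.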

One solution to these problems is to argue that scientists should confine their judgments about theories to the probability that they are true — that is, to how likely they are to be true given the evidence — and should leave aside questions about practical consequences. Of course, when deciding how to act, we generally do also consider the possible outcomes and their utility. In deciding whether to accept or reject a scientific theory, then, we are well-advised to take into account what the potential consequences would be of getting things right and also of being mistaken. But this is to shift away from the theoretical aspect of science toward the realm of action. Thus, according to this view, scientists should merely assign probabilities to their theories, leaving non-epistemic values to the rest of us when deciding which theories to take as the basis for action.
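
This proposed division of labor can be stated in the standard expected-utility form; what follows is a textbook formulation offered only as an illustrative sketch of the view, not a formula given in the discussion above:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The scientist's contribution: probabilities P(H_i | E) for rival theories H_i.
% The decision-maker's contribution: utilities U(a, H_i) for each possible action a.
% Acting rationally on the science then means choosing
\[
  a^{*} \;=\; \arg\max_{a} \; \sum_{i} P(H_i \mid E)\, U(a, H_i),
\]
% so that epistemic judgment (the probabilities) and non-epistemic values
% (the utilities) enter at separate, clearly marked points.
\end{document}
```

Whether the two contributions really can be kept this separate is the question the next section takes up.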

Science and Democracy
But can judgments about the likely truth of scientific theories be so cleanly separated from decisions about the acceptance or rejection of those theories? And even if they can, does the act of acceptance or rejection really take us outside the proper domain of science? We can imagine, for example, that assigning a very high likelihood to a theory already presupposes that this theory should be accepted as the basis for action. Moreover, one might wonder whether the assigning of probabilities to scientific theories really is so free of, or shielded from, the prior influence of non-epistemic values. After all, even if non-epistemic values are excluded from playing a direct role in assigning probabilities to theories, might not earlier value-laden choices — for example about what areas of research are most important for helping to solve certain social problems — have downstream consequences for what evidence is available? Or might these prior value-laden choices also have consequences for the range of available theories between which we are making comparative evaluations?

Such considerations have led some to reject the idea that it is the responsibility of scientists only to inform the public about how probable a theory is, given the available evidence, leaving policymaking to citizens or their elected representatives. Instead of this proposed division of labor, these scholars argue that scientific experts are citizens themselves, and since they are often best attuned to the relevant social and moral considerations, they should take a more active role in offering policy recommendations as part of their civic duty.

If we chart the latter course, we benefit from having scientists serve in more significant advisory roles, but we risk embroiling science in the worst manifestations of our political disagreements and disenfranchising non-scientific voices. Might there be a way to navigate this path in a democratic society, learning from scientists while also empowering ordinary citizens to participate in serious conversations about the role of values in science and related matters of moral concern?

