At the fringes of what the herd accepts as discourse, there are some who are chipping away at the modern myth. They imply that at some fundamental level our assumptions are wrong, and that this error has infected every subsequent decision with illusion. This is happening simultaneously in many fields, and W.M. Briggs is doing it in the field of statistics. Read on for a Q&A with this creative, inventive thinker, who has a finger in many disciplines, each informing his primary study and pushing it toward a broader vision.
You are, for lack of a better term, a professional statistician. What led you to this field, and how did you find your way to your present position as professor and writer?
From the Air Force doing cryptography, to meteorology and climatology, to statistics. I was interested in how good forecasts were, and what “good” meant. And from statistics to epistemology, which is the proper branch of probability. I used to be in Cornell’s Medical School, but it was eighty percent writing grants. There’s too much government in science, so I’m now on my own, though I have an adjunct position at Cornell. As for writing, more people read one of my articles, or even blog posts, than would read a scientific paper.
Is there any truth to the statement “There are three kinds of lies: lies, damned lies, and statistics.” How do we tell the difference between true statistics and lies? How do statistics become misrepresentative?
Primarily through The Deadly Sin of Reification. This is when a researcher’s model of uncertainty, a matter of epistemology, becomes reality itself, or it is thought to be so close to reality as to make no difference. But probability models are not causal: probability and statistics have nothing to say about cause. Yet everybody thinks they do.
Probability is only a measure of uncertainty, but that uncertainty is not fixed. It is not real or tangible. It only measures a state of mind, not the state of reality. More damage in science is caused by assuming statistical models verify “hypotheses” than anything else.
Your book Uncertainty: The Soul of Modeling, Probability & Statistics seems to make the case that human cognitive approaches are basically wrong because we treat probability as a kind of absolute. How would you change the human perceptual outlook?
We have to let it sink in that probability is conditional on whatever assumptions we make. Change the assumptions, change the probability. Probability is epistemology, and only epistemology. Since probability doesn’t have physical existence, nothing has a probability.
Question: What’s the probability of being struck by lightning? Answer: there isn’t one. You have to supply premises or assumptions to form the probability, like, “You live in Oklahoma.” But even that premise is not enough to guarantee a numerical answer. The Cult of Measurement insists, wrongly, that all probabilities be numerical. This is why you see asininities like “On a scale of -17.2 to 42 2/3 in increments of pi, how taciturn are you?” And then we treat those numbers as if they are real!
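The point that probability exists only relative to premises can be put in code. Here is a toy sketch, with entirely hypothetical numbers invented for illustration, in which the same event has no probability at all until premises are supplied, and a different probability under each premise set:

```python
# Toy illustration: probability is conditional on premises.
# All numbers below are hypothetical, chosen only for illustration.

# Each premise set assigns a different probability to the same event.
PREMISE_TABLE = {
    ("person",): 1 / 1_000_000,
    ("person", "lives in Oklahoma"): 1 / 200_000,
    ("person", "lives in Oklahoma", "golfs in thunderstorms"): 1 / 5_000,
}

def probability(event, given):
    """Return a probability only relative to explicit premises.

    With no premises there is no probability at all."""
    if not given:
        raise ValueError(f"No probability of {event!r} without premises")
    return PREMISE_TABLE[tuple(given)]

p1 = probability("struck by lightning", ["person"])
p2 = probability("struck by lightning", ["person", "lives in Oklahoma"])
assert p1 != p2  # change the assumptions, change the probability
```

The design point is that the event string never carries a number by itself; only the pairing of event and premises does.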
You also write about how scientific research is heavily skewed by who is funding it or “purchasing” it as an end product, for example mainstream science articles. How prevalent is this? How can it be avoided or ameliorated?
The government sets the agenda for nearly all science. In the case of ideological bureaucracies like the EPA, ‘the’ science is largely settled in advance, and then farmed out to compliant, moneyed universities for ‘validation’. The mark of a good scientist now is how much money he can bring in. That money not only pays his salary, and that of his assistants, but of his bosses, too, in the form of overhead, largess grabbed by Deans and spent on various initiatives, like Diversity. And you can’t get the money unless you are willing to play in the system the government dictates. Eisenhower, in his famous military-industrial complex speech, also warned about government intrusion in science. Key quote: “The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.”
Is it possible to state anything as truth without conditionals? How much does the interpretation of the individual receiving this truth limit what can be conveyed?
No. The conditions can be very basic, though, like sense impressions, and the occasional interactions of our intellects with the infinite. Simple example. Here’s a proposition: “For all natural numbers x, y and z, if x = y and y = z, then x = z.”
Part of the conditions is the understanding of the words used to convey them, so we have to know that “natural numbers” are the everyday numbers “0, 1, 2, …,” and where the infinite lurks in that “…”. Now this proposition is a standard mathematical axiom, believed to be true by everybody who has ever given it thought. I think it’s true.
But since we cannot count to infinity, we must condition on our finite experience to believe something about the infinite. I don’t want to say that this works only in mathematics. It works for everything we believe true about universals; all arguments.
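The proposition above can be stated and proved formally; here is one way it might look in Lean (a sketch added for illustration, not part of the interview), using the library’s transitivity lemma for equality:

```lean
-- Transitivity of equality on the natural numbers:
-- for all x, y, z, if x = y and y = z, then x = z.
theorem eq_trans_nat (x y z : Nat) (h1 : x = y) (h2 : y = z) : x = z :=
  h1.trans h2
```

Note that the theorem quantifies over all of the infinitely many naturals, yet the proof itself is finite, which is exactly the conditioning on finite experience described above.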
You say that the field of data science lacks a “firm philosophical grounding.” What kind of philosophy can serve as the basis for mathematics, statistics and other highly abstract disciplines?
You can graduate with a PhD in the hard sciences from the top universities in the land without ever having formally studied philosophy. Of course, any mode of thinking, including the thinking scientists do, is a philosophy. But since the thinking isn’t rigorous, neither is the philosophy, which leads otherwise decent scientists to say stupid things.
The biggest embarrassments are statements of metaphysics. There are respected physicists who, for instance, define ‘nothing’ as quantum fluctuations, or whatever. Somehow they are unable to grasp that the something which is a quantum fluctuation is not nothing. Our understanding of cause is particularly benighted, and that’s largely because of the fallacy of progress. Only recent philosophy is thought worthy of study, the fallacy insists, because progress.
Beginning philosophy with Descartes is an enormous mistake. Some philosophers, those not suffering from science envy, like Ed Feser and David Oderberg, are rectifying the situation.
Would you say that you have encountered a fracture between the notions of assessing truth by coherence (internal logicality of form) versus correspondence (reliable representation of external objects and events)?
Yes, sure. Given “Alice is a green unicorn,” it is conditionally true that “Alice is a unicorn.” But there are no unicorns, green or otherwise. There is coherence. Coherence can give you castles built in the air, but there has to be a real foundation if you want to live in the structure.
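The coherence point, that a conclusion can follow validly from premises that are false of the world, can be made formal. A sketch in Lean (an illustration added here, with hypothetical proposition names):

```lean
-- Conditional truth: given the premise "Alice is green and Alice is a
-- unicorn," it follows that "Alice is a unicorn," regardless of whether
-- any unicorns exist.
example (AliceIsGreen AliceIsUnicorn : Prop)
    (h : AliceIsGreen ∧ AliceIsUnicorn) : AliceIsUnicorn :=
  h.right
```

The proof is perfectly coherent; correspondence is the separate question of whether the premise `h` is ever actually supplied by reality.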
You cannot go far wrong with Aristotle. “To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.” That’s a form of correspondence, and the best definition of truth there is.
How much do you assess cycles in your work, such as viewing a change in our world as having a life-cycle versus being a categorical truth, much as it would be in a computer? Do you see yourself as introducing organic or biological principles to the field of mathematics?
No; no way. You might have a sociology of math that has these sorts of principles, something which says why mathematicians are working on these problems now, and might work on those later. But the organic principle itself would have nothing to say about the truth of the mathematics. Mathematics gives us truth, and philosophy aims to, as does physics. Now I said that all truth was conditional, but that does not mean that there are no capital-T Truths. And that leads to your next question.
You say, “Truth resides in the mind and not in objects except in the sense that objects exist or not.” How does this connect with the Nietzschean saying that there are no truths, only interpretations?
Nietzsche was wrong. If we agree on the premises, then we must agree on the truth the premises imply. It is always the case that if there is disagreement, it is in the premises and not on the proposition. And don’t forget the tacit premises, like word definitions. A universal truth, a capital-T Truth, is founded on a chain of reasoning backward to indubitable axioms or intellectual impressions.
So Nietzsche can say, “There are no truths,” which is, of course, contradictory. If he’s right, he’s wrong. If he’s wrong, he’s wrong. Now we all know the truth that Nietzsche’s statement is contradictory based on conditions including the meaning of the words in the proposition, the rules of logic, and so on, but most importantly on our intellects. There is no way for us to think it true that “There are no truths.” And so, conditional on this intellectual impression, we know the Truth that Nietzsche was wrong.
What is reification, and why is it misleading?
Reification shows up everywhere, and not just in statistics. People confuse deterministic models with causal models. A deterministic model can be a highly complex set of mathematical equations that say, in effect, “When X = x, Y = y.” Now even if this deterministic model works, in the sense of making skillful predictions, it is not necessarily the case that X causes Y.
Understanding cause is something above and beyond the model. Scientists who study consciousness and free will are the biggest sinners here. They posit a deterministic model for the workings of the brain and confuse that model (which is anyway partial; another point oft forgotten) with a causal model, which leads them to say there is no such thing as free will. Yet obviously there is. Their models become more important than reality, which is tossed out and said not to exist.
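The gap between a skillful deterministic model and a causal one can be sketched in a few lines. In this hypothetical setup, a common cause Z drives both X and Y; the model “when X = x, predict Y = 2x” is perfectly skillful on every observation, yet intervening on X changes nothing about Y:

```python
# Hypothetical sketch: a deterministic model that predicts perfectly
# even though X does not cause Y. A common cause Z drives both.

def simulate(z):
    """Z causes both X and Y; X and Y never influence each other."""
    x = 3 * z   # Z causes X
    y = 6 * z   # Z causes Y
    return x, y

def model(x):
    """A 'deterministic model': when X = x, predict Y = 2x."""
    return 2 * x

# The model is perfectly skillful on every observation...
for z in range(100):
    x, y = simulate(z)
    assert model(x) == y

# ...yet forcing X to a new value while Z stays fixed leaves Y alone,
# because X is not a cause of Y. The skillful model now fails.
def intervene_on_x(z, forced_x):
    y = 6 * z          # Y still depends only on Z
    return forced_x, y

x, y = intervene_on_x(z=10, forced_x=999)
assert model(x) != y   # prediction breaks under intervention
```

Predictive skill, in other words, is a fact about the joint behavior of X and Y under observation, not a fact about what causes what.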
In your view, is language a type of modeling? How can we make language more specific, or less likely to mislead?
In the sense that words imply universals, and our knowledge of universals, like our knowledge of everything, is like a model. Words matter, because universals matter. We are not Humpty Dumpty. Communication is not possible without a shared, i.e. mutually believed, set of premises on which universals are true. But the infinite, the realm of universals, is a big place.
We cannot reach, with our finite minds, infinite precision in language. Recall Flaubert: “Human speech is like a cracked kettle on which we tap crude rhythms for bears to dance to, while we long to make music that will melt the stars.” The more difficult the concept, i.e. the more it involves the infinite, the less precise our language. And it will always be that way.
Can the type of confusion that arises over statistics and probability influence the choices that a society makes? How can this error be limited?
Yes, especially in a culture that views science with such awe. Everything is supposed to be scientific; hence the Cult of Measurement and endless questionnaires with pseudo-quantified answers, and “nudging,” and on and on. Scientism pervades. How to limit it?
Science is silent on every important question. Why is murder wrong? Science has no answer. But when we think it does, we invent some statistical model that preposterously gives answers on the degree of wrongness of murder. The solution there, not to be too much hoped for, is again a return to philosophy.
And then there is the confusion about cause. For example, statistics supposedly prove “racism” by showing discrepancies on math questions. If we can eliminate the causal language that accompanies statistical models, we can fix much.
For those who would like to know more about your writing and research, how would someone stay on top of your latest news and doings?