Uncertainty and the World: An interview with W. M. Briggs

William Matt Briggs is currently a vagabond statistician and Adjunct Professor of Statistics at Cornell. Thought leader (it’s true, I’ve seen the p-values!). Previously a Professor at the Cornell Medical School, a Statistician at DoubleClick in its infancy, a Meteorologist with the National Weather Service, and a sort of Cryptologist with the US Air Force (the only title he ever cared for was Staff Sergeant Briggs). His PhD is in Mathematical Statistics, though he now claims to be a Data Philosopher, Epistemologist, Probability Puzzler, Unmasker of Over-Certainty, and (self-awarded) Bioethicist. Of course, he also blogs. I began by asking…

Dover Beach: Why a book on Uncertainty now?

William Matt Briggs: Why Uncertainty now? I’m not sure. Actually, I needed the money: I should make tens and tens on it. If I’m lucky.

The real reason is that exposing the over-certainty that is rampant in science was long overdue. People everywhere are taught “correlation is not causation”, but then they are also taught, “Use these statistical methods to show your correlation is causation.”  The methods themselves rely on false and mistaken views of probability. So I needed a whole book to describe what probability really is, so that we can know just which methods are proper and which improper, and what these methods really teach us.

Consider that one week you will read in a headline “Chocolate lowers risk of heart disease” and the next you’ll read “Chocolate raises the risk of heart disease.” Both items are driven by the same mistaken views of probability.  Probability is not cause, nor is it decision, but the methods and theories people are taught assume it is.  All a researcher has to do is find two numbers that pass a statistical test and he can write a paper “proving” that one number caused, or is “linked to”, the other. Result? Massive over-certainty. A vast amount of nonsense is published, particularly in the “soft” sciences.

DB: So the problem, or at least a significant problem, is a failure to come to grips with what may really be going on by relying on a method that simply identifies correlations between A and B without having or requiring an explanation of how or why A and B are related in such and such a way?

WMB: That’s exactly right. The methods take A and B as input and they output something called a “p-value”. If this is less than a magic number (0.05), then A is said to cause B, or to be “linked to” B, or vice versa. “Linked to” always means “caused” or “partially caused.”
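
A minimal sketch of that recipe, in Python with made-up data (the variable names and numbers are purely illustrative and come from no study discussed here): two columns of numbers go in, a p-value comes out, and the magic number decides what gets announced.

```python
# Minimal sketch of the p-value recipe: invented data, canned test,
# magic threshold. Nothing here is from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
A = rng.normal(size=100)               # pretend "chocolate consumption"
B = 0.1 * A + rng.normal(size=100)     # pretend "heart-disease score"

r, p_value = stats.pearsonr(A, B)      # correlation test returns (r, p)

if p_value < 0.05:                     # the magic number
    print(f"A is 'linked to' B (p = {p_value:.3f}): write the paper")
else:
    print(f"'Due to chance' (p = {p_value:.3f}): into the file drawer")
```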

If the magic number threshold has been breached, permission is given to theorize how A caused B (or vice versa). No proof is needed at this stage, only assertion. My favorite—and very typical—example is from (where else?) Harvard. Two guys claimed that attending a Fourth of July parade as a child (the “A”) caused people to turn into Republicans as adults (the “B”). Researchers are always anxious to discover why people aren’t like them. Anyway, because it was from Harvard, the study was picked up everywhere.

Because the magic threshold was breached, the authors thought they had proof and so they theorized that the rampant patriotism on display caused impressionable kiddies to think warm thoughts about the Republican party.

The kicker is that the A was never measured. They couldn’t measure whether adults had attended Fourth of July parades, because who remembers? So what they did instead was to ask whether there was any rain on the Fourth in the towns where the kids said they grew up. If there was any rain, the Harvard researchers assumed the kids did not attend a parade. If there wasn’t rain, they assumed the kids must have attended.

Complete, utter nonsense. Yet since the research was done in strict accordance with the prevailing methods, it was accepted.

DB: You argue in the first chapter of Uncertainty that there are two kinds of truth, one, ontological, and the other, epistemological. You think the distinction between X and our understanding of X, respectively, is important, and failing to recognize any such distinction is an intellectual error. Why?

WMB: It’s a minor abuse of words to say there are ontological truths. All it means is existence: something is “ontologically true” if it exists, otherwise it is “ontologically false.” This abuse was necessary to show how what exists (or not) differs from our knowledge of what exists. Epistemology is not ontology. Mixing up the two is one of the main causes of over-certainty, especially in quantum mechanics (QM) and in the Deadly Sin of Reification, which is exactly the error of supposing a conditional epistemological truth is ontic.

An epistemological truth is simply a proposition we know is true given some list of premises. It is purely a matter of knowledge, of our thoughts. For instance, we can know (my favorite example) “George wears a hat” is true if we accept “All Martians wear hats and George is a Martian”. But there are no hat-wearing Martians. The proposition is conditionally true, epistemologically, but ontologically false.

In QM, and in the so-called frequentist theory of probability, people suppose probability is ontic, that it is real, or that it ontologically exists. On this view, probability is no different from charge or mass. If that’s so, we should be able to extract or measure probability as we do charge or mass. We should be able to go to the probability store and buy a bucket of it. That people think probability is ontologically true leads to all sorts of conundrums and paradoxes. And, as always, over-certainty.

Uncertainty: The Soul of Modeling, Probability & Statistics

DB: How did we come to mix up ontology with epistemology such that the latter is confused with the former? Do you see this arising with the medieval nominalists, or largely with Descartes’s Cogito, or some other position, say, radical skepticism? Or is the cause more prosaic?

WMB: Nominalism is a great sin. For nominalists, all that exists are individual things. Ontology is everything to the nominalist, and epistemology nothing. You might go to a car dealer and kick the tires of the “automobiles” on the lot, but the nominalist must say there really is no such universal thing as an “automobile.” There only exist chunks of metal on wheels. Only there can’t be metal or wheels, either, for the strict nominalist, for “metal” and “wheels” express the idea of universals, and universals don’t exist for the nominalist.  This is why there are no real-life strict nominalists; there are only people who claim to hold the theory.

Further, nominalists can’t do experiments. You might want to study cancer in rats. Well, that requires defining the universals “cancer” and “rats”. Those definitions must be grounded somewhere in reality; else, the researcher could never get started. Nominalism is always self-refuting. The opposite error is Idealism, which says that which exists is dependent on our minds. Our thoughts are reality, somehow. Idealism is popular with bizarre, Deepak Chopra-like interpretations of QM. Our “minds” cause wave-functions to “collapse”.

There are more nominalists these days than idealists, though these things wax and wane.  The alternative is called Realism, which has various forms, but, put simply, it asserts that universals exist, and that a world outside our minds exists. Realism, of course, accords with common sense.

DB: Ah, common sense! Is a part of the problem nowadays this general suspicion of, even disgust with, common sense in both philosophy and the sciences?

WMB: Yes, usually wherever scientism creeps in. Tradition, the common opinion of the village and the family, the accumulated knowledge of our culture, even everyday stereotypes are all held in great suspicion by those who have fallen prey to empiricism and credentialism. You’ll hear in philosophy terms like “folk knowledge”, always meant as disparagements, describing the theories that the little people who haven’t had the benefit of formal training have concocted.  So “folk psychology” says free will exists, whereas strict theories of empiricism or materialism assure us that the choices we think we make we don’t really make, because there is no us. There are only bags of chemicals wending along deterministic roads.

The second problem is when scientists feel (not think) they can’t believe what common sense teaches unless the belief has been confirmed by some experiment.  For instance, one paper I read had the triumphant sentences, “Five- and 6-year olds are able to use one important heuristic in assessing the status of story protagonists. When hearing a story about an unfamiliar protagonist, they use the nature of the events in the narrative as a clue to the protagonist’s status.”  Now this duplicates what any parent has always known. But parents aren’t subjected to the pressure to publish.

DB: To return to the issue of uncertainty. What is it? And, what does probability help us to achieve in the face of uncertainty?

WMB: Uncertainty is simply when we have a proposition which we do not know is true or false.  Now knowledge of all propositions is conditional on what assumptions we make. So uncertainty depends on our assumptions or premises.  That perhaps obvious statement is the basis of all probability.  All probability is purely epistemological and conditional in the same sense logic is.

We know “Socrates is mortal” if we assume “All men are mortal and Socrates is a man”. This is a statement of standard logic and epistemology.  But we are uncertain of “Socrates is mortal” if we assume “Some men are mortal and Socrates is a man”. That’s the only trick.
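
One compact way to write that contrast, treating probability as purely conditional on the stated premises (the notation is a gloss, not Briggs’s own; the second set of premises pins down no single number, only something like the interval shown):

```latex
% Epistemological probability: conditional on premises, exactly as logical validity is.
\Pr\big(\text{Socrates is mortal} \mid \text{all men are mortal, Socrates is a man}\big) = 1,
\qquad
\Pr\big(\text{Socrates is mortal} \mid \text{some men are mortal, Socrates is a man}\big) \in (0,\,1].
```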

Premises rule. In science and philosophy our job is to find premises which make the proposition of interest as close to true or false as we can make it, while at the same time believing the premises are true conditional on still further information. All our knowledge is web-like in this sense.

DB: If uncertainty is purely epistemological, arising as it does from our ignorance of this or that going-on, does this also mean as corollary that randomness and chance are also only epistemological features of our knowledge, that they have no real or ontological existence in the world, and thus cannot be the cause of anything? 

WMB: Absolutely yes. Randomness does not exist, nor does chance. They are not ontic; they have no existence. And since they have no existence, no actuality, they cannot be causative. They have no power. They are mere expressions of our ignorance. You will see language like “caused by random chance”, “due to chance”, “random effects” and so on, all of which award causal powers to what doesn’t exist. It might not seem like a big deal, but an entire scientific method has developed around the idea that chance and randomness are real. It’s called hypothesis testing.

Data are submitted to ad hoc algorithms (they are always ad hoc) and if the test is positive, Y is said to be caused by X, or (as I said before) Y is “linked to” X, but where “linked” is always taken to mean “cause”. But if the test is negative, Y is said to be “due to chance” or “randomness”. “Due to” means cause. Yet randomness and chance cannot be causes. The error flows from thinking probability is ontic.

Breaking the Law of Averages: Real-Life Probability and Statistics in Plain English

DB: It seems to me then that an enduring problem in modern philosophy, and one that impinges on the sciences, at present, is the absence of a coherent modern theory of causation. Do you agree? 

WMB: Of course. Well, there is a modern theory of causation, but it is all wrong. It all flows from Hume’s skepticism of cause and induction. The gist is that we can’t trust our intuitions regarding both and that therefore cause doesn’t really exist, or we can’t know it does, and that induction is fatally flawed.

These modern ideas appear to be backed by results from QM, where we can prove that we cannot know what will happen in certain experiments.  Since we cannot know what will happen, the results are said to be uncaused (or sometimes caused by chance). But this is a complete misunderstanding.  All we can prove is that we cannot know what will happen. So what? Most of us don’t know what will happen in most things.  What we have proved in QM is that our knowledge will always be dark in some matters. We have not proven that that which happens has no cause.

What’s needed is a return to the old ways of thinking about cause, the ones outlined by Aristotle, who also gave us our first proofs of the importance of induction, which are also now largely forgotten. To Aristotle, cause has four elements or dimensions: formal, material, efficient, and final. Modern ideas relate only to efficient causation.  The idea is that anything that can happen has the potential to happen, but in order to make actual this potential, something actual must act. This is the old way of stating that chance doesn’t exist. If we restore these old ways, it’s suddenly clear that probability is entirely epistemological and that the way we think of models must be in epistemological terms. Knowledge of cause is always deeper than that provided in models.

DB: So, what of these models? What types of models are there, and what do they help us achieve in terms of understanding reality, given their or our limitations?

WMB: Models come in all varieties. There is nothing wrong with modeling; models are, of course, extraordinarily useful.  The difficulty is, as I call it, the Deadly Sin of Reification. This is where the model is thought to be reality. The model becomes realer than reality. This happens surprisingly often, especially when using probability models.

Models, of course, are always abstractions of reality, and nearly all are no more than epistemological. Meaning very few models are truly causative and describe the essence of the observation of interest. I use the word “essence” in its philosophical sense, as the deep understanding of what it is that makes an object a member of a species. This is not as difficult as it sounds. Why is a chair not a refrigerator?  The primary goal is always a true understanding of cause and essence. The secondary goal is to make useful predictions, and, there, knowledge of cause is not necessary.

Here’s a real-life example of reification. Everybody has seen graphs of time series, numbers which run along in time, usually bouncing up and down. Think of pictures of stock prices. What happens is that a model will be “fit” to the reality and then the model will be plotted over the reality, which then fades into the distance. The model is then spoken of as if it, and not reality, is what happened. This leads to vast over-certainty.
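
A toy version of that habit, again in Python with invented numbers, just to show where the sleight of hand happens: a straight line is fit to a jagged series, and it is the line, not the data, that ends up being reported.

```python
# Toy illustration of reification: fit a smooth trend to an invented,
# jagged series, then talk about the trend as if it were what happened.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)                                        # e.g. 120 months
series = 0.02 * t + rng.normal(scale=1.0, size=t.size)    # noisy "prices"

slope, intercept = np.polyfit(t, series, deg=1)           # the model
fitted = slope * t + intercept                            # the smooth line

print(f"Fitted trend: {slope:+.3f} per step")
# The Deadly Sin of Reification is quoting `fitted` as reality while
# `series`, the only thing actually observed, fades into the distance.
```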

Worse are the multifarious statistical models which plague the “soft” sciences, where knobs and dials inside the models are taken for reality. All kinds of nonsense are created in this way.

DB: How has the scientific community come to terms with Ioannidis’s revealing findings regarding published medical research? Have they thought his findings point towards the sort of deeper issues you’ve elaborated, or that they are simply the result of pressures to publish, and the like?

WMB: Ioannidis is respected, and everybody agrees he put his finger on a real problem. And that is that many “false positives” are published. But everybody also thinks that they themselves are immune to the problems he noted, and that it’s the other guy that suffers. His work relates to those p-values I spoke of. He showed how their use naturally leads to publishing incorrect results. But he only showed how falsity creeps in generally and not in any specific case. That’s why people always think it’s not their problem.

What’s needed in models is a return to the old way. Models must be forced to prove themselves and make real-world predictions and not just pass internal tests. Models which cannot make skillful predictions of reality after the fact should be abandoned.  By “after the fact” I mean after the model is published. This way, independent parties may check a model’s validity. This would go a tremendous way towards solving the so-called reproducibility crisis.
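
What “prove themselves” could look like in code is roughly this (a sketch only: the skill score below, which compares a model’s out-of-sample error with a naive reference forecast, is one common choice from forecast verification, not a method prescribed in the interview):

```python
# Sketch of after-the-fact verification: once the model is fixed and
# published, its predictions are scored against new data and against a
# naive reference forecast. All numbers here are hypothetical.
import numpy as np

def skill_score(observed, model_pred, naive_pred):
    """Positive means the model out-predicts the naive reference;
    zero or negative means it has no demonstrated skill."""
    mse_model = np.mean((observed - model_pred) ** 2)
    mse_naive = np.mean((observed - naive_pred) ** 2)
    return 1.0 - mse_model / mse_naive

observed   = np.array([2.1, 2.4, 1.9, 2.8, 3.0])   # new, post-publication data
model_pred = np.array([2.0, 2.5, 2.0, 2.6, 3.1])   # predictions made beforehand
naive_pred = np.full(observed.shape, 2.3)          # e.g. a long-run average known in advance

print(f"Skill versus naive reference: {skill_score(observed, model_pred, naive_pred):+.2f}")
```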

It wouldn’t fix it. Nothing would. There is no ultimate solution. People are inventive and will always find ways of making mistakes.

DB: There seems to be an abundance of recent work of an Aristotelian flavor. We have your work in the philosophy of probability and statistics, and the recent work of James Franklin (and the Sydney School), which similarly applies an Aristotelian Realist framework to the philosophy of mathematics and, in fact, understands mathematics as the science of structure and quantity. We have a real interest in causal powers emerging in metaphysics, from the work of George Molnar, Brian Ellis, Marmodoro and Mumford, to name only a few, and we even have interest emerging in the unlikeliest of corners, best represented recently by Thomas Nagel’s Mind and Cosmos. Does this rediscovery of Aristotelian Realism mark a real turning point and opportunity?

WMB: The beating of wings is in the air, at any rate. But it’s not clear what traction the new-old movement will gain. The bigger it gets, the more enemies it makes.  When Nagel’s book came out, an emergency meeting of the Old Guard was convened to plan how best to take Nagel out. The general mood was “How dare he!”  The reason for this was plain. Nagel opened the window and peeked out.  Philosophers committed to the the-house-is-all-there-is philosophy of materialism were horrified lest others realize there is more to the world than what they could see in front of them.

The problem with Aristotelian Realism is that while it can be kept on the ground and used for such lowly things as probability models and the philosophy of science, it invariably and insistently points to higher things. And it is these Higher Things that badly frighten the Old Guard. They thought and taught that theology belonged to a time when man was young. We are now older and have left behind these childhood beliefs, they said. Yet that metaphor always forgets that man does not progress from childhood directly to adulthood. He has to pass through his rebellious teenage years. Which is where we are now.  There are a lot of feet kicking and temper tantrums to get through before we move back to the Tradition of our fathers.

DB: Robert Koons has suggested that the materialist metaphysical framework that is currently dominant underpins the moral and political theories (consequentialism and liberalism, respectively) that are also currently dominant, and that an Aristotelian metaphysic radically undermines the persuasiveness of both. What, then, do you think are the gravest obstacles, not so much to the rediscovery of Aristotelian realism, but to its intellectual success? Are they intellectual, or are they practical?

WMB: Our culture is saturated in materialism. Who do we look towards for answers to all our questions? Scientists. Big mistake. Science is utterly silent, mute by design, on every real question of interest. Why is rape wrong? Science has no idea. All it can do is catalog how many rapes take place, note their characteristics, and make projections (using proper methods of uncertainty!) of the biological consequences.

Yet why rape is wrong and not a keen method of spreading your genes is not something science can answer.   Why anything is right or wrong has nothing to do with science. Why anything exists is also not a scientific question. Why mathematics works is not within science’s purview.  And so on and on.

Since these points are well known and even obvious to Aristotelian Realists, as well as easy to demonstrate to non-realists if they’re willing to listen, the real problem is practical.  How do we change an entire system? Scientism is inculcated as early as Kindergarten. It taints everything. And we are not helped when even our major religious leaders concede to Science its high place in our culture.

DB: Finally, which three books do you think have been most influential in your philosophical formation?

WMB: This is like asking an old man to pick out his three favorite grandchildren [many old men nowadays are lucky to have three grandchildren].  Nevertheless…

The Rationality of Induction

I started from the base of a standard PhD in mathematical statistics, but none of the books from that training were really formative. The book that really taught me the philosophy of probability was David Stove’s The Rationality of Induction.  He only meant one (or two) kinds of induction, incidentally. The second half of the book is a treatise on logical probability, which is also realist probability and the kind I embrace.  Any of Stove’s books are well worth reading. He was a self-professed atheist, but I have hopes he didn’t mean it in the end.

The Last Superstition: A Refutation of the New Atheism

The book that started me on Aristotelian Realism was Ed Feser’s The Last Superstition. Here is a man doing it right. He doesn’t begin with a defensive posture like so many others do. He forces his opponents back on their heels from the start, knocks them down, and never lets them up again. A necessary book. All Feser’s books should be read.

Probability Theory: The Logic of Science

Then E. T. Jaynes’s Probability Theory: The Logic of Science, which is what book reviewers mean when they say tour de force. You sit back in awe of Jaynes’s mind, of his range and versatility. In fact, it was from Jaynes that I learned of Stove—through a footnote! For many years, Jaynes’s book was passed around in samizdat chapters, since he never completed it before his death. One of his students finally put it out.  I was never quite with Jaynes on his maximum entropy rule-of-thumb, if only because I came to a different view of what parameters in models really are. That’s too in-depth to explain here. Warning: Jaynes requires some stiff mathematics to read.

 

 
