The Process of Faith and Rational Imperfect Bayesian Analysis

John Vandivier

I've previously written that Faith Without Certainty is OK, but in practice, many Christians act with certainty. Is there some contradiction here? This post argues against any contradiction. I distinguish between practical certainty and unqualified certainty, I show that all optimal and rational action is subject to constrained analysis, and I argue that applying a constrained analysis to the question of Christianity leads many people to rationally round the probability of various statements to certainty as a heuristic in practice.

I. The Apparent Problem

Suppose there is some belief X that can be held with a variety of degrees of certainty. If a person, B, believes X with certainty, we specify that they believe Y. An analyst, C, predicts that belief in Y leads to some action K that falls within a probabilistically identified range of behaviors {K+}. When C observes B's behavior, B seems to behave according to K. C concludes that B believes Y.

This conclusion might or might not be correct. That is, B could believe some Z which also leads them to behave according to K. What we can state in this case is that B is practically certain of X even if B does not possess unqualified certainty of X.

Now, suppose that the analyst, C, directly asks the observed person, B, about their credence in X, say, the statement that Jesus existed historically. Respondent B can take three courses of action:

  1. Specify unqualified certainty, with a credence of 1.
  2. Specify high credence, such that P(X|B) > P(X'|B).
  3. Specify low credence, such that P(X|B) <= P(X'|B).

Respectively, these create various apparent problems:

  1. A credence of 1 is identifiable as irrational, closed-minded, anti-empirical, or otherwise a signal of erroneous cognition on B's part, on various grounds.
    1. As one example, suppose B exists under a veil of ignorance and must adopt a binary position on the historicity of Jesus. We provide B with an extremely large volume of positive empirical evidence, but we assume B must process this evidence using Bayesian analysis. There is no way B can ever arrive at a probability of 1 in the affirmative if they correctly follow a Bayesian updating scheme (see the sketch after this list).
  2. The leap from high credence to certainty is perfectly intuitive through a heuristic process a la Kahneman and Caplan.
    1. The main danger, in this case, is simply the possible misinterpretation of findings.
    2. Notably, an analyst might incorrectly apply labels of erroneous, biased, irrational, or suboptimal cognition to B in lieu of the more nuanced and correct label of constrained rationality, where the outputs of constrained rationality are often correct or approximately correct.
  3. There is an apparent inconsistency in that B claims X is unlikely yet acts as if X is true.
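
To make problem 1.1 concrete, here is a minimal sketch of the veil-of-ignorance scenario. Every number in it is an illustrative assumption: a prior of 0.5 and evidence that is nine times as likely if X is true as if it is false. Exact rational arithmetic makes the point cleanly: the posterior approaches 1 but never reaches it.

```python
from fractions import Fraction

def bayes_update(prior, lik_x, lik_not_x):
    """One Bayes-rule update of P(X) on a single piece of evidence."""
    num = lik_x * prior
    return num / (num + lik_not_x * (1 - prior))

# Veil-of-ignorance prior; exact rational arithmetic so floating-point
# rounding cannot quietly push the posterior to 1.
credence = Fraction(1, 2)
for _ in range(100):  # 100 pieces of affirmative evidence, assumed 9:1 likelihood ratio
    credence = bayes_update(credence, Fraction(9, 10), Fraction(1, 10))

print(float(credence))  # ~1.0 to machine precision
print(credence == 1)    # False: the exact posterior is 9**100 / (9**100 + 1) < 1
```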

II. Four Solutions

Problem 3 has a trivial solution, and problem 2 isn't really a problem to begin with, merely a case in which careful analysis needs to be applied. I will briefly solve problem 3 and then proceed to the interesting case of problem 1, which has four solutions.

Suppose X1...XN are mutually exclusive and collectively exhaustive beliefs. B espouses credence in X1 at less than 50%, which is interpretable as "B thinks X1 is probably not true," yet B is observed uniquely acting as if X1 is true (the same behavior is inconsistent with not-X1). Why might B do this?

The trivial solution is that B believes the probability of X1 being true is still better than that of any alternative belief, as the sketch below illustrates. A slightly more complicated explanation is that B believes that acting as if X1 is true will maximize their quality of life. Both of these explanations might simultaneously hold.
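
Here is a minimal sketch of this point, with made-up credences: X1 sits below 50%, yet it beats every rival among the mutually exclusive, collectively exhaustive alternatives, so acting on X1 remains the best available bet.

```python
# Hypothetical credences over mutually exclusive, collectively
# exhaustive beliefs X1..X4 (they sum to 1). All numbers are made up.
credences = {"X1": 0.40, "X2": 0.25, "X3": 0.20, "X4": 0.15}

best = max(credences, key=credences.get)
print(best)                   # "X1": the single most probable belief
print(credences[best] < 0.5)  # True: B acts on X1 while rating it "probably not true"
```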

With respect to problem 1, I will suggest four solutions, two of which amount to a complicated form of constrained optimization. The fourth, which I call a process of faith, I consider the most interesting option because, as we will see, it has a kind of higher rationality. More than one of these explanations might simultaneously hold.

Before I describe the four solutions, let me clarify what I mean by constrained optimization. Here I am referring to a collection of economic and psychological models including Kahneman's fast thinking, Caplan's rational irrationality, and ecological rationality, with an emphasis on Vernon Smith's approach to the latter.

  1. B might have made a mistake. That is, B might have espoused certainty when they are not actually certain. This might arise from imprecise speech, rounding, or an otherwise imprecise or faulty derivation.
  2. B might have not started from a veil of ignorance.
    1. If B's prior is an assumption of the truth of X, then affirmative empirical evidence will never disturb this prior, even if B is using a complex Bayesian analysis or some other sophisticated method (see the first sketch after this list).
  3. B might have used a constrained optimization. Notably, B might have rounded or conducted a statistical test of important differences (see the second sketch after this list). A few examples:
    1. B might have credence greater than or equal to 0.95 and simply rounded as a constrained optimization on thinking and communication.
    2. B might have a credence that is not statistically significantly or practically different from certainty (even if the precise point estimate is less than 0.95, for example).
      1. B might have run a statistical explanatory check using usual or unusual null hypotheses. Again assume that B did not start from a veil of ignorance, but now allow that they received some non-affirmative evidence. This evidence may not importantly or significantly change their credence.
  4. B might have taken a leap of faith. There are two interesting mechanisms for this, and I suspect both routinely apply at once. That is, I think people run a real-world reconciliation with preference-weighted credence in mind, not merely pure epistemic credence.
    1. Finite Decomposition or Real-World Reconciliation
      1. A person may believe X with 51% credence. This is not prima facie certainty.
      2. In a second-stage reflection, though, this person may realize that "X is 51% true" is an incoherent real state of affairs, and they may choose the nearest possible real-world state of affairs, namely, "X is true."
    2. Preferential Selection
      1. Given X1...XN are all plausibly true subject to rigorous analysis, perhaps even plausibly true to approximately equal extents, then the person might simply select the one they prefer for whatever reason.
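
First sketch, for solution 2: a minimal illustration that a prior of exactly 1 is a fixed point of Bayes' rule. The 1:9 likelihood ratio is an illustrative assumption; the point is that even evidence favoring not-X cannot move a degenerate prior.

```python
from fractions import Fraction

def bayes_update(prior, lik_x, lik_not_x):
    """One Bayes-rule update of P(X) on a single piece of evidence."""
    num = lik_x * prior
    return num / (num + lik_not_x * (1 - prior))

credence = Fraction(1)  # B's prior assumes the truth of X outright
for _ in range(100):
    # Assumed evidence that favors not-X at 1:9; it still cannot move a prior of 1.
    credence = bayes_update(credence, Fraction(1, 10), Fraction(9, 10))

print(credence == 1)  # True: a degenerate prior is a fixed point of Bayes' rule
```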
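
Second sketch, for solutions 3.1 and 4.1: a hypothetical report_credence helper (the name and the 0.95 threshold are my assumptions, chosen for illustration) showing two ways a point credence can rationally become a reported certainty.

```python
def report_credence(credence, rounding_threshold=0.95):
    """Hypothetical heuristics for turning a point credence into a report.

    Solution 3.1 (constrained optimization): round credences at or above
    the threshold up to 1 to economize on thinking and communication.
    Solution 4.1 (finite decomposition): "X is 51% true" is not a realizable
    state of affairs, so snap to the nearest real-world state, 0 or 1.
    """
    if credence >= rounding_threshold:
        return 1  # rounded to certainty under constrained optimization
    return round(credence)  # nearest realizable state: 0 or 1

print(report_credence(0.97))  # 1, via the rounding heuristic
print(report_credence(0.51))  # 1, via finite decomposition
print(report_credence(0.40))  # 0
```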

III. General Lessons for Comparing Participant Response and Observational Data

Recall the apparent problem. The core issue can be viewed as a contradiction between survey response results and observational data. We reconciled these findings once certain theoretical corrections were made.

This should be a general lesson for the academy, where certain scholars have argued that actions speak louder than words, and that differences between observational data and survey response data indicate that survey response data is generally not to be trusted.

As this article demonstrates, survey response data can be reconciled with observational data so long as a nuanced analysis is applied. Specifically, rational irrationality or constrained thinking on the part of the participant is very often a key mechanism of reconciliation. I would go beyond what has been demonstrated here and claim that such analytic correction is, in principle, always available.