
Análisis filosófico

On-line version ISSN 1851-9636

Anal. filos. vol.42 no.1 Ciudad Autónoma de Buenos Aires May 2022  Epub Jan 01, 2022

http://dx.doi.org/10.36446/af.2022.387 

Articles

Bilateralismo y probabilismo

Bilateralism and Probabilism

1 Instituto de Investigaciones Filosóficas, Sociedad Argentina de Análisis Filosófico / Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina / Universidad de Buenos Aires, Buenos Aires

Abstract

The aim of this paper is to provide a philosophical interpretation of bilateralism in terms of probabilism; in particular, to interpret the main concepts of bilateralism –acceptance, rejection and incoherence– in terms of the probabilistic notions of degree of belief and coherence. According to bilateralism, the meaning of logical connectives is determined by the acceptance and rejection conditions of the sentences in which they are involved, where acceptance and rejection cannot be reduced to one another. I will focus on a variant of bilateralism that understands logical consequence as the statement that it is incoherent to accept all the premises of a valid argument while rejecting all its conclusions. Probabilism, on the other hand, states that it is possible to interpret our degrees of belief in terms of probabilities. The aim of this work is then to interpret the concept of incoherence in terms of probability functions and to determine when it is coherent to accept or to reject a proposition according to some threshold defined in terms of degrees of belief.

To achieve this goal, we need an interpretation of the concept of incoherence coined by the bilateralists as well as an interpretation of acceptance and rejection. I will show that a good interpretation of coherence in probabilistic terms can already be found in the literature. Then, I will give an interpretation of acceptance and rejection in terms of degrees of belief. In particular, I will show that it is possible to interpret these concepts in accordance with Locke’s thesis –the thesis that there is some threshold r such that if you believe a sentence in a degree equal to or higher than r you should accept it– without falling into epistemic paradoxes.

Keywords Bilateralism; Probabilism; P-Stability; Lottery Paradox


1. Introduction

In this article, my aim is to offer a philosophical reading of bilateralism by means of probabilism. Since Rumfitt’s (2000) paper, “Yes and No”, bilateralism has been a salient position in logic according to which it is possible to provide a thesis about meaning by means of acceptance and rejection. The key to this thesis is the idea that acceptance and rejection are two basic and irreducible attitudes towards a proposition. When facing an inference, some bilateralists (Restall, 2005; Ripley, 2015) would say that it is valid if and only if it is incoherent to accept all its premises and reject all its conclusions. As a corollary, it is possible to determine the meaning of logical constants in terms of the acceptance and rejection conditions of the sentences in which they are involved. Bilateralism is a type of logical inferentialism –the thesis that the meaning of logical constants is defined by their use– that has been much discussed in the literature. If it is possible to offer a solid bilateral account, then bilateralism would stand as a strong candidate for proof theorists who defend the idea that the meaning of logical constants is determined by their inferential roles. At the same time, there is a strong normativity contained in the way this bilateralist thesis is formulated, and that is an interesting feature of the proposal. It would be interesting to see how far this normative reading can take us once an elucidation in terms of a model of everyday reasoning is carried out. A reasonable place to start might be to appeal to some model of how people assign degrees of belief to the propositions they believe in.

As we will see later, there are two main variants of bilateralism. They both start from a proof theory in which acceptance and rejection are taken as primitives, and the meaning of both the constants and validity itself can be extracted from the system once the rules for acceptance and rejection are settled. Unfortunately, because acceptance and rejection are taken as primitives, their precise meaning remains somewhat unclear. The literature mostly relies on our prior understanding of these concepts, which amounts to quite an obscure account. What exactly does it mean for an agent to be incoherent by accepting a bunch of premises and rejecting a bunch of conclusions? Does it mean that the agent is being irrational? Does it mean that the agent is obliged to revise her beliefs? Does this imply some penalty? Is this one of those Achilles and the Tortoise cases (Carroll, 1895) in which logic comes and grabs you by the neck when you sustain an incoherent position?

This opacity has brought about several problems that loom in the literature. A good example is the dispute on how to read the Cut rule (see Figure 1). On one side, Restall (2005) insists on a classical reading of Cut. This clashes with Ripley’s (2015) non-transitive reading of the rule. I think an elucidation of these concepts might help to solve this dispute, among others.

A natural place to start might be to answer the question, “when can an agent accept (or reject) a certain proposition according to how certain she is about the truth (or the falsity) of that proposition?”. 1

Probabilism (Hájek, 2008) is a philosophical interpretation of rationality in terms of probabilities, which consists of the idea that it is possible to model degrees of belief as probability functions. The aim of my work, then, is to interpret the main concepts of bilateralism –acceptance, rejection and incoherence– in terms of the probabilistic notions of credence and coherence for a partial belief setting, and by doing so, to provide a solid understanding of bilateralism that amounts to an argument in favor of this stance as a strong contender for a thesis about meaning.

In order to achieve this goal, we need an interpretation of the concept of incoherence coined by the bilateralists as well as an interpretation of acceptance and rejection. I will show that the literature has already given a good interpretation of the notion of incoherence by means of a theorem due to Adams and the so-called constraining property. Then I will give an interpretation of acceptance and rejection in terms of degrees of belief. In particular, I will show that it is possible to interpret these concepts in accordance with Locke’s thesis. Locke’s thesis states that it is possible to find some threshold r such that if you believe a proposition φ in a degree equal to or higher than r, then you accept φ. This thesis is important because it allows us to introduce probabilism into the acceptance and rejection game. After all, up to this point, there was no clear relation between our degrees of belief and what we accept or reject. Unfortunately, Locke’s thesis is usually haunted by epistemic paradoxes such as the preface paradox (Makinson, 1965) and the lottery paradox (Kyburg, 1961). I will show that it is possible to give an interpretation of acceptance and rejection that complies with Locke’s thesis and that does not fall into these paradoxes. I will do so by reinterpreting Leitgeb’s P-stability, a theory developed in Leitgeb (2014) that allows an agent to find one (or several) thresholds of acceptance for any valid inference. As a result of this work, we will be able to understand acceptance and rejection in terms of degrees of belief in a probabilistic fashion.

The structure of the article is as follows. In section 2, I will introduce the necessary apparatus to understand and deal with these issues: I will clarify the bilateralist framework, briefly explain how to understand probabilities, and present the paradoxes that usually haunt probabilism. I will then briefly revisit the ways incoherence has been interpreted in the literature, focusing on Adams’ and Field’s proposals and their limitations. In section 3, I will dig into the question of how to find a threshold for acceptance and rejection that avoids the epistemic paradoxes by means of the P-stability theory. Finally, in section 4, I will adapt the bilateralist framework in accordance with the apparatus introduced in the previous sections.

2. Preliminaries

In this section, I will introduce what bilateralism and probabilism are as well as some paradoxes relevant for this work.

2.1. Bilateralism

Bilateralism is a type of inferentialism that has been salient in the last few years (Rumfitt, 2000; Restall, 2005; Ripley, 2015). Inferentialism is a philosophical stance that understands the meaning of logical constants in terms of their inferential role (Dummett, 1991; Murzi & Steinberger, 2017). Bilateralism spells out the inferential role of logical constants in terms of their acceptance and rejection conditions, and in doing so it opposes unilateralism by stating that rejecting a statement cannot be defined in terms of accepting its negation. 2 That means that, given a proposition φ, one can either accept it or reject it. Nevertheless, accepting ¬φ –and here is where it opposes unilateralism– is not necessarily the same as rejecting φ, because there might be cases where one wants neither to accept φ nor to reject φ. Rejection, in this view, has its own force, different from and irreducible to acceptance.

There are two main branches of bilateralism, the one that focuses on natural deduction calculi (Rumfitt, 2000; Smiley, 1996; Incurvati & Schlöder, 2017) and the one that focuses on sequent calculi (Restall, 2005; Ripley, 2015). They are both concerned with the meaning of logical constants in terms of their acceptance and rejection conditions and they both agree on the fact that both acceptance and rejection must be taken as primitives. The main difference is that while the first branch builds a whole new presentation of classical logic with these two concepts embedded in the proof theory, the second one takes a classic sequent calculus of a logic (or some proof-theoretic presentation with multiple conclusions) and proposes a new way of reading valid inferences. I will focus on this second branch of bilateralism, the one involving sequent calculi.

First, we should become familiar with the standard sequent calculus for Classical Propositional Logic (CPL), with its operational rules (the rules for logical connectives, where the material conditional, φ→ψ, can be defined as ¬φ˅ψ) and its structural rules (the rules that make no appeal to any logical connective in particular). Let 𝔏 be a propositional language with {¬,˅,˄} representing the usual negation, disjunction and conjunction. Lowercase Greek letters represent formulas of 𝔏 and capital Greek letters represent sets of formulas of 𝔏:

Figure 1. The sequent calculus for CPL: operational and structural rules.

In order to understand this calculus in a bilateralist fashion we can start by reading David Ripley’s words:

What it is for a bunch of premises to entail a bunch of conclusions is that if you assert the premises and deny the conclusions, then you’re out of bounds. (Ripley, 2015, p. 28)

This is the main concept of this version of bilateralism. Now, let’s see how we can read this bilateralist slang in relation with concrete inferences. A bilateralist wants us to read a sequent in light of which collection of formulas are incoherent to accept and reject. So, identity can be read as follows:

Id: It is incoherent to accept φ and reject φ.

Weakening can be read as follows:

L-Wk: If we already are in an incoherent position by accepting all statements in Γ and rejecting all statements in Δ, then we will still be in an incoherent position if we also accept φ.

R-Wk: If we already are in an incoherent position by accepting all statements in Γ and rejecting all statements in Δ, then we will still be in an incoherent position if we also reject φ.

On the other hand, the Cut rule can be read as follows:

Cut: If accepting all statements in Γ and rejecting all statements in Δ as well as rejecting φ is incoherent, and accepting all statements in Γ together with φ and rejecting all statements in Δ is also incoherent, then the incoherence must lie in accepting all statements in Γ and rejecting all statements in Δ.

We might as well suppress the context in order to facilitate the reading and just focus on the main formulas. For conjunction, disjunction, and negation the rules should be understood in the following way:

L-¬: If it is incoherent to reject φ then it is also incoherent to accept ¬φ.

R-¬: if it is incoherent to accept φ then it is also incoherent to reject ¬φ.

L-˄: If it is incoherent to accept φ (or ψ), it is also incoherent to accept φ˄ψ.

R- ˄: If it is incoherent to reject φ and it is incoherent to reject ψ, it is also incoherent to reject φ˄ψ.

L-˅: If it is incoherent to accept φ and it is also incoherent to accept ψ, then it is also incoherent to accept φ˅ψ.

R-˅: If it is incoherent to reject φ (or ψ), it is also incoherent to reject φ˅ψ.
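These readings can be checked mechanically in the classical case. Below is a minimal sketch (the encoding of formulas as Python predicates and the helper name `incoherent` are mine, purely illustrative): a position is classically incoherent exactly when no valuation makes every accepted formula true and every rejected formula false.

```python
from itertools import product

# A position [Gamma : Delta] is classically incoherent iff no valuation
# makes every accepted formula true and every rejected formula false.
# Formulas are modeled as functions from a valuation (a dict mapping
# atom names to booleans) to a boolean.
def incoherent(accepted, rejected, atoms):
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(f(v) for f in accepted) and not any(f(v) for f in rejected):
            return False  # a valuation vindicates the position
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
p_and_q = lambda v: v["p"] and v["q"]

# Id: accepting a formula while rejecting it is incoherent.
assert incoherent([p], [p], ["p"])
# Accepting p and q while rejecting their conjunction is incoherent.
assert incoherent([p, q], [p_and_q], ["p", "q"])
# Accepting p while rejecting q is a perfectly coherent position.
assert not incoherent([p], [q], ["p", "q"])
```

Under this reading, a sequent Γ ⊢ Δ is valid just in case the corresponding position is incoherent, which is the bilateralist gloss on classical validity.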

As is easy to see, the bilateralist reading is a normative stance. Yet we can strengthen this normative reading by means of an elucidation of what it means to be incoherent (or to be out of bounds, as Ripley paraphrases it). As mentioned earlier, the literature leaves this to the reader’s imagination. Does being incoherent mean that the agent is being irrational? Is it somehow forbidden? Does it mean that the agent should revise her beliefs? And how do beliefs enter this picture?

What bilateralism says is that there is something wrong with the pair formed by the set of sentences the agent accepts and the set of sentences the agent rejects. So, there is an agent and there is some mistake that must be avoided or corrected. 3 To understand what this mistake amounts to, it might be useful to know when it is correct to accept or to reject a certain proposition and when it is not. Of course, there are several considerations involved in judging whether a speech act like that is correct or not. There might be pragmatic or semantic considerations, but if we focus only on logical considerations, there are two main aspects. On one hand, there is entailment: we should not reject the conclusion of a valid argument if we are certain of its premises. On the other, there is the question of whether we are actually certain enough of the premises. Most of the time, certainty is not where we are when reasoning. And that is why I think probabilism might be helpful: even if we are not completely certain of our premises, we might have some information on how uncertain we are about the sentences we are stating, which can easily be translated into degrees of belief.

In order to understand the elucidation of bilateralism I am proposing, and how to avoid the problems that might stand in the way, we will devote the next section to learning some probability theory. I will explain what probabilism is and what probabilities are. Then I will move on to some epistemic paradoxes that come together with probabilism.

2.2. Probabilism

Probabilism states that it is possible to model degrees of belief by means of probability functions. In order to understand it, I write down here the axioms of probability theory and some theorems that will be relevant for the discussion that will follow. 4

This is a possible axiomatization of classic probability theory: 5

(Ax1) 0 ≤ P(φ) ≤ 1.

(Ax2) If ⊢ φ, then P(φ) = 1.

(Ax3) If φ and ψ are logically inconsistent, then P(φ˅ψ) = P(φ) + P(ψ).

Some relevant theorems that might come in handy are:

(T1) P(¬φ) = 1- P(φ).

(T2) If φ ⊢, then P(φ) = 0.

(T3) If φ ⊢ ψ then P(φ) ≤ P(ψ).

(T4) P(φ˅ψ) = P(φ) + P(ψ) - P(φ˄ψ)

Finally, conditional probability: 6

(C) P(φ|ψ) = P(φ˄ψ) / P(ψ)

Our first axiom (Ax1) states that the range of probabilities is the unit interval, that is, the real numbers between 0 and 1. (Ax2) states that if a formula is a logical truth, its probability will always be 1. We have two pieces of information about disjunction. (Ax3) states that when two disjuncts are logically inconsistent (or mutually exclusive), the probability of the disjunction equals the sum of the probabilities of the disjuncts. (T4) explains what happens when the disjuncts are not logically inconsistent: you get the probability of the disjunction by taking the probability of the first disjunct plus the probability of the second disjunct minus the probability of both happening together (for the probability of both happening together is already counted in each probability). The probability of a conjunction can be extracted by pure algebra from conditional probability (C). Conditional probability is read as follows: the probability of φ given ψ can be calculated as the ratio of the probability of φ and ψ happening together to the probability of ψ. If you rearrange the terms in the equation, you get that the probability of a conjunction is the product of P(φ|ψ) and P(ψ).

Negation is the only connective whose probability is compositional in this classical frame, and it is defined as in (T1). (T1) and (Ax2) entail (T2), which says that if a formula is a logical falsity, its probability will always be 0. Finally, (T3) states that, when facing a valid inference with one premise and one conclusion, the probability of the conclusion must be equal to or greater than the probability of the premise.
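As a quick numerical sanity check (the model and its weights are arbitrary, chosen only for illustration), the axioms and theorems above can be verified on a toy probability function over the four valuations of two atoms p and q:

```python
from itertools import product

# A toy probability model: a weight for each of the four (p, q)-worlds.
# The weights are arbitrary but sum to 1.
worlds = list(product([True, False], repeat=2))  # (T,T), (T,F), (F,T), (F,F)
weights = dict(zip(worlds, [0.4, 0.2, 0.3, 0.1]))

def P(formula):
    """Probability of a formula = total weight of the worlds where it holds."""
    return sum(w for world, w in weights.items() if formula(*world))

p_ = lambda p, q: p
q_ = lambda p, q: q
neg_p = lambda p, q: not p
conj = lambda p, q: p and q
disj = lambda p, q: p or q
taut = lambda p, q: p or not p

assert abs(P(taut) - 1) < 1e-9                           # (Ax2)
assert abs(P(neg_p) - (1 - P(p_))) < 1e-9                # (T1)
assert abs(P(disj) - (P(p_) + P(q_) - P(conj))) < 1e-9   # (T4)
assert abs(P(conj) / P(q_) - 0.4 / 0.7) < 1e-9           # (C): P(p|q)
```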

A natural way to translate these axioms and theorems into an interpretation of how our beliefs should interact is as follows: we interpret our degrees of belief as real numbers between 0 and 1, where believing in degree 1 is having absolute certainty that some formula is true and believing in degree 0 is being absolutely certain that some formula is false, as in tautologies and contradictions. We usually do not believe things in an absolute way. For all those non-absolute cases, we ascribe our beliefs some higher or lower value between 1 or 0. Then we simply interpret our degrees of belief in any formula as respecting the probability axioms.

The axioms stated above entail certain other theorems about the relation between logic and probability. These theorems together with the probabilistic interpretation entail the following classic result stated in Adams (1996), which we will refer to as Adams’ theorem. Take D as a disbelief function defined as D(φ) = 1 - P(φ), then:

If φ1, …, φn ⊢ ψ, then D(ψ) ≤ D(φ1) + … + D(φn).

That is, our disbelief in the conclusion of a valid inference must be less or equal to the sum of the disbeliefs of its premises. Field (2015) generalizes this result to inferences with multiple conclusions:

If φ1, …, φn ⊢ ψ1, …, ψm, then 1 ≤ D(φ1) + … + D(φn) + P(ψ1) + … + P(ψm).

This means that if the inference φ1, …, φn ⊢ ψ1, …, ψm is valid, then the sum of one’s degrees of disbelief in φ1, …, φn plus the sum of one’s degrees of belief in ψ1, …, ψm has to be greater than or equal to 1.
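These bounds can be spot-checked numerically. The sketch below (the choice of inferences –modus ponens and p˅q ⊢ p, q– and all helper names are mine) draws random probability functions over the four p/q-worlds and confirms that both Adams’ bound and Field’s multiple-conclusion bound hold in every case:

```python
import random
from itertools import product

worlds = list(product([True, False], repeat=2))  # the four (p, q)-worlds

def random_P():
    """A random probability function over the four worlds."""
    w = [random.random() for _ in worlds]
    s = sum(w)
    weights = dict(zip(worlds, [x / s for x in w]))
    return lambda f: sum(wt for wld, wt in weights.items() if f(*wld))

for _ in range(1000):
    P = random_P()
    D = lambda f: 1 - P(f)          # disbelief function
    p_ = lambda p, q: p
    q_ = lambda p, q: q
    p_implies_q = lambda p, q: (not p) or q
    disj = lambda p, q: p or q
    # Adams' theorem for modus ponens (p, p→q ⊢ q):
    assert D(q_) <= D(p_) + D(p_implies_q) + 1e-9
    # Field's generalization for p∨q ⊢ p, q:
    assert 1 <= D(disj) + P(p_) + P(q_) + 1e-9
```

The second assertion holds with room to spare: algebraically, D(p˅q) + P(p) + P(q) equals 1 + P(p˄q), which can never drop below 1.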

These results can be understood as constraints on rationality. Every consistent assignment of probabilities for any valid inference is restricted by Adams’ theorem, which means that there is no consistent probability assignment where the value the disbelief function assigns to ψ (in the single-conclusion case) is higher than the sum of the disbeliefs of the premises. Or, as the multiple-conclusions extension shows, there is no consistent assignment of probabilities where the sum of the disbeliefs of the premises and the probabilities of the conclusions is smaller than 1. Dorothy Edgington (1997) calls this restriction the constraining property. This property can be of use for our quest, because if we are asking ourselves what exactly it means to be coherent, and we understand coherence as having beliefs that respect the probability axioms, then the set of sentences we accept will be coherent if and only if our degrees of belief in those sentences respect the constraining property. The constraining property will also give us information about invalid inferences. As Edgington puts it:

[...] if an argument is invalid [...] then there is an assignment of probabilities, consistent with the laws of probability, which give its premises probability 1 and its conclusion probability 0. So it does not have the constraining property. [...] An argument has the constraining property if and only if it is valid. (Edgington, 1997, pp. 300-301)

If we follow Field’s proposal, then for every valid inference it will be incoherent to accept all its premises and reject all its conclusions when our degrees of belief are consistent with the probability axioms. Yet, it will be coherent to accept all the premises and reject all the conclusions of an invalid inference because that’s exactly what it means for an inference not to have the constraining property. 7 Field is aiming at something different than we are. He is digging into what it means for two agents to differ on what is valid and how every agent determines what is valid and what is not (see Field, 2009, 2015). Yet, his project is related to ours because he chooses to address the problem by searching for some normative principle that restricts our credence when facing valid inferences. 8 For that, he analyses two different scenarios: when talking about partial beliefs he uses Adams’ theorem to restrict our degrees of belief. When talking about full belief and full disbelief he proposes to adjust the bilateralist approach to this framework. Field says the following:

The idea is that the sequent A1, …, An ⇒ B1, …, Bm directs you not to fully believe all the Ai while fully disbelieving all the Bj. (Field, 2015, p. 49)

So that:

To regard the sequent A1, …, An ⇒ B1, …, Bm as valid is to accept the consequent of (the Adams’ thesis extended to multiple conclusions) as a constraint on degrees of belief. (Field, 2015, p. 49) [italics are mine].

Field immediately points out the following:

Restall doesn’t explicitly say ‘fully’, but I take it that that’s what he means: otherwise the classically valid sequent A1, …, An ⇒ A1˄ … ˄ An would be unacceptable in light of the paradox of the preface. (Field, 2015, p. 49)

We could end our quest here and simply settle for these two separate frameworks to understand constraints on validity. But we might also want to know when the degree of belief in φ is high enough for us to start acting as if we believed φ in degree 1; that is, when it is rational to have a full belief in φ, and then accept φ. Someone might ask why this bridge between partial and full belief is important when talking about validity, for we know that this bridge is, at least at first strike, bound to fail because of epistemic paradoxes. Well, here I think the way Adams opens his book, A Primer of Probability Logic (1996), might be of use to motivate what is left of the quest. He says:

It is often said that deductive logic is concerned with what can be deduced with logical, mathematical certainty, while if probability has a part in logic at all, it falls into the province of Inductive Logic. This has some validity, but it also overlooks something important. That is that deductive logic is usually supposed to apply to reasoning or arguments whose premises aren’t perfectly certain, and whose conclusions can’t be perfectly certain because of this. (Adams, 1996, p. 1)

So, it would be good to be able to build a bridge between partial and full belief, and between these two different constraints on validity, in order to be able to say something about contexts in which we are not perfectly certain about our premises. I think this quest is well motivated when we are asking about bilateralist statements because, as we saw earlier, these statements carry great normative force, and the main use of normative statements is in real contexts of reasoning, which are contexts where agents tend to be uncertain of the premises they are using. At the same time, if we can actually state when, according to our degrees of belief, we can accept or reject a premise or a conclusion without being incoherent, then the elucidation we were looking for will be in our hands. We can already understand bilateralism in an abstract way. But by elucidating the concepts of acceptance, rejection and incoherence in terms of degrees of belief, we might be able to actually use bilateralist statements in concrete contexts of reasoning. Let us not forget that bilateralism states that we learn the meaning of our logical constants through usage. If we can instantiate bilateralist statements in concrete contexts, then the idea that we learn how to use logical connectives by understanding when it is out of bounds to accept or reject them takes on an interesting new dimension.

The problem, as we said before, is that it is not obvious how to build this bridge. A natural first take might be by means of Locke’s thesis, which states that there is some r such that, for all φ, if P(φ) ≥ r then we can accept φ. The problem is that, under a minimal set of natural assumptions, Locke’s thesis is haunted by epistemic paradoxes.

This is the point where trouble arises for our investigation, mainly because, as Field explains in the quote above, epistemic paradoxes seem to indicate a dead end: either we take acceptance as full belief and rejection as full disbelief, or we simply forget about the probabilistic interpretation of bilateralism and stick to plain probabilism. I will show that we can fully merge these two philosophical proposals without reaching this dead end.

2.3. The paradoxes

As explained above, it is not easy to make probabilism compatible with a consistent probabilistic reading of acceptance and rejection, mainly because under certain minimal conditions some paradoxes arise. In this section, I will explain why it is not that easy to fix a threshold for acceptance if we understand our certainty or uncertainty in a probabilistic fashion.

The lottery paradox (Kyburg, 1961) is set in a lottery scenario (as expected) where one is willing to accept, for any given concrete ticket, that that ticket will lose, but one is also aware that some ticket will win –or, what amounts to the same thing, one is willing to reject that every ticket will lose. If we think of it as an inference, one is willing to accept every premise stating that each ticket will lose, but one is unwilling to accept the conjunction of all the premises as the conclusion.

What the lottery paradox comes to show us is that we cannot hold these three principles about rationality altogether:

(P1) Rational acceptance is closed under logical consequence.

(P2) Degrees of belief must respect the probability axioms.

(P3) Locke’s thesis: There is a threshold 1 > r > 0.5 such that, for every proposition φ, you should accept φ if and only if P(φ) ≥ r.

Locke’s thesis tries to capture the reasonable idea that, for example, if P(φ) = 0.99, we would like to accept φ in any scenario. Unfortunately, the lottery paradox shows us that this is not that simple. In particular, let us imagine a scenario with 100 tickets, where we believe that each ticket has a probability of winning of 0.01, which is equivalent to saying that each ticket’s probability of losing is 0.99. 9 Nevertheless, even though we have good reasons to accept every sentence of the form “Ticket n will lose”, we also know that some ticket will win (because that is how the lottery works), so we cannot accept the conjunction of all these statements, namely, that every ticket will lose. Thus, we cannot close our beliefs under conjunction introduction. To see it in a more graphic way, even if our degree of belief in every premise is extremely high, we are unwilling to accept the conclusion of this argument, which only uses conjunction introduction:

There are only 100 tickets 10

Ticket 1 will lose

Ticket 2 will lose

.

.

.

Ticket 100 will lose

Therefore, every ticket will lose.

That is the lottery paradox: we cannot have (P1), (P2) and (P3) together.
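The arithmetic behind the paradox can be written out directly (modeling the worlds as “ticket i wins” is my own illustrative encoding): each premise has probability 0.99, yet their conjunction has probability 0.

```python
# 100 tickets, exactly one of which wins; world i is "ticket i wins",
# each world equally likely.
n = 100
P = lambda prop: sum(1 / n for world in range(n) if prop(world))

# "Ticket i will lose" is true in every world except world i.
ticket_loses = lambda i: (lambda world: world != i)

# Every premise clears a 0.99 threshold...
assert all(abs(P(ticket_loses(i)) - 0.99) < 1e-9 for i in range(n))

# ...but the conclusion "every ticket will lose" is true in no world.
every_ticket_loses = lambda world: all(world != i for i in range(n))
assert P(every_ticket_loses) == 0
```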

On the other hand, the preface paradox (Makinson, 1965) is described in the context of a book, where the scholar who wrote it is willing to accept any of its sentences but is also willing to accept that there might be some mistake. Again, she is willing to accept each statement, but not all of them together. Just to be clear, the work I am constantly quoting (Field, 2015) talks about the preface paradox, but I will focus on the lottery paradox instead, and I will explain why. The preface paradox belongs to the same family as the lottery paradox. These paradoxes are a problem for our approach because, when trying to develop an apparatus for partial belief, we keep finding sets of inconsistent beliefs that we wish were consistent. Each paradox arises because one is willing to accept the truth of every statement of a given set but would not want to accept the conjunction of all of them –or, similarly, one is willing to accept that the set of statements contains a falsehood. Field chooses to talk about the preface paradox; I choose to talk about the lottery paradox. Even if they are not essentially the same phenomenon, I do not think the ways in which they differ matter in what follows, and the lottery paradox is presented in a way that is easier to work with. That is, it is easier to think of degrees of belief when we know that there are 100 tickets than when we are talking about a book, where one does not necessarily accept each sentence by itself, but might accept statements made of several sentences, and where the degrees of belief in each statement might vary or be codependent.

In the next section, I will survey different candidate thresholds that fail to work, and then explain how to set a contextual threshold for acceptance.

3. A Probabilistic Interpretation of Bilateralism

We already have an interpretation of the bilateralist concept of incoherence. In this section, I will give an interpretation of acceptance and rejection in terms of degrees of belief and I will show that it is possible to interpret these concepts in accordance with Locke’s thesis without falling into epistemic paradoxes by means of the P-stability theory.

3.1. The thresholds that won’t work

When it comes to finding a threshold for acceptance and rejection, some of the most obvious proposals fail to solve the problem. The first proposal is the full belief/full disbelief one, where r = 1: believing in degree 1 for acceptance and believing in degree 0 for rejection. What bilateralism would then tell us is that, when facing a valid inference, it is incoherent to believe all the premises in degree 1 and all the conclusions in degree 0. That makes sense. Yet it is too weak for our quest. One of the main motivations of this work, as of Adams’ and Field’s, was expanding the reach of logic to everyday reasoning scenarios in which we don’t necessarily have certainty about our premises. If we take 1 and 0 as our thresholds, we end up with such a strong norm for acceptance and rejection that it takes us far from our goal of guiding actual everyday reasoning, because, as we have said on many occasions, we don’t usually have absolute certainty in the propositions with which we reason.

Maybe we could try setting our threshold as high as we can imagine without reaching 1 and 0. 11 Then r = 0.99 seems a natural candidate. But as we have seen, the lottery paradox immediately rules out 0.99. Naturally, we could set our threshold to r = 0.999 and avoid this lottery scenario; yet an even larger lottery will arise, this time with 1000 tickets, and so on. So r = “extremely high but not exactly 1” is also ruled out as a threshold. What should we do then? Is there any other option to consider?

The problems will arise no matter which threshold we choose. For extremely high thresholds, the lottery paradox will be the witness of our failure. Even if we try some critically low threshold, problems still emerge. For example, another natural candidate would be to accept φ whenever P(φ) > 0.5: the lowest possible threshold that is still higher than indifference about our propositions. 12 At first glance, we can see why this fails. Take any two probabilistically independent formulas φ and ψ such that P(φ) = P(ψ) = 0.51. Then P(φ˄ψ) = 0.2601. That means that it is coherent to accept all the premises of our valid inference (conjunction introduction) yet reject its conclusion, for the degree of belief in our premises is higher than 0.5 and our degree of belief in the conclusion is smaller. So 0.5 is also ruled out.
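The arithmetic behind this counterexample can be checked in a few lines; a minimal sketch, with the low candidate threshold written as a hypothetical `accept` predicate:

```python
# Two probabilistically independent propositions, each believed to degree
# 0.51: both premises clear the > 0.5 bar, but their conjunction falls
# far below it.
p_phi, p_psi = 0.51, 0.51
p_conj = p_phi * p_psi  # independence: P(φ∧ψ) = P(φ) · P(ψ) ≈ 0.2601

def accept(p):
    return p > 0.5  # the candidate low threshold

print(accept(p_phi), accept(p_psi), accept(p_conj))  # True True False
```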

These failed attempts leave us with some understanding of why Field might have given up on the project of finding a threshold for acceptance and rejection in the context of degrees of belief. But I think it is possible to find a different answer. If there cannot be an absolute threshold that works for every argument, then we might find a contextual threshold for each valid inference. In the next section, I will present the tools needed to mend this failure scenario in which it seems impossible to define a threshold for acceptance and rejection.

3.2. The threshold that will work: a contextualist solution

I will show how to solve the lottery paradox within a contextualist framework, which amounts to a reinterpretation of Locke’s thesis so that it doesn’t fall prey to the lottery paradox.

Contextualism here can be simply understood as the claim that we won’t look for a universal threshold for acceptance or rejection; instead, we look for a threshold for acceptance or rejection in the context of each argument, depending on our degrees of belief in the propositions involved. 13 That is, the threshold for acceptance might differ between an argument with two premises (take p1, p2 ⊢ p1˄p2) and an argument with a thousand (take p1, p2, …, p1000 ⊢ p1˄p2˄…˄p1000), even if the argument with a thousand premises contains the two premises of the first, just as it might vary depending on our degrees of belief in the premises and conclusions.

Take P to be the classical probability function we already know, Bel to be a full belief 14 function that commands us to believe a sentence φ (or, for our purposes, to accept φ), and r to be the number between 0 and 1 that will become our threshold. In his proposal for solving the lottery paradox, Leitgeb (2014) states the following:

[...] we need to distinguish a claim of the form ‘there is an r < 1 . . . for all P ...’ from one of the form ‘for all P… there is an r < 1…’ As we are going to see, the difference is crucial: while it is not the case that there is an r < 1 such that for all P (on a finite space of worlds) [...] there is an r < 1 such that the same conditions are jointly the case. (Leitgeb, 2014, pp. 133-134)

Even if it is true that there is no single threshold r that can jointly satisfy the logical closure of Bel (P1), the probability axioms (P2), and Locke’s thesis (P3), it is also true that, by modifying Locke’s thesis a little, we can find for every valid inference some threshold r that satisfies (P1) and (P2). The trick is to change the scope of the quantifier in Locke’s thesis: we no longer ask for one threshold r that works for every valid inference, but rather, for every valid inference, some threshold r. As we said, we turn to a contextual threshold. If we think about it, contextualism here seems quite justified, for we cannot be as certain about the conclusion of an inference with a thousand slightly uncertain premises as we are about the conclusion of an inference with only two slightly uncertain premises. To paraphrase Edgington (1997), accumulating uncertainties makes the conclusion inherit the uncertainty of each premise.

So, how do we set this contextual threshold? 15 For that, we will use conditional probability. We will say a proposition φ is P-stable if and only if learning any other proposition ψ compatible with φ does not lower the agent’s credence in φ to 0.5 or below, that is, P(φ|ψ) > 0.5.

Now, imagine a truth table where each line is a possible world; we can name line n with wn. Then, if we have two propositions expressed by the sentences φ and ψ, we get a line w1, a world in which φ and ψ are both true; another world, w2, in which φ is true and ψ is false; and so on. Now imagine you have all the possible worlds given by the set of all the propositions you are opinionated about (this is a huge truth table; we will stick to two propositions for simplicity). By using the power set operation, you can construct the set of all subsets of possible worlds. You don’t need all of them, but you might want some different subsets. For example, you might want to express your degree of belief in the sentence stating that either w1 or w2 obtains: that is a set containing two possible worlds. You might want to express your degree of belief in the sentence that either w1 or w2 or w3 obtains, and so on. That set of subsets of worlds is called a sample space of worlds. Then, over that sample space, you might want to define the strongest sentence you believe, that is, the sentence that excludes the greatest number of lines of your truth table. That sentence will be called φw. With all this in mind, we can define P-stability in a more formal way:

Definition 1. Let P be a probability measure on a sample space of worlds W, and φ ⊆ W. For all φ, φ is P-stable if and only if, for all ψ ⊆ W such that φ and ψ are not incompatible and P(ψ) > 0, P(φ|ψ) > 0.5.
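Definition 1 can be checked mechanically on a finite space. Below is a minimal sketch, assuming worlds are given as a dictionary from names to probabilities and propositions are sets of worlds; the function names are illustrative, not Leitgeb’s:

```python
from itertools import chain, combinations

def prob(p, prop):
    """Probability of a proposition, i.e., the sum over its worlds."""
    return sum(p[w] for w in prop)

def is_p_stable(p, prop):
    """Definition 1: P(prop | psi) > 0.5 for every compatible psi with P(psi) > 0."""
    worlds = set(p)
    for rest in chain.from_iterable(
            combinations(worlds, k) for k in range(1, len(worlds) + 1)):
        psi = set(rest)
        if psi & prop and prob(p, psi) > 0:           # compatible, positive probability
            if prob(p, prop & psi) / prob(p, psi) <= 0.5:
                return False
    return True

# Four worlds, as in a two-proposition truth table (hypothetical values).
P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
print(is_p_stable(P, {"w1", "w2", "w3"}))  # True
print(is_p_stable(P, {"w1"}))              # False (conditioning on W gives 0.4)
```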

Then, Bel satisfies (P1), P satisfies (P2), and both Bel and P satisfy this contextual version of Locke’s thesis: for all φ, Bel(φ) if and only if P(φ) ≥ P(φw) > 0.5 (Leitgeb, 2014). This way, we get at least one possible threshold for every consistent probability distribution. Let’s consider an example. Take a distribution with two formulas φ and ψ and a probability measure like this:

A way to calculate the threshold is the following: we start by ordering the worlds from the highest probability to the lowest, as in the table above. Then, if P(φ˄ψ) > P(φ˄¬ψ) + P(¬φ˄ψ) + P(¬φ˄¬ψ), then {w1} is the first and strongest P-stable set of propositions. We can see that in this example this is not the case, so we move to the next: if P(φ˄¬ψ) > P(¬φ˄ψ) + P(¬φ˄¬ψ), then {w1, w2} is the strongest set; but again, it is not the case. What does hold is that P(¬φ˄ψ) > P(¬φ˄¬ψ), so our strongest set is the one containing {w1, w2, w3}, which sets the threshold at r = 0.9 (the sum of the probabilities of each wi in our set).
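The ordering procedure above can be sketched as follows: list the worlds by descending probability and keep adding them until the least probable world inside the set outweighs everything left outside it. The probability values are hypothetical, chosen only to be consistent with the text’s description (the top three worlds sum to 0.9):

```python
def strongest_p_stable(p):
    """Return the smallest P-stable set of worlds and its total probability."""
    worlds = sorted(p, key=p.get, reverse=True)
    for k in range(1, len(worlds) + 1):
        inside, outside = worlds[:k], worlds[k:]
        # the least probable world inside must outweigh the whole outside
        if p[inside[-1]] > sum(p[w] for w in outside):
            return set(inside), sum(p[w] for w in inside)
    return set(worlds), sum(p.values())

P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
strongest, r = strongest_p_stable(P)
print(strongest, r)  # {'w1', 'w2', 'w3'} with threshold r = 0.9 (up to rounding)

# In a fair 100-ticket lottery no proper subset of worlds is P-stable
# (0.01 never outweighs the rest), so the threshold degenerates to 1.
fair_lottery = {f"w{i}": 1 / 100 for i in range(1, 101)}
print(strongest_p_stable(fair_lottery)[1])  # ≈ 1.0
```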

In this sense, Leitgeb’s P-stability theory mends the lottery problem, because the lottery scenario is one in which we don’t have P-stable beliefs: even if we learn that every ticket except tickets 1 and 2 will lose, our credence in the proposition “Ticket 1 will lose” drops to exactly 0.5, which violates P-stability. That is, the only P-stable proposition is the disjunction stating that some ticket among the one hundred will win, and the probability of that statement must be 1. This means that the only degree of belief with which it would be coherent to accept a proposition of our lottery scenario is 1. Hence, we shouldn’t accept any proposition of our lottery if we believe it in a degree lower than 1.

Leitgeb sets a contextual threshold for partial belief that allows us to avoid the lottery paradox. My proposal is to import this theory into the bilateralist framework, reading acceptance where Leitgeb reads outright belief. This is quite a subtle move: it simply adds something to bilateralism.

4. Acceptance and Rejection

We can take P-stability as our yardstick for acceptance. When Leitgeb states that if a proposition is P-stable then we can believe it, we will read it as saying that if a proposition is P-stable, we can accept it. This first bit is pretty straightforward. But what about rejection? I think there are two options to consider, and both are plausible. The first way to address rejection is simply to state that every proposition believed in a degree lower than the one set for acceptance should be rejected. That is, if we set a threshold for acceptance as follows: if P(φ) ≥ r, then Bel(φ), then we should define our rejection function Dis as follows: if P(φ) < r, then Dis(φ). But in most cases this proposal seems quite odd, for nobody would find it intuitive to reject a proposition with, for example, probability 0.899 just because we set our threshold at 0.9 (0.899 is still really high).

Of course, there are cases in which this might be the right way to settle the threshold. There are contexts in which the only options we have are to accept or to reject a proposition. A good example, pointed out by a referee, is one in which a construction engineer must decide whether some structure is safe in order to start the construction. Suppose the threshold for accepting the proposition “the structure is safe enough to be constructed” is 0.9. If the engineer is less than 0.9 sure about it, say 0.88 sure that the structure is safe, the reasonable thing to do is to reject the proposition “the structure is safe enough to be constructed”. When the only two reasonable options are acceptance and rejection, as in this example, it seems reasonable to settle the threshold for rejection as anything smaller than r. Yet in most cases of everyday reasoning we also have the option of suspending judgment over a proposition. If we are not certain enough to accept φ, nor certain enough to reject it, the reasonable thing to do is to suspend judgment over φ. For those cases, this proposal is too constrained, since it leaves no room for suspension of judgment.

I think a better way to determine the threshold for rejection, in the bilateralist spirit, is to settle it at 1-r. This way, rejecting a proposition φ can be understood as something different from accepting ¬φ. That is, we will accept ¬φ whenever our degree of belief in ¬φ is equal to or higher than r, and we will reject φ whenever our degree of belief in φ is lower than 1-r. And, what I think is the most important feature, depending on the probability theory one is using and on how negation is defined, rejecting φ might or might not collapse into accepting ¬φ. 16

This way, the bilateralist claim would be as follows:

An inference is valid if and only if:

• Our probability assignments respect the constraining property, and

• Given a probability distribution and a threshold r settled by the least P-stable proposition, we will be incoherent if and only if we believe each premise of our valid inference in a degree equal to or higher than r and believe each conclusion in a degree lower than 1-r.

In a more formal way:

• An inference Γ ⊢ Δ is valid if and only if it is incoherent to accept all the premises γ ∈ Γ when P(γ) ≥ r and reject every δ ∈ Δ when P(δ) < 1 – r, where r is settled by the degree of belief in the least P-stable proposition.

An interesting feature of this proposal is that it implies that in certain contexts there will be sentences that it is incoherent neither to accept nor to reject. Take a threshold for acceptance r = 0.8; then the threshold for rejection would be 1-r = 0.2. Now imagine a proposition φ you believe in degree 0.5. Then you won’t be incoherent if you neither accept φ nor reject it. After all, your degree of belief in φ is not high enough to accept it and not low enough to reject it.
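The tripartite norm just described can be sketched as a small classifier; the function name and the numbers are hypothetical illustrations:

```python
# Accept when credence reaches the contextual threshold r, reject when it
# falls below 1 - r, and suspend judgment in between.
def attitude(credence: float, r: float) -> str:
    if credence >= r:
        return "accept"
    if credence < 1 - r:
        return "reject"
    return "suspend"

r = 0.8
print(attitude(0.9, r))  # accept
print(attitude(0.1, r))  # reject
print(attitude(0.5, r))  # suspend: neither high enough nor low enough
```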

Take a 100-ticket lottery. It is rational to believe each statement “Ticket i will lose” (where i ∈ {1,…,100}) in degree 0.99, and to believe in degree 0 that all the tickets will lose. Yet, as we have seen, because lottery propositions are not P-stable, the only degree of belief for which it is rational to accept “Ticket i will lose” is 1, 17 and the only degree of belief for which it is rational to reject “Every ticket will lose” is 0. So it is coherent not to accept the statement “Ticket i will lose”, and at the same time it is incoherent to reject it (for we believe it in degree 0.99). This is a good explanation of what happens with a valid argument such as P1: Ticket 1 will lose, …, P100: Ticket 100 will lose, therefore C: Tickets 1, 2, …, and 100 will lose. 18 In bilateralist parlance, we will say it is incoherent to accept all the premises and reject all the conclusions. At the same time, we know we are not entitled to accept the premises of this valid argument, which is good, because in this way we are not being incoherent. But we are also not entitled to reject a premise that we believe in degree 0.99, which is also good, because 0.99 is a really high degree of belief.

In this way, for every valid inference there will be certain probability assignments, consistent with probability theory, for which our degrees of belief in the premises are high enough and our degrees of belief in the conclusions are low enough that accepting all the premises while rejecting all the conclusions is incoherent. This criterion applies to every valid inference. If the premises of a valid argument are P-stable, there will be some r with 0.5 < r ≤ 1 to settle a threshold for that particular inference. If they aren’t P-stable, as in the lottery paradox, the only possible thresholds will be 1 and 0 for acceptance and rejection respectively.

5. Final Remarks

I have shown that it is possible to understand acceptance and rejection in terms of partial belief without falling into the lottery paradox. The main goal was to give a clear understanding of the bilateralist’s primitives in probabilistic terms. In order to do so, we first searched the literature for an interpretation of incoherence by means of Adams’ theorem, with acceptance interpreted as belief and rejection as disbelief. Then, to set a clear threshold in a partial belief framework, I used Locke’s thesis mediated by the P-stability theory and redefined it so that it fits the bilateralist approach. That is, I took the threshold r for acceptance given by the least P-stable proposition and proposed a threshold of 1-r for rejection. In this way I tried to capture the bilateralist spirit in which accepting ¬φ is not equivalent to rejecting φ: if we were working with a non-classical logic and defined a non-classical negation, there might be a range of real numbers between 1-r and r where it is coherent neither to accept φ nor to reject φ. Also, depending on how we define our probabilities for negation, the rejection of a negated proposition might or might not collapse into the acceptance of the proposition being negated.

By these means, we have a consistent and clear definition with which to understand the concepts of acceptance, rejection and incoherence used in the bilateralist jargon. This also allows us to extend and merge both parts of Field’s proposal: the one with partial belief and the bilateralist interpretation in terms of full beliefs. In this sense, I added some P-stability to Field’s proposal and some bilateralism to Leitgeb’s theory.

Achieving this goal makes it possible to build a bridge between logic and everyday reasoning. In particular, when facing the question of how normative logic is for reasoning, this approach might help us find an answer. We know that when an inference is valid, there are certain sets of probability assignments that it is incoherent to hold altogether, and that for every valid inference there is at least one threshold r and one threshold 1-r telling us that it is incoherent to believe all our premises in a degree equal to or higher than r while believing our conclusions in a degree lower than 1-r; equivalently, it is incoherent to accept all the premises and reject all the conclusions given these already incoherent sets of probability assignments. This way, we can merge our knowledge about logical validities with our everyday reasoning, where we are often uncertain about the truth of our premises. By looking at the bilateralist approach, now that we have this philosophical interpretation, we can assert some normative claims about how we should reason.

A philosophical understanding of the main bilateralist concepts was needed, and we now have a proposal on how to understand them. First, it allows us to use this normative reading of valid inferences in everyday reasoning: we can take our uncertain propositions and decide whether it is rational to accept all the premises and reject the conclusion, depending on the argument and our degrees of belief in them. At the same time, this opens a door to solving the problem stated in section 1 about the different ways in which Restall (2005) and Ripley (2015) interpret the Cut rule (a classical reading versus a non-transitive reading, respectively). I think my proposal can solve this problem, but there is some interesting work left to do. First, we need to choose several probability theories (not just the classical one) and adapt Adams’ theorem in order to see the constraining properties that each theory imposes on probability assignments; then we should compare readings of the Cut rule theory by theory. I have my guesses, yet the problem, for now, is left unsolved.

References

Adams, E. (1996). A primer of probability logic. CSLI Publications. [ Links ]

Carroll, L. (1895/1995). What the tortoise said to Achilles. Mind, 104(416), 691-693. https://doi.org/10.1093/mind/104.416.691 Reprinted in Woollcott, A. (Ed.) (1982), The Penguin Complete Lewis Carroll (pp. 1104-1108). Penguin. [ Links ]

De Finetti, B. (1937). La prévision: ses lois logiques, ses sources subjectives. In Annales de l’Institut Henri Poincaré (vol. 7, pp. 1-68). Wiley. [ Links ]

Dummett, M. (1991), The logical basis of metaphysics. Harvard University Press. [ Links ]

Edgington, D. (1997). Vagueness by degrees. In Vagueness: A Reader. The MIT Press. [ Links ]

Field, H. (2009). What is the normative role of logic? In Aristotelian Society Supplementary Volume, 83, 251-268. Wiley Online Library. [ Links ]

Field, H. (2015). What is logical validity? In C. R. Caret & O. T. Hjortland (Eds.), Foundations of logical consequence (pp. 33-70). Oxford University Press. [ Links ]

Hacking, I. (2001). Probability and inductive logic. Cambridge University Press. [ Links ]

Hájek, A. (2008). Arguments for, or against, probabilism? The British Journal for the Philosophy of Science, 59(4), 793-819. http://www.jstor.org/stable/40072312Links ]

Incurvati, L., & Schlöder, J. (2017). Weak rejection. Australasian Journal of Philosophy, 95(4), 741-760. [ Links ]

Joyce, J. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575-603. [ Links ]

Kyburg, H. (1961). Probability and the logic of rational belief. Wesleyan University Press. [ Links ]

Leitgeb, H. (2014). The stability theory of belief. The Philosophical Review, 123(2), 131-171. [ Links ]

Makinson, D. (1965). The paradox of the preface. Analysis, 25(6), 205-207. [ Links ]

Murzi, J., & Steinberger, F. (2017). Inferentialism. In B. Hale (Ed.), A Companion to the Philosophy of Language (pp. 197-224). Wiley Blackwell. [ Links ]

Paris, J. B. (2001). A note on the Dutch Book method. In ISIPTA, 1, 301-306. [ Links ]

Ramsey, F. (1926/2016). Truth and probability. In Readings in Formal Epistemology (pp. 21-45). Springer. [ Links ]

Restall, G. (2005). Multiple conclusions. In P. Hájek, L. Valdes-Villanueva & D. Westerstahl (Eds.), Logic, methodology and philosophy of science: Proceedings of the Twelfth International Congress (pp. 189-205). King’s College. [ Links ]

Ripley, D. (2015). Anything goes. Topoi, 34(1), 25-36. [ Links ]

Rumfitt, I. (2000). Yes and no. Mind, 109(436), 781-823. [ Links ]

Smiley, T. (1996). Rejection. Analysis, 56(1), 1-9. [ Links ]

Staffel, J. (2016). Beliefs, buses and lotteries: Why rational belief can’t be stably high credence. Philosophical Studies, 173(7), 1721-1734. [ Links ]

Williams, J. R. G. (2012). Generalized probabilism: Dutch books and accuracy domination. Journal of Philosophical Logic, 41(5), 811-840. [ Links ]

Williams, R. (2016). Probability and nonclassical logic. In A. Hajek & Ch. Hitchcock (Eds.), The Oxford handbook of probability and philosophy. Oxford University Press. [ Links ]

1Some readers might feel suspicious of this strategy, mainly because the nature of acceptance and rejection is quite different from the nature of degrees of belief: the first two concepts are absolute (either you accept/reject or you don’t), while the third, as the name suggests, comes in degrees. But if we think of concrete contexts of reasoning, one might be unsure what attitude to take towards some propositions and yet know how certain or uncertain one is. One might then ask: when are our degrees of belief high enough to accept or reject a proposition? Locke’s thesis will be in charge of answering this question. Nevertheless, we are quite a few paragraphs away from Locke’s thesis.

2It is also usually acknowledged that acceptance and rejection, which are propositional attitudes, have their speech act counterparts: assertion and denial. The literature on bilateralism tends not to differentiate between acceptance and assertion, or between denial and rejection. As far as my work is concerned, I will stick to acceptance and rejection to avoid confusion.

3Or that are simply inconceivable, maybe?

4Here I will be talking of classical logic as well as classical probabilities. This is not to imply that the only way to set constraints on belief is classical probability. This is just a way to simplify the work for reasons of space and readability for unfamiliar readers. In fact, as Field (2015) remarks, it is possible to extend all these results to different non-classical probability theories.

5To learn more about probability logic see Hacking (2001).

6It is important to notice that conditional probability is a theory of its own that can be added to this or any other equivalent axiomatization.

7There is a wide and most interesting literature on why it is important to respect the constraining property. In particular, both the classic Dutch Book argument –Ramsey (1926/2016); De Finetti (1937)– and the Accuracy argument –Joyce (1998)– explain what happens if we have incoherent sets of beliefs.

8The main idea of Field’s work is that, because it is possible to adapt probability theory to different logics, then Adams’ theorem can still run with different antecedents, (that is, for classical logic the antecedent will be “φ1,…, φn ⊢CL ψ1, …, ψm”, while for a non-classical logic validity it will be “φ1,…, φn ⊢NC ψ1, …, ψm” and so on) with some differences in the consequent.

9The reason the probability of the proposition “The nth ticket will win” is 0.01 is that we have 100 tickets with equal probability of winning (at least if we assume it’s a fair lottery), so the probability that ticket n wins is 1 in 100, that is, 1/100.

10I add this premise to make it a valid inference.

11Of course, this number doesn’t actually exist.

12Remember that 0.5 is usually interpreted as indifference.

13As an anonymous referee kindly points out, whatever we do, it must be done in a principled way, otherwise we could end up accepting things that we barely believe. There must be some conception that explains in every case why it is rational to accept or to reject some set of premises, in order to avoid an ad hoc solution. I think that p-stability will work as a reasonable proposal given the desideratum we are considering. Of course, there might be others.

14Full belief can be understood as outright belief or acting as if you believed in degree 1.

15I will follow Staffel’s (2016) presentation for simplicity.

16In classical probability theory, our threshold for rejecting a proposition φ will be the same as the threshold for accepting ¬φ. Yet, if we define negation in a different way (as other probability theories do, like paracomplete or paraconsistent theories; see Williams, 2012, 2016, or Paris, 2001), they won’t collapse. For example, you might define a probability theory over a paracomplete logic, say SK, in which the probability of a sentence such as φ∨¬φ might be less than 1. That is because negation is not defined so that P(¬φ) = 1 - P(φ), as in the classical case. A dual case happens for a probability theory built over a paraconsistent logic, such as LP.

17And that happens because the only possible threshold for lottery-like scenarios is 1, because no proposition in the lottery is P-stable. For the only P-stable proposition is the disjunction of every proposition, and the probability of the disjunction of every possible state of the world is always 1.

18There is the enthymematic premise that states that there are only 100 tickets in this lottery.

Received: December 14, 2020; Accepted: October 14, 2021

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.