**by Axel Arturo Barceló Aspeitia**

This is the second post in a series on the many ways philosophers have devised for reconciling opposing views. The first one was on dialetheism – accepting that some features of reality are actually contradictory – and you can read it here. This time I will address gradualism – the claim that some properties come in degrees. Future posts will address both pluralism and relativism.

A first option for reconciling two positions without accepting a contradiction is to postulate a gradation between *P* and non-*P*. The key move of these proposals is to argue that the property in question, *P*, is not actually a property that an object of the relevant kind either has or lacks, but rather a gradual property that can only be had to some degree or other. This means that, between things that are *P* and those that are not-*P*, there are intermediate cases that are neither *P* nor not-*P*. For example, some have argued that the reason we have conflicting evidence regarding, say, whether the domestic cat is a natural kind (since it is a biological species of animal) or an artificial one (since it was created by humans according to their own preferences) is that artificiality is a matter of degree, and between the artificial and the non-artificial there are several intermediate cases, including domestic cats, seedless grains, iron, etc. (Dennett 1990, Kroes 2012, Elder 2007, Sperber 2007, Grandy 2007, Asse 2015, p. 39). The general strategy is to say that the cases for which we have contradictory evidence are less-*P* than those in which the overall evidence points towards *P*, and more-*P* than those in which our overall evidence points towards not-*P*. In the aforementioned example, we have completely artificial cases like railroads and paper flowers (for which we have no reason to think they are not artificial) and, at the other extreme, completely natural objects like the sun or a tree in the middle of the forest (for which we have no reason to think they are artificial); objects in the middle, such as domestic cats, are more artificial than the sun, but not as artificial as railroads.

Consider now the well-known paradox of inevitable wrongdoing, linked to tragic dilemmas in meta-ethics. These are cases where an agent faces a choice such that, whatever she does, she will be doing something wrong. We have conflicting evidence regarding the moral assessment of these actions (and omissions): on the one hand, we have good reasons to conceptualise such actions as cases where one “must do wrong in order to do right” (Thompson 1987). However, people acting in such circumstances “emerge feeling torn, guilty, and tainted and, furthermore, it seems that these are appropriate reactions. We would wonder about the person who could perform a tragic act and come out of it unscathed in these ways.” (Kent 2008, 6) Thus, we have evidence for the person having done right and also for her having done wrong. Gradualist solutions have attempted to dissolve the paradox by arguing that “wrong” and “right” are too coarse-grained as categories for morally assessing human action. The very ideas of mitigating and aggravating circumstances in modern law make sense precisely because we understand that some actions are more *wrong* than others.

In logic, gradualism gave rise to fuzzy logic, i.e., to the development of logical systems explicitly built to better model the logical and semantic behaviour of gradual properties. In these logics, instead of the membership relation of set theory representing whether an object belongs to a predicate’s extension, there is a membership function assigning values within the [0, 1] interval of real numbers to objects relative to the set representing the predicate’s extension. Thus, we can account for the behaviour of propositions like “John is bald” where John is a borderline case of baldness. Given how widespread vagueness and similar phenomena are, it is not surprising that fuzzy logic has found heaps of applications, mostly outside philosophy.
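To make the contrast with classical set membership concrete, here is a minimal sketch in Python (the linear membership function, the hair-count threshold and the min/max connectives are illustrative choices of mine, not part of any particular fuzzy-logic system):

```python
# A toy fuzzy membership function for "bald": instead of yes/no set
# membership, each individual is mapped to a degree in [0, 1].
def bald_degree(hair_count: int, full_head: int = 100_000) -> float:
    """Degree to which someone counts as bald: 1.0 for no hair at all,
    0.0 for a full head of hair, linearly interpolated in between."""
    return max(0.0, min(1.0, 1 - hair_count / full_head))

# The common min/max fuzzy connectives:
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_not(a: float) -> float:
    return 1 - a

# A borderline case: "John is bald" and "John is not bald" both get
# degree 0.5 -- neither fully true nor fully false.
john = bald_degree(50_000)
print(john)                              # 0.5
print(fuzzy_and(john, fuzzy_not(john)))  # 0.5
```

Note that the conjunction of a borderline claim with its own negation gets a degree strictly between 0 and 1, which is exactly what the classical membership relation cannot express.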

A good solution to a paradox or to conflicting evidence should not only aim at reconciling the conflicting evidence, but also explain why it seemed to be in conflict in the first place. In other words, it is not enough to show that one can incorporate both the evidence for *A* being *P* and the evidence for *A* not being *P* into a unified account; one should also explain why that evidence seemed irreconcilable in the first place. In this regard, gradualism needs to explain why, if *P* actually corresponds to a gradual property, we have intuitions that it is something that objects like *A* simply have or lack. In other words, the main challenge for gradualism is to avoid the charge of ad-hocery.

One recent, and common, way of defending gradualism from this sort of charge has been to appeal to linguistic data. For example, one can point out that, in natural conversational contexts, we usually use intensifier morphemes and adverbs when morally evaluating actions. Thus, we speak of some actions being “very bad” or “worse” than others. Given that the standard semantic account of these constructions presupposes a gradual structure (Kennedy & McNally 2005), this serves as evidence that our everyday moral assessments are gradual. In other words, unless we want to embrace revisionism regarding moral assessment, we must prefer a theory of morality that respects its gradual nature.

This strategy consists in complementing the metaphysical claim that predicate *P* corresponds to a gradual property with a couple of semantic claims: first, that the extension of the predicate *P* (when occurring without a modifier like “very”, “more”, etc.) is fixed by the set of objects that have the corresponding gradual property at least to a certain degree (Kennedy & McNally 2005). For example, we know that warmth is a gradual property and, therefore, that one location can be warmer than another. However, we still use the adjective “warm” to describe the weather, as if locations were just either warm or not. Our language allows for this dual way of speaking – gradual and non-gradual – thanks to what linguists call a null operator, so that objects or situations count as warm – that is, belong within the extension of the predicate “warm” – only if they are warm to a degree that surpasses some given threshold. This threshold, in turn, can be fixed semantically, as part of the lexical meaning of the predicate – in what is commonly called an “invariantist” position – or contextually – in what is known as “contextualism”. Thus, even though most gradualisms are contextualisms and vice versa, it is important to distinguish between:

1. Gradualism: the metaphysical claim that the property associated with predicate *P* is gradual

2. Degree semantics: the semantic claim that the extension of predicate *P* is fixed by a threshold degree of the associated property

3. Contextualism: the claim that the threshold degree that fixes the extension of predicate *P* is contextually determined
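The three claims can be kept apart with a small sketch (the temperatures, thresholds and context labels below are invented purely for illustration):

```python
# Gradualism: warmth comes in degrees (here, plain temperatures).
temperatures = {"Oslo": 5, "Madrid": 22, "Cairo": 35}

# Degree semantics: the bare predicate "warm" holds of a location
# iff its degree of warmth meets some threshold.
def is_warm(place: str, threshold: float) -> bool:
    return temperatures[place] >= threshold

# Invariantism: the threshold is fixed once and for all by the lexicon.
INVARIANT_THRESHOLD = 20

# Contextualism: the threshold is supplied by the conversational context.
def contextual_threshold(context: str) -> float:
    return {"winter in Norway": 10, "summer in Spain": 30}[context]

print(is_warm("Madrid", INVARIANT_THRESHOLD))                       # True
print(is_warm("Madrid", contextual_threshold("winter in Norway")))  # True
print(is_warm("Madrid", contextual_threshold("summer in Spain")))   # False
```

The underlying gradation is the same in all three verdicts; only the rule fixing the cut-off point changes.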

At this stage, it is worth pausing to compare the first two positions we have introduced so far. Prima facie, they seem substantially different; the similarities, however, actually run deep. As a matter of fact, dialetheism can easily be modelled as a simple sort of gradualism, where true contradictions occupy an intermediate space between full-fledged truth and full-fledged falsity. To see this, it is helpful to look a little closer at the paraconsistent logics usually used in dialetheism. These multiple-valued logics effectively make a distinction between two kinds of truths: normal, classical truths, for which if something is true its negation cannot be, and paraconsistent truths, for which even if something is true its negation may be true as well. However, neither sort of truth is independent of the other, which becomes clearer once we describe the logic in algebraic terms (Dunn 2000), so that there is an ordering relation between them. In this representation, paraconsistent truth is literally an intermediate truth value between classical truth and classical falsity. This is starting to look a lot like a truth gradualism. It seems that dialetheism takes “true” to be a predicate associated with a gradual property – truth – that can be had completely (in the case of classically true propositions), to no degree (in the case of false propositions) or to some degree but not completely (in the case of paraconsistently true propositions). Now, the extension of the predicate “true” is fixed on this gradation by all the propositions that are true to at least some degree (in other words, it is an existential predicate, in linguistics’ terminology), and not contextually.

[In the case of analetheism, all that changes is that a proposition needs to be true to the highest degree in order to belong in the extension of the “true” predicate.]
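The point can be made concrete with a small ordering sketch (the encoding of the three truth values as integers is mine, purely for illustration):

```python
# Three ordered truth values: classical falsity < paraconsistent
# ("both true and false") truth < classical truth.
FALSE, BOTH, TRUE = 0, 1, 2   # the ordering relation is just <= on these ints

def dialetheist_true(value: int) -> bool:
    """'True' as an existential predicate: true to at least some degree."""
    return value >= BOTH

def analetheist_true(value: int) -> bool:
    """'True' as a maximal-degree predicate: true to the highest degree."""
    return value == TRUE

liar = BOTH   # a paraconsistently true proposition sits in between
print(dialetheist_true(liar))   # True: counts as true for the dialetheist
print(analetheist_true(liar))   # False: not true for the analetheist
```

On this picture the two views share the gradation and disagree only about where the threshold for “true” sits.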

**by Axel Arturo Barceló Aspeitia**

In philosophy, as in any other theoretical endeavour, it is not rare to face conflicting positions – one that says that an object *A* is *P* and another that says that *A* is not *P*. In these situations, philosophers have several options. Obviously, they can defend one of the positions and criticise the other – in what I will call a “monist” solution. In fact, in most philosophical debates, responses of this monist type are the most common. However, throughout history, philosophers have developed a few more sophisticated options that attempt to incorporate the insights, intuitions and arguments from both sides of the debate into a third, conciliatory option. The purpose of this article is to classify the kinds of positions that can be taken, in general, in philosophy when we are faced with two conflicting positions, each based on prima facie equally good arguments and intuitions. Besides the aforementioned obvious choices of defending one thesis and criticising the arguments and intuitions behind the other, there are at least four other options that try to incorporate the insights behind the two positions into a unified theory:

- Dialetheism
- Gradualism
- Pluralism
- Relativism

In this and the following posts, I will analyse each of them in detail, giving not only the advantages and disadvantages of each one, but mostly trying to determine in which situations one is more appropriate than the others. I am starting with dialetheism:

The first, and perhaps most radical, solution to conflicting evidence is to accept the ensuing contradiction, not as a problem to solve, but as a feature of the phenomenon. The basic idea is that, since both the evidence for and against *A* being *P* must be accepted as equally good, what this shows is that, for the particular case of *A* and *P*, it must be true that *A* is both *P* and not *P*. That this sort of solution is not absurd has been productively explored by philosophers and logicians like Graham Priest (1985) and JC Beall (2009), among others. This solution is usually accompanied by a proposal to change the underlying logic to a paraconsistent logic that allows this kind of contradiction. The main feature of these logics that makes them fit for dialetheism is that they help us distinguish between exploding and non-exploding contradictions, that is, between contradictions that entail everything and, therefore, make any theory that contains them collapse into absurdity, and contradictions that do not and, therefore, can be incorporated into a theory without catastrophic consequences.
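A sketch of how such a logic blocks explosion, using the three truth values of Priest’s Logic of Paradox (LP) as an example (the integer encoding and function names are mine):

```python
# LP truth values: F (false only), B (both true and false), T (true only).
# A value is "designated" -- counts as true -- if it is at least partly true.
F, B, T = 0, 1, 2
designated = {B, T}

def neg(a: int) -> int:
    return 2 - a            # negation swaps T and F, and fixes B

def conj(a: int, b: int) -> int:
    return min(a, b)        # conjunction takes the minimum value

def refutes(premise_values, conclusion_value) -> bool:
    """This valuation is a counterexample to an entailment when all
    premises are designated but the conclusion is not."""
    return (all(v in designated for v in premise_values)
            and conclusion_value not in designated)

# Give A the value B, and an unrelated Q the value F.
contradiction = conj(B, neg(B))     # "A and not-A" gets value B
print(contradiction in designated)  # True: the contradiction counts as true
print(refutes([contradiction], F))  # True: yet it fails to entail Q
```

The contradiction is designated without entailing the arbitrary conclusion, which is precisely the non-exploding behaviour the dialetheist needs.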

This option is especially attractive when the evidence in favor of each of the options is the same or similar. In such cases, inserting a wedge between *A* being *P* and *A* not being *P* is especially difficult. For example, in considering the Liar sentence – “This sentence is not true” – the reasons we might have for taking it to be true are so intimately interlocked with those we might have for taking it to be not true that any attempt to argue for one and against the other would seem doomed from the start. Similar considerations have moved some philosophers to embrace dialetheism as a solution to the metaphysical problem of *Transition states*. A very simple example is given by (Priest & Berto 2013):

when I exit the room, I am inside the room at one time, and outside of it at another. Given the continuity of motion, there must be a precise instant in time, call it *t*, at which I leave the room. Am I inside the room or outside at time *t*? Four answers are available: (a) I am inside; (b) I am outside; (c) I am both; and (d) I am neither. (a) and (b) are ruled out by symmetry: choosing either would be completely arbitrary. As for (d): if I am neither inside nor outside the room, then I am not inside and not-not inside; therefore, I am either inside and not inside (option (c)), or not inside and not-not inside (which follows from option (d)); in both cases, a dialetheic situation.

For another, more recent example, JC Beall (manuscript) has argued that, since the conciliar fathers of orthodox Christianity have supplied *the very same* sort of evidence in favour of the contradictory claims that Jesus is both mutable and immutable, the best option for the Christian theologian is to accept **as true** the contradiction that Jesus is both mutable and immutable. Arguing for just one of the contradictory claims would entail diverging from Christian theology in an unacceptable manner and, furthermore, since the evidence is exactly of the same sort for each of the claims, there is no principled way of choosing a single side.

It has been argued (for example, in Littman & Simmons 2004) that since dialetheism is a symmetric proposal, the same reasons that make it seem a better solution than monism define a dual argument for taking it to be a worse solution than monism. Yes, it is true that dialetheism incorporates both the good reasons we have for accepting that *A* is *P* and those for accepting that *A* is not *P*, and thus seems, at least prima facie, a better option than just accepting the good reasons in favour of one of them and not those in favour of the opposite one. Unfortunately, defenders of *A* being *P* commonly also have good reasons for rejecting that *A* is not *P* (indeed, the same reasons can be interpreted both ways: as reasons for accepting that *A* is *P* and as reasons for rejecting that *A* is not *P*) and vice versa. Thus, since dialetheism accepts both that *A* is *P* and that *A* is not *P*, it goes against both the reasons we have for rejecting that *A* is *P* and those for rejecting that *A* is not *P*. And thus it is not better, but worse, than either monist option. The monist option of, say, accepting only that *A* is *P* and rejecting that *A* is not *P* at least does justice to our good reasons for rejecting that *A* is not *P*, even if it does not incorporate our other good reasons for rejecting that *A* is *P*.

Similar concerns regarding the symmetry of dialetheism have given rise to a dual strategy, known as “analetheism” (Beall & Ripley 2004), according to which, instead of accepting both that *A* is *P* and that *A* is not *P*, we must *reject* both. Since we have good reasons against each of the contradictory claims, we thereby also have good reasons for rejecting them both. And just as dialetheism is well accompanied by a paraconsistent logic where propositions of the form *P* and not *P* can be true, so analetheism is well accompanied by a paracomplete logic where propositions of the form *P* or not *P* can fail to be true. Instead of distinguishing between exploding and non-exploding contradictions, what this logic allows the analetheist to do is distinguish between imploding and non-imploding disjunctions of the form *P* or not *P*, where an imploding proposition is one that follows from anything.


**by Axel Arturo Barceló Aspeitia**

Recently, JC Beall has been trying to show that the (transparent) truth predicate is logical in a sense in which, say, the logical consequence relation is not. As far as I can tell, his strategy goes along the following lines:

There is a sense in which both the truth operator (and the truth predicate) and the logical consequence operator (and relation) are topic neutral: both apply to propositions regardless of their content. In other words, for any *P* and *Q*, no matter the topic, the proposition that *P* is true and the proposition that *P* follows from *Q* are both acceptable. However, there seems to be a further sense in which they are not equally topic neutral. True propositions of the form ‘*P* follows from *Q*’ are made true, not by facts regarding the topics *P* and *Q* are about, but by logical facts about the relation of logical consequence. In contrast, no proposition of the form “It is true that *P*” could be true in virtue only of properties of the truth predicate or operator. The way I remember Beall telling it, a theory about *T* is in the business of telling you which propositions about *T* are true and which are false. However, it is not its business to tell you when a proposition about *T* follows from another proposition about *T*; that is logic’s business. That is why the truth predicate (and the falsity predicate) are topic neutral (and in that sense, logical) in a way that the relation of logical consequence is not.

What Beall wants is pluralism about logical entailment without pluralism about truth. He wants there to be many ‘right’ relations of logical consequence (not all equally good for every purpose: some better fit for certain theoretical purposes, others better fit for other purposes), without the undesirable relativistic consequence that there are many ‘right’ properties of truth. Thus, he needs to drive a wedge between truth and logical consequence. It is not enough that there simply be a difference between truth and logical consequence. After all, that – that truth is not validity – is something we learn on our first day of introductory logic! Beall needs to show that this well-known distinction between truth and logical validity corresponds to a substantial way of drawing the line, so that there is room for pluralism regarding logical consequence, but not regarding truth.

In an unpublished manuscript, he writes:

“The construction of true theories involves the construction of consequence (closure) relations for those theories – an entailment relation that serves to ‘complete’ the theory (as far as possible) by churning out all of the truths that follow (that are entailed by) the claims in the theory…The theorist’s task is to construct a set of truths about a target phenomenon and close that set of truths under the consequence relation that, by the theorist’s lights, is the right relation to ‘complete’ the true theory of the given phenomenon.”
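The picture in the quoted passage can be sketched as a closure operation (the mini-theory and one-step rule format below are hypothetical, purely for illustration):

```python
# "Completing" a theory: close a set of claims under a consequence
# relation, here modelled as one-step rules (premise-set, conclusion).
def close(truths: set, rules: list) -> set:
    """Repeatedly apply the rules until no new truths are generated."""
    closed = set(truths)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

theory = {"p", "p -> q"}
rules = [({"p", "p -> q"}, "q"),   # a modus-ponens-style rule
         ({"q"}, "q or r")]        # an addition-style rule
print(sorted(close(theory, rules)))
```

A pluralist in Beall’s sense can then say: different theories may legitimately be closed under different `rules`, without the truths themselves varying.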

**by Axel Arturo Barceló Aspeitia**

Many philosophers of mathematics have been puzzled by the fact that mathematicians’ notebooks and blackboards are full of diagrams, yet diagrams rarely appear in published research. Until recently, the traditional explanation was that diagrams were good enough for heuristic purposes, that is, for informally understanding, explaining, exploring and teaching phenomena, but not fit for rigorous mathematical work. In more recent years, however, a competing account has emerged, on which diagrams have been shown to be as rigorous as formulas, and their exclusion from the finished results of mathematicians is presented as based on a prejudice against non-formal methods in mathematics.

My take on the phenomenon, however, is different. I agree with the traditional account that diagrams are better fit for heuristic work than for the needs of published work; however, I also agree with the more recent re-assessments that diagrams are as fit for the development of rigorous mathematical proofs and theories as formulas are. This is because I take it that presenting proofs and theories is a task of a fundamentally different sort from understanding, explaining or teaching them. The requirements for one are substantially different from those for the other, and the difference is so large that it does not boil down to one being more rigorous than the other. In particular, presenting proofs and theories is a communicative task and, as such, requires our representations to be easily understood by many, while exploring theories and finding proofs in them is the kind of work that is done either by ourselves or in close proximity with others; in other words, these are tasks that take place in heavily contextualised situations. Consequently, the representations we use in these situations can fruitfully exploit the information available in such contexts and need not be meaningful outside them. It does not matter if the diagram on the board is not understandable by anyone outside the discussion it was drawn for. Diagrams in printed media, however, require being more explicit, not so much in themselves as in their written context: in order to properly interpret a diagram in a written context, the contextual information necessary for its interpretation has to be given, mostly explicitly, in the text. This makes other synthetic means of putting that same information across, like formulas, a more efficacious tool. Thus, the difference is not one of rigorous vs. non-rigorous, but between wide and narrow audiences, between poor and rich contexts.

When writing about diagrammatic reasoning, it is not rare to draw a distinction between what I have elsewhere (2016) called the epistemic and ergonomic aspects of visual representations, which roughly corresponds to the distinction Larkin and Simon (1987) make between their information content and their computational character, i.e., how such information is extracted from them and, in particular, how quickly and easily (the same distinction appears in Zhang 1997, Kulvicki 2013, and Bechtel 2017). Thus, the reasons why diagrams are better suited for the mathematical notebook and blackboard than for the pages of the research journal are not epistemic, but ergonomic: in the context, and for the goals, of exploration and analysis, diagrams are easier to use than formulas; in the context, and for the goals, of a published research paper, it is easier to use formulas. Diagrams could also be used, but they would be too cumbersome.

**by Axel Arturo Barceló Aspeitia**

The issue of whether deduction can give us new knowledge is one of extreme philosophical awkwardness. On the one hand, almost anyone with some knowledge of logic or mathematics would find the question trifling, insofar as it is obvious that at least some genuine knowledge must be obtained through competent deduction. However, finding a good example has proved an elusive endeavour. The holy grail would be a simple, straightforward deduction that would certainly deliver new knowledge. But this seems almost impossible, insofar as, if the deduction is simple, the consequence must be obviously contained in the premises and, as such, it would be difficult for anyone to have conscious knowledge of the premises without also knowing the conclusion. Nevertheless, in a recent talk, professors Héctor Hernández Ortíz and Víctor Cantero gave what I think is the best example in this regard: a straightforward, simple deduction from very intuitive premises – or, at least, premises that anyone with even the most rudimentary knowledge of arithmetic would accept – to a very unintuitive but true conclusion – in this case, a conclusion that only people with some sophisticated knowledge of arithmetic would accept. The argument is as follows:

Controversial conclusion to prove: 0.999999999… = 1

Uncontroversial premise: 1/3 = 0.3333333333…

We multiply both sides by 3: 3(1/3) = 3(0.3333333333…)

And we immediately get the desired conclusion: 1 = 0.999999999…
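The same derivation can be checked mechanically with exact rational arithmetic (a sketch using Python’s standard `fractions` module):

```python
from fractions import Fraction

one_third = Fraction(1, 3)     # the uncontroversial premise, held exactly

# Multiplying both sides by 3 yields exactly 1 -- no infinite decimal
# expansion is ever needed when we work with true rationals.
print(3 * one_third)           # 1
print(3 * one_third == 1)      # True
```

Of course, this only pushes the question back: the controversial step is precisely whether the infinite decimal 0.333… denotes the same number as the fraction 1/3.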

According to Hernández and Cantero, this is a case of a non-vicious circular argument. But where exactly is the circularity, and is it actually non-vicious? Or is it question-begging instead? Here is my take on where to find the circularity in the argument. The key step is:

3(0.3333333333…) = 0.999999999…

Presumably, the reason we find this step uncontroversial is that it is based on the very elementary arithmetical fact that 3 times 3 is 9. However, it is not part of our basic arithmetical knowledge how to multiply infinite decimal expressions, and since this is a multiplication of an infinite decimal expression by a finite number, the issue in the background is how we extrapolate from what we know about the multiplication of finite expressions to infinite expressions. One way of solving this would be to reason by some kind of informal induction, thus:

3 x 0.3 = 0.9

3 x 0.33 = 0.99

3 x 0.333 = 0.999

3 x 0.3333 = 0.9999

…

3 x 0.33333333… = 0.9999999999…
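Each finite stage of this induction can be verified exactly (a sketch using Python’s standard `decimal` module, with the precision raised enough to keep every multiplication exact):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60   # plenty of digits for the examples below

# At every finite stage, 3 times n threes after the point is exactly
# n nines after the point -- the pattern the informal induction rests on.
for n in (1, 2, 3, 4, 20):
    threes = Decimal("0." + "3" * n)
    nines = Decimal("0." + "9" * n)
    assert 3 * threes == nines
    print(n, 3 * threes)
```

What the assertion cannot settle, of course, is the final, infinite step of the induction itself.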

Presumably, this, or something relevantly similar, is why we find the claim that 3 x 0.33333333… = 0.9999999999… very intuitive. However, one could question the validity of this induction by appealing to something else very basic, i.e., something we learn in elementary arithmetic lessons about how to multiply numbers in the decimal system. As you might recall, we do **not** start multiplying large numbers from left to right, but from right to left. However, that is what seems to be happening in this case: since we cannot start multiplying from the right, because the series is infinite, we start from the left. Nevertheless, there is a reason why we do not start from the left: numbers might “carry over” from the right, and this might *mess up* our result. In other words, the numbers to the right might add up to more than ten (or a hundred, or a thousand, depending on the size of the number), and this would require us to revise our result. For example, 371 times 2 does not start with 6, but with 7, because 71 times 2 is more than a hundred, so the one from the hundred *carries over*, so to speak, to the left. In contrast, 312 times 2 starts with 6 because 12 times 2 is less than a hundred, and so nothing gets carried over to the left. In slightly more formal terms, the general rule when multiplying in decimal notation is that:

For any digit *D* occupying the *n*th place (from right to left) of a number *C* expressed in decimal notation, the product of *C* times *N* expressed in decimal notation has digit *D* times *N* in that place [or the rightmost digit of *D* times *N* expressed in decimal notation, if *D* times *N* is larger than nine] if and only if the decimal expression to the right of *D* (that is, the decimal expression composed of the first *n*-1 digits of *C*, counting from right to left) times *N* is strictly less than the number corresponding to the numeral, in decimal notation, of a one followed by *n*-1 zeroes.

This sounds complex, but the idea is simple. Consider our previous simple example: if we multiply by 2 a number starting with 3 (from left to right) and followed by, say, two other digits (to its right), the result will start with 6 (from left to right) only if 2 times the number to the right of the 3 is less than 100 (that is, a one followed by two zeroes). That is, if we multiply 312 times 2, the result – 624 – starts with 6 – 3 times 2 – because the number to the right of the 3 – 12 – times 2 – 24 – is less than a hundred – a one followed by two zeroes, because there are two numerals to the right of the 3 in 312. But if we multiply 371 times 2, the result – 742 – does not start with 6, because 71 times 2 – 142 – is more than a hundred. That is why we do not start multiplying from the left.
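The rule can be checked on the article’s own examples (a small sketch; the helper names are mine):

```python
def leading_digit_of_product(c: int, n: int) -> int:
    """Leftmost digit of c * n, computed the ordinary (right-to-left) way."""
    return int(str(c * n)[0])

def naive_left_to_right(c: int, n: int) -> int:
    """What multiplying 'from the left' would predict: just multiply the
    leading digit and ignore any carry coming from the right."""
    return int(str(int(str(c)[0]) * n)[0])

# 312 * 2: the rest (12) times 2 is 24 < 100, so no carry reaches the front.
print(leading_digit_of_product(312, 2), naive_left_to_right(312, 2))  # 6 6

# 371 * 2: the rest (71) times 2 is 142 >= 100, so a carry spoils the
# naive left-to-right prediction.
print(leading_digit_of_product(371, 2), naive_left_to_right(371, 2))  # 7 6
```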

The rule is simple and basic. It is one of the first things we learn when we learn how to multiply expressions in decimal notation. However, when we apply it to this infinite case, we do not get the result that Hernández and Cantero wanted.

According to this rule, 0.33333… times 3 would start with 0.9… if and only if the number to the right of the leftmost 3, times 3, were strictly less than 0.1. This means that if 3(0.33333…) = 0.99999…, then 3(0.033333…) must be strictly less than 0.1. But if 3(0.33333…) = 0.99999…, then 3(0.033333…) is 0.099999…, and thus 0.099999… would have to be strictly less than 0.1. Multiplying both sides of the inequality by ten, we get that 0.999999… is strictly less than 1, which is in direct contradiction with our starting assumption – and Hernández and Cantero’s desired result – that 0.999999… = 1.
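The inequality holds exactly at every finite truncation, as a quick check with exact rationals shows (a sketch; the truncation encoding is mine):

```python
from fractions import Fraction

# 0.0333...3 with n threes is the integer 33...3 over 10**(n+1).
# At every finite stage, 3 times it falls strictly short of 0.1,
# by exactly one unit in the last decimal place.
for n in (1, 5, 50):
    x = Fraction(int("3" * n), 10 ** (n + 1))
    assert 3 * x < Fraction(1, 10)
    print(n, Fraction(1, 10) - 3 * x)   # the gap is 1/10**(n+1)
```

Whether that strict inequality survives in the limit is, again, exactly what the two extrapolations disagree about.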

Thus, we have two different ways of extrapolating from the case of finite multiplication – that is, the multiplication of finite expressions in decimal notation – to infinite multiplication. Each gives us different, inconsistent results. There is nothing wrong in adopting the first way and not the second, but there is nothing particularly right about it either. We must take other things into consideration to make the decision. And this is what should be meant when we say that this argument is circular but not viciously so. It is circular insofar as it is based on a step that is valid only if we reject other, equally suitable ways of performing the relevant operation that are inconsistent with it. It is not vicious insofar as it does establish a rational inferential link between the premises and the conclusion. Furthermore, it gives us new knowledge insofar as, as Hernández and Cantero correctly point out, the premises and operations involved in the argument are substantially more intuitive and uncontroversial than the final conclusion.

**By Jon Cogburn**

In an earlier post I generalized Evans' original argument against ontic vagueness to suggest a counterargument to those onticists who would respond to Evans by defending vague objects without vague identity. Here I want to do something similar, but aimed at semanticists, who argue that vagueness lies in our representations of objects, not in objects themselves.

**(1) Evans' Argument and Semanticism to the Rescue**

In natural language Evans' argument goes like this. Assume that it is indeterminate whether a and b are identical. Then a has the property of being such that it is indeterminate whether b is identical with a. But a does not have the property of being such that it is indeterminate whether it is identical with itself. So, b and a have distinct properties and are hence not identical with one another. But now on the assumption that it is indeterminate whether b is identical with a we have proven that b is not identical with a. But then any case of indeterminate identity will also be a case of flat out not being identical, which seems contradictory.*

Semanticism is the belief that objects in the world are not vague; only our manner of representing the world is. Our predicate "red" is such that it is indeterminate exactly which set of objects it picks out. But there is no vagueness among the objects, or among the real properties in the world, or the states of affairs that result when objects instantiate those properties. Bertrand Russell's classic characterisation of vagueness locates it squarely in representation:

Per contra, a representation is *vague* when the relation of the representing system to the represented system is not one-one, but one-many. For example, a photograph which is so smudged that it might equally represent Brown or Jones or Robinson is vague. A small-scale map is usually vaguer than a large-scale map, because it does not show all the turns and twists of the roads, rivers, etc., so that various slightly different courses are compatible with the representation that it gives. Vagueness, clearly, is a matter of degree, depending upon the extent of the possible differences between different systems represented by the same representation. Accuracy, on the contrary, is an ideal limit (Keefe and Smith 2002, 66).

This easily becomes the view that reality is precise and that this is something more or less captured by more or less accurate representations. With a little bit of Quinean optimism you can even see the job of the philosopher as providing a "canonical notation" for science which is 100% accurate in this sense.

The semanticist's *short* answer to Evans' argument is thus just to note that in an ideal language appropriate for science it is never indeterminate whether b equals a, so we needn't worry that indeterminacy of whether b equals a entails that b is not identical with a.

If we regiment Evans' argument we can better appreciate the semanticist's *long* answer, which can be freed from the Quinean mythology. Let "▽" stand for "it is indeterminate whether," "λx" stand for what it does in the lambda calculus (the property of being an x such that), and "⊥" be the absurdity constant. Then, when fully expressed in a natural deduction system, the argument is:

1. ▽(b = a) [assumption]
2. λ*x*[▽(*x* = a)]b [1, lambda abstraction]
3. ¬▽(a = a) [truism]
4. | b = a [assumption for ¬ introduction]
5. | | λ*x*[▽(*x* = a)]a [assumption for ¬ introduction]
6. | | ▽(a = a) [5, lambda cancellation]
7. | | ⊥ [3, 6, ¬ elimination]
8. | ¬λ*x*[▽(*x* = a)]a [5-7, ¬ introduction]
9. | λ*x*[▽(*x* = a)]a [2, 4, = elimination]
10. | ⊥ [8, 9, ¬ elimination]
11. ¬(b = a) [4-10, ¬ introduction]

The key inference here is the first one, the lambda abstraction taking us from line 1 to line 2. For the semanticist it is not indeterminate whether b equals a but rather whether a is referred to by "b". But then the first two lines of the proof would look like this:

1. ▽("b" refers to a) [assumption]
2. λ*x*[▽("*x*" refers to a)]b [1, lambda abstraction]

But if one handles the indeterminacy of the reference of "b" in the way suggested by Russell, and as handled by supervaluationist versions of semanticism, then the lambda abstraction involves a de-dicto/de-re fallacy.** That the word "b" indeterminately refers to distinct objects, including a, does not entail that there is some individual b such that it has the property attributed to it in line 2 of the proof. The virtue of recognizing this is that one can adopt a theory of vagueness of the sort suggested by Russell without holding that we need to replace ordinary language and run-of-the-mill properties with something Quine would have found kosher. Instead, we just offer a supervaluationist semantics for ordinary, vagueness-tolerant predicates.
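The supervaluationist picture just described can be sketched in a few lines of code. Everything here (the predicate "red", the objects, the three precisifications) is an illustrative toy of my own, not anything from the literature: a sentence is super-true if true on every admissible precisification, super-false if false on every one, and indeterminate otherwise.

```python
# A minimal sketch of supervaluationist semantics (all names illustrative).
# A "precisification" assigns a sharp extension to the vague predicate "red".
# Admissible precisifications agree on clear cases, differ on borderline ones.

precisifications = [
    {"fire_truck"},
    {"fire_truck", "borderline_patch"},
    {"fire_truck"},
]

def supervaluate(obj):
    """Verdict on 'obj is red': super-true, super-false, or indeterminate."""
    verdicts = [obj in p for p in precisifications]
    if all(verdicts):
        return "super-true"       # true on every precisification
    if not any(verdicts):
        return "super-false"      # false on every precisification
    return "indeterminate"        # true on some, false on others

print(supervaluate("fire_truck"))        # super-true
print(supervaluate("lime"))              # super-false
print(supervaluate("borderline_patch"))  # indeterminate
```

Notice that the vagueness lives entirely in which precisification is "the" extension of the word; none of the objects themselves are vague.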

One upshot of this is that semanticism is a little closer to Ungerian nihilism than we might have thought. Keefe and Smith write,

For example, the indeterminacy of the identity statement "Barney = *P*" (where *P* is an associated p-cat) could be taken to show that it is indeterminate to which of the precise p-cats "Barney" refers. There is, then, no vague object that is Barney--indeed there is no unique object which is determinately the cat of that name (Keefe and Smith 2002, 52-3).

Pretty cool. In addition, both Kamp and Fine showed how degree theoretic semantics fits nicely with supervaluationism, which allows the supervaluationist to avail herself of degree theoretic blocking of the sorites paradox.
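As a crude illustration of how the two frameworks can mesh (this is *not* Kamp's or Fine's actual construction; the toy measure below, degree = fraction of admissible precisifications on which the sentence comes out true, is my own simplification):

```python
# Toy "degree supervaluationism" (an illustrative simplification, not
# Kamp's or Fine's machinery): the degree to which 'obj is red' holds is
# the fraction of admissible precisifications whose sharp extension
# contains obj. Super-truth (degree 1) and super-falsity (degree 0)
# fall out as the limit cases.

def degree(obj, precisifications):
    """Fraction of precisifications on which 'obj is red' is true."""
    return sum(obj in p for p in precisifications) / len(precisifications)

precs = [{"fire_truck"},
         {"fire_truck", "borderline_patch"},
         {"fire_truck", "borderline_patch", "sunset"}]

print(degree("fire_truck", precs))        # 1.0 (super-true)
print(degree("lime", precs))              # 0.0 (super-false)
print(degree("borderline_patch", precs))  # strictly between 0 and 1
```

On a measure like this, each step of a sorites series can lower the degree a little without any single step flipping truth to falsity outright.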

**(2) Evans' Argument Against the Semanticist**

So we seem to have it that Evans' argument presents a special problem for the defender of ontic vagueness, but not one for the semanticist. I don't think this is right though, and suspect that people who find it plausible have been underestimating the power of the lambda calculus. In THIS POST I constructed an Evans-type argument in terms of properties of type <e,t> (those canonically expressed by monadic predicates). The upshot is that Evans' argument threatens not just vague objects, but vague properties as well.

From this, there should be no problem with constructing an argument for higher types. One that lambda abstracts over expressions of type <t> should cause problems for vague states of affairs, and one that lambda abstracts over expressions of type <e,<e,t>> should cause problems for vague relations. Let's try it out with respect to the reference relation!

In what follows let R"*x*"*y* denote that "*x*" refers to *y*. Also, assume that "◻" is hyperintensional in the sense that "◻P" means that P is true at all worlds, possible or impossible. Then, two relations R and R' will be hyperintensionally the same if ◻∀*xy*(R*xy* ↔ R'*xy*). This says that in every possible and impossible world they relate the same things. Unless one is a rabid Quinean (and some of my best friends are) this does not commit one to the existence of impossible worlds. In any case, one could run the same argument just using different notation for sameness of content. Or one could make sense of a hyperintensional box without talking about possible worlds. I use this notation because it tracks Montague's definition of identity at higher types, and thus shows how Evans' argument is generalizable. Of course Montague couldn't handle hyperintensionality, but impossible worlds fix the problem easily.
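The hyperintensional identity condition can be modeled by treating a relation as a map from worlds (possible and impossible alike) to extensions. The worlds and relations below are made-up illustrations; the point is just that two relations can agree at every possible world yet come apart at an impossible one, which is exactly the difference the hyperintensional box tracks.

```python
# Model: a relation is a map from world-labels to sets of ordered pairs.
# "Worlds" include impossible ones; all names are illustrative only.

worlds = ["w_actual", "w_possible", "w_impossible"]

R = {
    "w_actual":     {("b", "Barney")},
    "w_possible":   {("b", "Barney"), ("c", "Cleo")},
    "w_impossible": {("b", "b")},
}

Q_same = dict(R)  # same extension at every world

Q_diff = dict(R)
Q_diff["w_impossible"] = set()  # agrees everywhere except an impossible world

def hyper_identical(R1, R2):
    """True iff R1 and R2 relate the same things at every world, possible or not."""
    return all(R1[w] == R2[w] for w in worlds)

print(hyper_identical(R, Q_same))  # True
print(hyper_identical(R, Q_diff))  # False, though they agree at all possible worlds
```

R and Q_diff would count as identical on a merely intensional (possible-worlds) criterion; only the hyperintensional one distinguishes them.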

So, again where R"b"a says that "b" refers to a, let us run Evans' argument on R.

1. ▽◻∀*xy*(Q"*x*"*y* ↔ R"*x*"*y*) [assumption: it is indeterminate whether R and Q are the same reference relation]
2. λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]R [1, lambda abstraction]
3. ¬▽◻∀*xy*(Q"*x*"*y* ↔ Q"*x*"*y*) [premise: it is not indeterminate whether Q is the same reference relation as itself]
4. | ◻∀*xy*(Q"*x*"*y* ↔ R"*x*"*y*) [assumption for ¬ introduction]
5. | | λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]Q [assumption for ¬ introduction]
6. | | ▽◻∀*xy*(Q"*x*"*y* ↔ Q"*x*"*y*) [5, lambda cancellation]
7. | | ⊥ [3, 6, ¬ elimination]
8. | ¬λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]Q [5-7, ¬ introduction]
9. | λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]Q [2, 4, semantics for λ calculus: substitution from 2, since 4 says that R and Q are hyper-cointensional]
10. | ⊥ [8, 9, ¬ elimination]
11. ¬◻∀*xy*(Q"*x*"*y* ↔ R"*x*"*y*) [4-10, ¬ introduction]

The punchline here is that semanticism has no advantage over ontic accounts of vagueness, at least as far as Evans' argument is concerned. It was supposed to be reasonable to block the move from premise 1 to 2 in the original Evans argument because of the vagueness of the reference relation. But what analogous move would block the lambda abstraction with respect to the reference relation itself? I'm not seeing it. Meta-semanticism? What would that be?

**(3) Further Questions**

*(3A) Merricks' (2001) argument:*

Akiba and Abasnezhad attribute the following thought to Trenton Merricks:

To recall, supervaluationism postulates partial references to various precisifications: it holds that a vague expression partially refers to one precisification, partially refers to another precisification, etc. But what is a partial reference? It's an indeterminate reference. So if epistemicism is incorrect and there are indeed partial references in reality, there ought to be ontic indeterminacy in reference relations. But simply, language is part of the world (Akiba and Abasnezhad 2014, 6).

I haven't read Merricks' paper yet, but I hope that the above argument can be seen as strengthening it.

*(3B) Ontic Supervaluationism:*

Ontic supervaluationists take vagueness to be in the world, but still find the basic framework of supervaluationism to be helpful in making sense of it. For example in Elizabeth Barnes' (2010) version vague objects are such that it is ontically indeterminate which of several possible determinate objects are the actual object in question. I *think* that ontic supervaluationists typically try to find other reasons to deny the lambda abstraction, in effect following David Lewis and defending *de dicto* (in premise 1) indeterminacy but not *de re* indeterminacy (which would license the lambda abstraction). I have no idea if or how the above argument intersects with such attempts. Barnes (2009) should provide guidance here, since Evans is in the title. It will be fun to read it and think about higher-order Evans arguments.

In THIS POST, I proposed a friendly amendment to Barnes' theory to save it from criticisms from Jessica Wilson and myself. I quite like the amended neo-Barnesian theory.***

*(3C) Epistemicism:*

All of the literature I've read thus far says that epistemicism, the view that both the world and our reference relations are in fact precise, doesn't face the kind of *tu quoque* arguments Merricks and I are raising for semanticist views. My intuition is that this is not the case, and that in fact the epistemicist trades on sorites-susceptible entities (such as how reasonable it is to believe something) when trying to make epistemicism sound plausible. But I haven't read Williamson's book on knowledge yet and fear that whatever weird stuff he says there is actually working to pre-emptively deflect such an argument.

[Notes:

*You can strengthen the argument to a clear contradiction if you add some modal principles. Contraposition gets you something very much like Kripke's necessity of identity. If two objects are identical, then they are determinately so. I'll consider some of this stuff in one or more future blog posts.

**See the introduction to Keefe and Smith as well as Akiba and Abasnezhad for a further discussion of this point.

***Though I'm alternatively drawn to denying premise 3. Huge swaths of the history of philosophy take it to be the case that self-identity is something earned, not given. I would love to see whether Rescher's or Seibt's analytic versions of process metaphysics elide or preserve this. In contrast to this Hegelian take on Evans' argument, I earlier (and not very clearly) HERE suggested a more Kantian one, where the argument itself is a case of Moorean paradoxicality. The basic idea is that Evans showed that we're forced to think of distinct entities as determinately distinct, but this is of a piece with the idea that we believe anything we sincerely assert. But the universal assertibility of "P, therefore I believe that P" does not mean that you believe every P! There's an analogue with Berkeley's master argument about conceiving the inconceivable as well. I need to do a clearer post on this with respect to the proof theory of all three arguments.]

*Bibliography*

Akiba, Ken and Ali Abasnezhad, ed. 2014. *Vague Objects and Vague Identity: New Essays on Ontic Vagueness*. Dordrecht: Springer.

Barnes, Elizabeth. 2009. "Indeterminacy, Identity and Counterparts: Evans reconsidered." *Synthese* 168, 81-96.

Barnes, Elizabeth. 2010. "Ontic Vagueness: A guide for the perplexed." *Nous* 44, 601-27.

Keefe, Rosanna and Peter Smith, ed. 2002. *Vagueness: A Reader*. Cambridge: The MIT Press.

Merricks, Trenton. 2001. "Varieties of Vagueness." *Philosophy and Phenomenological Research*, 62, 145-167.

Vagueness Notes:

- Is the Evans/Salmon argument against metaphysical indeterminacy merely a case of Moorean paradoxicality?
- Vagueness versus (Wilsonian/Brandomian) Underdetermination
- some problems for Elizabeth Barnes' account of vagueness
- Vagueness Notes 4 - Saving Barnes from the Wilson and Cogburn criticisms
- Vagueness notes 5 - Relevant HTML symbols (also, can someone fix the logical symbols Wikipedia page?)
- Vagueness notes 6 - A Proposed Generalization of the Evans/Salmon Argument (Not Involving Identity)
- Vagueness Notes 7 - Did Peter Simons discover a shorter, lambda free, version of the Evans/Salmon argument?

**by Axel Arturo Barceló Aspeitia**

When I was in graduate school I was fortunate enough to take several classes from J. Michael Dunn. Judging from my grades back then, I can tell that perhaps I did not get as much out of them as I should have, but they were still very influential on most of my later thinking. Perhaps the main lesson I learned from Mike was the importance of **symmetry** (actually, of a special kind of symmetry known as a *Galois connection*, but it’s not necessary to get too technical here). It is surprising how much of philosophy is married to fundamentally asymmetrical notions when (a pair of) more elegant and powerful symmetrical alternatives are close at hand [I am also convinced that this prejudice for asymmetrical notions over symmetrical ones is part of what Derrida criticised under the name of *phallogocentrism*].

Perhaps the best known example of this is the traditional (asymmetrical) notion of **logical consequence**:

P is a logical consequence of B iff the truth of P follows from the truth of (all the propositions in) B

which some logicians – like Mike and myself – replace with the following symmetrical pair of notions of logical consequence:

A is a logical consequence of B iff the truth of at least one of the propositions in A follows from the truth of all the propositions in B.

A is a logical consequence of B iff the falsity of at least one of the propositions in B follows from the falsity of all of the propositions in A.
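The two clauses are classical contrapositives of each other, and a brute-force truth-table check makes this vivid. The sketch below is my own illustration (propositional formulas modeled as Python functions over valuations; all names are mine): it implements both definitions and shows they deliver the same verdicts.

```python
from itertools import product

# Formulas are functions from a valuation (dict: atom -> bool) to bool.
atoms = ["p", "q"]

def val_space():
    """Generate every valuation of the atoms."""
    for bits in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def consequence_right(B, A):
    """Whenever all of B are true, at least one of A is true."""
    return all(any(a(v) for a in A)
               for v in val_space() if all(b(v) for b in B))

def consequence_left(B, A):
    """Whenever all of A are false, at least one of B is false."""
    return all(any(not b(v) for b in B)
               for v in val_space() if all(not a(v) for a in A))

p = lambda v: v["p"]
q = lambda v: v["q"]
p_or_q = lambda v: v["p"] or v["q"]

print(consequence_right([p], [p_or_q]))     # True
print(consequence_left([p], [p_or_q]))      # True: the dual clause agrees
print(consequence_right([p_or_q], [p, q]))  # True: genuinely multiple conclusions
print(consequence_right([p_or_q], [p]))     # False
```

The third example shows what the symmetrical notion buys: {p or q} entails the *set* {p, q} even though it entails neither p nor q on its own.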

[When faced with a pair like this, Mike liked calling one “to the right” and the other “to the left”, instead of using other more usual notational mechanisms, like asterisks, to avoid suggesting that one of them is the *normal* and/or *fundamental one *and that the other one is derived from it].

Substituting the symmetrical pair for the asymmetrical notion has lots of advantages that have been explored by many logicians. For example, I have used it to argue that the distinction between so-called syntactic and semantic methods is illusory.

However, I do not think there is enough work exploring similar symmetrical alternatives to other central asymmetrical philosophical notions, like substituting the traditional (asymmetrical) notion of

**causation**:

P is an effect of Q iff the occurrence of (all the events, facts, states of affairs, or whatever else are the relata of causation in) Q is physically responsible for the occurrence of P.

with the following pair of symmetrical notions of causality:

A is an effect of B iff the occurrence of all of the events in B is physically responsible for the occurrence of at least one event in A.

A is an effect of B iff the absence of all of the events in A is physically responsible for the absence of at least one event in B.

or substituting the traditional (asymmetrical) notion of **metaphysical grounding**:

F is the total metaphysical ground of A iff the existence of all of the entities (facts, states of affairs, or whatever else are the relata of grounding) in F is metaphysically responsible for the existence of entity or fact A.

with the following pair of symmetrical notions of metaphysical grounding:

F is the total metaphysical ground of A iff the existence of all of the entities in F is metaphysically responsible for the existence of at least some of the entities in A.

F is the total metaphysical ground of A iff the absence of all of the entities in A is metaphysically responsible for the absence of at least some of the entities in F.

or the traditional (asymmetrical) notion of **truth-making**:

F makes P true iff the existence of all of the entities in F is metaphysically responsible for the truth of P.

with the following pair of symmetrical notions of truth-making/false-making:

F makes P true iff the existence of all of the entities in F is metaphysically responsible for the truth of at least some proposition in P.

F makes P false iff the falsity of all of the propositions in P is metaphysically responsible for the absence of at least some entities in F.

Wittgenstein's remark (*Tractatus* 5.42) about the interdefinability of the logical connectives revealing that they are not basic - where perhaps the ab-Notation, or truth-tables considered as propositions, are - is striking, and seems like it might contain some insight. However, what about the quantifiers and the modalities?! Do these not just show the underlying principle in question to be in error? And if so, then what is there in Wittgenstein's remark which feels insightful? Or is it just a sheer mistake?