
**by Axel Arturo Barceló Aspeitia**

Recently, JC Beall has been trying to show that the (transparent) truth predicate is logical in a sense that, say, the logical consequence relation is not. As far as I can tell his strategy goes along the following lines:

There is a sense in which both the truth operator (and the truth predicate) and the logical consequence operator (and relation) are topic neutral: both apply to propositions regardless of their content. In other words, for any *P* and *Q,* no matter the topic, the proposition that *P* is true and the proposition that *P* follows from *Q* are both acceptable. However, there seems to be a further sense in which they are not equally topic neutral. True propositions of the form ‘*P* follows from *Q*’ are made true, not by facts regarding the topics *P* and *Q* are about, but by logical facts about the relation of logical consequence. In contrast, no proposition of the form “It is true that *P*” could be true in virtue only of properties of the truth predicate or operator. The way I remember Beall telling it, a theory about *T* is in the business of telling you which propositions about *T* are true and which are false. However, it is not its business to tell you when one proposition about *T* follows from another; that is logic’s business. That is why the truth predicate (and the falsity predicate) are topic neutral (and in that sense, logical) in a way that the relation of logical consequence is not.

What Beall wants is pluralism about logical entailment, without pluralism about truth. He wants for there to be many ‘right’ relations of logical consequence (not all equally good for every purpose: some better fit for certain theoretical purposes and others better fit for others), without the undesirable relativistic consequence that there are many ‘right’ properties of truth. Thus, he needs to drive a wedge between truth and logical consequence. It is not enough that there be a difference between truth and logical consequence. After all, that – that truth is not validity – is something we learn on our first day of introductory logic! Beall needs to show that this well-known distinction between truth and logical validity somehow corresponds to a substantial way of drawing the line, so that there is room for pluralism regarding logical consequence, but not regarding truth!

In an unpublished manuscript, he writes:

“The construction of true theories involves the construction of consequence (closure) relations for those theories – an entailment relation that serves to ‘complete’ the theory (as far as possible) by churning out all of the truths that follow (that are entailed by) the claims in the theory…The theorist’s task is to construct a set of truths about a target phenomenon and close that set of truths under the consequence relation that, by the theorist’s lights, is the right relation to ‘complete’ the true theory of the given phenomenon.”

**by Axel Arturo Barceló Aspeitia**

Many philosophers of mathematics have been puzzled by the fact that mathematicians’ notebooks and blackboards are full of diagrams, but they rarely appear on published research materials. Until recently, the traditional explanation was that diagrams were good enough for heuristic purposes, that is, to informally understand, explain, explore and teach phenomena, but not fit for rigorous mathematical work. In more recent years, however, a competing account has emerged, where diagrams have been shown to be as rigorous as formulas and where their exclusion from the finished results of mathematicians is presented as based on a prejudice against non-formal methods in mathematics.

My take on the phenomenon, however, is different. I agree with the traditional account that diagrams are better fit for heuristic work than for the needs of published work; however, I also agree with the more recent re-assessments that diagrams are as fit for the development of rigorous mathematical proofs and theories as formulas. This is because I take it that presenting proofs and theories is a task of a fundamentally different sort than understanding, explaining or teaching them. Thus the requirements for one are substantially different from those for the other, and the difference is so large that it does not boil down to one being more rigorous than the other. In particular, presenting proofs and theories is a communicative task and as such requires our representations to be easily understood by many, while exploring theories and finding proofs in them is the kind of work that is done either by ourselves or in close proximity with others; in other words, they are tasks that take place in heavily contextualized situations. Consequently, the representations we use in these situations can fruitfully exploit the information available in such contexts and need not be meaningful outside them. In other words, it does not matter if the diagram on the board is not understandable by anyone outside the discussion it was drawn for. Diagrams in printed media, however, require being more explicit, not so much in themselves, but in their written context. In other words, in order to properly interpret a diagram in a written context, the contextual information necessary for its interpretation has to be mostly explicitly given in the text. This makes other synthetic means of putting that same information across, like formulas, a more efficacious tool. Thus, the difference is not one of rigorous vs. non-rigorous, but between wide and narrow audiences, between poor and rich contexts.

When writing about diagrammatic reasoning, it is not rare to draw a distinction between what I elsewhere (2016) called the epistemic and ergonomic aspects of visual representations, which roughly corresponds to the distinction Larkin and Simon (1987) make between their information content and their computational character, i.e., how such information is extracted from them and, in particular, how quickly and easily (the same distinction appears in Zhang 1997, Kulvicki 2013, and Bechtel 2017). Thus, the reasons why diagrams are better suited for the mathematical notebook and blackboard than for the pages of the research journal are not epistemic, but ergonomic: in the context and for the goals of exploration and analysis, diagrams are easier to use than formulas; in the context and for the goals of a published research paper, it is easier to use formulas. Diagrams could also be used, but they would be too cumbersome.

**by Axel Arturo Barceló Aspeitia**

The issue of whether deduction can give us new knowledge is one of extreme philosophical awkwardness. On the one hand, almost anyone with some knowledge of logic or mathematics would find the question trifling, in so far as it is obvious that at least some genuine knowledge must be obtained through competent deduction. However, finding a good example has proved an elusive endeavour. The holy grail would be a simple, straightforward deduction that would certainly deliver new knowledge. But this seems almost impossible, in so far as, if the deduction is simple, then the consequence must be obviously contained in the premises and, as such, it would be difficult for anyone to have conscious knowledge of the premises without also knowing the conclusion. Nevertheless, in a recent talk, professors Héctor Hernández Ortíz and Víctor Cantero gave what I think is the best example in this regard: a straightforward simple deduction from very intuitive premises – or, at least, premises that anyone with even the most rudimentary knowledge of arithmetic would accept – to a very unintuitive but true conclusion – in this case, a conclusion that only people with some sophisticated knowledge of arithmetic would accept. The argument is as follows:

Controversial conclusion to prove: 0.999999999… = 1

Uncontroversial premise: 1/3 = 0.3333333333…

We multiply both sides by 3: 3(1/3) = 3(0.3333333333…)

And we immediately get the desired conclusion: 1 = 0.999999999…
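The finite analogues of the premise and conclusion can be checked exactly with rational arithmetic. A small Python sketch (the function name is mine):

```python
from fractions import Fraction

def third_truncated(n):
    """The n-digit truncation 0.33...3 of 1/3, as an exact rational."""
    return Fraction(10**n - 1, 3 * 10**n)

# Each truncation times 3 gives the matching run of nines:
assert 3 * third_truncated(1) == Fraction(9, 10)        # 0.9
assert 3 * third_truncated(4) == Fraction(9999, 10**4)  # 0.9999

# The gap between 1 and 3 * (0.33...3) is exactly 10**-n, so it
# shrinks without bound as the truncation lengthens:
assert all(1 - 3 * third_truncated(n) == Fraction(1, 10**n)
           for n in range(1, 30))
```

At every finite stage the product falls short of 1 by exactly one part in 10^n; the philosophical question below is precisely what licenses the passage to the infinite case.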

According to Hernández and Cantero, this is a case of a non-vicious circular argument. But where exactly is the circularity, and is it actually non-vicious? Or is it question-begging instead? Here is my take on where to find the circularity in the argument. The key step is:

3(0.3333333333…) = 0.999999999…

Presumably, the reason why we find this step uncontroversial is that it is based on the very elemental arithmetic fact that 3 times 3 is 9. However, it is not part of our basic arithmetical knowledge how to multiply infinite decimal expressions, and since this is a multiplication of an infinite decimal series a finite number of times, the issue in the background is how we extrapolate from what we know about the multiplication of finite expressions to infinite expressions. One way of solving this would be to reason by some kind of informal induction, thus:

3 x 0.3 = 0.9

3 x 0.33 = 0.99

3 x 0.333 = 0.999

3 x 0.3333 = 0.9999

…

3 x 0.33333333… = 0.9999999999…

Presumably, this, or something relevantly similar, is behind why we find the claim that 3 x 0.33333333… = 0.9999999999… very intuitive. However, one could question the validity of this induction by appealing to something also very basic, i.e., something we learn in elementary arithmetic lessons about how to multiply numbers in the decimal system. As you all might recall, we do **not** start multiplying large numbers from the left to the right, but from the right to the left. However, this is what seems to be happening in this case: since we cannot start multiplying from the right, because the series is infinite, we start from the left. Nevertheless, there is a reason why we do not start from the left: numbers might “carry over” from the right, and this might *mess up* our result. In other words, the numbers to the right might add up to more than ten (or a hundred or a thousand, depending on the size of the number) and this would require us to revise our result. For example, 371 times 2 does not start with 6, but with 7, because 71 times 2 is more than a hundred, so the one from the hundred *carries over*, so to speak, to the left. In contrast, 312 times 2 starts with 6 because 12 times 2 is less than a hundred and so nothing gets carried over to the left. In a little more formal terms, the general rule when multiplying in decimal notation is that,

For any digit *D* occupying the *n*th place (from right to left) of number *C* expressed in decimal notation, the product of *C* times *N* expressed in decimal notation has digit *D* times *N* in its *n*th place [or the rightmost digit of *D* times *N* expressed in decimal notation, if *D* times *N* is larger than ten] if and only if the decimal expression to the right of *D* (that is, the decimal expression composed of the first *n*-1 digits of *C* counting from right to left) times *N* is strictly less than the number corresponding to the numeral in decimal notation of a one followed by *n*-1 zeroes.

This sounds complex, but the idea is simple. Consider our previous simple example: if we multiply by 2 a number starting with 3 (from left to right) and followed by, say, two other digits whatsoever (to its right), the result will start with 6 (from left to right) only if 2 times the number to the right of the 3 is less than 100 (that is, a one followed by two zeroes). That is, if we multiply 312 times 2, the result – 624 – starts with 6 – 3 times 2 – because the number to the right of the 3 – 12 – times 2 – 24 – is less than a hundred – a one followed by two zeroes, because there are two numerals to the right of 3 in 312; but if we multiply 371 times 2, the result – 742 – does not start with 6, because 71 times 2 – 142 – is more than a hundred. That is why we do not start multiplying from the left.
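The rule lends itself to a mechanical check on finite numbers. A Python sketch (the names are mine, and the check is restricted to single-digit multipliers):

```python
def nth_digit_from_right(x, n):
    """The digit in the nth place (counting from the right) of x."""
    return (x // 10 ** (n - 1)) % 10

def rule_holds(C, N):
    """Check the rule for an n-digit C and a single-digit N: the nth
    digit of C*N equals the last digit of D*N (D the leading digit of C)
    exactly when the tail of C times N carries nothing into that place."""
    s = str(C)
    n, D = len(s), int(s[0])
    tail = int(s[1:]) if n > 1 else 0
    matches = nth_digit_from_right(C * N, n) == (D * N) % 10
    no_carry = tail * N < 10 ** (n - 1)
    return matches == no_carry

# The worked examples: 312*2 starts with 6 (no carry), 371*2 does not.
assert rule_holds(312, 2) and rule_holds(371, 2)
# The biconditional survives a sweep over many finite cases:
assert all(rule_holds(C, N) for C in range(10, 5000) for N in range(2, 10))
```

For single-digit multipliers the carry into the *n*th place is at most 8, so it can never silently restore the predicted digit; that is why the sweep succeeds.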

The rule is simple and basic. It is one of the first things we learn when we learn how to multiply expressions in decimal notation. However, when we apply it to this infinite case, we do not get the result that Hernández and Cantero wanted.

According to this rule, 0.33333… times 3 would start with 0.9… if and only if the number to the right of the leftmost 3, times 3, were strictly less than 0.1. This means that if 3(0.33333…) = 0.99999…, then 3(0.033333…) must be strictly less than 0.1. But if 3(0.33333…) = 0.99999…, then 3(0.033333…) is 0.0999999…, and thus 0.099999… would have to be strictly less than 0.1. Multiplying both sides of the inequality by ten, we get that 0.999999… is strictly less than 1, which is in direct contradiction with our starting assumption – and Hernández and Cantero’s desired result – that 0.999999… = 1.

Thus, we have two different ways of extrapolating from the case of finite multiplication – that is, the multiplication of finite expressions in decimal notation – to infinite multiplication. Each one gives us different and inconsistent results. There is nothing wrong in adopting the first way and not the second, but there is nothing particularly right about it either. We must take other things into consideration to make the decision. And this is what should be meant when we say that this argument is circular but not viciously so. It is circular in so far as it is based on a step that is valid only if we reject other, equally suited ways of performing the relevant operation that are inconsistent with it. It is not vicious in so far as it does establish a rational inferential link between the premises and the conclusion. Furthermore, it gives us new knowledge in so far as, as Hernández and Cantero correctly point out, the premises and operations involved in the argument are substantially more intuitive and uncontroversial than the final conclusion.

**By Jon Cogburn**

In AN EARLIER POST I generalized Evans' original argument against ontic vagueness to suggest a counterargument to those onticists who would respond to Evans by defending vague objects without vague identity. Here I want to do something similar, but aimed at semanticists who argue that vagueness lies in our representations of objects, not objects themselves.

**(1) Evans' Argument and Semanticism to the Rescue**

In natural language Evans' argument goes like this. Assume that it is indeterminate whether a and b are identical. Then a has the property of being such that it is indeterminate whether b is identical with a. But a does not have the property of being such that it is indeterminate whether it is identical with itself. So, b and a have distinct properties and are hence not identical with one another. But now on the assumption that it is indeterminate whether b is identical with a we have proven that b is not identical with a. But then any case of indeterminate identity will also be a case of flat out not being identical, which seems contradictory.*

Semanticism is the belief that objects in the world are not vague; only our manner of representing the world is. Our predicate "red" is such that it is indeterminate exactly which set of objects it picks out. But there is no vagueness among the objects, or among the real properties in the world, or the states of affairs that result when objects instantiate those properties. Bertrand Russell's classic statement of the view puts the point in terms of representations quite generally:

Per contra, a representation is vague when the relation of the representing system to the represented system is not one-one, but one-many. For example, a photograph which is so smudged that it might equally represent Brown or Jones or Robinson is vague. A small-scale map is usually vaguer than a large-scale map, because it does not show all the turns and twists of the roads, rivers, etc. so that various slightly different courses are compatible with the representation that it gives. Vagueness, clearly, is a matter of degree, depending upon the extent of the possible differences between different systems represented by the same representation. Accuracy, on the contrary, is an ideal limit (Keefe and Smith 2002, 66).

This easily becomes the view that reality is precise and that this is something more or less captured by more or less accurate representations. With a little bit of Quinean optimism you can even see the job of the philosopher as providing a "canonical notation" for science which is 100% accurate in this sense.

The semanticist's *short* answer to Evans' argument is thus just to note that in an ideal language appropriate for science it is never indeterminate whether b equals a, so we shouldn't worry about whether b's identity with a entails that it is not indeterminate whether b equals a.

If we regiment Evans' argument we can better appreciate the semanticist's *long* answer, which can be freed from the Quinean mythology. Let "▽" stand for "it is indeterminate whether," "λx" stand for what it does in the lambda calculus (the property of x such that), and "⊥" be the absurdity constant. Then, when fully expressed in a natural deduction system, the argument is:

1. ▽(b = a) (assumption)
2. λ*x*[▽(*x* = a)]b (1, lambda abstraction)
3. ¬▽(a = a) (truism)
4. | b = a (assumption for ¬ introduction)
5. | | λ*x*[▽(*x* = a)]a (assumption for ¬ introduction)
6. | | ▽(a = a) (5, lambda cancellation)
7. | | ⊥ (3, 6 ¬ elimination)
8. | ¬λ*x*[▽(*x* = a)]a (5-7 ¬ introduction)
9. | λ*x*[▽(*x* = a)]a (2, 4 = elimination)
10. | ⊥ (8, 9 ¬ elimination)
11. ¬(b = a) (4-10 ¬ introduction)

The key inference here is the first one. For the semanticist it is not indeterminate whether b equals a but rather whether a is referred to by "b". But then the first two lines of the proof would look like this:

1. ▽("b" refers to a) (assumption)
2. λ*x*[▽("*x*" refers to a)]b (1, lambda abstraction)

But if one handles the indeterminacy of the reference of "b" in the way suggested by Russell, and handled by supervaluationist versions of semanticism, then the lambda abstraction involves a de-dicto/de-re fallacy.** That the word "b" indeterminately refers to distinct objects, including a, does not entail that there is some individual b such that it has the property attributed to it in line 2 of the proof. The virtue of recognizing this is that one can adopt a theory of vagueness of the sort suggested by Russell without holding that we need to replace ordinary language and run of the mill properties with something Quine would have found kosher. Instead, we just offer a supervaluationist semantics for ordinary, vagueness tolerant, predicates.
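The de-dicto/de-re point can be made concrete in a toy supervaluationist model. The setup and names below are my own illustration, not Keefe and Smith's:

```python
# Each precisification settles the referent of the vague name "b".
precisifications = ["a", "a_prime"]

def supertrue(s):     return all(s(p) for p in precisifications)
def superfalse(s):    return not any(s(p) for p in precisifications)
def indeterminate(s): return not supertrue(s) and not superfalse(s)

# De dicto: whether "b" refers to a varies across precisifications.
b_refers_to_a = lambda p: p == "a"
assert indeterminate(b_refers_to_a)

# De re: for any fixed object o, whether o is identical to a does not
# depend on how "b" is precisified, so it is never indeterminate --
# which is why the lambda abstraction in line 2 fails.
for o in ["a", "a_prime"]:
    claim = lambda p, o=o: o == "a"
    assert supertrue(claim) or superfalse(claim)
```

The sentence-level operator and the object-level property come apart: indeterminacy lives in the variation across precisifications, not in any one object.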

One upshot to this is that semanticism is a little closer to Ungerian nihilism than we might have thought. Keefe and Smith write,

For example, the indeterminacy of the identity statement "Barney = P" (where P is an associated p-cat) could be taken to show that it is indeterminate to which of the precise p-cats "Barney" refers. There is, then, no vague object that is Barney--indeed there is no unique object which is determinately the cat of that name (Keefe and Smith 2002, 52-3).

Pretty cool. In addition, both Kamp and Fine showed how degree-theoretic semantics fit nicely with supervaluationism, which allows the supervaluationist to avail herself of degree-theoretic blocking of the sorites paradox.

**(2) Evans' Argument Against the Semanticist**

So we seem to have it that Evans' argument presents a special problem for the defender of ontic vagueness, but not one for the semanticist. I don't think this is right, though, and suspect that people who find it plausible have been underestimating the power of the lambda calculus. In THIS POST I constructed an Evans-type argument in terms of properties of type <e,t> (those canonically expressed by monadic predicates). The upshot of that is that Evans' argument threatens more than just vague objects, but vague properties as well.

From this, there should be no problem with constructing an argument for higher types. One that lambda abstracts over expressions type <t> should cause problems for vague states of affairs and one that lambda abstracts over expressions of type <e<e,t>> should cause problems for vague relations. Let's try it out with respect to the reference relation!

In what follows let R"*x*"*y* denote that *x* refers to *y*. Also, assume that "◻" is hyperintensional in the sense that "◻P" means that P is true at all worlds, possible or impossible. Then, two relations R and R' will be hyperintensionally the same if ◻∀*xy*(R*xy* ↔ R'*xy*). This says that in every possible and impossible world they relate the same things. Unless one is a rabid Quinean (and some of my best friends are) this does not commit one to the existence of impossible worlds. In any case, one could run the same argument just using different notation for sameness of content. Or one could make sense of a hyperintensional box without talking about possible worlds. I use this notation because it tracks Montague's definition of identity at higher types, and thus shows how Evans' argument is generalizable. Of course Montague couldn't handle hyperintensionality, but impossible worlds fix the problem easily.
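A toy model may help fix ideas about the hyperintensional box; the worlds and relations below are my own illustration:

```python
# Relations as functions from worlds (possible and impossible) to the
# set of pairs they relate there.
possible = ["w1", "w2"]
worlds = possible + ["w_impossible"]

def same_at(R, Rp, ws):
    """R and R' relate exactly the same pairs at every world in ws."""
    return all(R[w] == Rp[w] for w in ws)

R  = {"w1": {("b", "a")}, "w2": set(), "w_impossible": {("a", "a")}}
Rp = {"w1": {("b", "a")}, "w2": set(), "w_impossible": set()}

# Necessarily coextensive across the possible worlds ...
assert same_at(R, Rp, possible)
# ... yet not hyperintensionally the same: they differ at an impossible
# world, which is exactly the extra grain the hyperintensional box buys.
assert not same_at(R, Rp, worlds)
```

Quantifying the box over impossible worlds as well is what separates hyperintensional sameness from mere necessary coextension.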

So, again where R"b"a says that "b" refers to a, let us run Evans' argument on R.

1. ▽◻∀*xy*(Q"*x*"*y* ↔ R"*x*"*y*) (assumption: it is indeterminate whether R and Q are the same reference relation)
2. λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]R (1, lambda abstraction)
3. ¬▽◻∀*xy*(Q"*x*"*y* ↔ Q"*x*"*y*) (premise: it is not indeterminate whether Q is the same reference relation as itself)
4. | ◻∀*xy*(Q"*x*"*y* ↔ R"*x*"*y*) (assumption for ¬ introduction)
5. | | λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]Q (assumption for ¬ introduction)
6. | | ▽◻∀*xy*(Q"*x*"*y* ↔ Q"*x*"*y*) (5, lambda cancellation)
7. | | ⊥ (3, 6 ¬ elimination)
8. | ¬λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]Q (5-7 ¬ introduction)
9. | λ*X*[▽◻∀*xy*(Q"*x*"*y* ↔ *X*"*x*"*y*)]Q (2, 4 semantics for the λ calculus: substitution from 2, since 4 says that R and Q are hyper-cointensional)
10. | ⊥ (8, 9 ¬ elimination)
11. ¬◻∀*xy*(Q"*x*"*y* ↔ R"*x*"*y*) (4-10 ¬ introduction)

The punchline here is that semanticism has no advantage over ontic accounts of vagueness, at least as far as Evans' argument is concerned. It was supposed to be reasonable to block the move from premise 1 to 2 in the original Evans argument because of the vagueness of the reference relation. But what analogous move would block the lambda abstraction with respect to the reference relation itself? I'm not seeing it. Meta-semanticism? What would that be?

**(3) Further Questions**

*(3A) Merricks' (2001) argument:*

Akiba and Abasnezhad attribute the following thought to Trenton Merricks:

To recall, supervaluationism postulates partial references to various precisifications: it holds that a vague expression partially refers to one precisification, partially refers to another precisification, etc. But what is a partial reference? It's an indeterminate reference. So if epistemicism is incorrect and there are indeed partial references in reality, there ought to be ontic indeterminacy in reference relations. But simply, language is part of the world (Akiba and Abasnezhad 2014, 6).

I haven't read Merricks' paper yet, but I hope that the above argument can be seen as strengthening it.

*(3B) Ontic Supervaluationism:*

Ontic supervaluationists take vagueness to be in the world, but still find the basic framework of supervaluationism to be helpful in making sense of it. For example in Elizabeth Barnes' (2010) version vague objects are such that it is ontically indeterminate which of several possible determinate objects are the actual object in question. I *think* that ontic supervaluationists typically try to find other reasons to deny the lambda abstraction, in effect following David Lewis and defending *de dicto* (in premise 1) indeterminacy but not *de re* indeterminacy (which would license the lambda abstraction). I have no idea if or how the above argument intersects with such attempts. Barnes (2009) should provide guidance here, since Evans is in the title. It will be fun to read it and think about higher-order Evans arguments.

In THIS POST, I proposed a friendly amendment to Barnes' theory to save it from criticisms from Jessica Wilson and myself. I quite like the amended neo-Barnesian theory.***

*(3C) Epistemicism:*

All of the literature I've read thus far says that epistemicism, the view that both the world and our reference relations are in fact precise, doesn't face the kind of *tu quoque* arguments Merricks and I are raising for semanticist views. My intuition is that this is not the case, that in fact the epistemicist trades on sorites susceptible entities (such as how reasonable it is to believe something) when trying to make epistemicism sound plausible. But I haven't read Williamson's book on knowledge yet and fear that whatever weird stuff he says there is actually working to pre-emptively deflect such an argument.

[Notes:

*You can strengthen the argument to a clear contradiction if you add some modal principles. Contraposition gets you something very much like Kripke's necessity of identity. If two objects are identical, then they are determinately so. I'll consider some of this stuff in one or more future blog posts.

**See the introduction to Keefe and Smith as well as Akiba and Abasnezhad for a further discussion of this point.

***Though I'm alternatively drawn to denying premise 3. Huge swaths of the history of philosophy take it to be the case that self-identity is something earned, not given. I would love to see if Rescher's or Seibt's analytic versions of process metaphysics elide or preserve this. In contrast to this Hegelian take on Evans' argument, I earlier (and not very clearly) HERE suggested a more Kantian one, where the argument itself is a case of Moorean paradoxicality. The basic idea is that Evans showed that we're forced to think of distinct entities as determinately distinct, but this is of a piece with the idea that we believe anything we sincerely assert. But the universal assertibility of "P, therefore I believe that P" does not mean that you believe every P! There's an analogue with Berkeley's master argument about conceiving the inconceivable as well. I need to do a clearer post on this with respect to the proof theory of all three arguments.]

*Bibliography*

Akiba, Ken and Ali Abasnezhad, ed. 2014. *Vague Objects and Vague Identity: New Essays on Ontic Vagueness*. Dordrecht: Springer.

Barnes, Elizabeth. 2009. "Indeterminacy, Identity and Counterparts: Evans reconsidered." *Synthese* 168, 81-96.

Barnes, Elizabeth. 2010. "Ontic Vagueness: A guide for the perplexed." *Nous* 44, 601-27.

Keefe, Rosanna and Peter Smith, ed. 2002. *Vagueness: A Reader*. Cambridge: The MIT Press.

Merricks, Trenton. 2001. "Varieties of Vagueness." *Philosophy and Phenomenological Research*, 62, 145-167.


**by Axel Arturo Barceló Aspeitia**

When I was in graduate school I was fortunate enough to take several classes from J. Michael Dunn. Thinking about my grades back then, I can tell that perhaps I did not get out of them as much as I should have, but they were still very influential in most of my later thinking. Perhaps the main lesson I learned from Mike was the importance of **symmetry** (actually, of a special kind of symmetry known as a *Galois connection*, but it’s not necessary to get too technical here). It is surprising how much of philosophy is married to fundamentally asymmetrical notions when (a pair of) more elegant and powerful symmetrical alternatives are close at hand [I am also convinced that this preference for asymmetrical notions over symmetrical ones is part of what Derrida criticised under the name of *phallogocentrism*].

Perhaps the best known example of this is the traditional (asymmetrical) notion of **logical consequence**:

P is a logical consequence of B iff the truth of P follows from the truth of (all the propositions in) B

which some logicians – like Mike and myself – substitute with the following symmetrical pair of notions of logical consequence:

A is a logical consequence of B iff the truth of at least one of the propositions in A follows from the truth of all the propositions in B.

A is a logical consequence of B iff the falsity of at least one of the propositions in B follows from the falsity of all of the propositions in A.

[When faced with a pair like this, Mike liked calling one “to the right” and the other “to the left”, instead of using other more usual notational mechanisms, like asterisks, to avoid suggesting that one of them is the *normal* and/or *fundamental one *and that the other one is derived from it].

Substituting the asymmetrical one for the symmetrical pair has lots of advantages that have been explored by many logicians. For example, I have used it to argue that the distinction between so-called syntactic and semantic methods is illusory.
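For a concrete check that the two directions classically coincide, here is a toy propositional model (the encoding and names are mine; "right" and "left" echo Mike's labelling):

```python
from itertools import product

# Propositions as truth functions over valuations of two atoms.
atoms = ("p", "q")
valuations = [dict(zip(atoms, bits))
              for bits in product((True, False), repeat=len(atoms))]

def consequence_right(B, A):
    """Truth of all of B secures truth of at least one of A."""
    return all(any(a(v) for a in A)
               for v in valuations if all(b(v) for b in B))

def consequence_left(B, A):
    """Falsity of all of A secures falsity of at least one of B."""
    return all(any(not b(v) for b in B)
               for v in valuations if all(not a(v) for a in A))

p, q = (lambda v: v["p"]), (lambda v: v["q"])
p_or_q = lambda v: v["p"] or v["q"]

# Classically the two notions coincide on every pair of sets:
for B, A in [([p], [p_or_q]), ([p_or_q], [p]), ([p, q], [p]), ([], [p])]:
    assert consequence_right(B, A) == consequence_left(B, A)
```

Classically both conditions amount to the same thing: there is no valuation making everything in B true and everything in A false. They can come apart in non-classical settings, which is part of what makes the symmetric pair interesting.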

However, I do not think there is enough work exploring other similar symmetrical alternatives to other central philosophical asymmetrical notions, like substituting the traditional (asymmetrical) notion of

**causation**:

P is an effect of Q iff the occurrence of (all the events, facts, states of affairs, or whatever else are the relata of causation in) Q is physically responsible for the occurrence of P.

with the following pair of symmetrical notions of causality:

A is an effect of B iff the occurrence of all of the events in B is physically responsible for the occurrence of at least one event in A.

A is an effect of B iff the absence of all of the events in A is physically responsible for the absence of at least one event in B.

or substituting the traditional (asymmetrical) notion of **metaphysical grounding**:

F is the total metaphysical ground of A iff the existence of all of the entities (facts, states of affairs, or whatever else are the relata of grounding) in F is metaphysically responsible for the existence of entity or fact A.

with the following pair of symmetrical notions of metaphysical grounding:

F is the total metaphysical ground of A iff the existence of all of the entities in F is metaphysically responsible for the existence of at least some of the entities in A.

F is the total metaphysical ground of A iff the absence of all of the entities in A is metaphysically responsible for the absence of at least some of the entities in F.

or the traditional (asymmetrical) notion of **truth-making**:

F makes P true iff the existence of all of the entities in F is metaphysically responsible for the truth of P.

with the following pair of symmetrical notions of truth-making/false-making:

F makes P true iff the existence of all of the entities in F is metaphysically responsible for the truth of at least some proposition in P.

F makes P false iff the falsity of all of the propositions in P is metaphysically responsible for the absence of at least some entities in F.

Wittgenstein's thing (*Tractatus* 5.42) about the interdefinability of the logical connectives revealing that they are not basic - where perhaps the ab-Notation, or truth-tables considered as propositions, are - is striking, and seems like it might contain some insight. However, what about the quantifiers and the modalities?! Do these not just show the underlying principle in question to be in error? And if so, then what is there in Wittgenstein's remark which feels insightful? Or is it just a sheer mistake?

*I am indebted here to conversations with N.J.J. Smith, and some correspondence years ago with Susan Haack. My main contention here should not be attributed to either of them though - at least, not on the basis of this post.*

I believe that the Tarskian device of using sequences of numbers in the definition of satisfaction in first order logic, and in turn of truth, is stupid. (For information about this device, see here or just Google it.) Either it is the product of philosophical confusion, or it panders to philosophical confusion.

Some believe otherwise. For instance, Peter Milne in his 'Tarski, Truth and Model Theory' calls it a 'stroke of genius':

This co-ordination of variable [sic] with objects through their indices allows the work that would otherwise be done by explicitly semantic assignments of values to variables to proceed without appeal to semantic notions.

There is, I think, a very strong inclination to feel that some sleight of hand is being effected here, that in some respect the wool is being pulled over one's eyes. This inclination is to be resisted. What Tarski does here is perfectly above board; not only that, it is, I believe, a stroke of genius.

Curious. (Thanks to N.J.J. Smith for the reference.)

Also, a former teacher of mine (whom I won't name) called the idea of using sequences of numbers 'a great insight'.

This bothered me for years, and I was hesitant about forming my current view of the matter, thinking that perhaps if I only understood better I would think otherwise.

N.J.J. Smith in a talk at the recent 2015 AAL Logic Conference made an interesting and amusing analogy concerning the device, and used the word 'coy' in relation to it; it is as if, at a 1950's dance, instead of pairing boys with girls (which would be vulgar), boys and girls alike are paired with numbers, and then the numbers paired. (This got me thinking about the matter again after putting it aside years ago, and led to this post.)

Smith's talk was not, in the main, about criticizing satisfaction by sequences, and this was (I think) more or less an aside. (His purpose was rather to argue that satisfaction-based definitions, whether by sequences or variable assignments, are inferior to another method when it comes to capturing the content of 'true'.)

Smith's remark here is on a similar track to my criticism, but is different. What Smith said may make it look like the conclusion to draw is that Tarskian satisfaction by sequences is just as semantic as variable assignment. Well, that's true, but I think the real insight comes from looking at it the other way: a variable assignment is *no more semantic* (in any sense which may create difficulties and philosophical problems pertaining to meaning) than the business with numbers.

Whenever you have two objects, there will be, out there in function (or set) space, all the possible mappings, and the notion of such a mapping is not *semantic* in any strong sense. (If you define the semantic as any kind of relation between symbols and other things, then it will be, but this just goes to show that that's not a good definition. To be semantic in the sense of having to do with meaning, the relation has to be of a special kind, with all sorts of surroundings.)
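The interconvertibility can be made concrete with a toy sketch. This is my own illustration, with made-up names and a single atomic formula, not Tarski's formalism verbatim: satisfaction is computed once via a number-indexed sequence of objects (Tarski's device) and once via an explicit variable-to-object assignment, and a one-line comprehension converts between the two.

```python
# Toy model: a two-element domain and one binary predicate.
loves = {("alice", "bob")}  # interpretation of 'Loves'

# The formula Loves(v1, v2), with variables identified by their indices.
formula = ("Loves", 1, 2)

def satisfies_by_sequence(seq):
    """Tarski-style: seq[i] is the object coordinated with variable v_i."""
    _pred, i, j = formula
    return (seq[i], seq[j]) in loves

def satisfies_by_assignment(assignment):
    """Assignment-style: an explicit mapping from variable names to objects."""
    _pred, i, j = formula
    return (assignment[f"v{i}"], assignment[f"v{j}"]) in loves

# Position i of the sequence holds the value of v_i (index 0 unused).
seq = [None, "alice", "bob"]
assignment = {"v1": "alice", "v2": "bob"}

print(satisfies_by_sequence(seq))           # True
print(satisfies_by_assignment(assignment))  # True

def seq_to_assignment(s):
    """Convert a sequence to the equivalent assignment in one line."""
    return {f"v{i}": o for i, o in enumerate(s) if o is not None}
```

Nothing in either computation invokes meaning; both are just set-theoretic lookups, which is the sense in which neither representation is more 'semantic' than the other.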

Thus, the Tarskian conjuring trick is philosophically ill-motivated. Any relevant misgivings about function- or set-theoretic assignments - which Tarski does indeed succeed in avoiding - are based on confusing these with real semantic relations. The avoidance has *no real value at all*.

Now, am I saying Tarski is guilty of stupidity? No - I am just saying that the device itself is stupid. Tarski's adoption of it may indeed be quite shrewd, given his audience. In that case, it may be argued to be intellectually irresponsible - by pandering to confusion instead of combatting it. (I take the last two verbs in this formulation, which I think very apt, from an email from Smith wherein he comes to understand my position.)

*This is a selection of methodological, historical, and* ad hominem *remarks dating from 2010, when I was experiencing an intense intellectual honeymoon. The quality is uneven, and a lot of them have a tone I don't tend to take these days. Nowadays I am less prone to hubris and mistakes, have more durable results, and am more sophisticated, but my thinking was broader and more fundamental then.*

* * *

1. A metaphysical doctrine like physicalism has the effect of keeping philosophical problems *open*. It no longer suffices that we feel alright, are extricated from confusion. For the project always remains. (A guard against 'loss of problems' of the sort Wittgenstein attributed to Russell.)

* * *

2. Ordinary language philosophy became a bore – everyone had become sane. Problems weren't solved – their life support became unfashionable.

Lewis reinvigorated philosophy by arguing for a lot of insane nonsense.

One way philosophy gets its vigour is by erecting false idols.

* * *

3. Philosophy must not worry about its future. (If it appears to, it is already dead.)

4. Philosophy proper is in the service of real life (but saying this at once leads to associations with the worst sort of junk).

The whole notion of 'disinterested contemplation' is confused – either that, or it denotes something contemptible. True philosophy arises from strength and vastness of interest. The notion of understanding as an end in itself should be replaced with that of understanding as a very general means, not least of all to more understanding.

* * *

5. After looking at problems in the philosophy of logic in the usual way, it comes as a great refreshment to take the following perspective. Forget for a moment these highly-wrought questions about logical truth, logical vocabulary, laws etc., and think of your own experience of learning logic. Let go of preconceived notions and say: I took a logic course, read this book, and now I am in a hell of a state. I have had some experiences involving speech and the writing down of symbols, which have confused me. How did this come about?

This approach has numerous virtues. It gets us away from just banging our heads against the same abstractions again and again, and it also makes the questioner take full responsibility for their predicament. And that is largely how it must be with confusion about logic. Very few books or people are able to help.

* * *

6. A dilemma for physicalism. 'Truth', 'everything' etc. retain their ordinary meaning. But then physicalist derivations or explanations will be unavailable in many cases. Alternatively, shift those very concepts toward physical theory, and away from mathematics, psychology, ethics and aesthetics. In which case the result cannot justly be reported *now* using these terms. (Cantor.) (Regarding this horn: I think words would spring up for the old concepts, which are very deep in us.)

* * *

7. 'Different ways the world might have been'. Here we have the tendency to see our world as a state of a system. But the more usual view, of course, is that it *is* the system.

* * *

8. Lewis and the thief who wanted to be caught.

Lewis's work can be read as a cry for help.

9. Metaphysics as a blasphemy against life.

Perhaps the only way some men and women can settle down after a painful period of striving toward development is by convincing themselves that they understand the world in broad outline. Others resolve against this, such as Socrates.

* * *

10. Logic is commonly taught in a manner fatal to the philosophy of logic.

* * *

11. The history of 20^{th} century philosophy as cyclical:

Metaphysical puzzles -> Transposed into semantic key -> Semantic puzzles, accounts wanted for semantic notions -> Metaphysical puzzles -> etc.

Or world -> mind -> language-and-thought -> world -> etc.

* * *

12. Russell, upon reading part of Wittgenstein's *Philosophical Remarks* (or a typescript from around that period), nobly admitted that he hoped what Wittgenstein was saying there wasn't true. ('…as a logician, who likes simplicity...'.) Only a very personally secure man, a man with nothing left to prove, could say such a thing.

(With this belongs my thing about Russell's remark to Wittgenstein's sister: 'We expect your brother to take the next big step in philosophy', and then the elderly Russell's dim view of the *Investigations* in light of that. As predicted, Wittgenstein did take the next big step, but Russell was unable to follow.)

* * *

13. The philosopher must learn to live among ruins (moral concepts, etc.). This is just what a Quine or a Lewis can't bring themselves to do.

* * *

14. Due, roughly speaking, to miscategorizations (seeing putative identity and reduction instead of analogy and connection, for example), it is very easy to miss how simple and excellent some of the best ideas in analytic philosophy are. We take them in and benefit anyway if we get into analytic philosophy, but we don't see them distinctly.

What is often missed (including by me in the past) is that the birth of analytic philosophy is tied up with logicism and then phenomenology, and analysing problematic concepts, *by means of an expansion in logical concepts*. And in the end, that is the great achievement – not the analyses. (Examples: relations as irreducible to one-place predicates, quantificational statements as distinct from atomic.) This sort of expansion is the *opposite* of stuff like Lewis trying to show that modal statements can be treated as quantifications, that indicative conditionals are truth-functional after all, etc.

* * *

15. The false exactitude which those trained in analytic philosophy are liable to fall into. Distorting and stultifying a topic as a preliminary to discussion. The surest way to avoid breaking new ground, forming new ideas.

This kind of false precision is particularly pernicious in that it enables someone to avoid real philosophical work while still being very clever.

16. Another sort of procrastination you see a lot in philosophy is where a philosopher goes to great pains to get some science or history right, when this is quite inessential to what they are really doing. How pleasant it feels, to still be doing inquiry, while yet enjoying a complete rest from the real strain of philosophical thinking! Kripke nowadays affords himself these rests at every turn. (And it has to be said, he has earned all the rest he wants!)

* * *

17. The analytic philosopher has constantly to struggle to maintain an openness to learn about, or admit, propositions which work in ways unenvisaged. But if they didn't have to struggle, they wouldn't be an analytic philosopher. You have to *want* to see things as simple – simpler than they are, even. (Nietzsche on 'the thinker'.)

* * *

18. Often, an attempt to define a problematic term succeeds only in spreading the infection to the terms used in the definition. E.g. a metaphysician saying 'By the world, I mean everything which exists'.

* * *

19. Division of labour is extremely dangerous in philosophy. You have to take responsibility for a lot of things in order to do anything of value.

* * *

20. How odd for *Lewis* to use Neurath's Boat rhetoric. His system is above all a blueprint. He does not transform us bit by bit, like Nietzsche or Wittgenstein.

* * *

21. Philosophy need not be a means to any end, nor an end in itself. Think of laughter: it is obviously neither, and yet we want to laugh. (Saying that laughing is an end in itself gives a wholly wrong impression of the role it plays in our lives. We don't treat it *that* way at all.)

22. Philosophy isn't really voluntary.

* * *

23. A good plan for the philosopher who wishes to tackle deep problems with an approach which people will not want to take to those problems, would be to go in three phases. First, begin by solving some shallow problems with that approach. Secondly, become self-conscious about the approach. Thirdly, apply it to the deep problems.

24. Nietzsche and Wittgenstein both seem to have a much keener psychological radar, and a keener sense of the importance of using it honestly and courageously in philosophy, than philosophers of the present day. We are either so suspicious that we're outsiders from philosophy, or not suspicious enough.

25. I sometimes feel like what I really want to say won't be accepted if I just make it plain. (Although this may be an illusion, since I don't know how I would do that – but then, couldn't that just be inhibition, internalization of the non-acceptance I would predict for it?) As though I have to do it in a way which requires enough infrastructure that people will want me around anyway, if only for the infrastructure. But then perhaps in 100 years, that will just be so much obscuring baggage.

* * *

26. There is nothing wrong with false philosophy if it points away from itself.

* * *

27. All thinkers are partly intellectually vicious and partly virtuous. When discussing philosophy with someone, it is important to 'give them the benefit of the doubt', to treat them as fundamentally honest. When we are treated as honest, we rise to the occasion. So speak to what is virtuous in a thinker, give them a chance to be good. The *wrong* thing to do is to try to back them into a corner of their vice. If they're really all vice, they're not worth talking to. (This may be restricted to particular regions – someone might be all vice regarding X, but not Y.)

* * *

28. Philosophy has a lot to do with intellectual housekeeping. Having a bigger house means more housekeeping, so the better you are at housekeeping, the bigger you can manage to make your house.

* * *

29. With Lewis's payment for simplified ideology 'in the coin of ontology', we feel a silly temptation to think that, since the cost is so high, we must be getting *something* in return!

* * *

30. “Descriptive metaphysics” is bound to wind up being *pre*scriptive with respect to our metaconcepts.

* * *

31. Solitude as preparation for greater harmony with the world. Like standing on the sidelines before skipping, studying the rope's movements.

* * *

32. Russell always avoided seeming like a philosophical know-it-all, but sometimes at the cost of falsifying the dialectic. Nietzsche's great fault is his schoolteacherliness, his tendency to be a know-it-all. It usually rings false anyway, like he's kidding *himself*. His true nature was the opposite, so perhaps he needed this fault as a corrective, to buck himself up.

* * *

33. Rorty's way of philosophizing is like trying to do surgery with guns.

* * *

34. The enormous good will for something strange and fascinating surrounding Cantor's mathematical innovation, as extant in Cantor's soul and that of, e.g., Russell.

* * *

35. One subtle effect of the technicalization of philosophy is that it reduces the psychological impetus toward real progress – real solution or dissolution of a problem. It puts the problem in a form such that one feels less foolish about being stuck on it. It starts to look like an open problem in science. Contrast: the puzzle of negative existentials in its barest form. That can spur someone on. Someone gets confused and strongly feels that this is unsatisfactory – that they are failing cognitively in an acute manner.