**by Axel Arturo Barceló Aspeitia**

In philosophy, as in any other theoretical endeavour, it is not rare to face conflicting positions: one that says that an object *A* is *P* and another that says that *A* is not *P*. In these situations, philosophers have several options. Obviously, they can defend one of the positions and criticise the other – in what I will call a “monist” solution. In fact, in most philosophical debates, responses of this monist type are the most common. However, throughout history, philosophers have developed a few more sophisticated options that attempt to incorporate the insights, intuitions and arguments from both sides of the debate into a third, conciliatory option. The purpose of this article is to classify the kinds of positions that can be taken, in general, in philosophy when we are faced with two conflicting positions, each based on prima facie equally good arguments and intuitions. Besides the aforementioned obvious choices of defending one thesis and criticising the arguments and intuitions behind the other, there are at least four other options that try to incorporate the insights behind the two positions into a unified theory:

- Dialetheism
- Gradualism
- Pluralism
- Relativism

In this and the following posts, I will analyse each of them in detail, giving not only the advantages and disadvantages of each, but mostly trying to determine in which situations one is more appropriate than the others. I am starting with dialetheism:

The first, and perhaps most radical, solution to conflicting evidence is to accept the ensuing contradiction, not as a problem to solve, but as a feature of the phenomenon. The basic idea is that, since both the evidence for and the evidence against *A* being *P* must be accepted as equally good, what this shows is that, for the particular case of *A* and *P*, it must be true that *A* is both *P* and not *P*. That this sort of solution is not absurd has been productively explored by philosophers and logicians like Graham Priest (1985) and JC Beall (2009), among others. This solution is usually accompanied by a proposal to change the underlying logic to a paraconsistent logic that allows this kind of contradiction. The main feature of these logics that makes them fit for dialetheism is that they help us distinguish between exploding and non-exploding contradictions, that is, between contradictions that entail everything and, therefore, make any theory that contains them collapse into absurdity, and contradictions that do not and, therefore, can be incorporated into a theory without catastrophic consequences.
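To see how a contradiction can fail to explode, here is a minimal sketch (my own toy illustration, not Priest’s or Beall’s notation) of the truth tables of Priest’s Logic of Paradox (LP), one of the simplest paraconsistent logics. Encoding the three truth values as numbers (1 = true, 0.5 = both true and false, a “glut”, 0 = false) is merely a convenience I adopt here:

```python
# A minimal sketch of Priest's "Logic of Paradox" (LP).
# Truth values: 1 = true, 0.5 = glut (both true and false), 0 = false.
# A formula is "designated" (acceptable) when its value is at least 0.5.

def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

def designated(v):
    return v >= 0.5

# Explosion (ex contradictione quodlibet): from A and not-A, infer any B.
# Valid classically; it fails in LP. Counter-model: A is a glut, B is false.
A, B = 0.5, 0.0
premise = conj(A, neg(A))   # value 0.5: designated
conclusion = B              # value 0.0: not designated

assert designated(premise)
assert not designated(conclusion)
```

The contradiction *A* and not *A* is thus acceptable without forcing the acceptance of an arbitrary *B*: a non-exploding contradiction in the sense just described.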

This option is especially attractive when the evidence in favour of each of the options is the same or similar. In such cases, inserting a wedge between *A* being *P* and *A* not being *P* is especially difficult. For example, in considering the Liar sentence – “This sentence is not true” – the reasons we might have for taking it to be true are so intimately interlocked with those we might have for taking it to be not true that any attempt to argue for one and against the other would seem doomed from the start. Similar considerations have moved some philosophers to embrace dialetheism as a solution to the metaphysical problem of *transition states*. A very simple example is given by Priest & Berto (2013):

when I exit the room, I am inside the room at one time, and outside of it at another. Given the continuity of motion, there must be a precise instant in time, call it *t*, at which I leave the room. Am I inside the room or outside at time *t*? Four answers are available: (a) I am inside; (b) I am outside; (c) I am both; and (d) I am neither. (a) and (b) are ruled out by symmetry: choosing either would be completely arbitrary. As for (d): if I am neither inside nor outside the room, then I am not inside and not-not inside; therefore, I am either inside and not inside (option (c)), or not inside and not-not inside (which follows from option (d)); in both cases, a dialetheic situation.

For another, recent example, JC Beall (Manuscript) has argued that, since the conciliar fathers of orthodox Christianity have supplied *the very same* sort of evidence in favour of the contradictory claims that Jesus is both mutable and immutable, the best option for the Christian theologian is to accept **as true** the contradiction that Jesus is both mutable and immutable. Arguing for just one of the contradictory claims would entail diverging from Christian theology in an unacceptable manner and, furthermore, since the evidence is exactly of the same sort for each of the claims, there is no principled way of choosing a single side.

It has been argued (for example, in Littman & Simmons 2004) that since dialetheism is a symmetric proposal, the same reasons that make it seem a better solution than monism define a dual argument for taking it as a worse solution than monism. Yes, it is true that dialetheism incorporates the good reasons we have for accepting that *A* is *P* and for accepting that *A* is not *P*, and thus seems, at least prima facie, a better option than just accepting the good reasons in favour of one of them and not those in favour of the other. Unfortunately, defenders of *A* being *P* commonly also have good reasons for rejecting that *A* is not *P* (actually, the same reasons can be interpreted both ways: as reasons for accepting that *A* is *P* and as reasons for rejecting that *A* is not *P*) and vice versa. Thus, since dialetheism accepts both that *A* is *P* and that *A* is not *P*, it goes against both the reasons we have for rejecting that *A* is *P* and those for rejecting that *A* is not *P*. And thus it is not better, but worse than either monist option. The monist option of, say, accepting only that *A* is *P* and rejecting that *A* is not *P* at least does justice to our good reasons for rejecting that *A* is not *P*, even if it does not incorporate our other good reasons for rejecting that *A* is *P*.

Similar concerns regarding the symmetry of dialetheism have given rise to a dual strategy, known as “analetheism” (Beall & Ripley 2004), according to which, instead of accepting both that *A* is *P* and that *A* is not *P*, we must *reject* both. Since we have good reasons against each of the contradictory claims, that means that we also have good reasons for rejecting them both. And just as dialetheism is well accompanied by a paraconsistent logic where propositions of the form *P* and not *P* can be true, so analetheism is well accompanied by a paracomplete logic where propositions of the form *P* or not *P* can be false. Instead of distinguishing between exploding and non-exploding contradictions, what this logic allows the analetheist to do is distinguish between imploding and non-imploding disjunctions of the form *P* or not *P*, where an imploding proposition is one that follows from anything.

**by Axel Arturo Barceló Aspeitia**

Philosophers have long been intrigued by the many devices we have developed for representation and communication and, in particular, by the striking differences between words and pictures. However, pinning down the exact difference between them has proved to be elusive, to say the least. These are some of the many ways philosophers have proposed for distinguishing (at least some sorts of) pictures from words just in the last few decades:

- By their **content**: Words have conceptual content, pictures have non-conceptual content
- By their **persuasive force**: Words are Apollonian, pictures are Dionysian
- By their **structure**: Words have a recursive syntactic/semantic structure, pictures are dense
- By what **makes** them representations: Words belong to languages, pictures are autonomous depictions
- By how they are **related** to what they represent: Words are artificially related to what they represent, pictures are naturally related to what they represent
- By how we **grasp** their content: The content of words is interpreted according to linguistic conventions plus contextual information, the content of pictures is seen in them
- By their **modality**: Pictures are visual, words need not be
- By their **phenomenology**: Seeing a picture of X feels similar to seeing or being in the presence of X itself; reading a word meaning X, less so.

Thus, it has become uncontroversial to say that words and pictures are not fine enough categories for the study of representation and thought, and that we need, as philosophers and semioticians, to develop new vocabularies and to draw new, finer distinctions. That is why some philosophers have developed and adopted technical distinctions like Grice’s distinction between natural and artificial meaning, Peirce’s distinction between symbols, icons and signals, etc. The idea is to notice that some pictures may differ from words in some respects, while other sorts of pictures might differ in other, different respects.

I have adopted Peirce’s distinction between symbols, signals and icons to address the fifth (and sixth) of the above distinctions. Unless I am mistaken in my reading of Peirce, his notions of “symbol” and “index” roughly correspond, on Grice’s distinction, to signs that have artificial and natural meaning, respectively. Icons are interesting, therefore, because they hold a middle position between natural signals and artificial symbols. Words are paradigmatic symbols, for they commonly have no natural relation to what they stand for. The word “dog” has no natural relation to dogs. Instead, the relevant semantic relation holds artificially, through some sort of intentional stipulation that becomes a socialised convention of use. On the other side, footprints are paradigmatic examples of signals, for they are naturally linked to what they carry information about. The footprint of a wildcat in the dirt is causally related to the wildcat whose presence it signals, and it is because of our knowledge of this causal link that we can infer one from the other. However, we must not read too much into the ‘natural’ moniker and think that the relation between a signal and what it carries information about is always causal, unless we want to exclude structures and other abstract entities by definition.

Now, Peirce originally introduced the notion of an icon in his 1867 paper “On a New List of Categories” to classify representations linked to their objects via “a mere community in some quality” or likeness (p. 56). Paradigmatic examples of icons are realistic pictures and other depictions. This sort of picture holds a middle ground between symbols like words and signals like footprints. Like symbols, they represent what they represent by an artificial and intentional act – the act of artificially reproducing the visual appearance of their object – but, like signals, they rely on something that is naturally linked to what they depict – the appearance they reproduce. However, icons of other sorts might reproduce other aspects of their objects; for example, many scientific models reproduce structural features of their target systems.

Euclidean diagrams are an interesting case of scientific representations, because there is still ample debate as to whether they are symbols, indexes or icons, and of what sort. James R. Brown famously conceived of them as *windows* into the platonic realm of mathematical objects. I interpret this as taking diagrams to be indexes, non-artificially linked to the mathematical facts they give us epistemic access to. Kulvicki has argued that they are governed by syntactic and semantic conventions very much like languages. Valeria Giardino and Danielle Macbeth hold that they are structural icons, i.e., that they are fruitful in the discovery, understanding and proof of geometrical facts because of their homomorphism with genuine geometrical objects. In contrast, I hold them to be icons that reproduce at least some visual features of their objects. I know this is a very controversial position, because it seems philosophically fishy to hold that mathematical objects have visible properties, even though it is a common-sense view among non-philosophers. After all, we all know what a triangle and a circle look like.

In the talk on the video, my co-author Angeles Eraña and I assume that delusions are beliefs and argue that they have an important rational aspect. To this aim, we will draw on a distinction among different senses of ‘rationality’ that one of us has already developed (Eraña 2009). We will argue that delusions are not rational in the externalist sense of being the product of a well-functioning, reliable cognitive system. On the contrary, there is ample evidence that some cognitive malfunctions are involved. Furthermore, they are not rational in an internalist, deontological sense either, since this sort of rationality requires the agent to have a well-functioning conscious control system (corresponding to *S1* in dual system theory), and delusional agents’ conscious control systems are too tied to the false and recalcitrant delusional belief.

One might wonder: once we accept that delusions are rational neither in the internalist sense of giving us epistemic justification nor in the externalist sense of being the product of a well-functioning, reliable mechanism, in what sense could they be rational? We will argue that there is a third sense in which delusions are rational. This third sense depends on the effects and aims of the delusion, which are to recover a sufficient condition for human rationality: the well-functioning of S1. In this sense, delusions are rational because they are adaptive.


**by Axel Arturo Barceló Aspeitia**

Recently, JC Beall has been trying to show that the (transparent) truth predicate is logical in a sense in which, say, the logical consequence relation is not. As far as I can tell, his strategy goes along the following lines:

There is a sense in which both the truth operator (and the truth predicate) and the logical consequence operator (and relation) are topic neutral: both apply to propositions regardless of their content. In other words, for any *P* and *Q*, no matter the topic, the proposition that *P* is true and the proposition that *P* follows from *Q* are both acceptable. However, there seems to be a further sense in which they are not equally topic neutral. True propositions of the form ‘*P* follows from *Q*’ are made true, not by facts regarding the topics *P* and *Q* are about, but by logical facts about the relation of logical consequence. In contrast, no proposition of the form “It is true that *P*” could be true in virtue only of properties of the truth predicate or operator. The way I remember Beall telling it, a theory about *T* is in the business of telling you which propositions about *T* are true and which are false. However, it is not its business to tell you when a proposition about *T* follows from another proposition about *T*; that is logic’s business. That is why the truth predicate (and the falsity predicate) are topic neutral (and in that sense, logical) in a way that the relation of logical consequence is not.

What Beall wants is pluralism about logical entailment without pluralism about truth. He wants there to be many ‘right’ relations of logical consequence (not all equally good for every purpose: some are better fit for certain theoretical purposes and others for other purposes), without the undesirable relativistic consequence that there are many ‘right’ properties of truth. Thus, he needs to drive a wedge between truth and logical consequence. It is not enough that there be a difference between truth and logical consequence. After all, that – that truth is not validity – is something we learn on our first day of introductory logic! Beall needs to show that this well-known distinction between truth and logical validity somehow corresponds with a substantial way of drawing the line, so that there is room for pluralism regarding logical consequence, but not regarding truth!

In an unpublished manuscript, he writes:

“The construction of true theories involves the construction of consequence (closure) relations for those theories – an entailment relation that serves to ‘complete’ the theory (as far as possible) by churning out all of the truths that follow (that are entailed by) the claims in the theory… The theorist’s task is to construct a set of truths about a target phenomenon and close that set of truths under the consequence relation that, by the theorist’s lights, is the right relation to ‘complete’ the true theory of the given phenomenon.”

**by Axel Arturo Barceló Aspeitia**

Many philosophers of mathematics have been puzzled by the fact that mathematicians’ notebooks and blackboards are full of diagrams, yet diagrams rarely appear in published research materials. Until recently, the traditional explanation was that diagrams were good enough for heuristic purposes, that is, for informally understanding, explaining, exploring and teaching phenomena, but not fit for rigorous mathematical work. In more recent years, however, a competing account has emerged, in which diagrams have been shown to be as rigorous as formulas and in which their exclusion from the finished results of mathematicians is presented as based on a prejudice against non-formal methods in mathematics.

My take on the phenomenon, however, is different. I agree with the traditional account that diagrams are better fit for heuristic work than for the needs of published work; however, I also agree with the more recent re-assessments that diagrams are as fit for the development of rigorous mathematical proofs and theories as formulas. This is because I take it that presenting proofs and theories is a task of a fundamentally different sort than understanding, explaining or teaching them. Thus the requirements for one are substantially different from those for the other, and the difference is so large that it does not boil down to one being more rigorous than the other. In particular, presenting proofs and theories is a communicative task and, as such, requires our representations to be easily understood by many, while exploring theories and finding proofs in them is the kind of work that is done either by ourselves or in close proximity with others; in other words, these are tasks that take place in heavily contextualised situations. Consequently, the representations we use in these situations can fruitfully exploit the information available in such contexts and need not be meaningful outside them. In other words, it does not matter if the diagram on the board is not understandable by anyone outside the discussion it was drawn for. Diagrams in printed media, however, require being more explicit, not so much in themselves as in their written context: in order to properly interpret a diagram in a written text, the contextual information necessary for its interpretation has to be mostly explicitly given in the text. This makes other synthetic means of putting that same information across, like formulas, a more efficacious tool. Thus, the difference is not one of rigorous vs non-rigorous, but one between wide and narrow audiences, between poor and rich contexts.

When writing about diagrammatic reasoning, it is not rare to draw a distinction between what I have elsewhere (2016) called the epistemic and ergonomic aspects of visual representations, which roughly corresponds to the distinction Larkin and Simon (1987) make between their information content and their computational character, i.e., how such information is extracted from them and, in particular, how quickly and easily (the same distinction appears in Zhang 1997, Kulvicki 2013, and Bechtel 2017). Thus, the reasons why diagrams are better suited for the mathematical notebook and blackboard than for the pages of the research journal are not epistemic, but ergonomic: in the context and for the goals of exploration and analysis, diagrams are easier to use than formulas; in the context and for the goals of a published research paper, it is easier to use formulas. Diagrams could also be used, but they would be too cumbersome.

by Axel Arturo Barceló Aspeitia

I am currently reading a very interesting paper on the history of the semicolon [Cecilia Watson’s 2012 paper “Points of Contention; Rethinking the Past, Present, and Future of Punctuation”, *Critical Inquiry*, 38, pp. 649-672] and it points out a very interesting fact about the historical development of English grammar: that as the written English language developed its autonomy from speech, the way grammarians conceived of its rules – in particular, its rules of punctuation – also changed, moving from:

- telling us how different aspects of the written language corresponded to aspects of speech
- describing the actual way people write
- telling us how better to use the written language to enhance communication

Thus, for example, the semicolon was at first explained as corresponding to a pause (longer than that of the comma, but shorter than that of the point), that is, as corresponding to an element of speech. Then, it became something writers used for several different purposes, while now we conceive of the semicolon as a resource of the written language that has a proper, rule-governed function and that we can use to better get our message across. As the Chicago Manual of Style puts it:

“Punctuation should be governed by its function, which is to make the author’s meaning clear, to promote ease of reading, and in varying degrees to contribute to the author’s style”. (1982, pp. vii, 132; apud Watson 2012, 668)

The particular case of the semicolon is not important in itself, but what it tells us about rules in general is very profound, I think. It tells us a lot about the roles rules play in practices and how these roles change as the practice matures and becomes autonomous (or not). So we can hypothesise that as practices mature, the roles rules play in them change accordingly:

- First, they tell us how aspects of the practice relate or correspond to analogous aspects of other, more entrenched, similar practices. This makes sense at the beginning, when practices are new and people need to familiarise themselves with them.
- Then, as more people engage in the practice and a normal common way of engaging in it emerges, rules are conceived as describing what people actually do.
- And finally, as the practice becomes more mature and autonomous, rules are conceived as revealing the practice’s underlying logic, i.e., how its different aspects contribute to accomplishing its goals in a rational way (or, even better, how practitioners can exploit different features of the practice in order to reach their goals in a more efficient and efficacious way). [Thanks to Ian Cross for calling my attention to this last point.]

Consider an example I have worked on for a long time: the adoption of algebraic methods in geometry. In the beginning, descriptions of the method by mathematicians emphasised its relation to more intuitive and better-known ways of doing geometry; hence a lot of talk of curves, lines, movement, etc. However, as the practice became more common among geometers, it also started to become more and more autonomous, and so geometers felt emboldened to adopt new conventions regarding its use. Finally, once algebraic methods achieved their total autonomy from other (arithmetic, diagrammatic, constructive and mechanical) ways of doing mathematics, their rules started to be conceived as revealing an underlying logic. Nevertheless, remnants of the original conceptions remained for a long time, as mathematicians (and logicians) struggled with issues like *what is a function?* or *what do variables mean?*

Of course, to be fair to Prof. Watson’s assessment, it is important to point out that this is, at most, one of the factors that shape our rule-making practices. Rules have many functions and uses, and thus they are shaped by many forces, many of them contingent on historical circumstances and even on tastes and fashions (I have written on this topic here and here).

**By Axel Arturo Barceló Aspeitia**

Yesterday, I was having an advising session with one of my graduate students. We were reviewing the concluding pages of her – awesome – dissertation, and I suggested she should improve her final sentences. After all, these would be the last words she and her readers would share after a long trip together and so, I thought, they should leave her readers with the impression that something was indeed accomplished, but also that it is worth keeping on thinking about. In order to illustrate what I meant, I picked up some books from the shelf in my office to look for examples. I was looking for something like this:

My hope is that the beginnings sketched here are compelling enough to inspire those cleverer and more knowledgeable than myself – to correct my errors, to fill in what’s been passed over in the case against V=L, and to extend naturalistic methods to the evaluation of higher and more controversial hypotheses.

Penelope Maddy (1997) 234

You can change “V=L” and/or “naturalistic” for any other hypothesis and method and find here a nice blueprint for ending any piece of research. Graham Priest offers us a similar example here:

What will happen to this account in the future, and what consensus, if any, will emerge in the twenty-first century, only time will tell.

Graham Priest (2001) 230

However, while looking for these canonical examples, I also found other interesting ways in which contemporary philosophers have ended their books. Here is a small selection; feel free to share your favourite examples in the comments section:

Philosophical knowledge … is not the product of successful encounters with the skeptic. It is the product of the continuing dialectic among nominalists, conceptualists, realists, positivists, empiricists, and rationalists.

Jerrold Katz (1998) 211

The laws of thermodynamics doom the universe to heat death. Everything, everywhere, will end in silence.

Roy Sorensen (2008) 290

Reflective understanding and constructive critique should, I believe, replace both sleepy complacency and Luddite rage. The philosophers have ignored the social context of science. The point, however, is to change it.

Phillip Kitcher (1993) 391

What are your favourite ways contemporary philosophers have ended their books? Please share!

by **Axel Arturo Barceló Aspeitia**

A few weeks ago I was invited to give a talk at the annual conference on hate speech at the National Museum of Memory and Tolerance in Mexico City. Here are some of my notes:

- Theoretical humanists (and social scientists, not just theoretical philosophers) envy the relevance of practical humanists, while practical humanists envy the precision of theoretical humanists.
- I increasingly doubt the usefulness of the term “racism” to account for discrimination in Mexico. Given the heterogeneity among phenomena such as the exclusion of indigenous peoples, anti-Semitism and discrimination against dark-skinned people, I do not see the point of calling them all cases of racism. I do not see what they have in common that is substantially different from, say, classism or any other sort of discrimination.
- It is very easy to point out the negative effects of the *mestizaje* myth, but what is really necessary and truly difficult is to objectively evaluate the pros and cons of having built our national identity around this myth. Furthermore, it is not necessary to think that this was a mistake in order to realise that we must repair its harmful effects on indigenous peoples, or that we need to keep looking for better, more just ways of building a national identity.
- As long as our paradigm of the political is explicit and conscious public deliberation over reasons and interests, we marginalise fundamental aspects of the political realm that are implicit, unconscious, private and embodied, like emotions. (Even though emotions also build communities.)
- Latin American political thought lays claim to anger as a positive political emotion, but we must recognize that once anger is unleashed there is no way to control it. At most, we may be able to channel it in positive directions.
- The European and American perspectives on free speech and censorship are diametrically different because of their radically different historical experiences, so that the default position is freedom in the United States and prohibition in Europe.
- It is possible – and a good habit – to talk to a person on the other side of the ideological fence without betraying our principles.
- “If you ask me, what would be the only thing I wish you would take away from this talk – if you could take only one thing away – it would be that education makes no difference: we are all capable of producing and being affected by dangerous speech regardless of our education.” Susan Benesch
- We respond in a more rational way to messages in a second language than we do to messages in our mother tongue.
- Speech acts are acts: they involve decision, authority and choice, and these demand different conceptual tools beyond semantics, pragmatics, communication, meaning, truth, etc.

**by Axel Arturo Barceló Aspeitia**

The issue of whether deduction can give us new knowledge is one of extreme philosophical awkwardness. On the one hand, almost anyone with some knowledge of logic or mathematics would find the question trifling, in so far as it is obvious that at least some genuine knowledge must be obtained through competent deduction. However, finding a good example has proved an elusive endeavour. The holy grail would be a simple, straightforward deduction that would certainly deliver new knowledge. But this seems almost impossible, in so far as, if the deduction is simple, then the conclusion must be obviously contained in the premises and, as such, it would be difficult for anyone to have conscious knowledge of the premises without also knowing the conclusion. Nevertheless, in a recent talk, Profs. Héctor Hernández Ortíz and Víctor Cantero gave what I think is the best example in this regard: a straightforward, simple deduction from very intuitive premises – or, at least, premises that anyone with even the most rudimentary knowledge of arithmetic would accept – to a very unintuitive but true conclusion – in this case, a conclusion that only people with some sophisticated knowledge of arithmetic would accept. The argument is as follows:

Controversial conclusion to prove: 0.999999999… = 1

Uncontroversial premise: 1/3 = 0.3333333333…

We multiply both sides by 3: 3(1/3) = 3(0.3333333333…)

And we immediately get the desired conclusion: 1 = 0.999999999…

According to Hernández and Cantero, this is a case of a non-vicious circular argument. But where exactly is the circularity, and is it actually non-vicious? Or is it question-begging instead? Here is my take on where to find the circularity in the argument. The key step is:

3(0.3333333333…) = 0.9999999999…

Presumably, the reason why we find this step uncontroversial is that it is based on the very elementary arithmetic fact that 3 times 3 is 9. However, it is not part of our basic arithmetical knowledge how to multiply infinite decimal expressions, and since this is a multiplication of an infinite decimal expansion by a finite number, the issue in the background is how we extrapolate from what we know about the multiplication of finite expressions to infinite ones. One way of solving this would be to reason by some kind of informal induction, thus:

3 x 0.3 = 0.9

3 x 0.33 = 0.99

3 x 0.333 = 0.999

3 x 0.3333 = 0.9999

…

3 x 0.33333333… = 0.9999999999…
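Each finite step of this induction can be checked mechanically. The following sketch (my own illustration, using Python’s exact `fractions` arithmetic) confirms that 3 times the n-digit truncation 0.33…3 is exactly 0.99…9, that is, exactly 1 − 10⁻ⁿ, for every finite n tested:

```python
from fractions import Fraction

# 3 x 0.33...3 (n threes) equals 0.99...9 (n nines) exactly,
# and 0.99...9 is exactly 1 - 10^-n: strictly less than 1 for every finite n.
for n in range(1, 10):
    truncation = Fraction(int("3" * n), 10 ** n)   # e.g. n = 3 gives 333/1000
    product = 3 * truncation
    assert product == Fraction(int("9" * n), 10 ** n)
    assert product == 1 - Fraction(1, 10 ** n)
    assert product < 1
```

Note that every finite truncation falls strictly short of 1; the contentious part of the induction is precisely the passage to the infinite case.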

Presumably, this, or something relevantly similar, is behind why we find the claim that 3 x 0.33333333… = 0.9999999999… very intuitive. However, one could question the validity of this induction by appealing to something also very basic, i.e., something we learn in elementary arithmetic lessons about how to multiply numbers in the decimal system. As you might recall, we do **not** start multiplying large numbers from the left to the right, but from the right to the left. However, this is what seems to be happening in this case: since we cannot start multiplying from the right, because the series is infinite, we start from the left. Nevertheless, there is a reason why we do not start from the left: numbers might “carry over” from the right, and this might *mess up* our result. In other words, the numbers to the right might add up to more than ten (or a hundred, or a thousand, depending on the size of the number), and this would require us to revise our result. For example, 371 times 2 does not start with 6, but with 7, because 71 times 2 is more than a hundred, so the one from the hundred *carries over*, so to speak, to the left. In contrast, 312 times 2 starts with 6 because 12 times 2 is less than a hundred, and so nothing gets carried over to the left. In slightly more formal terms, the general rule when multiplying in decimal notation is that,

For any digit *D* occupying the *n*th place (from right to left) of a number *C* expressed in decimal notation, the product of *C* times *N* expressed in decimal notation has digit *D* times *N* in that place [or the rightmost digit of *D* times *N* expressed in decimal notation, if *D* times *N* is larger than ten] if and only if the decimal expression to the right of *D* (that is, the decimal expression composed of the first *n*-1 digits of *C*, counting from right to left) times *N* is strictly less than the number corresponding to the numeral, in decimal notation, of a one followed by *n*-1 zeroes.

This sounds complex, but the idea is simple. Consider our previous simple example: if we multiply by 2 a number starting with 3 (from left to right) and followed by, say, any two other digits (to its right), the result will start with 6 (from left to right) only if 2 times the number to the right of the 3 is less than 100 (that is, a one followed by two zeroes). That is, if we multiply 312 times 2, the result – 624 – starts with 6 – 3 times 2 – because the number to the right of the 3 – 12 – times 2 – 24 – is less than a hundred – a one followed by two zeroes, because there are two numerals to the right of 3 in 312; but if we multiply 371 times 2, the result – 742 – does not start with 6, because 71 times 2 – 142 – is more than a hundred. That is why we do not start multiplying from the left.
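For the finite cases, the rule is easy to check computationally. Here is a Python sketch of it (the function name `no_carry_into_lead` is my own, introduced only for illustration), applied to the two examples above, 312 and 371, for the special case of the leading digit:

```python
def no_carry_into_lead(C, N):
    """True iff N times the digits to the right of C's leading digit
    stays below a one followed by that many zeroes, i.e., no carry
    reaches the leading place."""
    s = str(C)
    rest = int(s[1:]) if len(s) > 1 else 0
    return N * rest < 10 ** (len(s) - 1)

for C in (312, 371):
    lead = int(str(C)[0])
    predicted = no_carry_into_lead(C, 2)
    # Assuming 2 * lead is itself a single digit, the product starts
    # with that digit exactly when no carry reaches the leading place.
    actual = str(2 * C)[0] == str(2 * lead)
    print(C, "->", 2 * C, "| rule predicts no carry:", predicted,
          "| product starts with", 2 * lead, ":", actual)
```

Running this confirms the text's diagnosis: for 312 the condition holds and the product starts with 6, while for 371 it fails and the product does not.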

The rule is simple and basic. It is one of the first things we learn when we learn how to multiply expressions in decimal notation. However, when we apply it to this infinite case, we do not get the result that Hernández and Cantero wanted.

According to this rule, 0.33333… times 3 would start with 0.9… if and only if the number to the right of the leftmost 3, times 3, were strictly less than 0.1. This means that if 3(0.33333…) = 0.99999…, then 3(0.033333…) must be strictly less than 0.1. But if 3(0.33333…) = 0.99999…, then 3(0.033333…) is 0.099999…, and thus 0.099999… would have to be strictly less than 0.1. Multiplying both sides of the inequality by ten, we get that 0.999999… is strictly less than 1, which is in direct contradiction with our starting assumption – and Hernández and Cantero's desired result – that 0.999999… = 1.
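The finite half of this second extrapolation can also be checked with exact rational arithmetic. In this Python sketch (again my own illustration, using the standard-library `fractions` module to avoid floating-point noise), every finite truncation 0.0333…3 satisfies the strict inequality the carry rule demands; extending that pattern to the infinite case, just as the first induction did, is what yields 0.0999… < 0.1 and hence 0.999… < 1:

```python
from fractions import Fraction

for n in range(1, 8):
    # x is the finite truncation 0.0333...3 with n threes, represented
    # exactly as the fraction (10^n - 1) / (3 * 10^(n+1)).
    x = Fraction(10**n - 1, 3 * 10**(n + 1))
    assert 3 * x < Fraction(1, 10)   # 0.0999...9 < 0.1, strictly
    assert 30 * x < 1                # scaled by ten: 0.999...9 < 1, strictly
```

Of course, this only shows that the strict inequality holds at every finite stage; whether it survives the passage to the infinite expression is exactly the question the two extrapolations answer differently.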

Thus, we have two different ways of extrapolating from the case of finite multiplication – that is, the multiplication of finite expressions in decimal notation – to infinite multiplication. Each one gives us different, mutually inconsistent results. There is nothing wrong in adopting the first way and not the second, but there is nothing particularly right about it either. We must take other considerations into account to make the decision. And this is what should be meant when we say that this argument is circular, but not viciously so. It is circular insofar as it is based on a step that is valid only if we reject other, equally suitable ways of performing the relevant operation that are inconsistent with it. It is not vicious insofar as it does establish a rational inferential link between the premises and the conclusion. Furthermore, it gives us new knowledge insofar as, as Hernández and Cantero correctly point out, the premises and operations involved in the argument are substantially more intuitive and uncontroversial than the final conclusion.