One of the things you can do with formulae of formal logic is use them to make statements about how things are. This may not be their main point, but it is undoubtedly something you can do. What ways are there to do it? I will take the propositional calculus as a case study and consider three existing ways, before introducing a new one.

Early on in modern logic, one common way to use the formulae of the propositional calculus to make statements was to have the atomic formulae (and sometimes connectives too) denote or refer to things, such as sentences or propositions. This sometimes led to use-mention confusions, but there are ways around that.

An illuminating study of this kind of approach is a very early paper of Quine's, called 'Ontological Remarks on the Propositional Calculus'. While Quine in that paper did straighten out a way for the denotational approach to work while avoiding use-mention issues, his ultimate recommendation was that we move to an *abbreviational* approach.

On the abbreviational approach, atomic formulae are taken to abbreviate sentences like 'Snow is white', and the connectives are taken to abbreviate words like 'and' and 'or'. The brackets are not taken to abbreviate, but rather to be a mild regimentation of the natural language being abbreviated. On this approach, formulae can be used to make statements in virtue of the pre-existing statement-making powers of the expressions they abbreviate. Questions may then arise as to what the connectives may properly abbreviate, such that the rules of logic do not lead us from truth to falsity (for instance, can the hook properly abbreviate 'if...then'?).

Another approach, which is perhaps the most prevalent one in our day, is the truth-conditional approach, on which truth-conditions are stipulated for formulae. The idea is that by saying under what conditions a formula is to be true, we confer a meaning on it. There are some subtle philosophical concerns we could raise about this, along the lines of: does a set of truth-conditions really pick out a single, unique meaning? Nevertheless, the procedure seems to work in practice, even if there are some issues which arise when we try for a theoretical description of how it works.

In this post I want to point out that there is a fourth way we can use formulae of the propositional calculus to state things. On this approach, we don't end up using the formulae by themselves to make statements. Rather, we stipulate things about them *conditionally* on how the world is in various respects, and then make statements about them which, in virtue of the stipulations, have implications concerning the world. This approach may be called the conditional-stipulation approach, or the modelling approach.

It is very important to note that I am not proposing this as a rival to the other approaches. Rather, I just think it is interesting to see that such a thing is possible. More tentatively, I think it opens up a new way of looking at formal logic, and may help us with some questions in the philosophy of logic.

In the remainder of this post I will just give a simple illustration of one way of pursuing the conditional-stipulation approach with respect to formulae of the propositional calculus. This particular way works by assigning values to the atomic formulae conditionally on how things are.

For instance, we might say:

Let the value of '*p*' be 1 if snow is white, 0 otherwise, and let the value of '*q*' be 1 if grass is green, 0 otherwise.

Now, if we assert that the value of '*p*' is 1, this assertion has, *via* the above stipulation and its being in force, the implication that snow is white. We have thus used the formula '*p*' in saying something about how the world is.

We can now say things about the values of compound formulae, and these statements, *via* the value-tables governing the assignment of values to compounds, together with the conditional stipulations, can also have implications for how things are. For instance, if I say that the value of '*p* v *q*' is 1, we can "work backwards" using the value-table for 'v' and conclude that, if that's true, either snow is white, or grass is green, or both.
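The mechanics here can be sketched in code (a toy illustration only, not part of the approach itself; the function names and the dictionary of "worlds" are my own conveniences, and the snow/grass conditions follow the example above):

```python
# Toy sketch of the conditional-stipulation approach (names are illustrative).
# Stipulations: 'p' has value 1 if snow is white, 0 otherwise;
#               'q' has value 1 if grass is green, 0 otherwise.

def value(formula, world):
    """Value of a formula under the conditional stipulations.

    `world` records whether each stipulated condition obtains. In the
    intended use we may not *know* which world obtains, so the model's
    state can be hidden from us even though it is guaranteed faithful.
    """
    if formula == 'p':
        return 1 if world['snow_is_white'] else 0
    if formula == 'q':
        return 1 if world['grass_is_green'] else 0
    if formula == ('or', 'p', 'q'):
        # Value-table for 'v': 1 just in case at least one disjunct is 1.
        return max(value('p', world), value('q', world))
    raise ValueError(f"no stipulation covers {formula!r}")

# "Working backwards": which ways for the world to be are compatible
# with the assertion that the value of 'p v q' is 1?
worlds = [
    {'snow_is_white': s, 'grass_is_green': g}
    for s in (True, False) for g in (True, False)
]
compatible = [w for w in worlds if value(('or', 'p', 'q'), w) == 1]
# `compatible` contains exactly the worlds where snow is white,
# or grass is green, or both.
```

Running the backwards inference over all four candidate worlds recovers just the implication stated above: asserting that the value of '*p* v *q*' is 1 rules out only the world where snow fails to be white and grass fails to be green.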

That concludes my basic illustration of the conditional-stipulation or modelling approach. It is a simple enough idea. I find it natural to call it a 'modelling' approach, because once we make the conditional stipulations, the state of the formulae as regards their assigned values corresponds to, or is isomorphic with, how things are in various respects. But note that it is a peculiar sort of modelling: the model's state is guaranteed to be faithful to how things are, but we may not know what that state is; we may be mistaken about it. So it is not like a situation where we have a model before us and know its state, but are unsure whether that state is accurate.

I may do some or all of the following in future posts:

- Illustrate how we can do the same trick, but use proof-theory to do the work of the value-tables. (Seeing that may shed light on the close relationship between proof- and valuation-theory.)

- Use the proof-theoretic version of the conditional-stipulation approach to try to shed some light on a confusing result of Carnap's concerning the "categoricity" or meaning-determiningness of rules of inference (or more fundamentally, the consequence relation which they can be used to determine). (This result sometimes plays a role in debates about inferentialism.)

- Illustrate how this approach might be extended to quantification theory. (A certain optional version of this extension, where we use "points" or some supply of abstract objects whose intrinsic features do not matter in place of a domain containing the objects we ultimately want to talk about, might be used to give us a new perspective on Etchemendy's arguments against the Tarskian approach to analyzing logical consequence.)