
Philosophy Blog

Life at the thought-face.

Tuesday, October 14, 2003

Silence


Two months' silence will have indicated to just about everyone that this Blog is comatose, if not moribund. Well, the unofficial silence-due-to-hassle-at-work is going to be official for the next 8 weeks. I have set up another Blog for the students on my Berkeley module, which will be taking most of my attention this term. It is a public blog, so if you have an interest in the minutiae of Berkeley interpretation, click along to:
Berkeley's Principles

.: posted by Tom Stoneham 12:01 PM


Friday, August 15, 2003

Memory and Knowledge



Sitting in the car during my daughter’s gym class last night I made the decision to completely abandon a paper that has been kicking around and haunting me for a long time. The paper tried to defend the following claim:

(KPK) If someone is to have memory knowledge, then he must now be justified in thinking that he previously knew what he now remembers.

The idea is that this places a slightly stronger condition on memory knowledge than the currently fashionable:

(TP') If someone knew that p at t1, and has an accurate memory that p at t2 (and has no reason to doubt either the accuracy of her memory or the justifiedness of the prior belief), then she knows that p at t2.

After several attempts, I came to the conclusion that there was no example which required (KPK) to explain our intuitions: all of them could be adequately explained by (TP’). But I was undaunted because it should be clear that the method of example and counter-example is not the only way to argue in epistemology. In fact, the apparent conclusiveness of such well-known arguments by example as Gettier’s famous paper might even be an illusion. (Brian Weatherson has recently given an argument to the effect that it is an illusion, but whatever the success of that particular argument, which rests on an objectionable theory of meaning, the moral remains that we should be more cautious about examples in epistemology.) And in fact if you look at arguments for claims like (TP’), they are based not on examples but upon very general considerations about the possibility of rational belief. The other thing you notice about those arguments is that they tend to lump memory and testimony together, as if they are in the same epistemic position (e.g. Dummett, Burge, Owens). Now I agree that the general considerations make the testimony version of (TP’) correct, but I decided to argue for (KPK) by showing that there were significant disanalogies between memory and testimony.

But the arguments I came up with were lousy: not even rhetorically effective. If you have less sense than time, you can read them here (the lousy stuff is in Section III, the rest I still think is quite good).

Anyway, I am pleasantly surprised at what a relief it is to completely abandon a paper.

.: posted by Tom Stoneham 10:43 AM


Monday, August 04, 2003

More on testimony



Over the weekend I was thinking about how to respond to a potential objection to the argument I offered on Foley’s behalf, and realized that Foley cannot accept my argumentative gift.

The objection is that I have cut methods too coarsely. If I look hard enough, there will be some ‘method’ of forming opinions which it is a priori guaranteed that any two people will share. Surely I trust myself not merely because I form my opinions by reflective self-criticism, but because when I am critical of my own opinions, I appeal to a particular set of epistemic principles (e.g. Bayes’ Theorem, or the injunction not to believe a logical falsehood). However, when I know as little about someone as I know about Anonymous, it cannot be an epistemic default position that she has formed her opinions by the same methods which I trust in my own case.

To see why this criticism fails we need first to make clear a terminological move of Foley’s (p.27). His primary thesis is that one should trust one’s opinions in proportion to their invulnerability to self-criticism, and he uses a sense of ‘rational belief’ which allows the paraphrase: one’s opinions are epistemically rational in inverse proportion to their vulnerability to self-criticism. This allows him to say (p.41) that someone who, unlike us, thought that dreams were a better guide to the external world than sense perception, could turn out to be epistemically rational so long as this conviction would survive reflective self-criticism. Such a person should trust her dream judgements in the same way that we trust our perceptual judgements. Similarly, Foley allows the possibility that someone suffering from Cotard’s delusion is ‘not necessarily being irrational’ (p.41). It seems to follow that the only thing which matters when considering whether to trust an opinion of one’s own is that it survives reflective self-criticism. In particular, it does not matter which epistemic principles one endorses, for none are intrinsically better than any others, just so long as those principles also stand up to reflective self-criticism. In other words, rationality and trustworthiness go with standing up to self-criticism, whatever the epistemic principles endorsed by the subject.

Given this, we need to distinguish the direct argument I offered for the prima facie trustworthiness of testimony from Foley’s own argument. The direct argument goes: if surviving self-criticism is sufficient to make an opinion of mine trustworthy, whatever epistemic principles I endorse, then surviving criticism by someone else, whatever epistemic principles she endorses, is sufficient to make an opinion of hers prima facie trustworthy for me (with lots of qualifications). I think there is something quite plausible about this argument, but it falls foul of Foley’s requirement that the trustworthiness of testimony be contingent. So Foley’s argument must use as a premise the allegedly contingent truth that in general other people’s epistemic principles would stand up to reflective criticism by me. And it must be reasonable to believe this without any evidence. So what Foley needs to appeal to is the thought that it is an epistemic default position to take it that if someone else’s epistemic principles have survived scrutiny (by them) then they would survive scrutiny by me. And now I am inclined to agree with Owens: it is hard to see how this could be contingent and yet reasonable to believe without recourse to evidence.

.: posted by Tom Stoneham 11:39 AM


Friday, August 01, 2003

Setting the issue of whether Foley is committed to some form of transparency thesis to one side (I think he is, but I also think he would initially deny that he is), I want to look at his account of testimony. The reason I started reading Foley’s book was a review by David Owens in Mind which roundly rejected the account in less than two pages. Now, Owens’ reading of Foley is a perfectly acceptable interpretation of what he says, and on that interpretation, the account is as flawed as Owens says. But I want to argue that there is another, more subtle, reading of Foley which avoids Owens’ objection.

Foley’s starting point is the claim that:

1. It is perfectly reasonable to trust one’s own intellectual abilities.

By this he means that, though we cannot defeat the sceptic and prove (1), we are being rational when we take those of our opinions which survive reflective self-criticism to be true. If we are brains in a vat, this trust is misplaced, but it is not irrational. The argument for this involves some large claims about the nature of epistemology, the relation between rationality and knowledge, and the impossibility of defeating the sceptic, so I am just going to grant it for present purposes.

Foley’s next move (that is ‘next in my rational reconstruction’) is to argue for the contingent empirical claim:

2. My intellectual faculties and the contexts in which I have exercised them are not significantly different from those of others.

He concludes:

3. It is inconsistent not to trust the intellectual abilities of others.

That is, in the absence of any evidence about either the proposition in question or the reliability of the other with respect to such issues, we are reasonable in taking someone’s opinion that p as being correct. Foley graphically illustrates this with a character called Anonymous. All we know about Anonymous is a list of some of the propositions he or she believes. We cannot infer from these anything about when or where Anonymous lived, or what topics he may have been expert or unreliable on. Foley thinks that if all we know is that Anonymous believed p, and we do not have any opinion of our own about p, it is prima facie reasonable to accept p on that basis alone. He also thinks that, given (1), the only way to deny the conclusion is to deny (2); that, while possible, is very hard to do, for it involves treating myself as special in a way which is not warranted by the evidence.

Owens’ criticism is simple. The notion of inconsistency being used in (3) is epistemic, i.e. one would be in a position which would not survive reflective self-criticism. But that means that (2) must not only be true but also known to be true. And Owens rightly points out that if (2) is an empirical claim, then it is hard to see how we could know it to be true without recourse to the social and biological sciences, and all such scientific knowledge inevitably involves testimony. The solution, Owens suggests, is to give an a priori defence of (2).

What Foley writes is:

Given that it is reasonable for me to think that … my intellectual faculties and my intellectual environment have broad commonalities with theirs, I risk inconsistency if I have intellectual trust in myself and do not have intellectual trust in others. (p.106)

So Owens is assuming that if (2) is a contingent, empirical claim, it can only be reasonable for me to think it (Foley is careful not to talk of knowledge here, given his views about the structure of epistemology) if I have a posteriori evidence for it. And when one asks what such evidence would be, we immediately see how it relies on testimony.

There are two things we can say in reply: First, we might challenge the assumption and argue that there are empirical claims which it can be prima facie reasonable to believe in the absence of evidence to the contrary. Secondly, we can say that the alleged circularity is not vicious, for Foley is not trying to show that we can find reason to trust testimony which does not itself rely on testimony, rather, he is trying to show that if we are to be epistemically rational, we cannot avoid relying on testimony.

Taking the second first, we should note that Foley has another argument, which only establishes the weaker:

3’. It is inconsistent not to trust the intellectual abilities of anyone else at all.

The argument for this is that my opinions have been thoroughly and unavoidably influenced by others already, so if I trust myself I must also be trusting those others who have influenced me. Whatever the merits of this argument, it makes clear what Foley’s objective is, namely to show that it is very hard, though not completely impossible, to trust myself and not trust others, without falling into epistemic inconsistency. He clearly thinks that to trust oneself is nearly always already to trust some others, so he does not accept that there is some testimony-free starting point from which the question of whether to accept the testimony of others can be raised and needs to be answered without accepting testimony. When describing the main argument, he repeatedly says that ‘intellectual self-trust creates a pressure’ (e.g. p.107) to trust others. The thought is not that intellectual self-trust gives us an independent basis from which we can see that trust in others is justified, but that there is a certain sort of instability in the position of one who trusts himself but not others.

What then of the first point? It is clear that Foley thinks we can have empirical reasons to deny (2), so in one sense it is not a priori. But when he argues against the view that the prima facie trustworthiness of testimony is a priori (e.g. Burge), his objection seems levelled at the thought that it is necessary (p.98), rather than the thought that we can come to that opinion without appealing to empirical data. Perhaps the key idea here is that thinking one is, intellectually speaking, no different from the average person is reasonable in the absence of evidence to the contrary. It is, as it were, the epistemic default position. Whether this is Foley’s view or not, I cannot tell from reading his book, but it is a possible position which maintains most, perhaps all, of the attractive features of Foley’s position and deftly avoids Owens’ criticism.

Perhaps it is just too implausible to think that a contingent, empirical proposition could be such that it is reasonable to believe it in the absence of evidence either for or against it? One way of making it plausible is to appeal to the epistemic asymmetry between the claim that one is special and the claim that one is ordinary, no different from the others. It might seem that the claim of specialness always needs more evidence than its denial, and (2) is just the denial that I am intellectually special.

The claim of specialness wears the epistemic burden of proof only against the background of expected homogeneity. But what prior reason is there to expect humans to be homogeneous with respect to their intellectual faculties and environments? It is hard to see how there can be an answer to this which does not undermine the status of (2) as a contingent empirical claim.

Perhaps, instead, we should look at (2) given (1). If we accept that our own intellectual faculties and past are such as to give prima facie grounds for taking our own opinions to be true, then to deny (2) is to say that the intellectual faculties of others are not such as to generally get things right. Now, the ‘others’ we are talking about here are reflective and self-critical reasoners, so the denial of (2) given (1) entails that my reflective, self-critical reasoning achieves its goal (truth) whereas other people’s does not. So the situation is this. There is an end (true opinion), a means for achieving it (reflective self-criticism), and an assumption (when I use that means I achieve the end). The question is, then, whether in the absence of evidence to the contrary, it is reasonable to think that other people using the same means will succeed in achieving the same end. And it does seem that reasonableness favours this opinion over agnosticism, because the assumption already commits me to the thought that the means is the right sort of thing to produce the end.

Now we can see a minor difficulty for Foley. What (1) and (2) get him is that the opinions of others which are such as to survive reflective self-criticism, what he calls deeply and confidently held, have prima facie credibility for me. But that does not entail that Anonymous is credible, because we do not know if the list represents deeply and confidently held opinions or the ‘doxastic counterparts of whims’ (p.26). Perhaps in real life cases we can usually tell on a case by case basis whether someone is expressing a deeply held opinion or a mere hunch or inkling, and that is certainly what Foley needs, because the general claim that most of what people sincerely assert expresses opinions that would survive self-criticism seems palpably false.

.: posted by Tom Stoneham 2:28 PM


Tuesday, July 29, 2003

That last post of mine clearly shows that I am spending too much time on admin, and even beginning to care about these things. Time to get back to some philosophy. I have been reading Richard Foley’s book Intellectual Trust in Oneself and Others, which is very stimulating, though so far rather lacking in detailed argument. Anyway, there is one thing I wanted to take issue with. Foley appears to assume that the following are equivalent (e.g. pp.28, 39):

1. S’s opinion that p is ‘in accord’ with S’s other reflective first and second order opinions.
2. S’s opinion that p will survive S’s most thorough and searching reflection.

Foley’s view is thus that if someone accepts p because q, but has a second-order opinion about what counts as good evidence, which opinion entails that q is not a good reason for p, then that person will be able to discover and resolve the tension purely by solitary reflection. Foley accepts that such reflection may sometimes be unwise, because it will take time and energy which could be put to better use, but it is always possible for the individual to discover and resolve tensions in her opinions.

What this view neglects (caveat: I am only on p.48, so Foley may discuss this later) is that reflective self-criticism is a skill, and a difficult one at that. Someone unskilled in spotting incompatibilities in her opinions may well reflect away as hard as Foley cares to require and never spot an inconsistency. And yet when it is pointed out to her by a suitably skilled teacher, she accepts it and revises her opinions accordingly. Some of the evidence on the Wason selection task suggests exactly this phenomenon: subjects fail the task and are told that they have failed, but cannot work out on their own what their mistake was. Foley talks a great deal about ‘self-criticism’, but this could mean either criticism achievable by oneself alone, or criticism of someone by her own standards. The two come apart when X’s ability to criticize herself (by her own standards) falls short of Y’s ability to criticize X by X’s standards.

If Foley is really overlooking this point, then it would seem that he is assuming some sort of transparency of the semantic relations between our beliefs/opinions. For to be a good (self-)critic one has to be able to spot how different opinions are related, to spot what is incompatible with what, and if everyone is potentially as good as their own best critic, then there can be no incompatibility between someone’s opinions which is hidden to her most careful reflection. Maybe Foley is happy with this assumption, but with the current state of the philosophy of mind, I think it needs a fairly substantial defence.

.: posted by Tom Stoneham 11:32 AM


Rather long rambling post, initially about examination rules but ending up commenting on the point of getting a degree …


I got into a surprisingly heated (on my part) argument about examinations a few weeks ago. The question was this: suppose a student turns up very late or even misses an examination completely, should she be given a chance to take the examination?

The view I was opposing says:
Yes if the following conditions are met:
1. The lateness/absence was a genuine mistake
2. The examiner believes there has been no collusion

My view is that we should answer affirmatively only if there are genuine medical or compassionate circumstances explaining the lateness/absence, and that these have been confirmed by an appropriate independent source, for example, there is a medical certificate. The disagreement, then, was over cases where the student forgets, gets the time wrong, or sets her alarm wrong, or some similar organizational error. I wanted to argue that these are culpable mistakes and the student should suffer the consequences. My colleagues were of the mind that we should be more sympathetic and do what we can to enable the student to prove her intellectual merit.

Less heated reflection suggests that the real disagreement here is over the very nature of assessment by examination. If one wants to assess someone’s ability to do X, then one has to choose whether to have a form of assessment which minimizes the gap between competence and performance (to adapt a useful distinction of Chomsky’s), or to simply test performance. For example, the person who wins the gold medal in the 100m at the Olympics is not necessarily ‘the fastest man in the world’, rather it is the person who ran the fastest at a particular time in particular conditions. To win the gold medal, one has not only to be very fast, but also to perform on the day. In contrast, holding the world record is completely different: you can keep on trying whenever it suits you. Of course, performance is a function of competence, so testing performance also tests competence, but it tests other things as well. In particular it tests whether someone can exercise her ability to run fast, or to argue philosophy, on demand and under pressure.

Now here at York we use two methods of assessment of our students: closed examinations and essays (aka term papers). Assessment by essay aims to reduce the performance-competence gap, because the students have months and months to write their essays, they can show drafts to their tutors, discuss their ideas with anyone who will listen, check and recheck what they have written etc. etc. Of course, they have to get it in by a deadline, but even deadlines can be extended for sympathetic reasons. In contrast, assessment by closed examination explicitly tests performance. The students must do their best in very specific and rather artificial circumstances. To prepare for a closed examination one must not only make sure one knows lots of philosophy, but also that one can recall and express it under pressure, and in a very short time. There are special skills here, because not only must material be recalled, but it has to be marshalled and pared down in order to answer a precise and unexpected question, all in a short period of time. A well-prepared student may have the competence to write a 10,000 word dissertation on, say, free will, but in an examination she is being asked to select the 800 or 1000 words which best answer the question which was set.

So now we have a way to defend my hard-line answer to the original question. If a student is late for a closed examination because his alarm did not go off, or he got the date and time wrong, then he should not be given extra time or other opportunities to redeem himself. For to do so would be to miss the point that the examination is testing the ability to answer certain questions, at a specified time and place, much like the Olympics is testing the ability to run fast on a certain track at a specified time and place.

But this argument is too hasty, for the point of the closed examination is not to test the student’s ability to perform on, say, Tuesday 15th June at 9.30 a.m., but her ability to perform under pressure and on demand. And whether she does the exam on Tuesday at 9.30 or Wednesday at 12.30 makes no difference to this (so long as there is no cheating, collusion or special advantage to taking the exam later). The ability to perform ‘under pressure and on demand’ is equally satisfied by someone who does the exam at a slightly different time to everyone else, because he still has to produce the philosophy there and then under exam conditions.

In order to maintain my hard-line position, I need to ask why we have closed examinations at all. What is so good about being able not merely to write good philosophy but to do so on demand and under pressure? Or to be more precise: what is the lifelong benefit that a student gets from being able to do this? Now it should be fairly obvious why medics or lawyers should be assessed like this, because in their careers they will be asked to apply their knowledge at specific times and places. (If you are wondering about the point of the Olympics, think of how such tests of skill indicated someone’s abilities as a warrior.) But no such consideration applies to philosophy students, not least of all because it is, for the overwhelming majority of our students, not a vocational degree.

However, we do hope that spending three years at University studying philosophy benefits our students in ways which have relevance to their future lives and careers. We often talk up the critical and analytical skills we teach, the ability to write clearly about complex issues, to argue coherently and persuasively, to see both sides of an issue etc. etc. How much better if our students can not merely display these skills but also display them on demand and under pressure? If you are an employer looking for someone who has good analytical skills, would you not also like to know that these skills will be employed whenever and wherever you ask? And if we think that one important point of closed examinations is to show potential employers that our students have not merely competence but can also perform at a high level, we devalue the goods by being too flexible. If we give the same certification of ability to perform under pressure and on demand to someone who oversleeps and misses the exam as we give to someone who turns up on time, then the ability displayed is not one of particular interest to potential employers.

There is no academic reason to insist that a closed examination be taken on time or not at all, but there is a reason for all that. We award degrees, and the value of those qualifications matters to our survival as a University. It would be ridiculous to think that the only value of a degree from York is the effect it has on our graduates’ employability, but that is one undeniable benefit, and, depressingly, it is the one which most of our students value most highly. If the point of teaching philosophy was only to bring our students one step closer to eudaimonia, then there would be no point in having closed examinations at all, let alone insisting they be taken at a specific time and place. But so long as we accept that it is some part of what we are doing to make our students better at their chosen careers, be that civil servant, diplomat, lawyer, accountant, journalist or whatever, we should take the hard-line on closed examinations. To give special treatment to someone who is late or absent through his own fault is to reduce the value of the degree as an indicator of suitability for such careers.

.: posted by Tom Stoneham 10:54 AM


Thursday, July 10, 2003

Ralph Wedgwood emailed to say that something in my last post was wrong because a necessary truth cannot explain a contingent truth. Independently of whether what I wrote was wrong, this seemed an interesting issue (even, or perhaps especially, when we set aside all that stuff about necessary beings creating the universe).

My strategy was to try to get an example where p because (q and r), with q necessary and r contingent, and then argue that in that case ‘p because q’ was also true. The basic thought being that not all true explanations are complete explanations. So my example went:

Definition: x is a misanthrope iff (y)(x hates y if y is human)
Assumption: being human is an essential property, so John is necessarily human.

Suppose Peter is a misanthrope and he lacks all other reason to dislike John. So Peter hates John because John is human. Now a fuller explanation, offering a sufficient condition, might be that Peter hates John because Peter is a misanthrope and John is human. But that does not make the elliptical explanation false, just incomplete. It is true because it picks out the feature of John which is responsible for Peter’s hate. Suppose Mary is no misanthrope but still hates John: would not an adequate (though not logically sufficient) explanation be that John is boorish?
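Schematically (the regimentation is mine, not anything in the abandoned paper), writing q for the necessary conjunct and r for the contingent one:

\begin{align*}
q:\quad & \mathrm{Human}(john) && \text{necessary, given the assumption}\\
r:\quad & \mathrm{Misanthrope}(peter) && \text{contingent}\\
& \mathrm{Hates}(peter, john) \text{ because } (q \wedge r) && \text{the full explanation}\\
& \mathrm{Hates}(peter, john) \text{ because } q && \text{true but incomplete}
\end{align*}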

Anyway, whatever one’s views on that, Bryan Frances has come up with a better example. Explanation is an epistemic notion, and according to Kripke, necessity is not. So an a posteriori necessity might explain a contingent truth. Here is an example:

Suppose the author Ruth Rendell has a distinctive stylistic quirk. We then notice that the author Barbara Vine has the same quirk. We begin to wonder if they went to the same school etc. Then we discover that they are one and the same person, that she writes under two names. That explains it! However, it is not necessary that an author who writes under two names writes in the same style under each name, so a contingent fact has been explained by a necessary one.

This example, like mine, trades on the fact that a correct explanation might be incomplete, in that it does not cite a sufficient condition. For if we make the similarity between RR and BV such that RR=BV is sufficient for it, then what is being explained is not contingent.

.: posted by Tom Stoneham 11:36 AM


Monday, July 07, 2003

Deduction


Recent work by Paul Boghossian suggests that we should be very careful in distinguishing different questions which might fall under the general heading ‘the justification of deduction’. So I want to see how many I can separate.

1. What can we say to persuade someone who seriously doubts that all of our rules of inference are necessarily truth-preserving?
2. Can we justify, to ourselves, our use of deduction by showing each of the rules of inference to be necessarily truth-preserving?
3. Can we justify the practice of using truth-preserving rules of inference?
4. How can we show that a particular inference is valid?
5. How does inferring according to a valid rule from premises one is justified in believing justify one in believing the conclusion?

The answer to the first question is probably ‘Nothing’, because while one may be able to use non-deductive patterns of thought to persuade someone that deduction was useful, or generally reliable, it does not look like one could show that it was necessarily truth-preserving without using deduction itself. That is, of course, why the extreme foundationalist empiricism of Mill failed. The second question, however, looks to be the perfectly sensible one we are pursuing when we try to choose between various logics.

The third question is an interesting one. For many philosophers it is too ‘obvious’ to need addressing: valid inferences can never lead us astray. But there are two reasons for thinking this answer inadequate. (i) If we are not certain of our premises, which is the usual epistemic situation of humans, we may want a way of drawing conclusions from them which respects that uncertainty rather than conditionalizing it away. (ii) Deductive inference may have costs associated with it, such as time and computational complexity, which make rough and ready heuristics much more useful in real world situations. The extent to which you think that there is a disagreement here is a function of whether you think justification is merely a permissive matter. For one might think that deduction is always permitted, and that heuristics and probabilistic inferences are also permitted, though perhaps only in certain circumstances. But someone else might think that there is also a prescriptive notion in play around this area, a notion which tells us what inferences we ought to make, and if we have the prescriptive notion in play, it is not hard to generate a conflict between deduction and other rules.

Question 4 is one I find particularly interesting: do we recognize the validity of a particular inference directly, or by recognizing it as an instance of a valid form of reasoning? I think that the answer is ‘Both’. Sometimes we take an inference to be valid because we cannot imagine the premises being true and the conclusion false. Other times we do not need to engage in such imaginings because we recognize that the inference is an instance of a pattern we unhesitatingly accept (e.g. modus ponens). [Aside: Consider Vann McGee’s famous counter-examples to modus ponens. Only someone who has done some logic will recognize that as an instance of the same pattern as everyday modus ponens. Does learning logic teach us to overgeneralize the acceptable patterns? Or does it extend our recognitional capacities?]

Question 5 is the question Boghossian addresses in his most recent paper on the topic, ‘Blind Reasoning’. The difficulty here is seeing why, once we have answered all the others, this question remains to be answered. I guess just about everyone could agree on the following as jointly sufficient and severally necessary:
(i) Belief in the premises is justified.
(ii) The justification of the premises is independent of the thinker having justified belief in the conclusion.
(iii) The inference is valid.
(iv) The conclusion is believed because it follows validly from the premises.
Epistemological externalists and internalists will disagree on how to understand the ‘because’ in (iv), but should agree with the general form of the answer. So what is the problem? According to Boghossian, the problem is to flesh out (iv) in a way which is both true and non-circular. If the ‘because’ is merely causal, then it makes obviously unjustified conclusions justified. But if it is justificatory, then a vicious regress looms.

The difficulty lies in establishing these two claims.

.: posted by Tom Stoneham 9:01 AM


Monday, June 23, 2003

Religion and Art



Perhaps I should clarify this question. For present purposes a philistine is someone who thinks that there is no such thing as aesthetic value, who thinks there is no sense, not even a subjectivist one, in which it is ever true that a drawing by Leonardo or Picasso is better than one by a child. There are facts about which drawings people are prepared to buy and to display, but these are psychological facts about individuals. (And it would seem that most individuals are more interested in looking at a drawing by their own child than by Leonardo.) To put it another way, a philistine is one who thinks that there is no more to art than there is to cookery. A philistine will think, with Norman Tebbit, that there is no significant difference between a Titian and a topless photo on page three of The Sun.

Now it seems to me that there is an unexamined assumption in academic philosophy that a philistine must have failed to understand art. The understanding in question probably involves both cognitive and conative faculties, but it is understanding for all that. In other words, a philistine is ill-equipped to teach aesthetics because, almost by definition, he does not understand the subject.

But the analogous view about the philosophy of religion, that an atheist does not understand the subject, is not taken seriously in most Philosophy Departments. A philistine thinks that philosophical aesthetics has no subject matter, while an atheist thinks that philosophy of religion has no subject matter. What is the difference?

One response I can expect is that if a philistine reached his views by a process of philosophically respectable reasoning, then he would be just as well equipped to teach aesthetics. But my concern was that it seems to be a widely held assumption that no one could be a philistine as a result of such a process. If you consider the matter properly, you cannot deny that there is such a thing as aesthetic value, and, contraposing, if you are a philistine, you must have a failure of understanding.

.: posted by Tom Stoneham 12:08 PM


Saturday, June 21, 2003

Something for the weekend:

Why is it not merely acceptable but quite common for atheists to teach the philosophy of religion, but unheard of for philistines to teach aesthetics?
(I hope someone offers me a real-life counterexample to that thought!)

.: posted by Tom Stoneham 5:26 PM


Friday, June 13, 2003

‘The number of planets is possibly less than 7’



I have been looking at ‘Reference and Modality’ with my second years, and one of the things I like to do in seminars is to make them really critical about philosophers’ assertions along the lines of ‘these statements would be regarded as true … and these as false’.

Now Quine says that:

(17) The number of planets is possibly less than 7

is true, and:

(20) 9 is possibly less than 7

is false. We should note that he is understanding the modality here to be analyticity, but for my purposes that does not matter. I asked my students if they could imagine circumstances in which they would use the sentence:

(17.1) The number of planets is less than 7.

They came up with the obvious stories, including gratuitous use of the DeathStar, but I wanted to push the point: would they use that very sentence? And now we began to see an oddity. If one was trying to describe the quantity of planets in the solar system, one would say that there are fewer than 7. Quine is not being careless about the less/fewer distinction, because ‘9 is fewer than 7’ is clearly incorrect. [Illustration: ‘Do you have fewer than 7 apples?’, ‘Yes, I have 5 apples, and 5 apples is fewer than 7 apples because 5 is less than 7.’]

‘Less than’ and ‘greater than’ express relations between numbers, but ‘more than’ and ‘fewer than’ express relations between quantities (of things). So Quine must be intending us to read (17.1) as a claim about the relations between numbers, not a claim about the quantity of planets.

First comment: once we spot this, (17) is not so obviously true, at least when the modality in question is not analyticity.

Second comment: Quine gets from (17) to (20) by substitution, and that substitution turns upon a non-modal claim, viz.:

(A) The number of planets is 9.

But now we can see that this is ambiguous, because we do not know whether this is a claim about numbers, or about the quantity of planets. Quine needs it to be the former, so it must be understood along the lines of:

(B) The square root of 81 is 9.

This statement says that the value of a function for a given argument is 9. I.e. f(81)=9. So (A) must say something similar: f’(planets)=9. What is the function here? Counting, surely. So (A) says:

(C) The result of counting the planets is 9.

And now I am beginning to wonder whether that is not elliptical. Surely the proper result of counting the planets is a quantity of something, viz.: ‘9 planets’. If this is right, then substitution on (17) would give us the nonsensical:

(20*) 9 planets is possibly less than 7.

So there is no true identity statement which can be used to get from (17) to (20) [or from Quine’s (15) to his (18), for that matter].
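To picture this last step, here is a toy sketch in Python (entirely my own illustration, nothing of Quine’s): if counting delivers a bare number, comparison with 7 is well-formed; if it delivers a typed quantity, the comparison fails in just the way (20*) does.

from dataclasses import dataclass

@dataclass
class Quantity:
    n: int      # the numerical component
    kind: str   # what is counted, e.g. 'planets'

planets = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter',
           'Saturn', 'Uranus', 'Neptune', 'Pluto']   # the 2003 count

as_number = len(planets)                         # 9: a bare number
as_quantity = Quantity(len(planets), 'planets')  # '9 planets'

print(as_number < 7)   # False, but perfectly well-formed
# as_quantity < 7      # TypeError: a quantity does not compare with a bare
                       # number, the analogue of the nonsensical (20*)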

Unfortunately, saying (C) is elliptical in this way appears to entail the falsity of:

There is a 1-1 mapping of the Fs to the Gs iff the number of Fs = the number of Gs.


And I guess a good many philosophers would be reluctant to let that go. Still, the argument was fun while it lasted.

.: posted by Tom Stoneham 3:55 PM


Thursday, June 12, 2003

I have been a bit silent on the blog recently in part because I had a confidence-knocking philosophical experience: I gave the McCulloch paper at the conference in London and was so very nervous that I bombed. I have not been that nervous giving a paper since I started on the job circuit 10 years ago. Now what is interesting is that the content of the paper was fine: not even that audience came up with a knock-down objection. I received some pretty fierce questions from John McDowell, Tim Williamson and Mike Martin, but had little trouble answering them. What went wrong was the performance. The paper was unsatisfying and unconvincing because, out of sheer nervousness, I abandoned my normal presentation style. Normally I produce a handout or OHP with the key points on and then speak without reference to my notes. That is how I give undergraduate lectures too, though usually with less detailed preparation. On black-Saturday I stood at the lectern and read the paper – which is an appropriate use of a lectern, but hardly counts as good communication skills.

Why am I telling you this? Because it has made me reflect on the importance of performing, of putting on a show, in philosophy. Despite protestations to the contrary, despite insistence that all that really matters in philosophy is the quality of the arguments, it seems that in practice we all require the public presentation of philosophy, be it oral or written, to consist in more than merely good reasoning. We want, in effect, to be entertained by the books we read and the talks we attend. Is this a reasonable demand, or does it indicate a lack of seriousness in the contemporary academy? (That is, as ever, an inclusive disjunction.)

I suppose that what is worrying me here is the way that contemporary analytic philosophy is trying to maintain a seriousness of purpose while rejecting a seriousness of style. It is not merely acceptable, but in some cases de rigueur, to litter philosophical papers with popular culture jokes. I don’t want to be a prude about this, but it does bear thinking about. One way of seeing my worry is: we try hard to be entertaining lecturers for our undergraduates, though in the back of our minds we slightly disapprove of the fact that students need to be enticed to learn in this way, that they are not able to pay attention to, and remember, what we teach without some non-academic tidbits thrown in as well. Does the current vogue for light-hearted styles in philosophy suggest the same is true of our colleagues? Of course not, but then why do we think it is appropriate?

UPDATE: Thinking about this post I realized that it could be misconstrued. My intended line of thought was:

My performance was bad. What makes a good performance? Engaging the audience. One way to do that is through talking informally. But increasingly it is becoming normal to be not merely engaging but also entertaining. How odd.


.: posted by Tom Stoneham 9:10 AM


Tuesday, May 20, 2003

Equivocation



I have just found a nice illustration of a point I make in my short paper 'On Equivocation', but unfortunately it has already gone to press. So here is the example. The other day I was being given a lift by someone who took an unexpected turn and commented: 'This is my new rat-run. It is sometimes quicker than the other route.' The second sentence is ambiguous between:

1. There are times at which route A is quicker than route B.
2. Sometimes route A takes less time than route B sometimes takes.

Clearly the first reading is the intended one. But if you look at the evidence adduced for the claim (along the lines of 'Today it took 3 minutes by route A but yesterday it took 10 minutes by route B'), it would seem only to support the second reading. By my analysis, this is an example of equivocation, because we have an ambiguous statement and the conclusion drawn from it requires it to be interpreted one way but the evidence offered requires it to be interpreted the other way.
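In quantifier notation (the regimentation is mine), writing d_A(t) for the time route A takes on occasion t:

\begin{align*}
1.\quad &\exists t\,\big(d_A(t) < d_B(t)\big) && \text{on some occasions, A beats B}\\
2.\quad &\exists t\,\exists t'\,\big(d_A(t) < d_B(t')\big) && \text{some A-time beats some B-time}
\end{align*}

The evidence cited verifies only the second, wide-scope reading, while the conclusion drawn requires the first.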

.: posted by Tom Stoneham 11:35 AM


Monday, May 19, 2003

Self-identity



Writing this paper about Greg McCulloch, I was looking for an example of a plausible cognitive situation in which someone might find a statement of self-identity (a=a) informative. I came up with someone who believed in contingent identity and also thought Leibniz’s Law applies within the scope of modal operators. If such a person believed that a=b and poss-not-(a=b), he could infer poss-not-(a=a). But if one believes poss-not-(a=a), then surely a=a is potentially informative.
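Set out as a derivation, the reasoning is just:

\begin{align*}
1.\quad & a = b && \text{believed identity}\\
2.\quad & \Diamond\neg(a = b) && \text{its believed contingency}\\
3.\quad & \Diamond\neg(a = a) && \text{from 1 and 2, by Leibniz's Law applied within the scope of } \Diamond
\end{align*}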

I managed to find a real-life believer in contingent identity (they are pretty rare in the UK) in the person of Murali Ramachandran from Sussex, who was visiting last week. But unfortunately he does not believe in contingent self-identities. But then it occurred to me that someone in the cognitive situation I have just described could still find a=a uninformative if he was a two-dimensionalist, because he might hold that poss-not-(a=a) was only true when worlds were considered as counterfactual. If worlds are considered as actual, a=a is necessary, and uninformative. So we need the cognitive situation to include denial or ignorance of two-dimensionalism.

So, three modal claims, each of which has been held by more than one serious metaphysician, combine to make self-identities potentially informative. Interesting, eh? Well, I thought so ...

.: posted by Tom Stoneham 6:46 PM


Wednesday, May 07, 2003

Judging by the web stats, a few people check this blog for updates every day, which makes me feel slightly guilty about the erratic posting. [That is an interesting example of how creating expectations in others can create obligations to fulfil those expectations. Had I but world enough and time, I would try to show you that this phenomenon grounds almost all our moral obligations towards non-human animals.] Anyway, the main reason for the sparsity of postings (which Brian Weatherson keeps commenting on) is my lack of imagination, but the secondary reason I can now appeal to is that I am currently Head of Department. For any readers who are not familiar with how universities work, that means I have to do everything I was doing before, and a whole lot more. And since York has no faculties, and thus no Deans, being HoD can be quite time-consuming.

My other excuse is that I am writing a conference paper for the end of the month. Or not writing it to be more precise. The conference is in memory of Greg McCulloch and I want to look at a particular example of his which has been appropriated for exactly the opposite ends by Ruth Millikan.

The example is this: Imagine someone looking at a very very long train, the middle section of which is obscured (perhaps by a shorter train), which Greg calls The Passengergobbler. This person, let us call her Mercedes as Greg did, does not realize that trains can have engines at both ends, and has never come across such a long train. Consequently Mercedes thinks she is looking at two trains. She says to her friend: 'That train [indicating the northern end of The Passengergobbler] is going to York, but that one [indicating the southern end] is not. So I want to board that train [north] but not that one [south].'

The point of the example is to show that there can be two utterances of the demonstrative expression 'that train' which pick out the same train and yet do not have the same meaning. How does it show that? Well, the interesting thing is that Greg's handling of the example is rather more sophisticated than the average Fregean's. He asks us to compare the desire with which this little bit of Mercedes' practical reasoning ends, with the desire of someone clearly insane, who wants to both board and not board the very same train, that is someone who might say 'I want to board that train [north] but not that train [north]'. Note that a reasonable person could have two desires, the desire to board and the desire not to, but they could not have a single desire to do something impossible. Greg says that any adequate philosophy of mind must distinguish Mercedes from someone who is clearly irrational in this way, someone who we shall call 'Ian'.

I guess that much is pretty uncontroversial, but the interest of Greg's account is how he imposes further constraints upon which ways of distinguishing Mercedes from Ian will do the job. Take, for example, the suggestion that in Mercedes' case, while she uses the same words to refer to the same train twice, the causal chain linking each demonstration to the train is different, whereas Ian demonstrates the same train twice via the same causal chain. Such a move could be read in either of two ways. On the first way, the difference in causal link is meant to constitute a difference in content, so Mercedes and Ian have different desires. On the second way, they have the same desire, but Mercedes is not irrational because of the difference in causal link between her two tokens of 'that train' and The Passengergobbler.

Now Greg only considers the first move here, and though I would not want to defend the second, mentioning it opens up some possibilities which can easily be overlooked. The problem with the first move, according to Greg, is that it cannot allow content to be a phenomenological notion. If the difference in content between two desires consisted in nothing but a difference in the causal chains linking the demonstratives to the object, then that would not be a difference for the subject. In Greg's writings it is a constraint upon any adequate account of the mind that content is a phenomenological notion.

Had Greg considered the second move, he would have raised a similar objection: how can a difference which has no phenomenological impact on the subject explain how one person is rational and the other is not?

If that is the objection, then we can see a new possibility. If we can give an account of the difference between Mercedes and Ian which is phenomenological, but which does not entail that they have different desires, then Greg's constraints upon any adequate philosophy of mind do not narrow the options down to just Fregeanism. In particular, they do not require us to find a semantic difference between utterances of demonstratives which pick out the same object.

So the paper, when I get around to writing it, will have two parts. The first will spell out exactly what Greg's 'phenomenological' constraint amounts to, and the second will argue that the non-Fregean can meet this constraint. Now I had better switch off the phone and the e-mail and get typing.

.: posted by Tom Stoneham 10:38 AM


Thursday, April 03, 2003

More on Betting



I have been sent an interesting e-mail about Betting by someone who knows a bit about poker. Sam writes:

"What you failed to include in your analysis are the aspects of bluffing and other [bizarre] circumstances. I have been in poker games when a player tossed his cards away before showing them and by default lost the hand even though he had a better hand."

This is a case in which someone is so convinced that they have lost a bet that they do not bother to check. Of course, if there is a significant cost involved in checking, this might be rational. But could one rationally (= without irrationality) enter into a bet when one was so convinced one would lose that it was not worth the expected effort of checking? The claim needed for the argument I was considering is that this could only be made rational by considerations apart from the bet itself.

I did not include these issues because I was granting Hawthorne/Weatherson that a pure bet was possible. But I really doubt that myself. A bet occurs when: (1) I have a belief about the probability of p, (2) someone offers me the following option: if p is true then he gives me X and if p is false then I give him Y, (3) for me the value of X > the value of Y. This would be a pure bet if and only if (4) refusing the bet has no costs or benefits to me of any sort, (5) accepting the bet has no costs or benefits other than Y and X respectively. So pure bets only occur if there are no costs or benefits attached to accepting or refusing bets in general or this bet in particular as such, i.e. costs or benefits which arise purely from the fact that one has accepted or rejected the bet and are independent of the outcome. That is what I doubt.
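For definiteness, here is the definition as a minimal sketch in Python; the field names are mine, as is the simplifying assumption that side costs and benefits can be put on the same scale as the stakes:

from dataclasses import dataclass

@dataclass
class Bet:
    p: float                  # (1) my credence that p is true
    x: float                  # (2) what he gives me if p is true
    y: float                  # (2) what I give him if p is false
    side_accept: float = 0.0  # value to me of accepting, as such
    side_refuse: float = 0.0  # value to me of refusing, as such

def is_bet(b: Bet) -> bool:
    # (3) the offer only counts as a bet for me if X is worth more than Y
    return b.x > b.y

def is_pure(b: Bet) -> bool:
    # (4) and (5): a pure bet carries no costs or benefits beyond Y and X
    return b.side_accept == 0.0 and b.side_refuse == 0.0

In these terms, my doubt is that is_pure is never true of any real bet.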

.: posted by Tom Stoneham 8:48 AM


Wednesday, April 02, 2003

I am not very keen on this incestuous 'blogging community' idea, but Brian Weatherson has again posted something I cannot resist commenting on. It is an argument he extracted from a talk by John Hawthorne. I gather the talk is aimed at contextualist solutions to scepticism, but I just want to talk about the premises. One crucial premise is:

Betting: For any contingent proposition p, there are odds at which it is rational to accept a bet on p.

Here are some qualifications:

1. This only applies to decidable propositions, in fact only to ones with an agreed decision procedure. It would not be rational for me to accept a bet on the time of death of the last human being ever, or on whether there is an evil demon deceiving me.
2. This is rational permission. I doubt it can ever be rationally required that one accept a (pure) bet.
3. It is not only the odds which matter, but also the stake. I know it is Wednesday. Bill Gates (he can pay) offers to bet me it is Wednesday. I know I will lose. He offers the odds of a billion to one. Should I accept? That depends upon the stake: if he will accept a bet of one penny at those odds, I should make the bet, but if he requires a stake of £50, I should not. At that stake I should not accept even if he raises the odds to 5000 billion to one. In other words, the rationality of accepting bets is a function of both the odds and the stake (there is a small sketch of this after the list).
4. Bets always happen in a context of other concerns and interactions, so the odds are never the only consideration which is relevant to the rationality of accepting a bet. Consider my bet with Bill Gates, supposing a stake of £1. If I am curious to find out how Bill Gates will choose to prove to me what day it is (will he use the internet?), I may have a reason to accept this which is independent of my knowledge that I will lose. Whereas if I have an abhorrence of gambling in all its forms, I will have a reason to refuse. Similarly, knowing one will lose is a reason to accept a bet when one wants to give covert charity.
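Here is the sketch promised in point 3. The payoff convention is my assumption (winning pays odds times stake, losing forfeits the stake), and the figures are those of the bet with Bill:

def expected_winnings(p_win: float, odds: float, stake: float) -> float:
    # Winning pays odds * stake; losing forfeits the stake.
    return p_win * odds * stake - (1 - p_win) * stake

# When I know I will lose (p_win = 0), the odds drop out entirely:
print(expected_winnings(0.0, 1e9, 0.01))   # -0.01: a penny, at a billion to one
print(expected_winnings(0.0, 5e12, 50.0))  # -50.0: fifty pounds, at any odds

The expected loss of a no-hope bet is just the stake, whatever the odds, which is why the stake and not the odds settles the matter.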

What might be true is:

Betting': For any practically decidable proposition p, there is an odds-stake combination at which it is rationally permitted to accept a bet on p purely on the basis of one's expected winnings.

So, the stake is one penny, the odds a billion to one, I know I am going to lose, but it is not irrational for me to accept the bet. But now, how about this principle:

Practical Reasoning: If you know that doing F makes you worse off than not doing F, then it is irrational to do F.

This seems to cause a problem. The bet with Bill is both rationally permitted and prohibited.
Or at least it is under the assumption that the loss of a penny in a pure bet makes me worse off. Does it? Well, it certainly makes me financially worse off, but if that were the sense of 'worse off' in Practical Reasoning, it would obviously be false. No, for Practical Reasoning to have a chance of being true, 'worse off' must mean 'perceived by oneself to be worse off'. [As a matter of fact, I doubt it is true even then, at least on any understanding of 'rational' which does not make it uninterestingly analytic. But that is another story, the story of my disaffection with micro-economics.] But now here are two problems:

1. I have only accepted the bet with Bill because I do not care about losing one penny. It is not clear that I would have accepted a no-hope bet, at whatever odds, if the stake was something whose loss would make me perceive myself to be worse off.

2. Suppose I do believe the adage 'Look after the pennies and the pounds will look after themselves', so I do perceive the loss of one penny as making me worse off, and yet I accepted the pure bet without irrationality. Is it also true that it is irrational for me to act in such a way that I will lose one penny and gain nothing? This seems a strong charge to pin on me over one penny. My intuition (which may be a bit odd) has it that, because the objective value of one penny is so small, it would be not-rational rather than irrational for me to do something which lost me a penny and gained me nothing at all. But now we need to know which is the trouser word in the rational/irrational distinction. Is something rationally permitted merely something not prohibited? In which case there is no conflict between saying that, according to Betting, I am rationally permitted to accept the bet, and saying that, according to Practical Reasoning, I am not rational in so doing.

.: posted by Tom Stoneham 5:52 PM


Nihilism



I have been thinking a bit about the subtraction argument for modal nihilism, the view that there is a possible world which contains no concrete objects. Normally the argument operates at the level of metaphor (and thus proves nothing in my book). It goes: there could be a possible world with a finite number, n, of concrete objects. Since each of these is a contingent existent, we could take one away, creating a world with n-1 objects. If we keep repeating this, we end up with a world with no objects.

The trouble with this metaphorical subtraction is that there is no operation of removing concrete objects which can be performed on possible worlds. If the idea is meant to be that one takes objects away from the start world at the rate of one per second or something, then the conclusion would be that this world could become a world with no concrete objects, and that is not what the nihilists want, for whether a world can become empty of concrete objects will depend upon other issues.

So the claim must be that there is a relation between worlds such that one world is a subtraction world of another. If you have a set of worlds, one of which has n concrete objects and all the others of which are subtraction* (i.e. the ancestral of the subtraction relation) worlds of that one, then that set must contain a world with no concrete objects. But we need another premise: for every world in the set, there is a subtraction* world of it. If we formalize that premise thus (using Colin Allen's internet-friendly quantifier notation, with variables 'x' and 'y' ranging over concreta and 'w' and 'v' over worlds):

(B*) @x@w(E!xw -> $v(~E!xv & @y(E!yv -> E!yw)))

the argument is valid. Premise (B*) asserts that for every world which contains a concrete object, there is a subtraction* world. The problem is whether, setting metaphors aside, this can be given a non-question-begging defence. It clearly does not follow from the contingency of concreta. If we deny it, then we are saying that in some sets of worlds which have a finite start world, all other members being subtraction* worlds of it, there is a contingent concrete object such that at every world at which it does not exist, some other concrete object exists. That amounts to denying nihilism, so the denial of (B*) just is the denial of nihilism, making the subtraction argument question-begging. I am beginning to suspect that my eminent colleague Tom Baldwin has been taken in by a metaphor.
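To spell out the validity claim (my reconstruction of the step the metaphor glosses over): suppose the start world contains exactly n concreta, for some finite n. If a world w contains m > 0 concreta, pick one of them, x: by (B*) there is a world v at which x does not exist and all of whose concreta are concreta of w, so v contains at most m-1 concreta and is a subtraction world of w. Iterating at most n times from the start world yields a world containing no concreta, every member of the chain being a subtraction* world of the start world.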


If (B*) is to be defended, it must be a modal claim about concrete objects. The metaphor of subtraction invites us to read it as a claim about coming into and going out of existence: if a concrete object ceases to exist, that does not require any other concrete objects to come into or continue in existence. As it stands, that is at best a claim about what relations exist between objects existing at different times within a world, not a claim about which worlds exist. Now, someone who denied it because they denied that a world which had some concreta could become an empty world is not thereby committed either way on nihilism. But someone who thinks that a world with some concreta could become empty is obviously committed to nihilism. So there is no hope of this route providing a non-question-begging defence of (B*).

.: posted by Tom Stoneham 12:43 PM


Thursday, March 27, 2003

Imaginative Resistance



There is a neat little debate going on in the obscure corner of the blogosphere where only philosophers hang out, about imaginative resistance, uncontroversial cases of which occur when the author makes a moral judgement within the fiction and the reader is not able even to accept that it is fictionally true. In other words, it appears that moral beliefs place a constraint upon what the author can get the reader to accept as true in the fictional world. Now I get lost when the phenomenon is extended away from moral judgements, but it looks as if these are the claims at issue in the new debate:

1] If being A supervenes upon being B, then no author can make it true in their fiction that two things are B-identical but not A-identical. (See Brian's brilliant example of Don Quixote's interior design.)

2] If the reader believes that being A supervenes upon being B, then she will resist the invitation by an author to imagine that two things are B-identical but not A-identical.
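Here 'being A supervenes upon being B' is the usual modal claim, which in the notation of the Nihilism post above (with '[]' added for necessity) reads:

[] @x@y(x and y are B-identical -> x and y are A-identical)

i.e. necessarily, any two things alike in all B-respects are alike in all A-respects.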

It appears that 2] is false: I can enter imaginatively into a fiction in which p is true even though I think p is necessarily false, just so long as p is not obviously necessarily false. Philosophers convinced of the impossibility of time-travel can still read The Time-Traveller without feeling resistance, and materialists can fear for the fictional victims of hauntings. (Might not emotion be a better gauge of imaginative resistance than belief?)

But I think things are more complex than this. In one of my many unpublished/unpublishable papers I make the following suggestion:

'In entering imaginatively into a fiction we enter into someone else's imaginings. ... So as long as we can understand that the author thought p was possible, we can share her imaginings without being able to imagine those things ourselves.'

In fact, I think that if we are clever enough in distinguishing between implied authors and historical authors, we can even write fictions which we believe to be impossible. If I am right about this, 2] may be true but unable to explain imaginative resistance, and 1] would be simply false. That means we need another explanation of what is going wrong in Brian's Quixotic Victory, and I am working on that (it is to do with the fact that an author can no more stipulate how we should interpret her words than could Humpty-Dumpty). I ought to add that I have no account of imaginative resistance myself, just the hunches mentioned above that it is properly restricted to the moral case and is best measured by emotional responses.

On a more general note, I do not see much future in dealing with the problem of imaginative resistance by appealing to supervenience: since supervenience is a modal notion, this strategy is a way of solving a small problem by assimilating it to a huge one (the relation between imaginability and possibility).

.: posted by Tom Stoneham 1:26 PM


Tuesday, March 25, 2003

Publishing in Philosophy



"(I hate writing the bit of a philosophy paper that is pitched at the referee and the referee only, and only aims to convince him (ever her?) that the question the paper addresses is worth an article. Hate it. But if I don’t do it, who knows if anything I write will ever be published.)"

This is from Brian Weatherson's philosophy Blog. I hate it too, so I don't do it. Obviously the problem here is the referees' conception of what deserves to be published, viz. something which advances a live debate which they already know about. True, some questions in philosophy can be addressed by teamwork, but this attitude seems to me to be an attempt to make philosophy safe: all you need to do to succeed is be clever and keep up with the literature.

It may be time to rebel, Brian.

.: posted by Tom Stoneham 9:08 AM


Monday, March 24, 2003

Substance and the Veil of Perception



When I was writing the Berkeley book, one of the readers for OUP suggested that I was, in a particular sentence, confusing the doctrine of substance with the veil of perception. I wasn’t, but I have been thinking about this recently and have come to the conclusion that the two are connected in Locke.

First of all we have the real-nominal essence distinction. By using the term ‘essence’ Locke is unavoidably connecting his discussion with 17th century, post-Cartesian discussions of substance: a substance is defined by its essential quality.

Secondly, we have the substratum, the ‘I know not what’ which underlies and supports the qualities of objects. For the Cartesians, this substratum is the substance defined by the principal attribute, or essence. Sometimes Locke talks as if the real essence is the corpuscular nature of the object, but at others he is sceptical about our ability to discover real essence. So it is not too implausible to interpret Locke as seeing a connection between real essence and substratum.

Thirdly, we have the matter which causes our ideas. While this is clearly a substance, most commentators think that to attack this notion of matter is to attack the veil of perception doctrine, not to attack the notion of substance. But I am no longer sure about this. After all, the real essence is the cause of the nominal essence, and the nominal essence can be identified with the phenomenal properties of the object. So, reading ‘appear’ very broadly, real essence (aka substance) is causing the appearances. If someone like Berkeley has an objection to objects having hidden real essences causing their nominal essences, and to qualities being supported by a mysterious substratum, then this objection may equally apply to unperceived matter causing ideas, which are after all the qualities which constitute the perceived world.

.: posted by Tom Stoneham 10:22 AM


Friday, March 21, 2003

Hell



David Efird has just reminded me of an interesting 'water cooler' (I wish!) argument I came up with last term. It being an argument in the philosophy of religion, I have not cared much for it, but others may:

1. Hell is the absence of God. (Definition)
2. If Hell is possible, then there is a possible world which consists of nothing but Hell. (Premise)
3. If Hell is possible, then there is a possible world in which God is absent (= does not exist). (1,2)
4. If God is possible, then God is a necessary existent. (Theology / Ontological Argument)
5. If Hell is possible, then God does not exist. (3,4)

What is interesting about this argument is (a) the modal metaphysics required to make out (2), and (b) the fact that it gets to the conclusion that Hell is incompatible with God's existence without using a premise about God being benign. Or at least, without appearing to invoke anything about God being benign, since some of that may be built into the definition (1).
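For anyone who wants the calculations done explicitly, here is a regimentation of my own, assuming a normal modal logic at least as strong as T, and writing '[]' for necessity, '<>' for possibility, G for 'God exists' and H for 'Hell exists':

a. <>H -> <>~G   (lines 1-3: a world of nothing but Hell is a world without God)
b. <>G -> []G    (line 4)
c. <>~G -> ~[]G  ('<>' and '[]' are duals)
d. ~[]G -> ~<>G  (contraposing b)
e. ~<>G -> ~G    (contraposing the T axiom, G -> <>G)
f. <>H -> ~G     (chaining a, c, d and e: line 5)

The modal metaphysics all sits in getting from (1) and (2) to step a.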

.: posted by Tom Stoneham 1:31 PM


Friday, February 21, 2003

What is research in philosophy?



How can one distinguish between the activities of making up one's mind on the deep questions of life, doing philosophy, and engaging in philosophical research? We can extract a fairly standard and pat answer from the way academic philosophy operates in this country. Making up your mind is something personal and not really an academic activity at all, though it may be informed by academic knowledge and skills where those are available. Doing philosophy is very similar to making up your mind, except that it is more structured and better informed by what other people have thought on the question, and it places greater value in how the activity is conducted and slightly less in whether it comes to a stable conclusion. Philosophical research, however, is what researchers do, that is, it is what people are paid by Universities and research boards to do, and it is a process with a public, measurable product: articles and books.

I have recently come to the conclusion that not only are these distinctions wrong, but by acquiescing in a system which presupposes them, we are destroying the very heart of philosophy.

Let me start with an example. Suppose I read a book on some area of philosophy in which I am interested. I find the book interesting and troubling, so I decide to devote my ‘research time’ to thinking very very hard about it. I go through it sentence by sentence, I follow up references, I talk to lots of people about the ideas in it. At the end of say two or three months, I come to the conclusion that the author is right, and that the arguments she gives are the best arguments for the position. I have tried this example out on a few colleagues, and all agree with me that this would be a good use of my time. But what could I publish as a result of it? The simple statement: I, Tom Stoneham, agree with everything written in such-and-such book. And that is not worth sharing with the world.

Every activity must have an internal goal which defines success, and it seems that the internal goal of anything worth calling 'philosophy' is a state of mind of the person who is doing it. There can be other goals, of course, such as publication or fame, but these can only be achieved by a philosophical activity if the internal goal of that activity is to achieve a state of mind. This is not to deny that philosophy is truth-directed, for the state of mind might well be knowledge; it is just that the value of this knowledge is not what is known, but the state of knowing. Compare this with, say, chemistry. Chemists want to end a research project by knowing something they did not know before, but the value of achieving that knowledge is precisely the value of what they know. If what they thereby know is something which others can take on board and make use of, then the research was valuable. But when a philosopher ends up knowing something philosophical, the value of that is not what he knows, for his knowledge has no use for anyone else except in so far as they too go through the process he went through in order to achieve it. Rather, the value is the state of knowing, of having made up his mind correctly.

What I am suggesting is that philosophical research is just a highly sophisticated version of making up one’s mind on the deep questions in life. That is a personal activity, the internal goal of which is a state of mind of the person who engages in it. Sometimes it has by-products which are of value, such as books and papers which are essential reading for anyone else trying to do the same thing, but the success of the activity is independent of these by-products.

If I am right about this, the consequences are very far reaching. One consequence is that it undermines any reason that might be given for paying academics to do philosophical research.

I hope no one ever quotes that last sentence out of context, because I need to make VERY clear exactly what I mean. If A pays B to do something, that is in nearly all cases because B’s activity produces something of value to A. But if I am right about the nature of philosophical research there is little or no reason to expect that, if you pay me to do it, I will produce something of value to you. In so far as the activity has a product, it is a product which is only of value to me. It may have by-products which are of value to others, but the success of the activity cannot be measured by those by-products, and if A is paying B to produce those by-products, he is actually paying B to do something other than philosophy.

Now the caveats. It DOES NOT FOLLOW that academic philosophers should not be given time to do research by their employers. Let me repeat that: I think that philosophers should not be paid to do research, but that their employers should give them time to do research. For example, my employer does not pay me to be a School Governor, but it does give me time to do that. The point of this fine distinction is control: whose time is it? Who says what you should be doing in that time? If Universities and research councils are not paying philosophers to do research, then they cannot say what philosophers should spend their research time doing, nor can they assess whether that time has been used well or not. Their attitude, if they are merely giving time rather than paying for research, should be ‘Here is your research day – off you go and do whatever it is you do with your research time’.

Nor does it follow that philosophers should not accept money for the by-products of their research, for the books and papers they write as part of the activity of making up their minds. In fact, we currently have the slightly absurd situation that so much philosophy is published, but so few read it, that the publishers cannot afford to pay the authors enough to cover their time in writing. So the authors ask to be paid by the taxpayers to write books and articles which next to no one reads. How did this arise? Well, the philosophers insisted that they must be allowed to do research, but the managers insisted that if they were paying for something there must be a public, measurable product, so the publishers grinned as they realized that they had a huge resource of authors whom they did not need to pay, and agreed to publish vast quantities of not very interesting philosophy. If the managers could be persuaded that they were not paying philosophers to do research (merely allowing them to), then they would not insist on published product, and publishers would make more money by publishing just the very good philosophy, that is, just the pieces which are essential reading.

But …


This is not an argument for teaching-only philosophy departments. But if we are to avoid the unpleasant situation we have arrived at by pretending that philosophical research, like scientific research, has a public, shareable product, we must be quite clear what the reasons are for Universities to give philosophers time to do research, that is to sit around and think about philosophy.

One reason, which is often repeated but often betrayed in practice, is that research informs teaching. To put it bluntly, the idea is that people who are engaged in thinking about philosophical problems with the sophistication expected of researchers will be better teachers. Not all of them will be organized, efficient and charismatic, and it is perfectly reasonable for an employer to seek these virtues as well. The point is that they are not sufficient. At primary school we think that if you are a good teacher, you can teach anything. At secondary, we restrict the scope of 'anything', but we do not have any problem with one teacher teaching 'science' or 'humanities'. At University, however, the level of knowledge and fluency with the material required is so much higher that it is not possible to teach well unless one is also operating on, or near to, the level of those philosophers whose work one teaches. A University lecturer in philosophy will not be content with telling her students that X said Y is wrong because of R. Learning that does not improve their minds. Rather we will say 'Y is wrong because of R', and to do that with any credibility, the lecturer needs to have the knowledge and skills to be able to stand up to Y, put the objection, and be treated as an intellectual peer. Only a person whose own philosophical opinions are as sophisticated as those of the writers they encourage their students to discuss and criticize can be a good teacher of philosophy at University level.

The other reason is more straightforward, and may be easier to use on the managers. I spent 9 years as a student to get the qualifications I need for my current job. That is almost twice as long as your GP, and yet I get paid less than half as much as an average GP (according to a BBC report the average salary of GPs is £70k). And don't get me started on comparisons with lawyers! So why do I do the job? Because it is the only job which pays me a reasonable amount and still gives me time to spend thinking about philosophy. If I wanted to earn as much in a different job, I would not get the time to do the philosophy. Now, I think it is true of most academic philosophers that the reason they do the job is exactly that: they want to spend time thinking hard about philosophy, but they do not want their kids to go without. Having research time is like having a company car – it is a perk, something to attract and retain the best people to the job. If any University did not give its philosophers research time, then they would simply leave and take jobs elsewhere, either in other more generous institutions or doing something else altogether.

These two arguments cohere nicely. The best teachers are those who are ‘research active’ and you will not attract the best to your University unless you give them the time to do the research. And if those are the reasons for giving philosophers the time to sit around thinking, then University managers should not have any interest in whether that time is used ‘productively’, and should certainly not care one jot about whether the philosophers they employ publish anything. Of course, they have a right to ensure that those people they have employed on these generous terms really are as good as they claimed to be, but there are plenty of ways of checking that other than by counting publications.

.: posted by Tom Stoneham 10:39 AM


Tuesday, February 04, 2003

Time and Consciousness


A guy I knew at graduate school in London, Dan Cotterill, has produced this interesting philosophical website: www.krasny.co.uk. Take a look at the video called 'Transience'.



A possible solution to the problem would go like this. Transience is an illusion created by the fact that the mind/brain is an information processor. Information can only enter such a system via causal channels and causation follows the direction of time. Consequently, for any given moment in time there is an asymmetry about the information that the mind/brain has available to process. It has no information about the exact present, nor about the future, but it does have information about the past. The information about the past includes information about the past states of the system. So at time t1, the system gets the impression that the past has already happened but the future is still to come. At t2, it still has that impression, but it also knows that at t1 it thought it was in the present, but that t1 is now in the past. So it gets the further impression that the present flows, that t1 was now, but t2 is now now.
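If it helps, here is a toy model of that informational asymmetry. It is a sketch of my own, not Dan's, and every detail (the class, the three ticks) is arbitrary:

# A processor whose only access to time is through records of its own past states.
class Processor:
    def __init__(self):
        self.history = []          # all it ever has: records of moments already past

    def tick(self, t):
        # At time t the system has information about times < t, none about t or later.
        report = "t=%d: past=%s, present and future: no information" % (t, self.history)
        self.history.append(t)     # the record of t only becomes available after t
        return report

p = Processor()
for t in [1, 2, 3]:
    print(p.tick(t))

# At t=2 the record shows that at t=1 there was no record of t=1: from the inside,
# t=1 'was now' and t=2 'is now now', though nothing in the model actually flows.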

Dan would not be happy with this as a response to the problem of transience, and I can see why, but trying to work out exactly what is wrong with it can only help us deepen our understanding of the problem.

And there is another aspect of the neurophysiological approach to consciousness that needs addressing: the binding problem. When the mind/brain processes information, that processing is distributed spatially over the brain, and as a consequence some parts of the brain complete their processing task a few milliseconds before the other parts. Yet, when reporting on conscious experience we speak as if there is a single moment of consciousness where it all comes together. This is also an illusion.

.: posted by Tom Stoneham 10:19 AM


Monday, December 09, 2002

Truth and Opinion

"Institutions of higher education are conducted for the common good and not to further the interest of either the individual teacher or the institution as a whole. The common good depends upon the free search for truth and its free exposition."

This quotation from the Statement of Principles on Academic Freedom and Tenure, American Association of University Professors (AAUP), might seem pretty innocuous to most of us, but serious problems arise when you try to combine this salutary principle with the belief that there is no truth, only opinions. Consider this response to the website www.noindoctrination.org (which aims to reveal cases of outrageous political bias at U.S. universities):

"If students are informed (and usually are by leftists) that the PERSPECTIVE on information they are receiving is by definition a product of the teacher's mind, students are intelligent enough to weigh various views. However, when there is a pretense of ‘objectivity,’ nearly always a lockstep, status quo OPINION is being propagated."

I suspect that the author (apparently a Professor at California State, but I have no way of confirming this) is equivocating on objectivity. I might present an opinion as being objective, in which case I am making an epistemic claim, to the effect that it does not rest upon anything peculiar to me, that I expect others to agree with me. Thus my preference for chocolate ice-cream over strawberry is not objective in this sense, since the reasons I give are not ones I could expect others to share. Alternatively, I might claim that my opinion is about something objective, i.e. that my opinion is only correct if it says how things are independently of my thinking them so. Thus, it is an objective matter whether Oswald killed Kennedy, so any opinion I might have about it is either right or wrong independently of what I think. But I might form that opinion objectively, by examining the evidence, or subjectively, by appealing to my hunches or penchant for conspiracy theories.

[FOOTNOTE: It is, I suppose, worth pointing out that every opinion I have is a 'product of [my] mind', whether or not it is objective in either the first or the second sense. This reveals that there is a third sense of the subjective-objective contrast, which the author of this passage appears to have added to the confusion: something is subjective if it is a feature of a thinking subject. Thus my pain is subjective in this sense, even if your knowledge of it is objective in the first sense, and there is an objective fact of the matter that I am in pain. The level of unclear thinking that you need to get this notion of objectivity confused with the others would automatically exclude anyone from a job teaching philosophy at a U.K. University.]

Now it seems to me that it is very hard to be certain one has achieved objectivity in the first sense, for one is always making judgements about the importance to attach to certain pieces of evidence, and different people will be inclined to make those judgements differently. However, there are times and cases where a differing judgement would be simply perverse. But equally, if we are not at least striving for objectivity in the epistemic sense, then what we are saying is no more interesting than listing our food preferences. What is dangerous, and must rightly be guarded against in academic circles, is the unjustified claim to have achieved objectivity in this sense. Presenting your opinion as actually being objective, rather than just striving to be objective, will often mislead your students. When I tell a student that Locke's Essay was first published in 1690, I present that as an objective opinion about an objective fact. When I tell a student that Locke was not a mind-body dualist, I present that as a striving-to-be-objective opinion about an objective matter. When I tell a student that Locke's prose is ungainly, I present that as a subjective opinion about a subjective matter.

Where, if anywhere, there is objectivity in the second, metaphysical, sense is a substantial philosophical question which, despite the widely advertised claims of philosophically-illiterate 'theorists' in literature and sociology departments, is very far from being resolved one way or the other.

There are three common mistakes which seem to motivate the alleged Professor from California. One is the thought that one can infer from the difficulty of achieving objectivity in the first sense to the lack of objectivity in the second sense. The second is that one can infer from subjectivity in the second, metaphysical sense, to impossibility of objectivity in the first, epistemic, sense. The third is that someone who claims that there is an objective matter of fact about the question being discussed is forcing their opinions on others.

The first mistake is really a simplification of a correct point: one might be able to infer from the impossibility of achieving agreement, plus the thought that this is a topic about which we are guaranteed to be able to know any facts that there are, to the conclusion that there are no objective facts. But neither premise has been proven. (Also, there are some very difficult issues about how the surface form of the claims in question combines with the basic properties of the truth-predicate to entail the Law of Excluded Middle. But that is why questions of objectivity and relativity are so hard.)

The second mistake is also an exaggeration of a plausible point. If some issue is (metaphysically) subjective, then it is unlikely that all opinions can be brought to agreement. But the epistemic notion of subjectivity is a matter of degree. There is no reason at all to expect agreement about which flavour of ice-cream is better, but one can expect much greater agreement about the unpleasantness of eating raw worms, and universal agreement that being tortured is not a good thing. We recognize that some subjective matters turn on features which vary between individuals, others on features which vary between cultures and times, and yet others may turn on features which are common to all humans.

The third mistake is both amusing and depressing. It is an unthought-about assumption that only relativists properly respect the freedom of others to make up their own minds. But if you are a relativist, the only reason you can offer anyone for agreeing with you is entirely personal: they want to be like you and your friends. Opinions are like fashion choices, so while you are free to think what you like, it takes an uncommon strength of mind not simply to follow the herd or to copy a dominant figure. And those who do not follow the herd are liable to ridicule, for their decision, according to the relativist, does not demonstrate the virtue of independent thought but the vice of bad taste. Even worse, it is the nature of markets that most shops only stock the fashionable goods.

But if you are an objectivist, fashion and personality are totally irrelevant: you could be the coolest person on the campus and still be mistaken. All that matters is the evidence you offer and the arguments you give. So in fact, if a teacher starts a debate with the assumption of objectivity (in the second, metaphysical, sense), there is a much greater risk of being disagreed with. Of course, a bad teacher can use their greater knowledge and experience to stifle debate, but even then the risk remains, for if the matter in hand is an objective one, it is always possible that they have made a mistake and that a student will point it out. To put it crudely, asserting that there is an objectively right answer to a question is very different from asserting that one has infallible knowledge.


.: posted by Tom Stoneham 11:55 AM


Sunday, November 24, 2002

Meaning and Objectivity

A distinctive belief of ‘analytic’ philosophers is that if our utterances and the thoughts they express do not have determinate truth-conditions, then we are in an even worse position than the Cartesian sceptic: not only can we know nothing about the world, but also we are unable even to think that there is a world.

This is why Kripke's little book on Wittgenstein has been so influential. Wittgenstein as Kripke reads him, whom we can call Kripkenstein to distinguish him from the real Wittgenstein, poses a problem for this view. First we need to make explicit some commitments. What we mean by our words must be fixed by some actual facts about us, that is, if two people mean different things, then there must be some difference between them we can point to, some difference in what they do or say (or even some difference in how their brains are functioning). But meaning something determinate entails a potentially infinite number of facts, for if I mean something determinate by, say, 'round', then it must be that for every object that ever has or will exist anywhere in the universe, either my word 'round' applies to it or it does not. This is how Kripke interprets Wittgenstein's claim that meaning is like 'rails stretching to infinity'.

Kripkenstein argues that these two commitments are incompatible, because all the actual facts about us do not distinguish between our meaning round by ‘round’ and our meaning something slightly different which would apply differently to objects so distant in space that we will never encounter them. The basic point is that the actual facts about any one of us are always finite, in that they are always subject to more than one interpretation about how the use of our words should be extended.
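A toy illustration of the underdetermination (the cut-off and the predicates are all my own inventions):

# No object further than HORIZON metres away will ever be encountered by us.
HORIZON = 10**26

def round_1(eccentricity, distance):
    # Candidate meaning 1: 'round' applies to anything sufficiently circular.
    return eccentricity < 0.01

def round_2(eccentricity, distance):
    # Candidate meaning 2: agrees with round_1 this side of the horizon,
    # but diverges for objects we shall never encounter.
    if distance < HORIZON:
        return eccentricity < 0.01
    return not (eccentricity < 0.01)

# Every application of 'round' we ever actually make has distance < HORIZON, so
# all the actual facts about our usage fit both candidates, yet the two extend
# that usage to distant objects in incompatible ways.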

There seem to be only three possible responses to Kripkenstein:

1. Deny that meaning is determinate, or at least that it is determinate as it extends beyond our actual usage. I attempted to defend this in the academic adolescence of my Ph.D. thesis.
2. Deny that the actual facts about us can be specified independently of what we mean. This is McDowell’s naturalized Platonism.
3. Deny that the actual facts about us, even when specified independently of what we mean, fail to determine a potentially infinite number of facts about how our words apply.

The basic idea behind the third option is that natural laws entail an infinite number of counterfactual conditionals, but can truly describe finite systems. Thus a law like Newton’s F=ma, if true, applies to all forces applied to all masses everywhere at any time. But it also applies to the balls on the billiard table. Given that we are natural objects, all the actual facts about us include the natural laws which apply to us, so if our currently meaning what we do is a law-like property, then Kripkenstein’s problem is avoided.

Normativity

The problem with option 3 is alleged to be the normativity of meaning. Notice how, when describing the argument, I talked about how the use of our words 'should be extended'. Laws tell us how natural systems will behave in various non-actual circumstances, but they do not tell us how they should behave. But part of meaning what we do by, say, 'round' involves the possibility that we might come across objects to which our word does apply, but to which we do not apply it. That is, we make a mistake in the application of our word. The charge is that adherents of option 3 cannot make this distinction, that they have to say about our language use that whatever does happen should happen.

Before we can assess this charge, we need to distinguish different things that could be meant by the normativity of meaning:

A. There is a distinction between correct and incorrect applications of a word.
B. Unless there are over-riding considerations, we ought to use a word in accordance with its correct use.
C. In virtue of understanding a word, we know how we ought to use it.

It seems that A follows directly from the claim that our utterances and the thoughts they express have determinate truth-conditions. And it would seem that anyone who wants to defend B ought to defend C. Much ingenious work has been done in an attempt to show how a law-based account could secure A. So anyone sceptical of normativity, or at least of the claim that this imposes constraints upon meaning which require us to take option 1 or 2 above, must either attack the inference from A to B directly, or attack the consequence C.

Personally I don’t see how one could get from A to B except via some sort of desire or goal to aim at the truth. But then the problem remains that many of us do sometimes have that goal, so we would still need C to be true (in at least some cases).

.: posted by Tom Stoneham 11:51 AM


Sunday, November 10, 2002

War

I have just watched the laying of wreaths at the Cenotaph. I always find this poignant, but my education trashed my ability to handle emotions by the time I was 15. Anyway, that aside, I find it a good moment to reflect on war.

It is worth getting to the philosophical root of the disagreement between hawks and doves in international relations. Doves will go to almost any length to avoid war, whereas hawks believe that there are certain disagreements which can only be resolved on the battlefield. Put like that, it is very easy to portray the hawks as having a 'cowboy' or 'playground' mentality, but they consistently and fairly respond that diplomacy and negotiation presuppose a trust which has to be earned and can easily be lost. When dealing with a country where that framework of mutual trust and esteem is missing, diplomacy will fail. The doves now look to have a childish tendency to trust others even though there is good reason not to.

I suspect the disagreement goes a little deeper, and like so many, is rooted in conceptions of human nature. The causes of war are almost invariably psychological: pride, greed, ideological belief ... The doves seem to think that we should be addressing these causes, and the way to address moral or cognitive failings is through discussion and persuasion. You do not cure someone of false belief or wrong judgement by fighting them. Hawks agree, but are rather more pessimistic about whether you can cure them in any other way either, or at least, whether you can cure all of them. So both sides agree that, say, Saddam Hussein is doing much that needs to be stopped, and that the reason he is doing this is some range of moral and psychological failings (caveat below). The doves think we should address the cause of his 'bad behaviour' through the means of diplomacy and sanctions. The hawks doubt that we can ever change him, so think we must prevent him having the effects he has, which means using force. I am a dove by nature and belief, but the hawks seem to have a point here: the enlightenment ideal of human nature as indefinitely susceptible to change through reason and persuasion looks pretty implausible.

Caveat: I am not discussing the depressingly large group of doves who have fallen into a mindless relativism, who think that we must not make moral judgements in international affairs. One of the many confusions of this position is between the epistemic and the metaphysical: I can be more certain that there is a right answer than that my answer is right, in moral matters just as much as in empirical matters. Consequently, I should be cautious in making moral judgements, I should make every effort to question my own assumptions and prejudices, but once I have gone through a proper process of decision, it is quite reasonable for me to think that I have made the right judgement, that opposing opinions are mistaken. Of course, if I really do believe that there is a moral fact of the matter, I should also expect that I will sometimes get it wrong and need to change my mind. The irony of relativism is that it breeds moral inflexibility and intolerance, since if no one can get it right, no one can get it wrong, so there can never be a reason to change your mind.

.: posted by Tom Stoneham 11:43 AM


Thursday, November 07, 2002

The Brains Trust

Professor CEM Joad of Birkbeck College was not really a professor. He was, if I remember right, a Reader in Philosophy. He will not be remembered for the many books he wrote (trust me on that one, unless you want to waste a day in the BL too), but for his appearances on The Brains Trust, a BBC panel programme of the 1940s and 50s in which difficult questions were posed to a panel of allegedly brainy people. He quickly became infamous for beginning every answer with 'It depends what you mean by ...'.

Why do philosophers do that, and why do non-philosophers hate it so much?

If you ask an engineer whether it is possible to build a suspension bridge over a particular river at a particular point, he will investigate the local rocks and do some calculations. His answer will be 'Yes' or 'No' or perhaps 'Yes, but not one which will carry cars'. This answer will be informed by his mathematical calculations but it will not involve those calculations. We can understand, appreciate and exploit his answer without having to understand the maths.

When you ask a philosopher a question, for example, 'Has science shown there to be no colours?', the tools he will use for answering it are conceptual. He will aim to clarify various concepts so that we can see their relations better, he will draw out consequences of claims, and maybe even criticize aspects of a conceptual framework. The philosopher's answer will not only be informed by these conceptual calculations, but will involve them. Someone who has not gone through the process will not fully understand or appreciate the answer.

Joad's method was right, but his presentation was terrible. For a start the phrase 'It depends upon what you mean by ...' suggests that either he is going to appeal to stipulative definitions, or that there is some sort of freedom of choice of meaning here. Presumably what he was often aiming to show was that the question or problem presupposed an entailment from F to G, but that neither concept necessitated that entailment. That is, philosophically speaking, doing the calculations. The real philosophy comes in when you argue that there is something misguided about modifying the concept F to F*, so that it does entail G. But when we go to philosophers for such arguments, they allude to their intuitions or speak of 'changing the subject'.

I suspect that conceptual analysis is to philosophy what maths is to engineering: a tool to stop you making errors. But judgement is still needed in both disciplines. Our engineer must judge whether the rocks around the river are strong enough to hold the anchors. Our philosopher must judge which conceptual changes or modifications are acceptable and which not. Engineering came of age when it stopped using trial and error at this crucial stage of the proceedings. Has philosophy come of age yet?

.: posted by Tom Stoneham 9:07 AM


Wednesday, November 06, 2002

Normative Necessity

Kit Fine points out that there is a difference between claiming that all wars are as a matter of fact wrong, and that all war must, purely in virtue of being a war, be wrong. This distinction requires us to recognize a notion of normative necessity.
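In symbols (reusing the quantifier notation from the posts above, and reading '[]' as normative rather than metaphysical necessity):

@x(x is a war -> x is wrong)      (wrong as a matter of fact)
[] @x(x is a war -> x is wrong)   (wrong purely in virtue of being a war)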

The example is instructive too, because it shows a confusion in the inference from 'Past wars were wrong' to 'This war is wrong'. Is that being offered as an induction? In which case it remains possible that this war not be wrong. Or are we being offered empirical (!) evidence for a modal claim?

.: posted by Tom Stoneham 3:41 PM


Wednesday, October 16, 2002

Since writing the last post I have begun to wonder whether I have fallen into the trap of asking bad questions which create confusion out of clarity. Think of the parallel question 'Why be moral?'. Think about this long enough and you will realize that there is a big difference between giving an answer which serves to justify my existing disposition to good behaviour and one which would persuade an amoralist to change her mind.

If someone seriously doubted that they should, at least in some contexts, conform their thinking to good patterns of inference, then there would be no persuading them otherwise. Is my worry anything more than that? I am not sure.

.: posted by Tom Stoneham 2:10 PM


Why be rational?


We can distinguish within moral discourse between terms which are evaluative, such as good and bad, and terms which are normative or prescriptive, such as required or permitted. And there is a position, often attributed to Hume, which says that the only way to move from an evaluation to a prescription, from the judgement that doing X would be cruel to the judgement that one ought not to do X, is via a desire to do what is good and avoid what is bad. However, there is a great deal to be said for the opposing view that moral evaluations are essentially normative, that we cannot make sense of judging that an act is good without also judging that it should be done.

Is there a similar structural feature in theoretical reasoning? We have ways of assessing patterns of reasoning as good or bad, by reference to logic, probability theory, scientific methodology etc. We also talk as if someone who believes p and that if p then q, ought to believe q. So in talking about reasoning we make both evaluative and normative judgements. Setting aside any worries there might be about the ground of the evaluative judgements, we still face the puzzle about how to move from the evaluative to the normative. Logic is obviously evaluative, but equally obviously not normative. Logic and allied disciplines give us a means of partitioning inferences, conceived of not as psychological transitions but as relations between propositions, into the good and the bad, but it should (!) be puzzling how that has any bearing on what we should believe.

Here is a typical example of the slide (from Dorothy Edgington):

"Logic does not tell you what to believe, but rather that some beliefs rule out others: some combinations of belief are consistent, other combinations are not."

Surely logic tells us that some combinations of propositions are inconsistent, and thus that some combinations of belief constitute belief in an inconsistent set of propositions. But how does the fact that one proposition is inconsistent with another make the belief in one proposition 'rule out' (causally prevent??) the belief in the other? That is my problem.

Suppose logic tells us that the inference from p to q is a good one. I believe p and I am aware of the validity of the inference. One reason it does not follow that I should believe q is that a perfectly reasonable response would be to change my mind about p. Sometimes when we realize the logical consequences of our beliefs, we thereby realize that we were mistaken. But that can easily be met by making the normative consequence disjunctive: I ought to either reject p or accept q, which is equivalent to saying that I ought not to (continue to) accept p and deny q. But we are still left with a transition from the evaluative to the normative which needs explaining.
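That disjunctive formulation is, in effect, a choice of scope for the 'ought'. Writing 'O' for 'ought' and 'B' for 'believes', and given that I am aware that p entails q, the two readings are (a regimentation of the point just made, nothing more):

Narrow scope:  Bp -> O(Bq)
Wide scope:    O(~(Bp & B~q))

The disjunctive consequence is the wide-scope reading, which can be discharged either by accepting q or by abandoning p. But on either reading the 'O' is just the normative notion whose source we are trying to explain.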

So far I can only think of three options:

1. Logic is not merely evaluative but also normative. Since logic is ostensibly about the relations between propositions, but the normative claims we are searching for are about the relations between mental states, this view is faced with a stark choice between psychologism about logic and logicism about psychology. That is, we could say that logic is in fact about the relations between mental states, or we could say that mental states are essentially logical. Psychologism about logic is infamous. Logicism about psychology is rarely made explicit (the notable exception being Colin McGinn), but it is not clear that it does the trick. First off, when we say that q follows from p, there is no implication of a causal relation. But logicism about psychology has to face the fact that the relations between mental states are causal, so that in a logical mind the belief that p causes the belief that q. Secondly, if logicism is to get the consequence that someone who believes p should not also deny q, rather than the merely evaluative claim that someone who believes p and not-q is not functioning properly, the relation of logical consequence needs to be given a normative reading along the lines of: if p is true then q ought to be true (and everything that ought to be true is necessarily true?). This is pretty implausible.

2. We need to be Humeans about rationality. So the only way to get from 'it would be logically bad to believe p and deny q' to 'someone who believes p ought not to deny q' is via a desire to think what is logically good. There are two problems with this. To see them, let us consider how the process would work. First I recognize that p entails q. I conclude that, if I want to be logically good, I should not believe p and not-q. I do want to be logically good. So I should not believe p and not-q. The content of this normative claim is conditional: if I am going to get what I want, I should not believe p and not-q. The first problem is that this normative claim seems too weak. If I want an ice-cream and the ice-cream is in the freezer, then I should go to the freezer. Maybe I don't bother; then I have not got what I wanted. But it is a fact of life that for one reason or another we do not always get what we want, so I can live with that. The normative force of such claims is very weak, but the normative force of the claim that, since p entails q, one should not believe p and not-q is supposed to be much stronger: it is no defence against the charge of irrationality that one could not be bothered! The second problem is that while the combination of the conditional and my desire do entail that I should not believe p and not-q, they only give me a reason so long as I accept the logical consequences of things I believe, and that is exactly what is at issue.

3. Constraining one's thoughts by reference to good patterns of inference is part of what it is to be a person / have a mind. The idea here is that a person just is someone who recognizes evaluative claims about good and bad inferences as creating normative constraints upon their thoughts. Perhaps, even more strongly, it might be claimed that mental states such as beliefs are necessarily such that their typical transitions follow patterns which are logically good. However, like the similar position of logicism about psychology, this threatens to lose the normative element: it does not follow from the fact that, when p entails q, the belief that p and not-q is in ideal circumstances impossible, that one ought not to believe that p and not-q. Whichever version we were to adopt, the vacuity of this proposal should be apparent when we consider the analogous proposal in ethics: a creature which did not take moral evaluations to have normative force would not be a person. Even if true, this does not explain how evaluations have normative force for me (assuming I am a person).

One paragraph discussing each of these options is a pretty cursory treatment, so there may be more to be said for them. And there may be other options which I have not considered. But for the time being, the whole thing looks very puzzling indeed.

.: posted by Tom Stoneham 10:27 AM


Tuesday, October 15, 2002

Education

Well, term will have started at Universities everywhere by now. I am not teaching this term, but recently came across a good quotation which, when taken out of context, seems very relevant to that perennial question 'What good is a philosophy degree?':

Education is what is left when what has been learnt has been forgotten.

This is such a good aphorism that there is not even any need to explain it. The education acquired through the study of philosophy might be achievable through the study of some other subject, so the reason to choose philosophy is that what has to be learnt (and later forgotten) in order to gain the education is of sufficient interest to motivate the student.

Why did I say that it was a good quotation 'when taken out of context'? Because it comes from BF Skinner, the notorious behaviourist. Skinner was, I gather, really just a methodological behaviourist, in that he did not deny the existence of mental states, but merely said that they were irrelevant to scientific psychology. However, given the explanatory aspirations of a scientific psychology, this amounts to the philosophically significant doctrine that we can understand and explain human actions without reference to mental states. The basic idea is that human, and animal, actions are merely responses to external stimuli, and the responses are by and large learned. So learning is the creation of stimulus-response pairs. But in the quotation, he is using 'learn' in the everyday sense of being able to repeat information, and reserving 'education' for acquired stimulus-response pairs.
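On that picture, learning is just the building of a look-up table. A toy sketch (mine, not Skinner's, and all the names are made up):

# Toy behaviourist learner: learning is nothing but the creation of S-R pairs.
class SRLearner:
    def __init__(self):
        self.pairs = {}                    # the organism's entire 'psychology'

    def condition(self, stimulus, response):
        self.pairs[stimulus] = response    # learning = adding a stimulus-response pair

    def act(self, stimulus):
        return self.pairs.get(stimulus)    # behaviour without consulting mental states

rat = SRLearner()
rat.condition('lever', 'press')
print(rat.act('lever'))                    # 'press'

On the Skinnerian reading of the aphorism, the 'education' is whatever is left in the table once the rote-learnt information has been forgotten.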

.: posted by Tom Stoneham 11:00 AM


Thursday, October 10, 2002

Moral Dilemmas


The moral dilemmas which most of us face in our lives are pretty insignificant. No one will die and any hurt or offence will probably be forgotten or forgiven within a year. And yet these minor dilemmas are major problems at the time they occur.

My current dilemma is over two incompatible invitations. This is not a mere social dilemma for two reasons. Firstly, I am not only deciding for myself, but also for my daughter: by accepting one of the invitations on behalf of the family, I would cause her to upset her best friend. Secondly, both invitations to a greater or lesser extent involve the obligations of friendship, and those obligations have an ineliminable ethical dimension.

At first sight it might seem that we could apply utilitarian reasoning here, because the adverse consequences are of the same kind: disappointment. Surely we can weigh up the likely intensity and duration of the disappointment caused by rejecting each invitation and identify a best choice?

The problems with this approach are immediate. For example, do you measure intensity of disappointment by the subjective feeling or by how easily the feeling can be overtaken by new pleasures? How do you take account of previous disappointments? How do you aggregate over people? What if one of the people involved has become so jaded from experience that they rarely feel disappointment at being let down? But even if there were answers to be found to these questions, there remains the more fundamental problem that a result like 'Action A will cause 10% more disappointment than action B' does not seem to resolve the dilemma in the way that 'Action A will cause great disappointment but action B will be happily accepted by all' does.

We could instead start looking for principles that might resolve the dilemma. For example, one might think that the obligations of children's friendships are trivial compared to the obligations of lifelong adult friendship. But on the other hand, such adult friendships are only made because of the way things worked out in childhood, and a parent who puts too little value on their child's friendships may prevent that child from learning to make the deep bonds which structure a worthwhile adult life. After all, an adult friendship worth keeping will survive a large measure of disappointment, but maybe not a child's friendship. Equally one might question the appropriateness of impartiality in these cases, for should one not care more about a friend's disappointment than his wife's? Or your own child than someone else's?

The more one thinks about this sort of case, the more apparent it becomes that there is nothing, no calculation, no principle, no argument, which will solve the problem. The decision is so hard because the normal model of working out which is the right course of action does not seem to apply here. The problem with this model is that it concentrates on choices and decisions that can be seen as trying to conform to some pre-existing goal or norm (it need not be objective). But moral dilemmas, however trivial, do not appear to work like that; they require a different kind of decision. Somewhere in The Salterton Trilogy, Robertson Davies notes that the idiom 'making one's mind up' also has a literal reading: sometimes our decisions have a creative function. When faced with a moral dilemma one is choosing not so much a course of action as the type of person one wants to be. Of course, one decision does not a personality make. Rather, in making such a decision someone is not casting an opinion on which is the right course of action, but showing that she wants to be the type of person who puts, say, the obligations of parenthood over friendship, or loyalty over an easy life.

That is the kernel of truth in virtue ethics.

.: posted by Tom Stoneham 12:11 PM


Friday, September 27, 2002

Spelling

Poor spelling is a hindrance to accurate writing and clear communication of ideas, but it is not a serious intellectual failing. However, in an exam, which is after all just an opportunity for students to show off their knowledge and intellect, bad spelling reflects badly on the candidate. Here is a collection of spelling mistakes from this year's finals:

are (for our)
binnoccular
bi-product
biproduct
committment
composit
consequent (for consequence)
couloures
devisions
dispell
evolute (for evolve!)
existance
expediant
extention
fallable
fallicious (different script from fallable)
flexability
implausable
learn't
nonexistant
occuring
paradies
parralel
permissable
permisserble (diff. script from permissable)
populous (for populace)
precice
prieviously
prooving
questionned
spacio-temporal
sollution
stimulae
strongely
supervein
superveniant
thisis (for thesis)
tomatoe
transative
vaugue
verifi
visule
wait (for weight)

.: posted by Tom Stoneham 9:38 AM


Monday, September 23, 2002

Law and Morality

Gossip, the end of which is always to make the gossipers feel superior (that is one way to distinguish gossip from merely sharing information), is a cruel and unpleasant activity which is conducted purely for pleasure. It can be done in private, behind the victim's back, or in public, and even in print (tabloid newspapers). It can be fun and most people succumb to it at times, but it is something to disapprove of and discourage. Should we also ban it? Does our distaste for gossip, and the fact that the world would be a better place without it, justify making it illegal? Surely not.

What is the point of this example? Well, there are activities which our ethical principles might prohibit but which it would be a mistake to criminalize. This is because the harm done is (usually) small and the main objection is to the fact that participants are taking pleasure in the activity. But if that is the objection, the best way of dealing with it is to change people, to improve their moral education so that they no longer take pleasure in the activity, or perhaps do not think the pleasure is worth the degradation.

.: posted by Tom Stoneham 8:01 AM


Sunday, September 22, 2002

Idealism

The definition of idealism in common currency holds that it is a sufficient condition for one to be an idealist about X's that one thinks there is a modal dependence of X's upon mental states, be they experiences or thoughts. The modal dependence is the claim that necessarily, if X's exist then appropriate mental states exist.

But this modal definition seems to miss the deeper point of idealism, for it has the consequence that everyone is an idealist about mental states. One way of getting around this is to specify the mental states upon which X's depend to be experiences of, or thoughts about, X's. Then idealism would be false of a given mental state (type) if there is a possible world in which a creature has that state but does not have any higher-order state which is an experience of or thought about it. And the existence of creatures with a mental life but no psychological concepts, such as dogs, would establish that possibility.
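Putting the two definitions in the notation of the Nihilism post above (with '[]' for necessity, 'x' ranging over X's and 'm' over mental states), they are roughly:

(I1) [] ($x Xx -> $m Mm)
(I2) [] ($x Xx -> $m (Mm & m is an experience of, or thought about, an X))

Substituting mental states for the X's makes the consequent of (I1) trivial, which is why everyone comes out an idealist about mental states; the dog world falsifies the corresponding instance of (I2), which is the wanted result.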

This will do for most of us, but it would seem that at least two of the staunchest realists about qualia, Searle and Chalmers, are in trouble here. They both think that consciousness brings with it awareness of one's conscious states. Searle is a bit vague about what this awareness consists in, but Chalmers is explicit: knowledge by acquaintance. Knowledge by acquaintance is not propositional and does not involve concepts, hence if I am acquainted with my pain, so is my dog acquainted with his pain, whether or not he has the concept of pain. Idealism about qualia threatens.

It would be much better not to use modal dependence as one's criterion of idealism. I have argued elsewhere that Berkeley has a concept of ontological dependence, but that concept has no place in our conceptual scheme. We need a new idea about idealism.

.: posted by Tom Stoneham 11:18 AM


Thursday, September 19, 2002

Bill Hart uses time-travel as an example of something which we appear to be able to imagine but, since it is impossible, cannot really imagine. Apart from the oddity of using 'imagine' as a success verb, so that it is impossible to imagine the impossible, his reasons for thinking time-travel impossible are rather dodgy.

The reasoning goes like this: if (backwards) time-travel were possible, then someone, call him Tim, could go back to his actual father's childhood and kill him before Tim is conceived (in a fit of Oedipal rage!), thus preventing Tim being born. But that is impossible: if the person Tim shot was his actual father, then that person must have lived long enough to father Tim. Conclusion: backwards time-travel is impossible.

The problem with this argument is that what is definitely impossible is a conjunction: Tim goes back in time AND kills his father (before ...). Why be so sure that if Tim travelled back in time he would succeed in killing his father? Surely, we know, and he knows, that however hard he tries he will fail. Either he will shoot the wrong man, or be caught, or a bird will fly into the line of the speeding bullet, or ...
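Schematically (my gloss, not Hart's own notation): let B be 'Tim travels back' and K be 'Tim kills his father before Tim is conceived'. The uncontroversial datum is only

\neg\Diamond(B \wedge K), equivalently \Box(B \rightarrow \neg K).

To reach \neg\Diamond B, Hart needs the further premise \Diamond B \rightarrow \Diamond(B \wedge K), i.e. that if the trip is possible then so is the killing. That premise is exactly what the defender of time-travel should deny.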

The thinking behind Hart's argument (and he is not alone in this) is that for any two ordinary mortals who are suitably situated in space and time, it is possible for one to kill the other. Whether Tim, with his loaded gun and good line of sight, can kill his father is independent of the identities of Tim and his victim. The person who believes time-travel to be possible must deny this. Put it like this: Tim is pointing his gun at a 12-year-old boy. Is it possible for Tim to kill that boy? That depends on whether the boy is (=) Tim's father or not, because if the boy is Tim's father, then it is true of that boy that he will (actually) go on to father a child. But if that is true of him, it is also true that he does not die now.
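The same point in shorthand (mine again): let b be the boy in Tim's sights at t0 and f Tim's actual father. If b = f, then Fathers(b, Tim, t1) holds for some t1 later than t0, from which it follows that b does not die at t0. Whether Tim can kill b thus depends on whether b = f, which is just the identity-dependence Hart's premise rules out.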

The key move here is the thought that a world with a different future is a different world. You and I both know that the actual world is not one of the millions of possible worlds in which our fathers died before we were conceived. Tim also knows that. If you look at a photo of your father aged 12, you know that that boy (indirect demonstrative reference to a boy existing many years ago, hence future tense verb to follow) will not die before you are conceived. Tim is standing in front of the 12-year-old boy, so can make a direct demonstrative reference to him, but he too knows that that boy will not die before ...

Which is not to say that time-travel is possible, merely that the impossibility of preventing your own birth does not prove it to be impossible. The issue turns on the metaphysics of time, but that is another matter for another day.

.: posted by Tom Stoneham 2:38 PM


It seems that the obvious view, that imagining p is good evidence that p is possible, can be defended against all extant objections. But the interesting question remains: do we have a faculty of imagining/conceiving which is more intellectual than sensuous imagining (e.g. visualizing) but more constrained than supposition? Yablo simply assumes the coherence of such a faculty (PPR 1993) whereas Bill Hart argues against it (The Engines of the Soul, p.15). I think Hart is wrong because he forgets the role of intentions in determining the imaginative project: the same 'image' can be used in imagining a suitcase and in imagining a cat hiding behind a suitcase. I cannot see how to explain this without allowing that one can imagine p without p being the content of one's sensuous imagination.

.: posted by Tom Stoneham 11:41 AM


Wednesday, September 11, 2002

I needed to get some things clear in my own mind about fox hunting so I wrote this short essay trying to sort things out. I concentrate on the question of the morality of hunting, but of course hunting being wrong is neither necessary nor sufficient for a ban to be right. Hunting could be immoral but a matter of personal choice and thus not something we should ban in a liberal democracy, or it could be morally permissible but so offensive to the majority that it should be banned. But the moral issues must be the beginning of any sensible debate.

.: posted by Tom Stoneham 4:11 PM


Friday, September 06, 2002

Hey, my book is out! The cover is a bit purple, but that is OK - at least it exists at long last. Why not buy a copy? Only £15.99 in paperback.

.: posted by Tom Stoneham 12:31 PM


Monday, September 02, 2002

Is it possible to be a pluralist about philosophical methods and styles? Is there just one way to do philosophy?

I am increasingly inclined to think that there are several different activities, each with different objectives and approaches, and each deserving to be called philosophy. One obvious application of this would be the continental-analytic divide, but I suspect that is too simple. Consider instead the differences between naturalists, Wittgenstein-inspired quietists, and a priori metaphysicians: the questions each asks and the answers each will accept are completely different. The naturalists will have it that the quietists are failing to say anything worthwhile and the metaphysicians are just making guesses; the quietists will have it that the naturalists are not doing philosophy at all and the metaphysicians are self-deceived; the metaphysicians will have it that the others are making no progress because they (stupidly) refuse to use all the intellectual resources available to them. Yet a single philosophy department can happily contain people from each camp. How?

.: posted by Tom Stoneham 1:29 PM


Wednesday, August 14, 2002

Here is a question to test your epistemological intuitions: is it possible to have a belief which is totally unjustified but which one has, even after sustained reflection, no reason to reject? Example: you come to believe something for very bad reasons, later you retain the belief but not the reasons, and have no new reasons available (for or against the belief).

Gilbert Harman, and many others, appear to think that if you have no reason to reject the belief, you must be justified in continuing to believe it. But the only thing that could provide the justification is the mere fact that you previously believed it. The alternative is to separate the question of when you do or do not have reasons to *change* your beliefs from the question of justification. This may be linked to the possibility of non-aetiological justifications, which are the great white hope for self-knowledge and a priori knowledge.

.: posted by Tom Stoneham 10:31 AM


Monday, August 12, 2002

Humpty-Dumpty was wrong: we cannot make our words mean whatever we want them to. There have to be some rules for language to work, even if those rules are just fleeting agreements between two people. As Dummett put it:

The paradoxical character of language lies in the fact that while its practice must be subject to standards of correctness, there is no ultimate authority to impose those standards from without.

In the States a fierce public debate is currently raging between, to caricature, relativists and conservatives. What seems to be missing from the debate is an attempt to apply Dummett's thought about language to other areas which have normative structure, like politics and personal behaviour. The conservatives are right that human flourishing, on both the individual and the social level, requires acting out of a sense of right and wrong: morals maketh man. The relativists are right that there is no external authority which determines what is right and wrong: man maketh morals.

It is the main public task facing analytic philosophy to show how these two thoughts can be consistent. Unfortunately the ideas needed to do this are not sexy and require the grasp of some pretty subtle 'technical' concepts.

.: posted by Tom Stoneham 8:44 AM


Friday, August 09, 2002

Yesterday I was asked (by the new VC) what is the hottest area in Philosophy right now. I said Rationalism, on the grounds that its high-profile comeback is so surprising that it will certainly generate interest. Three years ago I thought it was Quietism, but that fashion is fizzling out \o/.

.: posted by Tom Stoneham 8:50 AM


