Thursday, April 18, 2024

Evaluating some theses on dignity and value

I’ve been thinking a bit about the relationship between dignity and value. Here are four plausible principles:

  1. If x has dignity, then x has great non-instrumental value.

  2. If x has dignity, then x has great non-instrumental value because it has dignity.

  3. If x has dignity and y does not, then x has more non-instrumental value than y.

  4. Dignity just is great value (variant: great non-instrumental value).

Of these theses, I am pretty confident that (1) is true. I am fairly confident (3) is false, except perhaps in the special case where y is a substance. I am even more confident that (4) is false.

I am not sure about (2), but I incline against it.

Here is my reason to suspect that (2) is false. It seems that things have dignity in virtue of some further fact F about them, such as that they are rational beings, or that they are in the image and likeness of God, or that they are sacred. In such a case, it seems plausible to think that F directly gives the dignified entity both the great value and dignity, and hence the great value derives directly from F and not from the dignity. For instance, maybe what makes persons have great value is that they are rational, and the same fact—namely that they are rational—gives them dignity. But the dignity doesn’t give them additional value beyond that bestowed on them by their rationality.

My reason to deny (4) is that great value does not give rise to the kinds of deontological consequences that dignity does. One may not desecrate something with dignity no matter what consequences come of it. But it is plausible that mere great value can be destroyed for the sake of dignity.

This leaves principle (3). The argument in my recent post (which I now have some reservations about, in light of some powerful criticisms from a colleague) points to the falsity of (3). Here is another, related reason. Suppose we find out that the Andromeda Galaxy is full of life, of great diversity and wonder, including both sentient and non-sentient organisms, but has nothing close to sapient life—nothing like a person. An evil alien is about to launch a weapon that will destroy the Andromeda Galaxy. You can either stop that alien or save a drowning human. It seems to me that either option is permissible. If I am right, then the value of the human is not much greater than that of the Andromeda Galaxy.

But now imagine that the Whirlpool Galaxy has an order of magnitude more life than the Andromeda Galaxy, with much greater diversity and wonder, but still with nothing sapient. Then even if the value of the human is greater than that of the Andromeda Galaxy, because it is not much greater, while the value of the Whirlpool Galaxy is much greater than that of the Andromeda Galaxy, it follows that the human does not have greater value than the Whirlpool Galaxy.

However, the Whirlpool Galaxy, assuming it has no sapience in it, lacks dignity. A sign of this is that it would be permissible to deliberately destroy it in order to save two similar galaxies from destruction.

Thus, the human is not greater in value than the Whirlpool Galaxy (in my story), but the human has dignity while the Whirlpool Galaxy lacks it.

That said, on my ontology, galaxies are unlikely to be substances (especially if the life in the galaxy is considered a part of the galaxy, since following Aristotle I doubt that a substance can be a proper part of a substance). So it is still possible that principle (3) is true for substances.

But I am not sure even of (3) in the case of substances. Suppose elephants are not persons, and imagine an alien sentient but not sapient creature which is like an elephant in the temporal density of the richness of life (i.e., richness per unit time), except that (a) its rich elephantine life lasts millions of years, and (b) there can only be one member of the kind, because they naturally do not reproduce. On the other hand, consider an alien person who naturally only has a life that lasts ten minutes, and has the same temporal density of richness of life that we do. I doubt that the alien person is much more valuable than the elephantine alien. And if the alien person is not much more valuable, then by imagining a non-personal animal that is much more valuable than the elephantine alien, we have imagined that some person is not more valuable than some non-person. Assuming all non-persons lack dignity and all persons have dignity, we have a case where an entity with dignity is not more valuable than an entity without dignity.

That said, I am not very confident of my arguments against (3). And while I am dubious of (3), I do accept:

  1. If x has dignity and y does not, then y is not more valuable than x.

I think the cases of the human and the galaxy, and of the alien person and the alien elephantine creature, are cases of incommensurability.

Wednesday, April 17, 2024

Desire-fulfillment theories of wellbeing

On desire-fulfillment (DF) theories of wellbeing, cases of fulfilled desire are an increment to utility. What about cases of unfulfilled desire? On DF theories, we have a choice point. We could say that unfulfilled desires don’t count at all—it’s just that one doesn’t get the increment from the desire being fulfilled—or that they are a decrement.

Saying that unfulfilled desires don’t count at all would be mistaken. It would imply, for instance, that it’s worthwhile to gain all the possible desires, since then one maximizes the amount of fulfilled desire, and there is no loss from unfulfilled desire.

So the DF theorist should count unfulfilled desire as a decrement to utility.

But now here is an interesting question. If I desire that p, and then get an increment x > 0 to my utility if p, is my decrement to utility if not p just −x, or something different?

It seems that in different cases we feel differently. There seem to be cases where the increment from fulfillment is greater than the decrement from non-fulfillment. These may be cases of wanting something as a bonus or an adjunct to one’s other desires. For instance, a philosopher might want to win a pickleball tournament, and intuitively the increment to utility from winning is greater than the decrement from not winning. But there are cases where the decrement is at least as large as the increment. Cases of really important desires, like the desire to have friends, may be like that.

What should the DF theorist do about this? The observation above seems to do serious damage to the elegant “add up fulfillments and subtract non-fulfillments” picture of DF theories.

I think there is actually a neat move that can be made. We normally think of desires as coming with strengths or importances, and of course every DF theorist will want to weight the increments and decrements to utility with the importance of the desire involved. But perhaps what we should do is to attach two importances to any given desire: an importance that is a weight for the increment if the desire is fulfilled and an importance that is a weight for the decrement if the desire is not fulfilled.

So now it is just a psychological fact that each desire comes along with a pair of weights, and we can decide how much to add and how much to subtract based on the fulfillment or non-fulfillment of the desire.
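As an illustration, the two-weight bookkeeping can be sketched in a few lines of code. The particular desires and numeric weights below are hypothetical, chosen only to mirror the pickleball and friendship examples above:

```python
# A minimal sketch of the two-weight desire-fulfillment picture.
# Desires and weights are hypothetical illustrations, not the author's data.

def utility(desires):
    """Add the fulfillment weight for each fulfilled desire and
    subtract the non-fulfillment weight for each unfulfilled one."""
    total = 0.0
    for d in desires:
        if d["fulfilled"]:
            total += d["fulfillment_weight"]
        else:
            total -= d["nonfulfillment_weight"]
    return total

# A "bonus" desire: the increment from winning exceeds the decrement from losing.
pickleball = {"fulfilled": False, "fulfillment_weight": 5.0, "nonfulfillment_weight": 1.0}
# An "important" desire: the decrement is at least as large as the increment.
friends = {"fulfilled": True, "fulfillment_weight": 10.0, "nonfulfillment_weight": 10.0}

print(utility([pickleball, friends]))  # 10.0 - 1.0 = 9.0
```

The algorithm discussed below then amounts to pushing every desire toward the pickleball profile: a large fulfillment weight paired with a small non-fulfillment weight.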

If this is right, then we have an algorithm for a good life: work on your psychology to gain lots and lots of new desires with large fulfillment weights and small non-fulfillment weights, and to transform your existing desires to have large fulfillment weights and small non-fulfillment weights. Then you will have more wellbeing, since the fulfillments of desires will add significantly to your utility but the non-fulfillments will make little difference.

This algorithm results in an inhuman person, one who gains much if their friends live and are loyal, but loses nothing if their friends die or are disloyal. That’s not the best kind of friendship. The best kind of friendship requires vulnerability, and the algorithm takes that away.

Tuesday, April 16, 2024

Value and dignity

  1. If it can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life, then the life of a typical human being is not of greater value than that of the lion species.

  2. It can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life.

  3. So, the life of a typical innocent human being is not of greater value than that of the lion species.

  4. It is wrong to intentionally kill an innocent human being in order to save tigers, elephants and giraffes from extinction.

  5. It is not wrong to intentionally destroy the lion species in order to save tigers, elephants and giraffes from extinction.

  6. If (3), (4) and (5), then the right to life of innocent human beings is not grounded in how great the value of human life is.

  7. So, the right to life of innocent human beings is not grounded in how great the value of human life is.

I think the conclusion to draw from this is the Kantian one: that dignity, that property of human beings which grounds respect, is not a form of value. A human being has a dignity greater than that of all lions taken together, as indicated by the deontological claims (4) and (5), but a human being does not have a value greater than that of all lions taken together.

One might be unconvinced by (2). But if so, then tweak the argument. It is reasonable to accept a 25% chance of death in order to stop an alien attack aimed at killing off all the lions. If so, then on the plausible assumption that the value of all the lions, tigers, elephants and giraffes is at least four times that of the lions (note that there are multiple species of elephants and giraffes, but only one of lions), it is reasonable to accept a 100% chance of death in order to stop the alien attack aimed at killing off all four types of animals. But now we can easily imagine sixteen types of animals such that it is permissible to intentionally kill off the lions, tigers, elephants and giraffes in order to save the sixteen types, but it is not permissible to intentionally kill a human in order to save the sixteen types.
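The risk arithmetic in the tweak can be written out explicitly. This is only a sketch, resting on the (contestable) assumption that the acceptable risk of death scales linearly with the value at stake:

```latex
% Assumption: acceptable death-risk scales linearly with the value at stake.
% Let r(V) be the acceptable risk for stakes of value V, with r(V) = 0.25
% when the stakes are the lions alone.
\[
  r(kV) = k\,r(V) \quad\Rightarrow\quad r(4V) = 4 \times 0.25 = 1,
\]
% so when the stakes are the lions, tigers, elephants and giraffes together
% (value at least 4V), a 100% chance of death is an acceptable risk.
```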

Yet another argument against physician assisted suicide

Years ago, I read a clever argument against physician assisted suicide that held that medical procedures need informed consent, and informed consent requires that one be given relevant scientific data on what will happen to one after a procedure. But there is no scientific data on what happens to one after death, so informed consent of the type involved in medical procedures is impossible.

I am not entirely convinced by this argument, but I think it does point to a reason why helping to kill a patient is not an appropriate medical procedure. An appropriate medical procedure is one aiming at producing a medical outcome by scientifically-supported means. In the case of physician assisted suicide, the outcome is presumably something like respite from suffering. Now, we do not have scientific data on whether death causes respite from suffering. Seriously held and defended non-scientific theories about what happens after death include:

  a. death is the cessation of existence

  b. after death, existence continues in a spiritual way in all cases without pain

  c. after death, existence continues in a spiritual way in some cases with severe pain and in other cases without pain

  d. after death, existence continues in another body, human or animal.

The sought-after outcome, namely respite from severe pain, is guaranteed in cases (a), (b) and (d). However, first, evidence for preferring these three hypotheses to hypothesis (c) is not scientific but philosophical or theological in nature, and hence should not be relied on by the medical professional as a medical professional in predicting the outcome of the procedure. Second, even on hypotheses (b) and (d), the sought-after outcome is produced by a metaphysical process that goes beyond the natural processes that are the medical professional’s tools of the trade. On those hypotheses, the medical professional’s means for assuring improvement of the patient’s subjective condition relies on, say, a God or some nonphysical reincarnational process.

One might object that the physician does not need to judge between after-life hypotheses like (a)–(d), but can delegate that judgment to the patient. But a medical professional cannot so punt to the patient. If I go to my doctor asking for a prescription of some specific medication, saying that I believe it will help me with some condition, he can only permissibly fulfill my request if he himself has medical evidence that the medication will have the requisite effect. If I say that an angel told me that ivermectin will help me with Covid, the doctor should ignore that. The patient rightly has an input into what outcome is worth seeking (e.g., is relief from pain worth it if it comes at the expense of mental fog) and how to balance risks and benefits, but the doctor cannot perform a medical procedure based on the patient’s evaluation of the medical evidence, except perhaps in the special case where the patient has relevant medical or scientific qualifications.

Or imagine that a patient has a curable fracture. The patient requests physician assisted suicide because the patient has a belief that after death they will be transported to a different planet, immediately given a new, completely fixed body, and will lead a life there that is slightly happier than their life on earth. A readily curable condition like that does not call for physician assisted suicide on anyone’s view. But if there is no absolute moral objection to killing as such and if the physician is to punt to the patient on spiritual questions, why not? On the patient’s views, after all, death will yield an instant cure to the fracture, while standard medical means will take weeks.

Furthermore, the medical professional should not fulfill requests for medical procedures which achieve their ends by non-medical means. If I go to a surgeon asking that my kidney be removed because Apollo told me that if I burn one of my kidneys on his altar my cancer will be cured, the surgeon must refuse. First, as noted in the previous paragraph, the surgeon cannot punt to the patient the question of whether the method will achieve the stated medical goal. Second, as also noted, even if the surgeon shares the patient’s judgment (the surgeon thinks Apollo appeared to her as well), the surgeon is lacking scientific evidence here. Third, and this is what I want to focus on here, while the outcome (no cancer) is medical, the means (sacrificing a kidney) are not medical.

Only in the case of hypothesis (a) can one say that the respite from severe pain is being produced by physical means. But the judgment that hypothesis (a) is true would be highly controversial (a majority of people in the US seem to reject the hypothesis), and as noted is not scientific.

Admittedly, in cases (b)–(d), the medical method as such does likely produce a respite from the particular pain in question. But that a respite from a particular pain is produced is insufficient to make a medical procedure appropriate: one needs information that some other pain won’t show up instead.

Note that this is not an argument against euthanasia in general (which I am also opposed to on other grounds), but specifically an argument against medical professionals aiding killing.

A version of computationalism

I’ve been thinking how best to define computationalism about the mind, while remaining fairly agnostic about how the brain computes. Here is my best attempt to formulate computationalism:

  • If a Turing machine with sufficiently large memory simulates the functioning of a normal adult human being with sufficient accuracy, then given an appropriate mapping of inputs and outputs but without any ontological addition of a nonphysical property or part, (a) the simulation dispositionally will behave like the simulated human at the level of macroscopic observation, and (b) the simulation will exhibit mental states analogous to those the simulated human would have.

The “analogous” in (b) allows the computationalist at least two differences between the mental states of the simulation and the mental states of the simulated. First, we might allow for the possibility that the qualitative features of mental states—the qualia—depend on the exact type of embodiment, so in vivo and in silico versions of the human will have different qualitative states when faced with analogous sensory inputs. Second, we probably should allow for some modest semantic externalism.

The “without any ontological addition” is relevant if one thinks that the laws of nature, or divine dispositions, are such that if a simulation were made, it would gain a soul or some other nonphysical addition. In other words, the qualifier helps to ensure that the simulation would think in virtue of its computational features, rather than in virtue of something being added.

Note that computationalism so defined is not entailed by standard reductive physicalism. For while the standard reductive physicalist is going to accept that a sufficiently accurate simulation will yield (a), they can think that real thought depends on physical features that are not had by the simulation (we could imagine, for instance, that to have qualia you need to have carbon, and merely simulated carbon is not good enough).

Moreover, computationalism so defined is compatible with some nonreductive physicalisms, say ones on which there are biological laws that do not reduce to laws of physics, as long as these biological laws are simulable, and the appropriate simulation will have the right mental states.

In fact, computationalism so defined is compatible with substance dualism, as long as the functioning of the soul is simulable, and the simulation would have the right mental states without itself having to have a soul added to it.

Computationalism defined as above is not the same as functionalism. Functionalism requires a notion of a proper function (even if statistically defined, as in Lewis). No such notion is needed above. Furthermore, computationalism as defined above is not a thesis about every possible mind, but only about human minds. It seems pretty plausible that (perhaps in a world with different laws of nature than ours) it is possible to have a mind whose computational resources exceed those of a Turing machine.

Physician assisted suicide and martyrdom

  1. If physician assisted suicide is permissible, then it would have been permissible for early Christians facing being tortured to death by the Romans to kill themselves less painfully.

  2. It would not have been permissible for early Christians facing being tortured to death by the Romans to kill themselves less painfully.

  3. So, physician assisted suicide is not permissible.

The parity premise (1) is hard to deny. The best case for physician assisted suicide is where the patient strives to escape severe and otherwise unescapable pain while facing imminent death. That’s precisely the case of an early Christian being rounded up by Romans to be tortured to death.

Premise (2) is meant to be based on Christian tradition. The idea of suicide to escape pain could not have failed to occur to early Christians, given the cultural acceptance of suicide “to escape the shame of defeat and surrender” (Griffin 1986). It would have been culturally unsurprising, then, if a Christian were to fall on a sword with the Roman authorities at the door. But as far as I can tell, this did not happen. The best explanation is that the Christian tradition was strongly opposed to such “escape”.

There were, admittedly, cases of suicide to avoid rape (eventually rejected by St. Augustine, with great sensitivity to the tragedy), as well as cases where the martyr cooperated with the executioners (as Socrates is depicted having done).

Saturday, April 13, 2024

Legitimate and illegitimate authority

It is tempting to think that legitimate and illegitimate authorities are both types of a single thing. One might not want to call that single thing “authority”. After all, one doesn’t want to say that real and fake money are both types of money. But it sure seems like there is something X that legitimate and illegitimate authorities have in common with each other, and with nothing else. One imagines that a dictator and a lawfully elected president are in some way both doing the same kind of thing, “ruling” or whatever.

But this now seems to me to be mistaken. Or at least I can’t think what X could be. The only candidate I can think of is the trivial disjunctive property of being a legitimate authority or an illegitimate authority.

To a first approximation, one might think that the legitimate and illegitimate authorities both engage in the speech act of commanding. One might here try to object that “commanding” has the same problem as “authority” does: that it is not clear that legitimate and illegitimate commands have anything in common. This criticism seems to me to be mistaken: the two may not have any normative commonality, but they seem to be the same speech act.

However, imagine that Alice is the legitimate elected ruler of Elbonia, but Bob has put Alice in solitary confinement and set himself up as a dictator. Alice is not crazy: when she is in solitary confinement she isn’t commanding anyone as there is no one for her to command. Alice is a legitimate authority and Bob is an illegitimate authority, yet they do not have commanding, or ruling, or running the country in common. (Similarly, even without imprisonment, we could suppose Alice is a small government conservative who ran on a platform of not issuing any orders except in an emergency, and no emergency came up and she kept her promise.)

One might think that they have some kind of dispositional property in common. Alice surely would command if she were to get out of prison, after all. Well, maybe, but we need to specify the conditions quite carefully. Suppose she got out of prison but thought that no one would follow her commands, because she was still surrounded by Bob’s flunkies. Then she might not bother to command. It makes one look bad if one issues commands and they are ignored. Perhaps, though, we can say: Alice would issue commands if she thought they were needed and likely to be obeyed. But that can’t be the disposition that defines a legitimate or illegitimate authority. For many quite ordinary people in the country presumably have the exact same disposition: they too would issue commands if they thought they were needed and likely to be obeyed! But we don’t want to say that these people are either legitimate or illegitimate authorities.

We might argue that Alice isn’t a legitimate authority while imprisoned, because she is incapacitated, and incapacitation removes legitimate authority. One reason to be dubious of this answer is that on a plausible account of incapacitation, insanity is a form of incapacitation. But an insane illegitimate dictator is still an illegitimate authority, and so incapacitation does not remove the disjunctive property legitimate or illegitimate authority, but at most it removes legitimacy. Thus, Alice might still be an authority, but not a legitimate one. Another reason is this: we could imagine that in order to discourage people from incapacitating the legitimate ruler, the laws insist that one remains in charge if one’s incapacitation is due to an act of rebellion. Moreover, we might suppose that Bob hasn’t actually incapacitated Alice. He lets her walk around and give orders freely, but his minions kill anybody who obeys, so Alice doesn’t bother to issue any orders, because either they will be disobeyed or the obeyers will be killed.

Perhaps we might try to find a disposition in the citizenry, however. Maybe what makes Alice and Bob be the same kind of thing is that the citizens have a disposition to obey them. One worry is this: suppose the citizens after electing Alice become unruly, and lose the disposition to obey. It seems that Alice could still be the legitimate authority. I suppose someone could think, however, that some principles of democracy would imply that if there is no social disposition to obey someone, they are no longer an authority, legitimate or not. I am dubious. But there is another objection to finding a common disposition in the citizenry. The citizenry’s disposition to obey Bob could easily be conditional on their being unable to escape the harsh treatment he imposes on the disobedient and on his actually issuing orders. So the proposal now is something like this: z is a legitimate authority or an illegitimate authority if the citizenry would be disposed to obey z if z were to issue orders backed up by credible threats of harsh treatment. But it could easily be that a perfectly ordinary person z satisfies this definition: people would obey z if z were to issue orders backed up by credible threats!

Let’s try one more thing. What fake and real money have in common is that they are both objects made to appear to be real money. Could we say that Alice and Bob have this in common: they both claim to (“pretend to”, in the old sense of “pretend” that does not imply “falsely” as it does now) be the legitimate authority? Again, that may not be true. Alice is in solitary confinement. She has no one to make such claims to. Again, we can try to find some dispositional formulation, such as that she would claim it if she thought it beneficial to do so. But again many quite ordinary people would claim to be the legitimate authority if they thought it beneficial to do so. Moreover, Bob can be an illegitimate authority without any pretence to legitimacy! He need not claim, for instance, that people have a duty to obey him, and may back up his orders by threat rather than by claimed authority. (It is common in our time that dictators pretend to a legitimacy that they do not have. But this is not a necessary condition for being an illegitimate authority.) Finally, if Carl is a crazy guy who claims to have been elected and no one, not even Carl’s friends and family, pays any attention to his raving, it does not seem that Carl is an illegitimate authority.

None of this denies the thesis that there is a similarity between illegitimate authority and legitimate authority. But it does not seem possible to turn that similarity into a non-disjunctive property that both of these share. Though maybe I am just insufficiently clever.

Thursday, April 11, 2024

Of snakes and cerebra

Suppose that you very quickly crush the head of a very long stretched-out serpent. Specifically, suppose your crushing takes less time than it takes for light to travel to the snake’s tail.

Let t be a time just after the crushing of the head.

Now causal influences propagate at the speed of light or less, the crushing of the head is the cause of death, and at t there wasn’t yet time for the effects of the crushing to have propagated to the tip of the tail. Furthermore, assume an Aristotelian account of life on which a living thing is everywhere joined with its form or soul and death is the separation of the form from the matter. Then at t, because the effects of crushing haven’t propagated to the tail, the tail is joined with the snake’s form, even though the head is crushed and hence presumably no longer a part of the snake. (Imagine the head being annihilated for greater clarity.)

Now as long as any matter is joined to the form, the critter is alive. It follows that at time t, the snake is alive despite lacking a head. The argument generalizes. If we crush everything but the snake’s tail, including crushing all the major organs of the snake, the snake is alive despite lacking all the major organs, and having but a tail (or part of a tail).

So what? Well, one of the most compelling arguments against animalism—the view that people are animals—is that:

  1. People can survive as just a cerebrum (in a vat).

  2. No animal can survive as just a cerebrum.

  3. So, people are not animals.

But presumably the reason for thinking that an animal can’t survive as just a cerebrum is that a cerebrum makes an insufficient contribution to the animal functions. But the tail of a snake makes an even less significant contribution to the animal functions. Hence:

  1. If a snake can survive as just a tail, a mammal can survive as just a cerebrum.

  2. A snake can survive as just a tail.

  3. So, a mammal can survive as just a cerebrum.

Objection: Only physical effects are limited to the speed of light in their propagation, and the separation of form from matter is not a physical effect, so that instantly when the head is crushed, the form leaves the snake, all at once at t.

Response: Let z be the spacetime location of the tip of the snake’s tail at t. According to the objection, at z the form is no longer present. Now, given my assumption that crushing takes less time than it takes for light to travel to the snake’s tail, and that in one reference frame z is just after the crushing, there will also be a reference frame according to which z is before the crushing has even started. If at z the form is no longer present, then the form has left the tip of the tail before the crushing.

In other words, if we try to get out of the initial argument by supposing that loss of form proceeds faster than light, then we have to admit that in some reference frames, loss of form goes backwards in time. And that seems rather implausible.

Tuesday, April 9, 2024

Absolute reference frame

Some philosophers think that notwithstanding Special Relativity, there is a True Absolute Reference Frame. Suppose this is so. This reference frame, surely, is not our reference frame. We are on a spinning planet rotating around a sun orbiting the center of our galaxy. It seems pretty likely that if there is an absolute reference frame, then we are moving with respect to it at least at the speed of the flow of the Local Group of galaxies due to the mass of the Laniakea Supercluster of galaxies, i.e., at around 600 km/s.

Given this, our measurements of distance and time are actually going to be a little bit objectively off the true values, which are the ones that we would measure if we were in the absolute reference frame. The things we actually measure here in our solar system will be objectively off due to time dilation and space contraction by about two parts per million, if my calculations are right. That means that our best possible clocks will be objectively about a minute(!) off per year, and our best meter sticks will be about two microns off. Not that we would notice these things, since the absolute reference frame is not observable, so we can’t compare our measurements to it.
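The figures above are easy to check. Here is a sketch of the calculation, assuming (with the text) a speed of 600 km/s relative to the putative absolute frame:

```python
import math

# Check the time-dilation/length-contraction figures in the text,
# assuming motion at v = 600 km/s relative to an absolute frame.
c = 299_792_458.0        # speed of light, m/s
v = 600_000.0            # 600 km/s in m/s
beta = v / c
gamma = 1.0 / math.sqrt(1.0 - beta**2)

fractional_offset = gamma - 1.0                      # ~2e-6: two parts per million
seconds_per_year = 365.25 * 24 * 3600
clock_drift = fractional_offset * seconds_per_year   # ~63 s: about a minute per year
meter_stick_error = fractional_offset * 1.0          # ~2e-6 m: about two microns

print(f"{fractional_offset:.2e}")                # 2.00e-06
print(f"{clock_drift:.0f} s per year")           # 63 s per year
print(f"{meter_stick_error * 1e6:.1f} microns")  # 2.0 microns
```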

As a result, we have a choice between two counterintuitive claims. Either we say that duration and distance are relative, or we have to say that our best machining and time measuring is necessarily off, and we don’t know by how much, since we don’t know what the True Absolute Reference Frame is.

Monday, April 8, 2024

Eclipse

The day started off all cloudy, but the clouds got less dense, and then when the eclipse in our front yard reached totality, we had a big break in the clouds.




The first picture has a sunspot in the middle. In the totality picture, slightly to the right of the bottom of the sun there is a hint of a reddish prominence, which in my 8" telescope had lovely structure. A quick measurement from the photo shows that the prominence is about seven times the size of the earth.

Saturday, April 6, 2024

Plastic belt buckle

Quite a while back, I came across a discarded belt with a broken buckle. I kept it in my "long stringy things" box in the garage until I could figure out what to do with it. Finally, today, I designed and 3D printed a new buckle for it, along with plastic rivets. I replaced all the metal, and now I have a no-metal belt that hopefully can clear airline security without being removed (not tested yet).





Friday, April 5, 2024

A weaker epiphenomenalism

A prominent objection to epiphenomenalist theories of qualia, on which qualia have no causal efficacy, is that then we have no way of knowing that we had a quale of red. For a redness-zombie, who has no quale of red, would have the very same “I am having a quale of red” thought as me, since my “I am having a quale of red” thought is not caused by the quale of red.

There is a slight tweak to epiphenomenalism that escapes this objection, and the tweaked theory seems worth some consideration. Instead of saying that qualia have no causal efficacy, on our weaker epiphenomenalism we say that qualia have no physical effects. We can then say that my “I am having a quale of red” thought is composed of two components: one of these components is a physical state ϕ2 and the other is a quale q2 constituting the subjective feeling of thinking that I am having a quale of red. After all, conscious thoughts plainly have qualia, just as perceptions do, if there are qualia at all. We can now say that the physical state ϕ2 is caused by the physical correlate ϕ1 of the quale of red, while the quale q2 is wholly or partly caused by the quale q1 of red.

As a result, my conscious thought “I am having a quale of red” would not have occurred if I lacked the quale of red. All that would have occurred would be the physical part of the conscious thought, ϕ2, which physical part is what is responsible for further physical effects (such as my saying that I am having a quale of red).

If this is right, then the induced skepticism about qualia will be limited to skepticism with respect to unconscious thoughts about qualia. And that’s not much of a skepticism!

Thursday, April 4, 2024

Divine thought simplicity

One of the motivations for denying divine simplicity is the plausibility of the claim that:

  1. There is a multiplicity of divine thoughts, which are a proper part of God.

But it turns out there are reasons to reject (1) independent of divine simplicity.

Here is one reductio of the distinctness of God and God’s thoughts.

  2. God is distinct from his thoughts.

  3. If x’s thoughts are distinct from x, then x causes x’s thoughts.

  4. Everything caused by God is a creature.

  5. So, God’s thoughts are creatures.

  6. Every creature explanatorily depends on a divine rational decision to create it.

  7. A rational decision explanatorily depends on thoughts.

  8. So, we have an ungrounded infinite explanatory regress of thoughts.

  9. Ungrounded infinite explanatory regresses are impossible.

  10. Contradiction!

Here is another that also starts with 2–5 but now continues:

  11. God’s omniscience is identical with or dependent on God’s thoughts.

  12. None of God’s essential attributes are identical with or dependent on any creatures.

  13. Omniscience is one of God’s essential attributes.

  14. Contradiction!
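The regress step in the first reductio can be compressed into a formal sketch: collapse the creature/decision premises into the claim that every divine thought explanatorily depends on some divine thought, and render the ban on ungrounded regresses as well-foundedness of the dependence relation. Here is a Lean sketch under those assumptions (the names `dep`, `step`, and `wf` are mine, not anything standard):

```lean
axiom T : Type                    -- divine thoughts
axiom t0 : T                      -- there is at least one divine thought
axiom dep : T → T → Prop          -- dep a b: a explanatorily depends on b

-- Collapsing the creature/decision steps: every divine thought
-- explanatorily depends on some divine thought.
axiom step : ∀ t : T, ∃ t', dep t t'

-- No ungrounded infinite explanatory regresses: the dependence
-- relation is well-founded.
axiom wf : WellFounded (fun a b : T => dep b a)

-- The premises jointly yield a contradiction.
theorem contradiction : False :=
  wf.induction (C := fun _ => False) t0
    (fun t ih => (step t).elim (fun t' h => ih t' h))
```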

Intending the bad as such

Here is a plausible thesis:

  1. You should never intend to produce a bad effect qua bad.

Now, even the most hardnosed deontologist (like me!) will admit that there are minor bads which it is permissible to intentionally produce for instrumental reasons. If a gun is held to your head, and you are told that you will die unless you come up to a stranger and give them a moderate slap with a dead fish, then the slap is the right thing to do. And if the only way for you to survive a bear attack is to wake up your fellow camper who is much more handy with a rifle than you are, and the only way to wake them up is to poke them with a sharp stick, then the poke is the right thing. But these cases are not counterexamples to (1), since while the slap and poke are bad, one is not intending them qua bad.

However, there are more contrived cases where it seems that you should intend to produce a bad effect qua bad. For instance, suppose that you are informed that you will die unless you do something clearly bad to a stranger, but it is left entirely up to you what the bad thing is. Then it seems obvious that the right thing to do is to choose the least bad thing you can think of—the lightest slap with a dead fish, perhaps, that still clearly counts as bad—and do that. But if you do that, then you are intending the bad qua bad.

Yet I find (1) plausible. I feel a pull towards thinking that you shouldn’t set your will on the bad qua bad, no matter what. However, it seems weird to think that it would be right to give a stranger a moderate slap with a dead fish if that was specifically what you were required to do to save your life, but it would be wrong to give them a mild slap if it were left up to you what bad thing to do. So, very cautiously, I am inclined to deny (1) in the case of minor bads.

Tuesday, April 2, 2024

Abstaining from goods

There are many times when we refrain from pursuing an intrinsic good G. We can classify these cases into two types:

  1. we refrain despite G being good, and

  2. we refrain because G is good.

The “despite” cases are straightforward, such as when one refrains from reading a novel for the sake of grading exams, despite the value of reading the novel.

The “because” cases are rather more interesting. St Augustine gives the example of celibacy for the sake of Christ: it is because marriage is good that giving it up for the sake of Christ is better. Cases of religious fasting are often like this, too. Or one might refrain from something of value in order to punish oneself, again precisely because the thing is of value. These are self-sacrificial cases.

One might think another type of example of a “because” case is where one refrains from pursuing G now in order to obtain it by a better means, or in better circumstances, in the future. For instance, one might refrain from eating a cake on one day in order to have the cake on the next day, which is a special occasion. Here the value of the cake is part of the reason for refraining from pursuit. On reflection, however, I think this is a “despite” case. For we should distinguish between the good G1 of having the cake now and the good G2 of having the cake tomorrow. Then in delaying one does so despite the good of G1 and because of the good of G2. The good of G1 is not relevant, unless this becomes sacrificial.

I don’t know if all the “because” cases are self-sacrificial in the way celibacy is. I suspect so, but I would not be surprised if a counterexample turned up.