Friday, June 29, 2012

An argument for a necessary being from healthy wonder

  1. (Premise) A constitutive part of wondering why p is a desire to know why p.
  2. (Premise) A healthy wonder has only healthy desires as constitutive parts.
  3. (Premise) Some people have a healthy wonder why there are contingent beings.
  4. (Premise) A desire for an impossible state of affairs is not healthy.
  5. So, some people have a healthy desire to know why there are contingent beings. (1-3)
  6. So, it is possible to know why there are contingent beings. (4 and 5)
  7. (Premise) Necessarily, if someone knows why p, then there is an explanation of why p.
  8. So, it is possible for there to be an explanation of why there are contingent beings. (6 and 7)
  9. (Premise) If there is no necessary being, there cannot be an explanation of why there are contingent beings.
  10. So, there is a necessary being. (8 and 9)

Thursday, June 28, 2012

Naturalism and the problem of pain

Here's one way to formulate the argument from pain against theism: "Granted, beings like us need an intense sensation normally triggered by damage that in turn normally triggers strongly aversive behavior. But if God were designing us, we would not expect this sensation to be painful. We would instead expect some intense non-painful sensation to be triggered by damage that in turn triggers strongly aversive behavior. Hence God did not design us." This version of the problem of pain is based on the Possibility Premise:

(PP) It would be possible to have a non-painful sensation that normally has the same triggers as pain and normally leads to the same aversive behavior.

Now, the typical atheist is a naturalist, and the best naturalistic theories of mind are functionalist theories on which mental states are defined by their functional interconnections. If functionalism is true, PP is unlikely to be true: a sensation with pain's normal triggers and pain's normal aversive effects would, by functionalist lights, simply be pain. This creates a dialectical problem for the typical atheist running the above version of the argument from pain.

Here's one way to see the dialectical problem. Either there is good reason to believe in PP or there isn't. If there isn't, then the theist shouldn't be saddled with PP either. Nothing in theism commits one to PP. It's true that the typical theist is a dualist, and dualism does make PP plausible, but even that is a fairly weak "make plausible": it is easy to be a dualist who denies PP. This is especially true if what pushes one to dualism is not the problem of consciousness but the problem of mental content. Now if there is good reason to believe in PP, then it's fair enough to use PP in an argument against theism. But now the dialectical problem is that PP also provides significant evidence against functionalism, and hence against naturalism, and hence against typical versions of atheism.

Of course, one can give versions of the argument from pain that don't make use of PP. What I say only applies to one version of the argument from pain.

Wednesday, June 27, 2012

An ontological argument from the essentiality of origins

  1. (Premise) If a perfect being can't desire a state of affairs A, then A is not a good state of affairs.
  2. (Premise) A perfect being can't desire any state of affairs incompatible with the existence of a perfect being.
  3. (Premise) Necessarily, if a perfect being exists and if Jean Vanier (or Barack Obama or some other person you admire) exists, then Jean Vanier (etc.) is created by the perfect being.
  4. (Premise) If x is not created by a perfect being, then x cannot be created by a perfect being. (Essentiality of origins)
  5. (Premise) The state of affairs of Jean Vanier (etc.) existing is good.
  6. So, a perfect being can desire Jean Vanier (etc.) to exist. (1 and 5)
  7. So, Jean Vanier's (etc.) existence is compatible with the existence of a perfect being. (2 and 6)
  8. So, possibly a perfect being exists and creates Jean Vanier (etc.). (3 and 7)
  9. So, a perfect being created Jean Vanier (etc.). (4 and 8)
  10. So, a perfect being exists. (9)

[Note added later: This was, of course, written before the revelations about Jean Vanier's abusiveness. I would certainly have chosen a different example if I were writing this post now.]

Tuesday, June 26, 2012

An ontological argument based on powerful-making properties

  1. (Premise) The property, WA, of being such that for every world w it is possible for one to weakly actualize w is a powerful-making property.
  2. (Premise) No powerful-making property entails the property, PL, of being powerless.
  3. If WA is not possibly exemplified, then it entails all properties. (Fact about property entailment)
  4. So, WA is possibly exemplified. (1-3)
  5. (Premise) Necessarily, if x weakly actualizes w then x exists and w has been actualized.
It follows by S5 that there actually exists a being that has WA and that has weakly actualized our world.

Weak actualization is Plantinga's concept. We might inductively say that whatever x directly creates is weakly actualized by x, and whatever comes from something weakly actualized by x is weakly actualized by x.

Monday, June 25, 2012

In-the-limit vagueness

I don't know much about vagueness, so I suspect that this is much more learnedly discussed in the literature. But I am in the midst of unpacking, so I don't have time to look things up.

Let D be the definitely operator. Write D^n for D...D with n iterations of D. Let D* be the super-definitely operator: D*p if and only if Dp and D^2p and D^3p and ....

According to Williamson, in his Vagueness book:

  1. If D*p, then DD*p, and indeed D*D*p.
Williamson doesn't give the argument, but here's one. Suppose:
  2. If Dp_1 & Dp_2 & ..., then D(p_1 & p_2 & ...).
Then (1) follows quite easily. For let p_n be D^n p. Then if D*p, we have Dp_n for all n, since Dp_n is just D^(n+1)p, and by (2) it follows that we definitely have the conjunction of the p_n. But the conjunction of the p_n is equivalent to D*p. So we have DD*p. Iterating this argument we get D^n D*p for all n and hence D*D*p.

The point behind the super-definitely operator is to capture the idea of maximal definiteness. But it doesn't. For given arbitrarily high levels of vagueness one can have a situation structurally similar to the following:

  • People who are 5 1/4 feet tall are short but not definitely short.
  • People who are 5 1/8 feet tall are definitely short but not definitely definitely short.
  • People who are 5 1/16 feet tall are definitely definitely short but not definitely definitely definitely short.
  • ...
  • People who are 5 feet tall are super-definitely short.
Now consider an infinite sequence of people, x_1, x_2, x_3, ..., such that x_n has height 5 + 1/2^(2+n) feet. Let s_n be the proposition that x_n is short. Suppose x_0 is exactly five feet tall and let s_0 be the proposition that x_0 is short. Then by the above we have:
  • Ds_1 but not D^2 s_1
  • D^2 s_2 but not D^3 s_2
  • D^3 s_3 but not D^4 s_3
  • ...
  • D* s_0
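
This hierarchy can be put in toy numerical form. The sketch below is my own illustration, not anything in the post: I assume, matching the heights in the list above, that D^n "short" holds of a height up to exactly 5 + 1/2^(n+2) feet, and that heights of 5 feet or less are super-definitely short.

```python
# Toy model of iterated definiteness for "short" (assumed thresholds:
# D^n-short iff height <= 5 + 1/2^(n+2) feet; height <= 5 ft is
# super-definitely short).

def definiteness_level(height_ft):
    """Largest n such that D^n('short') holds of this height; None if not
    even short; float('inf') if super-definitely short."""
    if height_ft <= 5:
        return float('inf')
    if height_ft > 5 + 1/2**2:   # taller than 5 1/4 ft: not short at all
        return None
    n = 0
    while height_ft <= 5 + 1/2**(n + 3):
        n += 1
    return n

# x_n has height 5 + 1/2^(2+n): D^n-short but not D^(n+1)-short.
for n in range(1, 5):
    print(n, definiteness_level(5 + 1/2**(2 + n)))  # prints 1 1, 2 2, 3 3, 4 4
```

On this model the disjunction p = s_1 or s_2 or ... picks up every finite level of definiteness (for each n, some disjunct is D^n), even though no single disjunct other than s_0 would be super-definitely short.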

Let p be the disjunction s_1 or s_2 or .... Assume the very plausible axiom:

  3. If p is a disjunction, finite or infinite, that has a disjunct q such that D^n q, then D^n p.
Fix any n. Then D^n p, by (3): one of the disjuncts of p is s_n, and D^n s_n by the above. Hence D*p by definition of D*.

If D* captured the idea of maximal definiteness, then p would have to be maximally definite. But I don't think p is maximally definite. Each of the disjuncts in p has some higher-level vagueness, and this vagueness does not disappear in the disjunction (in the way it perhaps does in "Sam is bald or not bald"). Intuitively, s_0 is maximally definite, but p has less higher-level definiteness than s_0.

We might say that p suffers from in-the-limit higher level vagueness.

I am also not sure I want to say that DD*p. I grant that each conjunct of D*p (the nth conjunct being D^n p) holds definitely. But I am not happy with saying that the whole infinite conjunction holds definitely. Thus I wonder if (2) shouldn't be rejected.

Wednesday, June 20, 2012

Laws violating religious freedom and conscience

This post is an oblique response to one of the lines of thought in a petition against Notre Dame University's lawsuit against the HHS contraception mandate.

If your religion or conscience (and on my view of conscience, the former is a special case of the latter if you sincerely accept the religious teachings) forbids you to obey a law, then the law violates your religious freedom or your freedom of conscience. (There is also a further question whether this violation is justified, and I won't address that question.) But the converse is not true. A law can violate your religious freedom, and maybe your freedom of conscience (that's a harder question), even if obedience is not forbidden by your religion or your conscience.

This is easiest shown by example. A paradigm example of a law violating religious freedom is a law prohibiting Christians from meeting to worship on Sunday under pain of death. But obedience to such a law need not go against the requirements of Christianity. Christianity does not require public Sunday worship when such worship seriously endangers innocent life, including one's own. Thus, there is no duty to get to Sunday worship if there is a hurricane, and to get to church one would have to leave the hurricane shelter one is in. Thus, a law that prohibited Christians from Sunday worship on pain of death would violate religious freedom without Christianity holding it to be wrong to obey the law. In case it's not clear that this law violates religious freedom, one can run this a fortiori argument. A law forbidding Sunday worship with a five dollar fine as a penalty would be wrong to obey according to Christianity, unless one is quite poor, and hence violates religious freedom. But if forbidding Sunday worship under pain of a five dollar fine violates religious freedom, a fortiori so does forbidding Sunday worship under pain of death.

For another example, consider a law explicitly prohibiting Jews from meeting to pray together on the Sabbath. It is my understanding that while rabbinical Judaism encourages meeting to pray together on the Sabbath, it does not require this (if I am wrong, just make it a hypothetical example). Thus, this would be a law that it is not wrong to obey, but it surely violates religious freedom.

In fact, one might even have a law that violates freedom of religion without requiring or forbidding the practitioners to do anything. For instance, consider a law requiring doctors who are not themselves Jehovah's Witnesses to forcibly administer blood transfusions to Jehovah's Witnesses when this is medically indicated, even when the Witness does not consent. Such a law violates the patient's freedom of religion, even though the patient is not being required or forbidden to do anything by the law. (The law may also violate the doctor's freedom of conscience.)

It is harder to see whether a law obedience to which does not violate conscience can violate freedom of conscience. There is a prima facie case for a negative answer: How can freedom of conscience be violated by something that doesn't require one to go against conscience?

But I think a case can be made that it is possible to violate freedom of conscience without requiring something contrary to conscience. The cases parallel the above two.

The case of Christian Sunday worship was one where something is required unless there are serious reasons to the contrary. Now, typical vegetarians do not think it is always wrong to eat meat. They would not, for instance, think that an Inuit child whose parents only make meat available to her in winter is morally required to refuse to eat it and thus starve to death. But now imagine a law put in place by the pork lobby that requires everyone to eat six ounces of pork daily, under penalty of death. If it is permissible to eat meat to preserve one's life, it would be permissible for the vegetarian to eat the pork. But surely there is something very much like violation of the vegetarian's freedom of conscience here.

The common thread between the Sunday worship and vegetarian cases is that these are situations where there is a strong duty to go against what the law says, but it is the law's penalty that provides a defeater for that duty.

To parallel the case of rabbinical Jewish attitudes to Sabbath worship, consider a Kantian. Now, Kantians believe that there is an imperfect duty to help others, i.e., a duty where it is not specified to what degree and in what way one should help others. Imagine, then, a law that prohibited one from helping others except between 4:30 pm and 5:00 pm on Tuesdays. Such a law might not be such that Kantianism forbids one to obey it. But it is a law that surely in some important sense violates the Kantian's freedom of conscience, by forbidding that which her conscience very strongly encourages her to do, namely help people at other times, even if it does not specifically require it.

Tuesday, June 19, 2012

Visits to a nonmeasurable set and a new sceptical worry

Let X1,X2,... be a sequence of independent, identically distributed random variables. Let H be a set of values, and let Rn(H) be the proportion of X1,...,Xn that are in H. Thus Rn(H)=Vn(H)/n, where Vn(H) is the number of times that the sequence X1,...,Xn has visited H. We can call Rn(H) the rate of visits to H.

The strong Law of Large Numbers then shows that if H is a measurable set, then, almost surely (i.e., with probability one), Rn(H) converges to P(X1 in H). We can use X1 (or any of the other variables, since they are identically distributed) to induce a probability measure P0 on the set of possible values via the formula P0(H)=P(X1 in H). Thus, for measurable H, almost surely, lim Rn(H)=P0(H). I.e., the asymptotic rate of visits to a measurable set H is equal to the probability of that set.
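
As a quick illustration of the measurable case, here is a minimal simulation (my own example, not from the post): the X_i are i.i.d. uniform on [0,1] and H = [0, 0.4), so P0(H) = 0.4 and the rate of visits R_n(H) should settle near 0.4.

```python
import random

# Assumed setup for illustration: X_i i.i.d. uniform on [0,1], H = [0, 0.4),
# a measurable set with P0(H) = 0.4.  By the strong law of large numbers,
# R_n(H) = V_n(H)/n converges to 0.4 almost surely.

random.seed(0)

def rate_of_visits(n, in_H):
    visits = sum(1 for _ in range(n) if in_H(random.random()))  # V_n(H)
    return visits / n                                            # R_n(H)

for n in (100, 10_000, 1_000_000):
    print(n, rate_of_visits(n, lambda x: x < 0.4))
```

For a maximally nonmeasurable H no such simulation is even definable, since no sampling distribution assigns H a probability; that is just the point of what follows.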

But what if H is nonmeasurable? We could consider the general case, but let's simplify and make things more interesting. What if H is maximally nonmeasurable? A set H is maximally nonmeasurable with respect to a probability measure P0 if and only if:

  • All the measurable subsets of H have measure zero.
  • All the measurable supersets of H have measure one.
On a reasonable assignment of interval-valued probabilities, a maximally nonmeasurable set is one that gets the full interval [0,1].

Such sets are intuitively a mess. So how should we expect the rate of visits to a maximally nonmeasurable set to behave? It was my intuition that we can expect the rate of visits to be a mess: to not converge to any particular value.[note 1] Here's a precise way to formulate the question. Let B be any non-empty proper subset of the interval [0,1]. Form the following subsets of our original P-probability space:

  • L(H): the set of points of the probability space such that lim Rn(H) exists.
  • LB(H): the set of points of the probability space such that lim Rn(H) exists and falls in B.
  • IB(H): the set of points of the probability space such that liminf Rn(H) falls in B.
  • SB(H): the set of points of the probability space such that limsup Rn(H) falls in B.

My intuition that we should expect the rate of visits to be a nonconvergent mess is an intuition that the complement of L(H) should have high probability, or, if it is itself nonmeasurable, that it should contain a measurable subset of high probability. If some proofs that I haven't checked all the details of are correct, this intuition is wrong.

Conjecture (Theorem if my proofs are right): The sets L(H), IB(H), SB(H) and LB(H) are all maximally nonmeasurable if H is maximally nonmeasurable.

If this is correct, then there is basically nothing probabilistic you can say about the asymptotic convergence of Rn(H) for a maximally nonmeasurable set H. You can't say that the rate of visits probably will converge (no surprise there) and you can't say that it probably won't.

So what? Well, consider now a new sceptical problem. We perform some experiment E a thousand times, and 405 times we get outcome H. We very reasonably want to conclude that the circumstances of the experiment E have approximately a 40% tendency of producing outcome H. And the greater the number of experiments we do, as long as the observed rate of H's is around 40%, the more confident we are of this judgment, with our confidence going to one in the limit.

But wait! What about the sceptical hypothesis that the objective chances are such that H is maximally nonmeasurable given E? It is tempting to say: "Well, that could be true, but the longer our sequence of experiments with a rate of around 40%, the more confident we should be that H is measurable and has measure around 40%. We just wouldn't expect to get such nice convergence if H were maximally nonmeasurable." However, the Conjecture, assuming it's correct, shows that as a piece of probabilistic reasoning, this is completely wrong. For it is neither likely nor unlikely on the maximal nonmeasurability hypothesis that we would observe an asymptotic rate of 40% (or of any other value). To see this, let B be the singleton {0.4}, and note that LB(H) is maximally nonmeasurable. Thus, its interval-valued probability is all of [0,1], and we can have no probabilistic expectations about it.

If the maximal nonmeasurability hypothesis cannot be ruled out a posteriori, and yet must be ruled out, then it must be ruled out a priori. I think our best hope is a postulate that outcomes are always at least partly measurable, i.e., aren't maximally nonmeasurable. And that's a kind of Principle of Sufficient Reason.

I think my (unchecked) proofs of the Conjecture can generalize to give a more complicated result in the case of sets that are nonmeasurable but not maximally nonmeasurable.

Wednesday, June 13, 2012

A theory of time

This isn't meant to be a very good theory, but it's a start. The notion I want to explicate is this notion of temporal priority between events: A is at least in part earlier than the start of B. I will abbreviate this to "A is earlier than B". The explication: A is earlier than B if and only if there is a chain of at least partial causation starting at A and ending at B.
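
The proposal can be sketched as a reachability check over a causal graph. This is a toy model of my own, with made-up event names; partial causation is represented by directed edges, and temporal priority by the existence of a path.

```python
# Sketch of the explication: "A is earlier than B" iff there is a chain of
# at-least-partial causation from A to B.  Edges encode direct partial
# causation; earlier() checks for a causal chain (path) by depth-first search.

def earlier(causes, a, b):
    """True iff there is a nonempty causal chain from a to b."""
    stack, seen = [a], set()
    while stack:
        e = stack.pop()
        for f in causes.get(e, ()):
            if f == b:
                return True
            if f not in seen:
                seen.add(f)
                stack.append(f)
    return False

causes = {              # event -> events it (at least partly) causes
    "spark": ["fire"],
    "fire": ["smoke", "heat"],
}
print(earlier(causes, "spark", "smoke"))  # True: spark -> fire -> smoke
print(earlier(causes, "smoke", "heat"))   # False: no chain, so no temporal order
```

One way to see the time-travel worry below: if the graph contained a causal loop, earlier(A, A) would come out true, so backwards causation would make an event earlier than itself on this explication.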

A consequence of this theory is that it is not possible to have simultaneous causation: if A causes B, then A is earlier than B. That's a count against it, but perhaps not a fatal one.

Another consequence of this theory is that it gives no account of simultaneity between events. That may not be such a bad thing.

A limitation is that we have no notion of a time, just of temporal ordering of events. That may be fine. But the costs are adding up.

I am more troubled by the fact that this rules out time travel and, more generally, temporally backwards causal influences. This makes me want to reject the theory.

But I can reprise the theory, not as a theory of the temporal priority between events, but of the temporal priority between accidents (or maybe just modes?) of a single substance. Just say that an accident A of a substance S is earlier than an accident B of S if and only if there is a chain of at least partial causation between accidents of S starting at A and ending at B.

We still have to rule out the possibility of temporally backwards causation within the life of a single substance. But that's less costly, I think, than ruling out temporally backwards causation between events in general.

We still have the problem of not having simultaneous causation or any account of simultaneity for that matter. And no notion of times.

We can introduce times as follows. In some worlds, it will happen that there are nomic relationships between the accidents of a substance that are simply parametrized in terms of some parameter t such that accident A is earlier than accident B (in the above causal sense) if and only if t(A)<t(B). In such a case, we can call values of this parameter times. In worlds where there is no such neat parametrization, there may be temporal priority, but no times.

We get divine internal atemporality now as a corollary of the claim that God has no accidents.

But there are still a lot of costs. For one, the lack of a notion of simultaneity makes it hard to make sense of the transcendental unity of apperception. Maybe that's just too bad for that unity?

Tuesday, June 12, 2012

Eternal suffering and materialism

The following argument is valid:

  1. No possible four-dimensional arrangement of mere matter is intrinsically such that it is worth sacrificing one's life to prevent its existence.
  2. If materialism is true, then there is a possible four-dimensional arrangement of mere matter that is a society consisting entirely of good people who suffer horrible torment forever.
  3. A society consisting entirely of good people who suffer horrible torment forever is intrinsically such that it is worth sacrificing one's life to prevent its existence.
  4. So, materialism is false.

Is this a good argument against materialism? I think there is a lot of intuitive plausibility in the idea that arrangements of matter, in themselves, are just not very important, except for possible esthetic value. But nothing is so intrinsically ugly that it is worth sacrificing one's life precisely to prevent its existence.

The materialist, I think, will simply deny (1). Nonetheless, I think that there is some cost to denying (1).

Monday, June 11, 2012

Absolutely nonmeasurable sets

The ideal of a non-zero (point) probability assignment to all possibilities is incoherent for cardinality reasons. Moreover, as Alan Hajek has insisted, the existence of nonmeasurable sets provides further difficulties.

One might try to get around both issues by problem-specific Bayesianism, where one only insists on a probability assignment specific to a particular problem at hand. This gets around my no-go theorem, since that theorem shows that there is no single non-zero probability assignment to all the possibilities there are. But in any given probabilistic calculation, the collection of possibilities is restricted to some set, and then there could well be a generalized probability (e.g., satisfying the axioms here) for that problem.

One might even have some hope that problem-specific Bayesianism could handle the issue of nonmeasurable sets. For there are isometrically invariant extensions of Lebesgue measure (i.e., extensions invariant under translation, rotation and reflection) that make some Lebesgue nonmeasurable sets be measurable.

But no such luck. Start by noting that there are absolutely nonmeasurable sets. A bounded absolutely nonmeasurable set (I'm making up this technical term) is a subset A of n-dimensional Euclidean space R^n such that there is no isometrically invariant probability measure that (a) makes A measurable, (b) assigns finite measure to every bounded measurable subset of R^n, and (c) assigns non-zero measure to some bounded subset of R^n. The Hausdorff Paradox then shows that there is a bounded absolutely nonmeasurable set if n=3, assuming the Axiom of Choice.

In fact, from the Hausdorff Paradox we can prove that there is a bounded subset A of R^3 such that there is no isometrically invariant generalized finitely additive probability measure, e.g., in the sense of this post, on the cube [0,1]^3 or on the three-dimensional ball of unit radius that makes A measurable.

So the problem-specific approach also runs into trouble, at least assuming the Axiom of Choice. And the Axiom of Choice (or, more weakly, the Boolean Prime Ideal Theorem--I don't know if this makes a difference, but in any case BPI has no intuitive support beyond the fact that AC implies it) is also assumed by hyperreal extensions of probability theory.

Of course, if one allows for interval-valued measures, that's a different kettle of fish.

Friday, June 8, 2012

Non-measurable sets and interval-valued probabilities

I think there is nothing new here, but I want to collect together some facts that are interesting to me.

Suppose m is a countably additive measure on a set U. Then it's pretty easy to show that for any subset B of U, measurable or not, there exist measurable sets A and C such that:

  • A is a subset of B and B is a subset of C
  • A is maximal in measure among the measurable subsets of B: for every measurable subset A' of B, m(A')≤m(A)
  • C is minimal in measure among the measurable supersets of B: for every measurable superset C' of B, m(C')≥m(C).
(Cf. Van Vleck.) We can now define the lower measure lm(B)=m(A) and the upper measure um(B)=m(C). Of course lm(B)≤um(B) for all B.
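
In the simple case where the measurable sets are exactly the unions of cells of a finite partition of U, the extremal sets A and C are easy to exhibit: A is the union of the cells contained in B, and C is the union of the cells that meet B. Here is a toy discrete computation of my own (not Van Vleck's setting) along those lines.

```python
from fractions import Fraction

# Toy discrete illustration: U is finite, the measurable sets are unions of
# cells of a partition, and m gives each cell a mass.  The maximal measurable
# subset A of B is the union of cells contained in B, so lm(B) = m(A); the
# minimal measurable superset C is the union of cells meeting B, so um(B) = m(C).

def lower_upper(cells, masses, B):
    lm = sum((m for cell, m in zip(cells, masses) if cell <= B), Fraction(0))
    um = sum((m for cell, m in zip(cells, masses) if cell & B), Fraction(0))
    return lm, um

cells = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
masses = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]

B = {1, 2, 3}                        # contains one cell, meets a second
print(lower_upper(cells, masses, B))  # (1/2, 3/4)
```

Here B is "nonmeasurable" relative to this coarse algebra precisely because lm(B) < um(B), matching the criterion in the next paragraph.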

If m is a probability measure (i.e., m(U)=1), we can then extend the measure m on U to a complete measure (i.e., one such that any subset of a set of measure zero is also measurable) simply by taking as measurable all sets B such that lm(B)=um(B), and then setting m(B)=um(B).

From now on assume m is a complete probability measure. Then a set B is measurable if and only if lm(B)=um(B).

Suppose that X1,X2,... are independent random variables taking their values in U, with the probability that Xi is in B being equal to m(B). Let Sn(B) be the number of the variables X1,...,Xn whose values are in B. If B is measurable, then the Strong Law of Large Numbers implies that almost surely (i.e., with probability 1), Sn(B)/n converges to m(B). It immediately follows that in general, whether or not B is measurable, almost surely

  • lm(B) ≤ liminf Sn(B)/n ≤ limsup Sn(B)/n ≤ um(B).
In other words, almost surely, all the limit points of the asymptotic frequency Sn(B)/n fall between the lower and upper measures of B.
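
A small sanity check of this squeeze, under an assumed toy setup of my own: count only unions of the cells [0, 1/3), [1/3, 2/3), [2/3, 1] as measurable, so that B = [0, 1/2) is nonmeasurable relative to that algebra, with lower measure lm(B) = 1/3 (the first cell sits inside B) and upper measure um(B) = 2/3 (B meets the first two cells). The simulated visit frequency should then land between the bounds.

```python
import random
random.seed(1)

# Assumed toy setup: X_i uniform on [0,1]; only unions of [0,1/3), [1/3,2/3),
# [2/3,1] count as measurable.  B = [0, 1/2) then has lm(B) = 1/3 and
# um(B) = 2/3, and the frequency of visits to B (here near 1/2) must fall
# between those bounds.

def visit_frequency(n, in_B):
    return sum(1 for _ in range(n) if in_B(random.random())) / n  # S_n(B)/n

lm, um = 1/3, 2/3
freq = visit_frequency(200_000, lambda x: x < 0.5)
print(lm <= freq <= um, round(freq, 3))
```

Of course, in this toy case B is measurable with respect to the underlying Lebesgue measure, which is why the frequency converges at all; the interest of the squeeze is that it holds even when no such finer measure is available.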

It would be interesting to see what else we can say about the limit points of the asymptotic frequency. One might speculate that lm(B) and um(B) are almost surely limit points of the asymptotic frequency, but I think that's not true in general. But could it be true in the special case where m is Lebesgue measure on an interval?

I've been thinking from time to time about this question: What do asymptotic frequencies of visits to a nonmeasurable set look like? Still no answer.

In any case, the above stuff suggests that dealing with nonmeasurable sets might be a good application for interval-valued probabilities, where we assign the interval [lm(B),um(B)] as the probability of B.

Oh, and finally, it's worth noting that Van Vleck has in effect shown that if m is Lebesgue measure on [0,1], then there is a subset of [0,1] whose lower measure is zero and whose upper measure is one.

Thursday, June 7, 2012

Punishment, time, and a poor objection to identity after fission

It is a truism that the punishment should follow the crime. Thus, even if you know with enough certainty for court conviction that Smith will commit a crime, that is not enough for punishment.

But we need a qualifier. Punishment only needs to follow the crime in internal time. Smith builds a time machine while in prison and travels to 160 million BC. Now, there is a small police outpost in 160M BC, protecting time-traveling scientists from nefarious time travelers, and there is a small jail. It turns out that backwards time travel is much cheaper than forwards time travel (at any decent speed, measured in the ratio of external to internal time). To send Smith back to his time would be prohibitively expensive. There would be no injustice in jailing him in the year 160M BC, notwithstanding his protestations that he hasn't committed any crimes yet. For that's only true according to external time, since by his internal past, he had committed crimes.

Now, consider an apparent case of a person fissioning, say due to a Star Trek transporter malfunction. I used to argue that it is not tenable to suppose that the result is a single bilocated person, on the grounds that it would then be appropriate to punish the person in one location for what their copy in the other location did, which seems absurd. But I now think this argument is mistaken. For on the hypothesis that the person comes to be bilocated, we should now think of the person's internal time as having different branches corresponding to each location (this can best be seen by noting that we can run twin paradoxes between them). But then it is false that the person in location A can be justly punished at t2 for what the person in location B had done at an earlier time t1. For punishment should follow the crime in internal time. But since the two internal timelines are parallel, what B did is neither earlier than, nor simultaneous with, nor later than the punishment of A—there is no comparison between these internal times. Of course the external time of the punishment is later than the external time of the crime, but that is irrelevant. If parents took a 14-year-old Hitler for a time-travel excursion to our time, it would be wrong for us to now punish the young Hitler for the crimes he committed in the 1930s, since those crimes would not be earlier than the punishment in his internal time.

So while there might be objections to identity-after-fission, the punishment objection is not very strong.

One qualifier: It would not be wrong to set things up so that the punishment would be simultaneous with the crime, as long as the crime caused the punishment in the right way. So where I say that the punishment should follow the crime, I should include the possibility that the two are internally simultaneous, but with the crime explanatorily prior.

Tuesday, June 5, 2012

Transit of Venus

Here are some photos over the first 45 minutes or so. They are in sequence, but not evenly spaced in time.

This is from my 8" F/4.5 scope, stopped down to about 3", with photo taken hand-held with my Canon G7 camera off the projection funnel.


Here is the last photo in a larger size.  The sunspots were very nicely visible in the funnel (I counted about 15), and I could even see two without a telescope in the #14 welder's glass.  The photo doesn't do justice to the sunspots, especially the nice bright area that was just barely visible at the bottom of the disc.


Friday, June 1, 2012

A moderately smart being that knows all necessary truths can know everything

Suppose Fred knows all necessary truths and is at least as smart as the author of this post. Fred wants to know whether a proposition p is true. So Fred says: "I stipulate that P is the singleton set {p} and that S is the subset of all the members of P that are true." But sets have their members essentially. So S is necessarily empty or necessarily non-empty. If S is necessarily empty, then Fred knows that, and if S is necessarily non-empty, then Fred knows that, too. Since Fred is at least as smart as the author of this post, if Fred knows that S is necessarily empty, he can figure out that therefore S is empty, and hence that all the propositions in P are false, and hence that p is not true. And if Fred knows that S is necessarily non-empty, then Fred can figure out that therefore S is non-empty, and hence that p is true. In either case, then, Fred can figure out whether p is true.