Monday, August 14, 2017

Difficult questions about promises and duress

It is widely accepted that you cannot force someone to make a valid promise. If a robber, after finding that I have no valuables with me, puts a gun to my head and says: “I will shoot you unless you promise to go home and bring me all of the jewelry there”, and I say “I promise”, my promise seems to be null and void.

But suppose I am a cavalry officer captured by an enemy officer. The enemy officer is in a hurry to complete a mission, and it is crucial to his military ends that I not ride straight back to my headquarters and report what I saw him doing. He does not, however, have the time to tie me up, and hence he prepares to kill me. I yell: “I give you my word of honor as an officer that I will stay in this location for 24 hours.” He trusts me and rides on his way. (The setting for this is more than a hundred years ago.)

However, if promises made under duress are invalid, then the enemy officer should not trust me. One can only trust someone to do something when in some way a good feature of the person impels them to do that thing. (I can predict that a thief will steal my money if I leave it unprotected, but I don’t trust the thief to do that.) But there is no virtue in keeping void promises, since such promises do not generate moral reasons. In fact, if the promise is void, then I might even have a moral duty to ride back and report what I have seen. One shouldn’t trust someone to do something contrary to moral duty.

Perhaps, though, there is a relevant difference between the case of an officer giving parole to another, and the case of the robber. The enemy officer is not compelling me to make the promise. It’s my own idea to make the promise. Of course, if I don’t make the promise, I will die. But that fact doesn’t make for promise-canceling duress. Say I am dying of thirst, and the only drink available is the diet ginger ale that a greedy merchant is selling and which she would never give away for free. So I say: “I promise to pay you back tomorrow as I don’t have any cash with me.” I have made the promise in order to save my life. If the merchant gives me the ginger ale, the promise is surely valid, and I must pay the merchant back tomorrow.

Is the relevant difference, perhaps, that I originate the idea of the promise in the officer case, but not in the robber case? But in the merchant case, I would be no less obligated to pay the merchant back if we had a little dialogue: “Could you give me a drink, as I’m dying of thirst and I don’t have any cash?” – “Only if you promise to pay me back tomorrow.”

Likewise, in the officer case, it really shouldn’t matter who originates the idea. Imagine that it never occurred to me to make the promise, but a bystander suggests it. Surely that doesn’t affect the binding force of the promise. But suppose that the bystander makes the suggestion in a language I don’t understand, and I ask the enemy officer what the bystander says, and he says: “The bystander suggests you give your word of honor as an officer to stay put for 24 hours.” Surely it also makes no moral difference that the enemy officer acts as an interpreter, and hence is the proximate origin of the idea. Would it make a difference if there were no helpful bystander and the enemy officer said of his own accord: “In these circumstances, officers often make promises on their honor to stay put”? I don’t think so.

I think that there is still a difference between the robber case and that of the enemy officer who helpfully suggests that one make the promise. But I have a really hard time pinning down the difference. Note that the enemy officer might be engaged in an unjust war, much as the robber is engaged in unjust robbery. So neither has a moral right to demand things of me.

There is a subtle difference between the robber and officer cases. The robber is threatening my life in order to get me to make the promise. The promise is something that the robber is pursuing as the means to her end, namely the obtaining of jewelry. My being killed will not achieve the robber’s purpose at all. If the robber knew that I wouldn’t make the promise, she wouldn’t kill me, at least as far as the ends involved in the promise (namely, the obtaining of my valuables) go. But the enemy officer’s end, namely the safety of his mission, would be even more effectively achieved by killing me. The enemy officer’s suggestion that I make my promise is a mercy. The robber’s suggestion that I make my promise isn’t a mercy.

Does this matter? Maybe it does, and for at least three reasons. First, the robber is threatening my life primarily in order to force a promise. The enemy officer isn’t threatening my life primarily in order to force a promise: the threat would be there even if I were unable to make promises (or were untrustworthy, etc.). So there is a sense in which the robber is more fully forcing a promise out of me.

Second, it is good for human beings to have a practice of giving and keeping promises in the officer types of circumstances, since such a practice saves lives. But it is bad to have a practice of giving and keeping promises in the robber types of circumstances, since such a practice only encourages robbers to force promises out of people. Perhaps the fact that one kind of practice is beneficial and the other is harmful is evidence that the one kind of practice is normative for human beings and the other is not. (This will likely be the case given natural law, divine command, rule-utilitarianism, and maybe some other moral theories.)

Third, the case of the officer is much more like the case of the merchant. There is a circumstance in both cases that threatens my life independently of any considerations of promises—dehydration in the one case, and an enemy officer whom I’ve seen on his secret mission in the other. In both cases, it turns out that the making of a promise can get me out of these circumstances, but the circumstances weren’t engineered in order to get me to make the promise. But the case of the robber is very different from that of the merchant. (Interesting test case: the merchant drained the oases in the desert so as to sell drinks to dehydrated travelers. This seems to me to be rather closer to the robber case, but I am not completely sure.)

Maybe, though, I’m wrong about the robber case. I have to say that I am uncomfortable with voidly promising the robber that I will get the valuables when I don’t expect to do so—there seems to be a lie involved, and lying is wrong even to save one’s life. Or at least a kind of dishonesty. But this suggests that if I were planning on bringing the valuables, I would be acting more honestly in saying it. And that makes the situation resemble a valid promise. Maybe not, though. Maybe it’s wrong to say “I will bring the valuables” when one isn’t planning on doing so, but once one says it, one has no obligation to bring them. I don’t know. (This is related to the following sort of case. Suppose I don’t expect that there will be any yellow car parked on your street tonight, but I assert dishonestly in the morning that there will be a yellow car parked on your street in the evening. In the early afternoon, I am filled with contrition for my dishonesty to you. Normally, I should try to undo the effect of dishonesty by coming clean to the person I was dishonest to. But suppose I cannot get in touch with you. However, what I can do is go to the car rental place, rent a yellow car and park it on your street. Do I have any moral reason to do so? I don’t know. Not in general, I think. But if you were depending on the presence of the yellow car—maybe you made a large bet about it with a neighbor—then maybe I should do it.)

Computer languages

It is valuable, especially for philosophers, to learn languages in order to learn to see things from a different point of view, to think differently.

This is usually promoted with respect to natural languages. But the goal of learning to think differently is also furthered by learning logical languages and computer languages. In regard to computer languages, it seems that what is particularly valuable is learning languages representing opposed paradigms: low-level vs. high-level, imperative vs. functional, procedural vs. object-oriented, data-code-separating vs. not, etc. These make for differences in how one sees things that are, if anything, greater than the differences across natural human languages.
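
To make just one of these contrasts concrete, here is a toy sketch of the same computation written in an imperative style and in a functional style (the task, summing the squares of the even numbers in a list, and the function names are merely illustrative):

```python
def sum_even_squares_imperative(numbers):
    # Imperative style: a sequence of instructions mutating an accumulator.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total


def sum_even_squares_functional(numbers):
    # Functional style: describe the result as a transformation of the input,
    # with no explicit loop variable and no mutation.
    return sum(n * n for n in numbers if n % 2 == 0)


# Both yield the same value, but the first describes a process, the second a result.
assert sum_even_squares_imperative([1, 2, 3, 4]) == 20
assert sum_even_squares_functional([1, 2, 3, 4]) == 20
```

The difference in what one attends to while writing the two versions is, in miniature, the kind of shift in perspective at issue.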

To be honest, though, I’ve only ever tried to learn one language expressly for the above purpose, and I didn’t persevere: it was Haskell, which I wanted to learn as an example of functional programming. I ended up, however, learning OpenSCAD, which is a special-purpose functional language for describing 3D solids, though I didn’t do that to change how I think, but simply to make stuff my 3D printer can print. Still, I guess, I learned a bit about functional programming.

My next computer language task will probably be to learn a bit of Verilog and/or VHDL, which should be fun. I don’t know whether it will lead to thinking differently, but it might: thinking of an algorithm as something implemented in (often concurrent) digital logic rather than as a series of sequential instructions might lead to a shift in how I think, at least about algorithms. I’ve ordered a cheap Cyclone II FPGA from AliExpress ($17 including the USB Blaster for programming it) to use with the code, which should make the fun even greater.

All that said, I don’t know that I can identify any specific philosophical insights I had as a result of knowing computer languages. Maybe it’s a subtler shift in how I think. Or maybe the goal of thinking philosophically differently just isn’t furthered in these ways. But it’s fun to learn computer languages anyway.

Thursday, August 10, 2017

Uncountable independent trials

Suppose that I am throwing a perfectly sharp dart uniformly randomly at a continuous target. The chance that I will hit the center is zero.

What if I throw an infinite number of independent darts at the target? Do I improve my chances of hitting the center at least once?

Things depend on what size of infinity of darts I throw. Suppose I throw a countable infinity of darts. Then I don’t improve my chances: classical probability says that the union of countably many zero-probability events has zero probability.
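
In symbols, this is just countable subadditivity. Writing A_n for the event of hitting the center on the n-th throw (each of probability zero; the labels A_n are mine),

\[
P\Big(\bigcup_{n=1}^{\infty} A_n\Big) \le \sum_{n=1}^{\infty} P(A_n) = \sum_{n=1}^{\infty} 0 = 0.
\]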

What if I throw an uncountable infinity of darts? The answer is that the usual way of modeling independent events does not assign any meaningful probabilities to whether I hit the center at least once. Indeed, the event that I hit the center at least once is “saturated nonmeasurable”, i.e., it is nonmeasurable and every measurable subset of it has probability zero and every measurable superset of it has probability one.

Proposition: Assume the Axiom of Choice. Let P be any probability measure on a set Ω and let N be any non-empty event with P(N)=0. Let I be any uncountable index set. Let H be the subset of the product space Ω^I consisting of those sequences ω that hit N, i.e., ones such that for some i we have ω(i)∈N. Then H is saturated nonmeasurable with respect to the I-fold product measure P^I (and hence with respect to its completion).

One conclusion to draw is that the event H of hitting the center at least once in our uncountable number of throws in fact has a weird “nonmeasurable chance” of happening, one perhaps that can be expressed as the interval [0, 1]. But I think there is a different philosophical conclusion to be drawn: the usual “product measure” model of independent trials does not capture the phenomenon it is meant to capture in the case of an uncountable number of trials. The model needs to be enriched with further information that will then give us a genuine chance for H. Saturated nonmeasurability is a way of capturing the fact that the product measure can be extended to a measure that assigns any numerical probability between 0 and 1 (inclusive) one wishes. And one requires further data about the system in order to assign that numerical probability.

Let me illustrate this as follows. Consider the original single-case dart throwing system. Normally one describes the outcome of the system’s trials by the position z of the tip of the dart, so that the sample space Ω equals the set of possible positions. But we can also take a richer sample space Ω* which includes all the possible tip positions plus one more outcome, α, the event of the whole system ceasing to exist, in violation of the conservation of mass-energy. Of course, to be physically correct, we assign chance zero to outcome α.

Now, let O be the center of the target. Here are two intuitions:

  1. If the number of trials has a cardinality much greater than that of the continuum, it is very likely that O will result on some trial.

  2. No matter how many trials—even a large infinity—have been performed, α will not occur.

But the original single-case system based on the sample space Ω* does not distinguish O and α probabilistically in any way. Let ψ be a bijection of Ω* to itself that swaps O and α but keeps everything else fixed. Then P(ψ[A]) = P(A) for any measurable subset A of Ω* (this follows from the fact that the probability of O is equal to the probability of α, both being zero), and so with respect to the standard probability measure on Ω*, there is no probabilistic difference between O and α.

If I am right about (1) and (2), then what happens in a sufficiently large number of trials is not captured by the classical chances in the single-case situation. That classical probabilities do not capture all the information about chances is something we should already have known from cases involving conditional probabilities. For instance P({O}|{O, α}) = 1 and P({α}|{O, α}) = 0, even though O and α are on par.

One standard solution to the conditional probability case is infinitesimals. Perhaps P({α}) is an infinitesimal ι but P({O}) is exactly zero. In that case, we may indeed be able to make sense of (1) and (2). But infinitesimals are not a good model on other grounds. (See Section 3 here.)

Thinking about the difficulties with infinitesimals, I get this intuition: we want to get probabilistic information about the single-case event that has a higher resolution than is given by classical real-valued probabilities but lower resolution than is given by infinitesimals. Here is a possibility. Those subsets of the outcome space that have probability zero also get attached to them a monotone-increasing function from cardinalities to the set [0, 1]. If N is such a subset, and it gets attached to it the function f_N, then f_N(κ) tells us the probability that κ independent trials will yield at least one outcome in N.

We can then argue that f_N(κ) is always 0 or 1 for infinite κ. Here is why. Suppose f_N(κ)>0. Then κ must be infinite, since if κ is finite then f_N(κ) = 1 − (1 − P(N))^κ = 0 as P(N)=0. But 1 − f_N(κ + κ) = (1 − f_N(κ))², since the probabilities of independently missing N on each of two blocks of κ trials multiply, and κ + κ = κ (assuming the Axiom of Choice), so that 1 − f_N(κ) = (1 − f_N(κ))², which implies that f_N(κ) is zero or one. We can come up with other constraints on f_N. For instance, if C is the union of A and B, then f_C(κ) is the greater of f_A(κ) and f_B(κ).
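
Spelling out the last implication (x here is just my shorthand for the miss probability 1 − f_N(κ)):

\[
x = x^2 \;\Longleftrightarrow\; x(x-1) = 0 \;\Longleftrightarrow\; x \in \{0, 1\},
\]

and hence f_N(κ) is 0 or 1 for infinite κ.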

Such an approach could help get a solution to a different problem, the problem of characterizing deterministic causation. To a first approximation, the solution would go as follows. Start with the inadequate story that deterministic causation is chancy causation with chance 1. (This is inadequate, because in the original dart-throwing case, the chance of missing the center is 1, but throwing the dart does not deterministically cause one to hit a point other than the center.) Then say that deterministic causation is chancy causation such that the failure event F is such that f_F(κ)=0 for every cardinal κ.

But maybe instead of all this, one could just deny that there are meaningful chances to be assigned to events like the event of uncountably many trials missing or hitting the center of the target.

Sketch of proof of Proposition: First note that there is an extension Q of P^I such that Q(H)=0. This shows that any P^I-measurable subset of H must have probability zero.

Let Q_1 be the restriction of P to Ω − N (this is still normalized to 1 as N is a null set). Let Q_1^I be the product measure on (Ω − N)^I. Let Q be the measure on Ω^I defined by Q(A) = Q_1^I(A ∩ (Ω − N)^I). Now let A be any of the cylinder sets used for generating the product measure on Ω^I. Thus, A = ∏_{i∈I} A_i where there is a finite J ⊆ I such that A_i = Ω for i ∉ J. Then
Q(A) = ∏_{i∈J} Q_1(A_i − N) = ∏_{i∈J} P(A_i − N) = ∏_{i∈J} P(A_i) = P^I(A).
Since P^I and Q agree on cylinder sets, by the definition of the product measure, Q is an extension of P^I. Moreover, Q(H) = 0, since H ∩ (Ω − N)^I is empty.

To show that H is saturated nonmeasurable, we now need to show that any P^I-measurable set in the complement of H must have probability zero. Let A be any P^I-measurable set in the complement of H. Then A is of the form {ω ∈ Ω^I : F(ω)}, where F(ω) is a condition involving only coordinates of ω numbered by a fixed countable set of indices from I (we can make this precise). But no such condition can exclude the possibility that ω(i) ∈ N for some index i outside that countable set, unless the condition is entirely unsatisfiable. Hence no such set A lies in the complement of H, unless the set is empty, and so any P^I-measurable set in the complement of H has probability zero. And that’s all we need to show.

Tuesday, August 8, 2017

Naturalists about mind should be Aristotelians

  1. If non-Aristotelian naturalism about mind is true, a causal theory of reference is true.

  2. If non-Aristotelian naturalism about mind is true, then normative states of affairs do not cause any natural events.

  3. If naturalism about mind is true, our thoughts are natural events.

  4. If a causal theory of reference is true and normative states of affairs do not cause any thoughts, then we do not have any thoughts about normative states of affairs.

  5. So, if non-Aristotelian naturalism about mind is true, then we do not have any thoughts about normative states of affairs. (1-4)

  6. I think that I should avoid false belief.

  7. That I should avoid false belief is a normative state of affairs.

  8. So, I have a thought about a normative state of affairs. (6-7)

  9. So, non-Aristotelian naturalism about mind is not true. (5 and 8)

Note that the Aristotelian naturalist will deny (2), for she thinks that normative states of affairs cause natural events through final (and, less obviously, formal) causation, which is a species of causation.

I think the non-Aristotelian naturalist’s best bet is probably to deny (2) as well, on the grounds that normative properties are identical with natural properties. But there are now two possibilities. Either normative properties are identical with natural properties that are also “natural” in the sense of David Lewis—i.e., fundamental or “structural”—or not. A view on which normative properties are identical with fundamental or “structural” natural properties is not plausible outside of Aristotelian naturalism. But if the normative properties are identical with non-fundamental natural properties, then too much debate in ethics and epistemology threatens to become merely verbal in the Ted Sider sense: “Am I using ‘justified’ or ‘right’ for this non-structural natural property or that one?”

"Finite"

In conversation last week, I said to my father that my laptop battery has a “finite number of charge cycles”.

Now, if someone said to me that a battery had fewer than a billion charge cycles, I’d take the speaker to be implicating that it has quite a lot of them, probably between half a billion and a billion. And even besides that implicature, if all my information were that the battery has fewer than a billion charge cycles, then it would seem natural to take a uniform distribution from 0 to 999,999,999 and think that it is extremely likely that it has at least a million charge cycles.
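
For instance, on that uniform assignment (a toy calculation, with C standing for the number of charge cycles):

\[
P(C \ge 10^6) = \frac{10^9 - 10^6}{10^9} = 0.999.
\]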

One might think something similar would be the case with saying that the battery has a finite number of charge cycles. After all, that statement is logically equivalent to the statement that it has fewer than ℵ_0 charge cycles, which by analogy should implicate that it has quite a lot of them, or at least give rise to a uniform distribution between 0, inclusive, and ℵ_0, exclusive. But no! To say that it has a finite number of charge cycles seems to implicate something quite different: it implicates that the number is sufficiently limited that running into the limit is a serious possibility.

Actually, this may go beyond implicature. Perhaps outside of specialized domains like mathematics and philosophy, “finite” typically means something like not practically infinite, where “practically infinite” means beyond all practical limitations (e.g., the amount of energy in the sun is practically infinite). Thus, the finite is what has practical limits. (But see also this aberrant usage.)

Thursday, August 3, 2017

Connected and scattered objects

Intuitively, some physical objects, like a typical organism, are connected, while other physical objects, like a typical chess set spilled on a table, are disconnected or scattered.

What does it mean for an object O that occupies some region R of space to be connected? There is a standard topological definition of a region R being connected (there are no open sets U and V such that U ∩ R and V ∩ R are non-empty and disjoint and R ⊆ U ∪ V), and so we could say that O is connected if and only if the region R occupied by it is connected.

But this definition doesn’t work well if space is discrete. The most natural topology on a discrete space would make every region containing two or more points disconnected. But it seems that even if space were discrete, it would make sense to talk of a typical organism as connected.

If the space is a regular rectangular grid, then we can try to give a non-topological definition of connectedness: a region is connected provided that any two points in it can be joined by a sequence of points such that any two successive points are neighbors. But then we need to make a decision as to what points count as neighbors. For instance, while it seems obvious that (0,0,0) and (0,0,1) are neighbors (assuming the points have integer Cartesian coordinates), it is less clear whether diagonal pairs like (0,0,0) and (1,1,1) are neighbors. But we’re doing metaphysics, not mathematics. We shouldn’t just stipulate the neighbor relation. So there has to be some objective fact about the space that decides which pairs are neighbors. And things just get more complicated if the space is not a regular rectangular grid.

Perhaps we should suppose that a physical discrete space would have to come along with a physical “neighbor” structure, which would specify which (unordered, let’s suppose for now) pairs of points are neighbors. Mathematically speaking, this would turn the space into a graph: a mathematical object with vertices (points) and edges (the neighbor-pairs). So perhaps there could be at least two kinds of regular rectangular grid spaces, one in which an object that occupies precisely (0,0,0) and (1,1,1) is connected and another in which such an object is scattered.
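
Here is a minimal sketch of the graph-theoretic notion for finitely many grid points. The neighbor relation is passed in as a parameter precisely because, on the view above, it is a further physical fact about the space which pairs count as neighbors (the function names and the two candidate relations are merely illustrative):

```python
from collections import deque

def is_connected(points, are_neighbors):
    """Is this finite set of points connected under the given neighbor relation?"""
    points = set(points)
    if len(points) <= 1:
        return True
    start = next(iter(points))
    seen = {start}
    queue = deque([start])
    while queue:  # breadth-first search along the neighbor relation
        p = queue.popleft()
        for q in points:
            if q not in seen and are_neighbors(p, q):
                seen.add(q)
                queue.append(q)
    return seen == points

def face_neighbors(p, q):
    # Only steps along an axis count: exactly one coordinate differs, and by 1.
    return sum(abs(a - b) for a, b in zip(p, q)) == 1

def king_neighbors(p, q):
    # Diagonal steps count too: distinct points whose coordinates each differ by at most 1.
    return p != q and all(abs(a - b) <= 1 for a, b in zip(p, q))

obj = [(0, 0, 0), (1, 1, 1)]
print(is_connected(obj, face_neighbors))  # False: scattered under this relation
print(is_connected(obj, king_neighbors))  # True: connected under this relation
```

The same reachability check could be reused for the causal-influence relation suggested at the end of this post, by swapping in a predicate that holds when the relation holds in one order or the other.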

But we can’t use this graph-theoretic solution in continuous spaces. For here is something very intuitive about Euclidean space: if there is a third point c on the line segment between the two points a and b, then a and b are not neighbors, because c is a better candidate for being a’s neighbor than b. But in Euclidean space, there is always such a third point, so no two points are neighbors. Fortunately, in Euclidean space we can use the topological notion.

But now we have a bit of a puzzle. We have a topological notion of a physical object being connected for objects in a continuous space and a graph-theoretic notion for objects in a discrete space. Neither notion reduces to the other. In fact, we can apply the topological one to objects in a discrete space, and conclude that all objects that occupy more than one point are scattered, and the graph-theoretic one to objects in Euclidean space, and also conclude that all objects that occupy more than one point are scattered.

Maybe we should have a disjunctive notion: an object is connected if and only if it is graph-theoretically connected in a space with a neighbor-relation or topologically connected in a space with a topological structure.

That’s not too bad, but it makes the notion of the connectedness of a physical object be a rather unnatural and gerrymandered notion. Maybe that’s how it has to be.

Or maybe only one of the two kinds of spaces is actually a possible physical space. Perhaps physical space must have a topological structure. Or maybe it must have a graph-theoretic structure.

Here’s a different suggestion. Given a region of space R, we can define a binary relation c_R where c_R(a, b) holds if and only if the laws of nature allow for a causal influence to propagate from a to b without leaving R. Then say that a region of space R is connected provided that any two distinct points in it can be joined by a sequence of points such that successive points are c_R-related in one order or the other (i.e., if d_i and d_(i+1) are successive points then c_R(d_i, d_(i+1)) or c_R(d_(i+1), d_i)).

On this story, if we have a universe with pervasive immediate action at a distance, like in the case of Newtonian gravity, all physical objects end up connected. If we have a discrete universe with a neighbor structure and causal influences can propagate between neighbors and only between them, we recover the graph-theoretic notion.

Wednesday, August 2, 2017

Disconnected bodies and lives

We can imagine what it is like for a living thing to have a spatially disconnected body. First, if we are made of point particles, we all are spatially disconnected. Second, when a gecko is attacked, it can shed a tail. That tail then continues wiggling for a while in order to distract the pursuer. A good case can be made that the gecko’s shed tail remains a part of the gecko’s body while it is wiggling. After all, it continues to be biologically active in support of the gecko’s survival. Third, there is the metaphysical theory on which sperm remains a part of the male even after it is emitted.

But even if all these theories are wrong, we should have very little difficulty in understanding what it would mean for a living thing to have a spatially disconnected body.

What about a living thing having a temporally disconnected life? Again, I think it is not so difficult. It could be the case that when an insect is frozen, it ceases to live (or exist), but then comes back to life when defrosted. And even if that’s not the case, we understand what it would mean for this to be the case.

But so far this has concerned external space and external time. What about internally spatially disconnected bodies and internally temporally disconnected lives? The gecko’s tail and sperm examples work just as well for internal space as for external space. So there is no conceptual difficulty about a living thing having a disconnected body in its inner space.

But it is much more difficult to imagine how an organism could have an internal-time disconnect in its life. Suppose the organism ceases to exist and then comes back into existence. It seems that its internal time is uninterrupted by the external-time interval of non-existence. An external-time interval of non-existence seems to be simply a case of forward time-travel, and time-travel does not induce disconnects in internal time. Granted, the organism may have some different properties when it comes back into existence—for instance, its neural system might be damaged. But that’s just a matter of an instantaneous change in the neural system rather than of a disconnect in internal time. (Note that internal time is different from subjective time. When we go under general anesthesia, internal time keeps on flowing, but subjective time pauses. Plants have internal time but don’t have subjective time.)

This suggests an interesting apparent difference between internal time and internal space: spatial discontinuities are possible but temporal ones are not.

This way of formulating the difference is misleading, however, if some version of four-dimensionalism is correct. The gecko’s tail in my story is four-dimensional. This four-dimensional thing is connected to the four-dimensional thing that is the rest of the gecko’s body. There is no disconnection in the gecko from a four-dimensional perspective. (The point particle case is more complicated. Topologically, the internal space will be disconnected, but I think that’s not the relevant notion of disconnection.)

This suggests an interesting pair of hypotheses:

  • If three-dimensionalism is true, there is a disanalogy between internal time and internal space with respect to living things at least, in that internal spatial disconnection of a living thing is possible but internal temporal disconnection of a living thing is not possible.

  • If four-dimensionalism is true, then living things are always internally spatiotemporally connected.

But maybe these are just contingent truths. Terry Pratchett has a character who is a witch with two spatially disconnected bodies. As far as the book says, she’s always been that way. And that seems possible to me. So maybe the four-dimensional hypothesis is only contingently true.

And maybe God could make a being that lives two lives, each in a different century, with no internal temporal connection between them? If so, then the three-dimensional hypothesis is also only contingently true.

I am not going anywhere with this. Just thinking about the options. And not sure what to think.