Monday, August 14, 2017

Difficult questions about promises and duress

It is widely accepted that you cannot force someone to make a valid promise. If a robber, after finding that I have no valuables with me, puts a gun to my head and says: “I will shoot you unless you promise to go home and bring me all of the jewelry there”, and I say “I promise”, my promise seems to be null and void.

But suppose I am a cavalry officer captured by an enemy officer. The enemy officer is in a hurry to complete a mission, and it is crucial to his military ends that I not ride straight back to my headquarters and report what I saw him doing. He does not, however, have the time to tie me up, and hence he prepares to kill me. I yell: “I give you my word of honor as an officer that I will stay in this location for 24 hours.” He trusts me and rides on his way. (The setting for this is more than a hundred years ago.)

However, if promises made under duress are invalid, then the enemy officer should not trust me. One can only trust someone to do something when in some way a good feature of the person impels them to do that thing. (I can predict that a thief will steal my money if I leave it unprotected, but I don’t trust the thief to do that.) But there is no virtue in keeping void promises, since such promises do not generate moral reasons. In fact, if the promise is void, then I might even have a moral duty to ride back and report what I have seen. One shouldn’t trust someone to do something contrary to moral duty.

Perhaps, though, there is a relevant difference between the case of an officer giving parole to another, and the case of the robber. The enemy officer is not compelling me to make the promise. It’s my own idea to make the promise. Of course, if I don’t make the promise, I will die. But that fact doesn’t make for promise-canceling duress. Say I am dying of thirst, and the only drink available is the diet ginger ale that a greedy merchant is selling and which she would never give away for free. So I say: “I promise to pay you back tomorrow as I don’t have any cash with me.” I have made the promise in order to save my life. If the merchant gives me the ginger ale, the promise is surely valid, and I must pay the merchant back tomorrow.

Is the relevant difference, perhaps, that I originate the idea of the promise in the officer case, but not in the robber case? But in the merchant case, I would be no less obligated to pay the merchant back if we had a little dialogue: “Could you give me a drink, as I’m dying of thirst and I don’t have any cash?” – “Only if you promise to pay me back tomorrow.”

Likewise, in the officer case, it really shouldn’t matter who originates the idea. Imagine that it never occurred to me to make the promise, but a bystander suggests it. Surely that doesn’t affect the binding force of the promise. But suppose that the bystander makes the suggestion in a language I don’t understand, and I ask the enemy officer what the bystander says, and he says: “The bystander suggests you give your word of honor as an officer to stay put for 24 hours.” Surely it also makes no moral difference that the enemy officer acts as an interpreter, and hence is the proximate origin of the idea. Would it make a difference if there were no helpful bystander and the enemy officer said of his own accord: “In these circumstances, officers often make promises on their honor to stay put”? I don’t think so.

I think that there is still a difference between the robber case and that of the enemy officer who helpfully suggests that one make the promise. But I have a really hard time pinning down the difference. Note that the enemy officer might be engaged in an unjust war, much as the robber is engaged in unjust robbery. So neither has a moral right to demand things of me.

There is a subtle difference between the robber and officer cases. The robber is threatening my life in order to get me to make the promise. The promise is something that the robber is pursuing as a means to her end, namely the obtaining of jewelry. My being killed will not achieve the robber’s purpose at all. If the robber knew that I wouldn’t make the promise, she wouldn’t kill me, at least as far as the ends involved in the promise (namely, the obtaining of my valuables) go. But the enemy officer’s end, namely the safety of his mission, would be even more effectively achieved by killing me. The enemy officer’s suggestion that I make my promise is a mercy. The robber’s suggestion that I make my promise isn’t a mercy.

Does this matter? Maybe it does, and for at least three reasons. First, the robber is threatening my life primarily in order to force a promise. The enemy officer isn’t threatening my life primarily in order to force a promise: the threat would be there even if I were unable to make promises (or were untrustworthy, etc.). So there is a sense in which the robber is more fully forcing a promise out of me.

Second, it is good for human beings to have a practice of giving and keeping promises in the officer types of circumstances, since such a practice saves lives. But it is bad for them to have a practice of giving and keeping promises in the robber types of circumstances, since such a practice only encourages robbers to force promises out of people. Perhaps the fact that one kind of practice is beneficial and the other is harmful is evidence that the one kind of practice is normative for human beings and the other is not. (This will likely be the case given natural law, divine command, rule-utilitarianism, and maybe some other moral theories.)

Third, the case of the officer is much more like the case of the merchant. There is a circumstance in both cases that threatens my life independently of any considerations of promises—dehydration and an enemy officer whom I’ve seen on his secret mission. In both cases, it turns out that the making of a promise can get me out of these circumstances, but the circumstances weren’t engineered in order to get me to make the promise. But the case of the robber is very different from that of the merchant. (Interesting test case: the merchant drained the oases in the desert so as to sell drinks to dehydrated travelers. This seems to me to be rather closer to the robber case, but I am not completely sure.)

Maybe, though, I’m wrong about the robber case. I have to say that I am uncomfortable with voidly promising the robber that I will get the valuables when I don’t expect to do so—there seems to be a lie involved, and lying is wrong even to save one’s life. Or at least a kind of dishonesty. But this suggests that if I were planning on bringing the valuables, I would be acting more honestly in saying it. And that makes the situation resemble a valid promise. Maybe not, though. Maybe it’s wrong to say “I will bring the valuables” when one isn’t planning on doing so, but once one says it, one has no obligation to bring them. I don’t know. (This is related to the following sort of case. Suppose I don’t expect that there will be any yellow car parked on your street tonight, but I assert dishonestly in the morning that there will be a yellow car parked on your street in the evening. In the early afternoon, I am filled with contrition for my dishonesty to you. Normally, I should try to undo the effect of dishonesty by coming clean to the person I was dishonest to. But suppose I cannot get in touch with you. However, what I can do is go to the car rental place, rent a yellow car and park it on your street. Do I have any moral reason to do so? I don’t know. Not in general, I think. But if you were depending on the presence of the yellow car—maybe you made a large bet about it with a neighbor—then maybe I should do it.)

Computer languages

It is valuable, especially for philosophers, to learn languages in order to learn to see things from a different point of view, to think differently.

This is usually promoted with respect to natural languages. But the goal of learning to think differently is also furthered by learning logical languages and computer languages. In regard to computer languages, what seems particularly valuable is learning languages representing opposed paradigms: low-level vs. high-level, imperative vs. functional, procedural vs. object-oriented, data-code-separating vs. not, etc. These make for differences in how one sees things that are, if anything, greater than the differences in how one sees things across natural human languages.

To be honest, though, I’ve only ever tried to learn one language expressly for the above purpose, and I didn’t persevere: it was Haskell, which I wanted to learn as an example of functional programming. I ended up, however, learning OpenSCAD, which is a special-purpose functional language for describing 3D solids, though I didn’t do that to change how I think, but simply to make stuff my 3D printer can print. Still, I guess, I learned a bit about functional programming.

My next computer language task will probably be to learn a bit of Verilog and/or VHDL, which should be fun. I don’t know whether it will lead to thinking differently, but it might, in that thinking of an algorithm as something that is implemented in often concurrent digital logic rather than in a series of sequential instructions might lead to a shift in how I think at least about algorithms. I’ve ordered a cheap Cyclone II FPGA from AliExpress ($17 including the USB Blaster for programming it) to use with the code, which should make the fun even greater.

All that said, I don’t know that I can identify any specific philosophical insights I had as a result of knowing computer languages. Maybe it’s a subtler shift in how I think. Or maybe the goal of thinking philosophically differently just isn’t furthered in these ways. But it’s fun to learn computer languages anyway.

Thursday, August 10, 2017

Uncountable independent trials

Suppose that I am throwing a perfectly sharp dart uniformly randomly at a continuous target. The chance that I will hit the center is zero.

What if I throw an infinite number of independent darts at the target? Do I improve my chances of hitting the center at least once?

Things depend on what size of infinity of darts I throw. Suppose I throw a countable infinity of darts. Then I don’t improve my chances: classical probability says that the union of countably many zero-probability events has zero probability.
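The fact being appealed to is countable subadditivity of a (countably additive) probability measure: if An is the event that the n-th dart hits the center, each with probability zero, then

```latex
P\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr)
\le \sum_{n=1}^{\infty} P(A_n)
= \sum_{n=1}^{\infty} 0
= 0.
```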

What if I throw an uncountable infinity of darts? The answer is that the usual way of modeling independent events does not assign any meaningful probabilities to whether I hit the center at least once. Indeed, the event that I hit the center at least once is “saturated nonmeasurable”, i.e., it is nonmeasurable and every measurable subset of it has probability zero and every measurable superset of it has probability one.

Proposition: Assume the Axiom of Choice. Let P be any probability measure on a set Ω and let N be any non-empty event with P(N)=0. Let I be any uncountable index set. Let H be the subset of the product space Ω^I consisting of those sequences ω that hit N, i.e., ones such that for some i we have ω(i)∈N. Then H is saturated nonmeasurable with respect to the I-fold product measure P^I (and hence with respect to its completion).

One conclusion to draw is that the event H of hitting the center at least once in our uncountable number of throws in fact has a weird “nonmeasurable chance” of happening, one perhaps that can be expressed as the interval [0, 1]. But I think there is a different philosophical conclusion to be drawn: the usual “product measure” model of independent trials does not capture the phenomenon it is meant to capture in the case of an uncountable number of trials. The model needs to be enriched with further information that will then give us a genuine chance for H. Saturated nonmeasurability is a way of capturing the fact that the product measure can be extended to a measure that assigns any numerical probability between 0 and 1 (inclusive) one wishes. And one requires further data about the system in order to assign that numerical probability.

Let me illustrate this as follows. Consider the original single-case dart throwing system. Normally one describes the outcome of the system’s trials by the position z of the tip of the dart, so that the sample space Ω equals the set of possible positions. But we can also take a richer sample space Ω* which includes all the possible tip positions plus one more outcome, α, the event of the whole system ceasing to exist, in violation of the conservation of mass-energy. Of course, to be physically correct, we assign chance zero to outcome α.

Now, let O be the center of the target. Here are two intuitions:

  1. If the number of trials has a cardinality much greater than that of the continuum, it is very likely that O will result on some trial.

  2. No matter how many trials—even a large infinity—have been performed, α will not occur.

But the original single-case system based on the sample space Ω* does not distinguish O and α probabilistically in any way. Let ψ be a bijection of Ω* to itself that swaps O and α but keeps everything else fixed. Then P(ψ[A]) = P(A) for any measurable subset A of Ω* (this follows from the fact that the probability of O is equal to the probability of α, both being zero), and so with respect to the standard probability measure on Ω*, there is no probabilistic difference between O and α.

If I am right about (1) and (2), then what happens in a sufficiently large number of trials is not captured by the classical chances in the single-case situation. That classical probabilities do not capture all the information about chances is something we should already have known from cases involving conditional probabilities. For instance P({O}|{O, α}) = 1 and P({α}|{O, α}) = 0, even though O and α are on par.

One standard solution to the conditional probability case is infinitesimals. Perhaps P({α}) is an infinitesimal ι but P({O}) is exactly zero. In that case, we may indeed be able to make sense of (1) and (2). But infinitesimals are not a good model on other grounds. (See Section 3 here.)

Thinking about the difficulties with infinitesimals, I get this intuition: we want probabilistic information about the single-case event at a higher resolution than classical real-valued probabilities give, but at a lower resolution than infinitesimals give. Here is a possibility. Each subset of the outcome space that has probability zero also gets attached to it a monotone-increasing function from cardinalities to the set [0, 1]. If N is such a subset and fN is the function attached to it, then fN(κ) tells us the probability that κ independent trials will yield at least one outcome in N.

We can then argue that fN(κ) is always 0 or 1 for infinite κ. Here is why. Suppose fN(κ)>0. Then κ must be infinite, since if κ is finite then fN(κ)=1 − (1 − P(N))^κ = 0 as P(N)=0. Moreover, 1 − fN(κ + κ)=(1 − fN(κ))^2, since missing N on all κ + κ trials is the conjunction of missing it on each of two independent blocks of κ trials, and probabilities of independent events multiply. But κ + κ = κ (assuming the Axiom of Choice), so that 1 − fN(κ)=(1 − fN(κ))^2, which implies that fN(κ) is zero or one. We can come up with other constraints on fN. For instance, if C is the union of A and B, then fC(κ) is the greater of fA(κ) and fB(κ).
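Writing mN(κ) = 1 − fN(κ) for the probability that all κ independent trials miss N (a notation I introduce only for compactness), the multiplicativity step can be displayed as:

```latex
m_N(\kappa + \kappa) = m_N(\kappa)^2
\quad\text{and}\quad
\kappa + \kappa = \kappa
\;\Longrightarrow\;
m_N(\kappa) = m_N(\kappa)^2
\;\Longrightarrow\;
m_N(\kappa) \in \{0, 1\},
```

so that fN(κ) = 1 − mN(κ) is zero or one.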

Such an approach could help get a solution to a different problem, the problem of characterizing deterministic causation. To a first approximation, the solution would go as follows. Start with the inadequate story that deterministic causation is chancy causation with chance 1. (This is inadequate, because in the original dart-throwing case, the chance of missing the center is 1, but throwing the dart does not deterministically cause one to hit a point other than the center.) Then say that deterministic causation is chancy causation such that the failure event F is such that fF(κ)=0 for every cardinal κ.

But maybe instead of all this, one could just deny that there are meaningful chances to be assigned to events like the event of uncountably many trials missing or hitting the center of the target.

Sketch of proof of Proposition: First note that there is an extension Q of P^I such that Q(H)=0. This shows that any P^I-measurable subset of H must have probability zero.

Let Q1 be the restriction of P to Ω − N (this is still normalized to 1 as N is a null set). Let Q1^I be the product measure on (Ω − N)^I. Let Q be the measure on Ω^I defined by Q(A)=Q1^I(A ∩ (Ω − N)^I). Now let A be any of the cylinder sets used for generating the product measure on Ω^I. Thus, A = ∏_{i∈I} A_i, where there is a finite J ⊆ I such that A_i = Ω for i ∉ J. Then
Q(A)=∏_{i∈J} Q1(A_i − N)=∏_{i∈J} P(A_i − N)=∏_{i∈J} P(A_i)=P^I(A).
Since P^I and Q agree on cylinder sets, by the definition of the product measure, Q is an extension of P^I. Moreover, Q(H)=0, since every sequence in H has some coordinate in N and so lies outside (Ω − N)^I.

To show that H is saturated nonmeasurable, we now need to show that any P^I-measurable set in the complement of H must have probability zero. Let A be any P^I-measurable set in the complement of H. Then A is of the form {ω ∈ Ω^I : F(ω)}, where F(ω) is a condition involving only the coordinates of ω indexed by a fixed countable subset of I (this can be made precise). But unless the condition is entirely unsatisfiable, no such condition can exclude the possibility that some coordinate ω(i) with i outside that countable set lies in N, i.e., that ω ∈ H. Hence no such set A lies in the complement of H unless it is empty. And that’s all we need to show.

Tuesday, August 8, 2017

Naturalists about mind should be Aristotelians

  1. If non-Aristotelian naturalism about mind is true, a causal theory of reference is true.

  2. If non-Aristotelian naturalism about mind is true, then normative states of affairs do not cause any natural events.

  3. If naturalism about mind is true, our thoughts are natural events.

  4. If a causal theory of reference is true and normative states of affairs do not cause any thoughts, then we do not have any thoughts about normative states of affairs.

  5. So, if non-Aristotelian naturalism about mind is true, then we do not have any thoughts about normative states of affairs. (1-4)

  6. I think that I should avoid false belief.

  7. That I should avoid false belief is a normative state of affairs.

  8. So, I have a thought about a normative state of affairs. (6-7)

  9. So, non-Aristotelian naturalism about mind is not true. (5 and 8)

Note that the Aristotelian naturalist will deny (2), for she thinks that normative states of affairs cause natural events through final (and, less obviously, formal) causation, which is a species of causation.

I think the non-Aristotelian naturalist’s best bet is probably to deny (2) as well, on the grounds that normative properties are identical with natural properties. But there are now two possibilities. Either normative properties are identical with natural properties that are also “natural” in the sense of David Lewis—i.e., fundamental or “structural”—or not. A view on which normative properties are identical with fundamental or “structural” natural properties is not plausible outside of Aristotelian naturalism. But if the normative properties are identical with non-fundamental natural properties, then too much debate in ethics and epistemology threatens to become merely verbal in the Ted Sider sense: “Am I using ‘justified’ or ‘right’ for this non-structural natural property or that one?”

"Finite"

In conversation last week, I said to my father that my laptop battery has a “finite number of charge cycles”.

Now, if someone said to me that a battery had fewer than a billion charge cycles, I’d take the speaker to be implicating that it has quite a lot of them, probably between half a billion and a billion. And even besides that implicature, if all my information were that the battery has fewer than a billion charge cycles, then it would seem natural to take a uniform distribution from 0 to 999,999,999 and think that it is extremely likely that it has at least a million charge cycles.
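The claim about the uniform model is a one-line computation; here is a sketch of it (the variable names are mine):

```python
# The text's toy model (an assumption, not a fact about batteries):
# every count from 0 to 999,999,999 is equally likely.
N = 10**9          # number of equally likely values: 0 .. N-1
threshold = 10**6  # "at least a million charge cycles"

# P(X >= threshold) = (N - threshold) / N under the uniform distribution
p = (N - threshold) / N
print(p)  # 0.999
```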

One might think something similar would be the case with saying that the battery has a finite number of charge cycles. After all, that statement is logically equivalent to the statement that it has fewer than ℵ0 charge cycles, which by analogy should implicate that it has quite a lot of them, or at least give rise to a uniform distribution between 0, inclusive, and ℵ0, exclusive. But no! To say that it has a finite number of charge cycles seems to implicate something quite different: it implicates that the number is sufficiently limited that running into the limit is a serious possibility.

Actually, this may go beyond implicature. Perhaps outside of specialized domains like mathematics and philosophy, “finite” typically means something like not practically infinite, where “practically infinite” means beyond all practical limitations (e.g., the amount of energy in the sun is practically infinite). Thus, the finite is what has practical limits. (But see also this aberrant usage.)

Thursday, August 3, 2017

Connected and scattered objects

Intuitively, some physical objects, like a typical organism, are connected, while other physical objects, like a typical chess set spilled on a table, are disconnected or scattered.

What does it mean for an object O that occupies some region R of space to be connected? There is a standard topological definition of a region R being connected (there are no open sets U and V, each having non-empty intersection with R, such that R ⊆ U ∪ V and U ∩ V ∩ R is empty), and so we could say that O is connected if and only if the region R occupied by it is connected.

But this definition doesn’t work well if space is discrete. The most natural topology on a discrete space would make every region containing two or more points be disconnected. But it seems that even if space were discrete, it would make sense to talk of a typical organism as connected.

If the space is a regular rectangular grid, then we can try to give a non-topological definition of connectedness: a region is connected provided that any two points in it can be joined by a sequence of points such that any two successive points are neighbors. But then we need to make a decision as to what points count as neighbors. For instance, while it seems obvious that (0,0,0) and (0,0,1) are neighbors (assuming the points have integer Cartesian coordinates), it is less clear whether diagonal pairs like (0,0,0) and (1,1,1) are neighbors. But we’re doing metaphysics, not mathematics. We shouldn’t just stipulate the neighbor relation. So there has to be some objective fact about the space that decides which pairs are neighbors. And things just get more complicated if the space is not a regular rectangular grid.

Perhaps we should suppose that a physical discrete space would have to come along with a physical “neighbor” structure, which would specify which (unordered, let’s suppose for now) pairs of points are neighbors. Mathematically speaking, this would turn the space into a graph: a mathematical object with vertices (points) and edges (the neighbor-pairs). So perhaps there could be at least two kinds of regular rectangular grid spaces, one in which an object that occupies precisely (0,0,0) and (1,1,1) is connected and another in which such an object is scattered.
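The graph-theoretic notion can be sketched concretely. Here is a minimal illustration (the function names are mine, not part of the post) of how the verdict about an object occupying precisely (0,0,0) and (1,1,1) depends on which neighbor relation the space comes equipped with:

```python
from collections import deque

def neighbors_face(p, q):
    """Strict relation: p and q differ by 1 in exactly one coordinate."""
    return sum(abs(a - b) for a, b in zip(p, q)) == 1

def neighbors_diag(p, q):
    """Looser relation: p != q, and they differ by at most 1 in every coordinate."""
    return p != q and max(abs(a - b) for a, b in zip(p, q)) == 1

def is_connected(points, neighbor):
    """Breadth-first search: is every point reachable from the first one?"""
    points = list(points)
    if len(points) <= 1:
        return True
    seen = {points[0]}
    queue = deque([points[0]])
    while queue:
        p = queue.popleft()
        for q in points:
            if q not in seen and neighbor(p, q):
                seen.add(q)
                queue.append(q)
    return len(seen) == len(points)

region = [(0, 0, 0), (1, 1, 1)]
print(is_connected(region, neighbors_face))  # False: not face-neighbors
print(is_connected(region, neighbors_diag))  # True: diagonal pairs count
```

The same search also recovers the causal-reachability variant later in the post, by swapping in a different binary relation.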

But we can’t use this graph-theoretic solution in continuous spaces. For here is something very intuitive about Euclidean space: if there is a third point c on the line segment between the two points a and b, then a and b are not neighbors, because c is a better candidate for being a’s neighbor than b. But in Euclidean space, there is always such a third point, so no two points are neighbors. Fortunately, in Euclidean space we can use the topological notion.

But now we have a bit of a puzzle. We have a topological notion of a physical object being connected for objects in a continuous space and a graph-theoretic notion for objects in a discrete space. Neither notion reduces to the other. In fact, we can apply the topological one to objects in a discrete space, and conclude that all objects that occupy more than one point are scattered, and the graph-theoretic one to objects in Euclidean space, and also conclude that all objects that occupy more than one point are scattered.

Maybe we should have a disjunctive notion: an object is connected if and only if it is graph-theoretically connected in a space with a neighbor-relation or topologically connected in a space with a topological structure.

That’s not too bad, but it makes the notion of the connectedness of a physical object be a rather unnatural and gerrymandered notion. Maybe that’s how it has to be.

Or maybe only one of the two kinds of spaces is actually a possible physical space. Perhaps physical space must have a topological structure. Or maybe it must have a graph-theoretic structure.

Here’s a different suggestion. Given a region of space R, we can define a binary relation cR, where cR(a, b) if and only if the laws of nature allow for a causal influence to propagate from a to b without leaving R. Then say that a region of space R is connected provided that any two distinct points of R can be joined by a sequence of points such that successive points are cR-related in one order or the other (i.e., if d_i and d_{i+1} are successive points, then cR(d_i, d_{i+1}) or cR(d_{i+1}, d_i)).

On this story, if we have a universe with pervasive immediate action at a distance, like in the case of Newtonian gravity, all physical objects end up connected. If we have a discrete universe with a neighbor structure and causal influences can propagate between neighbors and only between them, we recover the graph-theoretic notion.

Wednesday, August 2, 2017

Disconnected bodies and lives

We can imagine what it is like for a living thing to have a spatially disconnected body. First, if we are made of point particles, we are all spatially disconnected. Second, when a gecko is attacked, it can shed a tail. That tail then continues wiggling for a while in order to distract the pursuer. A good case can be made that the gecko’s shed tail remains a part of the gecko’s body while it is wiggling. After all, it continues to be biologically active in support of the gecko’s survival. Third, there is the metaphysical theory on which sperm remains a part of the male even after it is emitted.

But even if all these theories are wrong, we should have very little difficulty in understanding what it would mean for a living thing to have a spatially disconnected body.

What about a living thing having a temporally disconnected life? Again, I think it is not so difficult. It could be the case that when an insect is frozen, it ceases to live (or exist), but then comes back to life when defrosted. And even if that’s not the case, we understand what it would mean for this to be the case.

But so far this regarded external space and external time. What about internally spatially disconnected bodies and internally temporally disconnected lives? The gecko’s tail and sperm examples work just as well for internal as for external space. So there is no conceptual difficulty about a living thing having a disconnected body in its inner space.

But it is much more difficult to imagine how an organism could have an internal-time disconnect in its life. Suppose the organism ceases to exist and then comes back into existence. It seems that its internal time is uninterrupted by the external-time interval of non-existence. An external-time interval of non-existence seems to be simply a case of forward time-travel, and time-travel does not induce disconnections in internal time. Granted, the organism may have some different properties when it comes back into existence—for instance, its neural system might be damaged. But that’s just a matter of an instantaneous change in the neural system rather than of a disconnect in internal time. (Note that internal time is different from subjective time. When we go under general anesthesia, internal time keeps on flowing, but subjective time pauses. Plants have internal time but don’t have subjective time.)

This suggests an interesting apparent difference between internal time and internal space: spatial discontinuities are possible but temporal ones are not.

This way of formulating the difference is misleading, however, if some version of four-dimensionalism is correct. The gecko’s tail in my story is four-dimensional. This four-dimensional thing is connected to the four-dimensional thing that is the rest of the gecko’s body. There is no disconnection in the gecko from a four-dimensional perspective. (The point particle case is more complicated. Topologically, the internal space will be disconnected, but I think that’s not the relevant notion of disconnection.)

This suggests an interesting pair of hypotheses:

  • If three-dimensionalism is true, there is a disanalogy between internal time and internal space with respect to living things at least, in that internal spatial disconnection of a living thing is possible but internal temporal disconnection of a living thing is not possible.

  • If four-dimensionalism is true, then living things are always internally spatiotemporally connected.

But maybe these are just contingent truths. Terry Pratchett has a character who is a witch with two spatially disconnected bodies. As far as the book says, she’s always been that way. And that seems possible to me. So maybe the four-dimensional hypothesis is only contingently true.

And maybe God could make a being that lives two lives, each in a different century, with no internal temporal connection between them? If so, then the three-dimensional hypothesis is also only contingently true.

I am not going anywhere with this. Just thinking about the options. And not sure what to think.

Monday, July 31, 2017

Self-consciousness and AI

Some people think that self-consciousness is a big deal, that it’s the sort of thing that might be hard for an artificial intelligence system to achieve.

I think consciousness and intentionality are a big deal, that they are the sort of thing that would be hard or impossible for an artificial intelligence system to achieve. But I wonder: if we could have consciousness and intentionality in an artificial intelligence system, would self-consciousness be much of an additional difficulty? Argument:

  1. If a computer can have consciousness and intentionality, a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”.

  2. If a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”, then it can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature of me is 300K”.

  3. Necessarily, anything that can have a conscious awareness whose object would be aptly expressible with the phrase “that the temperature of me is 300K” is self-conscious.

  4. So, if a computer can have consciousness and intentionality, a computer can have self-consciousness.

Premise 1 is very plausible: after all, the most plausible story about what a conscious computer would be aware of is immediate environmental data through its sensors. Premise 2 is, I think, also plausible for two reasons. First, it’s hard to see why awareness whose object is expressible in terms of “here” would be harder than awareness whose object is expressible in terms of “I”. That’s a bit weak. But, second, it is plausible that the relevant sense of “here” reduces to “I”: “the place I am”. And if I have the awareness that the temperature in the place I am is 300K, barring some specific blockage, I have the cognitive skills to be aware that my temperature is 300K (though I may need a different kind of temperature sensor).

Premise 3 is, I think, the rub. My acceptance of premise 3 may simply be due to my puzzlement as to what self-consciousness is beyond an awareness of oneself as having certain properties. Here’s a possibility, though. Maybe self-consciousness is awareness of one’s soul. And we can now argue:

  5. A computer can only have a conscious awareness of what physical sensors deliver.

  6. Even if a computer has a soul, no physical sensor delivers awareness of any soul.

  7. So, no computer can have a conscious awareness of its soul.

But I think (5) may be false. Conscious entities are sometimes aware of things by means of sensations of mere correlates of the thing they sense. For instance, a conscious computer can be aware of the time by means of a sensation of a mere correlate—data from its inner clock.

Perhaps, though, self-consciousness is not so much awareness of one’s soul, as a grasp of the correct metaphysics of the self, a knowledge that one has a soul, etc. If so, then materialists don’t have self-consciousness, which is absurd.

All in all, I don’t see self-consciousness as much of an additional problem for strong artificial intelligence. But of course I do think that consciousness and intentionality are big problems.

Monday, July 24, 2017

Death, harm and time

For the sake of this post, stipulate death to be permanent cessation of existence. Epicurus famously argues that death is not a harm to one, because the living aren’t harmed by death while the dead do not exist.

As formulated, the argument appears to require presentism—the view that only presently existing things exist. If eternalism or growing block is true, the dead would exist, albeit pastly. This would give us a nice little argument against presentism:

  1. If presentism is true, the Epicurean argument is sound. (Premise)

  2. The conclusion of the Epicurean argument—namely, that death is not a harm—is absurd. (Premise)

  3. So, presentism is false.

But things aren’t quite so simple, because one can reconstruct an Epicurean argument without presentism.

  4. One is intrinsically harmed by x iff there is a time t at which one is intrinsically harmed by x. (Premise)

  5. One is intrinsically harmed at t by x only if one exists at t. (Premise)

  6. One is not intrinsically harmed by death at any time at which one exists. (Premise)

  7. One is not intrinsically harmed by death at any time. (5 and 6)

  8. One is not intrinsically harmed by death. (4 and 7)

This argument distinguishes intrinsic from extrinsic harm. Here’s an illustration of the distinction I have in mind: if I lose a finger, that’s an intrinsic harm; if people say bad things about me behind my back, that’s an extrinsic harm—unless it causally impacts me in some negative way. Epicurus didn’t seem to think there was such a thing as extrinsic harm, so he formulated his argument in terms of harm as such. But, really, his argument was only plausible with respect to intrinsic harm, in that a no longer existent person certainly could suffer extrinsic harms, say by losing reputation or having loved ones suffer harm. And the conclusion that death is not an intrinsic harm is implausible enough. Death seems to be among the worst of the intrinsic harms. (In particular, I think my little argument against presentism remains a good one even if we weaken the conclusion of the Epicurean argument to say that death is not an intrinsic harm.)

Of course, the conclusion (8) is still false! So which premise is false?

Here is a pretty convincing argument for (5):

  9. One is intrinsically harmed at t by x only if one has or lacks an intrinsic property at t because of x. (Premise)

  10. One does not have or lack any intrinsic properties at times when one doesn’t exist. (Premise)

  11. So, (5) is true.

Premise (6) is also pretty plausible.

Premise (4) is also plausible.

But there is a way out of the argument. If four-dimensionalism is true, we have a good way to reject (4). Consider first the spatial analogue of (4):

  12. One is intrinsically harmed by x if and only if there is a point z in space at which one is intrinsically harmed by x.

But (12) is implausible. Consider a spherical plant that suffers the harm of being made cylindrical. To be distorted into an unnatural shape seems to be an intrinsic harm. But it need not be an intrinsic harm locatable at any point in space. At any point in space where the plant is not, surely it’s not harmed. At points where the plant is, it might be harmed—say, by the stresses induced by the unnatural shape—but it need not be. We could, in fact, suppose that the plant is nowhere stressed, etc. The harm is simply the intrinsic harm of being deformed. For another example, suppose materialism is true, and consider an animal in pain. The pain is an intrinsic harm, plausibly, but there is no harm at any single point of the brain—only at a larger chunk of the brain.

What the examples show is that spatially extended objects can be intrinsically harmed in respect of properties that cannot be localized to a single point. If four-dimensionalism is true, we are also temporally extended. We should then expect the possibility of being intrinsically harmed in respect of properties that cannot be localized to a single instant of time, and hence we should not believe (4). And death seems to be precisely such a case: one is harmed by having only a finite extent in the temporally forward direction. This could be just as much an intrinsic harm as being spatially distorted.

In fact, once we see the analogy between harm not located at a point of space and harm not located at a point of time, it is easy to find other counterexamples to (4). Consider a life of unremitting boredom. Suppose someone lives from t1 to t2 and is bored at every time. At every time t between t1 and t2 she suffers the intrinsic harm of being bored; but she has the additional temporally non-punctual intrinsic harm of being always bored. Or suppose that materialism is true. Then just as pains do not happen in respect of properties at a single spatial point, they probably do not happen in respect of properties at a single instant either: pain likely requires a sequence of neural events.

In fact, the multiplication of examples is sufficiently easy that even apart from the more abstruse question of the harms of death, someone whose theory of time or persistence forces her to endorse (4) is in trouble.

But on reflection, the moves against three-dimensionalism and maybe even presentism were too quick. Maybe even the presentist can say that we have intrinsic properties which hold in virtue of how we are over a temporally extended period of time.

Thursday, July 20, 2017

Life in the interim state and the nature of time

Assume this thesis:

  1. We go out of existence at death and return to existence at the resurrection.

Suppose, further, that:

  2. There is a last moment t1 of earthly life and a first moment t2 of resurrected life.

Then:

  3. If there are no intervening moments of time between t1 and t2, one is never dead.

  4. Whether there are any intervening moments of time between t1 and t2 depends on what happens to things other than one.

  5. So, whether one is ever dead depends on what happens to things other than one.

  6. So, whether one is ever dead is extrinsic to one.

But that’s absurd in itself, plus it implies the absurdity that death is only an extrinsic harm. So, we should reject 1. We exist between death and the resurrection.

There are two controversial assumptions in the argument: 2 and 4. Assumption 4 follows from an Aristotelian picture of time as consisting in the changes of things. Since one doesn’t exist between t1 and t2, those changes would have to be happening to things other than oneself. If one doesn’t accept the Aristotelian picture of time, it’s much harder to argue for 4.

Assumption 2 is obviously true if time is discrete. If time is continuous, it might or might not be true. For instance, it could be that one lives from time 0 to time 100, both inclusive, in which case t1 = 100, but it could also be that one lives from time 0 to time 100, non-inclusive, in which case t1 doesn’t exist. Similarly, one could be resurrected from time 3000, inclusive, to time infinity, non-inclusive, in which case t2 = 3000, but it could also be that one is resurrected from time 3000, non-inclusive, in which case t2 doesn’t exist.
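The endpoint point can be put compactly in interval notation (a small formal restatement, using the illustrative times from the paragraph above):

```latex
% Whether t_1 and t_2 exist depends on whether the life-intervals
% contain their endpoints:
\text{earthly life} = [0,\,100] \;\Rightarrow\; t_1 = 100 \text{ exists},
\qquad
\text{earthly life} = [0,\,100) \;\Rightarrow\; \text{no last moment } t_1;
\text{resurrected life} = [3000,\,\infty) \;\Rightarrow\; t_2 = 3000,
\qquad
\text{resurrected life} = (3000,\,\infty) \;\Rightarrow\; \text{no first moment } t_2.
```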

However, even in the continuous case the argument has some force. For, first of all, it’s obvious that death is an intrinsic harm to us, and that obviousness does not depend on obscure details about whether the intervals of one’s life include their endpoints. Second, it is at least metaphysically possible for 1 to hold. But then in a world where 1 were to hold, our death would be merely an extrinsic harm to us, which would still be absurd.

AI and ontology

  1. Only things that exist think.

  2. Only simples and living things exist. (Cf. van Inwagen and Aristotle.)

  3. Computers are neither simple nor alive.

  4. So, computers don’t think.

Monday, July 17, 2017

Computer consciousness and dualism

Would building and running a sufficiently “smart” computer produce consciousness?

Suppose that one is impressed by the arguments for dualism, whether of the hylomorphic or Cartesian variety. Then one will think that a mere computer couldn’t be conscious. But that doesn’t settle the consciousness question. For, perhaps, if one built and ran a sufficiently “smart” computer (i.e., one with sufficient information processing capacity for consciousness), a soul would come into being. It wouldn’t be a mere computer any more.

Basically the thought here supposes that something like the following is a law of nature or a non-coincidental regularity in divine soul-creation practice:

  1. When matter comes to be arranged in a way that could engage in the kind of information processing that is involved in consciousness, a soul comes into existence.

Interestingly, though, a contemporary hylomorphist has very good reason to deny (1). The contemporary hylomorphist thinks that the soul of an animal comes into existence at the beginning of the animal’s existence as an animal. Now consider a higher animal, say Rover. When Rover comes into existence as an animal out of a sperm and an egg, its matter is not arranged in a way capable of supporting the kind of information processing involved in consciousness. Yet that is when it acquires its soul. When finally the embryo grows a brain capable of this kind of information processing, no second soul comes into existence, and hence (1) is false. (I am talking here of contemporary hylomorphists; Aristotle and Aquinas both believed in delayed ensoulment, which would complicate the argument, and perhaps even undercut it.) The same argument will apply to those Cartesian dualists who are willing to admit that they were once embryos without brains.

Perhaps one could modify (1) to:

  2. When matter comes to be arranged in a way that could engage in the kind of information processing that is involved in consciousness and a soul has not already come into existence, then a soul comes into existence.

But notice now three things. First, (2) sounds ad hoc. Second, we lack inductive evidence for (2). We know of no cases where the antecedent of (2) is true. If we were to generate a computer with the right kind of information processing capabilities, we would know that the antecedent of (2) is true, but we would have no idea whether the consequent is true. Third, our observations of the world so far all fit with the following generalization:

  3. Among material things, consciousness only occurs in living things.

But a “smart” computer would still not be likely to be a living thing. If it were, we would expect there to be non-“smart” computers that are alive: just as there are conscious living things, there are also unconscious ones. But it is not plausible that there would be computers that are alive but not “smart” enough to be conscious. One might as well think that the laptop I am writing this on will be conscious.

This isn’t a definitive refutation of (2). God has the power to (speaking loosely) provide an appropriately complex computer with a soul that gives rise to consciousness. But inductive generalization from how the world is so far gives us little reason to think he would.

Sunday, July 16, 2017

Informed organs surviving the death of an individual

In my last post, I offered a puzzle, one way out of which was to accept the possibility of informed bits of an animal surviving the death of the animal. But the puzzle involved a contrived case: a snake that was annihilated.

But I can do the same story in a much more ordinary context. Jones is lying on his back in bed, legs stretched out, with healthy feet, and dies of some brain or heart problem. How does the form (=soul) leave his body? Well, there are many stories we can tell. But here’s one thing that’s clear: the form does not leave the toes before leaving the rest of the body. I.e., either the toes die (=are abandoned by the form) last or they die simultaneously with the rest. In either case, Special Relativity and the geometry of the body (the fact that one can draw a plane such that one or more toes are on one side of the plane, and the rest of the body is on the other) imply that there is a reference frame in which the form leaves one or more of the toes last. Thus, there will be a reference frame and a time at which only toes or parts of toes are informed. It is implausible to think that one is alive if all that’s left alive are the toes. So organs can survive death while informed by the individual’s form.

Friday, July 14, 2017

Snake annihilation and partial death

The following five principles seem to be rationally incompatible:

  1. Every part of a living organism is informed by its form.

  2. If any part of an organism is informed by its form, the organism is alive.

  3. A snake would be dead if everything but the tailmost one percent of its length were annihilated.

  4. Simultaneity is relative, as described by Special Relativity.

  5. Being informed by a form is not relative to a reference frame.

To see the incompatibility, consider this case. A snake of ordinary proportions is lying stretched out in a line and is then instantaneously completely annihilated. Notice an interesting fact about this snake:

  6. Every bit of this snake is informed by the form of the snake whenever it exists.

This follows from (1) and the setup of the situation. Note that (6) will not be true in the case of snakes that meet a more ordinary end than by complete instant annihilation: those snakes leave behind parts that are no longer informed (they may be parts only in a manner of speaking, but I think nothing in my argument hangs on this). It is to make (6) true that I supposed the snake annihilated instantaneously.

Now, by (4), the claim that the snake is annihilated instantaneously must be understood relative to some reference frame F1. But it follows from Special Relativity and the geometry of linear snakes that there will be a reference frame F2 relative to which the snake is annihilated gradually from the head to the tail rather than simultaneously. There will thus be a time t2 such that relative to F2 at t2 the snake has been annihilated except for the tailmost one percent. At t2 relative to F2, that tailmost one percent is informed by the form of the snake, by (5) and (6). By (2), the snake is alive at t2 relative to F2. But by (3), it is dead at t2 relative to F2. So, the snake is both alive and dead at t2 relative to F2, which is absurd.
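The frame F2 can be exhibited with the standard Lorentz transformation (a sketch only; the coordinates, the boost speed v, and the placement of the snake are illustrative assumptions, not from the post):

```latex
% Boost at speed v along the snake's axis; put the head at x = L, the tail at x = 0.
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
% In F1 every point of the snake is annihilated at t = 0, so in F2 the point at x
% is annihilated at
t'(x) = -\frac{\gamma v x}{c^2},
% which for v > 0 is earliest at the head (x = L) and latest at the tail (x = 0).
% Hence relative to F2 there is an interval of times during which only the
% tailmost part of the snake remains.
```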

I am not sure what to do about this argument. I feel pushed to deny (2). Perhaps something could be dead simpliciter but still have living parts. But that’s an uncomfortable position.

Life and non-life

Assume a particle-based fundamental physics. Then the non-living things in the universe outnumber the living by many orders of magnitude. But here is a striking fact given a restricted compositionality like van Inwagen’s, Toner’s or mine, on which the only things in the universe are particles and organisms: the number of kinds of living things outnumbers the number of kinds of non-living things by several orders of magnitude. The number of kinds of particles is of the order of 100, but there are millions of biological species (they may not all correspond to metaphysical species, of course).

Counting by individuals, living things are exceptional. But counting by kinds, non-living things are exceptional. Only a tiny portion of the universe is occupied by life. But on the other hand, only a tiny portion of the space of kinds of entities is occupied by non-life.

I am not sure what to make of these observations. Maybe it gives some credence to an Aristotelian rather than Humean way of seeing the world, by putting the kinds of features, such as teleology, that are found in living things at the center of metaphysics.