There are various games to which one cannot assign finite expected utilities in the standard way. St Petersburg is an extreme case, but there are milder and in some ways more interesting ones. Besides these cases from the literature, there are interesting cases of gambling on nonmeasurable events (which I've used to argue for incommensurability).
One might put these cases aside as idle curiosities—when was the last time someone offered you a bet on the St Petersburg game?—were it not for the fact (noted by Hajek and others) that they threaten contagion to ordinary decisions as soon as there is some non-zero probability that such a game will happen. For if the utility of game A is undefined, and B is some perfectly ordinary game, but one thinks there is some tiny non-zero probability p of game A actually occurring, then one's expected utility for playing B won't be E[B] but E[B]+pE[A], which will be undefined. (A similar contagion problem applies to Pascal's Wager.)
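A minimal sketch of both points, using the standard St Petersburg payoff scheme (2^k with probability 2^-k) and an illustrative ordinary gamble with E[B] = 5; the cutoff values and the tiny probability p are arbitrary choices for illustration:

```python
from fractions import Fraction

def st_petersburg_truncated_ev(n_flips):
    """Exact expected payoff of St Petersburg truncated at n_flips coin flips.
    The game pays 2**k if the first head lands on flip k (probability 2**-k),
    so every term contributes exactly 1: the truncated expectation is n_flips,
    which grows without bound as the cutoff does."""
    return sum(Fraction(1, 2) ** k * 2 ** k for k in range(1, n_flips + 1))

assert st_petersburg_truncated_ev(1000) == 1000  # diverges linearly in the cutoff

# Contagion: an ordinary gamble B (say E[B] = 5) combined with any chance
# p > 0 of also playing St Petersburg has truncated expectation
# E[B] + p * E_n[A], which still grows without bound as the cutoff n grows.
p = Fraction(1, 10 ** 9)
values = [5 + p * st_petersburg_truncated_ev(n) for n in (10, 100, 1000)]
assert values[0] < values[1] < values[2]  # no finite limit, however small p is
```

Exact rational arithmetic is used so that the strict monotonicity check is not an artifact of floating-point rounding.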
The problems I am interested in all take place on some honest-to-goodness probability space (Ω,F,P), with P a perfectly standard countably additive probability measure that can be used to define a classical expectation E. The problem comes when one needs to make decisions involving gambles G (a gamble is just a real-valued function on Ω) that don't have a classically defined expectation E(G), due either to convergence problems or to nonmeasurability.
Here is a solution. Say that a function E* defined for all gambles G on Ω is an "extended expectation" provided that:
1. E* has values in some totally ordered extension of the reals (say, the hyperreals)
2. E*(aG)=aE*(G) for any real number a and gamble G
3. E*(G+H)=E*(G)+E*(H) for any gambles G and H
4. E*(G)≥0 if the gamble G is nowhere negative
5. E*(G)=E(G) if G has a finite classical expectation.
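To see that the conditions are at least jointly satisfiable on a restricted family of gambles, here is a toy sketch (my own illustration, not a general construction): hyperreal values are modeled as pairs finite + infinite·w, with w a fixed positive infinite unit and lexicographic order; gambles are restricted to the form G = a·S + f, with S the St Petersburg game and f of finite classical expectation m; and we stipulate E*(S) = w:

```python
from fractions import Fraction

class Hyper:
    """Toy hyperreal value finite + infinite*w, for a fixed positive
    infinite unit w; ordered lexicographically, infinite part dominating."""
    def __init__(self, finite, infinite=0):
        self.finite, self.infinite = Fraction(finite), Fraction(infinite)
    def __add__(self, other):
        return Hyper(self.finite + other.finite, self.infinite + other.infinite)
    def __rmul__(self, a):  # scaling by a real (here rational) number a
        return Hyper(a * self.finite, a * self.infinite)
    def __eq__(self, other):
        return (self.infinite, self.finite) == (other.infinite, other.finite)
    def __le__(self, other):
        return (self.infinite, self.finite) <= (other.infinite, other.finite)

# Encode the gamble G = a*S + f as the pair (a, m), where m = E(f).
def Estar(g):
    a, m = g
    return Hyper(m, a)  # E*(G) = m + a*w

def add(g, h):   return (g[0] + h[0], g[1] + h[1])
def scale(c, g): return (c * g[0], c * g[1])

G, H = (2, Fraction(5)), (1, Fraction(-3))
assert Estar(add(G, H)) == Estar(G) + Estar(H)   # condition 3: additivity
assert Estar(scale(3, G)) == 3 * Estar(G)        # condition 2: homogeneity
assert Estar((0, Fraction(7))) == Hyper(7)       # condition 5: agrees with E
assert Hyper(0) <= Estar((1, Fraction(0)))       # condition 4 on this family
```

This only checks the conditions on a one-parameter family of gambles; the general existence claim is the subject of the theorem sketched at the end of the post.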
Now we can do our decision theory. Say that H is at least as good as G provided that E*(G)≤E*(H) for all extended expectations E*. And say that H is strictly better than G provided that H is at least as good as G but G is not at least as good as H. (There is a variant definition here that one might want to consider: E*(G)<E*(H) for all extended expectations E*.)
Note that by condition (5), this is going to give the same answers as the classical theory whenever the classical theory gives answers. But it will also give answers in cases where the classical theory gives none. For instance, suppose A is one of those games with an undefined or infinite utility, B is eating a cookie, and C is getting a mild electric shock. Then the classical theory can't define the expected values of A+B and A+C, and so can't conclude that A+B is better than A+C, even though clearly B is better than C. But the above works, as long as some E* exists. For any extended expectation E*, E*(A+B)=E*(A)+E*(B)=E*(A)+E(B)>E*(A)+E(C)=E*(A)+E*(C)=E*(A+C), since E(B) and E(C) are defined and E(B)>E(C). So A+B is better than A+C.
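The cancellation in this argument can be checked concretely. In the toy sketch below (with lexicographically ordered pairs standing in for hyperreals, and illustrative numbers E(B)=10 for the cookie and E(C)=−5 for the shock), whatever value E*(A) takes appears on both sides and cancels, so the comparison reduces to E(B)>E(C):

```python
class Hyper:
    """Toy hyperreal finite + infinite*w (w a fixed positive infinite unit),
    ordered lexicographically with the infinite part dominating."""
    def __init__(self, finite, infinite=0):
        self.finite, self.infinite = finite, infinite
    def __add__(self, other):
        return Hyper(self.finite + other.finite, self.infinite + other.infinite)
    def __gt__(self, other):
        return (self.infinite, self.finite) > (other.infinite, other.finite)

E_B = Hyper(10)       # cookie: ordinary gamble with E(B) = 10 (illustrative)
E_C = Hyper(-5)       # mild shock: ordinary gamble with E(C) = -5
EstarA = Hyper(0, 1)  # one possible value E*(A) = w for the pathological game A

# E*(A) is the same term on both sides, so it cancels in the comparison
# and only E(B) > E(C) matters:
assert EstarA + E_B > EstarA + E_C
```

The assertion holds no matter which value is assigned to EstarA, which is the point: additivity makes the undefined-in-classical-terms summand drop out of the comparison.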
The point generalizes, and the contagion problem is solved.
Of course, for this approach to be non-trivial, there have to actually exist extended expectations. I have a rough sketch of a proof of the following claim:
- For any classical probability space (Ω,F,P) there is an extended expectation E*.