This is the beginning of an attempt to give a reductionist account of a certain fragment of morality in terms of Schelling points. To divorce it from the halo effect and show its gears for what they are and are not. To show what controls it and what limits and amplifies its power.

How much money would you ask for, if you and I were both given this offer: “Each of you name an amount of money without communicating until both numbers are known. If you both ask for the same amount, you both get that amount. Otherwise you get nothing. You have 1 minute to decide.”?
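The rule of the game can be sketched in a few lines (the function name and example amounts are mine, not part of the offer):

```python
def payoff(my_ask: int, their_ask: int) -> tuple[int, int]:
    """Payoff rule of the game described above: each player names an
    amount, and both receive their amount only if the amounts match."""
    if my_ask == their_ask:
        return my_ask, their_ask
    return 0, 0

# A match pays out in full; any mismatch, however close, pays nothing.
print(payoff(10**12, 10**12))  # (1000000000000, 1000000000000)
print(payoff(10**6, 10**12))   # (0, 0)
```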

Now would be a good time to pause in reading, and actually decide.

My answer is the same as the first time I played this game. Two others decided to play it while I was listening, and I decided to join in and say my answer afterward.

Player 1 said $1 million.

Player 2 said $1 trillion.

I said $1 trillion.

Here was my reasoning process for picking $1 trillion.

Okay, how do I maximize utility?

Big numbers that are Schelling points…

3^^^3, Graham’s number, BB(G), 1 googolplex, 1 googol…

3^^^3 is a first-order Schelling point among this audience because it’s quick to spring to mind, but it looks like it isn’t a Schelling point, because it’s specific to this audience, and a Schelling point that doesn’t look like one can’t function as one. Therefore it’s not a Schelling point.

Hold on, all of these would destroy the universe.

Furthermore, at sufficiently large amounts of money, the concept of the question falls apart, as it then becomes profitable for the whole world to coordinate against you and grab it if necessary. What does it even mean to have a googol dollars?

Okay, normal numbers.

million, billion, trillion, quadrillion…

Those come close to being Schelling numbers, but not quite.

There’s a sort of force pushing toward higher numbers. I want to save the world. $1 million is enough that an individual never has to work again. It is not enough to make saving the world much easier, though. My returns diminish much less than normal. This is the community where we pretend to want to save the world by engaging with munchkinny thought experiments like this one. This should be known to the others.

The force is: even if they are more likely to pick a million than a billion, many of the possible versions of me who believe that pick a billion anyway. And the more they know that, the more they want to pick a billion… this process terminates at picking a billion over a million, a trillion over a billion…
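That force can be illustrated with a toy expected-value comparison. The match probabilities below are invented for illustration, not measured:

```python
# Hypothetical probabilities that the other player names each amount.
match_prob = {
    10**6: 0.5,   # a million: the "safe" meeting point
    10**9: 0.2,   # a billion
    10**12: 0.1,  # a trillion
}

# Expected payoff of naming each amount: amount times chance of a match.
expected = {amount: amount * p for amount, p in match_prob.items()}

# Even at much lower odds, the bigger number dominates in expectation,
# which is the force pushing the recursion upward.
best = max(expected, key=expected.get)
print(best)  # 1000000000000
```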

The problem with numbers bigger than a million is that you can always go one more. That forces any Schelling-point-locating algorithm to depend on more colloquial, and thus harder to reliably agree on, considerations.

These are both insights I expect the others to be able to reach.

The computation involved in figuring out just how deep that recursive process goes is hard, and “hard”. Schelling approximation: it goes all the way to the end.

Trillion is much less weird than quadrillion. Everything after that is obscure.

The chances of getting a trillion are way higher than those of getting a quadrillion; even contemplating going for a quadrillion reduces the ability to coordinate on anything more than a million.

But fuck it, not stopping at a million. I know what I want.

$1 trillion.

All that complicated reasoning. And it paid off; the other person who picked a trillion had a main-line thought process with the same load-bearing chain of thoughts leading to his result.

I later asked another person to play against my cached number. He picked $100.

Come on, man.

Schelling points determine everything. They are a cross-section of the support structure for the way the world is. Anything can be changed by changing Schelling points. I will elaborate later. Those who seek the center of all things and the way of making changes should pay attention to dynamics here, as this is a microcosm of several important parts of the process.

There’s a tradeoff axis between, “easiest Schelling point to make the Schelling point and agree on, if that’s all we cared about” (which would be $0), and “Schelling point that serves us best”, a number too hard to figure out, even alone.

The more thought we can count on from each other, the more we can make Schelling points serve us.

My strategy is something like:

- Locate a common and sufficiently flexible starting point.
- Generate options for how to make certain decisions leading up to the thing, at every meta level.
- Try really hard to find all the paths the process can go down: every path of any process you might both want to run and both be able to run.
- Find some compromise between best and most likely, which will not be just a matter of crunching expected utility numbers. An expected utility calculation is a complicated piece of thought, it’s just another path someone might or might not choose, and if you can Schelling up that whole expected utility calculation even when it points you to picking something less good but more probable, then it’s because you already Schellinged up all the options you’d explicitly consider, and a better, more common, and easier Schelling step from there is just to pick the highest one.
- Pay attention to what the perfectly altruistic procedure does. It’s a good Schelling point. Differences between what people want and all the games that ensue from that are complicated. You can coordinate better if you delete details, and for the both of you, zero-sum details aren’t worth keeping around.
- Be very stag-hunt-y.
- You will get farther the more you are thinking about the shape of the problem space and the less you are having to model the other person’s algorithm in its weakness, and how they will model you modeling their weakness in your weakness, in their weakness.
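The expected-utility point in the list above can be made concrete. Both rules below are sketches over a hypothetical shared option set; the simpler rule needs less agreement to run identically on both ends:

```python
# Hypothetical option set both players could independently reconstruct.
options = [10**6, 10**9, 10**12]

def pick_by_max(opts):
    """The simpler shared rule: once the option set is common knowledge,
    'take the highest' needs nothing further to agree on."""
    return max(opts)

def pick_by_expected_utility(opts, match_prob):
    """The heavier rule: it additionally requires agreeing on a table of
    match probabilities, which is one more thing to coordinate on."""
    return max(opts, key=lambda a: a * match_prob[a])

# Two players independently running pick_by_max over the same set land
# on the same answer with no probability table needed.
print(pick_by_max(options))  # 1000000000000
```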

Call how far you can get before you can’t keep your thought processes together anymore “Schelling reach”.

Having no communication is a special case. In reality, Schelling reach is helped by communicating throughout the process. There would also be stronger forces acting against it.

It’s striking that here I didn’t model any peculiarities of the cognition of the other players. But I still won.