I love the microfoundations of economics. I love seeing emergent, productive orders arise among individuals who are self-interested and maybe not even fundamentally “good.” That’s one of the reasons why I enjoyed reading David Friedman’s “A Positive Account of Property Rights” [1], which puts forth a theory on how property rights can arise out of the interactions of individuals. I intend to read more of the literature as I have more time.
For now, I’m interested in doing some basic exploration of how it could be possible for cooperation to arise in what I will call the “simple Hobbesian jungle” – after Hobbes’s description of human life without government as “solitary, poor, nasty, brutish, and short.”
I wrote most of this post as I was traveling by bus to New York last summer. It began as an attempt to provide microfoundations to push back against the general belief that humans, left to themselves, could not establish some semblance of peace and stability. As the name suggests, the model is very much simplified – it doesn’t encompass the totality of human choices made in the Hobbesian jungle. However, I made the model deliberately harsh to the individual, so that there would be as little reason to cooperate as possible short of having everyone outright destroy everyone else.
As I continued to write, I realized that in its harshness, the world I had created could very well result in stability. Not only this, but this mental experiment led to a realization that it’s possible to explain not only why cooperation arises, but also possibly how the ideas behind morality (don’t kill, don’t steal) get created in the first place. The model also held out some explanatory power for why war could possibly exist and why cooperation fails to arise sometimes. Surprisingly, there are even implications for the question of whether animals have rights.
Let’s dive in.
The simple Hobbesian jungle
Let’s attempt to see how human cooperation could emerge under certain restrictive assumptions that would make it even more difficult than normal for it to emerge. Specifically, why doesn’t everyone suddenly go kill everyone else?
I will model the emergent order through iterated games.
The setup
Imagine a state of existence composed of atomic, separate, primitive, rational, self-interested men. That is, imagine that we go back in time 20,000 years, and let’s assume away tribes and family relations.
We have a “society” of 50 individuals in a large, closed-off geographical area. These individuals have similar mental capacities, though their physical powers differ in magnitude. At time t=0 they begin separated from each other, each owning a certain good or tool that is useful to every individual in varying degrees. The items are randomly distributed across the various individuals.
The individuals, by assumption, cannot settle down to create agriculture or establish a division of labor, except in the field of protection services (more on that later). These people, each by himself, roam the countryside at t=0.
There is another assumption – the individuals have omniscience on a limited time horizon. That is, when two people encounter each other, each knows what the outcome of a battle between them would be with 100% certainty. When a battle is concluded, the winner is immediately fully healed and takes the property of the loser. Each possession on the map is valued by all individuals.
At t=0, the individuals may be given an ordinal rank of their fighting capabilities from 1 to 50, with 1 being the weakest, and 50 being the strongest. Each individual is aware that there are 49 other people on the map, and each individual is aware of his ordinal ranking and that of everyone else. The individuals know the rules of the scenario, and go through the following thought process ahead of time: Each person considers every other person and thinks about what would happen in an encounter between them. Let each individual be named as his ordinal rank. In that situation, if 12 were to encounter 17, 12 knows that he would lose in a fight and 17 would win. 17 knows the same information.
Assume away any feelings of morality in these people or any feelings of kinhood. Each individual desires to obtain as many of the goods as possible and also desires to stay alive so that he may make use of the goods. The goods are infinitely durable, and provide a constant level of satisfaction when used in each time step.
Assume that when individuals wander the map they cannot see beyond a limited space around them, and hence may not avoid their peers. People also cannot scout out opponents and cannot escape a battle (although it is theoretically possible that a battle never begins because both parties decide not to fight for whatever reason). In essence, there is a fog of war.
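The setup above can be sketched in a few lines of code. Everything here is a direct transcription of the stated assumptions; the only liberty taken is letting a player’s ordinal rank double as his name.

```python
import random

NUM_PLAYERS = 50

# Each individual is named by his ordinal fighting rank, 1..50.
players = list(range(1, NUM_PLAYERS + 1))

def fight(a, b):
    """Outcomes are known with certainty: the higher rank always wins,
    takes the loser's goods, and is immediately fully healed."""
    return max(a, b)

# Fog of war: players cannot see far, scout, or flee, so encounters
# are effectively random pairings of whoever is still alive.
def random_encounter(alive):
    return random.sample(alive, 2)
```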
Ready, set, go! First guess
A shallow, first-level analysis of the situation would suggest the following thought process: 50 knows that he can beat any other individual and take his property. 49 knows that he can beat all individuals besides 50. 48 knows he can beat all but two individuals – 50 and 49. This continues until the analysis reaches 1, who knows that he cannot beat any other person on the map and will lose all battles with other people.
Say that 25 meets with 33. What should each actor do in his self-interest? The first instinct might be for 33 to kill 25. After all, it is 100% certain that 33 will gain in the short term without permanent bodily injuries. 25 doesn’t have an option to run, and cannot win a fight in this scenario. It appears that this map is doomed for players to successively kill each other until only 50 is left with all the goods and the highest utility level achievable. Remember that after a given battle is over, the winner is healed immediately by assumption, and hence has no temporary weakness after the battle is over.
Is this what will happen on this map? Remember that individuals by assumption had their omniscience limited to a short time horizon. What is meant is that a person knows the outcome of the coming fight, but does not have perfect information about future social conditions – individuals work under the limited information that we have today in that regard. We may only guess what will happen in the future and support it with evidence. Same goes for them.
A second look
With that in mind, return to the scenario at t=0 between 25 and 33. One obvious outcome is for 33 to win and take 25’s possessions. Are there any alternatives, however? There are. 25 knows that he will lose to 33 in a battle, yet he also knows that 33 would lose to anyone above him. Therefore, it is beneficial for 25 to remind 33 of this fact and to propose an agreement – that they form an alliance for protection. Thus, 25 and 33 agree to not kill each other and to fight together in battle. Why is this beneficial? It is beneficial for 33 because with the help of 25 he can likely take on individuals with a much higher fighting power – say, 37, 42 or even possibly 50. Say that the highest level person they can take on together in battle is 39. Hence, their combined rank is above person 39, but below person 40.
Assume, furthermore, that no person can beat a combination of all of the people below his rank – except for 2, of course (who can always beat 1 in a one-on-one fight).
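As one concrete (and purely hypothetical) instantiation of this combining rule, an alliance could fight somewhat better than its strongest member, with a bonus from each weaker ally. The coefficient below is chosen only so that {25, 33} beats 39 but loses to 40, as in the example; the clause about combinations of all weaker players would need a different rule, so this sketch covers only small alliances.

```python
def coalition_strength(members):
    """One toy combining rule: the strongest member's rank plus a
    25% bonus from each ally's rank. Purely illustrative -- the post
    only pins down examples like the pair {25, 33}."""
    strongest = max(members)
    return strongest + 0.25 * (sum(members) - strongest)

# The pair from the text: 25 and 33 together beat 39 but not 40.
print(coalition_strength([25, 33]))  # 39.25
```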
What social patterns can we predict to emerge?
Let’s look at the lower end of the scale. Individuals below, say, 18, if they encounter each other, know that they stand to gain in the short term by fighting and killing each other off. However, they also know that they might very well be killed soon thereafter by upper-level players. Hence, they have an incentive to band together. If they happen to meet, 12 and 5 might group together, and so would 17 and 16, for example. Players in the middle range, say 18-34, know that they could easily (and with certainty) take on the weaker players and steal their property. However, they also know that people above their rank could kill them and take their property. Hence, it is useful for those players to band together as well. Players in the upper strata of fighting abilities, 35-50, know that at t=0 they are the strongest on the map and that they could take on any individual below them. The lower section of the 35-50 range might fear the upper section, however, so it might decide to group together to protect itself from the very best fighters.
Whenever a person allies himself with someone else, it is less useful for him to ally himself with a weaker person than with a stronger one. However, note that if every player has a personal rule where he decides to only ally himself with a stronger person, then there would be no alliances ever made. In every encounter there is necessarily a stronger player and a weaker player. Hence, while the weaker player may want to ally himself with the stronger one, the stronger one would never want the weaker one under this rule.
We see that for some sort of alliance to come into existence, stronger individuals must ally themselves with weaker ones at some point.
Multiple levels of logic and strategy
Now, two points:
1) It is also clear that a person in the 1-17 range, for example, is better off allying himself with a person above his range than with a person within the range.
2) A person in the 18-34 range is better off being allied with a person in the 35-50 range than a person in the 18-34 range. Yet even the 18-34 range is better for him than the 1-17 range.
While it might initially appear that people in the 1-17 range will have little use for each other (because they are all relatively weak), these players may also realize that the 18-34 range people have even less use for them. A person in the 1-17 range therefore has a better chance of securing an alliance with an opponent from his own range than with one from the 18-34 range. Hence, it would be useful for people in the 1-17 range to offer each other alliances if they happen to meet. The same goes for 18-34.
Yet what if the initial meetings are from people vastly different in strength from different ranges? For example, what if 12 meets 36? It appears that 36 would not gain very much from allying with 12. However, as was said before, future societal structure is uncertain. 12 might very well employ the following reasoning:
“Sure, 36, you could kill me, because I do not contribute all that much to our joint defense. Yet consider this: Some number of pairs of people, each below 25, have met or are meeting at this very moment. Say that number is X. These people are likely to ally with each other (by the analysis presented previously for people in the same range). The expected meeting of people in this group is a meeting of players 12.5 and 12.5 (using statistical expectation). Once these people join up, they might be able to take on a player who is ranked 13, 17, or even 20. Therefore, this meeting of the lower ranks increases their power, which narrows the spread of power across the map. At time t=1, then, we will have people in lower ranks allying together and bullying people at the lower end of the upper ranks. This lower end, if it meets the growing group of underdogs at t=1, has a chance of joining them, making an even more powerful group at t=2. As this process wears on, stronger and stronger players are recruited, and it’s very possible that at t=3 you, 36, will be meeting strong groups of underdogs. Not only this, but you could be meeting people above your rank as well. Therefore, even though I only improve your ranking so that you can beat maybe the person ranked 39 or 40, if you do not ally with me, you stand a much higher chance of dying at t=3. We should ally with each other and with any other players we might happen to meet.” Let this be named argument *.
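The statistical expectation invoked in argument * checks out: a player drawn uniformly from the ranks below 25 has an expected rank equal to the mean of 1 through 24.

```python
# Expected rank of a player drawn uniformly from 1..24 ("below 25").
ranks = range(1, 25)
expected = sum(ranks) / len(ranks)
print(expected)  # 12.5
```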
If 36 buys this reasoning, he will join in. If not, he will kill 12 and take his good. I propose that it is likely for 36 to ally himself with 12.
The tough case
But what if 50 encounters 1 at t=0? The chances of an alliance are much lower. It may even be that 1 offers very little of value to 50, and that 50 decides to kill 1. Remember that fighting has no cost for the party predestined to win a given fight (besides forgoing a future alliance).
Hence, the worst-case scenario we can imagine is that at t=0, 50 meets 1, 49 meets 2, 48 meets 3, and so on, with 26 meeting 25. Suppose the cutoff for argument * working is 36 meeting 15. Hence, when players 37 and above meet players 14 and below, * doesn’t work.
What could 1 say to 50 to keep from being killed? Well, note that if 50 kills 1 and 37-49 kill their respective weaklings, the players left in the next round will be 15-50. Furthermore, 15-36 will all have allies (by argument *). Hence, 37-50 are in a very much weakened position at t=1 relative to t=0. It might therefore be advantageous for 50 to get any help he can at t=0 to protect himself at t=1. [2][3]
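The arithmetic of this worst case can be checked directly. The cutoff rank below is an assumption taken from the hypothetical, not a derived quantity.

```python
# Worst-case pairing at t=0: rank k meets rank 51-k.
CUTOFF = 37  # hypothetical: pairs led by rank 37 or stronger end in a kill

pairs = [(k, 51 - k) for k in range(26, 51)]   # (26,25), ..., (50,1)
killed = sorted(w for s, w in pairs if s >= CUTOFF)
allied = [p for p in pairs if p[0] < CUTOFF]

print(killed)       # ranks 1..14 die at t=0
print(len(allied))  # 11 allied pairs: (26,25) through (36,15)
```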
Throughout this whole explanation, we must remember that although there are a lot of factors that are constant by assumption (such as knowledge about who would win a battle), the exact social outcome will vary according to the explanatory power of the individuals who bargain with the superiors for inclusion in the “tribes.”
I do not purport to prove that one particular social arrangement will in fact turn out – it all depends on the powers of the players to convince each other. I am merely pointing out various plausible tendencies in the situation. Of course, all of this relies on the individuals realizing that they can call future uncertainty to their aid in the first place.
Conclusions for Part 1
In this conclusion section, I will cheat a little bit and point out some of the ideas I stumbled on after finishing the simple Hobbesian jungle.
The first thing to note is that even in a winner-takes-all, amoral world with no division of labor, no trade, and no attachments or regret, cooperation can arise. I have not by any means proved that it will result in sunshine and utopia for everyone. Yet I have shown that uncertainty about the future social order could be a driver of social cooperation for the provision of protection.
In future posts, when we strip away some of the assumptions, we will see that the complications introduced into the model will strengthen some incentives for cooperation and weaken others – which way the net effect points, we will see (though I expect it will be positive).
An important note to make is that cooperation was allowed to arise in the model because the players shared a language. If they had no capability to convince each other, they would not have been able to develop this system of mutual protection, but instead would have most likely ended up killing each other, and 50 indeed would have won (although, perhaps, slavery might have arisen instead… That’s another interesting dynamic for another time).
This suggests a possible explanation for early warring tribes. Without communication, even if they had good intentions, they might not have been able to get them across. Assuming away good intentions and focusing only on self-interest, they might still have developed some mutual-protection relationship had the language barrier not stood in the way.
So we see that defense is one of the possible drivers of cooperation. Looking ahead, another driver is tasks that can be completed together more easily than separately (separate from the division of labor). I’m thinking of things like, say, rolling large logs. A man might not be able to do it by himself, yet can achieve the goal with 3 other men. Upon further consideration, defense is in fact a subset of this “cooperative strength.”
The other possible driver of cooperation, we see, will be the division of labor. In our Hobbesian jungle, this was assumed away to simplify the model. Yet at first glance, there appears to be a strong case for why the division of labor would be conducive to peace instead of war. Varying levels of talent mean that people have comparative advantages in the production of different goods. Not only this, but specialization allows for an increase in the productivity of the laborer. As such, if they can communicate, Hobbesian strangers might prefer to trade instead of fight.
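The comparative-advantage point can be made concrete with a toy example (all numbers hypothetical): suppose A can make 4 spears or 2 baskets per day, while B can make only 1 of either. A is absolutely better at both, yet B’s opportunity cost of a basket (1 spear) is lower than A’s (2 spears), so total output rises when B specializes in baskets.

```python
def total_output(a_frac_spears, b_frac_spears):
    """Total (spears, baskets) per day, given the fraction of the day
    each man spends on spears. Productivities are hypothetical:
    A makes 4 spears or 2 baskets per day; B makes 1 of either."""
    spears = 4 * a_frac_spears + 1 * b_frac_spears
    baskets = 2 * (1 - a_frac_spears) + 1 * (1 - b_frac_spears)
    return spears, baskets

autarky = total_output(0.5, 0.5)       # (2.5, 1.5) -- each self-sufficient
specialized = total_output(0.75, 0.0)  # (3.0, 1.5) -- B makes only baskets
# Same basket count, half a spear more: a surplus that trade can divide.
```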
Taking cooperative strength and the division of labor together, we begin seeing how property rights, at least the concept of self-ownership, might have emerged.
Going back to the importance of a shared language, we can see why cooperation is difficult both 1) among animals themselves and 2) between humans and animals. They have no way to make the case to each other for why they shouldn’t kill each other. Animals cannot make pacts for mutual protection unless such pacts are genetically instilled in them. Humans likewise cannot face a bear reared on its hind legs and argue that no, Mr. Bear, you shouldn’t kill me, because then you would lose the benefits I can offer you.
The language barrier hinders both the possibilities for cooperative strength and the division of labor. If animals were to wake up tomorrow able to communicate completely fluently with each other, we would see more cooperation. If they could also engage in the division of labor, they would start off on the path to creating human-like societies. However, they do not possess these capabilities (beyond their simple abilities to communicate). As such, avoiding conflict and protecting oneself from animals make having a meaningful society with them impossible. The stronger has always dominated the weaker. Yes, we can keep pets, and even live peacefully and happily with them, but only after having “enslaved” them and forced them to fit into our society through extensive “brainwashing” (training) – to put the affair in human terms. The question of animals’ rights extends beyond understanding “don’t hit or kill” to recognizing property boundaries (which will be discussed in future posts). As such, until animals can properly understand these concepts, they remain subordinate to humans and their property (I suppose some select animals, such as some primates, could be excepted).
In future posts, I look forward to making the Hobbesian jungle a little more realistic.
Notes and References:
[1] http://www.daviddfriedman.com/Academic/Property/Property.html
[2] Another reason 50 might choose to team up with a very weak player is signaling. While players might choose to team up, they could in theory, at any time, turn on each other. 50 choosing to ally himself with 1 sends a signal that he will restrain himself from killing weaker players and will cooperate well with others. Not a perfect signal – true – but 50 could find a way to make it appear legitimate.
[3] At some point in the discussion, someone might bring up the possibility of everyone teaming up into one big happy group. But then, the hypothetical continues, why wouldn’t the best 49 players decide that 1 is useless and take him out? (Another version is that they decide 50 is too powerful by himself and decide to off him.) This certainly may happen, yet it’s also possible that 2 will realize that if 1 is killed off, 2 becomes the remaining weakest player – and the next one on the chopping block. In agreeing, he might be stepping onto a slippery slope. So might 49 in the case of killing 50. By backwards induction, more and more players might get killed over time. This creates a disincentive to implement such a “kill the worst (or best) player” policy, as long as the players have enough foresight to realize the consequences of their actions.
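The backwards-induction worry in note [3] can be sketched mechanically: if the grand coalition repeatedly applies a “kill the current weakest” policy, the policy has no natural stopping point.

```python
# Each round, the group eliminates its weakest member. By backwards
# induction, anyone who endorses the policy eventually becomes its target.
group = list(range(1, 51))
rounds = 0
while len(group) > 1:
    group.remove(min(group))
    rounds += 1
print(group, rounds)  # [50] 49
```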