Gov1372 - Data Exploration - Cooperation - Solved

 Before you begin, make sure you have the files data_assignment_sept16.Rmd and data_assignment_sept16.pdf as well as the folders axelRod-master and rmd_photos in the same folder on your computer. You will be using a Shiny app, developed by Simon Garnier and edited slightly by the TF team, that will only work if those things are all in the same place.

Next, you must set your working directory to the source file location. To do so, select ‘Session’ from the menu bar at the top of your screen, hover over ‘Set Working Directory’, then select ‘To Source File Location’.

When knitting your RMarkdown file to a PDF, make sure to say “No” when it asks you if you would like to “Render and view this document using Shiny instead.”

If you have trouble getting the Shiny app to work (or with anything else), please come to the teaching team. We are happy to help!

The Evolution of Cooperation
Axelrod’s Evolution of Cooperation uses the construct of the Prisoner’s Dilemma to illustrate how cooperation can emerge despite incentives not to. In the Prisoner’s Dilemma game, players must choose whether to cooperate or defect. The payoffs for doing one or the other depend on what the other player does, but no matter if the other player cooperates or defects, it is always strictly better for players to defect.

This can be seen in the table below, which replicates the game Axelrod uses throughout his book. If player 2 cooperates, it’s better for player 1 to defect, since then player 1 would receive 5 points instead of 3. If player 2 defects, player 1 definitely wants to defect and receive 1 point rather than 0. So no matter what each player expects the other to do, they will both choose to defect, yielding 1 point to each player.

P1 ↓; P2 →    C               D
C             R = 3, R = 3    S = 0, T = 5
D             T = 5, S = 0    P = 1, P = 1
But ideally, in the long run (in a repeated game) players would like to cooperate and receive 3 points on each round. Axelrod explains how strategies of cooperation can evolve even in circumstances where players are antagonists (like Allied and Axis soldiers in the trenches of World War I) or when the players are not capable of foresight (as is the case for cooperation in biological systems).
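To make the dominance argument concrete, here is a quick sketch in Python (used here only for illustration; the assignment itself is in R) that encodes the payoff table above and checks that defecting earns strictly more than cooperating, whatever the opponent does:

```python
# Axelrod's payoff matrix from the table above:
# (player 1's move, player 2's move) -> (player 1's score, player 2's score)
payoffs = {
    ("C", "C"): (3, 3),  # R: reward for mutual cooperation
    ("C", "D"): (0, 5),  # S: sucker's payoff; T: temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # P: punishment for mutual defection
}

# Whatever player 2 does, player 1 scores strictly more by defecting.
for p2_move in ("C", "D"):
    assert payoffs[("D", p2_move)][0] > payoffs[("C", p2_move)][0]

print("defection strictly dominates cooperation")
```

The same check holds for player 2 by symmetry, which is why mutual defection (1 point each) is the one-shot equilibrium despite mutual cooperation (3 points each) being better for both.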

The Prisoner’s Dilemma Simulator
For this week’s Data Exploration assignment, you will be working with a Shiny app that simulates prisoner’s dilemma games. To use it, simply run the code chunk above labeled ‘setup’, then run the following code (shiny_tournament()). Doing so will open the app in a separate window.

P.S., the “Instructions” tab in the app is broken. Don’t worry if it doesn’t display anything for you. Refer to this document for instructions.

P.P.S., when you close the app window, there may be some warnings or errors (like “no loop for break/next, jumping to top level”). You can just ignore them.

Setup
Now we’re going to do a round-robin style tournament between strategies of your choosing, similar to the ones Axelrod conducted. All students must complete this part, as the subsequent questions are based on the tournament you conduct here.

First, choose at least 6 strategies from the menu that look interesting to you. Your task is to play each one against all the other opponents and record the results in the Excel file available in the Google Drive called prisoners_dilemma_data.xlsx.

Second, once you have chosen your strategies, type all the pairwise combinations of those strategies into the columns player1 and player2. Make sure you type the strategies exactly as they appear in the application, including the case of the letters! Your spreadsheet should look like this after you have done so (but with the strategies you chose):

 

Figure 1: This is what your table should look like after step 2.

Note that there are 15 ways to combine 6 elements into pairs[1], so if you don’t have 15 pairs, check your work. Also note that the more strategies you choose, the more typing you will have to do.
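If you want to double-check your pair count, the number of unordered pairings can be enumerated directly. Here is a short Python sketch (illustrative only; the strategy names are the six used later in this solution):

```python
from itertools import combinations

strategies = ["titfortat", "punisher", "backstabber",
              "appeaser", "alternator", "handshaker"]

# Every unordered pair of distinct strategies: "6 choose 2" of them.
pairs = list(combinations(strategies, 2))
print(len(pairs))  # 15
```

Each tuple in `pairs` corresponds to one row of the spreadsheet's player1/player2 columns.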

Third, set the app so that “Tournament Type” = “Repeated”, “Number of Rounds” = 100, and “Number of Replications” = 100. Just as in Axelrod’s simulation, we are playing repeated games (this is determined by the “Number of Rounds”). The “Number of Replications” changes how many times the computer plays each repeated game. So in the example above, the computer will repeat the 100-round game of titfortat vs. inverser 100 times over and take the average outcome of each of those replications. This is useful because some of the strategies rely on probability (e.g. play “Defect” with probability .5) and so the outcome will be different each time. We average over many outcomes to see which strategy wins on average. Once you’re done, your spreadsheet should look something like this, but with different strategies:

 

Figure 2: This is what your table should look like after step 3.

Of course, don’t just copy these numbers, since they’re made up.
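To see why replications matter, here is a toy sketch in Python (not the app’s code; the coin-flip strategy and its unconditionally cooperative opponent are made up for illustration). Any single 100-round game against a stochastic strategy is noisy, but the average over 100 replications settles near the expected score:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def coin_flip_move():
    """Toy stochastic strategy: defect with probability .5."""
    return "D" if random.random() < 0.5 else "C"

def play_game(n_rounds=100):
    """Score vs. an unconditional cooperator: T = 5 when defecting, R = 3 when cooperating."""
    return sum(5 if coin_flip_move() == "D" else 3 for _ in range(n_rounds))

scores = [play_game() for _ in range(100)]  # 100 replications
average = sum(scores) / len(scores)
print(round(average, 1))  # close to the expected 400
```

Individual games bounce around 400 (each round is worth 3 or 5 with equal probability), while the replication average pins the strategy's typical performance down much more tightly.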

Fourth, save the file as a CSV. To do so, go to File > Save As, then set the File Format to be CSV UTF-8 (Comma delimited) (.csv). Make sure to save it with the name prisoners_dilemma_data.csv.

Now you can finally read the data you created into R using the following code and start answering the questions that follow.

pd_data <- read_csv("prisoners_dilemma_data.csv") %>%
  mutate(winner = case_when(   # if you are interested, case_when() is a very
    score1 > score2 ~ player1, # useful function to create new variables.
    score1 < score2 ~ player2, # check out how it works by googling.
    score1 == score2 ~ "tie"
  ))
## Parsed with column specification:
## cols(
##   player1 = col_character(),
##   player2 = col_character(),
##   score1 = col_double(),
##   score2 = col_double()
## )

Question 1
How do the strategies you chose rank against each other based on the number of wins? How do they rank based on the number of points won? Discuss the patterns you see here as they relate to what you read from Axelrod. Keep in mind the concepts of niceness, retaliation, forgiveness, and clarity.

The following code makes a data frame called player_data_long that you can use to rank the strategies based on the number of points won during the tournament. As a hint, you may want to try using group_by(), summarize(), and arrange() on player_data_long.

If you want, try to figure out what each line of code does. Cleaning the data and rearranging it like this is an important part of data science, not just running regressions and taking means. This is why we are leaving some of it to you, via group_by() and summarize().

To rank the strategies based on the number of wins, think about making use of the winner variable in the pd_data data frame.

player1_data <- pd_data %>% select(player = player1, score = score1, opponent = player2)
player2_data <- pd_data %>% select(player = player2, score = score2, opponent = player1)
player_data_long <- bind_rows(player1_data, player2_data)

player_data_long %>%
  group_by(player) %>%
  summarize(total = sum(score)) %>%
  arrange(desc(total))
## # A tibble: 6 x 2
##   player      total
##   <chr>       <dbl>
## 1 punisher     1292
## 2 backstabber  1290
## 3 titfortat    1244
## 4 appeaser     1172
## 5 alternator   1006
## 6 handshaker    779

pd_data %>%
  count(winner) %>%
  arrange(desc(n))

## # A tibble: 5 x 2
##   winner          n
##   <chr>       <int>
## 1 backstabber     4
## 2 handshaker      4
## 3 tie             4
## 4 alternator      2
## 5 punisher        1

ANSWER: Backstabber and handshaker tied for the highest number of wins with 4 each (another 4 games ended in a tie). For number of points won, punisher came in first (1292 points), followed closely by backstabber (1290). Axelrod didn’t study the total number of wins, so I’ll consider the total points in my analysis. Punisher won the tournament because it’s somewhat similar to titfortat in its qualities: it cooperates initially, retaliates proportionally to the opponent’s defection rather than excessively, then returns to cooperation. In other words, it’s nice (generally cooperative); it retaliates, but it’s forgiving, since it doesn’t retaliate blindly and bases its defection fairly on the other player’s behavior; and it’s clear when it cooperates, so other strategies, like handshaker, will recognize its qualities and cooperate.

Question 2
Make a plot like the one below that displays how your winning strategy (according to total points) fared against the strategies it played against. Using player_data_long is probably easiest, but complete this question however makes most sense to you. Comment on what you find.

 

Figure 3: Here is an example of a plot you might make.

player_data_long %>%
  filter(player == "punisher") %>%
  ggplot(aes(x = opponent, y = score)) +
  geom_col(fill = "azure3") +
  labs(x = "opponent", y = "score", title = "Punisher Score Against Opponents") +
  theme(plot.title = element_text(face = "bold", hjust = 0.5),
        axis.text.x = element_text(angle = 0, vjust = 0.5),
        plot.background = element_rect(fill = "white"),
        axis.title = element_text(face = "bold"),
        panel.background = element_rect(fill = "white", color = "black"))
[Plot: Punisher Score Against Opponents]

ANSWER: The overall winner, punisher, fared well against most strategies: it cooperated with titfortat, backstabber, and appeaser thanks to its initial cooperative move, leading to high scores for both players in those matchups. It isn’t too easy on alternator, punishing it for its defections, “catching on” to its pattern and making it the sucker. Handshaker is less predictable, and the two strategies generally end up defecting against each other, which explains why each leaves with a low score.

Question 3: Data Science Question
Create a plot, similar to the one above, that displays how each strategy fared against all the other strategies. Which strategies were most successful against which other ones? Comment on the patterns that you see.

player_data_long %>%
  group_by(player) %>%
  mutate(avg = mean(score)) %>%
  ungroup() %>%
  group_by(opponent) %>%
  mutate(avg_against1 = mean(score)) %>%
  ungroup() %>%
  group_by(player) %>%
  mutate(avg_against = mean(avg_against1)) %>%
  ungroup() %>%
  ggplot(aes(x = opponent, y = score)) +
  geom_col(fill = "azure3") +
  facet_wrap(~player) +
  geom_hline(aes(yintercept = avg, color = "mean score")) +
  geom_hline(aes(yintercept = avg_against, color = "mean opponent score")) +
  labs(x = "opponent", y = "score against opponent",
       title = "Strategies' Scores Against Each Opponent:", color = NULL) +
  theme(plot.title = element_text(face = "bold", hjust = 0.5),
        axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1),
        plot.background = element_rect(fill = "white"),
        axis.title = element_text(face = "bold"),
        panel.background = element_rect(fill = "white", color = "black"),
        strip.background = element_rect(color = "NA", fill = "NA"),
        strip.text.x = element_text(size = 12, face = "bold", color = "black")) +
  scale_colour_manual(values = c("red", "blue"))
[Plot: Strategies' Scores Against Each Opponent]

ANSWER: The two most successful strategies, according to this graph, were backstabber and punisher, with the highest mean scores. However, interestingly, both saw a relatively high mean opponent score, meaning that both players tended to benefit from repeated interactions. This makes sense for punisher, which has qualities similar to titfortat, and backstabber, which only defects endlessly if the opponent defects three times (making it cooperative against strategies like punisher, titfortat, handshaker, and more). This again ties back to Axelrod’s conclusion that traits like niceness, forgiveness, and clarity are successful. Backstabber isn’t necessarily clear, but with opponents who are cooperative, it exhibits a cooperative pattern, making it clear in these cases. Punisher could stand to be more forgiving, but this is irrelevant when playing cooperative strategies.

Question 4
What is the main difference between the game you played here and the tournament Axelrod implemented? How do you think this difference might matter?

ANSWER: Axelrod used many more strategies in his simulation, and the entire thing was carried out on a larger scale, with more simulations. The fact that mine only involved six strategies and 100 replications limits the strength of my conclusions; I’d predict that if I involved more strategies - particularly “smarter” strategies, designed carefully by professors like those participating in Axelrod’s tournament - titfortat would come out on top, even beating punisher in total points. Titfortat is more forgiving than punisher, which Axelrod identified as a successful trait.


 
[1] In math terms, this results from the fact that the binomial coefficient (6 choose 2) = 6!/(2! × 4!) = 15.
