Ridere, ludere, hoc est vivere.


Wednesday, February 22, 2017

Notes on Games of Strategy

Over three years ago, I wrote about my effort to approach a simple three-player race game using game theory.  Economist and game designer Dr. Aaron Honsowetz responded, which led to his recommendation that I look up the book Games of Strategy by Avinash Dixit, Susan Skeath, and David Reiley.  I finally obtained the third edition recently, and that has led Aaron, fellow designer Austin Smokowicz, and me to explore Dixit, Skeath, and Reiley's text in a kind of virtual book club.

Thinking about strategic games

We live-streamed last night's discussion.  We started with the first two chapters, which motivate the study of game theory and then define some terms and categories.

Strategic games depend on player decision-making, as distinguished from games of chance, which depend on luck, and from games of skill, which depend on proficiency, dexterity, quickness of mind, or practice.  So, for example, chess is a game of strategy whose outcome is determined by the decisions of the players.  Bingo is a game of chance, whose outcome is determined by the order of randomly selected numbers.  Bowling is a game of skill, whose outcome is determined by the strength, accuracy, and proficiency of the bowlers.

The authors distinguish decisions that people make independently of the decisions of others from strategic games, in which people know that the results of their decisions depend on the decisions of others as well, i.e. that their decisions are interactive.  So, for example, blackjack plays out based on the decisions of each player in isolation, regardless of the other players at the table and of the dealer, who follows strict rules and makes no independent choices.  Poker, by contrast, plays out based on the interaction of the decisions among players.  So poker meets the definition of a strategic game, while blackjack does not.

The authors proceed to classify games in anticipation of the structure of the rest of the book:
  • In sequential games, players take turns, each making decisions with knowledge of the moves already made, as opposed to games with simultaneous moves, in which players must anticipate the unknown decisions of other players.  Chess is sequential, while rock-paper-scissors is simultaneous.
  • Some games present players with strictly conflicting interests, while others involve common interests among the participants.  So most wargames are strictly conflicting, while Dead of Winter introduces both common and individual goals.
  • One-time games are distinguished from repeated games with the same opponents, which in turn are distinguished from games involving changing opponents.  So a one-time-only game might be the courtship, engagement, and marriage of a couple.  A bridge club plays the same game with the same opponents repeatedly.  And a single-elimination game tournament generally involves playing the same game with different opponents in each session.
  • The authors define games with full and equal information vs partial or unequal information:  External uncertainty applies to unknown information independent of others' decisions.  Strategic uncertainty applies to the unknown decisions by others.  A game with either or both has imperfect information; a game with no such uncertainty has perfect information.  A game in which one player has more information than another has asymmetric information.  So for example, a card game with a shuffled deck involves external uncertainty.  Rock-paper-scissors involves strategic uncertainty.  Chess is a perfect information game.  Scotland Yard has asymmetric information.  
Strategies that share or reveal information deliberately are called signaling.  Strategies that seek to motivate an opponent to reveal information are called screening. We spent quite a bit of time discussing examples of signaling, such as playing chicken and throwing the steering wheel out the window to demonstrate to an opponent your commitment not to swerve.  Aaron cited an example from Euchre in which card play signals to a partner information about your hand or about what the opponents do or don't have.  To Aaron's point, the important component of signaling is the degree of commitment to a decision.  Austin used an example from Hanabi as a method of communicating intent to fellow players.
After the chat, Aaron further refined our discussion of signaling (his point about signal cost is sketched numerically just after this list):
Bluffing itself is a non-credible signal. You claim you are committing to something (to alter player behavior).  To be a credible signal, the price to make the signal must be sufficiently high that only a person committed to the action is willing to pay it. When I remove my steering wheel and toss it out the window, I am saying I am willing to drive straight no matter the price. If the price of making the signal is too low, then people can make the signal without credibly committing to the action (bluffing), which destroys the signal's ability to indicate that you are going to take a particular action.
Screening involves eliciting information.  One way of doing that is to make an offer, like a trade in Catan, to see how players respond and thereby gain information about the contents of their hands and, to an extent, their future intentions.
During our discussion, I provided an example of sequential interactive decision-making that led to another follow-up by Aaron regarding signaling and screening.  In a game of Agricola, I built fences in anticipation of an opportunity to take sheep.  An opponent, who had no need of sheep and no means to store or cook them, took them anyway and let them run free to deprive me of that opportunity.  Said Aaron,
Your sheep example from Agricola was (if there was no other value for that many fences) a credible commitment that you would take sheep if they were available.  If that is the case it could also be part of a screening tactic.  If everyone is paying attention, it forces the last person before you who has a lower value play (assuming you are competitive in the game) to reveal that by responding to you.  And while you may not get the points from the sheep, it may still have been the best move because it blocked an opponent from taking an action to advance their score.
  • Games can have fixed vs manipulable rules.  Most tabletop games with which we are familiar have fixed rule-sets.  I imagine games with manipulable rules to include the interactions within a legislative body, whose rules of order may be modified by a party in power. 
  • The book defines cooperative games differently from the conventional use of the term that modern game-players might know.  The authors refer to games in which agreements are enforceable as cooperative, while games in which agreements are unenforceable, so that players may act in their own best interests, are called non-cooperative.  By these definitions, Catan is cooperative, inasmuch as a trade is an enforceable agreement; both parties are required by the game rules to hold up their end of the bargain.  Diplomacy is a non-cooperative game, since a player can commit to a future action and then renege on that commitment.
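To put rough numbers on Aaron's point about signal cost, here is a minimal Python sketch.  The payoff and cost values are entirely my own illustrative assumptions, not anything from the book or from our chat; the idea is just that a signal separates committed players from bluffers only when it costs more than a bluffer would gain by faking it.

    # A minimal numeric sketch of the credible-signal condition above.
    # All values are illustrative assumptions, not taken from the book.

    def signal_is_credible(cost, value_if_committed, value_if_bluffing):
        """A costly signal separates the two types only if a committed player
        still profits from paying the cost while a bluffer does not."""
        committed_pays = value_if_committed - cost > 0
        bluffer_pays = value_if_bluffing - cost > 0
        return committed_pays and not bluffer_pays

    # Suppose the committed player values the signaled action at 10 and a bluffer at 2.
    print(signal_is_credible(cost=5, value_if_committed=10, value_if_bluffing=2))  # True: only the committed type pays
    print(signal_is_credible(cost=1, value_if_committed=10, value_if_bluffing=2))  # False: cheap enough that a bluffer pays too

With numbers like these, a cost of 5 screens out the bluffer, while a cost of 1 does not, which is why a cheap signal stops carrying information.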
Terminology and other concepts:
  • Strategies are available choices, or more generally a set of guidelines or algorithms by which individual decisions are made - a plan for a succession of actions in response to evolving circumstances, presumably due to the actions of other players.
  • Payoffs are the outcomes of interactive decisions, including expected payoff based on a probability distribution of random outcomes.
  • Rationality, or rational behavior, assumes perfectly calculating players who consistently follow the best strategy in pursuit of a completely known self-interest.
  • Games involve a common knowledge of rules, specifically knowing who the players are, their available strategies or choices, the payoffs for each interaction of strategies, and an assumption of rational behavior.
  • When rational players interact, the game reaches an equilibrium whereby each player uses the strategy that best responds to the other players' strategies (a small best-response sketch follows this list).
  • As opposed to assumed perfect rationality and calculated equilibrium, an evolutionary approach to games allows for a dynamic process in which poor calculators gravitate, through observation, imitation, and learning, toward strategies that proved more successful in previous plays of the game.
  • Observations and experiments can help structure game theory and provide a check against its results.
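As a concrete illustration of mutual best response, here is a minimal Python sketch that checks every cell of a small two-player game for equilibrium.  The payoff numbers describe a made-up coordination-style game of my own; they are not an example taken from the book.

    # A minimal best-response check for a two-player, two-strategy game.
    # The payoffs are an illustrative assumption (a coordination-style game).

    # payoffs[(row_choice, col_choice)] = (row player's payoff, column player's payoff)
    payoffs = {
        ("A", "A"): (2, 2),
        ("A", "B"): (0, 0),
        ("B", "A"): (0, 0),
        ("B", "B"): (1, 1),
    }
    strategies = ["A", "B"]

    def is_equilibrium(row_choice, col_choice):
        """Neither player can gain by unilaterally switching strategies."""
        row_payoff, col_payoff = payoffs[(row_choice, col_choice)]
        row_can_improve = any(payoffs[(r, col_choice)][0] > row_payoff for r in strategies)
        col_can_improve = any(payoffs[(row_choice, c)][1] > col_payoff for c in strategies)
        return not row_can_improve and not col_can_improve

    equilibria = [(r, c) for r in strategies for c in strategies if is_equilibrium(r, c)]
    print(equilibria)  # [('A', 'A'), ('B', 'B')] -- each is a pair of mutual best responses

Any cell the check reports is one from which neither player can improve by switching on their own, which is exactly the equilibrium idea described above.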
The book stipulates that game theory can help to explain observed behavior of interacting decision makers, predict likely choices of rational actors, and prescribe strategic decisions.

I had one question that did not get addressed in our chat:  Can a game have more than one equilibrium, i.e. can there be local points of optimization that could emerge in an evolutionary approach different from what an analysis of perfect strategies would indicate?  The answer may come up in later discussions.

Next we will explore Chapter 3, "Games with Sequential Moves."

Thursday, September 19, 2013

Game Theory: A simple multi-player case

Earlier this week I was listening to Episode 36 of the Flip the Table podcast, which discussed the obscure 1979 Bruce Jenner Decathlon Game (published by Parker Brothers).  The game consists of ten mini-games using an eclectic variety of mechanics.  One of them caught my attention as an elegant bluffing and second-guessing procedure used to resolve the "foot races" in the decathlon.

Friday, August 26, 2011

Revisit: Incan Gold and game theory

[I've been on business travel this week, so in the absence of original material, I'm reposting an article from last spring when I was first discovering Incan Gold.]


We had a family session of Incan Gold this afternoon [original post 16 April 2011].  An interesting development came up when my wife Kathy and I had bailed out of an expedition, and only my two sons Liam and Corey remained to explore the ruins.  One instance each of three different monsters had been turned up, which meant that there was a very real possibility that a second monster of one type would appear and scare the remainder of the party out of the ruins at any point.  But then an artifact showed up, and a very interesting stand-off ensued.  By the rules of the game, if there are two or more people in the expedition, neither gets the artifact, and it stays on the card.  In a subsequent turn, if exactly one of the remaining two people decides to return to his tent, he gets all treasure left on cards from previous turns - including the coveted artifact.  If both players turn back, neither gets the artifact, and the round is over.  If both continue on, both continue to share discovered treasure but risk encountering a monster and losing everything.

What followed was an almost comical staring contest between the two of them to try to figure out whether the other was going to stay or return, and therefore whether to return (in hopes that the other was staying, which would leave the artifact to the returning player) or stay (and keep any subsequent treasure for oneself).

The decision to turn back or to continue is simultaneous among remaining players, so the result is a fairly classic game theory problem, in which the outcome of a decision depends upon an opponent's simultaneous unknown decision.

Own decision   Opponent decides to stay   Opponent decides to go
Stay           Turn over another card     Opponent gets artifact
Go             Get artifact               Nobody gets artifact


Since "Turn over another card" is mutually risky or mutually beneficial but in no case advantageous for one player over the other if both players stay, then game theory would conclude that the only logical decision would be to go.  But if both players decide to go, then neither gets the artifact.

The piece that's missing in my decision table above, however, is that if either player stays, another card will be turned over, to the risk or benefit of the player(s) staying.  So there might be an advantage to staying if a player perceives a potential treasure greater than the value of the artifact.  In practice that's unlikely, so the stand-off will typically end with both players going back and neither getting the artifact.  Having said that, the game actually plays unpredictably, and perceived risk and reward tend to rule over cold logic.
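To make that concrete, here is a rough Python sketch of the stand-off.  The numbers are my own illustrative assumptions (the artifact worth 5, the next card worth either nothing or a small positive expected value once monster risk is considered); they are not the game's actual values.

    # A rough numeric sketch of the stand-off above.
    # All values are illustrative assumptions, not the game's actual scoring.

    ARTIFACT = 5  # assumed value of leaving alone with the artifact

    def my_payoff(me, opponent, expected_continue):
        """My payoff for one combination of simultaneous choices."""
        if me == "go" and opponent == "stay":
            return ARTIFACT           # I leave alone and collect the artifact
        if me == "go" and opponent == "go":
            return 0                  # both go: nobody gets the artifact
        return expected_continue      # I stay: my payoff is whatever the next cards are worth

    def best_replies(opponent, expected_continue):
        """Every choice of mine that does at least as well as any other against this opponent choice."""
        options = ("stay", "go")
        best = max(my_payoff(me, opponent, expected_continue) for me in options)
        return [me for me in options if my_payoff(me, opponent, expected_continue) == best]

    # Treating "turn over another card" as worth nothing, as in the simple table:
    print(best_replies("stay", expected_continue=0))  # ['go'] -- grab the artifact
    print(best_replies("go", expected_continue=0))    # ['stay', 'go'] -- a tie, so going never hurts
    # Giving the next card a small positive expected value, as discussed above:
    print(best_replies("go", expected_continue=1))    # ['stay'] -- staying now beats leaving empty-handed

With the continuation value at zero, going never does worse than staying, which is the "cold logic" conclusion; give the next card even a small positive expected value, and staying becomes the better reply to an opponent who goes.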

We've really come to like this risk management game.  I'm apparently way too conservative, however.  I came in last today, and Corey (10) beat us all.  (I seem to recall that he ended up with the artifact more than once, by the way.)

Saturday, April 16, 2011

Incan Gold and Game Theory

We had a family session of Incan Gold (or, more precisely, my home-made knock-off) this afternoon.  An interesting development came up when my wife Kathy and I had bailed out of an expedition, and only my two sons Liam and Corey remained to explore the ruins.  One instance each of three different monsters had been turned up, which meant that there was a very real possibility that a second monster of one type would appear and scare the remainder of the party out of the ruins at any point.  But then an artifact showed up, and a very interesting stand-off ensued.  By the rules of the game, if there are two or more people in the expedition, neither gets the artifact, and it stays on the card.  In a subsequent turn, if exactly one of the remaining two people decides to return to his tent, he gets all treasure left on cards from previous turns - including the coveted artifact.  If both players turn back, neither gets the artifact, and the round is over.  If both continue on, both continue to share discovered treasure but risk encountering a monster and losing everything.

What followed was an almost comical staring contest between the two of them to try to figure out whether the other was going to stay or return, and therefore whether to return (in hopes that the other was staying, which would leave the artifact to the returning player) or stay (and keep any subsequent treasure for oneself).

The decision to turn back or to continue is simultaneous among remaining players, so the result is a fairly classic game theory problem, in which the outcome of a decision depends upon an opponent's simultaneous unknown decision.

Own decision   Opponent decides to stay   Opponent decides to go
Stay           Turn over another card     Opponent gets artifact
Go             Get artifact               Nobody gets artifact


Since "Turn over another card" is mutually risky or mutually beneficial but in no case advantageous for one player over the other if both players stay, then game theory would conclude that the only logical decision would be to go.  But if both players decide to go, then neither gets the artifact.

The piece that's missing in my decision table above, however, is that if either player stays, another card will be turned over, to the risk or benefit of the player(s) staying.  So there might be an advantage to staying if a player perceives a potential treasure greater than the value of the artifact.  In practice that's unlikely, so the stand-off will typically end with both players going back and neither getting the artifact.  Having said that, the game actually plays unpredictably, and perceived risk and reward tend to rule over cold logic.

We've really come to like this risk management game.  I'm apparently way too conservative, however.  I came in last today, and Corey (10) beat us all.  (I seem to recall that he ended up with the artifact more than once, by the way.)