Conclusion
   - These are often useful techniques, but they don't work for 
       every game.  
 
   - The search space for Go is so big, and the evaluation function
     so poorly understood, that Minimax doesn't work. That was the
     case when I wrote this slide in 2015.  Now, deep learning (AlphaGo)
     has learned a good enough evaluation function to win.
 
   - Take Home Points
      
	- Adversarial games can be represented by a game tree. 
 
	- Minimax is a good way to play these. 
 
	- In all but the simplest games, you need a heuristic
            evaluation function. 
 
	- Minimax can be improved (without loss) by alpha-beta pruning 
            and iterative deepening.
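
	- The points above can be sketched in a few lines of code. This
          is a minimal illustration of Minimax with alpha-beta pruning on
          an explicit game tree (the function name and the toy tree are my
          own, not from the lecture; leaves stand in for heuristic
          evaluation values):

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Return the minimax value of `node`, pruning branches that
    cannot change the result.  Leaves are numbers (heuristic
    evaluation values); internal nodes are lists of children."""
    if isinstance(node, (int, float)):       # leaf: heuristic evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # cutoff: MIN would never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:                # cutoff: MAX would never allow this
                break
        return value

# The standard textbook example tree: its minimax value is 3, and
# pruning skips several leaves without changing that answer.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))                 # -> 3
```

          Iterative deepening wraps a depth-limited version of this search
          in a loop over increasing depth limits, so pruning stays lossless
          while the program can stop whenever time runs out.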
 
      
    
   - Reading for this week:  Russell and Norvig's Adversarial Search
       chapter, through section 4 (pp. 164-179).
   
 
   - Reading for next week:  Brachman and Levesque, Chapter 6.