Conclusion
   - MLPs work.  Use them for machine learning problems of
     moderate complexity.
 
   - Take Home Points
      
	- Perceptrons compute a weighted sum of their inputs. 
 
	- There is a range of transfer functions, including linear,
	  sigmoid, and step. 
 
	- A single layer of perceptrons can't learn some simple functions
	  (such as XOR) by itself, so we use MLPs. 
 
	- MLPs can approximate almost any function. 
 
	- Using backpropagation, MLPs can learn such functions from
	  examples (though training may settle in a local minimum). 
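   The take-home points above can be sketched in a few lines of code. This
   is a minimal illustration, not from the lecture; the helper names
   (perceptron, xor_mlp) and the particular hand-picked weights are my
   own. It shows weighted inputs, the step/linear/sigmoid transfer
   functions, and a two-layer network computing XOR, which no single
   perceptron can.

   ```python
   # Minimal sketch (assumed names/weights, not the lecture's code):
   # weighted inputs, three transfer functions, and a hand-wired MLP.
   import math

   def step(x):      # step transfer: fires (1) once input reaches threshold
       return 1.0 if x >= 0 else 0.0

   def linear(x):    # linear transfer: passes the weighted sum through
       return x

   def sigmoid(x):   # sigmoid transfer: smooth, differentiable squashing
       return 1.0 / (1.0 + math.exp(-x))

   def perceptron(inputs, weights, bias, transfer=step):
       # A perceptron: weighted sum of inputs plus bias, then a transfer.
       s = sum(w * x for w, x in zip(weights, inputs)) + bias
       return transfer(s)

   # A single step perceptron can compute linearly separable AND...
   assert all(perceptron([a, b], [1, 1], -1.5) == float(a and b)
              for a in (0, 1) for b in (0, 1))

   # ...but XOR is not linearly separable, so no single perceptron
   # computes it.  A two-layer MLP with hand-picked weights can:
   def xor_mlp(a, b):
       h1 = perceptron([a, b], [1, 1], -0.5)       # hidden unit: OR
       h2 = perceptron([a, b], [1, 1], -1.5)       # hidden unit: AND
       return perceptron([h1, h2], [1, -2], -0.5)  # OR and not AND

   print([xor_mlp(a, b) for a in (0, 1) for b in (0, 1)])  # [0.0, 1.0, 1.0, 0.0]
   ```
   
   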
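   Backpropagation itself can also be sketched briefly. The details below
   are assumptions for illustration (a 2-2-1 sigmoid network, squared
   error, plain per-example gradient descent, learning rate 0.5), not the
   lecture's exact setup: the deltas are the error signal scaled by the
   sigmoid derivative y*(1-y), propagated backwards through the weights.

   ```python
   # Minimal backpropagation sketch (assumed architecture and
   # hyperparameters): a 2-2-1 sigmoid MLP trained on XOR.
   import math
   import random

   def sigmoid(x):
       return 1.0 / (1.0 + math.exp(-x))

   random.seed(0)
   # w_h[j] = [weight from input 0, weight from input 1, bias] of hidden unit j
   w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
   # w_o = [weight from hidden 0, weight from hidden 1, bias] of the output
   w_o = [random.uniform(-1, 1) for _ in range(3)]

   data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

   def forward(x):
       h = [sigmoid(wj[0] * x[0] + wj[1] * x[1] + wj[2]) for wj in w_h]
       y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
       return h, y

   def sse():  # summed squared error over the four XOR cases
       return sum((t - forward(x)[1]) ** 2 for x, t in data)

   before, lr = sse(), 0.5
   for _ in range(5000):
       for x, t in data:
           h, y = forward(x)
           # output delta: error times sigmoid derivative y*(1-y)
           d_o = (t - y) * y * (1 - y)
           # hidden deltas: output delta propagated back through w_o
           d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
           for j in range(2):
               w_o[j] += lr * d_o * h[j]
               w_h[j][0] += lr * d_h[j] * x[0]
               w_h[j][1] += lr * d_h[j] * x[1]
               w_h[j][2] += lr * d_h[j]          # hidden bias update
           w_o[2] += lr * d_o                    # output bias update

   print(before, sse())  # the squared error shrinks as training proceeds
   ```
   
   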
 
      
    
   - Reading for this week: the MLP Wiki.
   
 
   - Reading for next week: Russell and Norvig's "Learning from
     Examples" chapter, section 9 (pp. 755-758).