CSC321: Introduction to Neural Networks and Machine Learning

Winter 2014 UTM

Tutorial page

TA: Yue Li

yueli [at] cs [dot] toronto [dot] edu


Tutorial 1 (January 15 and 17):

  1. partial derivative examples

  2. matlab tutorial slides

  3. matlab code
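Since the tutorial files are linked rather than reproduced here, a quick illustration of checking a hand-computed partial derivative numerically may help — a Python sketch (the course itself uses matlab), where the function f(x, y) = x^2 * y and the finite-difference helper are invented for illustration:

```python
# Check a hand-computed partial derivative against a central
# finite difference.  f and the evaluation point are invented.

def f(x, y):
    return x**2 * y          # df/dx = 2*x*y, df/dy = x**2

def partial_x(f, x, y, h=1e-6):
    # Central-difference approximation of df/dx at (x, y).
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

approx = partial_x(f, 3.0, 2.0)
exact = 2 * 3.0 * 2.0        # hand-computed df/dx = 2xy = 12
```

The central difference is a handy sanity check for any derivative you work out on paper.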


Tutorial 2 (January 22 and 24):

  1. matlab tutorial 2

  2. matlab code

  3. Handwritten notes


Tutorial 3 (January 29 and 31):

  1. A1 explained

  2. Handwritten notes


Tutorial 4 (February 5 and 7):

  1. Basic Probability Theory

  2. Handwritten notes

  3. Course on Coursera: Probabilistic Graphical Models
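As a taste of the basic probability material, here is a small Bayes' rule computation — a Python sketch with invented numbers (1% prior, 95% sensitivity, 10% false-positive rate), not taken from the tutorial notes:

```python
# Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+).
# All numbers below are invented for illustration.
p_d = 0.01              # prior P(disease)
p_pos_given_d = 0.95    # sensitivity P(+ | disease)
p_pos_given_nd = 0.10   # false-positive rate P(+ | no disease)

# Total probability of a positive test (law of total probability).
p_pos = p_pos_given_d * p_d + p_pos_given_nd * (1 - p_d)
p_d_given_pos = p_pos_given_d * p_d / p_pos   # posterior, about 0.088
```

The posterior stays below 9% despite the 95% sensitivity, because the prior is so small — the classic base-rate effect.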


Tutorial 5 (February 12 and 14):

  1. A1 review

  2. A2 explained

  3. Handwritten notes


Tutorial 6 (February 26 and 28):

  1. Unrolling a recurrent neural network into a feed-forward network + math review of forward/backward propagation

  2. Combining models: Bagging and AdaBoost

  3. Optional textbook (see Ch. 5.1–5.3): Pattern Recognition and Machine Learning (PR & ML)

  4. Course on Coursera: Machine learning
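The bagging/AdaBoost item above can be sketched as a few rounds of the AdaBoost weight update — a Python/NumPy toy (the decision-stump learner and the 1-D data are invented for illustration; the tutorial's own code is in matlab):

```python
import numpy as np

# Toy 1-D data with labels in {-1, +1}; invented for illustration.
X = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.9])
y = np.array([1, 1, 1, -1, -1, -1])
w = np.ones(len(X)) / len(X)          # uniform initial example weights

def best_stump(X, y, w):
    # Exhaustively pick the threshold/sign with the lowest weighted error.
    best = (None, 0, 1.0)
    for thr in X:
        for sign in (1, -1):
            pred = np.where(X < thr, sign, -sign)
            err = w[pred != y].sum()
            if err < best[2]:
                best = (thr, sign, err)
    return best

for t in range(3):                    # a few boosting rounds
    thr, sign, err = best_stump(X, y, w)
    eps = max(err, 1e-10)             # avoid log(0) on a perfect stump
    alpha = 0.5 * np.log((1 - eps) / eps)   # the stump's vote weight
    pred = np.where(X < thr, sign, -sign)
    w = w * np.exp(-alpha * y * pred) # up-weight misclassified points
    w = w / w.sum()                   # renormalize to a distribution
```

Bagging differs only in how the ensemble is built: each model is trained on a bootstrap resample and all votes are weighted equally, rather than reweighting the data between rounds.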


Tutorial 7 (March 5 and 7):

  1. A2 and midterm review

  2. Discussed sample midterm Part A 1–4, Part B 1 and 2a, the forward pass and backpropagation on a simple feed-forward net, etc.
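The forward pass and backprop step discussed for the sample midterm can be sketched as follows — a Python/NumPy toy with an assumed 2-input, single logistic-output net and squared-error loss (not the exam's exact network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny net: 2 inputs -> 1 logistic output.  Numbers are invented.
x = np.array([1.0, 0.0])    # input
w = np.array([0.5, -0.5])   # weights
b = 0.0                     # bias
t = 1.0                     # target

# Forward pass.
z = w @ x + b               # weighted input, here 0.5
a = sigmoid(z)              # output, about 0.622
E = 0.5 * (a - t) ** 2      # squared-error loss

# Backward pass (chain rule, one factor at a time).
dE_da = a - t               # derivative of the loss w.r.t. the output
da_dz = a * (1 - a)         # derivative of the logistic
dE_dz = dE_da * da_dz
dE_dw = dE_dz * x           # gradient w.r.t. the weights
dE_db = dE_dz               # gradient w.r.t. the bias
```

On an exam question the same chain of local derivatives is simply written out by hand, layer by layer.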

Tutorial 8 (March 12 and 14):

  1. A3 explained


Tutorial 9 (March 19 and 21):

  1. Midterm review

  2. Review of Boltzmann machines and simulated annealing

  3. Simulated annealing demo from Wikipedia (Simulated_Annealing)

  4. Demo of deep learning on recognizing handwritten digits
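In the spirit of the linked simulated-annealing demo, here is a minimal annealing loop — a Python sketch; the 1-D objective and the geometric cooling schedule are illustrative choices, not the Wikipedia demo itself:

```python
import math
import random

def energy(x):
    # Illustrative 1-D objective with several local minima;
    # the global minimum is near x = -0.3.
    return x * x + 2.0 * math.sin(5.0 * x)

random.seed(0)
x = 4.0                     # start far from the minimum
T = 2.0                     # initial temperature
for step in range(5000):
    x_new = x + random.gauss(0.0, 0.5)     # propose a random move
    dE = energy(x_new) - energy(x)
    # Always accept downhill moves; accept uphill ones with
    # probability exp(-dE / T), which shrinks as T cools.
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = x_new
    T *= 0.999              # geometric cooling schedule
```

The early high-temperature phase lets the search escape poor local minima; as T falls, the dynamics freeze into a good basin, the same idea exploited when sampling from Boltzmann machines.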


Tutorial 10 (March 26 and 28):

  1. A3 review

  2. A4 explained


Tutorial 11 (April 2 and 4):

  1. Final exam review (Q2–6) (exam.pdf) and study materials in addition to the lecture notes. Answers for Q2–6 were demonstrated on the blackboard in this tutorial.

  2. Q2: Clustering (T8)

  3. Q3: RNN (refer to midterm)

  4. Q4: Boltzmann Machine (T9 and T10)

  5. Q5: Stacking RBM (q5_stack_RBM.pdf). For details, refer to the pseudocode in Appendix B of Hinton, G. E., Osindero, S. and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, pp. 1527–1554. NB: to answer Q5, ignore the label layer at the very top of the network in the original paper.

  6. Q6: Autoencoder. Refer to Figure 1 in Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, Vol. 313, no. 5786, pp. 504–507, 28 July 2006. The matlab code for the autoencoder is also straightforward to follow.
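To accompany the Q6 pointer, here is a bare-bones linear autoencoder trained by gradient descent — a Python/NumPy sketch with an invented 4-2-4 tied-weight architecture, far smaller than the paper's deep network and not the course's matlab code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)               # one input vector (invented data)

# Tied-weight linear autoencoder: W encodes 4 -> 2, W.T decodes back.
W = rng.normal(scale=0.1, size=(2, 4))

losses = []
for step in range(200):
    h = W @ x                        # encoder: 2-D code
    x_hat = W.T @ h                  # decoder: 4-D reconstruction
    err = x_hat - x
    losses.append(0.5 * float(err @ err))
    # Gradient of 0.5 * ||x_hat - x||^2 w.r.t. the tied weights W.
    grad = np.outer(h, err) + W @ np.outer(err, x)
    W -= 0.05 * grad                 # plain gradient descent
```

The 2-D code is the low-dimensional representation; the paper's contribution is making this idea work for deep nonlinear encoders by pretraining each layer as an RBM before fine-tuning with backprop.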