CSC321: Introduction to Neural Networks
and Machine Learning
Winter 2014 UTM
Tutorial page
TA: Yue Li
yueli [at] cs [dot] toronto [dot] edu
Tutorial 1 (January 15 and 17):
Tutorial 2 (January 22 and 24):
Tutorial 3 (January 29 and 31):
Tutorial 4 (February 5 and 7):
•Course on Coursera: Probabilistic Graphical Models
Tutorial 5 (February 12 and 14):
Tutorial 6 (February 26 and 28):
•Unrolling a recurrent neural network into a feed-forward network + math review of forward/backward propagation
•Optional textbook (see Ch. 5.1-5.3): Pattern Recognition and Machine Learning (Bishop)
•Course on Coursera: Machine learning
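As a companion to the forward/backward propagation review above, here is a minimal NumPy sketch of both passes on a toy one-hidden-layer network, checked against a finite difference. The architecture, initialization, and loss are illustrative assumptions, not the tutorial's actual worked example.

```python
import numpy as np

# Hypothetical toy net: 2 inputs -> 3 logistic hidden units -> 1 linear output,
# squared-error loss. Sizes and values are illustrative only.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)          # one training case
t = np.array([1.0])                 # target

W1 = rng.standard_normal((3, 2)) * 0.1
b1 = np.zeros(3)
W2 = rng.standard_normal((1, 3)) * 0.1
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: compute activations layer by layer.
z1 = W1 @ x + b1
h = sigmoid(z1)
y = W2 @ h + b2
loss = 0.5 * np.sum((y - t) ** 2)

# Backward pass: apply the chain rule layer by layer.
dy = y - t                          # dL/dy for squared error
dW2 = np.outer(dy, h)
db2 = dy
dh = W2.T @ dy
dz1 = dh * h * (1 - h)              # derivative of the logistic
dW1 = np.outer(dz1, x)
db1 = dz1

# Finite-difference check on one weight: perturb it and re-run the forward pass.
eps = 1e-5
W1p = W1.copy(); W1p[0, 0] += eps
num = (0.5 * np.sum((W2 @ sigmoid(W1p @ x + b1) + b2 - t) ** 2) - loss) / eps
print("finite-diff vs backprop:", num, dW1[0, 0])
```

The two printed numbers agreeing (up to finite-difference error) is the standard sanity check that the backward pass matches the forward pass.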
Tutorial 7 (March 5 and 7):
•Discussed the sample midterm: Part A 1-4, Part B 1 and 2a; forward pass and backpropagation on a simple feed-forward net; etc.
Tutorial 8 (March 12 and 14):
Tutorial 9 (March 19 and 21):
•Midterm review
•Simulated annealing demo from Wikipedia (Simulated_Annealing)
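In the spirit of that demo, here is a minimal simulated-annealing sketch: it minimizes a bumpy 1-D function by accepting all downhill moves and uphill moves with probability exp(-d/T) while the temperature T cools. The objective, proposal width, and schedule are illustrative assumptions, not the Wikipedia demo's actual settings.

```python
import math
import random

# Illustrative bumpy objective: many local minima, global minimum near x = 0.
def f(x):
    return x * x + 4.0 * math.sin(5.0 * x)

random.seed(0)
x = 5.0                      # start far from the minimum
T = 2.0                      # initial temperature
for step in range(5000):
    x_new = x + random.gauss(0.0, 0.5)      # random local proposal
    d = f(x_new) - f(x)
    # Metropolis rule: always accept improvements; accept worsening
    # moves with probability exp(-d/T), which shrinks as T cools.
    if d < 0 or random.random() < math.exp(-d / T):
        x = x_new
    T *= 0.999               # geometric cooling schedule

print("final x:", x, "f(x):", f(x))
```

At high T the walker hops freely between basins; as T falls it settles into a low one, which is the whole point of annealing versus plain hill descent.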
Tutorial 10 (March 26 and 28):
Tutorial 11 (April 2 and 4):
•Final exam review (Q2-6) (exam.pdf) and study materials supplementing the lecture notes. Answers to Q2-6 were worked through on the blackboard in this tutorial.
•Q2: Clustering (T8)
•Q3: RNN (refer to midterm)
•Q5: Stacking RBMs (q5_stack_RBM.pdf). For details, refer to the pseudocode in Appendix B of Hinton, G. E., Osindero, S. and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, pp. 1527-1554. NB: to answer Q5, ignore the label units attached to the top layer of the network in the original paper.
•Q6: Autoencoder. Refer to Figure 1 in Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, Vol. 313, no. 5786, pp. 504-507, 28 July 2006. The MATLAB code for the autoencoder is also straightforward to follow.
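For Q6, a far smaller sketch of the same idea as the Hinton & Salakhutdinov setup may help: a one-hidden-layer autoencoder trained by backprop to reconstruct its input through a narrow code layer. The data, layer sizes, and learning rate below are illustrative assumptions, not the paper's (which uses a much deeper net with RBM pretraining).

```python
import numpy as np

# Toy autoencoder: 8-d inputs -> 3 logistic code units -> 8-d linear
# reconstruction, trained with squared-error backprop. All sizes are
# illustrative; no biases or pretraining, unlike the real paper.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))            # 100 toy cases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.standard_normal((8, 3)) * 0.1       # encoder weights
W2 = rng.standard_normal((3, 8)) * 0.1       # decoder weights
lr = 0.05

def forward(X):
    H = sigmoid(X @ W1)                      # code layer
    Y = H @ W2                               # linear reconstruction
    return H, Y

_, Y0 = forward(X)
err0 = np.mean((Y0 - X) ** 2)                # error before training

for epoch in range(500):
    H, Y = forward(X)
    dY = (Y - X) / len(X)                    # grad of squared error (up to a constant)
    dW2 = H.T @ dY
    dH = dY @ W2.T
    dZ = dH * H * (1 - H)                    # logistic derivative
    dW1 = X.T @ dZ
    W1 -= lr * dW1
    W2 -= lr * dW2

_, Y1 = forward(X)
err1 = np.mean((Y1 - X) ** 2)                # error after training
print("reconstruction error:", err0, "->", err1)
```

Since the target is the input itself, the same forward/backward machinery as a supervised net applies; the 3-unit bottleneck is what forces the code layer to learn a compressed representation.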