Matrix chain multiplication. This is a dynamic programming problem from CLRS, discussed in Section 15.3. Suppose you want to compute A = A_1 A_2 ... A_n, where each A_i is a matrix of dimension a_i \times a_{i+1}. The question is: in which order do you do the matrix multiplications? Left to right? Split at the middle? Something else? In fact, the optimal order depends on the dimensions of the matrices (one example: a_1 huge, a_2, ..., a_{n+1} = 2, ...). Assume that multiplying a b \times c matrix by a c \times d matrix (which yields a b \times d matrix) takes precisely bcd steps. The question is how to parenthesize the matrix multiplications above so that the total number of steps is minimized.

The solution is a dynamic programming approach, where you define BEST[i, j] to be the minimum number of operations needed to compute the product A_i A_{i+1} ... A_j. You can see that:

---BEST[1, n] is what we want (at least, the cost of the solution)

---BEST[i, i] = 0

---BEST[i, j] = min_{i <= k < j} BEST[i, k] + BEST[k+1, j] + a_i a_{k+1} a_{j+1}

(The last term is the cost of the final multiplication: A_i ... A_k is an a_i \times a_{k+1} matrix and A_{k+1} ... A_j is an a_{k+1} \times a_{j+1} matrix.)

I just need you to go over this one problem, and if they understood how to compute the optimal number of operations, then go over how to actually find the best parenthesization (by first computing BEST[i, j] for all i and j, and then recovering the best parenthesization recursively from those values).
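A minimal sketch of the recurrence in Python (the names `matrix_chain` and `parenthesize` are my own; `dims` holds a_1, ..., a_{n+1}, so matrix A_i has dimension dims[i-1] x dims[i]). It fills BEST bottom-up by chain length, records the best split point k for each (i, j) in a second table, and then recovers the parenthesization recursively from that table:

```python
def matrix_chain(dims):
    """dims = [a_1, ..., a_{n+1}]; matrix A_i is dims[i-1] x dims[i]."""
    n = len(dims) - 1
    # best[i][j] = minimum ops to compute A_i ... A_j (tables use indices 1..n)
    best = [[0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]   # best split point k for (i, j)
    for length in range(2, n + 1):                  # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            best[i][j] = float("inf")
            for k in range(i, j):                   # try every split i <= k < j
                # a_i * a_{k+1} * a_{j+1}: cost of the final multiplication
                cost = best[i][k] + best[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < best[i][j]:
                    best[i][j] = cost
                    split[i][j] = k
    return best, split

def parenthesize(split, i, j):
    """Recover the optimal parenthesization of A_i ... A_j from the split table."""
    if i == j:
        return f"A{i}"
    k = split[i][j]
    return f"({parenthesize(split, i, k)} {parenthesize(split, k + 1, j)})"
```

For instance, with dims = [10, 100, 5, 50], best[1][3] is 7500 and the recovered order is ((A1 A2) A3): doing A_1 A_2 first costs 10*100*5 + 10*5*50 = 7500 steps, versus 75000 the other way around.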