===========================================================================
   CSC 363H          Lecture Summary for Week 13               Summer 2007
===========================================================================

----------------
Space Complexity
----------------

SPACE(s(n))  = { L : L is a language that can be decided by a TM running in
                 worst-case space O(s(n)), i.e., the TM never uses more than
                 O(s(n)) tape cells on any input }

NSPACE(s(n)) = { L : L is a language that can be decided by a NTM running in
                 worst-case space O(s(n)), i.e., the NTM never uses more than
                 O(s(n)) tape cells on any input }

Fact: If language L is recognized by a TM M running in space O(s(n)), then L
is decided by a TM running in worst-case time 2^{O(s(n))}.

Proof:
  Main idea: The only way that M can loop is by repeating a configuration
  exactly (because it is limited in how many cells it can use). We can
  simulate M until it stops or repeats a configuration, thereby deciding
  L(M).

  Details: Since M uses <= c*s(n) tape cells on any input, there are at most
  m^{c*s(n)} possible tape contents that M can enter during its computation
  (where m = number of symbols in the tape alphabet) -- m symbols per cell
  for c*s(n) cells. For each possible tape content, there are c*s(n)
  positions that M's head can be in, and k = |Q| different states that M can
  be in. So M can run through at most k * c*s(n) * m^{c*s(n)} different
  configurations before it enters some configuration twice (meaning M is in
  an infinite loop). Since k, c, m are constants with respect to the input
  size, this means it is possible to decide L(M) by simulating M and
  rejecting if M ever runs for more than k * c*s(n) * m^{c*s(n)} =
  2^{O(s(n))} steps.

Corollary: SPACE(s(n)) subset of TIME(2^{O(s(n))}).

Unlike time, space is much less affected by details of the model (e.g.,
using k tapes saves time but not space -- the information must still be
stored).

Surprising result: SAT in SPACE(n) -- keep track of the original formula, a
truth-value assignment to its variables, and the simplified formula, all in
linear space; simply evaluate the formula on each possible truth-value
assignment, reusing the same space (a small Python sketch of this idea
appears below).

So space seems much more "powerful" than time. Intuition: space can be
reused; time cannot.

Surprising result (Savitch's Theorem): NSPACE(s(n)) subset of
SPACE((s(n))^2).

Proof idea: A NTM running in space O(s(n)) runs in nondeterministic time
2^{O(s(n))}. Trying out all computation branches takes too much space.
Instead, test whether the NTM can get from its initial configuration to an
accepting configuration by recursively breaking the computation into two
halves -- doing this properly (see textbook) gets the space usage down to
O((s(n))^2): O(s(n)) to store a configuration at each level of recursion,
times a recursion depth of log(2^{O(s(n))}) = O(s(n)). (A rough sketch of
this recursive reachability test also appears below.)
Details: in the book, section 8.1.

Define:
  PSPACE  = U_{k >= 0} SPACE(n^k)  = { all languages decided in polyspace }
  NPSPACE = U_{k >= 0} NSPACE(n^k)
  EXP     = U_{k >= 0} TIME(2^{n^k})

By Savitch's Theorem, NPSPACE = PSPACE.

Clearly, P subset PSPACE and NP subset NPSPACE, so
  P subset NP subset NPSPACE = PSPACE.
By the first result in this lecture, PSPACE subset EXP.

What about coNP? coNP subset coNPSPACE = coPSPACE = PSPACE (because a
deterministic polyspace decider for L yields a deterministic polyspace
decider for L^C by simply swapping accept/reject).

Even more so than with NP, it seems "clear" that P =/= PSPACE (think of the
linear-space algorithm for SAT). However, the question is still open!
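The linear-space SAT algorithm mentioned above can be made concrete. The
following Python code is only an illustrative sketch (the CNF representation
-- a list of clauses, each a list of signed variable indices -- and the
function name are choices made here, not part of the lecture): it evaluates
the formula on every truth-value assignment, reusing the same space each
time, so it takes exponential time but only linear space.

    from itertools import product

    def sat_in_linear_space(num_vars, clauses):
        # clauses in CNF: [[1, -2], [2, 3]] means (x1 OR NOT x2) AND (x2 OR x3)
        for assignment in product([False, True], repeat=num_vars):
            # Only the current assignment (n truth values) is kept --
            # the same space is reused for all 2^n assignments.
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True   # satisfying assignment found: accept
        return False          # no assignment satisfies the formula: reject

    # Example: (x1 OR x2) AND (NOT x1 OR x2) is satisfiable (set x2 = True).
    print(sat_in_linear_space(2, [[1, 2], [-1, 2]]))   # True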
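Similarly, the "break the computation in two halves" idea behind Savitch's
Theorem can be sketched as a reachability test on the NTM's configuration
graph. This is only a rough sketch under assumptions made here
(configurations are hashable values, step(c) returns the set of
configurations reachable from c in one move, and all_configs enumerates the
possible configurations); the point is that the recursion depth is O(log t)
= O(s(n)), with only one extra configuration stored per level.

    def can_reach(c1, c2, t, step, all_configs):
        # Can the machine go from configuration c1 to c2 in at most t steps?
        if t == 0:
            return c1 == c2
        if t == 1:
            return c1 == c2 or c2 in step(c1)
        # Try every possible midpoint configuration, reusing the same space
        # for each candidate; the recursion depth is O(log t).
        for mid in all_configs:
            if (can_reach(c1, mid, t // 2, step, all_configs) and
                    can_reach(mid, c2, t - t // 2, step, all_configs)):
                return True
        return False

    # Toy usage on a 4-node "configuration graph": 0 -> 1 -> 2 -> 3.
    edges = {0: {1}, 1: {2}, 2: {3}, 3: set()}
    print(can_reach(0, 3, 4, lambda c: edges[c], list(edges)))   # True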
Thus:   P subset NP subset PSPACE subset EXP.

We know that P =/= EXP, so at least one of the inclusions above must be
proper, but we do not know which one it is. In fact, most researchers
believe that all these inclusions are proper.