===========================================================================
              CSC 363H Lecture Summary for Week 2 Summer 2007
===========================================================================

Formal definition:

- A Turing machine is a 7-tuple (Q,S,T,d,q_0,q_accept,q_reject), where
  . Q is a fixed, non-empty, finite set of "states"
  . S is a fixed, non-empty, finite set of symbols (the "input alphabet",
    with "blank" symbol _ not in S)
  . T is a fixed, non-empty, finite set of symbols (the "tape alphabet",
    with S a subset of T and _ in T)
  . q_0 in Q is the "start state" (or "initial state")
  . q_accept in Q is the "accepting state"
  . q_reject in Q is the "rejecting state" (q_reject =/= q_accept)
  . d : (Q - {q_accept,q_reject}) x T -> Q x T x {L,R} is the "transition
    function"

- A "configuration" of a TM represents the current state, tape content, and
  head position as follows: "u q v", where q is the current state, uv is the
  current tape content (non-blank portion), and the head is on the leftmost
  symbol of v.
  Note: because the tape is infinite to the right, configuration "u q v" is
  equivalent to "u q v_", "u q v__", "u q v___", etc.

- For all states q_i in Q - {q_accept,q_reject}, q_j in Q, symbols a, b, c
  in T, and strings u, v in T*,
  . if d(q_i,b) = (q_j,c,R), then configuration "u q_i bv" yields "uc q_j v"
    in one step of computation;
  . if d(q_i,b) = (q_j,c,L), then configuration "ua q_i bv" yields
    "u q_j acv" in one step of computation, and configuration "q_i bv"
    yields "q_j cv" (because the head cannot move "off" the left end).

- On input w, a Turing machine M computes as follows:
  . start from the initial (or "start") configuration C_1 = "q_0 w"
    -- an alternate convention is C_1 = "_ q_0 w", to simplify computation;
  . go from configuration to configuration following the transition function
    (i.e., C_1 yields C_2 yields C_3 yields ...);
  . stop as soon as a "halting configuration" is reached: either an
    "accepting configuration" (where the state is q_accept) or a "rejecting
    configuration" (where the state is q_reject) -- the third possibility is
    an infinite loop (a halting state is never reached).

- The sequence of configurations that M goes through on input w is called
  the "computation of M on w".

- If a Turing machine M is given an input w, there are three possibilities:
  1. M accepts w.
  2. M rejects w.
     (in both cases 1 and 2, the computation of M on w is finite)
  3. M does not halt on input w.

Definitions:

- The "language of M" is L(M) = { w in S* | M accepts input w }.
- A TM is called a "decider" if it halts on every input, either accepting
  or rejecting.
- L is "recognizable" (Turing-recognizable/semi-decidable/recursively
  enumerable) if there is some TM M such that L = L(M).
- L is "decidable" (Turing-decidable/recursive) if there is some decider M
  such that L = L(M).

A previous example shows { 0^k 1^k | k >= 0 } is decidable. Furthermore,
all regular languages are decidable because TMs can simulate FSAs. We will
see many other examples.
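To make these definitions concrete, here is a small illustration (in
Python; the code and all names in it -- run, delta, the state labels
q0..q3, qa, qr -- are ours, not from the course) of a single-tape simulator
together with a transition table for a machine deciding { 0^k 1^k | k >= 0 },
which repeatedly crosses off one 0 (as X) and the first remaining 1 (as Y).
The step cut-off exists only so the sketch always returns; a real TM has no
such bound.

    BLANK = '_'

    def run(delta, q0, q_accept, q_reject, w, max_steps=10_000):
        """Simulate a single-tape TM.  delta maps (state, symbol) to
        (new_state, written_symbol, move) with move in {'L', 'R'}.
        Pairs missing from delta are treated as moves to q_reject."""
        tape = list(w) if w else [BLANK]   # non-blank portion of the tape
        head, state = 0, q0
        for _ in range(max_steps):         # cut-off so the sketch always returns
            if state == q_accept:
                return 'accept'
            if state == q_reject:
                return 'reject'
            symbol = tape[head]
            state, tape[head], move = delta.get((state, symbol),
                                                (q_reject, symbol, 'R'))
            if move == 'R':
                head += 1
                if head == len(tape):
                    tape.append(BLANK)     # tape is infinite to the right
            else:
                head = max(head - 1, 0)    # head cannot move off the left end
        return 'no halt within max_steps'

    # Transition table for { 0^k 1^k | k >= 0 }; omitted pairs reject.
    delta = {
        ('q0', '0'): ('q1', 'X', 'R'),     # cross off a 0
        ('q0', 'Y'): ('q3', 'Y', 'R'),     # no 0s left: check only Ys remain
        ('q0', BLANK): ('qa', BLANK, 'R'), # empty input: accept
        ('q1', '0'): ('q1', '0', 'R'),     # scan right for the first 1
        ('q1', 'Y'): ('q1', 'Y', 'R'),
        ('q1', '1'): ('q2', 'Y', 'L'),     # cross it off
        ('q2', '0'): ('q2', '0', 'L'),     # scan back left to the last X
        ('q2', 'Y'): ('q2', 'Y', 'L'),
        ('q2', 'X'): ('q0', 'X', 'R'),
        ('q3', 'Y'): ('q3', 'Y', 'R'),
        ('q3', BLANK): ('qa', BLANK, 'R'),
    }

    for w in ['', '01', '0011', '0101', '001']:
        print(repr(w), run(delta, 'q0', 'qa', 'qr', w))

The first three inputs are accepted and the last two rejected; since every
pass crosses off one 0 and one 1, the machine halts on every input, i.e. it
is a decider.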
--------
Variants
--------

How do we know TMs are the "right" model?  Consider the different decisions
we made when defining TMs and how they affect the outcome.  Compare
different models by the languages they recognize: models that recognize the
same class of languages are equivalent.

To show that some augmented model is equivalent to regular Turing machines,
there are two things to prove:
- starting from an arbitrary TM M of the augmented model, show that there
  exists a regular TM M' with the same behaviour on every input string
  (i.e., accept/reject/not halt);
- starting from an arbitrary regular TM M', show that there exists a TM M
  of the augmented model with the same behaviour on every input string.
Note: in general, one of the directions is trivial because the new model is
simply an extension of the regular one, e.g. TMs with stay-put transitions.
However, this is not always the case: for example, with doubly (2-way)
infinite tape TMs, something non-trivial needs to be said to establish both
directions.  See below.

Turing machines with "stay put" head movement:
- Allow the head to stay on the same square during a transition.  Formally,
  d : Q x T -> Q x T x {L,R,S}.
- Equivalent to regular TMs: simulate a stay-put move by moving right into
  an "extra" state and then left back into the correct state (in general,
  one extra state per stay-put transition, so more than one extra state
  overall).
- Details: tutorial exercise...

Turing machines with doubly infinite tape:
- Similar to regular TMs except there is no leftmost square.  The tape is
  initially all blank except for the input, and the head starts on the
  first symbol of the input.
- Equivalent to regular TMs:
  . Doubly-infinite tape TMs can simulate regular TMs: use a few extra
    states at the start of the computation to write a special symbol to the
    left of the input; then, during the computation, remain in the same
    state and move right whenever the special symbol is read.
  . Regular TMs can simulate doubly-infinite tape TMs: start by placing a
    special symbol on the leftmost square (shifting the input one square
    right if necessary); during the computation, whenever the special
    symbol is read, shift the entire tape content one square right
    (inserting a blank), then return to the leftmost blank square to
    continue the computation.
- Note that both directions are necessary (even if one is easier).

Multi-tape Turing machines:
- Allow more than one tape, each with an independent head.  Initially, all
  tapes are blank except tape 1, which contains the input.  In one step,
  the current state and the symbols read on all tapes determine the next
  state, the symbols written, and the head movements on all tapes.
  Formally, the transition function is d : Q x T^k -> Q x T^k x {L,R}^k.
  Note: this is different from k independent machines running concurrently!
- Equivalent to regular TMs: keep track of the contents of the k tapes
  separated by a special symbol, as well as the head positions, by using
  additional symbols (duplicates of the tape alphabet), and use additional
  states to remember the k symbols to read/write (possible since k is
  fixed); see the encoding sketch below.  Simulating one step of the
  multi-tape machine requires the single-tape machine to scan over its
  entire tape twice: once to read the k tapes, once to update them.  The
  other direction is trivial, since 1 tape is just a special case of k
  tapes.
- Details: Theorem 3.13 on p.149.
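As an illustration of the encoding used in that single-tape simulation,
here is a small sketch (in Python; the function name is ours, and appending
'.' to a symbol is only a stand-in for the "dotted" duplicate symbols of
the enlarged tape alphabet): the contents of the k tapes are laid out
between '#' separators, and the symbol under each virtual head is dotted.
To simulate one step, the single-tape machine scans this encoding once to
read the k dotted symbols and a second time to update them.

    def encode(tapes, heads):
        """tapes: list of k strings (non-blank tape contents, '_' = blank);
        heads: list of k head positions.  Returns the single-tape encoding
        of the k-tape configuration."""
        pieces = []
        for content, h in zip(tapes, heads):
            cells = list(content) if content else ['_']
            cells[h] = cells[h] + '.'      # mark the virtual head position
            pieces.append(''.join(cells))
        return '#' + '#'.join(pieces) + '#'

    # A 2-tape machine with input 0011 on tape 1 (head on the first 0)
    # and a blank tape 2 is stored on one tape as:
    print(encode(['0011', '_'], [0, 0]))   # -> #0.011#_.#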