CS4330 HW6 Solution
Problem 3.8b)
1. Scan the tape and mark the first 1 that has not been marked. If no unmarked 1s are found, go to stage 5.
Else move the head back to the start of the tape.
2. Scan the tape until an unmarked 0 is found and mark it. If no unmarked 0s are found, reject.
3. Scan the tape once more until an unmarked 0 is found and mark it. If no unmarked 0s are found, reject.
4. Move the head back to the start of the tape and go to stage 1.
5. Move the head back to the start of the tape. Scan the tape for unmarked 0s. If none are found, accept.
Else reject.
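The stages above can be sketched directly in Python (an assumption: we model the tape as a list of characters and marks as a parallel boolean list; head movement is implicit in the scans).

```python
def twice_as_many_zeros(w: str) -> bool:
    """Accept iff w over {0,1} contains exactly twice as many 0s as 1s."""
    tape = list(w)
    marked = [False] * len(tape)
    while True:
        # Stage 1: mark the first unmarked 1; if none, go to stage 5.
        one = next((i for i, c in enumerate(tape) if c == '1' and not marked[i]), None)
        if one is None:
            # Stage 5: accept iff no unmarked 0s remain.
            return not any(c == '0' and not marked[i] for i, c in enumerate(tape))
        marked[one] = True
        # Stages 2-3: mark two unmarked 0s; reject if either is missing.
        for _ in range(2):
            zero = next((i for i, c in enumerate(tape) if c == '0' and not marked[i]), None)
            if zero is None:
                return False
            marked[zero] = True
```

Each pass through the loop pairs one unmarked 1 with two unmarked 0s, so the machine accepts exactly when the counts match 2-to-1.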
Problem 3.8c)
1. Scan the tape and mark the first 1 that has not been marked. If no unmarked 1s are found, go to stage 5.
Else move the head back to the start of the tape.
2. Scan the tape until an unmarked 0 is found and mark it. If no unmarked 0s are found, accept.
3. Scan the tape once again until an unmarked 0 is found and mark it. If no unmarked 0s are found, accept.
4. Move the head back to the start of the tape and go to stage 1.
5. Move the head back to the start of the tape. Scan the tape for unmarked 0s. If none are found, reject.
Else accept.
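These stages are the marking loop from 3.8b with accept and reject swapped; equivalently, they test that the number of 0s is not exactly twice the number of 1s, which a sketch can check directly:

```python
def not_twice_as_many_zeros(w: str) -> bool:
    """Accept iff w over {0,1} does NOT contain twice as many 0s as 1s."""
    return w.count('0') != 2 * w.count('1')
```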
Problem 3.9a)
To prove that the 2-PDA is more powerful than the 1-PDA, we give a language that is accepted by a 2-PDA
but not by any 1-PDA. Consider the language L = {a^n b^n c^n | n ≥ 0}.
A 2-PDA has two stacks. Initialize each stack with a $. Then, whenever a character 'a' is seen,
push 'a' onto both stacks. Whenever 'b' is seen, pop 'a' from the first stack. Similarly, whenever 'c' is seen,
pop 'a' from the second stack. When the end of the input is reached, accept if the symbols were seen in the
correct order (all a's, then all b's, then all c's) and both stacks have $ on top.
However, no 1-PDA can recognize this language because L is not context free.
Here, we have shown that a 2-PDA recognizes every language that a 1-PDA recognizes (a 2-PDA using only one
stack is the same as a 1-PDA), and we have also shown a language that a 2-PDA can recognize but a 1-PDA cannot.
Therefore, we have proved that 2-PDAs are more powerful than 1-PDAs.
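The two-stack procedure above can be sketched in Python (lists serve as stacks, '$' as the bottom-of-stack marker, and a phase variable enforces the a* b* c* ordering):

```python
def two_pda_accepts(w: str) -> bool:
    """Recognize {a^n b^n c^n | n >= 0} with two stacks."""
    s1, s2 = ['$'], ['$']
    phase = 'a'  # symbols must appear in the order a* b* c*
    for ch in w:
        if ch == 'a' and phase == 'a':
            s1.append('a')
            s2.append('a')
        elif ch == 'b' and phase in ('a', 'b'):
            phase = 'b'
            if s1.pop() != 'a':   # each b consumes one a from stack 1
                return False
        elif ch == 'c' and phase in ('a', 'b', 'c'):
            phase = 'c'
            if s2.pop() != 'a':   # each c consumes one a from stack 2
                return False
        else:
            return False          # out-of-order symbol
    # accept iff both stacks are back to just '$'
    return s1 == ['$'] and s2 == ['$']
```

Stack 1 checks that the number of b's matches the number of a's, and stack 2 independently checks the c's, which is exactly why one stack is not enough.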
Problem 3.9b)
Based on the hint, we can simulate a Turing machine's tape with two stacks. Here is a brief explanation of how.
Each stack (call them the left stack and the right stack) simulates either the left half or the right half of the
tape. Let the symbol on top of the right stack denote the current position of the head on the tape. To move left
on the tape, we pop a symbol from the left stack and push it onto the right stack; to move right, we do the
opposite. This means that a 2-PDA can do anything that a Turing machine can do.
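A hedged sketch of this two-stack tape simulation (the class name and blank symbol are assumptions for illustration; the top of the right stack is always the cell under the head):

```python
BLANK = '_'

class TwoStackTape:
    """Simulate a TM tape with two stacks, as described above."""

    def __init__(self, content: str):
        self.left = []                                   # cells left of the head
        self.right = list(reversed(content)) or [BLANK]  # head cell on top

    def read(self) -> str:
        return self.right[-1]

    def write(self, sym: str) -> None:
        self.right[-1] = sym

    def move_right(self) -> None:
        # the head cell moves onto the left stack, exposing the next cell
        self.left.append(self.right.pop())
        if not self.right:
            self.right.append(BLANK)  # extend the tape with a blank

    def move_left(self) -> None:
        # pop from the left stack back onto the right stack
        self.right.append(self.left.pop() if self.left else BLANK)
```

Each head move is a constant number of stack operations, which is what lets a 2-PDA carry out the simulation.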
A Turing machine can simulate a k-PDA for any finite k, which means that Turing machines can simulate 3-PDAs,
4-PDAs, and so on. Therefore, 2-PDAs are just as powerful as 3-PDAs, and we have proved that 3-PDAs are
not more powerful than 2-PDAs.
Problem 3.11
A TM with a doubly infinite tape can simulate an ordinary TM: it marks the left-hand end of the input so it can
detect and prevent the head from moving off that end. To simulate a doubly infinite tape TM by an ordinary TM,
we show how to simulate it with a 2-tape TM, which an ordinary TM can in turn simulate. The first tape of the
2-tape TM holds the input string and the second tape is blank. We cut the tape of the doubly infinite tape TM
into two parts at the starting cell of the input string. The portion with the input string and all the blank
cells to its right appears on the first tape of the 2-tape TM, and the portion to the left of the input string
appears on the second tape in reverse.
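The cut described above amounts to an index mapping: cell i ≥ 0 of the doubly infinite tape lives at position i of tape 1, and cell i < 0 lives at position -i - 1 of tape 2 (the left half stored in reverse). A sketch, with dict-backed tapes as an implementation convenience rather than part of the construction:

```python
BLANK = '_'

class DoublyInfiniteTape:
    """Store a doubly infinite tape on two one-way infinite tapes."""

    def __init__(self, content: str):
        self.tape1 = dict(enumerate(content))  # input and everything to its right
        self.tape2 = {}                        # left half, in reverse

    def read(self, i: int) -> str:
        if i >= 0:
            return self.tape1.get(i, BLANK)
        return self.tape2.get(-i - 1, BLANK)

    def write(self, i: int, sym: str) -> None:
        if i >= 0:
            self.tape1[i] = sym
        else:
            self.tape2[-i - 1] = sym
```

The 2-tape machine works on tape 1 while the simulated head is at a nonnegative cell and switches to tape 2 (with directions reversed) when it is at a negative cell.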
Problem 3.13
(i) Show that this Turing machine variant is not equivalent to the usual version
We can simulate any DFA on this Turing machine, but nothing more. This Turing machine cannot revisit anything
previously written on the tape because it cannot move left, which means that, given a stream of input, it has
only one-way access to it.
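One-way access means the machine behaves like a DFA: a single left-to-right pass updating a finite amount of state. A minimal illustration (the particular DFA, recognizing strings with an even number of 1s, is a hypothetical example):

```python
def run_dfa(w: str) -> bool:
    """One left-to-right pass over w, tracking only a finite state."""
    state = 'even'
    for ch in w:
        if ch == '1':
            state = 'odd' if state == 'even' else 'even'
    return state == 'even'
```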
Problem 3.15b)
Let A and B be decidable languages decided by TMs MA and MB respectively. The following nondeterministic TM M
decides AB:
M = "On input w:
1. Nondeterministically partition w into two strings x and y with w = xy.
2. Run MA on x and MB on y.
3. Accept if both MA and MB accept; otherwise reject."
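Because MA and MB always halt, the nondeterministic partition can be simulated deterministically by trying every cut point. A sketch, where decide_A and decide_B are hypothetical stand-ins for MA and MB:

```python
def concat_decider(w: str, decide_A, decide_B) -> bool:
    """Decide AB given deciders for A and B, by trying all splits w = xy."""
    return any(decide_A(w[:i]) and decide_B(w[i:]) for i in range(len(w) + 1))
```

There are only len(w) + 1 cut points and each decider halts, so this always halts, i.e. it is a decider.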
Problem 3.15d)
For a decidable language L, let ML denote the Turing machine that decides L. Then the TM
for the complement is MLc, which on input w accepts if ML rejects, and rejects otherwise.
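The complement decider simply flips ML's answer; because ML is a decider it always halts, so the flipped machine halts too. A sketch with a hypothetical decide_L stand-in:

```python
def complement_decider(decide_L):
    """Given a decider for L, return a decider for the complement of L."""
    return lambda w: not decide_L(w)
```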
Problem 3.16c)
For a Turing-recognizable language L, let ML be a Turing machine that recognizes L. We construct a
non-deterministic Turing machine N that recognizes L*.
N = "On input w:
1. If w = ε, accept. Else non-deterministically cut w into parts w1 w2 ... wn.
2. Run ML on each wi. If ML accepts all of them, accept. If ML halts and rejects any wi, reject."
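A deterministic sketch of the nondeterministic cutting: enumerate every way to cut w and test each piece. To keep the sketch halting we use a decider stand-in for ML; with a mere recognizer, the real construction relies on nondeterminism (or dovetailing), since a rejecting branch may loop.

```python
from itertools import combinations

def star_accepts(w: str, decide_L) -> bool:
    """Check whether w can be cut into pieces that all belong to L."""
    if w == "":
        return True  # the empty string is always in L*
    n = len(w)
    for k in range(n):
        # choose k cut positions between characters of w
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            pieces = [w[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
            if all(decide_L(p) for p in pieces):
                return True
    return False
```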
Problem 3.16d)
Let A and B be Turing-recognizable languages, and let MA and MB be Turing machines that recognize A and B
respectively. We construct a Turing machine MA∩B that recognizes A ∩ B:
MA∩B = "On input w:
1. Run MA on w. If MA accepts, run MB on w.
2. If MB also accepts, accept."
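A sketch of the intersection machine, with recognize_A and recognize_B as hypothetical stand-ins. Running the recognizers in sequence is sound here: if either loops on w, then w is not in that language, hence not in A ∩ B, and a recognizer is allowed to loop on strings outside its language.

```python
def intersection_recognizer(w: str, recognize_A, recognize_B) -> bool:
    """Accept w iff both recognizers accept it."""
    return recognize_A(w) and recognize_B(w)
```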